mirror of
https://github.com/gta7lite/android_kernel_samsung_ot8.git
FROMLIST: mm: provide speculative fault infrastructure

(from https://lore.kernel.org/patchwork/patch/906216/)

Provide infrastructure to do a speculative fault (not holding mmap_sem).

Not holding mmap_sem means we can race against VMA change/removal and page-table destruction. We use the SRCU VMA freeing to keep the VMA around, the VMA seqcount to detect changes (including unmapping / page-table deletion), and gup_fast()-style page-table walking to deal with page-table races.

Once we've obtained the page and are ready to update the PTE, we validate that the state we started the fault with is still valid; if not, we fail the fault with VM_FAULT_RETRY, otherwise we update the PTE and we're done.

[Manage the newly introduced pte_spinlock() for speculative page fault to fail if the VMA is touched behind our back]
[Rename vma_is_dead() to vma_has_changed() and declare it here]
[Fetch p4d and pud]
[Set vmd.sequence in __handle_mm_fault()]
[Abort the speculative path when handle_userfault() has to be called]
[Add additional VMA flags checks in handle_speculative_fault()]
[Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
[Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
[Remove warning comment about waiting for !seq&1 since we don't want to wait]
[Remove warning about no huge page support, mention it explicitly]
[Don't call do_fault() in the speculative path as __do_fault() calls vma->vm_ops->fault() which may want to release mmap_sem]
[Only vm_fault pointer argument for vma_has_changed()]
[Fix check against huge page, calling pmd_trans_huge()]
[Use READ_ONCE() when reading the VMA's fields in the speculative path]
[Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support the processing done in vm_normal_page()]
[Check that vma->anon_vma is already set when starting the speculative path]
[Check for memory policy as we can't support the MPOL_INTERLEAVE case due to the processing done in mpol_misplaced()]
[Don't support VMAs growing up or down]
[Move check on vm_sequence just before calling handle_pte_fault()]
[Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Add mem cgroup oom check]
[Use READ_ONCE to access p*d entries]
[Replace deprecated ACCESS_ONCE() with READ_ONCE() in vma_has_changed()]
[Don't fetch pte again in handle_pte_fault() when running the speculative path]
[Check PMD against concurrent collapsing operation]
[Try to spin lock the pte during the speculative path to avoid deadlock with other CPUs invalidating the TLB and requiring this CPU to catch the inter-processor interrupt]
[Move define of FAULT_FLAG_SPECULATIVE here]
[Introduce __handle_speculative_fault() and add a check against mm->mm_users in handle_speculative_fault() defined in mm.h]

MTK-Commit-Id: 24d82a943671ba79710fa1a4115c0cfeb51cbbc3
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Change-Id: Id22d575f3fa9fc3a212211ca9a4daf8883629982
Signed-off-by: Chinwen Chang <chinwen.chang@mediatek.com>
CR-Id: ALPS04983389
Feature: Memory Optimization
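The core of the scheme described above is a snapshot/validate cycle on a sequence counter: take a snapshot, do the work without the lock, then re-check the counter before committing, and fall back to the locked path on any mismatch. Below is a minimal userspace sketch of that pattern, not the kernel code from this commit; spec_seq, spec_read_begin() and spec_validate() are illustrative names only.

/*
 * Minimal, illustrative userspace sketch of the snapshot/validate idea:
 * take a snapshot of a sequence counter, do work without the lock, then
 * re-check the counter before committing.  spec_seq, spec_read_begin()
 * and spec_validate() are made-up names, not the kernel API.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned int spec_seq;	/* writers make this odd while updating */

static bool spec_read_begin(unsigned int *snap)
{
	*snap = atomic_load(&spec_seq);
	return !(*snap & 1);		/* writer in progress: fail, do not wait */
}

static bool spec_validate(unsigned int snap)
{
	return atomic_load(&spec_seq) == snap;	/* unchanged since the snapshot? */
}

int main(void)
{
	unsigned int snap;

	if (!spec_read_begin(&snap)) {
		puts("writer active: fall back to the locked path");
		return 0;
	}

	/* ... speculative work: walk structures, prepare the update ... */

	if (!spec_validate(snap)) {
		puts("state changed underneath us: retry with the lock");
		return 0;
	}

	puts("snapshot still valid: commit the update");
	return 0;
}

Per the description above, the real path reports a failed validation as VM_FAULT_RETRY so the fault is retried while holding mmap_sem.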
24 lines
385 B
C
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_HUGETLB_INLINE_H
#define _LINUX_HUGETLB_INLINE_H

#ifdef CONFIG_HUGETLB_PAGE

#include <linux/mm.h>

static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
{
	/*
	 * vm_flags may change under us when mmap_sem is not held;
	 * READ_ONCE() avoids a torn or re-fetched read.
	 */
	return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
}

#else

static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
{
	return false;
}

#endif

#endif
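As context for the READ_ONCE() above: the changelog states that huge pages are not supported on the speculative path, so a caller on that path has to detect hugetlb VMAs without holding mmap_sem. The sketch below is purely illustrative of such a check; spec_vma_is_suitable() is a hypothetical name, not a function from this commit.

/* Hypothetical helper, not from this commit: can this VMA be handled speculatively? */
static inline bool spec_vma_is_suitable(struct vm_area_struct *vma)
{
	/* huge pages are not supported on the speculative path */
	if (is_vm_hugetlb_page(vma))
		return false;	/* caller falls back to the mmap_sem-held path */

	return true;
}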