Hugh Dickins
c01d5b3007
shmem: get_unmapped_area align huge page
...
Provide a shmem_get_unmapped_area method in file_operations, called at
mmap time to decide the mapping address. It could be conditional on
CONFIG_TRANSPARENT_HUGEPAGE, but save #ifdefs in other places by making
it unconditional.
shmem_get_unmapped_area() first calls the usual mm->get_unmapped_area
(which we treat as a black box, highly dependent on architecture and
config and executable layout). Lots of conditions, and in most cases it
just goes with the address it chose; but when our huge stars are
rightly aligned, yet that did not provide a suitable address, go back to
ask for a larger arena, within which to align the mapping suitably.
There have to be some direct calls to shmem_get_unmapped_area(), not via
the file_operations: because of the way shmem_zero_setup() is called to
create a shmem object late in the mmap sequence, when MAP_SHARED is
requested with MAP_ANONYMOUS or /dev/zero. Though this only matters
when /proc/sys/vm/shmem_huge has been set.
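To illustrate the shape of the logic, here is a greatly simplified sketch (it
ignores MAP_FIXED, the shmem_huge checks, the pgoff-relative alignment and
other details of the real function, so treat it as an outline only):

    unsigned long shmem_get_unmapped_area(struct file *file,
                    unsigned long uaddr, unsigned long len,
                    unsigned long pgoff, unsigned long flags)
    {
            unsigned long (*get_area)(struct file *, unsigned long,
                    unsigned long, unsigned long, unsigned long);
            unsigned long addr;

            /* First ask the usual black box for an address. */
            get_area = current->mm->get_unmapped_area;
            addr = get_area(file, uaddr, len, pgoff, flags);

            if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
                    return addr;
            if (IS_ERR_VALUE(addr) || !(addr & (HPAGE_PMD_SIZE - 1)))
                    return addr;    /* error, or already suitably aligned */

            /* Huge stars aligned, but the address is not: ask for a larger
             * arena and align the mapping within it. */
            addr = get_area(NULL, 0, len + HPAGE_PMD_SIZE, 0, flags);
            if (IS_ERR_VALUE(addr))
                    return addr;
            return round_up(addr, HPAGE_PMD_SIZE);
    }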
Link: http://lkml.kernel.org/r/1466021202-61880-29-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
5a6e75f811
shmem: prepare huge= mount option and sysfs knob
...
This patch adds a new mount option, "huge=". It can have the following values:
- "always":
Attempt to allocate huge pages every time we need a new page;
- "never":
Do not allocate huge pages;
- "within_size":
Only allocate huge page if it will be fully within i_size.
Also respect fadvise()/madvise() hints;
- "advise:
Only allocate huge pages if requested with fadvise()/madvise();
Default is "never" for now.
"mount -o remount,huge= /mountpoint" works fine after mount: remounting
huge=never will not attempt to break up huge pages at all, just stop
more from being allocated.
No new config option: put this under CONFIG_TRANSPARENT_HUGEPAGE, which
is the appropriate option to protect those who don't want the new bloat,
and with which we shall share some pmd code.
Prohibit the option when !CONFIG_TRANSPARENT_HUGEPAGE, just as mpol is
invalid without CONFIG_NUMA (this was hidden in mpol_parse_str(); make it
explicit).
Allow enabling THP only if the machine has_transparent_hugepage().
But what about Shmem with no user-visible mount? SysV SHM, memfds,
shared anonymous mmaps (of /dev/zero or MAP_ANONYMOUS), GPU drivers' DRM
objects, Ashmem. Though unlikely to suit all usages, provide a sysfs knob
/sys/kernel/mm/transparent_hugepage/shmem_enabled to experiment with
huge on those.
And allow shmem_enabled to take two further values:
- "deny":
For use in emergencies, to force the huge option off from
all mounts;
- "force":
Force the huge option on for all - very useful for testing;
Based on patch by Hugh Dickins.
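As a sketch of how the values can be encoded and parsed (the constant and
function names below follow my reading of mm/shmem.c, but treat them as
illustrative rather than authoritative):

    /* mount option / sysfs values */
    #define SHMEM_HUGE_NEVER        0
    #define SHMEM_HUGE_ALWAYS       1
    #define SHMEM_HUGE_WITHIN_SIZE  2
    #define SHMEM_HUGE_ADVISE       3
    /* emergency values, accepted only via the sysfs knob */
    #define SHMEM_HUGE_DENY         (-1)
    #define SHMEM_HUGE_FORCE        (-2)

    static int shmem_parse_huge(const char *str)
    {
            if (!strcmp(str, "never"))       return SHMEM_HUGE_NEVER;
            if (!strcmp(str, "always"))      return SHMEM_HUGE_ALWAYS;
            if (!strcmp(str, "within_size")) return SHMEM_HUGE_WITHIN_SIZE;
            if (!strcmp(str, "advise"))      return SHMEM_HUGE_ADVISE;
            if (!strcmp(str, "deny"))        return SHMEM_HUGE_DENY;
            if (!strcmp(str, "force"))       return SHMEM_HUGE_FORCE;
            return -EINVAL;
    }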
Link: http://lkml.kernel.org/r/1466021202-61880-28-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
65c453778a
mm, rmap: account shmem thp pages
...
Let's add ShmemHugePages and ShmemPmdMapped fields to meminfo and
smaps. They show how much memory is allocated as shmem THP and how much
of it is mapped with PMDs.
NR_ANON_TRANSPARENT_HUGEPAGES is renamed to NR_ANON_THPS.
Link: http://lkml.kernel.org/r/1466021202-61880-27-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
fc127da085
truncate: handle file thp
...
For shmem/tmpfs we only need to tweak truncate_inode_page() and
invalidate_mapping_pages().
truncate_inode_pages_range() and invalidate_inode_pages2_range() are
adjusted to use page_to_pgoff().
Link: http://lkml.kernel.org/r/1466021202-61880-26-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
83929372f6
filemap: prepare find and delete operations for huge pages
...
For now, we would have HPAGE_PMD_NR entries in the radix tree for every huge
page. That's suboptimal and it will be changed to use Matthew's
multi-order entries later.
The 'add' operation is not changed, because we don't need it to implement
huge tmpfs: shmem uses its own implementation.
Link: http://lkml.kernel.org/r/1466021202-61880-25-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
c78c66d1dd
radix-tree: implement radix_tree_maybe_preload_order()
...
The new helper is similar to radix_tree_maybe_preload(), but tries to
preload the number of nodes required to insert (1 << order) contiguous,
naturally aligned elements.
This is required to push huge pages into pagecache.
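A minimal usage sketch, assuming the helper mirrors
radix_tree_maybe_preload()'s calling convention (gfp mask plus the order of
the run being inserted):

    error = radix_tree_maybe_preload_order(GFP_KERNEL, compound_order(page));
    if (error)
            return error;

    spin_lock_irq(&mapping->tree_lock);
    /* insert the 1 << order naturally-aligned entries here */
    spin_unlock_irq(&mapping->tree_lock);

    radix_tree_preload_end();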
Link: http://lkml.kernel.org/r/1466021202-61880-24-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
e2f0a0db95
page-flags: relax policy for PG_mappedtodisk and PG_reclaim
...
These flags are in use for file THP.
Link: http://lkml.kernel.org/r/1466021202-61880-23-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
7751b2da6b
vmscan: split file huge pages before paging them out
...
This prepares vmscan for file huge pages. We cannot write out huge
pages, so we need to split them on the way out.
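Illustratively (this is not the exact hunk, just the idea), the reclaim path
splits before pageout along these lines:

    /* in shrink_page_list(), with the page locked */
    if (PageTransHuge(page)) {
            /* a huge page cannot be written out as one unit: split first */
            if (split_huge_page_to_list(page, page_list))
                    goto keep_locked;       /* split failed, keep the page */
    }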
Link: http://lkml.kernel.org/r/1466021202-61880-22-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
9a73f61bdb
thp, mlock: do not mlock PTE-mapped file huge pages
...
As with anon THP, we only mlock file huge pages if we can prove that the
page is not mapped with PTEs. This way we avoid leaking mlock into a
non-mlocked vma on split.
We rely on PageDoubleMap() under lock_page() to check whether the page
may be PTE-mapped. PG_double_map is set by page_add_file_rmap() when the
page is mapped with PTEs.
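Sketched out (simplified, with names as I recall them from that code), the
mlock decision looks roughly like this:

    /* Only mlock the huge page if it is not also PTE-mapped. */
    if (PageTransHuge(page) && trylock_page(page)) {
            lru_add_drain();
            if (page->mapping && !PageDoubleMap(page))
                    mlock_vma_page(page);
            unlock_page(page);
    }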
Link: http://lkml.kernel.org/r/1466021202-61880-21-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
baa355fd33
thp: file pages support for split_huge_page()
...
Basic scheme is the same as for anon THP.
Main differences:
- File pages are in the radix tree, so head->_count is offset by
HPAGE_PMD_NR. The count gets distributed to the small pages during split.
- mapping->tree_lock prevents non-lockless radix-tree access to pages
under split;
- Lockless access is prevented by setting head->_count to 0 during
split;
- After split, some pages can be beyond i_size. We drop them from the
radix tree.
- We don't set up migration entries, just unmap the pages. This helps
handle the case when i_size falls in the middle of a huge page: there is
no need to handle pages beyond i_size manually.
Link: http://lkml.kernel.org/r/1466021202-61880-20-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
37f9f5595c
thp: run vma_adjust_trans_huge() outside i_mmap_rwsem
...
vma_adjust_trans_huge() splits the pmd if it crosses a VMA boundary.
During the split we munlock the huge page, which requires an rmap walk;
rmap wants to take the lock on its own.
Let's move vma_adjust_trans_huge() outside i_mmap_rwsem to fix this.
Link: http://lkml.kernel.org/r/1466021202-61880-19-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
b237aded41
thp: prepare change_huge_pmd() for file thp
...
change_huge_pmd() has an assert which is not relevant for file pages.
For a shared mapping it's perfectly fine to have the page table entry
writable, without an explicit mkwrite.
Link: http://lkml.kernel.org/r/1466021202-61880-18-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
628d47ce98
thp: skip file huge pmd on copy_huge_pmd()
...
copy_page_range() has a check for "Don't copy ptes where a page fault
will fill them correctly." It works at the VMA level. We still copy all
page table entries from private mappings, even if they map page cache.
We can simplify copy_huge_pmd() a bit by skipping file PMDs.
We never map pages of a private file mapping with PMDs, so a file PMD
can only map page cache. It's safe to skip it, as the pages can be
re-faulted later.
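The simplification amounts to an early return; a sketch of the check in
copy_huge_pmd():

    /* File PMDs only ever map page cache: don't copy them, the child
     * will simply re-fault the range from the page cache. */
    if (!vma_is_anonymous(vma))
            return 0;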
Link: http://lkml.kernel.org/r/1466021202-61880-17-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
af9e4d5f2d
thp: handle file COW faults
...
File COW for THP is handled on the pte level: just split the pmd.
It's not clear how beneficial allocating huge pages on COW faults would
be, and it would require some code to make them work.
I think at some point we can consider teaching khugepaged to collapse
pages in COW mappings, but allocating huge pages on fault is probably
overkill.
Link: http://lkml.kernel.org/r/1466021202-61880-16-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
d21b9e57c7
thp: handle file pages in split_huge_pmd()
...
Splitting a file THP PMD is simple: just unmap it, as in the DAX case.
This way we avoid the memory overhead of allocating a page table to
deposit.
It's probably a good idea to try to allocate a page table with
GFP_ATOMIC in __split_huge_pmd_locked() to avoid refaulting the area,
but clearing the pmd should be good enough for now.
Unlike DAX, we also remove the page from rmap and drop the reference.
pmd_young() is transferred to PageReferenced().
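A condensed sketch of the file-backed branch in __split_huge_pmd_locked()
(locals as in the surrounding function; huge-zero-page and DAX handling
omitted):

    if (!vma_is_anonymous(vma)) {
            _pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
            page = pmd_page(_pmd);
            if (pmd_young(_pmd))
                    SetPageReferenced(page);        /* carry "young" over */
            page_remove_rmap(page, true);           /* drop the compound mapping */
            put_page(page);
            add_mm_counter(mm, MM_FILEPAGES, -HPAGE_PMD_NR);
            return;         /* nothing deposited, nothing to rebuild */
    }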
Link: http://lkml.kernel.org/r/1466021202-61880-15-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
b5072380eb
thp: support file pages in zap_huge_pmd()
...
split_huge_pmd() for file mappings (and DAX too) is implemented by just
clearing the pmd entry, as we can re-fill this area from page cache at
the pte level later.
This means we don't need to deposit page tables when a file THP is
mapped. Therefore we shouldn't try to withdraw a page table when
zap_huge_pmd() zaps a file THP PMD.
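In zap_huge_pmd() the distinction then becomes, roughly (a sketch, not the
full function):

    if (PageAnon(page)) {
            /* anon THP deposited a page table at fault time: withdraw it */
            pgtable_t pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);

            pte_free(tlb->mm, pgtable);
            atomic_long_dec(&tlb->mm->nr_ptes);
            add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
    } else {
            /* file THP: nothing was deposited, nothing to withdraw */
            add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
    }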
Link: http://lkml.kernel.org/r/1466021202-61880-14-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
95ecedcd6a
thp, vmstats: add counters for huge file pages
...
THP_FILE_ALLOC: how many times a huge page was allocated and put into
the page cache.
THP_FILE_MAPPED: how many times a file huge page was mapped.
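Both counters are bumped with the usual vmstat helper, along these lines:

    count_vm_event(THP_FILE_ALLOC);         /* huge page added to the page cache */
    count_vm_event(THP_FILE_MAPPED);        /* file huge page mapped with a PMD */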
Link: http://lkml.kernel.org/r/1466021202-61880-13-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
1010245964
mm: introduce do_set_pmd()
...
With postponed page table allocation we have a chance to set up huge
pages. do_set_pte() calls do_set_pmd() if the following criteria are met
(see the sketch below):
- the page is compound;
- the pmd entry is pmd_none();
- the vma has suitable size and alignment.
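A sketch of those checks (a fragment only; local names follow the fault_env
conversion earlier in the series):

    if (pmd_none(*fe->pmd) && PageTransCompound(page)) {
            unsigned long haddr = fe->address & HPAGE_PMD_MASK;

            /* the PMD-sized, PMD-aligned range must fit inside the vma */
            if (haddr >= fe->vma->vm_start &&
                haddr + HPAGE_PMD_SIZE <= fe->vma->vm_end)
                    ret = do_set_pmd(fe, page);     /* may fall back to ptes */
    }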
Link: http://lkml.kernel.org/r/1466021202-61880-12-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
dd78fedde4
rmap: support file thp
...
Naive approach: on mapping/unmapping the page as compound we update
->_mapcount of each 4k subpage. That's not efficient, but it's not
obvious how to optimize this; we can look into it later.
The PG_double_map optimization doesn't work for file pages, since the
lifecycle of file pages differs from that of anon pages: a file page can
be mapped again at any time.
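A sketch of the naive accounting on the map side, roughly as in
page_add_file_rmap() for a compound mapping (the unmap side mirrors it):

    if (compound && PageTransHuge(page)) {
            int i, nr = 0;

            /* every 4k subpage gets its own ->_mapcount bump */
            for (i = 0; i < HPAGE_PMD_NR; i++)
                    if (atomic_inc_and_test(&page[i]._mapcount))
                            nr++;
            /* plus the PMD-level mapcount on the compound page */
            atomic_inc(compound_mapcount_ptr(page));
            __mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, nr);
    }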
Link: http://lkml.kernel.org/r/1466021202-61880-11-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
7267ec008b
mm: postpone page table allocation until we have page to map
...
The idea (and most of the code) is borrowed again: from Hugh's patchset
on huge tmpfs[1].
Instead of allocating the pte page table upfront, we postpone this until
we have a page to map in hand. This approach opens the possibility of
mapping the page as huge, if the filesystem supports it.
Compared to Hugh's patch, I've pushed page table allocation a bit
further: into do_set_pte(). This way we can postpone allocation even in
the faultaround case, without moving do_fault_around() after
__do_fault().
do_set_pte() got renamed to alloc_set_pte(), as it can now allocate a
page table if required.
[1] http://lkml.kernel.org/r/alpine.LSU.2.11.1502202015090.14414@eggly.anvils
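The core of the idea, as a sketch (not the exact hunk): fe->pte stays NULL
until there is a page in hand, and only then is the pte page table allocated
and mapped:

    /* in alloc_set_pte(), once a page is ready to be installed */
    if (!fe->pte) {
            if (pte_alloc(fe->vma->vm_mm, fe->pmd, fe->address))
                    return VM_FAULT_OOM;
            fe->pte = pte_offset_map_lock(fe->vma->vm_mm, fe->pmd,
                            fe->address, &fe->ptl);
    }
    /* ...then install the pte for the page, much as do_set_pte() did... */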
Link: http://lkml.kernel.org/r/1466021202-61880-10-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
bae473a423
mm: introduce fault_env
...
The idea is borrowed from Peter's patch from the patchset on speculative
page faults[1]:
Instead of passing around the endless list of function arguments,
replace the lot with a single structure so we can change context without
endless function signature changes.
The changes are mostly mechanical, with the exception of the faultaround
code: filemap_map_pages() got reworked a bit.
This patch is preparation for the next one.
[1] http://lkml.kernel.org/r/20141020222841.302891540@infradead.org
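From memory, the structure looks roughly like this (treat the exact field
list as approximate):

    struct fault_env {
            struct vm_area_struct *vma;     /* target VMA */
            unsigned long address;          /* faulting virtual address */
            unsigned int flags;             /* FAULT_FLAG_xxx */
            pmd_t *pmd;                     /* pmd entry for 'address' */
            pte_t *pte;                     /* pte entry, if already mapped */
            spinlock_t *ptl;                /* page table lock */
    };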
Link: http://lkml.kernel.org/r/1466021202-61880-9-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
dcddffd41d
mm: do not pass mm_struct into handle_mm_fault
...
We always have vma->vm_mm around.
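In effect the prototype changes like this (sketch):

    /* before */
    int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma,
                    unsigned long address, unsigned int flags);

    /* after: the mm is taken from vma->vm_mm */
    int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
                    unsigned int flags);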
Link: http://lkml.kernel.org/r/1466021202-61880-8-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
6fb8ddfc45
thp, mlock: update unevictable-lru.txt
...
Add description of THP handling into unevictable-lru.txt.
Link: http://lkml.kernel.org/r/1466021202-61880-7-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Kirill A. Shutemov
1f52e67e5e
khugepaged: recheck pmd after mmap_sem re-acquired
...
Vlastimil noted[1] that the pmd may no longer be valid after we drop
mmap_sem. We need to recheck it once mmap_sem is taken again.
[1] http://lkml.kernel.org/r/12918dcd-a695-c6f4-e06f-69141c5f357f@suse.cz
Link: http://lkml.kernel.org/r/1466021202-61880-6-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00
Ebru Akagunduz
8024ee2a09
mm, thp: fix locking inconsistency in collapse_huge_page
...
After creating the revalidate vma function, a locking inconsistency
occurred due to directing the code path to the wrong label. This patch
directs it to the correct label and fixes the inconsistency.
Related commit that caused inconsistency:
http://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/?id=da4360877094368f6dfe75bbe804b0f0a5d575b0
Link: http://lkml.kernel.org/r/1464956884-4644-1-git-send-email-ebru.akagunduz@gmail.com
Link: http://lkml.kernel.org/r/1466021202-61880-4-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-26 16:19:19 -07:00