Vasily Averin
fe573327ff
tracing: incorrect gfp_t conversion
...
Fixes the following sparse warnings:
include/trace/events/*: sparse: cast to restricted gfp_t
include/trace/events/*: sparse: restricted gfp_t degrades to integer
gfp_t type is bitwise and requires __force attributes for any casts.
Link: https://lkml.kernel.org/r/331d88fe-f4f7-657c-02a2-d977f15fbff6@openvz.org
Signed-off-by: Vasily Averin <vvs@openvz.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2022-05-13 07:20:18 -07:00
Matthew Wilcox (Oracle)
adf88aa8ea
mm: remove alloc_pages_vma()
...
All callers have now been converted to use vma_alloc_folio(), so convert
the body of alloc_pages_vma() to allocate folios instead.
Link: https://lkml.kernel.org/r/20220504182857.4013401-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-05-13 07:20:15 -07:00
Matthew Wilcox (Oracle)
f584b68005
mm: Add vma_alloc_folio()
...
This wrapper around alloc_pages_vma() calls prep_transhuge_page(),
removing the obligation from the caller. This is in the same spirit
as __folio_alloc().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-04-07 09:43:41 -04:00
Andrey Konovalov
ada543af3b
mm, kasan: fix __GFP_BITS_SHIFT definition breaking LOCKDEP
...
KASAN changes that added new GFP flags mistakenly updated
__GFP_BITS_SHIFT as the total number of GFP bits instead of as a shift
used to define __GFP_BITS_MASK.
This broke LOCKDEP, as __GFP_BITS_MASK now gets the 25th bit enabled
instead of the 28th for __GFP_NOLOCKDEP.
Update __GFP_BITS_SHIFT to always count KASAN GFP bits.
In the future, we could handle all combinations of KASAN and LOCKDEP to
occupy as few bits as possible. For now, we have enough GFP bits to be
inefficient in this quick fix.
Link: https://lkml.kernel.org/r/462ff52742a1fcc95a69778685737f723ee4dfb3.1648400273.git.andreyknvl@google.com
Fixes: 9353ffa6e9 ("kasan, page_alloc: allow skipping memory init for HW_TAGS")
Fixes: 53ae233c30 ("kasan, page_alloc: allow skipping unpoisoning for HW_TAGS")
Fixes: f49d9c5bb1 ("kasan, mm: only define ___GFP_SKIP_KASAN_POISON with HW_TAGS")
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-04-01 11:46:09 -07:00
Andrey Konovalov
9353ffa6e9
kasan, page_alloc: allow skipping memory init for HW_TAGS
...
Add a new GFP flag __GFP_SKIP_ZERO that allows skipping memory
initialization. The flag is only effective with HW_TAGS KASAN.
This flag will be used by vmalloc code for page_alloc allocations backing
vmalloc() mappings in a following patch. The reason to skip memory
initialization for these pages in page_alloc is because vmalloc code will
be initializing them instead.
With the current implementation, when __GFP_SKIP_ZERO is provided,
__GFP_ZEROTAGS is ignored. This doesn't matter, as these two flags are
never provided at the same time. However, if this is changed in the
future, this particular implementation detail can be changed as well.
Link: https://lkml.kernel.org/r/0d53efeff345de7d708e0baa0d8829167772521e.1643047180.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-24 19:06:47 -07:00
Andrey Konovalov
53ae233c30
kasan, page_alloc: allow skipping unpoisoning for HW_TAGS
...
Add a new GFP flag __GFP_SKIP_KASAN_UNPOISON that allows skipping KASAN
poisoning for page_alloc allocations. The flag is only effective with
HW_TAGS KASAN.
This flag will be used by vmalloc code for page_alloc allocations backing
vmalloc() mappings in a following patch. The reason to skip KASAN
poisoning for these pages in page_alloc is because vmalloc code will be
poisoning them instead.
Also reword the comment for __GFP_SKIP_KASAN_POISON.
Link: https://lkml.kernel.org/r/35c97d77a704f6ff971dd3bfe4be95855744108e.1643047180.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-24 19:06:47 -07:00
Andrey Konovalov
f49d9c5bb1
kasan, mm: only define ___GFP_SKIP_KASAN_POISON with HW_TAGS
...
Only define the ___GFP_SKIP_KASAN_POISON flag when CONFIG_KASAN_HW_TAGS is
enabled.
This patch is not useful by itself, but it prepares the code for the
addition of new KASAN-specific GFP flags.
Link: https://lkml.kernel.org/r/44e5738a584c11801b2b8f1231898918efc8634a.1643047180.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-24 19:06:47 -07:00
Andrey Konovalov
c82ce3195f
mm: clarify __GFP_ZEROTAGS comment
...
__GFP_ZEROTAGS is intended as an optimization: if memory is zeroed during
allocation, it's possible to set memory tags at the same time with little
performance impact.
Clarify this intention of __GFP_ZEROTAGS in the comment.
Link: https://lkml.kernel.org/r/cdffde013973c5634a447513e10ec0d21e8eee29.1643047180.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-24 19:06:46 -07:00
NeilBrown
bf507030f3
doc: convert 'subsection' to 'section' in gfp.h
...
Patch series "Remove remaining parts of congestion tracking code", v2.
This patch (of 11):
Various DOC: sections in gfp.h have subsection headers (~~~), but the
place where they are included in mm-api.rst has no sections, only
chapters.
So convert them to section headers (---) to avoid confusion; otherwise,
if sections are added later in mm-api.rst, an error results.
Link: https://lkml.kernel.org/r/164549971112.9187.16871723439770288255.stgit@noble.brown
Link: https://lkml.kernel.org/r/164549983733.9187.17894407453436115822.stgit@noble.brown
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: Philipp Reisner <philipp.reisner@linbit.com>
Cc: Lars Ellenberg <lars.ellenberg@linbit.com>
Cc: Paolo Valente <paolo.valente@linaro.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-22 15:57:00 -07:00
Miles Chen
04a536bfbd
include/linux/gfp.h: further document GFP_DMA32
...
kmalloc(..., GFP_DMA32) does not return DMA32 memory, because the DMA32
kmalloc cache array is not implemented (reason: there is no such user
in the kernel).
Add a short comment about this so people can understand the situation
just by reading the comment.
[1] https://lists.linuxfoundation.org/pipermail/iommu/2018-December/031696.html
Link: https://lkml.kernel.org/r/20211207093610.6406-1-miles.chen@mediatek.com
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-01-15 16:30:29 +02:00
Michal Hocko
be1a13eb51
mm: drop node from alloc_pages_vma
...
alloc_pages_vma is meant to allocate a page with a vma-specific memory
policy. The initial node parameter is always the local node, so it is
pointless to waste a function argument on it. Drop the parameter.
Link: https://lkml.kernel.org/r/YaSnlv4QpryEpesG@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Ben Widawsky <ben.widawsky@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-01-15 16:30:29 +02:00
Thibaut Sautereau
595ec1973c
mm/page_alloc: fix __alloc_size attribute for alloc_pages_exact_nid
...
The second parameter of alloc_pages_exact_nid is the one indicating the
size of the memory pointed to by the returned pointer.
Link: https://lkml.kernel.org/r/YbjEgwhn4bGblp//@coeus
Fixes: abd58f38df ("mm/page_alloc: add __alloc_size attributes for better bounds checking")
Signed-off-by: Thibaut Sautereau <thibaut.sautereau@ssi.gouv.fr>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Daniel Micay <danielmicay@gmail.com>
Cc: Levente Polyak <levente@leventepolyak.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-12-25 12:20:56 -08:00
Linus Torvalds
512b7931ad
Merge branch 'akpm' (patches from Andrew)
...
Merge misc updates from Andrew Morton:
"257 patches.
Subsystems affected by this patch series: scripts, ocfs2, vfs, and
mm (slab-generic, slab, slub, kconfig, dax, kasan, debug, pagecache,
gup, swap, memcg, pagemap, mprotect, mremap, iomap, tracing, vmalloc,
pagealloc, memory-failure, hugetlb, userfaultfd, vmscan, tools,
memblock, oom-kill, hugetlbfs, migration, thp, readahead, nommu, ksm,
vmstat, madvise, memory-hotplug, rmap, zsmalloc, highmem, zram,
cleanups, kfence, and damon)"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (257 commits)
mm/damon: remove return value from before_terminate callback
mm/damon: fix a few spelling mistakes in comments and a pr_debug message
mm/damon: simplify stop mechanism
Docs/admin-guide/mm/pagemap: wordsmith page flags descriptions
Docs/admin-guide/mm/damon/start: simplify the content
Docs/admin-guide/mm/damon/start: fix a wrong link
Docs/admin-guide/mm/damon/start: fix wrong example commands
mm/damon/dbgfs: add adaptive_targets list check before enable monitor_on
mm/damon: remove unnecessary variable initialization
Documentation/admin-guide/mm/damon: add a document for DAMON_RECLAIM
mm/damon: introduce DAMON-based Reclamation (DAMON_RECLAIM)
selftests/damon: support watermarks
mm/damon/dbgfs: support watermarks
mm/damon/schemes: activate schemes based on a watermarks mechanism
tools/selftests/damon: update for regions prioritization of schemes
mm/damon/dbgfs: support prioritization weights
mm/damon/vaddr,paddr: support pageout prioritization
mm/damon/schemes: prioritize regions within the quotas
mm/damon/selftests: support schemes quotas
mm/damon/dbgfs: support quotas of schemes
...
2021-11-06 14:08:17 -07:00
Chen Wandun
c00b6b9610
mm/vmalloc: introduce alloc_pages_bulk_array_mempolicy to accelerate memory allocation
...
Commit ffb29b1c25 ("mm/vmalloc: fix numa spreading for large hash
tables") can cause significant performance regressions in some
situations, as Andrew mentioned in [1]. The main case is vmalloc:
vmalloc allocates pages with NUMA_NO_NODE by default, which results in
pages being allocated one by one.
In order to solve this, __alloc_pages_bulk and mempolicy should be
considered at the same time:
1) If a node is specified in the memory allocation request, allocate
all pages via __alloc_pages_bulk.
2) For interleaved allocation, calculate how many pages should be
allocated on each node, and use __alloc_pages_bulk to allocate the
pages on each node.
[1]: https://lore.kernel.org/lkml/CALvZod4G3SzP3kWxQYn0fj+VgG-G3yWXz=gz17+3N57ru1iajw@mail.gmail.com/t/#m750c8e3231206134293b089feaa090590afa0f60
[akpm@linux-foundation.org: coding style fixes]
[akpm@linux-foundation.org: make two functions static]
[akpm@linux-foundation.org: fix CONFIG_NUMA=n build]
Link: https://lkml.kernel.org/r/20211021080744.874701-3-chenwandun@huawei.com
Signed-off-by: Chen Wandun <chenwandun@huawei.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-11-06 13:30:37 -07:00
Kees Cook
abd58f38df
mm/page_alloc: add __alloc_size attributes for better bounds checking
...
As already done in GrapheneOS, add the __alloc_size attribute for
appropriate page allocator interfaces, to provide additional hinting for
better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
compiler optimizations.
Link: https://lkml.kernel.org/r/20210930222704.2631604-8-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Co-developed-by: Daniel Micay <danielmicay@gmail.com>
Signed-off-by: Daniel Micay <danielmicay@gmail.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
Cc: Joe Perches <joe@perches.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Alexandre Bounine <alex.bou9@gmail.com>
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Matt Porter <mporter@kernel.crashing.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-11-06 13:30:34 -07:00
Matthew Wilcox (Oracle)
cc09cb1341
mm/page_alloc: Add folio allocation functions
...
The __folio_alloc(), __folio_alloc_node() and folio_alloc() functions
are mostly for type safety, but they also ensure that the page allocator
allocates a compound page and initialises the deferred list if the page
is large enough to have one.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
2021-10-18 07:49:40 -04:00
Matthew Wilcox (Oracle)
b424de33c4
mm: Add arch_make_folio_accessible()
...
As a default implementation, call arch_make_page_accessible n times.
If an architecture can do better, it can override this.
Also move the default implementation of arch_make_page_accessible()
from gfp.h to mm.h.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Howells <dhowells@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
2021-10-18 07:49:39 -04:00
Linus Torvalds
65090f30ab
Merge branch 'akpm' (patches from Andrew)
...
Merge misc updates from Andrew Morton:
"191 patches.
Subsystems affected by this patch series: kthread, ia64, scripts,
ntfs, squashfs, ocfs2, kernel/watchdog, and mm (gup, pagealloc, slab,
slub, kmemleak, dax, debug, pagecache, gup, swap, memcg, pagemap,
mprotect, bootmem, dma, tracing, vmalloc, kasan, initialization,
pagealloc, and memory-failure)"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (191 commits)
mm,hwpoison: make get_hwpoison_page() call get_any_page()
mm,hwpoison: send SIGBUS with error virutal address
mm/page_alloc: split pcp->high across all online CPUs for cpuless nodes
mm/page_alloc: allow high-order pages to be stored on the per-cpu lists
mm: replace CONFIG_FLAT_NODE_MEM_MAP with CONFIG_FLATMEM
mm: replace CONFIG_NEED_MULTIPLE_NODES with CONFIG_NUMA
docs: remove description of DISCONTIGMEM
arch, mm: remove stale mentions of DISCONIGMEM
mm: remove CONFIG_DISCONTIGMEM
m68k: remove support for DISCONTIGMEM
arc: remove support for DISCONTIGMEM
arc: update comment about HIGHMEM implementation
alpha: remove DISCONTIGMEM and NUMA
mm/page_alloc: move free_the_page
mm/page_alloc: fix counting of managed_pages
mm/page_alloc: improve memmap_pages dbg msg
mm: drop SECTION_SHIFT in code comments
mm/page_alloc: introduce vm.percpu_pagelist_high_fraction
mm/page_alloc: limit the number of pages on PCP lists when reclaim is active
mm/page_alloc: scale the number of pages that are batch freed
...
2021-06-29 17:29:11 -07:00
Mike Rapoport
d3c251ab95
arch, mm: remove stale mentions of DISCONIGMEM
...
There are several places that mention DISCONIGMEM in comments or have
stale code guarded by CONFIG_DISCONTIGMEM.
Remove the dead code and update the comments.
Link: https://lkml.kernel.org/r/20210608091316.3622-7-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-29 10:53:55 -07:00
Uladzislau Rezki (Sony)
a2afc59fb2
mm/page_alloc: add an alloc_pages_bulk_array_node() helper
...
Patch series "vmalloc() vs bulk allocator", v2.
This patch (of 3):
Add a "node" variant of the alloc_pages_bulk_array() function. The helper
guarantees that __alloc_pages_bulk() is invoked with a valid NUMA node
ID.
Link: https://lkml.kernel.org/r/20210516202056.2120-1-urezki@gmail.com
Link: https://lkml.kernel.org/r/20210516202056.2120-2-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-29 10:53:52 -07:00
Peter Collingbourne
c275c5c6d5
kasan: disable freed user page poisoning with HW tags
...
Poisoning freed pages protects against kernel use-after-free. The
likelihood of such a bug involving kernel pages is significantly higher
than that for user pages. At the same time, poisoning freed pages can
impose a significant performance cost, which cannot always be justified
for user pages given the lower probability of finding a bug. Therefore,
disable freed user page poisoning when using HW tags. We identify
"user" pages via the flag set GFP_HIGHUSER_MOVABLE, which indicates
a strong likelihood of not being directly accessible to the kernel.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://linux-review.googlesource.com/id/I716846e2de8ef179f44e835770df7e6307be96c9
Link: https://lore.kernel.org/r/20210602235230.3928842-5-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-06-04 19:32:21 +01:00
Peter Collingbourne
013bb59dbb
arm64: mte: handle tags zeroing at page allocation time
...
Currently, on an anonymous page fault, the kernel allocates a zeroed
page and maps it in user space. If the mapping is tagged (PROT_MTE),
set_pte_at() additionally clears the tags. It is, however, more
efficient to clear the tags at the same time as zeroing the data on
allocation. To avoid clearing the tags on any page (which may not be
mapped as tagged), only do this if the vma flags contain VM_MTE. This
requires introducing a new GFP flag that is used to determine whether
to clear the tags.
The DC GZVA instruction with a 0 top byte (and 0 tag) requires
top-byte-ignore. Set the TCR_EL1.{TBI1,TBID1} bits irrespective of
whether KASAN_HW is enabled.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Co-developed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://linux-review.googlesource.com/id/Id46dc94e30fe11474f7e54f5d65e7658dbdddb26
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20210602235230.3928842-4-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-06-04 19:32:21 +01:00
Shijie Luo
cb152a1a95
mm: fix some typos and code style problems
...
fix some typos and code style problems in mm.
gfp.h: s/MAXNODES/MAX_NUMNODES
mmzone.h: s/then/than
rmap.c: s/__vma_split()/__vma_adjust()
swap.c: s/__mod_zone_page_stat/__mod_zone_page_state, s/is is/is
swap_state.c: s/whoes/whose
z3fold.c: code style problem fix in z3fold_unregister_migration
zsmalloc.c: s/of/or, s/give/given
Link: https://lkml.kernel.org/r/20210419083057.64820-1-luoshijie1@huawei.com
Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-05-07 00:26:33 -07:00
Minchan Kim
78fa51503f
mm: use proper type for cma_[alloc|release]
...
size_t in cma_alloc is confusing since it makes people think it's a byte
count, not a page count. Change it to unsigned long [1].
The unsigned int in cma_release is also not right so change it. Since we
have unsigned long in cma_release, free_contig_range should also respect
it.
[1] 67a2e213e7, mm: cma: fix incorrect type conversion for size during dma allocation
Link: https://lore.kernel.org/linux-mm/20210324043434.GP1719932@casper.infradead.org/
Link: https://lkml.kernel.org/r/20210331164018.710560-1-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-05-05 11:27:24 -07:00
Mel Gorman
0f87d9d30f
mm/page_alloc: add an array-based interface to the bulk page allocator
...
The proposed callers for the bulk allocator store pages from the bulk
allocator in an array. This patch adds an array-based interface to the
API to avoid multiple list iterations. The page list interface is
preserved to avoid requiring all users of the bulk API to allocate and
manage enough storage to store the pages.
[akpm@linux-foundation.org: remove now unused local `allocated']
Link: https://lkml.kernel.org/r/20210325114228.27719-4-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Alexander Lobakin <alobakin@pm.me>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: David Miller <davem@davemloft.net>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-04-30 11:20:43 -07:00