mirror of https://github.com/linux-apfs/linux-apfs.git
Merge branch 'akpm' (patches from Andrew)
Merge patch-bomb from Andrew Morton:

 - a few misc things
 - Andy's "ambient capabilities"
 - fs/notify updates
 - the ocfs2 queue
 - kernel/watchdog.c updates and feature work
 - some of MM; includes Andrea's userfaultfd feature

[ Hadn't noticed that userfaultfd was 'default y' when applying the
  patches, so that got fixed in this merge instead.  We do _not_ mark
  new features that nobody uses yet 'default y'  - Linus ]

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (118 commits)
  mm/hugetlb.c: make vma_has_reserves() return bool
  mm/madvise.c: make madvise_behaviour_valid() return bool
  mm/memory.c: make tlb_next_batch() return bool
  mm/dmapool.c: change is_page_busy() return from int to bool
  mm: remove struct node_active_region
  mremap: simplify the "overlap" check in mremap_to()
  mremap: don't do uneccesary checks if new_len == old_len
  mremap: don't do mm_populate(new_addr) on failure
  mm: move ->mremap() from file_operations to vm_operations_struct
  mremap: don't leak new_vma if f_op->mremap() fails
  mm/hugetlb.c: make vma_shareable() return bool
  mm: make GUP handle pfn mapping unless FOLL_GET is requested
  mm: fix status code which move_pages() returns for zero page
  mm: memcontrol: bring back the VM_BUG_ON() in mem_cgroup_swapout()
  genalloc: add support of multiple gen_pools per device
  genalloc: add name arg to gen_pool_get() and devm_gen_pool_create()
  mm/memblock: WARN_ON when nid differs from overlap region
  Documentation/features/vm: add feature description and arch support status for batched TLB flush after unmap
  mm: defer flush of writable TLB entries
  mm: send one IPI per CPU to TLB flush all entries after unmapping pages
  ...
@@ -0,0 +1,40 @@
#
# Feature name:          batch-unmap-tlb-flush
#         Kconfig:       ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
#         description:   arch supports deferral of TLB flush until multiple pages are unmapped
#
    -----------------------
    |         arch |status|
    -----------------------
    |       alpha: | TODO |
    |         arc: | TODO |
    |         arm: | TODO |
    |       arm64: | TODO |
    |       avr32: |  ..  |
    |    blackfin: | TODO |
    |         c6x: |  ..  |
    |        cris: |  ..  |
    |         frv: |  ..  |
    |       h8300: |  ..  |
    |     hexagon: | TODO |
    |        ia64: | TODO |
    |        m32r: | TODO |
    |        m68k: |  ..  |
    |       metag: | TODO |
    |  microblaze: |  ..  |
    |        mips: | TODO |
    |     mn10300: | TODO |
    |       nios2: |  ..  |
    |    openrisc: |  ..  |
    |      parisc: | TODO |
    |     powerpc: | TODO |
    |        s390: | TODO |
    |       score: |  ..  |
    |          sh: | TODO |
    |       sparc: | TODO |
    |        tile: | TODO |
    |          um: |  ..  |
    |   unicore32: |  ..  |
    |         x86: |  ok  |
    |      xtensa: | TODO |
    -----------------------
@@ -303,6 +303,7 @@ Code  Seq#(hex)  Include File                    Comments
 0xA3  80-8F    Port ACL                        in development:
                                                <mailto:tlewis@mindspring.com>
 0xA3  90-9F    linux/dtlk.h
+0xAA  00-3F    linux/uapi/linux/userfaultfd.h
 0xAB  00-1F    linux/nbd.h
 0xAC  00-1F    linux/raw.h
 0xAD  00       Netfilter device                in development:
@@ -0,0 +1,144 @@
= Userfaultfd =

== Objective ==

Userfaults allow the implementation of on-demand paging from userland
and, more generally, they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example, userfaults allow a proper and more optimal implementation
of the PROT_NONE+SIGSEGV trick.

== Design ==

Userfaults are delivered and resolved through the userfaultfd syscall.

The userfaultfd (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) a read/POLLIN protocol to notify a userland thread of the faults
   happening

2) various UFFDIO_* ioctls that can manage the virtual memory regions
   registered in the userfaultfd, allowing userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults compared to regular virtual memory
management with mremap/mprotect is that userfault operations never
involve heavyweight structures like vmas (in fact the userfaultfd
runtime load never takes the mmap_sem for writing).

Vmas are not suitable for page- (or hugepage-) granular fault tracking
when dealing with virtual address spaces that could span terabytes:
too many vmas would be needed for that.

Once opened by invoking the syscall, the userfaultfd can also be
passed over unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware of what is going on
(unless, of course, they later try to use the userfaultfd themselves
on the same region the manager is already tracking, a corner case that
would currently return -EBUSY).

== API ==

When first opened, the userfaultfd must be enabled by invoking the
UFFDIO_API ioctl with uffdio_api.api set to UFFD_API (or a later API
version); this specifies the read/POLLIN protocol userland intends to
speak on the UFFD and the uffdio_api.features userland requires. If
successful (i.e. if the requested uffdio_api.api is also spoken by the
running kernel and the requested features are going to be enabled),
the UFFDIO_API ioctl returns in uffdio_api.features and
uffdio_api.ioctls two 64-bit bitmasks of, respectively, all the
available features of the read(2) protocol and the generic ioctls
available.

Once the userfaultfd has been enabled, the UFFDIO_REGISTER ioctl
should be invoked (if present in the returned uffdio_api.ioctls
bitmask) to register a memory range in the userfaultfd by setting the
uffdio_register structure accordingly. The uffdio_register.mode
bitmask specifies which kinds of faults the kernel should track for
the range (UFFDIO_REGISTER_MODE_MISSING would track missing
pages). The UFFDIO_REGISTER ioctl returns the uffdio_register.ioctls
bitmask of ioctls that are suitable to resolve userfaults on the
registered range. Not all ioctls will necessarily be supported for all
memory types, depending on the underlying virtual memory backend
(anonymous memory vs tmpfs vs real file-backed mappings).

Userland can use the uffdio_register.ioctls to manage the virtual
address space in the background (to add, or potentially also remove,
memory from the userfaultfd registered range). This means a userfault
could be triggering just before userland maps the user-faulted page in
the background.

The primary ioctl to resolve userfaults is UFFDIO_COPY. It atomically
copies a page into the userfault registered range and wakes up the
blocked userfaults (unless uffdio_copy.mode &
UFFDIO_COPY_MODE_DONTWAKE is set). Other ioctls work similarly to
UFFDIO_COPY. They're atomic in the sense that nothing can ever see a
half-copied page: the access keeps userfaulting until the copy has
finished.
== QEMU/KVM ==

QEMU/KVM uses the userfaultfd syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization, consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
userfaultfd abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, FOLL_NOWAIT and all other GUP features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler, so guest processes that aren't
waiting for userfaults (i.e. network bound) can keep running in the
guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket, but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease /proc/sys/net/ipv4/tcp_wmem).

The QEMU on the source node writes into the socket all pages that it
knows are missing on the destination node, and the migration thread of
the QEMU running on the destination node runs UFFDIO_COPY|ZEROPAGE
ioctls on the userfaultfd in order to map the received pages into the
guest (UFFDIO_ZEROPAGE is used if the source page was a zero page).

A different postcopy thread on the destination node listens on the
userfaultfd with poll() in parallel. When a POLLIN event is generated
after a userfault triggers, the postcopy thread read()s from the
userfaultfd and receives the fault address (or -EAGAIN in case the
userfault was already resolved and woken by a UFFDIO_COPY|ZEROPAGE run
by the parallel QEMU migration thread).

After the QEMU postcopy thread (running on the destination node) gets
the userfault address, it writes the information about the missing
page into the socket. The QEMU source node receives the information,
roughly "seeks" to that page address and continues sending all the
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network), the
migration thread of the QEMU running on the destination node receives
the page that triggered the userfault and maps it as usual with
UFFDIO_COPY|ZEROPAGE (without actually knowing whether it was
spontaneously sent by the source or whether it was an urgent page
requested through a userfault).

By the time the userfaults start, the QEMU on the destination node
doesn't need to keep any per-page state bitmap for the live migration
around; only a single per-page bitmap has to be maintained in the QEMU
running on the source node, to know which pages are still missing on
the destination node. The bitmap on the source node is checked to find
which missing pages to send in round robin, and we seek over it when
receiving incoming userfaults. After sending each page the bitmap is
of course updated accordingly. This is also useful to avoid sending
the same page twice (in case the userfault is read by the postcopy
thread just before UFFDIO_COPY|ZEROPAGE runs in the migration thread).
@@ -369,7 +369,7 @@ static void __init at91_pm_sram_init(void)
 		return;
 	}

-	sram_pool = gen_pool_get(&pdev->dev);
+	sram_pool = gen_pool_get(&pdev->dev, NULL);
 	if (!sram_pool) {
 		pr_warn("%s: sram pool unavailable!\n", __func__);
 		return;
@@ -297,7 +297,7 @@ static int __init imx_suspend_alloc_ocram(
 		goto put_node;
 	}

-	ocram_pool = gen_pool_get(&pdev->dev);
+	ocram_pool = gen_pool_get(&pdev->dev, NULL);
 	if (!ocram_pool) {
 		pr_warn("%s: ocram pool unavailable!\n", __func__);
 		ret = -ENODEV;
@@ -451,7 +451,7 @@ static int __init imx6q_suspend_init(const struct imx6_pm_socdata *socdata)
 		goto put_node;
 	}

-	ocram_pool = gen_pool_get(&pdev->dev);
+	ocram_pool = gen_pool_get(&pdev->dev, NULL);
 	if (!ocram_pool) {
 		pr_warn("%s: ocram pool unavailable!\n", __func__);
 		ret = -ENODEV;
@@ -56,7 +56,7 @@ static int socfpga_setup_ocram_self_refresh(void)
 		goto put_node;
 	}

-	ocram_pool = gen_pool_get(&pdev->dev);
+	ocram_pool = gen_pool_get(&pdev->dev, NULL);
 	if (!ocram_pool) {
 		pr_warn("%s: ocram pool unavailable!\n", __func__);
 		ret = -ENODEV;
@@ -488,7 +488,7 @@ void free_initrd_mem(unsigned long start, unsigned long end)
 int arch_add_memory(int nid, u64 start, u64 size)
 {
 	pg_data_t *pgdat;
-	unsigned long start_pfn = start >> PAGE_SHIFT;
+	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;

@@ -517,7 +517,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
 #ifdef CONFIG_MEMORY_HOTREMOVE
 int arch_remove_memory(u64 start, u64 size)
 {
-	unsigned long start_pfn = start >> PAGE_SHIFT;
+	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	struct zone *zone;
 	int ret;
@@ -33,8 +33,8 @@ void __init setup_bootmem_node(int nid, unsigned long start, unsigned long end)
 	/* Don't allow bogus node assignment */
 	BUG_ON(nid >= MAX_NUMNODES || nid <= 0);

-	start_pfn = start >> PAGE_SHIFT;
-	end_pfn = end >> PAGE_SHIFT;
+	start_pfn = PFN_DOWN(start);
+	end_pfn = PFN_DOWN(end);

 	pmb_bolt_mapping((unsigned long)__va(start), start, end - start,
 			 PAGE_KERNEL);
@@ -41,6 +41,7 @@ config X86
 	select ARCH_USE_CMPXCHG_LOCKREF if X86_64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
+	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if SMP
 	select ARCH_WANTS_DYNAMIC_TASK_STRUCT
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_IPC_PARSE_VERSION if X86_32
@@ -380,3 +380,4 @@
 371	i386	recvfrom		sys_recvfrom		compat_sys_recvfrom
 372	i386	recvmsg			sys_recvmsg		compat_sys_recvmsg
 373	i386	shutdown		sys_shutdown
+374	i386	userfaultfd		sys_userfaultfd
@@ -329,6 +329,7 @@
 320	common	kexec_file_load		sys_kexec_file_load
 321	common	bpf			sys_bpf
 322	64	execveat		stub_execveat
+323	common	userfaultfd		sys_userfaultfd

 #
 # x32-specific system call numbers start at 512 to avoid cache impact
@@ -261,6 +261,12 @@ static inline void reset_lazy_tlbstate(void)

 #endif	/* SMP */

+/* Not inlined due to inc_irq_stat not being defined yet */
+#define flush_tlb_local() {		\
+	inc_irq_stat(irq_tlb_count);	\
+	local_flush_tlb();		\
+}
+
 #ifndef CONFIG_PARAVIRT
 #define flush_tlb_others(mask, mm, start, end)	\
 	native_flush_tlb_others(mask, mm, start, end)
@@ -12,7 +12,7 @@
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/export.h>
-#include <linux/watchdog.h>
+#include <linux/nmi.h>

 #include <asm/cpufeature.h>
 #include <asm/hardirq.h>

@@ -3627,7 +3627,10 @@ static __init int fixup_ht_bug(void)
 		return 0;
 	}

-	watchdog_nmi_disable_all();
+	if (lockup_detector_suspend() != 0) {
+		pr_debug("failed to disable PMU erratum BJ122, BV98, HSD29 workaround\n");
+		return 0;
+	}

 	x86_pmu.flags &= ~(PMU_FL_EXCL_CNTRS | PMU_FL_EXCL_ENABLED);

@@ -3635,7 +3638,7 @@ static __init int fixup_ht_bug(void)
 	x86_pmu.commit_scheduling = NULL;
 	x86_pmu.stop_scheduling = NULL;

-	watchdog_nmi_enable_all();
+	lockup_detector_resume();

 	get_online_cpus();

@@ -140,6 +140,7 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 	info.flush_end = end;

 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
+	trace_tlb_flush(TLB_REMOTE_SEND_IPI, end - start);
 	if (is_uv_system()) {
 		unsigned int cpu;

@@ -392,6 +392,16 @@ int register_mem_sect_under_node(struct memory_block *mem_blk, int nid)
 	for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
 		int page_nid;

+		/*
+		 * memory block could have several absent sections from start.
+		 * skip pfn range from absent section
+		 */
+		if (!pfn_present(pfn)) {
+			pfn = round_down(pfn + PAGES_PER_SECTION,
+					 PAGES_PER_SECTION) - 1;
+			continue;
+		}
+
 		page_nid = get_nid_for_pfn(pfn);
 		if (page_nid < 0)
 			continue;
@@ -2157,7 +2157,7 @@ static int coda_probe(struct platform_device *pdev)
 	/* Get IRAM pool from device tree or platform data */
 	pool = of_gen_pool_get(np, "iram", 0);
 	if (!pool && pdata)
-		pool = gen_pool_get(pdata->iram_dev);
+		pool = gen_pool_get(pdata->iram_dev, NULL);
 	if (!pool) {
 		dev_err(&pdev->dev, "iram pool not available\n");
 		return -ENOMEM;
@@ -186,10 +186,10 @@ static int sram_probe(struct platform_device *pdev)
 	if (IS_ERR(sram->virt_base))
 		return PTR_ERR(sram->virt_base);

-	sram->pool = devm_gen_pool_create(sram->dev,
-					  ilog2(SRAM_GRANULARITY), -1);
-	if (!sram->pool)
-		return -ENOMEM;
+	sram->pool = devm_gen_pool_create(sram->dev, ilog2(SRAM_GRANULARITY),
+					  NUMA_NO_NODE, NULL);
+	if (IS_ERR(sram->pool))
+		return PTR_ERR(sram->pool);

 	ret = sram_reserve_regions(sram, res);
 	if (ret)
@@ -9,7 +9,7 @@ config VGA_CONSOLE
 	depends on !4xx && !8xx && !SPARC && !M68K && !PARISC && !FRV && \
 		!SUPERH && !BLACKFIN && !AVR32 && !MN10300 && !CRIS && \
 		(!ARM || ARCH_FOOTBRIDGE || ARCH_INTEGRATOR || ARCH_NETWINDER) && \
-		!ARM64
+		!ARM64 && !ARC
 	default y
 	help
 	  Saying Y here will allow you to use Linux in text mode through a
@@ -27,6 +27,7 @@ obj-$(CONFIG_ANON_INODES)	+= anon_inodes.o
 obj-$(CONFIG_SIGNALFD)		+= signalfd.o
 obj-$(CONFIG_TIMERFD)		+= timerfd.o
 obj-$(CONFIG_EVENTFD)		+= eventfd.o
+obj-$(CONFIG_USERFAULTFD)	+= userfaultfd.o
 obj-$(CONFIG_AIO)		+= aio.o
 obj-$(CONFIG_FS_DAX)		+= dax.o
 obj-$(CONFIG_FILE_LOCKING)	+= locks.o
Some files were not shown because too many files have changed in this diff.