linux-t2-patches/1000-linux-hardened.patch
diff -rupN linux-hardened/Documentation/admin-guide/kernel-parameters.txt linux-5.17.1/Documentation/admin-guide/kernel-parameters.txt
--- linux-hardened/Documentation/admin-guide/kernel-parameters.txt 2022-04-05 20:57:00.838873433 +0900
+++ linux-5.17.1/Documentation/admin-guide/kernel-parameters.txt 2022-03-28 17:03:22.000000000 +0900
@@ -550,6 +550,17 @@
nosocket -- Disable socket memory accounting.
nokmem -- Disable kernel memory accounting.
+ checkreqprot [SELINUX] Set initial checkreqprot flag value.
+ Format: { "0" | "1" }
+ See security/selinux/Kconfig help text.
+ 0 -- check protection applied by kernel (includes
+ any implied execute protection).
+ 1 -- check protection requested by application.
+ Default value is set via a kernel config option.
+ Value can be changed at runtime via
+ /sys/fs/selinux/checkreqprot.
+ Setting checkreqprot to 1 is deprecated.
+
cio_ignore= [S390]
See Documentation/s390/common_io.rst for details.
clk_ignore_unused
@@ -3864,11 +3875,6 @@
the specified number of seconds. This is to be used if
your oopses keep scrolling off the screen.
- extra_latent_entropy
- Enable a very simple form of latent entropy extraction
- from the first 4GB of memory as the bootmem allocator
- passes the memory pages to the buddy allocator.
-
pcbit= [HW,ISDN]
pcd. [PARIDE]
diff -rupN linux-hardened/Documentation/admin-guide/sysctl/kernel.rst linux-5.17.1/Documentation/admin-guide/sysctl/kernel.rst
--- linux-hardened/Documentation/admin-guide/sysctl/kernel.rst 2022-04-05 20:57:00.865873773 +0900
+++ linux-5.17.1/Documentation/admin-guide/sysctl/kernel.rst 2022-03-28 17:03:22.000000000 +0900
@@ -868,8 +868,6 @@ with respect to CAP_PERFMON use cases.
>=1 Disallow CPU event access by users without ``CAP_PERFMON``.
>=2 Disallow kernel profiling by users without ``CAP_PERFMON``.
-
->=3 Disallow use of any event by users without ``CAP_PERFMON``.
=== ==================================================================
@@ -1412,26 +1410,6 @@ If a value outside of this range is writ
``EINVAL`` error occurs.
-tiocsti_restrict
-================
-
-This toggle indicates whether unprivileged users are prevented from using the
-``TIOCSTI`` ioctl to inject commands into other processes which share a tty
-session.
-
-= ============================================================================
-0 No restriction, except the default one of only being able to inject commands
- into one's own tty.
-1 Users must have ``CAP_SYS_ADMIN`` to use the ``TIOCSTI`` ioctl.
-= ============================================================================
-
-When user namespaces are in use, the check for ``CAP_SYS_ADMIN`` is done
-against the user namespace that originally opened the tty.
-
-The kernel config option ``CONFIG_SECURITY_TIOCSTI_RESTRICT`` sets the default
-value of ``tiocsti_restrict``.
-
-
traceoff_on_warning
===================
diff -rupN linux-hardened/Documentation/networking/ip-sysctl.rst linux-5.17.1/Documentation/networking/ip-sysctl.rst
--- linux-hardened/Documentation/networking/ip-sysctl.rst 2022-04-05 20:57:01.470881378 +0900
+++ linux-5.17.1/Documentation/networking/ip-sysctl.rst 2022-03-28 17:03:22.000000000 +0900
@@ -716,24 +716,6 @@ tcp_comp_sack_nr - INTEGER
Default : 44
-tcp_simult_connect - BOOLEAN
- Enable TCP simultaneous connect that adds a weakness in Linux's strict
- implementation of TCP that allows two clients to connect to each other
- without either entering a listening state. The weakness allows an attacker
- to easily prevent a client from connecting to a known server provided the
- source port for the connection is guessed correctly.
-
- As the weakness could be used to prevent an antivirus or IPS from fetching
- updates, or prevent an SSL gateway from fetching a CRL, it should be
- eliminated by disabling this option. Though Linux is one of few operating
- systems supporting simultaneous connect, it has no legitimate use in
- practice and is rarely supported by firewalls.
-
- Disabling this may break TCP STUNT which is used by some applications for
- NAT traversal.
-
- Default: Value of CONFIG_TCP_SIMULT_CONNECT_DEFAULT_ON
-
tcp_slow_start_after_idle - BOOLEAN
If set, provide RFC2861 behavior and time out the congestion
window after an idle period. An idle period is defined at
diff -rupN linux-hardened/Makefile linux-5.17.1/Makefile
--- linux-hardened/Makefile 2022-04-05 20:57:01.744884822 +0900
+++ linux-5.17.1/Makefile 2022-03-28 17:03:22.000000000 +0900
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 17
-SUBLEVEL = 0
+SUBLEVEL = 1
EXTRAVERSION =
NAME = Superb Owl
diff -rupN linux-hardened/arch/Kconfig linux-5.17.1/arch/Kconfig
--- linux-hardened/arch/Kconfig 2022-04-05 20:57:01.745884835 +0900
+++ linux-5.17.1/arch/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -937,7 +937,7 @@ config ARCH_MMAP_RND_BITS
int "Number of bits to use for ASLR of mmap base address" if EXPERT
range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
- default ARCH_MMAP_RND_BITS_MAX
+ default ARCH_MMAP_RND_BITS_MIN
depends on HAVE_ARCH_MMAP_RND_BITS
help
This value can be used to select the number of bits to use to
@@ -971,7 +971,7 @@ config ARCH_MMAP_RND_COMPAT_BITS
int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
- default ARCH_MMAP_RND_COMPAT_BITS_MAX
+ default ARCH_MMAP_RND_COMPAT_BITS_MIN
depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
help
This value can be used to select the number of bits to use to
@@ -1162,7 +1162,6 @@ config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
config RANDOMIZE_KSTACK_OFFSET_DEFAULT
bool "Randomize kernel stack offset on syscall entry"
depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
- default y
help
The kernel stack offset can be randomized (after pt_regs) by
roughly 5 bits of entropy, frustrating memory corruption
diff -rupN linux-hardened/arch/arm64/Kconfig linux-5.17.1/arch/arm64/Kconfig
--- linux-hardened/arch/arm64/Kconfig 2022-04-05 20:57:02.338892289 +0900
+++ linux-5.17.1/arch/arm64/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -1405,7 +1405,6 @@ config RODATA_FULL_DEFAULT_ENABLED
config ARM64_SW_TTBR0_PAN
bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
- default y
help
Enabling this option prevents the kernel from accessing
user-space memory directly by pointing TTBR0_EL1 to a reserved
@@ -1994,7 +1993,6 @@ config RANDOMIZE_BASE
bool "Randomize the address of the kernel image"
select ARM64_MODULE_PLTS if MODULES
select RELOCATABLE
- default y
help
Randomizes the virtual address at which the kernel image is
loaded, as a security feature that deters exploit attempts
diff -rupN linux-hardened/arch/arm64/configs/defconfig linux-5.17.1/arch/arm64/configs/defconfig
--- linux-hardened/arch/arm64/configs/defconfig 2022-04-05 20:57:02.602895607 +0900
+++ linux-5.17.1/arch/arm64/configs/defconfig 2022-03-28 17:03:22.000000000 +0900
@@ -1,3 +1,4 @@
+CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_AUDIT=y
CONFIG_NO_HZ_IDLE=y
diff -rupN linux-hardened/arch/arm64/include/asm/elf.h linux-5.17.1/arch/arm64/include/asm/elf.h
--- linux-hardened/arch/arm64/include/asm/elf.h 2022-04-05 20:57:02.618895808 +0900
+++ linux-5.17.1/arch/arm64/include/asm/elf.h 2022-03-28 17:03:22.000000000 +0900
@@ -124,10 +124,14 @@
/*
* This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
* space open for things that want to use the area for 32-bit pointers.
*/
-#define ELF_ET_DYN_BASE 0x100000000UL
+#ifdef CONFIG_ARM64_FORCE_52BIT
+#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
+#else
+#define ELF_ET_DYN_BASE (2 * DEFAULT_MAP_WINDOW_64 / 3)
+#endif /* CONFIG_ARM64_FORCE_52BIT */
#ifndef __ASSEMBLY__
@@ -185,10 +189,10 @@ extern int arch_setup_additional_pages(s
/* 1GB of VA */
#ifdef CONFIG_COMPAT
#define STACK_RND_MASK (test_thread_flag(TIF_32BIT) ? \
- ((1UL << mmap_rnd_compat_bits) - 1) >> (PAGE_SHIFT - 12) : \
- ((1UL << mmap_rnd_bits) - 1) >> (PAGE_SHIFT - 12))
+ 0x7ff >> (PAGE_SHIFT - 12) : \
+ 0x3ffff >> (PAGE_SHIFT - 12))
#else
-#define STACK_RND_MASK (((1UL << mmap_rnd_bits) - 1) >> (PAGE_SHIFT - 12))
+#define STACK_RND_MASK (0x3ffff >> (PAGE_SHIFT - 12))
#endif
#ifdef __AARCH64EB__
diff -rupN linux-hardened/arch/csky/include/asm/uaccess.h linux-5.17.1/arch/csky/include/asm/uaccess.h
--- linux-hardened/arch/csky/include/asm/uaccess.h 2022-04-05 20:57:02.738897317 +0900
+++ linux-5.17.1/arch/csky/include/asm/uaccess.h 2022-03-28 17:03:22.000000000 +0900
@@ -3,14 +3,13 @@
#ifndef __ASM_CSKY_UACCESS_H
#define __ASM_CSKY_UACCESS_H
-#define user_addr_max() \
- (uaccess_kernel() ? KERNEL_DS.seg : get_fs().seg)
+#define user_addr_max() (current_thread_info()->addr_limit.seg)
static inline int __access_ok(unsigned long addr, unsigned long size)
{
- unsigned long limit = current_thread_info()->addr_limit.seg;
+ unsigned long limit = user_addr_max();
- return ((addr < limit) && ((addr + size) < limit));
+ return (size <= limit) && (addr <= (limit - size));
}
#define __access_ok __access_ok
diff -rupN linux-hardened/arch/hexagon/include/asm/uaccess.h linux-5.17.1/arch/hexagon/include/asm/uaccess.h
--- linux-hardened/arch/hexagon/include/asm/uaccess.h 2022-04-05 20:57:02.754897518 +0900
+++ linux-5.17.1/arch/hexagon/include/asm/uaccess.h 2022-03-28 17:03:22.000000000 +0900
@@ -25,17 +25,17 @@
* Returns true (nonzero) if the memory block *may* be valid, false (zero)
* if it is definitely invalid.
*
- * User address space in Hexagon, like x86, goes to 0xbfffffff, so the
- * simple MSB-based tests used by MIPS won't work. Some further
- * optimization is probably possible here, but for now, keep it
- * reasonably simple and not *too* slow. After all, we've got the
- * MMU for backup.
*/
+#define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
+#define user_addr_max() (uaccess_kernel() ? ~0UL : TASK_SIZE)
-#define __access_ok(addr, size) \
- ((get_fs().seg == KERNEL_DS.seg) || \
- (((unsigned long)addr < get_fs().seg) && \
- (unsigned long)size < (get_fs().seg - (unsigned long)addr)))
+static inline int __access_ok(unsigned long addr, unsigned long size)
+{
+ unsigned long limit = TASK_SIZE;
+
+ return (size <= limit) && (addr <= (limit - size));
+}
+#define __access_ok __access_ok
/*
* When a kernel-mode page fault is taken, the faulting instruction
diff -rupN linux-hardened/arch/m68k/include/asm/uaccess.h linux-5.17.1/arch/m68k/include/asm/uaccess.h
--- linux-hardened/arch/m68k/include/asm/uaccess.h 2022-04-05 20:57:02.820898348 +0900
+++ linux-5.17.1/arch/m68k/include/asm/uaccess.h 2022-03-28 17:03:22.000000000 +0900
@@ -12,14 +12,17 @@
#include <asm/extable.h>
/* We let the MMU do all checking */
-static inline int access_ok(const void __user *addr,
+static inline int access_ok(const void __user *ptr,
unsigned long size)
{
- /*
- * XXX: for !CONFIG_CPU_HAS_ADDRESS_SPACES this really needs to check
- * for TASK_SIZE!
- */
- return 1;
+ unsigned long limit = TASK_SIZE;
+ unsigned long addr = (unsigned long)ptr;
+
+ if (IS_ENABLED(CONFIG_CPU_HAS_ADDRESS_SPACES) ||
+ !IS_ENABLED(CONFIG_MMU))
+ return 1;
+
+ return (size <= limit) && (addr <= (limit - size));
}
/*
diff -rupN linux-hardened/arch/microblaze/include/asm/uaccess.h linux-5.17.1/arch/microblaze/include/asm/uaccess.h
--- linux-hardened/arch/microblaze/include/asm/uaccess.h 2022-04-05 20:57:02.838898574 +0900
+++ linux-5.17.1/arch/microblaze/include/asm/uaccess.h 2022-03-28 17:03:22.000000000 +0900
@@ -39,24 +39,13 @@
# define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
-static inline int access_ok(const void __user *addr, unsigned long size)
+static inline int __access_ok(unsigned long addr, unsigned long size)
{
- if (!size)
- goto ok;
+ unsigned long limit = user_addr_max();
- if ((get_fs().seg < ((unsigned long)addr)) ||
- (get_fs().seg < ((unsigned long)addr + size - 1))) {
- pr_devel("ACCESS fail at 0x%08x (size 0x%x), seg 0x%08x\n",
- (__force u32)addr, (u32)size,
- (u32)get_fs().seg);
- return 0;
- }
-ok:
- pr_devel("ACCESS OK at 0x%08x (size 0x%x), seg 0x%08x\n",
- (__force u32)addr, (u32)size,
- (u32)get_fs().seg);
- return 1;
+ return (size <= limit) && (addr <= (limit - size));
}
+#define access_ok(addr, size) __access_ok((unsigned long)addr, size)
# define __FIXUP_SECTION ".section .fixup,\"ax\"\n"
# define __EX_TABLE_SECTION ".section __ex_table,\"a\"\n"
diff -rupN linux-hardened/arch/nds32/include/asm/uaccess.h linux-5.17.1/arch/nds32/include/asm/uaccess.h
--- linux-hardened/arch/nds32/include/asm/uaccess.h 2022-04-05 20:57:03.002900635 +0900
+++ linux-5.17.1/arch/nds32/include/asm/uaccess.h 2022-03-28 17:03:22.000000000 +0900
@@ -70,9 +70,7 @@ static inline void set_fs(mm_segment_t f
* versions are void (ie, don't return a value as such).
*/
-#define get_user __get_user \
-
-#define __get_user(x, ptr) \
+#define get_user(x, ptr) \
({ \
long __gu_err = 0; \
__get_user_check((x), (ptr), __gu_err); \
@@ -85,6 +83,14 @@ static inline void set_fs(mm_segment_t f
(void)0; \
})
+#define __get_user(x, ptr) \
+({ \
+ long __gu_err = 0; \
+ const __typeof__(*(ptr)) __user *__p = (ptr); \
+ __get_user_err((x), __p, (__gu_err)); \
+ __gu_err; \
+})
+
#define __get_user_check(x, ptr, err) \
({ \
const __typeof__(*(ptr)) __user *__p = (ptr); \
@@ -165,12 +171,18 @@ do { \
: "r"(addr), "i"(-EFAULT) \
: "cc")
-#define put_user __put_user \
+#define put_user(x, ptr) \
+({ \
+ long __pu_err = 0; \
+ __put_user_check((x), (ptr), __pu_err); \
+ __pu_err; \
+})
#define __put_user(x, ptr) \
({ \
long __pu_err = 0; \
- __put_user_err((x), (ptr), __pu_err); \
+ __typeof__(*(ptr)) __user *__p = (ptr); \
+ __put_user_err((x), __p, __pu_err); \
__pu_err; \
})
diff -rupN linux-hardened/arch/x86/Kconfig linux-5.17.1/arch/x86/Kconfig
--- linux-hardened/arch/x86/Kconfig 2022-04-05 20:57:03.637908618 +0900
+++ linux-5.17.1/arch/x86/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -1206,7 +1206,8 @@ config VM86
default X86_LEGACY_VM86
config X86_16BIT
- bool "Enable support for 16-bit segments"
+ bool "Enable support for 16-bit segments" if EXPERT
+ default y
depends on MODIFY_LDT_SYSCALL
help
This option is required by programs like Wine to run 16-bit
@@ -2306,7 +2307,7 @@ config COMPAT_VDSO
choice
prompt "vsyscall table for legacy applications"
depends on X86_64
- default LEGACY_VSYSCALL_NONE
+ default LEGACY_VSYSCALL_XONLY
help
Legacy user code that does not know how to find the vDSO expects
to be able to issue three syscalls by calling fixed addresses in
@@ -2402,7 +2403,8 @@ config CMDLINE_OVERRIDE
be set to 'N' under normal conditions.
config MODIFY_LDT_SYSCALL
- bool "Enable the LDT (local descriptor table)"
+ bool "Enable the LDT (local descriptor table)" if EXPERT
+ default y
help
Linux can allow user programs to install a per-process x86
Local Descriptor Table (LDT) using the modify_ldt(2) system
diff -rupN linux-hardened/arch/x86/configs/x86_64_defconfig linux-5.17.1/arch/x86/configs/x86_64_defconfig
--- linux-hardened/arch/x86/configs/x86_64_defconfig 2022-04-05 20:57:03.646908731 +0900
+++ linux-5.17.1/arch/x86/configs/x86_64_defconfig 2022-03-28 17:03:22.000000000 +0900
@@ -1,3 +1,4 @@
+CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_AUDIT=y
CONFIG_NO_HZ=y
diff -rupN linux-hardened/arch/x86/entry/vdso/vma.c linux-5.17.1/arch/x86/entry/vdso/vma.c
--- linux-hardened/arch/x86/entry/vdso/vma.c 2022-04-05 20:57:03.680909158 +0900
+++ linux-5.17.1/arch/x86/entry/vdso/vma.c 2022-03-28 17:03:22.000000000 +0900
@@ -298,9 +298,55 @@ up_fail:
}
#ifdef CONFIG_X86_64
+/*
+ * Put the vdso above the (randomized) stack with another randomized
+ * offset. This way there is no hole in the middle of address space.
+ * To save memory make sure it is still in the same PTE as the stack
+ * top. This doesn't give that many random bits.
+ *
+ * Note that this algorithm is imperfect: the distribution of the vdso
+ * start address within a PMD is biased toward the end.
+ *
+ * Only used for the 64-bit and x32 vdsos.
+ */
+static unsigned long vdso_addr(unsigned long start, unsigned len)
+{
+ unsigned long addr, end;
+ unsigned offset;
+
+ /*
+ * Round up the start address. It can start out unaligned as a result
+ * of stack start randomization.
+ */
+ start = PAGE_ALIGN(start);
+
+ /* Round the lowest possible end address up to a PMD boundary. */
+ end = (start + len + PMD_SIZE - 1) & PMD_MASK;
+ if (end >= TASK_SIZE_MAX)
+ end = TASK_SIZE_MAX;
+ end -= len;
+
+ if (end > start) {
+ offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
+ addr = start + (offset << PAGE_SHIFT);
+ } else {
+ addr = start;
+ }
+
+ /*
+ * Forcibly align the final address in case we have a hardware
+ * issue that requires alignment for performance reasons.
+ */
+ addr = align_vdso_addr(addr);
+
+ return addr;
+}
+
static int map_vdso_randomized(const struct vdso_image *image)
{
- return map_vdso(image, 0);
+ unsigned long addr = vdso_addr(current->mm->start_stack, image->size-image->sym_vvar_start);
+
+ return map_vdso(image, addr);
}
#endif
diff -rupN linux-hardened/arch/x86/include/asm/elf.h linux-5.17.1/arch/x86/include/asm/elf.h
--- linux-hardened/arch/x86/include/asm/elf.h 2022-04-05 20:57:03.720909661 +0900
+++ linux-5.17.1/arch/x86/include/asm/elf.h 2022-03-28 17:03:22.000000000 +0900
@@ -247,11 +247,11 @@ extern int force_personality32;
/*
* This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
* space open for things that want to use the area for 32-bit pointers.
*/
#define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \
- 0x100000000UL)
+ (DEFAULT_MAP_WINDOW / 3 * 2))
/* This yields a mask that user programs can use to figure out what
instruction set this CPU supports. This could be done in user space,
@@ -333,8 +333,8 @@ extern unsigned long get_sigframe_size(v
#ifdef CONFIG_X86_32
-#define __STACK_RND_MASK(is32bit) ((1UL << mmap_rnd_bits) - 1)
-#define STACK_RND_MASK ((1UL << mmap_rnd_bits) - 1)
+#define __STACK_RND_MASK(is32bit) (0x7ff)
+#define STACK_RND_MASK (0x7ff)
#define ARCH_DLINFO ARCH_DLINFO_IA32
@@ -343,11 +343,7 @@ extern unsigned long get_sigframe_size(v
#else /* CONFIG_X86_32 */
/* 1GB for 64bit, 8MB for 32bit */
-#ifdef CONFIG_COMPAT
-#define __STACK_RND_MASK(is32bit) ((is32bit) ? (1UL << mmap_rnd_compat_bits) - 1 : (1UL << mmap_rnd_bits) - 1)
-#else
-#define __STACK_RND_MASK(is32bit) ((1UL << mmap_rnd_bits) - 1)
-#endif
+#define __STACK_RND_MASK(is32bit) ((is32bit) ? 0x7ff : 0x3fffff)
#define STACK_RND_MASK __STACK_RND_MASK(mmap_is_ia32())
#define ARCH_DLINFO \
@@ -411,4 +407,5 @@ struct va_alignment {
} ____cacheline_aligned;
extern struct va_alignment va_align;
+extern unsigned long align_vdso_addr(unsigned long);
#endif /* _ASM_X86_ELF_H */
diff -rupN linux-hardened/arch/x86/kernel/acpi/boot.c linux-5.17.1/arch/x86/kernel/acpi/boot.c
--- linux-hardened/arch/x86/kernel/acpi/boot.c 2022-04-05 20:57:03.768910264 +0900
+++ linux-5.17.1/arch/x86/kernel/acpi/boot.c 2022-03-28 17:03:22.000000000 +0900
@@ -1328,6 +1328,17 @@ static int __init disable_acpi_pci(const
return 0;
}
+static int __init disable_acpi_xsdt(const struct dmi_system_id *d)
+{
+ if (!acpi_force) {
+ pr_notice("%s detected: force use of acpi=rsdt\n", d->ident);
+ acpi_gbl_do_not_use_xsdt = TRUE;
+ } else {
+ pr_notice("Warning: DMI blacklist says broken, but acpi XSDT forced\n");
+ }
+ return 0;
+}
+
static int __init dmi_disable_acpi(const struct dmi_system_id *d)
{
if (!acpi_force) {
@@ -1451,6 +1462,19 @@ static const struct dmi_system_id acpi_d
DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 360"),
},
},
+ /*
+ * Boxes that need ACPI XSDT use disabled due to corrupted tables
+ */
+ {
+ .callback = disable_acpi_xsdt,
+ .ident = "Advantech DAC-BJ01",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "NEC"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Bearlake CRB Board"),
+ DMI_MATCH(DMI_BIOS_VERSION, "V1.12"),
+ DMI_MATCH(DMI_BIOS_DATE, "02/01/2011"),
+ },
+ },
{}
};
diff -rupN linux-hardened/arch/x86/kernel/cpu/common.c linux-5.17.1/arch/x86/kernel/cpu/common.c
--- linux-hardened/arch/x86/kernel/cpu/common.c 2022-04-05 20:57:03.788910516 +0900
+++ linux-5.17.1/arch/x86/kernel/cpu/common.c 2022-03-28 17:03:22.000000000 +0900
@@ -409,7 +409,6 @@ EXPORT_SYMBOL_GPL(native_write_cr4);
void cr4_update_irqsoff(unsigned long set, unsigned long clear)
{
unsigned long newval, cr4 = this_cpu_read(cpu_tlbstate.cr4);
- BUG_ON(cr4 != __read_cr4());
lockdep_assert_irqs_disabled();
diff -rupN linux-hardened/arch/x86/kernel/process.c linux-5.17.1/arch/x86/kernel/process.c
--- linux-hardened/arch/x86/kernel/process.c 2022-04-05 20:57:03.843911207 +0900
+++ linux-5.17.1/arch/x86/kernel/process.c 2022-03-28 17:03:22.000000000 +0900
@@ -46,8 +46,6 @@
#include <asm/proto.h>
#include <asm/frame.h>
#include <asm/unwind.h>
-#include <asm/elf.h>
-#include <linux/sizes.h>
#include "process.h"
@@ -643,7 +641,6 @@ void speculation_ctrl_update_current(voi
static inline void cr4_toggle_bits_irqsoff(unsigned long mask)
{
unsigned long newval, cr4 = this_cpu_read(cpu_tlbstate.cr4);
- BUG_ON(cr4 != __read_cr4());
newval = cr4 ^ mask;
if (newval != cr4) {
@@ -953,10 +950,7 @@ unsigned long arch_align_stack(unsigned
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
- if (mmap_is_ia32())
- return mm->brk + get_random_long() % SZ_32M + PAGE_SIZE;
- else
- return mm->brk + get_random_long() % SZ_1G + PAGE_SIZE;
+ return randomize_page(mm->brk, 0x02000000);
}
/*
diff -rupN linux-hardened/arch/x86/kernel/sys_x86_64.c linux-5.17.1/arch/x86/kernel/sys_x86_64.c
--- linux-hardened/arch/x86/kernel/sys_x86_64.c 2022-04-05 20:55:46.302936501 +0900
+++ linux-5.17.1/arch/x86/kernel/sys_x86_64.c 2022-03-28 17:03:22.000000000 +0900
@@ -52,6 +52,13 @@ static unsigned long get_align_bits(void
return va_align.bits & get_align_mask();
}
+unsigned long align_vdso_addr(unsigned long addr)
+{
+ unsigned long align_mask = get_align_mask();
+ addr = (addr + align_mask) & ~align_mask;
+ return addr | get_align_bits();
+}
+
static int __init control_va_addr_alignment(char *str)
{
/* guard against enabling this on other CPU families */
@@ -109,7 +116,10 @@ static void find_start_end(unsigned long
}
*begin = get_mmap_base(1);
- *end = get_mmap_base(0);
+ if (in_32bit_syscall())
+ *end = task_size_32bit();
+ else
+ *end = task_size_64bit(addr > DEFAULT_MAP_WINDOW);
}
unsigned long
@@ -186,7 +196,7 @@ get_unmapped_area:
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
- info.low_limit = get_mmap_base(1);
+ info.low_limit = PAGE_SIZE;
info.high_limit = get_mmap_base(0);
/*
diff -rupN linux-hardened/arch/x86/mm/init_32.c linux-5.17.1/arch/x86/mm/init_32.c
--- linux-hardened/arch/x86/mm/init_32.c 2022-04-05 20:57:03.942912452 +0900
+++ linux-5.17.1/arch/x86/mm/init_32.c 2022-03-28 17:03:22.000000000 +0900
@@ -529,9 +529,9 @@ static void __init pagetable_init(void)
#define DEFAULT_PTE_MASK ~(_PAGE_NX | _PAGE_GLOBAL)
/* Bits supported by the hardware: */
-pteval_t __supported_pte_mask __ro_after_init = DEFAULT_PTE_MASK;
+pteval_t __supported_pte_mask __read_mostly = DEFAULT_PTE_MASK;
/* Bits allowed in normal kernel mappings: */
-pteval_t __default_kernel_pte_mask __ro_after_init = DEFAULT_PTE_MASK;
+pteval_t __default_kernel_pte_mask __read_mostly = DEFAULT_PTE_MASK;
EXPORT_SYMBOL_GPL(__supported_pte_mask);
/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */
EXPORT_SYMBOL(__default_kernel_pte_mask);
diff -rupN linux-hardened/arch/x86/mm/init_64.c linux-5.17.1/arch/x86/mm/init_64.c
--- linux-hardened/arch/x86/mm/init_64.c 2022-04-05 20:57:03.943912464 +0900
+++ linux-5.17.1/arch/x86/mm/init_64.c 2022-03-28 17:03:22.000000000 +0900
@@ -98,9 +98,9 @@ DEFINE_ENTRY(pte, pte, init)
*/
/* Bits supported by the hardware: */
-pteval_t __supported_pte_mask __ro_after_init = ~0;
+pteval_t __supported_pte_mask __read_mostly = ~0;
/* Bits allowed in normal kernel mappings: */
-pteval_t __default_kernel_pte_mask __ro_after_init = ~0;
+pteval_t __default_kernel_pte_mask __read_mostly = ~0;
EXPORT_SYMBOL_GPL(__supported_pte_mask);
/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */
EXPORT_SYMBOL(__default_kernel_pte_mask);
diff -rupN linux-hardened/arch/x86/mm/tlb.c linux-5.17.1/arch/x86/mm/tlb.c
--- linux-hardened/arch/x86/mm/tlb.c 2022-04-05 20:57:03.955912615 +0900
+++ linux-5.17.1/arch/x86/mm/tlb.c 2022-03-28 17:03:22.000000000 +0900
@@ -1148,7 +1148,7 @@ void flush_tlb_one_user(unsigned long ad
*/
STATIC_NOPV void native_flush_tlb_global(void)
{
- unsigned long cr4, flags;
+ unsigned long flags;
if (static_cpu_has(X86_FEATURE_INVPCID)) {
/*
@@ -1168,9 +1168,7 @@ STATIC_NOPV void native_flush_tlb_global
*/
raw_local_irq_save(flags);
- cr4 = this_cpu_read(cpu_tlbstate.cr4);
- BUG_ON(cr4 != __read_cr4());
- __native_tlb_flush_global(cr4);
+ __native_tlb_flush_global(this_cpu_read(cpu_tlbstate.cr4));
raw_local_irq_restore(flags);
}
diff -rupN linux-hardened/block/blk-mq.c linux-5.17.1/block/blk-mq.c
--- linux-hardened/block/blk-mq.c 2022-04-05 20:57:04.049913797 +0900
+++ linux-5.17.1/block/blk-mq.c 2022-03-28 17:03:22.000000000 +0900
@@ -1003,7 +1003,7 @@ static void blk_complete_reqs(struct lli
rq->q->mq_ops->complete(rq);
}
-static __latent_entropy void blk_done_softirq(void)
+static __latent_entropy void blk_done_softirq(struct softirq_action *h)
{
blk_complete_reqs(this_cpu_ptr(&blk_cpu_done));
}
diff -rupN linux-hardened/drivers/acpi/battery.c linux-5.17.1/drivers/acpi/battery.c
--- linux-hardened/drivers/acpi/battery.c 2022-04-05 20:57:04.169915305 +0900
+++ linux-5.17.1/drivers/acpi/battery.c 2022-03-28 17:03:22.000000000 +0900
@@ -59,6 +59,10 @@ MODULE_PARM_DESC(cache_time, "cache time
static const struct acpi_device_id battery_device_ids[] = {
{"PNP0C0A", 0},
+
+ /* Microsoft Surface Go 3 */
+ {"MSHW0146", 0},
+
{"", 0},
};
@@ -1148,6 +1152,14 @@ static const struct dmi_system_id bat_dm
DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"),
},
},
+ {
+ /* Microsoft Surface Go 3 */
+ .callback = battery_notification_delay_quirk,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
+ },
+ },
{},
};
diff -rupN linux-hardened/drivers/acpi/video_detect.c linux-5.17.1/drivers/acpi/video_detect.c
--- linux-hardened/drivers/acpi/video_detect.c 2022-04-05 20:57:04.207915783 +0900
+++ linux-5.17.1/drivers/acpi/video_detect.c 2022-03-28 17:03:22.000000000 +0900
@@ -415,6 +415,81 @@ static const struct dmi_system_id video_
DMI_MATCH(DMI_PRODUCT_NAME, "GA503"),
},
},
+ /*
+ * Clevo NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2 have both a
+ * working native and video interface. However the default detection
+ * mechanism first registers the video interface before unregistering
+ * it again and switching to the native interface during boot. This
+ * results in a dangling SBIOS request for backlight change for some
+ * reason, causing the backlight to switch to ~2% once per boot on the
+ * first power cord connect or disconnect event. Setting the native
+ * interface explicitly circumvents this buggy behaviour, by avoiding
+ * the unregistering process.
+ */
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+ DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xRU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+ DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xNU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xNU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
+ },
+ },
+ {
+ .callback = video_detect_force_native,
+ .ident = "Clevo NL5xNU",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
+ },
+ },
/*
* Desktops which falsely report a backlight and which our heuristics
diff -rupN linux-hardened/drivers/ata/libata-core.c linux-5.17.1/drivers/ata/libata-core.c
--- linux-hardened/drivers/ata/libata-core.c 2022-04-05 20:57:04.231916085 +0900
+++ linux-5.17.1/drivers/ata/libata-core.c 2022-03-28 17:03:22.000000000 +0900
@@ -4614,7 +4614,7 @@ void ata_qc_free(struct ata_queued_cmd *
struct ata_port *ap;
unsigned int tag;
- BUG_ON(qc == NULL); /* ata_qc_from_tag _might_ return NULL */
+ WARN_ON_ONCE(qc == NULL); /* ata_qc_from_tag _might_ return NULL */
ap = qc->ap;
qc->flags = 0;
@@ -4631,7 +4631,7 @@ void __ata_qc_complete(struct ata_queued
struct ata_port *ap;
struct ata_link *link;
- BUG_ON(qc == NULL); /* ata_qc_from_tag _might_ return NULL */
+ WARN_ON_ONCE(qc == NULL); /* ata_qc_from_tag _might_ return NULL */
WARN_ON_ONCE(!(qc->flags & ATA_QCFLAG_ACTIVE));
ap = qc->ap;
link = qc->dev->link;
diff -rupN linux-hardened/drivers/bluetooth/btusb.c linux-5.17.1/drivers/bluetooth/btusb.c
--- linux-hardened/drivers/bluetooth/btusb.c 2022-04-05 20:57:04.413918372 +0900
+++ linux-5.17.1/drivers/bluetooth/btusb.c 2022-03-28 17:03:22.000000000 +0900
@@ -405,6 +405,8 @@ static const struct usb_device_id blackl
BTUSB_WIDEBAND_SPEECH },
/* Realtek 8852AE Bluetooth devices */
+ { USB_DEVICE(0x0bda, 0x2852), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x0bda, 0xc852), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x0bda, 0x385a), .driver_info = BTUSB_REALTEK |
@@ -482,6 +484,8 @@ static const struct usb_device_id blackl
/* Additional Realtek 8761BU Bluetooth devices */
{ USB_DEVICE(0x0b05, 0x190e), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x2550, 0x8761), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
/* Additional Realtek 8821AE Bluetooth devices */
{ USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK },
@@ -2041,6 +2045,8 @@ static int btusb_setup_csr(struct hci_de
*/
set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
+ set_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks);
+ set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
/* Clear the reset quirk since this is not an actual
* early Bluetooth 1.1 device from CSR.
@@ -2051,7 +2057,7 @@ static int btusb_setup_csr(struct hci_de
/*
* Special workaround for these BT 4.0 chip clones, and potentially more:
*
- * - 0x0134: a Barrot 8041a02 (HCI rev: 0x1012 sub: 0x0810)
+ * - 0x0134: a Barrot 8041a02 (HCI rev: 0x0810 sub: 0x1012)
* - 0x7558: IC markings FR3191AHAL 749H15143 (HCI rev/sub-version: 0x0709)
*
* These controllers are really messed-up.
@@ -2080,7 +2086,7 @@ static int btusb_setup_csr(struct hci_de
if (ret >= 0)
msleep(200);
else
- bt_dev_err(hdev, "CSR: Failed to suspend the device for our Barrot 8041a02 receive-issue workaround");
+ bt_dev_warn(hdev, "CSR: Couldn't suspend the device for our Barrot 8041a02 receive-issue workaround");
pm_runtime_forbid(&data->udev->dev);
diff -rupN linux-hardened/drivers/char/Kconfig linux-5.17.1/drivers/char/Kconfig
--- linux-hardened/drivers/char/Kconfig 2022-04-05 20:57:04.444918762 +0900
+++ linux-5.17.1/drivers/char/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -314,6 +314,7 @@ config NSC_GPIO
config DEVMEM
bool "/dev/mem virtual device support"
+ default y
help
Say Y here if you want to support the /dev/mem device.
The /dev/mem device is used to access areas of physical
@@ -346,6 +347,7 @@ config NVRAM
config DEVPORT
bool "/dev/port character device"
depends on ISA || PCI
+ default y
help
Say Y here if you want to support the /dev/port device. The /dev/port
device is similar to /dev/mem, but for I/O ports.
diff -rupN linux-hardened/drivers/char/tpm/tpm-chip.c linux-5.17.1/drivers/char/tpm/tpm-chip.c
--- linux-hardened/drivers/char/tpm/tpm-chip.c 2022-04-05 20:57:04.492919365 +0900
+++ linux-5.17.1/drivers/char/tpm/tpm-chip.c 2022-03-28 17:03:22.000000000 +0900
@@ -274,14 +274,6 @@ static void tpm_dev_release(struct devic
kfree(chip);
}
-static void tpm_devs_release(struct device *dev)
-{
- struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
-
- /* release the master device reference */
- put_device(&chip->dev);
-}
-
/**
* tpm_class_shutdown() - prepare the TPM device for loss of power.
* @dev: device to which the chip is associated.
@@ -344,7 +336,6 @@ struct tpm_chip *tpm_chip_alloc(struct d
chip->dev_num = rc;
device_initialize(&chip->dev);
- device_initialize(&chip->devs);
chip->dev.class = tpm_class;
chip->dev.class->shutdown_pre = tpm_class_shutdown;
@@ -352,39 +343,20 @@ struct tpm_chip *tpm_chip_alloc(struct d
chip->dev.parent = pdev;
chip->dev.groups = chip->groups;
- chip->devs.parent = pdev;
- chip->devs.class = tpmrm_class;
- chip->devs.release = tpm_devs_release;
- /* get extra reference on main device to hold on
- * behalf of devs. This holds the chip structure
- * while cdevs is in use. The corresponding put
- * is in the tpm_devs_release (TPM2 only)
- */
- if (chip->flags & TPM_CHIP_FLAG_TPM2)
- get_device(&chip->dev);
-
if (chip->dev_num == 0)
chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR);
else
chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num);
- chip->devs.devt =
- MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
-
rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num);
if (rc)
goto out;
- rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
- if (rc)
- goto out;
if (!pdev)
chip->flags |= TPM_CHIP_FLAG_VIRTUAL;
cdev_init(&chip->cdev, &tpm_fops);
- cdev_init(&chip->cdevs, &tpmrm_fops);
chip->cdev.owner = THIS_MODULE;
- chip->cdevs.owner = THIS_MODULE;
rc = tpm2_init_space(&chip->work_space, TPM2_SPACE_BUFFER_SIZE);
if (rc) {
@@ -396,7 +368,6 @@ struct tpm_chip *tpm_chip_alloc(struct d
return chip;
out:
- put_device(&chip->devs);
put_device(&chip->dev);
return ERR_PTR(rc);
}
@@ -445,14 +416,9 @@ static int tpm_add_char_device(struct tp
}
if (chip->flags & TPM_CHIP_FLAG_TPM2 && !tpm_is_firmware_upgrade(chip)) {
- rc = cdev_device_add(&chip->cdevs, &chip->devs);
- if (rc) {
- dev_err(&chip->devs,
- "unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
- dev_name(&chip->devs), MAJOR(chip->devs.devt),
- MINOR(chip->devs.devt), rc);
- return rc;
- }
+ rc = tpm_devs_add(chip);
+ if (rc)
+ goto err_del_cdev;
}
/* Make the chip available. */
@@ -460,6 +426,10 @@ static int tpm_add_char_device(struct tp
idr_replace(&dev_nums_idr, chip, chip->dev_num);
mutex_unlock(&idr_lock);
+ return 0;
+
+err_del_cdev:
+ cdev_device_del(&chip->cdev, &chip->dev);
return rc;
}
@@ -654,7 +624,7 @@ void tpm_chip_unregister(struct tpm_chip
hwrng_unregister(&chip->hwrng);
tpm_bios_log_teardown(chip);
if (chip->flags & TPM_CHIP_FLAG_TPM2 && !tpm_is_firmware_upgrade(chip))
- cdev_device_del(&chip->cdevs, &chip->devs);
+ tpm_devs_remove(chip);
tpm_del_char_device(chip);
}
EXPORT_SYMBOL_GPL(tpm_chip_unregister);
diff -rupN linux-hardened/drivers/char/tpm/tpm-dev-common.c linux-5.17.1/drivers/char/tpm/tpm-dev-common.c
--- linux-hardened/drivers/char/tpm/tpm-dev-common.c 2022-04-05 20:55:47.407950391 +0900
+++ linux-5.17.1/drivers/char/tpm/tpm-dev-common.c 2022-03-28 17:03:22.000000000 +0900
@@ -69,7 +69,13 @@ static void tpm_dev_async_work(struct wo
ret = tpm_dev_transmit(priv->chip, priv->space, priv->data_buffer,
sizeof(priv->data_buffer));
tpm_put_ops(priv->chip);
- if (ret > 0) {
+
+ /*
+ * If ret is > 0 then tpm_dev_transmit returned the size of the
+ * response. If ret is < 0 then tpm_dev_transmit failed and
+ * returned an error code.
+ */
+ if (ret != 0) {
priv->response_length = ret;
mod_timer(&priv->user_read_timer, jiffies + (120 * HZ));
}
diff -rupN linux-hardened/drivers/char/tpm/tpm.h linux-5.17.1/drivers/char/tpm/tpm.h
--- linux-hardened/drivers/char/tpm/tpm.h 2022-04-05 20:55:47.410950429 +0900
+++ linux-5.17.1/drivers/char/tpm/tpm.h 2022-03-28 17:03:22.000000000 +0900
@@ -234,6 +234,8 @@ int tpm2_prepare_space(struct tpm_chip *
size_t cmdsiz);
int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf,
size_t *bufsiz);
+int tpm_devs_add(struct tpm_chip *chip);
+void tpm_devs_remove(struct tpm_chip *chip);
void tpm_bios_log_setup(struct tpm_chip *chip);
void tpm_bios_log_teardown(struct tpm_chip *chip);
diff -rupN linux-hardened/drivers/char/tpm/tpm2-space.c linux-5.17.1/drivers/char/tpm/tpm2-space.c
--- linux-hardened/drivers/char/tpm/tpm2-space.c 2022-04-05 20:57:04.494919391 +0900
+++ linux-5.17.1/drivers/char/tpm/tpm2-space.c 2022-03-28 17:03:22.000000000 +0900
@@ -58,12 +58,12 @@ int tpm2_init_space(struct tpm_space *sp
void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space)
{
- mutex_lock(&chip->tpm_mutex);
- if (!tpm_chip_start(chip)) {
+
+ if (tpm_try_get_ops(chip) == 0) {
tpm2_flush_sessions(chip, space);
- tpm_chip_stop(chip);
+ tpm_put_ops(chip);
}
- mutex_unlock(&chip->tpm_mutex);
+
kfree(space->context_buf);
kfree(space->session_buf);
}
@@ -574,3 +574,68 @@ out:
dev_err(&chip->dev, "%s: error %d\n", __func__, rc);
return rc;
}
+
+/*
+ * Put the reference to the main device.
+ */
+static void tpm_devs_release(struct device *dev)
+{
+ struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
+
+ /* release the master device reference */
+ put_device(&chip->dev);
+}
+
+/*
+ * Remove the device file for exposed TPM spaces and release the device
+ * reference. This may also release the reference to the master device.
+ */
+void tpm_devs_remove(struct tpm_chip *chip)
+{
+ cdev_device_del(&chip->cdevs, &chip->devs);
+ put_device(&chip->devs);
+}
+
+/*
+ * Add a device file to expose TPM spaces. Also take a reference to the
+ * main device.
+ */
+int tpm_devs_add(struct tpm_chip *chip)
+{
+ int rc;
+
+ device_initialize(&chip->devs);
+ chip->devs.parent = chip->dev.parent;
+ chip->devs.class = tpmrm_class;
+
+ /*
+ * Get extra reference on main device to hold on behalf of devs.
+ * This holds the chip structure while cdevs is in use. The
+	 * corresponding put is in tpm_devs_release().
+ */
+ get_device(&chip->dev);
+ chip->devs.release = tpm_devs_release;
+ chip->devs.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
+ cdev_init(&chip->cdevs, &tpmrm_fops);
+ chip->cdevs.owner = THIS_MODULE;
+
+ rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
+ if (rc)
+ goto err_put_devs;
+
+ rc = cdev_device_add(&chip->cdevs, &chip->devs);
+ if (rc) {
+ dev_err(&chip->devs,
+ "unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
+ dev_name(&chip->devs), MAJOR(chip->devs.devt),
+ MINOR(chip->devs.devt), rc);
+ goto err_put_devs;
+ }
+
+ return 0;
+
+err_put_devs:
+ put_device(&chip->devs);
+
+ return rc;
+}
diff -rupN linux-hardened/drivers/crypto/qat/qat_4xxx/adf_drv.c linux-5.17.1/drivers/crypto/qat/qat_4xxx/adf_drv.c
--- linux-hardened/drivers/crypto/qat/qat_4xxx/adf_drv.c 2022-04-05 20:57:04.981925512 +0900
+++ linux-5.17.1/drivers/crypto/qat/qat_4xxx/adf_drv.c 2022-03-28 17:03:22.000000000 +0900
@@ -75,6 +75,13 @@ static int adf_crypto_dev_config(struct
if (ret)
goto err;
+ /* Temporarily set the number of crypto instances to zero to avoid
+ * registering the crypto algorithms.
+	 * This will be removed once the algorithms support the
+	 * CRYPTO_TFM_REQ_MAY_BACKLOG flag.
+ */
+ instances = 0;
+
for (i = 0; i < instances; i++) {
val = i;
bank = i * 2;
diff -rupN linux-hardened/drivers/crypto/qat/qat_common/qat_crypto.c linux-5.17.1/drivers/crypto/qat/qat_common/qat_crypto.c
--- linux-hardened/drivers/crypto/qat/qat_common/qat_crypto.c 2022-04-05 20:57:04.997925713 +0900
+++ linux-5.17.1/drivers/crypto/qat/qat_common/qat_crypto.c 2022-03-28 17:03:22.000000000 +0900
@@ -161,6 +161,13 @@ int qat_crypto_dev_config(struct adf_acc
if (ret)
goto err;
+ /* Temporarily set the number of crypto instances to zero to avoid
+ * registering the crypto algorithms.
+	 * This will be removed once the algorithms support the
+	 * CRYPTO_TFM_REQ_MAY_BACKLOG flag.
+ */
+ instances = 0;
+
for (i = 0; i < instances; i++) {
val = i;
snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_BANK_NUM, i);
diff -rupN linux-hardened/drivers/gpu/drm/msm/msm_gpu_devfreq.c linux-5.17.1/drivers/gpu/drm/msm/msm_gpu_devfreq.c
--- linux-hardened/drivers/gpu/drm/msm/msm_gpu_devfreq.c 2022-04-05 20:57:07.158952878 +0900
+++ linux-5.17.1/drivers/gpu/drm/msm/msm_gpu_devfreq.c 2022-03-28 17:03:22.000000000 +0900
@@ -83,6 +83,12 @@ static struct devfreq_dev_profile msm_de
static void msm_devfreq_boost_work(struct kthread_work *work);
static void msm_devfreq_idle_work(struct kthread_work *work);
+static bool has_devfreq(struct msm_gpu *gpu)
+{
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+ return !!df->devfreq;
+}
+
void msm_devfreq_init(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
@@ -149,6 +155,9 @@ void msm_devfreq_cleanup(struct msm_gpu
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
+ if (!has_devfreq(gpu))
+ return;
+
devfreq_cooling_unregister(gpu->cooling);
dev_pm_qos_remove_request(&df->boost_freq);
dev_pm_qos_remove_request(&df->idle_freq);
@@ -156,16 +165,24 @@ void msm_devfreq_cleanup(struct msm_gpu
void msm_devfreq_resume(struct msm_gpu *gpu)
{
- gpu->devfreq.busy_cycles = 0;
- gpu->devfreq.time = ktime_get();
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
- devfreq_resume_device(gpu->devfreq.devfreq);
+ if (!has_devfreq(gpu))
+ return;
+
+ df->busy_cycles = 0;
+ df->time = ktime_get();
+
+ devfreq_resume_device(df->devfreq);
}
void msm_devfreq_suspend(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
+ if (!has_devfreq(gpu))
+ return;
+
devfreq_suspend_device(df->devfreq);
cancel_idle_work(df);
@@ -185,6 +202,9 @@ void msm_devfreq_boost(struct msm_gpu *g
struct msm_gpu_devfreq *df = &gpu->devfreq;
uint64_t freq;
+ if (!has_devfreq(gpu))
+ return;
+
freq = get_freq(gpu);
freq *= factor;
@@ -207,7 +227,7 @@ void msm_devfreq_active(struct msm_gpu *
struct devfreq_dev_status status;
unsigned int idle_time;
- if (!df->devfreq)
+ if (!has_devfreq(gpu))
return;
/*
@@ -253,7 +273,7 @@ void msm_devfreq_idle(struct msm_gpu *gp
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
- if (!df->devfreq)
+ if (!has_devfreq(gpu))
return;
msm_hrtimer_queue_work(&df->idle_work, ms_to_ktime(1),
diff -rupN linux-hardened/drivers/gpu/drm/virtio/virtgpu_gem.c linux-5.17.1/drivers/gpu/drm/virtio/virtgpu_gem.c
--- linux-hardened/drivers/gpu/drm/virtio/virtgpu_gem.c 2022-04-05 20:57:07.455956611 +0900
+++ linux-5.17.1/drivers/gpu/drm/virtio/virtgpu_gem.c 2022-03-28 17:03:22.000000000 +0900
@@ -248,6 +248,9 @@ void virtio_gpu_array_put_free(struct vi
{
u32 i;
+ if (!objs)
+ return;
+
for (i = 0; i < objs->nents; i++)
drm_gem_object_put(objs->objs[i]);
virtio_gpu_array_free(objs);
diff -rupN linux-hardened/drivers/net/ethernet/apm/xgene/xgene_enet_main.c linux-5.17.1/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
--- linux-hardened/drivers/net/ethernet/apm/xgene/xgene_enet_main.c 2022-04-05 20:57:09.774985762 +0900
+++ linux-5.17.1/drivers/net/ethernet/apm/xgene/xgene_enet_main.c 2022-03-28 17:03:22.000000000 +0900
@@ -696,6 +696,12 @@ static int xgene_enet_rx_frame(struct xg
buf_pool->rx_skb[skb_index] = NULL;
datalen = xgene_enet_get_data_len(le64_to_cpu(raw_desc->m1));
+
+ /* strip off CRC as HW isn't doing this */
+ nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
+ if (!nv)
+ datalen -= 4;
+
skb_put(skb, datalen);
prefetch(skb->data - NET_IP_ALIGN);
skb->protocol = eth_type_trans(skb, ndev);
@@ -717,12 +723,8 @@ static int xgene_enet_rx_frame(struct xg
}
}
- nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
- if (!nv) {
- /* strip off CRC as HW isn't doing this */
- datalen -= 4;
+ if (!nv)
goto skip_jumbo;
- }
slots = page_pool->slots - 1;
head = page_pool->head;
diff -rupN linux-hardened/drivers/net/wireless/ath/regd.c linux-5.17.1/drivers/net/wireless/ath/regd.c
--- linux-hardened/drivers/net/wireless/ath/regd.c 2022-04-05 20:55:58.877094561 +0900
+++ linux-5.17.1/drivers/net/wireless/ath/regd.c 2022-03-28 17:03:22.000000000 +0900
@@ -667,14 +667,14 @@ ath_regd_init_wiphy(struct ath_regulator
/*
* Some users have reported their EEPROM programmed with
- * 0x8000 or 0x0 set, this is not a supported regulatory
- * domain but since we have more than one user with it we
- * need a solution for them. We default to 0x64, which is
- * the default Atheros world regulatory domain.
+ * 0x8000 set, this is not a supported regulatory domain
+ * but since we have more than one user with it we need
+ * a solution for them. We default to 0x64, which is the
+ * default Atheros world regulatory domain.
*/
static void ath_regd_sanitize(struct ath_regulatory *reg)
{
- if (reg->current_rd != COUNTRY_ERD_FLAG && reg->current_rd != 0)
+ if (reg->current_rd != COUNTRY_ERD_FLAG)
return;
printk(KERN_DEBUG "ath: EEPROM regdomain sanitized\n");
reg->current_rd = 0x64;
diff -rupN linux-hardened/drivers/net/wireless/ath/wcn36xx/main.c linux-5.17.1/drivers/net/wireless/ath/wcn36xx/main.c
--- linux-hardened/drivers/net/wireless/ath/wcn36xx/main.c 2022-04-05 20:57:11.278004655 +0900
+++ linux-5.17.1/drivers/net/wireless/ath/wcn36xx/main.c 2022-03-28 17:03:22.000000000 +0900
@@ -1513,6 +1513,9 @@ static int wcn36xx_platform_get_resource
if (iris_node) {
if (of_device_is_compatible(iris_node, "qcom,wcn3620"))
wcn->rf_id = RF_IRIS_WCN3620;
+ if (of_device_is_compatible(iris_node, "qcom,wcn3660") ||
+ of_device_is_compatible(iris_node, "qcom,wcn3660b"))
+ wcn->rf_id = RF_IRIS_WCN3660;
if (of_device_is_compatible(iris_node, "qcom,wcn3680"))
wcn->rf_id = RF_IRIS_WCN3680;
of_node_put(iris_node);
diff -rupN linux-hardened/drivers/net/wireless/ath/wcn36xx/wcn36xx.h linux-5.17.1/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
--- linux-hardened/drivers/net/wireless/ath/wcn36xx/wcn36xx.h 2022-04-05 20:57:11.281004693 +0900
+++ linux-5.17.1/drivers/net/wireless/ath/wcn36xx/wcn36xx.h 2022-03-28 17:03:22.000000000 +0900
@@ -97,6 +97,7 @@ enum wcn36xx_ampdu_state {
#define RF_UNKNOWN 0x0000
#define RF_IRIS_WCN3620 0x3620
+#define RF_IRIS_WCN3660 0x3660
#define RF_IRIS_WCN3680 0x3680
static inline void buff_to_be(u32 *buf, size_t len)
diff -rupN linux-hardened/drivers/tty/Kconfig linux-5.17.1/drivers/tty/Kconfig
--- linux-hardened/drivers/tty/Kconfig 2022-04-05 20:57:13.894037539 +0900
+++ linux-5.17.1/drivers/tty/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -121,6 +121,7 @@ config UNIX98_PTYS
config LEGACY_PTYS
bool "Legacy (BSD) PTY support"
+ default y
help
A pseudo terminal (PTY) is a software device consisting of two
halves: a master and a slave. The slave device behaves identical to
diff -rupN linux-hardened/drivers/tty/tty_io.c linux-5.17.1/drivers/tty/tty_io.c
--- linux-hardened/drivers/tty/tty_io.c 2022-04-05 20:57:13.994038796 +0900
+++ linux-5.17.1/drivers/tty/tty_io.c 2022-03-28 17:03:22.000000000 +0900
@@ -171,7 +171,6 @@ static void free_tty_struct(struct tty_s
put_device(tty->dev);
kvfree(tty->write_buf);
tty->magic = 0xDEADDEAD;
- put_user_ns(tty->owner_user_ns);
kfree(tty);
}
@@ -2263,8 +2262,6 @@ static int tty_fasync(int fd, struct fil
return retval;
}
-int tiocsti_restrict __read_mostly = IS_ENABLED(CONFIG_SECURITY_TIOCSTI_RESTRICT);
-
/**
* tiocsti - fake input character
* @tty: tty to fake input into
@@ -2283,12 +2280,6 @@ static int tiocsti(struct tty_struct *tt
char ch, mbz = 0;
struct tty_ldisc *ld;
- if (tiocsti_restrict &&
- !ns_capable(tty->owner_user_ns, CAP_SYS_ADMIN)) {
- dev_warn_ratelimited(tty->dev,
- "Denied TIOCSTI ioctl for non-privileged process\n");
- return -EPERM;
- }
if ((current->signal->tty != tty) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (get_user(ch, p))
@@ -3129,7 +3120,6 @@ struct tty_struct *alloc_tty_struct(stru
tty->index = idx;
tty_line_name(driver, idx, tty->name);
tty->dev = tty_get_device(tty);
- tty->owner_user_ns = get_user_ns(current_user_ns());
return tty;
}
diff -rupN linux-hardened/drivers/tty/tty_ldisc.c linux-5.17.1/drivers/tty/tty_ldisc.c
--- linux-hardened/drivers/tty/tty_ldisc.c 2022-04-05 20:57:13.996038821 +0900
+++ linux-5.17.1/drivers/tty/tty_ldisc.c 2022-03-28 17:03:22.000000000 +0900
@@ -828,15 +828,6 @@ static struct ctl_table tty_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
- {
- .procname = "tiocsti_restrict",
- .data = &tiocsti_restrict,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_minmax_sysadmin,
- .extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
- },
{ }
};
diff -rupN linux-hardened/drivers/usb/core/Makefile linux-5.17.1/drivers/usb/core/Makefile
--- linux-hardened/drivers/usb/core/Makefile 2022-04-05 20:56:03.423151706 +0900
+++ linux-5.17.1/drivers/usb/core/Makefile 2022-03-28 17:03:22.000000000 +0900
@@ -11,7 +11,6 @@ usbcore-y += phy.o port.o
usbcore-$(CONFIG_OF) += of.o
usbcore-$(CONFIG_USB_PCI) += hcd-pci.o
usbcore-$(CONFIG_ACPI) += usb-acpi.o
-usbcore-$(CONFIG_SYSCTL) += sysctl.o
obj-$(CONFIG_USB) += usbcore.o
diff -rupN linux-hardened/drivers/usb/core/hub.c linux-5.17.1/drivers/usb/core/hub.c
--- linux-hardened/drivers/usb/core/hub.c 2022-04-05 20:57:14.045039437 +0900
+++ linux-5.17.1/drivers/usb/core/hub.c 2022-03-28 17:03:22.000000000 +0900
@@ -5243,12 +5243,6 @@ static void hub_port_connect(struct usb_
goto done;
return;
}
-
- if (deny_new_usb) {
- dev_err(&port_dev->dev, "denied insert of USB device on port %d\n", port1);
- goto done;
- }
-
if (hub_is_superspeed(hub->hdev))
unit_load = 150;
else
diff -rupN linux-hardened/drivers/usb/core/sysctl.c linux-5.17.1/drivers/usb/core/sysctl.c
--- linux-hardened/drivers/usb/core/sysctl.c 2022-04-05 20:57:14.049039488 +0900
+++ linux-5.17.1/drivers/usb/core/sysctl.c 1970-01-01 09:00:00.000000000 +0900
@@ -1,43 +0,0 @@
-#include <linux/errno.h>
-#include <linux/printk.h>
-#include <linux/init.h>
-#include <linux/sysctl.h>
-#include <linux/usb.h>
-
-static struct ctl_table usb_table[] = {
- {
- .procname = "deny_new_usb",
- .data = &deny_new_usb,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_minmax_sysadmin,
- .extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
- },
- { }
-};
-
-static struct ctl_table usb_root_table[] = {
- { .procname = "kernel",
- .mode = 0555,
- .child = usb_table },
- { }
-};
-
-static struct ctl_table_header *usb_table_header;
-
-int __init usb_init_sysctl(void)
-{
- usb_table_header = register_sysctl_table(usb_root_table);
- if (!usb_table_header) {
- pr_warn("usb: sysctl registration failed\n");
- return -ENOMEM;
- }
-
- return 0;
-}
-
-void usb_exit_sysctl(void)
-{
- unregister_sysctl_table(usb_table_header);
-}
diff -rupN linux-hardened/drivers/usb/core/usb.c linux-5.17.1/drivers/usb/core/usb.c
--- linux-hardened/drivers/usb/core/usb.c 2022-04-05 20:57:14.051039513 +0900
+++ linux-5.17.1/drivers/usb/core/usb.c 2022-03-28 17:03:22.000000000 +0900
@@ -71,9 +71,6 @@ MODULE_PARM_DESC(autosuspend, "default a
#define usb_autosuspend_delay 0
#endif
-int deny_new_usb __read_mostly = 0;
-EXPORT_SYMBOL(deny_new_usb);
-
static bool match_endpoint(struct usb_endpoint_descriptor *epd,
struct usb_endpoint_descriptor **bulk_in,
struct usb_endpoint_descriptor **bulk_out,
@@ -1011,9 +1008,6 @@ static int __init usb_init(void)
usb_debugfs_init();
usb_acpi_register();
- retval = usb_init_sysctl();
- if (retval)
- goto sysctl_init_failed;
retval = bus_register(&usb_bus_type);
if (retval)
goto bus_register_failed;
@@ -1048,8 +1042,6 @@ major_init_failed:
bus_notifier_failed:
bus_unregister(&usb_bus_type);
bus_register_failed:
- usb_exit_sysctl();
-sysctl_init_failed:
usb_acpi_unregister();
usb_debugfs_cleanup();
out:
@@ -1073,7 +1065,6 @@ static void __exit usb_exit(void)
usb_hub_cleanup();
bus_unregister_notifier(&usb_bus_type, &usb_bus_nb);
bus_unregister(&usb_bus_type);
- usb_exit_sysctl();
usb_acpi_unregister();
usb_debugfs_cleanup();
idr_destroy(&usb_bus_idr);
diff -rupN linux-hardened/fs/exec.c linux-5.17.1/fs/exec.c
--- linux-hardened/fs/exec.c 2022-04-05 20:57:14.758048400 +0900
+++ linux-5.17.1/fs/exec.c 2022-03-28 17:03:22.000000000 +0900
@@ -66,7 +66,6 @@
#include <linux/io_uring.h>
#include <linux/syscall_user_dispatch.h>
#include <linux/coredump.h>
-#include <linux/random.h>
#include <linux/uaccess.h>
#include <asm/mmu_context.h>
@@ -282,8 +281,6 @@ static int __bprm_mm_init(struct linux_b
mm->stack_vm = mm->total_vm = 1;
mmap_write_unlock(mm);
bprm->p = vma->vm_end - sizeof(void *);
- if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
- bprm->p ^= get_random_int() & ~PAGE_MASK;
return 0;
err:
mmap_write_unlock(mm);
diff -rupN linux-hardened/fs/inode.c linux-5.17.1/fs/inode.c
--- linux-hardened/fs/inode.c 2022-04-05 20:57:14.902050210 +0900
+++ linux-5.17.1/fs/inode.c 2022-03-28 17:03:22.000000000 +0900
@@ -97,10 +97,6 @@ long get_nr_dirty_inodes(void)
return nr_dirty > 0 ? nr_dirty : 0;
}
-/* sysctl */
-int device_sidechannel_restrict __read_mostly = 1;
-EXPORT_SYMBOL(device_sidechannel_restrict);
-
/*
* Handle nr_inode sysctl
*/
@@ -133,15 +129,6 @@ static struct ctl_table inodes_sysctls[]
.mode = 0444,
.proc_handler = proc_nr_inodes,
},
- {
- .procname = "device_sidechannel_restrict",
- .data = &device_sidechannel_restrict,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_minmax_sysadmin,
- .extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
- },
{ }
};
diff -rupN linux-hardened/fs/jbd2/transaction.c linux-5.17.1/fs/jbd2/transaction.c
--- linux-hardened/fs/jbd2/transaction.c 2022-04-05 20:57:14.922050462 +0900
+++ linux-5.17.1/fs/jbd2/transaction.c 2022-03-28 17:03:22.000000000 +0900
@@ -842,27 +842,38 @@ EXPORT_SYMBOL(jbd2_journal_restart);
*/
void jbd2_journal_wait_updates(journal_t *journal)
{
- transaction_t *commit_transaction = journal->j_running_transaction;
+ DEFINE_WAIT(wait);
- if (!commit_transaction)
- return;
+ while (1) {
+ /*
+ * Note that the running transaction can get freed under us if
+ * this transaction is getting committed in
+ * jbd2_journal_commit_transaction() ->
+ * jbd2_journal_free_transaction(). This can only happen when we
+ * release j_state_lock -> schedule() -> acquire j_state_lock.
+	 * Hence we must re-read j_running_transaction every time (after
+	 * each j_state_lock release/acquire cycle), else we may end up
+	 * using an already-freed transaction.
+ */
+ transaction_t *transaction = journal->j_running_transaction;
- spin_lock(&commit_transaction->t_handle_lock);
- while (atomic_read(&commit_transaction->t_updates)) {
- DEFINE_WAIT(wait);
+ if (!transaction)
+ break;
+ spin_lock(&transaction->t_handle_lock);
prepare_to_wait(&journal->j_wait_updates, &wait,
- TASK_UNINTERRUPTIBLE);
- if (atomic_read(&commit_transaction->t_updates)) {
- spin_unlock(&commit_transaction->t_handle_lock);
- write_unlock(&journal->j_state_lock);
- schedule();
- write_lock(&journal->j_state_lock);
- spin_lock(&commit_transaction->t_handle_lock);
+ TASK_UNINTERRUPTIBLE);
+ if (!atomic_read(&transaction->t_updates)) {
+ spin_unlock(&transaction->t_handle_lock);
+ finish_wait(&journal->j_wait_updates, &wait);
+ break;
}
+ spin_unlock(&transaction->t_handle_lock);
+ write_unlock(&journal->j_state_lock);
+ schedule();
finish_wait(&journal->j_wait_updates, &wait);
+ write_lock(&journal->j_state_lock);
}
- spin_unlock(&commit_transaction->t_handle_lock);
}
/**
@@ -877,8 +888,6 @@ void jbd2_journal_wait_updates(journal_t
*/
void jbd2_journal_lock_updates(journal_t *journal)
{
- DEFINE_WAIT(wait);
-
jbd2_might_wait_for_commit(journal);
write_lock(&journal->j_state_lock);
diff -rupN linux-hardened/fs/namei.c linux-5.17.1/fs/namei.c
--- linux-hardened/fs/namei.c 2022-04-05 20:57:14.976051140 +0900
+++ linux-5.17.1/fs/namei.c 2022-03-28 17:03:22.000000000 +0900
@@ -1020,10 +1020,10 @@ static inline void put_link(struct namei
path_put(&last->link);
}
-static int sysctl_protected_symlinks __read_mostly = 1;
-static int sysctl_protected_hardlinks __read_mostly = 1;
-static int sysctl_protected_fifos __read_mostly = 2;
-static int sysctl_protected_regular __read_mostly = 2;
+static int sysctl_protected_symlinks __read_mostly;
+static int sysctl_protected_hardlinks __read_mostly;
+static int sysctl_protected_fifos __read_mostly;
+static int sysctl_protected_regular __read_mostly;
#ifdef CONFIG_SYSCTL
static struct ctl_table namei_sysctls[] = {
diff -rupN linux-hardened/fs/nfs/Kconfig linux-5.17.1/fs/nfs/Kconfig
--- linux-hardened/fs/nfs/Kconfig 2022-04-05 20:56:05.046172108 +0900
+++ linux-5.17.1/fs/nfs/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -195,6 +195,7 @@ config NFS_DEBUG
bool
depends on NFS_FS && SUNRPC_DEBUG
select CRC32
+ default y
config NFS_DISABLE_UDP_SUPPORT
bool "NFS: Disable NFS UDP protocol support"
diff -rupN linux-hardened/fs/overlayfs/Kconfig linux-5.17.1/fs/overlayfs/Kconfig
--- linux-hardened/fs/overlayfs/Kconfig 2022-04-05 20:56:05.358176030 +0900
+++ linux-5.17.1/fs/overlayfs/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -124,19 +124,3 @@ config OVERLAY_FS_METACOPY
that doesn't support this feature will have unexpected results.
If unsure, say N.
-
-config OVERLAY_FS_UNPRIVILEGED
- bool "Overlayfs: turn on unprivileged user namespace mounts"
- default n
- depends on OVERLAY_FS
- help
- When disabled, unprivileged users will not be able to create
- new overlayfs mounts. This cuts the attack surface if no
- unprivileged user namespace mounts are required like for
- running rootless containers.
-
- Overlayfs has been part of several recent local privilege
- escalation exploits, so if you are security-conscious
- you want to disable this.
-
- If unsure, say N.
diff -rupN linux-hardened/fs/overlayfs/super.c linux-5.17.1/fs/overlayfs/super.c
--- linux-hardened/fs/overlayfs/super.c 2022-04-05 20:57:15.200053956 +0900
+++ linux-5.17.1/fs/overlayfs/super.c 2022-03-28 17:03:22.000000000 +0900
@@ -2165,9 +2165,7 @@ static struct dentry *ovl_mount(struct f
static struct file_system_type ovl_fs_type = {
.owner = THIS_MODULE,
.name = "overlay",
-#ifdef CONFIG_OVERLAY_FS_UNPRIVILEGED
.fs_flags = FS_USERNS_MOUNT,
-#endif
.mount = ovl_mount,
.kill_sb = kill_anon_super,
};
diff -rupN linux-hardened/fs/proc/Kconfig linux-5.17.1/fs/proc/Kconfig
--- linux-hardened/fs/proc/Kconfig 2022-04-05 20:56:05.371176193 +0900
+++ linux-5.17.1/fs/proc/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -41,6 +41,7 @@ config PROC_KCORE
config PROC_VMCORE
bool "/proc/vmcore support"
depends on PROC_FS && CRASH_DUMP
+ default y
help
Exports the dump image of crashed kernel in ELF format.
diff -rupN linux-hardened/fs/stat.c linux-5.17.1/fs/stat.c
--- linux-hardened/fs/stat.c 2022-04-05 20:57:15.247054547 +0900
+++ linux-5.17.1/fs/stat.c 2022-03-28 17:03:22.000000000 +0900
@@ -51,13 +51,8 @@ void generic_fillattr(struct user_namesp
stat->gid = i_gid_into_mnt(mnt_userns, inode);
stat->rdev = inode->i_rdev;
stat->size = i_size_read(inode);
- if (is_sidechannel_device(inode) && !capable_noaudit(CAP_MKNOD)) {
- stat->atime = inode->i_ctime;
- stat->mtime = inode->i_ctime;
- } else {
- stat->atime = inode->i_atime;
- stat->mtime = inode->i_mtime;
- }
+ stat->atime = inode->i_atime;
+ stat->mtime = inode->i_mtime;
stat->ctime = inode->i_ctime;
stat->blksize = i_blocksize(inode);
stat->blocks = inode->i_blocks;
@@ -124,14 +119,9 @@ int vfs_getattr_nosec(const struct path
STATX_ATTR_DAX);
mnt_userns = mnt_user_ns(path->mnt);
- if (inode->i_op->getattr) {
- int retval = inode->i_op->getattr(mnt_userns, path, stat, request_mask, query_flags);
- if (!retval && is_sidechannel_device(inode) && !capable_noaudit(CAP_MKNOD)) {
- stat->atime = stat->ctime;
- stat->mtime = stat->ctime;
- }
- return retval;
- }
+ if (inode->i_op->getattr)
+ return inode->i_op->getattr(mnt_userns, path, stat,
+ request_mask, query_flags);
generic_fillattr(mnt_userns, inode, stat);
return 0;
diff -rupN linux-hardened/include/linux/cache.h linux-5.17.1/include/linux/cache.h
--- linux-hardened/include/linux/cache.h 2022-04-05 20:56:06.030184477 +0900
+++ linux-5.17.1/include/linux/cache.h 2022-03-28 17:03:22.000000000 +0900
@@ -37,8 +37,6 @@
#define __ro_after_init __section(".data..ro_after_init")
#endif
-#define __read_only __ro_after_init
-
#ifndef ____cacheline_aligned
#define ____cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
#endif
diff -rupN linux-hardened/include/linux/capability.h linux-5.17.1/include/linux/capability.h
--- linux-hardened/include/linux/capability.h 2022-04-05 20:56:06.034184527 +0900
+++ linux-5.17.1/include/linux/capability.h 2022-03-28 17:03:22.000000000 +0900
@@ -208,7 +208,6 @@ extern bool has_capability_noaudit(struc
extern bool has_ns_capability_noaudit(struct task_struct *t,
struct user_namespace *ns, int cap);
extern bool capable(int cap);
-extern bool capable_noaudit(int cap);
extern bool ns_capable(struct user_namespace *ns, int cap);
extern bool ns_capable_noaudit(struct user_namespace *ns, int cap);
extern bool ns_capable_setid(struct user_namespace *ns, int cap);
@@ -235,10 +234,6 @@ static inline bool capable(int cap)
{
return true;
}
-static inline bool capable_noaudit(int cap)
-{
- return true;
-}
static inline bool ns_capable(struct user_namespace *ns, int cap)
{
return true;
diff -rupN linux-hardened/include/linux/fs.h linux-5.17.1/include/linux/fs.h
--- linux-hardened/include/linux/fs.h 2022-04-05 20:57:15.626059311 +0900
+++ linux-5.17.1/include/linux/fs.h 2022-03-28 17:03:22.000000000 +0900
@@ -3619,15 +3619,4 @@ static inline int inode_drain_writes(str
return filemap_write_and_wait(inode->i_mapping);
}
-extern int device_sidechannel_restrict;
-
-static inline bool is_sidechannel_device(const struct inode *inode)
-{
- umode_t mode;
- if (!device_sidechannel_restrict)
- return false;
- mode = inode->i_mode;
- return ((S_ISCHR(mode) || S_ISBLK(mode)) && (mode & (S_IROTH | S_IWOTH)));
-}
-
#endif /* _LINUX_FS_H */
diff -rupN linux-hardened/include/linux/fsnotify.h linux-5.17.1/include/linux/fsnotify.h
--- linux-hardened/include/linux/fsnotify.h 2022-04-05 20:57:15.630059361 +0900
+++ linux-5.17.1/include/linux/fsnotify.h 2022-03-28 17:03:22.000000000 +0900
@@ -96,9 +96,6 @@ static inline int fsnotify_file(struct f
if (file->f_mode & FMODE_NONOTIFY)
return 0;
- if (mask & (FS_ACCESS | FS_MODIFY) && is_sidechannel_device(file_inode(file)))
- return 0;
-
return fsnotify_parent(path->dentry, mask, path, FSNOTIFY_EVENT_PATH);
}
diff -rupN linux-hardened/include/linux/highmem.h linux-5.17.1/include/linux/highmem.h
--- linux-hardened/include/linux/highmem.h 2022-04-05 20:57:15.642059512 +0900
+++ linux-5.17.1/include/linux/highmem.h 2022-03-28 17:03:22.000000000 +0900
@@ -226,13 +226,6 @@ static inline void tag_clear_highpage(st
#endif
-static inline void verify_zero_highpage(struct page *page)
-{
- void *kaddr = kmap_atomic(page);
- BUG_ON(memchr_inv(kaddr, 0, PAGE_SIZE));
- kunmap_atomic(kaddr);
-}
-
/*
* If we pass in a base or tail page, we can zero up to PAGE_SIZE.
* If we pass in a head page, we can zero up to the size of the compound page.
diff -rupN linux-hardened/include/linux/interrupt.h linux-5.17.1/include/linux/interrupt.h
--- linux-hardened/include/linux/interrupt.h 2022-04-05 20:57:15.664059789 +0900
+++ linux-5.17.1/include/linux/interrupt.h 2022-03-28 17:03:22.000000000 +0900
@@ -592,13 +592,13 @@ extern const char * const softirq_to_nam
struct softirq_action
{
- void (*action)(void);
+ void (*action)(struct softirq_action *);
};
asmlinkage void do_softirq(void);
asmlinkage void __do_softirq(void);
-extern void __init open_softirq(int nr, void (*action)(void));
+extern void open_softirq(int nr, void (*action)(struct softirq_action *));
extern void softirq_init(void);
extern void __raise_softirq_irqoff(unsigned int nr);
diff -rupN linux-hardened/include/linux/kobject_ns.h linux-5.17.1/include/linux/kobject_ns.h
--- linux-hardened/include/linux/kobject_ns.h 2022-04-05 20:56:06.255187305 +0900
+++ linux-5.17.1/include/linux/kobject_ns.h 2022-03-28 17:03:22.000000000 +0900
@@ -45,7 +45,7 @@ struct kobj_ns_type_operations {
void (*drop_ns)(void *);
};
-int __init kobj_ns_type_register(const struct kobj_ns_type_operations *ops);
+int kobj_ns_type_register(const struct kobj_ns_type_operations *ops);
int kobj_ns_type_registered(enum kobj_ns_type type);
const struct kobj_ns_type_operations *kobj_child_ns_ops(struct kobject *parent);
const struct kobj_ns_type_operations *kobj_ns_ops(struct kobject *kobj);
diff -rupN linux-hardened/include/linux/mm.h linux-5.17.1/include/linux/mm.h
--- linux-hardened/include/linux/mm.h 2022-04-05 20:57:15.737060706 +0900
+++ linux-5.17.1/include/linux/mm.h 2022-03-28 17:03:22.000000000 +0900
@@ -883,15 +883,10 @@ static inline void set_compound_page_dto
page[1].compound_dtor = compound_dtor;
}
-static inline compound_page_dtor *get_compound_page_dtor(struct page *page)
-{
- VM_BUG_ON_PAGE(page[1].compound_dtor >= NR_COMPOUND_DTORS, page);
- return compound_page_dtors[page[1].compound_dtor];
-}
-
static inline void destroy_compound_page(struct page *page)
{
- (*get_compound_page_dtor(page))(page);
+ VM_BUG_ON_PAGE(page[1].compound_dtor >= NR_COMPOUND_DTORS, page);
+ compound_page_dtors[page[1].compound_dtor](page);
}
static inline bool hpage_pincount_available(struct page *page)
diff -rupN linux-hardened/include/linux/perf_event.h linux-5.17.1/include/linux/perf_event.h
--- linux-hardened/include/linux/perf_event.h 2022-04-05 20:57:15.796061448 +0900
+++ linux-5.17.1/include/linux/perf_event.h 2022-03-28 17:03:22.000000000 +0900
@@ -1348,14 +1348,6 @@ static inline int perf_is_paranoid(void)
return sysctl_perf_event_paranoid > -1;
}
-static inline int perf_allow_open(struct perf_event_attr *attr)
-{
- if (sysctl_perf_event_paranoid > 2 && !perfmon_capable())
- return -EACCES;
-
- return security_perf_event_open(attr, PERF_SECURITY_OPEN);
-}
-
static inline int perf_allow_kernel(struct perf_event_attr *attr)
{
if (sysctl_perf_event_paranoid > 1 && !perfmon_capable())
diff -rupN linux-hardened/include/linux/slub_def.h linux-5.17.1/include/linux/slub_def.h
--- linux-hardened/include/linux/slub_def.h 2022-04-05 20:57:15.877062466 +0900
+++ linux-5.17.1/include/linux/slub_def.h 2022-03-28 17:03:22.000000000 +0900
@@ -122,11 +122,6 @@ struct kmem_cache {
unsigned long random;
#endif
-#ifdef CONFIG_SLAB_CANARY
- unsigned long random_active;
- unsigned long random_inactive;
-#endif
-
#ifdef CONFIG_NUMA
/*
* Defragmentation by allocating from a remote node.
diff -rupN linux-hardened/include/linux/sysctl.h linux-5.17.1/include/linux/sysctl.h
--- linux-hardened/include/linux/sysctl.h 2022-04-05 20:57:15.907062843 +0900
+++ linux-5.17.1/include/linux/sysctl.h 2022-03-28 17:03:22.000000000 +0900
@@ -73,8 +73,6 @@ int proc_douintvec_minmax(struct ctl_tab
size_t *lenp, loff_t *ppos);
int proc_dou8vec_minmax(struct ctl_table *table, int write, void *buffer,
size_t *lenp, loff_t *ppos);
-int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
- void *buffer, size_t *lenp, loff_t *ppos);
int proc_dointvec_jiffies(struct ctl_table *, int, void *, size_t *, loff_t *);
int proc_dointvec_userhz_jiffies(struct ctl_table *, int, void *, size_t *,
loff_t *);
diff -rupN linux-hardened/include/linux/tty.h linux-5.17.1/include/linux/tty.h
--- linux-hardened/include/linux/tty.h 2022-04-05 20:57:15.919062994 +0900
+++ linux-5.17.1/include/linux/tty.h 2022-03-28 17:03:22.000000000 +0900
@@ -15,7 +15,6 @@
#include <uapi/linux/tty.h>
#include <linux/rwsem.h>
#include <linux/llist.h>
-#include <linux/user_namespace.h>
/*
@@ -252,7 +251,6 @@ struct tty_struct {
int write_cnt;
struct work_struct SAK_work;
struct tty_port *port;
- struct user_namespace *owner_user_ns;
} __randomize_layout;
/* Each of a tty's open files has private_data pointing to tty_file_private */
@@ -262,8 +260,6 @@ struct tty_file_private {
struct list_head list;
};
-extern int tiocsti_restrict;
-
/* tty magic number */
#define TTY_MAGIC 0x5401
diff -rupN linux-hardened/include/linux/usb.h linux-5.17.1/include/linux/usb.h
--- linux-hardened/include/linux/usb.h 2022-04-05 20:57:15.925063070 +0900
+++ linux-5.17.1/include/linux/usb.h 2022-03-28 17:03:22.000000000 +0900
@@ -2030,17 +2030,6 @@ extern void usb_led_activity(enum usb_le
static inline void usb_led_activity(enum usb_led_event ev) {}
#endif
-/* sysctl.c */
-extern int deny_new_usb;
-#ifdef CONFIG_SYSCTL
-extern int usb_init_sysctl(void);
-extern void usb_exit_sysctl(void);
-#else
-static inline int usb_init_sysctl(void) { return 0; }
-static inline void usb_exit_sysctl(void) { }
-#endif /* CONFIG_SYSCTL */
-
-
#endif /* __KERNEL__ */
#endif
diff -rupN linux-hardened/include/linux/user_namespace.h linux-5.17.1/include/linux/user_namespace.h
--- linux-hardened/include/linux/user_namespace.h 2022-04-05 20:57:15.935063195 +0900
+++ linux-5.17.1/include/linux/user_namespace.h 2022-03-28 17:03:22.000000000 +0900
@@ -139,8 +139,6 @@ static inline void set_rlimit_ucount_max
#ifdef CONFIG_USER_NS
-extern int unprivileged_userns_clone;
-
static inline struct user_namespace *get_user_ns(struct user_namespace *ns)
{
if (ns)
@@ -174,8 +172,6 @@ extern bool current_in_userns(const stru
struct ns_common *ns_get_owner(struct ns_common *ns);
#else
-#define unprivileged_userns_clone 0
-
static inline struct user_namespace *get_user_ns(struct user_namespace *ns)
{
return &init_user_ns;
diff -rupN linux-hardened/include/net/bluetooth/hci.h linux-5.17.1/include/net/bluetooth/hci.h
--- linux-hardened/include/net/bluetooth/hci.h 2022-04-05 20:57:15.978063736 +0900
+++ linux-5.17.1/include/net/bluetooth/hci.h 2022-03-28 17:03:22.000000000 +0900
@@ -255,6 +255,16 @@ enum {
* during the hdev->setup vendor callback.
*/
HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER,
+
+ /* When this quirk is set, HCI_OP_SET_EVENT_FLT requests with
+ * HCI_FLT_CLEAR_ALL are ignored and event filtering is
+ * completely avoided. A subset of the CSR controller
+ * clones struggle with this and instantly lock up.
+ *
+ * Note that devices using this must (separately) disable
+ * runtime suspend, because event filtering takes place there.
+ */
+ HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL,
};
/* HCI device flags */
diff -rupN linux-hardened/include/net/tcp.h linux-5.17.1/include/net/tcp.h
--- linux-hardened/include/net/tcp.h 2022-04-05 20:57:16.046064591 +0900
+++ linux-5.17.1/include/net/tcp.h 2022-03-28 17:03:22.000000000 +0900
@@ -247,7 +247,6 @@ void tcp_time_wait(struct sock *sk, int
/* sysctl variables for tcp */
extern int sysctl_tcp_max_orphans;
extern long sysctl_tcp_mem[3];
-extern int sysctl_tcp_simult_connect;
#define TCP_RACK_LOSS_DETECTION 0x1 /* Use RACK to detect losses */
#define TCP_RACK_STATIC_REO_WND 0x2 /* Use static RACK reo wnd */
diff -rupN linux-hardened/include/sound/pcm.h linux-5.17.1/include/sound/pcm.h
--- linux-hardened/include/sound/pcm.h 2022-04-05 20:57:16.093065182 +0900
+++ linux-5.17.1/include/sound/pcm.h 2022-03-28 17:03:22.000000000 +0900
@@ -401,6 +401,7 @@ struct snd_pcm_runtime {
wait_queue_head_t tsleep; /* transfer sleep */
struct fasync_struct *fasync;
bool stop_operating; /* sync_stop will be called */
+ struct mutex buffer_mutex; /* protect for buffer changes */
/* -- private section -- */
void *private_data;
diff -rupN linux-hardened/init/Kconfig linux-5.17.1/init/Kconfig
--- linux-hardened/init/Kconfig 2022-04-05 20:57:16.257067243 +0900
+++ linux-5.17.1/init/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -443,7 +443,6 @@ config USELIB
config AUDIT
bool "Auditing support"
depends on NET
- default y
help
Enable auditing infrastructure that can be used with another
kernel subsystem, such as SELinux (which requires this for
@@ -1232,22 +1231,6 @@ config USER_NS
If unsure, say N.
-config USER_NS_UNPRIVILEGED
- bool "Allow unprivileged users to create namespaces"
- depends on USER_NS
- default n
- help
- When disabled, unprivileged users will not be able to create
- new namespaces. Allowing users to create their own namespaces
- has been part of several recent local privilege escalation
- exploits, so if you need user namespaces but are
- paranoid^Wsecurity-conscious you want to disable this.
-
- This setting can be overridden at runtime via the
- kernel.unprivileged_userns_clone sysctl.
-
- If unsure, say N.
-
config PID_NS
bool "PID Namespaces"
default y
@@ -1477,8 +1460,9 @@ menuconfig EXPERT
Only use this if you really know what you are doing.
config UID16
- bool "Enable 16-bit UID system calls"
+ bool "Enable 16-bit UID system calls" if EXPERT
depends on HAVE_UID16 && MULTIUSER
+ default y
help
This enables the legacy 16-bit UID syscall wrappers.
@@ -1507,13 +1491,14 @@ config SGETMASK_SYSCALL
If unsure, leave the default option here.
config SYSFS_SYSCALL
- bool "Sysfs syscall support"
+ bool "Sysfs syscall support" if EXPERT
+ default y
help
sys_sysfs is an obsolete system call no longer supported in libc.
Note that disabling this option is more secure but might break
compatibility with some systems.
- If unsure say N here.
+ If unsure say Y here.
config FHANDLE
bool "open by fhandle syscalls" if EXPERT
@@ -1652,7 +1637,8 @@ config SHMEM
which may be appropriate on small systems without swap.
config AIO
- bool "Enable AIO support"
+ bool "Enable AIO support" if EXPERT
+ default y
help
This option enables POSIX asynchronous I/O which may by used
by some high performance threaded applications. Disabling
@@ -1883,7 +1869,7 @@ config VM_EVENT_COUNTERS
config SLUB_DEBUG
default y
- bool "Enable SLUB debugging support"
+ bool "Enable SLUB debugging support" if EXPERT
depends on SLUB && SYSFS
help
SLUB has extensive debug support features. Disabling these can
@@ -1893,6 +1879,7 @@ config SLUB_DEBUG
config COMPAT_BRK
bool "Disable heap randomization"
+ default y
help
Randomizing heap placement makes heap exploits harder, but it
also breaks ancient binaries (including anything libc5 based).
@@ -1941,6 +1928,7 @@ endchoice
config SLAB_MERGE_DEFAULT
bool "Allow slab caches to be merged"
+ default y
depends on SLAB || SLUB
help
For reduced kernel memory fragmentation, slab caches can be
@@ -1956,7 +1944,6 @@ config SLAB_MERGE_DEFAULT
config SLAB_FREELIST_RANDOM
bool "Randomize slab freelist"
depends on SLAB || SLUB
- default y
help
Randomizes the freelist order used on creating new pages. This
security feature reduces the predictability of the kernel slab
@@ -1965,7 +1952,6 @@ config SLAB_FREELIST_RANDOM
config SLAB_FREELIST_HARDENED
bool "Harden slab freelist metadata"
depends on SLAB || SLUB
- default y
help
Many kernel heap attacks try to target slab cache metadata and
other infrastructure. This options makes minor performance
@@ -1974,23 +1960,6 @@ config SLAB_FREELIST_HARDENED
sanity-checking than others. This option is most effective with
CONFIG_SLUB.
-config SLAB_CANARY
- depends on SLUB
- depends on !SLAB_MERGE_DEFAULT
- bool "SLAB canaries"
- default y
- help
- Place canaries at the end of kernel slab allocations, sacrificing
- some performance and memory usage for security.
-
- Canaries can detect some forms of heap corruption when allocations
- are freed and as part of the HARDENED_USERCOPY feature. It provides
- basic use-after-free detection for HARDENED_USERCOPY.
-
- Canaries absorb small overflows (rendering them harmless), mitigate
- non-NUL terminated C string overflows on 64-bit via a guaranteed zero
- byte and provide basic double-free detection.
-
config SHUFFLE_PAGE_ALLOCATOR
bool "Page allocator randomization"
default SLAB_FREELIST_RANDOM && ACPI_NUMA
diff -rupN linux-hardened/kernel/audit.c linux-5.17.1/kernel/audit.c
--- linux-hardened/kernel/audit.c 2022-04-05 20:57:16.272067432 +0900
+++ linux-5.17.1/kernel/audit.c 2022-03-28 17:03:22.000000000 +0900
@@ -1730,9 +1730,6 @@ static int __init audit_enable(char *str
if (audit_default == AUDIT_OFF)
audit_initialized = AUDIT_DISABLED;
- else if (!audit_ever_enabled)
- audit_initialized = AUDIT_UNINITIALIZED;
-
if (audit_set_enabled(audit_default))
pr_err("audit: error setting audit state (%d)\n",
audit_default);
diff -rupN linux-hardened/kernel/bpf/core.c linux-5.17.1/kernel/bpf/core.c
--- linux-hardened/kernel/bpf/core.c 2022-04-05 20:57:16.286067608 +0900
+++ linux-5.17.1/kernel/bpf/core.c 2022-03-28 17:03:22.000000000 +0900
@@ -530,7 +530,7 @@ void bpf_prog_kallsyms_del_all(struct bp
/* All BPF JIT sysctl knobs here. */
int bpf_jit_enable __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
int bpf_jit_kallsyms __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
-int bpf_jit_harden __read_mostly = 2;
+int bpf_jit_harden __read_mostly;
long bpf_jit_limit __read_mostly;
long bpf_jit_limit_max __read_mostly;
diff -rupN linux-hardened/kernel/capability.c linux-5.17.1/kernel/capability.c
--- linux-hardened/kernel/capability.c 2022-04-05 20:56:07.538203433 +0900
+++ linux-5.17.1/kernel/capability.c 2022-03-28 17:03:22.000000000 +0900
@@ -449,12 +449,6 @@ bool capable(int cap)
return ns_capable(&init_user_ns, cap);
}
EXPORT_SYMBOL(capable);
-
-bool capable_noaudit(int cap)
-{
- return ns_capable_noaudit(&init_user_ns, cap);
-}
-EXPORT_SYMBOL(capable_noaudit);
#endif /* CONFIG_MULTIUSER */
/**
diff -rupN linux-hardened/kernel/events/core.c linux-5.17.1/kernel/events/core.c
--- linux-hardened/kernel/events/core.c 2022-04-05 20:57:16.342068311 +0900
+++ linux-5.17.1/kernel/events/core.c 2022-03-28 17:03:22.000000000 +0900
@@ -414,13 +414,8 @@ static struct kmem_cache *perf_event_cac
* 0 - disallow raw tracepoint access for unpriv
* 1 - disallow cpu events for unpriv
* 2 - disallow kernel profiling for unpriv
- * 3 - disallow all unpriv perf event use
*/
-#ifdef CONFIG_SECURITY_PERF_EVENTS_RESTRICT
-int sysctl_perf_event_paranoid __read_mostly = 3;
-#else
int sysctl_perf_event_paranoid __read_mostly = 2;
-#endif
/* Minimum for 512 kiB + 1 user control page */
int sysctl_perf_event_mlock __read_mostly = 512 + (PAGE_SIZE / 1024); /* 'free' kiB per user */
@@ -12143,7 +12138,7 @@ SYSCALL_DEFINE5(perf_event_open,
return -EINVAL;
/* Do we allow access to perf_event_open(2) ? */
- err = perf_allow_open(&attr);
+ err = security_perf_event_open(&attr, PERF_SECURITY_OPEN);
if (err)
return err;
diff -rupN linux-hardened/kernel/fork.c linux-5.17.1/kernel/fork.c
--- linux-hardened/kernel/fork.c 2022-04-05 20:57:16.349068400 +0900
+++ linux-5.17.1/kernel/fork.c 2022-03-28 17:03:22.000000000 +0900
@@ -82,7 +82,6 @@
#include <linux/perf_event.h>
#include <linux/posix-timers.h>
#include <linux/user-return-notifier.h>
-#include <linux/user_namespace.h>
#include <linux/oom.h>
#include <linux/khugepaged.h>
#include <linux/signalfd.h>
@@ -1923,10 +1922,6 @@ static __latent_entropy struct task_stru
if ((clone_flags & (CLONE_NEWUSER|CLONE_FS)) == (CLONE_NEWUSER|CLONE_FS))
return ERR_PTR(-EINVAL);
- if ((clone_flags & CLONE_NEWUSER) && !unprivileged_userns_clone)
- if (!capable(CAP_SYS_ADMIN))
- return ERR_PTR(-EPERM);
-
/*
* Thread groups must share signals as well, and detached threads
* can only be started up within the thread group.
@@ -3041,12 +3036,6 @@ int ksys_unshare(unsigned long unshare_f
if (unshare_flags & CLONE_NEWNS)
unshare_flags |= CLONE_FS;
- if ((unshare_flags & CLONE_NEWUSER) && !unprivileged_userns_clone) {
- err = -EPERM;
- if (!capable(CAP_SYS_ADMIN))
- goto bad_unshare_out;
- }
-
err = check_unshare_flags(unshare_flags);
if (err)
goto bad_unshare_out;
diff -rupN linux-hardened/kernel/printk/sysctl.c linux-5.17.1/kernel/printk/sysctl.c
--- linux-hardened/kernel/printk/sysctl.c 2022-04-05 20:57:16.417069254 +0900
+++ linux-5.17.1/kernel/printk/sysctl.c 2022-03-28 17:03:22.000000000 +0900
@@ -11,6 +11,15 @@
static const int ten_thousand = 10000;
+static int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+{
+ if (write && !capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+}
+
static struct ctl_table printk_sysctls[] = {
{
.procname = "printk",
diff -rupN linux-hardened/kernel/rcu/tiny.c linux-5.17.1/kernel/rcu/tiny.c
--- linux-hardened/kernel/rcu/tiny.c 2022-04-05 20:57:16.427069380 +0900
+++ linux-5.17.1/kernel/rcu/tiny.c 2022-03-28 17:03:22.000000000 +0900
@@ -104,7 +104,7 @@ static inline bool rcu_reclaim_tiny(stru
}
/* Invoke the RCU callbacks whose grace period has elapsed. */
-static __latent_entropy void rcu_process_callbacks(void)
+static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused)
{
struct rcu_head *next, *list;
unsigned long flags;
diff -rupN linux-hardened/kernel/rcu/tree.c linux-5.17.1/kernel/rcu/tree.c
--- linux-hardened/kernel/rcu/tree.c 2022-04-05 20:57:16.431069430 +0900
+++ linux-5.17.1/kernel/rcu/tree.c 2022-03-28 17:03:22.000000000 +0900
@@ -2790,7 +2790,7 @@ static __latent_entropy void rcu_core(vo
queue_work_on(rdp->cpu, rcu_gp_wq, &rdp->strict_work);
}
-static void rcu_core_si(void)
+static void rcu_core_si(struct softirq_action *h)
{
rcu_core();
}
diff -rupN linux-hardened/kernel/rcu/tree_plugin.h linux-5.17.1/kernel/rcu/tree_plugin.h
--- linux-hardened/kernel/rcu/tree_plugin.h 2022-04-05 20:57:16.435069480 +0900
+++ linux-5.17.1/kernel/rcu/tree_plugin.h 2022-03-28 17:03:22.000000000 +0900
@@ -556,16 +556,16 @@ rcu_preempt_deferred_qs_irqrestore(struc
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
}
- /* Unboost if we were boosted. */
- if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
- rt_mutex_futex_unlock(&rnp->boost_mtx.rtmutex);
-
/*
* If this was the last task on the expedited lists,
* then we need to report up the rcu_node hierarchy.
*/
if (!empty_exp && empty_exp_now)
rcu_report_exp_rnp(rnp, true);
+
+ /* Unboost if we were boosted. */
+ if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
+ rt_mutex_futex_unlock(&rnp->boost_mtx.rtmutex);
} else {
local_irq_restore(flags);
}
diff -rupN linux-hardened/kernel/sched/fair.c linux-5.17.1/kernel/sched/fair.c
--- linux-hardened/kernel/sched/fair.c 2022-04-05 20:57:16.458069770 +0900
+++ linux-5.17.1/kernel/sched/fair.c 2022-03-28 17:03:22.000000000 +0900
@@ -10970,7 +10970,7 @@ out:
* run_rebalance_domains is triggered when needed from the scheduler tick.
* Also triggered for nohz idle balancing (with nohz_balancing_kick set).
*/
-static __latent_entropy void run_rebalance_domains(void)
+static __latent_entropy void run_rebalance_domains(struct softirq_action *h)
{
struct rq *this_rq = this_rq();
enum cpu_idle_type idle = this_rq->idle_balance ?
diff -rupN linux-hardened/kernel/softirq.c linux-5.17.1/kernel/softirq.c
--- linux-hardened/kernel/softirq.c 2022-04-05 20:57:16.476069996 +0900
+++ linux-5.17.1/kernel/softirq.c 2022-03-28 17:03:22.000000000 +0900
@@ -56,7 +56,7 @@ DEFINE_PER_CPU_ALIGNED(irq_cpustat_t, ir
EXPORT_PER_CPU_SYMBOL(irq_stat);
#endif
-static struct softirq_action softirq_vec[NR_SOFTIRQS] __ro_after_init __aligned(PAGE_SIZE);
+static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;
DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
@@ -555,7 +555,7 @@ restart:
kstat_incr_softirqs_this_cpu(vec_nr);
trace_softirq_entry(vec_nr);
- h->action();
+ h->action(h);
trace_softirq_exit(vec_nr);
if (unlikely(prev_count != preempt_count())) {
pr_err("huh, entered softirq %u %s %p with preempt_count %08x, exited with %08x?\n",
@@ -700,7 +700,7 @@ void __raise_softirq_irqoff(unsigned int
or_softirq_pending(1UL << nr);
}
-void __init open_softirq(int nr, void (*action)(void))
+void open_softirq(int nr, void (*action)(struct softirq_action *))
{
softirq_vec[nr].action = action;
}
@@ -760,7 +760,8 @@ static bool tasklet_clear_sched(struct t
return false;
}
-static void tasklet_action_common(struct tasklet_head *tl_head,
+static void tasklet_action_common(struct softirq_action *a,
+ struct tasklet_head *tl_head,
unsigned int softirq_nr)
{
struct tasklet_struct *list;
@@ -799,14 +800,14 @@ static void tasklet_action_common(struct
}
}
-static __latent_entropy void tasklet_action(void)
+static __latent_entropy void tasklet_action(struct softirq_action *a)
{
- tasklet_action_common(this_cpu_ptr(&tasklet_vec), TASKLET_SOFTIRQ);
+ tasklet_action_common(a, this_cpu_ptr(&tasklet_vec), TASKLET_SOFTIRQ);
}
-static __latent_entropy void tasklet_hi_action(void)
+static __latent_entropy void tasklet_hi_action(struct softirq_action *a)
{
- tasklet_action_common(this_cpu_ptr(&tasklet_hi_vec), HI_SOFTIRQ);
+ tasklet_action_common(a, this_cpu_ptr(&tasklet_hi_vec), HI_SOFTIRQ);
}
void tasklet_setup(struct tasklet_struct *t,
diff -rupN linux-hardened/kernel/sysctl.c linux-5.17.1/kernel/sysctl.c
--- linux-hardened/kernel/sysctl.c 2022-04-05 20:57:16.482070071 +0900
+++ linux-5.17.1/kernel/sysctl.c 2022-03-28 17:03:22.000000000 +0900
@@ -91,9 +91,6 @@
#if defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_LOCK_STAT)
#include <linux/lockdep.h>
#endif
-#ifdef CONFIG_USER_NS
-#include <linux/user_namespace.h>
-#endif
#if defined(CONFIG_SYSCTL)
@@ -899,35 +896,6 @@ static int proc_taint(struct ctl_table *
}
/**
- * proc_dointvec_minmax_sysadmin - read a vector of integers with min/max values
- * checking CAP_SYS_ADMIN on write
- * @table: the sysctl table
- * @write: %TRUE if this is a write to the sysctl file
- * @buffer: the user buffer
- * @lenp: the size of the user buffer
- * @ppos: file position
- *
- * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
- * values from/to the user buffer, treated as an ASCII string.
- *
- * This routine will ensure the values are within the range specified by
- * table->extra1 (min) and table->extra2 (max).
- *
- * Writing is only allowed when the current task has CAP_SYS_ADMIN.
- *
- * Returns 0 on success, -EPERM on permission failure or -EINVAL on write
- * when the range check fails.
- */
-int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
- void *buffer, size_t *lenp, loff_t *ppos)
-{
- if (write && !capable(CAP_SYS_ADMIN))
- return -EPERM;
-
- return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
-}
-
-/**
* struct do_proc_dointvec_minmax_conv_param - proc_dointvec_minmax() range checking structure
* @min: pointer to minimum allowable value
* @max: pointer to maximum allowable value
@@ -1621,12 +1589,6 @@ int proc_dou8vec_minmax(struct ctl_table
return -ENOSYS;
}
-int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
- void *buffer, size_t *lenp, loff_t *ppos)
-{
- return -ENOSYS;
-}
-
int proc_dointvec_jiffies(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
@@ -1852,15 +1814,6 @@ static struct ctl_table kern_table[] = {
.mode = 0644,
.proc_handler = proc_dointvec,
},
-#ifdef CONFIG_USER_NS
- {
- .procname = "unprivileged_userns_clone",
- .data = &unprivileged_userns_clone,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec,
- },
-#endif
#ifdef CONFIG_PROC_SYSCTL
{
.procname = "tainted",
@@ -2902,7 +2855,6 @@ EXPORT_SYMBOL(proc_douintvec);
EXPORT_SYMBOL(proc_dointvec_jiffies);
EXPORT_SYMBOL(proc_dointvec_minmax);
EXPORT_SYMBOL_GPL(proc_douintvec_minmax);
-EXPORT_SYMBOL(proc_dointvec_minmax_sysadmin);
EXPORT_SYMBOL(proc_dointvec_userhz_jiffies);
EXPORT_SYMBOL(proc_dointvec_ms_jiffies);
EXPORT_SYMBOL(proc_dostring);
diff -rupN linux-hardened/kernel/time/hrtimer.c linux-5.17.1/kernel/time/hrtimer.c
--- linux-hardened/kernel/time/hrtimer.c 2022-04-05 20:57:16.488070147 +0900
+++ linux-5.17.1/kernel/time/hrtimer.c 2022-03-28 17:03:22.000000000 +0900
@@ -1753,7 +1753,7 @@ static void __hrtimer_run_queues(struct
}
}
-static __latent_entropy void hrtimer_run_softirq(void)
+static __latent_entropy void hrtimer_run_softirq(struct softirq_action *h)
{
struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
unsigned long flags;
diff -rupN linux-hardened/kernel/time/timer.c linux-5.17.1/kernel/time/timer.c
--- linux-hardened/kernel/time/timer.c 2022-04-05 20:57:16.501070310 +0900
+++ linux-5.17.1/kernel/time/timer.c 2022-03-28 17:03:22.000000000 +0900
@@ -1740,7 +1740,7 @@ static inline void __run_timers(struct t
/*
* This function runs timers and the timer-tq in bottom half context.
*/
-static __latent_entropy void run_timer_softirq(void)
+static __latent_entropy void run_timer_softirq(struct softirq_action *h)
{
struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
diff -rupN linux-hardened/kernel/user_namespace.c linux-5.17.1/kernel/user_namespace.c
--- linux-hardened/kernel/user_namespace.c 2022-04-05 20:57:16.550070926 +0900
+++ linux-5.17.1/kernel/user_namespace.c 2022-03-28 17:03:22.000000000 +0900
@@ -21,13 +21,6 @@
#include <linux/bsearch.h>
#include <linux/sort.h>
-/* sysctl */
-#ifdef CONFIG_USER_NS_UNPRIVILEGED
-int unprivileged_userns_clone = 1;
-#else
-int unprivileged_userns_clone;
-#endif
-
static struct kmem_cache *user_ns_cachep __read_mostly;
static DEFINE_MUTEX(userns_state_mutex);
diff -rupN linux-hardened/lib/Kconfig.debug linux-5.17.1/lib/Kconfig.debug
--- linux-hardened/lib/Kconfig.debug 2022-04-05 20:57:16.558071027 +0900
+++ linux-5.17.1/lib/Kconfig.debug 2022-03-28 17:03:22.000000000 +0900
@@ -415,9 +415,6 @@ config SECTION_MISMATCH_WARN_ONLY
If unsure, say Y.
-config DEBUG_WRITABLE_FUNCTION_POINTERS_VERBOSE
- bool "Enable verbose reporting of writable function pointers"
-
config DEBUG_FORCE_FUNCTION_ALIGN_64B
bool "Force all function address 64B aligned" if EXPERT
help
@@ -553,7 +550,7 @@ config DEBUG_FS
choice
prompt "Debugfs default access"
depends on DEBUG_FS
- default DEBUG_FS_ALLOW_NONE
+ default DEBUG_FS_ALLOW_ALL
help
This selects the default access restrictions for debugfs.
It can be overridden with kernel command line option
@@ -987,7 +984,6 @@ menu "Debug Oops, Lockups and Hangs"
config PANIC_ON_OOPS
bool "Panic on Oops"
- default y
help
Say Y here to enable the kernel to panic when it oopses. This
has the same effect as setting oops=panic on the kernel command
@@ -997,7 +993,7 @@ config PANIC_ON_OOPS
anything erroneous after an oops which could result in data
corruption or other issues.
- Say Y if unsure.
+ Say N if unsure.
config PANIC_ON_OOPS_VALUE
int
@@ -1612,7 +1608,6 @@ menu "Debug kernel data structures"
config DEBUG_LIST
bool "Debug linked list manipulation"
depends on DEBUG_KERNEL || BUG_ON_DATA_CORRUPTION
- default y
help
Enable this to turn on extended checks in the linked-list
walking routines.
@@ -1652,7 +1647,6 @@ config DEBUG_NOTIFIERS
config BUG_ON_DATA_CORRUPTION
bool "Trigger a BUG when data corruption is detected"
select DEBUG_LIST
- default y
help
Select this option if the kernel should BUG when it encounters
data corruption in kernel memory structures when they get checked
@@ -1780,7 +1774,6 @@ config STRICT_DEVMEM
config IO_STRICT_DEVMEM
bool "Filter I/O access to /dev/mem"
depends on STRICT_DEVMEM
- default y
help
If this option is disabled, you allow userspace (root) access to all
io-memory regardless of whether a driver is actively using that
diff -rupN linux-hardened/lib/Kconfig.kfence linux-5.17.1/lib/Kconfig.kfence
--- linux-hardened/lib/Kconfig.kfence 2022-04-05 20:57:16.559071039 +0900
+++ linux-5.17.1/lib/Kconfig.kfence 2022-03-28 17:03:22.000000000 +0900
@@ -84,13 +84,4 @@ config KFENCE_KUNIT_TEST
during boot; say M if you want the test to build as a module; say N
if you are unsure.
-config KFENCE_BUG_ON_DATA_CORRUPTION
- bool "Trigger a BUG when data corruption is detected"
- default y
- help
- Select this option if the kernel should BUG when kfence encounters
- data corruption of kfence managed objects after error report.
-
- If unsure, say Y.
-
endif # KFENCE
diff -rupN linux-hardened/lib/irq_poll.c linux-5.17.1/lib/irq_poll.c
--- linux-hardened/lib/irq_poll.c 2022-04-05 20:56:07.916208185 +0900
+++ linux-5.17.1/lib/irq_poll.c 2022-03-28 17:03:22.000000000 +0900
@@ -75,7 +75,7 @@ void irq_poll_complete(struct irq_poll *
}
EXPORT_SYMBOL(irq_poll_complete);
-static void __latent_entropy irq_poll_softirq(void)
+static void __latent_entropy irq_poll_softirq(struct softirq_action *h)
{
struct list_head *list = this_cpu_ptr(&blk_cpu_iopoll);
int rearm = 0, budget = irq_poll_budget;
diff -rupN linux-hardened/lib/kobject.c linux-5.17.1/lib/kobject.c
--- linux-hardened/lib/kobject.c 2022-04-05 20:57:16.584071354 +0900
+++ linux-5.17.1/lib/kobject.c 2022-03-28 17:03:22.000000000 +0900
@@ -1023,9 +1023,9 @@ EXPORT_SYMBOL_GPL(kset_create_and_add);
static DEFINE_SPINLOCK(kobj_ns_type_lock);
-static const struct kobj_ns_type_operations *kobj_ns_ops_tbl[KOBJ_NS_TYPES] __ro_after_init;
+static const struct kobj_ns_type_operations *kobj_ns_ops_tbl[KOBJ_NS_TYPES];
-int __init kobj_ns_type_register(const struct kobj_ns_type_operations *ops)
+int kobj_ns_type_register(const struct kobj_ns_type_operations *ops)
{
enum kobj_ns_type type = ops->type;
int error;
diff -rupN linux-hardened/lib/nlattr.c linux-5.17.1/lib/nlattr.c
--- linux-hardened/lib/nlattr.c 2022-04-05 20:57:16.598071529 +0900
+++ linux-5.17.1/lib/nlattr.c 2022-03-28 17:03:22.000000000 +0900
@@ -790,8 +790,6 @@ int nla_memcpy(void *dest, const struct
{
int minlen = min_t(int, count, nla_len(src));
- BUG_ON(minlen < 0);
-
memcpy(dest, nla_data(src), minlen);
if (count > minlen)
memset(dest + minlen, 0, count - minlen);
diff -rupN linux-hardened/lib/vsprintf.c linux-5.17.1/lib/vsprintf.c
--- linux-hardened/lib/vsprintf.c 2022-04-05 20:57:16.637072020 +0900
+++ linux-5.17.1/lib/vsprintf.c 2022-03-28 17:03:22.000000000 +0900
@@ -848,7 +848,7 @@ static char *ptr_to_id(char *buf, char *
return pointer_string(buf, end, (const void *)hashval, spec);
}
-int kptr_restrict __read_mostly = 2;
+int kptr_restrict __read_mostly;
static noinline_for_stack
char *restricted_pointer(char *buf, char *end, const void *ptr,
diff -rupN linux-hardened/mm/Kconfig linux-5.17.1/mm/Kconfig
--- linux-hardened/mm/Kconfig 2022-04-05 20:57:16.666072384 +0900
+++ linux-5.17.1/mm/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -308,8 +308,7 @@ config KSM
config DEFAULT_MMAP_MIN_ADDR
int "Low address space to protect from user allocation"
depends on MMU
- default 32768 if ARM || (ARM64 && COMPAT)
- default 65536
+ default 4096
help
This is the portion of low virtual memory which should be protected
from userspace allocation. Keeping a user from writing to low pages
diff -rupN linux-hardened/mm/Kconfig.debug linux-5.17.1/mm/Kconfig.debug
--- linux-hardened/mm/Kconfig.debug 2022-04-05 20:57:16.666072384 +0900
+++ linux-5.17.1/mm/Kconfig.debug 2022-03-28 17:03:22.000000000 +0900
@@ -130,7 +130,6 @@ config DEBUG_WX
depends on ARCH_HAS_DEBUG_WX
depends on MMU
select PTDUMP_CORE
- default y
help
Generate a warning if any W+X mappings are found at boot.
diff -rupN linux-hardened/mm/kfence/report.c linux-5.17.1/mm/kfence/report.c
--- linux-hardened/mm/kfence/report.c 2022-04-05 20:57:16.701072824 +0900
+++ linux-5.17.1/mm/kfence/report.c 2022-03-28 17:03:22.000000000 +0900
@@ -8,7 +8,6 @@
#include <linux/stdarg.h>
#include <linux/kernel.h>
-#include <linux/bug.h>
#include <linux/lockdep.h>
#include <linux/math.h>
#include <linux/printk.h>
@@ -268,10 +267,6 @@ void kfence_report_error(unsigned long a
lockdep_on();
-#ifdef CONFIG_KFENCE_BUG_ON_DATA_CORRUPTION
- BUG();
-#endif
-
if (panic_on_warn)
panic("panic_on_warn set ...\n");
diff -rupN linux-hardened/mm/mmap.c linux-5.17.1/mm/mmap.c
--- linux-hardened/mm/mmap.c 2022-04-05 20:57:16.727073151 +0900
+++ linux-5.17.1/mm/mmap.c 2022-03-28 17:03:22.000000000 +0900
@@ -235,13 +235,6 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
newbrk = PAGE_ALIGN(brk);
oldbrk = PAGE_ALIGN(mm->brk);
- /* properly handle unaligned min_brk as an empty heap */
- if (min_brk & ~PAGE_MASK) {
- if (brk == min_brk)
- newbrk -= PAGE_SIZE;
- if (mm->brk == min_brk)
- oldbrk -= PAGE_SIZE;
- }
if (oldbrk == newbrk) {
mm->brk = brk;
goto success;
diff -rupN linux-hardened/mm/page_alloc.c linux-5.17.1/mm/page_alloc.c
--- linux-hardened/mm/page_alloc.c 2022-04-05 20:57:16.741073327 +0900
+++ linux-5.17.1/mm/page_alloc.c 2022-03-28 17:03:22.000000000 +0900
@@ -158,15 +158,6 @@ struct pcpu_drain {
static DEFINE_MUTEX(pcpu_drain_mutex);
static DEFINE_PER_CPU(struct pcpu_drain, pcpu_drain);
-bool __meminitdata extra_latent_entropy;
-
-static int __init setup_extra_latent_entropy(char *str)
-{
- extra_latent_entropy = true;
- return 0;
-}
-early_param("extra_latent_entropy", setup_extra_latent_entropy);
-
#ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
volatile unsigned long latent_entropy __latent_entropy;
EXPORT_SYMBOL(latent_entropy);
@@ -1672,25 +1663,6 @@ static void __free_pages_ok(struct page
__count_vm_events(PGFREE, 1 << order);
}
-static void __init __gather_extra_latent_entropy(struct page *page,
- unsigned int nr_pages)
-{
- if (extra_latent_entropy && !PageHighMem(page) && page_to_pfn(page) < 0x100000) {
- unsigned long hash = 0;
- size_t index, end = PAGE_SIZE * nr_pages / sizeof hash;
- const unsigned long *data = lowmem_page_address(page);
-
- for (index = 0; index < end; index++)
- hash ^= hash + data[index];
-#ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
- latent_entropy ^= hash;
- add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
-#else
- add_device_randomness((const void *)&hash, sizeof(hash));
-#endif
- }
-}
-
void __free_pages_core(struct page *page, unsigned int order)
{
unsigned int nr_pages = 1 << order;
@@ -1710,6 +1682,7 @@ void __free_pages_core(struct page *page
}
__ClearPageReserved(p);
set_page_count(p, 0);
+
atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
/*
@@ -1776,7 +1749,6 @@ void __init memblock_free_pages(struct p
{
if (early_page_uninitialised(pfn))
return;
- __gather_extra_latent_entropy(page, 1 << order);
__free_pages_core(page, order);
}
@@ -1866,7 +1838,6 @@ static void __init deferred_free_range(u
if (nr_pages == pageblock_nr_pages &&
(pfn & (pageblock_nr_pages - 1)) == 0) {
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
- __gather_extra_latent_entropy(page, 1 << pageblock_order);
__free_pages_core(page, pageblock_order);
return;
}
@@ -1874,7 +1845,6 @@ static void __init deferred_free_range(u
for (i = 0; i < nr_pages; i++, page++, pfn++) {
if ((pfn & (pageblock_nr_pages - 1)) == 0)
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
- __gather_extra_latent_entropy(page, 1);
__free_pages_core(page, 0);
}
}
@@ -2438,12 +2408,6 @@ inline void post_alloc_hook(struct page
*/
kernel_unpoison_pages(page, 1 << order);
- if (IS_ENABLED(CONFIG_PAGE_SANITIZE_VERIFY) && want_init_on_free()) {
- int i;
- for (i = 0; i < (1 << order); i++)
- verify_zero_highpage(page + i);
- }
-
/*
* As memory initialization might be integrated into KASAN,
* kasan_alloc_pages and kernel_init_free_pages must be
diff -rupN linux-hardened/mm/slab.h linux-5.17.1/mm/slab.h
--- linux-hardened/mm/slab.h 2022-04-05 20:57:16.761073578 +0900
+++ linux-5.17.1/mm/slab.h 2022-03-28 17:03:22.000000000 +0900
@@ -623,13 +623,9 @@ static inline struct kmem_cache *virt_to
struct slab *slab;
slab = virt_to_slab(obj);
-#ifdef CONFIG_BUG_ON_DATA_CORRUPTION
- BUG_ON(!slab);
-#else
if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n",
__func__))
return NULL;
-#endif
return slab->slab_cache;
}
@@ -662,15 +658,10 @@ static inline struct kmem_cache *cache_f
return s;
cachep = virt_to_cache(x);
- if (cachep && cachep != s) {
-#ifdef CONFIG_BUG_ON_DATA_CORRUPTION
- BUG();
-#else
- WARN(1, "%s: Wrong slab cache. %s but object is from %s\n",
- __func__, s->name, cachep->name);
+ if (WARN(cachep && cachep != s,
+ "%s: Wrong slab cache. %s but object is from %s\n",
+ __func__, s->name, cachep->name))
print_tracking(cachep, x);
-#endif
- }
return cachep;
}
#endif /* CONFIG_SLOB */
@@ -696,7 +687,7 @@ static inline size_t slab_ksize(const st
* back there or track user information then we can
* only use the space before that information.
*/
- if ((s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_STORE_USER)) || IS_ENABLED(CONFIG_SLAB_CANARY))
+ if (s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_STORE_USER))
return s->inuse;
/*
* Else we can use all the padding etc for the allocation
@@ -741,8 +732,6 @@ static inline void slab_post_alloc_hook(
p[i] = kasan_slab_alloc(s, p[i], flags, init);
if (p[i] && init && !kasan_has_integrated_init())
memset(p[i], 0, s->object_size);
- if (p[i] && init && s->ctor)
- s->ctor(p[i]);
kmemleak_alloc_recursive(p[i], s->object_size, 1,
s->flags, flags);
}
@@ -826,10 +815,8 @@ static inline bool slab_want_init_on_all
{
if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
&init_on_alloc)) {
-#ifndef CONFIG_SLUB
if (c->ctor)
return false;
-#endif
if (c->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))
return flags & __GFP_ZERO;
return true;
@@ -840,15 +827,9 @@ static inline bool slab_want_init_on_all
static inline bool slab_want_init_on_free(struct kmem_cache *c)
{
if (static_branch_maybe(CONFIG_INIT_ON_FREE_DEFAULT_ON,
- &init_on_free)) {
-#ifndef CONFIG_SLUB
- if (c->ctor)
- return false;
-#endif
- if (c->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))
- return false;
- return true;
- }
+ &init_on_free))
+ return !(c->ctor ||
+ (c->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)));
return false;
}
diff -rupN linux-hardened/mm/slab_common.c linux-5.17.1/mm/slab_common.c
--- linux-hardened/mm/slab_common.c 2022-04-05 20:57:16.762073591 +0900
+++ linux-5.17.1/mm/slab_common.c 2022-03-28 17:03:22.000000000 +0900
@@ -32,10 +32,10 @@
#include "slab.h"
-enum slab_state slab_state __ro_after_init;
+enum slab_state slab_state;
LIST_HEAD(slab_caches);
DEFINE_MUTEX(slab_mutex);
-struct kmem_cache *kmem_cache __ro_after_init;
+struct kmem_cache *kmem_cache;
static LIST_HEAD(slab_caches_to_rcu_destroy);
static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
@@ -55,7 +55,7 @@ static DECLARE_WORK(slab_caches_to_rcu_d
/*
* Merge control. If this is set then no merging of slab caches will occur.
*/
-static bool slab_nomerge __ro_after_init = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
+static bool slab_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
static int __init setup_slab_nomerge(char *str)
{
diff -rupN linux-hardened/mm/slub.c linux-5.17.1/mm/slub.c
--- linux-hardened/mm/slub.c 2022-04-05 20:57:16.766073641 +0900
+++ linux-5.17.1/mm/slub.c 2022-03-28 17:03:22.000000000 +0900
@@ -189,12 +189,6 @@ static inline bool kmem_cache_debug(stru
return kmem_cache_debug_flags(s, SLAB_DEBUG_FLAGS);
}
-static inline bool has_sanitize_verify(struct kmem_cache *s)
-{
- return IS_ENABLED(CONFIG_SLAB_SANITIZE_VERIFY) &&
- slab_want_init_on_free(s);
-}
-
void *fixup_red_left(struct kmem_cache *s, void *p)
{
if (kmem_cache_debug_flags(s, SLAB_RED_ZONE))
@@ -563,55 +557,6 @@ static inline bool cmpxchg_double_slab(s
return false;
}
-#if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SLAB_CANARY)
-/*
- * See comment in calculate_sizes().
- */
-static inline bool freeptr_outside_object(struct kmem_cache *s)
-{
- return s->offset >= s->inuse;
-}
-
-/*
- * Return offset of the end of info block which is inuse + free pointer if
- * not overlapping with object.
- */
-static inline unsigned int get_info_end(struct kmem_cache *s)
-{
- if (freeptr_outside_object(s))
- return s->inuse + sizeof(void *);
- else
- return s->inuse;
-}
-#endif
-
-#ifdef CONFIG_SLAB_CANARY
-static inline unsigned long *get_canary(struct kmem_cache *s, void *object)
-{
- return object + get_info_end(s);
-}
-
-static inline unsigned long get_canary_value(const void *canary, unsigned long value)
-{
- return (value ^ (unsigned long)canary) & CANARY_MASK;
-}
-
-static inline void set_canary(struct kmem_cache *s, void *object, unsigned long value)
-{
- unsigned long *canary = get_canary(s, object);
- *canary = get_canary_value(canary, value);
-}
-
-static inline void check_canary(struct kmem_cache *s, void *object, unsigned long value)
-{
- unsigned long *canary = get_canary(s, object);
- BUG_ON(*canary != get_canary_value(canary, value));
-}
-#else
-#define set_canary(s, object, value)
-#define check_canary(s, object, value)
-#endif
-
#ifdef CONFIG_SLUB_DEBUG
static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
static DEFINE_RAW_SPINLOCK(object_map_lock);
@@ -692,13 +637,13 @@ static inline void *restore_red_left(str
* Debug settings:
*/
#if defined(CONFIG_SLUB_DEBUG_ON)
-static slab_flags_t slub_debug __ro_after_init = DEBUG_DEFAULT_FLAGS;
+static slab_flags_t slub_debug = DEBUG_DEFAULT_FLAGS;
#else
-static slab_flags_t slub_debug __ro_after_init;
+static slab_flags_t slub_debug;
#endif
-static char *slub_debug_string __ro_after_init;
-static int disable_higher_order_debug __ro_after_init;
+static char *slub_debug_string;
+static int disable_higher_order_debug;
/*
* slub is about to manipulate internal object metadata. This memory lies
@@ -749,6 +694,26 @@ static void print_section(char *level, c
metadata_access_disable();
}
+/*
+ * See comment in calculate_sizes().
+ */
+static inline bool freeptr_outside_object(struct kmem_cache *s)
+{
+ return s->offset >= s->inuse;
+}
+
+/*
+ * Return offset of the end of info block which is inuse + free pointer if
+ * not overlapping with object.
+ */
+static inline unsigned int get_info_end(struct kmem_cache *s)
+{
+ if (freeptr_outside_object(s))
+ return s->inuse + sizeof(void *);
+ else
+ return s->inuse;
+}
+
static struct track *get_track(struct kmem_cache *s, void *object,
enum track_item alloc)
{
@@ -756,9 +721,6 @@ static struct track *get_track(struct km
p = object + get_info_end(s);
- if (IS_ENABLED(CONFIG_SLAB_CANARY))
- p = (void *)p + sizeof(void *);
-
return kasan_reset_tag(p + alloc);
}
@@ -891,9 +853,6 @@ static void print_trailer(struct kmem_ca
off = get_info_end(s);
- if (IS_ENABLED(CONFIG_SLAB_CANARY))
- off += sizeof(void *);
-
if (s->flags & SLAB_STORE_USER)
off += 2 * sizeof(struct track);
@@ -1029,9 +988,8 @@ skip_bug_print:
* Meta data starts here.
*
* A. Free pointer (if we cannot overwrite object on free)
- * B. Canary for SLAB_CANARY
- * C. Tracking data for SLAB_STORE_USER
- * D. Padding to reach required alignment boundary or at minimum
+ * B. Tracking data for SLAB_STORE_USER
+ * C. Padding to reach required alignment boundary or at minimum
* one word if debugging is on to be able to detect writes
* before the word boundary.
*
@@ -1049,9 +1007,6 @@ static int check_pad_bytes(struct kmem_c
{
unsigned long off = get_info_end(s); /* The end of info */
- if (IS_ENABLED(CONFIG_SLAB_CANARY))
- off += sizeof(void *);
-
if (s->flags & SLAB_STORE_USER)
/* We also have user information there */
off += 2 * sizeof(struct track);
@@ -1738,16 +1693,8 @@ static __always_inline void kfree_hook(v
}
static __always_inline bool slab_free_hook(struct kmem_cache *s,
- void *x, bool init, bool canary)
+ void *x, bool init)
{
- /*
- * Postpone setting the inactive canary until the metadata
- * has potentially been cleared at the end of this function.
- */
- if (canary) {
- check_canary(s, x, s->random_active);
- }
-
kmemleak_free_recursive(x, s->flags);
debug_check_no_locks_freed(x, s->object_size);
@@ -1776,14 +1723,7 @@ static __always_inline bool slab_free_ho
rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
memset((char *)kasan_reset_tag(x) + s->inuse, 0,
s->size - s->inuse - rsize);
- if (!IS_ENABLED(CONFIG_SLAB_SANITIZE_VERIFY) && s->ctor)
- s->ctor(x);
- }
-
- if (canary) {
- set_canary(s, x, s->random_inactive);
}
-
/* KASAN might put x into memory quarantine, delaying its reuse. */
return kasan_slab_free(s, x, init);
}
@@ -1798,7 +1738,7 @@ static inline bool slab_free_freelist_ho
void *old_tail = *tail ? *tail : *head;
if (is_kfence_address(next)) {
- slab_free_hook(s, next, false, false);
+ slab_free_hook(s, next, false);
return true;
}
@@ -1811,7 +1751,7 @@ static inline bool slab_free_freelist_ho
next = get_freepointer(s, object);
/* If object's reuse doesn't have to be delayed */
- if (!slab_free_hook(s, object, slab_want_init_on_free(s), true)) {
+ if (!slab_free_hook(s, object, slab_want_init_on_free(s))) {
/* Move object to the new freelist */
set_freepointer(s, object, *head);
*head = object;
@@ -1823,22 +1763,6 @@ static inline bool slab_free_freelist_ho
* accordingly if object's reuse is delayed.
*/
--(*cnt);
-
- /* Objects that are put into quarantine by KASAN will
- * still undergo free_consistency_checks(), which
- * checks whether the freelist pointer is valid if it
- * is located after the object (see check_object()).
- * Since this is the case for slab caches with
- * constructors, we need to fix the freelist pointer
- * after init_on_free has overwritten it.
- *
- * Note that doing this for all caches (not just ctor
- * ones) would cause a GPF due to KASAN poisoning and
- * the way set_freepointer() eventually dereferences
- * the freepointer.
- */
- if (slab_want_init_on_free(s) && s->ctor)
- set_freepointer(s, object, NULL);
}
} while (object != old_tail);
@@ -1852,9 +1776,8 @@ static void *setup_object(struct kmem_ca
void *object)
{
setup_object_debug(s, slab, object);
- set_canary(s, object, s->random_inactive);
object = kasan_init_slab_obj(s, object);
- if (unlikely(s->ctor) && !has_sanitize_verify(s)) {
+ if (unlikely(s->ctor)) {
kasan_unpoison_object_data(s, object);
s->ctor(object);
kasan_poison_object_data(s, object);
@@ -3301,24 +3224,7 @@ redo:
}
maybe_wipe_obj_freeptr(s, object);
-
- if (has_sanitize_verify(s) && object) {
- /* KASAN hasn't unpoisoned the object yet (this is done in the
- * post-alloc hook), so let's do it temporarily.
- */
- kasan_unpoison_object_data(s, object);
- BUG_ON(memchr_inv(object, 0, s->object_size));
- if (s->ctor)
- s->ctor(object);
- kasan_poison_object_data(s, object);
- } else {
- init = slab_want_init_on_alloc(gfpflags, s);
- }
-
- if (object) {
- check_canary(s, object, s->random_inactive);
- set_canary(s, object, s->random_active);
- }
+ init = slab_want_init_on_alloc(gfpflags, s);
out:
slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
@@ -3633,12 +3539,8 @@ static inline void free_large_kmalloc(st
{
unsigned int order = folio_order(folio);
-#ifdef CONFIG_BUG_ON_DATA_CORRUPTION
- BUG_ON(order == 0);
-#else
if (WARN_ON_ONCE(order == 0))
pr_warn_once("object pointer: 0x%p\n", object);
-#endif
kfree_hook(object);
mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
@@ -3696,7 +3598,7 @@ int build_detached_freelist(struct kmem_
}
if (is_kfence_address(object)) {
- slab_free_hook(df->s, object, false, false);
+ slab_free_hook(df->s, object, false);
__kfence_free(object);
p[size] = NULL; /* mark object processed */
return size;
@@ -3761,9 +3663,8 @@ int kmem_cache_alloc_bulk(struct kmem_ca
void **p)
{
struct kmem_cache_cpu *c;
- int i, k;
+ int i;
struct obj_cgroup *objcg = NULL;
- bool init = false;
/* memcg and kmem_cache debug support */
s = slab_pre_alloc_hook(s, &objcg, size, flags);
@@ -3822,35 +3723,12 @@ int kmem_cache_alloc_bulk(struct kmem_ca
local_unlock_irq(&s->cpu_slab->lock);
slub_put_cpu_ptr(s->cpu_slab);
- if (has_sanitize_verify(s)) {
- int j;
-
- for (j = 0; j < i; j++) {
- /* KASAN hasn't unpoisoned the object yet (this is done in the
- * post-alloc hook), so let's do it temporarily.
- */
- kasan_unpoison_object_data(s, p[j]);
- BUG_ON(memchr_inv(p[j], 0, s->object_size));
- if (s->ctor)
- s->ctor(p[j]);
- kasan_poison_object_data(s, p[j]);
- }
- } else {
- init = slab_want_init_on_alloc(flags, s);
- }
-
- for (k = 0; k < i; k++) {
- if (!is_kfence_address(p[k])) {
- check_canary(s, p[k], s->random_inactive);
- set_canary(s, p[k], s->random_active);
- }
- }
-
/*
* memcg and kmem_cache debug support and memory initialization.
* Done outside of the IRQ disabled fastpath loop.
*/
- slab_post_alloc_hook(s, objcg, flags, size, p, init);
+ slab_post_alloc_hook(s, objcg, flags, size, p,
+ slab_want_init_on_alloc(flags, s));
return i;
error:
slub_put_cpu_ptr(s->cpu_slab);
@@ -3880,9 +3758,9 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
* and increases the number of allocations possible without having to
* take the list_lock.
*/
-static unsigned int slub_min_order __ro_after_init;
-static unsigned int slub_max_order __ro_after_init = PAGE_ALLOC_COSTLY_ORDER;
-static unsigned int slub_min_objects __ro_after_init;
+static unsigned int slub_min_order;
+static unsigned int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static unsigned int slub_min_objects;
/*
* Calculate the order of allocation given an slab object size.
@@ -4064,7 +3942,6 @@ static void early_kmem_cache_node_alloc(
init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
init_tracking(kmem_cache_node, n);
#endif
- set_canary(kmem_cache_node, n, kmem_cache_node->random_active);
n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
slab->freelist = get_freepointer(kmem_cache_node, n);
slab->inuse = 1;
@@ -4238,9 +4115,6 @@ static int calculate_sizes(struct kmem_c
s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
}
- if (IS_ENABLED(CONFIG_SLAB_CANARY))
- size += sizeof(void *);
-
#ifdef CONFIG_SLUB_DEBUG
if (flags & SLAB_STORE_USER)
/*
@@ -4314,10 +4188,6 @@ static int kmem_cache_open(struct kmem_c
#ifdef CONFIG_SLAB_FREELIST_HARDENED
s->random = get_random_long();
#endif
-#ifdef CONFIG_SLAB_CANARY
- s->random_active = get_random_long();
- s->random_inactive = get_random_long();
-#endif
if (!calculate_sizes(s, -1))
goto error;
@@ -4646,9 +4516,6 @@ void __check_heap_object(const void *ptr
offset -= s->red_left_pad;
}
- if (!is_kfence)
- check_canary(s, (void *)ptr - offset, s->random_active);
-
/* Allow address range falling entirely within usercopy region. */
if (offset >= s->useroffset &&
offset - s->useroffset <= s->usersize &&
diff -rupN linux-hardened/mm/swap.c linux-5.17.1/mm/swap.c
--- linux-hardened/mm/swap.c 2022-04-05 20:57:16.768073666 +0900
+++ linux-5.17.1/mm/swap.c 2022-03-28 17:03:22.000000000 +0900
@@ -101,8 +101,6 @@ static void __put_single_page(struct pag
static void __put_compound_page(struct page *page)
{
- compound_page_dtor *dtor;
-
/*
* __page_cache_release() is supposed to be called for thp, not for
* hugetlb. This is because hugetlb page does never have PageLRU set
@@ -111,15 +109,7 @@ static void __put_compound_page(struct p
*/
if (!PageHuge(page))
__page_cache_release(page);
- dtor = get_compound_page_dtor(page);
- if (!PageHuge(page))
- BUG_ON(dtor != free_compound_page
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- && dtor != free_transhuge_page
-#endif
- );
-
- (*dtor)(page);
+ destroy_compound_page(page);
}
void __put_page(struct page *page)
diff -rupN linux-hardened/mm/util.c linux-5.17.1/mm/util.c
--- linux-hardened/mm/util.c 2022-04-05 20:57:16.774073742 +0900
+++ linux-5.17.1/mm/util.c 2022-03-28 17:03:22.000000000 +0900
@@ -348,9 +348,9 @@ unsigned long arch_randomize_brk(struct
{
/* Is the current task 32bit ? */
if (!IS_ENABLED(CONFIG_64BIT) || is_compat_task())
- return mm->brk + get_random_long() % SZ_32M + PAGE_SIZE;
+ return randomize_page(mm->brk, SZ_32M);
- return mm->brk + get_random_long() % SZ_1G + PAGE_SIZE;
+ return randomize_page(mm->brk, SZ_1G);
}
unsigned long arch_mmap_rnd(void)
diff -rupN linux-hardened/net/bluetooth/hci_sync.c linux-5.17.1/net/bluetooth/hci_sync.c
--- linux-hardened/net/bluetooth/hci_sync.c 2022-04-05 20:57:16.861074835 +0900
+++ linux-5.17.1/net/bluetooth/hci_sync.c 2022-03-28 17:03:22.000000000 +0900
@@ -2806,6 +2806,9 @@ static int hci_set_event_filter_sync(str
if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED))
return 0;
+ if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
+ return 0;
+
memset(&cp, 0, sizeof(cp));
cp.flt_type = flt_type;
@@ -2826,6 +2829,13 @@ static int hci_clear_event_filter_sync(s
if (!hci_dev_test_flag(hdev, HCI_EVENT_FILTER_CONFIGURED))
return 0;
+ /* In theory the state machine should not reach here unless
+ * a hci_set_event_filter_sync() call succeeds, but we do
+ * the check both for parity and as a future reminder.
+ */
+ if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
+ return 0;
+
return hci_set_event_filter_sync(hdev, HCI_FLT_CLEAR_ALL, 0x00,
BDADDR_ANY, 0x00);
}
@@ -4825,6 +4835,12 @@ static int hci_update_event_filter_sync(
if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED))
return 0;
+ /* Some fake CSR controllers lock up after setting this type of
+ * filter, so avoid sending the request altogether.
+ */
+ if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
+ return 0;
+
/* Always clear event filter when starting */
hci_clear_event_filter_sync(hdev);
diff -rupN linux-hardened/net/core/dev.c linux-5.17.1/net/core/dev.c
--- linux-hardened/net/core/dev.c 2022-04-05 20:57:16.938075803 +0900
+++ linux-5.17.1/net/core/dev.c 2022-03-28 17:03:22.000000000 +0900
@@ -4878,7 +4878,7 @@ int netif_rx_any_context(struct sk_buff
}
EXPORT_SYMBOL(netif_rx_any_context);
-static __latent_entropy void net_tx_action(void)
+static __latent_entropy void net_tx_action(struct softirq_action *h)
{
struct softnet_data *sd = this_cpu_ptr(&softnet_data);
@@ -6493,7 +6493,7 @@ static int napi_threaded_poll(void *data
return 0;
}
-static __latent_entropy void net_rx_action(void)
+static __latent_entropy void net_rx_action(struct softirq_action *h)
{
struct softnet_data *sd = this_cpu_ptr(&softnet_data);
unsigned long time_limit = jiffies +
diff -rupN linux-hardened/net/ipv4/Kconfig linux-5.17.1/net/ipv4/Kconfig
--- linux-hardened/net/ipv4/Kconfig 2022-04-05 20:56:08.493215438 +0900
+++ linux-5.17.1/net/ipv4/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -267,7 +267,6 @@ config IP_PIMSM_V2
config SYN_COOKIES
bool "IP: TCP syncookie support"
- default y
help
Normal TCP/IP networking is open to an attack known as "SYN
flooding". This denial-of-service attack prevents legitimate remote
@@ -743,26 +742,3 @@ config TCP_MD5SIG
on the Internet.
If unsure, say N.
-
-config TCP_SIMULT_CONNECT_DEFAULT_ON
- bool "Enable TCP simultaneous connect"
- help
- Enable TCP simultaneous connect that adds a weakness in Linux's strict
- implementation of TCP that allows two clients to connect to each other
- without either entering a listening state. The weakness allows an
- attacker to easily prevent a client from connecting to a known server
- provided the source port for the connection is guessed correctly.
-
- As the weakness could be used to prevent an antivirus or IPS from
- fetching updates, or prevent an SSL gateway from fetching a CRL, it
- should be eliminated by disabling this option. Though Linux is one of
- few operating systems supporting simultaneous connect, it has no
- legitimate use in practice and is rarely supported by firewalls.
-
- Disabling this may break TCP STUNT which is used by some applications
- for NAT traversal.
-
- This setting can be overridden at runtime via the
- net.ipv4.tcp_simult_connect sysctl.
-
- If unsure, say N.
diff -rupN linux-hardened/net/ipv4/sysctl_net_ipv4.c linux-5.17.1/net/ipv4/sysctl_net_ipv4.c
--- linux-hardened/net/ipv4/sysctl_net_ipv4.c 2022-04-05 20:57:17.068077438 +0900
+++ linux-5.17.1/net/ipv4/sysctl_net_ipv4.c 2022-03-28 17:03:22.000000000 +0900
@@ -585,15 +585,6 @@ static struct ctl_table ipv4_table[] = {
.extra1 = &sysctl_fib_sync_mem_min,
.extra2 = &sysctl_fib_sync_mem_max,
},
- {
- .procname = "tcp_simult_connect",
- .data = &sysctl_tcp_simult_connect,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_minmax,
- .extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
- },
{ }
};
diff -rupN linux-hardened/net/ipv4/tcp_input.c linux-5.17.1/net/ipv4/tcp_input.c
--- linux-hardened/net/ipv4/tcp_input.c 2022-04-05 20:57:17.079077576 +0900
+++ linux-5.17.1/net/ipv4/tcp_input.c 2022-03-28 17:03:22.000000000 +0900
@@ -82,7 +82,6 @@
#include <net/mptcp.h>
int sysctl_tcp_max_orphans __read_mostly = NR_FILE;
-int sysctl_tcp_simult_connect __read_mostly = IS_ENABLED(CONFIG_TCP_SIMULT_CONNECT_DEFAULT_ON);
#define FLAG_DATA 0x01 /* Incoming frame contained data. */
#define FLAG_WIN_UPDATE 0x02 /* Incoming ACK was a window update. */
@@ -6273,7 +6272,7 @@ discard:
tcp_paws_reject(&tp->rx_opt, 0))
goto discard_and_undo;
- if (th->syn && sysctl_tcp_simult_connect) {
+ if (th->syn) {
/* We see SYN without ACK. It is attempt of
* simultaneous connect with crossed SYNs.
* Particularly, it can be connect to self.
diff -rupN linux-hardened/net/llc/af_llc.c linux-5.17.1/net/llc/af_llc.c
--- linux-hardened/net/llc/af_llc.c 2022-04-05 20:57:17.152078494 +0900
+++ linux-5.17.1/net/llc/af_llc.c 2022-03-28 17:03:22.000000000 +0900
@@ -275,6 +275,7 @@ static int llc_ui_autobind(struct socket
{
struct sock *sk = sock->sk;
struct llc_sock *llc = llc_sk(sk);
+ struct net_device *dev = NULL;
struct llc_sap *sap;
int rc = -EINVAL;
@@ -286,16 +287,15 @@ static int llc_ui_autobind(struct socket
goto out;
rc = -ENODEV;
if (sk->sk_bound_dev_if) {
- llc->dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
- if (llc->dev && addr->sllc_arphrd != llc->dev->type) {
- dev_put(llc->dev);
- llc->dev = NULL;
+ dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
+ if (dev && addr->sllc_arphrd != dev->type) {
+ dev_put(dev);
+ dev = NULL;
}
} else
- llc->dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
- if (!llc->dev)
+ dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
+ if (!dev)
goto out;
- netdev_tracker_alloc(llc->dev, &llc->dev_tracker, GFP_KERNEL);
rc = -EUSERS;
llc->laddr.lsap = llc_ui_autoport();
if (!llc->laddr.lsap)
@@ -304,6 +304,12 @@ static int llc_ui_autobind(struct socket
sap = llc_sap_open(llc->laddr.lsap, NULL);
if (!sap)
goto out;
+
+ /* Note: We do not expect errors from this point. */
+ llc->dev = dev;
+ netdev_tracker_alloc(llc->dev, &llc->dev_tracker, GFP_KERNEL);
+ dev = NULL;
+
memcpy(llc->laddr.mac, llc->dev->dev_addr, IFHWADDRLEN);
memcpy(&llc->addr, addr, sizeof(llc->addr));
/* assign new connection to its SAP */
@@ -311,6 +317,7 @@ static int llc_ui_autobind(struct socket
sock_reset_flag(sk, SOCK_ZAPPED);
rc = 0;
out:
+ dev_put(dev);
return rc;
}
@@ -333,6 +340,7 @@ static int llc_ui_bind(struct socket *so
struct sockaddr_llc *addr = (struct sockaddr_llc *)uaddr;
struct sock *sk = sock->sk;
struct llc_sock *llc = llc_sk(sk);
+ struct net_device *dev = NULL;
struct llc_sap *sap;
int rc = -EINVAL;
@@ -348,25 +356,27 @@ static int llc_ui_bind(struct socket *so
rc = -ENODEV;
rcu_read_lock();
if (sk->sk_bound_dev_if) {
- llc->dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
- if (llc->dev) {
+ dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
+ if (dev) {
if (is_zero_ether_addr(addr->sllc_mac))
- memcpy(addr->sllc_mac, llc->dev->dev_addr,
+ memcpy(addr->sllc_mac, dev->dev_addr,
IFHWADDRLEN);
- if (addr->sllc_arphrd != llc->dev->type ||
+ if (addr->sllc_arphrd != dev->type ||
!ether_addr_equal(addr->sllc_mac,
- llc->dev->dev_addr)) {
+ dev->dev_addr)) {
rc = -EINVAL;
- llc->dev = NULL;
+ dev = NULL;
}
}
- } else
- llc->dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
+ } else {
+ dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
addr->sllc_mac);
- dev_hold_track(llc->dev, &llc->dev_tracker, GFP_ATOMIC);
+ }
+ dev_hold(dev);
rcu_read_unlock();
- if (!llc->dev)
+ if (!dev)
goto out;
+
if (!addr->sllc_sap) {
rc = -EUSERS;
addr->sllc_sap = llc_ui_autoport();
@@ -398,6 +408,12 @@ static int llc_ui_bind(struct socket *so
goto out_put;
}
}
+
+ /* Note: We do not expect errors from this point. */
+ llc->dev = dev;
+ netdev_tracker_alloc(llc->dev, &llc->dev_tracker, GFP_KERNEL);
+ dev = NULL;
+
llc->laddr.lsap = addr->sllc_sap;
memcpy(llc->laddr.mac, addr->sllc_mac, IFHWADDRLEN);
memcpy(&llc->addr, addr, sizeof(llc->addr));
@@ -408,6 +424,7 @@ static int llc_ui_bind(struct socket *so
out_put:
llc_sap_put(sap);
out:
+ dev_put(dev);
release_sock(sk);
return rc;
}
diff -rupN linux-hardened/net/mac80211/cfg.c linux-5.17.1/net/mac80211/cfg.c
--- linux-hardened/net/mac80211/cfg.c 2022-04-05 20:57:17.160078594 +0900
+++ linux-5.17.1/net/mac80211/cfg.c 2022-03-28 17:03:22.000000000 +0900
@@ -2148,14 +2148,12 @@ static int copy_mesh_setup(struct ieee80
const struct mesh_setup *setup)
{
u8 *new_ie;
- const u8 *old_ie;
struct ieee80211_sub_if_data *sdata = container_of(ifmsh,
struct ieee80211_sub_if_data, u.mesh);
int i;
/* allocate information elements */
new_ie = NULL;
- old_ie = ifmsh->ie;
if (setup->ie_len) {
new_ie = kmemdup(setup->ie, setup->ie_len,
@@ -2165,7 +2163,6 @@ static int copy_mesh_setup(struct ieee80
}
ifmsh->ie_len = setup->ie_len;
ifmsh->ie = new_ie;
- kfree(old_ie);
/* now copy the rest of the setup parameters */
ifmsh->mesh_id_len = setup->mesh_id_len;
diff -rupN linux-hardened/net/netfilter/nf_tables_api.c linux-5.17.1/net/netfilter/nf_tables_api.c
--- linux-hardened/net/netfilter/nf_tables_api.c 2022-04-05 20:57:17.261079864 +0900
+++ linux-5.17.1/net/netfilter/nf_tables_api.c 2022-03-28 17:03:22.000000000 +0900
@@ -9275,17 +9275,23 @@ int nft_parse_u32_check(const struct nla
}
EXPORT_SYMBOL_GPL(nft_parse_u32_check);
-static unsigned int nft_parse_register(const struct nlattr *attr)
+static unsigned int nft_parse_register(const struct nlattr *attr, u32 *preg)
{
unsigned int reg;
reg = ntohl(nla_get_be32(attr));
switch (reg) {
case NFT_REG_VERDICT...NFT_REG_4:
- return reg * NFT_REG_SIZE / NFT_REG32_SIZE;
+ *preg = reg * NFT_REG_SIZE / NFT_REG32_SIZE;
+ break;
+ case NFT_REG32_00...NFT_REG32_15:
+ *preg = reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
+ break;
default:
- return reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
+ return -ERANGE;
}
+
+ return 0;
}
/**
@@ -9327,7 +9333,10 @@ int nft_parse_register_load(const struct
u32 reg;
int err;
- reg = nft_parse_register(attr);
+ err = nft_parse_register(attr, &reg);
+ if (err < 0)
+ return err;
+
err = nft_validate_register_load(reg, len);
if (err < 0)
return err;
@@ -9382,7 +9391,10 @@ int nft_parse_register_store(const struc
int err;
u32 reg;
- reg = nft_parse_register(attr);
+ err = nft_parse_register(attr, &reg);
+ if (err < 0)
+ return err;
+
err = nft_validate_register_store(ctx, reg, data, type, len);
if (err < 0)
return err;
diff -rupN linux-hardened/net/netfilter/nf_tables_core.c linux-5.17.1/net/netfilter/nf_tables_core.c
--- linux-hardened/net/netfilter/nf_tables_core.c 2022-04-05 20:57:17.262079876 +0900
+++ linux-5.17.1/net/netfilter/nf_tables_core.c 2022-03-28 17:03:22.000000000 +0900
@@ -201,7 +201,7 @@ nft_do_chain(struct nft_pktinfo *pkt, vo
const struct nft_rule_dp *rule, *last_rule;
const struct net *net = nft_net(pkt);
const struct nft_expr *expr, *last;
- struct nft_regs regs;
+ struct nft_regs regs = {};
unsigned int stackptr = 0;
struct nft_jumpstack jumpstack[NFT_JUMP_STACK_SIZE];
bool genbit = READ_ONCE(net->nft.gencursor);
diff -rupN linux-hardened/scripts/Makefile.modpost linux-5.17.1/scripts/Makefile.modpost
--- linux-hardened/scripts/Makefile.modpost 2022-04-05 20:57:17.576083823 +0900
+++ linux-5.17.1/scripts/Makefile.modpost 2022-03-28 17:03:22.000000000 +0900
@@ -48,7 +48,6 @@ MODPOST = scripts/mod/modpost \
$(if $(CONFIG_MODVERSIONS),-m) \
$(if $(CONFIG_MODULE_SRCVERSION_ALL),-a) \
$(if $(CONFIG_SECTION_MISMATCH_WARN_ONLY),,-E) \
- $(if $(CONFIG_DEBUG_WRITABLE_FUNCTION_POINTERS_VERBOSE),-f) \
-o $@
ifdef MODPOST_VMLINUX
diff -rupN linux-hardened/scripts/gcc-plugins/Kconfig linux-5.17.1/scripts/gcc-plugins/Kconfig
--- linux-hardened/scripts/gcc-plugins/Kconfig 2022-04-05 20:57:17.604084175 +0900
+++ linux-5.17.1/scripts/gcc-plugins/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -39,11 +39,6 @@ config GCC_PLUGIN_LATENT_ENTROPY
is some slowdown of the boot process (about 0.5%) and fork and
irq processing.
- When extra_latent_entropy is passed on the kernel command line,
- entropy will be extracted from up to the first 4GB of RAM while the
- runtime memory allocator is being initialized. This costs even more
- slowdown of the boot process.
-
Note that entropy extracted this way is not cryptographically
secure!
diff -rupN linux-hardened/scripts/mod/modpost.c linux-5.17.1/scripts/mod/modpost.c
--- linux-hardened/scripts/mod/modpost.c 2022-04-05 20:57:17.633084540 +0900
+++ linux-5.17.1/scripts/mod/modpost.c 2022-03-28 17:03:22.000000000 +0900
@@ -33,8 +33,6 @@ static int warn_unresolved = 0;
/* How a symbol is exported */
static int sec_mismatch_count = 0;
static int sec_mismatch_warn_only = true;
-static int writable_fptr_count = 0;
-static int writable_fptr_verbose = false;
/* ignore missing files */
static int ignore_missing_files;
/* If set to 1, only warn (instead of error) about missing ns imports */
@@ -998,7 +996,6 @@ enum mismatch {
ANY_EXIT_TO_ANY_INIT,
EXPORT_TO_INIT_EXIT,
EXTABLE_TO_NON_TEXT,
- DATA_TO_TEXT
};
/**
@@ -1125,12 +1122,6 @@ static const struct sectioncheck section
.good_tosec = {ALL_TEXT_SECTIONS , NULL},
.mismatch = EXTABLE_TO_NON_TEXT,
.handler = extable_mismatch_handler,
-},
-/* Do not reference code from writable data */
-{
- .fromsec = { DATA_SECTIONS, NULL },
- .bad_tosec = { ALL_TEXT_SECTIONS, NULL },
- .mismatch = DATA_TO_TEXT
}
};
@@ -1318,10 +1309,10 @@ static Elf_Sym *find_elf_symbol(struct e
continue;
if (!is_valid_name(elf, sym))
continue;
+ if (sym->st_value == addr)
+ return sym;
/* Find a symbol nearby - addr are maybe negative */
d = sym->st_value - addr;
- if (d == 0)
- return sym;
if (d < 0)
d = addr - sym->st_value;
if (d < distance) {
@@ -1456,13 +1447,7 @@ static void report_sec_mismatch(const ch
char *prl_from;
char *prl_to;
- if (mismatch->mismatch == DATA_TO_TEXT) {
- writable_fptr_count++;
- if (!writable_fptr_verbose)
- return;
- } else {
- sec_mismatch_count++;
- }
+ sec_mismatch_count++;
get_pretty_name(from_is_func, &from, &from_p);
get_pretty_name(to_is_func, &to, &to_p);
@@ -1584,12 +1569,6 @@ static void report_sec_mismatch(const ch
fatal("There's a special handler for this mismatch type, "
"we should never get here.");
break;
- case DATA_TO_TEXT:
- fprintf(stderr,
- "The %s %s:%s references\n"
- "the %s %s:%s%s\n",
- from, fromsec, fromsym, to, tosec, tosym, to_p);
- break;
}
fprintf(stderr, "\n");
}
@@ -2539,7 +2518,7 @@ int main(int argc, char **argv)
struct dump_list *dump_read_start = NULL;
struct dump_list **dump_read_iter = &dump_read_start;
- while ((opt = getopt(argc, argv, "ei:fmnT:o:awENd:")) != -1) {
+ while ((opt = getopt(argc, argv, "ei:mnT:o:awENd:")) != -1) {
switch (opt) {
case 'e':
external_module = 1;
@@ -2550,9 +2529,6 @@ int main(int argc, char **argv)
(*dump_read_iter)->file = optarg;
dump_read_iter = &(*dump_read_iter)->next;
break;
- case 'f':
- writable_fptr_verbose = true;
- break;
case 'm':
modversions = 1;
break;
@@ -2648,11 +2624,6 @@ int main(int argc, char **argv)
nr_unresolved - MAX_UNRESOLVED_REPORTS);
free(buf.p);
- if (writable_fptr_count && !writable_fptr_verbose)
- warn("modpost: Found %d writable function pointer%s.\n"
- "To see full details build your kernel with:\n"
- "'make CONFIG_DEBUG_WRITABLE_FUNCTION_POINTERS_VERBOSE=y'\n",
- writable_fptr_count, (writable_fptr_count == 1 ? "" : "s"));
return error_occurred ? 1 : 0;
}
diff -rupN linux-hardened/security/Kconfig linux-5.17.1/security/Kconfig
--- linux-hardened/security/Kconfig 2022-04-05 20:57:17.644084678 +0900
+++ linux-5.17.1/security/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -9,7 +9,7 @@ source "security/keys/Kconfig"
config SECURITY_DMESG_RESTRICT
bool "Restrict unprivileged access to the kernel syslog"
- default y
+ default n
help
This enforces restrictions on unprivileged users reading the kernel
syslog via dmesg(8).
@@ -19,34 +19,10 @@ config SECURITY_DMESG_RESTRICT
If you are unsure how to answer this question, answer N.
-config SECURITY_PERF_EVENTS_RESTRICT
- bool "Restrict unprivileged use of performance events"
- depends on PERF_EVENTS
- default y
- help
- If you say Y here, the kernel.perf_event_paranoid sysctl
- will be set to 3 by default, and no unprivileged use of the
- perf_event_open syscall will be permitted unless it is
- changed.
-
-config SECURITY_TIOCSTI_RESTRICT
- bool "Restrict unprivileged use of tiocsti command injection"
- default y
- help
- This enforces restrictions on unprivileged users injecting commands
- into other processes which share a tty session using the TIOCSTI
- ioctl. This option makes TIOCSTI use require CAP_SYS_ADMIN.
-
- If this option is not selected, no restrictions will be enforced
- unless the tiocsti_restrict sysctl is explicitly set to (1).
-
- If you are unsure how to answer this question, answer N.
-
config SECURITY
bool "Enable different security models"
depends on SYSFS
depends on MULTIUSER
- default y
help
This allows you to choose different security modules to be
configured into your kernel.
@@ -72,7 +48,6 @@ config SECURITYFS
config SECURITY_NETWORK
bool "Socket and Networking Security Hooks"
depends on SECURITY
- default y
help
This enables the socket and networking security hooks.
If enabled, a security module can use these hooks to
@@ -179,7 +154,6 @@ config HARDENED_USERCOPY
bool "Harden memory copies between kernel and userspace"
depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
imply STRICT_DEVMEM
- default y
help
This option checks for obviously wrong memory regions when
copying memory to/from the kernel (via copy_to_user() and
@@ -206,7 +180,6 @@ config FORTIFY_SOURCE
# https://bugs.llvm.org/show_bug.cgi?id=50322
# https://bugs.llvm.org/show_bug.cgi?id=41459
depends on !CC_IS_CLANG
- default y
help
Detect overflows of buffers in common string and memory functions
where the compiler can determine and validate the buffer sizes.
diff -rupN linux-hardened/security/Kconfig.hardening linux-5.17.1/security/Kconfig.hardening
--- linux-hardened/security/Kconfig.hardening 2022-04-05 20:57:17.644084678 +0900
+++ linux-5.17.1/security/Kconfig.hardening 2022-03-28 17:03:22.000000000 +0900
@@ -208,7 +208,6 @@ config STACKLEAK_RUNTIME_DISABLE
config INIT_ON_ALLOC_DEFAULT_ON
bool "Enable heap memory zeroing on allocation by default"
- default yes
help
This has the effect of setting "init_on_alloc=1" on the kernel
command line. This can be disabled with "init_on_alloc=0".
@@ -221,7 +220,6 @@ config INIT_ON_ALLOC_DEFAULT_ON
config INIT_ON_FREE_DEFAULT_ON
bool "Enable heap memory zeroing on free by default"
- default yes
help
This has the effect of setting "init_on_free=1" on the kernel
command line. This can be disabled with "init_on_free=0".
@@ -256,21 +254,6 @@ config ZERO_CALL_USED_REGS
be evaluated for suitability. For example, x86_64 grows by less
than 1%, and arm64 grows by about 5%.
-config PAGE_SANITIZE_VERIFY
- bool "Verify sanitized pages"
- default y
- help
- When init_on_free is enabled, verify that newly allocated pages
- are zeroed to detect write-after-free bugs.
-
-config SLAB_SANITIZE_VERIFY
- bool "Verify sanitized SLAB allocations"
- default y
- depends on !KASAN
- help
- When init_on_free is enabled, verify that newly allocated slab
- objects are zeroed to detect write-after-free bugs.
-
endmenu
endmenu
diff -rupN linux-hardened/security/selinux/Kconfig linux-5.17.1/security/selinux/Kconfig
--- linux-hardened/security/selinux/Kconfig 2022-04-05 20:56:09.512228247 +0900
+++ linux-5.17.1/security/selinux/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -3,7 +3,7 @@ config SECURITY_SELINUX
bool "NSA SELinux Support"
depends on SECURITY_NETWORK && AUDIT && NET && INET
select NETWORK_SECMARK
- default y
+ default n
help
This selects NSA Security-Enhanced Linux (SELinux).
You will also need a policy configuration and a labeled filesystem.
@@ -70,6 +70,29 @@ config SECURITY_SELINUX_AVC_STATS
/sys/fs/selinux/avc/cache_stats, which may be monitored via
tools such as avcstat.
+config SECURITY_SELINUX_CHECKREQPROT_VALUE
+ int "NSA SELinux checkreqprot default value"
+ depends on SECURITY_SELINUX
+ range 0 1
+ default 0
+ help
+ This option sets the default value for the 'checkreqprot' flag
+ that determines whether SELinux checks the protection requested
+ by the application or the protection that will be applied by the
+ kernel (including any implied execute for read-implies-exec) for
+ mmap and mprotect calls. If this option is set to 0 (zero),
+ SELinux will default to checking the protection that will be applied
+ by the kernel. If this option is set to 1 (one), SELinux will
+ default to checking the protection requested by the application.
+ The checkreqprot flag may be changed from the default via the
+ 'checkreqprot=' boot parameter. It may also be changed at runtime
+ via /sys/fs/selinux/checkreqprot if authorized by policy.
+
+ WARNING: this option is deprecated and will be removed in a future
+ kernel release.
+
+ If you are unsure how to answer this question, answer 0.
+
config SECURITY_SELINUX_SIDTAB_HASH_BITS
int "NSA SELinux sidtab hashtable size"
depends on SECURITY_SELINUX
diff -rupN linux-hardened/security/selinux/hooks.c linux-5.17.1/security/selinux/hooks.c
--- linux-hardened/security/selinux/hooks.c 2022-04-05 20:57:17.683085168 +0900
+++ linux-5.17.1/security/selinux/hooks.c 2022-03-28 17:03:22.000000000 +0900
@@ -136,7 +136,21 @@ static int __init selinux_enabled_setup(
__setup("selinux=", selinux_enabled_setup);
#endif
-static const unsigned int selinux_checkreqprot_boot;
+static unsigned int selinux_checkreqprot_boot =
+ CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE;
+
+static int __init checkreqprot_setup(char *str)
+{
+ unsigned long checkreqprot;
+
+ if (!kstrtoul(str, 0, &checkreqprot)) {
+ selinux_checkreqprot_boot = checkreqprot ? 1 : 0;
+ if (checkreqprot)
+ pr_warn("SELinux: checkreqprot set to 1 via kernel parameter. This is deprecated and will be rejected in a future kernel release.\n");
+ }
+ return 1;
+}
+__setup("checkreqprot=", checkreqprot_setup);
/**
* selinux_secmark_enabled - Check to see if SECMARK is currently enabled
diff -rupN linux-hardened/security/selinux/selinuxfs.c linux-5.17.1/security/selinux/selinuxfs.c
--- linux-hardened/security/selinux/selinuxfs.c 2022-04-05 20:57:17.689085244 +0900
+++ linux-5.17.1/security/selinux/selinuxfs.c 2022-03-28 17:03:22.000000000 +0900
@@ -748,9 +748,18 @@ static ssize_t sel_write_checkreqprot(st
return PTR_ERR(page);
length = -EINVAL;
- if (sscanf(page, "%u", &new_value) != 1 || new_value)
+ if (sscanf(page, "%u", &new_value) != 1)
goto out;
+ if (new_value) {
+ char comm[sizeof(current->comm)];
+
+ memcpy(comm, current->comm, sizeof(comm));
+ pr_warn_once("SELinux: %s (%d) set checkreqprot to 1. This is deprecated and will be rejected in a future kernel release.\n",
+ comm, current->pid);
+ }
+
+ checkreqprot_set(fsi->state, (new_value ? 1 : 0));
length = count;
selinux_ima_measure_state(fsi->state);
diff -rupN linux-hardened/security/yama/Kconfig linux-5.17.1/security/yama/Kconfig
--- linux-hardened/security/yama/Kconfig 2022-04-05 20:56:09.555228787 +0900
+++ linux-5.17.1/security/yama/Kconfig 2022-03-28 17:03:22.000000000 +0900
@@ -2,7 +2,7 @@
config SECURITY_YAMA
bool "Yama support"
depends on SECURITY
- default y
+ default n
help
This selects Yama, which extends DAC support with additional
system-wide security settings beyond regular Linux discretionary
diff -rupN linux-hardened/sound/core/oss/pcm_oss.c linux-5.17.1/sound/core/oss/pcm_oss.c
--- linux-hardened/sound/core/oss/pcm_oss.c 2022-04-05 20:57:17.720085633 +0900
+++ linux-5.17.1/sound/core/oss/pcm_oss.c 2022-03-28 17:03:22.000000000 +0900
@@ -774,6 +774,11 @@ static int snd_pcm_oss_period_size(struc
if (oss_period_size < 16)
return -EINVAL;
+
+ /* don't allocate too large period; 1MB period must be enough */
+ if (oss_period_size > 1024 * 1024)
+ return -ENOMEM;
+
runtime->oss.period_bytes = oss_period_size;
runtime->oss.period_frames = 1;
runtime->oss.periods = oss_periods;
@@ -1043,10 +1048,9 @@ static int snd_pcm_oss_change_params_loc
goto failure;
}
#endif
- oss_period_size *= oss_frame_size;
-
- oss_buffer_size = oss_period_size * runtime->oss.periods;
- if (oss_buffer_size < 0) {
+ oss_period_size = array_size(oss_period_size, oss_frame_size);
+ oss_buffer_size = array_size(oss_period_size, runtime->oss.periods);
+ if (oss_buffer_size <= 0) {
err = -EINVAL;
goto failure;
}
diff -rupN linux-hardened/sound/core/oss/pcm_plugin.c linux-5.17.1/sound/core/oss/pcm_plugin.c
--- linux-hardened/sound/core/oss/pcm_plugin.c 2022-04-05 20:57:17.721085646 +0900
+++ linux-5.17.1/sound/core/oss/pcm_plugin.c 2022-03-28 17:03:22.000000000 +0900
@@ -62,7 +62,10 @@ static int snd_pcm_plugin_alloc(struct s
width = snd_pcm_format_physical_width(format->format);
if (width < 0)
return width;
- size = frames * format->channels * width;
+ size = array3_size(frames, format->channels, width);
+ /* check for too large period size once again */
+ if (size > 1024 * 1024)
+ return -ENOMEM;
if (snd_BUG_ON(size % 8))
return -ENXIO;
size /= 8;
diff -rupN linux-hardened/sound/core/pcm.c linux-5.17.1/sound/core/pcm.c
--- linux-hardened/sound/core/pcm.c 2022-04-05 20:57:17.722085659 +0900
+++ linux-5.17.1/sound/core/pcm.c 2022-03-28 17:03:22.000000000 +0900
@@ -969,6 +969,7 @@ int snd_pcm_attach_substream(struct snd_
init_waitqueue_head(&runtime->tsleep);
runtime->status->state = SNDRV_PCM_STATE_OPEN;
+ mutex_init(&runtime->buffer_mutex);
substream->runtime = runtime;
substream->private_data = pcm->private_data;
@@ -1002,6 +1003,7 @@ void snd_pcm_detach_substream(struct snd
} else {
substream->runtime = NULL;
}
+ mutex_destroy(&runtime->buffer_mutex);
kfree(runtime);
put_pid(substream->pid);
substream->pid = NULL;
diff -rupN linux-hardened/sound/core/pcm_lib.c linux-5.17.1/sound/core/pcm_lib.c
--- linux-hardened/sound/core/pcm_lib.c 2022-04-05 20:57:17.725085696 +0900
+++ linux-5.17.1/sound/core/pcm_lib.c 2022-03-28 17:03:22.000000000 +0900
@@ -1906,9 +1906,11 @@ static int wait_for_avail(struct snd_pcm
if (avail >= runtime->twake)
break;
snd_pcm_stream_unlock_irq(substream);
+ mutex_unlock(&runtime->buffer_mutex);
tout = schedule_timeout(wait_time);
+ mutex_lock(&runtime->buffer_mutex);
snd_pcm_stream_lock_irq(substream);
set_current_state(TASK_INTERRUPTIBLE);
switch (runtime->status->state) {
@@ -2219,6 +2221,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(str
nonblock = !!(substream->f_flags & O_NONBLOCK);
+ mutex_lock(&runtime->buffer_mutex);
snd_pcm_stream_lock_irq(substream);
err = pcm_accessible_state(runtime);
if (err < 0)
@@ -2310,6 +2313,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(str
if (xfer > 0 && err >= 0)
snd_pcm_update_state(substream, runtime);
snd_pcm_stream_unlock_irq(substream);
+ mutex_unlock(&runtime->buffer_mutex);
return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
}
EXPORT_SYMBOL(__snd_pcm_lib_xfer);
diff -rupN linux-hardened/sound/core/pcm_memory.c linux-5.17.1/sound/core/pcm_memory.c
--- linux-hardened/sound/core/pcm_memory.c 2022-04-05 20:57:17.725085696 +0900
+++ linux-5.17.1/sound/core/pcm_memory.c 2022-03-28 17:03:22.000000000 +0900
@@ -163,19 +163,20 @@ static void snd_pcm_lib_preallocate_proc
size_t size;
struct snd_dma_buffer new_dmab;
+ mutex_lock(&substream->pcm->open_mutex);
if (substream->runtime) {
buffer->error = -EBUSY;
- return;
+ goto unlock;
}
if (!snd_info_get_line(buffer, line, sizeof(line))) {
snd_info_get_str(str, line, sizeof(str));
size = simple_strtoul(str, NULL, 10) * 1024;
if ((size != 0 && size < 8192) || size > substream->dma_max) {
buffer->error = -EINVAL;
- return;
+ goto unlock;
}
if (substream->dma_buffer.bytes == size)
- return;
+ goto unlock;
memset(&new_dmab, 0, sizeof(new_dmab));
new_dmab.dev = substream->dma_buffer.dev;
if (size > 0) {
@@ -189,7 +190,7 @@ static void snd_pcm_lib_preallocate_proc
substream->pcm->card->number, substream->pcm->device,
substream->stream ? 'c' : 'p', substream->number,
substream->pcm->name, size);
- return;
+ goto unlock;
}
substream->buffer_bytes_max = size;
} else {
@@ -201,6 +202,8 @@ static void snd_pcm_lib_preallocate_proc
} else {
buffer->error = -EINVAL;
}
+ unlock:
+ mutex_unlock(&substream->pcm->open_mutex);
}
static inline void preallocate_info_init(struct snd_pcm_substream *substream)
diff -rupN linux-hardened/sound/core/pcm_native.c linux-5.17.1/sound/core/pcm_native.c
--- linux-hardened/sound/core/pcm_native.c 2022-04-05 20:57:17.728085734 +0900
+++ linux-5.17.1/sound/core/pcm_native.c 2022-03-28 17:03:22.000000000 +0900
@@ -685,33 +685,40 @@ static int snd_pcm_hw_params_choose(stru
return 0;
}
+#if IS_ENABLED(CONFIG_SND_PCM_OSS)
+#define is_oss_stream(substream) ((substream)->oss.oss)
+#else
+#define is_oss_stream(substream) false
+#endif
+
static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params *params)
{
struct snd_pcm_runtime *runtime;
- int err, usecs;
+ int err = 0, usecs;
unsigned int bits;
snd_pcm_uframes_t frames;
if (PCM_RUNTIME_CHECK(substream))
return -ENXIO;
runtime = substream->runtime;
+ mutex_lock(&runtime->buffer_mutex);
snd_pcm_stream_lock_irq(substream);
switch (runtime->status->state) {
case SNDRV_PCM_STATE_OPEN:
case SNDRV_PCM_STATE_SETUP:
case SNDRV_PCM_STATE_PREPARED:
+ if (!is_oss_stream(substream) &&
+ atomic_read(&substream->mmap_count))
+ err = -EBADFD;
break;
default:
- snd_pcm_stream_unlock_irq(substream);
- return -EBADFD;
+ err = -EBADFD;
+ break;
}
snd_pcm_stream_unlock_irq(substream);
-#if IS_ENABLED(CONFIG_SND_PCM_OSS)
- if (!substream->oss.oss)
-#endif
- if (atomic_read(&substream->mmap_count))
- return -EBADFD;
+ if (err)
+ goto unlock;
snd_pcm_sync_stop(substream, true);
@@ -799,16 +806,21 @@ static int snd_pcm_hw_params(struct snd_
if (usecs >= 0)
cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
usecs);
- return 0;
+ err = 0;
_error:
- /* hardware might be unusable from this time,
- so we force application to retry to set
- the correct hardware parameter settings */
- snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
- if (substream->ops->hw_free != NULL)
- substream->ops->hw_free(substream);
- if (substream->managed_buffer_alloc)
- snd_pcm_lib_free_pages(substream);
+ if (err) {
+ /* hardware might be unusable from this time,
+ * so we force application to retry to set
+ * the correct hardware parameter settings
+ */
+ snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+ if (substream->ops->hw_free != NULL)
+ substream->ops->hw_free(substream);
+ if (substream->managed_buffer_alloc)
+ snd_pcm_lib_free_pages(substream);
+ }
+ unlock:
+ mutex_unlock(&runtime->buffer_mutex);
return err;
}
@@ -848,26 +860,31 @@ static int do_hw_free(struct snd_pcm_sub
static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
{
struct snd_pcm_runtime *runtime;
- int result;
+ int result = 0;
if (PCM_RUNTIME_CHECK(substream))
return -ENXIO;
runtime = substream->runtime;
+ mutex_lock(&runtime->buffer_mutex);
snd_pcm_stream_lock_irq(substream);
switch (runtime->status->state) {
case SNDRV_PCM_STATE_SETUP:
case SNDRV_PCM_STATE_PREPARED:
+ if (atomic_read(&substream->mmap_count))
+ result = -EBADFD;
break;
default:
- snd_pcm_stream_unlock_irq(substream);
- return -EBADFD;
+ result = -EBADFD;
+ break;
}
snd_pcm_stream_unlock_irq(substream);
- if (atomic_read(&substream->mmap_count))
- return -EBADFD;
+ if (result)
+ goto unlock;
result = do_hw_free(substream);
snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
+ unlock:
+ mutex_unlock(&runtime->buffer_mutex);
return result;
}
@@ -1173,15 +1190,17 @@ struct action_ops {
static int snd_pcm_action_group(const struct action_ops *ops,
struct snd_pcm_substream *substream,
snd_pcm_state_t state,
- bool do_lock)
+ bool stream_lock)
{
struct snd_pcm_substream *s = NULL;
struct snd_pcm_substream *s1;
int res = 0, depth = 1;
snd_pcm_group_for_each_entry(s, substream) {
- if (do_lock && s != substream) {
- if (s->pcm->nonatomic)
+ if (s != substream) {
+ if (!stream_lock)
+ mutex_lock_nested(&s->runtime->buffer_mutex, depth);
+ else if (s->pcm->nonatomic)
mutex_lock_nested(&s->self_group.mutex, depth);
else
spin_lock_nested(&s->self_group.lock, depth);
@@ -1209,18 +1228,18 @@ static int snd_pcm_action_group(const st
ops->post_action(s, state);
}
_unlock:
- if (do_lock) {
- /* unlock streams */
- snd_pcm_group_for_each_entry(s1, substream) {
- if (s1 != substream) {
- if (s1->pcm->nonatomic)
- mutex_unlock(&s1->self_group.mutex);
- else
- spin_unlock(&s1->self_group.lock);
- }
- if (s1 == s) /* end */
- break;
+ /* unlock streams */
+ snd_pcm_group_for_each_entry(s1, substream) {
+ if (s1 != substream) {
+ if (!stream_lock)
+ mutex_unlock(&s1->runtime->buffer_mutex);
+ else if (s1->pcm->nonatomic)
+ mutex_unlock(&s1->self_group.mutex);
+ else
+ spin_unlock(&s1->self_group.lock);
}
+ if (s1 == s) /* end */
+ break;
}
return res;
}
@@ -1350,10 +1369,12 @@ static int snd_pcm_action_nonatomic(cons
/* Guarantee the group members won't change during non-atomic action */
down_read(&snd_pcm_link_rwsem);
+ mutex_lock(&substream->runtime->buffer_mutex);
if (snd_pcm_stream_linked(substream))
res = snd_pcm_action_group(ops, substream, state, false);
else
res = snd_pcm_action_single(ops, substream, state);
+ mutex_unlock(&substream->runtime->buffer_mutex);
up_read(&snd_pcm_link_rwsem);
return res;
}
@@ -1843,11 +1864,13 @@ static int snd_pcm_do_reset(struct snd_p
int err = snd_pcm_ops_ioctl(substream, SNDRV_PCM_IOCTL1_RESET, NULL);
if (err < 0)
return err;
+ snd_pcm_stream_lock_irq(substream);
runtime->hw_ptr_base = 0;
runtime->hw_ptr_interrupt = runtime->status->hw_ptr -
runtime->status->hw_ptr % runtime->period_size;
runtime->silence_start = runtime->status->hw_ptr;
runtime->silence_filled = 0;
+ snd_pcm_stream_unlock_irq(substream);
return 0;
}
@@ -1855,10 +1878,12 @@ static void snd_pcm_post_reset(struct sn
snd_pcm_state_t state)
{
struct snd_pcm_runtime *runtime = substream->runtime;
+ snd_pcm_stream_lock_irq(substream);
runtime->control->appl_ptr = runtime->status->hw_ptr;
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
runtime->silence_size > 0)
snd_pcm_playback_silence(substream, ULONG_MAX);
+ snd_pcm_stream_unlock_irq(substream);
}
static const struct action_ops snd_pcm_action_reset = {
diff -rupN linux-hardened/sound/pci/ac97/ac97_codec.c linux-5.17.1/sound/pci/ac97/ac97_codec.c
--- linux-hardened/sound/pci/ac97/ac97_codec.c 2022-04-05 20:57:17.821086903 +0900
+++ linux-5.17.1/sound/pci/ac97/ac97_codec.c 2022-03-28 17:03:22.000000000 +0900
@@ -938,8 +938,8 @@ static int snd_ac97_ad18xx_pcm_get_volum
int codec = kcontrol->private_value & 3;
mutex_lock(&ac97->page_mutex);
- ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
- ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
+ ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
+ ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
mutex_unlock(&ac97->page_mutex);
return 0;
}
diff -rupN linux-hardened/sound/pci/cmipci.c linux-5.17.1/sound/pci/cmipci.c
--- linux-hardened/sound/pci/cmipci.c 2022-04-05 20:57:17.844087192 +0900
+++ linux-5.17.1/sound/pci/cmipci.c 2022-03-28 17:03:22.000000000 +0900
@@ -298,7 +298,6 @@ MODULE_PARM_DESC(joystick_port, "Joystic
#define CM_MICGAINZ 0x01 /* mic boost */
#define CM_MICGAINZ_SHIFT 0
-#define CM_REG_MIXER3 0x24
#define CM_REG_AUX_VOL 0x26
#define CM_VAUXL_MASK 0xf0
#define CM_VAUXR_MASK 0x0f
@@ -3265,7 +3264,7 @@ static int snd_cmipci_probe(struct pci_d
*/
static const unsigned char saved_regs[] = {
CM_REG_FUNCTRL1, CM_REG_CHFORMAT, CM_REG_LEGACY_CTRL, CM_REG_MISC_CTRL,
- CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_MIXER3, CM_REG_PLL,
+ CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_AUX_VOL, CM_REG_PLL,
CM_REG_CH0_FRAME1, CM_REG_CH0_FRAME2,
CM_REG_CH1_FRAME1, CM_REG_CH1_FRAME2, CM_REG_EXT_MISC,
CM_REG_INT_STATUS, CM_REG_INT_HLDCLR, CM_REG_FUNCTRL0,
diff -rupN linux-hardened/sound/pci/hda/patch_realtek.c linux-5.17.1/sound/pci/hda/patch_realtek.c
--- linux-hardened/sound/pci/hda/patch_realtek.c 2022-04-05 20:57:17.918088122 +0900
+++ linux-5.17.1/sound/pci/hda/patch_realtek.c 2022-03-28 17:03:22.000000000 +0900
@@ -9020,6 +9020,7 @@ static const struct snd_pci_quirk alc269
SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
@@ -9103,6 +9104,8 @@ static const struct snd_pci_quirk alc269
SND_PCI_QUIRK(0x1558, 0x8561, "Clevo NH[57][0-9][ER][ACDH]Q", ALC269_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[57][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x866d, "Clevo NP5[05]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x867d, "Clevo NP7[01]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
@@ -11067,6 +11070,7 @@ static const struct snd_pci_quirk alc662
SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
diff -rupN linux-hardened/sound/soc/sti/uniperif_player.c linux-5.17.1/sound/soc/sti/uniperif_player.c
--- linux-hardened/sound/soc/sti/uniperif_player.c 2022-04-05 20:56:10.656242627 +0900
+++ linux-5.17.1/sound/soc/sti/uniperif_player.c 2022-03-28 17:03:22.000000000 +0900
@@ -91,7 +91,7 @@ static irqreturn_t uni_player_irq_handle
SET_UNIPERIF_ITM_BCLR_FIFO_ERROR(player);
/* Stop the player */
- snd_pcm_stop_xrun(player->substream);
+ snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
}
ret = IRQ_HANDLED;
@@ -105,7 +105,7 @@ static irqreturn_t uni_player_irq_handle
SET_UNIPERIF_ITM_BCLR_DMA_ERROR(player);
/* Stop the player */
- snd_pcm_stop_xrun(player->substream);
+ snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
ret = IRQ_HANDLED;
}
@@ -138,7 +138,7 @@ static irqreturn_t uni_player_irq_handle
dev_err(player->dev, "Underflow recovery failed\n");
/* Stop the player */
- snd_pcm_stop_xrun(player->substream);
+ snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
ret = IRQ_HANDLED;
}
diff -rupN linux-hardened/sound/soc/sti/uniperif_reader.c linux-5.17.1/sound/soc/sti/uniperif_reader.c
--- linux-hardened/sound/soc/sti/uniperif_reader.c 2022-04-05 20:56:10.656242627 +0900
+++ linux-5.17.1/sound/soc/sti/uniperif_reader.c 2022-03-28 17:03:22.000000000 +0900
@@ -65,7 +65,7 @@ static irqreturn_t uni_reader_irq_handle
if (unlikely(status & UNIPERIF_ITS_FIFO_ERROR_MASK(reader))) {
dev_err(reader->dev, "FIFO error detected\n");
- snd_pcm_stop_xrun(reader->substream);
+ snd_pcm_stop(reader->substream, SNDRV_PCM_STATE_XRUN);
ret = IRQ_HANDLED;
}
diff -rupN linux-hardened/sound/usb/mixer_maps.c linux-5.17.1/sound/usb/mixer_maps.c
--- linux-hardened/sound/usb/mixer_maps.c 2022-04-05 20:57:18.443094722 +0900
+++ linux-5.17.1/sound/usb/mixer_maps.c 2022-03-28 17:03:22.000000000 +0900
@@ -543,6 +543,16 @@ static const struct usbmix_ctl_map usbmi
.map = bose_soundlink_map,
},
{
+ /* Corsair Virtuoso SE Latest (wired mode) */
+ .id = USB_ID(0x1b1c, 0x0a3f),
+ .map = corsair_virtuoso_map,
+ },
+ {
+ /* Corsair Virtuoso SE Latest (wireless mode) */
+ .id = USB_ID(0x1b1c, 0x0a40),
+ .map = corsair_virtuoso_map,
+ },
+ {
/* Corsair Virtuoso SE (wired mode) */
.id = USB_ID(0x1b1c, 0x0a3d),
.map = corsair_virtuoso_map,
diff -rupN linux-hardened/sound/usb/mixer_quirks.c linux-5.17.1/sound/usb/mixer_quirks.c
--- linux-hardened/sound/usb/mixer_quirks.c 2022-04-05 20:57:18.445094747 +0900
+++ linux-5.17.1/sound/usb/mixer_quirks.c 2022-03-28 17:03:22.000000000 +0900
@@ -3360,9 +3360,10 @@ void snd_usb_mixer_fu_apply_quirk(struct
if (unitid == 7 && cval->control == UAC_FU_VOLUME)
snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
break;
- /* lowest playback value is muted on C-Media devices */
- case USB_ID(0x0d8c, 0x000c):
- case USB_ID(0x0d8c, 0x0014):
+ /* lowest playback value is muted on some devices */
+ case USB_ID(0x0d8c, 0x000c): /* C-Media */
+ case USB_ID(0x0d8c, 0x0014): /* C-Media */
+ case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
if (strstr(kctl->id.name, "Playback"))
cval->min_mute = 1;
break;
diff -rupN linux-hardened/tools/perf/Documentation/security.txt linux-5.17.1/tools/perf/Documentation/security.txt
--- linux-hardened/tools/perf/Documentation/security.txt 2022-04-05 20:56:11.058247681 +0900
+++ linux-5.17.1/tools/perf/Documentation/security.txt 2022-03-28 17:03:22.000000000 +0900
@@ -148,7 +148,6 @@ Perf tool provides a message similar to
>= 0: Disallow raw and ftrace function tracepoint access
>= 1: Disallow CPU event access
>= 2: Disallow kernel profiling
- >= 3: Disallow use of any event
To make the adjusted perf_event_paranoid setting permanent preserve it
in /etc/sysctl.conf (e.g. kernel.perf_event_paranoid = <setting>)
diff -rupN linux-hardened/tools/perf/util/evsel.c linux-5.17.1/tools/perf/util/evsel.c
--- linux-hardened/tools/perf/util/evsel.c 2022-04-05 20:57:18.797099172 +0900
+++ linux-5.17.1/tools/perf/util/evsel.c 2022-03-28 17:03:22.000000000 +0900
@@ -2884,7 +2884,6 @@ int evsel__open_strerror(struct evsel *e
">= 0: Disallow raw and ftrace function tracepoint access\n"
">= 1: Disallow CPU event access\n"
">= 2: Disallow kernel profiling\n"
- ">= 3: Disallow use of any event\n"
"To make the adjusted perf_event_paranoid setting permanent preserve it\n"
"in /etc/sysctl.conf (e.g. kernel.perf_event_paranoid = <setting>)",
perf_event_paranoid());