Merge 5.10.189 into android12-5.10-lts

Changes in 5.10.189
	init: Provide arch_cpu_finalize_init()
	x86/cpu: Switch to arch_cpu_finalize_init()
	ARM: cpu: Switch to arch_cpu_finalize_init()
	ia64/cpu: Switch to arch_cpu_finalize_init()
	m68k/cpu: Switch to arch_cpu_finalize_init()
	mips/cpu: Switch to arch_cpu_finalize_init()
	sh/cpu: Switch to arch_cpu_finalize_init()
	sparc/cpu: Switch to arch_cpu_finalize_init()
	um/cpu: Switch to arch_cpu_finalize_init()
	init: Remove check_bugs() leftovers
	init: Invoke arch_cpu_finalize_init() earlier
	init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()
	x86/fpu: Remove cpuinfo argument from init functions
	x86/fpu: Mark init functions __init
	x86/fpu: Move FPU initialization into arch_cpu_finalize_init()
	x86/speculation: Add Gather Data Sampling mitigation
	x86/speculation: Add force option to GDS mitigation
	x86/speculation: Add Kconfig option for GDS
	KVM: Add GDS_NO support to KVM
	x86/xen: Fix secondary processors' FPU initialization
	x86/mm: fix poking_init() for Xen PV guests
	x86/mm: Use mm_alloc() in poking_init()
	mm: Move mm_cachep initialization to mm_init()
	x86/mm: Initialize text poking earlier
	Documentation/x86: Fix backwards on/off logic about YMM support
	x86/cpu: Add VM page flush MSR availablility as a CPUID feature
	x86/cpufeatures: Assign dedicated feature word for CPUID_0x8000001F[EAX]
	tools headers cpufeatures: Sync with the kernel sources
	x86/bugs: Increase the x86 bugs vector size to two u32s
	x86/cpu, kvm: Add support for CPUID_80000021_EAX
	x86/srso: Add a Speculative RAS Overflow mitigation
	x86/srso: Add IBPB_BRTYPE support
	x86/srso: Add SRSO_NO support
	x86/srso: Add IBPB
	x86/srso: Add IBPB on VMEXIT
	x86/srso: Fix return thunks in generated code
	x86/srso: Tie SBPB bit setting to microcode patch detection
	xen/netback: Fix buffer overrun triggered by unusual packet
	x86: fix backwards merge of GDS/SRSO bit
	Linux 5.10.189

Change-Id: Ibaf2cd3f0542d497374bcf135e9faf1791e9af5d
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Author: Greg Kroah-Hartman
Date: 2023-08-23 15:12:23 +00:00
71 changed files with 1176 additions and 419 deletions


@@ -510,17 +510,18 @@ Description: information about CPUs heterogeneity.
cpu_capacity: capacity of cpu#.
What: /sys/devices/system/cpu/vulnerabilities
	/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
	/sys/devices/system/cpu/vulnerabilities/itlb_multihit
	/sys/devices/system/cpu/vulnerabilities/l1tf
	/sys/devices/system/cpu/vulnerabilities/mds
	/sys/devices/system/cpu/vulnerabilities/meltdown
	/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
	/sys/devices/system/cpu/vulnerabilities/retbleed
	/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
	/sys/devices/system/cpu/vulnerabilities/spectre_v1
	/sys/devices/system/cpu/vulnerabilities/spectre_v2
	/sys/devices/system/cpu/vulnerabilities/srbds
	/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Information about CPU vulnerabilities


@@ -0,0 +1,109 @@
.. SPDX-License-Identifier: GPL-2.0
GDS - Gather Data Sampling
==========================
Gather Data Sampling is a hardware vulnerability which allows unprivileged
speculative access to data which was previously stored in vector registers.
Problem
-------
When a gather instruction performs loads from memory, different data elements
are merged into the destination vector register. However, when a gather
instruction that is transiently executed encounters a fault, stale data from
architectural or internal vector registers may get transiently forwarded to the
destination vector register instead. This will allow a malicious attacker to
infer stale data using typical side channel techniques like cache timing
attacks. GDS is a purely sampling-based attack.
The attacker uses gather instructions to infer the stale vector register data.
The victim does not need to do anything special other than use the vector
registers. The victim does not need to use gather instructions to be
vulnerable.
Because the buffers are shared between Hyper-Threads, cross-Hyper-Thread attacks
are possible.
Attack scenarios
----------------
Without mitigation, GDS can infer stale data across virtually all
permission boundaries:
Non-enclaves can infer SGX enclave data
Userspace can infer kernel data
Guests can infer data from hosts
Guests can infer data from other guests
Users can infer data from other users
Because of this, it is important to ensure that the mitigation stays enabled in
lower-privilege contexts like guests and when running outside SGX enclaves.
The hardware enforces the mitigation for SGX. Likewise, VMMs should ensure
that guests are not allowed to disable the GDS mitigation. If a host erred and
allowed this, a guest could theoretically disable GDS mitigation, mount an
attack, and re-enable it.
Mitigation mechanism
--------------------
This issue is mitigated in microcode. The microcode defines the following new
bits:
================================ === ============================
IA32_ARCH_CAPABILITIES[GDS_CTRL] R/O Enumerates GDS vulnerability
and mitigation support.
IA32_ARCH_CAPABILITIES[GDS_NO] R/O Processor is not vulnerable.
IA32_MCU_OPT_CTRL[GDS_MITG_DIS] R/W Disables the mitigation
0 by default.
IA32_MCU_OPT_CTRL[GDS_MITG_LOCK] R/W Locks GDS_MITG_DIS=0. Writes
to GDS_MITG_DIS are ignored
Can't be cleared once set.
================================ === ============================
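For a concrete view of these control bits, the MSR can be read from userspace through the msr driver. The following is a minimal illustrative sketch and not part of this patch; the MSR address (0x123 for IA32_MCU_OPT_CTRL) and the bit positions used for GDS_MITG_DIS and GDS_MITG_LOCK are assumptions made for the example and are not spelled out in the table above.

/* Sketch: read IA32_MCU_OPT_CTRL via the msr driver and report the GDS bits.
 * MSR address and bit positions below are assumptions for illustration.
 * Build: cc -o gds_msr gds_msr.c ; needs root and the 'msr' module loaded. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define IA32_MCU_OPT_CTRL 0x123       /* assumed MSR address */
#define GDS_MITG_DIS  (1ULL << 4)     /* assumed bit position */
#define GDS_MITG_LOCK (1ULL << 5)     /* assumed bit position */

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0 || pread(fd, &val, sizeof(val), IA32_MCU_OPT_CTRL) != sizeof(val)) {
		perror("IA32_MCU_OPT_CTRL");
		return 1;
	}
	printf("GDS_MITG_DIS : %s\n", (val & GDS_MITG_DIS) ? "set (mitigation disabled)" : "clear (mitigation active)");
	printf("GDS_MITG_LOCK: %s\n", (val & GDS_MITG_LOCK) ? "locked" : "not locked");
	close(fd);
	return 0;
}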
GDS can also be mitigated on systems that don't have updated microcode by
disabling AVX. This can be done by setting gather_data_sampling="force" or
"clearcpuid=avx" on the kernel command-line.
If used, these options will disable AVX use by turning off XSAVE YMM support.
However, the processor will still enumerate AVX support. Userspace that
does not follow proper AVX enumeration to check both AVX *and* XSAVE YMM
support will break.
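The "proper AVX enumeration" referred to above means checking both the CPUID AVX feature bit and the OS-enabled YMM state reported by XGETBV before executing AVX code. A small standalone C sketch of that check (illustrative only, not part of this patch):

/* Sketch: the AVX check userspace is expected to perform - both the CPUID AVX
 * bit and XSAVE-managed YMM state must be enabled before using AVX.
 * Build: cc -o avx_check avx_check.c */
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return 1;

	/* CPUID.1:ECX bit 27 = OSXSAVE (XGETBV usable), bit 28 = AVX supported */
	int osxsave = (ecx >> 27) & 1;
	int avx     = (ecx >> 28) & 1;
	uint32_t xcr0_lo = 0, xcr0_hi = 0;

	if (osxsave)
		__asm__ volatile ("xgetbv" : "=a"(xcr0_lo), "=d"(xcr0_hi) : "c"(0));
	(void)xcr0_hi;

	/* XCR0 bits 1 (SSE) and 2 (YMM) must both be set for AVX to be usable */
	int ymm_enabled = osxsave && ((xcr0_lo & 0x6) == 0x6);

	printf("AVX CPUID bit: %d, YMM state enabled: %d -> AVX usable: %d\n",
	       avx, ymm_enabled, avx && ymm_enabled);
	return 0;
}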
Mitigation control on the kernel command line
---------------------------------------------
The mitigation can be disabled by setting "gather_data_sampling=off" or
"mitigations=off" on the kernel command line. Not specifying either will default
to the mitigation being enabled. Specifying "gather_data_sampling=force" will
use the microcode mitigation when available or disable AVX on affected systems
where the microcode hasn't been updated to include the mitigation.
GDS System Information
------------------------
The kernel provides vulnerability status information through sysfs. For
GDS this can be accessed by the following sysfs file:
/sys/devices/system/cpu/vulnerabilities/gather_data_sampling
The possible values contained in this file are:
============================== =============================================
Not affected Processor not vulnerable.
Vulnerable Processor vulnerable and mitigation disabled.
Vulnerable: No microcode Processor vulnerable and microcode is missing
mitigation.
Mitigation: AVX disabled,
no microcode Processor is vulnerable and microcode is missing
mitigation. AVX disabled as mitigation.
Mitigation: Microcode Processor is vulnerable and mitigation is in
effect.
Mitigation: Microcode (locked) Processor is vulnerable and mitigation is in
effect and cannot be disabled.
Unknown: Dependent on
hypervisor status Running on a virtual guest processor that is
affected but with no way to know if host
processor is mitigated or vulnerable.
============================== =============================================
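A minimal C sketch of reading this status file follows (illustrative only, not part of this patch); the same pattern applies to the other entries under /sys/devices/system/cpu/vulnerabilities/.

/* Sketch: print the GDS mitigation status string exposed by the kernel. */
#include <stdio.h>

int main(void)
{
	char buf[256];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/gather_data_sampling", "r");

	if (!f) {
		perror("gather_data_sampling");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("GDS status: %s", buf);
	fclose(f);
	return 0;
}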
GDS Default mitigation
----------------------
The updated microcode will enable the mitigation by default. The kernel's
default action is to leave the mitigation enabled.


@@ -16,3 +16,5 @@ are configurable at compile, boot or run time.
multihit.rst
special-register-buffer-data-sampling.rst
processor_mmio_stale_data.rst
gather_data_sampling.rst
srso


@@ -0,0 +1,133 @@
.. SPDX-License-Identifier: GPL-2.0
Speculative Return Stack Overflow (SRSO)
========================================
This is a mitigation for the speculative return stack overflow (SRSO)
vulnerability found on AMD processors. The mechanism is by now the well
known scenario of poisoning CPU functional units - the Branch Target
Buffer (BTB) and Return Address Predictor (RAP) in this case - and then
tricking the elevated privilege domain (the kernel) into leaking
sensitive data.
AMD CPUs predict RET instructions using a Return Address Predictor (aka
Return Address Stack/Return Stack Buffer). In some cases, a non-architectural
CALL instruction (i.e., an instruction predicted to be a CALL but is
not actually a CALL) can create an entry in the RAP which may be used
to predict the target of a subsequent RET instruction.
The specific circumstances that lead to this vary by microarchitecture,
but the concern is that an attacker can mis-train the CPU BTB to predict
non-architectural CALL instructions in kernel space and use this to
control the speculative target of a subsequent kernel RET, potentially
leading to information disclosure via a speculative side-channel.
The issue is tracked under CVE-2023-20569.
Affected processors
-------------------
AMD Zen, generations 1-4. That is, all families 0x17 and 0x19. Older
processors have not been investigated.
System information and options
------------------------------
First of all, it is required that the latest microcode be loaded for
mitigations to be effective.
The sysfs file showing SRSO mitigation status is:
/sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
The possible values in this file are:
- 'Not affected' The processor is not vulnerable
- 'Vulnerable: no microcode' The processor is vulnerable, no
microcode extending IBPB functionality
to address the vulnerability has been
applied.
- 'Mitigation: microcode' Extended IBPB functionality microcode
patch has been applied. It does not
address User->Kernel and Guest->Host
transitions protection but it does
address User->User and VM->VM attack
vectors.
(spec_rstack_overflow=microcode)
- 'Mitigation: safe RET' Software-only mitigation. It complements
the extended IBPB microcode patch
functionality by addressing User->Kernel
and Guest->Host transitions protection.
Selected by default or by
spec_rstack_overflow=safe-ret
- 'Mitigation: IBPB' Similar protection as "safe RET" above
but employs an IBPB barrier on privilege
domain crossings (User->Kernel,
Guest->Host).
(spec_rstack_overflow=ibpb)
- 'Mitigation: IBPB on VMEXIT' Mitigation addressing the cloud provider
scenario - the Guest->Host transitions
only.
(spec_rstack_overflow=ibpb-vmexit)
In order to exploit the vulnerability, an attacker needs to:
- gain local access on the machine
- break kASLR
- find gadgets in the running kernel in order to use them in the exploit
- potentially create and pin an additional workload on the sibling
thread, depending on the microarchitecture (not necessary on fam 0x19)
- run the exploit
Considering the performance implications of each mitigation type, the
default one is 'Mitigation: safe RET' which should take care of most
attack vectors, including the local User->Kernel one.
As always, the user is advised to keep her/his system up-to-date by
applying software updates regularly.
The default setting will be reevaluated when needed and especially when
new attack vectors appear.
As one can surmise, 'Mitigation: safe RET' does come at the cost of some
performance depending on the workload. If one trusts her/his userspace
and does not want to suffer the performance impact, one can always
disable the mitigation with spec_rstack_overflow=off.
Similarly, 'Mitigation: IBPB' is another full mitigation type employing
an indirect branch prediction barrier after having applied the required
microcode patch for one's system. This mitigation also comes at
a performance cost.
Mitigation: safe RET
--------------------
The mitigation works by ensuring all RET instructions speculate to
a controlled location, similar to how speculation is controlled in the
retpoline sequence. To accomplish this, the __x86_return_thunk forces
the CPU to mispredict every function return using a 'safe return'
sequence.
To ensure the safety of this mitigation, the kernel must ensure that the
safe return sequence is itself free from attacker interference. In Zen3
and Zen4, this is accomplished by creating a BTB alias between the
untraining function srso_untrain_ret_alias() and the safe return
function srso_safe_ret_alias() which results in evicting a potentially
poisoned BTB entry and using that safe one for all function returns.
In older Zen1 and Zen2, this is accomplished using a reinterpretation
technique similar to the Retbleed one: srso_untrain_ret() and
srso_safe_ret().


@@ -1484,6 +1484,26 @@
Format: off | on
default: on
gather_data_sampling=
[X86,INTEL] Control the Gather Data Sampling (GDS)
mitigation.
Gather Data Sampling is a hardware vulnerability which
allows unprivileged speculative access to data which was
previously stored in vector registers.
This issue is mitigated by default in updated microcode.
The mitigation may have a performance impact but can be
disabled. On systems without the microcode mitigation,
disabling AVX serves as a mitigation.
force: Disable AVX to mitigate systems without
microcode mitigation. No effect if the microcode
mitigation is present. Known to cause crashes in
userspace with buggy AVX enumeration.
off: Disable GDS mitigation.
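For clarity, the selection logic these options describe can be condensed into the following illustrative C sketch; it is not the kernel's actual handler and every identifier in it is invented for this example.

/* Illustrative-only sketch of the documented gather_data_sampling= choices;
 * the names (select_gds_mitigation, have_gds_microcode, ...) are made up. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum gds_action { GDS_MITIGATE_UCODE, GDS_DISABLE_AVX, GDS_OFF };

static enum gds_action select_gds_mitigation(const char *cmdline_arg,
					      bool have_gds_microcode)
{
	if (cmdline_arg && !strcmp(cmdline_arg, "off"))
		return GDS_OFF;                /* mitigation explicitly disabled */
	if (have_gds_microcode)
		return GDS_MITIGATE_UCODE;     /* default: microcode mitigation */
	if (cmdline_arg && !strcmp(cmdline_arg, "force"))
		return GDS_DISABLE_AVX;        /* no microcode: turn off AVX */
	return GDS_OFF;                        /* vulnerable, no microcode */
}

int main(void)
{
	enum gds_action a = select_gds_mitigation("force", false);

	printf("force without microcode -> %s\n",
	       a == GDS_DISABLE_AVX ? "disable AVX" :
	       a == GDS_MITIGATE_UCODE ? "microcode mitigation" : "off");
	return 0;
}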
gcov_persist= [GCOV] When non-zero (default), profiling data for
kernel modules is saved and remains accessible via
debugfs, even when the module is unloaded/reloaded.
@@ -2949,22 +2969,23 @@
Disable all optional CPU mitigations. This
improves system performance, but it may also
expose users to several CPU vulnerabilities.
Equivalent to: gather_data_sampling=off [X86]
               kpti=0 [ARM64]
               kvm.nx_huge_pages=off [X86]
               l1tf=off [X86]
               mds=off [X86]
               mmio_stale_data=off [X86]
               no_entry_flush [PPC]
               no_uaccess_flush [PPC]
               nobp=0 [S390]
               nopti [X86,PPC]
               nospectre_v1 [X86,PPC]
               nospectre_v2 [X86,PPC,S390,ARM64]
               retbleed=off [X86]
               spec_store_bypass_disable=off [X86,PPC]
               spectre_v2_user=off [X86]
               ssbd=force-off [ARM64]
               tsx_async_abort=off [X86]
Exceptions:
This does not have any effect on
@@ -5222,6 +5243,17 @@
Not specifying this option is equivalent to
spectre_v2_user=auto.
spec_rstack_overflow=
[X86] Control RAS overflow mitigation on AMD Zen CPUs
off - Disable mitigation
microcode - Enable microcode mitigation only
safe-ret - Enable sw-only safe RET mitigation (default)
ibpb - Enable mitigation by issuing IBPB on
kernel entry
ibpb-vmexit - Issue IBPB only on VMEXIT
(cloud-specific mitigation)
spec_store_bypass_disable=
[HW] Control Speculative Store Bypass (SSB) Disable mitigation
(Speculative Store Bypass vulnerability)


@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
SUBLEVEL = 188
SUBLEVEL = 189
EXTRAVERSION =
NAME = Dare mighty things


@@ -290,6 +290,9 @@ config ARCH_HAS_DMA_SET_UNCACHED
config ARCH_HAS_DMA_CLEAR_UNCACHED
bool
config ARCH_HAS_CPU_FINALIZE_INIT
bool
# Select if arch init_task must go in the __init_task_data section
config ARCH_TASK_STRUCT_ON_STACK
bool
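This generic Kconfig symbol lets an architecture opt into the new hook; generic init code supplies an empty weak default which the architecture's own definition overrides at link time. The block below is a standalone sketch of that weak-symbol pattern, illustrative only and not the kernel's init code.

/* Minimal sketch (not kernel code) of the weak-default hook pattern behind
 * arch_cpu_finalize_init(): generic init code defines an empty weak hook; an
 * architecture that selects ARCH_HAS_CPU_FINALIZE_INIT links in its own
 * strong definition, which replaces the weak one at link time.
 * Build: cc -o finalize_demo finalize_demo.c */
#include <stdio.h>

/* generic default - the kernel's version is marked __init __weak and empty */
__attribute__((weak)) void arch_cpu_finalize_init(void)
{
	puts("generic: nothing to finalize");
}

/* An architecture would provide, in a separate object file, a strong
 * definition such as:
 *
 *     void arch_cpu_finalize_init(void)
 *     {
 *             // run CPU bug checks, finish FPU initialization, ...
 *     }
 *
 * Linking that file in makes the call below reach the arch version instead. */

int main(void)
{
	/* in the kernel this is called once during boot, after the boot CPU
	 * has been set up and before userspace-visible CPU state is relied on */
	arch_cpu_finalize_init();
	return 0;
}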


@@ -1,20 +0,0 @@
/*
* include/asm-alpha/bugs.h
*
* Copyright (C) 1994 Linus Torvalds
*/
/*
* This is included by init/main.c to check for architecture-dependent bugs.
*
* Needs:
* void check_bugs(void);
*/
/*
* I don't know of any alpha bugs yet.. Nice chip
*/
static void check_bugs(void)
{
}


@@ -4,6 +4,7 @@ config ARM
default y
select ARCH_32BIT_OFF_T
select ARCH_HAS_BINFMT_FLAT
select ARCH_HAS_CPU_FINALIZE_INIT if MMU
select ARCH_HAS_DEBUG_VIRTUAL if MMU
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE


@@ -1,7 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* arch/arm/include/asm/bugs.h
*
* Copyright (C) 1995-2003 Russell King
*/
#ifndef __ASM_BUGS_H
@@ -10,10 +8,8 @@
extern void check_writebuffer_bugs(void);
#ifdef CONFIG_MMU
extern void check_bugs(void);
extern void check_other_bugs(void);
#else
#define check_bugs() do { } while (0)
#define check_other_bugs() do { } while (0)
#endif


@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/init.h>
#include <linux/cpu.h>
#include <asm/bugs.h>
#include <asm/proc-fns.h>
@@ -11,7 +12,7 @@ void check_other_bugs(void)
#endif
}
void __init check_bugs(void)
void __init arch_cpu_finalize_init(void)
{
check_writebuffer_bugs();
check_other_bugs();


@@ -8,6 +8,7 @@ menu "Processor type and features"
config IA64
bool
select ARCH_HAS_CPU_FINALIZE_INIT
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO


@@ -1,20 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This is included by init/main.c to check for architecture-dependent bugs.
*
* Needs:
* void check_bugs(void);
*
* Based on <asm-alpha/bugs.h>.
*
* Modified 1998, 1999, 2003
* David Mosberger-Tang <davidm@hpl.hp.com>, Hewlett-Packard Co.
*/
#ifndef _ASM_IA64_BUGS_H
#define _ASM_IA64_BUGS_H
#include <asm/processor.h>
extern void check_bugs (void);
#endif /* _ASM_IA64_BUGS_H */


@@ -1071,8 +1071,7 @@ cpu_init (void)
}
}
void __init
check_bugs (void)
void __init arch_cpu_finalize_init(void)
{
ia64_patch_mckinley_e9((unsigned long) __start___mckinley_e9_bundles,
(unsigned long) __end___mckinley_e9_bundles);


@@ -4,6 +4,7 @@ config M68K
default y
select ARCH_32BIT_OFF_T
select ARCH_HAS_BINFMT_FLAT
select ARCH_HAS_CPU_FINALIZE_INIT if MMU
select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS


@@ -1,21 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* include/asm-m68k/bugs.h
*
* Copyright (C) 1994 Linus Torvalds
*/
/*
* This is included by init/main.c to check for architecture-dependent bugs.
*
* Needs:
* void check_bugs(void);
*/
#ifdef CONFIG_MMU
extern void check_bugs(void); /* in arch/m68k/kernel/setup.c */
#else
static void check_bugs(void)
{
}
#endif


@@ -10,6 +10,7 @@
*/
#include <linux/kernel.h>
#include <linux/cpu.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/delay.h>
@@ -523,7 +524,7 @@ static int __init proc_hardware_init(void)
module_init(proc_hardware_init);
#endif
void check_bugs(void)
void __init arch_cpu_finalize_init(void)
{
#if defined(CONFIG_FPU) && !defined(CONFIG_M68KFPU_EMU)
if (m68k_fputype == 0) {


@@ -4,6 +4,7 @@ config MIPS
default y
select ARCH_32BIT_OFF_T if !64BIT
select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
select ARCH_HAS_CPU_FINALIZE_INIT
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_KCOV
select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE if !EVA


@@ -1,17 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This is included by init/main.c to check for architecture-dependent bugs.
*
* Copyright (C) 2007 Maciej W. Rozycki
*
* Needs:
* void check_bugs(void);
*/
#ifndef _ASM_BUGS_H
#define _ASM_BUGS_H
#include <linux/bug.h>
#include <linux/delay.h>
#include <linux/smp.h>
#include <asm/cpu.h>
@@ -30,17 +24,6 @@ static inline void check_bugs_early(void)
check_bugs64_early();
}
static inline void check_bugs(void)
{
unsigned int cpu = smp_processor_id();
cpu_data[cpu].udelay_val = loops_per_jiffy;
check_bugs32();
if (IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
check_bugs64();
}
static inline int r4k_daddiu_bug(void)
{
if (!IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))


@@ -11,6 +11,8 @@
* Copyright (C) 2000, 2001, 2002, 2007 Maciej W. Rozycki
*/
#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/export.h>
#include <linux/screen_info.h>
@@ -829,3 +831,14 @@ static int __init setnocoherentio(char *str)
}
early_param("nocoherentio", setnocoherentio);
#endif
void __init arch_cpu_finalize_init(void)
{
unsigned int cpu = smp_processor_id();
cpu_data[cpu].udelay_val = loops_per_jiffy;
check_bugs32();
if (IS_ENABLED(CONFIG_CPU_R4X00_BUGS64))
check_bugs64();
}

Some files were not shown because too many files have changed in this diff.