Mirror of https://github.com/Dasharo/linux.git, synced 2026-03-06 15:25:10 -08:00
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
"The bulk is in-kernel pointer authentication, activity monitors and
lots of asm symbol annotations. I also queued the sys_mremap() patch
commenting the asymmetry in the address untagging.
Summary:
- In-kernel Pointer Authentication support (previously only offered
to user space).
- ARM Activity Monitors (AMU) extension support allowing better CPU
utilisation numbers for the scheduler (frequency invariance).
- Memory hot-remove support for arm64.
- Lots of asm annotations (SYM_*) in preparation for the in-kernel
Branch Target Identification (BTI) support.
- arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the
PMU init callbacks, support for new DT compatibles.
- IPv6 header checksum optimisation.
- Fixes: SDEI (software delegated exception interface) double-lock on
hibernate with shared events.
- Minor clean-ups and refactoring: cpu_ops accessor,
cpu_do_switch_mm() converted to C, cpufeature finalisation helper.
- sys_mremap() comment explaining the asymmetric address untagging
behaviour"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
mm/mremap: Add comment explaining the untagging behaviour of mremap()
arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
arm64: Introduce get_cpu_ops() helper function
arm64: Rename cpu_read_ops() to init_cpu_ops()
arm64: Declare ACPI parking protocol CPU operation if needed
arm64: move kimage_vaddr to .rodata
arm64: use mov_q instead of literal ldr
arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
lkdtm: arm64: test kernel pointer authentication
arm64: compile the kernel with ptrauth return address signing
kconfig: Add support for 'as-option'
arm64: suspend: restore the kernel ptrauth keys
arm64: __show_regs: strip PAC from lr in printk
arm64: unwind: strip PAC from kernel addresses
arm64: mask PAC bits of __builtin_return_address
arm64: initialize ptrauth keys for kernel booting task
arm64: initialize and switch ptrauth kernel keys
arm64: enable ptrauth earlier
arm64: cpufeature: handle conflicts based on capability
arm64: cpufeature: Move cpu capability helpers inside C file
...
Documentation/arm64/amu.rst (new file, 112 lines)
@@ -0,0 +1,112 @@
=======================================================
Activity Monitors Unit (AMU) extension in AArch64 Linux
=======================================================

Author: Ionela Voinescu <ionela.voinescu@arm.com>

Date: 2019-09-10

This document briefly describes the provision of Activity Monitors Unit
support in AArch64 Linux.


Architecture overview
---------------------

The activity monitors extension is an optional extension introduced by the
ARMv8.4 CPU architecture.

The activity monitors unit, implemented in each CPU, provides performance
counters intended for system management use. The AMU extension provides a
system register interface to the counter registers and also supports an
optional external memory-mapped interface.

Version 1 of the Activity Monitors architecture implements a counter group
of four fixed and architecturally defined 64-bit event counters.

- CPU cycle counter: increments at the frequency of the CPU.
- Constant counter: increments at the fixed frequency of the system
  clock.
- Instructions retired: increments with every architecturally executed
  instruction.
- Memory stall cycles: counts instruction dispatch stall cycles caused by
  misses in the last level cache within the clock domain.

When in WFI or WFE these counters do not increment.

The Activity Monitors architecture provides space for up to 16 architected
event counters. Future versions of the architecture may use this space to
implement additional architected event counters.

Additionally, version 1 implements a counter group of up to 16 auxiliary
64-bit event counters.

On cold reset all counters reset to 0.
Basic support
-------------

The kernel can safely run a mix of CPUs with and without support for the
activity monitors extension. Therefore, when CONFIG_ARM64_AMU_EXTN is
selected we unconditionally enable the capability to allow any late CPU
(secondary or hotplugged) to detect and use the feature.

When the feature is detected on a CPU, we flag the availability of the
feature but this does not guarantee the correct functionality of the
counters, only the presence of the extension.

Firmware (code running at higher exception levels, e.g. arm-tf) support is
needed to:

- Enable access for lower exception levels (EL2 and EL1) to the AMU
  registers.
- Enable the counters. If not enabled these will read as 0.
- Save/restore the counters before/after the CPU is being put/brought up
  from the 'off' power state.

When using kernels that have this feature enabled but boot with broken
firmware, the user may experience panics or lockups when accessing the
counter registers. Even if these symptoms are not observed, the values
returned by the register reads might not correctly reflect reality. Most
commonly, the counters will read as 0, indicating that they are not
enabled.

If proper support is not provided in firmware it's best to disable
CONFIG_ARM64_AMU_EXTN. Note that, for security reasons, this does not
bypass the setting of AMUSERENR_EL0 to trap accesses from EL0 (userspace) to
EL1 (kernel). Therefore, firmware should still ensure accesses to AMU registers
are not trapped in EL2/EL3.

The fixed counters of AMUv1 are accessible through the following system
register definitions:

- SYS_AMEVCNTR0_CORE_EL0
- SYS_AMEVCNTR0_CONST_EL0
- SYS_AMEVCNTR0_INST_RET_EL0
- SYS_AMEVCNTR0_MEM_STALL_EL0

Auxiliary platform specific counters can be accessed using
SYS_AMEVCNTR1_EL0(n), where n is a value between 0 and 15.

Details can be found in: arch/arm64/include/asm/sysreg.h.
Userspace access
----------------

Currently, access from userspace to the AMU registers is disabled due to:

- Security reasons: they might expose information about code executed in
  secure mode.
- Purpose: AMU counters are intended for system management use.

Also, the presence of the feature is not visible to userspace.


Virtualization
--------------

Currently, access from userspace (EL0) and kernelspace (EL1) on the KVM
guest side is disabled due to:

- Security reasons: they might expose information about code executed
  by other guests or the host.

Any attempt to access the AMU registers will result in an UNDEFINED
exception being injected into the guest.
@@ -248,6 +248,20 @@ Before jumping into the kernel, the following conditions must be met:

  - HCR_EL2.APK (bit 40) must be initialised to 0b1
  - HCR_EL2.API (bit 41) must be initialised to 0b1

  For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:
  - If EL3 is present:
    CPTR_EL3.TAM (bit 30) must be initialised to 0b0
    CPTR_EL2.TAM (bit 30) must be initialised to 0b0
    AMCNTENSET0_EL0 must be initialised to 0b1111
    AMCNTENSET1_EL0 must be initialised to a platform specific value
    having 0b1 set for the corresponding bit for each of the auxiliary
    counters present.
  - If the kernel is entered at EL1:
    AMCNTENSET0_EL0 must be initialised to 0b1111
    AMCNTENSET1_EL0 must be initialised to a platform specific value
    having 0b1 set for the corresponding bit for each of the auxiliary
    counters present.

The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
@@ -6,6 +6,7 @@ ARM64 Architecture
   :maxdepth: 1

   acpi_object_usage
   amu
   arm-acpi
   booting
   cpu-feature-registers
@@ -117,6 +117,7 @@ config ARM64
	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
	select HAVE_ARCH_AUDITSYSCALL
	select HAVE_ARCH_BITREVERSE
	select HAVE_ARCH_COMPILER_H
	select HAVE_ARCH_HUGE_VMAP
	select HAVE_ARCH_JUMP_LABEL
	select HAVE_ARCH_JUMP_LABEL_RELATIVE

@@ -280,6 +281,9 @@ config ZONE_DMA32
config ARCH_ENABLE_MEMORY_HOTPLUG
	def_bool y

config ARCH_ENABLE_MEMORY_HOTREMOVE
	def_bool y

config SMP
	def_bool y

@@ -951,11 +955,11 @@ config HOTPLUG_CPU

# Common NUMA Features
config NUMA
-	bool "Numa Memory Allocation and Scheduler Support"
+	bool "NUMA Memory Allocation and Scheduler Support"
	select ACPI_NUMA if ACPI
	select OF_NUMA
	help
-	  Enable NUMA (Non Uniform Memory Access) support.
+	  Enable NUMA (Non-Uniform Memory Access) support.

	  The kernel will try to allocate memory used by a CPU on the
	  local memory of the CPU and add some more
@@ -1497,6 +1501,9 @@ config ARM64_PTR_AUTH
	bool "Enable support for pointer authentication"
	default y
	depends on !KVM || ARM64_VHE
+	depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC
+	depends on CC_IS_GCC || (CC_IS_CLANG && AS_HAS_CFI_NEGATE_RA_STATE)
+	depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
	help
	  Pointer authentication (part of the ARMv8.3 Extensions) provides
	  instructions for signing and authenticating pointers against secret
@@ -1504,16 +1511,72 @@ config ARM64_PTR_AUTH
	  and other attacks.

	  This option enables these instructions at EL0 (i.e. for userspace).

	  Choosing this option will cause the kernel to initialise secret keys
	  for each process at exec() time, with these keys being
	  context-switched along with the process.

	  If the compiler supports the -mbranch-protection or
	  -msign-return-address flag (e.g. GCC 7 or later), then this option
	  will also cause the kernel itself to be compiled with return address
	  protection. In this case, and if the target hardware is known to
	  support pointer authentication, then CONFIG_STACKPROTECTOR can be
	  disabled with minimal loss of protection.

	  The feature is detected at runtime. If the feature is not present in
	  hardware it will not be advertised to userspace/KVM guest nor will it
	  be enabled. However, KVM guests also require VHE mode and hence the
	  CONFIG_ARM64_VHE=y option to use this feature.

	  If the feature is present on the boot CPU but not on a late CPU, then
	  the late CPU will be parked. Also, if the boot CPU does not have
	  address auth and the late CPU does, then the late CPU will still boot
	  but with the feature disabled. On such a system, this option should
	  not be selected.

	  This feature works with the FUNCTION_GRAPH_TRACER option only if
	  DYNAMIC_FTRACE_WITH_REGS is enabled.

config CC_HAS_BRANCH_PROT_PAC_RET
	# GCC 9 or later, clang 8 or later
	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf)

config CC_HAS_SIGN_RETURN_ADDRESS
	# GCC 7, 8
	def_bool $(cc-option,-msign-return-address=all)

config AS_HAS_PAC
	def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)

config AS_HAS_CFI_NEGATE_RA_STATE
	def_bool $(as-instr,.cfi_startproc\n.cfi_negate_ra_state\n.cfi_endproc\n)

endmenu

menu "ARMv8.4 architectural features"
config ARM64_AMU_EXTN
	bool "Enable support for the Activity Monitors Unit CPU extension"
	default y
	help
	  The activity monitors extension is an optional extension introduced
	  by the ARMv8.4 CPU architecture. This enables support for version 1
	  of the activity monitors architecture, AMUv1.

	  To enable the use of this extension on CPUs that implement it, say Y.

	  Note that for architectural reasons, firmware _must_ implement AMU
	  support when running on CPUs that present the activity monitors
	  extension. The required support is present in:
	    * Version 1.5 and later of the ARM Trusted Firmware

	  For kernels that have this configuration enabled but boot with broken
	  firmware, you may need to say N here until the firmware is fixed.
	  Otherwise you may experience firmware panics or lockups when
	  accessing the counter registers. Even if you are not observing these
	  symptoms, the values returned by the register reads might not
	  correctly reflect reality. Most commonly, the value read will be 0,
	  indicating that the counter is not enabled.

endmenu

menu "ARMv8.5 architectural features"
@@ -65,6 +65,17 @@ stack_protector_prepare: prepare0
					include/generated/asm-offsets.h))
endif

ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
# -march=armv8.3-a enables the non-NOP PAC instructions. To avoid the compiler
# generating them, and consequently breaking the single-image contract, we pass
# it only to the assembler. This option is used only in the case of
# non-integrated assemblers.
branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a
KBUILD_CFLAGS += $(branch-prot-flags-y)
endif

ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS	+= -mbig-endian
CHECKFLAGS	+= -D__AARCH64EB__
@@ -9,8 +9,8 @@
#include <linux/linkage.h>
#include <asm/assembler.h>

-#define AES_ENTRY(func)		SYM_FUNC_START(ce_ ## func)
-#define AES_ENDPROC(func)	SYM_FUNC_END(ce_ ## func)
+#define AES_FUNC_START(func)	SYM_FUNC_START(ce_ ## func)
+#define AES_FUNC_END(func)	SYM_FUNC_END(ce_ ## func)

	.arch		armv8-a+crypto

@@ -51,7 +51,7 @@ SYM_FUNC_END(aes_decrypt_block5x)
 *			 int blocks)
 */

-AES_ENTRY(aes_ecb_encrypt)
+AES_FUNC_START(aes_ecb_encrypt)
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp

@@ -79,10 +79,10 @@ ST5(	st1	{v4.16b}, [x0], #16		)
.Lecbencout:
	ldp	x29, x30, [sp], #16
	ret
-AES_ENDPROC(aes_ecb_encrypt)
+AES_FUNC_END(aes_ecb_encrypt)

-AES_ENTRY(aes_ecb_decrypt)
+AES_FUNC_START(aes_ecb_decrypt)
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp

@@ -110,7 +110,7 @@ ST5(	st1	{v4.16b}, [x0], #16		)
.Lecbdecout:
	ldp	x29, x30, [sp], #16
	ret
-AES_ENDPROC(aes_ecb_decrypt)
+AES_FUNC_END(aes_ecb_decrypt)

/*
@@ -126,7 +126,7 @@ AES_ENDPROC(aes_ecb_decrypt)
 *			 u32 const rk2[]);
 */

-AES_ENTRY(aes_essiv_cbc_encrypt)
+AES_FUNC_START(aes_essiv_cbc_encrypt)
	ld1	{v4.16b}, [x5]			/* get iv */

	mov	w8, #14				/* AES-256: 14 rounds */
@@ -135,7 +135,7 @@ AES_ENTRY(aes_essiv_cbc_encrypt)
	enc_switch_key	w3, x2, x6
	b	.Lcbcencloop4x

-AES_ENTRY(aes_cbc_encrypt)
+AES_FUNC_START(aes_cbc_encrypt)
	ld1	{v4.16b}, [x5]			/* get iv */
	enc_prepare	w3, x2, x6

@@ -167,10 +167,10 @@ AES_ENTRY(aes_cbc_encrypt)
.Lcbcencout:
	st1	{v4.16b}, [x5]			/* return iv */
	ret
-AES_ENDPROC(aes_cbc_encrypt)
-AES_ENDPROC(aes_essiv_cbc_encrypt)
+AES_FUNC_END(aes_cbc_encrypt)
+AES_FUNC_END(aes_essiv_cbc_encrypt)

-AES_ENTRY(aes_essiv_cbc_decrypt)
+AES_FUNC_START(aes_essiv_cbc_decrypt)
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp

@@ -181,7 +181,7 @@ AES_ENTRY(aes_essiv_cbc_decrypt)
	encrypt_block	cbciv, w8, x6, x7, w9
	b	.Lessivcbcdecstart

-AES_ENTRY(aes_cbc_decrypt)
+AES_FUNC_START(aes_cbc_decrypt)
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp

@@ -238,8 +238,8 @@ ST5(	st1	{v4.16b}, [x0], #16		)
	st1	{cbciv.16b}, [x5]		/* return iv */
	ldp	x29, x30, [sp], #16
	ret
-AES_ENDPROC(aes_cbc_decrypt)
-AES_ENDPROC(aes_essiv_cbc_decrypt)
+AES_FUNC_END(aes_cbc_decrypt)
+AES_FUNC_END(aes_essiv_cbc_decrypt)

/*
@@ -249,7 +249,7 @@ AES_ENDPROC(aes_essiv_cbc_decrypt)
 *			 int rounds, int bytes, u8 const iv[])
 */

-AES_ENTRY(aes_cbc_cts_encrypt)
+AES_FUNC_START(aes_cbc_cts_encrypt)
	adr_l	x8, .Lcts_permute_table
	sub	x4, x4, #16
	add	x9, x8, #32
@@ -276,9 +276,9 @@ AES_ENTRY(aes_cbc_cts_encrypt)
	st1	{v0.16b}, [x4]			/* overlapping stores */
	st1	{v1.16b}, [x0]
	ret
-AES_ENDPROC(aes_cbc_cts_encrypt)
+AES_FUNC_END(aes_cbc_cts_encrypt)

-AES_ENTRY(aes_cbc_cts_decrypt)
+AES_FUNC_START(aes_cbc_cts_decrypt)
	adr_l	x8, .Lcts_permute_table
	sub	x4, x4, #16
	add	x9, x8, #32
@@ -305,7 +305,7 @@ AES_ENTRY(aes_cbc_cts_decrypt)
	st1	{v2.16b}, [x4]			/* overlapping stores */
	st1	{v0.16b}, [x0]
	ret
-AES_ENDPROC(aes_cbc_cts_decrypt)
+AES_FUNC_END(aes_cbc_cts_decrypt)

	.section	".rodata", "a"
	.align	6
@@ -324,7 +324,7 @@ AES_ENDPROC(aes_cbc_cts_decrypt)
 *			 int blocks, u8 ctr[])
 */

-AES_ENTRY(aes_ctr_encrypt)
+AES_FUNC_START(aes_ctr_encrypt)
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp

@@ -409,7 +409,7 @@ ST5(	st1	{v4.16b}, [x0], #16		)
	rev	x7, x7
	ins	vctr.d[0], x7
	b	.Lctrcarrydone
-AES_ENDPROC(aes_ctr_encrypt)
+AES_FUNC_END(aes_ctr_encrypt)

/*
@@ -433,7 +433,7 @@ AES_ENDPROC(aes_ctr_encrypt)
	uzp1	xtsmask.4s, xtsmask.4s, \tmp\().4s
	.endm

-AES_ENTRY(aes_xts_encrypt)
+AES_FUNC_START(aes_xts_encrypt)
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp

@@ -518,9 +518,9 @@ AES_ENTRY(aes_xts_encrypt)
	st1	{v2.16b}, [x4]			/* overlapping stores */
	mov	w4, wzr
	b	.Lxtsencctsout
-AES_ENDPROC(aes_xts_encrypt)
+AES_FUNC_END(aes_xts_encrypt)

-AES_ENTRY(aes_xts_decrypt)
+AES_FUNC_START(aes_xts_decrypt)
	stp	x29, x30, [sp, #-16]!
	mov	x29, sp

@@ -612,13 +612,13 @@ AES_ENTRY(aes_xts_decrypt)
	st1	{v2.16b}, [x4]			/* overlapping stores */
	mov	w4, wzr
	b	.Lxtsdecctsout
-AES_ENDPROC(aes_xts_decrypt)
+AES_FUNC_END(aes_xts_decrypt)

/*
 * aes_mac_update(u8 const in[], u32 const rk[], int rounds,
 *		  int blocks, u8 dg[], int enc_before, int enc_after)
 */
-AES_ENTRY(aes_mac_update)
+AES_FUNC_START(aes_mac_update)
	frame_push	6

	mov	x19, x0
@@ -676,4 +676,4 @@ AES_ENTRY(aes_mac_update)
	ld1	{v0.16b}, [x23]			/* get dg */
	enc_prepare	w21, x20, x0
	b	.Lmacloop4x
-AES_ENDPROC(aes_mac_update)
+AES_FUNC_END(aes_mac_update)
@@ -8,8 +8,8 @@
#include <linux/linkage.h>
#include <asm/assembler.h>

-#define AES_ENTRY(func)		SYM_FUNC_START(neon_ ## func)
-#define AES_ENDPROC(func)	SYM_FUNC_END(neon_ ## func)
+#define AES_FUNC_START(func)	SYM_FUNC_START(neon_ ## func)
+#define AES_FUNC_END(func)	SYM_FUNC_END(neon_ ## func)

	xtsmask	.req	v7
	cbciv	.req	v7

@@ -587,20 +587,20 @@ CPU_LE(	rev	w8, w8		)
 *			  struct ghash_key const *k, u64 dg[], u8 ctr[],
 *			  int rounds, u8 tag)
 */
-ENTRY(pmull_gcm_encrypt)
+SYM_FUNC_START(pmull_gcm_encrypt)
	pmull_gcm_do_crypt	1
-ENDPROC(pmull_gcm_encrypt)
+SYM_FUNC_END(pmull_gcm_encrypt)

/*
 * void pmull_gcm_decrypt(int blocks, u8 dst[], const u8 src[],
 *			  struct ghash_key const *k, u64 dg[], u8 ctr[],
 *			  int rounds, u8 tag)
 */
-ENTRY(pmull_gcm_decrypt)
+SYM_FUNC_START(pmull_gcm_decrypt)
	pmull_gcm_do_crypt	0
-ENDPROC(pmull_gcm_decrypt)
+SYM_FUNC_END(pmull_gcm_decrypt)

-pmull_gcm_ghash_4x:
+SYM_FUNC_START_LOCAL(pmull_gcm_ghash_4x)
	movi	MASK.16b, #0xe1
	shl	MASK.2d, MASK.2d, #57

@@ -681,9 +681,9 @@ pmull_gcm_ghash_4x:
	eor	XL.16b, XL.16b, T2.16b

	ret
-ENDPROC(pmull_gcm_ghash_4x)
+SYM_FUNC_END(pmull_gcm_ghash_4x)

-pmull_gcm_enc_4x:
+SYM_FUNC_START_LOCAL(pmull_gcm_enc_4x)
	ld1	{KS0.16b}, [x5]			// load upper counter
	sub	w10, w8, #4
	sub	w11, w8, #3
@@ -746,7 +746,7 @@ pmull_gcm_enc_4x:
	eor	INP3.16b, INP3.16b, KS3.16b

	ret
-ENDPROC(pmull_gcm_enc_4x)
+SYM_FUNC_END(pmull_gcm_enc_4x)

	.section	".rodata", "a"
	.align	6
arch/arm64/include/asm/asm_pointer_auth.h (new file, 65 lines)
@@ -0,0 +1,65 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_ASM_POINTER_AUTH_H
#define __ASM_ASM_POINTER_AUTH_H

#include <asm/alternative.h>
#include <asm/asm-offsets.h>
#include <asm/cpufeature.h>
#include <asm/sysreg.h>

#ifdef CONFIG_ARM64_PTR_AUTH
/*
 * The offset of thread.keys_user.ap* exceeds the #imm offset range, so
 * use thread.keys_user as the base value of ldp and thread.keys_user.ap*
 * as the offset.
 */
	.macro	ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
	mov	\tmp1, #THREAD_KEYS_USER
	add	\tmp1, \tsk, \tmp1
alternative_if_not ARM64_HAS_ADDRESS_AUTH
	b	.Laddr_auth_skip_\@
alternative_else_nop_endif
	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
	msr_s	SYS_APIAKEYLO_EL1, \tmp2
	msr_s	SYS_APIAKEYHI_EL1, \tmp3
	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
	msr_s	SYS_APIBKEYLO_EL1, \tmp2
	msr_s	SYS_APIBKEYHI_EL1, \tmp3
	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
	msr_s	SYS_APDAKEYLO_EL1, \tmp2
	msr_s	SYS_APDAKEYHI_EL1, \tmp3
	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
	msr_s	SYS_APDBKEYLO_EL1, \tmp2
	msr_s	SYS_APDBKEYHI_EL1, \tmp3
.Laddr_auth_skip_\@:
alternative_if ARM64_HAS_GENERIC_AUTH
	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
	msr_s	SYS_APGAKEYLO_EL1, \tmp2
	msr_s	SYS_APGAKEYHI_EL1, \tmp3
alternative_else_nop_endif
	.endm

	.macro	ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
alternative_if ARM64_HAS_ADDRESS_AUTH
	mov	\tmp1, #THREAD_KEYS_KERNEL
	add	\tmp1, \tsk, \tmp1
	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
	msr_s	SYS_APIAKEYLO_EL1, \tmp2
	msr_s	SYS_APIAKEYHI_EL1, \tmp3
	.if	\sync == 1
	isb
	.endif
alternative_else_nop_endif
	.endm

#else /* CONFIG_ARM64_PTR_AUTH */

	.macro	ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
	.endm

	.macro	ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
	.endm

#endif /* CONFIG_ARM64_PTR_AUTH */

#endif /* __ASM_ASM_POINTER_AUTH_H */
@@ -256,12 +256,6 @@ alternative_endif
	ldr	\rd, [\rn, #VMA_VM_MM]
	.endm

-/*
- * mmid - get context id from mm pointer (mm->context.id)
- */
-	.macro	mmid, rd, rn
-	ldr	\rd, [\rn, #MM_CONTEXT_ID]
-	.endm
/*
 * read_ctr - read CTR_EL0. If the system has mismatched register fields,
 * provide the system wide safe value from arm64_ftr_reg_ctrel0.sys_val
@@ -430,6 +424,16 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
9000:
	.endm

+/*
+ * reset_amuserenr_el0 - reset AMUSERENR_EL0 if AMUv1 present
+ */
+	.macro	reset_amuserenr_el0, tmpreg
+	mrs	\tmpreg, id_aa64pfr0_el1	// Check ID_AA64PFR0_EL1
+	ubfx	\tmpreg, \tmpreg, #ID_AA64PFR0_AMU_SHIFT, #4
+	cbz	\tmpreg, .Lskip_\@		// Skip if no AMU present
+	msr_s	SYS_AMUSERENR_EL0, xzr		// Disable AMU access from EL0
+.Lskip_\@:
+	.endm
/*
 * copy_page - copy src to dest using temp registers t1-t8
 */
@@ -5,7 +5,12 @@
#ifndef __ASM_CHECKSUM_H
#define __ASM_CHECKSUM_H

#include <linux/types.h>
+#include <linux/in6.h>

+#define _HAVE_ARCH_IPV6_CSUM
+__sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+			const struct in6_addr *daddr,
+			__u32 len, __u8 proto, __wsum sum);

static inline __sum16 csum_fold(__wsum csum)
{
arch/arm64/include/asm/compiler.h (new file, 24 lines)
@@ -0,0 +1,24 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_COMPILER_H
#define __ASM_COMPILER_H

#if defined(CONFIG_ARM64_PTR_AUTH)

/*
 * The EL0/EL1 pointer bits used by a pointer authentication code.
 * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
 */
#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)

/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
#define ptrauth_clear_pac(ptr)						\
	((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) :	\
			       (ptr & ~ptrauth_user_pac_mask()))

#define __builtin_return_address(val)					\
	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))

#endif /* CONFIG_ARM64_PTR_AUTH */

#endif /* __ASM_COMPILER_H */
@@ -55,12 +55,12 @@ struct cpu_operations {
#endif
};

extern const struct cpu_operations *cpu_ops[NR_CPUS];
-int __init cpu_read_ops(int cpu);
+int __init init_cpu_ops(int cpu);
extern const struct cpu_operations *get_cpu_ops(int cpu);

-static inline void __init cpu_read_bootcpu_ops(void)
+static inline void __init init_bootcpu_ops(void)
{
-	cpu_read_ops(0);
+	init_cpu_ops(0);
}

#endif /* ifndef __ASM_CPU_OPS_H */
@@ -58,7 +58,10 @@
#define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE	48
#define ARM64_HAS_E0PD				49
#define ARM64_HAS_RNG				50
+#define ARM64_HAS_AMU_EXTN			51
+#define ARM64_HAS_ADDRESS_AUTH			52
+#define ARM64_HAS_GENERIC_AUTH			53

-#define ARM64_NCAPS				51
+#define ARM64_NCAPS				54

#endif /* __ASM_CPUCAPS_H */
@@ -208,6 +208,10 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
|
||||
* In some non-typical cases either both (a) and (b), or neither,
|
||||
* should be permitted. This can be described by including neither
|
||||
* or both flags in the capability's type field.
|
||||
*
|
||||
* In case of a conflict, the CPU is prevented from booting. If the
|
||||
* ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability,
|
||||
* then a kernel panic is triggered.
|
||||
*/
|
||||
|
||||
|
||||
@@ -240,6 +244,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
|
||||
#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU ((u16)BIT(4))
|
||||
/* Is it safe for a late CPU to miss this capability when system has it */
|
||||
#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU ((u16)BIT(5))
|
||||
/* Panic when a conflict is detected */
|
||||
#define ARM64_CPUCAP_PANIC_ON_CONFLICT ((u16)BIT(6))
|
||||
|
||||
/*
|
||||
* CPU errata workarounds that need to be enabled at boot time if one or
|
||||
@@ -279,9 +285,20 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
|
||||
|
||||
/*
|
||||
* CPU feature used early in the boot based on the boot CPU. All secondary
|
||||
* CPUs must match the state of the capability as detected by the boot CPU.
|
||||
* CPUs must match the state of the capability as detected by the boot CPU. In
|
||||
* case of a conflict, a kernel panic is triggered.
|
||||
*/
|
||||
#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
|
||||
#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE \
|
||||
(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
|
||||
|
||||
/*
|
||||
* CPU feature used early in the boot based on the boot CPU. It is safe for a
|
||||
* late CPU to have this feature even though the boot CPU hasn't enabled it,
|
||||
* although the feature will not be used by Linux in this case. If the boot CPU
|
||||
* has enabled this feature already, then every late CPU must have it.
|
||||
*/
|
||||
#define ARM64_CPUCAP_BOOT_CPU_FEATURE \
|
||||
(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
|
||||
|
||||
struct arm64_cpu_capabilities {
|
||||
const char *desc;
|
||||
@@ -340,18 +357,6 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
|
||||
return cap->type & ARM64_CPUCAP_SCOPE_MASK;
|
||||
}
|
||||
|
||||
static inline bool
|
||||
cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
|
||||
{
|
||||
return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
|
||||
}
|
||||
|
||||
static inline bool
|
||||
cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
|
||||
{
|
||||
return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
|
||||
}
|
||||
|
||||
/*
|
||||
* Generic helper for handling capabilties with multiple (match,enable) pairs
|
||||
* of call backs, sharing the same capability bit.
|
||||
@@ -390,14 +395,16 @@ unsigned long cpu_get_elf_hwcap2(void);
|
||||
#define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
|
||||
#define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
|
||||
|
||||
/* System capability check for constant caps */
|
||||
static __always_inline bool __cpus_have_const_cap(int num)
|
||||
static __always_inline bool system_capabilities_finalized(void)
|
||||
{
|
||||
if (num >= ARM64_NCAPS)
|
||||
return false;
|
||||
return static_branch_unlikely(&cpu_hwcap_keys[num]);
|
||||
return static_branch_likely(&arm64_const_caps_ready);
|
||||
}
|
||||
|
||||
/*
|
||||
* Test for a capability with a runtime check.
|
||||
*
|
||||
* Before the capability is detected, this returns false.
|
||||
*/
|
||||
static inline bool cpus_have_cap(unsigned int num)
|
||||
{
|
||||
if (num >= ARM64_NCAPS)
|
||||
@@ -405,14 +412,53 @@ static inline bool cpus_have_cap(unsigned int num)
 	return test_bit(num, cpu_hwcaps);
 }
 
+/*
+ * Test for a capability without a runtime check.
+ *
+ * Before capabilities are finalized, this returns false.
+ * After capabilities are finalized, this is patched to avoid a runtime check.
+ *
+ * @num must be a compile-time constant.
+ */
+static __always_inline bool __cpus_have_const_cap(int num)
+{
+	if (num >= ARM64_NCAPS)
+		return false;
+	return static_branch_unlikely(&cpu_hwcap_keys[num]);
+}
+
+/*
+ * Test for a capability, possibly with a runtime check.
+ *
+ * Before capabilities are finalized, this behaves as cpus_have_cap().
+ * After capabilities are finalized, this is patched to avoid a runtime check.
+ *
+ * @num must be a compile-time constant.
+ */
 static __always_inline bool cpus_have_const_cap(int num)
 {
-	if (static_branch_likely(&arm64_const_caps_ready))
+	if (system_capabilities_finalized())
 		return __cpus_have_const_cap(num);
 	else
 		return cpus_have_cap(num);
 }
 
+/*
+ * Test for a capability without a runtime check.
+ *
+ * Before capabilities are finalized, this will BUG().
+ * After capabilities are finalized, this is patched to avoid a runtime check.
+ *
+ * @num must be a compile-time constant.
+ */
+static __always_inline bool cpus_have_final_cap(int num)
+{
+	if (system_capabilities_finalized())
+		return __cpus_have_const_cap(num);
+	else
+		BUG();
+}
+
 static inline void cpus_set_cap(unsigned int num)
 {
 	if (num >= ARM64_NCAPS) {
@@ -447,6 +493,29 @@ cpuid_feature_extract_unsigned_field(u64 features, int field)
 	return cpuid_feature_extract_unsigned_field_width(features, field, 4);
 }
 
+/*
+ * Fields that identify the version of the Performance Monitors Extension do
+ * not follow the standard ID scheme. See ARM DDI 0487E.a page D13-2825,
+ * "Alternative ID scheme used for the Performance Monitors Extension version".
+ */
+static inline u64 __attribute_const__
+cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
+{
+	u64 val = cpuid_feature_extract_unsigned_field(features, field);
+	u64 mask = GENMASK_ULL(field + 3, field);
+
+	/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
+	if (val == 0xf)
+		val = 0;
+
+	if (val > cap) {
+		features &= ~mask;
+		features |= (cap << field) & mask;
+	}
+
+	return features;
+}
+
 static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
 {
 	return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
@@ -590,15 +659,13 @@ static __always_inline bool system_supports_cnp(void)
 static inline bool system_supports_address_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
 }
 
 static inline bool system_supports_generic_auth(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
-		(cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
-		 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
+		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
 static inline bool system_uses_irq_prio_masking(void)
@@ -613,11 +680,6 @@ static inline bool system_has_prio_mask_debugging(void)
 		system_uses_irq_prio_masking();
 }
 
-static inline bool system_capabilities_finalized(void)
-{
-	return static_branch_likely(&arm64_const_caps_ready);
-}
-
 #define ARM64_BP_HARDEN_UNKNOWN		-1
 #define ARM64_BP_HARDEN_WA_NEEDED	0
 #define ARM64_BP_HARDEN_NOT_REQUIRED	1
@@ -678,6 +740,11 @@ static inline bool cpu_has_hw_af(void)
 		ID_AA64MMFR1_HADBS_SHIFT);
 }
 
+#ifdef CONFIG_ARM64_AMU_EXTN
+/* Check whether the cpu supports the Activity Monitors Unit (AMU) */
+extern bool cpu_has_amu_feat(int cpu);
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif

@@ -60,7 +60,7 @@
 #define ESR_ELx_EC_BKPT32	(0x38)
 /* Unallocated EC: 0x39 */
 #define ESR_ELx_EC_VECTOR32	(0x3A)	/* EL2 only */
-/* Unallocted EC: 0x3B */
+/* Unallocated EC: 0x3B */
 #define ESR_ELx_EC_BRK64	(0x3C)
 /* Unallocated EC: 0x3D - 0x3F */
 #define ESR_ELx_EC_MAX		(0x3F)

@@ -267,6 +267,7 @@
 
 /* Hyp Coprocessor Trap Register */
 #define CPTR_EL2_TCPAC	(1 << 31)
+#define CPTR_EL2_TAM	(1 << 30)
 #define CPTR_EL2_TTA	(1 << 20)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ	(1 << 8)

@@ -36,6 +36,8 @@
  */
 #define KVM_VECTOR_PREAMBLE	(2 * AARCH64_INSN_SIZE)
 
+#define __SMCCC_WORKAROUND_1_SMC_SZ 36
+
 #ifndef __ASSEMBLY__
 
 #include <linux/mm.h>
@@ -75,6 +77,8 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ];
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
 	({								\

@@ -481,7 +481,7 @@ static inline void *kvm_get_hyp_vector(void)
 	int slot = -1;
 
 	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
-		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs_start));
+		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
 		slot = data->hyp_vectors_slot;
 	}
 
@@ -510,14 +510,13 @@ static inline int kvm_map_vectors(void)
 	 * HBP + HEL2 -> use hardened vertors and use exec mapping
 	 */
 	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
-		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs_start);
+		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
 		__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
 	}
 
 	if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
-		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs_start);
-		unsigned long size = (__bp_harden_hyp_vecs_end -
-				      __bp_harden_hyp_vecs_start);
+		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
+		unsigned long size = __BP_HARDEN_HYP_VECS_SZ;
 
 		/*
 		 * Always allocate a spare vector slot, as we don't
Some files were not shown because too many files have changed in this diff.