mirror of
https://github.com/armbian/linux-rockchip.git
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:

 - arm64 perf: DDR PMU driver for Alibaba's T-Head Yitian 710 SoC, SVE vector granule register added to the user regs together with SVE perf extensions documentation.

 - SVE updates: add HWCAP for SVE EBF16, update the SVE ABI documentation to match the actual kernel behaviour (zeroing the registers on syscall rather than "zeroed or preserved" previously).

 - More conversions to automatic system registers generation.

 - vDSO: use self-synchronising virtual counter access in gettimeofday() if the architecture supports it.

 - arm64 stacktrace cleanups and improvements.

 - arm64 atomics improvements: always inline assembly, remove LL/SC trampolines.

 - Improve the reporting of EL1 exceptions: rework BTI and FPAC exception handling, better EL1 undefs reporting.

 - Cortex-A510 erratum 2658417: remove BF16 support due to incorrect result.

 - arm64 defconfig updates: build CoreSight as a module, enable options necessary for docker, memory hotplug/hotremove, enable all PMUs provided by Arm.

 - arm64 ptrace() support for TPIDR2_EL0 (register provided with the SME extensions).

 - arm64 ftraces updates/fixes: fix module PLTs with mcount, remove unused function.

 - kselftest updates for arm64: simple HWCAP validation, FP stress test improvements, validation of ZA regs in signal handlers, include larger SVE and SME vector lengths in signal tests, various cleanups.

 - arm64 alternatives (code patching) improvements to robustness and consistency: replace cpucap static branches with equivalent alternatives, associate callback alternatives with a cpucap.

 - Miscellaneous updates: optimise kprobe performance of patching single-step slots, simplify uaccess_mask_ptr(), move MTE registers initialisation to C, support huge vmalloc() mappings, run softirqs on the per-CPU IRQ stack, compat (arm32) misalignment fixups for multiword accesses.

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (126 commits)
  arm64: alternatives: Use vdso/bits.h instead of linux/bits.h
  arm64/kprobe: Optimize the performance of patching single-step slot
  arm64: defconfig: Add Coresight as module
  kselftest/arm64: Handle EINTR while reading data from children
  kselftest/arm64: Flag fp-stress as exiting when we begin finishing up
  kselftest/arm64: Don't repeat termination handler for fp-stress
  ARM64: reloc_test: add __init/__exit annotations to module init/exit funcs
  arm64/mm: fold check for KFENCE into can_set_direct_map()
  arm64: ftrace: fix module PLTs with mcount
  arm64: module: Remove unused plt_entry_is_initialized()
  arm64: module: Make plt_equals_entry() static
  arm64: fix the build with binutils 2.27
  kselftest/arm64: Don't enable v8.5 for MTE selftest builds
  arm64: uaccess: simplify uaccess_mask_ptr()
  arm64: asm/perf_regs.h: Avoid C++-style comment in UAPI header
  kselftest/arm64: Fix typo in hwcap check
  arm64: mte: move register initialization to C
  arm64: mm: handle ARM64_KERNEL_USES_PMD_MAPS in vmemmap_populate()
  arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()
  arm64/sve: Add Perf extensions documentation
  ...
@@ -3203,6 +3203,7 @@
spectre_v2_user=off [X86]
spec_store_bypass_disable=off [X86,PPC]
ssbd=force-off [ARM64]
nospectre_bhb [ARM64]
l1tf=off [X86]
mds=off [X86]
tsx_async_abort=off [X86]

@@ -3609,7 +3610,7 @@
nohugeiomap [KNL,X86,PPC,ARM64] Disable kernel huge I/O mappings.

nohugevmalloc [PPC] Disable kernel huge vmalloc mappings.
nohugevmalloc [KNL,X86,PPC,ARM64] Disable kernel huge vmalloc mappings.

nosmt [KNL,S390] Disable symmetric multithreading (SMT).
Equivalent to smt=1.

@@ -3627,6 +3628,10 @@
vulnerability. System may allow data leaks with this
option.

nospectre_bhb [ARM64] Disable all mitigations for Spectre-BHB (branch
history injection) vulnerability. System may allow data leaks
with this option.

nospec_store_bypass_disable
[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
Documentation/admin-guide/perf/alibaba_pmu.rst (new file, 100 lines)
@@ -0,0 +1,100 @@
=============================================================
Alibaba's T-Head SoC Uncore Performance Monitoring Unit (PMU)
=============================================================

The Yitian 710, custom-built by Alibaba Group's chip development business,
T-Head, implements uncore PMU for performance and functional debugging to
facilitate system maintenance.

DDR Sub-System Driveway (DRW) PMU Driver
=========================================

Yitian 710 employs eight DDR5/4 channels, four on each die. Each DDR5 channel
is independent of others to service system memory requests. And one DDR5
channel is split into two independent sub-channels. The DDR Sub-System Driveway
implements separate PMUs for each sub-channel to monitor various performance
metrics.

The Driveway PMU devices are named as ali_drw_<sys_base_addr> with perf.
For example, ali_drw_21000 and ali_drw_21080 are two PMU devices for two
sub-channels of the same channel in die 0. And the PMU device of die 1 is
prefixed with ali_drw_400XXXXX, e.g. ali_drw_40021000.

Each sub-channel has 36 PMU counters in total, which is classified into
four groups:

- Group 0: PMU Cycle Counter. This group has one pair of counters
  pmu_cycle_cnt_low and pmu_cycle_cnt_high, that is used as the cycle count
  based on DDRC core clock.

- Group 1: PMU Bandwidth Counters. This group has 8 counters that are used
  to count the total access number of either the eight bank groups in a
  selected rank, or four ranks separately in the first 4 counters. The base
  transfer unit is 64B.

- Group 2: PMU Retry Counters. This group has 10 counters, that intend to
  count the total retry number of each type of uncorrectable error.

- Group 3: PMU Common Counters. This group has 16 counters, that are used
  to count the common events.

For now, the Driveway PMU driver only uses counters in group 0 and group 3.

The DDR Controller (DDRCTL) and DDR PHY combine to create a complete solution
for connecting an SoC application bus to DDR memory devices. The DDRCTL
receives transactions Host Interface (HIF) which is custom-defined by Synopsys.
These transactions are queued internally and scheduled for access while
satisfying the SDRAM protocol timing requirements, transaction priorities, and
dependencies between the transactions. The DDRCTL in turn issues commands on
the DDR PHY Interface (DFI) to the PHY module, which launches and captures data
to and from the SDRAM. The driveway PMUs have hardware logic to gather
statistics and performance logging signals on HIF, DFI, etc.
By counting the READ, WRITE and RMW commands sent to the DDRC through the HIF
interface, we could calculate the bandwidth. Example usage of counting memory
data bandwidth::

  perf stat \
    -e ali_drw_21000/hif_wr/ \
    -e ali_drw_21000/hif_rd/ \
    -e ali_drw_21000/hif_rmw/ \
    -e ali_drw_21000/cycle/ \
    -e ali_drw_21080/hif_wr/ \
    -e ali_drw_21080/hif_rd/ \
    -e ali_drw_21080/hif_rmw/ \
    -e ali_drw_21080/cycle/ \
    -e ali_drw_23000/hif_wr/ \
    -e ali_drw_23000/hif_rd/ \
    -e ali_drw_23000/hif_rmw/ \
    -e ali_drw_23000/cycle/ \
    -e ali_drw_23080/hif_wr/ \
    -e ali_drw_23080/hif_rd/ \
    -e ali_drw_23080/hif_rmw/ \
    -e ali_drw_23080/cycle/ \
    -e ali_drw_25000/hif_wr/ \
    -e ali_drw_25000/hif_rd/ \
    -e ali_drw_25000/hif_rmw/ \
    -e ali_drw_25000/cycle/ \
    -e ali_drw_25080/hif_wr/ \
    -e ali_drw_25080/hif_rd/ \
    -e ali_drw_25080/hif_rmw/ \
    -e ali_drw_25080/cycle/ \
    -e ali_drw_27000/hif_wr/ \
    -e ali_drw_27000/hif_rd/ \
    -e ali_drw_27000/hif_rmw/ \
    -e ali_drw_27000/cycle/ \
    -e ali_drw_27080/hif_wr/ \
    -e ali_drw_27080/hif_rd/ \
    -e ali_drw_27080/hif_rmw/ \
    -e ali_drw_27080/cycle/ -- sleep 10
The average DRAM bandwidth can be calculated as follows:

- Read Bandwidth = perf_hif_rd * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle
- Write Bandwidth = (perf_hif_wr + perf_hif_rmw) * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle

Here, DDRC_WIDTH = 64 bytes.

The current driver does not support sampling. So "perf record" is
unsupported. Also attach to a task is unsupported as the events are all
uncore.
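The two formulas above translate directly into code. A minimal sketch follows, assuming the counter values and the DDRC core clock frequency are supplied by the caller; none of this is part of the kernel or the perf tool::

  /* Hypothetical post-processing helper: plugs one sub-channel's
   * hif_rd/hif_wr/hif_rmw/cycle counts into the formulas above. */
  #include <stdint.h>

  #define DDRC_WIDTH_BYTES 64ULL                 /* "DDRC_WIDTH = 64 bytes" */

  static uint64_t drw_read_bw(uint64_t hif_rd, uint64_t cycle, uint64_t ddrc_freq_hz)
  {
          /* Read Bandwidth = perf_hif_rd * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle */
          return hif_rd * DDRC_WIDTH_BYTES * ddrc_freq_hz / cycle;
  }

  static uint64_t drw_write_bw(uint64_t hif_wr, uint64_t hif_rmw, uint64_t cycle,
                               uint64_t ddrc_freq_hz)
  {
          /* Write Bandwidth = (perf_hif_wr + perf_hif_rmw) * DDRC_WIDTH * DDRC_Freq / DDRC_Cycle */
          return (hif_wr + hif_rmw) * DDRC_WIDTH_BYTES * ddrc_freq_hz / cycle;
  }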
@@ -18,3 +18,4 @@ Performance monitor support
   xgene-pmu
   arm_dsu_pmu
   thunderx2-pmu
   alibaba_pmu
@@ -272,6 +272,9 @@ HWCAP2_WFXT
HWCAP2_EBF16
    Functionality implied by ID_AA64ISAR1_EL1.BF16 == 0b0010.

HWCAP2_SVE_EBF16
    Functionality implied by ID_AA64ZFR0_EL1.BF16 == 0b0010.

4. Unused AT_HWCAP bits
-----------------------
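A minimal user-space probe for the new capability bit, assuming kernel headers recent enough to define HWCAP2_SVE_EBF16 (the constant only exists once this series is merged)::

  #include <stdio.h>
  #include <sys/auxv.h>
  #include <asm/hwcap.h>

  int main(void)
  {
          unsigned long hwcap2 = getauxval(AT_HWCAP2);

  #ifdef HWCAP2_SVE_EBF16
          printf("SVE_EBF16: %s\n",
                 (hwcap2 & HWCAP2_SVE_EBF16) ? "present" : "absent");
  #else
          printf("toolchain headers predate HWCAP2_SVE_EBF16\n");
  #endif
          return 0;
  }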
@@ -110,6 +110,8 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM            | Cortex-A510     | #2441009        | ARM64_ERRATUM_2441009       |
+----------------+-----------------+-----------------+-----------------------------+
| ARM            | Cortex-A510     | #2658417        | ARM64_ERRATUM_2658417       |
+----------------+-----------------+-----------------+-----------------------------+
| ARM            | Cortex-A710     | #2119858        | ARM64_ERRATUM_2119858       |
+----------------+-----------------+-----------------+-----------------------------+
| ARM            | Cortex-A710     | #2054223        | ARM64_ERRATUM_2054223       |
@@ -331,6 +331,9 @@ The regset data starts with struct user_za_header, containing:
  been read if a PTRACE_GETREGSET of NT_ARM_ZA were executed for each thread
  when the coredump was generated.

* The NT_ARM_TLS note will be extended to two registers, the second register
  will contain TPIDR2_EL0 on systems that support SME and will be read as
  zero with writes ignored otherwise.

9. System runtime configuration
--------------------------------
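For illustration, a tracer could read the widened regset as in the hedged sketch below; pid is assumed to be an already-attached, stopped tracee, and on pre-SME systems or older kernels only the first eight bytes are filled in::

  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <elf.h>

  static void dump_tls_regset(pid_t pid)
  {
          uint64_t tls[2] = { 0, 0 };   /* [0] = TPIDR_EL0, [1] = TPIDR2_EL0 */
          struct iovec iov = { .iov_base = tls, .iov_len = sizeof(tls) };

          if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_TLS, &iov) == 0)
                  printf("TPIDR_EL0=%#llx TPIDR2_EL0=%#llx (%zu bytes)\n",
                         (unsigned long long)tls[0],
                         (unsigned long long)tls[1], iov.iov_len);
  }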
@@ -111,7 +111,7 @@ the SVE instruction set architecture.

* On syscall, V0..V31 are preserved (as without SVE). Thus, bits [127:0] of
  Z0..Z31 are preserved. All other bits of Z0..Z31, and all of P0..P15 and FFR
  become unspecified on return from a syscall.
  become zero on return from a syscall.

* The SVE registers are not used to pass arguments to or receive results from
  any syscall.
@@ -452,6 +452,24 @@ The regset data starts with struct user_sve_header, containing:
* Modifying the system default vector length does not affect the vector length
  of any existing process or thread that does not make an execve() call.

10. Perf extensions
--------------------------------

* The arm64 specific DWARF standard [5] added the VG (Vector Granule) register
  at index 46. This register is used for DWARF unwinding when variable length
  SVE registers are pushed onto the stack.

* Its value is equivalent to the current SVE vector length (VL) in bits divided
  by 64.

* The value is included in Perf samples in the regs[46] field if
  PERF_SAMPLE_REGS_USER is set and the sample_regs_user mask has bit 46 set.

* The value is the current value at the time the sample was taken, and it can
  change over time.

* If the system doesn't support SVE when perf_event_open is called with these
  settings, the event will fail to open.

Appendix A. SVE programmer's model (informative)
=================================================
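As a hedged sketch of the user-space side, an event that asks for VG in its sample records could be opened roughly as follows; the literal 46 mirrors the register index described above (recent <asm/perf_regs.h> also names it, but the index is the assumption here)::

  #include <linux/perf_event.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int open_cycles_with_vg(void)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.size = sizeof(attr);
          attr.type = PERF_TYPE_HARDWARE;
          attr.config = PERF_COUNT_HW_CPU_CYCLES;
          attr.sample_period = 100000;
          attr.sample_type = PERF_SAMPLE_REGS_USER;
          attr.sample_regs_user = 1ULL << 46;     /* request the VG register */
          attr.exclude_kernel = 1;

          /* As documented above, this fails to open if the system lacks SVE. */
          return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
  }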
@@ -593,3 +611,5 @@ References
    http://infocenter.arm.com/help/topic/com.arm.doc.ihi0055c/IHI0055C_beta_aapcs64.pdf
    http://infocenter.arm.com/help/topic/com.arm.doc.subset.swdev.abi/index.html
    Procedure Call Standard for the ARM 64-bit Architecture (AArch64)

[5] https://github.com/ARM-software/abi-aa/blob/main/aadwarf64/aadwarf64.rst
@@ -748,6 +748,12 @@ S: Supported
F: drivers/infiniband/hw/erdma
F: include/uapi/rdma/erdma-abi.h

ALIBABA PMU DRIVER
M: Shuai Xue <xueshuai@linux.alibaba.com>
S: Supported
F: Documentation/admin-guide/perf/alibaba_pmu.rst
F: drivers/perf/alibaba_uncore_dwr_pmu.c

ALIENWARE WMI DRIVER
L: Dell.Client.Kernel@dell.com
S: Maintained
@@ -149,6 +149,7 @@ config ARM64
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_BITREVERSE
select HAVE_ARCH_COMPILER_H
select HAVE_ARCH_HUGE_VMALLOC
select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE

@@ -230,6 +231,7 @@ config ARM64
select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
select TRACE_IRQFLAGS_SUPPORT
select TRACE_IRQFLAGS_NMI_SUPPORT
select HAVE_SOFTIRQ_ON_OWN_STACK
help
ARM 64-bit (AArch64) Linux support.
@@ -733,6 +735,19 @@ config ARM64_ERRATUM_2077057

If unsure, say Y.

config ARM64_ERRATUM_2658417
bool "Cortex-A510: 2658417: remove BF16 support due to incorrect result"
default y
help
This option adds the workaround for ARM Cortex-A510 erratum 2658417.
Affected Cortex-A510 (r0p0 to r1p1) may produce the wrong result for
BFMMLA or VMMLA instructions in rare circumstances when a pair of
A510 CPUs are using shared neon hardware. As the sharing is not
discoverable by the kernel, hide the BF16 HWCAP to indicate that
user-space should not be using these instructions.

If unsure, say Y.

config ARM64_ERRATUM_2119858
bool "Cortex-A710/X2: 2119858: workaround TRBE overwriting trace data in FILL mode"
default y

@@ -1562,6 +1577,9 @@ config THUMB2_COMPAT_VDSO
Compile the compat vDSO with '-mthumb -fomit-frame-pointer' if y,
otherwise with '-marm'.

config COMPAT_ALIGNMENT_FIXUPS
bool "Fix up misaligned multi-word loads and stores in user space"

menuconfig ARMV8_DEPRECATED
bool "Emulate deprecated/obsolete ARMv8 instructions"
depends on SYSCTL
@@ -18,6 +18,7 @@ CONFIG_NUMA_BALANCING=y
CONFIG_MEMCG=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y

@@ -102,6 +103,8 @@ CONFIG_ARM_SCMI_CPUFREQ=y
CONFIG_ARM_TEGRA186_CPUFREQ=y
CONFIG_QORIQ_CPUFREQ=y
CONFIG_ACPI=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HMAT=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y

@@ -126,6 +129,8 @@ CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_KSM=y
CONFIG_MEMORY_FAILURE=y
CONFIG_TRANSPARENT_HUGEPAGE=y

@@ -139,12 +144,16 @@ CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IPV6=m
CONFIG_NETFILTER=y
CONFIG_BRIDGE_NETFILTER=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_IP_VS=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m

@@ -1349,4 +1358,12 @@ CONFIG_DEBUG_FS=y
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_FTRACE is not set
CONFIG_CORESIGHT=m
CONFIG_CORESIGHT_LINK_AND_SINK_TMC=m
CONFIG_CORESIGHT_CATU=m
CONFIG_CORESIGHT_SINK_TPIU=m
CONFIG_CORESIGHT_SINK_ETBV10=m
CONFIG_CORESIGHT_STM=m
CONFIG_CORESIGHT_CPU_DEBUG=m
CONFIG_CORESIGHT_CTI=m
CONFIG_MEMTEST=y
@@ -2,10 +2,22 @@
#ifndef __ASM_ALTERNATIVE_MACROS_H
#define __ASM_ALTERNATIVE_MACROS_H

#include <linux/const.h>
#include <vdso/bits.h>

#include <asm/cpucaps.h>
#include <asm/insn-def.h>

#define ARM64_CB_PATCH ARM64_NCAPS
/*
 * Binutils 2.27.0 can't handle a 'UL' suffix on constants, so for the assembly
 * macros below we must use `(1 << ARM64_CB_SHIFT)`.
 */
#define ARM64_CB_SHIFT 15
#define ARM64_CB_BIT BIT(ARM64_CB_SHIFT)

#if ARM64_NCAPS >= ARM64_CB_BIT
#error "cpucaps have overflown ARM64_CB_BIT"
#endif
#ifndef __ASSEMBLY__

@@ -73,8 +85,8 @@
#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...) \
__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))

#define ALTERNATIVE_CB(oldinstr, cb) \
__ALTERNATIVE_CFG_CB(oldinstr, ARM64_CB_PATCH, 1, cb)
#define ALTERNATIVE_CB(oldinstr, feature, cb) \
__ALTERNATIVE_CFG_CB(oldinstr, (1 << ARM64_CB_SHIFT) | (feature), 1, cb)
#else

#include <asm/assembler.h>

@@ -82,7 +94,7 @@
.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
.word \orig_offset - .
.word \alt_offset - .
.hword \feature
.hword (\feature)
.byte \orig_len
.byte \alt_len
.endm

@@ -141,10 +153,10 @@
661:
.endm

.macro alternative_cb cb
.macro alternative_cb cap, cb
.set .Lasm_alt_mode, 0
.pushsection .altinstructions, "a"
altinstruction_entry 661f, \cb, ARM64_CB_PATCH, 662f-661f, 0
altinstruction_entry 661f, \cb, (1 << ARM64_CB_SHIFT) | \cap, 662f-661f, 0
.popsection
661:
.endm

@@ -207,4 +219,46 @@ alternative_endif
#define ALTERNATIVE(oldinstr, newinstr, ...) \
_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)

#ifndef __ASSEMBLY__

#include <linux/types.h>

static __always_inline bool
alternative_has_feature_likely(unsigned long feature)
{
compiletime_assert(feature < ARM64_NCAPS,
"feature must be < ARM64_NCAPS");

asm_volatile_goto(
ALTERNATIVE_CB("b %l[l_no]", %[feature], alt_cb_patch_nops)
:
: [feature] "i" (feature)
:
: l_no);

return true;
l_no:
return false;
}

static __always_inline bool
alternative_has_feature_unlikely(unsigned long feature)
{
compiletime_assert(feature < ARM64_NCAPS,
"feature must be < ARM64_NCAPS");

asm_volatile_goto(
ALTERNATIVE("nop", "b %l[l_yes]", %[feature])
:
: [feature] "i" (feature)
:
: l_yes);

return false;
l_yes:
return true;
}

#endif /* __ASSEMBLY__ */

#endif /* __ASM_ALTERNATIVE_MACROS_H */
@@ -293,7 +293,7 @@ alternative_endif
alternative_if_not ARM64_KVM_PROTECTED_MODE
ASM_BUG()
alternative_else_nop_endif
alternative_cb kvm_compute_final_ctr_el0
alternative_cb ARM64_ALWAYS_SYSTEM, kvm_compute_final_ctr_el0
movz \reg, #0
movk \reg, #0, lsl #16
movk \reg, #0, lsl #32

@@ -384,8 +384,8 @@ alternative_cb_end
.macro tcr_compute_pa_size, tcr, pos, tmp0, tmp1
mrs \tmp0, ID_AA64MMFR0_EL1
// Narrow PARange to fit the PS field in TCR_ELx
ubfx \tmp0, \tmp0, #ID_AA64MMFR0_PARANGE_SHIFT, #3
mov \tmp1, #ID_AA64MMFR0_PARANGE_MAX
ubfx \tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
mov \tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
cmp \tmp0, \tmp1
csel \tmp0, \tmp1, \tmp0, hi
bfi \tcr, \tmp0, \pos, #3

@@ -512,7 +512,7 @@ alternative_endif
*/
.macro reset_pmuserenr_el0, tmpreg
mrs \tmpreg, id_aa64dfr0_el1
sbfx \tmpreg, \tmpreg, #ID_AA64DFR0_PMUVER_SHIFT, #4
sbfx \tmpreg, \tmpreg, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4
cmp \tmpreg, #1 // Skip if no PMU present
b.lt 9000f
msr pmuserenr_el0, xzr // Disable PMU access from EL0

@@ -524,7 +524,7 @@ alternative_endif
*/
.macro reset_amuserenr_el0, tmpreg
mrs \tmpreg, id_aa64pfr0_el1 // Check ID_AA64PFR0_EL1
ubfx \tmpreg, \tmpreg, #ID_AA64PFR0_AMU_SHIFT, #4
ubfx \tmpreg, \tmpreg, #ID_AA64PFR0_EL1_AMU_SHIFT, #4
cbz \tmpreg, .Lskip_\@ // Skip if no AMU present
msr_s SYS_AMUSERENR_EL0, xzr // Disable AMU access from EL0
.Lskip_\@:

@@ -612,7 +612,7 @@ alternative_endif
.macro offset_ttbr1, ttbr, tmp
#ifdef CONFIG_ARM64_VA_BITS_52
mrs_s \tmp, SYS_ID_AA64MMFR2_EL1
and \tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
and \tmp, \tmp, #(0xf << ID_AA64MMFR2_EL1_VARange_SHIFT)
cbnz \tmp, .Lskipoffs_\@
orr \ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
.Lskipoffs_\@ :

@@ -877,7 +877,7 @@ alternative_endif

.macro __mitigate_spectre_bhb_loop tmp
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
alternative_cb spectre_bhb_patch_loop_iter
alternative_cb ARM64_ALWAYS_SYSTEM, spectre_bhb_patch_loop_iter
mov \tmp, #32 // Patched to correct the immediate
alternative_cb_end
.Lspectre_bhb_loop\@:

@@ -890,7 +890,7 @@ alternative_cb_end

.macro mitigate_spectre_bhb_loop tmp
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
alternative_cb spectre_bhb_patch_loop_mitigation_enable
alternative_cb ARM64_ALWAYS_SYSTEM, spectre_bhb_patch_loop_mitigation_enable
b .L_spectre_bhb_loop_done\@ // Patched to NOP
alternative_cb_end
__mitigate_spectre_bhb_loop \tmp

@@ -904,7 +904,7 @@ alternative_cb_end
stp x0, x1, [sp, #-16]!
stp x2, x3, [sp, #-16]!
mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3
alternative_cb smccc_patch_fw_mitigation_conduit
alternative_cb ARM64_ALWAYS_SYSTEM, smccc_patch_fw_mitigation_conduit
nop // Patched to SMC/HVC #0
alternative_cb_end
ldp x2, x3, [sp], #16

@@ -914,7 +914,7 @@ alternative_cb_end

.macro mitigate_spectre_bhb_clear_insn
#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY
alternative_cb spectre_bhb_patch_clearbhb
alternative_cb ARM64_ALWAYS_SYSTEM, spectre_bhb_patch_clearbhb
/* Patched to NOP when not supported */
clearbhb
isb
@@ -12,19 +12,6 @@
|
||||
|
||||
#include <linux/stringify.h>
|
||||
|
||||
#ifdef CONFIG_ARM64_LSE_ATOMICS
|
||||
#define __LL_SC_FALLBACK(asm_ops) \
|
||||
" b 3f\n" \
|
||||
" .subsection 1\n" \
|
||||
"3:\n" \
|
||||
asm_ops "\n" \
|
||||
" b 4f\n" \
|
||||
" .previous\n" \
|
||||
"4:\n"
|
||||
#else
|
||||
#define __LL_SC_FALLBACK(asm_ops) asm_ops
|
||||
#endif
|
||||
|
||||
#ifndef CONFIG_CC_HAS_K_CONSTRAINT
|
||||
#define K
|
||||
#endif
|
||||
@@ -36,38 +23,36 @@ asm_ops "\n" \
|
||||
*/
|
||||
|
||||
#define ATOMIC_OP(op, asm_op, constraint) \
|
||||
static inline void \
|
||||
static __always_inline void \
|
||||
__ll_sc_atomic_##op(int i, atomic_t *v) \
|
||||
{ \
|
||||
unsigned long tmp; \
|
||||
int result; \
|
||||
\
|
||||
asm volatile("// atomic_" #op "\n" \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %2\n" \
|
||||
"1: ldxr %w0, %2\n" \
|
||||
" " #asm_op " %w0, %w0, %w3\n" \
|
||||
" stxr %w1, %w0, %2\n" \
|
||||
" cbnz %w1, 1b\n") \
|
||||
" cbnz %w1, 1b\n" \
|
||||
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
|
||||
: __stringify(constraint) "r" (i)); \
|
||||
}
|
||||
|
||||
#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
|
||||
static inline int \
|
||||
static __always_inline int \
|
||||
__ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \
|
||||
{ \
|
||||
unsigned long tmp; \
|
||||
int result; \
|
||||
\
|
||||
asm volatile("// atomic_" #op "_return" #name "\n" \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %2\n" \
|
||||
"1: ld" #acq "xr %w0, %2\n" \
|
||||
" " #asm_op " %w0, %w0, %w3\n" \
|
||||
" st" #rel "xr %w1, %w0, %2\n" \
|
||||
" cbnz %w1, 1b\n" \
|
||||
" " #mb ) \
|
||||
" " #mb \
|
||||
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
|
||||
: __stringify(constraint) "r" (i) \
|
||||
: cl); \
|
||||
@@ -76,20 +61,19 @@ __ll_sc_atomic_##op##_return##name(int i, atomic_t *v) \
|
||||
}
|
||||
|
||||
#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \
|
||||
static inline int \
|
||||
static __always_inline int \
|
||||
__ll_sc_atomic_fetch_##op##name(int i, atomic_t *v) \
|
||||
{ \
|
||||
unsigned long tmp; \
|
||||
int val, result; \
|
||||
\
|
||||
asm volatile("// atomic_fetch_" #op #name "\n" \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %3\n" \
|
||||
"1: ld" #acq "xr %w0, %3\n" \
|
||||
" " #asm_op " %w1, %w0, %w4\n" \
|
||||
" st" #rel "xr %w2, %w1, %3\n" \
|
||||
" cbnz %w2, 1b\n" \
|
||||
" " #mb ) \
|
||||
" " #mb \
|
||||
: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
|
||||
: __stringify(constraint) "r" (i) \
|
||||
: cl); \
|
||||
@@ -135,38 +119,36 @@ ATOMIC_OPS(andnot, bic, )
|
||||
#undef ATOMIC_OP
|
||||
|
||||
#define ATOMIC64_OP(op, asm_op, constraint) \
|
||||
static inline void \
|
||||
static __always_inline void \
|
||||
__ll_sc_atomic64_##op(s64 i, atomic64_t *v) \
|
||||
{ \
|
||||
s64 result; \
|
||||
unsigned long tmp; \
|
||||
\
|
||||
asm volatile("// atomic64_" #op "\n" \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %2\n" \
|
||||
"1: ldxr %0, %2\n" \
|
||||
" " #asm_op " %0, %0, %3\n" \
|
||||
" stxr %w1, %0, %2\n" \
|
||||
" cbnz %w1, 1b") \
|
||||
" cbnz %w1, 1b" \
|
||||
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
|
||||
: __stringify(constraint) "r" (i)); \
|
||||
}
|
||||
|
||||
#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
|
||||
static inline long \
|
||||
static __always_inline long \
|
||||
__ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
|
||||
{ \
|
||||
s64 result; \
|
||||
unsigned long tmp; \
|
||||
\
|
||||
asm volatile("// atomic64_" #op "_return" #name "\n" \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %2\n" \
|
||||
"1: ld" #acq "xr %0, %2\n" \
|
||||
" " #asm_op " %0, %0, %3\n" \
|
||||
" st" #rel "xr %w1, %0, %2\n" \
|
||||
" cbnz %w1, 1b\n" \
|
||||
" " #mb ) \
|
||||
" " #mb \
|
||||
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
|
||||
: __stringify(constraint) "r" (i) \
|
||||
: cl); \
|
||||
@@ -175,20 +157,19 @@ __ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v) \
|
||||
}
|
||||
|
||||
#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
|
||||
static inline long \
|
||||
static __always_inline long \
|
||||
__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
|
||||
{ \
|
||||
s64 result, val; \
|
||||
unsigned long tmp; \
|
||||
\
|
||||
asm volatile("// atomic64_fetch_" #op #name "\n" \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %3\n" \
|
||||
"1: ld" #acq "xr %0, %3\n" \
|
||||
" " #asm_op " %1, %0, %4\n" \
|
||||
" st" #rel "xr %w2, %1, %3\n" \
|
||||
" cbnz %w2, 1b\n" \
|
||||
" " #mb ) \
|
||||
" " #mb \
|
||||
: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter) \
|
||||
: __stringify(constraint) "r" (i) \
|
||||
: cl); \
|
||||
@@ -233,14 +214,13 @@ ATOMIC64_OPS(andnot, bic, )
|
||||
#undef ATOMIC64_OP_RETURN
|
||||
#undef ATOMIC64_OP
|
||||
|
||||
static inline s64
|
||||
static __always_inline s64
|
||||
__ll_sc_atomic64_dec_if_positive(atomic64_t *v)
|
||||
{
|
||||
s64 result;
|
||||
unsigned long tmp;
|
||||
|
||||
asm volatile("// atomic64_dec_if_positive\n"
|
||||
__LL_SC_FALLBACK(
|
||||
" prfm pstl1strm, %2\n"
|
||||
"1: ldxr %0, %2\n"
|
||||
" subs %0, %0, #1\n"
|
||||
@@ -248,7 +228,7 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v)
|
||||
" stlxr %w1, %0, %2\n"
|
||||
" cbnz %w1, 1b\n"
|
||||
" dmb ish\n"
|
||||
"2:")
|
||||
"2:"
|
||||
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
|
||||
:
|
||||
: "cc", "memory");
|
||||
@@ -257,7 +237,7 @@ __ll_sc_atomic64_dec_if_positive(atomic64_t *v)
|
||||
}
|
||||
|
||||
#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint) \
|
||||
static inline u##sz \
|
||||
static __always_inline u##sz \
|
||||
__ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \
|
||||
unsigned long old, \
|
||||
u##sz new) \
|
||||
@@ -274,7 +254,6 @@ __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \
|
||||
old = (u##sz)old; \
|
||||
\
|
||||
asm volatile( \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %[v]\n" \
|
||||
"1: ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n" \
|
||||
" eor %" #w "[tmp], %" #w "[oldval], %" #w "[old]\n" \
|
||||
@@ -282,7 +261,7 @@ __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr, \
|
||||
" st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n" \
|
||||
" cbnz %w[tmp], 1b\n" \
|
||||
" " #mb "\n" \
|
||||
"2:") \
|
||||
"2:" \
|
||||
: [tmp] "=&r" (tmp), [oldval] "=&r" (oldval), \
|
||||
[v] "+Q" (*(u##sz *)ptr) \
|
||||
: [old] __stringify(constraint) "r" (old), [new] "r" (new) \
|
||||
@@ -316,7 +295,7 @@ __CMPXCHG_CASE( , , mb_, 64, dmb ish, , l, "memory", L)
|
||||
#undef __CMPXCHG_CASE
|
||||
|
||||
#define __CMPXCHG_DBL(name, mb, rel, cl) \
|
||||
static inline long \
|
||||
static __always_inline long \
|
||||
__ll_sc__cmpxchg_double##name(unsigned long old1, \
|
||||
unsigned long old2, \
|
||||
unsigned long new1, \
|
||||
@@ -326,7 +305,6 @@ __ll_sc__cmpxchg_double##name(unsigned long old1, \
|
||||
unsigned long tmp, ret; \
|
||||
\
|
||||
asm volatile("// __cmpxchg_double" #name "\n" \
|
||||
__LL_SC_FALLBACK( \
|
||||
" prfm pstl1strm, %2\n" \
|
||||
"1: ldxp %0, %1, %2\n" \
|
||||
" eor %0, %0, %3\n" \
|
||||
@@ -336,7 +314,7 @@ __ll_sc__cmpxchg_double##name(unsigned long old1, \
|
||||
" st" #rel "xp %w0, %5, %6, %2\n" \
|
||||
" cbnz %w0, 1b\n" \
|
||||
" " #mb "\n" \
|
||||
"2:") \
|
||||
"2:" \
|
||||
: "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr) \
|
||||
: "r" (old1), "r" (old2), "r" (new1), "r" (new2) \
|
||||
: cl); \
|
||||
|
||||
@@ -11,7 +11,8 @@
#define __ASM_ATOMIC_LSE_H

#define ATOMIC_OP(op, asm_op) \
static inline void __lse_atomic_##op(int i, atomic_t *v) \
static __always_inline void \
__lse_atomic_##op(int i, atomic_t *v) \
{ \
asm volatile( \
__LSE_PREAMBLE \
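For orientation, a hedged illustration of what ATOMIC_OP(add, stadd) generates after this change (paraphrased from the macro above, not exact preprocessor output):

  static __always_inline void __lse_atomic_add(int i, atomic_t *v)
  {
          /* single LSE store-add instruction, now guaranteed to be inlined */
          asm volatile(
          __LSE_PREAMBLE
          "stadd %w[i], %[v]\n"
          : [v] "+Q" (v->counter)
          : [i] "r" (i));
  }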
@@ -25,7 +26,7 @@ ATOMIC_OP(or, stset)
ATOMIC_OP(xor, steor)
ATOMIC_OP(add, stadd)

static inline void __lse_atomic_sub(int i, atomic_t *v)
static __always_inline void __lse_atomic_sub(int i, atomic_t *v)
{
__lse_atomic_add(-i, v);
}

@@ -33,7 +34,8 @@ static inline void __lse_atomic_sub(int i, atomic_t *v)
#undef ATOMIC_OP

#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \
static __always_inline int \
__lse_atomic_fetch_##op##name(int i, atomic_t *v) \
{ \
int old; \
\

@@ -63,7 +65,8 @@ ATOMIC_FETCH_OPS(add, ldadd)
#undef ATOMIC_FETCH_OPS

#define ATOMIC_FETCH_OP_SUB(name) \
static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \
static __always_inline int \
__lse_atomic_fetch_sub##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_add##name(-i, v); \
}

@@ -76,12 +79,14 @@ ATOMIC_FETCH_OP_SUB( )
#undef ATOMIC_FETCH_OP_SUB

#define ATOMIC_OP_ADD_SUB_RETURN(name) \
static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \
static __always_inline int \
__lse_atomic_add_return##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_add##name(i, v) + i; \
} \
\
static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \
static __always_inline int \
__lse_atomic_sub_return##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_sub(i, v) - i; \
}

@@ -93,13 +98,14 @@ ATOMIC_OP_ADD_SUB_RETURN( )

#undef ATOMIC_OP_ADD_SUB_RETURN

static inline void __lse_atomic_and(int i, atomic_t *v)
static __always_inline void __lse_atomic_and(int i, atomic_t *v)
{
return __lse_atomic_andnot(~i, v);
}

#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \
static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \
static __always_inline int \
__lse_atomic_fetch_and##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_andnot##name(~i, v); \
}

@@ -112,7 +118,8 @@ ATOMIC_FETCH_OP_AND( , al, "memory")
#undef ATOMIC_FETCH_OP_AND

#define ATOMIC64_OP(op, asm_op) \
static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \
static __always_inline void \
__lse_atomic64_##op(s64 i, atomic64_t *v) \
{ \
asm volatile( \
__LSE_PREAMBLE \

@@ -126,7 +133,7 @@ ATOMIC64_OP(or, stset)
ATOMIC64_OP(xor, steor)
ATOMIC64_OP(add, stadd)

static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
static __always_inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
{
__lse_atomic64_add(-i, v);
}

@@ -134,7 +141,8 @@ static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
#undef ATOMIC64_OP

#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
static __always_inline long \
__lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
{ \
s64 old; \
\

@@ -164,7 +172,8 @@ ATOMIC64_FETCH_OPS(add, ldadd)
#undef ATOMIC64_FETCH_OPS

#define ATOMIC64_FETCH_OP_SUB(name) \
static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
static __always_inline long \
__lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_add##name(-i, v); \
}

@@ -177,12 +186,14 @@ ATOMIC64_FETCH_OP_SUB( )
#undef ATOMIC64_FETCH_OP_SUB

#define ATOMIC64_OP_ADD_SUB_RETURN(name) \
static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
static __always_inline long \
__lse_atomic64_add_return##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_add##name(i, v) + i; \
} \
\
static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)\
static __always_inline long \
__lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_sub##name(i, v) - i; \
}

@@ -194,13 +205,14 @@ ATOMIC64_OP_ADD_SUB_RETURN( )

#undef ATOMIC64_OP_ADD_SUB_RETURN

static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
static __always_inline void __lse_atomic64_and(s64 i, atomic64_t *v)
{
return __lse_atomic64_andnot(~i, v);
}

#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
static __always_inline long \
__lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_andnot##name(~i, v); \
}

@@ -212,7 +224,7 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")

#undef ATOMIC64_FETCH_OP_AND

static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
static __always_inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
{
unsigned long tmp;
@@ -45,10 +45,6 @@ static inline unsigned int arch_slab_minalign(void)
#define arch_slab_minalign() arch_slab_minalign()
#endif

#define CTR_CACHE_MINLINE_MASK \
(0xf << CTR_EL0_DMINLINE_SHIFT | \
CTR_EL0_IMINLINE_MASK << CTR_EL0_IMINLINE_SHIFT)

#define CTR_L1IP(ctr) SYS_FIELD_GET(CTR_EL0, L1Ip, ctr)

#define ICACHEF_ALIASING 0
@@ -6,6 +6,7 @@
#ifndef __ASM_CPUFEATURE_H
#define __ASM_CPUFEATURE_H

#include <asm/alternative-macros.h>
#include <asm/cpucaps.h>
#include <asm/cputype.h>
#include <asm/hwcap.h>

@@ -419,12 +420,8 @@ static __always_inline bool is_hyp_code(void)
}

extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
extern struct static_key_false arm64_const_caps_ready;

/* ARM64 CAPS + alternative_cb */
#define ARM64_NPATCHABLE (ARM64_NCAPS + 1)
extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
extern DECLARE_BITMAP(boot_capabilities, ARM64_NCAPS);

#define for_each_available_cap(cap) \
for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS)

@@ -440,7 +437,7 @@ unsigned long cpu_get_elf_hwcap2(void);

static __always_inline bool system_capabilities_finalized(void)
{
return static_branch_likely(&arm64_const_caps_ready);
return alternative_has_feature_likely(ARM64_ALWAYS_SYSTEM);
}

/*

@@ -448,11 +445,11 @@ static __always_inline bool system_capabilities_finalized(void)
*
* Before the capability is detected, this returns false.
*/
static inline bool cpus_have_cap(unsigned int num)
static __always_inline bool cpus_have_cap(unsigned int num)
{
if (num >= ARM64_NCAPS)
return false;
return test_bit(num, cpu_hwcaps);
return arch_test_bit(num, cpu_hwcaps);
}

/*

@@ -467,7 +464,7 @@ static __always_inline bool __cpus_have_const_cap(int num)
{
if (num >= ARM64_NCAPS)
return false;
return static_branch_unlikely(&cpu_hwcap_keys[num]);
return alternative_has_feature_unlikely(num);
}

/*

@@ -553,7 +550,7 @@ cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
u64 mask = GENMASK_ULL(field + 3, field);

/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
if (val == ID_AA64DFR0_PMUVER_IMP_DEF)
if (val == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
val = 0;

if (val > cap) {

@@ -597,43 +594,43 @@ static inline s64 arm64_ftr_value(const struct arm64_ftr_bits *ftrp, u64 val)

static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
{
return cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL_SHIFT) == 0x1 ||
cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1;
return cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_EL1_BIGEND_SHIFT) == 0x1 ||
cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_EL1_BIGENDEL0_SHIFT) == 0x1;
}

static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_EL1_SHIFT);

return val == ID_AA64PFR0_ELx_32BIT_64BIT;
return val == ID_AA64PFR0_EL1_ELx_32BIT_64BIT;
}

static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_EL0_SHIFT);

return val == ID_AA64PFR0_ELx_32BIT_64BIT;
return val == ID_AA64PFR0_EL1_ELx_32BIT_64BIT;
}

static inline bool id_aa64pfr0_sve(u64 pfr0)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_SVE_SHIFT);
u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SVE_SHIFT);

return val > 0;
}

static inline bool id_aa64pfr1_sme(u64 pfr1)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_SME_SHIFT);
u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_EL1_SME_SHIFT);

return val > 0;
}

static inline bool id_aa64pfr1_mte(u64 pfr1)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT);
u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_EL1_MTE_SHIFT);

return val >= ID_AA64PFR1_MTE;
return val >= ID_AA64PFR1_EL1_MTE_MTE2;
}

void __init setup_cpu_features(void);

@@ -659,7 +656,7 @@ static inline bool supports_csv2p3(int scope)
pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);

csv2_val = cpuid_feature_extract_unsigned_field(pfr0,
ID_AA64PFR0_CSV2_SHIFT);
ID_AA64PFR0_EL1_CSV2_SHIFT);
return csv2_val == 3;
}

@@ -694,10 +691,10 @@ static inline bool system_supports_4kb_granule(void)

mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
val = cpuid_feature_extract_unsigned_field(mmfr0,
ID_AA64MMFR0_TGRAN4_SHIFT);
ID_AA64MMFR0_EL1_TGRAN4_SHIFT);

return (val >= ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN) &&
(val <= ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX);
return (val >= ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN) &&
(val <= ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX);
}

static inline bool system_supports_64kb_granule(void)

@@ -707,10 +704,10 @@ static inline bool system_supports_64kb_granule(void)

mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
val = cpuid_feature_extract_unsigned_field(mmfr0,
ID_AA64MMFR0_TGRAN64_SHIFT);
ID_AA64MMFR0_EL1_TGRAN64_SHIFT);

return (val >= ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN) &&
(val <= ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX);
return (val >= ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN) &&
(val <= ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX);
}

static inline bool system_supports_16kb_granule(void)

@@ -720,10 +717,10 @@ static inline bool system_supports_16kb_granule(void)

mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
val = cpuid_feature_extract_unsigned_field(mmfr0,
ID_AA64MMFR0_TGRAN16_SHIFT);
ID_AA64MMFR0_EL1_TGRAN16_SHIFT);

return (val >= ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN) &&
(val <= ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX);
return (val >= ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN) &&
(val <= ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX);
}

static inline bool system_supports_mixed_endian_el0(void)

@@ -738,7 +735,7 @@ static inline bool system_supports_mixed_endian(void)

mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
val = cpuid_feature_extract_unsigned_field(mmfr0,
ID_AA64MMFR0_BIGENDEL_SHIFT);
ID_AA64MMFR0_EL1_BIGEND_SHIFT);

return val == 0x1;
}

@@ -840,13 +837,13 @@ extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
{
switch (parange) {
case ID_AA64MMFR0_PARANGE_32: return 32;
case ID_AA64MMFR0_PARANGE_36: return 36;
case ID_AA64MMFR0_PARANGE_40: return 40;
case ID_AA64MMFR0_PARANGE_42: return 42;
case ID_AA64MMFR0_PARANGE_44: return 44;
case ID_AA64MMFR0_PARANGE_48: return 48;
case ID_AA64MMFR0_PARANGE_52: return 52;
case ID_AA64MMFR0_EL1_PARANGE_32: return 32;
case ID_AA64MMFR0_EL1_PARANGE_36: return 36;
case ID_AA64MMFR0_EL1_PARANGE_40: return 40;
case ID_AA64MMFR0_EL1_PARANGE_42: return 42;
case ID_AA64MMFR0_EL1_PARANGE_44: return 44;
case ID_AA64MMFR0_EL1_PARANGE_48: return 48;
case ID_AA64MMFR0_EL1_PARANGE_52: return 52;
/*
* A future PE could use a value unknown to the kernel.
* However, by the "D10.1.4 Principles of the ID scheme

@@ -868,14 +865,14 @@ static inline bool cpu_has_hw_af(void)

mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
return cpuid_feature_extract_unsigned_field(mmfr1,
ID_AA64MMFR1_HADBS_SHIFT);
ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
}

static inline bool cpu_has_pan(void)
{
u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
return cpuid_feature_extract_unsigned_field(mmfr1,
ID_AA64MMFR1_PAN_SHIFT);
ID_AA64MMFR1_EL1_PAN_SHIFT);
}

#ifdef CONFIG_ARM64_AMU_EXTN

@@ -896,8 +893,8 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
int vmid_bits;

vmid_bits = cpuid_feature_extract_unsigned_field(mmfr1,
ID_AA64MMFR1_VMIDBITS_SHIFT);
if (vmid_bits == ID_AA64MMFR1_VMIDBITS_16)
ID_AA64MMFR1_EL1_VMIDBits_SHIFT);
if (vmid_bits == ID_AA64MMFR1_EL1_VMIDBits_16)
return 16;

/*

@@ -907,6 +904,8 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
return 8;
}

struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id);

extern struct arm64_ftr_override id_aa64mmfr1_override;
extern struct arm64_ftr_override id_aa64pfr0_override;
extern struct arm64_ftr_override id_aa64pfr1_override;
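Call sites do not change with this conversion. As a rough, hedged illustration (ARM64_HAS_PAN is only an example capability), a test like the following now compiles down to a patched branch via alternative_has_feature_unlikely() instead of a static-key check:

  if (cpus_have_const_cap(ARM64_HAS_PAN)) {
          /* capability-specific fast path */
  }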
@@ -40,7 +40,7 @@

.macro __init_el2_debug
mrs x1, id_aa64dfr0_el1
sbfx x0, x1, #ID_AA64DFR0_PMUVER_SHIFT, #4
sbfx x0, x1, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4
cmp x0, #1
b.lt .Lskip_pmu_\@ // Skip if no PMU present
mrs x0, pmcr_el0 // Disable debug access traps

@@ -49,7 +49,7 @@
csel x2, xzr, x0, lt // all PMU counters from EL1

/* Statistical profiling */
ubfx x0, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
ubfx x0, x1, #ID_AA64DFR0_EL1_PMSVer_SHIFT, #4
cbz x0, .Lskip_spe_\@ // Skip if SPE not present

mrs_s x0, SYS_PMBIDR_EL1 // If SPE available at EL2,

@@ -65,7 +65,7 @@

.Lskip_spe_\@:
/* Trace buffer */
ubfx x0, x1, #ID_AA64DFR0_TRBE_SHIFT, #4
ubfx x0, x1, #ID_AA64DFR0_EL1_TraceBuffer_SHIFT, #4
cbz x0, .Lskip_trace_\@ // Skip if TraceBuffer is not present

mrs_s x0, SYS_TRBIDR_EL1

@@ -83,7 +83,7 @@
/* LORegions */
.macro __init_el2_lor
mrs x1, id_aa64mmfr1_el1
ubfx x0, x1, #ID_AA64MMFR1_LOR_SHIFT, 4
ubfx x0, x1, #ID_AA64MMFR1_EL1_LO_SHIFT, 4
cbz x0, .Lskip_lor_\@
msr_s SYS_LORC_EL1, xzr
.Lskip_lor_\@:

@@ -97,7 +97,7 @@
/* GICv3 system register access */
.macro __init_el2_gicv3
mrs x0, id_aa64pfr0_el1
ubfx x0, x0, #ID_AA64PFR0_GIC_SHIFT, #4
ubfx x0, x0, #ID_AA64PFR0_EL1_GIC_SHIFT, #4
cbz x0, .Lskip_gicv3_\@

mrs_s x0, SYS_ICC_SRE_EL2

@@ -132,12 +132,12 @@
/* Disable any fine grained traps */
.macro __init_el2_fgt
mrs x1, id_aa64mmfr0_el1
ubfx x1, x1, #ID_AA64MMFR0_FGT_SHIFT, #4
ubfx x1, x1, #ID_AA64MMFR0_EL1_FGT_SHIFT, #4
cbz x1, .Lskip_fgt_\@

mov x0, xzr
mrs x1, id_aa64dfr0_el1
ubfx x1, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
ubfx x1, x1, #ID_AA64DFR0_EL1_PMSVer_SHIFT, #4
cmp x1, #3
b.lt .Lset_debug_fgt_\@
/* Disable PMSNEVFR_EL1 read and write traps */

@@ -149,7 +149,7 @@

mov x0, xzr
mrs x1, id_aa64pfr1_el1
ubfx x1, x1, #ID_AA64PFR1_SME_SHIFT, #4
ubfx x1, x1, #ID_AA64PFR1_EL1_SME_SHIFT, #4
cbz x1, .Lset_fgt_\@

/* Disable nVHE traps of TPIDR2 and SMPRI */

@@ -162,7 +162,7 @@
msr_s SYS_HFGITR_EL2, xzr

mrs x1, id_aa64pfr0_el1 // AMU traps UNDEF without AMU
ubfx x1, x1, #ID_AA64PFR0_AMU_SHIFT, #4
ubfx x1, x1, #ID_AA64PFR0_EL1_AMU_SHIFT, #4
cbz x1, .Lskip_fgt_\@

msr_s SYS_HAFGRTR_EL2, xzr
@@ -58,8 +58,9 @@ asmlinkage void call_on_irq_stack(struct pt_regs *regs,
asmlinkage void asm_exit_to_user_mode(struct pt_regs *regs);

void do_mem_abort(unsigned long far, unsigned long esr, struct pt_regs *regs);
void do_undefinstr(struct pt_regs *regs);
void do_bti(struct pt_regs *regs);
void do_undefinstr(struct pt_regs *regs, unsigned long esr);
void do_el0_bti(struct pt_regs *regs);
void do_el1_bti(struct pt_regs *regs, unsigned long esr);
void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr,
struct pt_regs *regs);
void do_fpsimd_acc(unsigned long esr, struct pt_regs *regs);

@@ -70,9 +71,11 @@ void do_sysinstr(unsigned long esr, struct pt_regs *regs);
void do_sp_pc_abort(unsigned long addr, unsigned long esr, struct pt_regs *regs);
void bad_el0_sync(struct pt_regs *regs, int reason, unsigned long esr);
void do_cp15instr(unsigned long esr, struct pt_regs *regs);
int do_compat_alignment_fixup(unsigned long addr, struct pt_regs *regs);
void do_el0_svc(struct pt_regs *regs);
void do_el0_svc_compat(struct pt_regs *regs);
void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
void do_el0_fpac(struct pt_regs *regs, unsigned long esr);
void do_el1_fpac(struct pt_regs *regs, unsigned long esr);
void do_serror(struct pt_regs *regs, unsigned long esr);
void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags);
@@ -142,7 +142,7 @@ static inline int get_num_brps(void)
u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
return 1 +
cpuid_feature_extract_unsigned_field(dfr0,
ID_AA64DFR0_BRPS_SHIFT);
ID_AA64DFR0_EL1_BRPs_SHIFT);
}

/* Determine number of WRP registers available. */

@@ -151,7 +151,7 @@ static inline int get_num_wrps(void)
u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
return 1 +
cpuid_feature_extract_unsigned_field(dfr0,
ID_AA64DFR0_WRPS_SHIFT);
ID_AA64DFR0_EL1_WRPs_SHIFT);
}

#endif /* __ASM_BREAKPOINT_H */
@@ -119,6 +119,7 @@
#define KERNEL_HWCAP_SME_FA64 __khwcap2_feature(SME_FA64)
#define KERNEL_HWCAP_WFXT __khwcap2_feature(WFXT)
#define KERNEL_HWCAP_EBF16 __khwcap2_feature(EBF16)
#define KERNEL_HWCAP_SVE_EBF16 __khwcap2_feature(SVE_EBF16)

/*
 * This yields a mask that user programs can use to figure out what
Some files were not shown because too many files have changed in this diff.