Merge 5.10.30 into android12-5.10
Changes in 5.10.30
xfrm/compat: Cleanup WARN()s that can be user-triggered
ALSA: aloop: Fix initialization of controls
ALSA: hda/realtek: Fix speaker amp setup on Acer Aspire E1
ALSA: hda/conexant: Apply quirk for another HP ZBook G5 model
ASoC: intel: atom: Stop advertising non working S24LE support
nfc: fix refcount leak in llcp_sock_bind()
nfc: fix refcount leak in llcp_sock_connect()
nfc: fix memory leak in llcp_sock_connect()
nfc: Avoid endless loops caused by repeated llcp_sock_connect()
selinux: make nslot handling in avtab more robust
selinux: fix cond_list corruption when changing booleans
selinux: fix race between old and new sidtab
xen/evtchn: Change irq_info lock to raw_spinlock_t
net: ipv6: check for validity before dereferencing cfg->fc_nlinfo.nlh
net: dsa: lantiq_gswip: Let GSWIP automatically set the xMII clock
net: dsa: lantiq_gswip: Don't use PHY auto polling
net: dsa: lantiq_gswip: Configure all remaining GSWIP_MII_CFG bits
drm/i915: Fix invalid access to ACPI _DSM objects
ACPI: processor: Fix build when CONFIG_ACPI_PROCESSOR=m
IB/hfi1: Fix probe time panic when AIP is enabled with a buggy BIOS
LOOKUP_MOUNTPOINT: we are cleaning "jumped" flag too late
gcov: re-fix clang-11+ support
ia64: fix user_stack_pointer() for ptrace()
nds32: flush_dcache_page: use page_mapping_file to avoid races with swapoff
ocfs2: fix deadlock between setattr and dio_end_io_write
fs: direct-io: fix missing sdio->boundary
ethtool: fix incorrect datatype in set_eee ops
of: property: fw_devlink: do not link ".*,nr-gpios"
parisc: parisc-agp requires SBA IOMMU driver
parisc: avoid a warning on u8 cast for cmpxchg on u8 pointers
ARM: dts: turris-omnia: configure LED[2]/INTn pin as interrupt pin
batman-adv: initialize "struct batadv_tvlv_tt_vlan_data"->reserved field
ice: Continue probe on link/PHY errors
ice: Increase control queue timeout
ice: prevent ice_open and ice_stop during reset
ice: fix memory allocation call
ice: remove DCBNL_DEVRESET bit from PF state
ice: Fix for dereference of NULL pointer
ice: Use port number instead of PF ID for WoL
ice: Cleanup fltr list in case of allocation issues
iwlwifi: pcie: properly set LTR workarounds on 22000 devices
ice: fix memory leak of aRFS after resuming from suspend
net: hso: fix null-ptr-deref during tty device unregistration
libbpf: Fix bail out from 'ringbuf_process_ring()' on error
bpf: Enforce that struct_ops programs be GPL-only
bpf: link: Refuse non-O_RDWR flags in BPF_OBJ_GET
ethernet/netronome/nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx
libbpf: Ensure umem pointer is non-NULL before dereferencing
libbpf: Restore umem state after socket create failure
libbpf: Only create rx and tx XDP rings when necessary
bpf: Refcount task stack in bpf_get_task_stack
bpf, sockmap: Fix sk->prot unhash op reset
bpf, sockmap: Fix incorrect fwd_alloc accounting
net: ensure mac header is set in virtio_net_hdr_to_skb()
i40e: Fix sparse warning: missing error code 'err'
i40e: Fix sparse error: 'vsi->netdev' could be null
i40e: Fix sparse error: uninitialized symbol 'ring'
i40e: Fix sparse errors in i40e_txrx.c
vdpa/mlx5: Fix suspend/resume index restoration
net: sched: sch_teql: fix null-pointer dereference
net: sched: fix action overwrite reference counting
nl80211: fix beacon head validation
nl80211: fix potential leak of ACL params
cfg80211: check S1G beacon compat element length
mac80211: fix time-is-after bug in mlme
mac80211: fix TXQ AC confusion
net: hsr: Reset MAC header for Tx path
net-ipv6: bugfix - raw & sctp - switch to ipv6_can_nonlocal_bind()
net: let skb_orphan_partial wake-up waiters.
thunderbolt: Fix a leak in tb_retimer_add()
thunderbolt: Fix off by one in tb_port_find_retimer()
usbip: add sysfs_lock to synchronize sysfs code paths
usbip: stub-dev synchronize sysfs code paths
usbip: vudc synchronize sysfs code paths
usbip: synchronize event handler with sysfs code paths
driver core: Fix locking bug in deferred_probe_timeout_work_func()
scsi: pm80xx: Fix chip initialization failure
scsi: target: iscsi: Fix zero tag inside a trace event
percpu: make pcpu_nr_empty_pop_pages per chunk type
i2c: turn recovery error on init to debug
KVM: x86/mmu: change TDP MMU yield function returns to match cond_resched
KVM: x86/mmu: Merge flush and non-flush tdp_mmu_iter_cond_resched
KVM: x86/mmu: Rename goal_gfn to next_last_level_gfn
KVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter
KVM: x86/mmu: Yield in TDU MMU iter even if no SPTES changed
KVM: x86/mmu: Ensure TLBs are flushed when yielding during GFN range zap
KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
KVM: x86/mmu: preserve pending TLB flush across calls to kvm_tdp_mmu_zap_sp
net: sched: fix err handler in tcf_action_init()
ice: Refactor DCB related variables out of the ice_port_info struct
ice: Recognize 860 as iSCSI port in CEE mode
xfrm: interface: fix ipv4 pmtu check to honor ip header df
xfrm: Use actual socket sk instead of skb socket for xfrm_output_resume
remoteproc: qcom: pil_info: avoid 64-bit division
regulator: bd9571mwv: Fix AVS and DVFS voltage range
ARM: OMAP4: Fix PMIC voltage domains for bionic
ARM: OMAP4: PM: update ROM return address for OSWR and OFF
net: xfrm: Localize sequence counter per network namespace
esp: delete NETIF_F_SCTP_CRC bit from features for esp offload
ASoC: SOF: Intel: HDA: fix core status verification
ASoC: wm8960: Fix wrong bclk and lrclk with pll enabled for some chips
xfrm: Fix NULL pointer dereference on policy lookup
virtchnl: Fix layout of RSS structures
i40e: Added Asym_Pause to supported link modes
i40e: Fix kernel oops when i40e driver removes VF's
hostfs: fix memory handling in follow_link()
amd-xgbe: Update DMA coherency values
vxlan: do not modify the shared tunnel info when PMTU triggers an ICMP reply
geneve: do not modify the shared tunnel info when PMTU triggers an ICMP reply
sch_red: fix off-by-one checks in red_check_params()
drivers/net/wan/hdlc_fr: Fix a double free in pvc_xmit
arm64: dts: imx8mm/q: Fix pad control of SD1_DATA0
xfrm: Provide private skb extensions for segmented and hw offloaded ESP packets
can: bcm/raw: fix msg_namelen values depending on CAN_REQUIRED_SIZE
can: isotp: fix msg_namelen values depending on CAN_REQUIRED_SIZE
mlxsw: spectrum: Fix ECN marking in tunnel decapsulation
ethernet: myri10ge: Fix a use after free in myri10ge_sw_tso
gianfar: Handle error code at MAC address change
net: dsa: Fix type was not set for devlink port
cxgb4: avoid collecting SGE_QBASE regs during traffic
net:tipc: Fix a double free in tipc_sk_mcast_rcv
ARM: dts: imx6: pbab01: Set vmmc supply for both SD interfaces
net/ncsi: Avoid channel_monitor hrtimer deadlock
net: qrtr: Fix memory leak on qrtr_tx_wait failure
nfp: flower: ignore duplicate merge hints from FW
net: phy: broadcom: Only advertise EEE for supported modes
I2C: JZ4780: Fix bug for Ingenic X1000.
ASoC: sunxi: sun4i-codec: fill ASoC card owner
net/mlx5e: Fix mapping of ct_label zero
net/mlx5e: Fix ethtool indication of connector type
net/mlx5: Don't request more than supported EQs
net/rds: Fix a use after free in rds_message_map_pages
xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory model
soc/fsl: qbman: fix conflicting alignment attributes
i40e: Fix display statistics for veb_tc
RDMA/rtrs-clt: Close rtrs client conn before destroying rtrs clt session files
drm/msm: Set drvdata to NULL when msm_drm_init() fails
net: udp: Add support for getsockopt(..., ..., UDP_GRO, ..., ...);
mptcp: forbit mcast-related sockopt on MPTCP sockets
scsi: ufs: core: Fix task management request completion timeout
scsi: ufs: core: Fix wrong Task Tag used in task management request UPIUs
net: cls_api: Fix uninitialised struct field bo->unlocked_driver_cb
net: macb: restore cmp registers on resume path
clk: fix invalid usage of list cursor in register
clk: fix invalid usage of list cursor in unregister
workqueue: Move the position of debug_work_activate() in __queue_work()
s390/cpcmd: fix inline assembly register clobbering
perf inject: Fix repipe usage
net: openvswitch: conntrack: simplify the return expression of ovs_ct_limit_get_default_limit()
openvswitch: fix send of uninitialized stack memory in ct limit reply
i2c: designware: Adjust bus_freq_hz when refuse high speed mode set
iwlwifi: fix 11ax disabled bit in the regulatory capability flags
can: mcp251x: fix support for half duplex SPI host controllers
tipc: increment the tmp aead refcnt before attaching it
net: hns3: clear VF down state bit before request link status
net/mlx5: Fix placement of log_max_flow_counter
net/mlx5: Fix PPLM register mapping
net/mlx5: Fix PBMC register mapping
RDMA/cxgb4: check for ipv6 address properly while destroying listener
perf report: Fix wrong LBR block sorting
RDMA/qedr: Fix kernel panic when trying to access recv_cq
drm/vc4: crtc: Reduce PV fifo threshold on hvs4
i40e: Fix parameters in aq_get_phy_register()
RDMA/addr: Be strict with gid size
vdpa/mlx5: should exclude header length and fcs from mtu
vdpa/mlx5: Fix wrong use of bit numbers
RAS/CEC: Correct ce_add_elem()'s returned values
clk: socfpga: fix iomem pointer cast on 64-bit
lockdep: Address clang -Wformat warning printing for %hd
dt-bindings: net: ethernet-controller: fix typo in NVMEM
net: sched: bump refcount for new action in ACT replace mode
gpiolib: Read "gpio-line-names" from a firmware node
cfg80211: remove WARN_ON() in cfg80211_sme_connect
net: tun: set tun->dev->addr_len during TUNSETLINK processing
drivers: net: fix memory leak in atusb_probe
drivers: net: fix memory leak in peak_usb_create_dev
net: mac802154: Fix general protection fault
net: ieee802154: nl-mac: fix check on panid
net: ieee802154: fix nl802154 del llsec key
net: ieee802154: fix nl802154 del llsec dev
net: ieee802154: fix nl802154 add llsec key
net: ieee802154: fix nl802154 del llsec devkey
net: ieee802154: forbid monitor for set llsec params
net: ieee802154: forbid monitor for del llsec seclevel
net: ieee802154: stop dump llsec params for monitors
Revert "net: sched: bump refcount for new action in ACT replace mode"
Linux 5.10.30

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ie8754a2e4dfef03bf1f2b878843cde19a4adab21
@@ -49,7 +49,7 @@ properties:
     description:
       Reference to an nvmem node for the MAC address
 
-  nvmem-cells-names:
+  nvmem-cell-names:
     const: mac-address
 
   phy-connection-type:
Makefile (2 changes)
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 29
+SUBLEVEL = 30
 EXTRAVERSION =
 NAME = Dare mighty things
 
@@ -236,6 +236,7 @@
    status = "okay";
    compatible = "ethernet-phy-id0141.0DD1", "ethernet-phy-ieee802.3-c22";
    reg = <1>;
+   marvell,reg-init = <3 18 0 0x4985>;
 
    /* irq is connected to &pcawan pin 7 */
 };
@@ -432,6 +432,7 @@
    pinctrl-0 = <&pinctrl_usdhc2>;
    cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
    wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+   vmmc-supply = <&vdd_sd1_reg>;
    status = "disabled";
 };
 
@@ -441,5 +442,6 @@
        &pinctrl_usdhc3_cdwp>;
    cd-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
    wp-gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>;
+   vmmc-supply = <&vdd_sd0_reg>;
    status = "disabled";
 };
@@ -9,6 +9,7 @@
  */
 
 #include <linux/arm-smccc.h>
+#include <linux/cpu_pm.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/io.h>
@@ -20,6 +21,7 @@
 
 #include "common.h"
 #include "omap-secure.h"
+#include "soc.h"
 
 static phys_addr_t omap_secure_memblock_base;
 
@@ -213,3 +215,40 @@ void __init omap_secure_init(void)
 {
     omap_optee_init_check();
 }
+
+/*
+ * Dummy dispatcher call after core OSWR and MPU off. Updates the ROM return
+ * address after MMU has been re-enabled after CPU1 has been woken up again.
+ * Otherwise the ROM code will attempt to use the earlier physical return
+ * address that got set with MMU off when waking up CPU1. Only used on secure
+ * devices.
+ */
+static int cpu_notifier(struct notifier_block *nb, unsigned long cmd, void *v)
+{
+    switch (cmd) {
+    case CPU_CLUSTER_PM_EXIT:
+        omap_secure_dispatcher(OMAP4_PPA_SERVICE_0,
+                               FLAG_START_CRITICAL,
+                               0, 0, 0, 0, 0);
+        break;
+    default:
+        break;
+    }
+
+    return NOTIFY_OK;
+}
+
+static struct notifier_block secure_notifier_block = {
+    .notifier_call = cpu_notifier,
+};
+
+static int __init secure_pm_init(void)
+{
+    if (omap_type() == OMAP2_DEVICE_TYPE_GP || !soc_is_omap44xx())
+        return 0;
+
+    cpu_pm_register_notifier(&secure_notifier_block);
+
+    return 0;
+}
+omap_arch_initcall(secure_pm_init);
@@ -50,6 +50,7 @@
 #define OMAP5_DRA7_MON_SET_ACR_INDEX 0x107
 
 /* Secure PPA(Primary Protected Application) APIs */
+#define OMAP4_PPA_SERVICE_0 0x21
 #define OMAP4_PPA_L2_POR_INDEX 0x23
 #define OMAP4_PPA_CPU_ACTRL_SMP_INDEX 0x25
 
@@ -246,10 +246,10 @@ int __init omap4_cpcap_init(void)
     omap_voltage_register_pmic(voltdm, &omap443x_max8952_mpu);
 
     if (of_machine_is_compatible("motorola,droid-bionic")) {
-        voltdm = voltdm_lookup("mpu");
+        voltdm = voltdm_lookup("core");
         omap_voltage_register_pmic(voltdm, &omap_cpcap_core);
 
-        voltdm = voltdm_lookup("mpu");
+        voltdm = voltdm_lookup("iva");
         omap_voltage_register_pmic(voltdm, &omap_cpcap_iva);
     } else {
         voltdm = voltdm_lookup("core");
@@ -124,7 +124,7 @@
 #define MX8MM_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0
 #define MX8MM_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0
 #define MX8MM_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0
-#define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0
+#define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0
 #define MX8MM_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0
 #define MX8MM_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0
 #define MX8MM_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
@@ -130,7 +130,7 @@
 #define MX8MQ_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0
 #define MX8MQ_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0
 #define MX8MQ_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0
-#define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0
+#define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0
 #define MX8MQ_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0
 #define MX8MQ_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0
 #define MX8MQ_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0
@@ -54,8 +54,7 @@
 
 static inline unsigned long user_stack_pointer(struct pt_regs *regs)
 {
-    /* FIXME: should this be bspstore + nr_dirty regs? */
-    return regs->ar_bspstore;
+    return regs->r12;
 }
 
 static inline int is_syscall_success(struct pt_regs *regs)
@@ -79,11 +78,6 @@ static inline long regs_return_value(struct pt_regs *regs)
     unsigned long __ip = instruction_pointer(regs);    \
     (__ip & ~3UL) + ((__ip & 3UL) << 2);               \
 })
-/*
- * Why not default? Because user_stack_pointer() on ia64 gives register
- * stack backing store instead...
- */
-#define current_user_stack_pointer() (current_pt_regs()->r12)
 
 /* given a pointer to a task_struct, return the user's pt_regs */
 # define task_pt_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)
@@ -238,7 +238,7 @@ void flush_dcache_page(struct page *page)
 {
     struct address_space *mapping;
 
-    mapping = page_mapping(page);
+    mapping = page_mapping_file(page);
     if (mapping && !mapping_mapped(mapping))
         set_bit(PG_dcache_dirty, &page->flags);
     else {
@@ -72,7 +72,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
 #endif
     case 4: return __cmpxchg_u32((unsigned int *)ptr,
                                  (unsigned int)old, (unsigned int)new_);
-    case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_);
+    case 1: return __cmpxchg_u8((u8 *)ptr, old & 0xff, new_ & 0xff);
     }
     __cmpxchg_called_with_bad_pointer();
     return old;
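The parisc change passes the old and new values masked to their low byte instead of casting them to u8, which hands __cmpxchg_u8 the same 8-bit value without a truncating cast. A minimal user-space sketch of that equivalence, with a simplified stand-in for __cmpxchg_u8 (cmpxchg_u8_sketch is a hypothetical helper, not the kernel function):

    #include <stdio.h>

    /* Simplified, non-atomic stand-in for __cmpxchg_u8, for illustration only. */
    static unsigned char cmpxchg_u8_sketch(unsigned char *ptr,
                                           unsigned char old,
                                           unsigned char new)
    {
        unsigned char prev = *ptr;

        if (prev == old)
            *ptr = new;
        return prev;
    }

    int main(void)
    {
        unsigned char byte = 0x5a;
        unsigned long old = 0x1234505a;   /* only the low byte matters */
        unsigned long new = 0xffffffa5;

        /* old & 0xff yields the same byte as (u8)old, without the cast warning. */
        cmpxchg_u8_sketch(&byte, old & 0xff, new & 0xff);
        printf("byte is now 0x%02x\n", byte);
        return 0;
    }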
@@ -37,10 +37,12 @@ static int diag8_noresponse(int cmdlen)
 
 static int diag8_response(int cmdlen, char *response, int *rlen)
 {
+    unsigned long _cmdlen = cmdlen | 0x40000000L;
+    unsigned long _rlen = *rlen;
     register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf;
     register unsigned long reg3 asm ("3") = (addr_t) response;
-    register unsigned long reg4 asm ("4") = cmdlen | 0x40000000L;
-    register unsigned long reg5 asm ("5") = *rlen;
+    register unsigned long reg4 asm ("4") = _cmdlen;
+    register unsigned long reg5 asm ("5") = _rlen;
 
     asm volatile(
         " diag %2,%0,0x8\n"
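The s390 fix computes the derived values into ordinary locals before any register variables are initialized, so the compiler never has to evaluate an expression after some of the registers are already pinned. A minimal user-space sketch of the idea (diag8_response_sketch is hypothetical; the real code binds reg4/reg5 with 'register ... asm("N")' and runs the diag 0x8 instruction):

    #include <stdio.h>

    static int diag8_response_sketch(int cmdlen, int rlen)
    {
        /* 1. Do the arithmetic first, with no registers pinned yet. */
        unsigned long _cmdlen = (unsigned long)cmdlen | 0x40000000UL;
        unsigned long _rlen = (unsigned long)rlen;

        /* 2. Only then copy the finished values into the variables the
         *    real code fixes to hardware registers. */
        unsigned long reg4 = _cmdlen;
        unsigned long reg5 = _rlen;

        /* 3. The inline assembly would consume reg4/reg5 here. */
        return (int)(reg4 ^ reg5);
    }

    int main(void)
    {
        printf("%d\n", diag8_response_sketch(16, 80));
        return 0;
    }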
@@ -132,7 +132,7 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 int wbinvd_on_all_cpus(void);
-bool wakeup_cpu0(void);
+void cond_wakeup_cpu0(void);
 
 void native_smp_send_reschedule(int cpu);
 void native_send_call_func_ipi(const struct cpumask *mask);
@@ -1655,13 +1655,17 @@ void play_dead_common(void)
     local_irq_disable();
 }
 
-bool wakeup_cpu0(void)
+/**
+ * cond_wakeup_cpu0 - Wake up CPU0 if needed.
+ *
+ * If NMI wants to wake up CPU0, start CPU0.
+ */
+void cond_wakeup_cpu0(void)
 {
     if (smp_processor_id() == 0 && enable_start_cpu0)
-        return true;
-
-    return false;
+        start_cpu0();
 }
+EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
 
 /*
  * We need to flush the caches before going to sleep, lest we have
@@ -1730,11 +1734,8 @@ static inline void mwait_play_dead(void)
         __monitor(mwait_ptr, 0, 0);
         mb();
         __mwait(eax, 0);
-        /*
-         * If NMI wants to wake up CPU0, start CPU0.
-         */
-        if (wakeup_cpu0())
-            start_cpu0();
+
+        cond_wakeup_cpu0();
     }
 }
@@ -1745,11 +1746,8 @@ void hlt_play_dead(void)
 
     while (1) {
         native_halt();
-        /*
-         * If NMI wants to wake up CPU0, start CPU0.
-         */
-        if (wakeup_cpu0())
-            start_cpu0();
+
+        cond_wakeup_cpu0();
     }
 }
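The two play-dead hunks above are the callers of the refactor: a bool-returning check whose callers had to remember to act on it becomes a single void helper that does the check and the action itself. A user-space sketch of that shape (all names here are hypothetical stand-ins, not the kernel symbols):

    #include <stdbool.h>
    #include <stdio.h>

    static bool want_wakeup;                 /* stands in for enable_start_cpu0 */

    static void start_cpu0_sketch(void)
    {
        printf("starting CPU0\n");
    }

    /* One helper that both checks and acts, so callers cannot get it wrong. */
    static void cond_wakeup_cpu0_sketch(void)
    {
        if (want_wakeup)
            start_cpu0_sketch();
    }

    int main(void)
    {
        want_wakeup = true;
        /* Replaces the old caller-side pattern: if (wakeup_cpu0()) start_cpu0(); */
        cond_wakeup_cpu0_sketch();
        return 0;
    }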
@@ -5972,6 +5972,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
     struct kvm_mmu_page *sp;
     unsigned int ratio;
     LIST_HEAD(invalid_list);
+    bool flush = false;
     ulong to_zap;
 
     rcu_idx = srcu_read_lock(&kvm->srcu);
@@ -5992,20 +5993,20 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
                       struct kvm_mmu_page,
                       lpage_disallowed_link);
         WARN_ON_ONCE(!sp->lpage_disallowed);
-        if (sp->tdp_mmu_page)
-            kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn,
-                sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level));
-        else {
+        if (sp->tdp_mmu_page) {
+            flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
+        } else {
             kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
             WARN_ON_ONCE(sp->lpage_disallowed);
         }
 
         if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
-            kvm_mmu_commit_zap_page(kvm, &invalid_list);
+            kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
             cond_resched_lock(&kvm->mmu_lock);
+            flush = false;
         }
     }
-    kvm_mmu_commit_zap_page(kvm, &invalid_list);
+    kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
 
     spin_unlock(&kvm->mmu_lock);
     srcu_read_unlock(&kvm->srcu, rcu_idx);
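The NX-recovery hunk carries a pending-flush flag across loop iterations: work that still needs a TLB flush is flushed before the lock is dropped to reschedule, then the flag is cleared. A stand-alone sketch of that bookkeeping pattern (all names are hypothetical; this is not the KVM code):

    #include <stdbool.h>
    #include <stdio.h>

    static void remote_flush(void)            { puts("flush"); }
    static void drop_lock_and_resched(void)   { puts("resched"); }
    static bool need_resched_sketch(int i)    { return (i % 3) == 0; }

    static void recover_pages_sketch(int nr)
    {
        bool flush = false;

        for (int i = 1; i <= nr; i++) {
            flush = true;                     /* zapped something this step */

            if (need_resched_sketch(i)) {
                if (flush)
                    remote_flush();           /* flush *before* yielding */
                drop_lock_and_resched();
                flush = false;                /* nothing pending any more */
            }
        }
        if (flush)
            remote_flush();                   /* final pending flush */
    }

    int main(void) { recover_pages_sketch(7); return 0; }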
@@ -22,21 +22,22 @@ static gfn_t round_gfn_for_level(gfn_t gfn, int level)
 
 /*
  * Sets a TDP iterator to walk a pre-order traversal of the paging structure
- * rooted at root_pt, starting with the walk to translate goal_gfn.
+ * rooted at root_pt, starting with the walk to translate next_last_level_gfn.
  */
 void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
-                    int min_level, gfn_t goal_gfn)
+                    int min_level, gfn_t next_last_level_gfn)
 {
     WARN_ON(root_level < 1);
     WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
 
-    iter->goal_gfn = goal_gfn;
+    iter->next_last_level_gfn = next_last_level_gfn;
+    iter->yielded_gfn = iter->next_last_level_gfn;
     iter->root_level = root_level;
     iter->min_level = min_level;
     iter->level = root_level;
     iter->pt_path[iter->level - 1] = root_pt;
 
-    iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+    iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
     tdp_iter_refresh_sptep(iter);
 
     iter->valid = true;
@@ -82,7 +83,7 @@ static bool try_step_down(struct tdp_iter *iter)
 
     iter->level--;
     iter->pt_path[iter->level - 1] = child_pt;
-    iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+    iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
     tdp_iter_refresh_sptep(iter);
 
     return true;
@@ -106,7 +107,7 @@ static bool try_step_side(struct tdp_iter *iter)
         return false;
 
     iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
-    iter->goal_gfn = iter->gfn;
+    iter->next_last_level_gfn = iter->gfn;
     iter->sptep++;
     iter->old_spte = READ_ONCE(*iter->sptep);
 
@@ -158,23 +159,6 @@ void tdp_iter_next(struct tdp_iter *iter)
     iter->valid = false;
 }
 
-/*
- * Restart the walk over the paging structure from the root, starting from the
- * highest gfn the iterator had previously reached. Assumes that the entire
- * paging structure, except the root page, may have been completely torn down
- * and rebuilt.
- */
-void tdp_iter_refresh_walk(struct tdp_iter *iter)
-{
-    gfn_t goal_gfn = iter->goal_gfn;
-
-    if (iter->gfn > goal_gfn)
-        goal_gfn = iter->gfn;
-
-    tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
-                   iter->root_level, iter->min_level, goal_gfn);
-}
-
 u64 *tdp_iter_root_pt(struct tdp_iter *iter)
 {
     return iter->pt_path[iter->root_level - 1];
@@ -15,7 +15,13 @@ struct tdp_iter {
      * The iterator will traverse the paging structure towards the mapping
      * for this GFN.
      */
-    gfn_t goal_gfn;
+    gfn_t next_last_level_gfn;
+    /*
+     * The next_last_level_gfn at the time when the thread last
+     * yielded. Only yielding when the next_last_level_gfn !=
+     * yielded_gfn helps ensure forward progress.
+     */
+    gfn_t yielded_gfn;
     /* Pointers to the page tables traversed to reach the current SPTE */
     u64 *pt_path[PT64_ROOT_MAX_LEVEL];
     /* A pointer to the current SPTE */
@@ -52,9 +58,8 @@ struct tdp_iter {
 u64 *spte_to_child_pt(u64 pte, int level);
 
 void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
-                    int min_level, gfn_t goal_gfn);
+                    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
-void tdp_iter_refresh_walk(struct tdp_iter *iter);
 u64 *tdp_iter_root_pt(struct tdp_iter *iter);
 
 #endif /* __KVM_X86_MMU_TDP_ITER_H */
@@ -103,7 +103,7 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
 }
 
 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
-                          gfn_t start, gfn_t end, bool can_yield);
+                          gfn_t start, gfn_t end, bool can_yield, bool flush);
 
 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
 {
@@ -116,7 +116,7 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
 
     list_del(&root->link);
 
-    zap_gfn_range(kvm, root, 0, max_gfn, false);
+    zap_gfn_range(kvm, root, 0, max_gfn, false, false);
 
     free_page((unsigned long)root->spt);
     kmem_cache_free(mmu_page_header_cache, root);
@@ -405,27 +405,43 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
          _mmu->shadow_root_level, _start, _end)
 
 /*
- * Flush the TLB if the process should drop kvm->mmu_lock.
- * Return whether the caller still needs to flush the tlb.
+ * Yield if the MMU lock is contended or this thread needs to return control
+ * to the scheduler.
+ *
+ * If this function should yield and flush is set, it will perform a remote
+ * TLB flush before yielding.
+ *
+ * If this function yields, it will also reset the tdp_iter's walk over the
+ * paging structure and the calling function should skip to the next
+ * iteration to allow the iterator to continue its traversal from the
+ * paging structure root.
+ *
+ * Return true if this function yielded and the iterator's traversal was reset.
+ * Return false if a yield was not needed.
  */
-static bool tdp_mmu_iter_flush_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
+static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
+                                             struct tdp_iter *iter, bool flush)
 {
-    if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
-        kvm_flush_remote_tlbs(kvm);
-        cond_resched_lock(&kvm->mmu_lock);
-        tdp_iter_refresh_walk(iter);
-        return true;
-    } else {
-        return false;
-    }
-}
+    /* Ensure forward progress has been made before yielding. */
+    if (iter->next_last_level_gfn == iter->yielded_gfn)
+        return false;
 
-static void tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
-{
-    if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
-        cond_resched_lock(&kvm->mmu_lock);
-        tdp_iter_refresh_walk(iter);
+    if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+        if (flush)
+            kvm_flush_remote_tlbs(kvm);
+
+        cond_resched_lock(&kvm->mmu_lock);
+
+        WARN_ON(iter->gfn > iter->next_last_level_gfn);
+
+        tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
+                       iter->root_level, iter->min_level,
+                       iter->next_last_level_gfn);
+
+        return true;
     }
+
+    return false;
 }
 
 /*
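The merged helper above only yields once the walk has moved past the point where it last yielded, so a permanently contended lock cannot keep resetting the traversal without progress. A stand-alone user-space model of that rule (iter_sketch, cond_resched_sketch and lock_is_contended are hypothetical names, not the KVM implementation):

    #include <stdbool.h>
    #include <stdio.h>

    struct iter_sketch {
        unsigned long next_gfn;     /* next GFN the walk will visit */
        unsigned long yielded_gfn;  /* value of next_gfn when we last yielded */
    };

    static bool lock_is_contended(void)
    {
        return true;                /* pretend the lock is always contended */
    }

    static bool cond_resched_sketch(struct iter_sketch *it)
    {
        /* Ensure forward progress has been made before yielding again. */
        if (it->next_gfn == it->yielded_gfn)
            return false;

        if (lock_is_contended()) {
            /* ...drop the lock, reschedule, restart the walk... */
            it->yielded_gfn = it->next_gfn;
            return true;
        }
        return false;
    }

    int main(void)
    {
        struct iter_sketch it = { .next_gfn = 0, .yielded_gfn = 0 };

        for (int step = 0; step < 4; step++) {
            printf("gfn %lu yielded=%d\n", it.next_gfn, cond_resched_sketch(&it));
            it.next_gfn++;          /* the walk makes progress */
        }
        return 0;
    }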
@@ -437,15 +453,22 @@ static void tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
  * scheduler needs the CPU or there is contention on the MMU lock. If this
  * function cannot yield, it will not release the MMU lock or reschedule and
  * the caller must ensure it does not supply too large a GFN range, or the
- * operation can cause a soft lockup.
+ * operation can cause a soft lockup. Note, in some use cases a flush may be
+ * required by prior actions. Ensure the pending flush is performed prior to
+ * yielding.
  */
 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
-                          gfn_t start, gfn_t end, bool can_yield)
+                          gfn_t start, gfn_t end, bool can_yield, bool flush)
 {
     struct tdp_iter iter;
-    bool flush_needed = false;
 
     tdp_root_for_each_pte(iter, root, start, end) {
+        if (can_yield &&
+            tdp_mmu_iter_cond_resched(kvm, &iter, flush)) {
+            flush = false;
+            continue;
+        }
+
         if (!is_shadow_present_pte(iter.old_spte))
             continue;
@@ -460,13 +483,10 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
             continue;
 
         tdp_mmu_set_spte(kvm, &iter, 0);
-
-        if (can_yield)
-            flush_needed = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
-        else
-            flush_needed = true;
+        flush = true;
     }
-    return flush_needed;
+
+    return flush;
 }
 
 /*
@@ -475,13 +495,14 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
  * SPTEs have been cleared and a TLB flush is needed before releasing the
  * MMU lock.
  */
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+                                 bool can_yield)
 {
     struct kvm_mmu_page *root;
     bool flush = false;
 
     for_each_tdp_mmu_root_yield_safe(kvm, root)
-        flush |= zap_gfn_range(kvm, root, start, end, true);
+        flush = zap_gfn_range(kvm, root, start, end, can_yield, flush);
 
     return flush;
 }
@@ -673,7 +694,7 @@ static int zap_gfn_range_hva_wrapper(struct kvm *kvm,
                                      struct kvm_mmu_page *root, gfn_t start,
                                      gfn_t end, unsigned long unused)
 {
-    return zap_gfn_range(kvm, root, start, end, false);
+    return zap_gfn_range(kvm, root, start, end, false, false);
 }
 
 int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
@@ -824,6 +845,9 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 
     for_each_tdp_pte_min_level(iter, root->spt, root->role.level,
                                min_level, start, end) {
+        if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
+            continue;
+
         if (!is_shadow_present_pte(iter.old_spte) ||
             !is_last_spte(iter.old_spte, iter.level))
             continue;
@@ -832,8 +856,6 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 
         tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
         spte_set = true;
-
-        tdp_mmu_iter_cond_resched(kvm, &iter);
     }
     return spte_set;
 }
@@ -877,6 +899,9 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
     bool spte_set = false;
 
     tdp_root_for_each_leaf_pte(iter, root, start, end) {
+        if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
+            continue;
+
         if (spte_ad_need_write_protect(iter.old_spte)) {
             if (is_writable_pte(iter.old_spte))
                 new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
@@ -891,8 +916,6 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 
         tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
         spte_set = true;
-
-        tdp_mmu_iter_cond_resched(kvm, &iter);
     }
     return spte_set;
 }
@@ -1000,6 +1023,9 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
     bool spte_set = false;
 
     tdp_root_for_each_pte(iter, root, start, end) {
+        if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
+            continue;
+
         if (!is_shadow_present_pte(iter.old_spte))
             continue;
 
@@ -1007,8 +1033,6 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 
         tdp_mmu_set_spte(kvm, &iter, new_spte);
         spte_set = true;
-
-        tdp_mmu_iter_cond_resched(kvm, &iter);
     }
 
     return spte_set;
@@ -1049,6 +1073,11 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
     bool spte_set = false;
 
     tdp_root_for_each_pte(iter, root, start, end) {
+        if (tdp_mmu_iter_cond_resched(kvm, &iter, spte_set)) {
+            spte_set = false;
+            continue;
+        }
+
         if (!is_shadow_present_pte(iter.old_spte) ||
             !is_last_spte(iter.old_spte, iter.level))
             continue;
@@ -1061,7 +1090,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 
         tdp_mmu_set_spte(kvm, &iter, 0);
 
-        spte_set = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
+        spte_set = true;
     }
 
     if (spte_set)
@@ -12,7 +12,23 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t root);
 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root);
 
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end);
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+                                 bool can_yield);
+static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start,
+                                             gfn_t end)
+{
+    return __kvm_tdp_mmu_zap_gfn_range(kvm, start, end, true);
+}
+static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+    gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
+
+    /*
+     * Don't allow yielding, as the caller may have pending pages to zap
+     * on the shadow MMU.
+     */
+    return __kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, end, false);
+}
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
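The header hunk above splits one exported zap routine into a worker that takes an explicit can_yield flag plus two thin inline wrappers that pick the policy. A compilable user-space sketch of that wrapper shape, with simplified signatures and hypothetical names (not the real KVM API):

    #include <stdbool.h>
    #include <stdio.h>

    static bool zap_range_worker(unsigned long start, unsigned long end,
                                 bool can_yield)
    {
        printf("zap [%lu, %lu) can_yield=%d\n", start, end, can_yield);
        return start < end;        /* "flush needed" if anything was zapped */
    }

    /* Bulk range zap: yielding is allowed. */
    static inline bool zap_range(unsigned long start, unsigned long end)
    {
        return zap_range_worker(start, end, true);
    }

    /* Single-page zap: the caller may still have pages queued elsewhere,
     * so forbid yielding. */
    static inline bool zap_one_page(unsigned long gfn, unsigned long npages)
    {
        return zap_range_worker(gfn, gfn + npages, false);
    }

    int main(void)
    {
        bool flush = zap_range(0, 512);
        flush |= zap_one_page(4096, 512);
        if (flush)
            puts("TLB flush needed");
        return 0;
    }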
Some files were not shown because too many files have changed in this diff.