mirror of https://github.com/armbian/linux-rockchip.git
Merge tag 'vfio-v5.18-rc1' of https://github.com/awilliam/linux-vfio
Pull VFIO updates from Alex Williamson:
- Introduce new device migration uAPI and implement device specific
mlx5 vfio-pci variant driver supporting new protocol (Jason
Gunthorpe, Yishai Hadas, Leon Romanovsky)
- New HiSilicon acc vfio-pci variant driver, also supporting migration
interface (Shameer Kolothum, Longfang Liu)
- D3hot fixes for vfio-pci-core (Abhishek Sahu)
- Document new vfio-pci variant driver acceptance criteria
(Alex Williamson)
- Fix UML build unresolved ioport_{un}map() functions
(Alex Williamson)
- Fix MAINTAINERS due to header movement (Lukas Bulwahn)
* tag 'vfio-v5.18-rc1' of https://github.com/awilliam/linux-vfio: (31 commits)
vfio-pci: Provide reviewers and acceptance criteria for variant drivers
MAINTAINERS: adjust entry for header movement in hisilicon qm driver
hisi_acc_vfio_pci: Use its own PCI reset_done error handler
hisi_acc_vfio_pci: Add support for VFIO live migration
crypto: hisilicon/qm: Set the VF QM state register
hisi_acc_vfio_pci: Add helper to retrieve the struct pci_driver
hisi_acc_vfio_pci: Restrict access to VF dev BAR2 migration region
hisi_acc_vfio_pci: add new vfio_pci driver for HiSilicon ACC devices
hisi_acc_qm: Move VF PCI device IDs to common header
crypto: hisilicon/qm: Move few definitions to common header
crypto: hisilicon/qm: Move the QM header to include/linux
vfio/mlx5: Fix to not use 0 as NULL pointer
PCI/IOV: Fix wrong kernel-doc identifier
vfio/mlx5: Use its own PCI reset_done error handler
vfio/pci: Expose vfio_pci_core_aer_err_detected()
vfio/mlx5: Implement vfio_pci driver for mlx5 devices
vfio/mlx5: Expose migration commands over mlx5 device
vfio: Remove migration protocol v1 documentation
vfio: Extend the device migration protocol with RUNNING_P2P
vfio: Define device migration protocol v2
...
@@ -103,6 +103,7 @@ available subsections can be seen below.
    sync_file
    vfio-mediated-device
    vfio
+   vfio-pci-device-specific-driver-acceptance
    xilinx/index
    xillybus
    zorro
@@ -0,0 +1,35 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Acceptance criteria for vfio-pci device specific driver variants
+================================================================
+
+Overview
+--------
+The vfio-pci driver exists as a device agnostic driver using the
+system IOMMU and relying on the robustness of platform fault
+handling to provide isolated device access to userspace. While the
+vfio-pci driver does include some device specific support, further
+extensions for yet more advanced device specific features are not
+sustainable. The vfio-pci driver has therefore split out
+vfio-pci-core as a library that may be reused to implement features
+requiring device specific knowledge, ex. saving and loading device
+state for the purposes of supporting migration.
+
+In support of such features, it's expected that some device specific
+variants may interact with parent devices (ex. SR-IOV PF in support of
+a user assigned VF) or other extensions that may not be otherwise
+accessible via the vfio-pci base driver. Authors of such drivers
+should be diligent not to create exploitable interfaces via these
+interactions or allow unchecked userspace data to have an effect
+beyond the scope of the assigned device.
+
+New driver submissions are therefore requested to have approval via
+sign-off/ack/review/etc for any interactions with parent drivers.
+Additionally, drivers should make an attempt to provide sufficient
+documentation for reviewers to understand the device specific
+extensions, for example in the case of migration data, how is the
+device state composed and consumed, which portions are not otherwise
+available to the user via vfio-pci, what safeguards exist to validate
+the data, etc. To that extent, authors should additionally expect to
+require reviews from at least one of the listed reviewers, in addition
+to the overall vfio maintainer.
@@ -103,3 +103,4 @@ to do something different in the near future.
    ../nvdimm/maintainer-entry-profile
    ../riscv/patch-acceptance
    ../driver-api/media/maintainer-entry-profile
+   ../driver-api/vfio-pci-device-specific-driver-acceptance
MAINTAINERS
@@ -8722,9 +8722,9 @@ L: linux-crypto@vger.kernel.org
 S: Maintained
 F: Documentation/ABI/testing/debugfs-hisi-zip
 F: drivers/crypto/hisilicon/qm.c
-F: drivers/crypto/hisilicon/qm.h
 F: drivers/crypto/hisilicon/sgl.c
 F: drivers/crypto/hisilicon/zip/
+F: include/linux/hisi_acc_qm.h

 HISILICON ROCE DRIVER
 M: Wenpeng Liang <liangwenpeng@huawei.com>
@@ -20399,6 +20399,13 @@ L: kvm@vger.kernel.org
 S: Maintained
 F: drivers/vfio/fsl-mc/

+VFIO HISILICON PCI DRIVER
+M: Longfang Liu <liulongfang@huawei.com>
+M: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
+L: kvm@vger.kernel.org
+S: Maintained
+F: drivers/vfio/pci/hisilicon/
+
 VFIO MEDIATED DEVICE DRIVERS
 M: Kirti Wankhede <kwankhede@nvidia.com>
 L: kvm@vger.kernel.org
@@ -20408,12 +20415,28 @@ F: drivers/vfio/mdev/
 F: include/linux/mdev.h
 F: samples/vfio-mdev/

+VFIO PCI DEVICE SPECIFIC DRIVERS
+R: Jason Gunthorpe <jgg@nvidia.com>
+R: Yishai Hadas <yishaih@nvidia.com>
+R: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
+R: Kevin Tian <kevin.tian@intel.com>
+L: kvm@vger.kernel.org
+S: Maintained
+P: Documentation/driver-api/vfio-pci-device-specific-driver-acceptance.rst
+F: drivers/vfio/pci/*/
+
 VFIO PLATFORM DRIVER
 M: Eric Auger <eric.auger@redhat.com>
 L: kvm@vger.kernel.org
 S: Maintained
 F: drivers/vfio/platform/

+VFIO MLX5 PCI DRIVER
+M: Yishai Hadas <yishaih@nvidia.com>
+L: kvm@vger.kernel.org
+S: Maintained
+F: drivers/vfio/pci/mlx5/
+
 VGA_SWITCHEROO
 R: Lukas Wunner <lukas@wunner.de>
 S: Maintained
@@ -4,7 +4,7 @@
 #define __HISI_HPRE_H

 #include <linux/list.h>
-#include "../qm.h"
+#include <linux/hisi_acc_qm.h>

 #define HPRE_SQE_SIZE sizeof(struct hpre_sqe)
 #define HPRE_PF_DEF_Q_NUM 64
@@ -68,8 +68,7 @@
 #define HPRE_REG_RD_INTVRL_US 10
 #define HPRE_REG_RD_TMOUT_US 1000
 #define HPRE_DBGFS_VAL_MAX_LEN 20
-#define HPRE_PCI_DEVICE_ID 0xa258
-#define HPRE_PCI_VF_DEVICE_ID 0xa259
+#define PCI_DEVICE_ID_HUAWEI_HPRE_PF 0xa258
 #define HPRE_QM_USR_CFG_MASK GENMASK(31, 1)
 #define HPRE_QM_AXI_CFG_MASK GENMASK(15, 0)
 #define HPRE_QM_VFG_AX_MASK GENMASK(7, 0)
@@ -111,8 +110,8 @@
 static const char hpre_name[] = "hisi_hpre";
 static struct dentry *hpre_debugfs_root;
 static const struct pci_device_id hpre_dev_ids[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HPRE_PCI_DEVICE_ID) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HPRE_PCI_VF_DEVICE_ID) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HUAWEI_HPRE_PF) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HUAWEI_HPRE_VF) },
 	{ 0, }
 };
@@ -242,7 +241,7 @@ MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);

 static int pf_q_num_set(const char *val, const struct kernel_param *kp)
 {
-	return q_num_set(val, kp, HPRE_PCI_DEVICE_ID);
+	return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_HPRE_PF);
 }

 static const struct kernel_param_ops hpre_pf_q_num_ops = {
@@ -921,7 +920,7 @@ static int hpre_debugfs_init(struct hisi_qm *qm)
 	qm->debug.sqe_mask_len = HPRE_SQE_MASK_LEN;
 	hisi_qm_debug_init(qm);

-	if (qm->pdev->device == HPRE_PCI_DEVICE_ID) {
+	if (qm->pdev->device == PCI_DEVICE_ID_HUAWEI_HPRE_PF) {
 		ret = hpre_ctrl_debug_init(qm);
 		if (ret)
 			goto failed_to_create;
@@ -958,7 +957,7 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
 	qm->sqe_size = HPRE_SQE_SIZE;
 	qm->dev_name = hpre_name;

-	qm->fun_type = (pdev->device == HPRE_PCI_DEVICE_ID) ?
+	qm->fun_type = (pdev->device == PCI_DEVICE_ID_HUAWEI_HPRE_PF) ?
 			QM_HW_PF : QM_HW_VF;
 	if (qm->fun_type == QM_HW_PF) {
 		qm->qp_base = HPRE_PF_DEF_Q_BASE;
@@ -1191,6 +1190,12 @@ static struct pci_driver hpre_pci_driver = {
 	.driver.pm = &hpre_pm_ops,
 };

+struct pci_driver *hisi_hpre_get_pf_driver(void)
+{
+	return &hpre_pci_driver;
+}
+EXPORT_SYMBOL_GPL(hisi_hpre_get_pf_driver);
+
 static void hpre_register_debugfs(void)
 {
 	if (!debugfs_initialized())
@@ -15,7 +15,7 @@
 #include <linux/uacce.h>
 #include <linux/uaccess.h>
 #include <uapi/misc/uacce/hisi_qm.h>
-#include "qm.h"
+#include <linux/hisi_acc_qm.h>

 /* eq/aeq irq enable */
 #define QM_VF_AEQ_INT_SOURCE 0x0
@@ -33,23 +33,6 @@
 #define QM_ABNORMAL_EVENT_IRQ_VECTOR 3

 /* mailbox */
 #define QM_MB_CMD_SQC 0x0
 #define QM_MB_CMD_CQC 0x1
 #define QM_MB_CMD_EQC 0x2
 #define QM_MB_CMD_AEQC 0x3
 #define QM_MB_CMD_SQC_BT 0x4
 #define QM_MB_CMD_CQC_BT 0x5
 #define QM_MB_CMD_SQC_VFT_V2 0x6
 #define QM_MB_CMD_STOP_QP 0x8
 #define QM_MB_CMD_SRC 0xc
 #define QM_MB_CMD_DST 0xd

 #define QM_MB_CMD_SEND_BASE 0x300
 #define QM_MB_EVENT_SHIFT 8
 #define QM_MB_BUSY_SHIFT 13
 #define QM_MB_OP_SHIFT 14
 #define QM_MB_CMD_DATA_ADDR_L 0x304
 #define QM_MB_CMD_DATA_ADDR_H 0x308
 #define QM_MB_PING_ALL_VFS 0xffff
 #define QM_MB_CMD_DATA_SHIFT 32
 #define QM_MB_CMD_DATA_MASK GENMASK(31, 0)
@@ -103,19 +86,12 @@
 #define QM_DB_CMD_SHIFT_V1 16
 #define QM_DB_INDEX_SHIFT_V1 32
 #define QM_DB_PRIORITY_SHIFT_V1 48
 #define QM_DOORBELL_SQ_CQ_BASE_V2 0x1000
 #define QM_DOORBELL_EQ_AEQ_BASE_V2 0x2000
 #define QM_QUE_ISO_CFG_V 0x0030
 #define QM_PAGE_SIZE 0x0034
 #define QM_QUE_ISO_EN 0x100154
 #define QM_CAPBILITY 0x100158
 #define QM_QP_NUN_MASK GENMASK(10, 0)
 #define QM_QP_DB_INTERVAL 0x10000
 #define QM_QP_MAX_NUM_SHIFT 11
 #define QM_DB_CMD_SHIFT_V2 12
 #define QM_DB_RAND_SHIFT_V2 16
 #define QM_DB_INDEX_SHIFT_V2 32
 #define QM_DB_PRIORITY_SHIFT_V2 48

 #define QM_MEM_START_INIT 0x100040
 #define QM_MEM_INIT_DONE 0x100044
@@ -693,7 +669,7 @@ static void qm_mb_pre_init(struct qm_mailbox *mailbox, u8 cmd,
 }

 /* return 0 mailbox ready, -ETIMEDOUT hardware timeout */
-static int qm_wait_mb_ready(struct hisi_qm *qm)
+int hisi_qm_wait_mb_ready(struct hisi_qm *qm)
 {
 	u32 val;

@@ -701,6 +677,7 @@ static int qm_wait_mb_ready(struct hisi_qm *qm)
 					  val, !((val >> QM_MB_BUSY_SHIFT) &
 					  0x1), POLL_PERIOD, POLL_TIMEOUT);
 }
+EXPORT_SYMBOL_GPL(hisi_qm_wait_mb_ready);

 /* 128 bit should be written to hardware at one time to trigger a mailbox */
 static void qm_mb_write(struct hisi_qm *qm, const void *src)
@@ -726,14 +703,14 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src)

 static int qm_mb_nolock(struct hisi_qm *qm, struct qm_mailbox *mailbox)
 {
-	if (unlikely(qm_wait_mb_ready(qm))) {
+	if (unlikely(hisi_qm_wait_mb_ready(qm))) {
 		dev_err(&qm->pdev->dev, "QM mailbox is busy to start!\n");
 		goto mb_busy;
 	}

 	qm_mb_write(qm, mailbox);

-	if (unlikely(qm_wait_mb_ready(qm))) {
+	if (unlikely(hisi_qm_wait_mb_ready(qm))) {
 		dev_err(&qm->pdev->dev, "QM mailbox operation timeout!\n");
 		goto mb_busy;
 	}
@@ -745,8 +722,8 @@ mb_busy:
 	return -EBUSY;
 }

-static int qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,
-		 bool op)
+int hisi_qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,
+	       bool op)
 {
 	struct qm_mailbox mailbox;
 	int ret;

@@ -762,6 +739,7 @@ static int qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,

 	return ret;
 }
+EXPORT_SYMBOL_GPL(hisi_qm_mb);

 static void qm_db_v1(struct hisi_qm *qm, u16 qn, u8 cmd, u16 index, u8 priority)
 {
@@ -1351,7 +1329,7 @@ static int qm_get_vft_v2(struct hisi_qm *qm, u32 *base, u32 *number)
 	u64 sqc_vft;
 	int ret;

-	ret = qm_mb(qm, QM_MB_CMD_SQC_VFT_V2, 0, 0, 1);
+	ret = hisi_qm_mb(qm, QM_MB_CMD_SQC_VFT_V2, 0, 0, 1);
 	if (ret)
 		return ret;
@@ -1725,12 +1703,12 @@ static int dump_show(struct hisi_qm *qm, void *info,

 static int qm_dump_sqc_raw(struct hisi_qm *qm, dma_addr_t dma_addr, u16 qp_id)
 {
-	return qm_mb(qm, QM_MB_CMD_SQC, dma_addr, qp_id, 1);
+	return hisi_qm_mb(qm, QM_MB_CMD_SQC, dma_addr, qp_id, 1);
 }

 static int qm_dump_cqc_raw(struct hisi_qm *qm, dma_addr_t dma_addr, u16 qp_id)
 {
-	return qm_mb(qm, QM_MB_CMD_CQC, dma_addr, qp_id, 1);
+	return hisi_qm_mb(qm, QM_MB_CMD_CQC, dma_addr, qp_id, 1);
 }

 static int qm_sqc_dump(struct hisi_qm *qm, const char *s)
@@ -1842,7 +1820,7 @@ static int qm_eqc_aeqc_dump(struct hisi_qm *qm, char *s, size_t size,
 	if (IS_ERR(xeqc))
 		return PTR_ERR(xeqc);

-	ret = qm_mb(qm, cmd, xeqc_dma, 0, 1);
+	ret = hisi_qm_mb(qm, cmd, xeqc_dma, 0, 1);
 	if (ret)
 		goto err_free_ctx;
@@ -2495,7 +2473,7 @@ unlock:

 static int qm_stop_qp(struct hisi_qp *qp)
 {
-	return qm_mb(qp->qm, QM_MB_CMD_STOP_QP, 0, qp->qp_id, 0);
+	return hisi_qm_mb(qp->qm, QM_MB_CMD_STOP_QP, 0, qp->qp_id, 0);
 }

 static int qm_set_msi(struct hisi_qm *qm, bool set)
@@ -2763,7 +2741,7 @@ static int qm_sq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
 		return -ENOMEM;
 	}

-	ret = qm_mb(qm, QM_MB_CMD_SQC, sqc_dma, qp_id, 0);
+	ret = hisi_qm_mb(qm, QM_MB_CMD_SQC, sqc_dma, qp_id, 0);
 	dma_unmap_single(dev, sqc_dma, sizeof(struct qm_sqc), DMA_TO_DEVICE);
 	kfree(sqc);
@@ -2804,7 +2782,7 @@ static int qm_cq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
 		return -ENOMEM;
 	}

-	ret = qm_mb(qm, QM_MB_CMD_CQC, cqc_dma, qp_id, 0);
+	ret = hisi_qm_mb(qm, QM_MB_CMD_CQC, cqc_dma, qp_id, 0);
 	dma_unmap_single(dev, cqc_dma, sizeof(struct qm_cqc), DMA_TO_DEVICE);
 	kfree(cqc);
@@ -3514,6 +3492,12 @@ static void hisi_qm_pci_uninit(struct hisi_qm *qm)
 	pci_disable_device(pdev);
 }

+static void hisi_qm_set_state(struct hisi_qm *qm, u8 state)
+{
+	if (qm->ver > QM_HW_V2 && qm->fun_type == QM_HW_VF)
+		writel(state, qm->io_base + QM_VF_STATE);
+}
+
 /**
  * hisi_qm_uninit() - Uninitialize qm.
  * @qm: The qm needed uninit.
@@ -3542,6 +3526,7 @@ void hisi_qm_uninit(struct hisi_qm *qm)
 		dma_free_coherent(dev, qm->qdma.size,
 				  qm->qdma.va, qm->qdma.dma);
 	}
+	hisi_qm_set_state(qm, QM_NOT_READY);
 	up_write(&qm->qps_lock);

 	qm_irq_unregister(qm);
@@ -3655,7 +3640,7 @@ static int qm_eq_ctx_cfg(struct hisi_qm *qm)
 		return -ENOMEM;
 	}

-	ret = qm_mb(qm, QM_MB_CMD_EQC, eqc_dma, 0, 0);
+	ret = hisi_qm_mb(qm, QM_MB_CMD_EQC, eqc_dma, 0, 0);
 	dma_unmap_single(dev, eqc_dma, sizeof(struct qm_eqc), DMA_TO_DEVICE);
 	kfree(eqc);
@@ -3684,7 +3669,7 @@ static int qm_aeq_ctx_cfg(struct hisi_qm *qm)
 		return -ENOMEM;
 	}

-	ret = qm_mb(qm, QM_MB_CMD_AEQC, aeqc_dma, 0, 0);
+	ret = hisi_qm_mb(qm, QM_MB_CMD_AEQC, aeqc_dma, 0, 0);
 	dma_unmap_single(dev, aeqc_dma, sizeof(struct qm_aeqc), DMA_TO_DEVICE);
 	kfree(aeqc);
@@ -3723,11 +3708,11 @@ static int __hisi_qm_start(struct hisi_qm *qm)
 	if (ret)
 		return ret;

-	ret = qm_mb(qm, QM_MB_CMD_SQC_BT, qm->sqc_dma, 0, 0);
+	ret = hisi_qm_mb(qm, QM_MB_CMD_SQC_BT, qm->sqc_dma, 0, 0);
 	if (ret)
 		return ret;

-	ret = qm_mb(qm, QM_MB_CMD_CQC_BT, qm->cqc_dma, 0, 0);
+	ret = hisi_qm_mb(qm, QM_MB_CMD_CQC_BT, qm->cqc_dma, 0, 0);
 	if (ret)
 		return ret;
@@ -3767,6 +3752,7 @@ int hisi_qm_start(struct hisi_qm *qm)
 	if (!ret)
 		atomic_set(&qm->status.flags, QM_START);

+	hisi_qm_set_state(qm, QM_READY);
 err_unlock:
 	up_write(&qm->qps_lock);
 	return ret;
@@ -4,7 +4,7 @@
 #ifndef __HISI_SEC_V2_H
 #define __HISI_SEC_V2_H

-#include "../qm.h"
+#include <linux/hisi_acc_qm.h>
 #include "sec_crypto.h"

 /* Algorithm resource per hardware SEC queue */
@@ -20,8 +20,7 @@

 #define SEC_VF_NUM 63
 #define SEC_QUEUE_NUM_V1 4096
-#define SEC_PF_PCI_DEVICE_ID 0xa255
-#define SEC_VF_PCI_DEVICE_ID 0xa256
+#define PCI_DEVICE_ID_HUAWEI_SEC_PF 0xa255

 #define SEC_BD_ERR_CHK_EN0 0xEFFFFFFF
 #define SEC_BD_ERR_CHK_EN1 0x7ffff7fd
@@ -229,7 +228,7 @@ static const struct debugfs_reg32 sec_dfx_regs[] = {

 static int sec_pf_q_num_set(const char *val, const struct kernel_param *kp)
 {
-	return q_num_set(val, kp, SEC_PF_PCI_DEVICE_ID);
+	return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_SEC_PF);
 }

 static const struct kernel_param_ops sec_pf_q_num_ops = {
@@ -317,8 +316,8 @@ module_param_cb(uacce_mode, &sec_uacce_mode_ops, &uacce_mode, 0444);
 MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);

 static const struct pci_device_id sec_dev_ids[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_PF_PCI_DEVICE_ID) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, SEC_VF_PCI_DEVICE_ID) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HUAWEI_SEC_PF) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HUAWEI_SEC_VF) },
 	{ 0, }
 };
 MODULE_DEVICE_TABLE(pci, sec_dev_ids);
@@ -748,7 +747,7 @@ static int sec_core_debug_init(struct hisi_qm *qm)
 	regset->base = qm->io_base;
 	regset->dev = dev;

-	if (qm->pdev->device == SEC_PF_PCI_DEVICE_ID)
+	if (qm->pdev->device == PCI_DEVICE_ID_HUAWEI_SEC_PF)
 		debugfs_create_file("regs", 0444, tmp_d, regset, &sec_regs_fops);

 	for (i = 0; i < ARRAY_SIZE(sec_dfx_labels); i++) {
@@ -766,7 +765,7 @@ static int sec_debug_init(struct hisi_qm *qm)
 	struct sec_dev *sec = container_of(qm, struct sec_dev, qm);
 	int i;

-	if (qm->pdev->device == SEC_PF_PCI_DEVICE_ID) {
+	if (qm->pdev->device == PCI_DEVICE_ID_HUAWEI_SEC_PF) {
 		for (i = SEC_CLEAR_ENABLE; i < SEC_DEBUG_FILE_NUM; i++) {
 			spin_lock_init(&sec->debug.files[i].lock);
 			sec->debug.files[i].index = i;
@@ -908,7 +907,7 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
 	qm->sqe_size = SEC_SQE_SIZE;
 	qm->dev_name = sec_name;

-	qm->fun_type = (pdev->device == SEC_PF_PCI_DEVICE_ID) ?
+	qm->fun_type = (pdev->device == PCI_DEVICE_ID_HUAWEI_SEC_PF) ?
 			QM_HW_PF : QM_HW_VF;
 	if (qm->fun_type == QM_HW_PF) {
 		qm->qp_base = SEC_PF_DEF_Q_BASE;
@@ -1120,6 +1119,12 @@ static struct pci_driver sec_pci_driver = {
 	.driver.pm = &sec_pm_ops,
 };

+struct pci_driver *hisi_sec_get_pf_driver(void)
+{
+	return &sec_pci_driver;
+}
+EXPORT_SYMBOL_GPL(hisi_sec_get_pf_driver);
+
 static void sec_register_debugfs(void)
 {
 	if (!debugfs_initialized())
@@ -1,9 +1,9 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2019 HiSilicon Limited. */
 #include <linux/dma-mapping.h>
+#include <linux/hisi_acc_qm.h>
 #include <linux/module.h>
 #include <linux/slab.h>
-#include "qm.h"

 #define HISI_ACC_SGL_SGE_NR_MIN 1
 #define HISI_ACC_SGL_NR_MAX 256
@@ -7,7 +7,7 @@
 #define pr_fmt(fmt) "hisi_zip: " fmt

 #include <linux/list.h>
-#include "../qm.h"
+#include <linux/hisi_acc_qm.h>

 enum hisi_zip_error_type {
 	/* negative compression */
@@ -15,8 +15,7 @@
 #include <linux/uacce.h>
 #include "zip.h"

-#define PCI_DEVICE_ID_ZIP_PF 0xa250
-#define PCI_DEVICE_ID_ZIP_VF 0xa251
+#define PCI_DEVICE_ID_HUAWEI_ZIP_PF 0xa250

 #define HZIP_QUEUE_NUM_V1 4096
@@ -246,7 +245,7 @@ MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);

 static int pf_q_num_set(const char *val, const struct kernel_param *kp)
 {
-	return q_num_set(val, kp, PCI_DEVICE_ID_ZIP_PF);
+	return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_ZIP_PF);
 }

 static const struct kernel_param_ops pf_q_num_ops = {
@@ -268,8 +267,8 @@ module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
 MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");

 static const struct pci_device_id hisi_zip_dev_ids[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_ZIP_PF) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_ZIP_VF) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HUAWEI_ZIP_PF) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HUAWEI_ZIP_VF) },
 	{ 0, }
 };
 MODULE_DEVICE_TABLE(pci, hisi_zip_dev_ids);
@@ -838,7 +837,7 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
 	qm->sqe_size = HZIP_SQE_SIZE;
 	qm->dev_name = hisi_zip_name;

-	qm->fun_type = (pdev->device == PCI_DEVICE_ID_ZIP_PF) ?
+	qm->fun_type = (pdev->device == PCI_DEVICE_ID_HUAWEI_ZIP_PF) ?
 			QM_HW_PF : QM_HW_VF;
 	if (qm->fun_type == QM_HW_PF) {
 		qm->qp_base = HZIP_PF_DEF_Q_BASE;
@@ -1013,6 +1012,12 @@ static struct pci_driver hisi_zip_pci_driver = {
 	.driver.pm = &hisi_zip_pm_ops,
 };

+struct pci_driver *hisi_zip_get_pf_driver(void)
+{
+	return &hisi_zip_pci_driver;
+}
+EXPORT_SYMBOL_GPL(hisi_zip_get_pf_driver);
+
 static void hisi_zip_register_debugfs(void)
 {
 	if (!debugfs_initialized())
@@ -478,6 +478,11 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
 	case MLX5_CMD_OP_QUERY_VHCA_STATE:
 	case MLX5_CMD_OP_MODIFY_VHCA_STATE:
 	case MLX5_CMD_OP_ALLOC_SF:
+	case MLX5_CMD_OP_SUSPEND_VHCA:
+	case MLX5_CMD_OP_RESUME_VHCA:
+	case MLX5_CMD_OP_QUERY_VHCA_MIGRATION_STATE:
+	case MLX5_CMD_OP_SAVE_VHCA_STATE:
+	case MLX5_CMD_OP_LOAD_VHCA_STATE:
 		*status = MLX5_DRIVER_STATUS_ABORTED;
 		*synd = MLX5_DRIVER_SYND;
 		return -EIO;
@@ -675,6 +680,11 @@ const char *mlx5_command_str(int command)
 	MLX5_COMMAND_STR_CASE(MODIFY_VHCA_STATE);
 	MLX5_COMMAND_STR_CASE(ALLOC_SF);
 	MLX5_COMMAND_STR_CASE(DEALLOC_SF);
+	MLX5_COMMAND_STR_CASE(SUSPEND_VHCA);
+	MLX5_COMMAND_STR_CASE(RESUME_VHCA);
+	MLX5_COMMAND_STR_CASE(QUERY_VHCA_MIGRATION_STATE);
+	MLX5_COMMAND_STR_CASE(SAVE_VHCA_STATE);
+	MLX5_COMMAND_STR_CASE(LOAD_VHCA_STATE);
 	default: return "unknown command opcode";
 	}
 }
@@ -1620,6 +1620,7 @@ static void remove_one(struct pci_dev *pdev)
 	struct devlink *devlink = priv_to_devlink(dev);

 	devlink_unregister(devlink);
+	mlx5_sriov_disable(pdev);
 	mlx5_crdump_disable(dev);
 	mlx5_drain_health_wq(dev);
 	mlx5_uninit_one(dev);
@@ -1882,6 +1883,50 @@ static struct pci_driver mlx5_core_driver = {
 	.sriov_set_msix_vec_count = mlx5_core_sriov_set_msix_vec_count,
 };

+/**
+ * mlx5_vf_get_core_dev - Get the mlx5 core device from a given VF PCI device if
+ *			  mlx5_core is its driver.
+ * @pdev: The associated PCI device.
+ *
+ * Upon return the interface state lock stay held to let caller uses it safely.
+ * Caller must ensure to use the returned mlx5 device for a narrow window
+ * and put it back with mlx5_vf_put_core_dev() immediately once usage was over.
+ *
+ * Return: Pointer to the associated mlx5_core_dev or NULL.
+ */
+struct mlx5_core_dev *mlx5_vf_get_core_dev(struct pci_dev *pdev)
+			__acquires(&mdev->intf_state_mutex)
+{
+	struct mlx5_core_dev *mdev;
+
+	mdev = pci_iov_get_pf_drvdata(pdev, &mlx5_core_driver);
+	if (IS_ERR(mdev))
+		return NULL;
+
+	mutex_lock(&mdev->intf_state_mutex);
+	if (!test_bit(MLX5_INTERFACE_STATE_UP, &mdev->intf_state)) {
+		mutex_unlock(&mdev->intf_state_mutex);
+		return NULL;
+	}
+
+	return mdev;
+}
+EXPORT_SYMBOL(mlx5_vf_get_core_dev);
+
+/**
+ * mlx5_vf_put_core_dev - Put the mlx5 core device back.
+ * @mdev: The mlx5 core device.
+ *
+ * Upon return the interface state lock is unlocked and caller should not
+ * access the mdev any more.
+ */
+void mlx5_vf_put_core_dev(struct mlx5_core_dev *mdev)
+			__releases(&mdev->intf_state_mutex)
+{
+	mutex_unlock(&mdev->intf_state_mutex);
+}
+EXPORT_SYMBOL(mlx5_vf_put_core_dev);

 static void mlx5_core_verify_params(void)
 {
 	if (prof_sel >= ARRAY_SIZE(profile)) {
@@ -164,6 +164,7 @@ void mlx5_sriov_cleanup(struct mlx5_core_dev *dev);
 int mlx5_sriov_attach(struct mlx5_core_dev *dev);
 void mlx5_sriov_detach(struct mlx5_core_dev *dev);
 int mlx5_core_sriov_configure(struct pci_dev *dev, int num_vfs);
+void mlx5_sriov_disable(struct pci_dev *pdev);
 int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count);
 int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id);
 int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
@@ -161,7 +161,7 @@ static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
 	return err;
 }

-static void mlx5_sriov_disable(struct pci_dev *pdev)
+void mlx5_sriov_disable(struct pci_dev *pdev)
 {
 	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
 	int num_vfs = pci_num_vf(dev->pdev);
@@ -205,19 +205,8 @@ int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count)
 		mlx5_get_default_msix_vec_count(dev, pci_num_vf(pf));

 	sriov = &dev->priv.sriov;
-
-	/* Reversed translation of PCI VF function number to the internal
-	 * function_id, which exists in the name of virtfn symlink.
-	 */
-	for (id = 0; id < pci_num_vf(pf); id++) {
-		if (!sriov->vfs_ctx[id].enabled)
-			continue;
-
-		if (vf->devfn == pci_iov_virtfn_devfn(pf, id))
-			break;
-	}
-
-	if (id == pci_num_vf(pf) || !sriov->vfs_ctx[id].enabled)
+	id = pci_iov_vf_id(vf);
+	if (id < 0 || !sriov->vfs_ctx[id].enabled)
 		return -EINVAL;

 	return mlx5_set_msix_vec_count(dev, id + 1, msix_vec_count);
@@ -33,6 +33,49 @@ int pci_iov_virtfn_devfn(struct pci_dev *dev, int vf_id)
 }
 EXPORT_SYMBOL_GPL(pci_iov_virtfn_devfn);

+int pci_iov_vf_id(struct pci_dev *dev)
+{
+	struct pci_dev *pf;
+
+	if (!dev->is_virtfn)
+		return -EINVAL;
+
+	pf = pci_physfn(dev);
+	return (((dev->bus->number << 8) + dev->devfn) -
+		((pf->bus->number << 8) + pf->devfn + pf->sriov->offset)) /
+	       pf->sriov->stride;
+}
+EXPORT_SYMBOL_GPL(pci_iov_vf_id);
+
+/**
+ * pci_iov_get_pf_drvdata - Return the drvdata of a PF
+ * @dev: VF pci_dev
+ * @pf_driver: Device driver required to own the PF
+ *
+ * This must be called from a context that ensures that a VF driver is attached.
+ * The value returned is invalid once the VF driver completes its remove()
+ * callback.
+ *
+ * Locking is achieved by the driver core. A VF driver cannot be probed until
+ * pci_enable_sriov() is called and pci_disable_sriov() does not return until
+ * all VF drivers have completed their remove().
+ *
+ * The PF driver must call pci_disable_sriov() before it begins to destroy the
+ * drvdata.
+ */
+void *pci_iov_get_pf_drvdata(struct pci_dev *dev, struct pci_driver *pf_driver)
+{
+	struct pci_dev *pf_dev;
+
+	if (!dev->is_virtfn)
+		return ERR_PTR(-EINVAL);
+	pf_dev = dev->physfn;
+	if (pf_dev->driver != pf_driver)
+		return ERR_PTR(-EINVAL);
+	return pci_get_drvdata(pf_dev);
+}
+EXPORT_SYMBOL_GPL(pci_iov_get_pf_drvdata);

 /*
  * Per SR-IOV spec sec 3.3.10 and 3.3.11, First VF Offset and VF Stride may
  * change when NumVFs changes.
@@ -43,4 +43,9 @@ config VFIO_PCI_IGD

 	  To enable Intel IGD assignment through vfio-pci, say Y.
 endif
+
+source "drivers/vfio/pci/mlx5/Kconfig"
+
+source "drivers/vfio/pci/hisilicon/Kconfig"
+
 endif
@@ -7,3 +7,7 @@ obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o
 vfio-pci-y := vfio_pci.o
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
+
+obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5/
+
+obj-$(CONFIG_HISI_ACC_VFIO_PCI) += hisilicon/
drivers/vfio/pci/hisilicon/Kconfig (new file)
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config HISI_ACC_VFIO_PCI
+	tristate "VFIO PCI support for HiSilicon ACC devices"
+	depends on ARM64 || (COMPILE_TEST && 64BIT)
+	depends on VFIO_PCI_CORE
+	depends on PCI_MSI
+	depends on CRYPTO_DEV_HISI_QM
+	depends on CRYPTO_DEV_HISI_HPRE
+	depends on CRYPTO_DEV_HISI_SEC2
+	depends on CRYPTO_DEV_HISI_ZIP
+	help
+	  This provides generic PCI support for HiSilicon ACC devices
+	  using the VFIO framework.
+
+	  If you don't know what to do here, say N.
(Some files were not shown because too many files have changed in this diff.)