2024-02-01 15:42:15

by Zeng, Xin

Subject: [PATCH 00/10] crypto: qat - enable SRIOV VF live migration for QAT GEN4 devices

This set enables live migration for Intel QAT GEN4 SRIOV Virtual
Functions (VFs).
It is composed of 10 patches. Patches 1 to 6 refactor the original QAT PF
driver implementation so that it can be reused by the following patches.
Patch 7 introduces the logic in the QAT PF driver that allows saving and
restoring the state of a bank (a QAT VF is a wrapper around banks) and
draining a ring pair. Patch 8 adds to the QAT PF driver a set of
interfaces for saving and restoring the state of a VF; these will be
called by the module qat_vfio_pci, which is introduced in the last patch.
Patch 9 implements the interfaces defined in Patch 8. The last one adds a
vfio pci extension specific to QAT which intercepts the vfio device
operations for a QAT VF to allow live migration.
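
The sketch below illustrates the split described above: the PF driver owns
and serializes the VF state, while qat_vfio_pci only brokers it between the
vfio core and the PF driver. Every identifier in it is a placeholder
invented for illustration; the real interface is the one added to
include/linux/qat/qat_mig_dev.h by patches 8 and 9.

#include <stddef.h>

/* Placeholder container for the serialized bank and shadow state. */
struct vf_mig_blob {
	void  *buf;
	size_t len;
};

/* PF driver side (placeholder): drain ring pairs, then serialize banks. */
static int pf_suspend_and_save_vf(int vf_id, struct vf_mig_blob *blob)
{
	(void)vf_id;
	(void)blob;
	return 0;	/* stub */
}

/* qat_vfio_pci side (placeholder): invoked when the VF state is saved. */
static int qat_vf_stop_copy(int vf_id, struct vf_mig_blob *blob)
{
	/* The variant driver never touches device CSRs directly. */
	return pf_suspend_and_save_vf(vf_id, blob);
}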

Here are the steps required to test the live migration of a QAT GEN4 VF:
1. Bind one or more QAT GEN4 VF devices to the module qat_vfio_pci.ko
2. Assign the VFs to the virtual machine and enable device live
migration
3. Run a workload using a QAT VF inside the VM, for example using qatlib
(https://github.com/intel/qatlib)
4. Migrate the VM from the source node to a destination node

RFC:
https://lore.kernel.org/all/[email protected]/

Change logs:
RFC -> v1:
- Address comments including the right module dependency in Kconfig,
source file name and module description (Alex)
- Added PCI error handler and P2P state handler (Suggested by Kevin)
- Refactor the state check during ring state loading (Kevin)
- Fix missing call to vfio_put_device in the error case (Brett)
- Migrate the shadow states in PF driver
- Rebase on top of 6.8-rc1

Giovanni Cabiddu (2):
crypto: qat - adf_get_etr_base() helper
crypto: qat - relocate CSR access code

Siming Wan (3):
crypto: qat - rename get_sla_arr_of_type()
crypto: qat - expand CSR operations for QAT GEN4 devices
crypto: qat - add bank save and restore flows

Xin Zeng (5):
crypto: qat - relocate and rename 4xxx PF2VM definitions
crypto: qat - move PFVF compat checker to a function
crypto: qat - add interface for live migration
crypto: qat - implement interface for live migration
vfio/qat: Add vfio_pci driver for Intel QAT VF devices

MAINTAINERS | 8 +
.../intel/qat/qat_420xx/adf_420xx_hw_data.c | 3 +
.../intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 5 +
.../intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 1 +
.../qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c | 1 +
.../intel/qat/qat_c62x/adf_c62x_hw_data.c | 1 +
.../intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c | 1 +
drivers/crypto/intel/qat/qat_common/Makefile | 6 +-
.../intel/qat/qat_common/adf_accel_devices.h | 74 ++
.../intel/qat/qat_common/adf_common_drv.h | 10 +
.../qat/qat_common/adf_gen2_hw_csr_data.c | 101 +++
.../qat/qat_common/adf_gen2_hw_csr_data.h | 86 ++
.../intel/qat/qat_common/adf_gen2_hw_data.c | 97 --
.../intel/qat/qat_common/adf_gen2_hw_data.h | 76 --
.../qat/qat_common/adf_gen4_hw_csr_data.c | 231 +++++
.../qat/qat_common/adf_gen4_hw_csr_data.h | 188 ++++
.../intel/qat/qat_common/adf_gen4_hw_data.c | 382 +++++---
.../intel/qat/qat_common/adf_gen4_hw_data.h | 127 +--
.../intel/qat/qat_common/adf_gen4_pfvf.c | 8 +-
.../intel/qat/qat_common/adf_gen4_vf_mig.c | 834 ++++++++++++++++++
.../intel/qat/qat_common/adf_gen4_vf_mig.h | 10 +
.../intel/qat/qat_common/adf_mstate_mgr.c | 310 +++++++
.../intel/qat/qat_common/adf_mstate_mgr.h | 87 ++
.../intel/qat/qat_common/adf_pfvf_pf_proto.c | 8 +-
.../intel/qat/qat_common/adf_pfvf_utils.h | 11 +
drivers/crypto/intel/qat/qat_common/adf_rl.c | 10 +-
drivers/crypto/intel/qat/qat_common/adf_rl.h | 2 +
.../crypto/intel/qat/qat_common/adf_sriov.c | 7 +-
.../intel/qat/qat_common/adf_transport.c | 4 +-
.../crypto/intel/qat/qat_common/qat_mig_dev.c | 35 +
.../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 1 +
.../qat_dh895xccvf/adf_dh895xccvf_hw_data.c | 1 +
drivers/vfio/pci/Kconfig | 2 +
drivers/vfio/pci/Makefile | 2 +
drivers/vfio/pci/intel/qat/Kconfig | 13 +
drivers/vfio/pci/intel/qat/Makefile | 4 +
drivers/vfio/pci/intel/qat/main.c | 572 ++++++++++++
include/linux/qat/qat_mig_dev.h | 31 +
38 files changed, 2963 insertions(+), 387 deletions(-)
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.h
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.h
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.h
create mode 100644 drivers/crypto/intel/qat/qat_common/qat_mig_dev.c
create mode 100644 drivers/vfio/pci/intel/qat/Kconfig
create mode 100644 drivers/vfio/pci/intel/qat/Makefile
create mode 100644 drivers/vfio/pci/intel/qat/main.c
create mode 100644 include/linux/qat/qat_mig_dev.h


base-commit: 57f39c1d3cff476b5ce091bbf3df520488a682e5
--
2.18.2



2024-02-01 15:42:31

by Zeng, Xin

Subject: [PATCH 03/10] crypto: qat - move PFVF compat checker to a function

Move the code that implements the VF version compatibility check on the PF
side into a separate function so that it can be reused when doing VM live
migration.

This does not introduce any functional change.
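
The decision the helper encodes is: a VF that reports version 0 predates
PFVF versioning and is rejected, a VF at or below the PF's protocol version
is accepted, and a newer VF is reported as COMPAT_UNKNOWN. A minimal
userspace-style sketch of that logic follows; the constant values and the
version number are illustrative, not the driver's real ones.

#include <stdio.h>

/* Illustrative stand-ins for the ADF_PF2VF_* result codes. */
enum { VF_COMPATIBLE = 1, VF_INCOMPATIBLE = 2, VF_COMPAT_UNKNOWN = 3 };
#define PFVF_COMPAT_THIS_VERSION 5	/* hypothetical PF protocol version */

static int compat_check(unsigned char vf_compat_ver)
{
	if (vf_compat_ver == 0)
		return VF_INCOMPATIBLE;		/* pre-versioning VF */
	if (vf_compat_ver <= PFVF_COMPAT_THIS_VERSION)
		return VF_COMPATIBLE;		/* PF is at least as new */
	return VF_COMPAT_UNKNOWN;		/* VF is newer than the PF */
}

int main(void)
{
	/* Prints "2 1 3" with the illustrative values above. */
	printf("%d %d %d\n", compat_check(0), compat_check(3), compat_check(9));
	return 0;
}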

Signed-off-by: Xin Zeng <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
---
.../crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c | 8 +-------
drivers/crypto/intel/qat/qat_common/adf_pfvf_utils.h | 11 +++++++++++
2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c
index 388e58bcbcaf..d737d07937c7 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c
@@ -242,13 +242,7 @@ static int adf_handle_vf2pf_msg(struct adf_accel_dev *accel_dev, u8 vf_nr,
"VersionRequest received from VF%d (vers %d) to PF (vers %d)\n",
vf_nr, vf_compat_ver, ADF_PFVF_COMPAT_THIS_VERSION);

- if (vf_compat_ver == 0)
- compat = ADF_PF2VF_VF_INCOMPATIBLE;
- else if (vf_compat_ver <= ADF_PFVF_COMPAT_THIS_VERSION)
- compat = ADF_PF2VF_VF_COMPATIBLE;
- else
- compat = ADF_PF2VF_VF_COMPAT_UNKNOWN;
-
+ compat = adf_vf_compat_checker(vf_compat_ver);
vf_info->vf_compat_ver = vf_compat_ver;

resp->type = ADF_PF2VF_MSGTYPE_VERSION_RESP;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_utils.h b/drivers/crypto/intel/qat/qat_common/adf_pfvf_utils.h
index 2be048e2287b..1a044297d873 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_utils.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_utils.h
@@ -28,4 +28,15 @@ u32 adf_pfvf_csr_msg_of(struct adf_accel_dev *accel_dev, struct pfvf_message msg
struct pfvf_message adf_pfvf_message_of(struct adf_accel_dev *accel_dev, u32 raw_msg,
const struct pfvf_csr_format *fmt);

+static inline u8 adf_vf_compat_checker(u8 vf_compat_ver)
+{
+ if (vf_compat_ver == 0)
+ return ADF_PF2VF_VF_INCOMPATIBLE;
+
+ if (vf_compat_ver <= ADF_PFVF_COMPAT_THIS_VERSION)
+ return ADF_PF2VF_VF_COMPATIBLE;
+
+ return ADF_PF2VF_VF_COMPAT_UNKNOWN;
+}
+
#endif /* ADF_PFVF_UTILS_H */
--
2.18.2


2024-02-01 15:42:55

by Zeng, Xin

Subject: [PATCH 06/10] crypto: qat - expand CSR operations for QAT GEN4 devices

From: Siming Wan <[email protected]>

Extend the CSR operations for QAT GEN4 devices to allow saving and
restoring the ring state.

The new operations will be used as a building block for implementing the
state save and restore of Virtual Functions necessary for VM live
migration.

This adds the following operations:
- read ring status register
- read ring underflow/overflow status register
- read ring nearly empty status register
- read ring nearly full status register
- read ring full status register
- read ring complete status register
- read ring exception status register
- read/write ring exception interrupt mask register
- read ring configuration register
- read ring base register
- read/write ring interrupt enable register
- read ring interrupt flag register
- read/write ring interrupt source select register
- read ring coalesced interrupt enable register
- read ring coalesced interrupt control register
- read ring flag and coalesced interrupt enable register
- read ring service arbiter enable register
- get ring coalesced interrupt control enable mask
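
The new read accessors are what a bank save path can iterate over (the
head and tail readers already existed; the config and base readers are
added here). Below is a minimal sketch, not the driver's actual save
logic, which arrives with the "add bank save and restore flows" patch, of
capturing the per-ring CSRs of one bank through the extended ops table;
the ring_state structure is invented for illustration.

#include <linux/types.h>
#include "adf_accel_devices.h"

struct ring_state {
	u32 config;
	u64 base;
	u32 head;
	u32 tail;
};

static void sketch_save_bank_rings(struct adf_hw_csr_ops *ops,
				   void __iomem *csr, u32 bank,
				   u32 num_rings, struct ring_state *out)
{
	u32 i;

	/* Snapshot each ring of the bank through the per-gen accessors. */
	for (i = 0; i < num_rings; i++) {
		out[i].config = ops->read_csr_ring_config(csr, bank, i);
		out[i].base = ops->read_csr_ring_base(csr, bank, i);
		out[i].head = ops->read_csr_ring_head(csr, bank, i);
		out[i].tail = ops->read_csr_ring_tail(csr, bank, i);
	}
}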

Signed-off-by: Siming Wan <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
Reviewed-by: Xin Zeng <[email protected]>
Signed-off-by: Xin Zeng <[email protected]>
---
.../intel/qat/qat_common/adf_accel_devices.h | 27 ++++
.../qat/qat_common/adf_gen4_hw_csr_data.c | 130 ++++++++++++++++++
.../qat/qat_common/adf_gen4_hw_csr_data.h | 93 ++++++++++++-
3 files changed, 249 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
index a16c7e6edc65..5cc94bea176a 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
@@ -150,22 +150,49 @@ struct adf_hw_csr_ops {
u32 ring);
void (*write_csr_ring_tail)(void __iomem *csr_base_addr, u32 bank,
u32 ring, u32 value);
+ u32 (*read_csr_stat)(void __iomem *csr_base_addr, u32 bank);
+ u32 (*read_csr_uo_stat)(void __iomem *csr_base_addr, u32 bank);
u32 (*read_csr_e_stat)(void __iomem *csr_base_addr, u32 bank);
+ u32 (*read_csr_ne_stat)(void __iomem *csr_base_addr, u32 bank);
+ u32 (*read_csr_nf_stat)(void __iomem *csr_base_addr, u32 bank);
+ u32 (*read_csr_f_stat)(void __iomem *csr_base_addr, u32 bank);
+ u32 (*read_csr_c_stat)(void __iomem *csr_base_addr, u32 bank);
+ u32 (*read_csr_exp_stat)(void __iomem *csr_base_addr, u32 bank);
+ u32 (*read_csr_exp_int_en)(void __iomem *csr_base_addr, u32 bank);
+ void (*write_csr_exp_int_en)(void __iomem *csr_base_addr, u32 bank,
+ u32 value);
+ u32 (*read_csr_ring_config)(void __iomem *csr_base_addr, u32 bank,
+ u32 ring);
void (*write_csr_ring_config)(void __iomem *csr_base_addr, u32 bank,
u32 ring, u32 value);
+ dma_addr_t (*read_csr_ring_base)(void __iomem *csr_base_addr, u32 bank,
+ u32 ring);
void (*write_csr_ring_base)(void __iomem *csr_base_addr, u32 bank,
u32 ring, dma_addr_t addr);
+ u32 (*read_csr_int_en)(void __iomem *csr_base_addr, u32 bank);
+ void (*write_csr_int_en)(void __iomem *csr_base_addr, u32 bank,
+ u32 value);
+ u32 (*read_csr_int_flag)(void __iomem *csr_base_addr, u32 bank);
void (*write_csr_int_flag)(void __iomem *csr_base_addr, u32 bank,
u32 value);
+ u32 (*read_csr_int_srcsel)(void __iomem *csr_base_addr, u32 bank);
void (*write_csr_int_srcsel)(void __iomem *csr_base_addr, u32 bank);
+ void (*write_csr_int_srcsel_w_val)(void __iomem *csr_base_addr,
+ u32 bank, u32 value);
+ u32 (*read_csr_int_col_en)(void __iomem *csr_base_addr, u32 bank);
void (*write_csr_int_col_en)(void __iomem *csr_base_addr, u32 bank,
u32 value);
+ u32 (*read_csr_int_col_ctl)(void __iomem *csr_base_addr, u32 bank);
void (*write_csr_int_col_ctl)(void __iomem *csr_base_addr, u32 bank,
u32 value);
+ u32 (*read_csr_int_flag_and_col)(void __iomem *csr_base_addr,
+ u32 bank);
void (*write_csr_int_flag_and_col)(void __iomem *csr_base_addr,
u32 bank, u32 value);
+ u32 (*read_csr_ring_srv_arb_en)(void __iomem *csr_base_addr, u32 bank);
void (*write_csr_ring_srv_arb_en)(void __iomem *csr_base_addr, u32 bank,
u32 value);
+ u32 (*get_int_col_ctl_enable_mask)(void);
};

struct adf_cfg_device_data;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c
index 652ef4598930..6609c248aaba 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c
@@ -30,57 +30,166 @@ static void write_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring,
WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value);
}

+static u32 read_csr_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_STAT(csr_base_addr, bank);
+}
+
+static u32 read_csr_uo_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_UO_STAT(csr_base_addr, bank);
+}
+
static u32 read_csr_e_stat(void __iomem *csr_base_addr, u32 bank)
{
return READ_CSR_E_STAT(csr_base_addr, bank);
}

+static u32 read_csr_ne_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_NE_STAT(csr_base_addr, bank);
+}
+
+static u32 read_csr_nf_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_NF_STAT(csr_base_addr, bank);
+}
+
+static u32 read_csr_f_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_F_STAT(csr_base_addr, bank);
+}
+
+static u32 read_csr_c_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_C_STAT(csr_base_addr, bank);
+}
+
+static u32 read_csr_exp_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_EXP_STAT(csr_base_addr, bank);
+}
+
+static u32 read_csr_exp_int_en(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_EXP_INT_EN(csr_base_addr, bank);
+}
+
+static void write_csr_exp_int_en(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_EXP_INT_EN(csr_base_addr, bank, value);
+}
+
+static u32 read_csr_ring_config(void __iomem *csr_base_addr, u32 bank,
+ u32 ring)
+{
+ return READ_CSR_RING_CONFIG(csr_base_addr, bank, ring);
+}
+
static void write_csr_ring_config(void __iomem *csr_base_addr, u32 bank, u32 ring,
u32 value)
{
WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value);
}

+static dma_addr_t read_csr_ring_base(void __iomem *csr_base_addr, u32 bank,
+ u32 ring)
+{
+ return READ_CSR_RING_BASE(csr_base_addr, bank, ring);
+}
+
static void write_csr_ring_base(void __iomem *csr_base_addr, u32 bank, u32 ring,
dma_addr_t addr)
{
WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, addr);
}

+static u32 read_csr_int_en(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_INT_EN(csr_base_addr, bank);
+}
+
+static void write_csr_int_en(void __iomem *csr_base_addr, u32 bank, u32 value)
+{
+ WRITE_CSR_INT_EN(csr_base_addr, bank, value);
+}
+
+static u32 read_csr_int_flag(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_INT_FLAG(csr_base_addr, bank);
+}
+
static void write_csr_int_flag(void __iomem *csr_base_addr, u32 bank,
u32 value)
{
WRITE_CSR_INT_FLAG(csr_base_addr, bank, value);
}

+static u32 read_csr_int_srcsel(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_INT_SRCSEL(csr_base_addr, bank);
+}
+
static void write_csr_int_srcsel(void __iomem *csr_base_addr, u32 bank)
{
WRITE_CSR_INT_SRCSEL(csr_base_addr, bank);
}

+static void write_csr_int_srcsel_w_val(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_INT_SRCSEL_W_VAL(csr_base_addr, bank, value);
+}
+
+static u32 read_csr_int_col_en(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_INT_COL_EN(csr_base_addr, bank);
+}
+
static void write_csr_int_col_en(void __iomem *csr_base_addr, u32 bank, u32 value)
{
WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value);
}

+static u32 read_csr_int_col_ctl(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_INT_COL_CTL(csr_base_addr, bank);
+}
+
static void write_csr_int_col_ctl(void __iomem *csr_base_addr, u32 bank,
u32 value)
{
WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value);
}

+static u32 read_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_INT_FLAG_AND_COL(csr_base_addr, bank);
+}
+
static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank,
u32 value)
{
WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value);
}

+static u32 read_csr_ring_srv_arb_en(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_RING_SRV_ARB_EN(csr_base_addr, bank);
+}
+
static void write_csr_ring_srv_arb_en(void __iomem *csr_base_addr, u32 bank,
u32 value)
{
WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value);
}

+static u32 get_int_col_ctl_enable_mask(void)
+{
+ return ADF_RING_CSR_INT_COL_CTL_ENABLE;
+}
+
void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
{
csr_ops->build_csr_ring_base_addr = build_csr_ring_base_addr;
@@ -88,14 +197,35 @@ void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
csr_ops->write_csr_ring_head = write_csr_ring_head;
csr_ops->read_csr_ring_tail = read_csr_ring_tail;
csr_ops->write_csr_ring_tail = write_csr_ring_tail;
+ csr_ops->read_csr_stat = read_csr_stat;
+ csr_ops->read_csr_uo_stat = read_csr_uo_stat;
csr_ops->read_csr_e_stat = read_csr_e_stat;
+ csr_ops->read_csr_ne_stat = read_csr_ne_stat;
+ csr_ops->read_csr_nf_stat = read_csr_nf_stat;
+ csr_ops->read_csr_f_stat = read_csr_f_stat;
+ csr_ops->read_csr_c_stat = read_csr_c_stat;
+ csr_ops->read_csr_exp_stat = read_csr_exp_stat;
+ csr_ops->read_csr_exp_int_en = read_csr_exp_int_en;
+ csr_ops->write_csr_exp_int_en = write_csr_exp_int_en;
+ csr_ops->read_csr_ring_config = read_csr_ring_config;
csr_ops->write_csr_ring_config = write_csr_ring_config;
+ csr_ops->read_csr_ring_base = read_csr_ring_base;
csr_ops->write_csr_ring_base = write_csr_ring_base;
+ csr_ops->read_csr_int_en = read_csr_int_en;
+ csr_ops->write_csr_int_en = write_csr_int_en;
+ csr_ops->read_csr_int_flag = read_csr_int_flag;
csr_ops->write_csr_int_flag = write_csr_int_flag;
+ csr_ops->read_csr_int_srcsel = read_csr_int_srcsel;
csr_ops->write_csr_int_srcsel = write_csr_int_srcsel;
+ csr_ops->write_csr_int_srcsel_w_val = write_csr_int_srcsel_w_val;
+ csr_ops->read_csr_int_col_en = read_csr_int_col_en;
csr_ops->write_csr_int_col_en = write_csr_int_col_en;
+ csr_ops->read_csr_int_col_ctl = read_csr_int_col_ctl;
csr_ops->write_csr_int_col_ctl = write_csr_int_col_ctl;
+ csr_ops->read_csr_int_flag_and_col = read_csr_int_flag_and_col;
csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col;
+ csr_ops->read_csr_ring_srv_arb_en = read_csr_ring_srv_arb_en;
csr_ops->write_csr_ring_srv_arb_en = write_csr_ring_srv_arb_en;
+ csr_ops->get_int_col_ctl_enable_mask = get_int_col_ctl_enable_mask;
}
EXPORT_SYMBOL_GPL(adf_gen4_init_hw_csr_ops);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h
index 08d803432d9f..6f33e7c87c2c 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h
@@ -12,13 +12,22 @@
#define ADF_RING_CSR_RING_UBASE 0x1080
#define ADF_RING_CSR_RING_HEAD 0x0C0
#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_STAT 0x140
+#define ADF_RING_CSR_UO_STAT 0x148
#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_NE_STAT 0x150
+#define ADF_RING_CSR_NF_STAT 0x154
+#define ADF_RING_CSR_F_STAT 0x158
+#define ADF_RING_CSR_C_STAT 0x15C
+#define ADF_RING_CSR_INT_FLAG_EN 0x16C
#define ADF_RING_CSR_INT_FLAG 0x170
#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_COL_EN 0x17C
#define ADF_RING_CSR_INT_COL_CTL 0x180
#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_EXP_STAT 0x188
+#define ADF_RING_CSR_EXP_INT_EN 0x18C
#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000
-#define ADF_RING_CSR_INT_COL_EN 0x17C
#define ADF_RING_CSR_ADDR_OFFSET 0x100000
#define ADF_RING_BUNDLE_SIZE 0x2000
#define ADF_RING_CSR_RING_SRV_ARB_EN 0x19C
@@ -33,9 +42,41 @@
ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
ADF_RING_CSR_RING_TAIL + ((ring) << 2))
+#define READ_CSR_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_STAT)
+#define READ_CSR_UO_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_UO_STAT)
#define READ_CSR_E_STAT(csr_base_addr, bank) \
ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_E_STAT)
+#define READ_CSR_NE_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_NE_STAT)
+#define READ_CSR_NF_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_NF_STAT)
+#define READ_CSR_F_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_F_STAT)
+#define READ_CSR_C_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_C_STAT)
+#define READ_CSR_EXP_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_EXP_STAT)
+#define READ_CSR_EXP_INT_EN(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_EXP_INT_EN)
+#define WRITE_CSR_EXP_INT_EN(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_EXP_INT_EN, value)
+#define READ_CSR_RING_CONFIG(csr_base_addr, bank, ring) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_CONFIG + ((ring) << 2))
#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
@@ -57,6 +98,25 @@ do { \
ADF_RING_CSR_RING_UBASE + ((_ring) << 2), u_base); \
} while (0)

+static inline u64 read_base(void __iomem *csr_base_addr, u32 bank, u32 ring)
+{
+ u32 l_base, u_base;
+
+ /*
+ * Use special IO wrapper for ring base as LBASE and UBASE are
+ * not physically contigious
+ */
+ l_base = ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) +
+ ADF_RING_CSR_RING_LBASE + (ring << 2));
+ u_base = ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * bank) +
+ ADF_RING_CSR_RING_UBASE + (ring << 2));
+
+ return (u64)u_base << 32 | (u64)l_base;
+}
+
+#define READ_CSR_RING_BASE(csr_base_addr, bank, ring) \
+ read_base((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, (bank), (ring))
+
#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
@@ -65,28 +125,59 @@ do { \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
ADF_RING_CSR_RING_TAIL + ((ring) << 2), value)
+#define READ_CSR_INT_EN(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_INT_FLAG_EN)
+#define WRITE_CSR_INT_EN(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_FLAG_EN, (value))
+#define READ_CSR_INT_FLAG(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_INT_FLAG)
#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
ADF_RING_CSR_INT_FLAG, (value))
+#define READ_CSR_INT_SRCSEL(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_INT_SRCSEL)
#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK)
+#define WRITE_CSR_INT_SRCSEL_W_VAL(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_SRCSEL, (value))
+#define READ_CSR_INT_COL_EN(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_INT_COL_EN)
#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
ADF_RING_CSR_INT_COL_EN, (value))
+#define READ_CSR_INT_COL_CTL(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_INT_COL_CTL)
#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
ADF_RING_CSR_INT_COL_CTL, \
ADF_RING_CSR_INT_COL_CTL_ENABLE | (value))
+#define READ_CSR_INT_FLAG_AND_COL(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_FLAG_AND_COL)
#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
ADF_RING_CSR_INT_FLAG_AND_COL, (value))

+#define READ_CSR_RING_SRV_ARB_EN(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_SRV_ARB_EN)
#define WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value) \
ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
ADF_RING_BUNDLE_SIZE * (bank) + \
--
2.18.2


2024-02-01 15:43:08

by Zeng, Xin

Subject: [PATCH 04/10] crypto: qat - relocate CSR access code

From: Giovanni Cabiddu <[email protected]>

As the common hw_data files are growing and the adf_hw_csr_ops structure is
going to be extended with new operations, move all the logic related to ring
CSRs to the newly created adf_gen[2|4]_hw_csr_data.[c|h] files.

This does not introduce any functional change.
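
Consumers keep using the same per-generation init helper after the move. A
minimal sketch of the call pattern is below; the local ops variable and the
accessor picked here are only for illustration.

#include "adf_gen4_hw_csr_data.h"

static void sketch_use_gen4_csr_ops(void __iomem *csr_base)
{
	struct adf_hw_csr_ops ops;

	/* Populate the table with the GEN4 ring CSR accessors. */
	adf_gen4_init_hw_csr_ops(&ops);

	/* Ring CSR access then funnels through the populated ops table. */
	ops.write_csr_int_col_ctl(csr_base, 0, 0);
}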

Signed-off-by: Giovanni Cabiddu <[email protected]>
Reviewed-by: Xin Zeng <[email protected]>
Signed-off-by: Xin Zeng <[email protected]>
---
.../intel/qat/qat_420xx/adf_420xx_hw_data.c | 1 +
.../intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 1 +
.../intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c | 1 +
.../qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c | 1 +
.../intel/qat/qat_c62x/adf_c62x_hw_data.c | 1 +
.../intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c | 1 +
drivers/crypto/intel/qat/qat_common/Makefile | 2 +
.../qat/qat_common/adf_gen2_hw_csr_data.c | 101 ++++++++++++++++++
.../qat/qat_common/adf_gen2_hw_csr_data.h | 86 +++++++++++++++
.../intel/qat/qat_common/adf_gen2_hw_data.c | 97 -----------------
.../intel/qat/qat_common/adf_gen2_hw_data.h | 76 -------------
.../qat/qat_common/adf_gen4_hw_csr_data.c | 101 ++++++++++++++++++
.../qat/qat_common/adf_gen4_hw_csr_data.h | 97 +++++++++++++++++
.../intel/qat/qat_common/adf_gen4_hw_data.c | 97 -----------------
.../intel/qat/qat_common/adf_gen4_hw_data.h | 94 +---------------
.../qat/qat_dh895xcc/adf_dh895xcc_hw_data.c | 1 +
.../qat_dh895xccvf/adf_dh895xccvf_hw_data.c | 1 +
17 files changed, 397 insertions(+), 362 deletions(-)
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.h
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h

diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
index a87d29ae724f..bd36b536f663 100644
--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
@@ -10,6 +10,7 @@
#include <adf_fw_config.h>
#include <adf_gen4_config.h>
#include <adf_gen4_dc.h>
+#include <adf_gen4_hw_csr_data.h>
#include <adf_gen4_hw_data.h>
#include <adf_gen4_pfvf.h>
#include <adf_gen4_pm.h>
diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
index 94a0ebb03d8c..054e48d194cd 100644
--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
@@ -10,6 +10,7 @@
#include <adf_fw_config.h>
#include <adf_gen4_config.h>
#include <adf_gen4_dc.h>
+#include <adf_gen4_hw_csr_data.h>
#include <adf_gen4_hw_data.h>
#include <adf_gen4_pfvf.h>
#include <adf_gen4_pm.h>
diff --git a/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c b/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c
index a882e0ea2279..201f9412c582 100644
--- a/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c
@@ -6,6 +6,7 @@
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
#include <adf_gen2_dc.h>
+#include <adf_gen2_hw_csr_data.h>
#include <adf_gen2_hw_data.h>
#include <adf_gen2_pfvf.h>
#include "adf_c3xxx_hw_data.h"
diff --git a/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
index 84d9486e04de..a512ca4efd3f 100644
--- a/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
@@ -4,6 +4,7 @@
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
#include <adf_gen2_dc.h>
+#include <adf_gen2_hw_csr_data.h>
#include <adf_gen2_hw_data.h>
#include <adf_gen2_pfvf.h>
#include <adf_pfvf_vf_msg.h>
diff --git a/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c b/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c
index 48cf3eb7c734..6b5b0cf9c7c7 100644
--- a/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c
@@ -6,6 +6,7 @@
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
#include <adf_gen2_dc.h>
+#include <adf_gen2_hw_csr_data.h>
#include <adf_gen2_hw_data.h>
#include <adf_gen2_pfvf.h>
#include "adf_c62x_hw_data.h"
diff --git a/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c
index 751d7aa57fc7..4aaaaf921734 100644
--- a/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_c62xvf/adf_c62xvf_hw_data.c
@@ -4,6 +4,7 @@
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
#include <adf_gen2_dc.h>
+#include <adf_gen2_hw_csr_data.h>
#include <adf_gen2_hw_data.h>
#include <adf_gen2_pfvf.h>
#include <adf_pfvf_vf_msg.h>
diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile
index 6908727bff3b..da4d806a9287 100644
--- a/drivers/crypto/intel/qat/qat_common/Makefile
+++ b/drivers/crypto/intel/qat/qat_common/Makefile
@@ -14,9 +14,11 @@ intel_qat-objs := adf_cfg.o \
adf_hw_arbiter.o \
adf_sysfs.o \
adf_sysfs_ras_counters.o \
+ adf_gen2_hw_csr_data.o \
adf_gen2_hw_data.o \
adf_gen2_config.o \
adf_gen4_config.o \
+ adf_gen4_hw_csr_data.o \
adf_gen4_hw_data.o \
adf_gen4_pm.o \
adf_gen2_dc.o \
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.c
new file mode 100644
index 000000000000..650c9edd8a66
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2024 Intel Corporation */
+#include <linux/types.h>
+#include "adf_gen2_hw_csr_data.h"
+
+static u64 build_csr_ring_base_addr(dma_addr_t addr, u32 size)
+{
+ return BUILD_RING_BASE_ADDR(addr, size);
+}
+
+static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring)
+{
+ return READ_CSR_RING_HEAD(csr_base_addr, bank, ring);
+}
+
+static void write_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring,
+ u32 value)
+{
+ WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value);
+}
+
+static u32 read_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring)
+{
+ return READ_CSR_RING_TAIL(csr_base_addr, bank, ring);
+}
+
+static void write_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring,
+ u32 value)
+{
+ WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value);
+}
+
+static u32 read_csr_e_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_E_STAT(csr_base_addr, bank);
+}
+
+static void write_csr_ring_config(void __iomem *csr_base_addr, u32 bank,
+ u32 ring, u32 value)
+{
+ WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value);
+}
+
+static void write_csr_ring_base(void __iomem *csr_base_addr, u32 bank, u32 ring,
+ dma_addr_t addr)
+{
+ WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, addr);
+}
+
+static void write_csr_int_flag(void __iomem *csr_base_addr, u32 bank, u32 value)
+{
+ WRITE_CSR_INT_FLAG(csr_base_addr, bank, value);
+}
+
+static void write_csr_int_srcsel(void __iomem *csr_base_addr, u32 bank)
+{
+ WRITE_CSR_INT_SRCSEL(csr_base_addr, bank);
+}
+
+static void write_csr_int_col_en(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value);
+}
+
+static void write_csr_int_col_ctl(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value);
+}
+
+static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value);
+}
+
+static void write_csr_ring_srv_arb_en(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value);
+}
+
+void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
+{
+ csr_ops->build_csr_ring_base_addr = build_csr_ring_base_addr;
+ csr_ops->read_csr_ring_head = read_csr_ring_head;
+ csr_ops->write_csr_ring_head = write_csr_ring_head;
+ csr_ops->read_csr_ring_tail = read_csr_ring_tail;
+ csr_ops->write_csr_ring_tail = write_csr_ring_tail;
+ csr_ops->read_csr_e_stat = read_csr_e_stat;
+ csr_ops->write_csr_ring_config = write_csr_ring_config;
+ csr_ops->write_csr_ring_base = write_csr_ring_base;
+ csr_ops->write_csr_int_flag = write_csr_int_flag;
+ csr_ops->write_csr_int_srcsel = write_csr_int_srcsel;
+ csr_ops->write_csr_int_col_en = write_csr_int_col_en;
+ csr_ops->write_csr_int_col_ctl = write_csr_int_col_ctl;
+ csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col;
+ csr_ops->write_csr_ring_srv_arb_en = write_csr_ring_srv_arb_en;
+}
+EXPORT_SYMBOL_GPL(adf_gen2_init_hw_csr_ops);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.h
new file mode 100644
index 000000000000..55058b0f9e52
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_csr_data.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2024 Intel Corporation */
+#ifndef ADF_GEN2_HW_CSR_DATA_H_
+#define ADF_GEN2_HW_CSR_DATA_H_
+
+#include <linux/bitops.h>
+#include "adf_accel_devices.h"
+
+#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
+#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
+#define ADF_RING_CSR_RING_CONFIG 0x000
+#define ADF_RING_CSR_RING_LBASE 0x040
+#define ADF_RING_CSR_RING_UBASE 0x080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_FLAG 0x170
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_SRCSEL_2 0x178
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000
+#define ADF_RING_BUNDLE_SIZE 0x1000
+#define ADF_ARB_REG_SLOT 0x1000
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C
+
+#define BUILD_RING_BASE_ADDR(addr, size) \
+ (((addr) >> 6) & (GENMASK_ULL(63, 0) << (size)))
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+ ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_RING_HEAD + ((ring) << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+ ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_RING_TAIL + ((ring) << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_RING_CONFIG + ((ring) << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+ u32 l_base = 0, u_base = 0; \
+ l_base = (u32)((value) & 0xFFFFFFFF); \
+ u_base = (u32)(((value) & 0xFFFFFFFF00000000ULL) >> 32); \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_RING_LBASE + ((ring) << 2), l_base); \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_RING_UBASE + ((ring) << 2), u_base); \
+} while (0)
+
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_RING_HEAD + ((ring) << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_RING_TAIL + ((ring) << 2), value)
+#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_INT_FLAG, value)
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+do { \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0); \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
+} while (0)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_INT_COL_EN, value)
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_INT_COL_CTL, \
+ ADF_RING_CSR_INT_COL_CTL_ENABLE | (value))
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+ ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
+ ADF_RING_CSR_INT_FLAG_AND_COL, value)
+
+#define WRITE_CSR_RING_SRV_ARB_EN(csr_addr, index, value) \
+ ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+ (ADF_ARB_REG_SLOT * (index)), value)
+
+void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
+
+#endif
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c
index d1884547b5a1..1f64bf49b221 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.c
@@ -111,103 +111,6 @@ void adf_gen2_enable_ints(struct adf_accel_dev *accel_dev)
}
EXPORT_SYMBOL_GPL(adf_gen2_enable_ints);

-static u64 build_csr_ring_base_addr(dma_addr_t addr, u32 size)
-{
- return BUILD_RING_BASE_ADDR(addr, size);
-}
-
-static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring)
-{
- return READ_CSR_RING_HEAD(csr_base_addr, bank, ring);
-}
-
-static void write_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring,
- u32 value)
-{
- WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value);
-}
-
-static u32 read_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring)
-{
- return READ_CSR_RING_TAIL(csr_base_addr, bank, ring);
-}
-
-static void write_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring,
- u32 value)
-{
- WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value);
-}
-
-static u32 read_csr_e_stat(void __iomem *csr_base_addr, u32 bank)
-{
- return READ_CSR_E_STAT(csr_base_addr, bank);
-}
-
-static void write_csr_ring_config(void __iomem *csr_base_addr, u32 bank,
- u32 ring, u32 value)
-{
- WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value);
-}
-
-static void write_csr_ring_base(void __iomem *csr_base_addr, u32 bank, u32 ring,
- dma_addr_t addr)
-{
- WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, addr);
-}
-
-static void write_csr_int_flag(void __iomem *csr_base_addr, u32 bank, u32 value)
-{
- WRITE_CSR_INT_FLAG(csr_base_addr, bank, value);
-}
-
-static void write_csr_int_srcsel(void __iomem *csr_base_addr, u32 bank)
-{
- WRITE_CSR_INT_SRCSEL(csr_base_addr, bank);
-}
-
-static void write_csr_int_col_en(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value);
-}
-
-static void write_csr_int_col_ctl(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value);
-}
-
-static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value);
-}
-
-static void write_csr_ring_srv_arb_en(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value);
-}
-
-void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
-{
- csr_ops->build_csr_ring_base_addr = build_csr_ring_base_addr;
- csr_ops->read_csr_ring_head = read_csr_ring_head;
- csr_ops->write_csr_ring_head = write_csr_ring_head;
- csr_ops->read_csr_ring_tail = read_csr_ring_tail;
- csr_ops->write_csr_ring_tail = write_csr_ring_tail;
- csr_ops->read_csr_e_stat = read_csr_e_stat;
- csr_ops->write_csr_ring_config = write_csr_ring_config;
- csr_ops->write_csr_ring_base = write_csr_ring_base;
- csr_ops->write_csr_int_flag = write_csr_int_flag;
- csr_ops->write_csr_int_srcsel = write_csr_int_srcsel;
- csr_ops->write_csr_int_col_en = write_csr_int_col_en;
- csr_ops->write_csr_int_col_ctl = write_csr_int_col_ctl;
- csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col;
- csr_ops->write_csr_ring_srv_arb_en = write_csr_ring_srv_arb_en;
-}
-EXPORT_SYMBOL_GPL(adf_gen2_init_hw_csr_ops);
-
u32 adf_gen2_get_accel_cap(struct adf_accel_dev *accel_dev)
{
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h
index 6bd341061de4..708e9186127b 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen2_hw_data.h
@@ -6,78 +6,9 @@
#include "adf_accel_devices.h"
#include "adf_cfg_common.h"

-/* Transport access */
-#define ADF_BANK_INT_SRC_SEL_MASK_0 0x4444444CUL
-#define ADF_BANK_INT_SRC_SEL_MASK_X 0x44444444UL
-#define ADF_RING_CSR_RING_CONFIG 0x000
-#define ADF_RING_CSR_RING_LBASE 0x040
-#define ADF_RING_CSR_RING_UBASE 0x080
-#define ADF_RING_CSR_RING_HEAD 0x0C0
-#define ADF_RING_CSR_RING_TAIL 0x100
-#define ADF_RING_CSR_E_STAT 0x14C
-#define ADF_RING_CSR_INT_FLAG 0x170
-#define ADF_RING_CSR_INT_SRCSEL 0x174
-#define ADF_RING_CSR_INT_SRCSEL_2 0x178
-#define ADF_RING_CSR_INT_COL_EN 0x17C
-#define ADF_RING_CSR_INT_COL_CTL 0x180
-#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
-#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000
-#define ADF_RING_BUNDLE_SIZE 0x1000
#define ADF_GEN2_RX_RINGS_OFFSET 8
#define ADF_GEN2_TX_RINGS_MASK 0xFF

-#define BUILD_RING_BASE_ADDR(addr, size) \
- (((addr) >> 6) & (GENMASK_ULL(63, 0) << (size)))
-#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
- ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_RING_HEAD + ((ring) << 2))
-#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
- ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_RING_TAIL + ((ring) << 2))
-#define READ_CSR_E_STAT(csr_base_addr, bank) \
- ADF_CSR_RD(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_E_STAT)
-#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_RING_CONFIG + ((ring) << 2), value)
-#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
-do { \
- u32 l_base = 0, u_base = 0; \
- l_base = (u32)((value) & 0xFFFFFFFF); \
- u_base = (u32)(((value) & 0xFFFFFFFF00000000ULL) >> 32); \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_RING_LBASE + ((ring) << 2), l_base); \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_RING_UBASE + ((ring) << 2), u_base); \
-} while (0)
-
-#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_RING_HEAD + ((ring) << 2), value)
-#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_RING_TAIL + ((ring) << 2), value)
-#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_INT_FLAG, value)
-#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
-do { \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK_0); \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_INT_SRCSEL_2, ADF_BANK_INT_SRC_SEL_MASK_X); \
-} while (0)
-#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_INT_COL_EN, value)
-#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_INT_COL_CTL, \
- ADF_RING_CSR_INT_COL_CTL_ENABLE | (value))
-#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
- ADF_CSR_WR(csr_base_addr, (ADF_RING_BUNDLE_SIZE * (bank)) + \
- ADF_RING_CSR_INT_FLAG_AND_COL, value)
-
/* AE to function map */
#define AE2FUNCTION_MAP_A_OFFSET (0x3A400 + 0x190)
#define AE2FUNCTION_MAP_B_OFFSET (0x3A400 + 0x310)
@@ -106,12 +37,6 @@ do { \
#define ADF_ARB_OFFSET 0x30000
#define ADF_ARB_WRK_2_SER_MAP_OFFSET 0x180
#define ADF_ARB_CONFIG (BIT(31) | BIT(6) | BIT(0))
-#define ADF_ARB_REG_SLOT 0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C
-
-#define WRITE_CSR_RING_SRV_ARB_EN(csr_addr, index, value) \
- ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
- (ADF_ARB_REG_SLOT * (index)), value)

/* Power gating */
#define ADF_POWERGATE_DC BIT(23)
@@ -158,7 +83,6 @@ u32 adf_gen2_get_num_aes(struct adf_hw_device_data *self);
void adf_gen2_enable_error_correction(struct adf_accel_dev *accel_dev);
void adf_gen2_cfg_iov_thds(struct adf_accel_dev *accel_dev, bool enable,
int num_a_regs, int num_b_regs);
-void adf_gen2_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
void adf_gen2_get_admin_info(struct admin_info *admin_csrs_info);
void adf_gen2_get_arb_info(struct arb_info *arb_info);
void adf_gen2_enable_ints(struct adf_accel_dev *accel_dev);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c
new file mode 100644
index 000000000000..652ef4598930
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2024 Intel Corporation */
+#include <linux/types.h>
+#include "adf_gen4_hw_csr_data.h"
+
+static u64 build_csr_ring_base_addr(dma_addr_t addr, u32 size)
+{
+ return BUILD_RING_BASE_ADDR(addr, size);
+}
+
+static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring)
+{
+ return READ_CSR_RING_HEAD(csr_base_addr, bank, ring);
+}
+
+static void write_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring,
+ u32 value)
+{
+ WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value);
+}
+
+static u32 read_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring)
+{
+ return READ_CSR_RING_TAIL(csr_base_addr, bank, ring);
+}
+
+static void write_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring,
+ u32 value)
+{
+ WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value);
+}
+
+static u32 read_csr_e_stat(void __iomem *csr_base_addr, u32 bank)
+{
+ return READ_CSR_E_STAT(csr_base_addr, bank);
+}
+
+static void write_csr_ring_config(void __iomem *csr_base_addr, u32 bank, u32 ring,
+ u32 value)
+{
+ WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value);
+}
+
+static void write_csr_ring_base(void __iomem *csr_base_addr, u32 bank, u32 ring,
+ dma_addr_t addr)
+{
+ WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, addr);
+}
+
+static void write_csr_int_flag(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_INT_FLAG(csr_base_addr, bank, value);
+}
+
+static void write_csr_int_srcsel(void __iomem *csr_base_addr, u32 bank)
+{
+ WRITE_CSR_INT_SRCSEL(csr_base_addr, bank);
+}
+
+static void write_csr_int_col_en(void __iomem *csr_base_addr, u32 bank, u32 value)
+{
+ WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value);
+}
+
+static void write_csr_int_col_ctl(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value);
+}
+
+static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value);
+}
+
+static void write_csr_ring_srv_arb_en(void __iomem *csr_base_addr, u32 bank,
+ u32 value)
+{
+ WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value);
+}
+
+void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
+{
+ csr_ops->build_csr_ring_base_addr = build_csr_ring_base_addr;
+ csr_ops->read_csr_ring_head = read_csr_ring_head;
+ csr_ops->write_csr_ring_head = write_csr_ring_head;
+ csr_ops->read_csr_ring_tail = read_csr_ring_tail;
+ csr_ops->write_csr_ring_tail = write_csr_ring_tail;
+ csr_ops->read_csr_e_stat = read_csr_e_stat;
+ csr_ops->write_csr_ring_config = write_csr_ring_config;
+ csr_ops->write_csr_ring_base = write_csr_ring_base;
+ csr_ops->write_csr_int_flag = write_csr_int_flag;
+ csr_ops->write_csr_int_srcsel = write_csr_int_srcsel;
+ csr_ops->write_csr_int_col_en = write_csr_int_col_en;
+ csr_ops->write_csr_int_col_ctl = write_csr_int_col_ctl;
+ csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col;
+ csr_ops->write_csr_ring_srv_arb_en = write_csr_ring_srv_arb_en;
+}
+EXPORT_SYMBOL_GPL(adf_gen4_init_hw_csr_ops);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h
new file mode 100644
index 000000000000..08d803432d9f
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_csr_data.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2024 Intel Corporation */
+#ifndef ADF_GEN4_HW_CSR_DATA_H_
+#define ADF_GEN4_HW_CSR_DATA_H_
+
+#include <linux/bitops.h>
+#include "adf_accel_devices.h"
+
+#define ADF_BANK_INT_SRC_SEL_MASK 0x44UL
+#define ADF_RING_CSR_RING_CONFIG 0x1000
+#define ADF_RING_CSR_RING_LBASE 0x1040
+#define ADF_RING_CSR_RING_UBASE 0x1080
+#define ADF_RING_CSR_RING_HEAD 0x0C0
+#define ADF_RING_CSR_RING_TAIL 0x100
+#define ADF_RING_CSR_E_STAT 0x14C
+#define ADF_RING_CSR_INT_FLAG 0x170
+#define ADF_RING_CSR_INT_SRCSEL 0x174
+#define ADF_RING_CSR_INT_COL_CTL 0x180
+#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
+#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000
+#define ADF_RING_CSR_INT_COL_EN 0x17C
+#define ADF_RING_CSR_ADDR_OFFSET 0x100000
+#define ADF_RING_BUNDLE_SIZE 0x2000
+#define ADF_RING_CSR_RING_SRV_ARB_EN 0x19C
+
+#define BUILD_RING_BASE_ADDR(addr, size) \
+ ((((addr) >> 6) & (GENMASK_ULL(63, 0) << (size))) << 6)
+#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_HEAD + ((ring) << 2))
+#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_TAIL + ((ring) << 2))
+#define READ_CSR_E_STAT(csr_base_addr, bank) \
+ ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_E_STAT)
+#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_CONFIG + ((ring) << 2), value)
+#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
+do { \
+ void __iomem *_csr_base_addr = csr_base_addr; \
+ u32 _bank = bank; \
+ u32 _ring = ring; \
+ dma_addr_t _value = value; \
+ u32 l_base = 0, u_base = 0; \
+ l_base = lower_32_bits(_value); \
+ u_base = upper_32_bits(_value); \
+ ADF_CSR_WR((_csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (_bank) + \
+ ADF_RING_CSR_RING_LBASE + ((_ring) << 2), l_base); \
+ ADF_CSR_WR((_csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (_bank) + \
+ ADF_RING_CSR_RING_UBASE + ((_ring) << 2), u_base); \
+} while (0)
+
+#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_HEAD + ((ring) << 2), value)
+#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_TAIL + ((ring) << 2), value)
+#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_FLAG, (value))
+#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK)
+#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_COL_EN, (value))
+#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_COL_CTL, \
+ ADF_RING_CSR_INT_COL_CTL_ENABLE | (value))
+#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_INT_FLAG_AND_COL, (value))
+
+#define WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value) \
+ ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
+ ADF_RING_BUNDLE_SIZE * (bank) + \
+ ADF_RING_CSR_RING_SRV_ARB_EN, (value))
+
+void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
+
+#endif
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
index a0d1326ac73d..2140ac249e9d 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
@@ -7,103 +7,6 @@
#include "adf_gen4_hw_data.h"
#include "adf_gen4_pm.h"

-static u64 build_csr_ring_base_addr(dma_addr_t addr, u32 size)
-{
- return BUILD_RING_BASE_ADDR(addr, size);
-}
-
-static u32 read_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring)
-{
- return READ_CSR_RING_HEAD(csr_base_addr, bank, ring);
-}
-
-static void write_csr_ring_head(void __iomem *csr_base_addr, u32 bank, u32 ring,
- u32 value)
-{
- WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value);
-}
-
-static u32 read_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring)
-{
- return READ_CSR_RING_TAIL(csr_base_addr, bank, ring);
-}
-
-static void write_csr_ring_tail(void __iomem *csr_base_addr, u32 bank, u32 ring,
- u32 value)
-{
- WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value);
-}
-
-static u32 read_csr_e_stat(void __iomem *csr_base_addr, u32 bank)
-{
- return READ_CSR_E_STAT(csr_base_addr, bank);
-}
-
-static void write_csr_ring_config(void __iomem *csr_base_addr, u32 bank, u32 ring,
- u32 value)
-{
- WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value);
-}
-
-static void write_csr_ring_base(void __iomem *csr_base_addr, u32 bank, u32 ring,
- dma_addr_t addr)
-{
- WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, addr);
-}
-
-static void write_csr_int_flag(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_INT_FLAG(csr_base_addr, bank, value);
-}
-
-static void write_csr_int_srcsel(void __iomem *csr_base_addr, u32 bank)
-{
- WRITE_CSR_INT_SRCSEL(csr_base_addr, bank);
-}
-
-static void write_csr_int_col_en(void __iomem *csr_base_addr, u32 bank, u32 value)
-{
- WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value);
-}
-
-static void write_csr_int_col_ctl(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value);
-}
-
-static void write_csr_int_flag_and_col(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value);
-}
-
-static void write_csr_ring_srv_arb_en(void __iomem *csr_base_addr, u32 bank,
- u32 value)
-{
- WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value);
-}
-
-void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
-{
- csr_ops->build_csr_ring_base_addr = build_csr_ring_base_addr;
- csr_ops->read_csr_ring_head = read_csr_ring_head;
- csr_ops->write_csr_ring_head = write_csr_ring_head;
- csr_ops->read_csr_ring_tail = read_csr_ring_tail;
- csr_ops->write_csr_ring_tail = write_csr_ring_tail;
- csr_ops->read_csr_e_stat = read_csr_e_stat;
- csr_ops->write_csr_ring_config = write_csr_ring_config;
- csr_ops->write_csr_ring_base = write_csr_ring_base;
- csr_ops->write_csr_int_flag = write_csr_int_flag;
- csr_ops->write_csr_int_srcsel = write_csr_int_srcsel;
- csr_ops->write_csr_int_col_en = write_csr_int_col_en;
- csr_ops->write_csr_int_col_ctl = write_csr_int_col_ctl;
- csr_ops->write_csr_int_flag_and_col = write_csr_int_flag_and_col;
- csr_ops->write_csr_ring_srv_arb_en = write_csr_ring_srv_arb_en;
-}
-EXPORT_SYMBOL_GPL(adf_gen4_init_hw_csr_ops);
-
u32 adf_gen4_get_accel_mask(struct adf_hw_device_data *self)
{
return ADF_GEN4_ACCELERATORS_MASK;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
index 9cabbb5ce96b..f94ccf64e55d 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) */
/* Copyright(c) 2020 Intel Corporation */
-#ifndef ADF_GEN4_HW_CSR_DATA_H_
-#define ADF_GEN4_HW_CSR_DATA_H_
+#ifndef ADF_GEN4_HW_DATA_H_
+#define ADF_GEN4_HW_DATA_H_

#include <linux/units.h>

@@ -54,95 +54,6 @@
#define ADF_GEN4_ADMINMSGLR_OFFSET 0x500578
#define ADF_GEN4_MAILBOX_BASE_OFFSET 0x600970

-/* Transport access */
-#define ADF_BANK_INT_SRC_SEL_MASK 0x44UL
-#define ADF_RING_CSR_RING_CONFIG 0x1000
-#define ADF_RING_CSR_RING_LBASE 0x1040
-#define ADF_RING_CSR_RING_UBASE 0x1080
-#define ADF_RING_CSR_RING_HEAD 0x0C0
-#define ADF_RING_CSR_RING_TAIL 0x100
-#define ADF_RING_CSR_E_STAT 0x14C
-#define ADF_RING_CSR_INT_FLAG 0x170
-#define ADF_RING_CSR_INT_SRCSEL 0x174
-#define ADF_RING_CSR_INT_COL_CTL 0x180
-#define ADF_RING_CSR_INT_FLAG_AND_COL 0x184
-#define ADF_RING_CSR_INT_COL_CTL_ENABLE 0x80000000
-#define ADF_RING_CSR_INT_COL_EN 0x17C
-#define ADF_RING_CSR_ADDR_OFFSET 0x100000
-#define ADF_RING_BUNDLE_SIZE 0x2000
-
-#define BUILD_RING_BASE_ADDR(addr, size) \
- ((((addr) >> 6) & (GENMASK_ULL(63, 0) << (size))) << 6)
-#define READ_CSR_RING_HEAD(csr_base_addr, bank, ring) \
- ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_RING_HEAD + ((ring) << 2))
-#define READ_CSR_RING_TAIL(csr_base_addr, bank, ring) \
- ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_RING_TAIL + ((ring) << 2))
-#define READ_CSR_E_STAT(csr_base_addr, bank) \
- ADF_CSR_RD((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + ADF_RING_CSR_E_STAT)
-#define WRITE_CSR_RING_CONFIG(csr_base_addr, bank, ring, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_RING_CONFIG + ((ring) << 2), value)
-#define WRITE_CSR_RING_BASE(csr_base_addr, bank, ring, value) \
-do { \
- void __iomem *_csr_base_addr = csr_base_addr; \
- u32 _bank = bank; \
- u32 _ring = ring; \
- dma_addr_t _value = value; \
- u32 l_base = 0, u_base = 0; \
- l_base = lower_32_bits(_value); \
- u_base = upper_32_bits(_value); \
- ADF_CSR_WR((_csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (_bank) + \
- ADF_RING_CSR_RING_LBASE + ((_ring) << 2), l_base); \
- ADF_CSR_WR((_csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (_bank) + \
- ADF_RING_CSR_RING_UBASE + ((_ring) << 2), u_base); \
-} while (0)
-
-#define WRITE_CSR_RING_HEAD(csr_base_addr, bank, ring, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_RING_HEAD + ((ring) << 2), value)
-#define WRITE_CSR_RING_TAIL(csr_base_addr, bank, ring, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_RING_TAIL + ((ring) << 2), value)
-#define WRITE_CSR_INT_FLAG(csr_base_addr, bank, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_INT_FLAG, (value))
-#define WRITE_CSR_INT_SRCSEL(csr_base_addr, bank) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_INT_SRCSEL, ADF_BANK_INT_SRC_SEL_MASK)
-#define WRITE_CSR_INT_COL_EN(csr_base_addr, bank, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_INT_COL_EN, (value))
-#define WRITE_CSR_INT_COL_CTL(csr_base_addr, bank, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_INT_COL_CTL, \
- ADF_RING_CSR_INT_COL_CTL_ENABLE | (value))
-#define WRITE_CSR_INT_FLAG_AND_COL(csr_base_addr, bank, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_INT_FLAG_AND_COL, (value))
-
-/* Arbiter configuration */
-#define ADF_RING_CSR_RING_SRV_ARB_EN 0x19C
-
-#define WRITE_CSR_RING_SRV_ARB_EN(csr_base_addr, bank, value) \
- ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET, \
- ADF_RING_BUNDLE_SIZE * (bank) + \
- ADF_RING_CSR_RING_SRV_ARB_EN, (value))
-
/* Default ring mapping */
#define ADF_GEN4_DEFAULT_RING_TO_SRV_MAP \
(ASYM << ADF_CFG_SERV_RING_PAIR_0_SHIFT | \
@@ -234,7 +145,6 @@ u32 adf_gen4_get_num_aes(struct adf_hw_device_data *self);
enum dev_sku_info adf_gen4_get_sku(struct adf_hw_device_data *self);
u32 adf_gen4_get_sram_bar_id(struct adf_hw_device_data *self);
int adf_gen4_init_device(struct adf_accel_dev *accel_dev);
-void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number);
void adf_gen4_set_msix_default_rttable(struct adf_accel_dev *accel_dev);
void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev);
diff --git a/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
index af14090cc4be..6e24d57e6b98 100644
--- a/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
@@ -5,6 +5,7 @@
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
#include <adf_gen2_dc.h>
+#include <adf_gen2_hw_csr_data.h>
#include <adf_gen2_hw_data.h>
#include <adf_gen2_pfvf.h>
#include "adf_dh895xcc_hw_data.h"
diff --git a/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
index 70e56cc16ece..f4ee4c2e00da 100644
--- a/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
@@ -4,6 +4,7 @@
#include <adf_common_drv.h>
#include <adf_gen2_config.h>
#include <adf_gen2_dc.h>
+#include <adf_gen2_hw_csr_data.h>
#include <adf_gen2_hw_data.h>
#include <adf_gen2_pfvf.h>
#include <adf_pfvf_vf_msg.h>
--
2.18.2


2024-02-01 15:43:11

by Zeng, Xin


From: Siming Wan <[email protected]>

The function get_sla_arr_of_type() returns a pointer to an SLA type-specific
array.
Rename it to adf_rl_get_sla_arr_of_type() and make it externally visible, as
it will be used outside of adf_rl.c later in this series.

This does not introduce any functional change.
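
For illustration, a caller outside adf_rl.c could walk the SLAs of a given
type as sketched below. The loop mirrors what adf_rl_remove_sla_all() already
does internally and is not part of this patch:

	struct rl_sla **sla_arr;
	u32 max_id, i;

	/* Get the LEAF SLA array and the maximum number of entries in it */
	max_id = adf_rl_get_sla_arr_of_type(rl_data, RL_LEAF, &sla_arr);
	for (i = 0; i < max_id; i++) {
		if (!sla_arr[i])
			continue;
		/* sla_arr[i] is a valid struct rl_sla of type RL_LEAF */
	}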

Signed-off-by: Siming Wan <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
Reviewed-by: Damian Muszynski <[email protected]>
Reviewed-by: Xin Zeng <[email protected]>
Signed-off-by: Xin Zeng <[email protected]>
---
drivers/crypto/intel/qat/qat_common/adf_rl.c | 10 +++++-----
drivers/crypto/intel/qat/qat_common/adf_rl.h | 2 ++
2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.c b/drivers/crypto/intel/qat/qat_common/adf_rl.c
index de1b214dba1f..41017af1be68 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_rl.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_rl.c
@@ -183,14 +183,14 @@ static enum adf_cfg_service_type srv_to_cfg_svc_type(enum adf_base_services rl_s
}

/**
- * get_sla_arr_of_type() - Returns a pointer to SLA type specific array
+ * adf_rl_get_sla_arr_of_type() - Returns a pointer to SLA type specific array
* @rl_data: pointer to ratelimiting data
* @type: SLA type
* @sla_arr: pointer to variable where requested pointer will be stored
*
* Return: Max number of elements allowed for the returned array
*/
-static u32 get_sla_arr_of_type(struct adf_rl *rl_data, enum rl_node_type type,
+u32 adf_rl_get_sla_arr_of_type(struct adf_rl *rl_data, enum rl_node_type type,
struct rl_sla ***sla_arr)
{
switch (type) {
@@ -778,7 +778,7 @@ static void clear_sla(struct adf_rl *rl_data, struct rl_sla *sla)
rp_in_use[sla->ring_pairs_ids[i]] = false;

update_budget(sla, old_cir, true);
- get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
+ adf_rl_get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
assign_node_to_parent(rl_data->accel_dev, sla, true);
adf_rl_send_admin_delete_msg(rl_data->accel_dev, node_id, sla->type);
mark_rps_usage(sla, rl_data->rp_in_use, false);
@@ -857,7 +857,7 @@ static int add_update_sla(struct adf_accel_dev *accel_dev,

if (!is_update) {
mark_rps_usage(sla, rl_data->rp_in_use, true);
- get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
+ adf_rl_get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
sla_type_arr[sla->node_id] = sla;
rl_data->sla[sla->sla_id] = sla;
}
@@ -1047,7 +1047,7 @@ void adf_rl_remove_sla_all(struct adf_accel_dev *accel_dev, bool incl_default)

/* Unregister and remove all SLAs */
for (j = RL_LEAF; j >= end_type; j--) {
- max_id = get_sla_arr_of_type(rl_data, j, &sla_type_arr);
+ max_id = adf_rl_get_sla_arr_of_type(rl_data, j, &sla_type_arr);

for (i = 0; i < max_id; i++) {
if (!sla_type_arr[i])
diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.h b/drivers/crypto/intel/qat/qat_common/adf_rl.h
index 269c6656fb90..bfe750ea0e83 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_rl.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_rl.h
@@ -151,6 +151,8 @@ struct rl_sla {
u16 ring_pairs_cnt;
};

+u32 adf_rl_get_sla_arr_of_type(struct adf_rl *rl_data, enum rl_node_type type,
+ struct rl_sla ***sla_arr);
int adf_rl_add_sla(struct adf_accel_dev *accel_dev,
struct adf_rl_sla_input_data *sla_in);
int adf_rl_update_sla(struct adf_accel_dev *accel_dev,
--
2.18.2


2024-02-01 15:43:18

by Zeng, Xin


From: Siming Wan <[email protected]>

Add logic to save, restore, quiesce and drain a ring bank for QAT GEN4
devices.
This allows the state of a Virtual Function (VF) to be saved and restored,
and will be used to implement VM live migration.
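
As a rough sketch of how these flows are expected to compose when saving a
bank (the exact sequencing, including stopping the VF, lives in the migration
logic added later in this series; the timeout value below is a placeholder
and error handling is trimmed):

	/* Illustrative only: quiesce, drain and snapshot one bank */
	static int save_one_bank(struct adf_accel_dev *accel_dev, u32 bank,
				 struct bank_state *state)
	{
		int ret;

		/* Let a pending coalesced interrupt fire before stopping */
		ret = adf_gen4_bank_quiesce_coal_timer(accel_dev, bank, 100);
		if (ret)
			return ret;

		/* Drain in-flight requests from the ring pair */
		ret = adf_gen4_bank_drain_start(accel_dev, bank,
						ADF_RPRESET_POLL_TIMEOUT_US);
		if (ret)
			return ret;
		adf_gen4_bank_drain_finish(accel_dev, bank);

		/* Snapshot the bank CSRs into the bank_state structure */
		return adf_gen4_bank_state_save(accel_dev, bank, state);
	}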

Signed-off-by: Siming Wan <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
Reviewed-by: Xin Zeng <[email protected]>
Signed-off-by: Xin Zeng <[email protected]>
---
.../intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 2 +
.../intel/qat/qat_common/adf_accel_devices.h | 39 +++
.../intel/qat/qat_common/adf_gen4_hw_data.c | 281 ++++++++++++++++++
.../intel/qat/qat_common/adf_gen4_hw_data.h | 19 ++
4 files changed, 341 insertions(+)

diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
index 054e48d194cd..a04488ada49b 100644
--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
@@ -487,6 +487,8 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
hw_data->get_ring_to_svc_map = get_ring_to_svc_map;
hw_data->disable_iov = adf_disable_sriov;
hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
+ hw_data->bank_state_save = adf_gen4_bank_state_save;
+ hw_data->bank_state_restore = adf_gen4_bank_state_restore;
hw_data->enable_pm = adf_gen4_enable_pm;
hw_data->handle_pm_interrupt = adf_gen4_handle_pm_interrupt;
hw_data->dev_config = adf_gen4_dev_config;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
index 5cc94bea176a..bdb026155f64 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
@@ -140,6 +140,41 @@ struct admin_info {
u32 mailbox_offset;
};

+struct ring_config {
+ u32 config;
+ u64 base;
+ u32 head;
+ u32 tail;
+};
+
+struct bank_state {
+ u32 reservd0;
+ u32 reservd1;
+ u32 num_rings;
+ u32 ringstat0;
+ u32 ringstat1;
+ u32 ringuostat;
+ u32 ringestat;
+ u32 ringnestat;
+ u32 ringnfstat;
+ u32 ringfstat;
+ u32 ringcstat0;
+ u32 ringcstat1;
+ u32 ringcstat2;
+ u32 ringcstat3;
+ u32 iaintflagen;
+ u32 iaintflagreg;
+ u32 iaintflagsrcsel0;
+ u32 iaintflagsrcsel1;
+ u32 iaintcolen;
+ u32 iaintcolctl;
+ u32 iaintflagandcolen;
+ u32 ringexpstat;
+ u32 ringexpintenable;
+ u32 ringsrvarben;
+ struct ring_config rings[ADF_ETR_MAX_RINGS_PER_BANK];
+};
+
struct adf_hw_csr_ops {
u64 (*build_csr_ring_base_addr)(dma_addr_t addr, u32 size);
u32 (*read_csr_ring_head)(void __iomem *csr_base_addr, u32 bank,
@@ -271,6 +306,10 @@ struct adf_hw_device_data {
void (*enable_ints)(struct adf_accel_dev *accel_dev);
void (*set_ssm_wdtimer)(struct adf_accel_dev *accel_dev);
int (*ring_pair_reset)(struct adf_accel_dev *accel_dev, u32 bank_nr);
+ int (*bank_state_save)(struct adf_accel_dev *accel_dev, u32 bank_number,
+ struct bank_state *state);
+ int (*bank_state_restore)(struct adf_accel_dev *accel_dev,
+ u32 bank_number, struct bank_state *state);
void (*reset_device)(struct adf_accel_dev *accel_dev);
void (*set_msix_rttable)(struct adf_accel_dev *accel_dev);
const char *(*uof_get_name)(struct adf_accel_dev *accel_dev, u32 obj_num);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
index 2140ac249e9d..dc2e30cc8e98 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
/* Copyright(c) 2020 Intel Corporation */
#include <linux/iopoll.h>
+#include <asm/div64.h>
#include "adf_accel_devices.h"
#include "adf_cfg_services.h"
#include "adf_common_drv.h"
@@ -334,3 +335,283 @@ int adf_gen4_init_thd2arb_map(struct adf_accel_dev *accel_dev)
return 0;
}
EXPORT_SYMBOL_GPL(adf_gen4_init_thd2arb_map);
+
+/*
+ * adf_gen4_bank_quiesce_coal_timer() - quiesce bank coalesced interrupt timer
+ * @accel_dev: Pointer to the device structure
+ * @bank_idx: Offset to the bank within this device
+ * @timeout_ms: Timeout in milliseconds for the operation
+ *
+ * This function tries to quiesce the coalesced interrupt timer of a bank if
+ * it has been enabled and triggered.
+ *
+ * Returns 0 on success, error code otherwise
+ *
+ */
+int adf_gen4_bank_quiesce_coal_timer(struct adf_accel_dev *accel_dev,
+ u32 bank_idx, int timeout_ms)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
+ void __iomem *csr_misc = adf_get_pmisc_base(accel_dev);
+ void __iomem *csr_etr = adf_get_etr_base(accel_dev);
+ u32 int_col_ctl, int_col_mask, int_col_en;
+ u32 e_stat, intsrc;
+ u64 wait_us;
+ int ret;
+
+ if (timeout_ms < 0)
+ return -EINVAL;
+
+ int_col_ctl = csr_ops->read_csr_int_col_ctl(csr_etr, bank_idx);
+ int_col_mask = csr_ops->get_int_col_ctl_enable_mask();
+ if (!(int_col_ctl & int_col_mask))
+ return 0;
+
+ int_col_en = csr_ops->read_csr_int_col_en(csr_etr, bank_idx);
+ int_col_en &= BIT(ADF_WQM_CSR_RP_IDX_RX);
+
+ e_stat = csr_ops->read_csr_e_stat(csr_etr, bank_idx);
+ if (!(~e_stat & int_col_en))
+ return 0;
+
+ wait_us = 2 * ((int_col_ctl & ~int_col_mask) << 8) * USEC_PER_SEC;
+ do_div(wait_us, hw_data->clock_frequency);
+ wait_us = min(wait_us, (u64)timeout_ms * USEC_PER_MSEC);
+ dev_dbg(&GET_DEV(accel_dev),
+ "wait for bank %d - coalesced timer expires in %llu us (max=%u ms estat=0x%x intcolen=0x%x)\n",
+ bank_idx, wait_us, timeout_ms, e_stat, int_col_en);
+
+ ret = read_poll_timeout(ADF_CSR_RD, intsrc, intsrc,
+ ADF_COALESCED_POLL_DELAY_US, wait_us, true,
+ csr_misc, ADF_WQM_CSR_RPINTSOU(bank_idx));
+ if (ret)
+ dev_warn(&GET_DEV(accel_dev),
+ "coalesced timer for bank %d expired (%llu us)\n",
+ bank_idx, wait_us);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(adf_gen4_bank_quiesce_coal_timer);
+
+static int drain_bank(void __iomem *csr, u32 bank_number, int timeout_us)
+{
+ u32 status;
+
+ ADF_CSR_WR(csr, ADF_WQM_CSR_RPRESETCTL(bank_number),
+ ADF_WQM_CSR_RPRESETCTL_DRAIN);
+
+ return read_poll_timeout(ADF_CSR_RD, status,
+ status & ADF_WQM_CSR_RPRESETSTS_STATUS,
+ ADF_RPRESET_POLL_DELAY_US, timeout_us, true,
+ csr, ADF_WQM_CSR_RPRESETSTS(bank_number));
+}
+
+void adf_gen4_bank_drain_finish(struct adf_accel_dev *accel_dev,
+ u32 bank_number)
+{
+ void __iomem *csr = adf_get_etr_base(accel_dev);
+
+ ADF_CSR_WR(csr, ADF_WQM_CSR_RPRESETSTS(bank_number),
+ ADF_WQM_CSR_RPRESETSTS_STATUS);
+}
+
+int adf_gen4_bank_drain_start(struct adf_accel_dev *accel_dev,
+ u32 bank_number, int timeout_us)
+{
+ void __iomem *csr = adf_get_etr_base(accel_dev);
+ int ret;
+
+ dev_dbg(&GET_DEV(accel_dev), "Drain bank %d\n", bank_number);
+
+ ret = drain_bank(csr, bank_number, timeout_us);
+ if (ret)
+ dev_err(&GET_DEV(accel_dev), "Bank drain failed (timeout)\n");
+ else
+ dev_dbg(&GET_DEV(accel_dev), "Bank drain successful\n");
+
+ return ret;
+}
+
+static void bank_state_save(struct adf_hw_csr_ops *ops, void __iomem *base,
+ u32 bank, struct bank_state *state, u32 num_rings)
+{
+ u32 i;
+
+ state->ringstat0 = ops->read_csr_stat(base, bank);
+ state->ringuostat = ops->read_csr_uo_stat(base, bank);
+ state->ringestat = ops->read_csr_e_stat(base, bank);
+ state->ringnestat = ops->read_csr_ne_stat(base, bank);
+ state->ringnfstat = ops->read_csr_nf_stat(base, bank);
+ state->ringfstat = ops->read_csr_f_stat(base, bank);
+ state->ringcstat0 = ops->read_csr_c_stat(base, bank);
+ state->iaintflagen = ops->read_csr_int_en(base, bank);
+ state->iaintflagreg = ops->read_csr_int_flag(base, bank);
+ state->iaintflagsrcsel0 = ops->read_csr_int_srcsel(base, bank);
+ state->iaintcolen = ops->read_csr_int_col_en(base, bank);
+ state->iaintcolctl = ops->read_csr_int_col_ctl(base, bank);
+ state->iaintflagandcolen = ops->read_csr_int_flag_and_col(base, bank);
+ state->ringexpstat = ops->read_csr_exp_stat(base, bank);
+ state->ringexpintenable = ops->read_csr_exp_int_en(base, bank);
+ state->ringsrvarben = ops->read_csr_ring_srv_arb_en(base, bank);
+ state->num_rings = num_rings;
+
+ for (i = 0; i < num_rings; i++) {
+ state->rings[i].head = ops->read_csr_ring_head(base, bank, i);
+ state->rings[i].tail = ops->read_csr_ring_tail(base, bank, i);
+ state->rings[i].config = ops->read_csr_ring_config(base, bank, i);
+ state->rings[i].base = ops->read_csr_ring_base(base, bank, i);
+ }
+}
+
+#define CHECK_STAT(op, expect_val, name, args...) \
+({ \
+ u32 __expect_val = (expect_val); \
+ u32 actual_val = op(args); \
+ if (__expect_val != actual_val) \
+ pr_err("QAT: Fail to restore %s register. Expected 0x%x, but actual is 0x%x\n", \
+ name, __expect_val, actual_val); \
+ (__expect_val == actual_val) ? 0 : -EINVAL; \
+})
+
+static int bank_state_restore(struct adf_hw_csr_ops *ops, void __iomem *base,
+ u32 bank, struct bank_state *state, u32 num_rings,
+ int tx_rx_gap)
+{
+ u32 val, tmp_val, i;
+ int ret;
+
+ for (i = 0; i < num_rings; i++)
+ ops->write_csr_ring_base(base, bank, i, state->rings[i].base);
+
+ for (i = 0; i < num_rings; i++)
+ ops->write_csr_ring_config(base, bank, i, state->rings[i].config);
+
+ for (i = 0; i < num_rings / 2; i++) {
+ int tx = i * (tx_rx_gap + 1);
+ int rx = tx + tx_rx_gap;
+
+ ops->write_csr_ring_head(base, bank, tx, state->rings[tx].head);
+ ops->write_csr_ring_tail(base, bank, tx, state->rings[tx].tail);
+
+ /*
+ * The TX ring head needs to be updated again to make sure that
+ * the HW will not consider the ring as full when it is empty
+ * and the correct state flags are set to match the recovered state.
+ */
+ if (state->ringestat & BIT(tx)) {
+ val = ops->read_csr_int_srcsel(base, bank);
+ val |= ADF_RP_INT_SRC_SEL_F_RISE_MASK;
+ ops->write_csr_int_srcsel_w_val(base, bank, val);
+ ops->write_csr_ring_head(base, bank, tx, state->rings[tx].head);
+ }
+
+ ops->write_csr_ring_tail(base, bank, rx, state->rings[rx].tail);
+ val = ops->read_csr_int_srcsel(base, bank);
+ val |= ADF_RP_INT_SRC_SEL_F_RISE_MASK << ADF_RP_INT_SRC_SEL_RANGE_WIDTH;
+ ops->write_csr_int_srcsel_w_val(base, bank, val);
+
+ ops->write_csr_ring_head(base, bank, rx, state->rings[rx].head);
+ val = ops->read_csr_int_srcsel(base, bank);
+ val |= ADF_RP_INT_SRC_SEL_F_FALL_MASK << ADF_RP_INT_SRC_SEL_RANGE_WIDTH;
+ ops->write_csr_int_srcsel_w_val(base, bank, val);
+
+ /*
+ * The RX ring tail needs to be updated again to make sure that
+ * the HW will not consider the ring as empty when it is full
+ * and the correct state flags are set to match the recovered state.
+ */
+ if (state->ringfstat & BIT(rx))
+ ops->write_csr_ring_tail(base, bank, rx, state->rings[rx].tail);
+ }
+
+ ops->write_csr_int_flag_and_col(base, bank, state->iaintflagandcolen);
+ ops->write_csr_int_en(base, bank, state->iaintflagen);
+ ops->write_csr_int_col_en(base, bank, state->iaintcolen);
+ ops->write_csr_int_srcsel_w_val(base, bank, state->iaintflagsrcsel0);
+ ops->write_csr_exp_int_en(base, bank, state->ringexpintenable);
+ ops->write_csr_int_col_ctl(base, bank, state->iaintcolctl);
+ ops->write_csr_ring_srv_arb_en(base, bank, state->ringsrvarben);
+
+ /* Check that all ring statuses match the saved state. */
+ ret = CHECK_STAT(ops->read_csr_stat, state->ringstat0, "ringstat",
+ base, bank);
+ if (ret)
+ return ret;
+
+ ret = CHECK_STAT(ops->read_csr_e_stat, state->ringestat, "ringestat",
+ base, bank);
+ if (ret)
+ return ret;
+
+ ret = CHECK_STAT(ops->read_csr_ne_stat, state->ringnestat, "ringnestat",
+ base, bank);
+ if (ret)
+ return ret;
+
+ ret = CHECK_STAT(ops->read_csr_nf_stat, state->ringnfstat, "ringnfstat",
+ base, bank);
+ if (ret)
+ return ret;
+
+ ret = CHECK_STAT(ops->read_csr_f_stat, state->ringfstat, "ringfstat",
+ base, bank);
+ if (ret)
+ return ret;
+
+ ret = CHECK_STAT(ops->read_csr_c_stat, state->ringcstat0, "ringcstat",
+ base, bank);
+ if (ret)
+ return ret;
+
+ tmp_val = ops->read_csr_exp_stat(base, bank);
+ val = state->ringexpstat;
+ if (tmp_val && !val) {
+ pr_err("QAT: Bank was restored with exception: 0x%x\n", val);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int adf_gen4_bank_state_save(struct adf_accel_dev *accel_dev, u32 bank_number,
+ struct bank_state *state)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
+ void __iomem *csr_base = adf_get_etr_base(accel_dev);
+
+ if (bank_number >= hw_data->num_banks || !state)
+ return -EINVAL;
+
+ dev_dbg(&GET_DEV(accel_dev), "Saving state of bank %d\n", bank_number);
+
+ bank_state_save(csr_ops, csr_base, bank_number, state,
+ hw_data->num_rings_per_bank);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(adf_gen4_bank_state_save);
+
+int adf_gen4_bank_state_restore(struct adf_accel_dev *accel_dev, u32 bank_number,
+ struct bank_state *state)
+{
+ struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+ struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
+ void __iomem *csr_base = adf_get_etr_base(accel_dev);
+ int ret;
+
+ if (bank_number >= hw_data->num_banks || !state)
+ return -EINVAL;
+
+ dev_dbg(&GET_DEV(accel_dev), "Restoring state of bank %d\n", bank_number);
+
+ ret = bank_state_restore(csr_ops, csr_base, bank_number, state,
+ hw_data->num_rings_per_bank, hw_data->tx_rx_gap);
+ if (ret)
+ dev_err(&GET_DEV(accel_dev),
+ "Unable to restore state of bank %d\n", bank_number);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(adf_gen4_bank_state_restore);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
index f94ccf64e55d..788fe9d345c0 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
@@ -77,10 +77,19 @@
#define ADF_RPRESET_POLL_TIMEOUT_US (5 * USEC_PER_SEC)
#define ADF_RPRESET_POLL_DELAY_US 20
#define ADF_WQM_CSR_RPRESETCTL_RESET BIT(0)
+#define ADF_WQM_CSR_RPRESETCTL_DRAIN BIT(2)
#define ADF_WQM_CSR_RPRESETCTL(bank) (0x6000 + ((bank) << 3))
#define ADF_WQM_CSR_RPRESETSTS_STATUS BIT(0)
#define ADF_WQM_CSR_RPRESETSTS(bank) (ADF_WQM_CSR_RPRESETCTL(bank) + 4)

+/* Ring interrupt */
+#define ADF_RP_INT_SRC_SEL_F_RISE_MASK BIT(2)
+#define ADF_RP_INT_SRC_SEL_F_FALL_MASK GENMASK(2, 0)
+#define ADF_RP_INT_SRC_SEL_RANGE_WIDTH 4
+#define ADF_COALESCED_POLL_DELAY_US 1000
+#define ADF_WQM_CSR_RPINTSOU(bank) (0x200000 + ((bank) << 12))
+#define ADF_WQM_CSR_RP_IDX_RX 1
+
/* Error source registers */
#define ADF_GEN4_ERRSOU0 (0x41A200)
#define ADF_GEN4_ERRSOU1 (0x41A204)
@@ -149,5 +158,15 @@ int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number);
void adf_gen4_set_msix_default_rttable(struct adf_accel_dev *accel_dev);
void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev);
int adf_gen4_init_thd2arb_map(struct adf_accel_dev *accel_dev);
+int adf_gen4_bank_quiesce_coal_timer(struct adf_accel_dev *accel_dev,
+ u32 bank_idx, int timeout_ms);
+int adf_gen4_bank_drain_start(struct adf_accel_dev *accel_dev,
+ u32 bank_number, int timeout_us);
+void adf_gen4_bank_drain_finish(struct adf_accel_dev *accel_dev,
+ u32 bank_number);
+int adf_gen4_bank_state_save(struct adf_accel_dev *accel_dev, u32 bank_number,
+ struct bank_state *state);
+int adf_gen4_bank_state_restore(struct adf_accel_dev *accel_dev,
+ u32 bank_number, struct bank_state *state);

#endif
--
2.18.2


2024-02-01 15:43:39

by Zeng, Xin


Add a vfio_pci driver for Intel QAT VF devices.

This driver uses vfio_pci_core to register with the VFIO subsystem. It
acts as a vfio agent and interacts with the QAT PF driver to implement
VF live migration.
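
For context, the file returned when the device is moved to STOP_COPY is a
stream: user space reads it until EOF on the source and writes the same bytes
into the file returned in RESUMING on the destination. A minimal user-space
sketch of that data flow (in practice a VMM streams the data over its
migration channel rather than copying between descriptors directly):

	#include <unistd.h>

	/* save_fd: fd returned on STOP -> STOP_COPY on the source.
	 * resume_fd: fd returned on STOP -> RESUMING on the destination.
	 */
	static int copy_mig_state(int save_fd, int resume_fd)
	{
		char buf[4096];
		ssize_t n;

		while ((n = read(save_fd, buf, sizeof(buf))) > 0)
			if (write(resume_fd, buf, n) != n)
				return -1;

		return n < 0 ? -1 : 0;
	}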

Co-developed-by: Yahui Cao <[email protected]>
Signed-off-by: Yahui Cao <[email protected]>
Signed-off-by: Xin Zeng <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
---
MAINTAINERS | 8 +
drivers/vfio/pci/Kconfig | 2 +
drivers/vfio/pci/Makefile | 2 +
drivers/vfio/pci/intel/qat/Kconfig | 13 +
drivers/vfio/pci/intel/qat/Makefile | 4 +
drivers/vfio/pci/intel/qat/main.c | 572 ++++++++++++++++++++++++++++
6 files changed, 601 insertions(+)
create mode 100644 drivers/vfio/pci/intel/qat/Kconfig
create mode 100644 drivers/vfio/pci/intel/qat/Makefile
create mode 100644 drivers/vfio/pci/intel/qat/main.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 8d1052fa6a69..c1d3e4cb3892 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -23095,6 +23095,14 @@ S: Maintained
F: Documentation/networking/device_drivers/ethernet/amd/pds_vfio_pci.rst
F: drivers/vfio/pci/pds/

+VFIO QAT PCI DRIVER
+M: Xin Zeng <[email protected]>
+M: Giovanni Cabiddu <[email protected]>
+L: [email protected]
+L: [email protected]
+S: Supported
+F: drivers/vfio/pci/intel/qat/
+
VFIO PLATFORM DRIVER
M: Eric Auger <[email protected]>
L: [email protected]
diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
index 18c397df566d..329d25c53274 100644
--- a/drivers/vfio/pci/Kconfig
+++ b/drivers/vfio/pci/Kconfig
@@ -67,4 +67,6 @@ source "drivers/vfio/pci/pds/Kconfig"

source "drivers/vfio/pci/virtio/Kconfig"

+source "drivers/vfio/pci/intel/qat/Kconfig"
+
endmenu
diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index 046139a4eca5..a87b6b43ce1c 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -15,3 +15,5 @@ obj-$(CONFIG_HISI_ACC_VFIO_PCI) += hisilicon/
obj-$(CONFIG_PDS_VFIO_PCI) += pds/

obj-$(CONFIG_VIRTIO_VFIO_PCI) += virtio/
+
+obj-$(CONFIG_QAT_VFIO_PCI) += intel/qat/
diff --git a/drivers/vfio/pci/intel/qat/Kconfig b/drivers/vfio/pci/intel/qat/Kconfig
new file mode 100644
index 000000000000..71b28ac0bf6a
--- /dev/null
+++ b/drivers/vfio/pci/intel/qat/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config QAT_VFIO_PCI
+ tristate "VFIO support for QAT VF PCI devices"
+ select VFIO_PCI_CORE
+ depends on CRYPTO_DEV_QAT
+ depends on CRYPTO_DEV_QAT_4XXX
+ help
+ This provides migration support for Intel(R) QAT Virtual Function
+ using the VFIO framework.
+
+ To compile this as a module, choose M here: the module
+ will be called qat_vfio_pci. If you don't know what to do here,
+ say N.
diff --git a/drivers/vfio/pci/intel/qat/Makefile b/drivers/vfio/pci/intel/qat/Makefile
new file mode 100644
index 000000000000..9289ae4c51bf
--- /dev/null
+++ b/drivers/vfio/pci/intel/qat/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_QAT_VFIO_PCI) += qat_vfio_pci.o
+qat_vfio_pci-y := main.o
+
diff --git a/drivers/vfio/pci/intel/qat/main.c b/drivers/vfio/pci/intel/qat/main.c
new file mode 100644
index 000000000000..85d0ed701397
--- /dev/null
+++ b/drivers/vfio/pci/intel/qat/main.c
@@ -0,0 +1,572 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2024 Intel Corporation */
+
+#include <linux/anon_inodes.h>
+#include <linux/container_of.h>
+#include <linux/device.h>
+#include <linux/file.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/sizes.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <linux/vfio_pci_core.h>
+#include <linux/qat/qat_mig_dev.h>
+
+struct qat_vf_migration_file {
+ struct file *filp;
+ /* protects migration region context */
+ struct mutex lock;
+ bool disabled;
+ struct qat_mig_dev *mdev;
+};
+
+struct qat_vf_core_device {
+ struct vfio_pci_core_device core_device;
+ struct qat_mig_dev *mdev;
+ /* protects migration state */
+ struct mutex state_mutex;
+ enum vfio_device_mig_state mig_state;
+ /* protects reset down flow */
+ spinlock_t reset_lock;
+ bool deferred_reset;
+ struct qat_vf_migration_file *resuming_migf;
+ struct qat_vf_migration_file *saving_migf;
+};
+
+static int qat_vf_pci_open_device(struct vfio_device *core_vdev)
+{
+ struct qat_vf_core_device *qat_vdev =
+ container_of(core_vdev, struct qat_vf_core_device,
+ core_device.vdev);
+ struct vfio_pci_core_device *vdev = &qat_vdev->core_device;
+ int ret;
+
+ ret = vfio_pci_core_enable(vdev);
+ if (ret)
+ return ret;
+
+ ret = qat_vdev->mdev->ops->open(qat_vdev->mdev);
+ if (ret) {
+ vfio_pci_core_disable(vdev);
+ return ret;
+ }
+ qat_vdev->mig_state = VFIO_DEVICE_STATE_RUNNING;
+
+ vfio_pci_core_finish_enable(vdev);
+
+ return 0;
+}
+
+static void qat_vf_disable_fd(struct qat_vf_migration_file *migf)
+{
+ mutex_lock(&migf->lock);
+ migf->disabled = true;
+ migf->filp->f_pos = 0;
+ mutex_unlock(&migf->lock);
+}
+
+static void qat_vf_disable_fds(struct qat_vf_core_device *qat_vdev)
+{
+ if (qat_vdev->resuming_migf) {
+ qat_vf_disable_fd(qat_vdev->resuming_migf);
+ fput(qat_vdev->resuming_migf->filp);
+ qat_vdev->resuming_migf = NULL;
+ }
+
+ if (qat_vdev->saving_migf) {
+ qat_vf_disable_fd(qat_vdev->saving_migf);
+ fput(qat_vdev->saving_migf->filp);
+ qat_vdev->saving_migf = NULL;
+ }
+}
+
+static void qat_vf_pci_close_device(struct vfio_device *core_vdev)
+{
+ struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
+ struct qat_vf_core_device, core_device.vdev);
+
+ qat_vdev->mdev->ops->close(qat_vdev->mdev);
+ qat_vf_disable_fds(qat_vdev);
+ vfio_pci_core_close_device(core_vdev);
+}
+
+static ssize_t qat_vf_save_read(struct file *filp, char __user *buf,
+ size_t len, loff_t *pos)
+{
+ struct qat_vf_migration_file *migf = filp->private_data;
+ ssize_t done = 0;
+ loff_t *offs;
+ int ret;
+
+ if (pos)
+ return -ESPIPE;
+ offs = &filp->f_pos;
+
+ mutex_lock(&migf->lock);
+ if (*offs > migf->mdev->state_size || *offs < 0) {
+ done = -EINVAL;
+ goto out_unlock;
+ }
+
+ if (migf->disabled) {
+ done = -ENODEV;
+ goto out_unlock;
+ }
+
+ len = min_t(size_t, migf->mdev->state_size - *offs, len);
+ if (len) {
+ ret = copy_to_user(buf, migf->mdev->state + *offs, len);
+ if (ret) {
+ done = -EFAULT;
+ goto out_unlock;
+ }
+ *offs += len;
+ done = len;
+ }
+
+out_unlock:
+ mutex_unlock(&migf->lock);
+ return done;
+}
+
+static int qat_vf_release_file(struct inode *inode, struct file *filp)
+{
+ struct qat_vf_migration_file *migf = filp->private_data;
+
+ qat_vf_disable_fd(migf);
+ mutex_destroy(&migf->lock);
+ kfree(migf);
+
+ return 0;
+}
+
+static const struct file_operations qat_vf_save_fops = {
+ .owner = THIS_MODULE,
+ .read = qat_vf_save_read,
+ .release = qat_vf_release_file,
+ .llseek = no_llseek,
+};
+
+static struct qat_vf_migration_file *
+qat_vf_save_device_data(struct qat_vf_core_device *qat_vdev)
+{
+ struct qat_vf_migration_file *migf;
+ int ret;
+
+ migf = kzalloc(sizeof(*migf), GFP_KERNEL);
+ if (!migf)
+ return ERR_PTR(-ENOMEM);
+
+ migf->filp = anon_inode_getfile("qat_vf_mig", &qat_vf_save_fops, migf, O_RDONLY);
+ ret = PTR_ERR_OR_ZERO(migf->filp);
+ if (ret) {
+ kfree(migf);
+ return ERR_PTR(ret);
+ }
+
+ stream_open(migf->filp->f_inode, migf->filp);
+ mutex_init(&migf->lock);
+
+ ret = qat_vdev->mdev->ops->save_state(qat_vdev->mdev);
+ if (ret) {
+ fput(migf->filp);
+ kfree(migf);
+ return ERR_PTR(ret);
+ }
+
+ migf->mdev = qat_vdev->mdev;
+
+ return migf;
+}
+
+static ssize_t qat_vf_resume_write(struct file *filp, const char __user *buf,
+ size_t len, loff_t *pos)
+{
+ struct qat_vf_migration_file *migf = filp->private_data;
+ loff_t end, *offs;
+ ssize_t done = 0;
+ int ret;
+
+ if (pos)
+ return -ESPIPE;
+ offs = &filp->f_pos;
+
+ if (*offs < 0 ||
+ check_add_overflow((loff_t)len, *offs, &end))
+ return -EOVERFLOW;
+
+ if (end > migf->mdev->state_size)
+ return -ENOMEM;
+
+ mutex_lock(&migf->lock);
+ if (migf->disabled) {
+ done = -ENODEV;
+ goto out_unlock;
+ }
+
+ ret = copy_from_user(migf->mdev->state + *offs, buf, len);
+ if (ret) {
+ done = -EFAULT;
+ goto out_unlock;
+ }
+ *offs += len;
+ done = len;
+
+out_unlock:
+ mutex_unlock(&migf->lock);
+ return done;
+}
+
+static const struct file_operations qat_vf_resume_fops = {
+ .owner = THIS_MODULE,
+ .write = qat_vf_resume_write,
+ .release = qat_vf_release_file,
+ .llseek = no_llseek,
+};
+
+static struct qat_vf_migration_file *
+qat_vf_resume_device_data(struct qat_vf_core_device *qat_vdev)
+{
+ struct qat_vf_migration_file *migf;
+ int ret;
+
+ migf = kzalloc(sizeof(*migf), GFP_KERNEL);
+ if (!migf)
+ return ERR_PTR(-ENOMEM);
+
+ migf->filp = anon_inode_getfile("qat_vf_mig", &qat_vf_resume_fops, migf, O_WRONLY);
+ ret = PTR_ERR_OR_ZERO(migf->filp);
+ if (ret) {
+ kfree(migf);
+ return ERR_PTR(ret);
+ }
+
+ migf->mdev = qat_vdev->mdev;
+ stream_open(migf->filp->f_inode, migf->filp);
+ mutex_init(&migf->lock);
+
+ return migf;
+}
+
+static int qat_vf_load_device_data(struct qat_vf_core_device *qat_vdev)
+{
+ return qat_vdev->mdev->ops->load_state(qat_vdev->mdev);
+}
+
+static struct file *qat_vf_pci_step_device_state(struct qat_vf_core_device *qat_vdev, u32 new)
+{
+ u32 cur = qat_vdev->mig_state;
+ int ret;
+
+ if ((cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_RUNNING_P2P)) {
+ ret = qat_vdev->mdev->ops->suspend(qat_vdev->mdev);
+ if (ret)
+ return ERR_PTR(ret);
+ return NULL;
+ }
+
+ if ((cur == VFIO_DEVICE_STATE_RUNNING_P2P && new == VFIO_DEVICE_STATE_STOP) ||
+ (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_RUNNING_P2P))
+ return NULL;
+
+ if (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_STOP_COPY) {
+ struct qat_vf_migration_file *migf;
+
+ migf = qat_vf_save_device_data(qat_vdev);
+ if (IS_ERR(migf))
+ return ERR_CAST(migf);
+ get_file(migf->filp);
+ qat_vdev->saving_migf = migf;
+ return migf->filp;
+ }
+
+ if (cur == VFIO_DEVICE_STATE_STOP_COPY && new == VFIO_DEVICE_STATE_STOP) {
+ qat_vf_disable_fds(qat_vdev);
+ return NULL;
+ }
+
+ if (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_RESUMING) {
+ struct qat_vf_migration_file *migf;
+
+ migf = qat_vf_resume_device_data(qat_vdev);
+ if (IS_ERR(migf))
+ return ERR_CAST(migf);
+ get_file(migf->filp);
+ qat_vdev->resuming_migf = migf;
+ return migf->filp;
+ }
+
+ if (cur == VFIO_DEVICE_STATE_RESUMING && new == VFIO_DEVICE_STATE_STOP) {
+ ret = qat_vf_load_device_data(qat_vdev);
+ if (ret)
+ return ERR_PTR(ret);
+
+ qat_vf_disable_fds(qat_vdev);
+ return NULL;
+ }
+
+ if (cur == VFIO_DEVICE_STATE_RUNNING_P2P && new == VFIO_DEVICE_STATE_RUNNING) {
+ qat_vdev->mdev->ops->resume(qat_vdev->mdev);
+ return NULL;
+ }
+
+ /* vfio_mig_get_next_state() does not use arcs other than the above */
+ WARN_ON(true);
+ return ERR_PTR(-EINVAL);
+}
+
+static void qat_vf_state_mutex_unlock(struct qat_vf_core_device *qat_vdev)
+{
+again:
+ spin_lock(&qat_vdev->reset_lock);
+ if (qat_vdev->deferred_reset) {
+ qat_vdev->deferred_reset = false;
+ spin_unlock(&qat_vdev->reset_lock);
+ qat_vdev->mig_state = VFIO_DEVICE_STATE_RUNNING;
+ qat_vf_disable_fds(qat_vdev);
+ goto again;
+ }
+ mutex_unlock(&qat_vdev->state_mutex);
+ spin_unlock(&qat_vdev->reset_lock);
+}
+
+static struct file *qat_vf_pci_set_device_state(struct vfio_device *vdev,
+ enum vfio_device_mig_state new_state)
+{
+ struct qat_vf_core_device *qat_vdev = container_of(vdev,
+ struct qat_vf_core_device, core_device.vdev);
+ enum vfio_device_mig_state next_state;
+ struct file *res = NULL;
+ int ret;
+
+ mutex_lock(&qat_vdev->state_mutex);
+ while (new_state != qat_vdev->mig_state) {
+ ret = vfio_mig_get_next_state(vdev, qat_vdev->mig_state,
+ new_state, &next_state);
+ if (ret) {
+ res = ERR_PTR(ret);
+ break;
+ }
+ res = qat_vf_pci_step_device_state(qat_vdev, next_state);
+ if (IS_ERR(res))
+ break;
+ qat_vdev->mig_state = next_state;
+ if (WARN_ON(res && new_state != qat_vdev->mig_state)) {
+ fput(res);
+ res = ERR_PTR(-EINVAL);
+ break;
+ }
+ }
+ qat_vf_state_mutex_unlock(qat_vdev);
+
+ return res;
+}
+
+static int qat_vf_pci_get_device_state(struct vfio_device *vdev,
+ enum vfio_device_mig_state *curr_state)
+{
+ struct qat_vf_core_device *qat_vdev = container_of(vdev,
+ struct qat_vf_core_device, core_device.vdev);
+
+ mutex_lock(&qat_vdev->state_mutex);
+ *curr_state = qat_vdev->mig_state;
+ qat_vf_state_mutex_unlock(qat_vdev);
+
+ return 0;
+}
+
+static int qat_vf_pci_get_data_size(struct vfio_device *vdev,
+ unsigned long *stop_copy_length)
+{
+ struct qat_vf_core_device *qat_vdev = container_of(vdev,
+ struct qat_vf_core_device, core_device.vdev);
+
+ *stop_copy_length = qat_vdev->mdev->state_size;
+ return 0;
+}
+
+static const struct vfio_migration_ops qat_vf_pci_mig_ops = {
+ .migration_set_state = qat_vf_pci_set_device_state,
+ .migration_get_state = qat_vf_pci_get_device_state,
+ .migration_get_data_size = qat_vf_pci_get_data_size,
+};
+
+static void qat_vf_pci_release_dev(struct vfio_device *core_vdev)
+{
+ struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
+ struct qat_vf_core_device, core_device.vdev);
+
+ qat_vdev->mdev->ops->cleanup(qat_vdev->mdev);
+ qat_vfmig_destroy(qat_vdev->mdev);
+ mutex_destroy(&qat_vdev->state_mutex);
+ vfio_pci_core_release_dev(core_vdev);
+}
+
+static int qat_vf_pci_init_dev(struct vfio_device *core_vdev)
+{
+ struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
+ struct qat_vf_core_device, core_device.vdev);
+ struct qat_migdev_ops *ops;
+ struct qat_mig_dev *mdev;
+ struct pci_dev *parent;
+ int ret, vf_id;
+
+ core_vdev->migration_flags = VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P;
+ core_vdev->mig_ops = &qat_vf_pci_mig_ops;
+
+ ret = vfio_pci_core_init_dev(core_vdev);
+ if (ret)
+ return ret;
+
+ mutex_init(&qat_vdev->state_mutex);
+ spin_lock_init(&qat_vdev->reset_lock);
+
+ parent = qat_vdev->core_device.pdev->physfn;
+ vf_id = pci_iov_vf_id(qat_vdev->core_device.pdev);
+ if (!parent || vf_id < 0) {
+ ret = -ENODEV;
+ goto err_rel;
+ }
+
+ mdev = qat_vfmig_create(parent, vf_id);
+ if (IS_ERR(mdev)) {
+ ret = PTR_ERR(mdev);
+ goto err_rel;
+ }
+
+ ops = mdev->ops;
+ if (!ops || !ops->init || !ops->cleanup ||
+ !ops->open || !ops->close ||
+ !ops->save_state || !ops->load_state ||
+ !ops->suspend || !ops->resume) {
+ ret = -EIO;
+ dev_err(&parent->dev, "Incomplete device migration ops structure!");
+ goto err_destroy;
+ }
+ ret = ops->init(mdev);
+ if (ret)
+ goto err_destroy;
+
+ qat_vdev->mdev = mdev;
+
+ return 0;
+
+err_destroy:
+ qat_vfmig_destroy(mdev);
+err_rel:
+ vfio_pci_core_release_dev(core_vdev);
+ return ret;
+}
+
+static const struct vfio_device_ops qat_vf_pci_ops = {
+ .name = "qat-vf-vfio-pci",
+ .init = qat_vf_pci_init_dev,
+ .release = qat_vf_pci_release_dev,
+ .open_device = qat_vf_pci_open_device,
+ .close_device = qat_vf_pci_close_device,
+ .ioctl = vfio_pci_core_ioctl,
+ .read = vfio_pci_core_read,
+ .write = vfio_pci_core_write,
+ .mmap = vfio_pci_core_mmap,
+ .request = vfio_pci_core_request,
+ .match = vfio_pci_core_match,
+ .bind_iommufd = vfio_iommufd_physical_bind,
+ .unbind_iommufd = vfio_iommufd_physical_unbind,
+ .attach_ioas = vfio_iommufd_physical_attach_ioas,
+ .detach_ioas = vfio_iommufd_physical_detach_ioas,
+};
+
+static struct qat_vf_core_device *qat_vf_drvdata(struct pci_dev *pdev)
+{
+ struct vfio_pci_core_device *core_device = pci_get_drvdata(pdev);
+
+ return container_of(core_device, struct qat_vf_core_device, core_device);
+}
+
+static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
+{
+ struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
+
+ if (!qat_vdev->core_device.vdev.mig_ops)
+ return;
+
+ /*
+ * As the higher VFIO layers are holding locks across reset and using
+ * those same locks with the mm_lock we need to prevent ABBA deadlock
+ * with the state_mutex and mm_lock.
+ * In case the state_mutex was taken already we defer the cleanup work
+ * to the unlock flow of the other running context.
+ */
+ spin_lock(&qat_vdev->reset_lock);
+ qat_vdev->deferred_reset = true;
+ if (!mutex_trylock(&qat_vdev->state_mutex)) {
+ spin_unlock(&qat_vdev->reset_lock);
+ return;
+ }
+ spin_unlock(&qat_vdev->reset_lock);
+ qat_vf_state_mutex_unlock(qat_vdev);
+}
+
+static int
+qat_vf_vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct device *dev = &pdev->dev;
+ struct qat_vf_core_device *qat_vdev;
+ int ret;
+
+ qat_vdev = vfio_alloc_device(qat_vf_core_device, core_device.vdev, dev, &qat_vf_pci_ops);
+ if (IS_ERR(qat_vdev))
+ return PTR_ERR(qat_vdev);
+
+ pci_set_drvdata(pdev, &qat_vdev->core_device);
+ ret = vfio_pci_core_register_device(&qat_vdev->core_device);
+ if (ret)
+ goto out_put_device;
+
+ return 0;
+
+out_put_device:
+ vfio_put_device(&qat_vdev->core_device.vdev);
+ return ret;
+}
+
+static void qat_vf_vfio_pci_remove(struct pci_dev *pdev)
+{
+ struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
+
+ vfio_pci_core_unregister_device(&qat_vdev->core_device);
+ vfio_put_device(&qat_vdev->core_device.vdev);
+}
+
+static const struct pci_device_id qat_vf_vfio_pci_table[] = {
+ /* Intel QAT GEN4 4xxx VF device */
+ { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_INTEL, 0x4941) },
+ { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_INTEL, 0x4943) },
+ { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_INTEL, 0x4945) },
+ {}
+};
+MODULE_DEVICE_TABLE(pci, qat_vf_vfio_pci_table);
+
+static const struct pci_error_handlers qat_vf_err_handlers = {
+ .reset_done = qat_vf_pci_aer_reset_done,
+ .error_detected = vfio_pci_core_aer_err_detected,
+};
+
+static struct pci_driver qat_vf_vfio_pci_driver = {
+ .name = "qat_vfio_pci",
+ .id_table = qat_vf_vfio_pci_table,
+ .probe = qat_vf_vfio_pci_probe,
+ .remove = qat_vf_vfio_pci_remove,
+ .err_handler = &qat_vf_err_handlers,
+ .driver_managed_dma = true,
+};
+module_pci_driver(qat_vf_vfio_pci_driver)
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("QAT VFIO PCI - VFIO PCI driver with live migration support for Intel(R) QAT GEN4 device family");
+MODULE_IMPORT_NS(CRYPTO_QAT);
--
2.18.2


2024-02-01 15:52:37

by Zeng, Xin


From: Giovanni Cabiddu <[email protected]>

Add and use the new helper function adf_get_etr_base() which retrieves
the virtual address of the ETR (ring) BAR.

This will be used extensively when adding support for live migration.
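
A trivial, hypothetical use of the helper together with the per-generation
CSR ops (not part of this patch):

	struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
	void __iomem *csr = adf_get_etr_base(accel_dev);
	u32 e_stat;

	/* Read the empty-ring status of bank 0 through the ring (ETR) BAR */
	e_stat = csr_ops->read_csr_e_stat(csr, 0);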

Signed-off-by: Giovanni Cabiddu <[email protected]>
Reviewed-by: Xin Zeng <[email protected]>
Signed-off-by: Xin Zeng <[email protected]>
---
drivers/crypto/intel/qat/qat_common/adf_common_drv.h | 10 ++++++++++
drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c | 4 +---
drivers/crypto/intel/qat/qat_common/adf_transport.c | 4 +---
3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/intel/qat/qat_common/adf_common_drv.h b/drivers/crypto/intel/qat/qat_common/adf_common_drv.h
index f06188033a93..cf1164a27f6f 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_common_drv.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_common_drv.h
@@ -238,6 +238,16 @@ static inline void __iomem *adf_get_pmisc_base(struct adf_accel_dev *accel_dev)
return pmisc->virt_addr;
}

+static inline void __iomem *adf_get_etr_base(struct adf_accel_dev *accel_dev)
+{
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_bar *etr;
+
+ etr = &GET_BARS(accel_dev)[hw_data->get_etr_bar_id(hw_data)];
+
+ return etr->virt_addr;
+}
+
static inline void __iomem *adf_get_aram_base(struct adf_accel_dev *accel_dev)
{
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
index f752653ccb47..a0d1326ac73d 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
@@ -320,8 +320,7 @@ static int reset_ring_pair(void __iomem *csr, u32 bank_number)
int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number)
{
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
- u32 etr_bar_id = hw_data->get_etr_bar_id(hw_data);
- void __iomem *csr;
+ void __iomem *csr = adf_get_etr_base(accel_dev);
int ret;

if (bank_number >= hw_data->num_banks)
@@ -330,7 +329,6 @@ int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number)
dev_dbg(&GET_DEV(accel_dev),
"ring pair reset for bank:%d\n", bank_number);

- csr = (&GET_BARS(accel_dev)[etr_bar_id])->virt_addr;
ret = reset_ring_pair(csr, bank_number);
if (ret)
dev_err(&GET_DEV(accel_dev),
diff --git a/drivers/crypto/intel/qat/qat_common/adf_transport.c b/drivers/crypto/intel/qat/qat_common/adf_transport.c
index 630d0483c4e0..1efdf46490f1 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_transport.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_transport.c
@@ -474,7 +474,6 @@ static int adf_init_bank(struct adf_accel_dev *accel_dev,
int adf_init_etr_data(struct adf_accel_dev *accel_dev)
{
struct adf_etr_data *etr_data;
- struct adf_hw_device_data *hw_data = accel_dev->hw_device;
void __iomem *csr_addr;
u32 size;
u32 num_banks = 0;
@@ -495,8 +494,7 @@ int adf_init_etr_data(struct adf_accel_dev *accel_dev)
}

accel_dev->transport = etr_data;
- i = hw_data->get_etr_bar_id(hw_data);
- csr_addr = accel_dev->accel_pci_dev.pci_bars[i].virt_addr;
+ csr_addr = adf_get_etr_base(accel_dev);

/* accel_dev->debugfs_dir should always be non-NULL here */
etr_data->debug = debugfs_create_dir("transport",
--
2.18.2


2024-02-01 15:52:37

by Zeng, Xin


Move and rename ADF_4XXX_PF2VM_OFFSET and ADF_4XXX_VM2PF_OFFSET to
ADF_GEN4_PF2VM_OFFSET and ADF_GEN4_VM2PF_OFFSET respectively.
These definitions are moved from adf_gen4_pfvf.c to adf_gen4_hw_data.h
as they are specific to GEN4 and not just to qat_4xxx.

This change is made in anticipation of their use in live migration.

This does not introduce any functional change.

Signed-off-by: Xin Zeng <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
---
drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h | 4 ++++
drivers/crypto/intel/qat/qat_common/adf_gen4_pfvf.c | 8 +++-----
2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
index 7d8a774cadc8..9cabbb5ce96b 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
@@ -197,6 +197,10 @@ do { \
/* Arbiter threads mask with error value */
#define ADF_GEN4_ENA_THD_MASK_ERROR GENMASK(ADF_NUM_THREADS_PER_AE, 0)

+/* PF2VM communication channel */
+#define ADF_GEN4_PF2VM_OFFSET(i) (0x40B010 + (i) * 0x20)
+#define ADF_GEN4_VM2PF_OFFSET(i) (0x40B014 + (i) * 0x20)
+
void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev);

enum icp_qat_gen4_slice_mask {
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_pfvf.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_pfvf.c
index 8e8efe93f3ee..21474d402d09 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_pfvf.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_pfvf.c
@@ -6,12 +6,10 @@
#include "adf_accel_devices.h"
#include "adf_common_drv.h"
#include "adf_gen4_pfvf.h"
+#include "adf_gen4_hw_data.h"
#include "adf_pfvf_pf_proto.h"
#include "adf_pfvf_utils.h"

-#define ADF_4XXX_PF2VM_OFFSET(i) (0x40B010 + ((i) * 0x20))
-#define ADF_4XXX_VM2PF_OFFSET(i) (0x40B014 + ((i) * 0x20))
-
/* VF2PF interrupt source registers */
#define ADF_4XXX_VM2PF_SOU 0x41A180
#define ADF_4XXX_VM2PF_MSK 0x41A1C0
@@ -29,12 +27,12 @@ static const struct pfvf_csr_format csr_gen4_fmt = {

static u32 adf_gen4_pf_get_pf2vf_offset(u32 i)
{
- return ADF_4XXX_PF2VM_OFFSET(i);
+ return ADF_GEN4_PF2VM_OFFSET(i);
}

static u32 adf_gen4_pf_get_vf2pf_offset(u32 i)
{
- return ADF_4XXX_VM2PF_OFFSET(i);
+ return ADF_GEN4_VM2PF_OFFSET(i);
}

static void adf_gen4_enable_vf2pf_interrupts(void __iomem *pmisc_addr, u32 vf_mask)
--
2.18.2


2024-02-01 15:53:00

by Zeng, Xin


Extend the driver with a new interface to be used for VF live migration.
This allows a qat_mig_dev object to be created and destroyed; the object
contains a set of methods used to save and restore the state of a QAT VF.
This interface will be used by the qat_vfio_pci module.
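
A minimal sketch of how a consumer (such as the vfio driver added at the end
of this series) is expected to use the interface; error handling is trimmed
and pdev is the VF's pci_dev:

	struct qat_mig_dev *mdev;

	mdev = qat_vfmig_create(pdev->physfn, pci_iov_vf_id(pdev));
	if (IS_ERR(mdev))
		return PTR_ERR(mdev);

	mdev->ops->init(mdev);
	mdev->ops->open(mdev);

	/* ... suspend(), save_state(), load_state(), resume() as needed ... */

	mdev->ops->close(mdev);
	mdev->ops->cleanup(mdev);
	qat_vfmig_destroy(mdev);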

Signed-off-by: Xin Zeng <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
---
drivers/crypto/intel/qat/qat_common/Makefile | 2 +-
.../intel/qat/qat_common/adf_accel_devices.h | 2 ++
.../crypto/intel/qat/qat_common/qat_mig_dev.c | 35 +++++++++++++++++++
include/linux/qat/qat_mig_dev.h | 31 ++++++++++++++++
4 files changed, 69 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/intel/qat/qat_common/qat_mig_dev.c
create mode 100644 include/linux/qat/qat_mig_dev.h

diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile
index da4d806a9287..05cac8a9e463 100644
--- a/drivers/crypto/intel/qat/qat_common/Makefile
+++ b/drivers/crypto/intel/qat/qat_common/Makefile
@@ -54,4 +54,4 @@ intel_qat-$(CONFIG_DEBUG_FS) += adf_transport_debug.o \
intel_qat-$(CONFIG_PCI_IOV) += adf_sriov.o adf_vf_isr.o adf_pfvf_utils.o \
adf_pfvf_pf_msg.o adf_pfvf_pf_proto.o \
adf_pfvf_vf_msg.o adf_pfvf_vf_proto.o \
- adf_gen2_pfvf.o adf_gen4_pfvf.o
+ adf_gen2_pfvf.o adf_gen4_pfvf.o qat_mig_dev.o
diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
index bdb026155f64..0fa52d20c131 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
@@ -9,6 +9,7 @@
#include <linux/pci.h>
#include <linux/ratelimit.h>
#include <linux/types.h>
+#include <linux/qat/qat_mig_dev.h>
#include "adf_cfg_common.h"
#include "adf_rl.h"
#include "adf_telemetry.h"
@@ -325,6 +326,7 @@ struct adf_hw_device_data {
struct adf_dev_err_mask dev_err_mask;
struct adf_rl_hw_data rl_data;
struct adf_tl_hw_data tl_data;
+ struct qat_migdev_ops vfmig_ops;
const char *fw_name;
const char *fw_mmp_name;
u32 fuses;
diff --git a/drivers/crypto/intel/qat/qat_common/qat_mig_dev.c b/drivers/crypto/intel/qat/qat_common/qat_mig_dev.c
new file mode 100644
index 000000000000..45520a715544
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/qat_mig_dev.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2024 Intel Corporation */
+#include <linux/dev_printk.h>
+#include <linux/export.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/qat/qat_mig_dev.h>
+#include "adf_common_drv.h"
+
+struct qat_mig_dev *qat_vfmig_create(struct pci_dev *pdev, int vf_id)
+{
+ struct adf_accel_dev *accel_dev;
+ struct qat_mig_dev *mdev;
+
+ accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
+ if (!accel_dev)
+ return ERR_PTR(-ENODEV);
+
+ mdev = kmalloc(sizeof(*mdev), GFP_KERNEL);
+ if (!mdev)
+ return ERR_PTR(-ENOMEM);
+
+ mdev->vf_id = vf_id;
+ mdev->parent_accel_dev = accel_dev;
+ mdev->ops = &accel_dev->hw_device->vfmig_ops;
+
+ return mdev;
+}
+EXPORT_SYMBOL_GPL(qat_vfmig_create);
+
+void qat_vfmig_destroy(struct qat_mig_dev *mdev)
+{
+ kfree(mdev);
+}
+EXPORT_SYMBOL_GPL(qat_vfmig_destroy);
diff --git a/include/linux/qat/qat_mig_dev.h b/include/linux/qat/qat_mig_dev.h
new file mode 100644
index 000000000000..229454efe3ba
--- /dev/null
+++ b/include/linux/qat/qat_mig_dev.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2024 Intel Corporation */
+#ifndef QAT_MIG_DEV_H_
+#define QAT_MIG_DEV_H_
+
+struct pci_dev;
+struct qat_migdev_ops;
+
+struct qat_mig_dev {
+ void *parent_accel_dev;
+ struct qat_migdev_ops *ops;
+ u8 *state;
+ u32 state_size;
+ s32 vf_id;
+};
+
+struct qat_migdev_ops {
+ int (*init)(struct qat_mig_dev *mdev);
+ void (*cleanup)(struct qat_mig_dev *mdev);
+ int (*open)(struct qat_mig_dev *mdev);
+ void (*close)(struct qat_mig_dev *mdev);
+ int (*suspend)(struct qat_mig_dev *mdev);
+ int (*resume)(struct qat_mig_dev *mdev);
+ int (*save_state)(struct qat_mig_dev *mdev);
+ int (*load_state)(struct qat_mig_dev *mdev);
+};
+
+struct qat_mig_dev *qat_vfmig_create(struct pci_dev *pdev, int vf_id);
+void qat_vfmig_destroy(struct qat_mig_dev *mdev);
+
+#endif /*QAT_MIG_DEV_H_*/
--
2.18.2


2024-02-01 15:57:09

by Zeng, Xin


Add logic to implement the interface for live migration defined in
qat/qat_mig_dev.h. This is specific to QAT GEN4 Virtual Functions (VFs).

This introduces a migration data manager which is used to handle the
device state during migration. The manager ensures that the device state
is stored in a format that can be restored on the destination node.

The VF state is organized into a hierarchical structure that includes a
preamble, a general state section, a MISC bar section and an ETR bar
section. The latter contains the state of the four ring pairs assigned to
a VF. Here is a graphical representation of the state:

 preamble | general state section  | leaf state
          | MISC bar state section | leaf state
          | ETR bar state section  | bank0 state section | leaf state
          |                        | bank1 state section | leaf state
          |                        | bank2 state section | leaf state
          |                        | bank3 state section | leaf state

In addition to implementing the qat_migdev_ops interface and the state
manager framework, add a mutex to the PFVF code to prevent PF2VF messages
from being exchanged while a migration is in progress.
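
Conceptually, each section in the layout above is a small header (an
identifier plus a size) followed by its payload, and sub-sections are nested
inside their parent's payload. A purely illustrative sketch of that nesting
(the names and fields below are hypothetical, not the actual adf_mstate_mgr
encoding):

	/* Hypothetical on-the-wire shape of one state section */
	struct mig_state_section {
		char id[8];	/* e.g. "ETR_BAR" or "BANK0" */
		u32 size;	/* bytes in data[], including nested sections */
		u8 data[];	/* leaf register values or nested sections */
	};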

Signed-off-by: Xin Zeng <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
---
.../intel/qat/qat_420xx/adf_420xx_hw_data.c | 2 +
.../intel/qat/qat_4xxx/adf_4xxx_hw_data.c | 2 +
drivers/crypto/intel/qat/qat_common/Makefile | 2 +
.../intel/qat/qat_common/adf_accel_devices.h | 6 +
.../intel/qat/qat_common/adf_gen4_hw_data.h | 10 +
.../intel/qat/qat_common/adf_gen4_vf_mig.c | 834 ++++++++++++++++++
.../intel/qat/qat_common/adf_gen4_vf_mig.h | 10 +
.../intel/qat/qat_common/adf_mstate_mgr.c | 310 +++++++
.../intel/qat/qat_common/adf_mstate_mgr.h | 87 ++
.../crypto/intel/qat/qat_common/adf_sriov.c | 7 +-
10 files changed, 1269 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.h
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.c
create mode 100644 drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.h

diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
index bd36b536f663..a48bb83d3e3d 100644
--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
@@ -17,6 +17,7 @@
#include <adf_gen4_ras.h>
#include <adf_gen4_timer.h>
#include <adf_gen4_tl.h>
+#include <adf_gen4_vf_mig.h>
#include "adf_420xx_hw_data.h"
#include "icp_qat_hw.h"

@@ -520,6 +521,7 @@ void adf_init_hw_data_420xx(struct adf_hw_device_data *hw_data, u32 dev_id)
adf_gen4_init_dc_ops(&hw_data->dc_ops);
adf_gen4_init_ras_ops(&hw_data->ras_ops);
adf_gen4_init_tl_data(&hw_data->tl_data);
+ adf_gen4_init_vf_mig_ops(&hw_data->vfmig_ops);
adf_init_rl_data(&hw_data->rl_data);
}

diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
index a04488ada49b..5dce7dfd06c4 100644
--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
@@ -17,6 +17,7 @@
#include "adf_gen4_ras.h"
#include <adf_gen4_timer.h>
#include <adf_gen4_tl.h>
+#include <adf_gen4_vf_mig.h>
#include "adf_4xxx_hw_data.h"
#include "icp_qat_hw.h"

@@ -504,6 +505,7 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
adf_gen4_init_dc_ops(&hw_data->dc_ops);
adf_gen4_init_ras_ops(&hw_data->ras_ops);
adf_gen4_init_tl_data(&hw_data->tl_data);
+ adf_gen4_init_vf_mig_ops(&hw_data->vfmig_ops);
adf_init_rl_data(&hw_data->rl_data);
}

diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile
index 05cac8a9e463..061f7d04a0bb 100644
--- a/drivers/crypto/intel/qat/qat_common/Makefile
+++ b/drivers/crypto/intel/qat/qat_common/Makefile
@@ -20,12 +20,14 @@ intel_qat-objs := adf_cfg.o \
adf_gen4_config.o \
adf_gen4_hw_csr_data.o \
adf_gen4_hw_data.o \
+ adf_gen4_vf_mig.o \
adf_gen4_pm.o \
adf_gen2_dc.o \
adf_gen4_dc.o \
adf_gen4_ras.o \
adf_gen4_timer.o \
adf_clock.o \
+ adf_mstate_mgr.o \
qat_crypto.o \
qat_compression.o \
qat_comp_algs.o \
diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
index 0fa52d20c131..4bb50e00c210 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
@@ -397,10 +397,16 @@ struct adf_fw_loader_data {
struct adf_accel_vf_info {
struct adf_accel_dev *accel_dev;
struct mutex pf2vf_lock; /* protect CSR access for PF2VF messages */
+ struct mutex pfvf_mig_lock; /* protects PFVF state for migration */
struct ratelimit_state vf2pf_ratelimit;
u32 vf_nr;
bool init;
u8 vf_compat_ver;
+ /*
+ * Private area used for device migration.
+ * Memory allocation and free is managed by migration driver.
+ */
+ void *mig_priv;
};

struct adf_dc_data {
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
index 788fe9d345c0..fdff987902ed 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
@@ -86,6 +86,7 @@
#define ADF_RP_INT_SRC_SEL_F_RISE_MASK BIT(2)
#define ADF_RP_INT_SRC_SEL_F_FALL_MASK GENMASK(2, 0)
#define ADF_RP_INT_SRC_SEL_RANGE_WIDTH 4
+#define ADF_COALESCED_POLL_TIMEOUT_US (1 * USEC_PER_SEC)
#define ADF_COALESCED_POLL_DELAY_US 1000
#define ADF_WQM_CSR_RPINTSOU(bank) (0x200000 + ((bank) << 12))
#define ADF_WQM_CSR_RP_IDX_RX 1
@@ -120,6 +121,15 @@
/* PF2VM communication channel */
#define ADF_GEN4_PF2VM_OFFSET(i) (0x40B010 + (i) * 0x20)
#define ADF_GEN4_VM2PF_OFFSET(i) (0x40B014 + (i) * 0x20)
+#define ADF_GEN4_VINTMSKPF2VM_OFFSET(i) (0x40B00C + (i) * 0x20)
+#define ADF_GEN4_VINTSOUPF2VM_OFFSET(i) (0x40B008 + (i) * 0x20)
+#define ADF_GEN4_VINTMSK_OFFSET(i) (0x40B004 + (i) * 0x20)
+#define ADF_GEN4_VINTSOU_OFFSET(i) (0x40B000 + (i) * 0x20)
+
+struct adf_gen4_vfmig {
+ struct adf_mstate_mgr *mstate_mgr;
+ bool bank_stopped[ADF_GEN4_NUM_BANKS_PER_VF];
+};

void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev);

diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.c
new file mode 100644
index 000000000000..9975b38134db
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.c
@@ -0,0 +1,834 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2024 Intel Corporation */
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <asm/errno.h>
+
+#include "adf_accel_devices.h"
+#include "adf_common_drv.h"
+#include "adf_gen4_hw_data.h"
+#include "adf_gen4_pfvf.h"
+#include "adf_pfvf_utils.h"
+#include "adf_mstate_mgr.h"
+#include "adf_gen4_vf_mig.h"
+
+#define ADF_GEN4_VF_MSTATE_SIZE 4096
+#define ADF_GEN4_PFVF_RSP_TIMEOUT_US 5000
+
+static int adf_gen4_vfmig_init_device(struct qat_mig_dev *mdev)
+{
+ u8 *state;
+
+ state = kmalloc(ADF_GEN4_VF_MSTATE_SIZE, GFP_KERNEL);
+ if (!state)
+ return -ENOMEM;
+
+ mdev->state = state;
+ mdev->state_size = ADF_GEN4_VF_MSTATE_SIZE;
+
+ return 0;
+}
+
+static void adf_gen4_vfmig_cleanup_device(struct qat_mig_dev *mdev)
+{
+ kfree(mdev->state);
+ mdev->state = NULL;
+}
+
+static int adf_gen4_vfmig_open_device(struct qat_mig_dev *mdev)
+{
+ struct adf_accel_dev *accel_dev = mdev->parent_accel_dev;
+ struct adf_accel_vf_info *vf_info;
+ struct adf_gen4_vfmig *vfmig;
+
+ vf_info = &accel_dev->pf.vf_info[mdev->vf_id];
+
+ vfmig = kzalloc(sizeof(*vfmig), GFP_KERNEL);
+ if (!vfmig)
+ return -ENOMEM;
+
+ vfmig->mstate_mgr = adf_mstate_mgr_new(mdev->state, mdev->state_size);
+ if (!vfmig->mstate_mgr) {
+ kfree(vfmig);
+ return -ENOMEM;
+ }
+
+ vf_info->mig_priv = vfmig;
+
+ return 0;
+}
+
+static void adf_gen4_vfmig_close_device(struct qat_mig_dev *mdev)
+{
+ struct adf_accel_dev *accel_dev = mdev->parent_accel_dev;
+ struct adf_accel_vf_info *vf_info;
+ struct adf_gen4_vfmig *vfmig;
+
+ vf_info = &accel_dev->pf.vf_info[mdev->vf_id];
+ if (vf_info->mig_priv) {
+ vfmig = vf_info->mig_priv;
+ adf_mstate_mgr_destroy(vfmig->mstate_mgr);
+ kfree(vfmig);
+ vf_info->mig_priv = NULL;
+ }
+}
+
+static int adf_gen4_vfmig_suspend_device(struct qat_mig_dev *mdev)
+{
+ struct adf_accel_dev *accel_dev = mdev->parent_accel_dev;
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_accel_vf_info *vf_info;
+ struct adf_gen4_vfmig *vf_mig;
+ u32 vf_nr = mdev->vf_id;
+ int ret, i;
+
+ vf_info = &accel_dev->pf.vf_info[vf_nr];
+ vf_mig = vf_info->mig_priv;
+
+ /* Stop all inflight jobs */
+ for (i = 0; i < hw_data->num_banks_per_vf; i++) {
+ u32 pf_bank_nr = i + vf_nr * hw_data->num_banks_per_vf;
+
+ ret = adf_gen4_bank_drain_start(accel_dev, pf_bank_nr,
+ ADF_RPRESET_POLL_TIMEOUT_US);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to drain bank %d for vf_nr %d\n", i,
+ vf_nr);
+ return ret;
+ }
+ vf_mig->bank_stopped[i] = true;
+
+ adf_gen4_bank_quiesce_coal_timer(accel_dev, pf_bank_nr,
+ ADF_COALESCED_POLL_TIMEOUT_US);
+ }
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_resume_device(struct qat_mig_dev *mdev)
+{
+ struct adf_accel_dev *accel_dev = mdev->parent_accel_dev;
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_accel_vf_info *vf_info;
+ struct adf_gen4_vfmig *vf_mig;
+ u32 vf_nr = mdev->vf_id;
+ int i;
+
+ vf_info = &accel_dev->pf.vf_info[vf_nr];
+ vf_mig = vf_info->mig_priv;
+
+ for (i = 0; i < hw_data->num_banks_per_vf; i++) {
+ u32 pf_bank_nr = i + vf_nr * hw_data->num_banks_per_vf;
+
+ if (vf_mig->bank_stopped[i]) {
+ adf_gen4_bank_drain_finish(accel_dev, pf_bank_nr);
+ vf_mig->bank_stopped[i] = false;
+ }
+ }
+
+ return 0;
+}
+
+struct adf_vf_bank_info {
+ struct adf_accel_dev *accel_dev;
+ u32 vf_nr;
+ u32 bank_nr;
+};
+
+struct mig_user_sla {
+ enum adf_base_services srv;
+ u64 rp_mask;
+ u32 cir;
+ u32 pir;
+};
+
+static int adf_mstate_sla_check(struct adf_mstate_mgr *sub_mgr, u8 *src_buf,
+ u32 src_size, void *opaque)
+{
+ struct adf_mstate_vreginfo _sinfo = { src_buf, src_size };
+ struct adf_mstate_vreginfo *sinfo = &_sinfo, *dinfo = opaque;
+ u32 src_sla_cnt = sinfo->size / sizeof(struct mig_user_sla);
+ u32 dst_sla_cnt = dinfo->size / sizeof(struct mig_user_sla);
+ struct mig_user_sla *src_slas = sinfo->addr;
+ struct mig_user_sla *dst_slas = dinfo->addr;
+ int i, j;
+
+ for (i = 0; i < src_sla_cnt; i++) {
+ for (j = 0; j < dst_sla_cnt; j++) {
+ if (src_slas[i].srv != dst_slas[j].srv ||
+ src_slas[i].rp_mask != dst_slas[j].rp_mask)
+ continue;
+
+ if (src_slas[i].cir > dst_slas[j].cir ||
+ src_slas[i].pir > dst_slas[j].pir) {
+ pr_err("QAT: DST VF rate limiting mismatch.\n");
+ return -EINVAL;
+ }
+ break;
+ }
+
+ if (j == dst_sla_cnt) {
+ pr_err("QAT: SRC VF rate limiting mismatch - SRC srv %d and rp_mask 0x%llx.\n",
+ src_slas[i].srv, src_slas[i].rp_mask);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static inline int adf_mstate_check_cap_size(u32 src_sz, u32 dst_sz, u32 max_sz)
+{
+ if (src_sz > max_sz || dst_sz > max_sz)
+ return -EINVAL;
+ else
+ return 0;
+}
+
+static int adf_mstate_compatver_check(struct adf_mstate_mgr *sub_mgr,
+ u8 *src_buf, u32 src_sz, void *opaque)
+{
+ struct adf_mstate_vreginfo *info = opaque;
+ u8 compat = 0;
+ u8 *pcompat;
+
+ if (src_sz != info->size) {
+ pr_debug("QAT: State mismatch (compat version size), current %u, expected %u\n",
+ src_sz, info->size);
+ return -EINVAL;
+ }
+
+ memcpy(info->addr, src_buf, info->size);
+ pcompat = info->addr;
+ if (*pcompat == 0) {
+ pr_warn("QAT: Unable to determine the version of VF\n");
+ return 0;
+ }
+
+ compat = adf_vf_compat_checker(*pcompat);
+ if (compat == ADF_PF2VF_VF_INCOMPATIBLE) {
+ pr_debug("QAT: SRC VF driver (ver=%u) is incompatible with DST PF driver (ver=%u)\n",
+ *pcompat, ADF_PFVF_COMPAT_THIS_VERSION);
+ return -EINVAL;
+ }
+
+ if (compat == ADF_PF2VF_VF_COMPAT_UNKNOWN)
+ pr_debug("QAT: SRC VF driver (ver=%u) is newer than DST PF driver (ver=%u)\n",
+ *pcompat, ADF_PFVF_COMPAT_THIS_VERSION);
+
+ return 0;
+}
+
+/*
+ * adf_mstate_capmask_compare() - compare QAT device capability mask
+ * @sinfo: Pointer to source capability info
+ * @dinfo: Pointer to target capability info
+ *
+ * This function compares the capability mask between source VF and target VF
+ *
+ * Returns: 0 if target capability mask is identical to source capability mask,
+ * 1 if target mask can represent all the capabilities represented by source mask,
+ * -1 if target mask can't represent all the capabilities represented by source
+ * mask.
+ */
+static int adf_mstate_capmask_compare(struct adf_mstate_vreginfo *sinfo,
+ struct adf_mstate_vreginfo *dinfo)
+{
+ u64 src = 0, dst = 0;
+
+ if (adf_mstate_check_cap_size(sinfo->size, dinfo->size, sizeof(u64))) {
+ pr_debug("QAT: Unexpected capability size %u %u %zu\n",
+ sinfo->size, dinfo->size, sizeof(u64));
+ return -1;
+ }
+
+ memcpy(&src, sinfo->addr, sinfo->size);
+ memcpy(&dst, dinfo->addr, dinfo->size);
+
+ pr_debug("QAT: Check cap compatibility of cap %llu %llu\n", src, dst);
+
+ if (src == dst)
+ return 0;
+
+ if ((src | dst) == dst)
+ return 1;
+
+ return -1;
+}
+
+static int adf_mstate_capmask_superset(struct adf_mstate_mgr *sub_mgr, u8 *buf,
+ u32 size, void *opa)
+{
+ struct adf_mstate_vreginfo sinfo = { buf, size };
+
+ if (adf_mstate_capmask_compare(&sinfo, opa) >= 0)
+ return 0;
+
+ return -EINVAL;
+}
+
+static int adf_mstate_capmask_equal(struct adf_mstate_mgr *sub_mgr, u8 *buf,
+ u32 size, void *opa)
+{
+ struct adf_mstate_vreginfo sinfo = { buf, size };
+
+ if (adf_mstate_capmask_compare(&sinfo, opa) == 0)
+ return 0;
+
+ return -EINVAL;
+}
+
+static int adf_mstate_set_vreg(struct adf_mstate_mgr *sub_mgr, u8 *buf,
+ u32 size, void *opa)
+{
+ struct adf_mstate_vreginfo *info = opa;
+
+ if (size != info->size) {
+ pr_debug("QAT: Unexpected cap size %u %u\n", size, info->size);
+ return -EINVAL;
+ }
+ memcpy(info->addr, buf, info->size);
+
+ return 0;
+}
+
+static u32 adf_gen4_vfmig_get_slas(struct adf_accel_dev *accel_dev, u32 vf_nr,
+ struct mig_user_sla *pmig_slas)
+{
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_rl *rl_data = accel_dev->rate_limiting;
+ struct rl_sla **sla_type_arr = NULL;
+ u64 rp_mask, rp_index;
+ u32 max_num_sla;
+ u32 sla_cnt = 0;
+ int i, j;
+
+ if (!accel_dev->rate_limiting)
+ return 0;
+
+ rp_index = vf_nr * hw_data->num_banks_per_vf;
+ max_num_sla = adf_rl_get_sla_arr_of_type(rl_data, RL_LEAF, &sla_type_arr);
+
+ for (i = 0; i < max_num_sla; i++) {
+ if (!sla_type_arr[i])
+ continue;
+
+ rp_mask = 0;
+ for (j = 0; j < sla_type_arr[i]->ring_pairs_cnt; j++)
+ rp_mask |= BIT(sla_type_arr[i]->ring_pairs_ids[j]);
+
+ if (rp_mask & GENMASK_ULL(rp_index + 3, rp_index)) {
+ pmig_slas->rp_mask = rp_mask;
+ pmig_slas->cir = sla_type_arr[i]->cir;
+ pmig_slas->pir = sla_type_arr[i]->pir;
+ pmig_slas->srv = sla_type_arr[i]->srv;
+ pmig_slas++;
+ sla_cnt++;
+ }
+ }
+
+ return sla_cnt;
+}
+
+static int adf_gen4_vfmig_load_etr_regs(struct adf_mstate_mgr *sub_mgr,
+ u8 *state, u32 size, void *opa)
+{
+ struct adf_vf_bank_info *vf_bank_info = opa;
+ struct adf_accel_dev *accel_dev = vf_bank_info->accel_dev;
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ u32 pf_bank_nr;
+ int ret;
+
+ pf_bank_nr = vf_bank_info->bank_nr + vf_bank_info->vf_nr * hw_data->num_banks_per_vf;
+ ret = hw_data->bank_state_restore(accel_dev, pf_bank_nr,
+ (struct bank_state *)state);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to load regs for vf%d bank%d\n",
+ vf_bank_info->vf_nr, vf_bank_info->bank_nr);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_load_etr_bank(struct adf_accel_dev *accel_dev,
+ u32 vf_nr, u32 bank_nr,
+ struct adf_mstate_mgr *mstate_mgr)
+{
+ struct adf_vf_bank_info vf_bank_info = {accel_dev, vf_nr, bank_nr};
+ struct adf_mstate_sect_h *subsec, *l2_subsec;
+ struct adf_mstate_mgr sub_sects_mgr;
+ char bank_ids[ADF_MSTATE_ID_LEN];
+
+ snprintf(bank_ids, sizeof(bank_ids), ADF_MSTATE_BANK_IDX_IDS "%x", bank_nr);
+ subsec = adf_mstate_sect_lookup(mstate_mgr, bank_ids, NULL, NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to lookup sec %s for vf%d bank%d\n",
+ ADF_MSTATE_BANK_IDX_IDS, vf_nr, bank_nr);
+ return -EINVAL;
+ }
+
+ adf_mstate_mgr_init_from_psect(&sub_sects_mgr, subsec);
+ l2_subsec = adf_mstate_sect_lookup(&sub_sects_mgr, ADF_MSTATE_ETR_REGS_IDS,
+ adf_gen4_vfmig_load_etr_regs,
+ &vf_bank_info);
+ if (!l2_subsec) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to add sec %s for vf%d bank%d\n",
+ ADF_MSTATE_ETR_REGS_IDS, vf_nr, bank_nr);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_load_etr(struct adf_accel_dev *accel_dev, u32 vf_nr)
+{
+ struct adf_accel_vf_info *vf_info = &accel_dev->pf.vf_info[vf_nr];
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_gen4_vfmig *vfmig = vf_info->mig_priv;
+ struct adf_mstate_mgr *mstate_mgr = vfmig->mstate_mgr;
+ struct adf_mstate_mgr sub_sects_mgr;
+ struct adf_mstate_sect_h *subsec;
+ int ret, i;
+
+ subsec = adf_mstate_sect_lookup(mstate_mgr, ADF_MSTATE_ETRB_IDS, NULL,
+ NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to load sec %s\n",
+ ADF_MSTATE_ETRB_IDS);
+ return -EINVAL;
+ }
+
+ adf_mstate_mgr_init_from_psect(&sub_sects_mgr, subsec);
+ for (i = 0; i < hw_data->num_banks_per_vf; i++) {
+ ret = adf_gen4_vfmig_load_etr_bank(accel_dev, vf_nr, i,
+ &sub_sects_mgr);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_load_misc(struct adf_accel_dev *accel_dev, u32 vf_nr)
+{
+ struct adf_accel_vf_info *vf_info = &accel_dev->pf.vf_info[vf_nr];
+ struct adf_gen4_vfmig *vfmig = vf_info->mig_priv;
+ void __iomem *csr = adf_get_pmisc_base(accel_dev);
+ struct adf_mstate_mgr *mstate_mgr = vfmig->mstate_mgr;
+ struct adf_mstate_sect_h *subsec, *l2_subsec;
+ struct adf_mstate_mgr sub_sects_mgr;
+ struct {
+ char *id;
+ u64 ofs;
+ } misc_states[] = {
+ {ADF_MSTATE_VINTMSK_IDS, ADF_GEN4_VINTMSK_OFFSET(vf_nr)},
+ {ADF_MSTATE_VINTMSK_PF2VM_IDS, ADF_GEN4_VINTMSKPF2VM_OFFSET(vf_nr)},
+ {ADF_MSTATE_PF2VM_IDS, ADF_GEN4_PF2VM_OFFSET(vf_nr)},
+ {ADF_MSTATE_VM2PF_IDS, ADF_GEN4_VM2PF_OFFSET(vf_nr)},
+ };
+ int i;
+
+ subsec = adf_mstate_sect_lookup(mstate_mgr, ADF_MSTATE_MISCB_IDS, NULL,
+ NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to load sec %s\n",
+ ADF_MSTATE_MISCB_IDS);
+ return -EINVAL;
+ }
+
+ adf_mstate_mgr_init_from_psect(&sub_sects_mgr, subsec);
+ for (i = 0; i < ARRAY_SIZE(misc_states); i++) {
+ struct adf_mstate_vreginfo info;
+ u32 regv;
+
+ info.addr = &regv;
+ info.size = sizeof(regv);
+ l2_subsec = adf_mstate_sect_lookup(&sub_sects_mgr,
+ misc_states[i].id,
+ adf_mstate_set_vreg,
+ &info);
+ if (!l2_subsec) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to load sec %s\n", misc_states[i].id);
+ return -EINVAL;
+ }
+ ADF_CSR_WR(csr, misc_states[i].ofs, regv);
+ }
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_load_generic(struct adf_accel_dev *accel_dev, u32 vf_nr)
+{
+ struct adf_accel_vf_info *vf_info = &accel_dev->pf.vf_info[vf_nr];
+ struct mig_user_sla dst_slas[RL_RP_CNT_PER_LEAF_MAX] = { };
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_gen4_vfmig *vfmig = vf_info->mig_priv;
+ struct adf_mstate_mgr *mstate_mgr = vfmig->mstate_mgr;
+ struct adf_mstate_sect_h *subsec, *l2_subsec;
+ struct adf_mstate_mgr sub_sects_mgr;
+ u32 dst_sla_cnt;
+ struct {
+ char *id;
+ int (*action)(struct adf_mstate_mgr *sub_mgr, u8 *buf, u32 size, void *opa);
+ struct adf_mstate_vreginfo info;
+ } gen_states[] = {
+ {ADF_MSTATE_GEN_CAP_IDS, adf_mstate_capmask_superset,
+ {&hw_data->accel_capabilities_mask, sizeof(hw_data->accel_capabilities_mask)}},
+ {ADF_MSTATE_GEN_SVCMAP_IDS, adf_mstate_capmask_equal,
+ {&hw_data->ring_to_svc_map, sizeof(hw_data->ring_to_svc_map)}},
+ {ADF_MSTATE_GEN_EXTDC_IDS, adf_mstate_capmask_superset,
+ {&hw_data->extended_dc_capabilities, sizeof(hw_data->extended_dc_capabilities)}},
+ {ADF_MSTATE_IOV_INIT_IDS, adf_mstate_set_vreg,
+ {&vf_info->init, sizeof(vf_info->init)}},
+ {ADF_MSTATE_COMPAT_VER_IDS, adf_mstate_compatver_check,
+ {&vf_info->vf_compat_ver, sizeof(vf_info->vf_compat_ver)}},
+ {ADF_MSTATE_SLA_IDS, adf_mstate_sla_check, {dst_slas, 0}},
+ };
+ int i;
+
+ subsec = adf_mstate_sect_lookup(mstate_mgr, ADF_MSTATE_GEN_IDS, NULL, NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to load sec %s\n",
+ ADF_MSTATE_GEN_IDS);
+ return -EINVAL;
+ }
+
+ adf_mstate_mgr_init_from_psect(&sub_sects_mgr, subsec);
+ for (i = 0; i < ARRAY_SIZE(gen_states); i++) {
+ if (gen_states[i].info.addr == dst_slas) {
+ dst_sla_cnt = adf_gen4_vfmig_get_slas(accel_dev, vf_nr, dst_slas);
+ gen_states[i].info.size = dst_sla_cnt * sizeof(struct mig_user_sla);
+ }
+
+ l2_subsec = adf_mstate_sect_lookup(&sub_sects_mgr,
+ gen_states[i].id,
+ gen_states[i].action,
+ &gen_states[i].info);
+ if (!l2_subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to load sec %s\n",
+ gen_states[i].id);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_save_etr_regs(struct adf_mstate_mgr *subs, u8 *state,
+ u32 size, void *opa)
+{
+ struct adf_vf_bank_info *vf_bank_info = opa;
+ struct adf_accel_dev *accel_dev = vf_bank_info->accel_dev;
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ u32 pf_bank_nr;
+ int ret;
+
+ pf_bank_nr = vf_bank_info->bank_nr;
+ pf_bank_nr += vf_bank_info->vf_nr * hw_data->num_banks_per_vf;
+
+ ret = hw_data->bank_state_save(accel_dev, pf_bank_nr,
+ (struct bank_state *)state);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to save regs for vf%d bank%d\n",
+ vf_bank_info->vf_nr, vf_bank_info->bank_nr);
+ return ret;
+ }
+
+ return sizeof(struct bank_state);
+}
+
+static int adf_gen4_vfmig_save_etr_bank(struct adf_accel_dev *accel_dev,
+ u32 vf_nr, u32 bank_nr,
+ struct adf_mstate_mgr *mstate_mgr)
+{
+ struct adf_mstate_sect_h *subsec, *l2_subsec;
+ struct adf_vf_bank_info vf_bank_info;
+ struct adf_mstate_mgr sub_sects_mgr;
+ char bank_ids[ADF_MSTATE_ID_LEN];
+
+ snprintf(bank_ids, sizeof(bank_ids), ADF_MSTATE_BANK_IDX_IDS "%x", bank_nr);
+
+ subsec = adf_mstate_sect_add(mstate_mgr, bank_ids, NULL, NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to add sec %s for vf%d bank%d\n",
+ ADF_MSTATE_BANK_IDX_IDS, vf_nr, bank_nr);
+ return -EINVAL;
+ }
+
+ adf_mstate_mgr_init_from_parent(&sub_sects_mgr, mstate_mgr);
+ vf_bank_info.accel_dev = accel_dev;
+ vf_bank_info.vf_nr = vf_nr;
+ vf_bank_info.bank_nr = bank_nr;
+ l2_subsec = adf_mstate_sect_add(&sub_sects_mgr, ADF_MSTATE_ETR_REGS_IDS,
+ adf_gen4_vfmig_save_etr_regs,
+ &vf_bank_info);
+ if (!l2_subsec) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to add sec %s for vf%d bank%d\n",
+ ADF_MSTATE_ETR_REGS_IDS, vf_nr, bank_nr);
+ return -EINVAL;
+ }
+ adf_mstate_sect_update(mstate_mgr, &sub_sects_mgr, subsec);
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_save_etr(struct adf_accel_dev *accel_dev, u32 vf_nr)
+{
+ struct adf_accel_vf_info *vf_info = &accel_dev->pf.vf_info[vf_nr];
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_gen4_vfmig *vfmig = vf_info->mig_priv;
+ struct adf_mstate_mgr *mstate_mgr = vfmig->mstate_mgr;
+ struct adf_mstate_mgr sub_sects_mgr;
+ struct adf_mstate_sect_h *subsec;
+ int ret, i;
+
+ subsec = adf_mstate_sect_add(mstate_mgr, ADF_MSTATE_ETRB_IDS, NULL, NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to add sec %s\n",
+ ADF_MSTATE_ETRB_IDS);
+ return -EINVAL;
+ }
+
+ adf_mstate_mgr_init_from_parent(&sub_sects_mgr, mstate_mgr);
+ for (i = 0; i < hw_data->num_banks_per_vf; i++) {
+ ret = adf_gen4_vfmig_save_etr_bank(accel_dev, vf_nr, i,
+ &sub_sects_mgr);
+ if (ret)
+ return ret;
+ }
+ adf_mstate_sect_update(mstate_mgr, &sub_sects_mgr, subsec);
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_save_misc(struct adf_accel_dev *accel_dev, u32 vf_nr)
+{
+ struct adf_accel_vf_info *vf_info = &accel_dev->pf.vf_info[vf_nr];
+ struct adf_gen4_vfmig *vfmig = vf_info->mig_priv;
+ struct adf_mstate_mgr *mstate_mgr = vfmig->mstate_mgr;
+ void __iomem *csr = adf_get_pmisc_base(accel_dev);
+ struct adf_mstate_sect_h *subsec, *l2_subsec;
+ struct adf_mstate_mgr sub_sects_mgr;
+ struct {
+ char *id;
+ u64 offset;
+ } misc_states[] = {
+ {ADF_MSTATE_VINTSRC_IDS, ADF_GEN4_VINTSOU_OFFSET(vf_nr)},
+ {ADF_MSTATE_VINTMSK_IDS, ADF_GEN4_VINTMSK_OFFSET(vf_nr)},
+ {ADF_MSTATE_VINTSRC_PF2VM_IDS, ADF_GEN4_VINTSOUPF2VM_OFFSET(vf_nr)},
+ {ADF_MSTATE_VINTMSK_PF2VM_IDS, ADF_GEN4_VINTMSKPF2VM_OFFSET(vf_nr)},
+ {ADF_MSTATE_PF2VM_IDS, ADF_GEN4_PF2VM_OFFSET(vf_nr)},
+ {ADF_MSTATE_VM2PF_IDS, ADF_GEN4_VM2PF_OFFSET(vf_nr)},
+ };
+ ktime_t time_exp;
+ int i;
+
+ subsec = adf_mstate_sect_add(mstate_mgr, ADF_MSTATE_MISCB_IDS, NULL, NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to add sec %s\n",
+ ADF_MSTATE_MISCB_IDS);
+ return -EINVAL;
+ }
+
+ time_exp = ktime_add_us(ktime_get(), ADF_GEN4_PFVF_RSP_TIMEOUT_US);
+ while (!mutex_trylock(&vf_info->pfvf_mig_lock)) {
+ if (ktime_after(ktime_get(), time_exp)) {
+ dev_err(&GET_DEV(accel_dev), "Failed to get pfvf mig lock\n");
+ return -ETIMEDOUT;
+ }
+ usleep_range(500, 1000);
+ }
+
+ adf_mstate_mgr_init_from_parent(&sub_sects_mgr, mstate_mgr);
+ for (i = 0; i < ARRAY_SIZE(misc_states); i++) {
+ struct adf_mstate_vreginfo info;
+ u32 regv;
+
+ info.addr = &regv;
+ info.size = sizeof(regv);
+ regv = ADF_CSR_RD(csr, misc_states[i].offset);
+
+ l2_subsec = adf_mstate_sect_add_vreg(&sub_sects_mgr,
+ misc_states[i].id,
+ &info);
+ if (!l2_subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to add sec %s\n",
+ misc_states[i].id);
+ mutex_unlock(&vf_info->pfvf_mig_lock);
+ return -EINVAL;
+ }
+ }
+
+ mutex_unlock(&vf_info->pfvf_mig_lock);
+ adf_mstate_sect_update(mstate_mgr, &sub_sects_mgr, subsec);
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_save_generic(struct adf_accel_dev *accel_dev, u32 vf_nr)
+{
+ struct adf_accel_vf_info *vf_info = &accel_dev->pf.vf_info[vf_nr];
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+ struct adf_gen4_vfmig *vfmig = vf_info->mig_priv;
+ struct adf_mstate_mgr *mstate_mgr = vfmig->mstate_mgr;
+ struct adf_mstate_mgr sub_sects_mgr;
+ struct adf_mstate_sect_h *subsec, *l2_subsec;
+ struct mig_user_sla src_slas[RL_RP_CNT_PER_LEAF_MAX] = { };
+ u32 src_sla_cnt;
+ struct {
+ char *id;
+ struct adf_mstate_vreginfo info;
+ } gen_states[] = {
+ {ADF_MSTATE_GEN_CAP_IDS,
+ {&hw_data->accel_capabilities_mask, sizeof(hw_data->accel_capabilities_mask)}},
+ {ADF_MSTATE_GEN_SVCMAP_IDS,
+ {&hw_data->ring_to_svc_map, sizeof(hw_data->ring_to_svc_map)}},
+ {ADF_MSTATE_GEN_EXTDC_IDS,
+ {&hw_data->extended_dc_capabilities, sizeof(hw_data->extended_dc_capabilities)}},
+ {ADF_MSTATE_IOV_INIT_IDS,
+ {&vf_info->init, sizeof(vf_info->init)}},
+ {ADF_MSTATE_COMPAT_VER_IDS,
+ {&vf_info->vf_compat_ver, sizeof(vf_info->vf_compat_ver)}},
+ {ADF_MSTATE_SLA_IDS, {src_slas, 0}},
+ };
+ int i;
+
+ subsec = adf_mstate_sect_add(mstate_mgr, ADF_MSTATE_GEN_IDS, NULL, NULL);
+ if (!subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to add sec %s\n",
+ ADF_MSTATE_GEN_IDS);
+ return -EINVAL;
+ }
+
+ adf_mstate_mgr_init_from_parent(&sub_sects_mgr, mstate_mgr);
+ for (i = 0; i < ARRAY_SIZE(gen_states); i++) {
+ if (gen_states[i].info.addr == src_slas) {
+ src_sla_cnt = adf_gen4_vfmig_get_slas(accel_dev, vf_nr, src_slas);
+ gen_states[i].info.size = src_sla_cnt * sizeof(struct mig_user_sla);
+ }
+
+ l2_subsec = adf_mstate_sect_add_vreg(&sub_sects_mgr,
+ gen_states[i].id,
+ &gen_states[i].info);
+ if (!l2_subsec) {
+ dev_err(&GET_DEV(accel_dev), "Failed to add sec %s\n",
+ gen_states[i].id);
+ return -EINVAL;
+ }
+ }
+ adf_mstate_sect_update(mstate_mgr, &sub_sects_mgr, subsec);
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_save_state(struct qat_mig_dev *mdev)
+{
+ struct adf_accel_dev *accel_dev = mdev->parent_accel_dev;
+ struct adf_accel_vf_info *vf_info;
+ struct adf_gen4_vfmig *vfmig;
+ struct adf_mstate_preh *pre;
+ u32 vf_nr = mdev->vf_id;
+ int ret;
+
+ vf_info = &accel_dev->pf.vf_info[vf_nr];
+ vfmig = vf_info->mig_priv;
+
+ adf_mstate_mgr_init(vfmig->mstate_mgr, mdev->state, mdev->state_size);
+ pre = adf_mstate_preamble_add(vfmig->mstate_mgr);
+ if (!pre)
+ return -EINVAL;
+
+ ret = adf_gen4_vfmig_save_generic(accel_dev, vf_nr);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to save generic state for vf_nr %d\n", vf_nr);
+ return ret;
+ }
+
+ ret = adf_gen4_vfmig_save_misc(accel_dev, vf_nr);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to save misc bar state for vf_nr %d\n", vf_nr);
+ return ret;
+ }
+
+ ret = adf_gen4_vfmig_save_etr(accel_dev, vf_nr);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to save etr bar state for vf_nr %d\n", vf_nr);
+ return ret;
+ }
+
+ adf_mstate_preamble_update(vfmig->mstate_mgr, pre);
+
+ return 0;
+}
+
+static int adf_gen4_vfmig_load_state(struct qat_mig_dev *mdev)
+{
+ struct adf_accel_dev *accel_dev = mdev->parent_accel_dev;
+ struct adf_accel_vf_info *vf_info;
+ struct adf_gen4_vfmig *vfmig;
+ u32 vf_nr = mdev->vf_id;
+ int ret;
+
+ vf_info = &accel_dev->pf.vf_info[vf_nr];
+ vfmig = vf_info->mig_priv;
+
+ ret = adf_mstate_mgr_init_from_remote(vfmig->mstate_mgr, mdev->state,
+ mdev->state_size, NULL, NULL);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev), "Invalid state for vf_nr %d\n",
+ vf_nr);
+ return ret;
+ }
+
+ ret = adf_gen4_vfmig_load_generic(accel_dev, vf_nr);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to load general state for vf_nr %d\n", vf_nr);
+ return ret;
+ }
+
+ ret = adf_gen4_vfmig_load_misc(accel_dev, vf_nr);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to load misc bar state for vf_nr %d\n", vf_nr);
+ return ret;
+ }
+
+ ret = adf_gen4_vfmig_load_etr(accel_dev, vf_nr);
+ if (ret) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to load etr bar state for vf_nr %d\n", vf_nr);
+ return ret;
+ }
+
+ return 0;
+}
+
+void adf_gen4_init_vf_mig_ops(struct qat_migdev_ops *vfmig_ops)
+{
+ vfmig_ops->init = adf_gen4_vfmig_init_device;
+ vfmig_ops->cleanup = adf_gen4_vfmig_cleanup_device;
+ vfmig_ops->open = adf_gen4_vfmig_open_device;
+ vfmig_ops->close = adf_gen4_vfmig_close_device;
+ vfmig_ops->suspend = adf_gen4_vfmig_suspend_device;
+ vfmig_ops->resume = adf_gen4_vfmig_resume_device;
+ vfmig_ops->save_state = adf_gen4_vfmig_save_state;
+ vfmig_ops->load_state = adf_gen4_vfmig_load_state;
+}
+EXPORT_SYMBOL_GPL(adf_gen4_init_vf_mig_ops);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.h
new file mode 100644
index 000000000000..dbb1f26b35f1
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_vf_mig.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2024 Intel Corporation */
+#ifndef ADF_GEN4_VF_MIG_H_
+#define ADF_GEN4_VF_MIG_H_
+
+#include <linux/qat/qat_mig_dev.h>
+
+void adf_gen4_init_vf_mig_ops(struct qat_migdev_ops *vfmig_ops);
+
+#endif
diff --git a/drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.c b/drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.c
new file mode 100644
index 000000000000..58d068b98aea
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.c
@@ -0,0 +1,310 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2024 Intel Corporation */
+
+#include <linux/slab.h>
+#include <linux/types.h>
+#include "adf_mstate_mgr.h"
+
+#define ADF_MSTATE_MAGIC 0xADF5CAEA
+#define ADF_MSTATE_VERSION 0x1
+
+struct adf_mstate_sect_h {
+ u8 id[ADF_MSTATE_ID_LEN];
+ u32 size;
+ u32 sub_sects;
+ u8 state[];
+};
+
+static inline u32 adf_mstate_state_size(struct adf_mstate_mgr *mgr)
+{
+ return mgr->state - mgr->buf;
+}
+
+static inline u32 adf_mstate_avail_room(struct adf_mstate_mgr *mgr)
+{
+ return mgr->buf + mgr->size - mgr->state;
+}
+
+void adf_mstate_mgr_init(struct adf_mstate_mgr *mgr, u8 *buf, u32 size)
+{
+ mgr->buf = buf;
+ mgr->state = buf;
+ mgr->size = size;
+ mgr->n_sects = 0;
+};
+
+struct adf_mstate_mgr *adf_mstate_mgr_new(u8 *buf, u32 size)
+{
+ struct adf_mstate_mgr *mgr;
+
+ mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+ if (!mgr)
+ return NULL;
+
+ adf_mstate_mgr_init(mgr, buf, size);
+
+ return mgr;
+}
+
+void adf_mstate_mgr_destroy(struct adf_mstate_mgr *mgr)
+{
+ kfree(mgr);
+}
+
+void adf_mstate_mgr_init_from_parent(struct adf_mstate_mgr *mgr,
+ struct adf_mstate_mgr *p_mgr)
+{
+ adf_mstate_mgr_init(mgr, p_mgr->state,
+ p_mgr->size - adf_mstate_state_size(p_mgr));
+}
+
+void adf_mstate_mgr_init_from_psect(struct adf_mstate_mgr *mgr,
+ struct adf_mstate_sect_h *p_sect)
+{
+ adf_mstate_mgr_init(mgr, p_sect->state, p_sect->size);
+ mgr->n_sects = p_sect->sub_sects;
+}
+
+static void adf_mstate_preamble_init(struct adf_mstate_preh *preamble)
+{
+ preamble->magic = ADF_MSTATE_MAGIC;
+ preamble->version = ADF_MSTATE_VERSION;
+ preamble->preh_len = sizeof(*preamble);
+ preamble->size = 0;
+ preamble->n_sects = 0;
+}
+
+/* default preambles checker */
+static int adf_mstate_preamble_def_checker(struct adf_mstate_preh *preamble,
+ void *opaque)
+{
+ struct adf_mstate_mgr *mgr = opaque;
+
+ if (preamble->magic != ADF_MSTATE_MAGIC ||
+ preamble->version > ADF_MSTATE_VERSION ||
+ preamble->preh_len > mgr->size) {
+ pr_debug("QAT: LM - Invalid state (magic=%#x, version=%#x, hlen=%u), state_size=%u\n",
+ preamble->magic, preamble->version, preamble->preh_len,
+ mgr->size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+struct adf_mstate_preh *adf_mstate_preamble_add(struct adf_mstate_mgr *mgr)
+{
+ struct adf_mstate_preh *pre = (struct adf_mstate_preh *)mgr->buf;
+
+ if (adf_mstate_avail_room(mgr) < sizeof(*pre)) {
+ pr_err("QAT: LM - Not enough space for preamble\n");
+ return NULL;
+ }
+
+ adf_mstate_preamble_init(pre);
+ mgr->state += pre->preh_len;
+
+ return pre;
+}
+
+int adf_mstate_preamble_update(struct adf_mstate_mgr *mgr,
+ struct adf_mstate_preh *preamble)
+{
+ preamble->size = adf_mstate_state_size(mgr) - preamble->preh_len;
+ preamble->n_sects = mgr->n_sects;
+
+ return 0;
+}
+
+static void adf_mstate_dump_sect(struct adf_mstate_sect_h *sect,
+ const char *prefix)
+{
+ pr_debug("QAT: LM - %s QAT state section %s\n", prefix, sect->id);
+ print_hex_dump_debug("h-", DUMP_PREFIX_OFFSET, 16, 2, sect,
+ sizeof(*sect), true);
+ print_hex_dump_debug("s-", DUMP_PREFIX_OFFSET, 16, 2, sect->state,
+ sect->size, true);
+}
+
+static inline void __adf_mstate_sect_update(struct adf_mstate_mgr *mgr,
+ struct adf_mstate_sect_h *sect,
+ u32 size,
+ u32 n_subsects)
+{
+ sect->size += size;
+ sect->sub_sects += n_subsects;
+ mgr->n_sects++;
+ mgr->state += sect->size;
+
+ adf_mstate_dump_sect(sect, "Add");
+}
+
+void adf_mstate_sect_update(struct adf_mstate_mgr *p_mgr,
+ struct adf_mstate_mgr *curr_mgr,
+ struct adf_mstate_sect_h *sect)
+{
+ __adf_mstate_sect_update(p_mgr, sect, adf_mstate_state_size(curr_mgr),
+ curr_mgr->n_sects);
+}
+
+static struct adf_mstate_sect_h *adf_mstate_sect_add_header(struct adf_mstate_mgr *mgr,
+ const char *id)
+{
+ struct adf_mstate_sect_h *sect = (struct adf_mstate_sect_h *)(mgr->state);
+
+ if (adf_mstate_avail_room(mgr) < sizeof(*sect)) {
+ pr_debug("QAT: LM - Not enough space for header of QAT state sect %s\n", id);
+ return NULL;
+ }
+
+ strscpy(sect->id, id, sizeof(sect->id));
+ sect->size = 0;
+ sect->sub_sects = 0;
+ mgr->state += sizeof(*sect);
+
+ return sect;
+}
+
+struct adf_mstate_sect_h *adf_mstate_sect_add_vreg(struct adf_mstate_mgr *mgr,
+ const char *id,
+ struct adf_mstate_vreginfo *info)
+{
+ struct adf_mstate_sect_h *sect;
+
+ sect = adf_mstate_sect_add_header(mgr, id);
+ if (!sect)
+ return NULL;
+
+ if (adf_mstate_avail_room(mgr) < info->size) {
+ pr_debug("QAT: LM - Not enough space for QAT state sect %s, requires %u\n",
+ id, info->size);
+ return NULL;
+ }
+
+ memcpy(sect->state, info->addr, info->size);
+ __adf_mstate_sect_update(mgr, sect, info->size, 0);
+
+ return sect;
+}
+
+struct adf_mstate_sect_h *adf_mstate_sect_add(struct adf_mstate_mgr *mgr,
+ const char *id,
+ adf_mstate_populate populate,
+ void *opaque)
+{
+ struct adf_mstate_mgr sub_sects_mgr;
+ struct adf_mstate_sect_h *sect;
+ int avail_room, size;
+
+ sect = adf_mstate_sect_add_header(mgr, id);
+ if (!sect)
+ return NULL;
+
+ if (!populate)
+ return sect;
+
+ avail_room = adf_mstate_avail_room(mgr);
+ adf_mstate_mgr_init_from_parent(&sub_sects_mgr, mgr);
+
+ size = (*populate)(&sub_sects_mgr, sect->state, avail_room, opaque);
+ if (size < 0)
+ return NULL;
+
+ size += adf_mstate_state_size(&sub_sects_mgr);
+ if (avail_room < size) {
+ pr_debug("QAT: LM - Not enough space for QAT state sect %s, requires %u\n",
+ id, size);
+ return NULL;
+ }
+ __adf_mstate_sect_update(mgr, sect, size, sub_sects_mgr.n_sects);
+
+ return sect;
+}
+
+static int adf_mstate_sect_validate(struct adf_mstate_mgr *mgr)
+{
+ struct adf_mstate_sect_h *start = (struct adf_mstate_sect_h *)mgr->state;
+ struct adf_mstate_sect_h *sect = start;
+ u64 end;
+ int i;
+
+ end = (uintptr_t)mgr->buf + mgr->size;
+ for (i = 0; i < mgr->n_sects; i++) {
+ uintptr_t s_start = (uintptr_t)sect->state;
+ uintptr_t s_end = s_start + sect->size;
+
+ if (s_end < s_start || s_end > end) {
+ pr_debug("QAT: LM - Corrupted state section (index=%u, size=%u) in state_mgr (size=%u, secs=%u)\n",
+ i, sect->size, mgr->size, mgr->n_sects);
+ return -EINVAL;
+ }
+ sect = (struct adf_mstate_sect_h *)s_end;
+ }
+
+ pr_debug("QAT: LM - Scanned section (last child=%s, size=%lu) in state_mgr (size=%u, secs=%u)\n",
+ start->id, sizeof(struct adf_mstate_sect_h) * (ulong)(sect - start),
+ mgr->size, mgr->n_sects);
+
+ return 0;
+}
+
+int adf_mstate_mgr_init_from_remote(struct adf_mstate_mgr *mgr, u8 *buf, u32 size,
+ adf_mstate_preamble_checker pre_checker,
+ void *opaque)
+{
+ struct adf_mstate_preh *pre;
+ int ret;
+
+ adf_mstate_mgr_init(mgr, buf, size);
+ pre = (struct adf_mstate_preh *)(mgr->buf);
+
+ pr_debug("QAT: LM - Dump state preambles\n");
+ print_hex_dump_debug("", DUMP_PREFIX_OFFSET, 16, 2, pre, pre->preh_len, 0);
+
+ if (pre_checker)
+ ret = (*pre_checker)(pre, opaque);
+ else
+ ret = adf_mstate_preamble_def_checker(pre, mgr);
+ if (ret)
+ return ret;
+
+ mgr->state = mgr->buf + pre->preh_len;
+ mgr->n_sects = pre->n_sects;
+
+ return adf_mstate_sect_validate(mgr);
+}
+
+struct adf_mstate_sect_h *adf_mstate_sect_lookup(struct adf_mstate_mgr *mgr,
+ const char *id,
+ adf_mstate_action action,
+ void *opaque)
+{
+ struct adf_mstate_sect_h *sect = (struct adf_mstate_sect_h *)mgr->state;
+ struct adf_mstate_mgr sub_sects_mgr;
+ int i, ret;
+
+ for (i = 0; i < mgr->n_sects; i++) {
+ if (!strncmp(sect->id, id, sizeof(sect->id)))
+ goto found;
+
+ sect = (struct adf_mstate_sect_h *)(sect->state + sect->size);
+ }
+
+ return NULL;
+
+found:
+ adf_mstate_dump_sect(sect, "Found");
+
+ adf_mstate_mgr_init_from_psect(&sub_sects_mgr, sect);
+ if (sect->sub_sects && adf_mstate_sect_validate(&sub_sects_mgr))
+ return NULL;
+
+ if (!action)
+ return sect;
+
+ ret = (*action)(&sub_sects_mgr, sect->state, sect->size, opaque);
+ if (ret)
+ return NULL;
+
+ return sect;
+}
diff --git a/drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.h b/drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.h
new file mode 100644
index 000000000000..4f7d01d0055c
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_mstate_mgr.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2024 Intel Corporation */
+
+#ifndef ADF_MSTATE_MGR_H
+#define ADF_MSTATE_MGR_H
+
+#define ADF_MSTATE_ID_LEN 8
+
+#define ADF_MSTATE_ETRB_IDS "ETRBAR"
+#define ADF_MSTATE_MISCB_IDS "MISCBAR"
+#define ADF_MSTATE_EXTB_IDS "EXTBAR"
+#define ADF_MSTATE_GEN_IDS "GENER"
+#define ADF_MSTATE_SECTION_NUM 4
+
+#define ADF_MSTATE_BANK_IDX_IDS "bnk"
+
+#define ADF_MSTATE_ETR_REGS_IDS "mregs"
+#define ADF_MSTATE_VINTSRC_IDS "visrc"
+#define ADF_MSTATE_VINTMSK_IDS "vimsk"
+#define ADF_MSTATE_SLA_IDS "sla"
+#define ADF_MSTATE_IOV_INIT_IDS "iovinit"
+#define ADF_MSTATE_COMPAT_VER_IDS "compver"
+#define ADF_MSTATE_GEN_CAP_IDS "gencap"
+#define ADF_MSTATE_GEN_SVCMAP_IDS "svcmap"
+#define ADF_MSTATE_GEN_EXTDC_IDS "extdc"
+#define ADF_MSTATE_VINTSRC_PF2VM_IDS "vispv"
+#define ADF_MSTATE_VINTMSK_PF2VM_IDS "vimpv"
+#define ADF_MSTATE_VM2PF_IDS "vm2pf"
+#define ADF_MSTATE_PF2VM_IDS "pf2vm"
+
+struct adf_mstate_mgr {
+ u8 *buf;
+ u8 *state;
+ u32 size;
+ u32 n_sects;
+};
+
+struct adf_mstate_preh {
+ u32 magic;
+ u32 version;
+ u16 preh_len;
+ u16 n_sects;
+ u32 size;
+};
+
+struct adf_mstate_vreginfo {
+ void *addr;
+ u32 size;
+};
+
+struct adf_mstate_sect_h;
+
+typedef int (*adf_mstate_preamble_checker)(struct adf_mstate_preh *preamble, void *opa);
+typedef int (*adf_mstate_populate)(struct adf_mstate_mgr *sub_mgr, u8 *buf,
+ u32 size, void *opa);
+typedef int (*adf_mstate_action)(struct adf_mstate_mgr *sub_mgr, u8 *buf, u32 size,
+ void *opa);
+
+struct adf_mstate_mgr *adf_mstate_mgr_new(u8 *buf, u32 size);
+void adf_mstate_mgr_destroy(struct adf_mstate_mgr *mgr);
+void adf_mstate_mgr_init(struct adf_mstate_mgr *mgr, u8 *buf, u32 size);
+void adf_mstate_mgr_init_from_parent(struct adf_mstate_mgr *mgr,
+ struct adf_mstate_mgr *p_mgr);
+void adf_mstate_mgr_init_from_psect(struct adf_mstate_mgr *mgr,
+ struct adf_mstate_sect_h *p_sect);
+int adf_mstate_mgr_init_from_remote(struct adf_mstate_mgr *mgr,
+ u8 *buf, u32 size,
+ adf_mstate_preamble_checker checker,
+ void *opaque);
+struct adf_mstate_preh *adf_mstate_preamble_add(struct adf_mstate_mgr *mgr);
+int adf_mstate_preamble_update(struct adf_mstate_mgr *mgr,
+ struct adf_mstate_preh *preamble);
+void adf_mstate_sect_update(struct adf_mstate_mgr *p_mgr,
+ struct adf_mstate_mgr *curr_mgr,
+ struct adf_mstate_sect_h *sect);
+struct adf_mstate_sect_h *adf_mstate_sect_add_vreg(struct adf_mstate_mgr *mgr,
+ const char *id,
+ struct adf_mstate_vreginfo *info);
+struct adf_mstate_sect_h *adf_mstate_sect_add(struct adf_mstate_mgr *mgr,
+ const char *id,
+ adf_mstate_populate populate,
+ void *opaque);
+struct adf_mstate_sect_h *adf_mstate_sect_lookup(struct adf_mstate_mgr *mgr,
+ const char *id,
+ adf_mstate_action action,
+ void *opaque);
+#endif
diff --git a/drivers/crypto/intel/qat/qat_common/adf_sriov.c b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
index f44025bb6f99..bc0966f3c7b2 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_sriov.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
@@ -26,10 +26,12 @@ static void adf_iov_send_resp(struct work_struct *work)
u32 vf_nr = vf_info->vf_nr;
bool ret;

+ mutex_lock(&vf_info->pfvf_mig_lock);
ret = adf_recv_and_handle_vf2pf_msg(accel_dev, vf_nr);
if (ret)
/* re-enable interrupt on PF from this VF */
adf_enable_vf2pf_interrupts(accel_dev, 1 << vf_nr);
+ mutex_unlock(&vf_info->pfvf_mig_lock);

kfree(pf2vf_resp);
}
@@ -63,6 +65,7 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev)
vf_info->vf_compat_ver = 0;

mutex_init(&vf_info->pf2vf_lock);
+ mutex_init(&vf_info->pfvf_mig_lock);
ratelimit_state_init(&vf_info->vf2pf_ratelimit,
ADF_VF2PF_RATELIMIT_INTERVAL,
ADF_VF2PF_RATELIMIT_BURST);
@@ -112,8 +115,10 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev)
if (hw_data->configure_iov_threads)
hw_data->configure_iov_threads(accel_dev, false);

- for (i = 0, vf = accel_dev->pf.vf_info; i < totalvfs; i++, vf++)
+ for (i = 0, vf = accel_dev->pf.vf_info; i < totalvfs; i++, vf++) {
mutex_destroy(&vf->pf2vf_lock);
+ mutex_destroy(&vf->pfvf_mig_lock);
+ }

kfree(accel_dev->pf.vf_info);
accel_dev->pf.vf_info = NULL;
--
2.18.2


2024-02-04 07:52:03

by Kamlesh Gurudasani

[permalink] [raw]
Subject: Re: [EXTERNAL] [PATCH 07/10] crypto: qat - add bank save and restore flows

Xin Zeng <[email protected]> writes:

>
> From: Siming Wan <[email protected]>
>
> Add logic to save, restore, quiesce and drain a ring bank for QAT GEN4
> devices.
> This allows to save and restore the state of a Virtual Function (VF) and
> will be used to implement VM live migration.
>
> Signed-off-by: Siming Wan <[email protected]>
> Reviewed-by: Giovanni Cabiddu <[email protected]>
> Reviewed-by: Xin Zeng <[email protected]>
> Signed-off-by: Xin Zeng <[email protected]>
...
> +#define CHECK_STAT(op, expect_val, name, args...) \
> +({ \
> + u32 __expect_val = (expect_val); \
> + u32 actual_val = op(args); \
> + if (__expect_val != actual_val) \
> + pr_err("QAT: Fail to restore %s register. Expected 0x%x, but actual is 0x%x\n", \
> + name, __expect_val, actual_val); \
> + (__expect_val == actual_val) ? 0 : -EINVAL; \
I was wondering if this could be done as follows, which saves the repeated comparison.

(__expect_val == actual_val) ? 0 : \
	(pr_err("QAT: Fail to restore %s register. Expected 0x%x, but actual is 0x%x\n", \
		name, __expect_val, actual_val), -EINVAL); \

Regards,
Kamlesh

> +})
...
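
Folding that suggestion back into the quoted macro gives roughly the
following (untested sketch, same semantics as the original):

#define CHECK_STAT(op, expect_val, name, args...) \
({ \
	u32 __expect_val = (expect_val); \
	u32 actual_val = op(args); \
	(__expect_val == actual_val) ? 0 : \
		(pr_err("QAT: Fail to restore %s register. Expected 0x%x, but actual is 0x%x\n", \
			name, __expect_val, actual_val), -EINVAL); \
})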

2024-02-06 12:55:45

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Thu, Feb 01, 2024 at 11:33:37PM +0800, Xin Zeng wrote:

> +static int qat_vf_pci_init_dev(struct vfio_device *core_vdev)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> + struct qat_vf_core_device, core_device.vdev);
> + struct qat_migdev_ops *ops;
> + struct qat_mig_dev *mdev;
> + struct pci_dev *parent;
> + int ret, vf_id;
> +
> + core_vdev->migration_flags = VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P;
> + core_vdev->mig_ops = &qat_vf_pci_mig_ops;
> +
> + ret = vfio_pci_core_init_dev(core_vdev);
> + if (ret)
> + return ret;
> +
> + mutex_init(&qat_vdev->state_mutex);
> + spin_lock_init(&qat_vdev->reset_lock);
> +
> + parent = qat_vdev->core_device.pdev->physfn;
> + vf_id = pci_iov_vf_id(qat_vdev->core_device.pdev);
> + if (!parent || vf_id < 0) {
> + ret = -ENODEV;
> + goto err_rel;
> + }
> +
> + mdev = qat_vfmig_create(parent, vf_id);
> + if (IS_ERR(mdev)) {
> + ret = PTR_ERR(mdev);
> + goto err_rel;
> + }
> +
> + ops = mdev->ops;
> + if (!ops || !ops->init || !ops->cleanup ||
> + !ops->open || !ops->close ||
> + !ops->save_state || !ops->load_state ||
> + !ops->suspend || !ops->resume) {
> + ret = -EIO;
> + dev_err(&parent->dev, "Incomplete device migration ops structure!");
> + goto err_destroy;
> + }

Why are there ops pointers here? I would expect this should just be
direct function calls to the PF QAT driver.
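
(For illustration only: the alternative described above would look roughly
like the sketch below; the qat_vfmig_*() helpers are hypothetical direct
exports from the PF driver, not functions defined in this series.)

	ret = qat_vfmig_open(qat_vdev->mdev);		/* instead of mdev->ops->open(mdev) */
	...
	ret = qat_vfmig_save_state(qat_vdev->mdev);	/* instead of mdev->ops->save_state(mdev) */
	...
	qat_vfmig_close(qat_vdev->mdev);		/* instead of mdev->ops->close(mdev) */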

> +static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
> +{
> + struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
> +
> + if (!qat_vdev->core_device.vdev.mig_ops)
> + return;
> +
> + /*
> + * As the higher VFIO layers are holding locks across reset and using
> + * those same locks with the mm_lock we need to prevent ABBA deadlock
> + * with the state_mutex and mm_lock.
> + * In case the state_mutex was taken already we defer the cleanup work
> + * to the unlock flow of the other running context.
> + */
> + spin_lock(&qat_vdev->reset_lock);
> + qat_vdev->deferred_reset = true;
> + if (!mutex_trylock(&qat_vdev->state_mutex)) {
> + spin_unlock(&qat_vdev->reset_lock);
> + return;
> + }
> + spin_unlock(&qat_vdev->reset_lock);
> + qat_vf_state_mutex_unlock(qat_vdev);
> +}

Do you really need this? I thought this ugly thing was going to be a
uniquely mlx5 thing..

Jason

2024-02-06 21:14:59

by Alex Williamson

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Thu, 1 Feb 2024 23:33:37 +0800
Xin Zeng <[email protected]> wrote:

> Add vfio pci driver for Intel QAT VF devices.
>
> This driver uses vfio_pci_core to register to the VFIO subsystem. It
> acts as a vfio agent and interacts with the QAT PF driver to implement
> VF live migration.
>
> Co-developed-by: Yahui Cao <[email protected]>
> Signed-off-by: Yahui Cao <[email protected]>
> Signed-off-by: Xin Zeng <[email protected]>
> Reviewed-by: Giovanni Cabiddu <[email protected]>
> ---
> MAINTAINERS | 8 +
> drivers/vfio/pci/Kconfig | 2 +
> drivers/vfio/pci/Makefile | 2 +
> drivers/vfio/pci/intel/qat/Kconfig | 13 +
> drivers/vfio/pci/intel/qat/Makefile | 4 +
> drivers/vfio/pci/intel/qat/main.c | 572 ++++++++++++++++++++++++++++
> 6 files changed, 601 insertions(+)
> create mode 100644 drivers/vfio/pci/intel/qat/Kconfig
> create mode 100644 drivers/vfio/pci/intel/qat/Makefile
> create mode 100644 drivers/vfio/pci/intel/qat/main.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8d1052fa6a69..c1d3e4cb3892 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -23095,6 +23095,14 @@ S: Maintained
> F: Documentation/networking/device_drivers/ethernet/amd/pds_vfio_pci.rst
> F: drivers/vfio/pci/pds/
>
> +VFIO QAT PCI DRIVER
> +M: Xin Zeng <[email protected]>
> +M: Giovanni Cabiddu <[email protected]>
> +L: [email protected]
> +L: [email protected]
> +S: Supported
> +F: drivers/vfio/pci/intel/qat/
> +
> VFIO PLATFORM DRIVER
> M: Eric Auger <[email protected]>
> L: [email protected]
> diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig
> index 18c397df566d..329d25c53274 100644
> --- a/drivers/vfio/pci/Kconfig
> +++ b/drivers/vfio/pci/Kconfig
> @@ -67,4 +67,6 @@ source "drivers/vfio/pci/pds/Kconfig"
>
> source "drivers/vfio/pci/virtio/Kconfig"
>
> +source "drivers/vfio/pci/intel/qat/Kconfig"
> +
> endmenu
> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> index 046139a4eca5..a87b6b43ce1c 100644
> --- a/drivers/vfio/pci/Makefile
> +++ b/drivers/vfio/pci/Makefile
> @@ -15,3 +15,5 @@ obj-$(CONFIG_HISI_ACC_VFIO_PCI) += hisilicon/
> obj-$(CONFIG_PDS_VFIO_PCI) += pds/
>
> obj-$(CONFIG_VIRTIO_VFIO_PCI) += virtio/
> +
> +obj-$(CONFIG_QAT_VFIO_PCI) += intel/qat/
> diff --git a/drivers/vfio/pci/intel/qat/Kconfig b/drivers/vfio/pci/intel/qat/Kconfig
> new file mode 100644
> index 000000000000..71b28ac0bf6a
> --- /dev/null
> +++ b/drivers/vfio/pci/intel/qat/Kconfig
> @@ -0,0 +1,13 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +config QAT_VFIO_PCI
> + tristate "VFIO support for QAT VF PCI devices"
> + select VFIO_PCI_CORE
> + depends on CRYPTO_DEV_QAT
> + depends on CRYPTO_DEV_QAT_4XXX

CRYPTO_DEV_QAT_4XXX already selects CRYPTO_DEV_QAT, so is it necessary to
depend on CRYPTO_DEV_QAT here?

> + help
> + This provides migration support for Intel(R) QAT Virtual Function
> + using the VFIO framework.
> +
> + To compile this as a module, choose M here: the module
> + will be called qat_vfio_pci. If you don't know what to do here,
> + say N.
> diff --git a/drivers/vfio/pci/intel/qat/Makefile b/drivers/vfio/pci/intel/qat/Makefile
> new file mode 100644
> index 000000000000..9289ae4c51bf
> --- /dev/null
> +++ b/drivers/vfio/pci/intel/qat/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +obj-$(CONFIG_QAT_VFIO_PCI) += qat_vfio_pci.o
> +qat_vfio_pci-y := main.o
> +

Blank line at end of file.

> diff --git a/drivers/vfio/pci/intel/qat/main.c b/drivers/vfio/pci/intel/qat/main.c
> new file mode 100644
> index 000000000000..85d0ed701397
> --- /dev/null
> +++ b/drivers/vfio/pci/intel/qat/main.c
> @@ -0,0 +1,572 @@
...
> +static int qat_vf_pci_init_dev(struct vfio_device *core_vdev)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> + struct qat_vf_core_device, core_device.vdev);
> + struct qat_migdev_ops *ops;
> + struct qat_mig_dev *mdev;
> + struct pci_dev *parent;
> + int ret, vf_id;
> +
> + core_vdev->migration_flags = VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P;

AFAICT it's always good practice to support VFIO_MIGRATION_PRE_COPY,
even if only to leave yourself an option to mark an incompatible ABI
change in the data stream that can be checked before the stop-copy
phase of migration. See for example the mtty sample driver. Thanks,

Alex
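
(A rough, untested illustration of the point above: advertising PRE_COPY
would start with something like the assignment below, and the saving file
would additionally need to handle the VFIO_MIG_GET_PRECOPY_INFO ioctl and
the PRE_COPY device states.)

	core_vdev->migration_flags = VFIO_MIGRATION_STOP_COPY |
				     VFIO_MIGRATION_P2P |
				     VFIO_MIGRATION_PRE_COPY;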

> + core_vdev->mig_ops = &qat_vf_pci_mig_ops;
> +
> + ret = vfio_pci_core_init_dev(core_vdev);
> + if (ret)
> + return ret;
> +
> + mutex_init(&qat_vdev->state_mutex);
> + spin_lock_init(&qat_vdev->reset_lock);
> +
> + parent = qat_vdev->core_device.pdev->physfn;
> + vf_id = pci_iov_vf_id(qat_vdev->core_device.pdev);
> + if (!parent || vf_id < 0) {
> + ret = -ENODEV;
> + goto err_rel;
> + }
> +
> + mdev = qat_vfmig_create(parent, vf_id);
> + if (IS_ERR(mdev)) {
> + ret = PTR_ERR(mdev);
> + goto err_rel;
> + }
> +
> + ops = mdev->ops;
> + if (!ops || !ops->init || !ops->cleanup ||
> + !ops->open || !ops->close ||
> + !ops->save_state || !ops->load_state ||
> + !ops->suspend || !ops->resume) {
> + ret = -EIO;
> + dev_err(&parent->dev, "Incomplete device migration ops structure!");
> + goto err_destroy;
> + }
> + ret = ops->init(mdev);
> + if (ret)
> + goto err_destroy;
> +
> + qat_vdev->mdev = mdev;
> +
> + return 0;
> +
> +err_destroy:
> + qat_vfmig_destroy(mdev);
> +err_rel:
> + vfio_pci_core_release_dev(core_vdev);
> + return ret;
> +}


Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices



> Add vfio pci driver for Intel QAT VF devices.
>
> This driver uses vfio_pci_core to register to the VFIO subsystem. It
> acts as a vfio agent and interacts with the QAT PF driver to implement
> VF live migration.
>
> Co-developed-by: Yahui Cao <[email protected]>
> Signed-off-by: Yahui Cao <[email protected]>
> Signed-off-by: Xin Zeng <[email protected]>
> Reviewed-by: Giovanni Cabiddu <[email protected]>
> ---
...
> diff --git a/drivers/vfio/pci/intel/qat/main.c
> b/drivers/vfio/pci/intel/qat/main.c
> new file mode 100644
> index 000000000000..85d0ed701397
> --- /dev/null
> +++ b/drivers/vfio/pci/intel/qat/main.c
> @@ -0,0 +1,572 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2024 Intel Corporation */
> +
> +#include <linux/anon_inodes.h>
> +#include <linux/container_of.h>
> +#include <linux/device.h>
> +#include <linux/file.h>
> +#include <linux/init.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/mutex.h>
> +#include <linux/pci.h>
> +#include <linux/sizes.h>
> +#include <linux/types.h>
> +#include <linux/uaccess.h>
> +#include <linux/vfio_pci_core.h>
> +#include <linux/qat/qat_mig_dev.h>
> +
> +struct qat_vf_migration_file {
> + struct file *filp;
> + /* protects migration region context */
> + struct mutex lock;
> + bool disabled;
> + struct qat_mig_dev *mdev;
> +};
> +
> +struct qat_vf_core_device {
> + struct vfio_pci_core_device core_device;
> + struct qat_mig_dev *mdev;
> + /* protects migration state */
> + struct mutex state_mutex;
> + enum vfio_device_mig_state mig_state;
> + /* protects reset down flow */
> + spinlock_t reset_lock;
> + bool deferred_reset;
> + struct qat_vf_migration_file *resuming_migf;
> + struct qat_vf_migration_file *saving_migf;
> +};
> +
> +static int qat_vf_pci_open_device(struct vfio_device *core_vdev)
> +{
> + struct qat_vf_core_device *qat_vdev =
> + container_of(core_vdev, struct qat_vf_core_device,
> + core_device.vdev);
> + struct vfio_pci_core_device *vdev = &qat_vdev->core_device;
> + int ret;
> +
> + ret = vfio_pci_core_enable(vdev);
> + if (ret)
> + return ret;
> +
> + ret = qat_vdev->mdev->ops->open(qat_vdev->mdev);
> + if (ret) {
> + vfio_pci_core_disable(vdev);
> + return ret;
> + }
> + qat_vdev->mig_state = VFIO_DEVICE_STATE_RUNNING;
> +
> + vfio_pci_core_finish_enable(vdev);
> +
> + return 0;
> +}
> +
> +static void qat_vf_disable_fd(struct qat_vf_migration_file *migf)
> +{
> + mutex_lock(&migf->lock);
> + migf->disabled = true;
> + migf->filp->f_pos = 0;
> + mutex_unlock(&migf->lock);
> +}
> +
> +static void qat_vf_disable_fds(struct qat_vf_core_device *qat_vdev)
> +{
> + if (qat_vdev->resuming_migf) {
> + qat_vf_disable_fd(qat_vdev->resuming_migf);
> + fput(qat_vdev->resuming_migf->filp);
> + qat_vdev->resuming_migf = NULL;
> + }
> +
> + if (qat_vdev->saving_migf) {
> + qat_vf_disable_fd(qat_vdev->saving_migf);
> + fput(qat_vdev->saving_migf->filp);
> + qat_vdev->saving_migf = NULL;
> + }
> +}
> +
> +static void qat_vf_pci_close_device(struct vfio_device *core_vdev)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> + struct qat_vf_core_device, core_device.vdev);
> +
> + qat_vdev->mdev->ops->close(qat_vdev->mdev);
> + qat_vf_disable_fds(qat_vdev);
> + vfio_pci_core_close_device(core_vdev);
> +}
> +
> +static ssize_t qat_vf_save_read(struct file *filp, char __user *buf,
> + size_t len, loff_t *pos)
> +{
> + struct qat_vf_migration_file *migf = filp->private_data;
> + ssize_t done = 0;
> + loff_t *offs;
> + int ret;
> +
> + if (pos)
> + return -ESPIPE;
> + offs = &filp->f_pos;
> +
> + mutex_lock(&migf->lock);
> + if (*offs > migf->mdev->state_size || *offs < 0) {
> + done = -EINVAL;
> + goto out_unlock;
> + }
> +
> + if (migf->disabled) {
> + done = -ENODEV;
> + goto out_unlock;
> + }
> +
> + len = min_t(size_t, migf->mdev->state_size - *offs, len);
> + if (len) {
> + ret = copy_to_user(buf, migf->mdev->state + *offs, len);
> + if (ret) {
> + done = -EFAULT;
> + goto out_unlock;
> + }
> + *offs += len;
> + done = len;
> + }
> +
> +out_unlock:
> + mutex_unlock(&migf->lock);
> + return done;
> +}
> +
> +static int qat_vf_release_file(struct inode *inode, struct file *filp)
> +{
> + struct qat_vf_migration_file *migf = filp->private_data;
> +
> + qat_vf_disable_fd(migf);
> + mutex_destroy(&migf->lock);
> + kfree(migf);
> +
> + return 0;
> +}
> +
> +static const struct file_operations qat_vf_save_fops = {
> + .owner = THIS_MODULE,
> + .read = qat_vf_save_read,
> + .release = qat_vf_release_file,
> + .llseek = no_llseek,
> +};
> +
> +static struct qat_vf_migration_file *
> +qat_vf_save_device_data(struct qat_vf_core_device *qat_vdev)
> +{
> + struct qat_vf_migration_file *migf;
> + int ret;
> +
> + migf = kzalloc(sizeof(*migf), GFP_KERNEL);
> + if (!migf)
> + return ERR_PTR(-ENOMEM);
> +
> + migf->filp = anon_inode_getfile("qat_vf_mig", &qat_vf_save_fops, migf, O_RDONLY);
> + ret = PTR_ERR_OR_ZERO(migf->filp);
> + if (ret) {
> + kfree(migf);
> + return ERR_PTR(ret);
> + }
> +
> + stream_open(migf->filp->f_inode, migf->filp);
> + mutex_init(&migf->lock);
> +
> + ret = qat_vdev->mdev->ops->save_state(qat_vdev->mdev);
> + if (ret) {
> + fput(migf->filp);
> + kfree(migf);

Probably don't need that kfree(migf) here, as fput() --> qat_vf_release_file() will do that.

> + return ERR_PTR(ret);
> + }
> +
> + migf->mdev = qat_vdev->mdev;
> +
> + return migf;
> +}
> +
> +static ssize_t qat_vf_resume_write(struct file *filp, const char __user *buf,
> + size_t len, loff_t *pos)
> +{
> + struct qat_vf_migration_file *migf = filp->private_data;
> + loff_t end, *offs;
> + ssize_t done = 0;
> + int ret;
> +
> + if (pos)
> + return -ESPIPE;
> + offs = &filp->f_pos;
> +
> + if (*offs < 0 ||
> + check_add_overflow((loff_t)len, *offs, &end))
> + return -EOVERFLOW;
> +
> + if (end > migf->mdev->state_size)
> + return -ENOMEM;
> +
> + mutex_lock(&migf->lock);
> + if (migf->disabled) {
> + done = -ENODEV;
> + goto out_unlock;
> + }
> +
> + ret = copy_from_user(migf->mdev->state + *offs, buf, len);
> + if (ret) {
> + done = -EFAULT;
> + goto out_unlock;
> + }
> + *offs += len;
> + done = len;
> +
> +out_unlock:
> + mutex_unlock(&migf->lock);
> + return done;
> +}
> +
> +static const struct file_operations qat_vf_resume_fops = {
> + .owner = THIS_MODULE,
> + .write = qat_vf_resume_write,
> + .release = qat_vf_release_file,
> + .llseek = no_llseek,
> +};
> +
> +static struct qat_vf_migration_file *
> +qat_vf_resume_device_data(struct qat_vf_core_device *qat_vdev)
> +{
> + struct qat_vf_migration_file *migf;
> + int ret;
> +
> + migf = kzalloc(sizeof(*migf), GFP_KERNEL);
> + if (!migf)
> + return ERR_PTR(-ENOMEM);
> +
> + migf->filp = anon_inode_getfile("qat_vf_mig", &qat_vf_resume_fops, migf, O_WRONLY);
> + ret = PTR_ERR_OR_ZERO(migf->filp);
> + if (ret) {
> + kfree(migf);
> + return ERR_PTR(ret);
> + }
> +
> + migf->mdev = qat_vdev->mdev;
> + stream_open(migf->filp->f_inode, migf->filp);
> + mutex_init(&migf->lock);
> +
> + return migf;
> +}
> +
> +static int qat_vf_load_device_data(struct qat_vf_core_device *qat_vdev)
> +{
> + return qat_vdev->mdev->ops->load_state(qat_vdev->mdev);
> +}
> +
> +static struct file *qat_vf_pci_step_device_state(struct qat_vf_core_device *qat_vdev, u32 new)
> +{
> + u32 cur = qat_vdev->mig_state;
> + int ret;
> +
> + if ((cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_RUNNING_P2P)) {
> + ret = qat_vdev->mdev->ops->suspend(qat_vdev->mdev);
> + if (ret)
> + return ERR_PTR(ret);
> + return NULL;
> + }
> +
> + if ((cur == VFIO_DEVICE_STATE_RUNNING_P2P && new == VFIO_DEVICE_STATE_STOP) ||
> +     (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_RUNNING_P2P))
> + return NULL;
> +
> + if (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_STOP_COPY) {
> + struct qat_vf_migration_file *migf;
> +
> + migf = qat_vf_save_device_data(qat_vdev);
> + if (IS_ERR(migf))
> + return ERR_CAST(migf);
> + get_file(migf->filp);
> + qat_vdev->saving_migf = migf;
> + return migf->filp;
> + }
> +
> + if (cur == VFIO_DEVICE_STATE_STOP_COPY && new == VFIO_DEVICE_STATE_STOP) {
> + qat_vf_disable_fds(qat_vdev);
> + return NULL;
> + }
> +
> + if (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_RESUMING) {
> + struct qat_vf_migration_file *migf;
> +
> + migf = qat_vf_resume_device_data(qat_vdev);
> + if (IS_ERR(migf))
> + return ERR_CAST(migf);
> + get_file(migf->filp);
> + qat_vdev->resuming_migf = migf;
> + return migf->filp;
> + }
> +
> + if (cur == VFIO_DEVICE_STATE_RESUMING && new == VFIO_DEVICE_STATE_STOP) {
> + ret = qat_vf_load_device_data(qat_vdev);
> + if (ret)
> + return ERR_PTR(ret);
> +
> + qat_vf_disable_fds(qat_vdev);
> + return NULL;
> + }
> +
> + if (cur == VFIO_DEVICE_STATE_RUNNING_P2P && new == VFIO_DEVICE_STATE_RUNNING) {
> + qat_vdev->mdev->ops->resume(qat_vdev->mdev);
> + return NULL;
> + }
> +
> + /* vfio_mig_get_next_state() does not use arcs other than the above */
> + WARN_ON(true);
> + return ERR_PTR(-EINVAL);
> +}
> +
> +static void qat_vf_state_mutex_unlock(struct qat_vf_core_device *qat_vdev)
> +{
> +again:
> + spin_lock(&qat_vdev->reset_lock);
> + if (qat_vdev->deferred_reset) {
> + qat_vdev->deferred_reset = false;
> + spin_unlock(&qat_vdev->reset_lock);
> + qat_vdev->mig_state = VFIO_DEVICE_STATE_RUNNING;
> + qat_vf_disable_fds(qat_vdev);
> + goto again;
> + }
> + mutex_unlock(&qat_vdev->state_mutex);
> + spin_unlock(&qat_vdev->reset_lock);
> +}
> +
> +static struct file *qat_vf_pci_set_device_state(struct vfio_device *vdev,
> + enum vfio_device_mig_state new_state)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(vdev,
> + struct qat_vf_core_device, core_device.vdev);
> + enum vfio_device_mig_state next_state;
> + struct file *res = NULL;
> + int ret;
> +
> + mutex_lock(&qat_vdev->state_mutex);
> + while (new_state != qat_vdev->mig_state) {
> + ret = vfio_mig_get_next_state(vdev, qat_vdev->mig_state,
> + new_state, &next_state);
> + if (ret) {
> + res = ERR_PTR(ret);
> + break;
> + }
> + res = qat_vf_pci_step_device_state(qat_vdev, next_state);
> + if (IS_ERR(res))
> + break;
> + qat_vdev->mig_state = next_state;
> + if (WARN_ON(res && new_state != qat_vdev->mig_state)) {
> + fput(res);
> + res = ERR_PTR(-EINVAL);
> + break;
> + }
> + }
> + qat_vf_state_mutex_unlock(qat_vdev);
> +
> + return res;
> +}
> +
> +static int qat_vf_pci_get_device_state(struct vfio_device *vdev,
> + enum vfio_device_mig_state *curr_state)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(vdev,
> + struct qat_vf_core_device, core_device.vdev);
> +
> + mutex_lock(&qat_vdev->state_mutex);
> + *curr_state = qat_vdev->mig_state;
> + qat_vf_state_mutex_unlock(qat_vdev);
> +
> + return 0;
> +}
> +
> +static int qat_vf_pci_get_data_size(struct vfio_device *vdev,
> + unsigned long *stop_copy_length)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(vdev,
> + struct qat_vf_core_device, core_device.vdev);
> +
> + *stop_copy_length = qat_vdev->mdev->state_size;

Do we need a lock here, or is this not changing?

> + return 0;
> +}
> +
> +static const struct vfio_migration_ops qat_vf_pci_mig_ops = {
> + .migration_set_state = qat_vf_pci_set_device_state,
> + .migration_get_state = qat_vf_pci_get_device_state,
> + .migration_get_data_size = qat_vf_pci_get_data_size,
> +};
> +
> +static void qat_vf_pci_release_dev(struct vfio_device *core_vdev)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> + struct qat_vf_core_device, core_device.vdev);
> +
> + qat_vdev->mdev->ops->cleanup(qat_vdev->mdev);
> + qat_vfmig_destroy(qat_vdev->mdev);
> + mutex_destroy(&qat_vdev->state_mutex);
> + vfio_pci_core_release_dev(core_vdev);
> +}
> +
> +static int qat_vf_pci_init_dev(struct vfio_device *core_vdev)
> +{
> + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> + struct qat_vf_core_device, core_device.vdev);
> + struct qat_migdev_ops *ops;
> + struct qat_mig_dev *mdev;
> + struct pci_dev *parent;
> + int ret, vf_id;
> +
> + core_vdev->migration_flags = VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P;
> + core_vdev->mig_ops = &qat_vf_pci_mig_ops;
> +
> + ret = vfio_pci_core_init_dev(core_vdev);
> + if (ret)
> + return ret;
> +
> + mutex_init(&qat_vdev->state_mutex);
> + spin_lock_init(&qat_vdev->reset_lock);
> +
> + parent = qat_vdev->core_device.pdev->physfn;

Can we use pci_physfn() here?

> + vf_id = pci_iov_vf_id(qat_vdev->core_device.pdev);
> + if (!parent || vf_id < 0) {

Also, if pci_iov_vf_id() returns success, I don't think you need to
check for parent and can use it directly below.

> + ret = -ENODEV;
> + goto err_rel;
> + }
> +
> + mdev = qat_vfmig_create(parent, vf_id);
> + if (IS_ERR(mdev)) {
> + ret = PTR_ERR(mdev);
> + goto err_rel;
> + }
> +
> + ops = mdev->ops;
> + if (!ops || !ops->init || !ops->cleanup ||
> + !ops->open || !ops->close ||
> + !ops->save_state || !ops->load_state ||
> + !ops->suspend || !ops->resume) {
> + ret = -EIO;
> + dev_err(&parent->dev, "Incomplete device migration ops structure!");
> + goto err_destroy;
> + }

If all these ops are a must, why can't we move the check inside qat_vfmig_create()?
Or rather call them explicitly as suggested by Jason.

Thanks,
Shameer

> + ret = ops->init(mdev);
> + if (ret)
> + goto err_destroy;
> +
> + qat_vdev->mdev = mdev;
> +
> + return 0;
> +
> +err_destroy:
> + qat_vfmig_destroy(mdev);
> +err_rel:
> + vfio_pci_core_release_dev(core_vdev);
> + return ret;
> +}
> +
> +static const struct vfio_device_ops qat_vf_pci_ops = {
> + .name = "qat-vf-vfio-pci",
> + .init = qat_vf_pci_init_dev,
> + .release = qat_vf_pci_release_dev,
> + .open_device = qat_vf_pci_open_device,
> + .close_device = qat_vf_pci_close_device,
> + .ioctl = vfio_pci_core_ioctl,
> + .read = vfio_pci_core_read,
> + .write = vfio_pci_core_write,
> + .mmap = vfio_pci_core_mmap,
> + .request = vfio_pci_core_request,
> + .match = vfio_pci_core_match,
> + .bind_iommufd = vfio_iommufd_physical_bind,
> + .unbind_iommufd = vfio_iommufd_physical_unbind,
> + .attach_ioas = vfio_iommufd_physical_attach_ioas,
> + .detach_ioas = vfio_iommufd_physical_detach_ioas,
> +};
> +
> +static struct qat_vf_core_device *qat_vf_drvdata(struct pci_dev *pdev)
> +{
> + struct vfio_pci_core_device *core_device = pci_get_drvdata(pdev);
> +
> + return container_of(core_device, struct qat_vf_core_device, core_device);
> +}
> +
> +static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
> +{
> + struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
> +
> + if (!qat_vdev->core_device.vdev.mig_ops)
> + return;
> +
> + /*
> + * As the higher VFIO layers are holding locks across reset and using
> + * those same locks with the mm_lock we need to prevent ABBA deadlock
> + * with the state_mutex and mm_lock.
> + * In case the state_mutex was taken already we defer the cleanup work
> + * to the unlock flow of the other running context.
> + */
> + spin_lock(&qat_vdev->reset_lock);
> + qat_vdev->deferred_reset = true;
> + if (!mutex_trylock(&qat_vdev->state_mutex)) {
> + spin_unlock(&qat_vdev->reset_lock);
> + return;
> + }
> + spin_unlock(&qat_vdev->reset_lock);
> + qat_vf_state_mutex_unlock(qat_vdev);
> +}
> +
> +static int
> +qat_vf_vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> + struct device *dev = &pdev->dev;
> + struct qat_vf_core_device *qat_vdev;
> + int ret;
> +
> + qat_vdev = vfio_alloc_device(qat_vf_core_device, core_device.vdev, dev, &qat_vf_pci_ops);
> + if (IS_ERR(qat_vdev))
> + return PTR_ERR(qat_vdev);
> +
> + pci_set_drvdata(pdev, &qat_vdev->core_device);
> + ret = vfio_pci_core_register_device(&qat_vdev->core_device);
> + if (ret)
> + goto out_put_device;
> +
> + return 0;
> +
> +out_put_device:
> + vfio_put_device(&qat_vdev->core_device.vdev);
> + return ret;
> +}
> +
> +static void qat_vf_vfio_pci_remove(struct pci_dev *pdev)
> +{
> + struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
> +
> + vfio_pci_core_unregister_device(&qat_vdev->core_device);
> + vfio_put_device(&qat_vdev->core_device.vdev);
> +}
> +
> +static const struct pci_device_id qat_vf_vfio_pci_table[] = {
> + /* Intel QAT GEN4 4xxx VF device */
> + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_INTEL, 0x4941) },
> + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_INTEL, 0x4943) },
> + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_INTEL, 0x4945) },
> + {}
> +};
> +MODULE_DEVICE_TABLE(pci, qat_vf_vfio_pci_table);
> +
> +static const struct pci_error_handlers qat_vf_err_handlers = {
> + .reset_done = qat_vf_pci_aer_reset_done,
> + .error_detected = vfio_pci_core_aer_err_detected,
> +};
> +
> +static struct pci_driver qat_vf_vfio_pci_driver = {
> + .name = "qat_vfio_pci",
> + .id_table = qat_vf_vfio_pci_table,
> + .probe = qat_vf_vfio_pci_probe,
> + .remove = qat_vf_vfio_pci_remove,
> + .err_handler = &qat_vf_err_handlers,
> + .driver_managed_dma = true,
> +};
> +module_pci_driver(qat_vf_vfio_pci_driver)
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Intel Corporation");
> +MODULE_DESCRIPTION("QAT VFIO PCI - VFIO PCI driver with live migration
> support for Intel(R) QAT GEN4 device family");
> +MODULE_IMPORT_NS(CRYPTO_QAT);
> --
> 2.18.2


2024-02-09 08:23:47

by Zeng, Xin

[permalink] [raw]
Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

Thanks for your comments, Jason.
On Tuesday, February 6, 2024 8:55 PM, Jason Gunthorpe <[email protected]> wrote:
> > +
> > + ops = mdev->ops;
> > + if (!ops || !ops->init || !ops->cleanup ||
> > + !ops->open || !ops->close ||
> > + !ops->save_state || !ops->load_state ||
> > + !ops->suspend || !ops->resume) {
> > + ret = -EIO;
> > + dev_err(&parent->dev, "Incomplete device migration ops
> structure!");
> > + goto err_destroy;
> > + }
>
> Why are there ops pointers here? I would expect this should just be
> direct function calls to the PF QAT driver.

I indeed had a version where the direct function calls were implemented in
the QAT driver, but when I looked at the functions, most of them
only translated the interface to the ops pointer. That's why I put the
ops pointers directly into the vfio variant driver.

>
> > +static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
> > +{
> > + struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
> > +
> > + if (!qat_vdev->core_device.vdev.mig_ops)
> > + return;
> > +
> > + /*
> > + * As the higher VFIO layers are holding locks across reset and using
> > + * those same locks with the mm_lock we need to prevent ABBA
> deadlock
> > + * with the state_mutex and mm_lock.
> > + * In case the state_mutex was taken already we defer the cleanup work
> > + * to the unlock flow of the other running context.
> > + */
> > + spin_lock(&qat_vdev->reset_lock);
> > + qat_vdev->deferred_reset = true;
> > + if (!mutex_trylock(&qat_vdev->state_mutex)) {
> > + spin_unlock(&qat_vdev->reset_lock);
> > + return;
> > + }
> > + spin_unlock(&qat_vdev->reset_lock);
> > + qat_vf_state_mutex_unlock(qat_vdev);
> > +}
>
> Do you really need this? I thought this ugly thing was going to be a
> uniquely mlx5 thing..

I think that's still required to keep the migration state synchronized
if the VF is reset through other VFIO emulation paths. Is that the case?
BTW, this implementation is not only in the mlx5 driver, but also in other
vfio pci variant drivers such as the hisilicon acc driver and the pds driver.
Thanks,
Xin

2024-02-09 12:20:57

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Fri, Feb 09, 2024 at 08:23:32AM +0000, Zeng, Xin wrote:
> Thanks for your comments, Jason.
> On Tuesday, February 6, 2024 8:55 PM, Jason Gunthorpe <[email protected]> wrote:
> > > +
> > > + ops = mdev->ops;
> > > + if (!ops || !ops->init || !ops->cleanup ||
> > > + !ops->open || !ops->close ||
> > > + !ops->save_state || !ops->load_state ||
> > > + !ops->suspend || !ops->resume) {
> > > + ret = -EIO;
> > > + dev_err(&parent->dev, "Incomplete device migration ops
> > structure!");
> > > + goto err_destroy;
> > > + }
> >
> > Why are there ops pointers here? I would expect this should just be
> > direct function calls to the PF QAT driver.
>
> I indeed had a version where the direct function calls are Implemented in
> QAT driver, while when I look at the functions, most of them
> only translate the interface to the ops pointer. That's why I put
> ops pointers directly into vfio variant driver.

But why is there an ops indirection at all? Are there more than one
ops?

> > > +static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
> > > +{
> > > + struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
> > > +
> > > + if (!qat_vdev->core_device.vdev.mig_ops)
> > > + return;
> > > +
> > > + /*
> > > + * As the higher VFIO layers are holding locks across reset and using
> > > + * those same locks with the mm_lock we need to prevent ABBA
> > deadlock
> > > + * with the state_mutex and mm_lock.
> > > + * In case the state_mutex was taken already we defer the cleanup work
> > > + * to the unlock flow of the other running context.
> > > + */
> > > + spin_lock(&qat_vdev->reset_lock);
> > > + qat_vdev->deferred_reset = true;
> > > + if (!mutex_trylock(&qat_vdev->state_mutex)) {
> > > + spin_unlock(&qat_vdev->reset_lock);
> > > + return;
> > > + }
> > > + spin_unlock(&qat_vdev->reset_lock);
> > > + qat_vf_state_mutex_unlock(qat_vdev);
> > > +}
> >
> > Do you really need this? I thought this ugly thing was going to be a
> > uniquely mlx5 thing..
>
> I think that's still required to make the migration state synchronized
> if the VF is reset by other VFIO emulation paths. Is it the case?
> BTW, this implementation is not only in mlx5 driver, but also in other
> Vfio pci variant drivers such as hisilicon acc driver and pds
> driver.

It had to specifically do with the mm lock interaction that, I
thought, was going to be unique to the mlx driver. Otherwise you could
just directly hold the state_mutex here.

Yishai do you remember the exact trace for the mmlock entanglement?

Jason

2024-02-09 16:23:10

by Zeng, Xin

[permalink] [raw]
Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

Thanks for your comments, Alex.
On Wednesday, February 7, 2024 5:15 AM, Alex Williamson <[email protected]> wrote:
> > diff --git a/drivers/vfio/pci/intel/qat/Kconfig
> b/drivers/vfio/pci/intel/qat/Kconfig
> > new file mode 100644
> > index 000000000000..71b28ac0bf6a
> > --- /dev/null
> > +++ b/drivers/vfio/pci/intel/qat/Kconfig
> > @@ -0,0 +1,13 @@
> > +# SPDX-License-Identifier: GPL-2.0-only
> > +config QAT_VFIO_PCI
> > + tristate "VFIO support for QAT VF PCI devices"
> > + select VFIO_PCI_CORE
> > + depends on CRYPTO_DEV_QAT
> > + depends on CRYPTO_DEV_QAT_4XXX
>
> CRYPTO_DEV_QAT_4XXX selects CRYPTO_DEV_QAT, is it necessary to depend
> on CRYPTO_DEV_QAT here?

You are right, we don't need it, will update it in next version.
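With that dependency dropped, the entry would look something along these
lines (sketch only; the help text stays as posted above):

	config QAT_VFIO_PCI
		tristate "VFIO support for QAT VF PCI devices"
		select VFIO_PCI_CORE
		depends on CRYPTO_DEV_QAT_4XXX
		help
		  This provides migration support for Intel(R) QAT Virtual Function
		  using the VFIO framework.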

>
> > + help
> > + This provides migration support for Intel(R) QAT Virtual Function
> > + using the VFIO framework.
> > +
> > + To compile this as a module, choose M here: the module
> > + will be called qat_vfio_pci. If you don't know what to do here,
> > + say N.
> > diff --git a/drivers/vfio/pci/intel/qat/Makefile
> b/drivers/vfio/pci/intel/qat/Makefile
> > new file mode 100644
> > index 000000000000..9289ae4c51bf
> > --- /dev/null
> > +++ b/drivers/vfio/pci/intel/qat/Makefile
> > @@ -0,0 +1,4 @@
> > +# SPDX-License-Identifier: GPL-2.0-only
> > +obj-$(CONFIG_QAT_VFIO_PCI) += qat_vfio_pci.o
> > +qat_vfio_pci-y := main.o
> > +
>
> Blank line at end of file.

Good catch, will update it in next version.

>
> > diff --git a/drivers/vfio/pci/intel/qat/main.c
> b/drivers/vfio/pci/intel/qat/main.c
> > new file mode 100644
> > index 000000000000..85d0ed701397
> > --- /dev/null
> > +++ b/drivers/vfio/pci/intel/qat/main.c
> > @@ -0,0 +1,572 @@
> ...
> > +static int qat_vf_pci_init_dev(struct vfio_device *core_vdev)
> > +{
> > + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> > + struct qat_vf_core_device, core_device.vdev);
> > + struct qat_migdev_ops *ops;
> > + struct qat_mig_dev *mdev;
> > + struct pci_dev *parent;
> > + int ret, vf_id;
> > +
> > + core_vdev->migration_flags = VFIO_MIGRATION_STOP_COPY |
> VFIO_MIGRATION_P2P;
>
> AFAICT it's always good practice to support VFIO_MIGRATION_PRE_COPY,
> even if only to leave yourself an option to mark an incompatible ABI
> change in the data stream that can be checked before the stop-copy
> phase of migration. See for example the mtty sample driver. Thanks,
> Alex

If the good practice is to check migration stream compatibility, then we
already incorporate additional information as part of the device state for
ABI compatibility testing, such as a magic number, version and device
capability, in the stop-copy phase. Can we ignore the optional
VFIO_MIGRATION_PRE_COPY support?
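Just to illustrate the point, the idea is that the saved device state starts
with a small header that the destination validates before loading anything,
roughly like below (field names are made up for illustration, not the actual
layout in the patches):

	struct qat_vf_mig_state_hdr {
		u32 magic;	/* identifies a QAT VF migration stream */
		u32 version;	/* bumped on any incompatible ABI change */
		u32 caps;	/* device capability mask */
		u32 size;	/* size of the state payload that follows */
	};

The destination can then fail the RESUMING -> STOP transition if the magic or
version does not match what it supports.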
Thanks,
Xin


2024-02-11 08:17:53

by Yishai Hadas

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On 09/02/2024 14:10, Jason Gunthorpe wrote:
> On Fri, Feb 09, 2024 at 08:23:32AM +0000, Zeng, Xin wrote:
>> Thanks for your comments, Jason.
>> On Tuesday, February 6, 2024 8:55 PM, Jason Gunthorpe <[email protected]> wrote:
>>>> +
>>>> + ops = mdev->ops;
>>>> + if (!ops || !ops->init || !ops->cleanup ||
>>>> + !ops->open || !ops->close ||
>>>> + !ops->save_state || !ops->load_state ||
>>>> + !ops->suspend || !ops->resume) {
>>>> + ret = -EIO;
>>>> + dev_err(&parent->dev, "Incomplete device migration ops
>>> structure!");
>>>> + goto err_destroy;
>>>> + }
>>>
>>> Why are there ops pointers here? I would expect this should just be
>>> direct function calls to the PF QAT driver.
>>
>> I indeed had a version where the direct function calls are Implemented in
>> QAT driver, while when I look at the functions, most of them
>> only translate the interface to the ops pointer. That's why I put
>> ops pointers directly into vfio variant driver.
>
> But why is there an ops indirection at all? Are there more than one
> ops?
>
>>>> +static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
>>>> +{
>>>> + struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
>>>> +
>>>> + if (!qat_vdev->core_device.vdev.mig_ops)
>>>> + return;
>>>> +
>>>> + /*
>>>> + * As the higher VFIO layers are holding locks across reset and using
>>>> + * those same locks with the mm_lock we need to prevent ABBA
>>> deadlock
>>>> + * with the state_mutex and mm_lock.
>>>> + * In case the state_mutex was taken already we defer the cleanup work
>>>> + * to the unlock flow of the other running context.
>>>> + */
>>>> + spin_lock(&qat_vdev->reset_lock);
>>>> + qat_vdev->deferred_reset = true;
>>>> + if (!mutex_trylock(&qat_vdev->state_mutex)) {
>>>> + spin_unlock(&qat_vdev->reset_lock);
>>>> + return;
>>>> + }
>>>> + spin_unlock(&qat_vdev->reset_lock);
>>>> + qat_vf_state_mutex_unlock(qat_vdev);
>>>> +}
>>>
>>> Do you really need this? I thought this ugly thing was going to be a
>>> uniquely mlx5 thing..
>>
>> I think that's still required to make the migration state synchronized
>> if the VF is reset by other VFIO emulation paths. Is it the case?
>> BTW, this implementation is not only in mlx5 driver, but also in other
>> Vfio pci variant drivers such as hisilicon acc driver and pds
>> driver.
>
> It had to specifically do with the mm lock interaction that, I
> thought, was going to be unique to the mlx driver. Otherwise you could
> just directly hold the state_mutex here.
>
> Yishai do you remember the exact trace for the mmlock entanglement?
>

I found in my in-box (from more than 2.5 years ago) the below [1]
lockdep WARNING when the state_mutex was used directly.

Once we moved to the 'deferred_reset' mode it solved that.

[1]
[ +1.063822] ======================================================
[ +0.000732] WARNING: possible circular locking dependency detected
[ +0.000747] 5.15.0-rc3 #236 Not tainted
[ +0.000556] ------------------------------------------------------
[ +0.000714] qemu-system-x86/7731 is trying to acquire lock:
[ +0.000659] ffff888126c64b78 (&vdev->vma_lock){+.+.}-{3:3}, at:
vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
[ +0.001127]
but task is already holding lock:
[ +0.000803] ffff888105f4c5d8 (&mm->mmap_lock#2){++++}-{3:3}, at:
vaddr_get_pfns+0x64/0x240 [vfio_iommu_type1]
[ +0.001119]
which lock already depends on the new lock.

[ +0.001086]
the existing dependency chain (in reverse order) is:
[ +0.000910]
-> #3 (&mm->mmap_lock#2){++++}-{3:3}:
[ +0.000844] __might_fault+0x56/0x80
[ +0.000572] _copy_to_user+0x1e/0x80
[ +0.000556] mlx5vf_pci_mig_rw.cold+0xa1/0x24f [mlx5_vfio_pci]
[ +0.000732] vfs_read+0xa8/0x1c0
[ +0.000547] __x64_sys_pread64+0x8c/0xc0
[ +0.000580] do_syscall_64+0x3d/0x90
[ +0.000566] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ +0.000682]
-> #2 (&mvdev->state_mutex){+.+.}-{3:3}:
[ +0.000899] __mutex_lock+0x80/0x9d0
[ +0.000566] mlx5vf_reset_done+0x2c/0x40 [mlx5_vfio_pci]
[ +0.000697] vfio_pci_core_ioctl+0x585/0x1020 [vfio_pci_core]
[ +0.000721] __x64_sys_ioctl+0x436/0x9a0
[ +0.000588] do_syscall_64+0x3d/0x90
[ +0.000584] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ +0.000674]
-> #1 (&vdev->memory_lock){+.+.}-{3:3}:
[ +0.000843] down_write+0x38/0x70
[ +0.000544] vfio_pci_zap_and_down_write_memory_lock+0x1c/0x30
[vfio_pci_core]
[ +0.003705] vfio_basic_config_write+0x1e7/0x280 [vfio_pci_core]
[ +0.000744] vfio_pci_config_rw+0x1b7/0x3af [vfio_pci_core]
[ +0.000716] vfs_write+0xe6/0x390
[ +0.000539] __x64_sys_pwrite64+0x8c/0xc0
[ +0.000603] do_syscall_64+0x3d/0x90
[ +0.000572] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ +0.000661]
-> #0 (&vdev->vma_lock){+.+.}-{3:3}:
[ +0.000828] __lock_acquire+0x1244/0x2280
[ +0.000596] lock_acquire+0xc2/0x2c0
[ +0.000556] __mutex_lock+0x80/0x9d0
[ +0.000580] vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
[ +0.000709] __do_fault+0x32/0xa0
[ +0.000556] __handle_mm_fault+0xbe8/0x1450
[ +0.000606] handle_mm_fault+0x6c/0x140
[ +0.000624] fixup_user_fault+0x6b/0x100
[ +0.000600] vaddr_get_pfns+0x108/0x240 [vfio_iommu_type1]
[ +0.000726] vfio_pin_pages_remote+0x326/0x460 [vfio_iommu_type1]
[ +0.000736] vfio_iommu_type1_ioctl+0x43b/0x15a0 [vfio_iommu_type1]
[ +0.000752] __x64_sys_ioctl+0x436/0x9a0
[ +0.000588] do_syscall_64+0x3d/0x90
[ +0.000571] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ +0.000677]
other info that might help us debug this:

[ +0.001073] Chain exists of:
&vdev->vma_lock --> &mvdev->state_mutex -->
&mm->mmap_lock#2

[ +0.001285] Possible unsafe locking scenario:

[ +0.000808] CPU0 CPU1
[ +0.000599] ---- ----
[ +0.000593] lock(&mm->mmap_lock#2);
[ +0.000530] lock(&mvdev->state_mutex);
[ +0.000725] lock(&mm->mmap_lock#2);
[ +0.000712] lock(&vdev->vma_lock);
[ +0.000532]
*** DEADLOCK ***

[ +0.000922] 2 locks held by qemu-system-x86/7731:
[ +0.000597] #0: ffff88810c9bec88 (&iommu->lock#2){+.+.}-{3:3}, at:
vfio_iommu_type1_ioctl+0x189/0x15a0 [vfio_iommu_type1]
[ +0.001177] #1: ffff888105f4c5d8 (&mm->mmap_lock#2){++++}-{3:3}, at:
vaddr_get_pfns+0x64/0x240 [vfio_iommu_type1]
[ +0.001153]
stack backtrace:
[ +0.000689] CPU: 1 PID: 7731 Comm: qemu-system-x86 Not tainted
5.15.0-rc3 #236
[ +0.000932] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[ +0.001192] Call Trace:
[ +0.000454] dump_stack_lvl+0x45/0x59
[ +0.000527] check_noncircular+0xf2/0x110
[ +0.000557] __lock_acquire+0x1244/0x2280
[ +0.002462] lock_acquire+0xc2/0x2c0
[ +0.000534] ? vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
[ +0.000684] ? lock_is_held_type+0x98/0x110
[ +0.000565] __mutex_lock+0x80/0x9d0
[ +0.000542] ? vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
[ +0.000676] ? vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
[ +0.000681] ? mark_held_locks+0x49/0x70
[ +0.000551] ? lock_is_held_type+0x98/0x110
[ +0.000575] vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
[ +0.000679] __do_fault+0x32/0xa0
[ +0.000700] __handle_mm_fault+0xbe8/0x1450
[ +0.000571] handle_mm_fault+0x6c/0x140
[ +0.000564] fixup_user_fault+0x6b/0x100
[ +0.000556] vaddr_get_pfns+0x108/0x240 [vfio_iommu_type1]
[ +0.000666] ? lock_is_held_type+0x98/0x110
[ +0.000572] vfio_pin_pages_remote+0x326/0x460 [vfio_iommu_type1]
[ +0.000710] vfio_iommu_type1_ioctl+0x43b/0x15a0 [vfio_iommu_type1]
[ +0.001050] ? find_held_lock+0x2b/0x80
[ +0.000561] ? lock_release+0xc2/0x2a0
[ +0.000537] ? __fget_files+0xdc/0x1d0
[ +0.000541] __x64_sys_ioctl+0x436/0x9a0
[ +0.000556] ? lockdep_hardirqs_on_prepare+0xd4/0x170
[ +0.000641] do_syscall_64+0x3d/0x90
[ +0.000529] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ +0.000670] RIP: 0033:0x7f54255bb45b
[ +0.000541] Code: 0f 1e fa 48 8b 05 2d aa 2c 00 64 c7 00 26 00 00 00
48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f
05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d fd a9 2c 00 f7 d8 64 89 01 48
[ +0.001843] RSP: 002b:00007f5415ca3d38 EFLAGS: 00000206 ORIG_RAX:
0000000000000010
[ +0.000929] RAX: ffffffffffffffda RBX: 0000000000000000 RCX:
00007f54255bb45b
[ +0.000781] RDX: 00007f5415ca3d70 RSI: 0000000000003b71 RDI:
0000000000000012
[ +0.000775] RBP: 00007f5415ca3da0 R08: 0000000000000000 R09:
0000000000000000
[ +0.000777] R10: 00000000fe002000 R11: 0000000000000206 R12:
0000000000000004
[ +0.000775] R13: 0000000000000000 R14: 00000000fe000000 R15:
00007f5415ca47c0

Yishai


2024-02-13 13:09:02

by Zeng, Xin

[permalink] [raw]
Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

Thanks for the comments, Shameer.

> -----Original Message-----
> From: Shameerali Kolothum Thodi <[email protected]>
> Sent: Thursday, February 8, 2024 8:17 PM
> To: Zeng, Xin <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]; Tian, Kevin
> <[email protected]>
> Cc: [email protected]; [email protected]; qat-linux <qat-
> [email protected]>; Cao, Yahui <[email protected]>
> Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices
>
>
> > +static struct qat_vf_migration_file *
> > +qat_vf_save_device_data(struct qat_vf_core_device *qat_vdev)
> > +{
> > + struct qat_vf_migration_file *migf;
> > + int ret;
> > +
> > + migf = kzalloc(sizeof(*migf), GFP_KERNEL);
> > + if (!migf)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + migf->filp = anon_inode_getfile("qat_vf_mig", &qat_vf_save_fops,
> > migf, O_RDONLY);
> > + ret = PTR_ERR_OR_ZERO(migf->filp);
> > + if (ret) {
> > + kfree(migf);
> > + return ERR_PTR(ret);
> > + }
> > +
> > + stream_open(migf->filp->f_inode, migf->filp);
> > + mutex_init(&migf->lock);
> > +
> > + ret = qat_vdev->mdev->ops->save_state(qat_vdev->mdev);
> > + if (ret) {
> > + fput(migf->filp);
> > + kfree(migf);
>
> Probably don't need that kfree(migf) here as fput() --> qat_vf_release_file () will
> do that.

Thanks, it's redundant, will update it in next version.
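The error path would then simply be (untested sketch):

	ret = qat_vdev->mdev->ops->save_state(qat_vdev->mdev);
	if (ret) {
		/* qat_vf_release_file() frees migf via the fput() below */
		fput(migf->filp);
		return ERR_PTR(ret);
	}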

>
> > +static int qat_vf_pci_get_data_size(struct vfio_device *vdev,
> > + unsigned long *stop_copy_length)
> > +{
> > + struct qat_vf_core_device *qat_vdev = container_of(vdev,
> > + struct qat_vf_core_device, core_device.vdev);
> > +
> > + *stop_copy_length = qat_vdev->mdev->state_size;
>
> Do we need a lock here or this is not changing?

Yes, will update it in next version.
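i.e. something like this (untested sketch, reusing the existing unlock
helper):

	static int qat_vf_pci_get_data_size(struct vfio_device *vdev,
					    unsigned long *stop_copy_length)
	{
		struct qat_vf_core_device *qat_vdev = container_of(vdev,
				struct qat_vf_core_device, core_device.vdev);

		mutex_lock(&qat_vdev->state_mutex);
		*stop_copy_length = qat_vdev->mdev->state_size;
		qat_vf_state_mutex_unlock(qat_vdev);

		return 0;
	}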

>
> > + return 0;
> > +}
> > +
> > +static const struct vfio_migration_ops qat_vf_pci_mig_ops = {
> > + .migration_set_state = qat_vf_pci_set_device_state,
> > + .migration_get_state = qat_vf_pci_get_device_state,
> > + .migration_get_data_size = qat_vf_pci_get_data_size,
> > +};
> > +
> > +static void qat_vf_pci_release_dev(struct vfio_device *core_vdev)
> > +{
> > + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> > + struct qat_vf_core_device, core_device.vdev);
> > +
> > + qat_vdev->mdev->ops->cleanup(qat_vdev->mdev);
> > + qat_vfmig_destroy(qat_vdev->mdev);
> > + mutex_destroy(&qat_vdev->state_mutex);
> > + vfio_pci_core_release_dev(core_vdev);
> > +}
> > +
> > +static int qat_vf_pci_init_dev(struct vfio_device *core_vdev)
> > +{
> > + struct qat_vf_core_device *qat_vdev = container_of(core_vdev,
> > + struct qat_vf_core_device, core_device.vdev);
> > + struct qat_migdev_ops *ops;
> > + struct qat_mig_dev *mdev;
> > + struct pci_dev *parent;
> > + int ret, vf_id;
> > +
> > + core_vdev->migration_flags = VFIO_MIGRATION_STOP_COPY |
> > VFIO_MIGRATION_P2P;
> > + core_vdev->mig_ops = &qat_vf_pci_mig_ops;
> > +
> > + ret = vfio_pci_core_init_dev(core_vdev);
> > + if (ret)
> > + return ret;
> > +
> > + mutex_init(&qat_vdev->state_mutex);
> > + spin_lock_init(&qat_vdev->reset_lock);
> > +
> > + parent = qat_vdev->core_device.pdev->physfn;
>
> Can we use pci_physfn() here?

Sure, will update it in next version.

>
> > + vf_id = pci_iov_vf_id(qat_vdev->core_device.pdev);
> > + if (!parent || vf_id < 0) {
>
> Also if the pci_iov_vf_id() return success I don't think you need to
> check for parent and can use directly below.

OK, will update it in next version.
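With both of the above, the sequence could be reduced to something like
(untested sketch):

	parent = pci_physfn(qat_vdev->core_device.pdev);
	vf_id = pci_iov_vf_id(qat_vdev->core_device.pdev);
	if (vf_id < 0) {
		ret = -ENODEV;
		goto err_rel;
	}

	mdev = qat_vfmig_create(parent, vf_id);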

>
> > + ret = -ENODEV;
> > + goto err_rel;
> > + }
> > +
> > + mdev = qat_vfmig_create(parent, vf_id);
> > + if (IS_ERR(mdev)) {
> > + ret = PTR_ERR(mdev);
> > + goto err_rel;
> > + }
> > +
> > + ops = mdev->ops;
> > + if (!ops || !ops->init || !ops->cleanup ||
> > + !ops->open || !ops->close ||
> > + !ops->save_state || !ops->load_state ||
> > + !ops->suspend || !ops->resume) {
> > + ret = -EIO;
> > + dev_err(&parent->dev, "Incomplete device migration ops
> > structure!");
> > + goto err_destroy;
> > + }
>
> If all these ops are a must why cant we move the check inside the
> qat_vfmig_create()?
> Or rather call them explicitly as suggested by Jason.

We can do it, but it might make sense to leave the check to the APIs' users,
as some of these ops interfaces might be optional for other QAT variant
drivers in the future.

Thanks,
Xin


2024-02-13 14:56:59

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Tue, Feb 13, 2024 at 01:08:47PM +0000, Zeng, Xin wrote:
> > > + ret = -ENODEV;
> > > + goto err_rel;
> > > + }
> > > +
> > > + mdev = qat_vfmig_create(parent, vf_id);
> > > + if (IS_ERR(mdev)) {
> > > + ret = PTR_ERR(mdev);
> > > + goto err_rel;
> > > + }
> > > +
> > > + ops = mdev->ops;
> > > + if (!ops || !ops->init || !ops->cleanup ||
> > > + !ops->open || !ops->close ||
> > > + !ops->save_state || !ops->load_state ||
> > > + !ops->suspend || !ops->resume) {
> > > + ret = -EIO;
> > > + dev_err(&parent->dev, "Incomplete device migration ops
> > > structure!");
> > > + goto err_destroy;
> > > + }
> >
> > If all these ops are a must why cant we move the check inside the
> > qat_vfmig_create()?
> > Or rather call them explicitly as suggested by Jason.
>
> We can do it, but it might make sense to leave the check to the APIs' user
> as some of these ops interfaces might be optional for other QAT variant driver
> in future.

If you have patches already that need ops then you can leave them, but
otherwise they should be removed and added back if you come with
future patches that require another implementation.

Jason

2024-02-17 16:20:33

by Zeng, Xin

[permalink] [raw]
Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Sunday, February 11, 2024 4:17 PM, Yishai Hadas <[email protected]> wrote:
> >>>> +static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
> >>>> +{
> >>>> + struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);
> >>>> +
> >>>> + if (!qat_vdev->core_device.vdev.mig_ops)
> >>>> + return;
> >>>> +
> >>>> + /*
> >>>> + * As the higher VFIO layers are holding locks across reset and using
> >>>> + * those same locks with the mm_lock we need to prevent ABBA
> >>> deadlock
> >>>> + * with the state_mutex and mm_lock.
> >>>> + * In case the state_mutex was taken already we defer the cleanup work
> >>>> + * to the unlock flow of the other running context.
> >>>> + */
> >>>> + spin_lock(&qat_vdev->reset_lock);
> >>>> + qat_vdev->deferred_reset = true;
> >>>> + if (!mutex_trylock(&qat_vdev->state_mutex)) {
> >>>> + spin_unlock(&qat_vdev->reset_lock);
> >>>> + return;
> >>>> + }
> >>>> + spin_unlock(&qat_vdev->reset_lock);
> >>>> + qat_vf_state_mutex_unlock(qat_vdev);
> >>>> +}
> >>>
> >>> Do you really need this? I thought this ugly thing was going to be a
> >>> uniquely mlx5 thing..
> >>
> >> I think that's still required to make the migration state synchronized
> >> if the VF is reset by other VFIO emulation paths. Is it the case?
> >> BTW, this implementation is not only in mlx5 driver, but also in other
> >> Vfio pci variant drivers such as hisilicon acc driver and pds
> >> driver.
> >
> > It had to specifically do with the mm lock interaction that, I
> > thought, was going to be unique to the mlx driver. Otherwise you could
> > just directly hold the state_mutex here.
> >
> > Yishai do you remember the exact trace for the mmlock entanglement?
> >
>
> I found in my in-box (from more than 2.5 years ago) the below [1]
> lockdep WARNING when the state_mutex was used directly.
>
> Once we moved to the 'deferred_reset' mode it solved that.
>
> [1]
> [ +1.063822] ======================================================
> [ +0.000732] WARNING: possible circular locking dependency detected
> [ +0.000747] 5.15.0-rc3 #236 Not tainted
> [ +0.000556] ------------------------------------------------------
> [ +0.000714] qemu-system-x86/7731 is trying to acquire lock:
> [ +0.000659] ffff888126c64b78 (&vdev->vma_lock){+.+.}-{3:3}, at:
> vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
> [ +0.001127]
> but task is already holding lock:
> [ +0.000803] ffff888105f4c5d8 (&mm->mmap_lock#2){++++}-{3:3}, at:
> vaddr_get_pfns+0x64/0x240 [vfio_iommu_type1]
> [ +0.001119]
> which lock already depends on the new lock.
>
> [ +0.001086]
> the existing dependency chain (in reverse order) is:
> [ +0.000910]
> -> #3 (&mm->mmap_lock#2){++++}-{3:3}:
> [ +0.000844] __might_fault+0x56/0x80
> [ +0.000572] _copy_to_user+0x1e/0x80
> [ +0.000556] mlx5vf_pci_mig_rw.cold+0xa1/0x24f [mlx5_vfio_pci]
> [ +0.000732] vfs_read+0xa8/0x1c0
> [ +0.000547] __x64_sys_pread64+0x8c/0xc0
> [ +0.000580] do_syscall_64+0x3d/0x90
> [ +0.000566] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ +0.000682]
> -> #2 (&mvdev->state_mutex){+.+.}-{3:3}:
> [ +0.000899] __mutex_lock+0x80/0x9d0
> [ +0.000566] mlx5vf_reset_done+0x2c/0x40 [mlx5_vfio_pci]
> [ +0.000697] vfio_pci_core_ioctl+0x585/0x1020 [vfio_pci_core]
> [ +0.000721] __x64_sys_ioctl+0x436/0x9a0
> [ +0.000588] do_syscall_64+0x3d/0x90
> [ +0.000584] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ +0.000674]
> -> #1 (&vdev->memory_lock){+.+.}-{3:3}:
> [ +0.000843] down_write+0x38/0x70
> [ +0.000544] vfio_pci_zap_and_down_write_memory_lock+0x1c/0x30
> [vfio_pci_core]
> [ +0.003705] vfio_basic_config_write+0x1e7/0x280 [vfio_pci_core]
> [ +0.000744] vfio_pci_config_rw+0x1b7/0x3af [vfio_pci_core]
> [ +0.000716] vfs_write+0xe6/0x390
> [ +0.000539] __x64_sys_pwrite64+0x8c/0xc0
> [ +0.000603] do_syscall_64+0x3d/0x90
> [ +0.000572] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ +0.000661]
> -> #0 (&vdev->vma_lock){+.+.}-{3:3}:
> [ +0.000828] __lock_acquire+0x1244/0x2280
> [ +0.000596] lock_acquire+0xc2/0x2c0
> [ +0.000556] __mutex_lock+0x80/0x9d0
> [ +0.000580] vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
> [ +0.000709] __do_fault+0x32/0xa0
> [ +0.000556] __handle_mm_fault+0xbe8/0x1450
> [ +0.000606] handle_mm_fault+0x6c/0x140
> [ +0.000624] fixup_user_fault+0x6b/0x100
> [ +0.000600] vaddr_get_pfns+0x108/0x240 [vfio_iommu_type1]
> [ +0.000726] vfio_pin_pages_remote+0x326/0x460 [vfio_iommu_type1]
> [ +0.000736] vfio_iommu_type1_ioctl+0x43b/0x15a0 [vfio_iommu_type1]
> [ +0.000752] __x64_sys_ioctl+0x436/0x9a0
> [ +0.000588] do_syscall_64+0x3d/0x90
> [ +0.000571] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ +0.000677]
> other info that might help us debug this:
>
> [ +0.001073] Chain exists of:
> &vdev->vma_lock --> &mvdev->state_mutex -->
> &mm->mmap_lock#2
>
> [ +0.001285] Possible unsafe locking scenario:
>
> [ +0.000808] CPU0 CPU1
> [ +0.000599] ---- ----
> [ +0.000593] lock(&mm->mmap_lock#2);
> [ +0.000530] lock(&mvdev->state_mutex);
> [ +0.000725] lock(&mm->mmap_lock#2);
> [ +0.000712] lock(&vdev->vma_lock);
> [ +0.000532]
> *** DEADLOCK ***

Thanks for this information, but it is not clear to me from this flow why it
causes a deadlock. In this flow, CPU0 is not waiting for any resource
held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
to run. Am I missing something?
Meanwhile, it seems the trace above was triggered with migration
protocol v1, and the CPU1 context listed also seems protocol v1
specific. Is that the case?
Thanks,
Xin

>
> [ +0.000922] 2 locks held by qemu-system-x86/7731:
> [ +0.000597] #0: ffff88810c9bec88 (&iommu->lock#2){+.+.}-{3:3}, at:
> vfio_iommu_type1_ioctl+0x189/0x15a0 [vfio_iommu_type1]
> [ +0.001177] #1: ffff888105f4c5d8 (&mm->mmap_lock#2){++++}-{3:3}, at:
> vaddr_get_pfns+0x64/0x240 [vfio_iommu_type1]
> [ +0.001153]
> stack backtrace:
> [ +0.000689] CPU: 1 PID: 7731 Comm: qemu-system-x86 Not tainted
> 5.15.0-rc3 #236
> [ +0.000932] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
> rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
> [ +0.001192] Call Trace:
> [ +0.000454] dump_stack_lvl+0x45/0x59
> [ +0.000527] check_noncircular+0xf2/0x110
> [ +0.000557] __lock_acquire+0x1244/0x2280
> [ +0.002462] lock_acquire+0xc2/0x2c0
> [ +0.000534] ? vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
> [ +0.000684] ? lock_is_held_type+0x98/0x110
> [ +0.000565] __mutex_lock+0x80/0x9d0
> [ +0.000542] ? vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
> [ +0.000676] ? vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
> [ +0.000681] ? mark_held_locks+0x49/0x70
> [ +0.000551] ? lock_is_held_type+0x98/0x110
> [ +0.000575] vfio_pci_mmap_fault+0x35/0x140 [vfio_pci_core]
> [ +0.000679] __do_fault+0x32/0xa0
> [ +0.000700] __handle_mm_fault+0xbe8/0x1450
> [ +0.000571] handle_mm_fault+0x6c/0x140
> [ +0.000564] fixup_user_fault+0x6b/0x100
> [ +0.000556] vaddr_get_pfns+0x108/0x240 [vfio_iommu_type1]
> [ +0.000666] ? lock_is_held_type+0x98/0x110
> [ +0.000572] vfio_pin_pages_remote+0x326/0x460 [vfio_iommu_type1]
> [ +0.000710] vfio_iommu_type1_ioctl+0x43b/0x15a0 [vfio_iommu_type1]
> [ +0.001050] ? find_held_lock+0x2b/0x80
> [ +0.000561] ? lock_release+0xc2/0x2a0
> [ +0.000537] ? __fget_files+0xdc/0x1d0
> [ +0.000541] __x64_sys_ioctl+0x436/0x9a0
> [ +0.000556] ? lockdep_hardirqs_on_prepare+0xd4/0x170
> [ +0.000641] do_syscall_64+0x3d/0x90
> [ +0.000529] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ +0.000670] RIP: 0033:0x7f54255bb45b
> [ +0.000541] Code: 0f 1e fa 48 8b 05 2d aa 2c 00 64 c7 00 26 00 00 00
> 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f
> 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d fd a9 2c 00 f7 d8 64 89 01 48
> [ +0.001843] RSP: 002b:00007f5415ca3d38 EFLAGS: 00000206 ORIG_RAX:
> 0000000000000010
> [ +0.000929] RAX: ffffffffffffffda RBX: 0000000000000000 RCX:
> 00007f54255bb45b
> [ +0.000781] RDX: 00007f5415ca3d70 RSI: 0000000000003b71 RDI:
> 0000000000000012
> [ +0.000775] RBP: 00007f5415ca3da0 R08: 0000000000000000 R09:
> 0000000000000000
> [ +0.000777] R10: 00000000fe002000 R11: 0000000000000206 R12:
> 0000000000000004
> [ +0.000775] R13: 0000000000000000 R14: 00000000fe000000 R15:
> 00007f5415ca47c0
>
> Yishai




2024-02-19 03:41:15

by Wan, Siming

[permalink] [raw]
Subject: RE: [EXTERNAL] [PATCH 07/10] crypto: qat - add bank save and restore flows

From: Kamlesh Gurudasani <[email protected]>
Sent: Sunday, February 4, 2024 3:50 PM
To: Zeng, Xin <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Tian, Kevin <[email protected]>
Cc: [email protected]; [email protected]; qat-linux <[email protected]>; Wan, Siming <[email protected]>; Zeng, Xin <[email protected]>
Subject: Re: [EXTERNAL] [PATCH 07/10] crypto: qat - add bank save and restore flows

Xin Zeng <[email protected]> writes:

> This message was sent from outside of Texas Instruments.
> Do not click links or open attachments unless you recognize the source of this email and know the content is safe.
>
> From: Siming Wan <[email protected]>
>
> Add logic to save, restore, quiesce and drain a ring bank for QAT GEN4
> devices.
> This allows to save and restore the state of a Virtual Function (VF)
> and will be used to implement VM live migration.
>
> Signed-off-by: Siming Wan <[email protected]>
> Reviewed-by: Giovanni Cabiddu <[email protected]>
> Reviewed-by: Xin Zeng <[email protected]>
> Signed-off-by: Xin Zeng <[email protected]>
...
> +#define CHECK_STAT(op, expect_val, name, args...) \ ({ \
> + u32 __expect_val = (expect_val); \
> + u32 actual_val = op(args); \
> + if (__expect_val != actual_val) \
> + pr_err("QAT: Fail to restore %s register. Expected 0x%x, but actual is 0x%x\n", \
> + name, __expect_val, actual_val); \
> + (__expect_val == actual_val) ? 0 : -EINVAL; \
I was wondering if this can be done like the following, which saves the repeated comparison.

(__expect_val == actual_val) ? 0 : \
	(pr_err("QAT: Fail to restore %s register. Expected 0x%x, but actual is 0x%x\n", \
		name, __expect_val, actual_val), -EINVAL); \

Regards,
Kamlesh

> +})
...


Thanks for your comments! Will update.
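So the reworked macro would look roughly like this (untested sketch):

#define CHECK_STAT(op, expect_val, name, args...) \
({ \
	u32 __expect_val = (expect_val); \
	u32 actual_val = op(args); \
	(__expect_val == actual_val) ? 0 : \
		(pr_err("QAT: Fail to restore %s register. Expected 0x%x, but actual is 0x%x\n", \
			name, __expect_val, actual_val), -EINVAL); \
})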

2024-02-20 13:25:10

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Sat, Feb 17, 2024 at 04:20:20PM +0000, Zeng, Xin wrote:

> Thanks for this information, but this flow is not clear to me why it
> cause deadlock. From this flow, CPU0 is not waiting for any resource
> held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
> to run. Am I missing something?

At some point it was calling copy_to_user() under the state
mutex. These days it doesn't.

copy_to_user() would nest the mm_lock under the state mutex which is a
locking inversion.

So I wonder if we still have this problem now that the copy_to_user()
is not under the mutex?
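To spell the inversion out (rough illustration only, not the exact chain from
the lockdep report earlier in the thread):

	/* migration ioctl path */
	mutex_lock(&state_mutex);
	copy_to_user(...);	/* may fault and take mm->mmap_lock */

	/* pinning / fault path: mm->mmap_lock already held, and the chain
	 * in the report eventually reaches a path that takes state_mutex */

One context establishes state_mutex -> mmap_lock while the other establishes
the reverse order, which is the ABBA lockdep warns about.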

Jason

2024-02-20 15:53:23

by Zeng, Xin

[permalink] [raw]
Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Tuesday, February 20, 2024 9:25 PM, Jason Gunthorpe wrote:
> To: Zeng, Xin <[email protected]>
> Cc: Yishai Hadas <[email protected]>; [email protected];
> [email protected]; [email protected]; Tian,
> Kevin <[email protected]>; [email protected];
> [email protected]; qat-linux <[email protected]>; Cao, Yahui
> <[email protected]>
> Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices
>
> On Sat, Feb 17, 2024 at 04:20:20PM +0000, Zeng, Xin wrote:
>
> > Thanks for this information, but this flow is not clear to me why it
> > cause deadlock. From this flow, CPU0 is not waiting for any resource
> > held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
> > to run. Am I missing something?
>
> At some point it was calling copy_to_user() under the state
> mutex. These days it doesn't.
>
> copy_to_user() would nest the mm_lock under the state mutex which is a
> locking inversion.
>
> So I wonder if we still have this problem now that the copy_to_user()
> is not under the mutex?

In protocol v2, we still have the scenario in precopy_ioctl where copy_to_user is
called under state_mutex.

Thanks,
Xin


2024-02-20 17:03:30

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Tue, Feb 20, 2024 at 03:53:08PM +0000, Zeng, Xin wrote:
> On Tuesday, February 20, 2024 9:25 PM, Jason Gunthorpe wrote:
> > To: Zeng, Xin <[email protected]>
> > Cc: Yishai Hadas <[email protected]>; [email protected];
> > [email protected]; [email protected]; Tian,
> > Kevin <[email protected]>; [email protected];
> > [email protected]; qat-linux <[email protected]>; Cao, Yahui
> > <[email protected]>
> > Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices
> >
> > On Sat, Feb 17, 2024 at 04:20:20PM +0000, Zeng, Xin wrote:
> >
> > > Thanks for this information, but this flow is not clear to me why it
> > > cause deadlock. From this flow, CPU0 is not waiting for any resource
> > > held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
> > > to run. Am I missing something?
> >
> > At some point it was calling copy_to_user() under the state
> > mutex. These days it doesn't.
> >
> > copy_to_user() would nest the mm_lock under the state mutex which is a
> > locking inversion.
> >
> > So I wonder if we still have this problem now that the copy_to_user()
> > is not under the mutex?
>
> In protocol v2, we still have the scenario in precopy_ioctl where copy_to_user is
> called under state_mutex.

Why? Does mlx5 do that? It looked Ok to me:

mlx5vf_state_mutex_unlock(mvdev);
if (copy_to_user((void __user *)arg, &info, minsz))
return -EFAULT;

Jason

2024-02-21 08:48:37

by Zeng, Xin

[permalink] [raw]
Subject: RE: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Wednesday, February 21, 2024 1:03 AM, Jason Gunthorpe wrote:
> On Tue, Feb 20, 2024 at 03:53:08PM +0000, Zeng, Xin wrote:
> > On Tuesday, February 20, 2024 9:25 PM, Jason Gunthorpe wrote:
> > > To: Zeng, Xin <[email protected]>
> > > Cc: Yishai Hadas <[email protected]>; [email protected];
> > > [email protected]; [email protected];
> Tian,
> > > Kevin <[email protected]>; [email protected];
> > > [email protected]; qat-linux <[email protected]>; Cao, Yahui
> > > <[email protected]>
> > > Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF
> devices
> > >
> > > On Sat, Feb 17, 2024 at 04:20:20PM +0000, Zeng, Xin wrote:
> > >
> > > > Thanks for this information, but this flow is not clear to me why it
> > > > cause deadlock. From this flow, CPU0 is not waiting for any resource
> > > > held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
> > > > to run. Am I missing something?
> > >
> > > At some point it was calling copy_to_user() under the state
> > > mutex. These days it doesn't.
> > >
> > > copy_to_user() would nest the mm_lock under the state mutex which is
> a
> > > locking inversion.
> > >
> > > So I wonder if we still have this problem now that the copy_to_user()
> > > is not under the mutex?
> >
> > In protocol v2, we still have the scenario in precopy_ioctl where
> copy_to_user is
> > called under state_mutex.
>
> Why? Does mlx5 do that? It looked Ok to me:
>
> mlx5vf_state_mutex_unlock(mvdev);
> if (copy_to_user((void __user *)arg, &info, minsz))
> return -EFAULT;

Indeed, thanks, Jason. BTW, is there any reason why the "deferred_reset" mode
is still implemented in the mlx5 driver, given that this deadlock condition has
been avoided with the migration protocol v2 implementation?
Anyway, I'll use state_mutex directly instead of the "deferred_reset" mode in
the qat variant driver and update it in the next version soon; please help to review.
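The reset handler would then boil down to roughly the following (untested
sketch, assuming no user copy is ever done under state_mutex in the qat
variant driver):

	static void qat_vf_pci_aer_reset_done(struct pci_dev *pdev)
	{
		struct qat_vf_core_device *qat_vdev = qat_vf_drvdata(pdev);

		if (!qat_vdev->core_device.vdev.mig_ops)
			return;

		mutex_lock(&qat_vdev->state_mutex);
		qat_vdev->mig_state = VFIO_DEVICE_STATE_RUNNING;
		qat_vf_disable_fds(qat_vdev);
		mutex_unlock(&qat_vdev->state_mutex);
	}

with qat_vf_state_mutex_unlock(), deferred_reset and reset_lock going away.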
Thanks,
Xin


2024-02-21 13:19:05

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On Wed, Feb 21, 2024 at 08:44:31AM +0000, Zeng, Xin wrote:
> On Wednesday, February 21, 2024 1:03 AM, Jason Gunthorpe wrote:
> > On Tue, Feb 20, 2024 at 03:53:08PM +0000, Zeng, Xin wrote:
> > > On Tuesday, February 20, 2024 9:25 PM, Jason Gunthorpe wrote:
> > > > To: Zeng, Xin <[email protected]>
> > > > Cc: Yishai Hadas <[email protected]>; [email protected];
> > > > [email protected]; [email protected];
> > Tian,
> > > > Kevin <[email protected]>; [email protected];
> > > > [email protected]; qat-linux <[email protected]>; Cao, Yahui
> > > > <[email protected]>
> > > > Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF
> > devices
> > > >
> > > > On Sat, Feb 17, 2024 at 04:20:20PM +0000, Zeng, Xin wrote:
> > > >
> > > > > Thanks for this information, but this flow is not clear to me why it
> > > > > cause deadlock. From this flow, CPU0 is not waiting for any resource
> > > > > held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
> > > > > to run. Am I missing something?
> > > >
> > > > At some point it was calling copy_to_user() under the state
> > > > mutex. These days it doesn't.
> > > >
> > > > copy_to_user() would nest the mm_lock under the state mutex which is
> > a
> > > > locking inversion.
> > > >
> > > > So I wonder if we still have this problem now that the copy_to_user()
> > > > is not under the mutex?
> > >
> > > In protocol v2, we still have the scenario in precopy_ioctl where
> > copy_to_user is
> > > called under state_mutex.
> >
> > Why? Does mlx5 do that? It looked Ok to me:
> >
> > mlx5vf_state_mutex_unlock(mvdev);
> > if (copy_to_user((void __user *)arg, &info, minsz))
> > return -EFAULT;
>
> Indeed, thanks, Jason. BTW, is there any reason why was "deferred_reset" mode
> still implemented in mlx5 driver given this deadlock condition has been avoided
> with migration protocol v2 implementation.

I do not remember. Yishai?

Jason

2024-02-21 15:11:51

by Yishai Hadas

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On 21/02/2024 15:18, Jason Gunthorpe wrote:
> On Wed, Feb 21, 2024 at 08:44:31AM +0000, Zeng, Xin wrote:
>> On Wednesday, February 21, 2024 1:03 AM, Jason Gunthorpe wrote:
>>> On Tue, Feb 20, 2024 at 03:53:08PM +0000, Zeng, Xin wrote:
>>>> On Tuesday, February 20, 2024 9:25 PM, Jason Gunthorpe wrote:
>>>>> To: Zeng, Xin <[email protected]>
>>>>> Cc: Yishai Hadas <[email protected]>; [email protected];
>>>>> [email protected]; [email protected];
>>> Tian,
>>>>> Kevin <[email protected]>; [email protected];
>>>>> [email protected]; qat-linux <[email protected]>; Cao, Yahui
>>>>> <[email protected]>
>>>>> Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF
>>> devices
>>>>>
>>>>> On Sat, Feb 17, 2024 at 04:20:20PM +0000, Zeng, Xin wrote:
>>>>>
>>>>>> Thanks for this information, but this flow is not clear to me why it
>>>>>> cause deadlock. From this flow, CPU0 is not waiting for any resource
>>>>>> held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
>>>>>> to run. Am I missing something?
>>>>>
>>>>> At some point it was calling copy_to_user() under the state
>>>>> mutex. These days it doesn't.
>>>>>
>>>>> copy_to_user() would nest the mm_lock under the state mutex which is
>>> a
>>>>> locking inversion.
>>>>>
>>>>> So I wonder if we still have this problem now that the copy_to_user()
>>>>> is not under the mutex?
>>>>
>>>> In protocol v2, we still have the scenario in precopy_ioctl where
>>> copy_to_user is
>>>> called under state_mutex.
>>>
>>> Why? Does mlx5 do that? It looked Ok to me:
>>>
>>> mlx5vf_state_mutex_unlock(mvdev);
>>> if (copy_to_user((void __user *)arg, &info, minsz))
>>> return -EFAULT;
>>
>> Indeed, thanks, Jason. BTW, is there any reason why was "deferred_reset" mode
>> still implemented in mlx5 driver given this deadlock condition has been avoided
>> with migration protocol v2 implementation.
>
> I do not remember. Yishai?
>

A long time has passed... I also don't fully remember whether this was the
only potential problem here; maybe yes.

My plan is to prepare a cleanup patch for mlx5 and put it into our
regression for a while; if all goes well, I may send it for the next
kernel cycle.

By the way, there are other drivers around (e.g. hisi and mtty) that
still run copy_to_user() under the state mutex and might hit the problem
without the 'deferred_reset', see here[1].

If we reach the conclusion that the only reason for that mechanism
was the copy_to_user() under the state_mutex, those drivers can change
their code easily and also send a patch to clean up the 'deferred_reset'.

[1]
https://elixir.bootlin.com/linux/v6.8-rc5/source/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c#L808
[2]
https://elixir.bootlin.com/linux/v6.8-rc5/source/samples/vfio-mdev/mtty.c#L878
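For reference, the change in those drivers would be the same pattern mlx5
already uses in its precopy ioctl: fill the info struct under the state
mutex, drop the mutex, and only then call copy_to_user(). A generic sketch
(names are illustrative, not the actual hisi/mtty code):

	struct vfio_precopy_info info = {};

	mutex_lock(&priv->state_mutex);
	info.initial_bytes = priv->initial_bytes;
	info.dirty_bytes = priv->dirty_bytes;
	mutex_unlock(&priv->state_mutex);

	return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;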

Yishai





2024-02-22 09:39:49

by Yishai Hadas

[permalink] [raw]
Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel QAT VF devices

On 21/02/2024 17:11, Yishai Hadas wrote:
> On 21/02/2024 15:18, Jason Gunthorpe wrote:
>> On Wed, Feb 21, 2024 at 08:44:31AM +0000, Zeng, Xin wrote:
>>> On Wednesday, February 21, 2024 1:03 AM, Jason Gunthorpe wrote:
>>>> On Tue, Feb 20, 2024 at 03:53:08PM +0000, Zeng, Xin wrote:
>>>>> On Tuesday, February 20, 2024 9:25 PM, Jason Gunthorpe wrote:
>>>>>> To: Zeng, Xin <[email protected]>
>>>>>> Cc: Yishai Hadas <[email protected]>; [email protected];
>>>>>> [email protected]; [email protected];
>>>> Tian,
>>>>>> Kevin <[email protected]>; [email protected];
>>>>>> [email protected]; qat-linux <[email protected]>; Cao, Yahui
>>>>>> <[email protected]>
>>>>>> Subject: Re: [PATCH 10/10] vfio/qat: Add vfio_pci driver for Intel
>>>>>> QAT VF
>>>> devices
>>>>>>
>>>>>> On Sat, Feb 17, 2024 at 04:20:20PM +0000, Zeng, Xin wrote:
>>>>>>
>>>>>>> Thanks for this information, but this flow is not clear to me why it
>>>>>>> cause deadlock. From this flow, CPU0 is not waiting for any resource
>>>>>>> held by CPU1, so after CPU0 releases mmap_lock, CPU1 can continue
>>>>>>> to run. Am I missing something?
>>>>>>
>>>>>> At some point it was calling copy_to_user() under the state
>>>>>> mutex. These days it doesn't.
>>>>>>
>>>>>> copy_to_user() would nest the mm_lock under the state mutex which is
>>>> a
>>>>>> locking inversion.
>>>>>>
>>>>>> So I wonder if we still have this problem now that the copy_to_user()
>>>>>> is not under the mutex?
>>>>>
>>>>> In protocol v2, we still have the scenario in precopy_ioctl where
>>>> copy_to_user is
>>>>> called under state_mutex.
>>>>
>>>> Why? Does mlx5 do that? It looked Ok to me:
>>>>
>>>>          mlx5vf_state_mutex_unlock(mvdev);
>>>>          if (copy_to_user((void __user *)arg, &info, minsz))
>>>>                  return -EFAULT;
>>>
>>> Indeed, thanks, Jason. BTW, is there any reason why was
>>> "deferred_reset" mode
>>> still implemented in mlx5 driver given this deadlock condition has
>>> been avoided
>>> with migration protocol v2 implementation.
>>
>> I do not remember. Yishai?
>>
>
> Long time passed.., I also don't fully remember whether this was the
> only potential problem here, maybe Yes.
>
> My plan is to prepare a cleanup patch for mlx5 and put it into our
> regression for a while, if all will be good, I may send it for the next
> kernel cycle.
>
> By the way, there are other drivers around (i.e. hisi and mtty) that
> still run copy_to_user() under the state mutex and might hit the problem
> without the 'deferred_rest', see here[1].
>
> If we'll reach to the conclusion that the only reason for that mechanism
> was the copy_to_user() under the state_mutex, those drivers can change
> their code easily and also send a patch to cleanup the 'deferred_reset'.
>
> [1]
> https://elixir.bootlin.com/linux/v6.8-rc5/source/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c#L808
> [2]
> https://elixir.bootlin.com/linux/v6.8-rc5/source/samples/vfio-mdev/mtty.c#L878
>
> Yishai
>

From an extra code review, it looks like we still need the
'deferred_reset' flow in mlx5.

For example, we call copy_from_user() as part of
mlx5vf_resume_read_header(), which is called by mlx5vf_resume_write()
under the state mutex.

So, no change is expected in mlx5.

Yishai