2022-09-01 07:44:35

by Tian, Kevin

Subject: [PATCH v2 00/15] Tidy up vfio_device life cycle

The idea is to let the vfio core manage the vfio_device life cycle
instead of duplicating the logic across drivers. Besides cleaner code
on the driver side, this also allows adding a struct device to
vfio_device as the first step toward adding a cdev uAPI in the future.
Another benefit is that userspace can now look at sysfs to determine
whether a device is bound to vfio [1], e.g.:

/sys/devices/pci0000\:6f/0000\:6f\:01.0/vfio-dev/vfio0

Most drivers fit the new model naturally (a rough usage sketch follows
the list):

- vfio_alloc_device() to allocate and initialize vfio_device
- vfio_put_device() to release vfio_device
- dev_ops->init() for driver private initialization
- dev_ops->release() for driver private cleanup
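
A rough sketch of the intended pattern (the structure, callback and
probe/remove names below are illustrative, not taken from any specific
driver):

        struct my_vfio_device {
                struct vfio_device vdev;        /* kept as the first member */
                /* driver-private state follows */
        };

        static int my_init(struct vfio_device *core_vdev)
        {
                /* driver-private initialization */
                return 0;
        }

        static void my_release(struct vfio_device *core_vdev)
        {
                /* driver-private cleanup, then free the vfio_device */
                vfio_free_device(core_vdev);
        }

        static const struct vfio_device_ops my_ops = {
                .init           = my_init,
                .release        = my_release,
                /* .open_device, .close_device, ... */
        };

        static int my_probe(struct device *dev)
        {
                struct my_vfio_device *mydev;
                int ret;

                mydev = vfio_alloc_device(my_vfio_device, vdev, dev, &my_ops);
                if (IS_ERR(mydev))
                        return PTR_ERR(mydev);

                dev_set_drvdata(dev, mydev);
                ret = vfio_register_group_dev(&mydev->vdev);
                if (ret)
                        vfio_put_device(&mydev->vdev); /* ends up in @release */
                return ret;
        }

        static void my_remove(struct device *dev)
        {
                struct my_vfio_device *mydev = dev_get_drvdata(dev);

                vfio_unregister_group_dev(&mydev->vdev);
                vfio_put_device(&mydev->vdev); /* @release frees it */
        }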

vfio-ccw is the only exception, due to a life cycle mess: its private
structure mixes both parent and mdev info and hence must be allocated
and freed outside the life cycle of the vfio device.

Per prior discussions the IBM folks won't fix this in the short term [2].

Instead of waiting, this series introduces a few tricks to move forward:

- vfio_init_device() to initialize a pre-allocated device structure;

- require *EVERY* driver to implement @release and free the
  vfio_device inside. vfio-ccw can then use a completion mechanism to
  delay the free to the css driver;

The second trick is not a real burden on other drivers because they
all need a @release for private cleanup anyway. Later, once the ccw
mess is fixed, a simple cleanup can move the free from @release into
the vfio core.
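
A rough sketch of that arrangement (illustrative names, not the actual
vfio-ccw code; it assumes vfio_init_device() returns an errno and works
on a caller-allocated, embedded vfio_device):

        struct my_parent_private {
                struct vfio_device vdev;        /* embedded in a bigger, pre-allocated struct */
                struct completion released;
                /* parent + mdev state mixed together */
        };

        static void my_release(struct vfio_device *core_vdev)
        {
                struct my_parent_private *priv =
                        container_of(core_vdev, struct my_parent_private, vdev);

                /* no free here; tell the owner all references are gone */
                complete(&priv->released);
        }

        static const struct vfio_device_ops my_ops = {
                .release        = my_release,
                /* .init and the usual callbacks */
        };

        static struct my_parent_private *my_parent_alloc(struct device *dev)
        {
                struct my_parent_private *priv;

                priv = kzalloc(sizeof(*priv), GFP_KERNEL);
                if (!priv)
                        return NULL;

                init_completion(&priv->released);
                if (vfio_init_device(&priv->vdev, dev, &my_ops)) {
                        kfree(priv);
                        return NULL;
                }
                return priv;
        }

        /* owner side, i.e. the css-like parent driver */
        static void my_parent_free(struct my_parent_private *priv)
        {
                vfio_put_device(&priv->vdev);           /* drop the last reference */
                wait_for_completion(&priv->released);   /* @release has run */
                kfree(priv);                            /* the delayed free */
        }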

v2:
- rebase to 6.0-rc3
- fix build warnings (lkp)
- patch1: remove unnecessary forward reference (Jason)
- patch10: leave device_set released by vfio core (Jason)
- patch13: add Suggested-by
- patch15: add ABI file sysfs-devices-vfio-dev (Alex)
- patch15: rename 'vfio' to 'vfio_group' in procfs (Jason)
v1: https://lore.kernel.org/lkml/[email protected]/

--
@Alex, since I don't know your merging preference yet, this is only
rebased to 6.0-rc3.

There is no conflict with:

- Remove private items from linux/vfio_pci_core.h
- Break up ioctl dispatch functions to one function per ioctl

But there are conflicts with Jason's two series:

- Allow MMIO regions to be exported through dma-buf
- vfio: Split the container code into a clean layer and dedicated file

Thanks
Kevin

[1] https://listman.redhat.com/archives/libvir-list/2022-August/233482.html
[2] https://lore.kernel.org/all/[email protected]/

Kevin Tian (6):
vfio: Add helpers for unifying vfio_device life cycle
drm/i915/gvt: Use the new device life cycle helpers
vfio/platform: Use the new device life cycle helpers
vfio/amba: Use the new device life cycle helpers
vfio/ccw: Use the new device life cycle helpers
vfio: Rename vfio_device_put() and vfio_device_try_get()

Yi Liu (9):
vfio/pci: Use the new device life cycle helpers
vfio/mlx5: Use the new device life cycle helpers
vfio/hisi_acc: Use the new device life cycle helpers
vfio/mdpy: Use the new device life cycle helpers
vfio/mtty: Use the new device life cycle helpers
vfio/mbochs: Use the new device life cycle helpers
vfio/ap: Use the new device life cycle helpers
vfio/fsl-mc: Use the new device life cycle helpers
vfio: Add struct device to vfio_device

.../ABI/testing/sysfs-devices-vfio-dev | 8 +
drivers/gpu/drm/i915/gvt/gvt.h | 5 +-
drivers/gpu/drm/i915/gvt/kvmgt.c | 52 ++++--
drivers/gpu/drm/i915/gvt/vgpu.c | 33 ++--
drivers/s390/cio/vfio_ccw_ops.c | 52 +++++-
drivers/s390/cio/vfio_ccw_private.h | 3 +
drivers/s390/crypto/vfio_ap_ops.c | 50 +++---
drivers/vfio/fsl-mc/vfio_fsl_mc.c | 85 +++++----
.../vfio/pci/hisilicon/hisi_acc_vfio_pci.c | 80 ++++-----
drivers/vfio/pci/mlx5/main.c | 49 +++--
drivers/vfio/pci/vfio_pci.c | 20 +--
drivers/vfio/pci/vfio_pci_core.c | 23 ++-
drivers/vfio/platform/vfio_amba.c | 72 ++++++--
drivers/vfio/platform/vfio_platform.c | 66 +++++--
drivers/vfio/platform/vfio_platform_common.c | 71 +++-----
drivers/vfio/platform/vfio_platform_private.h | 18 +-
drivers/vfio/vfio_main.c | 167 +++++++++++++++---
include/linux/vfio.h | 28 ++-
include/linux/vfio_pci_core.h | 6 +-
samples/vfio-mdev/mbochs.c | 73 +++++---
samples/vfio-mdev/mdpy.c | 81 +++++----
samples/vfio-mdev/mtty.c | 67 ++++---
22 files changed, 729 insertions(+), 380 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-devices-vfio-dev


base-commit: b90cb1053190353cc30f0fef0ef1f378ccc063c5
--
2.21.3


2022-09-01 08:01:02

by Tian, Kevin

Subject: [PATCH v2 10/15] vfio/fsl-mc: Use the new device life cycle helpers

From: Yi Liu <[email protected]>

Also add a comment to note that the vfio core releases the device_set
if @init fails.

Signed-off-by: Yi Liu <[email protected]>
Signed-off-by: Kevin Tian <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
---
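The contract the new comment documents, as a minimal sketch (names
other than the vfio helpers are made up, and set_id stands in for the
fsl-mc specific choice of device set):

        static int my_init(struct vfio_device *core_vdev)
        {
                int ret;

                ret = vfio_assign_device_set(core_vdev, set_id);
                if (ret)
                        return ret;

                /*
                 * No manual unwind of the device_set on a later failure:
                 * the vfio core releases it when @init returns an error.
                 */
                return my_private_setup(core_vdev);
        }
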
drivers/vfio/fsl-mc/vfio_fsl_mc.c | 85 ++++++++++++++++++-------------
1 file changed, 49 insertions(+), 36 deletions(-)

diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc.c b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
index 3feff729f3ce..b6a985b07419 100644
--- a/drivers/vfio/fsl-mc/vfio_fsl_mc.c
+++ b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
@@ -418,16 +418,7 @@ static int vfio_fsl_mc_mmap(struct vfio_device *core_vdev,
return vfio_fsl_mc_mmap_mmio(vdev->regions[index], vma);
}

-static const struct vfio_device_ops vfio_fsl_mc_ops = {
- .name = "vfio-fsl-mc",
- .open_device = vfio_fsl_mc_open_device,
- .close_device = vfio_fsl_mc_close_device,
- .ioctl = vfio_fsl_mc_ioctl,
- .read = vfio_fsl_mc_read,
- .write = vfio_fsl_mc_write,
- .mmap = vfio_fsl_mc_mmap,
-};
-
+static const struct vfio_device_ops vfio_fsl_mc_ops;
static int vfio_fsl_mc_bus_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
@@ -518,35 +509,43 @@ static void vfio_fsl_uninit_device(struct vfio_fsl_mc_device *vdev)
bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
}

-static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
+static int vfio_fsl_mc_init_dev(struct vfio_device *core_vdev)
{
- struct vfio_fsl_mc_device *vdev;
- struct device *dev = &mc_dev->dev;
+ struct vfio_fsl_mc_device *vdev =
+ container_of(core_vdev, struct vfio_fsl_mc_device, vdev);
+ struct fsl_mc_device *mc_dev = to_fsl_mc_device(core_vdev->dev);
int ret;

- vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
- if (!vdev)
- return -ENOMEM;
-
- vfio_init_group_dev(&vdev->vdev, dev, &vfio_fsl_mc_ops);
vdev->mc_dev = mc_dev;
mutex_init(&vdev->igate);

if (is_fsl_mc_bus_dprc(mc_dev))
- ret = vfio_assign_device_set(&vdev->vdev, &mc_dev->dev);
+ ret = vfio_assign_device_set(core_vdev, &mc_dev->dev);
else
- ret = vfio_assign_device_set(&vdev->vdev, mc_dev->dev.parent);
- if (ret)
- goto out_uninit;
+ ret = vfio_assign_device_set(core_vdev, mc_dev->dev.parent);

- ret = vfio_fsl_mc_init_device(vdev);
if (ret)
- goto out_uninit;
+ return ret;
+
+ /* device_set is released by vfio core if @init fails */
+ return vfio_fsl_mc_init_device(vdev);
+}
+
+static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
+{
+ struct vfio_fsl_mc_device *vdev;
+ struct device *dev = &mc_dev->dev;
+ int ret;
+
+ vdev = vfio_alloc_device(vfio_fsl_mc_device, vdev, dev,
+ &vfio_fsl_mc_ops);
+ if (IS_ERR(vdev))
+ return PTR_ERR(vdev);

ret = vfio_register_group_dev(&vdev->vdev);
if (ret) {
dev_err(dev, "VFIO_FSL_MC: Failed to add to vfio group\n");
- goto out_device;
+ goto out_put_vdev;
}

ret = vfio_fsl_mc_scan_container(mc_dev);
@@ -557,30 +556,44 @@ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)

out_group_dev:
vfio_unregister_group_dev(&vdev->vdev);
-out_device:
- vfio_fsl_uninit_device(vdev);
-out_uninit:
- vfio_uninit_group_dev(&vdev->vdev);
- kfree(vdev);
+out_put_vdev:
+ vfio_put_device(&vdev->vdev);
return ret;
}

+static void vfio_fsl_mc_release_dev(struct vfio_device *core_vdev)
+{
+ struct vfio_fsl_mc_device *vdev =
+ container_of(core_vdev, struct vfio_fsl_mc_device, vdev);
+
+ vfio_fsl_uninit_device(vdev);
+ mutex_destroy(&vdev->igate);
+ vfio_free_device(core_vdev);
+}
+
static int vfio_fsl_mc_remove(struct fsl_mc_device *mc_dev)
{
struct device *dev = &mc_dev->dev;
struct vfio_fsl_mc_device *vdev = dev_get_drvdata(dev);

vfio_unregister_group_dev(&vdev->vdev);
- mutex_destroy(&vdev->igate);
-
dprc_remove_devices(mc_dev, NULL, 0);
- vfio_fsl_uninit_device(vdev);
-
- vfio_uninit_group_dev(&vdev->vdev);
- kfree(vdev);
+ vfio_put_device(&vdev->vdev);
return 0;
}

+static const struct vfio_device_ops vfio_fsl_mc_ops = {
+ .name = "vfio-fsl-mc",
+ .init = vfio_fsl_mc_init_dev,
+ .release = vfio_fsl_mc_release_dev,
+ .open_device = vfio_fsl_mc_open_device,
+ .close_device = vfio_fsl_mc_close_device,
+ .ioctl = vfio_fsl_mc_ioctl,
+ .read = vfio_fsl_mc_read,
+ .write = vfio_fsl_mc_write,
+ .mmap = vfio_fsl_mc_mmap,
+};
+
static struct fsl_mc_driver vfio_fsl_mc_driver = {
.probe = vfio_fsl_mc_probe,
.remove = vfio_fsl_mc_remove,
--
2.21.3

2022-09-01 08:04:28

by Tian, Kevin

Subject: [PATCH v2 03/15] vfio/mlx5: Use the new device life cycle helpers

From: Yi Liu <[email protected]>

mlx5 has its own @init/@release for handling the migration capability.

Signed-off-by: Yi Liu <[email protected]>
Signed-off-by: Kevin Tian <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
---
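Condensed, the variant-driver pattern used here looks roughly like the
sketch below (mlx5-specific names replaced with placeholders):

        static int my_variant_init(struct vfio_device *core_vdev)
        {
                int ret;

                ret = vfio_pci_core_init_dev(core_vdev);  /* core part first */
                if (ret)
                        return ret;

                my_setup_migration(core_vdev);            /* variant-specific part */
                return 0;
        }

        static void my_variant_release(struct vfio_device *core_vdev)
        {
                my_teardown_migration(core_vdev);         /* variant part first */
                vfio_pci_core_release_dev(core_vdev);     /* then the core part */
        }
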
drivers/vfio/pci/mlx5/main.c | 49 ++++++++++++++++++++++++++----------
1 file changed, 36 insertions(+), 13 deletions(-)

diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
index a9b63d15c5d3..168c1133e337 100644
--- a/drivers/vfio/pci/mlx5/main.c
+++ b/drivers/vfio/pci/mlx5/main.c
@@ -579,8 +579,35 @@ static const struct vfio_migration_ops mlx5vf_pci_mig_ops = {
.migration_get_state = mlx5vf_pci_get_device_state,
};

+static int mlx5vf_pci_init_dev(struct vfio_device *core_vdev)
+{
+ struct mlx5vf_pci_core_device *mvdev = container_of(core_vdev,
+ struct mlx5vf_pci_core_device, core_device.vdev);
+ int ret;
+
+ ret = vfio_pci_core_init_dev(core_vdev);
+ if (ret)
+ return ret;
+
+ mlx5vf_cmd_set_migratable(mvdev, &mlx5vf_pci_mig_ops);
+
+ return 0;
+
+}
+
+static void mlx5vf_pci_release_dev(struct vfio_device *core_vdev)
+{
+ struct mlx5vf_pci_core_device *mvdev = container_of(core_vdev,
+ struct mlx5vf_pci_core_device, core_device.vdev);
+
+ mlx5vf_cmd_remove_migratable(mvdev);
+ vfio_pci_core_release_dev(core_vdev);
+}
+
static const struct vfio_device_ops mlx5vf_pci_ops = {
.name = "mlx5-vfio-pci",
+ .init = mlx5vf_pci_init_dev,
+ .release = mlx5vf_pci_release_dev,
.open_device = mlx5vf_pci_open_device,
.close_device = mlx5vf_pci_close_device,
.ioctl = vfio_pci_core_ioctl,
@@ -598,21 +625,19 @@ static int mlx5vf_pci_probe(struct pci_dev *pdev,
struct mlx5vf_pci_core_device *mvdev;
int ret;

- mvdev = kzalloc(sizeof(*mvdev), GFP_KERNEL);
- if (!mvdev)
- return -ENOMEM;
- vfio_pci_core_init_device(&mvdev->core_device, pdev, &mlx5vf_pci_ops);
- mlx5vf_cmd_set_migratable(mvdev, &mlx5vf_pci_mig_ops);
+ mvdev = vfio_alloc_device(mlx5vf_pci_core_device, core_device.vdev,
+ &pdev->dev, &mlx5vf_pci_ops);
+ if (IS_ERR(mvdev))
+ return PTR_ERR(mvdev);
+
dev_set_drvdata(&pdev->dev, &mvdev->core_device);
ret = vfio_pci_core_register_device(&mvdev->core_device);
if (ret)
- goto out_free;
+ goto out_put_vdev;
return 0;

-out_free:
- mlx5vf_cmd_remove_migratable(mvdev);
- vfio_pci_core_uninit_device(&mvdev->core_device);
- kfree(mvdev);
+out_put_vdev:
+ vfio_put_device(&mvdev->core_device.vdev);
return ret;
}

@@ -621,9 +646,7 @@ static void mlx5vf_pci_remove(struct pci_dev *pdev)
struct mlx5vf_pci_core_device *mvdev = mlx5vf_drvdata(pdev);

vfio_pci_core_unregister_device(&mvdev->core_device);
- mlx5vf_cmd_remove_migratable(mvdev);
- vfio_pci_core_uninit_device(&mvdev->core_device);
- kfree(mvdev);
+ vfio_put_device(&mvdev->core_device.vdev);
}

static const struct pci_device_id mlx5vf_pci_table[] = {
--
2.21.3

2022-09-01 08:08:29

by Tian, Kevin

Subject: [PATCH v2 08/15] drm/i915/gvt: Use the new device life cycle helpers

Move vfio_device to the start of intel_vgpu as required by the new
helpers.

Change intel_gvt_create_vgpu() to use intel_vgpu as the first param
as other vgpu helpers do.

Signed-off-by: Kevin Tian <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
---
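In short, the constraint imposed by the helpers is purely about layout,
presumably because the vfio core allocates and frees the containing
structure through the embedded vfio_device, so that member is expected
at offset 0 (an assumption; the commit message only states the
requirement):

        struct intel_vgpu {
                struct vfio_device vfio_device; /* must stay the first member */
                struct intel_gvt *gvt;
                /* ... */
        };
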
drivers/gpu/drm/i915/gvt/gvt.h | 5 ++-
drivers/gpu/drm/i915/gvt/kvmgt.c | 52 ++++++++++++++++++++++----------
drivers/gpu/drm/i915/gvt/vgpu.c | 33 ++++++++------------
3 files changed, 50 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 705689e64011..89fab7896fc6 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -172,6 +172,7 @@ struct intel_vgpu_submission {
#define KVMGT_DEBUGFS_FILENAME "kvmgt_nr_cache_entries"

struct intel_vgpu {
+ struct vfio_device vfio_device;
struct intel_gvt *gvt;
struct mutex vgpu_lock;
int id;
@@ -211,7 +212,6 @@ struct intel_vgpu {

u32 scan_nonprivbb;

- struct vfio_device vfio_device;
struct vfio_region *region;
int num_regions;
struct eventfd_ctx *intx_trigger;
@@ -494,8 +494,7 @@ void intel_gvt_clean_vgpu_types(struct intel_gvt *gvt);

struct intel_vgpu *intel_gvt_create_idle_vgpu(struct intel_gvt *gvt);
void intel_gvt_destroy_idle_vgpu(struct intel_vgpu *vgpu);
-struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt,
- struct intel_vgpu_type *type);
+int intel_gvt_create_vgpu(struct intel_vgpu *vgpu, struct intel_vgpu_type *type);
void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu);
void intel_gvt_release_vgpu(struct intel_vgpu *vgpu);
void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index e3cd58946477..41bba40feef8 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -1546,7 +1546,33 @@ static const struct attribute_group *intel_vgpu_groups[] = {
NULL,
};

+static int intel_vgpu_init_dev(struct vfio_device *vfio_dev)
+{
+ struct mdev_device *mdev = to_mdev_device(vfio_dev->dev);
+ struct device *pdev = mdev_parent_dev(mdev);
+ struct intel_gvt *gvt = kdev_to_i915(pdev)->gvt;
+ struct intel_vgpu_type *type;
+ struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
+
+ type = &gvt->types[mdev_get_type_group_id(mdev)];
+ if (!type)
+ return -EINVAL;
+
+ vgpu->gvt = gvt;
+ return intel_gvt_create_vgpu(vgpu, type);
+}
+
+static void intel_vgpu_release_dev(struct vfio_device *vfio_dev)
+{
+ struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
+
+ intel_gvt_destroy_vgpu(vgpu);
+ vfio_free_device(vfio_dev);
+}
+
static const struct vfio_device_ops intel_vgpu_dev_ops = {
+ .init = intel_vgpu_init_dev,
+ .release = intel_vgpu_release_dev,
.open_device = intel_vgpu_open_device,
.close_device = intel_vgpu_close_device,
.read = intel_vgpu_read,
@@ -1558,35 +1584,28 @@ static const struct vfio_device_ops intel_vgpu_dev_ops = {

static int intel_vgpu_probe(struct mdev_device *mdev)
{
- struct device *pdev = mdev_parent_dev(mdev);
- struct intel_gvt *gvt = kdev_to_i915(pdev)->gvt;
- struct intel_vgpu_type *type;
struct intel_vgpu *vgpu;
int ret;

- type = &gvt->types[mdev_get_type_group_id(mdev)];
- if (!type)
- return -EINVAL;
-
- vgpu = intel_gvt_create_vgpu(gvt, type);
+ vgpu = vfio_alloc_device(intel_vgpu, vfio_device, &mdev->dev,
+ &intel_vgpu_dev_ops);
if (IS_ERR(vgpu)) {
gvt_err("failed to create intel vgpu: %ld\n", PTR_ERR(vgpu));
return PTR_ERR(vgpu);
}

- vfio_init_group_dev(&vgpu->vfio_device, &mdev->dev,
- &intel_vgpu_dev_ops);
-
dev_set_drvdata(&mdev->dev, vgpu);
ret = vfio_register_emulated_iommu_dev(&vgpu->vfio_device);
- if (ret) {
- intel_gvt_destroy_vgpu(vgpu);
- return ret;
- }
+ if (ret)
+ goto out_put_vdev;

gvt_dbg_core("intel_vgpu_create succeeded for mdev: %s\n",
dev_name(mdev_dev(mdev)));
return 0;
+
+out_put_vdev:
+ vfio_put_device(&vgpu->vfio_device);
+ return ret;
}

static void intel_vgpu_remove(struct mdev_device *mdev)
@@ -1595,7 +1614,8 @@ static void intel_vgpu_remove(struct mdev_device *mdev)

if (WARN_ON_ONCE(vgpu->attached))
return;
- intel_gvt_destroy_vgpu(vgpu);
+
+ vfio_put_device(&vgpu->vfio_device);
}

static struct mdev_driver intel_vgpu_mdev_driver = {
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 46da19b3225d..5c533fbc2c8d 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -302,8 +302,6 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
mutex_lock(&gvt->lock);
intel_gvt_update_vgpu_types(gvt);
mutex_unlock(&gvt->lock);
-
- vfree(vgpu);
}

#define IDLE_VGPU_IDR 0
@@ -363,28 +361,23 @@ void intel_gvt_destroy_idle_vgpu(struct intel_vgpu *vgpu)
vfree(vgpu);
}

-static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
- struct intel_vgpu_creation_params *param)
+static int __intel_gvt_create_vgpu(struct intel_vgpu *vgpu,
+ struct intel_vgpu_creation_params *param)
{
+ struct intel_gvt *gvt = vgpu->gvt;
struct drm_i915_private *dev_priv = gvt->gt->i915;
- struct intel_vgpu *vgpu;
int ret;

gvt_dbg_core("low %llu MB high %llu MB fence %llu\n",
param->low_gm_sz, param->high_gm_sz,
param->fence_sz);

- vgpu = vzalloc(sizeof(*vgpu));
- if (!vgpu)
- return ERR_PTR(-ENOMEM);
-
ret = idr_alloc(&gvt->vgpu_idr, vgpu, IDLE_VGPU_IDR + 1, GVT_MAX_VGPU,
GFP_KERNEL);
if (ret < 0)
- goto out_free_vgpu;
+ return ret;

vgpu->id = ret;
- vgpu->gvt = gvt;
vgpu->sched_ctl.weight = param->weight;
mutex_init(&vgpu->vgpu_lock);
mutex_init(&vgpu->dmabuf_lock);
@@ -437,7 +430,7 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
if (ret)
goto out_clean_sched_policy;

- return vgpu;
+ return 0;

out_clean_sched_policy:
intel_vgpu_clean_sched_policy(vgpu);
@@ -455,9 +448,7 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
intel_vgpu_clean_mmio(vgpu);
out_clean_idr:
idr_remove(&gvt->vgpu_idr, vgpu->id);
-out_free_vgpu:
- vfree(vgpu);
- return ERR_PTR(ret);
+ return ret;
}

/**
@@ -470,11 +461,11 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
* Returns:
* pointer to intel_vgpu, error pointer if failed.
*/
-struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt,
- struct intel_vgpu_type *type)
+int intel_gvt_create_vgpu(struct intel_vgpu *vgpu, struct intel_vgpu_type *type)
{
+ struct intel_gvt *gvt = vgpu->gvt;
struct intel_vgpu_creation_params param;
- struct intel_vgpu *vgpu;
+ int ret;

param.primary = 1;
param.low_gm_sz = type->low_gm_size;
@@ -488,15 +479,15 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt,
param.high_gm_sz = BYTES_TO_MB(param.high_gm_sz);

mutex_lock(&gvt->lock);
- vgpu = __intel_gvt_create_vgpu(gvt, &param);
- if (!IS_ERR(vgpu)) {
+ ret = __intel_gvt_create_vgpu(vgpu, &param);
+ if (!ret) {
/* calculate left instance change for types */
intel_gvt_update_vgpu_types(gvt);
intel_gvt_update_reg_whitelist(vgpu);
}
mutex_unlock(&gvt->lock);

- return vgpu;
+ return ret;
}

/**
--
2.21.3

2022-09-07 03:59:59

by Zhenyu Wang

Subject: Re: [PATCH v2 08/15] drm/i915/gvt: Use the new device life cycle helpers

On 2022.09.01 22:37:40 +0800, Kevin Tian wrote:
> Move vfio_device to the start of intel_vgpu as required by the new
> helpers.
>
> Change intel_gvt_create_vgpu() to use intel_vgpu as the first param
> as other vgpu helpers do.
>
> Signed-off-by: Kevin Tian <[email protected]>
> Reviewed-by: Jason Gunthorpe <[email protected]>
> ---

Looks good to me.

Reviewed-by: Zhenyu Wang <[email protected]>

