2023-07-14 22:54:29

by Dmitry Torokhov

Subject: [PATCH v3 2/2] kvm/vfio: avoid bouncing the mutex when adding and deleting groups

Stop taking the kv->lock mutex in kvm_vfio_update_coherency() and
instead call it with this mutex held: the callers of the function
usually already take (and release) it around the call. This avoids
bouncing the lock up and down.

The exception is kvm_vfio_release(), where we do not take the lock, but
it runs when the very last reference to the kvm_device is being dropped,
so there are no concerns about concurrency.
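The locking pattern described above can be sketched in userspace C. This is a simplified illustration with invented names (dev_state, update_coherency, group_add), not the kernel code itself: the helper no longer takes the mutex, every racing caller invokes it with the mutex already held, and the release path skips the lock because no concurrent caller can exist.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kvm_vfio device state. */
struct dev_state {
	pthread_mutex_t lock;
	int ngroups;
	bool noncoherent;
};

/* Must be called with st->lock held, or with exclusive access
 * (e.g. from the release path when no other user remains). */
static void update_coherency(struct dev_state *st)
{
	st->noncoherent = st->ngroups > 0;
}

static void group_add(struct dev_state *st)
{
	pthread_mutex_lock(&st->lock);
	st->ngroups++;
	update_coherency(st);	/* lock already held: no bounce */
	pthread_mutex_unlock(&st->lock);
}

static void device_release(struct dev_state *st)
{
	/* Last reference is being dropped: no concurrency,
	 * so the lock is not needed. */
	st->ngroups = 0;
	update_coherency(st);
}
```

Before the change, update_coherency() would have locked and unlocked the mutex itself, forcing group_add() to drop the lock first and re-acquire it inside the helper.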

Suggested-by: Alex Williamson <[email protected]>
Reviewed-by: Alex Williamson <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
---

v3: initialize "ret" with 0 (per Alex), added Alex's reviewed-by

v2: new patch.

virt/kvm/vfio.c | 26 +++++++++-----------------
1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
index cd46d7ef98d6..dbf2b855cf78 100644
--- a/virt/kvm/vfio.c
+++ b/virt/kvm/vfio.c
@@ -122,8 +122,6 @@ static void kvm_vfio_update_coherency(struct kvm_device *dev)
bool noncoherent = false;
struct kvm_vfio_group *kvg;

- mutex_lock(&kv->lock);
-
list_for_each_entry(kvg, &kv->group_list, node) {
if (!kvm_vfio_file_enforced_coherent(kvg->file)) {
noncoherent = true;
@@ -139,8 +137,6 @@ static void kvm_vfio_update_coherency(struct kvm_device *dev)
else
kvm_arch_unregister_noncoherent_dma(dev->kvm);
}
-
- mutex_unlock(&kv->lock);
}

static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd)
@@ -148,7 +144,7 @@ static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd)
struct kvm_vfio *kv = dev->private;
struct kvm_vfio_group *kvg;
struct file *filp;
- int ret;
+ int ret = 0;

filp = fget(fd);
if (!filp)
@@ -157,7 +153,7 @@ static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd)
/* Ensure the FD is a vfio group FD.*/
if (!kvm_vfio_file_is_group(filp)) {
ret = -EINVAL;
- goto err_fput;
+ goto out_fput;
}

mutex_lock(&kv->lock);
@@ -165,30 +161,26 @@ static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd)
list_for_each_entry(kvg, &kv->group_list, node) {
if (kvg->file == filp) {
ret = -EEXIST;
- goto err_unlock;
+ goto out_unlock;
}
}

kvg = kzalloc(sizeof(*kvg), GFP_KERNEL_ACCOUNT);
if (!kvg) {
ret = -ENOMEM;
- goto err_unlock;
+ goto out_unlock;
}

- kvg->file = filp;
+ kvg->file = get_file(filp);
list_add_tail(&kvg->node, &kv->group_list);

kvm_arch_start_assignment(dev->kvm);
kvm_vfio_file_set_kvm(kvg->file, dev->kvm);
-
- mutex_unlock(&kv->lock);
-
kvm_vfio_update_coherency(dev);

- return 0;
-err_unlock:
+out_unlock:
mutex_unlock(&kv->lock);
-err_fput:
+out_fput:
fput(filp);
return ret;
}
@@ -224,12 +216,12 @@ static int kvm_vfio_group_del(struct kvm_device *dev, unsigned int fd)
break;
}

+ kvm_vfio_update_coherency(dev);
+
mutex_unlock(&kv->lock);

fdput(f);

- kvm_vfio_update_coherency(dev);
-
return ret;
}

--
2.41.0.255.g8b1d071c50-goog



2023-07-19 06:11:15

by Tian, Kevin

Subject: RE: [PATCH v3 2/2] kvm/vfio: avoid bouncing the mutex when adding and deleting groups

> From: Dmitry Torokhov <[email protected]>
> Sent: Saturday, July 15, 2023 6:46 AM
>
> @@ -165,30 +161,26 @@ static int kvm_vfio_group_add(struct kvm_device
> *dev, unsigned int fd)
> list_for_each_entry(kvg, &kv->group_list, node) {
> if (kvg->file == filp) {
> ret = -EEXIST;
> - goto err_unlock;
> + goto out_unlock;
> }
> }
>
> kvg = kzalloc(sizeof(*kvg), GFP_KERNEL_ACCOUNT);
> if (!kvg) {
> ret = -ENOMEM;
> - goto err_unlock;
> + goto out_unlock;
> }
>
> - kvg->file = filp;
> + kvg->file = get_file(filp);

Why is another reference required here?

2023-07-19 06:35:43

by Dmitry Torokhov

Subject: Re: [PATCH v3 2/2] kvm/vfio: avoid bouncing the mutex when adding and deleting groups

On Wed, Jul 19, 2023 at 05:32:27AM +0000, Tian, Kevin wrote:
> > From: Dmitry Torokhov <[email protected]>
> > Sent: Saturday, July 15, 2023 6:46 AM
> >
> > @@ -165,30 +161,26 @@ static int kvm_vfio_group_add(struct kvm_device
> > *dev, unsigned int fd)
> > list_for_each_entry(kvg, &kv->group_list, node) {
> > if (kvg->file == filp) {
> > ret = -EEXIST;
> > - goto err_unlock;
> > + goto out_unlock;
> > }
> > }
> >
> > kvg = kzalloc(sizeof(*kvg), GFP_KERNEL_ACCOUNT);
> > if (!kvg) {
> > ret = -ENOMEM;
> > - goto err_unlock;
> > + goto out_unlock;
> > }
> >
> > - kvg->file = filp;
> > + kvg->file = get_file(filp);
>
> Why is another reference required here?

Because the function now has a single exit point and the original
reference is dropped unconditionally on exit. It looks cleaner than
checking for non-zero "ret" and deciding whether the reference should be
dropped or kept.
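The reasoning above can be sketched in a small userspace C model (simplified, with invented names; the real kernel get_file()/fput() operate on struct file's f_count): because the function takes its own reference when it stores the file, the single exit label can drop the caller-supplied reference unconditionally on both the success and error paths.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted file, mimicking the VFS helpers' shape. */
struct file { int refcount; };

static struct file *get_file(struct file *f)
{
	f->refcount++;
	return f;
}

static void fput(struct file *f)
{
	if (--f->refcount == 0)
		free(f);
}

struct group { struct file *file; };

/* Owns one reference on filp (as if obtained via fget()) and
 * always drops it at the single exit point. */
static int group_add(struct file *filp, struct group *out, int fail)
{
	int ret = 0;

	if (fail) {
		ret = -1;
		goto out_fput;	/* error path: same exit as success */
	}

	/* Keep our own reference for the stored entry, so the
	 * unconditional fput() below cannot free the file. */
	out->file = get_file(filp);

out_fput:
	fput(filp);	/* drop the caller's reference unconditionally */
	return ret;
}
```

Without the extra get_file(), the success path would have to skip the fput(), reintroducing the conditional cleanup the single-exit rewrite was meant to remove.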

Thanks.

--
Dmitry

2023-07-20 02:51:49

by Tian, Kevin

Subject: RE: [PATCH v3 2/2] kvm/vfio: avoid bouncing the mutex when adding and deleting groups

> From: Dmitry Torokhov <[email protected]>
> Sent: Wednesday, July 19, 2023 2:11 PM
>
> On Wed, Jul 19, 2023 at 05:32:27AM +0000, Tian, Kevin wrote:
> > > From: Dmitry Torokhov <[email protected]>
> > > Sent: Saturday, July 15, 2023 6:46 AM
> > >
> > > @@ -165,30 +161,26 @@ static int kvm_vfio_group_add(struct
> kvm_device
> > > *dev, unsigned int fd)
> > > list_for_each_entry(kvg, &kv->group_list, node) {
> > > if (kvg->file == filp) {
> > > ret = -EEXIST;
> > > - goto err_unlock;
> > > + goto out_unlock;
> > > }
> > > }
> > >
> > > kvg = kzalloc(sizeof(*kvg), GFP_KERNEL_ACCOUNT);
> > > if (!kvg) {
> > > ret = -ENOMEM;
> > > - goto err_unlock;
> > > + goto out_unlock;
> > > }
> > >
> > > - kvg->file = filp;
> > > + kvg->file = get_file(filp);
> >
> > Why is another reference required here?
>
> Because the function now has a single exit point and the original
> reference is dropped unconditionally on exit. It looks cleaner than
> checking for non-zero "ret" and deciding whether the reference should be
> dropped or kept.
>

A comment is appreciated. Otherwise,

Reviewed-by: Kevin Tian <[email protected]>

2023-07-20 17:39:05

by Dmitry Torokhov

Subject: Re: [PATCH v3 2/2] kvm/vfio: avoid bouncing the mutex when adding and deleting groups

On Thu, Jul 20, 2023 at 02:36:16AM +0000, Tian, Kevin wrote:
> > From: Dmitry Torokhov <[email protected]>
> > Sent: Wednesday, July 19, 2023 2:11 PM
> >
> > On Wed, Jul 19, 2023 at 05:32:27AM +0000, Tian, Kevin wrote:
> > > > From: Dmitry Torokhov <[email protected]>
> > > > Sent: Saturday, July 15, 2023 6:46 AM
> > > >
> > > > @@ -165,30 +161,26 @@ static int kvm_vfio_group_add(struct
> > kvm_device
> > > > *dev, unsigned int fd)
> > > > list_for_each_entry(kvg, &kv->group_list, node) {
> > > > if (kvg->file == filp) {
> > > > ret = -EEXIST;
> > > > - goto err_unlock;
> > > > + goto out_unlock;
> > > > }
> > > > }
> > > >
> > > > kvg = kzalloc(sizeof(*kvg), GFP_KERNEL_ACCOUNT);
> > > > if (!kvg) {
> > > > ret = -ENOMEM;
> > > > - goto err_unlock;
> > > > + goto out_unlock;
> > > > }
> > > >
> > > > - kvg->file = filp;
> > > > + kvg->file = get_file(filp);
> > >
> > > Why is another reference required here?
> >
> > Because the function now has a single exit point and the original
> > reference is dropped unconditionally on exit. It looks cleaner than
> > checking for non-zero "ret" and deciding whether the reference should be
> > dropped or kept.
> >
>
> A comment is appreciated. Otherwise,
>
> Reviewed-by: Kevin Tian <[email protected]>

Thank you for the review! However, I do not think any comment is needed:
if one looks at the final source rather than the patch form, the reason
for taking another reference is plain to see.

Thanks!

--
Dmitry