2022-01-14 09:06:39

by Stefano Garzarella

Subject: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

In vhost_enable_notify() we enable the notifications and we read
the avail index to check if new buffers have become available in
the meantime.

We are not caching the avail index, so when the device will call
vhost_get_vq_desc(), it will find the old value in the cache and
it will read the avail index again.

It would be better to refresh the cache every time we read avail
index, so let's change vhost_enable_notify() caching the value in
`avail_idx` and compare it with `last_avail_idx` to check if there
are new buffers available.

Anyway, we don't expect a significant performance boost because
the above path is not very common, indeed vhost_enable_notify()
is often called with unlikely(), expecting that avail index has
not been updated.

Signed-off-by: Stefano Garzarella <[email protected]>
---
v1:
- improved the commit description [MST, Jason]
---
drivers/vhost/vhost.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 59edb5a1ffe2..07363dff559e 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -2543,8 +2543,9 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 		       &vq->avail->idx, r);
 		return false;
 	}
+	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
 
-	return vhost16_to_cpu(vq, avail_idx) != vq->avail_idx;
+	return vq->avail_idx != vq->last_avail_idx;
 }
EXPORT_SYMBOL_GPL(vhost_enable_notify);

--
2.31.1



2022-01-14 12:45:49

by Michael S. Tsirkin

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
> In vhost_enable_notify() we enable the notifications and we read
> the avail index to check if new buffers have become available in
> the meantime.
>
> We are not caching the avail index, so when the device will call
> vhost_get_vq_desc(), it will find the old value in the cache and
> it will read the avail index again.
>
> It would be better to refresh the cache every time we read avail
> index, so let's change vhost_enable_notify() caching the value in
> `avail_idx` and compare it with `last_avail_idx` to check if there
> are new buffers available.
>
> Anyway, we don't expect a significant performance boost because
> the above path is not very common, indeed vhost_enable_notify()
> is often called with unlikely(), expecting that avail index has
> not been updated.
>
> Signed-off-by: Stefano Garzarella <[email protected]>

... and can in theory even hurt due to an extra memory write.
So ... performance test results pls?

> ---
> v1:
> - improved the commit description [MST, Jason]
> ---
> drivers/vhost/vhost.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 59edb5a1ffe2..07363dff559e 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -2543,8 +2543,9 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  		       &vq->avail->idx, r);
>  		return false;
>  	}
> +	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
>
> -	return vhost16_to_cpu(vq, avail_idx) != vq->avail_idx;
> +	return vq->avail_idx != vq->last_avail_idx;
>  }
> EXPORT_SYMBOL_GPL(vhost_enable_notify);
>
> --
> 2.31.1


2022-01-14 13:38:35

by Stefano Garzarella

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Fri, Jan 14, 2022 at 07:45:35AM -0500, Michael S. Tsirkin wrote:
>On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
>> In vhost_enable_notify() we enable the notifications and we read
>> the avail index to check if new buffers have become available in
>> the meantime.
>>
>> We are not caching the avail index, so when the device will call
>> vhost_get_vq_desc(), it will find the old value in the cache and
>> it will read the avail index again.
>>
>> It would be better to refresh the cache every time we read avail
>> index, so let's change vhost_enable_notify() caching the value in
>> `avail_idx` and compare it with `last_avail_idx` to check if there
>> are new buffers available.
>>
>> Anyway, we don't expect a significant performance boost because
>> the above path is not very common, indeed vhost_enable_notify()
>> is often called with unlikely(), expecting that avail index has
>> not been updated.
>>
>> Signed-off-by: Stefano Garzarella <[email protected]>
>
>... and can in theory even hurt due to an extra memory write.
>So ... performance test results pls?

Right, could be.

I'll run some perf tests with vsock. For net, do you have a test suite
or common steps to follow to test it?

Thanks,
Stefano


2022-01-14 13:40:49

by Michael S. Tsirkin

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Fri, Jan 14, 2022 at 02:38:16PM +0100, Stefano Garzarella wrote:
> On Fri, Jan 14, 2022 at 07:45:35AM -0500, Michael S. Tsirkin wrote:
> > On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
> > > In vhost_enable_notify() we enable the notifications and we read
> > > the avail index to check if new buffers have become available in
> > > the meantime.
> > >
> > > We are not caching the avail index, so when the device will call
> > > vhost_get_vq_desc(), it will find the old value in the cache and
> > > it will read the avail index again.
> > >
> > > It would be better to refresh the cache every time we read avail
> > > index, so let's change vhost_enable_notify() caching the value in
> > > `avail_idx` and compare it with `last_avail_idx` to check if there
> > > are new buffers available.
> > >
> > > Anyway, we don't expect a significant performance boost because
> > > the above path is not very common, indeed vhost_enable_notify()
> > > is often called with unlikely(), expecting that avail index has
> > > not been updated.
> > >
> > > Signed-off-by: Stefano Garzarella <[email protected]>
> >
> > ... and can in theory even hurt due to an extra memory write.
> > So ... performance test results pls?
>
> Right, could be.
>
> I'll run some perf test with vsock, about net, do you have a test suite or
> common step to follow to test it?
>
> Thanks,
> Stefano

You can use the vhost test as a unit test as well.

--
MST


2022-01-21 22:20:21

by Stefano Garzarella

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Fri, Jan 14, 2022 at 2:40 PM Michael S. Tsirkin <[email protected]> wrote:
>
> On Fri, Jan 14, 2022 at 02:38:16PM +0100, Stefano Garzarella wrote:
> > On Fri, Jan 14, 2022 at 07:45:35AM -0500, Michael S. Tsirkin wrote:
> > > On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
> > > > In vhost_enable_notify() we enable the notifications and we read
> > > > the avail index to check if new buffers have become available in
> > > > the meantime.
> > > >
> > > > We are not caching the avail index, so when the device will call
> > > > vhost_get_vq_desc(), it will find the old value in the cache and
> > > > it will read the avail index again.
> > > >
> > > > It would be better to refresh the cache every time we read avail
> > > > index, so let's change vhost_enable_notify() caching the value in
> > > > `avail_idx` and compare it with `last_avail_idx` to check if there
> > > > are new buffers available.
> > > >
> > > > Anyway, we don't expect a significant performance boost because
> > > > the above path is not very common, indeed vhost_enable_notify()
> > > > is often called with unlikely(), expecting that avail index has
> > > > not been updated.
> > > >
> > > > Signed-off-by: Stefano Garzarella <[email protected]>
> > >
> > > ... and can in theory even hurt due to an extra memory write.
> > > So ... performance test results pls?
> >
> > Right, could be.
> >
> > I'll run some perf test with vsock, about net, do you have a test suite or
> > common step to follow to test it?
> >
> > Thanks,
> > Stefano
>
> You can use the vhost test as a unit test as well.

Thanks for the advice, I did indeed use it!

I ran virtio_test (with vhost_test.ko) using a batch size of 64 to
increase the chance of the path being taken (I changed bufs to
0x1000000 in virtio_test.c to increase the duration).

I used `perf stat` to take some numbers, running this command:

taskset -c 2 perf stat -r 10 --log-fd 1 -- ./virtio_test --batch=64

- Linux v5.16 without the patch applied

Performance counter stats for './virtio_test --batch=64' (10 runs):

2,791.70 msec task-clock # 0.996 CPUs utilized ( +- 0.36% )
23 context-switches # 8.209 /sec ( +- 2.75% )
0 cpu-migrations # 0.000 /sec
79 page-faults # 28.195 /sec ( +- 0.41% )
7,249,926,989 cycles # 2.587 GHz ( +- 0.36% )
7,711,999,656 instructions # 1.06 insn per cycle ( +- 1.08% )
1,838,436,806 branches # 656.134 M/sec ( +- 1.44% )
3,055,439 branch-misses # 0.17% of all branches ( +- 6.22% )

2.8024 +- 0.0100 seconds time elapsed ( +- 0.36% )

- Linux v5.16 with this patch applied

Performance counter stats for './virtio_test --batch=64' (10 runs):

2,753.36 msec task-clock # 0.998 CPUs utilized ( +- 0.20% )
24 context-switches # 8.699 /sec ( +- 2.86% )
0 cpu-migrations # 0.000 /sec
76 page-faults # 27.545 /sec ( +- 0.56% )
7,150,358,721 cycles # 2.592 GHz ( +- 0.20% )
7,420,639,950 instructions # 1.04 insn per cycle ( +- 0.76% )
1,745,759,193 branches # 632.730 M/sec ( +- 1.03% )
3,022,508 branch-misses # 0.17% of all branches ( +- 3.24% )

2.75952 +- 0.00561 seconds time elapsed ( +- 0.20% )


The difference seems minimal, with a slight improvement.

To try to stress the patch more, I modified vhost_test.ko to call
vhost_enable_notify()/vhost_disable_notify() on every cycle when calling
vhost_get_vq_desc():

- Linux v5.16 modified without the patch applied

Performance counter stats for './virtio_test --batch=64' (10 runs):

4,126.66 msec task-clock # 1.006 CPUs utilized ( +- 0.25% )
28 context-switches # 6.826 /sec ( +- 3.41% )
0 cpu-migrations # 0.000 /sec
85 page-faults # 20.721 /sec ( +- 0.44% )
10,716,808,883 cycles # 2.612 GHz ( +- 0.25% )
11,804,381,462 instructions # 1.11 insn per cycle ( +- 0.86% )
3,138,813,438 branches # 765.153 M/sec ( +- 1.03% )
11,286,860 branch-misses # 0.35% of all branches ( +- 1.23% )

4.1027 +- 0.0103 seconds time elapsed ( +- 0.25% )

- Linux v5.16 modified with this patch applied

Performance counter stats for './virtio_test --batch=64' (10 runs):

3,953.55 msec task-clock # 1.001 CPUs utilized ( +- 0.33% )
29 context-switches # 7.345 /sec ( +- 2.67% )
0 cpu-migrations # 0.000 /sec
83 page-faults # 21.021 /sec ( +- 0.65% )
10,267,242,653 cycles # 2.600 GHz ( +- 0.33% )
7,972,866,579 instructions # 0.78 insn per cycle ( +- 0.21% )
1,663,770,390 branches # 421.377 M/sec ( +- 0.45% )
16,986,093 branch-misses # 1.02% of all branches ( +- 0.47% )

3.9489 +- 0.0130 seconds time elapsed ( +- 0.33% )

In this case the difference is bigger, with a reduction in execution
time (3.7%) and fewer branches and instructions. This is likely because
the branch `if (vq->avail_idx == vq->last_avail_idx)` in
vhost_get_vq_desc() is no longer taken.

Should I resend the patch adding some more performance information?

Thanks,
Stefano

2022-01-21 22:25:50

by Michael S. Tsirkin

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Thu, Jan 20, 2022 at 04:08:39PM +0100, Stefano Garzarella wrote:
> On Fri, Jan 14, 2022 at 2:40 PM Michael S. Tsirkin <[email protected]> wrote:
> >
> > On Fri, Jan 14, 2022 at 02:38:16PM +0100, Stefano Garzarella wrote:
> > > On Fri, Jan 14, 2022 at 07:45:35AM -0500, Michael S. Tsirkin wrote:
> > > > On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
> > > > > In vhost_enable_notify() we enable the notifications and we read
> > > > > the avail index to check if new buffers have become available in
> > > > > the meantime.
> > > > >
> > > > > We are not caching the avail index, so when the device will call
> > > > > vhost_get_vq_desc(), it will find the old value in the cache and
> > > > > it will read the avail index again.
> > > > >
> > > > > It would be better to refresh the cache every time we read avail
> > > > > index, so let's change vhost_enable_notify() caching the value in
> > > > > `avail_idx` and compare it with `last_avail_idx` to check if there
> > > > > are new buffers available.
> > > > >
> > > > > Anyway, we don't expect a significant performance boost because
> > > > > the above path is not very common, indeed vhost_enable_notify()
> > > > > is often called with unlikely(), expecting that avail index has
> > > > > not been updated.
> > > > >
> > > > > Signed-off-by: Stefano Garzarella <[email protected]>
> > > >
> > > > ... and can in theory even hurt due to an extra memory write.
> > > > So ... performance test results pls?
> > >
> > > Right, could be.
> > >
> > > I'll run some perf test with vsock, about net, do you have a test suite or
> > > common step to follow to test it?
> > >
> > > Thanks,
> > > Stefano
> >
> > You can use the vhost test as a unit test as well.
>
> Thanks for the advice, I did indeed use it!
>
> I run virtio_test (with vhost_test.ko) using 64 as batch to increase the
> chance of the path being taken. (I changed bufs=0x1000000 in
> virtio_test.c to increase the duration).
>
> I used `perf stat` to take some numbers, running this command:
>
> taskset -c 2 perf stat -r 10 --log-fd 1 -- ./virtio_test --batch=64
>
> - Linux v5.16 without the patch applied
>
> Performance counter stats for './virtio_test --batch=64' (10 runs):
>
> 2,791.70 msec task-clock # 0.996 CPUs utilized ( +- 0.36% )
> 23 context-switches # 8.209 /sec ( +- 2.75% )
> 0 cpu-migrations # 0.000 /sec
> 79 page-faults # 28.195 /sec ( +- 0.41% )
> 7,249,926,989 cycles # 2.587 GHz ( +- 0.36% )
> 7,711,999,656 instructions # 1.06 insn per cycle ( +- 1.08% )
> 1,838,436,806 branches # 656.134 M/sec ( +- 1.44% )
> 3,055,439 branch-misses # 0.17% of all branches ( +- 6.22% )
>
> 2.8024 +- 0.0100 seconds time elapsed ( +- 0.36% )
>
> - Linux v5.16 with this patch applied
>
> Performance counter stats for './virtio_test --batch=64' (10 runs):
>
> 2,753.36 msec task-clock # 0.998 CPUs utilized ( +- 0.20% )
> 24 context-switches # 8.699 /sec ( +- 2.86% )
> 0 cpu-migrations # 0.000 /sec
> 76 page-faults # 27.545 /sec ( +- 0.56% )
> 7,150,358,721 cycles # 2.592 GHz ( +- 0.20% )
> 7,420,639,950 instructions # 1.04 insn per cycle ( +- 0.76% )
> 1,745,759,193 branches # 632.730 M/sec ( +- 1.03% )
> 3,022,508 branch-misses # 0.17% of all branches ( +- 3.24% )
>
> 2.75952 +- 0.00561 seconds time elapsed ( +- 0.20% )
>
>
> The difference seems minimal with a slight improvement.
>
> To try to stress the patch more, I modified vhost_test.ko to call
> vhost_enable_notify()/vhost_disable_notify() on every cycle when calling
> vhost_get_vq_desc():
>
> - Linux v5.16 modified without the patch applied
>
> Performance counter stats for './virtio_test --batch=64' (10 runs):
>
> 4,126.66 msec task-clock # 1.006 CPUs utilized ( +- 0.25% )
> 28 context-switches # 6.826 /sec ( +- 3.41% )
> 0 cpu-migrations # 0.000 /sec
> 85 page-faults # 20.721 /sec ( +- 0.44% )
> 10,716,808,883 cycles # 2.612 GHz ( +- 0.25% )
> 11,804,381,462 instructions # 1.11 insn per cycle ( +- 0.86% )
> 3,138,813,438 branches # 765.153 M/sec ( +- 1.03% )
> 11,286,860 branch-misses # 0.35% of all branches ( +- 1.23% )
>
> 4.1027 +- 0.0103 seconds time elapsed ( +- 0.25% )
>
> - Linux v5.16 modified with this patch applied
>
> Performance counter stats for './virtio_test --batch=64' (10 runs):
>
> 3,953.55 msec task-clock # 1.001 CPUs utilized ( +- 0.33% )
> 29 context-switches # 7.345 /sec ( +- 2.67% )
> 0 cpu-migrations # 0.000 /sec
> 83 page-faults # 21.021 /sec ( +- 0.65% )
> 10,267,242,653 cycles # 2.600 GHz ( +- 0.33% )
> 7,972,866,579 instructions # 0.78 insn per cycle ( +- 0.21% )
> 1,663,770,390 branches # 421.377 M/sec ( +- 0.45% )
> 16,986,093 branch-misses # 1.02% of all branches ( +- 0.47% )
>
> 3.9489 +- 0.0130 seconds time elapsed ( +- 0.33% )
>
> In this case the difference is bigger, with a reduction in execution
> time (3.7 %) and fewer branches and instructions. It should be the
> branch `if (vq->avail_idx == vq->last_avail_idx)` in vhost_get_vq_desc()
> that is not taken.
>
> Should I resend the patch adding some more performance information?
>
> Thanks,
> Stefano

Yea, pls do. You can just summarize it in a couple of lines.

--
MST

2022-01-24 19:06:24

by Stefan Hajnoczi

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
> In vhost_enable_notify() we enable the notifications and we read
> the avail index to check if new buffers have become available in
> the meantime.
>
> We are not caching the avail index, so when the device will call
> vhost_get_vq_desc(), it will find the old value in the cache and
> it will read the avail index again.

I think this wording is clearer, because we do keep a cached avail
index value; the issue is that we don't update it:
s/We are not caching the avail index/We do not update the cached avail
index value/

>
> It would be better to refresh the cache every time we read avail
> index, so let's change vhost_enable_notify() caching the value in
> `avail_idx` and compare it with `last_avail_idx` to check if there
> are new buffers available.
>
> Anyway, we don't expect a significant performance boost because
> the above path is not very common, indeed vhost_enable_notify()
> is often called with unlikely(), expecting that avail index has
> not been updated.
>
> Signed-off-by: Stefano Garzarella <[email protected]>
> ---
> v1:
> - improved the commit description [MST, Jason]
> ---
> drivers/vhost/vhost.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 59edb5a1ffe2..07363dff559e 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -2543,8 +2543,9 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>  		       &vq->avail->idx, r);
>  		return false;
>  	}
> +	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
>
> -	return vhost16_to_cpu(vq, avail_idx) != vq->avail_idx;
> +	return vq->avail_idx != vq->last_avail_idx;

vhost_vq_avail_empty() has a fast path that's missing in
vhost_enable_notify():

	if (vq->avail_idx != vq->last_avail_idx)
		return false;



2022-01-25 15:06:22

by Stefano Garzarella

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Mon, Jan 24, 2022 at 11:31:49AM +0000, Stefan Hajnoczi wrote:
>On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
>> In vhost_enable_notify() we enable the notifications and we read
>> the avail index to check if new buffers have become available in
>> the meantime.
>>
>> We are not caching the avail index, so when the device will call
>> vhost_get_vq_desc(), it will find the old value in the cache and
>> it will read the avail index again.
>
>I think this wording is clearer because we do keep a cached avail
>index value, but the issue is we don't update it:
>s/We are not caching the avail index/We do not update the cached avail
>index value/

I'll fix in v3.
It seems I forgot to CC you on v2:
https://lore.kernel.org/virtualization/[email protected]/

>
>>
>> It would be better to refresh the cache every time we read avail
>> index, so let's change vhost_enable_notify() caching the value in
>> `avail_idx` and compare it with `last_avail_idx` to check if there
>> are new buffers available.
>>
>> Anyway, we don't expect a significant performance boost because
>> the above path is not very common, indeed vhost_enable_notify()
>> is often called with unlikely(), expecting that avail index has
>> not been updated.
>>
>> Signed-off-by: Stefano Garzarella <[email protected]>
>> ---
>> v1:
>> - improved the commit description [MST, Jason]
>> ---
>> drivers/vhost/vhost.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>> index 59edb5a1ffe2..07363dff559e 100644
>> --- a/drivers/vhost/vhost.c
>> +++ b/drivers/vhost/vhost.c
>> @@ -2543,8 +2543,9 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
>>  		       &vq->avail->idx, r);
>>  		return false;
>>  	}
>> +	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
>>
>> -	return vhost16_to_cpu(vq, avail_idx) != vq->avail_idx;
>> +	return vq->avail_idx != vq->last_avail_idx;
>
>vhost_vq_avail_empty() has a fast path that's missing in
>vhost_enable_notify():
>
> > 	if (vq->avail_idx != vq->last_avail_idx)
> > 		return false;

Yep, I thought about that, but devices usually call
vhost_enable_notify() right when vq->avail_idx == vq->last_avail_idx,
so I wonder if it would just be an extra check for a branch that is
never taken.

Do you think it is better to add that check? (maybe with unlikely())

Thanks,
Stefano

2022-01-26 01:18:18

by Stefan Hajnoczi

Subject: Re: [PATCH v1] vhost: cache avail index in vhost_enable_notify()

On Tue, Jan 25, 2022 at 12:14:22PM +0100, Stefano Garzarella wrote:
> On Mon, Jan 24, 2022 at 11:31:49AM +0000, Stefan Hajnoczi wrote:
> > On Fri, Jan 14, 2022 at 10:05:08AM +0100, Stefano Garzarella wrote:
> > > In vhost_enable_notify() we enable the notifications and we read
> > > the avail index to check if new buffers have become available in
> > > the meantime.
> > >
> > > We are not caching the avail index, so when the device will call
> > > vhost_get_vq_desc(), it will find the old value in the cache and
> > > it will read the avail index again.
> >
> > I think this wording is clearer because we do keep a cached avail
> > index value, but the issue is we don't update it:
> > s/We are not caching the avail index/We do not update the cached avail
> > index value/
>
> I'll fix in v3.
> It seems I forgot to CC you on v2: https://lore.kernel.org/virtualization/[email protected]/
>
> >
> > >
> > > It would be better to refresh the cache every time we read avail
> > > index, so let's change vhost_enable_notify() caching the value in
> > > `avail_idx` and compare it with `last_avail_idx` to check if there
> > > are new buffers available.
> > >
> > > Anyway, we don't expect a significant performance boost because
> > > the above path is not very common, indeed vhost_enable_notify()
> > > is often called with unlikely(), expecting that avail index has
> > > not been updated.
> > >
> > > Signed-off-by: Stefano Garzarella <[email protected]>
> > > ---
> > > v1:
> > > - improved the commit description [MST, Jason]
> > > ---
> > > drivers/vhost/vhost.c | 3 ++-
> > > 1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > > index 59edb5a1ffe2..07363dff559e 100644
> > > --- a/drivers/vhost/vhost.c
> > > +++ b/drivers/vhost/vhost.c
> > > @@ -2543,8 +2543,9 @@ bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > >  		       &vq->avail->idx, r);
> > >  		return false;
> > >  	}
> > > +	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
> > >
> > > -	return vhost16_to_cpu(vq, avail_idx) != vq->avail_idx;
> > > +	return vq->avail_idx != vq->last_avail_idx;
> >
> > vhost_vq_avail_empty() has a fast path that's missing in
> > vhost_enable_notify():
> >
> > 	if (vq->avail_idx != vq->last_avail_idx)
> > 		return false;
>
> Yep, I thought about that, but devices usually call vhost_enable_notify()
> right when vq->avail_idx == vq->last_avail_idx, so I don't know if it's an
> extra check for a branch that will never be taken.
>
> Do you think it is better to add that check? (maybe with unlikely())

You're right. It's probably fine to omit it.

Stefan

