2019-03-27 10:33:44

by Dongli Zhang

Subject: [PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi

When tag_set->nr_maps is 1, the block layer limits the number of hw queues
by nr_cpu_ids. No matter how many hw queues are used by
virtio-blk/virtio-scsi, as they both have (tag_set->nr_maps == 1), they
can use at most nr_cpu_ids hw queues.

In addition, specifically in the PCI scenario, when the 'num-queues'
specified by qemu is more than maxcpus, virtio-blk/virtio-scsi would not
be able to allocate more than maxcpus vectors in order to have a vector
for each queue. As a result, they fall back to MSI-X with one vector for
config and one vector shared by all queues.

Considering the above reasons, this patch set limits the number of hw
queues to nr_cpu_ids for both virtio-blk and virtio-scsi.

-------------------------------------------------------------

Here is test result of virtio-scsi:

qemu cmdline:

-smp 2,maxcpus=4, \
-device virtio-scsi-pci,id=scsi0,num_queues=8, \
-device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0, \
-drive file=test.img,if=none,id=drive0

Although maxcpus=4 and num_queues=8, 4 queues are used while 2 interrupts
are allocated.

# cat /proc/interrupts
... ...
24: 0 0 PCI-MSI 65536-edge virtio0-config
25: 0 369 PCI-MSI 65537-edge virtio0-virtqueues
... ...

# ls /sys/block/sda/mq/
0 1 2 3 ------> 4 queues although qemu sets num_queues=8


With the patch set, there is per-queue interrupt.

# cat /proc/interrupts
24: 0 0 PCI-MSI 65536-edge virtio0-config
25: 0 0 PCI-MSI 65537-edge virtio0-control
26: 0 0 PCI-MSI 65538-edge virtio0-event
27: 296 0 PCI-MSI 65539-edge virtio0-request
28: 0 139 PCI-MSI 65540-edge virtio0-request
29: 0 0 PCI-MSI 65541-edge virtio0-request
30: 0 0 PCI-MSI 65542-edge virtio0-request

# ls /sys/block/sda/mq
0 1 2 3

-------------------------------------------------------------

Here is test result of virtio-blk:

qemu cmdline:

-smp 2,maxcpus=4, \
-device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,num-queues=8, \
-drive file=test.img,format=raw,if=none,id=drive-virtio-disk0

Although maxcpus=4 and num-queues=8, 4 queues are used while 2 interrupts
are allocated.

# cat /proc/interrupts
... ...
24: 0 0 PCI-MSI 65536-edge virtio0-config
25: 0 65 PCI-MSI 65537-edge virtio0-virtqueues
... ...

# ls /sys/block/vda/mq
0 1 2 3 -------> 4 queues although qemu sets num-queues=8


With the patch set, there is per-queue interrupt.

# cat /proc/interrupts
24: 0 0 PCI-MSI 65536-edge virtio0-config
25: 64 0 PCI-MSI 65537-edge virtio0-req.0
26: 0 10290 PCI-MSI 65538-edge virtio0-req.1
27: 0 0 PCI-MSI 65539-edge virtio0-req.2
28: 0 0 PCI-MSI 65540-edge virtio0-req.3

# ls /sys/block/vda/mq/
0 1 2 3


Reference: https://lore.kernel.org/lkml/e4afe4c5-0262-4500-aeec-60f30734b4fc@default/

Thank you very much!

Dongli Zhang



2019-03-27 10:33:45

by Dongli Zhang

Subject: [PATCH 2/2] scsi: virtio_scsi: limit number of hw queues by nr_cpu_ids

When tag_set->nr_maps is 1, the block layer limits the number of hw queues
by nr_cpu_ids. No matter how many hw queues are used by virtio-scsi, as it
has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues.

In addition, specifically in the PCI scenario, when the 'num_queues'
specified by qemu is more than maxcpus, virtio-scsi would not be able to
allocate more than maxcpus vectors in order to have a vector for each
queue. As a result, it falls back to MSI-X with one vector for config and
one vector shared by all queues.

Considering the above reasons, this patch limits the number of hw queues
used by virtio-scsi to nr_cpu_ids.

Signed-off-by: Dongli Zhang <[email protected]>
---
drivers/scsi/virtio_scsi.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 8af0177..9c4a3e1 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -793,6 +793,7 @@ static int virtscsi_probe(struct virtio_device *vdev)

/* We need to know how many queues before we allocate. */
num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;
+ num_queues = min_t(unsigned int, nr_cpu_ids, num_queues);

num_targets = virtscsi_config_get(vdev, max_target) + 1;

--
2.7.4


2019-03-27 10:34:05

by Dongli Zhang

Subject: [PATCH 1/2] virtio-blk: limit number of hw queues by nr_cpu_ids

When tag_set->nr_maps is 1, the block layer limits the number of hw queues
by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it
has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues.

In addition, specifically in the PCI scenario, when the 'num-queues'
specified by qemu is more than maxcpus, virtio-blk would not be able to
allocate more than maxcpus vectors in order to have a vector for each
queue. As a result, it falls back to MSI-X with one vector for config and
one vector shared by all queues.

Considering the above reasons, this patch limits the number of hw queues
used by virtio-blk to nr_cpu_ids.

Signed-off-by: Dongli Zhang <[email protected]>
---
drivers/block/virtio_blk.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4bc083b..b83cb45 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
if (err)
num_vqs = 1;

+ num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);
+
vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
if (!vblk->vqs)
return -ENOMEM;
--
2.7.4


2019-04-08 13:59:13

by Dongli Zhang

Subject: Re: [PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi

ping?

Thank you very much!

Dongli Zhang

On 03/27/2019 06:36 PM, Dongli Zhang wrote:
> When tag_set->nr_maps is 1, the block layer limits the number of hw queues
> by nr_cpu_ids. No matter how many hw queues are used by
> virtio-blk/virtio-scsi, as they both have (tag_set->nr_maps == 1), they
> can use at most nr_cpu_ids hw queues.
>
> In addition, specifically in the PCI scenario, when the 'num-queues'
> specified by qemu is more than maxcpus, virtio-blk/virtio-scsi would not
> be able to allocate more than maxcpus vectors in order to have a vector
> for each queue. As a result, they fall back to MSI-X with one vector for
> config and one vector shared by all queues.
>
> Considering the above reasons, this patch set limits the number of hw
> queues to nr_cpu_ids for both virtio-blk and virtio-scsi.
>
> -------------------------------------------------------------
>
> Here is test result of virtio-scsi:
>
> qemu cmdline:
>
> -smp 2,maxcpus=4, \
> -device virtio-scsi-pci,id=scsi0,num_queues=8, \
> -device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0, \
> -drive file=test.img,if=none,id=drive0
>
> Although maxcpus=4 and num_queues=8, 4 queues are used while 2 interrupts
> are allocated.
>
> # cat /proc/interrupts
> ... ...
> 24: 0 0 PCI-MSI 65536-edge virtio0-config
> 25: 0 369 PCI-MSI 65537-edge virtio0-virtqueues
> ... ...
>
> # ls /sys/block/sda/mq/
> 0 1 2 3 ------> 4 queues although qemu sets num_queues=8
>
>
> With the patch set, there is per-queue interrupt.
>
> # cat /proc/interrupts
> 24: 0 0 PCI-MSI 65536-edge virtio0-config
> 25: 0 0 PCI-MSI 65537-edge virtio0-control
> 26: 0 0 PCI-MSI 65538-edge virtio0-event
> 27: 296 0 PCI-MSI 65539-edge virtio0-request
> 28: 0 139 PCI-MSI 65540-edge virtio0-request
> 29: 0 0 PCI-MSI 65541-edge virtio0-request
> 30: 0 0 PCI-MSI 65542-edge virtio0-request
>
> # ls /sys/block/sda/mq
> 0 1 2 3
>
> -------------------------------------------------------------
>
> Here is test result of virtio-blk:
>
> qemu cmdline:
>
> -smp 2,maxcpus=4, \
> -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,num-queues=8, \
> -drive file=test.img,format=raw,if=none,id=drive-virtio-disk0
>
> Although maxcpus=4 and num-queues=8, 4 queues are used while 2 interrupts
> are allocated.
>
> # cat /proc/interrupts
> ... ...
> 24: 0 0 PCI-MSI 65536-edge virtio0-config
> 25: 0 65 PCI-MSI 65537-edge virtio0-virtqueues
> ... ...
>
> # ls /sys/block/vda/mq
> 0 1 2 3 -------> 4 queues although qemu sets num-queues=8
>
>
> With the patch set, there is per-queue interrupt.
>
> # cat /proc/interrupts
> 24: 0 0 PCI-MSI 65536-edge virtio0-config
> 25: 64 0 PCI-MSI 65537-edge virtio0-req.0
> 26: 0 10290 PCI-MSI 65538-edge virtio0-req.1
> 27: 0 0 PCI-MSI 65539-edge virtio0-req.2
> 28: 0 0 PCI-MSI 65540-edge virtio0-req.3
>
> # ls /sys/block/vda/mq/
> 0 1 2 3
>
>
> Reference: https://lore.kernel.org/lkml/e4afe4c5-0262-4500-aeec-60f30734b4fc@default/
>
> Thank you very much!
>
> Dongli Zhang
>

2019-04-10 13:40:00

by Stefan Hajnoczi

Subject: Re: [PATCH 1/2] virtio-blk: limit number of hw queues by nr_cpu_ids

On Wed, Mar 27, 2019 at 06:36:34PM +0800, Dongli Zhang wrote:
> When tag_set->nr_maps is 1, the block layer limits the number of hw queues
> by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it
> has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues.
>
> In addition, specifically in the PCI scenario, when the 'num-queues'
> specified by qemu is more than maxcpus, virtio-blk would not be able to
> allocate more than maxcpus vectors in order to have a vector for each
> queue. As a result, it falls back to MSI-X with one vector for config and
> one vector shared by all queues.
>
> Considering the above reasons, this patch limits the number of hw queues
> used by virtio-blk to nr_cpu_ids.
>
> Signed-off-by: Dongli Zhang <[email protected]>
> ---
> drivers/block/virtio_blk.c | 2 ++
> 1 file changed, 2 insertions(+)

Reviewed-by: Stefan Hajnoczi <[email protected]>



2019-04-10 13:40:02

by Stefan Hajnoczi

Subject: Re: [PATCH 2/2] scsi: virtio_scsi: limit number of hw queues by nr_cpu_ids

On Wed, Mar 27, 2019 at 06:36:35PM +0800, Dongli Zhang wrote:
> When tag_set->nr_maps is 1, the block layer limits the number of hw queues
> by nr_cpu_ids. No matter how many hw queues are used by virtio-scsi, as it
> has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues.
>
> In addition, specifically in the PCI scenario, when the 'num_queues'
> specified by qemu is more than maxcpus, virtio-scsi would not be able to
> allocate more than maxcpus vectors in order to have a vector for each
> queue. As a result, it falls back to MSI-X with one vector for config and
> one vector shared by all queues.
>
> Considering the above reasons, this patch limits the number of hw queues
> used by virtio-scsi to nr_cpu_ids.
>
> Signed-off-by: Dongli Zhang <[email protected]>
> ---
> drivers/scsi/virtio_scsi.c | 1 +
> 1 file changed, 1 insertion(+)

Reviewed-by: Stefan Hajnoczi <[email protected]>



2019-04-10 17:39:25

by Jens Axboe

Subject: Re: [PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi

On 3/27/19 4:36 AM, Dongli Zhang wrote:
> When tag_set->nr_maps is 1, the block layer limits the number of hw queues
> by nr_cpu_ids. No matter how many hw queues are used by
> virtio-blk/virtio-scsi, as they both have (tag_set->nr_maps == 1), they
> can use at most nr_cpu_ids hw queues.
>
> In addition, specifically in the PCI scenario, when the 'num-queues'
> specified by qemu is more than maxcpus, virtio-blk/virtio-scsi would not
> be able to allocate more than maxcpus vectors in order to have a vector
> for each queue. As a result, they fall back to MSI-X with one vector for
> config and one vector shared by all queues.
>
> Considering the above reasons, this patch set limits the number of hw
> queues to nr_cpu_ids for both virtio-blk and virtio-scsi.

I picked both up for 5.1.

--
Jens Axboe