2022-12-27 02:35:48

by Ming Lei

Subject: [PATCH V4 6/6] blk-mq: Build default queue map via group_cpus_evenly()

The default queue mapping builder of blk_mq_map_queues() doesn't take NUMA
topology into account, so the resulting mapping is poor: CPUs belonging
to different NUMA nodes can be assigned to the same queue. IOPS is observed
to drop by ~30% when two jobs run on the same hctx of null_blk from two
CPUs on different NUMA nodes, compared with two CPUs on the same node.
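
To make the problem concrete, here is a minimal userspace model of the old
round-robin assignment (an editor's illustration, not part of the patch; it
assumes the 160-CPU two-node topology from the test below and no SMT
siblings):

#include <stdio.h>

/*
 * With all CPUs present and no SMT siblings, the old builder
 * degenerates to map[cpu] = cpu % nr_queues, so every queue ends up
 * serving CPUs from both NUMA nodes.
 */
int main(void)
{
	const int nr_cpus = 160, nr_queues = 2;

	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		int q = cpu % nr_queues;	/* old round-robin rule */
		int node = cpu < 80 ? 0 : 1;	/* assumed topology */

		if (cpu < 2 || (cpu >= 80 && cpu < 82))
			printf("cpu %3d (node %d) -> queue %d\n",
			       cpu, node, q);
	}
	/* queue 0 serves both cpu 0 (node 0) and cpu 80 (node 1) */
	return 0;
}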

Address the issue by reusing group_cpus_evenly() for building the queue
mapping, since group_cpus_evenly() groups CPUs according to CPU/NUMA
locality.

Performance also becomes more stable with this patchset, given that the
queue mapping is now correct from a NUMA-locality viewpoint. For example,
on a two-node arm64 machine with 160 CPUs, node 0 (CPUs 0~79) and
node 1 (CPUs 80~159):

1) modprobe null_blk nr_devices=1 submit_queues=2

2) run 'fio(t/io_uring -p 0 -n 4 -r 20 /dev/nullb0)', and observe that
IOPS becomes much more stable across repeated tests:

- unpatched: IOPS is 2.5M ~ 4.5M
- patched: IOPS is 4.3M ~ 5.0M

Many drivers may benefit from the change, such as nvme pci poll queues,
nvme tcp, ...

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
---
block/blk-mq-cpumap.c | 63 +++++++++----------------------------------
1 file changed, 13 insertions(+), 50 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 9c2fce1a7b50..0c612c19feb8 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -10,66 +10,29 @@
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/group_cpus.h>
 
 #include <linux/blk-mq.h>
 #include "blk.h"
 #include "blk-mq.h"
 
-static int queue_index(struct blk_mq_queue_map *qmap,
-		       unsigned int nr_queues, const int q)
-{
-	return qmap->queue_offset + (q % nr_queues);
-}
-
-static int get_first_sibling(unsigned int cpu)
-{
-	unsigned int ret;
-
-	ret = cpumask_first(topology_sibling_cpumask(cpu));
-	if (ret < nr_cpu_ids)
-		return ret;
-
-	return cpu;
-}
-
 void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 {
-	unsigned int *map = qmap->mq_map;
-	unsigned int nr_queues = qmap->nr_queues;
-	unsigned int cpu, first_sibling, q = 0;
-
-	for_each_possible_cpu(cpu)
-		map[cpu] = -1;
-
-	/*
-	 * Spread queues among present CPUs first for minimizing
-	 * count of dead queues which are mapped by all un-present CPUs
-	 */
-	for_each_present_cpu(cpu) {
-		if (q >= nr_queues)
-			break;
-		map[cpu] = queue_index(qmap, nr_queues, q++);
+	const struct cpumask *masks;
+	unsigned int queue, cpu;
+
+	masks = group_cpus_evenly(qmap->nr_queues);
+	if (!masks) {
+		for_each_possible_cpu(cpu)
+			qmap->mq_map[cpu] = qmap->queue_offset;
+		return;
 	}
 
-	for_each_possible_cpu(cpu) {
-		if (map[cpu] != -1)
-			continue;
-		/*
-		 * First do sequential mapping between CPUs and queues.
-		 * In case we still have CPUs to map, and we have some number of
-		 * threads per cores then map sibling threads to the same queue
-		 * for performance optimizations.
-		 */
-		if (q < nr_queues) {
-			map[cpu] = queue_index(qmap, nr_queues, q++);
-		} else {
-			first_sibling = get_first_sibling(cpu);
-			if (first_sibling == cpu)
-				map[cpu] = queue_index(qmap, nr_queues, q++);
-			else
-				map[cpu] = map[first_sibling];
-		}
+	for (queue = 0; queue < qmap->nr_queues; queue++) {
+		for_each_cpu(cpu, &masks[queue])
+			qmap->mq_map[cpu] = qmap->queue_offset + queue;
 	}
+	kfree(masks);
 }
 EXPORT_SYMBOL_GPL(blk_mq_map_queues);
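
As an aside on the "nvme pci poll" mention in the commit message: below is a
hedged editor's sketch of how a multi-map driver could route a queue type
that has no IRQ affinity (such as poll queues) through this default builder.
The mydrv_* name is hypothetical; the void ->map_queues prototype matches
kernels of this era:

#include <linux/blk-mq.h>

/* Illustrative only; not taken from any real driver. */
static void mydrv_map_queues(struct blk_mq_tag_set *set)
{
	/*
	 * Poll queues have no IRQs, hence no IRQ affinity to follow.
	 * blk_mq_map_queues() now spreads their CPUs by CPU/NUMA
	 * locality via group_cpus_evenly() instead of round-robin.
	 */
	blk_mq_map_queues(&set->map[HCTX_TYPE_POLL]);
}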

--
2.31.1


2023-01-11 10:26:34

by John Garry

Subject: Re: [PATCH V4 6/6] blk-mq: Build default queue map via group_cpus_evenly()

On 27/12/2022 02:29, Ming Lei wrote:
> The default queue mapping builder of blk_mq_map_queues() doesn't take NUMA
> topology into account, so the resulting mapping is poor: CPUs belonging
> to different NUMA nodes can be assigned to the same queue. IOPS is observed
> to drop by ~30% when two jobs run on the same hctx of null_blk from two
> CPUs on different NUMA nodes, compared with two CPUs on the same node.
>
> Address the issue by reusing group_cpus_evenly() for building the queue
> mapping, since group_cpus_evenly() groups CPUs according to CPU/NUMA
> locality.
>
> Performance also becomes more stable with this patchset, given that the
> queue mapping is now correct from a NUMA-locality viewpoint. For example,
> on a two-node arm64 machine with 160 CPUs, node 0 (CPUs 0~79) and
> node 1 (CPUs 80~159):
>
> 1) modprobe null_blk nr_devices=1 submit_queues=2
>
> 2) run 'fio(t/io_uring -p 0 -n 4 -r 20 /dev/nullb0)', and observe that
> IOPS becomes much more stable across repeated tests:
>
> - unpatched: IOPS is 2.5M ~ 4.5M
> - patched: IOPS is 4.3M ~ 5.0M
>
> Many drivers may benefit from the change, such as nvme pci poll queues,
> nvme tcp, ...
>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Ming Lei <[email protected]>

FWIW, just a comment below:

Reviewed-by: John Garry <[email protected]>

> ---
> block/blk-mq-cpumap.c | 63 +++++++++----------------------------------
> 1 file changed, 13 insertions(+), 50 deletions(-)
>
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 9c2fce1a7b50..0c612c19feb8 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -10,66 +10,29 @@
>  #include <linux/mm.h>
>  #include <linux/smp.h>
>  #include <linux/cpu.h>
> +#include <linux/group_cpus.h>
>
>  #include <linux/blk-mq.h>
>  #include "blk.h"
>  #include "blk-mq.h"
>
> -static int queue_index(struct blk_mq_queue_map *qmap,
> -		       unsigned int nr_queues, const int q)
> -{
> -	return qmap->queue_offset + (q % nr_queues);
> -}
> -
> -static int get_first_sibling(unsigned int cpu)
> -{
> -	unsigned int ret;
> -
> -	ret = cpumask_first(topology_sibling_cpumask(cpu));
> -	if (ret < nr_cpu_ids)
> -		return ret;
> -
> -	return cpu;
> -}
> -
>  void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
>  {
> -	unsigned int *map = qmap->mq_map;
> -	unsigned int nr_queues = qmap->nr_queues;
> -	unsigned int cpu, first_sibling, q = 0;
> -
> -	for_each_possible_cpu(cpu)
> -		map[cpu] = -1;
> -
> -	/*
> -	 * Spread queues among present CPUs first for minimizing
> -	 * count of dead queues which are mapped by all un-present CPUs
> -	 */
> -	for_each_present_cpu(cpu) {
> -		if (q >= nr_queues)
> -			break;
> -		map[cpu] = queue_index(qmap, nr_queues, q++);
> +	const struct cpumask *masks;
> +	unsigned int queue, cpu;
> +
> +	masks = group_cpus_evenly(qmap->nr_queues);
> +	if (!masks) {
> +		for_each_possible_cpu(cpu)
> +			qmap->mq_map[cpu] = qmap->queue_offset;

I'm not sure whether we should try something better than just assigning all
CPUs to a single queue (which is what we do here), but I suppose we don't
expect the masks allocation to fail, and there are bigger issues to deal
with if it does ...
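
(One possible shape for "something better", as a purely illustrative
editor's sketch, not in the patch: fall back to a plain round-robin spread
rather than a single queue:

	if (!masks) {
		for_each_possible_cpu(cpu)
			qmap->mq_map[cpu] = qmap->queue_offset +
					    cpu % qmap->nr_queues;
		return;
	}

That would lose locality but avoid funnelling every CPU into one hctx.)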

> +		return;
>  	}
>
> -	for_each_possible_cpu(cpu) {
> -		if (map[cpu] != -1)
> -			continue;
> -		/*
> -		 * First do sequential mapping between CPUs and queues.
> -		 * In case we still have CPUs to map, and we have some number of
> -		 * threads per cores then map sibling threads to the same queue
> -		 * for performance optimizations.
> -		 */
> -		if (q < nr_queues) {
> -			map[cpu] = queue_index(qmap, nr_queues, q++);
> -		} else {
> -			first_sibling = get_first_sibling(cpu);
> -			if (first_sibling == cpu)
> -				map[cpu] = queue_index(qmap, nr_queues, q++);
> -			else
> -				map[cpu] = map[first_sibling];
> -		}
> +	for (queue = 0; queue < qmap->nr_queues; queue++) {
> +		for_each_cpu(cpu, &masks[queue])
> +			qmap->mq_map[cpu] = qmap->queue_offset + queue;
>  	}
> +	kfree(masks);
>  }
>  EXPORT_SYMBOL_GPL(blk_mq_map_queues);
>

2023-01-17 18:21:29

by tip-bot2 for Haifeng Xu

Subject: [tip: irq/core] blk-mq: Build default queue map via group_cpus_evenly()

The following commit has been merged into the irq/core branch of tip:

Commit-ID: 6a6dcae8f486c3f3298d0767d34505121c7b0b81
Gitweb: https://git.kernel.org/tip/6a6dcae8f486c3f3298d0767d34505121c7b0b81
Author: Ming Lei <[email protected]>
AuthorDate: Tue, 27 Dec 2022 10:29:05 +08:00
Committer: Thomas Gleixner <[email protected]>
CommitterDate: Tue, 17 Jan 2023 18:50:06 +01:00

blk-mq: Build default queue map via group_cpus_evenly()

The default queue mapping builder of blk_mq_map_queues() doesn't take NUMA
topology into account, so the resulting mapping is poor: CPUs belonging
to different NUMA nodes can be assigned to the same queue. IOPS is observed
to drop by ~30% when two jobs run on the same hctx of null_blk from two
CPUs on different NUMA nodes, compared with two CPUs on the same node.

Address the issue by reusing group_cpus_evenly() for building the queue
mapping, since group_cpus_evenly() groups CPUs according to CPU/NUMA locality.

Performance also becomes more stable with this change, given that the queue
mapping is now correct from a NUMA-locality viewpoint. For example, on a
two-node arm64 machine with 160 CPUs, node 0 (CPUs 0~79) and node 1
(CPUs 80~159):

1) modprobe null_blk nr_devices=1 submit_queues=2

2) run 'fio(t/io_uring -p 0 -n 4 -r 20 /dev/nullb0)', and observe that
IOPS becomes much more stable across repeated tests:

- unpatched: IOPS is 2.5M ~ 4.5M
- patched: IOPS is 4.3M ~ 5.0M

Many drivers may benefit from the change, such as nvme pci poll queues,
nvme tcp, ...

Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Garry <[email protected]>
Reviewed-by: Jens Axboe <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

---
block/blk-mq-cpumap.c | 63 ++++++++----------------------------------
1 file changed, 13 insertions(+), 50 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 9c2fce1..0c612c1 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -10,66 +10,29 @@
 #include <linux/mm.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/group_cpus.h>
 
 #include <linux/blk-mq.h>
 #include "blk.h"
 #include "blk-mq.h"
 
-static int queue_index(struct blk_mq_queue_map *qmap,
-		       unsigned int nr_queues, const int q)
-{
-	return qmap->queue_offset + (q % nr_queues);
-}
-
-static int get_first_sibling(unsigned int cpu)
-{
-	unsigned int ret;
-
-	ret = cpumask_first(topology_sibling_cpumask(cpu));
-	if (ret < nr_cpu_ids)
-		return ret;
-
-	return cpu;
-}
-
 void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 {
-	unsigned int *map = qmap->mq_map;
-	unsigned int nr_queues = qmap->nr_queues;
-	unsigned int cpu, first_sibling, q = 0;
-
-	for_each_possible_cpu(cpu)
-		map[cpu] = -1;
-
-	/*
-	 * Spread queues among present CPUs first for minimizing
-	 * count of dead queues which are mapped by all un-present CPUs
-	 */
-	for_each_present_cpu(cpu) {
-		if (q >= nr_queues)
-			break;
-		map[cpu] = queue_index(qmap, nr_queues, q++);
+	const struct cpumask *masks;
+	unsigned int queue, cpu;
+
+	masks = group_cpus_evenly(qmap->nr_queues);
+	if (!masks) {
+		for_each_possible_cpu(cpu)
+			qmap->mq_map[cpu] = qmap->queue_offset;
+		return;
 	}
 
-	for_each_possible_cpu(cpu) {
-		if (map[cpu] != -1)
-			continue;
-		/*
-		 * First do sequential mapping between CPUs and queues.
-		 * In case we still have CPUs to map, and we have some number of
-		 * threads per cores then map sibling threads to the same queue
-		 * for performance optimizations.
-		 */
-		if (q < nr_queues) {
-			map[cpu] = queue_index(qmap, nr_queues, q++);
-		} else {
-			first_sibling = get_first_sibling(cpu);
-			if (first_sibling == cpu)
-				map[cpu] = queue_index(qmap, nr_queues, q++);
-			else
-				map[cpu] = map[first_sibling];
-		}
+	for (queue = 0; queue < qmap->nr_queues; queue++) {
+		for_each_cpu(cpu, &masks[queue])
+			qmap->mq_map[cpu] = qmap->queue_offset + queue;
 	}
+	kfree(masks);
 }
 EXPORT_SYMBOL_GPL(blk_mq_map_queues);
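
For reference, a minimal editor's sketch of calling group_cpus_evenly()
directly, inferred only from its use in the diff above: it returns an array
of numgrps cpumasks covering the possible CPUs, grouped by CPU/NUMA
locality, or NULL on allocation failure, and the caller frees it with
kfree(). The demo function name is hypothetical:

#include <linux/group_cpus.h>
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/slab.h>

static int demo_group_dump(unsigned int numgrps)
{
	const struct cpumask *masks = group_cpus_evenly(numgrps);
	unsigned int i;

	if (!masks)
		return -ENOMEM;

	/* e.g. numgrps=2 on the machine above: 0-79, then 80-159 */
	for (i = 0; i < numgrps; i++)
		pr_info("group %u: %*pbl\n", i, cpumask_pr_args(&masks[i]));

	kfree(masks);
	return 0;
}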