The `isolcpus=` kernel parameter is used to isolate CPUs for specific
tasks, and users often don't want block IO to disturb these CPUs; long IO
latency may also be caused if a blk-mq kworker is scheduled on one of the
isolated CPUs.

The kernel workqueue only respects this limit for WQ_UNBOUND; for a bound
wq, the responsibility falls on the wq user.

Add a block layer parameter to avoid running the block kworker on isolated
CPUs.
Cc: Juri Lelli <[email protected]>
Cc: Andrew Theurer <[email protected]>
Cc: Joe Mario <[email protected]>
Cc: Sebastian Jug <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
---
block/blk-mq.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ec922c6bccbe..c53b5b522053 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -29,6 +29,7 @@
#include <linux/prefetch.h>
#include <linux/blk-crypto.h>
#include <linux/part_stat.h>
+#include <linux/sched/isolation.h>

#include <trace/events/block.h>

@@ -42,6 +43,13 @@
#include "blk-rq-qos.h"
#include "blk-ioprio.h"

+static bool respect_cpu_isolation;
+module_param(respect_cpu_isolation, bool, 0444);
+MODULE_PARM_DESC(respect_cpu_isolation,
+ "Don't schedule blk-mq worker on isolated CPUs passed in "
+ "isolcpus= or nohz_full=. User need to guarantee to not run "
+ "block IO on isolated CPUs (default: false)");
+
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
static DEFINE_PER_CPU(call_single_data_t, blk_cpu_csd);
@@ -3926,6 +3934,13 @@ static void blk_mq_map_swqueue(struct request_queue *q)
*/
sbitmap_resize(&hctx->ctx_map, hctx->nr_ctx);
+ if (respect_cpu_isolation) {
+ cpumask_and(hctx->cpumask, hctx->cpumask,
+ housekeeping_cpumask(HK_TYPE_DOMAIN));
+ cpumask_and(hctx->cpumask, hctx->cpumask,
+ housekeeping_cpumask(HK_TYPE_WQ));
+ }
+
/*
* Initialize batch roundrobin counts
*/
--
2.41.0
(cc'ing Frederic)
On Tue, Oct 10, 2023 at 10:22:16PM +0800, Ming Lei wrote:
> The `isolcpus=` kernel parameter is used to isolate CPUs for specific
> tasks, and users often don't want block IO to disturb these CPUs; long IO
> latency may also be caused if a blk-mq kworker is scheduled on one of the
> isolated CPUs.
>
> The kernel workqueue only respects this limit for WQ_UNBOUND; for a bound
> wq, the responsibility falls on the wq user.
>
> Add a block layer parameter to avoid running the block kworker on isolated
> CPUs.
>
> Cc: Juri Lelli <[email protected]>
> Cc: Andrew Theurer <[email protected]>
> Cc: Joe Mario <[email protected]>
> Cc: Sebastian Jug <[email protected]>
> Signed-off-by: Ming Lei <[email protected]>
> ---
> block/blk-mq.c | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ec922c6bccbe..c53b5b522053 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -29,6 +29,7 @@
> #include <linux/prefetch.h>
> #include <linux/blk-crypto.h>
> #include <linux/part_stat.h>
> +#include <linux/sched/isolation.h>
>
> #include <trace/events/block.h>
>
> @@ -42,6 +43,13 @@
> #include "blk-rq-qos.h"
> #include "blk-ioprio.h"
>
> +static bool respect_cpu_isolation;
> +module_param(respect_cpu_isolation, bool, 0444);
> +MODULE_PARM_DESC(respect_cpu_isolation,
> + "Don't schedule blk-mq worker on isolated CPUs passed in "
> + "isolcpus= or nohz_full=. User need to guarantee to not run "
> + "block IO on isolated CPUs (default: false)");
Any chance we can centralize these? It's no fun to hunt down module params
to opt in different subsystems, and the housekeeping interface does have
some provisions for selecting different parts. I'd much prefer to see these
settings collected in a central place.
Thanks.
--
tejun
Hello,
On Tue, Oct 10, 2023 at 08:45:44AM -1000, Tejun Heo wrote:
> (cc'ing Frederic)
>
> On Tue, Oct 10, 2023 at 10:22:16PM +0800, Ming Lei wrote:
> > The `isolcpus=` kernel parameter is used to isolate CPUs for specific
> > tasks, and users often don't want block IO to disturb these CPUs; long IO
> > latency may also be caused if a blk-mq kworker is scheduled on one of the
> > isolated CPUs.
> >
> > The kernel workqueue only respects this limit for WQ_UNBOUND; for a bound
> > wq, the responsibility falls on the wq user.
> >
> > Add a block layer parameter to avoid running the block kworker on isolated
> > CPUs.
> >
> > Cc: Juri Lelli <[email protected]>
> > Cc: Andrew Theurer <[email protected]>
> > Cc: Joe Mario <[email protected]>
> > Cc: Sebastian Jug <[email protected]>
> > Signed-off-by: Ming Lei <[email protected]>
> > ---
> > block/blk-mq.c | 15 +++++++++++++++
> > 1 file changed, 15 insertions(+)
> >
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index ec922c6bccbe..c53b5b522053 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -29,6 +29,7 @@
> > #include <linux/prefetch.h>
> > #include <linux/blk-crypto.h>
> > #include <linux/part_stat.h>
> > +#include <linux/sched/isolation.h>
> >
> > #include <trace/events/block.h>
> >
> > @@ -42,6 +43,13 @@
> > #include "blk-rq-qos.h"
> > #include "blk-ioprio.h"
> >
> > +static bool respect_cpu_isolation;
> > +module_param(respect_cpu_isolation, bool, 0444);
> > +MODULE_PARM_DESC(respect_cpu_isolation,
> > + "Don't schedule blk-mq worker on isolated CPUs passed in "
> > + "isolcpus= or nohz_full=. User need to guarantee to not run "
> > + "block IO on isolated CPUs (default: false)");
>
> Any chance we can centralize these? It's no fun to hunt down module params
> to opt in different subsystems, and the housekeeping interface does have
> some provisions for selecting different parts. I'd much prefer to see these
> settings collected in a central place.
I guess it is hard to solve in a central place such as the workqueue core.
Consider the workqueue API:
/**
* queue_work_on - queue work on specific cpu
* @cpu: CPU number to execute work on
* @wq: workqueue to use
* @work: work to queue
*
* We queue the work to a specific CPU, the caller must ensure it
* can't go away. Callers that fail to ensure that the specified
* CPU cannot go away will execute on a randomly chosen CPU.
* But note well that callers specifying a CPU that never has been
* online will get a splat.
*
* Return: %false if @work was already on a queue, %true otherwise.
*/
bool queue_work_on(int cpu, struct workqueue_struct *wq,
struct work_struct *work)
The caller specifies one CPU to queue the work on; what can queue_work_on()
do if the specified CPU is isolated? If the API were changed to handle
isolated CPUs, every caller would have to be modified to adapt to the API
change.
Secondly, CPU isolation can still be overridden by 'taskset -C
$isolated_cpus'; that is why I added a blk-mq module parameter. The
parameter could be removed, though, with two extra effects if block IOs
are submitted from isolated CPUs:

- the driver's ->queue_rq() can be run on another CPU or an unbound CPU,
  which looks fine
- an IO timeout may be triggered during CPU hotplug, but this behavior has
  existed for a long time, so it is probably not a big deal either

I would appreciate any specific suggestions on dealing with isolated CPUs
generically for bound workqueues.
Thanks,
Ming
Hello,
On Wed, Oct 11, 2023 at 08:39:05AM +0800, Ming Lei wrote:
> I would appreciate any specific suggestions on dealing with isolated CPUs
> generically for bound workqueues.
Oh, all I meant was whether we can at least collect this into, or at least
adjacent to, the existing housekeeping / isolcpus parameters. Say someone
really wants to isolate some CPUs; how would they find all the different
parameters if they're scattered across different subsystems?
Thanks.
--
tejun
Hi Tejun,
On Thu, Oct 12, 2023 at 09:55:55AM -1000, Tejun Heo wrote:
> Hello,
>
> On Wed, Oct 11, 2023 at 08:39:05AM +0800, Ming Lei wrote:
> > I would appreciate any specific suggestions on dealing with isolated CPUs
> > generically for bound workqueues.
>
> Oh, all I meant was whether we can at least collect this into, or at least
> adjacent to, the existing housekeeping / isolcpus parameters. Say someone
> really wants to isolate some CPUs; how would they find all the different
> parameters if they're scattered across different subsystems?
AFAIK, the issue is reported in the RH OpenShift environment and it is a
real use case: some CPUs are isolated for dedicated tasks (such as network
polling, ...) by passing "isolcpus=managed_irq nohz_full". But blk-mq still
queues kworkers on these isolated CPUs, which causes very long latency in
NVMe IO workloads. Joe knows the story much better than me.
Thanks,
Ming
On Tue, Oct 10, 2023 at 08:45:44AM -1000, Tejun Heo wrote:
> > +static bool respect_cpu_isolation;
> > +module_param(respect_cpu_isolation, bool, 0444);
> > +MODULE_PARM_DESC(respect_cpu_isolation,
> > + "Don't schedule blk-mq worker on isolated CPUs passed in "
> > + "isolcpus= or nohz_full=. User need to guarantee to not run "
> > + "block IO on isolated CPUs (default: false)");
>
> > Any chance we can centralize these? It's no fun to hunt down module params
> > to opt in different subsystems, and the housekeeping interface does have
> > some provisions for selecting different parts. I'd much prefer to see these
> > settings collected in a central place.
Do we need this parameter in the first place? Shouldn't we avoid scheduling
blk-mq worker on isolated CPUs in any case?
Thanks.
>
> Thanks.
>
> --
> tejun
On Fri, Oct 13, 2023 at 01:26:08PM +0200, Frederic Weisbecker wrote:
> On Tue, Oct 10, 2023 at 08:45:44AM -1000, Tejun Heo wrote:
> > > +static bool respect_cpu_isolation;
> > > +module_param(respect_cpu_isolation, bool, 0444);
> > > +MODULE_PARM_DESC(respect_cpu_isolation,
> > > + "Don't schedule blk-mq worker on isolated CPUs passed in "
> > > + "isolcpus= or nohz_full=. User need to guarantee to not run "
> > > + "block IO on isolated CPUs (default: false)");
> >
> > Any chance we can centralize these? It's no fun to hunt down module params
> > to opt in different subsystems, and the housekeeping interface does have
> > some provisions for selecting different parts. I'd much prefer to see these
> > settings collected in a central place.
>
> Do we need this parameter in the first place? Shouldn't we avoid scheduling
> blk-mq worker on isolated CPUs in any case?
Yeah, I think this parameter isn't necessary; I will remove it in V2.
Thanks,
Ming