From: Salman Qazi
Date: Thu, 6 Feb 2020 13:12:22 -0800
Subject: [PATCH] block: Limit number of items taken from the I/O scheduler in one go
To: Jens Axboe, Ming Lei, Bart Van Assche, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Jesse Barnes, Gwendal Grignou, Hannes Reinecke, Christoph Hellwig,
    Salman Qazi
Message-Id: <20200206211222.83170-1-sqazi@google.com>
In-Reply-To: <20200206101833.GA20943@ming.t460p>
References: <20200206101833.GA20943@ming.t460p>

Flushes bypass the I/O scheduler and get added to hctx->dispatch in
blk_mq_sched_bypass_insert.  This can happen while a kworker is running
the hctx->run_work work item and has already passed the point in
blk_mq_sched_dispatch_requests where hctx->dispatch is checked.

The blk_mq_do_dispatch_sched call is not guaranteed to finish in
bounded time, because the I/O scheduler can feed it an arbitrary number
of commands.  Since there is only one hctx->run_work per hardware
queue, the commands waiting in hctx->dispatch can therefore wait an
arbitrarily long time for run_work to be rerun.  The same starvation
can happen with dispatches from the software queues.

The fix is to poll hctx->dispatch in blk_mq_do_dispatch_sched and
blk_mq_do_dispatch_ctx and, when it is found non-empty, return from the
run_work handler so that it can be rerun.  The handler retries once
inline; if the dispatch list is found non-empty a second time, it
reruns the hardware queue asynchronously via blk_mq_run_hw_queue
instead of looping inline indefinitely.

Signed-off-by: Salman Qazi
---
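As a quick illustration of the shape of the fix, here is a toy
userspace model.  This is not kernel code: struct hw_ctx, sched_pop and
do_dispatch_sched below are made-up stand-ins for the real blk-mq
structures, and plain integers stand in for the request lists.  The
point is only that polling the bypass list on every iteration of the
scheduler-fed loop bounds how long a bypassed request can wait:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct blk_mq_hw_ctx: counters model the two queues. */
struct hw_ctx {
	int sched_pending;	/* requests the I/O scheduler can still feed */
	int dispatch_pending;	/* bypass requests parked on ->dispatch */
};

/* Model of the elevator feeding one request, if it has any. */
static bool sched_pop(struct hw_ctx *hctx)
{
	if (hctx->sched_pending == 0)
		return false;
	hctx->sched_pending--;
	printf("dispatched scheduler request (%d left)\n",
	       hctx->sched_pending);
	return true;
}

/*
 * Model of blk_mq_do_dispatch_sched after the patch: returns true when
 * the dispatch list is found non-empty, telling the caller to bail out
 * and rerun run_work instead of draining the scheduler dry first.
 */
static bool do_dispatch_sched(struct hw_ctx *hctx)
{
	do {
		if (hctx->dispatch_pending > 0)
			return true;	/* bypass work waiting: yield to it */
	} while (sched_pop(hctx));
	return false;
}

int main(void)
{
	struct hw_ctx hctx = { .sched_pending = 1000, .dispatch_pending = 0 };

	/* A flush arrives and bypasses the scheduler mid-run... */
	sched_pop(&hctx);
	hctx.dispatch_pending = 1;

	/* ...and the loop notices it instead of draining the other 999. */
	if (do_dispatch_sched(&hctx))
		printf("rerun run_work to drain the dispatch list\n");
	return 0;
}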
 block/blk-mq-sched.c | 47 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 41 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index ca22afd47b3d..84dde147f646 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -84,12 +84,16 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
  * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
  * its queue by itself in its completion handler, so we don't need to
  * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
+ *
+ * Returns true if hctx->dispatch was found non-empty and
+ * run_work has to be run again.
  */
-static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+static bool blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	struct elevator_queue *e = q->elevator;
 	LIST_HEAD(rq_list);
+	bool ret = false;
 
 	do {
 		struct request *rq;
@@ -97,6 +101,11 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 		if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
 			break;
 
+		if (!list_empty_careful(&hctx->dispatch)) {
+			ret = true;
+			break;
+		}
+
 		if (!blk_mq_get_dispatch_budget(hctx))
 			break;
 
@@ -113,6 +122,8 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 		 */
 		list_add(&rq->queuelist, &rq_list);
 	} while (blk_mq_dispatch_rq_list(q, &rq_list, true));
+
+	return ret;
 }
 
 static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx,
@@ -130,16 +141,25 @@ static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx,
  * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
  * its queue by itself in its completion handler, so we don't need to
  * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
+ *
+ * Returns true if hctx->dispatch was found non-empty and
+ * run_work has to be run again.
  */
-static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
+static bool blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	LIST_HEAD(rq_list);
 	struct blk_mq_ctx *ctx = READ_ONCE(hctx->dispatch_from);
+	bool ret = false;
 
 	do {
 		struct request *rq;
 
+		if (!list_empty_careful(&hctx->dispatch)) {
+			ret = true;
+			break;
+		}
+
 		if (!sbitmap_any_bit_set(&hctx->ctx_map))
 			break;
 
@@ -165,6 +185,7 @@ static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
 	} while (blk_mq_dispatch_rq_list(q, &rq_list, true));
 
 	WRITE_ONCE(hctx->dispatch_from, ctx);
+	return ret;
 }
 
 void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
@@ -172,6 +193,8 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct elevator_queue *e = q->elevator;
 	const bool has_sched_dispatch = e && e->type->ops.dispatch_request;
+	bool run_again;
+	bool restarted = false;
 	LIST_HEAD(rq_list);
 
 	/* RCU or SRCU read lock is needed before checking quiesced flag */
@@ -180,6 +203,9 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 	hctx->run++;
 
+again:
+	run_again = false;
+
 	/*
 	 * If we have previous entries on our dispatch list, grab them first for
 	 * more fair dispatch.
 	 */
@@ -208,19 +234,28 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 		blk_mq_sched_mark_restart_hctx(hctx);
 		if (blk_mq_dispatch_rq_list(q, &rq_list, false)) {
 			if (has_sched_dispatch)
-				blk_mq_do_dispatch_sched(hctx);
+				run_again = blk_mq_do_dispatch_sched(hctx);
 			else
-				blk_mq_do_dispatch_ctx(hctx);
+				run_again = blk_mq_do_dispatch_ctx(hctx);
 		}
 	} else if (has_sched_dispatch) {
-		blk_mq_do_dispatch_sched(hctx);
+		run_again = blk_mq_do_dispatch_sched(hctx);
 	} else if (hctx->dispatch_busy) {
 		/* dequeue request one by one from sw queue if queue is busy */
-		blk_mq_do_dispatch_ctx(hctx);
+		run_again = blk_mq_do_dispatch_ctx(hctx);
 	} else {
 		blk_mq_flush_busy_ctxs(hctx, &rq_list);
 		blk_mq_dispatch_rq_list(q, &rq_list, false);
 	}
+
+	if (run_again) {
+		if (!restarted) {
+			restarted = true;
+			goto again;
+		}
+
+		blk_mq_run_hw_queue(hctx, true);
+	}
 }
 
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
-- 
2.25.0.341.g760bfbb309-goog