From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Song Liu, Jens Axboe, Song Liu
Subject: [PATCH 5.16 0031/1017] block: flush plug based on hardware and software queue order
Date: Tue, 5 Apr 2022 09:15:44 +0200
Message-Id: <20220405070355.102575591@linuxfoundation.org>
In-Reply-To: <20220405070354.155796697@linuxfoundation.org>
References: <20220405070354.155796697@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-ID: <linux-kernel.vger.kernel.org>

From: Jens Axboe

commit 26fed4ac4eab09c27fbae1859696cc38f0536407 upstream.

We used to sort the plug list if we had multiple queues before dispatching
requests to the IO scheduler. This usually isn't needed, but for certain
workloads that interleave requests to disks, it's less efficient to
process the plug list one-by-one if everything is interleaved.

Don't sort the list, but skip through it and flush out entries that have
the same target at the same time.
Fixes: df87eb0fce8f ("block: get rid of plug list sorting")
Reported-and-tested-by: Song Liu
Reviewed-by: Song Liu
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 block/blk-mq.c | 60 +++++++++++++++++++++++++--------------------------------
 1 file changed, 27 insertions(+), 33 deletions(-)

--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2244,13 +2244,35 @@ static void blk_mq_plug_issue_direct(str
 	blk_mq_commit_rqs(hctx, &queued, from_schedule);
 }
 
-void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
 {
-	struct blk_mq_hw_ctx *this_hctx;
-	struct blk_mq_ctx *this_ctx;
-	unsigned int depth;
+	struct blk_mq_hw_ctx *this_hctx = NULL;
+	struct blk_mq_ctx *this_ctx = NULL;
+	struct request *requeue_list = NULL;
+	unsigned int depth = 0;
 	LIST_HEAD(list);
 
+	do {
+		struct request *rq = rq_list_pop(&plug->mq_list);
+
+		if (!this_hctx) {
+			this_hctx = rq->mq_hctx;
+			this_ctx = rq->mq_ctx;
+		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
+			rq_list_add(&requeue_list, rq);
+			continue;
+		}
+		list_add_tail(&rq->queuelist, &list);
+		depth++;
+	} while (!rq_list_empty(plug->mq_list));
+
+	plug->mq_list = requeue_list;
+	trace_block_unplug(this_hctx->queue, depth, !from_sched);
+	blk_mq_sched_insert_requests(this_hctx, this_ctx, &list, from_sched);
+}
+
+void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+{
 	if (rq_list_empty(plug->mq_list))
 		return;
 	plug->rq_count = 0;
@@ -2261,37 +2283,9 @@ void blk_mq_flush_plug_list(struct blk_p
 		return;
 	}
 
-	this_hctx = NULL;
-	this_ctx = NULL;
-	depth = 0;
 	do {
-		struct request *rq;
-
-		rq = rq_list_pop(&plug->mq_list);
-
-		if (!this_hctx) {
-			this_hctx = rq->mq_hctx;
-			this_ctx = rq->mq_ctx;
-		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
-			trace_block_unplug(this_hctx->queue, depth,
-					   !from_schedule);
-			blk_mq_sched_insert_requests(this_hctx, this_ctx,
-						     &list, from_schedule);
-			depth = 0;
-			this_hctx = rq->mq_hctx;
-			this_ctx = rq->mq_ctx;
-
-		}
-
-		list_add(&rq->queuelist, &list);
-		depth++;
+		blk_mq_dispatch_plug_list(plug, from_schedule);
 	} while (!rq_list_empty(plug->mq_list));
-
-	if (!list_empty(&list)) {
-		trace_block_unplug(this_hctx->queue, depth, !from_schedule);
-		blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
-					     from_schedule);
-	}
 }
 
 static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,