From: Yang Yang <yang.yang@vivo.com>
To: Omar Sandoval, Jens Axboe, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: onlyfever@icloud.com
Subject: [PATCH v2] kyber: introduce kyber_depth_updated()
Date: Fri, 5 Feb 2021 01:13:10 -0800
Message-Id: <20210205091311.129498-1-yang.yang@vivo.com>
X-Mailer: git-send-email 2.17.1

A hang occurs when the user changes the scheduler queue depth by writing
to the 'nr_requests' sysfs file of that device.

The details of the environment in which we found the problem are as
follows:
  an eMMC block device
  total driver tags: 16
  default queue_depth: 32
  kqd->async_depth initialized in kyber_init_sched() with queue_depth=32

Then we change queue_depth to 256 by writing to the 'nr_requests' sysfs
file. But kqd->async_depth is not updated after queue_depth changes, so
the async depth is now too small for queue_depth=256, which may cause a
hang.

This patch introduces kyber_depth_updated(), so that kyber can update
the async depth when the queue depth changes.

Signed-off-by: Yang Yang <yang.yang@vivo.com>
---
v2:
- Change the commit message
- Change from sbitmap::depth to 2^sbitmap::shift
---
 block/kyber-iosched.c | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index dc89199bc8c6..17215b6bf482 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -353,19 +353,9 @@ static void kyber_timer_fn(struct timer_list *t)
 	}
 }
 
-static unsigned int kyber_sched_tags_shift(struct request_queue *q)
-{
-	/*
-	 * All of the hardware queues have the same depth, so we can just grab
-	 * the shift of the first one.
-	 */
-	return q->queue_hw_ctx[0]->sched_tags->bitmap_tags->sb.shift;
-}
-
 static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
 {
 	struct kyber_queue_data *kqd;
-	unsigned int shift;
 	int ret = -ENOMEM;
 	int i;
 
@@ -400,9 +390,6 @@ static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
 		kqd->latency_targets[i] = kyber_latency_targets[i];
 	}
 
-	shift = kyber_sched_tags_shift(q);
-	kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
-
 	return kqd;
 
 err_buckets:
@@ -458,9 +445,19 @@ static void kyber_ctx_queue_init(struct kyber_ctx_queue *kcq)
 		INIT_LIST_HEAD(&kcq->rq_list[i]);
 }
 
-static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
+static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx)
 {
 	struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
+	struct blk_mq_tags *tags = hctx->sched_tags;
+	unsigned int shift = tags->bitmap_tags->sb.shift;
+
+	kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
+
+	sbitmap_queue_min_shallow_depth(tags->bitmap_tags, kqd->async_depth);
+}
+
+static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
+{
 	struct kyber_hctx_data *khd;
 	int i;
 
@@ -502,8 +499,7 @@ static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
 	khd->batching = 0;
 
 	hctx->sched_data = khd;
-	sbitmap_queue_min_shallow_depth(hctx->sched_tags->bitmap_tags,
-					kqd->async_depth);
+	kyber_depth_updated(hctx);
 
 	return 0;
 
@@ -1022,6 +1018,7 @@ static struct elevator_type kyber_sched = {
 		.completed_request = kyber_completed_request,
 		.dispatch_request = kyber_dispatch_request,
 		.has_work = kyber_has_work,
+		.depth_updated = kyber_depth_updated,
 	},
#ifdef CONFIG_BLK_DEBUG_FS
 	.queue_debugfs_attrs = kyber_queue_debugfs_attrs,
-- 
2.17.1
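
For reference, a minimal stand-alone sketch of the async_depth arithmetic
that kyber_depth_updated() now redoes whenever the depth changes. This is
not kernel code: KYBER_ASYNC_PERCENT is 75 in block/kyber-iosched.c, but
the shift values used below (3 for the original 32-deep sched_tags
sbitmap, 6 after growing to 256) are illustrative assumptions, since the
real value is read from tags->bitmap_tags->sb.shift.

#include <stdio.h>

#define KYBER_ASYNC_PERCENT 75

/* same formula as kyber: async depth is a percentage of 2^shift */
static unsigned int async_depth(unsigned int shift)
{
	return (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
}

int main(void)
{
	/* assumed sbitmap shift for the default queue_depth of 32 */
	printf("queue_depth 32  -> async_depth %u\n", async_depth(3));
	/* assumed sbitmap shift after nr_requests is raised to 256 */
	printf("queue_depth 256 -> async_depth %u\n", async_depth(6));
	return 0;
}

Without the recalculation, the value computed for the small map stays in
place after the resize, leaving async requests throttled to a depth sized
for the old, much smaller queue.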