Subject: Re: [PATCH v2] kyber: introduce kyber_depth_updated()
From: Yang Yang
To: Omar Sandoval, Jens Axboe
Cc: onlyfever@icloud.com, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
References: <20210205091311.129498-1-yang.yang@vivo.com>
Date: Mon, 22 Feb 2021 17:49:10 +0800
In-Reply-To: <20210205091311.129498-1-yang.yang@vivo.com>
On 2021/2/5 17:13, Yang Yang wrote:
> A hang occurs when the user changes the scheduler queue depth by writing
> to the 'nr_requests' sysfs file of that device.
>
> The details of the environment where we found the problem are as follows:
>   an eMMC block device
>   total driver tags: 16
>   default queue_depth: 32
>   kqd->async_depth initialized in kyber_init_sched() with queue_depth=32
>
> Then we change queue_depth to 256 by writing to the 'nr_requests' sysfs
> file, but kqd->async_depth is not updated after queue_depth changes. Now
> the value of the async depth is too small for queue_depth=256, which may
> cause a hang.
>
> This patch introduces kyber_depth_updated(), so that kyber can update the
> async depth when the queue depth changes.
>
> Signed-off-by: Yang Yang
> ---
> v2:
> - Change the commit message
> - Change from sbitmap::depth to 2^sbitmap::shift
> ---
>  block/kyber-iosched.c | 29 +++++++++++++----------------
>  1 file changed, 13 insertions(+), 16 deletions(-)
>
> diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
> index dc89199bc8c6..17215b6bf482 100644
> --- a/block/kyber-iosched.c
> +++ b/block/kyber-iosched.c
> @@ -353,19 +353,9 @@ static void kyber_timer_fn(struct timer_list *t)
>  	}
>  }
>
> -static unsigned int kyber_sched_tags_shift(struct request_queue *q)
> -{
> -	/*
> -	 * All of the hardware queues have the same depth, so we can just grab
> -	 * the shift of the first one.
> -	 */
> -	return q->queue_hw_ctx[0]->sched_tags->bitmap_tags->sb.shift;
> -}
> -
>  static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
>  {
>  	struct kyber_queue_data *kqd;
> -	unsigned int shift;
>  	int ret = -ENOMEM;
>  	int i;
>
> @@ -400,9 +390,6 @@ static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
>  		kqd->latency_targets[i] = kyber_latency_targets[i];
>  	}
>
> -	shift = kyber_sched_tags_shift(q);
> -	kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
> -
>  	return kqd;
>
>  err_buckets:
> @@ -458,9 +445,19 @@ static void kyber_ctx_queue_init(struct kyber_ctx_queue *kcq)
>  		INIT_LIST_HEAD(&kcq->rq_list[i]);
>  }
>
> -static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
> +static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx)
>  {
>  	struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
> +	struct blk_mq_tags *tags = hctx->sched_tags;
> +	unsigned int shift = tags->bitmap_tags->sb.shift;
> +
> +	kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
> +
> +	sbitmap_queue_min_shallow_depth(tags->bitmap_tags, kqd->async_depth);
> +}
> +
> +static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
> +{
>  	struct kyber_hctx_data *khd;
>  	int i;
>
> @@ -502,8 +499,7 @@ static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
>  	khd->batching = 0;
>
>  	hctx->sched_data = khd;
> -	sbitmap_queue_min_shallow_depth(hctx->sched_tags->bitmap_tags,
> -					kqd->async_depth);
> +	kyber_depth_updated(hctx);
>
>  	return 0;
>
> @@ -1022,6 +1018,7 @@ static struct elevator_type kyber_sched = {
>  	.completed_request = kyber_completed_request,
>  	.dispatch_request = kyber_dispatch_request,
>  	.has_work = kyber_has_work,
> +	.depth_updated = kyber_depth_updated,
>  },
>  #ifdef CONFIG_BLK_DEBUG_FS
>  	.queue_debugfs_attrs = kyber_queue_debugfs_attrs,
>

Hello,

Ping...

Thanks!