Subject: Re: [PATCH -next] blk-mq: fix tag_get wait task can't be awakened
From: QiuLaibin
Date: Wed, 10 Nov 2021 15:33:35 +0800
Message-ID: <3190660d-452e-690c-371f-e75744d37785@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Ming,

On 2021/09/26 20:48, Ming Lei wrote:
> Hi Laibin,
>
> On Mon, Sep 13, 2021 at 04:12:48PM +0800, Laibin Qiu wrote:
>> When multiple hctx share one tagset, the wake_batch is calculated
>> from queue_depth during initialization, but the queue depth assigned
>> to each user may be smaller than wake_batch. This can cause the wait
>> queue to never be woken up and leads to a hang.
>
> In case of shared tags, there might be more than one hctx which
> allocates tags from a single tagset, and each hctx is limited to
> allocate at most:
>
>     hctx_max_depth = max((bt->sb.depth + users - 1) / users, 4U);
>
> where
>
>     users = atomic_read(&hctx->tags->active_queues)
>
> See hctx_may_queue().
>
> Tag idle detection is lazy and may be delayed for 30 sec, so there
> could be just one really active hctx (queue) while all the others are
> actually idle, yet still accounted as active because of the lazy idle
> detection. Then if wake_batch > hctx_max_depth, driver tag allocation
> may wait forever on this really active hctx.
>
> Correct me if my understanding is wrong.

Your understanding is right. When we add lots of users to one shared
tagset, wake_batch can become larger than hctx_max_depth, so driver tag
allocation may wait forever on the one really active hctx.
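
To make the arithmetic concrete, below is a small user-space sketch of
the two formulas (my illustration only: the numbers are made up, and
the constants 8/8 stand in for the usual SBQ_WAIT_QUEUES and
SBQ_WAKE_BATCH defaults used by sbq_calc_wake_batch()):

#include <stdio.h>

static unsigned int max_u(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

static unsigned int clamp_u(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned int depth = 256;	/* shared tagset depth (made up) */
	unsigned int users = 64;	/* accounted-active queues, mostly idle */

	/* per-hctx allocation limit, as in hctx_may_queue() */
	unsigned int hctx_max_depth = max_u((depth + users - 1) / users, 4U);

	/* wake_batch from the whole depth: depth / 8, clamped to [1, 8] */
	unsigned int wake_batch = clamp_u(depth / 8, 1, 8);

	printf("hctx_max_depth=%u wake_batch=%u\n", hctx_max_depth, wake_batch);
	/*
	 * Prints hctx_max_depth=4 wake_batch=8: the one really active
	 * hctx can never have more than 4 tags in flight, so its 4
	 * completions consume only 4 of the 8 wakeup references needed
	 * to wake the sleeping allocator.
	 */
	return 0;
}
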
>> Fix this by recalculating wake_batch when active_queues is
>> increased or decreased.
>>
>> Fixes: 0d2602ca30e41 ("blk-mq: improve support for shared tags maps")
>> Signed-off-by: Laibin Qiu
>> ---
>>  block/blk-mq-tag.c      | 44 ++++++++++++++++++++++++++++++++++++++--
>>  include/linux/sbitmap.h |  8 ++++++++
>>  lib/sbitmap.c           |  3 ++-
>>  3 files changed, 52 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
>> index 86f87346232a..d02f5ac0004c 100644
>> --- a/block/blk-mq-tag.c
>> +++ b/block/blk-mq-tag.c
>> @@ -16,6 +16,27 @@
>>  #include "blk-mq-sched.h"
>>  #include "blk-mq-tag.h"
>>
>> +static void bt_update_wake_batch(struct sbitmap_queue *bt, unsigned int users)
>> +{
>> +	unsigned int depth;
>> +
>> +	depth = max((bt->sb.depth + users - 1) / users, 4U);
>> +	sbitmap_queue_update_wake_batch(bt, depth);
>
> Using the hctx's max queue depth could reduce wake_batch a lot, and
> then performance may be degraded.
>
> Just wondering why not set sbq->wake_batch to hctx_max_depth if
> sbq->wake_batch is < hctx_max_depth?

__blk_mq_tag_busy() increases users and __blk_mq_tag_idle() decreases
users, and only changes in users affect each user's max depth. So we
recalculate the matching wake_batch through
sbitmap_queue_update_wake_batch(), which derives wake_batch from the
incoming depth; the value of sbq->wake_batch is only changed when the
calculated wake_batch actually differs:

static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
					    unsigned int depth)
{
	unsigned int wake_batch = sbq_calc_wake_batch(sbq, depth);
	                          ^^^^^^^^^^^^^^^^^
	int i;

	if (sbq->wake_batch != wake_batch) {
	    ^^^^^^^^^^^^^^^^^^
		WRITE_ONCE(sbq->wake_batch, wake_batch);
		/*
		 * Pairs with the memory barrier in sbitmap_queue_wake_up()
		 * to ensure that the batch size is updated before the wait
		 * counts.
		 */
		smp_mb();
		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
			atomic_set(&sbq->ws[i].wait_cnt, 1);
	}
}

> Thanks,
> Ming

Thanks,
Laibin
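
P.S. To show why an unreachable wake_batch leaves the sleeper stuck,
here is a tiny user-space simulation of the wait_cnt batching done by
sbitmap_queue_wake_up() (my sketch with the made-up numbers from the
example above, not kernel code):

#include <stdio.h>

int main(void)
{
	int wake_batch = 8;	/* from the 256-deep shared bitmap */
	int in_flight = 4;	/* hctx_max_depth on the one active hctx */
	int wait_cnt = wake_batch;	/* wakeup reference count */

	/* a would-be allocator is now asleep on the wait queue */
	while (in_flight--) {
		/* each completed request drops one wakeup reference */
		if (--wait_cnt == 0) {
			printf("wake up sleepers\n");	/* never reached */
			wait_cnt = wake_batch;
		}
	}
	printf("no completions left, wait_cnt=%d -> sleeper hangs\n",
	       wait_cnt);
	return 0;
}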