From: Kemeng Shi <shikemeng@huawei.com>
Subject: [PATCH 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp
Date: Thu, 29 Sep 2022 15:40:55 +0800
Message-ID: <20220929074055.30080-4-shikemeng@huawei.com>
In-Reply-To: <20220929074055.30080-1-shikemeng@huawei.com>
References: <20220929074055.30080-1-shikemeng@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

We only need a maximum queue depth for each iolatency_grp to limit the
number of in-flight I/Os. Replace struct rq_depth with a plain unsigned
int to simplify struct iolatency_grp and save memory.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/blk-iolatency.c | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 2666afd7abdb..55bc742d3b66 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -141,7 +141,7 @@ struct iolatency_grp {
 	struct latency_stat __percpu *stats;
 	struct latency_stat cur_stat;
 	struct blk_iolatency *blkiolat;
-	struct rq_depth rq_depth;
+	unsigned int max_depth;
 	struct rq_wait rq_wait;
 	atomic64_t window_start;
 	atomic_t scale_cookie;
@@ -280,7 +280,7 @@ static void iolat_cleanup_cb(struct rq_wait *rqw, void *private_data)
 static bool iolat_acquire_inflight(struct rq_wait *rqw, void *private_data)
 {
 	struct iolatency_grp *iolat = private_data;
-	return rq_wait_inc_below(rqw, iolat->rq_depth.max_depth);
+	return rq_wait_inc_below(rqw, iolat->max_depth);
 }
 
 static void __blkcg_iolatency_throttle(struct rq_qos *rqos,
@@ -372,7 +372,7 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
 {
 	unsigned long qd = iolat->blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
-	unsigned long old = iolat->rq_depth.max_depth;
+	unsigned long old = iolat->max_depth;
 
 	if (old > qd)
 		old = qd;
@@ -384,12 +384,12 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
 		if (old < qd) {
 			old += scale;
 			old = min(old, qd);
-			iolat->rq_depth.max_depth = old;
+			iolat->max_depth = old;
 			wake_up_all(&iolat->rq_wait.wait);
 		}
 	} else {
 		old >>= 1;
-		iolat->rq_depth.max_depth = max(old, 1UL);
+		iolat->max_depth = max(old, 1UL);
 	}
 }
 
@@ -442,7 +442,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	}
 
 	/* We're as low as we can go. */
-	if (iolat->rq_depth.max_depth == 1 && direction < 0) {
+	if (iolat->max_depth == 1 && direction < 0) {
 		blkcg_use_delay(lat_to_blkg(iolat));
 		return;
 	}
@@ -450,7 +450,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	/* We're back to the default cookie, unthrottle all the things. */
 	if (cur_cookie == DEFAULT_SCALE_COOKIE) {
 		blkcg_clear_delay(lat_to_blkg(iolat));
-		iolat->rq_depth.max_depth = UINT_MAX;
+		iolat->max_depth = UINT_MAX;
 		wake_up_all(&iolat->rq_wait.wait);
 		return;
 	}
@@ -505,7 +505,7 @@ static void iolatency_record_time(struct iolatency_grp *iolat,
 	 * We don't want to count issue_as_root bio's in the cgroups latency
 	 * statistics as it could skew the numbers downwards.
 	 */
-	if (unlikely(issue_as_root && iolat->rq_depth.max_depth != UINT_MAX)) {
+	if (unlikely(issue_as_root && iolat->max_depth != UINT_MAX)) {
 		u64 sub = iolat->min_lat_nsec;
 		if (req_time < sub)
 			blkcg_add_delay(lat_to_blkg(iolat), now, sub - req_time);
@@ -916,7 +916,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
 	}
 	preempt_enable();
 
-	if (iolat->rq_depth.max_depth == UINT_MAX)
+	if (iolat->max_depth == UINT_MAX)
 		seq_printf(s, " missed=%llu total=%llu depth=max",
 			(unsigned long long)stat.ps.missed,
 			(unsigned long long)stat.ps.total);
@@ -924,7 +924,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
 		seq_printf(s, " missed=%llu total=%llu depth=%u",
 			(unsigned long long)stat.ps.missed,
 			(unsigned long long)stat.ps.total,
-			iolat->rq_depth.max_depth);
+			iolat->max_depth);
 }
 
 static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
@@ -941,12 +941,12 @@ static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
 
 	avg_lat = div64_u64(iolat->lat_avg, NSEC_PER_USEC);
 	cur_win = div64_u64(iolat->cur_win_nsec, NSEC_PER_MSEC);
-	if (iolat->rq_depth.max_depth == UINT_MAX)
+	if (iolat->max_depth == UINT_MAX)
 		seq_printf(s, " depth=max avg_lat=%llu win=%llu",
 			avg_lat, cur_win);
 	else
 		seq_printf(s, " depth=%u avg_lat=%llu win=%llu",
-			iolat->rq_depth.max_depth, avg_lat, cur_win);
+			iolat->max_depth, avg_lat, cur_win);
 }
 
 static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp,
@@ -990,9 +990,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 	latency_stat_init(iolat, &iolat->cur_stat);
 	rq_wait_init(&iolat->rq_wait);
 	spin_lock_init(&iolat->child_lat.lock);
-	iolat->rq_depth.queue_depth = blkg->q->nr_requests;
-	iolat->rq_depth.max_depth = UINT_MAX;
-	iolat->rq_depth.default_depth = iolat->rq_depth.queue_depth;
+	iolat->max_depth = UINT_MAX;
 	iolat->blkiolat = blkiolat;
 	iolat->cur_win_nsec = 100 * NSEC_PER_MSEC;
 	atomic64_set(&iolat->window_start, now);
-- 
2.30.0
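
A note on the mechanism, for readers outside the kernel tree: max_depth
is the only field this code consumes, via rq_wait_inc_below(), which
admits a bio only while the in-flight count is below that limit. The
stand-alone user-space C sketch below illustrates the same
compare-and-swap admission check; the names (iolat_sketch, inc_below)
are illustrative and not the kernel's implementation.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct iolat_sketch {
	atomic_uint inflight;   /* I/Os currently in flight */
	unsigned int max_depth; /* the single value this patch keeps */
};

/* Account one more in-flight I/O, unless we are at the depth limit. */
static bool inc_below(struct iolat_sketch *io)
{
	unsigned int cur = atomic_load(&io->inflight);

	do {
		if (cur >= io->max_depth)
			return false;   /* throttle: caller must wait */
	} while (!atomic_compare_exchange_weak(&io->inflight, &cur, cur + 1));

	return true;
}

int main(void)
{
	struct iolat_sketch io = { .max_depth = 2 };

	atomic_init(&io.inflight, 0);
	for (int i = 0; i < 3; i++)
		printf("request %d admitted: %s\n", i,
		       inc_below(&io) ? "yes" : "no (throttled)");
	return 0;
}

In the kernel proper, a failed check puts the submitter to sleep on
iolat->rq_wait; scale_change() later raises max_depth and calls
wake_up_all() on the waiters, as the hunks above show.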