Subject: Re: [PATCH -next v10 3/4] block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
To: Paolo Valente, Yu Kuai
Cc: Jan Kara, cgroups@vger.kernel.org, linux-block, Tejun Heo, Jens Axboe, LKML, yi.zhang@huawei.com, "yukuai3@huawei.com >> yukuai (C)"
References: <20220610021701.2347602-1-yukuai3@huawei.com> <20220610021701.2347602-4-yukuai3@huawei.com> <27F2DF19-7CC6-42C5-8CEB-43583EB4AE46@linaro.org> <9b2d667f-6636-9347-08a1-8bd0aa2346f2@huaweicloud.com> <2f94f241-445f-1beb-c4a8-73f6efce5af2@huaweicloud.com> <55A07102-BE55-4606-9E32-64E884064FB9@unimore.it>
From: Yu Kuai
Message-ID: <5cb0e5bc-feec-86d6-6f60-3c28ee625efd@huaweicloud.com>
Date: Thu, 11 Aug 2022 09:19:11 +0800
In-Reply-To: <55A07102-BE55-4606-9E32-64E884064FB9@unimore.it>
Content-Type: text/plain; charset=utf-8; format=flowed

Hi, Paolo

On 2022/08/10 18:49, Paolo Valente wrote:
>
>> On 27 Jul 2022, at 14:11, Yu Kuai wrote:
>>
>> Hi, Paolo
>>
>
> hi
>
>> Are you still interested in this patchset?
>>
>
> Yes. Sorry for replying very late again.
>
> Probably the last fix that you suggest is enough, but I'm a little bit
> concerned that it may be a little hasty. In fact, before this fix, we
> exchanged several messages, and I didn't seem to be very good at
> convincing you about the need to keep into account also in-service
> I/O. So, my question is: are you sure that now you have a

I'm confused here: I'm well aware that in-service I/O (that is, "pending
requests" in this patchset) should be counted, as you suggested in v7.
Do you still think the way this patchset counts it is problematic?

I'll try to explain again how the patchset tracks whether bfqq has
pending requests; please let me know if you still see any problem:

Patch 1 adds support to track whether bfqq has pending requests. This is
done by setting the flag 'entity->in_groups_with_pending_reqs' when the
first request is inserted into bfqq, and clearing it when the last
request is completed. Specifically, the flag is set in
bfq_add_bfqq_busy() when 'bfqq->dispatched' is false, and it's cleared
in both bfq_completed_request() and bfq_del_bfqq_busy() when
'bfqq->dispatched' is false.

Thanks,
Kuai

> clear/complete understanding of this non-trivial matter?
> Consequently, are we sure that this last fix is most certainly all we
> need? Of course, I will check on my own, but if you reassure me on
> this point, I will feel more confident.
>
> Thanks,
> Paolo
>
>> On 2022/07/20 19:38, Yu Kuai wrote:
>>> Hi
>>>
>>> On 2022/07/20 19:24, Paolo VALENTE wrote:
>>>>
>>>>> On 12 Jul 2022, at 15:30, Yu Kuai wrote:
>>>>>
>>>>> Hi!
>>>>>
>>>>> I'm copying my reply with my new mail address, because Paolo seems
>>>>> not to have received my reply.
>>>>>
>>>>> On 2022/06/23 23:32, Paolo Valente wrote:
>>>>>> Sorry for the delay.
>>>>>>> On 10 Jun 2022, at 04:17, Yu Kuai wrote:
>>>>>>>
>>>>>>> Currently, bfq can't handle sync io concurrently as long as they
>>>>>>> are not issued from root group. This is because
>>>>>>> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
>>>>>>> bfq_asymmetric_scenario().
>>>>>>>
>>>>>>> The way that bfqg is counted into 'num_groups_with_pending_reqs':
>>>>>>>
>>>>>>> Before this patch:
>>>>>>> 1) root group will never be counted.
>>>>>>> 2) Count if bfqg or its child bfqgs have pending requests.
>>>>>>> 3) Don't count if bfqg and its child bfqgs complete all the requests.
>>>>>>>
>>>>>>> After this patch:
>>>>>>> 1) root group is counted.
>>>>>>> 2) Count if bfqg has pending requests.
>>>>>>> 3) Don't count if bfqg completes all the requests.
>>>>>>>
>>>>>>> With this change, the occasion that only one group is activated can be
>>>>>>> detected, and the next patch will support concurrent sync io in that
>>>>>>> occasion.
>>>>>>>
>>>>>>> Signed-off-by: Yu Kuai
>>>>>>> Reviewed-by: Jan Kara
>>>>>>> ---
>>>>>>> block/bfq-iosched.c | 42 ------------------------------------------
>>>>>>> block/bfq-iosched.h | 18 +++++++++---------
>>>>>>> block/bfq-wf2q.c    | 19 ++++---------------
>>>>>>> 3 files changed, 13 insertions(+), 66 deletions(-)
>>>>>>>
>>>>>>> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
>>>>>>> index 0ec21018daba..03b04892440c 100644
>>>>>>> --- a/block/bfq-iosched.c
>>>>>>> +++ b/block/bfq-iosched.c
>>>>>>> @@ -970,48 +970,6 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
>>>>>>>  void bfq_weights_tree_remove(struct bfq_data *bfqd,
>>>>>>>  			     struct bfq_queue *bfqq)
>>>>>>>  {
>>>>>>> -	struct bfq_entity *entity = bfqq->entity.parent;
>>>>>>> -
>>>>>>> -	for_each_entity(entity) {
>>>>>>> -		struct bfq_sched_data *sd = entity->my_sched_data;
>>>>>>> -
>>>>>>> -		if (sd->next_in_service || sd->in_service_entity) {
>>>>>>> -			/*
>>>>>>> -			 * entity is still active, because either
>>>>>>> -			 * next_in_service or in_service_entity is not
>>>>>>> -			 * NULL (see the comments on the definition of
>>>>>>> -			 * next_in_service for details on why
>>>>>>> -			 * in_service_entity must be checked too).
>>>>>>> -			 *
>>>>>>> -			 * As a consequence, its parent entities are
>>>>>>> -			 * active as well, and thus this loop must
>>>>>>> -			 * stop here.
>>>>>>> -			 */
>>>>>>> -			break;
>>>>>>> -		}
>>>>>>> -
>>>>>>> -		/*
>>>>>>> -		 * The decrement of num_groups_with_pending_reqs is
>>>>>>> -		 * not performed immediately upon the deactivation of
>>>>>>> -		 * entity, but it is delayed to when it also happens
>>>>>>> -		 * that the first leaf descendant bfqq of entity gets
>>>>>>> -		 * all its pending requests completed. The following
>>>>>>> -		 * instructions perform this delayed decrement, if
>>>>>>> -		 * needed. See the comments on
>>>>>>> -		 * num_groups_with_pending_reqs for details.
>>>>>>> -		 */
>>>>>>> -		if (entity->in_groups_with_pending_reqs) {
>>>>>>> -			entity->in_groups_with_pending_reqs = false;
>>>>>>> -			bfqd->num_groups_with_pending_reqs--;
>>>>>>> -		}
>>>>>>> -	}
>>>>>>
>>>>>> With this part removed, I'm missing how you handle the following
>>>>>> sequence of events:
>>>>>> 1. a queue Q becomes non busy but still has dispatched requests, so
>>>>>> it must not be removed from the counter of queues with pending reqs
>>>>>> yet;
>>>>>> 2. the last request of Q is completed with Q being still idle (non
>>>>>> busy). At this point Q must be removed from the counter. It seems to
>>>>>> me that this case is not handled any longer.
>>>>>
>>>>> Hi, Paolo
>>>>>
>>>>> 1) At first, patch 1 adds support to track whether bfqq has pending
>>>>> requests; it's done by setting the flag
>>>>> 'entity->in_groups_with_pending_reqs' when the first request is
>>>>> inserted into bfqq, and it's cleared when the last request is
>>>>> completed (based on weights_tree insertion and removal).
>>>>
>>>> In patch 1 I don't see the flag cleared for the request-completion event :(
>>>>
>>>> The piece of code involved is this:
>>>>
>>>> static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
>>>> {
>>>> 	u64 now_ns;
>>>> 	u32 delta_us;
>>>>
>>>> 	bfq_update_hw_tag(bfqd);
>>>>
>>>> 	bfqd->rq_in_driver[bfqq->actuator_idx]--;
>>>> 	bfqd->tot_rq_in_driver--;
>>>> 	bfqq->dispatched--;
>>>>
>>>> 	if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
>>>> 		/*
>>>> 		 * Set budget_timeout (which we overload to store the
>>>> 		 * time at which the queue remains with no backlog and
>>>> 		 * no outstanding request; used by the weight-raising
>>>> 		 * mechanism).
>>>> 		 */
>>>> 		bfqq->budget_timeout = jiffies;
>>>>
>>>> 		bfq_weights_tree_remove(bfqd, bfqq);
>>>> 	}
>>>> 	...
>>>>
>>>> Am I missing something?
>>>
>>> I added a new API, bfq_del_bfqq_in_groups_with_pending_reqs(), in
>>> patch 1 to clear the flag, and it's called both from
>>> bfq_del_bfqq_busy() and bfq_completed_request(). I think you may have
>>> missed the latter:
>>>
>>> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
>>> index 0d46cb728bbf..0ec21018daba 100644
>>> --- a/block/bfq-iosched.c
>>> +++ b/block/bfq-iosched.c
>>> @@ -6263,6 +6263,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
>>>  		 */
>>>  		bfqq->budget_timeout = jiffies;
>>>
>>> +		bfq_del_bfqq_in_groups_with_pending_reqs(bfqq);
>>>  		bfq_weights_tree_remove(bfqd, bfqq);
>>>  	}
>>>
>>> Thanks,
>>> Kuai
>>>>
>>>> Thanks,
>>>> Paolo
>>
>
> .
>