Subject: Re: [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion
From: "yukuai (C)"
References: <20220305091205.4188398-1-yukuai3@huawei.com>
Message-ID: <11fda851-a552-97ea-d083-d0288c17ba53@huawei.com>
Date: Thu, 17 Mar 2022 09:49:16 +0800
List-ID: <linux-kernel@vger.kernel.org>

friendly ping ...

On 2022/03/11 14:31, yukuai (C) wrote:
> friendly ping ...
>
> On 2022/03/05 17:11, Yu Kuai wrote:
>> Currently, bfq can't handle sync io concurrently as long as it is
>> not issued from the root group. This is because
>> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
>> bfq_asymmetric_scenario().
>>
>> This patchset tries to support concurrent sync io if all the sync ios
>> are issued from the same cgroup:
>>
>> 1) Count root_group into 'num_groups_with_pending_reqs', patches 1-5;
>>
>> 2) Don't idle if 'num_groups_with_pending_reqs' is 1, patch 6;
>>
>> 3) Don't count a group if the group itself doesn't have pending
>> requests, even though its child groups may have pending requests,
>> patch 7;
>>
>> This is because, for example: if sync ios are issued from cgroup
>> /root/c1/c2, then root, c1 and c2 will all be counted into
>> 'num_groups_with_pending_reqs', which makes it impossible to handle
>> sync ios concurrently.
>>
>> 4) Decrease 'num_groups_with_pending_reqs' when the last queue
>> completes all of its requests, even though child groups may still
>> have pending requests, patches 8-10;
>>
>> This is because, for example: t1 issues sync io on the root group,
>> while t2 and t3 issue sync io on the same child group.
>> 'num_groups_with_pending_reqs' is now 2. After t1 stops,
>> 'num_groups_with_pending_reqs' is still 2, so sync io from t2 and t3
>> still can't be handled concurrently.
>>
>> fio test script ('startdelay' is used to avoid queue merging):
>> [global]
>> filename=/dev/nvme0n1
>> allow_mounted_write=0
>> ioengine=psync
>> direct=1
>> ioscheduler=bfq
>> offset_increment=10g
>> group_reporting
>> rw=randwrite
>> bs=4k
>>
>> [test1]
>> numjobs=1
>>
>> [test2]
>> startdelay=1
>> numjobs=1
>>
>> [test3]
>> startdelay=2
>> numjobs=1
>>
>> [test4]
>> startdelay=3
>> numjobs=1
>>
>> [test5]
>> startdelay=4
>> numjobs=1
>>
>> [test6]
>> startdelay=5
>> numjobs=1
>>
>> [test7]
>> startdelay=6
>> numjobs=1
>>
>> [test8]
>> startdelay=7
>> numjobs=1
>>
>> test result:
>> running fio on the root cgroup
>> v5.17-rc6:         550 MiB/s
>> v5.17-rc6-patched: 550 MiB/s
>>
>> running fio on a non-root cgroup
>> v5.17-rc6:         349 MiB/s
>> v5.17-rc6-patched: 550 MiB/s
>>
>> Yu Kuai (11):
>>    block, bfq: add new apis to iterate bfq entities
>>    block, bfq: apply new apis where root group is not expected
>>    block, bfq: cleanup for __bfq_activate_requeue_entity()
>>    block, bfq: move the increase of 'num_groups_with_pending_reqs' to
>>      its caller
>>    block, bfq: count root group into 'num_groups_with_pending_reqs'
>>    block, bfq: do not idle if only one cgroup is activated
>>    block, bfq: only count parent bfqg when bfqq is activated
>>    block, bfq: record how many queues have pending requests in bfq_group
>>    block, bfq: move forward __bfq_weights_tree_remove()
>>    block, bfq: decrease 'num_groups_with_pending_reqs' earlier
>>    block, bfq: cleanup bfqq_group()
>>
>>   block/bfq-cgroup.c  | 13 +++---
>>   block/bfq-iosched.c | 87 +++++++++++++++++++++++----------------------
>>   block/bfq-iosched.h | 41 +++++++++++++--------
>>   block/bfq-wf2q.c    | 56 +++++++++++++++--------------
>>   4 files changed, 106 insertions(+), 91 deletions(-)
>>
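[editor's note: the cover letter does not say how the jobs were placed in a non-root cgroup. One way, sketched here under the assumption of a cgroup-v1 blkio hierarchy, is fio's own 'cgroup' job option; the cgroup name "bfqtest" is arbitrary.]

```ini
; added to [global] (or to individual jobs) of the job file above:
; run the job inside blkio cgroup "bfqtest", created by fio if absent
cgroup=bfqtest
```

With distinct 'cgroup=' values per job section, the same job file can also exercise the multi-group (asymmetric) case for comparison.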