Subject: Re: [PATCH -next 00/11] support concurrent sync io for bfq on a special occasion
From: "yukuai (C)"
Date: Fri, 11 Mar 2022 14:31:58 +0800
In-Reply-To: <20220305091205.4188398-1-yukuai3@huawei.com>
References: <20220305091205.4188398-1-yukuai3@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

friendly ping ...

On 2022/03/05 17:11, Yu Kuai wrote:
> Currently, bfq can't handle sync io concurrently as long as it is not
> issued from the root group. This is because
> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
> bfq_asymmetric_scenario().
>
> This patchset tries to support concurrent sync io if all the sync ios
> are issued from the same cgroup:
>
> 1) Count root_group into 'num_groups_with_pending_reqs', patches 1-5;
>
> 2) Don't idle if 'num_groups_with_pending_reqs' is 1, patch 6;
>
> 3) Don't count a group that has no pending requests of its own, even if
>    its child groups have pending requests, patch 7;
>
>    This is because, for example, if sync ios are issued from cgroup
>    /root/c1/c2, then root, c1 and c2 are all counted into
>    'num_groups_with_pending_reqs', which makes it impossible to handle
>    sync ios concurrently.
>
> 4) Decrease 'num_groups_with_pending_reqs' when the last queue of a
>    group completes all its requests, even though child groups may still
>    have pending requests, patches 8-10;
>
>    This is because, for example: t1 issues sync io on the root group,
>    while t2 and t3 issue sync io on the same child group;
>    'num_groups_with_pending_reqs' is 2 now. After t1 stops,
>    'num_groups_with_pending_reqs' is still 2, so sync io from t2 and t3
>    still can't be handled concurrently.
>
> fio test script ('startdelay' is used to avoid queue merging):
>
> [global]
> filename=/dev/nvme0n1
> allow_mounted_write=0
> ioengine=psync
> direct=1
> ioscheduler=bfq
> offset_increment=10g
> group_reporting
> rw=randwrite
> bs=4k
>
> [test1]
> numjobs=1
>
> [test2]
> startdelay=1
> numjobs=1
>
> [test3]
> startdelay=2
> numjobs=1
>
> [test4]
> startdelay=3
> numjobs=1
>
> [test5]
> startdelay=4
> numjobs=1
>
> [test6]
> startdelay=5
> numjobs=1
>
> [test7]
> startdelay=6
> numjobs=1
>
> [test8]
> startdelay=7
> numjobs=1
>
> Test result:
>
> running fio on the root cgroup:
> v5.17-rc6:         550 MiB/s
> v5.17-rc6-patched: 550 MiB/s
>
> running fio on a non-root cgroup:
> v5.17-rc6:         349 MiB/s
> v5.17-rc6-patched: 550 MiB/s
>
> Yu Kuai (11):
>   block, bfq: add new apis to iterate bfq entities
>   block, bfq: apply new apis where root group is not expected
>   block, bfq: cleanup for __bfq_activate_requeue_entity()
>   block, bfq: move the increment of 'num_groups_with_pending_reqs' to
>     its caller
>   block, bfq: count root group into 'num_groups_with_pending_reqs'
>   block, bfq: do not idle if only one cgroup is activated
>   block, bfq: only count parent bfqg when bfqq is activated
>   block, bfq: record how many queues have pending requests in bfq_group
>   block, bfq: move forward __bfq_weights_tree_remove()
>   block, bfq: decrease 'num_groups_with_pending_reqs' earlier
>   block, bfq: cleanup bfqq_group()
>
>  block/bfq-cgroup.c  | 13 +++----
>  block/bfq-iosched.c | 87 +++++++++++++++++++++++----------------------
>  block/bfq-iosched.h | 41 +++++++++++++--------
>  block/bfq-wf2q.c    | 56 +++++++++++++++--------
>  4 files changed, 106 insertions(+), 91 deletions(-)
>