From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "yukuai (C)", Jan Kara, Christoph Hellwig, Jens Axboe
Subject: [PATCH 5.15 533/667] bfq: Split shared queues on move between cgroups
Date: Tue, 7 Jun 2022 19:03:18 +0200
Message-Id: <20220607164950.687261475@linuxfoundation.org>
In-Reply-To: <20220607164934.766888869@linuxfoundation.org>
References: <20220607164934.766888869@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-ID: <linux-kernel.vger.kernel.org>

From: Jan Kara

commit 3bc5e683c67d94bd839a1da2e796c15847b51b69 upstream.

When bfqq is shared by multiple processes, it can happen that one of
the processes gets moved to a different cgroup (or just starts
submitting IO for a different cgroup). When that happens, we need to
split the merged bfqq; otherwise we will have IO for multiple cgroups
in one bfqq and will account IO time to the wrong entities. Similarly,
if the bfqq is scheduled to merge with another bfqq but the merge has
not happened yet, cancel the merge, as it may no longer be valid.
CC: stable@vger.kernel.org
Fixes: e21b7a0b9887 ("block, bfq: add full hierarchical scheduling and cgroups support")
Tested-by: "yukuai (C)"
Signed-off-by: Jan Kara
Reviewed-by: Christoph Hellwig
Link: https://lore.kernel.org/r/20220401102752.8599-3-jack@suse.cz
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 block/bfq-cgroup.c  | 36 +++++++++++++++++++++++++++++++++---
 block/bfq-iosched.c |  2 +-
 block/bfq-iosched.h |  1 +
 3 files changed, 35 insertions(+), 4 deletions(-)

--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -733,9 +733,39 @@ static struct bfq_group *__bfq_bic_chang
 	}

 	if (sync_bfqq) {
-		entity = &sync_bfqq->entity;
-		if (entity->sched_data != &bfqg->sched_data)
-			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
+			/* We are the only user of this bfqq, just move it */
+			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
+				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+		} else {
+			struct bfq_queue *bfqq;
+
+			/*
+			 * The queue was merged to a different queue. Check
+			 * that the merge chain still belongs to the same
+			 * cgroup.
+			 */
+			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
+				if (bfqq->entity.sched_data !=
+				    &bfqg->sched_data)
+					break;
+			if (bfqq) {
+				/*
+				 * Some queue changed cgroup so the merge is
+				 * not valid anymore. We cannot easily just
+				 * cancel the merge (by clearing new_bfqq) as
+				 * there may be other processes using this
+				 * queue and holding refs to all queues below
+				 * sync_bfqq->new_bfqq. Similarly if the merge
+				 * already happened, we need to detach from
+				 * bfqq now so that we cannot merge bio to a
+				 * request from the old cgroup.
+				 */
+				bfq_put_cooperator(sync_bfqq);
+				bfq_release_process_ref(bfqd, sync_bfqq);
+				bic_set_bfqq(bic, NULL, 1);
+			}
+		}
 	}

 	return bfqg;
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -5193,7 +5193,7 @@ static void bfq_put_stable_ref(struct bf
 	bfq_put_queue(bfqq);
 }

-static void bfq_put_cooperator(struct bfq_queue *bfqq)
+void bfq_put_cooperator(struct bfq_queue *bfqq)
 {
 	struct bfq_queue *__bfqq, *next;

--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -977,6 +977,7 @@ void bfq_weights_tree_remove(struct bfq_
 void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		     bool compensate, enum bfqq_expiration reason);
 void bfq_put_queue(struct bfq_queue *bfqq);
+void bfq_put_cooperator(struct bfq_queue *bfqq);
 void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
 void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 void bfq_schedule_dispatch(struct bfq_data *bfqd);