From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "yukuai (C)", Jan Kara, Christoph Hellwig, Jens Axboe
Subject: [PATCH 5.10 356/452] bfq: Split shared queues on move between cgroups
Date: Tue, 7 Jun 2022 19:03:33 +0200
Message-Id: <20220607164919.173105589@linuxfoundation.org>
In-Reply-To: <20220607164908.521895282@linuxfoundation.org>
References: <20220607164908.521895282@linuxfoundation.org>
X-Mailer: git-send-email 2.36.1
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-ID: <linux-kernel.vger.kernel.org>

From: Jan Kara

commit 3bc5e683c67d94bd839a1da2e796c15847b51b69 upstream.

When bfqq is shared by multiple processes, it can happen that one of the
processes gets moved to a different cgroup (or just starts submitting IO
for a different cgroup). If that happens, we need to split the merged
bfqq; otherwise we will have IO for multiple cgroups in one bfqq and
will account IO time to the wrong entities. Similarly, if the bfqq is
scheduled to merge with another bfqq but the merge has not happened yet,
cancel the merge, as it need not be valid anymore.
CC: stable@vger.kernel.org
Fixes: e21b7a0b9887 ("block, bfq: add full hierarchical scheduling and cgroups support")
Tested-by: "yukuai (C)"
Signed-off-by: Jan Kara
Reviewed-by: Christoph Hellwig
Link: https://lore.kernel.org/r/20220401102752.8599-3-jack@suse.cz
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 block/bfq-cgroup.c  | 36 +++++++++++++++++++++++++++++++++---
 block/bfq-iosched.c |  2 +-
 block/bfq-iosched.h |  1 +
 3 files changed, 35 insertions(+), 4 deletions(-)

--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -725,9 +725,39 @@ static struct bfq_group *__bfq_bic_chang
 	}

 	if (sync_bfqq) {
-		entity = &sync_bfqq->entity;
-		if (entity->sched_data != &bfqg->sched_data)
-			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
+			/* We are the only user of this bfqq, just move it */
+			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
+				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+		} else {
+			struct bfq_queue *bfqq;
+
+			/*
+			 * The queue was merged to a different queue. Check
+			 * that the merge chain still belongs to the same
+			 * cgroup.
+			 */
+			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
+				if (bfqq->entity.sched_data !=
+				    &bfqg->sched_data)
+					break;
+			if (bfqq) {
+				/*
+				 * Some queue changed cgroup so the merge is
+				 * not valid anymore. We cannot easily just
+				 * cancel the merge (by clearing new_bfqq) as
+				 * there may be other processes using this
+				 * queue and holding refs to all queues below
+				 * sync_bfqq->new_bfqq. Similarly if the merge
+				 * already happened, we need to detach from
+				 * bfqq now so that we cannot merge bio to a
+				 * request from the old cgroup.
+				 */
+				bfq_put_cooperator(sync_bfqq);
+				bfq_release_process_ref(bfqd, sync_bfqq);
+				bic_set_bfqq(bic, NULL, 1);
+			}
+		}
 	}

 	return bfqg;
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4917,7 +4917,7 @@ void bfq_put_queue(struct bfq_queue *bfq
 	bfqg_and_blkg_put(bfqg);
 }

-static void bfq_put_cooperator(struct bfq_queue *bfqq)
+void bfq_put_cooperator(struct bfq_queue *bfqq)
 {
 	struct bfq_queue *__bfqq, *next;
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -954,6 +954,7 @@ void bfq_weights_tree_remove(struct bfq_
 void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
		     bool compensate, enum bfqq_expiration reason);
 void bfq_put_queue(struct bfq_queue *bfqq);
+void bfq_put_cooperator(struct bfq_queue *bfqq);
 void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
 void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 void bfq_schedule_dispatch(struct bfq_data *bfqd);