From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "yukuai (C)", Jan Kara, Christoph Hellwig, Jens Axboe
Subject: [PATCH 5.18 729/879] bfq: Split shared queues on move between cgroups
Date: Tue, 7 Jun 2022 19:04:07 +0200
Message-Id: <20220607165024.016185919@linuxfoundation.org>
In-Reply-To: <20220607165002.659942637@linuxfoundation.org>
References: <20220607165002.659942637@linuxfoundation.org>

From: Jan Kara

commit 3bc5e683c67d94bd839a1da2e796c15847b51b69 upstream.

When bfqq is shared by multiple processes, it can happen that one of the
processes gets moved to a different cgroup (or just starts submitting IO
for a different cgroup). When that happens, we need to split the merged
bfqq, as otherwise we will have IO for multiple cgroups in one bfqq and
will account IO time to the wrong entities. Similarly, if the bfqq is
scheduled to merge with another bfqq but the merge has not happened yet,
cancel the merge, as it may no longer be valid.
CC: stable@vger.kernel.org
Fixes: e21b7a0b9887 ("block, bfq: add full hierarchical scheduling and cgroups support")
Tested-by: "yukuai (C)"
Signed-off-by: Jan Kara
Reviewed-by: Christoph Hellwig
Link: https://lore.kernel.org/r/20220401102752.8599-3-jack@suse.cz
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 block/bfq-cgroup.c  | 36 +++++++++++++++++++++++++++++++++---
 block/bfq-iosched.c |  2 +-
 block/bfq-iosched.h |  1 +
 3 files changed, 35 insertions(+), 4 deletions(-)

--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -743,9 +743,39 @@ static struct bfq_group *__bfq_bic_chang
 	}
 
 	if (sync_bfqq) {
-		entity = &sync_bfqq->entity;
-		if (entity->sched_data != &bfqg->sched_data)
-			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
+			/* We are the only user of this bfqq, just move it */
+			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
+				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+		} else {
+			struct bfq_queue *bfqq;
+
+			/*
+			 * The queue was merged to a different queue. Check
+			 * that the merge chain still belongs to the same
+			 * cgroup.
+			 */
+			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
+				if (bfqq->entity.sched_data !=
+				    &bfqg->sched_data)
+					break;
+			if (bfqq) {
+				/*
+				 * Some queue changed cgroup so the merge is
+				 * not valid anymore. We cannot easily just
+				 * cancel the merge (by clearing new_bfqq) as
+				 * there may be other processes using this
+				 * queue and holding refs to all queues below
+				 * sync_bfqq->new_bfqq. Similarly if the merge
+				 * already happened, we need to detach from
+				 * bfqq now so that we cannot merge bio to a
+				 * request from the old cgroup.
+				 */
+				bfq_put_cooperator(sync_bfqq);
+				bfq_release_process_ref(bfqd, sync_bfqq);
+				bic_set_bfqq(bic, NULL, 1);
+			}
+		}
 	}
 
 	return bfqg;
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -5319,7 +5319,7 @@ static void bfq_put_stable_ref(struct bf
 	bfq_put_queue(bfqq);
 }
 
-static void bfq_put_cooperator(struct bfq_queue *bfqq)
+void bfq_put_cooperator(struct bfq_queue *bfqq)
 {
 	struct bfq_queue *__bfqq, *next;
 
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -980,6 +980,7 @@ void bfq_weights_tree_remove(struct bfq_
 void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		     bool compensate, enum bfqq_expiration reason);
 void bfq_put_queue(struct bfq_queue *bfqq);
+void bfq_put_cooperator(struct bfq_queue *bfqq);
 void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
 void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 void bfq_schedule_dispatch(struct bfq_data *bfqd);