From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Johannes Lundberg, Jens Axboe
Subject: [PATCH 5.13 289/300] io-wq: check max_worker limits if a worker transitions bound state
Date: Mon, 13 Sep 2021 15:15:50 +0200
Message-Id: <20210913131119.115555510@linuxfoundation.org>
In-Reply-To: <20210913131109.253835823@linuxfoundation.org>
References: <20210913131109.253835823@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain;
  charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jens Axboe

commit ecc53c48c13d995e6fe5559e30ffee48d92784fd upstream.

For the two places where new workers are created, we diligently check if
we are allowed to create a new worker. If we're currently at the limit
of how many workers of a given type we can have, then we don't create
any new ones.

If you have a mixed workload with various types of bound and unbounded
work, then it can happen that a worker finishes one type of work and is
then transitioned to the other type. For this case, we don't check if we
are actually allowed to do so. This can cause io-wq to temporarily
exceed the allowed number of workers for a given type.

When retrieving work, check that the types match. If they don't, check
if we are allowed to transition to the other type. If not, then don't
handle the new work.

Cc: stable@vger.kernel.org
Reported-by: Johannes Lundberg
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 fs/io-wq.c |   33 ++++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -424,7 +424,28 @@ static void io_wait_on_hash(struct io_wq
 	spin_unlock(&wq->hash->wait.lock);
 }
 
-static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
+/*
+ * We can always run the work if the worker is currently the same type as
+ * the work (eg both are bound, or both are unbound). If they are not the
+ * same, only allow it if incrementing the worker count would be allowed.
+ */
+static bool io_worker_can_run_work(struct io_worker *worker,
+				   struct io_wq_work *work)
+{
+	struct io_wqe_acct *acct;
+
+	if (!(worker->flags & IO_WORKER_F_BOUND) !=
+	    !(work->flags & IO_WQ_WORK_UNBOUND))
+		return true;
+
+	/* not the same type, check if we'd go over the limit */
+	acct = io_work_get_acct(worker->wqe, work);
+	return acct->nr_workers < acct->max_workers;
+}
+
+static struct io_wq_work *io_get_next_work(struct io_wqe *wqe,
+					   struct io_worker *worker,
+					   bool *stalled)
 	__must_hold(wqe->lock)
 {
 	struct io_wq_work_node *node, *prev;
@@ -436,6 +457,9 @@ static struct io_wq_work *io_get_next_wo
 
 		work = container_of(node, struct io_wq_work, list);
 
+		if (!io_worker_can_run_work(worker, work))
+			break;
+
 		/* not hashed, can run anytime */
 		if (!io_wq_is_hashed(work)) {
 			wq_list_del(&wqe->work_list, node, prev);
@@ -462,6 +486,7 @@ static struct io_wq_work *io_get_next_wo
 		raw_spin_unlock(&wqe->lock);
 		io_wait_on_hash(wqe, stall_hash);
 		raw_spin_lock(&wqe->lock);
+		*stalled = true;
 	}
 
 	return NULL;
@@ -501,6 +526,7 @@ static void io_worker_handle_work(struct
 	do {
 		struct io_wq_work *work;
+		bool stalled;
 get_next:
 		/*
 		 * If we got some work, mark us as busy. If we didn't, but
@@ -509,10 +535,11 @@ get_next:
 		 * can't make progress, any work completion or insertion will
 		 * clear the stalled flag.
 		 */
-		work = io_get_next_work(wqe);
+		stalled = false;
+		work = io_get_next_work(wqe, worker, &stalled);
 		if (work)
 			__io_worker_busy(wqe, worker, work);
-		else if (!wq_list_empty(&wqe->work_list))
+		else if (stalled)
 			wqe->flags |= IO_WQE_FLAG_STALLED;
 
 		raw_spin_unlock_irq(&wqe->lock);