From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Trond Myklebust
Subject: [PATCH 5.10 247/252] SUNRPC: More fixes for backlog congestion
Date: Mon, 31 May 2021 15:15:12 +0200
Message-Id: <20210531130706.393597050@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210531130657.971257589@linuxfoundation.org>
References: <20210531130657.971257589@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Trond Myklebust

commit e86be3a04bc4aeaf12f93af35f08f8d4385bcd98 upstream.

Ensure that we fix the XPRT_CONGESTED starvation issue for RDMA as well
as socket based transports. Ensure we always initialise the request
after waking up from the backlog list.
Fixes: e877a88d1f06 ("SUNRPC in case of backlog, hand free slots directly to waiting task")
Signed-off-by: Trond Myklebust
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/sunrpc/xprt.h     |    2 +
 net/sunrpc/xprt.c               |   58 +++++++++++++++++++---------------------
 net/sunrpc/xprtrdma/transport.c |   12 ++++----
 net/sunrpc/xprtrdma/verbs.c     |   18 ++++++++++--
 net/sunrpc/xprtrdma/xprt_rdma.h |    1 
 5 files changed, 52 insertions(+), 39 deletions(-)

--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -367,6 +367,8 @@ struct rpc_xprt * xprt_alloc(struct net
 				unsigned int num_prealloc,
 				unsigned int max_req);
 void		xprt_free(struct rpc_xprt *);
+void		xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task);
+bool		xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req);
 
 static inline int
 xprt_enable_swap(struct rpc_xprt *xprt)
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -1575,11 +1575,18 @@ xprt_transmit(struct rpc_task *task)
 	spin_unlock(&xprt->queue_lock);
 }
 
-static void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
+static void xprt_complete_request_init(struct rpc_task *task)
+{
+	if (task->tk_rqstp)
+		xprt_request_init(task);
+}
+
+void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
 {
 	set_bit(XPRT_CONGESTED, &xprt->state);
-	rpc_sleep_on(&xprt->backlog, task, NULL);
+	rpc_sleep_on(&xprt->backlog, task, xprt_complete_request_init);
 }
+EXPORT_SYMBOL_GPL(xprt_add_backlog);
 
 static bool __xprt_set_rq(struct rpc_task *task, void *data)
 {
@@ -1587,14 +1594,13 @@ static bool __xprt_set_rq(struct rpc_tas
 
 	if (task->tk_rqstp == NULL) {
 		memset(req, 0, sizeof(*req));	/* mark unused */
-		task->tk_status = -EAGAIN;
 		task->tk_rqstp = req;
 		return true;
 	}
 	return false;
 }
 
-static bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req)
+bool xprt_wake_up_backlog(struct rpc_xprt *xprt, struct rpc_rqst *req)
 {
 	if (rpc_wake_up_first(&xprt->backlog, __xprt_set_rq, req) == NULL) {
 		clear_bit(XPRT_CONGESTED, &xprt->state);
@@ -1602,6 +1608,7 @@ static bool xprt_wake_up_backlog(struct
 	}
 	return true;
 }
+EXPORT_SYMBOL_GPL(xprt_wake_up_backlog);
 
 static bool xprt_throttle_congested(struct rpc_xprt *xprt, struct rpc_task *task)
 {
@@ -1611,7 +1618,7 @@ static bool xprt_throttle_congested(stru
 		goto out;
 	spin_lock(&xprt->reserve_lock);
 	if (test_bit(XPRT_CONGESTED, &xprt->state)) {
-		rpc_sleep_on(&xprt->backlog, task, NULL);
+		xprt_add_backlog(xprt, task);
 		ret = true;
 	}
 	spin_unlock(&xprt->reserve_lock);
@@ -1780,10 +1787,6 @@ xprt_request_init(struct rpc_task *task)
 	struct rpc_xprt *xprt = task->tk_xprt;
 	struct rpc_rqst *req = task->tk_rqstp;
 
-	if (req->rq_task)
-		/* Already initialized */
-		return;
-
 	req->rq_task = task;
 	req->rq_xprt = xprt;
 	req->rq_buffer = NULL;
@@ -1844,10 +1847,8 @@ void xprt_retry_reserve(struct rpc_task
 	struct rpc_xprt *xprt = task->tk_xprt;
 
 	task->tk_status = 0;
-	if (task->tk_rqstp != NULL) {
-		xprt_request_init(task);
+	if (task->tk_rqstp != NULL)
 		return;
-	}
 
 	task->tk_status = -EAGAIN;
 	xprt_do_reserve(xprt, task);
@@ -1872,24 +1873,21 @@ void xprt_release(struct rpc_task *task)
 	}
 
 	xprt = req->rq_xprt;
-	if (xprt) {
-		xprt_request_dequeue_xprt(task);
-		spin_lock(&xprt->transport_lock);
-		xprt->ops->release_xprt(xprt, task);
-		if (xprt->ops->release_request)
-			xprt->ops->release_request(task);
-		xprt_schedule_autodisconnect(xprt);
-		spin_unlock(&xprt->transport_lock);
-		if (req->rq_buffer)
-			xprt->ops->buf_free(task);
-		xdr_free_bvec(&req->rq_rcv_buf);
-		xdr_free_bvec(&req->rq_snd_buf);
-		if (req->rq_cred != NULL)
-			put_rpccred(req->rq_cred);
-		if (req->rq_release_snd_buf)
-			req->rq_release_snd_buf(req);
-	} else
-		xprt = task->tk_xprt;
+	xprt_request_dequeue_xprt(task);
+	spin_lock(&xprt->transport_lock);
+	xprt->ops->release_xprt(xprt, task);
+	if (xprt->ops->release_request)
+		xprt->ops->release_request(task);
+	xprt_schedule_autodisconnect(xprt);
+	spin_unlock(&xprt->transport_lock);
+	if (req->rq_buffer)
+		xprt->ops->buf_free(task);
+	xdr_free_bvec(&req->rq_rcv_buf);
+	xdr_free_bvec(&req->rq_snd_buf);
+	if (req->rq_cred != NULL)
+		put_rpccred(req->rq_cred);
+	if (req->rq_release_snd_buf)
+		req->rq_release_snd_buf(req);
 
 	task->tk_rqstp = NULL;
 	if (likely(!bc_prealloc(req)))
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -520,9 +520,8 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xp
 	return;
 
 out_sleep:
-	set_bit(XPRT_CONGESTED, &xprt->state);
-	rpc_sleep_on(&xprt->backlog, task, NULL);
 	task->tk_status = -EAGAIN;
+	xprt_add_backlog(xprt, task);
 }
 
 /**
@@ -537,10 +536,11 @@ xprt_rdma_free_slot(struct rpc_xprt *xpr
 	struct rpcrdma_xprt *r_xprt =
 		container_of(xprt, struct rpcrdma_xprt, rx_xprt);
 
-	memset(rqst, 0, sizeof(*rqst));
-	rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
-	if (unlikely(!rpc_wake_up_next(&xprt->backlog)))
-		clear_bit(XPRT_CONGESTED, &xprt->state);
+	rpcrdma_reply_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
+	if (!xprt_wake_up_backlog(xprt, rqst)) {
+		memset(rqst, 0, sizeof(*rqst));
+		rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
+	}
 }
 
 static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt,
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1198,6 +1198,20 @@ void rpcrdma_mr_put(struct rpcrdma_mr *m
 }
 
 /**
+ * rpcrdma_reply_put - Put reply buffers back into pool
+ * @buffers: buffer pool
+ * @req: object to return
+ *
+ */
+void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req)
+{
+	if (req->rl_reply) {
+		rpcrdma_rep_put(buffers, req->rl_reply);
+		req->rl_reply = NULL;
+	}
+}
+
+/**
  * rpcrdma_buffer_get - Get a request buffer
  * @buffers: Buffer pool from which to obtain a buffer
  *
@@ -1225,9 +1239,7 @@ rpcrdma_buffer_get(struct rpcrdma_buffer
  */
 void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req)
 {
-	if (req->rl_reply)
-		rpcrdma_rep_put(buffers, req->rl_reply);
-	req->rl_reply = NULL;
+	rpcrdma_reply_put(buffers, req);
 
 	spin_lock(&buffers->rb_lock);
 	list_add(&req->rl_list, &buffers->rb_send_bufs);
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -472,6 +472,7 @@ void rpcrdma_mrs_refresh(struct rpcrdma_
 struct rpcrdma_req *rpcrdma_buffer_get(struct rpcrdma_buffer *);
 void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers,
 			struct rpcrdma_req *req);
+void rpcrdma_reply_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req);
 void rpcrdma_recv_buffer_put(struct rpcrdma_rep *);
 
 bool rpcrdma_regbuf_realloc(struct rpcrdma_regbuf *rb, size_t size,