From: minoura makoto
To: linux-nfs@vger.kernel.org
Cc: minoura makoto, Hiroshi Shimamoto, Trond Myklebust
Subject: [PATCH v4] SUNRPC: ensure the matching upcall is in-flight upon downcall
Date: Tue, 13 Dec 2022 13:14:31 +0900
Message-Id: <20221213041430.311141-1-minoura@valinux.co.jp>
Commit 9130b8dbc6ac ("SUNRPC: allow for upcalls for the same uid but
different gss service") introduced the `auth` argument to
__gss_find_upcall(), but gss_pipe_downcall() passes NULL for it, since
the auth (and hence auth->service) has not yet been determined at that
point.

When multiple upcalls with the same uid but different services are
ongoing, __gss_find_upcall(), which returns the first match found in
the pipe->in_downcall list, may fail to find the gss_msg corresponding
to the downcall we are looking for. Worse, it may return a msg that
has not been sent to rpc.gssd yet.

We observed mount.nfs processes hung in D state when multiple
mount.nfs commands were executed in parallel. The call trace below is
from CentOS 7.9 kernel-3.10.0-1160.24.1.el7.x86_64, but we observed
the same hang with elrepo kernel-ml-6.0.7-1.el7.

PID: 71258  TASK: ffff91ebd4be0000  CPU: 36  COMMAND: "mount.nfs"
 #0 [ffff9203ca3234f8] __schedule at ffffffffa3b8899f
 #1 [ffff9203ca323580] schedule at ffffffffa3b88eb9
 #2 [ffff9203ca323590] gss_cred_init at ffffffffc0355818 [auth_rpcgss]
 #3 [ffff9203ca323658] rpcauth_lookup_credcache at ffffffffc0421ebc [sunrpc]
 #4 [ffff9203ca3236d8] gss_lookup_cred at ffffffffc0353633 [auth_rpcgss]
 #5 [ffff9203ca3236e8] rpcauth_lookupcred at ffffffffc0421581 [sunrpc]
 #6 [ffff9203ca323740] rpcauth_refreshcred at ffffffffc04223d3 [sunrpc]
 #7 [ffff9203ca3237a0] call_refresh at ffffffffc04103dc [sunrpc]
 #8 [ffff9203ca3237b8] __rpc_execute at ffffffffc041e1c9 [sunrpc]
 #9 [ffff9203ca323820] rpc_execute at ffffffffc0420a48 [sunrpc]

The scenario is as follows. Suppose there are two ongoing upcalls for
services A and B, ordered A -> B in pipe->in_downcall and B -> A in
pipe->pipe. When rpc.gssd reads the pipe to fetch the upcall msg for
service B from pipe->pipe and then writes the response, in
gss_pipe_downcall() the msg for service A is picked instead, because
only the uid is used to find the msg and A's entry precedes B's in
pipe->in_downcall. The process waiting for the msg corresponding to
service A is then woken up.

Actual scheduling of that process may happen only after rpc.gssd has
processed the next msg, at which point rpc_pipe_generic_upcall()
clears msg->errno (for A). When the process finally runs, it sees
gss_msg->ctx == NULL and gss_msg->msg.errno == 0, so it can never
break out of the loop in gss_create_upcall() and is never woken up
again.

This patch adds a simple check to ensure that a msg that has not yet
been sent to rpc.gssd is not chosen as the matching upcall upon
receiving a downcall.
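For reviewers: the following is an abbreviated, illustrative excerpt
(not part of this patch, details elided with "...") of rpc_pipe_read()
in net/sunrpc/rpc_pipe.c, showing why the "msg->copied != 0 &&
list_empty(&msg->list)" test below identifies a msg that rpc.gssd has
already consumed, i.e. one that is genuinely in-flight:

	static ssize_t
	rpc_pipe_read(struct file *filp, char __user *buf, size_t len,
		      loff_t *offset)
	{
		...
		msg = filp->private_data;
		if (msg == NULL) {
			if (!list_empty(&pipe->pipe)) {
				/* Dequeue the next upcall: a fresh msg
				 * starts with copied == 0 and stays
				 * linked on a pipe list while it is
				 * being read. */
				msg = list_entry(pipe->pipe.next,
						struct rpc_pipe_msg, list);
				list_move(&msg->list, &pipe->in_upcall);
				...
				msg->copied = 0;
			}
			...
		}
		/* The upcall op (rpc_pipe_generic_upcall() for gss)
		 * advances msg->copied as rpc.gssd reads the msg. */
		res = pipe->ops->upcall(filp, msg, buf, len);
		if (res < 0 || msg->len == msg->copied) {
			/* Fully read by rpc.gssd: the msg is unlinked,
			 * so copied != 0 && list_empty(&msg->list) now
			 * holds and a downcall for it may arrive. */
			filp->private_data = NULL;
			list_del_init(&msg->list);
			...
		}
		...
	}

A msg still waiting in pipe->pipe has copied == 0, and a partially
read msg is still linked on a pipe list, so neither is matched by the
new check.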
Fixes: 9130b8dbc6ac ("SUNRPC: allow for upcalls for the same uid but different gss service")
Signed-off-by: minoura makoto
Signed-off-by: Hiroshi Shimamoto
Tested-by: Hiroshi Shimamoto
Cc: Trond Myklebust
---
 include/linux/sunrpc/rpc_pipe_fs.h |  5 +++++
 net/sunrpc/auth_gss/auth_gss.c     | 19 +++++++++++++++++--
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/linux/sunrpc/rpc_pipe_fs.h b/include/linux/sunrpc/rpc_pipe_fs.h
index cd188a527d16..3b35b6f6533a 100644
--- a/include/linux/sunrpc/rpc_pipe_fs.h
+++ b/include/linux/sunrpc/rpc_pipe_fs.h
@@ -92,6 +92,11 @@ extern ssize_t rpc_pipe_generic_upcall(struct file *, struct rpc_pipe_msg *,
 				       char __user *, size_t);
 extern int rpc_queue_upcall(struct rpc_pipe *, struct rpc_pipe_msg *);
 
+/* returns true if the msg is in-flight, i.e., already eaten by the peer */
+static inline bool rpc_msg_is_inflight(const struct rpc_pipe_msg *msg) {
+	return (msg->copied != 0 && list_empty(&msg->list));
+}
+
 struct rpc_clnt;
 extern struct dentry *rpc_create_client_dir(struct dentry *, const char *, struct rpc_clnt *);
 extern int rpc_remove_client_dir(struct rpc_clnt *);
diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
index 7bb247c51e2f..2d7b1e03110a 100644
--- a/net/sunrpc/auth_gss/auth_gss.c
+++ b/net/sunrpc/auth_gss/auth_gss.c
@@ -302,7 +302,7 @@ __gss_find_upcall(struct rpc_pipe *pipe, kuid_t uid, const struct gss_auth *auth
 	list_for_each_entry(pos, &pipe->in_downcall, list) {
 		if (!uid_eq(pos->uid, uid))
 			continue;
-		if (auth && pos->auth->service != auth->service)
+		if (pos->auth->service != auth->service)
 			continue;
 		refcount_inc(&pos->count);
 		return pos;
@@ -686,6 +686,21 @@ gss_create_upcall(struct gss_auth *gss_auth, struct gss_cred *gss_cred)
 	return err;
 }
 
+static struct gss_upcall_msg *
+gss_find_downcall(struct rpc_pipe *pipe, kuid_t uid)
+{
+	struct gss_upcall_msg *pos;
+	list_for_each_entry(pos, &pipe->in_downcall, list) {
+		if (!uid_eq(pos->uid, uid))
+			continue;
+		if (!rpc_msg_is_inflight(&pos->msg))
+			continue;
+		refcount_inc(&pos->count);
+		return pos;
+	}
+	return NULL;
+}
+
 #define MSG_BUF_MAXSIZE 1024
 
 static ssize_t
@@ -732,7 +747,7 @@ gss_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
 	err = -ENOENT;
 	/* Find a matching upcall */
 	spin_lock(&pipe->lock);
-	gss_msg = __gss_find_upcall(pipe, uid, NULL);
+	gss_msg = gss_find_downcall(pipe, uid);
 	if (gss_msg == NULL) {
 		spin_unlock(&pipe->lock);
 		goto err_put_ctx;
-- 
2.25.1