From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, John Hubbard, Trond Myklebust, Sasha Levin
Subject: [PATCH 5.2 075/162] NFSv4: Fix a potential sleep while atomic in nfs4_do_reclaim()
Date: Tue, 27 Aug 2019 09:50:03 +0200
Message-Id: <20190827072740.727775456@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To:
<20190827072738.093683223@linuxfoundation.org>
References: <20190827072738.093683223@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

[ Upstream commit c77e22834ae9a11891cb613bd9a551be1b94f2bc ]

John Hubbard reports seeing the following stack trace:

  nfs4_do_reclaim
    rcu_read_lock /* we are now in_atomic() and must not sleep */
      nfs4_purge_state_owners
        nfs4_free_state_owner
          nfs4_destroy_seqid_counter
            rpc_destroy_wait_queue
              cancel_delayed_work_sync
                __cancel_work_timer
                  __flush_work
                    start_flush_work
                      might_sleep:
                        (kernel/workqueue.c:2975: BUG)

The solution is to separate out the freeing of the state owners
from nfs4_purge_state_owners(), and perform that outside the atomic
context.

Reported-by: John Hubbard
Fixes: 0aaaf5c424c7f ("NFS: Cache state owners after files are closed")
Signed-off-by: Trond Myklebust
Signed-off-by: Sasha Levin
---
 fs/nfs/nfs4_fs.h    |  3 ++-
 fs/nfs/nfs4client.c |  5 ++++-
 fs/nfs/nfs4state.c  | 27 ++++++++++++++++++++++-----
 3 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 8a38a254f5162..235919156eddd 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -465,7 +465,8 @@ static inline void nfs4_schedule_session_recovery(struct nfs4_session *session,
 
 extern struct nfs4_state_owner *nfs4_get_state_owner(struct nfs_server *, const struct cred *, gfp_t);
 extern void nfs4_put_state_owner(struct nfs4_state_owner *);
-extern void nfs4_purge_state_owners(struct nfs_server *);
+extern void nfs4_purge_state_owners(struct nfs_server *, struct list_head *);
+extern void nfs4_free_state_owners(struct list_head *head);
 extern struct nfs4_state * nfs4_get_open_state(struct inode *, struct nfs4_state_owner *);
 extern void nfs4_put_open_state(struct nfs4_state *);
 extern void nfs4_close_state(struct nfs4_state *,
		fmode_t);

diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
index 81b9b6d7927ac..208a236dc2350 100644
--- a/fs/nfs/nfs4client.c
+++ b/fs/nfs/nfs4client.c
@@ -758,9 +758,12 @@ out:
 
 static void nfs4_destroy_server(struct nfs_server *server)
 {
+	LIST_HEAD(freeme);
+
 	nfs_server_return_all_delegations(server);
 	unset_pnfs_layoutdriver(server);
-	nfs4_purge_state_owners(server);
+	nfs4_purge_state_owners(server, &freeme);
+	nfs4_free_state_owners(&freeme);
 }
 
 /*
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 556ec916846f0..261de26d897f7 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -624,24 +624,39 @@ void nfs4_put_state_owner(struct nfs4_state_owner *sp)
 /**
  * nfs4_purge_state_owners - Release all cached state owners
  * @server: nfs_server with cached state owners to release
+ * @head: resulting list of state owners
  *
  * Called at umount time. Remaining state owners will be on
  * the LRU with ref count of zero.
+ * Note that the state owners are not freed, but are added
+ * to the list @head, which can later be used as an argument
+ * to nfs4_free_state_owners.
  */
-void nfs4_purge_state_owners(struct nfs_server *server)
+void nfs4_purge_state_owners(struct nfs_server *server, struct list_head *head)
 {
 	struct nfs_client *clp = server->nfs_client;
 	struct nfs4_state_owner *sp, *tmp;
-	LIST_HEAD(doomed);
 
 	spin_lock(&clp->cl_lock);
 	list_for_each_entry_safe(sp, tmp, &server->state_owners_lru, so_lru) {
-		list_move(&sp->so_lru, &doomed);
+		list_move(&sp->so_lru, head);
 		nfs4_remove_state_owner_locked(sp);
 	}
 	spin_unlock(&clp->cl_lock);
+}
 
-	list_for_each_entry_safe(sp, tmp, &doomed, so_lru) {
+/**
+ * nfs4_free_state_owners - Release all cached state owners
+ * @head: resulting list of state owners
+ *
+ * Frees a list of state owners that was generated by
+ * nfs4_purge_state_owners
+ */
+void nfs4_free_state_owners(struct list_head *head)
+{
+	struct nfs4_state_owner *sp, *tmp;
+
+	list_for_each_entry_safe(sp, tmp, head, so_lru) {
 		list_del(&sp->so_lru);
 		nfs4_free_state_owner(sp);
 	}
@@ -1864,12 +1879,13 @@ static int nfs4_do_reclaim(struct nfs_client *clp, const struct nfs4_state_recov
 	struct nfs4_state_owner *sp;
 	struct nfs_server *server;
 	struct rb_node *pos;
+	LIST_HEAD(freeme);
 	int status = 0;
 
 restart:
 	rcu_read_lock();
 	list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
-		nfs4_purge_state_owners(server);
+		nfs4_purge_state_owners(server, &freeme);
 		spin_lock(&clp->cl_lock);
 		for (pos = rb_first(&server->state_owners);
 		     pos != NULL;
@@ -1898,6 +1914,7 @@ restart:
 		spin_unlock(&clp->cl_lock);
 	}
 	rcu_read_unlock();
+	nfs4_free_state_owners(&freeme);
 	return 0;
 }
-- 
2.20.1