From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
CC: akpm@linux-foundation.org, "Scott Mayhew", "Trond Myklebust"
Date: Thu, 07 Jun 2018 15:05:21 +0100
X-Mailer: LinuxStableQueue (scripts by bwh)
Subject: [PATCH 3.16 129/410] nfs/pnfs: fix nfs_direct_req ref leak when i/o falls back to the mds
3.16.57-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Scott Mayhew

commit ba4a76f703ab7eb72941fdaac848502073d6e9ee upstream.

Currently when falling back to doing I/O through the MDS (via
pnfs_{read|write}_through_mds), the client frees the nfs_pgio_header
without releasing the reference taken on the dreq via
pnfs_generic_pg_{read|write}pages -> nfs_pgheader_init ->
nfs_direct_pgio_init.  It then takes another reference on the dreq via
nfs_generic_pg_pgios -> nfs_pgheader_init -> nfs_direct_pgio_init, and
as a result the requester will become stuck in inode_dio_wait.  Once
that happens, other processes accessing the inode will become stuck as
well.

Ensure that pnfs_read_through_mds() and pnfs_write_through_mds() clean
up correctly by calling hdr->completion_ops->completion() instead of
calling hdr->release() directly.

This can be reproduced (sometimes) by performing "storage failover
takeover" commands on a NetApp filer while doing direct I/O from a
client.

This can also be reproduced using SystemTap to simulate a failure while
doing direct I/O from a client (from Dave Wysochanski):

stap -v -g -e 'probe module("nfs_layout_nfsv41_files").function("nfs4_fl_prepare_ds").return { $return=NULL; exit(); }'

Suggested-by: Trond Myklebust
Signed-off-by: Scott Mayhew
Fixes: 1ca018d28d ("pNFS: Fix a memory leak when attempted pnfs fails")
Signed-off-by: Trond Myklebust
Signed-off-by: Ben Hutchings
---
 fs/nfs/pnfs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -1557,7 +1557,7 @@ pnfs_write_through_mds(struct nfs_pageio
 		nfs_pageio_reset_write_mds(desc);
 		desc->pg_recoalesce = 1;
 	}
-	hdr->release(hdr);
+	hdr->completion_ops->completion(hdr);
 }
 
 static enum pnfs_try_status
@@ -1694,7 +1694,7 @@ pnfs_read_through_mds(struct nfs_pageio_
 		nfs_pageio_reset_read_mds(desc);
 		desc->pg_recoalesce = 1;
 	}
-	hdr->release(hdr);
+	hdr->completion_ops->completion(hdr);
 }
 
 /*
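
For anyone reviewing who is less familiar with the dreq lifecycle, below is a
minimal, self-contained C sketch of the reference-counting pattern the fix
restores.  It is an illustration only, not kernel code: demo_dreq, demo_header,
and the helper functions are invented stand-ins for nfs_direct_req,
nfs_pgio_header, nfs_direct_pgio_init() and the completion callback.  The point
it models is that release() frees only the header, so the reference taken at
init must be dropped through completion(); freeing the header directly leaks
the reference and the waiter never wakes up.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for nfs_direct_req and nfs_pgio_header. */
struct demo_dreq {
	int refcount;	/* the direct-I/O waiter blocks until this hits 0 */
};

struct demo_header {
	struct demo_dreq *dreq;
	void (*release)(struct demo_header *);		/* frees header only */
	void (*completion)(struct demo_header *);	/* drops dreq ref, then frees */
};

static void dreq_put(struct demo_dreq *dreq)
{
	if (--dreq->refcount == 0)
		printf("dreq released; inode_dio_wait() can return\n");
}

static void header_release(struct demo_header *hdr)
{
	free(hdr);		/* note: does NOT touch hdr->dreq */
}

static void header_completion(struct demo_header *hdr)
{
	dreq_put(hdr->dreq);	/* balance the reference taken at init */
	header_release(hdr);
}

/* Models nfs_pgheader_init() -> nfs_direct_pgio_init(): takes a dreq ref. */
static struct demo_header *header_init(struct demo_dreq *dreq)
{
	struct demo_header *hdr = malloc(sizeof(*hdr));
	hdr->dreq = dreq;
	hdr->release = header_release;
	hdr->completion = header_completion;
	dreq->refcount++;
	return hdr;
}

int main(void)
{
	struct demo_dreq dreq = { .refcount = 1 };	/* requester's own ref */

	/* pNFS attempt: the header takes a dreq reference at init time. */
	struct demo_header *hdr = header_init(&dreq);

	/* MDS fallback.  The buggy path called hdr->release(hdr), which
	 * freed the header but leaked the dreq reference taken above, so
	 * the refcount never reached zero.  The fixed path: */
	hdr->completion(hdr);

	dreq_put(&dreq);	/* requester drops its own ref; count hits 0 */
	return 0;
}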