From: ira.weiny@intel.com
To: Dan Williams, Jan Kara, "Theodore Ts'o", Jeff Layton, Dave Chinner
Cc: Ira Weiny, Matthew Wilcox, linux-xfs@vger.kernel.org, Andrew Morton,
 John Hubbard, Jérôme Glisse, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
 linux-ext4@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH RFC 09/10] fs/xfs: Fail truncate if pages are GUP pinned
Date: Wed, 5 Jun 2019 18:45:42 -0700
Message-Id: <20190606014544.8339-10-ira.weiny@intel.com>
In-Reply-To: <20190606014544.8339-1-ira.weiny@intel.com>
References: <20190606014544.8339-1-ira.weiny@intel.com>

From: Ira Weiny <ira.weiny@intel.com>

If pages are actively GUP pinned, fail the truncate operation.

To support an application that wishes to remove a pin upon SIGIO
reception, we must change the order in which layout leases are broken
with respect to DAX layouts.

Check for a GUP pin on the page being truncated and return -ETXTBSY if
it is pinned.

Change the order in which XFS breaks leased layouts and DAX layouts.

Select EXPORTFS_BLOCK_OPS for FS_DAX to ensure that
xfs_break_leased_layouts() is defined for FS_DAX as well as pNFS.

Update the comment for xfs_break_leased_layouts().

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/Kconfig        |  1 +
 fs/xfs/xfs_file.c |  8 ++++++--
 fs/xfs/xfs_pnfs.c | 14 +++++++-------
 3 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/fs/Kconfig b/fs/Kconfig
index f1046cf6ad85..c54b0b88abbf 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -49,6 +49,7 @@ config FS_DAX
 	select DEV_PAGEMAP_OPS if (ZONE_DEVICE && !FS_DAX_LIMITED)
 	select FS_IOMAP
 	select DAX
+	select EXPORTFS_BLOCK_OPS
 	help
 	  Direct Access (DAX) can be used on memory-backed block devices. If
 	  the block device supports DAX and the filesystem supports DAX,

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 350eb5546d36..1dc61c98f7cd 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -756,6 +756,9 @@ xfs_break_dax_layouts(
 	if (!page)
 		return 0;
 
+	if (page_gup_pinned(page))
+		return -ETXTBSY;
+
 	*retry = true;
 	return ___wait_var_event(&page->_refcount,
 			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
@@ -779,10 +782,11 @@ xfs_break_layouts(
 		retry = false;
 		switch (reason) {
 		case BREAK_UNMAP:
-			error = xfs_break_dax_layouts(inode, &retry, off, len);
+			error = xfs_break_leased_layouts(inode, iolock, &retry);
 			if (error || retry)
 				break;
-			/* fall through */
+			error = xfs_break_dax_layouts(inode, &retry, off, len);
+			break;
 		case BREAK_WRITE:
 			error = xfs_break_leased_layouts(inode, iolock, &retry);
 			break;

diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c
index bde2c9f56a46..e70d24d12cbf 100644
--- a/fs/xfs/xfs_pnfs.c
+++ b/fs/xfs/xfs_pnfs.c
@@ -21,14 +21,14 @@
 #include "xfs_pnfs.h"
 
 /*
- * Ensure that we do not have any outstanding pNFS layouts that can be used by
- * clients to directly read from or write to this inode. This must be called
- * before every operation that can remove blocks from the extent map.
- * Additionally we call it during the write operation, where aren't concerned
- * about exposing unallocated blocks but just want to provide basic
+ * Ensure that we do not have any outstanding pNFS or longterm GUP layouts that
+ * can be used by clients to directly read from or write to this inode. This
+ * must be called before every operation that can remove blocks from the extent
+ * map. Additionally we call it during the write operation, where we aren't
+ * concerned about exposing unallocated blocks but just want to provide basic
  * synchronization between a local writer and pNFS clients. mmap writes would
- * also benefit from this sort of synchronization, but due to the tricky locking
- * rules in the page fault path we don't bother.
+ * also benefit from this sort of synchronization, but due to the tricky
+ * locking rules in the page fault path we don't bother.
  */
 int
 xfs_break_leased_layouts(
-- 
2.20.1
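
Note on page_gup_pinned(): it is not defined by this patch; it is
introduced earlier in the series. As a rough, hypothetical sketch only
(not the series' actual helper), a check of this shape, modeled on the
refcount-bias idea that later landed upstream as
page_maybe_dma_pinned(), could look like:

/*
 * Illustrative sketch only -- not the helper added by this series.
 * Assumes a pin-counting scheme in which GUP pins raise _refcount by a
 * large bias (mainline later used GUP_PIN_COUNTING_BIAS for
 * page_maybe_dma_pinned()); the names below are hypothetical.
 */
#include <linux/mm.h>

#define GUP_PIN_BIAS_SKETCH	1024

static inline bool page_gup_pinned_sketch(struct page *page)
{
	/* Treat an elevated refcount as "probably GUP pinned". */
	return page_ref_count(compound_head(page)) >= GUP_PIN_BIAS_SKETCH;
}

Whatever its exact definition, the caller in xfs_break_dax_layouts()
above only needs a boolean answer: if the page is pinned, it returns
-ETXTBSY instead of blocking in ___wait_var_event().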