2021-07-12 16:58:28

by Jan Kara

Subject: [PATCH 0/14 v9] fs: Hole punch vs page cache filling races

Hello,

here is another version of my patches to address races between hole punching
and page cache filling functions for ext4 and other filesystems. The only
significant change since last time is in patch 3/14, where, as Linus requested,
we no longer acquire invalidate_lock when the page is already present and
uptodate in the page cache. Darrick, Christoph, can you please review that change?

Out of all filesystems supporting hole punching, only GFS2 and OCFS2 remain
unresolved. The GFS2 people are working on their own solution (cluster locking
is involved); OCFS2 has even bigger issues (maintainers are informed and
looking into it).

Once this series lands, I'd like to make sure all calls to
truncate_inode_pages() happen under mapping->invalidate_lock and add an
assertion to that effect; then we can also get rid of i_size checks in some
places (truncate can use the same serialization scheme as hole punch). But that
step is mostly a cleanup so I'd like to get these functional fixes in first.
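
To illustrate, the assertion could look roughly like this sketch (illustrative
only, not part of the series; the WARN_ON_ONCE placement is just an idea of the
eventual cleanup):

void truncate_inode_pages(struct address_space *mapping, loff_t lstart)
{
	/* Once every caller holds invalidate_lock, truncation can assert it. */
	WARN_ON_ONCE(!rwsem_is_locked(&mapping->invalidate_lock));
	truncate_inode_pages_range(mapping, lstart, (loff_t)-1);
}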

The series can be also pulled from:
git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs.git hole_punch_fixes

Changes since v8:
* Rebased on top of 5.14-rc1
* Fixed up conflict in f2fs
* Modified filemap_fault() to acquire the invalidate lock only when creating a
page in the page cache or loading it from disk
* Added some reviewed-by's

Changes since v7:
* Rebased on top of 5.13-rc6
* Added some reviewed-by tags
* Simplified xfs_isilocked() changes as Dave Chinner suggested
* Minor documentation formulation improvements

Changes since v6:
* Added some reviewed-by tags
* Added wrapper for taking invalidate_lock similar to inode_lock
* Renamed wrappers for taking invalidate_lock for two inodes
* Added xfs patch to make xfs_isilocked() work better even without lockdep
* Some minor documentation fixes

Changes since v5:
* Added some reviewed-by tags
* Added functions for locking two mappings and using them from XFS where needed
* Some minor code style & comment fixes

Changes since v4:
* Rebased onto 5.13-rc1
* Removed shmfs conversion patches
* Fixed up zonefs changelog
* Fixed up XFS comments
* Added patch fixing up definition of file_operations in Documentation/vfs/
* Updated documentation and comments to explain invalidate_lock is used also
to prevent changes through memory mappings to existing pages for some VFS
operations.

Changes since v3:
* Renamed and moved lock to struct address_space
* Added conversions of tmpfs, ceph, cifs, fuse, f2fs
* Fixed error handling path in filemap_read()
* Removed .page_mkwrite() cleanup from the series for now

Changes since v2:
* Added documentation and comments regarding lock ordering and how the lock is
supposed to be used
* Added conversions of ext2, xfs, zonefs
* Added patch removing i_mapping_sem protection from .page_mkwrite handlers

Changes since v1:
* Moved to using inode->i_mapping_sem instead of aops handler to acquire
appropriate lock

---
Motivation:

Amir has reported [1] that ext4 has a potential issue where reads can race
with hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout. The
problem is that during hole punching, new page cache pages can get
instantiated, and block mappings looked up, in the punched range after
truncate_inode_pages() has run but before the filesystem removes blocks from
the file. In principle, any filesystem implementing hole punching thus needs a
mechanism to block instantiation of page cache pages during hole punching to
avoid this race. This is further complicated by the fact that there are
multiple places that can instantiate pages in the page cache: besides regular
read(2) or page fault, fadvise(2) or madvise(2) can also result in reading in
page cache pages through force_page_cache_readahead().

There are a couple of ways to fix this. The first (currently implemented by
XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they are
serialized with hole punching. This is easy to do but, as a result, all reads
would be serialized with writes, and mixed read-write workloads would suffer
heavily on ext4. Thus this series introduces mapping->invalidate_lock (named
inode->i_mapping_sem in earlier revisions) and takes it when creating new pages
in the page cache and looking up their corresponding block mapping. We also
replace EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides the
necessary serialization with hole punching for ext4.

Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/[email protected]/
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Link: http://lore.kernel.org/r/[email protected] # v8


2021-07-12 16:58:35

by Jan Kara

Subject: [PATCH 14/14] cifs: Fix race between hole punch and page fault

Cifs has the following race between hole punching and page fault:

CPU1                                            CPU2
smb3_fallocate()
  smb3_punch_hole()
    truncate_pagecache_range()
                                                filemap_fault()
                                                  - loads old data into the
                                                    page cache
    SMB2_ioctl(..., FSCTL_SET_ZERO_DATA, ...)

And now we have stale data in the page cache. Fix the problem by locking
out faults (as well as reads) using mapping->invalidate_lock while hole
punch is running.

CC: Steve French <[email protected]>
CC: [email protected]
Signed-off-by: Jan Kara <[email protected]>
---
fs/cifs/smb2ops.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index e4c8f603dd58..458c546ce8cd 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -3588,6 +3588,7 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
 		return rc;
 	}
 
+	filemap_invalidate_lock(inode->i_mapping);
 	/*
 	 * We implement the punch hole through ioctl, so we need remove the page
 	 * caches first, otherwise the data may be inconsistent with the server.
@@ -3605,6 +3606,7 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
 			sizeof(struct file_zero_data_information),
 			CIFSMaxBufSize, NULL, NULL);
 	free_xid(xid);
+	filemap_invalidate_unlock(inode->i_mapping);
 	return rc;
 }

--
2.26.2

2021-07-12 16:58:37

by Jan Kara

Subject: [PATCH 03/14] mm: Protect operations adding pages to page cache with invalidate_lock

Currently, serializing operations such as page fault, read, or readahead
against hole punching is rather difficult. The basic race scheme is
like:

fallocate(FALLOC_FL_PUNCH_HOLE)                  read / fault / ..
  truncate_inode_pages_range()
                                                 <create pages in page
                                                  cache here>
  <update fs block mapping and free blocks>

Now the problem is in this way read / page fault / readahead can
instantiate pages in page cache with potentially stale data (if blocks
get quickly reused). Avoiding this race is not simple - page locks do
not work because we want to make sure there are *no* pages in given
range. inode->i_rwsem does not work because page fault happens under
mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
the performance for mixed read-write workloads suffer.

So create a new rw_semaphore in the address_space - invalidate_lock -
that protects adding of pages to page cache for page faults / reads /
readahead.

Reviewed-by: Darrick J. Wong <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
---
Documentation/filesystems/locking.rst | 62 +++++++++++++------
fs/inode.c | 2 +
include/linux/fs.h | 33 ++++++++++
mm/filemap.c | 88 +++++++++++++++++++++++----
mm/readahead.c | 2 +
mm/rmap.c | 37 +++++------
mm/truncate.c | 3 +-
7 files changed, 176 insertions(+), 51 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index cdf15492c699..38a3097b6f1c 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -271,19 +271,19 @@ prototypes::
locking rules:
All except set_page_dirty and freepage may block

-====================== ======================== =========
-ops                    PageLocked(page)         i_rwsem
-====================== ======================== =========
+====================== ======================== ========= ===============
+ops                    PageLocked(page)         i_rwsem   invalidate_lock
+====================== ======================== ========= ===============
 writepage:             yes, unlocks (see below)
-readpage:              yes, unlocks
+readpage:              yes, unlocks                       shared
 writepages:
 set_page_dirty         no
-readahead:             yes, unlocks
-readpages:             no
+readahead:             yes, unlocks                       shared
+readpages:             no                                 shared
 write_begin:           locks the page           exclusive
 write_end:             yes, unlocks             exclusive
 bmap:
-invalidatepage:        yes
+invalidatepage:        yes                                exclusive
 releasepage:           yes
 freepage:              yes
 direct_IO:
@@ -378,7 +378,10 @@ keep it that way and don't breed new callers.
->invalidatepage() is called when the filesystem must attempt to drop
some or all of the buffers from the page when it is being truncated. It
returns zero on success. If ->invalidatepage is zero, the kernel uses
-block_invalidatepage() instead.
+block_invalidatepage() instead. The filesystem must exclusively acquire
+invalidate_lock before invalidating page cache in truncate / hole punch path
+(and thus calling into ->invalidatepage) to block races between page cache
+invalidation and page cache filling functions (fault, read, ...).

->releasepage() is called when the kernel is about to try to drop the
buffers from the page in preparation for freeing it. It returns zero to
@@ -573,6 +576,25 @@ in sys_read() and friends.
the lease within the individual filesystem to record the result of the
operation

+->fallocate implementation must be really careful to maintain page cache
+consistency when punching holes or performing other operations that invalidate
+page cache contents. Usually the filesystem needs to call
+truncate_inode_pages_range() to invalidate relevant range of the page cache.
+However the filesystem usually also needs to update its internal (and on disk)
+view of file offset -> disk block mapping. Until this update is finished, the
+filesystem needs to block page faults and reads from reloading now-stale page
+cache contents from the disk. Since VFS acquires mapping->invalidate_lock in
+shared mode when loading pages from disk (filemap_fault(), filemap_read(),
+readahead paths), the fallocate implementation must take the invalidate_lock to
+prevent reloading.
+
+->copy_file_range and ->remap_file_range implementations need to serialize
+against modifications of file data while the operation is running. For
+blocking changes through write(2) and similar operations inode->i_rwsem can be
+used. To block changes to file contents via a memory mapping during the
+operation, the filesystem must take mapping->invalidate_lock to coordinate
+with ->page_mkwrite.
+
dquot_operations
================

@@ -630,11 +652,11 @@ pfn_mkwrite: yes
access: yes
============= ========= ===========================

-->fault() is called when a previously not present pte is about
-to be faulted in. The filesystem must find and return the page associated
-with the passed in "pgoff" in the vm_fault structure. If it is possible that
-the page may be truncated and/or invalidated, then the filesystem must lock
-the page, then ensure it is not already truncated (the page lock will block
+->fault() is called when a previously not present pte is about to be faulted
+in. The filesystem must find and return the page associated with the passed in
+"pgoff" in the vm_fault structure. If it is possible that the page may be
+truncated and/or invalidated, then the filesystem must lock invalidate_lock,
+then ensure the page is not already truncated (invalidate_lock will block
subsequent truncate), and then return with VM_FAULT_LOCKED, and the page
locked. The VM will unlock the page.

@@ -647,12 +669,14 @@ page table entry. Pointer to entry associated with the page is passed in
"pte" field in vm_fault structure. Pointers to entries for other offsets
should be calculated relative to "pte".

-->page_mkwrite() is called when a previously read-only pte is
-about to become writeable. The filesystem again must ensure that there are
-no truncate/invalidate races, and then return with the page locked. If
-the page has been truncated, the filesystem should not look up a new page
-like the ->fault() handler, but simply return with VM_FAULT_NOPAGE, which
-will cause the VM to retry the fault.
+->page_mkwrite() is called when a previously read-only pte is about to become
+writeable. The filesystem again must ensure that there are no
+truncate/invalidate races or races with operations such as ->remap_file_range
+or ->copy_file_range, and then return with the page locked. Usually
+mapping->invalidate_lock is suitable for proper serialization. If the page has
+been truncated, the filesystem should not look up a new page like the ->fault()
+handler, but simply return with VM_FAULT_NOPAGE, which will cause the VM to
+retry the fault.

->pfn_mkwrite() is the same as page_mkwrite but when the pte is
VM_PFNMAP or VM_MIXEDMAP with a page-less entry. Expected return is
diff --git a/fs/inode.c b/fs/inode.c
index c93500d84264..84c528cd1955 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -190,6 +190,8 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
mapping->private_data = NULL;
mapping->writeback_index = 0;
+ __init_rwsem(&mapping->invalidate_lock, "mapping.invalidate_lock",
+ &sb->s_type->invalidate_lock_key);
inode->i_private = NULL;
inode->i_mapping = mapping;
INIT_HLIST_HEAD(&inode->i_dentry); /* buggered by rcu freeing */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 640574294216..90a80de37ad4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -436,6 +436,10 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
* struct address_space - Contents of a cacheable, mappable object.
* @host: Owner, either the inode or the block_device.
* @i_pages: Cached pages.
+ * @invalidate_lock: Guards coherency between page cache contents and
+ * file offset->disk block mappings in the filesystem during invalidates.
+ * It is also used to block modification of page cache contents through
+ * memory mappings.
* @gfp_mask: Memory allocation flags to use for allocating pages.
* @i_mmap_writable: Number of VM_SHARED mappings.
* @nr_thps: Number of THPs in the pagecache (non-shmem only).
@@ -453,6 +457,7 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
struct address_space {
struct inode *host;
struct xarray i_pages;
+ struct rw_semaphore invalidate_lock;
gfp_t gfp_mask;
atomic_t i_mmap_writable;
#ifdef CONFIG_READ_ONLY_THP_FOR_FS
@@ -814,6 +819,33 @@ static inline void inode_lock_shared_nested(struct inode *inode, unsigned subcla
 	down_read_nested(&inode->i_rwsem, subclass);
 }
 
+static inline void filemap_invalidate_lock(struct address_space *mapping)
+{
+	down_write(&mapping->invalidate_lock);
+}
+
+static inline void filemap_invalidate_unlock(struct address_space *mapping)
+{
+	up_write(&mapping->invalidate_lock);
+}
+
+static inline void filemap_invalidate_lock_shared(struct address_space *mapping)
+{
+	down_read(&mapping->invalidate_lock);
+}
+
+static inline int filemap_invalidate_trylock_shared(
+					struct address_space *mapping)
+{
+	return down_read_trylock(&mapping->invalidate_lock);
+}
+
+static inline void filemap_invalidate_unlock_shared(
+					struct address_space *mapping)
+{
+	up_read(&mapping->invalidate_lock);
+}
+
 void lock_two_nondirectories(struct inode *, struct inode*);
 void unlock_two_nondirectories(struct inode *, struct inode*);

@@ -2487,6 +2519,7 @@ struct file_system_type {

struct lock_class_key i_lock_key;
struct lock_class_key i_mutex_key;
+ struct lock_class_key invalidate_lock_key;
struct lock_class_key i_mutex_dir_key;
};

diff --git a/mm/filemap.c b/mm/filemap.c
index acf20eca2fa4..56988c4e655d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -77,7 +77,8 @@
* ->i_pages lock
*
* ->i_rwsem
- * ->i_mmap_rwsem (truncate->unmap_mapping_range)
+ * ->invalidate_lock (acquired by fs in truncate path)
+ * ->i_mmap_rwsem (truncate->unmap_mapping_range)
*
* ->mmap_lock
* ->i_mmap_rwsem
@@ -85,7 +86,8 @@
* ->i_pages lock (arch-dependent flush_dcache_mmap_lock)
*
* ->mmap_lock
- * ->lock_page (access_process_vm)
+ * ->invalidate_lock (filemap_fault)
+ * ->lock_page (filemap_fault, access_process_vm)
*
* ->i_rwsem (generic_perform_write)
* ->mmap_lock (fault_in_pages_readable->do_page_fault)
@@ -2368,20 +2370,30 @@ static int filemap_update_page(struct kiocb *iocb,
{
int error;

+ if (iocb->ki_flags & IOCB_NOWAIT) {
+ if (!filemap_invalidate_trylock_shared(mapping))
+ return -EAGAIN;
+ } else {
+ filemap_invalidate_lock_shared(mapping);
+ }
+
if (!trylock_page(page)) {
+ error = -EAGAIN;
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO))
- return -EAGAIN;
+ goto unlock_mapping;
if (!(iocb->ki_flags & IOCB_WAITQ)) {
+ filemap_invalidate_unlock_shared(mapping);
put_and_wait_on_page_locked(page, TASK_KILLABLE);
return AOP_TRUNCATED_PAGE;
}
error = __lock_page_async(page, iocb->ki_waitq);
if (error)
- return error;
+ goto unlock_mapping;
}

+ error = AOP_TRUNCATED_PAGE;
if (!page->mapping)
- goto truncated;
+ goto unlock;

error = 0;
if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, page))
@@ -2392,15 +2404,13 @@ static int filemap_update_page(struct kiocb *iocb,
goto unlock;

error = filemap_read_page(iocb->ki_filp, mapping, page);
- if (error == AOP_TRUNCATED_PAGE)
- put_page(page);
- return error;
-truncated:
- unlock_page(page);
- put_page(page);
- return AOP_TRUNCATED_PAGE;
+ goto unlock_mapping;
unlock:
unlock_page(page);
+unlock_mapping:
+ filemap_invalidate_unlock_shared(mapping);
+ if (error == AOP_TRUNCATED_PAGE)
+ put_page(page);
return error;
}

@@ -2415,6 +2425,19 @@ static int filemap_create_page(struct file *file,
if (!page)
return -ENOMEM;

+ /*
+ * Protect against truncate / hole punch. Grabbing invalidate_lock here
+ * assures we cannot instantiate and bring uptodate new pagecache pages
+ * after evicting page cache during truncate and before actually
+ * freeing blocks. Note that we could release invalidate_lock after
+ * inserting the page into page cache as the locked page would then be
+ * enough to synchronize with hole punching. But there are code paths
+ * such as filemap_update_page() filling in partially uptodate pages or
+ * ->readpages() that need to hold invalidate_lock while mapping blocks
+ * for IO so let's hold the lock here as well to keep locking rules
+ * simple.
+ */
+ filemap_invalidate_lock_shared(mapping);
error = add_to_page_cache_lru(page, mapping, index,
mapping_gfp_constraint(mapping, GFP_KERNEL));
if (error == -EEXIST)
@@ -2426,9 +2449,11 @@ static int filemap_create_page(struct file *file,
if (error)
goto error;

+ filemap_invalidate_unlock_shared(mapping);
pagevec_add(pvec, page);
return 0;
error:
+ filemap_invalidate_unlock_shared(mapping);
put_page(page);
return error;
}
@@ -2967,6 +2992,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
pgoff_t max_off;
struct page *page;
vm_fault_t ret = 0;
+ bool mapping_locked = false;

max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
if (unlikely(offset >= max_off))
@@ -2988,15 +3014,30 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
ret = VM_FAULT_MAJOR;
fpin = do_sync_mmap_readahead(vmf);
+ }
+
+ if (!page) {
retry_find:
+ /*
+ * See comment in filemap_create_page() why we need
+ * invalidate_lock
+ */
+ if (!mapping_locked) {
+ filemap_invalidate_lock_shared(mapping);
+ mapping_locked = true;
+ }
page = pagecache_get_page(mapping, offset,
FGP_CREAT|FGP_FOR_MMAP,
vmf->gfp_mask);
if (!page) {
if (fpin)
goto out_retry;
+ filemap_invalidate_unlock_shared(mapping);
return VM_FAULT_OOM;
}
+ } else if (unlikely(!PageUptodate(page))) {
+ filemap_invalidate_lock_shared(mapping);
+ mapping_locked = true;
}

if (!lock_page_maybe_drop_mmap(vmf, page, &fpin))
@@ -3014,8 +3055,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
* We have a locked page in the page cache, now we need to check
* that it's up-to-date. If not, it is going to be due to an error.
*/
- if (unlikely(!PageUptodate(page)))
+ if (unlikely(!PageUptodate(page))) {
+ /*
+ * The page was in cache and uptodate and now it is not.
+ * Strange but possible since we didn't hold the page lock all
+ * the time. Let's drop everything get the invalidate lock and
+ * try again.
+ */
+ if (!mapping_locked) {
+ unlock_page(page);
+ put_page(page);
+ goto retry_find;
+ }
goto page_not_uptodate;
+ }

/*
* We've made it this far and we had to drop our mmap_lock, now is the
@@ -3026,6 +3079,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
unlock_page(page);
goto out_retry;
}
+ if (mapping_locked)
+ filemap_invalidate_unlock_shared(mapping);

/*
* Found the page and have a reference on it.
@@ -3056,6 +3111,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)

if (!error || error == AOP_TRUNCATED_PAGE)
goto retry_find;
+ filemap_invalidate_unlock_shared(mapping);

return VM_FAULT_SIGBUS;

@@ -3067,6 +3123,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
*/
if (page)
put_page(page);
+ if (mapping_locked)
+ filemap_invalidate_unlock_shared(mapping);
if (fpin)
fput(fpin);
return ret | VM_FAULT_RETRY;
@@ -3437,6 +3495,8 @@ static struct page *do_read_cache_page(struct address_space *mapping,
*
* If the page does not get brought uptodate, return -EIO.
*
+ * The function expects mapping->invalidate_lock to be already held.
+ *
* Return: up to date page on success, ERR_PTR() on failure.
*/
struct page *read_cache_page(struct address_space *mapping,
@@ -3460,6 +3520,8 @@ EXPORT_SYMBOL(read_cache_page);
*
* If the page does not get brought uptodate, return -EIO.
*
+ * The function expects mapping->invalidate_lock to be already held.
+ *
* Return: up to date page on success, ERR_PTR() on failure.
*/
struct page *read_cache_page_gfp(struct address_space *mapping,
diff --git a/mm/readahead.c b/mm/readahead.c
index d589f147f4c2..41b75d76d36e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -192,6 +192,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
*/
unsigned int nofs = memalloc_nofs_save();

+ filemap_invalidate_lock_shared(mapping);
/*
* Preallocate as many pages as we will need.
*/
@@ -236,6 +237,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
* will then handle the error.
*/
read_pages(ractl, &page_pool, false);
+ filemap_invalidate_unlock_shared(mapping);
memalloc_nofs_restore(nofs);
}
EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
diff --git a/mm/rmap.c b/mm/rmap.c
index a8b01929ab2e..86471aacc54a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -22,24 +22,25 @@
  *
  * inode->i_rwsem (while writing or truncating, not reading or faulting)
  *   mm->mmap_lock
- *     page->flags PG_locked (lock_page)   * (see hugetlbfs below)
- *       hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
- *         mapping->i_mmap_rwsem
- *           hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
- *           anon_vma->rwsem
- *             mm->page_table_lock or pte_lock
- *               swap_lock (in swap_duplicate, swap_info_get)
- *                 mmlist_lock (in mmput, drain_mmlist and others)
- *                 mapping->private_lock (in __set_page_dirty_buffers)
- *                   lock_page_memcg move_lock (in __set_page_dirty_buffers)
- *                     i_pages lock (widely used)
- *                       lruvec->lru_lock (in lock_page_lruvec_irq)
- *                 inode->i_lock (in set_page_dirty's __mark_inode_dirty)
- *                 bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
- *                   sb_lock (within inode_lock in fs/fs-writeback.c)
- *                   i_pages lock (widely used, in set_page_dirty,
- *                     in arch-dependent flush_dcache_mmap_lock,
- *                     within bdi.wb->list_lock in __sync_single_inode)
+ *     mapping->invalidate_lock (in filemap_fault)
+ *       page->flags PG_locked (lock_page)   * (see hugetlbfs below)
+ *         hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
+ *           mapping->i_mmap_rwsem
+ *             hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
+ *             anon_vma->rwsem
+ *               mm->page_table_lock or pte_lock
+ *                 swap_lock (in swap_duplicate, swap_info_get)
+ *                   mmlist_lock (in mmput, drain_mmlist and others)
+ *                   mapping->private_lock (in __set_page_dirty_buffers)
+ *                     lock_page_memcg move_lock (in __set_page_dirty_buffers)
+ *                       i_pages lock (widely used)
+ *                         lruvec->lru_lock (in lock_page_lruvec_irq)
+ *                   inode->i_lock (in set_page_dirty's __mark_inode_dirty)
+ *                   bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
+ *                     sb_lock (within inode_lock in fs/fs-writeback.c)
+ *                     i_pages lock (widely used, in set_page_dirty,
+ *                       in arch-dependent flush_dcache_mmap_lock,
+ *                       within bdi.wb->list_lock in __sync_single_inode)
  *
  * anon_vma->rwsem,mapping->i_mmap_rwsem (memory_failure, collect_procs_anon)
  *   ->tasklist_lock
diff --git a/mm/truncate.c b/mm/truncate.c
index 0f9becee9789..44ad5e515140 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -412,7 +412,8 @@ EXPORT_SYMBOL(truncate_inode_pages_range);
* @mapping: mapping to truncate
* @lstart: offset from which to truncate
*
- * Called under (and serialised by) inode->i_rwsem.
+ * Called under (and serialised by) inode->i_rwsem and
+ * mapping->invalidate_lock.
*
* Note: When this function returns, there can be a page in the process of
* deletion (inside __delete_from_page_cache()) in the specified range. Thus
--
2.26.2

2021-07-13 01:25:46

by Darrick J. Wong

Subject: Re: [PATCH 03/14] mm: Protect operations adding pages to page cache with invalidate_lock

On Mon, Jul 12, 2021 at 06:55:54PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
>
> fallocate(FALLOC_FL_PUNCH_HOLE)                  read / fault / ..
>   truncate_inode_pages_range()
>                                                  <create pages in page
>                                                   cache here>
>   <update fs block mapping and free blocks>
>
> Now the problem is in this way read / page fault / readahead can
> instantiate pages in page cache with potentially stale data (if blocks
> get quickly reused). Avoiding this race is not simple - page locks do
> not work because we want to make sure there are *no* pages in given
> range. inode->i_rwsem does not work because page fault happens under
> mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> the performance for mixed read-write workloads suffer.
>
> So create a new rw_semaphore in the address_space - invalidate_lock -
> that protects adding of pages to page cache for page faults / reads /
> readahead.
>
> Reviewed-by: Darrick J. Wong <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Jan Kara <[email protected]>
> ---
> Documentation/filesystems/locking.rst | 62 +++++++++++++------
> fs/inode.c | 2 +
> include/linux/fs.h | 33 ++++++++++
> mm/filemap.c | 88 +++++++++++++++++++++++----
> mm/readahead.c | 2 +
> mm/rmap.c | 37 +++++------
> mm/truncate.c | 3 +-
> 7 files changed, 176 insertions(+), 51 deletions(-)
>
> diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
> index cdf15492c699..38a3097b6f1c 100644
> --- a/Documentation/filesystems/locking.rst
> +++ b/Documentation/filesystems/locking.rst
> @@ -271,19 +271,19 @@ prototypes::
> locking rules:
> All except set_page_dirty and freepage may block
>
> -====================== ======================== =========
> -ops PageLocked(page) i_rwsem
> -====================== ======================== =========
> +====================== ======================== ========= ===============
> +ops PageLocked(page) i_rwsem invalidate_lock
> +====================== ======================== ========= ===============
> writepage: yes, unlocks (see below)
> -readpage: yes, unlocks
> +readpage: yes, unlocks shared
> writepages:
> set_page_dirty no
> -readahead: yes, unlocks
> -readpages: no
> +readahead: yes, unlocks shared
> +readpages: no shared
> write_begin: locks the page exclusive
> write_end: yes, unlocks exclusive
> bmap:
> -invalidatepage: yes
> +invalidatepage: yes exclusive
> releasepage: yes
> freepage: yes
> direct_IO:
> @@ -378,7 +378,10 @@ keep it that way and don't breed new callers.
> ->invalidatepage() is called when the filesystem must attempt to drop
> some or all of the buffers from the page when it is being truncated. It
> returns zero on success. If ->invalidatepage is zero, the kernel uses
> -block_invalidatepage() instead.
> +block_invalidatepage() instead. The filesystem must exclusively acquire
> +invalidate_lock before invalidating page cache in truncate / hole punch path
> +(and thus calling into ->invalidatepage) to block races between page cache
> +invalidation and page cache filling functions (fault, read, ...).
>
> ->releasepage() is called when the kernel is about to try to drop the
> buffers from the page in preparation for freeing it. It returns zero to
> @@ -573,6 +576,25 @@ in sys_read() and friends.
> the lease within the individual filesystem to record the result of the
> operation
>
> +->fallocate implementation must be really careful to maintain page cache
> +consistency when punching holes or performing other operations that invalidate
> +page cache contents. Usually the filesystem needs to call
> +truncate_inode_pages_range() to invalidate relevant range of the page cache.
> +However the filesystem usually also needs to update its internal (and on disk)
> +view of file offset -> disk block mapping. Until this update is finished, the
> +filesystem needs to block page faults and reads from reloading now-stale page
> +cache contents from the disk. Since VFS acquires mapping->invalidate_lock in
> +shared mode when loading pages from disk (filemap_fault(), filemap_read(),
> +readahead paths), the fallocate implementation must take the invalidate_lock to
> +prevent reloading.
> +
> +->copy_file_range and ->remap_file_range implementations need to serialize
> +against modifications of file data while the operation is running. For
> +blocking changes through write(2) and similar operations inode->i_rwsem can be
> +used. To block changes to file contents via a memory mapping during the
> +operation, the filesystem must take mapping->invalidate_lock to coordinate
> +with ->page_mkwrite.
> +
> dquot_operations
> ================
>
> @@ -630,11 +652,11 @@ pfn_mkwrite: yes
> access: yes
> ============= ========= ===========================
>
> -->fault() is called when a previously not present pte is about
> -to be faulted in. The filesystem must find and return the page associated
> -with the passed in "pgoff" in the vm_fault structure. If it is possible that
> -the page may be truncated and/or invalidated, then the filesystem must lock
> -the page, then ensure it is not already truncated (the page lock will block
> +->fault() is called when a previously not present pte is about to be faulted
> +in. The filesystem must find and return the page associated with the passed in
> +"pgoff" in the vm_fault structure. If it is possible that the page may be
> +truncated and/or invalidated, then the filesystem must lock invalidate_lock,
> +then ensure the page is not already truncated (invalidate_lock will block
> subsequent truncate), and then return with VM_FAULT_LOCKED, and the page
> locked. The VM will unlock the page.
>
> @@ -647,12 +669,14 @@ page table entry. Pointer to entry associated with the page is passed in
> "pte" field in vm_fault structure. Pointers to entries for other offsets
> should be calculated relative to "pte".
>
> -->page_mkwrite() is called when a previously read-only pte is
> -about to become writeable. The filesystem again must ensure that there are
> -no truncate/invalidate races, and then return with the page locked. If
> -the page has been truncated, the filesystem should not look up a new page
> -like the ->fault() handler, but simply return with VM_FAULT_NOPAGE, which
> -will cause the VM to retry the fault.
> +->page_mkwrite() is called when a previously read-only pte is about to become
> +writeable. The filesystem again must ensure that there are no
> +truncate/invalidate races or races with operations such as ->remap_file_range
> +or ->copy_file_range, and then return with the page locked. Usually
> +mapping->invalidate_lock is suitable for proper serialization. If the page has
> +been truncated, the filesystem should not look up a new page like the ->fault()
> +handler, but simply return with VM_FAULT_NOPAGE, which will cause the VM to
> +retry the fault.
>
> ->pfn_mkwrite() is the same as page_mkwrite but when the pte is
> VM_PFNMAP or VM_MIXEDMAP with a page-less entry. Expected return is
> diff --git a/fs/inode.c b/fs/inode.c
> index c93500d84264..84c528cd1955 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -190,6 +190,8 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
> mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
> mapping->private_data = NULL;
> mapping->writeback_index = 0;
> + __init_rwsem(&mapping->invalidate_lock, "mapping.invalidate_lock",
> + &sb->s_type->invalidate_lock_key);
> inode->i_private = NULL;
> inode->i_mapping = mapping;
> INIT_HLIST_HEAD(&inode->i_dentry); /* buggered by rcu freeing */
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 640574294216..90a80de37ad4 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -436,6 +436,10 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
> * struct address_space - Contents of a cacheable, mappable object.
> * @host: Owner, either the inode or the block_device.
> * @i_pages: Cached pages.
> + * @invalidate_lock: Guards coherency between page cache contents and
> + * file offset->disk block mappings in the filesystem during invalidates.
> + * It is also used to block modification of page cache contents through
> + * memory mappings.
> * @gfp_mask: Memory allocation flags to use for allocating pages.
> * @i_mmap_writable: Number of VM_SHARED mappings.
> * @nr_thps: Number of THPs in the pagecache (non-shmem only).
> @@ -453,6 +457,7 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
> struct address_space {
> struct inode *host;
> struct xarray i_pages;
> + struct rw_semaphore invalidate_lock;
> gfp_t gfp_mask;
> atomic_t i_mmap_writable;
> #ifdef CONFIG_READ_ONLY_THP_FOR_FS
> @@ -814,6 +819,33 @@ static inline void inode_lock_shared_nested(struct inode *inode, unsigned subcla
> down_read_nested(&inode->i_rwsem, subclass);
> }
>
> +static inline void filemap_invalidate_lock(struct address_space *mapping)
> +{
> + down_write(&mapping->invalidate_lock);
> +}
> +
> +static inline void filemap_invalidate_unlock(struct address_space *mapping)
> +{
> + up_write(&mapping->invalidate_lock);
> +}
> +
> +static inline void filemap_invalidate_lock_shared(struct address_space *mapping)
> +{
> + down_read(&mapping->invalidate_lock);
> +}
> +
> +static inline int filemap_invalidate_trylock_shared(
> + struct address_space *mapping)
> +{
> + return down_read_trylock(&mapping->invalidate_lock);
> +}
> +
> +static inline void filemap_invalidate_unlock_shared(
> + struct address_space *mapping)
> +{
> + up_read(&mapping->invalidate_lock);
> +}
> +
> void lock_two_nondirectories(struct inode *, struct inode*);
> void unlock_two_nondirectories(struct inode *, struct inode*);
>
> @@ -2487,6 +2519,7 @@ struct file_system_type {
>
> struct lock_class_key i_lock_key;
> struct lock_class_key i_mutex_key;
> + struct lock_class_key invalidate_lock_key;
> struct lock_class_key i_mutex_dir_key;
> };
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index acf20eca2fa4..56988c4e655d 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -77,7 +77,8 @@
> * ->i_pages lock
> *
> * ->i_rwsem
> - * ->i_mmap_rwsem (truncate->unmap_mapping_range)
> + * ->invalidate_lock (acquired by fs in truncate path)
> + * ->i_mmap_rwsem (truncate->unmap_mapping_range)
> *
> * ->mmap_lock
> * ->i_mmap_rwsem
> @@ -85,7 +86,8 @@
> * ->i_pages lock (arch-dependent flush_dcache_mmap_lock)
> *
> * ->mmap_lock
> - * ->lock_page (access_process_vm)
> + * ->invalidate_lock (filemap_fault)
> + * ->lock_page (filemap_fault, access_process_vm)
> *
> * ->i_rwsem (generic_perform_write)
> * ->mmap_lock (fault_in_pages_readable->do_page_fault)
> @@ -2368,20 +2370,30 @@ static int filemap_update_page(struct kiocb *iocb,
> {
> int error;
>
> + if (iocb->ki_flags & IOCB_NOWAIT) {
> + if (!filemap_invalidate_trylock_shared(mapping))
> + return -EAGAIN;
> + } else {
> + filemap_invalidate_lock_shared(mapping);
> + }
> +
> if (!trylock_page(page)) {
> + error = -EAGAIN;
> if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO))
> - return -EAGAIN;
> + goto unlock_mapping;
> if (!(iocb->ki_flags & IOCB_WAITQ)) {
> + filemap_invalidate_unlock_shared(mapping);
> put_and_wait_on_page_locked(page, TASK_KILLABLE);
> return AOP_TRUNCATED_PAGE;
> }
> error = __lock_page_async(page, iocb->ki_waitq);
> if (error)
> - return error;
> + goto unlock_mapping;
> }
>
> + error = AOP_TRUNCATED_PAGE;
> if (!page->mapping)
> - goto truncated;
> + goto unlock;
>
> error = 0;
> if (filemap_range_uptodate(mapping, iocb->ki_pos, iter, page))
> @@ -2392,15 +2404,13 @@ static int filemap_update_page(struct kiocb *iocb,
> goto unlock;
>
> error = filemap_read_page(iocb->ki_filp, mapping, page);
> - if (error == AOP_TRUNCATED_PAGE)
> - put_page(page);
> - return error;
> -truncated:
> - unlock_page(page);
> - put_page(page);
> - return AOP_TRUNCATED_PAGE;
> + goto unlock_mapping;
> unlock:
> unlock_page(page);
> +unlock_mapping:
> + filemap_invalidate_unlock_shared(mapping);
> + if (error == AOP_TRUNCATED_PAGE)
> + put_page(page);
> return error;
> }
>
> @@ -2415,6 +2425,19 @@ static int filemap_create_page(struct file *file,
> if (!page)
> return -ENOMEM;
>
> + /*
> + * Protect against truncate / hole punch. Grabbing invalidate_lock here
> + * assures we cannot instantiate and bring uptodate new pagecache pages
> + * after evicting page cache during truncate and before actually
> + * freeing blocks. Note that we could release invalidate_lock after
> + * inserting the page into page cache as the locked page would then be
> + * enough to synchronize with hole punching. But there are code paths
> + * such as filemap_update_page() filling in partially uptodate pages or
> + * ->readpages() that need to hold invalidate_lock while mapping blocks
> + * for IO so let's hold the lock here as well to keep locking rules
> + * simple.
> + */
> + filemap_invalidate_lock_shared(mapping);
> error = add_to_page_cache_lru(page, mapping, index,
> mapping_gfp_constraint(mapping, GFP_KERNEL));
> if (error == -EEXIST)
> @@ -2426,9 +2449,11 @@ static int filemap_create_page(struct file *file,
> if (error)
> goto error;
>
> + filemap_invalidate_unlock_shared(mapping);
> pagevec_add(pvec, page);
> return 0;
> error:
> + filemap_invalidate_unlock_shared(mapping);
> put_page(page);
> return error;
> }
> @@ -2967,6 +2992,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> pgoff_t max_off;
> struct page *page;
> vm_fault_t ret = 0;
> + bool mapping_locked = false;
>
> max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> if (unlikely(offset >= max_off))
> @@ -2988,15 +3014,30 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
> ret = VM_FAULT_MAJOR;
> fpin = do_sync_mmap_readahead(vmf);
> + }
> +
> + if (!page) {

Is it still necessary to re-evaluate !page here?

> retry_find:
> + /*
> + * See comment in filemap_create_page() why we need
> + * invalidate_lock
> + */
> + if (!mapping_locked) {
> + filemap_invalidate_lock_shared(mapping);
> + mapping_locked = true;
> + }
> page = pagecache_get_page(mapping, offset,
> FGP_CREAT|FGP_FOR_MMAP,
> vmf->gfp_mask);
> if (!page) {
> if (fpin)
> goto out_retry;
> + filemap_invalidate_unlock_shared(mapping);
> return VM_FAULT_OOM;
> }
> + } else if (unlikely(!PageUptodate(page))) {
> + filemap_invalidate_lock_shared(mapping);
> + mapping_locked = true;
> }
>
> if (!lock_page_maybe_drop_mmap(vmf, page, &fpin))
> @@ -3014,8 +3055,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> * We have a locked page in the page cache, now we need to check
> * that it's up-to-date. If not, it is going to be due to an error.
> */
> - if (unlikely(!PageUptodate(page)))
> + if (unlikely(!PageUptodate(page))) {
> + /*
> + * The page was in cache and uptodate and now it is not.
> + * Strange but possible since we didn't hold the page lock all
> + * the time. Let's drop everything get the invalidate lock and
> + * try again.
> + */
> + if (!mapping_locked) {
> + unlock_page(page);
> + put_page(page);
> + goto retry_find;
> + }
> goto page_not_uptodate;
> + }
>
> /*
> * We've made it this far and we had to drop our mmap_lock, now is the
> @@ -3026,6 +3079,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> unlock_page(page);
> goto out_retry;
> }
> + if (mapping_locked)
> + filemap_invalidate_unlock_shared(mapping);
>
> /*
> * Found the page and have a reference on it.
> @@ -3056,6 +3111,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>
> if (!error || error == AOP_TRUNCATED_PAGE)
> goto retry_find;
> + filemap_invalidate_unlock_shared(mapping);

Hm. I /think/ it's the case that mapping_locked==true always holds here
because the new "The page was in cache and uptodate and now it is not."
block above will take the invalidate_lock and retry pagecache_get_page,
right?

>
> return VM_FAULT_SIGBUS;
>
> @@ -3067,6 +3123,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> */
> if (page)
> put_page(page);
> + if (mapping_locked)
> + filemap_invalidate_unlock_shared(mapping);

Hm. I think this looks ok, even though this patch now contains the
subtlety that we've both hoisted the xfs mmaplock to page cache /and/
reduced the scope of the invalidate_lock.

As for fancy things like remap_range, I think they're still safe with
this latest iteration because those functions grab the invalidate_lock
in exclusive mode and invalidate the mappings before proceeding, which
means that other programs will never find the lockless path (i.e. page
locked, uptodate, and attached to the mapping) and will instead block on
the invalidate lock until the remap operation completes. Is that
right?

--D
> if (fpin)
> fput(fpin);
> return ret | VM_FAULT_RETRY;
> @@ -3437,6 +3495,8 @@ static struct page *do_read_cache_page(struct address_space *mapping,
> *
> * If the page does not get brought uptodate, return -EIO.
> *
> + * The function expects mapping->invalidate_lock to be already held.
> + *
> * Return: up to date page on success, ERR_PTR() on failure.
> */
> struct page *read_cache_page(struct address_space *mapping,
> @@ -3460,6 +3520,8 @@ EXPORT_SYMBOL(read_cache_page);
> *
> * If the page does not get brought uptodate, return -EIO.
> *
> + * The function expects mapping->invalidate_lock to be already held.
> + *
> * Return: up to date page on success, ERR_PTR() on failure.
> */
> struct page *read_cache_page_gfp(struct address_space *mapping,
> diff --git a/mm/readahead.c b/mm/readahead.c
> index d589f147f4c2..41b75d76d36e 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -192,6 +192,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> */
> unsigned int nofs = memalloc_nofs_save();
>
> + filemap_invalidate_lock_shared(mapping);
> /*
> * Preallocate as many pages as we will need.
> */
> @@ -236,6 +237,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> * will then handle the error.
> */
> read_pages(ractl, &page_pool, false);
> + filemap_invalidate_unlock_shared(mapping);
> memalloc_nofs_restore(nofs);
> }
> EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a8b01929ab2e..86471aacc54a 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -22,24 +22,25 @@
> *
> * inode->i_rwsem (while writing or truncating, not reading or faulting)
> * mm->mmap_lock
> - * page->flags PG_locked (lock_page) * (see hugetlbfs below)
> - * hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
> - * mapping->i_mmap_rwsem
> - * hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
> - * anon_vma->rwsem
> - * mm->page_table_lock or pte_lock
> - * swap_lock (in swap_duplicate, swap_info_get)
> - * mmlist_lock (in mmput, drain_mmlist and others)
> - * mapping->private_lock (in __set_page_dirty_buffers)
> - * lock_page_memcg move_lock (in __set_page_dirty_buffers)
> - * i_pages lock (widely used)
> - * lruvec->lru_lock (in lock_page_lruvec_irq)
> - * inode->i_lock (in set_page_dirty's __mark_inode_dirty)
> - * bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
> - * sb_lock (within inode_lock in fs/fs-writeback.c)
> - * i_pages lock (widely used, in set_page_dirty,
> - * in arch-dependent flush_dcache_mmap_lock,
> - * within bdi.wb->list_lock in __sync_single_inode)
> + * mapping->invalidate_lock (in filemap_fault)
> + * page->flags PG_locked (lock_page) * (see hugetlbfs below)
> + * hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)
> + * mapping->i_mmap_rwsem
> + * hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
> + * anon_vma->rwsem
> + * mm->page_table_lock or pte_lock
> + * swap_lock (in swap_duplicate, swap_info_get)
> + * mmlist_lock (in mmput, drain_mmlist and others)
> + * mapping->private_lock (in __set_page_dirty_buffers)
> + * lock_page_memcg move_lock (in __set_page_dirty_buffers)
> + * i_pages lock (widely used)
> + * lruvec->lru_lock (in lock_page_lruvec_irq)
> + * inode->i_lock (in set_page_dirty's __mark_inode_dirty)
> + * bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
> + * sb_lock (within inode_lock in fs/fs-writeback.c)
> + * i_pages lock (widely used, in set_page_dirty,
> + * in arch-dependent flush_dcache_mmap_lock,
> + * within bdi.wb->list_lock in __sync_single_inode)
> *
> * anon_vma->rwsem,mapping->i_mmap_rwsem (memory_failure, collect_procs_anon)
> * ->tasklist_lock
> diff --git a/mm/truncate.c b/mm/truncate.c
> index 0f9becee9789..44ad5e515140 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -412,7 +412,8 @@ EXPORT_SYMBOL(truncate_inode_pages_range);
> * @mapping: mapping to truncate
> * @lstart: offset from which to truncate
> *
> - * Called under (and serialised by) inode->i_rwsem.
> + * Called under (and serialised by) inode->i_rwsem and
> + * mapping->invalidate_lock.
> *
> * Note: When this function returns, there can be a page in the process of
> * deletion (inside __delete_from_page_cache()) in the specified range. Thus
> --
> 2.26.2
>

2021-07-13 06:26:53

by Christoph Hellwig

Subject: Re: [PATCH 03/14] mm: Protect operations adding pages to page cache with invalidate_lock

Still looks good. That being said, the additional conditional locking in
filemap_fault makes it fall over the readability cliff for me. Something
like this on top of your series would help:

diff --git a/mm/filemap.c b/mm/filemap.c
index fd3f94d36c49..0fad08331cf4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3040,21 +3040,23 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 * Do we have something in the page cache already?
 	 */
 	page = find_get_page(mapping, offset);
-	if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
+	if (likely(page)) {
 		/*
-		 * We found the page, so try async readahead before
-		 * waiting for the lock.
+		 * We found the page, so try async readahead before waiting for
+		 * the lock.
 		 */
-		fpin = do_async_mmap_readahead(vmf, page);
-	} else if (!page) {
+		if (!(vmf->flags & FAULT_FLAG_TRIED))
+			fpin = do_async_mmap_readahead(vmf, page);
+		if (unlikely(!PageUptodate(page))) {
+			filemap_invalidate_lock_shared(mapping);
+			mapping_locked = true;
+		}
+	} else {
 		/* No page in the page cache at all */
 		count_vm_event(PGMAJFAULT);
 		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
 		ret = VM_FAULT_MAJOR;
 		fpin = do_sync_mmap_readahead(vmf);
-	}
-
-	if (!page) {
 retry_find:
 		/*
 		 * See comment in filemap_create_page() why we need
@@ -3073,9 +3075,6 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 			filemap_invalidate_unlock_shared(mapping);
 			return VM_FAULT_OOM;
 		}
-	} else if (unlikely(!PageUptodate(page))) {
-		filemap_invalidate_lock_shared(mapping);
-		mapping_locked = true;
-	}
 
 	if (!lock_page_maybe_drop_mmap(vmf, page, &fpin))

2021-07-13 11:12:26

by Jan Kara

Subject: Re: [PATCH 03/14] mm: Protect operations adding pages to page cache with invalidate_lock

On Mon 12-07-21 18:25:14, Darrick J. Wong wrote:
> On Mon, Jul 12, 2021 at 06:55:54PM +0200, Jan Kara wrote:
> > @@ -2967,6 +2992,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> > pgoff_t max_off;
> > struct page *page;
> > vm_fault_t ret = 0;
> > + bool mapping_locked = false;
> >
> > max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> > if (unlikely(offset >= max_off))
> > @@ -2988,15 +3014,30 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> > count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
> > ret = VM_FAULT_MAJOR;
> > fpin = do_sync_mmap_readahead(vmf);
> > + }
> > +
> > + if (!page) {
>
> Is it still necessary to re-evaluate !page here?

No, you are right it is not necessary. I'll remove it.

> > retry_find:
> > + /*
> > + * See comment in filemap_create_page() why we need
> > + * invalidate_lock
> > + */
> > + if (!mapping_locked) {
> > + filemap_invalidate_lock_shared(mapping);
> > + mapping_locked = true;
> > + }
> > page = pagecache_get_page(mapping, offset,
> > FGP_CREAT|FGP_FOR_MMAP,
> > vmf->gfp_mask);
> > if (!page) {
> > if (fpin)
> > goto out_retry;
> > + filemap_invalidate_unlock_shared(mapping);
> > return VM_FAULT_OOM;
> > }
> > + } else if (unlikely(!PageUptodate(page))) {
> > + filemap_invalidate_lock_shared(mapping);
> > + mapping_locked = true;
> > }
> >
> > if (!lock_page_maybe_drop_mmap(vmf, page, &fpin))
> > @@ -3014,8 +3055,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> > * We have a locked page in the page cache, now we need to check
> > * that it's up-to-date. If not, it is going to be due to an error.
> > */
> > - if (unlikely(!PageUptodate(page)))
> > + if (unlikely(!PageUptodate(page))) {
> > + /*
> > + * The page was in cache and uptodate and now it is not.
> > + * Strange but possible since we didn't hold the page lock all
> > + * the time. Let's drop everything get the invalidate lock and
> > + * try again.
> > + */
> > + if (!mapping_locked) {
> > + unlock_page(page);
> > + put_page(page);
> > + goto retry_find;
> > + }
> > goto page_not_uptodate;
> > + }
> >
> > /*
> > * We've made it this far and we had to drop our mmap_lock, now is the
> > @@ -3026,6 +3079,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> > unlock_page(page);
> > goto out_retry;
> > }
> > + if (mapping_locked)
> > + filemap_invalidate_unlock_shared(mapping);
> >
> > /*
> > * Found the page and have a reference on it.
> > @@ -3056,6 +3111,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> >
> > if (!error || error == AOP_TRUNCATED_PAGE)
> > goto retry_find;
> > + filemap_invalidate_unlock_shared(mapping);
>
> Hm. I /think/ it's the case that mapping_locked==true always holds here
> because the new "The page was in cache and uptodate and now it is not."
> block above will take the invalidate_lock and retry pagecache_get_page,
> right?

Yes. page_not_uptodate block can only be entered with mapping_locked ==
true - the only place that can enter this block is:

	if (unlikely(!PageUptodate(page))) {
		/*
		 * The page was in cache and uptodate and now it is not.
		 * Strange but possible since we didn't hold the page lock all
		 * the time. Let's drop everything get the invalidate lock and
		 * try again.
		 */
		if (!mapping_locked) {
			unlock_page(page);
			put_page(page);
			goto retry_find;
		}
		goto page_not_uptodate;
	}

> >
> > return VM_FAULT_SIGBUS;
> >
> > @@ -3067,6 +3123,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> > */
> > if (page)
> > put_page(page);
> > + if (mapping_locked)
> > + filemap_invalidate_unlock_shared(mapping);
>
> Hm. I think this looks ok, even though this patch now contains the
> subtlety that we've both hoisted the xfs mmaplock to page cache /and/
> reduced the scope of the invalidate_lock.
>
> As for fancy things like remap_range, I think they're still safe with
> this latest iteration because those functions grab the invalidate_lock
> in exclusive mode and invalidate the mappings before proceeding, which
> means that other programs will never find the lockless path (i.e. page
> locked, uptodate, and attached to the mapping) and will instead block on
> the invalidate lock until the remap operation completes. Is that
> right?

Correct. For operations such as hole punch or the destination of remap_range,
we lock invalidate_lock exclusively and invalidate the pagecache in the
involved range. No new pages can be created in that range until you drop
invalidate_lock (the places creating pages without holding i_rwsem are read,
readahead, and fault, and all of those take invalidate_lock when they are
about to create the page).

There's also the case someone pointed out that the *source* of remap_range
needs to be protected (but only from modifications through mmap). This is
achieved by taking invalidate_lock in .page_mkwrite handlers, so it is not
impacted by these changes to filemap_fault().
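
For illustration, that .page_mkwrite pattern could look like the following
minimal sketch (hypothetical filesystem, simplified: no file_update_time(),
no block allocation or stable-page waiting):

static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct address_space *mapping = file_inode(vmf->vma->vm_file)->i_mapping;

	/*
	 * Shared invalidate_lock: an exclusive holder (hole punch, or the
	 * remap_range source/destination) blocks us until it is done.
	 */
	filemap_invalidate_lock_shared(mapping);
	lock_page(page);
	if (page->mapping != mapping) {
		/* Truncated / invalidated under us, let the VM retry. */
		unlock_page(page);
		filemap_invalidate_unlock_shared(mapping);
		return VM_FAULT_NOPAGE;
	}
	set_page_dirty(page);
	filemap_invalidate_unlock_shared(mapping);
	/* Return with the page locked, as locking.rst requires. */
	return VM_FAULT_LOCKED;
}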

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2021-07-13 12:37:08

by Jan Kara

Subject: Re: [PATCH 03/14] mm: Protect operations adding pages to page cache with invalidate_lock

On Tue 13-07-21 07:25:05, Christoph Hellwig wrote:
> Still looks good. That being said, the additional conditional locking in
> filemap_fault makes it fall over the readability cliff for me. Something
> like this on top of your series would help:

Yeah, that's better. Applied, thanks.

Honza

>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index fd3f94d36c49..0fad08331cf4 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3040,21 +3040,23 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> * Do we have something in the page cache already?
> */
> page = find_get_page(mapping, offset);
> - if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
> + if (likely(page)) {
> /*
> - * We found the page, so try async readahead before
> - * waiting for the lock.
> + * We found the page, so try async readahead before waiting for
> + * the lock.
> */
> - fpin = do_async_mmap_readahead(vmf, page);
> - } else if (!page) {
> + if (!(vmf->flags & FAULT_FLAG_TRIED))
> + fpin = do_async_mmap_readahead(vmf, page);
> + if (unlikely(!PageUptodate(page))) {
> + filemap_invalidate_lock_shared(mapping);
> + mapping_locked = true;
> + }
> + } else {
> /* No page in the page cache at all */
> count_vm_event(PGMAJFAULT);
> count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
> ret = VM_FAULT_MAJOR;
> fpin = do_sync_mmap_readahead(vmf);
> - }
> -
> - if (!page) {
> retry_find:
> /*
> * See comment in filemap_create_page() why we need
> @@ -3073,9 +3075,6 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
> filemap_invalidate_unlock_shared(mapping);
> return VM_FAULT_OOM;
> }
> - } else if (unlikely(!PageUptodate(page))) {
> - filemap_invalidate_lock_shared(mapping);
> - mapping_locked = true;
> }
>
> if (!lock_page_maybe_drop_mmap(vmf, page, &fpin))
--
Jan Kara <[email protected]>
SUSE Labs, CR