2021-04-07 13:52:15

by Michel Lespinasse

Subject: [RFC PATCH 25/37] mm: implement speculative handling in filemap_fault()

Extend filemap_fault() to handle speculative faults.

In the speculative case, we only fish existing pages out of the
page cache. The logic mirrors the non-speculative case, but applies
only when the page is already present in the page cache, is up to
date and not locked, and readahead is not necessary at this time.
In all other cases, the fault is aborted so that it can be handled
non-speculatively.

Signed-off-by: Michel Lespinasse <[email protected]>
---
mm/filemap.c | 50 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 49 insertions(+), 1 deletion(-)
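
Note (illustrative, not part of this patch): the expected calling
pattern is that the arch fault handler first attempts the fault under
RCU with FAULT_FLAG_SPECULATIVE set, and falls back to the classic
mmap_lock path whenever any layer, including the filemap_fault() code
below, returns VM_FAULT_RETRY. A rough sketch, with names simplified
relative to the actual fault-handler patches in this series:

	rcu_read_lock();
	/* speculative vma lookup and sequence checks elided */
	fault = handle_mm_fault(vma, address,
				flags | FAULT_FLAG_SPECULATIVE, regs);
	rcu_read_unlock();

	if (fault & VM_FAULT_RETRY) {
		/*
		 * Speculation aborted (e.g. page not in the page cache,
		 * not up to date, locked, or readahead wanted): redo the
		 * fault the traditional way, with the mmap lock held.
		 */
		mmap_read_lock(mm);
		/* ... non-speculative fault path ... */
		mmap_read_unlock(mm);
	}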

diff --git a/mm/filemap.c b/mm/filemap.c
index 43700480d897..6e8505fe5df9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2851,7 +2851,9 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
  * it in the page cache, and handles the special cases reasonably without
  * having a lot of duplicated code.
  *
- * vma->vm_mm->mmap_lock must be held on entry.
+ * If FAULT_FLAG_SPECULATIVE is set, this function is called from within
+ * an RCU read-side critical section, without the mmap lock held.
+ * Otherwise, vma->vm_mm->mmap_lock must be held on entry.
  *
  * If our return value has VM_FAULT_RETRY set, it's because the mmap_lock
  * may be dropped before doing I/O or by lock_page_maybe_drop_mmap().
@@ -2876,6 +2878,52 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	struct page *page;
 	vm_fault_t ret = 0;
 
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+		page = find_get_page(mapping, offset);
+		if (unlikely(!page))
+			return VM_FAULT_RETRY;
+		/* find_get_page() took a page reference; drop it on abort. */
+		if (unlikely(PageReadahead(page)))
+			goto page_put;
+
+		if (!trylock_page(page))
+			goto page_put;
+
+		if (unlikely(compound_head(page)->mapping != mapping))
+			goto page_unlock;
+		VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page);
+		if (unlikely(!PageUptodate(page)))
+			goto page_unlock;
+
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		if (unlikely(offset >= max_off))
+			goto page_unlock;
+
+		/*
+		 * Update the readahead mmap_miss statistic.
+		 *
+		 * Note that we are not sure if finish_fault() will
+		 * manage to complete the transaction. If it fails,
+		 * we'll come back to the filemap_fault() non-speculative
+		 * case, which will update mmap_miss a second time.
+		 * This is not ideal; we would prefer to guarantee that
+		 * the update happens exactly once.
+		 */
+		if (!(vmf->vma->vm_flags & VM_RAND_READ) && ra->ra_pages) {
+			unsigned int mmap_miss = READ_ONCE(ra->mmap_miss);
+			if (mmap_miss)
+				WRITE_ONCE(ra->mmap_miss, --mmap_miss);
+		}
+
+		vmf->page = page;
+		return VM_FAULT_LOCKED;
+page_unlock:
+		unlock_page(page);
+page_put:
+		put_page(page);
+		return VM_FAULT_RETRY;
+	}
+
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(offset >= max_off))
 		return VM_FAULT_SIGBUS;
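
A note on the return values in the speculative block (sketch only, not
part of the patch): on VM_FAULT_RETRY no page reference or page lock is
left held, while on VM_FAULT_LOCKED vmf->page is returned both locked
and referenced. A simplified view of the consumer side (the real
consumer is the finish_fault() machinery in mm/memory.c):

	ret = vma->vm_ops->fault(vmf);		/* filemap_fault() */
	if (ret & VM_FAULT_RETRY) {
		/* nothing to release; retry non-speculatively */
	} else if (ret & VM_FAULT_LOCKED) {
		/* ... map vmf->page; the mapping consumes the reference ... */
		unlock_page(vmf->page);
	}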
--
2.20.1