This is a port to kernel 4.16 of the work done by Peter Zijlstra to
handle page faults without holding the mm semaphore [1].
The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes, since the page fault handler no longer waits for other
threads' memory layout changes to complete, assuming those changes are
done in another part of the process's memory space. This type of page
fault is named a speculative page fault. If the speculative page fault
fails, because a concurrent change is detected or because the underlying
PMD or PTE tables are not yet allocated, the processing is aborted and a
classic page fault is tried instead.
The speculative page fault (SPF) handler has to look up the VMA matching
the fault address without holding the mmap_sem. This is done by
introducing a rwlock which protects access to the mm_rb tree. Previously
this was done using SRCU, but that introduced a lot of scheduling work
to process the VMAs' freeing, which hurt performance by 20% as reported
by Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree
limits the locking contention to these operations, which are expected to
be O(log n).
In addition, to ensure that a VMA is not freed behind our back, a
reference count is added, and 2 services (get_vma() and put_vma()) are
introduced to handle it. When a VMA is fetched from the RB tree using
get_vma(), it must later be released using put_vma(). Furthermore, to
allow the VMA to be reused by the classic page fault handler, a new
service, can_reuse_spf_vma(), is introduced. It is expected to be called
with the mmap_sem held. It checks that the VMA still matches the
specified address and releases the reference count; as the mmap_sem is
held, the VMA cannot be freed behind our back. In general, the VMA's
reference count may be decremented while holding the mmap_sem, but it
should not be incremented, since holding the mmap_sem already ensures
that the VMA is stable. With this scheme, I can no longer see the
overhead I previously got with the will-it-scale benchmark.
The VMA's attributes checked during the speculative page fault
processing have to be protected against parallel changes. This is done
using a per-VMA sequence lock, which allows the speculative page fault
handler to quickly check for parallel changes in progress and to abort
the speculative page fault in that case.
Once the VMA is found, the speculative page fault handler checks the
VMA's attributes to verify whether the page fault can be handled
correctly. The VMA is thus protected through a sequence lock which
allows fast detection of concurrent VMA changes. If such a change is
detected, the speculative page fault is aborted and a *classic* page
fault is tried instead. VMA sequence locking is added wherever the VMA
attributes checked during the page fault are modified.
When the PTE is fetched, the VMA is checked again for changes, so once
the page table is locked the VMA is known to be valid. Any other change
touching this PTE would need to take the page table lock, so no parallel
change is possible at this point.
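The per-VMA sequence count pattern can be illustrated with a minimal
userspace sketch. The vm_write_begin()/vm_write_end() names mirror the
series, but the structure and the reader helper are simplified stand-ins:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct demo_vma {
	atomic_uint vm_sequence;	/* odd while a writer is mid-update */
	unsigned long vm_flags;
};

void vm_write_begin(struct demo_vma *v)
{
	atomic_fetch_add(&v->vm_sequence, 1);	/* count becomes odd */
}

void vm_write_end(struct demo_vma *v)
{
	atomic_fetch_add(&v->vm_sequence, 1);	/* count becomes even again */
}

/* Speculative side: copy the flags only if no writer ran meanwhile. */
bool read_flags_speculative(struct demo_vma *v, unsigned long *flags)
{
	unsigned int seq = atomic_load(&v->vm_sequence);

	if (seq & 1)
		return false;	/* update in progress: abort to slow path */
	*flags = v->vm_flags;
	/* if the count moved, a writer raced with us: abort as well */
	return atomic_load(&v->vm_sequence) == seq;
}
```

A false return here corresponds to the SPF handler falling back to the
classic, mmap_sem-holding page fault path.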
The PTE is locked with interrupts disabled, which allows checking the
PMD to ensure that there is no ongoing collapsing operation. Since
khugepaged first sets the PMD to pmd_none and then waits for the other
CPUs to catch the IPI interrupt, if the PMD is valid at the time the PTE
is locked, we have the guarantee that the collapsing operation will have
to wait on the PTE lock to move forward. This allows the SPF handler to
map the PTE safely. If the PMD value differs from the one recorded at
the beginning of the SPF operation, the classic page fault handler is
called to handle the fault while holding the mmap_sem. As the PTE is
locked with interrupts disabled, the locking is done using
spin_trylock() to avoid a deadlock when handling a page fault while a
TLB invalidate is requested by another CPU holding the PTE lock.
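The trylock-and-recheck step can be sketched as follows, using a pthread
mutex in place of the PTE spinlock and a plain integer in place of the
kernel's pmd_t; all names here are illustrative stand-ins:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

typedef unsigned long pmd_t;	/* plain-integer stand-in, not the kernel type */

static pthread_mutex_t pte_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns false when the speculative path must give up so that the
 * classic (mmap_sem-holding) handler runs instead. */
bool spf_map_pte(pmd_t *pmd, pmd_t pmd_seen_at_start)
{
	/* Only try the lock: blocking here with interrupts disabled could
	 * deadlock against a CPU requesting a TLB invalidate while
	 * holding the PTE lock. */
	if (pthread_mutex_trylock(&pte_lock) != 0)
		return false;

	/* Re-check the PMD recorded at the start of the speculative
	 * path; a change means e.g. khugepaged collapsed the table. */
	if (*pmd != pmd_seen_at_start) {
		pthread_mutex_unlock(&pte_lock);
		return false;
	}

	/* ... safe to establish the PTE here ... */
	pthread_mutex_unlock(&pte_lock);
	return true;
}
```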
Support for THP is not included because, when checking the PMD, we could
be confused by an in-progress collapsing operation done by khugepaged.
The issue is that pmd_none() could be true either because the PMD is not
yet populated or because the underlying PTEs are about to be collapsed.
So we cannot safely allocate a PMD when pmd_none() is true.
This series adds a new software performance event named
'speculative-faults' or 'spf'. It counts the number of page faults
successfully handled in a speculative way. When recording 'faults,spf'
events, the 'faults' event counts the total number of page faults while
'spf' counts only the faults processed speculatively.
This series also introduces some trace events which allow identifying
why a page fault was not processed speculatively. They don't take into
account the faults generated by single-threaded processes, which are
directly processed while holding the mmap_sem. These trace events are
grouped in a system named 'pagefault':
- pagefault:spf_pte_lock : the PTE was already locked by another thread
- pagefault:spf_vma_changed : the VMA was changed behind our back
- pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
- pagefault:spf_vma_notsup : the VMA's type is not supported
- pagefault:spf_vma_access : the VMA's access rights are not respected
- pagefault:spf_pmd_changed : the upper PMD pointer changed behind our
back
To record all the related events, the easiest way is to run perf with
the following arguments:
$ perf stat -e 'faults,spf,pagefault:*' <command>
This series builds on top of v4.16-rc2-mmotm-2018-02-21-14-48 and is
functional on x86 and PowerPC.
---------------------
Real Workload results
As mentioned in a previous email, we did unofficial runs using a
"popular in-memory multithreaded database product" on a 176-core SMT8
Power system, which showed a 30% improvement in the number of
transactions processed per second. This run was done on the v6 series,
but the changes introduced in this new version should not impact the
performance boost seen.
Here are the perf data captured during 2 of these runs on top of the v8
series:
vanilla spf
faults 89.418 101.364
spf n/a 97.989
With the SPF kernel, most of the page faults were processed in a
speculative way.
------------------
Benchmark results
Base kernel is v4.16-rc4-mmotm-2018-03-09-16-34
SPF is BASE + this series
Kernbench:
----------
Here are the results on a 16-CPU x86 guest using kernbench on a 4.13-rc4
kernel (the kernel is built 5 times):
Average Half load -j 8
Run (std deviation)
BASE SPF
Elapsed Time 151.36 (1.40139) 151.748 (1.09716) 0.26%
User Time 1023.19 (3.58972) 1027.35 (2.30396) 0.41%
System Time 125.026 (1.8547) 124.504 (0.980015) -0.42%
Percent CPU 758.2 (5.54076) 758.6 (3.97492) 0.05%
Context Switches 54924 (453.634) 54851 (382.293) -0.13%
Sleeps 105589 (704.581) 105282 (435.502) -0.29%
Average Optimal load -j 16
Run (std deviation)
BASE SPF
Elapsed Time 74.804 (1.25139) 74.368 (0.406288) -0.58%
User Time 962.033 (64.5125) 963.93 (66.8797) 0.20%
System Time 110.771 (15.0817) 110.387 (14.8989) -0.35%
Percent CPU 1045.7 (303.387) 1049.1 (306.255) 0.33%
Context Switches 76201.8 (22433.1) 76170.4 (22482.9) -0.04%
Sleeps 110289 (5024.05) 110220 (5248.58) -0.06%
During a run on the SPF kernel, perf events were captured:
Performance counter stats for '../kernbench -M':
510334017 faults
200 spf
0 pagefault:spf_pte_lock
0 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
2174 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
Very few speculative page faults were recorded, as most of the processes
involved are single-threaded (it seems that on this architecture some
threads were created during the kernel build process).
Here are the kernbench results on an 80-CPU Power8 system:
Average Half load -j 40
Run (std deviation)
BASE SPF
Elapsed Time 116.958 (0.73401) 117.43 (0.927497) 0.40%
User Time 4472.35 (7.85792) 4480.16 (19.4909) 0.17%
System Time 136.248 (0.587639) 136.922 (1.09058) 0.49%
Percent CPU 3939.8 (20.6567) 3931.2 (17.2829) -0.22%
Context Switches 92445.8 (236.672) 92720.8 (270.118) 0.30%
Sleeps 318475 (1412.6) 317996 (1819.07) -0.15%
Average Optimal load -j 80
Run (std deviation)
BASE SPF
Elapsed Time 106.976 (0.406731) 107.72 (0.329014) 0.70%
User Time 5863.47 (1466.45) 5865.38 (1460.27) 0.03%
System Time 159.995 (25.0393) 160.329 (24.6921) 0.21%
Percent CPU 5446.2 (1588.23) 5416 (1565.34) -0.55%
Context Switches 223018 (137637) 224867 (139305) 0.83%
Sleeps 330846 (13127.3) 332348 (15556.9) 0.45%
During a run on the SPF kernel, perf events were captured:
Performance counter stats for '../kernbench -M':
116612488 faults
0 spf
0 pagefault:spf_pte_lock
0 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
473 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
Most of the processes involved are single-threaded, so SPF is not
activated, but there is no impact on the performance.
Ebizzy:
-------
The test counts the number of records per second it can manage; higher
is better. I ran it as 'ebizzy -mTRp'. To get consistent results I
repeated the test 100 times and measured the average result.
BASE SPF delta
16 CPUs x86 VM 14902.6 95905.16 543.55%
80 CPUs P8 node 37240.24 78185.67 109.95%
Here are the performance counter read during a run on a 16 CPUs x86 VM:
Performance counter stats for './ebizzy -mRTp':
888157 faults
884773 spf
92 pagefault:spf_pte_lock
2379 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
80 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
And the ones captured during a run on a 80 CPUs Power node:
Performance counter stats for './ebizzy -mRTp':
762134 faults
728663 spf
19101 pagefault:spf_pte_lock
13969 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
272 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
In ebizzy's case most of the page faults were handled in a speculative
way, leading to the ebizzy performance boost.
------------------
Changes since v8:
- Don't check PMD when locking the pte when THP is disabled
Thanks to Daniel Jordan for reporting this.
- Rebase on 4.16
Changes since v7:
- move pte_map_lock() and pte_spinlock() higher up in mm/memory.c (patch
4 & 5)
- make pte_unmap_same() compatible with the speculative page fault (patch
6)
Changes since v6:
- Rename config variable to CONFIG_SPECULATIVE_PAGE_FAULT (patch 1)
- Review the way the config variable is set (patch 1 to 3)
- Introduce mm_rb_write_*lock() in mm/mmap.c (patch 18)
- Merge patch introducing pte try locking in the patch 18.
Changes since v5:
- use a rwlock against the mm RB tree in place of SRCU
- add a VMA's reference count to protect VMA while using it without
holding the mmap_sem.
- check PMD value to detect collapsing operation
- don't try speculative page fault for mono threaded processes
- try to reuse the fetched VMA if VM_FAULT_RETRY is returned
- go directly to the error path if an error is detected during the SPF
path
- fix race window when moving VMA in move_vma()
Changes since v4:
- As requested by Andrew Morton, use CONFIG_SPF and define it earlier in
the series to ease bisection.
Changes since v3:
- Don't build when CONFIG_SMP is not set
- Fixed a lock dependency warning in __vma_adjust()
- Use READ_ONCE to access p*d values in handle_speculative_fault()
- Call memcp_oom() service in handle_speculative_fault()
Changes since v2:
- Perf event is renamed in PERF_COUNT_SW_SPF
- On Power, clean up do_page_fault()
- On Power if the VM_FAULT_ERROR is returned by
handle_speculative_fault(), do not retry but jump to the error path
- If VMA's flags are not matching the fault, directly returns
VM_FAULT_SIGSEGV and not VM_FAULT_RETRY
- Check for pud_trans_huge() to avoid speculative path
- Handle the _vm_normal_page() changes introduced by 6f16211df3bf
("mm/device-public-memory: device memory cache coherent with CPU")
- Add and review a few comments in the code
Changes since v1:
- Remove PERF_COUNT_SW_SPF_FAILED perf event.
- Add tracing events to detail speculative page fault failures.
- Cache VMA fields values which are used once the PTE is unlocked at the
end of the page fault events.
- Ensure that fields read during the speculative path are written and read
using WRITE_ONCE and READ_ONCE.
- Add checks at the beginning of the speculative path to abort it if the
VMA is known to not be supported.
Changes since RFC V5 [5]
- Port to 4.13 kernel
- Merging patch fixing lock dependency into the original patch
- Replace the 2 parameters of vma_has_changed() with the vmf pointer
- In patch 7, don't call __do_fault() in the speculative path as it may
want to unlock the mmap_sem.
- In patch 11-12, don't check for vma boundaries when
page_add_new_anon_rmap() is called during the spf path and protect against
anon_vma pointer's update.
- In patch 13-16, add performance events to report number of successful
and failed speculative events.
[1]
http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
[2] https://patchwork.kernel.org/patch/9999687/
Laurent Dufour (20):
mm: Introduce CONFIG_SPECULATIVE_PAGE_FAULT
x86/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
powerpc/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
mm: make pte_unmap_same compatible with SPF
mm: Protect VMA modifications using VMA sequence count
mm: protect mremap() against SPF handler
mm: Protect SPF handler against anon_vma changes
mm: Cache some VMA fields in the vm_fault structure
mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
mm: Introduce __lru_cache_add_active_or_unevictable
mm: Introduce __maybe_mkwrite()
mm: Introduce __vm_normal_page()
mm: Introduce __page_add_new_anon_rmap()
mm: Protect mm_rb tree with a rwlock
mm: Adding speculative page fault failure trace events
perf: Add a speculative page fault sw event
perf tools: Add support for the SPF perf event
mm: Speculative page fault handler return VMA
powerpc/mm: Add speculative page fault
Peter Zijlstra (4):
mm: Prepare for FAULT_FLAG_SPECULATIVE
mm: VMA sequence count
mm: Provide speculative fault infrastructure
x86/mm: Add speculative pagefault handling
arch/powerpc/Kconfig | 1 +
arch/powerpc/mm/fault.c | 31 +-
arch/x86/Kconfig | 1 +
arch/x86/mm/fault.c | 38 ++-
fs/proc/task_mmu.c | 5 +-
fs/userfaultfd.c | 17 +-
include/linux/hugetlb_inline.h | 2 +-
include/linux/migrate.h | 4 +-
include/linux/mm.h | 92 +++++-
include/linux/mm_types.h | 7 +
include/linux/pagemap.h | 4 +-
include/linux/rmap.h | 12 +-
include/linux/swap.h | 10 +-
include/trace/events/pagefault.h | 87 +++++
include/uapi/linux/perf_event.h | 1 +
kernel/fork.c | 3 +
mm/Kconfig | 3 +
mm/hugetlb.c | 2 +
mm/init-mm.c | 3 +
mm/internal.h | 20 ++
mm/khugepaged.c | 5 +
mm/madvise.c | 6 +-
mm/memory.c | 594 ++++++++++++++++++++++++++++++----
mm/mempolicy.c | 51 ++-
mm/migrate.c | 4 +-
mm/mlock.c | 13 +-
mm/mmap.c | 211 +++++++++---
mm/mprotect.c | 4 +-
mm/mremap.c | 13 +
mm/rmap.c | 5 +-
mm/swap.c | 6 +-
mm/swap_state.c | 8 +-
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c | 4 +
tools/perf/util/parse-events.l | 1 +
tools/perf/util/python.c | 1 +
37 files changed, 1097 insertions(+), 174 deletions(-)
create mode 100644 include/trace/events/pagefault.h
--
2.7.4
Introduce CONFIG_SPECULATIVE_PAGE_FAULT which turns on the Speculative
Page Fault handler when building for 64-bit with SMP.
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Laurent Dufour <[email protected]>
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a0a777ce4c7c..4c018c48d414 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -29,6 +29,7 @@ config X86_64
select HAVE_ARCH_SOFT_DIRTY
select MODULES_USE_ELF_RELA
select X86_DEV_DMA_OPS
+ select SPECULATIVE_PAGE_FAULT if SMP
#
# Arch settings
--
2.7.4
If a thread is remapping an area while another one is faulting on the
destination area, the SPF handler may fetch the VMA from the RB tree
before the PTEs have been moved by the other thread. This means that the
moved PTEs will overwrite those created by the page fault handler,
leading to leaked pages.
CPU 1 CPU2
enter mremap()
unmap the dest area
copy_vma() Enter speculative page fault handler
>> at this time the dest area is present in the RB tree
fetch the vma matching dest area
create a pte as the VMA matched
Exit the SPF handler
<data written in the new page>
move_ptes()
> it is assumed that the dest area is empty,
> the moved ptes overwrite the pages mapped by CPU2.
To prevent that, when the VMA matching the dest area is extended or
created by copy_vma(), it should be marked as unavailable to the SPF
handler. The usual way to do so is to rely on vm_write_begin()/end().
This is already done in __vma_adjust(), called by copy_vma() (through
vma_merge()). But __vma_adjust() calls vm_write_end() before returning,
which creates a window for another thread.
This patch adds a new parameter to vma_merge() which is passed down to
vma_adjust().
The assumption is that copy_vma() returns a VMA which should be released
by the caller via vm_raw_write_end() once the PTEs have been moved.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/mm.h | 16 ++++++++++++----
mm/mmap.c | 47 ++++++++++++++++++++++++++++++++++++-----------
mm/mremap.c | 13 +++++++++++++
3 files changed, 61 insertions(+), 15 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 88042d843668..ef6ef0627090 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2189,16 +2189,24 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
extern int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
- struct vm_area_struct *expand);
+ struct vm_area_struct *expand, bool keep_locked);
static inline int vma_adjust(struct vm_area_struct *vma, unsigned long start,
unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert)
{
- return __vma_adjust(vma, start, end, pgoff, insert, NULL);
+ return __vma_adjust(vma, start, end, pgoff, insert, NULL, false);
}
-extern struct vm_area_struct *vma_merge(struct mm_struct *,
+extern struct vm_area_struct *__vma_merge(struct mm_struct *,
struct vm_area_struct *prev, unsigned long addr, unsigned long end,
unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
- struct mempolicy *, struct vm_userfaultfd_ctx);
+ struct mempolicy *, struct vm_userfaultfd_ctx, bool keep_locked);
+static inline struct vm_area_struct *vma_merge(struct mm_struct *vma,
+ struct vm_area_struct *prev, unsigned long addr, unsigned long end,
+ unsigned long vm_flags, struct anon_vma *anon, struct file *file,
+ pgoff_t off, struct mempolicy *pol, struct vm_userfaultfd_ctx uff)
+{
+ return __vma_merge(vma, prev, addr, end, vm_flags, anon, file, off,
+ pol, uff, false);
+}
extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
extern int __split_vma(struct mm_struct *, struct vm_area_struct *,
unsigned long addr, int new_below);
diff --git a/mm/mmap.c b/mm/mmap.c
index d6533cb85213..ac32b577a0c9 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -684,7 +684,7 @@ static inline void __vma_unlink_prev(struct mm_struct *mm,
*/
int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
- struct vm_area_struct *expand)
+ struct vm_area_struct *expand, bool keep_locked)
{
struct mm_struct *mm = vma->vm_mm;
struct vm_area_struct *next = vma->vm_next, *orig_vma = vma;
@@ -996,7 +996,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
if (next && next != vma)
vm_raw_write_end(next);
- vm_raw_write_end(vma);
+ if (!keep_locked)
+ vm_raw_write_end(vma);
validate_mm(mm);
@@ -1132,12 +1133,13 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
* parameter) may establish ptes with the wrong permissions of NNNN
* instead of the right permissions of XXXX.
*/
-struct vm_area_struct *vma_merge(struct mm_struct *mm,
+struct vm_area_struct *__vma_merge(struct mm_struct *mm,
struct vm_area_struct *prev, unsigned long addr,
unsigned long end, unsigned long vm_flags,
struct anon_vma *anon_vma, struct file *file,
pgoff_t pgoff, struct mempolicy *policy,
- struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
+ struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+ bool keep_locked)
{
pgoff_t pglen = (end - addr) >> PAGE_SHIFT;
struct vm_area_struct *area, *next;
@@ -1185,10 +1187,11 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
/* cases 1, 6 */
err = __vma_adjust(prev, prev->vm_start,
next->vm_end, prev->vm_pgoff, NULL,
- prev);
+ prev, keep_locked);
} else /* cases 2, 5, 7 */
err = __vma_adjust(prev, prev->vm_start,
- end, prev->vm_pgoff, NULL, prev);
+ end, prev->vm_pgoff, NULL, prev,
+ keep_locked);
if (err)
return NULL;
khugepaged_enter_vma_merge(prev, vm_flags);
@@ -1205,10 +1208,12 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
vm_userfaultfd_ctx)) {
if (prev && addr < prev->vm_end) /* case 4 */
err = __vma_adjust(prev, prev->vm_start,
- addr, prev->vm_pgoff, NULL, next);
+ addr, prev->vm_pgoff, NULL, next,
+ keep_locked);
else { /* cases 3, 8 */
err = __vma_adjust(area, addr, next->vm_end,
- next->vm_pgoff - pglen, NULL, next);
+ next->vm_pgoff - pglen, NULL, next,
+ keep_locked);
/*
* In case 3 area is already equal to next and
* this is a noop, but in case 8 "area" has
@@ -3163,9 +3168,20 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
if (find_vma_links(mm, addr, addr + len, &prev, &rb_link, &rb_parent))
return NULL; /* should never get here */
- new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
- vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
- vma->vm_userfaultfd_ctx);
+
+ /* There are 3 cases to manage here in
+ * AAAA AAAA AAAA AAAA
+ * PPPP.... PPPP......NNNN PPPP....NNNN PP........NN
+ * PPPPPPPP(A) PPPP..NNNNNNNN(B) PPPPPPPPPPPP(1) NULL
+ * PPPPPPPPNNNN(2)
+ * PPPPNNNNNNNN(3)
+ *
+ * new_vma == prev in case A,1,2
+ * new_vma == next in case B,3
+ */
+ new_vma = __vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
+ vma->anon_vma, vma->vm_file, pgoff,
+ vma_policy(vma), vma->vm_userfaultfd_ctx, true);
if (new_vma) {
/*
* Source vma may have been merged into new_vma
@@ -3205,6 +3221,15 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
get_file(new_vma->vm_file);
if (new_vma->vm_ops && new_vma->vm_ops->open)
new_vma->vm_ops->open(new_vma);
+ /*
+ * As the VMA is linked right now, it may be hit by the
+ * speculative page fault handler. But we don't want it
+ * to start mapping pages in this area until the caller has
+ * potentially moved the ptes from the moved VMA. To prevent
+ * that we protect it right now, and let the caller unprotect
+ * it once the move is done.
+ */
+ vm_raw_write_begin(new_vma);
vma_link(mm, new_vma, prev, rb_link, rb_parent);
*need_rmap_locks = false;
}
diff --git a/mm/mremap.c b/mm/mremap.c
index 049470aa1e3e..8ed1a1d6eaed 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -302,6 +302,14 @@ static unsigned long move_vma(struct vm_area_struct *vma,
if (!new_vma)
return -ENOMEM;
+ /* new_vma is returned protected by copy_vma, to prevent speculative
+ * page faults from being handled in the destination area before we
+ * move the ptes. Now, we must also protect the source VMA since we
+ * don't want pages to be mapped behind our back while we are copying
+ * the PTEs.
+ */
+ if (vma != new_vma)
+ vm_raw_write_begin(vma);
+
moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len,
need_rmap_locks);
if (moved_len < old_len) {
@@ -318,6 +326,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
*/
move_page_tables(new_vma, new_addr, vma, old_addr, moved_len,
true);
+ if (vma != new_vma)
+ vm_raw_write_end(vma);
vma = new_vma;
old_len = new_len;
old_addr = new_addr;
@@ -326,7 +336,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
mremap_userfaultfd_prep(new_vma, uf);
arch_remap(mm, old_addr, old_addr + old_len,
new_addr, new_addr + new_len);
+ if (vma != new_vma)
+ vm_raw_write_end(vma);
}
+ vm_raw_write_end(new_vma);
/* Conceal VM_ACCOUNT so old reservation is not undone */
if (vm_flags & VM_ACCOUNT) {
--
2.7.4
From: Peter Zijlstra <[email protected]>
Try a speculative fault before acquiring mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
handle_speculative_fault()]
[Retry with usual fault path in the case VM_ERROR is returned by
handle_speculative_fault(). This allows signal to be delivered]
[Don't build SPF call if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Try speculative fault path only for multi threaded processes]
[Try to reuse the VMA fetched during the speculative path in case of
retry]
Signed-off-by: Laurent Dufour <[email protected]>
---
arch/x86/mm/fault.c | 38 +++++++++++++++++++++++++++++++++++++-
1 file changed, 37 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e6af2b464c3d..a73cf227edd6 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1239,6 +1239,9 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
unsigned long address)
{
struct vm_area_struct *vma;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ struct vm_area_struct *spf_vma = NULL;
+#endif
struct task_struct *tsk;
struct mm_struct *mm;
int fault, major = 0;
@@ -1332,6 +1335,27 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
if (error_code & X86_PF_INSTR)
flags |= FAULT_FLAG_INSTRUCTION;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ if ((error_code & X86_PF_USER) && (atomic_read(&mm->mm_users) > 1)) {
+ fault = handle_speculative_fault(mm, address, flags,
+ &spf_vma);
+
+ if (!(fault & VM_FAULT_RETRY)) {
+ if (!(fault & VM_FAULT_ERROR)) {
+ perf_sw_event(PERF_COUNT_SW_SPF, 1,
+ regs, address);
+ goto done;
+ }
+ /*
+ * In case of error we need the pkey value, but
+ * can't get it from the spf_vma as it is only returned
+ * when VM_FAULT_RETRY is returned. So we have to
+ * retry the page fault with the mmap_sem grabbed.
+ */
+ }
+ }
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
/*
* When running in the kernel we expect faults to occur only to
* addresses in user space. All other faults represent errors in
@@ -1365,7 +1389,16 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
might_sleep();
}
- vma = find_vma(mm, address);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ if (spf_vma) {
+ if (can_reuse_spf_vma(spf_vma, address))
+ vma = spf_vma;
+ else
+ vma = find_vma(mm, address);
+ spf_vma = NULL;
+ } else
+#endif
+ vma = find_vma(mm, address);
if (unlikely(!vma)) {
bad_area(regs, error_code, address);
return;
@@ -1451,6 +1484,9 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
return;
}
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+done:
+#endif
/*
* Major/minor page fault accounting. If any of the events
* returned VM_FAULT_MAJOR, we account it as a major fault.
--
2.7.4
When the speculative page fault handler returns VM_FAULT_RETRY, there is
a chance that the VMA fetched without grabbing the mmap_sem can be
reused by the legacy page fault handler. By reusing it, we avoid calling
find_vma() again. To achieve that, we must ensure that the VMA structure
will not be freed behind our back. This is done by taking a reference on
it (get_vma()) and by assuming that the caller will call the new service
can_reuse_spf_vma() once it has grabbed the mmap_sem.
can_reuse_spf_vma() first checks that the VMA is still in the RB tree,
then that the VMA's boundaries match the passed address, and releases
the reference on the VMA so that it can be freed if needed.
If the VMA has been freed, can_reuse_spf_vma() will have returned false
as the VMA is no longer in the RB tree.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/mm.h | 5 +-
mm/memory.c | 136 +++++++++++++++++++++++++++++++++--------------------
2 files changed, 88 insertions(+), 53 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1acc3f4e07d1..38a8c0041fd0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1357,7 +1357,10 @@ extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
unsigned int flags);
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
extern int handle_speculative_fault(struct mm_struct *mm,
- unsigned long address, unsigned int flags);
+ unsigned long address, unsigned int flags,
+ struct vm_area_struct **vma);
+extern bool can_reuse_spf_vma(struct vm_area_struct *vma,
+ unsigned long address);
#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
unsigned long address, unsigned int fault_flags,
diff --git a/mm/memory.c b/mm/memory.c
index f39c4a4df703..16d3f5f4ffdd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4292,13 +4292,22 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
/* This is required by vm_normal_page() */
#error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
#endif
-
/*
* vm_normal_page() adds some processing which should be done while
* holding the mmap_sem.
*/
+
+/*
+ * Tries to handle the page fault in a speculative way, without grabbing the
+ * mmap_sem.
+ * When VM_FAULT_RETRY is returned, the vma pointer is valid and this vma must
+ * be checked later when the mmap_sem has been grabbed by calling
+ * can_reuse_spf_vma().
+ * This is needed as the returned vma is kept in memory until the call to
+ * can_reuse_spf_vma() is made.
+ */
int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
- unsigned int flags)
+ unsigned int flags, struct vm_area_struct **vma)
{
struct vm_fault vmf = {
.address = address,
@@ -4307,7 +4316,6 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
p4d_t *p4d, p4dval;
pud_t pudval;
int seq, ret = VM_FAULT_RETRY;
- struct vm_area_struct *vma;
#ifdef CONFIG_NUMA
struct mempolicy *pol;
#endif
@@ -4316,14 +4324,16 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
flags |= FAULT_FLAG_SPECULATIVE;
- vma = get_vma(mm, address);
- if (!vma)
+ *vma = get_vma(mm, address);
+ if (!*vma)
return ret;
+ vmf.vma = *vma;
- seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+ /* rmb <-> seqlock,vma_rb_erase() */
+ seq = raw_read_seqcount(&vmf.vma->vm_sequence);
if (seq & 1) {
- trace_spf_vma_changed(_RET_IP_, vma, address);
- goto out_put;
+ trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+ return ret;
}
/*
@@ -4331,9 +4341,9 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
* with the VMA.
* This include huge page from hugetlbfs.
*/
- if (vma->vm_ops) {
- trace_spf_vma_notsup(_RET_IP_, vma, address);
- goto out_put;
+ if (vmf.vma->vm_ops) {
+ trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+ return ret;
}
/*
@@ -4341,18 +4351,18 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
* because vm_next and vm_prev must be safe. This can't be guaranteed
* in the speculative path.
*/
- if (unlikely(!vma->anon_vma)) {
- trace_spf_vma_notsup(_RET_IP_, vma, address);
- goto out_put;
+ if (unlikely(!vmf.vma->anon_vma)) {
+ trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+ return ret;
}
- vmf.vma_flags = READ_ONCE(vma->vm_flags);
- vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+ vmf.vma_flags = READ_ONCE(vmf.vma->vm_flags);
+ vmf.vma_page_prot = READ_ONCE(vmf.vma->vm_page_prot);
/* Can't call userland page fault handler in the speculative path */
if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
- trace_spf_vma_notsup(_RET_IP_, vma, address);
- goto out_put;
+ trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+ return ret;
}
if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
@@ -4361,48 +4371,39 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
* boundaries but we want to trace it as not supported instead
* of changed.
*/
- trace_spf_vma_notsup(_RET_IP_, vma, address);
- goto out_put;
+ trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+ return ret;
}
- if (address < READ_ONCE(vma->vm_start)
- || READ_ONCE(vma->vm_end) <= address) {
- trace_spf_vma_changed(_RET_IP_, vma, address);
- goto out_put;
+ if (address < READ_ONCE(vmf.vma->vm_start)
+ || READ_ONCE(vmf.vma->vm_end) <= address) {
+ trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+ return ret;
}
- if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+ if (!arch_vma_access_permitted(vmf.vma, flags & FAULT_FLAG_WRITE,
flags & FAULT_FLAG_INSTRUCTION,
- flags & FAULT_FLAG_REMOTE)) {
- trace_spf_vma_access(_RET_IP_, vma, address);
- ret = VM_FAULT_SIGSEGV;
- goto out_put;
- }
+ flags & FAULT_FLAG_REMOTE))
+ goto out_segv;
/* This one is required to check that the VMA has write access set */
if (flags & FAULT_FLAG_WRITE) {
- if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
- trace_spf_vma_access(_RET_IP_, vma, address);
- ret = VM_FAULT_SIGSEGV;
- goto out_put;
- }
- } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
- trace_spf_vma_access(_RET_IP_, vma, address);
- ret = VM_FAULT_SIGSEGV;
- goto out_put;
- }
+ if (unlikely(!(vmf.vma_flags & VM_WRITE)))
+ goto out_segv;
+ } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE))))
+ goto out_segv;
#ifdef CONFIG_NUMA
/*
* MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
* are not compatible with the speculative page fault processing.
*/
- pol = __get_vma_policy(vma, address);
+ pol = __get_vma_policy(vmf.vma, address);
if (!pol)
pol = get_task_policy(current);
if (pol && pol->mode == MPOL_INTERLEAVE) {
- trace_spf_vma_notsup(_RET_IP_, vma, address);
- goto out_put;
+ trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+ return ret;
}
#endif
@@ -4464,9 +4465,8 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
vmf.pte = NULL;
}
- vmf.vma = vma;
- vmf.pgoff = linear_page_index(vma, address);
- vmf.gfp_mask = __get_fault_gfp_mask(vma);
+ vmf.pgoff = linear_page_index(vmf.vma, address);
+ vmf.gfp_mask = __get_fault_gfp_mask(vmf.vma);
vmf.sequence = seq;
vmf.flags = flags;
@@ -4476,16 +4476,22 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
* We need to re-validate the VMA after checking the bounds, otherwise
* we might have a false positive on the bounds.
*/
- if (read_seqcount_retry(&vma->vm_sequence, seq)) {
- trace_spf_vma_changed(_RET_IP_, vma, address);
- goto out_put;
+ if (read_seqcount_retry(&vmf.vma->vm_sequence, seq)) {
+ trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+ return ret;
}
mem_cgroup_oom_enable();
ret = handle_pte_fault(&vmf);
mem_cgroup_oom_disable();
- put_vma(vma);
+ /*
+ * If there is no need to retry, don't return the vma to the caller.
+ */
+ if (!(ret & VM_FAULT_RETRY)) {
+ put_vma(vmf.vma);
+ *vma = NULL;
+ }
/*
* The task may have entered a memcg OOM situation but
@@ -4498,9 +4504,35 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
return ret;
out_walk:
- trace_spf_vma_notsup(_RET_IP_, vma, address);
+ trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
local_irq_enable();
-out_put:
+ return ret;
+
+out_segv:
+ trace_spf_vma_access(_RET_IP_, vmf.vma, address);
+ /*
+ * We don't return VM_FAULT_RETRY so the caller is not expected to
+ * retrieve the fetched VMA.
+ */
+ put_vma(vmf.vma);
+ *vma = NULL;
+ return VM_FAULT_SIGSEGV;
+}
+
+/*
+ * This is used to know if the vma fetch in the speculative page fault handler
+ * is still valid when trying the regular fault path while holding the
+ * mmap_sem.
+ * The call to put_vma(vma) must be made after checking the vma's fields, as
+ * the vma may be freed by put_vma(). In such a case it is expected that false
+ * is returned.
+ */
+bool can_reuse_spf_vma(struct vm_area_struct *vma, unsigned long address)
+{
+ bool ret;
+
+ ret = !RB_EMPTY_NODE(&vma->vm_rb) &&
+ vma->vm_start <= address && address < vma->vm_end;
put_vma(vma);
return ret;
}
--
2.7.4
This patch enables the speculative page fault handling on the PowerPC
architecture.
A speculative page fault is first attempted without holding the mmap_sem;
if it returns VM_FAULT_RETRY, the mmap_sem is acquired and the
traditional page fault processing is done.
The speculative path is only tried for multithreaded processes, as there
is no risk of contention on the mmap_sem otherwise.
Built only if CONFIG_SPECULATIVE_PAGE_FAULT is defined (currently for
BOOK3S_64 && SMP).
Signed-off-by: Laurent Dufour <[email protected]>
---
arch/powerpc/mm/fault.c | 31 ++++++++++++++++++++++++++++++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 866446cf2d9a..104f3cc86b51 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -392,6 +392,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
unsigned long error_code)
{
struct vm_area_struct * vma;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ struct vm_area_struct *spf_vma = NULL;
+#endif
struct mm_struct *mm = current->mm;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
int is_exec = TRAP(regs) == 0x400;
@@ -459,6 +462,20 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
if (is_exec)
flags |= FAULT_FLAG_INSTRUCTION;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ if (is_user && (atomic_read(&mm->mm_users) > 1)) {
+ /* let's try a speculative page fault without grabbing the
+ * mmap_sem.
+ */
+ fault = handle_speculative_fault(mm, address, flags, &spf_vma);
+ if (!(fault & VM_FAULT_RETRY)) {
+ perf_sw_event(PERF_COUNT_SW_SPF, 1,
+ regs, address);
+ goto done;
+ }
+ }
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
/* When running in the kernel we expect faults to occur only to
* addresses in user space. All other faults represent errors in the
* kernel and should generate an OOPS. Unfortunately, in the case of an
@@ -489,7 +506,16 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
might_sleep();
}
- vma = find_vma(mm, address);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ if (spf_vma) {
+ if (can_reuse_spf_vma(spf_vma, address))
+ vma = spf_vma;
+ else
+ vma = find_vma(mm, address);
+ spf_vma = NULL;
+ } else
+#endif
+ vma = find_vma(mm, address);
if (unlikely(!vma))
return bad_area(regs, address);
if (likely(vma->vm_start <= address))
@@ -568,6 +594,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
up_read(¤t->mm->mmap_sem);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+done:
+#endif
if (unlikely(fault & VM_FAULT_ERROR))
return mm_fault_error(regs, address, fault);
--
2.7.4
This patch adds a set of new trace events to collect the speculative page
fault failure events.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/trace/events/pagefault.h | 87 ++++++++++++++++++++++++++++++++++++++++
mm/memory.c | 62 ++++++++++++++++++++++------
2 files changed, 136 insertions(+), 13 deletions(-)
create mode 100644 include/trace/events/pagefault.h
diff --git a/include/trace/events/pagefault.h b/include/trace/events/pagefault.h
new file mode 100644
index 000000000000..1d793f8c739b
--- /dev/null
+++ b/include/trace/events/pagefault.h
@@ -0,0 +1,87 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM pagefault
+
+#if !defined(_TRACE_PAGEFAULT_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_PAGEFAULT_H
+
+#include <linux/tracepoint.h>
+#include <linux/mm.h>
+
+DECLARE_EVENT_CLASS(spf,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address),
+
+ TP_STRUCT__entry(
+ __field(unsigned long, caller)
+ __field(unsigned long, vm_start)
+ __field(unsigned long, vm_end)
+ __field(unsigned long, address)
+ ),
+
+ TP_fast_assign(
+ __entry->caller = caller;
+ __entry->vm_start = vma->vm_start;
+ __entry->vm_end = vma->vm_end;
+ __entry->address = address;
+ ),
+
+ TP_printk("ip:%lx vma:%lx-%lx address:%lx",
+ __entry->caller, __entry->vm_start, __entry->vm_end,
+ __entry->address)
+);
+
+DEFINE_EVENT(spf, spf_pte_lock,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_changed,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_noanon,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_notsup,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_access,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_pmd_changed,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+#endif /* _TRACE_PAGEFAULT_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/memory.c b/mm/memory.c
index f0f2caa11282..f39c4a4df703 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -80,6 +80,9 @@
#include "internal.h"
+#define CREATE_TRACE_POINTS
+#include <trace/events/pagefault.h>
+
#if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
#warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
#endif
@@ -2312,8 +2315,10 @@ static bool pte_spinlock(struct vm_fault *vmf)
}
local_irq_disable();
- if (vma_has_changed(vmf))
+ if (vma_has_changed(vmf)) {
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
@@ -2321,16 +2326,21 @@ static bool pte_spinlock(struct vm_fault *vmf)
* is not a huge collapse operation in progress behind our back.
*/
pmdval = READ_ONCE(*vmf->pmd);
- if (!pmd_same(pmdval, vmf->orig_pmd))
+ if (!pmd_same(pmdval, vmf->orig_pmd)) {
+ trace_spf_pmd_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#endif
vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
- if (unlikely(!spin_trylock(vmf->ptl)))
+ if (unlikely(!spin_trylock(vmf->ptl))) {
+ trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
if (vma_has_changed(vmf)) {
spin_unlock(vmf->ptl);
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
}
@@ -2363,8 +2373,10 @@ static bool pte_map_lock(struct vm_fault *vmf)
* block on the PTL and thus we're safe.
*/
local_irq_disable();
- if (vma_has_changed(vmf))
+ if (vma_has_changed(vmf)) {
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
@@ -2372,8 +2384,10 @@ static bool pte_map_lock(struct vm_fault *vmf)
* is not a huge collapse operation in progress behind our back.
*/
pmdval = READ_ONCE(*vmf->pmd);
- if (!pmd_same(pmdval, vmf->orig_pmd))
+ if (!pmd_same(pmdval, vmf->orig_pmd)) {
+ trace_spf_pmd_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#endif
/*
@@ -2387,11 +2401,13 @@ static bool pte_map_lock(struct vm_fault *vmf)
pte = pte_offset_map(vmf->pmd, vmf->address);
if (unlikely(!spin_trylock(ptl))) {
pte_unmap(pte);
+ trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address);
goto out;
}
if (vma_has_changed(vmf)) {
pte_unmap_unlock(pte, ptl);
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
}
@@ -4305,47 +4321,60 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
return ret;
seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
- if (seq & 1)
+ if (seq & 1) {
+ trace_spf_vma_changed(_RET_IP_, vma, address);
goto out_put;
+ }
/*
* Can't call vm_ops services as we don't know what they would do
* with the VMA.
* This includes huge pages from hugetlbfs.
*/
- if (vma->vm_ops)
+ if (vma->vm_ops) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
/*
* __anon_vma_prepare() requires the mmap_sem to be held
* because vm_next and vm_prev must be safe. This can't be guaranteed
* in the speculative path.
*/
- if (unlikely(!vma->anon_vma))
+ if (unlikely(!vma->anon_vma)) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
vmf.vma_flags = READ_ONCE(vma->vm_flags);
vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
/* Can't call userland page fault handler in the speculative path */
- if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+ if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
- if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+ if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
/*
* This could be detected by checking the address against the VMA's
* boundaries but we want to trace it as not supported instead
* of changed.
*/
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
if (address < READ_ONCE(vma->vm_start)
- || READ_ONCE(vma->vm_end) <= address)
+ || READ_ONCE(vma->vm_end) <= address) {
+ trace_spf_vma_changed(_RET_IP_, vma, address);
goto out_put;
+ }
if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
flags & FAULT_FLAG_INSTRUCTION,
flags & FAULT_FLAG_REMOTE)) {
+ trace_spf_vma_access(_RET_IP_, vma, address);
ret = VM_FAULT_SIGSEGV;
goto out_put;
}
@@ -4353,10 +4382,12 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
/* This one is required to check that the VMA has write access set */
if (flags & FAULT_FLAG_WRITE) {
if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
+ trace_spf_vma_access(_RET_IP_, vma, address);
ret = VM_FAULT_SIGSEGV;
goto out_put;
}
} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
+ trace_spf_vma_access(_RET_IP_, vma, address);
ret = VM_FAULT_SIGSEGV;
goto out_put;
}
@@ -4369,8 +4400,10 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
pol = __get_vma_policy(vma, address);
if (!pol)
pol = get_task_policy(current);
- if (pol && pol->mode == MPOL_INTERLEAVE)
+ if (pol && pol->mode == MPOL_INTERLEAVE) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
#endif
/*
@@ -4443,8 +4476,10 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
* We need to re-validate the VMA after checking the bounds, otherwise
* we might have a false positive on the bounds.
*/
- if (read_seqcount_retry(&vma->vm_sequence, seq))
+ if (read_seqcount_retry(&vma->vm_sequence, seq)) {
+ trace_spf_vma_changed(_RET_IP_, vma, address);
goto out_put;
+ }
mem_cgroup_oom_enable();
ret = handle_pte_fault(&vmf);
@@ -4463,6 +4498,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
return ret;
out_walk:
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
local_irq_enable();
out_put:
put_vma(vma);
--
2.7.4
Add support for the new speculative-faults software event.
Signed-off-by: Laurent Dufour <[email protected]>
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c | 4 ++++
tools/perf/util/parse-events.l | 1 +
tools/perf/util/python.c | 1 +
5 files changed, 8 insertions(+)
diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index 6f873503552d..a6ddab9edeec 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -112,6 +112,7 @@ enum perf_sw_ids {
PERF_COUNT_SW_EMULATION_FAULTS = 8,
PERF_COUNT_SW_DUMMY = 9,
PERF_COUNT_SW_BPF_OUTPUT = 10,
+ PERF_COUNT_SW_SPF = 11,
PERF_COUNT_SW_MAX, /* non-ABI */
};
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index ef351688b797..45b954019118 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -428,6 +428,7 @@ const char *perf_evsel__sw_names[PERF_COUNT_SW_MAX] = {
"alignment-faults",
"emulation-faults",
"dummy",
+ "speculative-faults",
};
static const char *__perf_evsel__sw_name(u64 config)
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 34589c427e52..2a8189c6d5fc 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -140,6 +140,10 @@ struct event_symbol event_symbols_sw[PERF_COUNT_SW_MAX] = {
.symbol = "bpf-output",
.alias = "",
},
+ [PERF_COUNT_SW_SPF] = {
+ .symbol = "speculative-faults",
+ .alias = "spf",
+ },
};
#define __PERF_EVENT_FIELD(config, name) \
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index 655ecff636a8..5d6782426b30 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -308,6 +308,7 @@ emulation-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_EM
dummy { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
duration_time { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
bpf-output { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_BPF_OUTPUT); }
+speculative-faults|spf { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_SPF); }
/*
* We have to handle the kernel PMU event cycles-ct/cycles-t/mem-loads/mem-stores separately.
diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
index 2918cac7a142..00dd227959e6 100644
--- a/tools/perf/util/python.c
+++ b/tools/perf/util/python.c
@@ -1174,6 +1174,7 @@ static struct {
PERF_CONST(COUNT_SW_ALIGNMENT_FAULTS),
PERF_CONST(COUNT_SW_EMULATION_FAULTS),
PERF_CONST(COUNT_SW_DUMMY),
+ PERF_CONST(COUNT_SW_SPF),
PERF_CONST(SAMPLE_IP),
PERF_CONST(SAMPLE_TID),
--
2.7.4
The current maybe_mkwrite() is passed a pointer to the vma structure
in order to fetch the vm_flags field.
When dealing with the speculative page fault handler, it is better to
rely on the cached vm_flags value stored in the vm_fault structure.
This patch introduces a __maybe_mkwrite() service which can be called
by passing the value of the vm_flags field directly.
No functional change is expected for the other callers of
maybe_mkwrite().
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/mm.h | 9 +++++++--
mm/memory.c | 6 +++---
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dfa81a638b7c..a84ddc218bbd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -684,13 +684,18 @@ void free_compound_page(struct page *page);
* pte_mkwrite. But get_user_pages can cause write faults for mappings
* that do not have writing enabled, when used by access_process_vm.
*/
-static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+static inline pte_t __maybe_mkwrite(pte_t pte, unsigned long vma_flags)
{
- if (likely(vma->vm_flags & VM_WRITE))
+ if (likely(vma_flags & VM_WRITE))
pte = pte_mkwrite(pte);
return pte;
}
+static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+{
+ return __maybe_mkwrite(pte, vma->vm_flags);
+}
+
int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
struct page *page);
int finish_fault(struct vm_fault *vmf);
diff --git a/mm/memory.c b/mm/memory.c
index 0a0a483d9a65..af0338fbc34d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
entry = pte_mkyoung(vmf->orig_pte);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
update_mmu_cache(vma, vmf->address, vmf->pte);
pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2549,8 +2549,8 @@ static int wp_page_copy(struct vm_fault *vmf)
inc_mm_counter_fast(mm, MM_ANONPAGES);
}
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
- entry = mk_pte(new_page, vma->vm_page_prot);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = mk_pte(new_page, vmf->vma_page_prot);
+ entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
/*
* Clear the pte entry and flush it first, before updating the
* pte with the new entry. This will avoid a race condition
--
2.7.4
From: Peter Zijlstra <[email protected]>
Provide infrastructure to do a speculative fault (not holding
mmap_sem).
The not holding of mmap_sem means we can race against VMA
change/removal and page-table destruction. We use the SRCU VMA freeing
to keep the VMA around. We use the VMA seqcount to detect change
(including unmapping / page-table deletion) and we use gup_fast() style
page-table walking to deal with page-table races.
Once we've obtained the page and are ready to update the PTE, we
validate if the state we started the fault with is still valid, if
not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
PTE and we're done.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
[Manage the newly introduced pte_spinlock() for speculative page
fault to fail if the VMA is touched behind our back]
[Rename vma_is_dead() to vma_has_changed() and declare it here]
[Fetch p4d and pud]
[Set vmd.sequence in __handle_mm_fault()]
[Abort speculative path when handle_userfault() has to be called]
[Add additional VMA's flags checks in handle_speculative_fault()]
[Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
[Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
[Remove warning comment about waiting for !seq&1 since we don't want
to wait]
[Remove warning about no huge page support, mention it explicitly]
[Don't call do_fault() in the speculative path as __do_fault() calls
vma->vm_ops->fault() which may want to release mmap_sem]
[Only vm_fault pointer argument for vma_has_changed()]
[Fix check against huge page, calling pmd_trans_huge()]
[Use READ_ONCE() when reading VMA's fields in the speculative path]
[Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support the
processing done in vm_normal_page()]
[Check that vma->anon_vma is already set when starting the speculative
path]
[Check for memory policy as we can't support MPOL_INTERLEAVE case due to
the processing done in mpol_misplaced()]
[Don't support VMA growing up or down]
[Move check on vm_sequence just before calling handle_pte_fault()]
[Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Add mem cgroup oom check]
[Use READ_ONCE to access p*d entries]
[Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
[Don't fetch pte again in handle_pte_fault() when running the speculative
path]
[Check PMD against concurrent collapsing operation]
[Try to spin lock the pte during the speculative path to avoid deadlock with
other CPUs invalidating the TLB and requiring this CPU to catch the
inter-processor interrupt]
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/hugetlb_inline.h | 2 +-
include/linux/mm.h | 8 +
include/linux/pagemap.h | 4 +-
mm/internal.h | 16 +-
mm/memory.c | 342 ++++++++++++++++++++++++++++++++++++++++-
5 files changed, 364 insertions(+), 8 deletions(-)
diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
index 0660a03d37d9..9e25283d6fc9 100644
--- a/include/linux/hugetlb_inline.h
+++ b/include/linux/hugetlb_inline.h
@@ -8,7 +8,7 @@
static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
{
- return !!(vma->vm_flags & VM_HUGETLB);
+ return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
}
#else
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73b8b99f482b..1acc3f4e07d1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -329,6 +329,10 @@ struct vm_fault {
gfp_t gfp_mask; /* gfp mask to be used for allocations */
pgoff_t pgoff; /* Logical page offset based on vma */
unsigned long address; /* Faulting virtual address */
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ unsigned int sequence;
+ pmd_t orig_pmd; /* value of PMD at the time of fault */
+#endif
pmd_t *pmd; /* Pointer to pmd entry matching
* the 'address' */
pud_t *pud; /* Pointer to pud entry matching
@@ -1351,6 +1355,10 @@ int invalidate_inode_page(struct page *page);
#ifdef CONFIG_MMU
extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
unsigned int flags);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+extern int handle_speculative_fault(struct mm_struct *mm,
+ unsigned long address, unsigned int flags);
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
unsigned long address, unsigned int fault_flags,
bool *unlocked);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 34ce3ebf97d5..70e4d2688e7b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
pgoff_t pgoff;
if (unlikely(is_vm_hugetlb_page(vma)))
return linear_hugepage_index(vma, address);
- pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
- pgoff += vma->vm_pgoff;
+ pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
+ pgoff += READ_ONCE(vma->vm_pgoff);
return pgoff;
}
diff --git a/mm/internal.h b/mm/internal.h
index fb2667b20f0a..10b188c87fa4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
extern struct vm_area_struct *get_vma(struct mm_struct *mm,
unsigned long addr);
extern void put_vma(struct vm_area_struct *vma);
-#endif
+
+static inline bool vma_has_changed(struct vm_fault *vmf)
+{
+ int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
+ unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
+
+ /*
+ * Matches both the wmb in write_seqlock_{begin,end}() and
+ * the wmb in vma_rb_erase().
+ */
+ smp_rmb();
+
+ return ret || seq != vmf->sequence;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
unsigned long floor, unsigned long ceiling);
diff --git a/mm/memory.c b/mm/memory.c
index 66517535514b..f0f2caa11282 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
if (page)
dump_page(page, "bad pte");
pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
- (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
+ (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
+ mapping, index);
pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
vma->vm_file,
vma->vm_ops ? vma->vm_ops->fault : NULL,
@@ -2295,19 +2296,127 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(apply_to_page_range);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
static bool pte_spinlock(struct vm_fault *vmf)
{
+ bool ret = false;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ pmd_t pmdval;
+#endif
+
+ /* Check if vma is still valid */
+ if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+ vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ spin_lock(vmf->ptl);
+ return true;
+ }
+
+ local_irq_disable();
+ if (vma_has_changed(vmf))
+ goto out;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ /*
+ * We check if the pmd value is still the same to ensure that there
+ * is not a huge collapse operation in progress behind our back.
+ */
+ pmdval = READ_ONCE(*vmf->pmd);
+ if (!pmd_same(pmdval, vmf->orig_pmd))
+ goto out;
+#endif
+
+ vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ if (unlikely(!spin_trylock(vmf->ptl)))
+ goto out;
+
+ if (vma_has_changed(vmf)) {
+ spin_unlock(vmf->ptl);
+ goto out;
+ }
+
+ ret = true;
+out:
+ local_irq_enable();
+ return ret;
+}
+
+static bool pte_map_lock(struct vm_fault *vmf)
+{
+ bool ret = false;
+ pte_t *pte;
+ spinlock_t *ptl;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ pmd_t pmdval;
+#endif
+
+ if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+ vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+ vmf->address, &vmf->ptl);
+ return true;
+ }
+
+ /*
+ * The first vma_has_changed() guarantees the page-tables are still
+ * valid, having IRQs disabled ensures they stay around, hence the
+ * second vma_has_changed() to make sure they are still valid once
+ * we've got the lock. After that a concurrent zap_pte_range() will
+ * block on the PTL and thus we're safe.
+ */
+ local_irq_disable();
+ if (vma_has_changed(vmf))
+ goto out;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ /*
+ * We check if the pmd value is still the same to ensure that there
+ * is not a huge collapse operation in progress behind our back.
+ */
+ pmdval = READ_ONCE(*vmf->pmd);
+ if (!pmd_same(pmdval, vmf->orig_pmd))
+ goto out;
+#endif
+
+ /*
+ * Same as pte_offset_map_lock() except that we call
+ * spin_trylock() in place of spin_lock() to avoid race with
+ * unmap path which may have the lock and wait for this CPU
+ * to invalidate TLB but this CPU has irq disabled.
+ * Since we are in a speculative path, accept that it could fail
+ */
+ ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ pte = pte_offset_map(vmf->pmd, vmf->address);
+ if (unlikely(!spin_trylock(ptl))) {
+ pte_unmap(pte);
+ goto out;
+ }
+
+ if (vma_has_changed(vmf)) {
+ pte_unmap_unlock(pte, ptl);
+ goto out;
+ }
+
+ vmf->pte = pte;
+ vmf->ptl = ptl;
+ ret = true;
+out:
+ local_irq_enable();
+ return ret;
+}
+#else
+static inline bool pte_spinlock(struct vm_fault *vmf)
+{
vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
spin_lock(vmf->ptl);
return true;
}
-static bool pte_map_lock(struct vm_fault *vmf)
+static inline bool pte_map_lock(struct vm_fault *vmf)
{
vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
vmf->address, &vmf->ptl);
return true;
}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
/*
* handle_pte_fault chooses page fault handler according to an entry which was
@@ -3184,6 +3293,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
ret = check_stable_address_space(vma->vm_mm);
if (ret)
goto unlock;
+ /*
+ * Don't call the userfaultfd during the speculative path.
+ * We already checked that the VMA is not managed through
+ * userfaultfd, but it may be set behind our back once we have
+ * locked the pte. In such a case we can ignore it this time.
+ */
+ if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+ goto setpte;
/* Deliver the page fault to userland, check inside PT lock */
if (userfaultfd_missing(vma)) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -3226,7 +3343,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
goto release;
/* Deliver the page fault to userland, check inside PT lock */
- if (userfaultfd_missing(vma)) {
+ if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
mem_cgroup_cancel_charge(page, memcg, false);
put_page(page);
@@ -3969,13 +4086,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
if (unlikely(pmd_none(*vmf->pmd))) {
/*
+ * In the case of the speculative page fault handler we abort
+ * the speculative path immediately as the pmd is probably
+ * about to be converted into a huge one. We will try
+ * again holding the mmap_sem (which implies that the collapse
+ * operation is done).
+ */
+ if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+ return VM_FAULT_RETRY;
+ /*
* Leave __pte_alloc() until later: because vm_ops->fault may
* want to allocate huge page, and if we expose page table
* for an instant, it will be difficult to retract from
* concurrent faults and from rmap lookups.
*/
vmf->pte = NULL;
- } else {
+ } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
/* See comment in pte_alloc_one_map() */
if (pmd_devmap_trans_unstable(vmf->pmd))
return 0;
@@ -3984,6 +4110,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
* pmd from under us anymore at this point because we hold the
* mmap_sem read mode and khugepaged takes it in write mode.
* So now it's safe to run pte_offset_map().
+ * This is not applicable to the speculative page fault handler
+ * but in that case, the pte is fetched earlier in
+ * handle_speculative_fault().
*/
vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
vmf->orig_pte = *vmf->pte;
@@ -4006,6 +4135,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
if (!vmf->pte) {
if (vma_is_anonymous(vmf->vma))
return do_anonymous_page(vmf);
+ else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+ return VM_FAULT_RETRY;
else
return do_fault(vmf);
}
@@ -4103,6 +4234,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
vmf.pmd = pmd_alloc(mm, vmf.pud, address);
if (!vmf.pmd)
return VM_FAULT_OOM;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
+#endif
if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
ret = create_huge_pmd(&vmf);
if (!(ret & VM_FAULT_FALLBACK))
@@ -4136,6 +4270,206 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
return handle_pte_fault(&vmf);
}
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+
+#ifndef __HAVE_ARCH_PTE_SPECIAL
+/* This is required by vm_normal_page() */
+#error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
+#endif
+
+/*
+ * vm_normal_page() adds some processing which should be done while
+ * holding the mmap_sem.
+ */
+int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
+ unsigned int flags)
+{
+ struct vm_fault vmf = {
+ .address = address,
+ };
+ pgd_t *pgd, pgdval;
+ p4d_t *p4d, p4dval;
+ pud_t pudval;
+ int seq, ret = VM_FAULT_RETRY;
+ struct vm_area_struct *vma;
+#ifdef CONFIG_NUMA
+ struct mempolicy *pol;
+#endif
+
+ /* Clear the flags that may lead to releasing the mmap_sem before retrying */
+ flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
+ flags |= FAULT_FLAG_SPECULATIVE;
+
+ vma = get_vma(mm, address);
+ if (!vma)
+ return ret;
+
+ seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+ if (seq & 1)
+ goto out_put;
+
+ /*
+ * Can't call the vm_ops services as we don't know what they would do
+ * with the VMA.
+ * This includes huge pages from hugetlbfs.
+ */
+ if (vma->vm_ops)
+ goto out_put;
+
+ /*
+ * __anon_vma_prepare() requires the mmap_sem to be held
+ * because vm_next and vm_prev must be safe. This can't be guaranteed
+ * in the speculative path.
+ */
+ if (unlikely(!vma->anon_vma))
+ goto out_put;
+
+ vmf.vma_flags = READ_ONCE(vma->vm_flags);
+ vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+
+ /* Can't call userland page fault handler in the speculative path */
+ if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+ goto out_put;
+
+ if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+ /*
+ * This could be detected by checking the address against the
+ * VMA's boundaries, but we want to trace it as not supported
+ * instead of as changed.
+ */
+ goto out_put;
+
+ if (address < READ_ONCE(vma->vm_start)
+ || READ_ONCE(vma->vm_end) <= address)
+ goto out_put;
+
+ if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+ flags & FAULT_FLAG_INSTRUCTION,
+ flags & FAULT_FLAG_REMOTE)) {
+ ret = VM_FAULT_SIGSEGV;
+ goto out_put;
+ }
+
+ /* This check is required to ensure that the VMA has write access set */
+ if (flags & FAULT_FLAG_WRITE) {
+ if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
+ ret = VM_FAULT_SIGSEGV;
+ goto out_put;
+ }
+ } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
+ ret = VM_FAULT_SIGSEGV;
+ goto out_put;
+ }
+
+#ifdef CONFIG_NUMA
+ /*
+ * MPOL_INTERLEAVE implies additional checks in mpol_misplaced() which
+ * are not compatible with the speculative page fault processing.
+ */
+ pol = __get_vma_policy(vma, address);
+ if (!pol)
+ pol = get_task_policy(current);
+ if (pol && pol->mode == MPOL_INTERLEAVE)
+ goto out_put;
+#endif
+
+ /*
+ * Do a speculative lookup of the PTE entry.
+ */
+ local_irq_disable();
+ pgd = pgd_offset(mm, address);
+ pgdval = READ_ONCE(*pgd);
+ if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
+ goto out_walk;
+
+ p4d = p4d_offset(pgd, address);
+ p4dval = READ_ONCE(*p4d);
+ if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
+ goto out_walk;
+
+ vmf.pud = pud_offset(p4d, address);
+ pudval = READ_ONCE(*vmf.pud);
+ if (pud_none(pudval) || unlikely(pud_bad(pudval)))
+ goto out_walk;
+
+ /* Huge pages at PUD level are not supported. */
+ if (unlikely(pud_trans_huge(pudval)))
+ goto out_walk;
+
+ vmf.pmd = pmd_offset(vmf.pud, address);
+ vmf.orig_pmd = READ_ONCE(*vmf.pmd);
+ /*
+ * pmd_none could mean that a hugepage collapse is in progress
+ * behind our back as collapse_huge_page() marks it before
+ * invalidating the pte (which is done once the IPI is caught
+ * by all CPUs and we have interrupts disabled).
+ * For this reason we cannot handle THP in a speculative way since we
+ * can't safely identify an in-progress collapse operation done behind
+ * our back on that PMD.
+ * Regarding the order of the following checks, see comment in
+ * pmd_devmap_trans_unstable()
+ */
+ if (unlikely(pmd_devmap(vmf.orig_pmd) ||
+ pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
+ is_swap_pmd(vmf.orig_pmd)))
+ goto out_walk;
+
+ /*
+ * The above does not allocate/instantiate page-tables because doing so
+ * would lead to the possibility of instantiating page-tables after
+ * free_pgtables() -- and consequently leaking them.
+ *
+ * The result is that we take at least one !speculative fault per PMD
+ * in order to instantiate it.
+ */
+
+ vmf.pte = pte_offset_map(vmf.pmd, address);
+ vmf.orig_pte = READ_ONCE(*vmf.pte);
+ barrier(); /* See comment in handle_pte_fault() */
+ if (pte_none(vmf.orig_pte)) {
+ pte_unmap(vmf.pte);
+ vmf.pte = NULL;
+ }
+
+ vmf.vma = vma;
+ vmf.pgoff = linear_page_index(vma, address);
+ vmf.gfp_mask = __get_fault_gfp_mask(vma);
+ vmf.sequence = seq;
+ vmf.flags = flags;
+
+ local_irq_enable();
+
+ /*
+ * We need to re-validate the VMA after checking the bounds, otherwise
+ * we might have a false positive on the bounds.
+ */
+ if (read_seqcount_retry(&vma->vm_sequence, seq))
+ goto out_put;
+
+ mem_cgroup_oom_enable();
+ ret = handle_pte_fault(&vmf);
+ mem_cgroup_oom_disable();
+
+ put_vma(vma);
+
+ /*
+ * The task may have entered a memcg OOM situation but
+ * if the allocation error was handled gracefully (no
+ * VM_FAULT_OOM), there is no need to kill anything.
+ * Just clean up the OOM state peacefully.
+ */
+ if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
+ mem_cgroup_oom_synchronize(false);
+ return ret;
+
+out_walk:
+ local_irq_enable();
+out_put:
+ put_vma(vma);
+ return ret;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
/*
* By the time we get here, we already hold the mm semaphore
*
--
2.7.4
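The seqcount handshake above (raw_read_seqcount() at entry, read_seqcount_retry() after walking the page tables) can be illustrated with a small userspace sketch. All names here are hypothetical stand-ins, and C11 atomics replace the kernel's seqcount primitives:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical userspace analogue of the vm_sequence protocol:
 * an odd count means a writer is mid-update, and a count that
 * changed after the read means the values read may be stale. */
struct spec_vma {
	atomic_uint seq;	/* stands in for vma->vm_sequence */
	unsigned long start;	/* stands in for vma->vm_start */
	unsigned long end;	/* stands in for vma->vm_end */
};

/* Read side: mirrors raw_read_seqcount() + read_seqcount_retry(). */
static bool spec_lookup(struct spec_vma *v, unsigned long addr)
{
	unsigned int seq = atomic_load(&v->seq);

	if (seq & 1)		/* writer in progress: abort, like SPF */
		return false;
	if (addr < v->start || addr >= v->end)
		return false;
	/* Re-check: if the count moved, the bounds above may be stale. */
	return atomic_load(&v->seq) == seq;
}

/* Write side: mirrors vm_write_begin()/vm_write_end(). */
static void spec_write_begin(struct spec_vma *v)
{
	atomic_fetch_add(&v->seq, 1);	/* count becomes odd */
}

static void spec_write_end(struct spec_vma *v)
{
	atomic_fetch_add(&v->seq, 1);	/* count becomes even again */
}
```

A lookup that races with a writer (odd count, or count changed under it) simply fails, exactly as the speculative handler falls back to the classic, mmap_sem-protected path.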
Add a new software event to count successful speculative page faults.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 6f873503552d..a6ddab9edeec 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -112,6 +112,7 @@ enum perf_sw_ids {
PERF_COUNT_SW_EMULATION_FAULTS = 8,
PERF_COUNT_SW_DUMMY = 9,
PERF_COUNT_SW_BPF_OUTPUT = 10,
+ PERF_COUNT_SW_SPF = 11,
PERF_COUNT_SW_MAX, /* non-ABI */
};
--
2.7.4
When dealing with the speculative fault path, we should use the VMA's
cached field values stored in the vm_fault structure.
Currently vm_normal_page() is using the pointer to the VMA to fetch the
vm_flags value. This patch provides a new __vm_normal_page() which
receives the vm_flags value as a parameter.
Note: the speculative path is only enabled on architectures providing
support for the special PTE flag, so only the first block of
vm_normal_page() is used during the speculative path.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/mm.h | 7 +++++--
mm/memory.c | 18 ++++++++++--------
2 files changed, 15 insertions(+), 10 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a84ddc218bbd..73b8b99f482b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1263,8 +1263,11 @@ struct zap_details {
pgoff_t last_index; /* Highest page->index to unmap */
};
-struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
- pte_t pte, bool with_public_device);
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+ pte_t pte, bool with_public_device,
+ unsigned long vma_flags);
+#define _vm_normal_page(vma, addr, pte, with_public_device) \
+ __vm_normal_page(vma, addr, pte, with_public_device, (vma)->vm_flags)
#define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false)
struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index af0338fbc34d..184a0d663a76 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -826,8 +826,9 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
#else
# define HAVE_PTE_SPECIAL 0
#endif
-struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
- pte_t pte, bool with_public_device)
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+ pte_t pte, bool with_public_device,
+ unsigned long vma_flags)
{
unsigned long pfn = pte_pfn(pte);
@@ -836,7 +837,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
goto check_pfn;
if (vma->vm_ops && vma->vm_ops->find_special_page)
return vma->vm_ops->find_special_page(vma, addr);
- if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+ if (vma_flags & (VM_PFNMAP | VM_MIXEDMAP))
return NULL;
if (is_zero_pfn(pfn))
return NULL;
@@ -868,8 +869,8 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
/* !HAVE_PTE_SPECIAL case follows: */
- if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
- if (vma->vm_flags & VM_MIXEDMAP) {
+ if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
+ if (vma_flags & VM_MIXEDMAP) {
if (!pfn_valid(pfn))
return NULL;
goto out;
@@ -878,7 +879,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
off = (addr - vma->vm_start) >> PAGE_SHIFT;
if (pfn == vma->vm_pgoff + off)
return NULL;
- if (!is_cow_mapping(vma->vm_flags))
+ if (!is_cow_mapping(vma_flags))
return NULL;
}
}
@@ -2742,7 +2743,8 @@ static int do_wp_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
+ vmf->page = __vm_normal_page(vma, vmf->address, vmf->orig_pte, false,
+ vmf->vma_flags);
if (!vmf->page) {
/*
* VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a
@@ -3839,7 +3841,7 @@ static int do_numa_page(struct vm_fault *vmf)
ptep_modify_prot_commit(vma->vm_mm, vmf->address, vmf->pte, pte);
update_mmu_cache(vma, vmf->address, vmf->pte);
- page = vm_normal_page(vma, vmf->address, pte);
+ page = __vm_normal_page(vma, vmf->address, pte, false, vmf->vma_flags);
if (!page) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
return 0;
--
2.7.4
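The refactoring pattern of this patch — an inner helper consuming a flags value captured earlier, wrapped by the legacy entry point that keeps its signature — can be sketched as follows. The names and flag values here are hypothetical, purely for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flag values for illustration only. */
#define DEMO_VM_PFNMAP   0x00000400UL
#define DEMO_VM_MIXEDMAP 0x10000000UL

struct demo_vma { unsigned long vm_flags; };

/* Inner helper: works on a flags snapshot, like __vm_normal_page().
 * A speculative caller can pass the value it read earlier, even if
 * the VMA itself has changed since. */
static bool __demo_is_normal(unsigned long vma_flags)
{
	return !(vma_flags & (DEMO_VM_PFNMAP | DEMO_VM_MIXEDMAP));
}

/* Legacy entry point keeps its signature, like vm_normal_page(),
 * and simply reads the flags from the object at call time. */
static bool demo_is_normal(struct demo_vma *vma)
{
	return __demo_is_normal(vma->vm_flags);
}
```

Callers holding the mmap_sem keep using the wrapper unchanged; the speculative path calls the inner helper with its snapshot.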
The speculative page fault handler, which runs without holding the
mmap_sem, calls lru_cache_add_active_or_unevictable(), but in that
context the vm_flags value is not guaranteed to remain constant.
Introduce __lru_cache_add_active_or_unevictable() which takes the vma
flags value as a parameter instead of the vma pointer.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/swap.h | 10 ++++++++--
mm/memory.c | 8 ++++----
mm/swap.c | 6 +++---
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 1985940af479..a7dc37e0e405 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -338,8 +338,14 @@ extern void deactivate_file_page(struct page *page);
extern void mark_page_lazyfree(struct page *page);
extern void swap_setup(void);
-extern void lru_cache_add_active_or_unevictable(struct page *page,
- struct vm_area_struct *vma);
+extern void __lru_cache_add_active_or_unevictable(struct page *page,
+ unsigned long vma_flags);
+
+static inline void lru_cache_add_active_or_unevictable(struct page *page,
+ struct vm_area_struct *vma)
+{
+ return __lru_cache_add_active_or_unevictable(page, vma->vm_flags);
+}
/* linux/mm/vmscan.c */
extern unsigned long zone_reclaimable_pages(struct zone *zone);
diff --git a/mm/memory.c b/mm/memory.c
index 412014d5785b..0a0a483d9a65 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2560,7 +2560,7 @@ static int wp_page_copy(struct vm_fault *vmf)
ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
page_add_new_anon_rmap(new_page, vma, vmf->address, false);
mem_cgroup_commit_charge(new_page, memcg, false, false);
- lru_cache_add_active_or_unevictable(new_page, vma);
+ __lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
/*
* We call the notify macro here because, when using secondary
* mmu page tables (such as kvm shadow page tables), we want the
@@ -3083,7 +3083,7 @@ int do_swap_page(struct vm_fault *vmf)
if (unlikely(page != swapcache && swapcache)) {
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
- lru_cache_add_active_or_unevictable(page, vma);
+ __lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
mem_cgroup_commit_charge(page, memcg, true, false);
@@ -3234,7 +3234,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
- lru_cache_add_active_or_unevictable(page, vma);
+ __lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
setpte:
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
@@ -3486,7 +3486,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
- lru_cache_add_active_or_unevictable(page, vma);
+ __lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
page_add_file_rmap(page, false);
diff --git a/mm/swap.c b/mm/swap.c
index 3dd518832096..f2f9c587246f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -455,12 +455,12 @@ void lru_cache_add(struct page *page)
* directly back onto it's zone's unevictable list, it does NOT use a
* per cpu pagevec.
*/
-void lru_cache_add_active_or_unevictable(struct page *page,
- struct vm_area_struct *vma)
+void __lru_cache_add_active_or_unevictable(struct page *page,
+ unsigned long vma_flags)
{
VM_BUG_ON_PAGE(PageLRU(page), page);
- if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
+ if (likely((vma_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
SetPageActive(page);
else if (!TestSetPageMlocked(page)) {
/*
--
2.7.4
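The predicate guarding SetPageActive() — put the page on the active list unless its VMA is mlocked and not special — depends only on the flags value, which is why a snapshot is sufficient. A minimal sketch, with hypothetical flag values (the real VM_SPECIAL covers several bits):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flag values for illustration only. */
#define DEMO_VM_LOCKED  0x00002000UL
#define DEMO_VM_IO      0x00004000UL
#define DEMO_VM_SPECIAL DEMO_VM_IO	/* simplified */

/* Mirrors the test in __lru_cache_add_active_or_unevictable():
 * active unless locked-and-not-special, i.e. the masked value is
 * exactly DEMO_VM_LOCKED. */
static bool demo_page_goes_active(unsigned long vma_flags)
{
	return (vma_flags & (DEMO_VM_LOCKED | DEMO_VM_SPECIAL)) !=
	       DEMO_VM_LOCKED;
}
```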
The speculative page fault handler must be protected against anon_vma
changes, because page_add_new_anon_rmap() is called during the
speculative path.
In addition, don't try the speculative page fault if the VMA doesn't
have an anon_vma structure allocated, because its allocation should be
protected by the mmap_sem.
In __vma_adjust(), when importer->anon_vma is set, there is no need to
protect against speculative page faults since the speculative page
fault is aborted if vma->anon_vma is not set.
When calling page_add_new_anon_rmap(), vma->anon_vma is necessarily
valid since we checked for it when locking the pte, and the anon_vma is
removed once the pte is unlocked. So even if the speculative page
fault handler runs concurrently with do_unmap(), the pte is locked in
unmap_region() - through unmap_vmas() - and the anon_vma is unlinked
later. Since the vma sequence counter is updated in unmap_page_range()
before the pte is locked, and then again in free_pgtables(), the change
will be detected when locking the pte.
Signed-off-by: Laurent Dufour <[email protected]>
---
mm/memory.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index d57749966fb8..0200340ef089 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -624,7 +624,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
* Hide vma from rmap and truncate_pagecache before freeing
* pgtables
*/
+ vm_write_begin(vma);
unlink_anon_vmas(vma);
+ vm_write_end(vma);
unlink_file_vma(vma);
if (is_vm_hugetlb_page(vma)) {
@@ -638,7 +640,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
&& !is_vm_hugetlb_page(next)) {
vma = next;
next = vma->vm_next;
+ vm_write_begin(vma);
unlink_anon_vmas(vma);
+ vm_write_end(vma);
unlink_file_vma(vma);
}
free_pgd_range(tlb, addr, vma->vm_end,
--
2.7.4
This change is inspired by Peter's proposal patch [1] which was
protecting the VMA using SRCU. Unfortunately, SRCU does not scale well
in that particular case, and it introduces major performance
degradation due to excessive scheduling operations.
To allow access to the mm_rb tree without grabbing the mmap_sem, this
patch protects access to it using a rwlock. As lookups in the mm_rb
tree are O(log n), it is safe to protect them with such a lock. The
VMA cache is not protected by the new rwlock and should not be used
without holding the mmap_sem.
To allow the picked VMA structure to be used once the rwlock is
released, a use count is added to the VMA structure. When the VMA is
allocated, it is set to 1. Each time the VMA is picked with the rwlock
held, its use count is incremented. Each time the VMA is released, it
is decremented. When the use count hits zero, the VMA is no longer
used and should be freed.
This patch prepares for 2 kinds of VMA access:
- as usual, under the control of the mmap_sem,
- without holding the mmap_sem, for the speculative page fault handler.
Accesses done under the control of the mmap_sem don't require grabbing
the rwlock to protect read access to the mm_rb tree, but write accesses
must be done under the protection of the rwlock too. This affects the
insertion and removal of elements in the RB tree.
The patch introduces 2 new functions:
- get_vma() to find a VMA based on an address, holding the new rwlock.
- put_vma() to release the VMA when it is no longer used.
These services are designed to be used when accesses are made to the RB
tree without holding the mmap_sem.
When a VMA is removed from the RB tree, its vma->vm_rb field is cleared and
we rely on the WMB done when releasing the rwlock to serialize the write
with the RMB done in a later patch to check for the VMA's validity.
When free_vma is called, the file associated with the VMA is closed
immediately, but the policy and the file structure remain in use until
the VMA's use count reaches 0, which may happen later when an
in-progress speculative page fault exits.
[1] https://patchwork.kernel.org/patch/5108281/
Cc: Peter Zijlstra (Intel) <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/mm_types.h | 4 ++
kernel/fork.c | 3 ++
mm/init-mm.c | 3 ++
mm/internal.h | 6 +++
mm/mmap.c | 122 ++++++++++++++++++++++++++++++++++-------------
5 files changed, 106 insertions(+), 32 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 34fde7111e88..28c763ea1036 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -335,6 +335,7 @@ struct vm_area_struct {
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
seqcount_t vm_sequence;
+ atomic_t vm_ref_count; /* see vma_get(), vma_put() */
#endif
} __randomize_layout;
@@ -353,6 +354,9 @@ struct kioctx_table;
struct mm_struct {
struct vm_area_struct *mmap; /* list of VMAs */
struct rb_root mm_rb;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ rwlock_t mm_rb_lock;
+#endif
u32 vmacache_seqnum; /* per-thread vmacache */
#ifdef CONFIG_MMU
unsigned long (*get_unmapped_area) (struct file *filp,
diff --git a/kernel/fork.c b/kernel/fork.c
index a32e1c4311b2..9ecac4f725b9 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -889,6 +889,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
mm->mmap = NULL;
mm->mm_rb = RB_ROOT;
mm->vmacache_seqnum = 0;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ rwlock_init(&mm->mm_rb_lock);
+#endif
atomic_set(&mm->mm_users, 1);
atomic_set(&mm->mm_count, 1);
init_rwsem(&mm->mmap_sem);
diff --git a/mm/init-mm.c b/mm/init-mm.c
index f94d5d15ebc0..e71ac37a98c4 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -17,6 +17,9 @@
struct mm_struct init_mm = {
.mm_rb = RB_ROOT,
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ .mm_rb_lock = __RW_LOCK_UNLOCKED(init_mm.mm_rb_lock),
+#endif
.pgd = swapper_pg_dir,
.mm_users = ATOMIC_INIT(2),
.mm_count = ATOMIC_INIT(1),
diff --git a/mm/internal.h b/mm/internal.h
index 62d8c34e63d5..fb2667b20f0a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -40,6 +40,12 @@ void page_writeback_init(void);
int do_swap_page(struct vm_fault *vmf);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+extern struct vm_area_struct *get_vma(struct mm_struct *mm,
+ unsigned long addr);
+extern void put_vma(struct vm_area_struct *vma);
+#endif
+
void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
unsigned long floor, unsigned long ceiling);
diff --git a/mm/mmap.c b/mm/mmap.c
index ac32b577a0c9..182359a5445c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -160,6 +160,27 @@ void unlink_file_vma(struct vm_area_struct *vma)
}
}
+static void __free_vma(struct vm_area_struct *vma)
+{
+ if (vma->vm_file)
+ fput(vma->vm_file);
+ mpol_put(vma_policy(vma));
+ kmem_cache_free(vm_area_cachep, vma);
+}
+
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+void put_vma(struct vm_area_struct *vma)
+{
+ if (atomic_dec_and_test(&vma->vm_ref_count))
+ __free_vma(vma);
+}
+#else
+static inline void put_vma(struct vm_area_struct *vma)
+{
+ return __free_vma(vma);
+}
+#endif
+
/*
* Close a vm structure and free it, returning the next.
*/
@@ -170,10 +191,7 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
might_sleep();
if (vma->vm_ops && vma->vm_ops->close)
vma->vm_ops->close(vma);
- if (vma->vm_file)
- fput(vma->vm_file);
- mpol_put(vma_policy(vma));
- kmem_cache_free(vm_area_cachep, vma);
+ put_vma(vma);
return next;
}
@@ -393,6 +411,14 @@ static void validate_mm(struct mm_struct *mm)
#define validate_mm(mm) do { } while (0)
#endif
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+#define mm_rb_write_lock(mm) write_lock(&(mm)->mm_rb_lock)
+#define mm_rb_write_unlock(mm) write_unlock(&(mm)->mm_rb_lock)
+#else
+#define mm_rb_write_lock(mm) do { } while (0)
+#define mm_rb_write_unlock(mm) do { } while (0)
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
RB_DECLARE_CALLBACKS(static, vma_gap_callbacks, struct vm_area_struct, vm_rb,
unsigned long, rb_subtree_gap, vma_compute_subtree_gap)
@@ -411,26 +437,37 @@ static void vma_gap_update(struct vm_area_struct *vma)
}
static inline void vma_rb_insert(struct vm_area_struct *vma,
- struct rb_root *root)
+ struct mm_struct *mm)
{
+ struct rb_root *root = &mm->mm_rb;
+
/* All rb_subtree_gap values must be consistent prior to insertion */
validate_mm_rb(root, NULL);
rb_insert_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
}
-static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
+static void __vma_rb_erase(struct vm_area_struct *vma, struct mm_struct *mm)
{
+ struct rb_root *root = &mm->mm_rb;
/*
* Note rb_erase_augmented is a fairly large inline function,
* so make sure we instantiate it only once with our desired
* augmented rbtree callbacks.
*/
+ mm_rb_write_lock(mm);
rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
+ mm_rb_write_unlock(mm); /* wmb */
+
+ /*
+ * Ensure the removal is complete before clearing the node.
+ * Matched by vma_has_changed()/handle_speculative_fault().
+ */
+ RB_CLEAR_NODE(&vma->vm_rb);
}
static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
- struct rb_root *root,
+ struct mm_struct *mm,
struct vm_area_struct *ignore)
{
/*
@@ -438,21 +475,21 @@ static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
* with the possible exception of the "next" vma being erased if
* next->vm_start was reduced.
*/
- validate_mm_rb(root, ignore);
+ validate_mm_rb(&mm->mm_rb, ignore);
- __vma_rb_erase(vma, root);
+ __vma_rb_erase(vma, mm);
}
static __always_inline void vma_rb_erase(struct vm_area_struct *vma,
- struct rb_root *root)
+ struct mm_struct *mm)
{
/*
* All rb_subtree_gap values must be consistent prior to erase,
* with the possible exception of the vma being erased.
*/
- validate_mm_rb(root, vma);
+ validate_mm_rb(&mm->mm_rb, vma);
- __vma_rb_erase(vma, root);
+ __vma_rb_erase(vma, mm);
}
/*
@@ -558,10 +595,6 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
else
mm->highest_vm_end = vm_end_gap(vma);
-#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
- seqcount_init(&vma->vm_sequence);
-#endif
-
/*
* vma->vm_prev wasn't known when we followed the rbtree to find the
* correct insertion point for that vma. As a result, we could not
@@ -571,10 +604,15 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
* immediately update the gap to the correct value. Finally we
* rebalance the rbtree after all augmented values have been set.
*/
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ atomic_set(&vma->vm_ref_count, 1);
+#endif
+ mm_rb_write_lock(mm);
rb_link_node(&vma->vm_rb, rb_parent, rb_link);
vma->rb_subtree_gap = 0;
vma_gap_update(vma);
- vma_rb_insert(vma, &mm->mm_rb);
+ vma_rb_insert(vma, mm);
+ mm_rb_write_unlock(mm);
}
static void __vma_link_file(struct vm_area_struct *vma)
@@ -650,7 +688,7 @@ static __always_inline void __vma_unlink_common(struct mm_struct *mm,
{
struct vm_area_struct *next;
- vma_rb_erase_ignore(vma, &mm->mm_rb, ignore);
+ vma_rb_erase_ignore(vma, mm, ignore);
next = vma->vm_next;
if (has_prev)
prev->vm_next = next;
@@ -923,16 +961,13 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
}
if (remove_next) {
- if (file) {
+ if (file)
uprobe_munmap(next, next->vm_start, next->vm_end);
- fput(file);
- }
if (next->anon_vma)
anon_vma_merge(vma, next);
mm->map_count--;
- mpol_put(vma_policy(next));
vm_raw_write_end(next);
- kmem_cache_free(vm_area_cachep, next);
+ put_vma(next);
/*
* In mprotect's case 6 (see comments on vma_merge),
* we must remove another next too. It would clutter
@@ -2182,15 +2217,11 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
EXPORT_SYMBOL(get_unmapped_area);
/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
-struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+static struct vm_area_struct *__find_vma(struct mm_struct *mm,
+ unsigned long addr)
{
struct rb_node *rb_node;
- struct vm_area_struct *vma;
-
- /* Check the cache first. */
- vma = vmacache_find(mm, addr);
- if (likely(vma))
- return vma;
+ struct vm_area_struct *vma = NULL;
rb_node = mm->mm_rb.rb_node;
@@ -2208,13 +2239,40 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
rb_node = rb_node->rb_right;
}
+ return vma;
+}
+
+struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma;
+
+ /* Check the cache first. */
+ vma = vmacache_find(mm, addr);
+ if (likely(vma))
+ return vma;
+
+ vma = __find_vma(mm, addr);
if (vma)
vmacache_update(addr, vma);
return vma;
}
-
EXPORT_SYMBOL(find_vma);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+struct vm_area_struct *get_vma(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma = NULL;
+
+ read_lock(&mm->mm_rb_lock);
+ vma = __find_vma(mm, addr);
+ if (vma)
+ atomic_inc(&vma->vm_ref_count);
+ read_unlock(&mm->mm_rb_lock);
+
+ return vma;
+}
+#endif
+
/*
* Same as find_vma, but also return a pointer to the previous VMA in *pprev.
*/
@@ -2582,7 +2640,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
insertion_point = (prev ? &prev->vm_next : &mm->mmap);
vma->vm_prev = NULL;
do {
- vma_rb_erase(vma, &mm->mm_rb);
+ vma_rb_erase(vma, mm);
mm->map_count--;
tail_vma = vma;
vma = vma->vm_next;
--
2.7.4
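The get_vma()/put_vma() life cycle added by this patch — one reference held by the RB tree, extra references taken by speculative lookups, and the final put freeing the structure — can be sketched as a hypothetical userspace analogue using C11 atomics:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-in for vm_area_struct's vm_ref_count. */
struct demo_vma {
	atomic_int vm_ref_count;
	bool *freed;		/* lets a caller observe the free */
};

static struct demo_vma *demo_vma_alloc(bool *freed)
{
	struct demo_vma *vma = malloc(sizeof(*vma));

	atomic_init(&vma->vm_ref_count, 1);	/* reference held by the tree */
	vma->freed = freed;
	*freed = false;
	return vma;
}

/* Taken while read-holding the (hypothetical) mm_rb_lock,
 * like get_vma(). */
static void demo_get_vma(struct demo_vma *vma)
{
	atomic_fetch_add(&vma->vm_ref_count, 1);
}

/* Mirrors the atomic_dec_and_test() in put_vma(): the last
 * reference dropped frees the structure. */
static void demo_put_vma(struct demo_vma *vma)
{
	if (atomic_fetch_sub(&vma->vm_ref_count, 1) == 1) {
		*vma->freed = true;
		free(vma);
	}
}
```

The key property is that removing the VMA from the tree only drops the tree's reference; a speculative fault still holding its own reference keeps the structure alive until its final put.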
migrate_misplaced_page() is only called during page fault handling, so
it's better to pass a pointer to the struct vm_fault instead of the vma.
This way, the saved vma->vm_flags value can be used in the speculative
page fault path.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/migrate.h | 4 ++--
mm/memory.c | 2 +-
mm/migrate.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f2b4abbca55e..fd4c3ab7bd9c 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -126,14 +126,14 @@ static inline void __ClearPageMovable(struct page *page)
#ifdef CONFIG_NUMA_BALANCING
extern bool pmd_trans_migrating(pmd_t pmd);
extern int migrate_misplaced_page(struct page *page,
- struct vm_area_struct *vma, int node);
+ struct vm_fault *vmf, int node);
#else
static inline bool pmd_trans_migrating(pmd_t pmd)
{
return false;
}
static inline int migrate_misplaced_page(struct page *page,
- struct vm_area_struct *vma, int node)
+ struct vm_fault *vmf, int node)
{
return -EAGAIN; /* can't migrate now */
}
diff --git a/mm/memory.c b/mm/memory.c
index 46fe92b93682..412014d5785b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3880,7 +3880,7 @@ static int do_numa_page(struct vm_fault *vmf)
}
/* Migrate to the requested node */
- migrated = migrate_misplaced_page(page, vma, target_nid);
+ migrated = migrate_misplaced_page(page, vmf, target_nid);
if (migrated) {
page_nid = target_nid;
flags |= TNF_MIGRATED;
diff --git a/mm/migrate.c b/mm/migrate.c
index 5d0dc7b85f90..ad8692ca6a4f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1900,7 +1900,7 @@ bool pmd_trans_migrating(pmd_t pmd)
* node. Caller is expected to have an elevated reference count on
* the page that will be dropped by this function before returning.
*/
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
+int migrate_misplaced_page(struct page *page, struct vm_fault *vmf,
int node)
{
pg_data_t *pgdat = NODE_DATA(node);
@@ -1913,7 +1913,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
* with execute permissions as they are probably shared libraries.
*/
if (page_mapcount(page) != 1 && page_is_file_cache(page) &&
- (vma->vm_flags & VM_EXEC))
+ (vmf->vma_flags & VM_EXEC))
goto out;
/*
--
2.7.4
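The pattern of this patch — callees consulting the flags snapshot carried by the fault descriptor rather than re-reading the (possibly changing) VMA — can be sketched as follows, with hypothetical names and flag values:

```c
#include <assert.h>
#include <stdbool.h>

#define DEMO_VM_EXEC 0x00000004UL	/* hypothetical value */

/* Minimal stand-ins for vm_area_struct and vm_fault. */
struct demo_vma { unsigned long vm_flags; };
struct demo_vmf { unsigned long vma_flags; };

static void demo_fault_init(struct demo_vmf *vmf,
			    const struct demo_vma *vma)
{
	vmf->vma_flags = vma->vm_flags;	/* snapshot taken once */
}

/* Mirrors the check in migrate_misplaced_page(): skip migrating
 * shared, file-backed, executable pages (probably shared libs),
 * but read the flags from the fault's snapshot. */
static bool demo_skip_exec_migration(const struct demo_vmf *vmf,
				     int mapcount, bool file_backed)
{
	return mapcount != 1 && file_backed &&
	       (vmf->vma_flags & DEMO_VM_EXEC);
}
```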
When dealing with the speculative page fault handler, we may race with
a VMA being split or merged. In this case the vma->vm_start and
vma->vm_end fields may not match the address at which the page fault
occurred.
This can only happen when the VMA is split, but in that case the
anon_vma pointer of the new VMA will be the same as the original one,
because in __split_vma the new->anon_vma is set to src->anon_vma when
*new = *vma.
So even if the VMA boundaries are not correct, the anon_vma pointer is
still valid.
If the VMA has been merged, then the VMA into which it has been merged
must have the same anon_vma pointer, otherwise the merge couldn't be
done.
So in all cases we know that the anon_vma is valid: we checked before
starting the speculative page fault that the anon_vma pointer is valid
for this VMA, and since an anon_vma exists, at some point a page has
been backed by it. Before the VMA is cleaned up, the page table lock
has to be grabbed to clear the PTE, and the anon_vma field is checked
once the PTE is locked.
This patch introduces a new __page_add_new_anon_rmap() service which
doesn't check the VMA boundaries, and creates a new inline one which
does the check.
When called from a page fault handler which is not speculative, there
is a guarantee that vm_start and vm_end match the faulting address, so
this check is useless. In the context of the speculative page fault
handler, this check may be wrong, but the anon_vma is still valid as
explained above.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/rmap.h | 12 ++++++++++--
mm/memory.c | 8 ++++----
mm/rmap.c | 5 ++---
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 988d176472df..a5d282573093 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -174,8 +174,16 @@ void page_add_anon_rmap(struct page *, struct vm_area_struct *,
unsigned long, bool);
void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
unsigned long, int);
-void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
- unsigned long, bool);
+void __page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
+ unsigned long, bool);
+static inline void page_add_new_anon_rmap(struct page *page,
+ struct vm_area_struct *vma,
+ unsigned long address, bool compound)
+{
+ VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+ __page_add_new_anon_rmap(page, vma, address, compound);
+}
+
void page_add_file_rmap(struct page *, bool);
void page_remove_rmap(struct page *, bool);
diff --git a/mm/memory.c b/mm/memory.c
index 184a0d663a76..66517535514b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2559,7 +2559,7 @@ static int wp_page_copy(struct vm_fault *vmf)
* thread doing COW.
*/
ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
- page_add_new_anon_rmap(new_page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(new_page, vma, vmf->address, false);
mem_cgroup_commit_charge(new_page, memcg, false, false);
__lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
/*
@@ -3083,7 +3083,7 @@ int do_swap_page(struct vm_fault *vmf)
/* ksm created a completely new copy */
if (unlikely(page != swapcache && swapcache)) {
- page_add_new_anon_rmap(page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
@@ -3234,7 +3234,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
}
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
- page_add_new_anon_rmap(page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
setpte:
@@ -3486,7 +3486,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
/* copy-on-write page */
if (write && !(vmf->vma_flags & VM_SHARED)) {
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
- page_add_new_anon_rmap(page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
diff --git a/mm/rmap.c b/mm/rmap.c
index 9eaa6354fe70..e028d660c304 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1136,7 +1136,7 @@ void do_page_add_anon_rmap(struct page *page,
}
/**
- * page_add_new_anon_rmap - add pte mapping to a new anonymous page
+ * __page_add_new_anon_rmap - add pte mapping to a new anonymous page
* @page: the page to add the mapping to
* @vma: the vm area in which the mapping is added
* @address: the user virtual address mapped
@@ -1146,12 +1146,11 @@ void do_page_add_anon_rmap(struct page *page,
* This means the inc-and-test can be bypassed.
* Page does not have to be locked.
*/
-void page_add_new_anon_rmap(struct page *page,
+void __page_add_new_anon_rmap(struct page *page,
struct vm_area_struct *vma, unsigned long address, bool compound)
{
int nr = compound ? hpage_nr_pages(page) : 1;
- VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
__SetPageSwapBacked(page);
if (compound) {
VM_BUG_ON_PAGE(!PageTransHuge(page), page);
--
2.7.4
When handling a speculative page fault, the vma->vm_flags and
vma->vm_page_prot fields are read once the page table lock is released,
so there is no longer any guarantee that these fields will not change
behind our back. They are therefore saved in the vm_fault structure
before the VMA is checked for changes.
This patch also sets these fields in hugetlb_no_page() and
__collapse_huge_page_swapin(), even if they are not needed by the
callees.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/mm.h | 6 ++++++
mm/hugetlb.c | 2 ++
mm/khugepaged.c | 2 ++
mm/memory.c | 38 ++++++++++++++++++++------------------
4 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef6ef0627090..dfa81a638b7c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -359,6 +359,12 @@ struct vm_fault {
* page table to avoid allocation from
* atomic context.
*/
+ /*
+ * These entries are required when handling speculative page fault.
+ * This way the page handling is done using consistent field values.
+ */
+ unsigned long vma_flags;
+ pgprot_t vma_page_prot;
};
/* page entry size for vm->huge_fault() */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 446427cafa19..f71db2b42b30 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3717,6 +3717,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
.vma = vma,
.address = address,
.flags = flags,
+ .vma_flags = vma->vm_flags,
+ .vma_page_prot = vma->vm_page_prot,
/*
* Hard to debug if it ends up being
* used by a callee that assumes
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 32314e9e48dd..a946d5306160 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -882,6 +882,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
.flags = FAULT_FLAG_ALLOW_RETRY,
.pmd = pmd,
.pgoff = linear_page_index(vma, address),
+ .vma_flags = vma->vm_flags,
+ .vma_page_prot = vma->vm_page_prot,
};
/* we only decide to swapin, if there is enough young ptes */
diff --git a/mm/memory.c b/mm/memory.c
index 0200340ef089..46fe92b93682 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2615,7 +2615,7 @@ static int wp_page_copy(struct vm_fault *vmf)
* Don't let another task, with possibly unlocked vma,
* keep the mlocked page.
*/
- if (page_copied && (vma->vm_flags & VM_LOCKED)) {
+ if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
lock_page(old_page); /* LRU manipulation */
if (PageMlocked(old_page))
munlock_vma_page(old_page);
@@ -2649,7 +2649,7 @@ static int wp_page_copy(struct vm_fault *vmf)
*/
int finish_mkwrite_fault(struct vm_fault *vmf)
{
- WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
+ WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
if (!pte_map_lock(vmf))
return VM_FAULT_RETRY;
/*
@@ -2751,7 +2751,7 @@ static int do_wp_page(struct vm_fault *vmf)
* We should not cow pages in a shared writeable mapping.
* Just mark the pages writable and/or call ops->pfn_mkwrite.
*/
- if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+ if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
(VM_WRITE|VM_SHARED))
return wp_pfn_shared(vmf);
@@ -2798,7 +2798,7 @@ static int do_wp_page(struct vm_fault *vmf)
return VM_FAULT_WRITE;
}
unlock_page(vmf->page);
- } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+ } else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
(VM_WRITE|VM_SHARED))) {
return wp_page_shared(vmf);
}
@@ -3067,7 +3067,7 @@ int do_swap_page(struct vm_fault *vmf)
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
- pte = mk_pte(page, vma->vm_page_prot);
+ pte = mk_pte(page, vmf->vma_page_prot);
if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
pte = maybe_mkwrite(pte_mkdirty(pte), vma);
vmf->flags &= ~FAULT_FLAG_WRITE;
@@ -3093,7 +3093,7 @@ int do_swap_page(struct vm_fault *vmf)
swap_free(entry);
if (mem_cgroup_swap_full(page) ||
- (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
+ (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
try_to_free_swap(page);
unlock_page(page);
if (page != swapcache && swapcache) {
@@ -3150,7 +3150,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
pte_t entry;
/* File mapping without ->vm_ops ? */
- if (vma->vm_flags & VM_SHARED)
+ if (vmf->vma_flags & VM_SHARED)
return VM_FAULT_SIGBUS;
/*
@@ -3174,7 +3174,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
if (!(vmf->flags & FAULT_FLAG_WRITE) &&
!mm_forbids_zeropage(vma->vm_mm)) {
entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
- vma->vm_page_prot));
+ vmf->vma_page_prot));
if (!pte_map_lock(vmf))
return VM_FAULT_RETRY;
if (!pte_none(*vmf->pte))
@@ -3207,8 +3207,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
*/
__SetPageUptodate(page);
- entry = mk_pte(page, vma->vm_page_prot);
- if (vma->vm_flags & VM_WRITE)
+ entry = mk_pte(page, vmf->vma_page_prot);
+ if (vmf->vma_flags & VM_WRITE)
entry = pte_mkwrite(pte_mkdirty(entry));
if (!pte_map_lock(vmf)) {
@@ -3404,7 +3404,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
for (i = 0; i < HPAGE_PMD_NR; i++)
flush_icache_page(vma, page + i);
- entry = mk_huge_pmd(page, vma->vm_page_prot);
+ entry = mk_huge_pmd(page, vmf->vma_page_prot);
if (write)
entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
@@ -3478,11 +3478,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
return VM_FAULT_NOPAGE;
flush_icache_page(vma, page);
- entry = mk_pte(page, vma->vm_page_prot);
+ entry = mk_pte(page, vmf->vma_page_prot);
if (write)
entry = maybe_mkwrite(pte_mkdirty(entry), vma);
/* copy-on-write page */
- if (write && !(vma->vm_flags & VM_SHARED)) {
+ if (write && !(vmf->vma_flags & VM_SHARED)) {
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
@@ -3521,7 +3521,7 @@ int finish_fault(struct vm_fault *vmf)
/* Did we COW the page? */
if ((vmf->flags & FAULT_FLAG_WRITE) &&
- !(vmf->vma->vm_flags & VM_SHARED))
+ !(vmf->vma_flags & VM_SHARED))
page = vmf->cow_page;
else
page = vmf->page;
@@ -3775,7 +3775,7 @@ static int do_fault(struct vm_fault *vmf)
ret = VM_FAULT_SIGBUS;
else if (!(vmf->flags & FAULT_FLAG_WRITE))
ret = do_read_fault(vmf);
- else if (!(vma->vm_flags & VM_SHARED))
+ else if (!(vmf->vma_flags & VM_SHARED))
ret = do_cow_fault(vmf);
else
ret = do_shared_fault(vmf);
@@ -3832,7 +3832,7 @@ static int do_numa_page(struct vm_fault *vmf)
* accessible ptes, some can allow access by kernel mode.
*/
pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
- pte = pte_modify(pte, vma->vm_page_prot);
+ pte = pte_modify(pte, vmf->vma_page_prot);
pte = pte_mkyoung(pte);
if (was_writable)
pte = pte_mkwrite(pte);
@@ -3866,7 +3866,7 @@ static int do_numa_page(struct vm_fault *vmf)
* Flag if the page is shared between multiple address spaces. This
* is later used when determining whether to group tasks together
*/
- if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
+ if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
flags |= TNF_SHARED;
last_cpupid = page_cpupid_last(page);
@@ -3911,7 +3911,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
/* COW handled on pte level: split pmd */
- VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
+ VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
return VM_FAULT_FALLBACK;
@@ -4058,6 +4058,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
.flags = flags,
.pgoff = linear_page_index(vma, address),
.gfp_mask = __get_fault_gfp_mask(vma),
+ .vma_flags = vma->vm_flags,
+ .vma_page_prot = vma->vm_page_prot,
};
unsigned int dirty = flags & FAULT_FLAG_WRITE;
struct mm_struct *mm = vma->vm_mm;
--
2.7.4
The VMA sequence count has been introduced to allow fast detection of
VMA modifications when running a page fault handler without holding
the mmap_sem.
This patch provides protection against the VMA modifications done in:
- madvise()
- mpol_rebind_policy()
- vma_replace_policy()
- change_prot_numa()
- mlock(), munlock()
- mprotect()
- mmap_region()
- collapse_huge_page()
- userfaultfd registering services
In addition, the VMA fields which will be read during the speculative
fault path need to be written using WRITE_ONCE() to prevent a write
from being split and intermediate values from being pushed to other
CPUs.
Signed-off-by: Laurent Dufour <[email protected]>
---
fs/proc/task_mmu.c | 5 ++++-
fs/userfaultfd.c | 17 +++++++++++++----
mm/khugepaged.c | 3 +++
mm/madvise.c | 6 +++++-
mm/mempolicy.c | 51 ++++++++++++++++++++++++++++++++++-----------------
mm/mlock.c | 13 ++++++++-----
mm/mmap.c | 17 ++++++++++-------
mm/mprotect.c | 4 +++-
mm/swap_state.c | 8 ++++++--
9 files changed, 86 insertions(+), 38 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 65ae54659833..a2d9c87b7b0b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
goto out_mm;
}
for (vma = mm->mmap; vma; vma = vma->vm_next) {
- vma->vm_flags &= ~VM_SOFTDIRTY;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags,
+ vma->vm_flags & ~VM_SOFTDIRTY);
vma_set_page_prot(vma);
+ vm_write_end(vma);
}
downgrade_write(&mm->mmap_sem);
break;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index cec550c8468f..b8212ba17695 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -659,8 +659,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
octx = vma->vm_userfaultfd_ctx.ctx;
if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
+ vm_write_begin(vma);
vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
- vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
+ WRITE_ONCE(vma->vm_flags,
+ vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
+ vm_write_end(vma);
return 0;
}
@@ -885,8 +888,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
vma = prev;
else
prev = vma;
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ vm_write_end(vma);
}
up_write(&mm->mmap_sem);
mmput(mm);
@@ -1434,8 +1439,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
* the next vma was merged into the current one and
* the current one has not been updated yet.
*/
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
vma->vm_userfaultfd_ctx.ctx = ctx;
+ vm_write_end(vma);
skip:
prev = vma;
@@ -1592,8 +1599,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
* the next vma was merged into the current one and
* the current one has not been updated yet.
*/
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ vm_write_end(vma);
skip:
prev = vma;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b7e2268dfc9a..32314e9e48dd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1006,6 +1006,7 @@ static void collapse_huge_page(struct mm_struct *mm,
if (mm_find_pmd(mm, address) != pmd)
goto out;
+ vm_write_begin(vma);
anon_vma_lock_write(vma->anon_vma);
pte = pte_offset_map(pmd, address);
@@ -1041,6 +1042,7 @@ static void collapse_huge_page(struct mm_struct *mm,
pmd_populate(mm, pmd, pmd_pgtable(_pmd));
spin_unlock(pmd_ptl);
anon_vma_unlock_write(vma->anon_vma);
+ vm_write_end(vma);
result = SCAN_FAIL;
goto out;
}
@@ -1075,6 +1077,7 @@ static void collapse_huge_page(struct mm_struct *mm,
set_pmd_at(mm, address, pmd, _pmd);
update_mmu_cache_pmd(vma, address, pmd);
spin_unlock(pmd_ptl);
+ vm_write_end(vma);
*hpage = NULL;
diff --git a/mm/madvise.c b/mm/madvise.c
index 4d3c922ea1a1..e328f7ab5942 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -184,7 +184,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
/*
* vm_flags is protected by the mmap_sem held in write mode.
*/
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
+ vm_write_end(vma);
out:
return error;
}
@@ -450,9 +452,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
.private = tlb,
};
+ vm_write_begin(vma);
tlb_start_vma(tlb, vma);
walk_page_range(addr, end, &free_walk);
tlb_end_vma(tlb, vma);
+ vm_write_end(vma);
}
static int madvise_free_single_vma(struct vm_area_struct *vma,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e0e706f0b34e..2632c6f93b63 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -380,8 +380,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
struct vm_area_struct *vma;
down_write(&mm->mmap_sem);
- for (vma = mm->mmap; vma; vma = vma->vm_next)
+ for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ vm_write_begin(vma);
mpol_rebind_policy(vma->vm_policy, new);
+ vm_write_end(vma);
+ }
up_write(&mm->mmap_sem);
}
@@ -554,9 +557,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
{
int nr_updated;
+ vm_write_begin(vma);
nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
if (nr_updated)
count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
+ vm_write_end(vma);
return nr_updated;
}
@@ -657,6 +662,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
if (IS_ERR(new))
return PTR_ERR(new);
+ vm_write_begin(vma);
if (vma->vm_ops && vma->vm_ops->set_policy) {
err = vma->vm_ops->set_policy(vma, new);
if (err)
@@ -664,11 +670,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
}
old = vma->vm_policy;
- vma->vm_policy = new; /* protected by mmap_sem */
+ /*
+ * The speculative page fault handler accesses this field without
+ * holding the mmap_sem.
+ */
+ WRITE_ONCE(vma->vm_policy, new);
+ vm_write_end(vma);
mpol_put(old);
return 0;
err_out:
+ vm_write_end(vma);
mpol_put(new);
return err;
}
@@ -1552,23 +1564,28 @@ COMPAT_SYSCALL_DEFINE6(mbind, compat_ulong_t, start, compat_ulong_t, len,
struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
unsigned long addr)
{
- struct mempolicy *pol = NULL;
+ struct mempolicy *pol;
- if (vma) {
- if (vma->vm_ops && vma->vm_ops->get_policy) {
- pol = vma->vm_ops->get_policy(vma, addr);
- } else if (vma->vm_policy) {
- pol = vma->vm_policy;
+ if (!vma)
+ return NULL;
- /*
- * shmem_alloc_page() passes MPOL_F_SHARED policy with
- * a pseudo vma whose vma->vm_ops=NULL. Take a reference
- * count on these policies which will be dropped by
- * mpol_cond_put() later
- */
- if (mpol_needs_cond_ref(pol))
- mpol_get(pol);
- }
+ if (vma->vm_ops && vma->vm_ops->get_policy)
+ return vma->vm_ops->get_policy(vma, addr);
+
+ /*
+ * This could be called without holding the mmap_sem in the
+ * speculative page fault handler's path.
+ */
+ pol = READ_ONCE(vma->vm_policy);
+ if (pol) {
+ /*
+ * shmem_alloc_page() passes MPOL_F_SHARED policy with
+ * a pseudo vma whose vma->vm_ops=NULL. Take a reference
+ * count on these policies which will be dropped by
+ * mpol_cond_put() later
+ */
+ if (mpol_needs_cond_ref(pol))
+ mpol_get(pol);
}
return pol;
diff --git a/mm/mlock.c b/mm/mlock.c
index 74e5a6547c3d..c40285c94ced 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -445,7 +445,9 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
void munlock_vma_pages_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
- vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, vma->vm_flags & VM_LOCKED_CLEAR_MASK);
+ vm_write_end(vma);
while (start < end) {
struct page *page;
@@ -568,10 +570,11 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
* It's okay if try_to_unmap_one unmaps a page just after we
* set VM_LOCKED, populate_vma_page_range will bring it back.
*/
-
- if (lock)
- vma->vm_flags = newflags;
- else
+ if (lock) {
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, newflags);
+ vm_write_end(vma);
+ } else
munlock_vma_pages_range(vma, start, end);
out:
diff --git a/mm/mmap.c b/mm/mmap.c
index 5898255d0aeb..d6533cb85213 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -847,17 +847,18 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
}
if (start != vma->vm_start) {
- vma->vm_start = start;
+ WRITE_ONCE(vma->vm_start, start);
start_changed = true;
}
if (end != vma->vm_end) {
- vma->vm_end = end;
+ WRITE_ONCE(vma->vm_end, end);
end_changed = true;
}
- vma->vm_pgoff = pgoff;
+ WRITE_ONCE(vma->vm_pgoff, pgoff);
if (adjust_next) {
- next->vm_start += adjust_next << PAGE_SHIFT;
- next->vm_pgoff += adjust_next;
+ WRITE_ONCE(next->vm_start,
+ next->vm_start + (adjust_next << PAGE_SHIFT));
+ WRITE_ONCE(next->vm_pgoff, next->vm_pgoff + adjust_next);
}
if (root) {
@@ -1781,6 +1782,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
out:
perf_event_mmap(vma);
+ vm_write_begin(vma);
vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
if (vm_flags & VM_LOCKED) {
if (!((vm_flags & VM_SPECIAL) || is_vm_hugetlb_page(vma) ||
@@ -1803,6 +1805,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
vma->vm_flags |= VM_SOFTDIRTY;
vma_set_page_prot(vma);
+ vm_write_end(vma);
return addr;
@@ -2431,8 +2434,8 @@ int expand_downwards(struct vm_area_struct *vma,
mm->locked_vm += grow;
vm_stat_account(mm, vma->vm_flags, grow);
anon_vma_interval_tree_pre_update_vma(vma);
- vma->vm_start = address;
- vma->vm_pgoff -= grow;
+ WRITE_ONCE(vma->vm_start, address);
+ WRITE_ONCE(vma->vm_pgoff, vma->vm_pgoff - grow);
anon_vma_interval_tree_post_update_vma(vma);
vma_gap_update(vma);
spin_unlock(&mm->page_table_lock);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index e3309fcf586b..9b7a71c30287 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -366,7 +366,8 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
* vm_flags and vm_page_prot are protected by the mmap_sem
* held in write mode.
*/
- vma->vm_flags = newflags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, newflags);
dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
vma_set_page_prot(vma);
@@ -381,6 +382,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
(newflags & VM_WRITE)) {
populate_vma_page_range(vma, start, end, NULL);
}
+ vm_write_end(vma);
vm_stat_account(mm, oldflags, -nrpages);
vm_stat_account(mm, newflags, nrpages);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 486f7c26386e..691bd6e06967 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -575,6 +575,10 @@ static unsigned long swapin_nr_pages(unsigned long offset)
* the readahead.
*
* Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.
+ * This is needed to ensure the VMA will not be freed behind our back. In the case
+ * of the speculative page fault handler, this cannot happen, even if we don't
+ * hold the mmap_sem. Callees are assumed to take care of reading VMA's fields
+ * using READ_ONCE() to read consistent values.
*/
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
struct vm_fault *vmf)
@@ -669,9 +673,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
unsigned long *start,
unsigned long *end)
{
- *start = max3(lpfn, PFN_DOWN(vma->vm_start),
+ *start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
PFN_DOWN(faddr & PMD_MASK));
- *end = min3(rpfn, PFN_DOWN(vma->vm_end),
+ *end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
}
--
2.7.4
pte_unmap_same() assumes that the page tables are still around because
the mmap_sem is held.
This is no longer the case when running a speculative page fault, and
an additional check must be made to ensure that the final page table is
still there.
This is now done by calling pte_spinlock(), which checks for the VMA's
consistency while taking the page table lock.
This requires passing a vm_fault structure to pte_unmap_same(), which
contains all the needed parameters.
As pte_spinlock() may fail in the case of a speculative page fault, if
the VMA has been touched behind our back, pte_unmap_same() now returns
three cases:
1. the PTEs are the same (0)
2. the PTEs are different (VM_FAULT_PTNOTSAME)
3. a VMA change has been detected (VM_FAULT_RETRY)
Case 2 is handled by the introduction of a new VM_FAULT flag named
VM_FAULT_PTNOTSAME, which is then trapped in cow_user_page().
If VM_FAULT_RETRY is returned, it is passed up to the callers to retry
the page fault while holding the mmap_sem.
Signed-off-by: Laurent Dufour <[email protected]>
---
include/linux/mm.h | 1 +
mm/memory.c | 29 +++++++++++++++++++----------
2 files changed, 20 insertions(+), 10 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2f3e98edc94a..b6432a261e63 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1199,6 +1199,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
#define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables
* and needs fsync() to complete (for
* synchronous page faults in DAX) */
+#define VM_FAULT_PTNOTSAME 0x4000 /* Page table entries have changed */
#define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
diff --git a/mm/memory.c b/mm/memory.c
index 21b1212a0892..4bc7b0bdcb40 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2309,21 +2309,29 @@ static bool pte_map_lock(struct vm_fault *vmf)
* parts, do_swap_page must check under lock before unmapping the pte and
* proceeding (but do_wp_page is only called after already making such a check;
* and do_anonymous_page can safely check later on).
+ *
+ * pte_unmap_same() returns:
+ * 0 if the PTEs are the same
+ * VM_FAULT_PTNOTSAME if the PTEs are different
+ * VM_FAULT_RETRY if the VMA has changed behind our back during
+ * speculative page fault handling.
*/
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
- pte_t *page_table, pte_t orig_pte)
+static inline int pte_unmap_same(struct vm_fault *vmf)
{
- int same = 1;
+ int ret = 0;
+
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
if (sizeof(pte_t) > sizeof(unsigned long)) {
- spinlock_t *ptl = pte_lockptr(mm, pmd);
- spin_lock(ptl);
- same = pte_same(*page_table, orig_pte);
- spin_unlock(ptl);
+ if (pte_spinlock(vmf)) {
+ if (!pte_same(*vmf->pte, vmf->orig_pte))
+ ret = VM_FAULT_PTNOTSAME;
+ spin_unlock(vmf->ptl);
+ } else
+ ret = VM_FAULT_RETRY;
}
#endif
- pte_unmap(page_table);
- return same;
+ pte_unmap(vmf->pte);
+ return ret;
}
static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
@@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
int exclusive = 0;
int ret = 0;
- if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
+ ret = pte_unmap_same(vmf);
+ if (ret)
goto out;
entry = pte_to_swp_entry(vmf->orig_pte);
--
2.7.4
When handling a page fault without holding the mmap_sem, fetching the
PTE lock pointer and taking the lock must be done while ensuring that
the VMA is not modified behind our back.
So move the fetch and locking operations into a dedicated function.
Signed-off-by: Laurent Dufour <[email protected]>
---
mm/memory.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 8ac241b9f370..21b1212a0892 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2288,6 +2288,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(apply_to_page_range);
+static bool pte_spinlock(struct vm_fault *vmf)
+{
+ vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ spin_lock(vmf->ptl);
+ return true;
+}
+
static bool pte_map_lock(struct vm_fault *vmf)
{
vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
@@ -3798,8 +3805,8 @@ static int do_numa_page(struct vm_fault *vmf)
* validation through pte_unmap_same(). It's of NUMA type but
* the pfn may be screwed if the read is non atomic.
*/
- vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
- spin_lock(vmf->ptl);
+ if (!pte_spinlock(vmf))
+ return VM_FAULT_RETRY;
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
goto out;
@@ -3992,8 +3999,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
return do_numa_page(vmf);
- vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
- spin_lock(vmf->ptl);
+ if (!pte_spinlock(vmf))
+ return VM_FAULT_RETRY;
entry = vmf->orig_pte;
if (unlikely(!pte_same(*vmf->pte, entry)))
goto unlock;
--
2.7.4
Define CONFIG_SPECULATIVE_PAGE_FAULT for BOOK3S_64 and SMP. This enables
the Speculative Page Fault handler.
Support is currently only provided for BOOK3S_64 because:
- it requires CONFIG_PPC_STD_MMU because of the checks done in
set_access_flags_filter()
- it requires BOOK3S because book3e_hugetlb_preload(), called by
update_mmu_cache(), is not supported
Signed-off-by: Laurent Dufour <[email protected]>
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 73ce5dd07642..acf2696a6505 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -233,6 +233,7 @@ config PPC
select OLD_SIGACTION if PPC32
select OLD_SIGSUSPEND
select SPARSE_IRQ
+ select SPECULATIVE_PAGE_FAULT if PPC_BOOK3S_64 && SMP
select SYSCTL_EXCEPTION_TRACE
select VIRT_TO_BUS if !PPC64
#
--
2.7.4
This configuration variable will be used to build the code needed to
handle speculative page faults.
By default it is turned off, and it is activated depending on
architecture support.
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Laurent Dufour <[email protected]>
---
mm/Kconfig | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index abefa573bcd8..07c566c88faf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -759,3 +759,6 @@ config GUP_BENCHMARK
performance of get_user_pages_fast().
See tools/testing/selftests/vm/gup_benchmark.c
+
+config SPECULATIVE_PAGE_FAULT
+ bool
--
2.7.4
On Tue, Mar 13, 2018 at 06:59:47PM +0100, Laurent Dufour wrote:
> This change is inspired by the Peter's proposal patch [1] which was
> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
> that particular case, and it is introducing major performance degradation
> due to excessive scheduling operations.
Do you happen to have a little more detail on that?
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 34fde7111e88..28c763ea1036 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -335,6 +335,7 @@ struct vm_area_struct {
> struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> seqcount_t vm_sequence;
> + atomic_t vm_ref_count; /* see vma_get(), vma_put() */
> #endif
> } __randomize_layout;
>
> @@ -353,6 +354,9 @@ struct kioctx_table;
> struct mm_struct {
> struct vm_area_struct *mmap; /* list of VMAs */
> struct rb_root mm_rb;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + rwlock_t mm_rb_lock;
> +#endif
> u32 vmacache_seqnum; /* per-thread vmacache */
> #ifdef CONFIG_MMU
> unsigned long (*get_unmapped_area) (struct file *filp,
When I tried this, it simply traded contention on mmap_sem for
contention on these two cachelines.
This was for the concurrent fault benchmark, where mmap_sem is only ever
acquired for reading (so no blocking ever happens) and the bottle-neck
was really pure cacheline access.
Only by using RCU can you avoid that thrashing.
Also note that if your database allocates the one giant mapping, it'll
be _one_ VMA and that vm_ref_count gets _very_ hot indeed.
On Tue 13-03-18 18:59:30, Laurent Dufour wrote:
> Changes since v8:
> - Don't check PMD when locking the pte when THP is disabled
> Thanks to Daniel Jordan for reporting this.
> - Rebase on 4.16
Is this really worth reposting the whole pile? I mean this is at v9,
each doing little changes. It is quite tiresome to barely get to a
bookmarked version just to find out that there are 2 new versions out.
I am sorry to be grumpy and I can understand some frustration it doesn't
move forward that easily but this is a _big_ change. We should start
with a real high level review rather than doing small changes here and
there and reach v20 quickly.
I am planning to find some time to look at it but the spare cycles are
so rare these days...
--
Michal Hocko
SUSE Labs
On 14/03/2018 14:11, Michal Hocko wrote:
> On Tue 13-03-18 18:59:30, Laurent Dufour wrote:
>> Changes since v8:
>> - Don't check PMD when locking the pte when THP is disabled
>> Thanks to Daniel Jordan for reporting this.
>> - Rebase on 4.16
>
> Is this really worth reposting the whole pile? I mean this is at v9,
> each doing little changes. It is quite tiresome to barely get to a
> bookmarked version just to find out that there are 2 new versions out.
I agree, I could have sent only the change for the concerned patch. But the
previous series was sent a month ago and this one is rebased on the 4.16
kernel.
> I am sorry to be grumpy and I can understand some frustration that it
> doesn't move forward that easily, but this is a _big_ change. We should start
> with a real high level review rather than doing small changes here and
> there and reach v20 quickly.
>
> I am planning to find some time to look at it but the spare cycles are
> so rare these days...
I understand that this is a big change and I'll try not to post a new series
until I get more feedback on this one.
Thanks,
Laurent.
On 14/03/2018 09:48, Peter Zijlstra wrote:
> On Tue, Mar 13, 2018 at 06:59:47PM +0100, Laurent Dufour wrote:
>> This change is inspired by the Peter's proposal patch [1] which was
>> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
>> that particular case, and it is introducing major performance degradation
>> due to excessive scheduling operations.
>
> Do you happen to have a little more detail on that?
This has been reported by Kemi, who found bad performance when running some
benchmarks on top of the v5 series:
https://patchwork.kernel.org/patch/9999687/
It appears that SRCU generates a lot of additional scheduling to manage the
freeing of the VMA structures. SRCU deals with per-CPU resources, but since
we are handling a per-process resource (the VMA) through a global resource
(SRCU), this leads to a lot of overhead when scheduling the SRCU callbacks.
>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> index 34fde7111e88..28c763ea1036 100644
>> --- a/include/linux/mm_types.h
>> +++ b/include/linux/mm_types.h
>> @@ -335,6 +335,7 @@ struct vm_area_struct {
>> struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
>> #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> seqcount_t vm_sequence;
>> + atomic_t vm_ref_count; /* see vma_get(), vma_put() */
>> #endif
>> } __randomize_layout;
>>
>> @@ -353,6 +354,9 @@ struct kioctx_table;
>> struct mm_struct {
>> struct vm_area_struct *mmap; /* list of VMAs */
>> struct rb_root mm_rb;
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> + rwlock_t mm_rb_lock;
>> +#endif
>> u32 vmacache_seqnum; /* per-thread vmacache */
>> #ifdef CONFIG_MMU
>> unsigned long (*get_unmapped_area) (struct file *filp,
>
> When I tried this, it simply traded contention on mmap_sem for
> contention on these two cachelines.
>
> This was for the concurrent fault benchmark, where mmap_sem is only ever
> acquired for reading (so no blocking ever happens) and the bottle-neck
> was really pure cacheline access.
I'd say that this is expected if multiple threads are faulting on the same
VMA; if the VMAs differ, this contention disappears, while it remains when
using the mmap_sem.
This being said, the tests I did on PowerPC using
will-it-scale/page_fault1_threads showed that the number of cache misses
generated in get_vma() is very low (less than 5%). Am I missing something?
> Only by using RCU can you avoid that thrashing.
I agree, but this kind of test is the best use case for SRCU because there
are not many updates, so not a lot of calls to the SRCU asynchronous callback.
Honestly, I can't see an ideal solution here: RCU is not optimal when there is
a high number of updates, and using a rwlock may introduce a bottleneck there.
I get better results using the rwlock than using SRCU in that case, but if
you have another proposal, please advise, I'll give it a try.
> Also note that if your database allocates the one giant mapping, it'll
> be _one_ VMA and that vm_ref_count gets _very_ hot indeed.
In the case of the database product I mentioned in the series header, it's
the opposite: the number of VMAs is very high, so this doesn't happen. But in
the case of one VMA, it's clear that there will be contention on
vm_ref_count, though this would still be better than blocking on the mmap_sem.
Laurent.
FYI, we noticed the following commit (built with gcc-7):
commit: b33ddf50ebcc740b990dd2e0e8ff0b92c7acf58e ("mm: Protect mm_rb tree with a rwlock")
url: https://github.com/0day-ci/linux/commits/Laurent-Dufour/Speculative-page-faults/20180316-151833
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------+------------+------------+
| | 7f3f7b4e80 | b33ddf50eb |
+----------------------------------------+------------+------------+
| boot_successes | 8 | 0 |
| boot_failures | 0 | 6 |
| INFO:trying_to_register_non-static_key | 0 | 6 |
+----------------------------------------+------------+------------+
[ 22.218186] INFO: trying to register non-static key.
[ 22.220252] the code is fine but needs lockdep annotation.
[ 22.222471] turning off the locking correctness validator.
[ 22.224839] CPU: 0 PID: 1 Comm: init Not tainted 4.16.0-rc4-next-20180309-00017-gb33ddf5 #1
[ 22.228528] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 22.232443] Call Trace:
[ 22.234234] dump_stack+0x85/0xbc
[ 22.236085] register_lock_class+0x237/0x477
[ 22.238057] __lock_acquire+0xd0/0xf15
[ 22.240032] lock_acquire+0x19c/0x1ce
[ 22.241927] ? do_mmap+0x3aa/0x3ff
[ 22.243749] mmap_region+0x37a/0x4c0
[ 22.245619] ? do_mmap+0x3aa/0x3ff
[ 22.247425] do_mmap+0x3aa/0x3ff
[ 22.249175] vm_mmap_pgoff+0xa1/0xea
[ 22.251083] elf_map+0x6d/0x134
[ 22.252873] load_elf_binary+0x56f/0xe07
[ 22.254853] search_binary_handler+0x75/0x1f8
[ 22.256934] do_execveat_common+0x661/0x92b
[ 22.259164] ? rest_init+0x22e/0x22e
[ 22.261082] do_execve+0x1f/0x21
[ 22.262884] kernel_init+0x5a/0xf0
[ 22.264722] ret_from_fork+0x3a/0x50
[ 22.303240] systemd[1]: RTC configured in localtime, applying delta of 480 minutes to system time.
[ 22.313544] systemd[1]: Failed to insert module 'autofs4': No such file or directory
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
On 16/03/2018 11:23, kernel test robot wrote:
> FYI, we noticed the following commit (built with gcc-7):
>
> commit: b33ddf50ebcc740b990dd2e0e8ff0b92c7acf58e ("mm: Protect mm_rb tree with a rwlock")
> url: https://github.com/0day-ci/linux/commits/Laurent-Dufour/Speculative-page-faults/20180316-151833
>
>
> in testcase: boot
>
> on test machine: qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G
>
> caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
>
>
> +----------------------------------------+------------+------------+
> | | 7f3f7b4e80 | b33ddf50eb |
> +----------------------------------------+------------+------------+
> | boot_successes | 8 | 0 |
> | boot_failures | 0 | 6 |
> | INFO:trying_to_register_non-static_key | 0 | 6 |
> +----------------------------------------+------------+------------+
>
>
>
> [ 22.218186] INFO: trying to register non-static key.
> [ 22.220252] the code is fine but needs lockdep annotation.
> [ 22.222471] turning off the locking correctness validator.
> [ 22.224839] CPU: 0 PID: 1 Comm: init Not tainted 4.16.0-rc4-next-20180309-00017-gb33ddf5 #1
> [ 22.228528] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
> [ 22.232443] Call Trace:
> [ 22.234234] dump_stack+0x85/0xbc
> [ 22.236085] register_lock_class+0x237/0x477
> [ 22.238057] __lock_acquire+0xd0/0xf15
> [ 22.240032] lock_acquire+0x19c/0x1ce
> [ 22.241927] ? do_mmap+0x3aa/0x3ff
> [ 22.243749] mmap_region+0x37a/0x4c0
> [ 22.245619] ? do_mmap+0x3aa/0x3ff
> [ 22.247425] do_mmap+0x3aa/0x3ff
> [ 22.249175] vm_mmap_pgoff+0xa1/0xea
> [ 22.251083] elf_map+0x6d/0x134
> [ 22.252873] load_elf_binary+0x56f/0xe07
> [ 22.254853] search_binary_handler+0x75/0x1f8
> [ 22.256934] do_execveat_common+0x661/0x92b
> [ 22.259164] ? rest_init+0x22e/0x22e
> [ 22.261082] do_execve+0x1f/0x21
> [ 22.262884] kernel_init+0x5a/0xf0
> [ 22.264722] ret_from_fork+0x3a/0x50
> [ 22.303240] systemd[1]: RTC configured in localtime, applying delta of 480 minutes to system time.
> [ 22.313544] systemd[1]: Failed to insert module 'autofs4': No such file or directory
Thanks a lot for reporting this.
I found the issue introduced in that patch: I mistakenly removed the call to
seqcount_init(&vma->vm_sequence) in __vma_link_rb().
This doesn't have a functional impact, as the vm_sequence is incremented
monotonically, but lockdep needs the seqcount to be initialized.
I'll fix that in the next series.
Laurent.
Hi, Laurent
2018-03-14 1:59 GMT+08:00 Laurent Dufour <[email protected]>:
> This is a port to kernel 4.16 of the work done by Peter Zijlstra to
> handle page faults without holding the mm semaphore [1].
>
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
> processes, since the page fault handler will not wait for other threads'
> memory layout changes to be done, assuming that the change is done in
> another part of the process's memory space. This type of page fault is
> named a speculative page fault. If the speculative page fault fails
> because a concurrency is detected or because the underlying PMD or PTE
> tables are not yet allocated, its processing is aborted and a classic
> page fault is then tried.
>
> The speculative page fault (SPF) handler has to look for the VMA
> matching the fault address without holding the mmap_sem; this is done by
> introducing a rwlock which protects access to the mm_rb tree. Previously
> this was done using SRCU, but SRCU introduced a lot of scheduling to
> process the VMA freeing operations, which hit the performance by 20% as
> reported by Kemi Wang [2]. Using a rwlock to protect access to the mm_rb
> tree limits the locking contention to these operations, which are
> expected to be O(log n). In addition, to ensure that the VMA is not
> freed behind our back, a reference count is added and two services
> (get_vma() and put_vma()) are introduced to handle it. When a VMA is
> fetched from the RB tree using get_vma(), it must later be released
> using put_vma(). Furthermore, to allow the VMA to be used again by the
> classic page fault handler, a service can_reuse_spf_vma() is introduced.
> It is expected to be called with the mmap_sem held. It checks that the
> VMA still matches the specified address and releases its reference
> count; as the mmap_sem is held, it is ensured that the VMA will not be
> freed behind our back. In general, the VMA's reference count can be
> decremented while holding the mmap_sem, but it should not be increased,
> as holding the mmap_sem ensures that the VMA is stable. I can't see the
> overhead I previously got with the will-it-scale benchmark anymore.
>
> The VMA's attributes checked during the speculative page fault processing
> have to be protected against parallel changes. This is done by using a per
> VMA sequence lock. This sequence lock allows the speculative page fault
> handler to fast check for parallel changes in progress and to abort the
> speculative page fault in that case.
>
> Once the VMA is found, the speculative page fault handler checks the
> VMA's attributes to verify whether the page fault can be handled
> correctly or not. Thus the VMA is protected through a sequence lock,
> which allows fast detection of concurrent VMA changes. If such a change
> is detected, the speculative page fault is aborted and a *classic* page
> fault is tried. VMA sequence locking is added wherever VMA attributes
> which are checked during the page fault are modified.
>
> When the PTE is fetched, the VMA is checked to see if it has been
> changed, so once the page table is locked, the VMA is known to be valid.
> Any other change leading to touching this PTE will need to take the page
> table lock, so no parallel change is possible at this time.
>
> The locking of the PTE is done with interrupts disabled; this allows
> checking the PMD to ensure that there is no ongoing collapsing
> operation. Since khugepaged first sets the PMD to pmd_none and then
> waits for the other CPUs to have caught the IPI interrupt, if the PMD is
> valid at the time the PTE is locked, we have the guarantee that the
> collapsing operation will have to wait on the PTE lock to move forward.
> This allows the SPF handler to map the PTE safely. If the PMD value is
> different from the one recorded at the beginning of the SPF operation,
> the classic page fault handler will be called to handle the fault while
> holding the mmap_sem. As the PTE is locked with interrupts disabled, the
> lock is taken using spin_trylock() to avoid a deadlock when handling a
> page fault while a TLB invalidate is requested by another CPU holding
> the PTE lock.
>
> Support for THP is not done because, when checking the PMD, we can be
> confused by an in-progress collapsing operation done by khugepaged. The
> issue is that pmd_none() could be true either if the PMD is not yet
> populated or if the underlying PTEs are in the process of being
> collapsed. So we cannot safely allocate a PMD if pmd_none() is true.
>
> This series adds a new software performance event named
> 'speculative-faults' or 'spf'. It counts the number of page fault events
> successfully handled in a speculative way. When recording 'faults,spf'
> events, the 'faults' one counts the total number of page fault events
> while 'spf' only counts the part of the faults processed in a
> speculative way.
>
> There are some trace events introduced by this series. They allow
> identifying why the page faults were not processed in a speculative way.
> This doesn't take into account the faults generated by a monothreaded
> process, which are directly processed while holding the mmap_sem. These
> trace events are grouped in a system named 'pagefault'; they are:
> - pagefault:spf_pte_lock : if the pte was already locked by another thread
> - pagefault:spf_vma_changed : if the VMA has been changed in our back
> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set.
> - pagefault:spf_vma_notsup : the VMA's type is not supported
> - pagefault:spf_vma_access : the VMA's access rights are not respected
> - pagefault:spf_pmd_changed : the upper PMD pointer has changed in our
> back.
>
> To record all the related events, the easiest way is to run perf with
> the following arguments:
> $ perf stat -e 'faults,spf,pagefault:*' <command>
>
> This series builds on top of v4.16-rc2-mmotm-2018-02-21-14-48 and is
> functional on x86 and PowerPC.
>
> ---------------------
> Real Workload results
>
> As mentioned in a previous email, we did unofficial runs using a
> "popular in-memory multithreaded database product" on a 176-core SMT8
> Power system which showed a 30% improvement in the number of
> transactions processed per second. This run was done on the v6 series,
> but the changes introduced in this new version should not impact the
> performance boost seen.
>
> Here are the perf data captured during 2 of these runs on top of the v8
> series:
> vanilla spf
> faults 89.418 101.364
> spf n/a 97.989
>
> With the SPF kernel, most of the page faults were processed in a
> speculative way.
>
> ------------------
> Benchmarks results
>
> Base kernel is v4.16-rc4-mmotm-2018-03-09-16-34
> SPF is BASE + this series
>
> Kernbench:
> ----------
> Here are the results on a 16 CPUs x86 guest using kernbench on a 4.13-rc4
> kernel (the kernel is built 5 times):
>
> Average Half load -j 8
> Run (std deviation)
> BASE SPF
> Elapsed Time 151.36 (1.40139) 151.748 (1.09716) 0.26%
> User Time 1023.19 (3.58972) 1027.35 (2.30396) 0.41%
> System Time 125.026 (1.8547) 124.504 (0.980015) -0.42%
> Percent CPU 758.2 (5.54076) 758.6 (3.97492) 0.05%
> Context Switches 54924 (453.634) 54851 (382.293) -0.13%
> Sleeps 105589 (704.581) 105282 (435.502) -0.29%
>
> Average Optimal load -j 16
> Run (std deviation)
> BASE SPF
> Elapsed Time 74.804 (1.25139) 74.368 (0.406288) -0.58%
> User Time 962.033 (64.5125) 963.93 (66.8797) 0.20%
> System Time 110.771 (15.0817) 110.387 (14.8989) -0.35%
> Percent CPU 1045.7 (303.387) 1049.1 (306.255) 0.33%
> Context Switches 76201.8 (22433.1) 76170.4 (22482.9) -0.04%
> Sleeps 110289 (5024.05) 110220 (5248.58) -0.06%
>
> During a run on the SPF, perf events were captured:
> Performance counter stats for '../kernbench -M':
> 510334017 faults
> 200 spf
> 0 pagefault:spf_pte_lock
> 0 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 2174 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> Very few speculative page faults were recorded, as most of the processes
> involved are monothreaded (it seems that on this architecture some
> threads were created during the kernel build processing).
>
> Here are the kernbench results on an 80 CPUs Power8 system:
>
> Average Half load -j 40
> Run (std deviation)
> BASE SPF
> Elapsed Time 116.958 (0.73401) 117.43 (0.927497) 0.40%
> User Time 4472.35 (7.85792) 4480.16 (19.4909) 0.17%
> System Time 136.248 (0.587639) 136.922 (1.09058) 0.49%
> Percent CPU 3939.8 (20.6567) 3931.2 (17.2829) -0.22%
> Context Switches 92445.8 (236.672) 92720.8 (270.118) 0.30%
> Sleeps 318475 (1412.6) 317996 (1819.07) -0.15%
>
> Average Optimal load -j 80
> Run (std deviation)
> BASE SPF
> Elapsed Time 106.976 (0.406731) 107.72 (0.329014) 0.70%
> User Time 5863.47 (1466.45) 5865.38 (1460.27) 0.03%
> System Time 159.995 (25.0393) 160.329 (24.6921) 0.21%
> Percent CPU 5446.2 (1588.23) 5416 (1565.34) -0.55%
> Context Switches 223018 (137637) 224867 (139305) 0.83%
> Sleeps 330846 (13127.3) 332348 (15556.9) 0.45%
>
> During a run on the SPF, perf events were captured:
> Performance counter stats for '../kernbench -M':
> 116612488 faults
> 0 spf
> 0 pagefault:spf_pte_lock
> 0 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 473 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> Most of the processes involved are monothreaded so SPF is not activated but
> there is no impact on the performance.
>
> Ebizzy:
> -------
> The test counts the number of records per second it can manage; the
> higher the number, the better. I ran it like this: 'ebizzy -mTRp'. To
> get consistent results I repeated the test 100 times and measured the
> average. The result is the number of records processed per second.
>
> BASE SPF delta
> 16 CPUs x86 VM 14902.6 95905.16 543.55%
> 80 CPUs P8 node 37240.24 78185.67 109.95%
>
> Here are the performance counters read during a run on a 16 CPUs x86 VM:
> Performance counter stats for './ebizzy -mRTp':
> 888157 faults
> 884773 spf
> 92 pagefault:spf_pte_lock
> 2379 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 80 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> And the ones captured during a run on an 80 CPUs Power node:
> Performance counter stats for './ebizzy -mRTp':
> 762134 faults
> 728663 spf
> 19101 pagefault:spf_pte_lock
> 13969 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 272 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> In ebizzy's case most of the page faults were handled in a speculative
> way, leading to the ebizzy performance boost.
We ported the SPF series to kernel 4.9 on Android devices.
For the app launch time, it improves about 15% on average; for apps which
have hundreds of threads, it is about 20%.
Thanks.
>
> ------------------
> Changes since v8:
> - Don't check PMD when locking the pte when THP is disabled
> Thanks to Daniel Jordan for reporting this.
> - Rebase on 4.16
> Changes since v7:
> - move pte_map_lock() and pte_spinlock() upper in mm/memory.c (patch 4 &
> 5)
> - make pte_unmap_same() compatible with the speculative page fault (patch
> 6)
> Changes since v6:
> - Rename config variable to CONFIG_SPECULATIVE_PAGE_FAULT (patch 1)
> - Review the way the config variable is set (patch 1 to 3)
> - Introduce mm_rb_write_*lock() in mm/mmap.c (patch 18)
> - Merge patch introducing pte try locking in the patch 18.
> Changes since v5:
> - use rwlock agains the mm RB tree in place of SRCU
> - add a VMA's reference count to protect VMA while using it without
> holding the mmap_sem.
> - check PMD value to detect collapsing operation
> - don't try speculative page fault for mono threaded processes
> - try to reuse the fetched VMA if VM_RETRY is returned
> - go directly to the error path if an error is detected during the SPF
> path
> - fix race window when moving VMA in move_vma()
> Changes since v4:
> - As requested by Andrew Morton, use CONFIG_SPF and define it earlier in
> the series to ease bisection.
> Changes since v3:
> - Don't build when CONFIG_SMP is not set
> - Fixed a lock dependency warning in __vma_adjust()
> - Use READ_ONCE to access p*d values in handle_speculative_fault()
> - Call memcp_oom() service in handle_speculative_fault()
> Changes since v2:
> - Perf event is renamed in PERF_COUNT_SW_SPF
> - On Power handle do_page_fault()'s cleaning
> - On Power if the VM_FAULT_ERROR is returned by
> handle_speculative_fault(), do not retry but jump to the error path
> - If VMA's flags are not matching the fault, directly returns
> VM_FAULT_SIGSEGV and not VM_FAULT_RETRY
> - Check for pud_trans_huge() to avoid speculative path
> - Handles _vm_normal_page()'s introduced by 6f16211df3bf
> ("mm/device-public-memory: device memory cache coherent with CPU")
> - add and review few comments in the code
> Changes since v1:
> - Remove PERF_COUNT_SW_SPF_FAILED perf event.
> - Add tracing events to details speculative page fault failures.
> - Cache VMA fields values which are used once the PTE is unlocked at the
> end of the page fault events.
> - Ensure that fields read during the speculative path are written and read
> using WRITE_ONCE and READ_ONCE.
> - Add checks at the beginning of the speculative path to abort it if the
> VMA is known to not be supported.
> Changes since RFC V5 [5]
> - Port to 4.13 kernel
> - Merging patch fixing lock dependency into the original patch
> - Replace the 2 parameters of vma_has_changed() with the vmf pointer
> - In patch 7, don't call __do_fault() in the speculative path as it may
> want to unlock the mmap_sem.
> - In patch 11-12, don't check for vma boundaries when
> page_add_new_anon_rmap() is called during the spf path and protect against
> anon_vma pointer's update.
> - In patch 13-16, add performance events to report number of successful
> and failed speculative events.
>
> [1]
> http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
> [2] https://patchwork.kernel.org/patch/9999687/
>
>
> Laurent Dufour (20):
> mm: Introduce CONFIG_SPECULATIVE_PAGE_FAULT
> x86/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
> powerpc/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
> mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
> mm: make pte_unmap_same compatible with SPF
> mm: Protect VMA modifications using VMA sequence count
> mm: protect mremap() against SPF hanlder
> mm: Protect SPF handler against anon_vma changes
> mm: Cache some VMA fields in the vm_fault structure
> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
> mm: Introduce __lru_cache_add_active_or_unevictable
> mm: Introduce __maybe_mkwrite()
> mm: Introduce __vm_normal_page()
> mm: Introduce __page_add_new_anon_rmap()
> mm: Protect mm_rb tree with a rwlock
> mm: Adding speculative page fault failure trace events
> perf: Add a speculative page fault sw event
> perf tools: Add support for the SPF perf event
> mm: Speculative page fault handler return VMA
> powerpc/mm: Add speculative page fault
>
> Peter Zijlstra (4):
> mm: Prepare for FAULT_FLAG_SPECULATIVE
> mm: VMA sequence count
> mm: Provide speculative fault infrastructure
> x86/mm: Add speculative pagefault handling
>
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/mm/fault.c | 31 +-
> arch/x86/Kconfig | 1 +
> arch/x86/mm/fault.c | 38 ++-
> fs/proc/task_mmu.c | 5 +-
> fs/userfaultfd.c | 17 +-
> include/linux/hugetlb_inline.h | 2 +-
> include/linux/migrate.h | 4 +-
> include/linux/mm.h | 92 +++++-
> include/linux/mm_types.h | 7 +
> include/linux/pagemap.h | 4 +-
> include/linux/rmap.h | 12 +-
> include/linux/swap.h | 10 +-
> include/trace/events/pagefault.h | 87 +++++
> include/uapi/linux/perf_event.h | 1 +
> kernel/fork.c | 3 +
> mm/Kconfig | 3 +
> mm/hugetlb.c | 2 +
> mm/init-mm.c | 3 +
> mm/internal.h | 20 ++
> mm/khugepaged.c | 5 +
> mm/madvise.c | 6 +-
> mm/memory.c | 594 ++++++++++++++++++++++++++++++----
> mm/mempolicy.c | 51 ++-
> mm/migrate.c | 4 +-
> mm/mlock.c | 13 +-
> mm/mmap.c | 211 +++++++++---
> mm/mprotect.c | 4 +-
> mm/mremap.c | 13 +
> mm/rmap.c | 5 +-
> mm/swap.c | 6 +-
> mm/swap_state.c | 8 +-
> tools/include/uapi/linux/perf_event.h | 1 +
> tools/perf/util/evsel.c | 1 +
> tools/perf/util/parse-events.c | 4 +
> tools/perf/util/parse-events.l | 1 +
> tools/perf/util/python.c | 1 +
> 37 files changed, 1097 insertions(+), 174 deletions(-)
> create mode 100644 include/trace/events/pagefault.h
>
> --
> 2.7.4
>
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> This configuration variable will be used to build the code needed to
> handle speculative page fault.
>
> By default it is turned off, and activated depending on architecture
> support.
>
> Suggested-by: Thomas Gleixner <[email protected]>
> Signed-off-by: Laurent Dufour <[email protected]>
> ---
> mm/Kconfig | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index abefa573bcd8..07c566c88faf 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -759,3 +759,6 @@ config GUP_BENCHMARK
> performance of get_user_pages_fast().
>
> See tools/testing/selftests/vm/gup_benchmark.c
> +
> +config SPECULATIVE_PAGE_FAULT
> + bool
Should this be configurable even if the arch supports it?
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> When handling page fault without holding the mmap_sem the fetch of the
> pte lock pointer and the locking will have to be done while ensuring
> that the VMA is not touched in our back.
>
> So move the fetch and locking operations in a dedicated function.
>
> Signed-off-by: Laurent Dufour <[email protected]>
> ---
> mm/memory.c | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 8ac241b9f370..21b1212a0892 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2288,6 +2288,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(apply_to_page_range);
>
> +static bool pte_spinlock(struct vm_fault *vmf)
inline?
> +{
> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + spin_lock(vmf->ptl);
> + return true;
> +}
> +
> static bool pte_map_lock(struct vm_fault *vmf)
> {
> vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
Shouldn't pte_unmap_same() take struct vm_fault * and use the new
pte_spinlock()?
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 866446cf2d9a..104f3cc86b51 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -392,6 +392,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
> unsigned long error_code)
> {
> struct vm_area_struct * vma;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + struct vm_area_struct *spf_vma = NULL;
> +#endif
> struct mm_struct *mm = current->mm;
> unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
> int is_exec = TRAP(regs) == 0x400;
> @@ -459,6 +462,20 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
> if (is_exec)
> flags |= FAULT_FLAG_INSTRUCTION;
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + if (is_user && (atomic_read(&mm->mm_users) > 1)) {
> + /* let's try a speculative page fault without grabbing the
> + * mmap_sem.
> + */
> + fault = handle_speculative_fault(mm, address, flags, &spf_vma);
> + if (!(fault & VM_FAULT_RETRY)) {
> + perf_sw_event(PERF_COUNT_SW_SPF, 1,
> + regs, address);
> + goto done;
> + }
> + }
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
Can't you eliminate all #ifdef's in this patch if
handle_speculative_fault() can be passed is_user and return an error
code indicating that fallback is needed? Maybe reuse VM_FAULT_FALLBACK?
> /* When running in the kernel we expect faults to occur only to
> * addresses in user space. All other faults represent errors in the
> * kernel and should generate an OOPS. Unfortunately, in the case of an
> @@ -489,7 +506,16 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
> might_sleep();
> }
>
> - vma = find_vma(mm, address);
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + if (spf_vma) {
> + if (can_reuse_spf_vma(spf_vma, address))
> + vma = spf_vma;
> + else
> + vma = find_vma(mm, address);
> + spf_vma = NULL;
> + } else
> +#endif
> + vma = find_vma(mm, address);
> if (unlikely(!vma))
> return bad_area(regs, address);
> if (likely(vma->vm_start <= address))
I think the code quality here could be improved such that you can pass mm,
&spf_vma, and address, and some helper function would return spf_vma if
can_reuse_spf_vma() is true (and set *spf_vma to NULL) or otherwise return
find_vma(mm, address).
Also, spf_vma is being set to NULL because of VM_FAULT_RETRY, but does it
make sense to retry handle_speculative_fault() in this case since we've
dropped mm->mmap_sem and there may have been a writer queued behind it?
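[Editorial note: the helper being suggested could look like the sketch below. The name `get_fault_vma()` and the userspace stand-ins for the kernel types are hypothetical; note also that the real `can_reuse_spf_vma()` additionally drops the VMA's reference count, which the stub ignores.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal userspace stand-ins for the kernel types, for illustration. */
struct vm_area_struct { unsigned long vm_start, vm_end; };
struct mm_struct { struct vm_area_struct *mmap; };

/* Stub: a single-VMA mm, so lookup always yields mm->mmap. */
static struct vm_area_struct *find_vma(struct mm_struct *mm,
				       unsigned long addr)
{
	(void)addr;
	return mm->mmap;
}

/* Stub: reusable only if the VMA still covers the faulting address. */
static bool can_reuse_spf_vma(struct vm_area_struct *vma, unsigned long addr)
{
	return addr >= vma->vm_start && addr < vma->vm_end;
}

/*
 * Hypothetical helper implementing the suggestion: return the
 * speculatively fetched VMA when it is still valid, otherwise fall
 * back to find_vma(); *spf_vma is consumed (set to NULL) either way.
 */
static struct vm_area_struct *get_fault_vma(struct mm_struct *mm,
					    struct vm_area_struct **spf_vma,
					    unsigned long address)
{
	struct vm_area_struct *vma = NULL;

	if (*spf_vma) {
		if (can_reuse_spf_vma(*spf_vma, address))
			vma = *spf_vma;
		*spf_vma = NULL;
	}
	return vma ? vma : find_vma(mm, address);
}
```

Both the powerpc and x86 call sites could then shrink to a single `vma = get_fault_vma(mm, &spf_vma, address);` with the `#ifdef` confined to the helper's definition.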
> @@ -568,6 +594,9 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
>
> up_read(&current->mm->mmap_sem);
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +done:
> +#endif
> if (unlikely(fault & VM_FAULT_ERROR))
> return mm_fault_error(regs, address, fault);
>
And things like this are trivially handled by doing
done: __maybe_unused
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index e6af2b464c3d..a73cf227edd6 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -1239,6 +1239,9 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
> unsigned long address)
> {
> struct vm_area_struct *vma;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + struct vm_area_struct *spf_vma = NULL;
> +#endif
> struct task_struct *tsk;
> struct mm_struct *mm;
> int fault, major = 0;
> @@ -1332,6 +1335,27 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
> if (error_code & X86_PF_INSTR)
> flags |= FAULT_FLAG_INSTRUCTION;
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + if ((error_code & X86_PF_USER) && (atomic_read(&mm->mm_users) > 1)) {
> + fault = handle_speculative_fault(mm, address, flags,
> + &spf_vma);
> +
> + if (!(fault & VM_FAULT_RETRY)) {
> + if (!(fault & VM_FAULT_ERROR)) {
> + perf_sw_event(PERF_COUNT_SW_SPF, 1,
> + regs, address);
> + goto done;
> + }
> + /*
> + * In case of error we need the pkey value, but
> + * can't get it from the spf_vma as it is only returned
> + * when VM_FAULT_RETRY is returned. So we have to
> + * retry the page fault with the mmap_sem grabbed.
> + */
> + }
> + }
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
All the comments from the powerpc version will apply here as well, the
only interesting point is whether VM_FAULT_FALLBACK can be returned from
handle_speculative_fault() to indicate it is not possible.
> +
> /*
> * When running in the kernel we expect faults to occur only to
> * addresses in user space. All other faults represent errors in
> @@ -1365,7 +1389,16 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
> might_sleep();
> }
>
> - vma = find_vma(mm, address);
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + if (spf_vma) {
> + if (can_reuse_spf_vma(spf_vma, address))
> + vma = spf_vma;
> + else
> + vma = find_vma(mm, address);
> + spf_vma = NULL;
> + } else
> +#endif
> + vma = find_vma(mm, address);
> if (unlikely(!vma)) {
> bad_area(regs, error_code, address);
> return;
> @@ -1451,6 +1484,9 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
> return;
> }
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +done:
> +#endif
> /*
> * Major/minor page fault accounting. If any of the events
> * returned VM_FAULT_MAJOR, we account it as a major fault.
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> Add a new software event to count succeeded speculative page faults.
>
> Signed-off-by: Laurent Dufour <[email protected]>
Acked-by: David Rientjes <[email protected]>
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> Add support for the new speculative faults event.
>
> Signed-off-by: Laurent Dufour <[email protected]>
Acked-by: David Rientjes <[email protected]>
Aside: should there be a new spec_flt field for struct task_struct that
complements maj_flt and min_flt and be exported through /proc/pid/stat?
On Mon, Mar 26, 2018 at 02:44:48PM -0700, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
> > Add support for the new speculative faults event.
> >
> > Signed-off-by: Laurent Dufour <[email protected]>
>
> Acked-by: David Rientjes <[email protected]>
>
> Aside: should there be a new spec_flt field for struct task_struct that
> complements maj_flt and min_flt and be exported through /proc/pid/stat?
No. task_struct is already too bloated. If you need per process tracking
you can always get it through trace points.
-Andi
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2f3e98edc94a..b6432a261e63 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1199,6 +1199,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
> #define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables
> * and needs fsync() to complete (for
> * synchronous page faults in DAX) */
> +#define VM_FAULT_PTNOTSAME 0x4000 /* Page table entries have changed */
>
> #define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
> VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
> diff --git a/mm/memory.c b/mm/memory.c
> index 21b1212a0892..4bc7b0bdcb40 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2309,21 +2309,29 @@ static bool pte_map_lock(struct vm_fault *vmf)
> * parts, do_swap_page must check under lock before unmapping the pte and
> * proceeding (but do_wp_page is only called after already making such a check;
> * and do_anonymous_page can safely check later on).
> + *
> + * pte_unmap_same() returns:
> + * 0 if the PTE are the same
> + * VM_FAULT_PTNOTSAME if the PTE are different
> + * VM_FAULT_RETRY if the VMA has changed in our back during
> + * a speculative page fault handling.
> */
> -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> - pte_t *page_table, pte_t orig_pte)
> +static inline int pte_unmap_same(struct vm_fault *vmf)
> {
> - int same = 1;
> + int ret = 0;
> +
> #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
> if (sizeof(pte_t) > sizeof(unsigned long)) {
> - spinlock_t *ptl = pte_lockptr(mm, pmd);
> - spin_lock(ptl);
> - same = pte_same(*page_table, orig_pte);
> - spin_unlock(ptl);
> + if (pte_spinlock(vmf)) {
> + if (!pte_same(*vmf->pte, vmf->orig_pte))
> + ret = VM_FAULT_PTNOTSAME;
> + spin_unlock(vmf->ptl);
> + } else
> + ret = VM_FAULT_RETRY;
> }
> #endif
> - pte_unmap(page_table);
> - return same;
> + pte_unmap(vmf->pte);
> + return ret;
> }
>
> static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
> int exclusive = 0;
> int ret = 0;
Initialization is now unneeded.
Otherwise:
Acked-by: David Rientjes <[email protected]>
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 65ae54659833..a2d9c87b7b0b 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> goto out_mm;
> }
> for (vma = mm->mmap; vma; vma = vma->vm_next) {
> - vma->vm_flags &= ~VM_SOFTDIRTY;
> + vm_write_begin(vma);
> + WRITE_ONCE(vma->vm_flags,
> + vma->vm_flags & ~VM_SOFTDIRTY);
> vma_set_page_prot(vma);
> + vm_write_end(vma);
> }
> downgrade_write(&mm->mmap_sem);
> break;
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index cec550c8468f..b8212ba17695 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -659,8 +659,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
>
> octx = vma->vm_userfaultfd_ctx.ctx;
> if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
> + vm_write_begin(vma);
> vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> - vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
> + WRITE_ONCE(vma->vm_flags,
> + vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
> + vm_write_end(vma);
> return 0;
> }
>
In several locations in this patch the vm_write_begin(vma) ->
vm_write_end(vma) section nests things other than vma->vm_flags,
vma->vm_policy, etc. I think it's better to do vm_write_end(vma) as soon
as the members that the seqcount protects have been modified. In other
words, this isn't offering protection for vma->vm_userfaultfd_ctx. There
are several examples of this in the patch.
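[Editorial note: to make the point concrete, here is a toy userspace model of the vm_sequence pattern. These are assumed semantics, not the kernel implementation, and memory ordering is deliberately simplified: the write side bumps the counter to odd on begin and back to even on end, and a lockless reader must retry if it saw an odd value or the value changed since. Keeping the write section tight shrinks the window in which speculative readers retry for fields the seqcount does not actually protect, such as vm_userfaultfd_ctx above.]

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy VMA with a seqcount-like vm_sequence, mimicking (not copying)
 * the vm_write_begin()/vm_write_end() protocol discussed above. */
struct toy_vma {
	atomic_uint vm_sequence;
	unsigned long vm_flags;
};

static void vm_write_begin(struct toy_vma *v)
{
	atomic_fetch_add(&v->vm_sequence, 1);	/* sequence becomes odd */
}

static void vm_write_end(struct toy_vma *v)
{
	atomic_fetch_add(&v->vm_sequence, 1);	/* back to even */
}

static unsigned int vm_read_begin(struct toy_vma *v)
{
	return atomic_load(&v->vm_sequence);
}

/* A reader must retry if a writer was active (odd) or completed since. */
static int vm_read_retry(struct toy_vma *v, unsigned int seq)
{
	return (seq & 1) || atomic_load(&v->vm_sequence) != seq;
}
```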
> @@ -885,8 +888,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
> vma = prev;
> else
> prev = vma;
> - vma->vm_flags = new_flags;
> + vm_write_begin(vma);
> + WRITE_ONCE(vma->vm_flags, new_flags);
> vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> + vm_write_end(vma);
> }
> up_write(&mm->mmap_sem);
> mmput(mm);
> @@ -1434,8 +1439,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> * the next vma was merged into the current one and
> * the current one has not been updated yet.
> */
> - vma->vm_flags = new_flags;
> + vm_write_begin(vma);
> + WRITE_ONCE(vma->vm_flags, new_flags);
> vma->vm_userfaultfd_ctx.ctx = ctx;
> + vm_write_end(vma);
>
> skip:
> prev = vma;
> @@ -1592,8 +1599,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> * the next vma was merged into the current one and
> * the current one has not been updated yet.
> */
> - vma->vm_flags = new_flags;
> + vm_write_begin(vma);
> + WRITE_ONCE(vma->vm_flags, new_flags);
> vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
> + vm_write_end(vma);
>
> skip:
> prev = vma;
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b7e2268dfc9a..32314e9e48dd 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1006,6 +1006,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> if (mm_find_pmd(mm, address) != pmd)
> goto out;
>
> + vm_write_begin(vma);
> anon_vma_lock_write(vma->anon_vma);
>
> pte = pte_offset_map(pmd, address);
> @@ -1041,6 +1042,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> pmd_populate(mm, pmd, pmd_pgtable(_pmd));
> spin_unlock(pmd_ptl);
> anon_vma_unlock_write(vma->anon_vma);
> + vm_write_end(vma);
> result = SCAN_FAIL;
> goto out;
> }
> @@ -1075,6 +1077,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> set_pmd_at(mm, address, pmd, _pmd);
> update_mmu_cache_pmd(vma, address, pmd);
> spin_unlock(pmd_ptl);
> + vm_write_end(vma);
>
> *hpage = NULL;
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 4d3c922ea1a1..e328f7ab5942 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -184,7 +184,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
> /*
> * vm_flags is protected by the mmap_sem held in write mode.
> */
> - vma->vm_flags = new_flags;
> + vm_write_begin(vma);
> + WRITE_ONCE(vma->vm_flags, new_flags);
> + vm_write_end(vma);
> out:
> return error;
> }
> @@ -450,9 +452,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
> .private = tlb,
> };
>
> + vm_write_begin(vma);
> tlb_start_vma(tlb, vma);
> walk_page_range(addr, end, &free_walk);
> tlb_end_vma(tlb, vma);
> + vm_write_end(vma);
> }
>
> static int madvise_free_single_vma(struct vm_area_struct *vma,
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index e0e706f0b34e..2632c6f93b63 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -380,8 +380,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
> struct vm_area_struct *vma;
>
> down_write(&mm->mmap_sem);
> - for (vma = mm->mmap; vma; vma = vma->vm_next)
> + for (vma = mm->mmap; vma; vma = vma->vm_next) {
> + vm_write_begin(vma);
> mpol_rebind_policy(vma->vm_policy, new);
> + vm_write_end(vma);
> + }
> up_write(&mm->mmap_sem);
> }
>
> @@ -554,9 +557,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
> {
> int nr_updated;
>
> + vm_write_begin(vma);
> nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
> if (nr_updated)
> count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
> + vm_write_end(vma);
>
> return nr_updated;
> }
> @@ -657,6 +662,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
> if (IS_ERR(new))
> return PTR_ERR(new);
>
> + vm_write_begin(vma);
> if (vma->vm_ops && vma->vm_ops->set_policy) {
> err = vma->vm_ops->set_policy(vma, new);
> if (err)
> @@ -664,11 +670,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
> }
>
> old = vma->vm_policy;
> - vma->vm_policy = new; /* protected by mmap_sem */
> + /*
> + * The speculative page fault handler access this field without
> + * hodling the mmap_sem.
> + */
"The speculative page fault handler accesses this field without holding
vma->vm_mm->mmap_sem"
> + WRITE_ONCE(vma->vm_policy, new);
> + vm_write_end(vma);
> mpol_put(old);
>
> return 0;
> err_out:
> + vm_write_end(vma);
> mpol_put(new);
> return err;
> }
Wait, doesn't vma_dup_policy() also need to protect dst->vm_policy?
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2121,7 +2121,9 @@ int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
if (IS_ERR(pol))
return PTR_ERR(pol);
- dst->vm_policy = pol;
+ vm_write_begin(dst);
+ WRITE_ONCE(dst->vm_policy, pol);
+ vm_write_end(dst);
return 0;
}
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 5898255d0aeb..d6533cb85213 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -847,17 +847,18 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
> }
>
> if (start != vma->vm_start) {
> - vma->vm_start = start;
> + WRITE_ONCE(vma->vm_start, start);
> start_changed = true;
> }
> if (end != vma->vm_end) {
> - vma->vm_end = end;
> + WRITE_ONCE(vma->vm_end, end);
> end_changed = true;
> }
> - vma->vm_pgoff = pgoff;
> + WRITE_ONCE(vma->vm_pgoff, pgoff);
> if (adjust_next) {
> - next->vm_start += adjust_next << PAGE_SHIFT;
> - next->vm_pgoff += adjust_next;
> + WRITE_ONCE(next->vm_start,
> + next->vm_start + (adjust_next << PAGE_SHIFT));
> + WRITE_ONCE(next->vm_pgoff, next->vm_pgoff + adjust_next);
> }
>
> if (root) {
> @@ -1781,6 +1782,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> out:
> perf_event_mmap(vma);
>
> + vm_write_begin(vma);
> vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
> if (vm_flags & VM_LOCKED) {
> if (!((vm_flags & VM_SPECIAL) || is_vm_hugetlb_page(vma) ||
> @@ -1803,6 +1805,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> vma->vm_flags |= VM_SOFTDIRTY;
>
> vma_set_page_prot(vma);
> + vm_write_end(vma);
>
> return addr;
>
Shouldn't this also protect vma->vm_flags?
diff --git a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1796,7 +1796,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
vma == get_gate_vma(current->mm)))
mm->locked_vm += (len >> PAGE_SHIFT);
else
- vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+ WRITE_ONCE(vma->vm_flags,
+ vma->vm_flags & VM_LOCKED_CLEAR_MASK);
}
if (file)
@@ -1809,7 +1810,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
* then new mapped in-place (which must be aimed as
* a completely new data area).
*/
- vma->vm_flags |= VM_SOFTDIRTY;
+ WRITE_ONCE(vma->vm_flags, vma->vm_flags | VM_SOFTDIRTY);
vma_set_page_prot(vma);
vm_write_end(vma);
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 88042d843668..ef6ef0627090 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2189,16 +2189,24 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
> extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
> extern int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
> unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
> - struct vm_area_struct *expand);
> + struct vm_area_struct *expand, bool keep_locked);
> static inline int vma_adjust(struct vm_area_struct *vma, unsigned long start,
> unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert)
> {
> - return __vma_adjust(vma, start, end, pgoff, insert, NULL);
> + return __vma_adjust(vma, start, end, pgoff, insert, NULL, false);
> }
> -extern struct vm_area_struct *vma_merge(struct mm_struct *,
> +extern struct vm_area_struct *__vma_merge(struct mm_struct *,
> struct vm_area_struct *prev, unsigned long addr, unsigned long end,
> unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
> - struct mempolicy *, struct vm_userfaultfd_ctx);
> + struct mempolicy *, struct vm_userfaultfd_ctx, bool keep_locked);
> +static inline struct vm_area_struct *vma_merge(struct mm_struct *vma,
> + struct vm_area_struct *prev, unsigned long addr, unsigned long end,
> + unsigned long vm_flags, struct anon_vma *anon, struct file *file,
> + pgoff_t off, struct mempolicy *pol, struct vm_userfaultfd_ctx uff)
> +{
> + return __vma_merge(vma, prev, addr, end, vm_flags, anon, file, off,
> + pol, uff, false);
> +}
The first formal parameter to vma_merge() is an mm, not a vma.
This area could use an uncluttering.
> extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
> extern int __split_vma(struct mm_struct *, struct vm_area_struct *,
> unsigned long addr, int new_below);
> diff --git a/mm/mmap.c b/mm/mmap.c
> index d6533cb85213..ac32b577a0c9 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -684,7 +684,7 @@ static inline void __vma_unlink_prev(struct mm_struct *mm,
> */
> int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
> unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
> - struct vm_area_struct *expand)
> + struct vm_area_struct *expand, bool keep_locked)
> {
> struct mm_struct *mm = vma->vm_mm;
> struct vm_area_struct *next = vma->vm_next, *orig_vma = vma;
> @@ -996,7 +996,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>
> if (next && next != vma)
> vm_raw_write_end(next);
> - vm_raw_write_end(vma);
> + if (!keep_locked)
> + vm_raw_write_end(vma);
>
> validate_mm(mm);
>
This will require a fixup for the following patch, where anon_vma_close()
can also return without the vma locked even though keep_locked == true.
How does vma_merge() handle that error wrt vm_raw_write_begin(vma)?
> @@ -1132,12 +1133,13 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
> * parameter) may establish ptes with the wrong permissions of NNNN
> * instead of the right permissions of XXXX.
> */
> -struct vm_area_struct *vma_merge(struct mm_struct *mm,
> +struct vm_area_struct *__vma_merge(struct mm_struct *mm,
> struct vm_area_struct *prev, unsigned long addr,
> unsigned long end, unsigned long vm_flags,
> struct anon_vma *anon_vma, struct file *file,
> pgoff_t pgoff, struct mempolicy *policy,
> - struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
> + struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
> + bool keep_locked)
> {
> pgoff_t pglen = (end - addr) >> PAGE_SHIFT;
> struct vm_area_struct *area, *next;
> @@ -1185,10 +1187,11 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
> /* cases 1, 6 */
> err = __vma_adjust(prev, prev->vm_start,
> next->vm_end, prev->vm_pgoff, NULL,
> - prev);
> + prev, keep_locked);
> } else /* cases 2, 5, 7 */
> err = __vma_adjust(prev, prev->vm_start,
> - end, prev->vm_pgoff, NULL, prev);
> + end, prev->vm_pgoff, NULL, prev,
> + keep_locked);
> if (err)
> return NULL;
> khugepaged_enter_vma_merge(prev, vm_flags);
> @@ -1205,10 +1208,12 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
> vm_userfaultfd_ctx)) {
> if (prev && addr < prev->vm_end) /* case 4 */
> err = __vma_adjust(prev, prev->vm_start,
> - addr, prev->vm_pgoff, NULL, next);
> + addr, prev->vm_pgoff, NULL, next,
> + keep_locked);
> else { /* cases 3, 8 */
> err = __vma_adjust(area, addr, next->vm_end,
> - next->vm_pgoff - pglen, NULL, next);
> + next->vm_pgoff - pglen, NULL, next,
> + keep_locked);
> /*
> * In case 3 area is already equal to next and
> * this is a noop, but in case 8 "area" has
> @@ -3163,9 +3168,20 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>
> if (find_vma_links(mm, addr, addr + len, &prev, &rb_link, &rb_parent))
> return NULL; /* should never get here */
> - new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
> - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> - vma->vm_userfaultfd_ctx);
> +
> + /* There is 3 cases to manage here in
> + * AAAA AAAA AAAA AAAA
> + * PPPP.... PPPP......NNNN PPPP....NNNN PP........NN
> + * PPPPPPPP(A) PPPP..NNNNNNNN(B) PPPPPPPPPPPP(1) NULL
> + * PPPPPPPPNNNN(2)
> + * PPPPNNNNNNNN(3)
> + *
> + * new_vma == prev in case A,1,2
> + * new_vma == next in case B,3
> + */
Interleaved tabs and whitespace.
> + new_vma = __vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
> + vma->anon_vma, vma->vm_file, pgoff,
> + vma_policy(vma), vma->vm_userfaultfd_ctx, true);
> if (new_vma) {
> /*
> * Source vma may have been merged into new_vma
> @@ -3205,6 +3221,15 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> get_file(new_vma->vm_file);
> if (new_vma->vm_ops && new_vma->vm_ops->open)
> new_vma->vm_ops->open(new_vma);
> + /*
> + * As the VMA is linked right now, it may be hit by the
> + * speculative page fault handler. But we don't want it to
> + * to start mapping page in this area until the caller has
> + * potentially move the pte from the moved VMA. To prevent
> + * that we protect it right now, and let the caller unprotect
> + * it once the move is done.
> + */
> + vm_raw_write_begin(new_vma);
> vma_link(mm, new_vma, prev, rb_link, rb_parent);
> *need_rmap_locks = false;
> }
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 049470aa1e3e..8ed1a1d6eaed 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -302,6 +302,14 @@ static unsigned long move_vma(struct vm_area_struct *vma,
> if (!new_vma)
> return -ENOMEM;
>
> + /* new_vma is returned protected by copy_vma, to prevent speculative
> + * page fault to be done in the destination area before we move the pte.
> + * Now, we must also protect the source VMA since we don't want pages
> + * to be mapped in our back while we are copying the PTEs.
> + */
> + if (vma != new_vma)
> + vm_raw_write_begin(vma);
> +
> moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len,
> need_rmap_locks);
> if (moved_len < old_len) {
> @@ -318,6 +326,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
> */
> move_page_tables(new_vma, new_addr, vma, old_addr, moved_len,
> true);
> + if (vma != new_vma)
> + vm_raw_write_end(vma);
> vma = new_vma;
> old_len = new_len;
> old_addr = new_addr;
> @@ -326,7 +336,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
> mremap_userfaultfd_prep(new_vma, uf);
> arch_remap(mm, old_addr, old_addr + old_len,
> new_addr, new_addr + new_len);
> + if (vma != new_vma)
> + vm_raw_write_end(vma);
> }
> + vm_raw_write_end(new_vma);
Just do
vm_raw_write_end(vma);
vm_raw_write_end(new_vma);
here.
Hi David,
Thanks a lot for your thorough review of this series.
On 25/03/2018 23:50, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> This configuration variable will be used to build the code needed to
>> handle speculative page fault.
>>
>> By default it is turned off, and activated depending on architecture
>> support.
>>
>> Suggested-by: Thomas Gleixner <[email protected]>
>> Signed-off-by: Laurent Dufour <[email protected]>
>> ---
>> mm/Kconfig | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/mm/Kconfig b/mm/Kconfig
>> index abefa573bcd8..07c566c88faf 100644
>> --- a/mm/Kconfig
>> +++ b/mm/Kconfig
>> @@ -759,3 +759,6 @@ config GUP_BENCHMARK
>> performance of get_user_pages_fast().
>>
>> See tools/testing/selftests/vm/gup_benchmark.c
>> +
>> +config SPECULATIVE_PAGE_FAULT
>> + bool
>
> Should this be configurable even if the arch supports it?
Actually, this is not configurable except by manually editing the .config file.
I made it this way on the Thomas's request :
https://lkml.org/lkml/2018/1/15/969
That sounds like the smarter way to achieve that, doesn't it?
Laurent.
On 25/03/2018 23:50, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> When handling page fault without holding the mmap_sem the fetch of the
>> pte lock pointer and the locking will have to be done while ensuring
>> that the VMA is not touched in our back.
>>
>> So move the fetch and locking operations in a dedicated function.
>>
>> Signed-off-by: Laurent Dufour <[email protected]>
>> ---
>> mm/memory.c | 15 +++++++++++----
>> 1 file changed, 11 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 8ac241b9f370..21b1212a0892 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2288,6 +2288,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>> }
>> EXPORT_SYMBOL_GPL(apply_to_page_range);
>>
>> +static bool pte_spinlock(struct vm_fault *vmf)
>
> inline?
You're right.
Indeed this was done in patch 18, "mm: Provide speculative fault
infrastructure", but it has to be done there too; I'll fix that.
>
>> +{
>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>> + spin_lock(vmf->ptl);
>> + return true;
>> +}
>> +
>> static bool pte_map_lock(struct vm_fault *vmf)
>> {
>> vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>
> Shouldn't pte_unmap_same() take struct vm_fault * and use the new
> pte_spinlock()?
That's done in the next patch, but you already acked it.
On 27/03/2018 23:18, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 2f3e98edc94a..b6432a261e63 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1199,6 +1199,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
>> #define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables
>> * and needs fsync() to complete (for
>> * synchronous page faults in DAX) */
>> +#define VM_FAULT_PTNOTSAME 0x4000 /* Page table entries have changed */
>>
>> #define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
>> VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 21b1212a0892..4bc7b0bdcb40 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2309,21 +2309,29 @@ static bool pte_map_lock(struct vm_fault *vmf)
>> * parts, do_swap_page must check under lock before unmapping the pte and
>> * proceeding (but do_wp_page is only called after already making such a check;
>> * and do_anonymous_page can safely check later on).
>> + *
>> + * pte_unmap_same() returns:
>> + * 0 if the PTE are the same
>> + * VM_FAULT_PTNOTSAME if the PTE are different
>> + * VM_FAULT_RETRY if the VMA has changed in our back during
>> + * a speculative page fault handling.
>> */
>> -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
>> - pte_t *page_table, pte_t orig_pte)
>> +static inline int pte_unmap_same(struct vm_fault *vmf)
>> {
>> - int same = 1;
>> + int ret = 0;
>> +
>> #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
>> if (sizeof(pte_t) > sizeof(unsigned long)) {
>> - spinlock_t *ptl = pte_lockptr(mm, pmd);
>> - spin_lock(ptl);
>> - same = pte_same(*page_table, orig_pte);
>> - spin_unlock(ptl);
>> + if (pte_spinlock(vmf)) {
>> + if (!pte_same(*vmf->pte, vmf->orig_pte))
>> + ret = VM_FAULT_PTNOTSAME;
>> + spin_unlock(vmf->ptl);
>> + } else
>> + ret = VM_FAULT_RETRY;
>> }
>> #endif
>> - pte_unmap(page_table);
>> - return same;
>> + pte_unmap(vmf->pte);
>> + return ret;
>> }
>>
>> static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
>> @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
>> int exclusive = 0;
>> int ret = 0;
>
> Initialization is now unneeded.
I'm sorry, what "initialization" are you talking about here?
>
> Otherwise:
>
> Acked-by: David Rientjes <[email protected]>
Thanks,
Laurent.
On 28/03/2018 12:20, David Rientjes wrote:
> On Wed, 28 Mar 2018, Laurent Dufour wrote:
>
>>>> @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
>>>> int exclusive = 0;
>>>> int ret = 0;
>>>
>>> Initialization is now unneeded.
>>
>> I'm sorry, what "initialization" are you talking about here ?
>>
>
> The initialization of the ret variable.
>
> @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
> int exclusive = 0;
> int ret = 0;
>
> - if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
> + ret = pte_unmap_same(vmf);
> + if (ret)
> goto out;
>
> entry = pte_to_swp_entry(vmf->orig_pte);
>
> "ret" is immediately set to the return value of pte_unmap_same(), so there
> is no need to initialize it to 0.
Sorry, I missed that. I'll remove this initialization.
Thanks,
Laurent.
On Wed, 28 Mar 2018, Laurent Dufour wrote:
> >> This configuration variable will be used to build the code needed to
> >> handle speculative page fault.
> >>
> >> By default it is turned off, and activated depending on architecture
> >> support.
> >>
> >> Suggested-by: Thomas Gleixner <[email protected]>
> >> Signed-off-by: Laurent Dufour <[email protected]>
> >> ---
> >> mm/Kconfig | 3 +++
> >> 1 file changed, 3 insertions(+)
> >>
> >> diff --git a/mm/Kconfig b/mm/Kconfig
> >> index abefa573bcd8..07c566c88faf 100644
> >> --- a/mm/Kconfig
> >> +++ b/mm/Kconfig
> >> @@ -759,3 +759,6 @@ config GUP_BENCHMARK
> >> performance of get_user_pages_fast().
> >>
> >> See tools/testing/selftests/vm/gup_benchmark.c
> >> +
> >> +config SPECULATIVE_PAGE_FAULT
> >> + bool
> >
> > Should this be configurable even if the arch supports it?
>
> Actually, this is not configurable unless by manually editing the .config file.
>
> I made it this way on the Thomas's request :
> https://lkml.org/lkml/2018/1/15/969
>
> That sounds to be the smarter way to achieve that, isn't it ?
>
Putting this in mm/Kconfig is definitely the right way to go about it
instead of any generic option in arch/*.
My question, though, was making this configurable by the user:
config SPECULATIVE_PAGE_FAULT
bool "Speculative page faults"
depends on X86_64 || PPC
default y
help
..
It's a question about whether we want this always enabled on x86_64 and
power or whether the user should be able to disable it (right now they
can't). With a large feature like this, you may want to offer something
simple (disable CONFIG_SPECULATIVE_PAGE_FAULT) if someone runs into
regressions.
On Wed, 28 Mar 2018, Laurent Dufour wrote:
> >> @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
> >> int exclusive = 0;
> >> int ret = 0;
> >
> > Initialization is now unneeded.
>
> I'm sorry, what "initialization" are you talking about here ?
>
The initialization of the ret variable.
@@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
int exclusive = 0;
int ret = 0;
- if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
+ ret = pte_unmap_same(vmf);
+ if (ret)
goto out;
entry = pte_to_swp_entry(vmf->orig_pte);
"ret" is immediately set to the return value of pte_unmap_same(), so there
is no need to initialize it to 0.
On 28/03/2018 12:16, David Rientjes wrote:
> On Wed, 28 Mar 2018, Laurent Dufour wrote:
>
>>>> This configuration variable will be used to build the code needed to
>>>> handle speculative page fault.
>>>>
>>>> By default it is turned off, and activated depending on architecture
>>>> support.
>>>>
>>>> Suggested-by: Thomas Gleixner <[email protected]>
>>>> Signed-off-by: Laurent Dufour <[email protected]>
>>>> ---
>>>> mm/Kconfig | 3 +++
>>>> 1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/mm/Kconfig b/mm/Kconfig
>>>> index abefa573bcd8..07c566c88faf 100644
>>>> --- a/mm/Kconfig
>>>> +++ b/mm/Kconfig
>>>> @@ -759,3 +759,6 @@ config GUP_BENCHMARK
>>>> performance of get_user_pages_fast().
>>>>
>>>> See tools/testing/selftests/vm/gup_benchmark.c
>>>> +
>>>> +config SPECULATIVE_PAGE_FAULT
>>>> + bool
>>>
>>> Should this be configurable even if the arch supports it?
>>
>> Actually, this is not configurable except by manually editing the .config file.
>>
>> I made it this way at Thomas's request:
>> https://lkml.org/lkml/2018/1/15/969
>>
>> That sounds like the smarter way to achieve that, doesn't it?
>>
>
> Putting this in mm/Kconfig is definitely the right way to go about it
> instead of any generic option in arch/*.
>
> My question, though, was making this configurable by the user:
>
> config SPECULATIVE_PAGE_FAULT
> bool "Speculative page faults"
> depends on X86_64 || PPC
> default y
> help
> ..
>
> It's a question about whether we want this always enabled on x86_64 and
> power or whether the user should be able to disable it (right now they
> can't). With a large feature like this, you may want to offer something
> simple (disable CONFIG_SPECULATIVE_PAGE_FAULT) if someone runs into
> regressions.
I agree, but I think it would be important to get the per-architecture
enablement right to avoid complex checks here. For instance, in the case of
PowerPC this is only supported for PPC_BOOK3S_64.
To avoid exposing such per-architecture defines here, what do you think about
having supporting architectures set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
and making SPECULATIVE_PAGE_FAULT depend on it, like this:
In mm/Kconfig:
config SPECULATIVE_PAGE_FAULT
bool "Speculative page faults"
depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT && SMP
default y
help
...
In arch/powerpc/Kconfig:
config PPC
...
select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT if PPC_BOOK3S_64
In arch/x86/Kconfig:
config X86_64
...
select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
On 27/03/2018 23:45, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>> index 65ae54659833..a2d9c87b7b0b 100644
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>> goto out_mm;
>> }
>> for (vma = mm->mmap; vma; vma = vma->vm_next) {
>> - vma->vm_flags &= ~VM_SOFTDIRTY;
>> + vm_write_begin(vma);
>> + WRITE_ONCE(vma->vm_flags,
>> + vma->vm_flags & ~VM_SOFTDIRTY);
>> vma_set_page_prot(vma);
>> + vm_write_end(vma);
>> }
>> downgrade_write(&mm->mmap_sem);
>> break;
>> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
>> index cec550c8468f..b8212ba17695 100644
>> --- a/fs/userfaultfd.c
>> +++ b/fs/userfaultfd.c
>> @@ -659,8 +659,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
>>
>> octx = vma->vm_userfaultfd_ctx.ctx;
>> if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
>> + vm_write_begin(vma);
>> vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
>> - vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
>> + WRITE_ONCE(vma->vm_flags,
>> + vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
>> + vm_write_end(vma);
>> return 0;
>> }
>>
>
> In several locations in this patch vm_write_begin(vma) ->
> vm_write_end(vma) is nesting things other than vma->vm_flags,
> vma->vm_policy, etc. I think it's better to do vm_write_end(vma) as soon
> as the members that the seqcount protects are modified. In other words,
> this isn't offering protection for vma->vm_userfaultfd_ctx. There are
> several examples of this in the patch.
That's true in this particular case, and I could change that to not include the
change to vm_userfaultfd_ctx.
This being said, I don't think this will have a major impact, but I'll make a
close review of this patch to be sure no overly large part of the code is
protected.
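The point about keeping the write section tight can be sketched with a toy, single-threaded reimplementation of the seqcount protocol; the structure layout and helper bodies here are illustrative stand-ins, not the kernel's:

```c
#include <assert.h>

/* Toy seqcount: writers make the counter odd while protected fields are
 * in flux; readers retry if they observed an odd or changed counter. */
struct toy_vma {
        unsigned int seq;
        unsigned long vm_flags;           /* protected by seq */
        void *vm_userfaultfd_ctx;         /* NOT protected by seq here */
};

static void vm_write_begin(struct toy_vma *v) { v->seq++; } /* seq now odd */
static void vm_write_end(struct toy_vma *v)   { v->seq++; } /* even again */

/* Reader side: sample seq, read, re-check; returns the retry count. */
static int read_flags(struct toy_vma *v, unsigned long *out)
{
        int retries = 0;
        unsigned int s;

        do {
                while ((s = v->seq) & 1)
                        retries++;        /* writer in progress */
                *out = v->vm_flags;
        } while (v->seq != s && ++retries);
        return retries;
}

/* Writer keeps the critical section minimal: only the seq-protected
 * field is updated inside begin/end; the unrelated context pointer is
 * updated outside, as suggested in the review above. */
static void clear_flag(struct toy_vma *v, unsigned long mask)
{
        vm_write_begin(v);
        v->vm_flags &= ~mask;
        vm_write_end(v);
        v->vm_userfaultfd_ctx = 0;        /* outside the write section */
}
```

Keeping the section small shortens the window during which concurrent readers must spin or retry.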
>> @@ -885,8 +888,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
>> vma = prev;
>> else
>> prev = vma;
>> - vma->vm_flags = new_flags;
>> + vm_write_begin(vma);
>> + WRITE_ONCE(vma->vm_flags, new_flags);
>> vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
>> + vm_write_end(vma);
>> }
>> up_write(&mm->mmap_sem);
>> mmput(mm);
>> @@ -1434,8 +1439,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
>> * the next vma was merged into the current one and
>> * the current one has not been updated yet.
>> */
>> - vma->vm_flags = new_flags;
>> + vm_write_begin(vma);
>> + WRITE_ONCE(vma->vm_flags, new_flags);
>> vma->vm_userfaultfd_ctx.ctx = ctx;
>> + vm_write_end(vma);
>>
>> skip:
>> prev = vma;
>> @@ -1592,8 +1599,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
>> * the next vma was merged into the current one and
>> * the current one has not been updated yet.
>> */
>> - vma->vm_flags = new_flags;
>> + vm_write_begin(vma);
>> + WRITE_ONCE(vma->vm_flags, new_flags);
>> vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
>> + vm_write_end(vma);
>>
>> skip:
>> prev = vma;
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index b7e2268dfc9a..32314e9e48dd 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -1006,6 +1006,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>> if (mm_find_pmd(mm, address) != pmd)
>> goto out;
>>
>> + vm_write_begin(vma);
>> anon_vma_lock_write(vma->anon_vma);
>>
>> pte = pte_offset_map(pmd, address);
>> @@ -1041,6 +1042,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>> pmd_populate(mm, pmd, pmd_pgtable(_pmd));
>> spin_unlock(pmd_ptl);
>> anon_vma_unlock_write(vma->anon_vma);
>> + vm_write_end(vma);
>> result = SCAN_FAIL;
>> goto out;
>> }
>> @@ -1075,6 +1077,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>> set_pmd_at(mm, address, pmd, _pmd);
>> update_mmu_cache_pmd(vma, address, pmd);
>> spin_unlock(pmd_ptl);
>> + vm_write_end(vma);
>>
>> *hpage = NULL;
>>
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 4d3c922ea1a1..e328f7ab5942 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -184,7 +184,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
>> /*
>> * vm_flags is protected by the mmap_sem held in write mode.
>> */
>> - vma->vm_flags = new_flags;
>> + vm_write_begin(vma);
>> + WRITE_ONCE(vma->vm_flags, new_flags);
>> + vm_write_end(vma);
>> out:
>> return error;
>> }
>> @@ -450,9 +452,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
>> .private = tlb,
>> };
>>
>> + vm_write_begin(vma);
>> tlb_start_vma(tlb, vma);
>> walk_page_range(addr, end, &free_walk);
>> tlb_end_vma(tlb, vma);
>> + vm_write_end(vma);
>> }
>>
>> static int madvise_free_single_vma(struct vm_area_struct *vma,
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index e0e706f0b34e..2632c6f93b63 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -380,8 +380,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
>> struct vm_area_struct *vma;
>>
>> down_write(&mm->mmap_sem);
>> - for (vma = mm->mmap; vma; vma = vma->vm_next)
>> + for (vma = mm->mmap; vma; vma = vma->vm_next) {
>> + vm_write_begin(vma);
>> mpol_rebind_policy(vma->vm_policy, new);
>> + vm_write_end(vma);
>> + }
>> up_write(&mm->mmap_sem);
>> }
>>
>> @@ -554,9 +557,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>> {
>> int nr_updated;
>>
>> + vm_write_begin(vma);
>> nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
>> if (nr_updated)
>> count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
>> + vm_write_end(vma);
>>
>> return nr_updated;
>> }
>> @@ -657,6 +662,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>> if (IS_ERR(new))
>> return PTR_ERR(new);
>>
>> + vm_write_begin(vma);
>> if (vma->vm_ops && vma->vm_ops->set_policy) {
>> err = vma->vm_ops->set_policy(vma, new);
>> if (err)
>> @@ -664,11 +670,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>> }
>>
>> old = vma->vm_policy;
>> - vma->vm_policy = new; /* protected by mmap_sem */
>> + /*
>> + * The speculative page fault handler access this field without
>> + * hodling the mmap_sem.
>> + */
>
> "The speculative page fault handler accesses this field without holding
> vma->vm_mm->mmap_sem"
Oops :/
>
>> + WRITE_ONCE(vma->vm_policy, new);
>> + vm_write_end(vma);
>> mpol_put(old);
>>
>> return 0;
>> err_out:
>> + vm_write_end(vma);
>> mpol_put(new);
>> return err;
>> }
>
> Wait, doesn't vma_dup_policy() also need to protect dst->vm_policy?
Indeed, this is not necessary because vma_dup_policy() is called when dst is
not yet linked in the RB tree, so it can't be found by the speculative page
fault handler. This is not the case for vma_replace_policy(), which is why
the protection is needed there.
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2121,7 +2121,9 @@ int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
>
> if (IS_ERR(pol))
> return PTR_ERR(pol);
> - dst->vm_policy = pol;
> + vm_write_begin(dst);
> + WRITE_ONCE(dst->vm_policy, pol);
> + vm_write_end(dst);
> return 0;
> }
On 27/03/2018 23:57, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 5898255d0aeb..d6533cb85213 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -847,17 +847,18 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>> }
>>
>> if (start != vma->vm_start) {
>> - vma->vm_start = start;
>> + WRITE_ONCE(vma->vm_start, start);
>> start_changed = true;
>> }
>> if (end != vma->vm_end) {
>> - vma->vm_end = end;
>> + WRITE_ONCE(vma->vm_end, end);
>> end_changed = true;
>> }
>> - vma->vm_pgoff = pgoff;
>> + WRITE_ONCE(vma->vm_pgoff, pgoff);
>> if (adjust_next) {
>> - next->vm_start += adjust_next << PAGE_SHIFT;
>> - next->vm_pgoff += adjust_next;
>> + WRITE_ONCE(next->vm_start,
>> + next->vm_start + (adjust_next << PAGE_SHIFT));
>> + WRITE_ONCE(next->vm_pgoff, next->vm_pgoff + adjust_next);
>> }
>>
>> if (root) {
>> @@ -1781,6 +1782,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>> out:
>> perf_event_mmap(vma);
>>
>> + vm_write_begin(vma);
>> vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
>> if (vm_flags & VM_LOCKED) {
>> if (!((vm_flags & VM_SPECIAL) || is_vm_hugetlb_page(vma) ||
>> @@ -1803,6 +1805,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>> vma->vm_flags |= VM_SOFTDIRTY;
>>
>> vma_set_page_prot(vma);
>> + vm_write_end(vma);
>>
>> return addr;
>>
>
> Shouldn't this also protect vma->vm_flags?
Nice catch!
I just found that too while reviewing the entire patch to answer your previous
email.
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1796,7 +1796,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> vma == get_gate_vma(current->mm)))
> mm->locked_vm += (len >> PAGE_SHIFT);
> else
> - vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
> + WRITE_ONCE(vma->vm_flags,
> + vma->vm_flags & VM_LOCKED_CLEAR_MASK);
> }
>
> if (file)
> @@ -1809,7 +1810,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> * then new mapped in-place (which must be aimed as
> * a completely new data area).
> */
> - vma->vm_flags |= VM_SOFTDIRTY;
> + WRITE_ONCE(vma->vm_flags, vma->vm_flags | VM_SOFTDIRTY);
>
> vma_set_page_prot(vma);
> vm_write_end(vma);
>
On 28/03/2018 00:12, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 88042d843668..ef6ef0627090 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -2189,16 +2189,24 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
>> extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
>> extern int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>> unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
>> - struct vm_area_struct *expand);
>> + struct vm_area_struct *expand, bool keep_locked);
>> static inline int vma_adjust(struct vm_area_struct *vma, unsigned long start,
>> unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert)
>> {
>> - return __vma_adjust(vma, start, end, pgoff, insert, NULL);
>> + return __vma_adjust(vma, start, end, pgoff, insert, NULL, false);
>> }
>> -extern struct vm_area_struct *vma_merge(struct mm_struct *,
>> +extern struct vm_area_struct *__vma_merge(struct mm_struct *,
>> struct vm_area_struct *prev, unsigned long addr, unsigned long end,
>> unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
>> - struct mempolicy *, struct vm_userfaultfd_ctx);
>> + struct mempolicy *, struct vm_userfaultfd_ctx, bool keep_locked);
>> +static inline struct vm_area_struct *vma_merge(struct mm_struct *vma,
>> + struct vm_area_struct *prev, unsigned long addr, unsigned long end,
>> + unsigned long vm_flags, struct anon_vma *anon, struct file *file,
>> + pgoff_t off, struct mempolicy *pol, struct vm_userfaultfd_ctx uff)
>> +{
>> + return __vma_merge(vma, prev, addr, end, vm_flags, anon, file, off,
>> + pol, uff, false);
>> +}
>
> The first formal to vma_merge() is an mm, not a vma.
Oops!
> This area could use an uncluttering.
>
>> extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
>> extern int __split_vma(struct mm_struct *, struct vm_area_struct *,
>> unsigned long addr, int new_below);
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index d6533cb85213..ac32b577a0c9 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -684,7 +684,7 @@ static inline void __vma_unlink_prev(struct mm_struct *mm,
>> */
>> int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>> unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
>> - struct vm_area_struct *expand)
>> + struct vm_area_struct *expand, bool keep_locked)
>> {
>> struct mm_struct *mm = vma->vm_mm;
>> struct vm_area_struct *next = vma->vm_next, *orig_vma = vma;
>> @@ -996,7 +996,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>
>> if (next && next != vma)
>> vm_raw_write_end(next);
>> - vm_raw_write_end(vma);
>> + if (!keep_locked)
>> + vm_raw_write_end(vma);
>>
>> validate_mm(mm);
>>
>
> This will require a fixup for the following patch where a retval from
> anon_vma_close() can also return without vma locked even though
> keep_locked == true.
Yes I saw your previous email about that.
>
> How does vma_merge() handle that error wrt vm_raw_write_begin(vma)?
The unwritten assumption is that when __vma_adjust() returns an error, the
vma is no longer sequence locked. That needs to be clarified by a big
comment near the __vma_adjust() definition.
This being said, in that case __vma_merge() returns NULL, which means that
it didn't do the job (the caller doesn't know whether this is due to an
error or because the VMAs cannot be merged). Here again the assumption is
that the VMAs are not locked, since the merge operation was not done. But
again this is not documented at all and I have to fix that.
In addition, the caller copy_vma(), which is the only one calling
__vma_merge() with keep_locked=true, assumes that in that case the VMA is
not locked, and it will allocate a new VMA which will be locked before it
is inserted in the RB tree.
So it should work, but I have to make a real effort to document that in the
code.
Thanks a lot for raising this!
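The convention discussed above — on error the object comes back unlocked even when keep_locked was requested — can be sketched in a toy form; all names are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the locking convention: on success with keep_locked the
 * object stays write-locked for the caller; on any error the lock is
 * always dropped before returning. */
struct toy_obj { int locked; };

static void raw_write_begin(struct toy_obj *o) { o->locked = 1; }
static void raw_write_end(struct toy_obj *o)   { o->locked = 0; }

static int toy_adjust(struct toy_obj *o, bool fail, bool keep_locked)
{
        raw_write_begin(o);

        if (fail) {
                /* Error path: never leave the lock held, even when the
                 * caller asked for keep_locked. */
                raw_write_end(o);
                return -1;
        }

        if (!keep_locked)
                raw_write_end(o);
        return 0;
}
```

Documenting this next to the function definition, as proposed in the thread, makes the ownership transfer explicit for every caller.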
>
>> @@ -1132,12 +1133,13 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
>> * parameter) may establish ptes with the wrong permissions of NNNN
>> * instead of the right permissions of XXXX.
>> */
>> -struct vm_area_struct *vma_merge(struct mm_struct *mm,
>> +struct vm_area_struct *__vma_merge(struct mm_struct *mm,
>> struct vm_area_struct *prev, unsigned long addr,
>> unsigned long end, unsigned long vm_flags,
>> struct anon_vma *anon_vma, struct file *file,
>> pgoff_t pgoff, struct mempolicy *policy,
>> - struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
>> + struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
>> + bool keep_locked)
>> {
>> pgoff_t pglen = (end - addr) >> PAGE_SHIFT;
>> struct vm_area_struct *area, *next;
>> @@ -1185,10 +1187,11 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
>> /* cases 1, 6 */
>> err = __vma_adjust(prev, prev->vm_start,
>> next->vm_end, prev->vm_pgoff, NULL,
>> - prev);
>> + prev, keep_locked);
>> } else /* cases 2, 5, 7 */
>> err = __vma_adjust(prev, prev->vm_start,
>> - end, prev->vm_pgoff, NULL, prev);
>> + end, prev->vm_pgoff, NULL, prev,
>> + keep_locked);
>> if (err)
>> return NULL;
>> khugepaged_enter_vma_merge(prev, vm_flags);
>> @@ -1205,10 +1208,12 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
>> vm_userfaultfd_ctx)) {
>> if (prev && addr < prev->vm_end) /* case 4 */
>> err = __vma_adjust(prev, prev->vm_start,
>> - addr, prev->vm_pgoff, NULL, next);
>> + addr, prev->vm_pgoff, NULL, next,
>> + keep_locked);
>> else { /* cases 3, 8 */
>> err = __vma_adjust(area, addr, next->vm_end,
>> - next->vm_pgoff - pglen, NULL, next);
>> + next->vm_pgoff - pglen, NULL, next,
>> + keep_locked);
>> /*
>> * In case 3 area is already equal to next and
>> * this is a noop, but in case 8 "area" has
>> @@ -3163,9 +3168,20 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>>
>> if (find_vma_links(mm, addr, addr + len, &prev, &rb_link, &rb_parent))
>> return NULL; /* should never get here */
>> - new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
>> - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
>> - vma->vm_userfaultfd_ctx);
>> +
>> + /* There is 3 cases to manage here in
>> + * AAAA AAAA AAAA AAAA
>> + * PPPP.... PPPP......NNNN PPPP....NNNN PP........NN
>> + * PPPPPPPP(A) PPPP..NNNNNNNN(B) PPPPPPPPPPPP(1) NULL
>> + * PPPPPPPPNNNN(2)
>> + * PPPPNNNNNNNN(3)
>> + *
>> + * new_vma == prev in case A,1,2
>> + * new_vma == next in case B,3
>> + */
>
> Interleaved tabs and whitespace.
Fair enough, I will try to fix that.
>> + new_vma = __vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
>> + vma->anon_vma, vma->vm_file, pgoff,
>> + vma_policy(vma), vma->vm_userfaultfd_ctx, true);
>> if (new_vma) {
>> /*
>> * Source vma may have been merged into new_vma
>> @@ -3205,6 +3221,15 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>> get_file(new_vma->vm_file);
>> if (new_vma->vm_ops && new_vma->vm_ops->open)
>> new_vma->vm_ops->open(new_vma);
>> + /*
>> + * As the VMA is linked right now, it may be hit by the
>> + * speculative page fault handler. But we don't want it to
>> +	 * start mapping pages in this area until the caller has
>> +	 * potentially moved the pte from the moved VMA. To prevent
>> + * that we protect it right now, and let the caller unprotect
>> + * it once the move is done.
>> + */
>> + vm_raw_write_begin(new_vma);
>> vma_link(mm, new_vma, prev, rb_link, rb_parent);
>> *need_rmap_locks = false;
>> }
>> diff --git a/mm/mremap.c b/mm/mremap.c
>> index 049470aa1e3e..8ed1a1d6eaed 100644
>> --- a/mm/mremap.c
>> +++ b/mm/mremap.c
>> @@ -302,6 +302,14 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>> if (!new_vma)
>> return -ENOMEM;
>>
>> + /* new_vma is returned protected by copy_vma, to prevent speculative
>> + * page fault to be done in the destination area before we move the pte.
>> + * Now, we must also protect the source VMA since we don't want pages
>> + * to be mapped in our back while we are copying the PTEs.
>> + */
>> + if (vma != new_vma)
>> + vm_raw_write_begin(vma);
>> +
>> moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len,
>> need_rmap_locks);
>> if (moved_len < old_len) {
>> @@ -318,6 +326,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>> */
>> move_page_tables(new_vma, new_addr, vma, old_addr, moved_len,
>> true);
>> + if (vma != new_vma)
>> + vm_raw_write_end(vma);
>> vma = new_vma;
>> old_len = new_len;
>> old_addr = new_addr;
>> @@ -326,7 +336,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>> mremap_userfaultfd_prep(new_vma, uf);
>> arch_remap(mm, old_addr, old_addr + old_len,
>> new_addr, new_addr + new_len);
>> + if (vma != new_vma)
>> + vm_raw_write_end(vma);
>> }
>> + vm_raw_write_end(new_vma);
>
> Just do
>
> vm_raw_write_end(vma);
> vm_raw_write_end(new_vma);
>
> here.
Are you sure? We can have vma = new_vma done if (unlikely(err)).
Cheers,
Laurent.
On Wed, 28 Mar 2018, Laurent Dufour wrote:
> > Putting this in mm/Kconfig is definitely the right way to go about it
> > instead of any generic option in arch/*.
> >
> > My question, though, was making this configurable by the user:
> >
> > config SPECULATIVE_PAGE_FAULT
> > bool "Speculative page faults"
> > depends on X86_64 || PPC
> > default y
> > help
> > ..
> >
> > It's a question about whether we want this always enabled on x86_64 and
> > power or whether the user should be able to disable it (right now they
> > can't). With a large feature like this, you may want to offer something
> > simple (disable CONFIG_SPECULATIVE_PAGE_FAULT) if someone runs into
> > regressions.
>
> I agree, but I think it would be important to get the per-architecture
> enablement right to avoid complex checks here. For instance, in the case of
> PowerPC this is only supported for PPC_BOOK3S_64.
>
> To avoid exposing such per-architecture defines here, what do you think about
> having supporting architectures set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> and making SPECULATIVE_PAGE_FAULT depend on it, like this:
>
> In mm/Kconfig:
> config SPECULATIVE_PAGE_FAULT
> bool "Speculative page faults"
> depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT && SMP
> default y
> help
> ...
>
> In arch/powerpc/Kconfig:
> config PPC
> ...
> select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT if PPC_BOOK3S_64
>
> In arch/x86/Kconfig:
> config X86_64
> ...
> select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>
>
Looks good to me! It feels like this will add more assurance that, if
things regress for certain workloads, it can be disabled. I don't
feel strongly about the default value, I'm ok with it being enabled by
default.
On Wed, 28 Mar 2018, Laurent Dufour wrote:
> >> @@ -326,7 +336,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
> >> mremap_userfaultfd_prep(new_vma, uf);
> >> arch_remap(mm, old_addr, old_addr + old_len,
> >> new_addr, new_addr + new_len);
> >> + if (vma != new_vma)
> >> + vm_raw_write_end(vma);
> >> }
> >> + vm_raw_write_end(new_vma);
> >
> > Just do
> >
> > vm_raw_write_end(vma);
> > vm_raw_write_end(new_vma);
> >
> > here.
>
> Are you sure? We can have vma = new_vma done if (unlikely(err)).
>
Sorry, what I meant was do
if (vma != new_vma)
vm_raw_write_end(vma);
vm_raw_write_end(new_vma);
after the conditional. Having the locking unnecessarily embedded in the
conditional has been an issue in the past with other areas of core code,
unless you have a strong reason for it.
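The suggested restructuring can be sketched like this (illustrative stand-ins, not the kernel code); note that the vma != new_vma guard is still needed after the conditional because the error path can leave vma equal to new_vma:

```c
#include <assert.h>

/* Toy stand-in for a write-locked VMA. */
struct toy_vma { int write_locked; };

static void raw_write_end(struct toy_vma *v) { v->write_locked = 0; }

/* Single unlock site after the if/else, instead of duplicating the
 * conditional unlock inside each branch of move_vma(). */
static void finish_move(struct toy_vma *vma, struct toy_vma *new_vma)
{
        /* ... the error/success handling goes here, with no unlocking
         * embedded in either branch ... */

        if (vma != new_vma)     /* guard against the aliased error path */
                raw_write_end(vma);
        raw_write_end(new_vma);
}
```

The guard avoids a double unlock when the two pointers alias, while keeping one obvious unlock site.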
On 22/03/2018 02:21, Ganesh Mahendran wrote:
> Hi, Laurent
>
> 2018-03-14 1:59 GMT+08:00 Laurent Dufour <[email protected]>:
>> This is a port on kernel 4.16 of the work done by Peter Zijlstra to
>> handle page fault without holding the mm semaphore [1].
>>
>> The idea is to try to handle user space page faults without holding the
>> mmap_sem. This should allow better concurrency for massively threaded
>> processes since the page fault handler will not wait for other threads'
>> memory layout changes to be done, assuming that the change is done in
>> another part of the process's memory space. This type of page fault is
>> named a speculative page fault. If the speculative page fault fails
>> because concurrency is detected or because the underlying PMD or PTE
>> tables are not yet allocated, it aborts its processing and a classic
>> page fault is then tried.
>>
>> The speculative page fault (SPF) has to look for the VMA matching the fault
>> address without holding the mmap_sem; this is done by introducing a rwlock
>> which protects the access to the mm_rb tree. Previously this was done using
>> SRCU, but it was introducing a lot of scheduling to process the VMA freeing
>> operation, which was hitting the performance by 20% as reported by Kemi
>> Wang [2]. Using a rwlock to protect access to the mm_rb tree limits the
>> locking contention to these operations, which are expected to be of
>> O(log n) order. In addition, to ensure that the VMA is not freed behind
>> our back, a reference count is added and 2 services (get_vma() and
>> put_vma()) are introduced to handle the reference count. When a VMA is
>> fetched from the RB tree using get_vma(), it must later be released using
>> put_vma(). Furthermore, to allow the VMA to be used again by the classic
>> page fault handler, a service, can_reuse_spf_vma(), is introduced. This
>> service is expected to be called with the mmap_sem held. It checks that
>> the VMA still matches the specified address and releases its reference
>> count; as the mmap_sem is held, it is ensured that the VMA will not be
>> freed behind our back. In general, the VMA's reference count can be
>> decremented while holding the mmap_sem, but it should not be increased,
>> as holding the mmap_sem ensures that the VMA is stable. I can no longer
>> see the overhead I previously got with the will-it-scale benchmark.
>>
>> The VMA's attributes checked during the speculative page fault processing
>> have to be protected against parallel changes. This is done by using a
>> per-VMA sequence lock. This sequence lock allows the speculative page fault
>> handler to quickly check for parallel changes in progress and to abort the
>> speculative page fault in that case.
>>
>> Once the VMA is found, the speculative page fault handler checks the
>> VMA's attributes to verify that the page fault can be handled correctly.
>> Thus the VMA is protected through a sequence lock which allows fast
>> detection of concurrent VMA changes. If such a change is detected, the
>> speculative page fault is aborted and a *classic* page fault is tried.
>> VMA sequence locking is added where the VMA attributes which are checked
>> during the page fault are modified.
>>
>> When the PTE is fetched, the VMA is checked to see if it has been
>> changed, so once the page table is locked the VMA is known to be valid.
>> Any other change touching this PTE will need to lock the page table, so
>> no parallel change is possible at this time.
>>
>> The locking of the PTE is done with interrupts disabled; this allows
>> checking the PMD to ensure that there is no ongoing collapsing
>> operation. Since khugepaged first sets the PMD to pmd_none and then
>> waits for the other CPUs to have caught the IPI interrupt, if the PMD is
>> valid at the time the PTE is locked, we have the guarantee that the
>> collapsing operation will have to wait on the PTE lock to move forward.
>> This allows the SPF handler to map the PTE safely. If the PMD value is
>> different from the one recorded at the beginning of the SPF operation,
>> the classic page fault handler will be called to handle the operation
>> while holding the mmap_sem. As the PTE lock is taken with interrupts
>> disabled, the lock is taken using spin_trylock() to avoid deadlock when
>> handling a page fault while a TLB invalidate is requested by another CPU
>> holding the PTE.
>>
>> Support for THP is not done because when checking the PMD, we can be
>> confused by an in-progress collapsing operation done by khugepaged. The
>> issue is that pmd_none() could be true either if the PMD is not yet
>> populated or if the underlying PTEs are in the process of being
>> collapsed. So we cannot safely allocate a PMD if pmd_none() is true.
>>
>> This series adds a new software performance event named
>> 'speculative-faults' or 'spf'. It counts the number of successful page
>> fault events handled in a speculative way. When recording 'faults,spf'
>> events, the 'faults' one counts the total number of page fault events
>> while 'spf' only counts the part of the faults processed in a
>> speculative way.
>>
>> There are some trace events introduced by this series. They allow
>> identifying why the page faults were not processed in a speculative way.
>> This doesn't take into account the faults generated by a monothreaded
>> process, which are directly processed while holding the mmap_sem. These
>> trace events are grouped in a system named 'pagefault'; they are:
>> - pagefault:spf_pte_lock : if the pte was already locked by another thread
>> - pagefault:spf_vma_changed : if the VMA has been changed in our back
>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set.
>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed in our
>> back.
>>
>> To record all the related events, the easiest is to run perf with the
>> following arguments:
>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>
>> This series builds on top of v4.16-rc2-mmotm-2018-02-21-14-48 and is
>> functional on x86 and PowerPC.
>>
>> ---------------------
>> Real Workload results
>>
>> As mentioned in a previous email, we did unofficial runs using a "popular
>> in-memory multithreaded database product" on a 176 cores SMT8 Power system
>> which showed a 30% improvement in the number of transactions processed per
>> second. This run was done on the v6 series, but the changes introduced in
>> this new version should not impact the performance boost seen.
>>
>> Here are the perf data captured during 2 of these runs on top of the v8
>> series:
>> vanilla spf
>> faults 89.418 101.364
>> spf n/a 97.989
>>
>> With the SPF kernel, most of the page faults were processed in a
>> speculative way.
>>
>> ------------------
>> Benchmarks results
>>
>> Base kernel is v4.16-rc4-mmotm-2018-03-09-16-34
>> SPF is BASE + this series
>>
>> Kernbench:
>> ----------
>> Here are the results on a 16 CPUs X86 guest using kernbench on a 4.13-rc4
>> kernel (kernel is build 5 times):
>>
>> Average Half load -j 8
>> Run (std deviation)
>> BASE SPF
>> Elapsed Time 151.36 (1.40139) 151.748 (1.09716) 0.26%
>> User Time 1023.19 (3.58972) 1027.35 (2.30396) 0.41%
>> System Time 125.026 (1.8547) 124.504 (0.980015) -0.42%
>> Percent CPU 758.2 (5.54076) 758.6 (3.97492) 0.05%
>> Context Switches 54924 (453.634) 54851 (382.293) -0.13%
>> Sleeps 105589 (704.581) 105282 (435.502) -0.29%
>>
>> Average Optimal load -j 16
>> Run (std deviation)
>> BASE SPF
>> Elapsed Time 74.804 (1.25139) 74.368 (0.406288) -0.58%
>> User Time 962.033 (64.5125) 963.93 (66.8797) 0.20%
>> System Time 110.771 (15.0817) 110.387 (14.8989) -0.35%
>> Percent CPU 1045.7 (303.387) 1049.1 (306.255) 0.33%
>> Context Switches 76201.8 (22433.1) 76170.4 (22482.9) -0.04%
>> Sleeps 110289 (5024.05) 110220 (5248.58) -0.06%
>>
>> During a run on the SPF, perf events were captured:
>> Performance counter stats for '../kernbench -M':
>> 510334017 faults
>> 200 spf
>> 0 pagefault:spf_pte_lock
>> 0 pagefault:spf_vma_changed
>> 0 pagefault:spf_vma_noanon
>> 2174 pagefault:spf_vma_notsup
>> 0 pagefault:spf_vma_access
>> 0 pagefault:spf_pmd_changed
>>
>> Very few speculative page faults were recorded as most of the processes
>> involved are monothreaded (it seems that on this architecture some threads
>> were created during the kernel build processing).
>>
>> Here are the kernbench results on a 80 CPUs Power8 system:
>>
>> Average Half load -j 40
>> Run (std deviation)
>> BASE SPF
>> Elapsed Time 116.958 (0.73401) 117.43 (0.927497) 0.40%
>> User Time 4472.35 (7.85792) 4480.16 (19.4909) 0.17%
>> System Time 136.248 (0.587639) 136.922 (1.09058) 0.49%
>> Percent CPU 3939.8 (20.6567) 3931.2 (17.2829) -0.22%
>> Context Switches 92445.8 (236.672) 92720.8 (270.118) 0.30%
>> Sleeps 318475 (1412.6) 317996 (1819.07) -0.15%
>>
>> Average Optimal load -j 80
>> Run (std deviation)
>> BASE SPF
>> Elapsed Time 106.976 (0.406731) 107.72 (0.329014) 0.70%
>> User Time 5863.47 (1466.45) 5865.38 (1460.27) 0.03%
>> System Time 159.995 (25.0393) 160.329 (24.6921) 0.21%
>> Percent CPU 5446.2 (1588.23) 5416 (1565.34) -0.55%
>> Context Switches 223018 (137637) 224867 (139305) 0.83%
>> Sleeps 330846 (13127.3) 332348 (15556.9) 0.45%
>>
>> During a run on the SPF, perf events were captured:
>> Performance counter stats for '../kernbench -M':
>> 116612488 faults
>> 0 spf
>> 0 pagefault:spf_pte_lock
>> 0 pagefault:spf_vma_changed
>> 0 pagefault:spf_vma_noanon
>> 473 pagefault:spf_vma_notsup
>> 0 pagefault:spf_vma_access
>> 0 pagefault:spf_pmd_changed
>>
>> Most of the processes involved are monothreaded so SPF is not activated,
>> but there is no impact on the performance.
>>
>> Ebizzy:
>> -------
>> The test counts the number of records per second it can manage; the
>> higher the better. I ran it as 'ebizzy -mTRp'. To get consistent
>> results, I repeated the test 100 times and measured the average number
>> of records processed per second.
>>
>> BASE SPF delta
>> 16 CPUs x86 VM 14902.6 95905.16 543.55%
>> 80 CPUs P8 node 37240.24 78185.67 109.95%
>>
>> Here are the performance counters read during a run on a 16 CPUs x86 VM:
>> Performance counter stats for './ebizzy -mRTp':
>> 888157 faults
>> 884773 spf
>> 92 pagefault:spf_pte_lock
>> 2379 pagefault:spf_vma_changed
>> 0 pagefault:spf_vma_noanon
>> 80 pagefault:spf_vma_notsup
>> 0 pagefault:spf_vma_access
>> 0 pagefault:spf_pmd_changed
>>
>> And the ones captured during a run on a 80 CPUs Power node:
>> Performance counter stats for './ebizzy -mRTp':
>> 762134 faults
>> 728663 spf
>> 19101 pagefault:spf_pte_lock
>> 13969 pagefault:spf_vma_changed
>> 0 pagefault:spf_vma_noanon
>> 272 pagefault:spf_vma_notsup
>> 0 pagefault:spf_vma_access
>> 0 pagefault:spf_pmd_changed
>>
>> In ebizzy's case most of the page faults were handled in a speculative way,
>> leading to the ebizzy performance boost.
>
> We ported the SPF to kernel 4.9 in android devices.
> For the app launch time, it improves about 15% on average. For apps
> which have hundreds of threads, it is about 20%.
Hi Ganesh,
Thanks for sharing these great and encouraging results.
Could you please give a bit more detail about your system configuration and
application?
Laurent.
> Thanks.
>
>>
>> ------------------
>> Changes since v8:
>> - Don't check PMD when locking the pte when THP is disabled
>> Thanks to Daniel Jordan for reporting this.
>> - Rebase on 4.16
>> Changes since v7:
>> - move pte_map_lock() and pte_spinlock() upper in mm/memory.c (patch 4 &
>> 5)
>> - make pte_unmap_same() compatible with the speculative page fault (patch
>> 6)
>> Changes since v6:
>> - Rename config variable to CONFIG_SPECULATIVE_PAGE_FAULT (patch 1)
>> - Review the way the config variable is set (patch 1 to 3)
>> - Introduce mm_rb_write_*lock() in mm/mmap.c (patch 18)
>> - Merge patch introducing pte try locking in the patch 18.
>> Changes since v5:
>> - use rwlock against the mm RB tree in place of SRCU
>> - add a VMA's reference count to protect VMA while using it without
>> holding the mmap_sem.
>> - check PMD value to detect collapsing operation
>> - don't try speculative page fault for mono threaded processes
>> - try to reuse the fetched VMA if VM_FAULT_RETRY is returned
>> - go directly to the error path if an error is detected during the SPF
>> path
>> - fix race window when moving VMA in move_vma()
>> Changes since v4:
>> - As requested by Andrew Morton, use CONFIG_SPF and define it earlier in
>> the series to ease bisection.
>> Changes since v3:
>> - Don't build when CONFIG_SMP is not set
>> - Fixed a lock dependency warning in __vma_adjust()
>> - Use READ_ONCE to access p*d values in handle_speculative_fault()
>> - Call memcp_oom() service in handle_speculative_fault()
>> Changes since v2:
>> - Perf event is renamed in PERF_COUNT_SW_SPF
>> - Clean up do_page_fault() on Power
>> - On Power if the VM_FAULT_ERROR is returned by
>> handle_speculative_fault(), do not retry but jump to the error path
>> - If VMA's flags are not matching the fault, directly returns
>> VM_FAULT_SIGSEGV and not VM_FAULT_RETRY
>> - Check for pud_trans_huge() to avoid speculative path
>> - Handle _vm_normal_page() introduced by 6f16211df3bf
>> ("mm/device-public-memory: device memory cache coherent with CPU")
>> - add and review few comments in the code
>> Changes since v1:
>> - Remove PERF_COUNT_SW_SPF_FAILED perf event.
>> - Add tracing events to detail speculative page fault failures.
>> - Cache VMA field values which are used once the PTE is unlocked at the
>> end of the page fault handling.
>> - Ensure that fields read during the speculative path are written and read
>> using WRITE_ONCE and READ_ONCE.
>> - Add checks at the beginning of the speculative path to abort it if the
>> VMA is known to not be supported.
>> Changes since RFC V5 [5]
>> - Port to 4.13 kernel
>> - Merging patch fixing lock dependency into the original patch
>> - Replace the 2 parameters of vma_has_changed() with the vmf pointer
>> - In patch 7, don't call __do_fault() in the speculative path as it may
>> want to unlock the mmap_sem.
>> - In patch 11-12, don't check for vma boundaries when
>> page_add_new_anon_rmap() is called during the spf path and protect against
>> anon_vma pointer's update.
>> - In patch 13-16, add performance events to report number of successful
>> and failed speculative events.
>>
>> [1]
>> http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>> [2] https://patchwork.kernel.org/patch/9999687/
>>
>>
>> Laurent Dufour (20):
>> mm: Introduce CONFIG_SPECULATIVE_PAGE_FAULT
>> x86/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
>> powerpc/mm: Define CONFIG_SPECULATIVE_PAGE_FAULT
>> mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>> mm: make pte_unmap_same compatible with SPF
>> mm: Protect VMA modifications using VMA sequence count
>> mm: protect mremap() against SPF handler
>> mm: Protect SPF handler against anon_vma changes
>> mm: Cache some VMA fields in the vm_fault structure
>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>> mm: Introduce __lru_cache_add_active_or_unevictable
>> mm: Introduce __maybe_mkwrite()
>> mm: Introduce __vm_normal_page()
>> mm: Introduce __page_add_new_anon_rmap()
>> mm: Protect mm_rb tree with a rwlock
>> mm: Adding speculative page fault failure trace events
>> perf: Add a speculative page fault sw event
>> perf tools: Add support for the SPF perf event
>> mm: Speculative page fault handler return VMA
>> powerpc/mm: Add speculative page fault
>>
>> Peter Zijlstra (4):
>> mm: Prepare for FAULT_FLAG_SPECULATIVE
>> mm: VMA sequence count
>> mm: Provide speculative fault infrastructure
>> x86/mm: Add speculative pagefault handling
>>
>> arch/powerpc/Kconfig | 1 +
>> arch/powerpc/mm/fault.c | 31 +-
>> arch/x86/Kconfig | 1 +
>> arch/x86/mm/fault.c | 38 ++-
>> fs/proc/task_mmu.c | 5 +-
>> fs/userfaultfd.c | 17 +-
>> include/linux/hugetlb_inline.h | 2 +-
>> include/linux/migrate.h | 4 +-
>> include/linux/mm.h | 92 +++++-
>> include/linux/mm_types.h | 7 +
>> include/linux/pagemap.h | 4 +-
>> include/linux/rmap.h | 12 +-
>> include/linux/swap.h | 10 +-
>> include/trace/events/pagefault.h | 87 +++++
>> include/uapi/linux/perf_event.h | 1 +
>> kernel/fork.c | 3 +
>> mm/Kconfig | 3 +
>> mm/hugetlb.c | 2 +
>> mm/init-mm.c | 3 +
>> mm/internal.h | 20 ++
>> mm/khugepaged.c | 5 +
>> mm/madvise.c | 6 +-
>> mm/memory.c | 594 ++++++++++++++++++++++++++++++----
>> mm/mempolicy.c | 51 ++-
>> mm/migrate.c | 4 +-
>> mm/mlock.c | 13 +-
>> mm/mmap.c | 211 +++++++++---
>> mm/mprotect.c | 4 +-
>> mm/mremap.c | 13 +
>> mm/rmap.c | 5 +-
>> mm/swap.c | 6 +-
>> mm/swap_state.c | 8 +-
>> tools/include/uapi/linux/perf_event.h | 1 +
>> tools/perf/util/evsel.c | 1 +
>> tools/perf/util/parse-events.c | 4 +
>> tools/perf/util/parse-events.l | 1 +
>> tools/perf/util/python.c | 1 +
>> 37 files changed, 1097 insertions(+), 174 deletions(-)
>> create mode 100644 include/trace/events/pagefault.h
>>
>> --
>> 2.7.4
>>
>
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ef6ef0627090..dfa81a638b7c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -359,6 +359,12 @@ struct vm_fault {
> * page table to avoid allocation from
> * atomic context.
> */
> + /*
> + * These entries are required when handling speculative page fault.
> + * This way the page handling is done using consistent field values.
> + */
> + unsigned long vma_flags;
> + pgprot_t vma_page_prot;
> };
>
> /* page entry size for vm->huge_fault() */
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 446427cafa19..f71db2b42b30 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3717,6 +3717,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
> .vma = vma,
> .address = address,
> .flags = flags,
> + .vma_flags = vma->vm_flags,
> + .vma_page_prot = vma->vm_page_prot,
> /*
> * Hard to debug if it ends up being
> * used by a callee that assumes
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 32314e9e48dd..a946d5306160 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -882,6 +882,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
> .flags = FAULT_FLAG_ALLOW_RETRY,
> .pmd = pmd,
> .pgoff = linear_page_index(vma, address),
> + .vma_flags = vma->vm_flags,
> + .vma_page_prot = vma->vm_page_prot,
> };
>
> /* we only decide to swapin, if there is enough young ptes */
> diff --git a/mm/memory.c b/mm/memory.c
> index 0200340ef089..46fe92b93682 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2615,7 +2615,7 @@ static int wp_page_copy(struct vm_fault *vmf)
> * Don't let another task, with possibly unlocked vma,
> * keep the mlocked page.
> */
> - if (page_copied && (vma->vm_flags & VM_LOCKED)) {
> + if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
> lock_page(old_page); /* LRU manipulation */
> if (PageMlocked(old_page))
> munlock_vma_page(old_page);
Doesn't wp_page_copy() also need to pass this to anon_vma_prepare() so
that find_mergeable_anon_vma() works correctly?
> @@ -2649,7 +2649,7 @@ static int wp_page_copy(struct vm_fault *vmf)
> */
> int finish_mkwrite_fault(struct vm_fault *vmf)
> {
> - WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
> + WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
> if (!pte_map_lock(vmf))
> return VM_FAULT_RETRY;
> /*
> @@ -2751,7 +2751,7 @@ static int do_wp_page(struct vm_fault *vmf)
> * We should not cow pages in a shared writeable mapping.
> * Just mark the pages writable and/or call ops->pfn_mkwrite.
> */
> - if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
> + if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
> (VM_WRITE|VM_SHARED))
> return wp_pfn_shared(vmf);
>
> @@ -2798,7 +2798,7 @@ static int do_wp_page(struct vm_fault *vmf)
> return VM_FAULT_WRITE;
> }
> unlock_page(vmf->page);
> - } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
> + } else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
> (VM_WRITE|VM_SHARED))) {
> return wp_page_shared(vmf);
> }
> @@ -3067,7 +3067,7 @@ int do_swap_page(struct vm_fault *vmf)
>
> inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
> dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
> - pte = mk_pte(page, vma->vm_page_prot);
> + pte = mk_pte(page, vmf->vma_page_prot);
> if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
> pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> vmf->flags &= ~FAULT_FLAG_WRITE;
> @@ -3093,7 +3093,7 @@ int do_swap_page(struct vm_fault *vmf)
>
> swap_free(entry);
> if (mem_cgroup_swap_full(page) ||
> - (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
> + (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
> try_to_free_swap(page);
> unlock_page(page);
> if (page != swapcache && swapcache) {
> @@ -3150,7 +3150,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
> pte_t entry;
>
> /* File mapping without ->vm_ops ? */
> - if (vma->vm_flags & VM_SHARED)
> + if (vmf->vma_flags & VM_SHARED)
> return VM_FAULT_SIGBUS;
>
> /*
> @@ -3174,7 +3174,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
> if (!(vmf->flags & FAULT_FLAG_WRITE) &&
> !mm_forbids_zeropage(vma->vm_mm)) {
> entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
> - vma->vm_page_prot));
> + vmf->vma_page_prot));
> if (!pte_map_lock(vmf))
> return VM_FAULT_RETRY;
> if (!pte_none(*vmf->pte))
> @@ -3207,8 +3207,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
> */
> __SetPageUptodate(page);
>
> - entry = mk_pte(page, vma->vm_page_prot);
> - if (vma->vm_flags & VM_WRITE)
> + entry = mk_pte(page, vmf->vma_page_prot);
> + if (vmf->vma_flags & VM_WRITE)
> entry = pte_mkwrite(pte_mkdirty(entry));
>
> if (!pte_map_lock(vmf)) {
> @@ -3404,7 +3404,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
> for (i = 0; i < HPAGE_PMD_NR; i++)
> flush_icache_page(vma, page + i);
>
> - entry = mk_huge_pmd(page, vma->vm_page_prot);
> + entry = mk_huge_pmd(page, vmf->vma_page_prot);
> if (write)
> entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>
> @@ -3478,11 +3478,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
> return VM_FAULT_NOPAGE;
>
> flush_icache_page(vma, page);
> - entry = mk_pte(page, vma->vm_page_prot);
> + entry = mk_pte(page, vmf->vma_page_prot);
> if (write)
> entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> /* copy-on-write page */
> - if (write && !(vma->vm_flags & VM_SHARED)) {
> + if (write && !(vmf->vma_flags & VM_SHARED)) {
> inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
> page_add_new_anon_rmap(page, vma, vmf->address, false);
> mem_cgroup_commit_charge(page, memcg, false, false);
> @@ -3521,7 +3521,7 @@ int finish_fault(struct vm_fault *vmf)
>
> /* Did we COW the page? */
> if ((vmf->flags & FAULT_FLAG_WRITE) &&
> - !(vmf->vma->vm_flags & VM_SHARED))
> + !(vmf->vma_flags & VM_SHARED))
> page = vmf->cow_page;
> else
> page = vmf->page;
> @@ -3775,7 +3775,7 @@ static int do_fault(struct vm_fault *vmf)
> ret = VM_FAULT_SIGBUS;
> else if (!(vmf->flags & FAULT_FLAG_WRITE))
> ret = do_read_fault(vmf);
> - else if (!(vma->vm_flags & VM_SHARED))
> + else if (!(vmf->vma_flags & VM_SHARED))
> ret = do_cow_fault(vmf);
> else
> ret = do_shared_fault(vmf);
> @@ -3832,7 +3832,7 @@ static int do_numa_page(struct vm_fault *vmf)
> * accessible ptes, some can allow access by kernel mode.
> */
> pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
> - pte = pte_modify(pte, vma->vm_page_prot);
> + pte = pte_modify(pte, vmf->vma_page_prot);
> pte = pte_mkyoung(pte);
> if (was_writable)
> pte = pte_mkwrite(pte);
> @@ -3866,7 +3866,7 @@ static int do_numa_page(struct vm_fault *vmf)
> * Flag if the page is shared between multiple address spaces. This
> * is later used when determining whether to group tasks together
> */
> - if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
> + if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
> flags |= TNF_SHARED;
>
> last_cpupid = page_cpupid_last(page);
> @@ -3911,7 +3911,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
> return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
>
> /* COW handled on pte level: split pmd */
> - VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
> + VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
> __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
>
> return VM_FAULT_FALLBACK;
> @@ -4058,6 +4058,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> .flags = flags,
> .pgoff = linear_page_index(vma, address),
> .gfp_mask = __get_fault_gfp_mask(vma),
> + .vma_flags = vma->vm_flags,
> + .vma_page_prot = vma->vm_page_prot,
> };
> unsigned int dirty = flags & FAULT_FLAG_WRITE;
> struct mm_struct *mm = vma->vm_mm;
Don't you also need to do this?
diff --git a/include/linux/mm.h b/include/linux/mm.h
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -694,9 +694,9 @@ void free_compound_page(struct page *page);
* pte_mkwrite. But get_user_pages can cause write faults for mappings
* that do not have writing enabled, when used by access_process_vm.
*/
-static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+static inline pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
{
- if (likely(vma->vm_flags & VM_WRITE))
+ if (likely(vma_flags & VM_WRITE))
pte = pte_mkwrite(pte);
return pte;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1195,8 +1195,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
pte_t entry;
- entry = mk_pte(pages[i], vma->vm_page_prot);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = mk_pte(pages[i], vmf->vma_page_prot);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
memcg = (void *)page_private(pages[i]);
set_page_private(pages[i], 0);
page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
@@ -2169,7 +2169,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
entry = pte_swp_mksoft_dirty(entry);
} else {
entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
- entry = maybe_mkwrite(entry, vma);
+ entry = maybe_mkwrite(entry, vma->vm_flags);
if (!write)
entry = pte_wrprotect(entry);
if (!young)
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1826,7 +1826,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
out_mkwrite:
if (mkwrite) {
entry = pte_mkyoung(entry);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vma->vm_flags);
}
set_pte_at(mm, addr, pte, entry);
@@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
entry = pte_mkyoung(vmf->orig_pte);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
update_mmu_cache(vma, vmf->address, vmf->pte);
pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2549,8 +2549,8 @@ static int wp_page_copy(struct vm_fault *vmf)
inc_mm_counter_fast(mm, MM_ANONPAGES);
}
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
- entry = mk_pte(new_page, vma->vm_page_prot);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = mk_pte(new_page, vmf->vma_page_prot);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
/*
* Clear the pte entry and flush it first, before updating the
* pte with the new entry. This will avoid a race condition
@@ -3069,7 +3069,7 @@ int do_swap_page(struct vm_fault *vmf)
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
pte = mk_pte(page, vmf->vma_page_prot);
if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
- pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+ pte = maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
vmf->flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
exclusive = RMAP_EXCLUSIVE;
@@ -3481,7 +3481,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
flush_icache_page(vma, page);
entry = mk_pte(page, vmf->vma_page_prot);
if (write)
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
/* copy-on-write page */
if (write && !(vmf->vma_flags & VM_SHARED)) {
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
diff --git a/mm/migrate.c b/mm/migrate.c
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
*/
entry = pte_to_swp_entry(*pvmw.pte);
if (is_write_migration_entry(entry))
- pte = maybe_mkwrite(pte, vma);
+ pte = maybe_mkwrite(pte, vma->vm_flags);
if (unlikely(is_zone_device_page(new))) {
if (is_device_private_page(new)) {
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> migrate_misplaced_page() is only called during the page fault handling so
> it's better to pass the pointer to the struct vm_fault instead of the vma.
>
> This way during the speculative page fault path the saved vma->vm_flags
> could be used.
>
> Signed-off-by: Laurent Dufour <[email protected]>
Acked-by: David Rientjes <[email protected]>
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> The speculative page fault handler which is run without holding the
> mmap_sem is calling lru_cache_add_active_or_unevictable() but the vm_flags
> is not guaranteed to remain constant.
> Introducing __lru_cache_add_active_or_unevictable() which has the vma flags
> value parameter instead of the vma pointer.
>
> Signed-off-by: Laurent Dufour <[email protected]>
Acked-by: David Rientjes <[email protected]>
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index dfa81a638b7c..a84ddc218bbd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -684,13 +684,18 @@ void free_compound_page(struct page *page);
> * pte_mkwrite. But get_user_pages can cause write faults for mappings
> * that do not have writing enabled, when used by access_process_vm.
> */
> -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> +static inline pte_t __maybe_mkwrite(pte_t pte, unsigned long vma_flags)
> {
> - if (likely(vma->vm_flags & VM_WRITE))
> + if (likely(vma_flags & VM_WRITE))
> pte = pte_mkwrite(pte);
> return pte;
> }
>
> +static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> +{
> + return __maybe_mkwrite(pte, vma->vm_flags);
> +}
> +
> int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
> struct page *page);
> int finish_fault(struct vm_fault *vmf);
> diff --git a/mm/memory.c b/mm/memory.c
> index 0a0a483d9a65..af0338fbc34d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
>
> flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
> entry = pte_mkyoung(vmf->orig_pte);
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> + entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
> update_mmu_cache(vma, vmf->address, vmf->pte);
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> @@ -2549,8 +2549,8 @@ static int wp_page_copy(struct vm_fault *vmf)
> inc_mm_counter_fast(mm, MM_ANONPAGES);
> }
> flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
> - entry = mk_pte(new_page, vma->vm_page_prot);
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> + entry = mk_pte(new_page, vmf->vma_page_prot);
> + entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> /*
> * Clear the pte entry and flush it first, before updating the
> * pte with the new entry. This will avoid a race condition
Don't you also need to do this in do_swap_page()?
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3067,9 +3067,9 @@ int do_swap_page(struct vm_fault *vmf)
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
- pte = mk_pte(page, vma->vm_page_prot);
+ pte = mk_pte(page, vmf->vma_page_prot);
if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
- pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+ pte = __maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
vmf->flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
exclusive = RMAP_EXCLUSIVE;
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a84ddc218bbd..73b8b99f482b 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1263,8 +1263,11 @@ struct zap_details {
> pgoff_t last_index; /* Highest page->index to unmap */
> };
>
> -struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> - pte_t pte, bool with_public_device);
> +struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> + pte_t pte, bool with_public_device,
> + unsigned long vma_flags);
> +#define _vm_normal_page(vma, addr, pte, with_public_device) \
> + __vm_normal_page(vma, addr, pte, with_public_device, (vma)->vm_flags)
> #define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false)
>
> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
If _vm_normal_page() is a static inline function does it break somehow?
It's nice to avoid the #define's.
> diff --git a/mm/memory.c b/mm/memory.c
> index af0338fbc34d..184a0d663a76 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -826,8 +826,9 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
> #else
> # define HAVE_PTE_SPECIAL 0
> #endif
> -struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> - pte_t pte, bool with_public_device)
> +struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> + pte_t pte, bool with_public_device,
> + unsigned long vma_flags)
> {
> unsigned long pfn = pte_pfn(pte);
>
Would it be possible to update the comment since the function itself is no
longer named vm_normal_page?
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> When dealing with the speculative page fault handler, we may race with a VMA
> being split or merged. In this case the vma->vm_start and vma->vm_end
> fields may not match the address the page fault is occurring at.
>
> This can only happen when the VMA is split, but in that case the
> anon_vma pointer of the new VMA will be the same as the original one,
> because in __split_vma the new->anon_vma is set to src->anon_vma when
> *new = *vma.
>
> So even if the VMA boundaries are not correct, the anon_vma pointer is
> still valid.
>
> If the VMA has been merged, then the VMA in which it has been merged
> must have the same anon_vma pointer otherwise the merge can't be done.
>
> So in all cases we know that the anon_vma is valid, since we have
> checked before starting the speculative page fault that the anon_vma
> pointer is valid for this VMA, and since there is an anon_vma this
> means that at one time a page has been backed. Before the VMA
> is cleaned, the page table lock has to be grabbed to clean the
> PTE, and the anon_vma field is checked once the PTE is locked.
>
> This patch introduces a new __page_add_new_anon_rmap() service which
> doesn't check the VMA boundaries, and creates a new inline one
> which does the check.
>
> When called from a page fault handler, if this is not a speculative one,
> there is a guarantee that vm_start and vm_end match the faulting address,
> so this check is useless. In the context of the speculative page fault
> handler, this check may be wrong but anon_vma is still valid as explained
> above.
>
> Signed-off-by: Laurent Dufour <[email protected]>
I'm indifferent on this: it could be argued both ways that the new
function and its variant for a simple VM_BUG_ON() aren't worth it and
that this would rather be done in the callers of page_add_new_anon_rmap().
It feels like it would be better left to the caller, with a comment added
to page_add_anon_rmap() itself in mm/rmap.c.
On Tue, 13 Mar 2018, Laurent Dufour wrote:
> This change is inspired by the Peter's proposal patch [1] which was
> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
> that particular case, and it is introducing major performance degradation
> due to excessive scheduling operations.
>
> To allow access to the mm_rb tree without grabbing the mmap_sem, this patch
> is protecting its access using a rwlock. As the mm_rb tree is a O(log n)
> search it is safe to protect it using such a lock. The VMA cache is not
> protected by the new rwlock and it should not be used without holding the
> mmap_sem.
>
> To allow the picked VMA structure to be used once the rwlock is released, a
> use count is added to the VMA structure. When the VMA is allocated it is
> set to 1. Each time the VMA is picked with the rwlock held its use count
> is incremented. Each time the VMA is released it is decremented. When the
> use count hits zero, this means that the VMA is no longer used and should be
> freed.
>
> This patch is preparing for 2 kinds of VMA access:
> - as usual, under the control of the mmap_sem,
> - without holding the mmap_sem for the speculative page fault handler.
>
> Access done under the control of the mmap_sem doesn't require grabbing the
> rwlock to protect read access to the mm_rb tree, but write access must
> be done under the protection of the rwlock too. This affects inserting and
> removing elements in the RB tree.
>
> The patch is introducing 2 new functions:
> - vma_get() to find a VMA based on an address by holding the new rwlock.
> - vma_put() to release the VMA when it's no longer used.
> These services are designed to be used when accesses are made to the RB tree
> without holding the mmap_sem.
>
> When a VMA is removed from the RB tree, its vma->vm_rb field is cleared and
> we rely on the WMB done when releasing the rwlock to serialize the write
> with the RMB done in a later patch to check for the VMA's validity.
>
> When free_vma is called, the file associated with the VMA is closed
> immediately, but the policy and the file structure remain in use until
> the VMA's use count reaches 0, which may happen later when exiting an
> in progress speculative page fault.
>
> [1] https://patchwork.kernel.org/patch/5108281/
>
> Cc: Peter Zijlstra (Intel) <[email protected]>
> Cc: Matthew Wilcox <[email protected]>
> Signed-off-by: Laurent Dufour <[email protected]>
Can __free_vma() be generalized for mm/nommu.c's delete_vma() and
do_mmap()?
On Tue, Mar 13, 2018 at 06:59:36PM +0100, Laurent Dufour wrote:
> pte_unmap_same() is making the assumption that the page tables are still
> around because the mmap_sem is held.
> This is no longer the case when running a speculative page fault, and
> additional checks must be made to ensure that the page tables are still
> there.
>
> This is now done by calling pte_spinlock() to check for the VMA's
> consistency while locking for the page tables.
>
> This requires passing a vm_fault structure to pte_unmap_same(), which
> contains all the needed parameters.
>
> As pte_spinlock() may fail in the case of a speculative page fault, if the
> VMA has been touched behind our back, pte_unmap_same() now returns 3
> cases:
> 1. pte are the same (0)
> 2. pte are different (VM_FAULT_PTNOTSAME)
> 3. a VMA's changes has been detected (VM_FAULT_RETRY)
>
> Case 2 is handled by the introduction of a new VM_FAULT flag named
> VM_FAULT_PTNOTSAME, which is then trapped in cow_user_page().
> If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
> page fault while holding the mmap_sem.
>
> Signed-off-by: Laurent Dufour <[email protected]>
> ---
> include/linux/mm.h | 1 +
> mm/memory.c | 29 +++++++++++++++++++----------
> 2 files changed, 20 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2f3e98edc94a..b6432a261e63 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1199,6 +1199,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
> #define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables
> * and needs fsync() to complete (for
> * synchronous page faults in DAX) */
> +#define VM_FAULT_PTNOTSAME 0x4000 /* Page table entries have changed */
>
> #define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
> VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
> diff --git a/mm/memory.c b/mm/memory.c
> index 21b1212a0892..4bc7b0bdcb40 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2309,21 +2309,29 @@ static bool pte_map_lock(struct vm_fault *vmf)
> * parts, do_swap_page must check under lock before unmapping the pte and
> * proceeding (but do_wp_page is only called after already making such a check;
> * and do_anonymous_page can safely check later on).
> + *
> + * pte_unmap_same() returns:
> + * 0 if the PTE are the same
> + * VM_FAULT_PTNOTSAME if the PTE are different
> + * VM_FAULT_RETRY if the VMA has changed in our back during
> + * a speculative page fault handling.
> */
> -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> - pte_t *page_table, pte_t orig_pte)
> +static inline int pte_unmap_same(struct vm_fault *vmf)
> {
> - int same = 1;
> + int ret = 0;
> +
> #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
> if (sizeof(pte_t) > sizeof(unsigned long)) {
> - spinlock_t *ptl = pte_lockptr(mm, pmd);
> - spin_lock(ptl);
> - same = pte_same(*page_table, orig_pte);
> - spin_unlock(ptl);
> + if (pte_spinlock(vmf)) {
> + if (!pte_same(*vmf->pte, vmf->orig_pte))
> + ret = VM_FAULT_PTNOTSAME;
> + spin_unlock(vmf->ptl);
> + } else
> + ret = VM_FAULT_RETRY;
> }
> #endif
> - pte_unmap(page_table);
> - return same;
> + pte_unmap(vmf->pte);
> + return ret;
> }
>
> static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
> int exclusive = 0;
> int ret = 0;
>
> - if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
> + ret = pte_unmap_same(vmf);
> + if (ret)
> goto out;
>
This changes what do_swap_page() returns, i.e. before it was returning 0
when the locked pte lookup was different from orig_pte. After this patch
it returns VM_FAULT_PTNOTSAME, but this is a new return value for
handle_mm_fault() (the do_swap_page() return value is what ultimately
gets returned by handle_mm_fault()).
Do we really want that? This might confuse some existing users of
handle_mm_fault() and I am not sure of the value of that information
to the caller.
Note I do understand that you want to return retry if anything did
change underneath and thus need to differentiate from when the
pte values are not the same.
Cheers,
Jérôme
On Tue, Mar 13, 2018 at 06:59:45PM +0100, Laurent Dufour wrote:
> When dealing with the speculative fault path we should use the VMA's field
> cached value stored in the vm_fault structure.
>
> Currently vm_normal_page() is using the pointer to the VMA to fetch the
> vm_flags value. This patch provides a new __vm_normal_page() which is
> receiving the vm_flags flags value as parameter.
>
> Note: The speculative path is turned on for architecture providing support
> for special PTE flag. So only the first block of vm_normal_page is used
> during the speculative path.
Might be a good idea to explicitly have the SPECULATIVE Kconfig option depend
on ARCH_PTE_SPECIAL and a comment for !HAVE_PTE_SPECIAL in the function
explaining that a speculative page fault should never reach that point.
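For illustration, such a dependency could look like the following Kconfig fragment (the symbol names are hypothetical; as noted later in the thread, __HAVE_ARCH_PTE_SPECIAL would first need to become a real Kconfig entry):

```
config SPECULATIVE_PAGE_FAULT
	bool "Speculative page fault handling"
	depends on ARCH_HAS_PTE_SPECIAL && SMP
	help
	  Try to handle user space page faults without holding the
	  mmap_sem, falling back to the classic fault path whenever a
	  concurrent change is detected.
```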
Cheers,
Jérôme
On Tue, Mar 13, 2018 at 06:59:30PM +0100, Laurent Dufour wrote:
> This is a port on kernel 4.16 of the work done by Peter Zijlstra to
> handle page fault without holding the mm semaphore [1].
>
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
> process since the page fault handler will not wait for other threads memory
> layout change to be done, assuming that this change is done in another part
> of the process's memory space. This type page fault is named speculative
> page fault. If the speculative page fault fails because of a concurrency is
> detected or because underlying PMD or PTE tables are not yet allocating, it
> is failing its processing and a classic page fault is then tried.
>
> The speculative page fault (SPF) has to look for the VMA matching the fault
> address without holding the mmap_sem, this is done by introducing a rwlock
> which protects the access to the mm_rb tree. Previously this was done using
> SRCU but it was introducing a lot of scheduling to process the VMA's
> freeing
> operation which was hitting the performance by 20% as reported by Kemi Wang
> [2].Using a rwlock to protect access to the mm_rb tree is limiting the
> locking contention to these operations which are expected to be in a O(log
> n)
> order. In addition to ensure that the VMA is not freed in our back a
> reference count is added and 2 services (get_vma() and put_vma()) are
> introduced to handle the reference count. When a VMA is fetch from the RB
> tree using get_vma() is must be later freeed using put_vma(). Furthermore,
> to allow the VMA to be used again by the classic page fault handler a
> service is introduced can_reuse_spf_vma(). This service is expected to be
> called with the mmap_sem hold. It checked that the VMA is still matching
> the specified address and is releasing its reference count as the mmap_sem
> is hold it is ensure that it will not be freed in our back. In general, the
> VMA's reference count could be decremented when holding the mmap_sem but it
> should not be increased as holding the mmap_sem is ensuring that the VMA is
> stable. I can't see anymore the overhead I got while will-it-scale
> benchmark anymore.
>
> The VMA's attributes checked during the speculative page fault processing
> have to be protected against parallel changes. This is done by using a per
> VMA sequence lock. This sequence lock allows the speculative page fault
> handler to fast check for parallel changes in progress and to abort the
> speculative page fault in that case.
>
> Once the VMA is found, the speculative page fault handler would check for
> the VMA's attributes to verify that the page fault has to be handled
> correctly or not. Thus the VMA is protected through a sequence lock which
> allows fast detection of concurrent VMA changes. If such a change is
> detected, the speculative page fault is aborted and a *classic* page fault
> is tried. VMA sequence lockings are added when VMA attributes which are
> checked during the page fault are modified.
>
> When the PTE is fetched, the VMA is checked to see if it has been changed,
> so once the page table is locked, the VMA is valid, so any other changes
> leading to touching this PTE will need to lock the page table, so no
> parallel change is possible at this time.
What would have been nice is some pseudo high-level code before all the
above detailed description. Something like:

    speculative_fault(addr) {
        mm_lock_for_vma_snapshot()
        vma_snapshot = snapshot_vma_infos(addr)
        mm_unlock_for_vma_snapshot()
        ...
        if (!vma_can_speculatively_fault(vma_snapshot, addr))
            return;
        ...
        /* Do fault ie alloc memory, read from file ... */
        page = ...;

        preempt_disable();
        if (vma_snapshot_still_valid(vma_snapshot, addr) &&
            vma_pte_map_lock(vma_snapshot, addr)) {
            if (pte_same(ptep, orig_pte)) {
                /* Setup new pte */
                page = NULL;
            }
        }
        preempt_enable();
        if (page)
            put(page)
    }

I just find pseudo code easier for grasping the high-level view of the
expected code flow.
>
> The locking of the PTE is done with interrupts disabled, this allows to
> check for the PMD to ensure that there is not an ongoing collapsing
> operation. Since khugepaged is firstly set the PMD to pmd_none and then is
> waiting for the other CPU to have catch the IPI interrupt, if the pmd is
> valid at the time the PTE is locked, we have the guarantee that the
> collapsing opertion will have to wait on the PTE lock to move foward. This
> allows the SPF handler to map the PTE safely. If the PMD value is different
> than the one recorded at the beginning of the SPF operation, the classic
> page fault handler will be called to handle the operation while holding the
> mmap_sem. As the PTE lock is done with the interrupts disabled, the lock is
> done using spin_trylock() to avoid dead lock when handling a page fault
> while a TLB invalidate is requested by an other CPU holding the PTE.
>
> Support for THP is not done because when checking for the PMD, we can be
> confused by an in progress collapsing operation done by khugepaged. The
> issue is that pmd_none() could be true either if the PMD is not already
> populated or if the underlying PTE are in the way to be collapsed. So we
> cannot safely allocate a PMD if pmd_none() is true.
Might be a good topic for LSF/MM: should we set the pmd to something
other than 0 when collapsing a pmd (applies to pud too)? This would
allow supporting THP.
[...]
>
> Ebizzy:
> -------
> The test is counting the number of records per second it can manage, the
> higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
> result I repeated the test 100 times and measure the average result. The
> number is the record processes per second, the higher is the best.
>
> BASE SPF delta
> 16 CPUs x86 VM 14902.6 95905.16 543.55%
> 80 CPUs P8 node 37240.24 78185.67 109.95%
I find those results interesting as it seems that SPF does not scale well
on big configurations. Note that it still shows a sizeable improvement so
it is still a very interesting feature, I believe.
Still, understanding what is happening here might be a good idea. From the
numbers below it seems there are 2 causes to the scaling issue. First,
pte lock contention (kind of expected, I guess). Second, changes to the vma
while faulting.
Have you thought about this? Do I read those numbers in the wrong way?
>
> Here are the performance counter read during a run on a 16 CPUs x86 VM:
> Performance counter stats for './ebizzy -mRTp':
> 888157 faults
> 884773 spf
> 92 pagefault:spf_pte_lock
> 2379 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 80 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> And the ones captured during a run on a 80 CPUs Power node:
> Performance counter stats for './ebizzy -mRTp':
> 762134 faults
> 728663 spf
> 19101 pagefault:spf_pte_lock
> 13969 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 272 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
There is one aspect that I would like to see covered. Maybe I am not
understanding something fundamental, but it seems to me that SPF can
trigger OOM or at the very least over-stress page allocation.
Assume you have a lot of concurrent SPF to an anonymous vma and they all
allocate new pages, then you might overallocate for a single address
by a factor correlated with the number of CPUs in your system. Now,
multiply this by several distinct addresses and you might be allocating
a lot of memory transiently, i.e. just for a short period of time. The
fact that you quickly free when you fail should prevent the OOM reaper,
but still this might severely stress the memory allocation path.
Am I missing something in how this all works? Or is the above something
that might be of concern? Should there be some boundary on the maximum
number of concurrent SPF (and thus a boundary on maximum temporary page
allocation)?
Cheers,
Jérôme
On Tue, 3 Apr 2018, Jerome Glisse wrote:
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 21b1212a0892..4bc7b0bdcb40 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2309,21 +2309,29 @@ static bool pte_map_lock(struct vm_fault *vmf)
> > * parts, do_swap_page must check under lock before unmapping the pte and
> > * proceeding (but do_wp_page is only called after already making such a check;
> > * and do_anonymous_page can safely check later on).
> > + *
> > + * pte_unmap_same() returns:
> > + * 0 if the PTE are the same
> > + * VM_FAULT_PTNOTSAME if the PTE are different
> > + * VM_FAULT_RETRY if the VMA has changed in our back during
> > + * a speculative page fault handling.
> > */
> > -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> > - pte_t *page_table, pte_t orig_pte)
> > +static inline int pte_unmap_same(struct vm_fault *vmf)
> > {
> > - int same = 1;
> > + int ret = 0;
> > +
> > #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
> > if (sizeof(pte_t) > sizeof(unsigned long)) {
> > - spinlock_t *ptl = pte_lockptr(mm, pmd);
> > - spin_lock(ptl);
> > - same = pte_same(*page_table, orig_pte);
> > - spin_unlock(ptl);
> > + if (pte_spinlock(vmf)) {
> > + if (!pte_same(*vmf->pte, vmf->orig_pte))
> > + ret = VM_FAULT_PTNOTSAME;
> > + spin_unlock(vmf->ptl);
> > + } else
> > + ret = VM_FAULT_RETRY;
> > }
> > #endif
> > - pte_unmap(page_table);
> > - return same;
> > + pte_unmap(vmf->pte);
> > + return ret;
> > }
> >
> > static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> > @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
> > int exclusive = 0;
> > int ret = 0;
> >
> > - if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
> > + ret = pte_unmap_same(vmf);
> > + if (ret)
> > goto out;
> >
>
> This change what do_swap_page() returns ie before it was returning 0
> when locked pte lookup was different from orig_pte. After this patch
> it returns VM_FAULT_PTNOTSAME but this is a new return value for
> handle_mm_fault() (the do_swap_page() return value is what ultimately
> get return by handle_mm_fault())
>
> Do we really want that ? This might confuse some existing user of
> handle_mm_fault() and i am not sure of the value of that information
> to caller.
>
> Note i do understand that you want to return retry if anything did
> change from underneath and thus need to differentiate from when the
> pte value are not the same.
>
I think VM_FAULT_RETRY should be handled appropriately for any user of
handle_mm_fault() already, and would be surprised to learn differently.
Khugepaged has the appropriate handling. I think the concern is whether a
user is handling anything other than VM_FAULT_RETRY and VM_FAULT_ERROR
(which VM_FAULT_PTNOTSAME is not set in)? I haven't done a full audit of
the users.
On Tue, 3 Apr 2018, Jerome Glisse wrote:
> > When dealing with the speculative fault path we should use the VMA's field
> > cached value stored in the vm_fault structure.
> >
> > Currently vm_normal_page() is using the pointer to the VMA to fetch the
> > vm_flags value. This patch provides a new __vm_normal_page() which is
> > receiving the vm_flags flags value as parameter.
> >
> > Note: The speculative path is turned on for architecture providing support
> > for special PTE flag. So only the first block of vm_normal_page is used
> > during the speculative path.
>
> Might be a good idea to explicitly have SPECULATIVE Kconfig option depends
> on ARCH_PTE_SPECIAL and a comment for !HAVE_PTE_SPECIAL in the function
> explaining that speculative page fault should never reach that point.
Yeah, I think that's appropriate but in a follow-up patch since this is
only propagating vma_flags. It will require that __HAVE_ARCH_PTE_SPECIAL
become an actual Kconfig entry, however.
On Tue, Apr 03, 2018 at 01:40:18PM -0700, David Rientjes wrote:
> On Tue, 3 Apr 2018, Jerome Glisse wrote:
>
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 21b1212a0892..4bc7b0bdcb40 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -2309,21 +2309,29 @@ static bool pte_map_lock(struct vm_fault *vmf)
> > > * parts, do_swap_page must check under lock before unmapping the pte and
> > > * proceeding (but do_wp_page is only called after already making such a check;
> > > * and do_anonymous_page can safely check later on).
> > > + *
> > > + * pte_unmap_same() returns:
> > > + * 0 if the PTE are the same
> > > + * VM_FAULT_PTNOTSAME if the PTE are different
> > > + * VM_FAULT_RETRY if the VMA has changed in our back during
> > > + * a speculative page fault handling.
> > > */
[...]
> > >
> >
> > This change what do_swap_page() returns ie before it was returning 0
> > when locked pte lookup was different from orig_pte. After this patch
> > it returns VM_FAULT_PTNOTSAME but this is a new return value for
> > handle_mm_fault() (the do_swap_page() return value is what ultimately
> > get return by handle_mm_fault())
> >
> > Do we really want that ? This might confuse some existing user of
> > handle_mm_fault() and i am not sure of the value of that information
> > to caller.
> >
> > Note i do understand that you want to return retry if anything did
> > change from underneath and thus need to differentiate from when the
> > pte value are not the same.
> >
>
> I think VM_FAULT_RETRY should be handled appropriately for any user of
> handle_mm_fault() already, and would be surprised to learn differently.
> Khugepaged has the appropriate handling. I think the concern is whether a
> user is handling anything other than VM_FAULT_RETRY and VM_FAULT_ERROR
> (which VM_FAULT_PTNOTSAME is not set in)? I haven't done a full audit of
> the users.
I am not worried about VM_FAULT_RETRY and barely have any worry about
VM_FAULT_PTNOTSAME either, as there are other comparable new return values
(VM_FAULT_NEEDDSYNC for instance, which is quite recent).
I wonder if adding a new value is really needed here. I don't see any
value in it for callers of handle_mm_fault() except for stats.
Note that I am not opposed, but while today we have free bits, maybe
tomorrow we will run out; I am always worried about things like that :)
Cheers,
Jérôme
Hi Jerome,
Thanks for reviewing this series.
On 03/04/2018 22:37, Jerome Glisse wrote:
> On Tue, Mar 13, 2018 at 06:59:30PM +0100, Laurent Dufour wrote:
>> This is a port on kernel 4.16 of the work done by Peter Zijlstra to
>> handle page fault without holding the mm semaphore [1].
>>
>> The idea is to try to handle user space page faults without holding the
>> mmap_sem. This should allow better concurrency for massively threaded
>> process since the page fault handler will not wait for other threads memory
>> layout change to be done, assuming that this change is done in another part
>> of the process's memory space. This type page fault is named speculative
>> page fault. If the speculative page fault fails because of a concurrency is
>> detected or because underlying PMD or PTE tables are not yet allocating, it
>> is failing its processing and a classic page fault is then tried.
>>
>> The speculative page fault (SPF) has to look for the VMA matching the fault
>> address without holding the mmap_sem, this is done by introducing a rwlock
>> which protects the access to the mm_rb tree. Previously this was done using
>> SRCU but it was introducing a lot of scheduling to process the VMA's
>> freeing
>> operation which was hitting the performance by 20% as reported by Kemi Wang
>> [2].Using a rwlock to protect access to the mm_rb tree is limiting the
>> locking contention to these operations which are expected to be in a O(log
>> n)
>> order. In addition to ensure that the VMA is not freed in our back a
>> reference count is added and 2 services (get_vma() and put_vma()) are
>> introduced to handle the reference count. When a VMA is fetch from the RB
>> tree using get_vma() is must be later freeed using put_vma(). Furthermore,
>> to allow the VMA to be used again by the classic page fault handler a
>> service is introduced can_reuse_spf_vma(). This service is expected to be
>> called with the mmap_sem hold. It checked that the VMA is still matching
>> the specified address and is releasing its reference count as the mmap_sem
>> is hold it is ensure that it will not be freed in our back. In general, the
>> VMA's reference count could be decremented when holding the mmap_sem but it
>> should not be increased as holding the mmap_sem is ensuring that the VMA is
>> stable. I can't see anymore the overhead I got while will-it-scale
>> benchmark anymore.
>>
>> The VMA's attributes checked during the speculative page fault processing
>> have to be protected against parallel changes. This is done by using a per
>> VMA sequence lock. This sequence lock allows the speculative page fault
>> handler to fast check for parallel changes in progress and to abort the
>> speculative page fault in that case.
>>
>> Once the VMA is found, the speculative page fault handler would check for
>> the VMA's attributes to verify that the page fault has to be handled
>> correctly or not. Thus the VMA is protected through a sequence lock which
>> allows fast detection of concurrent VMA changes. If such a change is
>> detected, the speculative page fault is aborted and a *classic* page fault
>> is tried. VMA sequence lockings are added when VMA attributes which are
>> checked during the page fault are modified.
>>
>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>> so once the page table is locked, the VMA is valid, so any other changes
>> leading to touching this PTE will need to lock the page table, so no
>> parallel change is possible at this time.
>
> What would have been nice is some pseudo highlevel code before all the
> above detailed description. Something like:
> speculative_fault(addr) {
> mm_lock_for_vma_snapshot()
> vma_snapshot = snapshot_vma_infos(addr)
> mm_unlock_for_vma_snapshot()
> ...
> if (!vma_can_speculatively_fault(vma_snapshot, addr))
> return;
> ...
> /* Do fault ie alloc memory, read from file ... */
> page = ...;
>
> preempt_disable();
> if (vma_snapshot_still_valid(vma_snapshot, addr) &&
> vma_pte_map_lock(vma_snapshot, addr)) {
> if (pte_same(ptep, orig_pte)) {
> /* Setup new pte */
> page = NULL;
> }
> }
> preempt_enable();
> if (page)
> put(page)
> }
>
> I just find pseudo code easier for grasping the highlevel view of the
> expected code flow.
Fair enough, I agree that sounds easier this way, but one might argue that
pseudo code is no more valid or accurate over time :)
As always, the up-to-date documentation is the code itself.
I'll try to put one inspired by yours in the next series's header.
>>
>> The locking of the PTE is done with interrupts disabled, this allows to
>> check for the PMD to ensure that there is not an ongoing collapsing
>> operation. Since khugepaged is firstly set the PMD to pmd_none and then is
>> waiting for the other CPU to have catch the IPI interrupt, if the pmd is
>> valid at the time the PTE is locked, we have the guarantee that the
>> collapsing opertion will have to wait on the PTE lock to move foward. This
>> allows the SPF handler to map the PTE safely. If the PMD value is different
>> than the one recorded at the beginning of the SPF operation, the classic
>> page fault handler will be called to handle the operation while holding the
>> mmap_sem. As the PTE lock is done with the interrupts disabled, the lock is
>> done using spin_trylock() to avoid dead lock when handling a page fault
>> while a TLB invalidate is requested by an other CPU holding the PTE.
>>
>> Support for THP is not done because when checking for the PMD, we can be
>> confused by an in progress collapsing operation done by khugepaged. The
>> issue is that pmd_none() could be true either if the PMD is not already
>> populated or if the underlying PTE are in the way to be collapsed. So we
>> cannot safely allocate a PMD if pmd_none() is true.
>
> Might be a good topic fo LSF/MM, should we set the pmd to something
> else then 0 when collapsing pmd (apply to pud too) ? This would allow
> support THP.
Absolutely!
> [...]
>
>>
>> Ebizzy:
>> -------
>> The test is counting the number of records per second it can manage, the
>> higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
>> result I repeated the test 100 times and measure the average result. The
>> number is the record processes per second, the higher is the best.
>>
>> BASE SPF delta
>> 16 CPUs x86 VM 14902.6 95905.16 543.55%
>> 80 CPUs P8 node 37240.24 78185.67 109.95%
>
> I find those results interesting as it seems that SPF do not scale well
> on big configuration. Note that it still have a sizeable improvement so
> it is still a very interesting feature i believe.
>
> Still understanding what is happening here might a good idea. From the
> numbers below it seems there is 2 causes to the scaling issue. First
> pte lock contention (kind of expected i guess). Second changes to vma
> while faulting.
>
> Have you thought about this ? Do i read those numbers in the wrong way ?
Your reading of the numbers is correct, but there is also another point to keep
in mind: on ppc64, the default page size is 64K, and since we are mapping new
pages for user space, those pages have to be cleared, leading to more time
spent clearing pages on ppc64, which leads to a lower page fault ratio on ppc64.
And since the VMA is checked again once the cleared page is allocated, there is
a major chance for that VMA to have been touched in the ebizzy case.
>>
>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>> Performance counter stats for './ebizzy -mRTp':
>> 888157 faults
>> 884773 spf
>> 92 pagefault:spf_pte_lock
>> 2379 pagefault:spf_vma_changed
>> 0 pagefault:spf_vma_noanon
>> 80 pagefault:spf_vma_notsup
>> 0 pagefault:spf_vma_access
>> 0 pagefault:spf_pmd_changed
>>
>> And the ones captured during a run on a 80 CPUs Power node:
>> Performance counter stats for './ebizzy -mRTp':
>> 762134 faults
>> 728663 spf
>> 19101 pagefault:spf_pte_lock
>> 13969 pagefault:spf_vma_changed
>> 0 pagefault:spf_vma_noanon
>> 272 pagefault:spf_vma_notsup
>> 0 pagefault:spf_vma_access
>> 0 pagefault:spf_pmd_changed
>
>
> There is one aspect that i would like to see cover. Maybe i am not
> understanding something fundamental, but it seems to me that SPF can
> trigger OOM or at very least over stress page allocation.
>
> Assume you have a lot of concurrent SPF to anonymous vma and they all
> allocate new pages, then you might overallocate for a single address
> by a factor correlated with the number of CPUs in your system. Now,
> multiply this for several distinc address and you might be allocating
> a lot of memory transiently ie just for a short period time. While
> the fact that you quickly free when you fail should prevent the OOM
> reaper. But still this might severly stress the memory allocation
> path.
That's an interesting point, and you're right, SPF may lead to page allocations
that will not be used.
But as you mentioned, this will be a factor of the number of CPUs, so the
maximum page overhead, assuming that all minus one threads of the same process
are dealing with pages on the same VMA and the last one is touching that VMA in
parallel, is (nrcpus-1) pages allocated at one time which may not be used
immediately.
I'm not sure this will be a major risk, but I might be too optimistic.
This also raises the question of a cleared page cache; I'd have to see whether
such a cache is in place.
> Am i missing something in how this all work ? Or is the above some-
> thing that might be of concern ? Should there be some boundary on the
> maximum number of concurrent SPF (and thus boundary on maximum page
> temporary page allocation) ?
I don't think you're missing anything ;)
It would be easy to introduce such a limit in case OOMs are triggered too many
times due to SPF handling.
Cheers,
Laurent.
On 28/03/2018 23:21, David Rientjes wrote:
> On Wed, 28 Mar 2018, Laurent Dufour wrote:
>
>>>> @@ -326,7 +336,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>>>> mremap_userfaultfd_prep(new_vma, uf);
>>>> arch_remap(mm, old_addr, old_addr + old_len,
>>>> new_addr, new_addr + new_len);
>>>> + if (vma != new_vma)
>>>> + vm_raw_write_end(vma);
>>>> }
>>>> + vm_raw_write_end(new_vma);
>>>
>>> Just do
>>>
>>> vm_raw_write_end(vma);
>>> vm_raw_write_end(new_vma);
>>>
>>> here.
>>
>> Are you sure ? we can have vma = new_vma done if (unlikely(err))
>>
>
> Sorry, what I meant was do
>
> if (vma != new_vma)
> vm_raw_write_end(vma);
> vm_raw_write_end(new_vma);
>
> after the conditional. Having the locking unnecessarily embedded in the
> conditional has been an issue in the past with other areas of core code,
> unless you have a strong reason for it.
Unfortunately, I can't see how to do this another way since vma = new_vma is
done in the error branch.
So releasing the VMAs outside of the conditional may lead to missing 'vma' if
the error branch is taken.
Here is the code snippet as a reminder:

	new_vma = copy_vma(&vma, new_addr, new_len, new_pgoff,
			   &need_rmap_locks);
	[...]
	if (vma != new_vma)
		vm_raw_write_begin(vma);
	[...]
	if (unlikely(err)) {
		[...]
		if (vma != new_vma)
			vm_raw_write_end(vma);
		vma = new_vma;	/* <<<< here we lost the reference to vma */
		[...]
	} else {
		[...]
		if (vma != new_vma)
			vm_raw_write_end(vma);
	}
	vm_raw_write_end(new_vma);
On 03/04/2018 21:10, Jerome Glisse wrote:
> On Tue, Mar 13, 2018 at 06:59:36PM +0100, Laurent Dufour wrote:
>> pte_unmap_same() is making the assumption that the page table are still
>> around because the mmap_sem is held.
>> This is no more the case when running a speculative page fault and
>> additional check must be made to ensure that the final page table are still
>> there.
>>
>> This is now done by calling pte_spinlock() to check for the VMA's
>> consistency while locking for the page tables.
>>
>> This is requiring passing a vm_fault structure to pte_unmap_same() which is
>> containing all the needed parameters.
>>
>> As pte_spinlock() may fail in the case of a speculative page fault, if the
>> VMA has been touched in our back, pte_unmap_same() should now return 3
>> cases :
>> 1. pte are the same (0)
>> 2. pte are different (VM_FAULT_PTNOTSAME)
>> 3. a VMA's changes has been detected (VM_FAULT_RETRY)
>>
>> The case 2 is handled by the introduction of a new VM_FAULT flag named
>> VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().
>> If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
>> page fault while holding the mmap_sem.
>>
>> Signed-off-by: Laurent Dufour <[email protected]>
>> ---
>> include/linux/mm.h | 1 +
>> mm/memory.c | 29 +++++++++++++++++++----------
>> 2 files changed, 20 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 2f3e98edc94a..b6432a261e63 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1199,6 +1199,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
>> #define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables
>> * and needs fsync() to complete (for
>> * synchronous page faults in DAX) */
>> +#define VM_FAULT_PTNOTSAME 0x4000 /* Page table entries have changed */
>>
>> #define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
>> VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 21b1212a0892..4bc7b0bdcb40 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2309,21 +2309,29 @@ static bool pte_map_lock(struct vm_fault *vmf)
>> * parts, do_swap_page must check under lock before unmapping the pte and
>> * proceeding (but do_wp_page is only called after already making such a check;
>> * and do_anonymous_page can safely check later on).
>> + *
>> + * pte_unmap_same() returns:
>> + * 0 if the PTE are the same
>> + * VM_FAULT_PTNOTSAME if the PTE are different
>> + * VM_FAULT_RETRY if the VMA has changed in our back during
>> + * a speculative page fault handling.
>> */
>> -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
>> - pte_t *page_table, pte_t orig_pte)
>> +static inline int pte_unmap_same(struct vm_fault *vmf)
>> {
>> - int same = 1;
>> + int ret = 0;
>> +
>> #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
>> if (sizeof(pte_t) > sizeof(unsigned long)) {
>> - spinlock_t *ptl = pte_lockptr(mm, pmd);
>> - spin_lock(ptl);
>> - same = pte_same(*page_table, orig_pte);
>> - spin_unlock(ptl);
>> + if (pte_spinlock(vmf)) {
>> + if (!pte_same(*vmf->pte, vmf->orig_pte))
>> + ret = VM_FAULT_PTNOTSAME;
>> + spin_unlock(vmf->ptl);
>> + } else
>> + ret = VM_FAULT_RETRY;
>> }
>> #endif
>> - pte_unmap(page_table);
>> - return same;
>> + pte_unmap(vmf->pte);
>> + return ret;
>> }
>>
>> static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
>> @@ -2913,7 +2921,8 @@ int do_swap_page(struct vm_fault *vmf)
>> int exclusive = 0;
>> int ret = 0;
>>
>> - if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
>> + ret = pte_unmap_same(vmf);
>> + if (ret)
>> goto out;
>>
>
> This change what do_swap_page() returns ie before it was returning 0
> when locked pte lookup was different from orig_pte. After this patch
> it returns VM_FAULT_PTNOTSAME but this is a new return value for
> handle_mm_fault() (the do_swap_page() return value is what ultimately
> get return by handle_mm_fault())
>
> Do we really want that ? This might confuse some existing user of
> handle_mm_fault() and i am not sure of the value of that information
> to caller.
>
> Note I do understand that you want to return retry if anything did
> change underneath and thus need to differentiate that from the case
> where the pte values are not the same.
You're right, do_swap_page() should still return 0 when the locked pte
lookup is different from orig_pte, assuming that the swap operation has been
handled behind our back by another concurrent thread.
So in do_swap_page(), VM_FAULT_PTNOTSAME should be translated into ret = 0:
	ret = pte_unmap_same(vmf);
	if (ret) {
		/*
		 * If pte != orig_pte, this means another thread did the
		 * swap operation behind our back.
		 * So there is nothing else to do.
		 */
		if (ret == VM_FAULT_PTNOTSAME)
			ret = 0;
		goto out;
	}
This means that VM_FAULT_PTNOTSAME will never be reported up the call chain
and will remain limited to do_swap_page().
Doing this also makes it easier to understand why do_swap_page() is done
when pte_unmap_same() returns 0.
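For what it's worth, this return-code translation can be modeled in plain
userspace C. The names below mirror the kernel ones, but the values and
helpers are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>

/* Illustrative status values; the kernel's VM_FAULT_* values differ. */
#define SPF_OK        0
#define SPF_PTNOTSAME 1
#define SPF_RETRY     2

/* Stand-in for pte_unmap_same(): reports whether the PTE sampled earlier
 * still matches under the lock, or that the VMA changed behind our back. */
static int pte_unmap_same_model(int locked_pte, int orig_pte, int vma_changed)
{
	if (vma_changed)
		return SPF_RETRY;
	if (locked_pte != orig_pte)
		return SPF_PTNOTSAME;
	return SPF_OK;
}

/* Stand-in for the do_swap_page() prologue: PTNOTSAME means another thread
 * already did the swap-in, so the fault completes with 0, while RETRY is
 * propagated to the caller. */
static int do_swap_page_model(int locked_pte, int orig_pte, int vma_changed)
{
	int ret = pte_unmap_same_model(locked_pte, orig_pte, vma_changed);

	if (ret == SPF_PTNOTSAME)
		ret = SPF_OK;	/* swap handled behind our back: nothing to do */
	return ret;
}
```

This keeps SPF_PTNOTSAME strictly internal to do_swap_page_model(), which is
the property being argued for here.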
Cheers,
Laurent.
On 03/04/2018 00:24, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index ef6ef0627090..dfa81a638b7c 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -359,6 +359,12 @@ struct vm_fault {
>> * page table to avoid allocation from
>> * atomic context.
>> */
>> + /*
>> + * These entries are required when handling speculative page fault.
>> + * This way the page handling is done using consistent field values.
>> + */
>> + unsigned long vma_flags;
>> + pgprot_t vma_page_prot;
>> };
>>
>> /* page entry size for vm->huge_fault() */
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 446427cafa19..f71db2b42b30 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3717,6 +3717,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
>> .vma = vma,
>> .address = address,
>> .flags = flags,
>> + .vma_flags = vma->vm_flags,
>> + .vma_page_prot = vma->vm_page_prot,
>> /*
>> * Hard to debug if it ends up being
>> * used by a callee that assumes
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 32314e9e48dd..a946d5306160 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -882,6 +882,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>> .flags = FAULT_FLAG_ALLOW_RETRY,
>> .pmd = pmd,
>> .pgoff = linear_page_index(vma, address),
>> + .vma_flags = vma->vm_flags,
>> + .vma_page_prot = vma->vm_page_prot,
>> };
>>
>> /* we only decide to swapin, if there is enough young ptes */
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 0200340ef089..46fe92b93682 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2615,7 +2615,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>> * Don't let another task, with possibly unlocked vma,
>> * keep the mlocked page.
>> */
>> - if (page_copied && (vma->vm_flags & VM_LOCKED)) {
>> + if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
>> lock_page(old_page); /* LRU manipulation */
>> if (PageMlocked(old_page))
>> munlock_vma_page(old_page);
>
> Doesn't wp_page_copy() also need to pass this to anon_vma_prepare() so
> that find_mergeable_anon_vma() works correctly?
In the case of the SPF handler, we check that vma->anon_vma is not NULL, so
__anon_vma_prepare(vma) is never called in the context of the SPF handler.
>
>> @@ -2649,7 +2649,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>> */
>> int finish_mkwrite_fault(struct vm_fault *vmf)
>> {
>> - WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
>> + WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
>> if (!pte_map_lock(vmf))
>> return VM_FAULT_RETRY;
>> /*
>> @@ -2751,7 +2751,7 @@ static int do_wp_page(struct vm_fault *vmf)
>> * We should not cow pages in a shared writeable mapping.
>> * Just mark the pages writable and/or call ops->pfn_mkwrite.
>> */
>> - if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
>> + if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
>> (VM_WRITE|VM_SHARED))
>> return wp_pfn_shared(vmf);
>>
>> @@ -2798,7 +2798,7 @@ static int do_wp_page(struct vm_fault *vmf)
>> return VM_FAULT_WRITE;
>> }
>> unlock_page(vmf->page);
>> - } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
>> + } else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
>> (VM_WRITE|VM_SHARED))) {
>> return wp_page_shared(vmf);
>> }
>> @@ -3067,7 +3067,7 @@ int do_swap_page(struct vm_fault *vmf)
>>
>> inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>> dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
>> - pte = mk_pte(page, vma->vm_page_prot);
>> + pte = mk_pte(page, vmf->vma_page_prot);
>> if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
>> pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>> vmf->flags &= ~FAULT_FLAG_WRITE;
>> @@ -3093,7 +3093,7 @@ int do_swap_page(struct vm_fault *vmf)
>>
>> swap_free(entry);
>> if (mem_cgroup_swap_full(page) ||
>> - (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
>> + (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
>> try_to_free_swap(page);
>> unlock_page(page);
>> if (page != swapcache && swapcache) {
>> @@ -3150,7 +3150,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>> pte_t entry;
>>
>> /* File mapping without ->vm_ops ? */
>> - if (vma->vm_flags & VM_SHARED)
>> + if (vmf->vma_flags & VM_SHARED)
>> return VM_FAULT_SIGBUS;
>>
>> /*
>> @@ -3174,7 +3174,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>> if (!(vmf->flags & FAULT_FLAG_WRITE) &&
>> !mm_forbids_zeropage(vma->vm_mm)) {
>> entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
>> - vma->vm_page_prot));
>> + vmf->vma_page_prot));
>> if (!pte_map_lock(vmf))
>> return VM_FAULT_RETRY;
>> if (!pte_none(*vmf->pte))
>> @@ -3207,8 +3207,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
>> */
>> __SetPageUptodate(page);
>>
>> - entry = mk_pte(page, vma->vm_page_prot);
>> - if (vma->vm_flags & VM_WRITE)
>> + entry = mk_pte(page, vmf->vma_page_prot);
>> + if (vmf->vma_flags & VM_WRITE)
>> entry = pte_mkwrite(pte_mkdirty(entry));
>>
>> if (!pte_map_lock(vmf)) {
>> @@ -3404,7 +3404,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
>> for (i = 0; i < HPAGE_PMD_NR; i++)
>> flush_icache_page(vma, page + i);
>>
>> - entry = mk_huge_pmd(page, vma->vm_page_prot);
>> + entry = mk_huge_pmd(page, vmf->vma_page_prot);
>> if (write)
>> entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>>
>> @@ -3478,11 +3478,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
>> return VM_FAULT_NOPAGE;
>>
>> flush_icache_page(vma, page);
>> - entry = mk_pte(page, vma->vm_page_prot);
>> + entry = mk_pte(page, vmf->vma_page_prot);
>> if (write)
>> entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> /* copy-on-write page */
>> - if (write && !(vma->vm_flags & VM_SHARED)) {
>> + if (write && !(vmf->vma_flags & VM_SHARED)) {
>> inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>> page_add_new_anon_rmap(page, vma, vmf->address, false);
>> mem_cgroup_commit_charge(page, memcg, false, false);
>> @@ -3521,7 +3521,7 @@ int finish_fault(struct vm_fault *vmf)
>>
>> /* Did we COW the page? */
>> if ((vmf->flags & FAULT_FLAG_WRITE) &&
>> - !(vmf->vma->vm_flags & VM_SHARED))
>> + !(vmf->vma_flags & VM_SHARED))
>> page = vmf->cow_page;
>> else
>> page = vmf->page;
>> @@ -3775,7 +3775,7 @@ static int do_fault(struct vm_fault *vmf)
>> ret = VM_FAULT_SIGBUS;
>> else if (!(vmf->flags & FAULT_FLAG_WRITE))
>> ret = do_read_fault(vmf);
>> - else if (!(vma->vm_flags & VM_SHARED))
>> + else if (!(vmf->vma_flags & VM_SHARED))
>> ret = do_cow_fault(vmf);
>> else
>> ret = do_shared_fault(vmf);
>> @@ -3832,7 +3832,7 @@ static int do_numa_page(struct vm_fault *vmf)
>> * accessible ptes, some can allow access by kernel mode.
>> */
>> pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
>> - pte = pte_modify(pte, vma->vm_page_prot);
>> + pte = pte_modify(pte, vmf->vma_page_prot);
>> pte = pte_mkyoung(pte);
>> if (was_writable)
>> pte = pte_mkwrite(pte);
>> @@ -3866,7 +3866,7 @@ static int do_numa_page(struct vm_fault *vmf)
>> * Flag if the page is shared between multiple address spaces. This
>> * is later used when determining whether to group tasks together
>> */
>> - if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
>> + if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
>> flags |= TNF_SHARED;
>>
>> last_cpupid = page_cpupid_last(page);
>> @@ -3911,7 +3911,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
>> return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
>>
>> /* COW handled on pte level: split pmd */
>> - VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
>> + VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
>> __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
>>
>> return VM_FAULT_FALLBACK;
>> @@ -4058,6 +4058,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>> .flags = flags,
>> .pgoff = linear_page_index(vma, address),
>> .gfp_mask = __get_fault_gfp_mask(vma),
>> + .vma_flags = vma->vm_flags,
>> + .vma_page_prot = vma->vm_page_prot,
>> };
>> unsigned int dirty = flags & FAULT_FLAG_WRITE;
>> struct mm_struct *mm = vma->vm_mm;
>
> Don't you also need to do this?
In theory there is no risk there: if vma->vm_flags has changed behind our
back, the locking of the pte will prevent a concurrent update of the pte
values.
So if an mprotect() call is occurring in parallel, once the vm_flags have
been touched, the pte needs to be modified and this requires the pte lock to
be held. So this will happen after we have revalidated the vma and locked
the pte.
This being said, it seems better to deal with vmf->vma_flags when the vmf
structure is available, so I'll apply the following.
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -694,9 +694,9 @@ void free_compound_page(struct page *page);
> * pte_mkwrite. But get_user_pages can cause write faults for mappings
> * that do not have writing enabled, when used by access_process_vm.
> */
> -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> +static inline pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
> {
> - if (likely(vma->vm_flags & VM_WRITE))
> + if (likely(vma_flags & VM_WRITE))
> pte = pte_mkwrite(pte);
> return pte;
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1195,8 +1195,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
>
> for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
> pte_t entry;
> - entry = mk_pte(pages[i], vma->vm_page_prot);
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> + entry = mk_pte(pages[i], vmf->vma_page_prot);
> + entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> memcg = (void *)page_private(pages[i]);
> set_page_private(pages[i], 0);
> page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
> @@ -2169,7 +2169,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> entry = pte_swp_mksoft_dirty(entry);
> } else {
> entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
> - entry = maybe_mkwrite(entry, vma);
> + entry = maybe_mkwrite(entry, vma->vm_flags);
> if (!write)
> entry = pte_wrprotect(entry);
> if (!young)
> diff --git a/mm/memory.c b/mm/memory.c
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1826,7 +1826,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> out_mkwrite:
> if (mkwrite) {
> entry = pte_mkyoung(entry);
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> + entry = maybe_mkwrite(pte_mkdirty(entry), vma->vm_flags);
> }
>
> set_pte_at(mm, addr, pte, entry);
> @@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
>
> flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
> entry = pte_mkyoung(vmf->orig_pte);
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> + entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
> update_mmu_cache(vma, vmf->address, vmf->pte);
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> @@ -2549,8 +2549,8 @@ static int wp_page_copy(struct vm_fault *vmf)
> inc_mm_counter_fast(mm, MM_ANONPAGES);
> }
> flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
> - entry = mk_pte(new_page, vma->vm_page_prot);
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> + entry = mk_pte(new_page, vmf->vma_page_prot);
> + entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> /*
> * Clear the pte entry and flush it first, before updating the
> * pte with the new entry. This will avoid a race condition
> @@ -3069,7 +3069,7 @@ int do_swap_page(struct vm_fault *vmf)
> dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
> pte = mk_pte(page, vmf->vma_page_prot);
> if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
> - pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> +		pte = maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
> vmf->flags &= ~FAULT_FLAG_WRITE;
> ret |= VM_FAULT_WRITE;
> exclusive = RMAP_EXCLUSIVE;
> @@ -3481,7 +3481,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
> flush_icache_page(vma, page);
> entry = mk_pte(page, vmf->vma_page_prot);
> if (write)
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> /* copy-on-write page */
> if (write && !(vmf->vma_flags & VM_SHARED)) {
> inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
> diff --git a/mm/migrate.c b/mm/migrate.c
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
> */
> entry = pte_to_swp_entry(*pvmw.pte);
> if (is_write_migration_entry(entry))
> - pte = maybe_mkwrite(pte, vma);
> + pte = maybe_mkwrite(pte, vma->vm_flags);
>
> if (unlikely(is_zone_device_page(new))) {
> if (is_device_private_page(new)) {
>
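The ordering argument above (any racing mprotect() must take the pte lock
before rewriting the PTE, so revalidating the PTE under that lock also
validates the flags snapshot) can be sketched in userspace C. Everything
below is an illustrative model, not kernel code:

```c
#include <assert.h>
#include <pthread.h>

/* Toy VMA: the pte_lock plays the role of the page table lock. */
struct vma_model {
	pthread_mutex_t pte_lock;
	unsigned long vm_flags;
	int pte;
};

/* Toy vm_fault: the handler works on a snapshot taken at fault entry. */
struct fault_model {
	unsigned long vma_flags;
	int orig_pte;
};

static void fault_snapshot(struct fault_model *f, struct vma_model *v)
{
	f->vma_flags = v->vm_flags;
	f->orig_pte = v->pte;
}

/* mprotect() analogue: the flags change first, then the PTE is rewritten
 * under the pte lock. */
static void mprotect_model(struct vma_model *v, unsigned long flags, int pte)
{
	v->vm_flags = flags;
	pthread_mutex_lock(&v->pte_lock);
	v->pte = pte;
	pthread_mutex_unlock(&v->pte_lock);
}

/* Fault completion: succeeds only if the PTE still matches the snapshot,
 * which guarantees the snapshot flags were not applied after a racing
 * protection change took effect on the PTE. */
static int fault_finish(struct fault_model *f, struct vma_model *v)
{
	int ok;

	pthread_mutex_lock(&v->pte_lock);
	ok = (v->pte == f->orig_pte);
	pthread_mutex_unlock(&v->pte_lock);
	return ok;
}
```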
On 03/04/2018 01:12, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index dfa81a638b7c..a84ddc218bbd 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -684,13 +684,18 @@ void free_compound_page(struct page *page);
>> * pte_mkwrite. But get_user_pages can cause write faults for mappings
>> * that do not have writing enabled, when used by access_process_vm.
>> */
>> -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
>> +static inline pte_t __maybe_mkwrite(pte_t pte, unsigned long vma_flags)
>> {
>> - if (likely(vma->vm_flags & VM_WRITE))
>> + if (likely(vma_flags & VM_WRITE))
>> pte = pte_mkwrite(pte);
>> return pte;
>> }
>>
>> +static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
>> +{
>> + return __maybe_mkwrite(pte, vma->vm_flags);
>> +}
>> +
>> int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
>> struct page *page);
>> int finish_fault(struct vm_fault *vmf);
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 0a0a483d9a65..af0338fbc34d 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
>>
>> flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>> entry = pte_mkyoung(vmf->orig_pte);
>> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> + entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>> if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
>> update_mmu_cache(vma, vmf->address, vmf->pte);
>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>> @@ -2549,8 +2549,8 @@ static int wp_page_copy(struct vm_fault *vmf)
>> inc_mm_counter_fast(mm, MM_ANONPAGES);
>> }
>> flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>> - entry = mk_pte(new_page, vma->vm_page_prot);
>> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> + entry = mk_pte(new_page, vmf->vma_page_prot);
>> + entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>> /*
>> * Clear the pte entry and flush it first, before updating the
>> * pte with the new entry. This will avoid a race condition
>
> Don't you also need to do this in do_swap_page()?
Indeed, I'll drop this patch, as all the changes are now done in patch 11
"mm: Cache some VMA fields in the vm_fault structure" where, as you
suggested, maybe_mkwrite() is now passed the vm_flags value directly.
> diff --git a/mm/memory.c b/mm/memory.c
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3067,9 +3067,9 @@ int do_swap_page(struct vm_fault *vmf)
>
> inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
> dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
> - pte = mk_pte(page, vma->vm_page_prot);
> + pte = mk_pte(page, vmf->vma_page_prot);
> if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
> - pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> + pte = __maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
> vmf->flags &= ~FAULT_FLAG_WRITE;
> ret |= VM_FAULT_WRITE;
> exclusive = RMAP_EXCLUSIVE;
>
On 03/04/2018 01:18, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index a84ddc218bbd..73b8b99f482b 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1263,8 +1263,11 @@ struct zap_details {
>> pgoff_t last_index; /* Highest page->index to unmap */
>> };
>>
>> -struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>> - pte_t pte, bool with_public_device);
>> +struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>> + pte_t pte, bool with_public_device,
>> + unsigned long vma_flags);
>> +#define _vm_normal_page(vma, addr, pte, with_public_device) \
>> + __vm_normal_page(vma, addr, pte, with_public_device, (vma)->vm_flags)
>> #define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false)
>>
>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>
> If _vm_normal_page() is a static inline function does it break somehow?
> It's nice to avoid the #define's.
No problem, I'll create it as a static inline function.
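A minimal sketch of the static inline form, with simplified stand-in types
(the real __vm_normal_page() takes a pte_t and more arguments):

```c
#include <assert.h>

struct vma_stub { unsigned long vm_flags; };

/* Workhorse taking the flags value explicitly, as in the patch. */
static int __vm_normal_page_stub(struct vma_stub *vma, unsigned long addr,
				 unsigned long vma_flags)
{
	(void)vma;
	(void)addr;
	return (int)vma_flags;	/* real code would decode the pte here */
}

/* The static inline wrapper suggested above: unlike a #define it is
 * type-checked and evaluates `vma` exactly once. */
static inline int _vm_normal_page_stub(struct vma_stub *vma, unsigned long addr)
{
	return __vm_normal_page_stub(vma, addr, vma->vm_flags);
}
```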
>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index af0338fbc34d..184a0d663a76 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -826,8 +826,9 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>> #else
>> # define HAVE_PTE_SPECIAL 0
>> #endif
>> -struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>> - pte_t pte, bool with_public_device)
>> +struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>> + pte_t pte, bool with_public_device,
>> + unsigned long vma_flags)
>> {
>> unsigned long pfn = pte_pfn(pte);
>>
>
> Would it be possible to update the comment since the function itself is no
> longer named vm_normal_page?
Sure.
On 03/04/2018 21:39, Jerome Glisse wrote:
> On Tue, Mar 13, 2018 at 06:59:45PM +0100, Laurent Dufour wrote:
>> When dealing with the speculative fault path we should use the VMA's field
>> cached value stored in the vm_fault structure.
>>
>> Currently vm_normal_page() is using the pointer to the VMA to fetch the
>> vm_flags value. This patch provides a new __vm_normal_page() which is
>> receiving the vm_flags flags value as parameter.
>>
>> Note: The speculative path is turned on for architecture providing support
>> for special PTE flag. So only the first block of vm_normal_page is used
>> during the speculative path.
>
> Might be a good idea to explicitly have SPECULATIVE Kconfig option depends
> on ARCH_PTE_SPECIAL and a comment for !HAVE_PTE_SPECIAL in the function
> explaining that speculative page fault should never reach that point.
Unfortunately there is no ARCH_PTE_SPECIAL in the Kconfig files; it is
defined in the per-architecture header files, so I can't do anything in the
Kconfig file.
However, I can check that at build time, and __vm_normal_page() sounds like
a good place for such a check, like this:
@@ -869,6 +870,14 @@ struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
/* !HAVE_PTE_SPECIAL case follows: */
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ /* This part should never get called when the speculative page fault
+ * handler is turned on. This is mainly because we can't rely on
+ * vm_start.
+ */
+#error CONFIG_SPECULATIVE_PAGE_FAULT requires HAVE_PTE_SPECIAL
+#endif
+
if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
if (vma_flags & VM_MIXEDMAP) {
if (!pfn_valid(pfn))
Thanks,
Laurent.
On Wed, Apr 04, 2018 at 06:26:44PM +0200, Laurent Dufour wrote:
>
>
> On 03/04/2018 21:39, Jerome Glisse wrote:
> > On Tue, Mar 13, 2018 at 06:59:45PM +0100, Laurent Dufour wrote:
> >> When dealing with the speculative fault path we should use the VMA's field
> >> cached value stored in the vm_fault structure.
> >>
> >> Currently vm_normal_page() is using the pointer to the VMA to fetch the
> >> vm_flags value. This patch provides a new __vm_normal_page() which is
> >> receiving the vm_flags flags value as parameter.
> >>
> >> Note: The speculative path is turned on for architecture providing support
> >> for special PTE flag. So only the first block of vm_normal_page is used
> >> during the speculative path.
> >
> > Might be a good idea to explicitly have SPECULATIVE Kconfig option depends
> > on ARCH_PTE_SPECIAL and a comment for !HAVE_PTE_SPECIAL in the function
> > explaining that speculative page fault should never reach that point.
>
> Unfortunately there is no ARCH_PTE_SPECIAL in the config file, it is defined in
> the per architecture header files.
> So I can't do anything in the Kconfig file
Maybe adding a new Kconfig symbol for ARCH_PTE_SPECIAL, very much like the
other ARCH_HAS_* symbols, would work.
>
> However, I can check that at build time, and doing such a check in
> __vm_normal_page sounds to be a good place, like that:
>
> @@ -869,6 +870,14 @@ struct page *__vm_normal_page(struct vm_area_struct *vma,
> unsigned long addr,
>
> /* !HAVE_PTE_SPECIAL case follows: */
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + /* This part should never get called when the speculative page fault
> + * handler is turned on. This is mainly because we can't rely on
> + * vm_start.
> + */
> +#error CONFIG_SPECULATIVE_PAGE_FAULT requires HAVE_PTE_SPECIAL
> +#endif
> +
> if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
> if (vma_flags & VM_MIXEDMAP) {
> if (!pfn_valid(pfn))
>
I am not a fan of #if/#else/#endif in code. But that's a taste thing.
I honestly think that adding a Kconfig option for the special pte is the
cleanest solution.
Cheers,
Jérôme
On 04/04/2018 23:59, Jerome Glisse wrote:
> On Wed, Apr 04, 2018 at 06:26:44PM +0200, Laurent Dufour wrote:
>>
>>
>> On 03/04/2018 21:39, Jerome Glisse wrote:
>>> On Tue, Mar 13, 2018 at 06:59:45PM +0100, Laurent Dufour wrote:
>>>> When dealing with the speculative fault path we should use the VMA's field
>>>> cached value stored in the vm_fault structure.
>>>>
>>>> Currently vm_normal_page() is using the pointer to the VMA to fetch the
>>>> vm_flags value. This patch provides a new __vm_normal_page() which is
>>>> receiving the vm_flags flags value as parameter.
>>>>
>>>> Note: The speculative path is turned on for architecture providing support
>>>> for special PTE flag. So only the first block of vm_normal_page is used
>>>> during the speculative path.
>>>
>>> Might be a good idea to explicitly have SPECULATIVE Kconfig option depends
>>> on ARCH_PTE_SPECIAL and a comment for !HAVE_PTE_SPECIAL in the function
>>> explaining that speculative page fault should never reach that point.
>>
>> Unfortunately there is no ARCH_PTE_SPECIAL in the config file, it is defined in
>> the per architecture header files.
>> So I can't do anything in the Kconfig file
>
> Maybe adding a new Kconfig symbol for ARCH_PTE_SPECIAL very much like
> others ARCH_HAS_
>
>>
>> However, I can check that at build time, and doing such a check in
>> __vm_normal_page sounds to be a good place, like that:
>>
>> @@ -869,6 +870,14 @@ struct page *__vm_normal_page(struct vm_area_struct *vma,
>> unsigned long addr,
>>
>> /* !HAVE_PTE_SPECIAL case follows: */
>>
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> + /* This part should never get called when the speculative page fault
>> + * handler is turned on. This is mainly because we can't rely on
>> + * vm_start.
>> + */
>> +#error CONFIG_SPECULATIVE_PAGE_FAULT requires HAVE_PTE_SPECIAL
>> +#endif
>> +
>> if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>> if (vma_flags & VM_MIXEDMAP) {
>> if (!pfn_valid(pfn))
>>
>
> I am not a fan of #if/#else/#endif in code. But that's a taste thing.
> I honestly think that adding a Kconfig for special pte is the cleanest
> solution.
I do agree, but this should be done in a separate series.
I'll see how this could be done, but on some architectures (like powerpc)
this is a bit obfuscated for unknown reasons.
For the time being, I'll remove the check and just leave the comment in place.
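For reference, the Kconfig symbol suggested above might look roughly like
this in a later series; the symbol name and the exact dependencies of
SPECULATIVE_PAGE_FAULT are assumptions here, not taken from the posted
patches:

```kconfig
# Hypothetical sketch: a capability symbol selected by each architecture
# that implements pte_special(), which the speculative page fault option
# can then depend on instead of a per-arch #define.
config ARCH_HAS_PTE_SPECIAL
	bool

config SPECULATIVE_PAGE_FAULT
	bool "Speculative page fault handling"
	depends on ARCH_HAS_PTE_SPECIAL && MMU && SMP
```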
On 03/04/2018 02:11, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> This change is inspired by the Peter's proposal patch [1] which was
>> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
>> that particular case, and it is introducing major performance degradation
>> due to excessive scheduling operations.
>>
>> To allow access to the mm_rb tree without grabbing the mmap_sem, this patch
>> is protecting it access using a rwlock. As the mm_rb tree is a O(log n)
>> search it is safe to protect it using such a lock. The VMA cache is not
>> protected by the new rwlock and it should not be used without holding the
>> mmap_sem.
>>
>> To allow the picked VMA structure to be used once the rwlock is released, a
>> use count is added to the VMA structure. When the VMA is allocated it is
>> set to 1. Each time the VMA is picked with the rwlock held its use count
>> is incremented. Each time the VMA is released it is decremented. When the
>> use count hits zero, this means that the VMA is no more used and should be
>> freed.
>>
>> This patch is preparing for 2 kind of VMA access :
>> - as usual, under the control of the mmap_sem,
>> - without holding the mmap_sem for the speculative page fault handler.
>>
>> Access done under the control the mmap_sem doesn't require to grab the
>> rwlock to protect read access to the mm_rb tree, but access in write must
>> be done under the protection of the rwlock too. This affects inserting and
>> removing of elements in the RB tree.
>>
>> The patch is introducing 2 new functions:
>> - vma_get() to find a VMA based on an address by holding the new rwlock.
>> - vma_put() to release the VMA when its no more used.
>> These services are designed to be used when access are made to the RB tree
>> without holding the mmap_sem.
>>
>> When a VMA is removed from the RB tree, its vma->vm_rb field is cleared and
>> we rely on the WMB done when releasing the rwlock to serialize the write
>> with the RMB done in a later patch to check for the VMA's validity.
>>
>> When free_vma is called, the file associated with the VMA is closed
>> immediately, but the policy and the file structure remained in used until
>> the VMA's use count reach 0, which may happens later when exiting an
>> in progress speculative page fault.
>>
>> [1] https://patchwork.kernel.org/patch/5108281/
>>
>> Cc: Peter Zijlstra (Intel) <[email protected]>
>> Cc: Matthew Wilcox <[email protected]>
>> Signed-off-by: Laurent Dufour <[email protected]>
>
> Can __free_vma() be generalized for mm/nommu.c's delete_vma() and
> do_mmap()?
To be honest, I didn't look at mm/nommu.c, assuming that such architectures
would probably be single-threaded. Am I wrong?
On Mon, 26 Mar 2018, Andi Kleen wrote:
> > Aside: should there be a new spec_flt field for struct task_struct that
> > complements maj_flt and min_flt and be exported through /proc/pid/stat?
>
> No. task_struct is already too bloated. If you need per process tracking
> you can always get it through trace points.
>
Hi Andi,
We have
count_vm_event(PGFAULT);
count_memcg_event_mm(vma->vm_mm, PGFAULT);
in handle_mm_fault() but no counterpart for SPF. I think it would be
helpful to be able to determine how much faulting can be done
speculatively if there is no per-process tracking without tracing.
On 03/04/2018 02:11, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> This change is inspired by the Peter's proposal patch [1] which was
>> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
>> that particular case, and it is introducing major performance degradation
>> due to excessive scheduling operations.
>>
>> To allow access to the mm_rb tree without grabbing the mmap_sem, this patch
>> is protecting it access using a rwlock. As the mm_rb tree is a O(log n)
>> search it is safe to protect it using such a lock. The VMA cache is not
>> protected by the new rwlock and it should not be used without holding the
>> mmap_sem.
>>
>> To allow the picked VMA structure to be used once the rwlock is released, a
>> use count is added to the VMA structure. When the VMA is allocated it is
>> set to 1. Each time the VMA is picked with the rwlock held its use count
>> is incremented. Each time the VMA is released it is decremented. When the
>> use count hits zero, this means that the VMA is no more used and should be
>> freed.
>>
>> This patch is preparing for 2 kind of VMA access :
>> - as usual, under the control of the mmap_sem,
>> - without holding the mmap_sem for the speculative page fault handler.
>>
>> Access done under the control the mmap_sem doesn't require to grab the
>> rwlock to protect read access to the mm_rb tree, but access in write must
>> be done under the protection of the rwlock too. This affects inserting and
>> removing of elements in the RB tree.
>>
>> The patch is introducing 2 new functions:
>> - vma_get() to find a VMA based on an address by holding the new rwlock.
>> - vma_put() to release the VMA when its no more used.
>> These services are designed to be used when access are made to the RB tree
>> without holding the mmap_sem.
>>
>> When a VMA is removed from the RB tree, its vma->vm_rb field is cleared and
>> we rely on the WMB done when releasing the rwlock to serialize the write
>> with the RMB done in a later patch to check for the VMA's validity.
>>
>> When free_vma is called, the file associated with the VMA is closed
>> immediately, but the policy and the file structure remained in used until
>> the VMA's use count reach 0, which may happens later when exiting an
>> in progress speculative page fault.
>>
>> [1] https://patchwork.kernel.org/patch/5108281/
>>
>> Cc: Peter Zijlstra (Intel) <[email protected]>
>> Cc: Matthew Wilcox <[email protected]>
>> Signed-off-by: Laurent Dufour <[email protected]>
>
> Can __free_vma() be generalized for mm/nommu.c's delete_vma() and
> do_mmap()?
Good question!
I guess if there is no MMU, there is no page fault, so no speculative page
fault, and this patch is only required by the speculative page fault handler.
By the way, I should probably make CONFIG_SPECULATIVE_PAGE_FAULT depend on
CONFIG_MMU.
This being said, if your idea is to extend the mm_rb tree rwlocking to the
nommu case, then this is another story, and I wonder if there is a real need
in such a case. But I have to admit I'm not so familiar with kernels built
for MMU-less systems.
Am I missing something?
Thanks,
Laurent.
On 03/04/2018 01:57, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> When dealing with speculative page fault handler, we may race with VMA
>> being split or merged. In this case the vma->vm_start and vm->vm_end
>> fields may not match the address the page fault is occurring.
>>
>> This can only happens when the VMA is split but in that case, the
>> anon_vma pointer of the new VMA will be the same as the original one,
>> because in __split_vma the new->anon_vma is set to src->anon_vma when
>> *new = *vma.
>>
>> So even if the VMA boundaries are not correct, the anon_vma pointer is
>> still valid.
>>
>> If the VMA has been merged, then the VMA in which it has been merged
>> must have the same anon_vma pointer otherwise the merge can't be done.
>>
>> So in all cases we know that the anon_vma is valid: we checked before
>> starting the speculative page fault that the anon_vma pointer is valid
>> for this VMA, and since there is an anon_vma, at some point a page has
>> been backed by it. Before the VMA can be cleaned up, the page table
>> lock has to be grabbed to clear the PTE, and the anon_vma field is
>> checked again once the PTE is locked.
>>
>> This patch introduces a new __page_add_new_anon_rmap() service which
>> doesn't check the VMA boundaries, and creates a new inline one which
>> does the check.
>>
>> When called from a page fault handler that is not speculative, there is
>> a guarantee that vm_start and vm_end match the faulting address, so
>> this check is useless. In the context of the speculative page fault
>> handler, this check may be wrong, but the anon_vma is still valid as
>> explained above.
>>
>> Signed-off-by: Laurent Dufour <[email protected]>
>
> I'm indifferent on this: it could be argued both ways that the new
> function and its variant for a simple VM_BUG_ON() aren't worth it, and
> that it should rather be done in the callers of page_add_new_anon_rmap().
> It feels like it would be better left to the caller, with a comment
> added to page_add_anon_rmap() itself in mm/rmap.c.
Well, there are 11 callers of page_add_new_anon_rmap() that would need to be
changed, and future ones too.
By introducing __page_add_new_anon_rmap() my goal was to make clear that this
call is *special* and that calling it is not the usual way. This also ensures
that the check is done most of the time (when built with the right config) and
that we will not miss any caller.
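To make the distinction concrete, here is a hedged userspace sketch of the
split described above. The real kernel functions take a struct page and
further parameters, and the in-kernel check would be a VM_BUG_ON(); the
signatures and the return code below are simplified stand-ins for
illustration only:

```c
#include <stddef.h>

/* Illustrative stand-in for a VMA: only the boundary fields matter here. */
struct vma_stub {
	unsigned long vm_start, vm_end;
};

/* The bare service: no boundary check, so the speculative handler can
 * call it even when vm_start/vm_end may be stale after a split/merge
 * (the anon_vma pointer is still valid, as argued above). */
static void __page_add_new_anon_rmap(struct vma_stub *vma,
				     unsigned long address)
{
	/* ... set up the anonymous rmap; no boundary check here ... */
	(void)vma;
	(void)address;
}

/* The inline wrapper keeps the check for every other caller: on the
 * classic page fault path the faulting address must lie inside the VMA,
 * so the check is cheap documentation of that invariant. */
static inline int page_add_new_anon_rmap(struct vma_stub *vma,
					 unsigned long address)
{
	if (address < vma->vm_start || address >= vma->vm_end)
		return -1;	/* would be a VM_BUG_ON() in the kernel */
	__page_add_new_anon_rmap(vma, address);
	return 0;
}
```

The design point is the one Laurent makes: the underscored name marks the
unchecked variant as special, while every ordinary caller keeps the check
for free through the inline wrapper.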
On 10/04/2018 08:47, David Rientjes wrote:
> On Mon, 26 Mar 2018, Andi Kleen wrote:
>
>>> Aside: should there be a new spec_flt field for struct task_struct that
>>> complements maj_flt and min_flt and be exported through /proc/pid/stat?
>>
>> No. task_struct is already too bloated. If you need per-process tracking
>> you can always get it through tracepoints.
>>
>
> Hi Andi,
>
> We have
>
> count_vm_event(PGFAULT);
> count_memcg_event_mm(vma->vm_mm, PGFAULT);
>
> in handle_mm_fault(), but no counterpart for SPF. I think it would be
> helpful to be able to determine, without tracing, how much faulting is
> done speculatively, given that there is no per-process tracking.
That sounds like a good idea. I will create a separate patch adding a
dedicated speculative page fault counter, accounted the same way PGFAULT is.
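As a rough illustration of what such a counter could look like, here is a
hedged userspace sketch. The enum value name SPECULATIVE_PGFAULT and the
plain array are hypothetical stand-ins for the kernel's vm_event_item enum
and per-CPU vm_event_states, not the code of the eventual patch:

```c
/* Hypothetical event list: a speculative-fault counter bumped next to
 * PGFAULT, so the share of faults handled speculatively can be read
 * from a /proc/vmstat-style interface without tracing. */
enum vm_event_item { PGFAULT, SPECULATIVE_PGFAULT, NR_VM_EVENTS };

/* Stand-in for the kernel's per-CPU vm_event_states. */
static unsigned long vm_events[NR_VM_EVENTS];

static void count_vm_event(enum vm_event_item item)
{
	vm_events[item]++;
}

/* Every fault bumps PGFAULT; a speculatively handled one additionally
 * bumps the dedicated counter, mirroring how handle_mm_fault() accounts
 * PGFAULT today. */
static void handle_fault(int speculative)
{
	count_vm_event(PGFAULT);
	if (speculative)
		count_vm_event(SPECULATIVE_PGFAULT);
}
```

With this shape, the ratio of the two counters gives exactly the number
David asks for: how much of the fault load is being handled speculatively.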
Thanks,
Laurent.
On 14/03/2018 14:11, Michal Hocko wrote:
> On Tue 13-03-18 18:59:30, Laurent Dufour wrote:
>> Changes since v8:
>> - Don't check PMD when locking the pte when THP is disabled
>> Thanks to Daniel Jordan for reporting this.
>> - Rebase on 4.16
>
> Is this really worth reposting the whole pile? I mean this is at v9,
> each doing little changes. It is quite tiresome to barely get to a
> bookmarked version just to find out that there are 2 new versions out.
>
> I am sorry to be grumpy, and I can understand some frustration that it
> doesn't move forward that easily, but this is a _big_ change. We should
> start with a real high-level review rather than doing small changes here
> and there and reaching v20 quickly.
I know this would mean a v10, but there have been a bunch of reviews from
David Rientjes and Jerome Glisse, and I had to make many changes to address
them. So I think it is time to push a v10.
If you have already started a review of this v9 series, please send me your
remarks so that I can fold them into the v10 asap.
Thanks,
Laurent.