2019-09-18 13:42:47

by Justin He

Subject: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64 guest,
there was a double page fault in __copy_from_user_inatomic() of cow_user_page().

The call trace below is from arm64 do_page_fault, for debugging purposes:
[ 110.016195] Call trace:
[ 110.016826] do_page_fault+0x5a4/0x690
[ 110.017812] do_mem_abort+0x50/0xb0
[ 110.018726] el1_da+0x20/0xc4
[ 110.019492] __arch_copy_from_user+0x180/0x280
[ 110.020646] do_wp_page+0xb0/0x860
[ 110.021517] __handle_mm_fault+0x994/0x1338
[ 110.022606] handle_mm_fault+0xe8/0x180
[ 110.023584] do_page_fault+0x240/0x690
[ 110.024535] do_mem_abort+0x50/0xb0
[ 110.025423] el0_da+0x20/0x24

The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
[ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3

As Catalin explained: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with a zeroed page after fork() + CoW for pfn mappings. We
don't always have a hardware-managed access flag on arm64."

This patch fixes it by calling pte_mkyoung(). Also, the parameters are
changed because vmf should be passed to cow_user_page().

[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork

Reported-by: Yibo Cai <[email protected]>
Signed-off-by: Jia He <[email protected]>
---
mm/memory.c | 35 ++++++++++++++++++++++++++++++-----
1 file changed, 30 insertions(+), 5 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..d2c130a5883b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
2;
#endif

+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+ return false;
+}
+#endif
+
static int __init disable_randmaps(char *s)
{
randomize_va_space = 0;
@@ -2140,8 +2147,12 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
return same;
}

-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline void cow_user_page(struct page *dst, struct page *src,
+ struct vm_fault *vmf)
{
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long addr = vmf->address;
+
debug_dma_assert_idle(src);

/*
@@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
*/
if (unlikely(!src)) {
void *kaddr = kmap_atomic(dst);
- void __user *uaddr = (void __user *)(va & PAGE_MASK);
+ void __user *uaddr = (void __user *)(addr & PAGE_MASK);
+ pte_t entry;

/*
* This really shouldn't fail, because the page is there
* in the page tables. But it might just be unreadable,
* in which case we just give up and fill the result with
- * zeroes.
+ * zeroes. On architectures with software "accessed" bits,
+ * we would take a double page fault here, so mark it
+ * accessed here.
*/
+ if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+ spin_lock(vmf->ptl);
+ if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+ entry = pte_mkyoung(vmf->orig_pte);
+ if (ptep_set_access_flags(vma, addr,
+ vmf->pte, entry, 0))
+ update_mmu_cache(vma, addr, vmf->pte);
+ }
+ spin_unlock(vmf->ptl);
+ }
+
if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
clear_page(kaddr);
kunmap_atomic(kaddr);
flush_dcache_page(dst);
} else
- copy_user_highpage(dst, src, va, vma);
+ copy_user_highpage(dst, src, addr, vma);
}

static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2318,7 +2343,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
vmf->address);
if (!new_page)
goto oom;
- cow_user_page(new_page, old_page, vmf->address, vma);
+ cow_user_page(new_page, old_page, vmf);
}

if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
--
2.17.1


2019-09-18 18:07:20

by Catalin Marinas

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

On Wed, Sep 18, 2019 at 05:00:27PM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > */
> > if (unlikely(!src)) {
> > void *kaddr = kmap_atomic(dst);
> > - void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > + pte_t entry;
> >
> > /*
> > * This really shouldn't fail, because the page is there
> > * in the page tables. But it might just be unreadable,
> > * in which case we just give up and fill the result with
> > - * zeroes.
> > + * zeroes. On architectures with software "accessed" bits,
> > + * we would take a double page fault here, so mark it
> > + * accessed here.
> > */
> > + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > + spin_lock(vmf->ptl);
> > + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > + entry = pte_mkyoung(vmf->orig_pte);
> > + if (ptep_set_access_flags(vma, addr,
> > + vmf->pte, entry, 0))
> > + update_mmu_cache(vma, addr, vmf->pte);
> > + }
>
> I don't follow.
>
> So if the pte has changed under you, you don't set the accessed bit, but
> nevertheless copy from the user.
>
> What makes you think it will not trigger the same problem?
>
> I think we need to make cow_user_page() fail in this case and the caller --
> wp_page_copy() -- return zero. If the fault was resolved by another thread,
> we are fine. If not, userspace will re-fault on the same address and we will
> handle the fault on the second attempt.

It would be nice to clarify the semantics of this function and do as
you suggest but the current comment is slightly confusing:

/*
* If the source page was a PFN mapping, we don't have
* a "struct page" for it. We do a best-effort copy by
* just copying from the original user address. If that
* fails, we just zero-fill it. Live with it.
*/

Would any user-space rely on getting a zero-filled page here instead of
a recursive fault?

--
Catalin

2019-09-18 20:18:24

by Kirill A. Shutemov

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64 guest,
> there was a double page fault in __copy_from_user_inatomic() of cow_user_page().
>
> The call trace below is from arm64 do_page_fault, for debugging purposes:
> [ 110.016195] Call trace:
> [ 110.016826] do_page_fault+0x5a4/0x690
> [ 110.017812] do_mem_abort+0x50/0xb0
> [ 110.018726] el1_da+0x20/0xc4
> [ 110.019492] __arch_copy_from_user+0x180/0x280
> [ 110.020646] do_wp_page+0xb0/0x860
> [ 110.021517] __handle_mm_fault+0x994/0x1338
> [ 110.022606] handle_mm_fault+0xe8/0x180
> [ 110.023584] do_page_fault+0x240/0x690
> [ 110.024535] do_mem_abort+0x50/0xb0
> [ 110.025423] el0_da+0x20/0x24
>
> The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
> [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3
>
> As Catalin explained: "On arm64 without hardware Access Flag, copying from
> user will fail because the pte is old and cannot be marked young. So we
> always end up with a zeroed page after fork() + CoW for pfn mappings. We
> don't always have a hardware-managed access flag on arm64."
>
> This patch fixes it by calling pte_mkyoung(). Also, the parameters are
> changed because vmf should be passed to cow_user_page().
>
> [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
>
> Reported-by: Yibo Cai <[email protected]>
> Signed-off-by: Jia He <[email protected]>
> ---
> mm/memory.c | 35 ++++++++++++++++++++++++++++++-----
> 1 file changed, 30 insertions(+), 5 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index e2bb51b6242e..d2c130a5883b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
> 2;
> #endif
>
> +#ifndef arch_faults_on_old_pte
> +static inline bool arch_faults_on_old_pte(void)
> +{
> + return false;
> +}
> +#endif
> +
> static int __init disable_randmaps(char *s)
> {
> randomize_va_space = 0;
> @@ -2140,8 +2147,12 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> return same;
> }
>
> -static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> +static inline void cow_user_page(struct page *dst, struct page *src,
> + struct vm_fault *vmf)
> {
> + struct vm_area_struct *vma = vmf->vma;
> + unsigned long addr = vmf->address;
> +
> debug_dma_assert_idle(src);
>
> /*
> @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> */
> if (unlikely(!src)) {
> void *kaddr = kmap_atomic(dst);
> - void __user *uaddr = (void __user *)(va & PAGE_MASK);
> + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> + pte_t entry;
>
> /*
> * This really shouldn't fail, because the page is there
> * in the page tables. But it might just be unreadable,
> * in which case we just give up and fill the result with
> - * zeroes.
> + * zeroes. On architectures with software "accessed" bits,
> + * we would take a double page fault here, so mark it
> + * accessed here.
> */
> + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> + spin_lock(vmf->ptl);
> + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> + entry = pte_mkyoung(vmf->orig_pte);
> + if (ptep_set_access_flags(vma, addr,
> + vmf->pte, entry, 0))
> + update_mmu_cache(vma, addr, vmf->pte);
> + }

I don't follow.

So if the pte has changed under you, you don't set the accessed bit, but
nevertheless copy from the user.

What makes you think it will not trigger the same problem?

I think we need to make cow_user_page() fail in this case and the caller --
wp_page_copy() -- return zero. If the fault was resolved by another thread,
we are fine. If not, userspace will re-fault on the same address and we will
handle the fault on the second attempt.
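
Something along these lines, perhaps -- a sketch only, reusing the v4 naming;
the bool return and the caller handling are hypothetical, not code from this
series:

static inline bool cow_user_page(struct page *dst, struct page *src,
				 struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long addr = vmf->address;
	void *kaddr;
	void __user *uaddr;

	debug_dma_assert_idle(src);

	if (likely(src)) {
		copy_user_highpage(dst, src, addr, vma);
		return true;
	}

	/* PFN mapping: best-effort copy through the user address. */
	kaddr = kmap_atomic(dst);
	uaddr = (void __user *)(addr & PAGE_MASK);

	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
		pte_t entry;

		spin_lock(vmf->ptl);
		if (!pte_same(*vmf->pte, vmf->orig_pte)) {
			/*
			 * The pte changed under us; fail so the caller can
			 * back out and let userspace re-fault.
			 */
			spin_unlock(vmf->ptl);
			kunmap_atomic(kaddr);
			return false;
		}
		entry = pte_mkyoung(vmf->orig_pte);
		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
			update_mmu_cache(vma, addr, vmf->pte);
		spin_unlock(vmf->ptl);
	}

	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
		clear_page(kaddr);
	kunmap_atomic(kaddr);
	flush_dcache_page(dst);
	return true;
}

And the corresponding change in wp_page_copy():

	if (!cow_user_page(new_page, old_page, vmf)) {
		/*
		 * Raced with another thread; return 0 so userspace
		 * re-faults if the fault was not already handled.
		 */
		put_page(new_page);
		if (old_page)
			put_page(old_page);
		return 0;
	}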

> + spin_unlock(vmf->ptl);
> + }
> +
> if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
> clear_page(kaddr);
> kunmap_atomic(kaddr);
> flush_dcache_page(dst);
> } else
> - copy_user_highpage(dst, src, va, vma);
> + copy_user_highpage(dst, src, addr, vma);
> }
>
> static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
> @@ -2318,7 +2343,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> vmf->address);
> if (!new_page)
> goto oom;
> - cow_user_page(new_page, old_page, vmf->address, vma);
> + cow_user_page(new_page, old_page, vmf);
> }
>
> if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
> --
> 2.17.1
>
>

--
Kirill A. Shutemov

2019-09-19 01:49:57

by Justin He

Subject: RE: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared



> -----Original Message-----
> From: kbuild test robot <[email protected]>
> Sent: September 19, 2019 3:36
> To: Justin He (Arm Technology China) <[email protected]>
> Cc: [email protected]; Catalin Marinas <[email protected]>; Will
> Deacon <[email protected]>; Mark Rutland <[email protected]>;
> James Morse <[email protected]>; Marc Zyngier <[email protected]>;
> Matthew Wilcox <[email protected]>; Kirill A. Shutemov
> <[email protected]>; [email protected];
> [email protected]; [email protected]; Suzuki Poulose
> <[email protected]>; Punit Agrawal <[email protected]>;
> Anshuman Khandual <[email protected]>; Jun Yao
> <[email protected]>; Alex Van Brunt <[email protected]>;
> Robin Murphy <[email protected]>; Thomas Gleixner
> <[email protected]>; Andrew Morton <[email protected]>;
> [email protected]; Ralph Campbell <[email protected]>;
> [email protected]; Kaly Xin (Arm Technology China)
> <[email protected]>; Justin He (Arm Technology China)
> <[email protected]>
> Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF
> is cleared
>
> Hi Jia,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on linus/master]
> [cannot apply to v5.3 next-20190917]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system]
>
> url: https://github.com/0day-ci/linux/commits/Jia-He/fix-double-page-fault-on-arm64/20190918-220036
> config: arm64-allnoconfig (attached as .config)
> compiler: aarch64-linux-gcc (GCC) 7.4.0
> reproduce:
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # save the attached .config to linux build tree
> GCC_VERSION=7.4.0 make.cross ARCH=arm64
>
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <[email protected]>
>
> All errors (new ones prefixed by >>):
>
> mm/memory.o: In function `wp_page_copy':
> >> memory.c:(.text+0x8fc): undefined reference to `cpu_has_hw_af'
> memory.c:(.text+0x8fc): relocation truncated to fit: R_AARCH64_CALL26
> against undefined symbol `cpu_has_hw_af'
>
Ah, I should add a stub for when CONFIG_ARM64_HW_AFDBM is 'N' on the arm64
arch. Will fix it ASAP.
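
Roughly like this, I think (a sketch only: cpu_has_hw_af() is the helper the
robot found undefined, and the exact placement and guards here are guesses,
not the actual fix):

/* arch/arm64, hypothetical placement: */
#ifdef CONFIG_ARM64_HW_AFDBM
extern bool cpu_has_hw_af(void);
#else
static inline bool cpu_has_hw_af(void)
{
	/* Hardware Access Flag management not built in. */
	return false;
}
#endif

#define arch_faults_on_old_pte arch_faults_on_old_pte
static inline bool arch_faults_on_old_pte(void)
{
	/* We fault on access to an old pte whenever hardware AF is absent. */
	return !cpu_has_hw_af();
}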

--
Cheers,
Justin (Jia He)


> 0-DAY kernel test infrastructure Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all Intel Corporation

2019-09-19 02:10:35

by kernel test robot

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

Hi Jia,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to v5.3 next-20190917]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Jia-He/fix-double-page-fault-on-arm64/20190918-220036
config: arm64-allnoconfig (attached as .config)
compiler: aarch64-linux-gcc (GCC) 7.4.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=7.4.0 make.cross ARCH=arm64

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All errors (new ones prefixed by >>):

mm/memory.o: In function `wp_page_copy':
>> memory.c:(.text+0x8fc): undefined reference to `cpu_has_hw_af'
memory.c:(.text+0x8fc): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `cpu_has_hw_af'

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation



2019-09-19 05:31:30

by Jia He

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

Hi Kirill

[On behalf of [email protected] because some mails are filtered...]

On 2019/9/18 22:00, Kirill A. Shutemov wrote:
> On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
>> When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64 guest,
>> there was a double page fault in __copy_from_user_inatomic() of cow_user_page().
>>
>> The call trace below is from arm64 do_page_fault, for debugging purposes:
>> [ 110.016195] Call trace:
>> [ 110.016826] do_page_fault+0x5a4/0x690
>> [ 110.017812] do_mem_abort+0x50/0xb0
>> [ 110.018726] el1_da+0x20/0xc4
>> [ 110.019492] __arch_copy_from_user+0x180/0x280
>> [ 110.020646] do_wp_page+0xb0/0x860
>> [ 110.021517] __handle_mm_fault+0x994/0x1338
>> [ 110.022606] handle_mm_fault+0xe8/0x180
>> [ 110.023584] do_page_fault+0x240/0x690
>> [ 110.024535] do_mem_abort+0x50/0xb0
>> [ 110.025423] el0_da+0x20/0x24
>>
>> The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
>> [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3
>>
>> As Catalin explained: "On arm64 without hardware Access Flag, copying from
>> user will fail because the pte is old and cannot be marked young. So we
>> always end up with a zeroed page after fork() + CoW for pfn mappings. We
>> don't always have a hardware-managed access flag on arm64."
>>
>> This patch fixes it by calling pte_mkyoung(). Also, the parameters are
>> changed because vmf should be passed to cow_user_page().
>>
>> [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
>>
>> Reported-by: Yibo Cai <[email protected]>
>> Signed-off-by: Jia He <[email protected]>
>> ---
>> mm/memory.c | 35 ++++++++++++++++++++++++++++++-----
>> 1 file changed, 30 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index e2bb51b6242e..d2c130a5883b 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
>> 2;
>> #endif
>>
>> +#ifndef arch_faults_on_old_pte
>> +static inline bool arch_faults_on_old_pte(void)
>> +{
>> + return false;
>> +}
>> +#endif
>> +
>> static int __init disable_randmaps(char *s)
>> {
>> randomize_va_space = 0;
>> @@ -2140,8 +2147,12 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
>> return same;
>> }
>>
>> -static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
>> +static inline void cow_user_page(struct page *dst, struct page *src,
>> + struct vm_fault *vmf)
>> {
>> + struct vm_area_struct *vma = vmf->vma;
>> + unsigned long addr = vmf->address;
>> +
>> debug_dma_assert_idle(src);
>>
>> /*
>> @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
>> */
>> if (unlikely(!src)) {
>> void *kaddr = kmap_atomic(dst);
>> - void __user *uaddr = (void __user *)(va & PAGE_MASK);
>> + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
>> + pte_t entry;
>>
>> /*
>> * This really shouldn't fail, because the page is there
>> * in the page tables. But it might just be unreadable,
>> * in which case we just give up and fill the result with
>> - * zeroes.
>> + * zeroes. On architectures with software "accessed" bits,
>> + * we would take a double page fault here, so mark it
>> + * accessed here.
>> */
>> + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
>> + spin_lock(vmf->ptl);
>> + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
>> + entry = pte_mkyoung(vmf->orig_pte);
>> + if (ptep_set_access_flags(vma, addr,
>> + vmf->pte, entry, 0))
>> + update_mmu_cache(vma, addr, vmf->pte);
>> + }
> I don't follow.
>
> So if the pte has changed under you, you don't set the accessed bit, but
> nevertheless copy from the user.
>
> What makes you think it will not trigger the same problem?
>
> I think we need to make cow_user_page() fail in this case and the caller --
> wp_page_copy() -- return zero. If the fault was resolved by another thread,
> we are fine. If not, userspace will re-fault on the same address and we will
> handle the fault on the second attempt.

Thanks for the pointer. How about making cow_user_page() return
VM_FAULT_RETRY? Then do_page_fault() can retry the page fault?

---
Cheers,
Justin (Jia He)

>
>> + spin_unlock(vmf->ptl);
>> + }
>> +
>> if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
>> clear_page(kaddr);
>> kunmap_atomic(kaddr);
>> flush_dcache_page(dst);
>> } else
>> - copy_user_highpage(dst, src, va, vma);
>> + copy_user_highpage(dst, src, addr, vma);
>> }
>>
>> static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
>> @@ -2318,7 +2343,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
>> vmf->address);
>> if (!new_page)
>> goto oom;
>> - cow_user_page(new_page, old_page, vmf->address, vma);
>> + cow_user_page(new_page, old_page, vmf);
>> }
>>
>> if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
>> --
>> 2.17.1
>>
>>
--

2019-09-19 17:23:57

by Kirill A. Shutemov

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

On Wed, Sep 18, 2019 at 07:00:30PM +0100, Catalin Marinas wrote:
> On Wed, Sep 18, 2019 at 05:00:27PM +0300, Kirill A. Shutemov wrote:
> > On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > > */
> > > if (unlikely(!src)) {
> > > void *kaddr = kmap_atomic(dst);
> > > - void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > > + pte_t entry;
> > >
> > > /*
> > > * This really shouldn't fail, because the page is there
> > > * in the page tables. But it might just be unreadable,
> > > * in which case we just give up and fill the result with
> > > - * zeroes.
> > > + * zeroes. On architectures with software "accessed" bits,
> > > + * we would take a double page fault here, so mark it
> > > + * accessed here.
> > > */
> > > + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > + spin_lock(vmf->ptl);
> > > + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > + entry = pte_mkyoung(vmf->orig_pte);
> > > + if (ptep_set_access_flags(vma, addr,
> > > + vmf->pte, entry, 0))
> > > + update_mmu_cache(vma, addr, vmf->pte);
> > > + }
> >
> > I don't follow.
> >
> > So if the pte has changed under you, you don't set the accessed bit, but
> > nevertheless copy from the user.
> >
> > What makes you think it will not trigger the same problem?
> >
> > I think we need to make cow_user_page() fail in this case and the caller --
> > wp_page_copy() -- return zero. If the fault was resolved by another thread,
> > we are fine. If not, userspace will re-fault on the same address and we will
> > handle the fault on the second attempt.
>
> It would be nice to clarify the semantics of this function and do as
> you suggest but the current comment is slightly confusing:
>
> /*
> * If the source page was a PFN mapping, we don't have
> * a "struct page" for it. We do a best-effort copy by
> * just copying from the original user address. If that
> * fails, we just zero-fill it. Live with it.
> */
>
> Would any user-space rely on getting a zero-filled page here instead of
> a recursive fault?

I don't see the point of a zero-filled page in this case. SIGBUS sounds like
a more appropriate response, no?

--
Kirill A. Shutemov

2019-09-19 18:11:50

by Catalin Marinas

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

On Thu, Sep 19, 2019 at 06:00:07PM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 18, 2019 at 07:00:30PM +0100, Catalin Marinas wrote:
> > On Wed, Sep 18, 2019 at 05:00:27PM +0300, Kirill A. Shutemov wrote:
> > > On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > > > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > > > */
> > > > if (unlikely(!src)) {
> > > > void *kaddr = kmap_atomic(dst);
> > > > - void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > > + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > > > + pte_t entry;
> > > >
> > > > /*
> > > > * This really shouldn't fail, because the page is there
> > > > * in the page tables. But it might just be unreadable,
> > > > * in which case we just give up and fill the result with
> > > > - * zeroes.
> > > > + * zeroes. On architectures with software "accessed" bits,
> > > > + * we would take a double page fault here, so mark it
> > > > + * accessed here.
> > > > */
> > > > + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > > + spin_lock(vmf->ptl);
> > > > + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > > + entry = pte_mkyoung(vmf->orig_pte);
> > > > + if (ptep_set_access_flags(vma, addr,
> > > > + vmf->pte, entry, 0))
> > > > + update_mmu_cache(vma, addr, vmf->pte);
> > > > + }
> > >
> > > I don't follow.
> > >
> > > So if the pte has changed under you, you don't set the accessed bit, but
> > > nevertheless copy from the user.
> > >
> > > What makes you think it will not trigger the same problem?
> > >
> > > I think we need to make cow_user_page() fail in this case and the caller --
> > > wp_page_copy() -- return zero. If the fault was resolved by another thread,
> > > we are fine. If not, userspace will re-fault on the same address and we will
> > > handle the fault on the second attempt.
> >
> > It would be nice to clarify the semantics of this function and do as
> > you suggest but the current comment is slightly confusing:
> >
> > /*
> > * If the source page was a PFN mapping, we don't have
> > * a "struct page" for it. We do a best-effort copy by
> > * just copying from the original user address. If that
> > * fails, we just zero-fill it. Live with it.
> > */
> >
> > Would any user-space rely on getting a zero-filled page here instead of
> > a recursive fault?
>
> I don't see the point of a zero-filled page in this case. SIGBUS sounds like
> a more appropriate response, no?

I think I misunderstood your comment. So, if !pte_same(), we should let
userspace re-fault. This wouldn't be a user ABI change, and it is
bounded; it can't end up in an infinite re-fault loop.

In the case of a __copy_from_user_inatomic() error, SIGBUS would make more
sense, but it changes the current behaviour (zero-filling the page). This
can be left for a separate patch; it doesn't affect the arm64 case here.
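
For concreteness, the deferred change might look roughly like this
(hypothetical: it assumes cow_user_page() were first refactored to return a
vm_fault_t that wp_page_copy() propagates):

	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
		/* Report the failure instead of zero-filling the page. */
		kunmap_atomic(kaddr);
		return VM_FAULT_SIGBUS;
	}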

--
Catalin

2019-09-19 18:21:19

by Kirill A. Shutemov

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

On Thu, Sep 19, 2019 at 03:41:43PM +0000, Catalin Marinas wrote:
> On Thu, Sep 19, 2019 at 06:00:07PM +0300, Kirill A. Shutemov wrote:
> > On Wed, Sep 18, 2019 at 07:00:30PM +0100, Catalin Marinas wrote:
> > > On Wed, Sep 18, 2019 at 05:00:27PM +0300, Kirill A. Shutemov wrote:
> > > > On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > > > > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > > > > */
> > > > > if (unlikely(!src)) {
> > > > > void *kaddr = kmap_atomic(dst);
> > > > > - void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > > > + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > > > > + pte_t entry;
> > > > >
> > > > > /*
> > > > > * This really shouldn't fail, because the page is there
> > > > > * in the page tables. But it might just be unreadable,
> > > > > * in which case we just give up and fill the result with
> > > > > - * zeroes.
> > > > > + * zeroes. On architectures with software "accessed" bits,
> > > > > + * we would take a double page fault here, so mark it
> > > > > + * accessed here.
> > > > > */
> > > > > + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > > > + spin_lock(vmf->ptl);
> > > > > + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > > > + entry = pte_mkyoung(vmf->orig_pte);
> > > > > + if (ptep_set_access_flags(vma, addr,
> > > > > + vmf->pte, entry, 0))
> > > > > + update_mmu_cache(vma, addr, vmf->pte);
> > > > > + }
> > > >
> > > > I don't follow.
> > > >
> > > > So if the pte has changed under you, you don't set the accessed bit, but
> > > > nevertheless copy from the user.
> > > >
> > > > What makes you think it will not trigger the same problem?
> > > >
> > > > I think we need to make cow_user_page() fail in this case and the caller --
> > > > wp_page_copy() -- return zero. If the fault was resolved by another thread,
> > > > we are fine. If not, userspace will re-fault on the same address and we will
> > > > handle the fault on the second attempt.
> > >
> > > It would be nice to clarify the semantics of this function and do as
> > > you suggest but the current comment is slightly confusing:
> > >
> > > /*
> > > * If the source page was a PFN mapping, we don't have
> > > * a "struct page" for it. We do a best-effort copy by
> > > * just copying from the original user address. If that
> > > * fails, we just zero-fill it. Live with it.
> > > */
> > >
> > > Would any user-space rely on getting a zero-filled page here instead of
> > > a recursive fault?
> >
> > I don't see the point of a zero-filled page in this case. SIGBUS sounds like
> > a more appropriate response, no?
>
> I think I misunderstood your comment. So, if !pte_same(), we should let
> userspace re-fault. This wouldn't be a user ABI change, and it is
> bounded; it can't end up in an infinite re-fault loop.

Right. !pte_same() can only happen if we raced with somebody else.
I cannot imagine a situation where the race would happen more than a few
times in a row.

> In the case of a __copy_from_user_inatomic() error, SIGBUS would make more
> sense, but it changes the current behaviour (zero-filling the page). This
> can be left for a separate patch; it doesn't affect the arm64 case here.

I think it's safer to leave it as is. Maybe add a WARN_ON_ONCE() or
something. There could be some obscure use case that I don't see.
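
I.e. something like this (a sketch of the suggestion, keeping the zero-fill):

	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
		/* Keep the legacy zero-fill, but make the failure visible. */
		WARN_ON_ONCE(1);
		clear_page(kaddr);
	}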

--
Kirill A. Shutemov

2019-09-19 20:49:59

by Kirill A. Shutemov

Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

On Thu, Sep 19, 2019 at 10:16:34AM +0800, Jia He wrote:
> Hi Kirill
>
> [On behalf of [email protected] because some mails are filtered...]
>
> On 2019/9/18 22:00, Kirill A. Shutemov wrote:
> > On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > > When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64 guest,
> > > there was a double page fault in __copy_from_user_inatomic() of cow_user_page().
> > >
> > > The call trace below is from arm64 do_page_fault, for debugging purposes:
> > > [ 110.016195] Call trace:
> > > [ 110.016826] do_page_fault+0x5a4/0x690
> > > [ 110.017812] do_mem_abort+0x50/0xb0
> > > [ 110.018726] el1_da+0x20/0xc4
> > > [ 110.019492] __arch_copy_from_user+0x180/0x280
> > > [ 110.020646] do_wp_page+0xb0/0x860
> > > [ 110.021517] __handle_mm_fault+0x994/0x1338
> > > [ 110.022606] handle_mm_fault+0xe8/0x180
> > > [ 110.023584] do_page_fault+0x240/0x690
> > > [ 110.024535] do_mem_abort+0x50/0xb0
> > > [ 110.025423] el0_da+0x20/0x24
> > >
> > > The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
> > > [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3
> > >
> > > As Catalin explained: "On arm64 without hardware Access Flag, copying from
> > > user will fail because the pte is old and cannot be marked young. So we
> > > always end up with a zeroed page after fork() + CoW for pfn mappings. We
> > > don't always have a hardware-managed access flag on arm64."
> > >
> > > This patch fixes it by calling pte_mkyoung(). Also, the parameters are
> > > changed because vmf should be passed to cow_user_page().
> > >
> > > [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
> > >
> > > Reported-by: Yibo Cai <[email protected]>
> > > Signed-off-by: Jia He <[email protected]>
> > > ---
> > > mm/memory.c | 35 ++++++++++++++++++++++++++++++-----
> > > 1 file changed, 30 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index e2bb51b6242e..d2c130a5883b 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
> > > 2;
> > > #endif
> > > +#ifndef arch_faults_on_old_pte
> > > +static inline bool arch_faults_on_old_pte(void)
> > > +{
> > > + return false;
> > > +}
> > > +#endif
> > > +
> > > static int __init disable_randmaps(char *s)
> > > {
> > > randomize_va_space = 0;
> > > @@ -2140,8 +2147,12 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> > > return same;
> > > }
> > > -static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> > > +static inline void cow_user_page(struct page *dst, struct page *src,
> > > + struct vm_fault *vmf)
> > > {
> > > + struct vm_area_struct *vma = vmf->vma;
> > > + unsigned long addr = vmf->address;
> > > +
> > > debug_dma_assert_idle(src);
> > > /*
> > > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > > */
> > > if (unlikely(!src)) {
> > > void *kaddr = kmap_atomic(dst);
> > > - void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > > + pte_t entry;
> > > /*
> > > * This really shouldn't fail, because the page is there
> > > * in the page tables. But it might just be unreadable,
> > > * in which case we just give up and fill the result with
> > > - * zeroes.
> > > + * zeroes. On architectures with software "accessed" bits,
> > > + * we would take a double page fault here, so mark it
> > > + * accessed here.
> > > */
> > > + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > + spin_lock(vmf->ptl);
> > > + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > + entry = pte_mkyoung(vmf->orig_pte);
> > > + if (ptep_set_access_flags(vma, addr,
> > > + vmf->pte, entry, 0))
> > > + update_mmu_cache(vma, addr, vmf->pte);
> > > + }
> > I don't follow.
> >
> > So if the pte has changed under you, you don't set the accessed bit, but
> > nevertheless copy from the user.
> >
> > What makes you think it will not trigger the same problem?
> >
> > I think we need to make cow_user_page() fail in this case and the caller --
> > wp_page_copy() -- return zero. If the fault was resolved by another thread,
> > we are fine. If not, userspace will re-fault on the same address and we will
> > handle the fault on the second attempt.
>
> Thanks for the pointer. How about making cow_user_page() return
> VM_FAULT_RETRY? Then do_page_fault() can retry the page fault?

No. VM_FAULT_RETRY has different semantics: we have to drop mmap_sem, take
it again, and handle the fault from scratch. In this case the more likely
scenario is that the other thread has already handled the fault and we
don't need to do anything. If that's not the case, the fault will be
triggered again on the same address.

--
Kirill A. Shutemov

2019-09-19 21:31:11

by Justin He

Subject: RE: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared

Hi Kirill
Thanks for the detailed explanation.

--
Cheers,
Justin (Jia He)



> -----Original Message-----
> From: Kirill A. Shutemov <[email protected]>
> Sent: September 19, 2019 22:58
> To: Jia He <[email protected]>
> Cc: Justin He (Arm Technology China) <[email protected]>; Catalin
> Marinas <[email protected]>; Will Deacon <[email protected]>; Mark
> Rutland <[email protected]>; James Morse
> <[email protected]>; Marc Zyngier <[email protected]>; Matthew
> Wilcox <[email protected]>; Kirill A. Shutemov
> <[email protected]>; [email protected];
> [email protected]; [email protected]; Suzuki Poulose
> <[email protected]>; Punit Agrawal <[email protected]>;
> Anshuman Khandual <[email protected]>; Jun Yao
> <[email protected]>; Alex Van Brunt <[email protected]>;
> Robin Murphy <[email protected]>; Thomas Gleixner
> <[email protected]>; Andrew Morton <[email protected]>;
> Jérôme Glisse <[email protected]>; Ralph Campbell
> <[email protected]>; Kaly Xin (Arm Technology China)
> <[email protected]>
> Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is
> cleared
>
> On Thu, Sep 19, 2019 at 10:16:34AM +0800, Jia He wrote:
> > Hi Kirill
> >
> > [On behalf of [email protected] because some mails are filtered...]
> >
> > On 2019/9/18 22:00, Kirill A. Shutemov wrote:
> > > On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > > > When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64
> > > > guest, there was a double page fault in __copy_from_user_inatomic() of
> > > > cow_user_page().
> > > >
> > > > The call trace below is from arm64 do_page_fault, for debugging purposes:
> > > > [ 110.016195] Call trace:
> > > > [ 110.016826] do_page_fault+0x5a4/0x690
> > > > [ 110.017812] do_mem_abort+0x50/0xb0
> > > > [ 110.018726] el1_da+0x20/0xc4
> > > > [ 110.019492] __arch_copy_from_user+0x180/0x280
> > > > [ 110.020646] do_wp_page+0xb0/0x860
> > > > [ 110.021517] __handle_mm_fault+0x994/0x1338
> > > > [ 110.022606] handle_mm_fault+0xe8/0x180
> > > > [ 110.023584] do_page_fault+0x240/0x690
> > > > [ 110.024535] do_mem_abort+0x50/0xb0
> > > > [ 110.025423] el0_da+0x20/0x24
> > > >
> > > > The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
> > > > [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3
> > > >
> > > > As Catalin explained: "On arm64 without hardware Access Flag, copying
> > > > from user will fail because the pte is old and cannot be marked young.
> > > > So we always end up with a zeroed page after fork() + CoW for pfn
> > > > mappings. We don't always have a hardware-managed access flag on
> > > > arm64."
> > > >
> > > > This patch fixes it by calling pte_mkyoung(). Also, the parameters are
> > > > changed because vmf should be passed to cow_user_page().
> > > >
> > > > [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
> > > >
> > > > Reported-by: Yibo Cai <[email protected]>
> > > > Signed-off-by: Jia He <[email protected]>
> > > > ---
> > > > mm/memory.c | 35 ++++++++++++++++++++++++++++++-----
> > > > 1 file changed, 30 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/mm/memory.c b/mm/memory.c
> > > > index e2bb51b6242e..d2c130a5883b 100644
> > > > --- a/mm/memory.c
> > > > +++ b/mm/memory.c
> > > > @@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
> > > > 2;
> > > > #endif
> > > > +#ifndef arch_faults_on_old_pte
> > > > +static inline bool arch_faults_on_old_pte(void)
> > > > +{
> > > > + return false;
> > > > +}
> > > > +#endif
> > > > +
> > > > static int __init disable_randmaps(char *s)
> > > > {
> > > > randomize_va_space = 0;
> > > > @@ -2140,8 +2147,12 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> > > > return same;
> > > > }
> > > > -static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> > > > +static inline void cow_user_page(struct page *dst, struct page *src,
> > > > + struct vm_fault *vmf)
> > > > {
> > > > + struct vm_area_struct *vma = vmf->vma;
> > > > + unsigned long addr = vmf->address;
> > > > +
> > > > debug_dma_assert_idle(src);
> > > > /*
> > > > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > > > */
> > > > if (unlikely(!src)) {
> > > > void *kaddr = kmap_atomic(dst);
> > > > - void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > > + void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > > > + pte_t entry;
> > > > /*
> > > > * This really shouldn't fail, because the page is there
> > > > * in the page tables. But it might just be unreadable,
> > > > * in which case we just give up and fill the result with
> > > > - * zeroes.
> > > > + * zeroes. On architectures with software "accessed" bits,
> > > > + * we would take a double page fault here, so mark it
> > > > + * accessed here.
> > > > */
> > > > + if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > > + spin_lock(vmf->ptl);
> > > > + if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > > + entry = pte_mkyoung(vmf->orig_pte);
> > > > + if (ptep_set_access_flags(vma, addr,
> > > > + vmf->pte, entry, 0))
> > > > + update_mmu_cache(vma, addr, vmf->pte);
> > > > + }
> > > I don't follow.
> > >
> > > So if the pte has changed under you, you don't set the accessed bit, but
> > > nevertheless copy from the user.
> > >
> > > What makes you think it will not trigger the same problem?
> > >
> > > I think we need to make cow_user_page() fail in this case and the caller --
> > > wp_page_copy() -- return zero. If the fault was resolved by another thread,
> > > we are fine. If not, userspace will re-fault on the same address and we will
> > > handle the fault on the second attempt.
> >
> > Thanks for the pointer. How about making cow_user_page() return
> > VM_FAULT_RETRY? Then do_page_fault() can retry the page fault?
>
> No. VM_FAULT_RETRY has different semantics: we have to drop mmap_sem, take
> it again, and handle the fault from scratch. In this case the more likely
> scenario is that the other thread has already handled the fault and we
> don't need to do anything. If that's not the case, the fault will be
> triggered again on the same address.
>
> --
> Kirill A. Shutemov