2023-05-24 13:29:56

by Jisheng Zhang

Subject: [PATCH] arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block

When reading arm64's PER_VMA_LOCK support code, I noticed a
difference between arm64 and the other architectures in how
handle_mm_fault() is called during VMA lock-based page fault handling:
the fault address is masked before being passed to handle_mm_fault().
This also differs from the mmap_lock-based handling. We should pass
the original fault address to handle_mm_fault(), as was done in
commit 84c5e23edecd ("arm64: mm: Pass original fault address to
handle_mm_fault()").

Following the code path further, the masked fault address causes a
mismatch between the address recorded by the perf sw major/minor page
fault events and the one recorded by the perf sw page fault event:

do_page_fault
  perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
  handle_mm_fault
    mm_account_fault
      perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
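
To make the mismatch concrete, below is a minimal user-space sketch
(not kernel code; the 4 KiB PAGE_SHIFT and the sample address are
assumptions for illustration) of the sub-page offset that the masking
drops:

  #include <stdio.h>

  #define PAGE_SHIFT 12                          /* assumed 4 KiB pages */
  #define PAGE_MASK  (~((1UL << PAGE_SHIFT) - 1))

  int main(void)
  {
          unsigned long addr = 0xffff80001234UL; /* hypothetical fault address */

          /* what PERF_COUNT_SW_PAGE_FAULTS records */
          printf("orig addr:   0x%lx\n", addr);
          /* what PERF_COUNT_SW_PAGE_FAULTS_MAJ/MIN recorded with the mask */
          printf("masked addr: 0x%lx\n", addr & PAGE_MASK);
          return 0;
  }

With the mask removed, both events record the same address.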

Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
Signed-off-by: Jisheng Zhang <[email protected]>
---
arch/arm64/mm/fault.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index cb21ccd7940d..6045a5117ac1 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		vma_end_read(vma);
 		goto lock_mmap;
 	}
-	fault = handle_mm_fault(vma, addr & PAGE_MASK,
-				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
+	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
--
2.40.1



2023-05-24 14:08:43

by Jisheng Zhang

Subject: Re: [PATCH] arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block

On Wed, May 24, 2023 at 09:12:38PM +0800, Jisheng Zhang wrote:
> When reading arm64's PER_VMA_LOCK support code, I noticed a
> difference between arm64 and the other architectures in how
> handle_mm_fault() is called during VMA lock-based page fault handling:
> the fault address is masked before being passed to handle_mm_fault().
> This also differs from the mmap_lock-based handling. We should pass
> the original fault address to handle_mm_fault(), as was done in
> commit 84c5e23edecd ("arm64: mm: Pass original fault address to
> handle_mm_fault()").
>
> Following the code path further, the masked fault address causes a
> mismatch between the address recorded by the perf sw major/minor page
> fault events and the one recorded by the perf sw page fault event:

Oops, sorry, please ignore this one. I pressed Ctrl-C to interrupt
git send-email, but the mail was sent out anyway ;)

Instead, let's focus on
https://lore.kernel.org/linux-arm-kernel/[email protected]/T/#u

The two patches are identical; I just added Suren to the CC list.

>
> do_page_fault
>   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
>   handle_mm_fault
>     mm_account_fault
>       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
>
> Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> arch/arm64/mm/fault.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index cb21ccd7940d..6045a5117ac1 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  		vma_end_read(vma);
>  		goto lock_mmap;
>  	}
> -	fault = handle_mm_fault(vma, addr & PAGE_MASK,
> -				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> +	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>  	vma_end_read(vma);
> 
>  	if (!(fault & VM_FAULT_RETRY)) {
> --
> 2.40.1
>
>

2023-05-24 14:58:55

by Suren Baghdasaryan

Subject: Re: [PATCH] arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block

On Wed, May 24, 2023 at 6:38 AM Jisheng Zhang <[email protected]> wrote:
>
> On Wed, May 24, 2023 at 09:12:38PM +0800, Jisheng Zhang wrote:
> > When reading arm64's PER_VMA_LOCK support code, I noticed a
> > difference between arm64 and the other architectures in how
> > handle_mm_fault() is called during VMA lock-based page fault handling:
> > the fault address is masked before being passed to handle_mm_fault().
> > This also differs from the mmap_lock-based handling. We should pass
> > the original fault address to handle_mm_fault(), as was done in
> > commit 84c5e23edecd ("arm64: mm: Pass original fault address to
> > handle_mm_fault()").

Thanks for noticing. I'm not sure how this masking leaked into my
patch; I don't think I wrote it before 84c5e23edecd was merged in
June 2021. Anyway, your assessment looks correct to me.

> >
> > Following the code path further, the masked fault address causes a
> > mismatch between the address recorded by the perf sw major/minor page
> > fault events and the one recorded by the perf sw page fault event:
>
> Oops, sorry, please ignore this one. I pressed Ctrl-C to interrupt
> git send-email, but the mail was sent out anyway ;)
>
> Instead, let's focus on
> https://lore.kernel.org/linux-arm-kernel/[email protected]/T/#u
>
> The two patches are identical; I just added Suren to the CC list.
>
> >
> > do_page_fault
> >   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
> >   handle_mm_fault
> >     mm_account_fault
> >       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
> >
> > Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> > Signed-off-by: Jisheng Zhang <[email protected]>

Reviewed-by: Suren Baghdasaryan <[email protected]>

> > ---
> > arch/arm64/mm/fault.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index cb21ccd7940d..6045a5117ac1 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
> >  		vma_end_read(vma);
> >  		goto lock_mmap;
> >  	}
> > -	fault = handle_mm_fault(vma, addr & PAGE_MASK,
> > -				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> > +	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> >  	vma_end_read(vma);
> >
> >  	if (!(fault & VM_FAULT_RETRY)) {
> > --
> > 2.40.1
> >
> >

2023-05-25 07:13:13

by Anshuman Khandual

Subject: Re: [PATCH] arm64: mm: pass original fault address to handle_mm_fault() in PER_VMA_LOCK block



On 5/24/23 18:42, Jisheng Zhang wrote:
> When reading arm64's PER_VMA_LOCK support code, I noticed a
> difference between arm64 and the other architectures in how
> handle_mm_fault() is called during VMA lock-based page fault handling:
> the fault address is masked before being passed to handle_mm_fault().
> This also differs from the mmap_lock-based handling. We should pass
> the original fault address to handle_mm_fault(), as was done in
> commit 84c5e23edecd ("arm64: mm: Pass original fault address to
> handle_mm_fault()").
>
> Following the code path further, the masked fault address causes a
> mismatch between the address recorded by the perf sw major/minor page
> fault events and the one recorded by the perf sw page fault event:
>
> do_page_fault
>   perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr)   // orig addr
>   handle_mm_fault
>     mm_account_fault
>       perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr
>
> Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
> Signed-off-by: Jisheng Zhang <[email protected]>

LGTM

Reviewed-by: Anshuman Khandual <[email protected]>

> ---
> arch/arm64/mm/fault.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index cb21ccd7940d..6045a5117ac1 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  		vma_end_read(vma);
>  		goto lock_mmap;
>  	}
> -	fault = handle_mm_fault(vma, addr & PAGE_MASK,
> -				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
> +	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>  	vma_end_read(vma);
> 
>  	if (!(fault & VM_FAULT_RETRY)) {