This patchset cleans up the ARM page fault handling to improve readability,
fixes the page table entry printing, and fixes an infinite loop in the
page fault handler when user code is executed in privileged mode with
ARM_LPAE enabled. The issue can be reproduced with:
echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT
Before:
-------
lkdtm: Performing direct entry EXEC_USERSPACE
lkdtm: attempting ok execution at c0717674
lkdtm: attempting bad execution at b6fd6000
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 1-....: (2100 ticks this GP) idle=7e2/1/0x40000002 softirq=136/136 fqs=1050
(t=2101 jiffies g=-1027 q=16)
NMI backtrace for cpu 1
CPU: 1 PID: 57 Comm: sh Not tainted 5.13.0-rc4 #126
...
r9:c1f04000 r8:c0e04cc8 r7:c1f05cbc r6:ffffffff r5:60000113 r4:c03724f8
[<c03722e0>] (handle_mm_fault) from [<c02152f4>] (do_page_fault+0x1a0/0x3d8)
r10:c180ec48 r9:c11b1aa0 r8:c11b1ac0 r7:8000020f r6:b6fd6000 r5:c180ec00
r4:c1f05df8
[<c0215154>] (do_page_fault) from [<c02157cc>] (do_PrefetchAbort+0x40/0x94)
r10:0000000f r9:c1f04000 r8:c1f05df8 r7:b6fd6000 r6:c0215154 r5:0000020f
r4:c0e09b18
[<c021578c>] (do_PrefetchAbort) from [<c0200c50>] (__pabt_svc+0x50/0x80)
Exception stack(0xc1f05df8 to 0xc1f05e40)
5de0: 0000002b 2e34f000
5e00: 3ee77213 3ee77213 b6fd6000 c0b51020 c140d000 c0a4b5dc 0000000f c1f05f58
5e20: 0000000f c1f05e64 c1f05d88 c1f05e48 c0717a6c b6fd6000 60000013 ffffffff
r8:0000000f r7:c1f05e2c r6:ffffffff r5:60000013 r4:b6fd6000
[<c07179a8>] (lkdtm_EXEC_USERSPACE) from [<c09a51b8>] (lkdtm_do_action+0x48/0x4c)
r4:00000027
...
After:
-------
lkdtm: Performing direct entry EXEC_USERSPACE
lkdtm: attempting ok execution at c07176d4
lkdtm: attempting bad execution at b6f57000
8<--- cut here ---
Unable to handle kernel execution of memory at virtual address b6f57000
pgd = 81e20f00
[b6f57000] *pgd=81e23003, *pmd=13ee9c003
Internal error: Oops: 8000020f [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 57 Comm: sh Not tainted 5.13.0-rc4+ #127
Hardware name: ARM-Versatile Express
PC is at 0xb6f57000
LR is at lkdtm_EXEC_USERSPACE+0xc4/0xd4
pc : [<b6f57000>] lr : [<c0717acc>] psr: 60000013
sp : c1f3de48 ip : c1f3dd88 fp : c1f3de64
r10: 0000000f r9 : c1f3df58 r8 : 0000000f
r7 : c0a4b5dc r6 : c1f1d000 r5 : c0b51070 r4 :b6f57000
r3 : 7e62f7da r2 : 7e62f7da r1 : 2e330000 r0 :0000002b
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
...
v3:
- drop the fix for page table printing
- kill the page table base print instead of printing the physical address
- only die on a permission fault for both kernel-mode accesses and user
code executed in privileged mode
- drop the LPAE-specific handling
v2:
- split the patch into smaller changes, as suggested by Russell
- fix the page table printing in show_pte()
- add a new die_kernel_fault() helper
- report "execution of user memory" when user code is executed in
privileged mode
Kefeng Wang (6):
ARM: mm: Refactor the __do_page_fault()
ARM: mm: Kill task_struct argument for __do_page_fault()
ARM: mm: Cleanup access_error()
ARM: mm: Kill page table base print in show_pte()
ARM: mm: Provide die_kernel_fault() helper
ARM: mm: Fix PXN process with LPAE feature
arch/arm/mm/fault.c | 119 +++++++++++++++++++++++---------------------
arch/arm/mm/fault.h | 4 ++
2 files changed, 67 insertions(+), 56 deletions(-)
--
2.26.2
Clean up the multiple goto statements and drop the local variable
vm_fault_t fault, which makes __do_page_fault() much more readable.
No functional change.
Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/mm/fault.c | 34 +++++++++++++---------------------
1 file changed, 13 insertions(+), 21 deletions(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index efa402025031..662ac3ca3c8a 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -205,35 +205,27 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
unsigned int flags, struct task_struct *tsk,
struct pt_regs *regs)
{
- struct vm_area_struct *vma;
- vm_fault_t fault;
-
- vma = find_vma(mm, addr);
- fault = VM_FAULT_BADMAP;
+ struct vm_area_struct *vma = find_vma(mm, addr);
if (unlikely(!vma))
- goto out;
- if (unlikely(vma->vm_start > addr))
- goto check_stack;
+ return VM_FAULT_BADMAP;
+
+ if (unlikely(vma->vm_start > addr)) {
+ if (!(vma->vm_flags & VM_GROWSDOWN))
+ return VM_FAULT_BADMAP;
+ if (addr < FIRST_USER_ADDRESS)
+ return VM_FAULT_BADMAP;
+ if (expand_stack(vma, addr))
+ return VM_FAULT_BADMAP;
+ }
/*
* Ok, we have a good vm_area for this
* memory access, so we can handle it.
*/
-good_area:
- if (access_error(fsr, vma)) {
- fault = VM_FAULT_BADACCESS;
- goto out;
- }
+ if (access_error(fsr, vma))
+ return VM_FAULT_BADACCESS;
return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
-
-check_stack:
- /* Don't allow expansion below FIRST_USER_ADDRESS */
- if (vma->vm_flags & VM_GROWSDOWN &&
- addr >= FIRST_USER_ADDRESS && !expand_stack(vma, addr))
- goto good_area;
-out:
- return fault;
}
static int __kprobes
--
2.26.2
Provide a die_kernel_fault() helper to do the kernel fault reporting.
With its msg argument, it can report different messages in different
scenarios; the later patch "ARM: mm: Fix PXN process with LPAE feature"
will use it.
Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/mm/fault.c | 30 +++++++++++++++++++++---------
1 file changed, 21 insertions(+), 9 deletions(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 76aced067b12..82bcfe57de20 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -99,6 +99,21 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr)
{ }
#endif /* CONFIG_MMU */
+static void die_kernel_fault(const char *msg, struct mm_struct *mm,
+ unsigned long addr, unsigned int fsr,
+ struct pt_regs *regs)
+{
+ bust_spinlocks(1);
+ pr_alert("8<--- cut here ---\n");
+ pr_alert("Unable to handle kernel %s at virtual address %08lx\n",
+ msg, addr);
+
+ show_pte(KERN_ALERT, mm, addr);
+ die("Oops", regs, fsr);
+ bust_spinlocks(0);
+ do_exit(SIGKILL);
+}
+
/*
* Oops. The kernel tried to access some page that wasn't present.
*/
@@ -106,6 +121,7 @@ static void
__do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
struct pt_regs *regs)
{
+ const char *msg;
/*
* Are we prepared to handle this kernel fault?
*/
@@ -115,16 +131,12 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
/*
* No handler, we'll have to terminate things with extreme prejudice.
*/
- bust_spinlocks(1);
- pr_alert("8<--- cut here ---\n");
- pr_alert("Unable to handle kernel %s at virtual address %08lx\n",
- (addr < PAGE_SIZE) ? "NULL pointer dereference" :
- "paging request", addr);
+ if (addr < PAGE_SIZE)
+ msg = "NULL pointer dereference";
+ else
+ msg = "paging request";
- show_pte(KERN_ALERT, mm, addr);
- die("Oops", regs, fsr);
- bust_spinlocks(0);
- do_exit(SIGKILL);
+ die_kernel_fault(msg, mm, addr, fsr, regs);
}
/*
--
2.26.2
Currently show_pte() dumps the virtual (hashed) address of the page
table base, which is useless, so kill it.
Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/mm/fault.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 9a6d74f6ea1d..76aced067b12 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -37,7 +37,6 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr)
if (!mm)
mm = &init_mm;
- printk("%spgd = %p\n", lvl, mm->pgd);
pgd = pgd_offset(mm, addr);
printk("%s[%08lx] *pgd=%08llx", lvl, addr, (long long)pgd_val(*pgd));
--
2.26.2
__do_page_fault() does not use its task_struct argument, so kill it,
and use current->mm directly in do_page_fault().
No functional change.
Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/mm/fault.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 662ac3ca3c8a..249db395bdf0 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -202,8 +202,7 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
static vm_fault_t __kprobes
__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
- unsigned int flags, struct task_struct *tsk,
- struct pt_regs *regs)
+ unsigned int flags, struct pt_regs *regs)
{
struct vm_area_struct *vma = find_vma(mm, addr);
if (unlikely(!vma))
@@ -231,8 +230,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
static int __kprobes
do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
- struct task_struct *tsk;
- struct mm_struct *mm;
+ struct mm_struct *mm = current->mm;
int sig, code;
vm_fault_t fault;
unsigned int flags = FAULT_FLAG_DEFAULT;
@@ -240,8 +238,6 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
if (kprobe_page_fault(regs, fsr))
return 0;
- tsk = current;
- mm = tsk->mm;
/* Enable interrupts if they were enabled in the parent context. */
if (interrupts_enabled(regs))
@@ -285,7 +281,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
#endif
}
- fault = __do_page_fault(mm, addr, fsr, flags, tsk, regs);
+ fault = __do_page_fault(mm, addr, fsr, flags, regs);
/* If we need to retry but a fatal signal is pending, handle the
* signal first. We do not need to release the mmap_lock because
--
2.26.2
Currently the write fault check is done twice, in both do_page_fault()
and access_error(). Clean up access_error() by moving the fault check
and vma flags setup directly into do_page_fault(), then pass the vma
flags to __do_page_fault().
No functional change.
Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/mm/fault.c | 38 ++++++++++++++------------------------
1 file changed, 14 insertions(+), 24 deletions(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 249db395bdf0..9a6d74f6ea1d 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -183,26 +183,9 @@ void do_bad_area(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
#define VM_FAULT_BADMAP 0x010000
#define VM_FAULT_BADACCESS 0x020000
-/*
- * Check that the permissions on the VMA allow for the fault which occurred.
- * If we encountered a write fault, we must have write permission, otherwise
- * we allow any permission.
- */
-static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma)
-{
- unsigned int mask = VM_ACCESS_FLAGS;
-
- if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
- mask = VM_WRITE;
- if (fsr & FSR_LNX_PF)
- mask = VM_EXEC;
-
- return vma->vm_flags & mask ? false : true;
-}
-
static vm_fault_t __kprobes
-__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
- unsigned int flags, struct pt_regs *regs)
+__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int flags,
+ unsigned long vma_flags, struct pt_regs *regs)
{
struct vm_area_struct *vma = find_vma(mm, addr);
if (unlikely(!vma))
@@ -218,10 +201,10 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
}
/*
- * Ok, we have a good vm_area for this
- * memory access, so we can handle it.
+ * ok, we have a good vm_area for this memory access, check the
+ * permissions on the VMA allow for the fault which occurred.
*/
- if (access_error(fsr, vma))
+ if (!(vma->vm_flags & vma_flags))
return VM_FAULT_BADACCESS;
return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
@@ -234,6 +217,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
int sig, code;
vm_fault_t fault;
unsigned int flags = FAULT_FLAG_DEFAULT;
+ unsigned long vm_flags = VM_ACCESS_FLAGS;
if (kprobe_page_fault(regs, fsr))
return 0;
@@ -252,8 +236,14 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
if (user_mode(regs))
flags |= FAULT_FLAG_USER;
- if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
+
+ if ((fsr & FSR_WRITE) && !(fsr & FSR_CM)) {
flags |= FAULT_FLAG_WRITE;
+ vm_flags = VM_WRITE;
+ }
+
+ if (fsr & FSR_LNX_PF)
+ vm_flags = VM_EXEC;
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
@@ -281,7 +271,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
#endif
}
- fault = __do_page_fault(mm, addr, fsr, flags, regs);
+ fault = __do_page_fault(mm, addr, flags, vm_flags, regs);
/* If we need to retry but a fatal signal is pending, handle the
* signal first. We do not need to release the mmap_lock because
--
2.26.2
When user code is executed in privileged mode, it leads to an infinite
loop in the page fault handler if ARM_LPAE is enabled. The issue can be
reproduced with
"echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT"

As the permission fault encodings in the ARM spec show:

IFSR format when using the Short-descriptor translation table format:
  Permission fault: 0b01101 (first level), 0b01111 (second level)
IFSR format when using the Long-descriptor translation table format:
  Permission fault: 0b0011LL (the LL bits indicate the level)

Add an is_permission_fault() function to check for permission faults,
and die if a permission fault occurred on an instruction fault in
do_page_fault().
Fixes: 1d4d37159d01 ("ARM: 8235/1: Support for the PXN CPU feature on ARMv7")
Signed-off-by: Kefeng Wang <[email protected]>
---
arch/arm/mm/fault.c | 20 +++++++++++++++++++-
arch/arm/mm/fault.h | 4 ++++
2 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 82bcfe57de20..bc8779d54a64 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -194,6 +194,19 @@ void do_bad_area(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
#define VM_FAULT_BADMAP 0x010000
#define VM_FAULT_BADACCESS 0x020000
+static inline bool is_permission_fault(unsigned int fsr)
+{
+ int fs = fsr_fs(fsr);
+#ifdef CONFIG_ARM_LPAE
+ if ((fs & FS_PERM_NOLL_MASK) == FS_PERM_NOLL)
+ return true;
+#else
+ if (fs == FS_L1_PERM || fs == FS_L2_PERM)
+ return true;
+#endif
+ return false;
+}
+
static vm_fault_t __kprobes
__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int flags,
unsigned long vma_flags, struct pt_regs *regs)
@@ -253,9 +266,14 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
vm_flags = VM_WRITE;
}
- if (fsr & FSR_LNX_PF)
+ if (fsr & FSR_LNX_PF) {
vm_flags = VM_EXEC;
+ if (is_permission_fault(fsr) && !user_mode(regs))
+ die_kernel_fault("execution of memory",
+ mm, addr, fsr, regs);
+ }
+
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
/*
diff --git a/arch/arm/mm/fault.h b/arch/arm/mm/fault.h
index 9ecc2097a87a..83b5ab32d7a4 100644
--- a/arch/arm/mm/fault.h
+++ b/arch/arm/mm/fault.h
@@ -14,6 +14,8 @@
#ifdef CONFIG_ARM_LPAE
#define FSR_FS_AEA 17
+#define FS_PERM_NOLL 0xC
+#define FS_PERM_NOLL_MASK 0x3C
static inline int fsr_fs(unsigned int fsr)
{
@@ -21,6 +23,8 @@ static inline int fsr_fs(unsigned int fsr)
}
#else
#define FSR_FS_AEA 22
+#define FS_L1_PERM 0xD
+#define FS_L2_PERM 0xF
static inline int fsr_fs(unsigned int fsr)
{
--
2.26.2
On 2021/6/10 20:35, Kefeng Wang wrote:
> This patchset cleans up the ARM page fault handling to improve
> readability, fixes the page table entry printing, and fixes an
> infinite loop in the page fault handler when user code is executed in
> privileged mode with ARM_LPAE enabled.
>
> [...]
Hi Russell, I have fixed the patches according to your comments.
Especially for patch 6, are there any new suggestions? Thanks.
>
> [...]
Hi Russell, a kind ping after 5.14-rc2. Any comments? Thanks.
On 2021/6/10 20:35, Kefeng Wang wrote:
> This patchset cleans up the ARM page fault handling to improve
> readability, fixes the page table entry printing, and fixes an
> infinite loop in the page fault handler when user code is executed in
> privileged mode with ARM_LPAE enabled.
>
> [...]
On 2021/6/10 20:35, Kefeng Wang wrote:
> This patchset cleans up the ARM page fault handling to improve
> readability, fixes the page table entry printing, and fixes an
> infinite loop in the page fault handler when user code is executed in
> privileged mode with ARM_LPAE enabled.
Hi Russell, I have made some changes according to your advice. Could you
give me some comments on this patchset? I want to make sure whether it
can be sent to the ARM patch system. Many thanks.
On 2021/7/31 14:42, Kefeng Wang wrote:
>
> On 2021/6/10 20:35, Kefeng Wang wrote:
>> This patchset cleans up the ARM page fault handling to improve
>> readability, fixes the page table entry printing, and fixes an
>> infinite loop in the page fault handler when user code is executed in
>> privileged mode with ARM_LPAE enabled.
>
> Hi Russell, I have made some changes according to your advice. Could
> you give me some comments on this patchset? I want to make sure
> whether it can be sent to the ARM patch system. Many thanks.
Pinging again...
On 2021/6/10 20:35, Kefeng Wang wrote:
> This patchset cleans up the ARM page fault handling to improve
> readability, fixes the page table entry printing, and fixes an
> infinite loop in the page fault handler when user code is executed in
> privileged mode with ARM_LPAE enabled.
Hi Russell, I have sent v3 (most patches were reviewed by you in v2 [1])
to the ARM patch system since there were no more comments, and this
patchset has been pending for too long without any changes. Looking
forward to your reply, and I hope it can be merged. Many thanks.
[1]
https://lore.kernel.org/linux-arm-kernel/[email protected]/
On Tue, Oct 12, 2021 at 09:41:23AM +0800, Kefeng Wang wrote:
>
>
> On 2021/6/10 20:35, Kefeng Wang wrote:
> > [...]
>
> Hi Russell, I have sent v3 (most patches were reviewed by you in v2
> [1]) to the ARM patch system since there were no more comments, and
> this patchset has been pending for too long without any changes.
> Looking forward to your reply, and I hope it can be merged. Many
> thanks.
I did explicitly ask for the first two patches to be sent to the patch
system during the v2 review as a way to cut down on the amount of work
to review the entire patch set each time a new version is posted. Sadly
that didn't happen, which is demotivating for a reviewer.
Having looked through the v3 patch set, I think I'm happy with it.
Thanks.
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
On 2021/10/15 0:28, Russell King (Oracle) wrote:
> On Tue, Oct 12, 2021 at 09:41:23AM +0800, Kefeng Wang wrote:
>>
>>
>> On 2021/6/10 20:35, Kefeng Wang wrote:
>>> [...]
>>
>> Hi Russell, I have sent v3 (most patches were reviewed by you in v2
>> [1]) to the ARM patch system since there were no more comments, and
>> this patchset has been pending for too long without any changes.
>> Looking forward to your reply, and I hope it can be merged. Many
>> thanks.
>
> I did explicitly ask for the first two patches to be sent to the patch
> system during the v2 review as a way to cut down on the amount of work
> to review the entire patch set each time a new version is posted. Sadly
> that didn't happen, which is demotivating for a reviewer.
Oh, I should have sent the patches you already asked for to the patch
system right away; sorry for the delay.
>
> Having looked through the v3 patch set, I think I'm happy with it.
>
It's great to see that v3 has been accepted, many thanks!
> Thanks.
>