2021-03-09 12:11:01

by Christophe Leroy

Subject: [PATCH v2 00/43] powerpc/32: Switch to interrupt entry/exit in C

This series aims at porting interrupt entry/exit to C on PPC32, building on
the work already merged for PPC64.

The first two patches are a fix and an optimisation of the unrecoverable_exception() function.

The six following patches make minimal changes to 40x in order to be able
to enable the MMU earlier in exception entry.

The second part of the series prepares and switches interrupt exit to C.

The third part moves more and more things to C, ending with KUAP management.

v2 is tested on 8xx, 83xx and QEMU, and compile-tested via kisskb.

Changes in v2:
- The first two patches are new.
- Mainly build fixes, nothing much new.

Christophe Leroy (43):
powerpc/traps: unrecoverable_exception() is not an interrupt handler
powerpc/traps: Declare unrecoverable_exception() as __noreturn
powerpc/40x: Don't use SPRN_SPRG_SCRATCH0/1 in TLB miss handlers
powerpc/40x: Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro
powerpc/40x: Save SRR0/SRR1 and r10/r11 earlier in critical exception
powerpc/40x: Reorder a few instructions in critical exception prolog
powerpc/40x: Prepare for enabling MMU in critical exception prolog
powerpc/40x: Prepare normal exception handler for enabling MMU early
powerpc/32: Reconcile interrupts in C
powerpc/32: Entry cpu time accounting in C
powerpc/32: Handle bookE debugging in C in exception entry
powerpc/32: Use fast instruction to set MSR RI in exception prolog on
8xx
powerpc/32: Remove ksp_limit
powerpc/32: Always enable data translation in exception prolog
powerpc/32: Tag DAR in EXCEPTION_PROLOG_2 for the 8xx
powerpc/32: Enable instruction translation at the same time as data
translation
powerpc/32: Statically initialise first emergency context
powerpc/32: Add vmap_stack_overflow label inside the macro
powerpc/32: Use START_EXCEPTION() as much as possible
powerpc/32: Move exception prolog code into .text once MMU is back on
powerpc/32: Provide a name to exception prolog continuation in virtual
mode
powerpc/32: Refactor booke critical registers saving
powerpc/32: Perform normal function call in exception entry
powerpc/32: Always save non volatile registers on exception entry
powerpc/32: Replace ASM exception exit by C exception exit from ppc64
powerpc/32: Set regs parameter in r3 in transfer_to_handler
powerpc/32: Call bad_page_fault() from do_page_fault()
powerpc/64e: Call bad_page_fault() from do_page_fault()
powerpc/32: Save trap number on stack in exception prolog
powerpc/32: Add a prepare_transfer_to_handler macro for exception
prologs
powerpc/32: Only restore non volatile registers when required
powerpc/32: Dismantle EXC_XFER_STD/LITE/TEMPLATE
powerpc/32: Remove the xfer parameter in EXCEPTION() macro
powerpc/32: Refactor saving of volatile registers in exception prologs
powerpc/32: Save remaining registers in exception prolog
powerpc/32: Set current->thread.regs in C interrupt entry
powerpc/32: Return directly from power_save_ppc32_restore()
powerpc/32: Only use prepare_transfer_to_handler function on book3s/32
and e500
powerpc/32s: Move KUEP locking/unlocking in C
powerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr()
generic
powerpc/32s: Create C version of kuap save/restore/check helpers
powerpc/8xx: Create C version of kuap save/restore/check helpers
powerpc/32: Manage KUAP in C

arch/powerpc/include/asm/book3s/32/kup.h | 126 +--
arch/powerpc/include/asm/book3s/64/kup.h | 24 +-
arch/powerpc/include/asm/interrupt.h | 19 +-
arch/powerpc/include/asm/kup.h | 27 +-
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 56 +-
arch/powerpc/include/asm/ppc_asm.h | 10 -
arch/powerpc/include/asm/processor.h | 6 +-
arch/powerpc/include/asm/ptrace.h | 6 +-
arch/powerpc/kernel/asm-offsets.c | 4 -
arch/powerpc/kernel/entry_32.S | 835 ++++---------------
arch/powerpc/kernel/exceptions-64e.S | 8 +-
arch/powerpc/kernel/fpu.S | 2 -
arch/powerpc/kernel/head_32.h | 195 ++---
arch/powerpc/kernel/head_40x.S | 271 +++---
arch/powerpc/kernel/head_44x.S | 10 +-
arch/powerpc/kernel/head_8xx.S | 151 ++--
arch/powerpc/kernel/head_book3s_32.S | 233 +++---
arch/powerpc/kernel/head_booke.h | 188 +++--
arch/powerpc/kernel/head_fsl_booke.S | 64 +-
arch/powerpc/kernel/idle_6xx.S | 14 +-
arch/powerpc/kernel/idle_e500.S | 14 +-
arch/powerpc/kernel/interrupt.c | 40 +-
arch/powerpc/kernel/irq.c | 2 +-
arch/powerpc/kernel/misc_32.S | 14 -
arch/powerpc/kernel/process.c | 6 +-
arch/powerpc/kernel/setup_32.c | 2 +-
arch/powerpc/kernel/traps.c | 15 +-
arch/powerpc/kernel/vector.S | 2 -
arch/powerpc/lib/sstep.c | 9 -
arch/powerpc/mm/book3s32/Makefile | 1 +
arch/powerpc/mm/book3s32/hash_low.S | 14 -
arch/powerpc/mm/book3s32/kuep.c | 38 +
arch/powerpc/mm/fault.c | 17 +-
33 files changed, 885 insertions(+), 1538 deletions(-)
create mode 100644 arch/powerpc/mm/book3s32/kuep.c

--
2.25.0


2021-03-09 12:11:01

by Christophe Leroy

Subject: [PATCH v2 04/43] powerpc/40x: Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro

Change the CRITICAL_EXCEPTION_PROLOG macro to a gas macro in order to
remove the ugly ';' and '\' at the end of each line.
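
For reference, the shape of the conversion is the usual cpp-to-gas one shown
below (a minimal sketch with a made-up SAVE_SCRATCH macro, not the actual
kernel code):

/* C preprocessor macro: every line needs a trailing ';' and '\' */
#define SAVE_SCRATCH \
	stw	r10,crit_r10@l(0); \
	stw	r11,crit_r11@l(0)

/* gas macro: plain assembly lines, closed by .endm */
.macro SAVE_SCRATCH
	stw	r10,crit_r10@l(0)
	stw	r11,crit_r11@l(0)
.endm

Both forms are expanded by writing SAVE_SCRATCH at the point of use.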

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_40x.S | 71 +++++++++++++++++-----------------
1 file changed, 36 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 383238a98f77..9cef423d574b 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -100,42 +100,43 @@ _ENTRY(saved_ksp_limit)
* Instead we use a couple of words of memory at low physical addresses.
* This is OK since we don't support SMP on these processors.
*/
-#define CRITICAL_EXCEPTION_PROLOG \
- stw r10,crit_r10@l(0); /* save two registers to work with */\
- stw r11,crit_r11@l(0); \
- mfcr r10; /* save CR in r10 for now */\
- mfspr r11,SPRN_SRR3; /* check whether user or kernel */\
- andi. r11,r11,MSR_PR; \
- lis r11,critirq_ctx@ha; \
- tophys(r11,r11); \
- lwz r11,critirq_ctx@l(r11); \
- beq 1f; \
- /* COMING FROM USER MODE */ \
- mfspr r11,SPRN_SPRG_THREAD; /* if from user, start at top of */\
- lwz r11,TASK_STACK-THREAD(r11); /* this thread's kernel stack */\
-1: addi r11,r11,THREAD_SIZE-INT_FRAME_SIZE; /* Alloc an excpt frm */\
- tophys(r11,r11); \
- stw r10,_CCR(r11); /* save various registers */\
- stw r12,GPR12(r11); \
- stw r9,GPR9(r11); \
- mflr r10; \
- stw r10,_LINK(r11); \
- mfspr r12,SPRN_DEAR; /* save DEAR and ESR in the frame */\
- stw r12,_DEAR(r11); /* since they may have had stuff */\
- mfspr r9,SPRN_ESR; /* in them at the point where the */\
- stw r9,_ESR(r11); /* exception was taken */\
- mfspr r12,SPRN_SRR2; \
- stw r1,GPR1(r11); \
- mfspr r9,SPRN_SRR3; \
- stw r1,0(r11); \
- tovirt(r1,r11); \
- rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
- stw r0,GPR0(r11); \
- lis r10, STACK_FRAME_REGS_MARKER@ha; /* exception frame marker */\
- addi r10, r10, STACK_FRAME_REGS_MARKER@l; \
- stw r10, 8(r11); \
- SAVE_4GPRS(3, r11); \
+.macro CRITICAL_EXCEPTION_PROLOG
+ stw r10,crit_r10@l(0) /* save two registers to work with */
+ stw r11,crit_r11@l(0)
+ mfcr r10 /* save CR in r10 for now */
+ mfspr r11,SPRN_SRR3 /* check whether user or kernel */
+ andi. r11,r11,MSR_PR
+ lis r11,critirq_ctx@ha
+ tophys(r11,r11)
+ lwz r11,critirq_ctx@l(r11)
+ beq 1f
+ /* COMING FROM USER MODE */
+ mfspr r11,SPRN_SPRG_THREAD /* if from user, start at top of */
+ lwz r11,TASK_STACK-THREAD(r11) /* this thread's kernel stack */
+1: addi r11,r11,THREAD_SIZE-INT_FRAME_SIZE /* Alloc an excpt frm */
+ tophys(r11,r11)
+ stw r10,_CCR(r11) /* save various registers */
+ stw r12,GPR12(r11)
+ stw r9,GPR9(r11)
+ mflr r10
+ stw r10,_LINK(r11)
+ mfspr r12,SPRN_DEAR /* save DEAR and ESR in the frame */
+ stw r12,_DEAR(r11) /* since they may have had stuff */
+ mfspr r9,SPRN_ESR /* in them at the point where the */
+ stw r9,_ESR(r11) /* exception was taken */
+ mfspr r12,SPRN_SRR2
+ stw r1,GPR1(r11)
+ mfspr r9,SPRN_SRR3
+ stw r1,0(r11)
+ tovirt(r1,r11)
+ rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
+ stw r0,GPR0(r11)
+ lis r10, STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
+ addi r10, r10, STACK_FRAME_REGS_MARKER@l
+ stw r10, 8(r11)
+ SAVE_4GPRS(3, r11)
SAVE_2GPRS(7, r11)
+.endm

/*
* State at this point:
--
2.25.0

2021-03-09 12:11:01

by Christophe Leroy

Subject: [PATCH v2 02/43] powerpc/traps: Declare unrecoverable_exception() as __noreturn

unrecoverable_exception() is never expected to return; most callers
have an infinite loop in case it returns.

Ensure it really never returns by terminating it with an infinite loop,
and declare it __noreturn.

This allows GCC to really simplify functions calling it. In the example
below, it avoids the stack frame in the likely fast path and avoids
code duplication for the exit.
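
To illustrate the kind of simplification GCC can then do, here is a minimal,
hypothetical example (fatal_error() merely stands in for
unrecoverable_exception(); in the kernel, __noreturn expands to the attribute
used below):

void __attribute__((noreturn)) fatal_error(void);

int check(int x)
{
	if (x)
		return x + 1;	/* fast path: no stack frame needed */
	fatal_error();		/* compiler knows this call never returns,
				 * so nothing is emitted after it */
}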

With this patch:

00000348 <interrupt_exit_kernel_prepare>:
348: 81 43 00 84 lwz r10,132(r3)
34c: 71 48 00 02 andi. r8,r10,2
350: 41 82 00 2c beq 37c <interrupt_exit_kernel_prepare+0x34>
354: 71 4a 40 00 andi. r10,r10,16384
358: 40 82 00 20 bne 378 <interrupt_exit_kernel_prepare+0x30>
35c: 80 62 00 70 lwz r3,112(r2)
360: 74 63 00 01 andis. r3,r3,1
364: 40 82 00 28 bne 38c <interrupt_exit_kernel_prepare+0x44>
368: 7d 40 00 a6 mfmsr r10
36c: 7c 11 13 a6 mtspr 81,r0
370: 7c 12 13 a6 mtspr 82,r0
374: 4e 80 00 20 blr
378: 48 00 00 00 b 378 <interrupt_exit_kernel_prepare+0x30>
37c: 94 21 ff f0 stwu r1,-16(r1)
380: 7c 08 02 a6 mflr r0
384: 90 01 00 14 stw r0,20(r1)
388: 48 00 00 01 bl 388 <interrupt_exit_kernel_prepare+0x40>
388: R_PPC_REL24 unrecoverable_exception
38c: 38 e2 00 70 addi r7,r2,112
390: 3d 00 00 01 lis r8,1
394: 7c c0 38 28 lwarx r6,0,r7
398: 7c c6 40 78 andc r6,r6,r8
39c: 7c c0 39 2d stwcx. r6,0,r7
3a0: 40 a2 ff f4 bne 394 <interrupt_exit_kernel_prepare+0x4c>
3a4: 38 60 00 01 li r3,1
3a8: 4b ff ff c0 b 368 <interrupt_exit_kernel_prepare+0x20>

Without this patch:

00000348 <interrupt_exit_kernel_prepare>:
348: 94 21 ff f0 stwu r1,-16(r1)
34c: 93 e1 00 0c stw r31,12(r1)
350: 7c 7f 1b 78 mr r31,r3
354: 81 23 00 84 lwz r9,132(r3)
358: 71 2a 00 02 andi. r10,r9,2
35c: 41 82 00 34 beq 390 <interrupt_exit_kernel_prepare+0x48>
360: 71 29 40 00 andi. r9,r9,16384
364: 40 82 00 28 bne 38c <interrupt_exit_kernel_prepare+0x44>
368: 80 62 00 70 lwz r3,112(r2)
36c: 74 63 00 01 andis. r3,r3,1
370: 40 82 00 3c bne 3ac <interrupt_exit_kernel_prepare+0x64>
374: 7d 20 00 a6 mfmsr r9
378: 7c 11 13 a6 mtspr 81,r0
37c: 7c 12 13 a6 mtspr 82,r0
380: 83 e1 00 0c lwz r31,12(r1)
384: 38 21 00 10 addi r1,r1,16
388: 4e 80 00 20 blr
38c: 48 00 00 00 b 38c <interrupt_exit_kernel_prepare+0x44>
390: 7c 08 02 a6 mflr r0
394: 90 01 00 14 stw r0,20(r1)
398: 48 00 00 01 bl 398 <interrupt_exit_kernel_prepare+0x50>
398: R_PPC_REL24 unrecoverable_exception
39c: 80 01 00 14 lwz r0,20(r1)
3a0: 81 3f 00 84 lwz r9,132(r31)
3a4: 7c 08 03 a6 mtlr r0
3a8: 4b ff ff b8 b 360 <interrupt_exit_kernel_prepare+0x18>
3ac: 39 02 00 70 addi r8,r2,112
3b0: 3d 40 00 01 lis r10,1
3b4: 7c e0 40 28 lwarx r7,0,r8
3b8: 7c e7 50 78 andc r7,r7,r10
3bc: 7c e0 41 2d stwcx. r7,0,r8
3c0: 40 a2 ff f4 bne 3b4 <interrupt_exit_kernel_prepare+0x6c>
3c4: 38 60 00 01 li r3,1
3c8: 4b ff ff ac b 374 <interrupt_exit_kernel_prepare+0x2c>

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/interrupt.h | 2 +-
arch/powerpc/kernel/traps.c | 6 +++++-
2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index e8d09a841373..232a4847f596 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -436,7 +436,7 @@ DECLARE_INTERRUPT_HANDLER_NMI(hmi_exception_realmode);

DECLARE_INTERRUPT_HANDLER_ASYNC(TAUException);

-void unrecoverable_exception(struct pt_regs *regs);
+void __noreturn unrecoverable_exception(struct pt_regs *regs);

void replay_system_reset(void);
void replay_soft_interrupts(void);
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index a44a30b0688c..d5c9d9ddd186 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -2170,11 +2170,15 @@ DEFINE_INTERRUPT_HANDLER(SPEFloatingPointRoundException)
* in the MSR is 0. This indicates that SRR0/1 are live, and that
* we therefore lost state by taking this exception.
*/
-void unrecoverable_exception(struct pt_regs *regs)
+void __noreturn unrecoverable_exception(struct pt_regs *regs)
{
pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
regs->trap, regs->nip, regs->msr);
die("Unrecoverable exception", regs, SIGABRT);
+ /* die() should not return */
+ WARN(true, "die() unexpectedly returned");
+ for (;;)
+ ;
}

#if defined(CONFIG_BOOKE_WDT) || defined(CONFIG_40x)
--
2.25.0

2021-03-09 12:11:01

by Christophe Leroy

Subject: [PATCH v2 01/43] powerpc/traps: unrecoverable_exception() is not an interrupt handler

unrecoverable_exception() is called from interrupt handlers or
after an interrupt handler has failed.

Make it a standard function to avoid doubling the actions
performed on interrupt entry (e.g. user time accounting).
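
For context, DEFINE_INTERRUPT_HANDLER() generates a wrapper around the
handler body, roughly along the lines of the sketch below (simplified, not
the exact macro expansion in arch/powerpc/include/asm/interrupt.h):

/* DEFINE_INTERRUPT_HANDLER(func) roughly expands to: */
interrupt_handler void func(struct pt_regs *regs)
{
	struct interrupt_state state;

	interrupt_enter_prepare(regs, &state);	/* accounting, irq reconciliation, ... */
	____func(regs);				/* the actual handler body */
	interrupt_exit_prepare(regs, &state);
}

Calling unrecoverable_exception() from another interrupt handler therefore
runs interrupt_enter_prepare() a second time, hence the move to a plain
function.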

Fixes: 3a96570ffceb ("powerpc: convert interrupt handlers to use wrappers")
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/interrupt.h | 3 ++-
arch/powerpc/kernel/interrupt.c | 1 -
arch/powerpc/kernel/traps.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index aedfba29e43a..e8d09a841373 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -410,7 +410,6 @@ DECLARE_INTERRUPT_HANDLER(altivec_assist_exception);
DECLARE_INTERRUPT_HANDLER(CacheLockingException);
DECLARE_INTERRUPT_HANDLER(SPEFloatingPointException);
DECLARE_INTERRUPT_HANDLER(SPEFloatingPointRoundException);
-DECLARE_INTERRUPT_HANDLER(unrecoverable_exception);
DECLARE_INTERRUPT_HANDLER(WatchdogException);
DECLARE_INTERRUPT_HANDLER(kernel_bad_stack);

@@ -437,6 +436,8 @@ DECLARE_INTERRUPT_HANDLER_NMI(hmi_exception_realmode);

DECLARE_INTERRUPT_HANDLER_ASYNC(TAUException);

+void unrecoverable_exception(struct pt_regs *regs);
+
void replay_system_reset(void);
void replay_soft_interrupts(void);

diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index 398cd86b6ada..b8e7d25be31b 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -436,7 +436,6 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
return ret;
}

-void unrecoverable_exception(struct pt_regs *regs);
void preempt_schedule_irq(void);

notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsigned long msr)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 1583fd1c6010..a44a30b0688c 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -2170,7 +2170,7 @@ DEFINE_INTERRUPT_HANDLER(SPEFloatingPointRoundException)
* in the MSR is 0. This indicates that SRR0/1 are live, and that
* we therefore lost state by taking this exception.
*/
-DEFINE_INTERRUPT_HANDLER(unrecoverable_exception)
+void unrecoverable_exception(struct pt_regs *regs)
{
pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
regs->trap, regs->nip, regs->msr);
--
2.25.0

2021-03-09 12:11:03

by Christophe Leroy

Subject: [PATCH v2 03/43] powerpc/40x: Don't use SPRN_SPRG_SCRATCH0/1 in TLB miss handlers

SPRN_SPRG_SCRATCH5 is used to save SPRN_PID.
SPRN_SPRG_SCRATCH6 is already available.

SPRN_PID is only 8 bits. We have r12 that contains CR.
We only need to preserve CR0, so we have space available in r12
to save PID.

Keep PID in r12 and free up SPRN_SPRG_SCRATCH5.

Then, in TLB miss handlers, instead of using SPRN_SPRG_SCRATCH0 and
SPRN_SPRG_SCRATCH1, use SPRN_SPRG_SCRATCH5 and SPRN_SPRG_SCRATCH6
to avoid future conflicts with normal exception prologs.
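
The trick, as used in the hunks below, is to pack the 8-bit PID into the low
byte of r12 next to CR, and to restore only CR0 on the way out (sketch; as I
read it, the PID register only latches its implemented bits, so the CR bits
sitting in the upper part of r12 are harmless):

	mfcr	r12			/* save CR in r12 */
	mfspr	r9, SPRN_PID
	rlwimi	r12, r9, 0, 0xff	/* insert PID into the low byte of r12 */
	...
	mtspr	SPRN_PID, r12		/* write PID back */
	mtcrf	0x80, r12		/* restore CR0 only (field mask 0x80) */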

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_40x.S | 39 ++++++++++++++++------------------
1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 24724a7dad49..383238a98f77 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -249,13 +249,13 @@ _ENTRY(saved_ksp_limit)
* load TLB entries from the page table if they exist.
*/
START_EXCEPTION(0x1100, DTLBMiss)
- mtspr SPRN_SPRG_SCRATCH0, r10 /* Save some working registers */
- mtspr SPRN_SPRG_SCRATCH1, r11
+ mtspr SPRN_SPRG_SCRATCH5, r10 /* Save some working registers */
+ mtspr SPRN_SPRG_SCRATCH6, r11
mtspr SPRN_SPRG_SCRATCH3, r12
mtspr SPRN_SPRG_SCRATCH4, r9
mfcr r12
mfspr r9, SPRN_PID
- mtspr SPRN_SPRG_SCRATCH5, r9
+ rlwimi r12, r9, 0, 0xff
mfspr r10, SPRN_DEAR /* Get faulting address */

/* If we are faulting a kernel address, we have to use the
@@ -316,13 +316,12 @@ _ENTRY(saved_ksp_limit)
/* The bailout. Restore registers to pre-exception conditions
* and call the heavyweights to help us out.
*/
- mfspr r9, SPRN_SPRG_SCRATCH5
- mtspr SPRN_PID, r9
- mtcr r12
+ mtspr SPRN_PID, r12
+ mtcrf 0x80, r12
mfspr r9, SPRN_SPRG_SCRATCH4
mfspr r12, SPRN_SPRG_SCRATCH3
- mfspr r11, SPRN_SPRG_SCRATCH1
- mfspr r10, SPRN_SPRG_SCRATCH0
+ mfspr r11, SPRN_SPRG_SCRATCH6
+ mfspr r10, SPRN_SPRG_SCRATCH5
b DataStorage

/* 0x1200 - Instruction TLB Miss Exception
@@ -330,13 +329,13 @@ _ENTRY(saved_ksp_limit)
* registers and bailout to a different point.
*/
START_EXCEPTION(0x1200, ITLBMiss)
- mtspr SPRN_SPRG_SCRATCH0, r10 /* Save some working registers */
- mtspr SPRN_SPRG_SCRATCH1, r11
+ mtspr SPRN_SPRG_SCRATCH5, r10 /* Save some working registers */
+ mtspr SPRN_SPRG_SCRATCH6, r11
mtspr SPRN_SPRG_SCRATCH3, r12
mtspr SPRN_SPRG_SCRATCH4, r9
mfcr r12
mfspr r9, SPRN_PID
- mtspr SPRN_SPRG_SCRATCH5, r9
+ rlwimi r12, r9, 0, 0xff
mfspr r10, SPRN_SRR0 /* Get faulting address */

/* If we are faulting a kernel address, we have to use the
@@ -397,13 +396,12 @@ _ENTRY(saved_ksp_limit)
/* The bailout. Restore registers to pre-exception conditions
* and call the heavyweights to help us out.
*/
- mfspr r9, SPRN_SPRG_SCRATCH5
- mtspr SPRN_PID, r9
- mtcr r12
+ mtspr SPRN_PID, r12
+ mtcrf 0x80, r12
mfspr r9, SPRN_SPRG_SCRATCH4
mfspr r12, SPRN_SPRG_SCRATCH3
- mfspr r11, SPRN_SPRG_SCRATCH1
- mfspr r10, SPRN_SPRG_SCRATCH0
+ mfspr r11, SPRN_SPRG_SCRATCH6
+ mfspr r10, SPRN_SPRG_SCRATCH5
b InstructionAccess

EXCEPTION(0x1300, Trap_13, unknown_exception, EXC_XFER_STD)
@@ -543,13 +541,12 @@ finish_tlb_load:

/* Done...restore registers and get out of here.
*/
- mfspr r9, SPRN_SPRG_SCRATCH5
- mtspr SPRN_PID, r9
- mtcr r12
+ mtspr SPRN_PID, r12
+ mtcrf 0x80, r12
mfspr r9, SPRN_SPRG_SCRATCH4
mfspr r12, SPRN_SPRG_SCRATCH3
- mfspr r11, SPRN_SPRG_SCRATCH1
- mfspr r10, SPRN_SPRG_SCRATCH0
+ mfspr r11, SPRN_SPRG_SCRATCH6
+ mfspr r10, SPRN_SPRG_SCRATCH5
rfi /* Should sync shadow TLBs */
b . /* prevent prefetch past rfi */

--
2.25.0

2021-03-09 12:11:04

by Christophe Leroy

Subject: [PATCH v2 07/43] powerpc/40x: Prepare for enabling MMU in critical exception prolog

In order to enable the MMU early in the exception prolog, implement
CONFIG_VMAP_STACK principles in the critical exception prolog.

There is no intention to use CONFIG_VMAP_STACK on 40x,
but the related code will be used to enable the MMU early in
exception prologs in a later patch.

Also address (critirq_ctx - PAGE_OFFSET) directly instead of
using tophys() in order to save one instruction.
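
The tophys() change is the one-instruction saving mentioned above: on PPC32,
tophys() is a single addis of -PAGE_OFFSET, so folding the offset into the
relocation removes it (sketch taken from the hunk below):

	/* before: 3 instructions */
	lis	r11,critirq_ctx@ha
	tophys(r11,r11)			/* i.e. addis r11,r11,-PAGE_OFFSET@h */
	lwz	r11,critirq_ctx@l(r11)

	/* after: 2 instructions */
	lis	r11,(critirq_ctx-PAGE_OFFSET)@ha
	lwz	r11,(critirq_ctx-PAGE_OFFSET)@l(r11)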

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_40x.S | 40 +++++++++++++++++++++++++++++++---
1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 5b337bf49bcb..1468f38c3860 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -89,6 +89,12 @@ _ENTRY(crit_srr0)
.space 4
_ENTRY(crit_srr1)
.space 4
+_ENTRY(crit_r1)
+ .space 4
+_ENTRY(crit_dear)
+ .space 4
+_ENTRY(crit_esr)
+ .space 4
_ENTRY(saved_ksp_limit)
.space 4

@@ -107,32 +113,60 @@ _ENTRY(saved_ksp_limit)
mfspr r11,SPRN_SRR1
stw r10,crit_srr0@l(0)
stw r11,crit_srr1@l(0)
+#ifdef CONFIG_VMAP_STACK
+ mfspr r10,SPRN_DEAR
+ mfspr r11,SPRN_ESR
+ stw r10,crit_dear@l(0)
+ stw r11,crit_esr@l(0)
+#endif
mfcr r10 /* save CR in r10 for now */
mfspr r11,SPRN_SRR3 /* check whether user or kernel */
andi. r11,r11,MSR_PR
- lis r11,critirq_ctx@ha
- tophys(r11,r11)
- lwz r11,critirq_ctx@l(r11)
+ lis r11,(critirq_ctx-PAGE_OFFSET)@ha
+ lwz r11,(critirq_ctx-PAGE_OFFSET)@l(r11)
beq 1f
/* COMING FROM USER MODE */
mfspr r11,SPRN_SPRG_THREAD /* if from user, start at top of */
lwz r11,TASK_STACK-THREAD(r11) /* this thread's kernel stack */
+#ifdef CONFIG_VMAP_STACK
+1: stw r1,crit_r1@l(0)
+ addi r1,r11,THREAD_SIZE-INT_FRAME_SIZE /* Alloc an excpt frm */
+ LOAD_REG_IMMEDIATE(r11,MSR_KERNEL & ~(MSR_IR | MSR_RI))
+ mtmsr r11
+ isync
+ lwz r11,crit_r1@l(0)
+ stw r11,GPR1(r1)
+ stw r11,0(r1)
+ mr r11,r1
+#else
1: addi r11,r11,THREAD_SIZE-INT_FRAME_SIZE /* Alloc an excpt frm */
tophys(r11,r11)
stw r1,GPR1(r11)
stw r1,0(r11)
tovirt(r1,r11)
+#endif
stw r10,_CCR(r11) /* save various registers */
stw r12,GPR12(r11)
stw r9,GPR9(r11)
mflr r10
stw r10,_LINK(r11)
+#ifdef CONFIG_VMAP_STACK
+ lis r9,PAGE_OFFSET@ha
+ lwz r10,crit_r10@l(r9)
+ lwz r12,crit_r11@l(r9)
+#else
lwz r10,crit_r10@l(0)
lwz r12,crit_r11@l(0)
+#endif
stw r10,GPR10(r11)
stw r12,GPR11(r11)
+#ifdef CONFIG_VMAP_STACK
+ lwz r12,crit_dear@l(r9)
+ lwz r9,crit_esr@l(r9)
+#else
mfspr r12,SPRN_DEAR /* save DEAR and ESR in the frame */
mfspr r9,SPRN_ESR /* in them at the point where the */
+#endif
stw r12,_DEAR(r11) /* since they may have had stuff */
stw r9,_ESR(r11) /* exception was taken */
mfspr r12,SPRN_SRR2
--
2.25.0

2021-03-09 12:11:12

by Christophe Leroy

Subject: [PATCH v2 06/43] powerpc/40x: Reorder a few instructions in critical exception prolog

In order to ease preparation for CONFIG_VMAP_STACK, reorder
a few instructions, in particular to save r1 into the stack frame earlier.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_40x.S | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 067ae1302c1c..5b337bf49bcb 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -119,6 +119,9 @@ _ENTRY(saved_ksp_limit)
lwz r11,TASK_STACK-THREAD(r11) /* this thread's kernel stack */
1: addi r11,r11,THREAD_SIZE-INT_FRAME_SIZE /* Alloc an excpt frm */
tophys(r11,r11)
+ stw r1,GPR1(r11)
+ stw r1,0(r11)
+ tovirt(r1,r11)
stw r10,_CCR(r11) /* save various registers */
stw r12,GPR12(r11)
stw r9,GPR9(r11)
@@ -129,14 +132,11 @@ _ENTRY(saved_ksp_limit)
stw r10,GPR10(r11)
stw r12,GPR11(r11)
mfspr r12,SPRN_DEAR /* save DEAR and ESR in the frame */
- stw r12,_DEAR(r11) /* since they may have had stuff */
mfspr r9,SPRN_ESR /* in them at the point where the */
+ stw r12,_DEAR(r11) /* since they may have had stuff */
stw r9,_ESR(r11) /* exception was taken */
mfspr r12,SPRN_SRR2
- stw r1,GPR1(r11)
mfspr r9,SPRN_SRR3
- stw r1,0(r11)
- tovirt(r1,r11)
rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
stw r0,GPR0(r11)
lis r10, STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
--
2.25.0

2021-03-09 12:11:33

by Christophe Leroy

Subject: [PATCH v2 11/43] powerpc/32: Handle bookE debugging in C in exception entry

The handling of SPRN_DBCR0 and other registers can easily
be done in C instead of ASM.
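
In C, the logic removed from transfer_to_handler boils down to something
like the snippet below (a sketch reconstructed from the removed asm; the
actual helper called from interrupt_enter_prepare() may differ in detail):

/* 40x/BOOKE only: if the task is being debugged (DBCR0_IDM set), clear
 * pending debug events and load this CPU's global DBCR0.
 */
if (unlikely(current->thread.debug.dbcr0 & DBCR0_IDM)) {
	mtspr(SPRN_DBSR, -1);		/* clear all pending debug events */
	mtspr(SPRN_DBCR0, global_dbcr0[smp_processor_id()]);
}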

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/interrupt.h | 2 ++
arch/powerpc/kernel/entry_32.S | 23 -----------------------
2 files changed, 2 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index c35368adbe71..861e6eadc98c 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -65,6 +65,8 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
if (user_mode(regs))
account_cpu_user_entry();
#endif
+
+ booke_restore_dbcr0();
}

/*
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 0f3f1bdd909e..4ffbcf3df72e 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -146,32 +146,9 @@ transfer_to_handler:
addi r2, r12, -THREAD
addi r11,r1,STACK_FRAME_OVERHEAD
stw r11,PT_REGS(r12)
-#if defined(CONFIG_40x) || defined(CONFIG_BOOKE)
- /* Check to see if the dbcr0 register is set up to debug. Use the
- internal debug mode bit to do this. */
- lwz r12,THREAD_DBCR0(r12)
- andis. r12,r12,DBCR0_IDM@h
-#endif
#ifdef CONFIG_PPC_BOOK3S_32
kuep_lock r11, r12
#endif
-#if defined(CONFIG_40x) || defined(CONFIG_BOOKE)
- beq+ 3f
- /* From user and task is ptraced - load up global dbcr0 */
- li r12,-1 /* clear all pending debug events */
- mtspr SPRN_DBSR,r12
- lis r11,global_dbcr0@ha
- tophys_novmstack r11,r11
- addi r11,r11,global_dbcr0@l
-#ifdef CONFIG_SMP
- lwz r9,TASK_CPU(r2)
- slwi r9,r9,2
- add r11,r11,r9
-#endif
- lwz r12,0(r11)
- mtspr SPRN_DBCR0,r12
-#endif
-
b 3f

2: /* if from kernel, check interrupted DOZE/NAP mode and
--
2.25.0

2021-03-09 12:11:34

by Christophe Leroy

Subject: [PATCH v2 12/43] powerpc/32: Use fast instruction to set MSR RI in exception prolog on 8xx

8xx has registers SPRN_NRI, SPRN_EID and SPRN_EIE for changing
MSR EE and RI.

Use SPRN_EID in the exception prolog to set RI.

On an 8xx, it reduces the null_syscall test by 3 cycles.
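
The gain comes from replacing a li+mtmsr pair with a single special-purpose
write; as far as I understand the 8xx, the value written to SPRN_EID is
irrelevant, only the side effect of the mtspr on the MSR matters (sketch):

	/* generic (VMAP_STACK) path: build an MSR value and mtmsr it */
	li	r10, MSR_KERNEL & ~MSR_IR
	mtmsr	r10

	/* 8xx path: one mtspr of any handy register sets MSR_RI */
	mtspr	SPRN_EID, r2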

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index ac6b391f1493..25ee6b26ef5a 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -107,6 +107,8 @@
#endif
#ifdef CONFIG_40x
rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
+#elif defined(CONFIG_PPC_8xx)
+ mtspr SPRN_EID, r2 /* Set MSR_RI */
#else
#ifdef CONFIG_VMAP_STACK
li r10, MSR_KERNEL & ~MSR_IR /* can take exceptions */
--
2.25.0

2021-03-09 12:11:34

by Christophe Leroy

Subject: [PATCH v2 10/43] powerpc/32: Entry cpu time accounting in C

There is no need for this to be in asm;
use the new interrupt entry wrapper instead.
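
For reference, the accounting performed by the removed ACCOUNT_CPU_USER_ENTRY
asm macro is essentially the following, now done by account_cpu_user_entry()
from the interrupt entry wrapper (C sketch; the real helper may differ in
detail):

static inline void account_cpu_user_entry(void)
{
	unsigned long tb = mftb();
	struct cpu_accounting_data *acct = get_accounting(current);

	acct->utime += tb - acct->starttime_user;	/* user time since last exit */
	acct->starttime = tb;				/* kernel time starts now */
}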

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/interrupt.h | 3 +++
arch/powerpc/include/asm/ppc_asm.h | 10 ----------
arch/powerpc/kernel/entry_32.S | 1 -
3 files changed, 3 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index b2f69e5dcb50..c35368adbe71 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -32,6 +32,9 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
#ifdef CONFIG_PPC32
if (!arch_irq_disabled_regs(regs))
trace_hardirqs_off();
+
+ if (user_mode(regs))
+ account_cpu_user_entry();
#endif
/*
* Book3E reconciles irq soft mask in asm
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 3dceb64fc9af..8998122fc7e2 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -23,18 +23,8 @@
*/

#ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
-#define ACCOUNT_CPU_USER_ENTRY(ptr, ra, rb)
#define ACCOUNT_CPU_USER_EXIT(ptr, ra, rb)
#else
-#define ACCOUNT_CPU_USER_ENTRY(ptr, ra, rb) \
- MFTB(ra); /* get timebase */ \
- PPC_LL rb, ACCOUNT_STARTTIME_USER(ptr); \
- PPC_STL ra, ACCOUNT_STARTTIME(ptr); \
- subf rb,rb,ra; /* subtract start value */ \
- PPC_LL ra, ACCOUNT_USER_TIME(ptr); \
- add ra,ra,rb; /* add on to user time */ \
- PPC_STL ra, ACCOUNT_USER_TIME(ptr); \
-
#define ACCOUNT_CPU_USER_EXIT(ptr, ra, rb) \
MFTB(ra); /* get timebase */ \
PPC_LL rb, ACCOUNT_STARTTIME(ptr); \
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 0f18fe14649c..0f3f1bdd909e 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -152,7 +152,6 @@ transfer_to_handler:
lwz r12,THREAD_DBCR0(r12)
andis. r12,r12,DBCR0_IDM@h
#endif
- ACCOUNT_CPU_USER_ENTRY(r2, r11, r12)
#ifdef CONFIG_PPC_BOOK3S_32
kuep_lock r11, r12
#endif
--
2.25.0

2021-03-09 12:11:34

by Christophe Leroy

Subject: [PATCH v2 13/43] powerpc/32: Remove ksp_limit

ksp_limit is there to help detect stack overflows.
That is specific to ppc32 as it was removed from ppc64 in
commit cbc9565ee826 ("powerpc: Remove ksp_limit on ppc64").

There are other means for detecting stack overflows.

As ppc64 has proven to not need it, ppc32 should be able to do
without it too.

Let's remove it and simplify exception handling.
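
For reference, the runtime check being dropped amounted to three instructions
in transfer_to_handler (copied from the hunk below):

	lwz	r9,KSP_LIMIT(r12)	/* thread_struct.ksp_limit */
	cmplw	r1,r9			/* if r1 <= ksp_limit */
	ble-	stack_ovf		/* then the kernel stack overflowed */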

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/processor.h | 2 -
arch/powerpc/kernel/asm-offsets.c | 2 -
arch/powerpc/kernel/entry_32.S | 68 +---------------------------
arch/powerpc/kernel/head_40x.S | 2 -
arch/powerpc/kernel/head_booke.h | 1 -
arch/powerpc/kernel/misc_32.S | 14 ------
arch/powerpc/kernel/process.c | 3 --
arch/powerpc/kernel/traps.c | 9 ----
arch/powerpc/lib/sstep.c | 9 ----
9 files changed, 2 insertions(+), 108 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 8acc3590c971..43cbd9281055 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -144,7 +144,6 @@ struct thread_struct {
#endif
#ifdef CONFIG_PPC32
void *pgdir; /* root of page-table tree */
- unsigned long ksp_limit; /* if ksp <= ksp_limit stack overflow */
#ifdef CONFIG_PPC_RTAS
unsigned long rtas_sp; /* stack pointer for when in RTAS */
#endif
@@ -282,7 +281,6 @@ struct thread_struct {
#ifdef CONFIG_PPC32
#define INIT_THREAD { \
.ksp = INIT_SP, \
- .ksp_limit = INIT_SP_LIMIT, \
.pgdir = swapper_pg_dir, \
.fpexc_mode = MSR_FE0 | MSR_FE1, \
SPEFSCR_INIT \
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index f3a662201a9f..73620536c801 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -91,7 +91,6 @@ int main(void)
DEFINE(SIGSEGV, SIGSEGV);
DEFINE(NMI_MASK, NMI_MASK);
#else
- OFFSET(KSP_LIMIT, thread_struct, ksp_limit);
#ifdef CONFIG_PPC_RTAS
OFFSET(RTAS_SP, thread_struct, rtas_sp);
#endif
@@ -381,7 +380,6 @@ int main(void)
DEFINE(_CSRR1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, csrr1));
DEFINE(_DSRR0, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, dsrr0));
DEFINE(_DSRR1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, dsrr1));
- DEFINE(SAVED_KSP_LIMIT, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, saved_ksp_limit));
#endif
#endif

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 4ffbcf3df72e..66198e6e25e7 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -94,12 +94,6 @@ crit_transfer_to_handler:
mfspr r0,SPRN_SRR1
stw r0,_SRR1(r11)

- /* set the stack limit to the current stack */
- mfspr r8,SPRN_SPRG_THREAD
- lwz r0,KSP_LIMIT(r8)
- stw r0,SAVED_KSP_LIMIT(r11)
- rlwinm r0,r1,0,0,(31 - THREAD_SHIFT)
- stw r0,KSP_LIMIT(r8)
/* fall through */
_ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
#endif
@@ -107,12 +101,6 @@ _ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
#ifdef CONFIG_40x
.globl crit_transfer_to_handler
crit_transfer_to_handler:
- /* set the stack limit to the current stack */
- mfspr r8,SPRN_SPRG_THREAD
- lwz r0,KSP_LIMIT(r8)
- stw r0,saved_ksp_limit@l(0)
- rlwinm r0,r1,0,0,(31 - THREAD_SHIFT)
- stw r0,KSP_LIMIT(r8)
/* fall through */
_ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
#endif
@@ -151,17 +139,10 @@ transfer_to_handler:
#endif
b 3f

-2: /* if from kernel, check interrupted DOZE/NAP mode and
- * check for stack overflow
- */
+ /* if from kernel, check interrupted DOZE/NAP mode */
+2:
kuap_save_and_lock r11, r12, r9, r2, r6
addi r2, r12, -THREAD
-#ifndef CONFIG_VMAP_STACK
- lwz r9,KSP_LIMIT(r12)
- cmplw r1,r9 /* if r1 <= ksp_limit */
- ble- stack_ovf /* then the kernel stack overflowed */
-#endif
-5:
#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
lwz r12,TI_LOCAL_FLAGS(r2)
mtcrf 0x01,r12
@@ -204,37 +185,6 @@ transfer_to_handler_cont:
_ASM_NOKPROBE_SYMBOL(transfer_to_handler)
_ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)

-#ifndef CONFIG_VMAP_STACK
-/*
- * On kernel stack overflow, load up an initial stack pointer
- * and call StackOverflow(regs), which should not return.
- */
-stack_ovf:
- /* sometimes we use a statically-allocated stack, which is OK. */
- lis r12,_end@h
- ori r12,r12,_end@l
- cmplw r1,r12
- ble 5b /* r1 <= &_end is OK */
- SAVE_NVGPRS(r11)
- addi r3,r1,STACK_FRAME_OVERHEAD
- lis r1,init_thread_union@ha
- addi r1,r1,init_thread_union@l
- addi r1,r1,THREAD_SIZE-STACK_FRAME_OVERHEAD
- lis r9,StackOverflow@ha
- addi r9,r9,StackOverflow@l
- LOAD_REG_IMMEDIATE(r10,MSR_KERNEL)
-#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
- mtspr SPRN_NRI, r0
-#endif
- mtspr SPRN_SRR0,r9
- mtspr SPRN_SRR1,r10
- rfi
-#ifdef CONFIG_40x
- b . /* Prevent prefetch past rfi */
-#endif
-_ASM_NOKPROBE_SYMBOL(stack_ovf)
-#endif
-
.globl transfer_to_syscall
transfer_to_syscall:
SAVE_NVGPRS(r1)
@@ -815,11 +765,6 @@ _ASM_NOKPROBE_SYMBOL(exc_exit_restart)
#ifdef CONFIG_40x
.globl ret_from_crit_exc
ret_from_crit_exc:
- mfspr r9,SPRN_SPRG_THREAD
- lis r10,saved_ksp_limit@ha;
- lwz r10,saved_ksp_limit@l(r10);
- tovirt(r9,r9);
- stw r10,KSP_LIMIT(r9)
lis r9,crit_srr0@ha;
lwz r9,crit_srr0@l(r9);
lis r10,crit_srr1@ha;
@@ -833,9 +778,6 @@ _ASM_NOKPROBE_SYMBOL(ret_from_crit_exc)
#ifdef CONFIG_BOOKE
.globl ret_from_crit_exc
ret_from_crit_exc:
- mfspr r9,SPRN_SPRG_THREAD
- lwz r10,SAVED_KSP_LIMIT(r1)
- stw r10,KSP_LIMIT(r9)
RESTORE_xSRR(SRR0,SRR1);
RESTORE_MMU_REGS;
RET_FROM_EXC_LEVEL(SPRN_CSRR0, SPRN_CSRR1, PPC_RFCI)
@@ -843,9 +785,6 @@ _ASM_NOKPROBE_SYMBOL(ret_from_crit_exc)

.globl ret_from_debug_exc
ret_from_debug_exc:
- mfspr r9,SPRN_SPRG_THREAD
- lwz r10,SAVED_KSP_LIMIT(r1)
- stw r10,KSP_LIMIT(r9)
RESTORE_xSRR(SRR0,SRR1);
RESTORE_xSRR(CSRR0,CSRR1);
RESTORE_MMU_REGS;
@@ -854,9 +793,6 @@ _ASM_NOKPROBE_SYMBOL(ret_from_debug_exc)

.globl ret_from_mcheck_exc
ret_from_mcheck_exc:
- mfspr r9,SPRN_SPRG_THREAD
- lwz r10,SAVED_KSP_LIMIT(r1)
- stw r10,KSP_LIMIT(r9)
RESTORE_xSRR(SRR0,SRR1);
RESTORE_xSRR(CSRR0,CSRR1);
RESTORE_xSRR(DSRR0,DSRR1);
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 4bf0aee858eb..72e4962902de 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -95,8 +95,6 @@ _ENTRY(crit_dear)
.space 4
_ENTRY(crit_esr)
.space 4
-_ENTRY(saved_ksp_limit)
- .space 4

/*
* Exception prolog for critical exceptions. This is a little different
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 47857795f50a..4a5f0c9b652b 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -481,7 +481,6 @@ struct exception_regs {
unsigned long csrr1;
unsigned long dsrr0;
unsigned long dsrr1;
- unsigned long saved_ksp_limit;
};

/* ensure this structure is always sized to a multiple of the stack alignment */
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index 717e658b90fd..acc410043b96 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -27,23 +27,14 @@

.text

-/*
- * We store the saved ksp_limit in the unused part
- * of the STACK_FRAME_OVERHEAD
- */
_GLOBAL(call_do_softirq)
mflr r0
stw r0,4(r1)
- lwz r10,THREAD+KSP_LIMIT(r2)
- stw r3, THREAD+KSP_LIMIT(r2)
stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3)
mr r1,r3
- stw r10,8(r1)
bl __do_softirq
- lwz r10,8(r1)
lwz r1,0(r1)
lwz r0,4(r1)
- stw r10,THREAD+KSP_LIMIT(r2)
mtlr r0
blr

@@ -53,16 +44,11 @@ _GLOBAL(call_do_softirq)
_GLOBAL(call_do_irq)
mflr r0
stw r0,4(r1)
- lwz r10,THREAD+KSP_LIMIT(r2)
- stw r4, THREAD+KSP_LIMIT(r2)
stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4)
mr r1,r4
- stw r10,8(r1)
bl __do_irq
- lwz r10,8(r1)
lwz r1,0(r1)
lwz r0,4(r1)
- stw r10,THREAD+KSP_LIMIT(r2)
mtlr r0
blr

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 3231c2df9e26..5d5d64be2679 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1724,9 +1724,6 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
kregs = (struct pt_regs *) sp;
sp -= STACK_FRAME_OVERHEAD;
p->thread.ksp = sp;
-#ifdef CONFIG_PPC32
- p->thread.ksp_limit = (unsigned long)end_of_stack(p);
-#endif
#ifdef CONFIG_HAVE_HW_BREAKPOINT
for (i = 0; i < nr_wp_slots(); i++)
p->thread.ptrace_bps[i] = NULL;
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index d5c9d9ddd186..7c4ab872004d 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1606,15 +1606,6 @@ DEFINE_INTERRUPT_HANDLER(alignment_exception)
bad_page_fault(regs, sig);
}

-DEFINE_INTERRUPT_HANDLER(StackOverflow)
-{
- pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n",
- current->comm, task_pid_nr(current), regs->gpr[1]);
- debugger(regs);
- show_regs(regs);
- panic("kernel stack overflow");
-}
-
DEFINE_INTERRUPT_HANDLER(stack_overflow_exception)
{
die("Kernel stack overflow", regs, SIGSEGV);
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index bb5c20d4ca91..212012f49f10 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -3086,15 +3086,6 @@ NOKPROBE_SYMBOL(analyse_instr);
*/
static nokprobe_inline int handle_stack_update(unsigned long ea, struct pt_regs *regs)
{
-#ifdef CONFIG_PPC32
- /*
- * Check if we will touch kernel stack overflow
- */
- if (ea - STACK_INT_FRAME_SIZE <= current->thread.ksp_limit) {
- printk(KERN_CRIT "Can't kprobe this since kernel stack would overflow.\n");
- return -EINVAL;
- }
-#endif /* CONFIG_PPC32 */
/*
* Check if we already set since that means we'll
* lose the previous value.
--
2.25.0

2021-03-09 12:11:34

by Christophe Leroy

Subject: [PATCH v2 14/43] powerpc/32: Always enable data translation in exception prolog

If the code can use a stack in a VM area, it can also use a
stack in linear space.

Simplify the code by removing the old non-VMAP stack code on PPC32.

That means data translation is now re-enabled early in the
exception prolog in all cases, not only when using VMAP stacks.

While we are touching EXCEPTION_PROLOG macros, remove the
unused for_rtas parameter in EXCEPTION_PROLOG_1.
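
Concretely, re-enabling data translation early means the prolog now always
performs the VMAP_STACK-style MSR update before touching the stack (sketch,
as in EXCEPTION_PROLOG_2 below):

	LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~(MSR_IR | MSR_RI))
	mtmsr	r11		/* DR on, IR still off: data accesses are translated */
	isync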

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/processor.h | 4 +-
arch/powerpc/kernel/asm-offsets.c | 2 -
arch/powerpc/kernel/entry_32.S | 19 +++----
arch/powerpc/kernel/fpu.S | 2 -
arch/powerpc/kernel/head_32.h | 85 +---------------------------
arch/powerpc/kernel/head_40x.S | 23 --------
arch/powerpc/kernel/head_8xx.S | 19 +------
arch/powerpc/kernel/head_book3s_32.S | 47 +--------------
arch/powerpc/kernel/idle_6xx.S | 12 +---
arch/powerpc/kernel/idle_e500.S | 4 +-
arch/powerpc/kernel/vector.S | 2 -
arch/powerpc/mm/book3s32/hash_low.S | 14 -----
12 files changed, 17 insertions(+), 216 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 43cbd9281055..eae16facc390 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -147,11 +147,9 @@ struct thread_struct {
#ifdef CONFIG_PPC_RTAS
unsigned long rtas_sp; /* stack pointer for when in RTAS */
#endif
-#endif
#if defined(CONFIG_PPC_BOOK3S_32) && defined(CONFIG_PPC_KUAP)
unsigned long kuap; /* opened segments for user access */
#endif
-#ifdef CONFIG_VMAP_STACK
unsigned long srr0;
unsigned long srr1;
unsigned long dar;
@@ -160,7 +158,7 @@ struct thread_struct {
unsigned long r0, r3, r4, r5, r6, r8, r9, r11;
unsigned long lr, ctr;
#endif
-#endif
+#endif /* CONFIG_PPC32 */
/* Debug Registers */
struct debug_reg debug;
#ifdef CONFIG_PPC_FPU_REGS
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 73620536c801..85ba2b0bc8d8 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -131,7 +131,6 @@ int main(void)
OFFSET(KSP_VSID, thread_struct, ksp_vsid);
#else /* CONFIG_PPC64 */
OFFSET(PGDIR, thread_struct, pgdir);
-#ifdef CONFIG_VMAP_STACK
OFFSET(SRR0, thread_struct, srr0);
OFFSET(SRR1, thread_struct, srr1);
OFFSET(DAR, thread_struct, dar);
@@ -148,7 +147,6 @@ int main(void)
OFFSET(THLR, thread_struct, lr);
OFFSET(THCTR, thread_struct, ctr);
#endif
-#endif
#ifdef CONFIG_SPE
OFFSET(THREAD_EVR0, thread_struct, evr[0]);
OFFSET(THREAD_ACC, thread_struct, acc);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 66198e6e25e7..33e97032ca25 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -129,7 +129,7 @@ transfer_to_handler:
stw r12,_CTR(r11)
stw r2,_XER(r11)
mfspr r12,SPRN_SPRG_THREAD
- tovirt_vmstack r12, r12
+ tovirt(r12, r12)
beq 2f /* if from user, fix up THREAD.regs */
addi r2, r12, -THREAD
addi r11,r1,STACK_FRAME_OVERHEAD
@@ -153,8 +153,7 @@ transfer_to_handler:
transfer_to_handler_cont:
3:
mflr r9
- tovirt_novmstack r2, r2 /* set r2 to current */
- tovirt_vmstack r9, r9
+ tovirt(r9, r9)
lwz r11,0(r9) /* virtual address of handler */
lwz r9,4(r9) /* where to go when done */
#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
@@ -933,7 +932,6 @@ _GLOBAL(enter_rtas)
lis r6,1f@ha /* physical return address for rtas */
addi r6,r6,1f@l
tophys(r6,r6)
- tophys_novmstack r7, r1
lwz r8,RTASENTRY(r4)
lwz r4,RTASBASE(r4)
mfmsr r9
@@ -942,22 +940,19 @@ _GLOBAL(enter_rtas)
mtmsr r0 /* disable interrupts so SRR0/1 don't get trashed */
li r9,MSR_KERNEL & ~(MSR_IR|MSR_DR)
mtlr r6
- stw r7, THREAD + RTAS_SP(r2)
+ stw r1, THREAD + RTAS_SP(r2)
mtspr SPRN_SRR0,r8
mtspr SPRN_SRR1,r9
rfi
-1: tophys_novmstack r9, r1
-#ifdef CONFIG_VMAP_STACK
+1:
li r0, MSR_KERNEL & ~MSR_IR /* can take DTLB miss */
mtmsr r0
isync
-#endif
- lwz r8,INT_FRAME_SIZE+4(r9) /* get return address */
- lwz r9,8(r9) /* original msr value */
+ lwz r8,INT_FRAME_SIZE+4(r1) /* get return address */
+ lwz r9,8(r1) /* original msr value */
addi r1,r1,INT_FRAME_SIZE
li r0,0
- tophys_novmstack r7, r2
- stw r0, THREAD + RTAS_SP(r7)
+ stw r0, THREAD + RTAS_SP(r2)
mtspr SPRN_SRR0,r8
mtspr SPRN_SRR1,r9
rfi /* return to caller */
diff --git a/arch/powerpc/kernel/fpu.S b/arch/powerpc/kernel/fpu.S
index 3ff9a8fafa46..2c57ece6671c 100644
--- a/arch/powerpc/kernel/fpu.S
+++ b/arch/powerpc/kernel/fpu.S
@@ -92,9 +92,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
/* enable use of FP after return */
#ifdef CONFIG_PPC32
mfspr r5,SPRN_SPRG_THREAD /* current task's THREAD (phys) */
-#ifdef CONFIG_VMAP_STACK
tovirt(r5, r5)
-#endif
lwz r4,THREAD_FPEXC_MODE(r5)
ori r9,r9,MSR_FP /* enable FP for current */
or r9,r9,r4
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 25ee6b26ef5a..1b707755c68e 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -19,7 +19,6 @@
.macro EXCEPTION_PROLOG_0 handle_dar_dsisr=0
mtspr SPRN_SPRG_SCRATCH0,r10
mtspr SPRN_SPRG_SCRATCH1,r11
-#ifdef CONFIG_VMAP_STACK
mfspr r10, SPRN_SPRG_THREAD
.if \handle_dar_dsisr
#ifdef CONFIG_40x
@@ -37,17 +36,13 @@
.endif
mfspr r11, SPRN_SRR0
stw r11, SRR0(r10)
-#endif
mfspr r11, SPRN_SRR1 /* check whether user or kernel */
-#ifdef CONFIG_VMAP_STACK
stw r11, SRR1(r10)
-#endif
mfcr r10
andi. r11, r11, MSR_PR
.endm

-.macro EXCEPTION_PROLOG_1 for_rtas=0
-#ifdef CONFIG_VMAP_STACK
+.macro EXCEPTION_PROLOG_1
mtspr SPRN_SPRG_SCRATCH2,r1
subi r1, r1, INT_FRAME_SIZE /* use r1 if kernel */
beq 1f
@@ -55,20 +50,13 @@
lwz r1,TASK_STACK-THREAD(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
1:
+#ifdef CONFIG_VMAP_STACK
mtcrf 0x3f, r1
bt 32 - THREAD_ALIGN_SHIFT, stack_overflow
-#else
- subi r11, r1, INT_FRAME_SIZE /* use r1 if kernel */
- beq 1f
- mfspr r11,SPRN_SPRG_THREAD
- lwz r11,TASK_STACK-THREAD(r11)
- addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE
-1: tophys(r11, r11)
#endif
.endm

.macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
-#ifdef CONFIG_VMAP_STACK
LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~(MSR_IR | MSR_RI)) /* can take DTLB miss */
mtmsr r11
isync
@@ -76,11 +64,6 @@
stw r11,GPR1(r1)
stw r11,0(r1)
mr r11, r1
-#else
- stw r1,GPR1(r11)
- stw r1,0(r11)
- tovirt(r1, r11) /* set new kernel sp */
-#endif
stw r10,_CCR(r11) /* save registers */
stw r12,GPR12(r11)
stw r9,GPR9(r11)
@@ -90,7 +73,6 @@
stw r12,GPR11(r11)
mflr r10
stw r10,_LINK(r11)
-#ifdef CONFIG_VMAP_STACK
mfspr r12, SPRN_SPRG_THREAD
tovirt(r12, r12)
.if \handle_dar_dsisr
@@ -101,20 +83,12 @@
.endif
lwz r9, SRR1(r12)
lwz r12, SRR0(r12)
-#else
- mfspr r12,SPRN_SRR0
- mfspr r9,SPRN_SRR1
-#endif
#ifdef CONFIG_40x
rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
#elif defined(CONFIG_PPC_8xx)
mtspr SPRN_EID, r2 /* Set MSR_RI */
#else
-#ifdef CONFIG_VMAP_STACK
li r10, MSR_KERNEL & ~MSR_IR /* can take exceptions */
-#else
- li r10,MSR_KERNEL & ~(MSR_IR|MSR_DR) /* can take exceptions */
-#endif
mtmsr r10 /* (except for mach check in rtas) */
#endif
stw r0,GPR0(r11)
@@ -166,59 +140,6 @@
b transfer_to_syscall /* jump to handler */
.endm

-.macro save_dar_dsisr_on_stack reg1, reg2, sp
-#ifndef CONFIG_VMAP_STACK
-#ifdef CONFIG_40x
- mfspr \reg1, SPRN_DEAR
- mfspr \reg2, SPRN_ESR
-#else
- mfspr \reg1, SPRN_DAR
- mfspr \reg2, SPRN_DSISR
-#endif
- stw \reg1, _DAR(\sp)
- stw \reg2, _DSISR(\sp)
-#endif
-.endm
-
-.macro get_and_save_dar_dsisr_on_stack reg1, reg2, sp
-#ifdef CONFIG_VMAP_STACK
- lwz \reg1, _DAR(\sp)
- lwz \reg2, _DSISR(\sp)
-#else
- save_dar_dsisr_on_stack \reg1, \reg2, \sp
-#endif
-.endm
-
-.macro tovirt_vmstack dst, src
-#ifdef CONFIG_VMAP_STACK
- tovirt(\dst, \src)
-#else
- .ifnc \dst, \src
- mr \dst, \src
- .endif
-#endif
-.endm
-
-.macro tovirt_novmstack dst, src
-#ifndef CONFIG_VMAP_STACK
- tovirt(\dst, \src)
-#else
- .ifnc \dst, \src
- mr \dst, \src
- .endif
-#endif
-.endm
-
-.macro tophys_novmstack dst, src
-#ifndef CONFIG_VMAP_STACK
- tophys(\dst, \src)
-#else
- .ifnc \dst, \src
- mr \dst, \src
- .endif
-#endif
-.endm
-
/*
* Note: code which follows this uses cr0.eq (set if from kernel),
* r11, r12 (SRR0), and r9 (SRR1).
@@ -266,7 +187,6 @@
ret_from_except)

.macro vmap_stack_overflow_exception
-#ifdef CONFIG_VMAP_STACK
#ifdef CONFIG_SMP
mfspr r1, SPRN_SPRG_THREAD
lwz r1, TASK_CPU - THREAD(r1)
@@ -285,7 +205,6 @@
SAVE_NVGPRS(r11)
addi r3, r1, STACK_FRAME_OVERHEAD
EXC_XFER_STD(0, stack_overflow_exception)
-#endif
.endm

#endif /* __HEAD_32_H__ */
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 72e4962902de..7da673ec63ef 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -111,12 +111,10 @@ _ENTRY(crit_esr)
mfspr r11,SPRN_SRR1
stw r10,crit_srr0@l(0)
stw r11,crit_srr1@l(0)
-#ifdef CONFIG_VMAP_STACK
mfspr r10,SPRN_DEAR
mfspr r11,SPRN_ESR
stw r10,crit_dear@l(0)
stw r11,crit_esr@l(0)
-#endif
mfcr r10 /* save CR in r10 for now */
mfspr r11,SPRN_SRR3 /* check whether user or kernel */
andi. r11,r11,MSR_PR
@@ -126,7 +124,6 @@ _ENTRY(crit_esr)
/* COMING FROM USER MODE */
mfspr r11,SPRN_SPRG_THREAD /* if from user, start at top of */
lwz r11,TASK_STACK-THREAD(r11) /* this thread's kernel stack */
-#ifdef CONFIG_VMAP_STACK
1: stw r1,crit_r1@l(0)
addi r1,r11,THREAD_SIZE-INT_FRAME_SIZE /* Alloc an excpt frm */
LOAD_REG_IMMEDIATE(r11,MSR_KERNEL & ~(MSR_IR | MSR_RI))
@@ -136,35 +133,18 @@ _ENTRY(crit_esr)
stw r11,GPR1(r1)
stw r11,0(r1)
mr r11,r1
-#else
-1: addi r11,r11,THREAD_SIZE-INT_FRAME_SIZE /* Alloc an excpt frm */
- tophys(r11,r11)
- stw r1,GPR1(r11)
- stw r1,0(r11)
- tovirt(r1,r11)
-#endif
stw r10,_CCR(r11) /* save various registers */
stw r12,GPR12(r11)
stw r9,GPR9(r11)
mflr r10
stw r10,_LINK(r11)
-#ifdef CONFIG_VMAP_STACK
lis r9,PAGE_OFFSET@ha
lwz r10,crit_r10@l(r9)
lwz r12,crit_r11@l(r9)
-#else
- lwz r10,crit_r10@l(0)
- lwz r12,crit_r11@l(0)
-#endif
stw r10,GPR10(r11)
stw r12,GPR11(r11)
-#ifdef CONFIG_VMAP_STACK
lwz r12,crit_dear@l(r9)
lwz r9,crit_esr@l(r9)
-#else
- mfspr r12,SPRN_DEAR /* save DEAR and ESR in the frame */
- mfspr r9,SPRN_ESR /* in them at the point where the */
-#endif
stw r12,_DEAR(r11) /* since they may have had stuff */
stw r9,_ESR(r11) /* exception was taken */
mfspr r12,SPRN_SRR2
@@ -220,7 +200,6 @@ _ENTRY(crit_esr)
*/
START_EXCEPTION(0x0300, DataStorage)
EXCEPTION_PROLOG handle_dar_dsisr=1
- save_dar_dsisr_on_stack r4, r5, r11
EXC_XFER_LITE(0x300, handle_page_fault)

/*
@@ -240,14 +219,12 @@ _ENTRY(crit_esr)
/* 0x0600 - Alignment Exception */
START_EXCEPTION(0x0600, Alignment)
EXCEPTION_PROLOG handle_dar_dsisr=1
- save_dar_dsisr_on_stack r4, r5, r11
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

/* 0x0700 - Program Exception */
START_EXCEPTION(0x0700, ProgramCheck)
EXCEPTION_PROLOG handle_dar_dsisr=1
- save_dar_dsisr_on_stack r4, r5, r11
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x700, program_check_exception)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 46dff3f9c31f..792e2fd86479 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -124,7 +124,6 @@ instruction_counter:
. = 0x200
MachineCheck:
EXCEPTION_PROLOG handle_dar_dsisr=1
- save_dar_dsisr_on_stack r4, r5, r11
li r6, RPN_PATTERN
mtspr SPRN_DAR, r6 /* Tag DAR, to be used in DTLB Error */
addi r3,r1,STACK_FRAME_OVERHEAD
@@ -137,7 +136,6 @@ MachineCheck:
. = 0x600
Alignment:
EXCEPTION_PROLOG handle_dar_dsisr=1
- save_dar_dsisr_on_stack r4, r5, r11
li r6, RPN_PATTERN
mtspr SPRN_DAR, r6 /* Tag DAR, to be used in DTLB Error */
addi r3,r1,STACK_FRAME_OVERHEAD
@@ -333,21 +331,16 @@ DataTLBError:
cmpwi cr1, r11, RPN_PATTERN
beq- cr1, FixupDAR /* must be a buggy dcbX, icbi insn. */
DARFixed:/* Return from dcbx instruction bug workaround */
-#ifdef CONFIG_VMAP_STACK
li r11, RPN_PATTERN
mtspr SPRN_DAR, r11 /* Tag DAR, to be used in DTLB Error */
-#endif
EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
- get_and_save_dar_dsisr_on_stack r4, r5, r11
+ lwz r4, _DAR(r11)
+ lwz r5, _DSISR(r11)
andis. r10,r5,DSISR_NOHPTE@h
beq+ .Ldtlbie
tlbie r4
.Ldtlbie:
-#ifndef CONFIG_VMAP_STACK
- li r10,RPN_PATTERN
- mtspr SPRN_DAR,r10 /* Tag DAR, to be used in DTLB Error */
-#endif
/* 0x300 is DataAccess exception, needed by bad_page_fault() */
EXC_XFER_LITE(0x300, handle_page_fault)

@@ -364,10 +357,6 @@ do_databreakpoint:
addi r3,r1,STACK_FRAME_OVERHEAD
mfspr r4,SPRN_BAR
stw r4,_DAR(r11)
-#ifndef CONFIG_VMAP_STACK
- mfspr r5,SPRN_DSISR
- stw r5,_DSISR(r11)
-#endif
EXC_XFER_STD(0x1c00, do_break)

. = 0x1c00
@@ -510,14 +499,10 @@ FixupDAR:/* Entry point for dcbx workaround. */
152:
mfdar r11
mtctr r11 /* restore ctr reg from DAR */
-#ifdef CONFIG_VMAP_STACK
mfspr r11, SPRN_SPRG_THREAD
stw r10, DAR(r11)
mfspr r10, SPRN_DSISR
stw r10, DSISR(r11)
-#else
- mtdar r10 /* save fault EA to DAR */
-#endif
mfspr r10,SPRN_M_TW
b DARFixed /* Go back to normal TLB handling */

diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 727fdab557c9..59efbee7c080 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -260,21 +260,14 @@ __secondary_hold_acknowledge:
MachineCheck:
EXCEPTION_PROLOG_0
#ifdef CONFIG_PPC_CHRP
-#ifdef CONFIG_VMAP_STACK
mtspr SPRN_SPRG_SCRATCH2,r1
mfspr r1, SPRN_SPRG_THREAD
lwz r1, RTAS_SP(r1)
cmpwi cr1, r1, 0
bne cr1, 7f
mfspr r1, SPRN_SPRG_SCRATCH2
-#else
- mfspr r11, SPRN_SPRG_THREAD
- lwz r11, RTAS_SP(r11)
- cmpwi cr1, r11, 0
- bne cr1, 7f
-#endif
#endif /* CONFIG_PPC_CHRP */
- EXCEPTION_PROLOG_1 for_rtas=1
+ EXCEPTION_PROLOG_1
7: EXCEPTION_PROLOG_2
addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_CHRP
@@ -288,7 +281,6 @@ MachineCheck:
. = 0x300
DO_KVM 0x300
DataAccess:
-#ifdef CONFIG_VMAP_STACK
#ifdef CONFIG_PPC_BOOK3S_604
BEGIN_MMU_FTR_SECTION
mtspr SPRN_SPRG_SCRATCH2,r10
@@ -310,29 +302,11 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
1: EXCEPTION_PROLOG_0 handle_dar_dsisr=1
EXCEPTION_PROLOG_1
b handle_page_fault_tramp_1
-#else /* CONFIG_VMAP_STACK */
- EXCEPTION_PROLOG handle_dar_dsisr=1
- get_and_save_dar_dsisr_on_stack r4, r5, r11
-#ifdef CONFIG_PPC_BOOK3S_604
-BEGIN_MMU_FTR_SECTION
- andis. r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h
- bne handle_page_fault_tramp_2 /* if not, try to put a PTE */
- rlwinm r3, r5, 32 - 15, 21, 21 /* DSISR_STORE -> _PAGE_RW */
- bl hash_page
- b handle_page_fault_tramp_1
-MMU_FTR_SECTION_ELSE
-#endif
- b handle_page_fault_tramp_2
-#ifdef CONFIG_PPC_BOOK3S_604
-ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
-#endif
-#endif /* CONFIG_VMAP_STACK */

/* Instruction access exception. */
. = 0x400
DO_KVM 0x400
InstructionAccess:
-#ifdef CONFIG_VMAP_STACK
mtspr SPRN_SPRG_SCRATCH0,r10
mtspr SPRN_SPRG_SCRATCH1,r11
mfspr r10, SPRN_SPRG_THREAD
@@ -353,18 +327,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)

EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2
-#else /* CONFIG_VMAP_STACK */
- EXCEPTION_PROLOG
- andis. r0,r9,SRR1_ISI_NOPT@h /* no pte found? */
- beq 1f /* if so, try to put a PTE */
- li r3,0 /* into the hash table */
- mr r4,r12 /* SRR0 is fault address */
-#ifdef CONFIG_PPC_BOOK3S_604
-BEGIN_MMU_FTR_SECTION
- bl hash_page
-END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
-#endif
-#endif /* CONFIG_VMAP_STACK */
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
stw r5, _DSISR(r11)
stw r12, _DAR(r11)
@@ -378,7 +340,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
DO_KVM 0x600
Alignment:
EXCEPTION_PROLOG handle_dar_dsisr=1
- save_dar_dsisr_on_stack r4, r5, r11
addi r3,r1,STACK_FRAME_OVERHEAD
b alignment_exception_tramp

@@ -686,18 +647,13 @@ alignment_exception_tramp:
EXC_XFER_STD(0x600, alignment_exception)

handle_page_fault_tramp_1:
-#ifdef CONFIG_VMAP_STACK
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
-#endif
lwz r5, _DSISR(r11)
- /* fall through */
-handle_page_fault_tramp_2:
andis. r0, r5, DSISR_DABRMATCH@h
bne- 1f
EXC_XFER_LITE(0x300, handle_page_fault)
1: EXC_XFER_STD(0x300, do_break)

-#ifdef CONFIG_VMAP_STACK
#ifdef CONFIG_PPC_BOOK3S_604
.macro save_regs_thread thread
stw r0, THR0(\thread)
@@ -772,6 +728,7 @@ fast_hash_page_return:
rfi
#endif /* CONFIG_PPC_BOOK3S_604 */

+#ifdef CONFIG_VMAP_STACK
stack_overflow:
vmap_stack_overflow_exception
#endif
diff --git a/arch/powerpc/kernel/idle_6xx.S b/arch/powerpc/kernel/idle_6xx.S
index 69df840f7253..153366e178c4 100644
--- a/arch/powerpc/kernel/idle_6xx.S
+++ b/arch/powerpc/kernel/idle_6xx.S
@@ -145,9 +145,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)

/*
* Return from NAP/DOZE mode, restore some CPU specific registers,
- * we are called with DR/IR still off and r2 containing physical
- * address of current. R11 points to the exception frame (physical
- * address). We have to preserve r10.
+ * R11 points to the exception frame. We have to preserve r10.
*/
_GLOBAL(power_save_ppc32_restore)
lwz r9,_LINK(r11) /* interrupted in ppc6xx_idle: */
@@ -166,11 +164,7 @@ BEGIN_FTR_SECTION
mfspr r9,SPRN_HID0
andis. r9,r9,HID0_NAP@h
beq 1f
-#ifdef CONFIG_VMAP_STACK
addis r9, r11, nap_save_msscr0@ha
-#else
- addis r9,r11,(nap_save_msscr0-KERNELBASE)@ha
-#endif
lwz r9,nap_save_msscr0@l(r9)
mtspr SPRN_MSSCR0, r9
sync
@@ -178,11 +172,7 @@ BEGIN_FTR_SECTION
1:
END_FTR_SECTION_IFSET(CPU_FTR_NAP_DISABLE_L2_PR)
BEGIN_FTR_SECTION
-#ifdef CONFIG_VMAP_STACK
addis r9, r11, nap_save_hid1@ha
-#else
- addis r9,r11,(nap_save_hid1-KERNELBASE)@ha
-#endif
lwz r9,nap_save_hid1@l(r9)
mtspr SPRN_HID1, r9
END_FTR_SECTION_IFSET(CPU_FTR_DUAL_PLL_750FX)
diff --git a/arch/powerpc/kernel/idle_e500.S b/arch/powerpc/kernel/idle_e500.S
index 72c85b6f3898..7795727e7f08 100644
--- a/arch/powerpc/kernel/idle_e500.S
+++ b/arch/powerpc/kernel/idle_e500.S
@@ -74,8 +74,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)

/*
* Return from NAP/DOZE mode, restore some CPU specific registers,
- * r2 containing physical address of current.
- * r11 points to the exception frame (physical address).
+ * r2 containing address of current.
+ * r11 points to the exception frame.
* We have to preserve r10.
*/
_GLOBAL(power_save_ppc32_restore)
diff --git a/arch/powerpc/kernel/vector.S b/arch/powerpc/kernel/vector.S
index 801dc28fdcca..f5a52f444e36 100644
--- a/arch/powerpc/kernel/vector.S
+++ b/arch/powerpc/kernel/vector.S
@@ -67,9 +67,7 @@ _GLOBAL(load_up_altivec)
#ifdef CONFIG_PPC32
mfspr r5,SPRN_SPRG_THREAD /* current task's THREAD (phys) */
oris r9,r9,MSR_VEC@h
-#ifdef CONFIG_VMAP_STACK
tovirt(r5, r5)
-#endif
#else
ld r4,PACACURRENT(r13)
addi r5,r4,THREAD /* Get THREAD */
diff --git a/arch/powerpc/mm/book3s32/hash_low.S b/arch/powerpc/mm/book3s32/hash_low.S
index 0e6dc830c38b..fb4233a5bdf7 100644
--- a/arch/powerpc/mm/book3s32/hash_low.S
+++ b/arch/powerpc/mm/book3s32/hash_low.S
@@ -140,10 +140,6 @@ _GLOBAL(hash_page)
bne- .Lretry /* retry if someone got there first */

mfsrin r3,r4 /* get segment reg for segment */
-#ifndef CONFIG_VMAP_STACK
- mfctr r0
- stw r0,_CTR(r11)
-#endif
bl create_hpte /* add the hash table entry */

#ifdef CONFIG_SMP
@@ -152,17 +148,7 @@ _GLOBAL(hash_page)
li r0,0
stw r0, (mmu_hash_lock - PAGE_OFFSET)@l(r8)
#endif
-
-#ifdef CONFIG_VMAP_STACK
b fast_hash_page_return
-#else
- /* Return from the exception */
- lwz r5,_CTR(r11)
- mtctr r5
- lwz r0,GPR0(r11)
- lwz r8,GPR8(r11)
- b fast_exception_return
-#endif

#ifdef CONFIG_SMP
.Lhash_page_out:
--
2.25.0

2021-03-09 12:11:36

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 16/43] powerpc/32: Enable instruction translation at the same time as data translation

On 40x and 8xx, kernel text is pinned.
On book3s/32, kernel text is mapped by BATs.

Enable instruction translation at the same time as data translation; it
makes things simpler.

In the syscall handler, MSR_RI can also be set at the same time because
srr0/srr1 are already saved and r1 is set properly.

On booke, translation is always on, so in the end all PPC32 platforms
have translation on early. Just update the MSR.

Also update comment in power_save_ppc32_restore().
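
At its core the change is the classic srr0/srr1 + rfi trick; here is a
minimal sketch of the head_32.h part, distilled from the hunk below
(register usage and label as in the patch):

	LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~MSR_RI)	/* re-enable MMU, keep RI clear for now */
	mtspr	SPRN_SRR1, r11
	lis	r11, 1f@h
	ori	r11, r11, 1f@l
	mtspr	SPRN_SRR0, r11
	rfi			/* resume at 1: with translation enabled */
1: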

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 30 ++++++++++++------------------
arch/powerpc/kernel/head_32.h | 13 ++++++++-----
arch/powerpc/kernel/head_40x.S | 10 +++++++---
arch/powerpc/kernel/head_booke.h | 6 ++++--
4 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 33e97032ca25..01a064c8a96a 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -153,19 +153,11 @@ transfer_to_handler:
transfer_to_handler_cont:
3:
mflr r9
- tovirt(r9, r9)
lwz r11,0(r9) /* virtual address of handler */
lwz r9,4(r9) /* where to go when done */
-#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
- mtspr SPRN_NRI, r0
-#endif
- mtspr SPRN_SRR0,r11
- mtspr SPRN_SRR1,r10
+ mtctr r11
mtlr r9
- rfi /* jump to handler, enable MMU */
-#ifdef CONFIG_40x
- b . /* Prevent prefetch past rfi */
-#endif
+ bctr /* jump to handler */

#if defined (CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
4: rlwinm r12,r12,0,~_TLF_NAPPING
@@ -444,8 +436,6 @@ fee_restarts:
li r10,-1
stw r10,_TRAP(r11)
addi r3,r1,STACK_FRAME_OVERHEAD
- lis r10,MSR_KERNEL@h
- ori r10,r10,MSR_KERNEL@l
bl transfer_to_handler_full
.long unrecoverable_exception
.long ret_from_except
@@ -945,16 +935,20 @@ _GLOBAL(enter_rtas)
mtspr SPRN_SRR1,r9
rfi
1:
- li r0, MSR_KERNEL & ~MSR_IR /* can take DTLB miss */
- mtmsr r0
- isync
+ lis r8, 1f@h
+ ori r8, r8, 1f@l
+ LOAD_REG_IMMEDIATE(r9,MSR_KERNEL)
+ mtspr SPRN_SRR0,r8
+ mtspr SPRN_SRR1,r9
+ rfi /* Reactivate MMU translation */
+1:
lwz r8,INT_FRAME_SIZE+4(r1) /* get return address */
lwz r9,8(r1) /* original msr value */
addi r1,r1,INT_FRAME_SIZE
li r0,0
stw r0, THREAD + RTAS_SP(r2)
- mtspr SPRN_SRR0,r8
- mtspr SPRN_SRR1,r9
- rfi /* return to caller */
+ mtlr r8
+ mtmsr r9
+ blr /* return to caller */
_ASM_NOKPROBE_SYMBOL(enter_rtas)
#endif /* CONFIG_PPC_RTAS */
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 910f86642eec..88b02bd91e8e 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -63,10 +63,14 @@
mtspr SPRN_DAR, r11 /* Tag DAR, to be used in DTLB Error */
.endif
#endif
- LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~(MSR_IR | MSR_RI)) /* can take DTLB miss */
- mtmsr r11
- isync
+ LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~MSR_RI) /* re-enable MMU */
+ mtspr SPRN_SRR1, r11
+ lis r11, 1f@h
+ ori r11, r11, 1f@l
+ mtspr SPRN_SRR0, r11
mfspr r11, SPRN_SPRG_SCRATCH2
+ rfi
+1:
stw r11,GPR1(r1)
stw r11,0(r1)
mr r11, r1
@@ -94,7 +98,7 @@
#elif defined(CONFIG_PPC_8xx)
mtspr SPRN_EID, r2 /* Set MSR_RI */
#else
- li r10, MSR_KERNEL & ~MSR_IR /* can take exceptions */
+ li r10, MSR_KERNEL /* can take exceptions */
mtmsr r10 /* (except for mach check in rtas) */
#endif
stw r0,GPR0(r11)
@@ -179,7 +183,6 @@
#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
li r10,trap; \
stw r10,_TRAP(r11); \
- LOAD_REG_IMMEDIATE(r10, msr); \
bl tfer; \
.long hdlr; \
.long ret
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 7da673ec63ef..55fa99c5085c 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -126,9 +126,13 @@ _ENTRY(crit_esr)
lwz r11,TASK_STACK-THREAD(r11) /* this thread's kernel stack */
1: stw r1,crit_r1@l(0)
addi r1,r11,THREAD_SIZE-INT_FRAME_SIZE /* Alloc an excpt frm */
- LOAD_REG_IMMEDIATE(r11,MSR_KERNEL & ~(MSR_IR | MSR_RI))
- mtmsr r11
- isync
+ LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)) /* re-enable MMU */
+ mtspr SPRN_SRR1, r11
+ lis r11, 1f@h
+ ori r11, r11, 1f@l
+ mtspr SPRN_SRR0, r11
+ rfi
+1:
lwz r11,crit_r1@l(0)
stw r11,GPR1(r1)
stw r11,0(r1)
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 4a5f0c9b652b..f712b9bc6d62 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -53,6 +53,8 @@ END_BTB_FLUSH_SECTION
mfspr r11, SPRN_SRR1; \
DO_KVM BOOKE_INTERRUPT_##intno SPRN_SRR1; \
andi. r11, r11, MSR_PR; /* check whether user or kernel */\
+ LOAD_REG_IMMEDIATE(r11, MSR_KERNEL); \
+ mtmsr r11; \
mr r11, r1; \
beq 1f; \
BOOKE_CLEAR_BTB(r11) \
@@ -192,6 +194,8 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
DO_KVM BOOKE_INTERRUPT_##intno exc_level_srr1; \
BOOKE_CLEAR_BTB(r10) \
andi. r11,r11,MSR_PR; \
+ LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)); \
+ mtmsr r11; \
mfspr r11,SPRN_SPRG_THREAD; /* if from user, start at top of */\
lwz r11, TASK_STACK - THREAD(r11); /* this thread's kernel stack */\
addi r11,r11,EXC_LVL_FRAME_OVERHEAD; /* allocate stack frame */\
@@ -282,8 +286,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
li r10,trap; \
stw r10,_TRAP(r11); \
- lis r10,msr@h; \
- ori r10,r10,msr@l; \
bl tfer; \
.long hdlr; \
.long ret
--
2.25.0

2021-03-09 12:11:37

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 15/43] powerpc/32: Tag DAR in EXCEPTION_PROLOG_2 for the 8xx

The 8xx requires tagging the DAR with a magic value in order to
fix up the DAR on faults generated by 'dcbX', as the 8xx forgets to
update the DAR for those faults.

Do the tagging as early as possible, that is, before enabling the MMU.
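
For context, this is the tag that the DataTLBError handler already
relies on to spot the buggy instructions; its check, visible in the
hunk below, is simply:

	mfspr	r11, SPRN_DAR
	cmpwi	cr1, r11, RPN_PATTERN
	beq-	cr1, FixupDAR	/* DAR untouched: must be a buggy dcbX/icbi insn */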

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 6 ++++++
arch/powerpc/kernel/head_8xx.S | 18 ++++++------------
2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 1b707755c68e..910f86642eec 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -57,6 +57,12 @@
.endm

.macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
+#ifdef CONFIG_PPC_8xx
+ .if \handle_dar_dsisr
+ li r11, RPN_PATTERN
+ mtspr SPRN_DAR, r11 /* Tag DAR, to be used in DTLB Error */
+ .endif
+#endif
LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~(MSR_IR | MSR_RI)) /* can take DTLB miss */
mtmsr r11
isync
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 792e2fd86479..cdbfa9d41353 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -30,6 +30,12 @@
#include <asm/export.h>
#include <asm/code-patching-asm.h>

+/*
+ * Value for the bits that have fixed value in RPN entries.
+ * Also used for tagging DAR for DTLBerror.
+ */
+#define RPN_PATTERN 0x00f0
+
#include "head_32.h"

.macro compare_to_kernel_boundary scratch, addr
@@ -42,12 +48,6 @@
#endif
.endm

-/*
- * Value for the bits that have fixed value in RPN entries.
- * Also used for tagging DAR for DTLBerror.
- */
-#define RPN_PATTERN 0x00f0
-
#define PAGE_SHIFT_512K 19
#define PAGE_SHIFT_8M 23

@@ -124,8 +124,6 @@ instruction_counter:
. = 0x200
MachineCheck:
EXCEPTION_PROLOG handle_dar_dsisr=1
- li r6, RPN_PATTERN
- mtspr SPRN_DAR, r6 /* Tag DAR, to be used in DTLB Error */
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x200, machine_check_exception)

@@ -136,8 +134,6 @@ MachineCheck:
. = 0x600
Alignment:
EXCEPTION_PROLOG handle_dar_dsisr=1
- li r6, RPN_PATTERN
- mtspr SPRN_DAR, r6 /* Tag DAR, to be used in DTLB Error */
addi r3,r1,STACK_FRAME_OVERHEAD
b .Lalignment_exception_ool

@@ -331,8 +327,6 @@ DataTLBError:
cmpwi cr1, r11, RPN_PATTERN
beq- cr1, FixupDAR /* must be a buggy dcbX, icbi insn. */
DARFixed:/* Return from dcbx instruction bug workaround */
- li r11, RPN_PATTERN
- mtspr SPRN_DAR, r11 /* Tag DAR, to be used in DTLB Error */
EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
lwz r4, _DAR(r11)
--
2.25.0

2021-03-09 12:12:04

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 23/43] powerpc/32: Perform normal function call in exception entry

Now that the MMU is re-enabled before calling the transfer function,
we no longer need the hack of placing the addresses of the handler and
of the return function right after the 'bl' to the transfer function,
which that function then reads back relative to 'lr'.

Do a regular call to the transfer function, then to the handler,
then branch to the return function.
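
With that, the transfer template in head_32.h becomes a plain sequence
of calls (sketch of the macro as it stands after this patch):

#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret)	\
	li	r10,trap;				\
	stw	r10,_TRAP(r11);				\
	bl	tfer;					\
	bl	hdlr;					\
	b	ret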

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 14 ++++----------
arch/powerpc/kernel/head_32.h | 4 ++--
arch/powerpc/kernel/head_booke.h | 6 +++---
3 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index ad1fd33e1126..fb849ef922fb 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -104,7 +104,7 @@ transfer_to_handler:
#ifdef CONFIG_PPC_BOOK3S_32
kuep_lock r11, r12
#endif
- b 3f
+ blr

/* if from kernel, check interrupted DOZE/NAP mode */
2:
@@ -118,13 +118,7 @@ transfer_to_handler:
#endif /* CONFIG_PPC_BOOK3S_32 || CONFIG_E500 */
.globl transfer_to_handler_cont
transfer_to_handler_cont:
-3:
- mflr r9
- lwz r11,0(r9) /* virtual address of handler */
- lwz r9,4(r9) /* where to go when done */
- mtctr r11
- mtlr r9
- bctr /* jump to handler */
+ blr

#if defined (CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
4: rlwinm r12,r12,0,~_TLF_NAPPING
@@ -404,8 +398,8 @@ fee_restarts:
stw r10,_TRAP(r11)
addi r3,r1,STACK_FRAME_OVERHEAD
bl transfer_to_handler_full
- .long unrecoverable_exception
- .long ret_from_except
+ bl unrecoverable_exception
+ b ret_from_except
#endif

.globl ret_from_except_full
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 160ebd573c37..e09585b88ba7 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -190,8 +190,8 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
li r10,trap; \
stw r10,_TRAP(r11); \
bl tfer; \
- .long hdlr; \
- .long ret
+ bl hdlr; \
+ b ret

#define EXC_XFER_STD(n, hdlr) \
EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler_full, \
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index a127d5e7efb4..3707f49f0b78 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -322,9 +322,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
li r10,trap; \
stw r10,_TRAP(r11); \
- bl tfer; \
- .long hdlr; \
- .long ret
+ bl tfer; \
+ bl hdlr; \
+ b ret; \

#define EXC_XFER_STD(n, hdlr) \
EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler_full, \
--
2.25.0

2021-03-09 12:12:05

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 24/43] powerpc/32: Always save non volatile registers on exception entry

In preparation for handling exception entry and exit in C, and in
order to simplify that handling, always save the non-volatile registers
when entering an exception.
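
Concretely, SAVE_NVGPRS() moves to the very top of transfer_to_handler,
so r13-r31 are always in the exception frame and FULL_REGS() can become
trivially true (sketch taken from the hunk below):

transfer_to_handler:
	SAVE_NVGPRS(r11)	/* non-volatile registers now always saved */
	stw	r2,GPR2(r11)
	stw	r12,_NIP(r11)
	stw	r9,_MSR(r11)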

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/ptrace.h | 6 ++----
arch/powerpc/kernel/entry_32.S | 13 +------------
arch/powerpc/kernel/head_32.h | 3 +--
arch/powerpc/kernel/head_booke.h | 2 +-
4 files changed, 5 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 975ba260006a..0a5d8c6b13c4 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -209,16 +209,14 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
*/
#define TRAP_FLAGS_MASK 0x1F
#define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
-#define FULL_REGS(regs) (((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs) ((regs)->trap |= 1)
+#define FULL_REGS(regs) true
+#define SET_FULL_REGS(regs) do { } while (0)
#define IS_CRITICAL_EXC(regs) (((regs)->trap & 2) != 0)
#define IS_MCHECK_EXC(regs) (((regs)->trap & 4) != 0)
#define IS_DEBUG_EXC(regs) (((regs)->trap & 8) != 0)
#define NV_REG_POISON 0xdeadbeef
#define CHECK_FULL_REGS(regs) \
do { \
- if ((regs)->trap & 1) \
- printk(KERN_CRIT "%s: partial register set\n", __func__); \
} while (0)
#endif /* __powerpc64__ */

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index fb849ef922fb..7084289994b3 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -81,12 +81,12 @@ _ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
*/
.globl transfer_to_handler_full
transfer_to_handler_full:
- SAVE_NVGPRS(r11)
_ASM_NOKPROBE_SYMBOL(transfer_to_handler_full)
/* fall through */

.globl transfer_to_handler
transfer_to_handler:
+ SAVE_NVGPRS(r11)
stw r2,GPR2(r11)
stw r12,_NIP(r11)
stw r9,_MSR(r11)
@@ -234,10 +234,6 @@ handle_page_fault:
bl do_page_fault
cmpwi r3,0
beq+ ret_from_except
- SAVE_NVGPRS(r1)
- lwz r0,_TRAP(r1)
- clrrwi r0,r0,1
- stw r0,_TRAP(r1)
mr r4,r3 /* err arg for bad_page_fault */
addi r3,r1,STACK_FRAME_OVERHEAD
bl __bad_page_fault
@@ -810,13 +806,6 @@ recheck:
do_user_signal: /* r10 contains MSR_KERNEL here */
ori r10,r10,MSR_EE
mtmsr r10 /* hard-enable interrupts */
- /* save r13-r31 in the exception frame, if not already done */
- lwz r3,_TRAP(r1)
- andi. r0,r3,1
- beq 2f
- SAVE_NVGPRS(r1)
- rlwinm r3,r3,0,0,30
- stw r3,_TRAP(r1)
2: addi r3,r1,STACK_FRAME_OVERHEAD
mr r4,r9
bl do_notify_resume
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index e09585b88ba7..087445e45489 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -198,7 +198,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
ret_from_except_full)

#define EXC_XFER_LITE(n, hdlr) \
- EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, transfer_to_handler, \
+ EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler, \
ret_from_except)

.macro vmap_stack_overflow_exception
@@ -215,7 +215,6 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
lwz r1, emergency_ctx@l(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
EXCEPTION_PROLOG_2 vmap_stack_overflow
- SAVE_NVGPRS(r11)
addi r3, r1, STACK_FRAME_OVERHEAD
EXC_XFER_STD(0, stack_overflow_exception)
.endm
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 3707f49f0b78..b31bf9e833c0 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -331,7 +331,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
ret_from_except_full)

#define EXC_XFER_LITE(n, hdlr) \
- EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, transfer_to_handler, \
+ EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler, \
ret_from_except)

/* Check for a single step debug exception while in an exception
--
2.25.0

2021-03-09 12:12:04

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 22/43] powerpc/32: Refactor booke critical registers saving

Refactor the saving of the booke critical registers into a few macros
and move it directly into the exception prolog.

Keep the dedicated crit/debug/mcheck transfer_to_handler entry points
for the moment although they are now empty. They will be removed in a
later patch to reduce churn.
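
After the move, a critical exception entry reads as a straight
composition of those macros (sketch based on the head_booke.h hunk
below):

#define CRITICAL_EXCEPTION(n, intno, label, hdlr)			\
	START_EXCEPTION(label);						\
	CRITICAL_EXCEPTION_PROLOG(intno);				\
	addi	r3,r1,STACK_FRAME_OVERHEAD;				\
	SAVE_MMU_REGS;							\
	SAVE_xSRR(SRR);							\
	EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
			  crit_transfer_to_handler, ret_from_crit_exc)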

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 33 -------------------------
arch/powerpc/kernel/head_booke.h | 41 ++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 01a064c8a96a..ad1fd33e1126 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -51,49 +51,16 @@
#ifdef CONFIG_BOOKE
.globl mcheck_transfer_to_handler
mcheck_transfer_to_handler:
- mfspr r0,SPRN_DSRR0
- stw r0,_DSRR0(r11)
- mfspr r0,SPRN_DSRR1
- stw r0,_DSRR1(r11)
/* fall through */
_ASM_NOKPROBE_SYMBOL(mcheck_transfer_to_handler)

.globl debug_transfer_to_handler
debug_transfer_to_handler:
- mfspr r0,SPRN_CSRR0
- stw r0,_CSRR0(r11)
- mfspr r0,SPRN_CSRR1
- stw r0,_CSRR1(r11)
/* fall through */
_ASM_NOKPROBE_SYMBOL(debug_transfer_to_handler)

.globl crit_transfer_to_handler
crit_transfer_to_handler:
-#ifdef CONFIG_PPC_BOOK3E_MMU
- mfspr r0,SPRN_MAS0
- stw r0,MAS0(r11)
- mfspr r0,SPRN_MAS1
- stw r0,MAS1(r11)
- mfspr r0,SPRN_MAS2
- stw r0,MAS2(r11)
- mfspr r0,SPRN_MAS3
- stw r0,MAS3(r11)
- mfspr r0,SPRN_MAS6
- stw r0,MAS6(r11)
-#ifdef CONFIG_PHYS_64BIT
- mfspr r0,SPRN_MAS7
- stw r0,MAS7(r11)
-#endif /* CONFIG_PHYS_64BIT */
-#endif /* CONFIG_PPC_BOOK3E_MMU */
-#ifdef CONFIG_44x
- mfspr r0,SPRN_MMUCR
- stw r0,MMUCR(r11)
-#endif
- mfspr r0,SPRN_SRR0
- stw r0,_SRR0(r11)
- mfspr r0,SPRN_SRR1
- stw r0,_SRR1(r11)
-
/* fall through */
_ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
#endif
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index f712b9bc6d62..a127d5e7efb4 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -229,6 +229,36 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
SAVE_4GPRS(3, r11); \
SAVE_2GPRS(7, r11)

+#define SAVE_xSRR(xSRR) \
+ mfspr r0,SPRN_##xSRR##0; \
+ stw r0,_##xSRR##0(r1); \
+ mfspr r0,SPRN_##xSRR##1; \
+ stw r0,_##xSRR##1(r1)
+
+
+.macro SAVE_MMU_REGS
+#ifdef CONFIG_PPC_BOOK3E_MMU
+ mfspr r0,SPRN_MAS0
+ stw r0,MAS0(r1)
+ mfspr r0,SPRN_MAS1
+ stw r0,MAS1(r1)
+ mfspr r0,SPRN_MAS2
+ stw r0,MAS2(r1)
+ mfspr r0,SPRN_MAS3
+ stw r0,MAS3(r1)
+ mfspr r0,SPRN_MAS6
+ stw r0,MAS6(r1)
+#ifdef CONFIG_PHYS_64BIT
+ mfspr r0,SPRN_MAS7
+ stw r0,MAS7(r1)
+#endif /* CONFIG_PHYS_64BIT */
+#endif /* CONFIG_PPC_BOOK3E_MMU */
+#ifdef CONFIG_44x
+ mfspr r0,SPRN_MMUCR
+ stw r0,MMUCR(r1)
+#endif
+.endm
+
#define CRITICAL_EXCEPTION_PROLOG(intno) \
EXC_LEVEL_EXCEPTION_PROLOG(CRIT, intno, SPRN_CSRR0, SPRN_CSRR1)
#define DEBUG_EXCEPTION_PROLOG \
@@ -271,6 +301,8 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
START_EXCEPTION(label); \
CRITICAL_EXCEPTION_PROLOG(intno); \
addi r3,r1,STACK_FRAME_OVERHEAD; \
+ SAVE_MMU_REGS; \
+ SAVE_xSRR(SRR); \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
crit_transfer_to_handler, ret_from_crit_exc)

@@ -280,6 +312,10 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
mfspr r5,SPRN_ESR; \
stw r5,_ESR(r11); \
addi r3,r1,STACK_FRAME_OVERHEAD; \
+ SAVE_xSRR(DSRR); \
+ SAVE_xSRR(CSRR); \
+ SAVE_MMU_REGS; \
+ SAVE_xSRR(SRR); \
EXC_XFER_TEMPLATE(hdlr, n+4, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
mcheck_transfer_to_handler, ret_from_mcheck_exc)

@@ -363,6 +399,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
2: mfspr r4,SPRN_DBSR; \
stw r4,_ESR(r11); /* DebugException takes DBSR in _ESR */\
addi r3,r1,STACK_FRAME_OVERHEAD; \
+ SAVE_xSRR(CSRR); \
+ SAVE_MMU_REGS; \
+ SAVE_xSRR(SRR); \
EXC_XFER_TEMPLATE(DebugException, 0x2008, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), debug_transfer_to_handler, ret_from_debug_exc)

#define DEBUG_CRIT_EXCEPTION \
@@ -417,6 +456,8 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
2: mfspr r4,SPRN_DBSR; \
stw r4,_ESR(r11); /* DebugException takes DBSR in _ESR */\
addi r3,r1,STACK_FRAME_OVERHEAD; \
+ SAVE_MMU_REGS; \
+ SAVE_xSRR(SRR); \
EXC_XFER_TEMPLATE(DebugException, 0x2002, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), crit_transfer_to_handler, ret_from_crit_exc)

#define DATA_STORAGE_EXCEPTION \
--
2.25.0

2021-03-09 12:12:05

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 19/43] powerpc/32: Use START_EXCEPTION() as much as possible

Use START_EXCEPTION() everywhere it is possible.

This will help with proper exception init in future patches.
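
As a reminder, at this point in the series START_EXCEPTION() only sets
the vector origin and the label (plus DO_KVM on book3s); a sketch of
the head_32.h definition being used here:

#ifdef CONFIG_PPC_BOOK3S
#define START_EXCEPTION(n, label)	\
	. = n;				\
	DO_KVM n;			\
	label:
#else
#define START_EXCEPTION(n, label)	\
	. = n;				\
	label:
#endif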

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_40x.S | 12 +++++------
arch/powerpc/kernel/head_8xx.S | 27 +++++++++----------------
arch/powerpc/kernel/head_book3s_32.S | 30 ++++++++--------------------
3 files changed, 22 insertions(+), 47 deletions(-)

diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 55fa99c5085c..c14a71e0d6d3 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -247,17 +247,15 @@ _ENTRY(crit_esr)
EXCEPTION(0x0F00, Trap_0F, unknown_exception, EXC_XFER_STD)

/* 0x1000 - Programmable Interval Timer (PIT) Exception */
- . = 0x1000
+ START_EXCEPTION(0x1000, DecrementerTrap)
b Decrementer

-/* 0x1010 - Fixed Interval Timer (FIT) Exception
-*/
- . = 0x1010
+/* 0x1010 - Fixed Interval Timer (FIT) Exception */
+ START_EXCEPTION(0x1010, FITExceptionTrap)
b FITException

-/* 0x1020 - Watchdog Timer (WDT) Exception
-*/
- . = 0x1020
+/* 0x1020 - Watchdog Timer (WDT) Exception */
+ START_EXCEPTION(0x1020, WDTExceptionTrap)
b WDTException

/* 0x1100 - Data TLB Miss Exception
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index b63445c55f4d..11789a077d76 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -121,8 +121,7 @@ instruction_counter:
EXCEPTION(0x100, Reset, system_reset_exception, EXC_XFER_STD)

/* Machine check */
- . = 0x200
-MachineCheck:
+ START_EXCEPTION(0x200, MachineCheck)
EXCEPTION_PROLOG handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x200, machine_check_exception)
@@ -131,8 +130,7 @@ MachineCheck:
EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)

/* Alignment exception */
- . = 0x600
-Alignment:
+ START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
b .Lalignment_exception_ool
@@ -149,8 +147,7 @@ Alignment:
EXC_XFER_STD(0x600, alignment_exception)

/* System call */
- . = 0xc00
-SystemCall:
+ START_EXCEPTION(0xc00, SystemCall)
SYSCALL_ENTRY 0xc00

/* Single step - not used on 601 */
@@ -161,7 +158,6 @@ SystemCall:
*/
EXCEPTION(0x1000, SoftEmu, emulation_assist_interrupt, EXC_XFER_STD)

- . = 0x1100
/*
* For the MPC8xx, this is a software tablewalk to load the instruction
* TLB. The task switch loads the M_TWB register with the pointer to the first
@@ -183,7 +179,7 @@ SystemCall:
#define INVALIDATE_ADJACENT_PAGES_CPU15(addr, tmp)
#endif

-InstructionTLBMiss:
+ START_EXCEPTION(0x1100, InstructionTLBMiss)
mtspr SPRN_SPRG_SCRATCH2, r10
mtspr SPRN_M_TW, r11

@@ -239,8 +235,7 @@ InstructionTLBMiss:
rfi
#endif

- . = 0x1200
-DataStoreTLBMiss:
+ START_EXCEPTION(0x1200, DataStoreTLBMiss)
mtspr SPRN_SPRG_SCRATCH2, r10
mtspr SPRN_M_TW, r11
mfcr r11
@@ -303,8 +298,7 @@ DataStoreTLBMiss:
* to many reasons, such as executing guarded memory or illegal instruction
* addresses. There is nothing to do but handle a big time error fault.
*/
- . = 0x1300
-InstructionTLBError:
+ START_EXCEPTION(0x1300, InstructionTLBError)
EXCEPTION_PROLOG
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
andis. r10,r9,SRR1_ISI_NOPT@h
@@ -320,8 +314,7 @@ InstructionTLBError:
* many reasons, including a dirty update to a pte. We bail out to
* a higher level function that can handle it.
*/
- . = 0x1400
-DataTLBError:
+ START_EXCEPTION(0x1400, DataTLBError)
EXCEPTION_PROLOG_0 handle_dar_dsisr=1
mfspr r11, SPRN_DAR
cmpwi cr1, r11, RPN_PATTERN
@@ -354,8 +347,7 @@ do_databreakpoint:
stw r4,_DAR(r11)
EXC_XFER_STD(0x1c00, do_break)

- . = 0x1c00
-DataBreakpoint:
+ START_EXCEPTION(0x1c00, DataBreakpoint)
EXCEPTION_PROLOG_0 handle_dar_dsisr=1
mfspr r11, SPRN_SRR0
cmplwi cr1, r11, (.Ldtlbie - PAGE_OFFSET)@l
@@ -368,8 +360,7 @@ DataBreakpoint:
rfi

#ifdef CONFIG_PERF_EVENTS
- . = 0x1d00
-InstructionBreakpoint:
+ START_EXCEPTION(0x1d00, InstructionBreakpoint)
mtspr SPRN_SPRG_SCRATCH0, r10
lwz r10, (instruction_counter - PAGE_OFFSET)@l(0)
addi r10, r10, -1
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 9dc05890477d..8f5c8c8da63d 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -255,9 +255,7 @@ __secondary_hold_acknowledge:
* pointer when we take an exception from supervisor mode.)
* -- paulus.
*/
- . = 0x200
- DO_KVM 0x200
-MachineCheck:
+ START_EXCEPTION(0x200, MachineCheck)
EXCEPTION_PROLOG_0
#ifdef CONFIG_PPC_CHRP
mtspr SPRN_SPRG_SCRATCH2,r1
@@ -278,9 +276,7 @@ MachineCheck:
#endif

/* Data access exception. */
- . = 0x300
- DO_KVM 0x300
-DataAccess:
+ START_EXCEPTION(0x300, DataAccess)
#ifdef CONFIG_PPC_BOOK3S_604
BEGIN_MMU_FTR_SECTION
mtspr SPRN_SPRG_SCRATCH2,r10
@@ -304,9 +300,7 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
b handle_page_fault_tramp_1

/* Instruction access exception. */
- . = 0x400
- DO_KVM 0x400
-InstructionAccess:
+ START_EXCEPTION(0x400, InstructionAccess)
mtspr SPRN_SPRG_SCRATCH0,r10
mtspr SPRN_SPRG_SCRATCH1,r11
mfspr r10, SPRN_SPRG_THREAD
@@ -336,9 +330,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)

/* Alignment exception */
- . = 0x600
- DO_KVM 0x600
-Alignment:
+ START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
b alignment_exception_tramp
@@ -347,9 +339,7 @@ Alignment:
EXCEPTION(0x700, ProgramCheck, program_check_exception, EXC_XFER_STD)

/* Floating-point unavailable */
- . = 0x800
- DO_KVM 0x800
-FPUnavailable:
+ START_EXCEPTION(0x800, FPUnavailable)
#ifdef CONFIG_PPC_FPU
BEGIN_FTR_SECTION
/*
@@ -375,9 +365,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
EXCEPTION(0xb00, Trap_0b, unknown_exception, EXC_XFER_STD)

/* System call */
- . = 0xc00
- DO_KVM 0xc00
-SystemCall:
+ START_EXCEPTION(0xc00, SystemCall)
SYSCALL_ENTRY 0xc00

EXCEPTION(0xd00, SingleStep, single_step_exception, EXC_XFER_STD)
@@ -391,12 +379,10 @@ SystemCall:
* non-altivec kernel running on a machine with altivec just
* by executing an altivec instruction.
*/
- . = 0xf00
- DO_KVM 0xf00
+ START_EXCEPTION(0xf00, PerformanceMonitorTrap)
b PerformanceMonitor

- . = 0xf20
- DO_KVM 0xf20
+ START_EXCEPTION(0xf20, AltiVecUnavailableTrap)
b AltiVecUnavailable

/*
--
2.25.0

2021-03-09 12:12:05

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 21/43] powerpc/32: Provide a name to exception prolog continuation in virtual mode

Now that the prolog continuation has been moved into .text, give it a
name and mark it _ASM_NOKPROBE_SYMBOL.
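
A sketch of the resulting shape of EXCEPTION_PROLOG_2 (elided parts are
unchanged from the previous patch):

.macro EXCEPTION_PROLOG_2 name handle_dar_dsisr=0
	/* ... real-mode part, ending with rfi ... */

	.text
\name\()_virt:
1:
	/* ... virtual-mode continuation ... */
_ASM_NOKPROBE_SYMBOL(\name\()_virt)
.endm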

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 12 +++++++-----
arch/powerpc/kernel/head_40x.S | 22 ++++++++++++----------
arch/powerpc/kernel/head_8xx.S | 10 +++++-----
arch/powerpc/kernel/head_book3s_32.S | 14 +++++++-------
4 files changed, 31 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 3c0aa4538514..160ebd573c37 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -10,10 +10,10 @@
* We assume sprg3 has the physical address of the current
* task's thread_struct.
*/
-.macro EXCEPTION_PROLOG handle_dar_dsisr=0
+.macro EXCEPTION_PROLOG name handle_dar_dsisr=0
EXCEPTION_PROLOG_0 handle_dar_dsisr=\handle_dar_dsisr
EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 handle_dar_dsisr=\handle_dar_dsisr
+ EXCEPTION_PROLOG_2 \name handle_dar_dsisr=\handle_dar_dsisr
.endm

.macro EXCEPTION_PROLOG_0 handle_dar_dsisr=0
@@ -56,7 +56,7 @@
#endif
.endm

-.macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
+.macro EXCEPTION_PROLOG_2 name handle_dar_dsisr=0
#ifdef CONFIG_PPC_8xx
.if \handle_dar_dsisr
li r11, RPN_PATTERN
@@ -72,6 +72,7 @@
rfi

.text
+\name\()_virt:
1:
stw r11,GPR1(r1)
stw r11,0(r1)
@@ -109,6 +110,7 @@
stw r10,8(r11)
SAVE_4GPRS(3, r11)
SAVE_2GPRS(7, r11)
+_ASM_NOKPROBE_SYMBOL(\name\()_virt)
.endm

.macro SYSCALL_ENTRY trapno
@@ -180,7 +182,7 @@

#define EXCEPTION(n, label, hdlr, xfer) \
START_EXCEPTION(n, label) \
- EXCEPTION_PROLOG; \
+ EXCEPTION_PROLOG label; \
addi r3,r1,STACK_FRAME_OVERHEAD; \
xfer(n, hdlr)

@@ -212,7 +214,7 @@
#endif
lwz r1, emergency_ctx@l(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
- EXCEPTION_PROLOG_2
+ EXCEPTION_PROLOG_2 vmap_stack_overflow
SAVE_NVGPRS(r11)
addi r3, r1, STACK_FRAME_OVERHEAD
EXC_XFER_STD(0, stack_overflow_exception)
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index e7d8856714d3..86883ccb3dc5 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -104,7 +104,7 @@ _ENTRY(crit_esr)
* Instead we use a couple of words of memory at low physical addresses.
* This is OK since we don't support SMP on these processors.
*/
-.macro CRITICAL_EXCEPTION_PROLOG
+.macro CRITICAL_EXCEPTION_PROLOG name
stw r10,crit_r10@l(0) /* save two registers to work with */
stw r11,crit_r11@l(0)
mfspr r10,SPRN_SRR0
@@ -135,6 +135,7 @@ _ENTRY(crit_esr)

.text
1:
+\name\()_virt:
lwz r11,crit_r1@l(0)
stw r11,GPR1(r1)
stw r11,0(r1)
@@ -162,6 +163,7 @@ _ENTRY(crit_esr)
stw r10, 8(r11)
SAVE_4GPRS(3, r11)
SAVE_2GPRS(7, r11)
+_ASM_NOKPROBE_SYMBOL(\name\()_virt)
.endm

/*
@@ -182,7 +184,7 @@ _ENTRY(crit_esr)
*/
#define CRITICAL_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(n, label); \
- CRITICAL_EXCEPTION_PROLOG; \
+ CRITICAL_EXCEPTION_PROLOG label; \
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
crit_transfer_to_handler, ret_from_crit_exc)
@@ -205,7 +207,7 @@ _ENTRY(crit_esr)
* if they can't resolve the lightweight TLB fault.
*/
START_EXCEPTION(0x0300, DataStorage)
- EXCEPTION_PROLOG handle_dar_dsisr=1
+ EXCEPTION_PROLOG DataStorage handle_dar_dsisr=1
EXC_XFER_LITE(0x300, handle_page_fault)

/*
@@ -213,7 +215,7 @@ _ENTRY(crit_esr)
* This is caused by a fetch from non-execute or guarded pages.
*/
START_EXCEPTION(0x0400, InstructionAccess)
- EXCEPTION_PROLOG
+ EXCEPTION_PROLOG InstructionAccess
li r5,0
stw r5, _ESR(r11) /* Zero ESR */
stw r12, _DEAR(r11) /* SRR0 as DEAR */
@@ -224,13 +226,13 @@ _ENTRY(crit_esr)

/* 0x0600 - Alignment Exception */
START_EXCEPTION(0x0600, Alignment)
- EXCEPTION_PROLOG handle_dar_dsisr=1
+ EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

/* 0x0700 - Program Exception */
START_EXCEPTION(0x0700, ProgramCheck)
- EXCEPTION_PROLOG handle_dar_dsisr=1
+ EXCEPTION_PROLOG ProgramCheck handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x700, program_check_exception)

@@ -450,7 +452,7 @@ _ENTRY(crit_esr)
*/
/* 0x2000 - Debug Exception */
START_EXCEPTION(0x2000, DebugTrap)
- CRITICAL_EXCEPTION_PROLOG
+ CRITICAL_EXCEPTION_PROLOG DebugTrap

/*
* If this is a single step or branch-taken exception in an
@@ -500,7 +502,7 @@ _ENTRY(crit_esr)
/* Programmable Interval Timer (PIT) Exception. (from 0x1000) */
__HEAD
Decrementer:
- EXCEPTION_PROLOG
+ EXCEPTION_PROLOG Decrementer
lis r0,TSR_PIS@h
mtspr SPRN_TSR,r0 /* Clear the PIT exception */
addi r3,r1,STACK_FRAME_OVERHEAD
@@ -509,14 +511,14 @@ Decrementer:
/* Fixed Interval Timer (FIT) Exception. (from 0x1010) */
__HEAD
FITException:
- EXCEPTION_PROLOG
+ EXCEPTION_PROLOG FITException
addi r3,r1,STACK_FRAME_OVERHEAD;
EXC_XFER_STD(0x1010, unknown_exception)

/* Watchdog Timer (WDT) Exception. (from 0x1020) */
__HEAD
WDTException:
- CRITICAL_EXCEPTION_PROLOG;
+ CRITICAL_EXCEPTION_PROLOG WDTException
addi r3,r1,STACK_FRAME_OVERHEAD;
EXC_XFER_TEMPLATE(WatchdogException, 0x1020+2,
(MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)),
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index d16d0ec71bb2..932702a38234 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -122,7 +122,7 @@ instruction_counter:

/* Machine check */
START_EXCEPTION(0x200, MachineCheck)
- EXCEPTION_PROLOG handle_dar_dsisr=1
+ EXCEPTION_PROLOG MachineCheck handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x200, machine_check_exception)

@@ -131,7 +131,7 @@ instruction_counter:

/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
- EXCEPTION_PROLOG handle_dar_dsisr=1
+ EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

@@ -294,7 +294,7 @@ instruction_counter:
* addresses. There is nothing to do but handle a big time error fault.
*/
START_EXCEPTION(0x1300, InstructionTLBError)
- EXCEPTION_PROLOG
+ EXCEPTION_PROLOG InstructionTLBError
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
andis. r10,r9,SRR1_ISI_NOPT@h
beq+ .Litlbie
@@ -316,7 +316,7 @@ instruction_counter:
beq- cr1, FixupDAR /* must be a buggy dcbX, icbi insn. */
DARFixed:/* Return from dcbx instruction bug workaround */
EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 handle_dar_dsisr=1
+ EXCEPTION_PROLOG_2 DataTLBError handle_dar_dsisr=1
lwz r4, _DAR(r11)
lwz r5, _DSISR(r11)
andis. r10,r5,DSISR_NOHPTE@h
@@ -347,7 +347,7 @@ DARFixed:/* Return from dcbx instruction bug workaround */
rfi

1: EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 handle_dar_dsisr=1
+ EXCEPTION_PROLOG_2 DataBreakpoint handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
mfspr r4,SPRN_BAR
stw r4,_DAR(r11)
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 56d7bc212866..24cb245e1c5b 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -266,7 +266,7 @@ __secondary_hold_acknowledge:
mfspr r1, SPRN_SPRG_SCRATCH2
#endif /* CONFIG_PPC_CHRP */
EXCEPTION_PROLOG_1
-7: EXCEPTION_PROLOG_2
+7: EXCEPTION_PROLOG_2 MachineCheck
addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_CHRP
beq cr1, 1f
@@ -296,7 +296,7 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
#endif
1: EXCEPTION_PROLOG_0 handle_dar_dsisr=1
EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 handle_dar_dsisr=1
+ EXCEPTION_PROLOG_2 DataAccess handle_dar_dsisr=1
lwz r5, _DSISR(r11)
andis. r0, r5, DSISR_DABRMATCH@h
bne- 1f
@@ -325,7 +325,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
andi. r11, r11, MSR_PR

EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2
+ EXCEPTION_PROLOG_2 InstructionAccess
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
stw r5, _DSISR(r11)
stw r12, _DAR(r11)
@@ -336,7 +336,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)

/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
- EXCEPTION_PROLOG handle_dar_dsisr=1
+ EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

@@ -353,7 +353,7 @@ BEGIN_FTR_SECTION
*/
b ProgramCheck
END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
- EXCEPTION_PROLOG
+ EXCEPTION_PROLOG FPUnavailable
beq 1f
bl load_up_fpu /* if from user, just load it up */
b fast_exception_return
@@ -713,7 +713,7 @@ fast_hash_page_return:

__HEAD
AltiVecUnavailable:
- EXCEPTION_PROLOG
+ EXCEPTION_PROLOG AltiVecUnavailable
#ifdef CONFIG_ALTIVEC
beq 1f
bl load_up_altivec /* if from user, just load it up */
@@ -724,7 +724,7 @@ AltiVecUnavailable:

__HEAD
PerformanceMonitor:
- EXCEPTION_PROLOG
+ EXCEPTION_PROLOG PerformanceMonitor
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0xf00, performance_monitor_exception)

--
2.25.0

2021-03-09 12:12:05

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 20/43] powerpc/32: Move exception prolog code into .text once MMU is back on

Space in the head section is rather constrained because exception
vectors are laid out every 0x100 bytes, and we sometimes need "out of
line" code because a handler doesn't fit.

Now that we enable the MMU early in the prolog, take the opportunity
to jump somewhere else in the .text section, where there is no such
space constraint.
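
The trick is that the rfi at the end of the real-mode part of the
prolog can target any address once translation is on, so the
continuation simply lives in .text (sketch from the head_32.h hunk
below):

	mtspr	SPRN_SRR0, r11
	mfspr	r11, SPRN_SPRG_SCRATCH2
	rfi			/* leave the cramped vector slot... */

	.text			/* ...and continue in .text, unconstrained */
1:
	stw	r11,GPR1(r1)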

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 5 ++++
arch/powerpc/kernel/head_40x.S | 6 +++++
arch/powerpc/kernel/head_8xx.S | 25 ++++++++------------
arch/powerpc/kernel/head_book3s_32.S | 34 ++++++++++++----------------
4 files changed, 36 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index d97ec94b34da..3c0aa4538514 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -70,6 +70,8 @@
mtspr SPRN_SRR0, r11
mfspr r11, SPRN_SPRG_SCRATCH2
rfi
+
+ .text
1:
stw r11,GPR1(r1)
stw r11,0(r1)
@@ -163,12 +165,14 @@
*/
#ifdef CONFIG_PPC_BOOK3S
#define START_EXCEPTION(n, label) \
+ __HEAD; \
. = n; \
DO_KVM n; \
label:

#else
#define START_EXCEPTION(n, label) \
+ __HEAD; \
. = n; \
label:

@@ -196,6 +200,7 @@
ret_from_except)

.macro vmap_stack_overflow_exception
+ __HEAD
vmap_stack_overflow:
#ifdef CONFIG_SMP
mfspr r1, SPRN_SPRG_THREAD
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index c14a71e0d6d3..e7d8856714d3 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -132,6 +132,8 @@ _ENTRY(crit_esr)
ori r11, r11, 1f@l
mtspr SPRN_SRR0, r11
rfi
+
+ .text
1:
lwz r11,crit_r1@l(0)
stw r11,GPR1(r1)
@@ -496,6 +498,7 @@ _ENTRY(crit_esr)
crit_transfer_to_handler, ret_from_crit_exc)

/* Programmable Interval Timer (PIT) Exception. (from 0x1000) */
+ __HEAD
Decrementer:
EXCEPTION_PROLOG
lis r0,TSR_PIS@h
@@ -504,12 +507,14 @@ Decrementer:
EXC_XFER_LITE(0x1000, timer_interrupt)

/* Fixed Interval Timer (FIT) Exception. (from 0x1010) */
+ __HEAD
FITException:
EXCEPTION_PROLOG
addi r3,r1,STACK_FRAME_OVERHEAD;
EXC_XFER_STD(0x1010, unknown_exception)

/* Watchdog Timer (WDT) Exception. (from 0x1020) */
+ __HEAD
WDTException:
CRITICAL_EXCEPTION_PROLOG;
addi r3,r1,STACK_FRAME_OVERHEAD;
@@ -523,6 +528,7 @@ WDTException:
* reserved.
*/

+ __HEAD
/* Damn, I came up one instruction too many to fit into the
* exception space :-). Both the instruction and data TLB
* miss get to this point to load the TLB.
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 11789a077d76..d16d0ec71bb2 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -133,7 +133,7 @@ instruction_counter:
START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
- b .Lalignment_exception_ool
+ EXC_XFER_STD(0x600, alignment_exception)

/* Program check exception */
EXCEPTION(0x700, ProgramCheck, program_check_exception, EXC_XFER_STD)
@@ -141,11 +141,6 @@ instruction_counter:
/* Decrementer */
EXCEPTION(0x900, Decrementer, timer_interrupt, EXC_XFER_LITE)

- /* With VMAP_STACK there's not enough room for this at 0x600 */
- . = 0xa00
-.Lalignment_exception_ool:
- EXC_XFER_STD(0x600, alignment_exception)
-
/* System call */
START_EXCEPTION(0xc00, SystemCall)
SYSCALL_ENTRY 0xc00
@@ -339,26 +334,25 @@ DARFixed:/* Return from dcbx instruction bug workaround */
* support of breakpoints and such. Someday I will get around to
* using them.
*/
-do_databreakpoint:
- EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 handle_dar_dsisr=1
- addi r3,r1,STACK_FRAME_OVERHEAD
- mfspr r4,SPRN_BAR
- stw r4,_DAR(r11)
- EXC_XFER_STD(0x1c00, do_break)
-
START_EXCEPTION(0x1c00, DataBreakpoint)
EXCEPTION_PROLOG_0 handle_dar_dsisr=1
mfspr r11, SPRN_SRR0
cmplwi cr1, r11, (.Ldtlbie - PAGE_OFFSET)@l
cmplwi cr7, r11, (.Litlbie - PAGE_OFFSET)@l
cror 4*cr1+eq, 4*cr1+eq, 4*cr7+eq
- bne cr1, do_databreakpoint
+ bne cr1, 1f
mtcr r10
mfspr r10, SPRN_SPRG_SCRATCH0
mfspr r11, SPRN_SPRG_SCRATCH1
rfi

+1: EXCEPTION_PROLOG_1
+ EXCEPTION_PROLOG_2 handle_dar_dsisr=1
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ mfspr r4,SPRN_BAR
+ stw r4,_DAR(r11)
+ EXC_XFER_STD(0x1c00, do_break)
+
#ifdef CONFIG_PERF_EVENTS
START_EXCEPTION(0x1d00, InstructionBreakpoint)
mtspr SPRN_SPRG_SCRATCH0, r10
@@ -376,6 +370,7 @@ do_databreakpoint:
EXCEPTION(0x1e00, Trap_1e, unknown_exception, EXC_XFER_STD)
EXCEPTION(0x1f00, Trap_1f, unknown_exception, EXC_XFER_STD)

+ __HEAD
. = 0x2000

/* This is the procedure to calculate the data EA for buggy dcbx,dcbi instructions
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 8f5c8c8da63d..56d7bc212866 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -269,11 +269,10 @@ __secondary_hold_acknowledge:
7: EXCEPTION_PROLOG_2
addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_CHRP
- beq cr1, machine_check_tramp
+ beq cr1, 1f
twi 31, 0, 0
-#else
- b machine_check_tramp
#endif
+1: EXC_XFER_STD(0x200, machine_check_exception)

/* Data access exception. */
START_EXCEPTION(0x300, DataAccess)
@@ -297,7 +296,13 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
#endif
1: EXCEPTION_PROLOG_0 handle_dar_dsisr=1
EXCEPTION_PROLOG_1
- b handle_page_fault_tramp_1
+ EXCEPTION_PROLOG_2 handle_dar_dsisr=1
+ lwz r5, _DSISR(r11)
+ andis. r0, r5, DSISR_DABRMATCH@h
+ bne- 1f
+ EXC_XFER_LITE(0x300, handle_page_fault)
+1: EXC_XFER_STD(0x300, do_break)
+

/* Instruction access exception. */
START_EXCEPTION(0x400, InstructionAccess)
@@ -333,7 +338,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG handle_dar_dsisr=1
addi r3,r1,STACK_FRAME_OVERHEAD
- b alignment_exception_tramp
+ EXC_XFER_STD(0x600, alignment_exception)

/* Program check exception */
EXCEPTION(0x700, ProgramCheck, program_check_exception, EXC_XFER_STD)
@@ -385,6 +390,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
START_EXCEPTION(0xf20, AltiVecUnavailableTrap)
b AltiVecUnavailable

+ __HEAD
/*
* Handle TLB miss for instruction on 603/603e.
* Note: we get an alternate set of r0 - r3 to use automatically.
@@ -624,22 +630,9 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_NEED_DTLB_SW_LRU)
EXCEPTION(0x2e00, Trap_2e, unknown_exception, EXC_XFER_STD)
EXCEPTION(0x2f00, Trap_2f, unknown_exception, EXC_XFER_STD)

+ __HEAD
. = 0x3000

-machine_check_tramp:
- EXC_XFER_STD(0x200, machine_check_exception)
-
-alignment_exception_tramp:
- EXC_XFER_STD(0x600, alignment_exception)
-
-handle_page_fault_tramp_1:
- EXCEPTION_PROLOG_2 handle_dar_dsisr=1
- lwz r5, _DSISR(r11)
- andis. r0, r5, DSISR_DABRMATCH@h
- bne- 1f
- EXC_XFER_LITE(0x300, handle_page_fault)
-1: EXC_XFER_STD(0x300, do_break)
-
#ifdef CONFIG_PPC_BOOK3S_604
.macro save_regs_thread thread
stw r0, THR0(\thread)
@@ -718,6 +711,7 @@ fast_hash_page_return:
vmap_stack_overflow_exception
#endif

+ __HEAD
AltiVecUnavailable:
EXCEPTION_PROLOG
#ifdef CONFIG_ALTIVEC
@@ -728,12 +722,14 @@ AltiVecUnavailable:
1: addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_LITE(0xf20, altivec_unavailable_exception)

+ __HEAD
PerformanceMonitor:
EXCEPTION_PROLOG
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0xf00, performance_monitor_exception)


+ __HEAD
/*
* This code is jumped to from the startup code to copy
* the kernel image to physical address PHYSICAL_START.
--
2.25.0

2021-03-09 12:12:07

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 26/43] powerpc/32: Set regs parameter in r3 in transfer_to_handler

All exception handlers take regs as their first parameter.

Instead of setting r3 just before each call to a handler, set
it in transfer_to_handler.
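
In other words, r3 is now loaded once, right next to the other saves in
transfer_to_handler (sketch taken from the hunk below):

transfer_to_handler:
	SAVE_NVGPRS(r11)
	addi	r3,r1,STACK_FRAME_OVERHEAD	/* regs argument for all handlers */
	stw	r2,GPR2(r11)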

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 5 ++---
arch/powerpc/kernel/head_32.h | 2 --
arch/powerpc/kernel/head_40x.S | 7 -------
arch/powerpc/kernel/head_8xx.S | 3 ---
arch/powerpc/kernel/head_book3s_32.S | 9 ++-------
arch/powerpc/kernel/head_booke.h | 11 +----------
arch/powerpc/kernel/head_fsl_booke.S | 4 +---
7 files changed, 6 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index d8fd2fd2c777..4698fd1bd8c8 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -87,6 +87,7 @@ _ASM_NOKPROBE_SYMBOL(transfer_to_handler_full)
.globl transfer_to_handler
transfer_to_handler:
SAVE_NVGPRS(r11)
+ addi r3,r1,STACK_FRAME_OVERHEAD
stw r2,GPR2(r11)
stw r12,_NIP(r11)
stw r9,_MSR(r11)
@@ -99,8 +100,7 @@ transfer_to_handler:
tovirt(r12, r12)
beq 2f /* if from user, fix up THREAD.regs */
addi r2, r12, -THREAD
- addi r11,r1,STACK_FRAME_OVERHEAD
- stw r11,PT_REGS(r12)
+ stw r3,PT_REGS(r12)
#ifdef CONFIG_PPC_BOOK3S_32
kuep_lock r11, r12
#endif
@@ -228,7 +228,6 @@ ret_from_kernel_thread:
*/
.globl handle_page_fault
handle_page_fault:
- addi r3,r1,STACK_FRAME_OVERHEAD
bl do_page_fault
cmpwi r3,0
beq+ ret_from_except
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 087445e45489..4d638d760a96 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -183,7 +183,6 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
#define EXCEPTION(n, label, hdlr, xfer) \
START_EXCEPTION(n, label) \
EXCEPTION_PROLOG label; \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
xfer(n, hdlr)

#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
@@ -215,7 +214,6 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
lwz r1, emergency_ctx@l(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
EXCEPTION_PROLOG_2 vmap_stack_overflow
- addi r3, r1, STACK_FRAME_OVERHEAD
EXC_XFER_STD(0, stack_overflow_exception)
.endm

diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 86883ccb3dc5..08563d4170c6 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -185,7 +185,6 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
#define CRITICAL_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(n, label); \
CRITICAL_EXCEPTION_PROLOG label; \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
crit_transfer_to_handler, ret_from_crit_exc)

@@ -227,13 +226,11 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
/* 0x0600 - Alignment Exception */
START_EXCEPTION(0x0600, Alignment)
EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

/* 0x0700 - Program Exception */
START_EXCEPTION(0x0700, ProgramCheck)
EXCEPTION_PROLOG ProgramCheck handle_dar_dsisr=1
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x700, program_check_exception)

EXCEPTION(0x0800, Trap_08, unknown_exception, EXC_XFER_STD)
@@ -494,7 +491,6 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
/* continue normal handling for a critical exception... */
2: mfspr r4,SPRN_DBSR
stw r4,_ESR(r11) /* DebugException takes DBSR in _ESR */
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_TEMPLATE(DebugException, 0x2002, \
(MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
crit_transfer_to_handler, ret_from_crit_exc)
@@ -505,21 +501,18 @@ Decrementer:
EXCEPTION_PROLOG Decrementer
lis r0,TSR_PIS@h
mtspr SPRN_TSR,r0 /* Clear the PIT exception */
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_LITE(0x1000, timer_interrupt)

/* Fixed Interval Timer (FIT) Exception. (from 0x1010) */
__HEAD
FITException:
EXCEPTION_PROLOG FITException
- addi r3,r1,STACK_FRAME_OVERHEAD;
EXC_XFER_STD(0x1010, unknown_exception)

/* Watchdog Timer (WDT) Exception. (from 0x1020) */
__HEAD
WDTException:
CRITICAL_EXCEPTION_PROLOG WDTException
- addi r3,r1,STACK_FRAME_OVERHEAD;
EXC_XFER_TEMPLATE(WatchdogException, 0x1020+2,
(MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)),
crit_transfer_to_handler, ret_from_crit_exc)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 932702a38234..eb1d40a8f2c4 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -123,7 +123,6 @@ instruction_counter:
/* Machine check */
START_EXCEPTION(0x200, MachineCheck)
EXCEPTION_PROLOG MachineCheck handle_dar_dsisr=1
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x200, machine_check_exception)

/* External interrupt */
@@ -132,7 +131,6 @@ instruction_counter:
/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

/* Program check exception */
@@ -348,7 +346,6 @@ DARFixed:/* Return from dcbx instruction bug workaround */

1: EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2 DataBreakpoint handle_dar_dsisr=1
- addi r3,r1,STACK_FRAME_OVERHEAD
mfspr r4,SPRN_BAR
stw r4,_DAR(r11)
EXC_XFER_STD(0x1c00, do_break)
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 24cb245e1c5b..626e9fbac2cc 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -267,7 +267,6 @@ __secondary_hold_acknowledge:
#endif /* CONFIG_PPC_CHRP */
EXCEPTION_PROLOG_1
7: EXCEPTION_PROLOG_2 MachineCheck
- addi r3,r1,STACK_FRAME_OVERHEAD
#ifdef CONFIG_PPC_CHRP
beq cr1, 1f
twi 31, 0, 0
@@ -337,7 +336,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

/* Program check exception */
@@ -357,8 +355,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
beq 1f
bl load_up_fpu /* if from user, just load it up */
b fast_exception_return
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- EXC_XFER_LITE(0x800, kernel_fp_unavailable_exception)
+1: EXC_XFER_LITE(0x800, kernel_fp_unavailable_exception)
#else
b ProgramCheck
#endif
@@ -719,13 +716,11 @@ AltiVecUnavailable:
bl load_up_altivec /* if from user, just load it up */
b fast_exception_return
#endif /* CONFIG_ALTIVEC */
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- EXC_XFER_LITE(0xf20, altivec_unavailable_exception)
+1: EXC_XFER_LITE(0xf20, altivec_unavailable_exception)

__HEAD
PerformanceMonitor:
EXCEPTION_PROLOG PerformanceMonitor
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0xf00, performance_monitor_exception)


diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index b31bf9e833c0..009a56d70d76 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -294,13 +294,11 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
#define EXCEPTION(n, intno, label, hdlr, xfer) \
START_EXCEPTION(label); \
NORMAL_EXCEPTION_PROLOG(intno); \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
xfer(n, hdlr)

#define CRITICAL_EXCEPTION(n, intno, label, hdlr) \
START_EXCEPTION(label); \
CRITICAL_EXCEPTION_PROLOG(intno); \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
@@ -311,7 +309,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
MCHECK_EXCEPTION_PROLOG; \
mfspr r5,SPRN_ESR; \
stw r5,_ESR(r11); \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
SAVE_xSRR(DSRR); \
SAVE_xSRR(CSRR); \
SAVE_MMU_REGS; \
@@ -398,7 +395,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
/* continue normal handling for a debug exception... */ \
2: mfspr r4,SPRN_DBSR; \
stw r4,_ESR(r11); /* DebugException takes DBSR in _ESR */\
- addi r3,r1,STACK_FRAME_OVERHEAD; \
SAVE_xSRR(CSRR); \
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
@@ -455,7 +451,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
/* continue normal handling for a critical exception... */ \
2: mfspr r4,SPRN_DBSR; \
stw r4,_ESR(r11); /* DebugException takes DBSR in _ESR */\
- addi r3,r1,STACK_FRAME_OVERHEAD; \
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
EXC_XFER_TEMPLATE(DebugException, 0x2002, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), crit_transfer_to_handler, ret_from_crit_exc)
@@ -482,7 +477,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
NORMAL_EXCEPTION_PROLOG(ALIGNMENT); \
mfspr r4,SPRN_DEAR; /* Grab the DEAR and save it */ \
stw r4,_DEAR(r11); \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_STD(0x0600, alignment_exception)

#define PROGRAM_EXCEPTION \
@@ -490,7 +484,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
NORMAL_EXCEPTION_PROLOG(PROGRAM); \
mfspr r4,SPRN_ESR; /* Grab the ESR and save it */ \
stw r4,_ESR(r11); \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_STD(0x0700, program_check_exception)

#define DECREMENTER_EXCEPTION \
@@ -498,7 +491,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
NORMAL_EXCEPTION_PROLOG(DECREMENTER); \
lis r0,TSR_DIS@h; /* Setup the DEC interrupt mask */ \
mtspr SPRN_TSR,r0; /* Clear the DEC interrupt */ \
- addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_LITE(0x0900, timer_interrupt)

#define FP_UNAVAILABLE_EXCEPTION \
@@ -507,8 +499,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
beq 1f; \
bl load_up_fpu; /* if from user, just load it up */ \
b fast_exception_return; \
-1: addi r3,r1,STACK_FRAME_OVERHEAD; \
- EXC_XFER_STD(0x800, kernel_fp_unavailable_exception)
+1: EXC_XFER_STD(0x800, kernel_fp_unavailable_exception)

#else /* __ASSEMBLY__ */
struct exception_regs {
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 3f4a40cccef5..f51c66f747ad 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -372,7 +372,6 @@ interrupt_base:
bne 1f
EXC_XFER_LITE(0x0300, handle_page_fault)
1:
- addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_LITE(0x0300, CacheLockingException)

/* Instruction Storage Interrupt */
@@ -618,8 +617,7 @@ END_BTB_FLUSH_SECTION
beq 1f
bl load_up_spe
b fast_exception_return
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- EXC_XFER_LITE(0x2010, KernelSPE)
+1: EXC_XFER_LITE(0x2010, KernelSPE)
#elif defined(CONFIG_SPE_POSSIBLE)
EXCEPTION(0x2020, SPE_UNAVAIL, SPEUnavailable, \
unknown_exception, EXC_XFER_STD)
--
2.25.0

2021-03-09 12:12:07

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 25/43] powerpc/32: Replace ASM exception exit by C exception exit from ppc64

This patch replaces the PPC32 ASM exception exit with the C exception
exit from ppc64.
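
The ASM side shrinks to a thin interrupt_return stub that calls the C
helpers shared with ppc64 (sketch of the user-return path taken from
the hunk below):

	.globl interrupt_return
interrupt_return:
	lwz	r4,_MSR(r1)
	andi.	r0,r4,MSR_PR
	beq	.Lkernel_interrupt_return
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	interrupt_exit_user_prepare
	cmpwi	r3,0
	bne-	.Lrestore_nvgprs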

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 481 +++++++++-----------------------
arch/powerpc/kernel/interrupt.c | 4 +
2 files changed, 132 insertions(+), 353 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 7084289994b3..d8fd2fd2c777 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -129,9 +129,7 @@ transfer_to_handler_cont:
stw r12,TI_LOCAL_FLAGS(r2)
lwz r9,_MSR(r11) /* if sleeping, clear MSR.EE */
rlwinm r9,r9,0,~MSR_EE
- lwz r12,_LINK(r11) /* and return to address in LR */
- kuap_restore r11, r2, r3, r4, r5
- lwz r2, GPR2(r11)
+ stw r9,_MSR(r11)
b fast_exception_return
#endif
_ASM_NOKPROBE_SYMBOL(transfer_to_handler)
@@ -334,69 +332,20 @@ END_FTR_SECTION_IFSET(CPU_FTR_SPE)

.globl fast_exception_return
fast_exception_return:
+ lwz r6,_MSR(r1)
+ andi. r0,r6,MSR_PR
+ bne .Lfast_user_interrupt_return
+ li r3,0 /* 0 return value, no EMULATE_STACK_STORE */
#if !(defined(CONFIG_4xx) || defined(CONFIG_BOOKE))
- andi. r10,r9,MSR_RI /* check for recoverable interrupt */
- beq 1f /* if not, we've got problems */
-#endif
-
-2: REST_4GPRS(3, r11)
- lwz r10,_CCR(r11)
- REST_GPR(1, r11)
- mtcr r10
- lwz r10,_LINK(r11)
- mtlr r10
- /* Clear the exception_marker on the stack to avoid confusing stacktrace */
- li r10, 0
- stw r10, 8(r11)
- REST_GPR(10, r11)
-#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
- mtspr SPRN_NRI, r0
-#endif
- mtspr SPRN_SRR1,r9
- mtspr SPRN_SRR0,r12
- REST_GPR(9, r11)
- REST_GPR(12, r11)
- lwz r11,GPR11(r11)
- rfi
-#ifdef CONFIG_40x
- b . /* Prevent prefetch past rfi */
-#endif
-_ASM_NOKPROBE_SYMBOL(fast_exception_return)
-
-#if !(defined(CONFIG_4xx) || defined(CONFIG_BOOKE))
-/* check if the exception happened in a restartable section */
-1: lis r3,exc_exit_restart_end@ha
- addi r3,r3,exc_exit_restart_end@l
- cmplw r12,r3
- bge 3f
- lis r4,exc_exit_restart@ha
- addi r4,r4,exc_exit_restart@l
- cmplw r12,r4
- blt 3f
- lis r3,fee_restarts@ha
- tophys(r3,r3)
- lwz r5,fee_restarts@l(r3)
- addi r5,r5,1
- stw r5,fee_restarts@l(r3)
- mr r12,r4 /* restart at exc_exit_restart */
- b 2b
-
- .section .bss
- .align 2
-fee_restarts:
- .space 4
- .previous
-
-/* aargh, a nonrecoverable interrupt, panic */
-/* aargh, we don't know which trap this is */
-3:
- li r10,-1
- stw r10,_TRAP(r11)
+ andi. r0,r6,MSR_RI
+ bne+ .Lfast_kernel_interrupt_return
addi r3,r1,STACK_FRAME_OVERHEAD
- bl transfer_to_handler_full
bl unrecoverable_exception
- b ret_from_except
+ trap /* should not get here */
+#else
+ b .Lfast_kernel_interrupt_return
#endif
+_ASM_NOKPROBE_SYMBOL(fast_exception_return)

.globl ret_from_except_full
ret_from_except_full:
@@ -405,213 +354,146 @@ ret_from_except_full:

.globl ret_from_except
ret_from_except:
- /* Hard-disable interrupts so that current_thread_info()->flags
- * can't change between when we test it and when we return
- * from the interrupt. */
- /* Note: We don't bother telling lockdep about it */
- LOAD_REG_IMMEDIATE(r10,MSR_KERNEL)
- mtmsr r10 /* disable interrupts */
-
- lwz r3,_MSR(r1) /* Returning to user mode? */
- andi. r0,r3,MSR_PR
- beq resume_kernel
-
-user_exc_return: /* r10 contains MSR_KERNEL here */
- /* Check current_thread_info()->flags */
- lwz r9,TI_FLAGS(r2)
- andi. r0,r9,_TIF_USER_WORK_MASK
- bne do_work
-
-restore_user:
-#if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
- /* Check whether this process has its own DBCR0 value. The internal
- debug mode bit tells us that dbcr0 should be loaded. */
- lwz r0,THREAD+THREAD_DBCR0(r2)
- andis. r10,r0,DBCR0_IDM@h
- bnel- load_dbcr0
-#endif
- ACCOUNT_CPU_USER_EXIT(r2, r10, r11)
+_ASM_NOKPROBE_SYMBOL(ret_from_except)
+
+ .globl interrupt_return
+interrupt_return:
+ lwz r4,_MSR(r1)
+ andi. r0,r4,MSR_PR
+ beq .Lkernel_interrupt_return
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl interrupt_exit_user_prepare
+ cmpwi r3,0
+ bne- .Lrestore_nvgprs
+
+.Lfast_user_interrupt_return:
#ifdef CONFIG_PPC_BOOK3S_32
kuep_unlock r10, r11
#endif
+ kuap_check r2, r4
+ lwz r11,_NIP(r1)
+ lwz r12,_MSR(r1)
+ mtspr SPRN_SRR0,r11
+ mtspr SPRN_SRR1,r12

- b restore
-
-/* N.B. the only way to get here is from the beq following ret_from_except. */
-resume_kernel:
- /* check current_thread_info, _TIF_EMULATE_STACK_STORE */
- lwz r8,TI_FLAGS(r2)
- andis. r0,r8,_TIF_EMULATE_STACK_STORE@h
- beq+ 1f
+BEGIN_FTR_SECTION
+ stwcx. r0,0,r1 /* to clear the reservation */
+FTR_SECTION_ELSE
+ lwarx r0,0,r1
+ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)

- addi r8,r1,INT_FRAME_SIZE /* Get the kprobed function entry */
+ lwz r3,_CCR(r1)
+ lwz r4,_LINK(r1)
+ lwz r5,_CTR(r1)
+ lwz r6,_XER(r1)
+ li r0,0

- lwz r3,GPR1(r1)
- subi r3,r3,INT_FRAME_SIZE /* dst: Allocate a trampoline exception frame */
- mr r4,r1 /* src: current exception frame */
- mr r1,r3 /* Reroute the trampoline frame to r1 */
+ /*
+ * Leaving a stale exception_marker on the stack can confuse
+ * the reliable stack unwinder later on. Clear it.
+ */
+ stw r0,8(r1)
+ REST_4GPRS(7, r1)
+ REST_2GPRS(11, r1)

- /* Copy from the original to the trampoline. */
- li r5,INT_FRAME_SIZE/4 /* size: INT_FRAME_SIZE */
- li r6,0 /* start offset: 0 */
+ mtcr r3
+ mtlr r4
mtctr r5
-2: lwzx r0,r6,r4
- stwx r0,r6,r3
- addi r6,r6,4
- bdnz 2b
-
- /* Do real store operation to complete stwu */
- lwz r5,GPR1(r1)
- stw r8,0(r5)
-
- /* Clear _TIF_EMULATE_STACK_STORE flag */
- lis r11,_TIF_EMULATE_STACK_STORE@h
- addi r5,r2,TI_FLAGS
-0: lwarx r8,0,r5
- andc r8,r8,r11
- stwcx. r8,0,r5
- bne- 0b
-1:
+ mtspr SPRN_XER,r6

-#ifdef CONFIG_PREEMPTION
- /* check current_thread_info->preempt_count */
- lwz r0,TI_PREEMPT(r2)
- cmpwi 0,r0,0 /* if non-zero, just restore regs and return */
- bne restore_kuap
- andi. r8,r8,_TIF_NEED_RESCHED
- beq+ restore_kuap
- lwz r3,_MSR(r1)
- andi. r0,r3,MSR_EE /* interrupts off? */
- beq restore_kuap /* don't schedule if so */
-#ifdef CONFIG_TRACE_IRQFLAGS
- /* Lockdep thinks irqs are enabled, we need to call
- * preempt_schedule_irq with IRQs off, so we inform lockdep
- * now that we -did- turn them off already
- */
- bl trace_hardirqs_off
-#endif
- bl preempt_schedule_irq
-#ifdef CONFIG_TRACE_IRQFLAGS
- /* And now, to properly rebalance the above, we tell lockdep they
- * are being turned back on, which will happen when we return
- */
- bl trace_hardirqs_on
+ REST_4GPRS(2, r1)
+ REST_GPR(6, r1)
+ REST_GPR(0, r1)
+ REST_GPR(1, r1)
+ rfi
+#ifdef CONFIG_40x
+ b . /* Prevent prefetch past rfi */
#endif
-#endif /* CONFIG_PREEMPTION */
-restore_kuap:
- kuap_restore r1, r2, r9, r10, r0

- /* interrupts are hard-disabled at this point */
-restore:
-#if defined(CONFIG_44x) && !defined(CONFIG_PPC_47x)
- lis r4,icache_44x_need_flush@ha
- lwz r5,icache_44x_need_flush@l(r4)
- cmplwi cr0,r5,0
- beq+ 1f
- li r6,0
- iccci r0,r0
- stw r6,icache_44x_need_flush@l(r4)
-1:
-#endif /* CONFIG_44x */
-
- lwz r9,_MSR(r1)
-#ifdef CONFIG_TRACE_IRQFLAGS
- /* Lockdep doesn't know about the fact that IRQs are temporarily turned
- * off in this assembly code while peeking at TI_FLAGS() and such. However
- * we need to inform it if the exception turned interrupts off, and we
- * are about to trun them back on.
- */
- andi. r10,r9,MSR_EE
- beq 1f
- stwu r1,-32(r1)
- mflr r0
- stw r0,4(r1)
- bl trace_hardirqs_on
- addi r1, r1, 32
- lwz r9,_MSR(r1)
-1:
-#endif /* CONFIG_TRACE_IRQFLAGS */
+.Lrestore_nvgprs:
+ REST_NVGPRS(r1)
+ b .Lfast_user_interrupt_return

- lwz r0,GPR0(r1)
- lwz r2,GPR2(r1)
- REST_4GPRS(3, r1)
- REST_2GPRS(7, r1)
+.Lkernel_interrupt_return:
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl interrupt_exit_kernel_prepare

- lwz r10,_XER(r1)
- lwz r11,_CTR(r1)
- mtspr SPRN_XER,r10
- mtctr r11
+.Lfast_kernel_interrupt_return:
+ cmpwi cr1,r3,0
+ kuap_restore r1, r2, r3, r4, r5
+ lwz r11,_NIP(r1)
+ lwz r12,_MSR(r1)
+ mtspr SPRN_SRR0,r11
+ mtspr SPRN_SRR1,r12

BEGIN_FTR_SECTION
- lwarx r11,0,r1
-END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
- stwcx. r0,0,r1 /* to clear the reservation */
+ stwcx. r0,0,r1 /* to clear the reservation */
+FTR_SECTION_ELSE
+ lwarx r0,0,r1
+ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)

-#if !(defined(CONFIG_4xx) || defined(CONFIG_BOOKE))
- andi. r10,r9,MSR_RI /* check if this exception occurred */
- beql nonrecoverable /* at a bad place (MSR:RI = 0) */
+ lwz r3,_LINK(r1)
+ lwz r4,_CTR(r1)
+ lwz r5,_XER(r1)
+ lwz r6,_CCR(r1)
+ li r0,0
+
+ REST_4GPRS(7, r1)
+ REST_2GPRS(11, r1)

- lwz r10,_CCR(r1)
- lwz r11,_LINK(r1)
- mtcrf 0xFF,r10
- mtlr r11
+ mtlr r3
+ mtctr r4
+ mtspr SPRN_XER,r5

- /* Clear the exception_marker on the stack to avoid confusing stacktrace */
- li r10, 0
- stw r10, 8(r1)
/*
- * Once we put values in SRR0 and SRR1, we are in a state
- * where exceptions are not recoverable, since taking an
- * exception will trash SRR0 and SRR1. Therefore we clear the
- * MSR:RI bit to indicate this. If we do take an exception,
- * we can't return to the point of the exception but we
- * can restart the exception exit path at the label
- * exc_exit_restart below. -- paulus
+ * Leaving a stale exception_marker on the stack can confuse
+ * the reliable stack unwinder later on. Clear it.
*/
- LOAD_REG_IMMEDIATE(r10,MSR_KERNEL & ~MSR_RI)
- mtmsr r10 /* clear the RI bit */
- .globl exc_exit_restart
-exc_exit_restart:
- lwz r12,_NIP(r1)
- mtspr SPRN_SRR0,r12
- mtspr SPRN_SRR1,r9
- REST_4GPRS(9, r1)
- lwz r1,GPR1(r1)
- .globl exc_exit_restart_end
-exc_exit_restart_end:
+ stw r0,8(r1)
+
+ REST_4GPRS(2, r1)
+
+ bne- cr1,1f /* emulate stack store */
+ mtcr r6
+ REST_GPR(6, r1)
+ REST_GPR(0, r1)
+ REST_GPR(1, r1)
rfi
-_ASM_NOKPROBE_SYMBOL(exc_exit_restart)
-_ASM_NOKPROBE_SYMBOL(exc_exit_restart_end)
+#ifdef CONFIG_40x
+ b . /* Prevent prefetch past rfi */
+#endif

-#else /* !(CONFIG_4xx || CONFIG_BOOKE) */
- /*
- * This is a bit different on 4xx/Book-E because it doesn't have
- * the RI bit in the MSR.
- * The TLB miss handler checks if we have interrupted
- * the exception exit path and restarts it if so
- * (well maybe one day it will... :).
+1: /*
+ * Emulate stack store with update. New r1 value was already calculated
+ * and updated in our interrupt regs by emulate_loadstore, but we can't
+ * store the previous value of r1 to the stack before re-loading our
+ * registers from it, otherwise they could be clobbered. Use
+ * SPRG Scratch0 in thread struct as temporary storage to hold the store
+ * data, as interrupts are disabled here so it won't be clobbered.
*/
- lwz r11,_LINK(r1)
- mtlr r11
- lwz r10,_CCR(r1)
- mtcrf 0xff,r10
- /* Clear the exception_marker on the stack to avoid confusing stacktrace */
- li r10, 0
- stw r10, 8(r1)
- REST_2GPRS(9, r1)
- .globl exc_exit_restart
-exc_exit_restart:
- lwz r11,_NIP(r1)
- lwz r12,_MSR(r1)
- mtspr SPRN_SRR0,r11
- mtspr SPRN_SRR1,r12
- REST_2GPRS(11, r1)
- lwz r1,GPR1(r1)
- .globl exc_exit_restart_end
-exc_exit_restart_end:
+ mtcr r6
+#ifdef CONFIG_BOOKE
+ mtspr SPRN_SPRG_WSCRATCH0, r9
+#else
+ mtspr SPRN_SPRG_SCRATCH0, r9
+#endif
+ addi r9,r1,INT_FRAME_SIZE /* get original r1 */
+ REST_GPR(6, r1)
+ REST_GPR(0, r1)
+ REST_GPR(1, r1)
+ stw r9,0(r1) /* perform store component of stdu */
+#ifdef CONFIG_BOOKE
+ mfspr r9, SPRN_SPRG_RSCRATCH0
+#else
+ mfspr r9, SPRN_SPRG_SCRATCH0
+#endif
rfi
- b . /* prevent prefetch past rfi */
-_ASM_NOKPROBE_SYMBOL(exc_exit_restart)
+#ifdef CONFIG_40x
+ b . /* Prevent prefetch past rfi */
+#endif
+_ASM_NOKPROBE_SYMBOL(interrupt_return)
+
+#if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)

/*
* Returning from a critical interrupt in user mode doesn't need
@@ -642,8 +524,7 @@ _ASM_NOKPROBE_SYMBOL(exc_exit_restart)
REST_NVGPRS(r1); \
lwz r3,_MSR(r1); \
andi. r3,r3,MSR_PR; \
- LOAD_REG_IMMEDIATE(r10,MSR_KERNEL); \
- bne user_exc_return; \
+ bne interrupt_return; \
lwz r0,GPR0(r1); \
lwz r2,GPR2(r1); \
REST_4GPRS(3, r1); \
@@ -746,114 +627,8 @@ ret_from_mcheck_exc:
RET_FROM_EXC_LEVEL(SPRN_MCSRR0, SPRN_MCSRR1, PPC_RFMCI)
_ASM_NOKPROBE_SYMBOL(ret_from_mcheck_exc)
#endif /* CONFIG_BOOKE */
-
-/*
- * Load the DBCR0 value for a task that is being ptraced,
- * having first saved away the global DBCR0. Note that r0
- * has the dbcr0 value to set upon entry to this.
- */
-load_dbcr0:
- mfmsr r10 /* first disable debug exceptions */
- rlwinm r10,r10,0,~MSR_DE
- mtmsr r10
- isync
- mfspr r10,SPRN_DBCR0
- lis r11,global_dbcr0@ha
- addi r11,r11,global_dbcr0@l
-#ifdef CONFIG_SMP
- lwz r9,TASK_CPU(r2)
- slwi r9,r9,2
- add r11,r11,r9
-#endif
- stw r10,0(r11)
- mtspr SPRN_DBCR0,r0
- li r11,-1
- mtspr SPRN_DBSR,r11 /* clear all pending debug events */
- blr
-
- .section .bss
- .align 4
- .global global_dbcr0
-global_dbcr0:
- .space 4*NR_CPUS
- .previous
#endif /* !(CONFIG_4xx || CONFIG_BOOKE) */

-do_work: /* r10 contains MSR_KERNEL here */
- andi. r0,r9,_TIF_NEED_RESCHED
- beq do_user_signal
-
-do_resched: /* r10 contains MSR_KERNEL here */
-#ifdef CONFIG_TRACE_IRQFLAGS
- bl trace_hardirqs_on
- mfmsr r10
-#endif
- ori r10,r10,MSR_EE
- mtmsr r10 /* hard-enable interrupts */
- bl schedule
-recheck:
- /* Note: And we don't tell it we are disabling them again
- * neither. Those disable/enable cycles used to peek at
- * TI_FLAGS aren't advertised.
- */
- LOAD_REG_IMMEDIATE(r10,MSR_KERNEL)
- mtmsr r10 /* disable interrupts */
- lwz r9,TI_FLAGS(r2)
- andi. r0,r9,_TIF_NEED_RESCHED
- bne- do_resched
- andi. r0,r9,_TIF_USER_WORK_MASK
- beq restore_user
-do_user_signal: /* r10 contains MSR_KERNEL here */
- ori r10,r10,MSR_EE
- mtmsr r10 /* hard-enable interrupts */
-2: addi r3,r1,STACK_FRAME_OVERHEAD
- mr r4,r9
- bl do_notify_resume
- REST_NVGPRS(r1)
- b recheck
-
-/*
- * We come here when we are at the end of handling an exception
- * that occurred at a place where taking an exception will lose
- * state information, such as the contents of SRR0 and SRR1.
- */
-nonrecoverable:
- lis r10,exc_exit_restart_end@ha
- addi r10,r10,exc_exit_restart_end@l
- cmplw r12,r10
- bge 3f
- lis r11,exc_exit_restart@ha
- addi r11,r11,exc_exit_restart@l
- cmplw r12,r11
- blt 3f
- lis r10,ee_restarts@ha
- lwz r12,ee_restarts@l(r10)
- addi r12,r12,1
- stw r12,ee_restarts@l(r10)
- mr r12,r11 /* restart at exc_exit_restart */
- blr
-3: /* OK, we can't recover, kill this process */
- lwz r3,_TRAP(r1)
- andi. r0,r3,1
- beq 5f
- SAVE_NVGPRS(r1)
- rlwinm r3,r3,0,0,30
- stw r3,_TRAP(r1)
-5: mfspr r2,SPRN_SPRG_THREAD
- addi r2,r2,-THREAD
- tovirt(r2,r2) /* set back r2 to current */
-4: addi r3,r1,STACK_FRAME_OVERHEAD
- bl unrecoverable_exception
- /* shouldn't return */
- b 4b
-_ASM_NOKPROBE_SYMBOL(nonrecoverable)
-
- .section .bss
- .align 2
-ee_restarts:
- .space 4
- .previous
-
/*
* PROM code for specific machines follows. Put it
* here so it's easy to add arch-specific sections later.
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index b8e7d25be31b..7082e8ee825e 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -20,6 +20,10 @@
#include <asm/time.h>
#include <asm/unistd.h>

+#if defined(CONFIG_PPC_ADV_DEBUG_REGS) && defined(CONFIG_PPC32)
+unsigned long global_dbcr0[NR_CPUS];
+#endif
+
typedef long (*syscall_fn)(long, long, long, long, long, long);

/* Has to run notrace because it is entered not completely "reconciled" */
--
2.25.0

2021-03-09 12:12:13

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 27/43] powerpc/32: Call bad_page_fault() from do_page_fault()

Now that non volatile registers are saved at all times, there is
no need to split bad_page_fault() out of do_page_fault().

Remove handle_page_fault() and use do_page_fault() directly.
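
As a rough C sketch of the resulting kernel-fault path (the helper name
below is hypothetical, the real logic sits inside __do_page_fault() as
shown in the fault.c hunk of this patch): if an exception table entry
covers the faulting access it is fixed up, otherwise the bad kernel
fault is reported directly from C instead of through the old
handle_page_fault assembly trampoline.

	static long report_bad_kernel_fault(struct pt_regs *regs, long err)
	{
		const struct exception_table_entry *entry;

		/* Kernel accesses covered by an extable entry are fixed up */
		entry = search_exception_tables(regs->nip);
		if (entry) {
			instruction_pointer_set(regs, extable_fixup(entry));
			return 0;
		}

		/* Everyone except 64e now reports the bad fault from C */
		if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
			__bad_page_fault(regs, err);
			return 0;
		}

		/* 64e still lets its assembly caller handle it */
		return err;
	}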

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 16 ----------------
arch/powerpc/kernel/head_40x.S | 4 ++--
arch/powerpc/kernel/head_8xx.S | 4 ++--
arch/powerpc/kernel/head_book3s_32.S | 4 ++--
arch/powerpc/kernel/head_booke.h | 4 ++--
arch/powerpc/kernel/head_fsl_booke.S | 2 +-
arch/powerpc/mm/fault.c | 2 +-
7 files changed, 10 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 4698fd1bd8c8..cb2fa00b8fc1 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -220,22 +220,6 @@ ret_from_kernel_thread:
li r3,0
b ret_from_syscall

-/*
- * Top-level page fault handling.
- * This is in assembler because if do_page_fault tells us that
- * it is a bad kernel page fault, we want to save the non-volatile
- * registers before calling bad_page_fault.
- */
- .globl handle_page_fault
-handle_page_fault:
- bl do_page_fault
- cmpwi r3,0
- beq+ ret_from_except
- mr r4,r3 /* err arg for bad_page_fault */
- addi r3,r1,STACK_FRAME_OVERHEAD
- bl __bad_page_fault
- b ret_from_except_full
-
/*
* This routine switches between two different tasks. The process
* state of one is saved on its kernel stack. Then the state
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 08563d4170c6..a65778380704 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -207,7 +207,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
*/
START_EXCEPTION(0x0300, DataStorage)
EXCEPTION_PROLOG DataStorage handle_dar_dsisr=1
- EXC_XFER_LITE(0x300, handle_page_fault)
+ EXC_XFER_LITE(0x300, do_page_fault)

/*
* 0x0400 - Instruction Storage Exception
@@ -218,7 +218,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
li r5,0
stw r5, _ESR(r11) /* Zero ESR */
stw r12, _DEAR(r11) /* SRR0 as DEAR */
- EXC_XFER_LITE(0x400, handle_page_fault)
+ EXC_XFER_LITE(0x400, do_page_fault)

/* 0x0500 - External Interrupt Exception */
EXCEPTION(0x0500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index eb1d40a8f2c4..4078d0dc2f18 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -301,7 +301,7 @@ instruction_counter:
.Litlbie:
stw r12, _DAR(r11)
stw r5, _DSISR(r11)
- EXC_XFER_LITE(0x400, handle_page_fault)
+ EXC_XFER_LITE(0x400, do_page_fault)

/* This is the data TLB error on the MPC8xx. This could be due to
* many reasons, including a dirty update to a pte. We bail out to
@@ -322,7 +322,7 @@ DARFixed:/* Return from dcbx instruction bug workaround */
tlbie r4
.Ldtlbie:
/* 0x300 is DataAccess exception, needed by bad_page_fault() */
- EXC_XFER_LITE(0x300, handle_page_fault)
+ EXC_XFER_LITE(0x300, do_page_fault)

#ifdef CONFIG_VMAP_STACK
vmap_stack_overflow_exception
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 626e9fbac2cc..81a6ec098dd1 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -299,7 +299,7 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
lwz r5, _DSISR(r11)
andis. r0, r5, DSISR_DABRMATCH@h
bne- 1f
- EXC_XFER_LITE(0x300, handle_page_fault)
+ EXC_XFER_LITE(0x300, do_page_fault)
1: EXC_XFER_STD(0x300, do_break)


@@ -328,7 +328,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
stw r5, _DSISR(r11)
stw r12, _DAR(r11)
- EXC_XFER_LITE(0x400, handle_page_fault)
+ EXC_XFER_LITE(0x400, do_page_fault)

/* External interrupt */
EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 009a56d70d76..036a69d16605 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -462,7 +462,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
stw r5,_ESR(r11); \
mfspr r4,SPRN_DEAR; /* Grab the DEAR */ \
stw r4, _DEAR(r11); \
- EXC_XFER_LITE(0x0300, handle_page_fault)
+ EXC_XFER_LITE(0x0300, do_page_fault)

#define INSTRUCTION_STORAGE_EXCEPTION \
START_EXCEPTION(InstructionStorage) \
@@ -470,7 +470,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
mfspr r5,SPRN_ESR; /* Grab the ESR and save it */ \
stw r5,_ESR(r11); \
stw r12, _DEAR(r11); /* Pass SRR0 as arg2 */ \
- EXC_XFER_LITE(0x0400, handle_page_fault)
+ EXC_XFER_LITE(0x0400, do_page_fault)

#define ALIGNMENT_EXCEPTION \
START_EXCEPTION(Alignment) \
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index f51c66f747ad..72e9aa45b99b 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -370,7 +370,7 @@ interrupt_base:
stw r4, _DEAR(r11)
andis. r10,r5,(ESR_ILK|ESR_DLK)@h
bne 1f
- EXC_XFER_LITE(0x0300, handle_page_fault)
+ EXC_XFER_LITE(0x0300, do_page_fault)
1:
EXC_XFER_LITE(0x0300, CacheLockingException)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index bb368257b55c..2e54bac99a22 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -552,7 +552,7 @@ static long __do_page_fault(struct pt_regs *regs)
if (likely(entry)) {
instruction_pointer_set(regs, extable_fixup(entry));
return 0;
- } else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {
+ } else if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
__bad_page_fault(regs, err);
return 0;
} else {
--
2.25.0

2021-03-09 12:12:14

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 29/43] powerpc/32: Save trap number on stack in exception prolog

Saving the trap number onto the stack is moved into the exception
prolog, as EXC_XFER_xxx will soon disappear.
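
With the trap number written by the prolog itself, the pt_regs frame
fully identifies the interrupt before any C code runs. Illustrative
sketch only (the helper names are made up; TRAP() is the existing
accessor that masks the flag bits kept in the low-order bits of
regs->trap):

	static inline bool trap_is_data_access(struct pt_regs *regs)
	{
		return TRAP(regs) == 0x300;	/* what bad_page_fault() keys on */
	}

	static inline bool trap_is_inst_access(struct pt_regs *regs)
	{
		return TRAP(regs) == 0x400;
	}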

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 14 ++++-----
arch/powerpc/kernel/head_40x.S | 22 +++++++-------
arch/powerpc/kernel/head_8xx.S | 14 ++++-----
arch/powerpc/kernel/head_book3s_32.S | 14 ++++-----
arch/powerpc/kernel/head_booke.h | 44 +++++++++++++++-------------
arch/powerpc/kernel/head_fsl_booke.S | 4 +--
6 files changed, 58 insertions(+), 54 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 4d638d760a96..bf4c288173ad 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -10,10 +10,10 @@
* We assume sprg3 has the physical address of the current
* task's thread_struct.
*/
-.macro EXCEPTION_PROLOG name handle_dar_dsisr=0
+.macro EXCEPTION_PROLOG trapno name handle_dar_dsisr=0
EXCEPTION_PROLOG_0 handle_dar_dsisr=\handle_dar_dsisr
EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 \name handle_dar_dsisr=\handle_dar_dsisr
+ EXCEPTION_PROLOG_2 \trapno \name handle_dar_dsisr=\handle_dar_dsisr
.endm

.macro EXCEPTION_PROLOG_0 handle_dar_dsisr=0
@@ -56,7 +56,7 @@
#endif
.endm

-.macro EXCEPTION_PROLOG_2 name handle_dar_dsisr=0
+.macro EXCEPTION_PROLOG_2 trapno name handle_dar_dsisr=0
#ifdef CONFIG_PPC_8xx
.if \handle_dar_dsisr
li r11, RPN_PATTERN
@@ -108,6 +108,8 @@
lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
addi r10,r10,STACK_FRAME_REGS_MARKER@l
stw r10,8(r11)
+ li r10, \trapno
+ stw r10,_TRAP(r11)
SAVE_4GPRS(3, r11)
SAVE_2GPRS(7, r11)
_ASM_NOKPROBE_SYMBOL(\name\()_virt)
@@ -182,12 +184,10 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)

#define EXCEPTION(n, label, hdlr, xfer) \
START_EXCEPTION(n, label) \
- EXCEPTION_PROLOG label; \
+ EXCEPTION_PROLOG n label; \
xfer(n, hdlr)

#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
- li r10,trap; \
- stw r10,_TRAP(r11); \
bl tfer; \
bl hdlr; \
b ret
@@ -213,7 +213,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
#endif
lwz r1, emergency_ctx@l(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
- EXCEPTION_PROLOG_2 vmap_stack_overflow
+ EXCEPTION_PROLOG_2 0 vmap_stack_overflow
EXC_XFER_STD(0, stack_overflow_exception)
.endm

diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index a65778380704..7270caff665c 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -104,7 +104,7 @@ _ENTRY(crit_esr)
* Instead we use a couple of words of memory at low physical addresses.
* This is OK since we don't support SMP on these processors.
*/
-.macro CRITICAL_EXCEPTION_PROLOG name
+.macro CRITICAL_EXCEPTION_PROLOG trapno name
stw r10,crit_r10@l(0) /* save two registers to work with */
stw r11,crit_r11@l(0)
mfspr r10,SPRN_SRR0
@@ -161,6 +161,8 @@ _ENTRY(crit_esr)
lis r10, STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
addi r10, r10, STACK_FRAME_REGS_MARKER@l
stw r10, 8(r11)
+ li r10, \trapno + 2
+ stw r10,_TRAP(r11)
SAVE_4GPRS(3, r11)
SAVE_2GPRS(7, r11)
_ASM_NOKPROBE_SYMBOL(\name\()_virt)
@@ -184,7 +186,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
*/
#define CRITICAL_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(n, label); \
- CRITICAL_EXCEPTION_PROLOG label; \
+ CRITICAL_EXCEPTION_PROLOG n label; \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
crit_transfer_to_handler, ret_from_crit_exc)

@@ -206,7 +208,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
* if they can't resolve the lightweight TLB fault.
*/
START_EXCEPTION(0x0300, DataStorage)
- EXCEPTION_PROLOG DataStorage handle_dar_dsisr=1
+ EXCEPTION_PROLOG 0x300 DataStorage handle_dar_dsisr=1
EXC_XFER_LITE(0x300, do_page_fault)

/*
@@ -214,7 +216,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
* This is caused by a fetch from non-execute or guarded pages.
*/
START_EXCEPTION(0x0400, InstructionAccess)
- EXCEPTION_PROLOG InstructionAccess
+ EXCEPTION_PROLOG 0x400 InstructionAccess
li r5,0
stw r5, _ESR(r11) /* Zero ESR */
stw r12, _DEAR(r11) /* SRR0 as DEAR */
@@ -225,12 +227,12 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)

/* 0x0600 - Alignment Exception */
START_EXCEPTION(0x0600, Alignment)
- EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
+ EXCEPTION_PROLOG 0x600 Alignment handle_dar_dsisr=1
EXC_XFER_STD(0x600, alignment_exception)

/* 0x0700 - Program Exception */
START_EXCEPTION(0x0700, ProgramCheck)
- EXCEPTION_PROLOG ProgramCheck handle_dar_dsisr=1
+ EXCEPTION_PROLOG 0x700 ProgramCheck handle_dar_dsisr=1
EXC_XFER_STD(0x700, program_check_exception)

EXCEPTION(0x0800, Trap_08, unknown_exception, EXC_XFER_STD)
@@ -449,7 +451,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
*/
/* 0x2000 - Debug Exception */
START_EXCEPTION(0x2000, DebugTrap)
- CRITICAL_EXCEPTION_PROLOG DebugTrap
+ CRITICAL_EXCEPTION_PROLOG 0x2000 DebugTrap

/*
* If this is a single step or branch-taken exception in an
@@ -498,7 +500,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
/* Programmable Interval Timer (PIT) Exception. (from 0x1000) */
__HEAD
Decrementer:
- EXCEPTION_PROLOG Decrementer
+ EXCEPTION_PROLOG 0x1000 Decrementer
lis r0,TSR_PIS@h
mtspr SPRN_TSR,r0 /* Clear the PIT exception */
EXC_XFER_LITE(0x1000, timer_interrupt)
@@ -506,13 +508,13 @@ Decrementer:
/* Fixed Interval Timer (FIT) Exception. (from 0x1010) */
__HEAD
FITException:
- EXCEPTION_PROLOG FITException
+ EXCEPTION_PROLOG 0x1010 FITException
EXC_XFER_STD(0x1010, unknown_exception)

/* Watchdog Timer (WDT) Exception. (from 0x1020) */
__HEAD
WDTException:
- CRITICAL_EXCEPTION_PROLOG WDTException
+ CRITICAL_EXCEPTION_PROLOG 0x1020 WDTException
EXC_XFER_TEMPLATE(WatchdogException, 0x1020+2,
(MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)),
crit_transfer_to_handler, ret_from_crit_exc)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 4078d0dc2f18..c48de97f42fc 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -122,7 +122,7 @@ instruction_counter:

/* Machine check */
START_EXCEPTION(0x200, MachineCheck)
- EXCEPTION_PROLOG MachineCheck handle_dar_dsisr=1
+ EXCEPTION_PROLOG 0x200 MachineCheck handle_dar_dsisr=1
EXC_XFER_STD(0x200, machine_check_exception)

/* External interrupt */
@@ -130,7 +130,7 @@ instruction_counter:

/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
- EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
+ EXCEPTION_PROLOG 0x600 Alignment handle_dar_dsisr=1
EXC_XFER_STD(0x600, alignment_exception)

/* Program check exception */
@@ -292,12 +292,12 @@ instruction_counter:
* addresses. There is nothing to do but handle a big time error fault.
*/
START_EXCEPTION(0x1300, InstructionTLBError)
- EXCEPTION_PROLOG InstructionTLBError
+ /* 0x400 is InstructionAccess exception, needed by bad_page_fault() */
+ EXCEPTION_PROLOG 0x400 InstructionTLBError
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
andis. r10,r9,SRR1_ISI_NOPT@h
beq+ .Litlbie
tlbie r12
- /* 0x400 is InstructionAccess exception, needed by bad_page_fault() */
.Litlbie:
stw r12, _DAR(r11)
stw r5, _DSISR(r11)
@@ -314,14 +314,14 @@ instruction_counter:
beq- cr1, FixupDAR /* must be a buggy dcbX, icbi insn. */
DARFixed:/* Return from dcbx instruction bug workaround */
EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 DataTLBError handle_dar_dsisr=1
+ /* 0x300 is DataAccess exception, needed by bad_page_fault() */
+ EXCEPTION_PROLOG_2 0x300 DataTLBError handle_dar_dsisr=1
lwz r4, _DAR(r11)
lwz r5, _DSISR(r11)
andis. r10,r5,DSISR_NOHPTE@h
beq+ .Ldtlbie
tlbie r4
.Ldtlbie:
- /* 0x300 is DataAccess exception, needed by bad_page_fault() */
EXC_XFER_LITE(0x300, do_page_fault)

#ifdef CONFIG_VMAP_STACK
@@ -345,7 +345,7 @@ DARFixed:/* Return from dcbx instruction bug workaround */
rfi

1: EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 DataBreakpoint handle_dar_dsisr=1
+ EXCEPTION_PROLOG_2 0x1c00 DataBreakpoint handle_dar_dsisr=1
mfspr r4,SPRN_BAR
stw r4,_DAR(r11)
EXC_XFER_STD(0x1c00, do_break)
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 81a6ec098dd1..67dac65b8ec3 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -266,7 +266,7 @@ __secondary_hold_acknowledge:
mfspr r1, SPRN_SPRG_SCRATCH2
#endif /* CONFIG_PPC_CHRP */
EXCEPTION_PROLOG_1
-7: EXCEPTION_PROLOG_2 MachineCheck
+7: EXCEPTION_PROLOG_2 0x200 MachineCheck
#ifdef CONFIG_PPC_CHRP
beq cr1, 1f
twi 31, 0, 0
@@ -295,7 +295,7 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
#endif
1: EXCEPTION_PROLOG_0 handle_dar_dsisr=1
EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 DataAccess handle_dar_dsisr=1
+ EXCEPTION_PROLOG_2 0x300 DataAccess handle_dar_dsisr=1
lwz r5, _DSISR(r11)
andis. r0, r5, DSISR_DABRMATCH@h
bne- 1f
@@ -324,7 +324,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
andi. r11, r11, MSR_PR

EXCEPTION_PROLOG_1
- EXCEPTION_PROLOG_2 InstructionAccess
+ EXCEPTION_PROLOG_2 0x400 InstructionAccess
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
stw r5, _DSISR(r11)
stw r12, _DAR(r11)
@@ -335,7 +335,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)

/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
- EXCEPTION_PROLOG Alignment handle_dar_dsisr=1
+ EXCEPTION_PROLOG 0x600 Alignment handle_dar_dsisr=1
EXC_XFER_STD(0x600, alignment_exception)

/* Program check exception */
@@ -351,7 +351,7 @@ BEGIN_FTR_SECTION
*/
b ProgramCheck
END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
- EXCEPTION_PROLOG FPUnavailable
+ EXCEPTION_PROLOG 0x800 FPUnavailable
beq 1f
bl load_up_fpu /* if from user, just load it up */
b fast_exception_return
@@ -710,7 +710,7 @@ fast_hash_page_return:

__HEAD
AltiVecUnavailable:
- EXCEPTION_PROLOG AltiVecUnavailable
+ EXCEPTION_PROLOG 0xf20 AltiVecUnavailable
#ifdef CONFIG_ALTIVEC
beq 1f
bl load_up_altivec /* if from user, just load it up */
@@ -720,7 +720,7 @@ AltiVecUnavailable:

__HEAD
PerformanceMonitor:
- EXCEPTION_PROLOG PerformanceMonitor
+ EXCEPTION_PROLOG 0xf00 PerformanceMonitor
EXC_XFER_STD(0xf00, performance_monitor_exception)


diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 036a69d16605..27a7358c04bb 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -44,7 +44,7 @@ END_BTB_FLUSH_SECTION
#endif


-#define NORMAL_EXCEPTION_PROLOG(intno) \
+#define NORMAL_EXCEPTION_PROLOG(trapno, intno) \
mtspr SPRN_SPRG_WSCRATCH0, r10; /* save one register */ \
mfspr r10, SPRN_SPRG_THREAD; \
stw r11, THREAD_NORMSAVE(0)(r10); \
@@ -82,6 +82,8 @@ END_BTB_FLUSH_SECTION
lis r10, STACK_FRAME_REGS_MARKER@ha;/* exception frame marker */ \
addi r10, r10, STACK_FRAME_REGS_MARKER@l; \
stw r10, 8(r11); \
+ li r10, trapno; \
+ stw r10,_TRAP(r11); \
SAVE_4GPRS(3, r11); \
SAVE_2GPRS(7, r11)

@@ -182,7 +184,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
* registers as the normal prolog above. Instead we use a portion of the
* critical/machine check exception stack at low physical addresses.
*/
-#define EXC_LEVEL_EXCEPTION_PROLOG(exc_level, intno, exc_level_srr0, exc_level_srr1) \
+#define EXC_LEVEL_EXCEPTION_PROLOG(exc_level, trapno, intno, exc_level_srr0, exc_level_srr1) \
mtspr SPRN_SPRG_WSCRATCH_##exc_level,r8; \
BOOKE_LOAD_EXC_LEVEL_STACK(exc_level);/* r8 points to the exc_level stack*/ \
stw r9,GPR9(r8); /* save various registers */\
@@ -225,6 +227,8 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
stw r1,0(r11); \
mr r1,r11; \
rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
+ li r10, trapno; \
+ stw r10,_TRAP(r11); \
stw r0,GPR0(r11); \
SAVE_4GPRS(3, r11); \
SAVE_2GPRS(7, r11)
@@ -259,12 +263,12 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
#endif
.endm

-#define CRITICAL_EXCEPTION_PROLOG(intno) \
- EXC_LEVEL_EXCEPTION_PROLOG(CRIT, intno, SPRN_CSRR0, SPRN_CSRR1)
-#define DEBUG_EXCEPTION_PROLOG \
- EXC_LEVEL_EXCEPTION_PROLOG(DBG, DEBUG, SPRN_DSRR0, SPRN_DSRR1)
-#define MCHECK_EXCEPTION_PROLOG \
- EXC_LEVEL_EXCEPTION_PROLOG(MC, MACHINE_CHECK, \
+#define CRITICAL_EXCEPTION_PROLOG(trapno, intno) \
+ EXC_LEVEL_EXCEPTION_PROLOG(CRIT, trapno+2, intno, SPRN_CSRR0, SPRN_CSRR1)
+#define DEBUG_EXCEPTION_PROLOG(trapno) \
+ EXC_LEVEL_EXCEPTION_PROLOG(DBG, trapno+8, DEBUG, SPRN_DSRR0, SPRN_DSRR1)
+#define MCHECK_EXCEPTION_PROLOG(trapno) \
+ EXC_LEVEL_EXCEPTION_PROLOG(MC, trapno+4, MACHINE_CHECK, \
SPRN_MCSRR0, SPRN_MCSRR1)

/*
@@ -293,12 +297,12 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)

#define EXCEPTION(n, intno, label, hdlr, xfer) \
START_EXCEPTION(label); \
- NORMAL_EXCEPTION_PROLOG(intno); \
+ NORMAL_EXCEPTION_PROLOG(n, intno); \
xfer(n, hdlr)

#define CRITICAL_EXCEPTION(n, intno, label, hdlr) \
START_EXCEPTION(label); \
- CRITICAL_EXCEPTION_PROLOG(intno); \
+ CRITICAL_EXCEPTION_PROLOG(n, intno); \
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
@@ -306,7 +310,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)

#define MCHECK_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(label); \
- MCHECK_EXCEPTION_PROLOG; \
+ MCHECK_EXCEPTION_PROLOG(n); \
mfspr r5,SPRN_ESR; \
stw r5,_ESR(r11); \
SAVE_xSRR(DSRR); \
@@ -317,8 +321,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
mcheck_transfer_to_handler, ret_from_mcheck_exc)

#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
- li r10,trap; \
- stw r10,_TRAP(r11); \
bl tfer; \
bl hdlr; \
b ret; \
@@ -346,7 +348,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
*/
#define DEBUG_DEBUG_EXCEPTION \
START_EXCEPTION(DebugDebug); \
- DEBUG_EXCEPTION_PROLOG; \
+ DEBUG_EXCEPTION_PROLOG(2000); \
\
/* \
* If there is a single step or branch-taken exception in an \
@@ -402,7 +404,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)

#define DEBUG_CRIT_EXCEPTION \
START_EXCEPTION(DebugCrit); \
- CRITICAL_EXCEPTION_PROLOG(DEBUG); \
+ CRITICAL_EXCEPTION_PROLOG(2000,DEBUG); \
\
/* \
* If there is a single step or branch-taken exception in an \
@@ -457,7 +459,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)

#define DATA_STORAGE_EXCEPTION \
START_EXCEPTION(DataStorage) \
- NORMAL_EXCEPTION_PROLOG(DATA_STORAGE); \
+ NORMAL_EXCEPTION_PROLOG(0x300, DATA_STORAGE); \
mfspr r5,SPRN_ESR; /* Grab the ESR and save it */ \
stw r5,_ESR(r11); \
mfspr r4,SPRN_DEAR; /* Grab the DEAR */ \
@@ -466,7 +468,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)

#define INSTRUCTION_STORAGE_EXCEPTION \
START_EXCEPTION(InstructionStorage) \
- NORMAL_EXCEPTION_PROLOG(INST_STORAGE); \
+ NORMAL_EXCEPTION_PROLOG(0x400, INST_STORAGE); \
mfspr r5,SPRN_ESR; /* Grab the ESR and save it */ \
stw r5,_ESR(r11); \
stw r12, _DEAR(r11); /* Pass SRR0 as arg2 */ \
@@ -474,28 +476,28 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)

#define ALIGNMENT_EXCEPTION \
START_EXCEPTION(Alignment) \
- NORMAL_EXCEPTION_PROLOG(ALIGNMENT); \
+ NORMAL_EXCEPTION_PROLOG(0x600, ALIGNMENT); \
mfspr r4,SPRN_DEAR; /* Grab the DEAR and save it */ \
stw r4,_DEAR(r11); \
EXC_XFER_STD(0x0600, alignment_exception)

#define PROGRAM_EXCEPTION \
START_EXCEPTION(Program) \
- NORMAL_EXCEPTION_PROLOG(PROGRAM); \
+ NORMAL_EXCEPTION_PROLOG(0x700, PROGRAM); \
mfspr r4,SPRN_ESR; /* Grab the ESR and save it */ \
stw r4,_ESR(r11); \
EXC_XFER_STD(0x0700, program_check_exception)

#define DECREMENTER_EXCEPTION \
START_EXCEPTION(Decrementer) \
- NORMAL_EXCEPTION_PROLOG(DECREMENTER); \
+ NORMAL_EXCEPTION_PROLOG(0x900, DECREMENTER); \
lis r0,TSR_DIS@h; /* Setup the DEC interrupt mask */ \
mtspr SPRN_TSR,r0; /* Clear the DEC interrupt */ \
EXC_XFER_LITE(0x0900, timer_interrupt)

#define FP_UNAVAILABLE_EXCEPTION \
START_EXCEPTION(FloatingPointUnavailable) \
- NORMAL_EXCEPTION_PROLOG(FP_UNAVAIL); \
+ NORMAL_EXCEPTION_PROLOG(0x800, FP_UNAVAIL); \
beq 1f; \
bl load_up_fpu; /* if from user, just load it up */ \
b fast_exception_return; \
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 72e9aa45b99b..bf2730b4e43b 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -363,7 +363,7 @@ interrupt_base:

/* Data Storage Interrupt */
START_EXCEPTION(DataStorage)
- NORMAL_EXCEPTION_PROLOG(DATA_STORAGE)
+ NORMAL_EXCEPTION_PROLOG(0x300, DATA_STORAGE)
mfspr r5,SPRN_ESR /* Grab the ESR, save it */
stw r5,_ESR(r11)
mfspr r4,SPRN_DEAR /* Grab the DEAR, save it */
@@ -613,7 +613,7 @@ END_BTB_FLUSH_SECTION
#ifdef CONFIG_SPE
/* SPE Unavailable */
START_EXCEPTION(SPEUnavailable)
- NORMAL_EXCEPTION_PROLOG(SPE_UNAVAIL)
+ NORMAL_EXCEPTION_PROLOG(0x2010, SPE_UNAVAIL)
beq 1f
bl load_up_spe
b fast_exception_return
--
2.25.0

2021-03-09 12:12:17

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 31/43] powerpc/32: Only restore non volatile registers when required

Until now, non volatile registers were restored every time they
were saved, i.e. using EXC_XFER_STD meant saving and restoring
them while EXC_XFER_LITE meant neither saving nor restoring them.

Now that they are always saved, EXC_XFER_STD means to restore
them and EXC_XFER_LITE means to not restore them.

Most of the users of EXC_XFER_STD only need to retrieve the
non volatile registers. For them there is no need to restore
the non volatile registers as they have not been modified.

Only a very few exceptions require the non volatile registers to be restored.

Open-code the few places which require restoring of non volatile
registers.
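
The handlers that keep the restore are those which may modify the
interrupted context's non volatile registers through pt_regs, typically
when emulating an instruction. A minimal sketch of why, using a
hypothetical emulation helper (not actual kernel code):

	static void emulate_load(struct pt_regs *regs, int rd, unsigned long val)
	{
		/*
		 * Writing r14-r31 through pt_regs only changes the saved
		 * copy; the exit path must then do REST_NVGPRS(r1) so the
		 * change reaches the real register before the rfi.
		 */
		regs->gpr[rd] = val;
	}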

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 1 -
arch/powerpc/kernel/head_40x.S | 10 ++++++++--
arch/powerpc/kernel/head_8xx.S | 24 ++++++++++++++++++++----
arch/powerpc/kernel/head_book3s_32.S | 17 ++++++++++++++---
arch/powerpc/kernel/head_booke.h | 10 ++++++++--
arch/powerpc/kernel/head_fsl_booke.S | 16 ++++++++++++----
6 files changed, 62 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index e2346662444d..ca14bc2f3418 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -335,7 +335,6 @@ _ASM_NOKPROBE_SYMBOL(fast_exception_return)

.globl ret_from_except_full
ret_from_except_full:
- REST_NVGPRS(r1)
/* fall through */

.globl ret_from_except
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 7270caff665c..f3e5b462113f 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -228,12 +228,18 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
/* 0x0600 - Alignment Exception */
START_EXCEPTION(0x0600, Alignment)
EXCEPTION_PROLOG 0x600 Alignment handle_dar_dsisr=1
- EXC_XFER_STD(0x600, alignment_exception)
+ prepare_transfer_to_handler
+ bl alignment_exception
+ REST_NVGPRS(r1)
+ b interrupt_return

/* 0x0700 - Program Exception */
START_EXCEPTION(0x0700, ProgramCheck)
EXCEPTION_PROLOG 0x700 ProgramCheck handle_dar_dsisr=1
- EXC_XFER_STD(0x700, program_check_exception)
+ prepare_transfer_to_handler
+ bl program_check_exception
+ REST_NVGPRS(r1)
+ b interrupt_return

EXCEPTION(0x0800, Trap_08, unknown_exception, EXC_XFER_STD)
EXCEPTION(0x0900, Trap_09, unknown_exception, EXC_XFER_STD)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index c48de97f42fc..86f844eb0e5a 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -131,10 +131,18 @@ instruction_counter:
/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG 0x600 Alignment handle_dar_dsisr=1
- EXC_XFER_STD(0x600, alignment_exception)
+ prepare_transfer_to_handler
+ bl alignment_exception
+ REST_NVGPRS(r1)
+ b interrupt_return

/* Program check exception */
- EXCEPTION(0x700, ProgramCheck, program_check_exception, EXC_XFER_STD)
+ START_EXCEPTION(0x700, ProgramCheck)
+ EXCEPTION_PROLOG 0x700 ProgramCheck
+ prepare_transfer_to_handler
+ bl program_check_exception
+ REST_NVGPRS(r1)
+ b interrupt_return

/* Decrementer */
EXCEPTION(0x900, Decrementer, timer_interrupt, EXC_XFER_LITE)
@@ -149,7 +157,12 @@ instruction_counter:
/* On the MPC8xx, this is a software emulation interrupt. It occurs
* for all unimplemented and illegal instructions.
*/
- EXCEPTION(0x1000, SoftEmu, emulation_assist_interrupt, EXC_XFER_STD)
+ START_EXCEPTION(0x1000, SoftEmu)
+ EXCEPTION_PROLOG 0x1000 SoftEmu
+ prepare_transfer_to_handler
+ bl emulation_assist_interrupt
+ REST_NVGPRS(r1)
+ b interrupt_return

/*
* For the MPC8xx, this is a software tablewalk to load the instruction
@@ -348,7 +361,10 @@ DARFixed:/* Return from dcbx instruction bug workaround */
EXCEPTION_PROLOG_2 0x1c00 DataBreakpoint handle_dar_dsisr=1
mfspr r4,SPRN_BAR
stw r4,_DAR(r11)
- EXC_XFER_STD(0x1c00, do_break)
+ prepare_transfer_to_handler
+ bl do_break
+ REST_NVGPRS(r1)
+ b interrupt_return

#ifdef CONFIG_PERF_EVENTS
START_EXCEPTION(0x1d00, InstructionBreakpoint)
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 67dac65b8ec3..609b2eedd4f9 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -300,7 +300,10 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
andis. r0, r5, DSISR_DABRMATCH@h
bne- 1f
EXC_XFER_LITE(0x300, do_page_fault)
-1: EXC_XFER_STD(0x300, do_break)
+1: prepare_transfer_to_handler
+ bl do_break
+ REST_NVGPRS(r1)
+ b interrupt_return


/* Instruction access exception. */
@@ -336,10 +339,18 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
EXCEPTION_PROLOG 0x600 Alignment handle_dar_dsisr=1
- EXC_XFER_STD(0x600, alignment_exception)
+ prepare_transfer_to_handler
+ bl alignment_exception
+ REST_NVGPRS(r1)
+ b interrupt_return

/* Program check exception */
- EXCEPTION(0x700, ProgramCheck, program_check_exception, EXC_XFER_STD)
+ START_EXCEPTION(0x700, ProgramCheck)
+ EXCEPTION_PROLOG 0x700 ProgramCheck
+ prepare_transfer_to_handler
+ bl program_check_exception
+ REST_NVGPRS(r1)
+ b interrupt_return

/* Floating-point unavailable */
START_EXCEPTION(0x800, FPUnavailable)
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 0f02b970e797..baf10556c587 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -483,14 +483,20 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
NORMAL_EXCEPTION_PROLOG(0x600, ALIGNMENT); \
mfspr r4,SPRN_DEAR; /* Grab the DEAR and save it */ \
stw r4,_DEAR(r11); \
- EXC_XFER_STD(0x0600, alignment_exception)
+ prepare_transfer_to_handler; \
+ bl alignment_exception; \
+ REST_NVGPRS(r1); \
+ b interrupt_return

#define PROGRAM_EXCEPTION \
START_EXCEPTION(Program) \
NORMAL_EXCEPTION_PROLOG(0x700, PROGRAM); \
mfspr r4,SPRN_ESR; /* Grab the ESR and save it */ \
stw r4,_ESR(r11); \
- EXC_XFER_STD(0x0700, program_check_exception)
+ prepare_transfer_to_handler; \
+ bl program_check_exception; \
+ REST_NVGPRS(r1); \
+ b interrupt_return

#define DECREMENTER_EXCEPTION \
START_EXCEPTION(Decrementer) \
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index bf2730b4e43b..210871b2eb41 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -625,12 +625,20 @@ END_BTB_FLUSH_SECTION

/* SPE Floating Point Data */
#ifdef CONFIG_SPE
- EXCEPTION(0x2030, SPE_FP_DATA, SPEFloatingPointData,
- SPEFloatingPointException, EXC_XFER_STD)
+ START_EXCEPTION(SPEFloatingPointData)
+ NORMAL_EXCEPTION_PROLOG(0x2030, SPE_FP_DATA)
+ prepare_transfer_to_handler
+ bl SPEFloatingPointException
+ REST_NVGPRS(r1)
+ b interrupt_return

/* SPE Floating Point Round */
- EXCEPTION(0x2050, SPE_FP_ROUND, SPEFloatingPointRound, \
- SPEFloatingPointRoundException, EXC_XFER_STD)
+ START_EXCEPTION(SPEFloatingPointRound)
+ NORMAL_EXCEPTION_PROLOG(0x2050, SPE_FP_ROUND)
+ prepare_transfer_to_handler
+ bl SPEFloatingPointRoundException
+ REST_NVGPRS(r1)
+ b interrupt_return
#elif defined(CONFIG_SPE_POSSIBLE)
EXCEPTION(0x2040, SPE_FP_DATA, SPEFloatingPointData,
unknown_exception, EXC_XFER_STD)
--
2.25.0

2021-03-09 12:12:46

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 33/43] powerpc/32: Remove the xfer parameter in EXCEPTION() macro

The xfer parameter is not used anymore, so remove it.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 2 +-
arch/powerpc/kernel/head_40x.S | 42 ++++++++--------
arch/powerpc/kernel/head_44x.S | 10 ++--
arch/powerpc/kernel/head_8xx.S | 14 +++---
arch/powerpc/kernel/head_book3s_32.S | 72 ++++++++++++++--------------
arch/powerpc/kernel/head_booke.h | 2 +-
arch/powerpc/kernel/head_fsl_booke.S | 28 +++++------
7 files changed, 81 insertions(+), 89 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 412ede8610f7..84e6251622e8 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -186,7 +186,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)

#endif

-#define EXCEPTION(n, label, hdlr, xfer) \
+#define EXCEPTION(n, label, hdlr) \
START_EXCEPTION(n, label) \
EXCEPTION_PROLOG n label; \
prepare_transfer_to_handler; \
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 7eb49ebd6000..52b40bf529c6 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -228,7 +228,7 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
b interrupt_return

/* 0x0500 - External Interrupt Exception */
- EXCEPTION(0x0500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
+ EXCEPTION(0x0500, HardwareInterrupt, do_IRQ)

/* 0x0600 - Alignment Exception */
START_EXCEPTION(0x0600, Alignment)
@@ -246,19 +246,19 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
REST_NVGPRS(r1)
b interrupt_return

- EXCEPTION(0x0800, Trap_08, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x0900, Trap_09, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x0A00, Trap_0A, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x0B00, Trap_0B, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x0800, Trap_08, unknown_exception)
+ EXCEPTION(0x0900, Trap_09, unknown_exception)
+ EXCEPTION(0x0A00, Trap_0A, unknown_exception)
+ EXCEPTION(0x0B00, Trap_0B, unknown_exception)

/* 0x0C00 - System Call Exception */
START_EXCEPTION(0x0C00, SystemCall)
SYSCALL_ENTRY 0xc00
/* Trap_0D is commented out to get more space for system call exception */

-/* EXCEPTION(0x0D00, Trap_0D, unknown_exception, EXC_XFER_STD) */
- EXCEPTION(0x0E00, Trap_0E, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x0F00, Trap_0F, unknown_exception, EXC_XFER_STD)
+/* EXCEPTION(0x0D00, Trap_0D, unknown_exception) */
+ EXCEPTION(0x0E00, Trap_0E, unknown_exception)
+ EXCEPTION(0x0F00, Trap_0F, unknown_exception)

/* 0x1000 - Programmable Interval Timer (PIT) Exception */
START_EXCEPTION(0x1000, DecrementerTrap)
@@ -433,19 +433,19 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
mfspr r10, SPRN_SPRG_SCRATCH5
b InstructionAccess

- EXCEPTION(0x1300, Trap_13, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1400, Trap_14, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1500, Trap_15, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1600, Trap_16, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1700, Trap_17, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1800, Trap_18, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1900, Trap_19, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1A00, Trap_1A, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1B00, Trap_1B, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1C00, Trap_1C, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1D00, Trap_1D, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1E00, Trap_1E, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1F00, Trap_1F, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x1300, Trap_13, unknown_exception)
+ EXCEPTION(0x1400, Trap_14, unknown_exception)
+ EXCEPTION(0x1500, Trap_15, unknown_exception)
+ EXCEPTION(0x1600, Trap_16, unknown_exception)
+ EXCEPTION(0x1700, Trap_17, unknown_exception)
+ EXCEPTION(0x1800, Trap_18, unknown_exception)
+ EXCEPTION(0x1900, Trap_19, unknown_exception)
+ EXCEPTION(0x1A00, Trap_1A, unknown_exception)
+ EXCEPTION(0x1B00, Trap_1B, unknown_exception)
+ EXCEPTION(0x1C00, Trap_1C, unknown_exception)
+ EXCEPTION(0x1D00, Trap_1D, unknown_exception)
+ EXCEPTION(0x1E00, Trap_1E, unknown_exception)
+ EXCEPTION(0x1F00, Trap_1F, unknown_exception)

/* Check for a single step debug exception while in an exception
* handler before state has been saved. This is to catch the case
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index 813fa305c33b..5c106ac36626 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -263,8 +263,7 @@ interrupt_base:
INSTRUCTION_STORAGE_EXCEPTION

/* External Input Interrupt */
- EXCEPTION(0x0500, BOOKE_INTERRUPT_EXTERNAL, ExternalInput, \
- do_IRQ, EXC_XFER_LITE)
+ EXCEPTION(0x0500, BOOKE_INTERRUPT_EXTERNAL, ExternalInput, do_IRQ)

/* Alignment Interrupt */
ALIGNMENT_EXCEPTION
@@ -277,7 +276,7 @@ interrupt_base:
FP_UNAVAILABLE_EXCEPTION
#else
EXCEPTION(0x2010, BOOKE_INTERRUPT_FP_UNAVAIL, \
- FloatingPointUnavailable, unknown_exception, EXC_XFER_STD)
+ FloatingPointUnavailable, unknown_exception)
#endif
/* System Call Interrupt */
START_EXCEPTION(SystemCall)
@@ -285,15 +284,14 @@ interrupt_base:

/* Auxiliary Processor Unavailable Interrupt */
EXCEPTION(0x2020, BOOKE_INTERRUPT_AP_UNAVAIL, \
- AuxillaryProcessorUnavailable, unknown_exception, EXC_XFER_STD)
+ AuxillaryProcessorUnavailable, unknown_exception)

/* Decrementer Interrupt */
DECREMENTER_EXCEPTION

/* Fixed Internal Timer Interrupt */
/* TODO: Add FIT support */
- EXCEPTION(0x1010, BOOKE_INTERRUPT_FIT, FixedIntervalTimer, \
- unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x1010, BOOKE_INTERRUPT_FIT, FixedIntervalTimer, unknown_exception)

/* Watchdog Timer Interrupt */
/* TODO: Add watchdog support */
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 4d73549722a1..34feb628c88d 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -118,7 +118,7 @@ instruction_counter:
#endif

/* System reset */
- EXCEPTION(0x100, Reset, system_reset_exception, EXC_XFER_STD)
+ EXCEPTION(0x100, Reset, system_reset_exception)

/* Machine check */
START_EXCEPTION(0x200, MachineCheck)
@@ -128,7 +128,7 @@ instruction_counter:
b interrupt_return

/* External interrupt */
- EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
+ EXCEPTION(0x500, HardwareInterrupt, do_IRQ)

/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
@@ -147,14 +147,14 @@ instruction_counter:
b interrupt_return

/* Decrementer */
- EXCEPTION(0x900, Decrementer, timer_interrupt, EXC_XFER_LITE)
+ EXCEPTION(0x900, Decrementer, timer_interrupt)

/* System call */
START_EXCEPTION(0xc00, SystemCall)
SYSCALL_ENTRY 0xc00

/* Single step - not used on 601 */
- EXCEPTION(0xd00, SingleStep, single_step_exception, EXC_XFER_STD)
+ EXCEPTION(0xd00, SingleStep, single_step_exception)

/* On the MPC8xx, this is a software emulation interrupt. It occurs
* for all unimplemented and illegal instructions.
@@ -384,10 +384,10 @@ DARFixed:/* Return from dcbx instruction bug workaround */
mfspr r10, SPRN_SPRG_SCRATCH0
rfi
#else
- EXCEPTION(0x1d00, Trap_1d, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x1d00, Trap_1d, unknown_exception)
#endif
- EXCEPTION(0x1e00, Trap_1e, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1f00, Trap_1f, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x1e00, Trap_1e, unknown_exception)
+ EXCEPTION(0x1f00, Trap_1f, unknown_exception)

__HEAD
. = 0x2000
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 0082342f0ae6..055441646f27 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -239,7 +239,7 @@ __secondary_hold_acknowledge:
/* System reset */
/* core99 pmac starts the seconary here by changing the vector, and
putting it back to what it was (unknown_async_exception) when done. */
- EXCEPTION(0x100, Reset, unknown_async_exception, EXC_XFER_STD)
+ EXCEPTION(0x100, Reset, unknown_async_exception)

/* Machine check */
/*
@@ -339,7 +339,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
b interrupt_return

/* External interrupt */
- EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
+ EXCEPTION(0x500, HardwareInterrupt, do_IRQ)

/* Alignment exception */
START_EXCEPTION(0x600, Alignment)
@@ -379,17 +379,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
#endif

/* Decrementer */
- EXCEPTION(0x900, Decrementer, timer_interrupt, EXC_XFER_LITE)
+ EXCEPTION(0x900, Decrementer, timer_interrupt)

- EXCEPTION(0xa00, Trap_0a, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0xb00, Trap_0b, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0xa00, Trap_0a, unknown_exception)
+ EXCEPTION(0xb00, Trap_0b, unknown_exception)

/* System call */
START_EXCEPTION(0xc00, SystemCall)
SYSCALL_ENTRY 0xc00

- EXCEPTION(0xd00, SingleStep, single_step_exception, EXC_XFER_STD)
- EXCEPTION(0xe00, Trap_0e, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0xd00, SingleStep, single_step_exception)
+ EXCEPTION(0xe00, Trap_0e, unknown_exception)

/*
* The Altivec unavailable trap is at 0x0f20. Foo.
@@ -615,35 +615,35 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_NEED_DTLB_SW_LRU)
#define TAUException unknown_async_exception
#endif

- EXCEPTION(0x1300, Trap_13, instruction_breakpoint_exception, EXC_XFER_STD)
- EXCEPTION(0x1400, SMI, SMIException, EXC_XFER_STD)
- EXCEPTION(0x1500, Trap_15, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1600, Trap_16, altivec_assist_exception, EXC_XFER_STD)
- EXCEPTION(0x1700, Trap_17, TAUException, EXC_XFER_STD)
- EXCEPTION(0x1800, Trap_18, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1900, Trap_19, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1a00, Trap_1a, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1b00, Trap_1b, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1c00, Trap_1c, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1d00, Trap_1d, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1e00, Trap_1e, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x1f00, Trap_1f, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2000, RunMode, RunModeException, EXC_XFER_STD)
- EXCEPTION(0x2100, Trap_21, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2200, Trap_22, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2300, Trap_23, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2400, Trap_24, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2500, Trap_25, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2600, Trap_26, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2700, Trap_27, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2800, Trap_28, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2900, Trap_29, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2a00, Trap_2a, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2b00, Trap_2b, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2c00, Trap_2c, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2d00, Trap_2d, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2e00, Trap_2e, unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2f00, Trap_2f, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x1300, Trap_13, instruction_breakpoint_exception)
+ EXCEPTION(0x1400, SMI, SMIException)
+ EXCEPTION(0x1500, Trap_15, unknown_exception)
+ EXCEPTION(0x1600, Trap_16, altivec_assist_exception)
+ EXCEPTION(0x1700, Trap_17, TAUException)
+ EXCEPTION(0x1800, Trap_18, unknown_exception)
+ EXCEPTION(0x1900, Trap_19, unknown_exception)
+ EXCEPTION(0x1a00, Trap_1a, unknown_exception)
+ EXCEPTION(0x1b00, Trap_1b, unknown_exception)
+ EXCEPTION(0x1c00, Trap_1c, unknown_exception)
+ EXCEPTION(0x1d00, Trap_1d, unknown_exception)
+ EXCEPTION(0x1e00, Trap_1e, unknown_exception)
+ EXCEPTION(0x1f00, Trap_1f, unknown_exception)
+ EXCEPTION(0x2000, RunMode, RunModeException)
+ EXCEPTION(0x2100, Trap_21, unknown_exception)
+ EXCEPTION(0x2200, Trap_22, unknown_exception)
+ EXCEPTION(0x2300, Trap_23, unknown_exception)
+ EXCEPTION(0x2400, Trap_24, unknown_exception)
+ EXCEPTION(0x2500, Trap_25, unknown_exception)
+ EXCEPTION(0x2600, Trap_26, unknown_exception)
+ EXCEPTION(0x2700, Trap_27, unknown_exception)
+ EXCEPTION(0x2800, Trap_28, unknown_exception)
+ EXCEPTION(0x2900, Trap_29, unknown_exception)
+ EXCEPTION(0x2a00, Trap_2a, unknown_exception)
+ EXCEPTION(0x2b00, Trap_2b, unknown_exception)
+ EXCEPTION(0x2c00, Trap_2c, unknown_exception)
+ EXCEPTION(0x2d00, Trap_2d, unknown_exception)
+ EXCEPTION(0x2e00, Trap_2e, unknown_exception)
+ EXCEPTION(0x2f00, Trap_2f, unknown_exception)

__HEAD
. = 0x3000
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index bc69b9bf61a4..fa566e89f18b 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -299,7 +299,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
.align 5; \
label:

-#define EXCEPTION(n, intno, label, hdlr, xfer) \
+#define EXCEPTION(n, intno, label, hdlr) \
START_EXCEPTION(label); \
NORMAL_EXCEPTION_PROLOG(n, intno); \
prepare_transfer_to_handler; \
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 48d022b1f508..3efc5baa801a 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -382,7 +382,7 @@ interrupt_base:
INSTRUCTION_STORAGE_EXCEPTION

/* External Input Interrupt */
- EXCEPTION(0x0500, EXTERNAL, ExternalInput, do_IRQ, EXC_XFER_LITE)
+ EXCEPTION(0x0500, EXTERNAL, ExternalInput, do_IRQ)

/* Alignment Interrupt */
ALIGNMENT_EXCEPTION
@@ -394,8 +394,7 @@ interrupt_base:
#ifdef CONFIG_PPC_FPU
FP_UNAVAILABLE_EXCEPTION
#else
- EXCEPTION(0x0800, FP_UNAVAIL, FloatingPointUnavailable, \
- unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x0800, FP_UNAVAIL, FloatingPointUnavailable, unknown_exception)
#endif

/* System Call Interrupt */
@@ -403,16 +402,14 @@ interrupt_base:
SYSCALL_ENTRY 0xc00 BOOKE_INTERRUPT_SYSCALL SPRN_SRR1

/* Auxiliary Processor Unavailable Interrupt */
- EXCEPTION(0x2900, AP_UNAVAIL, AuxillaryProcessorUnavailable, \
- unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x2900, AP_UNAVAIL, AuxillaryProcessorUnavailable, unknown_exception)

/* Decrementer Interrupt */
DECREMENTER_EXCEPTION

/* Fixed Internal Timer Interrupt */
/* TODO: Add FIT support */
- EXCEPTION(0x3100, FIT, FixedIntervalTimer, \
- unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x3100, FIT, FixedIntervalTimer, unknown_exception)

/* Watchdog Timer Interrupt */
#ifdef CONFIG_BOOKE_WDT
@@ -625,8 +622,7 @@ END_BTB_FLUSH_SECTION
bl KernelSPE
b interrupt_return
#elif defined(CONFIG_SPE_POSSIBLE)
- EXCEPTION(0x2020, SPE_UNAVAIL, SPEUnavailable, \
- unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x2020, SPE_UNAVAIL, SPEUnavailable, unknown_exception)
#endif /* CONFIG_SPE_POSSIBLE */

/* SPE Floating Point Data */
@@ -646,18 +642,16 @@ END_BTB_FLUSH_SECTION
REST_NVGPRS(r1)
b interrupt_return
#elif defined(CONFIG_SPE_POSSIBLE)
- EXCEPTION(0x2040, SPE_FP_DATA, SPEFloatingPointData,
- unknown_exception, EXC_XFER_STD)
- EXCEPTION(0x2050, SPE_FP_ROUND, SPEFloatingPointRound, \
- unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0x2040, SPE_FP_DATA, SPEFloatingPointData, unknown_exception)
+ EXCEPTION(0x2050, SPE_FP_ROUND, SPEFloatingPointRound, unknown_exception)
#endif /* CONFIG_SPE_POSSIBLE */


/* Performance Monitor */
EXCEPTION(0x2060, PERFORMANCE_MONITOR, PerformanceMonitor, \
- performance_monitor_exception, EXC_XFER_STD)
+ performance_monitor_exception)

- EXCEPTION(0x2070, DOORBELL, Doorbell, doorbell_exception, EXC_XFER_STD)
+ EXCEPTION(0x2070, DOORBELL, Doorbell, doorbell_exception)

CRITICAL_EXCEPTION(0x2080, DOORBELL_CRITICAL, \
CriticalDoorbell, unknown_exception)
@@ -672,10 +666,10 @@ END_BTB_FLUSH_SECTION
unknown_exception)

/* Hypercall */
- EXCEPTION(0, HV_SYSCALL, Hypercall, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0, HV_SYSCALL, Hypercall, unknown_exception)

/* Embedded Hypervisor Privilege */
- EXCEPTION(0, HV_PRIV, Ehvpriv, unknown_exception, EXC_XFER_STD)
+ EXCEPTION(0, HV_PRIV, Ehvpriv, unknown_exception)

interrupt_end:

--
2.25.0

2021-03-09 12:12:46

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 35/43] powerpc/32: Save remaining registers in exception prolog

Save the non volatile registers, XER, CTR, MSR and NIP in the exception prolog.

Also assign proper values to r2 and r3 there.

For now, recalculate the thread pointer in prepare_transfer_to_handler.
That recalculation will disappear once KUAP is ported to C.

And remove the comment which is now completely wrong.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 25 +++----------------------
arch/powerpc/kernel/head_32.h | 12 ++++++++++++
2 files changed, 15 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 289f111a5ac7..8fe1c3fdfa6e 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -48,29 +48,11 @@
*/
.align 12

-/*
- * This code finishes saving the registers to the exception frame
- * and jumps to the appropriate handler for the exception, turning
- * on address translation.
- * Note that we rely on the caller having set cr0.eq iff the exception
- * occurred in kernel mode (i.e. MSR:PR = 0).
- */
.globl prepare_transfer_to_handler
prepare_transfer_to_handler:
- SAVE_NVGPRS(r11)
- addi r3,r1,STACK_FRAME_OVERHEAD
- stw r2,GPR2(r11)
- stw r12,_NIP(r11)
- stw r9,_MSR(r11)
- andi. r2,r9,MSR_PR
- mfctr r12
- mfspr r2,SPRN_XER
- stw r12,_CTR(r11)
- stw r2,_XER(r11)
- mfspr r12,SPRN_SPRG_THREAD
- tovirt(r12, r12)
+ andi. r0,r9,MSR_PR
+ addi r12, r2, THREAD
beq 2f /* if from user, fix up THREAD.regs */
- addi r2, r12, -THREAD
stw r3,PT_REGS(r12)
#ifdef CONFIG_PPC_BOOK3S_32
kuep_lock r11, r12
@@ -79,8 +61,7 @@ prepare_transfer_to_handler:

/* if from kernel, check interrupted DOZE/NAP mode */
2:
- kuap_save_and_lock r11, r12, r9, r2, r6
- addi r2, r12, -THREAD
+ kuap_save_and_lock r11, r12, r9, r5, r6
#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
lwz r12,TI_LOCAL_FLAGS(r2)
mtcrf 0x01,r12
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index ba20bfabdf63..267479072495 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -117,6 +117,18 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
stw r10,_TRAP(r1)
SAVE_4GPRS(3, r1)
SAVE_2GPRS(7, r1)
+ SAVE_NVGPRS(r1)
+ stw r2,GPR2(r1)
+ stw r12,_NIP(r1)
+ stw r9,_MSR(r1)
+ mfctr r0
+ mfspr r10,SPRN_XER
+ mfspr r2,SPRN_SPRG_THREAD
+ stw r0,_CTR(r1)
+ tovirt(r2, r2)
+ stw r10,_XER(r1)
+ addi r2, r2, -THREAD
+ addi r3,r1,STACK_FRAME_OVERHEAD
.endm

.macro prepare_transfer_to_handler
--
2.25.0

2021-03-09 12:12:55

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 32/43] powerpc/32: Dismantle EXC_XFER_STD/LITE/TEMPLATE

In order to get more control in the exception prolog, dismantle
all non-standard exception macros, finishing with EXC_XFER_STD,
EXC_XFER_LITE and EXC_XFER_TEMPLATE.

Also remove transfer_to_handler_full, ret_from_except and
ret_from_except_full as they are not used anymore.

The last parameter of EXCEPTION() is now ignored; it will be removed
in a later patch to avoid too much churn.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 40 -----------------------
arch/powerpc/kernel/head_32.h | 21 ++++--------
arch/powerpc/kernel/head_40x.S | 33 ++++++++++++-------
arch/powerpc/kernel/head_8xx.S | 12 +++++--
arch/powerpc/kernel/head_book3s_32.S | 27 ++++++++++-----
arch/powerpc/kernel/head_booke.h | 49 +++++++++++++++-------------
arch/powerpc/kernel/head_fsl_booke.S | 14 +++++---
7 files changed, 91 insertions(+), 105 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index ca14bc2f3418..289f111a5ac7 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -48,30 +48,6 @@
*/
.align 12

-#ifdef CONFIG_BOOKE
- .globl mcheck_transfer_to_handler
-mcheck_transfer_to_handler:
- /* fall through */
-_ASM_NOKPROBE_SYMBOL(mcheck_transfer_to_handler)
-
- .globl debug_transfer_to_handler
-debug_transfer_to_handler:
- /* fall through */
-_ASM_NOKPROBE_SYMBOL(debug_transfer_to_handler)
-
- .globl crit_transfer_to_handler
-crit_transfer_to_handler:
- /* fall through */
-_ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
-#endif
-
-#ifdef CONFIG_40x
- .globl crit_transfer_to_handler
-crit_transfer_to_handler:
- /* fall through */
-_ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
-#endif
-
/*
* This code finishes saving the registers to the exception frame
* and jumps to the appropriate handler for the exception, turning
@@ -79,13 +55,6 @@ _ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
* Note that we rely on the caller having set cr0.eq iff the exception
* occurred in kernel mode (i.e. MSR:PR = 0).
*/
- .globl transfer_to_handler_full
-transfer_to_handler_full:
-_ASM_NOKPROBE_SYMBOL(transfer_to_handler_full)
- /* fall through */
-
- .globl transfer_to_handler
-transfer_to_handler:
.globl prepare_transfer_to_handler
prepare_transfer_to_handler:
SAVE_NVGPRS(r11)
@@ -135,7 +104,6 @@ transfer_to_handler_cont:
b fast_exception_return
#endif
_ASM_NOKPROBE_SYMBOL(prepare_transfer_to_handler)
-_ASM_NOKPROBE_SYMBOL(transfer_to_handler)
_ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)

.globl transfer_to_syscall
@@ -333,14 +301,6 @@ fast_exception_return:
#endif
_ASM_NOKPROBE_SYMBOL(fast_exception_return)

- .globl ret_from_except_full
-ret_from_except_full:
- /* fall through */
-
- .globl ret_from_except
-ret_from_except:
-_ASM_NOKPROBE_SYMBOL(ret_from_except)
-
.globl interrupt_return
interrupt_return:
lwz r4,_MSR(r1)
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 3ab0f3ad9a6a..412ede8610f7 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -189,20 +189,9 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
#define EXCEPTION(n, label, hdlr, xfer) \
START_EXCEPTION(n, label) \
EXCEPTION_PROLOG n label; \
- xfer(n, hdlr)
-
-#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
- bl tfer; \
- bl hdlr; \
- b ret
-
-#define EXC_XFER_STD(n, hdlr) \
- EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler_full, \
- ret_from_except_full)
-
-#define EXC_XFER_LITE(n, hdlr) \
- EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler, \
- ret_from_except)
+ prepare_transfer_to_handler; \
+ bl hdlr; \
+ b interrupt_return

.macro vmap_stack_overflow_exception
__HEAD
@@ -218,7 +207,9 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
lwz r1, emergency_ctx@l(r1)
addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
EXCEPTION_PROLOG_2 0 vmap_stack_overflow
- EXC_XFER_STD(0, stack_overflow_exception)
+ prepare_transfer_to_handler
+ bl stack_overflow_exception
+ b interrupt_return
.endm

#endif /* __HEAD_32_H__ */
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index f3e5b462113f..7eb49ebd6000 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -187,8 +187,9 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
#define CRITICAL_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(n, label); \
CRITICAL_EXCEPTION_PROLOG n label; \
- EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
- crit_transfer_to_handler, ret_from_crit_exc)
+ prepare_transfer_to_handler; \
+ bl hdlr; \
+ b ret_from_crit_exc

/*
* 0x0100 - Critical Interrupt Exception
@@ -209,7 +210,9 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
*/
START_EXCEPTION(0x0300, DataStorage)
EXCEPTION_PROLOG 0x300 DataStorage handle_dar_dsisr=1
- EXC_XFER_LITE(0x300, do_page_fault)
+ prepare_transfer_to_handler
+ bl do_page_fault
+ b interrupt_return

/*
* 0x0400 - Instruction Storage Exception
@@ -220,7 +223,9 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
li r5,0
stw r5, _ESR(r11) /* Zero ESR */
stw r12, _DEAR(r11) /* SRR0 as DEAR */
- EXC_XFER_LITE(0x400, do_page_fault)
+ prepare_transfer_to_handler
+ bl do_page_fault
+ b interrupt_return

/* 0x0500 - External Interrupt Exception */
EXCEPTION(0x0500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
@@ -499,9 +504,9 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
/* continue normal handling for a critical exception... */
2: mfspr r4,SPRN_DBSR
stw r4,_ESR(r11) /* DebugException takes DBSR in _ESR */
- EXC_XFER_TEMPLATE(DebugException, 0x2002, \
- (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
- crit_transfer_to_handler, ret_from_crit_exc)
+ prepare_transfer_to_handler
+ bl DebugException
+ b ret_from_crit_exc

/* Programmable Interval Timer (PIT) Exception. (from 0x1000) */
__HEAD
@@ -509,21 +514,25 @@ Decrementer:
EXCEPTION_PROLOG 0x1000 Decrementer
lis r0,TSR_PIS@h
mtspr SPRN_TSR,r0 /* Clear the PIT exception */
- EXC_XFER_LITE(0x1000, timer_interrupt)
+ prepare_transfer_to_handler
+ bl timer_interrupt
+ b interrupt_return

/* Fixed Interval Timer (FIT) Exception. (from 0x1010) */
__HEAD
FITException:
EXCEPTION_PROLOG 0x1010 FITException
- EXC_XFER_STD(0x1010, unknown_exception)
+ prepare_transfer_to_handler
+ bl unknown_exception
+ b interrupt_return

/* Watchdog Timer (WDT) Exception. (from 0x1020) */
__HEAD
WDTException:
CRITICAL_EXCEPTION_PROLOG 0x1020 WDTException
- EXC_XFER_TEMPLATE(WatchdogException, 0x1020+2,
- (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)),
- crit_transfer_to_handler, ret_from_crit_exc)
+ prepare_transfer_to_handler
+ bl WatchdogException
+ b ret_from_crit_exc

/* Other PowerPC processors, namely those derived from the 6xx-series
* have vectors from 0x2100 through 0x2F00 defined, but marked as reserved.
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 86f844eb0e5a..4d73549722a1 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -123,7 +123,9 @@ instruction_counter:
/* Machine check */
START_EXCEPTION(0x200, MachineCheck)
EXCEPTION_PROLOG 0x200 MachineCheck handle_dar_dsisr=1
- EXC_XFER_STD(0x200, machine_check_exception)
+ prepare_transfer_to_handler
+ bl machine_check_exception
+ b interrupt_return

/* External interrupt */
EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
@@ -314,7 +316,9 @@ instruction_counter:
.Litlbie:
stw r12, _DAR(r11)
stw r5, _DSISR(r11)
- EXC_XFER_LITE(0x400, do_page_fault)
+ prepare_transfer_to_handler
+ bl do_page_fault
+ b interrupt_return

/* This is the data TLB error on the MPC8xx. This could be due to
* many reasons, including a dirty update to a pte. We bail out to
@@ -335,7 +339,9 @@ DARFixed:/* Return from dcbx instruction bug workaround */
beq+ .Ldtlbie
tlbie r4
.Ldtlbie:
- EXC_XFER_LITE(0x300, do_page_fault)
+ prepare_transfer_to_handler
+ bl do_page_fault
+ b interrupt_return

#ifdef CONFIG_VMAP_STACK
vmap_stack_overflow_exception
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 609b2eedd4f9..0082342f0ae6 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -271,7 +271,9 @@ __secondary_hold_acknowledge:
beq cr1, 1f
twi 31, 0, 0
#endif
-1: EXC_XFER_STD(0x200, machine_check_exception)
+1: prepare_transfer_to_handler
+ bl machine_check_exception
+ b interrupt_return

/* Data access exception. */
START_EXCEPTION(0x300, DataAccess)
@@ -296,12 +298,13 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
1: EXCEPTION_PROLOG_0 handle_dar_dsisr=1
EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2 0x300 DataAccess handle_dar_dsisr=1
+ prepare_transfer_to_handler
lwz r5, _DSISR(r11)
andis. r0, r5, DSISR_DABRMATCH@h
bne- 1f
- EXC_XFER_LITE(0x300, do_page_fault)
-1: prepare_transfer_to_handler
- bl do_break
+ bl do_page_fault
+ b interrupt_return
+1: bl do_break
REST_NVGPRS(r1)
b interrupt_return

@@ -331,7 +334,9 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
stw r5, _DSISR(r11)
stw r12, _DAR(r11)
- EXC_XFER_LITE(0x400, do_page_fault)
+ prepare_transfer_to_handler
+ bl do_page_fault
+ b interrupt_return

/* External interrupt */
EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
@@ -366,7 +371,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
beq 1f
bl load_up_fpu /* if from user, just load it up */
b fast_exception_return
-1: EXC_XFER_LITE(0x800, kernel_fp_unavailable_exception)
+1: prepare_transfer_to_handler
+ bl kernel_fp_unavailable_exception
+ b interrupt_return
#else
b ProgramCheck
#endif
@@ -727,12 +734,16 @@ AltiVecUnavailable:
bl load_up_altivec /* if from user, just load it up */
b fast_exception_return
#endif /* CONFIG_ALTIVEC */
-1: EXC_XFER_LITE(0xf20, altivec_unavailable_exception)
+1: prepare_transfer_to_handler
+ bl altivec_unavailable_exception
+ b interrupt_return

__HEAD
PerformanceMonitor:
EXCEPTION_PROLOG 0xf00 PerformanceMonitor
- EXC_XFER_STD(0xf00, performance_monitor_exception)
+ prepare_transfer_to_handler
+ bl performance_monitor_exception
+ b interrupt_return


__HEAD
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index baf10556c587..bc69b9bf61a4 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -302,15 +302,18 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
#define EXCEPTION(n, intno, label, hdlr, xfer) \
START_EXCEPTION(label); \
NORMAL_EXCEPTION_PROLOG(n, intno); \
- xfer(n, hdlr)
+ prepare_transfer_to_handler; \
+ bl hdlr; \
+ b interrupt_return

#define CRITICAL_EXCEPTION(n, intno, label, hdlr) \
START_EXCEPTION(label); \
CRITICAL_EXCEPTION_PROLOG(n, intno); \
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
- EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
- crit_transfer_to_handler, ret_from_crit_exc)
+ prepare_transfer_to_handler; \
+ bl hdlr; \
+ b ret_from_crit_exc

#define MCHECK_EXCEPTION(n, label, hdlr) \
START_EXCEPTION(label); \
@@ -321,21 +324,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
SAVE_xSRR(CSRR); \
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
- EXC_XFER_TEMPLATE(hdlr, n+4, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
- mcheck_transfer_to_handler, ret_from_mcheck_exc)
-
-#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret) \
- bl tfer; \
+ prepare_transfer_to_handler; \
bl hdlr; \
- b ret; \
-
-#define EXC_XFER_STD(n, hdlr) \
- EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler_full, \
- ret_from_except_full)
-
-#define EXC_XFER_LITE(n, hdlr) \
- EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler, \
- ret_from_except)
+ b ret_from_mcheck_exc

/* Check for a single step debug exception while in an exception
* handler before state has been saved. This is to catch the case
@@ -404,7 +395,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
SAVE_xSRR(CSRR); \
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
- EXC_XFER_TEMPLATE(DebugException, 0x2008, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), debug_transfer_to_handler, ret_from_debug_exc)
+ prepare_transfer_to_handler; \
+ bl DebugException; \
+ b ret_from_debug_exc

#define DEBUG_CRIT_EXCEPTION \
START_EXCEPTION(DebugCrit); \
@@ -459,7 +452,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
stw r4,_ESR(r11); /* DebugException takes DBSR in _ESR */\
SAVE_MMU_REGS; \
SAVE_xSRR(SRR); \
- EXC_XFER_TEMPLATE(DebugException, 0x2002, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), crit_transfer_to_handler, ret_from_crit_exc)
+ prepare_transfer_to_handler; \
+ bl DebugException; \
+ b ret_from_crit_exc

#define DATA_STORAGE_EXCEPTION \
START_EXCEPTION(DataStorage) \
@@ -468,7 +463,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
stw r5,_ESR(r11); \
mfspr r4,SPRN_DEAR; /* Grab the DEAR */ \
stw r4, _DEAR(r11); \
- EXC_XFER_LITE(0x0300, do_page_fault)
+ prepare_transfer_to_handler; \
+ bl do_page_fault; \
+ b interrupt_return

#define INSTRUCTION_STORAGE_EXCEPTION \
START_EXCEPTION(InstructionStorage) \
@@ -476,7 +473,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
mfspr r5,SPRN_ESR; /* Grab the ESR and save it */ \
stw r5,_ESR(r11); \
stw r12, _DEAR(r11); /* Pass SRR0 as arg2 */ \
- EXC_XFER_LITE(0x0400, do_page_fault)
+ prepare_transfer_to_handler; \
+ bl do_page_fault; \
+ b interrupt_return

#define ALIGNMENT_EXCEPTION \
START_EXCEPTION(Alignment) \
@@ -503,7 +502,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
NORMAL_EXCEPTION_PROLOG(0x900, DECREMENTER); \
lis r0,TSR_DIS@h; /* Setup the DEC interrupt mask */ \
mtspr SPRN_TSR,r0; /* Clear the DEC interrupt */ \
- EXC_XFER_LITE(0x0900, timer_interrupt)
+ prepare_transfer_to_handler; \
+ bl timer_interrupt; \
+ b interrupt_return

#define FP_UNAVAILABLE_EXCEPTION \
START_EXCEPTION(FloatingPointUnavailable) \
@@ -511,7 +512,9 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
beq 1f; \
bl load_up_fpu; /* if from user, just load it up */ \
b fast_exception_return; \
-1: EXC_XFER_STD(0x800, kernel_fp_unavailable_exception)
+1: prepare_transfer_to_handler; \
+ bl kernel_fp_unavailable_exception; \
+ b interrupt_return

#else /* __ASSEMBLY__ */
struct exception_regs {
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 210871b2eb41..48d022b1f508 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -370,9 +370,13 @@ interrupt_base:
stw r4, _DEAR(r11)
andis. r10,r5,(ESR_ILK|ESR_DLK)@h
bne 1f
- EXC_XFER_LITE(0x0300, do_page_fault)
+ prepare_transfer_to_handler
+ bl do_page_fault
+ b interrupt_return
1:
- EXC_XFER_LITE(0x0300, CacheLockingException)
+ prepare_transfer_to_handler
+ bl CacheLockingException
+ b interrupt_return

/* Instruction Storage Interrupt */
INSTRUCTION_STORAGE_EXCEPTION
@@ -617,7 +621,9 @@ END_BTB_FLUSH_SECTION
beq 1f
bl load_up_spe
b fast_exception_return
-1: EXC_XFER_LITE(0x2010, KernelSPE)
+1: prepare_transfer_to_handler
+ bl KernelSPE
+ b interrupt_return
#elif defined(CONFIG_SPE_POSSIBLE)
EXCEPTION(0x2020, SPE_UNAVAIL, SPEUnavailable, \
unknown_exception, EXC_XFER_STD)
@@ -860,7 +866,7 @@ KernelSPE:
lwz r5,_NIP(r1)
bl printk
#endif
- b ret_from_except
+ b interrupt_return
#ifdef CONFIG_PRINTK
87: .string "SPE used in kernel (task=%p, pc=%x) \n"
#endif
--
2.25.0

2021-03-09 12:13:09

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 38/43] powerpc/32: Only use prepare_transfer_to_handler function on book3s/32 and e500

Only book3s/32 and e500 have significant work to do in
prepare_transfer_to_handler.

The other 32-bit platforms have nothing to do at all.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 6 ++----
arch/powerpc/kernel/head_32.h | 2 ++
arch/powerpc/kernel/head_booke.h | 2 ++
3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 5cfa10816261..9c333e6db5fa 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -48,6 +48,7 @@
*/
.align 12

+#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
.globl prepare_transfer_to_handler
prepare_transfer_to_handler:
andi. r0,r9,MSR_PR
@@ -61,15 +62,12 @@ prepare_transfer_to_handler:
/* if from kernel, check interrupted DOZE/NAP mode */
2:
kuap_save_and_lock r11, r12, r9, r5, r6
-#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
lwz r12,TI_LOCAL_FLAGS(r2)
mtcrf 0x01,r12
bt- 31-TLF_NAPPING,4f
bt- 31-TLF_SLEEPING,7f
-#endif /* CONFIG_PPC_BOOK3S_32 || CONFIG_E500 */
blr

-#if defined (CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
4: rlwinm r12,r12,0,~_TLF_NAPPING
stw r12,TI_LOCAL_FLAGS(r2)
b power_save_ppc32_restore
@@ -80,8 +78,8 @@ prepare_transfer_to_handler:
rlwinm r9,r9,0,~MSR_EE
stw r9,_MSR(r11)
b fast_exception_return
-#endif
_ASM_NOKPROBE_SYMBOL(prepare_transfer_to_handler)
+#endif /* CONFIG_PPC_BOOK3S_32 || CONFIG_E500 */

.globl transfer_to_syscall
transfer_to_syscall:
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 267479072495..ca303762d8cc 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -132,7 +132,9 @@ _ASM_NOKPROBE_SYMBOL(\name\()_virt)
.endm

.macro prepare_transfer_to_handler
+#ifdef CONFIG_PPC_BOOK3S_32
bl prepare_transfer_to_handler
+#endif
.endm

.macro SYSCALL_ENTRY trapno
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 4d583fbef0b6..a2565023d2d0 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -92,7 +92,9 @@ END_BTB_FLUSH_SECTION
.endm

.macro prepare_transfer_to_handler
+#ifdef CONFIG_E500
bl prepare_transfer_to_handler
+#endif
.endm

.macro SYSCALL_ENTRY trapno intno srr1
--
2.25.0

2021-03-09 12:13:10

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 05/43] powerpc/40x: Save SRR0/SRR1 and r10/r11 earlier in critical exception

In order to be able to switch the MMU on in the exception prolog,
save SRR0 and SRR1 earlier.

Also save r10 and r11 to the stack earlier, to better match the
normal exception prolog.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 9 ---------
arch/powerpc/kernel/head_40x.S | 8 ++++++++
2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 78c430b7f9d9..8528b4c7f9d3 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -107,15 +107,6 @@ _ASM_NOKPROBE_SYMBOL(crit_transfer_to_handler)
#ifdef CONFIG_40x
.globl crit_transfer_to_handler
crit_transfer_to_handler:
- lwz r0,crit_r10@l(0)
- stw r0,GPR10(r11)
- lwz r0,crit_r11@l(0)
- stw r0,GPR11(r11)
- mfspr r0,SPRN_SRR0
- stw r0,crit_srr0@l(0)
- mfspr r0,SPRN_SRR1
- stw r0,crit_srr1@l(0)
-
/* set the stack limit to the current stack */
mfspr r8,SPRN_SPRG_THREAD
lwz r0,KSP_LIMIT(r8)
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 9cef423d574b..067ae1302c1c 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -103,6 +103,10 @@ _ENTRY(saved_ksp_limit)
.macro CRITICAL_EXCEPTION_PROLOG
stw r10,crit_r10@l(0) /* save two registers to work with */
stw r11,crit_r11@l(0)
+ mfspr r10,SPRN_SRR0
+ mfspr r11,SPRN_SRR1
+ stw r10,crit_srr0@l(0)
+ stw r11,crit_srr1@l(0)
mfcr r10 /* save CR in r10 for now */
mfspr r11,SPRN_SRR3 /* check whether user or kernel */
andi. r11,r11,MSR_PR
@@ -120,6 +124,10 @@ _ENTRY(saved_ksp_limit)
stw r9,GPR9(r11)
mflr r10
stw r10,_LINK(r11)
+ lwz r10,crit_r10@l(0)
+ lwz r12,crit_r11@l(0)
+ stw r10,GPR10(r11)
+ stw r12,GPR11(r11)
mfspr r12,SPRN_DEAR /* save DEAR and ESR in the frame */
stw r12,_DEAR(r11) /* since they may have had stuff */
mfspr r9,SPRN_ESR /* in them at the point where the */
--
2.25.0

2021-03-09 12:13:12

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 09/43] powerpc/32: Reconcile interrupts in C

There is no need for this to be done in asm anymore;
use the new interrupt entry wrapper.
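
As a reminder of the mechanism this relies on (simplified illustration,
not part of the patch): handlers defined through the wrappers in
asm/interrupt.h run interrupt_enter_prepare() before their body, so the
reconciliation done there replaces the asm removed from entry_32.S
below. The handler name is hypothetical.

	#include <asm/interrupt.h>

	/* Hypothetical handler, for illustration only */
	DEFINE_INTERRUPT_HANDLER(example_interrupt)
	{
		/*
		 * interrupt_enter_prepare() has already run at this point,
		 * so on PPC32 lockdep has been told that hard interrupts
		 * are now off when the interrupted context had them on.
		 */
	}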

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/interrupt.h | 4 ++
arch/powerpc/kernel/entry_32.S | 58 ----------------------------
2 files changed, 4 insertions(+), 58 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 232a4847f596..b2f69e5dcb50 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -29,6 +29,10 @@ static inline void booke_restore_dbcr0(void)

static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
{
+#ifdef CONFIG_PPC32
+ if (!arch_irq_disabled_regs(regs))
+ trace_hardirqs_off();
+#endif
/*
* Book3E reconciles irq soft mask in asm
*/
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 535c55f4393a..0f18fe14649c 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -202,22 +202,6 @@ transfer_to_handler_cont:
lwz r9,4(r9) /* where to go when done */
#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
mtspr SPRN_NRI, r0
-#endif
-#ifdef CONFIG_TRACE_IRQFLAGS
- /*
- * When tracing IRQ state (lockdep) we enable the MMU before we call
- * the IRQ tracing functions as they might access vmalloc space or
- * perform IOs for console output.
- *
- * To speed up the syscall path where interrupts stay on, let's check
- * first if we are changing the MSR value at all.
- */
- tophys_novmstack r12, r1
- lwz r12,_MSR(r12)
- andi. r12,r12,MSR_EE
- bne 1f
-
- /* MSR isn't changing, just transition directly */
#endif
mtspr SPRN_SRR0,r11
mtspr SPRN_SRR1,r10
@@ -244,48 +228,6 @@ transfer_to_handler_cont:
_ASM_NOKPROBE_SYMBOL(transfer_to_handler)
_ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)

-#ifdef CONFIG_TRACE_IRQFLAGS
-1: /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
- * keep interrupts disabled at this point otherwise we might risk
- * taking an interrupt before we tell lockdep they are enabled.
- */
- lis r12,reenable_mmu@h
- ori r12,r12,reenable_mmu@l
- LOAD_REG_IMMEDIATE(r0, MSR_KERNEL)
- mtspr SPRN_SRR0,r12
- mtspr SPRN_SRR1,r0
- rfi
-#ifdef CONFIG_40x
- b . /* Prevent prefetch past rfi */
-#endif
-
-reenable_mmu:
- /*
- * We save a bunch of GPRs,
- * r3 can be different from GPR3(r1) at this point, r9 and r11
- * contains the old MSR and handler address respectively,
- * r0, r4-r8, r12, CCR, CTR, XER etc... are left
- * clobbered as they aren't useful past this point.
- */
-
- stwu r1,-32(r1)
- stw r9,8(r1)
- stw r11,12(r1)
- stw r3,16(r1)
-
- /* If we are disabling interrupts (normal case), simply log it with
- * lockdep
- */
-1: bl trace_hardirqs_off
- lwz r3,16(r1)
- lwz r11,12(r1)
- lwz r9,8(r1)
- addi r1,r1,32
- mtctr r11
- mtlr r9
- bctr /* jump to handler */
-#endif /* CONFIG_TRACE_IRQFLAGS */
-
#ifndef CONFIG_VMAP_STACK
/*
* On kernel stack overflow, load up an initial stack pointer
--
2.25.0

2021-03-09 12:13:14

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 41/43] powerpc/32s: Create C version of kuap save/restore/check helpers

In preparation for porting PPC32 to C syscall entry/exit,
create C versions of kuap_save_and_lock(), kuap_user_restore(),
kuap_kernel_restore(), kuap_check() and kuap_get_and_check()
on book3s/32.
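
A rough illustration of how these helpers are expected to pair up once
interrupt entry/exit is wired in C later in the series (hypothetical
caller, for illustration only; the real callers are the interrupt and
syscall entry/exit code):

	#include <asm/kup.h>
	#include <asm/ptrace.h>

	/* Hypothetical caller mirroring kernel interrupt entry/exit */
	static void example_kernel_interrupt(struct pt_regs *regs)
	{
		unsigned long kuap;

		kuap_save_and_lock(regs);	/* entry: stash state in regs, set Ks */

		/* ... handle the interrupt, user access is blocked ... */

		kuap = kuap_get_and_check();	/* exit: expected to be 0 (locked) */
		kuap_kernel_restore(regs, kuap);	/* restore interrupted state */
	}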

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/book3s/32/kup.h | 45 ++++++++++++++++++++++++
1 file changed, 45 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index b97ea60f6fa3..c9d6c28bcd10 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -72,6 +72,51 @@ static inline void kuap_update_sr(u32 sr, u32 addr, u32 end)
isync(); /* Context sync required after mtsr() */
}

+static inline void kuap_save_and_lock(struct pt_regs *regs)
+{
+ unsigned long kuap = current->thread.kuap;
+ u32 addr = kuap & 0xf0000000;
+ u32 end = kuap << 28;
+
+ regs->kuap = kuap;
+ if (unlikely(!kuap))
+ return;
+
+ current->thread.kuap = 0;
+ kuap_update_sr(mfsr(addr) | SR_KS, addr, end); /* Set Ks */
+}
+
+static inline void kuap_user_restore(struct pt_regs *regs)
+{
+}
+
+static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+{
+ u32 addr = regs->kuap & 0xf0000000;
+ u32 end = regs->kuap << 28;
+
+ current->thread.kuap = regs->kuap;
+
+ if (unlikely(regs->kuap == kuap))
+ return;
+
+ kuap_update_sr(mfsr(addr) & ~SR_KS, addr, end); /* Clear Ks */
+}
+
+static inline unsigned long kuap_get_and_check(void)
+{
+ unsigned long kuap = current->thread.kuap;
+
+ WARN_ON_ONCE(IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && kuap != 0);
+
+ return kuap;
+}
+
+static inline void kuap_check(void)
+{
+ kuap_get_and_check();
+}
+
static __always_inline void allow_user_access(void __user *to, const void __user *from,
u32 size, unsigned long dir)
{
--
2.25.0

2021-03-09 12:13:14

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 08/43] powerpc/40x: Prepare normal exception handler for enabling MMU early

Ensure normal exception handlers are able to manage stuff with the
MMU enabled. For that we use CONFIG_VMAP_STACK related code, although
there is no intention to really activate CONFIG_VMAP_STACK on
powerpc 40x for the moment.

40x uses SPRN_DEAR instead of SPRN_DAR and SPRN_ESR instead of
SPRN_DSISR. Take that into account in the common macros.

The 40x MSR value doesn't fit in 15 bits (li only takes a 16-bit
signed immediate), so use LOAD_REG_IMMEDIATE() in the common macros
that will also be used on 40x.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 2 +-
arch/powerpc/kernel/head_32.h | 15 ++++++++++++++-
arch/powerpc/kernel/head_40x.S | 17 ++++++-----------
3 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 8528b4c7f9d3..535c55f4393a 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -162,7 +162,7 @@ transfer_to_handler:
li r12,-1 /* clear all pending debug events */
mtspr SPRN_DBSR,r12
lis r11,global_dbcr0@ha
- tophys(r11,r11)
+ tophys_novmstack r11,r11
addi r11,r11,global_dbcr0@l
#ifdef CONFIG_SMP
lwz r9,TASK_CPU(r2)
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 5d4706c14572..ac6b391f1493 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -22,9 +22,17 @@
#ifdef CONFIG_VMAP_STACK
mfspr r10, SPRN_SPRG_THREAD
.if \handle_dar_dsisr
+#ifdef CONFIG_40x
+ mfspr r11, SPRN_DEAR
+#else
mfspr r11, SPRN_DAR
+#endif
stw r11, DAR(r10)
+#ifdef CONFIG_40x
+ mfspr r11, SPRN_ESR
+#else
mfspr r11, SPRN_DSISR
+#endif
stw r11, DSISR(r10)
.endif
mfspr r11, SPRN_SRR0
@@ -61,7 +69,7 @@

.macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
#ifdef CONFIG_VMAP_STACK
- li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
+ LOAD_REG_IMMEDIATE(r11, MSR_KERNEL & ~(MSR_IR | MSR_RI)) /* can take DTLB miss */
mtmsr r11
isync
mfspr r11, SPRN_SPRG_SCRATCH2
@@ -158,8 +166,13 @@

.macro save_dar_dsisr_on_stack reg1, reg2, sp
#ifndef CONFIG_VMAP_STACK
+#ifdef CONFIG_40x
+ mfspr \reg1, SPRN_DEAR
+ mfspr \reg2, SPRN_ESR
+#else
mfspr \reg1, SPRN_DAR
mfspr \reg2, SPRN_DSISR
+#endif
stw \reg1, _DAR(\sp)
stw \reg2, _DSISR(\sp)
#endif
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 1468f38c3860..4bf0aee858eb 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -221,11 +221,8 @@ _ENTRY(saved_ksp_limit)
* if they can't resolve the lightweight TLB fault.
*/
START_EXCEPTION(0x0300, DataStorage)
- EXCEPTION_PROLOG
- mfspr r5, SPRN_ESR /* Grab the ESR, save it */
- stw r5, _ESR(r11)
- mfspr r4, SPRN_DEAR /* Grab the DEAR, save it */
- stw r4, _DEAR(r11)
+ EXCEPTION_PROLOG handle_dar_dsisr=1
+ save_dar_dsisr_on_stack r4, r5, r11
EXC_XFER_LITE(0x300, handle_page_fault)

/*
@@ -244,17 +241,15 @@ _ENTRY(saved_ksp_limit)

/* 0x0600 - Alignment Exception */
START_EXCEPTION(0x0600, Alignment)
- EXCEPTION_PROLOG
- mfspr r4,SPRN_DEAR /* Grab the DEAR and save it */
- stw r4,_DEAR(r11)
+ EXCEPTION_PROLOG handle_dar_dsisr=1
+ save_dar_dsisr_on_stack r4, r5, r11
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x600, alignment_exception)

/* 0x0700 - Program Exception */
START_EXCEPTION(0x0700, ProgramCheck)
- EXCEPTION_PROLOG
- mfspr r4,SPRN_ESR /* Grab the ESR and save it */
- stw r4,_ESR(r11)
+ EXCEPTION_PROLOG handle_dar_dsisr=1
+ save_dar_dsisr_on_stack r4, r5, r11
addi r3,r1,STACK_FRAME_OVERHEAD
EXC_XFER_STD(0x700, program_check_exception)

--
2.25.0

2021-03-09 12:13:16

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 43/43] powerpc/32: Manage KUAP in C

Move all KUAP management to C.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/book3s/32/kup.h | 50 +-------------------
arch/powerpc/include/asm/interrupt.h | 2 +
arch/powerpc/include/asm/kup.h | 9 ----
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 25 +---------
arch/powerpc/kernel/entry_32.S | 6 ---
arch/powerpc/kernel/interrupt.c | 19 ++------
arch/powerpc/kernel/process.c | 3 ++
7 files changed, 11 insertions(+), 103 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index c9d6c28bcd10..27991e0d2cf9 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -5,55 +5,7 @@
#include <asm/bug.h>
#include <asm/book3s/32/mmu-hash.h>

-#ifdef __ASSEMBLY__
-
-#ifdef CONFIG_PPC_KUAP
-
-.macro kuap_update_sr gpr1, gpr2, gpr3 /* NEVER use r0 as gpr2 due to addis */
-101: mtsrin \gpr1, \gpr2
- addi \gpr1, \gpr1, 0x111 /* next VSID */
- rlwinm \gpr1, \gpr1, 0, 0xf0ffffff /* clear VSID overflow */
- addis \gpr2, \gpr2, 0x1000 /* address of next segment */
- cmplw \gpr2, \gpr3
- blt- 101b
- isync
-.endm
-
-.macro kuap_save_and_lock sp, thread, gpr1, gpr2, gpr3
- lwz \gpr2, KUAP(\thread)
- rlwinm. \gpr3, \gpr2, 28, 0xf0000000
- stw \gpr2, STACK_REGS_KUAP(\sp)
- beq+ 102f
- li \gpr1, 0
- stw \gpr1, KUAP(\thread)
- mfsrin \gpr1, \gpr2
- oris \gpr1, \gpr1, SR_KS@h /* set Ks */
- kuap_update_sr \gpr1, \gpr2, \gpr3
-102:
-.endm
-
-.macro kuap_restore sp, current, gpr1, gpr2, gpr3
- lwz \gpr2, STACK_REGS_KUAP(\sp)
- rlwinm. \gpr3, \gpr2, 28, 0xf0000000
- stw \gpr2, THREAD + KUAP(\current)
- beq+ 102f
- mfsrin \gpr1, \gpr2
- rlwinm \gpr1, \gpr1, 0, ~SR_KS /* Clear Ks */
- kuap_update_sr \gpr1, \gpr2, \gpr3
-102:
-.endm
-
-.macro kuap_check current, gpr
-#ifdef CONFIG_PPC_KUAP_DEBUG
- lwz \gpr, THREAD + KUAP(\current)
-999: twnei \gpr, 0
- EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
-#endif
-.endm
-
-#endif /* CONFIG_PPC_KUAP */
-
-#else /* !__ASSEMBLY__ */
+#ifndef __ASSEMBLY__

#ifdef CONFIG_PPC_KUAP

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index d4bfe94b4a68..b41cb4e014b2 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -37,6 +37,8 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
kuep_lock();
current->thread.regs = regs;
account_cpu_user_entry();
+ } else {
+ kuap_save_and_lock(regs);
}
#endif
/*
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index b7efa46b3109..5bbe8f28d26b 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -28,15 +28,6 @@

#ifdef __ASSEMBLY__
#ifndef CONFIG_PPC_KUAP
-.macro kuap_save_and_lock sp, thread, gpr1, gpr2, gpr3
-.endm
-
-.macro kuap_restore sp, current, gpr1, gpr2, gpr3
-.endm
-
-.macro kuap_check current, gpr
-.endm
-
.macro kuap_check_amr gpr1, gpr2
.endm

diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index c74f5704bc47..fb294dbca102 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -7,30 +7,7 @@

#ifdef CONFIG_PPC_KUAP

-#ifdef __ASSEMBLY__
-
-.macro kuap_save_and_lock sp, thread, gpr1, gpr2, gpr3
- lis \gpr2, MD_APG_KUAP@h /* only APG0 and APG1 are used */
- mfspr \gpr1, SPRN_MD_AP
- mtspr SPRN_MD_AP, \gpr2
- stw \gpr1, STACK_REGS_KUAP(\sp)
-.endm
-
-.macro kuap_restore sp, current, gpr1, gpr2, gpr3
- lwz \gpr1, STACK_REGS_KUAP(\sp)
- mtspr SPRN_MD_AP, \gpr1
-.endm
-
-.macro kuap_check current, gpr
-#ifdef CONFIG_PPC_KUAP_DEBUG
- mfspr \gpr, SPRN_MD_AP
- rlwinm \gpr, \gpr, 16, 0xffff
-999: twnei \gpr, MD_APG_KUAP@h
- EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
-#endif
-.endm
-
-#else /* !__ASSEMBLY__ */
+#ifndef __ASSEMBLY__

#include <asm/reg.h>

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 850cb17a937f..f5ac021ff9ed 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -52,11 +52,9 @@
.globl prepare_transfer_to_handler
prepare_transfer_to_handler:
andi. r0,r9,MSR_PR
- addi r12, r2, THREAD
bnelr

/* if from kernel, check interrupted DOZE/NAP mode */
- kuap_save_and_lock r11, r12, r9, r5, r6
lwz r12,TI_LOCAL_FLAGS(r2)
mtcrf 0x01,r12
bt- 31-TLF_NAPPING,4f
@@ -96,7 +94,6 @@ ret_from_syscall:
cmplwi cr0,r5,0
bne- 2f
#endif /* CONFIG_PPC_47x */
- kuap_check r2, r4
lwz r4,_LINK(r1)
lwz r5,_CCR(r1)
mtlr r4
@@ -208,7 +205,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_SPE)
stw r10,_CCR(r1)
stw r1,KSP(r3) /* Set old stack pointer */

- kuap_check r2, r0
#ifdef CONFIG_SMP
/* We need a sync somewhere here to make sure that if the
* previous task gets rescheduled on another CPU, it sees all
@@ -276,7 +272,6 @@ interrupt_return:
bne- .Lrestore_nvgprs

.Lfast_user_interrupt_return:
- kuap_check r2, r4
lwz r11,_NIP(r1)
lwz r12,_MSR(r1)
mtspr SPRN_SRR0,r11
@@ -326,7 +321,6 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)

.Lfast_kernel_interrupt_return:
cmpwi cr1,r3,0
- kuap_restore r1, r2, r3, r4, r5
lwz r11,_NIP(r1)
lwz r12,_MSR(r1)
mtspr SPRN_SRR0,r11
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index 40ed55064e54..864cf47b629a 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -34,6 +34,9 @@ notrace long system_call_exception(long r3, long r4, long r5,
syscall_fn f;

kuep_lock();
+#ifdef CONFIG_PPC32
+ kuap_save_and_lock(regs);
+#endif

regs->orig_gpr3 = r3;

@@ -75,9 +78,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
isync();
} else
#endif
-#ifdef CONFIG_PPC64
kuap_check();
-#endif

booke_restore_dbcr0();

@@ -253,9 +254,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,

CT_WARN_ON(ct_state() == CONTEXT_USER);

-#ifdef CONFIG_PPC64
kuap_check();
-#endif

regs->result = r3;

@@ -350,7 +349,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,

account_cpu_user_exit();

-#ifdef CONFIG_PPC_BOOK3S_64 /* BOOK3E and ppc32 not using this */
+#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not using this */
/*
* We do this at the end so that we do context switch with KERNEL AMR
*/
@@ -379,9 +378,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
* We don't need to restore AMR on the way back to userspace for KUAP.
* AMR can only have been unlocked if we interrupted the kernel.
*/
-#ifdef CONFIG_PPC64
kuap_check();
-#endif

local_irq_save(flags);

@@ -438,9 +435,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
/*
* We do this at the end so that we do context switch with KERNEL AMR
*/
-#ifdef CONFIG_PPC64
kuap_user_restore(regs);
-#endif
return ret;
}

@@ -450,9 +445,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
{
unsigned long flags;
unsigned long ret = 0;
-#ifdef CONFIG_PPC64
unsigned long kuap;
-#endif

if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x) &&
unlikely(!(regs->msr & MSR_RI)))
@@ -466,9 +459,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
if (TRAP(regs) != 0x700)
CT_WARN_ON(ct_state() == CONTEXT_USER);

-#ifdef CONFIG_PPC64
kuap = kuap_get_and_check();
-#endif

if (unlikely(current_thread_info()->flags & _TIF_EMULATE_STACK_STORE)) {
clear_bits(_TIF_EMULATE_STACK_STORE, &current_thread_info()->flags);
@@ -510,9 +501,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
* which would cause Read-After-Write stalls. Hence, we take the AMR
* value from the check above.
*/
-#ifdef CONFIG_PPC64
kuap_kernel_restore(regs, kuap);
-#endif

return ret;
}
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 5d5d64be2679..fd4da71d92d1 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1255,6 +1255,9 @@ struct task_struct *__switch_to(struct task_struct *prev,
*/
restore_sprs(old_thread, new_thread);

+#ifdef CONFIG_PPC32
+ kuap_check();
+#endif
last = _switch(old_thread, new_thread);

#ifdef CONFIG_PPC_BOOK3S_64
--
2.25.0

2021-03-09 12:13:38

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 17/43] powerpc/32: Statically initialise first emergency context

The check of the emergency context initialisation in
vmap_stack_overflow is buggy for the SMP case, as it
compares r1 with 0 while in the SMP case r1 is offset
by the CPU id.

Instead of fixing the check, just perform static initialisation
of the first emergency context.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 6 +-----
arch/powerpc/kernel/setup_32.c | 2 +-
2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 88b02bd91e8e..15c6fc7cbbf5 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -205,11 +205,7 @@
lis r1, emergency_ctx@ha
#endif
lwz r1, emergency_ctx@l(r1)
- cmpwi cr1, r1, 0
- bne cr1, 1f
- lis r1, init_thread_union@ha
- addi r1, r1, init_thread_union@l
-1: addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
EXCEPTION_PROLOG_2
SAVE_NVGPRS(r11)
addi r3, r1, STACK_FRAME_OVERHEAD
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 8ba49a6bf515..d7c1f92152af 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -164,7 +164,7 @@ void __init irqstack_early_init(void)
}

#ifdef CONFIG_VMAP_STACK
-void *emergency_ctx[NR_CPUS] __ro_after_init;
+void *emergency_ctx[NR_CPUS] __ro_after_init = {[0] = &init_stack};

void __init emergency_stack_init(void)
{
--
2.25.0

2021-03-09 12:13:44

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 18/43] powerpc/32: Add vmap_stack_overflow label inside the macro

For consistency, add the label used by the exception prolog to branch
to stack overflow processing inside the macro itself.

While at it, enclose the macro in #ifdef CONFIG_VMAP_STACK on the 8xx,
as already done on book3s/32.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 3 ++-
arch/powerpc/kernel/head_8xx.S | 3 ++-
arch/powerpc/kernel/head_book3s_32.S | 1 -
3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 15c6fc7cbbf5..d97ec94b34da 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -52,7 +52,7 @@
1:
#ifdef CONFIG_VMAP_STACK
mtcrf 0x3f, r1
- bt 32 - THREAD_ALIGN_SHIFT, stack_overflow
+ bt 32 - THREAD_ALIGN_SHIFT, vmap_stack_overflow
#endif
.endm

@@ -196,6 +196,7 @@
ret_from_except)

.macro vmap_stack_overflow_exception
+vmap_stack_overflow:
#ifdef CONFIG_SMP
mfspr r1, SPRN_SPRG_THREAD
lwz r1, TASK_CPU - THREAD(r1)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index cdbfa9d41353..b63445c55f4d 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -338,8 +338,9 @@ DARFixed:/* Return from dcbx instruction bug workaround */
/* 0x300 is DataAccess exception, needed by bad_page_fault() */
EXC_XFER_LITE(0x300, handle_page_fault)

-stack_overflow:
+#ifdef CONFIG_VMAP_STACK
vmap_stack_overflow_exception
+#endif

/* On the MPC8xx, these next four traps are used for development
* support of breakpoints and such. Someday I will get around to
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 59efbee7c080..9dc05890477d 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -729,7 +729,6 @@ fast_hash_page_return:
#endif /* CONFIG_PPC_BOOK3S_604 */

#ifdef CONFIG_VMAP_STACK
-stack_overflow:
vmap_stack_overflow_exception
#endif

--
2.25.0

2021-03-09 12:14:06

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 28/43] powerpc/64e: Call bad_page_fault() from do_page_fault()

book3e/64 is the last platform calling __bad_page_fault()
from assembly.

Save the non-volatile registers before calling do_page_fault()
and modify do_page_fault() to call __bad_page_fault()
for all platforms.

do_page_fault() can then be refactored to call bad_page_fault(),
which avoids duplicating the exception table search.
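
For context, bad_page_fault() already performs the fixup that the
removed assembly open-coded. Roughly (simplified sketch of the existing
helper in arch/powerpc/mm/fault.c, shown only for illustration):

	void bad_page_fault(struct pt_regs *regs, int sig)
	{
		const struct exception_table_entry *entry;

		/* Are we prepared to handle this fault? */
		entry = search_exception_tables(instruction_pointer(regs));
		if (entry)
			instruction_pointer_set(regs, extable_fixup(entry));
		else
			__bad_page_fault(regs, sig);
	}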

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/exceptions-64e.S | 8 +-------
arch/powerpc/mm/fault.c | 17 ++++-------------
2 files changed, 5 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index e8eb9992a270..b60f89078a3f 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1010,15 +1010,9 @@ storage_fault_common:
addi r3,r1,STACK_FRAME_OVERHEAD
ld r14,PACA_EXGEN+EX_R14(r13)
ld r15,PACA_EXGEN+EX_R15(r13)
+ bl save_nvgprs
bl do_page_fault
- cmpdi r3,0
- bne- 1f
b ret_from_except_lite
-1: bl save_nvgprs
- mr r4,r3
- addi r3,r1,STACK_FRAME_OVERHEAD
- bl __bad_page_fault
- b ret_from_except

/*
* Alignment exception doesn't fit entirely in the 0x100 bytes so it
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 2e54bac99a22..7bcff3fca110 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -541,24 +541,15 @@ NOKPROBE_SYMBOL(___do_page_fault);

static long __do_page_fault(struct pt_regs *regs)
{
- const struct exception_table_entry *entry;
long err;

err = ___do_page_fault(regs, regs->dar, regs->dsisr);
if (likely(!err))
- return err;
-
- entry = search_exception_tables(regs->nip);
- if (likely(entry)) {
- instruction_pointer_set(regs, extable_fixup(entry));
return 0;
- } else if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
- __bad_page_fault(regs, err);
- return 0;
- } else {
- /* 32 and 64e handle the bad page fault in asm */
- return err;
- }
+
+ bad_page_fault(regs, err);
+
+ return 0;
}
NOKPROBE_SYMBOL(__do_page_fault);

--
2.25.0

2021-03-09 12:14:12

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 37/43] powerpc/32: Return directly from power_save_ppc32_restore()

transfer_to_handler_cont: is now just a blr.

Perform the blr directly in power_save_ppc32_restore() instead of
branching there.

Also remove the useless setting of r11 in the e500 version of
power_save_ppc32_restore().

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 3 ---
arch/powerpc/kernel/idle_6xx.S | 2 +-
arch/powerpc/kernel/idle_e500.S | 10 +---------
3 files changed, 2 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 815a4ff1ba76..5cfa10816261 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -67,8 +67,6 @@ prepare_transfer_to_handler:
bt- 31-TLF_NAPPING,4f
bt- 31-TLF_SLEEPING,7f
#endif /* CONFIG_PPC_BOOK3S_32 || CONFIG_E500 */
- .globl transfer_to_handler_cont
-transfer_to_handler_cont:
blr

#if defined (CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
@@ -84,7 +82,6 @@ transfer_to_handler_cont:
b fast_exception_return
#endif
_ASM_NOKPROBE_SYMBOL(prepare_transfer_to_handler)
-_ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)

.globl transfer_to_syscall
transfer_to_syscall:
diff --git a/arch/powerpc/kernel/idle_6xx.S b/arch/powerpc/kernel/idle_6xx.S
index 153366e178c4..13cad9297d82 100644
--- a/arch/powerpc/kernel/idle_6xx.S
+++ b/arch/powerpc/kernel/idle_6xx.S
@@ -176,7 +176,7 @@ BEGIN_FTR_SECTION
lwz r9,nap_save_hid1@l(r9)
mtspr SPRN_HID1, r9
END_FTR_SECTION_IFSET(CPU_FTR_DUAL_PLL_750FX)
- b transfer_to_handler_cont
+ blr
_ASM_NOKPROBE_SYMBOL(power_save_ppc32_restore)

.data
diff --git a/arch/powerpc/kernel/idle_e500.S b/arch/powerpc/kernel/idle_e500.S
index 7795727e7f08..9e1bc4502c50 100644
--- a/arch/powerpc/kernel/idle_e500.S
+++ b/arch/powerpc/kernel/idle_e500.S
@@ -81,13 +81,5 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
_GLOBAL(power_save_ppc32_restore)
lwz r9,_LINK(r11) /* interrupted in e500_idle */
stw r9,_NIP(r11) /* make it do a blr */
-
-#ifdef CONFIG_SMP
- lwz r11,TASK_CPU(r2) /* get cpu number * 4 */
- slwi r11,r11,2
-#else
- li r11,0
-#endif
-
- b transfer_to_handler_cont
+ blr
_ASM_NOKPROBE_SYMBOL(power_save_ppc32_restore)
--
2.25.0

2021-03-09 12:14:14

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 30/43] powerpc/32: Add a prepare_transfer_to_handler macro for exception prologs

In order to increase flexibility, add a macro that will, for now,
just call transfer_to_handler.

As transfer_to_handler doesn't do the actual transfer anymore,
also give it the name prepare_transfer_to_handler. The following
patches will progressively remove the use of the transfer_to_handler
label.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/entry_32.S | 3 +++
arch/powerpc/kernel/head_32.h | 4 ++++
arch/powerpc/kernel/head_booke.h | 4 ++++
3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index cb2fa00b8fc1..e2346662444d 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -86,6 +86,8 @@ _ASM_NOKPROBE_SYMBOL(transfer_to_handler_full)

.globl transfer_to_handler
transfer_to_handler:
+ .globl prepare_transfer_to_handler
+prepare_transfer_to_handler:
SAVE_NVGPRS(r11)
addi r3,r1,STACK_FRAME_OVERHEAD
stw r2,GPR2(r11)
@@ -132,6 +134,7 @@ transfer_to_handler_cont:
stw r9,_MSR(r11)
b fast_exception_return
#endif
+_ASM_NOKPROBE_SYMBOL(prepare_transfer_to_handler)
_ASM_NOKPROBE_SYMBOL(transfer_to_handler)
_ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index bf4c288173ad..3ab0f3ad9a6a 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -115,6 +115,10 @@
_ASM_NOKPROBE_SYMBOL(\name\()_virt)
.endm

+.macro prepare_transfer_to_handler
+ bl prepare_transfer_to_handler
+.endm
+
.macro SYSCALL_ENTRY trapno
mfspr r9, SPRN_SRR1
mfspr r10, SPRN_SRR0
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 27a7358c04bb..0f02b970e797 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -87,6 +87,10 @@ END_BTB_FLUSH_SECTION
SAVE_4GPRS(3, r11); \
SAVE_2GPRS(7, r11)

+.macro prepare_transfer_to_handler
+ bl prepare_transfer_to_handler
+.endm
+
.macro SYSCALL_ENTRY trapno intno srr1
mfspr r10, SPRN_SPRG_THREAD
#ifdef CONFIG_KVM_BOOKE_HV
--
2.25.0

2021-03-09 12:14:19

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 40/43] powerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr() generic

In preparation for porting powerpc32 to C syscall entry/exit,
rename kuap_check_amr() and kuap_get_and_check_amr() to kuap_check()
and kuap_get_and_check(), and move the stubs for when CONFIG_PPC_KUAP
is not selected into the generic asm/kup.h.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/book3s/64/kup.h | 24 ++----------------------
arch/powerpc/include/asm/kup.h | 10 +++++++++-
arch/powerpc/kernel/interrupt.c | 12 ++++++------
arch/powerpc/kernel/irq.c | 2 +-
4 files changed, 18 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 8bd905050896..d9b07e9998be 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -287,7 +287,7 @@ static inline void kuap_kernel_restore(struct pt_regs *regs,
*/
}

-static inline unsigned long kuap_get_and_check_amr(void)
+static inline unsigned long kuap_get_and_check(void)
{
if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
unsigned long amr = mfspr(SPRN_AMR);
@@ -298,27 +298,7 @@ static inline unsigned long kuap_get_and_check_amr(void)
return 0;
}

-#else /* CONFIG_PPC_PKEY */
-
-static inline void kuap_user_restore(struct pt_regs *regs)
-{
-}
-
-static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
-{
-}
-
-static inline unsigned long kuap_get_and_check_amr(void)
-{
- return 0;
-}
-
-#endif /* CONFIG_PPC_PKEY */
-
-
-#ifdef CONFIG_PPC_KUAP
-
-static inline void kuap_check_amr(void)
+static inline void kuap_check(void)
{
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 25671f711ec2..b7efa46b3109 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -74,7 +74,15 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
return false;
}

-static inline void kuap_check_amr(void) { }
+static inline void kuap_check(void) { }
+static inline void kuap_save_and_lock(struct pt_regs *regs) { }
+static inline void kuap_user_restore(struct pt_regs *regs) { }
+static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
+
+static inline unsigned long kuap_get_and_check(void)
+{
+ return 0;
+}

/*
* book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index 727b7848c9cc..40ed55064e54 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -76,7 +76,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
} else
#endif
#ifdef CONFIG_PPC64
- kuap_check_amr();
+ kuap_check();
#endif

booke_restore_dbcr0();
@@ -254,7 +254,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
CT_WARN_ON(ct_state() == CONTEXT_USER);

#ifdef CONFIG_PPC64
- kuap_check_amr();
+ kuap_check();
#endif

regs->result = r3;
@@ -380,7 +380,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
* AMR can only have been unlocked if we interrupted the kernel.
*/
#ifdef CONFIG_PPC64
- kuap_check_amr();
+ kuap_check();
#endif

local_irq_save(flags);
@@ -451,7 +451,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
unsigned long flags;
unsigned long ret = 0;
#ifdef CONFIG_PPC64
- unsigned long amr;
+ unsigned long kuap;
#endif

if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x) &&
@@ -467,7 +467,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
CT_WARN_ON(ct_state() == CONTEXT_USER);

#ifdef CONFIG_PPC64
- amr = kuap_get_and_check_amr();
+ kuap = kuap_get_and_check();
#endif

if (unlikely(current_thread_info()->flags & _TIF_EMULATE_STACK_STORE)) {
@@ -511,7 +511,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
* value from the check above.
*/
#ifdef CONFIG_PPC64
- kuap_kernel_restore(regs, amr);
+ kuap_kernel_restore(regs, kuap);
#endif

return ret;
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index d71fd10a1dd4..3b18d2b2c702 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -282,7 +282,7 @@ static inline void replay_soft_interrupts_irqrestore(void)
* and re-locking AMR but we shouldn't get here in the first place,
* hence the warning.
*/
- kuap_check_amr();
+ kuap_check();

if (kuap_state != AMR_KUAP_BLOCKED)
set_kuap(AMR_KUAP_BLOCKED);
--
2.25.0

2021-03-09 12:14:50

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 34/43] powerpc/32: Refactor saving of volatile registers in exception prologs

Exception prologs all do the same at the end:
- Save trapno on the stack
- Mark the stack with the exception marker
- Save r0
- Save r3 to r8

Refactor that into a COMMON_EXCEPTION_PROLOG_END macro.
At the same time, use r1 instead of r11.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/head_32.h | 16 ++++++++++------
arch/powerpc/kernel/head_40x.S | 9 +--------
arch/powerpc/kernel/head_booke.h | 26 +++++++++++++-------------
3 files changed, 24 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 84e6251622e8..ba20bfabdf63 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -104,15 +104,19 @@
li r10, MSR_KERNEL /* can take exceptions */
mtmsr r10 /* (except for mach check in rtas) */
#endif
- stw r0,GPR0(r11)
+ COMMON_EXCEPTION_PROLOG_END \trapno
+_ASM_NOKPROBE_SYMBOL(\name\()_virt)
+.endm
+
+.macro COMMON_EXCEPTION_PROLOG_END trapno
+ stw r0,GPR0(r1)
lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
addi r10,r10,STACK_FRAME_REGS_MARKER@l
- stw r10,8(r11)
+ stw r10,8(r1)
li r10, \trapno
- stw r10,_TRAP(r11)
- SAVE_4GPRS(3, r11)
- SAVE_2GPRS(7, r11)
-_ASM_NOKPROBE_SYMBOL(\name\()_virt)
+ stw r10,_TRAP(r1)
+ SAVE_4GPRS(3, r1)
+ SAVE_2GPRS(7, r1)
.endm

.macro prepare_transfer_to_handler
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 52b40bf529c6..e1360b88b6cb 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -157,14 +157,7 @@ _ENTRY(crit_esr)
mfspr r12,SPRN_SRR2
mfspr r9,SPRN_SRR3
rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
- stw r0,GPR0(r11)
- lis r10, STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
- addi r10, r10, STACK_FRAME_REGS_MARKER@l
- stw r10, 8(r11)
- li r10, \trapno + 2
- stw r10,_TRAP(r11)
- SAVE_4GPRS(3, r11)
- SAVE_2GPRS(7, r11)
+ COMMON_EXCEPTION_PROLOG_END \trapno + 2
_ASM_NOKPROBE_SYMBOL(\name\()_virt)
.endm

diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index fa566e89f18b..4d583fbef0b6 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -78,14 +78,18 @@ END_BTB_FLUSH_SECTION
stw r1, 0(r11); \
mr r1, r11; \
rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
- stw r0,GPR0(r11); \
- lis r10, STACK_FRAME_REGS_MARKER@ha;/* exception frame marker */ \
- addi r10, r10, STACK_FRAME_REGS_MARKER@l; \
- stw r10, 8(r11); \
- li r10, trapno; \
- stw r10,_TRAP(r11); \
- SAVE_4GPRS(3, r11); \
- SAVE_2GPRS(7, r11)
+ COMMON_EXCEPTION_PROLOG_END trapno
+
+.macro COMMON_EXCEPTION_PROLOG_END trapno
+ stw r0,GPR0(r1)
+ lis r10, STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
+ addi r10, r10, STACK_FRAME_REGS_MARKER@l
+ stw r10, 8(r1)
+ li r10, \trapno
+ stw r10,_TRAP(r1)
+ SAVE_4GPRS(3, r1)
+ SAVE_2GPRS(7, r1)
+.endm

.macro prepare_transfer_to_handler
bl prepare_transfer_to_handler
@@ -231,11 +235,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
stw r1,0(r11); \
mr r1,r11; \
rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
- li r10, trapno; \
- stw r10,_TRAP(r11); \
- stw r0,GPR0(r11); \
- SAVE_4GPRS(3, r11); \
- SAVE_2GPRS(7, r11)
+ COMMON_EXCEPTION_PROLOG_END trapno

#define SAVE_xSRR(xSRR) \
mfspr r0,SPRN_##xSRR##0; \
--
2.25.0

2021-03-09 12:14:54

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 36/43] powerpc/32: Set current->thread.regs in C interrupt entry

No need to do that in assembly, do it in C.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/interrupt.h | 4 +++-
arch/powerpc/kernel/entry_32.S | 3 +--
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 861e6eadc98c..e6d71c2e3aa2 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -33,8 +33,10 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
if (!arch_irq_disabled_regs(regs))
trace_hardirqs_off();

- if (user_mode(regs))
+ if (user_mode(regs)) {
+ current->thread.regs = regs;
account_cpu_user_entry();
+ }
#endif
/*
* Book3E reconciles irq soft mask in asm
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 8fe1c3fdfa6e..815a4ff1ba76 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -52,8 +52,7 @@
prepare_transfer_to_handler:
andi. r0,r9,MSR_PR
addi r12, r2, THREAD
- beq 2f /* if from user, fix up THREAD.regs */
- stw r3,PT_REGS(r12)
+ beq 2f
#ifdef CONFIG_PPC_BOOK3S_32
kuep_lock r11, r12
#endif
--
2.25.0

2021-03-09 12:14:58

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 39/43] powerpc/32s: Move KUEP locking/unlocking in C

This can be done in C, do it.

Unrolling the loop gains approx. 15% performance.

From now on, prepare_transfer_to_handler() is only used for
interrupts coming from the kernel.
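
For reference, the unrolled kuep_update() introduced below is equivalent to this
rolled-up loop (illustrative sketch only, not part of the patch; it relies on the
same mtsr() helper and TASK_SIZE constant used by the new kuep.c):

	static void kuep_update_rolled(u32 val)
	{
		u32 addr;

		/* Walk the user segments, 256M at a time, updating one SR per iteration. */
		for (addr = 0; addr < TASK_SIZE; addr += 0x10000000) {
			mtsr(val, addr);
			val = (val + 0x111) & 0xf0ffffff;	/* next VSID, clear VSID overflow */
		}
	}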

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/book3s/32/kup.h | 31 -------------------
arch/powerpc/include/asm/interrupt.h | 3 ++
arch/powerpc/include/asm/kup.h | 8 +++++
arch/powerpc/kernel/entry_32.S | 16 +---------
arch/powerpc/kernel/interrupt.c | 4 +++
arch/powerpc/mm/book3s32/Makefile | 1 +
arch/powerpc/mm/book3s32/kuep.c | 38 ++++++++++++++++++++++++
7 files changed, 55 insertions(+), 46 deletions(-)
create mode 100644 arch/powerpc/mm/book3s32/kuep.c

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 73bc5d2c431d..b97ea60f6fa3 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -7,37 +7,6 @@

#ifdef __ASSEMBLY__

-.macro kuep_update_sr gpr1, gpr2 /* NEVER use r0 as gpr2 due to addis */
-101: mtsrin \gpr1, \gpr2
- addi \gpr1, \gpr1, 0x111 /* next VSID */
- rlwinm \gpr1, \gpr1, 0, 0xf0ffffff /* clear VSID overflow */
- addis \gpr2, \gpr2, 0x1000 /* address of next segment */
- bdnz 101b
- isync
-.endm
-
-.macro kuep_lock gpr1, gpr2
-#ifdef CONFIG_PPC_KUEP
- li \gpr1, NUM_USER_SEGMENTS
- li \gpr2, 0
- mtctr \gpr1
- mfsrin \gpr1, \gpr2
- oris \gpr1, \gpr1, SR_NX@h /* set Nx */
- kuep_update_sr \gpr1, \gpr2
-#endif
-.endm
-
-.macro kuep_unlock gpr1, gpr2
-#ifdef CONFIG_PPC_KUEP
- li \gpr1, NUM_USER_SEGMENTS
- li \gpr2, 0
- mtctr \gpr1
- mfsrin \gpr1, \gpr2
- rlwinm \gpr1, \gpr1, 0, ~SR_NX /* Clear Nx */
- kuep_update_sr \gpr1, \gpr2
-#endif
-.endm
-
#ifdef CONFIG_PPC_KUAP

.macro kuap_update_sr gpr1, gpr2, gpr3 /* NEVER use r0 as gpr2 due to addis */
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index e6d71c2e3aa2..d4bfe94b4a68 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -34,6 +34,7 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
trace_hardirqs_off();

if (user_mode(regs)) {
+ kuep_lock();
current->thread.regs = regs;
account_cpu_user_entry();
}
@@ -91,6 +92,8 @@ static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt
exception_exit(state->ctx_state);
#endif

+ if (user_mode(regs))
+ kuep_unlock();
/*
* Book3S exits to user via interrupt_exit_user_prepare(), which does
* context tracking, which is a cleaner way to handle PREEMPT=y
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 7ec21af49a45..25671f711ec2 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -55,6 +55,14 @@ void setup_kuep(bool disabled);
static inline void setup_kuep(bool disabled) { }
#endif /* CONFIG_PPC_KUEP */

+#if defined(CONFIG_PPC_KUEP) && defined(CONFIG_PPC_BOOK3S_32)
+void kuep_lock(void);
+void kuep_unlock(void);
+#else
+static inline void kuep_lock(void) { }
+static inline void kuep_unlock(void) { }
+#endif
+
#ifdef CONFIG_PPC_KUAP
void setup_kuap(bool disabled);
#else
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 9c333e6db5fa..850cb17a937f 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -53,14 +53,9 @@
prepare_transfer_to_handler:
andi. r0,r9,MSR_PR
addi r12, r2, THREAD
- beq 2f
-#ifdef CONFIG_PPC_BOOK3S_32
- kuep_lock r11, r12
-#endif
- blr
+ bnelr

/* if from kernel, check interrupted DOZE/NAP mode */
-2:
kuap_save_and_lock r11, r12, r9, r5, r6
lwz r12,TI_LOCAL_FLAGS(r2)
mtcrf 0x01,r12
@@ -84,9 +79,6 @@ _ASM_NOKPROBE_SYMBOL(prepare_transfer_to_handler)
.globl transfer_to_syscall
transfer_to_syscall:
SAVE_NVGPRS(r1)
-#ifdef CONFIG_PPC_BOOK3S_32
- kuep_lock r11, r12
-#endif

/* Calling convention has r9 = orig r0, r10 = regs */
addi r10,r1,STACK_FRAME_OVERHEAD
@@ -104,9 +96,6 @@ ret_from_syscall:
cmplwi cr0,r5,0
bne- 2f
#endif /* CONFIG_PPC_47x */
-#ifdef CONFIG_PPC_BOOK3S_32
- kuep_unlock r5, r7
-#endif
kuap_check r2, r4
lwz r4,_LINK(r1)
lwz r5,_CCR(r1)
@@ -287,9 +276,6 @@ interrupt_return:
bne- .Lrestore_nvgprs

.Lfast_user_interrupt_return:
-#ifdef CONFIG_PPC_BOOK3S_32
- kuep_unlock r10, r11
-#endif
kuap_check r2, r4
lwz r11,_NIP(r1)
lwz r12,_MSR(r1)
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index 7082e8ee825e..727b7848c9cc 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -33,6 +33,8 @@ notrace long system_call_exception(long r3, long r4, long r5,
{
syscall_fn f;

+ kuep_lock();
+
regs->orig_gpr3 = r3;

if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
@@ -354,6 +356,8 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
*/
kuap_user_restore(regs);
#endif
+ kuep_unlock();
+
return ret;
}

diff --git a/arch/powerpc/mm/book3s32/Makefile b/arch/powerpc/mm/book3s32/Makefile
index 446d9de88ce4..7f0c8a78ba0c 100644
--- a/arch/powerpc/mm/book3s32/Makefile
+++ b/arch/powerpc/mm/book3s32/Makefile
@@ -9,3 +9,4 @@ endif
obj-y += mmu.o mmu_context.o
obj-$(CONFIG_PPC_BOOK3S_603) += nohash_low.o
obj-$(CONFIG_PPC_BOOK3S_604) += hash_low.o tlb.o
+obj-$(CONFIG_PPC_KUEP) += kuep.o
diff --git a/arch/powerpc/mm/book3s32/kuep.c b/arch/powerpc/mm/book3s32/kuep.c
new file mode 100644
index 000000000000..c70532568a28
--- /dev/null
+++ b/arch/powerpc/mm/book3s32/kuep.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <asm/reg.h>
+#include <asm/task_size_32.h>
+#include <asm/mmu.h>
+
+#define KUEP_UPDATE_TWO_USER_SEGMENTS(n) do { \
+ if (TASK_SIZE > ((n) << 28)) \
+ mtsr(val1, (n) << 28); \
+ if (TASK_SIZE > (((n) + 1) << 28)) \
+ mtsr(val2, ((n) + 1) << 28); \
+ val1 = (val1 + 0x222) & 0xf0ffffff; \
+ val2 = (val2 + 0x222) & 0xf0ffffff; \
+} while (0)
+
+static __always_inline void kuep_update(u32 val)
+{
+ int val1 = val;
+ int val2 = (val + 0x111) & 0xf0ffffff;
+
+ KUEP_UPDATE_TWO_USER_SEGMENTS(0);
+ KUEP_UPDATE_TWO_USER_SEGMENTS(2);
+ KUEP_UPDATE_TWO_USER_SEGMENTS(4);
+ KUEP_UPDATE_TWO_USER_SEGMENTS(6);
+ KUEP_UPDATE_TWO_USER_SEGMENTS(8);
+ KUEP_UPDATE_TWO_USER_SEGMENTS(10);
+ KUEP_UPDATE_TWO_USER_SEGMENTS(12);
+ KUEP_UPDATE_TWO_USER_SEGMENTS(14);
+}
+
+void kuep_lock(void)
+{
+ kuep_update(mfsr(0) | SR_NX);
+}
+
+void kuep_unlock(void)
+{
+ kuep_update(mfsr(0) & ~SR_NX);
+}
--
2.25.0

2021-03-09 12:15:14

by Christophe Leroy

[permalink] [raw]
Subject: [PATCH v2 42/43] powerpc/8xx: Create C version of kuap save/restore/check helpers

In preparation for porting PPC32 to C syscall entry/exit,
create C versions of kuap_save_and_lock(), kuap_user_restore(),
kuap_kernel_restore(), kuap_check() and kuap_get_and_check() on the 8xx.
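
As a rough illustration only (the function name below is made up, not part of this
patch), these helpers are expected to pair up across the generic entry/exit paths
the same way the PPC64 ones do:

	/* Sketch: how the 8xx helpers bracket a kernel interrupt (illustrative). */
	static void example_kernel_interrupt(struct pt_regs *regs)
	{
		unsigned long kuap;

		kuap_save_and_lock(regs);	/* entry: save SPRN_MD_AP into regs->kuap, block user access */

		/* ... interrupt handler runs here ... */

		kuap = kuap_get_and_check();	/* exit: check KUAP is still locked (KUAP_DEBUG only) */
		kuap_kernel_restore(regs, kuap);	/* restore the MD_AP value saved at entry */
	}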

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 31 ++++++++++++++++++++
1 file changed, 31 insertions(+)

diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 17a4a616436f..c74f5704bc47 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -34,6 +34,37 @@

#include <asm/reg.h>

+static inline void kuap_save_and_lock(struct pt_regs *regs)
+{
+ regs->kuap = mfspr(SPRN_MD_AP);
+ mtspr(SPRN_MD_AP, MD_APG_KUAP);
+}
+
+static inline void kuap_user_restore(struct pt_regs *regs)
+{
+}
+
+static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+{
+ mtspr(SPRN_MD_AP, regs->kuap);
+}
+
+static inline unsigned long kuap_get_and_check(void)
+{
+ unsigned long kuap = mfspr(SPRN_MD_AP);
+
+ if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
+ WARN_ON_ONCE(kuap >> 16 != MD_APG_KUAP >> 16);
+
+ return kuap;
+}
+
+static inline void kuap_check(void)
+{
+ if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
+ kuap_get_and_check();
+}
+
static inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
--
2.25.0

2021-03-10 01:21:03

by Nicholas Piggin

[permalink] [raw]
Subject: Re: [PATCH v2 01/43] powerpc/traps: unrecoverable_exception() is not an interrupt handler

Excerpts from Christophe Leroy's message of March 9, 2021 10:09 pm:
> unrecoverable_exception() is called from interrupt handlers or
> after an interrupt handler has failed.
>
> Make it a standard function to avoid doubling the actions
> performed on interrupt entry (e.g.: user time accounting).
>
> Fixes: 3a96570ffceb ("powerpc: convert interrupt handlers to use wrappers")
> Signed-off-by: Christophe Leroy <[email protected]>

Reviewed-by: Nicholas Piggin <[email protected]>

This should go in as a fix for this release I think.

> ---
> arch/powerpc/include/asm/interrupt.h | 3 ++-
> arch/powerpc/kernel/interrupt.c | 1 -
> arch/powerpc/kernel/traps.c | 2 +-
> 3 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
> index aedfba29e43a..e8d09a841373 100644
> --- a/arch/powerpc/include/asm/interrupt.h
> +++ b/arch/powerpc/include/asm/interrupt.h
> @@ -410,7 +410,6 @@ DECLARE_INTERRUPT_HANDLER(altivec_assist_exception);
> DECLARE_INTERRUPT_HANDLER(CacheLockingException);
> DECLARE_INTERRUPT_HANDLER(SPEFloatingPointException);
> DECLARE_INTERRUPT_HANDLER(SPEFloatingPointRoundException);
> -DECLARE_INTERRUPT_HANDLER(unrecoverable_exception);
> DECLARE_INTERRUPT_HANDLER(WatchdogException);
> DECLARE_INTERRUPT_HANDLER(kernel_bad_stack);
>
> @@ -437,6 +436,8 @@ DECLARE_INTERRUPT_HANDLER_NMI(hmi_exception_realmode);
>
> DECLARE_INTERRUPT_HANDLER_ASYNC(TAUException);
>
> +void unrecoverable_exception(struct pt_regs *regs);
> +
> void replay_system_reset(void);
> void replay_soft_interrupts(void);
>
> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index 398cd86b6ada..b8e7d25be31b 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -436,7 +436,6 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
> return ret;
> }
>
> -void unrecoverable_exception(struct pt_regs *regs);
> void preempt_schedule_irq(void);
>
> notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsigned long msr)
> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
> index 1583fd1c6010..a44a30b0688c 100644
> --- a/arch/powerpc/kernel/traps.c
> +++ b/arch/powerpc/kernel/traps.c
> @@ -2170,7 +2170,7 @@ DEFINE_INTERRUPT_HANDLER(SPEFloatingPointRoundException)
> * in the MSR is 0. This indicates that SRR0/1 are live, and that
> * we therefore lost state by taking this exception.
> */
> -DEFINE_INTERRUPT_HANDLER(unrecoverable_exception)
> +void unrecoverable_exception(struct pt_regs *regs)
> {
> pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
> regs->trap, regs->nip, regs->msr);
> --
> 2.25.0
>
>

2021-03-10 01:24:40

by Nicholas Piggin

[permalink] [raw]
Subject: Re: [PATCH v2 02/43] powerpc/traps: Declare unrecoverable_exception() as __noreturn

Excerpts from Christophe Leroy's message of March 9, 2021 10:09 pm:
> unrecoverable_exception() is never expected to return, most callers
> have an infinite loop in case it returns.
>
> Ensure it really never returns by terminating it with a BUG(), and
> declare it __noreturn.
>
> It allows GCC to really simplify functions calling it. In the example
> below, it avoids the stack frame in the likely fast path and avoids
> code duplication for the exit.
>
> With this patch:

[snip]

Nice.

> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
> index a44a30b0688c..d5c9d9ddd186 100644
> --- a/arch/powerpc/kernel/traps.c
> +++ b/arch/powerpc/kernel/traps.c
> @@ -2170,11 +2170,15 @@ DEFINE_INTERRUPT_HANDLER(SPEFloatingPointRoundException)
> * in the MSR is 0. This indicates that SRR0/1 are live, and that
> * we therefore lost state by taking this exception.
> */
> -void unrecoverable_exception(struct pt_regs *regs)
> +void __noreturn unrecoverable_exception(struct pt_regs *regs)
> {
> pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
> regs->trap, regs->nip, regs->msr);
> die("Unrecoverable exception", regs, SIGABRT);
> + /* die() should not return */
> + WARN(true, "die() unexpectedly returned");
> + for (;;)
> + ;
> }

I don't think the WARN should be added because that will cause another
interrupt after something is already badly wrong, so this might just
make it harder to debug.

For example if die() is falling through for some reason, we warn and
cause a program check here, and that might also be unrecoverable so it
might come through here and fall through again and warn again, etc.

Putting the infinite loop is good enough I think (and better than there
was previously).
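
i.e. something along these lines (a sketch of the suggested shape, just the
resulting function from the hunk quoted above minus the WARN, not a tested patch):

	void __noreturn unrecoverable_exception(struct pt_regs *regs)
	{
		pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
			 regs->trap, regs->nip, regs->msr);
		die("Unrecoverable exception", regs, SIGABRT);
		/* die() should not return */
		for (;;)
			;
	}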

Otherwise

Reviewed-by: Nicholas Piggin <[email protected]>

Thanks,
Nick

2021-03-10 01:31:23

by Nicholas Piggin

[permalink] [raw]
Subject: Re: [PATCH v2 28/43] powerpc/64e: Call bad_page_fault() from do_page_fault()

Excerpts from Christophe Leroy's message of March 9, 2021 10:09 pm:
> book3e/64 is the last one calling __bad_page_fault()
> from assembly.
>
> Save non volatile registers before calling do_page_fault()
> and modify do_page_fault() to call __bad_page_fault()
> for all platforms.
>
> Then it can be refactored by the call of bad_page_fault()
> which avoids the duplication of the exception table search.

This can go in with the 64e change after your series. I think it should
be ready for the next merge window as well.

Thanks,
Nick

>
> Signed-off-by: Christophe Leroy <[email protected]>
> ---
> arch/powerpc/kernel/exceptions-64e.S | 8 +-------
> arch/powerpc/mm/fault.c | 17 ++++-------------
> 2 files changed, 5 insertions(+), 20 deletions(-)
>
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index e8eb9992a270..b60f89078a3f 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -1010,15 +1010,9 @@ storage_fault_common:
> addi r3,r1,STACK_FRAME_OVERHEAD
> ld r14,PACA_EXGEN+EX_R14(r13)
> ld r15,PACA_EXGEN+EX_R15(r13)
> + bl save_nvgprs
> bl do_page_fault
> - cmpdi r3,0
> - bne- 1f
> b ret_from_except_lite
> -1: bl save_nvgprs
> - mr r4,r3
> - addi r3,r1,STACK_FRAME_OVERHEAD
> - bl __bad_page_fault
> - b ret_from_except
>
> /*
> * Alignment exception doesn't fit entirely in the 0x100 bytes so it
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 2e54bac99a22..7bcff3fca110 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -541,24 +541,15 @@ NOKPROBE_SYMBOL(___do_page_fault);
>
> static long __do_page_fault(struct pt_regs *regs)
> {
> - const struct exception_table_entry *entry;
> long err;
>
> err = ___do_page_fault(regs, regs->dar, regs->dsisr);
> if (likely(!err))
> - return err;
> -
> - entry = search_exception_tables(regs->nip);
> - if (likely(entry)) {
> - instruction_pointer_set(regs, extable_fixup(entry));
> return 0;
> - } else if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
> - __bad_page_fault(regs, err);
> - return 0;
> - } else {
> - /* 32 and 64e handle the bad page fault in asm */
> - return err;
> - }
> +
> + bad_page_fault(regs, err);
> +
> + return 0;
> }
> NOKPROBE_SYMBOL(__do_page_fault);
>
> --
> 2.25.0
>
>

2021-03-10 01:35:15

by Nicholas Piggin

[permalink] [raw]
Subject: Re: [PATCH v2 36/43] powerpc/32: Set current->thread.regs in C interrupt entry

Excerpts from Christophe Leroy's message of March 9, 2021 10:10 pm:
> No need to do that in assembly, do it in C.

Hmm. No issues with the patch as such, but why does ppc32 need this but
not 64? AFAIKS 64 sets this when a thread is created.

Thanks,
Nick

>
> Signed-off-by: Christophe Leroy <[email protected]>
> ---
> arch/powerpc/include/asm/interrupt.h | 4 +++-
> arch/powerpc/kernel/entry_32.S | 3 +--
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
> index 861e6eadc98c..e6d71c2e3aa2 100644
> --- a/arch/powerpc/include/asm/interrupt.h
> +++ b/arch/powerpc/include/asm/interrupt.h
> @@ -33,8 +33,10 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
> if (!arch_irq_disabled_regs(regs))
> trace_hardirqs_off();
>
> - if (user_mode(regs))
> + if (user_mode(regs)) {
> + current->thread.regs = regs;
> account_cpu_user_entry();
> + }
> #endif
> /*
> * Book3E reconciles irq soft mask in asm
> diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
> index 8fe1c3fdfa6e..815a4ff1ba76 100644
> --- a/arch/powerpc/kernel/entry_32.S
> +++ b/arch/powerpc/kernel/entry_32.S
> @@ -52,8 +52,7 @@
> prepare_transfer_to_handler:
> andi. r0,r9,MSR_PR
> addi r12, r2, THREAD
> - beq 2f /* if from user, fix up THREAD.regs */
> - stw r3,PT_REGS(r12)
> + beq 2f
> #ifdef CONFIG_PPC_BOOK3S_32
> kuep_lock r11, r12
> #endif
> --
> 2.25.0
>
>

2021-03-10 01:41:35

by Nicholas Piggin

[permalink] [raw]
Subject: Re: [PATCH v2 40/43] powerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr() generic

Excerpts from Christophe Leroy's message of March 9, 2021 10:10 pm:
> In preparation of porting powerpc32 to C syscall entry/exit,
> rename kuap_check_amr() and kuap_get_and_check_amr() as kuap_check()
> and kuap_get_and_check(), and move in the generic asm/kup.h the stub
> for when CONFIG_PPC_KUAP is not selected.

Looks pretty straightforward to me.

While you're renaming things, could kuap_check_amr() be changed to
kuap_assert_locked() or similar? Otherwise,

Reviewed-by: Nicholas Piggin <[email protected]>

>
> Signed-off-by: Christophe Leroy <[email protected]>
> ---
> arch/powerpc/include/asm/book3s/64/kup.h | 24 ++----------------------
> arch/powerpc/include/asm/kup.h | 10 +++++++++-
> arch/powerpc/kernel/interrupt.c | 12 ++++++------
> arch/powerpc/kernel/irq.c | 2 +-
> 4 files changed, 18 insertions(+), 30 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
> index 8bd905050896..d9b07e9998be 100644
> --- a/arch/powerpc/include/asm/book3s/64/kup.h
> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
> @@ -287,7 +287,7 @@ static inline void kuap_kernel_restore(struct pt_regs *regs,
> */
> }
>
> -static inline unsigned long kuap_get_and_check_amr(void)
> +static inline unsigned long kuap_get_and_check(void)
> {
> if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
> unsigned long amr = mfspr(SPRN_AMR);
> @@ -298,27 +298,7 @@ static inline unsigned long kuap_get_and_check_amr(void)
> return 0;
> }
>
> -#else /* CONFIG_PPC_PKEY */
> -
> -static inline void kuap_user_restore(struct pt_regs *regs)
> -{
> -}
> -
> -static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
> -{
> -}
> -
> -static inline unsigned long kuap_get_and_check_amr(void)
> -{
> - return 0;
> -}
> -
> -#endif /* CONFIG_PPC_PKEY */
> -
> -
> -#ifdef CONFIG_PPC_KUAP
> -
> -static inline void kuap_check_amr(void)
> +static inline void kuap_check(void)
> {
> if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
> WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
> diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
> index 25671f711ec2..b7efa46b3109 100644
> --- a/arch/powerpc/include/asm/kup.h
> +++ b/arch/powerpc/include/asm/kup.h
> @@ -74,7 +74,15 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
> return false;
> }
>
> -static inline void kuap_check_amr(void) { }
> +static inline void kuap_check(void) { }
> +static inline void kuap_save_and_lock(struct pt_regs *regs) { }
> +static inline void kuap_user_restore(struct pt_regs *regs) { }
> +static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
> +
> +static inline unsigned long kuap_get_and_check(void)
> +{
> + return 0;
> +}
>
> /*
> * book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index 727b7848c9cc..40ed55064e54 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -76,7 +76,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
> } else
> #endif
> #ifdef CONFIG_PPC64
> - kuap_check_amr();
> + kuap_check();
> #endif
>
> booke_restore_dbcr0();
> @@ -254,7 +254,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
> CT_WARN_ON(ct_state() == CONTEXT_USER);
>
> #ifdef CONFIG_PPC64
> - kuap_check_amr();
> + kuap_check();
> #endif
>
> regs->result = r3;
> @@ -380,7 +380,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
> * AMR can only have been unlocked if we interrupted the kernel.
> */
> #ifdef CONFIG_PPC64
> - kuap_check_amr();
> + kuap_check();
> #endif
>
> local_irq_save(flags);
> @@ -451,7 +451,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
> unsigned long flags;
> unsigned long ret = 0;
> #ifdef CONFIG_PPC64
> - unsigned long amr;
> + unsigned long kuap;
> #endif
>
> if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x) &&
> @@ -467,7 +467,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
> CT_WARN_ON(ct_state() == CONTEXT_USER);
>
> #ifdef CONFIG_PPC64
> - amr = kuap_get_and_check_amr();
> + kuap = kuap_get_and_check();
> #endif
>
> if (unlikely(current_thread_info()->flags & _TIF_EMULATE_STACK_STORE)) {
> @@ -511,7 +511,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
> * value from the check above.
> */
> #ifdef CONFIG_PPC64
> - kuap_kernel_restore(regs, amr);
> + kuap_kernel_restore(regs, kuap);
> #endif
>
> return ret;
> diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
> index d71fd10a1dd4..3b18d2b2c702 100644
> --- a/arch/powerpc/kernel/irq.c
> +++ b/arch/powerpc/kernel/irq.c
> @@ -282,7 +282,7 @@ static inline void replay_soft_interrupts_irqrestore(void)
> * and re-locking AMR but we shouldn't get here in the first place,
> * hence the warning.
> */
> - kuap_check_amr();
> + kuap_check();
>
> if (kuap_state != AMR_KUAP_BLOCKED)
> set_kuap(AMR_KUAP_BLOCKED);
> --
> 2.25.0
>
>

2021-03-11 10:42:52

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v2 36/43] powerpc/32: Set current->thread.regs in C interrupt entry



Le 10/03/2021 à 02:33, Nicholas Piggin a écrit :
> Excerpts from Christophe Leroy's message of March 9, 2021 10:10 pm:
>> No need to do that in assembly, do it in C.
>
> Hmm. No issues with the patch as such, but why does ppc32 need this but
> not 64? AFAIKS 64 sets this when a thread is created.

Looks like ppc64 was doing the same in function save_remaining_regs() in arch/ppc64/kernel/head.S
until commit https://github.com/mpe/linux-fullhistory/commit/e5bb080d

But I can't find what happened to it in that commit.

Where is it done now? Maybe that's also already done for ppc32.

Thanks
Christophe


>
> Thanks,
> Nick
>
>>
>> Signed-off-by: Christophe Leroy <[email protected]>
>> ---
>> arch/powerpc/include/asm/interrupt.h | 4 +++-
>> arch/powerpc/kernel/entry_32.S | 3 +--
>> 2 files changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
>> index 861e6eadc98c..e6d71c2e3aa2 100644
>> --- a/arch/powerpc/include/asm/interrupt.h
>> +++ b/arch/powerpc/include/asm/interrupt.h
>> @@ -33,8 +33,10 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
>> if (!arch_irq_disabled_regs(regs))
>> trace_hardirqs_off();
>>
>> - if (user_mode(regs))
>> + if (user_mode(regs)) {
>> + current->thread.regs = regs;
>> account_cpu_user_entry();
>> + }
>> #endif
>> /*
>> * Book3E reconciles irq soft mask in asm
>> diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
>> index 8fe1c3fdfa6e..815a4ff1ba76 100644
>> --- a/arch/powerpc/kernel/entry_32.S
>> +++ b/arch/powerpc/kernel/entry_32.S
>> @@ -52,8 +52,7 @@
>> prepare_transfer_to_handler:
>> andi. r0,r9,MSR_PR
>> addi r12, r2, THREAD
>> - beq 2f /* if from user, fix up THREAD.regs */
>> - stw r3,PT_REGS(r12)
>> + beq 2f
>> #ifdef CONFIG_PPC_BOOK3S_32
>> kuep_lock r11, r12
>> #endif
>> --
>> 2.25.0
>>
>>

2021-03-11 12:40:37

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v2 36/43] powerpc/32: Set current->thread.regs in C interrupt entry



Le 11/03/2021 à 11:38, Christophe Leroy a écrit :
>
>
> Le 10/03/2021 à 02:33, Nicholas Piggin a écrit :
>> Excerpts from Christophe Leroy's message of March 9, 2021 10:10 pm:
>>> No need to do that in assembly, do it in C.
>>
>> Hmm. No issues with the patch as such, but why does ppc32 need this but
>> not 64? AFAIKS 64 sets this when a thread is created.
>
> Looks like ppc64 was doing the same in function save_remaining_regs() in arch/ppc64/kernel/head.S
> until commit https://github.com/mpe/linux-fullhistory/commit/e5bb080d
>
> But I can't find what happened to it in that commit.
>
> Where is it done now? Maybe that's also already done for ppc32.
>

I dug a bit more and found a later bug fix which adds that setting of current->thread.regs at
task creation: https://github.com/mpe/linux-fullhistory/commit/3eac1897

That was in the ppc64 tree only at that time, and was merged into the common powerpc tree via commit
https://github.com/mpe/linux-fullhistory/commit/06d67d54

So we have it for both ppc32 and ppc64 and ppc32 doesn't need to do it at exception entry anymore.
I'll remove it.
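
For reference, the task-creation side looks roughly like this in copy_thread()
today (simplified sketch from memory, not a quote of those commits; the helper
name is made up):

	/* Sketch: thread.regs is set when the task is created, simplified from copy_thread(). */
	static void sketch_set_thread_regs(struct task_struct *p)
	{
		struct pt_regs *childregs = task_pt_regs(p);

		if (p->flags & PF_KTHREAD)
			p->thread.regs = NULL;		/* kernel thread: no user register frame */
		else
			p->thread.regs = childregs;	/* user thread: point at its pt_regs */
	}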

Thanks
Christophe

2021-03-11 13:49:17

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH v2 25/43] powerpc/32: Replace ASM exception exit by C exception exit from ppc64

Christophe Leroy <[email protected]> writes:
> This patch replaces the PPC32 ASM exception exit by C exception exit.
>
> Signed-off-by: Christophe Leroy <[email protected]>
> ---
> arch/powerpc/kernel/entry_32.S | 481 +++++++++-----------------------
> arch/powerpc/kernel/interrupt.c | 4 +
> 2 files changed, 132 insertions(+), 353 deletions(-)

Bisect points to this breaking qemu mac99 for me, with pmac32_defconfig.

I haven't had time to dig any deeper sorry.

cheers


Freeing unused kernel memory: 1132K
This architecture does not have kernel memory protection.
Run /init as init process
init[1]: User access of kernel address (fffffd20) - exploit attempt? (uid: 0)
init[1]: segfault (11) at fffffd20 nip b7e78638 lr b7e845e4 code 1 in ld-2.27.so[b7e6b000+22000]
init[1]: code: 92010080 92210084 92410088 92810090 92a10094 92c10098 930100a0 932100a4
init[1]: code: 934100a8 936100ac 93a100b4 91810074 <7d41496e> 39400000 3b810017 579c0036
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00ERROR: Error: saw oops/warning etc. while expecting
00000b
CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc2+ #1
Call Trace:
[f1019d80] [c004f1ec] panic+0x138/0x328 (unreliable)
[f1019de0] [c0051c8c] do_exit+0x880/0x8f4
[f1019e30] [c0052bdc] do_group_exit+0x40/0xa4
[f1019e50] [c0060d04] get_signal+0x1e8/0x834
[f1019eb0] [c000b624] do_notify_resume+0xc8/0x314
[f1019f10] [c0010da8] interrupt_exit_user_prepare+0xa4/0xdc
[f1019f30] [c0018228] interrupt_return+0x14/0x14c
--- interrupt: 300 at 0xb7e78638
NIP: b7e78638 LR: b7e845e4 CTR: c01ea2d8
REGS: f1019f40 TRAP: 0300 Not tainted (5.12.0-rc2+)
MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 28004422 XER: 20000000
DAR: fffffd20 DSISR: 42000000
GPR00: b7e845e4 bf951440 00000000 bf951460 00000000 bf951718 fefefeff 7f7f7f7f
GPR08: bf9516b0 406ae8e0 b7eac1d4 00000000 0a12247b 00000000 b7e8a0d0 b7e78554
GPR16: bf951730 bf9516f0 b7eaaf40 bf9516f0 00000001 b7eaa688 10002178 bf951460
GPR24: 00000000 00000000 b7eac200 100cff38 bf9516f0 10002179 b7e845e4 bf951440
NIP [b7e78638] 0xb7e78638
LR [b7e845e4] 0xb7e845e4
--- interrupt: 300
Rebooting in 180 seconds..

2021-03-11 19:40:47

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v2 25/43] powerpc/32: Replace ASM exception exit by C exception exit from ppc64



Le 11/03/2021 à 14:46, Michael Ellerman a écrit :
> Christophe Leroy <[email protected]> writes:
>> This patch replaces the PPC32 ASM exception exit by C exception exit.
>>
>> Signed-off-by: Christophe Leroy <[email protected]>
>> ---
>> arch/powerpc/kernel/entry_32.S | 481 +++++++++-----------------------
>> arch/powerpc/kernel/interrupt.c | 4 +
>> 2 files changed, 132 insertions(+), 353 deletions(-)
>
> Bisect points to this breaking qemu mac99 for me, with pmac32_defconfig.
>
> I haven't had time to dig any deeper sorry.

Embarrassing ...

I don't get this problem on the 8xx (nohash/32) or the 83xx (book3s/32).
I don't get this problem with qemu mac99 when using my klibc-based initramfs.

I managed to reproduce it with the rootfs.cpio that I got some time ago from linuxppc github Wiki.

I'll investigate it tomorrow.

Thanks
Christophe


>
> cheers
>
>
> Freeing unused kernel memory: 1132K
> This architecture does not have kernel memory protection.
> Run /init as init process
> init[1]: User access of kernel address (fffffd20) - exploit attempt? (uid: 0)
> init[1]: segfault (11) at fffffd20 nip b7e78638 lr b7e845e4 code 1 in ld-2.27.so[b7e6b000+22000]
> init[1]: code: 92010080 92210084 92410088 92810090 92a10094 92c10098 930100a0 932100a4
> init[1]: code: 934100a8 936100ac 93a100b4 91810074 <7d41496e> 39400000 3b810017 579c0036
> Kernel panic - not syncing: Attempted to kill init! exitcode=0x00ERROR: Error: saw oops/warning etc. while expecting
> 00000b
> CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc2+ #1
> Call Trace:
> [f1019d80] [c004f1ec] panic+0x138/0x328 (unreliable)
> [f1019de0] [c0051c8c] do_exit+0x880/0x8f4
> [f1019e30] [c0052bdc] do_group_exit+0x40/0xa4
> [f1019e50] [c0060d04] get_signal+0x1e8/0x834
> [f1019eb0] [c000b624] do_notify_resume+0xc8/0x314
> [f1019f10] [c0010da8] interrupt_exit_user_prepare+0xa4/0xdc
> [f1019f30] [c0018228] interrupt_return+0x14/0x14c
> --- interrupt: 300 at 0xb7e78638
> NIP: b7e78638 LR: b7e845e4 CTR: c01ea2d8
> REGS: f1019f40 TRAP: 0300 Not tainted (5.12.0-rc2+)
> MSR: 0000d032 <EE,PR,ME,IR,DR,RI> CR: 28004422 XER: 20000000
> DAR: fffffd20 DSISR: 42000000
> GPR00: b7e845e4 bf951440 00000000 bf951460 00000000 bf951718 fefefeff 7f7f7f7f
> GPR08: bf9516b0 406ae8e0 b7eac1d4 00000000 0a12247b 00000000 b7e8a0d0 b7e78554
> GPR16: bf951730 bf9516f0 b7eaaf40 bf9516f0 00000001 b7eaa688 10002178 bf951460
> GPR24: 00000000 00000000 b7eac200 100cff38 bf9516f0 10002179 b7e845e4 bf951440
> NIP [b7e78638] 0xb7e78638
> LR [b7e845e4] 0xb7e845e4
> --- interrupt: 300
> Rebooting in 180 seconds..
>

2021-03-11 23:27:59

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH v2 25/43] powerpc/32: Replace ASM exception exit by C exception exit from ppc64

Christophe Leroy <[email protected]> writes:
> Le 11/03/2021 à 14:46, Michael Ellerman a écrit :
>> Christophe Leroy <[email protected]> writes:
>>> This patch replaces the PPC32 ASM exception exit by C exception exit.
>>>
>>> Signed-off-by: Christophe Leroy <[email protected]>
>>> ---
>>> arch/powerpc/kernel/entry_32.S | 481 +++++++++-----------------------
>>> arch/powerpc/kernel/interrupt.c | 4 +
>>> 2 files changed, 132 insertions(+), 353 deletions(-)
>>
>> Bisect points to this breaking qemu mac99 for me, with pmac32_defconfig.
>>
>> I haven't had time to dig any deeper sorry.
>
> Embarrassing ...

Nah, these things happen.

> I don't get this problem on the 8xx (nohash/32) or the 83xx (book3s/32).
> I don't get this problem with qemu mac99 when using my klibc-based initramfs.
>
> I managed to reproduce it with the rootfs.cpio that I got some time ago from linuxppc github Wiki.

OK.

I'm using the ppc-rootfs.cpio.gz from here:

https://github.com/linuxppc/ci-scripts/blob/master/root-disks/Makefile

And the boot script is:

https://github.com/linuxppc/ci-scripts/blob/master/scripts/boot/qemu-mac99

I've been meaning to write docs on how to use those scripts, but haven't
got around to it.

There's nothing really special though, it's just a wrapper around qemu -M mac99.

> I'll investigate it tomorrow.

Thanks.

cheers

2021-03-12 01:07:33

by Nicholas Piggin

[permalink] [raw]
Subject: Re: [PATCH v2 36/43] powerpc/32: Set current->thread.regs in C interrupt entry

Excerpts from Christophe Leroy's message of March 11, 2021 10:38 pm:
>
>
> Le 11/03/2021 à 11:38, Christophe Leroy a écrit :
>>
>>
>> Le 10/03/2021 à 02:33, Nicholas Piggin a écrit :
>>> Excerpts from Christophe Leroy's message of March 9, 2021 10:10 pm:
>>>> No need to do that in assembly, do it in C.
>>>
>>> Hmm. No issues with the patch as such, but why does ppc32 need this but
>>> not 64? AFAIKS 64 sets this when a thread is created.
>>
>> Looks like ppc64 was doing the same in function save_remaining_regs() in arch/ppc64/kernel/head.S
>> until commit https://github.com/mpe/linux-fullhistory/commit/e5bb080d
>>
>> But I can't find what happened to it in that commit.
>>
>> Where is it done now? Maybe that's also already done for ppc32.
>>
>
> I dug a bit more and found a later bug fix which adds that setting of current->thread.regs at
> task creation: https://github.com/mpe/linux-fullhistory/commit/3eac1897
>
> That was in the ppc64 tree only at that time, and was merged into the common powerpc tree via commit
> https://github.com/mpe/linux-fullhistory/commit/06d67d54

Nice archaeology!

> So we have it for both ppc32 and ppc64 and ppc32 doesn't need to do it at exception entry anymore.
> I'll remove it.

Good, that's what I hoped (otherwise ppc64 would have been missing
something).

Thanks,
Nick

2021-03-12 08:39:38

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v2 40/43] powerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr() generic



Le 10/03/2021 à 02:37, Nicholas Piggin a écrit :
> Excerpts from Christophe Leroy's message of March 9, 2021 10:10 pm:
>> In preparation of porting powerpc32 to C syscall entry/exit,
>> rename kuap_check_amr() and kuap_get_and_check_amr() as kuap_check()
>> and kuap_get_and_check(), and move in the generic asm/kup.h the stub
>> for when CONFIG_PPC_KUAP is not selected.
>
> Looks pretty straightforward to me.
>
> While you're renaming things, could kuap_check_amr() be changed to
> kuap_assert_locked() or similar? Otherwise,

Ok, renamed kuap_assert_locked() and kuap_get_and_assert_locked()
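
So the generic !CONFIG_PPC_KUAP stubs in asm/kup.h would end up looking like
this (a sketch of the rename only, with the bodies as in the hunk quoted below):

	static inline void kuap_assert_locked(void) { }

	static inline unsigned long kuap_get_and_assert_locked(void)
	{
		return 0;
	}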

>
> Reviewed-by: Nicholas Piggin <[email protected]>
>
>>
>> Signed-off-by: Christophe Leroy <[email protected]>
>> ---
>> arch/powerpc/include/asm/book3s/64/kup.h | 24 ++----------------------
>> arch/powerpc/include/asm/kup.h | 10 +++++++++-
>> arch/powerpc/kernel/interrupt.c | 12 ++++++------
>> arch/powerpc/kernel/irq.c | 2 +-
>> 4 files changed, 18 insertions(+), 30 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
>> index 8bd905050896..d9b07e9998be 100644
>> --- a/arch/powerpc/include/asm/book3s/64/kup.h
>> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
>> @@ -287,7 +287,7 @@ static inline void kuap_kernel_restore(struct pt_regs *regs,
>> */
>> }
>>
>> -static inline unsigned long kuap_get_and_check_amr(void)
>> +static inline unsigned long kuap_get_and_check(void)
>> {
>> if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
>> unsigned long amr = mfspr(SPRN_AMR);
>> @@ -298,27 +298,7 @@ static inline unsigned long kuap_get_and_check_amr(void)
>> return 0;
>> }
>>
>> -#else /* CONFIG_PPC_PKEY */
>> -
>> -static inline void kuap_user_restore(struct pt_regs *regs)
>> -{
>> -}
>> -
>> -static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
>> -{
>> -}
>> -
>> -static inline unsigned long kuap_get_and_check_amr(void)
>> -{
>> - return 0;
>> -}
>> -
>> -#endif /* CONFIG_PPC_PKEY */
>> -
>> -
>> -#ifdef CONFIG_PPC_KUAP
>> -
>> -static inline void kuap_check_amr(void)
>> +static inline void kuap_check(void)
>> {
>> if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
>> WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
>> diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
>> index 25671f711ec2..b7efa46b3109 100644
>> --- a/arch/powerpc/include/asm/kup.h
>> +++ b/arch/powerpc/include/asm/kup.h
>> @@ -74,7 +74,15 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>> return false;
>> }
>>
>> -static inline void kuap_check_amr(void) { }
>> +static inline void kuap_check(void) { }
>> +static inline void kuap_save_and_lock(struct pt_regs *regs) { }
>> +static inline void kuap_user_restore(struct pt_regs *regs) { }
>> +static inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
>> +
>> +static inline unsigned long kuap_get_and_check(void)
>> +{
>> + return 0;
>> +}
>>
>> /*
>> * book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
>> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
>> index 727b7848c9cc..40ed55064e54 100644
>> --- a/arch/powerpc/kernel/interrupt.c
>> +++ b/arch/powerpc/kernel/interrupt.c
>> @@ -76,7 +76,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
>> } else
>> #endif
>> #ifdef CONFIG_PPC64
>> - kuap_check_amr();
>> + kuap_check();
>> #endif
>>
>> booke_restore_dbcr0();
>> @@ -254,7 +254,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>> CT_WARN_ON(ct_state() == CONTEXT_USER);
>>
>> #ifdef CONFIG_PPC64
>> - kuap_check_amr();
>> + kuap_check();
>> #endif
>>
>> regs->result = r3;
>> @@ -380,7 +380,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>> * AMR can only have been unlocked if we interrupted the kernel.
>> */
>> #ifdef CONFIG_PPC64
>> - kuap_check_amr();
>> + kuap_check();
>> #endif
>>
>> local_irq_save(flags);
>> @@ -451,7 +451,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>> unsigned long flags;
>> unsigned long ret = 0;
>> #ifdef CONFIG_PPC64
>> - unsigned long amr;
>> + unsigned long kuap;
>> #endif
>>
>> if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x) &&
>> @@ -467,7 +467,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>> CT_WARN_ON(ct_state() == CONTEXT_USER);
>>
>> #ifdef CONFIG_PPC64
>> - amr = kuap_get_and_check_amr();
>> + kuap = kuap_get_and_check();
>> #endif
>>
>> if (unlikely(current_thread_info()->flags & _TIF_EMULATE_STACK_STORE)) {
>> @@ -511,7 +511,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>> * value from the check above.
>> */
>> #ifdef CONFIG_PPC64
>> - kuap_kernel_restore(regs, amr);
>> + kuap_kernel_restore(regs, kuap);
>> #endif
>>
>> return ret;
>> diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
>> index d71fd10a1dd4..3b18d2b2c702 100644
>> --- a/arch/powerpc/kernel/irq.c
>> +++ b/arch/powerpc/kernel/irq.c
>> @@ -282,7 +282,7 @@ static inline void replay_soft_interrupts_irqrestore(void)
>> * and re-locking AMR but we shouldn't get here in the first place,
>> * hence the warning.
>> */
>> - kuap_check_amr();
>> + kuap_check();
>>
>> if (kuap_state != AMR_KUAP_BLOCKED)
>> set_kuap(AMR_KUAP_BLOCKED);
>> --
>> 2.25.0
>>
>>

2021-03-12 08:44:11

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v2 28/43] powerpc/64e: Call bad_page_fault() from do_page_fault()



Le 10/03/2021 à 02:29, Nicholas Piggin a écrit :
> Excerpts from Christophe Leroy's message of March 9, 2021 10:09 pm:
>> book3e/64 is the last one calling __bad_page_fault()
>> from assembly.
>>
>> Save non volatile registers before calling do_page_fault()
>> and modify do_page_fault() to call __bad_page_fault()
>> for all platforms.
>>
>> Then it can be refactored by the call of bad_page_fault()
>> which avoids the duplication of the exception table search.
>
> This can go in with the 64e change after your series. I think it should
> be ready for the next merge window as well.

Yes, I thought it would pull in more optimisation, but in the end it doesn't bring anything, so I'll
drop it for now and leave it to you for your series.

>
> Thanks,
> Nick
>
>>
>> Signed-off-by: Christophe Leroy <[email protected]>
>> ---
>> arch/powerpc/kernel/exceptions-64e.S | 8 +-------
>> arch/powerpc/mm/fault.c | 17 ++++-------------
>> 2 files changed, 5 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>> index e8eb9992a270..b60f89078a3f 100644
>> --- a/arch/powerpc/kernel/exceptions-64e.S
>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>> @@ -1010,15 +1010,9 @@ storage_fault_common:
>> addi r3,r1,STACK_FRAME_OVERHEAD
>> ld r14,PACA_EXGEN+EX_R14(r13)
>> ld r15,PACA_EXGEN+EX_R15(r13)
>> + bl save_nvgprs
>> bl do_page_fault
>> - cmpdi r3,0
>> - bne- 1f
>> b ret_from_except_lite
>> -1: bl save_nvgprs
>> - mr r4,r3
>> - addi r3,r1,STACK_FRAME_OVERHEAD
>> - bl __bad_page_fault
>> - b ret_from_except
>>
>> /*
>> * Alignment exception doesn't fit entirely in the 0x100 bytes so it
>> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
>> index 2e54bac99a22..7bcff3fca110 100644
>> --- a/arch/powerpc/mm/fault.c
>> +++ b/arch/powerpc/mm/fault.c
>> @@ -541,24 +541,15 @@ NOKPROBE_SYMBOL(___do_page_fault);
>>
>> static long __do_page_fault(struct pt_regs *regs)
>> {
>> - const struct exception_table_entry *entry;
>> long err;
>>
>> err = ___do_page_fault(regs, regs->dar, regs->dsisr);
>> if (likely(!err))
>> - return err;
>> -
>> - entry = search_exception_tables(regs->nip);
>> - if (likely(entry)) {
>> - instruction_pointer_set(regs, extable_fixup(entry));
>> return 0;
>> - } else if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
>> - __bad_page_fault(regs, err);
>> - return 0;
>> - } else {
>> - /* 32 and 64e handle the bad page fault in asm */
>> - return err;
>> - }
>> +
>> + bad_page_fault(regs, err);
>> +
>> + return 0;
>> }
>> NOKPROBE_SYMBOL(__do_page_fault);
>>
>> --
>> 2.25.0
>>
>>

2021-03-12 08:45:20

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v2 02/43] powerpc/traps: Declare unrecoverable_exception() as __noreturn



Le 10/03/2021 à 02:22, Nicholas Piggin a écrit :
> Excerpts from Christophe Leroy's message of March 9, 2021 10:09 pm:
>> unrecoverable_exception() is never expected to return, most callers
>> have an infinite loop in case it returns.
>>
>> Ensure it really never returns by terminating it with a BUG(), and
>> declare it __noreturn.
>>
>> It allows GCC to really simplify functions calling it. In the example
>> below, it avoids the stack frame in the likely fast path and avoids
>> code duplication for the exit.
>>
>> With this patch:
>
> [snip]
>
> Nice.
>
>> diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
>> index a44a30b0688c..d5c9d9ddd186 100644
>> --- a/arch/powerpc/kernel/traps.c
>> +++ b/arch/powerpc/kernel/traps.c
>> @@ -2170,11 +2170,15 @@ DEFINE_INTERRUPT_HANDLER(SPEFloatingPointRoundException)
>> * in the MSR is 0. This indicates that SRR0/1 are live, and that
>> * we therefore lost state by taking this exception.
>> */
>> -void unrecoverable_exception(struct pt_regs *regs)
>> +void __noreturn unrecoverable_exception(struct pt_regs *regs)
>> {
>> pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
>> regs->trap, regs->nip, regs->msr);
>> die("Unrecoverable exception", regs, SIGABRT);
>> + /* die() should not return */
>> + WARN(true, "die() unexpectedly returned");
>> + for (;;)
>> + ;
>> }
>
> I don't think the WARN should be added because that will cause another
> interrupt after something is already badly wrong, so this might just
> make it harder to debug.
>
> For example if die() is falling through for some reason, we warn and
> cause a program check here, and that might also be unrecoverable so it
> might come through here and fall through again and warn again, etc.
>
> Putting the infinite loop is good enough I think (and better than there
> was previously).

Ok, dropped the WARN()

>
> Otherwise
>
> Reviewed-by: Nicholas Piggin <[email protected]>
>
> Thanks,
> Nick
>

2021-03-12 11:46:01

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v2 25/43] powerpc/32: Replace ASM exception exit by C exception exit from ppc64



Le 12/03/2021 à 00:26, Michael Ellerman a écrit :
> Christophe Leroy <[email protected]> writes:
>> Le 11/03/2021 à 14:46, Michael Ellerman a écrit :
>>> Christophe Leroy <[email protected]> writes:
>>>> This patch replaces the PPC32 ASM exception exit by C exception exit.
>>>>
>>>> Signed-off-by: Christophe Leroy <[email protected]>
>>>> ---
>>>> arch/powerpc/kernel/entry_32.S | 481 +++++++++-----------------------
>>>> arch/powerpc/kernel/interrupt.c | 4 +
>>>> 2 files changed, 132 insertions(+), 353 deletions(-)
>>>
>>> Bisect points to this breaking qemu mac99 for me, with pmac32_defconfig.
>>>
>>> I haven't had time to dig any deeper sorry.
>>
>> Embarrassing ...
>
> Nah, these things happen.
>
>> I don't get this problem on the 8xx (nohash/32) or the 83xx (book3s/32).
>> I don't get this problem with qemu mac99 when using my klibc-based initramfs.
>>
>> I managed to reproduce it with the rootfs.cpio that I got some time ago from linuxppc github Wiki.
>
> OK.
>
> I'm using the ppc-rootfs.cpio.gz from here:
>
> https://github.com/linuxppc/ci-scripts/blob/master/root-disks/Makefile
>
> And the boot script is:
>
> https://github.com/linuxppc/ci-scripts/blob/master/scripts/boot/qemu-mac99
>
> I've been meaning to write docs on how to use those scripts, but haven't
> got around to it.
>
> There's nothing really special though it's just a wrapper around qemu -M mac99.
>
>> I'll investigate it tomorrow.
>

The problem is in fast_interrupt_return: not all registers are saved yet on ppc32 (msr, nip, xer, ctr),
so we can't restore them all the way ppc64 does.

The problem happens only when userspace uses floating point or altivec.

For the time being, I'll keep the original fast_interrupt_return.

I will likely send a new version of the series later today, taking into account Nick's comments.

Christophe

2021-03-14 10:18:37

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH v2 00/43] powerpc/32: Switch to interrupt entry/exit in C

On Tue, 9 Mar 2021 12:09:25 +0000 (UTC), Christophe Leroy wrote:
> This series aims at porting interrupt entry/exit in C on PPC32, using
> the work already merged for PPC64.
>
> First two patches are a fix and an optimisation of unrecoverable_exception() function.
>
> Six following patches do minimal changes in 40x in order to be able to enable MMU
> earlier in exception entry.
>
> [...]

Patch 1 applied to powerpc/fixes.

[01/43] powerpc/traps: unrecoverable_exception() is not an interrupt handler
https://git.kernel.org/powerpc/c/0b736881c8f1a6cd912f7a9162b9e097b28c1c30

cheers