2013-03-29 10:02:04

by Li Zhong

Subject: [RFC PATCH v2 0/6] powerpc: Support context tracking for Power pSeries

These patches add context tracking support for the Power architecture, beginning
with 64-bit pSeries. The code is ported from x86_64, and each patch lists the
corresponding x86 patch.

Would you please help review and give your comments?

v2:

I rebased the patches against 3.9-rcs, and also added a patch to replace
the exception handling with the generic code in tip timers/nohz.

I assume these patches will go in through the powerpc tree, so I didn't combine
the new patch (#6) with the original one (#2). That way, if the powerpc tree picks
these up, it can take the first five patches and apply patch #6 later,
once the dependency enters the powerpc tree (maybe in some 3.10-rc).

I'm also wondering whether these could instead go through
tip timers/nohz; in that case, patches #6 and #2 could be combined into one, with
no need to worry about issues caused by merging arch and common code. It
might also make future changes easier.

Thanks, Zhong

Li Zhong (6):
powerpc: Syscall hooks for context tracking subsystem
powerpc: Exception hooks for context tracking subsystem
powerpc: Exit user context on notify resume
powerpc: Use the new schedule_user API on userspace preemption
powerpc: select HAVE_CONTEXT_TRACKING for pSeries
powerpc: Use generic code for exception handling

arch/powerpc/include/asm/context_tracking.h | 10 +++
arch/powerpc/include/asm/thread_info.h | 7 ++-
arch/powerpc/kernel/entry_64.S | 3 +-
arch/powerpc/kernel/ptrace.c | 5 ++
arch/powerpc/kernel/signal.c | 5 ++
arch/powerpc/kernel/traps.c | 91 ++++++++++++++++++++-------
arch/powerpc/mm/fault.c | 16 ++++-
arch/powerpc/mm/hash_utils_64.c | 38 ++++++++---
arch/powerpc/platforms/pseries/Kconfig | 1 +
9 files changed, 140 insertions(+), 36 deletions(-)
create mode 100644 arch/powerpc/include/asm/context_tracking.h

--
1.7.9.5


2013-03-29 10:00:59

by Li Zhong

Subject: [RFC PATCH v2 2/6] powerpc: Exception hooks for context tracking subsystem

This adds the exception hooks for the context tracking subsystem, covering
data access, program check, single step, instruction breakpoint, machine check,
alignment, FP unavailable, AltiVec assist, and unknown exceptions, whose handlers
might use RCU.

This patch corresponds to
[PATCH] x86: Exception hooks for userspace RCU extended QS
commit 6ba3c97a38803883c2eee489505796cb0a727122
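Every handler touched by this patch follows the same wrap pattern. A rough user-space model of the v2 hook semantics (illustrative only; `pt_regs_model` and `in_user_ctx` are made-up names, and the real `user_exit()`/`user_enter()` come from the context tracking subsystem):

```c
#include <stdbool.h>

static bool in_user_ctx = true;           /* context tracking's view of this CPU */

struct pt_regs_model { bool from_user; }; /* stand-in for the real pt_regs */

static bool user_mode(struct pt_regs_model *regs) { return regs->from_user; }
static void user_exit(void)  { in_user_ctx = false; }
static void user_enter(void) { in_user_ctx = true; }

/* Called at the top of each exception handler: RCU must see the CPU
 * as "in kernel" before the handler body runs. */
static void exception_enter(struct pt_regs_model *regs)
{
	(void)regs;
	user_exit();
}

/* Called at the end: flip back to "user" only if the trap came from
 * user space; a trap taken in kernel mode stays in kernel context. */
static void exception_exit(struct pt_regs_model *regs)
{
	if (user_mode(regs))
		user_enter();
}
```

This is why early returns in the handlers below become `goto exit`: every path has to reach `exception_exit()`.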

Signed-off-by: Li Zhong <[email protected]>
---
arch/powerpc/include/asm/context_tracking.h | 28 +++++++++
arch/powerpc/kernel/exceptions-64s.S | 4 +-
arch/powerpc/kernel/traps.c | 83 ++++++++++++++++++++-------
arch/powerpc/mm/fault.c | 15 ++++-
arch/powerpc/mm/hash_utils_64.c | 17 ++++++
5 files changed, 122 insertions(+), 25 deletions(-)
create mode 100644 arch/powerpc/include/asm/context_tracking.h

diff --git a/arch/powerpc/include/asm/context_tracking.h b/arch/powerpc/include/asm/context_tracking.h
new file mode 100644
index 0000000..377146e
--- /dev/null
+++ b/arch/powerpc/include/asm/context_tracking.h
@@ -0,0 +1,28 @@
+#ifndef _ASM_POWERPC_CONTEXT_TRACKING_H
+#define _ASM_POWERPC_CONTEXT_TRACKING_H
+
+#include <linux/context_tracking.h>
+#include <asm/ptrace.h>
+
+/*
+ * temporarily defined to avoid potential conflicts with the common
+ * implementation, these will be removed by a later patch after the common
+ * code enters powerpc tree
+ */
+#define exception_enter __exception_enter
+#define exception_exit __exception_exit
+
+static inline void __exception_enter(struct pt_regs *regs)
+{
+ user_exit();
+}
+
+static inline void __exception_exit(struct pt_regs *regs)
+{
+#ifdef CONFIG_CONTEXT_TRACKING
+ if (user_mode(regs))
+ user_enter();
+#endif
+}
+
+#endif
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index a8a5361..6d82f4f 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1368,15 +1368,17 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_SLB)
rlwimi r4,r0,32-13,30,30 /* becomes _PAGE_USER access bit */
ori r4,r4,1 /* add _PAGE_PRESENT */
rlwimi r4,r5,22+2,31-2,31-2 /* Set _PAGE_EXEC if trap is 0x400 */
+ addi r6,r1,STACK_FRAME_OVERHEAD

/*
* r3 contains the faulting address
* r4 contains the required access permissions
* r5 contains the trap number
+ * r6 contains the address of pt_regs
*
* at return r3 = 0 for success, 1 for page fault, negative for error
*/
- bl .hash_page /* build HPTE if possible */
+ bl .hash_page_ct /* build HPTE if possible */
cmpdi r3,0 /* see if hash_page succeeded */

/* Success */
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 37cc40e..6228b6b 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -60,6 +60,7 @@
#include <asm/switch_to.h>
#include <asm/tm.h>
#include <asm/debug.h>
+#include <asm/context_tracking.h>

#if defined(CONFIG_DEBUGGER) || defined(CONFIG_KEXEC)
int (*__debugger)(struct pt_regs *regs) __read_mostly;
@@ -669,6 +670,8 @@ void machine_check_exception(struct pt_regs *regs)
{
int recover = 0;

+ exception_enter(regs);
+
__get_cpu_var(irq_stat).mce_exceptions++;

/* See if any machine dependent calls. In theory, we would want
@@ -683,7 +686,7 @@ void machine_check_exception(struct pt_regs *regs)
recover = cur_cpu_spec->machine_check(regs);

if (recover > 0)
- return;
+ goto exit;

#if defined(CONFIG_8xx) && defined(CONFIG_PCI)
/* the qspan pci read routines can cause machine checks -- Cort
@@ -693,20 +696,23 @@ void machine_check_exception(struct pt_regs *regs)
* -- BenH
*/
bad_page_fault(regs, regs->dar, SIGBUS);
- return;
+ goto exit;
#endif

if (debugger_fault_handler(regs))
- return;
+ goto exit;

if (check_io_access(regs))
- return;
+ goto exit;

die("Machine check", regs, SIGBUS);

/* Must die if the interrupt is not recoverable */
if (!(regs->msr & MSR_RI))
panic("Unrecoverable Machine check");
+
+exit:
+ exception_exit(regs);
}

void SMIException(struct pt_regs *regs)
@@ -716,20 +722,29 @@ void SMIException(struct pt_regs *regs)

void unknown_exception(struct pt_regs *regs)
{
+ exception_enter(regs);
+
printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",
regs->nip, regs->msr, regs->trap);

_exception(SIGTRAP, regs, 0, 0);
+
+ exception_exit(regs);
}

void instruction_breakpoint_exception(struct pt_regs *regs)
{
+ exception_enter(regs);
+
if (notify_die(DIE_IABR_MATCH, "iabr_match", regs, 5,
5, SIGTRAP) == NOTIFY_STOP)
- return;
+ goto exit;
if (debugger_iabr_match(regs))
- return;
+ goto exit;
_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
+
+exit:
+ exception_exit(regs);
}

void RunModeException(struct pt_regs *regs)
@@ -739,15 +754,20 @@ void RunModeException(struct pt_regs *regs)

void __kprobes single_step_exception(struct pt_regs *regs)
{
+ exception_enter(regs);
+
clear_single_step(regs);

if (notify_die(DIE_SSTEP, "single_step", regs, 5,
5, SIGTRAP) == NOTIFY_STOP)
- return;
+ goto exit;
if (debugger_sstep(regs))
- return;
+ goto exit;

_exception(SIGTRAP, regs, TRAP_TRACE, regs->nip);
+
+exit:
+ exception_exit(regs);
}

/*
@@ -1002,32 +1022,34 @@ void __kprobes program_check_exception(struct pt_regs *regs)
unsigned int reason = get_reason(regs);
extern int do_mathemu(struct pt_regs *regs);

+ exception_enter(regs);
+
/* We can now get here via a FP Unavailable exception if the core
* has no FPU, in that case the reason flags will be 0 */

if (reason & REASON_FP) {
/* IEEE FP exception */
parse_fpe(regs);
- return;
+ goto exit;
}
if (reason & REASON_TRAP) {
/* Debugger is first in line to stop recursive faults in
* rcu_lock, notify_die, or atomic_notifier_call_chain */
if (debugger_bpt(regs))
- return;
+ goto exit;

/* trap exception */
if (notify_die(DIE_BPT, "breakpoint", regs, 5, 5, SIGTRAP)
== NOTIFY_STOP)
- return;
+ goto exit;

if (!(regs->msr & MSR_PR) && /* not user-mode */
report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) {
regs->nip += 4;
- return;
+ goto exit;
}
_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
- return;
+ goto exit;
}
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (reason & REASON_TM) {
@@ -1043,7 +1065,7 @@ void __kprobes program_check_exception(struct pt_regs *regs)
if (!user_mode(regs) &&
report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) {
regs->nip += 4;
- return;
+ goto exit;
}
/* If usermode caused this, it's done something illegal and
* gets a SIGILL slap on the wrist. We call it an illegal
@@ -1053,7 +1075,7 @@ void __kprobes program_check_exception(struct pt_regs *regs)
*/
if (user_mode(regs)) {
_exception(SIGILL, regs, ILL_ILLOPN, regs->nip);
- return;
+ goto exit;
} else {
printk(KERN_EMERG "Unexpected TM Bad Thing exception "
"at %lx (msr 0x%x)\n", regs->nip, reason);
@@ -1077,16 +1099,16 @@ void __kprobes program_check_exception(struct pt_regs *regs)
switch (do_mathemu(regs)) {
case 0:
emulate_single_step(regs);
- return;
+ goto exit;
case 1: {
int code = 0;
code = __parse_fpscr(current->thread.fpscr.val);
_exception(SIGFPE, regs, code, regs->nip);
- return;
+ goto exit;
}
case -EFAULT:
_exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
- return;
+ goto exit;
}
/* fall through on any other errors */
#endif /* CONFIG_MATH_EMULATION */
@@ -1097,10 +1119,10 @@ void __kprobes program_check_exception(struct pt_regs *regs)
case 0:
regs->nip += 4;
emulate_single_step(regs);
- return;
+ goto exit;
case -EFAULT:
_exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
- return;
+ goto exit;
}
}

@@ -1108,12 +1130,17 @@ void __kprobes program_check_exception(struct pt_regs *regs)
_exception(SIGILL, regs, ILL_PRVOPC, regs->nip);
else
_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
+
+exit:
+ exception_exit(regs);
}

void alignment_exception(struct pt_regs *regs)
{
int sig, code, fixed = 0;

+ exception_enter(regs);
+
/* We restore the interrupt state now */
if (!arch_irq_disabled_regs(regs))
local_irq_enable();
@@ -1125,7 +1152,7 @@ void alignment_exception(struct pt_regs *regs)
if (fixed == 1) {
regs->nip += 4; /* skip over emulated instruction */
emulate_single_step(regs);
- return;
+ goto exit;
}

/* Operand address was bad */
@@ -1140,6 +1167,9 @@ void alignment_exception(struct pt_regs *regs)
_exception(sig, regs, code, regs->dar);
else
bad_page_fault(regs, regs->dar, sig);
+
+exit:
+ exception_exit(regs);
}

void StackOverflow(struct pt_regs *regs)
@@ -1168,23 +1198,32 @@ void trace_syscall(struct pt_regs *regs)

void kernel_fp_unavailable_exception(struct pt_regs *regs)
{
+ exception_enter(regs);
+
printk(KERN_EMERG "Unrecoverable FP Unavailable Exception "
"%lx at %lx\n", regs->trap, regs->nip);
die("Unrecoverable FP Unavailable Exception", regs, SIGABRT);
+
+ exception_exit(regs);
}

void altivec_unavailable_exception(struct pt_regs *regs)
{
+ exception_enter(regs);
+
if (user_mode(regs)) {
/* A user program has executed an altivec instruction,
but this kernel doesn't support altivec. */
_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
- return;
+ goto exit;
}

printk(KERN_EMERG "Unrecoverable VMX/Altivec Unavailable Exception "
"%lx at %lx\n", regs->trap, regs->nip);
die("Unrecoverable VMX/Altivec Unavailable Exception", regs, SIGABRT);
+
+exit:
+ exception_exit(regs);
}

void vsx_unavailable_exception(struct pt_regs *regs)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 229951f..108ab17 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -42,6 +42,7 @@
#include <asm/tlbflush.h>
#include <asm/siginfo.h>
#include <asm/debug.h>
+#include <asm/context_tracking.h>
#include <mm/mmu_decl.h>

#include "icswx.h"
@@ -193,8 +194,8 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
* The return value is 0 if the fault was handled, or the signal
* number if this is a kernel fault that can't be handled here.
*/
-int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
- unsigned long error_code)
+static int __kprobes __do_page_fault(struct pt_regs *regs,
+ unsigned long address, unsigned long error_code)
{
struct vm_area_struct * vma;
struct mm_struct *mm = current->mm;
@@ -475,6 +476,16 @@ bad_area_nosemaphore:

}

+int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
+ unsigned long error_code)
+{
+ int ret;
+ exception_enter(regs);
+ ret = __do_page_fault(regs, address, error_code);
+ exception_exit(regs);
+ return ret;
+}
+
/*
* bad_page_fault is called when we have a bad access from the kernel.
* It is called from the DSI and ISI handlers in head.S and from some
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 1b6e127..360fba8 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -56,6 +56,7 @@
#include <asm/fadump.h>
#include <asm/firmware.h>
#include <asm/tm.h>
+#include <asm/context_tracking.h>

#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
@@ -1084,6 +1085,18 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
}
EXPORT_SYMBOL_GPL(hash_page);

+int hash_page_ct(unsigned long ea, unsigned long access,
+ unsigned long trap, struct pt_regs *regs)
+{
+ int ret;
+
+ exception_enter(regs);
+ ret = hash_page(ea, access, trap);
+ exception_exit(regs);
+
+ return ret;
+}
+
void hash_preload(struct mm_struct *mm, unsigned long ea,
unsigned long access, unsigned long trap)
{
@@ -1210,6 +1223,8 @@ void flush_hash_range(unsigned long number, int local)
*/
void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
{
+ exception_enter(regs);
+
if (user_mode(regs)) {
#ifdef CONFIG_PPC_SUBPAGE_PROT
if (rc == -2)
@@ -1219,6 +1234,8 @@ void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
_exception(SIGBUS, regs, BUS_ADRERR, address);
} else
bad_page_fault(regs, address, SIGBUS);
+
+ exception_exit(regs);
}

#ifdef CONFIG_DEBUG_PAGEALLOC
--
1.7.9.5

2013-03-29 10:01:10

by Li Zhong

Subject: [RFC PATCH v2 6/6] powerpc: Use generic code for exception handling

After exception handling moved to generic code, with the changes in the
following two commits:
56dd9470d7c8734f055da2a6bac553caf4a468eb
context_tracking: Move exception handling to generic code
6c1e0256fad84a843d915414e4b5973b7443d48d
context_tracking: Restore correct previous context state on exception exit

this patch can replace the implementation in arch code
with the generic code from the above commits.
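The key difference from the v2 arch hooks is that the generic `exception_enter()` returns the context state it displaced, so nested exceptions restore the right state on exit. A user-space sketch of those semantics (names such as `ctx_state_model`/`cpu_ctx` are illustrative, not the kernel's):

```c
/* Model of the generic API's prev_state semantics. */
enum ctx_state_model { CTX_KERNEL = 0, CTX_USER };

static enum ctx_state_model cpu_ctx = CTX_USER;

/* Returns the state it displaced, so a nested exception can restore it. */
static enum ctx_state_model exception_enter(void)
{
	enum ctx_state_model prev = cpu_ctx;
	cpu_ctx = CTX_KERNEL;
	return prev;
}

static void exception_exit(enum ctx_state_model prev)
{
	cpu_ctx = prev;		/* restore whatever context was interrupted */
}
```

With this, an exception taken while already in kernel context exits back to kernel context, rather than unconditionally checking `user_mode(regs)` as the v2 hooks did.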

Signed-off-by: Li Zhong <[email protected]>
---
arch/powerpc/include/asm/context_tracking.h | 29 ---------------
arch/powerpc/kernel/exceptions-64s.S | 4 +--
arch/powerpc/kernel/traps.c | 42 +++++++++++++---------
arch/powerpc/mm/fault.c | 7 ++--
arch/powerpc/mm/hash_utils_64.c | 51 ++++++++++++++-------------
5 files changed, 57 insertions(+), 76 deletions(-)

diff --git a/arch/powerpc/include/asm/context_tracking.h b/arch/powerpc/include/asm/context_tracking.h
index 4da287e..b6f5a33 100644
--- a/arch/powerpc/include/asm/context_tracking.h
+++ b/arch/powerpc/include/asm/context_tracking.h
@@ -1,39 +1,10 @@
#ifndef _ASM_POWERPC_CONTEXT_TRACKING_H
#define _ASM_POWERPC_CONTEXT_TRACKING_H

-#ifndef __ASSEMBLY__
-#include <linux/context_tracking.h>
-#include <asm/ptrace.h>
-
-/*
- * temporarily defined to avoid potential conflicts with the common
- * implementation, these will be removed by a later patch after the common
- * code enters powerpc tree
- */
-#define exception_enter __exception_enter
-#define exception_exit __exception_exit
-
-static inline void __exception_enter(struct pt_regs *regs)
-{
- user_exit();
-}
-
-static inline void __exception_exit(struct pt_regs *regs)
-{
-#ifdef CONFIG_CONTEXT_TRACKING
- if (user_mode(regs))
- user_enter();
-#endif
-}
-
-#else /* __ASSEMBLY__ */
-
#ifdef CONFIG_CONTEXT_TRACKING
#define SCHEDULE_USER bl .schedule_user
#else
#define SCHEDULE_USER bl .schedule
#endif

-#endif /* !__ASSEMBLY__ */
-
#endif
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 6d82f4f..a8a5361 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1368,17 +1368,15 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_SLB)
rlwimi r4,r0,32-13,30,30 /* becomes _PAGE_USER access bit */
ori r4,r4,1 /* add _PAGE_PRESENT */
rlwimi r4,r5,22+2,31-2,31-2 /* Set _PAGE_EXEC if trap is 0x400 */
- addi r6,r1,STACK_FRAME_OVERHEAD

/*
* r3 contains the faulting address
* r4 contains the required access permissions
* r5 contains the trap number
- * r6 contains the address of pt_regs
*
* at return r3 = 0 for success, 1 for page fault, negative for error
*/
- bl .hash_page_ct /* build HPTE if possible */
+ bl .hash_page /* build HPTE if possible */
cmpdi r3,0 /* see if hash_page succeeded */

/* Success */
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 6228b6b..1b46c2d9 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -35,6 +35,7 @@
#include <linux/kdebug.h>
#include <linux/debugfs.h>
#include <linux/ratelimit.h>
+#include <linux/context_tracking.h>

#include <asm/emulated_ops.h>
#include <asm/pgtable.h>
@@ -60,7 +61,6 @@
#include <asm/switch_to.h>
#include <asm/tm.h>
#include <asm/debug.h>
-#include <asm/context_tracking.h>

#if defined(CONFIG_DEBUGGER) || defined(CONFIG_KEXEC)
int (*__debugger)(struct pt_regs *regs) __read_mostly;
@@ -669,8 +669,9 @@ int machine_check_generic(struct pt_regs *regs)
void machine_check_exception(struct pt_regs *regs)
{
int recover = 0;
+ enum ctx_state prev_state;

- exception_enter(regs);
+ prev_state = exception_enter();

__get_cpu_var(irq_stat).mce_exceptions++;

@@ -712,7 +713,7 @@ void machine_check_exception(struct pt_regs *regs)
panic("Unrecoverable Machine check");

exit:
- exception_exit(regs);
+ exception_exit(prev_state);
}

void SMIException(struct pt_regs *regs)
@@ -722,19 +723,21 @@ void SMIException(struct pt_regs *regs)

void unknown_exception(struct pt_regs *regs)
{
- exception_enter(regs);
+ enum ctx_state prev_state;
+ prev_state = exception_enter();

printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",
regs->nip, regs->msr, regs->trap);

_exception(SIGTRAP, regs, 0, 0);

- exception_exit(regs);
+ exception_exit(prev_state);
}

void instruction_breakpoint_exception(struct pt_regs *regs)
{
- exception_enter(regs);
+ enum ctx_state prev_state;
+ prev_state = exception_enter();

if (notify_die(DIE_IABR_MATCH, "iabr_match", regs, 5,
5, SIGTRAP) == NOTIFY_STOP)
@@ -744,7 +747,7 @@ void instruction_breakpoint_exception(struct pt_regs *regs)
_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);

exit:
- exception_exit(regs);
+ exception_exit(prev_state);
}

void RunModeException(struct pt_regs *regs)
@@ -754,7 +757,8 @@ void RunModeException(struct pt_regs *regs)

void __kprobes single_step_exception(struct pt_regs *regs)
{
- exception_enter(regs);
+ enum ctx_state prev_state;
+ prev_state = exception_enter();

clear_single_step(regs);

@@ -767,7 +771,7 @@ void __kprobes single_step_exception(struct pt_regs *regs)
_exception(SIGTRAP, regs, TRAP_TRACE, regs->nip);

exit:
- exception_exit(regs);
+ exception_exit(prev_state);
}

/*
@@ -1020,9 +1024,10 @@ int is_valid_bugaddr(unsigned long addr)
void __kprobes program_check_exception(struct pt_regs *regs)
{
unsigned int reason = get_reason(regs);
+ enum ctx_state prev_state;
extern int do_mathemu(struct pt_regs *regs);

- exception_enter(regs);
+ prev_state = exception_enter();

/* We can now get here via a FP Unavailable exception if the core
* has no FPU, in that case the reason flags will be 0 */
@@ -1132,14 +1137,15 @@ void __kprobes program_check_exception(struct pt_regs *regs)
_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);

exit:
- exception_exit(regs);
+ exception_exit(prev_state);
}

void alignment_exception(struct pt_regs *regs)
{
int sig, code, fixed = 0;
+ enum ctx_state prev_state;

- exception_enter(regs);
+ prev_state = exception_enter();

/* We restore the interrupt state now */
if (!arch_irq_disabled_regs(regs))
@@ -1169,7 +1175,7 @@ void alignment_exception(struct pt_regs *regs)
bad_page_fault(regs, regs->dar, sig);

exit:
- exception_exit(regs);
+ exception_exit(prev_state);
}

void StackOverflow(struct pt_regs *regs)
@@ -1198,18 +1204,20 @@ void trace_syscall(struct pt_regs *regs)

void kernel_fp_unavailable_exception(struct pt_regs *regs)
{
- exception_enter(regs);
+ enum ctx_state prev_state;
+ prev_state = exception_enter();

printk(KERN_EMERG "Unrecoverable FP Unavailable Exception "
"%lx at %lx\n", regs->trap, regs->nip);
die("Unrecoverable FP Unavailable Exception", regs, SIGABRT);

- exception_exit(regs);
+ exception_exit(prev_state);
}

void altivec_unavailable_exception(struct pt_regs *regs)
{
- exception_enter(regs);
+ enum ctx_state prev_state;
+ prev_state = exception_enter();

if (user_mode(regs)) {
/* A user program has executed an altivec instruction,
@@ -1223,7 +1231,7 @@ void altivec_unavailable_exception(struct pt_regs *regs)
die("Unrecoverable VMX/Altivec Unavailable Exception", regs, SIGABRT);

exit:
- exception_exit(regs);
+ exception_exit(prev_state);
}

void vsx_unavailable_exception(struct pt_regs *regs)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 108ab17..141835b 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -32,6 +32,7 @@
#include <linux/perf_event.h>
#include <linux/magic.h>
#include <linux/ratelimit.h>
+#include <linux/context_tracking.h>

#include <asm/firmware.h>
#include <asm/page.h>
@@ -42,7 +43,6 @@
#include <asm/tlbflush.h>
#include <asm/siginfo.h>
#include <asm/debug.h>
-#include <asm/context_tracking.h>
#include <mm/mmu_decl.h>

#include "icswx.h"
@@ -480,9 +480,10 @@ int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address,
unsigned long error_code)
{
int ret;
- exception_enter(regs);
+ enum ctx_state prev_state;
+ prev_state = exception_enter();
ret = __do_page_fault(regs, address, error_code);
- exception_exit(regs);
+ exception_exit(prev_state);
return ret;
}

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 360fba8..eeab30f 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -33,6 +33,7 @@
#include <linux/init.h>
#include <linux/signal.h>
#include <linux/memblock.h>
+#include <linux/context_tracking.h>

#include <asm/processor.h>
#include <asm/pgtable.h>
@@ -56,7 +57,6 @@
#include <asm/fadump.h>
#include <asm/firmware.h>
#include <asm/tm.h>
-#include <asm/context_tracking.h>

#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
@@ -919,13 +919,17 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
const struct cpumask *tmp;
int rc, user_region = 0, local = 0;
int psize, ssize;
+ enum ctx_state prev_state;
+
+ prev_state = exception_enter();

DBG_LOW("hash_page(ea=%016lx, access=%lx, trap=%lx\n",
ea, access, trap);

if ((ea & ~REGION_MASK) >= PGTABLE_RANGE) {
DBG_LOW(" out of pgtable range !\n");
- return 1;
+ rc = 1;
+ goto exit;
}

/* Get region & vsid */
@@ -935,7 +939,8 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
mm = current->mm;
if (! mm) {
DBG_LOW(" user region with no mm !\n");
- return 1;
+ rc = 1;
+ goto exit;
}
psize = get_slice_psize(mm, ea);
ssize = user_segment_size(ea);
@@ -954,14 +959,17 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
/* Not a valid range
* Send the problem up to do_page_fault
*/
- return 1;
+ rc = 1;
+ goto exit;
}
DBG_LOW(" mm=%p, mm->pgdir=%p, vsid=%016lx\n", mm, mm->pgd, vsid);

/* Get pgdir */
pgdir = mm->pgd;
- if (pgdir == NULL)
- return 1;
+ if (pgdir == NULL) {
+ rc = 1;
+ goto exit;
+ }

/* Check CPU locality */
tmp = cpumask_of(smp_processor_id());
@@ -984,7 +992,8 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
ptep = find_linux_pte_or_hugepte(pgdir, ea, &hugeshift);
if (ptep == NULL || !pte_present(*ptep)) {
DBG_LOW(" no PTE !\n");
- return 1;
+ rc = 1;
+ goto exit;
}

/* Add _PAGE_PRESENT to the required access perm */
@@ -995,13 +1004,16 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
*/
if (access & ~pte_val(*ptep)) {
DBG_LOW(" no access !\n");
- return 1;
+ rc = 1;
+ goto exit;
}

#ifdef CONFIG_HUGETLB_PAGE
- if (hugeshift)
- return __hash_page_huge(ea, access, vsid, ptep, trap, local,
+ if (hugeshift) {
+ rc = __hash_page_huge(ea, access, vsid, ptep, trap, local,
ssize, hugeshift, psize);
+ goto exit;
+ }
#endif /* CONFIG_HUGETLB_PAGE */

#ifndef CONFIG_PPC_64K_PAGES
@@ -1081,22 +1093,12 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
pte_val(*(ptep + PTRS_PER_PTE)));
#endif
DBG_LOW(" -> rc=%d\n", rc);
+exit:
+ exception_exit(prev_state);
return rc;
}
EXPORT_SYMBOL_GPL(hash_page);

-int hash_page_ct(unsigned long ea, unsigned long access,
- unsigned long trap, struct pt_regs *regs)
-{
- int ret;
-
- exception_enter(regs);
- ret = hash_page(ea, access, trap);
- exception_exit(regs);
-
- return ret;
-}
-
void hash_preload(struct mm_struct *mm, unsigned long ea,
unsigned long access, unsigned long trap)
{
@@ -1223,7 +1225,8 @@ void flush_hash_range(unsigned long number, int local)
*/
void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
{
- exception_enter(regs);
+ enum ctx_state prev_state;
+ prev_state = exception_enter();

if (user_mode(regs)) {
#ifdef CONFIG_PPC_SUBPAGE_PROT
@@ -1235,7 +1238,7 @@ void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
} else
bad_page_fault(regs, address, SIGBUS);

- exception_exit(regs);
+ exception_exit(prev_state);
}

#ifdef CONFIG_DEBUG_PAGEALLOC
--
1.7.9.5

2013-03-29 10:01:06

by Li Zhong

Subject: [RFC PATCH v2 3/6] powerpc: Exit user context on notify resume

This patch allows RCU usage in do_notify_resume, e.g. for signal handling.
It corresponds to
[PATCH] x86: Exit RCU extended QS on notify resume
commit edf55fda35c7dc7f2d9241c3abaddaf759b457c6

Signed-off-by: Li Zhong <[email protected]>
---
arch/powerpc/kernel/signal.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index cf12eae..d63b502 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -13,6 +13,7 @@
#include <linux/signal.h>
#include <linux/uprobes.h>
#include <linux/key.h>
+#include <linux/context_tracking.h>
#include <asm/hw_breakpoint.h>
#include <asm/uaccess.h>
#include <asm/unistd.h>
@@ -159,6 +160,8 @@ static int do_signal(struct pt_regs *regs)

void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
{
+ user_exit();
+
if (thread_info_flags & _TIF_UPROBE)
uprobe_notify_resume(regs);

@@ -169,4 +172,6 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
clear_thread_flag(TIF_NOTIFY_RESUME);
tracehook_notify_resume(regs);
}
+
+ user_enter();
}
--
1.7.9.5

2013-03-29 10:01:05

by Li Zhong

Subject: [RFC PATCH v2 1/6] powerpc: Syscall hooks for context tracking subsystem

These are the syscall slow-path hooks for the context tracking subsystem,
corresponding to
[PATCH] x86: Syscall hooks for userspace RCU extended QS
commit bf5a3c13b939813d28ce26c01425054c740d6731

TIF_MEMDIE is moved to the upper 16 bits (with value 17), as there seems to be
no asm code using it. TIF_NOHZ is added to _TIF_SYSCALL_T_OR_A, so it is
better to keep it in the same 16 bits as the others in the group; that way, an
andi. against the whole group still works in the asm code.
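The constraint behind this reshuffling: the powerpc `andi.` instruction takes only a 16-bit unsigned immediate, so any flag group tested that way from asm must fit entirely in the low 16 bits. A sketch with the bit numbers from the patch below (TIF_SYSCALL_TRACE assumed at bit 0, as in mainline):

```c
#define TIF_SYSCALL_TRACE       0
#define TIF_SYSCALL_AUDIT       7
#define TIF_NOHZ                9   /* takes the slot vacated by TIF_MEMDIE */
#define TIF_SECCOMP             10
#define TIF_SYSCALL_TRACEPOINT  15
#define TIF_MEMDIE              17  /* moved above bit 15; no asm code tests it */

/* The whole group must be reachable by a single andi. immediate. */
#define _TIF_SYSCALL_T_OR_A \
	((1u << TIF_SYSCALL_TRACE) | (1u << TIF_SYSCALL_AUDIT) | \
	 (1u << TIF_SECCOMP) | (1u << TIF_SYSCALL_TRACEPOINT) | \
	 (1u << TIF_NOHZ))
```

Had TIF_NOHZ landed above bit 15 instead, the single-instruction `andi.` test against `_TIF_SYSCALL_T_OR_A` in the syscall entry path would silently miss it.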

Signed-off-by: Li Zhong <[email protected]>
Acked-by: Frederic Weisbecker <[email protected]>
---
arch/powerpc/include/asm/thread_info.h | 7 +++++--
arch/powerpc/kernel/ptrace.c | 5 +++++
2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 406b7b9..414a261 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -97,7 +97,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_PERFMON_CTXSW 6 /* perfmon needs ctxsw calls */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
#define TIF_SINGLESTEP 8 /* singlestepping active */
-#define TIF_MEMDIE 9 /* is terminating due to OOM killer */
+#define TIF_NOHZ 9 /* in adaptive nohz mode */
#define TIF_SECCOMP 10 /* secure computing */
#define TIF_RESTOREALL 11 /* Restore all regs (implies NOERROR) */
#define TIF_NOERROR 12 /* Force successful syscall return */
@@ -106,6 +106,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SYSCALL_TRACEPOINT 15 /* syscall tracepoint instrumentation */
#define TIF_EMULATE_STACK_STORE 16 /* Is an instruction emulation
for stack store? */
+#define TIF_MEMDIE 17 /* is terminating due to OOM killer */

/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
@@ -124,8 +125,10 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_UPROBE (1<<TIF_UPROBE)
#define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT)
#define _TIF_EMULATE_STACK_STORE (1<<TIF_EMULATE_STACK_STORE)
+#define _TIF_NOHZ (1<<TIF_NOHZ)
#define _TIF_SYSCALL_T_OR_A (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
- _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT)
+ _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT | \
+ _TIF_NOHZ)

#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
_TIF_NOTIFY_RESUME | _TIF_UPROBE)
diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
index 245c1b6..0b7aad0 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
@@ -32,6 +32,7 @@
#include <trace/syscall.h>
#include <linux/hw_breakpoint.h>
#include <linux/perf_event.h>
+#include <linux/context_tracking.h>

#include <asm/uaccess.h>
#include <asm/page.h>
@@ -1778,6 +1779,8 @@ long do_syscall_trace_enter(struct pt_regs *regs)
{
long ret = 0;

+ user_exit();
+
secure_computing_strict(regs->gpr[0]);

if (test_thread_flag(TIF_SYSCALL_TRACE) &&
@@ -1822,4 +1825,6 @@ void do_syscall_trace_leave(struct pt_regs *regs)
step = test_thread_flag(TIF_SINGLESTEP);
if (step || test_thread_flag(TIF_SYSCALL_TRACE))
tracehook_report_syscall_exit(regs, step);
+
+ user_enter();
}
--
1.7.9.5

2013-03-29 10:01:39

by Li Zhong

Subject: [RFC PATCH v2 5/6] powerpc: select HAVE_CONTEXT_TRACKING for pSeries

Start context tracking support with pSeries.

Signed-off-by: Li Zhong <[email protected]>
---
arch/powerpc/platforms/pseries/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 9a0941b..023b288 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -18,6 +18,7 @@ config PPC_PSERIES
select PPC_PCI_CHOICE if EXPERT
select ZLIB_DEFLATE
select PPC_DOORBELL
+ select HAVE_CONTEXT_TRACKING
default y

config PPC_SPLPAR
--
1.7.9.5

2013-03-29 10:02:01

by Li Zhong

Subject: [RFC PATCH v2 4/6] powerpc: Use the new schedule_user API on userspace preemption

This patch corresponds to
[PATCH] x86: Use the new schedule_user API on userspace preemption
commit 0430499ce9d78691f3985962021b16bf8f8a8048

Signed-off-by: Li Zhong <[email protected]>
---
arch/powerpc/include/asm/context_tracking.h | 11 +++++++++++
arch/powerpc/kernel/entry_64.S | 3 ++-
2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/context_tracking.h b/arch/powerpc/include/asm/context_tracking.h
index 377146e..4da287e 100644
--- a/arch/powerpc/include/asm/context_tracking.h
+++ b/arch/powerpc/include/asm/context_tracking.h
@@ -1,6 +1,7 @@
#ifndef _ASM_POWERPC_CONTEXT_TRACKING_H
#define _ASM_POWERPC_CONTEXT_TRACKING_H

+#ifndef __ASSEMBLY__
#include <linux/context_tracking.h>
#include <asm/ptrace.h>

@@ -25,4 +26,14 @@ static inline void __exception_exit(struct pt_regs *regs)
#endif
}

+#else /* __ASSEMBLY__ */
+
+#ifdef CONFIG_CONTEXT_TRACKING
+#define SCHEDULE_USER bl .schedule_user
+#else
+#define SCHEDULE_USER bl .schedule
+#endif
+
+#endif /* !__ASSEMBLY__ */
+
#endif
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 256c5bf..f7e4622 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -33,6 +33,7 @@
#include <asm/irqflags.h>
#include <asm/ftrace.h>
#include <asm/hw_irq.h>
+#include <asm/context_tracking.h>

/*
* System calls.
@@ -618,7 +619,7 @@ _GLOBAL(ret_from_except_lite)
andi. r0,r4,_TIF_NEED_RESCHED
beq 1f
bl .restore_interrupts
- bl .schedule
+ SCHEDULE_USER
b .ret_from_except_lite

1: bl .save_nvgprs
--
1.7.9.5

2013-04-05 03:14:18

by Paul Mackerras

Subject: Re: [RFC PATCH v2 2/6] powerpc: Exception hooks for context tracking subsystem

On Fri, Mar 29, 2013 at 06:00:17PM +0800, Li Zhong wrote:
> This is the exception hooks for context tracking subsystem, including
> data access, program check, single step, instruction breakpoint, machine check,
> alignment, fp unavailable, altivec assist, unknown exception, whose handlers
> might use RCU.
>
> This patch corresponds to
> [PATCH] x86: Exception hooks for userspace RCU extended QS
> commit 6ba3c97a38803883c2eee489505796cb0a727122
>
> Signed-off-by: Li Zhong <[email protected]>

Is there a reason why you didn't put the exception_exit() call in
ret_from_except_lite in entry_64.S, and the exception_entry() call in
EXCEPTION_PROLOG_COMMON? That would seem to catch all these cases in
a more centralized place.

Also, I notice that with the exception_exit calls where they are, we
can still deliver signals (thus possibly taking a page fault) or call
schedule() for preemption after the exception_exit() call. Is that
OK, or is it a potential problem?

Paul.

2013-04-08 09:04:10

by Li Zhong

Subject: Re: [RFC PATCH v2 2/6] powerpc: Exception hooks for context tracking subsystem

On Fri, 2013-04-05 at 13:50 +1100, Paul Mackerras wrote:
> On Fri, Mar 29, 2013 at 06:00:17PM +0800, Li Zhong wrote:
> > This is the exception hooks for context tracking subsystem, including
> > data access, program check, single step, instruction breakpoint, machine check,
> > alignment, fp unavailable, altivec assist, unknown exception, whose handlers
> > might use RCU.
> >
> > This patch corresponds to
> > [PATCH] x86: Exception hooks for userspace RCU extended QS
> > commit 6ba3c97a38803883c2eee489505796cb0a727122
> >
> > Signed-off-by: Li Zhong <[email protected]>


Hi Paul,

Thanks for your review! Please check my answers below, and correct me if
there are any errors.

> Is there a reason why you didn't put the exception_exit() call in
> ret_from_except_lite in entry_64.S, and the exception_entry() call in
> EXCEPTION_PROLOG_COMMON? That would seem to catch all these cases in
> a more centralized place.

It seems to me that ret_from_except_lite and EXCEPTION_PROLOG_COMMON are
also used by interrupts, where I think we don't need the hooks. So hooking
at the higher level helps avoid adding overhead to those code paths
(interrupts, and some exit paths of syscalls).

And I think adding the hooks in higher level code is a little easier to
read and check. Some exceptions don't use EXCEPTION_PROLOG_COMMON, and
some don't take the ret_from_except_lite exit path (e.g. fp unavailable
might go directly to fast_exception_return). Maybe fast_exception_return
would be a centralized place for us to return to user space? But it would
still add some overhead that is not necessarily needed.

And it also keeps the implementation here consistent with the style that
x86 uses.
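
For illustration, the placement described above — hooks in the individual
C-level exception handlers rather than in the common assembly entry/exit
paths — can be sketched as a simplified userspace model. The names loosely
follow the context tracking API, but this is not the actual kernel
implementation:

```c
#include <assert.h>

/* Simplified model of the context tracking state. */
enum ctx_state { IN_KERNEL = 0, IN_USER = 1 };

static enum ctx_state cur_state = IN_USER;

/* Mark the exit from user mode, as the entry hook would. */
static void exception_enter(void)
{
	cur_state = IN_KERNEL;
}

/* Mark the return to user context before the handler returns. */
static void exception_exit(void)
{
	cur_state = IN_USER;
}

/* The hook pair lives in the high-level handler, not in
 * EXCEPTION_PROLOG_COMMON, so plain interrupts that share the
 * common prolog don't pay the cost. */
static void do_program_check(void)
{
	exception_enter();
	/* ... actual exception handling; may safely use RCU here ... */
	exception_exit();
}
```

This is only a sketch of where the hooks sit; the real handlers take a
struct pt_regs argument and only switch state for exceptions taken from
user mode.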

> Also, I notice that with the exception_exit calls where they are, we
> can still deliver signals (thus possibly taking a page fault) or call
> schedule() for preemption after the exception_exit() call. Is that
> OK, or is it a potential problem?

If I understand correctly, you are talking about the cases where we might
return to user space without the context state correctly being set to
"in user"?

There is user_enter() called in do_notify_resume() in patch #3, so after
handling the signals we always call user_enter().

There are also some changes to the context_tracking code from Frederic
which might be related (they are now in the tip tree; the URL of the
patches, for your convenience: https://lkml.org/lkml/2013/3/1/266 ):

6c1e0256fad84a843d915414e4b5973b7443d48d
context_tracking: Restore correct previous context state on exception exit

With this patch, if a later exception happens after user_enter(), before
the CPU actually returns to user space, the correct context state
(in user) is saved and restored when handling the later exception.

Patch #6 converts the code to use these new APIs, which are currently not
available in the powerpc tree.

b22366cd54c6fe05db426f20adb10f461c19ec06
context_tracking: Restore preempted context state after preempt_schedule_irq

With this patch, the user context state can be correctly restored after
schedule() returns.
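
The effect of Frederic's change can be modelled roughly like this — a
hypothetical userspace sketch, with names following the generic context
tracking API rather than the actual kernel code. The key point is that
exception_enter() returns the previous state and exception_exit() restores
it, so a nested exception taken after user_enter() does not clobber the
"in user" state:

```c
#include <assert.h>

enum ctx_state { CONTEXT_KERNEL = 0, CONTEXT_USER = 1 };

static enum ctx_state cur_state = CONTEXT_USER;

/* Save the state we interrupted and switch to kernel context. */
static enum ctx_state exception_enter(void)
{
	enum ctx_state prev = cur_state;

	cur_state = CONTEXT_KERNEL;
	return prev;
}

/* Restore whatever state we interrupted, user or kernel. */
static void exception_exit(enum ctx_state prev)
{
	cur_state = prev;
}

/* A late exception taken after user_enter(), before the CPU actually
 * returns to user space: the save/restore pair preserves the
 * "in user" state instead of overwriting it with "in kernel". */
static void nested_exception(void)
{
	enum ctx_state prev = exception_enter();

	/* ... handle the exception ... */
	exception_exit(prev);
}
```

So after user_enter(), a further exception and its exit leave the tracked
state exactly as it was, which is what makes the hook placement safe.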

Thanks, Zhong

> Paul.
>

2013-04-10 04:57:05

by Michael Ellerman

Subject: Re: [RFC PATCH v2 6/6] powerpc: Use generic code for exception handling

On Fri, Mar 29, 2013 at 06:00:21PM +0800, Li Zhong wrote:
> After the exception handling moved to generic code, and some changes in
...
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index 360fba8..eeab30f 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -33,6 +33,7 @@
> #include <linux/init.h>
> #include <linux/signal.h>
> #include <linux/memblock.h>
> +#include <linux/context_tracking.h>
>
> #include <asm/processor.h>
> #include <asm/pgtable.h>
> @@ -56,7 +57,6 @@
> #include <asm/fadump.h>
> #include <asm/firmware.h>
> #include <asm/tm.h>
> -#include <asm/context_tracking.h>
>
> #ifdef DEBUG
> #define DBG(fmt...) udbg_printf(fmt)
> @@ -919,13 +919,17 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
> const struct cpumask *tmp;
> int rc, user_region = 0, local = 0;
> int psize, ssize;
> + enum ctx_state prev_state;
> +
> + prev_state = exception_enter();
>
> DBG_LOW("hash_page(ea=%016lx, access=%lx, trap=%lx\n",
> ea, access, trap);
>
> if ((ea & ~REGION_MASK) >= PGTABLE_RANGE) {
> DBG_LOW(" out of pgtable range !\n");
> - return 1;
> + rc = 1;
> + goto exit;
> }
>
> /* Get region & vsid */

This no longer applies on mainline, please send an updated version.

cheers
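
[Editorial note: the conversion in the quoted hunk — replacing early
returns with a goto to a single exit label so that exception_exit() runs
on every path — can be sketched as a hypothetical userspace model. The
function and state names below are illustrative, not the kernel code:]

```c
#include <assert.h>

enum ctx_state { CONTEXT_KERNEL = 0, CONTEXT_USER = 1 };

static enum ctx_state cur_state = CONTEXT_USER;

static enum ctx_state exception_enter(void)
{
	enum ctx_state prev = cur_state;

	cur_state = CONTEXT_KERNEL;
	return prev;
}

static void exception_exit(enum ctx_state prev)
{
	cur_state = prev;
}

/* Model of the hash_page() conversion: every return path funnels
 * through the exit label, so exception_exit() always pairs with the
 * exception_enter() at the top, even for the early-error case. */
static int hash_page_model(int out_of_range)
{
	enum ctx_state prev_state = exception_enter();
	int rc = 0;

	if (out_of_range) {
		rc = 1;		/* was: return 1; */
		goto exit;
	}
	/* ... normal hashing work ... */
exit:
	exception_exit(prev_state);
	return rc;
}
```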

2013-04-10 05:33:00

by Li Zhong

Subject: Re: [RFC PATCH v2 6/6] powerpc: Use generic code for exception handling

On Wed, 2013-04-10 at 14:56 +1000, Michael Ellerman wrote:
> On Fri, Mar 29, 2013 at 06:00:21PM +0800, Li Zhong wrote:
> > After the exception handling moved to generic code, and some changes in
> ...
> > diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> > index 360fba8..eeab30f 100644
> > --- a/arch/powerpc/mm/hash_utils_64.c
> > +++ b/arch/powerpc/mm/hash_utils_64.c
> > @@ -33,6 +33,7 @@
> > #include <linux/init.h>
> > #include <linux/signal.h>
> > #include <linux/memblock.h>
> > +#include <linux/context_tracking.h>
> >
> > #include <asm/processor.h>
> > #include <asm/pgtable.h>
> > @@ -56,7 +57,6 @@
> > #include <asm/fadump.h>
> > #include <asm/firmware.h>
> > #include <asm/tm.h>
> > -#include <asm/context_tracking.h>
> >
> > #ifdef DEBUG
> > #define DBG(fmt...) udbg_printf(fmt)
> > @@ -919,13 +919,17 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
> > const struct cpumask *tmp;
> > int rc, user_region = 0, local = 0;
> > int psize, ssize;
> > + enum ctx_state prev_state;
> > +
> > + prev_state = exception_enter();
> >
> > DBG_LOW("hash_page(ea=%016lx, access=%lx, trap=%lx\n",
> > ea, access, trap);
> >
> > if ((ea & ~REGION_MASK) >= PGTABLE_RANGE) {
> > DBG_LOW(" out of pgtable range !\n");
> > - return 1;
> > + rc = 1;
> > + goto exit;
> > }
> >
> > /* Get region & vsid */
>
> This no longer applies on mainline, please send an updated version.

Yes, for current mainline (the powerpc tree), only the first five patches
can be applied. The dependency of this patch is currently in the tip
tree, and seems likely to land in 3.10.

There are some more details in the cover letter (#0):

"I assume these patches would get in through powerpc tree, so I didn't
combine the new patch (#6) with the original one (#2). So that if
powerpc tree picks these, it could pick the first five patches, and
apply patch #6 later when the dependency enters into powerpc tree (maybe
on some 3.10-rcs)."

Thanks, Zhong

> cheers
>

2013-04-10 05:56:26

by Li Zhong

Subject: Re: [RFC PATCH v2 6/6] powerpc: Use generic code for exception handling

On Wed, 2013-04-10 at 13:32 +0800, Li Zhong wrote:
> On Wed, 2013-04-10 at 14:56 +1000, Michael Ellerman wrote:
> > On Fri, Mar 29, 2013 at 06:00:21PM +0800, Li Zhong wrote:
> > > After the exception handling moved to generic code, and some changes in
> > ...
> > > diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> > > index 360fba8..eeab30f 100644
> > > --- a/arch/powerpc/mm/hash_utils_64.c
> > > +++ b/arch/powerpc/mm/hash_utils_64.c
> > > @@ -33,6 +33,7 @@
> > > #include <linux/init.h>
> > > #include <linux/signal.h>
> > > #include <linux/memblock.h>
> > > +#include <linux/context_tracking.h>
> > >
> > > #include <asm/processor.h>
> > > #include <asm/pgtable.h>
> > > @@ -56,7 +57,6 @@
> > > #include <asm/fadump.h>
> > > #include <asm/firmware.h>
> > > #include <asm/tm.h>
> > > -#include <asm/context_tracking.h>
> > >
> > > #ifdef DEBUG
> > > #define DBG(fmt...) udbg_printf(fmt)
> > > @@ -919,13 +919,17 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
> > > const struct cpumask *tmp;
> > > int rc, user_region = 0, local = 0;
> > > int psize, ssize;
> > > + enum ctx_state prev_state;
> > > +
> > > + prev_state = exception_enter();
> > >
> > > DBG_LOW("hash_page(ea=%016lx, access=%lx, trap=%lx\n",
> > > ea, access, trap);
> > >
> > > if ((ea & ~REGION_MASK) >= PGTABLE_RANGE) {
> > > DBG_LOW(" out of pgtable range !\n");
> > > - return 1;
> > > + rc = 1;
> > > + goto exit;
> > > }
> > >
> > > /* Get region & vsid */
> >
> > This no longer applies on mainline, please send an updated version.
>
> Yes, for current mainline (powerpc tree), only previous five patches
> could be applied. The dependency of this patch is current in tip tree,
> and seems would be in for 3.10.
>
> There are some more details in the cover letter (#0):
>
> "I assume these patches would get in through powerpc tree, so I didn't
> combine the new patch (#6) with the original one (#2). So that if
> powerpc tree picks these, it could pick the first five patches, and
> apply patch #6 later when the dependency enters into powerpc tree (maybe
> on some 3.10-rcs)."

And I will send an updated version of this one when I see the dependency
commits in mainline.

Thanks, Zhong

> Thanks, Zhong
>
> > cheers
> >
>