2013-09-25 16:18:20

by Frederic Weisbecker

Subject: [RFC PATCH 0/7] softirq: Consolidation and stack overrun fix v2

Hi,

So here is a respin after the discussion we had, plus some more
goodies:

* 1st patch is a short term, pure regression fix, with stable tag, etc...
* 2nd patch now also generalizes the softirq_count() check
* 4th improves debugging (I just hope I did not get the !in_interrupt()
assumption in __do_softirq() wrong)
* more comments
* introduction of a longer term solution via a new arch symbol that archs
can define to tell about irq_exit() stack coverage.

Thanks.

Frederic Weisbecker (7):
irq: Force hardirq exit's softirq processing on its own stack
irq: Consolidate do_softirq() arch overridden implementations
irq: Optimize call to softirq on hardirq exit
irq: Improve softirq debugging a bit
irq: Justify the various softirq stack choices
irq: Optimize softirq stack selection in irq exit
x86: Tell about irq stack coverage

arch/metag/kernel/irq.c | 52 ++++++++++++++++--------------------------
arch/parisc/kernel/irq.c | 17 ++------------
arch/powerpc/kernel/irq.c | 17 +-------------
arch/s390/kernel/irq.c | 52 +++++++++++++++++-------------------------
arch/sh/kernel/irq.c | 57 +++++++++++++++++-----------------------------
arch/sparc/kernel/irq_64.c | 31 ++++++++-----------------
arch/x86/include/asm/irq.h | 4 ++++
arch/x86/kernel/entry_64.S | 4 ++--
arch/x86/kernel/irq_32.c | 30 +++++++-----------------
arch/x86/kernel/irq_64.c | 21 -----------------
include/linux/interrupt.h | 11 +++++++++
kernel/softirq.c | 40 ++++++++++++++++++++++++--------
12 files changed, 130 insertions(+), 206 deletions(-)

--
1.8.3.1


2013-09-25 16:18:27

by Frederic Weisbecker

Subject: [PATCH 1/7] irq: Force hardirq exit's softirq processing on its own stack

Commit facd8b80c67a3cf64a467c4a2ac5fb31f2e6745b
("irq: Sanitize invoke_softirq") converted the irq exit
calls of do_softirq() to __do_softirq() on all architectures,
assuming do_softirq() was only used there for its irq
disablement properties.

But as a side effect, the softirqs processed at the end
of the hardirq are now always called on the current stack
used by irq_exit(), instead of the softirq stack provided
by the archs that override do_softirq().

The result is mostly safe if the architecture runs irq_exit()
on a separate irq stack, because then softirqs are processed
on that same stack, which is near empty at this stage (assuming
hardirqs aren't nesting).

Otherwise irq_exit() runs on the task stack, and so does the
softirq. The interrupted call stack can be arbitrarily deep
already, and the softirq can dig through it even further. To add
insult to injury, this softirq can be interrupted by a new hardirq,
maximizing the chances of a stack overrun, as reported on powerpc
for example:

[ 1002.364577] do_IRQ: stack overflow: 1920
[ 1002.364653] CPU: 0 PID: 1602 Comm: qemu-system-ppc Not tainted 3.10.4-300.1.fc19.ppc64p7 #1
[ 1002.364734] Call Trace:
[ 1002.364770] [c0000000050a8740] [c0000000000157c0] .show_stack+0x130/0x200 (unreliable)
[ 1002.364872] [c0000000050a8810] [c00000000082f6d0] .dump_stack+0x28/0x3c
[ 1002.364955] [c0000000050a8880] [c000000000010b98] .do_IRQ+0x2b8/0x2c0
[ 1002.365039] [c0000000050a8930] [c0000000000026d4] hardware_interrupt_common+0x154/0x180
[ 1002.365148] --- Exception: 501 at .cp_start_xmit+0x3a4/0x820 [8139cp]
[ 1002.365148] LR = .cp_start_xmit+0x390/0x820 [8139cp]
[ 1002.365359] [c0000000050a8d40] [c0000000006d7f14] .dev_hard_start_xmit+0x394/0x640
[ 1002.365433] [c0000000050a8e00] [c0000000007028c0] .sch_direct_xmit+0x110/0x260
[ 1002.365499] [c0000000050a8ea0] [c0000000006d8420] .dev_queue_xmit+0x260/0x630
[ 1002.365571] [c0000000050a8f40] [d0000000027d30d4] .br_dev_queue_push_xmit+0xc4/0x130 [bridge]
[ 1002.365641] [c0000000050a8fc0] [d0000000027d01f8] .br_dev_xmit+0x198/0x270 [bridge]
[ 1002.365707] [c0000000050a9070] [c0000000006d7f14] .dev_hard_start_xmit+0x394/0x640
[ 1002.365774] [c0000000050a9130] [c0000000006d85e8] .dev_queue_xmit+0x428/0x630
[ 1002.365834] [c0000000050a91d0] [c000000000729764] .ip_finish_output+0x2a4/0x550
[ 1002.365902] [c0000000050a9290] [c00000000072aaf0] .ip_local_out+0x50/0x70
[ 1002.365960] [c0000000050a9310] [c00000000072aed8] .ip_queue_xmit+0x148/0x420
[ 1002.366018] [c0000000050a93b0] [c000000000749524] .tcp_transmit_skb+0x4e4/0xaf0
[ 1002.366085] [c0000000050a94a0] [c00000000073de9c] .__tcp_ack_snd_check+0x7c/0xf0
[ 1002.366152] [c0000000050a9520] [c0000000007451d8] .tcp_rcv_established+0x1e8/0x930
[ 1002.366217] [c0000000050a95f0] [c00000000075326c] .tcp_v4_do_rcv+0x21c/0x570
[ 1002.366274] [c0000000050a96c0] [c000000000754a44] .tcp_v4_rcv+0x734/0x930
[ 1002.366332] [c0000000050a97a0] [c000000000724144] .ip_local_deliver_finish+0x184/0x360
[ 1002.366398] [c0000000050a9840] [c000000000724468] .ip_rcv_finish+0x148/0x400
[ 1002.366457] [c0000000050a98d0] [c0000000006d3248] .__netif_receive_skb_core+0x4f8/0xb00
[ 1002.366523] [c0000000050a99d0] [c0000000006d5414] .netif_receive_skb+0x44/0x110
[ 1002.366594] [c0000000050a9a70] [d0000000027d4e2c] .br_handle_frame_finish+0x2bc/0x3f0 [bridge]
[ 1002.366674] [c0000000050a9b20] [d0000000027de5ac] .br_nf_pre_routing_finish+0x2ac/0x420 [bridge]
[ 1002.366754] [c0000000050a9bd0] [d0000000027df5ec] .br_nf_pre_routing+0x4dc/0x7d0 [bridge]
[ 1002.366820] [c0000000050a9c70] [c000000000717aa4] .nf_iterate+0x114/0x130
[ 1002.366877] [c0000000050a9d30] [c000000000717b74] .nf_hook_slow+0xb4/0x1e0
[ 1002.366938] [c0000000050a9e00] [d0000000027d51f0] .br_handle_frame+0x290/0x330 [bridge]
[ 1002.367005] [c0000000050a9ea0] [c0000000006d309c] .__netif_receive_skb_core+0x34c/0xb00
[ 1002.367072] [c0000000050a9fa0] [c0000000006d5414] .netif_receive_skb+0x44/0x110
[ 1002.367138] [c0000000050aa040] [c0000000006d6218] .napi_gro_receive+0xe8/0x120
[ 1002.367210] [c0000000050aa0c0] [d00000000208536c] .cp_rx_poll+0x31c/0x590 [8139cp]
[ 1002.367276] [c0000000050aa1d0] [c0000000006d59cc] .net_rx_action+0x1dc/0x310
[ 1002.367335] [c0000000050aa2b0] [c0000000000a0568] .__do_softirq+0x158/0x330
[ 1002.367394] [c0000000050aa3b0] [c0000000000a0978] .irq_exit+0xc8/0x110
[ 1002.367452] [c0000000050aa430] [c0000000000109bc] .do_IRQ+0xdc/0x2c0
[ 1002.367510] [c0000000050aa4e0] [c0000000000026d4] hardware_interrupt_common+0x154/0x180
[ 1002.367580] --- Exception: 501 at .bad_range+0x1c/0x110
[ 1002.367580] LR = .get_page_from_freelist+0x908/0xbb0
[ 1002.367658] [c0000000050aa7d0] [c00000000041d758] .list_del+0x18/0x50 (unreliable)
[ 1002.367725] [c0000000050aa850] [c0000000001bfa98] .get_page_from_freelist+0x908/0xbb0
[ 1002.367792] [c0000000050aa9e0] [c0000000001bff5c] .__alloc_pages_nodemask+0x21c/0xae0
[ 1002.367860] [c0000000050aaba0] [c0000000002126d0] .alloc_pages_vma+0xd0/0x210
[ 1002.367918] [c0000000050aac60] [c0000000001e93f4] .handle_pte_fault+0x814/0xb70
[ 1002.367985] [c0000000050aad50] [c0000000001eade4] .__get_user_pages+0x1a4/0x640
[ 1002.368052] [c0000000050aae60] [c00000000004606c] .get_user_pages_fast+0xec/0x160
[ 1002.368130] [c0000000050aaf10] [d000000001f73930] .__gfn_to_pfn_memslot+0x3b0/0x430 [kvm]
[ 1002.368205] [c0000000050aafd0] [d000000001f7e214] .kvmppc_gfn_to_pfn+0x64/0x130 [kvm]
[ 1002.368280] [c0000000050ab070] [d000000001f8a824] .kvmppc_mmu_map_page+0x94/0x530 [kvm]
[ 1002.368354] [c0000000050ab190] [d000000001f85064] .kvmppc_handle_pagefault+0x174/0x610 [kvm]
[ 1002.368429] [c0000000050ab270] [d000000001f85b74] .kvmppc_handle_exit_pr+0x464/0x9b0 [kvm]
[ 1002.368504] [c0000000050ab320] [d000000001f88ec4] kvm_start_lightweight+0x1ec/0x1fc [kvm]
[ 1002.368578] [c0000000050ab4f0] [d000000001f86a58] .kvmppc_vcpu_run_pr+0x168/0x3b0 [kvm]
[ 1002.368652] [c0000000050ab9c0] [d000000001f7f218] .kvmppc_vcpu_run+0xc8/0xf0 [kvm]
[ 1002.368725] [c0000000050aba50] [d000000001f7bdac] .kvm_arch_vcpu_ioctl_run+0x5c/0x1a0 [kvm]
[ 1002.368797] [c0000000050abae0] [d000000001f74618] .kvm_vcpu_ioctl+0x478/0x730 [kvm]
[ 1002.368865] [c0000000050abc90] [c00000000025302c] .do_vfs_ioctl+0x4ec/0x7c0
[ 1002.368923] [c0000000050abd80] [c0000000002533d4] .SyS_ioctl+0xd4/0xf0
[ 1002.368981] [c0000000050abe30] [c000000000009ed4] syscall_exit+0x0/0x98

Since this is a regression, this patch proposes a minimalistic
and low-risk solution: blindly force the hardirq exit processing of
softirqs onto the softirq stack. This way we should significantly
reduce the opportunities for a task stack overflow dug by softirqs.

Longer term solutions may involve extending the hardirq stack coverage to
irq_exit(), etc...

Reported-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: James Hogan <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Andrew Morton <[email protected]>
---
kernel/softirq.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 53cc09c..d7d498d 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -328,10 +328,19 @@ void irq_enter(void)

static inline void invoke_softirq(void)
{
- if (!force_irqthreads)
- __do_softirq();
- else
+ if (!force_irqthreads) {
+ /*
+ * We can safely execute softirq on the current stack if
+ * it is the irq stack, because it should be near empty
+ * at this stage. But we have no way to know if the arch
+ * calls irq_exit() on the irq stack. So call softirq
+ * in its own stack to prevent from any overrun on top
+ * of a potentially deep task stack.
+ */
+ do_softirq();
+ } else {
wakeup_softirqd();
+ }
}

static inline void tick_irq_exit(void)
--
1.8.3.1

2013-09-25 16:18:32

by Frederic Weisbecker

Subject: [PATCH 6/7] irq: Optimize softirq stack selection in irq exit

If irq_exit() is called on the arch's dedicated irq stack,
it should be safe to run softirqs inline on that same
irq stack, as it is near empty by the time we call irq_exit().

Hardirqs can nest, but irq exit's processing of softirqs only
happens in the innermost hardirq.

So if we use the same stack for both, the worst case scenario is:
hardirq -> softirq -> hardirq. But then the first hardirq can be
ignored as its stack footprint is mostly empty. So the stack merge
in this case looks acceptable.

So let's select irq exit's softirq stack based on a new arch symbol
that archs can define when irq_exit() runs on the irq stack. That way
we can spare a stack switch during irq processing, and all the cache
issues that come with it.

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: James Hogan <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Andrew Morton <[email protected]>
---
kernel/softirq.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 2b4328e..317778c 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -332,15 +332,21 @@ void irq_enter(void)
static inline void invoke_softirq(void)
{
if (!force_irqthreads) {
+#ifdef __ARCH_IRQ_EXIT_ON_IRQ_STACK
/*
* We can safely execute softirq on the current stack if
* it is the irq stack, because it should be near empty
- * at this stage. But we have no way to know if the arch
- * calls irq_exit() on the irq stack. So call softirq
- * in its own stack to prevent from any overrun on top
- * of a potentially deep task stack.
+ * at this stage.
+ */
+ __do_softirq();
+#else
+ /*
+ * Otherwise, irq_exit() is called on the task stack that can
+ * be potentially deep already. So call softirq in its own stack
+ * to prevent from any overrun.
*/
do_softirq_own_stack();
+#endif
} else {
wakeup_softirqd();
}
--
1.8.3.1

2013-09-25 16:18:29

by Frederic Weisbecker

Subject: [PATCH 5/7] irq: Justify the various softirq stack choices

For clarity, comment the various stack choices for softirq
processing, whether we execute softirqs from ksoftirqd or from
local_irq_enable() calls.

The choice made on irq_exit() is already commented.

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: James Hogan <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Andrew Morton <[email protected]>
---
kernel/softirq.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 9f8092b..2b4328e 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -170,8 +170,13 @@ static inline void _local_bh_enable_ip(unsigned long ip)
*/
sub_preempt_count(SOFTIRQ_DISABLE_OFFSET - 1);

- if (unlikely(!in_interrupt() && local_softirq_pending()))
+ if (unlikely(!in_interrupt() && local_softirq_pending())) {
+ /*
+ * Run softirq if any pending. And do it in its own stack
+ * as we may be calling this deep in a task call stack already.
+ */
do_softirq();
+ }

dec_preempt_count();
#ifdef CONFIG_TRACE_IRQFLAGS
@@ -769,6 +774,10 @@ static void run_ksoftirqd(unsigned int cpu)
{
local_irq_disable();
if (local_softirq_pending()) {
+ /*
+ * We can safely run softirq on inline stack, as we are not deep
+ * in the task stack here.
+ */
__do_softirq();
rcu_note_context_switch(cpu);
local_irq_enable();
--
1.8.3.1

2013-09-25 16:18:35

by Frederic Weisbecker

Subject: [PATCH 7/7] x86: Tell about irq stack coverage

x86-64 runs irq_exit() on the irq stack, so it can afford
to run softirqs on hardirq exit without the need to switch
stacks. The hardirq stack is good enough for that.

x86-64 already runs softirqs on the hardirq stack anyway, so what
we mostly skip here is some needless per-cpu refcounting updates.

x86-32 is not concerned because it only runs the irq handler itself
on the irq stack.

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: James Hogan <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Andrew Morton <[email protected]>
---
arch/x86/include/asm/irq.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
index 0ea10f27..879bece 100644
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -23,6 +23,10 @@ extern void irq_ctx_init(int cpu);

#define __ARCH_HAS_DO_SOFTIRQ

+#ifdef CONFIG_X86_64
+# define __ARCH_IRQ_EXIT_ON_IRQ_STACK
+#endif
+
#ifdef CONFIG_HOTPLUG_CPU
#include <linux/cpumask.h>
extern void fixup_irqs(void);
--
1.8.3.1

2013-09-25 16:19:17

by Frederic Weisbecker

Subject: [PATCH 4/7] irq: Improve softirq debugging a bit

do_softirq() has a debug check that verifies that it is not nesting
on softirq processing, nor miscounting the softirq part of the preempt
count.

But making sure that softirq processing doesn't nest is actually a more
generic concern that applies to any caller of __do_softirq().

So take it one step further and generalize that debug check to
any softirq processing.

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: James Hogan <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Andrew Morton <[email protected]>
---
kernel/softirq.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 17c5cd2..9f8092b 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -133,7 +133,6 @@ EXPORT_SYMBOL(local_bh_disable);

static void __local_bh_enable(unsigned int cnt)
{
- WARN_ON_ONCE(in_irq());
WARN_ON_ONCE(!irqs_disabled());

if (softirq_count() == cnt)
@@ -148,6 +147,7 @@ static void __local_bh_enable(unsigned int cnt)
*/
void _local_bh_enable(void)
{
+ WARN_ON_ONCE(in_irq());
__local_bh_enable(SOFTIRQ_DISABLE_OFFSET);
}

@@ -279,6 +279,7 @@ restart:

account_irq_exit_time(current);
__local_bh_enable(SOFTIRQ_OFFSET);
+ WARN_ON_ONCE(in_interrupt());
tsk_restore_flags(current, old_flags, PF_MEMALLOC);
}

@@ -299,7 +300,6 @@ asmlinkage void do_softirq(void)
if (pending)
do_softirq_own_stack();

- WARN_ON_ONCE(softirq_count());
local_irq_restore(flags);
}

--
1.8.3.1

2013-09-25 16:19:38

by Frederic Weisbecker

Subject: [PATCH 2/7] irq: Consolidate do_softirq() arch overridden implementations

All arch-overridden implementations of do_softirq() share the following
common code: disable irqs (to avoid races with the pending check),
check if there are softirqs pending, then execute __do_softirq() on
a specific stack.

Consolidate the common parts such that archs only worry about the
stack switch.
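
To illustrate, the consolidated generic do_softirq() ends up looking
roughly like this (a sketch assembled from the kernel/softirq.c and
interrupt.h hunks below, not necessarily the literal final code):

	asmlinkage void do_softirq(void)
	{
		__u32 pending;
		unsigned long flags;

		/* Softirq processing must not nest inside interrupt context */
		if (in_interrupt())
			return;

		/* Disable irqs to avoid races with the pending check */
		local_irq_save(flags);

		pending = local_softirq_pending();
		if (pending)
			/* Arch stack switch, or inline __do_softirq() fallback */
			do_softirq_own_stack();

		local_irq_restore(flags);
	}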

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: James Hogan <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Andrew Morton <[email protected]>
---
arch/metag/kernel/irq.c | 52 ++++++++++++++++--------------------------
arch/parisc/kernel/irq.c | 17 ++------------
arch/powerpc/kernel/irq.c | 17 +-------------
arch/s390/kernel/irq.c | 52 +++++++++++++++++-------------------------
arch/sh/kernel/irq.c | 57 +++++++++++++++++-----------------------------
arch/sparc/kernel/irq_64.c | 31 ++++++++-----------------
arch/x86/kernel/entry_64.S | 4 ++--
arch/x86/kernel/irq_32.c | 30 +++++++-----------------
arch/x86/kernel/irq_64.c | 21 -----------------
include/linux/interrupt.h | 11 +++++++++
kernel/softirq.c | 8 +++----
11 files changed, 98 insertions(+), 202 deletions(-)

diff --git a/arch/metag/kernel/irq.c b/arch/metag/kernel/irq.c
index 2a2c9d5..3b4b7f6 100644
--- a/arch/metag/kernel/irq.c
+++ b/arch/metag/kernel/irq.c
@@ -159,44 +159,30 @@ void irq_ctx_exit(int cpu)

extern asmlinkage void __do_softirq(void);

-asmlinkage void do_softirq(void)
+void do_softirq_own_stack(void)
{
- unsigned long flags;
struct thread_info *curctx;
union irq_ctx *irqctx;
u32 *isp;

- if (in_interrupt())
- return;
-
- local_irq_save(flags);
-
- if (local_softirq_pending()) {
- curctx = current_thread_info();
- irqctx = softirq_ctx[smp_processor_id()];
- irqctx->tinfo.task = curctx->task;
-
- /* build the stack frame on the softirq stack */
- isp = (u32 *) ((char *)irqctx + sizeof(struct thread_info));
-
- asm volatile (
- "MOV D0.5,%0\n"
- "SWAP A0StP,D0.5\n"
- "CALLR D1RtP,___do_softirq\n"
- "MOV A0StP,D0.5\n"
- :
- : "r" (isp)
- : "memory", "cc", "D1Ar1", "D0Ar2", "D1Ar3", "D0Ar4",
- "D1Ar5", "D0Ar6", "D0Re0", "D1Re0", "D0.4", "D1RtP",
- "D0.5"
- );
- /*
- * Shouldn't happen, we returned above if in_interrupt():
- */
- WARN_ON_ONCE(softirq_count());
- }
-
- local_irq_restore(flags);
+ curctx = current_thread_info();
+ irqctx = softirq_ctx[smp_processor_id()];
+ irqctx->tinfo.task = curctx->task;
+
+ /* build the stack frame on the softirq stack */
+ isp = (u32 *) ((char *)irqctx + sizeof(struct thread_info));
+
+ asm volatile (
+ "MOV D0.5,%0\n"
+ "SWAP A0StP,D0.5\n"
+ "CALLR D1RtP,___do_softirq\n"
+ "MOV A0StP,D0.5\n"
+ :
+ : "r" (isp)
+ : "memory", "cc", "D1Ar1", "D0Ar2", "D1Ar3", "D0Ar4",
+ "D1Ar5", "D0Ar6", "D0Re0", "D1Re0", "D0.4", "D1RtP",
+ "D0.5"
+ );
}
#endif

diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
index 2e6443b..ef59276 100644
--- a/arch/parisc/kernel/irq.c
+++ b/arch/parisc/kernel/irq.c
@@ -499,22 +499,9 @@ static void execute_on_irq_stack(void *func, unsigned long param1)
*irq_stack_in_use = 1;
}

-asmlinkage void do_softirq(void)
+void do_softirq_own_stack(void)
{
- __u32 pending;
- unsigned long flags;
-
- if (in_interrupt())
- return;
-
- local_irq_save(flags);
-
- pending = local_softirq_pending();
-
- if (pending)
- execute_on_irq_stack(__do_softirq, 0);
-
- local_irq_restore(flags);
+ execute_on_irq_stack(__do_softirq, 0);
}
#endif /* CONFIG_IRQSTACKS */

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index c69440c..7d0da88 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -601,7 +601,7 @@ void irq_ctx_init(void)
}
}

-static inline void do_softirq_onstack(void)
+void do_softirq_own_stack(void)
{
struct thread_info *curtp, *irqtp;
unsigned long saved_sp_limit = current->thread.ksp_limit;
@@ -623,21 +623,6 @@ static inline void do_softirq_onstack(void)
set_bits(irqtp->flags, &curtp->flags);
}

-void do_softirq(void)
-{
- unsigned long flags;
-
- if (in_interrupt())
- return;
-
- local_irq_save(flags);
-
- if (local_softirq_pending())
- do_softirq_onstack();
-
- local_irq_restore(flags);
-}
-
irq_hw_number_t virq_to_hw(unsigned int virq)
{
struct irq_data *irq_data = irq_get_irq_data(virq);
diff --git a/arch/s390/kernel/irq.c b/arch/s390/kernel/irq.c
index 8ac2097..bb27a26 100644
--- a/arch/s390/kernel/irq.c
+++ b/arch/s390/kernel/irq.c
@@ -157,39 +157,29 @@ int arch_show_interrupts(struct seq_file *p, int prec)
/*
* Switch to the asynchronous interrupt stack for softirq execution.
*/
-asmlinkage void do_softirq(void)
+void do_softirq_own_stack(void)
{
- unsigned long flags, old, new;
-
- if (in_interrupt())
- return;
-
- local_irq_save(flags);
-
- if (local_softirq_pending()) {
- /* Get current stack pointer. */
- asm volatile("la %0,0(15)" : "=a" (old));
- /* Check against async. stack address range. */
- new = S390_lowcore.async_stack;
- if (((new - old) >> (PAGE_SHIFT + THREAD_ORDER)) != 0) {
- /* Need to switch to the async. stack. */
- new -= STACK_FRAME_OVERHEAD;
- ((struct stack_frame *) new)->back_chain = old;
-
- asm volatile(" la 15,0(%0)\n"
- " basr 14,%2\n"
- " la 15,0(%1)\n"
- : : "a" (new), "a" (old),
- "a" (__do_softirq)
- : "0", "1", "2", "3", "4", "5", "14",
- "cc", "memory" );
- } else {
- /* We are already on the async stack. */
- __do_softirq();
- }
+ unsigned long old, new;
+
+ /* Get current stack pointer. */
+ asm volatile("la %0,0(15)" : "=a" (old));
+ /* Check against async. stack address range. */
+ new = S390_lowcore.async_stack;
+ if (((new - old) >> (PAGE_SHIFT + THREAD_ORDER)) != 0) {
+ /* Need to switch to the async. stack. */
+ new -= STACK_FRAME_OVERHEAD;
+ ((struct stack_frame *) new)->back_chain = old;
+ asm volatile(" la 15,0(%0)\n"
+ " basr 14,%2\n"
+ " la 15,0(%1)\n"
+ : : "a" (new), "a" (old),
+ "a" (__do_softirq)
+ : "0", "1", "2", "3", "4", "5", "14",
+ "cc", "memory" );
+ } else {
+ /* We are already on the async stack. */
+ __do_softirq();
}
-
- local_irq_restore(flags);
}

/*
diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
index 063af10..0833736 100644
--- a/arch/sh/kernel/irq.c
+++ b/arch/sh/kernel/irq.c
@@ -149,47 +149,32 @@ void irq_ctx_exit(int cpu)
hardirq_ctx[cpu] = NULL;
}

-asmlinkage void do_softirq(void)
+void do_softirq_own_stack(void)
{
- unsigned long flags;
struct thread_info *curctx;
union irq_ctx *irqctx;
u32 *isp;

- if (in_interrupt())
- return;
-
- local_irq_save(flags);
-
- if (local_softirq_pending()) {
- curctx = current_thread_info();
- irqctx = softirq_ctx[smp_processor_id()];
- irqctx->tinfo.task = curctx->task;
- irqctx->tinfo.previous_sp = current_stack_pointer;
-
- /* build the stack frame on the softirq stack */
- isp = (u32 *)((char *)irqctx + sizeof(*irqctx));
-
- __asm__ __volatile__ (
- "mov r15, r9 \n"
- "jsr @%0 \n"
- /* switch to the softirq stack */
- " mov %1, r15 \n"
- /* restore the thread stack */
- "mov r9, r15 \n"
- : /* no outputs */
- : "r" (__do_softirq), "r" (isp)
- : "memory", "r0", "r1", "r2", "r3", "r4",
- "r5", "r6", "r7", "r8", "r9", "r15", "t", "pr"
- );
-
- /*
- * Shouldn't happen, we returned above if in_interrupt():
- */
- WARN_ON_ONCE(softirq_count());
- }
-
- local_irq_restore(flags);
+ curctx = current_thread_info();
+ irqctx = softirq_ctx[smp_processor_id()];
+ irqctx->tinfo.task = curctx->task;
+ irqctx->tinfo.previous_sp = current_stack_pointer;
+
+ /* build the stack frame on the softirq stack */
+ isp = (u32 *)((char *)irqctx + sizeof(*irqctx));
+
+ __asm__ __volatile__ (
+ "mov r15, r9 \n"
+ "jsr @%0 \n"
+ /* switch to the softirq stack */
+ " mov %1, r15 \n"
+ /* restore the thread stack */
+ "mov r9, r15 \n"
+ : /* no outputs */
+ : "r" (__do_softirq), "r" (isp)
+ : "memory", "r0", "r1", "r2", "r3", "r4",
+ "r5", "r6", "r7", "r8", "r9", "r15", "t", "pr"
+ );
}
#else
static inline void handle_one_irq(unsigned int irq)
diff --git a/arch/sparc/kernel/irq_64.c b/arch/sparc/kernel/irq_64.c
index d4840ce..666193f 100644
--- a/arch/sparc/kernel/irq_64.c
+++ b/arch/sparc/kernel/irq_64.c
@@ -698,30 +698,19 @@ void __irq_entry handler_irq(int pil, struct pt_regs *regs)
set_irq_regs(old_regs);
}

-void do_softirq(void)
+void do_softirq_own_stack(void)
{
- unsigned long flags;
-
- if (in_interrupt())
- return;
-
- local_irq_save(flags);
+ void *orig_sp, *sp = softirq_stack[smp_processor_id()];

- if (local_softirq_pending()) {
- void *orig_sp, *sp = softirq_stack[smp_processor_id()];
-
- sp += THREAD_SIZE - 192 - STACK_BIAS;
-
- __asm__ __volatile__("mov %%sp, %0\n\t"
- "mov %1, %%sp"
- : "=&r" (orig_sp)
- : "r" (sp));
- __do_softirq();
- __asm__ __volatile__("mov %0, %%sp"
- : : "r" (orig_sp));
- }
+ sp += THREAD_SIZE - 192 - STACK_BIAS;

- local_irq_restore(flags);
+ __asm__ __volatile__("mov %%sp, %0\n\t"
+ "mov %1, %%sp"
+ : "=&r" (orig_sp)
+ : "r" (sp));
+ __do_softirq();
+ __asm__ __volatile__("mov %0, %%sp"
+ : : "r" (orig_sp));
}

#ifdef CONFIG_HOTPLUG_CPU
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index b077f4c..083da7c 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1342,7 +1342,7 @@ bad_gs:
.previous

/* Call softirq on interrupt stack. Interrupts are off. */
-ENTRY(call_softirq)
+ENTRY(do_softirq_own_stack)
CFI_STARTPROC
pushq_cfi %rbp
CFI_REL_OFFSET rbp,0
@@ -1359,7 +1359,7 @@ ENTRY(call_softirq)
decl PER_CPU_VAR(irq_count)
ret
CFI_ENDPROC
-END(call_softirq)
+END(do_softirq_own_stack)

#ifdef CONFIG_XEN
zeroentry xen_hypervisor_callback xen_do_hypervisor_callback
diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
index 4186755..8a5bb01 100644
--- a/arch/x86/kernel/irq_32.c
+++ b/arch/x86/kernel/irq_32.c
@@ -149,35 +149,21 @@ void irq_ctx_init(int cpu)
cpu, per_cpu(hardirq_ctx, cpu), per_cpu(softirq_ctx, cpu));
}

-asmlinkage void do_softirq(void)
+void do_softirq_own_stack(void)
{
- unsigned long flags;
struct thread_info *curctx;
union irq_ctx *irqctx;
u32 *isp;

- if (in_interrupt())
- return;
-
- local_irq_save(flags);
-
- if (local_softirq_pending()) {
- curctx = current_thread_info();
- irqctx = __this_cpu_read(softirq_ctx);
- irqctx->tinfo.task = curctx->task;
- irqctx->tinfo.previous_esp = current_stack_pointer;
-
- /* build the stack frame on the softirq stack */
- isp = (u32 *) ((char *)irqctx + sizeof(*irqctx));
+ curctx = current_thread_info();
+ irqctx = __this_cpu_read(softirq_ctx);
+ irqctx->tinfo.task = curctx->task;
+ irqctx->tinfo.previous_esp = current_stack_pointer;

- call_on_stack(__do_softirq, isp);
- /*
- * Shouldn't happen, we returned above if in_interrupt():
- */
- WARN_ON_ONCE(softirq_count());
- }
+ /* build the stack frame on the softirq stack */
+ isp = (u32 *) ((char *)irqctx + sizeof(*irqctx));

- local_irq_restore(flags);
+ call_on_stack(__do_softirq, isp);
}

bool handle_irq(unsigned irq, struct pt_regs *regs)
diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
index d04d3ec..4d1c746 100644
--- a/arch/x86/kernel/irq_64.c
+++ b/arch/x86/kernel/irq_64.c
@@ -87,24 +87,3 @@ bool handle_irq(unsigned irq, struct pt_regs *regs)
generic_handle_irq_desc(irq, desc);
return true;
}
-
-
-extern void call_softirq(void);
-
-asmlinkage void do_softirq(void)
-{
- __u32 pending;
- unsigned long flags;
-
- if (in_interrupt())
- return;
-
- local_irq_save(flags);
- pending = local_softirq_pending();
- /* Switch to interrupt stack */
- if (pending) {
- call_softirq();
- WARN_ON_ONCE(softirq_count());
- }
- local_irq_restore(flags);
-}
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 5e865b5..c9e831d 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -19,6 +19,7 @@

#include <linux/atomic.h>
#include <asm/ptrace.h>
+#include <asm/irq.h>

/*
* These correspond to the IORESOURCE_IRQ_* defines in
@@ -374,6 +375,16 @@ struct softirq_action

asmlinkage void do_softirq(void);
asmlinkage void __do_softirq(void);
+
+#ifdef __ARCH_HAS_DO_SOFTIRQ
+void do_softirq_own_stack(void);
+#else
+static inline void do_softirq_own_stack(void)
+{
+ __do_softirq();
+}
+#endif
+
extern void open_softirq(int nr, void (*action)(struct softirq_action *));
extern void softirq_init(void);
extern void __raise_softirq_irqoff(unsigned int nr);
diff --git a/kernel/softirq.c b/kernel/softirq.c
index d7d498d..26ee727 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -29,7 +29,6 @@
#define CREATE_TRACE_POINTS
#include <trace/events/irq.h>

-#include <asm/irq.h>
/*
- No shared variables, all the data are CPU local.
- If a softirq needs serialization, let it serialize itself
@@ -283,7 +282,7 @@ restart:
tsk_restore_flags(current, old_flags, PF_MEMALLOC);
}

-#ifndef __ARCH_HAS_DO_SOFTIRQ
+

asmlinkage void do_softirq(void)
{
@@ -298,13 +297,12 @@ asmlinkage void do_softirq(void)
pending = local_softirq_pending();

if (pending)
- __do_softirq();
+ do_softirq_own_stack();

+ WARN_ON_ONCE(softirq_count());
local_irq_restore(flags);
}

-#endif
-
/*
* Enter an interrupt context.
*/
--
1.8.3.1

2013-09-25 16:19:36

by Frederic Weisbecker

Subject: [PATCH 3/7] irq: Optimize call to softirq on hardirq exit

Before processing softirqs on hardirq exit, we have already
checked for pending softirqs, while hardirqs are
guaranteed to be disabled.

So we can take a shortcut and safely jump to the arch-specific
implementation directly.
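
For reference, the only caller is irq_exit(), which by this point in
the series looks roughly like the following (simplified sketch, the
elided parts are unchanged):

	void irq_exit(void)
	{
		/* ... */
		sub_preempt_count(HARDIRQ_OFFSET);
		/* Pending check done while hardirqs are still disabled */
		if (!in_interrupt() && local_softirq_pending())
			invoke_softirq();
		/* ... */
	}

so the in_interrupt() and pending checks that do_softirq() repeats
are redundant on this path.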

Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: James Hogan <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Andrew Morton <[email protected]>
---
kernel/softirq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 26ee727..17c5cd2 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -335,7 +335,7 @@ static inline void invoke_softirq(void)
* in its own stack to prevent from any overrun on top
* of a potentially deep task stack.
*/
- do_softirq();
+ do_softirq_own_stack();
} else {
wakeup_softirqd();
}
--
1.8.3.1

2013-09-25 23:04:30

by Linus Torvalds

Subject: Re: [PATCH 6/7] irq: Optimize softirq stack selection in irq exit

On Wed, Sep 25, 2013 at 9:18 AM, Frederic Weisbecker <[email protected]> wrote:
> +#ifdef __ARCH_IRQ_EXIT_ON_IRQ_STACK

Please don't introduce any more of these insane ad-hoc crazy architecture macros.

Just make it a regular config option, which makes it much easier for
architectures to just select it at config time.

So make it something like CONFIG_IRQ_EXIT_ON_IRQ_STACK, and then
x86-64 (and now Power) can just do

select IRQ_EXIT_ON_IRQ_STACK

in their arch/*/Kconfig files.
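
On the kernel/softirq.c side, patch 6 would then presumably reduce to
something like this sketch, with the Kconfig symbol replacing the
ad-hoc arch macro:

	static inline void invoke_softirq(void)
	{
		if (!force_irqthreads) {
	#ifdef CONFIG_IRQ_EXIT_ON_IRQ_STACK
			/* Already on the near empty irq stack, run inline */
			__do_softirq();
	#else
			/* Potentially deep task stack, switch stacks */
			do_softirq_own_stack();
	#endif
		} else {
			wakeup_softirqd();
		}
	}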

Basically, we really should strive to get rid of all those insane
[__]ARCH_xyzzy flags. We have too many of them, and they are insane.

Linus

2013-09-25 23:08:25

by Linus Torvalds

Subject: Re: [PATCH 7/7] x86: Tell about irq stack coverage

On Wed, Sep 25, 2013 at 9:18 AM, Frederic Weisbecker <[email protected]> wrote:
> +#ifdef CONFIG_X86_64
> +# define __ARCH_IRQ_EXIT_ON_IRQ_STACK
> +#endif

Ok, see the previous email, this should just be a single

select IRQ_EXIT_ON_IRQ_STACK

in under the "config X86_64" header in arch/x86/Kconfig.

And as of right now, this patch series should do it for Power too,
since I just merged Ben's branch, which contained commit 0366a1c70b89
("powerpc/irq: Run softirqs off the top of the irq stack"). Again,
just a single select in the arch/powerpc/Kconfig file should be
sufficient.

Linus

2013-09-26 00:21:25

by Andrew Morton

Subject: Re: [PATCH 7/7] x86: Tell about irq stack coverage

On Wed, 25 Sep 2013 16:08:20 -0700 Linus Torvalds <[email protected]> wrote:

> On Wed, Sep 25, 2013 at 9:18 AM, Frederic Weisbecker <[email protected]> wrote:
> > +#ifdef CONFIG_X86_64
> > +# define __ARCH_IRQ_EXIT_ON_IRQ_STACK
> > +#endif
>
> Ok, see the previous email, this should just be a single
>
> select IRQ_EXIT_ON_IRQ_STACK
>
> under the "config X86_64" header in arch/x86/Kconfig.
>
> And as of right now, this patch series should do it for Power too,
> since I just merged Ben's branch, which contained commit 0366a1c70b89
> ("powerpc/irq: Run softirqs off the top of the irq stack"). Again,
> just a single select in the arch/powerpc/Kconfig file should be
> sufficient.
>

This happens quite a lot. Perhaps Joe can cook us up a checkpatch
rule. "#define [__]ARCH_HA" looks like a decent search pattern.

2013-09-26 00:40:34

by Joe Perches

Subject: Re: [PATCH 7/7] x86: Tell about irq stack coverage

On Wed, 2013-09-25 at 17:21 -0700, Andrew Morton wrote:
> On Wed, 25 Sep 2013 16:08:20 -0700 Linus Torvalds <[email protected]> wrote:
>
> > On Wed, Sep 25, 2013 at 9:18 AM, Frederic Weisbecker <[email protected]> wrote:
> > > +#ifdef CONFIG_X86_64
> > > +# define __ARCH_IRQ_EXIT_ON_IRQ_STACK
> > > +#endif
> >
> > Ok, see the previous email, this should just be a single
> >
> > select IRQ_EXIT_ON_IRQ_STACK
> >
> > under the "config X86_64" header in arch/x86/Kconfig.
> >
> > And as of right now, this patch series should do it for Power too,
> > since I just merged Ben's branch, which contained commit 0366a1c70b89
> > ("powerpc/irq: Run softirqs off the top of the irq stack"). Again,
> > just a single select in the arch/powerpc/Kconfig file should be
> > sufficient.
> >
>
> This happens quite a lot. Perhaps Joe can cook us up a checkpatch
> rule. "#define [__]ARCH_HA" looks like a decent search pattern.
>

Huh? That matches all the ARCH_HAS_<foo> patterns.

What is it you want again?

2013-09-26 01:26:49

by Linus Torvalds

Subject: Re: [PATCH 7/7] x86: Tell about irq stack coverage

On Wed, Sep 25, 2013 at 5:40 PM, Joe Perches <[email protected]> wrote:
>
> Huh? That matches all the ARCH_HAS_<foo> patterns.

Right. And they are all crap. lib/string.c is a prime example of
something that should never have happened.

The ARCH_HAS_xyz pattern is totally retarded. It's wrong.

For big conceptual features, we should use Kconfig symbols.

And for smaller things - like lib/string.c - where we have
compatibility fallback functions but want architectures able to
override them with optimized ones one function at a time, we should
either use weak functions (appropriate for some cases), or the symbol
that protects them should be the SAME SYMBOL WE USE. Rather than some
made-up crap-for-brains new ARCH_HAS_xyz symbol. That way it shows up
in greps, and that way we don't have any question about what random
symbol pattern we use that particular day.

So for *bad* use, see lib/string.c, and the ARCH_HAS_xyz horror.

For *good* use, see lib/div64.c or lib/find_next_bit.c.

Notice how div64.c doesn't make up new ARCH_HAS_random_crap names? And
no, you don't have to define those things as macros, you can define
them as functions (inline or not), and then just do

#define find_next_zero_bit find_next_zero_bit

to tell the rest of the world "Look, I have this defined".
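
Spelled out, that scheme looks something like this (a sketch, with
hypothetical file placement and placeholder bodies):

	/* arch header: optimized version, announced under its own name */
	static inline unsigned long find_next_zero_bit(const unsigned long *addr,
				unsigned long size, unsigned long offset)
	{
		/* arch-optimized implementation goes here */
		return size;	/* placeholder: "not found" */
	}
	#define find_next_zero_bit find_next_zero_bit

	/* generic lib code: the fallback compiles out when the arch
	 * already provides the function under the same name */
	#ifndef find_next_zero_bit
	unsigned long find_next_zero_bit(const unsigned long *addr,
				unsigned long size, unsigned long offset)
	{
		/* generic fallback implementation goes here */
		return size;	/* placeholder: "not found" */
	}
	#endif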

The whole "make up a totally unrelated second name for it" means that
we have things like __HAVE_ARCH_STRLEN but also things like
ARCH_HAS_PREFETCHW. Ugh.

Linus

2013-09-26 02:01:02

by Joe Perches

Subject: [PATCH] checkpatch: Add test for #defines of ARCH_HAS_<foo>

On Wed, 2013-09-25 at 18:26 -0700, Linus Torvalds wrote:
> On Wed, Sep 25, 2013 at 5:40 PM, Joe Perches <[email protected]> wrote:
> >
> > Huh? That matches all the ARCH_HAS_<foo> patterns.
>
> Right. And they are all crap. lib/string.c is a prime example of
> something that should never have happened.
>
> The ARCH_HAS_xyz pattern is totally retarded. It's wrong.
>
> For big conceptual features, we should use Kconfig symbols.
>
> And for smaller things - like lib/string.c - where we have
> compatibility fallback functions but want architectures able to
> override them with optimized ones one function at a time, we should
> either use weak functions (appropriate for some cases), or the symbol
> that protects them should be the SAME SYMBOL WE USE. Rather than some
> made-up crap-for-brains new ARCH_HAS_xyz symbol. That way it shows up
> in greps, and that way we don't have any question about what random
> symbol pattern we use that particular day.
>
> So for *bad* use, see lib/string.c, and the ARCH_HAS_xyz horror.
>
> For *good* use, see lib/div64.c or lib/find_next_bit.c.
>
> Notice how div64.c doesn't make up new ARCH_HAS_random_crap names? And
> no, you don't have to define those things as macros, you can define
> them as functions (inline or not), and then just do
>
> #define find_next_zero_bit find_next_zero_bit
>
> to tell the rest of the world "Look, I have this defined".
>
> The whole "make up a totally unrelated second name for it" means that
> we have things like __HAVE_ARCH_STRLEN but also things like
> ARCH_HAS_PREFETCHW. Ugh.
>
> Linus

So, add a test for these #defines.

Additionally, move the string_find_replace sub, as it
screws up subsequent formatting when placed inside
another sub.
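
As an example, the test added below flags an addition like:

	+#define __ARCH_HAS_DO_SOFTIRQ

with roughly:

	ERROR: #define of '__ARCH_HAS_DO_SOFTIRQ' is wrong - use Kconfig
	variables or standard guards instead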

Suggested-by: Andrew Morton <[email protected]>
Signed-off-by: Joe Perches <[email protected]>
---
scripts/checkpatch.pl | 22 ++++++++++++++--------
1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index c03e427..e2e7703 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -1512,6 +1512,14 @@ sub rtrim {
return $string;
}

+sub string_find_replace {
+ my ($string, $find, $replace) = @_;
+
+ $string =~ s/$find/$replace/g;
+
+ return $string;
+}
+
sub tabify {
my ($leading) = @_;

@@ -3731,14 +3739,6 @@ sub process {
}
}

-sub string_find_replace {
- my ($string, $find, $replace) = @_;
-
- $string =~ s/$find/$replace/g;
-
- return $string;
-}
-
# check for bad placement of section $InitAttribute (e.g.: __initdata)
if ($line =~ /(\b$InitAttribute\b)/) {
my $attr = $1;
@@ -4196,6 +4196,12 @@ sub string_find_replace {
"usage of NR_CPUS is often wrong - consider using cpu_possible(), num_possible_cpus(), for_each_possible_cpu(), etc\n" . $herecurr);
}

+# Use of __ARCH_HAS_<FOO> or ARCH_HAVE_<BAR> is wrong.
+ if ($line =~ /\+\s*#\s*define\s+((?:__)?ARCH_(?:HAS|HAVE)\w*)\b/) {
+ ERROR("DEFINE_ARCH_HAS",
+ "#define of '$1' is wrong - use Kconfig variables or standard guards instead\n" . $herecurr);
+ }
+
# check for %L{u,d,i} in strings
my $string;
while ($line =~ /(?:^|")([X\t]*)(?:"|$)/g) {

2013-09-26 02:32:43

by Andrew Morton

Subject: Re: [PATCH] checkpatch: Add test for #defines of ARCH_HAS_<foo>

On Wed, 25 Sep 2013 19:00:54 -0700 Joe Perches <[email protected]> wrote:

> +# Use of __ARCH_HAS_<FOO> or ARCH_HAVE_<BAR> is wrong.
> + if ($line =~ /\+\s*#\s*define\s+((?:__)?ARCH_(?:HAS|HAVE)\w*)\b/) {
> + ERROR("DEFINE_ARCH_HAS",
> + "#define of '$1' is wrong - use Kconfig variables or standard guards instead\n" . $herecurr);
> + }
> +

Perhaps we can provide people with a bit more help than that.
http://www.kernelhub.org/?msg=334759&p=2 would suit (gad, google
updates fast!) or copy-n-paste into Documentation/wherever and refer to
that?

2013-09-26 02:40:33

by Joe Perches

Subject: Re: [PATCH] checkpatch: Add test for #defines of ARCH_HAS_<foo>

On Wed, 2013-09-25 at 19:32 -0700, Andrew Morton wrote:
> On Wed, 25 Sep 2013 19:00:54 -0700 Joe Perches <[email protected]> wrote:
>
> > +# Use of __ARCH_HAS_<FOO> or ARCH_HAVE_<BAR> is wrong.
> > + if ($line =~ /\+\s*#\s*define\s+((?:__)?ARCH_(?:HAS|HAVE)\w*)\b/) {
> > + ERROR("DEFINE_ARCH_HAS",
> > + "#define of '$1' is wrong - use Kconfig variables or standard guards instead\n" . $herecurr);
> > + }
> > +
>
> Perhaps we can provide people with a bit more help than that.
> http://www.kernelhub.org/?msg=334759&p=2 would suit (gad, google
> updates fast!) or copy-n-paste into Documentation/wherever and refer to
> that?
>

Maybe the fancy lkml.kernel.org link:

http://lkml.kernel.org/r/CA+55aFycQ9XJvEOsiM3txHL5bjUc8CeKWJNR_H+MiicaddB42Q@mail.gmail.com

as that might be more long-term stable.

Will kernel.org ever store all the lkml emails so it
doesn't have to reference outside links?

2013-09-26 07:12:17

by Frederic Weisbecker

Subject: Re: [PATCH 6/7] irq: Optimize softirq stack selection in irq exit

On Wed, Sep 25, 2013 at 04:04:28PM -0700, Linus Torvalds wrote:
> On Wed, Sep 25, 2013 at 9:18 AM, Frederic Weisbecker <[email protected]> wrote:
> > +#ifdef __ARCH_IRQ_EXIT_ON_IRQ_STACK
>
> Please don't introduce any more of these insane ad-hoc crazy architecture macros.
>
> Just make it a regular config option, which makes it much easier for
> architectures to just select it at config time.
>
> So make it something like CONFIG_IRQ_EXIT_ON_IRQ_STACK, and then
> x86-64 (and now Power) can just do
>
> select IRQ_EXIT_ON_IRQ_STACK
>
> in their arch/*/Kconfig files.
>
> Basically, we really should strive to get rid of all those insane
> [__]ARCH_xyzzy flags. We have too many of them, and they are insane.

Ok, I'll convert that, along with x86 and powerpc.

Thanks.

>
> Linus