2015-11-27 11:09:47

by Petr Mladek

Subject: [PATCH v2 0/5] Cleaning printk stuff in NMI context

This patch set generalizes the already existing solution for
printing NMI messages. The main idea comes from Peter Zijlstra.

This second version has only a few changes compared with v1; see
below. Peter Zijlstra also suggested improving the handling of
the serial console. I looked into it, and it seems to be
a rather large task. It would likely require reviewing all console
implementations and avoiding deadlocks on their locally used locks.
It would be nice to have, but I would postpone it to a later series.

I think that this patchset makes sense as is. It definitely
improves the situation. It allows putting all messages into
the ring buffer so that they are available in the crashdump.
We have had a similar solution in SLE for years, and customers
stopped complaining about the deadlocks.

Changes against v1:

+ rebased on top of 4.4-rc2; there the old implementation was
moved to lib/nmi_backtrace.c and is also used on arm; I hope that
I got the arm side right; I was not able to test on arm :-(

+ defined HAVE_NMI on arm for !CPU_V7M instead of !CPU_V7;
handle_fiq_as_nmi() is called from entry-armv.S, which
is compiled when !CPU_V7M

+ defined HAVE_NMI also on mips; it calls nmi_enter() and
seems to have real NMIs (or am I wrong?)

+ serialized backtraces when printing directly
(oops_in_progress)

Petr Mladek (5):
printk/nmi: Generic solution for safe printk in NMI
printk/nmi: Use IRQ work only when ready
printk/nmi: Try hard to print Oops message in NMI context
printk/nmi: Warn when some message has been lost in NMI context
printk/nmi: Increase the size of the temporary buffer

arch/Kconfig | 7 ++
arch/arm/Kconfig | 2 +
arch/arm/kernel/smp.c | 2 +
arch/avr32/Kconfig | 1 +
arch/blackfin/Kconfig | 1 +
arch/cris/Kconfig | 1 +
arch/mips/Kconfig | 1 +
arch/powerpc/Kconfig | 1 +
arch/s390/Kconfig | 1 +
arch/s390/mm/fault.c | 1 +
arch/sh/Kconfig | 1 +
arch/sparc/Kconfig | 1 +
arch/tile/Kconfig | 1 +
arch/x86/Kconfig | 1 +
arch/x86/kernel/apic/hw_nmi.c | 1 -
include/linux/hardirq.h | 2 +
include/linux/percpu.h | 3 -
include/linux/printk.h | 12 ++-
init/Kconfig | 5 +
init/main.c | 1 +
kernel/printk/Makefile | 1 +
kernel/printk/nmi.c | 218 ++++++++++++++++++++++++++++++++++++++++++
kernel/printk/printk.c | 36 +++----
kernel/printk/printk.h | 55 +++++++++++
lib/bust_spinlocks.c | 1 +
lib/nmi_backtrace.c | 93 +++---------------
26 files changed, 345 insertions(+), 105 deletions(-)
create mode 100644 kernel/printk/nmi.c
create mode 100644 kernel/printk/printk.h

--
1.8.5.6


2015-11-27 11:11:13

by Petr Mladek

Subject: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

printk() takes some locks and cannot be used safely in NMI
context.

The chance of a deadlock is real, especially when printing
stacks from all CPUs. This particular problem has been addressed
on x86 by the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack
trace on all CPUs").

This patch reuses most of the code and makes it generic. It is
useful for all messages and architectures that support NMI.

The alternative printk_func is set when entering and is reset when
leaving NMI context. It queues IRQ work to copy the messages into
the main ring buffer in a safe context.

__printk_nmi_flush() copies all available messages and resets
the buffer. This allows us to use simple cmpxchg operations to
synchronize with writers. A spinlock is also used to synchronize
with other flushers.

We no longer use seq_buf because it depends on an external lock.
It would be hard to make all of its supported operations safe for
lockless use, and it would be confusing and error prone to
make only some operations safe.

The code is put into separate printk/nmi.c as suggested by
Steven Rostedt. It needs a per-CPU buffer and is compiled only
on architectures that call nmi_enter(). This is achieved by
the new HAVE_NMI Kconfig flag.

One exception is arm, where deferred printing is used for
backtraces even without NMI. For this purpose, we define
the NEED_PRINTK_NMI Kconfig flag. The alternative
printk_func is explicitly set when IPI_CPU_BACKTRACE is
handled.

Another exception is the Xtensa architecture, which uses just a
fake NMI.

The patch is heavily based on the draft from Peter Zijlstra,
see https://lkml.org/lkml/2015/6/10/327

Suggested-by: Peter Zijlstra <[email protected]>
Suggested-by: Steven Rostedt <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
---
arch/Kconfig | 7 ++
arch/arm/Kconfig | 2 +
arch/arm/kernel/smp.c | 2 +
arch/avr32/Kconfig | 1 +
arch/blackfin/Kconfig | 1 +
arch/cris/Kconfig | 1 +
arch/mips/Kconfig | 1 +
arch/powerpc/Kconfig | 1 +
arch/s390/Kconfig | 1 +
arch/sh/Kconfig | 1 +
arch/sparc/Kconfig | 1 +
arch/tile/Kconfig | 1 +
arch/x86/Kconfig | 1 +
arch/x86/kernel/apic/hw_nmi.c | 1 -
include/linux/hardirq.h | 2 +
include/linux/percpu.h | 3 -
include/linux/printk.h | 12 ++-
init/Kconfig | 5 ++
init/main.c | 1 +
kernel/printk/Makefile | 1 +
kernel/printk/nmi.c | 200 ++++++++++++++++++++++++++++++++++++++++++
kernel/printk/printk.c | 19 +---
kernel/printk/printk.h | 44 ++++++++++
lib/nmi_backtrace.c | 89 ++-----------------
24 files changed, 291 insertions(+), 107 deletions(-)
create mode 100644 kernel/printk/nmi.c
create mode 100644 kernel/printk/printk.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 4e949e58b192..7ce5101c2472 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -187,7 +187,14 @@ config HAVE_OPTPROBES
config HAVE_KPROBES_ON_FTRACE
bool

+config HAVE_NMI
+ bool
+
+config NEED_PRINTK_NMI
+ bool
+
config HAVE_NMI_WATCHDOG
+ depends on HAVE_NMI
bool
#
# An arch should select this if it provides all these things:
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 0365cbbc9179..f0465e420762 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -63,6 +63,8 @@ config ARM
select HAVE_KRETPROBES if (HAVE_KPROBES)
select HAVE_MEMBLOCK
select HAVE_MOD_ARCH_SPECIFIC
+ select HAVE_NMI if (!CPU_V7M)
+ select NEED_PRINTK_NMI if (CPU_V7M)
select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
select HAVE_OPTPROBES if !THUMB2_KERNEL
select HAVE_PERF_EVENTS
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index b26361355dae..a960adb9bd7d 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -648,7 +648,9 @@ void handle_IPI(int ipinr, struct pt_regs *regs)

case IPI_CPU_BACKTRACE:
irq_enter();
+ printk_nmi_enter();
nmi_cpu_backtrace(regs);
+ printk_nmi_exit();
irq_exit();
break;

diff --git a/arch/avr32/Kconfig b/arch/avr32/Kconfig
index b6878eb64884..9dc3e2b1180b 100644
--- a/arch/avr32/Kconfig
+++ b/arch/avr32/Kconfig
@@ -17,6 +17,7 @@ config AVR32
select GENERIC_CLOCKEVENTS
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
+ select HAVE_NMI
help
AVR32 is a high-performance 32-bit RISC microprocessor core,
designed for cost-sensitive embedded applications, with particular
diff --git a/arch/blackfin/Kconfig b/arch/blackfin/Kconfig
index af76634f8d98..47c0a55acafd 100644
--- a/arch/blackfin/Kconfig
+++ b/arch/blackfin/Kconfig
@@ -40,6 +40,7 @@ config BLACKFIN
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
+ select HAVE_NMI

config GENERIC_CSUM
def_bool y
diff --git a/arch/cris/Kconfig b/arch/cris/Kconfig
index e086f9e93728..62148eaf0189 100644
--- a/arch/cris/Kconfig
+++ b/arch/cris/Kconfig
@@ -69,6 +69,7 @@ config CRIS
select GENERIC_CLOCKEVENTS if ETRAX_ARCH_V32
select GENERIC_SCHED_CLOCK if ETRAX_ARCH_V32
select HAVE_DEBUG_BUGVERBOSE if ETRAX_ARCH_V32
+ select HAVE_NMI

config HZ
int
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 71683a853372..35ae01fe980a 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -63,6 +63,7 @@ config MIPS
select HAVE_IRQ_TIME_ACCOUNTING
select GENERIC_TIME_VSYSCALL
select ARCH_CLOCKSOURCE_DATA
+ select HAVE_NMI

menu "Machine selection"

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index db49e0d796b1..1bd8a7503a48 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -156,6 +156,7 @@ config PPC
select NO_BOOTMEM
select HAVE_GENERIC_RCU_GUP
select HAVE_PERF_EVENTS_NMI if PPC64
+ select HAVE_NMI if PERF_EVENTS
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
select ARCH_HAS_DMA_SET_COHERENT_MASK
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 3a55f493c7da..9a8a76829508 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -159,6 +159,7 @@ config S390
select TTY
select VIRT_CPU_ACCOUNTING
select VIRT_TO_BUS
+ select HAVE_NMI


config SCHED_OMIT_FRAME_POINTER
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index d514df7e04dd..ff8093b574f1 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -44,6 +44,7 @@ config SUPERH
select OLD_SIGSUSPEND
select OLD_SIGACTION
select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_NMI
help
The SuperH is a RISC processor targeted for use in embedded systems
and consumer electronics; it was also used in the Sega Dreamcast
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 56442d2d7bbc..e05ea973fd6a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -80,6 +80,7 @@ config SPARC64
select NO_BOOTMEM
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
+ select HAVE_NMI

config ARCH_DEFCONFIG
string
diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 106c21bd7f44..050468c41dc6 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -29,6 +29,7 @@ config TILE
select HAVE_DEBUG_STACKOVERFLOW
select ARCH_WANT_FRAME_POINTERS
select HAVE_CONTEXT_TRACKING
+ select HAVE_NMI if USE_PMC
select EDAC_SUPPORT
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index db3622f22b61..e83f99194c4c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -126,6 +126,7 @@ config X86
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MIXED_BREAKPOINTS_REGS
+ select HAVE_NMI
select HAVE_OPROFILE
select HAVE_OPTPROBES
select HAVE_PCSPKR_PLATFORM
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 045e424fb368..7788ce643bf4 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -18,7 +18,6 @@
#include <linux/nmi.h>
#include <linux/module.h>
#include <linux/delay.h>
-#include <linux/seq_buf.h>

#ifdef CONFIG_HARDLOCKUP_DETECTOR
u64 hw_nmi_get_sample_period(int watchdog_thresh)
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index dfd59d6bc6f0..a477e0766d2e 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -67,10 +67,12 @@ extern void irq_exit(void);
preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET); \
rcu_nmi_enter(); \
trace_hardirq_enter(); \
+ printk_nmi_enter(); \
} while (0)

#define nmi_exit() \
do { \
+ printk_nmi_exit(); \
trace_hardirq_exit(); \
rcu_nmi_exit(); \
BUG_ON(!in_nmi()); \
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index caebf2a758dc..04c68b9f56f8 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -135,7 +135,4 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
(typeof(type) __percpu *)__alloc_percpu(sizeof(type), \
__alignof__(type))

-/* To avoid include hell, as printk can not declare this, we declare it here */
-DECLARE_PER_CPU(printk_func_t, printk_func);
-
#endif /* __LINUX_PERCPU_H */
diff --git a/include/linux/printk.h b/include/linux/printk.h
index 9729565c25ff..7aba6b92d020 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -122,7 +122,17 @@ static inline __printf(1, 2) __cold
void early_printk(const char *s, ...) { }
#endif

-typedef __printf(1, 0) int (*printk_func_t)(const char *fmt, va_list args);
+#ifdef CONFIG_PRINTK_NMI
+extern void printk_nmi_init(void);
+extern void printk_nmi_enter(void);
+extern void printk_nmi_exit(void);
+extern void printk_nmi_flush(void);
+#else
+static inline void printk_nmi_init(void) { }
+static inline void printk_nmi_enter(void) { }
+static inline void printk_nmi_exit(void) { }
+static inline void printk_nmi_flush(void) { }
+#endif /* PRINTK_NMI */

#ifdef CONFIG_PRINTK
asmlinkage __printf(5, 0)
diff --git a/init/Kconfig b/init/Kconfig
index c24b6f767bf0..c1c0b6a2d712 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1456,6 +1456,11 @@ config PRINTK
very difficult to diagnose system problems, saying N here is
strongly discouraged.

+config PRINTK_NMI
+ def_bool y
+ depends on PRINTK
+ depends on HAVE_NMI || NEED_PRINTK_NMI
+
config BUG
bool "BUG() support" if EXPERT
default y
diff --git a/init/main.c b/init/main.c
index 9e64d7097f1a..45c1e05125dc 100644
--- a/init/main.c
+++ b/init/main.c
@@ -591,6 +591,7 @@ asmlinkage __visible void __init start_kernel(void)
timekeeping_init();
time_init();
sched_clock_postinit();
+ printk_nmi_init();
perf_event_init();
profile_init();
call_function_init();
diff --git a/kernel/printk/Makefile b/kernel/printk/Makefile
index 85405bdcf2b3..abb0042a427b 100644
--- a/kernel/printk/Makefile
+++ b/kernel/printk/Makefile
@@ -1,2 +1,3 @@
obj-y = printk.o
+obj-$(CONFIG_PRINTK_NMI) += nmi.o
obj-$(CONFIG_A11Y_BRAILLE_CONSOLE) += braille.o
diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
new file mode 100644
index 000000000000..3989e13a0021
--- /dev/null
+++ b/kernel/printk/nmi.c
@@ -0,0 +1,200 @@
+/*
+ * nmi.c - Safe printk in NMI context
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/smp.h>
+#include <linux/cpumask.h>
+#include <linux/irq_work.h>
+#include <linux/printk.h>
+
+#include "printk.h"
+
+/*
+ * printk() cannot take logbuf_lock in NMI context. Instead,
+ * it uses an alternative implementation that temporarily stores
+ * the strings into a per-CPU buffer. The content of the buffer
+ * is later flushed into the main ring buffer via IRQ work.
+ *
+ * The alternative implementation is chosen transparently
+ * via @printk_func per-CPU variable.
+ *
+ * The implementation allows flushing the strings also from another CPU.
+ * There are situations when we want to make sure that all buffers
+ * were handled or when IRQs are blocked.
+ */
+DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
+
+struct nmi_seq_buf {
+ atomic_t len; /* length of written data */
+ struct irq_work work; /* IRQ work that flushes the buffer */
+ unsigned char buffer[PAGE_SIZE - sizeof(atomic_t) -
+ sizeof(struct irq_work)];
+};
+static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
+
+/*
+ * Safe printk() for NMI context. It uses a per-CPU buffer to
+ * store the message. NMIs are not nested, so there is always only
+ * one writer running. But the buffer might get flushed from another
+ * CPU, so we need to be careful.
+ */
+static int vprintk_nmi(const char *fmt, va_list args)
+{
+ struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
+ int add = 0, len;
+
+again:
+ len = atomic_read(&s->len);
+
+ if (len >= sizeof(s->buffer))
+ return 0;
+
+ /*
+ * Make sure that all old data have been read before the buffer was
+ * reset. This is not needed when we just append data.
+ */
+ if (!len)
+ smp_rmb();
+
+ add = vsnprintf(s->buffer + len, sizeof(s->buffer) - len, fmt, args);
+
+ /*
+ * Do it once again if the buffer has been flushed in the meantime.
+ * Note that atomic_cmpxchg() is an implicit memory barrier that
+ * makes sure that the data were written before updating s->len.
+ */
+ if (atomic_cmpxchg(&s->len, len, len + add) != len)
+ goto again;
+
+ /* Get flushed in a more safe context. */
+ if (add)
+ irq_work_queue(&s->work);
+
+ return add;
+}
+
+/*
+ * printk one line from the temporary buffer from @start index until
+ * and including the @end index.
+ */
+static void print_nmi_seq_line(struct nmi_seq_buf *s, int start, int end)
+{
+ const char *buf = s->buffer + start;
+
+ printk("%.*s", (end - start) + 1, buf);
+}
+
+/*
+ * Flush data from the associated per_CPU buffer. The function
+ * can be called either via IRQ work or independently.
+ */
+static void __printk_nmi_flush(struct irq_work *work)
+{
+ static raw_spinlock_t read_lock =
+ __RAW_SPIN_LOCK_INITIALIZER(read_lock);
+ struct nmi_seq_buf *s = container_of(work, struct nmi_seq_buf, work);
+ int len, size, i, last_i;
+
+ /*
+ * The lock has two functions. First, one reader has to flush all
+ * available message to make the lockless synchronization with
+ * writers easier. Second, we do not want to mix messages from
+ * different CPUs. This is especially important when printing
+ * a backtrace.
+ */
+ raw_spin_lock(&read_lock);
+
+ i = 0;
+more:
+ len = atomic_read(&s->len);
+
+ /*
+ * This is just a paranoid check that nobody has manipulated
+ * the buffer an unexpected way. If we printed something then
+ * @len must only increase.
+ */
+ WARN_ON(i && i >= len);
+
+ if (!len)
+ goto out; /* Someone else has already flushed the buffer. */
+
+ /* Make sure that data has been written up to the @len */
+ smp_rmb();
+
+ size = min_t(int, len, sizeof(s->buffer));
+ last_i = i;
+
+ /* Print line by line. */
+ for (; i < size; i++) {
+ if (s->buffer[i] == '\n') {
+ print_nmi_seq_line(s, last_i, i);
+ last_i = i + 1;
+ }
+ }
+ /* Check if there was a partial line. */
+ if (last_i < size) {
+ print_nmi_seq_line(s, last_i, size - 1);
+ pr_cont("\n");
+ }
+
+ /*
+ * Check that nothing has got added in the meantime and truncate
+ * the buffer. Note that atomic_cmpxchg() is an implicit memory
+ * barrier that makes sure that the data were copied before
+ * updating s->len.
+ */
+ if (atomic_cmpxchg(&s->len, len, 0) != len)
+ goto more;
+
+out:
+ raw_spin_unlock(&read_lock);
+}
+
+/**
+ * printk_nmi_flush - flush all per-cpu nmi buffers.
+ *
+ * The buffers are flushed automatically via IRQ work. This function
+ * is useful only when someone wants to be sure that all buffers have
+ * been flushed at some point.
+ */
+void printk_nmi_flush(void)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ __printk_nmi_flush(&per_cpu(nmi_print_seq, cpu).work);
+}
+
+void __init printk_nmi_init(void)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ struct nmi_seq_buf *s = &per_cpu(nmi_print_seq, cpu);
+
+ init_irq_work(&s->work, __printk_nmi_flush);
+ }
+}
+
+void printk_nmi_enter(void)
+{
+ this_cpu_write(printk_func, vprintk_nmi);
+}
+
+void printk_nmi_exit(void)
+{
+ this_cpu_write(printk_func, vprintk_default);
+}
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 2ce8826f1053..88641c74163d 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -54,6 +54,7 @@

#include "console_cmdline.h"
#include "braille.h"
+#include "printk.h"

int console_printk[4] = {
CONSOLE_LOGLEVEL_DEFAULT, /* console_loglevel */
@@ -1867,14 +1868,6 @@ int vprintk_default(const char *fmt, va_list args)
}
EXPORT_SYMBOL_GPL(vprintk_default);

-/*
- * This allows printk to be diverted to another function per cpu.
- * This is useful for calling printk functions from within NMI
- * without worrying about race conditions that can lock up the
- * box.
- */
-DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
-
/**
* printk - print a kernel message
* @fmt: format string
@@ -1898,21 +1891,11 @@ DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
*/
asmlinkage __visible int printk(const char *fmt, ...)
{
- printk_func_t vprintk_func;
va_list args;
int r;

va_start(args, fmt);
-
- /*
- * If a caller overrides the per_cpu printk_func, then it needs
- * to disable preemption when calling printk(). Otherwise
- * the printk_func should be set to the default. No need to
- * disable preemption here.
- */
- vprintk_func = this_cpu_read(printk_func);
r = vprintk_func(fmt, args);
-
va_end(args);

return r;
diff --git a/kernel/printk/printk.h b/kernel/printk/printk.h
new file mode 100644
index 000000000000..3492f5def4ef
--- /dev/null
+++ b/kernel/printk/printk.h
@@ -0,0 +1,44 @@
+/*
+ * printk.h - printk internal definitions
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/percpu-defs.h>
+
+typedef __printf(1, 0) int (*printk_func_t)(const char *fmt, va_list args);
+
+int __printf(1, 0) vprintk_default(const char *fmt, va_list args);
+
+#ifdef CONFIG_PRINTK_NMI
+
+/*
+ * printk() cannot take logbuf_lock in NMI context. Instead,
+ * it temporarily stores the strings into a per-CPU buffer.
+ * The alternative implementation is chosen transparently
+ * via per-CPU variable.
+ */
+DECLARE_PER_CPU(printk_func_t, printk_func);
+static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+{
+ return this_cpu_read(printk_func)(fmt, args);
+}
+
+#else /* CONFIG_PRINTK_NMI */
+
+static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+{
+ return vprintk_default(fmt, args);
+}
+
+#endif /* CONFIG_PRINTK_NMI */
diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
index 6019c53c669e..26caf51cc238 100644
--- a/lib/nmi_backtrace.c
+++ b/lib/nmi_backtrace.c
@@ -16,33 +16,14 @@
#include <linux/delay.h>
#include <linux/kprobes.h>
#include <linux/nmi.h>
-#include <linux/seq_buf.h>

#ifdef arch_trigger_all_cpu_backtrace
/* For reliability, we're prepared to waste bits here. */
static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
-static cpumask_t printtrace_mask;
-
-#define NMI_BUF_SIZE 4096
-
-struct nmi_seq_buf {
- unsigned char buffer[NMI_BUF_SIZE];
- struct seq_buf seq;
-};
-
-/* Safe printing in NMI context */
-static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);

/* "in progress" flag of arch_trigger_all_cpu_backtrace */
static unsigned long backtrace_flag;

-static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
-{
- const char *buf = s->buffer + start;
-
- printk("%.*s", (end - start) + 1, buf);
-}
-
/*
* When raise() is called it will be is passed a pointer to the
* backtrace_mask. Architectures that call nmi_cpu_backtrace()
@@ -52,8 +33,7 @@ static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
void nmi_trigger_all_cpu_backtrace(bool include_self,
void (*raise)(cpumask_t *mask))
{
- struct nmi_seq_buf *s;
- int i, cpu, this_cpu = get_cpu();
+ int i, this_cpu = get_cpu();

if (test_and_set_bit(0, &backtrace_flag)) {
/*
@@ -68,17 +48,6 @@ void nmi_trigger_all_cpu_backtrace(bool include_self,
if (!include_self)
cpumask_clear_cpu(this_cpu, to_cpumask(backtrace_mask));

- cpumask_copy(&printtrace_mask, to_cpumask(backtrace_mask));
-
- /*
- * Set up per_cpu seq_buf buffers that the NMIs running on the other
- * CPUs will write to.
- */
- for_each_cpu(cpu, to_cpumask(backtrace_mask)) {
- s = &per_cpu(nmi_print_seq, cpu);
- seq_buf_init(&s->seq, s->buffer, NMI_BUF_SIZE);
- }
-
if (!cpumask_empty(to_cpumask(backtrace_mask))) {
pr_info("Sending NMI to %s CPUs:\n",
(include_self ? "all" : "other"));
@@ -94,73 +63,25 @@ void nmi_trigger_all_cpu_backtrace(bool include_self,
}

/*
- * Now that all the NMIs have triggered, we can dump out their
- * back traces safely to the console.
+ * Force flush any remote buffers that might be stuck in IRQ context
+ * and therefore could not run their irq_work.
*/
- for_each_cpu(cpu, &printtrace_mask) {
- int len, last_i = 0;
+ printk_nmi_flush();

- s = &per_cpu(nmi_print_seq, cpu);
- len = seq_buf_used(&s->seq);
- if (!len)
- continue;
-
- /* Print line by line. */
- for (i = 0; i < len; i++) {
- if (s->buffer[i] == '\n') {
- print_seq_line(s, last_i, i);
- last_i = i + 1;
- }
- }
- /* Check if there was a partial line. */
- if (last_i < len) {
- print_seq_line(s, last_i, len - 1);
- pr_cont("\n");
- }
- }
-
- clear_bit(0, &backtrace_flag);
- smp_mb__after_atomic();
+ clear_bit_unlock(0, &backtrace_flag);
put_cpu();
}

-/*
- * It is not safe to call printk() directly from NMI handlers.
- * It may be fine if the NMI detected a lock up and we have no choice
- * but to do so, but doing a NMI on all other CPUs to get a back trace
- * can be done with a sysrq-l. We don't want that to lock up, which
- * can happen if the NMI interrupts a printk in progress.
- *
- * Instead, we redirect the vprintk() to this nmi_vprintk() that writes
- * the content into a per cpu seq_buf buffer. Then when the NMIs are
- * all done, we can safely dump the contents of the seq_buf to a printk()
- * from a non NMI context.
- */
-static int nmi_vprintk(const char *fmt, va_list args)
-{
- struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
- unsigned int len = seq_buf_used(&s->seq);
-
- seq_buf_vprintf(&s->seq, fmt, args);
- return seq_buf_used(&s->seq) - len;
-}
-
bool nmi_cpu_backtrace(struct pt_regs *regs)
{
int cpu = smp_processor_id();

if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
- printk_func_t printk_func_save = this_cpu_read(printk_func);
-
- /* Replace printk to write into the NMI seq */
- this_cpu_write(printk_func, nmi_vprintk);
pr_warn("NMI backtrace for cpu %d\n", cpu);
if (regs)
show_regs(regs);
else
dump_stack();
- this_cpu_write(printk_func, printk_func_save);
-
cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
return true;
}
--
1.8.5.6

2015-11-27 11:09:57

by Petr Mladek

Subject: [PATCH v2 2/5] printk/nmi: Use IRQ work only when ready

NMIs can happen at any time. This patch makes sure that the safe
printk() in NMI schedules IRQ work only once the related
structures are initialized.

All pending messages are flushed when the IRQ work is being
initialized.

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/printk/nmi.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
index 3989e13a0021..05b4c09110f9 100644
--- a/kernel/printk/nmi.c
+++ b/kernel/printk/nmi.c
@@ -36,6 +36,7 @@
* were handled or when IRQs are blocked.
*/
DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
+static int printk_nmi_irq_ready;

struct nmi_seq_buf {
atomic_t len; /* length of written data */
@@ -80,8 +81,11 @@ again:
goto again;

/* Get flushed in a more safe context. */
- if (add)
+ if (add && printk_nmi_irq_ready) {
+ /* Make sure that IRQ work is really initialized. */
+ smp_rmb();
irq_work_queue(&s->work);
+ }

return add;
}
@@ -187,6 +191,13 @@ void __init printk_nmi_init(void)

init_irq_work(&s->work, __printk_nmi_flush);
}
+
+ /* Make sure that IRQ works are initialized before enabling. */
+ smp_wmb();
+ printk_nmi_irq_ready = 1;
+
+ /* Flush pending messages that did not have scheduled IRQ works. */
+ printk_nmi_flush();
}

void printk_nmi_enter(void)
--
1.8.5.6

2015-11-27 11:10:00

by Petr Mladek

Subject: [PATCH v2 3/5] printk/nmi: Try hard to print Oops message in NMI context

Oops messages are important for debugging. We should try hard to get
them directly into the ring buffer and print them to the console
even in NMI context. This is problematic because the needed locks
might already be taken.

What we can do, though, is to zap all printk locks. We already do this
when a printk recursion is detected. This should be safe because
the system is crashing and there shouldn't be any printk caller
that would cause the deadlock.

We should also flush the printk NMI buffers when Oops begins.
bust_spinlocks() might sound like a bad location but
it already does several other printk-related operations.

Finally, we have to serialize the printing so that we do not mix
backtraces from different CPUs. A simple spinlock is enough. It has
already been used for this purpose, see commit a9edc88093287183a
("x86/nmi: Perform a safe NMI stack trace on all CPUs").

Signed-off-by: Petr Mladek <[email protected]>
---
arch/s390/mm/fault.c | 1 +
kernel/printk/nmi.c | 6 +++++-
kernel/printk/printk.c | 7 +++++++
lib/bust_spinlocks.c | 1 +
lib/nmi_backtrace.c | 8 ++++++++
5 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index ec1a30d0d11a..6566344db263 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -80,6 +80,7 @@ void bust_spinlocks(int yes)
{
if (yes) {
oops_in_progress = 1;
+ printk_nmi_flush();
} else {
int loglevel_save = console_loglevel;
console_unblank();
diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
index 05b4c09110f9..34ba760ae794 100644
--- a/kernel/printk/nmi.c
+++ b/kernel/printk/nmi.c
@@ -202,7 +202,11 @@ void __init printk_nmi_init(void)

void printk_nmi_enter(void)
{
- this_cpu_write(printk_func, vprintk_nmi);
+ /*
+ * We try hard to print the messages directly when Oops is in progress.
+ */
+ if (!oops_in_progress)
+ this_cpu_write(printk_func, vprintk_nmi);
}

void printk_nmi_exit(void)
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 88641c74163d..e970b623ae26 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1705,6 +1705,13 @@ asmlinkage int vprintk_emit(int facility, int level,
}

lockdep_off();
+ /*
+ * Messages are passed from NMI context using an extra buffer.
+ * The only exception is when Oops is in progress. In this case
+ * we try hard to get them out directly.
+ */
+ if (unlikely(oops_in_progress && in_nmi()))
+ zap_locks();
raw_spin_lock(&logbuf_lock);
logbuf_cpu = this_cpu;

diff --git a/lib/bust_spinlocks.c b/lib/bust_spinlocks.c
index f8e0e5367398..7a98098b0fef 100644
--- a/lib/bust_spinlocks.c
+++ b/lib/bust_spinlocks.c
@@ -20,6 +20,7 @@ void __attribute__((weak)) bust_spinlocks(int yes)
{
if (yes) {
++oops_in_progress;
+ printk_nmi_flush();
} else {
#ifdef CONFIG_VT
unblank_screen();
diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
index 26caf51cc238..026e9f16e742 100644
--- a/lib/nmi_backtrace.c
+++ b/lib/nmi_backtrace.c
@@ -74,14 +74,22 @@ void nmi_trigger_all_cpu_backtrace(bool include_self,

bool nmi_cpu_backtrace(struct pt_regs *regs)
{
+ static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
int cpu = smp_processor_id();

if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
+ /* Serialize backtraces when printing directly. */
+ if (unlikely(oops_in_progress))
+ arch_spin_lock(&lock);
+
pr_warn("NMI backtrace for cpu %d\n", cpu);
if (regs)
show_regs(regs);
else
dump_stack();
+
+ if (unlikely(oops_in_progress))
+ arch_spin_unlock(&lock);
cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
return true;
}
--
1.8.5.6

2015-11-27 11:10:38

by Petr Mladek

Subject: [PATCH v2 4/5] printk/nmi: Warn when some message has been lost in NMI context

We cannot resize the temporary buffer in NMI context. Let's warn
if a message is lost.

This is rather theoretical. printk() should not be used in NMI.
The only sensible use is when we want to print a backtrace from all
CPUs. The current buffer should be big enough for this purpose.

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/printk/nmi.c | 5 ++++-
kernel/printk/printk.c | 10 ++++++++++
kernel/printk/printk.h | 11 +++++++++++
3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
index 34ba760ae794..8af1e4016719 100644
--- a/kernel/printk/nmi.c
+++ b/kernel/printk/nmi.c
@@ -37,6 +37,7 @@
*/
DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
static int printk_nmi_irq_ready;
+atomic_t nmi_message_lost;

struct nmi_seq_buf {
atomic_t len; /* length of written data */
@@ -60,8 +61,10 @@ static int vprintk_nmi(const char *fmt, va_list args)
again:
len = atomic_read(&s->len);

- if (len >= sizeof(s->buffer))
+ if (len >= sizeof(s->buffer)) {
+ atomic_inc(&nmi_message_lost);
return 0;
+ }

/*
* Make sure that all old data have been read before the buffer was
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index e970b623ae26..e924c6ac2e6d 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1669,6 +1669,7 @@ asmlinkage int vprintk_emit(int facility, int level,
unsigned long flags;
int this_cpu;
int printed_len = 0;
+ int nmi_message_lost;
bool in_sched = false;
/* cpu currently holding logbuf_lock in this function */
static unsigned int logbuf_cpu = UINT_MAX;
@@ -1726,6 +1727,15 @@ asmlinkage int vprintk_emit(int facility, int level,
strlen(recursion_msg));
}

+ nmi_message_lost = get_nmi_message_lost();
+ if (unlikely(nmi_message_lost)) {
+ text_len = scnprintf(textbuf, sizeof(textbuf),
+ "BAD LUCK: lost %d message(s) from NMI context!",
+ nmi_message_lost);
+ printed_len += log_store(0, 2, LOG_PREFIX|LOG_NEWLINE, 0,
+ NULL, 0, textbuf, text_len);
+ }
+
/*
* The printf needs to come first; we need the syslog
* prefix which might be passed-in as a parameter.
diff --git a/kernel/printk/printk.h b/kernel/printk/printk.h
index 3492f5def4ef..834830c93aad 100644
--- a/kernel/printk/printk.h
+++ b/kernel/printk/printk.h
@@ -34,6 +34,12 @@ static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
return this_cpu_read(printk_func)(fmt, args);
}

+extern atomic_t nmi_message_lost;
+static inline int get_nmi_message_lost(void)
+{
+ return atomic_xchg(&nmi_message_lost, 0);
+}
+
#else /* CONFIG_PRINTK_NMI */

static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
@@ -41,4 +47,9 @@ static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
return vprintk_default(fmt, args);
}

+static inline int get_nmi_message_lost(void)
+{
+ return 0;
+}
+
#endif /* CONFIG_PRINTK_NMI */
--
1.8.5.6

2015-11-27 11:10:34

by Petr Mladek

[permalink] [raw]
Subject: [PATCH v2 5/5] printk/nmi: Increase the size of the temporary buffer

Testing has shown that a backtrace sometimes does not fit
into the 4kB temporary buffer that is used in NMI context.

The warnings go away when the temporary buffer size is doubled.

Note that this problem existed even in the x86-specific
implementation that was added by the commit a9edc8809328
("x86/nmi: Perform a safe NMI stack trace on all CPUs").
Nobody noticed it because it did not print any warnings.
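The hunk below relies on a sizing idiom: the buffer field absorbs whatever the header fields do not use, so the whole per-CPU structure stays at a round number of pages. A sketch of that idiom with stand-in types (PAGE_SIZE and struct header here are assumptions, not the kernel's definitions); note that subtracting the sizeof() of each field individually, as the kernel struct does, can leave the total slightly under two pages once padding is counted, whereas subtracting the header as a whole does not:

```c
#include <stddef.h>

#define PAGE_SIZE 4096

/* Stand-in for the atomic_t len + struct irq_work header fields. */
struct header {
	int len;
	void *work;
};

/* Buffer sized so that header + buffer fill exactly two pages. */
struct two_page_buf {
	struct header h;
	unsigned char buffer[2 * PAGE_SIZE - sizeof(struct header)];
};
```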

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/printk/nmi.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
index 8af1e4016719..6111644d5f01 100644
--- a/kernel/printk/nmi.c
+++ b/kernel/printk/nmi.c
@@ -42,7 +42,7 @@ atomic_t nmi_message_lost;
struct nmi_seq_buf {
atomic_t len; /* length of written data */
struct irq_work work; /* IRQ work that flushes the buffer */
- unsigned char buffer[PAGE_SIZE - sizeof(atomic_t) -
+ unsigned char buffer[2 * PAGE_SIZE - sizeof(atomic_t) -
sizeof(struct irq_work)];
};
static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
--
1.8.5.6

2015-11-27 11:50:46

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

Hi Petr,

[auto build test WARNING on powerpc/next]
[also build test WARNING on v4.4-rc2 next-20151127]
[cannot apply to tip/x86/core]

url: https://github.com/0day-ci/linux/commits/Petr-Mladek/Cleaning-printk-stuff-in-NMI-context/20151127-191620
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: mn10300-asb2364_defconfig (attached as .config)
reproduce:
wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=mn10300

All warnings (new ones prefixed by >>):

warning: (MN10300) selects HAVE_NMI_WATCHDOG which has unmet direct dependencies (HAVE_NMI)

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation



2015-11-27 12:03:55

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

Hi Petr,

[auto build test ERROR on: powerpc/next]
[also build test ERROR on: v4.4-rc2 next-20151127]
[cannot apply to: tip/x86/core]

url: https://github.com/0day-ci/linux/commits/Petr-Mladek/Cleaning-printk-stuff-in-NMI-context/20151127-191620
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: i386-randconfig-s1-201547 (attached as .config)
reproduce:
# save the attached .config to linux build tree
make ARCH=i386

All error/warnings (new ones prefixed by >>):

In file included from arch/x86/include/asm/percpu.h:551:0,
from arch/x86/include/asm/current.h:5,
from arch/x86/include/asm/processor.h:15,
from arch/x86/include/asm/atomic.h:6,
from include/linux/atomic.h:4,
from include/linux/llist.h:58,
from include/linux/smp.h:14,
from kernel/printk/nmi.c:18:
kernel/printk/printk.h: In function 'vprintk_func':
>> include/asm-generic/percpu.h:111:2: error: implicit declaration of function 'preempt_disable' [-Werror=implicit-function-declaration]
preempt_disable(); \
^
>> include/asm-generic/percpu.h:305:31: note: in expansion of macro 'this_cpu_generic_read'
#define this_cpu_read_8(pcp) this_cpu_generic_read(pcp)
^
>> include/linux/percpu-defs.h:311:23: note: in expansion of macro 'this_cpu_read_8'
case 8: pscr_ret__ = stem##8(variable); break; \
^
>> include/linux/percpu-defs.h:494:29: note: in expansion of macro '__pcpu_size_call_return'
#define this_cpu_read(pcp) __pcpu_size_call_return(this_cpu_read_, pcp)
^
>> kernel/printk/printk.h:34:9: note: in expansion of macro 'this_cpu_read'
return this_cpu_read(printk_func)(fmt, args);
^
>> include/asm-generic/percpu.h:113:2: error: implicit declaration of function 'preempt_enable' [-Werror=implicit-function-declaration]
preempt_enable(); \
^
>> include/asm-generic/percpu.h:305:31: note: in expansion of macro 'this_cpu_generic_read'
#define this_cpu_read_8(pcp) this_cpu_generic_read(pcp)
^
>> include/linux/percpu-defs.h:311:23: note: in expansion of macro 'this_cpu_read_8'
case 8: pscr_ret__ = stem##8(variable); break; \
^
>> include/linux/percpu-defs.h:494:29: note: in expansion of macro '__pcpu_size_call_return'
#define this_cpu_read(pcp) __pcpu_size_call_return(this_cpu_read_, pcp)
^
>> kernel/printk/printk.h:34:9: note: in expansion of macro 'this_cpu_read'
return this_cpu_read(printk_func)(fmt, args);
^
kernel/printk/nmi.c: In function '__printk_nmi_flush':
>> kernel/printk/nmi.c:106:9: error: unknown type name 'raw_spinlock_t'
static raw_spinlock_t read_lock =
^
>> kernel/printk/nmi.c:107:3: error: implicit declaration of function '__RAW_SPIN_LOCK_INITIALIZER' [-Werror=implicit-function-declaration]
__RAW_SPIN_LOCK_INITIALIZER(read_lock);
^
>> kernel/printk/nmi.c:107:3: error: initializer element is not constant
>> kernel/printk/nmi.c:118:2: error: implicit declaration of function 'raw_spin_lock' [-Werror=implicit-function-declaration]
raw_spin_lock(&read_lock);
^
>> kernel/printk/nmi.c:163:2: error: implicit declaration of function 'raw_spin_unlock' [-Werror=implicit-function-declaration]
raw_spin_unlock(&read_lock);
^
cc1: some warnings being treated as errors

vim +/preempt_disable +111 include/asm-generic/percpu.h

9c28278a24 Tejun Heo 2014-06-17 105 (__ret); \
9c28278a24 Tejun Heo 2014-06-17 106 })
9c28278a24 Tejun Heo 2014-06-17 107
eba117889a Tejun Heo 2014-06-17 108 #define this_cpu_generic_read(pcp) \
eba117889a Tejun Heo 2014-06-17 109 ({ \
eba117889a Tejun Heo 2014-06-17 110 typeof(pcp) __ret; \
9c28278a24 Tejun Heo 2014-06-17 @111 preempt_disable(); \
eba117889a Tejun Heo 2014-06-17 112 __ret = *this_cpu_ptr(&(pcp)); \
9c28278a24 Tejun Heo 2014-06-17 @113 preempt_enable(); \
eba117889a Tejun Heo 2014-06-17 114 __ret; \
9c28278a24 Tejun Heo 2014-06-17 115 })
9c28278a24 Tejun Heo 2014-06-17 116
eba117889a Tejun Heo 2014-06-17 117 #define this_cpu_generic_to_op(pcp, val, op) \
9c28278a24 Tejun Heo 2014-06-17 118 do { \
eba117889a Tejun Heo 2014-06-17 119 unsigned long __flags; \
eba117889a Tejun Heo 2014-06-17 120 raw_local_irq_save(__flags); \
9c28278a24 Tejun Heo 2014-06-17 121 *raw_cpu_ptr(&(pcp)) op val; \
eba117889a Tejun Heo 2014-06-17 122 raw_local_irq_restore(__flags); \
9c28278a24 Tejun Heo 2014-06-17 123 } while (0)
9c28278a24 Tejun Heo 2014-06-17 124
eba117889a Tejun Heo 2014-06-17 125 #define this_cpu_generic_add_return(pcp, val) \
9c28278a24 Tejun Heo 2014-06-17 126 ({ \
eba117889a Tejun Heo 2014-06-17 127 typeof(pcp) __ret; \
eba117889a Tejun Heo 2014-06-17 128 unsigned long __flags; \
eba117889a Tejun Heo 2014-06-17 129 raw_local_irq_save(__flags); \
9c28278a24 Tejun Heo 2014-06-17 130 raw_cpu_add(pcp, val); \
eba117889a Tejun Heo 2014-06-17 131 __ret = raw_cpu_read(pcp); \
eba117889a Tejun Heo 2014-06-17 132 raw_local_irq_restore(__flags); \
eba117889a Tejun Heo 2014-06-17 133 __ret; \
9c28278a24 Tejun Heo 2014-06-17 134 })
9c28278a24 Tejun Heo 2014-06-17 135
eba117889a Tejun Heo 2014-06-17 136 #define this_cpu_generic_xchg(pcp, nval) \
eba117889a Tejun Heo 2014-06-17 137 ({ \
eba117889a Tejun Heo 2014-06-17 138 typeof(pcp) __ret; \
eba117889a Tejun Heo 2014-06-17 139 unsigned long __flags; \
eba117889a Tejun Heo 2014-06-17 140 raw_local_irq_save(__flags); \
eba117889a Tejun Heo 2014-06-17 141 __ret = raw_cpu_read(pcp); \
9c28278a24 Tejun Heo 2014-06-17 142 raw_cpu_write(pcp, nval); \
eba117889a Tejun Heo 2014-06-17 143 raw_local_irq_restore(__flags); \
eba117889a Tejun Heo 2014-06-17 144 __ret; \
9c28278a24 Tejun Heo 2014-06-17 145 })
9c28278a24 Tejun Heo 2014-06-17 146
eba117889a Tejun Heo 2014-06-17 147 #define this_cpu_generic_cmpxchg(pcp, oval, nval) \
9c28278a24 Tejun Heo 2014-06-17 148 ({ \
eba117889a Tejun Heo 2014-06-17 149 typeof(pcp) __ret; \
eba117889a Tejun Heo 2014-06-17 150 unsigned long __flags; \
eba117889a Tejun Heo 2014-06-17 151 raw_local_irq_save(__flags); \
eba117889a Tejun Heo 2014-06-17 152 __ret = raw_cpu_read(pcp); \
eba117889a Tejun Heo 2014-06-17 153 if (__ret == (oval)) \
9c28278a24 Tejun Heo 2014-06-17 154 raw_cpu_write(pcp, nval); \
eba117889a Tejun Heo 2014-06-17 155 raw_local_irq_restore(__flags); \
eba117889a Tejun Heo 2014-06-17 156 __ret; \
9c28278a24 Tejun Heo 2014-06-17 157 })
9c28278a24 Tejun Heo 2014-06-17 158
eba117889a Tejun Heo 2014-06-17 159 #define this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \
9c28278a24 Tejun Heo 2014-06-17 160 ({ \
eba117889a Tejun Heo 2014-06-17 161 int __ret; \
eba117889a Tejun Heo 2014-06-17 162 unsigned long __flags; \
eba117889a Tejun Heo 2014-06-17 163 raw_local_irq_save(__flags); \
eba117889a Tejun Heo 2014-06-17 164 __ret = raw_cpu_generic_cmpxchg_double(pcp1, pcp2, \
9c28278a24 Tejun Heo 2014-06-17 165 oval1, oval2, nval1, nval2); \
eba117889a Tejun Heo 2014-06-17 166 raw_local_irq_restore(__flags); \
eba117889a Tejun Heo 2014-06-17 167 __ret; \
9c28278a24 Tejun Heo 2014-06-17 168 })
9c28278a24 Tejun Heo 2014-06-17 169
47b69ad673 Tejun Heo 2014-06-17 170 #ifndef raw_cpu_read_1
47b69ad673 Tejun Heo 2014-06-17 171 #define raw_cpu_read_1(pcp) (*raw_cpu_ptr(&(pcp)))
47b69ad673 Tejun Heo 2014-06-17 172 #endif
47b69ad673 Tejun Heo 2014-06-17 173 #ifndef raw_cpu_read_2
47b69ad673 Tejun Heo 2014-06-17 174 #define raw_cpu_read_2(pcp) (*raw_cpu_ptr(&(pcp)))
47b69ad673 Tejun Heo 2014-06-17 175 #endif
47b69ad673 Tejun Heo 2014-06-17 176 #ifndef raw_cpu_read_4
47b69ad673 Tejun Heo 2014-06-17 177 #define raw_cpu_read_4(pcp) (*raw_cpu_ptr(&(pcp)))
47b69ad673 Tejun Heo 2014-06-17 178 #endif
47b69ad673 Tejun Heo 2014-06-17 179 #ifndef raw_cpu_read_8
47b69ad673 Tejun Heo 2014-06-17 180 #define raw_cpu_read_8(pcp) (*raw_cpu_ptr(&(pcp)))
47b69ad673 Tejun Heo 2014-06-17 181 #endif
47b69ad673 Tejun Heo 2014-06-17 182
47b69ad673 Tejun Heo 2014-06-17 183 #ifndef raw_cpu_write_1
eba117889a Tejun Heo 2014-06-17 184 #define raw_cpu_write_1(pcp, val) raw_cpu_generic_to_op(pcp, val, =)
47b69ad673 Tejun Heo 2014-06-17 185 #endif
47b69ad673 Tejun Heo 2014-06-17 186 #ifndef raw_cpu_write_2
eba117889a Tejun Heo 2014-06-17 187 #define raw_cpu_write_2(pcp, val) raw_cpu_generic_to_op(pcp, val, =)
47b69ad673 Tejun Heo 2014-06-17 188 #endif
47b69ad673 Tejun Heo 2014-06-17 189 #ifndef raw_cpu_write_4
eba117889a Tejun Heo 2014-06-17 190 #define raw_cpu_write_4(pcp, val) raw_cpu_generic_to_op(pcp, val, =)
47b69ad673 Tejun Heo 2014-06-17 191 #endif
47b69ad673 Tejun Heo 2014-06-17 192 #ifndef raw_cpu_write_8
eba117889a Tejun Heo 2014-06-17 193 #define raw_cpu_write_8(pcp, val) raw_cpu_generic_to_op(pcp, val, =)
47b69ad673 Tejun Heo 2014-06-17 194 #endif
47b69ad673 Tejun Heo 2014-06-17 195
47b69ad673 Tejun Heo 2014-06-17 196 #ifndef raw_cpu_add_1
eba117889a Tejun Heo 2014-06-17 197 #define raw_cpu_add_1(pcp, val) raw_cpu_generic_to_op(pcp, val, +=)
47b69ad673 Tejun Heo 2014-06-17 198 #endif
47b69ad673 Tejun Heo 2014-06-17 199 #ifndef raw_cpu_add_2
eba117889a Tejun Heo 2014-06-17 200 #define raw_cpu_add_2(pcp, val) raw_cpu_generic_to_op(pcp, val, +=)
47b69ad673 Tejun Heo 2014-06-17 201 #endif
47b69ad673 Tejun Heo 2014-06-17 202 #ifndef raw_cpu_add_4
eba117889a Tejun Heo 2014-06-17 203 #define raw_cpu_add_4(pcp, val) raw_cpu_generic_to_op(pcp, val, +=)
47b69ad673 Tejun Heo 2014-06-17 204 #endif
47b69ad673 Tejun Heo 2014-06-17 205 #ifndef raw_cpu_add_8
eba117889a Tejun Heo 2014-06-17 206 #define raw_cpu_add_8(pcp, val) raw_cpu_generic_to_op(pcp, val, +=)
47b69ad673 Tejun Heo 2014-06-17 207 #endif
47b69ad673 Tejun Heo 2014-06-17 208
47b69ad673 Tejun Heo 2014-06-17 209 #ifndef raw_cpu_and_1
eba117889a Tejun Heo 2014-06-17 210 #define raw_cpu_and_1(pcp, val) raw_cpu_generic_to_op(pcp, val, &=)
47b69ad673 Tejun Heo 2014-06-17 211 #endif
47b69ad673 Tejun Heo 2014-06-17 212 #ifndef raw_cpu_and_2
eba117889a Tejun Heo 2014-06-17 213 #define raw_cpu_and_2(pcp, val) raw_cpu_generic_to_op(pcp, val, &=)
47b69ad673 Tejun Heo 2014-06-17 214 #endif
47b69ad673 Tejun Heo 2014-06-17 215 #ifndef raw_cpu_and_4
eba117889a Tejun Heo 2014-06-17 216 #define raw_cpu_and_4(pcp, val) raw_cpu_generic_to_op(pcp, val, &=)
47b69ad673 Tejun Heo 2014-06-17 217 #endif
47b69ad673 Tejun Heo 2014-06-17 218 #ifndef raw_cpu_and_8
eba117889a Tejun Heo 2014-06-17 219 #define raw_cpu_and_8(pcp, val) raw_cpu_generic_to_op(pcp, val, &=)
47b69ad673 Tejun Heo 2014-06-17 220 #endif
47b69ad673 Tejun Heo 2014-06-17 221
47b69ad673 Tejun Heo 2014-06-17 222 #ifndef raw_cpu_or_1
eba117889a Tejun Heo 2014-06-17 223 #define raw_cpu_or_1(pcp, val) raw_cpu_generic_to_op(pcp, val, |=)
47b69ad673 Tejun Heo 2014-06-17 224 #endif
47b69ad673 Tejun Heo 2014-06-17 225 #ifndef raw_cpu_or_2
eba117889a Tejun Heo 2014-06-17 226 #define raw_cpu_or_2(pcp, val) raw_cpu_generic_to_op(pcp, val, |=)
47b69ad673 Tejun Heo 2014-06-17 227 #endif
47b69ad673 Tejun Heo 2014-06-17 228 #ifndef raw_cpu_or_4
eba117889a Tejun Heo 2014-06-17 229 #define raw_cpu_or_4(pcp, val) raw_cpu_generic_to_op(pcp, val, |=)
47b69ad673 Tejun Heo 2014-06-17 230 #endif
47b69ad673 Tejun Heo 2014-06-17 231 #ifndef raw_cpu_or_8
eba117889a Tejun Heo 2014-06-17 232 #define raw_cpu_or_8(pcp, val) raw_cpu_generic_to_op(pcp, val, |=)
47b69ad673 Tejun Heo 2014-06-17 233 #endif
47b69ad673 Tejun Heo 2014-06-17 234
47b69ad673 Tejun Heo 2014-06-17 235 #ifndef raw_cpu_add_return_1
47b69ad673 Tejun Heo 2014-06-17 236 #define raw_cpu_add_return_1(pcp, val) raw_cpu_generic_add_return(pcp, val)
47b69ad673 Tejun Heo 2014-06-17 237 #endif
47b69ad673 Tejun Heo 2014-06-17 238 #ifndef raw_cpu_add_return_2
47b69ad673 Tejun Heo 2014-06-17 239 #define raw_cpu_add_return_2(pcp, val) raw_cpu_generic_add_return(pcp, val)
47b69ad673 Tejun Heo 2014-06-17 240 #endif
47b69ad673 Tejun Heo 2014-06-17 241 #ifndef raw_cpu_add_return_4
47b69ad673 Tejun Heo 2014-06-17 242 #define raw_cpu_add_return_4(pcp, val) raw_cpu_generic_add_return(pcp, val)
47b69ad673 Tejun Heo 2014-06-17 243 #endif
47b69ad673 Tejun Heo 2014-06-17 244 #ifndef raw_cpu_add_return_8
47b69ad673 Tejun Heo 2014-06-17 245 #define raw_cpu_add_return_8(pcp, val) raw_cpu_generic_add_return(pcp, val)
47b69ad673 Tejun Heo 2014-06-17 246 #endif
47b69ad673 Tejun Heo 2014-06-17 247
47b69ad673 Tejun Heo 2014-06-17 248 #ifndef raw_cpu_xchg_1
47b69ad673 Tejun Heo 2014-06-17 249 #define raw_cpu_xchg_1(pcp, nval) raw_cpu_generic_xchg(pcp, nval)
47b69ad673 Tejun Heo 2014-06-17 250 #endif
47b69ad673 Tejun Heo 2014-06-17 251 #ifndef raw_cpu_xchg_2
47b69ad673 Tejun Heo 2014-06-17 252 #define raw_cpu_xchg_2(pcp, nval) raw_cpu_generic_xchg(pcp, nval)
47b69ad673 Tejun Heo 2014-06-17 253 #endif
47b69ad673 Tejun Heo 2014-06-17 254 #ifndef raw_cpu_xchg_4
47b69ad673 Tejun Heo 2014-06-17 255 #define raw_cpu_xchg_4(pcp, nval) raw_cpu_generic_xchg(pcp, nval)
47b69ad673 Tejun Heo 2014-06-17 256 #endif
47b69ad673 Tejun Heo 2014-06-17 257 #ifndef raw_cpu_xchg_8
47b69ad673 Tejun Heo 2014-06-17 258 #define raw_cpu_xchg_8(pcp, nval) raw_cpu_generic_xchg(pcp, nval)
47b69ad673 Tejun Heo 2014-06-17 259 #endif
47b69ad673 Tejun Heo 2014-06-17 260
47b69ad673 Tejun Heo 2014-06-17 261 #ifndef raw_cpu_cmpxchg_1
eba117889a Tejun Heo 2014-06-17 262 #define raw_cpu_cmpxchg_1(pcp, oval, nval) \
eba117889a Tejun Heo 2014-06-17 263 raw_cpu_generic_cmpxchg(pcp, oval, nval)
47b69ad673 Tejun Heo 2014-06-17 264 #endif
47b69ad673 Tejun Heo 2014-06-17 265 #ifndef raw_cpu_cmpxchg_2
eba117889a Tejun Heo 2014-06-17 266 #define raw_cpu_cmpxchg_2(pcp, oval, nval) \
eba117889a Tejun Heo 2014-06-17 267 raw_cpu_generic_cmpxchg(pcp, oval, nval)
47b69ad673 Tejun Heo 2014-06-17 268 #endif
47b69ad673 Tejun Heo 2014-06-17 269 #ifndef raw_cpu_cmpxchg_4
eba117889a Tejun Heo 2014-06-17 270 #define raw_cpu_cmpxchg_4(pcp, oval, nval) \
eba117889a Tejun Heo 2014-06-17 271 raw_cpu_generic_cmpxchg(pcp, oval, nval)
47b69ad673 Tejun Heo 2014-06-17 272 #endif
47b69ad673 Tejun Heo 2014-06-17 273 #ifndef raw_cpu_cmpxchg_8
eba117889a Tejun Heo 2014-06-17 274 #define raw_cpu_cmpxchg_8(pcp, oval, nval) \
eba117889a Tejun Heo 2014-06-17 275 raw_cpu_generic_cmpxchg(pcp, oval, nval)
47b69ad673 Tejun Heo 2014-06-17 276 #endif
47b69ad673 Tejun Heo 2014-06-17 277
47b69ad673 Tejun Heo 2014-06-17 278 #ifndef raw_cpu_cmpxchg_double_1
47b69ad673 Tejun Heo 2014-06-17 279 #define raw_cpu_cmpxchg_double_1(pcp1, pcp2, oval1, oval2, nval1, nval2) \
47b69ad673 Tejun Heo 2014-06-17 280 raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
47b69ad673 Tejun Heo 2014-06-17 281 #endif
47b69ad673 Tejun Heo 2014-06-17 282 #ifndef raw_cpu_cmpxchg_double_2
47b69ad673 Tejun Heo 2014-06-17 283 #define raw_cpu_cmpxchg_double_2(pcp1, pcp2, oval1, oval2, nval1, nval2) \
47b69ad673 Tejun Heo 2014-06-17 284 raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
47b69ad673 Tejun Heo 2014-06-17 285 #endif
47b69ad673 Tejun Heo 2014-06-17 286 #ifndef raw_cpu_cmpxchg_double_4
47b69ad673 Tejun Heo 2014-06-17 287 #define raw_cpu_cmpxchg_double_4(pcp1, pcp2, oval1, oval2, nval1, nval2) \
47b69ad673 Tejun Heo 2014-06-17 288 raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
47b69ad673 Tejun Heo 2014-06-17 289 #endif
47b69ad673 Tejun Heo 2014-06-17 290 #ifndef raw_cpu_cmpxchg_double_8
47b69ad673 Tejun Heo 2014-06-17 291 #define raw_cpu_cmpxchg_double_8(pcp1, pcp2, oval1, oval2, nval1, nval2) \
47b69ad673 Tejun Heo 2014-06-17 292 raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
47b69ad673 Tejun Heo 2014-06-17 293 #endif
47b69ad673 Tejun Heo 2014-06-17 294
47b69ad673 Tejun Heo 2014-06-17 295 #ifndef this_cpu_read_1
eba117889a Tejun Heo 2014-06-17 296 #define this_cpu_read_1(pcp) this_cpu_generic_read(pcp)
47b69ad673 Tejun Heo 2014-06-17 297 #endif
47b69ad673 Tejun Heo 2014-06-17 298 #ifndef this_cpu_read_2
eba117889a Tejun Heo 2014-06-17 299 #define this_cpu_read_2(pcp) this_cpu_generic_read(pcp)
47b69ad673 Tejun Heo 2014-06-17 300 #endif
47b69ad673 Tejun Heo 2014-06-17 301 #ifndef this_cpu_read_4
eba117889a Tejun Heo 2014-06-17 302 #define this_cpu_read_4(pcp) this_cpu_generic_read(pcp)
47b69ad673 Tejun Heo 2014-06-17 303 #endif
47b69ad673 Tejun Heo 2014-06-17 304 #ifndef this_cpu_read_8
eba117889a Tejun Heo 2014-06-17 @305 #define this_cpu_read_8(pcp) this_cpu_generic_read(pcp)
47b69ad673 Tejun Heo 2014-06-17 306 #endif
47b69ad673 Tejun Heo 2014-06-17 307
47b69ad673 Tejun Heo 2014-06-17 308 #ifndef this_cpu_write_1

:::::: The code at line 111 was first introduced by commit
:::::: 9c28278a24c01c0073fb89e53c1d2a605ab9587d percpu: reorder macros in percpu header files

:::::: TO: Tejun Heo <[email protected]>
:::::: CC: Tejun Heo <[email protected]>

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation



2015-11-27 14:26:20

by Max Filippov

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

Hi Petr,

On Fri, Nov 27, 2015 at 2:09 PM, Petr Mladek <[email protected]> wrote:
> printk() takes some locks and could not be used a safe way in NMI
> context.
>
> The chance of a deadlock is real especially when printing
> stacks from all CPUs. This particular problem has been addressed
> on x86 by the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack
> trace on all CPUs").
>
> This patch reuses most of the code and makes it generic. It is
> useful for all messages and architectures that support NMI.
>
> The alternative printk_func is set when entering and is reset when
> leaving NMI context. It queues IRQ work to copy the messages into
> the main ring buffer in a safe context.
>
> __printk_nmi_flush() copies all available messages and resets
> the buffer. Simple cmpxchg operations are then enough to stay
> synchronized with writers, and a spinlock is used to synchronize
> with other flushers.
>
> We no longer use seq_buf because it depends on an external lock.
> It would be hard to make all of its operations safe for
> lockless use, and it would be confusing and error-prone to
> make only some of them safe.
>
> The code is put into separate printk/nmi.c as suggested by
> Steven Rostedt. It needs a per-CPU buffer and is compiled only
> on architectures that call nmi_enter(). This is achieved by
> the new HAVE_NMI Kconfig flag.
>
> One exception is arm where the deferred printing is used for
> printing backtraces even without NMI. For this purpose,
> we define NEED_PRINTK_NMI Kconfig flag. The alternative
> printk_func is explicitly set when IPI_CPU_BACKTRACE is
> handled.
>
> Another exception is the Xtensa architecture, which uses just
> a fake NMI.

It's called "fake" because it is actually maskable, but it is sometimes
safe to use it as an NMI (when there are no other IRQs at the same
priority level and that level equals the EXCM level). That condition is
checked in arch/xtensa/include/asm/processor.h. So 'fake' here only
avoids confusion with the real NMI that exists on xtensa (and is not
currently used in Linux); otherwise, code that runs in a fake NMI must
follow the usual NMI rules.

To make xtensa compatible with your change, we can add a Kconfig choice
that controls whether the fake NMI should be used; it can then set
HAVE_NMI accordingly. I'll post a patch for xtensa.
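Such a Kconfig choice might look roughly like this (the symbol name and
wording are made up for illustration, not taken from any posted patch):

```
# Hypothetical Kconfig sketch for arch/xtensa/Kconfig
config XTENSA_FAKE_NMI
	bool "Use the highest medium-priority interrupt as a fake NMI"
	help
	  Treat the highest medium-priority interrupt level as an NMI
	  when it is the only interrupt at that level and the level
	  equals the EXCM level.

	  The architecture would then select HAVE_NMI only if this
	  option is enabled.
```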

--
Thanks.
-- Max

2015-11-27 15:38:14

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Fri 2015-11-27 19:49:48, kbuild test robot wrote:
> Hi Petr,
>
> [auto build test WARNING on powerpc/next]
> [also build test WARNING on v4.4-rc2 next-20151127]
> [cannot apply to tip/x86/core]
>
> url: https://github.com/0day-ci/linux/commits/Petr-Mladek/Cleaning-printk-stuff-in-NMI-context/20151127-191620
> base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> config: mn10300-asb2364_defconfig (attached as .config)
> reproduce:
> wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # save the attached .config to linux build tree
> make.cross ARCH=mn10300
>
> All warnings (new ones prefixed by >>):
>
> warning: (MN10300) selects HAVE_NMI_WATCHDOG which has unmet direct dependencies (HAVE_NMI)

MN10300 has its own implementation for entering and exiting NMI
handlers. It does not call nmi_enter() and nmi_exit().
Please find below an updated patch that adds printk_nmi_enter()
and printk_nmi_exit() to the custom entry points.
This allows us to add HAVE_NMI to arch/mn10300/Kconfig and avoid
the above warning.

The updated patch also fixes the includes in kernel/printk/nmi.c
and kernel/printk/printk.h, which resolves the other build errors
found by the kbuild test robot.

The kbuild test robot is a really cool thing!


From 1689f635cc423ff9887c6774ad6b59a1ea885e4b Mon Sep 17 00:00:00 2001
From: Petr Mladek <[email protected]>
Date: Thu, 2 Jul 2015 13:17:17 +0200
Subject: [PATCH 1/5] printk/nmi: Generic solution for safe printk in NMI

printk() takes some locks and therefore cannot be used safely in NMI
context.

The chance of a deadlock is real especially when printing
stacks from all CPUs. This particular problem has been addressed
on x86 by the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack
trace on all CPUs").

This patch reuses most of the code and makes it generic. It is
useful for all messages and architectures that support NMI.

The alternative printk_func is set when entering and is reset when
leaving NMI context. It queues IRQ work to copy the messages into
the main ring buffer in a safe context.

__printk_nmi_flush() copies all available messages and resets
the buffer. Simple cmpxchg operations are then enough to stay
synchronized with writers, and a spinlock is used to synchronize
with other flushers.

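The writer/flusher handshake described above can be sketched with plain
C11 atomics. This is a simplified single-buffer model with made-up
names; the kernel version additionally handles per-CPU buffers, the IRQ
work, and partial reads:

```c
#include <stdatomic.h>
#include <string.h>

#define BUF_SIZE 64

struct nmi_buf {
	atomic_int len;          /* length of data written so far */
	char data[BUF_SIZE];
};

/* Writer (NMI-like path): copy first, then commit the new length. */
static int nmi_buf_add(struct nmi_buf *b, const char *msg, int n)
{
	int len;
again:
	len = atomic_load(&b->len);
	if (len + n > BUF_SIZE)
		return 0;                 /* message lost */
	memcpy(b->data + len, msg, n);    /* write the payload first ... */
	/* ... then commit; a racing writer forces us to redo the copy. */
	if (!atomic_compare_exchange_strong(&b->len, &len, len + n))
		goto again;
	return n;
}

/* Flusher (safe context): take everything written and reset the length. */
static int nmi_buf_flush(struct nmi_buf *b, char *out)
{
	int len = atomic_exchange(&b->len, 0);

	memcpy(out, b->data, len);
	return len;
}
```

The copy-before-commit ordering is what lets the flusher reset the
length with a single exchange: anything it sees committed is fully
written.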
We no longer use seq_buf because it depends on an external lock.
It would be hard to make all of its operations safe for
lockless use, and it would be confusing and error-prone to
make only some of them safe.

The code is put into separate printk/nmi.c as suggested by
Steven Rostedt. It needs a per-CPU buffer and is compiled only
on architectures that call nmi_enter(). This is achieved by
the new HAVE_NMI Kconfig flag.

One exception is arm where the deferred printing is used for
printing backtraces even without NMI. For this purpose,
we define NEED_PRINTK_NMI Kconfig flag. The alternative
printk_func is explicitly set when IPI_CPU_BACKTRACE is
handled.

The second exception is the MN10300 architecture, which has its
own implementation of entering and exiting NMI handlers.
It even has a separate, optimized implementation for the
NMI watchdog. This patch adds printk_nmi_enter() and
printk_nmi_exit() to these custom entry points.
Note that we have to define HAVE_NMI here; otherwise,
Kconfig complains about unmet direct dependencies for
HAVE_NMI_WATCHDOG.

The last exception is the Xtensa architecture, which uses just
a fake NMI.

The patch is heavily based on the draft from Peter Zijlstra,
see https://lkml.org/lkml/2015/6/10/327

Suggested-by: Peter Zijlstra <[email protected]>
Suggested-by: Steven Rostedt <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
---
arch/Kconfig | 7 ++
arch/arm/Kconfig | 2 +
arch/arm/kernel/smp.c | 2 +
arch/avr32/Kconfig | 1 +
arch/blackfin/Kconfig | 1 +
arch/cris/Kconfig | 1 +
arch/mips/Kconfig | 1 +
arch/mn10300/Kconfig | 1 +
arch/mn10300/kernel/mn10300-watchdog.c | 4 +
arch/mn10300/kernel/smp.c | 3 +
arch/powerpc/Kconfig | 1 +
arch/s390/Kconfig | 1 +
arch/sh/Kconfig | 1 +
arch/sparc/Kconfig | 1 +
arch/tile/Kconfig | 1 +
arch/x86/Kconfig | 1 +
arch/x86/kernel/apic/hw_nmi.c | 1 -
include/linux/hardirq.h | 2 +
include/linux/percpu.h | 3 -
include/linux/printk.h | 12 +-
init/Kconfig | 5 +
init/main.c | 1 +
kernel/printk/Makefile | 1 +
kernel/printk/nmi.c | 202 +++++++++++++++++++++++++++++++++
kernel/printk/printk.c | 19 +---
kernel/printk/printk.h | 44 +++++++
lib/nmi_backtrace.c | 89 +--------------
27 files changed, 301 insertions(+), 107 deletions(-)
create mode 100644 kernel/printk/nmi.c
create mode 100644 kernel/printk/printk.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 4e949e58b192..7ce5101c2472 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -187,7 +187,14 @@ config HAVE_OPTPROBES
config HAVE_KPROBES_ON_FTRACE
bool

+config HAVE_NMI
+ bool
+
+config NEED_PRINTK_NMI
+ bool
+
config HAVE_NMI_WATCHDOG
+ depends on HAVE_NMI
bool
#
# An arch should select this if it provides all these things:
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 0365cbbc9179..f0465e420762 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -63,6 +63,8 @@ config ARM
select HAVE_KRETPROBES if (HAVE_KPROBES)
select HAVE_MEMBLOCK
select HAVE_MOD_ARCH_SPECIFIC
+ select HAVE_NMI if (!CPU_V7M)
+ select NEED_PRINTK_NMI if (CPU_V7M)
select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
select HAVE_OPTPROBES if !THUMB2_KERNEL
select HAVE_PERF_EVENTS
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index b26361355dae..a960adb9bd7d 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -648,7 +648,9 @@ void handle_IPI(int ipinr, struct pt_regs *regs)

case IPI_CPU_BACKTRACE:
irq_enter();
+ printk_nmi_enter();
nmi_cpu_backtrace(regs);
+ printk_nmi_exit();
irq_exit();
break;

diff --git a/arch/avr32/Kconfig b/arch/avr32/Kconfig
index b6878eb64884..9dc3e2b1180b 100644
--- a/arch/avr32/Kconfig
+++ b/arch/avr32/Kconfig
@@ -17,6 +17,7 @@ config AVR32
select GENERIC_CLOCKEVENTS
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
+ select HAVE_NMI
help
AVR32 is a high-performance 32-bit RISC microprocessor core,
designed for cost-sensitive embedded applications, with particular
diff --git a/arch/blackfin/Kconfig b/arch/blackfin/Kconfig
index af76634f8d98..47c0a55acafd 100644
--- a/arch/blackfin/Kconfig
+++ b/arch/blackfin/Kconfig
@@ -40,6 +40,7 @@ config BLACKFIN
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
+ select HAVE_NMI

config GENERIC_CSUM
def_bool y
diff --git a/arch/cris/Kconfig b/arch/cris/Kconfig
index e086f9e93728..62148eaf0189 100644
--- a/arch/cris/Kconfig
+++ b/arch/cris/Kconfig
@@ -69,6 +69,7 @@ config CRIS
select GENERIC_CLOCKEVENTS if ETRAX_ARCH_V32
select GENERIC_SCHED_CLOCK if ETRAX_ARCH_V32
select HAVE_DEBUG_BUGVERBOSE if ETRAX_ARCH_V32
+ select HAVE_NMI

config HZ
int
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 71683a853372..35ae01fe980a 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -63,6 +63,7 @@ config MIPS
select HAVE_IRQ_TIME_ACCOUNTING
select GENERIC_TIME_VSYSCALL
select ARCH_CLOCKSOURCE_DATA
+ select HAVE_NMI

menu "Machine selection"

diff --git a/arch/mn10300/Kconfig b/arch/mn10300/Kconfig
index 4434b54e1d87..468d8d0e8773 100644
--- a/arch/mn10300/Kconfig
+++ b/arch/mn10300/Kconfig
@@ -7,6 +7,7 @@ config MN10300
select HAVE_ARCH_KGDB
select GENERIC_ATOMIC64
select HAVE_NMI_WATCHDOG if MN10300_WD_TIMER
+ select HAVE_NMI
select VIRT_TO_BUS
select GENERIC_CLOCKEVENTS
select MODULES_USE_ELF_RELA
diff --git a/arch/mn10300/kernel/mn10300-watchdog.c b/arch/mn10300/kernel/mn10300-watchdog.c
index a2d8e6938d67..d712cec61deb 100644
--- a/arch/mn10300/kernel/mn10300-watchdog.c
+++ b/arch/mn10300/kernel/mn10300-watchdog.c
@@ -144,6 +144,8 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep)
nmi_count(smp_processor_id())++;
kstat_incr_irq_this_cpu(irq);

+ printk_nmi_enter();
+
for_each_online_cpu(cpu) {

sum = irq_stat[cpu].__irq_count;
@@ -198,6 +200,8 @@ void watchdog_interrupt(struct pt_regs *regs, enum exception_code excep)
}
}

+ printk_nmi_exit();
+
WDCTR = wdt | WDCTR_WDRST;
tmp = WDCTR;
WDCTR = wdt | WDCTR_WDCNE;
diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
index f984193718b1..9c26e4cefbb6 100644
--- a/arch/mn10300/kernel/smp.c
+++ b/arch/mn10300/kernel/smp.c
@@ -537,8 +537,11 @@ void smp_nmi_call_function_interrupt(void)
*/
smp_mb();
cpumask_clear_cpu(smp_processor_id(), &nmi_call_data->started);
+ printk_nmi_enter();
+
(*func)(info);

+ printk_nmi_exit();
if (wait) {
smp_mb();
cpumask_clear_cpu(smp_processor_id(),
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index db49e0d796b1..1bd8a7503a48 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -156,6 +156,7 @@ config PPC
select NO_BOOTMEM
select HAVE_GENERIC_RCU_GUP
select HAVE_PERF_EVENTS_NMI if PPC64
+ select HAVE_NMI if PERF_EVENTS
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
select ARCH_HAS_DMA_SET_COHERENT_MASK
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 3a55f493c7da..9a8a76829508 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -159,6 +159,7 @@ config S390
select TTY
select VIRT_CPU_ACCOUNTING
select VIRT_TO_BUS
+ select HAVE_NMI


config SCHED_OMIT_FRAME_POINTER
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index d514df7e04dd..ff8093b574f1 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -44,6 +44,7 @@ config SUPERH
select OLD_SIGSUSPEND
select OLD_SIGACTION
select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_NMI
help
The SuperH is a RISC processor targeted for use in embedded systems
and consumer electronics; it was also used in the Sega Dreamcast
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 56442d2d7bbc..e05ea973fd6a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -80,6 +80,7 @@ config SPARC64
select NO_BOOTMEM
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
+ select HAVE_NMI

config ARCH_DEFCONFIG
string
diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 106c21bd7f44..050468c41dc6 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -29,6 +29,7 @@ config TILE
select HAVE_DEBUG_STACKOVERFLOW
select ARCH_WANT_FRAME_POINTERS
select HAVE_CONTEXT_TRACKING
+ select HAVE_NMI if USE_PMC
select EDAC_SUPPORT
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index db3622f22b61..e83f99194c4c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -126,6 +126,7 @@ config X86
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MIXED_BREAKPOINTS_REGS
+ select HAVE_NMI
select HAVE_OPROFILE
select HAVE_OPTPROBES
select HAVE_PCSPKR_PLATFORM
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 045e424fb368..7788ce643bf4 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -18,7 +18,6 @@
#include <linux/nmi.h>
#include <linux/module.h>
#include <linux/delay.h>
-#include <linux/seq_buf.h>

#ifdef CONFIG_HARDLOCKUP_DETECTOR
u64 hw_nmi_get_sample_period(int watchdog_thresh)
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index dfd59d6bc6f0..a477e0766d2e 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -67,10 +67,12 @@ extern void irq_exit(void);
preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET); \
rcu_nmi_enter(); \
trace_hardirq_enter(); \
+ printk_nmi_enter(); \
} while (0)

#define nmi_exit() \
do { \
+ printk_nmi_exit(); \
trace_hardirq_exit(); \
rcu_nmi_exit(); \
BUG_ON(!in_nmi()); \
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index caebf2a758dc..04c68b9f56f8 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -135,7 +135,4 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
(typeof(type) __percpu *)__alloc_percpu(sizeof(type), \
__alignof__(type))

-/* To avoid include hell, as printk can not declare this, we declare it here */
-DECLARE_PER_CPU(printk_func_t, printk_func);
-
#endif /* __LINUX_PERCPU_H */
diff --git a/include/linux/printk.h b/include/linux/printk.h
index 9729565c25ff..7aba6b92d020 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -122,7 +122,17 @@ static inline __printf(1, 2) __cold
void early_printk(const char *s, ...) { }
#endif

-typedef __printf(1, 0) int (*printk_func_t)(const char *fmt, va_list args);
+#ifdef CONFIG_PRINTK_NMI
+extern void printk_nmi_init(void);
+extern void printk_nmi_enter(void);
+extern void printk_nmi_exit(void);
+extern void printk_nmi_flush(void);
+#else
+static inline void printk_nmi_init(void) { }
+static inline void printk_nmi_enter(void) { }
+static inline void printk_nmi_exit(void) { }
+static inline void printk_nmi_flush(void) { }
+#endif /* PRINTK_NMI */

#ifdef CONFIG_PRINTK
asmlinkage __printf(5, 0)
diff --git a/init/Kconfig b/init/Kconfig
index c24b6f767bf0..c1c0b6a2d712 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1456,6 +1456,11 @@ config PRINTK
very difficult to diagnose system problems, saying N here is
strongly discouraged.

+config PRINTK_NMI
+ def_bool y
+ depends on PRINTK
+ depends on HAVE_NMI || NEED_PRINTK_NMI
+
config BUG
bool "BUG() support" if EXPERT
default y
diff --git a/init/main.c b/init/main.c
index 9e64d7097f1a..45c1e05125dc 100644
--- a/init/main.c
+++ b/init/main.c
@@ -591,6 +591,7 @@ asmlinkage __visible void __init start_kernel(void)
timekeeping_init();
time_init();
sched_clock_postinit();
+ printk_nmi_init();
perf_event_init();
profile_init();
call_function_init();
diff --git a/kernel/printk/Makefile b/kernel/printk/Makefile
index 85405bdcf2b3..abb0042a427b 100644
--- a/kernel/printk/Makefile
+++ b/kernel/printk/Makefile
@@ -1,2 +1,3 @@
obj-y = printk.o
+obj-$(CONFIG_PRINTK_NMI) += nmi.o
obj-$(CONFIG_A11Y_BRAILLE_CONSOLE) += braille.o
diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
new file mode 100644
index 000000000000..01aef64613ea
--- /dev/null
+++ b/kernel/printk/nmi.c
@@ -0,0 +1,202 @@
+/*
+ * nmi.c - Safe printk in NMI context
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/preempt.h>
+#include <linux/spinlock.h>
+#include <linux/smp.h>
+#include <linux/cpumask.h>
+#include <linux/irq_work.h>
+#include <linux/printk.h>
+
+#include "printk.h"
+
+/*
+ * printk() cannot take logbuf_lock in NMI context. Instead,
+ * it uses an alternative implementation that temporarily stores
+ * the strings into a per-CPU buffer. The content of the buffer
+ * is later flushed into the main ring buffer via IRQ work.
+ *
+ * The alternative implementation is chosen transparently
+ * via @printk_func per-CPU variable.
+ *
+ * The implementation also allows flushing the strings from another CPU.
+ * There are situations when we want to make sure that all buffers
+ * were handled or when IRQs are blocked.
+ */
+DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
+
+struct nmi_seq_buf {
+ atomic_t len; /* length of written data */
+ struct irq_work work; /* IRQ work that flushes the buffer */
+ unsigned char buffer[PAGE_SIZE - sizeof(atomic_t) -
+ sizeof(struct irq_work)];
+};
+static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
+
+/*
+ * Safe printk() for NMI context. It uses a per-CPU buffer to
+ * store the message. NMIs are not nested, so there is always only
+ * one writer running. But the buffer might get flushed from another
+ * CPU, so we need to be careful.
+ */
+static int vprintk_nmi(const char *fmt, va_list args)
+{
+ struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
+ int add = 0, len;
+
+again:
+ len = atomic_read(&s->len);
+
+ if (len >= sizeof(s->buffer))
+ return 0;
+
+ /*
+ * Make sure that all old data have been read before the buffer is
+ * reset. This is not needed when we just append data.
+ */
+ if (!len)
+ smp_rmb();
+
+ add = vsnprintf(s->buffer + len, sizeof(s->buffer) - len, fmt, args);
+
+ /*
+ * Do it once again if the buffer has been flushed in the meantime.
+ * Note that atomic_cmpxchg() is an implicit memory barrier that
+ * makes sure that the data were written before updating s->len.
+ */
+ if (atomic_cmpxchg(&s->len, len, len + add) != len)
+ goto again;
+
+ /* Get flushed in a more safe context. */
+ if (add)
+ irq_work_queue(&s->work);
+
+ return add;
+}
+
+/*
+ * printk one line from the temporary buffer from @start index until
+ * and including the @end index.
+ */
+static void print_nmi_seq_line(struct nmi_seq_buf *s, int start, int end)
+{
+ const char *buf = s->buffer + start;
+
+ printk("%.*s", (end - start) + 1, buf);
+}
+
+/*
+ * Flush data from the associated per_CPU buffer. The function
+ * can be called either via IRQ work or independently.
+ */
+static void __printk_nmi_flush(struct irq_work *work)
+{
+ static raw_spinlock_t read_lock =
+ __RAW_SPIN_LOCK_INITIALIZER(read_lock);
+ struct nmi_seq_buf *s = container_of(work, struct nmi_seq_buf, work);
+ int len, size, i, last_i;
+
+ /*
+ * The lock has two functions. First, one reader has to flush all
+ * available messages to make the lockless synchronization with
+ * writers easier. Second, we do not want to mix messages from
+ * different CPUs. This is especially important when printing
+ * a backtrace.
+ */
+ raw_spin_lock(&read_lock);
+
+ i = 0;
+more:
+ len = atomic_read(&s->len);
+
+ /*
+ * This is just a paranoid check that nobody has manipulated
+ * the buffer in an unexpected way. If we printed something then
+ * @len must only increase.
+ */
+ WARN_ON(i && i >= len);
+
+ if (!len)
+ goto out; /* Someone else has already flushed the buffer. */
+
+ /* Make sure that data has been written up to the @len */
+ smp_rmb();
+
+ size = min_t(int, len, sizeof(s->buffer));
+ last_i = i;
+
+ /* Print line by line. */
+ for (; i < size; i++) {
+ if (s->buffer[i] == '\n') {
+ print_nmi_seq_line(s, last_i, i);
+ last_i = i + 1;
+ }
+ }
+ /* Check if there was a partial line. */
+ if (last_i < size) {
+ print_nmi_seq_line(s, last_i, size - 1);
+ pr_cont("\n");
+ }
+
+ /*
+ * Check that nothing has got added in the meantime and truncate
+ * the buffer. Note that atomic_cmpxchg() is an implicit memory
+ * barrier that makes sure that the data were copied before
+ * updating s->len.
+ */
+ if (atomic_cmpxchg(&s->len, len, 0) != len)
+ goto more;
+
+out:
+ raw_spin_unlock(&read_lock);
+}
+
+/**
+ * printk_nmi_flush - flush all per-cpu nmi buffers.
+ *
+ * The buffers are flushed automatically via IRQ work. This function
+ * is useful only when someone wants to be sure that all buffers have
+ * been flushed at some point.
+ */
+void printk_nmi_flush(void)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ __printk_nmi_flush(&per_cpu(nmi_print_seq, cpu).work);
+}
+
+void __init printk_nmi_init(void)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ struct nmi_seq_buf *s = &per_cpu(nmi_print_seq, cpu);
+
+ init_irq_work(&s->work, __printk_nmi_flush);
+ }
+}
+
+void printk_nmi_enter(void)
+{
+ this_cpu_write(printk_func, vprintk_nmi);
+}
+
+void printk_nmi_exit(void)
+{
+ this_cpu_write(printk_func, vprintk_default);
+}
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 2ce8826f1053..88641c74163d 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -54,6 +54,7 @@

#include "console_cmdline.h"
#include "braille.h"
+#include "printk.h"

int console_printk[4] = {
CONSOLE_LOGLEVEL_DEFAULT, /* console_loglevel */
@@ -1867,14 +1868,6 @@ int vprintk_default(const char *fmt, va_list args)
}
EXPORT_SYMBOL_GPL(vprintk_default);

-/*
- * This allows printk to be diverted to another function per cpu.
- * This is useful for calling printk functions from within NMI
- * without worrying about race conditions that can lock up the
- * box.
- */
-DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
-
/**
* printk - print a kernel message
* @fmt: format string
@@ -1898,21 +1891,11 @@ DEFINE_PER_CPU(printk_func_t, printk_func) = vprintk_default;
*/
asmlinkage __visible int printk(const char *fmt, ...)
{
- printk_func_t vprintk_func;
va_list args;
int r;

va_start(args, fmt);
-
- /*
- * If a caller overrides the per_cpu printk_func, then it needs
- * to disable preemption when calling printk(). Otherwise
- * the printk_func should be set to the default. No need to
- * disable preemption here.
- */
- vprintk_func = this_cpu_read(printk_func);
r = vprintk_func(fmt, args);
-
va_end(args);

return r;
diff --git a/kernel/printk/printk.h b/kernel/printk/printk.h
new file mode 100644
index 000000000000..dc97c1c4d53b
--- /dev/null
+++ b/kernel/printk/printk.h
@@ -0,0 +1,44 @@
+/*
+ * printk.h - printk internal definitions
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/percpu.h>
+
+typedef __printf(1, 0) int (*printk_func_t)(const char *fmt, va_list args);
+
+int __printf(1, 0) vprintk_default(const char *fmt, va_list args);
+
+#ifdef CONFIG_PRINTK_NMI
+
+/*
+ * printk() cannot take logbuf_lock in NMI context. Instead,
+ * it temporarily stores the strings into a per-CPU buffer.
+ * The alternative implementation is chosen transparently
+ * via per-CPU variable.
+ */
+DECLARE_PER_CPU(printk_func_t, printk_func);
+static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+{
+ return this_cpu_read(printk_func)(fmt, args);
+}
+
+#else /* CONFIG_PRINTK_NMI */
+
+static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+{
+ return vprintk_default(fmt, args);
+}
+
+#endif /* CONFIG_PRINTK_NMI */
diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
index 6019c53c669e..26caf51cc238 100644
--- a/lib/nmi_backtrace.c
+++ b/lib/nmi_backtrace.c
@@ -16,33 +16,14 @@
#include <linux/delay.h>
#include <linux/kprobes.h>
#include <linux/nmi.h>
-#include <linux/seq_buf.h>

#ifdef arch_trigger_all_cpu_backtrace
/* For reliability, we're prepared to waste bits here. */
static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
-static cpumask_t printtrace_mask;
-
-#define NMI_BUF_SIZE 4096
-
-struct nmi_seq_buf {
- unsigned char buffer[NMI_BUF_SIZE];
- struct seq_buf seq;
-};
-
-/* Safe printing in NMI context */
-static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);

/* "in progress" flag of arch_trigger_all_cpu_backtrace */
static unsigned long backtrace_flag;

-static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
-{
- const char *buf = s->buffer + start;
-
- printk("%.*s", (end - start) + 1, buf);
-}
-
/*
* When raise() is called it will be is passed a pointer to the
* backtrace_mask. Architectures that call nmi_cpu_backtrace()
@@ -52,8 +33,7 @@ static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
void nmi_trigger_all_cpu_backtrace(bool include_self,
void (*raise)(cpumask_t *mask))
{
- struct nmi_seq_buf *s;
- int i, cpu, this_cpu = get_cpu();
+ int i, this_cpu = get_cpu();

if (test_and_set_bit(0, &backtrace_flag)) {
/*
@@ -68,17 +48,6 @@ void nmi_trigger_all_cpu_backtrace(bool include_self,
if (!include_self)
cpumask_clear_cpu(this_cpu, to_cpumask(backtrace_mask));

- cpumask_copy(&printtrace_mask, to_cpumask(backtrace_mask));
-
- /*
- * Set up per_cpu seq_buf buffers that the NMIs running on the other
- * CPUs will write to.
- */
- for_each_cpu(cpu, to_cpumask(backtrace_mask)) {
- s = &per_cpu(nmi_print_seq, cpu);
- seq_buf_init(&s->seq, s->buffer, NMI_BUF_SIZE);
- }
-
if (!cpumask_empty(to_cpumask(backtrace_mask))) {
pr_info("Sending NMI to %s CPUs:\n",
(include_self ? "all" : "other"));
@@ -94,73 +63,25 @@ void nmi_trigger_all_cpu_backtrace(bool include_self,
}

/*
- * Now that all the NMIs have triggered, we can dump out their
- * back traces safely to the console.
+ * Force flush any remote buffers that might be stuck in IRQ context
+ * and therefore could not run their irq_work.
*/
- for_each_cpu(cpu, &printtrace_mask) {
- int len, last_i = 0;
+ printk_nmi_flush();

- s = &per_cpu(nmi_print_seq, cpu);
- len = seq_buf_used(&s->seq);
- if (!len)
- continue;
-
- /* Print line by line. */
- for (i = 0; i < len; i++) {
- if (s->buffer[i] == '\n') {
- print_seq_line(s, last_i, i);
- last_i = i + 1;
- }
- }
- /* Check if there was a partial line. */
- if (last_i < len) {
- print_seq_line(s, last_i, len - 1);
- pr_cont("\n");
- }
- }
-
- clear_bit(0, &backtrace_flag);
- smp_mb__after_atomic();
+ clear_bit_unlock(0, &backtrace_flag);
put_cpu();
}

-/*
- * It is not safe to call printk() directly from NMI handlers.
- * It may be fine if the NMI detected a lock up and we have no choice
- * but to do so, but doing a NMI on all other CPUs to get a back trace
- * can be done with a sysrq-l. We don't want that to lock up, which
- * can happen if the NMI interrupts a printk in progress.
- *
- * Instead, we redirect the vprintk() to this nmi_vprintk() that writes
- * the content into a per cpu seq_buf buffer. Then when the NMIs are
- * all done, we can safely dump the contents of the seq_buf to a printk()
- * from a non NMI context.
- */
-static int nmi_vprintk(const char *fmt, va_list args)
-{
- struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
- unsigned int len = seq_buf_used(&s->seq);
-
- seq_buf_vprintf(&s->seq, fmt, args);
- return seq_buf_used(&s->seq) - len;
-}
-
bool nmi_cpu_backtrace(struct pt_regs *regs)
{
int cpu = smp_processor_id();

if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
- printk_func_t printk_func_save = this_cpu_read(printk_func);
-
- /* Replace printk to write into the NMI seq */
- this_cpu_write(printk_func, nmi_vprintk);
pr_warn("NMI backtrace for cpu %d\n", cpu);
if (regs)
show_regs(regs);
else
dump_stack();
- this_cpu_write(printk_func, printk_func_save);
-
cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
return true;
}
--
1.8.5.6

2015-11-30 14:25:53

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Fri 2015-11-27 17:26:16, Max Filippov wrote:
Hi Max,

> > Another exception is Xtensa architecture that uses just a
> > fake NMI.
>
> It's called fake because it's actually maskable, but sometimes
> it is safe to use it as NMI (when there are no other IRQs at the
> same priority level and that level equals EXCM level). That
> condition is checked in arch/xtensa/include/asm/processor.h
> So 'fake' here is to avoid confusion with real NMI that exists
> on xtensa (and is not currently used in linux), otherwise code
> that runs in fake NMI must follow the NMI rules.
>
> To make xtensa compatible with your change we can add a
> choice whether fake NMI should be used to kconfig. It can
> then set HAVE_NMI accordingly. I'll post a patch for xtensa.

Thanks a lot for the explanation. I'll wait for the fate of
the patch adding CONFIG_XTENSA_FAKE_NMI; it is not easy
for me to review. Anyway, we could select HAVE_NMI for
Xtensa anytime later if this patchset goes in earlier.

Best Regards,
Petr

2015-11-30 16:42:21

by yalin wang

[permalink] [raw]
Subject: Re: [PATCH v2 5/5] printk/nmi: Increase the size of the temporary buffer


> On Nov 27, 2015, at 19:09, Petr Mladek <[email protected]> wrote:
>
> Testing has shown that the backtrace sometimes does not fit
> into the 4kB temporary buffer that is used in NMI context.
>
> The warnings are gone when I double the temporary buffer size.
>
> Note that this problem existed even in the x86-specific
> implementation that was added by the commit a9edc8809328
> ("x86/nmi: Perform a safe NMI stack trace on all CPUs").
> Nobody noticed it because it did not print any warnings.
>
> Signed-off-by: Petr Mladek <[email protected]>
> ---
> kernel/printk/nmi.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
> index 8af1e4016719..6111644d5f01 100644
> --- a/kernel/printk/nmi.c
> +++ b/kernel/printk/nmi.c
> @@ -42,7 +42,7 @@ atomic_t nmi_message_lost;
> struct nmi_seq_buf {
> atomic_t len; /* length of written data */
> struct irq_work work; /* IRQ work that flushes the buffer */
> - unsigned char buffer[PAGE_SIZE - sizeof(atomic_t) -
> + unsigned char buffer[2 * PAGE_SIZE - sizeof(atomic_t) -
> sizeof(struct irq_work)];
> };
>

why not define like this:

	union {
		struct {
			atomic_t len;
			struct irq_work work;
		};
		unsigned char buffer[2 * PAGE_SIZE];
	};

This way we can make sure the union is exactly 2 * PAGE_SIZE.

Thanks

2015-12-01 23:25:00

by Jiri Kosina

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Fri, 27 Nov 2015, Petr Mladek wrote:

> MN10300 has its own implementation for entering and exiting NMI
> handlers. It does not call nmi_enter() and nmi_exit(). Please, find
> below an updated patch that adds printk_nmi_enter() and
> printk_nmi_exit() to the custom entry points. Then we could add HAVE_NMI
> to arch/mn10300/Kconfig and avoid the above warning.

Hmm, so what exactly would go wrong if MN10300 (whatever that architecture
is) called nmi_enter() and nmi_exit() at the places where it starts
and finishes its NMI handler?

From a cursory look, it seems like most (if not all) of the things called
from nmi_{enter,exit}() would be nops there anyway.

Thanks,

--
Jiri Kosina
SUSE Labs

2015-12-01 23:45:10

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH v2 3/5] printk/nmi: Try hard to print Oops message in NMI context

On Fri, Nov 27, 2015 at 12:09:30PM +0100, Petr Mladek wrote:
> What we can do, though, is to zap all printk locks. We already do this
> when a printk recursion is detected. This should be safe because
> the system is crashing and there shouldn't be any printk caller
> that would cause the deadlock.

What about serial consoles which may call out to subsystems like the
clk subsystem to enable a clock, which would want to take their own
spinlocks in addition to the serial console driver?

I don't see bust_spinlocks() dealing with any of these locks, so IMHO
trying to make this work in NMI context strikes me as making the
existing solution more unreliable on ARM systems.

--
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

2015-12-02 02:45:21

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Fri, 2015-11-27 at 12:09 +0100, Petr Mladek wrote:

> printk() takes some locks and could not be used a safe way in NMI
> context.
>
> The chance of a deadlock is real especially when printing
> stacks from all CPUs. This particular problem has been addressed
> on x86 by the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack
> trace on all CPUs").

...

> diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
> new file mode 100644
> index 000000000000..3989e13a0021
> --- /dev/null
> +++ b/kernel/printk/nmi.c
> @@ -0,0 +1,200 @@

...

> +
> +struct nmi_seq_buf {
> + atomic_t len; /* length of written data */
> + struct irq_work work; /* IRQ work that flushes the buffer */
> + unsigned char buffer[PAGE_SIZE - sizeof(atomic_t) -
> + sizeof(struct irq_work)];
> +};
> +static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);


PAGE_SIZE isn't always 4K.

On typical powerpc systems this will give you 128K, and on some 512K, which is
probably not what we wanted.

The existing code just did:

#define NMI_BUF_SIZE 4096

So I think you should just go back to doing that.

cheers

2015-12-02 16:22:21

by David Laight

[permalink] [raw]
Subject: RE: [PATCH v2 5/5] printk/nmi: Increase the size of the temporary buffer

From: yalin wang
> Sent: 30 November 2015 16:42
> > On Nov 27, 2015, at 19:09, Petr Mladek <[email protected]> wrote:
> >
> > Testing has shown that the backtrace sometimes does not fit
> > into the 4kB temporary buffer that is used in NMI context.
> >
> > The warnings are gone when I double the temporary buffer size.

You are wasting a lot of memory for something that is infrequently used.
There ought to be some way of copying partial tracebacks into the
main buffer.

David


2015-12-04 15:27:17

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 3/5] printk/nmi: Try hard to print Oops message in NMI context

On Tue 2015-12-01 23:44:37, Russell King - ARM Linux wrote:
> On Fri, Nov 27, 2015 at 12:09:30PM +0100, Petr Mladek wrote:
> > What we can do, though, is to zap all printk locks. We already do this
> > when a printk recursion is detected. This should be safe because
> > the system is crashing and there shouldn't be any printk caller
> > that would cause the deadlock.
>
> What about serial consoles which may call out to subsystems like the
> clk subsystem to enable a clock, which would want to take their own
> spinlocks in addition to the serial console driver?

Yes, there might be more locks used by the serial console but I do
not know how to handle them all easily. IMHO, this patch is just better
than nothing.

> I don't see bust_spinlocks() dealing with any of these locks, so IMHO
> trying to make this work in NMI context strikes me as making the
> existing solution more unreliable on ARM systems.

bust_spinlocks() calls printk_nmi_flush(), which calls printk(),
which zaps "logbuf_lock" and "console_sem" when in Oops and NMI.
Yes, there might be more locks blocked but we try to break at least
the first two walls. Also zapping is allowed only once per 30 seconds,
see zap_locks(). Why do you think that it might make things more
unreliable, please?


Thanks for looking,
Petr

2015-12-04 15:29:31

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Wed 2015-12-02 13:45:16, Michael Ellerman wrote:
> On Fri, 2015-11-27 at 12:09 +0100, Petr Mladek wrote:
>
> > printk() takes some locks and could not be used a safe way in NMI
> > context.
> >
> > The chance of a deadlock is real especially when printing
> > stacks from all CPUs. This particular problem has been addressed
> > on x86 by the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack
> > trace on all CPUs").
>
> ...
>
> > diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
> > new file mode 100644
> > index 000000000000..3989e13a0021
> > --- /dev/null
> > +++ b/kernel/printk/nmi.c
> > @@ -0,0 +1,200 @@
>
> ...
>
> > +
> > +struct nmi_seq_buf {
> > + atomic_t len; /* length of written data */
> > + struct irq_work work; /* IRQ work that flushes the buffer */
> > + unsigned char buffer[PAGE_SIZE - sizeof(atomic_t) -
> > + sizeof(struct irq_work)];
> > +};
> > +static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
>
>
> PAGE_SIZE isn't always 4K.
>
> On typical powerpc systems this will give you 128K, and on some 512K, which is
> probably not what we wanted.

Good point!

> The existing code just did:
>
> #define NMI_BUF_SIZE 4096

I will change this to 8192. The 4kB were not enough in some cases.

Best Regards,
Petr

2015-12-04 15:47:54

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 5/5] printk/nmi: Increase the size of the temporary buffer

On Wed 2015-12-02 16:20:41, David Laight wrote:
> From: yalin wang
> > Sent: 30 November 2015 16:42
> > > On Nov 27, 2015, at 19:09, Petr Mladek <[email protected]> wrote:
> > >
> > > Testing has shown that the backtrace sometimes does not fit
> > > into the 4kB temporary buffer that is used in NMI context.
> > >
> > > The warnings are gone when I double the temporary buffer size.
>
> You are wasting a lot of memory for something that is infrequently used.
> There ought to be some way of copying partial tracebacks into the
> main buffer.

I have already tried to use a separate ring buffer that might be
shared between all CPUs. But it was rejected because it was too
complex. See
http://thread.gmane.org/gmane.linux.kernel/1700059/focus=1700066

If we wanted to create lockless access to the main ring
buffer, we would end up with something like
kernel/trace/ring_buffer.c. It is even more complicated,
and reading the messages is pretty slow.

Note that we already have this buffer allocated on x86 and arm.
It is used there for printing backtrace from all CPUs.
This patchset makes it usable for all NMI messages.

But I'll make the size configurable in the next version.

Thanks for review,
Petr

2015-12-04 16:57:52

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Wed 2015-12-02 00:24:49, Jiri Kosina wrote:
> On Fri, 27 Nov 2015, Petr Mladek wrote:
>
> > MN10300 has its own implementation for entering and exiting NMI
> > handlers. It does not call nmi_enter() and nmi_exit(). Please, find
> > below an updated patch that adds printk_nmi_enter() and
> > printk_nmi_exit() to the custom entry points. Then we could add HAVE_NMI
> > to arch/mn10300/Kconfig and avoid the above warning.
>
> Hmm, so what exactly would go wrong if MN10300 (whatever that architecture
> is) would call nmi_enter() and nmi_exit() at the places where it's
> starting and finishing NMI handler?
>
> From a cursory look, it seems like most (if not all) of the things called
> from nmi_{enter,exit}() would be nops there anyway.

Good point. Max mentioned in the other mail that the NMI handler
should follow the NMI rules. I do not see why it could not work.
In fact, it might improve things, e.g. nmi_enter() blocks
recursive NMIs.

I think I will move it into a separate patch, though.

Best Regards,
Petr

2015-12-04 17:13:26

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH v2 3/5] printk/nmi: Try hard to print Oops message in NMI context

On Fri, Dec 04, 2015 at 04:27:09PM +0100, Petr Mladek wrote:
> On Tue 2015-12-01 23:44:37, Russell King - ARM Linux wrote:
> > On Fri, Nov 27, 2015 at 12:09:30PM +0100, Petr Mladek wrote:
> > > What we can do, though, is to zap all printk locks. We already do this
> > > when a printk recursion is detected. This should be safe because
> > > the system is crashing and there shouldn't be any printk caller
> > > that would cause the deadlock.
> >
> > What about serial consoles which may call out to subsystems like the
> > clk subsystem to enable a clock, which would want to take their own
> > spinlocks in addition to the serial console driver?
>
> Yes, there might be more locks used by the serial console but I do
> not know how to handle them all easily. IMHO, this patch is just better
> than nothing.

I have a slightly different view...

> > I don't see bust_spinlocks() dealing with any of these locks, so IMHO
> > trying to make this work in NMI context strikes me as making the
> > existing solution more unreliable on ARM systems.
>
> bust_spinlocks() calls printk_nmi_flush() that would call printk()
> which zaps "logbuf_lock" and "console_sem" when in Oops and NMI.
> Yes, there might be more locks blocked but we try to break at least
> the first two walls. Also zapping is allowed only once per 30 seconds,
> see zap_locks(). Why do you think that it might make things more
> unreliable, please?

Take the scenario where CPU1 is in the middle of a printk(), and is
holding its lock.

CPU0 comes along and decides to trigger a NMI backtrace. This sends
a NMI to CPU1, which takes it in the middle of the serial console
output.

With the existing solution, the NMI output will be written to the
temporary buffer, and when CPU1 has finished handling the NMI it
resumes the serial console output, eventually dropping the lock. That then
allows CPU0 to print the contents of all buffers, and we get NMI
printk output.

With this solution, as I understand it, we'll instead end up with
CPU1's printk trying to output direct to the console, and although
we've busted a couple of locks, we won't have busted the serial
console locks, so CPU1 will deadlock - and that will stop any output
whatsoever.

If this is correct, then the net result is that we go from NMI with
serial console producing output to NMI with serial console being
less reliable at producing output.
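[Editorial note: the "existing solution" in the scenario above - NMI output diverted into a per-CPU temporary buffer and flushed later from a safe context - can be modelled in userspace C roughly as below. All names (`nmi_buf`, `nmi_printf`), the buffer size, and the use of `fwrite` in place of the real console path are illustrative assumptions for this sketch, not the kernel's actual code.]

```c
#include <stdarg.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the kernel's per-CPU NMI buffer. */
#define NMI_BUF_SIZE 4096

struct nmi_seq_buf {
	atomic_int len;			/* bytes written so far */
	char buffer[NMI_BUF_SIZE];	/* message text */
};

static struct nmi_seq_buf nmi_buf;	/* one instance per CPU in the real code */

/* Used instead of the normal printk() while in NMI context: append to
 * the local buffer with a lock-free reservation, taking no locks at all. */
static int nmi_printf(const char *fmt, ...)
{
	char tmp[256];
	va_list ap;
	int n, old;

	va_start(ap, fmt);
	n = vsnprintf(tmp, sizeof(tmp), fmt, ap);
	va_end(ap);
	if (n < 0)
		return 0;
	if (n > (int)sizeof(tmp) - 1)
		n = (int)sizeof(tmp) - 1;	/* truncated message */

	/* Reserve space atomically so a nested NMI cannot corrupt it. */
	do {
		old = atomic_load(&nmi_buf.len);
		if (old + n > NMI_BUF_SIZE)
			return 0;	/* message lost */
	} while (!atomic_compare_exchange_weak(&nmi_buf.len, &old, old + n));

	memcpy(nmi_buf.buffer + old, tmp, (size_t)n);
	return n;
}

/* Run later from a safe context (IRQ work in the kernel), where the
 * console locks may be taken normally. */
static void printk_nmi_flush(void)
{
	int len = atomic_exchange(&nmi_buf.len, 0);

	fwrite(nmi_buf.buffer, 1, (size_t)len, stdout);	/* stands in for printk() */
}
```

Because `nmi_printf()` never takes a lock, it cannot deadlock against a printk() it interrupted; the cost is that the output only appears once `printk_nmi_flush()` runs from a safe context.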

--
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

2015-12-07 14:16:55

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 5/5] printk/nmi: Increase the size of the temporary buffer

On Mon 2015-11-30 08:42:04, yalin wang wrote:
>
> > On Nov 27, 2015, at 19:09, Petr Mladek <[email protected]> wrote:
> >
> > Testing has shown that the backtrace sometimes does not fit
> > into the 4kB temporary buffer that is used in NMI context.
> >
> > The warnings are gone when I double the temporary buffer size.
> >
> > Note that this problem existed even in the x86-specific
> > implementation that was added by the commit a9edc8809328
> > ("x86/nmi: Perform a safe NMI stack trace on all CPUs").
> > Nobody noticed it because it did not print any warnings.
> >
> > Signed-off-by: Petr Mladek <[email protected]>
> > ---
> > kernel/printk/nmi.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/printk/nmi.c b/kernel/printk/nmi.c
> > index 8af1e4016719..6111644d5f01 100644
> > --- a/kernel/printk/nmi.c
> > +++ b/kernel/printk/nmi.c
> > @@ -42,7 +42,7 @@ atomic_t nmi_message_lost;
> > struct nmi_seq_buf {
> > atomic_t len; /* length of written data */
> > struct irq_work work; /* IRQ work that flushes the buffer */
> > - unsigned char buffer[PAGE_SIZE - sizeof(atomic_t) -
> > + unsigned char buffer[2 * PAGE_SIZE - sizeof(atomic_t) -
> > sizeof(struct irq_work)];
> > };
> >
>
> why not define like this:
>
> union {
>	struct {
>		atomic_t len;
>		struct irq_work work;
>	};
>	unsigned char buffer[PAGE_SIZE * 2];
> };
>
> we can make sure the union is 2 * PAGE_SIZE.

IMHO, this would add more confusion. It would just move the
computation somewhere else. The union would be 2*PAGE_SIZE
but the beginning of "buffer" would be shared with "len"
and "work". Therefore, we would need to skip the beginning
of the buffer when storing the data. In other words, we would
still be able to use only (sizeof(buffer) - sizeof(atomic_t) -
sizeof(struct irq_work)) bytes of the "buffer".

Or did I miss something, please?

Best Regards,
Petr
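[Editorial note: the overlap Petr describes can be checked with a short userspace sketch. The `struct irq_work` stand-in and the hard-coded `PAGE_SIZE` of 4096 are assumptions for illustration; only the sizes matter. The union really is two pages, but the header shadows the first bytes of `buffer`, so the usable payload is the same arithmetic the explicit struct already performs.]

```c
#include <stdatomic.h>

#define PAGE_SIZE 4096	/* assumed; matches common configurations */

/* Userspace stand-in for the kernel's struct irq_work;
 * only its size matters for this layout check. */
struct irq_work {
	void *func;
	void *node;
};

/* The layout proposed in the mail: the len/work header is
 * overlaid on the start of the raw buffer. */
union nmi_seq_buf_union {
	struct {
		atomic_int len;
		struct irq_work work;
	};
	unsigned char buffer[2 * PAGE_SIZE];
};

/* Bytes of "buffer" shadowed by the header; the usable payload is
 * sizeof(buffer) minus this value. */
enum { NMI_HEADER = sizeof(atomic_int) + sizeof(struct irq_work) };
```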

2015-12-07 15:50:32

by David Laight

[permalink] [raw]
Subject: RE: [PATCH v2 3/5] printk/nmi: Try hard to print Oops message in NMI context

From: Russell King - ARM Linux
> Sent: 04 December 2015 17:13
...
> I have a slightly different view...
>
> > > I don't see bust_spinlocks() dealing with any of these locks, so IMHO
> > > trying to make this work in NMI context strikes me as making the
> > > existing solution more unreliable on ARM systems.
> >
> > bust_spinlocks() calls printk_nmi_flush() that would call printk()
> > that would zap "lockbuf_lock" and "console_sem" when in Oops and NMI.
> > Yes, there might be more locks blocked but we try to break at least
> > the first two walls. Also zapping is allowed only once per 30 seconds,
> > see zap_locks(). Why do you think that it might make things more
> > unreliable, please?
>
> Take the scenario where CPU1 is in the middle of a printk(), and is
> holding its lock.
>
> CPU0 comes along and decides to trigger an NMI backtrace. This sends
> an NMI to CPU1, which takes it in the middle of the serial console
> output.
>
> With the existing solution, the NMI output will be written to the
> temporary buffer, and when CPU1 has finished handling the NMI it resumes
> the serial console output, eventually dropping the lock. That then
> allows CPU0 to print the contents of all buffers, and we get NMI
> printk output.

Is the traceback from inside printk() or serial console code
likely to be useful?
If not then why not get the stacktrace generated when the relevant
lock is released?
That should save any faffing with a special buffer.

David


2015-12-08 11:21:53

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 3/5] printk/nmi: Try hard to print Oops message in NMI context

> Take the scenario where CPU1 is in the middle of a printk(), and is
> holding its lock.
>
> CPU0 comes along and decides to trigger an NMI backtrace. This sends
> an NMI to CPU1, which takes it in the middle of the serial console
> output.
>
> With the existing solution, the NMI output will be written to the
> temporary buffer, and when CPU1 has finished handling the NMI it resumes
> the serial console output, eventually dropping the lock. That then
> allows CPU0 to print the contents of all buffers, and we get NMI
> printk output.
>
> With this solution, as I understand it, we'll instead end up with
> CPU1's printk trying to output direct to the console, and although
> we've busted a couple of locks, we won't have busted the serial
> console locks, so CPU1 will deadlock - and that will stop any output
> whatsoever.
>
> If this is correct, then the net result is that we go from NMI with
> serial console producing output to NMI with serial console being
> less reliable at producing output.

You are right. I thought about it a lot and I think that the best
solution is to drop this patch altogether. I guess that it will make
Peter Zijlstra happy as well.

Best Regards,
Petr

2015-12-08 14:49:13

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 3/5] printk/nmi: Try hard to print Oops message in NMI context

On Mon 2015-12-07 15:48:33, David Laight wrote:
> From: Russell King - ARM Linux
> > Sent: 04 December 2015 17:13
> ...
> > I have a slightly different view...
> >
> > > > I don't see bust_spinlocks() dealing with any of these locks, so IMHO
> > > > trying to make this work in NMI context strikes me as making the
> > > > existing solution more unreliable on ARM systems.
> > >
> > > bust_spinlocks() calls printk_nmi_flush() that would call printk()
> > > that would zap "lockbuf_lock" and "console_sem" when in Oops and NMI.
> > > Yes, there might be more locks blocked but we try to break at least
> > > the first two walls. Also zapping is allowed only once per 30 seconds,
> > > see zap_locks(). Why do you think that it might make things more
> > > unreliable, please?
> >
> > Take the scenario where CPU1 is in the middle of a printk(), and is
> > holding its lock.
> >
> > CPU0 comes along and decides to trigger an NMI backtrace. This sends
> > an NMI to CPU1, which takes it in the middle of the serial console
> > output.
> >
> > With the existing solution, the NMI output will be written to the
> > temporary buffer, and when CPU1 has finished handling the NMI it resumes
> > the serial console output, eventually dropping the lock. That then
> > allows CPU0 to print the contents of all buffers, and we get NMI
> > printk output.
>
> Is the traceback from inside printk() or serial console code
> likely to be useful?

It is useful if a problem is caused by the printk or serial console
code. For example, a slow serial console might cause a soft lockup
if there are too many messages to print.


> If not then why not get the stacktrace generated when the relevant
> lock is released? That should save any faffing with a special
> buffer.

Another question is how to detect that the NMI interrupted printk()
code. We would either need to analyze the backtrace, or we would need
to know which CPU took the printk() or console locks. This check
would have to be race-safe vs. the NMI context.
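[Editorial note: the second option - knowing which CPU holds the printk lock, in a way an NMI handler can safely check - could be sketched as below. The names (`printk_owner_cpu` and friends) are hypothetical, not real kernel symbols; the point is that the NMI side only performs an atomic read and takes no locks.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical owner tracking: which CPU is inside printk()?
 * -1 means the lock is not held by anyone. */
static atomic_int printk_owner_cpu = -1;

/* Called right after taking the real printk/console lock; release
 * ordering publishes the owner id before any console output starts. */
static void printk_lock_acquired(int this_cpu)
{
	atomic_store_explicit(&printk_owner_cpu, this_cpu,
			      memory_order_release);
}

/* Called just before dropping the real lock. */
static void printk_lock_released(void)
{
	atomic_store_explicit(&printk_owner_cpu, -1, memory_order_release);
}

/* Safe to call from NMI context: a plain atomic read, no locks.
 * True means this NMI landed in the middle of printk() on this CPU. */
static bool nmi_interrupted_printk(int this_cpu)
{
	return atomic_load_explicit(&printk_owner_cpu,
				    memory_order_acquire) == this_cpu;
}
```

The owner variable is written only by the CPU that actually holds the lock, so the NMI handler's read can never race with a concurrent writer on the same CPU.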


Best Regards,
Petr

2016-03-17 19:35:32

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Fri, 4 Dec 2015 17:57:44 +0100 Petr Mladek <[email protected]> wrote:

> On Wed 2015-12-02 00:24:49, Jiri Kosina wrote:
> > On Fri, 27 Nov 2015, Petr Mladek wrote:
> >
> > > MN10300 has its own implementation for entering and exiting NMI
> > > handlers. It does not call nmi_enter() and nmi_exit(). Please, find
> > > below an updated patch that adds printk_nmi_enter() and
> > > printk_nmi_exit() to the custom entry points. Then we could add HAVE_NMI
> > > to arch/mn10300/Kconfig and avoid the above warning.
> >
> > Hmm, so what exactly would go wrong if MN10300 (whatever that architecture
> > is) would call nmi_enter() and nmi_exit() at the places where it's
> > starting and finishing NMI handler?
> >
> > From a cursory look, it seems like most (if not all) of the things called
> > from nmi_{enter,exit}() would be nops there anyway.
>
> Good point. Max mentioned in the other mail that the NMI handler
> should follow the NMI rules. I do not see why it could not work.
> In fact, it might improve things, e.g. nmi_enter() blocks
> recursive NMIs.
>
> I think I will move it into a separate patch, though.
>

I've sort of lost the plot on this patchset.

I know Daniel had concerns (resolved?). Sergey lost the ability to
perform backtraces and has a proposed fix ("printk/nmi: restore
printk_func in nmi_panic") but that wasn't fully resolved and I didn't
merge anything. I'm not sure what Jan's thinking is on it all.

So... I'll retain

printk-nmi-generic-solution-for-safe-printk-in-nmi.patch
printk-nmi-use-irq-work-only-when-ready.patch
printk-nmi-warn-when-some-message-has-been-lost-in-nmi-context.patch
printk-nmi-increase-the-size-of-nmi-buffer-and-make-it-configurable.patch

in -mm for now. Perhaps I should drop them all and we start again
after -rc1?

2016-03-18 10:03:16

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v2 1/5] printk/nmi: Generic solution for safe printk in NMI

On Thu 2016-03-17 12:35:27, Andrew Morton wrote:
> On Fri, 4 Dec 2015 17:57:44 +0100 Petr Mladek <[email protected]> wrote:
>
> > On Wed 2015-12-02 00:24:49, Jiri Kosina wrote:
> > > On Fri, 27 Nov 2015, Petr Mladek wrote:
> > >
> > > > MN10300 has its own implementation for entering and exiting NMI
> > > > handlers. It does not call nmi_enter() and nmi_exit(). Please, find
> > > > below an updated patch that adds printk_nmi_enter() and
> > > > printk_nmi_exit() to the custom entry points. Then we could add HAVE_NMI
> > > > to arch/mn10300/Kconfig and avoid the above warning.
> > >
> > > Hmm, so what exactly would go wrong if MN10300 (whatever that architecture
> > > is) would call nmi_enter() and nmi_exit() at the places where it's
> > > starting and finishing NMI handler?
> > >
> > > From a cursory look, it seems like most (if not all) of the things called
> > > from nmi_{enter,exit}() would be nops there anyway.
> >
> > Good point. Max mentioned in the other mail that the NMI handler
> > should follow the NMI rules. I do not see why it could not work.
> > In fact, it might improve things, e.g. nmi_enter() blocks
> > recursive NMIs.
> >
> > I think I will move it into a separate patch, though.
> >
>
> I've sort of lost the plot on this patchset.
>
> I know Daniel had concerns (resolved?). Sergey lost the ability to
> perform backtraces and has a proposed fix ("printk/nmi: restore
> printk_func in nmi_panic") but that wasn't fully resolved and I didn't
> merge anything. I'm not sure what Jan's thinking is on it all.
>
> So... I'll retain
>
> printk-nmi-generic-solution-for-safe-printk-in-nmi.patch
> printk-nmi-use-irq-work-only-when-ready.patch
> printk-nmi-warn-when-some-message-has-been-lost-in-nmi-context.patch
> printk-nmi-increase-the-size-of-nmi-buffer-and-make-it-configurable.patch
>
> in -mm for now. Perhaps I should drop them all and we start again
> after -rc1?

Please, drop it for now. I'll send an updated version that will better
handle Daniel's concerns after rc1.

I thought that it had already been decided. You wanted to remove the patchset
in favour of "improvements to the nmi_backtrace code" by Chris Metcalf, see
http://thread.gmane.org/gmane.linux.ports.arm.kernel/482845/focus=483002

Best Regards,
Petr