2010-04-02 18:33:50

by Jason Wessel

Subject: [GIT PULL] kgdb fixes for 2.6.34-rc3

Linus, please pull the kgdb-fixes for 2.6.34.

git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb.git kgdb-fixes

Summary:
* These are all the fixes that were specifically for the kgdb core from
the original 2.6.34 merge window request.
* Given where we are in the rc cycle, I removed the directory
restructuring patch, so this is bug fixes only.

Thanks,
Jason.


---
The following changes since commit 42be79e37e264557f12860fa4cc84b4de3685954:
Linus Torvalds (1):
Merge branch 'drm-linus' of git://git.kernel.org/.../airlied/drm-2.6

are available in the git repository at:

git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb.git kgdb-fixes

Jason Wessel (5):
kgdb: have ebin2mem call probe_kernel_write once
kgdbts,sh: Add in breakpoint pc offset for superh
kgdb: eliminate kgdb_wait(), all cpus enter the same way
kgdb: Use atomic operators which use barriers
kgdb: Turn off tracing while in the debugger

drivers/misc/kgdbts.c | 6 ++
kernel/kgdb.c | 209 +++++++++++++++++++++++++------------------------
2 files changed, 111 insertions(+), 104 deletions(-)


2010-04-02 18:33:12

by Jason Wessel

Subject: [PATCH 1/5] kgdb: have ebin2mem call probe_kernel_write once

Rather than call probe_kernel_write() one byte at a time, process the
whole buffer locally and pass the entire result in one go. This way,
architectures that need to do special handling based on the length can
do so, and otherwise we end up calling memcpy() only once.
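
For context, the escaping undone here is the gdb remote protocol binary
encoding: '$', '#', and 0x7d itself arrive as 0x7d followed by the
original byte XORed with 0x20. In essence the new routine decodes in
place and writes once, roughly like this (function name illustrative,
paraphrased from the diff below):

static int ebin_decode_and_write(char *buf, char *mem, int count)
{
        char *out = buf;        /* decode in place; output never outruns input */
        int size = 0;

        while (count-- > 0) {
                out[size] = *buf++;
                if (out[size] == 0x7d)          /* escape marker */
                        out[size] = *buf++ ^ 0x20;
                size++;
        }
        /* a single write for the whole decoded buffer */
        return probe_kernel_write(mem, out, size);
}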

[[email protected]: Reported original problem and preliminary patch]

Signed-off-by: Jason Wessel <[email protected]>
Signed-off-by: Sonic Zhang <[email protected]>
Signed-off-by: Mike Frysinger <[email protected]>
---
kernel/kgdb.c | 23 +++++++++--------------
1 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/kernel/kgdb.c b/kernel/kgdb.c
index 761fdd2..42fd128 100644
--- a/kernel/kgdb.c
+++ b/kernel/kgdb.c
@@ -391,27 +391,22 @@ int kgdb_mem2hex(char *mem, char *buf, int count)

/*
* Copy the binary array pointed to by buf into mem. Fix $, #, and
- * 0x7d escaped with 0x7d. Return a pointer to the character after
- * the last byte written.
+ * 0x7d escaped with 0x7d. Return -EFAULT on failure or 0 on success.
+ * The input buf is overwritten with the result to write to mem.
*/
static int kgdb_ebin2mem(char *buf, char *mem, int count)
{
- int err = 0;
- char c;
+ int size = 0;
+ char *c = buf;

while (count-- > 0) {
- c = *buf++;
- if (c == 0x7d)
- c = *buf++ ^ 0x20;
-
- err = probe_kernel_write(mem, &c, 1);
- if (err)
- break;
-
- mem++;
+ c[size] = *buf++;
+ if (c[size] == 0x7d)
+ c[size] = *buf++ ^ 0x20;
+ size++;
}

- return err;
+ return probe_kernel_write(mem, c, size);
}

/*
--
1.6.3.1.9.g95405b

2010-04-02 18:33:18

by Jason Wessel

Subject: [PATCH 3/5] kgdb: eliminate kgdb_wait(), all cpus enter the same way

This is a kgdb architectural change to have all the cpus (master or
slave) enter the same function.

A cpu that hits an exception (wants to be the master cpu) will call
kgdb_handle_exception() from the trap handler and then invoke a
kgdb_roundup_cpu() to synchronize the other cpus and bring them into
the kgdb_handle_exception() as well.

A slave cpu will enter kgdb_handle_exception() from the
kgdb_nmicallback() and set the exception state to note that the
processor is a slave.

Previously the slave cpu would have called kgdb_wait(). This change
allows the debug core to change cpus without resuming the system in
order to inspect arch specific cpu information.
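
In outline, the two entry paths after this change reduce to the
following (abridged from the diff below; state setup and the reenter
check are elided):

        /* Master: the cpu that took the exception. */
        kgdb_info[cpu].exception_state |= DCPU_WANT_MASTER;
        ret = kgdb_cpu_enter(ks, regs);  /* spins until it owns kgdb_active */
        kgdb_info[cpu].exception_state &= ~DCPU_WANT_MASTER;

        /* Slave: a rounded-up cpu, entering via kgdb_nmicallback(). */
        kgdb_info[cpu].exception_state |= DCPU_IS_SLAVE;
        kgdb_cpu_enter(ks, regs);        /* parks in the same loop until released */
        kgdb_info[cpu].exception_state &= ~DCPU_IS_SLAVE;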

Signed-off-by: Jason Wessel <[email protected]>
---
kernel/kgdb.c | 165 ++++++++++++++++++++++++++++-----------------------------
1 files changed, 82 insertions(+), 83 deletions(-)

diff --git a/kernel/kgdb.c b/kernel/kgdb.c
index 42fd128..6882c04 100644
--- a/kernel/kgdb.c
+++ b/kernel/kgdb.c
@@ -69,9 +69,16 @@ struct kgdb_state {
struct pt_regs *linux_regs;
};

+/* Exception state values */
+#define DCPU_WANT_MASTER 0x1 /* Waiting to become a master kgdb cpu */
+#define DCPU_NEXT_MASTER 0x2 /* Transition from one master cpu to another */
+#define DCPU_IS_SLAVE 0x4 /* Slave cpu enter exception */
+#define DCPU_SSTEP 0x8 /* CPU is single stepping */
+
static struct debuggerinfo_struct {
void *debuggerinfo;
struct task_struct *task;
+ int exception_state;
} kgdb_info[NR_CPUS];

/**
@@ -558,49 +565,6 @@ static struct task_struct *getthread(struct pt_regs *regs, int tid)
}

/*
- * CPU debug state control:
- */
-
-#ifdef CONFIG_SMP
-static void kgdb_wait(struct pt_regs *regs)
-{
- unsigned long flags;
- int cpu;
-
- local_irq_save(flags);
- cpu = raw_smp_processor_id();
- kgdb_info[cpu].debuggerinfo = regs;
- kgdb_info[cpu].task = current;
- /*
- * Make sure the above info reaches the primary CPU before
- * our cpu_in_kgdb[] flag setting does:
- */
- smp_wmb();
- atomic_set(&cpu_in_kgdb[cpu], 1);
-
- /* Disable any cpu specific hw breakpoints */
- kgdb_disable_hw_debug(regs);
-
- /* Wait till primary CPU is done with debugging */
- while (atomic_read(&passive_cpu_wait[cpu]))
- cpu_relax();
-
- kgdb_info[cpu].debuggerinfo = NULL;
- kgdb_info[cpu].task = NULL;
-
- /* fix up hardware debug registers on local cpu */
- if (arch_kgdb_ops.correct_hw_break)
- arch_kgdb_ops.correct_hw_break();
-
- /* Signal the primary CPU that we are done: */
- atomic_set(&cpu_in_kgdb[cpu], 0);
- touch_softlockup_watchdog_sync();
- clocksource_touch_watchdog();
- local_irq_restore(flags);
-}
-#endif
-
-/*
* Some architectures need cache flushes when we set/clear a
* breakpoint:
*/
@@ -1395,34 +1359,12 @@ static int kgdb_reenter_check(struct kgdb_state *ks)
return 1;
}

-/*
- * kgdb_handle_exception() - main entry point from a kernel exception
- *
- * Locking hierarchy:
- * interface locks, if any (begin_session)
- * kgdb lock (kgdb_active)
- */
-int
-kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs)
+static int kgdb_cpu_enter(struct kgdb_state *ks, struct pt_regs *regs)
{
- struct kgdb_state kgdb_var;
- struct kgdb_state *ks = &kgdb_var;
unsigned long flags;
int sstep_tries = 100;
int error = 0;
int i, cpu;
-
- ks->cpu = raw_smp_processor_id();
- ks->ex_vector = evector;
- ks->signo = signo;
- ks->ex_vector = evector;
- ks->err_code = ecode;
- ks->kgdb_usethreadid = 0;
- ks->linux_regs = regs;
-
- if (kgdb_reenter_check(ks))
- return 0; /* Ouch, double exception ! */
-
acquirelock:
/*
* Interrupts will be restored by the 'trap return' code, except when
@@ -1430,13 +1372,42 @@ acquirelock:
*/
local_irq_save(flags);

- cpu = raw_smp_processor_id();
+ cpu = ks->cpu;
+ kgdb_info[cpu].debuggerinfo = regs;
+ kgdb_info[cpu].task = current;
+ /*
+ * Make sure the above info reaches the primary CPU before
+ * our cpu_in_kgdb[] flag setting does:
+ */
+ smp_wmb();
+ atomic_set(&cpu_in_kgdb[cpu], 1);

/*
- * Acquire the kgdb_active lock:
+ * CPU will loop if it is a slave or request to become a kgdb
+ * master cpu and acquire the kgdb_active lock:
*/
- while (atomic_cmpxchg(&kgdb_active, -1, cpu) != -1)
+ while (1) {
+ if (kgdb_info[cpu].exception_state & DCPU_WANT_MASTER) {
+ if (atomic_cmpxchg(&kgdb_active, -1, cpu) == cpu)
+ break;
+ } else if (kgdb_info[cpu].exception_state & DCPU_IS_SLAVE) {
+ if (!atomic_read(&passive_cpu_wait[cpu]))
+ goto return_normal;
+ } else {
+return_normal:
+ /* Return to normal operation by executing any
+ * hw breakpoint fixup.
+ */
+ if (arch_kgdb_ops.correct_hw_break)
+ arch_kgdb_ops.correct_hw_break();
+ atomic_set(&cpu_in_kgdb[cpu], 0);
+ touch_softlockup_watchdog_sync();
+ clocksource_touch_watchdog();
+ local_irq_restore(flags);
+ return 0;
+ }
cpu_relax();
+ }

/*
* For single stepping, try to only enter on the processor
@@ -1470,9 +1441,6 @@ acquirelock:
if (kgdb_io_ops->pre_exception)
kgdb_io_ops->pre_exception();

- kgdb_info[ks->cpu].debuggerinfo = ks->linux_regs;
- kgdb_info[ks->cpu].task = current;
-
kgdb_disable_hw_debug(ks->linux_regs);

/*
@@ -1484,12 +1452,6 @@ acquirelock:
atomic_set(&passive_cpu_wait[i], 1);
}

- /*
- * spin_lock code is good enough as a barrier so we don't
- * need one here:
- */
- atomic_set(&cpu_in_kgdb[ks->cpu], 1);
-
#ifdef CONFIG_SMP
/* Signal the other CPUs to enter kgdb_wait() */
if ((!kgdb_single_step) && kgdb_do_roundup)
@@ -1521,8 +1483,6 @@ acquirelock:
if (kgdb_io_ops->post_exception)
kgdb_io_ops->post_exception();

- kgdb_info[ks->cpu].debuggerinfo = NULL;
- kgdb_info[ks->cpu].task = NULL;
atomic_set(&cpu_in_kgdb[ks->cpu], 0);

if (!kgdb_single_step) {
@@ -1555,13 +1515,52 @@ kgdb_restore:
return error;
}

+/*
+ * kgdb_handle_exception() - main entry point from a kernel exception
+ *
+ * Locking hierarchy:
+ * interface locks, if any (begin_session)
+ * kgdb lock (kgdb_active)
+ */
+int
+kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs)
+{
+ struct kgdb_state kgdb_var;
+ struct kgdb_state *ks = &kgdb_var;
+ int ret;
+
+ ks->cpu = raw_smp_processor_id();
+ ks->ex_vector = evector;
+ ks->signo = signo;
+ ks->ex_vector = evector;
+ ks->err_code = ecode;
+ ks->kgdb_usethreadid = 0;
+ ks->linux_regs = regs;
+
+ if (kgdb_reenter_check(ks))
+ return 0; /* Ouch, double exception ! */
+ kgdb_info[ks->cpu].exception_state |= DCPU_WANT_MASTER;
+ ret = kgdb_cpu_enter(ks, regs);
+ kgdb_info[ks->cpu].exception_state &= ~DCPU_WANT_MASTER;
+ return ret;
+}
+
int kgdb_nmicallback(int cpu, void *regs)
{
#ifdef CONFIG_SMP
+ struct kgdb_state kgdb_var;
+ struct kgdb_state *ks = &kgdb_var;
+
+ memset(ks, 0, sizeof(struct kgdb_state));
+ ks->cpu = cpu;
+ ks->linux_regs = regs;
+
if (!atomic_read(&cpu_in_kgdb[cpu]) &&
- atomic_read(&kgdb_active) != cpu &&
- atomic_read(&cpu_in_kgdb[atomic_read(&kgdb_active)])) {
- kgdb_wait((struct pt_regs *)regs);
+ atomic_read(&kgdb_active) != -1 &&
+ atomic_read(&kgdb_active) != cpu) {
+ kgdb_info[cpu].exception_state |= DCPU_IS_SLAVE;
+ kgdb_cpu_enter(ks, regs);
+ kgdb_info[cpu].exception_state &= ~DCPU_IS_SLAVE;
return 0;
}
#endif
--
1.6.3.1.9.g95405b

2010-04-02 18:33:29

by Jason Wessel

Subject: [PATCH 4/5] kgdb: Use atomic operators which use barriers

A cpu_relax() does not mandate that there is an smp memory barrier.
As a result on the arm smp architecture the kernel debugger can hang
on entry from time to time, as shown by the kgdb regression tests.

The solution is simply to use the atomic operators which include a
proper smp memory barrier, instead of using atomic_set() and
atomic_read().

Tested-by: Will Deacon <[email protected]>
Signed-off-by: Jason Wessel <[email protected]>
---
kernel/kgdb.c | 21 ++++++++++-----------
1 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/kernel/kgdb.c b/kernel/kgdb.c
index 6882c04..7a56dc9 100644
--- a/kernel/kgdb.c
+++ b/kernel/kgdb.c
@@ -1379,8 +1379,7 @@ acquirelock:
* Make sure the above info reaches the primary CPU before
* our cpu_in_kgdb[] flag setting does:
*/
- smp_wmb();
- atomic_set(&cpu_in_kgdb[cpu], 1);
+ atomic_inc(&cpu_in_kgdb[cpu]);

/*
* CPU will loop if it is a slave or request to become a kgdb
@@ -1391,7 +1390,7 @@ acquirelock:
if (atomic_cmpxchg(&kgdb_active, -1, cpu) == cpu)
break;
} else if (kgdb_info[cpu].exception_state & DCPU_IS_SLAVE) {
- if (!atomic_read(&passive_cpu_wait[cpu]))
+ if (!atomic_add_return(0, &passive_cpu_wait[cpu]))
goto return_normal;
} else {
return_normal:
@@ -1400,7 +1399,7 @@ return_normal:
*/
if (arch_kgdb_ops.correct_hw_break)
arch_kgdb_ops.correct_hw_break();
- atomic_set(&cpu_in_kgdb[cpu], 0);
+ atomic_dec(&cpu_in_kgdb[cpu]);
touch_softlockup_watchdog_sync();
clocksource_touch_watchdog();
local_irq_restore(flags);
@@ -1449,7 +1448,7 @@ return_normal:
*/
if (!kgdb_single_step) {
for (i = 0; i < NR_CPUS; i++)
- atomic_set(&passive_cpu_wait[i], 1);
+ atomic_inc(&passive_cpu_wait[i]);
}

#ifdef CONFIG_SMP
@@ -1462,7 +1461,7 @@ return_normal:
* Wait for the other CPUs to be notified and be waiting for us:
*/
for_each_online_cpu(i) {
- while (!atomic_read(&cpu_in_kgdb[i]))
+ while (!atomic_add_return(0, &cpu_in_kgdb[i]))
cpu_relax();
}

@@ -1483,17 +1482,17 @@ return_normal:
if (kgdb_io_ops->post_exception)
kgdb_io_ops->post_exception();

- atomic_set(&cpu_in_kgdb[ks->cpu], 0);
+ atomic_dec(&cpu_in_kgdb[ks->cpu]);

if (!kgdb_single_step) {
for (i = NR_CPUS-1; i >= 0; i--)
- atomic_set(&passive_cpu_wait[i], 0);
+ atomic_dec(&passive_cpu_wait[i]);
/*
* Wait till all the CPUs have quit
* from the debugger.
*/
for_each_online_cpu(i) {
- while (atomic_read(&cpu_in_kgdb[i]))
+ while (atomic_add_return(0, &cpu_in_kgdb[i]))
cpu_relax();
}
}
@@ -1736,11 +1735,11 @@ EXPORT_SYMBOL_GPL(kgdb_unregister_io_module);
*/
void kgdb_breakpoint(void)
{
- atomic_set(&kgdb_setting_breakpoint, 1);
+ atomic_inc(&kgdb_setting_breakpoint);
wmb(); /* Sync point before breakpoint */
arch_kgdb_breakpoint();
wmb(); /* Sync point after breakpoint */
- atomic_set(&kgdb_setting_breakpoint, 0);
+ atomic_dec(&kgdb_setting_breakpoint);
}
EXPORT_SYMBOL_GPL(kgdb_breakpoint);

--
1.6.3.1.9.g95405b

2010-04-02 18:33:40

by Jason Wessel

Subject: [PATCH 2/5] kgdbts,sh: Add in breakpoint pc offset for superh

The kgdb test suite mimics the behavior of gdb. For the sh
architecture the pc must be decremented by 2 for a software breakpoint.

Signed-off-by: Jason Wessel <[email protected]>
Acked-by: Paul Mundt <[email protected]>
---
drivers/misc/kgdbts.c | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
index fcb6ec1..7245023 100644
--- a/drivers/misc/kgdbts.c
+++ b/drivers/misc/kgdbts.c
@@ -295,6 +295,10 @@ static int check_and_rewind_pc(char *put_str, char *arg)
/* On x86 a breakpoint stop requires it to be decremented */
if (addr + 1 == kgdbts_regs.ip)
offset = -1;
+#elif defined(CONFIG_SUPERH)
+ /* On SUPERH a breakpoint stop requires it to be decremented */
+ if (addr + 2 == kgdbts_regs.pc)
+ offset = -2;
#endif
if (strcmp(arg, "silent") &&
instruction_pointer(&kgdbts_regs) + offset != addr) {
@@ -305,6 +309,8 @@ static int check_and_rewind_pc(char *put_str, char *arg)
#ifdef CONFIG_X86
/* On x86 adjust the instruction pointer if needed */
kgdbts_regs.ip += offset;
+#elif defined(CONFIG_SUPERH)
+ kgdbts_regs.pc += offset;
#endif
return 0;
}
--
1.6.3.1.9.g95405b

2010-04-02 18:34:09

by Jason Wessel

Subject: [PATCH 5/5] kgdb: Turn off tracing while in the debugger

The kernel debugger should turn off kernel tracing any time the
debugger is active and restore it on resume.

Signed-off-by: Jason Wessel <[email protected]>
Reviewed-by: Steven Rostedt <[email protected]>
---
kernel/kgdb.c | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/kernel/kgdb.c b/kernel/kgdb.c
index 7a56dc9..27e7bb5 100644
--- a/kernel/kgdb.c
+++ b/kernel/kgdb.c
@@ -1365,6 +1365,7 @@ static int kgdb_cpu_enter(struct kgdb_state *ks, struct pt_regs *regs)
int sstep_tries = 100;
int error = 0;
int i, cpu;
+ int trace_on = 0;
acquirelock:
/*
* Interrupts will be restored by the 'trap return' code, except when
@@ -1399,6 +1400,8 @@ return_normal:
*/
if (arch_kgdb_ops.correct_hw_break)
arch_kgdb_ops.correct_hw_break();
+ if (trace_on)
+ tracing_on();
atomic_dec(&cpu_in_kgdb[cpu]);
touch_softlockup_watchdog_sync();
clocksource_touch_watchdog();
@@ -1474,6 +1477,9 @@ return_normal:
kgdb_single_step = 0;
kgdb_contthread = current;
exception_level = 0;
+ trace_on = tracing_is_on();
+ if (trace_on)
+ tracing_off();

/* Talk to debugger with gdbserial protocol */
error = gdb_serial_stub(ks);
@@ -1505,6 +1511,8 @@ kgdb_restore:
else
kgdb_sstep_pid = 0;
}
+ if (trace_on)
+ tracing_on();
/* Free kgdb_active */
atomic_set(&kgdb_active, -1);
touch_softlockup_watchdog_sync();
--
1.6.3.1.9.g95405b

2010-04-02 19:17:37

by Linus Torvalds

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Fri, 2 Apr 2010, Jason Wessel wrote:
>
> A cpu_relax() does not mandate that there is an smp memory barrier.
> As a result on the arm smp architecture the kernel debugger can hang
> on entry from time to time, as shown by the kgdb regression tests.
>
> The solution is simply to use the atomic operators which include a
> proper smp memory barrier, instead of using atomic_set() and
> atomic_read().

Hmm. While I absolutely agree that 'cpu_relax()' does not imply a memory
barrier, I disagree that this change should be needed. If ARM has odd
semantics where it will never see changes in a busy loop, then ARM is
buggy, and that has _nothing_ to do with the Linux notion of memory
barriers.

The _whole_ point of "cpu_relax()" is to have busy loops. And the point of
busy loops is that they are waiting for something to change. So if this
loop:

> for_each_online_cpu(i) {
> - while (atomic_read(&cpu_in_kgdb[i]))
> + while (atomic_add_return(0, &cpu_in_kgdb[i]))
> cpu_relax();
> }

can somehow lock up because "cpu_relax()" doesn't work with an infinite
"while (atomic_read(..))" loop, then the ARM implementation of cpu_relax()
is buggy.

Here's a simple example of exactly these kinds of busy loops waiting for
something to change using cpu_relax() from generic kernel code:

ipc/mqueue.c- while (ewp->state == STATE_PENDING)
ipc/mqueue.c: cpu_relax();

ipc/msg.c- while (msg == NULL) {
ipc/msg.c: cpu_relax();

kernel/sched.c- while (task_is_waking(p))
kernel/sched.c: cpu_relax();

kernel/smp.c- while (data->flags & CSD_FLAG_LOCK)
kernel/smp.c: cpu_relax();
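
Stripped down, every one of these has the same shape (an illustration,
not a quote from any of the files above):

        extern int flag;        /* set to 1 by another cpu */

        while (!flag)
                cpu_relax();    /* at minimum a compiler barrier, so the load
                                 * is redone every pass; the architecture just
                                 * has to make the store visible eventually */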

so I'd like to understand what the ARM issue is.

Does ARM have some broken cache coherency model where writes by other
CPU's _never_ show up unless the reading CPU does some memory sync thing?
If so, then cpu_relax() obviously does need to do that syncing
instruction.

And no, that does NOT mean that "cpu_relax()" has any memory barrier
semantics. All it means is that cpu_relax() obviously is some
architecture-specific way of saying "I'm in a busy loop, waiting for
something".

Linus

2010-04-02 19:38:21

by Jason Wessel

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers

On 04/02/2010 02:12 PM, Linus Torvalds wrote:
>
>
> Hmm. While I absolutely agree that 'cpu_relax()' does not imply a memory
> barrier, I disagree that this change should be needed. If ARM has odd
> semantics where it will never see changes in a busy loop, then ARM is
> buggy, and that has _nothing_ to do with the Linux notion of memory
> barriers.
>
> The _whole_ point of "cpu_relax()" is to have busy loops. And the point of
> busy loops is that they are waiting for something to change. So if this
> loop:
>
>
>> for_each_online_cpu(i) {
>> - while (atomic_read(&cpu_in_kgdb[i]))
>> + while (atomic_add_return(0, &cpu_in_kgdb[i]))
>> cpu_relax();
>> }
>>
>
>

So this part might be overkill, but I don't actually have the hardware,
schematics or reference manuals to ascertain what is going on. The
other changes in this patch should be correct because we really do want
memory barriers which come along with the inc and dec operators.

> can somehow lock up because "cpu_relax()" doesn't work with an infinite
> "while (atomic_read(..))" loop, then the ARM implementation of cpu_relax()
> is buggy.
>
>

Will originally proposed a patch for cpu_relax:
http://lists.infradead.org/pipermail/linux-arm-kernel/2010-March/011076.html
--- The patch ---

+#if __LINUX_ARM_ARCH__ == 6
+#define cpu_relax() smp_mb()
+#else
#define cpu_relax() barrier()
+#endif

---

Russell had this thread:
http://permalink.gmane.org/gmane.linux.ports.arm.kernel/75717

--- clip from url ---

cpu_relax() is also defined to be a compiler barrier so that the compiler
reloads the variable on every iteration.

> This patch changes the definition of cpu_relax() to smp_mb() for ARMv6 cores,
> forcing a flushing of the write buffer on SMP systems. If the Kernel is not
> compiled for SMP support, this will expand to a barrier() as before.

I don't think this is correct. You're making a macro do something on ARM
which no other platform, apart from blackfin (which I believe is wrong)
makes it do.

---

> Does ARM have some broken cache coherency model where writes by other
> CPU's _never_ show up unless the reading CPU does some memory sync thing?
> If so, then cpu_relax() obviously does need to do that syncing
> instruction.
>
>

Given your statements, I can just keep the atomic reads as they were
previously, but keep the inc and dec parts. And we can wait for a
further response from either Will or Russell.

Thanks,
Jason.

2010-04-02 19:47:32

by Linus Torvalds

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Fri, 2 Apr 2010, Jason Wessel wrote:
>
> Russell had this thread:
> http://permalink.gmane.org/gmane.linux.ports.arm.kernel/75717

Russell is wrong.

Yes, originally it was about P4's overheating. But let me repeat: the fact
is, this _is_ valid kernel code:

kernel/sched.c- while (task_is_waking(p))
kernel/sched.c: cpu_relax();

(where that "task_is_waking()" is simply doing two regular reads, and
expects another CPU to be changing them).

This has _nothing_ to do with memory barriers, or with overheating.

The fact that maybe some ARM6 cache coherency implementation is pure and
utter sh*t and never sees the changes without the same instruction that
happens to be a memory barrier on that architecture does not make that
cpu_relax() any more about memory barriers.

Similarly, the fact that P4's wanted cpu_relax() in order to not overheat
and cause slowdowns has _nothing_ to do with anything.

All that matters is that the above kind of while loop must work. The
architecture needs to do whatever it needs to do to make it work. End of
discussion. If on ARM6 that means "smp_mb()", then that's an ARM6
implementation issue.

Linus

2010-04-02 19:48:24

by Jason Wessel

Subject: Re: [Kgdb-bugreport] [PATCH 4/5] kgdb: Use atomic operators which use barriers


Here is the revised patch, which is going into the kgdb-fixes branch.

Thanks,
Jason.

--
From: Jason Wessel <[email protected]>
Subject: [PATCH] kgdb: Use atomic operators which use barriers

The cpu_relax() does not mandate that there is an smp memory barrier.
As a result on the arm smp architecture the kernel debugger can hang
on entry from time to time, as shown by the kgdb regression tests.

The solution is simply to use the atomic operators which include a
proper smp memory barrier, instead of using atomic_set().

Tested-by: Will Deacon <[email protected]>
Signed-off-by: Jason Wessel <[email protected]>
---
kernel/kgdb.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)

--- a/kernel/kgdb.c
+++ b/kernel/kgdb.c
@@ -1379,8 +1379,7 @@ acquirelock:
* Make sure the above info reaches the primary CPU before
* our cpu_in_kgdb[] flag setting does:
*/
- smp_wmb();
- atomic_set(&cpu_in_kgdb[cpu], 1);
+ atomic_inc(&cpu_in_kgdb[cpu]);

/*
* CPU will loop if it is a slave or request to become a kgdb
@@ -1400,7 +1399,7 @@ return_normal:
*/
if (arch_kgdb_ops.correct_hw_break)
arch_kgdb_ops.correct_hw_break();
- atomic_set(&cpu_in_kgdb[cpu], 0);
+ atomic_dec(&cpu_in_kgdb[cpu]);
touch_softlockup_watchdog_sync();
clocksource_touch_watchdog();
local_irq_restore(flags);
@@ -1449,7 +1448,7 @@ return_normal:
*/
if (!kgdb_single_step) {
for (i = 0; i < NR_CPUS; i++)
- atomic_set(&passive_cpu_wait[i], 1);
+ atomic_inc(&passive_cpu_wait[i]);
}

#ifdef CONFIG_SMP
@@ -1483,11 +1482,11 @@ return_normal:
if (kgdb_io_ops->post_exception)
kgdb_io_ops->post_exception();

- atomic_set(&cpu_in_kgdb[ks->cpu], 0);
+ atomic_dec(&cpu_in_kgdb[ks->cpu]);

if (!kgdb_single_step) {
for (i = NR_CPUS-1; i >= 0; i--)
- atomic_set(&passive_cpu_wait[i], 0);
+ atomic_dec(&passive_cpu_wait[i]);
/*
* Wait till all the CPUs have quit
* from the debugger.
@@ -1736,11 +1735,11 @@ EXPORT_SYMBOL_GPL(kgdb_unregister_io_mod
*/
void kgdb_breakpoint(void)
{
- atomic_set(&kgdb_setting_breakpoint, 1);
+ atomic_inc(&kgdb_setting_breakpoint);
wmb(); /* Sync point before breakpoint */
arch_kgdb_breakpoint();
wmb(); /* Sync point after breakpoint */
- atomic_set(&kgdb_setting_breakpoint, 0);
+ atomic_dec(&kgdb_setting_breakpoint);
}
EXPORT_SYMBOL_GPL(kgdb_breakpoint);

2010-04-02 19:50:35

by Linus Torvalds

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Fri, 2 Apr 2010, Linus Torvalds wrote:
>
> All that matters is that the above kind of while loop must work. The
> architecture needs to do whatever it needs to do to make it work. End of
> discussion. If on ARM6 that means "smp_mb()", then that's an ARM6
> implementation issue.

Put another way: from a kernel standpoint, cpu_relax() in _no_ way implies
a memory barrier. That has always been true, and that continues to be
true.

But Linux does expect that if some other CPU modifies a memory location,
then we _will_ see that modification eventually. If the CPU needs help to
do so, then cpu_relax() needs to do that. Again - this has nothing to do
with memory barriers. It's just a basic requirement.

Linus

2010-04-02 19:51:43

by Linus Torvalds

Subject: Re: [Kgdb-bugreport] [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Fri, 2 Apr 2010, Jason Wessel wrote:
>
> Here is the revised patch, which is going into the kgdb-fixes branch.

Yeah, this patch I have no issues with.

Except:

> From: Jason Wessel <[email protected]>
> Subject: [PATCH] kgdb: Use atomic operators which use barriers
>
> The cpu_relax() does not mandate that there is an smp memory barrier.
> As a result on the arm smp architecture the kernel debugger can hang
> on entry from time to time, as shown by the kgdb regression tests.

Now your changelog makes no sense any more.

Linus

2010-04-02 20:00:55

by Jason Wessel

Subject: Re: [Kgdb-bugreport] [PATCH 4/5] kgdb: Use atomic operators which use barriers

On 04/02/2010 02:47 PM, Linus Torvalds wrote:
>
>> From: Jason Wessel <[email protected]>
>> Subject: [PATCH] kgdb: Use atomic operators which use barriers
>>
>> The cpu_relax() does not mandate that there is an smp memory barrier.
>> As a result on the arm smp architecture the kernel debugger can hang
>> on entry from time to time, as shown by the kgdb regression tests.
>>
>
> Now your changelog makes no sense any more.
>

The changelog is revised now and the branch is pushed. Regression
testing on the HW I have has passed as well.

For the series the pull looks like:

drivers/misc/kgdbts.c | 6 ++
kernel/kgdb.c | 205 +++++++++++++++++++++++++------------------------
2 files changed, 109 insertions(+), 102 deletions(-)


Thanks,
Jason.


---
From: Jason Wessel <[email protected]>
Subject: [PATCH] kgdb: use atomic_inc and atomic_dec instead of atomic_set

Memory barriers should be used for the kgdb cpu synchronization. The
atomic_set() does not imply a memory barrier.

Reported-by: Will Deacon <[email protected]>
Signed-off-by: Jason Wessel <[email protected]>

2010-04-02 20:11:50

by Linus Torvalds

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Fri, 2 Apr 2010, Linus Torvalds wrote:
>
> But Linux does expect that if some other CPU modifies a memory location,
> then we _will_ see that modification eventually. If the CPU needs help to
> do so, then cpu_relax() needs to do that. Again - this has nothing to do
> with memory barriers. It's just a basic requirement.

Btw, this is not necessarily just limited to cpu_relax() either. The
assumption that we'll eventually see changes to memory is pretty common.

If some architecture doesn't see updates from other CPU's (or maybe DMA)
without extra help, I suspect that it might be a good idea to not just
have IO instructions but also things like 'udelay()' have that "extra
helping sauce" in them.

For the exact same reason: we have drivers that depend on things like
jiffies updates happening automatically, eg:

drivers/scsi/u14-34f.c: while ((jiffies - time) < (10 * HZ) && limit++ < 200000) udelay(100L);

or

drivers/isdn/hisax/elsa_ser.c- while(timeout-- && cs->hw.elsa.transcnt)
drivers/isdn/hisax/elsa_ser.c: udelay(1000);

or

drivers/serial/68328serial.c: while (!(uart->utx.w & UTX_TX_AVAIL)) udelay(5);

where those variables we read may be updated from another CPU taking an
interrupt (or by DMA - I didn't look at how UTX_TX_AVAIL gets set, for
example).

Now, in practice, I suspect the above kind of busy loop is _way_ less
common. It's harder to grep for (I picked 'udelay()' to look for, but
the result is full of noise that does IO etc that would presumably fix
things anyway, so the above two are just random examples of some drivers
basically reading variables in a busy loop).

And for something like ARM, random drivers probably don't much matter. So
I doubt that this udelay() thing is at _all_ as important as cpu_relax()
is, and ARM maintainers can probably happily just ignore it.

Linus

2010-04-02 22:26:17

by Russell King - ARM Linux

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers

On Fri, Apr 02, 2010 at 12:43:00PM -0700, Linus Torvalds wrote:
> Russell is wrong.

Actually, in future threads you end up agreeing with my position...

> The fact that maybe some ARM6 cache coherency implementation is pure and
> utter sh*t and never sees the changes without the same instruction that
> happens to be a memory barrier on that architecture does not make that
> cpu_relax() any more about memory barriers.

I'm not going to discuss it now; I'm on holiday. Wait until mid-next
week on this and I'll respond with a fuller reply.

2010-04-02 23:29:55

by Linus Torvalds

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Fri, 2 Apr 2010, Russell King - ARM Linux wrote:
>
> Actually, in future threads you end up agreeing with my position...

I always agreed that it was not a memory barrier.

In fact, the commit that extended on the "volatile-considered-harmful"
patch from you has this quote from me in the commit logs:

Linus sayeth:

: I don't think it was ever the intention that it would be seen as anything
: but a compiler barrier, although it is obviously implied that it might
: well perform some per-architecture actions that have "memory barrier-like"
: semantics.
:
: After all, the whole and only point of the "cpu_relax()" thing is to tell
: the CPU that we're busy-looping on some event.
:
: And that "event" might be (and often is) about reading the same memory
: location over and over until it changes to what we want it to be. So it's
: quite possible that on various architectures the "cpu_relax()" could be
: about making sure that such a tight loop on loads doesn't starve cache
: transactions, for example - and as such look a bit like a memory barrier
: from a CPU standpoint.
:
: But it's not meant to have any kind of architectural memory ordering
: semantics as far as the kernel is concerned - those must come from other
: sources.

which I think is pretty clear.

But that quote seems to be the one where you then think I "agree" with
you.

Linus

2010-04-03 16:08:56

by Russell King - ARM Linux

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers

On Fri, Apr 02, 2010 at 04:24:57PM -0700, Linus Torvalds wrote:
>
>
> On Fri, 2 Apr 2010, Russell King - ARM Linux wrote:
> >
> > Actually, in future threads you end up agreeing with my position...
>
> I always agreed that it was not a memory barrier.
>
> In fact, the commit that extended on the "volatile-considered-harmful"
> patch from you has this quote from me in the commit logs:
>
> Linus sayeth:
>
> : I don't think it was ever the intention that it would be seen as anything
> : but a compiler barrier, although it is obviously implied that it might
> : well perform some per-architecture actions that have "memory barrier-like"
> : semantics.
> :
> : After all, the whole and only point of the "cpu_relax()" thing is to tell
> : the CPU that we're busy-looping on some event.
> :
> : And that "event" might be (and often is) about reading the same memory
> : location over and over until it changes to what we want it to be. So it's
> : quite possible that on various architectures the "cpu_relax()" could be
> : about making sure that such a tight loop on loads doesn't starve cache
> : transactions, for example - and as such look a bit like a memory barrier
> : from a CPU standpoint.
> :
> : But it's not meant to have any kind of architectural memory ordering
> : semantics as far as the kernel is concerned - those must come from other
> : sources.
>
> which I think is pretty clear.
>
> But that quote seems to be the one where you then think I "agree" with
> you.

Yet again you read something into what I say that wasn't there.

Wait for me to return from holiday, as I said, and I'll respond further.

2010-04-05 09:21:42

by Pavel Machek

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers

Hi!

> > Russell had this thread:
> > http://permalink.gmane.org/gmane.linux.ports.arm.kernel/75717
>
> Russell is wrong.
>
> Yes, originally it was about P4's overheating. But let me repeat: the fact
> is, this _is_ valid kernel code:
>
> kernel/sched.c- while (task_is_waking(p))
> kernel/sched.c: cpu_relax();


And this is valid (but ugly and not optimal) kernel code:

kernel/sched.c- while (task_is_waking(p))
kernel/sched.c: asm volatile("" :: "memory");


> (where that "task_is_waking()" is simply doing two regular reads, and
> expects another CPU to be changing them).
>
> This has _nothing_ to do with memory barriers, or with overheating.
...
> All that matters is that the above kind of while loop must work. The
> architecture needs to do whatever it needs to do to make it work. End of
> discussion. If on ARM6 that means "smp_mb()", then that's an ARM6
> implementation issue.

...so I don't think inserting smp_mb() into cpu_relax() and udelay()
and similar can ever fix the problem fully.

Run smp_mb() from periodic interrupt?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2010-04-05 15:01:39

by Linus Torvalds

Subject: Re: [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Mon, 5 Apr 2010, Pavel Machek wrote:
>
> And this is valid (but ugly and not optimal) kernel code:
>
> kernel/sched.c- while (task_is_waking(p))
> kernel/sched.c: asm volatile("" :: "memory");

No. We would consider such code buggy.

That said, you're right that such code would exist. But if it were to
exist and cause lock-ups, at least I would consider it a simple and
outright bug, and that the proper fix would be to just replace the asm
with cpu_relax().

> ...so I don't think inserting smp_mb() into cpu_relax() and udelay()
> and similar can ever fix the problem fully.

See above.

> Run smp_mb() from periodic interrupt?

Doesn't help - it's quite valid to do things like this in irq-disabled
code, although it is hopefully very very rare.

In particular, I suspect the kgdb use _is_ interrupts disabled, and is why
the ARM people even noticed (the normal cases would break out of the loop
exactly because an interrupt occurred, and an interrupt is probably
already enough to make the issue go away).

And please do not confuse this with smp_mb() - this is not about the Linux
notion of a memory barrier, this is about whatever per-arch oddity that
makes changes not be noticed (ie caches may be _coherent_, but they are
not "timely").

Linus

2010-04-08 14:59:22

by Linus Torvalds

Subject: RE: [PATCH 4/5] kgdb: Use atomic operators which use barriers



On Thu, 8 Apr 2010, Will Deacon wrote:
>
> I simply used smp_mb() as a way to solve this ARM-specific problem. I think
> Russell objects to this largely because this problem affects a particular
> scenario of busy-wait loops and changing the definition of cpu_relax() adds
> barriers to code that doesn't necessarily require them.

How expensive is a smp_mb() on arm?

And by "expensive" I don't mean so much performance of the instruction
itself (after all, we _are_ just busy-looping), but more about things like
power and perhaps secondary effects (does it cause memory traffic, for
example?).

Also, I have to say that _usually_ the problem with non-timely cache
updates in not on the reading side, but on the writing side - ie the other
CPU may be buffering writes indefinitely and the writes will go out only
as a response to bus cycles or the write buffers filling up. In which case
the reader can't really do much about it.

But your comment for the "smp_mb()" patch seems to imply that it's
literally a matter of cache access priorities:

"On the ARM11MPCore processor [where loads are prioritised over stores],
spinning in such a loop will prevent the write buffer from draining."

and in that case I would say that the correct thing _definitely_ is to
make sure that the loop simply is never that tight. Maybe you can do
that without an smp_mb(), by just making whatever "cpu_relax()" does slow
enough (something that stalls the pipeline or whatever?)

But if smp_mb() is cheap, then that sounds like the right solution.

Linus

2010-04-08 16:27:46

by Dmitry Adamushko

Subject: Re: [Kgdb-bugreport] [PATCH 4/5] kgdb: Use atomic operators which use barriers

> ---
> From: Jason Wessel <[email protected]>
> Subject: [PATCH] kgdb: use atomic_inc and atomic_dec instead of atomic_set
>
> Memory barriers should be used for the kgdb cpu synchronization. The
> atomic_set() does not imply a memory barrier.

Hmm, but as far as I can see [ Documentation/memory-barriers.txt and
some actual implementations of atomic_inc/dec() ] atomic_inc/dec()
does not imply memory barriers either.

Either I'm missing the real point of this change and the very category
of "memory barriers" bears another meaning here or the following piece
looks especially dubious...

[...]
* Make sure the above info reaches the primary CPU before
* our cpu_in_kgdb[] flag setting does:
*/
- smp_wmb();
- atomic_set(&cpu_in_kgdb[cpu], 1);
+ atomic_inc(&cpu_in_kgdb[cpu]);

so what ensures the "Make sure the above info reaches..." requirement here?
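
If that ordering really is required, the pattern memory-barriers.txt
gives for atomics that return no value would be something along these
lines (a sketch of an alternative, not what the patch does):

        kgdb_info[cpu].debuggerinfo = regs;
        kgdb_info[cpu].task = current;
        /* order the writes above before the flag update below */
        smp_mb__before_atomic_inc();
        atomic_inc(&cpu_in_kgdb[cpu]);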

TIA,


-- Dmitry

2010-04-19 15:22:31

by Will Deacon

Subject: Re: [Kgdb-bugreport] [PATCH 4/5] kgdb: Use atomic operators which use barriers

Hi Jason,

> Here is the revised patch, which is going into the kgdb-fixes branch.
>
> Thanks,
> Jason.
>
> --
> From: Jason Wessel <[email protected]>
> Subject: [PATCH] kgdb: Use atomic operators which use barriers
>
> The cpu_relax() does not mandate that there is an smp memory barrier.
> As a result on the arm smp architecture the kernel debugger can hang
> on entry from time to time, as shown by the kgdb regression tests.
>
> The solution is simply to use the atomic operators which include a
> proper smp memory barrier, instead of using atomic_set().
>
> Tested-by: Will Deacon <[email protected]>
> Signed-off-by: Jason Wessel <[email protected]>
> ---
> kernel/kgdb.c | 15 +++++++--------
> 1 file changed, 7 insertions(+), 8 deletions(-)
>
> --- a/kernel/kgdb.c
> +++ b/kernel/kgdb.c
> @@ -1379,8 +1379,7 @@ acquirelock:
> * Make sure the above info reaches the primary CPU before
> * our cpu_in_kgdb[] flag setting does:
> */
> - smp_wmb();
> - atomic_set(&cpu_in_kgdb[cpu], 1);
> + atomic_inc(&cpu_in_kgdb[cpu]);

As Dmitry pointed out here:

http://lkml.org/lkml/2010/4/8/214

This bit of code looks broken, especially since the comment has been left
alone by the patch. I think commit ae6bf53e should be reverted because
semantically all it does is remove the smp_wmb() above.

Please let me know what you think,

Will