2021-07-19 20:11:43

by Rik van Riel

Subject: [PATCH] x86,mm: print likely CPU at segfault time

From 14d31a44a5186c94399dc9518ba80adf64c99772 Mon Sep 17 00:00:00 2001
From: Rik van Riel <[email protected]>
Date: Mon, 19 Jul 2021 14:49:17 -0400
Subject: [PATCH] x86,mm: print likely CPU at segfault time

In a large enough fleet of computers, it is common to have a few bad
CPUs. Those can often be identified by seeing that some commonly run
kernel code (that runs fine everywhere else) keeps crashing on the
same CPU core on a particular bad system.

One of the failure modes observed is that either the instruction pointer
or a register used to specify the address of data to be fetched gets
corrupted, resulting in something like a kernel page fault, null pointer
dereference, NX violation, or similar.

Those kernel failures are often preceded by similar looking userspace
failures. It would be useful to know if those are also happening on
the same CPU cores, to get a little more confirmation that it is indeed
a hardware issue.

Adding a printk to show_signal_msg() achieves that purpose. It isn't
perfect since the task might get rescheduled on another CPU between
when the fault hit and when the message is printed, but it should be
good enough to show correlation between userspace and kernel errors
when dealing with a bad CPU.

$ ./segfault
Segmentation fault (core dumped)
$ dmesg | grep segfault
segfault[1349]: segfault at 0 ip 000000000040113a sp 00007ffc6d32e360 error 4 in segfault[401000+1000] on CPU 0

Signed-off-by: Rik van Riel <[email protected]>
---
arch/x86/mm/fault.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index b2eefdefc108..dd6c89c23a3a 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -777,6 +777,8 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,

print_vma_addr(KERN_CONT " in ", regs->ip);

+ printk(KERN_CONT " on CPU %d", raw_smp_processor_id());
+
printk(KERN_CONT "\n");

show_opcodes(regs, loglvl);
--
2.24.1



2021-07-19 20:28:20

by Dave Hansen

Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time

On 7/19/21 12:00 PM, Rik van Riel wrote:
> In a large enough fleet of computers, it is common to have a few bad
> CPUs. Those can often be identified by seeing that some commonly run
> kernel code (that runs fine everywhere else) keeps crashing on the
> same CPU core on a particular bad system.

I've encountered a few of these kinds of things over the years. This is
*definitely* useful. What you've proposed here is surely the simplest
thing we could print and probably also offers the best bang for our buck.

The only other thing I thought of is that it might be nice to print out
the core id instead of the CPU id. If there are hardware issues with a
CPU, they're likely to affect both threads. Seeing two different "CPUs"
in an SMT environment might tempt some folks to think it's not a
core-level hardware issue.

If it's as trivial as:

        printk(KERN_CONT " on cpu/core %d/%d",
                raw_smp_processor_id(),
                topology_core_id(raw_smp_processor_id()));

it would be handy. But, it's also not hard to look at 10 segfaults, see
that they happened only on 2 CPUs and realize that hyperthreading is
enabled.

Either way, this patch moves things in the right direction, so:

Acked-by: Dave Hansen <[email protected]>

2021-07-19 20:29:30

by Rik van Riel

Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time

On Mon, 2021-07-19 at 12:20 -0700, Dave Hansen wrote:

> If it's as trivial as:
>
>         printk(KERN_CONT " on cpu/core %d/%d",
>                 raw_smp_processor_id(),
>                 topology_core_id(raw_smp_processor_id()));
>
> it would be handy.  But, it's also not hard to look at 10 segfaults,
> see
> that they happened only on 2 CPUs and realize that hyperthreading is
> enabled.

One problem with topology_core_id() is that, on a
multi-socket system, the core number may not be unique.

That is why I ended up going with just the CPU number.
It's pretty easy to put two and two together afterwards.

Thanks for your quick patch review.

--
All Rights Reversed.



2021-07-21 20:54:18

by Thomas Gleixner

Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time

On Mon, Jul 19 2021 at 15:34, Rik van Riel wrote:

> On Mon, 2021-07-19 at 12:20 -0700, Dave Hansen wrote:
>
>> If it's as trivial as:
>>
>>         printk(KERN_CONT " on cpu/core %d/%d",
>>                 raw_smp_processor_id(),
>>                 topology_core_id(raw_smp_processor_id()));
>>
>> it would be handy.  But, it's also not hard to look at 10 segfaults,
>> see
>> that they happened only on 2 CPUs and realize that hyperthreading is
>> enabled.
>
> One problem with topology_core_id() is that, on a
> multi-socket system, the core number may not be unique.

Just add topology_physical_package_id() and you have a complete picture.

Thanks,

tglx

2021-07-21 21:06:38

by Thomas Gleixner

Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time

Rik,

On Mon, Jul 19 2021 at 15:00, Rik van Riel wrote:
>
> Adding a printk to show_signal_msg() achieves that purpose. It isn't
> perfect since the task might get rescheduled on another CPU between
> when the fault hit and when the message is printed, but it should be
> good enough to show correlation between userspace and kernel errors
> when dealing with a bad CPU.

we could collect the cpu number in do_*_addr_fault() before interrupts
are enabled and just hand it through. There are only a few callchains
which end up in __bad_area_nosemaphore().

Thanks,

tglx

2021-07-24 01:40:46

by Rik van Riel

Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time

On Wed, 2021-07-21 at 22:36 +0200, Thomas Gleixner wrote:
> Rik,
>
> On Mon, Jul 19 2021 at 15:00, Rik van Riel wrote:
> >
> > Adding a printk to show_signal_msg() achieves that purpose. It
> > isn't
> > perfect since the task might get rescheduled on another CPU between
> > when the fault hit and when the message is printed, but it should
> > be
> > good enough to show correlation between userspace and kernel errors
> > when dealing with a bad CPU.
>
> we could collect the cpu number in do_*_addr_fault() before
> interrupts
> are enabled and just hand it through. There are only a few callchains
> which end up in __bad_area_nosemaphore().

We could, but do we really want to add that to the hot path
for page faults, when segfaults are so rare?

I suspect the simple patch I sent will be good enough to
identify a bad CPU, even if only 3 out of 4 userspace crashes
get attributed to the right CPU...

I would be happy to write a patch that does what you want
though, so you can compare them side by side :)

--
All Rights Reversed.

