Date: Fri, 19 Dec 2014 15:55:23 -0800
From: Linus Torvalds
To: Thomas Gleixner
Cc: Dave Jones, Chris Mason, Mike Galbraith, Ingo Molnar, Peter Zijlstra,
    Dâniel Fraga, Sasha Levin, Paul E. McKenney, Linux Kernel Mailing List,
    Suresh Siddha, Oleg Nesterov, Peter Anvin
Subject: Re: frequent lockups in 3.18rc4

On Fri, Dec 19, 2014 at 3:14 PM, Thomas Gleixner wrote:
>
> Now that all looks correct. So there is something else going on. After
> staring some more at it, I think we are looking at it from the wrong
> angle.
>
> The watchdog always detects CPU1 as stuck and we got completely
> fixated on the csd_wait() in the stack trace on CPU1. Now we have
> stack traces which show a different picture, i.e. CPU1 makes progress
> after a gazillion of seconds.

.. but that doesn't explain why CPU0 always ends up at that *exact*
same instruction in the NMI backtrace.

A fairly tight loop, together with "mmio read is very expensive and
synchronizing", would explain it. An MMIO read can easily be as
expensive as several thousand instructions.

> I think we really need to look at CPU1 itself.

Not so fast. Take another look at CPU0.

  [24998.083577] [] ktime_get+0x3e/0xa0
  [24998.084450] [] tick_sched_timer+0x23/0x160
  [24998.085315] [] __run_hrtimer+0x76/0x1f0
  [24998.086173] [] ? tick_init_highres+0x20/0x20
  [24998.087025] [] hrtimer_interrupt+0x107/0x260
  [24998.087877] [] local_apic_timer_interrupt+0x3b/0x70
  [24998.088732] [] smp_apic_timer_interrupt+0x45/0x60
  [24998.089583] [] apic_timer_interrupt+0x6f/0x80
  [24998.090435]
  [24998.091279] [] ? __remove_hrtimer+0x4e/0xa0
  [24998.092118] [] ? ipcget+0x8a/0x1e0
  [24998.092951] [] ? ipcget+0x7c/0x1e0
  [24998.093779] [] SyS_msgget+0x4d/0x70

Really. None of that changed. NONE. The likelihood that we hit the
exact same instruction every time? Over a timeframe of more than a
minute?

The only way I see that happening is

 (a) NMI is completely buggered, and the backtrace is just random crap
     that is always the same.

Or

 (b) it's really a fairly tight loop.

The fact that you had a hrtimer interrupt happen in the *middle* of
__remove_hrtimer() is really another fairly strong hint.

That smells like "__remove_hrtimer() has a race with hrtimer
interrupts". And that race results in a basically endless loop (which
perhaps ends when the hrtimer overflows, in what, a few minutes?).

I really don't think you should look at CPU1. Not when CPU0 has such
an interesting pattern that you dismissed just because the HPET is
making progress.
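
To make the shape of that failure concrete, here is a rough user-space
model of the kind of loop I mean. This is obviously not the actual
hrtimer code: the cached "next_expiry", the "timer_queued" flag and the
16-bit wrap are stand-ins, chosen only so the thing is small and
terminates quickly. The point is just that a removal which forgets to
push the cached expiry forward leaves the interrupt path re-entering
until the counter wraps.

  /*
   * Minimal user-space model of the suspected race (illustrative only,
   * not the real kernel hrtimer code).  Assumption for the model: the
   * interrupt path keeps a cached "next expiry" and keeps re-entering
   * as long as that expiry is not in the future.  If a racing removal
   * deletes the timer but leaves the cached expiry stale, the interrupt
   * path spins until the counter wraps past it.
   */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static uint16_t now;            /* models the timer hardware counter  */
  static uint16_t next_expiry;    /* models the cached next-event value */
  static bool     timer_queued;   /* is there really a timer to run?    */

  /* models the removal losing the race: the timer is gone, but the
   * cached expiry is left stale */
  static void racy_remove(void)
  {
          timer_queued = false;
          /* BUG in the model: forgets to push next_expiry forward */
  }

  /* models the timer interrupt: keeps re-entering while the cached
   * expiry is not in the future */
  static unsigned long interrupt_storm(void)
  {
          unsigned long reentries = 0;

          while ((int16_t)(next_expiry - now) <= 0) {
                  if (timer_queued) {
                          /* normal case: run the timer, re-arm sanely */
                          next_expiry = now + 1000;
                          break;
                  }
                  /* stale expiry, nothing to run: spin until 'now'
                   * wraps past the stale value ("timer overflows") */
                  now++;
                  reentries++;
          }
          return reentries;
  }

  int main(void)
  {
          now = 100;
          next_expiry = 100;      /* timer due "now" */
          timer_queued = true;

          racy_remove();          /* removal races with the interrupt */
          printf("spun %lu times before recovering\n", interrupt_storm());
          return 0;
  }

With 16 bits it recovers after ~32k iterations; with the real timer
widths that "recovery" is the minutes-long stall we are staring at,
which would match CPU0 sitting in the ktime_get/hrtimer_interrupt path
in every single NMI backtrace.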
                 Linus