Date: Wed, 25 Nov 2015 10:54:57 +0100
From: Borislav Petkov
To: Hidehiro Kawai
Cc: Jonathan Corbet, Peter Zijlstra, Ingo Molnar, "Eric W. Biederman",
	"H. Peter Anvin", Andrew Morton, Thomas Gleixner, Vivek Goyal,
	Baoquan He, linux-doc@vger.kernel.org, x86@kernel.org,
	kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
	Michal Hocko, Masami Hiramatsu
Subject: Re: [V5 PATCH 3/4] kexec: Fix race between panic() and crash_kexec() called directly
Message-ID: <20151125095457.GB29499@pd.tnic>
References: <20151120093641.4285.97253.stgit@softrs> <20151120093648.4285.17715.stgit@softrs>
In-Reply-To: <20151120093648.4285.17715.stgit@softrs>
User-Agent: Mutt/1.5.23 (2014-03-12)
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Nov 20, 2015 at 06:36:48PM +0900, Hidehiro Kawai wrote:
> Currently, panic() and crash_kexec() can be called at the same time.
> For example (x86 case):
>
> CPU 0:
>   oops_end()
>     crash_kexec()
>       mutex_trylock() // acquired
>       nmi_shootdown_cpus() // stop other cpus
>
> CPU 1:
>   panic()
>     crash_kexec()
>       mutex_trylock() // failed to acquire
>     smp_send_stop() // stop other cpus
>     infinite loop
>
> If CPU 1 calls smp_send_stop() before nmi_shootdown_cpus(), kdump
> fails.
>
> In another case:
>
> CPU 0:
>   oops_end()
>     crash_kexec()
>       mutex_trylock() // acquired
>       io_check_error()
>         panic()
>           crash_kexec()
>             mutex_trylock() // failed to acquire
>           infinite loop
>
> Clearly, this is an undesirable result.
>
> To fix this problem, this patch changes crash_kexec() to exclude
> others by using the atomic_t panic_cpu.
>
> V5:
> - Add missing dummy __crash_kexec() for the !CONFIG_KEXEC_CORE case
> - Replace atomic_xchg() with atomic_set() in crash_kexec() because
>   it is used as a release operation and there is no need of a memory
>   barrier effect. This change also removes an unused-value warning
>
> V4:
> - Use the new __crash_kexec(), a no-exclusion-check version of
>   crash_kexec(), instead of checking whether panic_cpu is the
>   current cpu or not
>
> V2:
> - Use atomic_cmpxchg() instead of spin_trylock() on panic_lock
>   to exclude concurrent accesses
> - Don't introduce a no-lock version of crash_kexec()
>
> Signed-off-by: Hidehiro Kawai
> Cc: Eric Biederman
> Cc: Vivek Goyal
> Cc: Andrew Morton
> Cc: Michal Hocko
> ---
>  include/linux/kexec.h |    2 ++
>  kernel/kexec_core.c   |   26 +++++++++++++++++++++++++-
>  kernel/panic.c        |    4 ++--
>  3 files changed, 29 insertions(+), 3 deletions(-)

...

> +void crash_kexec(struct pt_regs *regs)
> +{
> +	int old_cpu, this_cpu;
> +
> +	/*
> +	 * Only one CPU is allowed to execute the crash_kexec() code as with
> +	 * panic(). Otherwise parallel calls of panic() and crash_kexec()
> +	 * may stop each other. To exclude them, we use panic_cpu here too.
> +	 */
> +	this_cpu = raw_smp_processor_id();
> +	old_cpu = atomic_cmpxchg(&panic_cpu, -1, this_cpu);
> +	if (old_cpu == -1) {
> +		/* This is the 1st CPU which comes here, so go ahead. */
> +		__crash_kexec(regs);
> +
> +		/*
> +		 * Reset panic_cpu to allow another panic()/crash_kexec()
> +		 * call.

So can we make __crash_kexec() return error values?

* failed to grab kexec_mutex -> reset panic_cpu

* no kexec_crash_image -> no need to reset it; all future crash_kexec()
  calls won't work, so there is no need to run into that path anymore.
  However, this could be problematic if we want the other CPUs to
  panic. Do we care?

* machine_kexec() successful -> doesn't matter

Thanks.

-- 
Regards/Gruss,
    Boris.
ECO tip #101: Trim your mails when you reply.