Date: Thu, 14 May 2015 20:29:55 +0200
From: Ingo Molnar
To: Borislav Petkov
Cc: Gu Zheng, "H. Peter Anvin", Thomas Gleixner, linux-kernel@vger.kernel.org, x86@kernel.org, Stable
Subject: Re: [RFC PATCH] x86, espfix: use spin_lock rather than mutex
Message-ID: <20150514182954.GB23479@gmail.com>
References: <1431603465-12610-1-git-send-email-guz.fnst@cn.fujitsu.com> <20150514122621.GB29235@pd.tnic>
In-Reply-To: <20150514122621.GB29235@pd.tnic>

* Borislav Petkov wrote:

> On Thu, May 14, 2015 at 07:37:45PM +0800, Gu Zheng wrote:
> > The following lockdep warning occurs when running with the latest kernel:
> >
> > [    3.178000] ------------[ cut here ]------------
> > [    3.183000] WARNING: CPU: 128 PID: 0 at kernel/locking/lockdep.c:2755 lockdep_trace_alloc+0xdd/0xe0()
> > [    3.193000] DEBUG_LOCKS_WARN_ON(irqs_disabled_flags(flags))
> > [    3.199000] Modules linked in:
> >
> > [    3.203000] CPU: 128 PID: 0 Comm: swapper/128 Not tainted 4.1.0-rc3 #70
> > [    3.221000] 0000000000000000 2d6601fb3e6d4e4c ffff88086fd5fc38 ffffffff81773f0a
> > [    3.230000] 0000000000000000 ffff88086fd5fc90 ffff88086fd5fc78 ffffffff8108c85a
> > [    3.238000] ffff88086fd60000 0000000000000092 ffff88086fd60000 00000000000000d0
> > [    3.246000] Call Trace:
> > [    3.249000] [] dump_stack+0x4c/0x65
> > [    3.255000] [] warn_slowpath_common+0x8a/0xc0
> > [    3.261000] [] warn_slowpath_fmt+0x55/0x70
> > [    3.268000] [] lockdep_trace_alloc+0xdd/0xe0
> > [    3.274000] [] __alloc_pages_nodemask+0xad/0xca0
> > [    3.281000] [] ? __lock_acquire+0xf6d/0x1560
> > [    3.288000] [] alloc_page_interleave+0x3a/0x90
> > [    3.295000] [] alloc_pages_current+0x17d/0x1a0
> > [    3.301000] [] ? __get_free_pages+0xe/0x50
> > [    3.308000] [] __get_free_pages+0xe/0x50
> > [    3.314000] [] init_espfix_ap+0x17b/0x320
> > [    3.320000] [] start_secondary+0xf1/0x1f0
> > [    3.327000] ---[ end trace 1b3327d9d6a1d62c ]---
> >
> > This appears to be a false positive from lockdep: init_espfix_ap()
> > allocates pages with GFP_KERNEL and is called on the secondary CPU
> > before local irqs are enabled, so lockdep sees an allocation that
> > permits GFP_FS reclaim happening with local irqs disabled and emits
> > the warning above. We could silence it by using GFP_NOFS instead of
> > GFP_KERNEL, but since init_espfix_ap() runs with preemption and
> > local irqs disabled, taking a mutex (which might sleep) there is a
> > bad idea anyway. So convert the initialization lock to a spinlock
> > to avoid the noise.
> >
> > Signed-off-by: Gu Zheng
> > Cc: Stable
> > ---
> >  arch/x86/kernel/espfix_64.c | 13 +++++++------
> >  1 file changed, 7 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
> > index f5d0730..ceb35a3 100644
> > --- a/arch/x86/kernel/espfix_64.c
> > +++ b/arch/x86/kernel/espfix_64.c
> > @@ -57,14 +57,14 @@
> >  # error "Need more than one PGD for the ESPFIX hack"
> >  #endif
> >
> > -#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)
> > +#define PGALLOC_GFP (GFP_ATOMIC | __GFP_NOTRACK | __GFP_ZERO)
>
> IINM, that's ESPFIX_MAX_PAGES with GFP_ATOMIC, which for 8K CPUs is
> 128 pages.
>
> That's a lot of waste in my book for espfix stack pages.
>
> Enabling interrupts earlier in start_secondary() is probably out of
> the question; maybe we should prealloc all those pages...
We could allocate them on the boot CPU side and hand them over to the
secondary CPU.

Thanks,

	Ingo