Date: Sat, 21 Feb 2015 13:54:38 +0100
From: Peter Zijlstra
To: Oleg Nesterov
Cc: Manfred Spraul, "Paul E. McKenney", Kirill Tkhai, linux-kernel@vger.kernel.org,
	Ingo Molnar, Josh Poimboeuf
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles
Message-ID: <20150221125438.GH23367@worktop.ger.corp.intel.com>
In-Reply-To: <20150220202319.GA21132@redhat.com>
References: <20150217160532.GW4166@linux.vnet.ibm.com>
	<20150217183636.GR5029@twins.programming.kicks-ass.net>
	<20150217215231.GK4166@linux.vnet.ibm.com>
	<20150218155904.GA27687@redhat.com>
	<54E4E479.4050003@colorfullife.com>
	<20150218224317.GC5029@twins.programming.kicks-ass.net>
	<20150219141905.GA11018@redhat.com>
	<54E77CC0.5030401@colorfullife.com>
	<20150220184551.GQ2896@worktop.programming.kicks-ass.net>
	<20150220202319.GA21132@redhat.com>

On Fri, Feb 20, 2015 at 09:23:19PM +0100, Oleg Nesterov wrote:
> On 02/20, Peter Zijlstra wrote:
> >
> > I think I agree with Oleg in that we only need the smp_rmb(); of course
> > that wants a somewhat elaborate comment to go along with it. How about
> > something like so:
> >
> >	spin_unlock_wait(&local);
> >	/*
> >	 * The above spin_unlock_wait() forms a control dependency with
> >	 * any following stores; because we must first observe the lock
> >	 * unlocked and we cannot speculate stores.
> >	 *
> >	 * Subsequent loads however can easily pass through the loads
> >	 * represented by spin_unlock_wait() and therefore we need the
> >	 * read barrier.
> >	 *
> >	 * This together is stronger than ACQUIRE for @local and
> >	 * therefore we will observe the complete prior critical section
> >	 * of @local.
> >	 */
> >	smp_rmb();
> >
> > The obvious alternative is using spin_unlock_wait() with an
> > smp_load_acquire(), but that might be more expensive on some archs due
> > to repeated issuing of memory barriers.
>
> Yes, yes, thanks!
>
> But note that we need the same comment after sem_lock()->spin_is_locked().
>
> So perhaps we can add this comment into include/linux/spinlock.h? In this
> case perhaps it makes sense to add, say,
>
>	#define smp_mb__after_unlock_wait() smp_rmb()
>
> with this comment above? Another potential user is task_work_run(). It could
> use rmb() too, but this again needs the same fat comment.
>
> What do you think?

Sure, that works.
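
For reference, a minimal sketch of what the helper Oleg suggests could look like,
together with a caller that shows the spin_unlock_wait() + read-barrier pattern
from the quoted snippet. The macro name follows the mail; everything around it
(example_reader(), shared_data, the 'local' lock) is made up for illustration and
is not part of the patch under discussion.

	#include <linux/spinlock.h>

	/*
	 * Sketch only: the name follows Oleg's suggestion and is not an
	 * existing upstream macro.
	 *
	 * spin_unlock_wait() / spin_is_locked() only load the lock word.
	 * The control dependency keeps later stores from being speculated
	 * past the observed-unlocked load, but later loads can still pass
	 * it, so a read barrier is needed to observe the complete prior
	 * critical section of the lock.
	 */
	#define smp_mb__after_unlock_wait()	smp_rmb()

	/* Illustrative lock and data; these names are hypothetical. */
	static DEFINE_SPINLOCK(local);
	static int shared_data;

	static int example_reader(void)
	{
		/* Wait until any current holder of 'local' drops it. */
		spin_unlock_wait(&local);

		/*
		 * Together with the control dependency above this is
		 * stronger than ACQUIRE for 'local': we now observe
		 * everything the previous critical section stored
		 * before unlocking.
		 */
		smp_mb__after_unlock_wait();

		return READ_ONCE(shared_data);
	}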