Date: Fri, 3 Jul 2009 13:43:10 +0200
From: Jiri Olsa
To: Jarek Poplawski
Cc: Ingo Molnar, Eric Dumazet, Peter Zijlstra, Mathieu Desnoyers,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org, fbl@redhat.com,
	nhorman@redhat.com, davem@redhat.com, htejun@gmail.com,
	oleg@redhat.com, davidel@xmailserver.org
Subject: Re: [PATCHv5 2/2] memory barrier: adding smp_mb__after_lock
Message-ID: <20090703114310.GA4534@jolsa.lab.eng.brq.redhat.com>
References: <20090703081219.GE2902@jolsa.lab.eng.brq.redhat.com>
	<20090703081445.GG2902@jolsa.lab.eng.brq.redhat.com>
	<20090703090606.GA3902@elte.hu> <4A4DCD54.1080908@gmail.com>
	<20090703092438.GE3902@elte.hu>
	<20090703095659.GA4518@jolsa.lab.eng.brq.redhat.com>
	<20090703102530.GD32128@elte.hu>
	<20090703111848.GA10267@jolsa.lab.eng.brq.redhat.com>
	<20090703113027.GC4847@ff.dom.local>
In-Reply-To: <20090703113027.GC4847@ff.dom.local>

On Fri, Jul 03, 2009 at 11:30:27AM +0000, Jarek Poplawski wrote:
> On Fri, Jul 03, 2009 at 01:18:48PM +0200, Jiri Olsa wrote:
> ...
> > diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> > index b7e5db8..4e77853 100644
> > --- a/arch/x86/include/asm/spinlock.h
> > +++ b/arch/x86/include/asm/spinlock.h
> ...
> > @@ -1271,6 +1271,9 @@ static inline int sk_has_allocations(const struct sock *sk)
> >   * in its cache, and so does the tp->rcv_nxt update on CPU2 side. The CPU1
> >   * could then endup calling schedule and sleep forever if there are no more
> >   * data on the socket.
> > + *
> > + * The sk_has_helper is always called right after a call to read_lock, so we
>
> Btw.:
> - * The sk_has_helper is always called right after a call to read_lock, so we
> + * The sk_has_sleeper is always called right after a call to read_lock, so we
>
> Jarek P.

oops, thanks
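
For readers outside the thread, the pattern being discussed looks roughly
like the sketch below. This is a reconstruction from the quoted patch for
context, not the final committed code; names and placement may differ in
the version that was eventually merged.

/*
 * arch/x86/include/asm/spinlock.h (sketch): the x86
 * {read|write|spin}_lock() primitives are already full memory
 * barriers, so a barrier placed "after the lock" can be a no-op
 * on this architecture.
 */
#define smp_mb__after_lock() do { } while (0)

/*
 * include/linux/spinlock.h (sketch): generic fallback for
 * architectures whose lock acquisition does not imply a full
 * memory barrier.
 */
#ifndef smp_mb__after_lock
#define smp_mb__after_lock() smp_mb()
#endif

/*
 * include/net/sock.h (sketch): sk_has_sleeper() -- the function the
 * comment fix above refers to. It is called right after
 * read_lock(&sk->sk_callback_lock); the barrier ensures the wait
 * queue check below cannot be reordered before the lock, pairing
 * with the barrier issued on the sleeping side in sock_poll_wait().
 */
static inline int sk_has_sleeper(struct sock *sk)
{
	smp_mb__after_lock();
	return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
}

On x86 this costs nothing: the LOCK-prefixed instruction behind
read_lock() already orders the subsequent load of sk->sk_sleep, so an
extra smp_mb() would be redundant. The macro exists so that
architectures with weaker lock semantics still get the full barrier
they need to avoid the missed-wakeup race described in the quoted
comment.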