Date: Wed, 15 Aug 2007 19:32:55 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Christoph Lameter
Cc: Paul Mackerras, Satyam Sharma, Stefan Richter, Chris Snook, Linux Kernel Mailing List, linux-arch@vger.kernel.org, Linus Torvalds, netdev@vger.kernel.org, Andrew Morton, ak@suse.de, heiko.carstens@de.ibm.com, davem@davemloft.net, schwidefsky@de.ibm.com, wensong@linux-vs.org, horms@verge.net.au, wjiang@resilience.com, cfriesen@nortel.com, zlynx@acm.org, rpjday@mindspring.com, jesper.juhl@gmail.com, segher@kernel.crashing.org, Herbert Xu
Subject: Re: [PATCH 0/24] make atomic_read() behave consistently across all architectures
Message-ID: <20070816023255.GD14613@linux.vnet.ibm.com>

On Wed, Aug 15, 2007 at 06:41:40PM -0700, Christoph Lameter wrote:
> On Wed, 15 Aug 2007, Paul E. McKenney wrote:
>
> > Understood.
> > My point is not that the impact is precisely zero, but
> > rather that the impact on optimization is much less hurtful than the
> > problems that could arise otherwise, particularly as compilers become
> > more aggressive in their optimizations.
>
> The problems arise because barriers are not used as required.  Volatile
> has wishy-washy semantics and somehow marries memory barriers with data
> access.  It is clearer to separate the two.  Conceptual cleanness usually
> translates into better code.  If one really wants the volatile, then let's
> make it explicit and use
>
> 	atomic_read_volatile()

There are indeed architectures where you can cause gcc to emit memory
barriers in response to volatile.  I am assuming that we are -not- making
gcc do this.  Given this, volatiles and memory-barrier instructions are
orthogonal -- one controls the compiler, the other controls the CPU.

							Thanx, Paul