From: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Date: Mon, 25 Apr 2016 18:10:51 +0800
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com, tglx@linutronix.de
Subject: Re: [PATCH V3] powerpc: Implement {cmp}xchg for u8 and u16
Message-ID: <571DED2B.8060600@linux.vnet.ibm.com>
In-Reply-To: <20160421161354.GI3430@twins.programming.kicks-ass.net>

On 2016-04-22 00:13, Peter Zijlstra wrote:
> On Thu, Apr 21, 2016 at 11:35:07PM +0800, Pan Xinhui wrote:
>> Yes, you are right. More loads/stores will be done in the C code.
>> However, xchg_u8/u16 is only used by qspinlock now, and I did not see any performance regression.
>> So I just wrote it in C, for simplicity. :)
>
> Which is fine; but worthy of a note in your Changelog.
>
Will do that.

>> Of course I have done xchg tests.
>> We ran code like xchg((u8*)&v, j++); in several threads,
>> and the result is:
>> [ 768.374264] use time[1550072]ns in xchg_u8_asm
>> [ 768.377102] use time[2826802]ns in xchg_u8_c
>>
>> I think this is because there is one more load in C.
>> If possible, we can move such code into asm-generic/.
>
> So I'm not actually _that_ familiar with the PPC LL/SC implementation;
> but there are things a CPU can do to optimize these loops.
>
> For example, a CPU might choose not to release the exclusive hold of the
> line for a number of cycles, except when it passes the SC or an interrupt
> happens. This way there's a smaller chance the SC fails and inhibits
> forward progress.

I am not sure if there is such a hardware optimization.

> By doing the modification outside of the LL/SC you lose such
> advantages.
>
> And yes, doing a !exclusive load prior to the exclusive load leads to an
> even bigger window where the data can get changed out from under you.
>
You are right. We have observed such data changes between the two loads.
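
For reference, a rough sketch of the kind of "do it in C" byte xchg being
discussed, built on top of a word-sized compare-and-swap. This is
illustrative only, not the actual patch: the GCC __atomic builtins stand in
for the arch's LL/SC primitive, and little-endian byte numbering within the
word is assumed. The plain load of the containing word before the CAS is
the extra, non-exclusive load mentioned above.

#include <stdint.h>

/* Sketch only: emulate xchg() of a single byte on top of a 32-bit CAS.
 * __atomic_compare_exchange_n stands in for the kernel's word-sized
 * LL/SC primitive; little-endian byte numbering is assumed. */
static inline uint8_t xchg_u8_c(uint8_t *p, uint8_t newval)
{
	uint32_t *w = (uint32_t *)((uintptr_t)p & ~(uintptr_t)3);
	unsigned int shift = ((uintptr_t)p & 3) * 8;
	uint32_t mask = (uint32_t)0xff << shift;
	/* the extra, non-exclusive load done outside the LL/SC */
	uint32_t old = __atomic_load_n(w, __ATOMIC_RELAXED);
	uint32_t tmp;

	do {
		/* splice the new byte into the containing word */
		tmp = (old & ~mask) | ((uint32_t)newval << shift);
		/* on failure, 'old' is refreshed with the current word */
	} while (!__atomic_compare_exchange_n(w, &old, tmp, 1,
					      __ATOMIC_RELAXED, __ATOMIC_RELAXED));

	return (uint8_t)((old & mask) >> shift);
}

The window Peter points out is between the initial load and the CAS inside
the loop: another CPU can change the word there, forcing a retry, whereas a
byte-sized LL/SC would do all of it under the reservation.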
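
And for completeness, a toy userspace version of the timing test described
in the quoted mail (several threads doing xchg((u8*)&v, j++)). The thread
count, iteration count, and __atomic_exchange_n are stand-ins here; the
original test used the kernel's xchg() and printk timestamps, so the
numbers quoted above cannot be reproduced with this exact harness.

/* Build with: gcc -O2 -pthread xchg_bench.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 4
#define ITERS    (1 << 20)

static uint8_t v;	/* shared byte hammered by all threads */

static void *worker(void *arg)
{
	(void)arg;
	for (uint32_t j = 0; j < ITERS; j++)
		__atomic_exchange_n(&v, (uint8_t)j, __ATOMIC_RELAXED);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("use time[%lld]ns\n",
	       (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL +
	       (long long)(t1.tv_nsec - t0.tv_nsec));
	return 0;
}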