Date: Wed, 12 Jul 2017 14:49:14 +0200
From: Peter Zijlstra
To: Boqun Feng
Cc: Palmer Dabbelt, Olof Johansson, Arnd Bergmann, akpm@linux-foundation.org,
    albert@sifive.com, yamada.masahiro@socionext.com, mmarek@suse.com,
    will.deacon@arm.com, mingo@redhat.com, daniel.lezcano@linaro.org,
    tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
    gregkh@linuxfoundation.org, jslaby@suse.com, davem@davemloft.net,
    mchehab@kernel.org, sfr@canb.auug.org.au, fweisbec@gmail.com,
    viro@zeniv.linux.org.uk, mcgrof@kernel.org, dledford@redhat.com,
    bart.vanassche@sandisk.com, sstabellini@kernel.org, daniel.vetter@ffwll.ch,
    mpe@ellerman.id.au, msalter@redhat.com, nicolas.dichtel@6wind.com,
    james.hogan@imgtec.com, paul.gortmaker@windriver.com, linux@roeck-us.net,
    heiko.carstens@de.ibm.com, schwidefsky@de.ibm.com,
    linux-kernel@vger.kernel.org, patches@groups.riscv.org
Subject: Re: [PATCH 10/17] RISC-V: Atomic and Locking Code
Message-ID: <20170712124914.o63vybym2yxs6kam@hirez.programming.kicks-ass.net>
References: <20170712013130.14792-1-palmer@dabbelt.com>
 <20170712013130.14792-11-palmer@dabbelt.com>
 <20170712124049.i7taml22aat2bcgt@tardis>
In-Reply-To: <20170712124049.i7taml22aat2bcgt@tardis>

On Wed, Jul 12, 2017 at 08:40:49PM +0800, Boqun Feng wrote:
> > +/**
> > + * set_bit - Atomically set a bit in memory
> > + * @nr: the bit to set
> > + * @addr: the address to start counting from
> > + *
> > + * This function is atomic and may not be reordered. See __set_bit()
> 
> This is incorrect: {set,change,clear}_bit() can be reordered, see
> Documentation/memory-barriers.txt; they are just relaxed atomics. But I
> think you just copied this from the x86 code, so maybe the x86 code needs
> fixing too; at the least it should say that this is an x86-specific
> guarantee.

Yeah, I suspect that's an x86 special (all our atomics are fully ordered).

> > +/**
> > + * test_and_set_bit_lock - Set a bit and return its old value, for lock
> > + * @nr: Bit to set
> > + * @addr: Address to count from
> > + *
> > + * This operation is atomic and provides acquire barrier semantics.
> > + * It can be used to implement bit locks.
> > + */
> > +static inline int test_and_set_bit_lock(
> > +	unsigned long nr, volatile unsigned long *addr)
> > +{
> > +	return test_and_set_bit(nr, addr);
> 
> If you want, you can open-code an "amoor.aq" here, because
> test_and_set_bit_lock() only needs an acquire barrier.
> 
> > +}
> > +
> > +/**
> > + * clear_bit_unlock - Clear a bit in memory, for unlock
> > + * @nr: the bit to set
> > + * @addr: the address to start counting from
> > + *
> > + * This operation is atomic and provides release barrier semantics.
> > + */
> > +static inline void clear_bit_unlock(
> > +	unsigned long nr, volatile unsigned long *addr)
> > +{
> 
> You need an smp_mb__before_atomic() here, because clear_bit() is only a
> relaxed atomic and clear_bit_unlock() is a release.

Alternatively you can do "amoand.rl".