Date: Mon, 20 Apr 2015 09:09:30 -0700
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Will Deacon
Cc: Andrey Ryabinin, Catalin Marinas, Linus Torvalds, Peter Zijlstra,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] arm64: Implement 1-,2- byte smp_load_acquire and smp_store_release
Message-ID: <20150420160930.GT5561@linux.vnet.ibm.com>
In-Reply-To: <20150420154824.GD1504@arm.com>

On Mon, Apr 20, 2015 at 04:48:24PM +0100, Will Deacon wrote:
> Hi Andrey,
> 
> On Mon, Apr 20, 2015 at 04:45:53PM +0100, Andrey Ryabinin wrote:
> > Commit 47933ad41a86 ("arch: Introduce smp_load_acquire(), smp_store_release()")
> > allowed only 4- and 8-byte smp_load_acquire() and smp_store_release(),
> > so the 1- and 2-byte cases were not implemented on arm64.
> > Later, commit 536fa402221f ("compiler: Allow 1- and 2-byte smp_load_acquire()
> > and smp_store_release()") permitted the 1- and 2-byte forms by adjusting
> > the definition of __native_word().
> > However, the 1- and 2-byte cases in the arm64 version were left unimplemented.
> > 
> > Commit 8053871d0f7f ("smp: Fix smp_call_function_single_async() locking")
> > started using smp_load_acquire() to load the 2-byte csd->flags field,
> > which crashes the arm64 kernel during boot.
> > 
> > Implement the 1- and 2-byte cases in arm64's smp_load_acquire() and
> > smp_store_release() to fix this.
> > 
> > Signed-off-by: Andrey Ryabinin
> 
> I already have an equivalent patch queued in the arm64/fixes branch [1].
> I'll send a pull request shortly.

Even better!  ;-)

							Thanx, Paul

> Will
> 
> [1] https://git.kernel.org/cgit/linux/kernel/git/arm64/linux.git/log/?h=fixes/core
> 
> > ---
> >  arch/arm64/include/asm/barrier.h | 16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> > index a5abb00..71f19c4 100644
> > --- a/arch/arm64/include/asm/barrier.h
> > +++ b/arch/arm64/include/asm/barrier.h
> > @@ -65,6 +65,14 @@ do {								\
> >  do {								\
> >  	compiletime_assert_atomic_type(*p);			\
> >  	switch (sizeof(*p)) {					\
> > +	case 1:							\
> > +		asm volatile ("stlrb %w1, %0"			\
> > +				: "=Q" (*p) : "r" (v) : "memory");	\
> > +		break;						\
> > +	case 2:							\
> > +		asm volatile ("stlrh %w1, %0"			\
> > +				: "=Q" (*p) : "r" (v) : "memory");	\
> > +		break;						\
> >  	case 4:							\
> >  		asm volatile ("stlr %w1, %0"			\
> >  				: "=Q" (*p) : "r" (v) : "memory");	\
> > @@ -81,6 +89,14 @@ do {								\
> >  	typeof(*p) ___p1;					\
> >  	compiletime_assert_atomic_type(*p);			\
> >  	switch (sizeof(*p)) {					\
> > +	case 1:							\
> > +		asm volatile ("ldarb %w0, %1"			\
> > +			: "=r" (___p1) : "Q" (*p) : "memory");	\
> > +		break;						\
> > +	case 2:							\
> > +		asm volatile ("ldarh %w0, %1"			\
> > +			: "=r" (___p1) : "Q" (*p) : "memory");	\
> > +		break;						\
> >  	case 4:							\
> >  		asm volatile ("ldar %w0, %1"			\
> >  			: "=r" (___p1) : "Q" (*p) : "memory");	\
> > -- 
> > 2.3.5
> > 
> 