Date: Thu, 2 Jul 2020 10:32:39 +0100
From: Mark Rutland
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, Sami Tolvanen, Nick Desaulniers,
	Kees Cook, Marco Elver, "Paul E. McKenney", Josh Triplett,
	Matt Turner, Ivan Kokshaysky, Richard Henderson, Peter Zijlstra,
	Alan Stern, "Michael S. Tsirkin", Jason Wang, Arnd Bergmann,
	Boqun Feng, Catalin Marinas, linux-arm-kernel@lists.infradead.org,
	linux-alpha@vger.kernel.org, virtualization@lists.linux-foundation.org,
	kernel-team@android.com
Subject: Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
Message-ID: <20200702093239.GA15391@C02TD0UTHF1T.local>
References: <20200630173734.14057-1-will@kernel.org>
	<20200630173734.14057-5-will@kernel.org>
In-Reply-To: <20200630173734.14057-5-will@kernel.org>

On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> Rather than relying on the core code to use smp_read_barrier_depends()
> as part of the READ_ONCE() definition, instead override __READ_ONCE()
> in the Alpha code so that it is treated the same way as
> smp_load_acquire().
>
> Acked-by: Paul E. McKenney
> Signed-off-by: Will Deacon
> ---
>  arch/alpha/include/asm/barrier.h | 61 ++++----------------------------
>  arch/alpha/include/asm/rwonce.h  | 19 ++++++++++
>  2 files changed, 26 insertions(+), 54 deletions(-)
>  create mode 100644 arch/alpha/include/asm/rwonce.h
>
> diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
> index 92ec486a4f9e..2ecd068d91d1 100644
> --- a/arch/alpha/include/asm/barrier.h
> +++ b/arch/alpha/include/asm/barrier.h
> @@ -2,64 +2,17 @@
>  #ifndef __BARRIER_H
>  #define __BARRIER_H
>
> -#include <asm/compiler.h>
> -
>  #define mb()	__asm__ __volatile__("mb": : :"memory")
>  #define rmb()	__asm__ __volatile__("mb": : :"memory")
>  #define wmb()	__asm__ __volatile__("wmb": : :"memory")
> -#define read_barrier_depends()	__asm__ __volatile__("mb": : :"memory")
> +#define __smp_load_acquire(p)						\
> +({									\
> +	__unqual_scalar_typeof(*p) ___p1 =				\
> +		(*(volatile typeof(___p1) *)(p));			\
> +	compiletime_assert_atomic_type(*p);				\
> +	___p1;								\
> +})

Sorry if I'm being thick, but doesn't this need a barrier after the
volatile access to provide the acquire semantic?

IIUC prior to this commit alpha would have used the asm-generic
__smp_load_acquire, i.e.

| #ifndef __smp_load_acquire
| #define __smp_load_acquire(p)					\
| ({								\
| 	__unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p);	\
| 	compiletime_assert_atomic_type(*p);			\
| 	__smp_mb();						\
| 	(typeof(*p))___p1;					\
| })
| #endif

... where the __smp_mb() would be alpha's mb() from earlier in the patch
context, i.e.

| #define mb()	__asm__ __volatile__("mb": : :"memory")

... so don't we need similar before returning ___p1 above in
__smp_load_acquire() (and also matching the old read_barrier_depends())?
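Concretely, I'd have expected something like the below (only a sketch,
reusing the patch's naming, and untested):

| #define __smp_load_acquire(p)					\
| ({								\
| 	__unqual_scalar_typeof(*p) ___p1 =			\
| 		(*(volatile typeof(___p1) *)(p));		\
| 	compiletime_assert_atomic_type(*p);			\
| 	mb();	/* order the load before subsequent accesses */	\
| 	(typeof(*p))___p1;					\
| })

... where the mb() before the return is what turns the plain volatile
load into something with acquire semantics.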
[...]

> +#include <asm/barrier.h>
> +
> +/*
> + * Alpha is apparently daft enough to reorder address-dependent loads
> + * on some CPU implementations. Knock some common sense into it with
> + * a memory barrier in READ_ONCE().
> + */
> +#define __READ_ONCE(x)	__smp_load_acquire(&(x))

As above, I don't see a memory barrier implied here, so this doesn't
look quite right.

Thanks,
Mark.
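P.S. In case a concrete example helps: the ordering READ_ONCE() has to
provide on Alpha is for address-dependent loads, as in the usual
publish/consume pattern. The below is purely illustrative (the names
are made up, not from the patch):

| struct foo { int val; };
| struct foo *gp;
|
| /* CPU 0: initialise the object, then publish the pointer. */
| void writer(struct foo *f)
| {
| 	f->val = 42;
| 	smp_store_release(&gp, f);
| }
|
| /* CPU 1: load the pointer, then dereference it. */
| int reader(void)
| {
| 	struct foo *f = READ_ONCE(gp);
|
| 	/*
| 	 * Without a barrier in READ_ONCE(), Alpha can satisfy this
| 	 * dependent load from a stale cache bank, i.e. observe
| 	 * f != NULL and yet read f->val from before the writer's
| 	 * initialisation.
| 	 */
| 	return f ? f->val : -1;
| }

... which is exactly the case the old read_barrier_depends() (and the
mb() suggested above) is there to handle.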