Date: Thu, 30 Aug 2018 16:43:44 +0200
From: Peter Zijlstra
To: Will Deacon
Cc: Eugeniy Paltsev, "mingo@kernel.org", "linux-kernel@vger.kernel.org",
	"Alexey.Brodkin@synopsys.com", "Vineet.Gupta1@synopsys.com",
	"tglx@linutronix.de", "linux-snps-arc@lists.infradead.org",
	"yamada.masahiro@socionext.com", "linux-arm-kernel@lists.infradead.org",
	"linux-arch@vger.kernel.org"
Subject: Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash
Message-ID: <20180830144344.GW24142@hirez.programming.kicks-ass.net>
References: <1535567633.4465.23.camel@synopsys.com>
	<20180830094411.GX24124@hirez.programming.kicks-ass.net>
	<20180830095148.GB5942@arm.com>
	<1535629996.4465.44.camel@synopsys.com>
	<20180830141713.GN24082@hirez.programming.kicks-ass.net>
	<20180830142354.GB13005@arm.com>
	<20180830142920.GO24082@hirez.programming.kicks-ass.net>
In-Reply-To: <20180830142920.GO24082@hirez.programming.kicks-ass.net>

On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote:
> Also, once it all works, they should look at switching to _relaxed
> atomics for LL/SC.

A little something like so.. should save a few smp_mb().
---
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 4e0072730241..714b54c308b0 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v)		\
 }									\
 
 #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 {									\
 	unsigned int val;						\
 									\
@@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v)	\
 	return val;							\
 }
 
+#define atomic_add_return_relaxed	atomic_add_return_relaxed
+#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
+
 #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op(int i, atomic_t *v)		\
+static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 {									\
 	unsigned int val, orig;						\
 									\
@@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)	\
 	return orig;							\
 }
 
+#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
+#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
+
+#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
+#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
+#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
+#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
+
 #else	/* !CONFIG_ARC_HAS_LLSC */
 
 #ifndef CONFIG_SMP
@@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v)	\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)				\
-static inline long long atomic64_##op##_return(long long a, atomic64_t *v)	\
+static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v)	\
 {									\
 	unsigned long long val;						\
 									\
@@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v)	\
 	return val;							\
 }
 
+#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
+#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
+
 #define ATOMIC64_FETCH_OP(op, op1, op2)					\
-static inline long long atomic64_fetch_##op(long long a, atomic64_t *v)	\
+static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v)	\
 {									\
 	unsigned long long val, orig;					\
 									\
@@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v)	\
 	return orig;							\
 }
 
+#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
+#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
+
+#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
+#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot_relaxed
+#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
+#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
+
 #define ATOMIC64_OPS(op, op1, op2)					\
 	ATOMIC64_OP(op, op1, op2)					\
 	ATOMIC64_OP_RETURN(op, op1, op2)				\
@@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v)	\
 
 ATOMIC64_OPS(add, add.f, adc)
 ATOMIC64_OPS(sub, sub.f, sbc)
+
+#undef ATOMIC64_OPS
+#define ATOMIC64_OPS(op, op1, op2)					\
+	ATOMIC64_OP(op, op1, op2)					\
+	ATOMIC64_FETCH_OP(op, op1, op2)
+
ATOMIC64_OPS(and, and, and)
ATOMIC64_OPS(andnot, bic, bic)
ATOMIC64_OPS(or, or, or)
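
For reference, the reason only the _relaxed flavours need to be implemented:
the generic layer synthesizes the acquire/release and fully ordered variants
by bracketing the _relaxed op with barriers, roughly along the lines of the
include/linux/atomic.h fallbacks of this era (illustrative sketch only, not
the exact upstream text):

/*
 * Sketch of the generic fallbacks: when the arch defines only
 * atomic_<op>_relaxed(), the other orderings are built from it, so the
 * smp_mb()s are paid only by callers that actually ask for the ordering.
 */
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	smp_mb__after_atomic();						\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	smp_mb__before_atomic();					\
	op##_relaxed(args);						\
})

#define __atomic_op_fence(op, args...)					\
({									\
	typeof(op##_relaxed(args)) __ret;				\
	smp_mb__before_atomic();					\
	__ret = op##_relaxed(args);					\
	smp_mb__after_atomic();						\
	__ret;								\
})

/* e.g. the fully ordered atomic_add_return() falls out of the relaxed one */
#ifndef atomic_add_return
#define atomic_add_return(...)						\
	__atomic_op_fence(atomic_add_return, __VA_ARGS__)
#endif

With that in place, the rewritten asm-generic/bitops/lock.h can use the
_acquire/_release fetch ops directly, so a lock-type bitop on ARC should end
up with a single barrier on the relevant side instead of full barriers on
both sides of the LL/SC loop.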