From: Andrea Parri <parri.andrea@gmail.com>
To: Ingo Molnar, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Andrea Parri, "Paul E. McKenney",
	Alan Stern, Andrew Morton, Ivan Kokshaysky, Linus Torvalds,
	Matt Turner, Richard Henderson, Thomas Gleixner,
	linux-alpha@vger.kernel.org
Subject: [PATCH] locking/xchg/alpha: Remove memory barriers from the _local() variants
Date: Tue, 27 Feb 2018 05:00:58 +0100
Message-Id: <1519704058-13430-1-git-send-email-parri.andrea@gmail.com>
X-Mailer: git-send-email 2.7.4

Commits 79d442461df74 ("locking/xchg/alpha: Clean up barrier usage by
using smp_mb() in place of __ASM__MB") and 472e8c55cf662
("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")
ended up adding unnecessary barriers to the _local() variants, which
the previous code took care to avoid.

Fix them by adding the smp_mb() calls to the xchg() and cmpxchg()
macros rather than to the ____xchg()/____cmpxchg() helpers.

Fixes: 79d442461df74 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB")
Fixes: 472e8c55cf662 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")
Reported-by: Will Deacon
Signed-off-by: Andrea Parri
Cc: Paul E. McKenney
Cc: Alan Stern
Cc: Andrew Morton
Cc: Ivan Kokshaysky
Cc: Linus Torvalds
Cc: Matt Turner
Cc: Peter Zijlstra
Cc: Richard Henderson
Cc: Thomas Gleixner
Cc: linux-alpha@vger.kernel.org
---
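As a quick orientation for reviewers, here is a condensed, compilable
userspace model of the barrier placement this patch establishes. The
names mirror the kernel's, but smp_mb(), __xchg() and the _Atomic slot
below are stand-ins for illustration, not the code being patched:

    #include <stdatomic.h>

    /* Model of smp_mb(): a full memory barrier. */
    #define smp_mb()  atomic_thread_fence(memory_order_seq_cst)

    /* Model of the barrier-free __xchg()/____xchg() helpers. */
    static inline unsigned long __xchg(_Atomic unsigned long *p,
                                       unsigned long v)
    {
            return atomic_exchange_explicit(p, v, memory_order_relaxed);
    }

    /* Fully ordered variant: leading and trailing barriers sit in
     * the wrapper macro, around the barrier-free helper. */
    #define xchg(ptr, x)                            \
    ({                                              \
            unsigned long __ret;                    \
            smp_mb();                               \
            __ret = __xchg((ptr), (x));             \
            smp_mb();                               \
            __ret;                                  \
    })

    /* _local() variant: the helper alone, no barriers. */
    #define xchg_local(ptr, x)  __xchg((ptr), (x))

With the barriers lifted into the macros, xchg_local()/cmpxchg_local()
go back to paying no barrier cost, which is the point of the patch.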
 arch/alpha/include/asm/cmpxchg.h | 20 ++++++++++++++++----
 arch/alpha/include/asm/xchg.h    | 27 ---------------------------
 2 files changed, 16 insertions(+), 31 deletions(-)

diff --git a/arch/alpha/include/asm/cmpxchg.h b/arch/alpha/include/asm/cmpxchg.h
index 8a2b331e43feb..6c7c394524714 100644
--- a/arch/alpha/include/asm/cmpxchg.h
+++ b/arch/alpha/include/asm/cmpxchg.h
@@ -38,19 +38,31 @@
 #define ____cmpxchg(type, args...)      __cmpxchg ##type(args)
 #include <asm/xchg.h>
 
+/*
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ */
 #define xchg(ptr, x)                                                    \
 ({                                                                      \
+        __typeof__(*(ptr)) __ret;                                       \
         __typeof__(*(ptr)) _x_ = (x);                                   \
-        (__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_,          \
-                                    sizeof(*(ptr)));                    \
+        smp_mb();                                                       \
+        __ret = (__typeof__(*(ptr)))                                    \
+                __xchg((ptr), (unsigned long)_x_, sizeof(*(ptr)));      \
+        smp_mb();                                                       \
+        __ret;                                                          \
 })
 
 #define cmpxchg(ptr, o, n)                                              \
 ({                                                                      \
+        __typeof__(*(ptr)) __ret;                                       \
         __typeof__(*(ptr)) _o_ = (o);                                   \
         __typeof__(*(ptr)) _n_ = (n);                                   \
-        (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,       \
-                                       (unsigned long)_n_, sizeof(*(ptr)));\
+        smp_mb();                                                       \
+        __ret = (__typeof__(*(ptr))) __cmpxchg((ptr),                   \
+                (unsigned long)_o_, (unsigned long)_n_, sizeof(*(ptr)));\
+        smp_mb();                                                       \
+        __ret;                                                          \
 })
 
 #define cmpxchg64(ptr, o, n)                                            \
diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index e2b59fac5257d..7adb80c6746ac 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -12,10 +12,6 @@
  * Atomic exchange.
  * Since it can be used to implement critical sections
  * it must clobber "memory" (also for interrupts in UP).
- *
- * The leading and the trailing memory barriers guarantee that these
- * operations are fully ordered.
- *
  */
 
 static inline unsigned long
@@ -23,7 +19,6 @@ ____xchg(_u8, volatile char *m, unsigned long val)
 {
         unsigned long ret, tmp, addr64;
 
-        smp_mb();
         __asm__ __volatile__(
         "       andnot  %4,7,%3\n"
         "       insbl   %1,%4,%1\n"
@@ -38,7 +33,6 @@ ____xchg(_u8, volatile char *m, unsigned long val)
         ".previous"
         : "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
         : "r" ((long)m), "1" (val) : "memory");
-        smp_mb();
 
         return ret;
 }
@@ -48,7 +42,6 @@ ____xchg(_u16, volatile short *m, unsigned long val)
 {
         unsigned long ret, tmp, addr64;
 
-        smp_mb();
         __asm__ __volatile__(
         "       andnot  %4,7,%3\n"
         "       inswl   %1,%4,%1\n"
@@ -63,7 +56,6 @@ ____xchg(_u16, volatile short *m, unsigned long val)
         ".previous"
         : "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
         : "r" ((long)m), "1" (val) : "memory");
-        smp_mb();
 
         return ret;
 }
@@ -73,7 +65,6 @@ ____xchg(_u32, volatile int *m, unsigned long val)
 {
         unsigned long dummy;
 
-        smp_mb();
         __asm__ __volatile__(
         "1:     ldl_l %0,%4\n"
         "       bis $31,%3,%1\n"
@@ -84,7 +75,6 @@ ____xchg(_u32, volatile int *m, unsigned long val)
         ".previous"
         : "=&r" (val), "=&r" (dummy), "=m" (*m)
         : "rI" (val), "m" (*m) : "memory");
-        smp_mb();
 
         return val;
 }
@@ -94,7 +84,6 @@ ____xchg(_u64, volatile long *m, unsigned long val)
 {
         unsigned long dummy;
 
-        smp_mb();
         __asm__ __volatile__(
         "1:     ldq_l %0,%4\n"
         "       bis $31,%3,%1\n"
@@ -105,7 +94,6 @@ ____xchg(_u64, volatile long *m, unsigned long val)
         ".previous"
         : "=&r" (val), "=&r" (dummy), "=m" (*m)
         : "rI" (val), "m" (*m) : "memory");
-        smp_mb();
 
         return val;
 }
@@ -135,13 +123,6 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
  * Atomic compare and exchange. Compare OLD with MEM, if identical,
  * store NEW in MEM. Return the initial value in MEM. Success is
  * indicated by comparing RETURN with OLD.
- *
- * The leading and the trailing memory barriers guarantee that these
- * operations are fully ordered.
- *
- * The trailing memory barrier is placed in SMP unconditionally, in
- * order to guarantee that dependency ordering is preserved when a
- * dependency is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -149,7 +130,6 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
 {
         unsigned long prev, tmp, cmp, addr64;
 
-        smp_mb();
         __asm__ __volatile__(
         "       andnot  %5,7,%4\n"
         "       insbl   %1,%5,%1\n"
@@ -167,7 +147,6 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
         ".previous"
         : "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64)
         : "r" ((long)m), "Ir" (old), "1" (new) : "memory");
-        smp_mb();
 
         return prev;
 }
@@ -177,7 +156,6 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
 {
         unsigned long prev, tmp, cmp, addr64;
 
-        smp_mb();
         __asm__ __volatile__(
         "       andnot  %5,7,%4\n"
         "       inswl   %1,%5,%1\n"
@@ -195,7 +173,6 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
         ".previous"
         : "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64)
         : "r" ((long)m), "Ir" (old), "1" (new) : "memory");
-        smp_mb();
 
         return prev;
 }
@@ -205,7 +182,6 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
 {
         unsigned long prev, cmp;
 
-        smp_mb();
         __asm__ __volatile__(
         "1:     ldl_l %0,%5\n"
         "       cmpeq %0,%3,%1\n"
@@ -219,7 +195,6 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
         ".previous"
         : "=&r"(prev), "=&r"(cmp), "=m"(*m)
         : "r"((long) old), "r"(new), "m"(*m) : "memory");
-        smp_mb();
 
         return prev;
 }
@@ -229,7 +204,6 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
 {
         unsigned long prev, cmp;
 
-        smp_mb();
         __asm__ __volatile__(
         "1:     ldq_l %0,%5\n"
         "       cmpeq %0,%3,%1\n"
@@ -243,7 +217,6 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
         ".previous"
         : "=&r"(prev), "=&r"(cmp), "=m"(*m)
         : "r"((long) old), "r"(new), "m"(*m) : "memory");
-        smp_mb();
 
         return prev;
 }
-- 
2.7.4
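
A note on the comment being removed from asm/xchg.h about the trailing
barrier: on Alpha, a load whose address depends on the result of a
failed cmpxchg() is not ordered by the address dependency alone, which
is why the fully ordered cmpxchg() keeps an unconditional trailing
smp_mb() (now placed in the macro). A kernel-style sketch of the
pattern it protects, with a hypothetical slot/node layout that is not
part of this patch:

    struct node { int data; };

    int read_through(struct node **slot, struct node *old, struct node *new)
    {
            struct node *p;

            /* The trailing smp_mb() inside cmpxchg() orders the
             * dependent load below even when the cmpxchg fails and
             * merely returns the current pointer. */
            p = cmpxchg(slot, old, new);
            return READ_ONCE(p->data);
    }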