From: Andrea Parri
To: Ingo Molnar, Peter Zijlstra
Cc: Andrea Parri, "Paul E. McKenney", Alan Stern, Ivan Kokshaysky,
    Matt Turner, Richard Henderson, linux-alpha@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] locking/xchg/alpha: Add leading smp_mb() to xchg(), cmpxchg()
Date: Thu, 22 Feb 2018 10:24:48 +0100
Message-Id: <1519291488-5752-1-git-send-email-parri.andrea@gmail.com>
X-Mailer: git-send-email 2.7.4

Successful RMW operations are supposed to be fully ordered, but
Alpha's xchg() and cmpxchg() do not align to this requirement.

Will reported that:

> So MP using xchg:
>
>   WRITE_ONCE(x, 1)
>   xchg(y, 1)
>
>   smp_load_acquire(y) == 1
>   READ_ONCE(x) == 0
>
> would be allowed.

(thus violating the above requirement). Amend this by adding a
leading smp_mb() to the implementations of xchg(), cmpxchg().

Reported-by: Will Deacon
Signed-off-by: Andrea Parri
Cc: Peter Zijlstra
Cc: Paul E. McKenney
Cc: Alan Stern
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Richard Henderson
Cc: linux-alpha@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/alpha/include/asm/xchg.h | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index e1facf6fc2446..e2b59fac5257d 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -12,6 +12,10 @@
  * Atomic exchange.
  * Since it can be used to implement critical sections
  * it must clobber "memory" (also for interrupts in UP).
+ *
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ *
  */
 
 static inline unsigned long
@@ -19,6 +23,7 @@ ____xchg(_u8, volatile char *m, unsigned long val)
 {
 	unsigned long ret, tmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%4,7,%3\n"
 	"	insbl	%1,%4,%1\n"
@@ -43,6 +48,7 @@ ____xchg(_u16, volatile short *m, unsigned long val)
 {
 	unsigned long ret, tmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%4,7,%3\n"
 	"	inswl	%1,%4,%1\n"
@@ -67,6 +73,7 @@ ____xchg(_u32, volatile int *m, unsigned long val)
 {
 	unsigned long dummy;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldl_l %0,%4\n"
 	"	bis $31,%3,%1\n"
@@ -87,6 +94,7 @@ ____xchg(_u64, volatile long *m, unsigned long val)
 {
 	unsigned long dummy;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l %0,%4\n"
 	"	bis $31,%3,%1\n"
@@ -128,9 +136,12 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier is placed in SMP unconditionally, in order to
- * guarantee that dependency ordering is preserved when a dependency
- * is headed by an unsuccessful operation.
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ *
+ * The trailing memory barrier is placed in SMP unconditionally, in
+ * order to guarantee that dependency ordering is preserved when a
+ * dependency is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -138,6 +149,7 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
 {
 	unsigned long prev, tmp, cmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%5,7,%4\n"
 	"	insbl	%1,%5,%1\n"
@@ -165,6 +177,7 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
 {
 	unsigned long prev, tmp, cmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%5,7,%4\n"
 	"	inswl	%1,%5,%1\n"
@@ -192,6 +205,7 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
 {
 	unsigned long prev, cmp;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldl_l %0,%5\n"
 	"	cmpeq %0,%3,%1\n"
@@ -215,6 +229,7 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
 {
 	unsigned long prev, cmp;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l %0,%5\n"
 	"	cmpeq %0,%3,%1\n"
-- 
2.7.4