From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andrea Parri,
    Peter Zijlstra, "Paul E. McKenney", Alan Stern, Ivan Kokshaysky,
    Linus Torvalds, Matt Turner, Richard Henderson, Thomas Gleixner,
    Will Deacon, linux-alpha@vger.kernel.org, Ingo Molnar, Sasha Levin
Subject: [PATCH 4.4 094/268] locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
Date: Mon, 28 May 2018 12:01:08 +0200
Message-Id: <20180528100212.848894278@linuxfoundation.org>
In-Reply-To: <20180528100202.045206534@linuxfoundation.org>
References: <20180528100202.045206534@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andrea Parri

[ Upstream commit cb13b424e986aed68d74cbaec3449ea23c50e167 ]

Continuing along with the fight against smp_read_barrier_depends() [1]
(or rather, against its improper use), add an unconditional barrier to
cmpxchg.  This guarantees that dependency ordering is preserved when a
dependency is headed by an unsuccessful cmpxchg.  As it turns out, the
change could enable further simplification of LKMM as proposed in [2].

[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
    https://marc.info/?l=linux-kernel&m=150884946319353&w=2
    https://marc.info/?l=linux-kernel&m=151215810824468&w=2
    https://marc.info/?l=linux-kernel&m=151215816324484&w=2

[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2

Signed-off-by: Andrea Parri
Acked-by: Peter Zijlstra
Acked-by: Paul E. McKenney
Cc: Alan Stern
Cc: Ivan Kokshaysky
Cc: Linus Torvalds
Cc: Matt Turner
Cc: Richard Henderson
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 arch/alpha/include/asm/xchg.h | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -127,10 +127,9 @@ ____xchg(, volatile void *ptr, unsigned
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -149,8 +148,8 @@ ____cmpxchg(_u8, volatile char *m, unsig
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -176,8 +175,8 @@ ____cmpxchg(_u16, volatile short *m, uns
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -199,8 +198,8 @@ ____cmpxchg(_u32, volatile int *m, int o
 	"	mov	%4,%1\n"
 	"	stl_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -222,8 +221,8 @@ ____cmpxchg(_u64, volatile long *m, unsi
 	"	mov	%4,%1\n"
 	"	stq_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
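For context, the pattern the patch protects can be sketched in portable C11 atomics rather than the kernel's cmpxchg() (the function name and values below are illustrative, not from the patch): a *failed* compare-exchange still hands back the current pointer, and a subsequent load through that pointer is a dependency headed by the unsuccessful operation. On Alpha, before this change, the barrier was skipped on the failure path, so such a dependent load could observe stale data.

```c
#include <stdatomic.h>

struct node { int data; };

/* Returns the data reached through the pointer handed back by a
 * *failed* compare-exchange. */
static int failed_cmpxchg_demo(void)
{
	static struct node a = { .data = 42 };
	static struct node b = { .data = 7 };
	_Atomic(struct node *) head = &a;

	struct node *expected = &b;	/* wrong guess, so the CAS fails */
	atomic_compare_exchange_strong(&head, &expected, &b);

	/* On failure, 'expected' is overwritten with the current value
	 * (&a here); this load through it is a dependency headed by the
	 * unsuccessful operation -- the case the patch orders. */
	return expected->data;
}
```

This single-threaded sketch only shows the dataflow; the ordering question only arises when another CPU concurrently publishes the node, which is why the fix is an unconditional barrier rather than one taken only on success.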