From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andrea Parri,
 Peter Zijlstra, "Paul E. McKenney", Alan Stern, Ivan Kokshaysky,
 Linus Torvalds, Matt Turner, Richard Henderson, Thomas Gleixner,
 Will Deacon, linux-alpha@vger.kernel.org, Ingo Molnar, Sasha Levin
Subject: [PATCH 4.14 109/496] locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
Date: Mon, 28 May 2018 11:58:14 +0200
Message-Id: <20180528100324.502201694@linuxfoundation.org>
In-Reply-To: <20180528100319.498712256@linuxfoundation.org>
References: <20180528100319.498712256@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andrea Parri

[ Upstream commit cb13b424e986aed68d74cbaec3449ea23c50e167 ]

Continuing along with the fight against smp_read_barrier_depends() [1]
(or rather, against its improper use), add an unconditional barrier to
cmpxchg.  This guarantees that dependency ordering is preserved when a
dependency is headed by an unsuccessful cmpxchg.  As it turns out, the
change could enable further simplification of LKMM as proposed in [2].

[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
    https://marc.info/?l=linux-kernel&m=150884946319353&w=2
    https://marc.info/?l=linux-kernel&m=151215810824468&w=2
    https://marc.info/?l=linux-kernel&m=151215816324484&w=2

[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2

Signed-off-by: Andrea Parri
Acked-by: Peter Zijlstra
Acked-by: Paul E. McKenney
Cc: Alan Stern
Cc: Ivan Kokshaysky
Cc: Linus Torvalds
Cc: Matt Turner
Cc: Richard Henderson
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 arch/alpha/include/asm/xchg.h | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsig
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, uns
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int o
 	"	mov	%4,%1\n"
 	"	stl_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsi
 	"	mov	%4,%1\n"
 	"	stq_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"