Date: Mon, 26 Feb 2018 17:10:08 +0000
From: Will Deacon
To: Linus Torvalds
Cc: Luc Maranget, Daniel Lustig, Peter Zijlstra, "Paul E. McKenney",
    Andrea Parri, Linux Kernel Mailing List, Palmer Dabbelt, Albert Ou,
    Alan Stern, Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
    Akira Yokosawa, Ingo Molnar, linux-riscv@lists.infradead.org
Subject: Re: [RFC PATCH] riscv/locking: Strengthen spin_lock() and spin_unlock()
Message-ID: <20180226171008.GC30736@arm.com>
References: <20180222134004.GN25181@hirez.programming.kicks-ass.net>
 <20180222141249.GA14033@andrea>
 <82beae6a-2589-6136-b563-3946d7c4fc60@nvidia.com>
 <20180222181317.GI2855@linux.vnet.ibm.com>
 <20180222182717.GS25181@hirez.programming.kicks-ass.net>
 <563431d0-4fb5-9efd-c393-83cc5197e934@nvidia.com>
 <20180226142107.uid5vtv5r7zbso33@yquem.inria.fr>
 <20180226162426.GB17158@arm.com>

On Mon, Feb 26, 2018 at 09:00:43AM -0800, Linus Torvalds wrote:
> On Mon, Feb 26, 2018 at 8:24 AM, Will Deacon wrote:
> >
> > Strictly speaking, that's not what we've got implemented on arm64: only
> > the read part of the RmW has Acquire semantics, but there is a total
> > order on the lock/unlock operations for the lock.
>
> Hmm.
>
> I thought we had exactly that bug on some architecture with the queued
> spinlocks, and people decided it was wrong.
>
> But it's possible that I mis-remember, and that we decided it was ok
> after all.
>
> >	spin_lock(&lock);
> >	WRITE_ONCE(foo, 42);
> >
> > then another CPU could do:
> >
> >	if (smp_load_acquire(&foo) == 42)
> >		BUG_ON(!spin_is_locked(&lock));
> >
> > and that could fire. Is that relied on somewhere?
>
> I have a distinct memory that we said the spinlock write is seen in
> order, wrt the writes inside the spinlock, and the reason was
> something very similar to the above, except that "spin_is_locked()"
> was about our spin_unlock_wait().

Yes, we did run into problems with spin_unlock_wait and we ended up
strengthening the arm64 implementation to do an RmW, which puts it into
the total order of lock/unlock operations. However, we then went and
killed the thing because it was seldom used correctly and we struggled
to define what "correctly" even meant!

> Because we had something very much like the above in the exit path,
> where we would look at some state and do "spin_unlock_wait()" and
> expect to be guaranteed to be the last user after that.
>
> But a few months ago we obviously got rid of spin_unlock_wait exactly
> because people were worried about the semantics.

Similarly for spin_can_lock.

> So maybe I just remember an older issue that simply became a non-issue
> with that.

I think so. If we need to, I could make spin_is_locked do an RmW on
arm64 so we can say that all successful spin_* operations are totally
ordered for a given lock, but spin_is_locked is normally only used as a
coarse debug check anyway where it's assumed that if it's held, it's
held by the current CPU.

We should probably move most users over to lockdep and see what we're
left with.

Will
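
A userspace sketch of the litmus test quoted above, written with C11
atomics and pthreads rather than the kernel's primitives (the names
sketch_spin_lock, sketch_spin_is_locked, lock_word and the printout are
illustrative assumptions, not kernel code). The lock acquisition uses an
acquire-only exchange, mirroring the arm64 behaviour Will describes in
which only the read half of the RmW has acquire semantics; nothing then
orders the lock word's new value before the later store to foo, so the
observer may see foo == 42 while the lock still appears free. Whether
that outcome is observable in practice depends on the hardware memory
model (it cannot fire under x86 TSO, for instance).

/* Sketch of the litmus test from the discussion above, as a userspace
 * C11 analogue.  Build with: cc -std=c11 -pthread litmus.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int lock_word;	/* 0 = unlocked, 1 = locked */
static atomic_int foo;

static void sketch_spin_lock(void)
{
	/* Acquire-only RmW: the read half orders later accesses after
	 * it, but the write half is not a release and may be observed
	 * "late" by other CPUs. */
	while (atomic_exchange_explicit(&lock_word, 1, memory_order_acquire))
		;
}

static bool sketch_spin_is_locked(void)
{
	/* A plain relaxed load -- like a READ_ONCE() of the lock word,
	 * with no ordering guarantees at all. */
	return atomic_load_explicit(&lock_word, memory_order_relaxed) != 0;
}

static void *cpu0(void *arg)
{
	sketch_spin_lock();
	/* WRITE_ONCE(foo, 42) inside the critical section. */
	atomic_store_explicit(&foo, 42, memory_order_relaxed);
	return NULL;
}

static void *cpu1(void *arg)
{
	/* smp_load_acquire(&foo) == 42, then the BUG_ON() check. */
	if (atomic_load_explicit(&foo, memory_order_acquire) == 42 &&
	    !sketch_spin_is_locked()) {
		/* No happens-before edge forces the lock word's update
		 * to be visible here, so on a weak memory model this
		 * branch is reachable: the email's BUG_ON() fires. */
		printf("foo == 42 but the lock looks free\n");
	}
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}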
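
On the possible strengthening Will mentions, here is what an RmW-based
spin_is_locked could look like as a drop-in addition to the sketch above
(again an assumption for illustration, not arm64's actual
implementation). In C11, an atomic read-modify-write must read the
latest value in the lock word's modification order, so even a
value-preserving RmW cannot observe a stale "unlocked" value the way a
plain load can, which is what puts it into the total order of lock and
unlock operations.

static bool sketch_spin_is_locked_rmw(void)
{
	/* OR-ing in 0 leaves the lock word unchanged, but as an atomic
	 * RmW it must read the last value in the modification order,
	 * joining the total order of lock/unlock RmWs instead of
	 * racing ahead with a possibly stale read. */
	return atomic_fetch_or_explicit(&lock_word, 0,
					memory_order_acq_rel) != 0;
}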