From: Davidlohr Bueso <dave@stgolabs.net>
To: npiggin@gmail.com
Cc: peterz@infradead.org, mingo@redhat.com, will@kernel.org,
	longman@redhat.com, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, dave@stgolabs.net,
	parri.andrea@gmail.com, pabeni@redhat.com, Davidlohr Bueso
Subject: [PATCH 2/3] powerpc/spinlock: Unserialize spin_is_locked
Date: Mon, 8 Mar 2021 17:59:49 -0800
Message-Id: <20210309015950.27688-3-dave@stgolabs.net>
In-Reply-To: <20210309015950.27688-1-dave@stgolabs.net>
References: <20210309015950.27688-1-dave@stgolabs.net>

c6f5d02b6a0f (locking/spinlocks/arm64: Remove smp_mb() from
arch_spin_is_locked()) made it pretty official that the call semantics
do not imply any sort of barriers, and any user that gets creative must
do any serialization explicitly. This creativity, however, is nowadays
pretty limited:

1. spin_unlock_wait() has been removed from the kernel in favor of a
lock/unlock combo. Furthermore, for a number of years now, queued
spinlocks have no longer relied on _Q_LOCKED_VAL for the check, but on
any non-zero value to indicate a locked state. There were cases where
the delayed locked store could break mutual exclusion with crossed
locking, with sysv ipc and netfilter being the most extreme examples.

2. The audit Andrea did verified that the remaining spin_is_locked()
callers no longer rely on such semantics. Most callers just use it to
assert that a lock is taken, in a debug nature. The only user that gets
cute is the NOLOCK qdisc, as of:

   96009c7d500e (sched: replace __QDISC_STATE_RUNNING bit with a spin lock)

... which ironically went in the next day after c6f5d02b6a0f. That
change replaced test_bit() with spin_is_locked() to decide whether to
take the busylock, a heuristic that reduces contention on the main
qdisc lock. So any races against spin_is_locked() for archs that use
LL/SC for spin_lock() will be benign and not break any mutual
exclusion; furthermore, both the seqlock and busylock have the same
scope.

Cc: parri.andrea@gmail.com
Cc: pabeni@redhat.com
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
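A note on the NOLOCK qdisc case above: the sketch below paraphrases
include/net/sch_generic.h and net/core/dev.c as of 96009c7d500e and is
only illustrative, not part of this patch; details may have shifted
since. It shows why an unordered spin_is_locked() read is benign there:
the result is used purely as a contention hint, so a stale answer can
at worst mispredict contention, never break locking.

	/*
	 * Paraphrased sketch: for NOLOCK qdiscs, qdisc_is_running()
	 * just peeks at the seqlock state.
	 */
	static inline bool qdisc_is_running(struct Qdisc *qdisc)
	{
		if (qdisc->flags & TCQ_F_NOLOCK)
			return spin_is_locked(&qdisc->seqlock);
		return (raw_read_seqcount(&qdisc->running) & 1) ? true : false;
	}

	/*
	 * __dev_xmit_skb() consults it only to decide whether to
	 * serialize on the busylock before taking the qdisc lock;
	 * a racy read merely costs (or saves) that extra trip.
	 */
	bool contended = qdisc_is_running(q);

	if (unlikely(contended))
		spin_lock(&q->busylock);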
 arch/powerpc/include/asm/qspinlock.h       | 12 ------------
 arch/powerpc/include/asm/simple_spinlock.h |  3 +--
 2 files changed, 1 insertion(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index 3ce1a0bee4fe..b052b0624816 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -44,18 +44,6 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 }
 #define queued_spin_lock queued_spin_lock
 
-static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
-{
-	/*
-	 * This barrier was added to simple spinlocks by commit 51d7d5205d338,
-	 * but it should now be possible to remove it, asm arm64 has done with
-	 * commit c6f5d02b6a0f.
-	 */
-	smp_mb();
-	return atomic_read(&lock->val);
-}
-#define queued_spin_is_locked queued_spin_is_locked
-
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 #define SPIN_THRESHOLD (1<<15)	/* not tuned */
 
diff --git a/arch/powerpc/include/asm/simple_spinlock.h b/arch/powerpc/include/asm/simple_spinlock.h
index 3e87258f73b1..1b935396522a 100644
--- a/arch/powerpc/include/asm/simple_spinlock.h
+++ b/arch/powerpc/include/asm/simple_spinlock.h
@@ -38,8 +38,7 @@ static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	smp_mb();
-	return !arch_spin_value_unlocked(*lock);
+	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
 /*
-- 
2.26.2