From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: neilb@suse.de, peterz@infradead.org, mingo@redhat.com,
	will@kernel.org, longman@redhat.com, boqun.feng@gmail.com,
	tglx@linutronix.de, bigeasy@linutronix.de
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org
Subject: [PATCH 03/17] bit_spinlock: Prepare for split_locks
Date: Fri, 9 Apr 2021 03:51:17 +0100
Message-Id: <20210409025131.4114078-4-willy@infradead.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210409025131.4114078-1-willy@infradead.org>
References: <20210409025131.4114078-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make bit_spin_lock() and variants variadic to help with the transition.
The split_lock parameter will become mandatory at the end of the series.
Also add bit_spin_lock_nested() and bit_spin_unlock_assign(), which will
both be used by the rhashtable code later.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/bit_spinlock.h | 43 ++++++++++++++++++++++++++++++++----
 1 file changed, 39 insertions(+), 4 deletions(-)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730a6505..6c5bbb55b334 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -6,6 +6,7 @@
 #include <linux/preempt.h>
 #include <linux/atomic.h>
 #include <linux/bug.h>
+#include <linux/split_lock.h>
 
 /*
  * bit-based spin_lock()
@@ -13,7 +14,8 @@
  * Don't use this unless you really need to: spin_lock() and spin_unlock()
  * are significantly faster.
  */
-static inline void bit_spin_lock(int bitnum, unsigned long *addr)
+static inline void bit_spin_lock_nested(int bitnum, unsigned long *addr,
+		struct split_lock *lock, unsigned int subclass)
 {
 	/*
 	 * Assuming the lock is uncontended, this never enters
@@ -35,10 +37,27 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr)
 	__acquire(bitlock);
 }
 
+static inline void bit_spin_lock(int bitnum, unsigned long *addr,
+		...)
+{
+	preempt_disable();
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
+	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
+		preempt_enable();
+		do {
+			cpu_relax();
+		} while (test_bit(bitnum, addr));
+		preempt_disable();
+	}
+#endif
+	__acquire(bitlock);
+}
+
 /*
  * Return true if it was acquired
  */
-static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
+static inline int bit_spin_trylock(int bitnum, unsigned long *addr,
+		...)
 {
 	preempt_disable();
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
@@ -54,7 +73,8 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 /*
  * bit-based spin_unlock()
  */
-static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
+static inline void bit_spin_unlock(int bitnum, unsigned long *addr,
+		...)
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -71,7 +91,8 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
  * non-atomic version, which can be used eg. if the bit lock itself is
  * protecting the rest of the flags in the word.
  */
-static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
+static inline void __bit_spin_unlock(int bitnum, unsigned long *addr,
+		...)
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -83,6 +104,20 @@ static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
 	__release(bitlock);
 }
 
+/**
+ * bit_spin_unlock_assign - Unlock a bitlock by assignment of new value.
+ * @addr: Address to assign the value to.
+ * @val: New value to assign.
+ * @lock: Split lock that this bitlock is part of.
+ */
+static inline void bit_spin_unlock_assign(unsigned long *addr,
+		unsigned long val, struct split_lock *lock)
+{
+	smp_store_release(addr, val);
+	preempt_enable();
+	__release(bitlock);
+}
+
 /*
  * Return true if the lock is held.
  */
-- 
2.30.2