Date: Fri, 12 Jan 2024 16:38:16 -0800
From: "Darrick J. Wong"
To: "Matthew Wilcox (Oracle)"
Cc: Chandan Babu R, linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org,
	Mateusz Guzik, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Subject: Re: [PATCH v5 1/3] locking: Add rwsem_assert_held() and rwsem_assert_held_write()
Message-ID: <20240113003816.GB722975@frogsfrogsfrogs>
References: <20240111212424.3572189-1-willy@infradead.org>
 <20240111212424.3572189-2-willy@infradead.org>
In-Reply-To: <20240111212424.3572189-2-willy@infradead.org>

On Thu, Jan 11, 2024 at 09:24:22PM +0000, Matthew Wilcox (Oracle) wrote:
> Modelled after lockdep_assert_held() and lockdep_assert_held_write(),
> but are always active, even when lockdep is disabled. Of course, they
> don't test that _this_ thread is the owner, but it's sufficient to catch
> many bugs and doesn't incur the same performance penalty as lockdep.
> 
> Signed-off-by: Matthew Wilcox (Oracle)
> Acked-by: Peter Zijlstra (Intel)

Looks good to me, having read this patchset backwards.

Acked-by: Darrick J. Wong

--D

> ---
>  include/linux/rwbase_rt.h |  9 ++++++--
>  include/linux/rwsem.h     | 46 ++++++++++++++++++++++++++++++++++-----
>  2 files changed, 48 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/rwbase_rt.h b/include/linux/rwbase_rt.h
> index 1d264dd08625..29c4e4f243e4 100644
> --- a/include/linux/rwbase_rt.h
> +++ b/include/linux/rwbase_rt.h
> @@ -26,12 +26,17 @@ struct rwbase_rt {
>  	} while (0)
>  
>  
> -static __always_inline bool rw_base_is_locked(struct rwbase_rt *rwb)
> +static __always_inline bool rw_base_is_locked(const struct rwbase_rt *rwb)
>  {
>  	return atomic_read(&rwb->readers) != READER_BIAS;
>  }
>  
> -static __always_inline bool rw_base_is_contended(struct rwbase_rt *rwb)
> +static inline void rw_base_assert_held_write(const struct rwbase_rt *rwb)
> +{
> +	WARN_ON(atomic_read(&rwb->readers) != WRITER_BIAS);
> +}
> +
> +static __always_inline bool rw_base_is_contended(const struct rwbase_rt *rwb)
>  {
>  	return atomic_read(&rwb->readers) > 0;
>  }
> diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
> index 9c29689ff505..4f1c18992f76 100644
> --- a/include/linux/rwsem.h
> +++ b/include/linux/rwsem.h
> @@ -66,14 +66,24 @@ struct rw_semaphore {
>  #endif
>  };
>  
> -/* In all implementations count != 0 means locked */
> +#define RWSEM_UNLOCKED_VALUE		0UL
> +#define RWSEM_WRITER_LOCKED		(1UL << 0)
> +#define __RWSEM_COUNT_INIT(name)	.count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
> +
>  static inline int rwsem_is_locked(struct rw_semaphore *sem)
>  {
> -	return atomic_long_read(&sem->count) != 0;
> +	return atomic_long_read(&sem->count) != RWSEM_UNLOCKED_VALUE;
>  }
>  
> -#define RWSEM_UNLOCKED_VALUE		0L
> -#define __RWSEM_COUNT_INIT(name)	.count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
> +static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
> +{
> +	WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE);
> +}
> +
> +static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
> +{
> +	WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED));
> +}
> 
>  /* Common initializer macros and functions */
>  
> @@ -152,11 +162,21 @@ do {								\
>  	__init_rwsem((sem), #sem, &__key);			\
>  } while (0)
>  
> -static __always_inline int rwsem_is_locked(struct rw_semaphore *sem)
> +static __always_inline int rwsem_is_locked(const struct rw_semaphore *sem)
>  {
>  	return rw_base_is_locked(&sem->rwbase);
>  }
>  
> +static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem)
> +{
> +	WARN_ON(!rwsem_is_locked(sem));
> +}
> +
> +static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem)
> +{
> +	rw_base_assert_held_write(sem);
> +}
> +
>  static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
>  {
>  	return rw_base_is_contended(&sem->rwbase);
> @@ -169,6 +189,22 @@ static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
>   * the RT specific variant.
>   */
>  
> +static inline void rwsem_assert_held(const struct rw_semaphore *sem)
> +{
> +	if (IS_ENABLED(CONFIG_LOCKDEP))
> +		lockdep_assert_held(sem);
> +	else
> +		rwsem_assert_held_nolockdep(sem);
> +}
> +
> +static inline void rwsem_assert_held_write(const struct rw_semaphore *sem)
> +{
> +	if (IS_ENABLED(CONFIG_LOCKDEP))
> +		lockdep_assert_held_write(sem);
> +	else
> +		rwsem_assert_held_write_nolockdep(sem);
> +}
> +
>  /*
>   * lock for reading
>   */
> -- 
> 2.43.0
> 
> 
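[Editor's note: for context, a caller would typically use the new helpers to encode its locking contract at the top of functions that require the lock. The sketch below is illustrative only and is not part of the patch above; the foo_inode structure and helper functions are hypothetical, and it assumes this patch is applied.]

```c
/*
 * Illustrative sketch (hypothetical structure and helpers): using the
 * new assertions to document which callers must hold which lock mode.
 */
#include <linux/rwsem.h>

struct foo_inode {
	struct rw_semaphore	i_rwsem;
	unsigned long		i_size;
};

/* Callers must hold i_rwsem for write before changing the size. */
static void foo_set_size(struct foo_inode *fi, unsigned long size)
{
	rwsem_assert_held_write(&fi->i_rwsem);
	fi->i_size = size;
}

/* Callers must hold i_rwsem (read or write) to sample the size. */
static unsigned long foo_get_size(struct foo_inode *fi)
{
	rwsem_assert_held(&fi->i_rwsem);
	return fi->i_size;
}
```

With CONFIG_LOCKDEP enabled these compile down to the usual lockdep checks; without lockdep they fall back to the cheap WARN_ON()-based count checks, so the assertions stay active in production builds.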