From: Eric Biggers <ebiggers@kernel.org>
To: linux-kernel@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Will Deacon
Cc: Elena Reshetova, Thomas Gleixner, Kees Cook, Anna-Maria Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH] locking/refcount: add sparse annotations to dec-and-lock functions
Date: Thu, 26 Dec 2019 09:29:22 -0600
Message-Id: <20191226152922.2034-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@kernel.org>

Wrap refcount_dec_and_lock() and refcount_dec_and_lock_irqsave() with
macros using __cond_lock() so that 'sparse' doesn't report warnings about
unbalanced locking when using them.  This is the same thing that's done
for their atomic_t equivalents.

Don't annotate refcount_dec_and_mutex_lock(), because mutexes don't
currently have sparse annotations.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 include/linux/refcount.h | 45 ++++++++++++++++++++++++++++++++++++----
 lib/refcount.c           | 39 +++++----------------------------
 2 files changed, 46 insertions(+), 38 deletions(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 0ac50cf62d06..6bb5ab9e98ed 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -300,8 +300,45 @@ static inline void refcount_dec(refcount_t *r)
 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
 extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock);
-extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock);
-extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
-						       spinlock_t *lock,
-						       unsigned long *flags);
+
+/**
+ * refcount_dec_and_lock - return holding spinlock if able to decrement
+ *                         refcount to 0
+ * @r: the refcount
+ * @lock: the spinlock to be locked
+ *
+ * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
+ * decrement when saturated at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides a control dependency such that free() must come after.
+ * See the comment on top.
+ *
+ * Return: true and hold spinlock if able to decrement refcount to 0, false
+ * otherwise
+ */
+extern __must_check bool _refcount_dec_and_lock(refcount_t *r,
+						spinlock_t *lock);
+#define refcount_dec_and_lock(r, lock) \
+	__cond_lock(lock, _refcount_dec_and_lock(r, lock))
+
+/**
+ * refcount_dec_and_lock_irqsave - return holding spinlock with disabled
+ *                                 interrupts if able to decrement refcount to 0
+ * @r: the refcount
+ * @lock: the spinlock to be locked
+ * @flags: saved IRQ-flags if the lock is acquired
+ *
+ * Same as refcount_dec_and_lock() above except that the spinlock is acquired
+ * with disabled interrupts.
+ *
+ * Return: true and hold spinlock if able to decrement refcount to 0, false
+ * otherwise
+ */
+extern __must_check bool _refcount_dec_and_lock_irqsave(refcount_t *r,
+							spinlock_t *lock,
+							unsigned long *flags);
+#define refcount_dec_and_lock_irqsave(r, lock, flags) \
+	__cond_lock(lock, _refcount_dec_and_lock_irqsave(r, lock, flags))
+
 #endif /* _LINUX_REFCOUNT_H */
diff --git a/lib/refcount.c b/lib/refcount.c
index ebac8b7d15a7..f0eb996b28c0 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -125,23 +125,7 @@ bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)
 }
 EXPORT_SYMBOL(refcount_dec_and_mutex_lock);
 
-/**
- * refcount_dec_and_lock - return holding spinlock if able to decrement
- *                         refcount to 0
- * @r: the refcount
- * @lock: the spinlock to be locked
- *
- * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
- * decrement when saturated at REFCOUNT_SATURATED.
- *
- * Provides release memory ordering, such that prior loads and stores are done
- * before, and provides a control dependency such that free() must come after.
- * See the comment on top.
- *
- * Return: true and hold spinlock if able to decrement refcount to 0, false
- * otherwise
- */
-bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
+bool _refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
 {
 	if (refcount_dec_not_one(r))
 		return false;
@@ -154,23 +138,10 @@ bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
 	return true;
 }
-EXPORT_SYMBOL(refcount_dec_and_lock);
+EXPORT_SYMBOL(_refcount_dec_and_lock);
 
-/**
- * refcount_dec_and_lock_irqsave - return holding spinlock with disabled
- *                                 interrupts if able to decrement refcount to 0
- * @r: the refcount
- * @lock: the spinlock to be locked
- * @flags: saved IRQ-flags if the is acquired
- *
- * Same as refcount_dec_and_lock() above except that the spinlock is acquired
- * with disabled interupts.
- *
- * Return: true and hold spinlock if able to decrement refcount to 0, false
- * otherwise
- */
-bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock,
-				   unsigned long *flags)
+bool _refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock,
+				    unsigned long *flags)
 {
 	if (refcount_dec_not_one(r))
 		return false;
@@ -183,4 +154,4 @@ bool refcount_dec_and_lock_irqsave(refcount_t *r, spinlock_t *lock,
 	return true;
 }
-EXPORT_SYMBOL(refcount_dec_and_lock_irqsave);
+EXPORT_SYMBOL(_refcount_dec_and_lock_irqsave);
-- 
2.24.1