From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: Will Deacon, Kees Cook, Ingo Molnar, Elena Reshetova, Peter Zijlstra,
	Ard Biesheuvel, Hanjun Guo, Jan Glauber
Subject: [PATCH v2 5/6] lib/refcount: Improve performance of generic REFCOUNT_FULL code
Date: Tue, 27 Aug 2019 17:32:03 +0100
Message-Id: <20190827163204.29903-6-will@kernel.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190827163204.29903-1-will@kernel.org>
References: <20190827163204.29903-1-will@kernel.org>

Rewrite the generic REFCOUNT_FULL implementation so that the saturation
point is moved to INT_MIN / 2. This allows us to defer the sanity checks
until after the atomic operation, which removes many uses of cmpxchg()
in favour of atomic_fetch_{add,sub}().

Cc: Kees Cook
Cc: Ingo Molnar
Cc: Elena Reshetova
Cc: Peter Zijlstra
Cc: Ard Biesheuvel
Tested-by: Hanjun Guo
Tested-by: Jan Glauber
Signed-off-by: Will Deacon
---
 include/linux/refcount.h | 87 +++++++++++++++++++------------------------------
 1 file changed, 34 insertions(+), 53 deletions(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index e719b5b1220e..7f9aa6511142 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -47,8 +47,8 @@ static inline unsigned int refcount_read(const refcount_t *r)
 #ifdef CONFIG_REFCOUNT_FULL
 #include
 
-#define REFCOUNT_MAX		(UINT_MAX - 1)
-#define REFCOUNT_SATURATED	UINT_MAX
+#define REFCOUNT_MAX		INT_MAX
+#define REFCOUNT_SATURATED	(INT_MIN / 2)
 
 /*
  * Variant of atomic_t specialized for reference counts.
@@ -109,25 +109,19 @@ static inline unsigned int refcount_read(const refcount_t *r)
  */
 static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
 {
-	unsigned int new, val = atomic_read(&r->refs);
+	int old = refcount_read(r);
 
 	do {
-		if (!val)
-			return false;
-
-		if (unlikely(val == REFCOUNT_SATURATED))
-			return true;
-
-		new = val + i;
-		if (new < val)
-			new = REFCOUNT_SATURATED;
+		if (!old)
+			break;
+	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
 
-	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
-
-	WARN_ONCE(new == REFCOUNT_SATURATED,
-		  "refcount_t: saturated; leaking memory.\n");
+	if (unlikely(old < 0 || old + i < 0)) {
+		refcount_set(r, REFCOUNT_SATURATED);
+		WARN_ONCE(1, "refcount_t: saturated; leaking memory.\n");
+	}
 
-	return true;
+	return old;
 }
 
 /**
@@ -148,7 +142,13 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r)
  */
 static inline void refcount_add(int i, refcount_t *r)
 {
-	WARN_ONCE(!refcount_add_not_zero(i, r), "refcount_t: addition on 0; use-after-free.\n");
+	int old = atomic_fetch_add_relaxed(i, &r->refs);
+
+	WARN_ONCE(!old, "refcount_t: addition on 0; use-after-free.\n");
+	if (unlikely(old <= 0 || old + i <= 0)) {
+		refcount_set(r, REFCOUNT_SATURATED);
+		WARN_ONCE(old, "refcount_t: saturated; leaking memory.\n");
+	}
 }
 
 /**
@@ -166,23 +166,7 @@ static inline void refcount_add(int i, refcount_t *r)
  */
 static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
 {
-	unsigned int new, val = atomic_read(&r->refs);
-
-	do {
-		new = val + 1;
-
-		if (!val)
-			return false;
-
-		if (unlikely(!new))
-			return true;
-
-	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &val, new));
-
-	WARN_ONCE(new == REFCOUNT_SATURATED,
-		  "refcount_t: saturated; leaking memory.\n");
-
-	return true;
+	return refcount_add_not_zero(1, r);
 }
 
 /**
@@ -199,7 +183,7 @@ static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
  */
 static inline void refcount_inc(refcount_t *r)
 {
-	WARN_ONCE(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n");
+	refcount_add(1, r);
 }
 
 /**
@@ -224,26 +208,19 @@ static inline void refcount_inc(refcount_t *r)
  */
 static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r)
 {
-	unsigned int new, val = atomic_read(&r->refs);
-
-	do {
-		if (unlikely(val == REFCOUNT_SATURATED))
-			return false;
+	int old = atomic_fetch_sub_release(i, &r->refs);
 
-		new = val - i;
-		if (new > val) {
-			WARN_ONCE(new > val, "refcount_t: underflow; use-after-free.\n");
-			return false;
-		}
-
-	} while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
-
-	if (!new) {
+	if (old == i) {
 		smp_acquire__after_ctrl_dep();
 		return true;
 	}
 
-	return false;
+	if (unlikely(old - i < 0)) {
+		refcount_set(r, REFCOUNT_SATURATED);
+		WARN_ONCE(1, "refcount_t: underflow; use-after-free.\n");
+	}
+
+	return false;
 }
 
 /**
@@ -276,9 +253,13 @@ static inline __must_check bool refcount_dec_and_test(refcount_t *r)
  */
 static inline void refcount_dec(refcount_t *r)
 {
-	WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n");
-}
+	int old = atomic_fetch_sub_release(1, &r->refs);
 
+	if (unlikely(old <= 1)) {
+		refcount_set(r, REFCOUNT_SATURATED);
+		WARN_ONCE(1, "refcount_t: decrement hit 0; leaking memory.\n");
+	}
+}
 #else /* CONFIG_REFCOUNT_FULL */
 
 #define REFCOUNT_MAX		INT_MAX
-- 
2.11.0
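
For readers following the arithmetic: the scheme above relies on a counter
saturated at INT_MIN / 2 being so far from zero that it cannot drift back into
the valid (non-negative) range under any realistic number of racing operations,
which is why the overflow/underflow checks can run after an unconditional
atomic update. Below is a minimal, standalone userspace sketch of that idea.
It is illustrative only, not the patch's code: the names (EXAMPLE_SATURATED,
example_refs, example_add) are made up for the example, and it avoids the
signed-overflow idiom that the kernel can rely on via -fno-strict-overflow.

	/*
	 * Illustrative sketch only -- not the kernel implementation. It mimics
	 * the scheme described above: treat the counter as signed, saturate at
	 * INT_MIN / 2, and check for overflow *after* the atomic update.
	 */
	#include <limits.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define EXAMPLE_SATURATED	(INT_MIN / 2)	/* hypothetical name */

	static atomic_int example_refs;			/* hypothetical counter */

	static bool example_add(int i)
	{
		/* Unconditional update; inspect the pre-update value afterwards. */
		int old = atomic_fetch_add_explicit(&example_refs, i,
						    memory_order_relaxed);

		/*
		 * The patch checks "old < 0 || old + i < 0", relying on
		 * -fno-strict-overflow; plain C avoids signed overflow instead.
		 */
		if (old < 0 || old > INT_MAX - i) {
			atomic_store(&example_refs, EXAMPLE_SATURATED);
			fprintf(stderr, "example: saturated; leaking reference\n");
			return false;
		}
		return true;
	}

	int main(void)
	{
		atomic_store(&example_refs, INT_MAX);	/* force an overflow */
		example_add(1);				/* detected after the update */
		printf("refs = %d\n", atomic_load(&example_refs));
		return 0;
	}

Built with "cc -std=c11 example.c", this prints a large negative (saturated)
counter value after the forced overflow, showing that a post-hoc check is
sufficient once the saturation point sits at INT_MIN / 2.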