Date: Thu, 30 Sep 2021 17:37:06 +0200
Message-Id: <20210930153706.2105471-1-elver@google.com>
Subject: [PATCH] kfence: shorten critical sections of alloc/free
From: Marco Elver <elver@google.com>
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kasan-dev@googlegroups.com
X-Mailing-List: linux-kernel@vger.kernel.org

Initializing memory and setting/checking the canary bytes is relatively
expensive, and doing so in the meta->lock critical sections extends their
duration with preemption and interrupts disabled unnecessarily.

Reads of meta->addr and meta->size in kfence_guarded_alloc() and
kfence_guarded_free() do not require holding meta->lock as long as the
object has been removed from the freelist: only kfence_guarded_alloc()
sets meta->addr and meta->size, and only after removing the object from
the freelist, which in turn requires a preceding kfence_guarded_free()
returning it to the list (or the initial state).
Therefore, move the reads of meta->addr and meta->size, including the
expensive memory initialization using them, out of the meta->lock
critical sections.

Signed-off-by: Marco Elver <elver@google.com>
---
 mm/kfence/core.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index b61ef93d9f98..802905b1c89b 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -309,12 +309,19 @@ static inline bool set_canary_byte(u8 *addr)
 /* Check canary byte at @addr. */
 static inline bool check_canary_byte(u8 *addr)
 {
+	struct kfence_metadata *meta;
+	unsigned long flags;
+
 	if (likely(*addr == KFENCE_CANARY_PATTERN(addr)))
 		return true;
 
 	atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
-	kfence_report_error((unsigned long)addr, false, NULL, addr_to_metadata((unsigned long)addr),
-			    KFENCE_ERROR_CORRUPTION);
+
+	meta = addr_to_metadata((unsigned long)addr);
+	raw_spin_lock_irqsave(&meta->lock, flags);
+	kfence_report_error((unsigned long)addr, false, NULL, meta, KFENCE_ERROR_CORRUPTION);
+	raw_spin_unlock_irqrestore(&meta->lock, flags);
+
 	return false;
 }
 
@@ -324,8 +331,6 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
 	const unsigned long pageaddr = ALIGN_DOWN(meta->addr, PAGE_SIZE);
 	unsigned long addr;
 
-	lockdep_assert_held(&meta->lock);
-
 	/*
 	 * We'll iterate over each canary byte per-side until fn() returns
 	 * false. However, we'll still iterate over the canary bytes to the
@@ -414,8 +419,9 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	WRITE_ONCE(meta->cache, cache);
 	meta->size = size;
 	meta->alloc_stack_hash = alloc_stack_hash;
+	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
-	for_each_canary(meta, set_canary_byte);
+	alloc_covered_add(alloc_stack_hash, 1);
 
 	/* Set required struct page fields. */
 	page = virt_to_page(meta->addr);
@@ -425,11 +431,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	if (IS_ENABLED(CONFIG_SLAB))
 		page->s_mem = addr;
 
-	raw_spin_unlock_irqrestore(&meta->lock, flags);
-
-	alloc_covered_add(alloc_stack_hash, 1);
-
 	/* Memory initialization. */
+	for_each_canary(meta, set_canary_byte);
 
 	/*
 	 * We check slab_want_init_on_alloc() ourselves, rather than letting
@@ -454,6 +457,7 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 {
 	struct kcsan_scoped_access assert_page_exclusive;
 	unsigned long flags;
+	bool init;
 
 	raw_spin_lock_irqsave(&meta->lock, flags);
 
@@ -481,6 +485,13 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 		meta->unprotected_page = 0;
 	}
 
+	/* Mark the object as freed. */
+	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
+	init = slab_want_init_on_free(meta->cache);
+	raw_spin_unlock_irqrestore(&meta->lock, flags);
+
+	alloc_covered_add(meta->alloc_stack_hash, -1);
+
 	/* Check canary bytes for memory corruption. */
 	for_each_canary(meta, check_canary_byte);
 
@@ -489,16 +500,9 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 	 * data is still there, and after a use-after-free is detected, we
 	 * unprotect the page, so the data is still accessible.
	 */
-	if (!zombie && unlikely(slab_want_init_on_free(meta->cache)))
+	if (!zombie && unlikely(init))
 		memzero_explicit(addr, meta->size);
 
-	/* Mark the object as freed. */
-	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
-
-	raw_spin_unlock_irqrestore(&meta->lock, flags);
-
-	alloc_covered_add(meta->alloc_stack_hash, -1);
-
 	/* Protect to detect use-after-frees. */
 	kfence_protect((unsigned long)addr);
 
-- 
2.33.0.685.g46640cef36-goog