From: Alexander Potapenko
Date: Thu, 30 Sep 2021 17:39:44 +0200
Subject: Re: [PATCH] kfence: shorten critical sections of alloc/free
To: Marco Elver
Cc: Andrew Morton, Dmitry Vyukov, Jann Horn, LKML, Linux Memory Management List, kasan-dev
References: <20210930153706.2105471-1-elver@google.com>
In-Reply-To: <20210930153706.2105471-1-elver@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Sep 30, 2021 at 5:37 PM Marco Elver wrote:
>
> Initializing memory and setting/checking the canary bytes is relatively
> expensive, and doing so in the meta->lock critical sections extends the
> duration with preemption and interrupts disabled unnecessarily.
>
> Any reads to meta->addr and meta->size in kfence_guarded_alloc() and
> kfence_guarded_free() don't require locking meta->lock as long as the
> object is removed from the freelist: only kfence_guarded_alloc() sets
> meta->addr and meta->size after removing it from the freelist, which
> requires a preceding kfence_guarded_free() returning it to the list or
> the initial state.
>
> Therefore move reads to meta->addr and meta->size, including expensive
> memory initialization using them, out of meta->lock critical sections.
>
> Signed-off-by: Marco Elver

Acked-by: Alexander Potapenko

> ---
>  mm/kfence/core.c | 38 +++++++++++++++++++++----------------
>  1 file changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index b61ef93d9f98..802905b1c89b 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -309,12 +309,19 @@ static inline bool set_canary_byte(u8 *addr)
>  /* Check canary byte at @addr. */
>  static inline bool check_canary_byte(u8 *addr)
>  {
> +        struct kfence_metadata *meta;
> +        unsigned long flags;
> +
>          if (likely(*addr == KFENCE_CANARY_PATTERN(addr)))
>                  return true;
>
>          atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
> -        kfence_report_error((unsigned long)addr, false, NULL, addr_to_metadata((unsigned long)addr),
> -                            KFENCE_ERROR_CORRUPTION);
> +
> +        meta = addr_to_metadata((unsigned long)addr);
> +        raw_spin_lock_irqsave(&meta->lock, flags);
> +        kfence_report_error((unsigned long)addr, false, NULL, meta, KFENCE_ERROR_CORRUPTION);
> +        raw_spin_unlock_irqrestore(&meta->lock, flags);
> +
>          return false;
>  }
>
> @@ -324,8 +331,6 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
>          const unsigned long pageaddr = ALIGN_DOWN(meta->addr, PAGE_SIZE);
>          unsigned long addr;
>
> -        lockdep_assert_held(&meta->lock);
> -
>          /*
>           * We'll iterate over each canary byte per-side until fn() returns
>           * false. However, we'll still iterate over the canary bytes to the
> @@ -414,8 +419,9 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>          WRITE_ONCE(meta->cache, cache);
>          meta->size = size;
>          meta->alloc_stack_hash = alloc_stack_hash;
> +        raw_spin_unlock_irqrestore(&meta->lock, flags);
>
> -        for_each_canary(meta, set_canary_byte);
> +        alloc_covered_add(alloc_stack_hash, 1);
>
>          /* Set required struct page fields. */
>          page = virt_to_page(meta->addr);
> @@ -425,11 +431,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
>          if (IS_ENABLED(CONFIG_SLAB))
>                  page->s_mem = addr;
>
> -        raw_spin_unlock_irqrestore(&meta->lock, flags);
> -
> -        alloc_covered_add(alloc_stack_hash, 1);
> -
>          /* Memory initialization. */
> +        for_each_canary(meta, set_canary_byte);
>
>          /*
>           * We check slab_want_init_on_alloc() ourselves, rather than letting
> @@ -454,6 +457,7 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
>  {
>          struct kcsan_scoped_access assert_page_exclusive;
>          unsigned long flags;
> +        bool init;
>
>          raw_spin_lock_irqsave(&meta->lock, flags);
>
> @@ -481,6 +485,13 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
>                  meta->unprotected_page = 0;
>          }
>
> +        /* Mark the object as freed. */
> +        metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
> +        init = slab_want_init_on_free(meta->cache);
> +        raw_spin_unlock_irqrestore(&meta->lock, flags);
> +
> +        alloc_covered_add(meta->alloc_stack_hash, -1);
> +
>          /* Check canary bytes for memory corruption. */
>          for_each_canary(meta, check_canary_byte);
>
> @@ -489,16 +500,9 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
>           * data is still there, and after a use-after-free is detected, we
>           * unprotect the page, so the data is still accessible.
>           */
> -        if (!zombie && unlikely(slab_want_init_on_free(meta->cache)))
> +        if (!zombie && unlikely(init))
>                  memzero_explicit(addr, meta->size);
>
> -        /* Mark the object as freed. */
> -        metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
> -
> -        raw_spin_unlock_irqrestore(&meta->lock, flags);
> -
> -        alloc_covered_add(meta->alloc_stack_hash, -1);
> -
>          /* Protect to detect use-after-frees. */
>          kfence_protect((unsigned long)addr);
>
> --
> 2.33.0.685.g46640cef36-goog
>

-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
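[Editor's note] The pattern the commit message describes — snapshot the fields you need while holding the lock, drop the lock, then do the expensive work on the snapshot — can be sketched outside the kernel with POSIX threads. This is a minimal illustration only, not kernel code: `struct meta`, `guarded_alloc_init`, and the field names are hypothetical stand-ins for the KFENCE metadata, and a pthread mutex stands in for `raw_spin_lock_irqsave`.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for struct kfence_metadata: just the fields
 * the locking pattern needs. */
struct meta {
        pthread_mutex_t lock;
        uintptr_t addr;
        size_t size;
        char heap[128];         /* pretend guarded-object storage */
};

/* Set addr/size under the lock, snapshot them into locals, unlock,
 * then run the expensive memory initialization on the snapshot.
 * This mirrors the patch: the init no longer extends the critical
 * section, and the snapshot is stable because only a matching
 * alloc/free transition ever rewrites addr/size. */
static void guarded_alloc_init(struct meta *m, size_t size)
{
        pthread_mutex_lock(&m->lock);
        m->addr = (uintptr_t)m->heap;   /* written while locked */
        m->size = size;
        uintptr_t addr = m->addr;       /* snapshot for use after unlock */
        size_t sz = m->size;
        pthread_mutex_unlock(&m->lock);

        /* Expensive work now runs outside the critical section. */
        memset((void *)addr, 0, sz);
}
```

The invariant carrying the sketch is the one the commit message states: between the alloc-side write and the free-side write nothing else touches `addr`/`size`, so reading the snapshot unlocked is race-free.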