From: Marco Elver
Date: Thu, 23 Sep 2021 15:44:10 +0200
Subject: Re: [PATCH v3 4/5] kfence: limit currently covered allocations when pool nearly full
To: Alexander Potapenko
Cc: Dmitry Vyukov, Andrew Morton, Jann Horn, Aleksandr Nogikh,
 Taras Madan, LKML, Linux Memory Management List, kasan-dev

On Thu, 23 Sept 2021 at 15:24, Alexander Potapenko wrote:
>
> On Thu, Sep 23, 2021 at 1:19 PM Dmitry Vyukov wrote:
> >
> > On Thu, 23 Sept 2021 at 12:48, Marco Elver wrote:
> > >
> > > One of KFENCE's main design principles is that with increasing uptime,
> > > allocation coverage increases sufficiently to detect previously
> > > undetected bugs.
> > >
> > > We have observed that frequent long-lived allocations of the same
> > > source (e.g. pagecache) tend to permanently fill up the KFENCE pool
> > > with increasing system uptime, thus breaking the above requirement.
> > > The workaround thus far had been increasing the sample interval and/or
> > > increasing the KFENCE pool size, but neither is a reliable solution.
> > >
> > > To ensure diverse coverage of allocations, limit currently covered
> > > allocations of the same source once pool utilization reaches 75%
> > > (configurable via `kfence.skip_covered_thresh`) or above. The effect is
> > > retaining reasonable allocation coverage when the pool is close to full.
> > >
> > > A side-effect is that this also limits frequent long-lived allocations
> > > of the same source filling up the pool permanently.
> > >
> > > Uniqueness of an allocation for coverage purposes is based on its
> > > (partial) allocation stack trace (the source). A Counting Bloom filter
> > > is used to check if an allocation is covered; if the allocation is
> > > currently covered, the allocation is skipped by KFENCE.
> > >
> > > Testing was done using:
> > >
> > >   (a) a synthetic workload that performs frequent long-lived
> > >       allocations (default config values; sample_interval=1;
> > >       num_objects=63), and
> > >
> > >   (b) normal desktop workloads on an otherwise idle machine where
> > >       the problem was first reported after a few days of uptime
> > >       (default config values).
> > >
> > > In both test cases the sampled allocation rate no longer drops to zero
> > > at any point. In the case of (b) we observe (after 2 days uptime)
> > > 15% unique allocations in the pool, 77% pool utilization, with 20%
> > > "skipped allocations (covered)".
> > >
> > > Signed-off-by: Marco Elver
> >
> > Reviewed-by: Dmitry Vyukov

> Acked-by: Alexander Potapenko

Thank you both!

> > > ---
> > > v3:
> > > * Remove unneeded !alloc_stack_hash checks.
> > > * Remove unneeded meta->alloc_stack_hash = 0 in kfence_guarded_free().
> > >
> > > v2:
> > > * Switch to Counting Bloom filter to guarantee currently covered
> > >   allocations being skipped.
> > > * Use a module param for the skip_covered threshold.
> > > * Use the KFENCE pool address as hash entropy.
> > > * Use filter_irq_stacks().
> > > ---
> > >  mm/kfence/core.c   | 103 ++++++++++++++++++++++++++++++++++++++++++++-
> > >  mm/kfence/kfence.h |   2 +
> > >  2 files changed, 103 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> > > index db01814f8ff0..58a0f6f1acc5 100644
> > > --- a/mm/kfence/core.c
> > > +++ b/mm/kfence/core.c
> > > @@ -11,11 +11,13 @@
> > >  #include <linux/bug.h>
> > >  #include <linux/debugfs.h>
> > >  #include <linux/irq_work.h>
> > > +#include <linux/jhash.h>
> > >  #include <linux/kcsan-checks.h>
> > >  #include <linux/kfence.h>
> > >  #include <linux/kmemleak.h>
> > >  #include <linux/list.h>
> > >  #include <linux/lockdep.h>
> > > +#include <linux/log2.h>
> > >  #include <linux/memblock.h>
> > >  #include <linux/moduleparam.h>
> > >  #include <linux/random.h>
> > > @@ -82,6 +84,10 @@ static const struct kernel_param_ops sample_interval_param_ops = {
> > >  };
> > >  module_param_cb(sample_interval, &sample_interval_param_ops, &kfence_sample_interval, 0600);
> > >
> > > +/* Pool usage% threshold when currently covered allocations are skipped. */
> > > +static unsigned long kfence_skip_covered_thresh __read_mostly = 75;
> > > +module_param_named(skip_covered_thresh, kfence_skip_covered_thresh, ulong, 0644);
> > > +
> > >  /* The pool of pages used for guard pages and objects. */
> > >  char *__kfence_pool __ro_after_init;
> > >  EXPORT_SYMBOL(__kfence_pool); /* Export for test modules. */
> > > @@ -105,6 +111,25 @@ DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);
> > >  /* Gates the allocation, ensuring only one succeeds in a given period. */
> > >  atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
> > >
> > > +/*
> > > + * A Counting Bloom filter of allocation coverage: limits currently covered
> > > + * allocations of the same source filling up the pool.
> > > + *
> > > + * Assuming a range of 15%-85% unique allocations in the pool at any point in
>
> Where do these 85% come from?

An imaginary worst case, just to illustrate the range of the false
positive probabilities (in the case of 85% it'd be 0.33). I expect
unique allocations to be around 10-15% on a freshly booted system (in
my real-system experiment it stayed below 15%), but other workloads may
produce different unique-allocation percentages.

> > > + * time, the below parameters provide a probability of 0.02-0.33 for false
> > > + * positive hits respectively:
> > > + *
> > > + *	P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)))^HNUM
> > > + */
> > > +#define ALLOC_COVERED_HNUM	2
> > > +#define ALLOC_COVERED_SIZE	(1 << (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2))
> > > +#define ALLOC_COVERED_HNEXT(h)	(1664525 * (h) + 1013904223)
>
> Unless we are planning to change these primes, can you use
> next_pseudo_random32() instead?

I'm worried about next_pseudo_random32() changing its implementation so
that it is no longer deterministic, or changing in other ways that
would break our use case. In this case we want pseudorandomness, but
we're not implementing a PRNG.
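By the way, the 0.02-0.33 range quoted in the comment is easy to
reproduce. Below is a quick standalone check of the formula -- an
illustrative sketch only, assuming the default
CONFIG_KFENCE_NUM_OBJECTS=255, which gives
ALLOC_COVERED_SIZE = 1 << (const_ilog2(255) + 2) = 512:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
          /* Mirrors ALLOC_COVERED_HNUM and ALLOC_COVERED_SIZE above. */
          const double hnum = 2.0, size = 512.0, objects = 255.0;
          /* Assumed fractions of unique allocations in the pool. */
          const double unique[] = { 0.15, 0.85 };
          int i;

          for (i = 0; i < 2; i++) {
                  double traces = unique[i] * objects;
                  double p = pow(1.0 - exp(-hnum * traces / size), hnum);
                  printf("%2.0f%% unique -> P = %.2f\n", 100 * unique[i], p);
          }
          return 0;
  }

Built with -lm, this prints P = 0.02 for 15% unique and P = 0.33 for
85%, matching the numbers in the comment.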
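To make the determinism concern concrete: each allocation's stack-trace
hash indexes ALLOC_COVERED_HNUM counters, with the follow-up index
derived via ALLOC_COVERED_HNEXT(). Roughly like the below -- a sketch
only, since the quoted hunks stop before the filter helpers, so the
names and exact shape here are illustrative:

  static atomic_t alloc_covered[ALLOC_COVERED_SIZE];

  /* Add (or, with val < 0, remove) an allocation's coverage counters. */
  static void alloc_covered_add(u32 alloc_stack_hash, int val)
  {
          int i;

          for (i = 0; i < ALLOC_COVERED_HNUM; i++) {
                  atomic_add(val, &alloc_covered[alloc_stack_hash % ALLOC_COVERED_SIZE]);
                  alloc_stack_hash = ALLOC_COVERED_HNEXT(alloc_stack_hash);
          }
  }

  /* An allocation is covered only if all its counters are non-zero. */
  static bool alloc_covered_contains(u32 alloc_stack_hash)
  {
          int i;

          for (i = 0; i < ALLOC_COVERED_HNUM; i++) {
                  if (!atomic_read(&alloc_covered[alloc_stack_hash % ALLOC_COVERED_SIZE]))
                          return false;
                  alloc_stack_hash = ALLOC_COVERED_HNEXT(alloc_stack_hash);
          }
          return true;
  }

If ALLOC_COVERED_HNEXT() were not deterministic across calls, the
counters decremented on free would not be the same ones incremented at
allocation time, silently corrupting the filter.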
Open-coding the constants (given they are from "Numerical Recipes") is
more reliable and doesn't introduce unwanted reliance on
next_pseudo_random32()'s behaviour.

Thanks,
-- Marco