From: Marco Elver
Date: Fri, 11 Sep 2020 15:33:44 +0200
Subject: Re: [PATCH RFC 00/10] KFENCE: A low-overhead sampling-based memory safety error detector
To: Dmitry Vyukov
Cc: Vlastimil Babka, Dave Hansen, Alexander Potapenko, Andrew Morton,
    Catalin Marinas, Christoph Lameter, David Rientjes, Joonsoo Kim,
    Mark Rutland, Pekka Enberg, "H. Peter Anvin", "Paul E. McKenney",
    Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski, Borislav Petkov,
    Eric Dumazet, Greg Kroah-Hartman, Ingo Molnar, Jann Horn,
    Jonathan Corbet, Kees Cook, Peter Zijlstra, Qian Cai,
    Thomas Gleixner, Will Deacon, the arch/x86 maintainers,
    "open list:DOCUMENTATION", LKML, kasan-dev, Linux ARM, Linux-MM
References: <20200907134055.2878499-1-elver@google.com>
    <20200908153102.GB61807@elver.google.com>
    <20200908155631.GC61807@elver.google.com>

On Fri, 11 Sep 2020 at 15:10, Dmitry Vyukov wrote:
> On Fri, Sep 11, 2020 at 2:03 PM Marco Elver wrote:
> > On Fri, 11 Sep 2020 at 09:36, Dmitry Vyukov wrote:
[...]
> > > By "reasonable" I mean: will the pool last long enough to still
> > > sample something after hours/days? Have you tried any experiments
> > > with some workload (both short-lived processes and long-lived
> > > processes/namespaces), capturing the state of the pool? It can make
> > > sense to do this to better understand the dynamics. I suspect that
> > > the rate may need to be orders of magnitude lower.
> >
> > Yes, the current default sample interval is a lower bound, and is
> > also a reasonable default for testing. I expect real deployments to
> > use much higher sample intervals (lower rate).
> >
> > So here's some data (with CONFIG_KFENCE_NUM_OBJECTS=1000, so that
> > the number of allocated KFENCE objects isn't artificially capped):
> >
> > -- With a mostly vanilla config + KFENCE (sample interval 100 ms),
> > after ~40 min uptime (only boot, then idle) I see ~60 KFENCE objects
> > (total allocations >600). Those aren't always the same objects, with
> > roughly ~2 allocations/frees per second.
> >
> > -- Then, running the sysbench I/O benchmark, allocated KFENCE objects
> > peak at 82. During the benchmark, allocations/frees per second are
> > closer to 10-15. After the benchmark, the allocated KFENCE objects
> > remain at 82, and allocations/frees per second fall back to ~2.
> >
> > -- For the same system, changing the sample interval to 1 ms
> > (echo 1 > /sys/module/kfence/parameters/sample_interval) and
> > re-running the benchmark gives me: allocated KFENCE objects peak at
> > exactly 500, with ~500 allocations/frees per second. After that,
> > allocated KFENCE objects dropped a little to 496, and
> > allocations/frees per second fell back to ~2.
> >
> > -- The long-lived objects are due to caches, and just running
> > 'echo 1 > /proc/sys/vm/drop_caches' reduced allocated KFENCE objects
> > back to 45.
>
> Interesting. What type of caches are these? If there is some type of
> cache that holds a particularly large number of sampled objects, we
> could potentially change that cache to release sampled objects eagerly.

The two major users of KFENCE objects for that workload are
'buffer_head' and 'bio-0'. If we want to deal with those, I guess there
are two options:

1. More complex, but more precise: make their users check
   is_kfence_address() and release those buffers earlier.

2. Simpler, generic solution: make KFENCE stop returning allocations
   for non-kmalloc_caches memcaches once more than ~90% of the pool is
   exhausted. This assumes that creators of long-lived objects usually
   set up their own memcaches. (A rough sketch of this gating follows
   below.)

I'm currently inclined to go for (2).

Thanks,
-- Marco
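
For illustration, a minimal sketch of the gating described in option
(2), assuming KFENCE exposes some measure of pool usage. The helpers
kfence_pool_usage_percent() and cache_is_kmalloc() are hypothetical
placeholders, not actual KFENCE or slab API; the real check would sit
in KFENCE's allocation path alongside its existing sampling logic, and
the 90% cut-off would need experimentation.

#include <linux/slab.h>		/* struct kmem_cache */

/*
 * Sketch only: decide whether a KFENCE allocation should be handed out
 * for cache 's', given how full the fixed object pool already is.
 */
static bool kfence_should_sample_cache(struct kmem_cache *s)
{
	/* Hypothetical: percentage of the KFENCE pool currently in use. */
	unsigned long used = kfence_pool_usage_percent();

	/* Plenty of pool left: sample allocations from any cache. */
	if (used <= 90)
		return true;

	/*
	 * Pool nearly exhausted: only sample kmalloc caches, on the
	 * assumption that long-lived objects (buffer_head, bio, ...)
	 * come from dedicated caches set up by their creators.
	 */
	return cache_is_kmalloc(s);	/* hypothetical check */
}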