Date: Mon, 11 Oct 2021 09:13:52 +0200 (CEST)
From: Christoph Lameter
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka
Subject: Re: [RFC] Some questions and an idea on SLUB/SLAB
In-Reply-To: <20211009001903.GA3285@kvm.asia-northeast3-a.c.our-ratio-313919.internal>
References: <20211009001903.GA3285@kvm.asia-northeast3-a.c.our-ratio-313919.internal>

On Sat, 9 Oct 2021, Hyeonggon Yoo wrote:

> - Is there a reason that SLUB does not implement cache coloring?
>   it will help utilizing hardware cache.
>   Especially in block layer, they are literally *squeezing* its
>   performance now.

Well, as Matthew says: the high associativity of caches and the execution
of other code paths seem to make this no longer useful.

I am sure you can find a benchmark that shows some benefit. But please
realize that in real life the OS must perform actual work. That means
multiple other code paths are executed that also affect cache use and the
placement of data in cache lines.

> - In SLAB, do we really need to flush queues every few seconds?
>   (per cpu queue and shared queue). Flushing alien caches makes
>   sense, but flushing queues seems reducing it's fastpath.
>   But yeah, we need to reclaim memory. can we just defer this?

The queues are designed to track cache-hot objects (see the Bonwick
paper). After a while the cache lines will be reused for other purposes
and no longer reflect what is actually in the caches. That is why the
queues need to be expired periodically.

> - I don't like SLAB's per-node cache coloring, because L1 cache
>   isn't shared between cpus. For now, cpus in same node are sharing
>   its colour_next - but we can do better.

This differs based on the CPU architecture in use. SLAB has an idealized
model of how caches work and keeps objects cache hot based on that model.
In real life, the CPU architecture differs from what SLAB assumes about
how caches operate.

> what about splitting some per-cpu variables into kmem_cache_cpu
> like SLUB? I think cpu_cache, colour (and colour_next),
> alloc{hit,miss}, and free{hit,miss} can be per-cpu variables.

That would in turn increase memory use and potentially the cache footprint
of the hot paths.
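
For readers not familiar with the colouring scheme under discussion, here
is a minimal sketch of how SLAB-style colouring staggers the first object
of each new slab. It is illustrative only: the struct and function names
are made up, and the real logic (with colour_next kept per node) lives in
mm/slab.c.

/*
 * Illustrative sketch of SLAB-style cache colouring (names invented;
 * not the actual mm/slab.c code).  Each newly allocated slab places its
 * first object at a different offset, in multiples of the cache line
 * size, so that objects from successive slabs do not all compete for
 * the same cache sets.
 */
struct toy_cache {
	unsigned int colour;		/* number of distinct colours that fit */
	unsigned int colour_off;	/* colour step, typically the L1 line size */
	unsigned int colour_next;	/* colour to use for the next slab */
};

static unsigned int toy_next_colour_offset(struct toy_cache *c)
{
	unsigned int offset = c->colour_next;

	if (++c->colour_next >= c->colour)
		c->colour_next = 0;

	/* the first object starts this many bytes into the slab page */
	return offset * c->colour_off;
}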
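
And a rough, purely hypothetical sketch of the per-cpu split suggested in
the last question, just to make the memory-use trade-off concrete. The
field names are invented and this is not existing kernel code.

/*
 * Hypothetical layout for the suggested per-cpu split (not existing
 * kernel code).  Making colour_next and the statistics counters per-cpu
 * removes cross-CPU sharing, but every field below is then duplicated
 * once per possible CPU, which is the extra memory use and cache
 * footprint mentioned above.
 */
struct toy_cache_cpu {
	void **queue;			/* per-cpu array of cached objects */
	unsigned int colour_next;	/* per-cpu instead of per-node */
	unsigned long alloc_hit, alloc_miss;
	unsigned long free_hit, free_miss;
};

struct toy_kmem_cache {
	/* one toy_cache_cpu per possible CPU, e.g. allocated per-cpu */
	struct toy_cache_cpu *cpu_data;
	/* ... shared, mostly read-only management data ... */
	unsigned int object_size;
};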