From: Vlastimil Babka <vbabka@suse.cz>
To: vbabka@suse.cz
Cc: Catalin.Marinas@arm.com, akpm@linux-foundation.org,
    aneesh.kumar@linux.ibm.com, bharata@linux.ibm.com, cl@linux.com,
    guro@fb.com, hannes@cmpxchg.org, iamjoonsoo.kim@lge.com,
    jannh@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    mhocko@kernel.org, rientjes@google.com, shakeelb@google.com,
    vincent.guittot@linaro.org, will@kernel.org, Mel Gorman,
    stable@vger.kernel.org
Subject: [PATCH] mm, slub: better heuristic for number of cpus when calculating slab order
Date: Mon, 8 Feb 2021 14:41:08 +0100
Message-Id: <20210208134108.22286-1-vbabka@suse.cz>

When creating a new kmem cache, SLUB determines how large the slab pages
will be, based on a number of inputs, including the number of CPUs in the
system. Larger slab pages mean that more objects can be allocated/freed
from per-cpu slabs before accessing shared structures, but also that more
memory can potentially be wasted due to low slab usage and fragmentation.
The rough idea behind using the number of CPUs is that larger systems are
more likely to benefit from reduced contention, and should also have
enough memory to spare.
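To make the sizing concrete, here is a minimal userspace sketch of the
formula that calculate_order() applies (see the diff below); fls() is
reimplemented with a GCC builtin since the kernel helper is not available
in userspace, and the CPU counts are example values:

#include <stdio.h>

/* Userspace stand-in for the kernel's fls(): 1-based position of the
 * most significant set bit; fls(0) == 0. */
static int fls(unsigned int x)
{
        return x ? 32 - __builtin_clz(x) : 0;
}

int main(void)
{
        /* Example CPU counts; 224 matches the arm64 server quoted below. */
        unsigned int cpus[] = { 1, 8, 224 };

        for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++) {
                /* The heuristic from calculate_order(): min_objects grows
                 * with the log2 of the CPU count. */
                unsigned int min_objects = 4 * (fls(cpus[i]) + 1);

                /* Prints 8, 20 and 36: a cache created while only one CPU
                 * is counted ends up sized for a much smaller machine. */
                printf("%3u cpus -> min_objects = %u\n", cpus[i], min_objects);
        }
        return 0;
}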
The number of CPUs used to be determined as nr_cpu_ids, which is the
number of possible CPUs, but on some systems many of those will never be
onlined. Thus commit 045ab8c9487b ("mm/slub: let number of online CPUs
determine the slub page order") changed it to num_online_cpus(). However,
for kmem caches created early, before CPUs are onlined, this may lead to
permanently low slab page sizes.

Vincent reports a regression [1] of hackbench on arm64 systems:

> I'm facing significant performances regression on a large arm64 server
> system (224 CPUs). Regressions is also present on small arm64 system
> (8 CPUs) but in a far smaller order of magnitude
>
> On 224 CPUs system : 9 iterations of hackbench -l 16000 -g 16
> v5.11-rc4 : 9.135sec (+/- 0.45%)
> v5.11-rc4 + revert this patch: 3.173sec (+/- 0.48%)
> v5.10: 3.136sec (+/- 0.40%)

Mel reports a regression [2] of hackbench on x86_64, with lockstat
suggesting page allocator contention:

> i.e. the patch incurs a 7% to 32% performance penalty. This bisected
> cleanly yesterday when I was looking for the regression and then found
> the thread.
>
> Numerous caches change size. For example, kmalloc-512 goes from order-0
> (vanilla) to order-2 with the revert.
>
> So mostly this is down to the number of times SLUB calls into the page
> allocator which only caches order-0 pages on a per-cpu basis.

Clearly num_online_cpus() doesn't work too early in bootup. We could
change the order dynamically in a memory hotplug callback, but runtime
order changes for existing kmem caches have already been shown to be
dangerous, and were removed in commit 32a6f409b693 ("mm, slub: remove
runtime allocation order changes"). That could be resurrected in a safe
manner with some effort, but to fix the regression we need something
simpler.

We could use num_present_cpus(), which should be the number of physically
present CPUs even before they are onlined. That would work for PowerPC
[3], which triggered the original commit, but it still doesn't work on
arm64 [4], as explained in [5].

So this patch tries to determine the best available value without
arch-specific knowledge:

- num_present_cpus() if the number is larger than 1, as that means the
  arch is likely setting it properly
- nr_cpu_ids otherwise

This should fix the reported regressions while also keeping the effect of
045ab8c9487b for PowerPC systems. It's possible there are configurations
where num_present_cpus() is 1 during boot while nr_cpu_ids is at the same
time bloated, so these (if they exist) would keep the large orders based
on nr_cpu_ids, as they did before 045ab8c9487b.
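As a rough illustration of the compromise (not the kernel code itself;
the present/possible counts below are hypothetical early-boot snapshots,
chosen only to mirror the scenarios described above), the selection can
be modeled in userspace like this:

#include <stdio.h>

/* Userspace stand-in for the kernel's fls(). */
static int fls(unsigned int x)
{
        return x ? 32 - __builtin_clz(x) : 0;
}

/* Model of the patch's choice: trust num_present_cpus() only when it is
 * greater than 1, otherwise fall back to nr_cpu_ids (possible CPUs). */
static unsigned int pick_nr_cpus(unsigned int present, unsigned int possible)
{
        return present > 1 ? present : possible;
}

int main(void)
{
        /* Hypothetical values, for illustration only. */
        struct scenario {
                const char *desc;
                unsigned int present, possible;
        } cases[] = {
                { "arm64, present not set up yet",   1,  224  },
                { "ppc guest, present set up early", 16, 1024 },
        };

        for (unsigned int i = 0; i < 2; i++) {
                unsigned int n = pick_nr_cpus(cases[i].present,
                                              cases[i].possible);

                /* The arm64 case keeps the large order (min_objects = 36);
                 * the ppc guest keeps the smaller sizing from 045ab8c9487b
                 * (min_objects = 24 rather than 48 from nr_cpu_ids). */
                printf("%-32s nr_cpus=%4u min_objects=%u\n",
                       cases[i].desc, n, 4 * (fls(n) + 1));
        }
        return 0;
}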
[1] https://lore.kernel.org/linux-mm/CAKfTPtA_JgMf_+zdFbcb_V9rM7JBWNPjAz9irgwFj7Rou=xzZg@mail.gmail.com/
[2] https://lore.kernel.org/linux-mm/20210128134512.GF3592@techsingularity.net/
[3] https://lore.kernel.org/linux-mm/20210123051607.GC2587010@in.ibm.com/
[4] https://lore.kernel.org/linux-mm/CAKfTPtAjyVmS5VYvU6DBxg4-JEo5bdmWbngf-03YsY18cmWv_g@mail.gmail.com/
[5] https://lore.kernel.org/linux-mm/20210126230305.GD30941@willie-the-truck/

Fixes: 045ab8c9487b ("mm/slub: let number of online CPUs determine the slub page order")
Reported-by: Vincent Guittot
Reported-by: Mel Gorman
Cc: <stable@vger.kernel.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
OK, this is a 5.11 regression, so we should try to fix it by 5.12. I've
also Cc'd stable for that reason, although it's not a crash fix. We can
still try later to replace this with a safe order update in hotplug
callbacks, but that's infeasible for 5.12.

 mm/slub.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 176b1cb0d006..8fc9190e6cb3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3454,6 +3454,7 @@ static inline int calculate_order(unsigned int size)
 	unsigned int order;
 	unsigned int min_objects;
 	unsigned int max_objects;
+	unsigned int nr_cpus;
 
 	/*
 	 * Attempt to find best configuration for a slab. This
@@ -3464,8 +3465,21 @@ static inline int calculate_order(unsigned int size)
 	 * we reduce the minimum objects required in a slab.
 	 */
 	min_objects = slub_min_objects;
-	if (!min_objects)
-		min_objects = 4 * (fls(num_online_cpus()) + 1);
+	if (!min_objects) {
+		/*
+		 * Some architectures will only update present cpus when
+		 * onlining them, so don't trust the number if it's just 1. But
+		 * we also don't want to use nr_cpu_ids always, as on some other
+		 * architectures, there can be many possible cpus, but never
+		 * onlined. Here we compromise between trying to avoid too high
+		 * order on systems that appear larger than they are, and too
+		 * low order on systems that appear smaller than they are.
+		 */
+		nr_cpus = num_present_cpus();
+		if (nr_cpus <= 1)
+			nr_cpus = nr_cpu_ids;
+		min_objects = 4 * (fls(nr_cpus) + 1);
+	}
 	max_objects = order_objects(slub_max_order, size);
 	min_objects = min(min_objects, max_objects);
-- 
2.30.0