From: Vincent Guittot
Date: Fri, 22 Jan 2021 09:03:26 +0100
Subject: Re: [RFC PATCH v0] mm/slub: Let number of online CPUs determine the slub page order
To: Vlastimil Babka
Cc: Christoph Lameter, Bharata B Rao, linux-kernel, linux-mm@kvack.org,
    David Rientjes, Joonsoo Kim, Andrew Morton, guro@fb.com, Shakeel Butt,
    Johannes Weiner, aneesh.kumar@linux.ibm.com, Jann Horn, Michal Hocko
References: <20201118082759.1413056-1-bharata@linux.ibm.com>
    <20210121053003.GB2587010@in.ibm.com>

On Thu, 21 Jan 2021 at 19:19, Vlastimil Babka wrote:
>
> On 1/21/21 11:01 AM, Christoph Lameter wrote:
> > On Thu, 21 Jan 2021, Bharata B Rao wrote:
> >
> >> > The problem is that calculate_order() is called a number of times
> >> > before secondary CPUs are booted, when num_online_cpus() returns 1
> >> > instead of 224. This makes the use of num_online_cpus() irrelevant
> >> > for those cases.
> >> >
> >> > After adding "slub_min_objects=36" to my command line, which equals
> >> > 4 * (fls(num_online_cpus()) + 1) with a correct num_online_cpus == 224,
> >> > the regression disappears:
> >> >
> >> > 9 iterations of hackbench -l 16000 -g 16: 3.201sec (+/- 0.90%)
>
> I'm surprised that hackbench is that sensitive to slab performance, anyway.
> It's supposed to be a scheduler benchmark? What exactly is going on?

From the hackbench description:
Hackbench is both a benchmark and a stress test for the Linux kernel
scheduler. Its main job is to create a specified number of pairs of
schedulable entities (either threads or traditional processes) which
communicate via either sockets or pipes, and to time how long it takes
for each pair to send data back and forth.

> >> Should we have switched to num_present_cpus() rather than
> >> num_online_cpus()? If so, the below patch should address the
> >> above problem.
> >
> > There is certainly an initcall after secondaries are booted where we
> > could redo calculate_order?
>
> We could do it even in a hotplug handler. But in practice that means
> making sure it's safe, i.e. all users of oo_order/oo_objects must handle
> the value changing.
>
> Consider e.g. init_cache_random_seq(), which uses oo_objects(s->oo) to
> allocate s->random_seq when cache s is created. Then shuffle_freelist()
> will use the current value of oo_objects(s->oo) to index s->random_seq
> for a newly allocated slab - what if the page order has increased
> meanwhile due to secondary booting or hotplug? Array overflow. That's
> why I just made the former sysfs handler for changing order read-only.
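To make the hazard concrete, here is a reduced userspace sketch; the
names and structure are hypothetical stand-ins, not the actual mm/slub.c
code. random_seq is sized from the object count at cache creation, but
indexed with whatever the current object count is when a freelist is
shuffled:

	/* Reduced sketch of the ordering hazard described above.
	 * Hypothetical names; this is not mm/slub.c itself. */
	#include <stdio.h>
	#include <stdlib.h>

	static unsigned int cached_objects;  /* oo_objects(s->oo) at creation */
	static unsigned int *random_seq;     /* cached_objects entries */

	static void create_cache(unsigned int objects)
	{
		cached_objects = objects;
		random_seq = calloc(objects, sizeof(*random_seq));
		for (unsigned int i = 0; i < objects; i++)
			random_seq[i] = i;  /* stand-in for a random permutation */
	}

	static void shuffle_freelist(unsigned int current_objects)
	{
		/* If the page order grew after create_cache() (secondary CPUs
		 * booting, hotplug), current_objects > cached_objects and the
		 * reads below run past the end of random_seq. */
		for (unsigned int i = 0; i < current_objects; i++)
			printf("slot %u\n", random_seq[i]);
	}

	int main(void)
	{
		create_cache(8);       /* order computed with 1 online CPU */
		shuffle_freelist(16);  /* order has grown: out-of-bounds reads */
		free(random_seq);
		return 0;
	}
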
> Things would be easier if we could trust *on all arches* either
>
> - num_present_cpus() to count what the hardware really physically has
> during boot, even if not yet onlined, at the time we init slab. This
> would still not handle later hotplug (probably mostly in a VM scenario;
> not that somebody would bring a bunch of actual new CPU boards to a
> running bare metal system?).
>
> - num_possible_cpus()/nr_cpu_ids not to be excessive (broken BIOS?) on
> systems where it's not really possible to plug more CPUs. In a VM
> scenario we could still have the opposite problem, where theoretically
> "anything is possible" but the virtual cpus are never added later.

On all the systems that I have tested, num_possible_cpus()/nr_cpu_ids
were correctly initialized:
- large arm64 ACPI system
- small arm64 DT based system
- VM on x86 system

> We could also start questioning the very assumption that the number of
> cpus should affect slab page size in the first place. Should it? After
> all, each CPU will have one or more slab pages privately cached, as we
> discuss in the other thread... So why make the slab pages also larger?
>
> > Or num_online_cpus needs to be up to date earlier. Why does this
> > issue not occur on x86? Does x86 have an up-to-date num_online_cpus
> > earlier?
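For reference, the heuristic under discussion boils down to something
like this - a simplified sketch of the min_objects input to
calculate_order(), not a verbatim copy of mm/slub.c, with fls_u()
standing in for the kernel's fls(). With only the boot CPU online it
yields 8; with all 224 CPUs online it yields 36, which is why
"slub_min_objects=36" restores the large-system page order:

	#include <stdio.h>

	/* Stand-in for the kernel's fls(): position of the highest set bit. */
	static unsigned int fls_u(unsigned int x)
	{
		unsigned int r = 0;

		while (x) {
			x >>= 1;
			r++;
		}
		return r;
	}

	/* Simplified sketch of how slub derives min_objects from the CPU
	 * count when slub_min_objects= is not set on the command line. */
	static unsigned int min_objects_for(unsigned int cpus)
	{
		return 4 * (fls_u(cpus) + 1);
	}

	int main(void)
	{
		/* prints "8 36": boot CPU only vs. all 224 CPUs online */
		printf("%u %u\n", min_objects_for(1), min_objects_for(224));
		return 0;
	}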