Date: Thu, 11 May 2023 11:30:09 +0100
From: Jonathan Cameron
To: Huang Ying
Cc: Arjan Van De Ven, Andrew Morton, Mel Gorman, Vlastimil Babka,
 David Hildenbrand, Johannes Weiner, Dave Hansen, Michal Hocko,
 Pavel Tatashin, Matthew Wilcox
Subject: Re: [RFC 0/6] mm: improve page allocator scalability via splitting zones
Message-ID: <20230511113009.00004821@Huawei.com>
In-Reply-To: <20230511065607.37407-1-ying.huang@intel.com>
References: <20230511065607.37407-1-ying.huang@intel.com>
Organization: Huawei Technologies Research and Development (UK) Ltd.
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 11 May 2023 14:56:01 +0800 Huang Ying wrote:

> The patchset is based on upstream v6.3.
>
> More and more cores are put in one physical CPU (usually also one
> NUMA node). In 2023, a high-end server CPU has 56, 64, or more cores,
> and even more cores per physical CPU are planned for future CPUs. In
> most cases, all cores in one physical CPU contend for page allocation
> on a single zone, which causes heavy zone lock contention in some
> workloads. The situation will only get worse in the future.
>
> For example, on a 2-socket Intel server machine with 224 logical
> CPUs, if the kernel is built with `make -j224`, the zone lock
> contention cycles% can reach up to about 12.7%.
>
> To improve the scalability of page allocation, this series generally
> creates one zone instance for roughly every 256 GB of memory of a
> zone type. That is, a large zone type is split into multiple zone
> instances. Different logical CPUs then prefer different zone
> instances based on their logical CPU number, so the number of logical
> CPUs contending on any one zone is reduced and scalability improves.
>
> With the series, the zone lock contention cycles% drops to less than
> 1.6% in the above kbuild test case when 4 zone instances are created
> for ZONE_NORMAL.
>
> We also tested the series with will-it-scale/page_fault1 using 16
> processes. With the optimization, the benchmark score increases by up
> to 18.2% and the zone lock contention reduces from 13.01% to 0.56%.
>
> Another option for creating multiple zone instances of a zone type
> would be to base the number of instances on the total number of
> logical CPUs. We chose memory size because it is easier to implement,
> more cores usually come with more memory, and on systems with more
> memory the performance requirements on the page allocator are usually
> higher.
>
> Best Regards,
> Huang, Ying
>

Hi,

Interesting idea. I'm curious, though, whether this can suffer from
imbalance problems: with uneven allocations from particular CPUs,
could you end up with most page faults landing in one zone instance
and the original contention problem coming back? Or am I missing some
mechanism that corrects such an imbalance? (A toy sketch of the
CPU-to-instance mapping I have in mind is below.)

Jonathan
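
The sketch below is only my reading of the cover letter, not code taken
from the patches: the helper names (nr_zone_instances(),
preferred_zone_instance()), the rounding, and the userspace framing are
all my own assumptions for illustration. It just models "one instance
per ~256 GB of a zone type" plus "prefer an instance by logical CPU
number", which is the static mapping that makes me wonder about
imbalance.

/*
 * Toy model (not kernel code) of the zone-instance selection described
 * in the cover letter.  All names and details here are assumptions.
 */
#include <stdio.h>

#define INSTANCE_SPAN_GB 256ULL	/* ~256 GB of a zone type per instance */

/* One instance per 256 GB, rounded up, with at least one instance. */
static unsigned int nr_zone_instances(unsigned long long zone_size_gb)
{
	unsigned int n = (unsigned int)((zone_size_gb + INSTANCE_SPAN_GB - 1) /
					INSTANCE_SPAN_GB);

	return n ? n : 1;
}

/* Spread CPUs across instances by logical CPU number. */
static unsigned int preferred_zone_instance(unsigned int cpu,
					    unsigned int nr_instances)
{
	return cpu % nr_instances;
}

int main(void)
{
	unsigned long long zone_size_gb = 1024;	/* e.g. ~1 TB of ZONE_NORMAL */
	unsigned int nr = nr_zone_instances(zone_size_gb);
	unsigned int cpu;

	for (cpu = 0; cpu < 8; cpu++)
		printf("cpu %u -> zone instance %u of %u\n",
		       cpu, preferred_zone_instance(cpu, nr), nr);

	return 0;
}

With a fixed mapping like this, whichever instances the allocation-heavy
CPUs map to would take most of the traffic while the others stay cold,
hence the question above about how (or whether) that imbalance gets
corrected.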