From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
    Mel Gorman, Vlastimil Babka, David Hildenbrand, Johannes Weiner,
    Dave Hansen, Michal Hocko, Pavel Tatashin, Matthew Wilcox,
    Christoph Lameter
Subject: Re: [PATCH 00/10] mm: PCP high auto-tuning
Date: Thu, 21 Sep 2023 21:32:35 +0800
Message-ID: <87leczwt1o.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <20230920094118.8b8f739125c6aede17c627e0@linux-foundation.org>
  (Andrew Morton's message of "Wed, 20 Sep 2023 09:41:18 -0700")
References: <20230920061856.257597-1-ying.huang@intel.com>
  <20230920094118.8b8f739125c6aede17c627e0@linux-foundation.org>
Hi, Andrew,

Andrew Morton writes:

> On Wed, 20 Sep 2023 14:18:46 +0800 Huang Ying wrote:
>
>> The page allocation performance requirements of different workloads
>> are often different.  So, we need to tune the PCP (Per-CPU Pageset)
>> high on each CPU automatically to optimize the page allocation
>> performance.
>
> Some of the performance changes here are downright scary.
>
> I've never been very sure that percpu pages was very beneficial (and
> hey, I invented the thing back in the Mesozoic era).  But these numbers
> make me think it's very important and we should have been paying more
> attention.
>
>> The list of patches in series is as follows,
>>
>> 1 mm, pcp: avoid to drain PCP when process exit
>> 2 cacheinfo: calculate per-CPU data cache size
>> 3 mm, pcp: reduce lock contention for draining high-order pages
>> 4 mm: restrict the pcp batch scale factor to avoid too long latency
>> 5 mm, page_alloc: scale the number of pages that are batch allocated
>> 6 mm: add framework for PCP high auto-tuning
>> 7 mm: tune PCP high automatically
>> 8 mm, pcp: decrease PCP high if free pages < high watermark
>> 9 mm, pcp: avoid to reduce PCP high unnecessarily
>> 10 mm, pcp: reduce detecting time of consecutive high order page freeing
>>
>> Patch 1/2/3 optimize the PCP draining for consecutive high-order pages
>> freeing.
>>
>> Patch 4/5 optimize batch freeing and allocating.
>>
>> Patch 6/7/8/9 implement and optimize a PCP high auto-tuning method.
>>
>> Patch 10 optimize the PCP draining for consecutive high order page
>> freeing based on PCP high auto-tuning.
>>
>> The test results for patches with performance impact are as follows,
>>
>> kbuild
>> ======
>>
>> On a 2-socket Intel server with 224 logical CPU, we tested kbuild on
>> one socket with `make -j 112`.
>>
>>              build time   zone lock%   free_high   alloc_zone
>>              ----------   ----------   ---------   ----------
>> base              100.0         43.6       100.0        100.0
>> patch1             96.6         40.3        49.2         95.2
>> patch3             96.4         40.5        11.3         95.1
>> patch5             96.1         37.9        13.3         96.8
>> patch7             86.4          9.8         6.2         22.0
>> patch9             85.9          9.4         4.8         16.3
>> patch10            87.7         12.6        29.0         32.3
>
> You're seriously saying that kbuild got 12% faster?
>
> I see that [07/10] (autotuning) alone sped up kbuild by 10%?

Thank you very much for questioning this!  I double-checked my test
results and configuration and found that I had used an uncommon
configuration.  The description of the test should have been:

  On a 2-socket Intel server with 224 logical CPUs, we tested kbuild
  with `numactl -m 1 -- make -j 112`.

This makes processes running on socket 0 use the normal zone of
socket 1.  The remote accesses to zone->lock cause heavy lock
contention.  I apologize for any confusion caused by the above test
results.
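For concreteness, a rough sketch of that uncommon configuration (the
kernel tree and config below are illustrative assumptions; only the
`numactl -m 1 -- make -j 112` invocation is from the test description
above):

    # Assumed 2-socket machine where NUMA nodes 0/1 correspond to sockets 0/1.
    cd linux && make defconfig          # any kernel tree/config will do

    # Bind all memory allocations to node 1 while the 112 compile jobs may
    # run on both sockets, so tasks on socket 0 allocate remotely and
    # contend on the node-1 zone->lock.
    numactl -m 1 -- make -j 112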
If we test kbuild with `make -j 224` on the machine, the test results
become:

             build time   lock%   free_high   alloc_zone
             ----------   -----   ---------   ----------
base              100.0    16.8       100.0        100.0
patch5             99.2    13.9         9.5         97.0
patch7             98.5     5.4         4.8         19.2

Although the lock contention cycles%, the PCP draining for high-order
freeing, and the allocations from the zone all decrease greatly, the
build time barely changes.

We also tested kbuild in another way: we created 8 cgroups and ran
`make -j 28` in each cgroup.  That is, the total parallelism is the
same, but the LRU lock contention can be eliminated via cgroups, and
the single-process link stage takes a smaller proportion of the time
relative to the parallel compile stage.  This isn't common for personal
usage, but it can be used by something like the 0Day kbuild service.
The test results are as follows:

             build time   lock%   free_high   alloc_zone
             ----------   -----   ---------   ----------
base              100.0    14.2       100.0        100.0
patch5             98.5     8.5         8.1         97.1
patch7             95.0     0.7         3.0         19.0

The lock contention cycles% drops to nearly 0 because the LRU lock
contention is eliminated too, and the build time reduction becomes
visible.  We will continue to run a full test with this configuration.

> Other thoughts:
>
> - What if any facilities are provided to permit users/developers to
>   monitor the operation of the autotuning algorithm?

/proc/zoneinfo can be used to observe PCP high and count for each CPU.

> - I'm not seeing any Documentation/ updates.  Surely there are things
>   we can tell users?

I will think about that.

> - This:
>
>   : It's possible that PCP high auto-tuning doesn't work well for some
>   : workloads.  So, when PCP high is tuned by hand via the sysctl knob,
>   : the auto-tuning will be disabled.  The PCP high set by hand will be
>   : used instead.
>
>   Is it a bit hacky to disable autotuning when the user alters
>   pcp-high?  Would it be cleaner to have a separate on/off knob for
>   autotuning?

This was suggested by Mel Gorman,

https://lore.kernel.org/linux-mm/20230714140710.5xbesq6xguhcbyvi@techsingularity.net/

"
I'm not opposed to having an adaptive pcp->high in concept. I think it
would be best to disable adaptive tuning if
percpu_pagelist_high_fraction is set though. I expect that users of
that tunable are rare and that if it *is* used that there is a very
good reason for it.
"

Do you think that this is reasonable?

> And how is the user to determine that "PCP high auto-tuning doesn't work
> well" for their workload?

One way is to check the perf profiling results.  If there is heavy zone
lock contention, the PCP high auto-tuning isn't working well enough to
eliminate the zone lock contention, and users may try to tune PCP high
by hand (a rough sketch of such checks is appended below).

--
Best Regards,
Huang, Ying
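A minimal sketch of those checks, assuming a kernel that exposes the
per-CPU pageset fields in /proc/zoneinfo and the
vm.percpu_pagelist_high_fraction sysctl mentioned above; the perf
invocation and symbol name are common conventions, not something
prescribed by this series:

    # Observe the per-CPU pageset "count" and "high" values for each zone.
    grep -E 'Node|pagesets|cpu:|count:|high:' /proc/zoneinfo

    # Heavy zone->lock contention typically shows up as time spent in the
    # queued spinlock slow path while the workload is running.
    perf record -a -g -- sleep 10
    perf report --stdio | grep -i queued_spin_lock_slowpath

    # Tune PCP high by hand (per this series, doing so disables the
    # auto-tuning).  8 is the smallest non-zero fraction accepted; 0
    # restores the default behavior.
    sysctl -w vm.percpu_pagelist_high_fraction=8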