Date: Tue, 15 Aug 2017 18:30:50 +0100
From: Mel Gorman
To: Tim Chen
Cc: Kemi Wang, Andrew Morton, Michal Hocko, Johannes Weiner, Dave,
 Andi Kleen, Jesper Dangaard Brouer, Ying Huang, Aaron Lu, Tim Chen,
 Linux MM, Linux Kernel
Subject: Re: [PATCH 2/2] mm: Update NUMA counter threshold size
Message-ID: <20170815173050.xn5ffrsvdj4myoam@techsingularity.net>
References: <1502786736-21585-1-git-send-email-kemi.wang@intel.com>
 <1502786736-21585-3-git-send-email-kemi.wang@intel.com>
 <20170815095819.5kjh4rrhkye3lgf2@techsingularity.net>

On Tue, Aug 15, 2017 at 09:55:39AM -0700, Tim Chen wrote:
> On 08/15/2017 02:58 AM, Mel Gorman wrote:
> > On Tue, Aug 15, 2017 at 04:45:36PM +0800, Kemi Wang wrote:
> >>  Threshold   CPU cycles    Throughput(88 threads)
> >>      32         799          241760478
> >>      64         640          301628829
> >>     125         537          358906028          <==> system by default (base)
> >>     256         468          412397590
> >>     512         428          450550704
> >>    4096         399          482520943
> >>   20000         394          489009617
> >>   30000         395          488017817
> >>   32765         394(-26.6%)  488932078(+36.2%)  <==> with this patchset
> >>     N/A         342(-36.3%)  562900157(+56.8%)  <==> disable zone_statistics
> >>
> >> Signed-off-by: Kemi Wang
> >> Suggested-by: Dave Hansen
> >> Suggested-by: Ying Huang
> >> ---
> >>  include/linux/mmzone.h |  4 ++--
> >>  include/linux/vmstat.h |  6 +++++-
> >>  mm/vmstat.c            | 23 ++++++++++-------------
> >>  3 files changed, 17 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >> index 0b11ba7..7eaf0e8 100644
> >> --- a/include/linux/mmzone.h
> >> +++ b/include/linux/mmzone.h
> >> @@ -282,8 +282,8 @@ struct per_cpu_pageset {
> >>  	struct per_cpu_pages pcp;
> >>  #ifdef CONFIG_NUMA
> >>  	s8 expire;
> >> -	s8 numa_stat_threshold;
> >> -	s8 vm_numa_stat_diff[NR_VM_ZONE_NUMA_STAT_ITEMS];
> >> +	s16 numa_stat_threshold;
> >> +	s16 vm_numa_stat_diff[NR_VM_ZONE_NUMA_STAT_ITEMS];
> >
> > I'm fairly sure this pushes the size of that structure into the next
> > cache line, which is not welcome.
> >
> > vm_numa_stat_diff is an always incrementing field. How much do you gain
> > if this becomes a u8 and you remove any code that deals with negative
> > values? That would double the threshold without consuming another cache
> > line.
>
> Doubling the threshold and counter size will help, but not as much as
> raising them above the u8 limit, as seen in Kemi's data:
>
> 	125	537		358906028		<==> system by default (base)
> 	256	468		412397590
> 	32765	394(-26.6%)	488932078(+36.2%)	<==> with this patchset
>
> For small systems, making them u8 makes sense. For larger ones, the
> frequent local counter overflow into the global counter still causes
> a lot of cache bounce. Kemi can perhaps collect some data to see what
> the gain is from making the counters u8.
>
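
For reference, a minimal sketch of the arithmetic behind the cache-line
concern discussed above. The field names follow the quoted diff, but the
standalone structs and the value NR_VM_ZONE_NUMA_STAT_ITEMS == 6 (hit,
miss, foreign, interleave_hit, local, other) are assumptions for
illustration, not the real per_cpu_pageset layout.

	#include <stdint.h>

	/* Current layout of the NUMA tail of per_cpu_pageset: 8 bytes. */
	struct numa_tail_s8 {
		int8_t expire;
		int8_t numa_stat_threshold;
		int8_t vm_numa_stat_diff[6];	/* assumed item count */
	};

	/*
	 * Patched layout: widening to s16 gives 1 byte for expire,
	 * 1 byte of alignment padding, 2 bytes for the threshold and
	 * 12 bytes for the array -- 16 bytes in total.  Those extra
	 * 8 bytes are what risks spilling per_cpu_pageset into another
	 * 64-byte cache line, depending on the rest of the structure.
	 */
	struct numa_tail_s16 {
		int8_t  expire;
		int16_t numa_stat_threshold;
		int16_t vm_numa_stat_diff[6];	/* assumed item count */
	};
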
The same comments hold. The increase of a cache line is undesirable, but
there are other places where the overall cost can be reduced by special
casing based on how this counter is used (always incrementing by one). It
would be preferred if those were addressed first, to see how close that
gets to the same performance as doubling the necessary storage for a
counter.

-- 
Mel Gorman
SUSE Labs
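
Below, for illustration, a minimal user-space sketch of the special
casing Mel describes: because the counter only ever moves up by one, a
u8 per-CPU delta with a fixed flush threshold needs no negative-value
handling and keeps its one-byte footprint. The names, the threshold
value and the C11 atomic used here are assumptions for the sketch, not
the kernel's implementation; in the kernel the per-CPU update would
additionally rely on preemption being disabled.

	#include <stdatomic.h>
	#include <stdint.h>

	/* Assumed flush point; with a u8 delta it could rise toward 255,
	 * roughly doubling the default threshold of 125 from the table. */
	#define NUMA_STAT_THRESHOLD 125u

	struct pcpu_numa_stat {
		uint8_t diff;		/* per-CPU delta, only ever incremented */
	};

	static atomic_ulong global_numa_hits;	/* shared counter of record */

	/*
	 * Fast path: bump the private byte and fold it into the shared
	 * counter only when the threshold is reached, so the contended
	 * cache line is touched rarely.  No decrements exist, so there
	 * is no handling of negative values at all.
	 */
	static inline void numa_hit_inc(struct pcpu_numa_stat *pcp)
	{
		uint8_t d = pcp->diff + 1;

		if (d >= NUMA_STAT_THRESHOLD) {
			atomic_fetch_add_explicit(&global_numa_hits, d,
						  memory_order_relaxed);
			d = 0;
		}
		pcp->diff = d;
	}

Tim's counter-point above is that on a large system even a threshold of
around 255 still overflows into the shared counter often enough to bounce
its cache line, which is what the 256 and 32765 rows of Kemi's table are
comparing.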