Subject: Re: [PATCH v2 3/5] mm: enlarge NUMA counters threshold size
From: kemi
Date: Wed, 20 Dec 2017 13:52:14 +0800
To: Michal Hocko
Cc: Greg Kroah-Hartman, Andrew Morton, Vlastimil Babka, Mel Gorman, Johannes Weiner, Christopher Lameter, YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov, Pavel Tatashin, David Rientjes, Sebastian Andrzej Siewior, Dave, Andi Kleen, Tim Chen, Jesper Dangaard Brouer, Ying Huang, Aaron Lu, Aubrey Li, Linux MM, Linux Kernel
Message-ID: <439918f7-e8a3-c007-496c-99535cbc4582@intel.com>
In-Reply-To: <20171219124045.GO2787@dhcp22.suse.cz>
References: <1513665566-4465-1-git-send-email-kemi.wang@intel.com> <1513665566-4465-4-git-send-email-kemi.wang@intel.com> <20171219124045.GO2787@dhcp22.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2017-12-19 20:40, Michal Hocko wrote:
> On Tue 19-12-17 14:39:24, Kemi Wang wrote:
>> We have seen significant overhead in cache bouncing caused by NUMA counter
>> updates in multi-threaded page allocation. See commit 1d90ca897cb0 ("mm:
>> update NUMA counter threshold size") for more details.
>>
>> This patch updates NUMA counters to a fixed threshold size of (S16_MAX - 2)
>> and handles global counter updates using a different threshold size for node
>> page stats.
>
> Again, no numbers.

Compared to the vanilla kernel, I don't think this patch brings a performance improvement on its own, so I didn't post performance data here. But if you would like to see the performance gain from enlarging the threshold size for NUMA stats (compared to the first patch), I will do that later.

> To be honest I do not really like the special casing
> here. Why are numa counters any different from PGALLOC which is
> incremented for _every_ single page allocation?
>

I guess you are referring to the PGALLOC event. The count for that event is kept in per-cpu variables and summed up (for_each_online_cpu) when needed. That is similar to the approach I used for NUMA stats in the V1 patch series. Good enough.

>> ---
>>  mm/vmstat.c | 13 +++++++++++--
>>  1 file changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> index 9c681cc..64e08ae 100644
>> --- a/mm/vmstat.c
>> +++ b/mm/vmstat.c
>> @@ -30,6 +30,8 @@
>>
>>  #include "internal.h"
>>
>> +#define VM_NUMA_STAT_THRESHOLD (S16_MAX - 2)
>> +
>>  #ifdef CONFIG_NUMA
>>  int sysctl_vm_numa_stat = ENABLE_NUMA_STAT;
>>
>> @@ -394,7 +396,11 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
>>  	s16 v, t;
>>
>>  	v = __this_cpu_inc_return(*p);
>> -	t = __this_cpu_read(pcp->stat_threshold);
>> +	if (item >= NR_VM_NUMA_STAT_ITEMS)
>> +		t = __this_cpu_read(pcp->stat_threshold);
>> +	else
>> +		t = VM_NUMA_STAT_THRESHOLD;
>> +
>>  	if (unlikely(v > t)) {
>>  		s16 overstep = t >> 1;
>>
>> @@ -549,7 +555,10 @@ static inline void mod_node_state(struct pglist_data *pgdat,
>>  	 * Most of the time the thresholds are the same anyways
>>  	 * for all cpus in a node.
>>  	 */
>> -	t = this_cpu_read(pcp->stat_threshold);
>> +	if (item >= NR_VM_NUMA_STAT_ITEMS)
>> +		t = this_cpu_read(pcp->stat_threshold);
>> +	else
>> +		t = VM_NUMA_STAT_THRESHOLD;
>>
>>  	o = this_cpu_read(*p);
>>  	n = delta + o;
>> --
>> 2.7.4
>>
>
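[Editor's note] The fold-on-overflow logic the patch modifies can be sketched in plain userspace C. This is a simplified, single-threaded model, not the kernel code: `pcp_diff` and `global_count` are hypothetical stand-ins for the per-cpu diff (`pcp->vm_node_stat_diff[item]`) and the node-wide counter, and the threshold is passed in rather than read per-cpu. The point it illustrates is why a larger threshold reduces cache bouncing: the shared global counter is touched only once per threshold crossing, and the negative "overstep" delays the next flush.

```c
#include <assert.h>
#include <stdint.h>

#define S16_MAX 32767
#define VM_NUMA_STAT_THRESHOLD (S16_MAX - 2)

/* Hypothetical stand-ins for the kernel's per-cpu and global counters. */
static int16_t pcp_diff;      /* per-cpu delta, normally pcp->vm_node_stat_diff[item] */
static long    global_count;  /* node-wide counter, normally in pgdat */

/*
 * Sketch of the __inc_node_state() fast path: bump the cheap per-cpu
 * delta, and only when it exceeds the threshold t fold it into the
 * (cache-contended) global counter, overstepping by t/2 so the next
 * fold is pushed further out.
 */
static void inc_node_state_sketch(int16_t t)
{
	int16_t v = ++pcp_diff;

	if (v > t) {
		int16_t overstep = t >> 1;

		global_count += v + overstep;
		pcp_diff = -overstep;
	}
}
```

With a toy threshold of 10, eleven increments touch `global_count` exactly once: the fold adds v + overstep = 16 and leaves the per-cpu delta at -5, so the observable total (`global_count + pcp_diff`) is still 11. With the patch's `VM_NUMA_STAT_THRESHOLD` of (S16_MAX - 2), NUMA items fold into the shared counter far less often than with the default per-cpu `stat_threshold`.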