Subject: Re: [PATCH 1/2] mm: NUMA stats code cleanup and enhancement
From: kemi
To: Michal Hocko
Cc: Greg Kroah-Hartman, Andrew Morton, Vlastimil Babka, Mel Gorman,
 Johannes Weiner, Christopher Lameter, YASUAKI ISHIMATSU, Andrey Ryabinin,
 Nikolay Borisov, Pavel Tatashin, David Rientjes, Sebastian Andrzej Siewior,
 Dave, Andi Kleen, Tim Chen, Jesper Dangaard Brouer, Ying Huang, Aaron Lu,
 Aubrey Li, Linux MM, Linux Kernel
References: <1511848824-18709-1-git-send-email-kemi.wang@intel.com>
 <20171129121740.f6drkbktc43l5ib6@dhcp22.suse.cz>
Message-ID: <4b840074-cb5f-3c10-d65b-916bc02fb1ee@intel.com>
Date: Thu, 30 Nov 2017 13:56:13 +0800
In-Reply-To: <20171129121740.f6drkbktc43l5ib6@dhcp22.suse.cz>

On 2017/11/29 20:17, Michal Hocko wrote:
> On Tue 28-11-17 14:00:23, Kemi Wang wrote:
>> The existing implementation of NUMA counters is per logical CPU along with
>> zone->vm_numa_stat[] separated by zone, plus a global NUMA counter array
>> vm_numa_stat[]. However, unlike the other vmstat counters, NUMA stats don't
>> affect the system's decisions and are only read from /proc and /sys; that
>> is a slow-path operation which can likely tolerate higher overhead.
>> Additionally, nodes usually have only a single zone, except for node 0,
>> and there isn't really any use case that needs these hit counts separated
>> by zone.
>>
>> Therefore, we can migrate the implementation of NUMA stats from per-zone
>> to per-node and get rid of the global NUMA counters. It is good enough to
>> keep everything in a per-CPU pointer of type u64 and sum them up when
>> needed, as suggested by Andi Kleen. That helps code cleanup and
>> enhancement (e.g. it saves more than 130 lines of code).
>
> I agree. Having these stats per zone is a bit of an overcomplication. The
> only consumer is /proc/zoneinfo and I would argue this doesn't justify
> the additional complexity. Who really needs to know per-zone broken-out
> numbers?
>
> Anyway, I haven't checked your implementation too deeply, but why don't
> you simply define a static percpu array for each NUMA node?

To be honest, there are two other ways I can think of, listed below, but I
don't think they are simpler than my current implementation. Maybe you have
a better idea.

  static u64 __percpu vm_stat_numa[num_possible_nodes() * NR_VM_NUMA_STAT_ITEMS];

But that is not correct, because num_possible_nodes() is not a compile-time
constant.

Or we can add a u64 percpu array of size NR_VM_NUMA_STAT_ITEMS to
struct pglist_data.

My current implementation is quite straightforward: it combines all of the
local counters together, so a single percpu array of size
num_possible_nodes() * NR_VM_NUMA_STAT_ITEMS is enough for that.

> [...]
>> +extern u64 __percpu *vm_numa_stat;
> [...]
>> +#ifdef CONFIG_NUMA
>> +	size = sizeof(u64) * num_possible_nodes() * NR_VM_NUMA_STAT_ITEMS;
>> +	align = __alignof__(u64[num_possible_nodes() * NR_VM_NUMA_STAT_ITEMS]);
>> +	vm_numa_stat = (u64 __percpu *)__alloc_percpu(size, align);
>> +#endif
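For readers following along, here is a minimal sketch (not the actual patch)
of how such a single dynamically allocated per-CPU array, indexed by
(node, item), might be updated on the allocation fast path and folded up on
the slow read path (/proc, /sys). The helper names numa_stat_index(),
numa_stat_inc() and numa_stat_read() are illustrative only; the real patch
may handle per-CPU diffs and thresholds differently.

	#include <linux/percpu.h>
	#include <linux/cpumask.h>
	#include <linux/mmzone.h>	/* enum numa_stat_item, NR_VM_NUMA_STAT_ITEMS */

	/* Allocated once at boot with __alloc_percpu(), as in the quoted hunk above. */
	extern u64 __percpu *vm_numa_stat;

	/* Flatten (node, item) into an index within one node-sized stride. */
	static inline unsigned int numa_stat_index(int node, enum numa_stat_item item)
	{
		return node * NR_VM_NUMA_STAT_ITEMS + item;
	}

	static inline void numa_stat_inc(int node, enum numa_stat_item item)
	{
		u64 __percpu *p = vm_numa_stat + numa_stat_index(node, item);

		/* Fast path: plain per-CPU increment, no atomics or locks. */
		this_cpu_inc(*p);
	}

	static u64 numa_stat_read(int node, enum numa_stat_item item)
	{
		u64 __percpu *p = vm_numa_stat + numa_stat_index(node, item);
		u64 sum = 0;
		int cpu;

		/* Slow path: sum the per-CPU counters on demand. */
		for_each_possible_cpu(cpu)
			sum += *per_cpu_ptr(p, cpu);

		return sum;
	}

Summing over all possible CPUs on every read is deliberately cheap to reason
about: since the counters are only consumed from /proc and /sys, the extra
cost of the walk is paid on the slow path rather than in the page allocator.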