Date: Sat, 21 Aug 2010 08:48:04 +0800
From: Wu Fengguang
To: Michael Rubin
Cc: Peter Zijlstra, KOSAKI Motohiro, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, jack@suse.cz,
	riel@redhat.com, akpm@linux-foundation.org, david@fromorbit.com,
	npiggin@kernel.dk, hch@lst.de, axboe@kernel.dk
Subject: Re: [PATCH 3/4] writeback: nr_dirtied and nr_entered_writeback in /proc/vmstat
Message-ID: <20100821004804.GA11030@localhost>

On Sat, Aug 21, 2010 at 07:51:38AM +0800, Michael Rubin wrote:
> On Fri, Aug 20, 2010 at 3:08 AM, Wu Fengguang wrote:
> > How about the names nr_dirty_accumulated and nr_writeback_accumulated?
> > It seems more consistent, for both the interface and code (see below).
> > I'm not really sure though.
>
> Those names don't seem right to me. I admit I like "nr_dirtied" and
> "nr_cleaned"; those seem the most readily understood. These numbers
> also get very big pretty fast, so I don't think it's hard to infer
> what they mean.

That's fine. I like "nr_cleaned".

> >> In order to track the "cleaned" and "dirtied" counts we added two
> >> vm_stat_items.  Per memory node stats have been added also, so we
> >> can see per node granularity:
> >>
> >>    # cat /sys/devices/system/node/node20/writebackstat
> >>    Node 20 pages_writeback: 0 times
> >>    Node 20 pages_dirtied: 0 times
> >
> > I'd prefer the name "vmstat" over "writebackstat", and propose to
> > migrate items from /proc/zoneinfo over time. zoneinfo is a terrible
> > interface for scripting.
>
> I like vmstat also. I can do that.

Thank you.

> > Also, are there meaningful usages of per-node writeback stats?
>
> For us, yes. We use fake NUMA nodes to implement cgroup memory
> isolation. This allows us to see what the writeback behaviour is
> like per cgroup.

That's surely convenient for you, for now. But it's a special use
case. I wonder if you'll still stick to the fake NUMA scenario two
years later -- when memcg grows powerful enough. What do we do then?
"Hey, let's rip out these counters, their major consumer has dumped
them.."

For per-job nr_dirtied, I suspect the per-process write_bytes and
cancelled_write_bytes in /proc/self/io will serve you well.
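For illustration, a rough per-job number could be pulled out of those
two fields with a trivial reader like the sketch below. This is an
untested, hypothetical example, not part of the patch; only the
/proc/<pid>/io field names are real. Note that they count bytes sent
to (or cancelled before reaching) the block layer, while nr_dirtied
counts page dirtyings, so the two are only loosely comparable.

/* procio.c -- illustrative sketch, not from the patch */
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *pid = argc > 1 ? argv[1] : "self";
	char path[64], line[128];
	unsigned long long write_bytes = 0, cancelled = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/io", pid);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* lines that don't start with the literal prefix
		 * fail to match and are simply skipped */
		sscanf(line, "write_bytes: %llu", &write_bytes);
		sscanf(line, "cancelled_write_bytes: %llu", &cancelled);
	}
	fclose(f);

	/* net bytes this task has caused to be written */
	printf("%s: net write bytes: %llu\n", path,
	       write_bytes - cancelled);
	return 0;
}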
For per-job nr_cleaned, I suspect the per-zone nr_writeback will be
sufficient for debug purposes (despite being a bit different).

> > The numbers are naturally per-bdi ones instead. But if we plan to
> > expose them for each bdi, this patch will need to be implemented
> > vastly differently.
>
> Currently I have no plans to do that.

Peter? :)

Thanks,
Fengguang