Date: Mon, 4 Jan 2010 06:20:31 +0530
From: Balbir Singh
Reply-To: balbir@linux.vnet.ibm.com
To: KAMEZAWA Hiroyuki
Cc: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org,
	nishimura@mxp.nes.nec.co.jp
Subject: Re: [RFC] Shared page accounting for memory cgroup
Message-ID: <20100104005030.GG16187@balbir.in.ibm.com>
References: <20091229182743.GB12533@balbir.in.ibm.com>
	<20100104085108.eaa9c867.kamezawa.hiroyu@jp.fujitsu.com>
	<20100104000752.GC16187@balbir.in.ibm.com>
	<20100104093528.04846521.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20100104093528.04846521.kamezawa.hiroyu@jp.fujitsu.com>

* KAMEZAWA Hiroyuki [2010-01-04 09:35:28]:

> On Mon, 4 Jan 2010 05:37:52 +0530
> Balbir Singh wrote:
>
> > * KAMEZAWA Hiroyuki [2010-01-04 08:51:08]:
> >
> > > On Tue, 29 Dec 2009 23:57:43 +0530
> > > Balbir Singh wrote:
> > >
> > > > Hi, Everyone,
> > > >
> > > > I've been working on heuristics for shared page accounting for the
> > > > memory cgroup. I've tested the patches by creating multiple cgroups
> > > > and running programs that share memory and observed the output.
> > > >
> > > > Comments?
> > >
> > > Hmm? Why do we have to do this in the kernel?
> >
> > For several reasons that I can think of:
> >
> > 1. With task migration changes coming in, getting consistent data
> > free of races is going to be hard.
>
> Hmm, look at the real world's "ps" or "top" commands. Even with no
> guarantee on the error range of the data, they are still useful.

Yes, and my concern is this:

1. I iterate through the tasks and calculate their RSS
2. I look at memory.usage_in_bytes

If the time spent in user space between 1 and 2 is large, I get very
wrong results, specifically if the workload is changing its memory
usage drastically, no?

> > 2. The cost of doing it in the kernel is not high, it does not impact
> > the memcg runtime, it is a request-response sort of cost.
> >
> > 3. The cost in user space is going to be high and the implementation
> > cumbersome to get right.
>
> I don't like moving a cost in the userland to the kernel.

Me neither, but I don't think it is a fixed overhead.

> Considering a real-time or fully preemptible kernel, this very long
> read_lock() in the kernel is not good, IMHO. (I think css_set_lock
> should be a mutex/rw-sem...)

I agree, we should discuss converting the lock to a mutex or a
semaphore, but there might be a good reason for keeping it as a
spin_lock.

> cgroup_iter_xxx can block cgroup_post_fork() and this may cause a
> critical system delay of milliseconds.

Agreed, but that can happen anyway, e.g. while attaching a task or
while reading a cgroup's tasks file (the list of tasks).
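For concreteness, the walk we are discussing has roughly the following
shape (a sketch only -- mem_cgroup_sum_rss and the exact RSS arithmetic
are illustrative, not the RFC patch itself):

/* Sketch: sum the RSS of every task attached to @cgrp. */
static u64 mem_cgroup_sum_rss(struct cgroup *cgrp)
{
	struct cgroup_iter it;
	struct task_struct *tsk;
	u64 rss = 0;

	cgroup_iter_start(cgrp, &it);	/* takes read_lock(&css_set_lock) */
	while ((tsk = cgroup_iter_next(cgrp, &it))) {
		task_lock(tsk);
		if (tsk->mm)
			rss += get_mm_rss(tsk->mm);
		task_unlock(tsk);
	}
	cgroup_iter_end(cgrp, &it);	/* read_lock held until here */

	return rss << PAGE_SHIFT;	/* pages -> bytes */
}

css_set_lock is read-held across the entire loop, and cgroup_post_fork()
needs the write side to link the new child, so with a large enough group
every fork in the system can stall behind the walk -- which is exactly
your concern.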
> BTW, if you really want to calculate something atomically, I think the
> following interface may be welcome for freezing:
>
> cgroup.lock
> # echo 1 > /...../cgroup.lock
> All task moves, mkdir and rmdir against this cgroup will be blocked by
> a mutex. (But fork/exit will not be blocked.)
>
> # echo 0 > /...../cgroup.lock
> Unlock.
>
> # cat /...../cgroup.lock
> Show lock status and lock history (for debugging).
>
> Maybe good for some kinds of middleware.
> But this may be difficult if we have to consider hierarchy.

I don't like the idea of providing an interface that can control kernel
locks from user space; user space can tangle things up and get it
wrong.

-- 
	Balbir
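P.S. To make my objection concrete: with the cftype API the proposed
file would boil down to something like the sketch below (entirely
hypothetical -- the "lock" file and cgroup_user_mutex are made up for
illustration):

/* Hypothetical sketch of the proposed cgroup.lock control file. */
static DEFINE_MUTEX(cgroup_user_mutex);

static u64 cgroup_lock_read(struct cgroup *cgrp, struct cftype *cft)
{
	return mutex_is_locked(&cgroup_user_mutex);
}

static int cgroup_lock_write(struct cgroup *cgrp, struct cftype *cft,
			     u64 val)
{
	if (val)
		mutex_lock(&cgroup_user_mutex);	/* user space now holds a kernel mutex */
	else
		mutex_unlock(&cgroup_user_mutex);	/* possibly from a different task */
	return 0;
}

static struct cftype cgroup_lock_file = {
	.name = "lock",
	.read_u64 = cgroup_lock_read,
	.write_u64 = cgroup_lock_write,
};

If the middleware dies between "echo 1" and "echo 0", the mutex stays
held forever, and the unlocking task need not be the one that locked
it, which already breaks mutex ownership rules.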