Date: Thu, 9 Jun 2011 15:30:27 -0700
Subject: Re: [patch 0/8] mm: memcg naturalization -rc2
From: Ying Han
To: Johannes Weiner
Cc: Hiroyuki Kamezawa, KAMEZAWA Hiroyuki, Daisuke Nishimura, Balbir Singh,
    Michal Hocko, Andrew Morton, Rik van Riel, Minchan Kim, KOSAKI Motohiro,
    Mel Gorman, Greg Thelen, Michel Lespinasse, "linux-mm@kvack.org",
    linux-kernel

On Thu, Jun 9, 2011 at 11:36 AM, Johannes Weiner wrote:
> On Thu, Jun 09, 2011 at 10:36:47AM -0700, Ying Han wrote:
>> On Thu, Jun 9, 2011 at 1:35 AM, Johannes Weiner wrote:
>> > On Wed, Jun 08, 2011 at 08:52:03PM -0700, Ying Han wrote:
>> >> On Wed, Jun 8, 2011 at 8:32 AM, Johannes Weiner wrote:
>> >> > I guess it would make much more sense to evaluate whether reclaiming
>> >> > from memcgs while there are others exceeding their soft limit is even
>> >> > a problem.  Otherwise this discussion is pretty pointless.
>> >>
>> >> AFAIK it is a problem, since it changes the spec of the kernel API
>> >> memory.soft_limit_in_bytes.  That value is set per memcg; all pages
>> >> allocated above it are best effort and targeted for reclaim prior to
>> >> others.
>> >
>> > That's not really true.  Quoting the documentation:
>> >
>> >     When the system detects memory contention or low memory, control groups
>> >     are pushed back to their soft limits. If the soft limit of each control
>> >     group is very high, they are pushed back as much as possible to make
>> >     sure that one control group does not starve the others of memory.
>> >
>> > I am language lawyering here, but I don't think it says it won't touch
>> > other memcgs at all while there are memcgs exceeding their soft limit.
>>
>> Well... :) I would say that the documentation of soft_limit needs lots
>> of work, especially after all the discussions we had after LSF.
>>
>> The RFC I sent after our discussion has the following documentation,
>> and I only cut & paste the content relevant to our conversation here:
>>
>> What is "soft_limit"?
>> The "soft_limit" was introduced in memcg to support over-committing the
>> memory resource on the host.  Each cgroup can be configured with a
>> "hard_limit", and it will be throttled or OOM killed for going over that
>> limit.  However, the allocation can go above the "soft_limit" as long as
>> there is no memory contention.
The "soft_limit" is the kernel >> mechanism for re-distributing spare memory resource among cgroups. >> >> What we have now? >> The current implementation of softlimit is based on per-zone RB tree, >> where only the cgroup exceeds the soft_limit the most being selected >> for reclaim. >> >> It makes less sense to only reclaim from one cgroup rather than >> reclaiming all cgroups based on calculated propotion. This is required >> for fairness. >> >> Proposed design: >> round-robin across the cgroups where they have memory allocated on the >> zone and also exceed the softlimit configured. >> >> there was a question on how to do zone balancing w/o global LRU. This >> could be solved by building another cgroup list per-zone, where we >> also link cgroups under their soft_limit. We won't scan the list >> unless the first list being exhausted and >> the free pages is still under the high_wmark. >> >> Since the per-zone memcg list design is being replaced by your >> patchset, some of the details doesn't apply. But the concept still >> remains where we would like to scan some memcgs first (above >> soft_limit) . > > I think the most important thing we wanted was to round-robin scan all > soft limit excessors instead of just the biggest one. ?I understood > this is the biggest fault with soft limits right now. > > We came up with maintaining a list of excessors, rather than a tree, > and from this particular implementation followed naturally that this > list is scanned BEFORE we look at other memcgs at all. > > This is a nice to have, but it was never the primary problem with the > soft limit implementation, as far as I understood. > >> > It would be a lie about the current code in the first place, which >> > does soft limit reclaim and then regular reclaim, no matter the >> > outcome of the soft limit reclaim cycle. ?It will go for the soft >> > limit first, but after an allocation under pressure the VM is likely >> > to have reclaimed from other memcgs as well. >> > >> > I saw your patch to fix that and break out of reclaim if soft limit >> > reclaim did enough. ?But this fix is not much newer than my changes. >> >> My soft_limit patch was developed in parallel with your patchset, and >> most of that wouldn't apply here. >> Is that what you are referring to? > > No, I meant that the current behaviour is old and we are only changing > it only now, so we are not really breaking backward compatibility. > >> > The second part of this is: >> > >> > ? ?Please note that soft limits is a best effort feature, it comes with >> > ? ?no guarantees, but it does its best to make sure that when memory is >> > ? ?heavily contended for, memory is allocated based on the soft limit >> > ? ?hints/setup. Currently soft limit based reclaim is setup such that >> > ? ?it gets invoked from balance_pgdat (kswapd). >> >> We had patch merged which add the soft_limit reclaim also in the global ttfp. >> >> memcg-add-the-soft_limit-reclaim-in-global-direct-reclaim.patch >> >> > It's not the pages-over-soft-limit that are best effort. ?It says that >> > it tries its best to take soft limits into account while reclaiming. >> Hmm. Both cases are true. The best effort pages I referring to means >> "the page above the soft_limit are targeted to reclaim first under >> memory contention" > > I really don't know where you are taking this from. ?That is neither > documented anywhere, nor is it the current behaviour. I got the email from andrew on may 27 and you were on the cc-ed :) Anyway, i just forwarded you that one. 
--Ying

> Yeah, currently the soft limit reclaim cycle precedes the generic
> reclaim cycle.  But the end result is that other memcgs are reclaimed
> from as well in both cases.  The exact timing is irrelevant.
>
> And this has been the case for a long time, so I don't think my rework
> breaks existing users in that regard.
>
>> > My code does that, so I don't think we are breaking any promises
>> > currently made in the documentation.
>> >
>> > But much more important than keeping documentation promises is not to
>> > break actual users.  So if you are yourself a user of soft limits,
>> > test the new code pretty please and complain if it breaks your setup!
>>
>> Yes, I've been running tests on your patchset, but not getting into
>> specific configurations yet.  But I don't think it is hard to generate
>> the following scenario:
>>
>> On a 32G machine, under root I have three cgroups, each with a 20G hard_limit:
>> cgroup-A: soft_limit 1g,  usage 20g with clean file pages
>> cgroup-B: soft_limit 10g, usage 5g  with clean file pages
>> cgroup-C: soft_limit 10g, usage 5g  with clean file pages
>>
>> I would assume reclaiming from cgroup-A should be sufficient under
>> global memory pressure, and no pages need to be reclaimed from B or C,
>> especially since both of them have memory usage under their soft_limit.
>
> Keep in mind that memcgs are scanned proportionally to their size,
> that we start out with relatively low scan counts, and that the
> priority levels are a logarithmic scale.
>
> The formula is essentially this:
>
>         (usage / PAGE_SIZE) >> priority
>
> which means that we would scan as follows, with decreased soft limit
> priority for A:
>
>         A: ((20 << 30) >> 12) >> 11 = 2560 pages
>         B: (( 5 << 30) >> 12) >> 12 =  320 pages
>         C:                          =  320 pages
>
> So even if B and C are scanned, they are only shrunk by a bit over a
> megabyte tops.  For decreasing priority levels (if they are reached at
> all while there is clean cache around):
>
>         A: 20M 40M 80M 160M ...
>         B:  2M  4M  8M  16M ...
>
> While it would be sufficient to reclaim only from A, actually
> reclaiming from B and C is not a big deal in practice, I would
> suspect.
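For anyone checking the arithmetic above, here is a tiny standalone C
program that mirrors the quoted formula; scan_count() is a made-up helper
for illustration, not the kernel's code:

#include <stdio.h>

#define PAGE_SHIFT 12   /* 4K pages, matching the ">> 12" in the example */

/* Mirrors the formula quoted above: (usage / PAGE_SIZE) >> priority. */
static unsigned long scan_count(unsigned long long usage_bytes, int priority)
{
        return (unsigned long)((usage_bytes >> PAGE_SHIFT) >> priority);
}

int main(void)
{
        /* A exceeds its soft limit, so it is scanned one priority level lower. */
        printf("A: %lu pages\n", scan_count(20ULL << 30, 11));  /* 2560 */
        printf("B: %lu pages\n", scan_count( 5ULL << 30, 12));  /*  320 */
        printf("C: %lu pages\n", scan_count( 5ULL << 30, 12));  /*  320 */
        return 0;
}

With 4K pages and the priorities used in the quoted example (11 for A
because it exceeds its soft limit, 12 for B and C), the output matches the
numbers above: 2560, 320, and 320 pages.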