Date: Thu, 23 Jun 2011 09:41:33 +0200
From: Michal Hocko
To: KAMEZAWA Hiroyuki
Cc: Christoph Hellwig, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton, Hugh Dickins, Rik van Riel, Michel Lespinasse,
	Mel Gorman, Lutz Vieweg
Subject: Re: [PATCH] memcg: unlock page before charging it. (Was: Re: [PATCH V2]
	mm: Do not keep page locked during page fault while charging it for memcg)
Message-ID: <20110623074133.GA31593@tiehlicka.suse.cz>
In-Reply-To: <20110623150842.d13492cd.kamezawa.hiroyu@jp.fujitsu.com>

On Thu 23-06-11 15:08:42, KAMEZAWA Hiroyuki wrote:
> On Wed, 22 Jun 2011 14:32:04 +0200
> Michal Hocko wrote:
>
> > On Wed 22-06-11 08:15:16, Christoph Hellwig wrote:
> > > > +
> > > > +		/* We have to drop the page lock here because memcg
> > > > +		 * charging might block for unbound time if memcg oom
> > > > +		 * killer is disabled.
> > > > +		 */
> > > > +		unlock_page(vmf.page);
> > > > +		ret = mem_cgroup_newpage_charge(page, mm, GFP_KERNEL);
> > > > +		lock_page(vmf.page);
> > >
> > > This introduces a completely pointless unlock/lock cycle for non-memcg
> > > pagefaults. Please make sure it only happens when actually needed.
> >
> > Fair point. Thanks! What about the following?
> > I realize that pushing more memcg logic into mm/memory.c is not nice, but
> > I found it better than pushing the old page into mem_cgroup_newpage_charge.
> > We could also check whether the old page is in the root cgroup, because the
> > memcg oom killer is not active there, but that would add more code to this
> > hot path, so I guess it is not worth it.
> >
> > Changes since v1
> > - do not unlock the page when the memory controller is disabled.
>
> Great work. I then confirmed that Lutz's problem is fixed.
>
> But I prefer the following style to the additional lock/unlock.
> What do you think?

Yes, I like it much more than the hairy way I did it. See the comments
below.
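For reference, the v2 change mentioned above (unlock around the charge only
when the memory controller is enabled) amounts to roughly the sketch below.
This is an untested illustration of the idea, not the hunk that was actually
posted, and using mem_cgroup_disabled() as the guard is an assumption:

		/*
		 * Early-COW path of __do_fault(): charge the freshly
		 * allocated page. Only drop the page lock around the charge
		 * when memcg is enabled, so that ordinary page faults do not
		 * pay for a pointless unlock/lock cycle.
		 */
		if (mem_cgroup_disabled()) {
			ret = mem_cgroup_newpage_charge(page, mm, GFP_KERNEL);
		} else {
			/*
			 * The charge may block for an unbounded time when the
			 * memcg OOM killer is disabled; keeping vmf.page
			 * locked meanwhile would stall every other task
			 * trying to fault the same page in.
			 */
			unlock_page(vmf.page);
			ret = mem_cgroup_newpage_charge(page, mm, GFP_KERNEL);
			lock_page(vmf.page);
		}
		if (ret) {
			ret = VM_FAULT_OOM;
			page_cache_release(page);
			goto out;
		}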
> I tested this on the latest git tree and confirmed that Lutz's livelock
> problem is fixed. And I think this should go to the stable tree.
>
> ==
> From 7e9250da9ff529958d4c1ff511458dbdac8e4b81 Mon Sep 17 00:00:00 2001
> From: KAMEZAWA Hiroyuki
> Date: Thu, 23 Jun 2011 15:05:57 +0900
> Subject: [PATCH] memcg: unlock page before charging it.
>
> Currently we keep the faulted page locked throughout the whole __do_fault
> call (except for the page_mkwrite code path). If we do an early COW we
> allocate a new page which has to be charged to a memcg
> (mem_cgroup_newpage_charge).
>
> This function, however, might block for an unbounded amount of time if the
> memcg oom killer is disabled or a fork-bomb is running, because the only way
> out of the OOM situation is either an external event or an OOM-situation fix.
>
> processes from faulting it in which is not good at all because we are

Missing the beginning of the sentence?

> basically punishing a potentially unrelated process for an OOM condition
> in a different group (I have seen a stuck system because of ld-2.11.1.so
> being locked).
>
> We can test this easily:
> % cgcreate -g memory:A
> % cgset -r memory.limit_in_bytes=64M A
> % cgset -r memory.memsw.limit_in_bytes=64M A
> % cd kernel_dir; cgexec -g memory:A make -j
>
> Then the whole system is livelocked until you kill 'make -j' by hand
> (or push reboot...). This is because some important pages of a shared
> library are locked and never released because of the fork-bomb.
>
> This patch delays the "charge" until unlock_page() is called. There is
> no problem as long as we keep a reference on the page
> (memcg doesn't require page_lock()).
>
> With this, the above livelock disappears.
>
> Reported-by: Lutz Vieweg
> Original-idea-by: Michal Hocko
> Signed-off-by: KAMEZAWA Hiroyuki
> ---
>  mm/memory.c |   28 +++++++++++++++++++---------
>  1 files changed, 19 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 87d9353..66442da 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3129,7 +3129,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>  	struct page *page;
>  	pte_t entry;
>  	int anon = 0;
> -	int charged = 0;
> +	struct page *need_charge = NULL;
>  	struct page *dirty_page = NULL;
>  	struct vm_fault vmf;
>  	int ret;
> @@ -3177,12 +3177,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>  				ret = VM_FAULT_OOM;
>  				goto out;
>  			}
> -			if (mem_cgroup_newpage_charge(page, mm, GFP_KERNEL)) {
> -				ret = VM_FAULT_OOM;
> -				page_cache_release(page);
> -				goto out;
> -			}
> -			charged = 1;
> +			need_charge = page;
>  			copy_user_highpage(page, vmf.page, address, vma);
>  			__SetPageUptodate(page);
>  		} else {
> @@ -3251,12 +3246,11 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>  		/* no need to invalidate: a not-present page won't be cached */
>  		update_mmu_cache(vma, address, page_table);
>  	} else {
> -		if (charged)
> -			mem_cgroup_uncharge_page(page);
>  		if (anon)
>  			page_cache_release(page);
>  		else
>  			anon = 1; /* no anon but release faulted_page */
> +		need_charge = NULL;
>  	}
>
>  	pte_unmap_unlock(page_table, ptl);
> @@ -3268,6 +3262,17 @@ out:
>  		if (set_page_dirty(dirty_page))
>  			page_mkwrite = 1;
>  		unlock_page(dirty_page);
> +		if (need_charge) {
> +			/*
> +			 * charge this page before we drop refcnt.
> +			 * memory cgroup returns OOM condition when
> +			 * this task is killed. So, it's not necessary
> +			 * to undo.
> +			 */
> +			if (mem_cgroup_newpage_charge(need_charge,
> +					mm, GFP_KERNEL))
> +				ret = VM_FAULT_OOM;
> +		}

We do not need this hunk, do we? dirty_page is set only if !anon, so we
never get to this path from COW.

Other than that:
Reviewed-by: Michal Hocko

Thanks!
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
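To illustrate the point about that last hunk: need_charge is set only on the
early-COW (anon) path, while dirty_page is set only on the file-backed (!anon)
path, so the two can never be non-NULL at the same time. The sketch below
paraphrases the relevant branch of __do_fault() from kernels of that era; it
is an illustration from memory, not a verbatim quote of mm/memory.c:

	if (anon) {
		/*
		 * Early COW: a freshly allocated private copy is mapped.
		 * anon was set in the early-COW block above, which is also
		 * the only place the patch sets need_charge.
		 */
		inc_mm_counter_fast(mm, MM_ANONPAGES);
		page_add_new_anon_rmap(page, vma, address);
	} else {
		/*
		 * File-backed fault: the only branch that ever sets
		 * dirty_page, so dirty_page != NULL implies anon == 0 and
		 * therefore need_charge == NULL.
		 */
		inc_mm_counter_fast(mm, MM_FILEPAGES);
		page_add_file_rmap(page);
		if (flags & FAULT_FLAG_WRITE) {
			dirty_page = page;
			get_page(dirty_page);
		}
	}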