Date: Mon, 29 Jul 2013 15:19:15 -0400
From: Naoya Horiguchi
To: Joonsoo Kim
Cc: Andrew Morton, Rik van Riel, Mel Gorman, Michal Hocko,
	"Aneesh Kumar K.V", KAMEZAWA Hiroyuki, Hugh Dickins,
	Davidlohr Bueso, David Gibson, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Joonsoo Kim, Wanpeng Li,
	Hillf Danton
Subject: Re: [PATCH 15/18] mm, hugetlb: move up anon_vma_prepare()
Message-ID: <1375125555-yuwxqz39-mutt-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <1375124737-9w10y4c4-mutt-n-horiguchi@ah.jp.nec.com>
References: <1375075929-6119-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1375075929-6119-16-git-send-email-iamjoonsoo.kim@lge.com>
 <1375124737-9w10y4c4-mutt-n-horiguchi@ah.jp.nec.com>

On Mon, Jul 29, 2013 at 03:05:37PM -0400, Naoya Horiguchi wrote:
> On Mon, Jul 29, 2013 at 02:32:06PM +0900, Joonsoo Kim wrote:
> > If we fail with an allocated hugepage, it is hard to recover
> > properly. One such example is the reserve count: we have no way to
> > recover it. Although I will introduce a function to recover the
> > reserve count in a following patch, it is better not to allocate a
> > hugepage in the first place if we can avoid it. So move up
> > anon_vma_prepare(), which can fail in an OOM situation.
> >
> > Signed-off-by: Joonsoo Kim
> > Reviewed-by: Naoya Horiguchi

Sorry, let me suspend this Reviewed-by for a question.
If alloc_huge_page() fails after anon_vma_prepare() has succeeded,
are the allocated anon_vma_chain and/or anon_vma safely freed?
Or do we not have to free them?

Thanks,
Naoya Horiguchi

> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 683fd38..bb8a45f 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2536,6 +2536,15 @@ retry_avoidcopy:
> >  	/* Drop page_table_lock as buddy allocator may be called */
> >  	spin_unlock(&mm->page_table_lock);
> >
> > +	/*
> > +	 * When the original hugepage is shared one, it does not have
> > +	 * anon_vma prepared.
> > +	 */
> > +	if (unlikely(anon_vma_prepare(vma))) {
> > +		ret = VM_FAULT_OOM;
> > +		goto out_old_page;
> > +	}
> > +
> >  	use_reserve = vma_has_reserves(h, vma, address);
> >  	if (use_reserve == -ENOMEM) {
> >  		ret = VM_FAULT_OOM;
> > @@ -2590,15 +2599,6 @@ retry_avoidcopy:
> >  		goto out_lock;
> >  	}
> >
> > -	/*
> > -	 * When the original hugepage is shared one, it does not have
> > -	 * anon_vma prepared.
> > -	 */
> > -	if (unlikely(anon_vma_prepare(vma))) {
> > -		ret = VM_FAULT_OOM;
> > -		goto out_new_page;
> > -	}
> > -
> >  	copy_user_huge_page(new_page, old_page, address, vma,
> >  			    pages_per_huge_page(h));
> >  	__SetPageUptodate(new_page);
> > @@ -2625,7 +2625,6 @@ retry_avoidcopy:
> >  	spin_unlock(&mm->page_table_lock);
> >  	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> >
> > -out_new_page:
> >  	page_cache_release(new_page);
> >  out_old_page:
> >  	page_cache_release(old_page);
> > --
> > 1.7.9.5
> >
> > --
> > To unsubscribe, send a message with 'unsubscribe linux-mm' in
> > the body to majordomo@kvack.org. For more info on Linux MM,
> > see: http://www.linux-mm.org/ .
> > Don't email: email@kvack.org
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/