Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752046AbdIFIKa (ORCPT ); Wed, 6 Sep 2017 04:10:30 -0400
Received: from mx2.suse.de ([195.135.220.15]:49566 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1751163AbdIFIKZ (ORCPT ); Wed, 6 Sep 2017 04:10:25 -0400
Subject: Re: [PATCH 2/2] mm/slub: don't use reserved memory for optimistic try
To: js1304@gmail.com, Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Mel Gorman, Michal Hocko, Joonsoo Kim
References: <1504672666-19682-1-git-send-email-iamjoonsoo.kim@lge.com>
	<1504672666-19682-2-git-send-email-iamjoonsoo.kim@lge.com>
From: Vlastimil Babka
Message-ID: 
Date: Wed, 6 Sep 2017 10:10:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
	Thunderbird/52.3.0
MIME-Version: 1.0
In-Reply-To: <1504672666-19682-2-git-send-email-iamjoonsoo.kim@lge.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3343
Lines: 87

On 09/06/2017 06:37 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim
> 
> High-order atomic allocations are difficult to satisfy since we cannot
> reclaim anything in this context. So, we reserve a pageblock for this
> kind of request.
> 
> In slub, we optimistically try to allocate a higher-order page than the
> cache actually needs in order to get the best performance. If this
> optimistic try is used with GFP_ATOMIC, alloc_flags will include
> ALLOC_HARDER and the pageblock reserved for high-order atomic
> allocations could be used. Moreover, if it succeeds, this request would
> reserve the pageblock as MIGRATE_HIGHATOMIC to prepare for further
> requests.
> It would not be good to use a MIGRATE_HIGHATOMIC pageblock here in
> terms of fragmentation management, since unreserving the pageblock
> unconditionally sets its migratetype to the request's migratetype
> without considering the migratetype of the pages already used in the
> pageblock.
> 
> This is not what we intend, so fix it by unconditionally masking out
> __GFP_ATOMIC so that ALLOC_HARDER is not set.
> 
> It is also undesirable to use reserved memory for the optimistic try,
> so mask out __GFP_HIGH as well. This patch also adds __GFP_NOMEMALLOC,
> since we don't want to use the reserved memory for the optimistic try
> even if the caller has the PF_MEMALLOC flag.
> 
> Signed-off-by: Joonsoo Kim
> ---
>  include/linux/gfp.h | 1 +
>  mm/page_alloc.c     | 8 ++++++++
>  mm/slub.c           | 6 ++----
>  3 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index f780718..1f5658e 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -568,6 +568,7 @@ extern gfp_t gfp_allowed_mask;
>  
>  /* Returns true if the gfp_mask allows use of ALLOC_NO_WATERMARK */
>  bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
> +gfp_t gfp_drop_reserves(gfp_t gfp_mask);
>  
>  extern void pm_restrict_gfp_mask(void);
>  extern void pm_restore_gfp_mask(void);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6dbc49e..0f34356 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3720,6 +3720,14 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
>  	return !!__gfp_pfmemalloc_flags(gfp_mask);
>  }
>  
> +gfp_t gfp_drop_reserves(gfp_t gfp_mask)
> +{
> +	gfp_mask &= ~(__GFP_HIGH | __GFP_ATOMIC);
> +	gfp_mask |= __GFP_NOMEMALLOC;
> +
> +	return gfp_mask;
> +}
> +

I think it's wasteful to do a function call for this; an inline
definition in the header would be better (gfp_pfmemalloc_allowed() is
different, as it relies on the rather heavyweight
__gfp_pfmemalloc_flags()).

>  /*
>   * Checks whether it makes sense to retry the reclaim to make a forward progress
>   * for the given allocation request.
> diff --git a/mm/slub.c b/mm/slub.c
> index 45f4a4b..3d75d30 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1579,10 +1579,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	 */
>  	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
>  	if (oo_order(oo) > oo_order(s->min)) {
> -		if (alloc_gfp & __GFP_DIRECT_RECLAIM) {
> -			alloc_gfp |= __GFP_NOMEMALLOC;
> -			alloc_gfp &= ~__GFP_DIRECT_RECLAIM;
> -		}
> +		alloc_gfp = gfp_drop_reserves(alloc_gfp);
> +		alloc_gfp &= ~__GFP_DIRECT_RECLAIM;
>  	}
>  
>  	page = alloc_slab_page(s, alloc_gfp, node, oo);
> 