Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751761AbdH1BLd (ORCPT);
	Sun, 27 Aug 2017 21:11:33 -0400
Received: from mail-pg0-f65.google.com ([74.125.83.65]:38282 "EHLO
	mail-pg0-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751687AbdH1BL3 (ORCPT);
	Sun, 27 Aug 2017 21:11:29 -0400
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mel Gorman,
	Vlastimil Babka
Subject: [PATCH 2/2] mm/slub: don't use reserved highatomic pageblock for optimistic try
Date: Mon, 28 Aug 2017 10:11:15 +0900
Message-Id: <1503882675-17910-2-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1503882675-17910-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1503882675-17910-1-git-send-email-iamjoonsoo.kim@lge.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1680
Lines: 44

From: Joonsoo Kim

High-order atomic allocations are hard to satisfy since we cannot
reclaim anything in this context. So, we reserve a pageblock for this
kind of request.

In slub, we optimistically try to allocate a page of higher order than
is actually needed, in order to get the best performance. If this
optimistic try is done with GFP_ATOMIC, alloc_flags will include
ALLOC_HARDER and the pageblock reserved for high-order atomic
allocations could be used. Moreover, if this request succeeds, it would
reserve yet another MIGRATE_HIGHATOMIC pageblock to prepare for further
requests.

Using a MIGRATE_HIGHATOMIC pageblock here is bad in terms of
fragmentation management, because unreserving the pageblock
unconditionally sets its migratetype to the request's migratetype,
without considering the migratetype of the pages already in use in the
pageblock. This is not what we intend, so fix it by unconditionally
setting __GFP_NOMEMALLOC so that ALLOC_HARDER is not set for the
optimistic try.

Signed-off-by: Joonsoo Kim
---
 mm/slub.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e1e442c..fd8dd89 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1579,10 +1579,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	 */
 	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
 	if (oo_order(oo) > oo_order(s->min)) {
-		if (alloc_gfp & __GFP_DIRECT_RECLAIM) {
-			alloc_gfp |= __GFP_NOMEMALLOC;
-			alloc_gfp &= ~__GFP_DIRECT_RECLAIM;
-		}
+		alloc_gfp |= __GFP_NOMEMALLOC;
+		alloc_gfp &= ~__GFP_DIRECT_RECLAIM;
 	}

 	page = alloc_slab_page(s, alloc_gfp, node, oo);
-- 
2.7.4
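
For reference, the reason setting __GFP_NOMEMALLOC suppresses ALLOC_HARDER
is the __GFP_ATOMIC handling in gfp_to_alloc_flags(). The sketch below is
not part of this patch; it is a simplified paraphrase of the relevant logic
in mm/page_alloc.c from kernels of this era, shown only to illustrate the
mechanism the fix relies on:

/*
 * Simplified paraphrase of gfp_to_alloc_flags() (mm/page_alloc.c,
 * circa v4.13); most details omitted. It illustrates why setting
 * __GFP_NOMEMALLOC keeps ALLOC_HARDER clear even for __GFP_ATOMIC.
 */
static inline unsigned int gfp_to_alloc_flags(gfp_t gfp_mask)
{
	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

	if (gfp_mask & __GFP_ATOMIC) {
		/*
		 * Not worth trying to allocate harder for
		 * __GFP_NOMEMALLOC even if the caller cannot schedule.
		 */
		if (!(gfp_mask & __GFP_NOMEMALLOC))
			alloc_flags |= ALLOC_HARDER;
		/* Ignore cpuset mems for GFP_ATOMIC rather than fail. */
		alloc_flags &= ~ALLOC_CPUSET;
	} else if (unlikely(rt_task(current)) && !in_interrupt())
		alloc_flags |= ALLOC_HARDER;

	return alloc_flags;
}

With ALLOC_HARDER clear, the fast path no longer falls back to
__rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC) and the successful
high-order allocation is no longer used to reserve an additional
highatomic pageblock, so slub's optimistic try neither consumes nor
grows the reserve.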