Date: Tue, 29 Aug 2017 09:33:44 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Michal Hocko
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Mel Gorman
Subject: Re: [PATCH 2/2] mm/slub: don't use reserved highatomic pageblock for optimistic try
Message-ID: <20170829003344.GB14489@js1304-P5Q-DELUXE>
References: <1503882675-17910-1-git-send-email-iamjoonsoo.kim@lge.com>
	<1503882675-17910-2-git-send-email-iamjoonsoo.kim@lge.com>
	<20170828130829.GL17097@dhcp22.suse.cz>
In-Reply-To: <20170828130829.GL17097@dhcp22.suse.cz>

On Mon, Aug 28, 2017 at 03:08:29PM +0200, Michal Hocko wrote:
> On Mon 28-08-17 13:29:29, Vlastimil Babka wrote:
> > On 08/28/2017 03:11 AM, js1304@gmail.com wrote:
> > > From: Joonsoo Kim
> > >
> > > A high-order atomic allocation is hard to satisfy since we cannot
> > > reclaim anything in this context. So, we reserve a pageblock for
> > > this kind of request.
> > >
> > > In slub, we optimistically try to allocate a page of higher order
> > > than actually needed, to get the best performance. If this
> > > optimistic try is done with GFP_ATOMIC, alloc_flags will include
> > > ALLOC_HARDER and the pageblock reserved for high-order atomic
> > > allocations can be used. Moreover, if such a request succeeds, it
> > > would reserve another MIGRATE_HIGHATOMIC pageblock to prepare for
> > > further requests. Using a MIGRATE_HIGHATOMIC pageblock here is bad
> > > for fragmentation management, since unreserving the pageblock
> > > unconditionally sets its migratetype to the request's migratetype,
> > > without considering the migratetype of the pages already used in
> > > the pageblock.
> > >
> > > This is not what we intend, so fix it by unconditionally setting
> > > __GFP_NOMEMALLOC so that ALLOC_HARDER is not set.
> >
> > I wonder if it would be more robust to strip GFP_ATOMIC from
> > alloc_gfp. E.g. __GFP_NOMEMALLOC does seem to prevent ALLOC_HARDER,
> > but not ALLOC_HIGH. Or maybe we should adjust the __GFP_NOMEMALLOC
> > implementation and document it more thoroughly? CC Michal Hocko
>
> Yeah, __GFP_NOMEMALLOC is rather inconsistent. It was added to
> override __GFP_MEMALLOC and PF_MEMALLOC, AFAIK. In this particular
> case I would agree that dropping __GFP_HIGH and __GFP_ATOMIC would
> be more precise. I am not sure we want to touch the existing
> semantics of __GFP_NOMEMALLOC though. That would require auditing
> all the existing users (something tells me that quite a few of those
> will be incorrect...)

Hmm... now I realize that there is another reason we need
__GFP_NOMEMALLOC. Even if this allocation comes from a PF_MEMALLOC
user, the optimistic try should not dip into the reserved memory below
the watermark; that is, it should not get ALLOC_NO_WATERMARKS. That
can only be accomplished by using __GFP_NOMEMALLOC.
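
Roughly, the translation works like the sketch below. This is a
simplified userspace model, not the actual mm/page_alloc.c code: the
flag bit values and the pf_memalloc parameter are made up for
illustration, but the two suppression points it demonstrates match
what we are discussing.

/*
 * Illustrative userspace model of the gfp -> alloc_flags translation,
 * loosely following what gfp_to_alloc_flags() in mm/page_alloc.c did
 * around this time.  Flag values here are invented for the sketch;
 * only the __GFP_NOMEMALLOC suppression logic is the point.
 */
#include <stdbool.h>
#include <stdio.h>

#define __GFP_HIGH		0x01u
#define __GFP_ATOMIC		0x02u
#define __GFP_MEMALLOC		0x04u
#define __GFP_NOMEMALLOC	0x08u
#define GFP_ATOMIC		(__GFP_HIGH | __GFP_ATOMIC)

#define ALLOC_WMARK_MIN		0x01u
#define ALLOC_HIGH		0x02u
#define ALLOC_HARDER		0x04u	/* unlocks the highatomic reserve */
#define ALLOC_NO_WATERMARKS	0x08u	/* ignore watermarks entirely */

static unsigned int gfp_to_alloc_flags(unsigned int gfp, bool pf_memalloc)
{
	unsigned int alloc_flags = ALLOC_WMARK_MIN;

	if (gfp & __GFP_HIGH)
		alloc_flags |= ALLOC_HIGH;

	/* __GFP_NOMEMALLOC vetoes ALLOC_HARDER for atomic requests... */
	if ((gfp & __GFP_ATOMIC) && !(gfp & __GFP_NOMEMALLOC))
		alloc_flags |= ALLOC_HARDER;

	/*
	 * ...and it also vetoes ALLOC_NO_WATERMARKS, even when the
	 * request carries __GFP_MEMALLOC or the task runs with
	 * PF_MEMALLOC.
	 */
	if (!(gfp & __GFP_NOMEMALLOC) &&
	    ((gfp & __GFP_MEMALLOC) || pf_memalloc))
		alloc_flags |= ALLOC_NO_WATERMARKS;

	return alloc_flags;
}

int main(void)
{
	/* Plain GFP_ATOMIC: ALLOC_HIGH | ALLOC_HARDER -> 0x7 */
	printf("%#x\n", gfp_to_alloc_flags(GFP_ATOMIC, false));
	/*
	 * GFP_ATOMIC | __GFP_NOMEMALLOC from a PF_MEMALLOC task:
	 * both ALLOC_HARDER and ALLOC_NO_WATERMARKS stay clear -> 0x3
	 */
	printf("%#x\n", gfp_to_alloc_flags(GFP_ATOMIC | __GFP_NOMEMALLOC,
					   true));
	return 0;
}

So with __GFP_NOMEMALLOC the optimistic try stays at the normal
minimum watermark: it fails early and slub falls back to the minimum
order, instead of draining the highatomic or PF_MEMALLOC reserves.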
> > Also, were these two patches done via code inspection, or did you
> > notice suboptimal behavior which got fixed? Thanks.
>
> The patch description is not very clear to me either, but I guess
> that Joonsoo sees too many larger-order pages backing slab objects
> when the system is not under heavy memory pressure, and that
> increases internal fragmentation?

Your guess is right. I found this problem when I checked the
fragmentation ratio in a benchmark some months ago. I don't remember
the detailed system state in that benchmark.

Thanks.