Date: Wed, 30 Nov 2016 14:05:50 +0100
From: Michal Hocko
To: Mel Gorman
Cc: Andrew Morton, Christoph Lameter, Vlastimil Babka, Johannes Weiner,
	Linux-MM, Linux-Kernel
Subject: Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v3
Message-ID: <20161130130549.GE18432@dhcp22.suse.cz>
References: <20161127131954.10026-1-mgorman@techsingularity.net>
In-Reply-To: <20161127131954.10026-1-mgorman@techsingularity.net>

On Sun 27-11-16 13:19:54, Mel Gorman wrote:
[...]
> @@ -2588,18 +2594,22 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
>  	struct page *page;
>  	bool cold = ((gfp_flags & __GFP_COLD) != 0);
> 
> -	if (likely(order == 0)) {
> +	if (likely(order <= PAGE_ALLOC_COSTLY_ORDER)) {
>  		struct per_cpu_pages *pcp;
>  		struct list_head *list;
> 
>  		local_irq_save(flags);
>  		do {
> +			unsigned int pindex;
> +
> +			pindex = order_to_pindex(migratetype, order);
>  			pcp = &this_cpu_ptr(zone->pageset)->pcp;
> -			list = &pcp->lists[migratetype];
> +			list = &pcp->lists[pindex];
>  			if (list_empty(list)) {
> -				pcp->count += rmqueue_bulk(zone, 0,
> +				int nr_pages = rmqueue_bulk(zone, order,
>  						pcp->batch, list,
>  						migratetype, cold);
> +				pcp->count += (nr_pages << order);
>  				if (unlikely(list_empty(list)))
>  					goto failed;

Just a nit, we can reorder the check and the count update because nobody
could have stolen the pages allocated by rmqueue_bulk. I would also
consider nr_pages a bit misleading because what we get back is the number
of allocated list elements rather than pages. Nothing to lose sleep
over...

>  			}

But... unless I am missing something, this effectively means that we do
not exercise the high-order atomic reserves. Shouldn't we fall back to the
locked __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC) for order > 0
&& ALLOC_HARDER? Or is this just hidden in some other code path which I am
not seeing?

Other than that the patch looks reasonable to me. Keeping some portion of
!costly pages on pcp lists sounds useful from the fragmentation point of
view as well AFAICS, because those pages would normally be dissolved for
order-0 requests, while right now we push more on reclaim instead.
-- 
Michal Hocko
SUSE Labs
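
For illustration, the reordering suggested above would amount to something
like the sketch below. It is based only on the quoted hunk; the surrounding
code is assumed, not taken from the applied patch.

			if (list_empty(list)) {
				int nr_pages = rmqueue_bulk(zone, order,
						pcp->batch, list,
						migratetype, cold);
				/*
				 * Interrupts are disabled here, so nothing can
				 * steal the freshly bulk-allocated pages; an
				 * empty list simply means the bulk allocation
				 * failed and there is nothing to count.
				 */
				if (unlikely(list_empty(list)))
					goto failed;
				pcp->count += (nr_pages << order);
			}

The only behavioural difference is that the failure path no longer performs a
pointless count update.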
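
For context, the high-order atomic reserve in question is normally tapped in
the locked slow path of buffered_rmqueue, which at the time looked roughly
like the following (paraphrased from mainline of that era, not from the patch
itself):

	/* zone->lock held: try the highatomic reserve before the regular freelists */
	do {
		page = NULL;
		if (alloc_flags & ALLOC_HARDER)
			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
		if (!page)
			page = __rmqueue(zone, order, migratetype);
	} while (page && check_new_pages(page, order));

With orders 1..PAGE_ALLOC_COSTLY_ORDER now served from the pcp lists, a
!costly ALLOC_HARDER request is satisfied (or fails) on the pcp path without
ever trying MIGRATE_HIGHATOMIC, which is the concern raised above.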