Date: Wed, 10 Mar 2021 11:38:36 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Shay Agroskin
Cc: Andrew Morton, Chuck Lever, Jesper Dangaard Brouer, LKML, Linux-Net, Linux-MM, Linux-NFS
Subject: Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator
Message-ID: <20210310113836.GQ3697@techsingularity.net>
References: <20210301161200.18852-1-mgorman@techsingularity.net> <20210301161200.18852-3-mgorman@techsingularity.net>
User-Agent: Mutt/1.10.1 (2018-07-13)
List-ID: linux-kernel@vger.kernel.org

On Wed, Mar 10, 2021 at 01:04:17PM +0200, Shay Agroskin wrote:
>
> Mel Gorman <mgorman@techsingularity.net> writes:
>
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index 8572a1474e16..4903d1cc48dc 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -515,6 +515,10 @@ static inline int arch_make_page_accessible(struct page *page)
> >  }
> >  #endif
> >
> > +int __alloc_pages_bulk_nodemask(gfp_t gfp_mask, int preferred_nid,
> > +			nodemask_t *nodemask, int nr_pages,
> > +			struct list_head *list);
> > +
> >  struct page *
> >  __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> > 			nodemask_t *nodemask);
> > @@ -525,6 +529,14 @@ __alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
> >  	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
> >  }
> >
> > +/* Bulk allocate order-0 pages */
> > +static inline unsigned long
> > +alloc_pages_bulk(gfp_t gfp_mask, unsigned long nr_pages, struct list_head *list)
> > +{
> > +	return __alloc_pages_bulk_nodemask(gfp_mask, numa_mem_id(), NULL,
> > +		nr_pages, list);
>
> Is the second line indentation intentional? Why not align it with the
> first argument (gfp_mask)?
>

No particular reason. I usually pick this style because it is visually
clearer to me that the continuation is part of the same statement when
the multi-line call sits inside an if block.

> > +}
> > +
> >  /*
> >   * Allocate pages, preferring the node given as nid. The node must be
> >   * valid and online. For more general interface, see alloc_pages_node().
> > @@ -594,6 +606,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
> >  extern void __free_pages(struct page *page, unsigned int order);
> >  extern void free_pages(unsigned long addr, unsigned int order);
> > +extern void free_pages_bulk(struct list_head *list);
> >
> >  struct page_frag_cache;
> >  extern void __page_frag_cache_drain(struct page *page, unsigned int count);
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3e4b29ee2b1e..ff1e55793786 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4436,6 +4436,21 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
> >  	}
> >  }
> >
> > ...
> >
> > +/*
> > + * This is a batched version of the page allocator that attempts to
> > + * allocate nr_pages quickly from the preferred zone and add them to list.
> > + */
> > +int __alloc_pages_bulk_nodemask(gfp_t gfp_mask, int preferred_nid,
> > +			nodemask_t *nodemask, int nr_pages,
> > +			struct list_head *alloc_list)
> > +{
> > +	struct page *page;
> > +	unsigned long flags;
> > +	struct zone *zone;
> > +	struct zoneref *z;
> > +	struct per_cpu_pages *pcp;
> > +	struct list_head *pcp_list;
> > +	struct alloc_context ac;
> > +	gfp_t alloc_mask;
> > +	unsigned int alloc_flags;
> > +	int alloced = 0;
>
> Does alloced count the number of allocated pages?

Yes.

> Do you mind renaming it to 'allocated'?

I will if there is another version, as I do not feel particularly
strongly about alloced vs allocated. alloc was to match the function
name, and I don't think the change makes it much clearer.
>
> > +	/* Attempt the batch allocation */
> > +	local_irq_save(flags);
> > +	pcp = &this_cpu_ptr(zone->pageset)->pcp;
> > +	pcp_list = &pcp->lists[ac.migratetype];
> > +
> > +	while (alloced < nr_pages) {
> > +		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
> > +			pcp, pcp_list);
>
> Same indentation comment as before
>

Again, simple personal preference to avoid any possibility it's mixed
up with a later line. There has not been consistent code styling
enforcement of what indentation style should be used for a multi-line
statement within mm/page_alloc.c.

-- 
Mel Gorman
SUSE Labs