Date: Mon, 22 Mar 2021 08:30:39 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka
Cc: Andrew Morton, Chuck Lever, Jesper Dangaard Brouer,
	Christoph Hellwig, Alexander Duyck, Matthew Wilcox,
	LKML, Linux-Net, Linux-MM, Linux-NFS
Subject: Re: [PATCH 3/7] mm/page_alloc: Add a bulk page allocator
Message-ID: <20210322083039.GD3697@techsingularity.net>
References: <20210312154331.32229-1-mgorman@techsingularity.net>
	<20210312154331.32229-4-mgorman@techsingularity.net>
	<7c520bbb-efd7-7cad-95df-610000832a67@suse.cz>
In-Reply-To: <7c520bbb-efd7-7cad-95df-610000832a67@suse.cz>
On Fri, Mar 19, 2021 at 07:18:32PM +0100, Vlastimil Babka wrote:
> On 3/12/21 4:43 PM, Mel Gorman wrote:
> > This patch adds a new page allocator interface via alloc_pages_bulk,
> > and __alloc_pages_bulk_nodemask. A caller requests a number of pages
> > to be allocated and added to a list. They can be freed in bulk using
> > free_pages_bulk().
> >
> > The API is not guaranteed to return the requested number of pages and
> > may fail if the preferred allocation zone has limited free memory, the
> > cpuset changes during the allocation or page debugging decides to fail
> > an allocation. It's up to the caller to request more pages in batch
> > if necessary.
> >
> > Note that this implementation is not very efficient and could be improved
> > but it would require refactoring. The intent is to make it available early
> > to determine what semantics are required by different callers. Once the
> > full semantics are nailed down, it can be refactored.
> >
> > Signed-off-by: Mel Gorman
>
> Acked-by: Vlastimil Babka
>
> Although maybe premature, if it changes significantly due to the users'
> performance feedback, let's see :)
>

Indeed. The next version will have no users so that Jesper and Chuck
can check whether an array-based or LRU-based version is better. There
were also bugs, such as broken accounting of stats, that had to be
fixed, and the fixes increase overhead.

> Some nits below:
>
> ...
>
> > @@ -4963,6 +4978,107 @@ static inline bool prepare_alloc_pages(gfp_t gfp, unsigned int order,
> >  	return true;
> >  }
> >
> > +/*
> > + * This is a batched version of the page allocator that attempts to
> > + * allocate nr_pages quickly from the preferred zone and add them to list.
> > + *
> > + * Returns the number of pages allocated.
> > + */
> > +int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
> > +			nodemask_t *nodemask, int nr_pages,
> > +			struct list_head *alloc_list)
> > +{
> > +	struct page *page;
> > +	unsigned long flags;
> > +	struct zone *zone;
> > +	struct zoneref *z;
> > +	struct per_cpu_pages *pcp;
> > +	struct list_head *pcp_list;
> > +	struct alloc_context ac;
> > +	gfp_t alloc_gfp;
> > +	unsigned int alloc_flags;
> > +	int allocated = 0;
> > +
> > +	if (WARN_ON_ONCE(nr_pages <= 0))
> > +		return 0;
> > +
> > +	if (nr_pages == 1)
> > +		goto failed;
> > +
> > +	/* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
> > +	if (!prepare_alloc_pages(gfp, 0, preferred_nid, nodemask, &ac,
> > +							&alloc_gfp, &alloc_flags))
>
> Unusual indentation here.
>

Fixed

> > +		return 0;
> > +	gfp = alloc_gfp;
> > +
> > +	/* Find an allowed local zone that meets the high watermark. */
> > +	for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
> > +		unsigned long mark;
> > +
> > +		if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&
> > +		    !__cpuset_zone_allowed(zone, gfp)) {
> > +			continue;
> > +		}
> > +
> > +		if (nr_online_nodes > 1 && zone != ac.preferred_zoneref->zone &&
> > +		    zone_to_nid(zone) != zone_to_nid(ac.preferred_zoneref->zone)) {
> > +			goto failed;
> > +		}
> > +
> > +		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
> > +		if (zone_watermark_fast(zone, 0, mark,
> > +				zonelist_zone_idx(ac.preferred_zoneref),
> > +				alloc_flags, gfp)) {
> > +			break;
> > +		}
> > +	}
> > +	if (!zone)
> > +		return 0;
>
> Why not also "goto failed;" here?

Good question.
When first written, it was because the zone search for the normal
allocator was almost certainly going to fail to find a zone and it was
expected that callers would prefer to fail fast over blocking. Now we
know that sunrpc can sleep on a failing allocation and it would be
better to enter the single page allocator and reclaim pages instead of
"sleep and hope for the best".

-- 
Mel Gorman
SUSE Labs
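
For readers without the full patch to hand, here is a minimal sketch of
the "goto failed" fallback being discussed, assuming a failed: label at
the end of __alloc_pages_bulk(): give up on the batched fast path, fall
back to the regular single-page allocator (which is allowed to sleep
and reclaim), and return whatever single page that yields. The fallback
body shown here is an illustration based on this thread, not the code
as merged.

failed:
	/*
	 * Sketch only: take the normal single-page allocation path, which
	 * can enter reclaim, instead of returning an empty list. A single
	 * successful allocation is linked onto the caller's list via
	 * page->lru, matching how the bulk path hands pages back.
	 */
	page = __alloc_pages_nodemask(gfp, 0, preferred_nid, nodemask);
	if (page) {
		list_add(&page->lru, alloc_list);
		allocated = 1;
	}
	return allocated;

A caller that needs a full batch would then notice the short count and
either retry the bulk call or fall back to single-page allocations for
the remainder, as the changelog above suggests.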