From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM, Linux-RT-Users
Cc: LKML, Chuck Lever, Jesper Dangaard Brouer, Matthew Wilcox,
    Thomas Gleixner, Peter Zijlstra, Ingo Molnar, Michal Hocko,
    Oscar Salvador, Mel Gorman
Subject: [PATCH 06/11] mm/page_alloc: Batch the accounting updates in the bulk allocator
Date: Wed, 7 Apr 2021 21:24:18 +0100
Message-Id: <20210407202423.16022-7-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210407202423.16022-1-mgorman@techsingularity.net>
References: <20210407202423.16022-1-mgorman@techsingularity.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Now that the zone_statistics are simple counters that do not require
special protection, the accounting updates in the bulk allocator can
be batched as a single update per bulk request, without the complexity
of protected RMW updates or xchg.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
A brief userspace sketch illustrating the batching pattern is included
after the patch for reference.

 include/linux/vmstat.h |  8 ++++++++
 mm/page_alloc.c        | 30 +++++++++++++-----------------
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index dde4dec4e7dd..8473b8fa9756 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -246,6 +246,14 @@ __count_numa_event(struct zone *zone, enum numa_stat_item item)
 	raw_cpu_inc(pzstats->vm_numa_event[item]);
 }
 
+static inline void
+__count_numa_events(struct zone *zone, enum numa_stat_item item, long delta)
+{
+	struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
+
+	raw_cpu_add(pzstats->vm_numa_event[item], delta);
+}
+
 extern void __count_numa_event(struct zone *zone, enum numa_stat_item item);
 extern unsigned long sum_zone_node_page_state(int node,
 				enum zone_stat_item item);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73e618d06315..defb0e436fac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3411,7 +3411,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
  *
  * Must be called with interrupts disabled.
  */
-static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
+static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
+				   long nr_account)
 {
 #ifdef CONFIG_NUMA
 	enum numa_stat_item local_stat = NUMA_LOCAL;
@@ -3424,12 +3425,12 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
 		local_stat = NUMA_OTHER;
 
 	if (zone_to_nid(z) == zone_to_nid(preferred_zone))
-		__count_numa_event(z, NUMA_HIT);
+		__count_numa_events(z, NUMA_HIT, nr_account);
 	else {
-		__count_numa_event(z, NUMA_MISS);
-		__count_numa_event(preferred_zone, NUMA_FOREIGN);
+		__count_numa_events(z, NUMA_MISS, nr_account);
+		__count_numa_events(preferred_zone, NUMA_FOREIGN, nr_account);
 	}
-	__count_numa_event(z, local_stat);
+	__count_numa_events(z, local_stat, nr_account);
 #endif
 }
 
@@ -3475,7 +3476,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
 	if (page) {
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
-		zone_statistics(preferred_zone, zone);
+		zone_statistics(preferred_zone, zone, 1);
 	}
 	local_unlock_irqrestore(&pagesets.lock, flags);
 	return page;
@@ -3536,7 +3537,7 @@ struct page *rmqueue(struct zone *preferred_zone,
 			get_pcppage_migratetype(page));
 
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
-	zone_statistics(preferred_zone, zone);
+	zone_statistics(preferred_zone, zone, 1);
 	local_irq_restore(flags);
 
 out:
@@ -5019,7 +5020,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	struct alloc_context ac;
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
-	int nr_populated = 0;
+	int nr_populated = 0, nr_account = 0;
 
 	if (unlikely(nr_pages <= 0))
 		return 0;
@@ -5092,15 +5093,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 				goto failed_irq;
 			break;
 		}
-
-		/*
-		 * Ideally this would be batched but the best way to do
-		 * that cheaply is to first convert zone_statistics to
-		 * be inaccurate per-cpu counter like vm_events to avoid
-		 * a RMW cycle then do the accounting with IRQs enabled.
-		 */
-		__count_zid_vm_events(PGALLOC, zone_idx(zone), 1);
-		zone_statistics(ac.preferred_zoneref->zone, zone);
+		nr_account++;
 
 		prep_new_page(page, 0, gfp, 0);
 		if (page_list)
@@ -5110,6 +5103,9 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		nr_populated++;
 	}
 
+	__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
+	zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);
+
 	local_unlock_irqrestore(&pagesets.lock, flags);
 
 	return nr_populated;
-- 
2.26.2
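
As noted above the diffstat, here is a minimal userspace C sketch of
the batching pattern this patch applies. It is not kernel code:
stat_pgalloc, alloc_one_page and alloc_pages_bulk below are
hypothetical stand-ins for the per-cpu PGALLOC event counter and the
allocator internals, kept only to show the shape of the change.

	#include <stdio.h>
	#include <stdbool.h>

	/* Hypothetical stand-in for a per-cpu vm event counter. */
	static long stat_pgalloc;

	/* Hypothetical stand-in for allocating a single page. */
	static bool alloc_one_page(void)
	{
		return true;
	}

	static int alloc_pages_bulk(int nr_pages)
	{
		int nr_populated = 0, nr_account = 0;

		for (int i = 0; i < nr_pages; i++) {
			if (!alloc_one_page())
				break;
			/*
			 * Before the patch: the counter was bumped here,
			 * once per page, inside the IRQ-disabled section.
			 */
			nr_account++;
			nr_populated++;
		}

		/* After the patch: one batched update per bulk call. */
		stat_pgalloc += nr_account;

		return nr_populated;
	}

	int main(void)
	{
		printf("allocated %d pages, PGALLOC=%ld\n",
		       alloc_pages_bulk(8), stat_pgalloc);
		return 0;
	}

The batched form is only safe because earlier patches in this series
made zone_statistics simple per-cpu counters, so a single add of the
accumulated nr_account delta (raw_cpu_add via the new
__count_numa_events helper) needs no RMW protection or xchg.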