Date: Fri, 9 Apr 2021 13:09:57 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Tsirkin" , David Hildenbrand , Vlastimil Babka , Alexander Duyck , Minchan Kim Subject: [PATCH] mm/memory_hotplug: Make unpopulated zones PCP structures unreachable during hot remove Message-ID: <20210409120957.GM3697@techsingularity.net> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-15 Content-Disposition: inline User-Agent: Mutt/1.10.1 (2018-07-13) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org zone_pcp_reset allegedly protects against a race with drain_pages using local_irq_save but this is bogus. local_irq_save only operates on the local CPU. If memory hotplug is running on CPU A and drain_pages is running on CPU B, disabling IRQs on CPU A does not affect CPU B and offers no protection. This patch reorders memory hotremove such that the PCP structures relevant to the zone are no longer reachable by the time the structures are freed. With this reordering, no protection is required to prevent a use-after-free and the IRQs can be left enabled. zone_pcp_reset is renamed to zone_pcp_destroy to make it clear that the per-cpu structures are deleted when the function returns. Signed-off-by: Mel Gorman --- mm/internal.h | 2 +- mm/memory_hotplug.c | 10 +++++++--- mm/page_alloc.c | 22 ++++++++++++++++------ 3 files changed, 24 insertions(+), 10 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 09adf152a10b..cc34ce4461b7 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -203,7 +203,7 @@ extern void free_unref_page(struct page *page); extern void free_unref_page_list(struct list_head *list); extern void zone_pcp_update(struct zone *zone); -extern void zone_pcp_reset(struct zone *zone); +extern void zone_pcp_destroy(struct zone *zone); extern void zone_pcp_disable(struct zone *zone); extern void zone_pcp_enable(struct zone *zone); diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 0cdbbfbc5757..3d059c9f9c2d 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -1687,12 +1687,16 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages) zone->nr_isolate_pageblock -= nr_pages / pageblock_nr_pages; spin_unlock_irqrestore(&zone->lock, flags); - zone_pcp_enable(zone); - /* removal success */ adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages); zone->present_pages -= nr_pages; + /* + * Restore PCP after managed pages has been updated. Unpopulated + * zones PCP structures will remain unusable. + */ + zone_pcp_enable(zone); + pgdat_resize_lock(zone->zone_pgdat, &flags); zone->zone_pgdat->node_present_pages -= nr_pages; pgdat_resize_unlock(zone->zone_pgdat, &flags); @@ -1700,8 +1704,8 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages) init_per_zone_wmark_min(); if (!populated_zone(zone)) { - zone_pcp_reset(zone); build_all_zonelists(NULL); + zone_pcp_destroy(zone); } else zone_pcp_update(zone); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 5e8aedb64b57..d6c3db853552 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -8946,18 +8946,29 @@ void zone_pcp_disable(struct zone *zone) void zone_pcp_enable(struct zone *zone) { - __zone_set_pageset_high_and_batch(zone, zone->pageset_high, zone->pageset_batch); + /* + * If the zone is populated, restore the high and batch counts. + * If unpopulated, leave the high and batch count as 0 and 1 + * respectively as done by zone_pcp_disable. The per-cpu + * structures will later be freed by zone_pcp_destroy. 
+	 */
+	if (populated_zone(zone))
+		__zone_set_pageset_high_and_batch(zone, zone->pageset_high, zone->pageset_batch);
+
 	mutex_unlock(&pcp_batch_high_lock);
 }
 
-void zone_pcp_reset(struct zone *zone)
+/*
+ * Called when a zone has been hot-removed. At this point, the PCP has been
+ * drained, disabled and the zone is removed from the zonelists so the
+ * structures are no longer in use. PCP was disabled/drained by
+ * zone_pcp_disable. This function will drain any remaining vmstat deltas.
+ */
+void zone_pcp_destroy(struct zone *zone)
 {
-	unsigned long flags;
 	int cpu;
 	struct per_cpu_pageset *pset;
 
-	/* avoid races with drain_pages() */
-	local_irq_save(flags);
 	if (zone->pageset != &boot_pageset) {
 		for_each_online_cpu(cpu) {
 			pset = per_cpu_ptr(zone->pageset, cpu);
@@ -8966,7 +8977,6 @@ void zone_pcp_reset(struct zone *zone)
 		free_percpu(zone->pageset);
 		zone->pageset = &boot_pageset;
 	}
-	local_irq_restore(flags);
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
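
For readers following the race described in the changelog, the problematic
interleaving can be sketched as below. This is an illustrative sketch only,
not kernel source: the two functions are hypothetical stand-ins for the old
hot-remove teardown and for drain_pages running on another CPU, condensed
from the code the patch removes.

/* Sketch: local_irq_save on CPU A cannot order against CPU B. */
static void cpu_a_old_hotremove(struct zone *zone)	/* hypothetical */
{
	unsigned long flags;

	local_irq_save(flags);		/* masks interrupts on CPU A only */
	free_percpu(zone->pageset);	/* CPU B may still hold a pointer */
	zone->pageset = &boot_pageset;
	local_irq_restore(flags);
}

static void cpu_b_drain(struct zone *zone, int cpu)	/* hypothetical */
{
	/* Runs concurrently on CPU B; not serialised by CPU A's IRQ masking. */
	struct per_cpu_pageset *pset = per_cpu_ptr(zone->pageset, cpu);

	/* Dereferencing pset here is a use-after-free if CPU A already ran. */
}

With the reordering in this patch, no such protection is needed: the zone is
already drained, left unusable by zone_pcp_enable and dropped from the
zonelists before zone_pcp_destroy frees the per-cpu structures, so no remote
CPU can reach zone->pageset by the time it is torn down.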