From: Oscar Salvador
To: Andrew Morton
Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual, Vlastimil Babka,
    Pavel Tatashin, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Oscar Salvador
Subject: [PATCH v10 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count()
Date: Wed, 21 Apr 2021 12:26:56 +0200
Message-Id: <20210421102701.25051-4-osalvador@suse.de>
X-Mailer: git-send-email 2.13.7
In-Reply-To: <20210421102701.25051-1-osalvador@suse.de>
References: <20210421102701.25051-1-osalvador@suse.de>

From: David Hildenbrand

Let's have a single place (inspired by adjust_managed_page_count()) where
we adjust present pages.

In contrast to adjust_managed_page_count(), only memory onlining/offlining
is allowed to modify the number of present pages.
Signed-off-by: David Hildenbrand
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
---
 mm/memory_hotplug.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e6aafd17a01a..b954fd10474e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -829,6 +829,16 @@ struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
 	return default_zone_for_pfn(nid, start_pfn, nr_pages);
 }
 
+static void adjust_present_page_count(struct zone *zone, long nr_pages)
+{
+	unsigned long flags;
+
+	zone->present_pages += nr_pages;
+	pgdat_resize_lock(zone->zone_pgdat, &flags);
+	zone->zone_pgdat->node_present_pages += nr_pages;
+	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+}
+
 int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 		       int online_type, int nid)
 {
@@ -884,11 +894,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	}
 
 	online_pages_range(pfn, nr_pages);
-	zone->present_pages += nr_pages;
-
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages += nr_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	adjust_present_page_count(zone, nr_pages);
 
 	node_states_set_node(nid, &arg);
 	if (need_zonelists_rebuild)
@@ -1705,11 +1711,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 
 	/* removal success */
 	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
-	zone->present_pages -= nr_pages;
-
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages -= nr_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	adjust_present_page_count(zone, -nr_pages);
 
 	init_per_zone_wmark_min();
 
-- 
2.16.3
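
For context, the calling pattern the patch introduces can be illustrated
with a minimal standalone sketch. The struct zone / struct pglist_data
below are simplified userspace stand-ins, and a pthread mutex takes the
place of pgdat_resize_lock()/pgdat_resize_unlock(); only the shape of the
refactoring (one helper, +nr_pages on online, -nr_pages on offline)
mirrors the patch.

/*
 * Standalone illustration of the refactoring above. Kernel types and the
 * pgdat resize lock are replaced with userspace stand-ins; build with
 * "cc -pthread sketch.c".
 */
#include <pthread.h>
#include <stdio.h>

struct pglist_data {
	long node_present_pages;
	pthread_mutex_t resize_lock;	/* stand-in for pgdat_resize_lock() */
};

struct zone {
	long present_pages;
	struct pglist_data *zone_pgdat;
};

/* Single place that adjusts both the zone and the node present pages. */
static void adjust_present_page_count(struct zone *zone, long nr_pages)
{
	zone->present_pages += nr_pages;
	pthread_mutex_lock(&zone->zone_pgdat->resize_lock);
	zone->zone_pgdat->node_present_pages += nr_pages;
	pthread_mutex_unlock(&zone->zone_pgdat->resize_lock);
}

int main(void)
{
	struct pglist_data pgdat = {
		.node_present_pages = 0,
		.resize_lock = PTHREAD_MUTEX_INITIALIZER,
	};
	struct zone zone = { .present_pages = 0, .zone_pgdat = &pgdat };

	adjust_present_page_count(&zone, 512);	/* online path */
	adjust_present_page_count(&zone, -512);	/* offline path */

	printf("zone: %ld, node: %ld\n",
	       zone.present_pages, pgdat.node_present_pages);
	return 0;
}

Keeping the zone and node counters in one helper means the online and
offline paths cannot drift apart, which is the point of the refactoring.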