Subject: Re: [PATCH v5 2/4] mm: convert zone->managed_pages to atomic variable
To: Arun KS
Cc: keescook@chromium.org, khlebnikov@yandex-team.ru, minchan@kernel.org, getarunks@gmail.com, gregkh@linuxfoundation.org, akpm@linux-foundation.org, mhocko@kernel.org, vbabka@suse.cz, linux-kernel@vger.kernel.org, linux-mm@kvack.org, vatsa@codeaurora.org, willy@infradead.org
References: <1542090790-21750-1-git-send-email-arunks@codeaurora.org> <1542090790-21750-3-git-send-email-arunks@codeaurora.org>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <2b2b6108-45ce-4fd5-0e87-299211c1fcb4@redhat.com>
Date: Tue, 13 Nov 2018 15:55:56 +0100
In-Reply-To: <1542090790-21750-3-git-send-email-arunks@codeaurora.org>

On 13.11.18 07:33, Arun KS wrote:
> totalram_pages, zone->managed_pages and totalhigh_pages updates
> are protected by managed_page_count_lock, but readers never care
> about it. Convert these variables to atomic to avoid readers
> potentially seeing a store tear.
>
> This patch converts zone->managed_pages. Subsequent patches will
> convert totalram_pages, totalhigh_pages and eventually
> managed_page_count_lock will be removed.
>
> The main motivation was that managed_page_count_lock handling was
> complicating things. It was discussed at length here,
> https://lore.kernel.org/patchwork/patch/995739/#1181785
> So it seems better to remove the lock and convert the variables
> to atomic, preventing potential store-to-read tearing as a bonus.
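
The store-tearing concern is easy to state concretely: the compiler may legally split a plain unsigned long store into several smaller stores, so a lockless reader can observe a half-written value, while an atomic store/load pair rules that out. A minimal userspace sketch of the before/after, with C11 atomics standing in for the kernel's atomic_long_t (the real kernel helpers appear only in the comments):

/*
 * Userspace illustration only, not kernel code: C11 atomic_long
 * plays the role of the kernel's atomic_long_t.
 */
#include <stdatomic.h>

unsigned long plain_pages;	/* before: a lockless reader may see a torn store */
atomic_long atomic_pages;	/* after: stores and loads are indivisible */

void writer(long v)
{
	plain_pages = v;			/* may tear under a concurrent reader */
	atomic_store(&atomic_pages, v);		/* kernel: atomic_long_set() */
}

long reader(void)
{
	return atomic_load(&atomic_pages);	/* kernel: atomic_long_read() */
}
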
>
> Suggested-by: Michal Hocko
> Suggested-by: Vlastimil Babka
> Signed-off-by: Arun KS
> Reviewed-by: Konstantin Khlebnikov
> Acked-by: Michal Hocko
> Acked-by: Vlastimil Babka
> ---
> Most of the changes are done by the coccinelle script below,
>
> @@
> struct zone *z;
> expression e1;
> @@
> (
> - z->managed_pages = e1
> + atomic_long_set(&z->managed_pages, e1)
> |
> - e1->managed_pages++
> + atomic_long_inc(&e1->managed_pages)
> |
> - z->managed_pages
> + zone_managed_pages(z)
> )
>
> @@
> expression e,e1;
> @@
> - e->managed_pages += e1
> + atomic_long_add(e1, &e->managed_pages)
>
> @@
> expression z;
> @@
> - z.managed_pages
> + zone_managed_pages(&z)
>
> Then, manually apply the following change,
> include/linux/mmzone.h
>
> - unsigned long managed_pages;
> + atomic_long_t managed_pages;
>
> +static inline unsigned long zone_managed_pages(struct zone *zone)
> +{
> + return (unsigned long)atomic_long_read(&zone->managed_pages);
> +}
>
> ---
>
> drivers/gpu/drm/amd/amdkfd/kfd_crat.c | 2 +-
> include/linux/mmzone.h | 9 +++++--
> lib/show_mem.c | 2 +-
> mm/memblock.c | 2 +-
> mm/page_alloc.c | 44 +++++++++++++++++------------------
> mm/vmstat.c | 4 ++--
> 6 files changed, 34 insertions(+), 29 deletions(-)
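
As a side note for anyone reading along, the resulting API shape is easy to try out in isolation. A small hypothetical userspace mock, with C11 atomics substituting for the kernel's atomic_long_t API (the real kernel helpers are named in the comments):

/* Userspace mock of the accessor pattern this patch introduces. */
#include <stdatomic.h>
#include <stdio.h>

struct zone {
	atomic_long managed_pages;	/* kernel: atomic_long_t */
};

/* Mirrors the zone_managed_pages() helper added to mmzone.h. */
static unsigned long zone_managed_pages(struct zone *zone)
{
	/* kernel: atomic_long_read() */
	return (unsigned long)atomic_load(&zone->managed_pages);
}

int main(void)
{
	struct zone z;

	atomic_store(&z.managed_pages, 1024);		/* kernel: atomic_long_set() */
	atomic_fetch_add(&z.managed_pages, 256);	/* kernel: atomic_long_add() */
	atomic_fetch_add(&z.managed_pages, 1);		/* kernel: atomic_long_inc() */
	printf("managed: %lu pages\n", zone_managed_pages(&z));	/* 1281 */
	return 0;
}
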
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
> index 56412b0..c0e55bb 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
> @@ -848,7 +848,7 @@ static int kfd_fill_mem_info_for_cpu(int numa_node_id, int *avail_size,
> */
> pgdat = NODE_DATA(numa_node_id);
> for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++)
> - mem_in_bytes += pgdat->node_zones[zone_type].managed_pages;
> + mem_in_bytes += zone_managed_pages(&pgdat->node_zones[zone_type]);
> mem_in_bytes <<= PAGE_SHIFT;
>
> sub_type_hdr->length_low = lower_32_bits(mem_in_bytes);
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 847705a..e73dc31 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -435,7 +435,7 @@ struct zone {
> * adjust_managed_page_count() should be used instead of directly
> * touching zone->managed_pages and totalram_pages.
> */
> - unsigned long managed_pages;
> + atomic_long_t managed_pages;
> unsigned long spanned_pages;
> unsigned long present_pages;
>
> @@ -524,6 +524,11 @@ enum pgdat_flags {
> PGDAT_RECLAIM_LOCKED, /* prevents concurrent reclaim */
> };
>
> +static inline unsigned long zone_managed_pages(struct zone *zone)
> +{
> + return (unsigned long)atomic_long_read(&zone->managed_pages);
> +}
> +
> static inline unsigned long zone_end_pfn(const struct zone *zone)
> {
> return zone->zone_start_pfn + zone->spanned_pages;
> @@ -814,7 +819,7 @@ static inline bool is_dev_zone(const struct zone *zone)
> */
> static inline bool managed_zone(struct zone *zone)
> {
> - return zone->managed_pages;
> + return zone_managed_pages(zone);
> }
>
> /* Returns true if a zone has memory */
> diff --git a/lib/show_mem.c b/lib/show_mem.c
> index 0beaa1d..eefe67d 100644
> --- a/lib/show_mem.c
> +++ b/lib/show_mem.c
> @@ -28,7 +28,7 @@ void show_mem(unsigned int filter, nodemask_t *nodemask)
> continue;
>
> total += zone->present_pages;
> - reserved += zone->present_pages - zone->managed_pages;
> + reserved += zone->present_pages - zone_managed_pages(zone);
>
> if (is_highmem_idx(zoneid))
> highmem += zone->present_pages;
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 7df468c..bbd82ab 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1950,7 +1950,7 @@ void reset_node_managed_pages(pg_data_t *pgdat)
> struct zone *z;
>
> for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
> - z->managed_pages = 0;
> + atomic_long_set(&z->managed_pages, 0);
> }
>
> void __init reset_all_zones_managed_pages(void)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 173312b..22e6645 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1279,7 +1279,7 @@ static void __init __free_pages_boot_core(struct page *page, unsigned int order)
> __ClearPageReserved(p);
> set_page_count(p, 0);
>
> - page_zone(page)->managed_pages += nr_pages;
> + atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> set_page_refcounted(page);
> __free_pages(page, order);
> }
> @@ -2258,7 +2258,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> * Limit the number reserved to 1 pageblock or roughly 1% of a zone.
> * Check is race-prone but harmless.
> */
> - max_managed = (zone->managed_pages / 100) + pageblock_nr_pages;
> + max_managed = (zone_managed_pages(zone) / 100) + pageblock_nr_pages;
> if (zone->nr_reserved_highatomic >= max_managed)
> return;
>
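The limit above works out to roughly 1% of the zone plus one pageblock, unchanged by this patch. A quick sanity check with hypothetical numbers (4 GiB zone, 4 KiB pages, 2 MiB pageblocks):

/* Hypothetical numbers, only to sanity-check the limit quoted above. */
#include <stdio.h>

int main(void)
{
	unsigned long managed = 1048576;		/* 4 GiB zone / 4 KiB pages */
	unsigned long pageblock_nr_pages = 512;		/* 2 MiB / 4 KiB */
	unsigned long max_managed = (managed / 100) + pageblock_nr_pages;

	/* 10485 + 512 = 10997 pages, i.e. about 42 MiB */
	printf("max_managed = %lu pages\n", max_managed);
	return 0;
}
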
> @@ -4662,7 +4662,7 @@ static unsigned long nr_free_zone_pages(int offset)
> struct zonelist *zonelist = node_zonelist(numa_node_id(), GFP_KERNEL);
>
> for_each_zone_zonelist(zone, z, zonelist, offset) {
> - unsigned long size = zone->managed_pages;
> + unsigned long size = zone_managed_pages(zone);
> unsigned long high = high_wmark_pages(zone);
> if (size > high)
> sum += size - high;
> @@ -4769,7 +4769,7 @@ void si_meminfo_node(struct sysinfo *val, int nid)
> pg_data_t *pgdat = NODE_DATA(nid);
>
> for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++)
> - managed_pages += pgdat->node_zones[zone_type].managed_pages;
> + managed_pages += zone_managed_pages(&pgdat->node_zones[zone_type]);
> val->totalram = managed_pages;
> val->sharedram = node_page_state(pgdat, NR_SHMEM);
> val->freeram = sum_zone_node_page_state(nid, NR_FREE_PAGES);
> @@ -4778,7 +4778,7 @@ void si_meminfo_node(struct sysinfo *val, int nid)
> struct zone *zone = &pgdat->node_zones[zone_type];
>
> if (is_highmem(zone)) {
> - managed_highpages += zone->managed_pages;
> + managed_highpages += zone_managed_pages(zone);
> free_highpages += zone_page_state(zone, NR_FREE_PAGES);
> }
> }
> @@ -4985,7 +4985,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
> K(zone_page_state(zone, NR_ZONE_UNEVICTABLE)),
> K(zone_page_state(zone, NR_ZONE_WRITE_PENDING)),
> K(zone->present_pages),
> - K(zone->managed_pages),
> + K(zone_managed_pages(zone)),
> K(zone_page_state(zone, NR_MLOCK)),
> zone_page_state(zone, NR_KERNEL_STACK_KB),
> K(zone_page_state(zone, NR_PAGETABLE)),
> @@ -5645,7 +5645,7 @@ static int zone_batchsize(struct zone *zone)
> * The per-cpu-pages pools are set to around 1000th of the
> * size of the zone.
> */
> - batch = zone->managed_pages / 1024;
> + batch = zone_managed_pages(zone) / 1024;
> /* But no more than a meg. */
> if (batch * PAGE_SIZE > 1024 * 1024)
> batch = (1024 * 1024) / PAGE_SIZE;
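To make the batch arithmetic above concrete: with 4 KiB pages, any zone larger than about 1 GiB hits the one-megabyte cap. A hypothetical trace:

/* Hypothetical trace of the quoted zone_batchsize() arithmetic. */
#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;		/* assumed 4 KiB pages */
	unsigned long managed = 4UL << 20;	/* e.g. a 16 GiB zone */
	unsigned long batch = managed / 1024;	/* ~1/1024 of the zone */

	if (batch * page_size > 1024 * 1024)	/* but no more than a meg */
		batch = (1024 * 1024) / page_size;

	printf("batch = %lu pages\n", batch);	/* 256 */
	return 0;
}
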
*/ > if (batch * PAGE_SIZE > 1024 * 1024) > batch = (1024 * 1024) / PAGE_SIZE; > @@ -5756,7 +5756,7 @@ static void pageset_set_high_and_batch(struct zone *zone, > { > if (percpu_pagelist_fraction) > pageset_set_high(pcp, > - (zone->managed_pages / > + (zone_managed_pages(zone) / > percpu_pagelist_fraction)); > else > pageset_set_batch(pcp, zone_batchsize(zone)); > @@ -6311,7 +6311,7 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat) > static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid, > unsigned long remaining_pages) > { > - zone->managed_pages = remaining_pages; > + atomic_long_set(&zone->managed_pages, remaining_pages); > zone_set_nid(zone, nid); > zone->name = zone_names[idx]; > zone->zone_pgdat = NODE_DATA(nid); > @@ -7064,7 +7064,7 @@ static int __init cmdline_parse_movablecore(char *p) > void adjust_managed_page_count(struct page *page, long count) > { > spin_lock(&managed_page_count_lock); > - page_zone(page)->managed_pages += count; > + atomic_long_add(count, &page_zone(page)->managed_pages); > totalram_pages += count; > #ifdef CONFIG_HIGHMEM > if (PageHighMem(page)) > @@ -7112,7 +7112,7 @@ void free_highmem_page(struct page *page) > { > __free_reserved_page(page); > totalram_pages++; > - page_zone(page)->managed_pages++; > + atomic_long_inc(&page_zone(page)->managed_pages); > totalhigh_pages++; > } > #endif > @@ -7245,7 +7245,7 @@ static void calculate_totalreserve_pages(void) > for (i = 0; i < MAX_NR_ZONES; i++) { > struct zone *zone = pgdat->node_zones + i; > long max = 0; > - unsigned long managed_pages = zone->managed_pages; > + unsigned long managed_pages = zone_managed_pages(zone); > > /* Find valid and maximum lowmem_reserve in the zone */ > for (j = i; j < MAX_NR_ZONES; j++) { > @@ -7281,7 +7281,7 @@ static void setup_per_zone_lowmem_reserve(void) > for_each_online_pgdat(pgdat) { > for (j = 0; j < MAX_NR_ZONES; j++) { > struct zone *zone = pgdat->node_zones + j; > - unsigned long managed_pages = zone->managed_pages; > + unsigned long managed_pages = zone_managed_pages(zone); > > zone->lowmem_reserve[j] = 0; > > @@ -7299,7 +7299,7 @@ static void setup_per_zone_lowmem_reserve(void) > lower_zone->lowmem_reserve[j] = > managed_pages / sysctl_lowmem_reserve_ratio[idx]; > } > - managed_pages += lower_zone->managed_pages; > + managed_pages += zone_managed_pages(lower_zone); > } > } > } > @@ -7318,14 +7318,14 @@ static void __setup_per_zone_wmarks(void) > /* Calculate total number of !ZONE_HIGHMEM pages */ > for_each_zone(zone) { > if (!is_highmem(zone)) > - lowmem_pages += zone->managed_pages; > + lowmem_pages += zone_managed_pages(zone); > } > > for_each_zone(zone) { > u64 tmp; > > spin_lock_irqsave(&zone->lock, flags); > - tmp = (u64)pages_min * zone->managed_pages; > + tmp = (u64)pages_min * zone_managed_pages(zone); > do_div(tmp, lowmem_pages); > if (is_highmem(zone)) { > /* > @@ -7339,7 +7339,7 @@ static void __setup_per_zone_wmarks(void) > */ > unsigned long min_pages; > > - min_pages = zone->managed_pages / 1024; > + min_pages = zone_managed_pages(zone) / 1024; > min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL); > zone->watermark[WMARK_MIN] = min_pages; > } else { > @@ -7356,7 +7356,7 @@ static void __setup_per_zone_wmarks(void) > * ensure a minimum size on small systems. 
> @@ -7486,8 +7486,8 @@ static void setup_min_unmapped_ratio(void)
> pgdat->min_unmapped_pages = 0;
>
> for_each_zone(zone)
> - zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages *
> - sysctl_min_unmapped_ratio) / 100;
> + zone->zone_pgdat->min_unmapped_pages += (zone_managed_pages(zone) *
> + sysctl_min_unmapped_ratio) / 100;
> }
>
>
> @@ -7514,8 +7514,8 @@ static void setup_min_slab_ratio(void)
> pgdat->min_slab_pages = 0;
>
> for_each_zone(zone)
> - zone->zone_pgdat->min_slab_pages += (zone->managed_pages *
> - sysctl_min_slab_ratio) / 100;
> + zone->zone_pgdat->min_slab_pages += (zone_managed_pages(zone) *
> + sysctl_min_slab_ratio) / 100;
> }
>
> int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 6038ce5..9fee037 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -227,7 +227,7 @@ int calculate_normal_threshold(struct zone *zone)
> * 125 1024 10 16-32 GB 9
> */
>
> - mem = zone->managed_pages >> (27 - PAGE_SHIFT);
> + mem = zone_managed_pages(zone) >> (27 - PAGE_SHIFT);
>
> threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem));
>
> @@ -1569,7 +1569,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
> high_wmark_pages(zone),
> zone->spanned_pages,
> zone->present_pages,
> - zone->managed_pages);
> + zone_managed_pages(zone));
>
> seq_printf(m,
> "\n protection: (%ld",
>

Reviewed-by: David Hildenbrand

--
Thanks,

David / dhildenb