From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Thomas Gleixner,
    "Peter Zijlstra (Intel)", Mike Rapoport, Oscar Salvador, Michal Hocko,
    Wei Yang
Subject: [PATCH v1 2/2] mm/page_alloc: count CMA pages per zone and print them in /proc/zoneinfo
Date: Wed, 27 Jan 2021 11:18:13 +0100
Message-Id: <20210127101813.6370-3-david@redhat.com>
In-Reply-To: <20210127101813.6370-1-david@redhat.com>
References: <20210127101813.6370-1-david@redhat.com>

Let's count the number of CMA pages per zone and print them in
/proc/zoneinfo.

Having access to the total number of CMA pages per zone is helpful for
debugging: it tells us where exactly the CMA pages ended up, and how many
pages of a zone might behave differently (e.g., like ZONE_MOVABLE), even
after some of these pages might already have been allocated. For now, we
are only able to get the global numbers of total and free CMA pages from
/proc/meminfo and the number of free CMA pages per zone from
/proc/zoneinfo.

Note: Track/print that information even without CONFIG_CMA, similar to
"nr_free_cma" in /proc/zoneinfo. This is different from /proc/meminfo -
maybe we want to make that consistent in the future (however, changing the
/proc/zoneinfo output might uglify the code a bit).
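For illustration (the numbers below are made up, not taken from a real
system), each zone stanza in /proc/zoneinfo then carries a "cma" line
right below "managed":

  Node 0, zone   Normal
    pages free     693776
          min      5855
          low      7318
          high     8781
          spanned  1048576
          present  1048576
          managed  1032043
          cma      131072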
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: "Peter Zijlstra (Intel)"
Cc: Mike Rapoport
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Wei Yang
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mmzone.h | 4 ++++
 mm/page_alloc.c        | 1 +
 mm/vmstat.c            | 6 ++++--
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ae588b2f87ef..3bc18c9976fd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -503,6 +503,9 @@ struct zone {
 	 * bootmem allocator):
 	 *	managed_pages = present_pages - reserved_pages;
 	 *
+	 * cma pages is present pages that are assigned for CMA use
+	 * (MIGRATE_CMA).
+	 *
 	 * So present_pages may be used by memory hotplug or memory power
 	 * management logic to figure out unmanaged pages by checking
 	 * (present_pages - managed_pages). And managed_pages should be used
@@ -527,6 +530,7 @@ struct zone {
 	atomic_long_t		managed_pages;
 	unsigned long		spanned_pages;
 	unsigned long		present_pages;
+	unsigned long		cma_pages;
 
 	const char		*name;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..9a82375bbcb2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2168,6 +2168,7 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	}
 
 	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
 #endif
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 7758486097f9..97fc32a53320 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1642,14 +1642,16 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
 		   "\n        high     %lu"
 		   "\n        spanned  %lu"
 		   "\n        present  %lu"
-		   "\n        managed  %lu",
+		   "\n        managed  %lu"
+		   "\n        cma      %lu",
 		   zone_page_state(zone, NR_FREE_PAGES),
 		   min_wmark_pages(zone),
 		   low_wmark_pages(zone),
 		   high_wmark_pages(zone),
 		   zone->spanned_pages,
 		   zone->present_pages,
-		   zone_managed_pages(zone));
+		   zone_managed_pages(zone),
+		   zone->cma_pages);
 
 	seq_printf(m,
 		   "\n        protection: (%ld",
-- 
2.29.2
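A quick sanity check on the accounting above (illustrative numbers,
assuming 4 KiB base pages and pageblock_order = 9, i.e.,
pageblock_nr_pages = 512): init_cma_reserved_pageblock() runs once per
reserved CMA pageblock, so a 16 MiB CMA area that ends up in a single zone
adds 8 pageblocks * 512 pages = 4096 pages to that zone's cma_pages, which
is exactly the 16 MiB / 4 KiB one would expect to see in the new "cma"
line of /proc/zoneinfo.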