From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton , Catalin Marinas , Matthew Wilcox , Zhaoyang Huang , , , ,
Subject: [PATCHv2] mm: introduce NR_BAD_PAGES and track them via kmemleak
Date: Wed, 21 Sep 2022 11:17:26 +0800
Message-ID: <1663730246-11968-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Bad pages can be created by stray extra references on high-order pages or
on compound tail pages, which prevent those pages from being returned to
the allocator and leave them behind as orphans. Account for them in a new
NR_BAD_PAGES counter and register them with kmemleak so they can be
tracked.

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
change of v2: add accounting for bad pages
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 13 ++++++++++---
 mm/vmstat.c            |  1 +
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e24b40c..11c1422 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -166,6 +166,7 @@ enum zone_stat_item {
 	NR_ZSPAGES,		/* allocated in zsmalloc */
 #endif
 	NR_FREE_CMA_PAGES,
+	NR_BAD_PAGES,
 	NR_VM_ZONE_STAT_ITEMS };

 enum node_stat_item {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e5486d4..a3768c96 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1408,7 +1408,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			__memcg_kmem_uncharge_page(page, order);
 		reset_page_owner(page, order);
 		page_table_check_free(page, order);
-		return false;
+		goto err;
 	}

 	/*
@@ -1442,7 +1442,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	if (check_free)
 		bad += check_free_page(page);
 	if (bad)
-		return false;
+		goto err;

 	page_cpupid_reset_last(page);
 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
@@ -1486,6 +1486,11 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	debug_pagealloc_unmap_pages(page, 1 << order);

 	return true;
+err:
+	__mod_zone_page_state(page_zone(page), NR_BAD_PAGES, 1 << order);
+	kmemleak_alloc(page_address(page), PAGE_SIZE << order, 1, GFP_KERNEL);
+	return false;
+
 }

 #ifdef CONFIG_DEBUG_VM
@@ -1587,8 +1592,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			count -= nr_pages;
 			pcp->count -= nr_pages;

-			if (bulkfree_pcp_prepare(page))
+			if (bulkfree_pcp_prepare(page)) {
+				__mod_zone_page_state(page_zone(page), NR_BAD_PAGES, 1 << order);
 				continue;
+			}

 			/* MIGRATE_ISOLATE page should not go to pcplists */
 			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 90af9a8..d391352 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1193,6 +1193,7 @@ int fragmentation_index(struct zone *zone, unsigned int order)
 	"nr_zspages",
 #endif
 	"nr_free_cma",
+	"nr_bad_pages",

 	/* enum numa_stat_item counters */
 #ifdef CONFIG_NUMA
--
1.9.1