Date: Fri, 10 Feb 2023 09:18:52 +0800 (CST)
Message-ID: <202302100918524481474@zte.com.cn>
From:
To:
Cc:
Subject: [PATCH v6 3/6] ksm: count all zero pages placed by KSM
X-Mailing-List: linux-kernel@vger.kernel.org

From: xu xin

Since pages_sharing and pages_shared do not include the number of zero pages
merged by KSM, we cannot know how many of the pages placed by KSM are zero
pages when use_zero_pages is enabled. As a result, KSM is not transparent
about all the pages it has actually merged.

In the early days of use_zero_pages, KSM-placed zero pages could not be
unshared through paths such as MADV_UNMERGEABLE, so it was hard to count how
many of those zero pages were later unmerged. Now that unsharing KSM-placed
zero pages works accurately, we can easily count both how many times a page
full of zeroes was merged with the zero page and how many times one of those
pages was then unmerged. This also helps estimate the memory demand for the
case where each and every shared page could get unshared.

Therefore, add zero_pages_sharing under /sys/kernel/mm/ksm/ to show the
number of all zero pages placed by KSM.

Signed-off-by: xu xin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang

v4->v5: fix warning mm/ksm.c:3238:9: warning: no previous prototype for
'zero_pages_sharing_show' [-Wmissing-prototypes].
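For reference, a minimal userspace sketch (not part of this patch) of how the
new counter could be consumed. It only assumes the sysfs file added above,
/sys/kernel/mm/ksm/zero_pages_sharing, and is purely illustrative:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned long zero_pages_sharing;
	FILE *f = fopen("/sys/kernel/mm/ksm/zero_pages_sharing", "r");

	if (!f) {
		/* Kernel without this patch, or KSM not built in */
		perror("fopen");
		return EXIT_FAILURE;
	}
	if (fscanf(f, "%lu", &zero_pages_sharing) != 1) {
		fclose(f);
		return EXIT_FAILURE;
	}
	fclose(f);
	/* Each unit is one PTE that KSM has mapped to the kernel zero page */
	printf("KSM-placed zero pages: %lu\n", zero_pages_sharing);
	return EXIT_SUCCESS;
}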
---
 mm/ksm.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ab04b44679c8..1fa668e1fe82 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -276,6 +276,9 @@ static unsigned int zero_checksum __read_mostly;
 /* Whether to merge empty (zeroed) pages with actual zero pages */
 static bool ksm_use_zero_pages __read_mostly;
 
+/* The number of zero pages placed by KSM use_zero_pages */
+static unsigned long ksm_zero_pages_sharing;
+
 #ifdef CONFIG_NUMA
 /* Zeroed when merging across nodes is not allowed */
 static unsigned int ksm_merge_across_nodes = 1;
@@ -789,8 +792,10 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
  */
 static inline void clean_rmap_item_zero_flag(struct ksm_rmap_item *rmap_item)
 {
-	if (rmap_item->address & ZERO_PAGE_FLAG)
+	if (rmap_item->address & ZERO_PAGE_FLAG) {
+		ksm_zero_pages_sharing--;
 		rmap_item->address &= PAGE_MASK;
+	}
 }
 
 /* Only called when rmap_item is going to be freed */
@@ -2109,8 +2114,10 @@ static int try_to_merge_with_kernel_zero_page(struct ksm_rmap_item *rmap_item,
 	if (vma) {
 		err = try_to_merge_one_page(vma, page,
 					ZERO_PAGE(rmap_item->address));
-		if (!err)
+		if (!err) {
 			rmap_item->address |= ZERO_PAGE_FLAG;
+			ksm_zero_pages_sharing++;
+		}
 	} else {
 		/* If the vma is out of date, we do not need to continue. */
 		err = 0;
@@ -3230,6 +3237,13 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
 }
 KSM_ATTR_RO(pages_volatile);
 
+static ssize_t zero_pages_sharing_show(struct kobject *kobj,
+				struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%ld\n", ksm_zero_pages_sharing);
+}
+KSM_ATTR_RO(zero_pages_sharing);
+
 static ssize_t stable_node_dups_show(struct kobject *kobj,
 				     struct kobj_attribute *attr, char *buf)
 {
@@ -3285,6 +3299,7 @@ static struct attribute *ksm_attrs[] = {
 	&pages_sharing_attr.attr,
 	&pages_unshared_attr.attr,
 	&pages_volatile_attr.attr,
+	&zero_pages_sharing_attr.attr,
 	&full_scans_attr.attr,
 #ifdef CONFIG_NUMA
 	&merge_across_nodes_attr.attr,
-- 
2.15.2