From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Alexander Viro, Andrew Morton, Matthew Wilcox, Vlastimil Babka,
	Yang Shi, Suren Baghdasaryan, Yu Zhao, Zhaoyang Huang
Subject: [PATCH] mm: introduce statistic for inode's gen&tier
Date: Tue, 27 Jun 2023 17:17:18 +0800
Message-ID: <1687857438-29142-1-git-send-email-zhaoyang.huang@unisoc.com>

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Since MGLRU tracks a page's activity more precisely than before, via the
page's generation and tier, introduce per-inode statistics that accumulate
these two properties over all of the inode's page-cache pages. This gives
other mechanisms, such as madvise, a way to judge an inode's overall
activity.

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 fs/proc/task_mmu.c        |  9 +++++++++
 include/linux/fs.h        |  2 ++
 include/linux/mm_inline.h | 14 ++++++++++++++
 mm/filemap.c              | 11 +++++++++++
 mm/swap.c                 |  1 +
 5 files changed, 37 insertions(+)
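
For illustration only (not part of this patch): a minimal userspace sketch of
how the new statistics could be consumed. It assumes the /proc/<pid>/maps
layout produced by the seq_put_hex_ll() calls in the task_mmu.c hunk below,
i.e. an "nrpages:gen:tier" hex triple printed right after the inode column.
The program name and its field handling are hypothetical.

/* maps_gen_tier.c - hypothetical reader for the nrpages:gen:tier fields */
#include <stdio.h>

int main(void)
{
	char line[1024];
	FILE *fp = fopen("/proc/self/maps", "r");

	if (!fp) {
		perror("fopen");
		return 1;
	}

	while (fgets(line, sizeof(line), fp)) {
		unsigned long start, end, pgoff, ino;
		unsigned long nrpages, gen, tier;
		unsigned int major, minor;
		char perms[5];

		/* start-end perms pgoff dev:ino nrpages:gen:tier ... */
		if (sscanf(line, "%lx-%lx %4s %lx %x:%x %lu %lx:%lx:%lx",
			   &start, &end, perms, &pgoff, &major, &minor,
			   &ino, &nrpages, &gen, &tier) == 10 && ino)
			printf("ino %lu: nrpages=%lu gen=%lu tier=%lu\n",
			       ino, nrpages, gen, tier);
	}

	fclose(fp);
	return 0;
}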

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e35a039..3ed30ef 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -283,17 +283,26 @@ static void show_vma_header_prefix(struct seq_file *m,
 	unsigned long start, end;
 	dev_t dev = 0;
 	const char *name = NULL;
+	long nrpages = 0, gen = 0, tier = 0;
 
 	if (file) {
 		struct inode *inode = file_inode(vma->vm_file);
 		dev = inode->i_sb->s_dev;
 		ino = inode->i_ino;
 		pgoff = ((loff_t)vma->vm_pgoff) << PAGE_SHIFT;
+		nrpages = inode->i_mapping->nrpages;
+		gen = atomic_long_read(&inode->i_mapping->gen);
+		tier = atomic_long_read(&inode->i_mapping->tier);
 	}
 
 	start = vma->vm_start;
 	end = vma->vm_end;
 	show_vma_header_prefix(m, start, end, flags, pgoff, dev, ino);
+
+	seq_put_hex_ll(m, NULL, nrpages, 8);
+	seq_put_hex_ll(m, ":", gen, 8);
+	seq_put_hex_ll(m, ":", tier, 8);
+
 	if (mm)
 		anon_name = anon_vma_name(vma);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c1769a2..4f4c3a2 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -434,6 +434,8 @@ struct address_space {
 	struct rb_root_cached	i_mmap;
 	struct rw_semaphore	i_mmap_rwsem;
 	unsigned long		nrpages;
+	atomic_long_t		gen;
+	atomic_long_t		tier;
 	pgoff_t			writeback_index;
 	const struct address_space_operations *a_ops;
 	unsigned long		flags;
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ff3f3f2..f68bd06 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -307,6 +307,20 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 	return false;
 }
 
+static inline int lru_tier_from_refs(int refs)
+{
+	return 0;
+}
+
+static inline int folio_lru_refs(struct folio *folio)
+{
+	return 0;
+}
+
+static inline int folio_lru_gen(struct folio *folio)
+{
+	return 0;
+}
 #endif /* CONFIG_LRU_GEN */
 
 static __always_inline
diff --git a/mm/filemap.c b/mm/filemap.c
index c4d4ace..a1c68a9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -45,6 +45,7 @@
 #include
 #include
 #include "internal.h"
+#include
 
 #define CREATE_TRACE_POINTS
 #include
@@ -126,6 +127,9 @@ static void page_cache_delete(struct address_space *mapping,
 {
 	XA_STATE(xas, &mapping->i_pages, folio->index);
 	long nr = 1;
+	int refs = folio_lru_refs(folio);
+	int tier = lru_tier_from_refs(refs);
+	int gen = folio_lru_gen(folio);
 
 	mapping_set_update(&xas, mapping);
@@ -143,6 +147,8 @@ static void page_cache_delete(struct address_space *mapping,
 	folio->mapping = NULL;
 	/* Leave page->index set: truncation lookup relies upon it */
 	mapping->nrpages -= nr;
+	atomic_long_sub(gen, &mapping->gen);
+	atomic_long_sub(tier, &mapping->tier);
 }
 
 static void filemap_unaccount_folio(struct address_space *mapping,
@@ -844,6 +850,9 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 	int huge = folio_test_hugetlb(folio);
 	bool charged = false;
 	long nr = 1;
+	int refs = folio_lru_refs(folio);
+	int tier = lru_tier_from_refs(refs);
+	int gen = folio_lru_gen(folio);
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
@@ -898,6 +907,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 			goto unlock;
 
 		mapping->nrpages += nr;
+		atomic_long_add(gen, &mapping->gen);
+		atomic_long_add(tier, &mapping->tier);
 
 		/* hugetlb pages do not participate in page cache accounting */
 		if (!huge) {
diff --git a/mm/swap.c b/mm/swap.c
index 70e2063..6322c1c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -468,6 +468,7 @@ static void folio_inc_refs(struct folio *folio)
 		new_flags += BIT(LRU_REFS_PGOFF);
 		new_flags |= old_flags & ~LRU_REFS_MASK;
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
+	atomic_long_inc(&folio->mapping->tier);
 }
 #else
 static void folio_inc_refs(struct folio *folio)
-- 
1.9.1