From: Muchun Song <songmuchun@bytedance.com>
To: gregkh@linuxfoundation.org, rafael@kernel.org, adobriyan@gmail.com,
    akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@kernel.org,
    vdavydov.dev@gmail.com, hughd@google.com, will@kernel.org, guro@fb.com,
    rppt@kernel.org, tglx@linutronix.de, esyr@redhat.com, peterx@redhat.com,
    krisman@collabora.com, surenb@google.com, avagin@openvz.org,
    elver@google.com, rdunlap@infradead.org, iamjoonsoo.kim@lge.com
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, cgroups@vger.kernel.org,
    Muchun Song <songmuchun@bytedance.com>
Subject: [RESEND PATCH v2 11/12] mm: memcontrol: make the slab calculation consistent
Date: Sun, 6 Dec 2020 18:14:50 +0800
Message-Id: <20201206101451.14706-12-songmuchun@bytedance.com>
In-Reply-To: <20201206101451.14706-1-songmuchun@bytedance.com>
References: <20201206101451.14706-1-songmuchun@bytedance.com>

Although the ratio of the slab counters is one, we should still read the
ratio from the related memory_stats entry instead of hard-coding it. Also,
the local variable size already holds the slab_unreclaimable value, so
there is no need to read it again.

The vmstat counters are now accounted in either pages or bytes, so we can
derive the unit at runtime and drop the ratio field from struct
memory_stat. This keeps the code clean and simple, and gets rid of the
awkward mix of static and runtime initialization of the memory_stats
table.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 108 ++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 70 insertions(+), 38 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 48d70c1ad301..49fbcf003bf5 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1493,48 +1493,71 @@ static bool mem_cgroup_wait_acct_move(struct mem_cgroup *memcg)
 
 struct memory_stat {
 	const char *name;
-	unsigned int ratio;
 	unsigned int idx;
 };
 
 static const struct memory_stat memory_stats[] = {
-	{ "anon", PAGE_SIZE, NR_ANON_MAPPED },
-	{ "file", PAGE_SIZE, NR_FILE_PAGES },
-	{ "kernel_stack", 1, NR_KERNEL_STACK_B },
-	{ "percpu", 1, MEMCG_PERCPU_B },
-	{ "sock", PAGE_SIZE, MEMCG_SOCK },
-	{ "shmem", PAGE_SIZE, NR_SHMEM },
-	{ "file_mapped", PAGE_SIZE, NR_FILE_MAPPED },
-	{ "file_dirty", PAGE_SIZE, NR_FILE_DIRTY },
-	{ "file_writeback", PAGE_SIZE, NR_WRITEBACK },
+	{ "anon", NR_ANON_MAPPED },
+	{ "file", NR_FILE_PAGES },
+	{ "kernel_stack", NR_KERNEL_STACK_B },
+	{ "percpu", MEMCG_PERCPU_B },
+	{ "sock", MEMCG_SOCK },
+	{ "shmem", NR_SHMEM },
+	{ "file_mapped", NR_FILE_MAPPED },
+	{ "file_dirty", NR_FILE_DIRTY },
+	{ "file_writeback", NR_WRITEBACK },
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	{ "anon_thp", PAGE_SIZE, NR_ANON_THPS },
-	{ "file_thp", PAGE_SIZE, NR_FILE_THPS },
-	{ "shmem_thp", PAGE_SIZE, NR_SHMEM_THPS },
+	{ "anon_thp", NR_ANON_THPS },
+	{ "file_thp", NR_FILE_THPS },
+	{ "shmem_thp", NR_SHMEM_THPS },
 #endif
-	{ "inactive_anon", PAGE_SIZE, NR_INACTIVE_ANON },
-	{ "active_anon", PAGE_SIZE, NR_ACTIVE_ANON },
-	{ "inactive_file", PAGE_SIZE, NR_INACTIVE_FILE },
-	{ "active_file", PAGE_SIZE, NR_ACTIVE_FILE },
-	{ "unevictable", PAGE_SIZE, NR_UNEVICTABLE },
-
-	/*
-	 * Note: The slab_reclaimable and slab_unreclaimable must be
-	 * together and slab_reclaimable must be in front.
- */ - { "slab_reclaimable", 1, NR_SLAB_RECLAIMABLE_B }, - { "slab_unreclaimable", 1, NR_SLAB_UNRECLAIMABLE_B }, + { "inactive_anon", NR_INACTIVE_ANON }, + { "active_anon", NR_ACTIVE_ANON }, + { "inactive_file", NR_INACTIVE_FILE }, + { "active_file", NR_ACTIVE_FILE }, + { "unevictable", NR_UNEVICTABLE }, + { "slab_reclaimable", NR_SLAB_RECLAIMABLE_B }, + { "slab_unreclaimable", NR_SLAB_UNRECLAIMABLE_B }, /* The memory events */ - { "workingset_refault_anon", 1, WORKINGSET_REFAULT_ANON }, - { "workingset_refault_file", 1, WORKINGSET_REFAULT_FILE }, - { "workingset_activate_anon", 1, WORKINGSET_ACTIVATE_ANON }, - { "workingset_activate_file", 1, WORKINGSET_ACTIVATE_FILE }, - { "workingset_restore_anon", 1, WORKINGSET_RESTORE_ANON }, - { "workingset_restore_file", 1, WORKINGSET_RESTORE_FILE }, - { "workingset_nodereclaim", 1, WORKINGSET_NODERECLAIM }, + { "workingset_refault_anon", WORKINGSET_REFAULT_ANON }, + { "workingset_refault_file", WORKINGSET_REFAULT_FILE }, + { "workingset_activate_anon", WORKINGSET_ACTIVATE_ANON }, + { "workingset_activate_file", WORKINGSET_ACTIVATE_FILE }, + { "workingset_restore_anon", WORKINGSET_RESTORE_ANON }, + { "workingset_restore_file", WORKINGSET_RESTORE_FILE }, + { "workingset_nodereclaim", WORKINGSET_NODERECLAIM }, }; +/* Translate stat items to the correct unit for memory.stat output */ +static int memcg_page_state_unit(int item) +{ + int unit; + + switch (item) { + case WORKINGSET_REFAULT_ANON: + case WORKINGSET_REFAULT_FILE: + case WORKINGSET_ACTIVATE_ANON: + case WORKINGSET_ACTIVATE_FILE: + case WORKINGSET_RESTORE_ANON: + case WORKINGSET_RESTORE_FILE: + case WORKINGSET_NODERECLAIM: + unit = 1; + break; + default: + unit = memcg_stat_item_in_bytes(item) ? 1 : PAGE_SIZE; + break; + } + + return unit; +} + +static inline unsigned long memcg_page_state_output(struct mem_cgroup *memcg, + int item) +{ + return memcg_page_state(memcg, item) * memcg_page_state_unit(item); +} + static char *memory_stat_format(struct mem_cgroup *memcg) { struct seq_buf s; @@ -1558,13 +1581,16 @@ static char *memory_stat_format(struct mem_cgroup *memcg) for (i = 0; i < ARRAY_SIZE(memory_stats); i++) { u64 size; - size = memcg_page_state(memcg, memory_stats[i].idx); - size *= memory_stats[i].ratio; + size = memcg_page_state_output(memcg, memory_stats[i].idx); seq_buf_printf(&s, "%s %llu\n", memory_stats[i].name, size); + /* + * We are printing reclaimable, unreclaimable of the slab + * and the sum of both. + */ if (unlikely(memory_stats[i].idx == NR_SLAB_UNRECLAIMABLE_B)) { - size = memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) + - memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B); + size += memcg_page_state_output(memcg, + NR_SLAB_RECLAIMABLE_B); seq_buf_printf(&s, "slab %llu\n", size); } } @@ -6358,6 +6384,12 @@ static int memory_stat_show(struct seq_file *m, void *v) } #ifdef CONFIG_NUMA +static inline unsigned long lruvec_page_state_output(struct lruvec *lruvec, + int item) +{ + return lruvec_page_state(lruvec, item) * memcg_page_state_unit(item); +} + static int memory_numa_stat_show(struct seq_file *m, void *v) { int i; @@ -6375,8 +6407,8 @@ static int memory_numa_stat_show(struct seq_file *m, void *v) struct lruvec *lruvec; lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid)); - size = lruvec_page_state(lruvec, memory_stats[i].idx); - size *= memory_stats[i].ratio; + size = lruvec_page_state_output(lruvec, + memory_stats[i].idx); seq_printf(m, " N%d=%llu", nid, size); } seq_putc(m, '\n'); -- 2.11.0