From: Yang Shi <yang.shi@linux.alibaba.com>
To: mhocko@suse.com, kirill.shutemov@linux.intel.com, hannes@cmpxchg.org,
        vbabka@suse.cz, rientjes@google.com, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org,
        linux-kernel@vger.kernel.org
Subject: [v2 PATCH -mm] mm: account deferred split THPs into MemAvailable
Date: Thu, 22 Aug 2019 01:55:25 +0800
Message-Id: <1566410125-66011-1-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Available memory is one of the most important metrics for memory
pressure.
Currently, deferred split THPs are not accounted into available memory,
but they are in fact reclaimable, like reclaimable slabs.  And they seem
quite common with typical workloads when THP is enabled.  A simple run
of the MariaDB test from mmtests with THP set to "always" shows it can
generate over fifteen thousand deferred split THPs (accumulating to
around 30G over a one-hour run, i.e. 75% of my VM's 40G of memory).  So
it looks worth accounting for them in MemAvailable.

Record the number of freeable normal pages of a deferred split THP in
its second tail page, and account it into KReclaimable.  Although THP
allocations are not exactly "kernel allocations", once they are unmapped
they are in fact kernel-only.  KReclaimable is already accounted into
MemAvailable.  When a deferred split THP gets split due to memory
pressure, or freed, just decrease the counter by the recorded number.

With this change, when running a program that populates 1G of address
space and then calls madvise(MADV_DONTNEED) on 511 of the 512 normal
pages of every THP, /proc/meminfo shows the deferred split THPs
accounted properly.

Populated, before calling madvise(MADV_DONTNEED):
MemAvailable:   43531960 kB
AnonPages:       1096660 kB
KReclaimable:      26156 kB
AnonHugePages:   1056768 kB

After calling madvise(MADV_DONTNEED):
MemAvailable:   44411164 kB
AnonPages:         50140 kB
KReclaimable:    1070640 kB
AnonHugePages:     10240 kB

Suggested-by: Vlastimil Babka
Cc: Michal Hocko
Cc: "Kirill A. Shutemov"
Cc: Johannes Weiner
Cc: David Rientjes
Signed-off-by: Yang Shi
---
 Documentation/filesystems/proc.txt |  4 ++--
 include/linux/huge_mm.h            |  7 +++++--
 include/linux/mm_types.h           |  3 ++-
 mm/huge_memory.c                   | 13 ++++++++++++-
 mm/rmap.c                          |  4 ++--
 5 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index 99ca040..93fc183 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -968,8 +968,8 @@ ShmemHugePages: Memory used by shared memory (shmem) and tmpfs allocated
                 with huge pages
 ShmemPmdMapped: Shared memory mapped into userspace with huge pages
 KReclaimable: Kernel allocations that the kernel will attempt to reclaim
-              under memory pressure. Includes SReclaimable (below), and other
-              direct allocations with a shrinker.
+              under memory pressure. Includes SReclaimable (below), deferred
+              split THPs, and other direct allocations with a shrinker.
         Slab: in-kernel data structures cache
 SReclaimable: Part of Slab, that might be reclaimed, such as caches
   SUnreclaim: Part of Slab, that cannot be reclaimed on memory pressure
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 61c9ffd..c194630 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -162,7 +162,7 @@ static inline int split_huge_page(struct page *page)
 {
 	return split_huge_page_to_list(page, NULL);
 }
-void deferred_split_huge_page(struct page *page);
+void deferred_split_huge_page(struct page *page, unsigned int nr);
 
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct page *page);
@@ -324,7 +324,10 @@ static inline int split_huge_page(struct page *page)
 {
 	return 0;
 }
-static inline void deferred_split_huge_page(struct page *page) {}
+static inline void deferred_split_huge_page(struct page *page, unsigned int nr)
+{
+}
+
 #define split_huge_pmd(__vma, __pmd, __address)	\
 	do { } while (0)
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 156640c..17e0fc5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -138,7 +138,8 @@ struct page {
 		};
 		struct {	/* Second tail page of compound page */
 			unsigned long _compound_pad_1;	/* compound_head */
-			unsigned long _compound_pad_2;
+			/* Freeable normal pages for deferred split shrinker */
+			unsigned long nr_freeable;
 			/* For both global and memcg */
 			struct list_head deferred_list;
 		};
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c9a596e..e04ac4d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -524,6 +524,7 @@ void prep_transhuge_page(struct page *page)
 
 	INIT_LIST_HEAD(page_deferred_list(page));
 	set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);
+	page[2].nr_freeable = 0;
 }
 
 static unsigned long __thp_get_unmapped_area(struct file *filp, unsigned long len,
@@ -2766,6 +2767,10 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (!list_empty(page_deferred_list(head))) {
 		ds_queue->split_queue_len--;
 		list_del(page_deferred_list(head));
+		__mod_node_page_state(page_pgdat(page),
+				      NR_KERNEL_MISC_RECLAIMABLE,
+				      -head[2].nr_freeable);
+		head[2].nr_freeable = 0;
 	}
 	if (mapping)
 		__dec_node_page_state(page, NR_SHMEM_THPS);
@@ -2816,11 +2821,14 @@ void free_transhuge_page(struct page *page)
 		ds_queue->split_queue_len--;
 		list_del(page_deferred_list(page));
 	}
+	__mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			      -page[2].nr_freeable);
+	page[2].nr_freeable = 0;
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 	free_compound_page(page);
 }
 
-void deferred_split_huge_page(struct page *page)
+void deferred_split_huge_page(struct page *page, unsigned int nr)
 {
 	struct deferred_split *ds_queue = get_deferred_split_queue(page);
 #ifdef CONFIG_MEMCG
@@ -2844,6 +2852,9 @@ void deferred_split_huge_page(struct page *page)
 		return;
 
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	page[2].nr_freeable += nr;
+	__mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			      nr);
 	if (list_empty(page_deferred_list(page))) {
 		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
 		list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2a..6008fab 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1286,7 +1286,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
 
 	if (nr) {
 		__mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, -nr);
-		deferred_split_huge_page(page);
+		deferred_split_huge_page(page, nr);
 	}
 }
 
@@ -1320,7 +1320,7 @@ void page_remove_rmap(struct page *page, bool compound)
 		clear_page_mlock(page);
 
 	if (PageTransCompound(page))
-		deferred_split_huge_page(compound_head(page));
+		deferred_split_huge_page(compound_head(page), 1);
 
 	/*
 	 * It would be tidy to reset the PageAnon mapping here,
-- 
1.8.3.1