Date: Mon, 1 Aug 2022 14:06:13 +0800
Subject: Re: [RFC V3 PATCH] mm: add last level page table numa info to /proc/pid/numa_pgtable
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Xin Hao <xhao@linux.alibaba.com>, willy@infradead.org
Cc: akpm@linux-foundation.org, adobriyan@gmail.com, keescook@chromium.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
In-Reply-To: <20220801032704.64356-1-xhao@linux.alibaba.com>
References: <20220801032704.64356-1-xhao@linux.alibaba.com>

Hi Xin,

On 8/1/2022 11:27 AM, Xin Hao wrote:
> In many data center servers, the shared memory architecture is
> Non-Uniform Memory Access (NUMA). Remote NUMA node data accesses
> often bring high latency, but it is easy to overlook that remote
> NUMA accesses to the page tables themselves can also lead to
> performance degradation.
>
> So add a new interface in /proc. This will help developers get more
> information about performance issues when they are caused by
> cross-NUMA page table placement.
>
> V2 -> V3
> 1, Fix compile warning.
>
> V1 -> V2
> 1, As Matthew Wilcox advised, simplify the code.
> 2, Do some code format fixes.

Please move the change history below your 'Signed-off-by' tag, after
the '---' marker.

>
> V2: https://lore.kernel.org/linux-mm/20220731155223.60238-1-xhao@linux.alibaba.com/
> V1: https://lore.kernel.org/linux-mm/YuVqdcY8Ibib2LJa@casper.infradead.org/T/
>
> Reported-by: kernel test robot
> Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
> ---
>  fs/proc/base.c     |  2 ++
>  fs/proc/internal.h |  1 +
>  fs/proc/task_mmu.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 90 insertions(+)
>
> diff --git a/fs/proc/base.c b/fs/proc/base.c
> index 8dfa36a99c74..487e82dd3275 100644
> --- a/fs/proc/base.c
> +++ b/fs/proc/base.c
> @@ -3224,6 +3224,7 @@ static const struct pid_entry tgid_base_stuff[] = {
>  	REG("maps", S_IRUGO, proc_pid_maps_operations),
>  #ifdef CONFIG_NUMA
>  	REG("numa_maps", S_IRUGO, proc_pid_numa_maps_operations),
> +	REG("numa_pgtable", S_IRUGO, proc_pid_numa_pgtable_operations),
>  #endif
>  	REG("mem", S_IRUSR|S_IWUSR, proc_mem_operations),
>  	LNK("cwd", proc_cwd_link),
> @@ -3571,6 +3572,7 @@ static const struct pid_entry tid_base_stuff[] = {
>  #endif
>  #ifdef CONFIG_NUMA
>  	REG("numa_maps", S_IRUGO, proc_pid_numa_maps_operations),
> +	REG("numa_pgtable", S_IRUGO, proc_pid_numa_pgtable_operations),
>  #endif
>  	REG("mem", S_IRUSR|S_IWUSR, proc_mem_operations),
>  	LNK("cwd", proc_cwd_link),
> diff --git a/fs/proc/internal.h b/fs/proc/internal.h
> index 06a80f78433d..e7ed9ef097b6 100644
> --- a/fs/proc/internal.h
> +++ b/fs/proc/internal.h
> @@ -296,6 +296,7 @@ struct mm_struct *proc_mem_open(struct inode *inode, unsigned int mode);
>
>  extern const struct file_operations proc_pid_maps_operations;
>  extern const struct file_operations proc_pid_numa_maps_operations;
> +extern const struct file_operations proc_pid_numa_pgtable_operations;
>  extern const struct file_operations proc_pid_smaps_operations;
>  extern const struct file_operations proc_pid_smaps_rollup_operations;
>  extern const struct file_operations proc_clear_refs_operations;
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 2d04e3470d4c..77b7a49757f5 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1999,4 +1999,91 @@ const struct file_operations proc_pid_numa_maps_operations = {
>  	.release = proc_map_release,
>  };
>
> +struct pgtable_numa_private {
> +	struct proc_maps_private proc_maps;
> +	unsigned long node[MAX_NUMNODES];
> +};
> +
> +static int gather_pgtable_numa_stats(pmd_t *pmd, unsigned long addr,
> +				     unsigned long end, struct mm_walk *walk)
> +{
> +	struct pgtable_numa_private *priv = walk->private;
> +	struct page *page;
> +	int nid;
> +
> +	if (pmd_huge(*pmd)) {
> +		page = virt_to_page(pmd);
> +	} else {
> +		page = pmd_page(*pmd);

You should validate that the pmd is valid and present before getting
the page table page, e.g.:

	if (pmd_none(*pmd) || !pmd_present(*pmd))

Another issue is that I think you should hold the pmd lock to call
pmd_page(), since after the validation of pmd_huge() the pmd entry can
be modified by other threads if you do not hold the pmd lock. (See the
rough sketch at the end of this mail.)

> +	}
> +
> +	nid = page_to_nid(page);
> +	priv->node[nid]++;
> +
> +	return 0;
> +}
> +
> +static const struct mm_walk_ops show_numa_pgtable_ops = {
> +	.pmd_entry = gather_pgtable_numa_stats,
> +};
> +
> +/*
> + * Display the page table allocated per node via /proc.
> + */
> +static int show_numa_pgtable(struct seq_file *m, void *v)
> +{
> +	struct pgtable_numa_private *numa_priv = m->private;
> +	struct vm_area_struct *vma = v;
> +	struct mm_struct *mm = vma->vm_mm;
> +	struct file *file = vma->vm_file;
> +	int nid;
> +
> +	if (!mm)
> +		return 0;
> +
> +	memset(numa_priv->node, 0, sizeof(numa_priv->node));
> +
> +	seq_printf(m, "%08lx ", vma->vm_start);
> +
> +	if (file) {
> +		seq_puts(m, " file=");
> +		seq_file_path(m, file, "\n\t= ");
> +	} else if (vma->vm_start <= mm->brk && vma->vm_end >= mm->start_brk) {
> +		seq_puts(m, " heap");
> +	} else if (is_stack(vma)) {
> +		seq_puts(m, " stack");
> +	}
> +
> +	/* mmap_lock is held by m_start */
> +	walk_page_vma(vma, &show_numa_pgtable_ops, numa_priv);
> +
> +	for_each_node_state(nid, N_MEMORY) {
> +		if (numa_priv->node[nid])
> +			seq_printf(m, " N%d=%lu", nid, numa_priv->node[nid]);
> +	}
> +	seq_putc(m, '\n');
> +
> +	return 0;
> +}
> +
> +static const struct seq_operations proc_pid_numa_pgtable_op = {
> +	.start = m_start,
> +	.next  = m_next,
> +	.stop  = m_stop,
> +	.show  = show_numa_pgtable,
> +};
> +
> +static int pid_numa_pgtable_open(struct inode *inode, struct file *file)
> +{
> +	return proc_maps_open(inode, file, &proc_pid_numa_pgtable_op,
> +			sizeof(struct pgtable_numa_private));
> +}
> +
> +const struct file_operations proc_pid_numa_pgtable_operations = {
> +	.open    = pid_numa_pgtable_open,
> +	.read    = seq_read,
> +	.llseek  = seq_lseek,
> +	.release = proc_map_release,
> +};
> +
> #endif /* CONFIG_NUMA */
> --
> 2.31.0
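
For the two points above, here is a rough, untested sketch of what I
have in mind (just to illustrate the idea, not a final implementation;
it keeps the function and field names from your patch, assumes it still
sits in fs/proc/task_mmu.c under the mmap_lock taken by m_start(), and
the choice of pmd_lock() here is my own assumption):

static int gather_pgtable_numa_stats(pmd_t *pmd, unsigned long addr,
				     unsigned long end, struct mm_walk *walk)
{
	struct pgtable_numa_private *priv = walk->private;
	struct page *page;
	spinlock_t *ptl;
	pmd_t pmdval;

	/*
	 * Untested sketch: hold the pmd lock so the entry cannot change
	 * between the checks below and pmd_page().
	 */
	ptl = pmd_lock(walk->mm, pmd);
	pmdval = *pmd;

	/* Skip empty or non-present entries. */
	if (pmd_none(pmdval) || !pmd_present(pmdval)) {
		spin_unlock(ptl);
		return 0;
	}

	if (pmd_huge(pmdval))
		/* Huge mapping: the PMD table itself is the last level. */
		page = virt_to_page(pmd);
	else
		/* Normal mapping: count the PTE table this entry points to. */
		page = pmd_page(pmdval);
	spin_unlock(ptl);

	priv->node[page_to_nid(page)]++;
	return 0;
}

Reading the entry once into a local pmdval also avoids it being changed
between the pmd_huge() check and the pmd_page() call, which was my main
concern above.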