Date: Fri, 4 May 2018 13:10:22 +0200
From: Michal Hocko
To: "prakash.sangappa"
Cc: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-api@vger.kernel.org, kirill.shutemov@linux.intel.com,
    n-horiguchi@ah.jp.nec.com, drepper@gmail.com, rientjes@google.com,
    Naoya Horiguchi, Dave Hansen
Subject: Re: [RFC PATCH] Add /proc/<pid>/numa_vamaps for numa node information
Message-ID: <20180504111022.GN4535@dhcp22.suse.cz>
References: <1525240686-13335-1-git-send-email-prakash.sangappa@oracle.com>
 <20180502143323.1c723ccb509c3497050a2e0a@linux-foundation.org>
 <20180503085741.GD4535@dhcp22.suse.cz>
 <40be68bb-8322-2bef-f454-22e1ab9029da@oracle.com>
In-Reply-To: <40be68bb-8322-2bef-f454-22e1ab9029da@oracle.com>
User-Agent: Mutt/1.9.5 (2018-04-13)
On Thu 03-05-18 15:37:39, prakash.sangappa wrote:
>
> On 05/03/2018 01:57 AM, Michal Hocko wrote:
> > On Wed 02-05-18 16:43:58, prakash.sangappa wrote:
> > >
> > > On 05/02/2018 02:33 PM, Andrew Morton wrote:
> > > > On Tue, 1 May 2018 22:58:06 -0700 Prakash Sangappa wrote:
> > > >
> > > > > For analysis purposes it is useful to have numa node information
> > > > > corresponding to the mapped address ranges of a process. Currently
> > > > > /proc/<pid>/numa_maps provides a list of numa nodes from where pages are
> > > > > allocated, per VMA of the process. This is not useful if a user needs to
> > > > > determine which numa node the mapped pages are allocated from for a
> > > > > particular address range. It would have helped if the numa node information
> > > > > presented in /proc/<pid>/numa_maps were broken down by VA ranges showing the
> > > > > exact numa node from where the pages have been allocated.
> > > > >
> > > > > The format of the /proc/<pid>/numa_maps file content is dependent on the
> > > > > /proc/<pid>/maps file content, as mentioned in the manpage: i.e. one line
> > > > > entry for every VMA, corresponding to entries in the /proc/<pid>/maps file.
> > > > > Therefore changing the output of /proc/<pid>/numa_maps may not be possible.
> > > > >
> > > > > Hence, this patch proposes adding a file /proc/<pid>/numa_vamaps which will
> > > > > provide a proper breakdown of VA ranges by the numa node id from where the
> > > > > mapped pages are allocated. For address ranges not having any pages mapped,
> > > > > a '-' is printed instead of a numa node id. In addition, this file will
> > > > > include most of the other information currently presented in
> > > > > /proc/<pid>/numa_maps. The additional information included is for
> > > > > convenience. If this is not preferred, the patch could be modified to just
> > > > > provide VA range to numa node information, as the rest of the information
> > > > > is already available through the /proc/<pid>/numa_maps file.
> > > > >
> > > > > Since the VA range to numa node information does not include the page's
> > > > > PFN, reading this file will not be restricted (i.e. it will not require
> > > > > CAP_SYS_ADMIN).
> > > > >
> > > > > Here is a snippet from the new file content showing the format:
> > > > >
> > > > > 00400000-00401000 N0=1 kernelpagesize_kB=4 mapped=1 file=/tmp/hmap2
> > > > > 00600000-00601000 N0=1 kernelpagesize_kB=4 anon=1 dirty=1 file=/tmp/hmap2
> > > > > 00601000-00602000 N0=1 kernelpagesize_kB=4 anon=1 dirty=1 file=/tmp/hmap2
> > > > > 7f0215600000-7f0215800000 N0=1 kernelpagesize_kB=2048 dirty=1 file=/mnt/f1
> > > > > 7f0215800000-7f0215c00000 - file=/mnt/f1
> > > > > 7f0215c00000-7f0215e00000 N0=1 kernelpagesize_kB=2048 dirty=1 file=/mnt/f1
> > > > > 7f0215e00000-7f0216200000 - file=/mnt/f1
> > > > > ..
> > > > > 7f0217ecb000-7f0217f20000 N0=85 kernelpagesize_kB=4 mapped=85 mapmax=51
> > > > > file=/usr/lib64/libc-2.17.so
> > > > > 7f0217f20000-7f0217f30000 - file=/usr/lib64/libc-2.17.so
> > > > > 7f0217f30000-7f0217f90000 N0=96 kernelpagesize_kB=4 mapped=96 mapmax=51
> > > > > file=/usr/lib64/libc-2.17.so
> > > > > 7f0217f90000-7f0217fb0000 - file=/usr/lib64/libc-2.17.so
> > > > > ..
> > > > >
> > > > > The 'pmap' command can be enhanced to include an option to show numa node
> > > > > information, which it can read from this new proc file. This will be a
> > > > > follow-on proposal.
> > > >
> > > > I'd like to hear rather more about the use-cases for this new
> > > > interface. Why do people need it, what is the end-user benefit, etc?
> > >
> > > This is mainly for debugging / performance analysis. The Oracle Database
> > > team is looking to use this information.
> >
> > But we do have an interface to query (e.g. move_pages) that your
> > application can use. I am really worried that the broken-out per-node
> > data can be really large (just take a large vma with interleaved policy
> > as an example). So is this really worth adding as a general-purpose proc
> > interface?
>
> I guess move_pages could be useful. There needs to be a tool or
> command which can read the numa node information using move_pages
> to be used to observe another process.

That should be trivial. You can get the vma ranges of interest from
/proc/<pid>/maps and then use move_pages to get more detailed information.

> From an observability point of view, one of the uses of the proposed
> new file 'numa_vamaps' was to modify the 'pmap' command to display numa
> node information broken down by address ranges. Would having pmap
> show numa node information be useful?

I do not have a usecase for that.
--
Michal Hocko
SUSE Labs