Date: Fri, 14 Sep 2018 07:56:37 +0200
From: Michal Hocko
To: "prakash.sangappa"
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, dave.hansen@intel.com,
    nao.horiguchi@gmail.com, akpm@linux-foundation.org,
    kirill.shutemov@linux.intel.com, khandual@linux.vnet.ibm.com,
    steven.sistare@oracle.com
Subject: Re: [PATCH V2 0/6] VA to numa node information
Message-ID: <20180914055637.GH20287@dhcp22.suse.cz>
References: <1536783844-4145-1-git-send-email-prakash.sangappa@oracle.com>
 <20180913084011.GC20287@dhcp22.suse.cz>
 <375951d0-f103-dec3-34d8-bbeb2f45f666@oracle.com>
In-Reply-To: <375951d0-f103-dec3-34d8-bbeb2f45f666@oracle.com>

On Thu 13-09-18 15:32:25, prakash.sangappa wrote:
> On 09/13/2018 01:40 AM, Michal Hocko wrote:
> > On Wed 12-09-18 13:23:58, Prakash Sangappa wrote:
> > > For analysis purposes it is useful to have NUMA node information
> > > corresponding to the mapped virtual address ranges of a process.
> > > Currently the file /proc/<pid>/numa_maps provides a list of the
> > > NUMA nodes from which pages are allocated, per VMA of a process.
> > > This is not sufficient if a user needs to determine which NUMA
> > > node the mapped pages are allocated from for a particular address
> > > range. It would help if the NUMA node information presented in
> > > /proc/<pid>/numa_maps were broken down by VA range, showing the
> > > exact NUMA node from which the pages have been allocated.
> > >
> > > The format of the /proc/<pid>/numa_maps file content depends on
> > > the /proc/<pid>/maps file content, as mentioned in the manpage,
> > > i.e. one line entry for every VMA, corresponding to the entries
> > > in the /proc/<pid>/maps file. Therefore changing the output of
> > > /proc/<pid>/numa_maps may not be possible.
> > >
> > > This patch set introduces the file /proc/<pid>/numa_vamaps, which
> > > provides a proper breakdown of VA ranges by the NUMA node id from
> > > which the mapped pages are allocated. For address ranges that do
> > > not have any pages mapped, a '-' is printed instead of a NUMA
> > > node id.
> > >
> > > It includes support for lseek, allowing seeking to a specific
> > > process virtual address (VA) from which the address range to NUMA
> > > node information can then be read.
> > >
> > > The new file /proc/<pid>/numa_vamaps will be governed by ptrace
> > > access mode PTRACE_MODE_READ_REALCREDS.
> > >
> > > See the following for the previous discussion of this proposal:
> > >
> > > https://marc.info/?t=152524073400001&r=1&w=2
> >
> > It would be really great to give a short summary of the previous
> > discussion. E.g. why do we need a proc interface in the first place
> > when we already have an API to query for the information you are
> > proposing to export [1]?
> >
> > [1] http://lkml.kernel.org/r/20180503085741.GD4535@dhcp22.suse.cz
>
> The proc interface provides an efficient way to export address range
> to NUMA node id mapping information compared to using the API.

Do you have any numbers?

> For example, for sparsely populated mappings, if a VMA has large
> portions without any physical pages mapped, the page walk done
> through the /proc file interface can skip over non-existent PMDs and
> PTEs, whereas using the API the application would have to scan the
> entire VMA in page-size units.

What prevents you from pre-filtering by reading /proc/$pid/maps to get
ranges of interest?

> Also, VMAs containing THP pages can have a mix of 4k pages and
> hugepages. The page walk would be efficient in determining that a
> mapping is a THP hugepage and stepping over it, whereas using the API
> the application would not know what page size is used for a given VA
> and so would have to again scan the VMA in units of the 4k page size.

Why does this matter for something that is meant for analysis purposes?
Reading the file for the whole address space is far from a free
operation. Is the page walk optimization really essential for
usability? Moreover, what prevents the move_pages implementation from
being clever about the page walk itself? In other words, why would we
want to add a new API rather than make the existing one faster for
everybody?
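To be concrete, the sketch below (untested, with error handling and the
chunking of very large VMAs left out; the report_range helper is only
for illustration) pre-filters the ranges from /proc/$pid/maps and lets
move_pages(2) with a NULL nodes argument report the node of each
resident page:

/*
 * Untested sketch: print the NUMA node for each page of a target
 * process, pre-filtered through /proc/<pid>/maps. Build with -lnuma.
 * Querying a foreign pid needs CAP_SYS_NICE or a matching uid.
 */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static void report_range(pid_t pid, unsigned long start, unsigned long end)
{
	long psize = sysconf(_SC_PAGESIZE);
	unsigned long i, npages = (end - start) / psize;
	void **pages = malloc(npages * sizeof(*pages));
	int *status = malloc(npages * sizeof(*status));

	if (!npages || !pages || !status)
		goto out;
	for (i = 0; i < npages; i++)
		pages[i] = (void *)(start + i * psize);
	/*
	 * nodes == NULL means "query placement": status[i] is set to the
	 * node id of page i, or a negative errno (e.g. -ENOENT) if no
	 * page is mapped at that address.
	 */
	if (move_pages(pid, npages, pages, NULL, status, 0) == 0)
		for (i = 0; i < npages; i++)
			printf("%#lx\t%d\n", start + i * psize, status[i]);
out:
	free(pages);
	free(status);
}

int main(int argc, char **argv)
{
	char path[64], line[512];
	unsigned long start, end;
	FILE *f;

	if (argc < 2)
		return 1;
	snprintf(path, sizeof(path), "/proc/%s/maps", argv[1]);
	f = fopen(path, "r");
	while (f && fgets(line, sizeof(line), f))
		if (sscanf(line, "%lx-%lx", &start, &end) == 2)
			report_range(atoi(argv[1]), start, end);
	return 0;
}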
> If this sounds reasonable, I can add it to the commit / patch
> description.

This all is absolutely _essential_ for any new API proposed. Remember
that once we add a new user interface, we have to maintain it forever.
We used to be too relaxed when adding new proc files in the past and it
backfired many times already.
--
Michal Hocko
SUSE Labs