Date: Tue, 28 May 2019 12:33:12 +0200
From: Michal Hocko
To: Daniel Colascione
Cc: Minchan Kim, Andrew Morton, LKML, linux-mm, Johannes Weiner,
    Tim Murray, Joel Fernandes, Suren Baghdasaryan, Shakeel Butt,
    Sonny Rao, Brian Geffon, Linux API
Subject: Re: [RFC 7/7] mm: madvise support MADV_ANONYMOUS_FILTER and MADV_FILE_FILTER
Message-ID: <20190528103312.GV1658@dhcp22.suse.cz>
References: <20190521062628.GE32329@dhcp22.suse.cz>
 <20190527075811.GC6879@google.com>
 <20190527124411.GC1658@dhcp22.suse.cz>
 <20190528032632.GF6879@google.com>
 <20190528062947.GL1658@dhcp22.suse.cz>
 <20190528081351.GA159710@google.com>
 <20190528084927.GB159710@google.com>
 <20190528090821.GU1658@dhcp22.suse.cz>

On Tue 28-05-19 02:39:03, Daniel Colascione wrote:
> On Tue, May 28, 2019 at 2:08 AM Michal Hocko wrote:
> >
> > On Tue 28-05-19 17:49:27, Minchan Kim wrote:
> > > On Tue, May 28, 2019 at 01:31:13AM -0700, Daniel Colascione wrote:
> > > > On Tue, May 28, 2019 at 1:14 AM Minchan Kim wrote:
> > > > > > if we went with the per vma fd approach then you would get this
> > > > > > feature automatically because map_files would refer to file backed
> > > > > > mappings while map_anon could refer only to anonymous mappings.
> > > > >
> > > > > The reason to add such a filter option is to avoid the parsing
> > > > > overhead, so map_anon wouldn't be helpful.
> > > >
> > > > Without chiming in on whether the filter option is a good idea, I'd
> > > > like to suggest providing an efficient binary interface for pulling
> > > > memory map information out of processes. Some single-system-call
> > > > method for retrieving a binary snapshot of a process's address space,
> > > > complete with attributes (selectable, like statx?) for each VMA, would
> > > > reduce complexity and increase performance in a variety of areas,
> > > > e.g., Android memory map debugging commands.
> > >
> > > I agree it's the best we can get *generally*.
> > > Michal, any opinion?
> >
> > I am not really sure this is directly related. I think the primary
> > question that we have to sort out first is whether we want to have
> > the remote madvise call process based or vma fd based. This is an
> > important distinction wrt. usability. I have only seen pid vs. pidfd
> > discussions so far, unfortunately.
>
> I don't think the vma fd approach is viable. We have some processes
> with a *lot* of VMAs --- system_server had 4204 when I checked just
> now (and that's typical) --- and an FD operation per VMA would be
> excessive.

What do you mean by excessive here? Do you expect the process to have
them all open at once?

> VMAs also come and go pretty easily depending on changes in
> protections and various faults.

Is this really so different from /proc/<pid>/map_files?

[...]

> > An interface to query address range information is a separate,
> > although related, topic. We have /proc/<pid>/[s]maps for that right
> > now and I understand it is not a general win for all usecases because
> > it tends to be slow for some. I can see how /proc/<pid>/map_anons
> > could provide per-vma information in a binary form via an fd based
> > interface. But I would rather not conflate those two discussions much
> > - well, except if it could give one of the approaches more
> > justification - but let's focus on the madvise part first.
>
> I don't think it's a good idea to focus on one feature in a
> multi-feature change when the interactions between features can be
> very important for the overall design of the multi-feature system and
> the design of each feature.
>
> Here's my thinking on the high-level design:
>
> I'm imagining an address-range system that would work like this: we'd
> create some kind of process_vm_getinfo(2) system call [1] that would
> accept a statx-like attribute map and a pid/fd parameter as input and
> return, on output, two things: 1) an array [2] of VMA descriptors
> containing the requested information, and 2) a VMA configuration
> sequence number. We'd then have process_madvise() and other
> cross-process VM interfaces accept both address ranges and this
> sequence number; they'd succeed only if the VMA configuration sequence
> number is still current, i.e., the target process hasn't changed its
> VMA configuration (implicitly or explicitly) since the call to
> process_vm_getinfo().

The sequence number is essentially a cookie that is transparent to
userspace, right? If so, how does it differ from an fd (returned from
/proc/<pid>/map_{anons,files}/range), which is a cookie itself and can
be used to revalidate when the operation is requested, failing if
something has changed? Moreover, we already have an fd based madvise
syscall, so there shouldn't really be a large need to add a new set of
syscalls.
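To make the comparison concrete, here is a rough userspace-side sketch
of the snapshot-plus-revalidate pattern being proposed. None of this
exists today: process_vm_getinfo(), the sequence-number-aware
process_madvise_seq(), the vma_info layout, the attribute bits and the
-ESTALE convention are all invented here for illustration, loosely
following the statx-like attribute mask described above.

        /*
         * Hypothetical sketch only: neither "syscall" below exists.
         * Names, layouts, attribute bits and the -ESTALE convention
         * are made up to paraphrase the proposal; both interfaces are
         * shown returning 0 or a negative errno, kernel-style.
         */
        #include <errno.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <sys/types.h>

        #define VMA_ATTR_RANGE  (1u << 0)       /* want start/end */
        #define VMA_ATTR_FLAGS  (1u << 1)       /* want flags */

        #define VMA_INFO_ANON   (1u << 0)       /* VMA is anonymous */

        struct vma_info {                       /* one descriptor per VMA */
                uint64_t start;
                uint64_t end;
                uint32_t flags;                 /* e.g. VMA_INFO_ANON */
                uint32_t pad;
        };

        /*
         * Snapshot the target's VMAs: fill at most *nr descriptors with
         * the attributes requested in attr_mask, set *nr to the number
         * returned and *seq to the current VMA configuration sequence
         * number.
         */
        int process_vm_getinfo(pid_t pid, unsigned int attr_mask,
                               struct vma_info *vmas, size_t *nr,
                               uint64_t *seq);

        /*
         * madvise() on another process's range, valid only while the
         * target's VMA configuration sequence number still equals seq;
         * fails with -ESTALE once the target has remapped anything.
         */
        int process_madvise_seq(pid_t pid, uint64_t start, size_t len,
                                int advice, uint64_t seq);

        /* Caller-side pattern: snapshot, act, re-snapshot on -ESTALE. */
        static int advise_anon_vmas(pid_t pid, int advice)
        {
                struct vma_info vmas[4096];     /* enough for system_server */
                uint64_t seq;
                size_t i, nr;
                int ret;

        retry:
                nr = sizeof(vmas) / sizeof(vmas[0]);
                ret = process_vm_getinfo(pid, VMA_ATTR_RANGE | VMA_ATTR_FLAGS,
                                         vmas, &nr, &seq);
                if (ret)
                        return ret;

                for (i = 0; i < nr; i++) {
                        if (!(vmas[i].flags & VMA_INFO_ANON))
                                continue;
                        ret = process_madvise_seq(pid, vmas[i].start,
                                                  vmas[i].end - vmas[i].start,
                                                  advice, seq);
                        if (ret == -ESTALE)
                                goto retry;     /* target changed its mm */
                        if (ret)
                                return ret;
                }
                return 0;
        }

An fd based scheme would collapse the (pid, seq) pair into the fd
itself: the kernel revalidates when it resolves the fd back to a range
at madvise time, and the caller sees the same kind of failure if the
range has gone away.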
[...]

> Or maybe the whole sequence number thing is overkill and we don't need
> atomicity? But if there's a concern that A shouldn't operate on B's
> memory without knowing what it's operating on, then the scheme I've
> proposed above solves this knowledge problem in a pretty lightweight
> way.

This is the main question here. Do we really want to enforce an
external synchronization between the two processes to make sure that
they are both operating on the same range - i.e., protect against the
range going away and being reused for a different purpose? Right now
that wouldn't be fatal, because both operations are non-destructive,
but I can imagine that more madvise operations will follow (including
destructive ones), because people will simply find usecases for them.
This should be reflected in the proposed API.
-- 
Michal Hocko
SUSE Labs