Date: Thu, 30 May 2019 08:57:55 +0200
From: Michal Hocko
To: Minchan Kim
Cc: Daniel Colascione, Andrew Morton, LKML, linux-mm, Johannes Weiner,
	Tim Murray, Joel Fernandes, Suren Baghdasaryan, Shakeel Butt,
	Sonny Rao, Brian Geffon, Linux API
Subject: Re: [RFC 6/7] mm: extend process_madvise syscall to support vector array
Message-ID: <20190530065755.GD6703@dhcp22.suse.cz>
In-Reply-To: <20190530021748.GE229459@google.com>
References: <20190520035254.57579-7-minchan@kernel.org>
 <20190520092258.GZ6836@dhcp22.suse.cz>
 <20190521024820.GG10039@google.com>
 <20190521062421.GD32329@dhcp22.suse.cz>
 <20190521102613.GC219653@google.com>
 <20190521103726.GM32329@dhcp22.suse.cz>
 <20190527074940.GB6879@google.com>
 <20190529103352.GD18589@dhcp22.suse.cz>
 <20190530021748.GE229459@google.com>
On Thu 30-05-19 11:17:48, Minchan Kim wrote:
> On Wed, May 29, 2019 at 12:33:52PM +0200, Michal Hocko wrote:
> > On Wed 29-05-19 03:08:32, Daniel Colascione wrote:
> > > On Mon, May 27, 2019 at 12:49 AM Minchan Kim wrote:
> > > >
> > > > On Tue, May 21, 2019 at 12:37:26PM +0200, Michal Hocko wrote:
> > > > > On Tue 21-05-19 19:26:13, Minchan Kim wrote:
> > > > > > On Tue, May 21, 2019 at 08:24:21AM +0200, Michal Hocko wrote:
> > > > > > > On Tue 21-05-19 11:48:20, Minchan Kim wrote:
> > > > > > > > On Mon, May 20, 2019 at 11:22:58AM +0200, Michal Hocko wrote:
> > > > > > > > > [Cc linux-api]
> > > > > > > > >
> > > > > > > > > On Mon 20-05-19 12:52:53, Minchan Kim wrote:
> > > > > > > > > > Currently, the process_madvise syscall works on only one
> > > > > > > > > > address range, so users have to call the syscall several
> > > > > > > > > > times to give hints to multiple address ranges.
> > > > > > > > >
> > > > > > > > > Is that a problem? How big of a problem? Any numbers?
> > > > > > > >
> > > > > > > > We easily have 2000+ VMAs, so it's not a trivial overhead. I
> > > > > > > > will come up with numbers in the description at respin.
> > > > > > >
> > > > > > > Does this really have to be a fast operation? I would expect
> > > > > > > the monitor is by no means a fast path. The system call
> > > > > > > overhead is not what it used to be, sigh, but still, for
> > > > > > > something that is not a hot path it should be tolerable,
> > > > > > > especially when the whole operation is quite expensive on its
> > > > > > > own (wrt. the syscall entry/exit).
> > > > > >
> > > > > > What's different with process_vm_[readv|writev] and vmsplice?
> > > > > > If the range that needs to be covered is large, a vector
> > > > > > operation makes sense to me.
> > > > >
> > > > > I am not saying that the vector API is wrong. All I am trying to
> > > > > say is that the benefit is not really clear so far. If you want
> > > > > to push it through then you had better get some supporting data.
> > > >
> > > > I measured 1000 madvise syscalls vs. one vectored syscall with 1000
> > > > ranges on a modern ARM64 device. Although I saw a 15% improvement,
> > > > the absolute gain is just 1ms, so I don't think it's worth
> > > > supporting. I will drop vector support at the next revision.
> > >
> > > Please do keep the vector support. Absolute timing is misleading,
> > > since in a tight loop you're not going to contend on mmap_sem. We've
> > > seen tons of improvements in things like camera start come from
> > > coalescing mprotect calls, with the gains coming from taking and
> > > releasing various locks a lot less often and bouncing around less
> > > on the contended lock paths. Raw throughput doesn't tell the whole
> > > story, especially on mobile.
> >
> > This will always be a double-edged sword. Holding a lock for longer
> > can improve the throughput of a single call, but it makes the latency
> > much worse for anybody contending on that lock.
> >
> > Besides that, please do not overcomplicate the thing from the very
> > beginning. Let's start with a simple and well defined remote madvise
> > alternative first and build a vector API on top with some numbers
> > based on _real_ workloads.
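[Editorial note: to make the loop-vs-vector trade-off above concrete,
here is a minimal userspace sketch in C. madvise(2) is the real
syscall; the vectored form is hypothetical, loosely modeled on the
iovec convention of process_vm_readv(2), and does not claim to match
the exact interface of this RFC.]

	#include <sys/mman.h>	/* madvise() */
	#include <sys/uio.h>	/* struct iovec */

	/*
	 * Variant A: one syscall per range. N entry/exit transitions,
	 * and the kernel takes and drops mmap_sem once per call, so
	 * contending threads can slip in between iterations.
	 */
	static void advise_ranges(const struct iovec *vec, size_t n, int advice)
	{
		for (size_t i = 0; i < n; i++)
			madvise(vec[i].iov_base, vec[i].iov_len, advice);
	}

	/*
	 * Variant B (hypothetical vectored form): a single transition
	 * covering all N ranges, at the cost of potentially holding
	 * mmap_sem longer inside the kernel -- the latency concern
	 * raised above:
	 *
	 *	process_madvise(pidfd, vec, n, advice, 0);
	 */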
> First time, I didn't think about the atomicity of the address range
> race, because MADV_COLD/PAGEOUT is not critical wrt. the race.
> However, you raised the atomicity issue because people could easily
> extend the hints to destructive ones. I agree with that, and that's
> why we discussed how to handle the race and Daniel came up with a
> good idea:

Just for the clarification, I didn't really mean atomicity but rather
_consistency_ (essentially time-of-check to time-of-use consistency).

> - vma configuration seq number via process_getinfo(2).
>
> We discussed the race issue without _real_ workloads/requests because
> it's common sense that people might extend the syscall later.
>
> The same applies here. For current workloads we don't need vector
> support from a performance point of view, based on my experiment.
> However, that was a rather limited experiment. Some configurations
> might have 10000+ VMAs or a really slow CPU.
>
> Furthermore, I want to have vector support because of the atomicity
> issue, if it's really the one we should consider.
> With vector support in the API and the vma configuration sequence
> number from Daniel, we could make operations on multiple address
> ranges atomic.

I am not sure what you mean here. Perform all ranges atomically wrt.
other address space modifications? If yes, I am not sure we want that
semantic, because it can cause really long stalls for other
operations, but that is a discussion on its own and I would rather
focus on a simple interface first.

> However, since we are not introducing a vector at this moment, we
> would need to introduce *another syscall* later to be able to handle
> multiple ranges all at once atomically, if that's okay.

Agreed.

> Other thought:
> Maybe we could extend an address-range batch syscall to cover other
> MM syscalls like mmap/munmap/madvise/mprotect and so on, because
> there are multiple users that would benefit from such a general
> batching mechanism.

Again a discussion on its own ;)

-- 
Michal Hocko
SUSE Labs
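[Editorial note: a sketch of the time-of-check/time-of-use scheme
referenced above. process_getinfo(2) was only proposed in this thread;
the PROC_INFO_VMA_SEQ request, the seq argument to process_madvise,
the -EAGAIN retry convention, and the MAX_RANGES/collect_cold_ranges()
helpers are all hypothetical names invented for illustration.]

	/*
	 * 1. Read the target's vma configuration sequence number.
	 * 2. Parse /proc/<pid>/maps and pick the ranges to reclaim.
	 * 3. Submit the whole batch together with the expected
	 *    sequence number; assume the kernel fails the batch with
	 *    -EAGAIN if the address space changed in between, in
	 *    which case we re-read and retry.
	 */
	struct iovec vec[MAX_RANGES];
	size_t n;
	uint64_t seq;
	ssize_t ret;

	do {
		seq = process_getinfo(pidfd, PROC_INFO_VMA_SEQ);  /* hypothetical */
		n = collect_cold_ranges(pidfd, vec, MAX_RANGES);  /* parse maps */
		ret = process_madvise(pidfd, vec, n, MADV_COLD, seq);
	} while (ret < 0 && errno == EAGAIN);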