Subject: Re: [RFC v2 PATCH 2/2] mm: mmap: zap pages with read mmap_sem for large mapping
To: Michal Hocko
Cc: Peter Zijlstra, Nadav Amit, Matthew Wilcox, ldufour@linux.vnet.ibm.com, Andrew Morton, Ingo Molnar, acme@kernel.org, alexander.shishkin@linux.intel.com, jolsa@redhat.com, namhyung@kernel.org, "open list:MEMORY MANAGEMENT", linux-kernel@vger.kernel.org
From: Yang Shi
Message-ID: <7827f941-aeb3-a44a-0711-bfc15ec1d912@linux.alibaba.com>
Date: Fri, 29 Jun 2018 09:50:08 -0700
In-Reply-To: <20180629113954.GB5963@dhcp22.suse.cz>

On 6/29/18 4:39 AM, Michal Hocko wrote:
> On Thu 28-06-18 17:59:25, Yang Shi wrote:
>>
>> On 6/28/18 12:10 PM, Yang Shi wrote:
>>>
>>> On 6/28/18 4:51 AM, Michal Hocko wrote:
>>>> On Wed 27-06-18 10:23:39, Yang Shi wrote:
>>>>> On 6/27/18 12:24 AM, Michal Hocko wrote:
>>>>>> On Tue 26-06-18 18:03:34, Yang Shi wrote:
>>>>>>> On 6/26/18 12:43 AM, Peter Zijlstra wrote:
>>>>>>>> On Mon, Jun 25, 2018 at 05:06:23PM -0700, Yang Shi wrote:
>>>>>>>>> By looking at this more deeply, we may not be able to cover the
>>>>>>>>> whole unmapping range with VM_DEAD; for example, the start addr
>>>>>>>>> may be in the middle of a vma. We can't set VM_DEAD on that vma
>>>>>>>>> since that would trigger SIGSEGV for the still-mapped area.
>>>>>>>>>
>>>>>>>>> Splitting can't be done with the read mmap_sem held, so maybe
>>>>>>>>> just set VM_DEAD on the non-overlapped vmas. Access to the
>>>>>>>>> overlapped vmas (first and last) will still have undefined
>>>>>>>>> behavior.
>>>>>>>> Acquire mmap_sem for writing, split, mark VM_DEAD, drop mmap_sem.
>>>>>>>> Acquire mmap_sem for reading, madv_free, drop mmap_sem. Acquire
>>>>>>>> mmap_sem for writing, free everything left, drop mmap_sem.
>>>>>>>>
>>>>>>>> ?
>>>>>>>>
>>>>>>>> Sure, you acquire the lock 3 times, but both write instances
>>>>>>>> should be 'short', and I suppose you can do a demote between 1
>>>>>>>> and 2 if you care.
>>>>>>> Thanks, Peter.
>>>>>>> Yes, by looking at the code and trying two different approaches,
>>>>>>> it looks like this approach is the most straightforward one.
>>>>>> Yes, you just have to be careful about the max vma count limit.
>>>>> Yes, we should just need to copy what do_munmap does, as below:
>>>>>
>>>>> if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
>>>>>         return -ENOMEM;
>>>>>
>>>>> If the max map count limit has been reached, it will return failure
>>>>> before zapping mappings.
>>>> Yeah, but as soon as you drop the lock and retake it, somebody might
>>>> have changed the address space and we might get an inconsistency.
>>>>
>>>> So I am wondering whether we really need upgrade_read (to promote the
>>>> read lock to a write lock) and do the
>>>>     down_write
>>>>     split & set up VM_DEAD
>>>>     downgrade_write
>>>>     unmap
>>>>     upgrade_read
>>>>     zap ptes
>>>>     up_write
>> Promoting to a write lock may be trouble. There might be other users in
>> the critical section with the read lock; we would have to wait for them
>> to finish.
> Yes. Is that a problem though?

Not a problem, but I'm just not sure how complicated it would be,
considering all the lock debug/lockdep stuff. And the behavior smells
like RCU.

>
>>> I suppose the address space can only be changed by mmap, mremap and
>>> mprotect. If so, we may utilize the new VM_DEAD flag: if the VM_DEAD
>>> flag is set on the vma, just return failure since it is being unmapped.
>>>
>>> Does that sound reasonable?
>> It looks like we just need to care about MAP_FIXED (mmap) and
>> MREMAP_FIXED (mremap), right?
>>
>> How about letting them return -EBUSY or -EAGAIN to notify the
>> application?
> Well, none of those is documented to return EBUSY, and EAGAIN already
> has a meaning for locked memory.
>
>> This changes the behavior a little bit: MAP_FIXED and mremap may fail
>> if they lose the race with munmap (if the mapping is larger than 1GB).
>> I'm not sure if any multi-threaded application uses MAP_FIXED and
>> MREMAP_FIXED heavily enough to run into the race condition. I guess it
>> should be rare to meet all the conditions needed to trigger the race.
>>
>> The programmer should be very cautious about MAP_FIXED/MREMAP_FIXED
>> since they may corrupt the process's own address space, as the man page
>> notes.
> Well, I suspect you are overcomplicating this a bit. This should be a
> really straightforward thing - well, except for VM_DEAD, which is quite
> tricky already. We should rather not spread this trickiness outside of
> the #PF path. And I would even try hard to start that part simple, to
> see whether it actually matters. Relying on races between threads
> without any locking is quite questionable already. Nobody has pointed
> to a sane use case so far.

I agree to keep it as simple as possible and then see whether it matters
or not. So, in v3 I will just touch the page fault path.