To: mhocko@kernel.org, rientjes@google.com
Cc: akpm@linux-foundation.org, aarcange@redhat.com, guro@fb.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch v2] mm, oom: fix concurrent munlock and oom reaper unmap
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
References: <201804180057.w3I0vieV034949@www262.sakura.ne.jp> <20180418075051.GO17484@dhcp22.suse.cz>
In-Reply-To: <20180418075051.GO17484@dhcp22.suse.cz>
Message-Id: <201804182049.EDJ21857.OHJOMOLFQVFFtS@I-love.SAKURA.ne.jp>
Date: Wed, 18 Apr 2018 20:49:11 +0900
X-Mailing-List: linux-kernel@vger.kernel.org

Michal Hocko wrote:
> On Tue 17-04-18 19:52:41, David Rientjes wrote:
> > Since exit_mmap() is done without the protection of mm->mmap_sem, it is
> > possible for the oom reaper to concurrently operate on an mm until
> > MMF_OOM_SKIP is set.
> >
> > This allows munlock_vma_pages_all() to concurrently run while the oom
> > reaper is operating on a vma.  Since munlock_vma_pages_range() depends on
> > clearing VM_LOCKED from vm_flags before actually doing the munlock to
> > determine if any other vmas are locking the same memory, the check for
> > VM_LOCKED in the oom reaper is racy.
> >
> > This is especially noticeable on architectures such as powerpc where
> > clearing a huge pmd requires serialize_against_pte_lookup().  If the pmd
> > is zapped by the oom reaper during follow_page_mask() after the check for
> > pmd_none() is bypassed, this ends up dereferencing a NULL ptl.
> >
> > Fix this by reusing MMF_UNSTABLE to specify that an mm should not be
> > reaped.  This prevents the concurrent munlock_vma_pages_range() and
> > unmap_page_range().  The oom reaper will simply not operate on an mm that
> > has the bit set and leave the unmapping to exit_mmap().
>
> This will further complicate the protocol and actually theoretically
> restores the oom lockup issues because the oom reaper doesn't set
> MMF_OOM_SKIP when racing with exit_mmap so we fully rely that nothing
> blocks there... So the resulting code is more fragile and tricky.
>
> Can we try a simpler way and get back to what I was suggesting before
> [1] and simply not play tricks with
> 	down_write(&mm->mmap_sem);
> 	up_write(&mm->mmap_sem);
>
> and use the write lock in exit_mmap for oom_victims?

You mean something like this? Then, I'm tempted to call __oom_reap_task_mm()
before holding mmap_sem for write. It would be OK to call __oom_reap_task_mm()
at the beginning of __mmput()...

diff --git a/mm/mmap.c b/mm/mmap.c
index 188f195..ba7083b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3011,17 +3011,22 @@ void exit_mmap(struct mm_struct *mm)
 	struct mmu_gather tlb;
 	struct vm_area_struct *vma;
 	unsigned long nr_accounted = 0;
+	const bool is_oom_mm = mm_is_oom_victim(mm);
 
 	/* mm's last user has gone, and its about to be pulled down */
 	mmu_notifier_release(mm);
 
 	if (mm->locked_vm) {
+		if (is_oom_mm)
+			down_write(&mm->mmap_sem);
 		vma = mm->mmap;
 		while (vma) {
 			if (vma->vm_flags & VM_LOCKED)
 				munlock_vma_pages_all(vma);
 			vma = vma->vm_next;
 		}
+		if (is_oom_mm)
+			up_write(&mm->mmap_sem);
 	}
 
 	arch_exit_mmap(mm);
@@ -3037,7 +3042,7 @@ void exit_mmap(struct mm_struct *mm)
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
 
-	if (unlikely(mm_is_oom_victim(mm))) {
+	if (unlikely(is_oom_mm)) {
 		/*
 		 * Wait for oom_reap_task() to stop working on this
 		 * mm. Because MMF_OOM_SKIP is already set before