From: Yang Shi <yang.shi@linux.alibaba.com>
To: mhocko@kernel.org, willy@infradead.org, ldufour@linux.vnet.ibm.com,
	akpm@linux-foundation.org, peterz@infradead.org, mingo@redhat.com,
	acme@kernel.org, alexander.shishkin@linux.intel.com, jolsa@redhat.com,
	namhyung@kernel.org, tglx@linutronix.de, hpa@zytor.com
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, x86@kernel.org,
	linux-kernel@vger.kernel.org
Subject: [RFC v3 PATCH 4/5] mm: mmap: zap pages with read mmap_sem for large mapping
Date: Sat, 30 Jun 2018 06:39:44 +0800
Message-Id: <1530311985-31251-5-git-send-email-yang.shi@linux.alibaba.com>
In-Reply-To: <1530311985-31251-1-git-send-email-yang.shi@linux.alibaba.com>
References: <1530311985-31251-1-git-send-email-yang.shi@linux.alibaba.com>

When running some mmap/munmap scalability tests with large memory (i.e.
> 300GB), the below hung task issue may happen occasionally.

INFO: task ps:14018 blocked for more than 120 seconds.
       Tainted: G            E 4.9.79-009.ali3000.alios7.x86_64 #1
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 ps              D    0 14018      1 0x00000004
  ffff885582f84000 ffff885e8682f000 ffff880972943000 ffff885ebf499bc0
  ffff8828ee120000 ffffc900349bfca8 ffffffff817154d0 0000000000000040
  00ffffff812f872a ffff885ebf499bc0 024000d000948300 ffff880972943000
 Call Trace:
  [] ? __schedule+0x250/0x730
  [] schedule+0x36/0x80
  [] rwsem_down_read_failed+0xf0/0x150
  [] call_rwsem_down_read_failed+0x18/0x30
  [] down_read+0x20/0x40
  [] proc_pid_cmdline_read+0xd9/0x4e0
  [] ? do_filp_open+0xa5/0x100
  [] __vfs_read+0x37/0x150
  [] ? security_file_permission+0x9b/0xc0
  [] vfs_read+0x96/0x130
  [] SyS_read+0x55/0xc0
  [] entry_SYSCALL_64_fastpath+0x1a/0xc5

This happens because munmap holds mmap_sem exclusively from the very
beginning all the way to the end, without releasing it in the middle.
Unmapping a large mapping may take a long time: it takes ~18 seconds to
unmap a 320GB mapping with every single page faulted in on an idle
machine.

Zapping the pages is the most time-consuming part. Per the suggestion
from Michal Hocko [1], the pages can be zapped while holding the read
mmap_sem, just like MADV_DONTNEED does; the write mmap_sem is then
re-acquired to clean up the vmas. All zapped vmas get the VM_DEAD flag
set, and a page fault on a VM_DEAD vma triggers SIGSEGV.

Define the large-mapping threshold as the PUD size, or 1GB when no PUD
size is available, and zap pages with the read mmap_sem only for
mappings that are >= this threshold. If a vma has VM_LOCKED |
VM_HUGETLB | VM_PFNMAP set, or has uprobes, just fall back to the
regular path, since unmapping such mappings needs to be done under the
write mmap_sem.

For the time being, do this in the munmap syscall path only. The other
vm_munmap() and do_munmap() call sites remain intact for stability
reasons.

Regression and performance data were collected on a machine with 32
cores of E5-2680 @ 2.70GHz and 384GB memory: with the patched kernel,
the write mmap_sem hold time drops from the second level to the
microsecond level.
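For reference, the ~18 second baseline figure above came from a test
along the below lines (illustrative only, not part of this patch;
MAP_SIZE is just an example value, the reported number used a 320GB
mapping on a machine with enough memory):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/time.h>

    #define MAP_SIZE (4UL << 30)    /* 4GB example; the report used 320GB */

    int main(void)
    {
            struct timeval t1, t2;
            char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            memset(p, 1, MAP_SIZE);    /* fault in every single page */

            gettimeofday(&t1, NULL);
            munmap(p, MAP_SIZE);       /* holds write mmap_sem throughout */
            gettimeofday(&t2, NULL);

            printf("munmap took %lu us\n",
                   (t2.tv_sec - t1.tv_sec) * 1000000UL +
                   t2.tv_usec - t1.tv_usec);
            return 0;
    }

With the patch the munmap wall time is about the same, but most of it is
spent zapping pages under the read mmap_sem, so readers such as the
/proc/<pid>/cmdline path in the trace above are no longer blocked.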
[1] https://lwn.net/Articles/753269/

Cc: Michal Hocko <mhocko@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 136 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 134 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 87dcf83..d61e08b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2763,6 +2763,128 @@ static int munmap_lookup_vma(struct mm_struct *mm, struct vm_area_struct **vma,
 	return 1;
 }
 
+/* Consider a PUD-sized (or 1GB) mapping a large mapping */
+#ifdef HPAGE_PUD_SIZE
+#define LARGE_MAP_THRESH HPAGE_PUD_SIZE
+#else
+#define LARGE_MAP_THRESH (1 * 1024 * 1024 * 1024)
+#endif
+
+/* Unmap large mappings early while holding the read mmap_sem */
+static int do_munmap_zap_early(struct mm_struct *mm, unsigned long start,
+			       size_t len, struct list_head *uf)
+{
+	unsigned long end = 0;
+	struct vm_area_struct *vma = NULL, *prev, *tmp;
+	bool success = false;
+	int ret = 0;
+
+	if (!munmap_addr_sanity(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	/* Just deal with uf in the regular path */
+	if (unlikely(uf))
+		goto regular_path;
+
+	if (len >= LARGE_MAP_THRESH) {
+		/*
+		 * The write mmap_sem is needed to split vmas and set the
+		 * VM_DEAD flag. Split the vmas up-front to save the pain
+		 * of cleaning up if it fails.
+		 */
+		down_write(&mm->mmap_sem);
+		ret = munmap_lookup_vma(mm, &vma, &prev, start, end);
+		if (ret != 1) {
+			up_write(&mm->mmap_sem);
+			return ret;
+		}
+		/* This ret value might be returned, so reset it */
+		ret = 0;
+
+		/*
+		 * Unmapping vmas which have VM_LOCKED|VM_HUGETLB|VM_PFNMAP
+		 * set, or which have uprobes, needs to acquire the write
+		 * mmap_sem, so skip them in the early zap and just deal
+		 * with such mappings in the regular path.
+		 * Borrow can_madv_dontneed_vma() to check the conditions.
+		 */
+		tmp = vma;
+		while (tmp && tmp->vm_start < end) {
+			if (!can_madv_dontneed_vma(tmp) ||
+			    vma_has_uprobes(tmp, start, end)) {
+				up_write(&mm->mmap_sem);
+				goto regular_path;
+			}
+			tmp = tmp->vm_next;
+		}
+		/*
+		 * Set the VM_DEAD flag before tearing the vmas down.
+		 * A page fault on a VM_DEAD vma will trigger SIGSEGV.
+		 */
+		tmp = vma;
+		for ( ; tmp && tmp->vm_start < end; tmp = tmp->vm_next)
+			tmp->vm_flags |= VM_DEAD;
+		up_write(&mm->mmap_sem);
+
+		/* Zap the mappings while holding the read mmap_sem */
+		down_read(&mm->mmap_sem);
+		zap_page_range(vma, start, len);
+		/* Indicates the early zap succeeded */
+		success = true;
+		up_read(&mm->mmap_sem);
+	}
+
+regular_path:
+	/* Hold the write mmap_sem for vma manipulation or the regular path */
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+	if (success) {
+		/* The vmas have been zapped; clean up pgtables and vmas */
+		struct vm_area_struct *next = prev ? prev->vm_next : mm->mmap;
+		struct mmu_gather tlb;
+
+		tlb_gather_mmu(&tlb, mm, start, end);
+		free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
+			      next ? next->vm_start : USER_PGTABLES_CEILING);
+		tlb_finish_mmu(&tlb, start, end);
+
+		detach_vmas_to_be_unmapped(mm, vma, prev, end);
+		arch_unmap(mm, vma, start, end);
+		remove_vma_list(mm, vma);
+	} else {
+		/* A vma is VM_LOCKED|VM_HUGETLB|VM_PFNMAP or has uprobes */
+		if (vma) {
+			if (unlikely(uf)) {
+				ret = userfaultfd_unmap_prep(vma, start,
+							     end, uf);
+				if (ret)
+					goto out;
+			}
+			if (mm->locked_vm) {
+				tmp = vma;
+				while (tmp && tmp->vm_start < end) {
+					if (tmp->vm_flags & VM_LOCKED) {
+						mm->locked_vm -= vma_pages(tmp);
+						munlock_vma_pages_all(tmp);
+					}
+					tmp = tmp->vm_next;
+				}
+			}
+			detach_vmas_to_be_unmapped(mm, vma, prev, end);
+			unmap_region(mm, vma, prev, start, end);
+			remove_vma_list(mm, vma);
+		} else {
+			/* The mapping size is < LARGE_MAP_THRESH */
+			ret = do_munmap(mm, start, len, uf);
+		}
+	}
+
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
 /* Munmap is split into 2 main parts -- this part which finds
  * what needs doing, and the areas themselves, which do the
  * work.  This now handles partial unmappings.
@@ -2829,6 +2951,17 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	return 0;
 }
 
+static int vm_munmap_zap_early(unsigned long start, size_t len)
+{
+	int ret;
+	struct mm_struct *mm = current->mm;
+	LIST_HEAD(uf);
+
+	ret = do_munmap_zap_early(mm, start, len, &uf);
+	userfaultfd_unmap_complete(mm, &uf);
+	return ret;
+}
+
 int vm_munmap(unsigned long start, size_t len)
 {
 	int ret;
@@ -2848,10 +2981,9 @@ int vm_munmap(unsigned long start, size_t len)
 SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
 {
 	profile_munmap(addr);
-	return vm_munmap(addr, len);
+	return vm_munmap_zap_early(addr, len);
 }
 
-
 /*
  * Emulation of deprecated remap_file_pages() syscall.
  */
--
1.8.3.1
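For completeness, the below userspace sketch (illustrative only, not
part of this patch; build with -pthread) shows the expected
user-visible semantics: an access racing with a large munmap takes
SIGSEGV once its vma is marked VM_DEAD, the same signal it would take
after the unmap completes. It assumes the mapping is >=
LARGE_MAP_THRESH so the early-zap path is taken:

    #include <pthread.h>
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define MAP_SIZE (1UL << 30)    /* 1GB, the default LARGE_MAP_THRESH */

    static sigjmp_buf env;

    static void segv_handler(int sig)
    {
            siglongjmp(env, 1);
    }

    static void *unmap_thread(void *p)
    {
            munmap(p, MAP_SIZE);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;
            char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            memset(p, 1, MAP_SIZE);    /* fault the pages in */
            signal(SIGSEGV, segv_handler);

            pthread_create(&t, NULL, unmap_thread, p);
            if (sigsetjmp(env, 1) == 0) {
                    /* race with the unmap; terminates via SIGSEGV */
                    for (;;)
                            *(volatile char *)(p + MAP_SIZE / 2);
            }
            printf("access faulted with SIGSEGV\n");
            pthread_join(t, NULL);
            return 0;
    }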