From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz, "Kirill A.
Shutemov" , Naoya Horiguchi , Michal Hocko , Vlastimil Babka , Davidlohr Bueso , Jerome Glisse , Andrew Morton Subject: [PATCH 4.4 157/160] mm: migration: fix migration of huge PMD shared pages Date: Mon, 19 Nov 2018 17:29:56 +0100 Message-Id: <20181119162644.424033521@linuxfoundation.org> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181119162630.031306128@linuxfoundation.org> References: <20181119162630.031306128@linuxfoundation.org> User-Agent: quilt/0.65 X-stable: review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org 4.4-stable review patch. If anyone has any objections, please let me know. ------------------ From: Mike Kravetz commit 017b1660df89f5fb4bfe66c34e35f7d2031100c7 upstream. The page migration code employs try_to_unmap() to try and unmap the source page. This is accomplished by using rmap_walk to find all vmas where the page is mapped. This search stops when page mapcount is zero. For shared PMD huge pages, the page map count is always 1 no matter the number of mappings. Shared mappings are tracked via the reference count of the PMD page. Therefore, try_to_unmap stops prematurely and does not completely unmap all mappings of the source page. This problem can result is data corruption as writes to the original source page can happen after contents of the page are copied to the target page. Hence, data is lost. This problem was originally seen as DB corruption of shared global areas after a huge page was soft offlined due to ECC memory errors. DB developers noticed they could reproduce the issue by (hotplug) offlining memory used to back huge pages. A simple testcase can reproduce the problem by creating a shared PMD mapping (note that this must be at least PUD_SIZE in size and PUD_SIZE aligned (1GB on x86)), and using migrate_pages() to migrate process pages between nodes while continually writing to the huge pages being migrated. To fix, have the try_to_unmap_one routine check for huge PMD sharing by calling huge_pmd_unshare for hugetlbfs huge pages. If it is a shared mapping it will be 'unshared' which removes the page table entry and drops the reference on the PMD page. After this, flush caches and TLB. mmu notifiers are called before locking page tables, but we can not be sure of PMD sharing until page tables are locked. Therefore, check for the possibility of PMD sharing before locking so that notifiers can prepare for the worst possible case. Link: http://lkml.kernel.org/r/20180823205917.16297-2-mike.kravetz@oracle.com [mike.kravetz@oracle.com: make _range_in_vma() a static inline] Link: http://lkml.kernel.org/r/6063f215-a5c8-2f0c-465a-2c515ddc952d@oracle.com Fixes: 39dde65c9940 ("shared page table for hugetlb page") Signed-off-by: Mike Kravetz Acked-by: Kirill A. 
 include/linux/hugetlb.h |   14 ++++++++++++
 include/linux/mm.h      |    6 +++++
 mm/hugetlb.c            |   37 ++++++++++++++++++++++++++++++-
 mm/rmap.c               |   56 ++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 111 insertions(+), 2 deletions(-)

--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -110,6 +110,8 @@ pte_t *huge_pte_alloc(struct mm_struct *
 			unsigned long addr, unsigned long sz);
 pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr);
 int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end);
 struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
 			      int write);
 struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
@@ -132,6 +134,18 @@ static inline unsigned long hugetlb_tota
 	return 0;
 }
 
+static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
+					pte_t *ptep)
+{
+	return 0;
+}
+
+static inline void adjust_range_if_pmd_sharing_possible(
+				struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+}
+
 #define follow_hugetlb_page(m,v,p,vs,a,b,i,w)	({ BUG(); 0; })
 #define follow_huge_addr(mm, addr, write)	ERR_PTR(-EINVAL)
 #define copy_hugetlb_page_range(src, dst, vma)	({ BUG(); 0; })
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2058,6 +2058,12 @@ static inline struct vm_area_struct *fin
 	return vma;
 }
 
+static inline bool range_in_vma(struct vm_area_struct *vma,
+				unsigned long start, unsigned long end)
+{
+	return (vma && vma->vm_start <= start && end <= vma->vm_end);
+}
+
 #ifdef CONFIG_MMU
 pgprot_t vm_get_page_prot(unsigned long vm_flags);
 void vma_set_page_prot(struct vm_area_struct *vma);
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4216,13 +4216,41 @@ static bool vma_shareable(struct vm_area
 	/*
 	 * check on proper vm_flags and page table alignment
 	 */
-	if (vma->vm_flags & VM_MAYSHARE &&
-	    vma->vm_start <= base && end <= vma->vm_end)
+	if (vma->vm_flags & VM_MAYSHARE && range_in_vma(vma, base, end))
 		return true;
 	return false;
 }
 
 /*
+ * Determine if start,end range within vma could be mapped by shared pmd.
+ * If yes, adjust start and end to cover range associated with possible
+ * shared pmd mappings.
+ */
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+	unsigned long check_addr = *start;
+
+	if (!(vma->vm_flags & VM_MAYSHARE))
+		return;
+
+	for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
+		unsigned long a_start = check_addr & PUD_MASK;
+		unsigned long a_end = a_start + PUD_SIZE;
+
+		/*
+		 * If sharing is possible, adjust start/end if necessary.
+		 */
+		if (range_in_vma(vma, a_start, a_end)) {
+			if (a_start < *start)
+				*start = a_start;
+			if (a_end > *end)
+				*end = a_end;
+		}
+	}
+}
+
+/*
  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
  * and returns the corresponding pte. While this is not necessary for the
  * !shared pmd case because we can allocate the pmd later as well, it makes the
@@ -4318,6 +4346,11 @@ int huge_pmd_unshare(struct mm_struct *m
 {
 	return 0;
 }
+
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+}
 #define want_pmd_share()	(0)
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1324,12 +1324,41 @@ static int try_to_unmap_one(struct page
 	pte_t pteval;
 	spinlock_t *ptl;
 	int ret = SWAP_AGAIN;
+	unsigned long sh_address;
+	bool pmd_sharing_possible = false;
+	unsigned long spmd_start, spmd_end;
 	enum ttu_flags flags = (enum ttu_flags)arg;
 
 	/* munlock has nothing to gain from examining un-locked vmas */
 	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
 		goto out;
 
+	/*
+	 * Only use the range_start/end mmu notifiers if huge pmd sharing
+	 * is possible.  In the normal case, mmu_notifier_invalidate_page
+	 * is sufficient as we only unmap a page.  However, if we unshare
+	 * a pmd, we will unmap a PUD_SIZE range.
+	 */
+	if (PageHuge(page)) {
+		spmd_start = address;
+		spmd_end = spmd_start + vma_mmu_pagesize(vma);
+
+		/*
+		 * Check if pmd sharing is possible.  If possible, we could
+		 * unmap a PUD_SIZE range.  spmd_start/spmd_end will be
+		 * modified if sharing is possible.
+		 */
+		adjust_range_if_pmd_sharing_possible(vma, &spmd_start,
+							&spmd_end);
+		if (spmd_end - spmd_start != vma_mmu_pagesize(vma)) {
+			sh_address = address;
+
+			pmd_sharing_possible = true;
+			mmu_notifier_invalidate_range_start(vma->vm_mm,
+						spmd_start, spmd_end);
+		}
+	}
+
 	pte = page_check_address(page, mm, address, &ptl, 0);
 	if (!pte)
 		goto out;
@@ -1356,6 +1385,30 @@ static int try_to_unmap_one(struct page
 		}
 	}
 
+	/*
+	 * Call huge_pmd_unshare to potentially unshare a huge pmd.  Pass
+	 * sh_address as it will be modified if unsharing is successful.
+	 */
+	if (PageHuge(page) && huge_pmd_unshare(mm, &sh_address, pte)) {
+		/*
+		 * huge_pmd_unshare unmapped an entire PMD page.  There is
+		 * no way of knowing exactly which PMDs may be cached for
+		 * this mm, so flush them all.  spmd_start/spmd_end cover
+		 * this PUD_SIZE range.
+		 */
+		flush_cache_range(vma, spmd_start, spmd_end);
+		flush_tlb_range(vma, spmd_start, spmd_end);
+
+		/*
+		 * The ref count of the PMD page was dropped which is part
+		 * of the way map counting is done for shared PMDs.  When
+		 * there is no other sharing, huge_pmd_unshare returns false
+		 * and we will unmap the actual page and drop map count
+		 * to zero.
+		 */
+		goto out_unmap;
+	}
+
 	/* Nuke the page table entry. */
 	flush_cache_page(vma, address, page_to_pfn(page));
 	if (should_defer_flush(mm, flags)) {
@@ -1450,6 +1503,9 @@ out_unmap:
 	if (ret != SWAP_FAIL && ret != SWAP_MLOCK && !(flags & TTU_MUNLOCK))
 		mmu_notifier_invalidate_page(mm, address);
 out:
+	if (pmd_sharing_possible)
+		mmu_notifier_invalidate_range_end(vma->vm_mm,
+						spmd_start, spmd_end);
 	return ret;
 }