From: Jia He
To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
    Matthew Wilcox, "Kirill A. Shutemov", linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose
Cc: Punit Agrawal, Anshuman Khandual, Alex Van Brunt, Robin Murphy,
    Thomas Gleixner, Andrew Morton, Jérôme Glisse, Ralph Campbell,
    hejianet@gmail.com, Kaly Xin, nd@arm.com, Jia He
Subject: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
Date: Fri, 20 Sep 2019 21:54:37 +0800
Message-Id: <20190920135437.25622-4-justin.he@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190920135437.25622-1-justin.he@arm.com>
References: <20190920135437.25622-1-justin.he@arm.com>
Shutemov" , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose Cc: Punit Agrawal , Anshuman Khandual , Alex Van Brunt , Robin Murphy , Thomas Gleixner , Andrew Morton , =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , Ralph Campbell , hejianet@gmail.com, Kaly Xin , nd@arm.com, Jia He Subject: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared Date: Fri, 20 Sep 2019 21:54:37 +0800 Message-Id: <20190920135437.25622-4-justin.he@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190920135437.25622-1-justin.he@arm.com> References: <20190920135437.25622-1-justin.he@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org When we tested pmdk unit test [1] vmmalloc_fork TEST1 in arm64 guest, there will be a double page fault in __copy_from_user_inatomic of cow_user_page. Below call trace is from arm64 do_page_fault for debugging purpose [ 110.016195] Call trace: [ 110.016826] do_page_fault+0x5a4/0x690 [ 110.017812] do_mem_abort+0x50/0xb0 [ 110.018726] el1_da+0x20/0xc4 [ 110.019492] __arch_copy_from_user+0x180/0x280 [ 110.020646] do_wp_page+0xb0/0x860 [ 110.021517] __handle_mm_fault+0x994/0x1338 [ 110.022606] handle_mm_fault+0xe8/0x180 [ 110.023584] do_page_fault+0x240/0x690 [ 110.024535] do_mem_abort+0x50/0xb0 [ 110.025423] el0_da+0x20/0x24 The pte info before __copy_from_user_inatomic is (PTE_AF is cleared): [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3 As told by Catalin: "On arm64 without hardware Access Flag, copying from user will fail because the pte is old and cannot be marked young. So we always end up with zeroed page after fork() + CoW for pfn mappings. we don't always have a hardware-managed access flag on arm64." This patch fix it by calling pte_mkyoung. Also, the parameter is changed because vmf should be passed to cow_user_page() Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns error in case there can be some obscure use-case.(by Kirill) [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork Reported-by: Yibo Cai Signed-off-by: Jia He --- mm/memory.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 61 insertions(+), 6 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index e2bb51b6242e..3e39e40fee87 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -118,6 +118,13 @@ int randomize_va_space __read_mostly = 2; #endif +#ifndef arch_faults_on_old_pte +static inline bool arch_faults_on_old_pte(void) +{ + return false; +} +#endif + static int __init disable_randmaps(char *s) { randomize_va_space = 0; @@ -2140,8 +2147,13 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd, return same; } -static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma) +static inline int cow_user_page(struct page *dst, struct page *src, + struct vm_fault *vmf) { + struct vm_area_struct *vma = vmf->vma; + struct mm_struct *mm = vma->vm_mm; + unsigned long addr = vmf->address; + debug_dma_assert_idle(src); /* @@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo * fails, we just zero-fill it. Live with it. 
diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..3e39e40fee87 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
 					2;
 #endif
 
+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	return false;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -2140,8 +2147,13 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	return same;
 }
 
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline int cow_user_page(struct page *dst, struct page *src,
+				struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = vmf->address;
+
 	debug_dma_assert_idle(src);
 
 	/*
@@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
 	 * fails, we just zero-fill it. Live with it.
 	 */
 	if (unlikely(!src)) {
-		void *kaddr = kmap_atomic(dst);
-		void __user *uaddr = (void __user *)(va & PAGE_MASK);
+		void *kaddr;
+		pte_t entry;
+		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
 
+		/* On architectures with software "accessed" bits, we would
+		 * take a double page fault, so mark it accessed here.
+		 */
+		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
+						       &vmf->ptl);
+			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+				entry = pte_mkyoung(vmf->orig_pte);
+				if (ptep_set_access_flags(vma, addr,
+							  vmf->pte, entry, 0))
+					update_mmu_cache(vma, addr, vmf->pte);
+			} else {
+				/* Other thread has already handled the fault
+				 * and we don't need to do anything. If it's
+				 * not the case, the fault will be triggered
+				 * again on the same address.
+				 */
+				pte_unmap_unlock(vmf->pte, vmf->ptl);
+				return -1;
+			}
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+		}
+
+		kaddr = kmap_atomic(dst);
 		/*
 		 * This really shouldn't fail, because the page is there
 		 * in the page tables. But it might just be unreadable,
 		 * in which case we just give up and fill the result with
 		 * zeroes.
 		 */
-		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
+		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+			/* Give a warn in case there can be some obscure
+			 * use-case
+			 */
+			WARN_ON_ONCE(1);
 			clear_page(kaddr);
+		}
 		kunmap_atomic(kaddr);
 		flush_dcache_page(dst);
 	} else
-		copy_user_highpage(dst, src, va, vma);
+		copy_user_highpage(dst, src, addr, vma);
+
+	return 0;
 }
 
 static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2318,7 +2362,18 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 							      vmf->address);
 		if (!new_page)
 			goto oom;
-		cow_user_page(new_page, old_page, vmf->address, vma);
+
+		if (cow_user_page(new_page, old_page, vmf)) {
+			/* COW failed, if the fault was solved by other,
+			 * it's fine. If not, userspace would re-fault on
+			 * the same address and we will handle the fault
+			 * from the second attempt.
+			 */
+			put_page(new_page);
+			if (old_page)
+				put_page(old_page);
+			return 0;
+		}
 	}
 
 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
-- 
2.17.1
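P.S.: for anyone who wants to see the failure mode without building pmdk,
here is a hypothetical, distilled userspace sketch of the vmmalloc_fork
access pattern. The /mnt/pmem/testfile path is an assumption (any
MAP_PRIVATE mapping of a DAX file should do), and whether the child really
faults on an old pte depends on the architecture and timing, so treat this
as an illustration of the fork() + CoW-on-pfn-mapping sequence rather than
a guaranteed reproducer:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
    	/* Assumes a 4KiB file on a DAX-mounted fs, pre-filled with
    	 * 0xab. A MAP_PRIVATE mapping of it is a pfn mapping with no
    	 * struct page, which is what makes cow_user_page() see !src.
    	 */
    	int fd = open("/mnt/pmem/testfile", O_RDWR);
    	if (fd < 0) { perror("open"); return 1; }

    	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
    		       MAP_PRIVATE, fd, 0);
    	if (p == MAP_FAILED) { perror("mmap"); return 1; }

    	volatile char c = p[1];	/* read fault: populate a clean pfn pte */
    	(void)c;

    	if (fork() == 0) {
    		/* Child write -> wp fault -> do_wp_page -> cow_user_page.
    		 * On an affected kernel the copy zero-fills, and the rest
    		 * of the page loses the file contents.
    		 */
    		p[0] = 0x5a;
    		printf("child: p[1] = 0x%02x (expect 0xab)\n",
    		       (unsigned char)p[1]);
    		_exit(0);
    	}
    	wait(NULL);
    	return 0;
    }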