From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , "Ravi V. Shankar" , Vedvyas Shanbhogue , Dave Martin Cc: Yu-cheng Yu Subject: [PATCH v8 17/27] mm: Update can_follow_write_pte/pmd for shadow stack Date: Tue, 13 Aug 2019 13:52:15 -0700 Message-Id: <20190813205225.12032-18-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190813205225.12032-1-yu-cheng.yu@intel.com> References: <20190813205225.12032-1-yu-cheng.yu@intel.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org can_follow_write_pte/pmd look for the (RO & DIRTY) PTE/PMD to verify an exclusive RO page still exists after a broken COW. A shadow stack PTE is RO & PAGE_DIRTY_SW when it is shared, otherwise RO & PAGE_DIRTY_HW. Introduce pte_exclusive() and pmd_exclusive() to also verify a shadow stack PTE is exclusive. Also rename can_follow_write_pte/pmd() to can_follow_write() to make their meaning clear; i.e. "Can we write to the page?", not "Is the PTE writable?" Signed-off-by: Yu-cheng Yu --- arch/x86/mm/pgtable.c | 18 ++++++++++++++++++ include/asm-generic/pgtable.h | 12 ++++++++++++ mm/gup.c | 8 +++++--- mm/huge_memory.c | 8 +++++--- 4 files changed, 40 insertions(+), 6 deletions(-) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 6f3959ca2a08..326715fd0c50 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -898,4 +898,22 @@ inline bool arch_copy_pte_mapping(vm_flags_t vm_flags) { return (vm_flags & VM_SHSTK); } + +inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_SHSTK) + return pte_dirty_hw(pte); + else + return pte_dirty(pte); +} + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_SHSTK) + return pmd_dirty_hw(pmd); + else + return pmd_dirty(pmd); +} +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */ diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index 438ce73b57ea..b58f40525ebc 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -1203,10 +1203,22 @@ static inline bool arch_copy_pte_mapping(vm_flags_t vm_flags) { return false; } + +static inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma) +{ + return pte_dirty(pte); +} + +static inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma) +{ + return pmd_dirty(pmd); +} #else pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma); pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma); bool arch_copy_pte_mapping(vm_flags_t vm_flags); +bool pte_exclusive(pte_t pte, struct vm_area_struct *vma); +bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma); #endif #endif /* _ASM_GENERIC_PGTABLE_H */ diff --git a/mm/gup.c b/mm/gup.c index 98f13ab37bac..d7b298c5f6cb 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -179,10 +179,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, * FOLL_FORCE can write to even unwritable pte's, but only * after we've gone through a COW cycle and they are dirty. 
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 6f3959ca2a08..326715fd0c50 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -898,4 +898,22 @@ inline bool arch_copy_pte_mapping(vm_flags_t vm_flags)
 {
 	return (vm_flags & VM_SHSTK);
 }
+
+inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pte_dirty_hw(pte);
+	else
+		return pte_dirty(pte);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_dirty_hw(pmd);
+	else
+		return pmd_dirty(pmd);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 438ce73b57ea..b58f40525ebc 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1203,10 +1203,22 @@ static inline bool arch_copy_pte_mapping(vm_flags_t vm_flags)
 {
 	return false;
 }
+
+static inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	return pte_dirty(pte);
+}
+
+static inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	return pmd_dirty(pmd);
+}
 #else
 pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
 pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
 bool arch_copy_pte_mapping(vm_flags_t vm_flags);
+bool pte_exclusive(pte_t pte, struct vm_area_struct *vma);
+bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma);
 #endif
 
 #endif /* _ASM_GENERIC_PGTABLE_H */
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..d7b298c5f6cb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -179,10 +179,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write(pte_t pte, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pte_write(pte) ||
-	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		pte_exclusive(pte, vma));
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -220,7 +222,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 39d66c628121..947eb0121671 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1444,10 +1444,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write(pmd_t pmd, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		pmd_exclusive(pmd, vma));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1460,7 +1462,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
-- 
2.17.1