From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
    Balbir Singh, Cyrill Gorcunov, Dave Hansen, Florian Weimer,
    "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra,
    "Ravi V. Shankar", Vedvyas Shanbhogue
Cc: Yu-cheng Yu
Subject: [RFC PATCH v3 16/24] mm: Update can_follow_write_pte/pmd for shadow stack
Date: Thu, 30 Aug 2018 07:38:56 -0700
Message-Id: <20180830143904.3168-17-yu-cheng.yu@intel.com>
In-Reply-To: <20180830143904.3168-1-yu-cheng.yu@intel.com>
References: <20180830143904.3168-1-yu-cheng.yu@intel.com>

can_follow_write_pte/pmd look for the (RO & DIRTY) PTE/PMD to verify
that an exclusive RO page still exists after a broken COW.

A shadow stack PTE is RO & PAGE_DIRTY_SW when it is shared, and
RO & PAGE_DIRTY_HW otherwise.  Introduce pte_exclusive() and
pmd_exclusive() to also verify that a shadow stack PTE is exclusive.

Also rename can_follow_write_pte/pmd() to can_follow_write() to make
the meaning clear: "Can we write to the page?", not "Is the PTE
writable?"

Signed-off-by: Yu-cheng Yu
---
 arch/x86/mm/pgtable.c         | 18 ++++++++++++++++++
 include/asm-generic/pgtable.h | 18 ++++++++++++++++++
 mm/gup.c                      |  8 +++++---
 mm/huge_memory.c              |  8 +++++---
 4 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 0ab38bfbedfc..13dd18ad6fd8 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -889,4 +889,22 @@ inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
 	else
 		return pmd;
 }
+
+inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pte_dirty_hw(pte);
+	else
+		return pte_dirty(pte);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_dirty_hw(pmd);
+	else
+		return pmd_dirty(pmd);
+}
+#endif
 #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 0f25186cd38d..2e8e7fa4ab71 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1156,9 +1156,27 @@ static inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
 {
 	return pmd;
 }
+
+#ifdef CONFIG_MMU
+static inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	return pte_dirty(pte);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	return pmd_dirty(pmd);
+}
+#endif
+#endif /* CONFIG_MMU */
 #else
 pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
 pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
+bool pte_exclusive(pte_t pte, struct vm_area_struct *vma);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma);
+#endif
 #endif

 #endif /* _ASM_GENERIC_PGTABLE_H */

diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..03cb2e331f80 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -64,10 +64,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write(pte_t pte, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pte_exclusive(pte, vma));
 }

 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -105,7 +107,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5b4c8f2fb85e..702650eec0b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1387,10 +1387,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write(pmd_t pmd, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pmd_exclusive(pmd, vma));
 }

 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1403,7 +1405,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,

 	assert_spin_locked(pmd_lockptr(mm, pmd));

-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write(*pmd, flags, vma))
 		goto out;

 	/* Avoid dumping huge zero page */
-- 
2.17.1
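
For reference, the predicate this patch generalizes can be exercised
outside the kernel tree.  The program below is a minimal userspace
sketch of the new logic; the struct layouts, flag values, and the
mock_pte / mock_vma names are stand-ins invented for the example, not
the kernel's definitions.

/* Compile with: cc -o canwrite canwrite.c */
#include <stdbool.h>
#include <stdio.h>

#define FOLL_FORCE 0x1   /* flag values are made up for the mock */
#define FOLL_COW   0x2
#define VM_SHSTK   0x4

struct mock_pte {
	bool write;     /* stands in for _PAGE_RW */
	bool dirty_hw;  /* hardware dirty: set by the CPU on write */
	bool dirty_sw;  /* software dirty: shared shadow stack page */
};

struct mock_vma {
	unsigned long vm_flags;
};

/*
 * Mirrors pte_exclusive(): a shadow stack page counts as exclusive
 * only when it is hardware-dirty; any other page counts as exclusive
 * when either dirty bit is set (what pte_dirty() reports).
 */
static bool pte_exclusive(struct mock_pte pte, struct mock_vma *vma)
{
	if (vma->vm_flags & VM_SHSTK)
		return pte.dirty_hw;
	return pte.dirty_hw || pte.dirty_sw;
}

/*
 * Mirrors the renamed can_follow_write(): "can we write to the
 * page?", not "is the PTE writable?"
 */
static bool can_follow_write(struct mock_pte pte, unsigned int flags,
			     struct mock_vma *vma)
{
	return pte.write ||
	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
		pte_exclusive(pte, vma));
}

int main(void)
{
	struct mock_vma shstk = { .vm_flags = VM_SHSTK };
	struct mock_pte shared = { .dirty_sw = true };  /* RO & PAGE_DIRTY_SW */
	struct mock_pte cowed  = { .dirty_hw = true };  /* RO & PAGE_DIRTY_HW */
	unsigned int forced = FOLL_FORCE | FOLL_COW;

	printf("shared shadow stack: %d\n",
	       can_follow_write(shared, forced, &shstk));  /* 0: refuse */
	printf("cowed shadow stack:  %d\n",
	       can_follow_write(cowed, forced, &shstk));   /* 1: allow */
	return 0;
}

Running it prints 0 then 1: with FOLL_FORCE and FOLL_COW both set, a
forced write is refused while the shadow stack page is only
software-dirty (still shared) and allowed once it is hardware-dirty,
i.e. exclusively owned after the COW cycle.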