From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Cyrill Gorcunov,
    Dave Hansen, Florian Weimer, "H.J. Lu", Jann Horn, Jonathan Corbet,
    Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
    Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar", Vedvyas Shanbhogue
Shankar" , Vedvyas Shanbhogue Cc: Yu-cheng Yu Subject: [RFC PATCH v4 16/27] mm: Update can_follow_write_pte/pmd for shadow stack Date: Fri, 21 Sep 2018 08:03:40 -0700 Message-Id: <20180921150351.20898-17-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180921150351.20898-1-yu-cheng.yu@intel.com> References: <20180921150351.20898-1-yu-cheng.yu@intel.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org can_follow_write_pte/pmd look for the (RO & DIRTY) PTE/PMD to verify an exclusive RO page still exists after a broken COW. A shadow stack PTE is RO & PAGE_DIRTY_SW when it is shared, otherwise RO & PAGE_DIRTY_HW. Introduce pte_exclusive() and pmd_exclusive() to also verify a shadow stack PTE is exclusive. Also rename can_follow_write_pte/pmd() to can_follow_write() to make their meaning clear; i.e. "Can we write to the page?", not "Is the PTE writable?" Signed-off-by: Yu-cheng Yu --- arch/x86/mm/pgtable.c | 19 +++++++++++++++++++ include/asm-generic/pgtable.h | 4 ++++ mm/gup.c | 8 +++++--- mm/huge_memory.c | 8 +++++--- 4 files changed, 33 insertions(+), 6 deletions(-) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index ccdfd3dd7163..e13a020e37db 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -894,4 +894,23 @@ inline bool arch_copy_pte_mapping(vm_flags_t vm_flags) { return (vm_flags & VM_SHSTK); } + +inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_SHSTK) + return pte_dirty_hw(pte); + else + return pte_dirty(pte); +} + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_SHSTK) + return pmd_dirty_hw(pmd); + else + return pmd_dirty(pmd); +} +#endif + #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */ diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index a91f07454ced..6223017929be 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -1131,10 +1131,14 @@ static inline bool arch_has_pfn_modify_check(void) #define pte_set_vma_features(pte, vma) pte #define pmd_set_vma_features(pmd, vma) pmd #define arch_copy_pte_mapping(vma_flags) false +#define pte_exclusive(pte, vma) pte_dirty(pte) +#define pmd_exclusive(pmd, vma) pmd_dirty(pmd) #else inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma); inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma); bool arch_copy_pte_mapping(vm_flags_t vm_flags); +bool pte_exclusive(pte_t pte, struct vm_area_struct *vma); +bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma); #endif #endif /* _ASM_GENERIC_PGTABLE_H */ diff --git a/mm/gup.c b/mm/gup.c index 1abc8b4afff6..03cb2e331f80 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -64,10 +64,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, * FOLL_FORCE can write to even unwritable pte's, but only * after we've gone through a COW cycle and they are dirty. 
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index ccdfd3dd7163..e13a020e37db 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -894,4 +894,23 @@ inline bool arch_copy_pte_mapping(vm_flags_t vm_flags)
 {
 	return (vm_flags & VM_SHSTK);
 }
+
+inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pte_dirty_hw(pte);
+	else
+		return pte_dirty(pte);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_dirty_hw(pmd);
+	else
+		return pmd_dirty(pmd);
+}
+#endif
+
 #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index a91f07454ced..6223017929be 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1131,10 +1131,14 @@ static inline bool arch_has_pfn_modify_check(void)
 #define pte_set_vma_features(pte, vma) pte
 #define pmd_set_vma_features(pmd, vma) pmd
 #define arch_copy_pte_mapping(vma_flags) false
+#define pte_exclusive(pte, vma) pte_dirty(pte)
+#define pmd_exclusive(pmd, vma) pmd_dirty(pmd)
 #else
 inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
 inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
 bool arch_copy_pte_mapping(vm_flags_t vm_flags);
+bool pte_exclusive(pte_t pte, struct vm_area_struct *vma);
+bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma);
 #endif
 
 #endif /* _ASM_GENERIC_PGTABLE_H */
diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..03cb2e331f80 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -64,10 +64,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write(pte_t pte, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pte_exclusive(pte, vma));
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -105,7 +107,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index df39ae20fe40..c70aa8fa4cb2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1387,10 +1387,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write(pmd_t pmd, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pmd_exclusive(pmd, vma));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1403,7 +1405,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
-- 
2.17.1