From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Cyrill Gorcunov,
    Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu",
    Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
    Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V. Shankar", Vedvyas Shanbhogue
Cc: Yu-cheng Yu
Subject: [PATCH v5 14/27] x86/mm: Modify ptep_set_wrprotect and pmdp_set_wrprotect for _PAGE_DIRTY_SW
Date: Thu, 11 Oct 2018 08:15:10 -0700
Message-Id: <20181011151523.27101-15-yu-cheng.yu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181011151523.27101-1-yu-cheng.yu@intel.com>
References: <20181011151523.27101-1-yu-cheng.yu@intel.com>

When Shadow Stack is enabled, the [R/O + PAGE_DIRTY_HW] setting is
reserved only for the Shadow Stack.  For non-Shadow Stack R/O PTEs, we
use [R/O + PAGE_DIRTY_SW].

When a PTE goes from [R/W + PAGE_DIRTY_HW] to [R/O + PAGE_DIRTY_SW], it
could become a transient Shadow Stack PTE in two cases.

The first case is that some processors can start a write but end up
seeing a read-only PTE by the time they get to the Dirty bit, creating
a transient Shadow Stack PTE.  However, this will not occur on
processors supporting Shadow Stack, so we do not need a TLB flush here.

The second case is that when software non-atomically tests and replaces
PAGE_DIRTY_HW with PAGE_DIRTY_SW, a transient Shadow Stack PTE can
exist.  This is prevented with cmpxchg.

Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided
many insights into the issue.  Jann Horn provided the cmpxchg solution.

Signed-off-by: Yu-cheng Yu
---
 arch/x86/include/asm/pgtable.h | 58 ++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 3ee554d81480..b6e0ee5c5503 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1203,7 +1203,36 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
				      unsigned long addr, pte_t *ptep)
 {
+#ifdef CONFIG_X86_INTEL_SHADOW_STACK_USER
+	pte_t new_pte, pte = READ_ONCE(*ptep);
+
+	/*
+	 * Some processors can start a write, but end up
+	 * seeing a read-only PTE by the time they get
+	 * to the Dirty bit.  In this case, they will
+	 * set the Dirty bit, leaving a read-only, Dirty
+	 * PTE which looks like a Shadow Stack PTE.
+	 *
+	 * However, this behavior has been improved and
+	 * will not occur on processors supporting
+	 * Shadow Stacks.  Without this guarantee, a
+	 * transition to a non-present PTE and a TLB
+	 * flush would be needed.
+	 *
+	 * When changing a writable PTE to read-only and
+	 * if the PTE has _PAGE_DIRTY_HW set, we move
+	 * that bit to _PAGE_DIRTY_SW so that the PTE is
+	 * not a valid Shadow Stack PTE.
+	 */
+	do {
+		new_pte = pte_wrprotect(pte);
+		new_pte.pte |= (new_pte.pte & _PAGE_DIRTY_HW) >>
+			       _PAGE_BIT_DIRTY_HW << _PAGE_BIT_DIRTY_SW;
+		new_pte.pte &= ~_PAGE_DIRTY_HW;
+	} while (!try_cmpxchg(ptep, &pte, new_pte));
+#else
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
+#endif
 }

 #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
@@ -1266,7 +1295,36 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
				      unsigned long addr, pmd_t *pmdp)
 {
+#ifdef CONFIG_X86_INTEL_SHADOW_STACK_USER
+	pmd_t new_pmd, pmd = READ_ONCE(*pmdp);
+
+	/*
+	 * Some processors can start a write, but end up
+	 * seeing a read-only PMD by the time they get
+	 * to the Dirty bit.  In this case, they will
+	 * set the Dirty bit, leaving a read-only, Dirty
+	 * PMD which looks like a Shadow Stack PMD.
+	 *
+	 * However, this behavior has been improved and
+	 * will not occur on processors supporting
+	 * Shadow Stacks.  Without this guarantee, a
+	 * transition to a non-present PMD and a TLB
+	 * flush would be needed.
+	 *
+	 * When changing a writable PMD to read-only and
+	 * if the PMD has _PAGE_DIRTY_HW set, we move
+	 * that bit to _PAGE_DIRTY_SW so that the PMD is
+	 * not a valid Shadow Stack PMD.
+	 */
+	do {
+		new_pmd = pmd_wrprotect(pmd);
+		new_pmd.pmd |= (new_pmd.pmd & _PAGE_DIRTY_HW) >>
+			       _PAGE_BIT_DIRTY_HW << _PAGE_BIT_DIRTY_SW;
+		new_pmd.pmd &= ~_PAGE_DIRTY_HW;
+	} while (!try_cmpxchg(pmdp, &pmd, new_pmd));
+#else
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
+#endif
 }

 #define pud_write pud_write
-- 
2.17.1
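
[Editorial note, not part of the patch] For readers who want to see the
dirty-bit move in isolation, below is a minimal, stand-alone C model of the
cmpxchg loop above.  The helper name wrprotect() and the bit positions are
assumptions made only for illustration (the real values live in
arch/x86/include/asm/pgtable_types.h), and C11
atomic_compare_exchange_weak() stands in for the kernel's try_cmpxchg().

#include <stdint.h>
#include <stdio.h>
#include <stdatomic.h>

/* Example bit positions, assumed for illustration only. */
#define _PAGE_BIT_RW        1
#define _PAGE_BIT_DIRTY_HW  6
#define _PAGE_BIT_DIRTY_SW  58

#define _PAGE_RW        (1ULL << _PAGE_BIT_RW)
#define _PAGE_DIRTY_HW  (1ULL << _PAGE_BIT_DIRTY_HW)
#define _PAGE_DIRTY_SW  (1ULL << _PAGE_BIT_DIRTY_SW)

/*
 * Hypothetical model of ptep_set_wrprotect(): clear the RW bit and
 * move a set _PAGE_DIRTY_HW bit to _PAGE_DIRTY_SW in one atomic
 * compare-exchange, so no transient [R/O + DIRTY_HW] value is visible.
 */
static void wrprotect(_Atomic uint64_t *ptep)
{
	uint64_t pte = atomic_load(ptep);
	uint64_t new_pte;

	do {
		new_pte = pte & ~_PAGE_RW;
		new_pte |= (new_pte & _PAGE_DIRTY_HW) >>
			   _PAGE_BIT_DIRTY_HW << _PAGE_BIT_DIRTY_SW;
		new_pte &= ~_PAGE_DIRTY_HW;
		/* On failure, 'pte' is reloaded with the current value. */
	} while (!atomic_compare_exchange_weak(ptep, &pte, new_pte));
}

int main(void)
{
	_Atomic uint64_t pte = _PAGE_RW | _PAGE_DIRTY_HW;

	wrprotect(&pte);
	printf("RW=%d DIRTY_HW=%d DIRTY_SW=%d\n",
	       !!(pte & _PAGE_RW), !!(pte & _PAGE_DIRTY_HW),
	       !!(pte & _PAGE_DIRTY_SW));
	return 0;
}

As in the patch, a failed compare-exchange reloads the current value and the
new value is recomputed, so a Dirty bit set concurrently by hardware is moved
rather than lost.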