From: Yu-cheng Yu
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H.J. Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , "Ravi V. Shankar" , Vedvyas Shanbhogue , Dave Martin , Weijiang Yang , Pengfei Xu Cc: Yu-cheng Yu Subject: [PATCH v14 11/26] x86/mm: Update ptep_set_wrprotect() and pmdp_set_wrprotect() for transition from _PAGE_DIRTY_HW to _PAGE_COW Date: Mon, 12 Oct 2020 08:38:35 -0700 Message-Id: <20201012153850.26996-12-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20201012153850.26996-1-yu-cheng.yu@intel.com> References: <20201012153850.26996-1-yu-cheng.yu@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org When shadow stack is introduced, [R/O + _PAGE_DIRTY_HW] PTE is reserved for shadow stack. Copy-on-write PTEs have [R/O + _PAGE_COW]. When a PTE goes from [R/W + _PAGE_DIRTY_HW] to [R/O + _PAGE_COW], it could become a transient shadow stack PTE in two cases: The first case is that some processors can start a write but end up seeing a read-only PTE by the time they get to the Dirty bit, creating a transient shadow stack PTE. However, this will not occur on processors supporting shadow stack, therefore we don't need a TLB flush here. The second case is that when the software, without atomic, tests & replaces _PAGE_DIRTY_HW with _PAGE_COW, a transient shadow stack PTE can exist. This is prevented with cmpxchg. Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided many insights to the issue. Jann Horn provided the cmpxchg solution. Signed-off-by: Yu-cheng Yu Reviewed-by: Kees Cook --- arch/x86/include/asm/pgtable.h | 52 ++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 8d4c09831e67..8e637a5ed9e4 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1230,6 +1230,32 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { + /* + * Some processors can start a write, but end up seeing a read-only + * PTE by the time they get to the Dirty bit. In this case, they + * will set the Dirty bit, leaving a read-only, Dirty PTE which + * looks like a shadow stack PTE. + * + * However, this behavior has been improved and will not occur on + * processors supporting shadow stack. Without this guarantee, a + * transition to a non-present PTE and flush the TLB would be + * needed. + * + * When changing a writable PTE to read-only and if the PTE has + * _PAGE_DIRTY_HW set, move that bit to _PAGE_COW so that the + * PTE is not a shadow stack PTE. 
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pte_t old_pte, new_pte;
+
+		do {
+			old_pte = READ_ONCE(*ptep);
+			new_pte = pte_wrprotect(old_pte);
+
+		} while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte));
+
+		return;
+	}
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
 }
 
@@ -1286,6 +1312,32 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
 {
+	/*
+	 * Some processors can start a write, but end up seeing a read-only
+	 * PMD by the time they get to the Dirty bit.  In this case, they
+	 * will set the Dirty bit, leaving a read-only, Dirty PMD which
+	 * looks like a shadow stack PMD.
+	 *
+	 * However, this behavior has been improved and will not occur on
+	 * processors supporting shadow stack.  Without this guarantee, a
+	 * transition to a non-present PMD and a TLB flush would be
+	 * needed.
+	 *
+	 * When changing a writable PMD to read-only and if the PMD has
+	 * _PAGE_DIRTY_HW set, move that bit to _PAGE_COW so that the
+	 * PMD is not a shadow stack PMD.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pmd_t old_pmd, new_pmd;
+
+		do {
+			old_pmd = READ_ONCE(*pmdp);
+			new_pmd = pmd_wrprotect(old_pmd);
+
+		} while (!try_cmpxchg((pmdval_t *)pmdp, (pmdval_t *)&old_pmd, pmd_val(new_pmd)));
+
+		return;
+	}
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
 }
 
-- 
2.21.0
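
[Editorial note] For readers who want to experiment with the retry pattern
outside of a kernel tree, below is a minimal user-space sketch of the same
idea.  It is only an illustration, not part of the patch: C11
atomic_compare_exchange_weak() stands in for the kernel's try_cmpxchg(),
and the bit positions (FAKE_RW, FAKE_DIRTY_HW, FAKE_COW) are made up rather
than the real _PAGE_* layout.  The point it demonstrates is that the
Dirty-to-COW move and the R/W clear are published as one atomic update, so
no transient [read-only + Dirty] (shadow-stack-looking) value is ever
visible in the PTE word.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_RW       (1ULL << 1)    /* hypothetical writable bit       */
#define FAKE_DIRTY_HW (1ULL << 6)    /* hypothetical hardware dirty bit */
#define FAKE_COW      (1ULL << 58)   /* hypothetical software COW bit   */

/* Analogue of pte_wrprotect(): move DIRTY_HW to COW, then clear RW. */
static uint64_t fake_pte_wrprotect(uint64_t pte)
{
	if (pte & FAKE_DIRTY_HW) {
		pte &= ~FAKE_DIRTY_HW;
		pte |= FAKE_COW;
	}
	return pte & ~FAKE_RW;
}

/* Analogue of the patched ptep_set_wrprotect() fast path. */
static void fake_set_wrprotect(_Atomic uint64_t *ptep)
{
	uint64_t old_pte = atomic_load(ptep);
	uint64_t new_pte;

	do {
		new_pte = fake_pte_wrprotect(old_pte);
		/*
		 * On failure, atomic_compare_exchange_weak() reloads old_pte
		 * with the current value, so the next iteration retries
		 * against fresh contents -- the same effect as the kernel's
		 * READ_ONCE() + try_cmpxchg() loop.
		 */
	} while (!atomic_compare_exchange_weak(ptep, &old_pte, new_pte));
}

int main(void)
{
	_Atomic uint64_t pte = FAKE_RW | FAKE_DIRTY_HW;

	fake_set_wrprotect(&pte);

	/* Expect RW and DIRTY_HW clear, COW set; no transient R/O + Dirty. */
	printf("pte = 0x%016llx\n", (unsigned long long)atomic_load(&pte));
	return 0;
}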