From: Yu-cheng Yu <yu-cheng.yu@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H.J. Lu",
Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , "Ravi V. Shankar" , Dave Martin , Weijiang Yang , Pengfei Xu , Haitao Huang , Rick P Edgecombe Cc: Yu-cheng Yu , "Kirill A . Shutemov" Subject: [PATCH v29 32/32] mm: Introduce PROT_SHADOW_STACK for shadow stack Date: Fri, 20 Aug 2021 11:12:01 -0700 Message-Id: <20210820181201.31490-33-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210820181201.31490-1-yu-cheng.yu@intel.com> References: <20210820181201.31490-1-yu-cheng.yu@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org There are three possible options to create a shadow stack allocation API: an arch_prctl, a new syscall, or adding PROT_SHADOW_STACK to mmap() and mprotect(). Each has its advantages and compromises. An arch_prctl() is the least intrusive. However, the existing x86 arch_prctl() takes only two parameters. Multiple parameters must be passed in a memory buffer. There is a proposal to pass more parameters in registers [1], but no active discussion on that. A new syscall minimizes compatibility issues and offers an extensible frame work to other architectures, but this will likely result in some overlap of mmap()/mprotect(). The introduction of PROT_SHADOW_STACK to mmap()/mprotect() takes advantage of existing APIs. The x86-specific PROT_SHADOW_STACK is translated to VM_SHADOW_STACK and a shadow stack mapping is created without reinventing the wheel. There are potential pitfalls though. The most obvious one would be using this as a bypass to shadow stack protection. However, the attacker would have to get to the syscall first. [1] https://lore.kernel.org/lkml/20200828121624.108243-1-hjl.tools@gmail.com/ Signed-off-by: Yu-cheng Yu Reviewed-by: Kirill A. Shutemov Cc: Kees Cook --- arch/x86/include/asm/mman.h | 60 +++++++++++++++++++++++++++++++- arch/x86/include/uapi/asm/mman.h | 2 ++ include/linux/mm.h | 1 + 3 files changed, 62 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/mman.h b/arch/x86/include/asm/mman.h index 629f6c81263a..b77933923b9a 100644 --- a/arch/x86/include/asm/mman.h +++ b/arch/x86/include/asm/mman.h @@ -20,11 +20,69 @@ ((vm_flags) & VM_PKEY_BIT2 ? _PAGE_PKEY_BIT2 : 0) | \ ((vm_flags) & VM_PKEY_BIT3 ? _PAGE_PKEY_BIT3 : 0)) -#define arch_calc_vm_prot_bits(prot, key) ( \ +#define pkey_vm_prot_bits(prot, key) ( \ ((key) & 0x1 ? VM_PKEY_BIT0 : 0) | \ ((key) & 0x2 ? VM_PKEY_BIT1 : 0) | \ ((key) & 0x4 ? VM_PKEY_BIT2 : 0) | \ ((key) & 0x8 ? VM_PKEY_BIT3 : 0)) +#else +#define pkey_vm_prot_bits(prot, key) (0) #endif +static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot, + unsigned long pkey) +{ + unsigned long vm_prot_bits = pkey_vm_prot_bits(prot, pkey); + + if (prot & PROT_SHADOW_STACK) + vm_prot_bits |= VM_SHADOW_STACK; + + return vm_prot_bits; +} + +#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey) + +#ifdef CONFIG_X86_SHADOW_STACK +static inline bool arch_validate_prot(unsigned long prot, unsigned long addr) +{ + unsigned long valid = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | + PROT_SHADOW_STACK; + + if (prot & ~valid) + return false; + + if (prot & PROT_SHADOW_STACK) { + if (!current->thread.shstk.size) + return false; + + /* + * A shadow stack mapping is indirectly writable by only + * the CALL and WRUSS instructions, but not other write + * instructions). 
 arch/x86/include/asm/mman.h      | 60 +++++++++++++++++++++++++++++++-
 arch/x86/include/uapi/asm/mman.h |  2 ++
 include/linux/mm.h               |  1 +
 3 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/mman.h b/arch/x86/include/asm/mman.h
index 629f6c81263a..b77933923b9a 100644
--- a/arch/x86/include/asm/mman.h
+++ b/arch/x86/include/asm/mman.h
@@ -20,11 +20,69 @@
 		((vm_flags) & VM_PKEY_BIT2 ? _PAGE_PKEY_BIT2 : 0) |	\
 		((vm_flags) & VM_PKEY_BIT3 ? _PAGE_PKEY_BIT3 : 0))
 
-#define arch_calc_vm_prot_bits(prot, key) (		\
+#define pkey_vm_prot_bits(prot, key) (			\
 		((key) & 0x1 ? VM_PKEY_BIT0 : 0) |	\
 		((key) & 0x2 ? VM_PKEY_BIT1 : 0) |	\
 		((key) & 0x4 ? VM_PKEY_BIT2 : 0) |	\
 		((key) & 0x8 ? VM_PKEY_BIT3 : 0))
+#else
+#define pkey_vm_prot_bits(prot, key) (0)
 #endif
 
+static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
+						   unsigned long pkey)
+{
+	unsigned long vm_prot_bits = pkey_vm_prot_bits(prot, pkey);
+
+	if (prot & PROT_SHADOW_STACK)
+		vm_prot_bits |= VM_SHADOW_STACK;
+
+	return vm_prot_bits;
+}
+
+#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
+
+#ifdef CONFIG_X86_SHADOW_STACK
+static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
+{
+	unsigned long valid = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM |
+			      PROT_SHADOW_STACK;
+
+	if (prot & ~valid)
+		return false;
+
+	if (prot & PROT_SHADOW_STACK) {
+		if (!current->thread.shstk.size)
+			return false;
+
+		/*
+		 * A shadow stack mapping is indirectly writable only by
+		 * the CALL and WRUSS instructions, but not by other write
+		 * instructions.  PROT_SHADOW_STACK and PROT_WRITE are
+		 * mutually exclusive.
+		 */
+		if (prot & PROT_WRITE)
+			return false;
+	}
+
+	return true;
+}
+
+#define arch_validate_prot arch_validate_prot
+
+static inline bool arch_validate_flags(struct vm_area_struct *vma, unsigned long vm_flags)
+{
+	/*
+	 * Shadow stack must be anonymous and not shared.
+	 */
+	if ((vm_flags & VM_SHADOW_STACK) && !vma_is_anonymous(vma))
+		return false;
+
+	return true;
+}
+
+#define arch_validate_flags(vma, vm_flags) arch_validate_flags(vma, vm_flags)
+
+#endif /* CONFIG_X86_SHADOW_STACK */
+
 #endif /* _ASM_X86_MMAN_H */
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index f28fa4acaeaf..4c36b263cf0a 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -4,6 +4,8 @@
 
 #define MAP_32BIT	0x40		/* only give out 32bit addresses */
 
+#define PROT_SHADOW_STACK	0x10	/* shadow stack pages */
+
 #include <asm-generic/mman.h>
 
 #endif /* _UAPI_ASM_X86_MMAN_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 07e642af59d3..041e7e8ff702 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -349,6 +349,7 @@ extern unsigned int kobjsize(const void *objp);
 
 #if defined(CONFIG_X86)
 # define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
+# define VM_ARCH_CLEAR	VM_SHADOW_STACK
 #elif defined(CONFIG_PPC)
 # define VM_SAO		VM_ARCH_1	/* Strong Access Ordering (powerpc) */
 #elif defined(CONFIG_PARISC)
-- 
2.21.0