From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H. J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V. Shankar", Weijiang Yang, "Kirill A. Shutemov",
    joao.moreira@intel.com, John Allen, kcc@google.com, eranian@google.com,
    rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v2 18/39] mm: Add guard pages around a shadow stack.
Date: Thu, 29 Sep 2022 15:29:15 -0700
Message-Id: <20220929222936.14584-19-rick.p.edgecombe@intel.com>
In-Reply-To: <20220929222936.14584-1-rick.p.edgecombe@intel.com>
References: <20220929222936.14584-1-rick.p.edgecombe@intel.com>

From: Yu-cheng Yu

The shadow stack architecture constrains the ability of userspace to move
the shadow stack pointer (SSP), in order to prevent corrupting or switching
to other shadow stacks. The RSTORSSP instruction can move the SSP to a
different shadow stack, but it requires a specially placed token in order
to do this. However, the architecture does not prevent incrementing the
stack pointer to wander onto an adjacent shadow stack. To prevent this in
software, enforce guard pages at the beginning of shadow stack vmas, such
that there will always be a gap between adjacent shadow stacks. Make the
gap big enough that no userspace SSP-changing operation (besides RSTORSSP)
can move the SSP from one stack to the next.

The SSP can be incremented or decremented by CALL, RET, and INCSSP. CALL
and RET can move the SSP by a maximum of 8 bytes, at which point the
shadow stack would be accessed.

The INCSSP instruction can also increment the shadow stack pointer. It is
the shadow stack analog of an instruction like:

	addq $0x80, %rsp

However, there is one important difference between an ADD on %rsp and
INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
of the first and last elements that were "popped". It can be thought of
as acting like this:

	READ_ONCE(ssp);       // read+discard top element on stack
	ssp += nr_to_pop * 8; // move the shadow stack
	READ_ONCE(ssp-8);     // read+discard last popped stack element

The maximum distance INCSSP can move the SSP is 2040 bytes, before it
would read the memory. Therefore, a single-page gap is enough to prevent
any operation from shifting the SSP to an adjacent stack, since the SSP
would have to land in the gap at least once, causing a fault.

This could be accomplished by using VM_GROWSDOWN, but this has a
downside: it would allow shadow stacks to grow, which is unneeded and
adds a strange difference to how most regular stacks work.

Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Cc: Kees Cook
---

v2:
 - Use __weak instead of #ifdef (Dave Hansen)
 - Only have start gap on shadow stack (Andy Luto)
 - Create stack_guard_start_gap() to not duplicate code in an arch
   version of vm_start_gap() (Dave Hansen)
 - Improve commit log, partly with verbiage from Dave Hansen

Yu-cheng v25:
 - Move SHADOW_STACK_GUARD_GAP to arch/x86/mm/mmap.c.

Yu-cheng v24:
 - Instead of changing vm_*_gap(), create x86-specific versions.
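As a sanity check on the arithmetic above, here is a minimal userspace C
sketch (not part of the patch; the macro names are invented for the
example, and only the 255 * 8 = 2040 < 4096 relationship comes from the
commit message):

	#include <assert.h>
	#include <stdio.h>

	#define GUARD_GAP_BYTES  4096UL /* one 4-KB guard page */
	#define INCSSPQ_MAX_POP  255UL  /* INCSSPQ pops at most 255 entries */
	#define SS_ENTRY_BYTES   8UL    /* one 64-bit shadow stack entry */

	int main(void)
	{
		/* Largest single-instruction SSP move: 255 * 8 = 2040 bytes */
		unsigned long max_move = INCSSPQ_MAX_POP * SS_ENTRY_BYTES;

		/*
		 * INCSSP reads the last "popped" entry at the new SSP - 8,
		 * so a move that tries to cross the guard page must touch
		 * the gap at least once and fault, because 2040 < 4096.
		 */
		assert(max_move < GUARD_GAP_BYTES);
		printf("max single INCSSPQ move: %lu bytes\n", max_move);
		return 0;
	}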
 arch/x86/mm/mmap.c | 23 +++++++++++++++++++++++
 include/linux/mm.h | 11 ++++++-----
 mm/mmap.c          |  7 +++++++
 3 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index f3f52c5e2fd6..b0427bd2da30 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -250,3 +250,26 @@ bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
 		return false;
 	return true;
 }
+
+unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/*
+	 * Shadow stack pointer is moved by CALL, RET, and INCSSP(Q/D).
+	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
+	 * (~1KB for INCSSPD) and touches the first and the last element
+	 * in the range, which triggers a page fault if the range is not
+	 * in a shadow stack. Because of this, creating 4-KB guard pages
+	 * around a shadow stack prevents these instructions from going
+	 * beyond.
+	 *
+	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
+	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
+	 */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index fef14ab3abcb..09458e77bf52 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2775,15 +2775,16 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return vma;
 }
 
+unsigned long stack_guard_start_gap(struct vm_area_struct *vma);
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 	return vm_start;
 }
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 9d780f415be3..f0d2e9143bd0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -247,6 +247,13 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 	return origbrk;
 }
 
+unsigned long __weak stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+	return 0;
+}
+
 static inline unsigned long vma_compute_gap(struct vm_area_struct *vma)
 {
 	unsigned long gap, prev_end;
--
2.17.1
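The patch relies on weak symbol overriding: the __weak definition in
mm/mmap.c is the generic fallback, and the non-weak x86 definition in
arch/x86/mm/mmap.c replaces it at link time. A standalone sketch of the
same pattern (hypothetical file and function names, using the GCC
attribute directly rather than the kernel's __weak macro):

	/* default.c: weak fallback, used if no strong definition is linked */
	unsigned long __attribute__((weak)) guard_gap_demo(void)
	{
		return 0;
	}

	/* arch.c: strong definition; the linker picks this over the weak one */
	unsigned long guard_gap_demo(void)
	{
		return 4096;
	}

	/* main.c */
	#include <stdio.h>

	unsigned long guard_gap_demo(void);

	int main(void)
	{
		/* Prints 4096 when arch.c is linked in, 0 when it is omitted */
		printf("gap: %lu\n", guard_gap_demo());
		return 0;
	}

Building with "cc default.c arch.c main.c" prints 4096; dropping arch.c
from the link falls back to the weak definition and prints 0.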