Subject: [PATCH v6 15/15] arm64: scs: add shadow stacks for SDEI
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu,
    Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
    Jann Horn, Miguel Ojeda, Masahiro Yamada,
    clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Sami Tolvanen
Date: Fri, 6 Dec 2019 14:13:51 -0800
Message-Id: <20191206221351.38241-16-samitolvanen@google.com>
In-Reply-To: <20191206221351.38241-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
            <20191206221351.38241-1-samitolvanen@google.com>

This change adds per-CPU shadow call stacks for the SDEI handler.
Similarly to how the kernel stacks are handled, we add separate shadow
stacks for normal and critical events.

Signed-off-by: Sami Tolvanen
---
 arch/arm64/include/asm/scs.h |   2 +
 arch/arm64/kernel/entry.S    |  14 ++++-
 arch/arm64/kernel/scs.c      | 106 +++++++++++++++++++++++++++++------
 arch/arm64/kernel/sdei.c     |   7 +++
 4 files changed, 112 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index c50d2b0c6c5f..8e327e14bc15 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -9,6 +9,7 @@
 #ifdef CONFIG_SHADOW_CALL_STACK

 extern void scs_init_irq(void);
+extern int scs_init_sdei(void);

 static __always_inline void scs_save(struct task_struct *tsk)
 {
@@ -27,6 +28,7 @@ static inline void scs_overflow_check(struct task_struct *tsk)
 #else /* CONFIG_SHADOW_CALL_STACK */

 static inline void scs_init_irq(void) {}
+static inline int scs_init_sdei(void) { return 0; }
 static inline void scs_save(struct task_struct *tsk) {}
 static inline void scs_overflow_check(struct task_struct *tsk) {}

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 7aa2d366b2df..9327c3d21b64 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1048,13 +1048,16 @@ ENTRY(__sdei_asm_handler)

         mov     x19, x1

+#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
+        ldrb    w4, [x19, #SDEI_EVENT_PRIORITY]
+#endif
+
 #ifdef CONFIG_VMAP_STACK
         /*
          * entry.S may have been using sp as a scratch register, find whether
          * this is a normal or critical event and switch to the appropriate
          * stack for this CPU.
          */
-        ldrb    w4, [x19, #SDEI_EVENT_PRIORITY]
         cbnz    w4, 1f
         ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6
         b       2f
@@ -1064,6 +1067,15 @@
         mov     sp, x5
 #endif

+#ifdef CONFIG_SHADOW_CALL_STACK
+        /* Use a separate shadow call stack for normal and critical events */
+        cbnz    w4, 3f
+        ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
+        b       4f
+3:      ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
+4:
+#endif
+
         /*
          * We may have interrupted userspace, or a guest, or exit-from or
          * return-to either of these. We can't trust sp_el0, restore it.
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
index eaadf5430baa..dddb7c56518b 100644
--- a/arch/arm64/kernel/scs.c
+++ b/arch/arm64/kernel/scs.c
@@ -10,31 +10,105 @@
 #include
 #include

-DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#define DECLARE_SCS(name)                                               \
+        DECLARE_PER_CPU(unsigned long *, name ## _ptr);                 \
+        DECLARE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)

-#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
-DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
-        __aligned(SCS_SIZE);
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+#define DEFINE_SCS(name)                                                \
+        DEFINE_PER_CPU(unsigned long *, name ## _ptr)
+#else
+/* Allocate a static per-CPU shadow stack */
+#define DEFINE_SCS(name)                                                \
+        DEFINE_PER_CPU(unsigned long *, name ## _ptr);                  \
+        DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)     \
+                __aligned(SCS_SIZE)
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+DECLARE_SCS(irq_shadow_call_stack);
+DECLARE_SCS(sdei_shadow_call_stack_normal);
+DECLARE_SCS(sdei_shadow_call_stack_critical);
+
+DEFINE_SCS(irq_shadow_call_stack);
+#ifdef CONFIG_ARM_SDE_INTERFACE
+DEFINE_SCS(sdei_shadow_call_stack_normal);
+DEFINE_SCS(sdei_shadow_call_stack_critical);
 #endif

+static int scs_alloc_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+        unsigned long *p;
+
+        p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+                                 VMALLOC_START, VMALLOC_END,
+                                 GFP_SCS, PAGE_KERNEL,
+                                 0, cpu_to_node(cpu),
+                                 __builtin_return_address(0));
+
+        if (!p)
+                return -ENOMEM;
+        per_cpu(*ptr, cpu) = p;
+
+        return 0;
+}
+
+static void scs_free_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+        unsigned long *p = per_cpu(*ptr, cpu);
+
+        if (p) {
+                per_cpu(*ptr, cpu) = NULL;
+                vfree(p);
+        }
+}
+
+static void scs_free_sdei(void)
+{
+        int cpu;
+
+        for_each_possible_cpu(cpu) {
+                scs_free_percpu(&sdei_shadow_call_stack_normal_ptr, cpu);
+                scs_free_percpu(&sdei_shadow_call_stack_critical_ptr, cpu);
+        }
+}
+
 void scs_init_irq(void)
 {
         int cpu;

         for_each_possible_cpu(cpu) {
-#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
-                unsigned long *p;
+                if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP))
+                        WARN_ON(scs_alloc_percpu(&irq_shadow_call_stack_ptr,
+                                                 cpu));
+                else
+                        per_cpu(irq_shadow_call_stack_ptr, cpu) =
+                                per_cpu(irq_shadow_call_stack, cpu);
+        }
+}

-                p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
-                                         VMALLOC_START, VMALLOC_END,
-                                         GFP_SCS, PAGE_KERNEL,
-                                         0, cpu_to_node(cpu),
-                                         __builtin_return_address(0));
+int scs_init_sdei(void)
+{
+        int cpu;

-                per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
-#else
-                per_cpu(irq_shadow_call_stack_ptr, cpu) =
-                        per_cpu(irq_shadow_call_stack, cpu);
-#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+        if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
+                return 0;
+
+        for_each_possible_cpu(cpu) {
+                if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP)) {
+                        if (scs_alloc_percpu(
+                                &sdei_shadow_call_stack_normal_ptr, cpu) ||
+                            scs_alloc_percpu(
+                                &sdei_shadow_call_stack_critical_ptr, cpu)) {
+                                scs_free_sdei();
+                                return -ENOMEM;
+                        }
+                } else {
+                        per_cpu(sdei_shadow_call_stack_normal_ptr, cpu) =
+                                per_cpu(sdei_shadow_call_stack_normal, cpu);
+                        per_cpu(sdei_shadow_call_stack_critical_ptr, cpu) =
+                                per_cpu(sdei_shadow_call_stack_critical, cpu);
+                }
         }
+
+        return 0;
 }
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index d6259dac62b6..2854b9f7760a 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -162,6 +163,12 @@ unsigned long sdei_arch_get_entry_point(int conduit)
                 return 0;
         }

+        if (scs_init_sdei()) {
+                if (IS_ENABLED(CONFIG_VMAP_STACK))
+                        free_sdei_stacks();
+                return 0;
+        }
+
         sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;

 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-- 
2.24.0.393.g34dc348eaf-goog
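
[Editor's note, not part of the patch] As a reading aid for the
DECLARE_SCS()/DEFINE_SCS() helpers added to scs.c above, here is a rough
sketch of what DEFINE_SCS(sdei_shadow_call_stack_normal) expands to in the
two configurations. The expansion is an illustration derived from the macros
in this patch, not additional code from the series:

/* Editor's sketch: approximate expansion of
 * DEFINE_SCS(sdei_shadow_call_stack_normal), based on the macros above.
 *
 * With CONFIG_SHADOW_CALL_STACK_VMAP=y only the per-CPU pointer is defined;
 * the backing stack is allocated at boot by scs_init_sdei(), which calls
 * scs_alloc_percpu() -> __vmalloc_node_range() for each possible CPU.
 */
DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);

/* Without CONFIG_SHADOW_CALL_STACK_VMAP, a statically allocated,
 * SCS_SIZE-aligned per-CPU array provides the backing storage, and
 * scs_init_sdei() only points sdei_shadow_call_stack_normal_ptr at it.
 */
DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)],
               sdei_shadow_call_stack_normal) __aligned(SCS_SIZE);

In either case, __sdei_asm_handler loads the per-CPU pointer for the current
CPU into x18 and selects the normal or critical variant based on
SDEI_EVENT_PRIORITY, mirroring the existing VMAP_STACK stack selection.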