Date: Fri, 1 Oct 2021 22:39:25 +0200
From: Borislav Petkov
To: Joerg Roedel
Cc: Tom Lendacky, linux-kernel@vger.kernel.org, x86@kernel.org,
	Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Brijesh Singh
Subject: Re: [PATCH] x86/sev: Fully map the #VC exception stacks
References: <113eca80a14cd280540c38488fd31ac0fa7bf36c.1633063250.git.thomas.lendacky@amd.com>

It doesn't get any more straightforward than this. We can ifdef around
the ESTACKS_MEMBERS VC and VC2 arrays so that those things get
allocated only on a CONFIG_AMD_MEM_ENCRYPT kernel and we don't waste
4 pages per CPU on machines which don't do SEV, but meh.

Thoughts?

---
diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 3d52b094850a..13a3e8510c33 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -21,9 +21,9 @@
 	char	MCE_stack_guard[guardsize];			\
 	char	MCE_stack[EXCEPTION_STKSZ];			\
 	char	VC_stack_guard[guardsize];			\
-	char	VC_stack[optional_stack_size];			\
+	char	VC_stack[EXCEPTION_STKSZ];			\
 	char	VC2_stack_guard[guardsize];			\
-	char	VC2_stack[optional_stack_size];			\
+	char	VC2_stack[EXCEPTION_STKSZ];			\
 	char	IST_top_guard[guardsize];			\
 
 /* The exception stacks' physical storage. No guard pages required */
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..88401675dabb 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -46,16 +46,6 @@ static struct ghcb __initdata *boot_ghcb;
 struct sev_es_runtime_data {
 	struct ghcb ghcb_page;
 
-	/* Physical storage for the per-CPU IST stack of the #VC handler */
-	char ist_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
-	/*
-	 * Physical storage for the per-CPU fall-back stack of the #VC handler.
-	 * The fall-back stack is used when it is not safe to switch back to the
-	 * interrupted stack in the #VC entry code.
-	 */
-	char fallback_stack[EXCEPTION_STKSZ] __aligned(PAGE_SIZE);
-
 	/*
 	 * Reserve one page per CPU as backup storage for the unencrypted GHCB.
 	 * It is needed when an NMI happens while the #VC handler uses the real
@@ -99,27 +89,6 @@ DEFINE_STATIC_KEY_FALSE(sev_es_enable_key);
 /* Needed in vc_early_forward_exception */
 void do_early_exception(struct pt_regs *regs, int trapnr);
 
-static void __init setup_vc_stacks(int cpu)
-{
-	struct sev_es_runtime_data *data;
-	struct cpu_entry_area *cea;
-	unsigned long vaddr;
-	phys_addr_t pa;
-
-	data = per_cpu(runtime_data, cpu);
-	cea  = get_cpu_entry_area(cpu);
-
-	/* Map #VC IST stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC);
-	pa    = __pa(data->ist_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-
-	/* Map VC fall-back stack */
-	vaddr = CEA_ESTACK_BOT(&cea->estacks, VC2);
-	pa    = __pa(data->fallback_stack);
-	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
-}
-
 static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
 	unsigned long sp = regs->sp;
@@ -787,7 +756,6 @@ void __init sev_es_init_vc_handling(void)
 	for_each_possible_cpu(cpu) {
 		alloc_runtime_data(cpu);
 		init_ghcb(cpu);
-		setup_vc_stacks(cpu);
 	}
 
 	sev_es_setup_play_dead();
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index f5e1e60c9095..82d062414f19 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -110,6 +110,13 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu)
 	cea_map_stack(NMI);
 	cea_map_stack(DB);
 	cea_map_stack(MCE);
+
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT)) {
+		if (sev_es_active()) {
+			cea_map_stack(VC);
+			cea_map_stack(VC2);
+		}
+	}
 }
 #else
 static inline void percpu_setup_exception_stacks(unsigned int cpu)
-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
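For illustration, the ifdef variant mentioned above could look roughly
like the below - a completely untested sketch on top of the same
cpu_entry_area.h change, and VC_EXCEPTION_STKSZ is only an illustrative
name here, not something the diff introduces. The idea is to collapse
the VC/VC2 arrays to zero size on a !CONFIG_AMD_MEM_ENCRYPT kernel:

#ifdef CONFIG_AMD_MEM_ENCRYPT
/* SEV-ES capable kernels carry real #VC IST and fallback stacks. */
#define VC_EXCEPTION_STKSZ	EXCEPTION_STKSZ
#else
/* No SEV-ES support: the VC/VC2 members become zero-sized and cost no storage. */
#define VC_EXCEPTION_STKSZ	0
#endif

	/* ... the corresponding members inside ESTACKS_MEMBERS() ... */
	char	VC_stack_guard[guardsize];			\
	char	VC_stack[VC_EXCEPTION_STKSZ];			\
	char	VC2_stack_guard[guardsize];			\
	char	VC2_stack[VC_EXCEPTION_STKSZ];			\

The IS_ENABLED()/sev_es_active() check in percpu_setup_exception_stacks()
above would then still decide whether the stacks actually get mapped.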