From: Joerg Roedel
To: x86@kernel.org
Cc: Joerg Roedel, Joerg Roedel, hpa@zytor.com, Andy Lutomirski,
    Dave Hansen, Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
    Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen, Erdem Aktas,
    Masami Hiramatsu, Mike Stunes, Sean Christopherson, Martin Radev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux-foundation.org
Subject: [PATCH v4 15/75] x86/boot/compressed/64: Always switch to own page-table
Date: Tue, 14 Jul 2020 14:08:17 +0200
Message-Id: <20200714120917.11253-16-joro@8bytes.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200714120917.11253-1-joro@8bytes.org>
References: <20200714120917.11253-1-joro@8bytes.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Joerg Roedel

When booted through startup_64 the kernel keeps running on the EFI
page-table until the KASLR code sets up its own page-table. Without
KASLR the pre-decompression boot code never switches off the EFI
page-table. Change that by unconditionally switching to a
kernel-controlled page-table after relocation. This makes sure we can
make changes to the mapping when necessary, for example to map pages
unencrypted in SEV and SEV-ES guests.

Also remove the debug_putstr() calls in initialize_identity_maps()
because the function now runs before console_init() is called.

Signed-off-by: Joerg Roedel
---
 arch/x86/boot/compressed/head_64.S      |  3 +-
 arch/x86/boot/compressed/ident_map_64.c | 51 +++++++++++++++----------
 arch/x86/boot/compressed/kaslr.c        |  3 --
 3 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 4174d2f97b29..36f18d5592f4 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -543,10 +543,11 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
 	rep	stosq
 
 	/*
-	 * Load stage2 IDT
+	 * Load stage2 IDT and switch to our own page-table
 	 */
 	pushq	%rsi
 	call	load_stage2_idt
+	call	initialize_identity_maps
 	popq	%rsi
 
 	/*
diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index e3d980ae9c2b..ecf9353b064d 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -86,9 +86,31 @@ phys_addr_t physical_mask = (1ULL << __PHYSICAL_MASK_SHIFT) - 1;
  */
 static struct x86_mapping_info mapping_info;
 
+/*
+ * Adds the specified range to what will become the new identity mappings.
+ * Once all ranges have been added, the new mapping is activated by calling
+ * finalize_identity_maps() below.
+ */
+void add_identity_map(unsigned long start, unsigned long size)
+{
+	unsigned long end = start + size;
+
+	/* Align boundary to 2M. */
+	start = round_down(start, PMD_SIZE);
+	end = round_up(end, PMD_SIZE);
+	if (start >= end)
+		return;
+
+	/* Build the mapping. */
+	kernel_ident_mapping_init(&mapping_info, (pgd_t *)top_level_pgt,
+				  start, end);
+}
+
 /* Locates and clears a region for a new top level page table. */
 void initialize_identity_maps(void)
 {
+	unsigned long start, size;
+
 	/* If running as an SEV guest, the encryption mask is required. */
 	set_sev_encryption_mask();
 
@@ -121,37 +143,24 @@ void initialize_identity_maps(void)
 	 */
 	top_level_pgt = read_cr3_pa();
 	if (p4d_offset((pgd_t *)top_level_pgt, 0) == (p4d_t *)_pgtable) {
-		debug_putstr("booted via startup_32()\n");
 		pgt_data.pgt_buf = _pgtable + BOOT_INIT_PGT_SIZE;
 		pgt_data.pgt_buf_size = BOOT_PGT_SIZE - BOOT_INIT_PGT_SIZE;
 		memset(pgt_data.pgt_buf, 0, pgt_data.pgt_buf_size);
 	} else {
-		debug_putstr("booted via startup_64()\n");
 		pgt_data.pgt_buf = _pgtable;
 		pgt_data.pgt_buf_size = BOOT_PGT_SIZE;
 		memset(pgt_data.pgt_buf, 0, pgt_data.pgt_buf_size);
 		top_level_pgt = (unsigned long)alloc_pgt_page(&pgt_data);
 	}
-}
 
-/*
- * Adds the specified range to what will become the new identity mappings.
- * Once all ranges have been added, the new mapping is activated by calling
- * finalize_identity_maps() below.
- */
-void add_identity_map(unsigned long start, unsigned long size)
-{
-	unsigned long end = start + size;
-
-	/* Align boundary to 2M. */
-	start = round_down(start, PMD_SIZE);
-	end = round_up(end, PMD_SIZE);
-	if (start >= end)
-		return;
-
-	/* Build the mapping. */
-	kernel_ident_mapping_init(&mapping_info, (pgd_t *)top_level_pgt,
-				  start, end);
+	/*
+	 * New page-table is set up - map the kernel image and load it
+	 * into cr3.
+	 */
+	start = (unsigned long)_head;
+	size  = _end - _head;
+	add_identity_map(start, size);
+	write_cr3(top_level_pgt);
 }
 
 /*
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 7c61a8c5b9cf..856dc1c9bb0d 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -903,9 +903,6 @@ void choose_random_location(unsigned long input,
 
 	boot_params->hdr.loadflags |= KASLR_FLAG;
 
-	/* Prepare to add new identity pagetables on demand. */
-	initialize_identity_maps();
-
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init(input, input_size, *output);
 
-- 
2.27.0
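
As an aside for readers following the series outside the kernel tree: the
2M alignment that add_identity_map() applies before building the mapping
can be reproduced in a small standalone program. This is only a sketch of
the alignment step, not of the mapping code itself; PMD_SIZE and the
round_down()/round_up() helpers are redefined locally for the example and
the start/size values are made up.

	/* Illustration of the 2M rounding used by add_identity_map(). */
	#include <stdio.h>

	#define PMD_SIZE	(1UL << 21)	/* 2M pages on x86-64 */

	/* Power-of-two rounding, matching the behaviour needed for PMD_SIZE. */
	#define round_down(x, y)	((x) & ~((y) - 1))
	#define round_up(x, y)		((((x) - 1) | ((y) - 1)) + 1)

	int main(void)
	{
		unsigned long start = 0x1000200UL;	/* arbitrary, not 2M aligned */
		unsigned long size  = 0x7ff000UL;	/* arbitrary mapping size    */
		unsigned long end   = start + size;

		/* Widen the [start, end) range to 2M boundaries. */
		start = round_down(start, PMD_SIZE);
		end   = round_up(end, PMD_SIZE);

		printf("identity-map range: 0x%lx - 0x%lx\n", start, end);
		return 0;
	}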