References: <20201119162543.78001-1-dbrazdil@google.com> <20201119162543.78001-3-dbrazdil@google.com>
In-Reply-To: <20201119162543.78001-3-dbrazdil@google.com>
From: Ard Biesheuvel
Date: Tue, 24 Nov 2020 14:45:04 +0100
Subject: Re: [RFC PATCH 2/6] kvm: arm64: Fix up RELA relocations in hyp code/data
To: David Brazdil
Cc: kvmarm, Linux ARM, Linux Kernel Mailing List, Marc Zyngier, James Morse,
 Julien Thierry, Suzuki K Poulose, Catalin Marinas, Will Deacon,
 Mark Rutland, Andrew Scull, Android Kernel Team
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 19 Nov 2020 at 17:25, David Brazdil wrote:
>
> KVM nVHE code runs under a different VA mapping than the kernel, hence
> so far it relied only on PC-relative addressing to avoid accidentally
> using a relocated kernel VA from a constant pool (see hyp_symbol_addr).
>
> So as to reduce the possibility of a programmer error, fixup the
> relocated addresses instead. Let the kernel relocate them to kernel VA
> first, but then iterate over them again, filter those that point to hyp
> code/data and convert the kernel VA to hyp VA.
>
> This is done after kvm_compute_layout and before apply_alternatives.
>

If this is significant enough to call out, please include the reason
for it.
> Signed-off-by: David Brazdil
> ---
>  arch/arm64/include/asm/kvm_mmu.h |  1 +
>  arch/arm64/kernel/smp.c          |  4 +-
>  arch/arm64/kvm/va_layout.c       | 76 ++++++++++++++++++++++++++++++++
>  3 files changed, 80 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 5168a0c516ae..e5226f7e4732 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -105,6 +105,7 @@ alternative_cb_end
>  void kvm_update_va_mask(struct alt_instr *alt,
>  			__le32 *origptr, __le32 *updptr, int nr_inst);
>  void kvm_compute_layout(void);
> +void kvm_fixup_hyp_relocations(void);
>
>  static __always_inline unsigned long __kern_hyp_va(unsigned long v)
>  {
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 18e9727d3f64..30241afc2c93 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -434,8 +434,10 @@ static void __init hyp_mode_check(void)
>  			   "CPU: CPUs started in inconsistent modes");
>  	else
>  		pr_info("CPU: All CPU(s) started at EL1\n");
> -	if (IS_ENABLED(CONFIG_KVM))
> +	if (IS_ENABLED(CONFIG_KVM)) {
>  		kvm_compute_layout();
> +		kvm_fixup_hyp_relocations();
> +	}
>  }
>
>  void __init smp_cpus_done(unsigned int max_cpus)
> diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
> index d8cc51bd60bf..b80fab974896 100644
> --- a/arch/arm64/kvm/va_layout.c
> +++ b/arch/arm64/kvm/va_layout.c
> @@ -10,6 +10,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>
> @@ -82,6 +83,81 @@ __init void kvm_compute_layout(void)
>  	init_hyp_physvirt_offset();
>  }
>
> +#define __load_elf_u64(s)					\
> +	({							\
> +		extern u64 s;					\
> +		u64 val;					\
> +								\
> +		asm ("ldr %0, =%1" : "=r"(val) : "S"(&s));	\
> +		val;						\
> +	})
> +

Do you need this to ensure that the reference is absolute? There may be
more elegant ways to achieve that, using weak references for instance.
Also, in the relocation startup code, I deliberately used a 32-bit
quantity here, as it won't be mistaken for an absolute virtual address
that needs relocation.

> +static bool __is_within_bounds(u64 addr, char *start, char *end)
> +{
> +	return start <= (char*)addr && (char*)addr < end;
> +}
> +
> +static bool __is_in_hyp_section(u64 addr)
> +{
> +	return __is_within_bounds(addr, __hyp_text_start, __hyp_text_end) ||
> +	       __is_within_bounds(addr, __hyp_rodata_start, __hyp_rodata_end) ||
> +	       __is_within_bounds(addr,
> +				  CHOOSE_NVHE_SYM(__per_cpu_start),
> +				  CHOOSE_NVHE_SYM(__per_cpu_end));
> +}
> +

It is slightly disappointing that we need to filter these one by one
like this, but I don't think there are any guarantees about the order in
which the R_AARCH64_RELATIVE entries appear.

> +static void __fixup_hyp_rel(u64 addr)

__init ?

> +{
> +	u64 *ptr, kern_va, hyp_va;
> +
> +	/* Adjust the relocation address taken from ELF for KASLR. */
> +	addr += kaslr_offset();
> +
> +	/* Skip addresses not in any of the hyp sections. */
> +	if (!__is_in_hyp_section(addr))
> +		return;
> +
> +	/* Get the LM alias of the relocation address. */
> +	ptr = (u64*)kvm_ksym_ref((void*)addr);
> +
> +	/*
> +	 * Read the value at the relocation address. It has already been
> +	 * relocated to the actual kernel kimg VA.
> +	 */
> +	kern_va = (u64)kvm_ksym_ref((void*)*ptr);
> +
> +	/* Convert to hyp VA. */
> +	hyp_va = __early_kern_hyp_va(kern_va);
> +
> +	/* Store hyp VA at the relocation address. */
> +	*ptr = __early_kern_hyp_va(kern_va);
> +}
> +
> +static void __fixup_hyp_rela(void)

__init ?

> +{
> +	Elf64_Rela *rel;
> +	size_t i, n;
> +
> +	rel = (Elf64_Rela*)(kimage_vaddr + __load_elf_u64(__rela_offset));
> +	n = __load_elf_u64(__rela_size) / sizeof(*rel);
> +
> +	for (i = 0; i < n; ++i)
> +		__fixup_hyp_rel(rel[i].r_offset);
> +}
> +
> +/*
> + * The kernel relocated pointers to kernel VA. Iterate over relocations in
> + * the hypervisor ELF sections and convert them to hyp VA. This avoids the
> + * need to only use PC-relative addressing in hyp.
> + */
> +__init void kvm_fixup_hyp_relocations(void)

It is more idiomatic to put the __init after the 'void', and someone is
undoubtedly going to send a patch to 'fix' that if we merge it like this.

> +{
> +	if (!IS_ENABLED(CONFIG_RELOCATABLE) || has_vhe())
> +		return;
> +
> +	__fixup_hyp_rela();
> +}
> +
>  static u32 compute_instruction(int n, u32 rd, u32 rn)
>  {
>  	u32 insn = AARCH64_BREAK_FAULT;
> --
> 2.29.2.299.gdc1121823c-goog
>