Date: Mon, 22 Mar 2021 13:44:38 +0000
Message-ID: <87o8fbgv5l.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Quentin Perret <qperret@google.com>
Cc: catalin.marinas@arm.com, will@kernel.org, james.morse@arm.com,
    julien.thierry.kdev@gmail.com, suzuki.poulose@arm.com,
    android-kvm@google.com, seanjc@google.com, mate.toth-pal@arm.com,
    linux-kernel@vger.kernel.org, robh+dt@kernel.org,
    linux-arm-kernel@lists.infradead.org, kernel-team@android.com,
    kvmarm@lists.cs.columbia.edu, tabba@google.com, ardb@kernel.org,
    mark.rutland@arm.com, dbrazdil@google.com
Subject: Re: [PATCH v6 13/38] KVM: arm64: Enable access to sanitized CPU features at EL2
In-Reply-To: <20210319100146.1149909-14-qperret@google.com>
References: <20210319100146.1149909-1-qperret@google.com>
	<20210319100146.1149909-14-qperret@google.com>

Hi Quentin,

On Fri, 19 Mar 2021 10:01:21 +0000,
Quentin Perret <qperret@google.com> wrote:
>
> Introduce the infrastructure in KVM enabling to copy CPU feature
> registers into EL2-owned data-structures, to allow reading sanitised
> values directly at EL2 in nVHE.
>
> Given that only a subset of these features are being read by the
> hypervisor, the ones that need to be copied are to be listed under
> <asm/kvm_cpufeature.h> together with the name of the nVHE variable that
> will hold the copy. This introduces only the infrastructure enabling
> this copy. The first users will follow shortly.
>
> Signed-off-by: Quentin Perret <qperret@google.com>
> ---
>  arch/arm64/include/asm/cpufeature.h     |  1 +
>  arch/arm64/include/asm/kvm_cpufeature.h | 22 ++++++++++++++++++++++
>  arch/arm64/include/asm/kvm_host.h       |  4 ++++
>  arch/arm64/kernel/cpufeature.c          | 13 +++++++++++++
>  arch/arm64/kvm/sys_regs.c               | 19 +++++++++++++++++++
>  5 files changed, 59 insertions(+)
>  create mode 100644 arch/arm64/include/asm/kvm_cpufeature.h
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 61177bac49fa..a85cea2cac57 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -607,6 +607,7 @@ void check_local_cpu_capabilities(void);
>
>  u64 read_sanitised_ftr_reg(u32 id);
>  u64 __read_sysreg_by_encoding(u32 sys_id);
> +int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst);
>
>  static inline bool cpu_supports_mixed_endian_el0(void)
>  {
> diff --git a/arch/arm64/include/asm/kvm_cpufeature.h b/arch/arm64/include/asm/kvm_cpufeature.h
> new file mode 100644
> index 000000000000..3d245f96a9fe
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_cpufeature.h
> @@ -0,0 +1,22 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2020 - Google LLC
> + * Author: Quentin Perret <qperret@google.com>
> + */
> +
> +#ifndef __ARM64_KVM_CPUFEATURE_H__
> +#define __ARM64_KVM_CPUFEATURE_H__
> +
> +#include
> +
> +#include
> +
> +#if defined(__KVM_NVHE_HYPERVISOR__)
> +#define DECLARE_KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg name
> +#define DEFINE_KVM_HYP_CPU_FTR_REG(name) struct arm64_ftr_reg name
> +#else
> +#define DECLARE_KVM_HYP_CPU_FTR_REG(name) extern struct arm64_ftr_reg kvm_nvhe_sym(name)
> +#define DEFINE_KVM_HYP_CPU_FTR_REG(name) BUILD_BUG()
> +#endif
> +
> +#endif
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 6a2031af9562..02e172dc5087 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -740,9 +740,13 @@ void kvm_clr_pmu_events(u32 clr);
>
>  void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
> +
> +void setup_kvm_el2_caps(void);
>  #else
>  static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
>  static inline void kvm_clr_pmu_events(u32 clr) {}
> +
> +static inline void setup_kvm_el2_caps(void) {}
>  #endif
>
>  void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 066030717a4c..6252476e4e73 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1154,6 +1154,18 @@ u64 read_sanitised_ftr_reg(u32 id)
>  }
>  EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
>
> +int copy_ftr_reg(u32 id, struct arm64_ftr_reg *dst)
> +{
> +	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);
> +
> +	if (!regp)
> +		return -EINVAL;
> +
> +	*dst = *regp;
> +
> +	return 0;
> +}
> +
>  #define read_sysreg_case(r) \
>  	case r: val = read_sysreg_s(r); break;
>
> @@ -2773,6 +2785,7 @@ void __init setup_cpu_features(void)
>
>  	setup_system_capabilities();
>  	setup_elf_hwcaps(arm64_elf_hwcaps);
> +	setup_kvm_el2_caps();
>
>  	if (system_supports_32bit_el0())
>  		setup_elf_hwcaps(compat_elf_hwcaps);
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 4f2f1e3145de..6c5d133689ae 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -21,6 +21,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -2775,3 +2776,21 @@ void kvm_sys_reg_table_init(void)
>  	/* Clear all higher bits. */
>  	cache_levels &= (1 << (i*3))-1;
>  }
> +
> +#define CPU_FTR_REG_HYP_COPY(id, name) \
> +	{ .sys_id = id, .dst = (struct arm64_ftr_reg *)&kvm_nvhe_sym(name) }
> +struct __ftr_reg_copy_entry {
> +	u32			sys_id;
> +	struct arm64_ftr_reg	*dst;
> +} hyp_ftr_regs[] __initdata = {
> +};
> +
> +void __init setup_kvm_el2_caps(void)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(hyp_ftr_regs); i++) {
> +		WARN(copy_ftr_reg(hyp_ftr_regs[i].sys_id, hyp_ftr_regs[i].dst),
> +		     "%u feature register not found\n", hyp_ftr_regs[i].sys_id);
> +	}
> +}
> -- 
> 2.31.0.rc2.261.g7f71774620-goog
>
>

I can't say I'm thrilled with this. Actually, it is fair to say that
I don't like it at all! ;-) Copying whole structures with pointers
that make no sense at EL2 feels... wrong.

As we discussed offline, the main reason for this infrastructure is
that the read_ctr macro directly uses arm64_ftr_reg_ctrel0.sys_val
when ARM64_MISMATCHED_CACHE_TYPE is set.

One thing to realise is that with the protected mode, we can rely on
patching as there is no such thing as a "late" CPU. So by specialising
read_ctr when compiled for nVHE, we can just make it give us the final
value, provided that KVM's own __flush_dcache_area() is limited to
protected mode.

Once this problem is solved, this whole patch can mostly go, as we are
left with exactly *two* u64 quantities to be populated, something that
we can probably do in kvm_sys_reg_table_init().

I'll post some patches later today to try and explain what I have in
mind.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
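P.S. To make the first point a bit more concrete: what this patch copies
to EL2 is a whole struct arm64_ftr_reg, which (abridged, and quoting
from memory rather than from the tree) looks roughly like this:

	struct arm64_ftr_reg {
		const char			*name;
		/* ... various masks and values elided ... */
		u64				sys_val;
		const struct arm64_ftr_bits	*ftr_bits;
	};

Both name and ftr_bits are host virtual addresses. Once the structure
has been copied into an EL2-owned variable, dereferencing either of
them at EL2 is meaningless -- the only thing the hypervisor can
sensibly consume is the u64 sys_val.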
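P.P.S. And for the last point, this is the sort of thing I have in
mind -- a completely untested sketch, with made-up *_sys_val names
standing in for whatever two values the hypervisor actually ends up
needing:

	/*
	 * Rough sketch only: hand the nVHE hypervisor the sanitised
	 * 64bit values it consumes, instead of whole arm64_ftr_reg
	 * structures. The extern names below are invented for the
	 * sake of the example.
	 */
	extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
	extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);

	void kvm_sys_reg_table_init(void)
	{
		/* ... existing table checks and cache_levels setup ... */

		kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) =
			read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
		kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) =
			read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
	}

No new header, no copy_ftr_reg(), and nothing that looks like a host
pointer crossing over to EL2.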