Date: Mon, 31 Jul 2017 14:53:00 +0200
From: Christoffer Dall
To: Jintack Lim
Cc: kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, corbet@lwn.net, pbonzini@redhat.com,
	rkrcmar@redhat.com, linux@armlinux.org.uk, catalin.marinas@arm.com,
	will.deacon@arm.com, akpm@linux-foundation.org, mchehab@kernel.org,
	cov@codeaurora.org, daniel.lezcano@linaro.org, david.daney@cavium.com,
	mark.rutland@arm.com, suzuki.poulose@arm.com, stefan@hello-penguin.com,
	andy.gross@linaro.org, wcohen@redhat.com, ard.biesheuvel@linaro.org,
	shankerd@codeaurora.org, vladimir.murzin@arm.com, james.morse@arm.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Jintack Lim
Subject: Re: [RFC PATCH v2 37/38] KVM: arm64: Respect the virtual HCR_EL2.NV1 bit setting
Message-ID: <20170731125300.GW5176@cbox>
References: <1500397144-16232-1-git-send-email-jintack.lim@linaro.org>
 <1500397144-16232-38-git-send-email-jintack.lim@linaro.org>
In-Reply-To: <1500397144-16232-38-git-send-email-jintack.lim@linaro.org>

On Tue, Jul 18, 2017 at 11:59:03AM -0500, Jintack Lim wrote:
> Forward ELR_EL1, SPSR_EL1 and VBAR_EL1 traps to the virtual EL2 if the
> virtual HCR_EL2.NV1 bit is set.
>
> This is for recursive nested virtualization.
>
> Signed-off-by: Jintack Lim
> ---
>  arch/arm64/include/asm/kvm_arm.h |  1 +
>  arch/arm64/kvm/sys_regs.c        | 18 ++++++++++++++++++
>  2 files changed, 19 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index aeaac4e..a1274b7 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -23,6 +23,7 @@
>  #include
>
>  /* Hyp Configuration Register (HCR) bits */
> +#define HCR_NV1		(UL(1) << 43)
>  #define HCR_NV		(UL(1) << 42)
>  #define HCR_E2H		(UL(1) << 34)
>  #define HCR_ID		(UL(1) << 33)
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3e4ec5e..6f67666 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1031,6 +1031,15 @@ static bool trap_el2_regs(struct kvm_vcpu *vcpu,
>  	return true;
>  }
>
> +/* This function is to support the recursive nested virtualization */
> +static bool forward_nv1_traps(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
> +{
> +	if (!vcpu_mode_el2(vcpu) && (vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV1))
> +		return true;
> +
> +	return false;
> +}
> +
>  static bool access_elr(struct kvm_vcpu *vcpu,
>  		       struct sys_reg_params *p,
>  		       const struct sys_reg_desc *r)
> @@ -1038,6 +1047,9 @@ static bool access_elr(struct kvm_vcpu *vcpu,
>  	if (el12_reg(p) && forward_nv_traps(vcpu))
>  		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));
>
> +	if (!el12_reg(p) && forward_nv1_traps(vcpu, p))
> +		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));
> +
>  	access_rw(p, &vcpu->arch.ctxt.gp_regs.elr_el1);
>  	return true;
>  }
> @@ -1049,6 +1061,9 @@ static bool access_spsr(struct kvm_vcpu *vcpu,
>  	if (el12_reg(p) && forward_nv_traps(vcpu))
>  		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));
>
> +	if (!el12_reg(p) && forward_nv1_traps(vcpu, p))
> +		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));
> +
>  	access_rw(p, &vcpu->arch.ctxt.gp_regs.spsr[KVM_SPSR_EL1]);
>  	return true;
>  }
> @@ -1060,6 +1075,9 @@ static bool access_vbar(struct kvm_vcpu *vcpu,
>  	if (el12_reg(p) && forward_nv_traps(vcpu))
>  		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));
>
> +	if (!el12_reg(p) && forward_nv1_traps(vcpu, p))
> +		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));
> +
>  	access_rw(p, &vcpu_sys_reg(vcpu, r->reg));
>  	return true;
>  }
> --
> 1.9.1
>

Will we ever trap on any of these if !el12_reg() && !forward_nv_traps()?
If not, do we need the !el12_reg() checks here?

Thanks,
-Christoffer
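
P.S. Just to make the question concrete: dropping the !el12_reg() guard would
reduce access_elr() (and, along the same lines, access_spsr() and
access_vbar()) to something like the sketch below.  This is only meant to
illustrate the simplification I'm asking about -- whether it is actually
equivalent is exactly the question above -- and it simply reuses the helpers
from this patch:

static bool access_elr(struct kvm_vcpu *vcpu,
		       struct sys_reg_params *p,
		       const struct sys_reg_desc *r)
{
	/* Forward EL12 accesses to the virtual EL2 when HCR_EL2.NV is set. */
	if (el12_reg(p) && forward_nv_traps(vcpu))
		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));

	/*
	 * NV1 forwarding without the !el12_reg() guard; this assumes we
	 * never reach this point with an EL12 encoding unless the check
	 * above has already forwarded the trap.
	 */
	if (forward_nv1_traps(vcpu, p))
		return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_hsr(vcpu));

	access_rw(p, &vcpu->arch.ctxt.gp_regs.elr_el1);
	return true;
}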