Date: Tue, 24 Dec 2019 15:17:39 +0000
From: Andrew Murray
To: Marc Zyngier
Cc: kvm@vger.kernel.org, Catalin Marinas, linux-kernel@vger.kernel.org,
 Sudeep Holla, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 08/18] arm64: KVM: add support to save/restore SPE
 profiling buffer controls
Message-ID: <20191224151739.GP42593@e119886-lin.cambridge.arm.com>
References: <20191220143025.33853-1-andrew.murray@arm.com>
 <20191220143025.33853-9-andrew.murray@arm.com>
 <20191221135755.70a6e8df@why>
 <20191224104929.GE42593@e119886-lin.cambridge.arm.com>
In-Reply-To: <20191224104929.GE42593@e119886-lin.cambridge.arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Tue, Dec 24, 2019 at 10:49:30AM +0000, Andrew Murray wrote:
> On Sat, Dec 21, 2019 at 01:57:55PM +0000, Marc Zyngier wrote:
> > On Fri, 20 Dec 2019 14:30:15 +0000
> > Andrew Murray wrote:
> > 
> > > From: Sudeep Holla
> > > 
> > > Currently since we don't support profiling using SPE in the guests,
> > > we just save the PMSCR_EL1, flush the profiling buffers and disable
> > > sampling. However in order to support simultaneous sampling both in
> > 
> > Is the sampling actually simultaneous? I don't believe so (the whole
> > series would be much simpler if it was).
> 
> No, the SPE is used by either the guest or host at any one time. I guess
> the term "simultaneous" was used to refer to the illusion given to both
> the guest and host that they are able to use it whenever they like. I'll
> update the commit message to drop the magic.
> 
> > 
> > > the host and guests, we need to save and reatore the complete SPE
> > 
> > s/reatore/restore/
> 
> Noted.
> 
> > 
> > > profiling buffer controls' context.
> > > 
> > > Let's add the support for the same and keep it disabled for now.
> > > We can enable it conditionally only if guests are allowed to use
> > > SPE.
> > > 
> > > Signed-off-by: Sudeep Holla
> > > [ Clear PMBSR bit when saving state to prevent spurious interrupts ]
> > > Signed-off-by: Andrew Murray
> > > ---
> > >  arch/arm64/kvm/hyp/debug-sr.c | 51 +++++++++++++++++++++++++++++------
> > >  1 file changed, 43 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
> > > index 8a70a493345e..12429b212a3a 100644
> > > --- a/arch/arm64/kvm/hyp/debug-sr.c
> > > +++ b/arch/arm64/kvm/hyp/debug-sr.c
> > > @@ -85,7 +85,8 @@
> > >  	default:	write_debug(ptr[0], reg, 0);	\
> > >  	}
> > >  
> > > -static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
> > > +static void __hyp_text
> > > +__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
> > 
> > nit: don't split lines like this if you can avoid it. You can put the
> > full_ctxt parameter on a separate line instead.
> 
> Yes, understood.
> 
> > 
> > > {
> > > 	u64 reg;
> > > 
> > > @@ -102,22 +103,46 @@ static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
> > >  	if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
> > >  		return;
> > >  
> > > -	/* No; is the host actually using the thing? */
> > > -	reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
> > > -	if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)))
> > > +	/* Save the control register and disable data generation */
> > > +	ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1(SYS_PMSCR);
> > > +
> > > +	if (!ctxt->sys_regs[PMSCR_EL1])
> > 
> > Shouldn't you check the enable bits instead of relying on the whole
> > thing being zero?
> 
> Yes, that would make more sense (E1SPE and E0SPE).
> 
> I feel that this check makes an assumption about the guest/host SPE
> driver... What happens if the SPE driver writes to some SPE registers
> but doesn't enable PMSCR? If the guest is also using SPE then those
> writes will be lost; when the host returns and the SPE driver enables
> SPE, it won't work.
> 
> With a quick look at the SPE driver I'm not sure this will happen, but
> even so it makes me nervous relying on these assumptions. I wonder if
> this risk is present in other devices?

In fact, this may be a good reason to trap the SPE registers - this
would allow you to conditionally save/restore based on a dirty bit. It
would also allow you to re-evaluate the SPE interrupt (for example when
the guest clears the status register) and thus potentially reduce any
black hole.

Thanks,

Andrew Murray

> 
> > 
> > > 		return;
> > > 
> > > 	/* Yes; save the control register and disable data generation */
> > > -	ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1(SYS_PMSCR);
> > 
> > You've already saved the control register...
> 
> I'll remove that.
> 
> > 
> > > 	write_sysreg_el1(0, SYS_PMSCR);
> > > 	isb();
> > > 
> > > 	/* Now drain all buffered data to memory */
> > > 	psb_csync();
> > > 	dsb(nsh);
> > > +
> > > +	if (!full_ctxt)
> > > +		return;
> > > +
> > > +	ctxt->sys_regs[PMBLIMITR_EL1] = read_sysreg_s(SYS_PMBLIMITR_EL1);
> > > +	write_sysreg_s(0, SYS_PMBLIMITR_EL1);
> > > +
> > > +	/*
> > > +	 * As PMBSR is conditionally restored when returning to the host we
> > > +	 * must ensure the service bit is unset here to prevent a spurious
> > > +	 * host SPE interrupt from being raised.
> > > +	 */
> > > +	ctxt->sys_regs[PMBSR_EL1] = read_sysreg_s(SYS_PMBSR_EL1);
> > > +	write_sysreg_s(0, SYS_PMBSR_EL1);
> > > +
> > > +	isb();
> > > +
> > > +	ctxt->sys_regs[PMSICR_EL1] = read_sysreg_s(SYS_PMSICR_EL1);
> > > +	ctxt->sys_regs[PMSIRR_EL1] = read_sysreg_s(SYS_PMSIRR_EL1);
> > > +	ctxt->sys_regs[PMSFCR_EL1] = read_sysreg_s(SYS_PMSFCR_EL1);
> > > +	ctxt->sys_regs[PMSEVFR_EL1] = read_sysreg_s(SYS_PMSEVFR_EL1);
> > > +	ctxt->sys_regs[PMSLATFR_EL1] = read_sysreg_s(SYS_PMSLATFR_EL1);
> > > +	ctxt->sys_regs[PMBPTR_EL1] = read_sysreg_s(SYS_PMBPTR_EL1);
> > >  }
> > >  
> > > -static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
> > > +static void __hyp_text
> > > +__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
> > >  {
> > >  	if (!ctxt->sys_regs[PMSCR_EL1])
> > >  		return;
> > > @@ -126,6 +151,16 @@ static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
> > >  	isb();
> > >  
> > >  	/* Re-enable data generation */
> > > +	if (full_ctxt) {
> > > +		write_sysreg_s(ctxt->sys_regs[PMBPTR_EL1], SYS_PMBPTR_EL1);
> > > +		write_sysreg_s(ctxt->sys_regs[PMBLIMITR_EL1], SYS_PMBLIMITR_EL1);
> > > +		write_sysreg_s(ctxt->sys_regs[PMSFCR_EL1], SYS_PMSFCR_EL1);
> > > +		write_sysreg_s(ctxt->sys_regs[PMSEVFR_EL1], SYS_PMSEVFR_EL1);
> > > +		write_sysreg_s(ctxt->sys_regs[PMSLATFR_EL1], SYS_PMSLATFR_EL1);
> > > +		write_sysreg_s(ctxt->sys_regs[PMSIRR_EL1], SYS_PMSIRR_EL1);
> > > +		write_sysreg_s(ctxt->sys_regs[PMSICR_EL1], SYS_PMSICR_EL1);
> > > +		write_sysreg_s(ctxt->sys_regs[PMBSR_EL1], SYS_PMBSR_EL1);
> > > +	}
> > >  	write_sysreg_el1(ctxt->sys_regs[PMSCR_EL1], SYS_PMSCR);
> > >  }
> > >  
> > > @@ -198,7 +233,7 @@ void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
> > >  	guest_ctxt = &vcpu->arch.ctxt;
> > >  
> > >  	if (!has_vhe())
> > > -		__debug_restore_spe_nvhe(host_ctxt);
> > > +		__debug_restore_spe_nvhe(host_ctxt, false);
> > >  
> > >  	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
> > >  		return;
> > > @@ -222,7 +257,7 @@ void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
> > >  
> > >  	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> > >  	if (!has_vhe())
> > > -		__debug_save_spe_nvhe(host_ctxt);
> > > +		__debug_save_spe_nvhe(host_ctxt, false);
> > >  }
> > >  
> > >  void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
> > 
> > So all of this is for non-VHE. What happens in the VHE case?
> 
> By the end of the series this ends up in __debug_save_host_context, which is
> called for both VHE/nVHE - on the re-spin I'll make it not look so confusing.
> 
> Thanks,
> 
> Andrew Murray
> 
> > 
> > M.
> > -- 
> > Jazz is not dead. It just smells funny...
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm