From: Reiji Watanabe <[email protected]>
The following patches will use the number of counters from the arm_pmu
to set PMCR.N for the guest during vCPU reset. However, since the guest
is not associated with any arm_pmu until userspace configures the vPMU
device attributes, and a reset can happen before this event, set the
default PMU for the guest and check kvm_arm_support_pmu_v3() just
before doing the reset.
No functional change intended.
Signed-off-by: Reiji Watanabe <[email protected]>
Signed-off-by: Raghavendra Rao Ananta <[email protected]>
---
arch/arm64/kvm/pmu-emul.c | 12 ++----------
arch/arm64/kvm/reset.c | 18 +++++++++++++-----
include/kvm/arm_pmu.h | 6 ++++++
3 files changed, 21 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index fb9817bdfeb57..998e1bbd5310d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -717,8 +717,7 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
* It is still necessary to get a valid cpu, though, to probe for the
* default PMU instance as userspace is not required to specify a PMU
* type. In order to uphold the preexisting behavior KVM selects the
- * PMU instance for the core where the first call to the
- * KVM_ARM_VCPU_PMU_V3_CTRL attribute group occurs. A dependent use case
+ * PMU instance for the core during the vcpu reset. A dependent use case
* would be a user with disdain of all things big.LITTLE that affines
* the VMM to a particular cluster of cores.
*
@@ -893,7 +892,7 @@ static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
* where vCPUs can be scheduled on any core but the guest
* counters could stop working.
*/
-static int kvm_arm_set_default_pmu(struct kvm *kvm)
+int kvm_arm_set_default_pmu(struct kvm *kvm)
{
struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
@@ -946,13 +945,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
if (vcpu->arch.pmu.created)
return -EBUSY;
- if (!kvm->arch.arm_pmu) {
- int ret = kvm_arm_set_default_pmu(kvm);
-
- if (ret)
- return ret;
- }
-
switch (attr->attr) {
case KVM_ARM_VCPU_PMU_V3_IRQ: {
int __user *uaddr = (int __user *)(long)attr->addr;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 7a65a35ee4ac4..6912832b44b6d 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -206,6 +206,7 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
*/
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
+ struct kvm *kvm = vcpu->kvm;
struct vcpu_reset_state reset_state;
int ret;
bool loaded;
@@ -216,6 +217,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
vcpu->arch.reset_state.reset = false;
spin_unlock(&vcpu->arch.mp_state_lock);
+ if (kvm_vcpu_has_pmu(vcpu)) {
+ if (!kvm_arm_support_pmu_v3())
+ return -EINVAL;
+
+ /*
+ * When the vCPU has a PMU, but no PMU is set for the guest
+ * yet, set the default one.
+ */
+ if (unlikely(!kvm->arch.arm_pmu) && kvm_arm_set_default_pmu(kvm))
+ return -EINVAL;
+ }
+
/* Reset PMU outside of the non-preemptible section */
kvm_pmu_vcpu_reset(vcpu);
@@ -255,11 +268,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
else
pstate = VCPU_RESET_PSTATE_EL1;
- if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
- ret = -EINVAL;
- goto out;
- }
-
/* Reset core registers */
memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 31029f4f7be85..b80c75d80886b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -101,6 +101,7 @@ void kvm_vcpu_pmu_resync_el0(void);
})
u8 kvm_arm_pmu_get_pmuver_limit(void);
+int kvm_arm_set_default_pmu(struct kvm *kvm);
#else
struct kvm_pmu {
@@ -174,6 +175,11 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
}
static inline void kvm_vcpu_pmu_resync_el0(void) {}
+static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
+{
+ return -ENODEV;
+}
+
#endif
#endif
--
2.42.0.582.g8ccd20d70d-goog
Hi Raghu,
On Tue, Sep 26, 2023 at 11:39:59PM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <[email protected]>
>
> The following patches will use the number of counters from the arm_pmu
> to set PMCR.N for the guest during vCPU reset. However, since the guest
> is not associated with any arm_pmu until userspace configures the vPMU
> device attributes, and a reset can happen before this event, set the
> default PMU for the guest and check kvm_arm_support_pmu_v3() just
> before doing the reset.
>
> No functional change intended.
I would argue there still is a functional change here, as PMU
initialization failure now shows up on a completely different ioctl for
userspace.
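To make that concrete, here is a minimal, hypothetical illustration of
where userspace would now see the failure (function name and setup are
made up, not from this series):

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Hypothetical helper: vcpu_fd is assumed to have been created with the
 * KVM_ARM_VCPU_PMU_V3 feature bit requested in init->features[].
 */
static int init_vcpu_with_pmu(int vcpu_fd, struct kvm_vcpu_init *init)
{
	/*
	 * With this patch, a PMU probe failure would surface here, from
	 * KVM_ARM_VCPU_INIT (which ends up calling kvm_reset_vcpu()),
	 * rather than later from KVM_SET_DEVICE_ATTR on the
	 * KVM_ARM_VCPU_PMU_V3_CTRL attribute group.
	 */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
}
```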
> @@ -216,6 +217,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> vcpu->arch.reset_state.reset = false;
> spin_unlock(&vcpu->arch.mp_state_lock);
>
> + if (kvm_vcpu_has_pmu(vcpu)) {
> + if (!kvm_arm_support_pmu_v3())
> + return -EINVAL;
> +
> + /*
> + * When the vCPU has a PMU, but no PMU is set for the guest
> + * yet, set the default one.
> + */
> + if (unlikely(!kvm->arch.arm_pmu) && kvm_arm_set_default_pmu(kvm))
> + return -EINVAL;
> + }
> +
Ah, this probably will not mix well with my recent change to get rid of
the return value altogether from kvm_reset_vcpu() [*]. I see two ways to
handle this:
- Add a separate helper responsible for one-time setup of the vCPU
called from KVM_ARM_VCPU_INIT which may fail.
- Add a check for !kvm->arch.arm_pmu to kvm_arm_pmu_v3_init().
No strong preference, though.
[*]: https://lore.kernel.org/r/[email protected]
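For what it's worth, a rough sketch of the first option could look
something like this (helper name is made up and this is untested; it
reuses kvm_arm_set_default_pmu() as exported by this patch):

```c
/*
 * Hypothetical one-time setup helper, called from the KVM_ARM_VCPU_INIT
 * path before kvm_reset_vcpu(), where a failure can still be returned
 * to userspace.
 */
static int kvm_vcpu_first_init_setup(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;

	if (!kvm_vcpu_has_pmu(vcpu))
		return 0;

	if (!kvm_arm_support_pmu_v3())
		return -EINVAL;

	/* Select the default PMU if userspace never picked one. */
	if (unlikely(!kvm->arch.arm_pmu))
		return kvm_arm_set_default_pmu(kvm);

	return 0;
}
```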
--
Thanks,
Oliver
Hi Oliver,
On Wed, Sep 27, 2023 at 1:02 AM Oliver Upton <[email protected]> wrote:
>
> Hi Raghu,
>
> On Tue, Sep 26, 2023 at 11:39:59PM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <[email protected]>
> >
> > The following patches will use the number of counters from the arm_pmu
> > to set PMCR.N for the guest during vCPU reset. However, since the guest
> > is not associated with any arm_pmu until userspace configures the vPMU
> > device attributes, and a reset can happen before this event, set the
> > default PMU for the guest and check kvm_arm_support_pmu_v3() just
> > before doing the reset.
> >
> > No functional change intended.
>
> I would argue there still is a functional change here, as PMU
> initialization failure now shows up on a completely different ioctl for
> userspace.
>
> > @@ -216,6 +217,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> > vcpu->arch.reset_state.reset = false;
> > spin_unlock(&vcpu->arch.mp_state_lock);
> >
> > + if (kvm_vcpu_has_pmu(vcpu)) {
> > + if (!kvm_arm_support_pmu_v3())
> > + return -EINVAL;
> > +
> > + /*
> > + * When the vCPU has a PMU, but no PMU is set for the guest
> > + * yet, set the default one.
> > + */
> > + if (unlikely(!kvm->arch.arm_pmu) && kvm_arm_set_default_pmu(kvm))
> > + return -EINVAL;
> > + }
> > +
>
> Ah, this probably will not mix well with my recent change to get rid of
> the return value altogether from kvm_reset_vcpu() [*]. I see two ways to
> handle this:
>
> - Add a separate helper responsible for one-time setup of the vCPU
> called from KVM_ARM_VCPU_INIT which may fail.
>
> - Add a check for !kvm->arch.arm_pmu to kvm_arm_pmu_v3_init().
>
> No strong preference, though.
>
Thanks for the pointer. I think adding it in kvm_arm_pmu_v3_init() may
not be feasible, as the reset (reset_pmcr()) may happen before this
init, in which case we would end up setting 0 as PMCR.N for the guest.
I'll explore the other option though.
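Just to spell out the ordering concern with a sketch (illustrative only,
not the actual follow-up patch; it assumes PMCR.N would be derived from
arm_pmu->num_events):

```c
/*
 * If reset_pmcr() derived PMCR.N roughly like this, running the reset
 * before kvm->arch.arm_pmu is populated would leave N at 0.
 */
static u8 guest_pmcr_n(struct kvm *kvm)
{
	struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;

	if (!arm_pmu)
		return 0;	/* no PMU selected for the guest yet */

	/* Assumes num_events includes the cycle counter, which PMCR.N
	 * does not count. */
	return arm_pmu->num_events - 1;
}
```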
Thank you.
Raghavendra
> [*]: https://lore.kernel.org/r/[email protected]
>
> --
> Thanks,
> Oliver