2023-09-16 01:26:57

by Oliver Upton

Subject: Re: [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N

+cc will, rutland

Hi Raghu,

Please make sure you cc the right folks for changes that poke multiple
subsystems.

The diff looks OK, but I'm somewhat dubious of the need for this change
in the context of what you're trying to accomplish for KVM. I'd prefer
we either leave the existing definition/usage intact or rework *all* of
the PMUv3 masks to be of the shifted variety.
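To make the distinction concrete, here is a minimal, userspace-only sketch of
the two styles (the kernel's real FIELD_GET() lives in <linux/bitfield.h>; the
stand-in macro below only approximates it for contiguous masks and is just an
illustration, not kernel code):

  #include <stdint.h>
  #include <stdio.h>
  #include <assert.h>

  #define ARMV8_PMU_PMCR_N_SHIFT  11
  #define ARMV8_PMU_PMCR_N_MASK   0x1f                              /* unshifted mask */
  #define ARMV8_PMU_PMCR_N        (0x1f << ARMV8_PMU_PMCR_N_SHIFT)  /* shifted mask   */

  /* Toy stand-in for the kernel's FIELD_GET(); works for contiguous masks. */
  #define FIELD_GET(mask, reg)    (((reg) & (mask)) / ((mask) & ~((mask) - 1)))

  int main(void)
  {
          uint64_t pmcr = (0x1f << 11) | 0x41;    /* made-up PMCR_EL0 value, N = 31 */

          /* Old, open-coded style: shift first, then apply the unshifted mask. */
          uint64_t n_old = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;

          /* New style: a shifted mask plus a bitfield accessor. */
          uint64_t n_new = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);

          assert(n_old == n_new);
          printf("PMCR_EL0.N = %llu\n", (unsigned long long)n_new);
          return 0;
  }

The shifted-mask form keeps the field definition in one place, which is what
makes FIELD_GET()/FIELD_PREP() usable at the call sites; mixing it with the
other unshifted PMUv3 masks is the inconsistency I'd rather avoid.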

On Thu, Aug 17, 2023 at 12:30:22AM +0000, Raghavendra Rao Ananta wrote:
> From: Reiji Watanabe <[email protected]>
>
> Some code extracts PMCR_EL0.N using ARMV8_PMU_PMCR_N_SHIFT and
> ARMV8_PMU_PMCR_N_MASK. Define ARMV8_PMU_PMCR_N (0x1f << 11),
> and simplify those codes using FIELD_GET() and/or ARMV8_PMU_PMCR_N.
> The following patches will also use these macros to extract PMCR_EL0.N.

Changelog is a bit wordy:

Define a shifted mask for accessing PMCR_EL0.N amenable to the use of
bitfield accessors and convert the existing, open-coded mask shifts to
the new definition.

> No functional change intended.
>
> Signed-off-by: Reiji Watanabe <[email protected]>
> Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> ---
> arch/arm64/kvm/pmu-emul.c | 3 +--
> arch/arm64/kvm/sys_regs.c | 7 +++----
> drivers/perf/arm_pmuv3.c | 3 +--
> include/linux/perf/arm_pmuv3.h | 2 +-
> 4 files changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index b87822024828a..f7b5fa16341ad 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -245,9 +245,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
>
> u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> {
> - u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
> + u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
>
> - val &= ARMV8_PMU_PMCR_N_MASK;
> if (val == 0)
> return BIT(ARMV8_PMU_CYCLE_IDX);
> else
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 39e9248c935e7..30108f09e088b 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -750,7 +750,7 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> return 0;
>
> /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> - pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> + pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
> if (!kvm_supports_32bit_el0())
> pmcr |= ARMV8_PMU_PMCR_LC;
>
> @@ -858,10 +858,9 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>
> static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
> {
> - u64 pmcr, val;
> + u64 val;
>
> - pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
> - val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
> + val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
> if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
> kvm_inject_undefined(vcpu);
> return false;
> diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
> index 08b3a1bf0ef62..7618b0adc0b8c 100644
> --- a/drivers/perf/arm_pmuv3.c
> +++ b/drivers/perf/arm_pmuv3.c
> @@ -1128,8 +1128,7 @@ static void __armv8pmu_probe_pmu(void *info)
> probe->present = true;
>
> /* Read the nb of CNTx counters supported from PMNC */
> - cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
> - & ARMV8_PMU_PMCR_N_MASK;
> + cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
>
> /* Add the CPU cycles counter */
> cpu_pmu->num_events += 1;
> diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
> index e3899bd77f5cc..ecbcf3f93560c 100644
> --- a/include/linux/perf/arm_pmuv3.h
> +++ b/include/linux/perf/arm_pmuv3.h
> @@ -216,7 +216,7 @@
> #define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
> #define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
> #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
> -#define ARMV8_PMU_PMCR_N_MASK 0x1f
> +#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT)
> #define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
>
> /*
> --
> 2.41.0.694.ge786442a9b-goog
>
>

--
Thanks,
Oliver


2023-09-18 17:58:36

by Raghavendra Rao Ananta

Subject: Re: [PATCH v5 05/12] KVM: arm64: PMU: Simplify extracting PMCR_EL0.N

Hi Oliver,

On Fri, Sep 15, 2023 at 12:56 PM Oliver Upton <[email protected]> wrote:
>
> +cc will, rutland
>
> Hi Raghu,
>
> Please make sure you cc the right folks for changes that poke multiple
> subsystems.
>
> The diff looks OK, but I'm somewhat dubious of the need for this change
> in the context of what you're trying to accomplish for KVM. I'd prefer
> we either leave the existing definition/usage intact or rework *all* of
> the PMUv3 masks to be of the shifted variety.
>
I believe the original intention was to make accessing the PMCR.N
field simpler. However, if you feel it's redundant and unnecessary
for this series, I'm happy to drop the patch.

Thank you.
Raghavendra
> On Thu, Aug 17, 2023 at 12:30:22AM +0000, Raghavendra Rao Ananta wrote:
> > From: Reiji Watanabe <[email protected]>
> >
> > Some code extracts PMCR_EL0.N using ARMV8_PMU_PMCR_N_SHIFT and
> > ARMV8_PMU_PMCR_N_MASK. Define ARMV8_PMU_PMCR_N (0x1f << 11),
> > and simplify those codes using FIELD_GET() and/or ARMV8_PMU_PMCR_N.
> > The following patches will also use these macros to extract PMCR_EL0.N.
>
> Changelog is a bit wordy:
>
> Define a shifted mask for accessing PMCR_EL0.N amenable to the use of
> bitfield accessors and convert the existing, open-coded mask shifts to
> the new definition.
>
> > No functional change intended.
> >
> > Signed-off-by: Reiji Watanabe <[email protected]>
> > Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> > ---
> > arch/arm64/kvm/pmu-emul.c | 3 +--
> > arch/arm64/kvm/sys_regs.c | 7 +++----
> > drivers/perf/arm_pmuv3.c | 3 +--
> > include/linux/perf/arm_pmuv3.h | 2 +-
> > 4 files changed, 6 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index b87822024828a..f7b5fa16341ad 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -245,9 +245,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
> >
> > u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> > {
> > - u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
> > + u64 val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
> >
> > - val &= ARMV8_PMU_PMCR_N_MASK;
> > if (val == 0)
> > return BIT(ARMV8_PMU_CYCLE_IDX);
> > else
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 39e9248c935e7..30108f09e088b 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -750,7 +750,7 @@ static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > return 0;
> >
> > /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> > - pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > + pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N;
> > if (!kvm_supports_32bit_el0())
> > pmcr |= ARMV8_PMU_PMCR_LC;
> >
> > @@ -858,10 +858,9 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >
> > static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
> > {
> > - u64 pmcr, val;
> > + u64 val;
> >
> > - pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
> > - val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
> > + val = FIELD_GET(ARMV8_PMU_PMCR_N, __vcpu_sys_reg(vcpu, PMCR_EL0));
> > if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
> > kvm_inject_undefined(vcpu);
> > return false;
> > diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
> > index 08b3a1bf0ef62..7618b0adc0b8c 100644
> > --- a/drivers/perf/arm_pmuv3.c
> > +++ b/drivers/perf/arm_pmuv3.c
> > @@ -1128,8 +1128,7 @@ static void __armv8pmu_probe_pmu(void *info)
> > probe->present = true;
> >
> > /* Read the nb of CNTx counters supported from PMNC */
> > - cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
> > - & ARMV8_PMU_PMCR_N_MASK;
> > + cpu_pmu->num_events = FIELD_GET(ARMV8_PMU_PMCR_N, armv8pmu_pmcr_read());
> >
> > /* Add the CPU cycles counter */
> > cpu_pmu->num_events += 1;
> > diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
> > index e3899bd77f5cc..ecbcf3f93560c 100644
> > --- a/include/linux/perf/arm_pmuv3.h
> > +++ b/include/linux/perf/arm_pmuv3.h
> > @@ -216,7 +216,7 @@
> > #define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
> > #define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
> > #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
> > -#define ARMV8_PMU_PMCR_N_MASK 0x1f
> > +#define ARMV8_PMU_PMCR_N (0x1f << ARMV8_PMU_PMCR_N_SHIFT)
> > #define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
> >
> > /*
> > --
> > 2.41.0.694.ge786442a9b-goog
> >
> >
>
> --
> Thanks,
> Oliver