From: Raghavendra Rao Ananta
Date: Mon, 23 Oct 2023 10:28:21 -0700
Subject: Re: [PATCH v8 05/13] KVM: arm64: Add {get,set}_user for PM{C,I}NTEN{SET,CLR}, PMOVS{SET,CLR}
To: Marc Zyngier
Cc: Oliver Upton, Alexandru Elisei, James Morse, Suzuki K Poulose, Paolo Bonzini, Zenghui Yu, Shaoqin Huang, Jing Zhang, Reiji Watanabe, Colton Lewis, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
References: <20231020214053.2144305-1-rananta@google.com> <20231020214053.2144305-6-rananta@google.com> <86zg094j1o.wl-maz@kernel.org>
In-Reply-To: <86zg094j1o.wl-maz@kernel.org>
On Mon, Oct 23, 2023 at 5:31 AM Marc Zyngier wrote:
>
> On Fri, 20 Oct 2023 22:40:45 +0100,
> Raghavendra Rao Ananta wrote:
> >
> > For unimplemented counters, the bits in PM{C,I}NTEN{SET,CLR} and
> > PMOVS{SET,CLR} registers are expected to be RAZ. To honor this,
> > explicitly implement the {get,set}_user functions for these
> > registers to mask out unimplemented counters for userspace reads
> > and writes.
> >
> > Signed-off-by: Raghavendra Rao Ananta
> > ---
> >  arch/arm64/kvm/sys_regs.c | 91 ++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 85 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index faf97878dfbbb..2e5d497596ef8 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -987,6 +987,45 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >  	return true;
> >  }
> >
> > +static void set_pmreg_for_valid_counters(struct kvm_vcpu *vcpu,
> > +					 u64 reg, u64 val, bool set)
> > +{
> > +	struct kvm *kvm = vcpu->kvm;
> > +
> > +	mutex_lock(&kvm->arch.config_lock);
> > +
> > +	/* Make the register immutable once the VM has started running */
> > +	if (kvm_vm_has_ran_once(kvm)) {
> > +		mutex_unlock(&kvm->arch.config_lock);
> > +		return;
> > +	}
> > +
> > +	val &= kvm_pmu_valid_counter_mask(vcpu);
> > +	mutex_unlock(&kvm->arch.config_lock);
> > +
> > +	if (set)
> > +		__vcpu_sys_reg(vcpu, reg) |= val;
> > +	else
> > +		__vcpu_sys_reg(vcpu, reg) &= ~val;
> > +}
> > +
> > +static int get_pmcnten(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > +		       u64 *val)
> > +{
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +
> > +	*val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> > +	return 0;
> > +}
> > +
> > +static int set_pmcnten(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> > +		       u64 val)
> > +{
> > +	/* r->Op2 & 0x1: true for PMCNTENSET_EL0, else PMCNTENCLR_EL0 */
> > +	set_pmreg_for_valid_counters(vcpu, PMCNTENSET_EL0, val, r->Op2 & 0x1);
> > +	return 0;
> > +}
>
> Huh, this is really ugly. Why the explosion of pointless helpers when
> the whole design of the sysreg infrastructure is to have *common* helpers
> for registers that behave the same way?
>
> I'd expect something like the hack below instead.
>
> 	M.
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index a2c5f210b3d6..8f560a2496f2 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -987,42 +987,46 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>
> -static void set_pmreg_for_valid_counters(struct kvm_vcpu *vcpu,
> -					 u64 reg, u64 val, bool set)
> +static int set_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 val)
>  {
>  	struct kvm *kvm = vcpu->kvm;
> +	bool set;
>
>  	mutex_lock(&kvm->arch.config_lock);
>
>  	/* Make the register immutable once the VM has started running */
>  	if (kvm_vm_has_ran_once(kvm)) {
>  		mutex_unlock(&kvm->arch.config_lock);
> -		return;
> +		return 0;
>  	}
>
>  	val &= kvm_pmu_valid_counter_mask(vcpu);
>  	mutex_unlock(&kvm->arch.config_lock);
>
> +	switch(r->reg) {
> +	case PMOVSSET_EL0:
> +		/* CRm[1] being set indicates a SET register, and CLR otherwise */
> +		set = r->CRm & 2;
> +		break;
> +	default:
> +		/* Op2[0] being set indicates a SET register, and CLR otherwise */
> +		set = r->Op2 & 1;
> +		break;
> +	}
> +
>  	if (set)
> -		__vcpu_sys_reg(vcpu, reg) |= val;
> +		__vcpu_sys_reg(vcpu, r->reg) |= val;
>  	else
> -		__vcpu_sys_reg(vcpu, reg) &= ~val;
> -}
> -
> -static int get_pmcnten(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> -		       u64 *val)
> -{
> -	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +		__vcpu_sys_reg(vcpu, r->reg) &= ~val;
>
> -	*val = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
>  	return 0;
>  }
>
> -static int set_pmcnten(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> -		       u64 val)
> +static int get_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 *val)
>  {
> -	/* r->Op2 & 0x1: true for PMCNTENSET_EL0, else PMCNTENCLR_EL0 */
> -	set_pmreg_for_valid_counters(vcpu, PMCNTENSET_EL0, val, r->Op2 & 0x1);
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +
> +	*val = __vcpu_sys_reg(vcpu, r->reg) & mask;
>
>  	return 0;
>  }
>
> @@ -1054,23 +1058,6 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>
> -static int get_pminten(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> -		       u64 *val)
> -{
> -	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> -
> -	*val = __vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
> -	return 0;
> -}
> -
> -static int set_pminten(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> -		       u64 val)
> -{
> -	/* r->Op2 & 0x1: true for PMINTENSET_EL1, else PMINTENCLR_EL1 */
> -	set_pmreg_for_valid_counters(vcpu, PMINTENSET_EL1, val, r->Op2 & 0x1);
> -	return 0;
> -}
> -
>  static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  			   const struct sys_reg_desc *r)
>  {
> @@ -1095,23 +1082,6 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>
> -static int set_pmovs(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> -		     u64 val)
> -{
> -	/* r->CRm & 0x2: true for PMOVSSET_EL0, else PMOVSCLR_EL0 */
> -	set_pmreg_for_valid_counters(vcpu, PMOVSSET_EL0, val, r->CRm & 0x2);
> -	return 0;
> -}
> -
> -static int get_pmovs(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> -		     u64 *val)
> -{
> -	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> -
> -	*val = __vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
> -	return 0;
> -}
> -
>  static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  			 const struct sys_reg_desc *r)
>  {
> @@ -2311,10 +2281,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>
>  	{ PMU_SYS_REG(PMINTENSET_EL1),
>  	  .access = access_pminten, .reg = PMINTENSET_EL1,
> -	  .get_user = get_pminten, .set_user = set_pminten },
> +	  .get_user = get_pmreg, .set_user = set_pmreg },
>  	{ PMU_SYS_REG(PMINTENCLR_EL1),
>  	  .access = access_pminten, .reg = PMINTENSET_EL1,
> -	  .get_user = get_pminten, .set_user = set_pminten },
> +	  .get_user = get_pmreg, .set_user = set_pmreg },
>  	{ SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi },
>
>  	{ SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 },
> @@ -2366,13 +2336,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
>  	{ PMU_SYS_REG(PMCNTENSET_EL0),
>  	  .access = access_pmcnten, .reg = PMCNTENSET_EL0,
> -	  .get_user = get_pmcnten, .set_user = set_pmcnten },
> +	  .get_user = get_pmreg, .set_user = set_pmreg },
>  	{ PMU_SYS_REG(PMCNTENCLR_EL0),
>  	  .access = access_pmcnten, .reg = PMCNTENSET_EL0,
> -	  .get_user = get_pmcnten, .set_user = set_pmcnten },
> +	  .get_user = get_pmreg, .set_user = set_pmreg },
>  	{ PMU_SYS_REG(PMOVSCLR_EL0),
>  	  .access = access_pmovs, .reg = PMOVSSET_EL0,
> -	  .get_user = get_pmovs, .set_user = set_pmovs },
> +	  .get_user = get_pmreg, .set_user = set_pmreg },
>  	/*
>  	 * PMSWINC_EL0 is exposed to userspace as RAZ/WI, as it was
>  	 * previously (and pointlessly) advertised in the past...
> @@ -2401,7 +2371,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
>  	{ PMU_SYS_REG(PMOVSSET_EL0),
>  	  .access = access_pmovs, .reg = PMOVSSET_EL0,
> -	  .get_user = get_pmovs, .set_user = set_pmovs },
> +	  .get_user = get_pmreg, .set_user = set_pmreg },
>
>  	{ SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
>  	{ SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 },
>

Thanks for the suggestion. I'll consider this in the next iteration.

- Raghavendra