Subject: Re: [PATCH v3 11/14] RISC-V: KVM: Implement trap & emulate for hpmcounters
From: Atish Patra
Date: Tue, 31 Jan 2023 14:46:36 -0800
To: Anup Patel
Cc: Atish Patra, linux-kernel@vger.kernel.org, Andrew Jones, Guo Ren, Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt, Paul Walmsley, Sergey Matyukevich, Will Deacon
References: <20230127182558.2416400-1-atishp@rivosinc.com> <20230127182558.2416400-12-atishp@rivosinc.com>

On Sun, Jan 29, 2023 at 4:44 AM Anup Patel wrote:
>
> On Fri, Jan 27, 2023 at 11:56 PM Atish Patra wrote:
> >
> > As the KVM guests only see the virtual PMU counters, all hpmcounter
> > access should trap and KVM emulates the read access on behalf of guests.
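For anyone following along: a guest read of cycle/instret/hpmcounterN traps,
and the csr_funcs[] table this patch populates (see below) routes the trapped
CSR number to the PMU handler. The dispatch in vcpu_insn.c conceptually
amounts to the following (rough sketch with illustrative names, not the exact
upstream loop):

        /*
         * Find the csr_funcs[] entry whose [base, base + count) range
         * covers the trapped CSR number and invoke its handler; fall
         * back to an illegal-instruction trap when nothing matches.
         */
        const struct csr_func *tcfn;
        int i, rc = KVM_INSN_ILLEGAL_TRAP;

        for (i = 0; i < ARRAY_SIZE(csr_funcs); i++) {
                tcfn = &csr_funcs[i];
                if (tcfn->func && tcfn->base <= csr_num &&
                    csr_num < tcfn->base + tcfn->count) {
                        rc = tcfn->func(vcpu, csr_num, val, new_val, wr_mask);
                        break;
                }
        }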
> >
> > Reviewed-by: Andrew Jones
> > Signed-off-by: Atish Patra
> > ---
> >  arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 ++++++++++
> >  arch/riscv/kvm/vcpu_insn.c            |  4 ++-
> >  arch/riscv/kvm/vcpu_pmu.c             | 45 ++++++++++++++++++++++++++-
> >  3 files changed, 63 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > index 3f43a43..022d45d 100644
> > --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> > @@ -43,6 +43,19 @@ struct kvm_pmu {
> >  #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
> >  #define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu))
> >
> > +#if defined(CONFIG_32BIT)
> > +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> > +{ .base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \
> > +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
> > +#else
> > +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> > +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
> > +#endif
> > +
> > +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > +                               unsigned long *val, unsigned long new_val,
> > +                               unsigned long wr_mask);
> > +
> >  int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata);
> >  int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
> >                                 struct kvm_vcpu_sbi_ext_data *edata);
> > @@ -65,6 +78,9 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
> >  #else
> >  struct kvm_pmu {
> >  };
> > +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> > +{ .base = 0, .count = 0, .func = NULL },
> > +
>
> Redundant newline here.
>

Fixed.

> >
> >  static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
> >  {
> > diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
> > index 0bb5276..f689337 100644
> > --- a/arch/riscv/kvm/vcpu_insn.c
> > +++ b/arch/riscv/kvm/vcpu_insn.c
> > @@ -213,7 +213,9 @@ struct csr_func {
> >                     unsigned long wr_mask);
> >  };
> >
> > -static const struct csr_func csr_funcs[] = { };
> > +static const struct csr_func csr_funcs[] = {
> > +       KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
> > +};
> >
> >  /**
> >   * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space
> > diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
> > index 7713927..894053a 100644
> > --- a/arch/riscv/kvm/vcpu_pmu.c
> > +++ b/arch/riscv/kvm/vcpu_pmu.c
> > @@ -17,6 +17,44 @@
> >
> >  #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
> >
> > +static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
> > +                       unsigned long *out_val)
> > +{
> > +       struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > +       struct kvm_pmc *pmc;
> > +       u64 enabled, running;
> > +
> > +       pmc = &kvpmu->pmc[cidx];
> > +       if (!pmc->perf_event)
> > +               return -EINVAL;
> > +
> > +       pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running);
> > +       *out_val = pmc->counter_val;
> > +
> > +       return 0;
> > +}
> > +
> > +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > +                               unsigned long *val, unsigned long new_val,
> > +                               unsigned long wr_mask)
> > +{
> > +       struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > +       int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC;
> > +
> > +       if (!kvpmu || !kvpmu->init_done)
> > +               return KVM_INSN_EXIT_TO_USER_SPACE;
>
> As discussed previously, this should be KVM_INSN_ILLEGAL_TRAP.
>

Done.
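In other words, the v4 guard should read roughly like this (untested sketch):

        /*
         * An uninitialized vPMU now surfaces as an illegal-instruction
         * trap to the guest instead of an exit to user space.
         */
        if (!kvpmu || !kvpmu->init_done)
                return KVM_INSN_ILLEGAL_TRAP;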
> > +
> > +       if (wr_mask)
> > +               return KVM_INSN_ILLEGAL_TRAP;
> > +
> > +       cidx = csr_num - CSR_CYCLE;
> > +
> > +       if (pmu_ctr_read(vcpu, cidx, val) < 0)
> > +               return KVM_INSN_EXIT_TO_USER_SPACE;
>
> Same as above.
>

Done.

> > +
> > +       return ret;
> > +}
> > +
> >  int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata)
> >  {
> >         struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> > @@ -69,7 +107,12 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
> >  int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
> >                                 struct kvm_vcpu_sbi_ext_data *edata)
> >  {
> > -       /* TODO */
> > +       int ret;
> > +
> > +       ret = pmu_ctr_read(vcpu, cidx, &edata->out_val);
> > +       if (ret == -EINVAL)
> > +               edata->err_val = SBI_ERR_INVALID_PARAM;
> > +
> >         return 0;
> >  }
> >
> > --
> > 2.25.1
> >
>
> Regards,
> Anup

--
Regards,
Atish
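P.S. With both KVM_INSN_ILLEGAL_TRAP changes above folded in, the CSR read
handler should end up looking roughly like this (untested sketch, not the
actual v4 patch):

int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
                                unsigned long *val, unsigned long new_val,
                                unsigned long wr_mask)
{
        struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
        int cidx;

        /* No vPMU for this guest: reflect an illegal-instruction trap. */
        if (!kvpmu || !kvpmu->init_done)
                return KVM_INSN_ILLEGAL_TRAP;

        /* The counter CSRs are read-only from the guest's point of view. */
        if (wr_mask)
                return KVM_INSN_ILLEGAL_TRAP;

        cidx = csr_num - CSR_CYCLE;

        /* Counters without a backing perf event are also illegal to read. */
        if (pmu_ctr_read(vcpu, cidx, val) < 0)
                return KVM_INSN_ILLEGAL_TRAP;

        return KVM_INSN_CONTINUE_NEXT_SEPC;
}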