From: Anup Patel
Date: Sun, 29 Jan 2023 18:14:18 +0530
Subject: Re: [PATCH v3 11/14] RISC-V: KVM: Implement trap & emulate for hpmcounters
To: Atish Patra
Cc: linux-kernel@vger.kernel.org, Andrew Jones, Atish Patra, Guo Ren,
    Heiko Stuebner, kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    linux-riscv@lists.infradead.org, Mark Rutland, Palmer Dabbelt,
    Paul Walmsley, Sergey Matyukevich, Will Deacon
In-Reply-To: <20230127182558.2416400-12-atishp@rivosinc.com>
References: <20230127182558.2416400-1-atishp@rivosinc.com>
            <20230127182558.2416400-12-atishp@rivosinc.com>

On Fri, Jan 27, 2023 at 11:56 PM Atish Patra wrote:
>
> As the KVM guests only see the virtual PMU counters, all hpmcounter
> access should trap and KVM emulates the read access on behalf of guests.
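
The trap & emulate flow itself looks fine to me. For anyone following
along: a guest csrr of cycle/instret/hpmcounterN now takes a
virtual-instruction trap, and the CSR emulation in vcpu_insn.c picks a
handler from the csr_funcs[] table this patch populates, by matching the
trapped CSR number against each entry's [base, base + count) range.
Roughly like this (an illustrative sketch with a made-up helper name,
not the exact in-tree lookup):

static const struct csr_func *match_csr_func(unsigned int csr_num)
{
	const struct csr_func *tcfn;
	int i;

	for (i = 0; i < ARRAY_SIZE(csr_funcs); i++) {
		tcfn = &csr_funcs[i];
		/* Each entry claims the CSR range [base, base + count) */
		if (tcfn->base <= csr_num &&
		    csr_num < tcfn->base + tcfn->count)
			return tcfn;
	}

	return NULL;	/* unhandled CSR */
}

With KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS expanded, all of these counter
CSRs land in kvm_riscv_vcpu_pmu_read_hpm().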
>
> Reviewed-by: Andrew Jones
> Signed-off-by: Atish Patra
> ---
>  arch/riscv/include/asm/kvm_vcpu_pmu.h | 16 ++++++++++
>  arch/riscv/kvm/vcpu_insn.c            |  4 ++-
>  arch/riscv/kvm/vcpu_pmu.c             | 45 ++++++++++++++++++++++++++-
>  3 files changed, 63 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/include/asm/kvm_vcpu_pmu.h b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> index 3f43a43..022d45d 100644
> --- a/arch/riscv/include/asm/kvm_vcpu_pmu.h
> +++ b/arch/riscv/include/asm/kvm_vcpu_pmu.h
> @@ -43,6 +43,19 @@ struct kvm_pmu {
>  #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
>  #define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu))
>
> +#if defined(CONFIG_32BIT)
> +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> +{ .base = CSR_CYCLEH, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm }, \
> +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
> +#else
> +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> +{ .base = CSR_CYCLE, .count = 31, .func = kvm_riscv_vcpu_pmu_read_hpm },
> +#endif
> +
> +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
> +				unsigned long *val, unsigned long new_val,
> +				unsigned long wr_mask);
> +
>  int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata);
>  int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
>  				struct kvm_vcpu_sbi_ext_data *edata);
> @@ -65,6 +78,9 @@ void kvm_riscv_vcpu_pmu_reset(struct kvm_vcpu *vcpu);
>  #else
>  struct kvm_pmu {
>  };
> +#define KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS \
> +{ .base = 0, .count = 0, .func = NULL },
> +

Redundant newline here.

>
>  static inline int kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
>  {
> diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
> index 0bb5276..f689337 100644
> --- a/arch/riscv/kvm/vcpu_insn.c
> +++ b/arch/riscv/kvm/vcpu_insn.c
> @@ -213,7 +213,9 @@ struct csr_func {
>  		    unsigned long wr_mask);
>  };
>
> -static const struct csr_func csr_funcs[] = { };
> +static const struct csr_func csr_funcs[] = {
> +	KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
> +};
>
>  /**
>   * kvm_riscv_vcpu_csr_return -- Handle CSR read/write after user space
> diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
> index 7713927..894053a 100644
> --- a/arch/riscv/kvm/vcpu_pmu.c
> +++ b/arch/riscv/kvm/vcpu_pmu.c
> @@ -17,6 +17,44 @@
>
>  #define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
>
> +static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
> +			unsigned long *out_val)
> +{
> +	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> +	struct kvm_pmc *pmc;
> +	u64 enabled, running;
> +
> +	pmc = &kvpmu->pmc[cidx];
> +	if (!pmc->perf_event)
> +		return -EINVAL;
> +
> +	pmc->counter_val += perf_event_read_value(pmc->perf_event, &enabled, &running);
> +	*out_val = pmc->counter_val;
> +
> +	return 0;
> +}
> +
> +int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
> +				unsigned long *val, unsigned long new_val,
> +				unsigned long wr_mask)
> +{
> +	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> +	int cidx, ret = KVM_INSN_CONTINUE_NEXT_SEPC;
> +
> +	if (!kvpmu || !kvpmu->init_done)
> +		return KVM_INSN_EXIT_TO_USER_SPACE;

As discussed previously, this should be KVM_INSN_ILLEGAL_TRAP.

> +
> +	if (wr_mask)
> +		return KVM_INSN_ILLEGAL_TRAP;
> +
> +	cidx = csr_num - CSR_CYCLE;
> +
> +	if (pmu_ctr_read(vcpu, cidx, val) < 0)
> +		return KVM_INSN_EXIT_TO_USER_SPACE;

Same as above.
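
To make the suggestion concrete, here is a rough, untested sketch of the
function with both early returns injecting an illegal-instruction trap
into the guest instead of exiting to user space (same names as the hunk
above; only the two return values change):

int kvm_riscv_vcpu_pmu_read_hpm(struct kvm_vcpu *vcpu, unsigned int csr_num,
				unsigned long *val, unsigned long new_val,
				unsigned long wr_mask)
{
	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
	int cidx;

	/* No virtual PMU for this guest: the CSR access is illegal */
	if (!kvpmu || !kvpmu->init_done)
		return KVM_INSN_ILLEGAL_TRAP;

	/* hpmcounters are read-only from the guest's point of view */
	if (wr_mask)
		return KVM_INSN_ILLEGAL_TRAP;

	cidx = csr_num - CSR_CYCLE;

	/* Unbacked/invalid counter: also an illegal access by the guest */
	if (pmu_ctr_read(vcpu, cidx, val) < 0)
		return KVM_INSN_ILLEGAL_TRAP;

	return KVM_INSN_CONTINUE_NEXT_SEPC;
}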
> +
> +	return ret;
> +}
> +
>  int kvm_riscv_vcpu_pmu_num_ctrs(struct kvm_vcpu *vcpu, struct kvm_vcpu_sbi_ext_data *edata)
>  {
>  	struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
> @@ -69,7 +107,12 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
>  int kvm_riscv_vcpu_pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
>  				struct kvm_vcpu_sbi_ext_data *edata)
>  {
> -	/* TODO */
> +	int ret;
> +
> +	ret = pmu_ctr_read(vcpu, cidx, &edata->out_val);
> +	if (ret == -EINVAL)
> +		edata->err_val = SBI_ERR_INVALID_PARAM;
> +
>  	return 0;
>  }
>
> --
> 2.25.1
>

Regards,
Anup
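
P.S. In case it helps anyone testing this: the guest-side access being
emulated is just an ordinary CSR read, e.g. something like the
hypothetical snippet below, which traps and is serviced by
kvm_riscv_vcpu_pmu_read_hpm() above:

static inline unsigned long guest_read_hpmcounter3(void)
{
	unsigned long val;

	/* The read traps out of the guest; KVM fills in the virtual
	 * counter value and resumes at the next instruction. */
	asm volatile("csrr %0, hpmcounter3" : "=r" (val));

	return val;
}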