Subject: Re: [PATCH 1/1] LoongArch: KVM: Add PMU support
To: Song Gao, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, zhaotianrui@loongson.cn, chenhuacai@kernel.org, kernel@xen0n.name, loongarch@lists.linux.dev
References: <20240410095812.2943706-1-gaosong@loongson.cn>
From: maobibo
Message-ID: <3642a669-1a6d-11ee-8c85-6584257f33d1@loongson.cn>
Date: Thu, 11 Apr 2024 11:10:28 +0800
In-Reply-To: <20240410095812.2943706-1-gaosong@loongson.cn>
On 2024/4/10 5:58 PM, Song Gao wrote:
> This patch adds KVM PMU support. We save/restore the host PMU CSR
> values to kvm_context. Guest entry saves the host PMU CSR and guest
> exit restores the host PMU CSR when the host supports PMU.

It would be better to have a description of the hw/sw PMU virtualization
method: is the PMU hardware shared with or separate from the host PMU,
or is it software-emulated?
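For context, the pass-through ownership scheme the patch implements can be modeled in plain C. This is only a userspace sketch under stated assumptions: the `hw_ctrl`/`hw_cntr` arrays stand in for the real perfctrl/perfcntr CSRs, and `own_pmu()`/`lose_pmu()` mirror the roles of kvm_own_pmu() and kvm_lose_pmu() in the patch; none of these names are kernel APIs.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define KVM_LARCH_PERF (0x1 << 5)   /* aux_inuse bit, as in the patch */
#define NR_PMU         4

/* Stand-ins for the hardware CSRs (hypothetical userspace model). */
static uint64_t hw_ctrl[NR_PMU], hw_cntr[NR_PMU];

struct vcpu_model {
	unsigned int aux_inuse;
	uint64_t host_ctrl[NR_PMU], host_cntr[NR_PMU];   /* like kvm_context */
	uint64_t guest_ctrl[NR_PMU], guest_cntr[NR_PMU]; /* like vcpu csr state */
};

/* Guest takes ownership: save host state, load guest state. */
static void own_pmu(struct vcpu_model *v)
{
	memcpy(v->host_ctrl, hw_ctrl, sizeof(hw_ctrl));
	memcpy(v->host_cntr, hw_cntr, sizeof(hw_cntr));
	memcpy(hw_ctrl, v->guest_ctrl, sizeof(hw_ctrl));
	memcpy(hw_cntr, v->guest_cntr, sizeof(hw_cntr));
	v->aux_inuse |= KVM_LARCH_PERF;
}

/* Guest loses ownership: save guest state, restore host state. */
static void lose_pmu(struct vcpu_model *v)
{
	if (!(v->aux_inuse & KVM_LARCH_PERF))
		return;
	memcpy(v->guest_ctrl, hw_ctrl, sizeof(hw_ctrl));
	memcpy(v->guest_cntr, hw_cntr, sizeof(hw_cntr));
	memcpy(hw_ctrl, v->host_ctrl, sizeof(hw_ctrl));
	memcpy(hw_cntr, v->host_cntr, sizeof(hw_cntr));
	v->aux_inuse &= ~KVM_LARCH_PERF;
}
```

The model makes the invariant easy to see: host counter state written before ownership must survive a full own/lose round trip unchanged.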
>
> Signed-off-by: Song Gao
> ---
>  arch/loongarch/include/asm/kvm_csr.h   |   5 +
>  arch/loongarch/include/asm/kvm_host.h  |  14 +++
>  arch/loongarch/include/asm/kvm_vcpu.h  |   2 +
>  arch/loongarch/include/asm/loongarch.h |   1 +
>  arch/loongarch/kvm/exit.c              |   7 ++
>  arch/loongarch/kvm/vcpu.c              | 140 ++++++++++++++++++++++++-
>  6 files changed, 167 insertions(+), 2 deletions(-)
>
> diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
> index 724ca8b7b401..dce556516c79 100644
> --- a/arch/loongarch/include/asm/kvm_csr.h
> +++ b/arch/loongarch/include/asm/kvm_csr.h
> @@ -208,4 +208,9 @@ static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
>  	csr->csrs[gid] |= val & _mask;
>  }
>  
> +#define KVM_PMU_PLV_ENABLE	(CSR_PERFCTRL_PLV0 |	\
> +				 CSR_PERFCTRL_PLV1 |	\
> +				 CSR_PERFCTRL_PLV2 |	\
> +				 CSR_PERFCTRL_PLV3)
> +

How about KVM_PMU_ENABLED or KVM_PMU_EVENT_ENABLED?

>  #endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
> diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
> index 2d62f7b0d377..0daa0bde7742 100644
> --- a/arch/loongarch/include/asm/kvm_host.h
> +++ b/arch/loongarch/include/asm/kvm_host.h
> @@ -54,6 +54,9 @@ struct kvm_arch_memory_slot {
>  struct kvm_context {
>  	unsigned long vpid_cache;
>  	struct kvm_vcpu *last_vcpu;
> +	/* Save host pmu csr */
> +	u64 perf_ctrl[4];
> +	u64 perf_cntr[4];

It would be better to use a macro rather than the hardcoded number 4.
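To illustrate the point (the macro name below is hypothetical, not from the patch): with a named constant, the array sizes and any iteration over the counters come from a single definition. Note the real kernel code necessarily uses fixed per-CSR accessors (read_csr_perfctrl0() etc.), so the loop is only illustrative; the `demo_*` readers are stand-ins.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical named constant standing in for the bare "4". */
#define KVM_NUM_PMU 4

struct kvm_context_model {
	uint64_t perf_ctrl[KVM_NUM_PMU];
	uint64_t perf_cntr[KVM_NUM_PMU];
};

/* Demo readers, stand-ins for the per-CSR accessors. */
static uint64_t demo_ctrl(int n) { return 0x10 + n; }
static uint64_t demo_cntr(int n) { return 0x20 + n; }

/* With a named bound, a save loop cannot silently disagree with the
 * array size the way an unrolled sequence with a hardcoded 4 can. */
static void save_all(struct kvm_context_model *ctx,
		     uint64_t (*read_ctrl)(int), uint64_t (*read_cntr)(int))
{
	for (int i = 0; i < KVM_NUM_PMU; i++) {
		ctx->perf_ctrl[i] = read_ctrl(i);
		ctx->perf_cntr[i] = read_cntr(i);
	}
}
```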
>  };
>  
>  struct kvm_world_switch {
> @@ -99,6 +102,7 @@ enum emulation_result {
>  #define KVM_LARCH_LASX		(0x1 << 2)
>  #define KVM_LARCH_SWCSR_LATEST	(0x1 << 3)
>  #define KVM_LARCH_HWCSR_USABLE	(0x1 << 4)
> +#define KVM_LARCH_PERF		(0x1 << 5)
>  
>  struct kvm_vcpu_arch {
>  	/*
> @@ -195,6 +199,16 @@ static inline bool kvm_guest_has_lasx(struct kvm_vcpu_arch *arch)
>  	return arch->cpucfg[2] & CPUCFG2_LASX;
>  }
>  
> +static inline bool kvm_guest_has_pmu(struct kvm_vcpu_arch *arch)
> +{
> +	return arch->cpucfg[6] & CPUCFG6_PMP;

The macro LOONGARCH_CPUCFG6 would be better than the hardcoded index 6.

> +}
> +
> +static inline int kvm_get_pmu_num(struct kvm_vcpu_arch *arch)
> +{
> +	return (arch->cpucfg[6] & CPUCFG6_PMNUM) >> CPUCFG6_PMNUM_SHIFT;

Ditto.

Regards
Bibo Mao

> +}
> +
>  /* Debug: dump vcpu state */
>  int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
> index 0cb4fdb8a9b5..7212428022d1 100644
> --- a/arch/loongarch/include/asm/kvm_vcpu.h
> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
> @@ -75,6 +75,8 @@ static inline void kvm_save_lasx(struct loongarch_fpu *fpu) { }
>  static inline void kvm_restore_lasx(struct loongarch_fpu *fpu) { }
>  #endif
>  
> +int kvm_own_pmu(struct kvm_vcpu *vcpu);
> +
>  void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
>  void kvm_reset_timer(struct kvm_vcpu *vcpu);
>  void kvm_save_timer(struct kvm_vcpu *vcpu);
> diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
> index 46366e783c84..644380b6ebec 100644
> --- a/arch/loongarch/include/asm/loongarch.h
> +++ b/arch/loongarch/include/asm/loongarch.h
> @@ -119,6 +119,7 @@
>  #define  CPUCFG6_PMP			BIT(0)
>  #define  CPUCFG6_PAMVER			GENMASK(3, 1)
>  #define  CPUCFG6_PMNUM			GENMASK(7, 4)
> +#define  CPUCFG6_PMNUM_SHIFT		4
>  #define  CPUCFG6_PMBITS			GENMASK(13, 8)
>  #define  CPUCFG6_UPM			BIT(14)
>  
> diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
> index ed1d89d53e2e..ab17d9eb711f 100644
> --- a/arch/loongarch/kvm/exit.c
> +++ b/arch/loongarch/kvm/exit.c
> @@ -83,6 +83,13 @@ static int kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
>  	rj = inst.reg2csr_format.rj;
>  	csrid = inst.reg2csr_format.csr;
>  
> +	if (LOONGARCH_CSR_PERFCTRL0 <= csrid && csrid <= LOONGARCH_CSR_PERFCNTR3) {
> +		if(!kvm_own_pmu(vcpu)) {
> +			vcpu->arch.pc -= 4;
> +			return EMULATE_DONE;
> +		}
> +	}
> +
>  	/* Process CSR ops */
>  	switch (rj) {
>  	case 0: /* process csrrd */
> diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
> index 3a8779065f73..6fb59b805807 100644
> --- a/arch/loongarch/kvm/vcpu.c
> +++ b/arch/loongarch/kvm/vcpu.c
> @@ -333,6 +333,12 @@ static int _kvm_get_cpucfg_mask(int id, u64 *v)
>  	case LOONGARCH_CPUCFG5:
>  		*v = GENMASK(31, 0);
>  		return 0;
> +	case LOONGARCH_CPUCFG6:
> +		if (cpu_has_pmp)
> +			*v = GENMASK(14, 0);
> +		else
> +			*v = 0;
> +		return 0;
>  	case LOONGARCH_CPUCFG16:
>  		*v = GENMASK(16, 0);
>  		return 0;
> @@ -351,7 +357,7 @@ static int _kvm_get_cpucfg_mask(int id, u64 *v)
>  
>  static int kvm_check_cpucfg(int id, u64 val)
>  {
> -	int ret;
> +	int ret, host;
>  	u64 mask = 0;
>  
>  	ret = _kvm_get_cpucfg_mask(id, &mask);
> @@ -377,6 +383,18 @@ static int kvm_check_cpucfg(int id, u64 val)
>  			/* LASX architecturally implies LSX and FP but val does not satisfy that */
>  			return -EINVAL;
>  		return 0;
> +	case LOONGARCH_CPUCFG6:
> +		if (val & CPUCFG6_PMP) {
> +			host = read_cpucfg(LOONGARCH_CPUCFG6);
> +			if ((val & CPUCFG6_PMBITS) != (host & CPUCFG6_PMBITS))
> +				/* Guest pmbits must be the same with host */
> +				return -EINVAL;
> +			if ((val & CPUCFG6_PMNUM) > (host & CPUCFG6_PMNUM))
> +				return -EINVAL;
> +			if ((val & CPUCFG6_UPM) && !(host & CPUCFG6_UPM))
> +				return -EINVAL;
> +		}
> +		return 0;
>  	default:
>  		/*
>  		 * Values for the other CPUCFG IDs are not being further validated
> @@ -552,7 +570,8 @@ static int kvm_loongarch_cpucfg_has_attr(struct kvm_vcpu *vcpu,
>  					 struct kvm_device_attr *attr)
>  {
>  	switch (attr->attr) {
> -	case 2:
> +	case LOONGARCH_CPUCFG2:
> +	case LOONGARCH_CPUCFG6:
>  		return 0;
>  	default:
>  		return -ENXIO;
> @@ -844,6 +863,119 @@ void kvm_lose_fpu(struct kvm_vcpu *vcpu)
>  	preempt_enable();
>  }
>  
> +static inline void kvm_save_host_pmu(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_context *context;
> +
> +	context = this_cpu_ptr(vcpu->kvm->arch.vmcs);
> +	context->perf_ctrl[0] = read_csr_perfctrl0();
> +	context->perf_cntr[0] = read_csr_perfcntr0();
> +	context->perf_ctrl[1] = read_csr_perfctrl1();
> +	context->perf_cntr[1] = read_csr_perfcntr1();
> +	context->perf_ctrl[2] = read_csr_perfctrl2();
> +	context->perf_cntr[2] = read_csr_perfcntr2();
> +	context->perf_ctrl[3] = read_csr_perfctrl3();
> +	context->perf_cntr[3] = read_csr_perfcntr3();
> +}
> +
> +static inline void kvm_restore_host_pmu(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_context *context;
> +
> +	context = this_cpu_ptr(vcpu->kvm->arch.vmcs);
> +	write_csr_perfctrl0(context->perf_ctrl[0]);
> +	write_csr_perfcntr0(context->perf_cntr[0]);
> +	write_csr_perfctrl1(context->perf_ctrl[1]);
> +	write_csr_perfcntr1(context->perf_cntr[1]);
> +	write_csr_perfctrl2(context->perf_ctrl[2]);
> +	write_csr_perfcntr2(context->perf_cntr[2]);
> +	write_csr_perfctrl3(context->perf_ctrl[3]);
> +	write_csr_perfcntr3(context->perf_cntr[3]);
> +}
> +
> +static inline void kvm_save_guest_pmu(struct kvm_vcpu *vcpu)
> +{
> +	struct loongarch_csrs *csr = vcpu->arch.csr;
> +
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL0);
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR0);
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL1);
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR1);
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL2);
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR2);
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL3);
> +	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR3);
> +	kvm_write_hw_gcsr(LOONGARCH_CSR_PERFCTRL0, 0);
> +	kvm_write_hw_gcsr(LOONGARCH_CSR_PERFCTRL1, 0);
> +	kvm_write_hw_gcsr(LOONGARCH_CSR_PERFCTRL2, 0);
> +	kvm_write_hw_gcsr(LOONGARCH_CSR_PERFCTRL3, 0);
> +}
> +
> +static inline void kvm_restore_guest_pmu(struct kvm_vcpu *vcpu)
> +{
> +	struct loongarch_csrs *csr = vcpu->arch.csr;
> +
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL0);
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR0);
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL1);
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR1);
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL2);
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR2);
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCTRL3);
> +	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PERFCNTR3);
> +}
> +
> +static void _kvm_own_pmu(struct kvm_vcpu *vcpu)
> +{
> +	unsigned long val;
> +
> +	kvm_save_host_pmu(vcpu);
> +	/* Set PM0-PM(num) to Guest */
> +	val = read_csr_gcfg() & ~CSR_GCFG_GPERF;
> +	val |= (kvm_get_pmu_num(&vcpu->arch) + 1) << CSR_GCFG_GPERF_SHIFT;
> +	write_csr_gcfg(val);
> +	kvm_restore_guest_pmu(vcpu);
> +}
> +
> +int kvm_own_pmu(struct kvm_vcpu *vcpu)
> +{
> +	if (!kvm_guest_has_pmu(&vcpu->arch))
> +		return -EINVAL;
> +
> +	preempt_disable();
> +	_kvm_own_pmu(vcpu);
> +	vcpu->arch.aux_inuse |= KVM_LARCH_PERF;
> +	preempt_enable();
> +	return 0;
> +}
> +
> +static void kvm_lose_pmu(struct kvm_vcpu *vcpu)
> +{
> +	struct loongarch_csrs *csr = vcpu->arch.csr;
> +
> +	if (!(vcpu->arch.aux_inuse & KVM_LARCH_PERF))
> +		return;
> +
> +	kvm_save_guest_pmu(vcpu);
> +	/* Disable pmu access from guest */
> +	write_csr_gcfg(read_csr_gcfg() & ~CSR_GCFG_GPERF);
> +	if (((kvm_read_sw_gcsr(csr, LOONGARCH_CSR_PERFCTRL0) |
> +	      kvm_read_sw_gcsr(csr, LOONGARCH_CSR_PERFCTRL1) |
> +	      kvm_read_sw_gcsr(csr, LOONGARCH_CSR_PERFCTRL2) |
> +	      kvm_read_sw_gcsr(csr, LOONGARCH_CSR_PERFCTRL3))
> +	     & KVM_PMU_PLV_ENABLE) == 0)
> +		vcpu->arch.aux_inuse &= ~KVM_LARCH_PERF;
> +	kvm_restore_host_pmu(vcpu);
> +}
> +
> +static void kvm_restore_pmu(struct kvm_vcpu *vcpu)
> +{
> +	if (!(vcpu->arch.aux_inuse & KVM_LARCH_PERF))
> +		return;
> +
> +	_kvm_own_pmu(vcpu);
> +}
> +
>  int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
>  {
>  	int intr = (int)irq->irq;
> @@ -982,6 +1114,9 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	/* Control guest page CCA attribute */
>  	change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT);
>  
> +	/* Restore hardware perf csr */
> +	kvm_restore_pmu(vcpu);
> +
>  	/* Don't bother restoring registers multiple times unless necessary */
>  	if (vcpu->arch.aux_inuse & KVM_LARCH_HWCSR_USABLE)
>  		return 0;
> @@ -1065,6 +1200,7 @@ static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu)
>  	struct loongarch_csrs *csr = vcpu->arch.csr;
>  
>  	kvm_lose_fpu(vcpu);
> +	kvm_lose_pmu(vcpu);
>  
>  	/*
>  	 * Update CSR state from hardware if software CSR state is stale,
>
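The lazy release condition in kvm_lose_pmu() above (KVM_LARCH_PERF is kept while any PERFCTRL still has a PLV enable bit set, so guest state is reloaded on the next entry) can be checked in isolation. The bit positions below are illustrative placeholders; the real ones are the CSR_PERFCTRL_PLV0..PLV3 definitions in asm/loongarch.h.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative PLV enable bits, not the real CSR_PERFCTRL_PLV* values. */
#define PLV0 (1u << 0)
#define PLV1 (1u << 1)
#define PLV2 (1u << 2)
#define PLV3 (1u << 3)
#define PMU_PLV_ENABLE (PLV0 | PLV1 | PLV2 | PLV3)

/* Mirrors the test in kvm_lose_pmu(): ownership can be released only
 * when none of the four control registers enables counting at any
 * privilege level. */
static int pmu_can_release(uint64_t c0, uint64_t c1, uint64_t c2, uint64_t c3)
{
	return ((c0 | c1 | c2 | c3) & PMU_PLV_ENABLE) == 0;
}
```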