Date: Tue, 21 May 2019 17:44:46 +0100
From: Andrew Murray
To: Xiang Zheng
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, marc.zyngier@arm.com
Subject: Re: [PATCH] KVM: ARM64: Update perf event when setting PMU count value
Message-ID: <20190521164445.GW8268@e119886-lin.cambridge.arm.com>
References: <20190519100559.7188-1-zhengxiang9@huawei.com>
In-Reply-To: <20190519100559.7188-1-zhengxiang9@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, May 19, 2019 at 06:05:59PM +0800, Xiang Zheng wrote:
> Guest
> will adjust the sample period and set PMU counter value when
> it takes a long time to handle the PMU interrupts.
>
> However, we don't have a corresponding change on the virtual PMU
> which is emulated via a perf event. It could cause a large number
> of PMU interrupts injected to guest. Then guest will get hang for
> handling these interrupts.

Yes this is indeed an issue. I believe I've addressed this in my
'chained pmu' series - the relevant patch is here...

https://lists.cs.columbia.edu/pipermail/kvmarm/2019-May/035933.html

Some other comments below.

>
> So update the sample_period of perf event if the counter value is
> changed to avoid this case.
>
> Signed-off-by: Xiang Zheng
> ---
>  virt/kvm/arm/pmu.c | 54 +++++++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 45 insertions(+), 9 deletions(-)
>
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 1c5b76c..cbad3ec 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,6 +24,11 @@
>  #include
>  #include
>
> +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +static struct perf_event *kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu,
> +						    struct kvm_pmc *pmc,
> +						    struct perf_event_attr *attr);
> +
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
>   * @vcpu: The vcpu pointer
> @@ -57,11 +62,29 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>   */
>  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  {
> -	u64 reg;
> +	u64 reg, counter, old_sample_period;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	struct perf_event *event;
> +	struct perf_event_attr attr;
>
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> +
> +	if (pmc->perf_event) {
> +		attr = pmc->perf_event->attr;
> +		old_sample_period = attr.sample_period;
> +		counter = kvm_pmu_get_counter_value(vcpu, select_idx);
> +		attr.sample_period = (-counter) & pmc->bitmask;
> +		if (attr.sample_period == old_sample_period)
> +			return;

I'd be interested to know how often this would evaluate to true.

> +
> +		kvm_pmu_stop_counter(vcpu, pmc);
> +		event = kvm_pmu_create_perf_event(vcpu, pmc, &attr);

I'm not sure it's necessary to change the prototype of
kvm_pmu_create_perf_event as this function will recalculate the sample
period based on the updated counter value anyway.

Thanks,

Andrew Murray

> +		if (event)
> +			pmc->perf_event = event;
> +	}
>  }
>
>  /**
> @@ -303,6 +326,24 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
>  	}
>  }
>
> +static struct perf_event *kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu,
> +						    struct kvm_pmc *pmc,
> +						    struct perf_event_attr *attr)
> +{
> +	struct perf_event *event;
> +
> +	event = perf_event_create_kernel_counter(attr, -1, current,
> +						 kvm_pmu_perf_overflow, pmc);
> +
> +	if (IS_ERR(event)) {
> +		pr_err_once("kvm: pmu event creation failed %ld\n",
> +			    PTR_ERR(event));
> +		return NULL;
> +	}
> +
> +	return event;
> +}
> +
>  /**
>   * kvm_pmu_software_increment - do software increment
>   * @vcpu: The vcpu pointer
> @@ -416,15 +457,10 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  	/* The initial sample period (overflow count) of an event. */
>  	attr.sample_period = (-counter) & pmc->bitmask;
>
> -	event = perf_event_create_kernel_counter(&attr, -1, current,
> -						 kvm_pmu_perf_overflow, pmc);
> -	if (IS_ERR(event)) {
> -		pr_err_once("kvm: pmu event creation failed %ld\n",
> -			    PTR_ERR(event));
> -		return;
> -	}
> +	event = kvm_pmu_create_perf_event(vcpu, pmc, &attr);
>
> -	pmc->perf_event = event;
> +	if (event)
> +		pmc->perf_event = event;
>  }
>
>  bool kvm_arm_support_pmu_v3(void)
> --
> 1.8.3.1
>
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm