To: Eric Auger <eric.auger@redhat.com>
Subject: Re: [RFC 2/3] KVM: arm64: pmu: Fix chained SW_INCR counters
Date: Thu, 05 Dec 2019 09:43:09 +0000
From: Marc Zyngier <maz@kernel.org>
Cc: eric.auger.pro@gmail.com, linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu, james.morse@arm.com, andrew.murray@arm.com, suzuki.poulose@arm.com, drjones@redhat.com
In-Reply-To: <20191204204426.9628-3-eric.auger@redhat.com>
References: <20191204204426.9628-1-eric.auger@redhat.com> <20191204204426.9628-3-eric.auger@redhat.com>
Message-ID: <561ac6df385e977cc51d51a8ab28ee49@www.loen.fr>
User-Agent: Roundcube Webmail/0.7.2
Hi Eric,

On 2019-12-04 20:44, Eric Auger wrote:
> At the moment a SW_INCR counter always overflows on 32-bit
> boundary, independently on whether the n+1th counter is
> programmed as CHAIN.
>
> Check whether the SW_INCR counter is a 64b counter and if so,
> implement the 64b logic.
>
> Fixes: 80f393a23be6 ("KVM: arm/arm64: Support chained PMU counters")
> Signed-off-by: Eric Auger
> ---
>  virt/kvm/arm/pmu.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index c3f8b059881e..7ab477db2f75 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -491,6 +491,8 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>
>  	enable = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>  	for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
> +		bool chained = test_bit(i >> 1, vcpu->arch.pmu.chained);
> +

I'd rather you use kvm_pmu_pmc_is_chained() rather than
open-coding this.
But see below:

>  		if (!(val & BIT(i)))
>  			continue;
>  		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
> @@ -500,8 +502,20 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>  			reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
>  			reg = lower_32_bits(reg);
>  			__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
> -			if (!reg)
> +			if (reg) /* no overflow */
> +				continue;
> +			if (chained) {
> +				reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
> +				reg = lower_32_bits(reg);
> +				__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
> +				if (reg)
> +					continue;
> +				/* mark an overflow on high counter */
> +				__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
> +			} else {
> +				/* mark an overflow */
>  				__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
> +			}
>  		}
>  	}
>  }

I think the whole function is a bit of a mess, and could be better
structured to treat 64bit counters as a first class citizen.

I'm suggesting something along those lines, which tries to
streamline things a bit and keep the flow uniform between the two
word sizes. IMHO, it helps reasoning about it and gives scope to
the ARMv8.5 full 64bit counters... It is of course completely
untested.

Thoughts?

        M.
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 8731dfeced8b..cf371f643ade 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -480,26 +480,43 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
  */
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 {
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	int i;
-	u64 type, enable, reg;
 
-	if (val == 0)
-		return;
+	/* Weed out disabled counters */
+	val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 
-	enable = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 	for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
+		u64 type, reg;
+		int ovs = i;
+
 		if (!(val & BIT(i)))
 			continue;
-		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
-		       & ARMV8_PMU_EVTYPE_EVENT;
-		if ((type == ARMV8_PMUV3_PERFCTR_SW_INCR)
-		    && (enable & BIT(i))) {
-			reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
-			reg = lower_32_bits(reg);
-			__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
-			if (!reg)
-				__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
+
+		/* PMSWINC only applies to ... SW_INC! */
+		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		type &= ARMV8_PMU_EVTYPE_EVENT;
+		if (type != ARMV8_PMUV3_PERFCTR_SW_INCR)
+			continue;
+
+		/* Potential 64bit value */
+		reg = kvm_pmu_get_counter_value(vcpu, i) + 1;
+
+		/* Start by writing back the low 32bits */
+		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = lower_32_bits(reg);
+
+		/*
+		 * 64bit counter? Write back the upper bits and target
+		 * the overflow bit at the next counter
+		 */
+		if (kvm_pmu_pmc_is_chained(&pmu->pmc[i])) {
+			reg = upper_32_bits(reg);
+			__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
+			ovs++;
 		}
+
+		if (!lower_32_bits(reg))
+			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(ovs);
 	}
 }

-- 
Jazz is not dead. It just smells funny...
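[Editor's note: the intended overflow behaviour discussed above — a 32-bit wrap fires the overflow on counter i when unchained, while a chained pair only overflows on a full 64-bit wrap and targets counter i + 1 — can be modelled outside the kernel. The sketch below is illustrative only, not KVM code: `sw_incr_tick` is a made-up helper, and `lower_32_bits`/`upper_32_bits` are standalone stand-ins for the kernel macros of the same name.]

```c
#include <stdint.h>

/* Standalone stand-ins for the kernel's lower_32_bits()/upper_32_bits() */
static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

/*
 * Model one SW_INCR tick on counter i, whose value lives in *counter.
 * Unchained: only the low 32 bits exist; an overflow targets bit i.
 * Chained:   the pair forms a 64-bit value; an overflow targets bit
 *            i + 1 (the high counter) and only a full 64-bit wrap counts.
 * Returns the PMOVSSET bit to set, or -1 when nothing overflowed.
 */
static int sw_incr_tick(uint64_t *counter, int i, int chained)
{
	uint64_t reg = *counter + 1;

	if (chained) {
		/* mirror the two sysreg writes: low word, then high word */
		*counter = (uint64_t)upper_32_bits(reg) << 32 |
			   lower_32_bits(reg);
		return reg == 0 ? i + 1 : -1;	/* full 64-bit wrap */
	}

	*counter = lower_32_bits(reg);		/* low word only */
	return lower_32_bits(reg) == 0 ? i : -1;
}
```

Starting from 0xFFFFFFFF, an unchained counter wraps and returns bit 0 for i = 0, whereas a chained pair merely carries into the high word and returns -1; a chained pair only reports an overflow when incremented from UINT64_MAX.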