Subject: Re: [PATCH] arm64: perf: Fix 64-bit event counter read truncation
To: Rob Herring, Will Deacon, Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Julien Thierry, Catalin Marinas, Peter Zijlstra, Ingo Molnar,
    Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa, Namhyung Kim
References: <20210310004412.1450128-1-robh@kernel.org>
From: Alexandru Elisei
Date: Wed, 10 Mar 2021 10:53:36 +0000
In-Reply-To: <20210310004412.1450128-1-robh@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Rob,

On 3/10/21 12:44 AM, Rob Herring wrote:
> Commit 0fdf1bb75953 ("arm64: perf: Avoid PMXEV* indirection") changed
> armv8pmu_read_evcntr() to return a u32 instead of u64. The result is
> silent truncation of the event counter when using 64-bit counters. Given
> the offending commit appears to have passed thru several folks, it seems
> likely this was a bad rebase after v8.5 PMU 64-bit counters landed.

Thank you for the fix; it does seem that I made a mistake when rebasing the
series. Version v4 of the PMU NMI series was sent in 2019; then patch
8673e02e5841 ("arm64: perf: Add support for ARMv8.5-PMU 64-bit counters") from
March 2020 changed the read of PMEVCNTR_EL0 to return a u64, and version v5
from June 2020 changed it back to returning a u32.

The result of read_pmevcntrn() is returned by armv8pmu_read_evcntr(), and it
is an unsigned long, which is 64 bits on arm64, so the patch looks good to me:

Reviewed-by: Alexandru Elisei

Thanks,
Alex

>
> Fixes: 0fdf1bb75953 ("arm64: perf: Avoid PMXEV* indirection")
> Cc: Alexandru Elisei
> Cc: Julien Thierry
> Cc: Mark Rutland
> Cc: Will Deacon
> Cc: Catalin Marinas
> Cc: Peter Zijlstra
> Cc: Ingo Molnar
> Cc: Arnaldo Carvalho de Melo
> Cc: Alexander Shishkin
> Cc: Jiri Olsa
> Cc: Namhyung Kim
> Signed-off-by: Rob Herring
> ---
>  arch/arm64/kernel/perf_event.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 7d2318f80955..4658fcf88c2b 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -460,7 +460,7 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
>  	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
>  }
>
> -static inline u32 armv8pmu_read_evcntr(int idx)
> +static inline u64 armv8pmu_read_evcntr(int idx)
>  {
>  	u32 counter = ARMV8_IDX_TO_COUNTER(idx);
>
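
For readers following along outside the kernel tree, the failure mode being
fixed can be reproduced with a minimal standalone sketch. This is only an
illustration, not the kernel code: it uses <stdint.h> types instead of the
kernel's u32/u64, and the counter value and function names are made up.

/*
 * Illustrative sketch of the truncation the patch fixes: if the function
 * reading a 64-bit event counter declares a 32-bit return type, the upper
 * half of the count is silently dropped by the implicit conversion.
 */
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Pretend 64-bit PMU counter value (illustrative). */
static uint64_t hw_counter = 0x100000002ULL;

/* Buggy shape: mirrors the u32 return type before the fix. */
static uint32_t read_evcntr_u32(void)
{
	return hw_counter;	/* implicit conversion drops bits 63:32 */
}

/* Fixed shape: mirrors the u64 return type after the fix. */
static uint64_t read_evcntr_u64(void)
{
	return hw_counter;	/* full 64-bit value preserved */
}

int main(void)
{
	printf("u32 return: 0x%" PRIx64 "\n", (uint64_t)read_evcntr_u32());
	printf("u64 return: 0x%" PRIx64 "\n", read_evcntr_u64());
	return 0;
}

Built with a stock C compiler, the first line prints 0x2 while the second
prints the full 0x100000002, which is exactly the silent truncation described
in the commit message above.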