From: Qi Liu
Subject: [PATCH] arm64: perf: Remove unnecessary event_idx check
Date: Fri, 4 Sep 2020 17:57:38 +0800
Message-ID: <1599213458-28394-1-git-send-email-liuqi115@huawei.com>
X-Mailer: git-send-email 2.8.1
X-Mailing-List: linux-kernel@vger.kernel.org

event_idx is obtained from armv8pmu_get_event_idx(), which only returns
an index between ARMV8_IDX_CYCLE_COUNTER and cpu_pmu->num_events, so the
range check in armv8pmu_read_counter() and armv8pmu_write_counter() can
never fail. Remove it.
Signed-off-by: Qi Liu
---
 arch/arm64/kernel/perf_event.c | 20 ++------------------
 1 file changed, 2 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 462f9a9..885a357 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -307,8 +307,6 @@ static struct attribute_group armv8_pmuv3_format_attr_group = {
  */
 #define	ARMV8_IDX_CYCLE_COUNTER	0
 #define	ARMV8_IDX_COUNTER0	1
-#define	ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
-	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
 
 /*
@@ -365,12 +363,6 @@ static inline int armv8pmu_has_overflowed(u32 pmovsr)
 	return pmovsr & ARMV8_PMU_OVERFLOWED_MASK;
 }
 
-static inline int armv8pmu_counter_valid(struct arm_pmu *cpu_pmu, int idx)
-{
-	return idx >= ARMV8_IDX_CYCLE_COUNTER &&
-		idx <= ARMV8_IDX_COUNTER_LAST(cpu_pmu);
-}
-
 static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
 {
 	return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
@@ -440,15 +432,11 @@ static u64 armv8pmu_unbias_long_counter(struct perf_event *event, u64 value)
 
 static u64 armv8pmu_read_counter(struct perf_event *event)
 {
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
 	u64 value = 0;
 
-	if (!armv8pmu_counter_valid(cpu_pmu, idx))
-		pr_err("CPU%u reading wrong counter %d\n",
-			smp_processor_id(), idx);
-	else if (idx == ARMV8_IDX_CYCLE_COUNTER)
+	if (idx == ARMV8_IDX_CYCLE_COUNTER)
 		value = read_sysreg(pmccntr_el0);
 	else
 		value = armv8pmu_read_hw_counter(event);
@@ -477,16 +465,12 @@ static inline void armv8pmu_write_hw_counter(struct perf_event *event,
 
 static void armv8pmu_write_counter(struct perf_event *event, u64 value)
 {
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
 
 	value = armv8pmu_bias_long_counter(event, value);
 
-	if (!armv8pmu_counter_valid(cpu_pmu, idx))
-		pr_err("CPU%u writing wrong counter %d\n",
-			smp_processor_id(), idx);
-	else if (idx == ARMV8_IDX_CYCLE_COUNTER)
+	if (idx == ARMV8_IDX_CYCLE_COUNTER)
 		write_sysreg(value, pmccntr_el0);
 	else
 		armv8pmu_write_hw_counter(event, value);
-- 
2.8.1