Date: Mon, 4 Dec 2023 09:45:21 +0000
From: Mark Rutland
To: Anshuman Khandual
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon,
 Russell King, linux-perf-users@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] arm: perf: Remove PMU locking
Message-ID:
References: <20231115092805.737822-1-anshuman.khandual@arm.com>
 <20231115092805.737822-2-anshuman.khandual@arm.com>
In-Reply-To: <20231115092805.737822-2-anshuman.khandual@arm.com>

On Wed, Nov 15, 2023 at 02:58:04PM +0530, Anshuman Khandual wrote:
> The PMU is disabled and enabled, and the counters are programmed from
> contexts where interrupts or preemption is disabled.
>
> The functions to toggle the PMU and to program the PMU counters access the
> registers directly and don't access data modified by the interrupt handler.
> That, and the fact that they're always called from non-preemptible
> contexts, means that we don't need to disable interrupts or use a spinlock.
>
> This is very similar to an earlier change on arm64 platform.
>
> commit 2a0e2a02e4b7 ("arm64: perf: Remove PMU locking").
I realise the commit message is a copy of the wording from 2a0e2a02e4b7,
but some of this isn't quite right; could we please replace that with:

| Currently the 32-bit arm PMU drivers use the pmu_hw_events::lock spinlock in
| their arm_pmu::{start,stop,enable,disable}() callbacks to protect hardware
| state and event data.
|
| This locking is not necessary as the perf core code already provides mutual
| exclusion, disabling interrupts to serialize against the IRQ handler, and
| using perf_event_context::lock to protect against concurrent modifications of
| events cross-cpu.
|
| The locking was removed from the arm64 (now PMUv3) PMU driver in commit:
|
|   2a0e2a02e4b7 ("arm64: perf: Remove PMU locking")
|
| ... and the same reasoning applies to all the 32-bit PMU drivers.
|
| Remove the locking from the 32-bit PMU drivers.

With that wording:

Acked-by: Mark Rutland

Mark.
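For reference, here is a minimal before/after sketch of the pattern the
patch applies to every callback, using armv6pmu_start() as the example
(both versions are taken directly from the armv6 hunks quoted below):

	/* Before: the PMCR read-modify-write is wrapped in the per-CPU lock. */
	static void armv6pmu_start(struct arm_pmu *cpu_pmu)
	{
		unsigned long flags, val;
		struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);

		raw_spin_lock_irqsave(&events->pmu_lock, flags);
		val = armv6_pmcr_read();
		val |= ARMV6_PMCR_ENABLE;
		armv6_pmcr_write(val);
		raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
	}

	/* After: callers already run with IRQs disabled on this CPU, so the
	 * lock (and the flags/events locals) can simply be dropped. */
	static void armv6pmu_start(struct arm_pmu *cpu_pmu)
	{
		unsigned long val;

		val = armv6_pmcr_read();
		val |= ARMV6_PMCR_ENABLE;
		armv6_pmcr_write(val);
	}
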
> Cc: Mark Rutland
> Cc: Will Deacon
> Cc: Russell King
> Cc: linux-perf-users@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Anshuman Khandual
> ---
>  arch/arm/kernel/perf_event_v6.c     | 28 ++++--------------
>  arch/arm/kernel/perf_event_v7.c     | 44 -----------------------------
>  arch/arm/kernel/perf_event_xscale.c | 44 ++++++-----------------------
>  3 files changed, 13 insertions(+), 103 deletions(-)
>
> diff --git a/arch/arm/kernel/perf_event_v6.c b/arch/arm/kernel/perf_event_v6.c
> index 1ae99deeec54..8fc080c9e4fb 100644
> --- a/arch/arm/kernel/perf_event_v6.c
> +++ b/arch/arm/kernel/perf_event_v6.c
> @@ -268,10 +268,8 @@ static inline void armv6pmu_write_counter(struct perf_event *event, u64 value)
>
>  static void armv6pmu_enable_event(struct perf_event *event)
>  {
> -	unsigned long val, mask, evt, flags;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +	unsigned long val, mask, evt;
>  	struct hw_perf_event *hwc = &event->hw;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	if (ARMV6_CYCLE_COUNTER == idx) {
> @@ -294,12 +292,10 @@ static void armv6pmu_enable_event(struct perf_event *event)
>  	 * Mask out the current event and set the counter to count the event
>  	 * that we're interested in.
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = armv6_pmcr_read();
>  	val &= ~mask;
>  	val |= evt;
>  	armv6_pmcr_write(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static irqreturn_t
> @@ -362,26 +358,20 @@ armv6pmu_handle_irq(struct arm_pmu *cpu_pmu)
>
>  static void armv6pmu_start(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags, val;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> +	unsigned long val;
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = armv6_pmcr_read();
>  	val |= ARMV6_PMCR_ENABLE;
>  	armv6_pmcr_write(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void armv6pmu_stop(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags, val;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> +	unsigned long val;
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = armv6_pmcr_read();
>  	val &= ~ARMV6_PMCR_ENABLE;
>  	armv6_pmcr_write(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static int
> @@ -419,10 +409,8 @@ static void armv6pmu_clear_event_idx(struct pmu_hw_events *cpuc,
>
>  static void armv6pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long val, mask, evt, flags;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +	unsigned long val, mask, evt;
>  	struct hw_perf_event *hwc = &event->hw;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	if (ARMV6_CYCLE_COUNTER == idx) {
> @@ -444,20 +432,16 @@ static void armv6pmu_disable_event(struct perf_event *event)
>  	 * of ETM bus signal assertion cycles. The external reporting should
>  	 * be disabled and so this should never increment.
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = armv6_pmcr_read();
>  	val &= ~mask;
>  	val |= evt;
>  	armv6_pmcr_write(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void armv6mpcore_pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long val, mask, flags, evt = 0;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +	unsigned long val, mask, evt = 0;
>  	struct hw_perf_event *hwc = &event->hw;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	if (ARMV6_CYCLE_COUNTER == idx) {
> @@ -475,12 +459,10 @@ static void armv6mpcore_pmu_disable_event(struct perf_event *event)
>  	 * Unlike UP ARMv6, we don't have a way of stopping the counters. We
>  	 * simply disable the interrupt reporting.
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = armv6_pmcr_read();
>  	val &= ~mask;
>  	val |= evt;
>  	armv6_pmcr_write(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static int armv6_map_event(struct perf_event *event)
> diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
> index eb2190477da1..c890354b04e9 100644
> --- a/arch/arm/kernel/perf_event_v7.c
> +++ b/arch/arm/kernel/perf_event_v7.c
> @@ -870,10 +870,8 @@ static void armv7_pmnc_dump_regs(struct arm_pmu *cpu_pmu)
>
>  static void armv7pmu_enable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
>  	struct hw_perf_event *hwc = &event->hw;
>  	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	if (!armv7_pmnc_counter_valid(cpu_pmu, idx)) {
> @@ -886,7 +884,6 @@ static void armv7pmu_enable_event(struct perf_event *event)
>  	 * Enable counter and interrupt, and set the counter to count
>  	 * the event that we're interested in.
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>
>  	/*
>  	 * Disable counter
> @@ -910,16 +907,12 @@ static void armv7pmu_enable_event(struct perf_event *event)
>  	 * Enable counter
>  	 */
>  	armv7_pmnc_enable_counter(idx);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void armv7pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
>  	struct hw_perf_event *hwc = &event->hw;
>  	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	if (!armv7_pmnc_counter_valid(cpu_pmu, idx)) {
> @@ -931,7 +924,6 @@ static void armv7pmu_disable_event(struct perf_event *event)
>  	/*
>  	 * Disable counter and interrupt
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>
>  	/*
>  	 * Disable counter
> @@ -942,8 +934,6 @@ static void armv7pmu_disable_event(struct perf_event *event)
>  	 * Disable interrupt for this counter
>  	 */
>  	armv7_pmnc_disable_intens(idx);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
> @@ -1009,24 +999,14 @@ static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
>
>  static void armv7pmu_start(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> -
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	/* Enable all counters */
>  	armv7_pmnc_write(armv7_pmnc_read() | ARMV7_PMNC_E);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void armv7pmu_stop(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> -
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	/* Disable all counters */
>  	armv7_pmnc_write(armv7_pmnc_read() & ~ARMV7_PMNC_E);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static int armv7pmu_get_event_idx(struct pmu_hw_events *cpuc,
> @@ -1492,14 +1472,10 @@ static void krait_clearpmu(u32 config_base)
>
>  static void krait_pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
>  	struct hw_perf_event *hwc = &event->hw;
>  	int idx = hwc->idx;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>
>  	/* Disable counter and interrupt */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>
>  	/* Disable counter */
>  	armv7_pmnc_disable_counter(idx);
> @@ -1512,23 +1488,17 @@ static void krait_pmu_disable_event(struct perf_event *event)
>
>  	/* Disable interrupt for this counter */
>  	armv7_pmnc_disable_intens(idx);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void krait_pmu_enable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
>  	struct hw_perf_event *hwc = &event->hw;
>  	int idx = hwc->idx;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>
>  	/*
>  	 * Enable counter and interrupt, and set the counter to count
>  	 * the event that we're interested in.
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>
>  	/* Disable counter */
>  	armv7_pmnc_disable_counter(idx);
> @@ -1548,8 +1518,6 @@ static void krait_pmu_enable_event(struct perf_event *event)
>
>  	/* Enable counter */
>  	armv7_pmnc_enable_counter(idx);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void krait_pmu_reset(void *info)
> @@ -1825,14 +1793,10 @@ static void scorpion_clearpmu(u32 config_base)
>
>  static void scorpion_pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
>  	struct hw_perf_event *hwc = &event->hw;
>  	int idx = hwc->idx;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>
>  	/* Disable counter and interrupt */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>
>  	/* Disable counter */
>  	armv7_pmnc_disable_counter(idx);
> @@ -1845,23 +1809,17 @@ static void scorpion_pmu_disable_event(struct perf_event *event)
>
>  	/* Disable interrupt for this counter */
>  	armv7_pmnc_disable_intens(idx);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void scorpion_pmu_enable_event(struct perf_event *event)
>  {
> -	unsigned long flags;
>  	struct hw_perf_event *hwc = &event->hw;
>  	int idx = hwc->idx;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>
>  	/*
>  	 * Enable counter and interrupt, and set the counter to count
>  	 * the event that we're interested in.
>  	 */
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>
>  	/* Disable counter */
>  	armv7_pmnc_disable_counter(idx);
> @@ -1881,8 +1839,6 @@ static void scorpion_pmu_enable_event(struct perf_event *event)
>
>  	/* Enable counter */
>  	armv7_pmnc_enable_counter(idx);
> -
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void scorpion_pmu_reset(void *info)
> diff --git a/arch/arm/kernel/perf_event_xscale.c b/arch/arm/kernel/perf_event_xscale.c
> index f6cdcacfb96d..7a2ba1c689a7 100644
> --- a/arch/arm/kernel/perf_event_xscale.c
> +++ b/arch/arm/kernel/perf_event_xscale.c
> @@ -203,10 +203,8 @@ xscale1pmu_handle_irq(struct arm_pmu *cpu_pmu)
>
>  static void xscale1pmu_enable_event(struct perf_event *event)
>  {
> -	unsigned long val, mask, evt, flags;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +	unsigned long val, mask, evt;
>  	struct hw_perf_event *hwc = &event->hw;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	switch (idx) {
> @@ -229,20 +227,16 @@ static void xscale1pmu_enable_event(struct perf_event *event)
>  		return;
>  	}
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = xscale1pmu_read_pmnc();
>  	val &= ~mask;
>  	val |= evt;
>  	xscale1pmu_write_pmnc(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void xscale1pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long val, mask, evt, flags;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +	unsigned long val, mask, evt;
>  	struct hw_perf_event *hwc = &event->hw;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	switch (idx) {
> @@ -263,12 +257,10 @@ static void xscale1pmu_disable_event(struct perf_event *event)
>  		return;
>  	}
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = xscale1pmu_read_pmnc();
>  	val &= ~mask;
>  	val |= evt;
>  	xscale1pmu_write_pmnc(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static int
> @@ -300,26 +292,20 @@ static void xscalepmu_clear_event_idx(struct pmu_hw_events *cpuc,
>
>  static void xscale1pmu_start(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags, val;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> +	unsigned long val;
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = xscale1pmu_read_pmnc();
>  	val |= XSCALE_PMU_ENABLE;
>  	xscale1pmu_write_pmnc(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void xscale1pmu_stop(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags, val;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> +	unsigned long val;
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = xscale1pmu_read_pmnc();
>  	val &= ~XSCALE_PMU_ENABLE;
>  	xscale1pmu_write_pmnc(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static inline u64 xscale1pmu_read_counter(struct perf_event *event)
> @@ -549,10 +535,8 @@ xscale2pmu_handle_irq(struct arm_pmu *cpu_pmu)
>
>  static void xscale2pmu_enable_event(struct perf_event *event)
>  {
> -	unsigned long flags, ien, evtsel;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +	unsigned long ien, evtsel;
>  	struct hw_perf_event *hwc = &event->hw;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	ien = xscale2pmu_read_int_enable();
> @@ -587,18 +571,14 @@ static void xscale2pmu_enable_event(struct perf_event *event)
>  		return;
>  	}
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	xscale2pmu_write_event_select(evtsel);
>  	xscale2pmu_write_int_enable(ien);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void xscale2pmu_disable_event(struct perf_event *event)
>  {
> -	unsigned long flags, ien, evtsel, of_flags;
> -	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +	unsigned long ien, evtsel, of_flags;
>  	struct hw_perf_event *hwc = &event->hw;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
>  	int idx = hwc->idx;
>
>  	ien = xscale2pmu_read_int_enable();
> @@ -638,11 +618,9 @@ static void xscale2pmu_disable_event(struct perf_event *event)
>  		return;
>  	}
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	xscale2pmu_write_event_select(evtsel);
>  	xscale2pmu_write_int_enable(ien);
>  	xscale2pmu_write_overflow_flags(of_flags);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static int
> @@ -663,26 +641,20 @@ xscale2pmu_get_event_idx(struct pmu_hw_events *cpuc,
>
>  static void xscale2pmu_start(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags, val;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> +	unsigned long val;
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = xscale2pmu_read_pmnc() & ~XSCALE_PMU_CNT64;
>  	val |= XSCALE_PMU_ENABLE;
>  	xscale2pmu_write_pmnc(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static void xscale2pmu_stop(struct arm_pmu *cpu_pmu)
>  {
> -	unsigned long flags, val;
> -	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
> +	unsigned long val;
>
> -	raw_spin_lock_irqsave(&events->pmu_lock, flags);
>  	val = xscale2pmu_read_pmnc();
>  	val &= ~XSCALE_PMU_ENABLE;
>  	xscale2pmu_write_pmnc(val);
> -	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
>  }
>
>  static inline u64 xscale2pmu_read_counter(struct perf_event *event)
> --
> 2.25.1
>
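As an aside, purely illustrative and not something this patch adds: if we
ever wanted the new invariant to be self-documenting, a hypothetical debug
check in one of the callbacks could look like the below, using the stock
WARN_ON_ONCE()/irqs_disabled() helpers. The check itself is an assumption
for illustration; none of the drivers touched here carry one.

	/* Hypothetical sketch, not part of this patch: assert the invariant
	 * that the perf core only calls us with IRQs disabled locally. */
	static void armv6pmu_start(struct arm_pmu *cpu_pmu)
	{
		unsigned long val;

		WARN_ON_ONCE(!irqs_disabled());

		val = armv6_pmcr_read();
		val |= ARMV6_PMCR_ENABLE;
		armv6_pmcr_write(val);
	}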