From: Alexandru Elisei
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: mark.rutland@arm.com, maz@kernel.org, will@kernel.org, catalin.marinas@arm.com, swboyd@chromium.org, sumit.garg@linaro.org, Julien Thierry, Will Deacon, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa, Namhyung Kim
Subject: [PATCH v6 3/7] arm64: perf: Remove PMU locking
Date: Wed, 19 Aug 2020 14:34:15 +0100
Message-Id: <20200819133419.526889-4-alexandru.elisei@arm.com>
In-Reply-To: <20200819133419.526889-1-alexandru.elisei@arm.com>
References: <20200819133419.526889-1-alexandru.elisei@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From:
Julien Thierry

The PMU is disabled and enabled, and the counters are programmed from
contexts where interrupts or preemption is disabled.

The functions to toggle the PMU and to program the PMU counters access
the registers directly and don't access data modified by the interrupt
handler. That, and the fact that they're always called from
non-preemptible contexts, means that we don't need to disable interrupts
or use a spinlock.

Cc: Will Deacon
Cc: Mark Rutland
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Alexander Shishkin
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Catalin Marinas
Signed-off-by: Julien Thierry
[Explained why locking is not needed, removed WARN_ONs]
Signed-off-by: Alexandru Elisei
---
 arch/arm64/kernel/perf_event.c | 28 ----------------------------
 1 file changed, 28 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index ac818abd65b6..80744c2f1454 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -672,15 +672,10 @@ static inline u32 armv8pmu_getreset_flags(void)
 
 static void armv8pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 
 	/*
 	 * Disable counter
@@ -708,21 +703,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 */
 	isb();
 	armv8pmu_enable_event_counter(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
 	/*
 	 * Disable counter
 	 */
@@ -732,30 +716,18 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv8pmu_disable_event_irq(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	/* Disable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
-- 
2.28.0