From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Suzuki K Poulose, Mark Rutland, Will Deacon, Aman Priyadarshi, Greg Kroah-Hartman
Subject: [PATCH 4.14 70/88] arm64: perf: Disable PMU while processing counter overflows
Date: Mon, 28 Jun 2021 10:36:10 -0400
Message-Id: <20210628143628.33342-71-sashal@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210628143628.33342-1-sashal@kernel.org>
References: <20210628143628.33342-1-sashal@kernel.org>
X-KernelTest-Patch: http://kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.238-rc1.gz
X-KernelTest-Tree: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
X-KernelTest-Branch: linux-4.14.y
X-KernelTest-Patches: git://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
X-KernelTest-Version: 4.14.238-rc1
X-KernelTest-Deadline: 2021-06-30T14:36+00:00
X-stable: review

From: Suzuki K Poulose

commit 3cce50dfec4a5b0414c974190940f47dd32c6dee upstream.

The arm64 PMU updates the event counters and reprograms the counters
in the overflow IRQ handler without disabling the PMU. This could
potentially cause skews in group counters, where the overflowed
counters may lose some event counts while they are reprogrammed.
To prevent this, disable the PMU while we process the counter
overflows and enable it right back when we are done.

This patch also moves the PMU stop/start routines to avoid a forward
declaration.
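In outline, the fix brackets the overflow-processing loop with the
stop/start helpers, so every counter in a group is frozen while any of
them is read and reprogrammed. A condensed sketch of the resulting
handler flow (per-counter bookkeeping elided; the diff below has the
complete change):

	static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
	{
		...
		/*
		 * Freeze all counters so group members cannot drift
		 * apart while the overflowed ones are reprogrammed.
		 */
		armv8pmu_stop(cpu_pmu);

		for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
			/*
			 * Update the event count, rearm the counter,
			 * and deliver any pending overflow to perf.
			 */
			...
		}

		/* Resume counting once every counter is consistent. */
		armv8pmu_start(cpu_pmu);
		...
		return IRQ_HANDLED;
	}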
Suggested-by: Mark Rutland
Cc: Will Deacon
Acked-by: Mark Rutland
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Aman Priyadarshi
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/kernel/perf_event.c | 50 +++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 53df84b2a07f..4ee1228d29eb 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -670,6 +670,28 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
+static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Enable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
+static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Disable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
 static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 {
 	u32 pmovsr;
@@ -695,6 +717,11 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 	 */
 	regs = get_irq_regs();
 
+	/*
+	 * Stop the PMU while processing the counter overflows
+	 * to prevent skews in group events.
+	 */
+	armv8pmu_stop(cpu_pmu);
 	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
@@ -719,6 +746,7 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
+	armv8pmu_start(cpu_pmu);
 
 	/*
 	 * Handle the pending perf events.
@@ -732,28 +760,6 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 	return IRQ_HANDLED;
 }
 
-static void armv8pmu_start(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Enable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
-static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Disable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
 static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 				  struct perf_event *event)
 {
-- 
2.30.2