From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mark Rutland, Will Deacon, Suzuki K Poulose, Sasha Levin
Subject: [PATCH 4.18 221/235] arm64: perf: Disable PMU while processing counter overflows
Date: Mon, 24 Sep 2018 13:53:27 +0200
Message-Id: <20180924113125.420432457@linuxfoundation.org>
In-Reply-To: <20180924113103.999624566@linuxfoundation.org>
References: <20180924113103.999624566@linuxfoundation.org>

4.18-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Suzuki K Poulose

[ Upstream commit 3cce50dfec4a5b0414c974190940f47dd32c6dee ]

The arm64 PMU updates the event counters and reprograms the counters in
the overflow IRQ handler without disabling the PMU. This could
potentially cause skews for group counters, where the overflowed
counters may lose some event counts while they are reprogrammed. To
prevent this, disable the PMU while we process the counter overflows
and enable it right back when we are done.

This patch also moves the PMU stop/start routines to avoid a forward
declaration.

Suggested-by: Mark Rutland
Cc: Will Deacon
Acked-by: Mark Rutland
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/kernel/perf_event.c | 50 ++++++++++++++++++++++-------------------
 1 file changed, 28 insertions(+), 22 deletions(-)

--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -670,6 +670,28 @@ static void armv8pmu_disable_event(struc
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
+static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Enable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
+static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Disable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
 	u32 pmovsr;
@@ -694,6 +716,11 @@ static irqreturn_t armv8pmu_handle_irq(s
 	 */
 	regs = get_irq_regs();
 
+	/*
+	 * Stop the PMU while processing the counter overflows
+	 * to prevent skews in group events.
+	 */
+	armv8pmu_stop(cpu_pmu);
 	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
@@ -718,6 +745,7 @@ static irqreturn_t armv8pmu_handle_irq(s
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
+	armv8pmu_start(cpu_pmu);
 
 	/*
 	 * Handle the pending perf events.
@@ -731,28 +759,6 @@ static irqreturn_t armv8pmu_handle_irq(s
 	return IRQ_HANDLED;
 }
 
-static void armv8pmu_start(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Enable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
-static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Disable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
 static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 				  struct perf_event *event)
 {