From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com,
	will@kernel.org
Cc: Anshuman Khandual, Catalin Marinas, James Clark, Rob Herring,
	Marc Zyngier, Ingo Molnar, linux-perf-users@vger.kernel.org
Subject: [PATCH 6/8] arm64/perf: Drive BRBE from perf event states
Date: Mon, 13 Jun 2022 15:31:17 +0530
Message-Id: <20220613100119.684673-7-anshuman.khandual@arm.com>
In-Reply-To: <20220613100119.684673-1-anshuman.khandual@arm.com>
References: <20220613100119.684673-1-anshuman.khandual@arm.com>

Branch stack sampling rides along the normal perf event, and all the branch
records get captured during the PMU interrupt. This changes perf event
handling on the arm64 platform to accommodate the BRBE operations required
to enable branch stack sampling support.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Will Deacon
Cc: Catalin Marinas
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/kernel/perf_event.c |  6 +++++
 drivers/perf/arm_pmu.c         | 40 ++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index e7013699171f..5bfaba8edad1 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -874,6 +874,12 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		if (has_branch_stack(event)) {
+			cpu_pmu->brbe_read(cpuc, event);
+			data.br_stack = &cpuc->brbe_stack;
+			cpu_pmu->brbe_reset(cpuc);
+		}
+
 		/*
 		 * Perf event overflow will queue the processing of the event as
 		 * an irq_work which will be taken care of in the handling of
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 59d3980b8ca2..52a903d26702 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -271,12 +271,22 @@ armpmu_stop(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to update the counter, so ignore
 	 * PERF_EF_UPDATE, see comments in armpmu_start().
 	 */
 	if (!(hwc->state & PERF_HES_STOPPED)) {
+		if (has_branch_stack(event)) {
+			WARN_ON_ONCE(!hw_events->brbe_users);
+			hw_events->brbe_users--;
+			if (!hw_events->brbe_users) {
+				hw_events->brbe_context = NULL;
+				armpmu->brbe_disable(hw_events);
+			}
+		}
+
 		armpmu->disable(event);
 		armpmu_event_update(event);
 		hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
@@ -287,6 +297,7 @@ static void armpmu_start(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to reprogram the period, so ignore
@@ -304,6 +315,14 @@ static void armpmu_start(struct perf_event *event, int flags)
 	 * happened since disabling.
 	 */
 	armpmu_event_set_period(event);
+	if (has_branch_stack(event)) {
+		if (event->ctx->task && hw_events->brbe_context != event->ctx) {
+			armpmu->brbe_reset(hw_events);
+			hw_events->brbe_context = event->ctx;
+		}
+		armpmu->brbe_enable(hw_events);
+		hw_events->brbe_users++;
+	}
 	armpmu->enable(event);
 }
 
@@ -349,6 +368,10 @@ armpmu_add(struct perf_event *event, int flags)
 
 	hw_events->events[idx] = event;
 	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+
+	if (has_branch_stack(event))
+		armpmu->brbe_filter(hw_events, event);
+
 	if (flags & PERF_EF_START)
 		armpmu_start(event, PERF_EF_RELOAD);
 
@@ -443,6 +466,7 @@ __hw_perf_event_init(struct perf_event *event)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 	int mapping;
 
 	hwc->flags = 0;
@@ -492,6 +516,9 @@ __hw_perf_event_init(struct perf_event *event)
 		local64_set(&hwc->period_left, hwc->sample_period);
 	}
 
+	if (has_branch_stack(event))
+		armpmu->brbe_filter(hw_events, event);
+
 	return validate_group(event);
 }
 
@@ -520,6 +547,18 @@ static int armpmu_event_init(struct perf_event *event)
 	return __hw_perf_event_init(event);
 }
 
+static void armpmu_sched_task(struct perf_event_context *ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(ctx->pmu);
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
+
+	if (!hw_events->brbe_users)
+		return;
+
+	if (sched_in)
+		armpmu->brbe_reset(hw_events);
+}
+
 static void armpmu_enable(struct pmu *pmu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -877,6 +916,7 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 	}
 
 	pmu->pmu = (struct pmu) {
+		.sched_task	= armpmu_sched_task,
 		.pmu_enable	= armpmu_enable,
 		.pmu_disable	= armpmu_disable,
 		.event_init	= armpmu_event_init,
-- 
2.25.1
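
[Editor's note, not part of the patch] The brbe_*() hooks above only take
effect for events where has_branch_stack() is true, i.e. events opened with
PERF_SAMPLE_BRANCH_STACK in their sample_type. A minimal userspace sketch of
such a consumer follows; the event type, sample period, and branch filter
flags are illustrative choices, while the perf_event_open() ABI and the
perf_event_attr fields used are the standard ones:

	#include <linux/perf_event.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct perf_event_attr attr;
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = PERF_COUNT_HW_CPU_CYCLES;
		attr.sample_period = 100000;

		/*
		 * PERF_SAMPLE_BRANCH_STACK is what makes has_branch_stack()
		 * true on the kernel side, routing this event through the
		 * brbe_enable()/brbe_filter()/brbe_read() paths added above.
		 */
		attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
		attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY |
					  PERF_SAMPLE_BRANCH_USER;

		attr.disabled = 1;
		attr.exclude_kernel = 1;
		attr.exclude_hv = 1;

		/* Monitor the calling thread (pid 0) on any CPU (cpu -1). */
		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
		if (fd < 0) {
			perror("perf_event_open");
			return 1;
		}

		/*
		 * ... mmap() the ring buffer and parse PERF_RECORD_SAMPLE
		 * records, each now carrying perf_branch_entry records ...
		 */
		close(fd);
		return 0;
	}

The perf tool requests the same thing with something like
"perf record -j any,u -e cycles -- <cmd>". With such an event running, each
counter overflow lands in armv8pmu_handle_irq(), which captures the branch
records via brbe_read() into cpuc->brbe_stack, points data.br_stack at them
so they are emitted with the sample, and clears the unit with brbe_reset()
so the next sample starts from a clean capture.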