From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
    Marc Zyngier, Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 5/7] arm64/perf: Drive BRBE from perf event states
Date: Mon, 7 Nov 2022 11:55:12 +0530
Message-Id: <20221107062514.2851047-6-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.25.1

Branch stack sampling rides along the normal perf event, and all the
branch records get captured during the PMU interrupt. This change
updates perf event handling on the arm64 platform to accommodate the
BRBE operations required to enable branch stack sampling support.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Will Deacon
Cc: Catalin Marinas
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/kernel/perf_event.c |  7 ++++++
 drivers/perf/arm_pmu.c         | 40 ++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+)
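As background for the change below: branch stack capture is requested
from userspace through the regular perf_event_open() interface by
setting PERF_SAMPLE_BRANCH_STACK in sample_type, which is exactly the
bit the has_branch_stack() checks in this patch key off. A minimal
sketch follows; the event type, sample period and branch filters are
illustrative choices, not something this patch mandates.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

/* Open a cycles event that also records branches (illustrative values). */
static int open_branch_sampling_event(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY |
				  PERF_SAMPLE_BRANCH_USER;
	attr.disabled = 1;

	/* pid == 0, cpu == -1: measure the calling task on any CPU. */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}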
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index c97377e28288..97db333d1208 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -874,6 +874,13 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		if (has_branch_stack(event)) {
+			cpu_pmu->brbe_read(cpuc, event);
+			data.br_stack = &cpuc->branches->brbe_stack;
+			data.sample_flags |= PERF_SAMPLE_BRANCH_STACK;
+			cpu_pmu->brbe_reset(cpuc);
+		}
+
 		/*
 		 * Perf event overflow will queue the processing of the event as
 		 * an irq_work which will be taken care of in the handling of
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 5048a500441e..1a8dca4e513e 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -271,12 +271,22 @@ armpmu_stop(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to update the counter, so ignore
 	 * PERF_EF_UPDATE, see comments in armpmu_start().
 	 */
 	if (!(hwc->state & PERF_HES_STOPPED)) {
+		if (has_branch_stack(event)) {
+			WARN_ON_ONCE(!hw_events->brbe_users);
+			hw_events->brbe_users--;
+			if (!hw_events->brbe_users) {
+				hw_events->brbe_context = NULL;
+				armpmu->brbe_disable(hw_events);
+			}
+		}
+
 		armpmu->disable(event);
 		armpmu_event_update(event);
 		hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
@@ -287,6 +297,7 @@ static void armpmu_start(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to reprogram the period, so ignore
@@ -304,6 +315,14 @@ static void armpmu_start(struct perf_event *event, int flags)
 	 * happened since disabling.
 	 */
 	armpmu_event_set_period(event);
+	if (has_branch_stack(event)) {
+		if (event->ctx->task && hw_events->brbe_context != event->ctx) {
+			armpmu->brbe_reset(hw_events);
+			hw_events->brbe_context = event->ctx;
+		}
+		armpmu->brbe_enable(hw_events);
+		hw_events->brbe_users++;
+	}
 	armpmu->enable(event);
 }
@@ -349,6 +368,10 @@ armpmu_add(struct perf_event *event, int flags)
 	hw_events->events[idx] = event;
 	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+
+	if (has_branch_stack(event))
+		armpmu->brbe_filter(hw_events, event);
+
 	if (flags & PERF_EF_START)
 		armpmu_start(event, PERF_EF_RELOAD);
@@ -443,6 +466,7 @@ __hw_perf_event_init(struct perf_event *event)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 	int mapping;
 
 	hwc->flags = 0;
@@ -492,6 +516,9 @@ __hw_perf_event_init(struct perf_event *event)
 		local64_set(&hwc->period_left, hwc->sample_period);
 	}
 
+	if (has_branch_stack(event))
+		armpmu->brbe_filter(hw_events, event);
+
 	return validate_group(event);
 }
@@ -520,6 +547,18 @@ static int armpmu_event_init(struct perf_event *event)
 	return __hw_perf_event_init(event);
 }
 
+static void armpmu_sched_task(struct perf_event_context *ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(ctx->pmu);
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
+
+	if (!hw_events->brbe_users)
+		return;
+
+	if (sched_in)
+		armpmu->brbe_reset(hw_events);
+}
+
 static void armpmu_enable(struct pmu *pmu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -877,6 +916,7 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 	}
 
 	pmu->pmu = (struct pmu) {
+		.sched_task	= armpmu_sched_task,
 		.pmu_enable	= armpmu_enable,
 		.pmu_disable	= armpmu_disable,
 		.event_init	= armpmu_event_init,
-- 
2.25.1
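A usage note on the samples this ultimately produces: with
PERF_SAMPLE_BRANCH_STACK set, each PERF_RECORD_SAMPLE carries a u64
branch-record count followed by that many struct perf_branch_entry
records, per the perf_event_open(2) ABI. A minimal sketch of walking
that portion of an already-decoded sample payload follows; it assumes
PERF_SAMPLE_BRANCH_HW_INDEX was not requested (which would add a
leading hw_idx word) and that the caller has already consumed the
sample fields preceding the branch stack.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>

/*
 * 'p' points at the branch-record count (bnr) inside a decoded
 * PERF_RECORD_SAMPLE payload; the entries follow immediately after.
 */
static void print_branch_stack(const uint64_t *p)
{
	uint64_t i, bnr = *p++;
	const struct perf_branch_entry *br =
			(const struct perf_branch_entry *)p;

	for (i = 0; i < bnr; i++)
		printf("0x%llx -> 0x%llx%s\n",
		       (unsigned long long)br[i].from,
		       (unsigned long long)br[i].to,
		       br[i].mispred ? " (mispredicted)" : "");
}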