2024-04-02 02:21:33

by Andrii Nakryiko

Subject: [PATCH v5 0/4] perf/x86/amd: add LBR capture support outside of hardware events

Add an AMD-specific implementation of the perf_snapshot_branch_stack static
call that allows LBR capture from arbitrary points in the kernel. This is
utilized by BPF programs. See patch #3 for all the details.
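
For reference, the static call interface being implemented here is declared
in include/linux/perf_event.h; a minimal excerpt:

    typedef int (perf_snapshot_branch_stack_t)(struct perf_branch_entry *entries,
                                               unsigned int cnt);
    DECLARE_STATIC_CALL(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t);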

Patches #1 and #2 are preparatory steps that ensure the LBR freezing
sequence is completely inlined and contains no branches, to minimize LBR
snapshot contamination.

Patch #4 removes an artificial restriction on perf events with LBR enabled.

v4->v5:
- rebased on top of perf/urgent branch to resolve conflicts with
598c2fafc06f ("perf/x86/amd/lbr: Use freeze based on availability").

Andrii Nakryiko (4):
perf/x86/amd: ensure amd_pmu_core_disable_all() is always inlined
perf/x86/amd: avoid taking branches before disabling LBR
perf/x86/amd: support capturing LBR from software events
perf/x86/amd: don't reject non-sampling events with configured LBR

arch/x86/events/amd/core.c | 37 +++++++++++++++++++++++++++++++++++-
arch/x86/events/amd/lbr.c | 13 +------------
arch/x86/events/perf_event.h | 13 +++++++++++++
3 files changed, 50 insertions(+), 13 deletions(-)

--
2.43.0



2024-04-02 02:21:49

by Andrii Nakryiko

Subject: [PATCH v5 1/4] perf/x86/amd: ensure amd_pmu_core_disable_all() is always inlined

In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of a hardware overflow event. So we need
to take care to minimize the number of branches and function calls/returns
on the path to freezing LBR, altering the LBR snapshot as little as
possible.

amd_pmu_core_disable_all() is one of the functions on this path, and it is
already marked as __always_inline. But it calls amd_pmu_set_global_ctl(),
which is marked as just inline. So, to guarantee that no function call is
generated anywhere along this path, mark amd_pmu_set_global_ctl() as
__always_inline as well.
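
As a quick illustration of the distinction (a minimal sketch; the kernel's
actual definition lives in include/linux/compiler_types.h):

    /* Plain inline is only a hint; the compiler may still emit a call: */
    static inline void may_become_a_call(void) { /* ... */ }

    /* __always_inline expands to inline __attribute__((__always_inline__))
     * and forces inlining regardless of the compiler's heuristics:
     */
    static __always_inline void never_a_call(void) { /* ... */ }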

Reviewed-by: Sandipan Das <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
---
arch/x86/events/amd/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 985ef3b47919..9b15afda0326 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -647,7 +647,7 @@ static void amd_pmu_cpu_dead(int cpu)
}
}

-static inline void amd_pmu_set_global_ctl(u64 ctl)
+static __always_inline void amd_pmu_set_global_ctl(u64 ctl)
{
wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, ctl);
}
--
2.43.0


2024-04-02 02:22:05

by Andrii Nakryiko

Subject: [PATCH v5 2/4] perf/x86/amd: avoid taking branches before disabling LBR

In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of a hardware overflow event. So we need
to take care to minimize the number of branches and function calls/returns
on the path to freezing LBR, altering the LBR snapshot as little as
possible.

As such, split out the LBR disabling logic from the sanity checking logic
inside amd_pmu_lbr_disable_all(). This ensures that no branches are taken
before LBR is frozen in the functionality added in the next patch. Use
__always_inline to also eliminate any possible function calls.

Reviewed-by: Sandipan Das <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
---
arch/x86/events/amd/lbr.c | 9 +--------
arch/x86/events/perf_event.h | 13 +++++++++++++
2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 5149830c7c4f..33d0a45c0cd3 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -414,18 +414,11 @@ void amd_pmu_lbr_enable_all(void)
void amd_pmu_lbr_disable_all(void)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
- u64 dbg_ctl, dbg_extn_cfg;

if (!cpuc->lbr_users || !x86_pmu.lbr_nr)
return;

- rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
- wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
-
- if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
- rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
- wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
- }
+ __amd_pmu_lbr_disable();
}

__init int amd_pmu_lbr_init(void)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fb56518356ec..72b022a1e16c 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1329,6 +1329,19 @@ void amd_pmu_lbr_enable_all(void);
void amd_pmu_lbr_disable_all(void);
int amd_pmu_lbr_hw_config(struct perf_event *event);

+static __always_inline void __amd_pmu_lbr_disable(void)
+{
+ u64 dbg_ctl, dbg_extn_cfg;
+
+ rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
+ wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
+
+ if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
+ rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
+ wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+ }
+}
+
#ifdef CONFIG_PERF_EVENTS_AMD_BRS

#define AMD_FAM19H_BRS_EVENT 0xc4 /* RETIRED_TAKEN_BRANCH_INSTRUCTIONS */
--
2.43.0


2024-04-02 02:22:20

by Andrii Nakryiko

Subject: [PATCH v5 3/4] perf/x86/amd: support capturing LBR from software events

Upstream commit c22ac2a3d4bd ("perf: Enable branch record for software
events") added the ability to capture LBR (Last Branch Records) on Intel
CPUs from inside a BPF program at pretty much any arbitrary point. This is
an extremely useful capability that helps figure out otherwise
hard-to-debug problems, because LBR capture can be triggered on
application-defined conditions, not just hardware-supported events.

retsnoop ([0]) is one such tool that takes huge advantage of this
functionality and has proved extremely useful in practice.

Now, AMD Zen4 CPUs have gained support for similar LBR functionality, but
the necessary wiring inside the kernel is not yet set up. This patch
rectifies that and follows a similar approach to the original patch for
Intel CPUs: we implement an AMD-specific callback that is invoked through
the perf_snapshot_branch_stack static call.

The previous preparatory patches ensured that amd_pmu_core_disable_all()
and __amd_pmu_lbr_disable() are completely inlined and contain no
branches, so LBR snapshot contamination is minimized.

This was tested on AMD Bergamo CPU and worked well when utilized from
the aforementioned retsnoop tool.

[0] https://github.com/anakryiko/retsnoop
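
For illustration, here is a rough sketch of how a BPF program can consume
this via the bpf_get_branch_snapshot() helper (which invokes the
perf_snapshot_branch_stack static call underneath). The attach point and
buffer size are illustrative assumptions, not part of this series:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    #define MAX_LBR_ENTRIES 32

    /* global buffer for captured LBR records */
    struct perf_branch_entry lbrs[MAX_LBR_ENTRIES];

    SEC("fexit/do_unlinkat") /* hypothetical attach point */
    int BPF_PROG(snapshot_lbr)
    {
            long sz;

            /* capture as early as possible to minimize contamination;
             * on success the helper returns the number of bytes written
             */
            sz = bpf_get_branch_snapshot(lbrs, sizeof(lbrs), 0);
            if (sz < 0)
                    return 0;

            bpf_printk("captured %ld LBR entries",
                       (long)(sz / sizeof(struct perf_branch_entry)));
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";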

Reviewed-by: Sandipan Das <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
---
arch/x86/events/amd/core.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 9b15afda0326..1fc4ce44e743 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -907,6 +907,37 @@ static int amd_pmu_handle_irq(struct pt_regs *regs)
return amd_pmu_adjust_nmi_window(handled);
}

+/*
+ * AMD-specific callback invoked through perf_snapshot_branch_stack static
+ * call, defined in include/linux/perf_event.h. See its definition for API
+ * details. It's up to caller to provide enough space in *entries* to fit all
+ * LBR records, otherwise returned result will be truncated to *cnt* entries.
+ */
+static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+ struct cpu_hw_events *cpuc;
+ unsigned long flags;
+
+ /*
+ * The sequence of steps to freeze LBR should be completely inlined
+ * and contain no branches to minimize contamination of LBR snapshot
+ */
+ local_irq_save(flags);
+ amd_pmu_core_disable_all();
+ __amd_pmu_lbr_disable();
+
+ cpuc = this_cpu_ptr(&cpu_hw_events);
+
+ amd_pmu_lbr_read();
+ cnt = min(cnt, x86_pmu.lbr_nr);
+ memcpy(entries, cpuc->lbr_entries, sizeof(struct perf_branch_entry) * cnt);
+
+ amd_pmu_v2_enable_all(0);
+ local_irq_restore(flags);
+
+ return cnt;
+}
+
static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -1443,6 +1474,10 @@ static int __init amd_core_pmu_init(void)
static_call_update(amd_pmu_branch_reset, amd_pmu_lbr_reset);
static_call_update(amd_pmu_branch_add, amd_pmu_lbr_add);
static_call_update(amd_pmu_branch_del, amd_pmu_lbr_del);
+
+ /* Only support branch_stack snapshot on perfmon v2 */
+ if (x86_pmu.handle_irq == amd_pmu_v2_handle_irq)
+ static_call_update(perf_snapshot_branch_stack, amd_pmu_v2_snapshot_branch_stack);
} else if (!amd_brs_init()) {
/*
* BRS requires special event constraints and flushing on ctxsw.
--
2.43.0


2024-04-02 02:22:36

by Andrii Nakryiko

Subject: [PATCH v5 4/4] perf/x86/amd: don't reject non-sampling events with configured LBR

Now that it's possible to capture LBR on AMD CPUs from BPF at an arbitrary
point, there is no reason to artificially limit this feature to just
sampling events. So the corresponding check is removed. AFAIU, there are
no correctness implications of doing this (and it was possible to bypass
this check by just setting the perf_event's sample_period to 1 anyway, so
it didn't guard all that much).
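
For illustration, a counting (non-sampling) event with branch stack
configured, which this check used to reject, would be set up roughly like
this (a user-space sketch using the standard perf_event_open API; names
are illustrative):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* sketch: a counting perf event with branch stack configured;
     * sample_period stays 0, so is_sampling_event() is false and
     * amd_pmu_lbr_hw_config() would have rejected it before this patch
     */
    static int open_counting_lbr_event(void)
    {
            struct perf_event_attr attr = {
                    .type = PERF_TYPE_HARDWARE,
                    .size = sizeof(attr),
                    .config = PERF_COUNT_HW_CPU_CYCLES,
                    .sample_type = PERF_SAMPLE_BRANCH_STACK,
                    .branch_sample_type = PERF_SAMPLE_BRANCH_ANY,
            };

            return (int)syscall(SYS_perf_event_open, &attr,
                                0 /* pid: calling process */, -1 /* any CPU */,
                                -1 /* no group */, 0 /* flags */);
    }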

Reviewed-by: Sandipan Das <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
---
arch/x86/events/amd/lbr.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 33d0a45c0cd3..19c7b76e21bc 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -310,10 +310,6 @@ int amd_pmu_lbr_hw_config(struct perf_event *event)
{
int ret = 0;

- /* LBR is not recommended in counting mode */
- if (!is_sampling_event(event))
- return -EINVAL;
-
ret = amd_pmu_lbr_setup_filter(event);
if (!ret)
event->attach_state |= PERF_ATTACH_SCHED_CB;
--
2.43.0


2024-04-03 07:29:41

by Ingo Molnar

Subject: Re: [PATCH v5 0/4] perf/x86/amd: add LBR capture support outside of hardware events


* Andrii Nakryiko <[email protected]> wrote:

> Add an AMD-specific implementation of the perf_snapshot_branch_stack static
> call that allows LBR capture from arbitrary points in the kernel. This is
> utilized by BPF programs. See patch #3 for all the details.
>
> Patches #1 and #2 are preparatory steps that ensure the LBR freezing
> sequence is completely inlined and contains no branches, to minimize LBR
> snapshot contamination.
>
> Patch #4 removes an artificial restriction on perf events with LBR enabled.
>
> v4->v5:
> - rebased on top of perf/urgent branch to resolve conflicts with
> 598c2fafc06f ("perf/x86/amd/lbr: Use freeze based on availability").
>
> Andrii Nakryiko (4):
> perf/x86/amd: ensure amd_pmu_core_disable_all() is always inlined
> perf/x86/amd: avoid taking branches before disabling LBR
> perf/x86/amd: support capturing LBR from software events
> perf/x86/amd: don't reject non-sampling events with configured LBR
>
> arch/x86/events/amd/core.c | 37 +++++++++++++++++++++++++++++++++++-
> arch/x86/events/amd/lbr.c | 13 +------------
> arch/x86/events/perf_event.h | 13 +++++++++++++
> 3 files changed, 50 insertions(+), 13 deletions(-)

Applied to tip:perf/core for a v6.10 merge, thanks a lot Andrii!

Ingo

2024-04-03 07:41:17

by tip-bot2 for Andrii Nakryiko

Subject: [tip: perf/core] perf/x86/amd: Don't reject non-sampling events with configured LBR

The following commit has been merged into the perf/core branch of tip:

Commit-ID: 9794563d4d053b1b46a0cc91901f0a11d8678c19
Gitweb: https://git.kernel.org/tip/9794563d4d053b1b46a0cc91901f0a11d8678c19
Author: Andrii Nakryiko <[email protected]>
AuthorDate: Mon, 01 Apr 2024 19:21:18 -07:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 03 Apr 2024 09:14:26 +02:00

perf/x86/amd: Don't reject non-sampling events with configured LBR

Now that it's possible to capture LBR on AMD CPUs from BPF at an arbitrary
point, there is no reason to artificially limit this feature to just
sampling events. So the corresponding check is removed. AFAIU, there are
no correctness implications of doing this (and it was possible to bypass
this check by just setting the perf_event's sample_period to 1 anyway, so
it didn't guard all that much).

Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Sandipan Das <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/events/amd/lbr.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 33d0a45..19c7b76 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -310,10 +310,6 @@ int amd_pmu_lbr_hw_config(struct perf_event *event)
{
int ret = 0;

- /* LBR is not recommended in counting mode */
- if (!is_sampling_event(event))
- return -EINVAL;
-
ret = amd_pmu_lbr_setup_filter(event);
if (!ret)
event->attach_state |= PERF_ATTACH_SCHED_CB;

2024-04-03 07:41:27

by tip-bot2 for Andrii Nakryiko

Subject: [tip: perf/core] perf/x86/amd: Support capturing LBR from software events

The following commit has been merged into the perf/core branch of tip:

Commit-ID: a4d18112e5317c120bcadeb486fbe950f749bb5e
Gitweb: https://git.kernel.org/tip/a4d18112e5317c120bcadeb486fbe950f749bb5e
Author: Andrii Nakryiko <[email protected]>
AuthorDate: Mon, 01 Apr 2024 19:21:17 -07:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 03 Apr 2024 09:14:26 +02:00

perf/x86/amd: Support capturing LBR from software events

Upstream commit c22ac2a3d4bd ("perf: Enable branch record for software
events") added the ability to capture LBR (Last Branch Records) on Intel
CPUs from inside a BPF program at pretty much any arbitrary point. This is
an extremely useful capability that helps figure out otherwise
hard-to-debug problems, because LBR capture can be triggered on
application-defined conditions, not just hardware-supported events.

'retsnoop' is one such tool that takes huge advantage of this
functionality and has proved extremely useful in practice:

https://github.com/anakryiko/retsnoop

Now, AMD Zen4 CPUs have gained support for similar LBR functionality, but
the necessary wiring inside the kernel is not yet set up. This patch
rectifies that and follows a similar approach to the original patch for
Intel CPUs: we implement an AMD-specific callback that is invoked through
the perf_snapshot_branch_stack static call.

The previous preparatory patches ensured that amd_pmu_core_disable_all()
and __amd_pmu_lbr_disable() are completely inlined and contain no
branches, so LBR snapshot contamination is minimized.

This was tested on AMD Bergamo CPU and worked well when utilized from
the aforementioned retsnoop tool.

Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Sandipan Das <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/events/amd/core.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 9b15afd..1fc4ce4 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -907,6 +907,37 @@ static int amd_pmu_handle_irq(struct pt_regs *regs)
return amd_pmu_adjust_nmi_window(handled);
}

+/*
+ * AMD-specific callback invoked through perf_snapshot_branch_stack static
+ * call, defined in include/linux/perf_event.h. See its definition for API
+ * details. It's up to caller to provide enough space in *entries* to fit all
+ * LBR records, otherwise returned result will be truncated to *cnt* entries.
+ */
+static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+ struct cpu_hw_events *cpuc;
+ unsigned long flags;
+
+ /*
+ * The sequence of steps to freeze LBR should be completely inlined
+ * and contain no branches to minimize contamination of LBR snapshot
+ */
+ local_irq_save(flags);
+ amd_pmu_core_disable_all();
+ __amd_pmu_lbr_disable();
+
+ cpuc = this_cpu_ptr(&cpu_hw_events);
+
+ amd_pmu_lbr_read();
+ cnt = min(cnt, x86_pmu.lbr_nr);
+ memcpy(entries, cpuc->lbr_entries, sizeof(struct perf_branch_entry) * cnt);
+
+ amd_pmu_v2_enable_all(0);
+ local_irq_restore(flags);
+
+ return cnt;
+}
+
static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -1443,6 +1474,10 @@ static int __init amd_core_pmu_init(void)
static_call_update(amd_pmu_branch_reset, amd_pmu_lbr_reset);
static_call_update(amd_pmu_branch_add, amd_pmu_lbr_add);
static_call_update(amd_pmu_branch_del, amd_pmu_lbr_del);
+
+ /* Only support branch_stack snapshot on perfmon v2 */
+ if (x86_pmu.handle_irq == amd_pmu_v2_handle_irq)
+ static_call_update(perf_snapshot_branch_stack, amd_pmu_v2_snapshot_branch_stack);
} else if (!amd_brs_init()) {
/*
* BRS requires special event constraints and flushing on ctxsw.

2024-04-03 07:44:22

by tip-bot2 for Andrii Nakryiko

Subject: [tip: perf/core] perf/x86/amd: Ensure amd_pmu_core_disable_all() is always inlined

The following commit has been merged into the perf/core branch of tip:

Commit-ID: 0dbf66fa7e80024629f816c2ec7a9f3d39637822
Gitweb: https://git.kernel.org/tip/0dbf66fa7e80024629f816c2ec7a9f3d39637822
Author: Andrii Nakryiko <[email protected]>
AuthorDate: Mon, 01 Apr 2024 19:21:15 -07:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 03 Apr 2024 09:14:26 +02:00

perf/x86/amd: Ensure amd_pmu_core_disable_all() is always inlined

In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of a hardware overflow event. So we need
to take care to minimize the number of branches and function calls/returns
on the path to freezing LBR, altering the LBR snapshot as little as
possible.

amd_pmu_core_disable_all() is one of the functions on this path, and it is
already marked as __always_inline. But it calls amd_pmu_set_global_ctl(),
which is marked as just inline. So, to guarantee that no function call is
generated anywhere along this path, mark amd_pmu_set_global_ctl() as
__always_inline as well.

Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Sandipan Das <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/events/amd/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 985ef3b..9b15afd 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -647,7 +647,7 @@ static void amd_pmu_cpu_dead(int cpu)
}
}

-static inline void amd_pmu_set_global_ctl(u64 ctl)
+static __always_inline void amd_pmu_set_global_ctl(u64 ctl)
{
wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, ctl);
}

2024-04-03 08:01:27

by tip-bot2 for Andrii Nakryiko

Subject: [tip: perf/core] perf/x86/amd: Avoid taking branches before disabling LBR

The following commit has been merged into the perf/core branch of tip:

Commit-ID: 1eddf187e5d087de4560ec7c3baa2f8283920710
Gitweb: https://git.kernel.org/tip/1eddf187e5d087de4560ec7c3baa2f8283920710
Author: Andrii Nakryiko <[email protected]>
AuthorDate: Mon, 01 Apr 2024 19:21:16 -07:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 03 Apr 2024 09:14:26 +02:00

perf/x86/amd: Avoid taking branches before disabling LBR

In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of a hardware overflow event. So we need
to take care to minimize the number of branches and function calls/returns
on the path to freezing LBR, altering the LBR snapshot as little as
possible.

As such, split out the LBR disabling logic from the sanity checking logic
inside amd_pmu_lbr_disable_all(). This ensures that no branches are taken
before LBR is frozen in the functionality added in the next patch. Use
__always_inline to also eliminate any possible function calls.

Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Sandipan Das <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/events/amd/lbr.c | 9 +--------
arch/x86/events/perf_event.h | 13 +++++++++++++
2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 5149830..33d0a45 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -414,18 +414,11 @@ void amd_pmu_lbr_enable_all(void)
void amd_pmu_lbr_disable_all(void)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
- u64 dbg_ctl, dbg_extn_cfg;

if (!cpuc->lbr_users || !x86_pmu.lbr_nr)
return;

- rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
- wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
-
- if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
- rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
- wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
- }
+ __amd_pmu_lbr_disable();
}

__init int amd_pmu_lbr_init(void)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fb56518..72b022a 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1329,6 +1329,19 @@ void amd_pmu_lbr_enable_all(void);
void amd_pmu_lbr_disable_all(void);
int amd_pmu_lbr_hw_config(struct perf_event *event);

+static __always_inline void __amd_pmu_lbr_disable(void)
+{
+ u64 dbg_ctl, dbg_extn_cfg;
+
+ rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
+ wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
+
+ if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
+ rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
+ wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+ }
+}
+
#ifdef CONFIG_PERF_EVENTS_AMD_BRS

#define AMD_FAM19H_BRS_EVENT 0xc4 /* RETIRED_TAKEN_BRANCH_INSTRUCTIONS */