2019-11-15 13:41:44

by Liang, Kan

Subject: [PATCH] perf/x86/intel: Avoid PEBS_ENABLE MSR access in PMI

From: Kan Liang <[email protected]>

The perf PMI handler, intel_pmu_handle_irq(), currently does
unnecessary MSR accesses when PEBS is enabled.

When entering the handler, global ctrl is explicitly disabled, so no
counters count anymore; it doesn't matter whether PEBS is enabled or
disabled. Furthermore, cpuc->pebs_enabled is not changed in the PMI,
so the PEBS status doesn't change and the PEBS_ENABLE MSR doesn't need
to be changed either.

When exiting the handler, only an active PMU will be restored.
For an active PMU, the PEBS status is unchanged during the PMI handler,
so avoiding the PEBS MSR access is harmless.
For an inactive PMU, disable_all() will be called right after the PMI
handler, which will eventually disable PEBS. During the window between
PMI handler exit and PEBS finally being disabled, global ctrl stays
disabled, since we don't restore the PMU state for an inactive PMU.
This case is also harmless.
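
For reference, the enter/exit path in question looks roughly like this
(a simplified sketch paraphrased from arch/x86/events/intel/core.c of
this era; the status loop, BTS and APIC-ack handling are elided, so
treat it as an approximation rather than the verbatim source):

static int intel_pmu_handle_irq(struct pt_regs *regs)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int pmu_enabled = cpuc->enabled;	/* save the PMU state */
	int handled;

	cpuc->enabled = 0;
	__intel_pmu_disable_all();	/* global ctrl off: nothing counts */

	handled = handle_pmi_common(regs, intel_pmu_get_status());

	/* Only restore the PMU state when it's active. */
	cpuc->enabled = pmu_enabled;
	if (pmu_enabled)
		__intel_pmu_enable_all(0, true);

	return handled;
}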

Use ftrace to measure the duration of intel_pmu_handle_irq() on BDX:
  # perf record -e cycles:P -- ./tchain_edit

The average duration of intel_pmu_handle_irq():
  Without the patch:  1.144 us
  With the patch:     1.025 us

Signed-off-by: Kan Liang <[email protected]>
---
arch/x86/events/intel/core.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index dc64b16e6b71..d715eb966334 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -1945,6 +1945,11 @@ static __initconst const u64 knl_hw_cache_extra_regs
  * intel_bts events don't coexist with intel PMU's BTS events because of
  * x86_add_exclusive(x86_lbr_exclusive_lbr); there's no need to keep them
  * disabled around intel PMU's event batching etc, only inside the PMI handler.
+ *
+ * Avoid PEBS_ENABLE MSR access in PMIs.
+ * GLOBAL_CTRL has been disabled, so no counters count anymore; it
+ * doesn't matter whether PEBS is enabled or disabled.
+ * Furthermore, the PEBS status doesn't change in a PMI.
  */
 static void __intel_pmu_disable_all(void)
 {
@@ -1954,13 +1959,12 @@ static void __intel_pmu_disable_all(void)
 
 	if (test_bit(INTEL_PMC_IDX_FIXED_BTS, cpuc->active_mask))
 		intel_pmu_disable_bts();
-
-	intel_pmu_pebs_disable_all();
 }
 
 static void intel_pmu_disable_all(void)
 {
 	__intel_pmu_disable_all();
+	intel_pmu_pebs_disable_all();
 	intel_pmu_lbr_disable_all();
 }
 
@@ -1968,7 +1972,6 @@ static void __intel_pmu_enable_all(int added, bool pmi)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 
-	intel_pmu_pebs_enable_all();
 	intel_pmu_lbr_enable_all(pmi);
 	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
 	       x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask);
@@ -1986,6 +1989,7 @@ static void __intel_pmu_enable_all(int added, bool pmi)
 
 static void intel_pmu_enable_all(int added)
 {
+	intel_pmu_pebs_enable_all();
 	__intel_pmu_enable_all(added, false);
 }

--
2.17.1


2019-11-15 14:09:55

by Peter Zijlstra

Subject: Re: [PATCH] perf/x86/intel: Avoid PEBS_ENABLE MSR access in PMI

On Fri, Nov 15, 2019 at 05:39:17AM -0800, [email protected] wrote:
> From: Kan Liang <[email protected]>
>
> The perf PMI handler, intel_pmu_handle_irq(), currently does
> unnecessary MSR accesses when PEBS is enabled.
>
> When entering the handler, global ctrl is explicitly disabled, so no
> counters count anymore; it doesn't matter whether PEBS is enabled or
> disabled. Furthermore, cpuc->pebs_enabled is not changed in the PMI,
> so the PEBS status doesn't change and the PEBS_ENABLE MSR doesn't need
> to be changed either.

PMI can throttle, and iirc x86_pmu_stop() ends up in
intel_pmu_pebs_disable()
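
For context, the throttle path referred to above is roughly the
following call chain (a sketch reconstructed from kernel sources of
this era, not quoted from the thread):

/*
 * handle_pmi_common()
 *   -> perf_event_overflow()		// signals that throttling kicked in
 *   -> x86_pmu_stop(event, 0)		// when the overflow handler says stop
 *        -> x86_pmu.disable(event)	// intel_pmu_disable_event()
 *             -> intel_pmu_pebs_disable(event)	// for PEBS (:P) events
 *                  -> wrmsrl(MSR_IA32_PEBS_ENABLE, ...)
 */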

2019-11-15 14:48:52

by Liang, Kan

Subject: Re: [PATCH] perf/x86/intel: Avoid PEBS_ENABLE MSR access in PMI



On 11/15/2019 9:07 AM, Peter Zijlstra wrote:
> On Fri, Nov 15, 2019 at 05:39:17AM -0800, [email protected] wrote:
>> From: Kan Liang <[email protected]>
>>
>> The perf PMI handler, intel_pmu_handle_irq(), currently does
>> unnecessary MSR accesses when PEBS is enabled.
>>
>> When entering the handler, global ctrl is explicitly disabled, so no
>> counters count anymore; it doesn't matter whether PEBS is enabled or
>> disabled. Furthermore, cpuc->pebs_enabled is not changed in the PMI,
>> so the PEBS status doesn't change and the PEBS_ENABLE MSR doesn't need
>> to be changed either.
>
> PMI can throttle, and iirc x86_pmu_stop() ends up in
> intel_pmu_pebs_disable()
>

Right, the description is inaccurate. I will fix it in v2.
But the patch still works for the case of PMI throttle.
intel_pmu_pebs_disable() will update cpuc->pebs_enabled and
unconditionally modify MSR_IA32_PEBS_ENABLE.
When exiting the handler, current perf re-writes the PEBS MSR according
to the updated cpuc->pebs_enabled, which is still unnecessary.
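
For reference, the two helpers the patch moves look roughly like this
(paraphrased from arch/x86/events/intel/ds.c of this era; a sketch, not
verbatim). They simply mirror the software state in cpuc->pebs_enabled
into the MSR, which is why re-writing the MSR on handler exit repeats
work already done:

void intel_pmu_pebs_enable_all(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

	if (cpuc->pebs_enabled)
		wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
}

void intel_pmu_pebs_disable_all(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

	if (cpuc->pebs_enabled)
		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
}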

Thanks,
Kan

2019-11-15 18:06:26

by Liang, Kan

Subject: Re: [PATCH] perf/x86/intel: Avoid PEBS_ENABLE MSR access in PMI



On 11/15/2019 9:46 AM, Liang, Kan wrote:
>
>
> On 11/15/2019 9:07 AM, Peter Zijlstra wrote:
>> On Fri, Nov 15, 2019 at 05:39:17AM -0800, [email protected] wrote:
>>> From: Kan Liang <[email protected]>
>>>
>>> The perf PMI handler, intel_pmu_handle_irq(), currently does
>>> unnecessary MSR accesses when PEBS is enabled.
>>>
>>> When entering the handler, global ctrl is explicitly disabled, so no
>>> counters count anymore; it doesn't matter whether PEBS is enabled or
>>> disabled. Furthermore, cpuc->pebs_enabled is not changed in the PMI,
>>> so the PEBS status doesn't change and the PEBS_ENABLE MSR doesn't need
>>> to be changed either.
>>
>> PMI can throttle, and iirc x86_pmu_stop() ends up in
>> intel_pmu_pebs_disable()
>>
>
> Right, the description is inaccurate. I will fix it in v2.
> But the patch still works for the case of PMI throttle.
> intel_pmu_pebs_disable() will update cpuc->pebs_enabled and
> unconditionally modify MSR_IA32_PEBS_ENABLE.

This doesn't seem to hold for one corner case, though.
A PMI may land after cpuc->enabled=0 in x86_pmu_disable(), and PMI
throttle may be triggered for that PMI. For this rare case,
intel_pmu_pebs_disable() will not touch MSR_IA32_PEBS_ENABLE.

I don't have a test case for this rare corner case, but I think we may
have to handle it in the PMI handler as well. I will add the code below
to v2.

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index bc6468329c52..7198a372a5ab 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2620,6 +2624,15 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 		handled++;
 		x86_pmu.drain_pebs(regs);
 		status &= x86_pmu.intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
+
+		/*
+		 * PMI may land after cpuc->enabled=0 in x86_pmu_disable() and
+		 * PMI throttle may be triggered for the PMI.
+		 * For this rare case, intel_pmu_pebs_disable() will not touch
+		 * MSR_IA32_PEBS_ENABLE. Explicitly disable the PEBS here.
+		 */
+		if (unlikely(!cpuc->enabled && !cpuc->pebs_enabled))
+			wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
 	}


Thanks,
Kan

> When exiting the handler, current perf re-writes the PEBS MSR according
> to the updated cpuc->pebs_enabled, which is still unnecessary.
>

2019-11-15 18:35:26

by Andi Kleen

Subject: Re: [PATCH] perf/x86/intel: Avoid PEBS_ENABLE MSR access in PMI

> @@ -2620,6 +2624,15 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>  		handled++;
>  		x86_pmu.drain_pebs(regs);
>  		status &= x86_pmu.intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
> +
> +		/*
> +		 * PMI may land after cpuc->enabled=0 in x86_pmu_disable() and
> +		 * PMI throttle may be triggered for the PMI.
> +		 * For this rare case, intel_pmu_pebs_disable() will not touch
> +		 * MSR_IA32_PEBS_ENABLE. Explicitly disable the PEBS here.
> +		 */
> +		if (unlikely(!cpuc->enabled && !cpuc->pebs_enabled))
> +			wrmsrl(MSR_IA32_PEBS_ENABLE, 0);

How does the enable_all() code know to reenable it in this case?

-Andi

2019-11-15 18:44:11

by Liang, Kan

Subject: Re: [PATCH] perf/x86/intel: Avoid PEBS_ENABLE MSR access in PMI



On 11/15/2019 1:33 PM, Andi Kleen wrote:
>> @@ -2620,6 +2624,15 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>>  		handled++;
>>  		x86_pmu.drain_pebs(regs);
>>  		status &= x86_pmu.intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
>> +
>> +		/*
>> +		 * PMI may land after cpuc->enabled=0 in x86_pmu_disable() and
>> +		 * PMI throttle may be triggered for the PMI.
>> +		 * For this rare case, intel_pmu_pebs_disable() will not touch
>> +		 * MSR_IA32_PEBS_ENABLE. Explicitly disable the PEBS here.
>> +		 */
>> +		if (unlikely(!cpuc->enabled && !cpuc->pebs_enabled))
>> +			wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
>
> How does the enable_all() code know to reenable it in this case?

For this case, we know that perf is disabling the PMU. The PMI handler
will not restore the PMU state when it's inactive, so enable_all() will
not be called.
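
For completeness, the window in question comes from the shape of
x86_pmu_disable() (a sketch paraphrased from arch/x86/events/core.c of
this era, not verbatim): cpuc->enabled is cleared before disable_all()
runs, so a PMI landing in between sees an inactive PMU and skips the
restore on exit.

static void x86_pmu_disable(struct pmu *pmu)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

	if (!x86_pmu_initialized())
		return;

	if (!cpuc->enabled)
		return;

	cpuc->n_added = 0;
	cpuc->enabled = 0;	/* a PMI after this point sees an inactive PMU */
	barrier();

	x86_pmu.disable_all();	/* eventually clears PEBS_ENABLE as well */
}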

Thanks,
Kan