Hello
the perf_fuzzer on current linux-git on a Haswell system triggers the
following.
I've truncated the call chain, as it goes on for quite a while; let me
know if you want/need more information.
Vince
[32694.087403] unchecked MSR access error: WRMSR to 0x318 (tried to write 0x0000000000000000) at rIP: 0xffffffff8106f854 (native_write_msr+0x4/0x20)
[32694.101374] Call Trace:
[32694.103974] perf_clear_dirty_counters+0x86/0x100
[32694.109027] switch_mm_irqs_off+0x1d0/0x430
[32694.113498] __schedule+0x29f/0x1490
[32694.117300] ? cr4_update_irqsoff+0x2a/0x30
[32694.121762] ? switch_mm_irqs_off+0x1ba/0x430
[32694.126418] ? rcu_eqs_exit.constprop.0+0x2e/0x60
[32694.131515] ? cpuidle_enter_state+0xb7/0x350
[32694.136152] schedule_idle+0x26/0x40
[32694.139974] do_idle+0x16e/0x280
[32694.143421] cpu_startup_entry+0x19/0x20
[32694.147643] secondary_startup_64_no_verify+0xb0/0xbb
[32694.973583] Call Trace:
[32694.976215] perf_clear_dirty_counters+0x86/0x100
[32694.981290] switch_mm_irqs_off+0x1d0/0x430
[32694.985797] __schedule+0x29f/0x1490
[32694.989664] ? cr4_update_irqsoff+0x2a/0x30
[32694.994134] ? switch_mm_irqs_off+0x1ba/0x430
[32694.998789] ? rcu_eqs_exit.constprop.0+0x2e/0x60
[32695.003831] ? cpuidle_enter_state+0xb7/0x350
[32695.008521] schedule_idle+0x26/0x40
[32695.012362] do_idle+0x16e/0x280
[32695.015838] cpu_startup_entry+0x19/0x20
[32695.020041] secondary_startup_64_no_verify+0xb0/0xbb
[32695.126530] Call Trace:
[32695.129184] perf_clear_dirty_counters+0x86/0x100
[32695.134268] switch_mm_irqs_off+0x1d0/0x430
[32695.138757] __schedule+0x29f/0x1490
[32695.142577] ? tick_nohz_get_sleep_length+0x6b/0xa0
[32695.147806] ? rcu_eqs_exit.constprop.0+0x2e/0x60
[32695.152806] ? cpuidle_enter_state+0xb7/0x350
[32695.157500] schedule_idle+0x26/0x40
[32695.161342] do_idle+0x16e/0x280
[32695.164798] cpu_startup_entry+0x19/0x20
[32695.169010] secondary_startup_64_no_verify+0xb0/0xbb
On 7/29/2021 5:14 AM, Peter Zijlstra wrote:
> On Wed, Jul 28, 2021 at 12:49:43PM -0400, Vince Weaver wrote:
>> [32694.087403] unchecked MSR access error: WRMSR to 0x318 (tried to write 0x0000000000000000) at rIP: 0xffffffff8106f854 (native_write_msr+0x4/0x20)
>> [32694.101374] Call Trace:
>> [32694.103974] perf_clear_dirty_counters+0x86/0x100
>
> Hmm.. if I read this right that's MSR_ARCH_PERFMON_FIXED_CTR0 + i, given
> that FIXED_CTR0 is 0x309 that gives i == 15, which is FIXED_BTS.
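(Just to spell out that arithmetic, here is a throwaway user-space snippet --
not kernel code, purely illustrative:)
	/* Reproduce the offset math from the splat above. */
	#include <stdio.h>
	#define MSR_ARCH_PERFMON_FIXED_CTR0	0x309	/* first fixed-counter MSR  */
	#define FAULTING_MSR			0x318	/* MSR from the WRMSR error */
	int main(void)
	{
		int i = FAULTING_MSR - MSR_ARCH_PERFMON_FIXED_CTR0;
		/* Prints 15: the FIXED_BTS slot, which has no real counter MSR. */
		printf("i = %d\n", i);
		return 0;
	}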
>
> I'm thinking something like this ought to cure things.
>
> ---
> arch/x86/events/core.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 1eb45139fcc6..04edf8017961 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -2489,13 +2489,15 @@ void perf_clear_dirty_counters(void)
> 		return;
>
> 	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
> -		/* Metrics and fake events don't have corresponding HW counters. */
> -		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
> -			continue;
> -		else if (i >= INTEL_PMC_IDX_FIXED)
> +		if (i >= INTEL_PMC_IDX_FIXED) {
> +			/* Metrics and fake events don't have corresponding HW counters. */
> +			if ((i - INTEL_PMC_IDX_FIXED) >= x86_pmu.num_counters_fixed)
Yes, this fix is better. My previous implementation tried to pick up all
the special cases individually, so it's very likely to miss some of them,
like FIXED_BTS, and probably any new fake events added later.
Thanks for the fix.
The x86_pmu.num_counters_fixed should work well on HSW, but we have
hybrid machines now. I think we can use
hybrid(cpuc->pmu, num_counters_fixed) instead, which should be more
accurate.
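Something like the following, i.e. the same loop as in your diff with just
the bound swapped for the hybrid accessor (untested, only to illustrate the
substitution):
	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
		if (i >= INTEL_PMC_IDX_FIXED) {
			/* Metrics and fake events don't have corresponding HW counters. */
			if ((i - INTEL_PMC_IDX_FIXED) >= hybrid(cpuc->pmu, num_counters_fixed))
				continue;

			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
		} else {
			wrmsrl(x86_pmu_event_addr(i), 0);
		}
	}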
Thanks,
Kan
> +				continue;
> +
> 			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
> -		else
> +		} else {
> 			wrmsrl(x86_pmu_event_addr(i), 0);
> +		}
> 	}
>
> 	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
>
On Thu, 29 Jul 2021, Peter Zijlstra wrote:
> On Wed, Jul 28, 2021 at 12:49:43PM -0400, Vince Weaver wrote:
> > [32694.087403] unchecked MSR access error: WRMSR to 0x318 (tried to write 0x0000000000000000) at rIP: 0xffffffff8106f854 (native_write_msr+0x4/0x20)
> > [32694.101374] Call Trace:
> > [32694.103974] perf_clear_dirty_counters+0x86/0x100
>
> Hmm.. if I read this right that's MSR_ARCH_PERFMON_FIXED_CTR0 + i, given
> that FIXED_CTR0 is 0x309 that gives i == 15, which is FIXED_BTS.
>
> I'm thinking something like this ought to cure things.
I know it sounds like the complete fix will be a bit different from this, but
I did want to report that this patch does fix the issue on my machine.
Vince Weaver
[email protected]
On 29/7/2021 11:30 pm, Peter Zijlstra wrote:
> On Thu, Jul 29, 2021 at 09:21:01AM -0400, Liang, Kan wrote:
>>
>>
>> On 7/29/2021 5:14 AM, Peter Zijlstra wrote:
>>> On Wed, Jul 28, 2021 at 12:49:43PM -0400, Vince Weaver wrote:
>>>> [32694.087403] unchecked MSR access error: WRMSR to 0x318 (tried to write 0x0000000000000000) at rIP: 0xffffffff8106f854 (native_write_msr+0x4/0x20)
>>>> [32694.101374] Call Trace:
>>>> [32694.103974] perf_clear_dirty_counters+0x86/0x100
>>>
>>> Hmm.. if I read this right that's MSR_ARCH_PERFMON_FIXED_CTR0 + i, given
>>> that FIXED_CTR0 is 0x309 that gives i == 15, which is FIXED_BTS.
>>>
>>> I'm thinking something like this ought to cure things.
>>>
>>> ---
>>> arch/x86/events/core.c | 12 +++++++-----
>>> 1 file changed, 7 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
>>> index 1eb45139fcc6..04edf8017961 100644
>>> --- a/arch/x86/events/core.c
>>> +++ b/arch/x86/events/core.c
>>> @@ -2489,13 +2489,15 @@ void perf_clear_dirty_counters(void)
>>> 		return;
>>>
>>> 	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
>>> -		/* Metrics and fake events don't have corresponding HW counters. */
>>> -		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
>>> -			continue;
>>> -		else if (i >= INTEL_PMC_IDX_FIXED)
>>> +		if (i >= INTEL_PMC_IDX_FIXED) {
>>> +			/* Metrics and fake events don't have corresponding HW counters. */
>>> +			if ((i - INTEL_PMC_IDX_FIXED) >= x86_pmu.num_counters_fixed)
>>
>> Yes, this fix is better. My previous implementation tried to pick up all
>> the special cases individually, so it's very likely to miss some of them,
>> like FIXED_BTS, and probably any new fake events added later.
>> Thanks for the fix.
>>
>> The x86_pmu.num_counters_fixed should work well on HSW, but we have
>> hybrid machines now. I think we can use
>> hybrid(cpuc->pmu, num_counters_fixed) instead, which should be more
>> accurate.
>
> Yes, good point. I guess I still need to adjust to the hybrid world.
I recently enabled guest BTS and encountered the same call trace.
For the hybrid(cpuc->pmu, num_counters_fixed) proposal,
Tested-by: Like Xu <[email protected]>