Having given up for now on VT-d, I rebooted 2.6.32-rc8 with
intel_iommu=off. Whilst my myriad of broken BIOS issues cleared, I now
see acpi_os_read_port as continually the busiest function in perf top.
With intel_iommu enabled, _spin_lock was always on top, and nothing else
was notable.

This seems odd to me; perhaps it will make sense to someone else.

FWIW, I'm running on an Asus P6T Deluxe V2, HT enabled; no errors or
oddities in dmesg or /var/log/messages.
* Michael Breuer <[email protected]> wrote:
> Having given up for now on VT-d, I rebooted 2.6.32-rc8 with
> intel_iommu=off. Whilst my myriad of broken BIOS issues cleared, I now
> see acpi_os_read_port as continually the busiest function in perf top.
> With intel_iommu enabled, _spin_lock was always on top, and nothing
> else was notable.
>
> This seems odd to me; perhaps it will make sense to someone else.
>
> FWIW, I'm running on an Asus P6T Deluxe V2, HT enabled; no errors or
> oddities in dmesg or /var/log/messages.
Could you post the perf top output please?
Could you also post the output of:
perf stat -a --repeat 10 sleep 1
This will show us how idle the system is. (My guess is that your system
is idle and perf top shows acpi_os_read_port because the system goes to
idle via ACPI methods and PIO is slow. In that case all is nominal and
your system is fine. But it's hard to tell without more details.)
Thanks,
Ingo
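For context on why PIO dominates an idle profile: acpi_os_read_port()
is essentially a thin wrapper around x86 port I/O. A rough paraphrase
of the 2.6.32-era drivers/acpi/osl.c (details approximate, not a
verbatim quote):

    acpi_status acpi_os_read_port(acpi_io_address port, u32 *value, u32 width)
    {
            u32 dummy;

            if (!value)
                    value = &dummy;
            *value = 0;

            /*
             * Each in*() instruction is a slow, serializing PIO cycle,
             * so a mostly-idle machine that samples inside this wrapper
             * looks "busy" here even though little real work is done.
             */
            if (width <= 8)
                    *(u8 *)value = inb(port);
            else if (width <= 16)
                    *(u16 *)value = inw(port);
            else if (width <= 32)
                    *(u32 *)value = inl(port);
            else
                    BUG();

            return AE_OK;
    }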
On Sat, 2009-11-28 at 08:18 +0100, Ingo Molnar wrote:
> * Michael Breuer <[email protected]> wrote:
>
> > Having given up for now on VT-d, I rebooted 2.6.32-rc8 with
> > intel_iommu=off. Whilst my myriad of broken BIOS issues cleared, I now
> > see acpi_os_read_port as continually the busiest function in perf top.
> > With intel_iommu enabled, _spin_lock was always on top, and nothing
> > else was notable.
> >
> > This seems odd to me; perhaps it will make sense to someone else.
> >
> > FWIW, I'm running on an Asus P6T Deluxe V2, HT enabled; no errors or
> > oddities in dmesg or /var/log/messages.
>
> Could you post the perf top output please?
>
> Could you also post the output of:
>
> perf stat -a --repeat 10 sleep 1
>
> This will show us how idle the system is. (My guess is that your system
> is idle and perf top shows acpi_os_read_port because the system goes to
> idle via ACPI methods and PIO is slow. In that case all is nominal and
> your system is fine. But it's hard to tell without more details.)
Isn't there also a list of symbols it won't display? I can't find
either the list or the switch to turn it off (admittedly I didn't look
hard), but acme might know.
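For reference, the perf top of that era did ship a built-in skip list
for the most common idle symbols, in tools/perf/builtin-top.c. Quoted
from memory, so treat the exact contents as approximate:

    /* idle symbols that perf top hides by default */
    static const char *skip_symbols[] = {
            "default_idle",
            "cpu_idle",
            "enter_idle",
            "exit_idle",
            "mwait_idle",
            "mwait_idle_with_hints",
            "poll_idle",
            NULL
    };

The ACPI helpers such as acpi_os_read_port and acpi_idle_enter_bm are
not on that list, which is why they surface at the top of an idle
profile.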
OK - I did the following in runlevel 3 to avoid the DMAR errors I'm
getting with nouveau & VT-d.

In theory, the system was similarly loaded (i.e., doing pretty much
nothing) for both runs. The sample is consistent with what I've seen
previously.

Perhaps there's no issue, or perhaps the issue is with my broken BIOS
and intel_iommu=on.

Perf top with intel_iommu=off (snapshot) - acpi_os_read_port is often
#1 and I've seen it over 30%:
------------------------------------------------------------------------------
PerfTop: 3957 irqs/sec kernel:84.0% [100000 cycles], (all, 8 CPUs)
------------------------------------------------------------------------------
samples pcnt kernel function
_______ _____ _______________
3183.00 - 16.7% : _spin_lock
3167.00 - 16.7% : acpi_os_read_port
1053.00 - 5.5% : io_apic_modify_irq
810.00 - 4.3% : hpet_next_event
529.00 - 2.8% : _spin_lock_irqsave
522.00 - 2.7% : io_apic_sync
283.00 - 1.5% : tg_shares_up
270.00 - 1.4% : acpi_idle_enter_bm
259.00 - 1.4% : irq_to_desc
222.00 - 1.2% : i8042_interrupt
213.00 - 1.1% : acpi_hw_validate_io_request
204.00 - 1.1% : ktime_get
180.00 - 0.9% : find_busiest_group
169.00 - 0.9% : _spin_unlock_irqrestore
168.00 - 0.9% : sub_preempt_count
Performance counter stats for 'sleep 1' (10 runs):
       8021.581362  task-clock-msecs         #      8.009 CPUs    ( +-   0.033% )
               607  context-switches         #      0.000 M/sec   ( +-   4.251% )
                27  CPU-migrations           #      0.000 M/sec   ( +-  11.455% )
               408  page-faults              #      0.000 M/sec   ( +-  34.557% )
         311405638  cycles                   #     38.821 M/sec   ( +-   6.887% )
          85807775  instructions             #      0.276 IPC     ( +-  13.824% )
           2300079  cache-references         #      0.287 M/sec   ( +-   6.859% )
             77314  cache-misses             #      0.010 M/sec   ( +-  11.184% )
1.001616593 seconds time elapsed ( +- 0.009% )
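Reading the stats back: 311405638 cycles over 8021.6 ms of task-clock
is about 38.8 M cycles/sec summed across all 8 CPUs. Assuming the P6T
Deluxe V2 carries a ~2.67 GHz Core i7 (an assumption - the clock speed
isn't stated anywhere in the thread), that works out to

    311405638 / (8 * 2.67e9 * 1.0016) ~= 1.5% busy

i.e. the box is ~98.5% idle, which supports the all-is-nominal reading.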
With intel_iommu=on:
------------------------------------------------------------------------------
PerfTop: 9941 irqs/sec kernel:81.9% [100000 cycles], (all, 8 CPUs)
------------------------------------------------------------------------------
samples pcnt kernel function
_______ _____ _______________
11465.00 - 20.8% : _spin_lock
3679.00 - 6.7% : io_apic_modify_irq
3295.00 - 6.0% : hpet_next_event
2172.00 - 3.9% : _spin_lock_irqsave
2111.00 - 3.8% : acpi_os_read_port
1094.00 - 2.0% : io_apic_sync
904.00 - 1.6% : find_busiest_group
695.00 - 1.3% : _spin_unlock_irqrestore
686.00 - 1.2% : tg_shares_up
620.00 - 1.1% : acpi_idle_enter_bm
577.00 - 1.0% : add_preempt_count
568.00 - 1.0% : sub_preempt_count
475.00 - 0.9% : audit_filter_syscall
470.00 - 0.9% : schedule
450.00 - 0.8% : tick_nohz_stop_sched_tick
Performance counter stats for 'sleep 1' (10 runs):
       8015.967731  task-clock-msecs         #      8.003 CPUs    ( +-   0.024% )
              2628  context-switches         #      0.000 M/sec   ( +-  20.053% )
               124  CPU-migrations           #      0.000 M/sec   ( +-  20.561% )
              3014  page-faults              #      0.000 M/sec   ( +-  35.573% )
         850702031  cycles                   #    106.126 M/sec   ( +-  10.601% )
         311032631  instructions             #      0.366 IPC     ( +-  17.859% )
           8578386  cache-references         #      1.070 M/sec   ( +-  13.894% )
            333768  cache-misses             #      0.042 M/sec   ( +-  21.894% )
1.001656333 seconds time elapsed ( +- 0.008% )
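Comparing the two runs using the numbers above: the intel_iommu=on
kernel burns about 2.7x the cycles and 3.6x the instructions for the
same idle workload, and takes over 4x the context switches:

    850702031 / 311405638 ~= 2.73x cycles
    311032631 /  85807775 ~= 3.62x instructions
         2628 /       607 ~= 4.33x context-switches

So iommu=on really is doing measurably more work while idle, not merely
redistributing samples between symbols.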
Ingo Molnar wrote:
> * Michael Breuer <[email protected]> wrote:
>
>
>> Having given up for now on VT-d, I rebooted 2.6.32-rc8 with
>> intel_iommu=off. Whilst my myriad of broken BIOS issues cleared, I now
>> see acpi_os_read_port as continually the busiest function in perf top.
>> With intel_iommu enabled, _spin_lock was always on top, and nothing
>> else was notable.
>>
>> This seems odd to me; perhaps it will make sense to someone else.
>>
>> FWIW, I'm running on an Asus P6T Deluxe V2, HT enabled; no errors or
>> oddities in dmesg or /var/log/messages.
>>
>
> Could you post the perf top output please?
>
> Could you also post the output of:
>
> perf stat -a --repeat 10 sleep 1
>
> This will show us how idle the system is. (My guess is that your system
> is idle and perf top shows acpi_os_read_port because the system goes to
> idle via ACPI methods and PIO is slow. In that case all is nominal and
> your system is fine. But it's hard to tell without more details.)
>
> Thanks,
>
> Ingo
>
On Sat, 28 Nov 2009 08:18:08 +0100
Ingo Molnar <[email protected]> wrote:
>
> * Michael Breuer <[email protected]> wrote:
>
> > Having given up for now on VT-d, I rebooted 2.6.32-rc8 with
> > intel_iommu=off. Whilst my myriad of broken BIOS issues cleared, I
> > now see acpi_os_read_port as continually the busiest function in
> > perf top. With intel_iommu enabled, _spin_lock was always on top,
> > and nothing else was notable.
> >
> > This seems odd to me; perhaps it will make sense to someone else.
> >
> > FWIW, I'm running on an Asus P6T Deluxe V2, HT enabled; no errors
> > or oddities in dmesg or /var/log/messages.
>
> Could you post the perf top output please?
>
> Could you also post the output of:
>
> perf stat -a --repeat 10 sleep 1
>
> This will show us how idle the system is. (My guess is that your
> system is idle and perf top shows acpi_os_read_port because the
> system goes to idle via ACPI methods and PIO is slow. In that case
> all is nominal and your system is fine. But it's hard to tell without
> more details.)
>
Yeah, acpi_os_read_port is part of the idle loop, so if your system is
idle it'll show up big... not much we can optimize there, though.
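To make that concrete: when a C-state is entered via the legacy SYSIO
method, the idle path itself issues port I/O. A rough paraphrase of
acpi_idle_do_entry() from 2.6.32-era drivers/acpi/processor_idle.c
(details approximate):

    static void acpi_idle_do_entry(struct acpi_processor_cx *cx)
    {
            if (cx->entry_method == ACPI_CSTATE_FFH) {
                    /* MWAIT-based entry - no port I/O involved */
                    acpi_processor_ffh_cstate_enter(cx);
            } else if (cx->entry_method == ACPI_CSTATE_HALT) {
                    acpi_safe_halt();
            } else {
                    /* Reading the P_LVLx port enters the C-state... */
                    inb(cx->address);
                    /* ...then a dummy PM-timer read gives STPCLK# time
                     * to assert - yet more PIO on every idle entry. */
                    inl(acpi_gbl_FADT.xpm_timer_block.address);
            }
    }

On top of that, acpi_idle_enter_bm's bus-master bookkeeping goes
through the ACPI register helpers, which bottom out in
acpi_os_read_port - matching the profiles above.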
OK - my only question, then, is why things appear so different with
intel_iommu enabled.
Arjan van de Ven wrote:
> On Sat, 28 Nov 2009 08:18:08 +0100
> Ingo Molnar <[email protected]> wrote:
>
>
>> * Michael Breuer <[email protected]> wrote:
>>
>>
>>> Having given up for now on VT-d, I rebooted 2.6.32-rc8 with
>>> intel_iommu=off. Whilst my myriad of broken BIOS issues cleared, I
>>> now see acpi_os_read_port as continually the busiest function in
>>> perf top. With intel_iommu enabled, _spin_lock was always on top,
>>> and nothing else was notable.
>>>
>>> This seems odd to me; perhaps it will make sense to someone else.
>>>
>>> FWIW, I'm running on an Asus P6T Deluxe V2, HT enabled; no errors
>>> or oddities in dmesg or /var/log/messages.
>>>
>> Could you post the perf top output please?
>>
>> Could you also post the output of:
>>
>> perf stat -a --repeat 10 sleep 1
>>
>> This will show us how idle the system is. (My guess is that your
>> system is idle and perf top shows acpi_os_read_port because the
>> system goes to idle via ACPI methods and PIO is slow. In that case
>> all is nominal and your system is fine. But it's hard to tell without
>> more details.)
>>
>>
>
> Yeah, acpi_os_read_port is part of the idle loop, so if your system is
> idle it'll show up big... not much we can optimize there, though.
On Sat, 28 Nov 2009 13:10:21 -0500
Michael Breuer <[email protected]> wrote:
> OK - my only question, then, is why things appear so different with
> intel_iommu enabled.
Something else is even more expensive, then :0
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
OK - one more rather odd (to me) data point...

I started playing around with various settings and traced the calls to
acpi_os_read_port. To summarize:

With intel_iommu=off, I see a large percentage of calls to
acpi_os_read_port resulting from user apps (portsentry is #1). With
intel_iommu=on, NONE of the traces point to any user apps - all derive
from the idle loop.
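He doesn't say how the call traces were captured; one crude way - a
purely hypothetical debug patch, not anything in mainline - is to
rate-limit a stack dump inside the wrapper itself:

    /* hypothetical debug hack: log who calls acpi_os_read_port */
    acpi_status acpi_os_read_port(acpi_io_address port, u32 *value, u32 width)
    {
            if (printk_ratelimit())
                    dump_stack();   /* caller chains land in dmesg */

            /* ... original body as sketched earlier ... */
            return AE_OK;
    }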
To make things more interesting, when I enable intel_iommu and disable
VT-d in the BIOS, the system performs much better (a 20% improvement in
glxgears, for example), and perf top looks like this:
------------------------------------------------------------------------------
PerfTop: 4863 irqs/sec kernel:62.7% [100000 cycles], (all, 8 CPUs)
------------------------------------------------------------------------------
samples pcnt kernel function
_______ _____ _______________
2213.00 - 5.5% : acpi_idle_enter_bm
2001.00 - 5.0% : acpi_os_read_port
1544.00 - 3.9% : _spin_lock_irqsave
1075.00 - 2.7% : ioread32
928.00 - 2.3% : find_busiest_group
851.00 - 2.1% : _spin_unlock_irqrestore
823.00 - 2.1% : hpet_next_event
810.00 - 2.0% : tg_shares_up
655.00 - 1.6% : fget_light
641.00 - 1.6% : schedule
639.00 - 1.6% : tick_nohz_stop_sched_tick
638.00 - 1.6% : sub_preempt_count
634.00 - 1.6% : add_preempt_count
548.00 - 1.4% : do_sys_poll
446.00 - 1.1% : trace_hardirqs_off
Additionally, one recurring boot warning I've seen since I first booted
this box has disappeared - the early boot message that IRQ 16 was
disabled.

I'm thinking that the bad VT-d BIOS is causing trouble even when
intel_iommu is disabled.
On 11/29/2009 03:47 PM, Arjan van de Ven wrote:
> On Sat, 28 Nov 2009 13:10:21 -0500
> Michael Breuer<[email protected]> wrote:
>
>
>> OK - my only question, then, is why things appear so different with
>> intel_iommu enabled.
>>
> Something else is even more expensive, then :0