Hi All,

During LSFMM Dave Jones discussed the current situation around
testing/trinity in the mm. One of the conclusions was that basically we
lack tools to gather the necessary information to make debugging a less
painful process, making it pretty much a black box for a lot of cases.

One of the suggested ways to do so was to improve our tracing. Currently
we have events for kmem, vmscan and oom (which really just trace the
tunable updates) -- in addition, Dave Hansen has also been trying to add
tracing for TLB range flushing; hopefully that can make it in sometime
soon. However, this lacks the more general data that governs all of the
core VM, such as vmas and of course the mm_struct.

To this end, I've started adding events to trace the vma lifecycle,
including: creating, removing, splitting, merging, copying and
adjusting. Currently it only prints out the start and end virtual
addresses, such as:

bash-3661 [000] .... 222.964847: split_vma: [8a8000-9a6000] => new: [9a6000-9b6000]
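
Roughly, the split_vma event itself looks something like the sketch
below (the field names and the include/trace/events/ placement are
illustrative only and may not match the actual patch):

/*
 * Sketch of a vma-lifecycle tracepoint; assumes a new trace header
 * along the lines of include/trace/events/vma.h.
 */
#include <linux/tracepoint.h>

TRACE_EVENT(split_vma,

	TP_PROTO(struct vm_area_struct *vma,
		 unsigned long new_start, unsigned long new_end),

	TP_ARGS(vma, new_start, new_end),

	TP_STRUCT__entry(
		__field(unsigned long, start)
		__field(unsigned long, end)
		__field(unsigned long, new_start)
		__field(unsigned long, new_end)
	),

	TP_fast_assign(
		__entry->start     = vma->vm_start;
		__entry->end       = vma->vm_end;
		__entry->new_start = new_start;
		__entry->new_end   = new_end;
	),

	TP_printk("[%lx-%lx] => new: [%lx-%lx]",
		  __entry->start, __entry->end,
		  __entry->new_start, __entry->new_end)
);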

Now, on a more general note, I would basically like to know: 1) is this
actually useful? I'm hoping that, if in fact something like this gets
merged, it won't just sit there. 2) What other general data would be
useful for debugging purposes? I'm happy to collect feedback and send
out something we can all benefit from.
Thanks,
Davidlohr
On 04/03/2014 02:44 PM, Davidlohr Bueso wrote:
> Now, on a more general note, I would basically like to know: 1) is this
> actually useful? I'm hoping that, if in fact something like this gets
> merged, it won't just sit there. 2) What other general data would be
> useful for debugging purposes? I'm happy to collect feedback and send
> out something we can all benefit from.
One thing that would be nice, specifically for the VM, would be to turn
all of the things that touch the /proc/vmstat counters
(count_vm_event(), etc...) into tracepoints.
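
Something along these lines, perhaps (purely a sketch -- the mm_vm_event
name and the count_vm_event_traced() wrapper below are made up here for
illustration, not an existing API):

/*
 * Sketch only: mirror every count_vm_event() bump with a tracepoint so
 * the per-event context (task, timestamp, call site) can be captured.
 * A real version would need a proper trace header under
 * include/trace/events/.
 */
#include <linux/tracepoint.h>
#include <linux/vmstat.h>

TRACE_EVENT(mm_vm_event,

	TP_PROTO(int item),

	TP_ARGS(item),

	TP_STRUCT__entry(
		__field(int, item)
	),

	TP_fast_assign(
		__entry->item = item;
	),

	TP_printk("item=%d", __entry->item)
);

/* Hypothetical wrapper for callers that want both the counter and the event. */
static inline void count_vm_event_traced(enum vm_event_item item)
{
	count_vm_event(item);
	trace_mm_vm_event(item);
}
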
I started on it once, but ran into some header dependency hell and gave
up before I got anything useful.
On 04/03/2014 05:44 PM, Davidlohr Bueso wrote:
> Hi All,
>
> During LSFMM Dave Jones discussed the current situation around
> testing/trinity in the mm. One of the conclusions was that basically we
> lack tools to gather the necessary information to make debugging a less
> painful process, making it pretty much a black box for a lot of cases.
>
> One of the suggested ways to do so was to improve our tracing. Currently
> we have events for kmem, vmscan and oom (which really just trace the
> tunable updates) -- in addition, Dave Hansen has also been trying to add
> tracing for TLB range flushing; hopefully that can make it in sometime
> soon. However, this lacks the more general data that governs all of the
> core VM, such as vmas and of course the mm_struct.
>
> To this end, I've started adding events to trace the vma lifecycle,
> including: creating, removing, splitting, merging, copying and
> adjusting. Currently it only prints out the start and end virtual
> addresses, such as:
>
> bash-3661 [000] .... 222.964847: split_vma: [8a8000-9a6000] => new: [9a6000-9b6000]
>
> Now, on a more general note, I would basically like to know: 1) is this
> actually useful? I'm hoping that, if in fact something like this gets
> merged, it won't just sit there. 2) What other general data would be
> useful for debugging purposes? I'm happy to collect feedback and send
> out something we can all benefit from.
There's another thing we have to think about, which is the bottleneck of
getting that debug info out.

Turning on any sort of tracing/logging in mm/ would trigger huge amounts
of data flowing out. Any attempt to store that data anywhere would result
either in too much interference with the tests, so that issues stop
reproducing, or in way too much data to even get through the
guest <-> host pipe.

I was working on an idea similar to what lockdep does now: when you get
a lockdep spew you see a nice output which also shows call traces of the
relevant locks. What if, for example, we could make dump_page() also
dump the traces of where each of its flags was set or cleared?
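
As a very rough sketch of the idea (everything below -- the structure,
the page_flag_hist() lookup and the hook points -- is hypothetical, just
to show the shape of it):

/*
 * Hypothetical sketch: remember where selected page flags were last
 * set or cleared, and let dump_page() print those call traces, similar
 * in spirit to how lockdep remembers where locks were taken.
 */
#include <linux/stacktrace.h>
#include <linux/mm_types.h>
#include <linux/printk.h>

#define PF_TRACE_DEPTH	16

struct page_flag_trace {
	unsigned long		entries[PF_TRACE_DEPTH];
	struct stack_trace	trace;
	bool			was_set;	/* last op: set or clear */
};

/*
 * Placeholder: a real version would look this up in a debug-only side
 * table keyed by struct page (not implemented here).
 */
static inline struct page_flag_trace *page_flag_hist(struct page *page,
						      int flag)
{
	return NULL;
}

/* Hook to call wherever the tracked flag is set or cleared. */
static inline void trace_page_flag_op(struct page *page, int flag, bool set)
{
	struct page_flag_trace *h = page_flag_hist(page, flag);

	if (!h)
		return;

	h->was_set = set;
	h->trace.nr_entries  = 0;
	h->trace.max_entries = PF_TRACE_DEPTH;
	h->trace.entries     = h->entries;
	h->trace.skip        = 1;	/* skip this helper itself */
	save_stack_trace(&h->trace);
}

/* Called from dump_page() to show where a flag last changed. */
static inline void dump_page_flag_trace(struct page *page, int flag)
{
	struct page_flag_trace *h = page_flag_hist(page, flag);

	if (!h || !h->trace.nr_entries)
		return;

	pr_alert("flag %d last %s at:\n", flag, h->was_set ? "set" : "cleared");
	print_stack_trace(&h->trace, 2);
}
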
Thanks,
Sasha
On Thu, 2014-04-03 at 15:31 -0700, Dave Hansen wrote:
> On 04/03/2014 02:44 PM, Davidlohr Bueso wrote:
> > Now, on a more general note, I would basically like to know: 1) is this
> > actually useful? I'm hoping that, if in fact something like this gets
> > merged, it won't just sit there. 2) What other general data would be
> > useful for debugging purposes? I'm happy to collect feedback and send
> > out something we can all benefit from.
>
> One thing that would be nice, specifically for the VM, would be to turn
> all of the things that touch the /proc/vmstat counters
> (count_vm_event(), etc...) into tracepoints.
Hmm, what would be the difference? What's the issue with /proc/vmstat? I
guess, for one, some of the stats depend on build rules (i.e. TLB
flushing statistics).
On 04/04/2014 01:24 AM, Sasha Levin wrote:
> On 04/03/2014 05:44 PM, Davidlohr Bueso wrote:
>> Hi All,
>>
>> During LSFMM Dave Jones discussed the current situation around
>> testing/trinity in the mm. One of the conclusions was that basically we
>> lack tools to gather the necessary information to make debugging a less
>> painful process, making it pretty much a black box for a lot of cases.
>>
>> One of the suggested ways to do so was to improve our tracing. Currently
>> we have events for kmem, vmscan and oom (which really just trace the
>> tunable updates) -- in addition, Dave Hansen has also been trying to add
>> tracing for TLB range flushing; hopefully that can make it in sometime
>> soon. However, this lacks the more general data that governs all of the
>> core VM, such as vmas and of course the mm_struct.
>>
>> To this end, I've started adding events to trace the vma lifecycle,
>> including: creating, removing, splitting, merging, copying and
>> adjusting. Currently it only prints out the start and end virtual
>> addresses, such as:
>>
>> bash-3661 [000] .... 222.964847: split_vma: [8a8000-9a6000] => new: [9a6000-9b6000]
>>
>> Now, on a more general note, I would basically like to know: 1) is this
>> actually useful? I'm hoping that, if in fact something like this gets
>> merged, it won't just sit there. 2) What other general data would be
>> useful for debugging purposes? I'm happy to collect feedback and send
>> out something we can all benefit from.
I think that adding more tracepoints might be more useful for debugging
performance-related problems (e.g. compaction) that don't manifest as a
panic, and that VM_BUG_ON is better suited for this kind of debugging.
But I might be wrong.
> There's another thing we have to think about, which is the bottleneck of
> getting that debug info out.
>
> Turning on any sort of tracing/logging in mm/ would trigger huge amounts
> of data flowing out. Any attempt to store that data anywhere would result
> either in too much interference with the tests, so that issues stop
> reproducing, or in way too much data to even get through the
> guest <-> host pipe.
>
> I was working on an idea similar to what lockdep does now: when you get
> a lockdep spew you see a nice output which also shows call traces of the
> relevant locks. What if, for example, we could make dump_page() also
> dump the traces of where each of its flags was set or cleared?
Hmm, doesn't the oops printing already print the accumulated trace
buffers? Wouldn't it be easier to post-process that instead of trying to
do some smart "unwinding" during an oops? Is it possible to enable
tracing without actually consuming the data, just for this purpose?
Vlastimil
> Thanks,
> Sasha
>