I've looked through the scheduling code and searched pretty much
everywhere I could think of and I've found nothing on this, so it's
time to escalate :-).
The measurement of CPU usage and virtual itimers is crude at best.
It's possible to set a relatively short virtual itimer (say,
50ms) and have it go off after the process has used almost no CPU.
I have actually seen this happen. For larger values and for gross
rusage measurements the current scheme is probably OK, but for
anything more fine-grained it is not.
Most, if not all, modern CPUs have a high-precision clock that could
easily track CPU usage down to the microsecond level.
My questions are: has anyone done any work on this? If not, and if I
were willing to do the work, would it be likely to be accepted?
What I would like to do is the following:
* Add a way to get at a high-precision clock that architectures can
optionally provide.
* Modify fields in task_struct to optionally support higher precision.
* Add some optional code before and after switch_to() to update the
timer values and rusage values.
* Add a way for architectures to report when they enter and leave
system code (for tracking system versus user usage).
All of this, of course, would be optional and the old way would still
be the default.