2012-10-05 12:04:43

by el_es

[permalink] [raw]
Subject: The uncatchable jitter, or may the scheduler wars be over?

Hello,

first of all, the posts that inspired me to write this up,
were from Uwaysi Bin Kareem (paradoxuncreated dot com).

Here is what I think:
could the source of graphic/video jitter, as most people
perceive it, be something that could technically be defined
as 'graphics buffer underrun', caused by the scheduler being
unable to align its deadlines with those of userspace
programs that feed the video/OpenGL output, whose v-refresh
is really HARD RT? Say the scheduler sometimes decides to
preempt userspace in the middle of an OpenGL/fb call.
[Pretty easy to imagine: userspace that often blocks on
calls to the video hardware, or has a userspace thread that
does, and so cannot finish some OpenGL pipeline calls before
the end of its slice; or, if misaligned, it executes enough
commands to create one (or several) frame(s), but is then
cut off in the middle of creating another and has to wait
for its turn again. In the meantime the vsync/video buffer
swap occurs, and that last frame is lost/discarded, or
created with timing from the previous slice, which is
wrong.]

Bear in mind that the deepest the video/fb/OpenGL buffer
queue can get is probably 3 frames (triple buffering, as in
some game settings), whereas (at least some) sound hardware
can have buffers several ms long; it's not hard to imagine
what happens if userspace cannot update the buffer in time,
causing 'underruns'.
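
To make the 'underrun' idea concrete, here is a toy model (all
numbers invented, not measurements; it deliberately ignores the
back-pressure a real, at most ~3-deep, buffer queue would apply
to the renderer):

```python
def count_underruns(render_times_ms, vsync_ms=16.7):
    """Toy model, not a benchmark: the display takes one finished
    frame per vsync; if no new frame has finished by a refresh,
    the old one is shown again -- the 'underrun'."""
    finish, ready = 0.0, []
    for t in render_times_ms:
        finish += t
        ready.append(finish)      # completion time of each frame
    underruns, vsync, i = 0, vsync_ms, 0
    while i < len(ready):
        if ready[i] <= vsync:
            i += 1                # a fresh frame made this refresh
        else:
            underruns += 1        # cut mid-frame: repeat last frame
        vsync += vsync_ms
    return underruns
```

Ten 15 ms frames keep up with a 16.7 ms refresh, but stall one of
them by an extra 50 ms (as if the renderer lost its slice mid-frame)
and three refreshes go by with nothing new to show.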

This would also explain why it doesn't matter to 'server'
people - they have no RT video hardware/buffers to care
about (they tune the scheduler for maximum throughput
instead).

But whether it is measurable or not - I don't know.
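
(One crude way it could be measured - a sketch, not something I
have run against a real game: timestamp a paced loop, as one would
timestamp a render loop around its buffer swaps, and take the
spread of the achieved intervals as the 'jitter' number.)

```python
import statistics
import time

def measure_loop_jitter(frames=60, target_ms=16.7):
    """Pace a loop at one 'frame' per target_ms and return the
    mean and standard deviation of the achieved intervals (ms).
    The stddev is one crude, concrete number for 'jitter'."""
    stamps = []
    deadline = time.perf_counter()
    for _ in range(frames):
        deadline += target_ms / 1000.0
        while time.perf_counter() < deadline:
            pass  # spin like a busy render loop waiting for vsync
        stamps.append(time.perf_counter())
    intervals = [(b - a) * 1e3 for a, b in zip(stamps, stamps[1:])]
    return statistics.mean(intervals), statistics.pstdev(intervals)

mean_ms, jitter_ms = measure_loop_jitter()
```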

The OP (Uwaysi) has been experimenting with the HZ value,
with the averaging period of the scheduler (which he calls
its 'filter'), and with its granularity too. He's had some
interesting results, IMO.

Hope the above makes sense and isn't too much gibberish :)

Lukasz


2012-11-04 14:04:19

by Uwaysi Bin Kareem

[permalink] [raw]
Subject: Re: The uncatchable jitter, or may the scheduler wars be over?

On Fri, 05 Oct 2012 14:04:29 +0200, el es <[email protected]> wrote:

[snip]

I have now tried both CFS and BFS. Doom 3 is now running with very low
jitter on both. Both need a 90 Hz timer, no high-resolution timer, and a
granularity/interval suited for a "natural" (psychovisual) profile.
I also compiled them with some optimizations and options for low jitter
(KBUILD_CFLAGS += -O3 -fno-defer-pop --param prefetch-latency=200).
With vsync on in Doom 3 it runs very smoothly. With vsync off, BFS has
less jitter than CFS.
Doom 3 does 3 passes to OpenGL and therefore seems more jitter-sensitive,
so getting it to run well means minimizing jitter.
Compatibility layers like Wine add complexity, though. I have HL2
running perfectly (without jitter) in an intensely tweaked XP install.
With Wine and BFS it runs as well, but with some major one-second
jitters. With CFS there are more smaller jitters / a higher average
jitter, but the major jitters are shorter. Video jitter on YouTube
seems less with CFS as well.

So for the "scheduler wars", indeed: identifying those jitters and
getting the best of both would be optimal.

This with "low-latency desktop" preemption.

I have yet to get the realtime patch / threadirqs working; however,
within the month I will have a new E5 computer, which will probably be
a whole lot more fun to test that on.

Also, as I stated elsewhere, since daemons seem to make a difference,
putting daemons, and any processes that can tolerate it, on a
low-jitter queue, transparently to the user, seems optimal.
Unfortunately realtime is not quite working as one would expect: it
causes input to be choked at times if you want one main app realtime
and the rest on SCHED_OTHER as a low-jitter queue. So I am still
iterating on this.
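
(One way to sketch the 'demote the daemons' half without touching a
realtime class at all - so input cannot be choked - is the standard
sched_setscheduler(2) interface; this is an illustration of the idea,
not my exact setup:)

```python
import os

def demote_to_background(pid=0):
    """Hint to the Linux scheduler that a process is a background
    batch job: SCHED_BATCH keeps it under normal CFS time-sharing
    but makes it slightly less preferred on wakeup, keeping it out
    of the way of interactive tasks. Demoting your own process
    (pid 0) needs no privileges; SCHED_IDLE would be an even
    stronger demotion."""
    if not hasattr(os, "SCHED_BATCH"):
        return None                    # not on Linux: nothing to do
    os.sched_setscheduler(pid, os.SCHED_BATCH, os.sched_param(0))
    return os.sched_getscheduler(pid)

policy = demote_to_background()
```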

Reducing jitter seems to improve the computing experience generally,
and also raises one's expectations of quality. A machine with jitter
of course behaves like a lower-end computer. So reducing jitter seems
central to an enjoyable computing experience - all without
unreasonable effort, of course.

Peace Be With You.

2012-11-04 17:04:00

by el_es

[permalink] [raw]
Subject: Re: The uncatchable jitter, or may the scheduler wars be over?

A word of addition,

On Sun, Nov 4, 2012 at 2:04 PM, Uwaysi Bin Kareem
<[email protected]> wrote:
[snip]
>
> Also like I stated elsewhere, since daemons seem to make a difference,
> optimally putting daemons or processes that can, on a low-jitter queue,
> transparent to the user, seems optimal. Unfortunately realtime is not quite
> working as one would expect, causing input to be choked at times, if you
> want to have one main app, and the rest on sched_other, as a low-jitter
> queue. So I am still iterating this.

A hard real-time kernel will make the situation even worse: there,
userspace will always get preempted, no matter what it is doing. RT
here means userspace will /get/ its slice, but whether the slice will
be enough, no one can guarantee except whoever wrote the userspace.
It's the userspace that must decide 'do I have enough time to run
another rendering loop within this time slice (or before vsync is
imminent)'.

(As in: real time is not 'as fast as possible' but 'as fast as
specified', and the specification needs to be within reason.)
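
In code, the decision I mean is no more than this (worst_frame_s
being the app's own worst-case frame-time estimate, which only the
app can know):

```python
def should_start_frame(now_s, next_vsync_s, worst_frame_s):
    """The decision only the application can make: begin another
    render pass only if its own worst-case frame time still fits
    before the next vsync; otherwise keep showing the previous
    frame rather than get caught mid-frame at the buffer swap."""
    return next_vsync_s - now_s >= worst_frame_s
```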

>
[snip]
> Peace Be With You.

Lukasz

2012-11-05 08:34:42

by Ove Karlsen

[permalink] [raw]
Subject: Re: The uncatchable jitter, or may the scheduler wars be over?

On Sun, 04 Nov 2012 18:03:58 +0100, Lukasz Sokol <[email protected]>
wrote:

[snip]

I meant a realtime thread here, not added preemption points, for
realtime behaviour.
But I understand your point. So "low-jitter" is of course the sweet
spot, where you have just enough interrupts and preemption points for
exactly that, but not too much.

Peace Be With You.