This patch series is intended to improve I/O latency, addressing an often
neglected but important subset of workloads: the ones for which cfq currently
prefers not to do any idling.
Those are exactly the queues that would benefit most from low latency; they
are any of the following (a rough sketch of the kind of no-idle decision
involved is given after the list):
* processes with large think times (e.g. interactive ones like file managers)
* seeky processes (e.g. programs faulting in their code at startup)
* queues marked as no-idle by upper layers.
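For context, cfq tracks per-queue statistics and skips idling on a queue when
waiting for its next request is unlikely to pay off. The fragment below is only
an illustrative sketch of that kind of decision; the names, fields and
thresholds are assumptions made for the example, not the actual cfq code:

/*
 * Illustration only: names, fields and thresholds are assumed for this
 * example and do not come from the actual cfq code.
 */
struct queue_stats {
	unsigned int mean_think_time_us;	/* gap between completion and next request */
	unsigned int mean_seek_sectors;		/* average distance between requests */
	int marked_no_idle;			/* hint from upper layers */
};

static int worth_idling(const struct queue_stats *st, unsigned int slice_idle_us)
{
	if (st->marked_no_idle)
		return 0;	/* upper layers asked for no idling */
	if (st->mean_think_time_us > slice_idle_us)
		return 0;	/* next request will not arrive soon enough */
	if (st->mean_seek_sectors > 8192)
		return 0;	/* seeky: idling buys no locality */
	return 1;		/* sequential, fast thinker: idle for it */
}

Queues for which such a check fails are exactly the ones this series targets.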
The patch series addresses this by:
* reducing each queue's time slice when many queues have pending I/O (a rough
  sketch of this scaling is given after the list)
* separating queues with different priorities and different characteristics
  into different service trees, each with its own allocated time slice
* enabling idling when switching between service trees, even for queues that
  would not otherwise have idling enabled.
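To make the first point concrete, here is a minimal, self-contained sketch of
how a time slice could be scaled down once the sum of all busy queues' slices
would exceed a latency target. The function, constants and values are
assumptions for the example, not the code in the patches:

#include <stdio.h>

static unsigned int scaled_slice(unsigned int base_slice_ms,
				 unsigned int busy_queues,
				 unsigned int target_latency_ms)
{
	unsigned int total = base_slice_ms * busy_queues;
	unsigned int slice;

	/* Few busy queues: every queue keeps its full slice. */
	if (total <= target_latency_ms)
		return base_slice_ms;

	/*
	 * Otherwise shrink each slice so that one full round over all
	 * busy queues stays within the latency target, with a floor so
	 * a queue still gets a useful amount of disk time.
	 */
	slice = base_slice_ms * target_latency_ms / total;
	return slice < 10 ? 10 : slice;
}

int main(void)
{
	unsigned int q;

	/* Example: 100 ms base slice, 300 ms latency target. */
	for (q = 1; q <= 8; q++)
		printf("%u busy queue(s) -> %u ms slice\n",
		       q, scaled_slice(100, q, 300));
	return 0;
}

With these example numbers, three or fewer busy queues keep the full 100 ms
slice, while eight busy queues get about 37 ms each, bounding the time a
request can wait for other queues' slices to expire.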
This provides various benefits:
* the service tree insertion code is simplified, since it no longer needs to
  cope with priorities
* high-priority no_idle queues are no longer penalized when competing with
  lower-priority, idling queues
* seeky and no_idle queues get their fair share of disk time without penalizing
  the performance of NCQ drives, since they can all dispatch together and fill
  the available NCQ slots (see the sketch after this list).
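To make the last point more concrete: queues are grouped by priority class and
by workload type, and each group gets its own service tree and its own share of
disk time. Queues in the no-idle group can be dispatched back to back, keeping
the NCQ slots busy, and idling is done once when leaving that tree rather than
after every individual queue. The classification below is only an assumed,
simplified sketch, not the actual patch code:

/* Simplified, assumed classification for illustration purposes. */
enum workload_type {
	WL_ASYNC,		/* writeback and other async I/O */
	WL_SYNC_NOIDLE,		/* seeky or explicitly no-idle sync queues */
	WL_SYNC_IDLE,		/* sequential sync queues, idled on individually */
};

static enum workload_type classify_queue(int is_sync, int is_seeky, int marked_no_idle)
{
	if (!is_sync)
		return WL_ASYNC;
	if (is_seeky || marked_no_idle)
		return WL_SYNC_NOIDLE;	/* share one tree, idle only on tree switch */
	return WL_SYNC_IDLE;		/* per-queue idling as before */
}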
On a non-NCQ-capable drive, with a workload of 4 random readers competing with
a sequential writer, the maximum latency experienced by the readers decreased
from over 500ms to about 160ms.

On Mon, Oct 19 2009, Corrado Zoccolo wrote:
> [...]
Thanks, interesting series. I'll look over the patches as time permits
and try and get some testing time in when I get back.
--
Jens Axboe