On Sun, Feb 28, 2016 at 04:02:51PM +0100, Pavel Machek wrote:
> Hi!
>
> > The problem is we don't know the max bandwidth a disk can provide for a
> > specific workload, which depends on the device and the IO pattern. The
> > bandwidth estimated by patch 1 will never be accurate unless the disk is
> > already at max bandwidth. To solve this, we always over-estimate the
> > bandwidth. With an over-estimated bandwidth, the workload dispatches
> > more IO, the estimated bandwidth becomes higher, and the workload
> > dispatches even more IO. The loop runs until we enter a stable state, in
> > which the disk reaches max bandwidth. This 'slightly adjust and run into
> > a stable state' is the core algorithm the patch series uses. We also use
> > it to detect inactive cgroups.
>
> Ok, so you want to reach a steady state, but what if the workload
> varies a lot?
>
> Let's say random writes for ten minutes, then a linear write.
>
> Will the linear write be severely throttled because of the previous
> seeks?
If the workload varies a lot, there can be fairness or performance
issues while the workload is changing. How severe they are depends on
the changing interval: if the interval is short, say several
milliseconds, the issue would be significant. For the case above, the
linear write will be throttled initially because the previously
estimated bandwidth is low. The throttling will relax soon as the
estimated bandwidth grows, and within some time the workload will enter
the stable state and get the highest bandwidth.
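
To make the convergence concrete, here is a toy user-space simulation
of the feedback loop, where each iteration stands in for one sampling
window. The 120% over-estimation factor and all the numbers are
illustrative assumptions, not values taken from the patches:

/* bw_converge.c - toy simulation of the over-estimate feedback loop */
#include <stdio.h>

int main(void)
{
	unsigned long disk_max = 100000;	/* disk's real max bw, KB/s */
	unsigned long est_bw = 1000;		/* initial low estimate */
	int i;

	for (i = 0; i < 30; i++) {
		/* the throttler lets the workload dispatch up to est_bw */
		unsigned long dispatched = est_bw;
		/* the disk completes at most disk_max of it */
		unsigned long measured = dispatched < disk_max ?
					 dispatched : disk_max;
		/* over-estimate: allow 20% more than what was measured */
		est_bw = measured * 120 / 100;
		printf("window %2d: measured %6lu KB/s, next limit %6lu KB/s\n",
		       i, measured, est_bw);
	}
	/*
	 * est_bw settles at disk_max * 120%, and the measured bandwidth
	 * stays pinned at disk_max - the stable state.
	 */
	return 0;
}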
> Can a task get bigger bandwidth by doing some additional (useless)
> work?
>
> Like "I do bigger reads in the random read phase, so that I'm not
> throttled that badly when I do the linear read"?
Yes, it's possible, but only during the stage approaching the stable
state.
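
Rough, purely illustrative numbers: 4KB random reads completing at 1000
IOPS measure ~4MB/s, while 64KB reads at even 500 IOPS measure ~32MB/s,
so the estimate carried into the linear phase starts several times
higher. Since the feedback loop keeps ramping the estimate up anyway,
the extra work only buys a head start on the ramp; the stable-state
bandwidth ends up the same.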
Thanks,
Shaohua