If the cached inverse load weight is zero, the function calc_delta_mine()
computes the inverse of lw->weight (in 32-bit integer ops).

This calculation is actually slightly inexact (because it inverts
something around lw->weight + 1), especially when lw->weight becomes
small. (This could explain some arithmetical issues for small shares...)

The correct inverse would be 1/lw->weight, multiplied by WMULT_CONST to
scale it into fixed-point integer form (i.e. WMULT_CONST / lw->weight ...).

For safety it is also important to check whether a division by zero
could happen...

The old, inexact algorithm took two divisions to invert lw->weight; the
new, more exact one takes only one, plus an additional unlikely() check.
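
For illustration (not part of the patch), a small userspace check of the
nice-0 case, assuming WMULT_CONST = 2**32 as on 64-bit:

	#include <stdio.h>
	#include <stdint.h>

	#define WMULT_CONST (1ULL << 32)

	int main(void)
	{
		uint64_t w = 1024;	/* nice-0 load weight */

		/* old way: inverts around w+1, two divisions */
		uint64_t old_inv = 1 + (WMULT_CONST - w / 2) / (w + 1);
		/* new way: exact fixed-point inverse, one division */
		uint64_t new_inv = WMULT_CONST / w;

		/* prints old=4190212 new=4194304; 4194304 * 1024 == 2**32 exactly */
		printf("old=%llu new=%llu\n",
		       (unsigned long long)old_inv,
		       (unsigned long long)new_inv);
		return 0;
	}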
Signed-off-by: Stephan Baerwolf <[email protected]>
---
kernel/sched.c | 12 +++++++++---
1 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 312f8b9..bb55996 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1307,15 +1307,21 @@ calc_delta_mine(unsigned long delta_exec, unsigned long weight,
{
u64 tmp;
+ tmp = (u64)delta_exec * weight;
+
+ // actually we would have to trap the division by zero - but we stay
+ // and return the limit of the operation instead...
+ if (unlikely(lw->weight == 0)) {
+ if (unlikely(tmp == ((u64)0))) return (unsigned long)0;
+ else return (unsigned long)LONG_MAX;
+ }
+
if (!lw->inv_weight) {
if (BITS_PER_LONG > 32 && unlikely(lw->weight >= WMULT_CONST))
lw->inv_weight = 1;
else
- lw->inv_weight = 1 + (WMULT_CONST-lw->weight/2)
- / (lw->weight+1);
+ lw->inv_weight = WMULT_CONST / lw->weight;
}
- tmp = (u64)delta_exec * weight;
/*
* Check whether we'd overflow the 64-bit multiplication:
*/
--
1.7.3.4
--
Dipl.-Inf. Stephan Bärwolf
Ilmenau University of Technology, Integrated Communication Systems Group
Phone: +49 (0)3677 69 2821, FAX: +49 (0)3677 69 1614
Email: [email protected],
Web: http://www.tu-ilmenau.de/iks
* Stephan Bärwolf <[email protected]> wrote:
> If the cached inverse load weight is zero, the function calc_delta_mine()
> computes the inverse of lw->weight (in 32-bit integer ops).
>
> This calculation is actually slightly inexact (because it inverts
> something around lw->weight + 1), especially when lw->weight becomes
> small. (This could explain some arithmetical issues for small shares...)
>
> The correct inverse would be 1/lw->weight, multiplied by WMULT_CONST to
> scale it into fixed-point integer form (i.e. WMULT_CONST / lw->weight ...).
>
> For safety it is also important to check whether a division by zero
> could happen...
>
> The old, inexact algorithm took two divisions to invert lw->weight; the
> new, more exact one takes only one, plus an additional unlikely() check.
>
> Signed-off-by: Stephan Baerwolf <[email protected]>
> ---
> kernel/sched.c | 12 +++++++++---
> 1 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 312f8b9..bb55996 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -1307,15 +1307,21 @@ calc_delta_mine(unsigned long delta_exec, unsigned long weight,
> {
> u64 tmp;
>
> + tmp = (u64)delta_exec * weight;
> +
> + // actually we would have to trap the division by zero - but we stay
> + // and return the limit of the operation instead...
> + if (unlikely(lw->weight == 0)) {
> + if (unlikely(tmp == ((u64)0))) return (unsigned long)0;
> + else return (unsigned long)LONG_MAX;
Can lw->weight ever be zero here? I don't think so - and if it is, then
getting a kernel crash there is preferable to hiding it.
Once we do that your patch becomes a lot simpler.
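
Presumably the simplified variant would then just be (a sketch: the same
hunk with the zero handling dropped, only the inverse computation changed):

	if (!lw->inv_weight) {
		if (BITS_PER_LONG > 32 && unlikely(lw->weight >= WMULT_CONST))
			lw->inv_weight = 1;
		else
			lw->inv_weight = WMULT_CONST / lw->weight;
	}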
> + }
> +
> if (!lw->inv_weight) {
> if (BITS_PER_LONG > 32 && unlikely(lw->weight >= WMULT_CONST))
> lw->inv_weight = 1;
> else
> - lw->inv_weight = 1 + (WMULT_CONST-lw->weight/2)
> - / (lw->weight+1);
> + lw->inv_weight = WMULT_CONST / lw->weight;
hm, i definitely think there was a rounding reason for that - but apparently
i'm an idiot who does not add comments to non-obvious code! :-)
Peter, do you remember this?
> }
>
> - tmp = (u64)delta_exec * weight;
I agree that moving this multiplication early in the sequence is better for
micro-performance regardless of the lw->weight optimization you do: it can be
executed in parallel.
Thanks,
Ingo
On Wed, 2011-05-11 at 18:20 +0200, Ingo Molnar wrote:
> > - lw->inv_weight = 1 + (WMULT_CONST-lw->weight/2)
> > - / (lw->weight+1);
> > + lw->inv_weight = WMULT_CONST / lw->weight;
>
> hm, i definitely think there was a rounding reason for that - but apparently
> i'm an idiot who does not add comments to non-obvious code! :-)
I suspect I might be the idiot,
> Peter, do you remember this?
I think what we wanted to do was minimize the error:
err = weight - inv*WMULT_CONST
by adding another term. But we could well have simply made a mess of it
instead.
>I think what we wanted to do was minimize the error:
> err = weight - inv*WMULT_CONST
Hi all,
thanks for your fast response.
I think what you mean is:
err = | WMULT_CONST - inv*weight | --> min
right?
But the following tables show the difference in the inverses
(assuming WMULT_CONST = 2**32 as on x64):
weight                  old inv_weight            new inv_weight
1                       2147483649 (=1+2**31)     4294967295 [=(2**32)-1]
2                       1431655766                2147483647
3 (= WEIGHT_IDLEPRIO)   1073741824                1431655765
15 (= nice 19)          268435456                 286331153
1024 (= nice 0)         4190212                   4194304
...
88761 (= nice -20)      48387                     48388

----

weight                  old err                   new err
1                       2147483647 (=2**31 - 1)   1
2                       1431655764                2
3 (= WEIGHT_IDLEPRIO)   1073741824                1
15 (= nice 19)          268435456                 1
1024 (= nice 0)         4190208                   0
...
88761 (= nice -20)      88789                     28
Thus the "err" of the old way can become very large (up to about 2^31 ).
And of course the new error increases with increasing weight, it is
still alway
smaller than the oldway err (because oldway inv converts to newway inv)...
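
For reference, the computation behind both tables is essentially the
following (a standalone userspace sketch, not kernel code; WMULT_CONST is
taken as 2**32 and the result clamped to 32 bits here, so single entries
may differ by one from the table depending on the exact constant used):

	#include <stdio.h>
	#include <stdint.h>

	#define WMULT_CONST (1ULL << 32)

	int main(void)
	{
		uint64_t weights[] = { 1, 2, 3, 15, 1024, 88761 };

		for (int i = 0; i < 6; i++) {
			uint64_t w = weights[i];
			/* old way: two divisions, inverts around w+1 */
			uint64_t old_inv = 1 + (WMULT_CONST - w / 2) / (w + 1);
			/* new way: one division, clamped to 32 bits */
			uint64_t new_inv = WMULT_CONST / w;

			if (new_inv > 0xffffffffULL)
				new_inv = 0xffffffffULL;

			printf("w=%llu old_inv=%llu new_inv=%llu "
			       "old_err=%llu new_err=%llu\n",
			       (unsigned long long)w,
			       (unsigned long long)old_inv,
			       (unsigned long long)new_inv,
			       (unsigned long long)(WMULT_CONST - old_inv * w),
			       (unsigned long long)(WMULT_CONST - new_inv * w));
		}
		return 0;
	}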
regards Stephan
--
Dipl.-Inf. Stephan Bärwolf
Ilmenau University of Technology, Integrated Communication Systems Group
Phone: +49 (0)3677 69 2821, FAX: +49 (0)3677 69 1614
Email: [email protected],
Web: http://www.tu-ilmenau.de/iks
On Wed, 2011-05-11 at 19:35 +0200, Stephan Bärwolf wrote:
> Thus the "err" of the old way can become very large (up to about 2^31 ).
> And of course the new error increases with increasing weight, it is
> still alway
> smaller than the oldway err (because oldway inv converts to newway inv)...
Your table got white-space mangled, but I'll take your word for it. I'll
give the patch a spin tomorrow (without the weight==0 bits), since my
brain officially gave up for today ;-)