c8b28116 claimed to have no user-visible effect, but allows setting cpu.shares
to < MIN_SHARES, which the user then indeed sees.
Signed-off-by: Mike Galbraith <[email protected]>
---
kernel/sched.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -292,7 +292,7 @@ static DEFINE_SPINLOCK(task_group_lock);
* (The default weight is 1024 - so there's no practical
* limitation from this.)
*/
-#define MIN_SHARES 2
+#define MIN_SHARES (scale_load(2))
#define MAX_SHARES (1UL << (18 + SCHED_LOAD_RESOLUTION))
static int root_task_group_load = ROOT_TASK_GROUP_LOAD;
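
A minimal userspace sketch of the arithmetic behind the report, assuming the
64-bit value SCHED_LOAD_RESOLUTION == 10 and the scale_load()/scale_load_down()
shift pair introduced by c8b28116, and assuming the cpu.shares write path
scales the user value up while the read path scales it back down; the program
below is illustrative only, not kernel code:

#include <stdio.h>

#define SCHED_LOAD_RESOLUTION	10
#define scale_load(w)		((unsigned long)(w) << SCHED_LOAD_RESOLUTION)
#define scale_load_down(w)	((unsigned long)(w) >> SCHED_LOAD_RESOLUTION)

#define OLD_MIN_SHARES		2		/* unscaled, pre-patch */
#define NEW_MIN_SHARES		scale_load(2)	/* this patch */

int main(void)
{
	unsigned long user = 1;				/* echo 1 > cpu.shares */
	unsigned long shares = scale_load(user);	/* write path scales up */

	/* old floor: 1024 is already >= 2, so the limit never triggers */
	if (shares < OLD_MIN_SHARES)
		shares = OLD_MIN_SHARES;
	printf("old floor: user reads back %lu\n", scale_load_down(shares));

	/* scaled floor: 1024 < 2048, so the limit actually bites */
	shares = scale_load(user);
	if (shares < NEW_MIN_SHARES)
		shares = NEW_MIN_SHARES;
	printf("new floor: user reads back %lu\n", scale_load_down(shares));
	return 0;
}

With the old floor the user reads back 1, i.e. a value below MIN_SHARES; with
the scaled floor it comes back as 2.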
On Sat, 2011-06-04 at 11:29 +0200, Mike Galbraith wrote:
> c8b28116 claimed to have no user-visible effect, but allows setting cpu.shares
> to < MIN_SHARES, which the user then indeed sees.
>
> Signed-off-by: Mike Galbraith <[email protected]>
> ---
> kernel/sched.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched.c
> +++ linux-2.6/kernel/sched.c
> @@ -292,7 +292,7 @@ static DEFINE_SPINLOCK(task_group_lock);
> * (The default weight is 1024 - so there's no practical
> * limitation from this.)
> */
> -#define MIN_SHARES 2
> +#define MIN_SHARES (scale_load(2))
> #define MAX_SHARES (1UL << (18 + SCHED_LOAD_RESOLUTION))
>
> static int root_task_group_load = ROOT_TASK_GROUP_LOAD;
Hurm, but that destroys most of the gains from that patch; the whole
point was being able to have finer granularity, but now
calc_cfs_shares() and effective_load() are clipped to the coarse
granularity.
So maybe explicitly change the MIN_SHARES usage in
sched_group_set_shares(). That wants to become a clamp user anyway,
something like:
shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
That way MAX_SHARES can also lose its SCHED_LOAD_RESOLUTION factor,
bringing it back in line with MIN_SHARES.
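
For the granularity point, a tiny illustration (plain userspace C, not the
real calc_cfs_shares()): internally group weights carry SCHED_LOAD_RESOLUTION
extra bits of resolution, and a scaled floor throws those bits away for
lightly loaded groups, while an unscaled floor of 2 keeps them.

#include <stdio.h>

#define SCHED_LOAD_RESOLUTION	10
#define scale_load(w)		((unsigned long)(w) << SCHED_LOAD_RESOLUTION)

/* stand-in for the MIN_SHARES clipping done on internal weights */
static unsigned long clip(unsigned long w, unsigned long floor)
{
	return w < floor ? floor : w;
}

int main(void)
{
	/* a mostly idle group might compute an effective weight of, say, 37 */
	unsigned long tiny = 37;

	printf("floor 2 (unscaled):         %lu\n", clip(tiny, 2));
	printf("floor scale_load(2) = 2048: %lu\n", clip(tiny, scale_load(2)));
	return 0;
}

The first keeps the fine-grained weight (37/1024 of one user share unit); the
second snaps it up to two full units, discarding the extra resolution that
c8b28116 set out to add.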
On Sat, 2011-06-04 at 13:24 +0200, Peter Zijlstra wrote:
> On Sat, 2011-06-04 at 11:29 +0200, Mike Galbraith wrote:
> > c8b28116 claimed to have no user-visible effect, but allows setting cpu.shares
> > to < MIN_SHARES, which the user then indeed sees.
> >
> > Signed-off-by: Mike Galbraith <[email protected]>
> > ---
> > kernel/sched.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > Index: linux-2.6/kernel/sched.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/sched.c
> > +++ linux-2.6/kernel/sched.c
> > @@ -292,7 +292,7 @@ static DEFINE_SPINLOCK(task_group_lock);
> > * (The default weight is 1024 - so there's no practical
> > * limitation from this.)
> > */
> > -#define MIN_SHARES 2
> > +#define MIN_SHARES (scale_load(2))
> > #define MAX_SHARES (1UL << (18 + SCHED_LOAD_RESOLUTION))
> >
> > static int root_task_group_load = ROOT_TASK_GROUP_LOAD;
>
> Hurm, but that destroys most of the gains from that patch; the whole
> point was being able to have finer granularity, but now
> calc_cfs_shares() and effective_load() are clipped to the coarse
> granularity.
Oh well, drink one, spill one, give one away :)
> So maybe explicitly change the MIN_SHARES usage in
> sched_group_set_shares(). That wants to become a clamp user anyway,
> something like:
>
> shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
>
> That way MAX_SHARES can also lose its SCHED_LOAD_RESOLUTION factor,
> bringing it back in line with MIN_SHARES.
sched, cgroups: fix sched_group_set_shares() on 64 bit boxen
c8b28116 claimed to have no user-visible effect, but allows setting cpu.shares
to < MIN_SHARES, which the user then sees.
Signed-off-by: Mike Galbraith <[email protected]>
---
kernel/sched.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -292,8 +292,8 @@ static DEFINE_SPINLOCK(task_group_lock);
* (The default weight is 1024 - so there's no practical
* limitation from this.)
*/
-#define MIN_SHARES 2
-#define MAX_SHARES (1UL << (18 + SCHED_LOAD_RESOLUTION))
+#define MIN_SHARES (1UL << 1)
+#define MAX_SHARES (1UL << 18)
static int root_task_group_load = ROOT_TASK_GROUP_LOAD;
#endif
@@ -8439,10 +8439,7 @@ int sched_group_set_shares(struct task_g
if (!tg->se[0])
return -EINVAL;
- if (shares < MIN_SHARES)
- shares = MIN_SHARES;
- else if (shares > MAX_SHARES)
- shares = MAX_SHARES;
+ shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
mutex_lock(&shares_mutex);
if (tg->shares == shares)
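
A quick userspace check of the clamp in this version, again assuming the
64-bit shift of 10; clamp_ul() below mimics kernel clamp() semantics (bounded
below by lo and above by hi) and is a stand-in, not the kernel macro:

#include <stdio.h>

#define SCHED_LOAD_RESOLUTION	10
#define scale_load(w)		((unsigned long)(w) << SCHED_LOAD_RESOLUTION)
#define scale_load_down(w)	((unsigned long)(w) >> SCHED_LOAD_RESOLUTION)

#define MIN_SHARES		(1UL << 1)
#define MAX_SHARES		(1UL << 18)

static unsigned long clamp_ul(unsigned long v, unsigned long lo, unsigned long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned long writes[] = { 0, 1, 1024, 1UL << 20 };
	int i;

	for (i = 0; i < 4; i++) {
		unsigned long shares = scale_load(writes[i]);	/* write path */

		shares = clamp_ul(shares, scale_load(MIN_SHARES),
				  scale_load(MAX_SHARES));
		printf("write %-8lu -> user reads back %lu\n",
		       writes[i], scale_load_down(shares));
	}
	return 0;
}

On 64-bit this keeps the user-visible range at [2, 262144], matching what
32-bit (where scale_load() is a no-op) already had.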