When pulling an RT task for a given runqueue, the pull operation can
be skipped if the runqueue is already overloaded with RT tasks.
By the way, it looks like a typo?
Signed-off-by: Hillf Danton <[email protected]>
---
kernel/sched_rt.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 19ecb31..14c764b 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1440,7 +1440,7 @@ static int pull_rt_task(struct rq *this_rq)
struct task_struct *p;
struct rq *src_rq;
- if (likely(!rt_overloaded(this_rq)))
+ if (unlikely(rt_overloaded(this_rq)))
return 0;
for_each_cpu(cpu, this_rq->rd->rto_mask) {
On Sun, May 15, 2011 at 10:50 AM, Hillf Danton <[email protected]> wrote:
> When pulling an RT task for a given runqueue, the pull operation can
> be skipped if the runqueue is already overloaded with RT tasks.
>
> By the way, it looks like a typo?
No.
Below is how rt_overloaded() is implemented:
static inline int rt_overloaded(struct rq *rq)
{
return atomic_read(&rq->rd->rto_count);
}
You can see it's about the overload of the root_domain as a whole,
not just this runqueue.
Thanks,
Yong
>
> Signed-off-by: Hillf Danton <[email protected]>
> ---
> kernel/sched_rt.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
> index 19ecb31..14c764b 100644
> --- a/kernel/sched_rt.c
> +++ b/kernel/sched_rt.c
> @@ -1440,7 +1440,7 @@ static int pull_rt_task(struct rq *this_rq)
> struct task_struct *p;
> struct rq *src_rq;
>
> - if (likely(!rt_overloaded(this_rq)))
> + if (unlikely(rt_overloaded(this_rq)))
> return 0;
>
> for_each_cpu(cpu, this_rq->rd->rto_mask) {
>
--
Only stand for myself
On Tue, May 17, 2011 at 10:35 AM, Yong Zhang <[email protected]> wrote:
> On Sun, May 15, 2011 at 10:50 AM, Hillf Danton <[email protected]> wrote:
>> When pulling an RT task for a given runqueue, the pull operation can
>> be skipped if the runqueue is already overloaded with RT tasks.
>>
>> By the way, it looks like a typo?
>
> No.
>
> Below is how rt_overloaded() is implemented:
> static inline int rt_overloaded(struct rq *rq)
> {
> return atomic_read(&rq->rd->rto_count);
> }
>
> You can see it's about the overload of the root_domain as a whole,
> not just this runqueue.
>
Well, why does it go no further if not overloaded?
thanks
Hillf
On Tue, 2011-05-17 at 22:47 +0800, Hillf Danton wrote:
> On Tue, May 17, 2011 at 10:35 AM, Yong Zhang <[email protected]> wrote:
> > On Sun, May 15, 2011 at 10:50 AM, Hillf Danton <[email protected]> wrote:
> >> When pulling an RT task for a given runqueue, the pull operation can
> >> be skipped if the runqueue is already overloaded with RT tasks.
> >>
> >> By the way, it looks like a typo?
> >
> > No.
> >
> > Below is how rt_overloaded() is implemented:
> > static inline int rt_overloaded(struct rq *rq)
> > {
> > return atomic_read(&rq->rd->rto_count);
> > }
> >
> > You can see it's about the overload of the root_domain as a whole,
> > not just this runqueue.
> >
> Well, why does it go no further if not overloaded?
To avoid examining masks (maybe huge) routinely. The challenge is to
improve the oddball case (overload) without injuring the common case.
-Mike
On Tue, May 17, 2011 at 08:27:09PM +0200, Mike Galbraith wrote:
> > >
> > Well, why does it go no further if not overloaded?
>
> To avoid examining masks (maybe huge) routinely. The challenge is to
> improve the oddball case (overload) without injuring the common case.
Correct.
-- Steve