To prevent the throttling of RT idle threads, such as the idle-inject
threads, skip runtime accounting for these threads.
Signed-off-by: Atul Pant <[email protected]>
---
kernel/sched/rt.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 4ac36eb4cdee..d20999270e75 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1075,7 +1075,9 @@ static void update_curr_rt(struct rq *rq)
struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
int exceeded;
- if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
+ if (sched_rt_runtime(rt_rq) != RUNTIME_INF &&
+ !(curr->policy == SCHED_FIFO &&
+ curr->flags & PF_IDLE)) {
raw_spin_lock(&rt_rq->rt_runtime_lock);
rt_rq->rt_time += delta_exec;
exceeded = sched_rt_runtime_exceeded(rt_rq);
--
2.25.1
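
For context on why the new condition matches idle-inject threads: the
powercap idle_inject framework runs its per-CPU kthreads as SCHED_FIFO,
and play_idle_precise() sets PF_IDLE on the current task for the duration
of the injected idle period. A minimal sketch of the predicate being
open-coded in the hunk above (the helper name rt_task_is_idle_inject()
is hypothetical, for illustration only):

/*
 * Sketch only, not part of the patch: the predicate open-coded in
 * update_curr_rt() above. play_idle_precise() sets PF_IDLE around the
 * injected idle period, and idle-inject kthreads run as SCHED_FIFO,
 * so both conditions hold exactly while idle is being injected.
 */
static bool rt_task_is_idle_inject(struct task_struct *p)
{
	return p->policy == SCHED_FIFO && (p->flags & PF_IDLE);
}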
On 4/10/24 06:54, Atul Pant wrote:
> To prevent the throttling of RT idle threads, such as the idle-inject
> threads, skip runtime accounting for these threads.
>
> Signed-off-by: Atul Pant <[email protected]>
> ---
> kernel/sched/rt.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 4ac36eb4cdee..d20999270e75 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1075,7 +1075,9 @@ static void update_curr_rt(struct rq *rq)
> struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
> int exceeded;
>
> - if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
> + if (sched_rt_runtime(rt_rq) != RUNTIME_INF &&
> + !(curr->policy == SCHED_FIFO &&
> + curr->flags & PF_IDLE)) {
FYI, this will not be a problem with the DL server, because play_idle_precise()
disables preemption, so the DL server will not be scheduled until preempt_enable().
With the DL server, the time consumed as an RT task will not change the DL
server's behavior, because the logic inverts: it provides bandwidth for the
fair scheduler (instead of throttling the RT scheduler).
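
(For reference, a simplified sketch of the shape of play_idle_precise()
from kernel/sched/idle.c that this relies on; the hrtimer setup, idle
loop, and sanity checks are elided:)

void play_idle_precise(u64 duration_ns, u64 latency_ns)
{
	/* Simplified: timer setup and WARN_ON_ONCE() checks elided. */
	preempt_disable();
	current->flags |= PF_IDLE;

	/* ... arm an hrtimer for duration_ns and loop in do_idle() ... */

	current->flags &= ~PF_IDLE;
	preempt_enable();
}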
-- Daniel
> raw_spin_lock(&rt_rq->rt_runtime_lock);
> rt_rq->rt_time += delta_exec;
> exceeded = sched_rt_runtime_exceeded(rt_rq);