Date: Thu, 1 Jun 2023 05:58:10 +0000
In-Reply-To: <20230601055846.2349566-1-jstultz@google.com>
MIME-Version: 1.0
References: <20230601055846.2349566-1-jstultz@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Message-ID: <20230601055846.2349566-8-jstultz@google.com>
Subject: [PATCH v4 07/13] sched: Split scheduler execution context
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar, Juri Lelli,
 Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
 Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
 Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
 "Paul E. McKenney", kernel-team@android.com, "Connor O'Brien", John Stultz
Content-Type: text/plain; charset="UTF-8"

From: Peter Zijlstra

Let's define the scheduling context as all the scheduler state in
task_struct, and the execution context as all the state required to
run the task.

Currently both are intertwined in task_struct. We want to logically
split these such that we can run the execution context of one task
with the scheduling context of another.

To this end, introduce the rq_selected() macro, which points to the
task_struct used for scheduler state, and preserve rq->curr to denote
the execution context.

NOTE: Peter previously mentioned he didn't like the name
"rq_selected()", but I've not come up with a better alternative.
I'm very open to other name proposals.

Question for Peter: Dietmar suggested you'd prefer I drop the
conditionalization of the scheduler context pointer on the rq (so
rq_selected() would be open coded as rq->curr_sched, or whatever we
agree on for a name). But I'd think that in the !CONFIG_PROXY_EXEC
case we'd want to avoid the wasted pointer and its use (since
curr_sched would always be == curr)? If I'm wrong, I'm fine switching
this, but would appreciate clarification.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: kernel-team@android.com
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Juri Lelli
Link: https://lkml.kernel.org/r/20181009092434.26221-5-juri.lelli@redhat.com
[add additional comments and update more sched_class code to use rq::proxy]
Signed-off-by: Connor O'Brien
[jstultz: Rebased and resolved minor collisions, reworked to use accessors,
 tweaked update_curr_common to use rq_proxy fixing rt scheduling issues]
Signed-off-by: John Stultz
---
v2:
* Reworked to use accessors
* Fixed update_curr_common to use proxy instead of curr
v3:
* Tweaked wrapper names
* Swapped proxy for selected for clarity
v4:
* Minor variable name tweaks for readability
* Use a macro instead of an inline function and drop other helper
  functions, as suggested by Peter.
* Remove verbose comments/questions to avoid review distractions, as
  suggested by Dietmar
---
 kernel/sched/core.c     | 38 +++++++++++++++++++++++++-------------
 kernel/sched/deadline.c | 35 ++++++++++++++++++-----------------
 kernel/sched/fair.c     | 18 +++++++++---------
 kernel/sched/rt.c       | 41 ++++++++++++++++++++---------------------
 kernel/sched/sched.h    | 37 +++++++++++++++++++++++++++++++++++--
 5 files changed, 107 insertions(+), 62 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a68d1276bab0..ace75aadb90b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -793,7 +793,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)
 	rq_lock(rq, &rf);
 	update_rq_clock(rq);
-	rq->curr->sched_class->task_tick(rq, rq->curr, 1);
+	rq_selected(rq)->sched_class->task_tick(rq, rq_selected(rq), 1);
 	rq_unlock(rq, &rf);
 
 	return HRTIMER_NORESTART;
@@ -2200,16 +2200,18 @@ static inline void check_class_changed(struct rq *rq, struct task_struct *p,
 
 void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (p->sched_class == rq->curr->sched_class)
-		rq->curr->sched_class->check_preempt_curr(rq, p, flags);
-	else if (sched_class_above(p->sched_class, rq->curr->sched_class))
+	struct task_struct *curr = rq_selected(rq);
+
+	if (p->sched_class == curr->sched_class)
+		curr->sched_class->check_preempt_curr(rq, p, flags);
+	else if (sched_class_above(p->sched_class, curr->sched_class))
 		resched_curr(rq);
 
 	/*
 	 * A queue event has occurred, and we're going to schedule.  In
 	 * this case, we can save a useless back to back clock update.
 	 */
-	if (task_on_rq_queued(rq->curr) && test_tsk_need_resched(rq->curr))
+	if (task_on_rq_queued(curr) && test_tsk_need_resched(rq->curr))
 		rq_clock_skip_update(rq);
 }
@@ -2599,7 +2601,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 	lockdep_assert_held(&p->pi_lock);
 
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 
 	if (queued) {
 		/*
@@ -5535,7 +5537,7 @@ unsigned long long task_sched_runtime(struct task_struct *p)
 	 * project cycles that may never be accounted to this
 	 * thread, breaking clock_gettime().
 	 */
-	if (task_current(rq, p) && task_on_rq_queued(p)) {
+	if (task_current_selected(rq, p) && task_on_rq_queued(p)) {
 		prefetch_curr_exec_start(p);
 		update_rq_clock(rq);
 		p->sched_class->update_curr(rq);
@@ -5603,7 +5605,8 @@ void scheduler_tick(void)
 {
 	int cpu = smp_processor_id();
 	struct rq *rq = cpu_rq(cpu);
-	struct task_struct *curr = rq->curr;
+	/* accounting goes to the selected task */
+	struct task_struct *curr = rq_selected(rq);
 	struct rq_flags rf;
 	unsigned long thermal_pressure;
 	u64 resched_latency;
@@ -5701,6 +5704,13 @@ static void sched_tick_remote(struct work_struct *work)
 	if (cpu_is_offline(cpu))
 		goto out_unlock;
 
+	/*
+	 * Since this is a remote tick for full dynticks mode, we are
+	 * always sure that there is no proxy (only a single task is
+	 * running).
+	 */
+	SCHED_WARN_ON(rq->curr != rq_selected(rq));
+
 	update_rq_clock(rq);
 
 	if (!is_idle_task(curr)) {
@@ -6631,6 +6641,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	}
 
 	next = pick_next_task(rq, prev, &rf);
+	rq_set_selected(rq, next);
 	clear_tsk_need_resched(prev);
 	clear_preempt_need_resched();
 #ifdef CONFIG_SCHED_DEBUG
@@ -7097,7 +7108,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 	prev_class = p->sched_class;
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, queue_flag);
 	if (running)
@@ -7185,7 +7196,7 @@ void set_user_nice(struct task_struct *p, long nice)
 		goto out_unlock;
 	}
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_NOCLOCK);
 	if (running)
@@ -7749,7 +7760,7 @@ static int __sched_setscheduler(struct task_struct *p,
 	}
 
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, queue_flags);
 	if (running)
@@ -9249,6 +9260,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
 	rcu_read_unlock();
 
 	rq->idle = idle;
+	rq_set_selected(rq, idle);
 	rcu_assign_pointer(rq->curr, idle);
 	idle->on_rq = TASK_ON_RQ_QUEUED;
 #ifdef CONFIG_SMP
@@ -9351,7 +9363,7 @@ void sched_setnuma(struct task_struct *p, int nid)
 	rq = task_rq_lock(p, &rf);
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 
 	if (queued)
 		dequeue_task(rq, p, DEQUEUE_SAVE);
@@ -10478,7 +10490,7 @@ void sched_move_task(struct task_struct *tsk)
 	update_rq_clock(rq);
 
-	running = task_current(rq, tsk);
+	running = task_current_selected(rq, tsk);
 	queued = task_on_rq_queued(tsk);
 
 	if (queued)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index f6f746d52410..d41d562df078 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1179,7 +1179,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 #endif
 
 	enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
-	if (dl_task(rq->curr))
+	if (dl_task(rq_selected(rq)))
 		check_preempt_curr_dl(rq, p, 0);
 	else
 		resched_curr(rq);
@@ -1306,7 +1306,7 @@ static u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
  */
 static void update_curr_dl(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_dl_entity *dl_se = &curr->dl;
 	s64 delta_exec, scaled_delta_exec;
 	int cpu = cpu_of(rq);
@@ -1819,7 +1819,7 @@ static int find_later_rq(struct task_struct *task);
 static int
 select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 {
-	struct task_struct *curr;
+	struct task_struct *curr, *selected;
 	bool select_rq;
 	struct rq *rq;
@@ -1830,6 +1830,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 
 	rcu_read_lock();
 	curr = READ_ONCE(rq->curr); /* unlocked access */
+	selected = READ_ONCE(rq_selected(rq));
 
 	/*
 	 * If we are dealing with a -deadline task, we must
@@ -1840,9 +1841,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 	 * other hand, if it has a shorter deadline, we
 	 * try to make it stay here, it might be important.
 	 */
-	select_rq = unlikely(dl_task(curr)) &&
+	select_rq = unlikely(dl_task(selected)) &&
 		    (curr->nr_cpus_allowed < 2 ||
-		     !dl_entity_preempt(&p->dl, &curr->dl)) &&
+		     !dl_entity_preempt(&p->dl, &selected->dl)) &&
 		    p->nr_cpus_allowed > 1;
 
 	/*
@@ -1905,7 +1906,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
 	 * let's hope p can move out.
 	 */
 	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpudl_find(&rq->rd->cpudl, rq->curr, NULL))
+	    !cpudl_find(&rq->rd->cpudl, rq_selected(rq), NULL))
 		return;
 
 	/*
@@ -1944,7 +1945,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 				  int flags)
 {
-	if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
+	if (dl_entity_preempt(&p->dl, &rq_selected(rq)->dl)) {
 		resched_curr(rq);
 		return;
 	}
@@ -1954,7 +1955,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 	 * In the unlikely case current and p have the same deadline
 	 * let us try to decide what's the best thing to do...
 	 */
-	if ((p->dl.deadline == rq->curr->dl.deadline) &&
+	if ((p->dl.deadline == rq_selected(rq)->dl.deadline) &&
 	    !test_tsk_need_resched(rq->curr))
 		check_preempt_equal_dl(rq, p);
 #endif /* CONFIG_SMP */
@@ -1989,7 +1990,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
 	if (hrtick_enabled_dl(rq))
 		start_hrtick_dl(rq, p);
 
-	if (rq->curr->sched_class != &dl_sched_class)
+	if (rq_selected(rq)->sched_class != &dl_sched_class)
 		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 
 	deadline_queue_push_tasks(rq);
@@ -2306,8 +2307,8 @@ static int push_dl_task(struct rq *rq)
 	 * can move away, it makes sense to just reschedule
 	 * without going further in pushing next_task.
 	 */
-	if (dl_task(rq->curr) &&
-	    dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
+	if (dl_task(rq_selected(rq)) &&
+	    dl_time_before(next_task->dl.deadline, rq_selected(rq)->dl.deadline) &&
 	    rq->curr->nr_cpus_allowed > 1) {
 		resched_curr(rq);
 		return 0;
@@ -2432,7 +2433,7 @@ static void pull_dl_task(struct rq *this_rq)
 			 * deadline than the current task of its runqueue.
 			 */
 			if (dl_time_before(p->dl.deadline,
-					   src_rq->curr->dl.deadline))
+					   rq_selected(src_rq)->dl.deadline))
 				goto skip;
 
 			if (is_migration_disabled(p)) {
@@ -2471,9 +2472,9 @@ static void task_woken_dl(struct rq *rq, struct task_struct *p)
 	if (!task_on_cpu(rq, p) &&
 	    !test_tsk_need_resched(rq->curr) &&
 	    p->nr_cpus_allowed > 1 &&
-	    dl_task(rq->curr) &&
+	    dl_task(rq_selected(rq)) &&
 	    (rq->curr->nr_cpus_allowed < 2 ||
-	     !dl_entity_preempt(&p->dl, &rq->curr->dl))) {
+	     !dl_entity_preempt(&p->dl, &rq_selected(rq)->dl))) {
 		push_dl_tasks(rq);
 	}
 }
@@ -2636,12 +2637,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 		return;
 	}
 
-	if (rq->curr != p) {
+	if (rq_selected(rq) != p) {
 #ifdef CONFIG_SMP
 		if (p->nr_cpus_allowed > 1 && rq->dl.overloaded)
 			deadline_queue_push_tasks(rq);
 #endif
-		if (dl_task(rq->curr))
+		if (dl_task(rq_selected(rq)))
 			check_preempt_curr_dl(rq, p, 0);
 		else
 			resched_curr(rq);
@@ -2670,7 +2671,7 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 		if (!rq->dl.overloaded)
 			deadline_queue_pull_task(rq);
 
-		if (task_current(rq, p)) {
+		if (task_current_selected(rq, p)) {
 			/*
 			 * If we now have a earlier deadline task than p,
 			 * then reschedule, provided p is still on this
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bf9e8f29398e..62c3c1762004 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -919,7 +919,7 @@ static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
  */
 s64 update_curr_common(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	s64 delta_exec;
 
 	delta_exec = update_curr_se(rq, &curr->se);
@@ -964,7 +964,7 @@ static void update_curr(struct cfs_rq *cfs_rq)
 
 static void update_curr_fair(struct rq *rq)
 {
-	update_curr(cfs_rq_of(&rq->curr->se));
+	update_curr(cfs_rq_of(&rq_selected(rq)->se));
 }
 
 static inline void
@@ -6230,7 +6230,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 		s64 delta = slice - ran;
 
 		if (delta < 0) {
-			if (task_current(rq, p))
+			if (task_current_selected(rq, p))
 				resched_curr(rq);
 			return;
 		}
@@ -6245,7 +6245,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
  */
 static void hrtick_update(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 
 	if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class)
 		return;
@@ -7882,7 +7882,7 @@ static void set_skip_buddy(struct sched_entity *se)
  */
 static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_entity *se = &curr->se, *pse = &p->se;
 	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
 	int scale = cfs_rq->nr_running >= sched_nr_latency;
@@ -7916,7 +7916,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 	 * prevents us from potentially nominating it as a false LAST_BUDDY
 	 * below.
 	 */
-	if (test_tsk_need_resched(curr))
+	if (test_tsk_need_resched(rq->curr))
 		return;
 
 	/* Idle tasks are by definition preempted by non-idle tasks. */
@@ -8915,7 +8915,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
 	 * update_load_avg() can call cpufreq_update_util(). Make sure that RT,
 	 * DL and IRQ signals have been updated before updating CFS.
 	 */
-	curr_class = rq->curr->sched_class;
+	curr_class = rq_selected(rq)->sched_class;
 
 	thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
@@ -12162,7 +12162,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
 	 * our priority decreased, or if we are not currently running on
 	 * this runqueue and our priority is higher than the current's
 	 */
-	if (task_current(rq, p)) {
+	if (task_current_selected(rq, p)) {
 		if (p->prio > oldprio)
 			resched_curr(rq);
 	} else
@@ -12307,7 +12307,7 @@ static void switched_to_fair(struct rq *rq, struct task_struct *p)
 		 * kick off the schedule if running, otherwise just see
 		 * if we can still preempt the current task.
 		 */
-		if (task_current(rq, p))
+		if (task_current_selected(rq, p))
 			resched_curr(rq);
 		else
 			check_preempt_curr(rq, p, 0);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 0d0b276c447d..3ba24c3fce20 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -574,7 +574,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
 
 static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
 {
-	struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
+	struct task_struct *curr = rq_selected(rq_of_rt_rq(rt_rq));
 	struct rq *rq = rq_of_rt_rq(rt_rq);
 	struct sched_rt_entity *rt_se;
@@ -1044,7 +1044,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
  */
 static void update_curr_rt(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_rt_entity *rt_se = &curr->rt;
 	s64 delta_exec;
@@ -1591,7 +1591,7 @@ static int find_lowest_rq(struct task_struct *task);
 static int
 select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 {
-	struct task_struct *curr;
+	struct task_struct *curr, *selected;
 	struct rq *rq;
 	bool test;
@@ -1603,6 +1603,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 
 	rcu_read_lock();
 	curr = READ_ONCE(rq->curr); /* unlocked access */
+	selected = READ_ONCE(rq_selected(rq));
 
 	/*
 	 * If the current task on @p's runqueue is an RT task, then
@@ -1631,8 +1632,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 	 * systems like big.LITTLE.
 	 */
 	test = curr &&
-	       unlikely(rt_task(curr)) &&
-	       (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
+	       unlikely(rt_task(selected)) &&
+	       (curr->nr_cpus_allowed < 2 || selected->prio <= p->prio);
 
 	if (test || !rt_task_fits_capacity(p, cpu)) {
 		int target = find_lowest_rq(p);
@@ -1662,12 +1663,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 
 static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
 {
-	/*
-	 * Current can't be migrated, useless to reschedule,
-	 * let's hope p can move out.
-	 */
 	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+	    !cpupri_find(&rq->rd->cpupri, rq_selected(rq), NULL))
 		return;
 
 	/*
@@ -1710,7 +1707,9 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
  */
 static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (p->prio < rq->curr->prio) {
+	struct task_struct *curr = rq_selected(rq);
+
+	if (p->prio < curr->prio) {
 		resched_curr(rq);
 		return;
 	}
@@ -1728,7 +1727,7 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
 	 * to move current somewhere else, making room for our non-migratable
 	 * task.
 	 */
-	if (p->prio == rq->curr->prio && !test_tsk_need_resched(rq->curr))
+	if (p->prio == curr->prio && !test_tsk_need_resched(rq->curr))
 		check_preempt_equal_prio(rq, p);
 #endif
 }
@@ -1753,7 +1752,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
 	 * utilization. We only care of the case where we start to schedule a
 	 * rt task
 	 */
-	if (rq->curr->sched_class != &rt_sched_class)
+	if (rq_selected(rq)->sched_class != &rt_sched_class)
 		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 
 	rt_queue_push_tasks(rq);
@@ -2033,7 +2032,7 @@ static struct task_struct *pick_next_pushable_task(struct rq *rq)
 				      struct task_struct, pushable_tasks);
 
 	BUG_ON(rq->cpu != task_cpu(p));
-	BUG_ON(task_current(rq, p));
+	BUG_ON(task_current(rq, p) || task_current_selected(rq, p));
 	BUG_ON(p->nr_cpus_allowed <= 1);
 
 	BUG_ON(!task_on_rq_queued(p));
@@ -2066,7 +2065,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 	 * higher priority than current. If that's the case
 	 * just reschedule current.
 	 */
-	if (unlikely(next_task->prio < rq->curr->prio)) {
+	if (unlikely(next_task->prio < rq_selected(rq)->prio)) {
 		resched_curr(rq);
 		return 0;
 	}
@@ -2419,7 +2418,7 @@ static void pull_rt_task(struct rq *this_rq)
 			 * p if it is lower in priority than the
 			 * current task on the run queue
 			 */
-			if (p->prio < src_rq->curr->prio)
+			if (p->prio < rq_selected(src_rq)->prio)
 				goto skip;
 
 			if (is_migration_disabled(p)) {
@@ -2461,9 +2460,9 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
 	bool need_to_push = !task_on_cpu(rq, p) &&
 			    !test_tsk_need_resched(rq->curr) &&
 			    p->nr_cpus_allowed > 1 &&
-			    (dl_task(rq->curr) || rt_task(rq->curr)) &&
+			    (dl_task(rq_selected(rq)) || rt_task(rq_selected(rq))) &&
 			    (rq->curr->nr_cpus_allowed < 2 ||
-			     rq->curr->prio <= p->prio);
+			     rq_selected(rq)->prio <= p->prio);
 
 	if (need_to_push)
 		push_rt_tasks(rq);
@@ -2547,7 +2546,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
 			rt_queue_push_tasks(rq);
 #endif /* CONFIG_SMP */
-		if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
+		if (p->prio < rq_selected(rq)->prio && cpu_online(cpu_of(rq)))
 			resched_curr(rq);
 	}
 }
@@ -2562,7 +2561,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 	if (!task_on_rq_queued(p))
 		return;
 
-	if (task_current(rq, p)) {
+	if (task_current_selected(rq, p)) {
 #ifdef CONFIG_SMP
 		/*
 		 * If our priority decreases while running, we
@@ -2588,7 +2587,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 		 * greater than the current running task
 		 * then reschedule.
 		 */
-		if (p->prio < rq->curr->prio)
+		if (p->prio < rq_selected(rq)->prio)
 			resched_curr(rq);
 	}
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4a1ef64449b2..29597a6fd65b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1008,7 +1008,10 @@ struct rq {
 	 */
 	unsigned int		nr_uninterruptible;
 
-	struct task_struct __rcu	*curr;
+	struct task_struct __rcu	*curr;		/* Execution context */
+#ifdef CONFIG_PROXY_EXEC
+	struct task_struct __rcu	*curr_sched;	/* Scheduling context (policy) */
+#endif
 	struct task_struct	*idle;
 	struct task_struct	*stop;
 	unsigned long		next_balance;
@@ -1207,6 +1210,22 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)
 
+#ifdef CONFIG_PROXY_EXEC
+#define rq_selected(rq)		((rq)->curr_sched)
+#define cpu_curr_selected(cpu)	(cpu_rq(cpu)->curr_sched)
+static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
+{
+	rcu_assign_pointer(rq->curr_sched, t);
+}
+#else
+#define rq_selected(rq)		((rq)->curr)
+#define cpu_curr_selected(cpu)	(cpu_rq(cpu)->curr)
+static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
+{
+	/* Do nothing */
+}
+#endif
+
 struct sched_group;
 #ifdef CONFIG_SCHED_CORE
 static inline struct cpumask *sched_group_span(struct sched_group *sg);
@@ -2068,11 +2087,25 @@ static inline u64 global_rt_runtime(void)
 	return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
 }
 
+/*
+ * Is p the current execution context?
+ */
 static inline int task_current(struct rq *rq, struct task_struct *p)
 {
 	return rq->curr == p;
 }
 
+/*
+ * Is p the current scheduling context?
+ *
+ * Note that it might be the current execution context at the same time if
+ * rq->curr == rq_selected() == p.
+ */
+static inline int task_current_selected(struct rq *rq, struct task_struct *p)
+{
+	return rq_selected(rq) == p;
+}
+
 static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
 {
 #ifdef CONFIG_SMP
@@ -2234,7 +2267,7 @@ struct sched_class {
 
 static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
 {
-	WARN_ON_ONCE(rq->curr != prev);
+	WARN_ON_ONCE(rq_selected(rq) != prev);
 	prev->sched_class->put_prev_task(rq, prev);
 }

-- 
2.41.0.rc0.172.g3f132b7071-goog
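
To make the split concrete, below is a minimal, self-contained userspace
sketch of the two contexts diverging. It is illustrative only and not part
of the patch: struct task and struct rq here are simplified stand-ins for
the kernel's task_struct and runqueue, and the accessors merely mirror the
CONFIG_PROXY_EXEC variants added to kernel/sched/sched.h above.

#include <stdio.h>

struct task {
	const char *name;
	int prio;
};

struct rq {
	struct task *curr;	/* execution context: whose code runs */
	struct task *curr_sched;/* scheduling context: whose policy/accounting apply */
};

/* Mirrors the CONFIG_PROXY_EXEC accessors from sched.h. */
#define rq_selected(rq)	((rq)->curr_sched)

static void rq_set_selected(struct rq *rq, struct task *t)
{
	rq->curr_sched = t;	/* the kernel uses rcu_assign_pointer() here */
}

int main(void)
{
	struct task owner  = { .name = "lock-owner",        .prio = 120 };
	struct task waiter = { .name = "rt-waiter (donor)", .prio = 10  };
	struct rq rq = { .curr = &owner, .curr_sched = &owner };

	/*
	 * Proxy execution: keep running the mutex owner's code (execution
	 * context) while charging time to, and making policy decisions
	 * for, the selected donor (scheduling context).
	 */
	rq_set_selected(&rq, &waiter);

	printf("running %s on behalf of %s (effective prio %d)\n",
	       rq.curr->name, rq_selected(&rq)->name, rq_selected(&rq)->prio);
	return 0;
}

With !CONFIG_PROXY_EXEC the second pointer goes away and rq_selected()
collapses back to rq->curr, which is the size/overhead trade-off raised in
the question to Peter above.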