From: Valentin Schneider
To: linux-kernel@vger.kernel.org
Cc: mingo@kernel.org, peterz@infradead.org, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, Ingo Molnar, Juri Lelli, Steven Rostedt
Subject: [PATCH v3 6/9] sched: Kill select_task_rq()'s sd_flag parameter
Date: Wed, 15 Apr 2020 22:05:09 +0100
Message-Id: <20200415210512.805-7-valentin.schneider@arm.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20200415210512.805-1-valentin.schneider@arm.com>
References: <20200415210512.805-1-valentin.schneider@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

Only select_task_rq_fair() uses that parameter to do an actual domain
search, other classes only care about what kind
of wakeup is happening (fork, exec, or "regular") and thus just
translate the flag into a wakeup type.

WF_TTWU and WF_EXEC have just been added, use these along with WF_FORK
to encode the wakeup types we care about. This cleans up the API a bit,
but adds an extra conversion layer within select_task_rq_fair().

Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Steven Rostedt
Signed-off-by: Valentin Schneider
---
 kernel/sched/core.c      | 10 +++++-----
 kernel/sched/deadline.c  |  4 ++--
 kernel/sched/fair.c      | 18 +++++++++++++++---
 kernel/sched/idle.c      |  2 +-
 kernel/sched/rt.c        |  4 ++--
 kernel/sched/sched.h     |  2 +-
 kernel/sched/stop_task.c |  2 +-
 7 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3a61a3b8eaa9..aea9badd397a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2094,12 +2094,12 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
  * The caller (fork, wakeup) owns p->pi_lock, ->cpus_ptr is stable.
  */
 static inline
-int select_task_rq(struct task_struct *p, int cpu, int sd_flags, int wake_flags)
+int select_task_rq(struct task_struct *p, int cpu, int wake_flags)
 {
 	lockdep_assert_held(&p->pi_lock);
 
 	if (p->nr_cpus_allowed > 1)
-		cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags);
+		cpu = p->sched_class->select_task_rq(p, cpu, wake_flags);
 	else
 		cpu = cpumask_any(p->cpus_ptr);
 
@@ -2612,7 +2612,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		atomic_dec(&task_rq(p)->nr_iowait);
 	}
 
-	cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);
+	cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
 	if (task_cpu(p) != cpu) {
 		wake_flags |= WF_MIGRATED;
 		psi_ttwu_dequeue(p);
@@ -2945,7 +2945,7 @@ void wake_up_new_task(struct task_struct *p)
 	 * as we're not fully set-up yet.
	 */
	p->recent_used_cpu = task_cpu(p);
-	__set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
+	__set_task_cpu(p, select_task_rq(p, task_cpu(p), WF_FORK));
 #endif
 
 	rq = __task_rq_lock(p, &rf);
 	update_rq_clock(rq);
@@ -3486,7 +3486,7 @@ void sched_exec(void)
 	int dest_cpu;
 
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p), SD_BALANCE_EXEC, 0);
+	dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p), WF_EXEC);
 	if (dest_cpu == smp_processor_id())
 		goto unlock;
 
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 504d2f51b0d6..0e96b435c51b 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1601,12 +1601,12 @@ static void yield_task_dl(struct rq *rq)
 static int find_later_rq(struct task_struct *task);
 
 static int
-select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
+select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 {
 	struct task_struct *curr;
 	struct rq *rq;
 
-	if (sd_flag != SD_BALANCE_WAKE)
+	if (!(flags & WF_TTWU))
 		goto out;
 
 	rq = cpu_rq(cpu);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3d34b4e4060f..b0bf98e6798b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6611,7 +6611,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
- * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
+ * that have the relevant SD flag set. In practice, this is SD_BALANCE_WAKE,
  * SD_BALANCE_FORK, or SD_BALANCE_EXEC.
 *
 * Balances load by selecting the idlest CPU in the idlest group, or under
@@ -6622,13 +6622,25 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
  * preempt must be disabled.
  */
 static int
-select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_flags)
+select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 {
+	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
 	struct sched_domain *tmp, *sd = NULL;
 	int cpu = smp_processor_id();
 	int new_cpu = prev_cpu;
 	int want_affine = 0;
-	int sync = (wake_flags & WF_SYNC) && !(current->flags & PF_EXITING);
+	int sd_flag;
+
+	switch (wake_flags & (WF_TTWU | WF_FORK | WF_EXEC)) {
+	case WF_TTWU:
+		sd_flag = SD_BALANCE_WAKE;
+		break;
+	case WF_FORK:
+		sd_flag = SD_BALANCE_FORK;
+		break;
+	default:
+		sd_flag = SD_BALANCE_EXEC;
+	}
 
 	if (sd_flag & SD_BALANCE_WAKE) {
 		record_wakee(p);
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index b743bf38f08f..e9c6a27f0647 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -367,7 +367,7 @@ void cpu_startup_entry(enum cpuhp_state state)
 
 #ifdef CONFIG_SMP
 static int
-select_task_rq_idle(struct task_struct *p, int cpu, int sd_flag, int flags)
+select_task_rq_idle(struct task_struct *p, int cpu, int flags)
 {
 	return task_cpu(p); /* IDLE tasks as never migrated */
 }
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index df11d88c9895..88427ea0231b 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1426,14 +1426,14 @@ static void yield_task_rt(struct rq *rq)
 static int find_lowest_rq(struct task_struct *task);
 
 static int
-select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
+select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 {
 	struct task_struct *curr;
 	struct rq *rq;
 	bool test;
 
 	/* For anything but wake ups, just return the task_cpu */
-	if (sd_flag != SD_BALANCE_WAKE && sd_flag != SD_BALANCE_FORK)
+	if (!(flags & (WF_TTWU | WF_FORK)))
 		goto out;
 
 	rq = cpu_rq(cpu);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f32c5fa229af..448f5d630544 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1767,7 +1767,7 @@ struct sched_class {
 
 #ifdef CONFIG_SMP
 	int (*balance)(struct rq *rq, struct task_struct *prev, struct rq_flags *rf);
-	int (*select_task_rq)(struct task_struct *p, int task_cpu, int sd_flag, int flags);
+	int (*select_task_rq)(struct task_struct *p, int task_cpu, int flags);
 
 	void (*migrate_task_rq)(struct task_struct *p, int new_cpu);
 	void (*task_woken)(struct rq *this_rq, struct task_struct *task);
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 4c9e9975684f..4f061ddf8470 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -11,7 +11,7 @@
 #ifdef CONFIG_SMP
 static int
-select_task_rq_stop(struct task_struct *p, int cpu, int sd_flag, int flags)
+select_task_rq_stop(struct task_struct *p, int cpu, int flags)
 {
 	return task_cpu(p); /* stop tasks as never migrate */
 }
-- 
2.24.0