From: Mathieu Desnoyers
To: Peter Zijlstra, "Paul E. McKenney", Boqun Feng, Andy Lutomirski,
	Dave Watson
Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
	Paul Turner, Andrew Morton, Russell King, Thomas Gleixner,
	Ingo Molnar, "H. Peter Anvin", Andrew Hunter, Andi Kleen,
	Chris Lameter, Ben Maurer, Steven Rostedt, Josh Triplett,
	Linus Torvalds, Catalin Marinas, Will Deacon, Michael Kerrisk,
	Mathieu Desnoyers
Subject: [RFC PATCH for 4.16 09/21] sched: Implement push_task_to_cpu
Date: Thu, 14 Dec 2017 11:13:51 -0500
Message-Id: <20171214161403.30643-10-mathieu.desnoyers@efficios.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20171214161403.30643-1-mathieu.desnoyers@efficios.com>
References: <20171214161403.30643-1-mathieu.desnoyers@efficios.com>

Implement push_task_to_cpu(), which moves the task received as argument
to the runqueue of the destination CPU. It only does so if that CPU is
within the task's allowed CPU mask; otherwise, it returns -EINVAL.

It does not change the allowed CPU mask, and can therefore be used
within applications that rely on owning the sched_setaffinity() state.

It does not pin the task to the destination CPU, which means that the
scheduler may choose to move the task away from that CPU before the
task executes. Code invoking push_task_to_cpu() must be prepared to
retry in that case.

Signed-off-by: Mathieu Desnoyers
CC: "Paul E. McKenney"
CC: Peter Zijlstra
CC: Paul Turner
CC: Thomas Gleixner
CC: Andrew Hunter
CC: Andy Lutomirski
CC: Andi Kleen
CC: Dave Watson
CC: Chris Lameter
CC: Ingo Molnar
CC: "H. Peter Anvin"
CC: Ben Maurer
CC: Steven Rostedt
CC: Josh Triplett
CC: Linus Torvalds
CC: Andrew Morton
CC: Russell King
CC: Catalin Marinas
CC: Will Deacon
CC: Michael Kerrisk
CC: Boqun Feng
CC: linux-api@vger.kernel.org
---
 kernel/sched/core.c  | 37 +++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  9 +++++++++
 2 files changed, 46 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a7cc81d1fcb6..58a8f93949d8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1061,6 +1061,43 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 	set_curr_task(rq, p);
 }
 
+int push_task_to_cpu(struct task_struct *p, unsigned int dest_cpu)
+{
+	struct rq_flags rf;
+	struct rq *rq;
+	int ret = 0;
+
+	rq = task_rq_lock(p, &rf);
+	update_rq_clock(rq);
+
+	if (!cpumask_test_cpu(dest_cpu, &p->cpus_allowed)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (task_cpu(p) == dest_cpu)
+		goto out;
+
+	if (task_running(rq, p) || p->state == TASK_WAKING) {
+		struct migration_arg arg = { p, dest_cpu };
+		/* Need help from migration thread: drop lock and wait. */
+		task_rq_unlock(rq, p, &rf);
+		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
+		tlb_migrate_finish(p->mm);
+		return 0;
+	} else if (task_on_rq_queued(p)) {
+		/*
+		 * OK, since we're going to drop the lock immediately
+		 * afterwards anyway.
+		 */
+		rq = move_queued_task(rq, &rf, p, dest_cpu);
+	}
+out:
+	task_rq_unlock(rq, p, &rf);
+
+	return ret;
+}
+
 /*
  * Change a given task's CPU affinity. Migrate the thread to a
  * proper CPU and schedule it away if the CPU it's executing on
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 85e7a622ee88..85f409a18772 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1224,6 +1224,15 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
 	rseq_migrate(p);
 }
 
+#ifdef CONFIG_SMP
+int push_task_to_cpu(struct task_struct *p, unsigned int dest_cpu);
+#else
+static inline int push_task_to_cpu(struct task_struct *p, unsigned int dest_cpu)
+{
+	return 0;
+}
+#endif
+
 /*
  * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
  */
-- 
2.11.0
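For illustration only, not part of the patch itself: a minimal
caller-side sketch of the retry pattern the changelog describes. The
helper name try_push_task_to_cpu() is hypothetical; it assumes a
kernel/sched/ context where push_task_to_cpu() (added above) and
task_cpu() are visible, and simply retries until the task is last
observed on the destination CPU, since the scheduler may migrate it
away again before it runs there.

	/* Hypothetical sketch; assumes kernel/sched/ context. */
	#include <linux/sched.h>
	#include "sched.h"

	/*
	 * Retry push_task_to_cpu() until the task is last seen on
	 * dest_cpu. This is only best-effort: the scheduler may still
	 * move the task afterwards, since the allowed mask is not
	 * changed and the task is not pinned.
	 */
	static int try_push_task_to_cpu(struct task_struct *p,
					unsigned int dest_cpu)
	{
		int ret;

		do {
			ret = push_task_to_cpu(p, dest_cpu);
			if (ret)
				return ret; /* e.g. -EINVAL: dest_cpu not in allowed mask */
		} while (task_cpu(p) != dest_cpu);

		return 0;
	}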