From: kosaki.motohiro@gmail.com
To: linux-kernel@vger.kernel.org
Cc: Olivier Langlois, Thomas Gleixner, Frederic Weisbecker, Ingo Molnar,
	Peter Zijlstra, KOSAKI Motohiro
Subject: [PATCH 6/7] sched: task_sched_runtime introduce micro optimization
Date: Fri, 3 May 2013 00:47:47 -0400
Message-Id: <1367556468-4021-8-git-send-email-kosaki.motohiro@gmail.com>
In-Reply-To: <1367556468-4021-1-git-send-email-kosaki.motohiro@gmail.com>
References: <1367556468-4021-1-git-send-email-kosaki.motohiro@gmail.com>

From: KOSAKI Motohiro

The rq lock in task_sched_runtime() is necessary for two reasons:
1) accessing se.sum_exec_runtime is not atomic on 32-bit, and
2) do_task_delta_exec() requires it.

So, on 64-bit, we can avoid taking the rq lock when add_delta is false
or when the task's delta_exec is 0.

Cc: Olivier Langlois
Cc: Thomas Gleixner
Cc: Frederic Weisbecker
Cc: Ingo Molnar
Suggested-by: Paul Turner
Acked-by: Peter Zijlstra
Signed-off-by: KOSAKI Motohiro
---
 kernel/sched/core.c | 15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b817e6d..75872c3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2657,6 +2657,21 @@ unsigned long long task_sched_runtime(struct task_struct *p, bool add_delta)
 	struct rq *rq;
 	u64 ns = 0;
 
+#ifdef CONFIG_64BIT
+	/*
+	 * 64-bit doesn't need locks to atomically read a 64-bit value. So we
+	 * have two optimization chances: 1) when the caller doesn't need
+	 * delta_exec, and 2) when the task's delta_exec is 0. The former is
+	 * obvious. The latter is subtler: reading ->on_cpu is racy, but
+	 * that is ok. If we race with it leaving the cpu, we'll take the
+	 * lock, so we're correct. If we race with it entering the cpu, the
+	 * unaccounted time is 0, which is indistinguishable from the read
+	 * occurring a few cycles earlier.
+	 */
+	if (!add_delta || !p->on_cpu)
+		return p->se.sum_exec_runtime;
+#endif
+
 	rq = task_rq_lock(p, &flags);
 	ns = p->se.sum_exec_runtime;
 	if (add_delta)
-- 
1.7.1
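
For reference, below is a minimal userspace sketch (hypothetical code,
not from the kernel) of the torn-read hazard that makes the lock
necessary on 32-bit: a plain 64-bit load compiles to two 32-bit loads
there, so a reader can observe one half of an old value and one half of
a new one. The names counter and writer are illustrative only; counter
stands in for se.sum_exec_runtime.

/*
 * Hypothetical demo of a torn 64-bit read on a 32-bit CPU. There the
 * plain load below is two 32-bit loads, so the reader can see a value
 * that is half old and half new; on 64-bit the load is one instruction
 * and the check never fires (the program then loops forever).
 */
#include <stdio.h>
#include <stdint.h>
#include <pthread.h>

static volatile uint64_t counter;	/* stands in for se.sum_exec_runtime */

static void *writer(void *arg)
{
	/* Flip between two values whose high and low halves both differ. */
	for (;;)
		counter = counter ? 0 : 0x00000001ffffffffULL;
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, writer, NULL);
	for (;;) {
		uint64_t v = counter;	/* unlocked read, as on the 64-bit fast path */
		if (v != 0 && v != 0x00000001ffffffffULL) {
			printf("torn read: 0x%016llx\n", (unsigned long long)v);
			return 1;
		}
	}
}

This is also why the patch's fast path is guarded by CONFIG_64BIT: the
lockless return of p->se.sum_exec_runtime is only safe where a 64-bit
load is a single atomic access.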