From: byungchul.park@lge.com
To: pjt@google.com
Cc: linux-kernel@vger.kernel.org, Byungchul Park <byungchul.park@lge.com>
Subject: [PATCH] sched: prevent sched entity from being decayed twice when both waking and migrating it
Date: Thu, 16 Jul 2015 17:11:57 +0900
Message-Id: <1437034317-15120-1-git-send-email-byungchul.park@lge.com>

From: Byungchul Park <byungchul.park@lge.com>

Hello Paul, can I ask you something?

When a sched entity is both woken up and migrated, it looks like it gets
decayed twice. Did you do that on purpose, or am I missing something? :(

Thanks,
Byungchul

--------------->8---------------
From 793c963d0b29977a0f6f9330291a9ea469cc54f0 Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@lge.com>
Date: Thu, 16 Jul 2015 16:49:48 +0900
Subject: [PATCH] sched: prevent sched entity from being decayed twice when
 both waking and migrating it

The current code decays the load average variables for the sleep time
twice when an entity is both woken up and migrated. The first decay
happens in the call path "migrate_task_rq_fair() ->
__synchronize_entity_decay()"; the second one happens in the call path
"enqueue_entity_load_avg() -> update_entity_load_avg()". Make the decay
happen only once.

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
 kernel/sched/fair.c | 29 +++--------------------------
 1 file changed, 3 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 09456fc..c86cca0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2873,32 +2873,9 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 						  struct sched_entity *se,
 						  int wakeup)
 {
-	/*
-	 * We track migrations using entity decay_count <= 0, on a wake-up
-	 * migration we use a negative decay count to track the remote decays
-	 * accumulated while sleeping.
-	 *
-	 * Newly forked tasks are enqueued with se->avg.decay_count == 0, they
-	 * are seen by enqueue_entity_load_avg() as a migration with an already
-	 * constructed load_avg_contrib.
-	 */
-	if (unlikely(se->avg.decay_count <= 0)) {
+	/* we track migrations using entity decay_count == 0 */
+	if (unlikely(!se->avg.decay_count)) {
 		se->avg.last_runnable_update = rq_clock_task(rq_of(cfs_rq));
-		if (se->avg.decay_count) {
-			/*
-			 * In a wake-up migration we have to approximate the
-			 * time sleeping. This is because we can't synchronize
-			 * clock_task between the two cpus, and it is not
-			 * guaranteed to be read-safe. Instead, we can
-			 * approximate this using our carried decays, which are
-			 * explicitly atomically readable.
-			 */
-			se->avg.last_runnable_update -= (-se->avg.decay_count)
-							<< 20;
-			update_entity_load_avg(se, 0);
-			/* Indicate that we're now synchronized and on-rq */
-			se->avg.decay_count = 0;
-		}
 		wakeup = 0;
 	} else {
 		__synchronize_entity_decay(se);
@@ -5114,7 +5091,7 @@ migrate_task_rq_fair(struct task_struct *p, int next_cpu)
 	 * be negative here since on-rq tasks have decay-count == 0.
 	 */
 	if (se->avg.decay_count) {
-		se->avg.decay_count = -__synchronize_entity_decay(se);
+		__synchronize_entity_decay(se);
 		atomic_long_add(se->avg.load_avg_contrib,
 				&cfs_rq->removed_load);
 	}
--
1.7.9.5
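
For illustration, here is a standalone user-space sketch of the problem
described above (not kernel code: decay_for_periods(), the 1024 starting
contribution and the 32 sleep periods are made-up stand-ins; the constant
only mirrors the kernel's per-period decay where y^32 ~= 1/2). It shows
what "decayed twice" costs: applying the sleep-time decay in both the
migrate path and the enqueue path halves the load contribution one extra
time.

	/* build with: cc sketch.c -lm */
	#include <stdio.h>
	#include <math.h>

	/* Illustrative stand-in for a per-period decay with y^32 == 1/2. */
	static double decay_for_periods(double contrib, unsigned int periods)
	{
		const double y = pow(0.5, 1.0 / 32.0);

		return contrib * pow(y, periods);
	}

	int main(void)
	{
		double contrib = 1024.0;	/* contribution before sleeping */
		unsigned int slept = 32;	/* decay periods spent sleeping */

		/* Intended behaviour: decay once for the time slept. */
		double once = decay_for_periods(contrib, slept);

		/*
		 * Pre-patch behaviour on a wake-up migration, as described in
		 * the changelog: migrate_task_rq_fair() decays for the sleep
		 * time, then enqueue_entity_load_avg() ->
		 * update_entity_load_avg() decays for the same time again.
		 */
		double twice = decay_for_periods(decay_for_periods(contrib, slept),
						 slept);

		printf("decayed once:  %6.1f\n", once);		/* ~512 */
		printf("decayed twice: %6.1f\n", twice);	/* ~256 */

		return 0;
	}

With the patch, only one of the two paths applies the sleep-time decay,
which corresponds to the "decayed once" value in the sketch.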