From: Paul Turner
Date: Tue, 2 Oct 2012 14:14:14 -0700
Subject: Re: [patch 11/16] sched: replace update_shares weight distribution with per-entity computation
To: Benjamin Segall
Cc: Jan H. Schönherr, linux-kernel@vger.kernel.org, Peter Zijlstra,
 Ingo Molnar, Vaidyanathan Srinivasan, Srivatsa Vaddagiri,
 Kamalesh Babulal, Venki Pallipadi, Mike Galbraith, Vincent Guittot,
 Nikunj A Dadhania, Morten Rasmussen, "Paul E. McKenney", Namhyung Kim

On Mon, Sep 24, 2012 at 1:39 PM, Benjamin Segall wrote:
> blocked_load_avg ~= \sum_child child.runnable_avg_sum/child.runnable_avg_period * child.weight
>
> The thought was: if all the children have hit zero runnable_avg_sum
> (or, in the case of a child task, will when they wake up), then
> blocked_load_avg should also hit zero at the same time, and we're in
> theory fine.
>
> However, child load can be significantly larger than even the maximum
> value of runnable_avg_sum (and you can get a full contribution off a new
> task with only one tick of runnable_avg_sum anyway...), so
> runnable_avg_sum can hit zero first due to rounding. We should case on
> runnable_avg_sum || blocked_load_avg.

Clipping blocked_load_avg when runnable_avg_sum goes to zero is sufficient.
At this point we cannot contribute to our parent anyway.

> As a side note, currently decay_load uses SRR, which means none of these
> will hit zero anyway if updates occur more often than once per 32ms. I'm
> not sure how we missed /that/, but fixes incoming.

Egads, fixed. We definitely used to have that; I think it got lost in the
"clean everything up, break it into a series, and make it pretty" step.
Perhaps that explains why some of the numbers in the previous table were a
little different.

> Thanks,
> Ben