Date: Thu, 28 Apr 2016 11:11:52 +0200
From: Peter Zijlstra
To: Mike Galbraith
Cc: LKML, Brendan Gregg, Jeff Merkey
Subject: Re: [patch] sched: Fix smp nice induced group scheduling load distribution woes
Message-ID: <20160428091152.GC3448@twins.programming.kicks-ass.net>
References: <1461481517.3835.125.camel@gmail.com>
 <1461575925.3670.25.camel@gmail.com>
 <1461740991.3622.3.camel@gmail.com>
In-Reply-To: <1461740991.3622.3.camel@gmail.com>

On Wed, Apr 27, 2016 at 09:09:51AM +0200, Mike Galbraith wrote:
> On even a modest sized NUMA box any load that wants to scale
> is essentially reduced to SCHED_IDLE class by smp nice scaling.
> Limit niceness to prevent cramming a box wide load into a too
> small space. Given niceness affects latency, give the user the
> option to completely disable box wide group fairness as well.

Have you tried the (obvious)?

I suppose we really should just do this (and Yuyang's cleanup patches
too). Nobody has ever been able to reproduce those increased power
usage claims, and Google is running with this enabled.

---
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 69da6fcaa0e8..968f573413de 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -53,7 +53,7 @@ static inline void cpu_load_update_active(struct rq *this_rq) { }
  * when BITS_PER_LONG <= 32 are pretty high and the returns do not justify the
  * increased costs.
  */
-#if 0 /* BITS_PER_LONG > 32 -- currently broken: it increases power usage under light load */
+#ifdef CONFIG_64BIT
 # define SCHED_LOAD_RESOLUTION	10
 # define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
 # define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
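
For anyone following along: the patch flips on SCHED_LOAD_RESOLUTION
for 64-bit builds, giving load weights 10 extra fixed-point bits so
that dividing a small weight across many CPUs no longer truncates to
zero. A minimal user-space sketch of that effect, not kernel code: the
scale_load()/scale_load_down() macros mirror the patch, the weight 15
matches a nice +19 task in the kernel's prio-to-weight table, and the
64-CPU split is made up for illustration.

#include <stdio.h>

#define SCHED_LOAD_RESOLUTION	10
#define scale_load(w)		((unsigned long)(w) << SCHED_LOAD_RESOLUTION)
#define scale_load_down(w)	((unsigned long)(w) >> SCHED_LOAD_RESOLUTION)

int main(void)
{
	unsigned long weight = 15;	/* nice +19 task weight */
	unsigned long ncpu = 64;	/* hypothetical box size */

	/* Without the extra bits, the per-cpu share underflows to 0,
	 * which is how a box-wide load ends up looking SCHED_IDLE. */
	printf("low-res per-cpu share:  %lu\n", weight / ncpu);

	/* With 10 extra fixed-point bits the share stays nonzero (240),
	 * so relative weights survive the distribution arithmetic. */
	printf("high-res per-cpu share: %lu\n", scale_load(weight) / ncpu);
	return 0;
}

The scaled-up value is what the internal share computations operate on;
scale_load_down() is only applied at the edges, where code expects
unscaled nice-derived weights again.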