Message-ID: <554D3E1B.6010501@redhat.com>
Date: Fri, 08 May 2015 18:52:11 -0400
From: Rik van Riel
To: dedekind1@gmail.com
CC: linux-kernel@vger.kernel.org, mgorman@suse.de, peterz@infradead.org, jhladky@redhat.com
Subject: Re: [PATCH] numa,sched: only consider less busy nodes as numa balancing destination
References: <1430908530.7444.145.camel@sauron.fi.intel.com>
 <20150506114128.0c846a37@cuia.bos.redhat.com>
 <1431090801.1418.87.camel@sauron.fi.intel.com>
 <554D1681.7040902@redhat.com>
In-Reply-To: <554D1681.7040902@redhat.com>

On 05/08/2015 04:03 PM, Rik van Riel wrote:
> If the normal scheduler load balancer is moving tasks the
> other way than the NUMA balancer is moving them, things will
> not converge, and tasks will have worse memory locality
> than not doing NUMA balancing at all.
>
> Currently the load balancer has a preference for moving
> tasks to their preferred nodes (NUMA_FAVOUR_HIGHER, true),
> but there is no resistance to moving tasks away from their
> preferred nodes (NUMA_RESIST_LOWER, false). That setting
> was arrived at after a fair amount of experimenting, and
> is probably correct.

Never mind that. After reading the code several times since that
earlier post, it looks like having NUMA_FAVOUR_HIGHER enabled does
absolutely nothing without also having NUMA_RESIST_LOWER enabled,
at least not for idle balancing.

At first glance, this code looks correct, and even useful:

        /*
         * Aggressive migration if:
         * 1) destination numa is preferred
         * 2) task is cache cold, or
         * 3) too many balance attempts have failed.
         */
        tsk_cache_hot = task_hot(p, env);
        if (!tsk_cache_hot)
                tsk_cache_hot = migrate_degrades_locality(p, env);

        if (migrate_improves_locality(p, env) || !tsk_cache_hot ||
            env->sd->nr_balance_failed > env->sd->cache_nice_tries) {
                if (tsk_cache_hot) {
                        schedstat_inc(env->sd, lb_hot_gained[env->idle]);
                        schedstat_inc(p, se.statistics.nr_forced_migrations);
                }
                return 1;
        }

However, with NUMA_RESIST_LOWER disabled (the default),
migrate_degrades_locality() always returns 0. Furthermore,
sched_migration_cost_ns, which influences task_hot(), is set so
small (0.5 ms) that task_hot() is likely to always return false
for workloads with frequent sleeps and network latencies, like a
web workload...

In other words, the idle balancing code will treat tasks moving
towards their preferred NUMA node the same as tasks moving away
from their preferred NUMA node. It will move tasks regardless of
NUMA affinity, and can end up in a big fight with the NUMA
balancing code, as you have observed.

I am not sure what to do about this. Peter?

-- 
All rights reversed
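
Below is a small, self-contained sketch that models the interaction
described in the message above. It is not the kernel source: the struct
layouts, helper bodies, and the 500000 ns threshold are simplified
assumptions standing in for the real can_migrate_task(), task_hot() and
migrate_degrades_locality() machinery, and the feature gates are reduced
to two booleans with the defaults discussed above (NUMA_FAVOUR_HIGHER
on, NUMA_RESIST_LOWER off).

/*
 * Illustrative model only -- NOT the kernel implementation.
 * Shows why, with NUMA_RESIST_LOWER off and a short cache-hot window,
 * the migration test is blind to NUMA placement on idle balancing.
 */
#include <stdbool.h>
#include <stdio.h>

/* Scheduler feature bits as described in the mail (kernel defaults). */
static const bool numa_favour_higher = true;   /* NUMA_FAVOUR_HIGHER */
static const bool numa_resist_lower  = false;  /* NUMA_RESIST_LOWER */

struct task {
        int preferred_nid;      /* node the NUMA balancer wants the task on */
        long ns_since_ran;      /* time since the task last executed */
};

struct env {
        int dst_nid;            /* node the load balancer picked as destination */
        int nr_balance_failed;
        int cache_nice_tries;
};

/* Moving toward the preferred node counts as improving locality. */
static bool migrate_improves_locality(const struct task *p, const struct env *e)
{
        return numa_favour_higher && e->dst_nid == p->preferred_nid;
}

/*
 * With NUMA_RESIST_LOWER disabled (the default) this is unconditionally
 * false -- the crux of the mail: moving *away* from the preferred node
 * is never reported as degrading locality.
 */
static bool migrate_degrades_locality(const struct task *p, const struct env *e)
{
        if (!numa_resist_lower)
                return false;
        return e->dst_nid != p->preferred_nid;
}

/* A task that last ran longer ago than the migration-cost window is "cold". */
static bool task_hot(const struct task *p, long migration_cost_ns)
{
        return p->ns_since_ran < migration_cost_ns;
}

/* Mirrors the structure of the quoted can_migrate_task() excerpt. */
static bool can_migrate(const struct task *p, const struct env *e,
                        long migration_cost_ns)
{
        bool hot = task_hot(p, migration_cost_ns);

        if (!hot)
                hot = migrate_degrades_locality(p, e);

        return migrate_improves_locality(p, e) || !hot ||
               e->nr_balance_failed > e->cache_nice_tries;
}

int main(void)
{
        /* Task prefers node 0 and just spent 2 ms waiting on the network. */
        struct task t = { .preferred_nid = 0, .ns_since_ran = 2000000 };
        struct env toward = { .dst_nid = 0, .nr_balance_failed = 0,
                              .cache_nice_tries = 1 };
        struct env away   = { .dst_nid = 1, .nr_balance_failed = 0,
                              .cache_nice_tries = 1 };

        /* Both print 1: the check does not distinguish the two moves. */
        printf("migrate toward preferred node: %d\n",
               can_migrate(&t, &toward, 500000));
        printf("migrate away from preferred node: %d\n",
               can_migrate(&t, &away, 500000));
        return 0;
}

Under these assumptions both calls return 1; flipping numa_resist_lower
to true makes the "away" case return 0, which is exactly the missing
resistance the message describes, and why NUMA_FAVOUR_HIGHER on its own
does not change the idle-balancing outcome.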