Date: Tue, 19 Dec 2023 18:51:18 +0100 (CET)
From: Julia Lawall <julia.lawall@inria.fr>
To: Vincent Guittot
Cc: Peter Zijlstra, Ingo Molnar, Dietmar Eggemann, Mel Gorman,
    linux-kernel@vger.kernel.org
Subject: Re: EEVDF and NUMA balancing
Message-ID: <98b3df1-79b7-836f-e334-afbdd594b55@inria.fr>
References: <20231003215159.GJ1539@noisy.programming.kicks-ass.net>
    <20231004120544.GA6307@noisy.programming.kicks-ass.net>
    <20231004174801.GE19999@noisy.programming.kicks-ass.net>
    <20231009102949.GC14330@noisy.programming.kicks-ass.net>

> > One CPU has 2 threads, and the others have one. The one with two
> > threads is returned as the busiest one. But nothing happens, because
> > both of them prefer the socket that they are on.
>
> This explains why load_balance uses migrate_util and not migrate_task.
> One CPU with 2 threads can be overloaded.
>
> ok, so it seems that your 1st problem is that you have 2 threads on
> the same CPU whereas you should have an idle core in this numa node.
> All cores are sharing the same LLC, aren't they?

Sorry, not following this.

Socket 1 has N-1 threads, and thus an idle CPU. Socket 2 has N+1
threads, and thus one CPU with two threads. Socket 1 tries to steal
from that one CPU with two threads, but the steal fails, because both
threads prefer being on Socket 2.

Since most (or all?) of the threads on Socket 2 prefer being on
Socket 2, the only hope for Socket 1 to fill its idle CPU is active
balancing. But active balancing is not triggered, because of
migrate_util and because CPU_NEWLY_IDLE prevents the failure counter
from being increased.
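To make the two blockers concrete, the relevant logic in
kernel/sched/fair.c looks roughly like the following (paraphrased and
abridged from a recent 6.x tree; not the exact source):

    /* Tail of load_balance() when no task could be moved: the failure
     * counter is only bumped on periodic balancing, so CPU_NEWLY_IDLE
     * passes can never push nr_balance_failed past the threshold
     * checked below. */
    if (!ld_moved) {
            if (idle != CPU_NEWLY_IDLE)
                    sd->nr_balance_failed++;

            if (need_active_balance(&env)) {
                    /* ... wake the active load balancer ... */
            }
    }

    /* And the check behind need_active_balance() for the imbalanced
     * case only considers migrate_task, so a stuck migrate_util
     * imbalance never triggers active balancing: */
    static inline bool
    imbalanced_active_balance(struct lb_env *env)
    {
            struct sched_domain *sd = env->sd;

            if ((env->migration_type == migrate_task) &&
                (sd->nr_balance_failed > sd->cache_nice_tries + 2))
                    return 1;

            return 0;
    }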
The part that I currently fail to understand is that when I convert
CPU_NEWLY_IDLE to CPU_IDLE, load balancing typically picks a CPU with
only one thread as busiest. I have the impression that fbq_type
intervenes to make it avoid the CPU whose two threads already prefer
Socket 2, but I don't know at the moment why that is the case. In any
case, it's fine to active balance from a CPU with only one thread,
because Socket 2 will even itself out afterwards.

> You should not have more than 1 thread per CPU when there are N+1
> threads on a node with N cores / 2N CPUs.

Hmm, I think there is a miscommunication about cores and CPUs. The
machine has two sockets with 16 physical cores each, and thus 32
hyperthreads per socket. There are 64 threads running.

julia

> This will enable the load_balance to try to migrate a task instead
> of some util(ization) and you should reach the active load balance.
>
> > > In theory you should have the local "group_has_spare" and the
> > > busiest "group_fully_busy" (at most). This means that no group
> > > should be overloaded and load_balance should not try to migrate
> > > util but only a task.
> >
> > I didn't collect information about the groups. I will look into that.
> >
> > julia
> >
> > > > and changing the above test to:
> > > >
> > > >     if ((env->migration_type == migrate_task ||
> > > >          env->migration_type == migrate_util) &&
> > > >         (sd->nr_balance_failed > sd->cache_nice_tries + 2))
> > > >
> > > > seems to solve the problem.
> > > >
> > > > I will test this on more applications. But let me know if the
> > > > above solution seems completely inappropriate. Maybe it violates
> > > > some other constraints.
> > > >
> > > > I have no idea why this problem became more visible with EEVDF.
> > > > It seems to have to do with the time slices all turning out to
> > > > be the same. I got the same behavior in 6.5 by overwriting the
> > > > timeslice calculation to always return 1. But I don't see the
> > > > connection between the timeslice and the behavior of the idle
> > > > task.
> > > >
> > > > thanks,
> > > > julia
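For concreteness, the test quoted above is the one in
imbalanced_active_balance() in kernel/sched/fair.c; with the proposed
change applied, the function would read roughly as follows (a sketch
against a 6.x tree, not a tested patch):

    static inline bool
    imbalanced_active_balance(struct lb_env *env)
    {
            struct sched_domain *sd = env->sd;

            /* Proposed change: once periodic balancing has failed
             * repeatedly, allow active balancing for a utilization
             * imbalance (migrate_util) as well, not only for a
             * task-count imbalance (migrate_task). */
            if ((env->migration_type == migrate_task ||
                 env->migration_type == migrate_util) &&
                (sd->nr_balance_failed > sd->cache_nice_tries + 2))
                    return 1;

            return 0;
    }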