From: Vincent Guittot
Date: Fri, 5 Feb 2021 15:34:07 +0100
Subject: Re: [PATCH 1/8] sched/fair: Clean up active balance nr_balance_failed trickery
To: Valentin Schneider
Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Dietmar Eggemann, Morten Rasmussen, Qais Yousef, Quentin Perret, Pavan Kondeti, Rik van Riel
References: <20210128183141.28097-1-valentin.schneider@arm.com>
            <20210128183141.28097-2-valentin.schneider@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 5 Feb 2021 at 15:05, Valentin Schneider wrote:
>
> On 05/02/21 14:51, Vincent Guittot wrote:
> > On Thu, 28 Jan 2021 at 19:32, Valentin Schneider wrote:
> >>
> >> When triggering an active load balance, sd->nr_balance_failed is set to
> >> such a value that any further can_migrate_task() using said sd will ignore
> >> the output of task_hot().
> >>
> >> This behaviour makes sense, as active load balance intentionally preempts a
> >> rq's running task to migrate it right away, but this asynchronous write is
> >> a bit shoddy, as the stopper thread might run active_load_balance_cpu_stop
> >> before the sd->nr_balance_failed write either becomes visible to the
> >> stopper's CPU or even happens on the CPU that appended the stopper work.
> >>
> >> Add a struct lb_env flag to denote active balancing, and use it in
> >> can_migrate_task(). Remove the sd->nr_balance_failed write that served the
> >> same purpose.
> >>
> >> Signed-off-by: Valentin Schneider
> >> ---
> >>  kernel/sched/fair.c | 17 ++++++++++-------
> >>  1 file changed, 10 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index 197a51473e0c..0f6a4e58ce3c 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -7423,6 +7423,7 @@ enum migration_type {
> >>  #define LBF_SOME_PINNED 0x08
> >>  #define LBF_NOHZ_STATS  0x10
> >>  #define LBF_NOHZ_AGAIN  0x20
> >> +#define LBF_ACTIVE_LB   0x40
> >>
> >>  struct lb_env {
> >>         struct sched_domain     *sd;
> >> @@ -7608,10 +7609,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
> >>
> >>         /*
> >>          * Aggressive migration if:
> >> -        * 1) destination numa is preferred
> >> -        * 2) task is cache cold, or
> >> -        * 3) too many balance attempts have failed.
> >> +        * 1) active balance
> >> +        * 2) destination numa is preferred
> >> +        * 3) task is cache cold, or
> >> +        * 4) too many balance attempts have failed.
> >>          */
> >> +       if (env->flags & LBF_ACTIVE_LB)
> >> +               return 1;
> >> +
> >
> > This changes the behavior for NUMA systems, because it skips
> > migrate_degrades_locality(), which can return 1 and prevent active
> > migration regardless of nr_balance_failed.
> >
> > Is that intentional?
> If I read this right, the result of migrate_degrades_locality() is
> (currently) ignored if
>
>   env->sd->nr_balance_failed > env->sd->cache_nice_tries

You're right, I had misread the || condition.

> While on the load_balance() side, we have:
>
>   /* We've kicked active balancing, force task migration. */
>   sd->nr_balance_failed = sd->cache_nice_tries+1;
>
> So we should currently be ignoring migrate_degrades_locality() in the
> active balance case - what I wrote in the changelog for task_hot() still
> applies to migrate_degrades_locality().