Subject: Re: [RFC PATCH v1] sched/fair: limit load balance redo times at the same sched_domain level
From: "Li, Aubrey"
To: Vincent Guittot
Cc: Aubrey Li, Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
    linux-kernel, Andi Kleen, Tim Chen, Srinivas Pandruvada,
    "Rafael J. Wysocki"
Date: Wed, 24 Feb 2021 10:55:47 +0800
References: <1611554578-6464-1-git-send-email-aubrey.li@intel.com>

On 2021/2/24 1:33, Vincent Guittot wrote:
> On Tue, 23 Feb 2021 at 06:41, Li, Aubrey wrote:
>>
>> Hi Vincent,
>>
>> Sorry for the delay, I just returned from the Chinese New Year holiday.
>>
>> On 2021/1/25 22:51, Vincent Guittot wrote:
>>> On Mon, 25 Jan 2021 at 15:00, Li, Aubrey wrote:
>>>>
>>>> On 2021/1/25 18:56, Vincent Guittot wrote:
>>>>> On Mon, 25 Jan 2021 at 06:50, Aubrey Li wrote:
>>>>>>
>>>>>> A long-tail load balance cost is observed on the newly idle path;
>>>>>> it is caused by a race window between the first nr_running check
>>>>>> of the busiest runqueue and its nr_running recheck in detach_tasks.
>>>>>>
>>>>>> Before the busiest runqueue is locked, the tasks on the busiest
>>>>>> runqueue could be pulled by other CPUs and nr_running of the
>>>>>> busiest runqueue becomes 1, which causes detach_tasks to break
>>>>>> out with LBF_ALL_PINNED set.
>>>>>
>>>>> We had better detect that when trying to detach tasks, like below:
>>>>
>>>> This should be a compromise, from my understanding. If we give up load
>>>> balance this time due to the race condition, we do reduce the load
>>>> balance cost on the newly idle path, but if there is indeed an
>>>> imbalance at the same sched_domain
>>>
>>> The redo path is there for the case where LB has found an imbalance
>>> but can't move some load from this busiest rq to the dest rq because
>>> of CPU affinity. So it tries to fix the imbalance by moving load onto
>>> another rq of the group. In your case, the imbalance has disappeared
>>> because it has already been pulled by another rq, so you don't have
>>> to try to find another imbalance. And I would even say you should
>>> not, in order to let the other levels take a chance to spread the load.
>>>
>>>> level, we have to wait for the next softirq entry to handle that
>>>> imbalance. This means the tasks on the second busiest runqueue have
>>>> to stay longer, which could introduce tail latency as well. That's
>>>> why I introduced a variable to control the redo loops. I'll send
>>>> this to the benchmark queue to see if it makes any difference.
>>>
>>> TBH, I don't like multiplying the number of knobs.
>>
>> Sure, I can take your approach. :)
>>
>>>>>
>>>>> --- a/kernel/sched/fair.c
>>>>> +++ b/kernel/sched/fair.c
>>>>> @@ -7688,6 +7688,16 @@ static int detach_tasks(struct lb_env *env)
>>>>>
>>>>>         lockdep_assert_held(&env->src_rq->lock);
>>>>>
>>>>> +       /*
>>>>> +        * Another CPU has emptied this runqueue in the meantime.
>>>>> +        * Just return and let load_balance() exit properly.
>>>>> +        */
>>>>> +       if (env->src_rq->nr_running <= 1 && !env->loop) {
>>
>> May I know why !env->loop is needed here? IIUC, if detach_tasks is invoked
>
> IIRC, my point was to do the test only when trying to detach the 1st
> task. A lot of things can happen when a break is involved, but TBH I
> can't remember a precise use case. It may be over-cautious.

When the break happens, the rq is unlocked and local irqs are restored,
so it is still possible for the rq to be emptied by another CPU.
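
[For context, the LBF_NEED_BREAK retry path in load_balance() being
discussed here, as an abbreviated excerpt paraphrased from
kernel/sched/fair.c of roughly this vintage. The point is that the
busiest rq lock is dropped and irqs are restored before looping back,
which is exactly the window in which another CPU can empty the rq:

more_balance:
        rq_lock_irqsave(busiest, &rf);
        update_rq_clock(busiest);

        /* Pull a batch of tasks off the busiest rq, under its lock */
        cur_ld_moved = detach_tasks(&env);

        /* The busiest rq lock is released: src_rq can change from here */
        rq_unlock(busiest, &rf);

        if (cur_ld_moved) {
                attach_tasks(&env);
                ld_moved += cur_ld_moved;
        }

        local_irq_restore(rf.flags);

        /* detach_tasks() stopped mid-way to limit lock hold time: retry */
        if (env.flags & LBF_NEED_BREAK) {
                env.flags &= ~LBF_NEED_BREAK;
                goto more_balance;
        }
]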
>
>> from LBF_NEED_BREAK, env->loop could be non-zero, but as long as src_rq's
>> nr_running <= 1, we should return immediately with the LBF_ALL_PINNED
>> flag cleared.
>>
>> How about the following change?
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 04a3ce20da67..1761d33accaa 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -7683,8 +7683,11 @@ static int detach_tasks(struct lb_env *env)
>>          * We don't want to steal all, otherwise we may be treated likewise,
>>          * which could at worst lead to a livelock crash.
>>          */
>> -       if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1)
>> +       if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1) {
>
> IMO, we must do the test before: while (!list_empty(tasks)) {
>
> because src_rq might have become empty if waiting tasks have been
> pulled by another cpu and the running one became idle in the meantime.

Okay, but after the running one becomes idle, LBF_ALL_PINNED is still
set, so it needs to be cleared as well. Thanks!

>
>> +               /* Clear the flag as we will not test any task */
>> +               env->flags &= ~LBF_ALL_PINNED;
>> +               break;
>> +       }
>>
>>         p = list_last_entry(tasks, struct task_struct, se.group_node);
>>
>> Thanks,
>> -Aubrey
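
[Where the two suggestions converge, as a minimal sketch: the
nr_running test hoisted above the while loop of detach_tasks(), with
LBF_ALL_PINNED cleared on that early exit. This assumes struct lb_env
names its flag field "flags", as mainline does; it illustrates the
discussion rather than reproducing a verbatim patch:

static int detach_tasks(struct lb_env *env)
{
        struct list_head *tasks = &env->src_rq->cfs_tasks;
        int detached = 0;

        lockdep_assert_held(&env->src_rq->lock);

        /*
         * The source rq was emptied by another CPU while we were
         * waiting for its lock: clear LBF_ALL_PINNED, since no task
         * was actually tested for pinning, and bail out early so the
         * newly idle path stays cheap.
         */
        if (env->src_rq->nr_running <= 1) {
                env->flags &= ~LBF_ALL_PINNED;
                return 0;
        }

        while (!list_empty(tasks)) {
                /* ... existing per-task detach logic unchanged ... */
        }

        return detached;
}
]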