From: Dietmar Eggemann
Subject: Re: [PATCH] sched/fair: Fix the logic about active_balance in load_balance()
To: Qi Zheng, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de
Cc: linux-kernel@vger.kernel.org
References: <20200802045141.130533-1-arch0.zheng@gmail.com>
In-Reply-To: <20200802045141.130533-1-arch0.zheng@gmail.com>
Date: Mon, 3 Aug 2020 09:36:55 +0200
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 02/08/2020 06:51, Qi Zheng wrote:
> I think the unbalance
> scenario here should be that we need to
> do active balance but it is not actually done. So fix it.
>
> Signed-off-by: Qi Zheng
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2ba8f230feb9..6d8c53718b67 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9710,7 +9710,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>  	} else
>  		sd->nr_balance_failed = 0;
>
> -	if (likely(!active_balance) || voluntary_active_balance(&env)) {
> +	if (likely(!active_balance) && voluntary_active_balance(&env)) {
>  		/* We were unbalanced, so reset the balancing interval */
>  		sd->balance_interval = sd->min_interval;
>  	} else {
>

Active balance has potentially already been done by the time we reach
this code. See 'if (need_active_balance(&env))' and
'if (!busiest->active_balance)' further up.

Here we only reset sd->balance_interval in case:

(A) the last load balance wasn't an active one

(B) the reason for the active load balance was:

    (1) asym packing
    (2) the capacity of src_cpu is reduced compared to that of dst_cpu
    (3) misfit handling

(B) is done to avoid unnecessarily increasing the balance interval, see
commit 46a745d90585 ("sched/fair: Fix unnecessary increase of balance
interval").