References: <20200503083407.GA27766@iZj6chx1xj0e0buvshuecpZ>
 <20200505134056.GA31680@iZj6chx1xj0e0buvshuecpZ>
 <20200505142711.GA12952@vingu-book>
From: Valentin Schneider
To: Vincent Guittot
Cc: Peng Liu, Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
 Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel
Subject: Re: [PATCH] sched/fair: Fix nohz.next_balance update
Date: Wed, 06 May 2020 17:02:56 +0100
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/05/20 14:45, Vincent Guittot wrote:
>> But then we may skip an update if we goto abort, no?
>> Imagine we have just NOHZ_STATS_KICK, so we don't call any
>> rebalance_domains(), and then as we go through the last NOHZ CPU in
>> the loop we hit need_resched(). We would end up in the abort path
>> without any update to nohz.next_balance, despite having accumulated
>> relevant data in the local next_balance variable.
>
> Yes, but on the other hand the last CPU has not been able to run
> rebalance_domains(), so we must not move nohz.next_balance; otherwise
> it will have to wait for at least another full period.
>
> In fact, I think we have a problem with the current implementation:
> if we abort because the local CPU is busy, we might end up skipping
> the idle load balance for a lot of idle CPUs.
>
> As an example, imagine that we have 10 idle CPUs with the same
> rq->next_balance, which equals nohz.next_balance. _nohz_idle_balance()
> starts on CPU0; it processes the idle lb for CPU1, but then has to
> abort because of need_resched(). If we update nohz.next_balance as we
> currently do, the next idle load balance will happen after a full
> balance interval, whereas we still have 8 CPUs waiting to run an
> idle load balance.
>
> My proposal also fixes this problem.

That's a very good point; so with NOHZ_BALANCE_KICK we can reduce
nohz.next_balance via rebalance_domains(), and otherwise we would only
increase it if we go through a complete for_each_cpu() loop in
_nohz_idle_balance().

That said, if for some reason we keep bailing out of the loop, we won't
push nohz.next_balance forward, and thus may repeatedly nohz-balance
only the first few CPUs in the NOHZ mask. I think that can happen if we
have, say, 2 tasks pinned to a single rq; in that case,
nohz_balancer_kick() will kick a NOHZ balance whenever
now >= nohz.next_balance.
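To make the scenario above concrete, here is a small user-space sketch of
the loop we're arguing about. This is NOT the kernel code: the names
(nohz_idle_balance_sim, rq_next_balance, NR_CPUS, the timestamps) are all
made up for illustration, and need_resched() is reduced to "abort at a
given CPU index". It only shows why advancing nohz.next_balance on the
abort path delays the CPUs we never got to:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real cpumask/rq state. */
#define NR_CPUS 10

static unsigned long nohz_next_balance;

/*
 * Simplified model of _nohz_idle_balance(): walk the idle CPUs,
 * "balance" each one, and track the earliest per-rq deadline in a
 * local next_balance. resched_cpu is where need_resched() fires
 * (-1: never). update_on_abort models the behaviour under discussion:
 * writing nohz.next_balance even when we bail out early.
 * Returns how many CPUs were actually balanced.
 */
static int nohz_idle_balance_sim(int resched_cpu, bool update_on_abort)
{
    unsigned long rq_next_balance[NR_CPUS];
    unsigned long next_balance = (unsigned long)-1;
    int cpu;

    /* Initial state: every idle rq is due at t = 1000. */
    nohz_next_balance = 1000;
    for (cpu = 0; cpu < NR_CPUS; cpu++)
        rq_next_balance[cpu] = 1000;

    for (cpu = 0; cpu < NR_CPUS; cpu++) {
        if (cpu == resched_cpu) {
            /* Abort path: the remaining CPUs were never balanced. */
            if (update_on_abort)
                nohz_next_balance = next_balance;
            return cpu;
        }
        /* Balanced: this rq is now due one period (100) later. */
        rq_next_balance[cpu] = 1100;
        if (rq_next_balance[cpu] < next_balance)
            next_balance = rq_next_balance[cpu];
    }
    /* Full pass: every rq was handled, safe to advance. */
    nohz_next_balance = next_balance;
    return cpu;
}
```

With an abort at CPU2 and update_on_abort=true, nohz.next_balance moves
to 1100 even though CPUs 2..9 are still due at 1000, so they wait a full
extra period; leaving it at 1000 on abort (Vincent's point) keeps the
earliest pending deadline, at the cost of re-kicking the already-balanced
CPUs.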