References: <20200503083407.GA27766@iZj6chx1xj0e0buvshuecpZ>
 <20200505134056.GA31680@iZj6chx1xj0e0buvshuecpZ>
 <20200505142711.GA12952@vingu-book>
User-agent: mu4e 0.9.17; emacs 26.3
From: Valentin Schneider
To: Vincent Guittot
Cc: Peng Liu, Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
 Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel
Subject: Re: [PATCH] sched/fair: Fix nohz.next_balance update
Date: Wed, 06 May 2020 21:21:15 +0100

On 06/05/20 17:56, Vincent Guittot wrote:
> On Wed, 6 May 2020 at 18:03, Valentin Schneider wrote:
>>
>> On 06/05/20 14:45, Vincent Guittot wrote:
>> >> But then we may skip an update
>> >> if we goto abort, no? Imagine we have just NOHZ_STATS_KICK, so we
>> >> don't call any rebalance_domains(), and then as we go through the
>> >> last NOHZ CPU in the loop we hit need_resched(). We would end up in
>> >> the abort part without any update to nohz.next_balance, despite
>> >> having accumulated relevant data in the local next_balance variable.
>> >
>> > Yes, but on the other hand the last CPU has not been able to run
>> > rebalance_domains(), so we must not move nohz.next_balance, otherwise
>> > it will have to wait for at least another full period.
>> > In fact, I think we have a problem with the current implementation:
>> > if we abort because the local CPU becomes busy, we might end up
>> > skipping the idle load balance for a lot of idle CPUs.
>> >
>> > As an example, imagine that we have 10 idle CPUs with the same
>> > rq->next_balance, which equals nohz.next_balance. _nohz_idle_balance()
>> > starts on CPU0; it processes the idle load balance for CPU1 but then
>> > has to abort because of need_resched(). If we update nohz.next_balance
>> > as we currently do, the next idle load balance will happen after a
>> > full balance interval, whereas we still have 8 CPUs waiting for an
>> > idle load balance.
>> >
>> > My proposal also fixes this problem.
>> >
>>
>> That's a very good point; so with NOHZ_BALANCE_KICK we can reduce
>> nohz.next_balance via rebalance_domains(), and otherwise we would only
>> increase it if we go through a complete for_each_cpu() loop in
>> _nohz_idle_balance().
>>
>> That said, if for some reason we keep bailing out of the loop, we won't
>> push nohz.next_balance forward and thus may repeatedly nohz-balance only
>> the first few CPUs in the NOHZ mask. I think that can happen if we have,
>> say, 2 tasks pinned to a single rq; in that case nohz_balancer_kick()
>> will kick a NOHZ balance whenever now >= nohz.next_balance.
>
> If we take my example above, where CPU0 is idle at every tick and
> selected as ilb_cpu but unluckily has to abort before running the ilb
> for CPU1 every time, I agree that we can end up trying to run the ilb
> on CPU0 at every tick without any success. We might consider calling
> kick_ilb() in _nohz_idle_balance() if we have to abort, to let another
> CPU handle the ilb.

That's an idea; maybe target something like the next CPU that was due to
be rebalanced (i.e. the one for which we hit the goto abort).