Subject: Re: [PATCH] sched/fair: Fix nohz.next_balance update
From: Vincent Guittot
Date: Wed, 6 May 2020 18:56:02 +0200
To: Valentin Schneider
Cc: Peng Liu, Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
 Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel

On Wed, 6 May 2020 at 18:03, Valentin Schneider wrote:
>
> On 06/05/20 14:45, Vincent Guittot wrote:
> >> But then we may skip an update if we goto abort, no? Imagine we have just
> >> NOHZ_STATS_KICK, so we don't call any rebalance_domains(), and then as we
> >> go through the last NOHZ CPU in the loop we hit need_resched(). We would
> >> end in the abort part without any update to nohz.next_balance, despite
> >> having accumulated relevant data in the local next_balance variable.
> >
> > Yes, but on the other hand the last CPU has not been able to run
> > rebalance_domains(), so we must not move nohz.next_balance; otherwise
> > it will have to wait for at least another full period.
> >
> > In fact, I think the current implementation has a problem: if we
> > abort because the local CPU becomes busy, we might end up skipping
> > the idle load balance for a lot of idle CPUs.
> >
> > As an example, imagine that we have 10 idle CPUs with the same
> > rq->next_balance, which equals nohz.next_balance. _nohz_idle_balance()
> > starts on CPU0; it processes the idle load balance for CPU1 but then
> > has to abort because of need_resched(). If we update nohz.next_balance
> > as we do currently, the next idle load balance will happen after a
> > full balance interval, whereas 8 CPUs are still waiting for an idle
> > load balance to run.
> >
> > My proposal also fixes this problem.
>
> That's a very good point; so with NOHZ_BALANCE_KICK we can reduce
> nohz.next_balance via rebalance_domains(), and otherwise we would only
> increase it if we go through a complete for_each_cpu() loop in
> _nohz_idle_balance().
>
> That said, if for some reason we keep bailing out of the loop, we won't
> push nohz.next_balance forward and thus may repeatedly nohz-balance only
> the first few CPUs in the NOHZ mask. I think that can happen if we have
> say 2 tasks pinned to a single rq; in that case nohz_balancer_kick() will
> kick a NOHZ balance whenever now >= nohz.next_balance.
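For readers following the thread, the loop we keep referring to looks
roughly like this (a simplified sketch of the v5.6-era
_nohz_idle_balance(), paraphrased from memory rather than copied from
mainline; the blocked-load/stats handling is elided):

static bool _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
			       enum cpu_idle_type idle)
{
	unsigned long next_balance = jiffies + 60 * HZ;
	int update_next_balance = 0;
	int this_cpu = this_rq->cpu;
	int balance_cpu;
	bool ret = false;
	struct rq *rq;

	for_each_cpu(balance_cpu, nohz.idle_cpus_mask) {
		if (balance_cpu == this_cpu || !idle_cpu(balance_cpu))
			continue;

		/*
		 * The local CPU got work to do: stop balancing on
		 * behalf of the remaining idle CPUs.
		 */
		if (need_resched())
			goto abort;

		rq = cpu_rq(balance_cpu);

		/* Run the remote CPU's due balance, if asked to. */
		if (time_after_eq(jiffies, rq->next_balance) &&
		    (flags & NOHZ_BALANCE_KICK))
			rebalance_domains(rq, CPU_IDLE);

		/* Track the earliest rq->next_balance seen so far. */
		if (time_after(next_balance, rq->next_balance)) {
			next_balance = rq->next_balance;
			update_next_balance = 1;
		}
	}

	if (flags & NOHZ_BALANCE_KICK)
		rebalance_domains(this_rq, CPU_IDLE);

	ret = true;	/* the full loop completed */

abort:
	/*
	 * Whether this write should also happen after a partial walk
	 * (i.e. below the abort label, as today) or only when the loop
	 * completed is exactly what is being debated above.
	 */
	if (update_next_balance)
		nohz.next_balance = next_balance;

	return ret;
}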
Taking my example above: if CPU0 is idle at every tick and selected as
the ilb_cpu, but unluckily has to abort before running the ilb for CPU1
every time, I agree that we can end up trying to run the ilb on CPU0 at
every tick without any success. We might consider calling kick_ilb() in
_nohz_idle_balance() if we have to abort, to let another CPU handle the
ilb.
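Purely as an illustration of that idea (this is not a tested patch, and
kick_ilb()/find_new_ilb() would probably need to learn to skip the CPU
that is bailing out, since they may otherwise just re-select it), the
abort path in the sketch above would become something like:

		if (need_resched()) {
			/*
			 * We could not finish walking the idle CPUs.
			 * Instead of leaving them unbalanced until
			 * nohz.next_balance fires again, ask another
			 * idle CPU to take over the remaining work.
			 */
			kick_ilb(flags);
			goto abort;
		}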