From: Vincent Guittot
Date: Tue, 7 Jul 2020 15:30:29 +0200
Subject: Re: [PATCH] sched/fair: handle case of task_h_load() returning 0
To: Valentin Schneider
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel
References: <20200702144258.19326-1-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 2 Jul 2020 at 18:28, Vincent Guittot wrote:
>
> On Thu, 2 Jul 2020 at 18:11, Valentin Schneider wrote:
> >
> > On 02/07/20 15:42, Vincent Guittot wrote:
> > > task_h_load() can return 0 in some situations, such as running stress-ng
> > > mmapfork, which forks thousands of threads, in a sched group on a
> > > 224-core system. Load balancing doesn't handle this correctly because
> > > env->imbalance never decreases, so it stops pulling tasks only after
> > > reaching loop_max, which can be equal to the number of running tasks of
> > > the cfs_rq. Make sure that the imbalance is decreased by at least 1.
> > >
> > > Misfit task handling is the other feature that doesn't handle such a
> > > situation correctly, although the problem is probably harder to hit
> > > because of the smaller number of CPUs and running tasks on heterogeneous
> > > systems.
> > >
> > > We can't simply ensure that task_h_load() returns at least one, because
> > > that would imply handling underrun in other places.
> >
> > Nasty one, that...
> >
> > Random thought: isn't that the kind of thing we have scale_load() and
> > scale_load_down() for? There's more uses of task_h_load() than I would like
> > for this, but if we upscale its output (or introduce an upscaled variant),
> > we could do something like:
> >
> > ---
> > detach_tasks()
> > {
> >         long imbalance = env->imbalance;
> >
> >         if (env->migration_type == migrate_load)
> >                 imbalance = scale_load(imbalance);
> >
> >         while (!list_empty(tasks)) {
> >                 /* ... */
> >                 switch (env->migration_type) {
> >                 case migrate_load:
> >                         load = task_h_load_upscaled(p);
> >                         /* ... usual bits here ... */
> >                         lsub_positive(&env->imbalance, load);
> >                         break;
> >                 /* ... */
> >                 }
> >
> >                 if (!scale_load_down(env->imbalance))
> >                         break;
> >         }
> > }
> > ---
> >
> > It's not perfect, and there's still the misfit situation to sort out -
> > still, do you think this is something we could go towards?
>
> This will not work for 32-bit systems.
>
> For 64-bit, I have to think a bit more about whether the upscale would fix
> all cases and support propagation across a hierarchy. And in that case we
> could also consider making scale_load()/scale_load_down() a nop all the
> time.

In addition to the problem remaining on 32-bit, the problem can still happen
after extending the scale, so this current patch still makes sense.
Then, if we want to reduce the cases where task_h_load() returns 0, we should
rather make scale_load_down() a nop; otherwise we would have to maintain two
values, h_load and scaled h_load, across the hierarchy.

> > >
> > > Signed-off-by: Vincent Guittot
> > > ---
> > >  kernel/sched/fair.c | 18 +++++++++++++++++-
> > >  1 file changed, 17 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 6fab1d17c575..62747c24aa9e 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -4049,7 +4049,13 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
> > >                 return;
> > >         }
> > >
> > > -       rq->misfit_task_load = task_h_load(p);
> > > +       /*
> > > +        * Make sure that misfit_task_load will not be null even if
> > > +        * task_h_load() returns 0. misfit_task_load is only used to select
> > > +        * the rq with the highest load, so adding 1 will not modify the
> > > +        * result of the comparison.
> > > +        */
> > > +       rq->misfit_task_load = task_h_load(p) + 1;
> > >  }
> > >
> > >  #else /* CONFIG_SMP */
> > > @@ -7664,6 +7670,16 @@ static int detach_tasks(struct lb_env *env)
> > >                     env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
> > >                         goto next;
> > >
> > > +               /*
> > > +                * Depending on the number of CPUs and tasks and the
> > > +                * cgroup hierarchy, task_h_load() can return a null
> > > +                * value. Make sure that env->imbalance decreases,
> > > +                * otherwise detach_tasks() will stop only after
> > > +                * detaching up to loop_max tasks.
> > > +                */
> > > +               if (!load)
> > > +                       load = 1;
> > > +
> > >                 env->imbalance -= load;
> > >                 break;