Date: Fri, 26 Mar 2021 19:30:24 +0100
From: Peter Zijlstra
To: Vincent Guittot
Cc: Ingo Molnar, Mel Gorman, Juri Lelli, Dietmar Eggemann, Steven Rostedt,
    Ben Segall, Daniel Bristot de Oliveira, Josh Don, Valentin Schneider,
    linux-kernel, greg@kroah.com
Subject: Re: [PATCH 9/9] sched,fair: Alternative sched_slice()
Message-ID: <20210326183024.GM4746@worktop.programming.kicks-ass.net>
References: <20210326103352.603456266@infradead.org>
 <20210326103935.444833549@infradead.org>

On Fri, Mar 26, 2021 at 04:37:03PM +0100, Vincent Guittot wrote:
> On Fri, 26 Mar 2021 at 11:43, Peter Zijlstra wrote:
> >
> > The current sched_slice() seems to have issues; there's two possible
> > things that could be improved:
> >
> >  - the 'nr_running' used for __sched_period() is daft when cgroups are
> >    considered. Using the RQ wide h_nr_running seems like a much more
> >    consistent number.
> >
> >  - (esp) cgroups can slice it real fine, which makes for easy
> >    over-scheduling, ensure min_gran is what the name says.
> >
> > Signed-off-by: Peter Zijlstra (Intel)
> > ---
> >  kernel/sched/fair.c     | 15 ++++++++++++++-
> >  kernel/sched/features.h |  3 +++
> >  2 files changed, 17 insertions(+), 1 deletion(-)
> >
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -680,7 +680,16 @@ static u64 __sched_period(unsigned long
> >   */
> >  static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  {
> > -	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> > +	unsigned int nr_running = cfs_rq->nr_running;
> > +	u64 slice;
> > +
> > +	if (sched_feat(ALT_PERIOD))
> > +		nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
> > +
> > +	slice = __sched_period(nr_running + !se->on_rq);
> > +
> > +	if (sched_feat(BASE_SLICE))
> > +		slice -= sysctl_sched_min_granularity;
> >
> >  	for_each_sched_entity(se) {
> >  		struct load_weight *load;
> > @@ -697,6 +706,10 @@ static u64 sched_slice(struct cfs_rq *cf
> >  		}
> >  		slice = __calc_delta(slice, se->load.weight, load);
> >  	}
> > +
> > +	if (sched_feat(BASE_SLICE))
> > +		slice += sysctl_sched_min_granularity;
>
> Why not only doing a max of slice and sysctl_sched_min_granularity
> instead of scaling only the part above sysctl_sched_min_granularity ?
>
> With your change, cases where the slices would have been in a good
> range already, will be modified as well

Can do I suppose. Not sure how I ended up with this.
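
[Editor's note: the following is a rough, hypothetical user-space sketch of the
two clamping policies being compared, not kernel code. It approximates
__calc_delta()'s fixed-point scaling with a plain fraction, assumes the default
0.75 ms sysctl_sched_min_granularity, and the function/variable names are
illustrative only. It is meant to show where the two variants diverge, per
Vincent's remark about slices that were already in a good range.]

#include <stdio.h>
#include <stdint.h>

#define MIN_GRAN_NS	750000ULL	/* default sched_min_granularity */

/* Patch variant: scale only the part above min_gran, then add it back. */
static uint64_t base_slice_offset(uint64_t period, uint64_t num, uint64_t den)
{
	uint64_t slice = period - MIN_GRAN_NS;

	slice = slice * num / den;	/* stand-in for __calc_delta() */
	return slice + MIN_GRAN_NS;
}

/* Suggested variant: scale the whole period, then clamp with max(). */
static uint64_t base_slice_max(uint64_t period, uint64_t num, uint64_t den)
{
	uint64_t slice = period * num / den;

	return slice > MIN_GRAN_NS ? slice : MIN_GRAN_NS;
}

int main(void)
{
	uint64_t period = 6000000ULL;	/* 6 ms __sched_period() result */

	/* Small weight share (1/64): both variants end up near min_gran. */
	printf("1/64 share: offset=%llu ns, max=%llu ns\n",
	       (unsigned long long)base_slice_offset(period, 1, 64),
	       (unsigned long long)base_slice_max(period, 1, 64));

	/* Large share (1/2): the offset variant inflates a slice that
	 * was already comfortably above min_gran; max() leaves it alone. */
	printf("1/2  share: offset=%llu ns, max=%llu ns\n",
	       (unsigned long long)base_slice_offset(period, 1, 2),
	       (unsigned long long)base_slice_max(period, 1, 2));
	return 0;
}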