Date: Fri, 23 Aug 2019 14:03:42 -0400
From: Phil Auld
To: bsegall@google.com
Cc: Dave Chiluk, Qian Cai, Ingo Molnar, Peter Zijlstra,
	Linux Kernel Mailing List
Subject: Re: [PATCH -next v2] sched/fair: fix -Wunused-but-set-variable warnings
Message-ID: <20190823180342.GC14209@pauld.bos.csb>
References: <1566326455-8038-1-git-send-email-cai@lca.pw>
In-Reply-To:
User-Agent: Mutt/1.5.21 (2010-09-15)

On Fri, Aug 23, 2019 at 10:28:02AM -0700 bsegall@google.com wrote:
> Dave Chiluk writes:
>
> > On Wed, Aug 21, 2019 at 12:36 PM <bsegall@google.com> wrote:
> >>
> >> Qian Cai writes:
> >>
> >> > The linux-next commit "sched/fair: Fix low cpu usage with high
> >> > throttling by removing expiration of cpu-local slices" [1] introduced a
> >> > few compilation warnings:
> >> >
> >> > kernel/sched/fair.c: In function '__refill_cfs_bandwidth_runtime':
> >> > kernel/sched/fair.c:4365:6: warning: variable 'now' set but not used
> >> > [-Wunused-but-set-variable]
> >> > kernel/sched/fair.c: In function 'start_cfs_bandwidth':
> >> > kernel/sched/fair.c:4992:6: warning: variable 'overrun' set but not used
> >> > [-Wunused-but-set-variable]
> >> >
> >> > Also, __refill_cfs_bandwidth_runtime() no longer updates the
> >> > expiration time, so fix the comments accordingly.
> >> >
> >> > [1] https://lore.kernel.org/lkml/1558121424-2914-1-git-send-email-chiluk+linux@indeed.com/
> >> >
> >> > Signed-off-by: Qian Cai <cai@lca.pw>
> >>
> >> Reviewed-by: Ben Segall <bsegall@google.com>
> >>
> >> > ---
> >> >
> >> > v2: Keep hrtimer_forward_now() in start_cfs_bandwidth() per Ben.
> >> >
> >> >  kernel/sched/fair.c | 19 ++++++-------------
> >> >  1 file changed, 6 insertions(+), 13 deletions(-)
> >> >
> >> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> > index 84959d3285d1..06782491691f 100644
> >> > --- a/kernel/sched/fair.c
> >> > +++ b/kernel/sched/fair.c
> >> > @@ -4354,21 +4354,16 @@ static inline u64 sched_cfs_bandwidth_slice(void)
> >> >  }
> >> >
> >> >  /*
> >> > - * Replenish runtime according to assigned quota and update expiration time.
> >> > - * We use sched_clock_cpu directly instead of rq->clock to avoid adding
> >> > - * additional synchronization around rq->lock.
> >> > + * Replenish runtime according to assigned quota. We use sched_clock_cpu
> >> > + * directly instead of rq->clock to avoid adding additional synchronization
> >> > + * around rq->lock.
> >> >   *
> >> >   * requires cfs_b->lock
> >> >   */
> >> >  void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b)
> >> >  {
> >> > -	u64 now;
> >> > -
> >> > -	if (cfs_b->quota == RUNTIME_INF)
> >> > -		return;
> >> > -
> >> > -	now = sched_clock_cpu(smp_processor_id());
> >> > -	cfs_b->runtime = cfs_b->quota;
> >> > +	if (cfs_b->quota != RUNTIME_INF)
> >> > +		cfs_b->runtime = cfs_b->quota;
> >> >  }
> >> >
> >> >  static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
> >> > @@ -4989,15 +4984,13 @@ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
> >> >
> >> >  void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
> >> >  {
> >> > -	u64 overrun;
> >> > -
> >> >  	lockdep_assert_held(&cfs_b->lock);
> >> >
> >> >  	if (cfs_b->period_active)
> >> >  		return;
> >> >
> >> >  	cfs_b->period_active = 1;
> >> > -	overrun = hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
> >> > +	hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);
> >> >  	hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);
> >> >  }
> >
> > Looks good.
> > Reviewed-by: Dave Chiluk <chiluk+linux@indeed.com>
> >
> > Sorry for the slow response, I was on vacation.
> >
> > @Ben do you think it would be useful to still capture overrun, and
> > WARN on any overruns? We wouldn't expect overruns, but their
> > existence would indicate an over-loaded node or too short of a
> > cfs_period. Additionally, it would be interesting to see if we could
> > capture the offset between when the bandwidth was refilled, and when
> > the timer was supposed to fire. I've always done all my calculations
> > assuming that the timer fires and is handled exceedingly close to the
> > time it was supposed to fire. Although, if the node is running that
> > overloaded you probably have many more problems than worrying about
> > timer warnings.
>
> That "overrun" there is not really an overrun - it's the number of
> complete periods the timer has been inactive for. It was used so that a
> given tg's period timer would keep the same
> phase/offset/whatever-you-call-it, even if it goes idle for a while,
> rather than having the next period start N ms after a task wakes up.
>
> Also, poor choices by userspace are not generally something the kernel
> WARNs on, as I understand it.

I don't think it matters in the start_cfs_bandwidth() case, anyway. We do
effectively check this in sched_cfs_period_timer().

Cleanup looks okay to me as well.


Cheers,
Phil

--
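
To make Ben's description concrete: hrtimer_forward_now() advances a
timer's expiry along its original period grid (t0, t0 + period,
t0 + 2*period, ...) until the expiry is back in the future, and returns
how many whole periods were stepped over. Below is a minimal
kernel-context sketch of the start_cfs_bandwidth() usage; the helper
name restart_period_timer() and the pr_debug() line are illustrative
additions, while the hrtimer calls are the real API.

#include <linux/hrtimer.h>
#include <linux/printk.h>

static void restart_period_timer(struct hrtimer *timer, ktime_t period)
{
	/*
	 * Roll the expiry forward in whole multiples of 'period' so the
	 * timer stays on its original phase even after a long idle
	 * stretch.  The return value counts the periods skipped while
	 * idle -- it is not an error or overload indicator.
	 */
	u64 skipped = hrtimer_forward_now(timer, period);

	pr_debug("period timer slept through %llu period(s)\n",
		 (unsigned long long)skipped);

	/* Re-arm at the next on-phase expiry, as start_cfs_bandwidth() does. */
	hrtimer_start_expires(timer, HRTIMER_MODE_ABS_PINNED);
}

This is why discarding the return value in start_cfs_bandwidth() loses
nothing: the phase correction is a side effect of the call itself, and
the count only says how long the group had no runnable tasks.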
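On Phil's closing point, the place where an overloaded node or a
too-short cfs_period_us actually gets noticed is the timer handler
itself, which re-runs hrtimer_forward_now() each time it fires. A
paraphrased sketch of that loop, assuming the 2019-era
sched_cfs_period_timer(); this is not verbatim kernel source, and the
do_sched_cfs_period_timer() call is simplified:

	/* inside sched_cfs_period_timer(), cfs_b->lock held */
	for (;;) {
		/*
		 * If handling the timer (or the work it triggers) took
		 * longer than a period, more periods have already
		 * elapsed; a non-zero return reports exactly that.
		 */
		overrun = hrtimer_forward_now(timer, cfs_b->period);
		if (!overrun)
			break;	/* expiry is in the future again; done */

		/* replenish quota for the elapsed period(s) */
		idle = do_sched_cfs_period_timer(cfs_b, overrun);
	}

So a misconfigured period shows up as the handler looping here, which
is the effective check Phil refers to, without any need for a WARN in
start_cfs_bandwidth().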