Date: Tue, 4 Jul 2017 11:41:41 +0200
From: Peter Zijlstra
To: Ingo Molnar
Cc: josef@toxicpanda.com, mingo@redhat.com, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, Josef Bacik
Subject: Re: [RFC][PATCH] sched: attach extra runtime to the right avg
Message-ID: <20170704094141.mebcs2pjv2s6vynt@hirez.programming.kicks-ass.net>
In-Reply-To: <20170702093718.aq5p5xxfvrjdeful@gmail.com>
References: <1498787766-9593-1-git-send-email-jbacik@fb.com>
 <20170702093718.aq5p5xxfvrjdeful@gmail.com>

On Sun, Jul 02, 2017 at 11:37:18AM +0200, Ingo Molnar wrote:
> * josef@toxicpanda.com wrote:
>
> > From: Josef Bacik
> >
> > We only track the load avg of a se in 1024 ns chunks, so in order to
> > make up for the loss of the < 1024 ns part of a run/sleep delta we
> > only add the time we processed to the se->avg.last_update_time. The
> > problem is there is no way to know if this extra time was while we
> > were asleep or while we were running. Instead keep track of the
> > remainder and apply it in the appropriate place. If the remainder
> > was while we were running, add it to the delta the next time we
> > update the load avg while running, and the same for sleeping. This
> > (coupled with other fixes) mostly fixes the regression to my
> > workload introduced by Peter's experimental runnable load
> > propagation patches.
> >
> > Signed-off-by: Josef Bacik
>
> > @@ -2897,12 +2904,16 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> >  	 * Use 1024ns as the unit of measurement since it's a reasonable
> >  	 * approximation of 1us and fast to compute.
> >  	 */
> > +	remainder = delta & (1023UL);
> > +	sa->last_update_time = now;
> > +	if (running)
> > +		sa->run_remainder = remainder;
> > +	else
> > +		sa->sleep_remainder = remainder;
> >  	delta >>= 10;
> >  	if (!delta)
> >  		return 0;
> >
> > -	sa->last_update_time += delta << 10;
> > -
>
> So I'm wondering, this chunk changes how sa->last_update_time is
> maintained in ___update_load_avg(): the new code takes a precise
> timestamp, while the old code was not taking a timestamp at all, but
> was updating it via deltas - where each delta was rounded down to the
> nearest 1024 nsecs boundary.

Right..

> That, if this is the main code path that updates ->last_update_time,
> creates a constant drift of rounding error that skews
> ->last_update_time into larger and larger distances from the real
> 'now' - ever increasing the value of 'delta'.

Well, it's a 0-sum. It doesn't drift unbounded. The difference will
grow up to 1023, at which point we'll account for it whole and we're
back to 0.

The problem is that there are two states: running and blocked, and the
current scheme does not differentiate. We'll accrue the sub-block and
spill it into whatever state gets lucky.

Now, on average you'd hope that works out and both running and blocked
get an equal number of spills pro-rata. But apparently this isn't quite
working out for Josef.
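To make the 0-sum concrete, here's a standalone userspace sketch (not
kernel code; the step values are made up): only whole 1024 ns blocks
are ever consumed, the lag behind 'now' stays below 1024, and once
enough sub-block time accrues it spills as a whole block:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t now = 0, last_update_time = 0;
	/* arbitrary sub-1024 ns increments to exercise the remainder */
	uint64_t steps[] = { 700, 700, 700, 300, 600, 900 };

	for (unsigned int i = 0; i < sizeof(steps) / sizeof(steps[0]); i++) {
		uint64_t delta;

		now += steps[i];
		delta = now - last_update_time;

		delta >>= 10;				/* whole 1024 ns blocks */
		last_update_time += delta << 10;	/* old scheme */

		printf("now=%llu blocks=%llu lag=%llu\n",
		       (unsigned long long)now,
		       (unsigned long long)delta,
		       (unsigned long long)(now - last_update_time));
	}
	return 0;
}

The printed lag never reaches 1024, and whichever update happens to
cross a block edge gets the whole spill, which is exactly the
run/sleep ambiguity Josef's patch is after.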
> An intermediate approach to improve that skew would be something like
> below. It doesn't track the remainder like your patch does, but it
> doesn't lose precision either, it just rounds down 'now' to the
> nearest 1024 boundary.
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 008c514dc241..b03703cd7989 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2965,7 +2965,7 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>  	if (!delta)
>  		return 0;
>
> -	sa->last_update_time += delta << 10;
> +	sa->last_update_time = now & ~1023ULL;
>

So if we have a task that always runs <1024ns it should still get
blocks of runtime, because the difference between now and now&~1023
can be !0 and spill.

I'm just not immediately seeing how it's different from the 0-sum we
had. It should be identical, since delta*1024 would equally land us on
those same edges (there's an offset in the differential form between
the two, but since we start with last_update_time=0, the resulting
edges are the same afaict).

*confused*
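FWIW, a quick userspace check of that claim (a sketch, not kernel
code; the variable names and step values are mine): starting from
last_update_time=0, the differential form and the now&~1023 form land
on the same 1024-aligned edges:

#include <assert.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t now = 0, lut_diff = 0, lut_abs = 0;
	uint64_t steps[] = { 700, 700, 1500, 300, 2047, 1 };

	for (unsigned int i = 0; i < sizeof(steps) / sizeof(steps[0]); i++) {
		uint64_t delta;

		now += steps[i];

		/* differential form: consume whole 1024 ns blocks only */
		delta = (now - lut_diff) >> 10;
		if (delta)
			lut_diff += delta << 10;

		/* absolute form, gated the same way (after the !delta test) */
		if ((now - lut_abs) >> 10)
			lut_abs = now & ~1023ULL;

		assert(lut_diff == lut_abs);
		printf("now=%llu edge=%llu\n",
		       (unsigned long long)now,
		       (unsigned long long)lut_diff);
	}
	return 0;
}

Since lut_diff only ever grows in multiples of 1024 from 0, it always
equals now - (now % 1024), i.e. now & ~1023ULL, so the assert holds
for any sequence of steps.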