Subject: Re: [PATCH updated v2] sched/fair: core wide cfs task priority comparison
From: Vineeth Remanan Pillai
Date: Fri, 15 May 2020 10:24:47 -0400
To: Peter Zijlstra
Cc: Aaron Lu, Nishanth Aravamudan, Julien Desfossez, Tim Chen, Ingo Molnar,
    Thomas Gleixner, Paul Turner, Linus Torvalds, Aaron Lu,
    Linux List Kernel Mailing, Frédéric Weisbecker, Kees Cook, Greg Kerr,
    Phil Auld, Aubrey Li, "Li, Aubrey", Valentin Schneider, Mel Gorman,
    Pawan Gupta, Paolo Bonzini, Joel Fernandes, Joel Fernandes
In-Reply-To: <20200515103844.GG2978@hirez.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 15, 2020 at 6:39 AM Peter Zijlstra wrote:
>
> It's complicated ;-)
>
> So this sync is basically a relative reset of S to 0.
>
> So with 2 queues, when one goes idle, we drop them both to 0 and one
> then increases due to not being idle, and the idle one builds up lag to
> get re-elected. So far so simple, right?
>
> When there's 3, we can have the situation where 2 run and one is idle,
> we sync to 0 and let the idle one build up lag to get re-election. Now
> suppose another one also drops idle. At this point dropping all to 0
> again would destroy the built-up lag from the queue that was already
> idle, not good.
>

Thanks for the clarification :-).

I was suggesting the idea of a core-wide force_idle state. We sync the
core_vruntime on the first force_idle of a sibling in the core and start
using core_vruntime for priority comparison from then on. That way, we
don't reset the lag on every force_idle, and the lag builds up from the
first sibling that was forced idle. I think this would work with
infeasible weights as well, but I need to think more to see if it would
break. A sample check to enter this core-wide force_idle state is:

  (cpumask_weight(cpu_smt_mask(cpu)) == old_active && new_active < old_active)

And we exit the core-wide force_idle state when the last sibling goes
out of force_idle, and we can start using min_vruntime for priority
comparison from then on.

When there is a cookie match on all siblings, we don't do a priority
comparison now. But I think we need to do the priority comparison for
cookie matches as well, so that we update 'max' in the loop. And for
this comparison during a no-forced-idle scenario, I hope it should be
fine to use min_vruntime. Updating 'max' in the loop when the cookie
matches is not really needed for SMT2, but would be needed for SMTn.

This is just a wild idea on top of your patches. It might not be
accurate in all cases, and I need to think more about the corner cases.
I thought I would think out loud here :-)

> So instead of syncing everything, we can:
>
>   less := !((s64)(s_a - s_b) <= 0)
>
>   (v_a - S_a) - (v_b - S_b) == v_a - v_b - S_a + S_b
>                             == v_a - (v_b - S_a + S_b)
>
> IOW, we can recast the (lag) comparison to a one-sided difference.
> So if then, instead of syncing the whole queue, sync the idle queue
> against the active queue with S_a + S_b at the point where we sync.
>
> (XXX consider the implication of living in a cyclic group: N / 2^n N)
>
> This gives us means of syncing single queues against the active queue,
> and for already idle queues to preserve their built-up lag.
>
> Of course, then we get the situation where there's 2 active and one
> going idle, who do we pick to sync against? Theory would have us sync
> against the combined S, but as we've already demonstrated, there is no
> such thing in infeasible weight scenarios.
>
> One thing I've considered; and this is where that core_active rudiment
> came from, is having active queues sync up between themselves after
> every tick. This limits the observed divergence due to the work
> conservation.
>
> On top of that, we can improve upon things by moving away from our
> horrible (10) hack and moving to (9) and employing (13) here.
>
> Anyway, I got partway through that in the past days, but then my head
> hurt. I'll consider it some more :-)

This sounds like a much better and more accurate approach than the one
I mentioned above. Please share the code when you have it in some form :-)

>
> > > + new_active++;
> >
> > I think we need to reset new_active on restarting the selection.
>
> But this loop is after selection has been done; we don't modify
> new_active during selection.

My bad, sorry about this false alarm!

> > > + vruntime_a = se_a->vruntime - cfs_rq_a->core_vruntime;
> > > + vruntime_b = se_b->vruntime - cfs_rq_b->core_vruntime;
> >
> > Should we be using core_vruntime conditionally?
> > should it be min_vruntime for default comparisons and core_vruntime
> > during force_idle?
>
> At the very least it should be min_vruntime when cfs_rq_a == cfs_rq_b,
> ie. when we're on the same CPU.
>

Yes, this makes sense.

The issue that I was thinking about is: when there is no force_idle and
all siblings run compatible tasks for a while, min_vruntime progresses,
but core_vruntime lags behind. And when a new task gets enqueued, it
gets the min_vruntime. But now, during comparison, it might be treated
unfairly.

Consider a small example of two rqs, rq1 and rq2:

  rq1->cfs->min_vruntime = 1000
  rq2->cfs->min_vruntime = 2000

During a force_idle, core_vruntime gets synced and:

  rq1->cfs->core_vruntime = 1000
  rq2->cfs->core_vruntime = 2000

Now, suppose the core is out of force_idle and runs two compatible
tasks for a while, where the task on rq1 has more weight. min_vruntime
progresses on both, but slowly on rq1. Say the progress looks like:

  rq1->cfs->min_vruntime = 1200, se1->vruntime = 1200
  rq2->cfs->min_vruntime = 2500, se2->vruntime = 2500

If a new incompatible task (se3) gets enqueued to rq2, its vruntime
would be based on rq2's min_vruntime, say:

  se3->vruntime = 2500

During our priority comparison, the lag would be:

  l_se1 = 200
  l_se3 = 500

So se1 will get selected and run with se2 until its lag catches up with
se3's lag (even if se3 has more weight than se1).

This is a hypothetical situation, but I think it can happen. And if we
use min_vruntime for comparison during a no-force_idle scenario, we
could avoid this. What do you think?

I didn't clearly understand the tick-based active sync, but I guess it
would probably fix this problem better.

Thanks,
Vineeth