From: Josh Don
Date: Mon, 11 Oct 2021 17:31:41 -0700
Subject: Re: [PATCH] sched/core: forced idle accounting
To: Hao Luo
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Joel Fernandes, Vineeth Pillai,
    linux-kernel
References: <20211008000825.1364224-1-joshdon@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Oct 11, 2021 at 10:33 AM Hao Luo wrote:
>
> On Thu, Oct 7, 2021 at 5:08 PM Josh Don wrote:
> > -void sched_core_dequeue(struct rq *rq, struct task_struct *p)
> > +void sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags)
> >  {
> >         rq->core->core_task_seq++;
> >
> > -       if (!sched_core_enqueued(p))
> > -               return;
> > +       if (sched_core_enqueued(p)) {
> > +               rb_erase(&p->core_node, &rq->core_tree);
> > +               RB_CLEAR_NODE(&p->core_node);
> > +       }
> >
> > -       rb_erase(&p->core_node, &rq->core_tree);
> > -       RB_CLEAR_NODE(&p->core_node);
> > +       /*
> > +        * Migrating the last task off the cpu, with the cpu in forced idle
> > +        * state. Reschedule to create an accounting edge for forced idle,
> > +        * and re-examine whether the core is still in forced idle state.
> > +        */
> > +       if (!(flags & DEQUEUE_SAVE) && rq->nr_running == 1 &&
> > +           rq->core->core_forceidle && rq->curr == rq->idle)
> > +               resched_curr(rq);
>
> resched_curr() is probably an unwanted side effect of dequeue. Maybe we
> could extract the check and the resched_curr() call into a function, and
> call that function outside of sched_core_dequeue(). That way, the
> interface of dequeue doesn't need to change.

This resched is an atypical case; normal load balancing won't steal the
last runnable task off a cpu. The main reasons this resched could trigger
are migration due to an affinity change, and migration due to sched core
doing a cookie steal.

We could bubble this up to deactivate_task(), but it seems less brittle to
keep it in dequeue() with the check against DEQUEUE_SAVE, since this
resched creates an important accounting edge. Thoughts? (A rough sketch of
what the extraction could look like is appended at the end of this mail.)

> >         /*
> > @@ -5765,7 +5782,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> >                 for_each_cpu_wrap(i, smt_mask, cpu) {
> >                         rq_i = cpu_rq(i);
> >
> > -                       if (i != cpu)
> > +                       if (i != cpu && (rq_i != rq->core || !core_clock_updated))
> >                                 update_rq_clock(rq_i);
>
> Do you mean (rq_i != rq->core && !core_clock_updated)? I thought
> rq->core always has its clock updated.

rq->clock is updated on entry to pick_next_task(). rq->core's clock has
only been updated at this point if rq == rq->core, or if we've already
done the explicit clock update for rq->core above; otherwise it still
needs the update here.
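
Put differently, the intent of the condition is the following (same line
as in the patch, with a comment added purely to spell out the reasoning;
not a functional change):

        /*
         * rq's clock was already updated on entry to pick_next_task(),
         * so skip i == cpu. rq->core's clock is already up to date only
         * if rq == rq->core, or if we did the explicit
         * update_rq_clock(rq->core) above and set core_clock_updated;
         * in any other case rq_i (including rq->core) still needs the
         * clock update here.
         */
        if (i != cpu && (rq_i != rq->core || !core_clock_updated))
                update_rq_clock(rq_i);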
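
And for reference, an untested sketch of the extraction discussed earlier
(the helper name is made up), just to make the trade-off concrete:

/*
 * Untested sketch only. This assumes the helper runs at the same point
 * as the current check, i.e. while the departing task is still counted
 * in rq->nr_running.
 */
static void sched_core_resched_if_forceidle(struct rq *rq)
{
        /* Last runnable task is leaving a cpu whose core is forced idle. */
        if (rq->nr_running == 1 && rq->core->core_forceidle &&
            rq->curr == rq->idle)
                resched_curr(rq);
}

The catch is that the caller (e.g. the migration path around
deactivate_task()) would then have to distinguish a real migration from a
DEQUEUE_SAVE save/restore cycle itself, which is exactly the information
the flags argument already gives us inside sched_core_dequeue().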