Date: Thu, 4 Feb 2021 14:59:58 +0100
From: Peter Zijlstra
To: "Joel Fernandes (Google)"
Cc: Nishanth Aravamudan, Julien Desfossez, Tim Chen, Vineeth Pillai,
    Aaron Lu, Aubrey Li, tglx@linutronix.de, linux-kernel@vger.kernel.org,
    mingo@kernel.org, torvalds@linux-foundation.org, fweisbec@gmail.com,
    keescook@chromium.org, kerrnel@google.com, Phil Auld,
    Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini,
    vineeth@bitbyteword.org, Chen Yu, Christian Brauner, Agata Gruza,
    Antonio Gomez Iglesias, graf@amazon.com, konrad.wilk@oracle.com,
    dfaggioli@suse.com, pjt@google.com, rostedt@goodmis.org,
    derkling@google.com, benbjiang@tencent.com, Alexandre Chartre,
    James.Bottomley@hansenpartnership.com, OWeisse@umich.edu,
    Dhaval Giani, Junaid Shahid, jsbarnes@google.com,
    chris.hyser@oracle.com, Ben Segall, Josh Don, Hao Luo, Tom Lendacky
Subject: Re: [PATCH v10 2/5] sched: CGroup tagging interface for core scheduling
References: <20210123011704.1901835-1-joel@joelfernandes.org>
 <20210123011704.1901835-3-joel@joelfernandes.org>

On Wed, Feb 03, 2021 at 05:51:15PM +0100, Peter Zijlstra wrote:
>
> I'm slowly starting to go through this...
>
> On Fri, Jan 22, 2021 at 08:17:01PM -0500, Joel Fernandes (Google) wrote:
> > +static bool sched_core_empty(struct rq *rq)
> > +{
> > +	return RB_EMPTY_ROOT(&rq->core_tree);
> > +}
> > +
> > +static struct task_struct *sched_core_first(struct rq *rq)
> > +{
> > +	struct task_struct *task;
> > +
> > +	task = container_of(rb_first(&rq->core_tree), struct task_struct, core_node);
> > +	return task;
> > +}
>
> AFAICT you can do with:
>
> static struct task_struct *sched_core_any(struct rq *rq)
> {
> 	return rb_entry(rq->core_tree.rb_node, struct task_struct, core_node);
> }
>
> > +static void sched_core_flush(int cpu)
> > +{
> > +	struct rq *rq = cpu_rq(cpu);
> > +	struct task_struct *task;
> > +
> > +	while (!sched_core_empty(rq)) {
> > +		task = sched_core_first(rq);
> > +		rb_erase(&task->core_node, &rq->core_tree);
> > +		RB_CLEAR_NODE(&task->core_node);
> > +	}
> > +	rq->core->core_task_seq++;
> > +}
>
> However,
>
> > +	for_each_possible_cpu(cpu) {
> > +		struct rq *rq = cpu_rq(cpu);
> > +
> > +		WARN_ON_ONCE(enabled == rq->core_enabled);
> > +
> > +		if (!enabled || (enabled && cpumask_weight(cpu_smt_mask(cpu)) >= 2)) {
> > +			/*
> > +			 * All active and migrating tasks will have already
> > +			 * been removed from core queue when we clear the
> > +			 * cgroup tags. However, dying tasks could still be
> > +			 * left in core queue. Flush them here.
> > +			 */
> > +			if (!enabled)
> > +				sched_core_flush(cpu);
> > +
> > +			rq->core_enabled = enabled;
> > +		}
> > +	}
>
> I'm not sure I understand. Is the problem that we're still schedulable
> during do_exit() after cgroup_exit() ?
> It could be argued that when we leave the cgroup there, we should
> definitely leave the tag group too.

That is, did you forget to implement cpu_cgroup_exit()?
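
[For illustration, this is how Peter's sched_core_any() could slot into
Joel's flush loop quoted above; an editor's sketch combining the two
snippets from the thread, not code posted to it. Because the loop erases
every node and drain order is irrelevant, taking the tree root directly
avoids the O(log n) walk to the leftmost node that rb_first() performs on
each iteration.]

static void sched_core_flush(int cpu)
{
	struct rq *rq = cpu_rq(cpu);

	while (!sched_core_empty(rq)) {
		/* Any node will do; the root is free to reach. */
		struct task_struct *task = sched_core_any(rq);

		rb_erase(&task->core_node, &rq->core_tree);
		RB_CLEAR_NODE(&task->core_node);
	}
	rq->core->core_task_seq++;
}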
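
[As for the closing question: a minimal sketch of what such a
cpu_cgroup_exit() might look like, assuming the cgroup_subsys .exit
callback (invoked from cgroup_exit()) and glossing over the core-wide rq
locking the series introduces. The body mirrors the erase/clear steps of
sched_core_flush() above; this is an editor's illustration of the idea,
not the patch set's actual implementation.]

/* Sketch: drop a dying task's core tag when it leaves the cpu cgroup. */
static void cpu_cgroup_exit(struct task_struct *task)
{
	struct rq_flags rf;
	struct rq *rq = task_rq_lock(task, &rf);

	if (!RB_EMPTY_NODE(&task->core_node)) {
		rb_erase(&task->core_node, &rq->core_tree);
		RB_CLEAR_NODE(&task->core_node);
		rq->core->core_task_seq++;
	}

	task_rq_unlock(rq, task, &rf);
}

struct cgroup_subsys cpu_cgrp_subsys = {
	/* ... existing callbacks (css_alloc, fork, attach, ...) ... */
	.exit		= cpu_cgroup_exit,
};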