Date: Wed, 3 Feb 2021 17:51:15 +0100
From: Peter Zijlstra
To: "Joel Fernandes (Google)"
Cc: Nishanth Aravamudan, Julien Desfossez, Tim Chen, Vineeth Pillai,
	Aaron Lu, Aubrey Li, tglx@linutronix.de, linux-kernel@vger.kernel.org,
	mingo@kernel.org, torvalds@linux-foundation.org, fweisbec@gmail.com,
	keescook@chromium.org, kerrnel@google.com, Phil Auld,
	Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini,
	vineeth@bitbyteword.org, Chen Yu, Christian Brauner, Agata Gruza,
	Antonio Gomez Iglesias, graf@amazon.com, konrad.wilk@oracle.com,
	dfaggioli@suse.com, pjt@google.com, rostedt@goodmis.org,
	derkling@google.com, benbjiang@tencent.com, Alexandre Chartre,
	James.Bottomley@hansenpartnership.com, OWeisse@umich.edu,
	Dhaval Giani, Junaid Shahid, jsbarnes@google.com,
	chris.hyser@oracle.com, Ben Segall, Josh Don, Hao Luo, Tom Lendacky
Subject: Re: [PATCH v10 2/5] sched: CGroup tagging interface for core scheduling
References: <20210123011704.1901835-1-joel@joelfernandes.org> <20210123011704.1901835-3-joel@joelfernandes.org>
In-Reply-To: <20210123011704.1901835-3-joel@joelfernandes.org>

I'm slowly starting to go through this...
On Fri, Jan 22, 2021 at 08:17:01PM -0500, Joel Fernandes (Google) wrote:

> +static bool sched_core_empty(struct rq *rq)
> +{
> +	return RB_EMPTY_ROOT(&rq->core_tree);
> +}
> +
> +static struct task_struct *sched_core_first(struct rq *rq)
> +{
> +	struct task_struct *task;
> +
> +	task = container_of(rb_first(&rq->core_tree), struct task_struct, core_node);
> +	return task;
> +}

AFAICT you can do with:

static struct task_struct *sched_core_any(struct rq *rq)
{
	return rb_entry(rq->core_tree.rb_node, struct task_struct, core_node);
}

> +static void sched_core_flush(int cpu)
> +{
> +	struct rq *rq = cpu_rq(cpu);
> +	struct task_struct *task;
> +
> +	while (!sched_core_empty(rq)) {
> +		task = sched_core_first(rq);
> +		rb_erase(&task->core_node, &rq->core_tree);
> +		RB_CLEAR_NODE(&task->core_node);
> +	}
> +	rq->core->core_task_seq++;
> +}

However,

> +	for_each_possible_cpu(cpu) {
> +		struct rq *rq = cpu_rq(cpu);
> +
> +		WARN_ON_ONCE(enabled == rq->core_enabled);
> +
> +		if (!enabled || (enabled && cpumask_weight(cpu_smt_mask(cpu)) >= 2)) {
> +			/*
> +			 * All active and migrating tasks will have already
> +			 * been removed from core queue when we clear the
> +			 * cgroup tags. However, dying tasks could still be
> +			 * left in core queue. Flush them here.
> +			 */
> +			if (!enabled)
> +				sched_core_flush(cpu);
> +
> +			rq->core_enabled = enabled;
> +		}
> +	}

I'm not sure I understand. Is the problem that we're still schedulable
during do_exit() after cgroup_exit()? It could be argued that when we
leave the cgroup there, we should definitely leave the tag group too.