Date: Tue, 17 Jul 2018 17:40:59 +0200
From: Peter Zijlstra
To: Johannes Weiner
Cc: Ingo Molnar, Andrew Morton, Linus Torvalds, Tejun Heo,
    Suren Baghdasaryan, Vinayak Menon, Christopher Lameter,
    Mike Galbraith, Shakeel Butt, linux-mm@kvack.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com
Subject: Re: [PATCH 09/10] psi: cgroup support
Message-ID: <20180717154059.GB2476@hirez.programming.kicks-ass.net>
References: <20180712172942.10094-1-hannes@cmpxchg.org> <20180712172942.10094-10-hannes@cmpxchg.org>
In-Reply-To: <20180712172942.10094-10-hannes@cmpxchg.org>

On Thu, Jul 12, 2018 at 01:29:41PM -0400, Johannes Weiner wrote:
> +/**
> + * cgroup_move_task - move task to a different cgroup
> + * @task: the task
> + * @to: the target css_set
> + *
> + * Move task to a new cgroup and safely migrate its associated stall
> + * state between the different groups.
> + *
> + * This function acquires the task's rq lock to lock out concurrent
> + * changes to the task's scheduling state and - in case the task is
> + * running - concurrent changes to its stall state.
> + */
> +void cgroup_move_task(struct task_struct *task, struct css_set *to)
> +{
> +	unsigned int task_flags = 0;
> +	struct rq_flags rf;
> +	struct rq *rq;
> +	u64 now;
> +
> +	rq = task_rq_lock(task, &rf);
> +
> +	if (task_on_rq_queued(task)) {
> +		task_flags = TSK_RUNNING;
> +	} else if (task->in_iowait) {
> +		task_flags = TSK_IOWAIT;
> +	}
> +	if (task->flags & PF_MEMSTALL)
> +		task_flags |= TSK_MEMSTALL;
> +
> +	if (task_flags) {
> +		update_rq_clock(rq);
> +		now = rq_clock(rq);
> +		psi_task_change(task, now, task_flags, 0);
> +	}
> +
> +	/*
> +	 * Lame to do this here, but the scheduler cannot be locked
> +	 * from the outside, so we move cgroups from inside sched/.
> +	 */
> +	rcu_assign_pointer(task->cgroups, to);
> +
> +	if (task_flags)
> +		psi_task_change(task, now, 0, task_flags);
> +
> +	task_rq_unlock(rq, task, &rf);
> +}

Why is that not part of cpu_cgroup_attach() / sched_move_task() ?