From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tejun Heo, Oleg Nesterov, Topi Miettinen
Subject: [PATCH 4.19 42/45] cgroup: Include dying leaders with live threads in PROCS iterations
Date: Thu, 8 Aug 2019 21:05:28 +0200
Message-Id: <20190808190456.269142070@linuxfoundation.org>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190808190453.827571908@linuxfoundation.org>
References: <20190808190453.827571908@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: linux-kernel@vger.kernel.org

From: Tejun Heo

commit c03cd7738a83b13739f00546166969342c8ff014 upstream.

CSS_TASK_ITER_PROCS currently iterates live group leaders; however,
this means that a process with dying leader and live threads will be
skipped.  IOW, cgroup.procs might be empty while cgroup.threads isn't,
which is confusing to say the least.

Fix it by making cset track dying tasks and include dying leaders with
live threads in PROCS iteration.

Signed-off-by: Tejun Heo
Reported-and-tested-by: Topi Miettinen
Cc: Oleg Nesterov
Signed-off-by: Greg Kroah-Hartman

---
 include/linux/cgroup-defs.h |    1 +
 include/linux/cgroup.h      |    1 +
 kernel/cgroup/cgroup.c      |   44 +++++++++++++++++++++++++++++++++++++-------
 3 files changed, 39 insertions(+), 7 deletions(-)

--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -207,6 +207,7 @@ struct css_set {
 	 */
 	struct list_head tasks;
 	struct list_head mg_tasks;
+	struct list_head dying_tasks;

 	/* all css_task_iters currently walking this cset */
 	struct list_head task_iters;
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -60,6 +60,7 @@ struct css_task_iter {
 	struct list_head *task_pos;
 	struct list_head *tasks_head;
 	struct list_head *mg_tasks_head;
+	struct list_head *dying_tasks_head;

 	struct css_set *cur_cset;
 	struct css_set *cur_dcset;
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -673,6 +673,7 @@ struct css_set init_css_set = {
 	.dom_cset		= &init_css_set,
 	.tasks			= LIST_HEAD_INIT(init_css_set.tasks),
 	.mg_tasks		= LIST_HEAD_INIT(init_css_set.mg_tasks),
+	.dying_tasks		= LIST_HEAD_INIT(init_css_set.dying_tasks),
 	.task_iters		= LIST_HEAD_INIT(init_css_set.task_iters),
 	.threaded_csets		= LIST_HEAD_INIT(init_css_set.threaded_csets),
 	.cgrp_links		= LIST_HEAD_INIT(init_css_set.cgrp_links),
@@ -1145,6 +1146,7 @@ static struct css_set *find_css_set(stru
 	cset->dom_cset = cset;
 	INIT_LIST_HEAD(&cset->tasks);
 	INIT_LIST_HEAD(&cset->mg_tasks);
+	INIT_LIST_HEAD(&cset->dying_tasks);
 	INIT_LIST_HEAD(&cset->task_iters);
 	INIT_LIST_HEAD(&cset->threaded_csets);
 	INIT_HLIST_NODE(&cset->hlist);
@@ -4152,15 +4154,18 @@ static void css_task_iter_advance_css_se
 			it->task_pos = NULL;
 			return;
 		}
-	} while (!css_set_populated(cset));
+	} while (!css_set_populated(cset) && list_empty(&cset->dying_tasks));

 	if (!list_empty(&cset->tasks))
 		it->task_pos = cset->tasks.next;
-	else
+	else if (!list_empty(&cset->mg_tasks))
 		it->task_pos = cset->mg_tasks.next;
+	else
+		it->task_pos = cset->dying_tasks.next;

 	it->tasks_head = &cset->tasks;
 	it->mg_tasks_head = &cset->mg_tasks;
+	it->dying_tasks_head = &cset->dying_tasks;

 	/*
 	 * We don't keep css_sets locked across iteration steps and thus
@@ -4199,6 +4204,8 @@ static void css_task_iter_skip(struct cs

 static void css_task_iter_advance(struct css_task_iter *it)
 {
+	struct task_struct *task;
+
 	lockdep_assert_held(&css_set_lock);
 repeat:
 	if (it->task_pos) {
@@ -4215,17 +4222,32 @@ repeat:
 		if (it->task_pos == it->tasks_head)
 			it->task_pos = it->mg_tasks_head->next;
 		if (it->task_pos == it->mg_tasks_head)
+			it->task_pos = it->dying_tasks_head->next;
+		if (it->task_pos == it->dying_tasks_head)
 			css_task_iter_advance_css_set(it);
 	} else {
 		/* called from start, proceed to the first cset */
 		css_task_iter_advance_css_set(it);
 	}

-	/* if PROCS, skip over tasks which aren't group leaders */
-	if ((it->flags & CSS_TASK_ITER_PROCS) && it->task_pos &&
-	    !thread_group_leader(list_entry(it->task_pos, struct task_struct,
-					    cg_list)))
-		goto repeat;
+	if (!it->task_pos)
+		return;
+
+	task = list_entry(it->task_pos, struct task_struct, cg_list);
+
+	if (it->flags & CSS_TASK_ITER_PROCS) {
+		/* if PROCS, skip over tasks which aren't group leaders */
+		if (!thread_group_leader(task))
+			goto repeat;
+
+		/* and dying leaders w/o live member threads */
+		if (!atomic_read(&task->signal->live))
+			goto repeat;
+	} else {
+		/* skip all dying ones */
+		if (task->flags & PF_EXITING)
+			goto repeat;
+	}
 }

 /**
@@ -5682,6 +5704,7 @@ void cgroup_exit(struct task_struct *tsk
 	if (!list_empty(&tsk->cg_list)) {
 		spin_lock_irq(&css_set_lock);
 		css_set_move_task(tsk, cset, NULL, false);
+		list_add_tail(&tsk->cg_list, &cset->dying_tasks);
 		cset->nr_tasks--;
 		spin_unlock_irq(&css_set_lock);
 	} else {
@@ -5702,6 +5725,13 @@ void cgroup_release(struct task_struct *
 	do_each_subsys_mask(ss, ssid, have_release_callback) {
 		ss->release(task);
 	} while_each_subsys_mask();
+
+	if (use_task_css_set_links) {
+		spin_lock_irq(&css_set_lock);
+		css_set_skip_task_iters(task_css_set(task), task);
+		list_del_init(&task->cg_list);
+		spin_unlock_irq(&css_set_lock);
+	}
 }

 void cgroup_free(struct task_struct *task)