Date: Wed, 14 Jul 2010 13:53:24 -0700
From: Sukadev Bhattiprolu
To: "Eric W. Biederman"
Cc: Andrew Morton, Linux Containers, linux-kernel@vger.kernel.org,
    Pavel Emelyanov, Oleg Nesterov, "Serge E. Hallyn"
Subject: Re: [PATCH] pidns: Fix wait for zombies to be reaped in zap_pid_ns_processes
Message-ID: <20100714205324.GA13956@us.ibm.com>
References: <20100625212758.GA30474@redhat.com>
 <20100625220713.GA31123@us.ibm.com>
 <20100709121425.GB18586@hawkmoon.kerlabs.com>
 <20100709141324.GC18586@hawkmoon.kerlabs.com>
 <20100711141406.GD18586@hawkmoon.kerlabs.com>

Eric W. Biederman [ebiederm@xmission.com] wrote:
|
| Changing zap_pid_ns_processes to fix the problem instead of
| changing the code elsewhere is one of the few solutions I have
| seen that does not increase the cost of the lat_proc test from
| lmbench.

I think it's a good fix for the problem, but I have a nit and a minor
comment below.

Thanks,

Sukadev

|
| Reported-by: Louis Rilling
| Reviewed-by: Sukadev Bhattiprolu
| Signed-off-by: Eric W. Biederman
| ---
|  kernel/pid_namespace.c |   50 ++++++++++++++++++++++++++++++++++++-----------
|  1 files changed, 38 insertions(+), 12 deletions(-)
|
| diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
| index a5aff94..aaf2ab0 100644
| --- a/kernel/pid_namespace.c
| +++ b/kernel/pid_namespace.c
| @@ -139,16 +139,20 @@ void free_pid_ns(struct kref *kref)
|
|  void zap_pid_ns_processes(struct pid_namespace *pid_ns)
|  {
| +	struct task_struct *me = current;
|  	int nr;
|  	int rc;
|  	struct task_struct *task;
|
|  	/*
| -	 * The last thread in the cgroup-init thread group is terminating.
| -	 * Find remaining pid_ts in the namespace, signal and wait for them
| -	 * to exit.
| +	 * The last task in the pid namespace-init threa group is terminating.

nit: thread

| +	 * Find remaining pids in the namespace, signal and wait for them
| +	 * to to be reaped.
|  	 *
| -	 * Note: This signals each threads in the namespace - even those that
| +	 * By waiting for all of the tasks to be reaped before init is reaped
| +	 * we provide the invariant that no task can escape the pid namespace.
| +	 *
| +	 * Note: This signals each task in the namespace - even those that
|  	 *	 belong to the same thread group, To avoid this, we would have
|  	 *	 to walk the entire tasklist looking a processes in this
|  	 *	 namespace, but that could be unnecessarily expensive if the
| @@ -157,28 +161,50 @@ void zap_pid_ns_processes(struct pid_namespace *pid_ns)
|  	 *
|  	 */
|  	read_lock(&tasklist_lock);
| -	nr = next_pidmap(pid_ns, 1);
| -	while (nr > 0) {
| -		rcu_read_lock();
| +	for (nr = next_pidmap(pid_ns, 0); nr > 0; nr = next_pidmap(pid_ns, nr)) {

Is it necessary to start the search at nr == 0 ?  We will find nr == 1
first and then immediately skip over it - bc same_thread_group() will
be TRUE.

|
|  		/*
|  		 * Any nested-container's init processes won't ignore the
|  		 * SEND_SIG_NOINFO signal, see send_signal()->si_fromuser().
|  		 */
| -		task = pid_task(find_vpid(nr), PIDTYPE_PID);
| -		if (task)
| +		rcu_read_lock();
| +		task = pid_task(find_pid_ns(nr, pid_ns), PIDTYPE_PID);
| +		if (task && !same_thread_group(task, me))
|  			send_sig_info(SIGKILL, SEND_SIG_NOINFO, task);

Also, if we start the search at 1, we could skip the only other
possible thread in the group with (nr != my_pid_nr), but it's not
really an optimization.