Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
possible to get into a situation where a pidns reaper is
<defunct>, reparented to host pid 1, but never reaped. How to
reproduce this is documented at
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
(and see
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
In short, run repeated starts of a container whose init is
Process.exit(0);
sysrq-t when such a task is playing zombie shows:
[ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
[ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
[ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
[ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
[ 131.132978] Call Trace:
[ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
[ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
[ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
[ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
[ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
[ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
Further debugging showed that every time this happened, zap_pid_ns_processes()
started with nr_hashed being 3, while we were expecting it to drop to 2.
Any time it didn't happen, nr_hashed was 1 or 2. So the reaper was
waiting for nr_hashed to become 2, but free_pid() only wakes the reaper
if nr_hashed hits 1. This patch makes free_pid() wake the reaper any
time the reaper is PF_EXITING, to force it to re-test the
pidns->nr_hashed = init_pids test. Note that this is more like what
__unhash_process() used to do before
af4b8a83add95ef40716401395b44a1b579965f4.
Signed-off-by: Serge Hallyn <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
---
kernel/pid.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/pid.c b/kernel/pid.c
index 0db3e79..6b312c4 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -274,6 +274,10 @@ void free_pid(struct pid *pid)
case 0:
schedule_work(&ns->proc_work);
break;
+ default:
+ if (ns->child_reaper->flags & PF_EXITING)
+ wake_up_process(ns->child_reaper);
+ break;
}
}
spin_unlock_irqrestore(&pidmap_lock, flags);
--
1.8.3.2
Serge Hallyn <[email protected]> writes:
> Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
> possible to get into a situation where a pidns reaper is
> <defunct>, reparented to host pid 1, but never reaped. How to
> reproduce this is documented at
Commit 751c644b95bb48aaa8825f0c66abbcc184d92051 also played a role
here: that is where we started handling multi-threaded inits, but the
wake-up logic remained broken.
> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
> (and see
> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
> In short, run repeated starts of a container whose init is
>
> Process.exit(0);
>
> sysrq-t when such a task is playing zombie shows:
>
> [ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
> [ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
> [ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
> [ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
> [ 131.132978] Call Trace:
> [ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
> [ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
> [ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
> [ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
> [ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
> [ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
>
> Further debugging showed that every time this happened, zap_pid_ns_processes()
> started with nr_hashed being 3, while we were expecting it to drop to 2.
> Any time it didn't happen, nr_hashed was 1 or 2. So the reaper was
> waiting for nr_hashed to become 2, but free_pid() only wakes the reaper
> if nr_hashed hits 1. This patch makes free_pid() wake the reaper any
> time the reaper is PF_EXITING, to force it to re-test the
> pidns->nr_hashed = init_pids test. Note that this is more like what
> __unhash_process() used to do before
> af4b8a83add95ef40716401395b44a1b579965f4.
I completely agree with your problem analysis.  All we hold in
free_pid() is the pidmap_lock, not the task_lock which guards
ns->child_reaper nor the sighand lock which guards PF_EXITING.
I think a final patch needs an analysis why whichever wakeup scheme we
use does not have races which will result in the failure to send a
wakeup.
Using a default case and a PF_EXITING test while retaining the previous
nr_hashed == 1 case seems a little hacky.
Regardless thank you for all of your hard work to track this one down.
I feel silly for not considering the wakeup side before.
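For reference, the wait the reaper is stuck in is the loop at the end of
zap_pid_ns_processes() in kernel/pid_namespace.c, which at this point looks
roughly like this (paraphrased; init_pids is 1 if the exiting task is the
thread group leader and 2 otherwise):
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (pid_ns->nr_hashed == init_pids)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
A missed wake-up from free_pid() therefore leaves the exiting init sleeping
here forever.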
> Signed-off-by: Serge Hallyn <[email protected]>
> Cc: "Eric W. Biederman" <[email protected]>
> ---
> kernel/pid.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/kernel/pid.c b/kernel/pid.c
> index 0db3e79..6b312c4 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -274,6 +274,10 @@ void free_pid(struct pid *pid)
> case 0:
> schedule_work(&ns->proc_work);
> break;
> + default:
> + if (ns->child_reaper->flags & PF_EXITING)
> + wake_up_process(ns->child_reaper);
> + break;
> }
> }
> spin_unlock_irqrestore(&pidmap_lock, flags);
Serge Hallyn <[email protected]> writes:
> Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
> possible to get into a situation where a pidns reaper is
> <defunct>, reparented to host pid 1, but never reaped. How to
> reproduce this is documented at
>
> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
> (and see
> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
> In short, run repeated starts of a container whose init is
>
> Process.exit(0);
>
> sysrq-t when such a task is playing zombie shows:
>
> [ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
> [ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
> [ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
> [ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
> [ 131.132978] Call Trace:
> [ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
> [ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
> [ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
> [ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
> [ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
> [ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
>
> Further debugging showed that every time this happened, zap_pid_ns_processes()
> started with nr_hashed being 3, while we were expecting it to drop to 2.
> Any time it didn't happen, nr_hashed was 1 or 2. So the reaper was
> waiting for nr_hashed to become 2, but free_pid() only wakes the reaper
> if nr_hashed hits 1. This patch makes free_pid() wake the reaper any
> time the reaper is PF_EXITING, to force it to re-test the
> pidns->nr_hashed = init_pids test. Note that this is more like what
> __unhash_process() used to do before
> af4b8a83add95ef40716401395b44a1b579965f4.
>
> Signed-off-by: Serge Hallyn <[email protected]>
> Cc: "Eric W. Biederman" <[email protected]>
> ---
> kernel/pid.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/kernel/pid.c b/kernel/pid.c
> index 0db3e79..6b312c4 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -274,6 +274,10 @@ void free_pid(struct pid *pid)
> case 0:
> schedule_work(&ns->proc_work);
> break;
> + default:
> + if (ns->child_reaper->flags & PF_EXITING)
> + wake_up_process(ns->child_reaper);
> + break;
> }
> }
> spin_unlock_irqrestore(&pidmap_lock, flags);
So I think the change that we actually want is just to send a wake-up
when we have two pids in the pid namespace as well as one pid.
- That can send one extraneous wake-up but that is relatively harmless.
- We can detect the condition race free.
- With only two pids remaining we are guaranteed that whichever task is
the child_reaper will persist through zap_pid_ns_processes.
There are 3 cases.
init-tgleader other -- Single threaded init so of course we won't free the task
init-tgleader-dead init-thread -- The last living init thread will call zap_pid_ns_processes.
init-tgleader init-thread -- An init with two living threads child_reaper must be the init thread group leader
Which means at the cost of an extra wake-up we are guaranteed not to
have races.
Serge does that look good to you?
Eric
diff --git a/kernel/pid.c b/kernel/pid.c
index 17755ae..ab75add 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -265,6 +265,7 @@ void free_pid(struct pid *pid)
struct pid_namespace *ns = upid->ns;
hlist_del_rcu(&upid->pid_chain);
switch(--ns->nr_hashed) {
+ case 2:
case 1:
/* When all that is left in the pid namespace
* is the reaper wake up the reaper. The reaper
Quoting Eric W. Biederman ([email protected]):
> Serge Hallyn <[email protected]> writes:
>
> > Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
> > possible to get into a situation where a pidns reaper is
> > <defunct>, reparented to host pid 1, but never reaped. How to
> > reproduce this is documented at
> >
> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
> > (and see
> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
> > In short, run repeated starts of a container whose init is
> >
> > Process.exit(0);
> >
> > sysrq-t when such a task is playing zombie shows:
> >
> > [ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
> > [ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
> > [ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
> > [ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
> > [ 131.132978] Call Trace:
> > [ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
> > [ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
> > [ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
> > [ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
> > [ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
> > [ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
> >
> > Further debugging showed that every time this happened, zap_pid_ns_processes()
> > started with nr_hashed being 3, while we were expecting it to drop to 2.
> > Any time it didn't happen, nr_hashed was 1 or 2. So the reaper was
> > waiting for nr_hashed to become 2, but free_pid() only wakes the reaper
> > if nr_hashed hits 1. This patch makes free_pid() wake the reaper any
> > time the reaper is PF_EXITING, to force it to re-test the
> > pidns->nr_hashed = init_pids test. Note that this is more like what
> > __unhash_process() used to do before
> > af4b8a83add95ef40716401395b44a1b579965f4.
> >
> > Signed-off-by: Serge Hallyn <[email protected]>
> > Cc: "Eric W. Biederman" <[email protected]>
> > ---
> > kernel/pid.c | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/kernel/pid.c b/kernel/pid.c
> > index 0db3e79..6b312c4 100644
> > --- a/kernel/pid.c
> > +++ b/kernel/pid.c
> > @@ -274,6 +274,10 @@ void free_pid(struct pid *pid)
> > case 0:
> > schedule_work(&ns->proc_work);
> > break;
> > + default:
> > + if (ns->child_reaper->flags & PF_EXITING)
> > + wake_up_process(ns->child_reaper);
> > + break;
> > }
> > }
> > spin_unlock_irqrestore(&pidmap_lock, flags);
>
> So I think the change that we actually want is just to send a wake-up
> when we have two pids in the pid namespace as well as one pid.
>
> - That can send one extraneous wake-up but that is relatively harmless.
Would more than one extraneous wake-up be more harmful?
> - We can detect the condition race free.
> - With only two pids remaining we are guaranteed that whichever task is
> the child_reaper will persist through zap_pid_ns_processes.
My problem is I don't really understand the assumptions behind nr_hashed.
I *thought* it was simply >1 if the init was threaded - but are threads
in init limited to 2? Or am I totally wrong about what the 2 means?
If init *is* threaded, and the pid_ns->child_reaper exits but the other
thread is still alive, then find_new_reaper should set pid_ns->child_reaper
to the not-PF_EXITING task using
509 while_each_thread(father, thread) {
510 if (thread->flags & PF_EXITING)
511 continue;
512 if (unlikely(pid_ns->child_reaper == father))
513 pid_ns->child_reaper = thread;
514 return thread;
515 }
right?
Which seems to suggest that checking for pid_ns->child_reaper->flags &
PF_EXITING should always give us the right answer in free_pid().
> There are 3 cases.
> init-tgleader other -- Single threaded init so of course we won't free the task
> init-tgleader-dead init-thread -- The last living init thread will call zap_pid_ns_processes.
right,
> init-tgleader init-thread -- An init with two living threads child_reaper must be the init thread group leader
>
> Which means at the cost of an extra wake-up we are guaranteed not to
> have races.
>
> Serge does that look good to you?
I may just need to spend a few hours going back over the old commits
and related email threads pertaining to multi-threaded inits. I now
regret not having paid enough attention at the time :)
> diff --git a/kernel/pid.c b/kernel/pid.c
> index 17755ae..ab75add 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -265,6 +265,7 @@ void free_pid(struct pid *pid)
> struct pid_namespace *ns = upid->ns;
> hlist_del_rcu(&upid->pid_chain);
> switch(--ns->nr_hashed) {
> + case 2:
> case 1:
> /* When all that is left in the pid namespace
> * is the reaper wake up the reaper. The reaper
I considered this, but I wasn't quite sure...  I have two concerns about
the 2.  First, why can't it be 3 (3 PF_EXITING init-threads)?
Second, what if nr_hashed is 2 but init in fact wasn't exiting? Oh, that's
the one you're saying isn't an issue, just a spurious extra wakeup? Right.
-serge
"Serge E. Hallyn" <[email protected]> writes:
> Quoting Eric W. Biederman ([email protected]):
>> Serge Hallyn <[email protected]> writes:
>>
>> > Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
>> > possible to get into a situation where a pidns reaper is
>> > <defunct>, reparented to host pid 1, but never reaped. How to
>> > reproduce this is documented at
>> >
>> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
>> > (and see
>> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
>> > In short, run repeated starts of a container whose init is
>> >
>> > Process.exit(0);
>> >
>> > sysrq-t when such a task is playing zombie shows:
>> >
>> > [ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
>> > [ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
>> > [ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
>> > [ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
>> > [ 131.132978] Call Trace:
>> > [ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
>> > [ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
>> > [ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
>> > [ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
>> > [ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
>> > [ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
>> >
>> > Further debugging showed that every time this happened, zap_pid_ns_processes()
>> > started with nr_hashed being 3, while we were expecting it to drop to 2.
>> > Any time it didn't happen, nr_hashed was 1 or 2. So the reaper was
>> > waiting for nr_hashed to become 2, but free_pid() only wakes the reaper
>> > if nr_hashed hits 1. This patch makes free_pid() wake the reaper any
>> > time the reaper is PF_EXITING, to force it to re-test the
>> > pidns->nr_hashed = init_pids test. Note that this is more like what
>> > __unhash_process() used to do before
>> > af4b8a83add95ef40716401395b44a1b579965f4.
>> >
>> > Signed-off-by: Serge Hallyn <[email protected]>
>> > Cc: "Eric W. Biederman" <[email protected]>
>> > ---
>> > kernel/pid.c | 4 ++++
>> > 1 file changed, 4 insertions(+)
>> >
>> > diff --git a/kernel/pid.c b/kernel/pid.c
>> > index 0db3e79..6b312c4 100644
>> > --- a/kernel/pid.c
>> > +++ b/kernel/pid.c
>> > @@ -274,6 +274,10 @@ void free_pid(struct pid *pid)
>> > case 0:
>> > schedule_work(&ns->proc_work);
>> > break;
>> > + default:
>> > + if (ns->child_reaper->flags & PF_EXITING)
>> > + wake_up_process(ns->child_reaper);
>> > + break;
>> > }
>> > }
>> > spin_unlock_irqrestore(&pidmap_lock, flags);
>>
>> So I think the change that we actually want is just to send a wake-up
>> when we have two pids in the pid namespace as well as one pid.
>>
>> - That can send one extraneous wake-up but that is relatively harmless.
>
> Would more than one extraneous wake-up be more harmful?
An extraneous wake-up is a waste of time but not a correctness issue.
Anything that sleeps needs to be able to handle extraneous wake-ups.
>> - We can detect the condition race free.
>> - With only two pids remaining we are guaranteed that whichever task is
>> the child_reaper will persist through zap_pid_ns_processes.
>
> My problem is I don't really understand the assumptions behind nr_hashed.
> I *thought* it was simply >1 if the init was threaded - but are threads
> in init limited to 2? Or am I totally wrong about what the 2 means?
nr_hashed is fundamentally the number of pids in the pid hash table of a
pid namespace.  So if init has 2 thread ids, nr_hashed is greater than one.
> If init *is* threaded, and the pid_ns->child_reaper exits but the other
> thread is still alive, then find_new_reaper should set pid_ns->child_reaper
> to the not-PF_EXITING task using
>
> 509 while_each_thread(father, thread) {
> 510 if (thread->flags & PF_EXITING)
> 511 continue;
> 512 if (unlikely(pid_ns->child_reaper == father))
> 513 pid_ns->child_reaper = thread;
> 514 return thread;
> 515 }
>
> right?
Yes.
> Which seems to suggest that checking for pid_ns->child_reaper->flags &
> PF_EXITING should always give us the right answer in free_pid().
I don't know that it is wrong, but we don't always have the task_lock
which protects PF_EXITING. In particular when we are called from
change_pid.
Even more the task_lock protects pid_ns->child_reaper.
The thread_group_leader of any process may not be reaped until all of
the other threads are dead. All of the other threads of a
multi-threaded process self reap when they exit.
Which means before we are reduced to nr_threads == 2 it is possible
that child_reaper will be a thread that will self reap and free its
data structures before we are done waking it up and/or testing
PF_EXITING.
>> There are 3 cases.
>> init-tgleader other -- Single threaded init so of course we won't free the task
>> init-tgleader-dead init-thread -- The last living init thread will call zap_pid_ns_processes.
>
> right,
>
>> init-tgleader init-thread -- An init with two living threads child_reaper must be the init thread group leader
>>
>> Which means at the cost of an extra wake-up we are guaranteed not to
>> have races.
>>
>> Serge does that look good to you?
>
> I may just need to spend a few hours going back over the old commits
> and related email threads pertaining to multi-threaded inits. I now
> regret not having paid enough attention at the time :)
Multi-threaded inits are indeed strange, and de_thread in fs/exec.c
that implements the rule that after exec the thread group id and
the thread id are always the same is the most annoying of the bunch.
If we did not have that guarantee we could remove a lot of the special
cases. Sigh.
>> diff --git a/kernel/pid.c b/kernel/pid.c
>> index 17755ae..ab75add 100644
>> --- a/kernel/pid.c
>> +++ b/kernel/pid.c
>> @@ -265,6 +265,7 @@ void free_pid(struct pid *pid)
>> struct pid_namespace *ns = upid->ns;
>> hlist_del_rcu(&upid->pid_chain);
>> switch(--ns->nr_hashed) {
>> + case 2:
>> case 1:
>> /* When all that is left in the pid namespace
>> * is the reaper wake up the reaper. The reaper
>
> I considered this, but I wasn't quite sure...  I have two concerns about
> the 2.  First, why can't it be 3 (3 PF_EXITING init-threads)?
You can have 3 PF_EXITING init threads, but at least one of them will
self reap and then you will have only two.
> Second, what if nr_hashed is 2 but init in fact wasn't exiting? Oh, that's
> the one you're saying isn't an issue, just a spurious extra wakeup?
> Right.
Yes. In fact that will be the common case where we send the spurious
extra wakeup.
My primary concern with the analysis was to guarantee that child_reaper
points to a valid process with only the pidmap_lock's
protection. With nr_hashed == 3 I can't see a way to guarantee that as
the thread group leader may be dead, and the current init thread may be
exiting, and a third init thread may be the thread that calls
zap_pid_ns_processes.
And that weird case is what makes me nervous about testing PF_EXITING.
What if the child_reaper is freed while we are testing the bits?
Eric
Quoting Eric W. Biederman ([email protected]):
> The thread_group_leader of any process may not be reaped until all of
> the other threads are dead. All of the other threads of a
> multi-threaded process self reap when they exit.
Ah, I see. Thanks.
Then your patch does sound like the safest solution. Haven't
tested yet (hd failure stands in my way) though I can't see
how it could not fix the bug I was seeing.
-serge
Quoting Eric W. Biederman ([email protected]):
> Serge Hallyn <[email protected]> writes:
>
> > Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
> > possible to get into a situation where a pidns reaper is
> > <defunct>, reparented to host pid 1, but never reaped. How to
> > reproduce this is documented at
> >
> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
> > (and see
> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
> > In short, run repeated starts of a container whose init is
> >
> > Process.exit(0);
> >
> > sysrq-t when such a task is playing zombie shows:
> >
> > [ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
> > [ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
> > [ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
> > [ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
> > [ 131.132978] Call Trace:
> > [ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
> > [ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
> > [ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
> > [ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
> > [ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
> > [ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
> >
> > Further debugging showed that every time this happened, zap_pid_ns_processes()
> > started with nr_hashed being 3, while we were expecting it to drop to 2.
> > Any time it didn't happen, nr_hashed was 1 or 2. So the reaper was
> > waiting for nr_hashed to become 2, but free_pid() only wakes the reaper
> > if nr_hashed hits 1. This patch makes free_pid() wake the reaper any
> > time the reaper is PF_EXITING, to force it to re-test the
> > pidns->nr_hashed = init_pids test. Note that this is more like what
> > __unhash_process() used to do before
> > af4b8a83add95ef40716401395b44a1b579965f4.
> >
> > Signed-off-by: Serge Hallyn <[email protected]>
> > Cc: "Eric W. Biederman" <[email protected]>
> > ---
> > kernel/pid.c | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/kernel/pid.c b/kernel/pid.c
> > index 0db3e79..6b312c4 100644
> > --- a/kernel/pid.c
> > +++ b/kernel/pid.c
> > @@ -274,6 +274,10 @@ void free_pid(struct pid *pid)
> > case 0:
> > schedule_work(&ns->proc_work);
> > break;
> > + default:
> > + if (ns->child_reaper->flags & PF_EXITING)
> > + wake_up_process(ns->child_reaper);
> > + break;
> > }
> > }
> > spin_unlock_irqrestore(&pidmap_lock, flags);
>
> So I think the change that we actually want is just to send a wake-up
> when we have two pids in the pid namespace as well as one pid.
>
> - That can send one extraneous wake-up but that is relatively harmless.
> - We can detect the condition race free.
> - With only two pids remaining we are guaranteed that whichever task is
> the child_reaper will persist through zap_pid_ns_processes.
>
> There are 3 cases.
> init-tgleader other -- Single threaded init so of course we won't free the task
> init-tgleader-dead init-thread -- The last living init thread will call zap_pid_ns_processes.
> init-tgleader init-thread -- An init with two living threads child_reaper must be the init thread group leader
>
> Which means at the cost of an extra wake-up we are guaranteed not to
> have races.
>
> Serge does that look good to you?
Yeah, I haven't reproduced the defunct tasks with this patch.
Acked-by: Serge Hallyn <[email protected]>
Tested-by: Serge Hallyn <[email protected]>
thanks,
-serge
>
> Eric
>
>
>
> diff --git a/kernel/pid.c b/kernel/pid.c
> index 17755ae..ab75add 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -265,6 +265,7 @@ void free_pid(struct pid *pid)
> struct pid_namespace *ns = upid->ns;
> hlist_del_rcu(&upid->pid_chain);
> switch(--ns->nr_hashed) {
> + case 2:
> case 1:
> /* When all that is left in the pid namespace
> * is the reaper wake up the reaper. The reaper
Serge Hallyn <[email protected]> writes:
> Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
> possible to get into a situation where a pidns reaper is
> <defunct>, reparented to host pid 1, but never reaped. How to
> reproduce this is documented at
>
> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
> (and see
> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
> In short, run repeated starts of a container whose init is
>
> Process.exit(0);
>
> sysrq-t when such a task is playing zombie shows:
>
> [ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
> [ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
> [ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
> [ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
> [ 131.132978] Call Trace:
> [ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
> [ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
> [ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
> [ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
> [ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
> [ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
>
> Further debugging showed that every time this happened, zap_pid_ns_processes()
> started with nr_hashed being 3, while we were expecting it to drop to 2.
> Any time it didn't happen, nr_hashed was 1 or 2. So the reaper was
> waiting for nr_hashed to become 2, but free_pid() only wakes the reaper
> if nr_hashed hits 1.
The issue is that when the thread group leader of an init process exits
before the other tasks of the init process, then when the init process
finally exits it will be a secondary task sleeping in zap_pid_ns_processes()
and waiting to wake up when the number of hashed pids drops to two. This
case waits forever, as free_pid() only sends a wake-up when the number of
hashed pids drops to 1.
To correct this, the simple strategy of sending a possibly unnecessary
wake-up when the number of hashed pids drops to 2 is adopted.
Sending one extraneous wake-up is relatively harmless; at worst we
waste a little cpu time in the rare case when a pid namespace
approaches exiting.
We can detect the case where the pid namespace drops to just two hashed
pids race-free in free_pid().
Dereferencing pid_ns->child_reaper with the pidmap_lock held is safe
without the tasklist_lock because it is guaranteed that detach_pid()
will be called on the child_reaper before it is freed, and detach_pid()
calls __change_pid() which calls free_pid() which takes the
pidmap_lock. __change_pid() only calls free_pid() if this is the
last use of the pid. For a thread that is not the thread group leader,
the thread's pid will only ever have one user because a thread's pid
is not allowed to be the pid of a process, of a process group or of
a session. For a thread that is a thread group leader, all of
the other threads of that process will be reaped before the thread
group leader is allowed to be reaped, ensuring there will only
be one user of the thread's pid as a process pid. Furthermore,
because the thread is the init process of a pid namespace, all of the
other processes in the pid namespace will already have been freed,
so the pid will not be used as a session pid or
a process group pid for any other running process.
CC: [email protected]
Acked-by: Serge Hallyn <[email protected]>
Tested-by: Serge Hallyn <[email protected]>
Reported-by: Serge Hallyn <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
---
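For reference, the detach_pid()/__change_pid()/free_pid() chain described
above looks roughly like this in kernel/pid.c of this era (paraphrased,
unrelated details omitted):
	void detach_pid(struct task_struct *task, enum pid_type type)
	{
		__change_pid(task, type, NULL);
	}
	static void __change_pid(struct task_struct *task, enum pid_type type,
				struct pid *new)
	{
		struct pid_link *link = &task->pids[type];
		struct pid *pid = link->pid;
		int tmp;
		hlist_del_rcu(&link->node);
		link->pid = new;
		/* bail out unless this was the last use of the pid ... */
		for (tmp = PIDTYPE_MAX; --tmp >= 0; )
			if (!hlist_empty(&pid->tasks[tmp]))
				return;
		/* ... and only then free it, which takes the pidmap_lock */
		free_pid(pid);
	}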
kernel/pid.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/pid.c b/kernel/pid.c
index 17755ae..ab75add 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -265,6 +265,7 @@ void free_pid(struct pid *pid)
struct pid_namespace *ns = upid->ns;
hlist_del_rcu(&upid->pid_chain);
switch(--ns->nr_hashed) {
+ case 2:
case 1:
/* When all that is left in the pid namespace
* is the reaper wake up the reaper. The reaper
--
1.7.5.4
Sorry for delay, vacation.
On 08/30, Eric W. Biederman wrote:
>
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -265,6 +265,7 @@ void free_pid(struct pid *pid)
> struct pid_namespace *ns = upid->ns;
> hlist_del_rcu(&upid->pid_chain);
> switch(--ns->nr_hashed) {
> + case 2:
> case 1:
> /* When all that is left in the pid namespace
> * is the reaper wake up the reaper. The reaper
I think the patch is fine, and this matches "init_pids" in
zap_pid_ns_processes().
But, Eric, if this patch was not applied yet, any chance you can
add a comment ? Just a little note about the potential zombie
leader can help to understand this code. I won't insist of course,
but this "case 2" doesn't look obvious.
Off topic. What if the first alloc_pid() succeeds and then later
copy_process() fails. In this case free_pid() is called but
PIDNS_HASH_ADDING was not cleared, we miss kern_unmount(), no?
Oleg.
On 09/08, Oleg Nesterov wrote:
>
> Off topic. What if the first alloc_pid() succeeds and then later
> copy_process() fails. In this case free_pid() is called but
> PIDNS_HASH_ADDING was not cleared, we miss kern_unmount(), no?
Perhaps something like below?
Oleg.
--- x/kernel/pid.c
+++ x/kernel/pid.c
@@ -272,6 +272,8 @@ void free_pid(struct pid *pid)
*/
wake_up_process(ns->child_reaper);
break;
+ case PIDNS_HASH_ADDING:
+ WARN_ON(ns->child_reaper);
case 0:
schedule_work(&ns->proc_work);
break;
Oleg Nesterov <[email protected]> writes:
> On 09/08, Oleg Nesterov wrote:
>>
>> Off topic. What if the first alloc_pid() succeeds and then later
>> copy_process() fails. In this case free_pid() is called but
>> PIDNS_HASH_ADDING was not cleared, we miss kern_unmount(), no?
>
> Perhaps something like below?
I am thinking more:
diff --git a/kernel/pid.c b/kernel/pid.c
index ab75add..ef59516 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -273,6 +273,10 @@ void free_pid(struct pid *pid)
*/
wake_up_process(ns->child_reaper);
break;
+ case PIDNS_HASH_ADDING:
+ /* Handle a fork failure of the first process */
+ ns->nr_hashed = 0;
+ /* fall through */
case 0:
schedule_work(&ns->proc_work);
break;
At which point I ask myself what of the pathological case where the
first fork fails but because we created the pid namespace with unshare
there is a concurrent fork from another process into the pid namespace
that succeeds. Resulting in one pid in the pid namespace that is not
the reaper.
So we also need something like this.
@@ -324,6 +328,8 @@ struct pid *alloc_pid(struct pid_namespace *ns)
spin_lock_irq(&pidmap_lock);
if (!(ns->nr_hashed & PIDNS_HASH_ADDING))
goto out_unlock;
+ if (!is_child_reaper(pid) && !ns->child_reaper)
+ goto out_unlock;
for ( ; upid >= pid->numbers; --upid) {
hlist_add_head_rcu(&upid->pid_chain,
&pid_hash[pid_hashfn(upid->nr, upid->ns)]);
but I think my locking is wrong to safely test ns->child_reaper.
Perhaps I should prevent setns if there is no reaper?
Ideas?
Eric
Ramkumar Ramachandra <[email protected]> writes:
> Eric W. Biederman <[email protected]> wrote:
>
> Serge Hallyn <[email protected]> writes:
> > Since commit af4b8a83add95ef40716401395b44a1b579965f4 it's been
> > possible to get into a situation where a pidns reaper is
> > <defunct>, reparented to host pid 1, but never reaped. How to
> > reproduce this is documented at
> >
> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526
> > (and see
> > https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1168526/comments/13)
> > In short, run repeated starts of a container whose init is
> >
> > Process.exit(0);
> >
> > sysrq-t when such a task is playing zombie shows:
> >
> > [ 131.132978] init x ffff88011fc14580 0 2084 2039 0x00000000
> > [ 131.132978] ffff880116e89ea8 0000000000000002 ffff880116e89fd8 0000000000014580
> > [ 131.132978] ffff880116e89fd8 0000000000014580 ffff8801172a0000 ffff8801172a0000
> > [ 131.132978] ffff8801172a0630 ffff88011729fff0 ffff880116e14650 ffff88011729fff0
> > [ 131.132978] Call Trace:
> > [ 131.132978] [<ffffffff816f6159>] schedule+0x29/0x70
> > [ 131.132978] [<ffffffff81064591>] do_exit+0x6e1/0xa40
> > [ 131.132978] [<ffffffff81071eae>] ? signal_wake_up_state+0x1e/0x30
> > [ 131.132978] [<ffffffff8106496f>] do_group_exit+0x3f/0xa0
> > [ 131.132978] [<ffffffff810649e4>] SyS_exit_group+0x14/0x20
> > [ 131.132978] [<ffffffff8170102f>] tracesys+0xe1/0xe6
>
>
> Interestingly, notice how the memory addresses begin with ffff88011,
> and then ffffffff81 in the call trace.
The cause is known and this patch fixes the problem.  So I don't know
why you would be looking at the addresses; there is no mystery to be solved.
That said, roughly ffff880000000000 is where the kernel has its identity
mapping of all physical memory in the system.
Meanwhile, roughly ffffffff80000000 is the high 2GB of memory where the
kernel text pages reside.
So what is seen is exactly what is expected: the data pointers point
into the kernel's identity mapping and the code addresses use the mapping
for kernel code.
Eric
On 09/08, Eric W. Biederman wrote:
>
> Oleg Nesterov <[email protected]> writes:
>
> > On 09/08, Oleg Nesterov wrote:
> >>
> >> Off topic. What if the first alloc_pid() succeeds and then later
> >> copy_process() fails. In this case free_pid() is called but
> >> PIDNS_HASH_ADDING was not cleared, we miss kern_unmount(), no?
> >
> > Perhaps something like below?
>
> I am thinking more:
>
> diff --git a/kernel/pid.c b/kernel/pid.c
> index ab75add..ef59516 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -273,6 +273,10 @@ void free_pid(struct pid *pid)
> */
> wake_up_process(ns->child_reaper);
> break;
> + case PIDNS_HASH_ADDING:
> + /* Handle a fork failure of the first process */
> + ns->nr_hashed = 0;
Agreed, it also makes sense to clear ->nr_hashed. But I still think
that WARN_ON(ns->child_reaper) makes sense too.
> At which point I ask myself what of the pathological case where the
> first fork fails but because we created the pid namespace with unshare
> there is a concurrent fork from another process into the pid namespace
> that succeeds. Resulting in one pid in the pid namespace that is not
> the reaper.
But how can setns() work before the first fork() succeeds and makes the
->child_reaper visible in /proc ?
Probably I missed something obvious, I didn't sleep today...
Oleg.
Oleg Nesterov <[email protected]> writes:
> On 09/08, Eric W. Biederman wrote:
>>
>> Oleg Nesterov <[email protected]> writes:
>>
>> > On 09/08, Oleg Nesterov wrote:
>> >>
>> >> Off topic. What if the first alloc_pid() succeeds and then later
>> >> copy_process() fails. In this case free_pid() is called but
>> >> PIDNS_HASH_ADDING was not cleared, we miss kern_unmount(), no?
>> >
>> > Perhaps something like below?
>>
>> I am thinking more:
>>
>> diff --git a/kernel/pid.c b/kernel/pid.c
>> index ab75add..ef59516 100644
>> --- a/kernel/pid.c
>> +++ b/kernel/pid.c
>> @@ -273,6 +273,10 @@ void free_pid(struct pid *pid)
>> */
>> wake_up_process(ns->child_reaper);
>> break;
>> + case PIDNS_HASH_ADDING:
>> + /* Handle a fork failure of the first process */
>> + ns->nr_hashed = 0;
>
> Agreed, it also makes sense to clear ->nr_hashed. But I still think
> that WARN_ON(ns->child_reaper) makes sense too.
I don't know that I like warnings for impossible conditions. How could
we even make a mistake that gets us there?
>> At which point I ask myself what of the pathological case where the
>> first fork fails but because we created the pid namespace with unshare
>> there is a concurrent fork from another process into the pid namespace
>> that succeeds. Resulting in one pid in the pid namespace that is not
>> the reaper.
>
> But how can setns() work before the first fork() succeeds and makes the
> ->child_reaper visible in /proc ?
>
> Probably I missed something obvious, I didn't sleep today...
Actually that is a very good point. That is an accidental feature but
one I very much appreciate today.
Of course this leads me to the question of what the checkpoint/restart
guys can do about checkpointing that properly. Sigh.
Eric
On 09/09, Eric W. Biederman wrote:
>
> Oleg Nesterov <[email protected]> writes:
> >
> > Agreed, it also makes sense to clear ->nr_hashed. But I still think
> > that WARN_ON(ns->child_reaper) makes sense too.
>
> I don't know that I like warnings for impossible conditions.
But WARN_ON() should only check for "impossible" conditions ;)
> How could
> we even make a mistake that gets us there?
I do not know! I mean, this should not happen, that is why it adds
a warning.
And note that "ns->nr_hashed = 0" is not really needed, still I agree
it makes sense.
However I won't mind to remove this warning if you really dislike it.
> >> At which point I ask myself what of the pathlogocical case where the
> >> first fork fails but because we created the pid namespace with unshare
> >> there is a concurrent fork from another process into the pid namespace
> >> that succeeds. Resulting in one pid in the pid namespace that is not
> >> the reaper.
> >
> > But how can setns() work before the first fork() succeeds and makes the
> > ->child_reaper visible in /proc ?
> >
> > Probably I missed something obvious, I didn't sleep today...
>
> Actually that is a very good point. That is an accidental feature but
> one I very much appreciate today.
OK. Please review v2 then. I also shamelessly stole your comment.
Oleg.
"case 0" in free_pid() assumes that disable_pid_allocation() should
clear PIDNS_HASH_ADDING before the last pid goes away. However, this
doesn't happen if the 1st fork() fails to create the child reaper,
which is what should call disable_pid_allocation().
Signed-off-by: Oleg Nesterov <[email protected]>
---
kernel/pid.c | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/kernel/pid.c b/kernel/pid.c
index 66505c1..606a212 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -272,6 +272,11 @@ void free_pid(struct pid *pid)
*/
wake_up_process(ns->child_reaper);
break;
+ case PIDNS_HASH_ADDING:
+ /* Handle a fork failure of the first process */
+ WARN_ON(ns->child_reaper);
+ ns->nr_hashed = 0;
+ /* fall through */
case 0:
schedule_work(&ns->proc_work);
break;
--
1.5.5.1