[BUG] 4.4.262: infinite loop in futex_unlock_pi (EAGAIN loop)

[ replicator, attached ]
[ workaround patch that crudely clears the loop, attached ]
[ 4.4.256 does _not_ have this problem, 4.4.262 is known to have it ]

When a certain secure-site application is run on 4.4.262, it locks up and
is unkillable. Backtraces from crash(8) and sysrq show that the application
is looping in the kernel in futex_unlock_pi.

Between 4.4.256 and 4.4.257, 4.4 got this 4.12 patch backported into it:

    73d786b ("futex: Rework inconsistent rt_mutex/futex_q state")

This patch has the following comment:

The only problem is that this breaks RT timeliness guarantees. That
is, consider the following scenario:

  T1 and T2 are both pinned to CPU0. prio(T2) > prio(T1)

    CPU0

    T1
      lock_pi()
      queue_me()  <- Waiter is visible

    preemption

    T2
      unlock_pi()
        loops with -EAGAIN forever

Which is undesirable for PI primitives. Future patches will rectify
this.
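
For orientation, the loop in question is the retry path in
futex_unlock_pi(). A heavily simplified sketch of that path as it
looks after 73d786b (abbreviated from the upstream commit; not the
literal 4.4 backport):

retry:
        /*
         * ... re-read the futex word and find the top waiter ...
         *
         * wake_futex_pi() returns -EAGAIN when a waiter is visible
         * on the futex_q but has not yet enqueued itself on the
         * rt_mutex.
         */
        ret = wake_futex_pi(uaddr, uval, pi_state);
        if (ret == -EAGAIN) {
                spin_unlock(&hb->lock);
                put_futex_key(&key);
                goto retry;
        }
        /*
         * With T1 and T2 pinned to the same CPU and prio(T2) >
         * prio(T1), T1 never runs to finish its enqueue, so T2
         * takes the -EAGAIN path forever.
         */
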
This comment describes our situation exactly. To prove it, we developed
a little kernel patch that, on loop detection, puts a message into the
kernel log for just the first occurrence, keeps a count of the number of
occurrences seen since boot, and tries to break out of the loop via
usleep_range(1000, 1000).
Note that the patch is not really needed for replication. It merely shows,
by 'fixing' the problem, that it really is the EAGAIN loop that triggers
the lockup.
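
In spirit, the workaround amounts to the following at that retry point
(a sketch of the idea only, with invented variable names; the actual
patch is attached):

        if (ret == -EAGAIN) {
                static atomic_t eagain_count = ATOMIC_INIT(0);

                /* log only the first occurrence, count them all */
                if (atomic_inc_return(&eagain_count) == 1)
                        printk("futex_unlock_pi: -EAGAIN loop detected\n");
                spin_unlock(&hb->lock);
                put_futex_key(&key);
                /*
                 * Sleep instead of spinning.  Sleeping lets the
                 * lower-priority waiter run and finish enqueueing
                 * itself on the rt_mutex, after which the retry
                 * can succeed.
                 */
                usleep_range(1000, 1000);
                goto retry;
        }
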
Along with this patch, we submit a replicator. Running this replicator
on a patched kernel shows that 4.4.256 does not have the problem, while
4.4.267 and the latest 4.4, 4.4.275, do. In addition, 4.9.274 (tested
without the patch) does not have the problem.
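
For readers without the attachment, the replicator is built around the
same idea as this minimal sketch (illustrative only: thread priorities
are made up, and whether the race window is hit is timing-dependent):

/* repro.c: gcc -O2 -o repro repro.c -lpthread; run as root (SCHED_FIFO) */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static pthread_mutex_t pi_mutex;

/* pin the calling thread to CPU0 and give it a FIFO priority */
static void pin_and_set_prio(int prio)
{
        cpu_set_t set;
        struct sched_param sp = { .sched_priority = prio };

        CPU_ZERO(&set);
        CPU_SET(0, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}

/*
 * Both threads hammer lock/unlock.  Contended operations go through
 * the kernel's FUTEX_LOCK_PI/FUTEX_UNLOCK_PI paths, where the race
 * window described above lives.
 */
static void *hammer(void *arg)
{
        pin_and_set_prio((int)(long)arg);
        for (;;) {
                pthread_mutex_lock(&pi_mutex);
                pthread_mutex_unlock(&pi_mutex);
        }
        return NULL;
}

int main(void)
{
        pthread_mutexattr_t attr;
        pthread_t t1, t2;

        pthread_mutexattr_init(&attr);
        /* PI protocol: this is what makes it a PI futex in the kernel */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&pi_mutex, &attr);

        pthread_create(&t1, NULL, hammer, (void *)10L);  /* T1, low prio  */
        pthread_create(&t2, NULL, hammer, (void *)20L);  /* T2, high prio */
        pthread_join(t1, NULL);  /* never returns; watch for the lockup */
        return 0;
}
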
From this version pattern, there may be some futex fixup patch that
was backported into 4.9 but failed to make it to 4.4.

Acknowledgements: My colleague, Scott Shaffer, performed the crash/sysrq
analysis that found the futex_unlock_pi loop, and he raised the suspicion
that commit 73d786b might be the cause.

Signed-off-by: Joe Korty <[email protected]>

On Mon, Jul 19, 2021 at 12:24:18PM -0400, Joe Korty wrote:
> [...]
>
> From this version pattern, there may be some futex fixup patch that
> was backported into 4.9 but failed to make it to 4.4.

Odd, I can't seem to find anything that we missed. Can you dig to see
if there is something that we need to do here so we can resolve this?

thanks,
greg k-h

[ Added missing people to the cc: as listed in MAINTAINERS ]

On Thu, Jul 22, 2021 at 04:11:41PM +0200, Greg Kroah-Hartman wrote:
> On Mon, Jul 19, 2021 at 12:24:18PM -0400, Joe Korty wrote:
> > [...]
>
>
> Odd, I can't seem to find anything that we missed. Can you dig to see
> if there is something that we need to do here so we can resolve this?
>
> thanks,
> greg k-h

Hi Greg,

4.12 has these apparently original patches:

    73d786b futex: Rework inconsistent rt_mutex/futex_q state
    cfafcd1 futex: Rework futex_lock_pi() to use rt_mutex_*_proxy_lock()

I have verified that the first commit, 73d786b, introduces
the futex_unlock_pi infinite-loop bug into 4.12. I have
also verified that the second commit, cfafcd1, fixes the bug.

4.9 has had both futex patches backported into it. I have
verified that 4.9.276 does not suffer from the bug.

4.4 has had the first patch backported, as 394fc49, but
not the second. I have verified that kernels built at
394fc49 and at v4.4.276 exhibit the bug, while a kernel
built at 394fc49^ does not.

The missing commit, cfafcd1 in 4.12, is present in 4.9
as 13c98b0. A visual spot-check of 13c98b0 against
kernel/futex.c in 4.4.276 found none of its hunks present
there.

Regards,
Joe

On Tue, Jul 27, 2021 at 06:19:50PM -0400, Joe Korty wrote:
> [...]
>
> The missing commit, cfafcd1 in 4.12, is present in 4.9
> as 13c98b0. A visual spot-check of 13c98b0 against
> kernel/futex.c in 4.4.276 found none of its hunks present
> there.

Ok, so what do you recommend be done to resolve this?

thanks,
greg k-h

On Wed, Jul 28, 2021 at 08:07:00AM +0200, Greg Kroah-Hartman wrote:
> [...]
>
> Ok, so what do you recommend be done to resolve this?
>
> thanks,
> greg k-h

I suppose we could either back out 394fc49 from 4.4, or
backport 13c98b0 from 4.9 to 4.4. At the time I wrote
the above, I hadn't tried either approach yet.

Since then, I did a trial backport of 13c98b0 into 4.4.
All of the changes to kernel/futex.c applied, but none of
the changes to kernel/locking/rtmutex.c did. That implies
to me that at least one other patch needs finding and
backporting before we can proceed.

I hope this doesn't turn into a Whack-a-Mole operation...

Joe

On Wed, Jul 28, 2021 at 09:51:14AM -0400, Joe Korty wrote:
> [...]
>
> Since then, I did a trial backport of 13c98b0 into 4.4.
> All of the changes to kernel/futex.c applied, but none of
> the changes to kernel/locking/rtmutex.c did. That implies
> to me that at least one other patch needs finding and
> backporting before we can proceed.

Ok, let me know if there's anything I can apply here after you test
things.

greg k-h