After kern_unmount returns, callers can no longer access the
vfsmount structure. However, the vfsmount structure does need
to be kept around until the end of the RCU grace period, to
make sure other accesses have all gone away too.
This can be accomplished by either gating each kern_unmount
on synchronize_rcu (the comment in the code says it all), or
by deferring the freeing until the next grace period, where
it needs to be handled in a workqueue due to the locking in
mntput_no_expire().
Suggested-by: Eric Biederman <[email protected]>
Reported-by: Chris Mason <[email protected]>
---
 fs/namespace.c        | 11 +++++++++--
 include/linux/mount.h |  2 ++
 2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/fs/namespace.c b/fs/namespace.c
index 40b994a29e90..9f62cf6c69de 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -4384,13 +4384,20 @@ struct vfsmount *kern_mount(struct file_system_type *type)
 }
 EXPORT_SYMBOL_GPL(kern_mount);
 
+static void mntput_rcu_work(struct work_struct *work)
+{
+	struct vfsmount *mnt = container_of(to_rcu_work(work),
+					    struct vfsmount, free_rwork);
+	mntput(mnt);
+}
+
 void kern_unmount(struct vfsmount *mnt)
 {
 	/* release long term mount so mount point can be released */
 	if (!IS_ERR_OR_NULL(mnt)) {
 		real_mount(mnt)->mnt_ns = NULL;
-		synchronize_rcu();	/* yecchhh... */
-		mntput(mnt);
+		INIT_RCU_WORK(&mnt->free_rwork, mntput_rcu_work);
+		queue_rcu_work(system_wq, &mnt->free_rwork);
 	}
 }
 EXPORT_SYMBOL(kern_unmount);
diff --git a/include/linux/mount.h b/include/linux/mount.h
index 7f18a7555dff..cd007cb70d57 100644
--- a/include/linux/mount.h
+++ b/include/linux/mount.h
@@ -16,6 +16,7 @@
 #include <linux/spinlock.h>
 #include <linux/seqlock.h>
 #include <linux/atomic.h>
+#include <linux/workqueue.h>
 
 struct super_block;
 struct vfsmount;
@@ -73,6 +74,7 @@ struct vfsmount {
 	struct super_block *mnt_sb;	/* pointer to superblock */
 	int mnt_flags;
 	struct user_namespace *mnt_userns;
+	struct rcu_work free_rwork;
 } __randomize_layout;
 
 static inline struct user_namespace *mnt_user_ns(const struct vfsmount *mnt)
--
2.34.1
On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
> After kern_unmount returns, callers can no longer access the
> vfsmount structure. However, the vfsmount structure does need
> to be kept around until the end of the RCU grace period, to
> make sure other accesses have all gone away too.
>
> This can be accomplished by either gating each kern_unmount
> on synchronize_rcu (the comment in the code says it all), or
> by deferring the freeing until the next grace period, where
> it needs to be handled in a workqueue due to the locking in
> mntput_no_expire().
NAK. There's code that relies upon kern_unmount() being
synchronous. That's precisely the reason why MNT_INTERNAL
is treated that way in mntput_no_expire().
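To spell that out: mntput_no_expire() only defers the final cleanup (via
task_work or the delayed_mntput workqueue) for mounts *without*
MNT_INTERNAL; an internal mount gets cleanup_mnt() run synchronously.
Roughly, the tail of that function looks like this (paraphrased, error
paths elided):

	if (likely(!(mnt->mnt.mnt_flags & MNT_INTERNAL))) {
		struct task_struct *task = current;
		if (likely(!(task->flags & PF_KTHREAD))) {
			/* normal mounts: defer cleanup to task_work */
			init_task_work(&mnt->mnt_rcu, __cleanup_mnt);
			if (!task_work_add(task, &mnt->mnt_rcu, TWA_RESUME))
				return;
		}
		/* kthread context: defer to the delayed_mntput workqueue */
		if (llist_add(&mnt->mnt_llist, &delayed_mntput_list))
			schedule_delayed_work(&delayed_mntput_work, 1);
		return;
	}
	/* MNT_INTERNAL: torn down right here, synchronously */
	cleanup_mnt(mnt);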
On Fri, Feb 18, 2022 at 08:24:09PM +0000, Al Viro wrote:
> On Fri, Feb 18, 2022 at 07:43:43PM +0000, Al Viro wrote:
> > On Fri, Feb 18, 2022 at 02:33:31PM -0500, Rik van Riel wrote:
> > > On Fri, 2022-02-18 at 19:26 +0000, Al Viro wrote:
> > > > On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
> > > > > After kern_unmount returns, callers can no longer access the
> > > > > vfsmount structure. However, the vfsmount structure does need
> > > > > to be kept around until the end of the RCU grace period, to
> > > > > make sure other accesses have all gone away too.
> > > > >
> > > > > This can be accomplished by either gating each kern_unmount
> > > > > on synchronize_rcu (the comment in the code says it all), or
> > > > > by deferring the freeing until the next grace period, where
> > > > > it needs to be handled in a workqueue due to the locking in
> > > > > mntput_no_expire().
> > > >
> > > > NAK. There's code that relies upon kern_unmount() being
> > > > synchronous. That's precisely the reason why MNT_INTERNAL
> > > > is treated that way in mntput_no_expire().
> > >
> > > Fair enough. Should I make a kern_unmount_rcu() version
> > > that gets called just from mq_put_mnt()?
> >
> > Umm... I'm not sure you can afford having struct ipc_namespace
> > freed and reused before the mqueue superblock gets at least to
> > deactivate_locked_super().
>
> BTW, that's a good demonstration of the problems with making those
> beasts async. struct mount is *not* accessed past kern_unmount(),
> but the objects used by the superblock might very well be - in
> this case they (struct ipc_namespace, pointed to by s->s_fs_data)
> are freed by the caller after kern_unmount() returns. And possibly
> reused. Now note that they are used as search keys by
> mqueue_get_tree() and it becomes very fishy.
>
> If you want to go that way, make it something like
>
> void put_ipc_ns(struct ipc_namespace *ns)
> {
> 	if (refcount_dec_and_lock(&ns->ns.count, &mq_lock)) {
> 		mq_clear_sbinfo(ns);
> 		spin_unlock(&mq_lock);
> 		kern_unmount_rcu(ns->mq_mnt);
> 	}
> }
>
> and give mqueue this for ->kill_sb():
>
> static void mqueue_kill_super(struct super_block *sb)
> {
> 	struct ipc_namespace *ns = sb->s_fs_info;
>
> 	kill_litter_super(sb);
> 	do the rest of free_ipc_ns();
> }
>
> One thing: kern_unmount_rcu() needs a very big warning about
> the caution needed from its callers. It's really not safe
> for general use, and it will be a temptation for folks with
> scalability problems like this one to just use it instead of
> kern_unmount() and declare the problem solved.
FWIW, that won't work correctly wrt failure exits. I'm digging
through the lifetime rules in there right now, will post when
I'm done.
On Fri, Feb 18, 2022 at 02:33:31PM -0500, Rik van Riel wrote:
> On Fri, 2022-02-18 at 19:26 +0000, Al Viro wrote:
> > On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
> > > After kern_unmount returns, callers can no longer access the
> > > vfsmount structure. However, the vfsmount structure does need
> > > to be kept around until the end of the RCU grace period, to
> > > make sure other accesses have all gone away too.
> > >
> > > This can be accomplished by either gating each kern_unmount
> > > on synchronize_rcu (the comment in the code says it all), or
> > > by deferring the freeing until the next grace period, where
> > > it needs to be handled in a workqueue due to the locking in
> > > mntput_no_expire().
> >
> > NAK. There's code that relies upon kern_unmount() being
> > synchronous. That's precisely the reason why MNT_INTERNAL
> > is treated that way in mntput_no_expire().
>
> Fair enough. Should I make a kern_unmount_rcu() version
> that gets called just from mq_put_mnt()?
Umm... I'm not sure you can afford having struct ipc_namespace
freed and reused before the mqueue superblock gets at least to
deactivate_locked_super().
On Fri, Feb 18, 2022 at 07:43:43PM +0000, Al Viro wrote:
> On Fri, Feb 18, 2022 at 02:33:31PM -0500, Rik van Riel wrote:
> > On Fri, 2022-02-18 at 19:26 +0000, Al Viro wrote:
> > > On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
> > > > After kern_unmount returns, callers can no longer access the
> > > > vfsmount structure. However, the vfsmount structure does need
> > > > to be kept around until the end of the RCU grace period, to
> > > > make sure other accesses have all gone away too.
> > > >
> > > > This can be accomplished by either gating each kern_unmount
> > > > on synchronize_rcu (the comment in the code says it all), or
> > > > by deferring the freeing until the next grace period, where
> > > > it needs to be handled in a workqueue due to the locking in
> > > > mntput_no_expire().
> > >
> > > NAK. There's code that relies upon kern_unmount() being
> > > synchronous. That's precisely the reason why MNT_INTERNAL
> > > is treated that way in mntput_no_expire().
> >
> > Fair enough. Should I make a kern_unmount_rcu() version
> > that gets called just from mq_put_mnt()?
>
> Umm... I'm not sure you can afford having struct ipc_namespace
> freed and reused before the mqueue superblock gets at least to
> deactivate_locked_super().
BTW, that's a good demonstration of the problems with making those
beasts async. struct mount is *not* accessed past kern_unmount(),
but the objects used by the superblock might very well be - in
this case they (struct ipc_namespace, pointed to by s->s_fs_data)
are freed by the caller after kern_unmount() returns. And possibly
reused. Now note that they are used as search keys by
mqueue_get_tree() and it becomes very fishy.
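For reference, mqueue_get_tree() keys the superblock lookup on the
ipc_namespace pointer, roughly (from ipc/mqueue.c):

static int mqueue_get_tree(struct fs_context *fc)
{
	struct mqueue_fs_context *ctx = fc->fs_private;

	/* the ipc_namespace pointer is the search key */
	return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns);
}

A recycled ipc_namespace at the same address would match an old,
not-yet-dead superblock.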
If you want to go that way, make it something like
void put_ipc_ns(struct ipc_namespace *ns)
{
	if (refcount_dec_and_lock(&ns->ns.count, &mq_lock)) {
		mq_clear_sbinfo(ns);
		spin_unlock(&mq_lock);
		kern_unmount_rcu(ns->mq_mnt);
	}
}
and give mqueue this for ->kill_sb():
static void mqueue_kill_super(struct super_block *sb)
{
	struct ipc_namespace *ns = sb->s_fs_info;

	kill_litter_super(sb);
	do the rest of free_ipc_ns();
}
One thing: kern_unmount_rcu() needs a very big warning about
the caution needed from its callers. It's really not safe
for general use, and it will be a temptation for folks with
scalability problems like this one to just use it instead of
kern_unmount() and declare the problem solved.
On Fri, Feb 18, 2022 at 09:06:31PM +0000, Al Viro wrote:
> FWIW, that won't work correctly wrt failure exits. I'm digging
> through the lifetime rules in there right now, will post when
> I'm done.
OK, now that I've reconstructed the picture... The problems with
delayed shutdown are prevented by the mq_clear_sbinfo() call in there -
mqueue is capable of more or less gracefully dealing with
having ->s_fs_info ripped from under it, which is what that
thing does. Before the kern_unmount(). And since that code is
non-modular, we don't need to protect that either.
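(For reference, mq_clear_sbinfo() is what severs that link - it just
clears the superblock's ->s_fs_info, with the caller holding mq_lock;
from ipc/mqueue.c, roughly:

void mq_clear_sbinfo(struct ipc_namespace *ns)
{
	ns->mq_mnt->mnt_sb->s_fs_info = NULL;
}

Everything in mqueue that dereferences ->s_fs_info is prepared to find
it NULL after this.)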
IOW, having
void put_ipc_ns(struct ipc_namespace *ns)
{
	if (refcount_dec_and_lock(&ns->ns.count, &mq_lock)) {
		mq_clear_sbinfo(ns);
		spin_unlock(&mq_lock);
		free_ipc_ns(ns);
	}
}
and
void mq_put_mnt(struct ipc_namespace *ns)
{
	/*
	 * The only reason it's safe to have the mntput async
	 * is that we'd already ripped the ipc_namespace away
	 * from the mqueue superblock, by having called
	 * mq_clear_sbinfo().
	 *
	 * NOTE: kern_unmount_rcu() IS NOT SAFE TO USE
	 * WITHOUT SERIOUS PRECAUTIONS.
	 *
	 * Anything that is used by the filesystem must either
	 * be already taken away (and the fs must survive that)
	 * or have its destruction delayed until the superblock
	 * shutdown.
	 */
	kern_unmount_rcu(ns->mq_mnt);
}
would suffice. free_ipc_work/free_ipc/mnt_llist can be killed off.
On Sat, Feb 19, 2022 at 05:58:13AM +0000, Al Viro wrote:
> On Sat, Feb 19, 2022 at 05:53:37AM +0000, Al Viro wrote:
> > On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
> >
> > >  struct super_block;
> > >  struct vfsmount;
> > > @@ -73,6 +74,7 @@ struct vfsmount {
> > >  	struct super_block *mnt_sb;	/* pointer to superblock */
> > >  	int mnt_flags;
> > >  	struct user_namespace *mnt_userns;
> > > +	struct rcu_work free_rwork;
> > >  } __randomize_layout;
> >
> > Wait, what? First of all, that has no business being in vfsmount -
> > everything that deeply internal belongs in struct mount, not in
> > its public part. Moreover, there's already mount->mnt_rcu, so what's
> > the point duplicating that?
>
> Argh... You need rcu_work there...
>
> OK, so make that a member of the same union mnt_rcu is. In struct mount,
> please. And I'm not sure I like the idea of shoving that much into
> struct mount, TBH...
We might have plenty of struct mount instances. Very few of them will
ever be internal, in the first place. Fewer yet - using kern_unmount_rcu().
And struct rcu_work is not small. If anything, I would consider something like

	call_rcu(&m->mnt_rcu, callback)

with the callback adding the struct mount to an llist and doing
schedule_delayed_work() on that. With the work consisting of doing
mntput() on everything in it.
I'll get some sleep and put together something along those lines tomorrow
morning.
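For illustration, the shape of that would be something like the
following (untested sketch; the names are made up):

static LLIST_HEAD(kern_unmount_list);

static void free_mounts(struct work_struct *unused)
{
	struct llist_node *node = llist_del_all(&kern_unmount_list);
	struct mount *m, *next;

	/* _safe: mntput() may free the mount we just visited */
	llist_for_each_entry_safe(m, next, node, mnt_llist)
		mntput(&m->mnt);
}

static DECLARE_DELAYED_WORK(kern_unmount_work, free_mounts);

static void kern_unmount_rcu_func(struct rcu_head *head)
{
	struct mount *m = container_of(head, struct mount, mnt_rcu);

	/* mnt_llist shares a union with mnt_rcu; once the RCU
	 * callback runs, mnt_rcu is no longer in use */
	if (llist_add(&m->mnt_llist, &kern_unmount_list))
		schedule_delayed_work(&kern_unmount_work, 1);
}

void kern_unmount_rcu(struct vfsmount *mnt)
{
	if (!IS_ERR_OR_NULL(mnt)) {
		real_mount(mnt)->mnt_ns = NULL;
		call_rcu(&real_mount(mnt)->mnt_rcu, kern_unmount_rcu_func);
	}
}

That keeps struct mount at its current size (mnt_llist already shares
the union with mnt_rcu) and keeps the potentially sleeping mntput()
out of softirq context.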
On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
>  struct super_block;
>  struct vfsmount;
> @@ -73,6 +74,7 @@ struct vfsmount {
>  	struct super_block *mnt_sb;	/* pointer to superblock */
>  	int mnt_flags;
>  	struct user_namespace *mnt_userns;
> +	struct rcu_work free_rwork;
>  } __randomize_layout;
Wait, what? First of all, that has no business being in vfsmount -
everything that deeply internal belongs in struct mount, not in
its public part. Moreover, there's already mount->mnt_rcu, so what's
the point duplicating that?
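For reference, struct mount in fs/mount.h already begins roughly like
this - note the existing union:

struct mount {
	struct hlist_node mnt_hash;
	struct mount *mnt_parent;
	struct dentry *mnt_mountpoint;
	struct vfsmount mnt;
	union {
		struct rcu_head mnt_rcu;
		struct llist_node mnt_llist;
	};
	/* ... */
} __randomize_layout;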
On Fri, 2022-02-18 at 19:26 +0000, Al Viro wrote:
> On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
> > After kern_unmount returns, callers can no longer access the
> > vfsmount structure. However, the vfsmount structure does need
> > to be kept around until the end of the RCU grace period, to
> > make sure other accesses have all gone away too.
> >
> > This can be accomplished by either gating each kern_unmount
> > on synchronize_rcu (the comment in the code says it all), or
> > by deferring the freeing until the next grace period, where
> > it needs to be handled in a workqueue due to the locking in
> > mntput_no_expire().
>
> NAK. There's code that relies upon kern_unmount() being
> synchronous. That's precisely the reason why MNT_INTERNAL
> is treated that way in mntput_no_expire().
Fair enough. Should I make a kern_unmount_rcu() version
that gets called just from mq_put_mnt()?
--
All Rights Reversed.
On Sat, Feb 19, 2022 at 05:53:37AM +0000, Al Viro wrote:
> On Fri, Feb 18, 2022 at 01:31:13PM -0500, Rik van Riel wrote:
>
> >  struct super_block;
> >  struct vfsmount;
> > @@ -73,6 +74,7 @@ struct vfsmount {
> >  	struct super_block *mnt_sb;	/* pointer to superblock */
> >  	int mnt_flags;
> >  	struct user_namespace *mnt_userns;
> > +	struct rcu_work free_rwork;
> >  } __randomize_layout;
>
> Wait, what? First of all, that has no business being in vfsmount -
> everything that deeply internal belongs in struct mount, not in
> its public part. Moreover, there's already mount->mnt_rcu, so what's
> the point duplicating that?
Argh... You need rcu_work there...
OK, so make that a member of the same union mnt_rcu is. In struct mount,
please. And I'm not sure I like the idea of shoving that much into
struct mount, TBH...
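Concretely, that would mean something like this (hypothetical member
name; struct rcu_work embeds a whole work_struct plus an rcu_head plus
a workqueue pointer, which is what makes it "that much"):

	union {
		struct rcu_head mnt_rcu;
		struct llist_node mnt_llist;
		struct rcu_work mnt_rwork;	/* hypothetical; grows the union a lot */
	};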