2023-06-07 06:02:46

by Bagas Sanjaya

Subject: Fwd: Remounting ext4 filesystem from ro to rw fails when quotas are enabled

Hi,

I noticed a regression report on Bugzilla [1]. Quoting from it:

> Since commit a44be64, remounting a read-only ext4 filesystem to become read-write fails when quotas are enabled. The mount syscall returns -EROFS and outputs the following in dmesg:
>
> ```
> EXT4-fs warning (device loop0): ext4_enable_quotas:7028: Failed to enable quota tracking (type=0, err=-30, ino=3). Please run e2fsck
> ```
>
>
> Root cause
>
> The problem can be traced back to the changes introduced in commit a44be64. It appears that the issue arises because the SB_RDONLY bit of the s_flags field is now only cleared after executing the ext4_enable_quotas function. However, the vfs_setup_quota_inode function, called by ext4_enable_quotas, checks whether this bit is set (fs/quota/dquot.c:2331):
>
> ```
> if (IS_RDONLY(inode))
> return -EROFS;
> ```
>
> This check therefore always fails with -EROFS.
>
>
> Steps to Reproduce
>
> The bug can be reproduced by executing the following script on a current mainline kernel with defconfig:
>
> ```
> #!/bin/bash
>
> set -ex
>
> truncate -s 1G /tmp/img
> mkfs.ext4 /tmp/img
> tune2fs -Q usrquota,grpquota,prjquota /tmp/img
> losetup /dev/loop0 /tmp/img
> mount -o ro /dev/loop0 /mnt
> mount -o remount,rw /mnt
> ```
>
> Executing the script results in the following output:
>
> ```
> + truncate -s 1G /tmp/img
> + mkfs.ext4 /tmp/img
> mke2fs 1.47.0 (5-Feb-2023)
> Discarding device blocks: done
> Creating filesystem with 262144 4k blocks and 65536 inodes
> Filesystem UUID: b96a3da2-043f-11ee-b6f0-47c69db05231
> Superblock backups stored on blocks:
> 32768, 98304, 163840, 229376
>
> Allocating group tables: done
> Writing inode tables: done
> Creating journal (8192 blocks): done
> Writing superblocks and filesystem accounting information: done
>
> + tune2fs -Q usrquota,grpquota,prjquota /tmp/img
> tune2fs 1.47.0 (5-Feb-2023)
> + losetup /dev/loop0 /tmp/img
> [ 6.766763] loop0: detected capacity change from 0 to 2097152
> + mount -o ro /dev/loop0 /mnt
> [ 6.791561] EXT4-fs (loop0): mounted filesystem b96a3da2-043f-11ee-b6f0-47c69db05231 ro with ordered data mode. Quota mode: journalled.
> + mount -o remount,rw /mnt
> [ 6.805546] EXT4-fs warning (device loop0): ext4_enable_quotas:7028: Failed to enable quota tracking (type=0, err=-30, ino=3). Please run e2fsck to fix.
> mount: /mnt: cannot remount /dev/loop0 read-write, is write-protected.
> dmesg(1) may have more information after failed mount system call.
> ```

See Bugzilla for the full thread.

Ted, it looks like this regression is caused by your ext4_xattr_block_set()
fix for an earlier syzbot report. Would you like to take a look at it?

Anyway, I'm adding it to regzbot:

#regzbot introduced: a44be64bbecb15 https://bugzilla.kernel.org/show_bug.cgi?id=217529
#regzbot title: Remounting ext4 filesystem from ro to rw fails when quotas are enabled

Thanks.

[1]: https://bugzilla.kernel.org/show_bug.cgi?id=217529

--
An old man doll... just what I always wanted! - Clara


2023-06-08 04:57:23

by Theodore Ts'o

Subject: Re: Fwd: Remounting ext4 filesystem from ro to rw fails when quotas are enabled

On Wed, Jun 07, 2023 at 12:51:26PM +0700, Bagas Sanjaya wrote:
> I noticed a regression report on Bugzilla [1]. Quoting from it:
>
> > Since commit a44be64, remounting a read-only ext4 filesystem to
> > become read-write fails when quotas are enabled. The mount syscall
> > returns -EROFS and outputs the following in dmesg:

Yeah, and I think all we can do is revert this commit:

ext4: don't clear SB_RDONLY when remounting r/w until quota is re-enabled

I think I saw some messages go by about this getting queued for the
stable kernel; if so, could it please be dropped?

> >
> > The problem can be traced back to the changes introduced in commit
> > a44be64. It appears that the issue arises because the SB_RDONLY
> > bit of the s_flags field is now only cleared after executing the
> > ext4_enable_quotas function. However, the vfs_setup_quota_inode
> > function, called by ext4_enable_quotas, checks whether this bit is
> > set (fs/quota/dquot.c:2331):

The problem that we're trying to solve is that the malicious syzbot
reproducer is, in one thread, twiddling the file system state from r/o
to rw and back. In another thread, it's attempting to create files and
directories. And occasionally, we're tripping this warning:

WARN_ON_ONCE(dquot_initialize_needed(inode));

That's because we're racing with the quota getting initialized, and
the moment we clear the SB_RDONLY bit the thread which is trying to
create a directory or file will proceed with the operation --- even
though the quota subsystem hasn't been initialized yet. That's why
the patch attempted to delay the clearing of the SB_RDONLY bit until
after the reinitialization of the quota subsystem.

Since this is screwing up the ability to remount the file system rw,
we need to revert this commit, at which point, we'll be able to
trigger this warning again.

So how do we fix the warning? Well, we could just drop the
WARN_ON_ONCE(); the downside is that when this race gets hit, the
quota operations to allocate the block and inode will silently become
a no-op, which means the quota will get out of sync with reality.

Alternatively, we could add a check at the beginning of
ext4_xattr_block_set():

	if (dquot_initialize_needed(inode))
		return -EROFS;

... on the theory that the only time we should hit this is when there
is a quota setup racing with inode creation, and it's better that we
just let the mkdir or open with O_CREAT fail than to succeed, and
allocate blocks before the quota subsystem has been initialized. I'm
not sure how safe this would be on older quota setups (pre-ext4 quota
feature), since I suspect the race window is quite a bit bigger if I
understand correctly how things worked with the legacy quota support.
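
Concretely, the tradeoff looks like this in a toy user-space model (just
a sketch of mine -- toy_inode and toy_xattr_block_set() are invented
stand-ins, not the real ext4 code): the operation is refused with -EROFS
while quota initialization is still pending, so nothing gets allocated
behind the quota subsystem's back.

```c
/*
 * Toy user-space model of the "fail early" option -- NOT the real ext4
 * code; toy_inode and toy_xattr_block_set() are invented stand-ins for
 * the real inode and ext4_xattr_block_set().
 */
#include <errno.h>
#include <stdbool.h>

struct toy_inode {
	bool quota_init_needed;	/* stands in for dquot_initialize_needed() */
	long blocks;		/* blocks allocated (and quota-accounted) */
};

/* Refuse to allocate while quota setup is racing with the remount. */
int toy_xattr_block_set(struct toy_inode *inode)
{
	if (inode->quota_init_needed)
		return -EROFS;	/* the mkdir/O_CREAT caller just fails */
	inode->blocks++;	/* normal, properly accounted allocation */
	return 0;
}
```

The quota stays consistent, at the cost of a spurious failure for the
racing caller.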

The final really hacky thing I could imagine is to hack
dquot_initialize() to something like this:

int dquot_initialize(struct inode *inode)
{
	int ret = __dquot_initialize(inode, -1);

	if (ret)
		return ret;
	if (dquot_initialize_needed(inode)) {
		msleep(1000);
		return __dquot_initialize(inode, -1);
	}
	return 0;
}

But I threw up a little in my mouth while writing it....

So I'm tempted to just remove the WARN_ON's, and just accept that if
the superuser wants to twiddle the read-only state of a file system with
quota, at high rates, while something else is trying to create
files/directories, most of which will fail while the file system is
read-only, then the quota may get out of sync, and... ¯\_(ツ)_/¯

Since this is only something that crazy people or malicious Syzbot
reproducers would try to do, I'm really having a hard time bringing
myself to care. Especially since we have plenty of other places where
we aren't doing the dquot_initialize_needed() check, so the
opportunities for the quota to get out of sync already exist in other
code paths.

Jan, what do you think?

- Ted

2023-06-08 14:27:00

by Theodore Ts'o

Subject: [PATCH 1/2] Revert "ext4: don't clear SB_RDONLY when remounting r/w until quota is re-enabled"

This reverts commit a44be64bbecb15a452496f60db6eacfee2b59c79.

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Theodore Ts'o <[email protected]>
---
fs/ext4/super.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 56a5d1c469fc..05fcecc36244 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -6388,7 +6388,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
struct ext4_mount_options old_opts;
ext4_group_t g;
int err = 0;
- int enable_rw = 0;
#ifdef CONFIG_QUOTA
int enable_quota = 0;
int i, j;
@@ -6575,7 +6574,7 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
if (err)
goto restore_opts;

- enable_rw = 1;
+ sb->s_flags &= ~SB_RDONLY;
if (ext4_has_feature_mmp(sb)) {
err = ext4_multi_mount_protect(sb,
le64_to_cpu(es->s_mmp_block));
@@ -6622,9 +6621,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
ext4_release_system_zone(sb);

- if (enable_rw)
- sb->s_flags &= ~SB_RDONLY;
-
/*
* Reinitialize lazy itable initialization thread based on
* current settings
--
2.31.0


2023-06-08 18:01:15

by Jan Kara

Subject: Re: Fwd: Remounting ext4 filesystem from ro to rw fails when quotas are enabled

On Thu 08-06-23 00:40:56, Theodore Ts'o wrote:
> On Wed, Jun 07, 2023 at 12:51:26PM +0700, Bagas Sanjaya wrote:
> > I noticed a regression report on Bugzilla [1]. Quoting from it:
> >
> > > Since commit a44be64, remounting a read-only ext4 filesystem to
> > > become read-write fails when quotas are enabled. The mount syscall
> > > returns -EROFS and outputs the following in dmesg:
>
> Yeah, and I think all we can do is revert this commit:
>
> ext4: don't clear SB_RDONLY when remounting r/w until quota is re-enabled
>
> I think I saw some messages go by about this getting queued for the
> stable kernel; if so, could it please be dropped?

Yeah, for now I'd revert the fix. It needs more thought and it is not like
it is fixing any serious vulnerability or so.

> > > The problem can be traced back to the changes introduced in commit
> > > a44be64. It appears that the issue arises because the SB_RDONLY
> > > bit of the s_flags field is now only cleared after executing the
> > > ext4_enable_quotas function. However, the vfs_setup_quota_inode
> > > function, called by ext4_enable_quotas, checks whether this bit is
> > > set (fs/quota/dquot.c:2331):
>
> The problem that we're trying to solve is that the malicious syzbot
> reproducer is, in one thread, twiddling the file system state from r/o
> to rw and back. In another thread, it's attempting to create files and
> directories. And occasionally, we're tripping this warning:
>
> WARN_ON_ONCE(dquot_initialize_needed(inode));
>
> That's because we're racing with the quota getting initialized, and
> the moment we clear the SB_RDONLY bit the thread which is trying to
> create a directory or file will proceed with the operation --- even
> though the quota subsystem hasn't been initialized yet. That's why
> the patch attempted to delay the clearing of the SB_RDONLY bit until
> after the reinitialization of the quota subsystem.
>
> Since this is screwing up the ability to remount the file system rw,
> we need to revert this commit, at which point, we'll be able to
> trigger this warning again.
>
> So how do we fix the warning? Well, we could just drop the
> WARN_ON_ONCE(); the downside is that when this race gets hit, the
> quota operations to allocate the block and inode will silently become
> a no-op, which means the quota will get out of sync with reality.

Yes, the warning triggering is actually a good thing because it shows us we
have a race where quota can get out of sync with real space usage.

> Alternatively, we could add a check at the beginning of
> ext4_xattr_block_set():
>
> 	if (dquot_initialize_needed(inode))
> 		return -EROFS;
>
> ... on the theory that the only time we should hit this is when there
> is a quota setup racing with inode creation, and it's better that we
> just let the mkdir or open with O_CREAT fail than to succeed, and
> allocate blocks before the quota subsystem has been initialized. I'm
> not sure how safe this would be on older quota setups (pre-ext4 quota
> feature), since I suspect the race window is quite a bit bigger if I
> understand correctly how things worked with the legacy quota support.

No, the xattr code is just the messenger here. There is a fundamental
problem: once we clear SB_RDONLY in the superblock flags, anybody can
come in with an operation that modifies the filesystem, and until we are
finished with dquot_resume() that operation can escape proper quota
accounting.
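
The ordering problem can be modeled in a few lines of user-space C (a
sketch under my own invented names -- fs_model, do_write() and the
remount helpers are illustrations, not kernel code):

```c
/*
 * Toy model of the remount/quota ordering problem -- NOT kernel code;
 * fs_model and the helpers below are invented for illustration.
 */
#include <stdbool.h>

struct fs_model {
	bool rdonly;		/* stands in for SB_RDONLY */
	bool quota_active;	/* stands in for quota having been resumed */
	long blocks_used;	/* what is really allocated */
	long quota_charged;	/* what quota accounting knows about */
};

/* A write proceeds once rdonly is clear, but is only charged to quota
 * if the quota subsystem is already active. */
int do_write(struct fs_model *fs)
{
	if (fs->rdonly)
		return -1;	/* -EROFS */
	fs->blocks_used++;
	if (fs->quota_active)
		fs->quota_charged++;
	return 0;
}

/* Mainline ordering: clear SB_RDONLY first, then resume quota.  A
 * writer slipping in between the two steps escapes accounting. */
void remount_rw_mainline(struct fs_model *fs)
{
	fs->rdonly = false;
	do_write(fs);		/* the racing writer sneaks in here */
	fs->quota_active = true;
}

/* Ordering attempted by a44be64: resume quota before clearing
 * SB_RDONLY.  Accounting stays consistent -- but the real
 * dquot_resume() path refuses to run while SB_RDONLY is set, which is
 * exactly the -EROFS regression this thread is about. */
void remount_rw_a44be64(struct fs_model *fs)
{
	fs->quota_active = true;
	fs->rdonly = false;
	do_write(fs);
}
```

In the first ordering the racing write is allocated but never charged,
which is precisely how quota gets out of sync with reality.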

> The final really hacky thing I could imagine is to hack
> dquot_initialize() to something like this:
>
> int dquot_initialize(struct inode *inode)
> {
> 	int ret = __dquot_initialize(inode, -1);
>
> 	if (ret)
> 		return ret;
> 	if (dquot_initialize_needed(inode)) {
> 		msleep(1000);
> 		return __dquot_initialize(inode, -1);
> 	}
> 	return 0;
> }
>
> But I threw up a little in my mouth while writing it....
>
> So I'm tempted to just remove the WARN_ON's, and just accept that if
> the superuser wants to twiddle the read-only state of a file system with
> quota, at high rates, while something else is trying to create
> files/directories, most of which will fail while the file system is
> read-only, then the quota may get out of sync, and... ¯\_(ツ)_/¯
>
> Since this is only something that crazy people or malicious Syzbot
> reproducers would try to do, I'm really having a hard time bringing
> myself to care. Especially since we have plenty of other places where
> we aren't doing the dquot_initialize_needed() check, so the
> opportunities for the quota to get out of sync already exist in other
> code paths.
>
> Jan, what do you think?

Well, I think we should fix this properly. The problem is that currently
changing the read-only state is just flipping the flag, but we need an
intermediate state like "superblock is really RW but for userspace it is
RO" so that we can prepare the filesystem for taking writes - in fact
we've already faced a very similar problem with orphan replay, but there
we worked around it by forbidding orphan replay on remount and playing
somewhat hacky games on mount.

I think I can see a way to solve this problem. I'll send patches.

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2023-06-15 10:05:45

by Jan Kara

Subject: Re: [PATCH 1/2] Revert "ext4: don't clear SB_RDONLY when remounting r/w until quota is re-enabled"

On Thu 08-06-23 10:18:04, Theodore Ts'o wrote:
> This reverts commit a44be64bbecb15a452496f60db6eacfee2b59c79.
>
> Link: https://lore.kernel.org/r/[email protected]
> Signed-off-by: Theodore Ts'o <[email protected]>

So I was looking more into how the warning in the xattr code can
trigger. Sadly, the syzbot reproducer does not seem to reproduce the
issue for me when I enable the warnings in the xattr code. Anyway, after
staring at the code for some time, I think the problem has actually been
introduced by the transition to the new mount API. In the old API,
userspace could not start writing to the filesystem until we called
set_mount_attributes() in do_remount(), which cleared the MNT_READONLY
flag on the mount (and which happens after reconfigure_super()). In the
new API, the fsconfig(2) syscall allows you to toggle the SB_RDONLY flag
without touching the mount flags, and so userspace can be writing to the
filesystem as soon as we clear the SB_RDONLY flag.

I have checked, and the other direction (i.e., remounting read-only) is
properly serialized in the VFS by setting sb->s_readonly_remount (and
making sure we either return EBUSY or all writers are going to see
s_readonly_remount set) before calling into the filesystem's reconfigure
code. I actually have a fixup ready within ext4, but I think it may be
better to fix this within the VFS and provide serialization similar to
that of the rw->ro transition there... Let's see what the VFS people say
to this.
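
For comparison, the rw->ro serialization referred to above can be
sketched as a toy model (again my own invented names -- toy_sb and the
helpers are illustrations, not the actual VFS code):

```c
/*
 * Toy model of the s_readonly_remount-style serialization for the
 * rw->ro direction -- invented names, NOT the actual VFS code.
 */
#include <errno.h>
#include <stdbool.h>

struct toy_sb {
	bool readonly_remount;	/* stands in for sb->s_readonly_remount */
	int active_writers;
};

/* New writers either see the flag and back off... */
int toy_write_begin(struct toy_sb *sb)
{
	if (sb->readonly_remount)
		return -EROFS;
	sb->active_writers++;
	return 0;
}

void toy_write_end(struct toy_sb *sb)
{
	sb->active_writers--;
}

/* ...or the remount sees them and returns -EBUSY.  Only once no writer
 * is in flight is it safe to actually flip the read-only flag. */
int toy_remount_ro(struct toy_sb *sb)
{
	sb->readonly_remount = true;
	if (sb->active_writers) {
		sb->readonly_remount = false;
		return -EBUSY;
	}
	return 0;
}
```

Something symmetric to this barrier is what the ro->rw direction is
currently missing.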

Honza

--
Jan Kara <[email protected]>
SUSE Labs, CR