Hello,
syzbot found the following issue on:
HEAD commit: c8451c141e07 Merge tag 'acpi-6.2-rc2' of git://git.kernel...
git tree: upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=13c7c8b2480000
kernel config: https://syzkaller.appspot.com/x/.config?x=60208ff8fae87ede
dashboard link: https://syzkaller.appspot.com/bug?extid=7bb7cd3595533513a9e7
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=10beaa7c480000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=117c1a2a480000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3bf54f63556a/disk-c8451c14.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/e05385cdce2b/vmlinux-c8451c14.xz
kernel image: https://storage.googleapis.com/syzbot-assets/78ecaed069cb/bzImage-c8451c14.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/9bbf851f8c00/mount_0.gz
The issue was bisected to:
commit 55d1cbbbb29e6656c662ee8f73ba1fc4777532eb
Author: Arnd Bergmann <[email protected]>
Date: Tue Nov 9 02:35:04 2021 +0000
hfs/hfsplus: use WARN_ON for sanity check
bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=16aa6992480000
final oops: https://syzkaller.appspot.com/x/report.txt?x=15aa6992480000
console output: https://syzkaller.appspot.com/x/log.txt?x=11aa6992480000
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: [email protected]
Fixes: 55d1cbbbb29e ("hfs/hfsplus: use WARN_ON for sanity check")
------------[ cut here ]------------
WARNING: CPU: 0 PID: 55 at fs/hfs/inode.c:489 hfs_write_inode+0x8bb/0x9e0 fs/hfs/inode.c:489
Modules linked in:
CPU: 0 PID: 55 Comm: kworker/u4:4 Not tainted 6.2.0-rc1-syzkaller-00084-gc8451c141e07 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Workqueue: writeback wb_workfn (flush-7:0)
RIP: 0010:hfs_write_inode+0x8bb/0x9e0 fs/hfs/inode.c:489
Code: 24 c8 00 00 00 48 8d 90 74 ff ff ff 48 8d 70 a6 e8 9a f5 ff ff e9 ec fc ff ff 41 bc fb ff ff ff e9 04 fd ff ff e8 35 90 37 ff <0f> 0b e9 bf fb ff ff e8 29 90 37 ff 0f 0b e9 b9 fe ff ff e8 cd 77
RSP: 0018:ffffc9000201f6e8 EFLAGS: 00010293
RAX: 0000000000000000 RBX: 1ffff92000403edf RCX: 0000000000000000
RDX: ffff888018518040 RSI: ffffffff8248e35b RDI: 0000000000000005
RBP: ffff88807639a198 R08: 0000000000000005 R09: 0000000000000065
R10: 0000000000000050 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000050 R14: ffffc9000201f728 R15: 0000000000000082
FS: 0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffc010a2a68 CR3: 000000001db0d000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
write_inode fs/fs-writeback.c:1451 [inline]
__writeback_single_inode+0xcfc/0x1440 fs/fs-writeback.c:1663
writeback_sb_inodes+0x54d/0xf90 fs/fs-writeback.c:1889
wb_writeback+0x2c5/0xd70 fs/fs-writeback.c:2063
wb_do_writeback fs/fs-writeback.c:2206 [inline]
wb_workfn+0x2e0/0x12f0 fs/fs-writeback.c:2246
process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
worker_thread+0x669/0x1090 kernel/workqueue.c:2436
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at [email protected].
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
For information about bisection process see: https://goo.gl/tpsmEJ#bisection
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches
On Wed, Jan 4, 2023, at 15:24, syzbot wrote:
> The issue was bisected to:
>
> commit 55d1cbbbb29e6656c662ee8f73ba1fc4777532eb
> Author: Arnd Bergmann <[email protected]>
> Date: Tue Nov 9 02:35:04 2021 +0000
>
> hfs/hfsplus: use WARN_ON for sanity check
>
My patch was a mechanical conversion from '/* panic? */'
to 'WARN_ON()' to work around a compiler warning,
and the previous code had been in there since the
2004 HFS rewrite by Roman Zippel.
I know nothing about what this function actually does,
so my best answer is that we could revert my patch
and use pr_debug() instead of WARN_ON() for all of these.
Arnd
On Wed, Jan 4, 2023 at 6:43 AM Arnd Bergmann <[email protected]> wrote:
>
> My patch was a mechanical conversion from '/* panic? */'
> to 'WARN_ON()' to work around a compiler warning,
> and the previous code had been in there since the
> 2004 HFS rewrite by Roman Zippel.
Yeah. Looking at the code, the warning does make sense, in that the
code then does a
hfs_bnode_write(...)
with the updated data, using the size of "struct hfs_cat_file", so
checking that the entry has that length does seem somewhat sensible.
At the same time, the HFS_IS_RSRC(inode) case in the same function
never had that "panic ?" thing, and obviously the two cases that *do*
have the check never actually did anything (since 2004, as you point
out - you have to go back to before the git days to even see any
development in this area).
> I know nothing about what this function actually does,
> so my best answer is that we could revert my patch
> and use pr_debug() instead of WARN_ON() for all of these.
Looks like this is syzbot just mounting a garbage image (or is it
actually some real hfs thing?)
I'm not sure a pr_debug() would even be appropriate. I think "return
-EIO" (but with a hfs_find_exit()) is likely the right answer. We've
done that in the past (see commit 8d824e69d9f3: "hfs: fix OOB Read in
__hfs_brec_find").
I suspect this code is basically all dead. From what I can tell, hfs
only gets updates for
(a) syzbot reports
(b) vfs interface changes
and the last real changes seem to have been by Ernesto A. Fernández
back in 2018.
Hmm. Looking at that code, we have another bug in there, introduced by
an earlier fix for a similar issue: commit 8d824e69d9f3 ("hfs: fix OOB
Read in __hfs_brec_find") added
+ if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
+ return -EIO;
but it's after hfs_find_init(), so it should actually have done a
hfs_find_exit() to not leak memory.
So we should probably fix that too.
Something like this ENTIRELY UNTESTED patch?
Do we have anybody who looks at hfs?
Linus
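(For illustration, a rough, untested sketch of the kind of change described above, not Linus's actual patch, which was attached to the original mail and is not reproduced here. It reuses the identifiers from fs/hfs/inode.c that are quoted elsewhere in this thread; the exact placement and error handling are assumptions.)

	if (hfs_find_init(HFS_SB(main_inode->i_sb)->cat_tree, &fd))
		/* panic? */
		return -EIO;

	if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN) {
		/* the check from commit 8d824e69d9f3 returned -EIO directly
		 * here, leaking what hfs_find_init() had allocated */
		hfs_find_exit(&fd);
		return -EIO;
	}

	fd.search_key->cat = HFS_I(main_inode)->cat_key;
	if (hfs_brec_find(&fd))
		goto out;
	...
	/* in the regular-file case, bail out on a short catalog entry
	 * instead of WARN_ON()ing and then writing past it */
	if (fd.entrylength < sizeof(struct hfs_cat_file))
		goto out;
	hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
		       sizeof(struct hfs_cat_file));
	...
out:
	hfs_find_exit(&fd);
	return 0;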
On Wed, Jan 4, 2023, at 20:06, Linus Torvalds wrote:
>
> I suspect this code is basically all dead. From what I can tell, hfs
> only gets updates for
>
> (a) syzbot reports
>
> (b) vfs interface changes
There is clearly no new work going into it, and most data exchange
with MacOS would use HFS+, but I think there are still some users.
> and the last real changes seem to have been by Ernesto A. Fernández
> back in 2018.
>
> Hmm. Looking at that code, we have another bug in there, introduced by
> an earlier fix for a similar issue: commit 8d824e69d9f3 ("hfs: fix OOB
> Read in __hfs_brec_find") added
>
> + if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
> + return -EIO;
>
> but it's after hfs_find_init(), so it should actually have done a
> hfs_find_exit() to not leak memory.
>
> So we should probably fix that too.
>
> Something like this ENTIRELY UNTESTED patch?
>
> Do we have anybody who looks at hfs?
Adding Viacheslav Dubeyko to Cc, he's at least been reviewing
patches for HFS and HFS+ somewhat recently. The linux-m68k
list may have some users dual-booting old MacOS.
Viacheslav, see the start of the thread at
https://lore.kernel.org/lkml/[email protected]/
Arnd
Hello Arnd!
On 1/4/23 23:33, Arnd Bergmann wrote:
> Adding Viacheslav Dubeyko to Cc, he's at least been reviewing
> patches for HFS and HFS+ somewhat recently. The linux-m68k
> list may have some users dual-booting old MacOS.
HFS/HFS+ is also used by PowerPC users on PowerMac hardware for both
exchanging data between MacOS and Linux as well as for booting Linux
using GRUB with a HFS/HFS+ /boot partition.
Debian's grub-installer creates an HFS partition from which GRUB loads
the kernel and the initrd [1]. In order to be able to update the kernel and
initrd, the running Linux system needs to be able to read and write the
HFS/HFS+ partition that is used for /boot.
Adrian
[1] https://salsa.debian.org/installer-team/grub-installer/-/blob/master/grub-installer#L918
--
.''`. John Paul Adrian Glaubitz
: :' : Debian Developer
`. `' Physicist
`- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Hi Arnd,
> On Jan 4, 2023, at 2:33 PM, Arnd Bergmann <[email protected]> wrote:
>
> On Wed, Jan 4, 2023, at 20:06, Linus Torvalds wrote:
>>
>> I suspect this code is basically all dead. From what I can tell, hfs
>> only gets updates for
>>
>> (a) syzbot reports
>>
>> (b) vfs interface changes
>
> There is clearly no new work going into it, and most data exchange
> with MacOS would use HFS+, but I think there are still some users.
>
>> and the last real changes seem to have been by Ernesto A. Fernández
>> back in 2018.
>>
>> Hmm. Looking at that code, we have another bug in there, introduced by
>> an earlier fix for a similar issue: commit 8d824e69d9f3 ("hfs: fix OOB
>> Read in __hfs_brec_find") added
>>
>> + if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
>> + return -EIO;
>>
>> but it's after hfs_find_init(), so it should actually have done a
>> hfs_find_exit() to not leak memory.
>>
>> So we should probably fix that too.
>>
>> Something like this ENTIRELY UNTESTED patch?
>>
>> Do we have anybody who looks at hfs?
>
> Adding Viacheslav Dubeyko to Cc, he's at least been reviewing
> patches for HFS and HFS+ somewhat recently. The linux-m68k
> list may have some users dual-booting old MacOS.
>
> Viacheslav, see the start of the thread at
> https://lore.kernel.org/lkml/[email protected]/
>
Let me take a look into the issue.
Thanks,
Slava.
> On Jan 4, 2023, at 4:36 PM, Viacheslav Dubeyko <[email protected]> wrote:
>
> Hi Arnd,
>
>> On Jan 4, 2023, at 2:33 PM, Arnd Bergmann <[email protected]> wrote:
<skipped>
>>>
>>> Something like this ENTIRELY UNTESTED patch?
>>>
>>> Do we have anybody who looks at hfs?
>>
>> Adding Viacheslav Dubeyko to Cc, he's at least been reviewing
>> patches for HFS and HFS+ somewhat recently. The linux-m68k
>> list may have some users dual-booting old MacOS.
>>
>> Viacheslav, see the start of the thread at
>> https://lore.kernel.org/lkml/[email protected]/
>>
>
> Let me take a look into the issue.
>
As far as I can see, I cannot reproduce the issue on a newly created HFS volume
with a simple test that creates several 4 MB files. The sync_fs
operation definitely calls the hfs_write_inode() method, but I don't see this issue.
hfs_write_inode() allocates a struct hfs_find_data fd variable on the stack
(fs/hfs/inode.c:426). fd.entrylength is initialized in __hfs_brec_find()
(fs/hfs/bfind.c:100). Technically, fd->entrylength = len - keylen can underflow,
but such an issue can only take place for a corrupted volume. An internal logic
error should result in hfs_brec_find() returning an error (fs/hfs/inode.c:466):
	if (hfs_brec_find(&fd))
		/* panic? */
		goto out;
And, finally, the logic should end without hitting the issue.
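For illustration only, a defensive check of the kind implied here could sit where fd->entrylength is computed in __hfs_brec_find(). This is a sketch, not a proposed patch; it assumes the local variables (len, keylen, res) and the fail label of fs/hfs/bfind.c:

	/* len and keylen are both read from on-disk b-tree node data, so on
	 * a corrupted volume keylen can exceed len and the unsigned
	 * subtraction below would wrap around to a huge entry length. */
	if (keylen > len) {
		res = -EINVAL;
		goto fail;
	}
	fd->entrylength = len - keylen;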
Also, as far as I can see, the volume available in the report (mount_0.gz) is already corrupted somehow:
sudo losetup /dev/loop20 ./mount_0
sudo fsck.hfs -d /dev/loop20
** /dev/loop20
Using cacheBlockSize=32K cacheTotalBlock=1024 cacheSize=32768K.
** Checking HFS volume.
hfs_swap_BTNode: record #1 invalid offset (0xFFF8)
Invalid node structure
(3, 0)
Invalid B-tree node size
(3, 0)
** Volume check failed.
volume check failed with error 7
volume type is HFS
primary MDB is at block 2 0x02
alternate MDB is at block 62 0x3e
primary VHB is at block 0 0x00
alternate VHB is at block 0 0x00
sector size = 512 0x200
VolumeObject flags = 0x19
total sectors for volume = 64 0x40
total sectors for embedded volume = 0 0x00
So, the HFS volume corruption had happened somehow earlier.
The reported issue is only a side effect of that corruption:
the real corruption of the HFS volume had taken place before,
and somehow it was a silent issue.
Finally, I don’t see any issue with the WARN_ON() in fs/hfs/inode.c:489.
If there is some issue, then it could be in the b-tree logic, or the
HFS volume was corrupted in some other way. But the available report doesn’t
provide any hint about what could have gone wrong during testing.
Thanks,
Slava.
On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
> Also, as far as I can see, available volume in report (mount_0.gz) somehow corrupted already:
Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images.
So basically, you can't trust anything you read from the disc.
> On Jan 5, 2023, at 7:46 AM, Matthew Wilcox <[email protected]> wrote:
>
> On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
>> Also, as far as I can see, available volume in report (mount_0.gz) somehow corrupted already:
>
> Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images.
> So basically, you can't trust anything you read from the disc.
>
If the volume has been deliberately corrupted, then there is no guarantee that the file system
driver will behave nicely. Technically speaking, an inode write operation should never
happen for a corrupted volume, because the corruption should be detected at
b-tree node initialization time. If we would like to bring the HFS/HFS+ drivers to such
a nice state, it would require a lot of refactoring and implementation effort. I am not sure
it is worth doing, because not that many people really use HFS/HFS+ as their main file
system under Linux.
Thanks,
Slava.
Hi Arnd,
Am 05.01.23 um 11:33 schrieb Arnd Bergmann:
> On Wed, Jan 4, 2023, at 20:06, Linus Torvalds wrote:
>> I suspect this code is basically all dead. From what I can tell, hfs
>> only gets updates for
>>
>> (a) syzbot reports
>>
>> (b) vfs interface changes
> There is clearly no new work going into it, and most data exchange
> with MacOS would use HFS+, but I think there are still some users.
PowerPC yaboot boot partitions spring to mind here. Plain HFS is still
used in places where it can't be replaced AFAIK.
>
>> and the last real changes seem to have been by Ernesto A. Fernández
>> back in 2018.
>>
>> Hmm. Looking at that code, we have another bug in there, introduced by
>> an earlier fix for a similar issue: commit 8d824e69d9f3 ("hfs: fix OOB
>> Read in __hfs_brec_find") added
>>
>> + if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
>> + return -EIO;
>>
>> but it's after hfs_find_init(), so it should actually have done a
>> hfs_find_exit() to not leak memory.
>>
>> So we should probably fix that too.
>>
>> Something like this ENTIRELY UNTESTED patch?
Looking at Linus' patch, I wonder whether the missing fd.entrylength
size test in the HFS_IS_RSRC(inode) case was due to the fact that a
file's resource fork may be empty?
Adding Brad Boyer (bfind.c author) to Cc. Brad might know what
fd.entrylength should be set to in such a case.
Cheers,
Michael
>>
>> Do we have anybody who looks at hfs?
> Adding Viacheslav Dubeyko to Cc, he's at least been reviewing
> patches for HFS and HFS+ somewhat recently. The linux-m68k
> list may have some users dual-booting old MacOS.
>
> Viacheslav, see the start of the thread at
> https://lore.kernel.org/lkml/[email protected]/
>
> Arnd
On Thu, Jan 5, 2023 at 1:35 PM Michael Schmitz <[email protected]> wrote:
>
> Looking at Linus' patch, I wonder whether the missing fd.entrylength
> size test in the HFS_IS_RSRC(inode) case was due to the fact that a
> file's resource fork may be empty?
But if that is the case, then the subsequent hfs_bnode_read would
return garbage, no? And then writing it back after the update would be
even worse.
So adding that
+ if (fd.entrylength < sizeof(struct hfs_cat_file))
+ goto out;
would seem to be the right thing anyway. No?
But I really don't know the code, so this is all from just looking at
it and going "that makes no sense". Maybe it _does_ make sense to
people who have more background on it.
Linus
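(As a sketch only, based on the function and structure names already quoted in this thread, with the surrounding update logic abbreviated, the check in the HFS_IS_RSRC(inode) branch of hfs_write_inode() would look something like the following.)

	if (HFS_IS_RSRC(inode)) {
		/* bail out on a short catalog entry instead of reading
		 * garbage and then rewriting it with hfs_bnode_write() */
		if (fd.entrylength < sizeof(struct hfs_cat_file))
			goto out;
		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
			       sizeof(struct hfs_cat_file));
		/* ... update the resource fork fields in rec ... */
		hfs_bnode_write(fd.bnode, &rec, fd.entryoffset,
				sizeof(struct hfs_cat_file));
		goto out;
	}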
Hi Linus,
Am 06.01.2023 um 10:53 schrieb Linus Torvalds:
> On Thu, Jan 5, 2023 at 1:35 PM Michael Schmitz <[email protected]> wrote:
>>
>> Looking at Linus' patch, I wonder whether the missing fd.entrylength
>> size test in the HFS_IS_RSRC(inode) case was due to the fact that a
>> file's resource fork may be empty?
>
> But if that is the case, then the subsequent hfs_bnode_read would
> return garbage, no? And then writing it back after the update would be
> even worse.
>
> So adding that
>
> + if (fd.entrylength < sizeof(struct hfs_cat_file))
> + goto out;
>
> would seem to be the right thing anyway. No?
Yes, it would seem to be the right thing (in order to avoid further
corrupting HFS data structures). Returning -EIO might cause a regression
though.
> But I really don't know the code, so this is all from just looking at
> it and going "that makes no sense". Maybe it _does_ make sense to
> people who have more background on it.
What had me wondering is that the 'panic?' comment was only present in
the directory and regular file data cases but not in the resource fork
case.
But I don't really understand the code too well either. I'll have to see
for myself whether or not your patch does cause a regression on HFS
filesystems such as the OF bootstrap partition used on PowerPC Macs.
Cheers,
Michael
>
> Linus
>
Hi Linus,
Am 06.01.2023 um 12:46 schrieb Michael Schmitz:
> Hi Linus,
>
> Am 06.01.2023 um 10:53 schrieb Linus Torvalds:
>> On Thu, Jan 5, 2023 at 1:35 PM Michael Schmitz <[email protected]>
>> wrote:
>>>
>>> Looking at Linus' patch, I wonder whether the missing fd.entrylength
>>> size test in the HFS_IS_RSRC(inode) case was due to the fact that a
>>> file's resource fork may be empty?
>>
>> But if that is the case, then the subsequent hfs_bnode_read would
>> return garbage, no? And then writing it back after the update would be
>> even worse.
>>
>> So adding that
>>
>> + if (fd.entrylength < sizeof(struct hfs_cat_file))
>> + goto out;
>>
>> would seem to be the right thing anyway. No?
>
> Yes, it would seem to be the right thing (in order to avoid further
> corrupting HFS data structures). Returning -EIO might cause a regression
> though.
A brief test on a HFS filesystem image (copy of my yaboot bootstrap
partition) did not show any regression, so your patch appears to be just
fine as-is.
Cheers,
Michael
On Thu, 5 Jan 2023 at 17:45, Viacheslav Dubeyko <[email protected]> wrote:
> > On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
> >> Also, as far as I can see, available volume in report (mount_0.gz) somehow corrupted already:
> >
> > Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images.
> > So basically, you can't trust anything you read from the disc.
> >
>
> If the volume has been deliberately corrupted, then no guarantee that file system
> driver will behave nicely. Technically speaking, inode write operation should never
> happened for corrupted volume because the corruption should be detected during
> b-tree node initialization time. If we would like to achieve such nice state of HFS/HFS+
> drivers, then it requires a lot of refactoring/implementation efforts. I am not sure that
> it is worth to do because not so many guys really use HFS/HFS+ as the main file
> system under Linux.
Most popular distros will happily auto-mount HFS/HFS+ from anything
inserted into USB (e.g. what one may think is a charger). This creates
interesting security consequences for most Linux users.
An image may also be corrupted non-deliberately, which will lead to
random memory corruptions if the kernel trusts it blindly.
On Thu, Jul 20, 2023 at 05:27:57PM +0200, Dmitry Vyukov wrote:
> On Thu, 5 Jan 2023 at 17:45, Viacheslav Dubeyko <[email protected]> wrote:
> > > On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
> > >> Also, as far as I can see, available volume in report (mount_0.gz) somehow corrupted already:
> > >
> > > Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images.
> > > So basically, you can't trust anything you read from the disc.
> > >
> >
> > If the volume has been deliberately corrupted, then no guarantee that file system
> > driver will behave nicely. Technically speaking, inode write operation should never
> > happened for corrupted volume because the corruption should be detected during
> > b-tree node initialization time. If we would like to achieve such nice state of HFS/HFS+
> > drivers, then it requires a lot of refactoring/implementation efforts. I am not sure that
> > it is worth to do because not so many guys really use HFS/HFS+ as the main file
> > system under Linux.
>
>
> Most popular distros will happily auto-mount HFS/HFS+ from anything
> inserted into USB (e.g. what one may think is a charger). This creates
> interesting security consequences for most Linux users.
> An image may also be corrupted non-deliberately, which will lead to
> random memory corruptions if the kernel trusts it blindly.
Then we should delete the HFS/HFS+ filesystems. They're orphaned in
MAINTAINERS and if distros are going to do such a damnfool thing,
then we must stop them.
(Please ignore my previous mail which was CC'ed to the wrong list)
Hello!
On Thu, 2023-07-20 at 18:30 +0100, Matthew Wilcox wrote:
> On Thu, Jul 20, 2023 at 05:27:57PM +0200, Dmitry Vyukov wrote:
> > On Thu, 5 Jan 2023 at 17:45, Viacheslav Dubeyko <[email protected]> wrote:
> > > > On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
> > > > > Also, as far as I can see, available volume in report (mount_0.gz) somehow corrupted already:
> > > >
> > > > Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images.
> > > > So basically, you can't trust anything you read from the disc.
> > > >
> > >
> > > If the volume has been deliberately corrupted, then no guarantee that file system
> > > driver will behave nicely. Technically speaking, inode write operation should never
> > > happened for corrupted volume because the corruption should be detected during
> > > b-tree node initialization time. If we would like to achieve such nice state of HFS/HFS+
> > > drivers, then it requires a lot of refactoring/implementation efforts. I am not sure that
> > > it is worth to do because not so many guys really use HFS/HFS+ as the main file
> > > system under Linux.
> >
> >
> > Most popular distros will happily auto-mount HFS/HFS+ from anything
> > inserted into USB (e.g. what one may think is a charger). This creates
> > interesting security consequences for most Linux users.
> > An image may also be corrupted non-deliberately, which will lead to
> > random memory corruptions if the kernel trusts it blindly.
>
> Then we should delete the HFS/HFS+ filesystems. They're orphaned in
> MAINTAINERS and if distros are going to do such a damnfool thing,
> then we must stop them.
Both HFS and HFS+ work perfectly fine. And if distributions or users are so
sensitive about security, it's up to them to blacklist individual features
in the kernel.
Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
and I don't think it's justified to introduce such a hard compatibility
breakage just because some people are worried about theoretical evil
maid attacks.
HFS/HFS+ is mandatory if you want to boot Linux on a classic Mac or PowerMac
and I don't think it's okay to break all these systems running Linux.
Thanks,
Adrian
--
.''`. John Paul Adrian Glaubitz
: :' : Debian Developer
`. `' Physicist
`- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
On Thu, 2023-07-20 at 18:59 +0100, Matthew Wilcox wrote:
> On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> > > Then we should delete the HFS/HFS+ filesystems. They're orphaned in
> > > MAINTAINERS and if distros are going to do such a damnfool thing,
> > > then we must stop them.
> >
> > Both HFS and HFS+ work perfectly fine. And if distributions or users are so
> > sensitive about security, it's up to them to blacklist individual features
> > in the kernel.
> >
> > Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
> > and I don't think it's justified to introduce such a hard compatibility
> > breakage just because some people are worried about theoretical evil
> > maid attacks.
> >
> > HFS/HFS+ mandatory if you want to boot Linux on a classic Mac or PowerMac
> > and I don't think it's okay to break all these systems running Linux.
>
> If they're so popular, then it should be no trouble to find somebody
> to volunteer to maintain those filesystems. Except they've been
> marked as orphaned since 2011 and effectively were orphaned several
> years before that (the last contribution I see from Roman Zippel is
> in 2008, and his last contribution to hfs was in 2006).
I suspect that this is one of those catch-22 situations: distros are
going to enable every feature under the sun. That doesn't mean that
anyone is actually _using_ them these days.
Is "staging" still a thing? Maybe we should move these drivers into the
staging directory and pick a release where we'll sunset it, and then see
who comes out of the woodwork?
Cheers,
--
Jeff Layton <[email protected]>
On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> > Then we should delete the HFS/HFS+ filesystems. They're orphaned in
> > MAINTAINERS and if distros are going to do such a damnfool thing,
> > then we must stop them.
>
> Both HFS and HFS+ work perfectly fine. And if distributions or users are so
> sensitive about security, it's up to them to blacklist individual features
> in the kernel.
>
> Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
> and I don't think it's justified to introduce such a hard compatibility
> breakage just because some people are worried about theoretical evil
> maid attacks.
>
> HFS/HFS+ mandatory if you want to boot Linux on a classic Mac or PowerMac
> and I don't think it's okay to break all these systems running Linux.
If they're so popular, then it should be no trouble to find somebody
to volunteer to maintain those filesystems. Except they've been
marked as orphaned since 2011 and effectively were orphaned several
years before that (the last contribution I see from Roman Zippel is
in 2008, and his last contribution to hfs was in 2006).
Hi Adrian,
Am 21.07.2023 um 05:56 schrieb John Paul Adrian Glaubitz:
> (Please ignore my previous mail which was CC'ed to the wrong list)
>
> Hello!
>
> On Thu, 2023-07-20 at 18:30 +0100, Matthew Wilcox wrote:
>> On Thu, Jul 20, 2023 at 05:27:57PM +0200, Dmitry Vyukov wrote:
>>> On Thu, 5 Jan 2023 at 17:45, Viacheslav Dubeyko <[email protected]> wrote:
>>>>> On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
>>>>>> Also, as far as I can see, available volume in report (mount_0.gz) somehow corrupted already:
>>>>>
>>>>> Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images.
>>>>> So basically, you can't trust anything you read from the disc.
>>>>>
>>>>
>>>> If the volume has been deliberately corrupted, then no guarantee that file system
>>>> driver will behave nicely. Technically speaking, inode write operation should never
>>>> happened for corrupted volume because the corruption should be detected during
>>>> b-tree node initialization time. If we would like to achieve such nice state of HFS/HFS+
>>>> drivers, then it requires a lot of refactoring/implementation efforts. I am not sure that
>>>> it is worth to do because not so many guys really use HFS/HFS+ as the main file
>>>> system under Linux.
>>>
>>>
>>> Most popular distros will happily auto-mount HFS/HFS+ from anything
>>> inserted into USB (e.g. what one may think is a charger). This creates
>>> interesting security consequences for most Linux users.
>>> An image may also be corrupted non-deliberately, which will lead to
>>> random memory corruptions if the kernel trusts it blindly.
>>
>> Then we should delete the HFS/HFS+ filesystems. They're orphaned in
>> MAINTAINERS and if distros are going to do such a damnfool thing,
>> then we must stop them.
>
> Both HFS and HFS+ work perfectly fine. And if distributions or users are so
> sensitive about security, it's up to them to blacklist individual features
> in the kernel.
>
> Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
> and I don't think it's justified to introduce such a hard compatibility
> breakage just because some people are worried about theoretical evil
> maid attacks.
Seconded.
> HFS/HFS+ mandatory if you want to boot Linux on a classic Mac or PowerMac
> and I don't think it's okay to break all these systems running Linux.
You can still boot Linux on these systems without HFS support.
Installing a new kernel to the HFS filesystem, or a boot loader like
yaboot, might be another matter. But there still is a user-space option
like hfsutils or hfsplus.
That said, in terms of the argument about USB media with corrupt HFS
filesystems presenting a security risk, I take the view that once you
have physical access to a system, all bets are off. Doubly so if
auto-mounting USB media is enabled.
Cheers,
Michael
>
> Thanks,
> Adrian
>
On Thu, Jul 20, 2023 at 2:39 PM Matthew Wilcox <[email protected]> wrote:
>
> On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> > > Then we should delete the HFS/HFS+ filesystems. They're orphaned in
> > > MAINTAINERS and if distros are going to do such a damnfool thing,
> > > then we must stop them.
> >
> > Both HFS and HFS+ work perfectly fine. And if distributions or users are so
> > sensitive about security, it's up to them to blacklist individual features
> > in the kernel.
> >
> > Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
> > and I don't think it's justified to introduce such a hard compatibility
> > breakage just because some people are worried about theoretical evil
> > maid attacks.
> >
> > HFS/HFS+ mandatory if you want to boot Linux on a classic Mac or PowerMac
> > and I don't think it's okay to break all these systems running Linux.
>
> If they're so popular, then it should be no trouble to find somebody
> to volunteer to maintain those filesystems. Except they've been
> marked as orphaned since 2011 and effectively were orphaned several
> years before that (the last contribution I see from Roman Zippel is
> in 2008, and his last contribution to hfs was in 2006).
One data point may help: I've been running Linux on an old PowerMac
and an old Intel MacBook since about 2014 or 2015 or so. I have needed
the HFS/HFS+ filesystem support for about 9 years now (including that
"blessed" support for the Apple Boot partition).
There's never been a problem with Linux and the Apple filesystems.
Maybe it speaks to the maturity/stability of the code that already
exists. The code does not need a lot of attention nowadays.
Maybe the orphaned status is the wrong metric to use to determine
removal. Maybe a better metric would be installation base. I.e., how
many users use the filesystem.
Jeff
On Thu, Jul 20, 2023 at 05:38:52PM -0400, Jeffrey Walton wrote:
> On Thu, Jul 20, 2023 at 2:39 PM Matthew Wilcox <[email protected]> wrote:
> >
> > On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> > > > Then we should delete the HFS/HFS+ filesystems. They're orphaned in
> > > > MAINTAINERS and if distros are going to do such a damnfool thing,
> > > > then we must stop them.
> > >
> > > Both HFS and HFS+ work perfectly fine. And if distributions or users are so
> > > sensitive about security, it's up to them to blacklist individual features
> > > in the kernel.
> > >
> > > Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
> > > and I don't think it's justified to introduce such a hard compatibility
> > > breakage just because some people are worried about theoretical evil
> > > maid attacks.
> > >
> > > HFS/HFS+ mandatory if you want to boot Linux on a classic Mac or PowerMac
> > > and I don't think it's okay to break all these systems running Linux.
> >
> > If they're so popular, then it should be no trouble to find somebody
> > to volunteer to maintain those filesystems. Except they've been
> > marked as orphaned since 2011 and effectively were orphaned several
> > years before that (the last contribution I see from Roman Zippel is
> > in 2008, and his last contribution to hfs was in 2006).
>
> One data point may help.. I've been running Linux on an old PowerMac
> and an old Intel MacBook since about 2014 or 2015 or so. I have needed
> the HFS/HFS+ filesystem support for about 9 years now (including that
> "blessed" support for the Apple Boot partition).
>
> There's never been a problem with Linux and the Apple filesystems.
> Maybe it speaks to the maturity/stability of the code that already
> exists. The code does not need a lot of attention nowadays.
>
> Maybe the orphaned status is the wrong metric to use to determine
> removal. Maybe a better metric would be installation base. I.e., how
> many users use the filesystem.
I think you're missing the context. There are bugs in how this filesystem
handles intentionally-corrupted filesystems. That's being reported as
a critical bug because apparently some distributions automount HFS/HFS+
filesystems presented to them on a USB key. Nobody is being paid to fix
these bugs. Nobody is volunteering to fix these bugs out of the kindness
of their heart. What choice do we have but to remove the filesystem,
regardless of how many happy users it has?
On Thu, Jul 20, 2023 at 02:27:50PM -0400, Jeff Layton wrote:
> On Thu, 2023-07-20 at 18:59 +0100, Matthew Wilcox wrote:
> > On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> > > > Then we should delete the HFS/HFS+ filesystems. They're orphaned in
> > > > MAINTAINERS and if distros are going to do such a damnfool thing,
> > > > then we must stop them.
> > >
> > > Both HFS and HFS+ work perfectly fine. And if distributions or users are so
> > > sensitive about security, it's up to them to blacklist individual features
> > > in the kernel.
> > >
> > > Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
> > > and I don't think it's justified to introduce such a hard compatibility
> > > breakage just because some people are worried about theoretical evil
> > > maid attacks.
> > >
> > > HFS/HFS+ mandatory if you want to boot Linux on a classic Mac or PowerMac
> > > and I don't think it's okay to break all these systems running Linux.
> >
> > If they're so popular, then it should be no trouble to find somebody
> > to volunteer to maintain those filesystems. Except they've been
> > marked as orphaned since 2011 and effectively were orphaned several
> > years before that (the last contribution I see from Roman Zippel is
> > in 2008, and his last contribution to hfs was in 2006).
>
> I suspect that this is one of those catch-22 situations: distros are
> going to enable every feature under the sun. That doesn't mean that
> anyone is actually _using_ them these days.
>
> Is "staging" still a thing? Maybe we should move these drivers into the
> staging directory and pick a release where we'll sunset it, and then see
> who comes out of the woodwork?
No, the train wreck of filesystems in staging proved that it wasn't
a viable process.
We should just follow the same process as we are using for reiser -
mark it as deprecated in place, pick a date that we are going to
remove it, then add a warning (both runtime, in kconfig and probably
in the kernel filesystem documentation) that it is deprecated and
support is going to be removed at a certain date.
We should be applying the same criteria and process for all the
other filesystems that are orphaned, too. We need to be much more
proactive about dropping support for unmaintained filesystems that
nobody is ever fixing despite the constant stream of
corruption- and deadlock- related bugs reported against them.
-Dave.
--
Dave Chinner
[email protected]
On Thu, 20 Jul 2023 at 15:37, Matthew Wilcox <[email protected]> wrote:
>
> I think you're missing the context. There are bugs in how this filesystem
> handles intentionally-corrupted filesystems. That's being reported as
> a critical bug because apparently some distributions automount HFS/HFS+
> filesystems presented to them on a USB key. Nobody is being paid to fix
> these bugs. Nobody is volunteering to fix these bugs out of the kindness
> of their heart. What choice do we have but to remove the filesystem,
> regardless of how many happy users it has?
You're being silly.
We have tons of sane options. The obvious one is "just don't mount
untrusted media".
Now, the kernel doesn't know which media is trusted or not, since the
kernel doesn't actually see things like /etc/mtab and friends. So we
in the kernel can't do that, but distros should have a very easy time
just fixing their crazy models.
Saying that the kernel should remove a completely fine filesystem just
because some crazy use-cases that nobody cares about are broken, now
*that* is just crazy.
Now, would it be good to have a maintainer for hfs? Obviously. But no,
we don't remove filesystems just because they don't have maintainers.
And no, we have not suddenly started saying "users don't matter".
Linus
On Fri, 21 Jul 2023, Dave Chinner wrote:
> > I suspect that this is one of those catch-22 situations: distros are
> > going to enable every feature under the sun. That doesn't mean that
> > anyone is actually _using_ them these days.
I think the value of filesystem code is not just a question of how often
it gets executed -- it's also about retaining access to the data collected
in archives, museums, galleries etc. that is inevitably held in old
formats.
>
> We need to much more proactive about dropping support for unmaintained
> filesystems that nobody is ever fixing despite the constant stream of
> corruption- and deadlock- related bugs reported against them.
>
IMO, a stream of bug reports is not a reason to remove code (it's a reason
to revert some commits).
Anyway, that stream of bugs presumably flows from the unstable kernel API,
which is inherently high-maintenance. It seems that a stable API could be
more appropriate for any filesystem for which the on-disk format is fixed
(by old media, by unmaintained FLOSS implementations or abandoned
proprietary implementations).
Being in userspace, I suppose FUSE could be a stable API though I imagine
it's not ideal in the sense that migrating kernel code there would be
difficult. Maybe userspace NFS 4 would be a better fit? (I've no idea, I'm
out of my depth in /fs...)
Ideally, kernel-to-userspace code migration would be done with automatic
program transformation -- otherwise it would become another stream of
bugs.
Removing support for a file system, and damning the user base who happily and actively use that file system, is never the right option.
There are always a lot of users who use so-called obsolete hardware and various software to support their needs every day. They don’t subscribe to mailing lists and are in no way active in the community, and they depend on Linux continuing to support them. Changing the status quo over a particularly narrow attack surface is a step that should never be taken.
Not having a maintainer is not ideal, but the code has been very reliable and, as the saying goes, if it’s not broken ...
If a serious problem did come up with this file system, there are a number of developers who could do a fix without being its full-time maintainer.
Calling for the removal is just nonsensical to me.
Mike Hosken
Sent via my iPhone
> On 21/07/2023, at 11:12, Linus Torvalds <[email protected]> wrote:
>
> On Thu, 20 Jul 2023 at 15:37, Matthew Wilcox <[email protected]> wrote:
>>
>> I think you're missing the context. There are bugs in how this filesystem
>> handles intentionally-corrupted filesystems. That's being reported as
>> a critical bug because apparently some distributions automount HFS/HFS+
>> filesystems presented to them on a USB key. Nobody is being paid to fix
>> these bugs. Nobody is volunteering to fix these bugs out of the kindness
>> of their heart. What choice do we have but to remove the filesystem,
>> regardless of how many happy users it has?
>
> You're being silly.
>
> We have tons of sane options. The obvious one is "just don't mount
> untrusted media".
>
> Now, the kernel doesn't know which media is trusted or not, since the
> kernel doesn't actually see things like /etc/mtab and friends. So we
> in the kernel can't do that, but distros should have a very easy time
> just fixing their crazy models.
>
> Saying that the kernel should remove a completely fine filesystem just
> because some crazy use-cases that nobody cares about are broken, now
> *that* just crazy.
>
> Now, would it be good to have a maintainer for hgs? Obviously. But no,
> we don't remove filesystems just because they don't have maintainers.
>
> And no, we have not suddenly started saying "users don't matter".
>
> Linus
>
On Fri, Jul 21, 2023 at 11:03:28AM +1000, Finn Thain wrote:
> On Fri, 21 Jul 2023, Dave Chinner wrote:
>
> > > I suspect that this is one of those catch-22 situations: distros are
> > > going to enable every feature under the sun. That doesn't mean that
> > > anyone is actually _using_ them these days.
>
> I think the value of filesystem code is not just a question of how often
> it gets executed -- it's also about retaining access to the data collected
> in archives, museums, galleries etc. that is inevitably held in old
> formats.
That's an argument for adding support to tar, not for maintaining
read/write support.
> > We need to much more proactive about dropping support for unmaintained
> > filesystems that nobody is ever fixing despite the constant stream of
> > corruption- and deadlock- related bugs reported against them.
>
> IMO, a stream of bug reports is not a reason to remove code (it's a reason
> to revert some commits).
>
> Anyway, that stream of bugs presumably flows from the unstable kernel API,
> which is inherently high-maintenance. It seems that a stable API could be
> more appropriate for any filesystem for which the on-disk format is fixed
> (by old media, by unmaintained FLOSS implementations or abandoned
> proprietary implementations).
You've misunderstood. Google have decided to subject the entire kernel
(including obsolete unmaintained filesystems) to stress tests that it's
never had before. IOW these bugs have been there since the code was
merged. There's nothing to back out. There's no API change to blame.
It's always been buggy and it's never mattered before.
It wouldn't be so bad if Google had also decided to fund people to fix
those bugs, but no, they've decided to dump them on public mailing lists
and berate developers into fixing them.
Hi Matthew,
Am 21.07.2023 um 13:11 schrieb Matthew Wilcox:
> You've misunderstood. Google have decided to subject the entire kernel
> (including obsolete unmaintained filesystems) to stress tests that it's
> never had before. IOW these bugs have been there since the code was
> merged. There's nothing to back out. There's no API change to blame.
> It's always been buggy and it's never mattered before.
>
> It wouldn't be so bad if Google had also decided to fund people to fix
> those bugs, but no, they've decided to dump them on public mailing lists
> and berate developers into fixing them.
Dumping these reports on public mailing lists may still be OK (leaving
aside that this invites writing code to exploit these bugs). Asking
nicely for a fix, too.
'Berating developers' clearly oversteps the mark.
Maybe Google need to train their AI (that they're evidently training on
kernel source, so ought to be grateful for such a nice training set)
with a view to manners? We'd sure hate Google's input to go ignored for
lack of civility?
(We could always reassign bugs of this sort against e.g. HFS to
distributions, of course. They might have the resources to do something
about it. Doesn't Google distribute Linux in some form or other? Is
Android or ChromeOS susceptible to this issue? Time to find out ...)
Be that as it may - removing code that still has use, just to appease
pushy Google staff (or AI) is just plain wrong IMO.
Cheers,
Michael
On Fri, 21 Jul 2023, Matthew Wilcox wrote:
> On Fri, Jul 21, 2023 at 11:03:28AM +1000, Finn Thain wrote:
> > On Fri, 21 Jul 2023, Dave Chinner wrote:
> >
> > > > I suspect that this is one of those catch-22 situations: distros
> > > > are going to enable every feature under the sun. That doesn't mean
> > > > that anyone is actually _using_ them these days.
> >
> > I think the value of filesystem code is not just a question of how
> > often it gets executed -- it's also about retaining access to the data
> > collected in archives, museums, galleries etc. that is inevitably held
> > in old formats.
>
> That's an argument for adding support to tar, not for maintaining
> read/write support.
>
I rather think it's an argument for collaboration between the interested
parties upstream (including tar developers). As I see it, the question is,
what kind of "upstream" is best for that?
> > > We need to much more proactive about dropping support for
> > > unmaintained filesystems that nobody is ever fixing despite the
> > > constant stream of corruption- and deadlock- related bugs reported
> > > against them.
> >
> > IMO, a stream of bug reports is not a reason to remove code (it's a
> > reason to revert some commits).
> >
> > Anyway, that stream of bugs presumably flows from the unstable kernel
> > API, which is inherently high-maintenance. It seems that a stable API
> > could be more appropriate for any filesystem for which the on-disk
> > format is fixed (by old media, by unmaintained FLOSS implementations
> > or abandoned proprietary implementations).
>
> You've misunderstood. Google have decided to subject the entire kernel
> (including obsolete unmaintained filesystems) to stress tests that it's
> never had before. IOW these bugs have been there since the code was
> merged. There's nothing to back out. There's no API change to blame.
> It's always been buggy and it's never mattered before.
>
I see. Thanks for providing that background.
> It wouldn't be so bad if Google had also decided to fund people to fix
> those bugs, but no, they've decided to dump them on public mailing lists
> and berate developers into fixing them.
>
Those bugs, if moved from kernel to userspace, would be less harmful,
right?
Hi Michael!
On Fri, 2023-07-21 at 07:05 +1200, Michael Schmitz wrote:
> Installing a new kernel to the HFS filesystem, or a boot loader like
> yaboot, might be another matter. But there still is an user space option
> like hfsutils or hfsplus.
Both won't work with GRUB. grub-install expects the boot partition to be
mounted while hfsutils/hfsplus don't use the VFS layer.
> That said, in terms of the argument about USB media with corrupt HFS
> filesystems presenting a security risk, I take the view that once you
> have physical access to a system, all bets are off. Doubly so if
> auto-mounting USB media is enabled.
I also don't understand why this was brought up. There are so many other
potential options for attacking a computer, even remotely. We also didn't
drop x86 support after all these CPU vulnerabilities were discovered, did
we?
Adrian
--
.''`. John Paul Adrian Glaubitz
: :' : Debian Developer
`. `' Physicist
`- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Dmitry Vyukov <[email protected]> writes:
> On Thu, 5 Jan 2023 at 17:45, Viacheslav Dubeyko <[email protected]> wrote:
>> > On Wed, Jan 04, 2023 at 08:37:16PM -0800, Viacheslav Dubeyko wrote:
>> >> Also, as far as I can see, available volume in report (mount_0.gz) somehow corrupted already:
>> >
>> > Syzbot generates deliberately-corrupted (aka fuzzed) filesystem images.
>> > So basically, you can't trust anything you read from the disc.
>> >
>>
>> If the volume has been deliberately corrupted, then no guarantee that file system
>> driver will behave nicely. Technically speaking, inode write operation should never
>> happened for corrupted volume because the corruption should be detected during
>> b-tree node initialization time. If we would like to achieve such nice state of HFS/HFS+
>> drivers, then it requires a lot of refactoring/implementation efforts. I am not sure that
>> it is worth to do because not so many guys really use HFS/HFS+ as the main file
>> system under Linux.
>
>
> Most popular distros will happily auto-mount HFS/HFS+ from anything
> inserted into USB (e.g. what one may think is a charger). This creates
> interesting security consequences for most Linux users.
> An image may also be corrupted non-deliberately, which will lead to
> random memory corruptions if the kernel trusts it blindly.
I am going to point out that there are no known Linux filesystems
that are safe to mount when someone has written a deliberately
corrupted filesystem on a USB stick.
Some filesystems like ext4 make a best effort to fix bugs of this sort
as they are discovered but unless something has changed since last I
looked no one makes the effort to ensure that it is 100% safe to mount
any possible corrupted version of any Linux filesystem.
If there is any filesystem in Linux that is safe to automount from
an untrusted USB stick I really would like to hear about it. We could
allow mounting them in unprivileged user namespaces and give all kinds
of interesting capabilities to our users.
As it is I respectfully suggest that if there is a security issue it is
the userspace code that automounts any filesystem on an untrusted USB
stick.
Eric
Please stop sending these emails to me and remove me from the recipient list!
Sent from my iPhone
> On 21 Jul 2023, at 02:27, Matthew Wilcox <[email protected]> wrote:
>
> On Fri, Jul 21, 2023 at 11:03:28AM +1000, Finn Thain wrote:
>> On Fri, 21 Jul 2023, Dave Chinner wrote:
>>
>>>> I suspect that this is one of those catch-22 situations: distros are
>>>> going to enable every feature under the sun. That doesn't mean that
>>>> anyone is actually _using_ them these days.
>>
>> I think the value of filesystem code is not just a question of how often
>> it gets executed -- it's also about retaining access to the data collected
>> in archives, museums, galleries etc. that is inevitably held in old
>> formats.
>
> That's an argument for adding support to tar, not for maintaining
> read/write support.
>
>>> We need to much more proactive about dropping support for unmaintained
>>> filesystems that nobody is ever fixing despite the constant stream of
>>> corruption- and deadlock- related bugs reported against them.
>>
>> IMO, a stream of bug reports is not a reason to remove code (it's a reason
>> to revert some commits).
>>
>> Anyway, that stream of bugs presumably flows from the unstable kernel API,
>> which is inherently high-maintenance. It seems that a stable API could be
>> more appropriate for any filesystem for which the on-disk format is fixed
>> (by old media, by unmaintained FLOSS implementations or abandoned
>> proprietary implementations).
>
> You've misunderstood. Google have decided to subject the entire kernel
> (including obsolete unmaintained filesystems) to stress tests that it's
> never had before. IOW these bugs have been there since the code was
> merged. There's nothing to back out. There's no API change to blame.
> It's always been buggy and it's never mattered before.
>
> It wouldn't be so bad if Google had also decided to fund people to fix
> those bugs, but no, they've decided to dump them on public mailing lists
> and berate developers into fixing them.
>
On Fri, 21 Jul 2023, Matthew Wilcox wrote:
>
> You've misunderstood. Google have decided to subject the entire kernel
> (including obsolete unmaintained filesystems) to stress tests that it's
> never had before. IOW these bugs have been there since the code was
> merged. There's nothing to back out. There's no API change to blame.
> It's always been buggy and it's never mattered before.
>
I'm not blaming the unstable API for the bugs, I'm blaming it for the
workload. A stable API (like a userspace API) decreases the likelihood
that overloaded maintainers have to orphan a filesystem implementation.
On Fri, Jul 21, 2023 at 06:14:04PM +1000, Finn Thain wrote:
>
> I'm not blaming the unstable API for the bugs, I'm blaming it for the
> workload. A stable API (like a userspace API) decreases the likelihood
> that overloaded maintainers have to orphan a filesystem implementation.
You are incorrect. The HFS file system has gotten zero development
attention and the bugs were not the result of the API changes. The
main issue here is that the HFS file system does not have a maintainer,
and decreasing the workload will not magically make someone appear
with deep knowledge of that particular part of the code base.
It's also the case that the actual amount of work on the "overloaded
maintainers" caused by API changes is minimal --- it's dwarfed by
syzbot noise (complaints from syzbot that aren't really bugs, or for
really outré threat models).
API changes within the kernel are the responsibility of the people
making the change. For example, consider all of the folio changes
that have been landing in the kernel; the amount of extra work on the
part of most file system maintainers is minimal, because it's the
people making the API changes who update the file system. I won't say
that it's _zero_ work, because file system maintainers review the
changes, and we run regression tests, and we sometimes need to point
out when a bug has been introduced --- at which point the person
making the API change has the responsibility of fixing or reverting
the change.
An unstable API is much more painful for out-of-tree kernel code. But
upstream kernel developers aren't really concerned with out-of-tree
kernel code, except to point out that the work of the people who are
promulgating out-of-tree modules would be much less if they actually
got them cleaned up and made acceptable for upstream inclusion.
- Ted