Subject: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

Hi, this is your Linux kernel regression tracker.

I noticed a regression report in bugzilla.kernel.org. As many (most?)
kernel developers don't keep an eye on it, I decided to forward it by
mail. Quoting from https://bugzilla.kernel.org/show_bug.cgi?id=217074 :

> Hello,
> Installing the new kernel 6.1.12 does not allow assembly of the raid0 device.
>
> Going back to previously working kernels (5.15.65, 5.15.75) assembles the raid0 without any problems.
>
> Kernel command line parameters:
> ... ro kvm_amd.nested=0 kvm_amd.avic=1 kvm_amd.npt=1 raid0.default_layout=2
>
> mdadm assembly attempt fails with:
> 'mdadm: unexpected failure opening /dev/md<NR>'
>
> Tried with mdadm-4.1 and mdadm-4.2, but as it works with either version of mdadm, I rule out the mdadm software.
>
> strace -f output, last few lines:
>
> mkdir("/run/mdadm", 0755) = -1 EEXIST (File exists)
> openat(AT_FDCWD, "/run/mdadm/map.lock", O_RDWR|O_CREAT|O_TRUNC, 0600) = 3
> fcntl(3, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)
> flock(3, LOCK_EX) = 0
> newfstatat(3, "", {st_mode=S_IFREG|0600, st_size=0, ...}, AT_EMPTY_PATH) = 0
> openat(AT_FDCWD, "/run/mdadm/map", O_RDONLY) = 4
> fcntl(4, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE)
> newfstatat(4, "", {st_mode=S_IFREG|0600, st_size=0, ...}, AT_EMPTY_PATH) = 0
> read(4, "", 4096) = 0
> close(4) = 0
> openat(AT_FDCWD, "/run/mdadm/map", O_RDONLY) = 4
> fcntl(4, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE)
> newfstatat(4, "", {st_mode=S_IFREG|0600, st_size=0, ...}, AT_EMPTY_PATH) = 0
> read(4, "", 4096) = 0
> close(4) = 0
> newfstatat(AT_FDCWD, "/dev/.udev", 0x7ffcd8243c90, 0) = -1 ENOENT (No such file or directory)
> newfstatat(AT_FDCWD, "/run/udev", {st_mode=S_IFDIR|0755, st_size=160, ...}, 0) = 0
> openat(AT_FDCWD, "/proc/mdstat", O_RDONLY) = 4
> fcntl(4, F_SETFD, FD_CLOEXEC) = 0
> newfstatat(4, "", {st_mode=S_IFREG|0444, st_size=0, ...}, AT_EMPTY_PATH) = 0
> read(4, "Personalities : [raid1] [raid0] "..., 1024) = 56
> read(4, "", 1024) = 0
> close(4) = 0
> openat(AT_FDCWD, "/sys/block/md127/dev", O_RDONLY) = -1 ENOENT (No such file or directory)
> getpid() = 18351
> mknodat(AT_FDCWD, "/dev/.tmp.md.18351:9:127", S_IFBLK|0600, makedev(0x9, 0x7f)) = 0
> openat(AT_FDCWD, "/dev/.tmp.md.18351:9:127", O_RDWR|O_EXCL|O_DIRECT) = -1 ENXIO (No such device or address)
> unlink("/dev/.tmp.md.18351:9:127") = 0
> getpid() = 18351
> mknodat(AT_FDCWD, "/tmp/.tmp.md.18351:9:127", S_IFBLK|0600, makedev(0x9, 0x7f)) = 0
> openat(AT_FDCWD, "/tmp/.tmp.md.18351:9:127", O_RDWR|O_EXCL|O_DIRECT) = -1 ENXIO (No such device or address)
> unlink("/tmp/.tmp.md.18351:9:127") = 0
> write(2, "mdadm: unexpected failure openin"..., 45mdadm: unexpected failure opening /dev/md127
> ) = 45
> unlink("/run/mdadm/map.lock") = 0
> close(3) = 0
> exit_group(1) = ?
> +++ exited with 1 +++
>
>
> Tried with kernel compiled with either CONFIG_DEVTMPFS_SAFE=y or CONFIG_DEVTMPFS_SAFE=n, fails the same way.
>
> The raid consists of 4 devices; here are the mdstat contents:
>
> Personalities : [raid0]
> md127 : active raid0 sda[0] sdc[2] sdd[3] sdb[1]
> 2929769472 blocks super 1.2 512k chunks
>
> unused devices: <none>
>
>
> Examining the 4 block devices:
>
> gnusystem /var/log # mdadm --misc -E /dev/sda
> /dev/sda:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : bb710ce6:edd5d68d:a0a0a405:edd99547
> Name : gnusystem:md0-store (local to host gnusystem)
> Creation Time : Wed Sep 29 22:28:09 2021
> Raid Level : raid0
> Raid Devices : 4
>
> Avail Dev Size : 976508976 sectors (465.64 GiB 499.97 GB)
> Data Offset : 264192 sectors
> Super Offset : 8 sectors
> Unused Space : before=264112 sectors, after=0 sectors
> State : clean
> Device UUID : 7f226c1c:23632b9d:e3d6c656:74522906
>
> Update Time : Wed Sep 29 22:28:09 2021
> Bad Block Log : 512 entries available at offset 8 sectors
> Checksum : 51e99fb5 - correct
> Events : 0
>
> Chunk Size : 512K
>
> Device Role : Active device 0
> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
> gnusystem /var/log # mdadm --misc -E /dev/sdb
> /dev/sdb:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : bb710ce6:edd5d68d:a0a0a405:edd99547
> Name : gnusystem:md0-store (local to host gnusystem)
> Creation Time : Wed Sep 29 22:28:09 2021
> Raid Level : raid0
> Raid Devices : 4
>
> Avail Dev Size : 1953260976 sectors (931.39 GiB 1000.07 GB)
> Data Offset : 264192 sectors
> Super Offset : 8 sectors
> Unused Space : before=264112 sectors, after=0 sectors
> State : clean
> Device UUID : ed8795fe:c7e6719a:165db37e:32ec0894
>
> Update Time : Wed Sep 29 22:28:09 2021
> Bad Block Log : 512 entries available at offset 8 sectors
> Checksum : 215db63b - correct
> Events : 0
>
> Chunk Size : 512K
>
> Device Role : Active device 1
> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
> gnusystem /var/log # mdadm --misc -E /dev/sdc
> /dev/sdc:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : bb710ce6:edd5d68d:a0a0a405:edd99547
> Name : gnusystem:md0-store (local to host gnusystem)
> Creation Time : Wed Sep 29 22:28:09 2021
> Raid Level : raid0
> Raid Devices : 4
>
> Avail Dev Size : 976508976 sectors (465.64 GiB 499.97 GB)
> Data Offset : 264192 sectors
> Super Offset : 8 sectors
> Unused Space : before=264112 sectors, after=0 sectors
> State : clean
> Device UUID : 3713dfff:d2e29aaf:3275039d:08b317bb
>
> Update Time : Wed Sep 29 22:28:09 2021
> Bad Block Log : 512 entries available at offset 8 sectors
> Checksum : 42f70f03 - correct
> Events : 0
>
> Chunk Size : 512K
>
> Device Role : Active device 2
> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
> gnusystem /var/log # mdadm --misc -E /dev/sdd
> /dev/sdd:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : bb710ce6:edd5d68d:a0a0a405:edd99547
> Name : gnusystem:md0-store (local to host gnusystem)
> Creation Time : Wed Sep 29 22:28:09 2021
> Raid Level : raid0
> Raid Devices : 4
>
> Avail Dev Size : 1953260976 sectors (931.39 GiB 1000.07 GB)
> Data Offset : 264192 sectors
> Super Offset : 8 sectors
> Unused Space : before=264112 sectors, after=0 sectors
> State : clean
> Device UUID : 7da858ae:c0d6ca51:0ecaaaf0:280367cc
>
> Update Time : Wed Sep 29 22:28:09 2021
> Bad Block Log : 512 entries available at offset 8 sectors
> Checksum : 32cf4ab4 - correct
> Events : 0
>
> Chunk Size : 512K
>
> Device Role : Active device 3
> Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
>
> If any more information is needed, let me know.

See the ticket for details.


[TLDR for the rest of this mail: I'm adding this report to the list of
tracked Linux kernel regressions; the text you find below is based on a
few template paragraphs you might have encountered already in similar
form.]

BTW, let me use this mail to also add the report to the list of tracked
regressions to ensure it doesn't fall through the cracks:

#regzbot introduced: v5.15..v6.1.12
https://bugzilla.kernel.org/show_bug.cgi?id=217074
#regzbot title: block: md: raid0 no longer assembled
#regzbot ignore-activity

This isn't a regression? This issue or a fix for it is already being
discussed somewhere else? It was fixed already? You want to clarify when
the regression started to happen? Or point out I got the title or
something else totally wrong? Then just reply and tell me -- ideally
while also telling regzbot about it, as explained by the page listed in
the footer of this mail.

Developers: When fixing the issue, remember to add 'Link:' tags pointing
to the report (e.g. the bugzilla ticket and maybe this mail as well, if
this thread sees some discussion). See page linked in footer for details.

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
If I did something stupid, please tell me, as explained on that page.


2023-02-28 18:46:04

by Song Liu

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

Thanks for the report. I was on vacation last week. I will look into this soon.

Thanks,
Song

On Thu, Feb 23, 2023 at 8:06 AM Linux regression tracking (Thorsten
Leemhuis) <[email protected]> wrote:
> [...]

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On 28.02.23 19:45, Song Liu wrote:
> Thanks for the report. I was on vacation last week.

Welcome back, hope you enjoyed it.

> I will look into this soon.

Thx. In the bug report a workaround was found: enable
BLOCK_LEGACY_AUTOLOAD. See also:

https://lore.kernel.org/all/[email protected]/
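
For anyone who wants to check their own build, something like this should
do it (the paths below are just the usual distro defaults, they are not
taken from the report):

  # is the option set in the build config of the running kernel?
  grep BLOCK_LEGACY_AUTOLOAD /boot/config-"$(uname -r)"
  # or, if CONFIG_IKCONFIG_PROC is enabled:
  zgrep BLOCK_LEGACY_AUTOLOAD /proc/config.gz

  # enable it for a local rebuild (run from the kernel source tree)
  scripts/config --enable BLOCK_LEGACY_AUTOLOAD
  make olddefconfig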

I wonder if there is some constellation where that "default y" is not
honored for some reason (maybe because people already have it disabled
in their .config). But well, this is not my area of expertise; maybe I
shouldn't even have mentioned that thought.

Ciao, Thorsten

> On Thu, Feb 23, 2023 at 8:06 AM Linux regression tracking (Thorsten
> Leemhuis) <[email protected]> wrote:
>> [...]

2023-03-03 22:43:31

by Song Liu

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

+ Jes.

It appeared to me that we can assemble the array if we have any of the
following:
1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
2. Have a valid /etc/mdadm.conf;
3. Update mdadm to handle this case. (I tried some ugly hacks, which worked but
weren't clean).

Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
think we need mdadm to handle this properly. But the logistics might be
complicated, as mdadm is shipped separately.

Jes, what do you think about this? AFAICT, we need to update the logic in
mdopen.c:create_mddev().

Thanks,
Song


On Thu, Feb 23, 2023 at 8:06 AM Linux regression tracking (Thorsten
Leemhuis) <[email protected]> wrote:
> [...]

2023-03-05 21:21:22

by NeilBrown

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On Sat, 04 Mar 2023, Song Liu wrote:
> + Jes.
>
> It appeared to me that we can assemble the array if we have any of the
> following:
> 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
> 2. Have a valid /etc/mdadm.conf;
> 3. Update mdadm to handle this case. (I tried some ugly hacks, which worked but
> weren't clean).
>
> Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
> think we need mdadm to handle this properly. But the logistics might
> be complicated, as
> mdadm are shipped separately.
>
> Jes, what do you think about this? AFAICT, we need to update the logic in
> mdopen.c:create_mddev().

mdadm already handles this, but only if
CREATE names=yes
is present in /etc/mdadm.conf.
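
For illustration, a minimal /etc/mdadm.conf along those lines could look
like this (the UUID is the one from the reporter's mdadm -E output above;
an untested sketch, not something taken from the bug report):

  # the CREATE setting mentioned above
  CREATE names=yes
  # and/or an explicit entry for the array (Song's option 2)
  ARRAY /dev/md127 metadata=1.2 UUID=bb710ce6:edd5d68d:a0a0a405:edd99547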

Maybe we should flip the default for the next mdadm release, and patch
the kernel (with a stable backport) to select BLOCK_LEGACY_AUTOLOAD if
BLK_DEV_MD=m.
Then revert that - say - 6 months after the new mdadm is released.

NeilBrown


> [...]

2023-03-06 17:51:22

by Song Liu

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On Sun, Mar 5, 2023 at 1:21 PM NeilBrown <[email protected]> wrote:
>
> On Sat, 04 Mar 2023, Song Liu wrote:
> > + Jes.
> >
> > It appeared to me that we can assemble the array if we have any of the
> > following:
> > 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
> > 2. Have a valid /etc/mdadm.conf;
> > 3. Update mdadm to handle this case. (I tried some ugly hacks, which worked but
> > weren't clean).
> >
> > Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
> > think we need mdadm to handle this properly. But the logistics might
> > be complicated, as
> > mdadm are shipped separately.
> >
> > Jes, what do you think about this? AFAICT, we need to update the logic in
> > mdopen.c:create_mddev().
>
> mdadm already handles this, but only if
> CREATE names=yes
> is present in /etc/mdadm.conf
>
> Maybe we should flip the default for the next mdadm release, and patch
> the kernel (with a stable backport) to select BLOCK_LEGACY_AUTOLOAD if
> BLK_DEV_MD=m
> Then revert that - say - 6 months after the new mdadm is released.

I like this idea. I guess we also need to select BLOCK_LEGACY_AUTOLOAD
if BLK_DEV_MD=y?

Thanks,
Song

2023-03-06 22:35:13

by NeilBrown

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On Tue, 07 Mar 2023, Song Liu wrote:
> On Sun, Mar 5, 2023 at 1:21 PM NeilBrown <[email protected]> wrote:
> >
> > On Sat, 04 Mar 2023, Song Liu wrote:
> > > + Jes.
> > >
> > > It appeared to me that we can assemble the array if we have any of the
> > > following:
> > > 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
> > > 2. Have a valid /etc/mdadm.conf;
> > > 3. Update mdadm to handle this case. (I tried some ugly hacks, which worked but
> > > weren't clean).
> > >
> > > Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
> > > think we need mdadm to handle this properly. But the logistics might
> > > be complicated, as
> > > mdadm are shipped separately.
> > >
> > > Jes, what do you think about this? AFAICT, we need to update the logic in
> > > mdopen.c:create_mddev().
> >
> > mdadm already handles this, but only if
> > CREATE names=yes
> > is present in /etc/mdadm.conf
> >
> > Maybe we should flip the default for the next mdadm release, and patch
> > the kernel (with a stable backport) to select BLOCK_LEGACY_AUTOLOAD if
> > BLK_DEV_MD=m
> > Then revert that - say - 6 months after the new mdadm is released.
>
> I like this idea. I guess we also need to select BLOCK_LEGACY_AUTOLOAD
> if BLK_DEV_MD=y?

Yes of course - sorry.

Something like the following.

Thanks,
NeilBrown


From: NeilBrown <[email protected]>
Subject: md: select BLOCK_LEGACY_AUTOLOAD

When BLOCK_LEGACY_AUTOLOAD is not enabled, mdadm is not able to
activate new arrays unless "CREATE names=yes" appears in
mdadm.conf.

As this is a regression we need to always enable BLOCK_LEGACY_AUTOLOAD
when MD is selected - at least until mdadm is updated and the
updates are widely available.

Fixes: fbdee71bb5d8 ("block: deprecate autoloading based on dev_t")
Signed-off-by: NeilBrown <[email protected]>

diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 998a5cfdbc4e..662d219c39bf 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -16,6 +16,10 @@ if MD
 
 config BLK_DEV_MD
         tristate "RAID support"
         select BLOCK_HOLDER_DEPRECATED if SYSFS
+        # BLOCK_LEGACY_AUTOLOAD requirement should be removed
+        # after relevant mdadm enhancements - to make "names=yes"
+        # the default - are widely available.
+        select BLOCK_LEGACY_AUTOLOAD
         help
           This driver lets you combine several hard disk partitions into one
           logical block device. This can be used to simply append one



2023-03-07 08:53:21

by Mariusz Tkaczyk

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On Mon, 06 Mar 2023 08:21:07 +1100
"NeilBrown" <[email protected]> wrote:

> On Sat, 04 Mar 2023, Song Liu wrote:
> > + Jes.
> >
> > It appeared to me that we can assemble the array if we have any of the
> > following:
> > 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
> > 2. Have a valid /etc/mdadm.conf;
> > 3. Update mdadm to handle this case. (I tried some ugly hacks, which worked
> > but weren't clean).
> >
> > Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
> > think we need mdadm to handle this properly. But the logistics might
> > be complicated, as
> > mdadm are shipped separately.
> >
> > Jes, what do you think about this? AFAICT, we need to update the logic in
> > mdopen.c:create_mddev().
>
> mdadm already handles this, but only if
> CREATE names=yes
> is present in /etc/mdadm.conf

Hi,

"CREATE names=yes" enforces creation of /dev/md_name arrays instead of
/dev/mdXXX. It is a large change for users, too aggressive IMO. It will destroy
many setups.

To resolve it, we need to use create_named_array() but respect the old naming
convention. We already have find_free_devnm(), and we are able to keep
consistency because we can create 'mdXXX':

/sys/module/md_mod/parameters # echo md125 > new_array

/sys/module/md_mod/parameters # ll /sys/block/md125
lrwxrwxrwx 1 root root 0 Mar 7 10:54 /sys/block/md125 ->
../devices/virtual/block/md125

That will require adjustments in mdadm, but I think that we can keep the
names the same way. I created a test for verifying the basic creation flows;
we can use it to avoid regressions:
https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/tests/00createnames

Thoughts?

BTW. I wanted to get rid of this legacy "create_on_open" from mdadm anyway but
never had time to. If you agree, I can proceed with fixing it.

Thanks,
Mariusz

2023-03-07 20:03:08

by Jes Sorensen

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On 3/7/23 03:52, Mariusz Tkaczyk wrote:
> On Mon, 06 Mar 2023 08:21:07 +1100
> "NeilBrown" <[email protected]> wrote:
>
>> On Sat, 04 Mar 2023, Song Liu wrote:
>>> + Jes.
>>>
>>> It appeared to me that we can assemble the array if we have any of the
>>> following:
>>> 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
>>> 2. Have a valid /etc/mdadm.conf;
>>> 3. Update mdadm to handle this case. (I tried some ugly hacks, which worked
>>> but weren't clean).
>>>
>>> Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
>>> think we need mdadm to handle this properly. But the logistics might
>>> be complicated, as
>>> mdadm are shipped separately.
>>>
>>> Jes, what do you think about this? AFAICT, we need to update the logic in
>>> mdopen.c:create_mddev().
>>
>> mdadm already handles this, but only if
>> CREATE names=yes
>> is present in /etc/mdadm.conf
>
> Hi,
>
> "CREATE names=yes" enforces creation of /dev/md_name arrays instead of
> /dev/mdXXX. It is a large change for users, too aggressive IMO. It will destroy
> many setups.

This is my concern too. I find a lot of people set up their box with md
and then keep it running, but they don't necessarily update it as
frequently as their laptop etc. This could cause some unpleasant
surprises down the road.

> To resolve it, we need is to use create_named_array() but respect old naming
> convention. We already have find_free_devnm(), and we are able to keep
> consistency because we can create 'mdXXX':
>
> /sys/module/md_mod/parameters # echo md125 > new_array
>
> /sys/module/md_mod/parameters # ll /sys/block/md125
> lrwxrwxrwx 1 root root 0 Mar 7 10:54 /sys/block/md125 ->
> ../devices/virtual/block/md125
>
> That will require adjustments in mdadm, but I think that we can keep
> names the same way. I created the test for verification of base creation flows,
> we can use it to avoid regression:
> https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/tests/00createnames
>
> Thoughts?

I like this.

> BTW. I wanted to get rid of this legacy "create_on_open" from mdadm anyway but
> never had time to. If you agree, I can proceed with fixing it.

Oh yes

Cheers,
Jes


2023-03-07 22:36:20

by NeilBrown

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On Tue, 07 Mar 2023, Mariusz Tkaczyk wrote:
> On Mon, 06 Mar 2023 08:21:07 +1100
> "NeilBrown" <[email protected]> wrote:
>
> > On Sat, 04 Mar 2023, Song Liu wrote:
> > > + Jes.
> > >
> > > It appeared to me that we can assemble the array if we have any of the
> > > following:
> > > 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
> > > 2. Have a valid /etc/mdadm.conf;
> > > 3. Update mdadm to handle this case. (I tried some ugly hacks, which worked
> > > but weren't clean).
> > >
> > > Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
> > > think we need mdadm to handle this properly. But the logistics might
> > > be complicated, as
> > > mdadm are shipped separately.
> > >
> > > Jes, what do you think about this? AFAICT, we need to update the logic in
> > > mdopen.c:create_mddev().
> >
> > mdadm already handles this, but only if
> > CREATE names=yes
> > is present in /etc/mdadm.conf
>
> Hi,
>
> "CREATE names=yes" enforces creation of /dev/md_name arrays instead of
> /dev/mdXXX. It is a large change for users, too aggressive IMO. It will destroy
> many setups.
>
> To resolve it, we need is to use create_named_array() but respect old naming
> convention. We already have find_free_devnm(), and we are able to keep
> consistency because we can create 'mdXXX':
>
> /sys/module/md_mod/parameters # echo md125 > new_array
>
> /sys/module/md_mod/parameters # ll /sys/block/md125
> lrwxrwxrwx 1 root root 0 Mar 7 10:54 /sys/block/md125 ->
> ../devices/virtual/block/md125
>
> That will require adjustments in mdadm, but I think that we can keep
> names the same way. I created the test for verification of base creation flows,
> we can use it to avoid regression:
> https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/tests/00createnames
>
> Thoughts?

Yes, that is probably sensible.

It would be nice to have a way to expose the "names=yes" functionality
more obviously. I think people would generally prefer it (/proc/mdstat
becomes more meaningful) but I agree that forcing it on people is not
the best approach.

Maybe a super-block flag that enables it for that array, and we start
setting that flag when creating new arrays?

NeilBrown


>
> BTW. I wanted to get rid of this legacy "create_on_open" from mdadm anyway but
> never had time to. If you agree, I can proceed with fixing it.
>
> Thanks,
> Mariusz
>


2023-03-08 08:56:02

by Mariusz Tkaczyk

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On Wed, 08 Mar 2023 09:33:52 +1100
"NeilBrown" <[email protected]> wrote:

> On Tue, 07 Mar 2023, Mariusz Tkaczyk wrote:
> > On Mon, 06 Mar 2023 08:21:07 +1100
> > "NeilBrown" <[email protected]> wrote:
> >
> > > On Sat, 04 Mar 2023, Song Liu wrote:
> > > > + Jes.
> > > >
> > > > It appeared to me that we can assemble the array if we have any of the
> > > > following:
> > > > 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
> > > > 2. Have a valid /etc/mdadm.conf;
> > > > 3. Update mdadm to handle this case. (I tried some ugly hacks, which
> > > > worked but weren't clean).
> > > >
> > > > Since we eventually would like to get rid of
> > > > CONFIG_BLOCK_LEGACY_AUTOLOAD, I think we need mdadm to handle this
> > > > properly. But the logistics might be complicated, as
> > > > mdadm are shipped separately.
> > > >
> > > > Jes, what do you think about this? AFAICT, we need to update the logic
> > > > in mdopen.c:create_mddev().
> > >
> > > mdadm already handles this, but only if
> > > CREATE names=yes
> > > is present in /etc/mdadm.conf
> >
> > Hi,
> >
> > "CREATE names=yes" enforces creation of /dev/md_name arrays instead of
> > /dev/mdXXX. It is a large change for users, too aggressive IMO. It will
> > destroy many setups.
> >
> > To resolve it, we need is to use create_named_array() but respect old naming
> > convention. We already have find_free_devnm(), and we are able to keep
> > consistency because we can create 'mdXXX':
> >
> > /sys/module/md_mod/parameters # echo md125 > new_array
> >
> > /sys/module/md_mod/parameters # ll /sys/block/md125
> > lrwxrwxrwx 1 root root 0 Mar 7 10:54 /sys/block/md125 ->
> > ../devices/virtual/block/md125
> >
> > That will require adjustments in mdadm, but I think that we can keep
> > names the same way. I created the test for verification of base creation
> > flows, we can use it to avoid regression:
> > https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/tests/00createnames
> >
> > Thoughts?
>
> Yes, that is probably sensible.
>
> It would be nice to have a way to expose the "names=yes" functionality
> more obviously. I think people would generally prefer it (/proc/mdstat
> becomes more meaningful) but I agree that forcing it on people is not
> the best approach.
>
> Maybe a super-block flag that enables it for that array, and we start
> setting that flag when creating new arrays?
>

That is controversial. I don't feel that using metadata to resolve a Linux
issue is a good way to go. It will work for native metadata for sure, but
for IMSM it is not that simple.

I think that we can handle it via the config file.
When we set "ARRAY /dev/mdXXX", it is honored. I'm familiar with such
cases; that works. In such a case the link may or may not be generated, as
far as I remember, but... nobody cares whether the array name is meaningful,
and we can fix that too. It seems not to be a big deal.

I would like to propose doing the same for 'names'.
If there is no config, or the name is set to "/dev/md/name", it should work
as it does now. So, if someone needs backward compatibility, we can add a
flag to the --detail and --examine briefs that forces that, let's say
--use-link-as-name. By default an "ARRAY /dev/md_name" entry will be
generated.

My PoV is based on IMSM; there may be small differences between IMSM and
native, but I think that it applies everywhere.

Thanks,
Mariusz

2023-03-13 20:35:43

by Song Liu

Subject: Re: [regression] Bug 217074 - upgrading to kernel 6.1.12 from 5.15.x can no longer assemble software raid0

On Mon, Mar 6, 2023 at 2:35 PM NeilBrown <[email protected]> wrote:
>
> On Tue, 07 Mar 2023, Song Liu wrote:
> > On Sun, Mar 5, 2023 at 1:21 PM NeilBrown <[email protected]> wrote:
> > >
> > > On Sat, 04 Mar 2023, Song Liu wrote:
> > > > + Jes.
> > > >
> > > > It appeared to me that we can assemble the array if we have any of the
> > > > following:
> > > > 1. Enable CONFIG_BLOCK_LEGACY_AUTOLOAD;
> > > > 2. Have a valid /etc/mdadm.conf;
> > > > 3. Update mdadm to handle this case. (I tried some ugly hacks, which worked but
> > > > weren't clean).
> > > >
> > > > Since we eventually would like to get rid of CONFIG_BLOCK_LEGACY_AUTOLOAD, I
> > > > think we need mdadm to handle this properly. But the logistics might
> > > > be complicated, as
> > > > mdadm are shipped separately.
> > > >
> > > > Jes, what do you think about this? AFAICT, we need to update the logic in
> > > > mdopen.c:create_mddev().
> > >
> > > mdadm already handles this, but only if
> > > CREATE names=yes
> > > is present in /etc/mdadm.conf
> > >
> > > Maybe we should flip the default for the next mdadm release, and patch
> > > the kernel (with a stable backport) to select BLOCK_LEGACY_AUTOLOAD if
> > > BLK_DEV_MD=m
> > > Then revert that - say - 6 months after the new mdadm is released.
> >
> > I like this idea. I guess we also need to select BLOCK_LEGACY_AUTOLOAD
> > if BLK_DEV_MD=y?
>
> Yes of course - sorry.
>
> Something like the following.
>
> Thanks,
> NeilBrown

Thanks Neil!

I applied this to md-fixes.

Song

>
>
> From: NeilBrown <[email protected]>
> Subject: md: select BLOCK_LEGACY_AUTOLOAD
>
> When BLOCK_LEGACY_AUTOLOAD is not enabled, mdadm is not able to
> activate new arrays unless "CREATE names=yes" appears in
> mdadm.conf.
>
> As this is a regression we need to always enable BLOCK_LEGACY_AUTOLOAD
> when MD is selected - at least until mdadm is updated and the
> updates are widely available.
>
> Fixes: fbdee71bb5d8 ("block: deprecate autoloading based on dev_t")
> Signed-off-by: NeilBrown <[email protected]>
>
> diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
> index 998a5cfdbc4e..662d219c39bf 100644
> --- a/drivers/md/Kconfig
> +++ b/drivers/md/Kconfig
> @@ -16,6 +16,10 @@ if MD
>  
>  config BLK_DEV_MD
>          tristate "RAID support"
>          select BLOCK_HOLDER_DEPRECATED if SYSFS
> +        # BLOCK_LEGACY_AUTOLOAD requirement should be removed
> +        # after relevant mdadm enhancements - to make "names=yes"
> +        # the default - are widely available.
> +        select BLOCK_LEGACY_AUTOLOAD
>          help
>            This driver lets you combine several hard disk partitions into one
>            logical block device. This can be used to simply append one
>
>