2020-05-22 10:00:01

by Martijn Coenen

Subject: Writeback bug causing writeback stalls

Hi,

We've been working on implementing a FUSE filesystem in Android, and have
run into what appears to be a bug in the kernel writeback code. The problem
we observed is that an inode in the filesystem is on the b_dirty_time list,
but its i_state field has I_DIRTY_PAGES set, which I think means the inode is
on the wrong list. This condition doesn't correct itself even as new pages
are dirtied, because __mark_inode_dirty() has this check:

if ((inode->i_state & flags) != flags) {

before considering moving the inode to another list. Since the inode already
has I_DIRTY_PAGES set, we're not going to move it to the dirty list. I *think*
the only way the inode gets out of this condition is when the b_dirty_time
list is eventually processed, which can take a while.
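
For reference, the relevant logic in __mark_inode_dirty() looks roughly like
this (a condensed sketch of fs/fs-writeback.c around v5.7; superblock and
cgroup writeback details trimmed):

void __mark_inode_dirty(struct inode *inode, int flags)
{
        spin_lock(&inode->i_lock);
        if ((inode->i_state & flags) != flags) {
                const int was_dirty = inode->i_state & I_DIRTY;

                inode->i_state |= flags;
                /* ... I_SYNC early return, discussed below ... */

                /*
                 * The inode is only moved onto a writeback list if it
                 * was not dirty before. With a stale I_DIRTY_PAGES bit
                 * already set, was_dirty is always true and the inode
                 * stays where it is -- on b_dirty_time in our case.
                 */
                if (!was_dirty) {
                        /* ... move to b_dirty or b_dirty_time ... */
                }
        }
        spin_unlock(&inode->i_lock);
}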

The reason we even noticed this bug in the first place is that FUSE has a
very low wb max_ratio by default (1), so at some point processes get stuck in
balance_dirty_pages_ratelimited(), waiting for pages to be written back. They
hold the inode's write lock, so other processes that try to acquire it get
stuck as well. We have a watchdog that reboots the device after a task has
been stuck in D state for ~10 minutes, and this triggered regularly in some
tests.

After carefully studying the kernel code, I found a reliable repro scenario
for this condition, which is described in more detail below. But essentially
what I think is happening is this:

__mark_inode_dirty() has an early-return condition for when a sync is in
progress, where it updates the inode's i_state but not the writeback list:

        inode->i_state |= flags;

        /*
         * If the inode is being synced, just update its dirty state.
         * The unlocker will place the inode on the appropriate
         * superblock list, based upon its state.
         */
        if (inode->i_state & I_SYNC)
                goto out_unlock_inode;

Now, this comment holds for the generic flusher threads: they run
writeback_sb_inodes(), which calls requeue_inode() to move the inode back to
the correct wb list when the sync is done. However, there is another function
that uses I_SYNC: writeback_single_inode(). This function has a comment saying
it prefers not to touch the writeback lists, and in fact it only removes the
inode from its list when the inode is completely clean:

/*
* Skip inode if it is clean and we have no outstanding writeback in
* WB_SYNC_ALL mode. We don't want to mess with writeback lists in this
* function since flusher thread may be doing for example sync in
* parallel and if we move the inode, it could get skipped. So here we
* make sure inode is on some writeback list and leave it there unless
* we have completely cleaned the inode.
*/
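
In code, the tail of writeback_single_inode() does only this (condensed from
fs/fs-writeback.c, ~v5.7):

        /*
         * A completely clean inode is removed from whatever wb list it
         * is on; a still-dirty inode is left where it is -- possibly on
         * b_dirty_time, even though it now has I_DIRTY_PAGES set.
         */
        if (!(inode->i_state & I_DIRTY_ALL))
                inode_io_list_del_locked(inode, wb);
        spin_unlock(&wb->list_lock);
        inode_sync_complete(inode);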

writeback_single_inode() is called from a few places, in particular
write_inode_now(), which in turn is called by FUSE's flush() f_op.

So, the sequence of events is something like this. Let's assume the inode is
already on b_dirty_time for valid reasons. Then:

CPU1                                          CPU2
fuse_flush()
  write_inode_now()
    writeback_single_inode()
      sets I_SYNC
      __writeback_single_inode()
        writes back data
        clears inode dirty flags
        unlocks inode
        calls mark_inode_dirty_sync()
          sets I_DIRTY_SYNC, but doesn't
          update wb list because I_SYNC is
          still set
                                              write() // somebody else writes
                                              mark_inode_dirty(I_DIRTY_PAGES)
                                                sets I_DIRTY_PAGES on i_state
                                                doesn't update wb list,
                                                because I_SYNC set
      locks inode again
      sees inode is still dirty,
      doesn't touch WB list
      clears I_SYNC

So now we have an inode on b_dirty_time with I_DIRTY_PAGES | I_DIRTY_SYNC set,
and subsequent calls to mark_inode_dirty() with either I_DIRTY_PAGES or
I_DIRTY_SYNC will do nothing to change that. The flusher won't touch the
inode either, because it's not on a b_dirty or b_io list.

The easiest way to fix this, I think, is to call requeue_inode() at the end of
writeback_single_inode(), much like it is called from writeback_sb_inodes().
However, requeue_inode() has the following ominous warning:

/*
* Find proper writeback list for the inode depending on its current state and
* possibly also change of its state while we were doing writeback. Here we
* handle things such as livelock prevention or fairness of writeback among
* inodes. This function can be called only by flusher thread - noone else
* processes all inodes in writeback lists and requeueing inodes behind flusher
* thread's back can have unexpected consequences.
*/

Obviously this is very critical code both from a correctness and a performance
point of view, so I wanted to run this by the maintainers and folks who have
contributed to this code first.
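
Concretely, the naive version of that fix would be something like this at the
end of writeback_single_inode() (a sketch against ~v5.7 code; as the comment
above warns, it may well be unsafe):

        wb = inode_to_wb_and_lock_list(inode);
        spin_lock(&inode->i_lock);
        if (!(inode->i_state & I_DIRTY_ALL))
                inode_io_list_del_locked(inode, wb);
        else {
                /* sketch: put the still-dirty inode back on the list
                 * matching its current i_state instead of leaving it
                 * stranded on b_dirty_time */
                requeue_inode(inode, wb, wbc);
        }
        spin_unlock(&wb->list_lock);
        inode_sync_complete(inode);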

The way I reproduced this reliably was by using what is pretty much a
pass-through FUSE filesystem, and running the following two commands in
parallel:

[1] fio --rw=write --size=5G -blocksize=80000 --name=test --directory=/sdcard/

[2] while true; do echo flushme >> /sdcard/test.0.0; sleep 0.1; done

I doubt the block size matters; it's just the value I observed being used in
one of our test runs that hit this. [2] essentially calls fuse_flush() every
100ms, which is often enough to reproduce this problem within seconds: fio
will stall and enter balance_dirty_pages_ratelimited(), and [2] will hang
because it needs the inode write lock.

Other filesystems may hit the same problem, though write_inode_now() is
usually only called when no more dirty pages are expected (e.g. in the final
iput()). There are other, more widely used functions that call
writeback_single_inode(), like sync_inode() and sync_inode_metadata().

Curious to hear your thoughts on this. I'm happy to provide more info or
traces if needed.

Thanks,
Martijn


2020-05-22 14:42:57

by Jan Kara

Subject: Re: Writeback bug causing writeback stalls

Hi!

On Fri 22-05-20 11:57:42, Martijn Coenen wrote:
<snip>

> So, the sequence of events is something like this. Let's assume the inode is
> already on b_dirty_time for valid reasons. Then:
>
> CPU1                                          CPU2
> fuse_flush()
>   write_inode_now()
>     writeback_single_inode()
>       sets I_SYNC
>       __writeback_single_inode()
>         writes back data
>         clears inode dirty flags
>         unlocks inode
>         calls mark_inode_dirty_sync()
>           sets I_DIRTY_SYNC, but doesn't
>           update wb list because I_SYNC is
>           still set
>                                               write() // somebody else writes
>                                               mark_inode_dirty(I_DIRTY_PAGES)
>                                                 sets I_DIRTY_PAGES on i_state
>                                                 doesn't update wb list,
>                                                 because I_SYNC set
>       locks inode again
>       sees inode is still dirty,
>       doesn't touch WB list
>       clears I_SYNC
>
> So now we have an inode on b_dirty_time with I_DIRTY_PAGES | I_DIRTY_SYNC set,
> and subsequent calls to mark_inode_dirty() with either I_DIRTY_PAGES or
> I_DIRTY_SYNC will do nothing to change that. The flusher won't touch the
> inode either, because it's not on a b_dirty or b_io list.

Thanks for the detailed analysis and explanation! I agree that what you
describe is a bug in the writeback code.

> The easiest way to fix this, I think, is to call requeue_inode() at the end of
> writeback_single_inode(), much like it is called from writeback_sb_inodes().
> However, requeue_inode() has the following ominous warning:
>
> /*
> * Find proper writeback list for the inode depending on its current state and
> * possibly also change of its state while we were doing writeback. Here we
> * handle things such as livelock prevention or fairness of writeback among
> * inodes. This function can be called only by flusher thread - noone else
> * processes all inodes in writeback lists and requeueing inodes behind flusher
> * thread's back can have unexpected consequences.
> */
>
> Obviously this is very critical code both from a correctness and a performance
> point of view, so I wanted to run this by the maintainers and folks who have
> contributed to this code first.

Sadly, the fix won't be so easy. The main problem with calling
requeue_inode() from writeback_single_inode() is that if there's a parallel
sync(2) call, inode->i_io_list is used to track all inodes that need writing
before sync(2) can complete. So requeueing inodes in parallel while sync(2)
runs can break its data integrity guarantees. But I agree we need to find
some mechanism to safely move the inode to the appropriate dirty list
reasonably quickly.
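
Schematically, the constraint is (same two-column notation as the diagram
above; an illustration, not code from the kernel):

flusher doing sync(2) work                    writeback_single_inode()
queue_io() moves all dirty inodes to b_io
writes out inodes from b_io one by one
                                              requeue_inode() moves an inode
                                              from b_io back to b_dirty,
                                              behind the flusher's back
b_io runs empty, sync(2) returns --
but the requeued inode was never
written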

Probably I'd add an inode state flag telling that the inode is queued for
writeback by the flush worker, and we won't touch the dirty lists in that
case; otherwise we are safe to update the current writeback list as needed.
I'll work on fixing this, since while reading the code I've noticed there
are other quirks in it as well. Thanks for the report!
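
To sketch the idea (illustration only - the flag name and bit chosen here
are made up, and the real patch may look different):

        /* inode is on b_io/b_more_io, owned by the flusher */
        #define I_SYNC_QUEUED   (1 << 17)

        /* flusher side: set while it has the inode queued for writeback */
        inode->i_state |= I_SYNC_QUEUED;

        /* __mark_inode_dirty() etc.: */
        if (!(inode->i_state & I_SYNC_QUEUED)) {
                /* not queued by the flusher: safe to move the inode to
                 * the dirty list matching its new state */
        }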

Honza

--
Jan Kara <[email protected]>
SUSE Labs, CR

2020-05-22 15:27:40

by Martijn Coenen

Subject: Re: Writeback bug causing writeback stalls

[ dropped [email protected] from CC: since that list
can't receive emails from outside google.com - sorry about that ]

Hi Jan,

On Fri, May 22, 2020 at 4:41 PM Jan Kara <[email protected]> wrote:
> > <snip>
>
> Sadly, the fix won't be so easy. The main problem with calling
> requeue_inode() from writeback_single_inode() is that if there's a parallel
> sync(2) call, inode->i_io_list is used to track all inodes that need writing
> before sync(2) can complete. So requeueing inodes in parallel while sync(2)
> runs can break its data integrity guarantees.

Ah, makes sense.

> But I agree we need to find some mechanism to safely move the inode to the
> appropriate dirty list reasonably quickly.
>
> Probably I'd add an inode state flag telling that the inode is queued for
> writeback by the flush worker, and we won't touch the dirty lists in that
> case; otherwise we are safe to update the current writeback list as needed.
> I'll work on fixing this, since while reading the code I've noticed there
> are other quirks in it as well. Thanks for the report!

Thanks! While looking at the code I also saw some other paths that
appeared to be racy, though I haven't worked them out in detail to
confirm that - the locking around the inode and writeback lists is
tricky. What's the best way to follow up on those? Happy to post them
to this same thread after I spend a bit more time looking at the code.

Thanks,
Martijn



2020-05-22 15:38:36

by Jan Kara

Subject: Re: Writeback bug causing writeback stalls

On Fri 22-05-20 17:23:30, Martijn Coenen wrote:
> [ dropped [email protected] from CC: since that list
> can't receive emails from outside google.com - sorry about that ]
>
> Hi Jan,
>
> On Fri, May 22, 2020 at 4:41 PM Jan Kara <[email protected]> wrote:
> > > <snip>
> >
> > Sadly, the fix won't be so easy. The main problem with calling
> > requeue_inode() from writeback_single_inode() is that if there's a parallel
> > sync(2) call, inode->i_io_list is used to track all inodes that need writing
> > before sync(2) can complete. So requeueing inodes in parallel while sync(2)
> > runs can break its data integrity guarantees.
>
> Ah, makes sense.
>
> > But I agree we need to find some mechanism to safely move the inode to
> > the appropriate dirty list reasonably quickly.
> >
> > Probably I'd add an inode state flag telling that the inode is queued for
> > writeback by the flush worker, and we won't touch the dirty lists in that
> > case; otherwise we are safe to update the current writeback list as needed.
> > I'll work on fixing this, since while reading the code I've noticed there
> > are other quirks in it as well. Thanks for the report!
>
> Thanks! While looking at the code I also saw some other paths that
> appeared to be racy, though I haven't worked them out in detail to
> confirm that - the locking around the inode and writeback lists is
> tricky. What's the best way to follow up on those? Happy to post them
> to this same thread after I spend a bit more time looking at the code.

Sure, if you are aware of some other problems, just write them to this
thread. FWIW, stuff that I've found so far:

1) __I_DIRTY_TIME_EXPIRED setting in move_expired_inodes() can get lost as
there are other places doing RMW modifications of inode->i_state.

2) sync(2) is prone to livelocks: when we queue inodes from the b_dirty_time
list, we don't take dirtied_when into account (and that's the only thing that
makes sure an aggressive dirtier cannot livelock sync).
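
For (1), the pattern is a classic lost update; schematically (same two-column
notation as the diagram above, illustration only):

CPU1 (move_expired_inodes(),                  CPU2 (any path doing an RMW
holds wb->list_lock, not i_lock)              update of i_state under i_lock)
                                              reads inode->i_state
inode->i_state |= I_DIRTY_TIME_EXPIRED;
                                              writes back the stale value with
                                              its own flag OR-ed in -- the
                                              EXPIRED bit is lost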

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2020-05-23 08:18:23

by Martijn Coenen

Subject: Re: Writeback bug causing writeback stalls

Jaegeuk wondered whether callers of write_inode_now() should hold
i_rwsem, and whether that would also prevent this problem. Some
existing callers of write_inode_now() do, e.g. ntfs and hfs:

hfs_file_fsync()
        inode_lock(inode);

        /* sync the inode to buffers */
        ret = write_inode_now(inode, 0);

but there are also some that don't (e.g. fat, fuse, orangefs).

Thanks,
Martijn


On Fri, May 22, 2020 at 5:36 PM Jan Kara <[email protected]> wrote:
> <snip>

2020-05-25 07:36:53

by Jan Kara

Subject: Re: Writeback bug causing writeback stalls

On Sat 23-05-20 10:15:20, Martijn Coenen wrote:
> Jaegeuk wondered whether callers of write_inode_now() should hold
> i_rwsem, and whether that would also prevent this problem. Some
> existing callers of write_inode_now() do, eg ntfs and hfs:
>
> hfs_file_fsync()
> inode_lock(inode);
>
> /* sync the inode to buffers */
> ret = write_inode_now(inode, 0);
>
> but there are also some that don't (eg fat, fuse, orangefs).

Well, most importantly, filesystems like ext4, xfs, and btrfs don't hold
i_rwsem when writing back an inode, and that's deliberate for performance
reasons. We don't want to block writes (or even reads, in the case of XFS)
for the inode during writeback.

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2020-05-25 08:14:03

by Jan Kara

Subject: Re: Writeback bug causing writeback stalls

On Sun 24-05-20 22:05:22, Hillf Danton wrote:
>
> On Fri, 22 May 2020 11:57:42 +0200 Martijn Coenen wrote:
> >
> > <snip>

Hi Hillf,

> Based on the above analysis, a check of I_DIRTY_TIME is added before and
> after calling __writeback_single_inode() to detect the case you reported.
>
> If a dirty inode is not on the right io list after writeback, we can
> move it to a new one; and we can do that as we are the I_SYNC owner.
>
> While changing its io list, the inode's dirty timestamp is also updated
> to the current tick, as is done in __mark_inode_dirty().

Apparently you didn't read my reply to Martijn: what you did in this patch
is exactly what I described we cannot do, because it can cause sync(2) to
miss inodes and thus break its data integrity guarantees. So we have to come
up with a different solution.

Honza

> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -1528,6 +1528,7 @@ static int writeback_single_inode(struct inode *inode,
>                                    struct writeback_control *wbc)
>  {
>         struct bdi_writeback *wb;
> +       bool dt;
>         int ret = 0;
>  
>         spin_lock(&inode->i_lock);
> @@ -1560,6 +1561,7 @@ static int writeback_single_inode(struct inode *inode,
>              !mapping_tagged(inode->i_mapping, PAGECACHE_TAG_WRITEBACK)))
>                 goto out;
>         inode->i_state |= I_SYNC;
> +       dt = inode->i_state & I_DIRTY_TIME;
>         wbc_attach_and_unlock_inode(wbc, inode);
>  
>         ret = __writeback_single_inode(inode, wbc);
> @@ -1574,6 +1576,14 @@ static int writeback_single_inode(struct inode *inode,
>          */
>         if (!(inode->i_state & I_DIRTY_ALL))
>                 inode_io_list_del_locked(inode, wb);
> +       else if (!(inode->i_state & I_DIRTY_TIME) && dt) {
> +               /*
> +                * We can correct inode's io list, however, by moving it to
> +                * b_dirty from b_dirty_time as we are the I_SYNC owner
> +                */
> +               inode->dirtied_when = jiffies;
> +               inode_io_list_move_locked(inode, wb, &wb->b_dirty);
> +       }
>         spin_unlock(&wb->list_lock);
>         inode_sync_complete(inode);
>  out:
> --
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2020-05-27 10:39:40

by Martijn Coenen

Subject: Re: Writeback bug causing writeback stalls

Hi Jan,

On Mon, May 25, 2020 at 9:31 AM Jan Kara <[email protected]> wrote:
> Well, most importantly, filesystems like ext4, xfs, and btrfs don't hold
> i_rwsem when writing back an inode, and that's deliberate for performance
> reasons. We don't want to block writes (or even reads, in the case of XFS)
> for the inode during writeback.

Thanks for clarifying, that makes sense. By the way, do you have an
ETA for your fix? We are under some time pressure to get this fixed in
our downstream kernels, but I'd much rather take a fix from upstream
from somebody who knows this code well. Alternatively, I can take a
stab at the idea you proposed and send a patch to LKML for review this
week.

Thanks,
Martijn



2020-05-29 15:25:55

by Jan Kara

Subject: Re: Writeback bug causing writeback stalls

Hello Martijn!

On Wed 27-05-20 10:14:09, Martijn Coenen wrote:
> On Mon, May 25, 2020 at 9:31 AM Jan Kara <[email protected]> wrote:
> > Well, most importantly, filesystems like ext4, xfs, and btrfs don't hold
> > i_rwsem when writing back an inode, and that's deliberate for performance
> > reasons. We don't want to block writes (or even reads, in the case of XFS)
> > for the inode during writeback.
>
> Thanks for clarifying, that makes sense. By the way, do you have an
> ETA for your fix? We are under some time pressure to get this fixed in
> our downstream kernels, but I'd much rather take a fix from upstream
> from somebody who knows this code well. Alternatively, I can take a
> stab at the idea you proposed and send a patch to LKML for review this
> week.

I understand. I have written a fix (attached). Currently it's under testing
together with other cleanups. If everything works fine, I plan to submit
the patches on Monday.

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR


Attachments:
0001-writeback-Avoid-skipping-inode-writeback.patch (7.87 kB)

2020-05-29 19:40:56

by Martijn Coenen

Subject: Re: Writeback bug causing writeback stalls

Hi Jan,

On Fri, May 29, 2020 at 5:20 PM Jan Kara <[email protected]> wrote:
> I understand. I have written a fix (attached). Currently it's under testing
> together with other cleanups. If everything works fine, I plan to submit
> the patches on Monday.

Thanks a lot for the quick fix! I ran my usual way to reproduce the
problem, and did not see it, so that's good! I do observe write speed
dips - e.g. we usually sustain 180 MB/s on this device, but now it
regularly dips down to 10 MB/s, then jumps back up again. That might
be unrelated to your patch though, I will run more tests over the
weekend and report back!

Best,
Martijn


2020-06-01 09:12:10

by Jan Kara

Subject: Re: Writeback bug causing writeback stalls

On Fri 29-05-20 21:37:50, Martijn Coenen wrote:
> Hi Jan,
>
> On Fri, May 29, 2020 at 5:20 PM Jan Kara <[email protected]> wrote:
> > I understand. I have written a fix (attached). Currently it's under testing
> > together with other cleanups. If everything works fine, I plan to submit
> > the patches on Monday.
>
> Thanks a lot for the quick fix! I ran my usual way to reproduce the
> problem, and did not see it, so that's good! I do observe write speed
> dips - e.g. we usually sustain 180 MB/s on this device, but now it
> regularly dips down to 10 MB/s, then jumps back up again. That might
> be unrelated to your patch though, I will run more tests over the
> weekend and report back!

Thanks for testing! My test run has completed fine, so I'll submit the
patches for review. But I'm curious what's causing the dips in throughput
in your test...

Honza

--
Jan Kara <[email protected]>
SUSE Labs, CR

2020-06-02 12:18:29

by Martijn Coenen

Subject: Re: Writeback bug causing writeback stalls

On Mon, Jun 1, 2020 at 11:09 AM Jan Kara <[email protected]> wrote:
> Thanks for testing! My test run has completed fine so I'll submit patches
> for review. But I'm curious what's causing the dips in throughput in your
> test...

It turned out to be unrelated to your patch. Sorry for the noise! We
have the patch in dogfood on some of our devices, and I will let you
know if we run into any issues. I'll also spend some more time
reviewing your patches and will respond to them later.

Thanks,
Martijn