Hi David,
Two weeks ago, I updated a cluster of web servers to Linux kernel
5.17.1 (5.16.x previously), which includes your rewrite of the fscache
code.
In the last few days, there were numerous complaints about broken
WordPress installations after WordPress was updated. There were
PHP syntax errors everywhere.
Indeed there were broken PHP files, but the interesting part is: the
corruptions were present on only one of the web servers; the others
were fine.
File size, time stamp and everything in "stat" are identical; only the
file contents are corrupted, and they look like a mix of old and new
contents. The corruptions always started at multiples of 4096 bytes.
An example diff:
--- ok/wp-includes/media.php 2022-04-06 05:51:50.000000000 +0200
+++ broken/wp-includes/media.php 2022-04-06 05:51:50.000000000 +0200
@@ -5348,7 +5348,7 @@
/**
* Filters the threshold for how many of the first content media elements to not lazy-load.
*
- * For these first content media elements, the `loading` attribute will be omitted. By default, this is the case
+ * For these first content media elements, the `loading` efault, this is the case
* for only the very first content media element.
*
* @since 5.9.0
@@ -5377,3 +5377,4 @@
return $content_media_count;
}
+^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
The corruption can be explained by WordPress commit
https://github.com/WordPress/WordPress/commit/07855db0ee8d5cff2 which
makes the file 31 bytes longer (185055 -> 185086). The "broken" web
server sees the new contents until offset 184320 (= 45 * 4096), but
sees the old contents from there on; followed by 31 null bytes
(because the kernel reads past the end of the cache?).
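For reference, a quick way to confirm where the first difference sits
relative to the 4096-byte page size is a sketch like the following
(using the same two copies as in the diff above):

# print the first differing byte and its position relative to 4096-byte pages
cmp -l ok/wp-includes/media.php broken/wp-includes/media.php | head -n1 \
  | awk '{o=$1-1; printf "first diff at %d = %d*4096 + %d\n", o, int(o/4096), o%4096}'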
All web servers mount the same storage via NFSv3 with fscache.
My suspicion is that this is caused by a fscache regression in Linux
5.17. What do you think?
What can I do to debug this further, is there any information you
need? I don't know much about how fscache works internally and how to
obtain information.
Max
[TLDR: I'm adding the regression report below to regzbot, the Linux
kernel regression tracking bot; all text you find below is compiled from
a few template paragraphs you might have encountered already in similar
mails.]
Hi, this is your Linux kernel regression tracker. CCing the regression
mailing list, as it should be in the loop for all regressions, as
explained here:
https://www.kernel.org/doc/html/latest/admin-guide/reporting-issues.html
On 12.04.22 17:10, Max Kellermann wrote:
> Hi David,
>
> Two weeks ago, I updated a cluster of web servers to Linux kernel
> 5.17.1 (5.16.x previously), which includes your rewrite of the fscache
> code.
>
> In the last few days, there were numerous complaints about broken
> WordPress installations after WordPress was updated. There were
> PHP syntax errors everywhere.
>
> Indeed there were broken PHP files, but the interesting part is: the
> corruptions were present on only one of the web servers; the others
> were fine.
>
> File size, time stamp and everything in "stat" are identical; only
> the file contents are corrupted, and they look like a mix of old and
> new contents. The corruptions always started at multiples of 4096 bytes.
>
> An example diff:
>
> --- ok/wp-includes/media.php 2022-04-06 05:51:50.000000000 +0200
> +++ broken/wp-includes/media.php 2022-04-06 05:51:50.000000000 +0200
> @@ -5348,7 +5348,7 @@
> /**
> * Filters the threshold for how many of the first content media elements to not lazy-load.
> *
> - * For these first content media elements, the `loading` attribute will be omitted. By default, this is the case
> + * For these first content media elements, the `loading` efault, this is the case
> * for only the very first content media element.
> *
> * @since 5.9.0
> @@ -5377,3 +5377,4 @@
>
> return $content_media_count;
> }
> +^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
>
> The corruption can be explained by WordPress commit
> https://github.com/WordPress/WordPress/commit/07855db0ee8d5cff2 which
> makes the file 31 bytes longer (185055 -> 185086). The "broken" web
> server sees the new contents until offset 184320 (= 45 * 4096), but
> sees the old contents from there on; followed by 31 null bytes
> (because the kernel reads past the end of the cache?).
>
> All web servers mount the same storage via NFSv3 with fscache.
>
> My suspicion is that this is caused by a fscache regression in Linux
> 5.17. What do you think?
>
> What can I do to debug this further, is there any information you
> need? I don't know much about how fscache works internally and how to
> obtain information.
Thx for the report. Maybe a bisection is what's needed here, but let's
see what David says, maybe he has an idea already.
To be sure the issue below doesn't fall through the cracks unnoticed,
I'm adding it to regzbot, my Linux kernel regression tracking bot:
#regzbot ^introduced v5.16..v5.17
#regzbot title fscache: file contents are corrupted
#regzbot ignore-activity
If it turns out this isn't a regression, feel free to remove it from the
tracking by sending a reply to this thread containing a paragraph like
"#regzbot invalid: reason why this is invalid" (without the quotes).
Reminder for developers: when fixing the issue, please add a 'Link:'
tag pointing to the report (the mail quoted above) using
lore.kernel.org/r/, as explained in
'Documentation/process/submitting-patches.rst' and
'Documentation/process/5.Posting.rst'. Regzbot needs such tags to
automatically connect reports with fixes, but they are useful in
general, too.
I'm sending this to everyone that got the initial report, to make
everyone aware of the tracking. I also hope that messages like this
motivate people to directly involve at least the regression mailing list
and ideally even regzbot when dealing with regressions, as messages like
this then wouldn't be needed. And don't worry: if I need to send other
mails regarding this regression that are only relevant for regzbot, I'll
send them to the regressions list only (with a tag in the subject so
people can filter them away). With a bit of luck no such messages will
be needed anyway.
Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
P.S.: As the Linux kernel's regression tracker I'm getting a lot of
reports on my table. I can only look briefly into most of them and lack
knowledge about most of the areas they concern. I thus unfortunately
will sometimes get things wrong or miss something important. I hope
that's not the case here; if you think it is, don't hesitate to tell me
in a public reply, it's in everyone's interest to set the public record
straight.
--
Additional information about regzbot:
If you want to know more about regzbot, check out its web-interface, the
getting started guide, and the reference documentation:
https://linux-regtracking.leemhuis.info/regzbot/
https://gitlab.com/knurd42/regzbot/-/blob/main/docs/getting_started.md
https://gitlab.com/knurd42/regzbot/-/blob/main/docs/reference.md
The last two documents will explain how you can interact with regzbot
yourself if you want to.
Hint for reporters: when reporting a regression, it's in your interest
to CC the regression list and tell regzbot about the issue, as that
ensures the regression makes it onto the radar of the Linux kernel's
regression tracker and your report won't fall through the cracks
unnoticed.
Hint for developers: you normally don't need to care about regzbot once
it's involved. Fix the issue as you normally would, just remember to
include a 'Link:' tag in the patch description pointing to all reports
about the issue. This has been expected from developers even before
regzbot showed up, for reasons explained in
'Documentation/process/submitting-patches.rst' and
'Documentation/process/5.Posting.rst'.
On 2022/04/16 13:38, Thorsten Leemhuis <[email protected]> wrote:
> Thx for the report. Maybe a bisection is what's needed here, but let's
> see what David says, maybe he has an idea already.
I wish I could do that, but it's very hard to reproduce; the first
reports came after a week or so. At that rate, a bisect would take
months. So yes, let's wait for David; he might have a clue how to
trigger the problem more quickly and make a bisect practical.
On 2022/04/19 17:56, David Howells <[email protected]> wrote:
> This is weird. It looks like content got slid down by 31 bytes and 31 zero
> bytes got added at the end. I'm not sure how fscache would achieve that -
> nfs's implementation should only be dealing with pages.
Did you read this part of my email?
On 2022/04/12 17:10, Max Kellermann <[email protected]> wrote:
> The corruption can be explained by WordPress commit
> https://github.com/WordPress/WordPress/commit/07855db0ee8d5cff2 which
> makes the file 31 bytes longer (185055 -> 185086). The "broken" web
> server sees the new contents until offset 184320 (= 45 * 4096), but
> sees the old contents from there on; followed by 31 null bytes
> (because the kernel reads past the end of the cache?).
My theory was that fscache shows a mix of old and new pages after the
file was modified. Does this make sense?
Is there anything I can do to give you data from this server's cache?
Max Kellermann <[email protected]> wrote:
> At least one web server is still in this broken state right now. So
> if you need anything from that server, tell me, and I'll get it.
Can you turn on:
echo 65536 >/sys/kernel/debug/tracing/buffer_size_kb
echo 1 >/sys/kernel/debug/tracing/events/cachefiles/cachefiles_read/enable
echo 1 >/sys/kernel/debug/tracing/events/cachefiles/cachefiles_write/enable
echo 1 >/sys/kernel/debug/tracing/events/cachefiles/cachefiles_trunc/enable
echo 1 >/sys/kernel/debug/tracing/events/cachefiles/cachefiles_io_error/enable
echo 1 >/sys/kernel/debug/tracing/events/cachefiles/cachefiles_vfs_error/enable
Then try and trigger the bug if you can. The trace can be viewed with:
cat /sys/kernel/debug/tracing/trace | less
The problem very likely happens on write rather than read. If you know of a
file that's corrupt, turn on the tracing above and read that file. Then look
in the trace buffer: you should see the corresponding lines, and they should
have the backing inode in them, marked "B=iiii", where "iiii" is the inode
number of the file in hex. You should be able to examine the backing file by
finding it with something like:
find /var/cache/fscache -inum $((0xiiii))
and see if you can see the corruption in there. Note that there may be blocks
of zeroes corresponding to unfetched file blocks.
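Putting those steps together, a rough sketch might look like this
(assuming the cache lives under /var/cache/fscache; the path of the
corrupt file is only a placeholder, and 0xiiii stands for the hex inode
number printed in the trace):

# read a known-corrupt file to generate cachefiles_read trace lines
cat /path/to/corrupt/file >/dev/null

# pull the backing inode ("B=iiii", hex) out of the trace buffer
grep cachefiles_read /sys/kernel/debug/tracing/trace | grep -o 'B=[0-9a-f]*'

# locate the backing file in the cache, substituting the value printed above
find /var/cache/fscache -inum $((0xiiii))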
Also, what filesystem is backing your cachefiles cache? It could be useful to
dump the extent list of the file. You should be able to do this with
"filefrag -e".
As to why this happens, a write that's misaligned by 31 bytes should cause DIO
to a disk to fail - so it shouldn't be possible to write that. However, I'm
doing fallocate and truncate on the file to shape it so that DIO will work on
it, so it's possible that there's a bug there. The cachefiles_trunc trace
lines may help catch that.
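As a rough illustration of that constraint (using a hypothetical scratch
file on a local filesystem, not the cache itself), a direct-I/O write
whose length isn't a multiple of the logical block size would normally
be rejected:

# expected to fail with EINVAL ("Invalid argument") on most filesystems,
# because 31 bytes is not a multiple of the logical block size
dd if=/dev/zero of=/tmp/dio-test bs=31 count=1 oflag=direct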
David
On 2022/04/19 18:17, David Howells <[email protected]> wrote:
> find /var/cache/fscache -inum $((0xiiii))
>
> and see if you can see the corruption in there. Note that there may be blocks
> of zeroes corresponding to unfetched file blocks.
I checked several known-corrupt files, but unfortunately, all of the
corruption has disappeared :-(
The /var/cache/fscache/ files have a time stamp from half an hour ago
(17:53 CEST = 15:53 GMT). I don't know what happened at that time -
too bad the corruption disappeared after a week, just when we started
investigating it.
All those new files are all-zero. No data is stored in any of them.
Note that I had to enable
/sys/kernel/debug/tracing/events/cachefiles/enable; the trace events
you named (read/write/trunc/io_error/vfs_error) did not emit anything.
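In other words, the only thing that produced output was enabling the
whole event group:

echo 1 >/sys/kernel/debug/tracing/events/cachefiles/enable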
This is what I see:
kworker/u98:11-1446185 [016] ..... 1813913.318370: cachefiles_ref: c=00014bd5 o=12080f1c u=1 NEW obj
kworker/u98:11-1446185 [016] ..... 1813913.318379: cachefiles_lookup: o=12080f1c dB=3e01ee B=3e5580 e=0
kworker/u98:11-1446185 [016] ..... 1813913.318380: cachefiles_mark_active: o=12080f1c B=3e5580
kworker/u98:11-1446185 [016] ..... 1813913.318401: cachefiles_coherency: o=12080f1c OK B=3e5580 c=0
kworker/u98:11-1446185 [016] ..... 1813913.318402: cachefiles_ref: c=00014bd5 o=12080f1c u=1 SEE lookup_cookie
> Also, what filesystem is backing your cachefiles cache? It could be useful to
> dump the extent list of the file. You should be able to do this with
> "filefrag -e".
It's ext4.
Filesystem type is: ef53
File size of /var/cache/fscache/cache/Infs,3.0,2,,a4214ac,c0000208,,,3002c0,10000,10000,12c,1770,bb8,1770,1/@58/T,c0000208,,1cf4167,184558d9,c0000208,,40,36bab37,40, is 188416 (46 blocks of 4096 bytes)
/var/cache/fscache/cache/Infs,3.0,2,,a4214ac,c0000208,,,3002c0,10000,10000,12c,1770,bb8,1770,1/@58/T,c0000208,,1cf4167,184558d9,c0000208,,40,36bab37,40,: 0 extents found
File size of /var/cache/fscache/cache/Infs,3.0,2,,a4214ac,c0000208,,,3002c0,10000,10000,12c,1770,bb8,1770,1/@ea/T,c0000208,,10cc976,1208c7f6,c0000208,,40,36bab37,40, is 114688 (28 blocks of 4096 bytes)
/var/cache/fscache/cache/Infs,3.0,2,,a4214ac,c0000208,,,3002c0,10000,10000,12c,1770,bb8,1770,1/@ea/T,c0000208,,10cc976,1208c7f6,c0000208,,40,36bab37,40,: 0 extents found
> As to why this happens, a write that's misaligned by 31 bytes should cause DIO
> to a disk to fail - so it shouldn't be possible to write that. However, I'm
> doing fallocate and truncate on the file to shape it so that DIO will work on
> it, so it's possible that there's a bug there. The cachefiles_trunc trace
> lines may help catch that.
I don't think any write is misaligned. This was triggered by a
WordPress update, so I think the WordPress updater truncated and
rewrote all files. Random guess: some pages got transferred to the
NFS server, but the local copy in fscache did not get updated.
Max
Max Kellermann <[email protected]> wrote:
> Did you read this part of my email?:
Sorry, I'm trying to deal with several things at once.
> My theory was that fscache shows a mix of old and new pages after the
> file was modified. Does this make sense?
Okay - that makes a bit more sense.
Could the file have been modified by a third party? If you're using NFS3
there's a problem if two clients can modify a file at the same time. The
second write can mask the first write and the client has no way to detect it.
The problem is inherent to the protocol design. The NFS2 and NFS3 protocols
don't support anything better than {ctime,mtime,filesize} - the change
attribute only becomes available with NFS4.
If an NFS file is opened for writing locally, the cache for it is
supposed to be invalidated and remain unused until there are no open
file descriptors left referring to it. This is intended for handling DIO
writes, but it should serve for this case also.
The following might be of use in checking if the invalidation happens locally:
echo 1 >/sys/kernel/debug/tracing/events/fscache/fscache_invalidate/enable
And then this can be used to check if it correctly identifies that it has an
obsolete version of the file in the cache when it binds to it:
echo 1 >/sys/kernel/debug/tracing/events/cachefiles/cachefiles_coherency/enable
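For example, one might watch both event streams while rewriting a
cached file on the NFS mount (the mount point and file name below are
only illustrations):

# follow the two events live while a cached file is modified locally
cat /sys/kernel/debug/tracing/trace_pipe | grep -E 'fscache_invalidate|cachefiles_coherency'

# in another shell, rewrite a file that is already in the cache
echo test >>/mnt/nfs/example.php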
David
Max Kellermann <[email protected]> wrote:
> - * For these first content media elements, the `loading` attribute will be omitted. By default, this is the case
> + * For these first content media elements, the `loading` efault, this is the case
> * for only the very first content media element.
> *
> * @since 5.9.0
> @@ -5377,3 +5377,4 @@
>
> return $content_media_count;
> }
> +^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
This is weird. It looks like content got slid down by 31 bytes and 31 zero
bytes got added at the end. I'm not sure how fscache would achieve that -
nfs's implementation should only be dealing with pages.
David
Max Kellermann <[email protected]> wrote:
> two weeks ago, I updated a cluster of web servers to Linux kernel
> 5.17.1 (5.16.x previously) which includes your rewrite of the fscache
> code.
I presume you are actually using a cache?
David
On 2022/04/19 18:42, David Howells <[email protected]> wrote:
> Could the file have been modified by a third party?
According to our support tickets, the customers used WordPress's
built-in updater, which resulted in corrupt PHP sources.
We have configured stickiness in the load balancer; HTTP requests to
one website always go through the same web server. This implies that
the web server that saw the corrupt files was the very same one that
wrote the new file contents. This part surprises me, because writing a
page to the NFS server should update (or flush/invalidate) the old
cache page. It would be easy for a *different* NFS client to miss out
on updated file contents, but this is not what happened.
On 2022/04/19 18:47, David Howells <[email protected]> wrote:
> Do the NFS servers change the files that are being served - or is it
> just WordPress pushing the changes to the NFS servers for the web
> servers to then export?
I'm not sure if I understand this question correctly. The NFS server
(a NetApp, btw.) sees the new file contents correctly; all other web
servers also see non-corrupt new files. Only the one web server which
performed the update saw broken files.
Max
On 2022/04/19 15:02, David Howells <[email protected]> wrote:
> I presume you are actually using a cache?
Yes, see:
On 2022/04/12 17:10, Max Kellermann <[email protected]> wrote:
> All web servers mount the same storage via NFSv3 with fscache.
At least one web server is still in this broken state right now. So
if you need anything from that server, tell me, and I'll get it.
I will need to downgrade to 5.16 tomorrow to get rid of the corruption
bug (I've delayed this for a week, waiting for your reply). After
tomorrow, I can no longer help debugging this.
Max
Max Kellermann <[email protected]> wrote:
> I don't think any write is misaligned. This was triggered by a
> WordPress update, so I think the WordPress updater truncated and
> rewrote all files. Random guess: some pages got transferred to the
> NFS server, but the local copy in fscache did not get updated.
Do the NFS servers change the files that are being served - or is it just
WordPress pushing the changes to the NFS servers for the web servers to then
export?
David
Max Kellermann <[email protected]> wrote:
> > Do the NFS servers change the files that are being served - or is it
> > just WordPress pushing the changes to the NFS servers for the web
> > servers to then export?
>
> I'm not sure if I understand this question correctly. The NFS server
> (a NetApp, btw.) sees the new file contents correctly; all other web
> servers also see non-corrupt new files. Only the one web server which
> performed the update saw broken files.
I was wondering if there was missing invalidation if the web clients were
modifying the same files in parallel, but it sounds like only one place is
doing the modification, and the problem is the lack of invalidation when a
file is opened for writing.
I have a tentative patch for this - see attached.
David
---
commit 9b00af0190dfee6073aab47ee88e15c31d3c357d
Author: David Howells <[email protected]>
Date: Wed Apr 20 14:27:17 2022 +0100
fscache: Fix invalidation/lookup race
If an NFS file is opened for writing and closed, fscache_invalidate() will
be asked to invalidate the file - however, if the cookie is in the
LOOKING_UP state (or the CREATING state), the request to invalidate
doesn't get recorded for fscache_cookie_state_machine() to do something
with.
Fix this by making __fscache_invalidate() set a flag if it sees the cookie
is in the LOOKING_UP state to indicate that we need to go to invalidation.
Note that this requires a count on the n_accesses counter for the state
machine, which the state machine will release when it's done.
fscache_cookie_state_machine() then shifts to the INVALIDATING state if it
sees the flag.
Without this, an NFS file can get corrupted if it gets modified locally
and then read locally, as the cache contents may not get updated.
Fixes: d24af13e2e23 ("fscache: Implement cookie invalidation")
Reported-by: Max Kellermann <[email protected]>
Signed-off-by: David Howells <[email protected]>
Link: https://lore.kernel.org/r/[email protected] [1]
diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index 9d3cf0111709..3bb6deeb4279 100644
--- a/fs/fscache/cookie.c
+++ b/fs/fscache/cookie.c
@@ -705,7 +705,11 @@ static void fscache_cookie_state_machine(struct fscache_cookie *cookie)
spin_unlock(&cookie->lock);
fscache_init_access_gate(cookie);
fscache_perform_lookup(cookie);
- goto again;
+ spin_lock(&cookie->lock);
+ if (test_and_clear_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags))
+ __fscache_set_cookie_state(cookie,
+ FSCACHE_COOKIE_STATE_INVALIDATING);
+ goto again_locked;
case FSCACHE_COOKIE_STATE_INVALIDATING:
spin_unlock(&cookie->lock);
@@ -752,6 +756,9 @@ static void fscache_cookie_state_machine(struct fscache_cookie *cookie)
spin_lock(&cookie->lock);
}
+ if (test_and_clear_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags))
+ fscache_end_cookie_access(cookie, fscache_access_invalidate_cookie_end);
+
switch (state) {
case FSCACHE_COOKIE_STATE_RELINQUISHING:
fscache_see_cookie(cookie, fscache_cookie_see_relinquish);
@@ -1048,6 +1055,9 @@ void __fscache_invalidate(struct fscache_cookie *cookie,
return;
case FSCACHE_COOKIE_STATE_LOOKING_UP:
+ __fscache_begin_cookie_access(cookie, fscache_access_invalidate_cookie);
+ set_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags);
+ fallthrough;
case FSCACHE_COOKIE_STATE_CREATING:
spin_unlock(&cookie->lock);
_leave(" [look %x]", cookie->inval_counter);
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index e25539072463..a25804f141d3 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -129,6 +129,7 @@ struct fscache_cookie {
#define FSCACHE_COOKIE_DO_PREP_TO_WRITE 12 /* T if cookie needs write preparation */
#define FSCACHE_COOKIE_HAVE_DATA 13 /* T if this cookie has data stored */
#define FSCACHE_COOKIE_IS_HASHED 14 /* T if this cookie is hashed */
+#define FSCACHE_COOKIE_DO_INVALIDATE 15 /* T if cookie needs invalidation */
enum fscache_cookie_state state;
u8 advice; /* FSCACHE_ADV_* */
On 2022/04/19 18:42, David Howells <[email protected]> wrote:
> Could the file have been modified by a third party? If you're using NFS3
> there's a problem if two clients can modify a file at the same time. The
> second write can mask the first write and the client has no way to detect it.
> The problem is inherent to the protocol design. The NFS2 and NFS3 protocols
> don't support anything better than {ctime,mtime,filesize} - the change
> attribute only becomes available with NFS4.
I tried to write a script to stress-test writing and reading, but
found no clue so far. I'll continue that tomorrow.
My latest theory is that this is a race condition: what if one process
writes to the file, which invalidates the cache, and then, in the middle
of invalidating the local cache and sending the write to the NFS server,
another process (on the same server) reads the file? What modification
time and what data will it see? What if the cache gets filled with old
data while the new data to be written is still in flight?
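A minimal sketch of what the stress test roughly looks like (the paths
are made up; the length varies a little each round, mimicking the
WordPress update, and the page cache is dropped so reads go back through
fscache):

#!/bin/sh
# rewrite a file on the NFS mount with a slightly different length each
# round, drop the page cache, read the file back and compare
F=/mnt/nfs/stress-test.dat    # hypothetical path on the NFS mount
T=/tmp/stress-test.ref        # local reference copy
i=0
while :; do
    i=$((i + 1))
    head -c $((180000 + i % 64)) /dev/urandom >"$T"
    cp "$T" "$F"
    echo 3 >/proc/sys/vm/drop_caches
    cmp "$T" "$F" || { echo "mismatch in round $i"; break; }
done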
Max
On 2022/04/20 15:55, David Howells <[email protected]> wrote:
> I have a tentative patch for this - see attached.
Quick feedback: your patch has been running on our servers for two
weeks, and I have received no new complaints about corrupted files.
That doesn't prove the patch is correct or that it really solves my
problem, but anyway it's a good sign. Thanks so far.
Max
On 2022/05/31 10:35, David Howells <[email protected]> wrote:
> Max Kellermann <[email protected]> wrote:
>
> > On 2022/04/20 15:55, David Howells <[email protected]> wrote:
> > > I have a tentative patch for this - see attached.
> >
> > Quick feedback: your patch has been running on our servers for two
> > weeks, and I have received no new complaints about corrupted files.
> > That doesn't prove the patch is correct or that it really solves my
> > problem, but anyway it's a good sign. Thanks so far.
>
> Can I put that down as a Tested-by?
Yes. A month later, still no new corruption.
Max Kellermann <[email protected]> wrote:
> On 2022/04/20 15:55, David Howells <[email protected]> wrote:
> > I have a tentative patch for this - see attached.
>
> Quick feedback: your patch has been running on our servers for two
> weeks, and I have received no new complaints about corrupted files.
> That doesn't prove the patch is correct or that it really solves my
> problem, but anyway it's a good sign. Thanks so far.
Can I put that down as a Tested-by?
David
Max Kellermann <[email protected]> wrote:
> > Can I put that down as a Tested-by?
>
> Yes. A month later, still no new corruption.
Thanks!
David
On 31.05.22 11:13, David Howells wrote:
> Max Kellermann <[email protected]> wrote:
>
>>> Can I put that down as a Tested-by?
>>
>> Yes. A month later, still no new corruption.
>
> Thanks!
David, is the patch from this thread ("fscache: Fix invalidation/lookup
race" --
https://lore.kernel.org/lkml/[email protected]/ )
heading toward mainline any time soon? This is a tracked regression and
it looked to me like there hasn't been any progress in the last two weeks.
Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
P.S.: As the Linux kernel's regression tracker I deal with a lot of
reports and sometimes miss something important when writing mails like
this. If that's the case here, don't hesitate to tell me in a public
reply, it's in everyone's interest to set the public record straight.
#regzbot poke