In coda_psdev_write(), the header of the buffer is fetched twice from
userspace. The first fetch peeks at the opcode and unique id, while the
second fetch copies the whole message. However, these two fields can be
inconsistent between the two fetches: buf resides in userspace memory, and a
user process can race to change it between the fetches. This means that the
corresponding opcode and unique id fields in req->uc_data could differ from
what was fetched the first time.
Whether this double-fetch situation is a security-critical bug depends on
how req->uc_data is used later. However, given that it is hard to
enumerate all the possible use cases, a safer approach is to ensure that the
peeked header is still the message header after the second fetch.
This patch enforces that the header of the message fetched into req->uc_data
is the same as what was fetched originally, i.e., hdr.opcode and
hdr.unique do not change.
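The pattern and the fix can be illustrated with a small user-space model (a sketch only: the names peek_header and fetch_message are hypothetical, struct coda_hdr is simplified, and memcpy stands in for copy_from_user):

```c
#include <string.h>

/* Simplified message header: the two fields fetched twice. */
struct coda_hdr {
	unsigned long opcode;
	unsigned long unique;
};

/* First fetch: peek only at the header of the user buffer. */
static struct coda_hdr peek_header(const char *ubuf)
{
	struct coda_hdr hdr;

	memcpy(&hdr, ubuf, sizeof(hdr));	/* models copy_from_user() */
	return hdr;
}

/*
 * Second fetch: copy the whole message, then re-impose the peeked
 * header so that a concurrent rewrite of ubuf between the two fetches
 * cannot change the opcode or unique id seen by later code.
 */
static void fetch_message(char *uc_data, const char *ubuf, size_t nbytes,
			  const struct coda_hdr *hdr)
{
	memcpy(uc_data, ubuf, nbytes);		/* second copy_from_user() */
	memcpy(uc_data, hdr, sizeof(*hdr));	/* the fix: override header */
}
```

Even if the buffer's header bytes are rewritten after peek_header() returns, the message assembled by fetch_message() keeps the originally peeked opcode and unique id.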
Signed-off-by: Meng Xu <[email protected]>
---
fs/coda/psdev.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c
index f40e395..b9dbdd8 100644
--- a/fs/coda/psdev.c
+++ b/fs/coda/psdev.c
@@ -178,6 +178,12 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf,
 		goto out;
 	}
 
+	/*
+	 * Override the request header to make sure that it matches the
+	 * first fetch from buf
+	 */
+	memcpy(req->uc_data, &hdr, 2 * sizeof(u_long));
+
 	/* adjust outsize. is this useful ?? */
 	req->uc_outSize = nbytes;
 	req->uc_flags |= CODA_REQ_WRITE;
--
2.7.4
Hi Jaharkes and Coda filesystem developers,
I am resending my earlier email about a potential race condition bug I found
in the Coda filesystem, along with the patch I propose. Please feel free to
comment on whether you think this is a serious problem and whether the patch
will work.
Thank you.
Best Regards,
Meng
> On Sep 19, 2017, at 11:23 AM, Meng Xu <[email protected]> wrote:
>
> In coda_psdev_write(), the header of the buffer is fetched twice from the
> userspace. The first fetch is used to peek at the opcode and unique id while
> the second fetch copies the whole message. However, there could be
> inconsistency in these two fields between two fetches as buf resides in
> userspace memory and a user process can rush to change it across fetches.
> Which means that the corresponding opcode and unique id fields in
> req->uc_data could be different from what is fetched in for the first time.
>
> Whether this double-fetch situation is a security critical bug depends on
> how req->uc_data will be used later. However, given that it is hard to
> enumerate all the possible use cases, a safer way is to ensure that the
> peeked header is actually the same message header after the second fetch.
>
> This patch enforces that the header of the message fetched into req->uc_data
> is the same as what is fetched in originally. In other words, hdr.opcode and
> hdr.unique do not change.
>
> Signed-off-by: Meng Xu <[email protected]>
> ---
> fs/coda/psdev.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c
> index f40e395..b9dbdd8 100644
> --- a/fs/coda/psdev.c
> +++ b/fs/coda/psdev.c
> @@ -178,6 +178,12 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf,
> goto out;
> }
>
> + /*
> + * Override the request header to make sure that it matches the
> + * first fetch from buf
> + */
> + memcpy(req->uc_data, &hdr, 2 * sizeof(u_long));
> +
> /* adjust outsize. is this useful ?? */
> req->uc_outSize = nbytes;
> req->uc_flags |= CODA_REQ_WRITE;
> --
> 2.7.4
>
On Sat, Sep 23, 2017 at 10:35:45PM -0400, Meng Xu wrote:
> Hi Jaharkes and Coda filesystem developers,
>
> I am resending the email on a potential race condition bug I found in the
> Coda filesystem as well as the patch I propose. Please feel free to comment
> whether you think this is a serious problem and whether the patch will work.
> Thank you.
Hi,
It does look like there is a potential race there, but I don't believe it
is very serious because:
- The userspace Coda cache manager component (Coda client) uses
  co-routines instead of true multithreading. So even if someone tries
  to exploit this through a vulnerability in the Coda client, the whole
  process is blocked on the write system call and there is no other
  thread of execution available that could change the fields.
- If someone can exploit the Coda client, they have already achieved
  root, so there is no need to exploit anything in the kernel.
- If someone can replace the opcode and unique id, they can replace
  anything else in the message, with much worse results. For instance, by
  rewriting the response to an open call from success to failure, the
  attacker can cause a file descriptor leak: the Coda client believes
  there is an open file, while the kernel and the application believe
  the open failed, etc.
Now there are two types of messages the Coda client writes to the kernel
device.
- Downcalls, which provide hints to trigger cache revalidations. Here the
  unique id is ignored and the opcode mostly indicates how severe the
  cache invalidation is (only a single object, a whole subtree, etc.). If
  someone wants to mess with that, rewriting the file identifier in the
  message body is more useful, and the worst they can do is make stale
  data stick in the kernel cache. I notice that your patch does not
  address the copy_from_user for downcalls around line 128.
- The other type are the upcall responses: file system requests are
  blocked until the Coda client returns a response whose unique id
  matches the request. This match is performed at line 158, which is
  after the peek but before the second copy_from_user. Once the matching
  request has been found and dequeued, we do not look at the unique id
  anymore. Because the response is correlated to a specific request, and
  we do not re-copy the opcode from the response into the req->uc_opcode
  field, it does not really matter what opcode the response carries at
  this point: all decisions are based on the request opcode.
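The matching step described above can be sketched as a small user-space model (hypothetical names; struct upc_req here carries only the two fields relevant to the argument, and the real kernel code uses its own list handling):

```c
#include <stddef.h>

/*
 * Simplified pending-request queue: a response is matched to a request
 * purely by unique id; the opcode used for all later decisions is the
 * one stored in the request, never the one in the response.
 */
struct upc_req {
	struct upc_req *next;
	unsigned long uc_unique;	/* id assigned when the upcall was sent */
	unsigned long uc_opcode;	/* opcode of the original request */
};

/*
 * Find and dequeue the pending request whose unique id matches the
 * response header; returns NULL if no such request is outstanding.
 */
static struct upc_req *match_and_dequeue(struct upc_req **head,
					 unsigned long unique)
{
	struct upc_req **pp;

	for (pp = head; *pp; pp = &(*pp)->next) {
		if ((*pp)->uc_unique == unique) {
			struct upc_req *req = *pp;

			*pp = req->next;
			req->next = NULL;
			return req;
		}
	}
	return NULL;
}
```

Once match_and_dequeue() has returned a request, a racing rewrite of the response's unique id or opcode no longer influences which request is completed or how.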
Jan