2014-07-23 13:52:22

by Andrey Utkin

Subject: Reading large amounts from /dev/urandom broken

Dear developers, please check Bugzilla ticket
https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
issue, but starting with comment #3).

Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
this was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
with the chunk

nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));

which the commit message describes as "additional paranoia check to
prevent overly large count values to be passed into urandom_read()".
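
For reference, ENTROPY_SHIFT is 3 in drivers/char/random.c, so the
clamp works out to INT_MAX >> 6 = 33554431, which matches the observed
cutoff exactly. A trivial userspace check:

#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* ENTROPY_SHIFT is 3, so urandom_read() clamps each read
	 * to INT_MAX >> (3 + 3) bytes. */
	printf("%d\n", INT_MAX >> (3 + 3));	/* prints 33554431 */
	return 0;
}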

I don't know why people pull such large amounts of data from urandom,
but given that two bug reports about problems doing that have come in
today, I consider it something people actually do.
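
A single oversized read(2) shows the truncation directly. Minimal
reproducer (the 64 MiB request size is arbitrary; anything above the
clamp behaves the same):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	size_t want = 64 * 1024 * 1024;	/* anything > 33554431 */
	char *buf = malloc(want);
	int fd = open("/dev/urandom", O_RDONLY);
	ssize_t got;

	if (fd < 0 || buf == NULL)
		return 1;
	got = read(fd, buf, want);
	/* On affected kernels this reports 33554431, not 67108864. */
	printf("requested %zu, got %zd\n", want, got);
	close(fd);
	free(buf);
	return 0;
}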

--
Andrey Utkin


2014-07-23 14:32:40

by Hannes Frederic Sowa

Subject: Re: Reading large amounts from /dev/urandom broken

Hi Andrey,

thanks for the heads up!

On Wed, 2014-07-23 at 16:52 +0300, Andrey Utkin wrote:
> Dear developers, please check Bugzilla ticket
> https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> issue, but starting with comment #3).
>
> Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
> this was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> with the chunk
>
> nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
>
> which the commit message describes as "additional paranoia check to
> prevent overly large count values to be passed into urandom_read()".
>
> I don't know why people pull such large amounts of data from urandom,
> but given that two bug reports about problems doing that have come in
> today, I consider it something people actually do.

Ted, we should roll back the min_t change and just account for SIZE_MAX
bytes in case the ibytes << (ENTROPY_SHIFT + 3) calculation overflows.

Or something along those lines?

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 71529e1..f11a6cc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1006,7 +1006,10 @@ retry:
 		WARN_ON(1);
 		entropy_count = 0;
 	}
-	nfrac = ibytes << (ENTROPY_SHIFT + 3);
+	if (ibytes > SIZE_MAX >> (ENTROPY_SHIFT + 3))
+		nfrac = SIZE_MAX;
+	else
+		nfrac = ibytes << (ENTROPY_SHIFT + 3);
 	if ((size_t) entropy_count > nfrac)
 		entropy_count -= nfrac;
 	else
@@ -1386,7 +1389,6 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 			    "with %d bits of entropy available\n",
 			    current->comm, nonblocking_pool.entropy_total);
 
-	nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
 	ret = extract_entropy_user(&nonblocking_pool, buf, nbytes);
 
 	trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool),
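
To illustrate what the guard protects against, here is a userspace
sketch using uint32_t to stand in for a 32-bit size_t; without the
check, the shift silently wraps and the pool accounting would subtract
nothing:

#include <stdio.h>
#include <stdint.h>

#define ENTROPY_SHIFT 3

int main(void)
{
	uint32_t ibytes = 1U << 27;	/* a 128 MiB extraction */
	uint32_t nfrac;

	/* Unguarded: (1 << 27) << 6 == 2^33 wraps to 0 in 32 bits. */
	nfrac = ibytes << (ENTROPY_SHIFT + 3);
	printf("unguarded nfrac = %u\n", nfrac);

	/* Guarded, as in the patch above: saturate instead of wrap. */
	if (ibytes > UINT32_MAX >> (ENTROPY_SHIFT + 3))
		nfrac = UINT32_MAX;
	else
		nfrac = ibytes << (ENTROPY_SHIFT + 3);
	printf("guarded nfrac = %u\n", nfrac);
	return 0;
}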

2014-07-23 15:15:06

by Theodore Ts'o

Subject: Re: Reading large amounts from /dev/urandom broken

On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> Dear developers, please check Bugzilla ticket
> https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> issue, but starting with comment #3).
>
> Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
> this was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> with the chunk
>
> nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
>
> which the commit message describes as "additional paranoia check to
> prevent overly large count values to be passed into urandom_read()".
>
> I don't know why people pull such large amounts of data from urandom,
> but given that two bug reports about problems doing that have come in
> today, I consider it something people actually do.

I've inquired on the bugzilla why the reporter is abusing urandom in
this way. The other commenter on the bug replicated the problem, but
that's not a "second bug report" in my book.

At the very least, this will probably cause me to insert a warning
printk: "insane user of /dev/urandom: [current->comm] requested %d
bytes" whenever someone tries to request more than 4k.

- Ted

2014-07-23 15:19:42

by Hannes Frederic Sowa

Subject: Re: Reading large amounts from /dev/urandom broken

On Wed, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> > Dear developers, please check Bugzilla ticket
> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> > issue, but starting with comment #3).
> >
> > Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
> > this was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> > with the chunk
> >
> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> >
> > which the commit message describes as "additional paranoia check to
> > prevent overly large count values to be passed into urandom_read()".
> >
> > I don't know why people pull such large amounts of data from urandom,
> > but given that two bug reports about problems doing that have come in
> > today, I consider it something people actually do.
>
> I've inquired on the bugzilla why the reporter is abusing urandom in
> this way. The other commenter on the bug replicated the problem, but
> that's not a "second bug report" in my book.
>
> At the very least, this will probably cause me to insert a warning
> printk: "insane user of /dev/urandom: [current->comm] requested %d
> bytes" whenever someone tries to request more than 4k.

Ok, I would be fine with that.

The dd if=/dev/urandom of=random_file.dat use case seems reasonable to
me, so we should try not to break it. But, of course, there are other
possibilities.

Bye,
Hannes

2014-07-24 20:39:48

by Alex Elsayed

Subject: Re: Reading large amounts from /dev/urandom broken

Hannes Frederic Sowa wrote:

> On Wed, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
>> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
>> > Dear developers, please check Bugzilla ticket
>> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
>> > issue, but starting with comment #3).
>> >
>> > Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
>> > this was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
>> > with the chunk
>> >
>> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
>> >
>> > which the commit message describes as "additional paranoia check to
>> > prevent overly large count values to be passed into urandom_read()".
>> >
>> > I don't know why people pull such large amounts of data from urandom,
>> > but given that two bug reports about problems doing that have come in
>> > today, I consider it something people actually do.
>>
>> I've inquired on the bugzilla why the reporter is abusing urandom in
>> this way. The other commenter on the bug replicated the problem, but
>> that's not a "second bug report" in my book.
>>
>> At the very least, this will probably cause me to insert a warning
>> printk: "insane user of /dev/urandom: [current->comm] requested %d
>> bytes" whenever someone tries to request more than 4k.
>
> Ok, I would be fine with that.
>
> The dd if=/dev/urandom of=random_file.dat use case seems reasonable to
> me, so we should try not to break it. But, of course, there are other
> possibilities.

Personally, I'd say that _is_ insane: reading from urandom still
consumes entropy (causing readers of /dev/random to block more often),
and when alternatives (such as dd'ing to dm-crypt) both avoid the issue
_and_ are faster, it should very well be considered pathological.