When I compile and run the code below, the string of octal characters
generated by reading /dev/random contains long strings of zeroes. I was
under the impression that /dev/random is "more random" than
/dev/urandom, and will block when it runs out of entropy until it
gathers more. It's only when RAND_LEN is on the largish side that these
strings of zeroes appear.
I'm guessing this behaviour is to be expected; I just don't understand
it.
This is linux kernel 2.6.8.1.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define RAND_LEN 1024

void
read_random( const char* dev ) {
    int i, fd;
    char dat[RAND_LEN + 1];

    fd = open( dev, O_RDONLY );
    dat[RAND_LEN] = '\0';
    if( fd != -1 ) {
        read( fd, dat, RAND_LEN );
        /* map each byte to an octal digit '0'..'7' */
        for( i = 0; i < RAND_LEN; i++ ) {
            dat[i] = (dat[i] & 0x07) + '0';
        }
        printf( "%s: %s\n\n", dev, dat );
    } else {
        exit( 1 );
    }
    close( fd );
}

int
main( void ) {
    read_random( "/dev/random" );
    read_random( "/dev/urandom" );
    return( 0 );
}
e.g.
$ ./test
/dev/random: 5417351746543663033176215502400146103743161053514107722724572227577211175264555157343736274051424454260000000410000000000477510042000000000042400000100004776700057700000477560000103410000004100000041005777000411040100000000000000000000000000000000000000000000000000000000000000000000000000000411025574400000000000000000000000000000000000000000000100000000046404040000060574400000000010000000000000000000000001000000000000000000000000000000000006200000000000000041041770000077723004040600045770000000000000000000000006000404003400000000000000000000000000000000000000000000000000000000000000000120026302630000000000000473005300320031030000610041045104530477772004530211202304677001010000610000010000000000000001200114000100000041000000140067712002630724000004677000047300530032003103000061004104010324007777200324067710140077700101000061000001000000000000000000000000410470003104777610045100610100000000000077700000000000000006771077706104530023000300310407747770300730070000220417740776210465006504777574046504077041030000610
/dev/urandom: 1655466217206163634777531607666420217322330273327062554377500673762610503232764303737770374066306640130702327235453637234726515320440537063623075347362744022761643031224147437567521545005217571341376634432705100716667264573662535077131060611205546167254244173277374124057015170471752463673753257446324120132341703110113324160500424545414427103452004326607527142023223056654417412072453735343241747171121471174343351366215703670565624370656416102432525141711102001344506103237445351607531346546542130670340524726467315466500144520507166250124000513673351302523654724763452657772702110446526330137221714042445716025510445073154154554433625137743012150647422324104477276471266371157432250425702471733705106524743537622014253265310174041504105572260571526120773547047577667711773160042711607626651223077130635371771754041162573451444327036771052224666641252456167376416110445223405032201545552374202763513507366033462320535502163264137406437201346313404576320371006224721327520444466067703201155645714316637347641311513007645264
Happy New Year!
--
Ron Peterson
Network & Systems Manager
Mount Holyoke College
http://www.mtholyoke.edu/~rpeterso
Ron Peterson wrote:
> When I compile and run the code below, the string of octal characters
> generated by reading /dev/random contains long strings of zeroes. I was
> under the impression that /dev/random is "more random" than
> /dev/urandom, and will block when it runs out of entropy until it
> gathers more. It's only when RAND_LEN is on the largish side that these
> strings of zeroes appear.
>
>[...]
> read( fd, dat, RAND_LEN );
> for( i = 0; i < RAND_LEN; i++ ) {
>     dat[i] = (dat[i] & 0x07) + '0';
This is wrong. You must check the return value of read() to know how
many bytes you have actually gathered, instead of assuming it filled
all RAND_LEN...
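Something along these lines (a minimal, untested sketch reusing your
fd, dat and RAND_LEN, with <unistd.h> included for ssize_t) keeps
reading until the buffer is full:

ssize_t total = 0;
while( total < RAND_LEN ) {
    ssize_t n = read( fd, dat + total, RAND_LEN - total );
    if( n <= 0 )        /* error; EOF shouldn't happen on /dev/random */
        break;
    total += n;         /* only dat[0..total) holds random bytes */
}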
--
Paulo Marques - http://www.grupopie.com
"A journey of a thousand miles begins with a single step."
Lao-tzu, The Way of Lao-tzu
On Fri, 7 Jan 2005, Ron Peterson wrote:
> When I compile and run the code below, the string of octal characters
> generated by reading /dev/random contains long strings of zeroes. I was
> under the impression that /dev/random is "more random" than
> /dev/urandom, and will block when it runs out of entropy until it
> gathers more. It's only when RAND_LEN is on the largish side that these
> strings of zeroes appear.
>
> I'm guessing this behaviour is to be expected; I just don't understand
> it.
>
> This is linux kernel 2.6.8.1.
>
> #include <stdio.h>
> #include <fcntl.h>
> #define RAND_LEN 1024
I don't understand your code. Using standard Unix tools results
in the expected behavior:
Script started on Fri 07 Jan 2005 02:19:00 PM EST
LINUX> od -x /dev/random
0000000 43ba fd1e 2a04 4339 2b7e e55a 6e54 71c2
0000020 6102 9518 d0e9 5f9e b4ed 1ccc 21a1 078c
0000040 8d91 c70c 9ef0 c053 fce2 cb8c f890 1306
0000060 138c a669 8bdd af97 629b 324f b771 2f7a
0000100 0a8b 9c58 d131 db14 6350 1fde d4b7 b775
0000120 d16a 77f8 a201 abe1 a2fa 364c d773 2a63
0000140 526b 6042 7a83 f457 58ca 016c 780f 7506
0000160 d564 216d e325 86d0 5c82 a180 b835 3bd3
0000200 0c6b 90f1 0f15 823d 2df7 e8fa d217 5b21
0000220 aabe 8ff2 3f1f 8557 581e 8479 9863 202b
0000240 95e3 d007 7583 d0f4 3fbc b790 f922 4013
0000260 6e16 5a37 0465 4733 2a03 ce2a bb8e c1ee
0000300 f359 f087 798e 0c1d ac02 e6e9 8cb4 e905
0000320 21cc 1816 fd9c f72d 10e7 6195 59cb f40d
0000340 868b 11ef 8a1e a45c e162 acb9 caca 2355
0000360 a3a3 80e9 5f96 87a2 d89c af59 23e3 1918
0000400 58c3 a823 d8b2 e74b 4af0 0fcd bba4 5919
0000420 9ffc 9e74 7ec6 9272 ccdd ba80 ed8e 33c9
0000440 60d3 14eb aa6d 589d bea6 4244 6892 640f
0000460 bf98 5397 3408 1fb3 40fe 3def 4277 663a
0000500 ae94 00d4 fa55 8cd0 8218 4901 1b95 5fd9
0000520 8361 7a55 7cb9 69c8 3c14 9d78 71e5 deba
0000540 7c74 65cf fa62 758a c686 07fb d300 e62d
0000560 b130 792a 9bcc 5652 2d20 d875 852a ede8
0000600 9141 201b 6e54 1271 e5e9 b87e 3f8a 9de9
0000620 8c85 ef50 bcc1 b09a 87aa 46fc d371 06f1
0000640 8345 e3ec bdb6 d97f 462e 9021 90d4 0100
0000660 98bb 5e75 4871 8818 21a3 8082 6844 2845
0000700 efae 3fee 9f6c 1b64 146e 4234 4c39 8934
0000720 1668 920b 5d3b 896f 344e 01d8 7a85 dba1
0000740 cae7 752b 5691 ce82 a6d1 c991 a648 d55a
0000760 afbb bf02 c8c0 a4c4 4d58 fce5 fa87 fe04
0001000 5459 5fad b930 1e3d e25d d5da 25ca 7c16
0001020 a446 f60d b2cc a679 5f57 b9b0 8a0e 24f7
0001040 01a3 65a3 ce7f 1855 f5b6 f9bc 888d b670
0001060 9771 c8d1 2e9b 8ea5 a95a ff33 78a5 695a
0001100 c738 0707 0f8e 8403 ca41 b272 f7a8 7727
LINUX> od -x /dev/urandom
0000000 1647 22fa d1eb 2fa6 99a6 c2f1 9660 d855
0000020 c9df 6118 3774 f9d4 c9bd 619d 3f11 b092
0000040 713b 0c07 d578 7b5e 1128 fe54 7252 bf5b
0000060 109e 4c40 ae9f 2d09 66a0 25bf 241b af82
0000100 6065 af28 7785 2ea1 2955 613a 4f38 41bd
0000120 d978 f5de 2886 9565 2eb7 6f96 144d 2b4d
0000140 d43d 745c 9ac1 68ea a804 c76a fd9e b16b
0000160 16f9 918b 393b c469 8a72 cc1d dfaf 62cf
0000200 a6db a208 81f3 bcb0 8748 9c64 e1e9 fdba
0000220 d80d 1531 d7b4 a0ae a13e f0d2 2605 444c
0000240 deaa dfbc ab47 21af 3b9c 6a63 1ca5 6d38
0000260 b3fc 8ddf ecff 3d76 5a03 5bd7 23ed bca0
0000300 bf60 5fc9 496c 1edb c32d f8fd 5f1e fe96
0000320 8f61 8470 612e df39 495c 91be f9d1 84e3
0000340 a4a3 464a 007a dd81 c121 c59b 536b 426e
0000360 e08a 4cd6 8820 8ca4 c88e b35c 039c aacf
0000400 2a5d 1f2f 0950 5e30 d4bb 7413 4710 65c0
0000420 2a46 98f8 4a4a 6153 2b4b 58d2 8996 9551
0000440 255a 352a 6b06 49d2 5a21 431b 51af 7768
0000460 9f24 3fdf 9e7c 0555 8cf7 f471 f89e 811a
0000500 ae5f e487 8a1f d51a dadf d179 5f48 6f49
0000520 49e5 dc68 0df7 571e 1d44 4662 dcff 1a56
0000540 8986 6dd3 10f5 c893 d9cf d5cb ac51 95e3
0000560 1486 5448 5960 9cd7 0420 b6bb fa00 4726
0000600 e7e3 5d7e 5470 d042 0ed0 9c94 9ade 1380
0000620 1106 f59c 7278 03f2 45c9 41ab 89e9 5d5b
0000640 1b8f 50cf e19a 87c1 cdd0 441a 0358 7f79
0000660 bca3 746f fa8b 0bca c367 1f02 a7a6 a49b
0000700 dd8c 8125 9e8a 57c8 3206 2359 78ca 3a53
0000720 aa24 be40 3859 ad64 c311 ef49 5ae3 ea69
0000740 5f23 c3d8 2b8a e809 9ce7 32b5 e7ae f5b4
0000760 ba39 db5d 68e9 9b6f 052d 0e87 038e 431b
0001000 c9e2 dd73 91ab f330 9888 d65e a85a aa5e
0001020 a825 48a1 6b1d 1995 a4d3 7351 35c8 0923
0001040 d005 75f9 a381 08cc 3c9b da16 bbd1 4a01
0001060 150d 4fcb a7b9 c4d9 3677 96d2 c094 b7bd
0001100 0355 902f 0c03 d7ba f8c2 1010 d77d b865
0001120 8b65 5907 f11b 097e a446 e219 6ed6 dbc7
0001140 f95b 3279 7f6b 6331 6cde 1997 5d64 3231
0001160 88c3 59e7 8823 b026 a6ac 9d6e 5f81 1916
^C
LINUX> exit
Script done on Fri 07 Jan 2005 02:19:40 PM EST
You can't just AND-off a bunch of bits and claim that
the random-number generator doesn't work!
Cheers,
Dick Johnson
Penguin : Linux version 2.6.10 on an i686 machine (5537.79 BogoMips).
Notice : All mail here is now cached for review by Dictator Bush.
98.36% of all statistics are fiction.
On Fri, 2005-01-07 at 14:05 -0500, Ron Peterson wrote:
> read( fd, dat, RAND_LEN );
> for( i = 0; i < RAND_LEN; i++ ) {
>     dat[i] = (dat[i] & 0x07) + '0';
> }
Your problem is probably that read() need not actually read RAND_LEN
bytes. This is especially true of /dev/random, since it will only
return bytes up to its entropy estimate. But you assume it read
RAND_LEN bytes, so the tail of your buffer is left unread, and quite
possibly zero. That is probably your bug.
The AND makes no sense either.
Just use dd(1).
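Something like this (untested) grabs 16 bytes; with bs=1 each read
either blocks or returns exactly one byte, so a short read can't
silently truncate the output:

$ dd if=/dev/random of=seed bs=1 count=16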
Robert Love
Ron Peterson wrote:
> When I compile and run the code below, the string of octal characters
> generated by reading /dev/random contains long strings of zeroes. I was
> under the impression that /dev/random is "more random" than
> /dev/urandom, and will block when it runs out of entropy until it
> gathers more. It's only when RAND_LEN is on the largish side that these
> strings of zeroes appear.
Just a shot in the dark--could you check the return value of read()?
Maybe it's not filling up your entire buffer.
Chris
* Ron Peterson:
> When I compile and run the code below, the string of octal characters
> generated by reading /dev/random contains long strings of zeroes. I was
> under the impression that /dev/random is "more random" than
> /dev/urandom, and will block when it runs out of entropy until it
> gathers more. It's only when RAND_LEN is on the largish side that these
> strings of zeroes appear.
> read( fd, dat, RAND_LEN );
This is the bug. *Always* check the return value of read.
(Try running your program with "strace -e read" for a hint of what's
really going on.)
On Fri, Jan 07, 2005 at 02:40:43PM -0500, Robert Love wrote:
> On Fri, 2005-01-07 at 14:05 -0500, Ron Peterson wrote:
>
> > read( fd, dat, RAND_LEN );
> > for( i = 0; i < RAND_LEN; i++ ) {
> >     dat[i] = (dat[i] & 0x07) + '0';
> > }
>
> Your problem is probably that read() need not actually read RAND_LEN
> bytes. This is especially true of /dev/random, since it will only
> return bytes up to its entropy estimate. But you assume it read
> RAND_LEN bytes, so the tail of your buffer is left unread, and quite
> possibly zero. That is probably your bug.
>
> The AND makes no sense either.
>
> Just use dd(1).
Ah, thanks! (To you and everyone else.)
(I can't use dd because I want to use the value to feed gmp_randseed
from the GMP library. I need the AND to create the proper ASCII code
for a numeral.)
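For the record, here's a minimal sketch of what I'm after (untested;
the 16-byte seed length is an arbitrary choice, and this time the
return value of read() is checked):

#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <gmp.h>

#define SEED_LEN 16

int main( void ) {
    unsigned char buf[SEED_LEN];
    ssize_t total = 0, n;
    mpz_t seed;
    gmp_randstate_t state;
    int fd;

    fd = open( "/dev/random", O_RDONLY );
    if( fd == -1 )
        exit( 1 );
    while( total < SEED_LEN ) {   /* read() may return fewer bytes */
        n = read( fd, buf + total, SEED_LEN - total );
        if( n <= 0 )
            exit( 1 );
        total += n;
    }
    close( fd );

    mpz_init( seed );
    /* interpret the raw bytes as one big integer, MSB first */
    mpz_import( seed, SEED_LEN, 1, 1, 0, 0, buf );
    gmp_randinit_default( state );
    gmp_randseed( state, seed );
    /* ... use state, e.g. with mpz_urandomb() ... */
    mpz_clear( seed );
    gmp_randclear( state );
    return 0;
}

(Link with -lgmp.)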
Best.
--
Ron Peterson
Network & Systems Manager
Mount Holyoke College
http://www.mtholyoke.edu/~rpeterso
On Fri, Jan 07, 2005 at 02:05:36PM -0500, Ron Peterson wrote:
> When I compile and run the code below, the string of octal characters
> generated by reading /dev/random contains long strings of zeroes.
But it is your program that invents the zeros; they are not returned
by /dev/random. The bug in your program is failing to check the
return value of read().
On Fri, 7 Jan 2005, Andries Brouwer wrote:
> On Fri, Jan 07, 2005 at 02:05:36PM -0500, Ron Peterson wrote:
>
>> When I compile and run the code below, the string of octal characters
>> generated by reading /dev/random contains long strings of zeroes.
>
> But it is your program that invents the zeros; they are not returned
> by /dev/random. The bug in your program is failing to check the
> return value of read().
> -
Also, the following shows that the AND operation will destroy
the randomness of the data. In this case I AND with 1, which
should produce as many '1's as '0's, ... and clearly does not.
Script started on Fri 07 Jan 2005 05:36:43 PM EST
LINUX> cat >xxx.c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#define LEN 0x20
void doit(unsigned char *buf) {
    size_t i, odds, evens;
    int fd, ret;
    odds = evens = 0;
    printf("Trying %s\n", buf);
    if((fd = open(buf, O_RDONLY)) < 0)
        exit(EXIT_FAILURE);
    if((ret = read(fd, buf, LEN)) > 0)
    {
        for(i = 0; i < ret; i++)
        {
            if(buf[i] & 1)
                odds++;
            else
                evens++;
            printf("%02x", buf[i] & 1);
        }
        printf("\n odds = %u evens = %u\n", odds, evens);
    }
    (void)close(fd);
}
int main() {
    char buf[0x100];
    strcpy(buf, "/dev/random");
    doit(buf);
    strcpy(buf, "/dev/urandom");
    doit(buf);
    return 0;
}
LINUX> gcc -Wall -O2 -o xxx xxx.c
LINUX> ./xxx
Trying /dev/random
0100000101010000010001000101000000000000000101000100010000000101
odds = 14 evens = 18
Trying /dev/urandom
0001010001000100000101000100010001000000000000000000010000000000
odds = 10 evens = 22
LINUX> ./xxx
Trying /dev/random
0100000100010101000101010101010101000100010000010001010000000101
odds = 20 evens = 12
Trying /dev/urandom
0100000100000101010001000101010001010001000000010101010100010000
odds = 18 evens = 14
LINUX> exit
Script done on Fri 07 Jan 2005 05:37:37 PM EST
Cheers,
Dick Johnson
Penguin : Linux version 2.6.10 on an i686 machine (5537.79 BogoMips).
Notice : All mail here is now cached for review by Dictator Bush.
98.36% of all statistics are fiction.
linux-os wrote:
> Also, the following shows that the AND operation will destroy
> the randomness of the data. In this case I AND with 1, which
> should produce as many '1's as '0's, ... and clearly does not.
>
It should not. If it always resulted in exactly the same number of '0's
and '1's, then it wouldn't be random. But the relative rate of '0's and
'1's will approach 50% once the number of tries is statistically
meaningful. 32 tries isn't.
Michal
On Fri, Jan 07, 2005 at 05:39:46PM -0500, linux-os wrote:
> Also, the following shows that the AND operation will destroy
> the randomness of the data. In this case I AND with 1, which
> should produce as many '1's as '0's, ... and clearly does not.
You do not sound like a reliable source of information
about randomness.
> LINUX> gcc -Wall -O2 -o xxx xxx.c
> LINUX> ./xxx
> Trying /dev/random
> 0100000101010000010001000101000000000000000101000100010000000101
> odds = 14 evens = 18
> Trying /dev/urandom
> 0001010001000100000101000100010001000000000000000000010000000000
> odds = 10 evens = 22
> LINUX> ./xxx
> Trying /dev/random
> 0100000100010101000101010101010101000100010000010001010000000101
> odds = 20 evens = 12
> Trying /dev/urandom
> 0100000100000101010001000101010001010001000000010101010100010000
> odds = 18 evens = 14
linux-os <[email protected]> writes:
> In this case I AND with 1, which should produce as many '1's as
> '0's, ... and clearly does not.
Actually, a fair coin flipped N times is unlikely to come up heads
exactly N/2 times, and the probability of this drops quickly as N
grows.
What is true is that it will usually come up heads N/2 times, give or
take sqrt(N). (The standard deviation is actually sqrt(N)/2, so
sqrt(N) is about two standard deviations, which covers roughly 95% of
trials.) Mathematicians call this the "Central Limit Theorem".
For example, take N=32. The square root of 32 is a little less than
6. So we expect to see between 16-6 (i.e., 10) and 16+6 (i.e., 22)
heads in a typical trial. (Of course, in one trial out of 4 billion
it will come up all heads. The Central Limit Theorem is about "usual"
outcomes, not every outcome.)
So we expect between 10 and 22 odds/evens in your trial.
> Trying /dev/random
> 0100000101010000010001000101000000000000000101000100010000000101
> odds = 14 evens = 18
> Trying /dev/urandom
> 0001010001000100000101000100010001000000000000000000010000000000
> odds = 10 evens = 22
> LINUX> ./xxx
> Trying /dev/random
> 0100000100010101000101010101010101000100010000010001010000000101
> odds = 20 evens = 12
> Trying /dev/urandom
> 0100000100000101010001000101010001010001000000010101010100010000
> odds = 18 evens = 14
Well how about that. Try it with larger N, and you will find it gets
even harder to hit a case where the total is outside the sqrt(N) error
margin. And of course, as a percentage of N, sqrt(N) only shrinks as
N grows.
If you doubt any of this, try it with a real coin. Or read a book on
probability.
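If you would rather not flip a coin 100,000 times by hand, a quick
sketch like this (untested; compile with -lm) does it from
/dev/urandom, so it won't block waiting for 100,000 bytes of entropy,
using the low bit of each byte as one toss:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <math.h>

#define N 100000

int main( void ) {
    static unsigned char buf[N];
    ssize_t total = 0, n;
    long heads = 0;
    int i, fd;

    fd = open( "/dev/urandom", O_RDONLY );
    if( fd == -1 )
        exit( 1 );
    while( total < N ) {            /* read() may return short */
        n = read( fd, buf + total, N - total );
        if( n <= 0 )
            exit( 1 );
        total += n;
    }
    close( fd );
    for( i = 0; i < N; i++ )        /* low bit == one coin toss */
        if( buf[i] & 1 )
            heads++;
    printf( "N = %d, heads = %ld, N/2 = %d, sqrt(N) ~ %.0f\n",
            N, heads, N / 2, sqrt( (double)N ) );
    return 0;
}

Nearly every run, heads lands within sqrt(N) (about 316) of 50000.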
- Pat
On Sat, 8 Jan 2005, Patrick J. LoPresti wrote:
> linux-os <[email protected]> writes:
>
>> In this case I AND with 1, which should produce as many '1's as
>> '0's, ... and clearly does not.
>
> Actually, a fair coin flipped N times is unlikely to come up heads
> exactly N/2 times, and the probability of this drops quickly as N
> grows.
>
> What is true is that it will usually come up heads N/2 times, give or
> take sqrt(N). Mathematicians call this the "Central Limit Theorem".
>
> For example, take N=32. The square root of 32 is a little less than
> 6. So we expect to see between 16-6 (i.e., 10) and 16+6 (i.e., 22)
> heads in a typical trial. (Of course, in one trial out of 4 billion
> it will come up all heads. The Central Limit Theorem is about "usual"
> outcomes, not every outcome.)
>
> So we expect between 10 and 22 odds/evens in your trial.
>
>> Trying /dev/random
>> 0100000101010000010001000101000000000000000101000100010000000101
>> odds = 14 evens = 18
>> Trying /dev/urandom
>> 0001010001000100000101000100010001000000000000000000010000000000
>> odds = 10 evens = 22
>> LINUX> ./xxx
>> Trying /dev/random
>> 0100000100010101000101010101010101000100010000010001010000000101
>> odds = 20 evens = 12
>> Trying /dev/urandom
>> 0100000100000101010001000101010001010001000000010101010100010000
>> odds = 18 evens = 14
>
> Well how about that. Try it with larger N, and you will find it gets
> even harder to hit a case where the total is outside the sqrt(N) error
> margin. And of course, as a percentage of N, sqrt(N) only shrinks as
> N grows.
>
> If you doubt any of this, try it with a real coin. Or read a book on
> probability.
>
> - Pat
One is free to use any number of samples. The small number of samples
was DELIBERATELY used to exacerbate the problem, although a number
of nay-sayers jumped on this in an attempt to prove that I don't
know what I'm talking about.
In the first place, the problem was to display the error of using
an ANDing operation to truncate a random number. In the limit,
one could AND with 0 and show that all randomness has been removed.
However, those who know nothing about the theory would then
probably jump upon this as a "special case" even though it isn't.
Cheers,
Dick Johnson
Penguin : Linux version 2.6.10 on an i686 machine (5537.79 BogoMips).
Notice : All mail here is now cached for review by Dictator Bush.
98.36% of all statistics are fiction.
linux-os wrote:
> [...]
> One is free to use any number of samples. The small number of samples
> was DELIBERATELY used to exacerbate the problem, although a number
> of nay-sayers jumped on this in an attempt to prove that I don't
> know what I'm talking about.
It seems to me that you actually don't.
Since this is a *uniform* distribution over the range [0..2^N[, any
of those N bits must also show a uniform distribution; otherwise the
distribution of the N-bit value as a whole couldn't be uniform. (Isn't
this obvious?)
It would be different, of course, if this were not a uniform
distribution, or if the range were not a power of 2...
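A quick way to see it empirically (an untested sketch): draw a lot of
bytes and histogram the masked 3-bit values, exactly as in the
original program:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main( void ) {
    long hist[8] = { 0 };
    unsigned char c;
    long i;
    int fd = open( "/dev/urandom", O_RDONLY );
    if( fd == -1 )
        exit( 1 );
    for( i = 0; i < 800000; i++ ) {
        if( read( fd, &c, 1 ) != 1 )  /* one byte at a time, so a  */
            exit( 1 );                /* short read can't slip by  */
        hist[c & 0x07]++;             /* the same mask as before   */
    }
    close( fd );
    for( i = 0; i < 8; i++ )          /* each bucket ~ 100000      */
        printf( "%ld: %ld\n", i, hist[i] );
    return 0;
}

Each of the eight counts should hover around 100000.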
Of course, I agree that throwing away 5 bits in every byte of perfect
entropy that the kernel worked so hard to gather is just wrong, but the
randomness of the result is not the reason why.
> In the first place, the problem was to display the error of using
> an ANDing operation to truncate a random number. In the limit,
> one could AND with 0 and show that all randomness has been removed.
Not really... you just get a perfectly random uniform distribution over
the range [0..0] :)
--
Paulo Marques - http://www.grupopie.com
"A journey of a thousand miles begins with a single step."
Lao-tzu, The Way of Lao-tzu
On 10 Jan 2005, at 14:03, Paulo Marques wrote:
>> In the first place, the problem was to display the error of using
>> an ANDing operation to truncate a random number. In the limit,
>> one could AND with 0 and show that all randomness has been removed.
>
> Not really.. you just get a perfect random uniform distribution if the
> range [0..0] :)
I would say that a sample space (Omega) of one unique element cannot be
considered truly random, because if you perform the random experiment
you will always get that unique sample, and thus p(sample) = p(Omega)
= 1.
Let Omega = { 0 }; then p(Omega) = p(0) = 1, which I wouldn't consider
random at all.
linux-os <[email protected]> writes:
> In the first place, the problem was to display the error of using
> an ANDing operation to truncate a random number.
In the first place, you began by claiming that the number of zero bits
should equal the number of one bits. I explained how that was wrong,
and now you are saying you meant something else.
Fine, but you are still wrong; it is not an error to use AND.
/dev/random is just a stream of bits. Each bit behaves like a coin
toss. When you group the bits into bytes and then AND each byte with
1, you merely examine every eighth bit and throw away the rest. Each
bit you examine still behaves just like a coin toss.
So ANDing with 1 produces fine output if you are trying to simulate a
fair coin. Counter to your original claim, the output of your program
is completely consistent with the theory on this.
> In the limit, one could AND with 0 and show that all randomness has
> been removed.
Since that throws away ALL of the bits, of course it "removes the
randomness". Yes, each byte from /dev/random is a perfectly random
number between 0 and 255. If you AND with 1, you get a perfectly
random number between 0 and 1. If you AND with 0, you no longer get a
random number. Awesome!
> The small number of samples was DELIBERATELY used to exacerbate the
> problem, although a number of nay-sayers jumped on this in an attempt
> to prove that I don't know what I'm talking about.
An impression you are reinforcing with every message. There is
nothing wrong with the randomness from /dev/random, nor with the
odd/even randomness in your sample program's output. Other than
"trust me", "try it with a real coin", or "read a book", I am not sure
how to convince you of this... So I will probably stop trying.
- Pat
> In the first place, the problem was to display the error of using
> an ANDing operation to truncate a random number.
Except there is no error.
> In the limit,
> one could AND with 0 and show that all randomness has been removed.
Of course. Any time you truncate something, you are removing something from
it.
> However, those who know nothing about the theory would then
> probably jump upon this as a "special case" even though it isn't.
Nope, no special case. Truncate all the way, remove everything. Truncate
part of the way, remove something.
If you have a random number between 0 and 32767, and you want a random
number between 0 and 255, you are going to have to remove some of the
randomness from the input number. So long as the input random numbers are
uniform and the truncation maps the same number of inputs to each output,
any truncation scheme is as good as any other. Specifically, ANDing is as
good as dividing, is as good as any other scheme as far as the quality of
the output is concerned.
Where things get complicated is when the number of possible outputs does
not divide evenly into the number of possible inputs. For example,
truncating a random number between 0 and 32767 to one between 0 and 9:
10 does not divide 32768 evenly. There are algorithms to do this, but
ANDing alone is insufficient; see the sketch below.
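One standard approach is rejection sampling. A sketch (untested;
rand15() is a hypothetical stand-in for your uniform 15-bit source):

/* Draw a uniform value in [0..9] from a uniform source in
 * [0..32767]: accept only inputs below the largest multiple of 10
 * that fits (32760), then take the remainder. Each output then
 * corresponds to exactly 3276 accepted inputs. */
unsigned truncate_to_10( unsigned (*rand15)( void ) ) {
    const unsigned limit = 32768 - (32768 % 10);   /* 32760 */
    unsigned r;
    do {
        r = rand15();          /* uniform in [0..32767] */
    } while( r >= limit );     /* reject the uneven tail */
    return r % 10;             /* exactly uniform in [0..9] */
}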
DS
On Mon, Jan 10, 2005 at 07:41:02AM -0500, linux-os wrote:
> one could AND with 0 and show that all randomness has been removed.
Zero removes all bits so it's a special case.
As long as 1 bit is left coming from /dev/*random and not your
application, you're guaranteed that single bit to be random (since you
didn't mask it).
It's as if I read 100 bytes from /dev/random and then truncate the
last 99: the byte that remains is as random as reading a single byte.
Random means every single bit is random too, not only the bytes as a
whole.