2001-07-26 21:47:37

by Simon Kirby

Subject: read() details

Just some things I've always wondered about...

Is it safe to assume that when a single read() call of x bytes from a file
(the file being locked against other processes appending to it) returns
fewer than x bytes, the next read() will always return 0? If so, is it
portable to make such an assumption?

...Or is it always better to make sure read() returns 0 before assuming
EOF, perhaps because the kernel may want to promote contiguous-page
read()s or for some other reason?
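
(Not from the original mails: a minimal sketch of the loop-until-zero
approach, assuming an ordinary blocking file descriptor. read_full() is
a name made up for illustration.)

    #include <errno.h>
    #include <unistd.h>

    ssize_t read_full(int fd, void *buf, size_t count)
    {
        size_t total = 0;

        while (total < count) {
            ssize_t n = read(fd, (char *)buf + total, count - total);

            if (n == 0)
                break;              /* EOF: only a return of 0 is definitive */
            if (n < 0) {
                if (errno == EINTR)
                    continue;       /* interrupted, just retry */
                return -1;          /* real error */
            }
            total += n;             /* short read: not necessarily EOF, keep going */
        }
        return (ssize_t)total;
    }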

On a related note, would there be a win in altering the first read() size
(at the beginning of a read loop) to allow the kernel to serve the
subsequent read requests from contiguous pages? (This is assuming that
an lseek() happened first which would misalign the further read()s with
page boundaries.)
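
(A hypothetical sketch of the alignment idea, assuming the current offset
is the one left by the earlier lseek(); first_read_len() is an invented
name, and whether this helps at all would need measuring.)

    #include <unistd.h>

    /* Size the first read so that later reads of page-sized chunks
     * start on page boundaries again. */
    static size_t first_read_len(int fd, size_t want)
    {
        long page = sysconf(_SC_PAGESIZE);
        off_t pos = lseek(fd, 0, SEEK_CUR);  /* offset left by the earlier lseek() */
        size_t to_boundary;

        if (page <= 0 || pos <= 0 || pos % page == 0)
            return want;                     /* unknown page size or already aligned */

        to_boundary = (size_t)(page - pos % page);
        return to_boundary < want ? to_boundary : want;
    }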

Simon-

[ Stormix Technologies Inc. ][ NetNation Communications Inc. ]
[ [email protected] ][ [email protected] ]
[ Opinions expressed are not necessarily those of my employers. ]


2001-07-26 22:23:28

by Alan

Subject: Re: read() details

> Is it safe to assume that when a single read() call of x bytes from a file
> (the file being locked against other processes appending to it) returns
> fewer than x bytes, the next read() will always return 0? If so, is it

No. Posix allows any read to be interrupted. Unix doesn't do this. Even so,
another writer in parallel on the same file will cause what you describe.
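
(Again just a sketch, not anything from the thread: the Posix
interruption case shows up as read() returning -1 with errno set to
EINTR when a signal handler was installed without SA_RESTART, and the
call then has to be reissued by hand.)

    #include <errno.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void on_signal(int sig)
    {
        (void)sig;                  /* nothing to do, just interrupt the syscall */
    }

    ssize_t interruptible_read(int fd, void *buf, size_t count)
    {
        struct sigaction sa;
        ssize_t n;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_signal;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;            /* deliberately no SA_RESTART */
        sigaction(SIGALRM, &sa, NULL);

        do {
            n = read(fd, buf, count);
        } while (n < 0 && errno == EINTR);   /* restart manually */

        return n;
    }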

2001-07-26 22:31:38

by Simon Kirby

Subject: Re: read() details

On Thu, Jul 26, 2001 at 11:24:28PM +0100, Alan Cox wrote:

> > Is it safe to assume that when a single read() call of x bytes from a file
> > (the file being locked against other processes appending to it) returns
> > fewer than x bytes, the next read() will always return 0? If so, is it
>
> No. Posix allows any read to be interrupted. Unix doesn't do this. Even so,
> another writer in parallel on the same file will cause what you describe.

Well, I was meaning to imply that reads which are interrupted would
have to be manually restarted. But yes, there is also no guarantee that
huge reads will return the full requested size, which in effect makes
any don't-read-again-just-to-get-an-EOF optimization more trouble than
it would be worth.

Simon-

[ Stormix Technologies Inc. ][ NetNation Communications Inc. ]
[ [email protected] ][ [email protected] ]
[ Opinions expressed are not necessarily those of my employers. ]