Okay, looking at tcp_sendmsg a little more, it looks like it drops the
sock lock in wait_for_tcp_memory and re-acquires it afterwards, which is
probably where the interleaving gets in. I'm not sure if TCP should be
handling this or NFSD. From what little I know, TCP should serialize
requests it gets and atomically write them out, preventing interleaving,
and it looks like it doesn't do that.
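
As a rough illustration of what I think is going on (a user-space sketch
only, not the nfsd or kernel code; an AF_UNIX socketpair stands in for
the TCP connection and two plain send() calls stand in for however the
reply gets pushed out), two threads writing record-marked replies with
no serialization knock the receiver's record markers out of sync:

/* interleave_demo.c: user-space sketch of the suspected failure mode.
 * Two threads each write RPC record-marked "replies" to one stream
 * socket with no serialization.  Whenever the scheduler (or a short
 * write) splits one reply, the other thread's bytes land inside the
 * record, so the reader's next record marker points into garbage.
 * Build: cc -pthread interleave_demo.c
 */
#include <arpa/inet.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

#define LAST_FRAG 0x80000000u   /* "last fragment" bit of the record marker */
#define PAYLOAD   8192u
#define RECORDS   100

static int sv[2];               /* socketpair standing in for the TCP connection */

static void *send_replies(void *arg)    /* the broken, unlocked pattern */
{
    char payload[PAYLOAD];
    uint32_t marker = htonl(LAST_FRAG | PAYLOAD);

    memset(payload, (int)(uintptr_t)arg, sizeof(payload));
    for (int i = 0; i < RECORDS; i++) {
        /* Nothing stops the other thread from writing between (or
         * inside) these two calls; that is the interleaving. */
        send(sv[0], &marker, sizeof(marker), 0);
        send(sv[0], payload, sizeof(payload), 0);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    pthread_create(&a, NULL, send_replies, (void *)(uintptr_t)'A');
    pthread_create(&b, NULL, send_replies, (void *)(uintptr_t)'B');

    /* Reader plays the client: walk the stream record by record and
     * check that each record is the single repeated byte a well-formed
     * reply would be. */
    for (int r = 0; r < 2 * RECORDS; r++) {
        uint32_t m, len;
        char buf[PAYLOAD];

        if (recv(sv[1], &m, sizeof(m), MSG_WAITALL) != sizeof(m))
            break;
        len = ntohl(m) & ~LAST_FRAG;
        if (len != PAYLOAD) {
            printf("record %d: bogus length %u, stream is out of sync\n", r, len);
            return 0;           /* the state the client resets from */
        }
        recv(sv[1], buf, len, MSG_WAITALL);
        for (uint32_t i = 1; i < len; i++) {
            if (buf[i] != buf[0]) {
                printf("record %d: interleaved payload\n", r);
                break;
            }
        }
    }
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

The reader here typically falls out of sync within the first few
records; the real question is whether the same window exists inside a
single tcp_sendmsg call once the sock lock is dropped.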
- Shirish
----- Original Message -----
From: "Shirish Kalele" <[email protected]>
To: <[email protected]>; <[email protected]>
Sent: Wednesday, October 17, 2001 3:50 AM
Subject: [NFS] NFSD over TCP: TCP broken?
> Hi,
>
> I've been looking at running nfsd over tcp on Linux. I modified the #ifdef
> so that nfsd uses tcp. I also made writes to the socket blocking, so that
> the thread blocks till the entire reply has been accepted by TCP. (I know
> the right way is going to be to have an independent thread whose job would
> be to just pick replies off a queue and block on sending them to tcp, but
> this is what I've done temporarily.)
>
> Then I tried to copy a directory from a Solaris client to the Linux server
> using nfsv3 over tcp. This took a long time, with lots of delays where
> nothing was being transferred.
>
> Looking at the network traces, it looks like the RPC records being sent
> over TCP are inconsistent with the lengths specified in the record
> marker. This happens mainly when 3-4 requests arrive one after the
> other and you have 3-4 threads replying to these requests in parallel.
> It looks like TCP gets hopelessly confused and botches up the replies
> being sent. I point my finger at TCP because tcp_sendmsg returns a
> valid length indicating that the entire reply was accepted, but the TCP
> sequence numbers show that the RPC record sent on the wire wasn't as
> long as the length accepted by TCP. After a while, the client realizes
> it's out of sync when it gets an invalid RPC record marker, and resets
> and reconnects. This repeats multiple times.
>
> Is TCP known to break when multiple threads try to send data down the
> pipe simultaneously? Is there a known fix for this? Where should I be
> focusing to fix the problem?
>
> I'm not on the list, so please include me in replies.
>
> Thanks,
> Shirish
>
>
>
> _______________________________________________
> NFS maillist - [email protected]
> https://lists.sourceforge.net/lists/listinfo/nfs
>
Hello!
> where the interleaving gets in.
I do not think that you diagnosed the problem correctly.
nfsd uses non-blocking I/O, and a write to TCP is strictly atomic in this case.
> I'm not sure if TCP should be handling this
> or NFSD. From what little I know, TCP should serialize requests it gets and
> atomically write them out,
However, it does not, and it should not. As with concurrent write()s
to any other file, the result is unpredictably interleaved data.
Alexey
>>>>> " " == kuznet <[email protected]> writes:
> Hello!
>> where the interleaving gets in.
> I do not think that you diagnosed the problem correctly. nfsd
> used non blocking io and write to tcp is strictly atomic in
> this case.
Some of the patches that attempted to fix the nfsd server code relied
on making the TCP sends blocking. I've seen several such patches
floating around that ignore the fact that the socket lock is dropped
when the IPv4 socket code sleeps.
In any case, even with nonblocking TCP, one has to protect the socket
until the entire message has been sent. Otherwise we risk seeing
another thread racing for the socket while we're doing whatever needs
to be done to clear the -EAGAIN.
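In user-space terms the rule looks something like the sketch below
(illustration only, not the sunrpc code; the names are made up): one
lock per socket, taken before the record marker goes out and released
only after the last byte of the reply has been accepted, with the
-EAGAIN/short-write looping done while the lock is still held.

/* Sketch of "protect the socket until the entire message has been
 * sent": one mutex per socket, held across the whole record, including
 * any retries needed to clear a short write or -EAGAIN. */
#include <arpa/inet.h>
#include <errno.h>
#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>

#define LAST_FRAG 0x80000000u

struct reply_sock {
    int             fd;     /* nonblocking TCP socket */
    pthread_mutex_t lock;   /* serializes whole records, not single send()s */
};

/* Push the whole buffer, looping over short writes and EAGAIN. */
static int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;

    while (len > 0) {
        ssize_t n = send(fd, p, len, MSG_NOSIGNAL);

        if (n > 0) {
            p += n;
            len -= (size_t)n;
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* Wait for buffer space.  The record lock stays held, so no
             * other reply can slip onto the wire in the meantime. */
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            poll(&pfd, 1, -1);
        } else if (n < 0 && errno == EINTR) {
            continue;
        } else {
            return -1;
        }
    }
    return 0;
}

/* Send one record-marked reply atomically with respect to other threads. */
int send_rpc_record(struct reply_sock *rs, const void *reply, uint32_t len)
{
    uint32_t marker = htonl(LAST_FRAG | len);
    int err;

    pthread_mutex_lock(&rs->lock);
    err = send_all(rs->fd, &marker, sizeof(marker));
    if (err == 0)
        err = send_all(rs->fd, reply, len);
    pthread_mutex_unlock(&rs->lock);
    return err;
}

Dropping the lock around the poll()/retry reopens exactly the race:
another thread's reply gets spliced into the middle of this record and
the client's record marking falls apart.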
Cheers,
Trond