2010-03-29 18:36:47

by Daniel J Blueman

Subject: [2.6.34-rc2 NFS4 oops] open error path failure...

Hi Trond,

When open fails and should return EPERM [1], instead we see an oops
[2]. I see this on 2.6.34-rc1 and -rc2 mainline; NFS4 server is
mainline 2.6.33.1.

Let me know if you can't reproduce it and I'll provide some analysis
from this end.

Thanks,
Daniel

--- [1]

$ touch /tmp/foo
$ sudo chmod 0 /tmp/foo
$ cat /tmp/foo
cat: /tmp/foo: Permission denied

$ touch /net/users/daniel/foo
$ sudo chmod 0 /net/users/daniel/foo
$ cat /net/users/daniel/foo
Killed

--- [2]

BUG: unable to handle kernel NULL pointer dereference at 000000000000000b
IP: [<ffffffff8115036e>] nameidata_to_filp+0x1e/0x70
PGD 840d3067 PUD 841eb067 PMD 0
Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
last sysfs file: /sys/devices/system/cpu/cpu7/cache/index2/shared_cpu_map
CPU 4
Modules linked in: usb_storage nls_iso8859_1 nls_cp437 vfat fat
binfmt_misc rfcomm sco bnep l2cap kvm_intel kvm microcode btusb
bluetooth arc4 ecb uvcvideo iwlagn videodev iwlcore v4l1_compat
v4l2_compat_ioctl32 mac80211

Pid: 20644, comm: perf Tainted: G W 2.6.34-rc2-317cd #2
0KM426/Studio 1557
RIP: 0010:[<ffffffff8115036e>] [<ffffffff8115036e>] nameidata_to_filp+0x1e/0x70
RSP: 0018:ffff88008fd31d38 EFLAGS: 00010292
RAX: ffff88012c1825a0 RBX: fffffffffffffff3 RCX: 0000000000000041
RDX: 0000000000008000 RSI: 0000000000000024 RDI: ffff88008fd31e18
RBP: ffff88008fd31d48 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000008000
R13: 0000000000000000 R14: 0000000000000000 R15: ffff88012f8c3180
FS: 00007fd781585700(0000) GS:ffff880010800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 000000000000000b CR3: 00000000842a3000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process perf (pid: 20644, threadinfo ffff88008fd30000, task ffff88012c1825a0)
Stack:
ffff88008fd31e18 ffff88008fd31e18 ffff88008fd31da8 ffffffff8115e7f8
<0> ffff88008fd31d88 ffffffff81301018 ffff88012a121500 000000242a121500
<0> ffff88013250b6c0 ffff88008fd31e18 0000000000000000 ffff88012f8c3180
Call Trace:
[<ffffffff8115e7f8>] do_last+0x348/0x740
[<ffffffff81301018>] ? __raw_spin_lock_init+0x38/0x70
[<ffffffff81160de9>] do_filp_open+0x219/0x690
[<ffffffff816e59b9>] ? sub_preempt_count+0xa9/0xe0
[<ffffffff8116c912>] ? alloc_fd+0x132/0x1b0
[<ffffffff8114fcd9>] do_sys_open+0x69/0x140
[<ffffffff816e0d92>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8114fdf0>] sys_open+0x20/0x30
[<ffffffff81003172>] system_call_fastpath+0x16/0x1b
Code: cc 58 00 48 c7 c0 ea ff ff ff eb 82 90 55 48 89 e5 53 48 83 ec
08 0f 1f 44 00 00 48 8b 9f 90 00 00 00 65 48 8b 04 25 40 b5 00 00 <48>
83 7b 18 00 4c 8b 80 18 06 00 00 74 14 e8 2f c4 00 00 48 89
RIP [<ffffffff8115036e>] nameidata_to_filp+0x1e/0x70
RSP <ffff88008fd31d38>
CR2: 000000000000000b
---[ end trace 525ed01cba1a212e ]---
--
Daniel J Blueman


2010-03-31 14:23:09

by Chuck Lever

Subject: Re: [2.6.34-rc2 NFS4 oops] open error path failure...

On 03/31/2010 07:20 AM, Daniel J Blueman wrote:
> Talking of expensive, I see latencytop show >16000ms latency for
> writing pages when I have a workload that does large buffered I/O to
> an otherwise uncongested server. The gigabit network is saturated, and
> reads often stall for 1000-4000ms (!). Client has the default 16 TCP
> request slots, and server has 8 nfsds - the server is far from disk or
> processor-saturated. I'll see if there is any useful debugging I can
> get about this.

That latency is pretty much guaranteed to be due to a long RPC backlog
queue on the client. Bumping the size of the slot table to 128 and
increasing the number of NFSD threads may help.

--
chuck[dot]lever[at]oracle[dot]com
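[Editorial note: a sketch of the tuning Chuck suggests, for kernels of this era. The sysctl name and `rpc.nfsd` invocation are real; the persistent-config file path is an assumption that varies by distribution.]

```shell
# Client: raise the RPC slot table from the default 16 to 128.
# Only affects mounts established after the change, so remount afterwards.
sysctl -w sunrpc.tcp_slot_table_entries=128

# Make the client setting persist across reboots
# (file name is an assumed convention; any /etc/sysctl.d/ file works).
echo 'sunrpc.tcp_slot_table_entries = 128' >> /etc/sysctl.d/90-nfs-client.conf

# Server: run more nfsd threads (here 64 instead of the default 8).
rpc.nfsd 64
```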

2010-03-31 11:20:56

by Daniel J Blueman

Subject: Re: [2.6.34-rc2 NFS4 oops] open error path failure...

On Mon, Mar 29, 2010 at 10:22 PM, Trond Myklebust
<[email protected]> wrote:
> On Mon, 2010-03-29 at 20:03 +0100, Al Viro wrote:
>> On Mon, Mar 29, 2010 at 07:36:45PM +0100, Daniel J Blueman wrote:
>> > Hi Trond,
>> >
>> > When open fails and should return EPERM [1], instead we see an oops
>> > [2]. I see this on 2.6.34-rc1 and -rc2 mainline; NFS4 server is
>> > mainline 2.6.33.1.
>> >
>> > Let me know if you can't reproduce it and I'll provide some analysis
>> > from this end.
>>
>> Joy... ERR_PTR(-EPERM) in nd.intent.file, and whoever had called
>> lookup_instantiate_filp() hadn't bothered to check the return value.
>>
>> OK, I think I see what's going on. Replace
>>                 lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
>>                 return 1;
>> with
>>                 lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
>>                 return state;
>> in fs/nfs/nfs4proc.c:nfs4_open_revalidate() and see if everything works
>> properly (or just lose the lookup_instantiate_filp() in there and simply
>> return state).
>
> So this raises a point. Originally, the d_revalidate() call was required
> to return a boolean 0 or 1. Nowadays it allows the filesystem to return
> an error value instead.
>
> Should we therefore rewrite the NFS implementation to propagate errors
> like ESTALE (when it means the parent directory is gone), EACCES, EPERM
> and EIO instead of the current behaviour of just dropping the dentry and
> hence forcing a lookup?

Passing the error back without forcing a lookup sounds like a good
win, if it can avoid a comparatively expensive roundtrip to the server
(iff the dentry is fresh enough). Is this possible?

Talking of expensive, I see latencytop show >16000ms latency for
writing pages when I have a workload that does large buffered I/O to
an otherwise uncongested server. The gigabit network is saturated, and
reads often stall for 1000-4000ms (!). Client has the default 16 TCP
request slots, and server has 8 nfsds - the server is far from disk or
processor-saturated. I'll see if there is any useful debugging I can
get about this.

Thanks,
Daniel
--
Daniel J Blueman

2010-03-29 21:22:26

by Trond Myklebust

Subject: Re: [2.6.34-rc2 NFS4 oops] open error path failure...

On Mon, 2010-03-29 at 20:03 +0100, Al Viro wrote:
> On Mon, Mar 29, 2010 at 07:36:45PM +0100, Daniel J Blueman wrote:
> > Hi Trond,
> >
> > When open fails and should return EPERM [1], instead we see an oops
> > [2]. I see this on 2.6.34-rc1 and -rc2 mainline; NFS4 server is
> > mainline 2.6.33.1.
> >
> > Let me know if you can't reproduce it and I'll provide some analysis
> > from this end.
>
> Joy... ERR_PTR(-EPERM) in nd.intent.file, and whoever had called
> lookup_instantiate_filp() hadn't bothered to check the return value.
>
> OK, I think I see what's going on. Replace
> lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
> return 1;
> with
> lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
> return state;
> in fs/nfs/nfs4proc.c:nfs4_open_revalidate() and see if everything works
> properly (or just lose the lookup_instantiate_filp() in there and simply
> return state).
>

So this raises a point. Originally, the d_revalidate() call was required
to return a boolean 0 or 1. Nowadays it allows the filesystem to return
an error value instead.

Should we therefore rewrite the NFS implementation to propagate errors
like ESTALE (when it means the parent directory is gone), EACCES, EPERM
and EIO instead of the current behaviour of just dropping the dentry and
hence forcing a lookup?

Trond


2010-03-29 19:03:09

by Al Viro

Subject: Re: [2.6.34-rc2 NFS4 oops] open error path failure...

On Mon, Mar 29, 2010 at 07:36:45PM +0100, Daniel J Blueman wrote:
> Hi Trond,
>
> When open fails and should return EPERM [1], instead we see an oops
> [2]. I see this on 2.6.34-rc1 and -rc2 mainline; NFS4 server is
> mainline 2.6.33.1.
>
> Let me know if you can't reproduce it and I'll provide some analysis
> from this end.

Joy... ERR_PTR(-EPERM) in nd.intent.file, and whoever had called
lookup_instantiate_filp() hadn't bothered to check the return value.

OK, I think I see what's going on. Replace
lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
return 1;
with
lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
return state;
in fs/nfs/nfs4proc.c:nfs4_open_revalidate() and see if everything works
properly (or just lose the lookup_instantiate_filp() in there and simply
return state).


2010-03-29 19:21:59

by Daniel J Blueman

Subject: Re: [2.6.34-rc2 NFS4 oops] open error path failure...

Hi Al,

On Mon, Mar 29, 2010 at 8:03 PM, Al Viro <[email protected]> wrote:
> On Mon, Mar 29, 2010 at 07:36:45PM +0100, Daniel J Blueman wrote:
>> Hi Trond,
>>
>> When open fails and should return EPERM [1], instead we see an oops
>> [2]. I see this on 2.6.34-rc1 and -rc2 mainline; NFS4 server is
>> mainline 2.6.33.1.
>>
>> Let me know if you can't reproduce it and I'll provide some analysis
>> from this end.
>
> Joy... ERR_PTR(-EPERM) in nd.intent.file, and whoever had called
> lookup_instantiate_filp() hadn't bothered to check the return value.
>
> OK, I think I see what's going on. Replace
>                 lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
>                 return 1;
> with
>                 lookup_instantiate_filp(nd, (struct dentry *)state, NULL);
>                 return state;
> in fs/nfs/nfs4proc.c:nfs4_open_revalidate() and see if everything works
> properly (or just lose the lookup_instantiate_filp() in there and simply
> return state).

That did the trick!

Looks like I should have reported this when I first encountered it,
but I guess it goes to show there needs to be some NFS validation
(LTP?). I'm still trying to chase down a rare NFSv4 dentry "Stale NFS
file handle" issue I've seen.

Thanks,
Daniel
--
Daniel J Blueman

2010-04-02 10:34:28

by Daniel J Blueman

Subject: Re: [2.6.34-rc2 NFS4 oops] open error path failure...

On Wed, Mar 31, 2010 at 3:21 PM, Chuck Lever <[email protected]> wrote:
> On 03/31/2010 07:20 AM, Daniel J Blueman wrote:
>>
>> Talking of expensive, I see latencytop show >16000ms latency for
>> writing pages when I have a workload that does large buffered I/O to
>> an otherwise uncongested server. The gigabit network is saturated, and
>> reads often stall for 1000-4000ms (!). Client has the default 16 TCP
>> request slots, and server has 8 nfsds - the server is far from disk or
>> processor-saturated. I'll see if there is any useful debugging I can
>> get about this.
>
> That latency is pretty much guaranteed to be due to a long RPC backlog queue
> on the client. Bumping the size of the slot table to 128 and increasing the
> number of NFSD threads may help.

Increasing these values did help quite a bit, though I was still
seeing 5000-8000ms at nfs_wait_bit_uninterruptible() [1] and close().
Then again, I also was seeing 'Scheduler waiting for cpu' taking up to
3000ms! I suspect processor throttling, due to exceeding thermal
limits.

Thanks,
Daniel

--- [1]

nfs_wait_bit_uninterruptible
nfs_wait_on_request
nfs_wait_in_requests_locked
nfs_sync_mapping_wait
nfs_write_mapping
nfs_wb_nocommit
nfs_getattr
vfs_getattr
vfs_fstat
sys_newfstat
system_call_fastpath
--
Daniel J Blueman