2010-03-08 18:20:28

by J.Bruce Fields

Subject: Re: [PATCH 3/3] mountd: fix crossmnt options in v2/v3 case

On Mon, Mar 08, 2010 at 10:10:14AM +1100, Neil Brown wrote:
> On Sun, 7 Mar 2010 16:58:26 -0500
> "J. Bruce Fields" <[email protected]> wrote:
> > Don't we have this problem already, then? The export cache really is
> > just a cache, and we should be prepared for a given entry to be purged
> > at any time.
> >
>
> Yes, it is a cache, but things don't get pushed out when it is 'full', so you
> can expect items to stay until their expiry time.

At least for some caches we may want some sort of pruning when they get
too full.

If we do that, then we may want to vary the behavior from cache to
cache, hence keeping the export cache protected from this deadlock.
Though I'd rather avoid that kind of special case if we could.

> So if you add something
> with an expiry 30 minutes in the future, you can usually expect it to still
> be there in 30 seconds.
>
> If someone ran "exportfs -f"

or "exportfs -r"?

> at exactly the wrong time you might be able to deadlock mountd (I'd
> have to double-check to be sure), but that is very different from
> mountd acting in a way which leads directly to a deadlock.

Yeah, agreed. It's still a bug--people are supposed to be able to do
that without deadlock--but, sure, it may be unbelievably hard to hit.

So, I need to look harder at this, thanks for the review. (May take a
few days, though, so if someone else wants to volunteer, feel free.)

I wonder how best to eliminate the deadlock:

- Allow fh downcall to return -EAGAIN, let mountd change that to
a JUKEBOX or dropped rpc?
- Require mountd run two processes, with one dedicated to
upcalls?
- Separate mountd entirely into an upcall-handling daemon and an
rpc daemon?

Occasionally we seem to hear from a security-conscious administrator
who's running v4-only and is irritated that they have to firewall off
mountd instead of just being able to kill it entirely. The latter might
reassure them, I suppose.

--b.


2010-03-08 18:22:08

by J.Bruce Fields

Subject: Re: [PATCH 3/3] mountd: fix crossmnt options in v2/v3 case

On Mon, Mar 08, 2010 at 01:21:48PM -0500, J. Bruce Fields wrote:
> So, I need to look harder at this, thanks for the review.

By the way, Steve, the first two patches should still be fine, if you
want to apply those.

--b.

2010-03-08 18:32:41

by Chuck Lever III

Subject: Re: [PATCH 3/3] mountd: fix crossmnt options in v2/v3 case

On 03/08/2010 01:21 PM, J. Bruce Fields wrote:
> Occasionally we seem to hear from a security-conscious administrator
> who's running v4-only and is irritated that they have to firewall off
> mountd instead of just being able to kill it entirely. The latter might
> reassure them, I suppose.

I think Jeff had the idea of having mountd simply not set up its RPC
listeners in that case. That looks easy to do.

--
chuck[dot]lever[at]oracle[dot]com

2010-03-08 18:39:57

by J.Bruce Fields

Subject: Re: [PATCH 3/3] mountd: fix crossmnt options in v2/v3 case

On Mon, Mar 08, 2010 at 01:30:23PM -0500, Chuck Lever wrote:
> On 03/08/2010 01:21 PM, J. Bruce Fields wrote:
>> Occasionally we seem to hear from a security-conscious administrator
>> who's running v4-only and is irritated that they have to firewall off
>> mountd instead of just being able to kill it entirely. The latter might
>> reassure them, I suppose.
>
> I think Jeff had the idea of having mountd simply not set up its RPC
> listeners in that case. That looks easy to do.

Sure, makes sense.

But we might decide we want separate processes for servicing MOUNT
requests and export upcalls anyway.

In which case, call one rpc.mountd, the other nfsd-cache-helper, don't
bother running "rpc.mountd" in the v4-only case, and, yay, we never have
to answer the "why do I still have to run rpc.mountd?" question again.

I dunno.

--b.

2010-03-08 18:47:05

by Chuck Lever III

Subject: Re: [PATCH 3/3] mountd: fix crossmnt options in v2/v3 case

On 03/08/2010 01:41 PM, J. Bruce Fields wrote:
> On Mon, Mar 08, 2010 at 01:30:23PM -0500, Chuck Lever wrote:
>> On 03/08/2010 01:21 PM, J. Bruce Fields wrote:
>>> Occasionally we seem to hear from a security-conscious administrator
>>> who's running v4-only and is irritated that they have to firewall off
>>> mountd instead of just being able to kill it entirely. The latter might
>>> reassure them, I suppose.
>>
>> I think Jeff had the idea of having mountd simply not set up its RPC
>> listeners in that case. That looks easy to do.
>
> Sure, makes sense.
>
> But we might decide we want separate processes for servicing MOUNT
> requests and export upcalls anyway.
>
> In which case, call one rpc.mountd, the other nfsd-cache-helper, don't
> bother running "rpc.mountd" in the v4-only case, and, yay, we never have
> to answer the "why do I still have to run rpc.mountd?" question again.

Heh.

Separating these facilities would make mountd's RPC service
implementation more standard. Right now, it has to select() over the
RPC listeners _and_ the cache upcall fds. That means it has its own
"svc_run()" function, and thus doesn't use the RPC library's.

But it might get complicated if we want to retain functionality in
mountd for kernels that don't have the new export cache...

--
chuck[dot]lever[at]oracle[dot]com

2010-03-08 19:56:10

by Steve Dickson

Subject: Re: [PATCH 3/3] mountd: fix crossmnt options in v2/v3 case



On 03/08/2010 01:41 PM, J. Bruce Fields wrote:
> On Mon, Mar 08, 2010 at 01:30:23PM -0500, Chuck Lever wrote:
>> On 03/08/2010 01:21 PM, J. Bruce Fields wrote:
>>> Occasionally we seem to hear from a security-conscious administrator
>>> who's running v4-only and is irritated that they have to firewall off
>>> mountd instead of just being able to kill it entirely. The latter might
>>> reassure them, I suppose.
>>
>> I think Jeff had the idea of having mountd simply not set up its RPC
>> listeners in that case. That looks easy to do.
>
> Sure, makes sense.
>
> But we might decide we want separate processes for servicing MOUNT
> requests and export upcalls anyway.
Yes... separating the upcalls from the network calls would make things
much simpler... IMHO...

>
> In which case, call one rpc.mountd, the other nfsd-cache-helper, don't
> bother running "rpc.mountd" in the v4-only case, and, yay, we never have
> to answer the "why do I still have to run rpc.mountd?" question again.
Who says mountd has to be a long-lived daemon 100% of the time...
It could be used as a starting point for both the nfsv4listener process and
the RPC listener (i.e. mountd itself). Then mountd could realize it's
an nfsv4-only environment and simply die (once the nfsv4listener is started).

Just a thought...

steved.