v4:
1) creation and destruction of rpcbind clients now depends on the service
program versions' "vs_hidden" flag.
This patch set is required for further RPC layer virtualization, because rpcbind
clients have to be per network namespace.
To achieve this, we have to untie the network namespace from the rpcbind
clients' sockets.
The idea of this patch set is to make rpcbind clients non-static. I.e. rpcbind
clients will be created when the first RPC service is created, and destroyed
when the last RPC service is stopped.
With this patch set, rpcbind clients can be virtualized easily.
The following series consists of:
---
Stanislav Kinsbursky (8):
SUNRPC: introduce helpers for reference counted rpcbind clients
SUNRPC: use rpcbind reference counting helpers
SUNRPC: introduce svc helpers for preparing rpcbind infrastructure
SUNRPC: setup rpcbind clients if service requires it
SUNRPC: cleanup service destruction
NFSd: call svc rpcbind cleanup explicitly
SUNRPC: remove rpcbind clients creation during service registering
SUNRPC: remove rpcbind clients destruction on module cleanup
fs/nfsd/nfssvc.c | 2 +
include/linux/sunrpc/clnt.h | 2 +
include/linux/sunrpc/svc.h | 1 +
net/sunrpc/rpcb_clnt.c | 85 ++++++++++++++++++++++++++++---------------
net/sunrpc/sunrpc_syms.c | 3 --
net/sunrpc/svc.c | 48 +++++++++++++++++++++++-
6 files changed, 105 insertions(+), 36 deletions(-)
--
Signature
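To make the intent concrete, here is a minimal sketch of the lifecycle this
series aims for, with hypothetical call sites (rpcb_create_local() and
rpcb_put_local() are the helpers introduced by the patches; error handling
elided):

/*
 * Sketch only: how a service is expected to drive the refcounted
 * rpcbind clients after this series.
 */
static int example_service_start(void)
{
	/* The first service creates the rpcbind clients; later services
	 * just bump rpcb_users inside rpcb_create_local(). */
	return rpcb_create_local();
}

static void example_service_stop(void)
{
	/* The last put shuts the rpcbind clients down. */
	rpcb_put_local();
}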
On Tue, 20 Sep 2011 18:43:45 +0400
Stanislav Kinsbursky <[email protected]> wrote:
> 20.09.2011 18:24, Jeff Layton wrote:
> > On Tue, 20 Sep 2011 17:49:27 +0400
> > Stanislav Kinsbursky<[email protected]> wrote:
> >
> >> v5: fixed races with rpcb_users in rpcb_get_local()
> >>
> >> These helpers will be used for dynamic creation and destruction of rpcbind
> >> clients.
> >> Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
> >> clients have been created already, then we just increase rpcb_users.
> >>
> >> Signed-off-by: Stanislav Kinsbursky<[email protected]>
> >>
> >> ---
> >> net/sunrpc/rpcb_clnt.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
> >> 1 files changed, 53 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
> >> index e45d2fb..5f4a406 100644
> >> --- a/net/sunrpc/rpcb_clnt.c
> >> +++ b/net/sunrpc/rpcb_clnt.c
> >> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
> >> static struct rpc_clnt * rpcb_local_clnt;
> >> static struct rpc_clnt * rpcb_local_clnt4;
> >> +DEFINE_SPINLOCK(rpcb_clnt_lock);
> >> +unsigned int rpcb_users;
> >> +
> >> struct rpcbind_args {
> >> struct rpc_xprt * r_xprt;
> >> @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
> >> kfree(map);
> >> }
> >> +static int rpcb_get_local(void)
> >> +{
> >> + int cnt;
> >> +
> >> + spin_lock(&rpcb_clnt_lock);
> >> + if (rpcb_users)
> >> + rpcb_users++;
> >> + cnt = rpcb_users;
> >> + spin_unlock(&rpcb_clnt_lock);
> >> +
> >> + return cnt;
> >> +}
> >> +
> >> +void rpcb_put_local(void)
> >> +{
> >> + struct rpc_clnt *clnt = rpcb_local_clnt;
> >> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
> >> + int shutdown;
> >> +
> >> + spin_lock(&rpcb_clnt_lock);
> >> + if (--rpcb_users == 0) {
> >> + rpcb_local_clnt = NULL;
> >> + rpcb_local_clnt4 = NULL;
> >> + }
> >
> > In the function below, you mention that the above pointers are
> > protected by rpcb_create_local_mutex, but it looks like they get reset
> > here without that being held?
> >
>
> Assignment to them is protected by rpcb_create_local_mutex.
> Dereferencing them is protected by rpcb_clnt_lock.
>
That's probably a bug, but I haven't sat down to work through the logic.
> > Might it be simpler to just protect rpcb_users with the
> > rpcb_create_local_mutex and ensure that it's held whenever you call one
> > of these routines? None of these codepaths are particularly hot.
> >
>
> I just inherited this lock-mutex logic.
> Actually, you're right. These codepaths are used rarely.
> But are you sure that we need to remove this "speed-up" logic if we take into
> account that it was here already?
>
There are many ways to do this...
In general, it's difficult to get locking right, especially when you
start mixing multiple locks on related resources. Personally, I'd go
with a simpler scheme here. There's not much value in protecting this
counter with a spinlock when the other parts need to be protected by a
mutex. If you do decide to do it with multiple locks, then please do
document in comments how the locking is expected to work.
An alternate scheme might be to consider doing this with krefs, but I
haven't really considered whether that idiom makes sense here.
--
Jeff Layton <[email protected]>
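For illustration, a kref-based variant along the lines Jeff suggests might look
like the sketch below. This is only a sketch under the assumption that all
gets/puts are serialized by rpcb_create_local_mutex; the names reuse the
existing statics, and rpcb_release() is a hypothetical helper:

#include <linux/kref.h>
#include <linux/mutex.h>
#include <linux/sunrpc/clnt.h>

static struct kref rpcb_kref;	/* initialized with kref_init() at creation */

static void rpcb_release(struct kref *kref)
{
	/* Runs on the final put; the mutex held by every caller keeps
	 * the client pointers stable here. */
	if (rpcb_local_clnt4)
		rpc_shutdown_client(rpcb_local_clnt4);
	if (rpcb_local_clnt)
		rpc_shutdown_client(rpcb_local_clnt);
	rpcb_local_clnt = NULL;
	rpcb_local_clnt4 = NULL;
}

static int rpcb_get_local(void)
{
	/* Called under rpcb_create_local_mutex, so the zero check and
	 * the get cannot race with the final put. */
	if (atomic_read(&rpcb_kref.refcount) == 0)
		return 0;
	kref_get(&rpcb_kref);
	return 1;
}

void rpcb_put_local(void)
{
	mutex_lock(&rpcb_create_local_mutex);
	kref_put(&rpcb_kref, rpcb_release);
	mutex_unlock(&rpcb_create_local_mutex);
}

With a single mutex there is no spinlock/mutex mix to document, at the cost of
taking the mutex in the (rare) get/put paths.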
Trond, does this patch version suit you now? Or not?
Please comment somehow to let me know whether I may proceed with further
development or not.
Sorry for disturbing you (if so).
21.09.2011 13:07, Stanislav Kinsbursky wrote:
> v6:
> 1) added a write memory barrier to rpcb_set_local to make sure that the rpcbind
> clients become valid before the rpcb_users assignment
> 2) explicitly set rpcb_users to 1 instead of incrementing it (looks clearer from
> my pov).
>
> Notice: a write memory barrier after zeroing the rpcbind clients in rpcb_put_local()
> is not required, since no users of them are left. A new user (service) will create
> new clients before dereferencing them.
>
> These helpers will be used for dynamic creation and destruction of rpcbind
> clients.
> Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
> clients have been created already, then we just increase rpcb_users.
>
> Signed-off-by: Stanislav Kinsbursky<[email protected]>
>
> ---
> net/sunrpc/rpcb_clnt.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 files changed, 54 insertions(+), 0 deletions(-)
>
> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
> index e45d2fb..9fcdb42 100644
> --- a/net/sunrpc/rpcb_clnt.c
> +++ b/net/sunrpc/rpcb_clnt.c
> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
> static struct rpc_clnt * rpcb_local_clnt;
> static struct rpc_clnt * rpcb_local_clnt4;
>
> +DEFINE_SPINLOCK(rpcb_clnt_lock);
> +unsigned int rpcb_users;
> +
> struct rpcbind_args {
> struct rpc_xprt * r_xprt;
>
> @@ -161,6 +164,57 @@ static void rpcb_map_release(void *data)
> kfree(map);
> }
>
> +static int rpcb_get_local(void)
> +{
> + int cnt;
> +
> + spin_lock(&rpcb_clnt_lock);
> + if (rpcb_users)
> + rpcb_users++;
> + cnt = rpcb_users;
> + spin_unlock(&rpcb_clnt_lock);
> +
> + return cnt;
> +}
> +
> +void rpcb_put_local(void)
> +{
> + struct rpc_clnt *clnt = rpcb_local_clnt;
> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
> + int shutdown;
> +
> + spin_lock(&rpcb_clnt_lock);
> + if (--rpcb_users == 0) {
> + rpcb_local_clnt = NULL;
> + rpcb_local_clnt4 = NULL;
> + }
> + shutdown = !rpcb_users;
> + spin_unlock(&rpcb_clnt_lock);
> +
> + if (shutdown) {
> + /*
> + * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
> + */
> + if (clnt4)
> + rpc_shutdown_client(clnt4);
> + if (clnt)
> + rpc_shutdown_client(clnt);
> + }
> + return;
> +}
> +
> +static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
> +{
> + /* Protected by rpcb_create_local_mutex */
> + rpcb_local_clnt = clnt;
> + rpcb_local_clnt4 = clnt4;
> + smp_wmb();
> + rpcb_users = 1;
> + dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
> + "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
> + rpcb_local_clnt4);
> +}
> +
> /*
> * Returns zero on success, otherwise a negative errno value
> * is returned.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Best regards,
Stanislav Kinsbursky
Rpcbind clients' destruction during SUNRPC module removal is obsolete, since
those clients are now destroyed during the last RPC service shutdown.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/rpcb_clnt.c | 12 ------------
net/sunrpc/sunrpc_syms.c | 3 ---
2 files changed, 0 insertions(+), 15 deletions(-)
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index 9327305..80ddf55 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -1097,15 +1097,3 @@ static struct rpc_program rpcb_program = {
.version = rpcb_version,
.stats = &rpcb_stats,
};
-
-/**
- * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
- *
- */
-void cleanup_rpcb_clnt(void)
-{
- if (rpcb_local_clnt4)
- rpc_shutdown_client(rpcb_local_clnt4);
- if (rpcb_local_clnt)
- rpc_shutdown_client(rpcb_local_clnt);
-}
diff --git a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c
index 9d08091..8ec9778 100644
--- a/net/sunrpc/sunrpc_syms.c
+++ b/net/sunrpc/sunrpc_syms.c
@@ -61,8 +61,6 @@ static struct pernet_operations sunrpc_net_ops = {
extern struct cache_detail unix_gid_cache;
-extern void cleanup_rpcb_clnt(void);
-
static int __init
init_sunrpc(void)
{
@@ -102,7 +100,6 @@ out:
static void __exit
cleanup_sunrpc(void)
{
- cleanup_rpcb_clnt();
rpcauth_remove_module();
cleanup_socket_xprt();
svc_cleanup_xprt_sock();
20.09.2011 18:38, Myklebust, Trond wrote:
>> -----Original Message-----
>> From: Stanislav Kinsbursky [mailto:[email protected]]
>> Sent: Tuesday, September 20, 2011 10:35 AM
>> To: Myklebust, Trond
>> Cc: Schumaker, Bryan; [email protected]; Pavel Emelianov;
>> [email protected]; [email protected]; [email protected];
>> [email protected]; [email protected]
>> Subject: Re: [PATCH v4 1/8] SUNRPC: introduce helpers for reference
>> counted rpcbind clients
>>
>> 20.09.2011 18:14, Myklebust, Trond wrote:
>>
>>>>>
>>>>> Doesn't it need to be protected by rpcb_clnt_lock too?
>>>>>
>>>>
>>>> Nope, from my pov. It's protected by rpcb_create_local_mutex. I.e. no
>>>> one will change rpcb_users since it's zero. If it's non-zero, we
>>>> won't get to rpcb_set_local().
>>>
>>> OK, so you are saying that the rpcb_users++ below could be replaced by
>>> rpcb_users=1?
>>>
>>
>> Yes, you're right.
>>
>>> In that case, don't you need a smp_wmb() between the setting of
>>> rpcb_local_clnt/4 and the setting of rpcb_users? Otherwise, how do you
>>> guarantee that rpcb_users != 0 implies rpcb_local_clnt/4 != NULL?
>>>
>>
>> We check rpcb_users under the spinlock. Gaining the spinlock forces a memory
>> barrier, doesn't it?
>
> Yes, and so you don't need an smp_rmb() on the reader side. However, you still need to ensure that the processor which _sets_ the rpcb_users and rpcb_local_clnt/4 actually writes them in the correct order.
>
Yep, now I understand what you are talking about.
Will fix both places (set and put).
--
Best regards,
Stanislav Kinsbursky
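The ordering requirement being agreed on here can be summarized in a short
sketch (illustrative only; the reader side relies on the spinlock acquire for
ordering, so no smp_rmb() is needed):

	/* Writer (rpcb_set_local), serialized by rpcb_create_local_mutex: */
	rpcb_local_clnt  = clnt;
	rpcb_local_clnt4 = clnt4;
	smp_wmb();		/* publish the pointers... */
	rpcb_users = 1;		/* ...before the counter that advertises them */

	/* Reader (rpcb_get_local) tests rpcb_users under rpcb_clnt_lock;
	 * seeing rpcb_users != 0 then implies valid client pointers. */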
20.09.2011 18:58, Bryan Schumaker wrote:
> On 09/20/2011 10:43 AM, Stanislav Kinsbursky wrote:
>> 20.09.2011 18:24, Jeff Layton wrote:
>>> On Tue, 20 Sep 2011 17:49:27 +0400
>>> Stanislav Kinsbursky<[email protected]> wrote:
>>>
>>>> v5: fixed races with rpcb_users in rpcb_get_local()
>>>>
>>>> These helpers will be used for dynamic creation and destruction of rpcbind
>>>> clients.
>>>> Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
>>>> clients have been created already, then we just increase rpcb_users.
>>>>
>>>> Signed-off-by: Stanislav Kinsbursky<[email protected]>
>>>>
>>>> ---
>>>> net/sunrpc/rpcb_clnt.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>> 1 files changed, 53 insertions(+), 0 deletions(-)
>>>>
>>>> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
>>>> index e45d2fb..5f4a406 100644
>>>> --- a/net/sunrpc/rpcb_clnt.c
>>>> +++ b/net/sunrpc/rpcb_clnt.c
>>>> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
>>>> static struct rpc_clnt * rpcb_local_clnt;
>>>> static struct rpc_clnt * rpcb_local_clnt4;
>>>> +DEFINE_SPINLOCK(rpcb_clnt_lock);
>>>> +unsigned int rpcb_users;
>>>> +
>>>> struct rpcbind_args {
>>>> struct rpc_xprt * r_xprt;
>>>> @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
>>>> kfree(map);
>>>> }
>>>> +static int rpcb_get_local(void)
>>>> +{
>>>> + int cnt;
>>>> +
>>>> + spin_lock(&rpcb_clnt_lock);
>>>> + if (rpcb_users)
>>>> + rpcb_users++;
>>>> + cnt = rpcb_users;
>>>> + spin_unlock(&rpcb_clnt_lock);
>>>> +
>>>> + return cnt;
>>>> +}
>>>> +
>>>> +void rpcb_put_local(void)
>>>> +{
>>>> + struct rpc_clnt *clnt = rpcb_local_clnt;
>>>> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
>>>> + int shutdown;
>>>> +
>>>> + spin_lock(&rpcb_clnt_lock);
>>>> + if (--rpcb_users == 0) {
>>>> + rpcb_local_clnt = NULL;
>>>> + rpcb_local_clnt4 = NULL;
>>>> + }
>>>
>>> In the function below, you mention that the above pointers are
>>> protected by rpcb_create_local_mutex, but it looks like they get reset
>>> here without that being held?
>>>
>>
>> Assignment to them is protected by rpcb_create_local_mutex.
>> Dereferencing them is protected by rpcb_clnt_lock.
>
> Shouldn't you be using the same lock for assigning and dereferencing? Otherwise one thread could change these variables while another is using them.
Probably I wasn't clear in my previous explanation.
Actually, we use the spinlock only to make sure that the pointers are valid when
we dereference them. The synchronization point here is the rpcb_users counter.
IOW, we use these pointers only from svc code and only after the service has
already started. And with this patch set we can be sure that these pointers have
been created by the time the service starts.
But when this counter is zero, we can race on assigning those pointers. Creation
takes a lot of time, so we use a local mutex here instead of the spinlock.
Have I answered your question?
--
Best regards,
Stanislav Kinsbursky
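As Jeff requested earlier in the thread, the locking scheme Stanislav describes
could be captured in a comment block next to the variables, along these lines
(suggested wording, not part of the posted patches):

/*
 * Locking rules for the local rpcbind clients:
 *
 * - rpcb_clnt_lock protects rpcb_users and guards the client pointers
 *   while services are running.
 * - rpcb_create_local_mutex serializes creation, so rpcb_local_clnt and
 *   rpcb_local_clnt4 are only (re)assigned while rpcb_users == 0 and the
 *   mutex is held.
 * - rpcb_users is the synchronization point: a non-zero count observed
 *   under rpcb_clnt_lock implies the pointers are valid.
 */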
We have to call svc_rpcb_cleanup() explicitly from nfsd_last_thread(), since
this function is registered as the service shutdown callback and thus nobody
else will do it for us.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
fs/nfsd/nfssvc.c | 2 ++
include/linux/sunrpc/svc.h | 1 +
net/sunrpc/svc.c | 3 ++-
3 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index dc5a1bf..52cd976 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -256,6 +256,8 @@ static void nfsd_last_thread(struct svc_serv *serv)
nfsd_serv = NULL;
nfsd_shutdown();
+ svc_rpcb_cleanup(serv);
+
printk(KERN_WARNING "nfsd: last server has exited, flushing export "
"cache\n");
nfsd_export_flush();
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 223588a..5e71a30 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -401,6 +401,7 @@ struct svc_procedure {
/*
* Function prototypes.
*/
+void svc_rpcb_cleanup(struct svc_serv *serv);
struct svc_serv *svc_create(struct svc_program *, unsigned int,
void (*shutdown)(struct svc_serv *));
struct svc_rqst *svc_prepare_thread(struct svc_serv *serv,
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 407462f..252552a 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -367,11 +367,12 @@ static int svc_rpcb_setup(struct svc_serv *serv)
return 0;
}
-static void svc_rpcb_cleanup(struct svc_serv *serv)
+void svc_rpcb_cleanup(struct svc_serv *serv)
{
svc_unregister(serv);
rpcb_put_local();
}
+EXPORT_SYMBOL_GPL(svc_rpcb_cleanup);
static int svc_uses_rpcbind(struct svc_serv *serv)
{
20.09.2011 18:38, Myklebust, Trond wrote:
>> -----Original Message-----
>> From: Stanislav Kinsbursky [mailto:[email protected]]
>> Sent: Tuesday, September 20, 2011 10:35 AM
>> To: Myklebust, Trond
>> Cc: Schumaker, Bryan; [email protected]; Pavel Emelianov;
>> [email protected]; [email protected]; [email protected];
>> [email protected]; [email protected]
>> Subject: Re: [PATCH v4 1/8] SUNRPC: introduce helpers for reference
>> counted rpcbind clients
>>
>> 20.09.2011 18:14, Myklebust, Trond wrote:
>>
>>>>>
>>>>> Doesn't it need to be protected by rpcb_clnt_lock too?
>>>>>
>>>>
>>>> Nope, from my pov. It's protected by rpcb_create_local_mutex. I.e. no
>>>> one will change rpcb_users since it's zero. If it's non-zero, we
>>>> won't get to rpcb_set_local().
>>>
>>> OK, so you are saying that the rpcb_users++ below could be replaced by
>>> rpcb_users=1?
>>>
>>
>> Yes, you're right.
>>
>>> In that case, don't you need a smp_wmb() between the setting of
>>> rpcb_local_clnt/4 and the setting of rpcb_users? Otherwise, how do you
>>> guarantee that rpcb_users != 0 implies rpcb_local_clnt/4 != NULL?
>>>
>>
>> We check rpcb_users under the spinlock. Gaining the spinlock forces a memory
>> barrier, doesn't it?
>
> Yes, and so you don't need an smp_rmb() on the reader side. However, you still need to ensure that the processor which _sets_ the rpcb_users and rpcb_local_clnt/4 actually writes them in the correct order.
>
Trond, I've thought again and realized that even if these writes (rpcb_users
and rpcb_local_clnt/4) are done in reversed order, no barrier is required
here.
Because if we have another process trying to create those clients (it can't use
them since it hasn't started yet) on another CPU, then it's waiting on the
creation mutex. When it gains the mutex, we will have the required memory barrier.
Or did I miss something in your explanation?
--
Best regards,
Stanislav Kinsbursky
20.09.2011 18:41, Myklebust, Trond wrote:
>> -----Original Message-----
>> From: Jeff Layton [mailto:[email protected]]
>> Sent: Tuesday, September 20, 2011 10:25 AM
>> To: Stanislav Kinsbursky
>> Cc: Myklebust, Trond; [email protected]; Pavel Emelianov;
>> [email protected]; [email protected]; [email protected];
>> [email protected]; [email protected]
>> Subject: Re: [PATCH v5 1/8] SUNRPC: introduce helpers for reference
>> counted rpcbind clients
>>
>> On Tue, 20 Sep 2011 17:49:27 +0400
>> Stanislav Kinsbursky<[email protected]> wrote:
>>
>>> v5: fixed races with rpcb_users in rpcb_get_local()
>>>
>>> These helpers will be used for dynamic creation and destruction of
>>> rpcbind clients.
>>> Variable rpcb_users is actually a counter of launched RPC services.
>>> If rpcbind clients have been created already, then we just increase
>>> rpcb_users.
>>>
>>> Signed-off-by: Stanislav Kinsbursky<[email protected]>
>>>
>>> ---
>>> net/sunrpc/rpcb_clnt.c | 53
>> ++++++++++++++++++++++++++++++++++++++++++++++++
>>> 1 files changed, 53 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c index
>>> e45d2fb..5f4a406 100644
>>> --- a/net/sunrpc/rpcb_clnt.c
>>> +++ b/net/sunrpc/rpcb_clnt.c
>>> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
>>> static struct rpc_clnt * rpcb_local_clnt;
>>> static struct rpc_clnt * rpcb_local_clnt4;
>>> +DEFINE_SPINLOCK(rpcb_clnt_lock);
>>> +unsigned int rpcb_users;
>>> +
>>> struct rpcbind_args {
>>> struct rpc_xprt * r_xprt;
>>> @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
>>> kfree(map);
>>> }
>>> +static int rpcb_get_local(void)
>>> +{
>>> + int cnt;
>>> +
>>> + spin_lock(&rpcb_clnt_lock);
>>> + if (rpcb_users)
>>> + rpcb_users++;
>>> + cnt = rpcb_users;
>>> + spin_unlock(&rpcb_clnt_lock);
>>> +
>>> + return cnt;
>>> +}
>>> +
>>> +void rpcb_put_local(void)
>>> +{
>>> + struct rpc_clnt *clnt = rpcb_local_clnt;
>>> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
>>> + int shutdown;
>>> +
>>> + spin_lock(&rpcb_clnt_lock);
>>> + if (--rpcb_users == 0) {
>>> + rpcb_local_clnt = NULL;
>>> + rpcb_local_clnt4 = NULL;
>>> + }
>>
>> In the function below, you mention that the above pointers are
>> protected by rpcb_create_local_mutex, but it looks like they get reset
>> here without that being held?
>>
>> Might it be simpler to just protect rpcb_users with the
>> rpcb_create_local_mutex and ensure that it's held whenever you call
>> one of these routines? None of these codepaths are particularly hot.
>
> Alternatively, if you do
>
> if (rpcb_users == 1) {
> rpcb_local_clnt = NULL;
> rpcb_local_clnt4 = NULL;
> smp_wmb();
> rpcb_users = 0;
> } else
> rpcb_users--;
>
> then the spinlock protection in rpbc_get_local() is still good enough to
> guarantee correctness.
I don't understand the idea of this code. It guarantees that if rpcb_users ==
0, then rpcb_local_clnt == NULL and rpcb_local_clnt4 == NULL.
But we don't need such a guarantee from my pov.
I.e. if rpcb_users == 0, it means that no services are running right now.
For example, the process destroying those clients is running on CPU#0.
On CPU#1, for example, we have another process trying to get those clients and
waiting on the spinlock. When this process gains the spinlock, it will see 0
users, gain the mutex and then try to create new clients. We still have no users
of these clients yet. And this process will just reassign those rpcbind client
pointers (and here we need a memory barrier for sure).
--
Best regards,
Stanislav Kinsbursky
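The scenario being described can be laid out as a timeline (illustrative only):

/*
 * CPU#0 (last service stopping)        CPU#1 (new service starting)
 *
 * rpcb_put_local():
 *   spin_lock(&rpcb_clnt_lock)
 *   --rpcb_users == 0 -> clear ptrs    rpcb_create_local():
 *   spin_unlock(&rpcb_clnt_lock)         rpcb_get_local() sees 0 users
 *   rpc_shutdown_client(...)             mutex_lock(creation mutex)
 *                                        creates fresh clients, then
 *                                        rpcb_set_local() publishes them
 *                                        (smp_wmb() before rpcb_users = 1)
 */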
> -----Original Message-----
> From: Stanislav Kinsbursky [mailto:[email protected]]
> Sent: Tuesday, September 20, 2011 12:21 PM
> To: Myklebust, Trond
> Cc: Schumaker, Bryan; [email protected]; Pavel Emelianov;
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: Re: [PATCH v4 1/8] SUNRPC: introduce helpers for reference
> counted rpcbind clients
>
> Trond, I've thought again and realized that even if these writes (rpcb_users
> and rpcb_local_clnt/4) are done in reversed order, no barrier is required
> here.
> Because if we have another process trying to create those clients (it can't use
> them since it hasn't started yet) on another CPU, then it's waiting on the
> creation mutex. When it gains the mutex, we will have the required memory
> barrier.
>
> Or did I miss something in your explanation?

You need to ensure that if someone calls rpcb_get_local() and gets a positive
result, then they can rely on rpcb_local_clnt/4 being non-zero. Without the
write barrier, that is not the case.

Without that guarantee, you can't really ensure that rpcb_put_local() will work
as expected either, since it will be possible to trigger the --rpcb_users == 0
case without shutting down rpcb_local_clnt/4 (because clnt/clnt4 == 0).

Cheers
  Trond
> -----Original Message-----
> From: Stanislav Kinsbursky [mailto:[email protected]]
> Sent: Friday, September 23, 2011 10:41 AM
> To: Myklebust, Trond
> Cc: [email protected]; Pavel Emelianov; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: Re: [PATCH v6 1/8] SUNRPC: introduce helpers for reference
> counted rpcbind clients
>
> Trond, does this patch version suit you now? Or not?
> Please comment somehow to let me know whether I may proceed with further
> development or not.

Hi Stanislav,

Yes, this version of the patch looks safe to me.

Cheers
  Trond
It is simple: we just increase the users counter if the rpcbind clients have
been created already. Otherwise we create them and set the users counter to 1.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/rpcb_clnt.c | 12 ++++--------
1 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index 8724780..83634e0 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -255,9 +255,7 @@ static int rpcb_create_local_unix(void)
clnt4 = NULL;
}
- /* Protected by rpcb_create_local_mutex */
- rpcb_local_clnt = clnt;
- rpcb_local_clnt4 = clnt4;
+ rpcb_set_local(clnt, clnt4);
out:
return result;
@@ -309,9 +307,7 @@ static int rpcb_create_local_net(void)
clnt4 = NULL;
}
- /* Protected by rpcb_create_local_mutex */
- rpcb_local_clnt = clnt;
- rpcb_local_clnt4 = clnt4;
+ rpcb_set_local(clnt, clnt4);
out:
return result;
@@ -326,11 +322,11 @@ static int rpcb_create_local(void)
static DEFINE_MUTEX(rpcb_create_local_mutex);
int result = 0;
- if (rpcb_local_clnt)
+ if (rpcb_get_local())
return result;
mutex_lock(&rpcb_create_local_mutex);
- if (rpcb_local_clnt)
+ if (rpcb_get_local())
goto out;
if (rpcb_create_local_unix() != 0)
These helpers will be used for dynamic creation and destruction of rpcbind
clients.
Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
clients have been created already, then we just increase rpcb_users.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/rpcb_clnt.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 50 insertions(+), 0 deletions(-)
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index e45d2fb..8724780 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
static struct rpc_clnt * rpcb_local_clnt;
static struct rpc_clnt * rpcb_local_clnt4;
+DEFINE_SPINLOCK(rpcb_clnt_lock);
+unsigned int rpcb_users;
+
struct rpcbind_args {
struct rpc_xprt * r_xprt;
@@ -161,6 +164,53 @@ static void rpcb_map_release(void *data)
kfree(map);
}
+static int rpcb_get_local(void)
+{
+ spin_lock(&rpcb_clnt_lock);
+ if (rpcb_users)
+ rpcb_users++;
+ spin_unlock(&rpcb_clnt_lock);
+
+ return rpcb_users;
+}
+
+void rpcb_put_local(void)
+{
+ struct rpc_clnt *clnt = rpcb_local_clnt;
+ struct rpc_clnt *clnt4 = rpcb_local_clnt4;
+ int shutdown;
+
+ spin_lock(&rpcb_clnt_lock);
+ if (--rpcb_users == 0) {
+ rpcb_local_clnt = NULL;
+ rpcb_local_clnt4 = NULL;
+ }
+ shutdown = !rpcb_users;
+ spin_unlock(&rpcb_clnt_lock);
+
+ if (shutdown) {
+ /*
+ * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
+ */
+ if (clnt4)
+ rpc_shutdown_client(clnt4);
+ if (clnt)
+ rpc_shutdown_client(clnt);
+ }
+ return;
+}
+
+static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
+{
+ /* Protected by rpcb_create_local_mutex */
+ rpcb_local_clnt = clnt;
+ rpcb_local_clnt4 = clnt4;
+ rpcb_users++;
+ dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
+ "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
+ rpcb_local_clnt4);
+}
+
/*
* Returns zero on success, otherwise a negative errno value
* is returned.
These helpers will be used only for those services that will send portmapper
registration calls.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
include/linux/sunrpc/clnt.h | 2 ++
net/sunrpc/rpcb_clnt.c | 2 +-
net/sunrpc/svc.c | 35 +++++++++++++++++++++++++++++++++++
3 files changed, 38 insertions(+), 1 deletions(-)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index db7bcaf..1eb437d 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -135,6 +135,8 @@ void rpc_shutdown_client(struct rpc_clnt *);
void rpc_release_client(struct rpc_clnt *);
void rpc_task_release_client(struct rpc_task *);
+int rpcb_create_local(void);
+void rpcb_put_local(void);
int rpcb_register(u32, u32, int, unsigned short);
int rpcb_v4_register(const u32 program, const u32 version,
const struct sockaddr *address,
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index 83634e0..64e15d1 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -317,7 +317,7 @@ out:
* Returns zero on success, otherwise a negative errno value
* is returned.
*/
-static int rpcb_create_local(void)
+int rpcb_create_local(void)
{
static DEFINE_MUTEX(rpcb_create_local_mutex);
int result = 0;
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 6a69a11..d2d61bf 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -354,6 +354,41 @@ svc_pool_for_cpu(struct svc_serv *serv, int cpu)
return &serv->sv_pools[pidx % serv->sv_nrpools];
}
+static int svc_rpcb_setup(struct svc_serv *serv)
+{
+ int err;
+
+ err = rpcb_create_local();
+ if (err)
+ return err;
+
+ /* Remove any stale portmap registrations */
+ svc_unregister(serv);
+ return 0;
+}
+
+static void svc_rpcb_cleanup(struct svc_serv *serv)
+{
+ svc_unregister(serv);
+ rpcb_put_local();
+}
+
+static int svc_uses_rpcbind(struct svc_serv *serv)
+{
+ struct svc_program *progp;
+ unsigned int i;
+
+ for (progp = serv->sv_program; progp; progp = progp->pg_next) {
+ for (i = 0; i < progp->pg_nvers; i++) {
+ if (progp->pg_vers[i] == NULL)
+ continue;
+ if (progp->pg_vers[i]->vs_hidden == 0)
+ return 1;
+ }
+ }
+
+ return 0;
+}
/*
* Create an RPC service
The new function ("svc_uses_rpcbind") will be used to detect that a new service
will send portmapper register calls. For such services we will create rpcbind
clients and remove all stale portmap registrations.
Also, svc_rpcb_cleanup() will be set as the sv_shutdown callback for such
services in case this field wasn't initialized earlier. This will allow us to
destroy the rpcbind clients when no other users of them are left.
Note: currently, any service being created will be detected as a portmap user.
Probably this is wrong. But for now it depends on the program versions'
"vs_hidden" flag.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/svc.c | 11 +++++++++--
1 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index d2d61bf..918edc3 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -454,8 +454,15 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
spin_lock_init(&pool->sp_lock);
}
- /* Remove any stale portmap registrations */
- svc_unregister(serv);
+ if (svc_uses_rpcbind(serv)) {
+ if (svc_rpcb_setup(serv) < 0) {
+ kfree(serv->sv_pools);
+ kfree(serv);
+ return NULL;
+ }
+ if (!serv->sv_shutdown)
+ serv->sv_shutdown = svc_rpcb_cleanup;
+ }
return serv;
}
20.09.2011 18:24, Jeff Layton wrote:
> On Tue, 20 Sep 2011 17:49:27 +0400
> Stanislav Kinsbursky<[email protected]> wrote:
>
>> v5: fixed races with rpcb_users in rpcb_get_local()
>>
>> These helpers will be used for dynamic creation and destruction of rpcbind
>> clients.
>> Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
>> clients have been created already, then we just increase rpcb_users.
>>
>> Signed-off-by: Stanislav Kinsbursky<[email protected]>
>>
>> ---
>> net/sunrpc/rpcb_clnt.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 files changed, 53 insertions(+), 0 deletions(-)
>>
>> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
>> index e45d2fb..5f4a406 100644
>> --- a/net/sunrpc/rpcb_clnt.c
>> +++ b/net/sunrpc/rpcb_clnt.c
>> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
>> static struct rpc_clnt * rpcb_local_clnt;
>> static struct rpc_clnt * rpcb_local_clnt4;
>> +DEFINE_SPINLOCK(rpcb_clnt_lock);
>> +unsigned int rpcb_users;
>> +
>> struct rpcbind_args {
>> struct rpc_xprt * r_xprt;
>> @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
>> kfree(map);
>> }
>> +static int rpcb_get_local(void)
>> +{
>> + int cnt;
>> +
>> + spin_lock(&rpcb_clnt_lock);
>> + if (rpcb_users)
>> + rpcb_users++;
>> + cnt = rpcb_users;
>> + spin_unlock(&rpcb_clnt_lock);
>> +
>> + return cnt;
>> +}
>> +
>> +void rpcb_put_local(void)
>> +{
>> + struct rpc_clnt *clnt = rpcb_local_clnt;
>> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
>> + int shutdown;
>> +
>> + spin_lock(&rpcb_clnt_lock);
>> + if (--rpcb_users == 0) {
>> + rpcb_local_clnt = NULL;
>> + rpcb_local_clnt4 = NULL;
>> + }
>
> In the function below, you mention that the above pointers are
> protected by rpcb_create_local_mutex, but it looks like they get reset
> here without that being held?
>
Assignment to them is protected by rpcb_create_local_mutex.
Dereferencing them is protected by rpcb_clnt_lock.
> Might it be simpler to just protect rpcb_users with the
> rpcb_create_local_mutex and ensure that it's held whenever you call one
> of these routines? None of these codepaths are particularly hot.
>
I just inherited this lock-mutex logic.
Actually, you're right. These codepaths are used rarely.
But are you sure that we need to remove this "speed-up" logic if we take into
account that it was here already?
>> + shutdown = !rpcb_users;
>> + spin_unlock(&rpcb_clnt_lock);
>> +
>> + if (shutdown) {
>> + /*
>> + * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
>> + */
>> + if (clnt4)
>> + rpc_shutdown_client(clnt4);
>> + if (clnt)
>> + rpc_shutdown_client(clnt);
>> + }
>> + return;
>> +}
>> +
>> +static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
>> +{
>> + /* Protected by rpcb_create_local_mutex */
>> + rpcb_local_clnt = clnt;
>> + rpcb_local_clnt4 = clnt4;
>> + rpcb_users++;
>> + dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
>> + "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
>> + rpcb_local_clnt4);
>> +}
>> +
>> /*
>> * Returns zero on success, otherwise a negative errno value
>> * is returned.
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>> the body of a message to [email protected]
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
--
Best regards,
Stanislav Kinsbursky
v5: fixed races with rpcb_users in rpcb_get_local()
These helpers will be used for dynamic creation and destruction of rpcbind
clients.
Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
clients have been created already, then we just increase rpcb_users.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/rpcb_clnt.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 53 insertions(+), 0 deletions(-)
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index e45d2fb..5f4a406 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
static struct rpc_clnt * rpcb_local_clnt;
static struct rpc_clnt * rpcb_local_clnt4;
+DEFINE_SPINLOCK(rpcb_clnt_lock);
+unsigned int rpcb_users;
+
struct rpcbind_args {
struct rpc_xprt * r_xprt;
@@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
kfree(map);
}
+static int rpcb_get_local(void)
+{
+ int cnt;
+
+ spin_lock(&rpcb_clnt_lock);
+ if (rpcb_users)
+ rpcb_users++;
+ cnt = rpcb_users;
+ spin_unlock(&rpcb_clnt_lock);
+
+ return cnt;
+}
+
+void rpcb_put_local(void)
+{
+ struct rpc_clnt *clnt = rpcb_local_clnt;
+ struct rpc_clnt *clnt4 = rpcb_local_clnt4;
+ int shutdown;
+
+ spin_lock(&rpcb_clnt_lock);
+ if (--rpcb_users == 0) {
+ rpcb_local_clnt = NULL;
+ rpcb_local_clnt4 = NULL;
+ }
+ shutdown = !rpcb_users;
+ spin_unlock(&rpcb_clnt_lock);
+
+ if (shutdown) {
+ /*
+ * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
+ */
+ if (clnt4)
+ rpc_shutdown_client(clnt4);
+ if (clnt)
+ rpc_shutdown_client(clnt);
+ }
+ return;
+}
+
+static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
+{
+ /* Protected by rpcb_create_local_mutex */
+ rpcb_local_clnt = clnt;
+ rpcb_local_clnt4 = clnt4;
+ rpcb_users++;
+ dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
+ "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
+ rpcb_local_clnt4);
+}
+
/*
* Returns zero on success, otherwise a negative errno value
* is returned.
On Tue, 20 Sep 2011 17:49:27 +0400
Stanislav Kinsbursky <[email protected]> wrote:
> v5: fixed races with rpcb_users in rpcb_get_local()
>
> These helpers will be used for dynamic creation and destruction of rpcbind
> clients.
> Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
> clients have been created already, then we just increase rpcb_users.
>
> Signed-off-by: Stanislav Kinsbursky <[email protected]>
>
> ---
> net/sunrpc/rpcb_clnt.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 files changed, 53 insertions(+), 0 deletions(-)
>
> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
> index e45d2fb..5f4a406 100644
> --- a/net/sunrpc/rpcb_clnt.c
> +++ b/net/sunrpc/rpcb_clnt.c
> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
> static struct rpc_clnt * rpcb_local_clnt;
> static struct rpc_clnt * rpcb_local_clnt4;
> +DEFINE_SPINLOCK(rpcb_clnt_lock);
> +unsigned int rpcb_users;
> +
> struct rpcbind_args {
> struct rpc_xprt * r_xprt;
> @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
> kfree(map);
> }
> +static int rpcb_get_local(void)
> +{
> + int cnt;
> +
> + spin_lock(&rpcb_clnt_lock);
> + if (rpcb_users)
> + rpcb_users++;
> + cnt = rpcb_users;
> + spin_unlock(&rpcb_clnt_lock);
> +
> + return cnt;
> +}
> +
> +void rpcb_put_local(void)
> +{
> + struct rpc_clnt *clnt = rpcb_local_clnt;
> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
> + int shutdown;
> +
> + spin_lock(&rpcb_clnt_lock);
> + if (--rpcb_users == 0) {
> + rpcb_local_clnt = NULL;
> + rpcb_local_clnt4 = NULL;
> + }
In the function below, you mention that the above pointers are
protected by rpcb_create_local_mutex, but it looks like they get reset
here without that being held?
Might it be simpler to just protect rpcb_users with the
rpcb_create_local_mutex and ensure that it's held whenever you call one
of these routines? None of these codepaths are particularly hot.
> + shutdown = !rpcb_users;
> + spin_unlock(&rpcb_clnt_lock);
> +
> + if (shutdown) {
> + /*
> + * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
> + */
> + if (clnt4)
> + rpc_shutdown_client(clnt4);
> + if (clnt)
> + rpc_shutdown_client(clnt);
> + }
> + return;
> +}
> +
> +static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
> +{
> + /* Protected by rpcb_create_local_mutex */
> + rpcb_local_clnt = clnt;
> + rpcb_local_clnt4 = clnt4;
> + rpcb_users++;
> + dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
> + "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
> + rpcb_local_clnt4);
> +}
> +
> /*
> * Returns zero on success, otherwise a negative errno value
> * is returned.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
Jeff Layton <[email protected]>
v6:
1) added a write memory barrier to rpcb_set_local to make sure that the rpcbind
clients become valid before the rpcb_users assignment
2) explicitly set rpcb_users to 1 instead of incrementing it (looks clearer from
my pov).
Notice: a write memory barrier after zeroing the rpcbind clients in rpcb_put_local()
is not required, since no users of them are left. A new user (service) will create
new clients before dereferencing them.
These helpers will be used for dynamic creation and destruction of rpcbind
clients.
Variable rpcb_users is actually a counter of launched RPC services. If rpcbind
clients have been created already, then we just increase rpcb_users.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/rpcb_clnt.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 54 insertions(+), 0 deletions(-)
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index e45d2fb..9fcdb42 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
static struct rpc_clnt * rpcb_local_clnt;
static struct rpc_clnt * rpcb_local_clnt4;
+DEFINE_SPINLOCK(rpcb_clnt_lock);
+unsigned int rpcb_users;
+
struct rpcbind_args {
struct rpc_xprt * r_xprt;
@@ -161,6 +164,57 @@ static void rpcb_map_release(void *data)
kfree(map);
}
+static int rpcb_get_local(void)
+{
+ int cnt;
+
+ spin_lock(&rpcb_clnt_lock);
+ if (rpcb_users)
+ rpcb_users++;
+ cnt = rpcb_users;
+ spin_unlock(&rpcb_clnt_lock);
+
+ return cnt;
+}
+
+void rpcb_put_local(void)
+{
+ struct rpc_clnt *clnt = rpcb_local_clnt;
+ struct rpc_clnt *clnt4 = rpcb_local_clnt4;
+ int shutdown;
+
+ spin_lock(&rpcb_clnt_lock);
+ if (--rpcb_users == 0) {
+ rpcb_local_clnt = NULL;
+ rpcb_local_clnt4 = NULL;
+ }
+ shutdown = !rpcb_users;
+ spin_unlock(&rpcb_clnt_lock);
+
+ if (shutdown) {
+ /*
+ * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
+ */
+ if (clnt4)
+ rpc_shutdown_client(clnt4);
+ if (clnt)
+ rpc_shutdown_client(clnt);
+ }
+ return;
+}
+
+static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
+{
+ /* Protected by rpcb_create_local_mutex */
+ rpcb_local_clnt = clnt;
+ rpcb_local_clnt4 = clnt4;
+ smp_wmb();
+ rpcb_users = 1;
+ dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
+ "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
+ rpcb_local_clnt4);
+}
+
/*
* Returns zero on success, otherwise a negative errno value
* is returned.
On Tue, 20 Sep 2011 14:14:04 +0400
Stanislav Kinsbursky <[email protected]> wrote:
> The new function ("svc_uses_rpcbind") will be used to detect that a new service
> will send portmapper register calls. For such services we will create rpcbind
> clients and remove all stale portmap registrations.
> Also, svc_rpcb_cleanup() will be set as the sv_shutdown callback for such
> services in case this field wasn't initialized earlier. This will allow us to
> destroy the rpcbind clients when no other users of them are left.
>
> Note: currently, any service being created will be detected as a portmap user.
> Probably this is wrong. But for now it depends on the program versions'
> "vs_hidden" flag.
>
Yes, I think that nfs4_callback_version4 should also have vs_hidden
set. Currently, it's trying to unregister the service from the
portmapper on shutdown even though it's not registering it. Basically,
any service that sets up its sockets with SVC_SOCK_ANONYMOUS should
also have vs_hidden set on all versions.
--
Jeff Layton <[email protected]>
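A minimal sketch of what Jeff suggests (the surrounding fields are illustrative
and follow the usual struct svc_version layout, not a posted patch):

static struct svc_version nfs4_callback_version4 = {
	.vs_vers	= 4,
	.vs_nproc	= ARRAY_SIZE(nfs4_callback_procedures1),
	.vs_proc	= nfs4_callback_procedures1,
	.vs_xdrsize	= NFS4_CALLBACK_XDRSIZE,
	.vs_hidden	= 1,	/* skip rpcbind (un)registration */
};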
20.09.2011 17:15, Myklebust, Trond wrote:
>> -----Original Message-----
>> From: Schumaker, Bryan
>> Sent: Tuesday, September 20, 2011 9:05 AM
>> To: Stanislav Kinsbursky
>> Cc: Myklebust, Trond; [email protected]; [email protected];
>> [email protected]; [email protected]; [email protected];
>> [email protected]; [email protected]
>> Subject: Re: [PATCH v4 1/8] SUNRPC: introduce helpers for reference
>> counted rpcbind clients
>>
>> On 09/20/2011 06:13 AM, Stanislav Kinsbursky wrote:
>>> These helpers will be used for dynamic creation and destruction of
>>> rpcbind clients.
>>> Variable rpcb_users is actually a counter of launched RPC services. If
>>> rpcbind clients have been created already, then we just increase rpcb_users.
>>>
>>> Signed-off-by: Stanislav Kinsbursky<[email protected]>
>>>
>>> ---
>>> net/sunrpc/rpcb_clnt.c | 50
>> ++++++++++++++++++++++++++++++++++++++++++++++++
>>> 1 files changed, 50 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c index
>>> e45d2fb..8724780 100644
>>> --- a/net/sunrpc/rpcb_clnt.c
>>> +++ b/net/sunrpc/rpcb_clnt.c
>>> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
>>> static struct rpc_clnt * rpcb_local_clnt;
>>> static struct rpc_clnt * rpcb_local_clnt4;
>>>
>>> +DEFINE_SPINLOCK(rpcb_clnt_lock);
>>> +unsigned int rpcb_users;
>>> +
>>> struct rpcbind_args {
>>> struct rpc_xprt * r_xprt;
>>>
>>> @@ -161,6 +164,53 @@ static void rpcb_map_release(void *data)
>>> kfree(map);
>>> }
>>>
>>> +static int rpcb_get_local(void)
>>> +{
>>> + spin_lock(&rpcb_clnt_lock);
>>> + if (rpcb_users)
>>> + rpcb_users++;
>>> + spin_unlock(&rpcb_clnt_lock);
>>> +
>>> + return rpcb_users;
>> ^^^^^^^^^^^^^^^^^^
>> Is it safe to use this variable outside of the rpcb_clnt_lock?
>>
> Nope. If rpcb_users was zero in the protected section above, nothing guarantees that it will still be zero here, and so the caller may get the (wrong) impression that the counter was incremented.
>
Yep, you're right. Will fix these races.
>>> +}
>>> +
>>> +void rpcb_put_local(void)
>>> +{
>>> + struct rpc_clnt *clnt = rpcb_local_clnt;
>>> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
>>> + int shutdown;
>>> +
>>> + spin_lock(&rpcb_clnt_lock);
>>> + if (--rpcb_users == 0) {
>>> + rpcb_local_clnt = NULL;
>>> + rpcb_local_clnt4 = NULL;
>>> + }
>>> + shutdown = !rpcb_users;
>>> + spin_unlock(&rpcb_clnt_lock);
>>> +
>>> + if (shutdown) {
>>> + /*
>>> + * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
>>> + */
>>> + if (clnt4)
>>> + rpc_shutdown_client(clnt4);
>>> + if (clnt)
>>> + rpc_shutdown_client(clnt);
>>> + }
>>> + return;
>>> +}
>>> +
>>> +static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt
>>> +*clnt4) {
>>> + /* Protected by rpcb_create_local_mutex */
>
> Doesn't it need to be protected by rpcb_clnt_lock too?
>
Nope from my POV. It's protected by rpcb_create_local_mutex. I.e. no one will
change rpcb_users since it's zero. If it's non-zero, we won't get to
rpcb_set_local().
>>> + rpcb_local_clnt = clnt;
>>> + rpcb_local_clnt4 = clnt4;
>>> + rpcb_users++;
> ^^^^^^^^^^^^^^^^^^^
>
>>> + dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
>>> + "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
>>> + rpcb_local_clnt4);
>>> +}
>>> +
>>> /*
>>> * Returns zero on success, otherwise a negative errno value
>>> * is returned.
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-nfs"
>>> in the body of a message to [email protected]
>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
--
Best regards,
Stanislav Kinsbursky
On 09/20/2011 10:43 AM, Stanislav Kinsbursky wrote:
> 20.09.2011 18:24, Jeff Layton wrote:
>> On Tue, 20 Sep 2011 17:49:27 +0400
>> Stanislav Kinsbursky<[email protected]> wrote:
>>
>>> v5: fixed races with rpcb_users in rpcb_get_local()
>>>
>>> These helpers will be used for dynamic creation and destruction of rpcbind
>>> clients.
>>> The variable rpcb_users is actually a counter of launched RPC services. If
>>> rpcbind clients have been created already, then we just increase rpcb_users.
>>>
>>> Signed-off-by: Stanislav Kinsbursky<[email protected]>
>>>
>>> ---
>>> net/sunrpc/rpcb_clnt.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
>>> 1 files changed, 53 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
>>> index e45d2fb..5f4a406 100644
>>> --- a/net/sunrpc/rpcb_clnt.c
>>> +++ b/net/sunrpc/rpcb_clnt.c
>>> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
>>> static struct rpc_clnt * rpcb_local_clnt;
>>> static struct rpc_clnt * rpcb_local_clnt4;
>>> +DEFINE_SPINLOCK(rpcb_clnt_lock);
>>> +unsigned int rpcb_users;
>>> +
>>> struct rpcbind_args {
>>> struct rpc_xprt * r_xprt;
>>> @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
>>> kfree(map);
>>> }
>>> +static int rpcb_get_local(void)
>>> +{
>>> + int cnt;
>>> +
>>> + spin_lock(&rpcb_clnt_lock);
>>> + if (rpcb_users)
>>> + rpcb_users++;
>>> + cnt = rpcb_users;
>>> + spin_unlock(&rpcb_clnt_lock);
>>> +
>>> + return cnt;
>>> +}
>>> +
>>> +void rpcb_put_local(void)
>>> +{
>>> + struct rpc_clnt *clnt = rpcb_local_clnt;
>>> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
>>> + int shutdown;
>>> +
>>> + spin_lock(&rpcb_clnt_lock);
>>> + if (--rpcb_users == 0) {
>>> + rpcb_local_clnt = NULL;
>>> + rpcb_local_clnt4 = NULL;
>>> + }
>>
>> In the function below, you mention that the above pointers are
>> protected by rpcb_create_local_mutex, but it looks like they get reset
>> here without that being held?
>>
>
> Assigning of them is protected by rpcb_create_local_mutex.
> Dereferencing of them is protected by rpcb_clnt_lock.
Shouldn't you be using the same lock for assigning and dereferencing? Otherwise one thread could change these variables while another is using them.
>
>> Might it be simpler to just protect rpcb_users with the
>> rpcb_create_local_mutex and ensure that it's held whenever you call one
>> of these routines? None of these codepaths are particularly hot.
>>
>
> I just inherited this lock-mutex logic.
> Actually, you're right. These codepaths are used rarely.
> But are you sure that we need to remove this "speed-up" logic, given that it
> was here already?
>
>>> + shutdown = !rpcb_users;
>>> + spin_unlock(&rpcb_clnt_lock);
>>> +
>>> + if (shutdown) {
>>> + /*
>>> + * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
>>> + */
>>> + if (clnt4)
>>> + rpc_shutdown_client(clnt4);
>>> + if (clnt)
>>> + rpc_shutdown_client(clnt);
>>> + }
>>> + return;
>>> +}
>>> +
>>> +static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
>>> +{
>>> + /* Protected by rpcb_create_local_mutex */
>>> + rpcb_local_clnt = clnt;
>>> + rpcb_local_clnt4 = clnt4;
>>> + rpcb_users++;
>>> + dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
>>> + "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
>>> + rpcb_local_clnt4);
>>> +}
>>> +
>>> /*
>>> * Returns zero on success, otherwise a negative errno value
>>> * is returned.
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>>> the body of a message to [email protected]
>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>>
>
>
> -----Original Message-----
> From: Jeff Layton [mailto:[email protected]]
> Sent: Tuesday, September 20, 2011 10:25 AM
> To: Stanislav Kinsbursky
> Cc: Myklebust, Trond; [email protected]; Pavel Emelianov;
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: Re: [PATCH v5 1/8] SUNRPC: introduce helpers for reference
> counted rpcbind clients
>
> On Tue, 20 Sep 2011 17:49:27 +0400
> Stanislav Kinsbursky <[email protected]> wrote:
>
> > v5: fixed races with rpcb_users in rpcb_get_local()
> >
> > These helpers will be used for dynamic creation and destruction of
> > rpcbind clients.
> > The variable rpcb_users is actually a counter of launched RPC services. If
> > rpcbind clients have been created already, then we just increase
> > rpcb_users.
> >
> > Signed-off-by: Stanislav Kinsbursky <[email protected]>
> >
> > ---
> >   net/sunrpc/rpcb_clnt.c |   53 ++++++++++++++++++++++++++++++++++++++++++++++++
> >   1 files changed, 53 insertions(+), 0 deletions(-)
> >
> > diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c index
> > e45d2fb..5f4a406 100644
> > --- a/net/sunrpc/rpcb_clnt.c
> > +++ b/net/sunrpc/rpcb_clnt.c
> > @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
> > static struct rpc_clnt * rpcb_local_clnt;
> > static struct rpc_clnt * rpcb_local_clnt4;
> > +DEFINE_SPINLOCK(rpcb_clnt_lock);
> > +unsigned int rpcb_users;
> > +
> > struct rpcbind_args {
> > struct rpc_xprt * r_xprt;
> > @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
> > kfree(map);
> > }
> > +static int rpcb_get_local(void)
> > +{
> > + int cnt;
> > +
> > + spin_lock(&rpcb_clnt_lock);
> > + if (rpcb_users)
> > + rpcb_users++;
> > + cnt = rpcb_users;
> > + spin_unlock(&rpcb_clnt_lock);
> > +
> > + return cnt;
> > +}
> > +
> > +void rpcb_put_local(void)
> > +{
> > + struct rpc_clnt *clnt = rpcb_local_clnt;
> > + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
> > + int shutdown;
> > +
> > + spin_lock(&rpcb_clnt_lock);
> > + if (--rpcb_users == 0) {
> > + rpcb_local_clnt = NULL;
> > + rpcb_local_clnt4 = NULL;
> > + }
>
> In the function below, you mention that the above pointers are protected by
> rpcb_create_local_mutex, but it looks like they get reset here without that
> being held?
>
> Might it be simpler to just protect rpcb_users with the
> rpcb_create_local_mutex and ensure that it's held whenever you call one of
> these routines? None of these codepaths are particularly hot.
Alternatively, if you do

	if (rpcb_users == 1) {
		rpcb_local_clnt = NULL;
		rpcb_local_clnt4 = NULL;
		smp_wmb();
		rpcb_users = 0;
	} else
		rpcb_users--;

then the spinlock protection in rpcb_get_local() is still good enough to
guarantee correctness.
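[Editor's note: to make the discussed scheme concrete, here is a minimal
sketch of how the get/put pair could look with Trond's decrement logic folded
in. It is assembled from the fragments quoted in this thread, not taken from
the finally merged code; the smp_wmb() placement follows his suggestion above.]

	static DEFINE_SPINLOCK(rpcb_clnt_lock);
	static unsigned int rpcb_users;

	/* Returns 0 if the caller must create the rpcb clients first. */
	static int rpcb_get_local(void)
	{
		int cnt;

		spin_lock(&rpcb_clnt_lock);
		if (rpcb_users)
			rpcb_users++;	/* piggyback on the existing clients */
		cnt = rpcb_users;
		spin_unlock(&rpcb_clnt_lock);

		return cnt;
	}

	void rpcb_put_local(void)
	{
		struct rpc_clnt *clnt = rpcb_local_clnt;
		struct rpc_clnt *clnt4 = rpcb_local_clnt4;
		int shutdown = 0;

		spin_lock(&rpcb_clnt_lock);
		if (rpcb_users == 1) {
			rpcb_local_clnt = NULL;
			rpcb_local_clnt4 = NULL;
			smp_wmb();	/* clear the pointers before the count */
			rpcb_users = 0;
			shutdown = 1;
		} else
			rpcb_users--;
		spin_unlock(&rpcb_clnt_lock);

		if (shutdown) {
			/* shut the clients down outside the spinlock */
			if (clnt4)
				rpc_shutdown_client(clnt4);
			if (clnt)
				rpc_shutdown_client(clnt);
		}
	}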
We don't need this code, since rpcbind clients are created during RPC service
creation.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/rpcb_clnt.c | 9 ---------
1 files changed, 0 insertions(+), 9 deletions(-)
diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
index 64e15d1..9327305 100644
--- a/net/sunrpc/rpcb_clnt.c
+++ b/net/sunrpc/rpcb_clnt.c
@@ -428,11 +428,6 @@ int rpcb_register(u32 prog, u32 vers, int prot, unsigned short port)
struct rpc_message msg = {
.rpc_argp = &map,
};
- int error;
-
- error = rpcb_create_local();
- if (error)
- return error;
dprintk("RPC: %sregistering (%u, %u, %d, %u) with local "
"rpcbind\n", (port ? "" : "un"),
@@ -568,11 +563,7 @@ int rpcb_v4_register(const u32 program, const u32 version,
struct rpc_message msg = {
.rpc_argp = &map,
};
- int error;
- error = rpcb_create_local();
- if (error)
- return error;
if (rpcb_local_clnt4 == NULL)
return -EPROTONOSUPPORT;
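[Editor's note: the counterpart to this removal is that the clients must
already exist by the time a service registers. A hypothetical setup helper
along these lines would run during service creation; the name svc_rpcb_setup
is a guess based on the series titles, and only the rpcb_create_local() and
svc_unregister() calls are attested in this thread.]

	/* Called while an RPC service is being created, before any
	 * rpcb_register()/rpcb_v4_register() call can happen. */
	static int svc_rpcb_setup(struct svc_serv *serv)
	{
		int err;

		err = rpcb_create_local();	/* create clients or bump rpcb_users */
		if (err)
			return err;

		/* remove any stale registrations left by a previous instance */
		svc_unregister(serv);
		return 0;
	}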
The svc_unregister() call has to be removed from svc_destroy(), since it will
be called in the sv_shutdown callback.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
net/sunrpc/svc.c | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 918edc3..407462f 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -530,7 +530,6 @@ svc_destroy(struct svc_serv *serv)
if (svc_serv_is_pooled(serv))
svc_pool_map_put();
- svc_unregister(serv);
kfree(serv->sv_pools);
kfree(serv);
}
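[Editor's note: as a sketch of where the call moves to, the shutdown callback
could look like this. The name svc_rpcb_cleanup is an assumption based on the
series description, not quoted code.]

	/* Hypothetical sv_shutdown callback for rpcbind-registered services. */
	static void svc_rpcb_cleanup(struct svc_serv *serv)
	{
		svc_unregister(serv);	/* drop the rpcbind registrations */
		rpcb_put_local();	/* last service gone => clients shut down */
	}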
20.09.2011 21:13, Myklebust, Trond wrote:
>> -----Original Message-----
>> From: Stanislav Kinsbursky [mailto:[email protected]]
>> Sent: Tuesday, September 20, 2011 12:21 PM
>> To: Myklebust, Trond
>> Cc: Schumaker, Bryan; [email protected]; Pavel Emelianov;
>> [email protected]; [email protected]; [email protected];
>> [email protected]; [email protected]
>> Subject: Re: [PATCH v4 1/8] SUNRPC: introduce helpers for reference
>> counted rpcbind clients
>>
>> 20.09.2011 18:38, Myklebust, Trond wrote:
>>>> -----Original Message-----
>>>> From: Stanislav Kinsbursky [mailto:[email protected]]
>>>> Sent: Tuesday, September 20, 2011 10:35 AM
>>>> To: Myklebust, Trond
>>>> Cc: Schumaker, Bryan; [email protected]; Pavel Emelianov;
>>>> [email protected]; [email protected]; [email protected];
>>>> [email protected]; [email protected]
>>>> Subject: Re: [PATCH v4 1/8] SUNRPC: introduce helpers for reference
>>>> counted rpcbind clients
>>>>
>>>> 20.09.2011 18:14, Myklebust, Trond wrote:
>>>>
>>>>>>>
>>>>>>> Doesn't it need to be protected by rpcb_clnt_lock too?
>>>>>>>
>>>>>>
>>>>>> Nope from my POV. It's protected by rpcb_create_local_mutex. I.e.
>>>>>> no one will change rpcb_users since it's zero. If it's non-zero,
>>>>>> we won't get to rpcb_set_local().
>>>>>
>>>>> OK, so you are saying that the rpcb_users++ below could be replaced
>>>>> by rpcb_users = 1?
>>>>>
>>>>
>>>> Yes, you're right.
>>>>
>>>>> In that case, don't you need an smp_wmb() between the setting of
>>>>> rpcb_local_clnt/4 and the setting of rpcb_users? Otherwise, how do
>>>>> you guarantee that rpcb_users != 0 implies rpcb_local_clnt/4 != NULL?
>>>>>
>>>>
>>>> We check rpcb_users under the spinlock. Taking the spinlock forces a
>>>> memory barrier, doesn't it?
>>>
>>> Yes, and so you don't need an smp_rmb() on the reader side. However,
>>> you still need to ensure that the processor which _sets_ the rpcb_users
>>> and rpcb_local_clnt/4 actually writes them in the correct order.
>>>
>>
>> Trond, I've thought about it again and realized that even if these writes
>> (rpcb_users and rpcb_local_clnt/4) are done in reversed order, no barrier is
>> required here.
>> Because if we have another process trying to create those clients (it can't
>> use them since the service isn't started yet) on another CPU, then it's
>> waiting on the creation mutex. When it gains the mutex, we will have the
>> required memory barrier.
>>
>> Or did I miss something in your explanation?
>
> You need to ensure that if someone calls rpcb_get_local() and gets a positive result, then they can rely on rpcb_local_clnt/4 being non-zero. Without the write barrier, that is not the case.
>
In the current context we can be sure that between rpcb_get_local() and the
first dereference of rpcb_local_clnt/4 we take at least one spinlock
(svc_xprt_class_lock in svc_create_xprt).
But I understand that we can't rely on it, since this code can be changed in
the future.
So I'll add a barrier there.
> Without that guarantee, you can't really ensure that rpcb_put_local() will work as expected either, since it will be possible to trigger the --rpcb_users == 0 case without shutting down rpcb_local_clnt/4 (because clnt/clnt4 == 0).
>
Yes, you're right. But it doesn't mean that we require a barrier here, because
we don't need the guarantee you are talking about.
We can be sure that we always see the right rpcb_users value. That means that
if we set this value to zero, then no other running services are left and no
references to those clients can occur.
And even if another process is about to create a new service right now on
another CPU, that process will see that no rpcbind users are present and will
create new clients and assign them to the global variables before any use of
those clients can occur.
And that assignment will be done with a barrier, as we agreed earlier.
> Cheers
> Trond
>
--
Best regards,
Stanislav Kinsbursky
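[Editor's note: for reference, the creation side with the barrier that was
just agreed on would look roughly like this. This is a sketch of the thread's
conclusion, not the merged code, which may differ.]

	static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
	{
		/* Protected by rpcb_create_local_mutex */
		rpcb_local_clnt = clnt;
		rpcb_local_clnt4 = clnt4;
		smp_wmb();	/* publish the pointers before the counter */
		rpcb_users = 1;	/* we only get here when the count was zero */
		dprintk("RPC:       created new rpcb local clients (rpcb_local_clnt: "
				"%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
				rpcb_local_clnt4);
	}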
20.09.2011 18:14, Myklebust, Trond wrote:
>>>
>>> Doesn't it need to be protected by rpcb_clnt_lock too?
>>>
>>
>> Nope from my POV. It's protected by rpcb_create_local_mutex. I.e. no one
>> will change rpcb_users since it's zero. If it's non-zero, we won't get to
>> rpcb_set_local().
>
> OK, so you are saying that the rpcb_users++ below could be replaced by rpcb_users=1?
>
Yes, you're right.
> In that case, don't you need an smp_wmb() between the setting of rpcb_local_clnt/4 and the setting of rpcb_users? Otherwise, how do you guarantee that rpcb_users != 0 implies rpcb_local_clnt/4 != NULL?
>
We check rpcb_users under the spinlock. Taking the spinlock forces a memory barrier, doesn't it?
--
Best regards,
Stanislav Kinsbursky
On Tue, 20 Sep 2011 14:13:32 +0400
Stanislav Kinsbursky <[email protected]> wrote:
> v4:
> 1) creation and destruction of rpcbind clients now depends on the service
> program versions' "vs_hidden" flag.
>
> This patch is required for further RPC layer virtualization, because rpcbind
> clients have to be per network namespace.
> To achieve this, we have to untie the network namespace from the rpcbind
> clients' sockets.
> The idea of this patch set is to make rpcbind clients non-static, i.e. rpcbind
> clients will be created during the first RPC service creation and destroyed
> when the last RPC service is stopped.
> With this patch set, rpcbind clients can be virtualized easily.
>
>
> The following series consists of:
>
> ---
>
> Stanislav Kinsbursky (8):
> SUNRPC: introduce helpers for reference counted rpcbind clients
> SUNRPC: use rpcbind reference counting helpers
> SUNRPC: introduce svc helpers for preparing rpcbind infrastructure
> SUNRPC: setup rpcbind clients if service requires it
> SUNRPC: cleanup service destruction
> NFSd: call svc rpcbind cleanup explicitly
> SUNRPC: remove rpcbind clients creation during service registering
> SUNRPC: remove rpcbind clients destruction on module cleanup
>
>
> fs/nfsd/nfssvc.c | 2 +
> include/linux/sunrpc/clnt.h | 2 +
> include/linux/sunrpc/svc.h | 1 +
> net/sunrpc/rpcb_clnt.c | 85 ++++++++++++++++++++++++++++---------------
> net/sunrpc/sunrpc_syms.c | 3 --
> net/sunrpc/svc.c | 48 +++++++++++++++++++++++-
> 6 files changed, 105 insertions(+), 36 deletions(-)
>
Patchset looks good to me. The only remaining thing I think is to set
vs_hidden on nfs4_callback_version4, but that patch is orthogonal to
this set.
Reviewed-by: Jeff Layton <[email protected]>
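[Editor's note: for illustration only, setting that flag would be a one-line
change in the callback version table. The field values below are a sketch
reconstructed from memory of the era's struct svc_version layout; only the
name nfs4_callback_version4 and the vs_hidden flag come from this thread.]

	/* fs/nfsd/nfs4callback.c (sketch) */
	static struct svc_version nfs4_callback_version4 = {
		.vs_vers	= 4,
		.vs_nproc	= ARRAY_SIZE(nfs4_callback_procedures1),
		.vs_proc	= nfs4_callback_procedures1,
		.vs_xdrsize	= NFS4_CALLBACK_XDRSIZE,
		.vs_hidden	= 1,	/* never register with rpcbind */
	};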
On 09/20/2011 06:13 AM, Stanislav Kinsbursky wrote:
> These helpers will be used for dynamic creation and destruction of rpcbind
> clients.
> The variable rpcb_users is actually a counter of launched RPC services. If
> rpcbind clients have been created already, then we just increase rpcb_users.
>
> Signed-off-by: Stanislav Kinsbursky <[email protected]>
>
> ---
> net/sunrpc/rpcb_clnt.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 files changed, 50 insertions(+), 0 deletions(-)
>
> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
> index e45d2fb..8724780 100644
> --- a/net/sunrpc/rpcb_clnt.c
> +++ b/net/sunrpc/rpcb_clnt.c
> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
> static struct rpc_clnt * rpcb_local_clnt;
> static struct rpc_clnt * rpcb_local_clnt4;
>
> +DEFINE_SPINLOCK(rpcb_clnt_lock);
> +unsigned int rpcb_users;
> +
> struct rpcbind_args {
> struct rpc_xprt * r_xprt;
>
> @@ -161,6 +164,53 @@ static void rpcb_map_release(void *data)
> kfree(map);
> }
>
> +static int rpcb_get_local(void)
> +{
> + spin_lock(&rpcb_clnt_lock);
> + if (rpcb_users)
> + rpcb_users++;
> + spin_unlock(&rpcb_clnt_lock);
> +
> + return rpcb_users;
^^^^^^^^^^^^^^^^^^
Is it safe to use this variable outside of the rpcb_clnt_lock?
> +}
> +
> +void rpcb_put_local(void)
> +{
> + struct rpc_clnt *clnt = rpcb_local_clnt;
> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
> + int shutdown;
> +
> + spin_lock(&rpcb_clnt_lock);
> + if (--rpcb_users == 0) {
> + rpcb_local_clnt = NULL;
> + rpcb_local_clnt4 = NULL;
> + }
> + shutdown = !rpcb_users;
> + spin_unlock(&rpcb_clnt_lock);
> +
> + if (shutdown) {
> + /*
> + * cleanup_rpcb_clnt - remove xprtsock's sysctls, unregister
> + */
> + if (clnt4)
> + rpc_shutdown_client(clnt4);
> + if (clnt)
> + rpc_shutdown_client(clnt);
> + }
> + return;
> +}
> +
> +static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
> +{
> + /* Protected by rpcb_create_local_mutex */
> + rpcb_local_clnt = clnt;
> + rpcb_local_clnt4 = clnt4;
> + rpcb_users++;
> + dprintk("RPC: created new rpcb local clients (rpcb_local_clnt: "
> + "%p, rpcb_local_clnt4: %p)\n", rpcb_local_clnt,
> + rpcb_local_clnt4);
> +}
> +
> /*
> * Returns zero on success, otherwise a negative errno value
> * is returned.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> -----Original Message-----
> From: Stanislav Kinsbursky [mailto:[email protected]]
> Sent: Tuesday, September 20, 2011 10:35 AM
> To: Myklebust, Trond
> Cc: Schumaker, Bryan; [email protected]; Pavel Emelianov;
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: Re: [PATCH v4 1/8] SUNRPC: introduce helpers for reference
> counted rpcbind clients
> 
> 20.09.2011 18:14, Myklebust, Trond wrote:
> 
> >>>
> >>> Doesn't it need to be protected by rpcb_clnt_lock too?
> >>>
> >>
> >> Nope from my POV. It's protected by rpcb_create_local_mutex. I.e. no
> >> one will change rpcb_users since it's zero. If it's non-zero, we
> >> won't get to rpcb_set_local().
> >
> > OK, so you are saying that the rpcb_users++ below could be replaced by
> > rpcb_users = 1?
> >
> 
> Yes, you're right.
> 
> > In that case, don't you need an smp_wmb() between the setting of
> > rpcb_local_clnt/4 and the setting of rpcb_users? Otherwise, how do you
> > guarantee that rpcb_users != 0 implies rpcb_local_clnt/4 != NULL?
> >
> 
> We check rpcb_users under the spinlock. Taking the spinlock forces a
> memory barrier, doesn't it?

Yes, and so you don't need an smp_rmb() on the reader side. However, you still need to ensure that the processor which _sets_ the rpcb_users and rpcb_local_clnt/4 actually writes them in the correct order.

Cheers
  Trond
20.09.2011 19:11, Jeff Layton wrote:
>
> In general, it's difficult to get locking right, especially when you
> start mixing multiple locks on related resources. Personally, I'd go
> with a simpler scheme here. There's not much value in protecting this
> counter with a spinlock when the other parts need to be protected by a
> mutex. If you do decide to do it with multiple locks, then please do
> document in comments how the locking is expected to work.
>
> An alternate scheme might be to consider doing this with krefs, but I
> haven't really considered whether that idiom makes sense here.
>
Jeff, please, have a look at my answer to Bryan Schumaker.
Does it allay your apprehensions?
--
Best regards,
Stanislav Kinsbursky
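[Editor's note: for completeness, a kref-based variant of the helpers might
look roughly like the sketch below. It is purely illustrative (nobody in the
thread wrote it), and it keeps the caveat implicit in Jeff's remark: the
zero-to-one transition still needs rpcb_create_local_mutex, because a kref
cannot be revived after it drops to zero without a fresh kref_init().]

	#include <linux/kref.h>

	static struct kref rpcb_users_ref;

	/* Runs when the last reference is dropped. */
	static void rpcb_release(struct kref *kref)
	{
		struct rpc_clnt *clnt = rpcb_local_clnt;
		struct rpc_clnt *clnt4 = rpcb_local_clnt4;

		rpcb_local_clnt = NULL;
		rpcb_local_clnt4 = NULL;
		if (clnt4)
			rpc_shutdown_client(clnt4);
		if (clnt)
			rpc_shutdown_client(clnt);
	}

	/* Under rpcb_create_local_mutex, after creating the clients. */
	static void rpcb_set_local(struct rpc_clnt *clnt, struct rpc_clnt *clnt4)
	{
		rpcb_local_clnt = clnt;
		rpcb_local_clnt4 = clnt4;
		kref_init(&rpcb_users_ref);	/* count = 1 */
	}

	void rpcb_put_local(void)
	{
		kref_put(&rpcb_users_ref, rpcb_release);
	}

A get path would still have to take kref_get() only while the mutex (or the
counter check) guarantees the clients exist, which is roughly where the
complexity Jeff mentions comes back in.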
On 09/20/2011 11:38 AM, Stanislav Kinsbursky wrote:
> 20.09.2011 18:58, Bryan Schumaker wrote:
>> On 09/20/2011 10:43 AM, Stanislav Kinsbursky wrote:
>>> 20.09.2011 18:24, Jeff Layton wrote:
>>>> On Tue, 20 Sep 2011 17:49:27 +0400
>>>> Stanislav Kinsbursky<[email protected]> wrote:
>>>>
>>>>> v5: fixed races with rpcb_users in rpcb_get_local()
>>>>>
>>>>> These helpers will be used for dynamic creation and destruction of rpcbind
>>>>> clients.
>>>>> The variable rpcb_users is actually a counter of launched RPC services. If
>>>>> rpcbind clients have been created already, then we just increase rpcb_users.
>>>>>
>>>>> Signed-off-by: Stanislav Kinsbursky<[email protected]>
>>>>>
>>>>> ---
>>>>>   net/sunrpc/rpcb_clnt.c |   53 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>   1 files changed, 53 insertions(+), 0 deletions(-)
>>>>>
>>>>> diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
>>>>> index e45d2fb..5f4a406 100644
>>>>> --- a/net/sunrpc/rpcb_clnt.c
>>>>> +++ b/net/sunrpc/rpcb_clnt.c
>>>>> @@ -114,6 +114,9 @@ static struct rpc_program rpcb_program;
>>>>> static struct rpc_clnt * rpcb_local_clnt;
>>>>> static struct rpc_clnt * rpcb_local_clnt4;
>>>>> +DEFINE_SPINLOCK(rpcb_clnt_lock);
>>>>> +unsigned int rpcb_users;
>>>>> +
>>>>> struct rpcbind_args {
>>>>> struct rpc_xprt * r_xprt;
>>>>> @@ -161,6 +164,56 @@ static void rpcb_map_release(void *data)
>>>>> kfree(map);
>>>>> }
>>>>> +static int rpcb_get_local(void)
>>>>> +{
>>>>> + int cnt;
>>>>> +
>>>>> + spin_lock(&rpcb_clnt_lock);
>>>>> + if (rpcb_users)
>>>>> + rpcb_users++;
>>>>> + cnt = rpcb_users;
>>>>> + spin_unlock(&rpcb_clnt_lock);
>>>>> +
>>>>> + return cnt;
>>>>> +}
>>>>> +
>>>>> +void rpcb_put_local(void)
>>>>> +{
>>>>> + struct rpc_clnt *clnt = rpcb_local_clnt;
>>>>> + struct rpc_clnt *clnt4 = rpcb_local_clnt4;
>>>>> + int shutdown;
>>>>> +
>>>>> + spin_lock(&rpcb_clnt_lock);
>>>>> + if (--rpcb_users == 0) {
>>>>> + rpcb_local_clnt = NULL;
>>>>> + rpcb_local_clnt4 = NULL;
>>>>> + }
>>>>
>>>> In the function below, you mention that the above pointers are
>>>> protected by rpcb_create_local_mutex, but it looks like they get reset
>>>> here without that being held?
>>>>
>>>
>>> Assigning of them is protected by rpcb_create_local_mutex.
>>> Dereferencing of them is protected by rpcb_clnt_lock.
>>
>> Shouldn't you be using the same lock for assigning and dereferencing? Otherwise one thread could change these variables while another is using them.
>
> Probably I wasn't clear with my previous explanation.
> Actually, we use only the spinlock to make sure that the pointers are valid
> when we dereference them. The synchronization point here is the rpcb_users
> counter.
> IOW, we use these pointers only from svc code and only after the service has
> already started. And with this patch set we can be sure that these pointers
> have been created already by the point when the service starts.
>
> But when this counter is zero, we can experience races with assigning those
> pointers. That takes a lot of time, so we use a local mutex here instead of
> the spinlock.
>
> Have I answered your question?
I think I understand now. Thanks!