2012-10-23 18:23:51

by Myklebust, Trond

Subject: Re: Heads-up: 3.6.2 / 3.6.3 NFS server oops: 3.6.2+ regression? (also an unrelated ext4 data loss bug)

On Tue, 2012-10-23 at 13:57 -0400, Trond Myklebust wrote:
> On Tue, 2012-10-23 at 17:44 +0000, Myklebust, Trond wrote:
> > You can't hold a spinlock while sleeping. Both mutex_lock() and nsm_create() can definitely sleep.
> >
> > The correct way to do this is to grab the spinlock and recheck the value of ln->nsm_users inside the 'if (!IS_ERR())' condition. If it is still zero, bump it and set ln->nsm_clnt, otherwise bump it, get the existing ln->nsm_clnt and call rpc_shutdown_clnt() on the redundant nsm client after dropping the spinlock.
> >
> > Cheers
> >   Trond
>
> Can you please check if the following patch fixes the issue?
>
> Cheers
>   Trond
>
Meh... This one gets rid of the 100% redundant mutex...

8<-----------------------------------------------------------
From 4187c816a15df12544ebcfa6b961fce96458e244 Mon Sep 17 00:00:00 2001
From: Trond Myklebust <[email protected]>
Date: Tue, 23 Oct 2012 13:51:58 -0400
Subject: [PATCH] LOCKD: fix races in nsm_client_get

Commit e9406db20fecbfcab646bad157b4cfdc7cadddfb (lockd: per-net
NSM client creation and destruction helpers introduced) contains
a nasty race on initialisation of the per-net NSM client because
it doesn't check whether or not the client is set after grabbing
the nsm_create_mutex.

Reported-by: Nix <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Cc: [email protected]
---
 fs/lockd/mon.c | 43 ++++++++++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index e4fb3ba..fe69560 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -85,29 +85,38 @@ static struct rpc_clnt *nsm_create(struct net *net)
 	return rpc_create(&args);
 }
 
+static struct rpc_clnt *nsm_client_set(struct lockd_net *ln,
+		struct rpc_clnt *clnt)
+{
+	spin_lock(&ln->nsm_clnt_lock);
+	if (ln->nsm_users == 0) {
+		if (clnt == NULL)
+			goto out;
+		ln->nsm_clnt = clnt;
+	}
+	clnt = ln->nsm_clnt;
+	ln->nsm_users++;
+out:
+	spin_unlock(&ln->nsm_clnt_lock);
+	return clnt;
+}
+
 static struct rpc_clnt *nsm_client_get(struct net *net)
 {
-	static DEFINE_MUTEX(nsm_create_mutex);
-	struct rpc_clnt	*clnt;
+	struct rpc_clnt	*clnt, *new;
 	struct lockd_net *ln = net_generic(net, lockd_net_id);
 
-	spin_lock(&ln->nsm_clnt_lock);
-	if (ln->nsm_users) {
-		ln->nsm_users++;
-		clnt = ln->nsm_clnt;
-		spin_unlock(&ln->nsm_clnt_lock);
+	clnt = nsm_client_set(ln, NULL);
+	if (clnt != NULL)
 		goto out;
-	}
-	spin_unlock(&ln->nsm_clnt_lock);
 
-	mutex_lock(&nsm_create_mutex);
-	clnt = nsm_create(net);
-	if (!IS_ERR(clnt)) {
-		ln->nsm_clnt = clnt;
-		smp_wmb();
-		ln->nsm_users = 1;
-	}
-	mutex_unlock(&nsm_create_mutex);
+	clnt = new = nsm_create(net);
+	if (IS_ERR(clnt))
+		goto out;
+
+	clnt = nsm_client_set(ln, new);
+	if (clnt != new)
+		rpc_shutdown_client(new);
 out:
 	return clnt;
 }
-- 
1.7.11.7


-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
[email protected]
www.netapp.com


2012-10-24 10:16:32

by Stanislav Kinsbursky

Subject: [PATCH] lockd: fix races in per-net NSM client handling

This patch fixes two problems:
1) Removes races on per-net NSM client creation.
2) Fixes a typo in NSM client destruction (the usage counter was checked
for a non-zero value instead of zero before clearing the client pointer).

Signed-off-by: Stanislav Kinsbursky <[email protected]>
---
 fs/lockd/mon.c | 35 +++++++++++++++++++++++------------
 1 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index e4fb3ba..e3e59f6 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -85,30 +85,41 @@ static struct rpc_clnt *nsm_create(struct net *net)
 	return rpc_create(&args);
 }
 
-static struct rpc_clnt *nsm_client_get(struct net *net)
+static struct rpc_clnt *nsm_get_client(struct net *net)
 {
-	static DEFINE_MUTEX(nsm_create_mutex);
-	struct rpc_clnt	*clnt;
+	struct rpc_clnt	*clnt = NULL;
 	struct lockd_net *ln = net_generic(net, lockd_net_id);
 
 	spin_lock(&ln->nsm_clnt_lock);
 	if (ln->nsm_users) {
 		ln->nsm_users++;
 		clnt = ln->nsm_clnt;
-		spin_unlock(&ln->nsm_clnt_lock);
-		goto out;
 	}
 	spin_unlock(&ln->nsm_clnt_lock);
+	return clnt;
+}
+
+static struct rpc_clnt *nsm_client_get(struct net *net)
+{
+	static DEFINE_MUTEX(nsm_create_mutex);
+	struct rpc_clnt	*clnt;
+	struct lockd_net *ln = net_generic(net, lockd_net_id);
+
+	clnt = nsm_get_client(net);
+	if (clnt)
+		return clnt;
 
 	mutex_lock(&nsm_create_mutex);
-	clnt = nsm_create(net);
-	if (!IS_ERR(clnt)) {
-		ln->nsm_clnt = clnt;
-		smp_wmb();
-		ln->nsm_users = 1;
+	clnt = nsm_get_client(net);
+	if (clnt == NULL) {
+		clnt = nsm_create(net);
+		if (!IS_ERR(clnt)) {
+			ln->nsm_clnt = clnt;
+			smp_wmb();
+			ln->nsm_users = 1;
+		}
 	}
 	mutex_unlock(&nsm_create_mutex);
-out:
 	return clnt;
 }
 
@@ -120,7 +131,7 @@ static void nsm_client_put(struct net *net)
 
 	spin_lock(&ln->nsm_clnt_lock);
 	if (ln->nsm_users) {
-		if (--ln->nsm_users)
+		if (--ln->nsm_users == 0)
 			ln->nsm_clnt = NULL;
 		shutdown = !ln->nsm_users;
 	}


2012-10-23 19:49:29

by Nix

Subject: Re: Heads-up: 3.6.2 / 3.6.3 NFS server oops: 3.6.2+ regression? (also an unrelated ext4 data loss bug)

On 23 Oct 2012, Trond Myklebust outgrape:

> On Tue, 2012-10-23 at 13:57 -0400, Trond Myklebust wrote:
>> On Tue, 2012-10-23 at 17:44 +0000, Myklebust, Trond wrote:
>> > You can't hold a spinlock while sleeping. Both mutex_lock() and nsm_create() can definitely sleep.
>> >
>> > The correct way to do this is to grab the spinlock and recheck the value of ln->nsm_users inside the 'if (!IS_ERR())' condition. If it is still zero, bump it and set ln->nsm_clnt, otherwise bump it, get the existing ln->nsm_clnt and call rpc_shutdown_clnt() on the redundant nsm client after dropping the spinlock.
>> >
>> > Cheers
>> > Trond
>>
>> Can you please check if the following patch fixes the issue?
>>
>> Cheers
>> Trond
>>
> Meh... This one gets rid of the 100% redundant mutex...

No help, I'm afraid:

[ 894.005699] ------------[ cut here ]------------
[ 894.005929] kernel BUG at fs/lockd/mon.c:159!
[ 894.006156] invalid opcode: 0000 [#1] SMP
[ 894.006451] Modules linked in: firewire_ohci firewire_core [last unloaded: microcode]
[ 894.007005] CPU 1
[ 894.007050] Pid: 1035, comm: lockd Not tainted 3.6.3-dirty #1 empty empty/S7010
[ 894.007669] RIP: 0010:[<ffffffff8120fbbc>] [<ffffffff8120fbbc>] nsm_mon_unmon+0x64/0x98
[ 894.008126] RSP: 0018:ffff880620a23ce0 EFLAGS: 00010246
[ 894.008355] RAX: ffff880620a23ce8 RBX: 0000000000000000 RCX: 0000000000000000
[ 894.008591] RDX: ffff880620a23d58 RSI: 0000000000000002 RDI: ffff880620a23d30
[ 894.008827] RBP: ffff880620a23d40 R08: 0000000000000000 R09: ffffea00188e4f00
[ 894.009063] R10: ffffffff814d032f R11: 0000000000000020 R12: 0000000000000000
[ 894.009300] R13: ffff88061f067e40 R14: ffff88061f067ee8 R15: ffff88062393dc00
[ 894.009537] FS: 0000000000000000(0000) GS:ffff88063fc40000(0000) knlGS:0000000000000000
[ 894.009956] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 894.010187] CR2: 00007f056a9a6ff0 CR3: 0000000001a0b000 CR4: 00000000000027e0
[ 894.010422] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 894.010659] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 894.010896] Process lockd (pid: 1035, threadinfo ffff880620a22000, task ffff8806208b5900)
[ 894.011310] Stack:
[ 894.011528] 0000000000000010 ffff8806102d3db1 00000003000186b5 ffffffff00000010
[ 894.012083] ffff8806102d3dc1 000000000000008c 0000000000000000 ffff880620a23ce8
[ 894.012637] ffff880620a23d58 0000000000000000 ffff88061f067ee8 ffff8806102d3d00
[ 894.013190] Call Trace:
[ 894.013413] [<ffffffff8120ff07>] nsm_monitor+0x123/0x17e
[ 894.013645] [<ffffffff81211b72>] nlm4svc_retrieve_args+0x62/0xd7
[ 894.013879] [<ffffffff81211f71>] nlm4svc_proc_lock+0x3c/0xb5
[ 894.014112] [<ffffffff812116a3>] ? nlm4svc_decode_lockargs+0x47/0xb2
[ 894.014349] [<ffffffff814d89fa>] svc_process+0x3bf/0x6a1
[ 894.014581] [<ffffffff8120d5f0>] lockd+0x127/0x164
[ 894.014810] [<ffffffff8120d4c9>] ? set_grace_period+0x8a/0x8a
[ 894.015046] [<ffffffff8107bcbc>] kthread+0x8b/0x93
[ 894.015277] [<ffffffff81501334>] kernel_thread_helper+0x4/0x10
[ 894.015511] [<ffffffff8107bc31>] ? kthread_worker_fn+0xe1/0xe1
[ 894.015744] [<ffffffff81501330>] ? gs_change+0xb/0xb
[ 894.015972] Code: b8 10 00 00 00 48 89 45 c0 48 8d 81 8c 00 00 00 b9 08 00 00 00 48 89 45 c8 89 d8 f3 ab 48 8d 45 a8 48 89 55 e0 48 89 45 d8 75 02 <0f> 0b 89 f6 48 c7 02 00 00 00 00 4c 89 c7 48 6b f6 38 ba 00 04
[ 894.018895] RIP [<ffffffff8120fbbc>] nsm_mon_unmon+0x64/0x98
[ 894.019163] RSP <ffff880620a23ce0>
[ 894.019401] ---[ end trace b8ef5cb81bec72c8 ]---

Slightly different timing, but still boom.