From: Luis Henriques
To: "Yan, Zheng"
Cc: "Yan, Zheng", Sage Weil, Ilya Dryomov, ceph-devel,
    Linux Kernel Mailing List, stable@vger.kernel.org
Subject: Re: [PATCH] ceph: Fix a memory leak in ci->i_head_snapc
References: <20190315111107.15154-1-lhenriques@suse.com> <87r2b4zd2q.fsf@suse.com>
    <87lg1c17b2.fsf@suse.com>
Date: Tue, 19 Mar 2019 09:39:24 +0000
In-Reply-To: (Zheng Yan's message of "Tue, 19 Mar 2019 11:13:35 +0800")
Message-ID: <87a7hr19v7.fsf@suse.com>

"Yan, Zheng" writes:

> On Tue, Mar 19, 2019 at 12:22 AM Luis Henriques wrote:
>>
>> "Yan, Zheng" writes:
>>
>> > On Mon, Mar 18, 2019 at 6:33 PM Luis Henriques wrote:
>> >>
>> >> "Yan, Zheng" writes:
>> >>
>> >> > On Fri, Mar 15, 2019 at 7:13 PM Luis Henriques wrote:
>> >> >>
>> >> >> I'm occasionally seeing a kmemleak warning in xfstest generic/013:
>> >> >>
>> >> >> unreferenced object 0xffff8881fccca940 (size 32):
>> >> >>   comm "kworker/0:1", pid 12, jiffies 4295005883 (age 130.648s)
>> >> >>   hex dump (first 32 bytes):
>> >> >>     01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00  ................
>> >> >>     00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>> >> >>   backtrace:
>> >> >>     [<00000000d741a1ea>] build_snap_context+0x5b/0x2a0
>> >> >>     [<0000000021a00533>] rebuild_snap_realms+0x27/0x90
>> >> >>     [<00000000ac538600>] rebuild_snap_realms+0x42/0x90
>> >> >>     [<000000000e955fac>] ceph_update_snap_trace+0x2ee/0x610
>> >> >>     [<00000000a9550416>] ceph_handle_snap+0x317/0x5f3
>> >> >>     [<00000000fc287b83>] dispatch+0x362/0x176c
>> >> >>     [<00000000a312c741>] ceph_con_workfn+0x9ce/0x2cf0
>> >> >>     [<000000004168e3a9>] process_one_work+0x1d4/0x400
>> >> >>     [<000000002188e9e7>] worker_thread+0x2d/0x3c0
>> >> >>     [<00000000b593e4b3>] kthread+0x112/0x130
>> >> >>     [<00000000a8587dca>] ret_from_fork+0x35/0x40
>> >> >>     [<00000000ba1c9c1d>] 0xffffffffffffffff
>> >> >>
>> >> >> It looks like it is possible that we miss a flush_ack from the MDS when,
>> >> >> for example, umounting the filesystem.  In that case, we can simply drop
>> >> >> the reference to the ceph_snap_context obtained in ceph_queue_cap_snap().
>> >> >>
>> >> >> Link: https://tracker.ceph.com/issues/38224
>> >> >> Cc: stable@vger.kernel.org
>> >> >> Signed-off-by: Luis Henriques
>> >> >> ---
>> >> >>  fs/ceph/caps.c | 7 +++++++
>> >> >>  1 file changed, 7 insertions(+)
>> >> >>
>> >> >> diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
>> >> >> index 36a8dc699448..208f4dc6f574 100644
>> >> >> --- a/fs/ceph/caps.c
>> >> >> +++ b/fs/ceph/caps.c
>> >> >> @@ -1054,6 +1054,7 @@ int ceph_is_any_caps(struct inode *inode)
>> >> >>  static void drop_inode_snap_realm(struct ceph_inode_info *ci)
>> >> >>  {
>> >> >>  	struct ceph_snap_realm *realm = ci->i_snap_realm;
>> >> >> +
>> >> >>  	spin_lock(&realm->inodes_with_caps_lock);
>> >> >>  	list_del_init(&ci->i_snap_realm_item);
>> >> >>  	ci->i_snap_realm_counter++;
>> >> >> @@ -1063,6 +1064,12 @@ static void drop_inode_snap_realm(struct ceph_inode_info *ci)
>> >> >>  	spin_unlock(&realm->inodes_with_caps_lock);
>> >> >>  	ceph_put_snap_realm(ceph_sb_to_client(ci->vfs_inode.i_sb)->mdsc,
>> >> >>  			    realm);
>> >> >> +	/*
>> >> >> +	 * ci->i_head_snapc should be NULL, but we may still be waiting for a
>> >> >> +	 * flush_ack from the MDS.  In that case, we still hold a ref for the
>> >> >> +	 * ceph_snap_context and we need to drop it.
>> >> >> +	 */
>> >> >> +	ceph_put_snap_context(ci->i_head_snapc);
>> >> >>  }
>> >> >>
>> >> >>  /*
>> >> >
>> >> > This does not seem right.  i_head_snapc is cleared when
>> >> > (ci->i_wrbuffer_ref_head == 0 && ci->i_dirty_caps == 0 &&
>> >> > ci->i_flushing_caps == 0).  Nothing to do with dropping
>> >> > ci->i_snap_realm.  Did you see 'reconnect denied' during the test?
>> >> > If you did, the fix should be in iterate_session_caps().
>> >> >
>> >>
>> >> No, I didn't see any 'reconnect denied' in the test.  The test actually
>> >> seems to execute fine, apart from the memory leak.
>> >>
>> >> It's very difficult to reproduce this issue, but last time I managed to
>> >> get this memory leak to trigger I actually had some debugging code in
>> >> drop_inode_snap_realm, something like:
>> >>
>> >>     if (ci->i_head_snapc)
>> >>         printk("i_head_snapc: 0x%px\n", ci->i_head_snapc);
>> >
>> > Please add code that prints i_wrbuffer_ref_head, i_dirty_caps, and
>> > i_flushing_caps, and try reproducing it again.
>> >
>>
>> Ok, it took me a few hours, but I managed to reproduce the bug with
>> those extra printks.  All those values are set to 0 when the bug
>> triggers (and i_head_snapc != NULL).
>>
>
> Thanks, which test triggers this bug?

That's generic/013.  It usually triggers after a few hours of running it
in a loop (I'm using a vstart cluster for that).

>
> I searched that code and found that we may fail to clean up i_head_snapc
> in two places.  One is in ceph_queue_cap_snap(), another is in
> remove_session_caps_cb().

Ah, great!  I spent a lot of time looking but I couldn't really find
it.  My bet was that ceph_queue_cap_snap was doing the
ceph_get_snap_context and that the corresponding ceph_put_snap_context
would be done in handle_cap_flush_ack.  That's why I mentioned the
missing flush_ack from the MDS.

Cheers,
--
Luis

>
>> Cheers,
>> --
>> Luis
>>
>> >
>> >>
>> >> This printk was only executed when the bug triggered (during a
>> >> filesystem umount), and the address shown was the same as in the
>> >> kmemleak warning.
>> >>
>> >> After spending some time looking, I assumed this to be a missing call to
>> >> handle_cap_flush_ack, which would do the i_head_snapc cleanup.
>> >>
>> >> Cheers,
>> >> --
>> >> Luis
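
To make the refcount dance in the thread concrete, here is a minimal,
self-contained userspace C sketch of the pattern under discussion.  It is
not the fs/ceph code: the plain-int refcount, the struct layouts, and the
function bodies are illustrative stand-ins for
ceph_get_snap_context()/ceph_put_snap_context() and the cap-flush path.
It shows why a flush_ack that never arrives (e.g. on umount) leaves a
stashed reference behind that the teardown path has to drop, which is
what the patch above does.

/*
 * Userspace model of the i_head_snapc refcounting -- NOT the kernel code.
 * Build with: cc -Wall -o snapc_demo snapc_demo.c
 */
#include <stdio.h>
#include <stdlib.h>

struct snap_context {
	int refs;			/* the kernel uses refcount_t */
};

struct inode_info {
	/* ref held while dirty data/caps are still in flight */
	struct snap_context *i_head_snapc;
};

static struct snap_context *get_snap_context(struct snap_context *sc)
{
	if (sc)
		sc->refs++;
	return sc;
}

static void put_snap_context(struct snap_context *sc)
{
	/* NULL-safe, like the kernel's ceph_put_snap_context() */
	if (sc && --sc->refs == 0) {
		printf("snap context freed\n");
		free(sc);
	}
}

/* models ceph_queue_cap_snap(): stash a ref for the in-flight flush */
static void queue_cap_snap(struct inode_info *ci, struct snap_context *sc)
{
	ci->i_head_snapc = get_snap_context(sc);
}

/* models handle_cap_flush_ack(): the MDS acked, drop the stashed ref */
static void handle_flush_ack(struct inode_info *ci)
{
	put_snap_context(ci->i_head_snapc);
	ci->i_head_snapc = NULL;
}

/*
 * models drop_inode_snap_realm() with the proposed fix: if the
 * flush_ack never arrived, the stashed ref would otherwise leak --
 * the situation the kmemleak report above points at.
 */
static void drop_inode_snap_realm(struct inode_info *ci)
{
	put_snap_context(ci->i_head_snapc);
	ci->i_head_snapc = NULL;
}

int main(void)
{
	struct inode_info ci = { 0 };
	struct snap_context *sc = calloc(1, sizeof(*sc));

	if (!sc)
		return 1;
	sc->refs = 1;			/* creator's reference */

	/* normal path: the flush_ack arrives and drops the stashed ref */
	queue_cap_snap(&ci, sc);	/* refs == 2 */
	handle_flush_ack(&ci);		/* refs == 1 */

	/* leak-prone path: the ack never arrives (e.g. umount); without
	 * the put in drop_inode_snap_realm() this ref would be the leak */
	queue_cap_snap(&ci, sc);	/* refs == 2 */
	drop_inode_snap_realm(&ci);	/* refs == 1 */

	put_snap_context(sc);		/* refs == 0: freed */
	return 0;
}

Because put_snap_context() here is a no-op on NULL, as
ceph_put_snap_context() is in the kernel, the unconditional put at
teardown is safe on both paths; that is what makes the one-line fix in
the patch viable even when i_head_snapc was already cleared.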