From: Ben Hutchings
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, "Yan, Zheng", "Ilya Dryomov"
Date: Tue, 17 Feb 2015 01:46:53 +0000
Subject: [PATCH 3.2 062/152] ceph: introduce global empty snap context

3.2.67-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: "Yan, Zheng"

commit 97c85a828f36bbfffe9d77b977b65a5872b6cad4 upstream.

The current snapshot code does not properly handle moving an inode from
one empty snap realm to another empty snap realm.  After changing the
inode's snap realm, some dirty pages' snap context may no longer be
equal to the inode's i_head_snapc, which can trigger a BUG() in
ceph_put_wrbuffer_cap_refs().

The fix is to introduce a single global empty snap context shared by
all empty snap realms, so that the snap context pointer stays the same
when an inode moves between them.  This avoids triggering the BUG() on
filesystems with no snapshots.

Fixes: http://tracker.ceph.com/issues/9928
Signed-off-by: Yan, Zheng
Reviewed-by: Ilya Dryomov
[bwh: Backported to 3.2:
 - Adjust context
 - As we don't have ceph_create_snap_context(), open-code it in
   ceph_snap_init()]
Signed-off-by: Ben Hutchings
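To make the idea concrete outside the kernel, here is a hypothetical
stand-alone C sketch of the same pattern, not the actual ceph code:
struct snap_context, get_snap_context(), put_snap_context(),
snap_init(), snap_exit() and build_snap_context() below are simplified
userspace stand-ins for the ceph_* structures and helpers the patch
touches.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the kernel's struct ceph_snap_context:
 * a refcounted, variable-length list of snapshot ids. */
struct snap_context {
        atomic_int nref;                /* reference count */
        unsigned long long seq;         /* snapshot sequence number */
        int num_snaps;                  /* entries in snaps[] */
        unsigned long long snaps[];
};

/* One shared context meaning "no snapshots", like empty_snapc. */
static struct snap_context *empty_snapc;

static struct snap_context *get_snap_context(struct snap_context *sc)
{
        atomic_fetch_add(&sc->nref, 1);
        return sc;
}

static void put_snap_context(struct snap_context *sc)
{
        /* Free when the last reference is dropped. */
        if (atomic_fetch_sub(&sc->nref, 1) == 1)
                free(sc);
}

/* Userspace analogue of ceph_snap_init(): create the shared empty
 * context once, holding the initial reference. */
static int snap_init(void)
{
        empty_snapc = calloc(1, sizeof(*empty_snapc));
        if (!empty_snapc)
                return -1;
        atomic_init(&empty_snapc->nref, 1);
        empty_snapc->seq = 1;
        return 0;
}

/* Userspace analogue of ceph_snap_exit(): drop the initial ref. */
static void snap_exit(void)
{
        put_snap_context(empty_snapc);
}

/* Analogue of the build_snap_context() fast path: every realm with
 * no snapshots gets a reference to the one shared object instead of
 * a private allocation, so a pointer comparison between a dirty
 * page's context and the inode's context keeps working after the
 * inode moves between empty realms. */
static struct snap_context *build_snap_context(int num,
                                               unsigned long long seq)
{
        if (num == 0 && seq == empty_snapc->seq)
                return get_snap_context(empty_snapc);

        struct snap_context *sc;
        sc = calloc(1, sizeof(*sc) + num * sizeof(sc->snaps[0]));
        if (!sc)
                return NULL;
        atomic_init(&sc->nref, 1);
        sc->seq = seq;
        sc->num_snaps = num;
        return sc;
}

int main(void)
{
        if (snap_init())
                return 1;
        struct snap_context *a = build_snap_context(0, 1);
        struct snap_context *b = build_snap_context(0, 1);
        printf("two empty realms share one context: %s\n",
               (a && a == b) ? "yes" : "no");
        put_snap_context(a);
        put_snap_context(b);
        snap_exit();
        return 0;
}

Run under these assumptions, both empty realms report sharing one
context object, which is exactly the pointer-equality invariant the
kernel BUG() check relies on.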
*/ dout("queue_cap_snap %p already pending\n", inode); kfree(capsnap); + } else if (ci->i_snap_realm->cached_context == empty_snapc) { + dout("queue_cap_snap %p empty snapc\n", inode); + kfree(capsnap); } else if (dirty & (CEPH_CAP_AUTH_EXCL|CEPH_CAP_XATTR_EXCL| CEPH_CAP_FILE_EXCL|CEPH_CAP_FILE_WR)) { struct ceph_snap_context *snapc = ci->i_head_snapc; @@ -927,5 +940,17 @@ out: return; } +int __init ceph_snap_init(void) +{ + empty_snapc = kzalloc(sizeof(struct ceph_snap_context), GFP_NOFS); + if (!empty_snapc) + return -ENOMEM; + atomic_set(&empty_snapc->nref, 1); + empty_snapc->seq = 1; + return 0; +} - +void ceph_snap_exit(void) +{ + ceph_put_snap_context(empty_snapc); +} --- a/fs/ceph/super.c +++ b/fs/ceph/super.c @@ -911,14 +911,20 @@ static int __init init_ceph(void) if (ret) goto out; - ret = register_filesystem(&ceph_fs_type); + ret = ceph_snap_init(); if (ret) goto out_icache; + ret = register_filesystem(&ceph_fs_type); + if (ret) + goto out_snap; + pr_info("loaded (mds proto %d)\n", CEPH_MDSC_PROTOCOL); return 0; +out_snap: + ceph_snap_exit(); out_icache: destroy_caches(); out: @@ -929,6 +935,7 @@ static void __exit exit_ceph(void) { dout("exit_ceph\n"); unregister_filesystem(&ceph_fs_type); + ceph_snap_exit(); destroy_caches(); } --- a/fs/ceph/super.h +++ b/fs/ceph/super.h @@ -677,6 +677,8 @@ extern void ceph_queue_cap_snap(struct c extern int __ceph_finish_cap_snap(struct ceph_inode_info *ci, struct ceph_cap_snap *capsnap); extern void ceph_cleanup_empty_realms(struct ceph_mds_client *mdsc); +extern int ceph_snap_init(void); +extern void ceph_snap_exit(void); /* * a cap_snap is "pending" if it is still awaiting an in-progress -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/