Message-ID: <3f838e42a50575595c7310386cf698aca8f89607.camel@kernel.org>
Subject: Re: [PATCH v2] ceph: allow object copies across different filesystems in the same cluster
From: Jeff Layton
To: Luis Henriques, Sage Weil, Ilya Dryomov
Cc: ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Mon, 09 Sep 2019 06:35:07 -0400
In-Reply-To: <20190909102834.16246-1-lhenriques@suse.com>
References: <87k1ahojri.fsf@suse.com> <20190909102834.16246-1-lhenriques@suse.com>

On Mon, 2019-09-09 at 11:28 +0100, Luis Henriques wrote:
> OSDs are able to perform object copies across different pools. Thus,
> there's no need to prevent copy_file_range from doing remote copies if the
> source and destination superblocks are different. Only return -EXDEV if
> they have a different fsid (the cluster ID).
>
> Signed-off-by: Luis Henriques
> ---
>  fs/ceph/file.c | 18 ++++++++++++++----
>  1 file changed, 14 insertions(+), 4 deletions(-)
>
> Hi,
>
> Here's the patch changelog since the initial submission:
>
> - Dropped have_fsid checks on client structs
> - Use %pU to print the fsid instead of raw hex strings (%*ph)
> - Fixed 'To:' field in email so that this time the patch hits vger
>
> Cheers,
> --
> Luis
>
> diff --git a/fs/ceph/file.c b/fs/ceph/file.c
> index 685a03cc4b77..4a624a1dd0bb 100644
> --- a/fs/ceph/file.c
> +++ b/fs/ceph/file.c
> @@ -1904,6 +1904,7 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
>  	struct ceph_inode_info *src_ci = ceph_inode(src_inode);
>  	struct ceph_inode_info *dst_ci = ceph_inode(dst_inode);
>  	struct ceph_cap_flush *prealloc_cf;
> +	struct ceph_fs_client *src_fsc = ceph_inode_to_client(src_inode);
>  	struct ceph_object_locator src_oloc, dst_oloc;
>  	struct ceph_object_id src_oid, dst_oid;
>  	loff_t endoff = 0, size;
> @@ -1915,8 +1916,17 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
>
>  	if (src_inode == dst_inode)
>  		return -EINVAL;
> -	if (src_inode->i_sb != dst_inode->i_sb)
> -		return -EXDEV;
> +	if (src_inode->i_sb != dst_inode->i_sb) {
> +		struct ceph_fs_client *dst_fsc =
> +					ceph_inode_to_client(dst_inode);
> +
> +		if (ceph_fsid_compare(&src_fsc->client->fsid,
> +				      &dst_fsc->client->fsid)) {
> +			dout("Copying object across different clusters:");
> +			dout(" src fsid: %pU dst fsid: %pU\n",
> +			     &src_fsc->client->fsid, &dst_fsc->client->fsid);
> +			return -EXDEV;
> +		}
> +	}

Just to be clear: what happens here if I mount two entirely separate
clusters, and their OSDs don't have any access to one another? Will this
fail at some later point with an error that we can catch so that we can
fall back?

>  	if (ceph_snap(dst_inode) != CEPH_NOSNAP)
>  		return -EROFS;
>
> @@ -1928,7 +1938,7 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
>  	 * efficient).
>  	 */
>
> -	if (ceph_test_mount_opt(ceph_inode_to_client(src_inode), NOCOPYFROM))
> +	if (ceph_test_mount_opt(src_fsc, NOCOPYFROM))
>  		return -EOPNOTSUPP;
>
>  	if ((src_ci->i_layout.stripe_unit != dst_ci->i_layout.stripe_unit) ||
> @@ -2044,7 +2054,7 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
>  			dst_ci->i_vino.ino, dst_objnum);
>  		/* Do an object remote copy */
>  		err = ceph_osdc_copy_from(
> -			&ceph_inode_to_client(src_inode)->client->osdc,
> +			&src_fsc->client->osdc,
>  			src_ci->i_vino.snap, 0,
>  			&src_oid, &src_oloc,
>  			CEPH_OSD_OP_FLAG_FADVISE_SEQUENTIAL |

-- 
Jeff Layton
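As an aside on the fallback question above: userspace callers of copy_file_range(2) conventionally handle -EXDEV (seen as a -1 return with errno == EXDEV) by copying the bytes through a buffer themselves. A minimal sketch of that fallback; the helper name copy_with_fallback and the 64 KiB buffer size are illustrative, not taken from the patch:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/*
 * Try an in-kernel copy first; if the kernel refuses with EXDEV (or the
 * syscall is unsupported), fall back to a plain read/write loop.
 */
static ssize_t copy_with_fallback(int src_fd, int dst_fd, size_t len)
{
	ssize_t copied = copy_file_range(src_fd, NULL, dst_fd, NULL, len, 0);
	char buf[64 * 1024];
	ssize_t total = 0;

	if (copied >= 0)
		return copied;	/* may be a short copy; caller can loop */
	if (errno != EXDEV && errno != EOPNOTSUPP && errno != ENOSYS)
		return -1;	/* a real error, give up */

	/* Fallback: shuttle the data through a userspace buffer. */
	while ((size_t)total < len) {
		size_t want = len - (size_t)total;
		ssize_t n, off;

		if (want > sizeof(buf))
			want = sizeof(buf);
		n = read(src_fd, buf, want);
		if (n < 0)
			return -1;
		if (n == 0)
			break;		/* EOF on the source */
		off = 0;
		while (off < n) {
			ssize_t w = write(dst_fd, buf + off, (size_t)(n - off));
			if (w < 0)
				return -1;
			off += w;
		}
		total += n;
	}
	return total;
}
```

In-kernel the error is simply the return value (as in the hunk returning -EXDEV); userspace only ever sees it through errno, which is what the check above keys on.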