Date: Mon, 27 Jan 2020 18:44:43 +0000
From: Luis Henriques
To: Ilya Dryomov
Cc: Jeff Layton, Sage Weil, "Yan, Zheng", Gregory Farnum,
	Ceph Development, LKML
Subject: Re: [RFC PATCH 2/3] ceph: parallelize all copy-from requests in copy_file_range
Message-ID: <20200127184443.GB22545@suse.com>
References: <20200127164321.17468-1-lhenriques@suse.com>
 <20200127164321.17468-3-lhenriques@suse.com>

On Mon, Jan 27, 2020 at 06:58:46PM +0100, Ilya Dryomov wrote:
> On Mon, Jan 27, 2020 at 5:43 PM Luis Henriques wrote:
> >
> > Right now the copy_file_range syscall serializes all the OSD 'copy-from'
> > operations, waiting for each request to complete before sending the next
> > one.  This patch modifies copy_file_range so that all the 'copy-from'
> > operations are sent in bulk and their completion is only waited for at
> > the end.  This allows for significant speed-ups, especially when the
> > requests are sent to different target OSDs.
> >
> > Signed-off-by: Luis Henriques
> > ---
> >  fs/ceph/file.c                  | 38 +++++++++++++++++++++++++++++++--
> >  include/linux/ceph/osd_client.h |  2 ++
> >  net/ceph/osd_client.c           |  1 +
> >  3 files changed, 39 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/ceph/file.c b/fs/ceph/file.c
> > index 1e6cdf2dfe90..5d8f0ba11719 100644
> > --- a/fs/ceph/file.c
> > +++ b/fs/ceph/file.c
> > @@ -1931,6 +1931,28 @@ static int is_file_size_ok(struct inode *src_inode, struct inode *dst_inode,
> >  	return 0;
> >  }
> >
> > +static int wait_copy_from_reqs(struct list_head *osd_reqs)
> > +{
> > +	struct ceph_osd_request *req;
> > +	int ret = 0, err;
> > +
> > +	while (!list_empty(osd_reqs)) {
> > +		req = list_first_entry(osd_reqs,
> > +				       struct ceph_osd_request,
> > +				       r_copy_item);
> > +		list_del_init(&req->r_copy_item);
> > +		err = ceph_osdc_wait_request(req->r_osdc, req);
> > +		if (err) {
> > +			if (!ret)
> > +				ret = err;
> > +			dout("copy request failed (err=%d)\n", err);
> > +		}
> > +		ceph_osdc_put_request(req);
> > +	}
> > +
> > +	return ret;
> > +}
>
> This should probably go into libceph, as ceph_osdc_wait_requests().

Sure, that makes sense.
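A minimal sketch of what such a libceph helper could look like, assuming it
simply takes over the body of wait_copy_from_reqs() above and is exported
from net/ceph/osd_client.c; the name comes from the suggestion, while the
exact signature (taking the list head directly) is an assumption rather
than a final interface:

	/* net/ceph/osd_client.c (sketch only, not the final implementation) */
	int ceph_osdc_wait_requests(struct list_head *osd_reqs)
	{
		struct ceph_osd_request *req;
		int ret = 0, err;

		while (!list_empty(osd_reqs)) {
			req = list_first_entry(osd_reqs,
					       struct ceph_osd_request,
					       r_copy_item);
			list_del_init(&req->r_copy_item);
			err = ceph_osdc_wait_request(req->r_osdc, req);
			if (err) {
				/* report the first error, but keep reaping */
				if (!ret)
					ret = err;
				dout("%s: request failed (err=%d)\n",
				     __func__, err);
			}
			ceph_osdc_put_request(req);
		}
		return ret;
	}
	EXPORT_SYMBOL(ceph_osdc_wait_requests);

with a matching declaration added to include/linux/ceph/osd_client.h.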
> > +
> >  static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
> >  				      struct file *dst_file, loff_t dst_off,
> >  				      size_t len, unsigned int flags)
> > @@ -1943,12 +1965,14 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
> >  	struct ceph_fs_client *src_fsc = ceph_inode_to_client(src_inode);
> >  	struct ceph_object_locator src_oloc, dst_oloc;
> >  	struct ceph_object_id src_oid, dst_oid;
> > +	struct ceph_osd_request *req;
> >  	loff_t endoff = 0, size;
> >  	ssize_t ret = -EIO;
> >  	u64 src_objnum, dst_objnum, src_objoff, dst_objoff;
> >  	u32 src_objlen, dst_objlen, object_size;
> >  	int src_got = 0, dst_got = 0, err, dirty;
> >  	bool do_final_copy = false;
> > +	LIST_HEAD(osd_reqs);
> >
> >  	if (src_inode->i_sb != dst_inode->i_sb) {
> >  		struct ceph_fs_client *dst_fsc = ceph_inode_to_client(dst_inode);
> > @@ -2097,7 +2121,7 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
> >  		ceph_oid_printf(&dst_oid, "%llx.%08llx",
> >  				dst_ci->i_vino.ino, dst_objnum);
> >  		/* Do an object remote copy */
> > -		err = ceph_osdc_copy_from(
> > +		req = ceph_osdc_copy_from_nowait(
> >  			&src_fsc->client->osdc,
> >  			src_ci->i_vino.snap, 0,
> >  			&src_oid, &src_oloc,
> > @@ -2108,7 +2132,8 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
> >  			CEPH_OSD_OP_FLAG_FADVISE_DONTNEED,
> >  			dst_ci->i_truncate_seq, dst_ci->i_truncate_size,
> >  			CEPH_OSD_COPY_FROM_FLAG_TRUNCATE_SEQ);
> > -		if (err) {
> > +		if (IS_ERR(req)) {
> > +			err = PTR_ERR(req);
> >  			if (err == -EOPNOTSUPP) {
> >  				src_fsc->have_copy_from2 = false;
> >  				pr_notice("OSDs don't support 'copy-from2'; "
> > @@ -2117,14 +2142,23 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
> >  			dout("ceph_osdc_copy_from returned %d\n", err);
> >  			if (!ret)
> >  				ret = err;
> > +			/* wait for all queued requests */
> > +			wait_copy_from_reqs(&osd_reqs);
> >  			goto out_caps;
> >  		}
> > +		list_add(&req->r_copy_item, &osd_reqs);
> >  		len -= object_size;
> >  		src_off += object_size;
> >  		dst_off += object_size;
> >  		ret += object_size;
> >  	}
> >
> > +	err = wait_copy_from_reqs(&osd_reqs);
> > +	if (err) {
> > +		if (!ret)
> > +			ret = err;
> > +		goto out_caps;
> > +	}
> >  	if (len)
> >  		/* We still need one final local copy */
> >  		do_final_copy = true;
> > diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
> > index 7916a178d137..2b4a14bc6154 100644
> > --- a/include/linux/ceph/osd_client.h
> > +++ b/include/linux/ceph/osd_client.h
> > @@ -210,6 +210,8 @@ struct ceph_osd_request {
> >  	u64 r_data_offset;	/* ditto */
> >  	bool r_linger;		/* don't resend on failure */
> >
> > +	struct list_head r_copy_item;	/* used for copy-from operations */
> > +
>
> We have r_private_item for exactly this kind of ad-hoc list, already
> used in rbd and ceph_direct_read_write().

Ah!  I had actually considered using it, but thought that reusing the same
field for different purposes might be a source of confusion and bugs in
the future.  But sure, it can be used.  I'll add a comment to the struct
definition documenting this ad-hoc usage.

Cheers,
--
Luís
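For illustration, the call-site side of the r_private_item approach in
__ceph_copy_file_range() might reduce to the fragment below.  The
surrounding variables (req, osd_reqs, ret, err) and the out_caps label are
the ones from the quoted patch, ceph_osdc_wait_requests() is the helper
name suggested earlier in the thread, and the EOPNOTSUPP handling is left
out, so this is a sketch rather than the code that was eventually merged:

	/* right after ceph_osdc_copy_from_nowait() returns (sketch only) */
	if (IS_ERR(req)) {
		err = PTR_ERR(req);
		/* reap whatever was already queued before bailing out */
		ceph_osdc_wait_requests(&osd_reqs);
		goto out_caps;
	}
	/* reuse the generic per-request link instead of adding r_copy_item */
	list_add(&req->r_private_item, &osd_reqs);

	/* ... and once the per-object loop has finished, wait for all of
	 * the requests that are still in flight */
	err = ceph_osdc_wait_requests(&osd_reqs);
	if (err && !ret)
		ret = err;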