Date: Thu, 30 Jan 2020 16:31:01 +0000
From: Luis Henriques
To: Ilya Dryomov
Cc: Jeff Layton, Sage Weil, "Yan, Zheng", Gregory Farnum,
	Ceph Development, LKML
Subject: Re: [PATCH 1/1] ceph: parallelize all copy-from requests in copy_file_range
Message-ID: <20200130163101.GA22679@suse.com>
References: <20200129182011.5483-1-lhenriques@suse.com>
	<20200129182011.5483-2-lhenriques@suse.com>
	<20200130153711.GA20170@suse.com>

On Thu, Jan 30, 2020 at 04:59:59PM +0100, Ilya Dryomov wrote:
...
> > > > @@ -2117,14 +2126,33 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
> > > >  			dout("ceph_osdc_copy_from returned %d\n", err);
> > > >  			if (!ret)
> > > >  				ret = err;
> > > > +			/* wait for all queued requests */
> > > > +			ceph_osdc_wait_requests(&osd_reqs);
> > > >  			goto out_caps;
> > > >  		}
> > > > +		list_add(&req->r_private_item, &osd_reqs);
> > > >  		len -= object_size;
> > > >  		src_off += object_size;
> > > >  		dst_off += object_size;
> > > >  		ret += object_size;
> > >
> > > So ret is incremented here, but you have numerous tests where ret is
> > > assigned an error only if ret is 0.  Unless I'm missing something, this
> > > interferes with returning errors from __ceph_copy_file_range().
> >
> > Well, the problem is that an error may occur *after* we have already done
> > some copies.  In that case we need to return the number of bytes that have
> > been successfully copied instead of an error; eventually, subsequent calls
> > to complete the copy_file_range will then return the error.  At least this
> > is how I understood the man page (i.e. similar to the write(2) syscall).
>
> AFAICS ret is incremented before you know that *any* of the copies were
> successful.  If the first copy fails, how do you report that error?

Ah, got it.  So a solution would be to change the ceph_osdc_wait_requests
interface so that it also reports the number of successful requests.  I'll
append a rough, untested sketch of what I have in mind at the end of this
email.

/me takes a deep breath and goes to look at the *whole* thing to prevent
these mistakes.

Thanks a lot for your review, Ilya.

> > > > +		if (--ncopies == 0) {
> > > > +			err = ceph_osdc_wait_requests(&osd_reqs);
> > > > +			if (err) {
> > > > +				if (!ret)
> > > > +					ret = err;
> > > > +				goto out_caps;
> > > > +			}
> > > > +			ncopies = (src_fsc->mount_options->wsize /
> > > > +				   object_size) * 4;
> > >
> > > The object size is constant within a file, so ncopies should be too.
> > > Perhaps introduce a counter instead of recalculating ncopies here?
> >
> > Not sure I understood your comment.  You would rather have:
> >
> >  * ncopies initialized only once outside the loop
> >  * a counter counting the number of objects copied
> >  * ceph_osdc_wait_requests() called whenever this counter is a multiple
> >    of ncopies
>
> I was thinking of a counter that is initialized to ncopies and reset to
> ncopies any time it reaches 0.  This is just a nit though.

Sure, no problem.

Cheers,
--
Luís
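
P.S. Here's the rough sketch mentioned above.  It is completely untested
and only meant to illustrate the idea: the helper name, the extra 'nr_ok'
out-parameter and the exact per-request accounting are assumptions on my
side, not the real libceph interface.  It relies only on the existing
ceph_osdc_wait_request()/ceph_osdc_put_request() helpers and on the
r_private_item list field from <linux/ceph/osd_client.h> that the patch
already uses.

/*
 * Sketch: wait for every queued copy-from request and report how many of
 * them completed successfully, so that __ceph_copy_file_range() can
 * account 'ret' as nr_ok * object_size after the fact instead of
 * incrementing it before the outcome of each copy is known.
 */
static int ceph_osdc_wait_requests_sketch(struct list_head *osd_reqs,
					  unsigned int *nr_ok)
{
	struct ceph_osd_request *req;
	int ret = 0, err;

	*nr_ok = 0;
	while (!list_empty(osd_reqs)) {
		req = list_first_entry(osd_reqs, struct ceph_osd_request,
				       r_private_item);
		list_del_init(&req->r_private_item);
		err = ceph_osdc_wait_request(req->r_osdc, req);
		if (!err)
			(*nr_ok)++;
		else if (!ret)
			ret = err;	/* keep the first error we see */
		ceph_osdc_put_request(req);
	}
	return ret;
}

The caller in __ceph_copy_file_range() would then bump 'ret' by
nr_ok * object_size after each wait, and the wait itself could be driven
by the counter Ilya suggested (initialized to ncopies and reset to
ncopies whenever it reaches 0).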