From: Ilya Dryomov
Date: Tue, 4 Feb 2020 19:06:36 +0100
Subject: Re: [PATCH v3 1/1] ceph: parallelize all copy-from requests in copy_file_range
To: Luis Henriques
Cc: Jeff Layton, Sage Weil, "Yan, Zheng", Gregory Farnum, Ceph Development, LKML
In-Reply-To: <20200204151158.GA15992@suse.com>
References: <20200203165117.5701-1-lhenriques@suse.com> <20200203165117.5701-2-lhenriques@suse.com> <20200204151158.GA15992@suse.com>

On Tue, Feb 4, 2020 at 4:11 PM Luis Henriques wrote:
>
> On Tue, Feb 04, 2020 at 11:56:57AM +0100, Ilya Dryomov wrote:
> ...
> > > @@ -2108,21 +2118,40 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
> > >                                        CEPH_OSD_OP_FLAG_FADVISE_DONTNEED,
> > >                                        dst_ci->i_truncate_seq, dst_ci->i_truncate_size,
> > >                                        CEPH_OSD_COPY_FROM_FLAG_TRUNCATE_SEQ);
> > > -       if (err) {
> > > -               if (err == -EOPNOTSUPP) {
> > > -                       src_fsc->have_copy_from2 = false;
> > > -                       pr_notice("OSDs don't support 'copy-from2'; "
> > > -                                 "disabling copy_file_range\n");
> > > -               }
> > > +       if (IS_ERR(req)) {
> > > +               err = PTR_ERR(req);
> > >                 dout("ceph_osdc_copy_from returned %d\n", err);
> > > +
> > > +               /* wait for all queued requests */
> > > +               ceph_osdc_wait_requests(&osd_reqs, &reqs_complete);
> > > +               ret += reqs_complete * object_size; /* Update copied bytes */
> >
> > Hi Luis,
> >
> > Looks like ret is still incremented unconditionally? What happens
> > if there are three OSD requests on the list and the first fails but
> > the second and the third succeed? As is, ceph_osdc_wait_requests()
> > will return an error with reqs_complete set to 2...
> >
> > >                 if (!ret)
> > >                         ret = err;
> >
> > ... and we will return 8M instead of an error.
>
> Right, my assumption was that if a request fails, all subsequent requests
> would also fail. This would allow ret to be updated with the number of
> successful requests (x object size), even if the OSDs' replies were being
> delivered in a different order. But from your comment I see that my
> assumption is incorrect.
>
> In that case, when should ret be updated with the number of bytes already
> written? Only after a successful call to ceph_osdc_wait_requests()?

I mentioned this in the previous email: you probably want to change
ceph_osdc_wait_requests() so that the counter isn't incremented after
an error is encountered.
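Something along these lines, perhaps. Note this is a completely
untested sketch, not a patch: it reuses osd_reqs, reqs_complete and
r_private_item from your series and assumes the caller keeps the list
in submission order (i.e. list_add_tail() rather than list_add());
everything else is just for illustration:

static int ceph_osdc_wait_requests(struct list_head *osd_reqs,
                                   int *reqs_complete)
{
        struct ceph_osd_request *req;
        int complete = 0;
        int ret = 0;
        int err;

        while (!list_empty(osd_reqs)) {
                req = list_first_entry(osd_reqs, struct ceph_osd_request,
                                       r_private_item);
                list_del_init(&req->r_private_item);

                /*
                 * Always wait for the request so that nothing is leaked
                 * on error, but stop counting completions as soon as the
                 * first error is seen: bytes past that point may not
                 * have been copied contiguously.
                 */
                err = ceph_osdc_wait_request(req->r_osdc, req);
                if (err && !ret)
                        ret = err;
                else if (!ret)
                        complete++;

                ceph_osdc_put_request(req);
        }

        *reqs_complete = complete;
        return ret;
}

With that, ret += reqs_complete * object_size in the caller can only
ever credit a contiguous prefix of the range.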
> I.e. only after each throttling cycle, when we don't have any requests
> pending completion? In this case, I can simply drop the extra
> reqs_complete parameter to ceph_osdc_wait_requests().
>
> In your example the right thing to do would be to simply return an
> error, I guess. But then we're assuming that we're losing space in the
> storage, as we may have created objects that won't be reachable anymore.

Well, that is what I'm getting at -- this needs a lot more consideration.
How errors are dealt with, how file metadata is updated, when do we fall
back to plain copy, etc. Generating stray objects is bad but way better
than reporting that e.g. 0..12M were copied when only 0..4M and 8M..12M
were actually copied, leaving the user one step away from data loss.

One option is to revert to issuing copy-from requests serially when an
error is encountered. Another option is to fall back to plain copy on
any error. Or perhaps we just don't bother with the complexity of
parallel copy-from requests at all...

Of course, no matter what we do for parallel copy-from requests, the
existing short copy bug needs to be fixed separately.

> > >                 goto out_caps;
> > >         }
> > > +       list_add(&req->r_private_item, &osd_reqs);
> > >         len -= object_size;
> > >         src_off += object_size;
> > >         dst_off += object_size;
> > > -       ret += object_size;
> > > +       /*
> > > +        * Wait requests if we've reached the maximum requests allowed
> > > +        * or if this was the last copy
> > > +        */
> > > +       if ((--copy_count == 0) || (len < object_size)) {
> > > +               err = ceph_osdc_wait_requests(&osd_reqs, &reqs_complete);
> > > +               ret += reqs_complete * object_size; /* Update copied bytes */
> >
> > Same here.
> >
> > > +               if (err) {
> > > +                       if (err == -EOPNOTSUPP) {
> > > +                               src_fsc->have_copy_from2 = false;
> >
> > Since EOPNOTSUPP is special in that it triggers the fallback, it
> > should be returned even if something was copied. Think about a
> > mixed cluster, where some OSDs support copy-from2 and some don't.
> > If the range is split between such OSDs, copy_file_range() will
> > always return short if the range happens to start on a new OSD.
>
> IMO, if we managed to copy some objects, we still need to return the
> number of bytes copied. Because, since this return value will be less
> than the expected number of bytes, the application will retry the
> operation. And at that point, since we've set have_copy_from2 to
> 'false', the default VFS implementation will be used.

Ah, yeah, given the have_copy_from2 guard, this particular corner case
is fine.

Thanks,

                Ilya
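P.S. For completeness, this is the fallback path we are relying on:
once have_copy_from2 is cleared, __ceph_copy_file_range() returns
-EOPNOTSUPP right away on the retry, and the wrapper in fs/ceph/file.c
hands the call off to the generic VFS code. Roughly (quoted from
memory, so treat it as a sketch rather than the exact code):

static ssize_t ceph_copy_file_range(struct file *src_file, loff_t src_off,
                                    struct file *dst_file, loff_t dst_off,
                                    size_t len, unsigned int flags)
{
        ssize_t ret;

        ret = __ceph_copy_file_range(src_file, src_off, dst_file, dst_off,
                                     len, flags);

        /*
         * copy-from[2] is unsupported, or the copy crosses filesystems:
         * fall back to the generic VFS implementation.
         */
        if (ret == -EOPNOTSUPP || ret == -EXDEV)
                ret = generic_copy_file_range(src_file, src_off, dst_file,
                                              dst_off, len, flags);
        return ret;
}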