2017-04-18 17:28:40

by Olga Kornievskaia

Subject: Re: [nfsv4] Inter server-side copy performance

On Mon, Apr 17, 2017 at 11:37 AM, Anna Schumaker
<[email protected]> wrote:
>
>
> On Mon, Apr 17, 2017 at 11:30 AM Olga Kornievskaia <[email protected]> wrote:
>>
>> On Mon, Apr 17, 2017 at 9:36 AM, J. Bruce Fields <[email protected]>
>> wrote:
>> > On Fri, Apr 14, 2017 at 05:22:13PM -0400, Olga Kornievskaia wrote:
>> >> On Fri, Apr 14, 2017 at 4:09 PM, Mora, Jorge <[email protected]>
>> >> wrote:
>> >> > On 4/13/17, 11:45 AM, "J. Bruce Fields" <[email protected]> wrote:
>> >> >> Are you timing just the copy_file_range() call, or do you include a
>> >> >> following sync?
>> >> >
>> >> > I am timing right before calling copy_file_range() up to doing an
>> >> > fsync() and close() of the destination file.
>> >> > For the traditional copy is the same, I am timing right before the
>> >> > first read on the source file up to the
>> >> > fsync() and close() of the destination file.
>> >>
>> >> Why do we need a sync after copy_file_range()? The kernel's
>> >> copy_file_range() will send the commits for any unstable copies it
>> >> received.
>> >
>> > Why does it do that? As far as I can tell it's not required by the
>> > documentation for copy_file_range() or by COPY. COPY has a write verifier
>> > and a stable_how argument in the reply. Skipping the commits would
>> > allow better performance in case a copy requires multiple COPY calls.
>> >
>> > But, in any case, if copy_file_range() already committed then it
>> > probably doesn't make a significant difference to the timing whether you
>> > include a following sync and/or close.
>>
>> Hm, that does make sense. Anna wrote the original code, which included
>> the COMMIT after the copy; I hadn't thought about that.
>>
>> Anna, any comments?
>
>
> I think the commit just seemed like a good idea at the time. I'm okay with
> changing it if it doesn't make sense.

Given how the code is written now it looks like it's not possible to
save up commits....

Here's what I can see happening:

nfs42_proc_clone() as well as nfs42_proc_copy() will call
nfs_sync_inode(dst) "to make sure server(s) have the latest data"
prior to initiating the clone/copy. So even if we just queue up (not
send) the commit after executing nfs42_proc_copy(), the next call
into vfs_copy_file_range() will send out that queued-up commit.

Is it ok to relax that requirement? I'm not sure...
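For illustration, here is a minimal sketch of what "saving up commits" could
look like if that requirement were relaxed: treat each COPY reply like an
unstable WRITE reply, remember the write verifier and stable_how from the
reply, and send one COMMIT after the last range. This is not the existing
kernel code; do_copy_rpc() and do_commit_rpc() are hypothetical stand-ins
for the real RPCs.

#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the real COPY/COMMIT RPCs. */
enum stable_how { UNSTABLE = 0, DATA_SYNC = 1, FILE_SYNC = 2 };

struct copy_reply {
    uint64_t        bytes_copied;
    enum stable_how committed;      /* stable_how from the COPY reply */
    uint8_t         verifier[8];    /* write verifier from the COPY reply */
};

struct copy_reply do_copy_rpc(uint64_t offset, uint64_t count);
int do_commit_rpc(uint8_t verifier_out[8]);

/*
 * Copy [0, total) as a series of non-overlapping COPY calls, deferring
 * the COMMIT until the end.  If the verifier ever changes, the server
 * may have rebooted and the caller has to redo the copy.
 */
static int copy_with_deferred_commit(uint64_t total, uint64_t chunk)
{
    uint8_t first_verf[8];
    int unstable = 0;
    uint64_t off = 0;

    while (off < total) {
        uint64_t len = (total - off < chunk) ? total - off : chunk;
        struct copy_reply r = do_copy_rpc(off, len);

        if (r.bytes_copied == 0)
            return -1;                      /* no progress */
        if (r.committed == UNSTABLE) {
            if (!unstable) {
                memcpy(first_verf, r.verifier, 8);
                unstable = 1;
            } else if (memcmp(first_verf, r.verifier, 8)) {
                return -1;                  /* verifier changed */
            }
        }
        off += r.bytes_copied;
    }

    if (unstable) {
        uint8_t verf[8];

        if (do_commit_rpc(verf) || memcmp(first_verf, verf, 8))
            return -1;                      /* data may not be stable */
    }
    return 0;
}

The verifier check plays the same role as for unstable WRITEs: a single
COMMIT at the end is enough as long as the server has not rebooted in
between.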


2017-04-18 18:33:34

by J. Bruce Fields

Subject: Re: [nfsv4] Inter server-side copy performance

On Tue, Apr 18, 2017 at 01:28:39PM -0400, Olga Kornievskaia wrote:
> Given how the code is written now it looks like it's not possible to
> save up commits....
>
> Here's what I can see happening:
>
> nfs42_proc_clone() as well as nfs42_proc_copy() will call
> nfs_sync_inode(dst) "to make sure server(s) have the latest data"
> prior to initiating the clone/copy. So even if we just queue up (not
> send) the commit after executing nfs42_proc_copy(), the next call
> into vfs_copy_file_range() will send out that queued-up commit.
>
> Is it ok to relax that requirement? I'm not sure...

Well, if the typical case of copy_file_range is just opening a file,
doing a single big copy_file_range(), then closing the file, then this
doesn't matter.

The linux server is currently limiting COPY to 4MB at a time, which will
make the commits more annoying.

Even there the typical case will probably still be an open, followed by
a series of non-overlapping copies, then close, and that shouldn't
require the commits.
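As a concrete sketch of that typical case (assuming the glibc
copy_file_range() wrapper), the whole sequence is one open of each file, a
loop of non-overlapping copies, then close; no explicit fsync() or commit
appears anywhere in it:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Minimal "typical case": open source and destination once, issue a
 * series of non-overlapping copy_file_range() calls, then close.
 * The 4 MiB request size just makes the series explicit; one large
 * request would also work, since short returns are handled by the loop.
 */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }

    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    ssize_t n;
    /* NULL offsets: the kernel advances both file offsets, so each
     * call copies the next non-overlapping range. */
    while ((n = copy_file_range(in, NULL, out, NULL, 4 << 20, 0)) > 0)
        ;
    if (n < 0) {
        perror("copy_file_range");
        return 1;
    }

    close(out);
    close(in);
    return 0;
}

Whether the NFS client should add a COMMIT per copy_file_range() call, one
at close, or none at all is exactly the question here.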

--b.

2017-06-15 19:29:28

by Mora, Jorge

Subject: Re: [nfsv4] Inter server-side copy performance

Here are the new numbers using latest SSC code for an 8GB copy.
The code has a delayed unmount on the destination server which allows for single
mount when multiple COPY calls are made back to back.
Also, there is a third option which is using ioctl with a 64 bit copy length in order to
issue a single call for copy lengths >= 4GB.

Setup:
     Client: 16 CPUs, 32GB
     SRC server: 4 CPUs, 8GB
     DST server: 4 CPUs, 8GB

Traditional copy:
    DBG2: 20:31:43.683595 - Traditional COPY returns 8589934590 (96.8432810307 seconds)
SSC (2 copy_file_range calls back to back):
    DBG2: 20:30:00.268203 - Server-side COPY returns 8589934590 (83.0517759323 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 16%
SSC (2 copy_file_range calls in parallel):
    DBG2: 20:34:49.686573 - Server-side COPY returns 8589934590 (79.3080010414 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 20%
SSC (1 ioctl call):
    DBG2: 20:38:41.323774 - Server-side COPY returns 8589934590 (74.7774350643 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 28%

Since I don’t have three similar systems to test with, having the best machine (more cpu’s and more memory)
as the client gives a better performance for the traditional copy. The following results are done using
the best machine as the destination server instead.

Setup (using the best machine as the destination server instead):
     Client: 4 CPUs, 8GB
     SRC server: 4 CPUs, 8GB
     DST server: 16 CPUs, 32GB

Traditional copy:
    DBG2: 21:52:15.039625 - Traditional COPY returns 8589934590 (178.686635971 seconds)
SSC (2 copy_file_range calls back to back):
    DBG2: 21:49:08.961384 - Server-side COPY returns 8589934590 (173.071172953 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 3%
SSC (2 copy_file_range calls in parallel):
    DBG2: 21:35:59.822467 - Server-side COPY returns 8589934590 (159.743849993 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 18%
SSC (1 ioctl call):
    DBG2: 21:28:33.461528 - Server-side COPY returns 8589934590 (83.9983980656 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 119%

As you can see a single 8GB copy (ioctl with 64 bit copy length) performs the same as before (about 80 seconds)
but in this case the traditional copy takes a lot longer.


--Jorge


On 4/18/17, 12:33 PM, "[email protected] on behalf of J. Bruce Fields" <[email protected] on behalf of [email protected]> wrote:

    On Tue, Apr 18, 2017 at 01:28:39PM -0400, Olga Kornievskaia wrote:
    > Given how the code is written now it looks like it's not possible to
    > save up commits....
    >
    > Here's what I can see happening:
    >
    > nfs42_proc_clone() as well as nfs42_proc_copy() will call
    > nfs_sync_inode(dst) "to make sure server(s) have the latest data"
    > prior to initiating the clone/copy. So even if we just queue up (not
    > send) the commit after executing nfs42_proc_copy(), the next call
    > into vfs_copy_file_range() will send out that queued-up commit.
    >
    > Is it ok to relax that requirement? I'm not sure...

    Well, if the typical case of copy_file_range is just opening a file,
    doing a single big copy_file_range(), then closing the file, then this
    doesn't matter.

    The linux server is currently limiting COPY to 4MB at a time, which will
    make the commits more annoying.

    Even there the typical case will probably still be an open, followed by
    a series of non-overlapping copies, then close, and that shouldn't
    require the commits.

    --b.
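For reference, a hedged userspace sketch of the "2 copy_file_range calls in
parallel" case above (this is not the NFStest code): each thread copies one
non-overlapping half of the file by passing explicit offsets, so the two
copies can proceed concurrently.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* One non-overlapping half of the copy, driven by explicit offsets. */
struct half { int in, out; loff_t off; size_t len; };

static void *copy_half(void *arg)
{
    struct half *h = arg;
    loff_t in_off = h->off, out_off = h->off;
    size_t left = h->len;

    while (left > 0) {
        ssize_t n = copy_file_range(h->in, &in_off,
                                    h->out, &out_off, left, 0);
        if (n <= 0)
            break;              /* error or unexpected EOF */
        left -= (size_t)n;
    }
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc != 3)
        return 1;

    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    struct stat st;
    if (in < 0 || out < 0 || fstat(in, &st) < 0) {
        perror("setup");
        return 1;
    }

    size_t half_len = (size_t)st.st_size / 2;
    struct half lo = { in, out, 0, half_len };
    struct half hi = { in, out, (loff_t)half_len,
                       (size_t)st.st_size - half_len };

    pthread_t t1, t2;
    pthread_create(&t1, NULL, copy_half, &lo);
    pthread_create(&t2, NULL, copy_half, &hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    close(out);
    close(in);
    return 0;
}

The ioctl variant described above instead issues one request for the whole
64-bit length, avoiding the split entirely.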

2017-06-15 20:37:43

by J. Bruce Fields

Subject: Re: [nfsv4] Inter server-side copy performance

Thanks.

My main question is how close we get to what you'd expect given the hardware
specs. As long as it's in that neighborhood, people will know what
to expect.

For example, I'd expect server-to-server copy bandwidth to be roughly
the smallest of:

- source server disk read bandwidth
- destination server disk write bandwidth
- network bandwidth

Which is actually the same as I'd expect for a traditional copy, except
that the network bandwidth might be different.

But in your case, I'm guessing it's gigabit all around (and drive
bandwidth high enough not to matter). And if my arithmetic is right,
traditional copy is getting around 700 Mb/s and server-to-server copy
between 800 and 900 Mb/s depending on exactly how we do it? Kinda
curious why traditional copy isn't doing better; I'd have thought we'd
have that pretty well optimized by now.
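For reference, a quick back-of-the-envelope sketch that reproduces that
arithmetic from the 8589934590-byte copy and the timings in Jorge's first
setup; it prints roughly 710 Mbit/s for the traditional copy and between
about 825 and 920 Mbit/s for the server-side variants, in line with the
estimate above:

#include <stdio.h>

/* Rough throughput check for the numbers in this thread:
 * 8589934590 bytes copied, times taken from the first setup above. */
int main(void)
{
    const double bytes = 8589934590.0;
    const struct { const char *name; double secs; } runs[] = {
        { "traditional copy",          96.84 },
        { "SSC, 2 calls back to back", 83.05 },
        { "SSC, 2 calls in parallel",  79.31 },
        { "SSC, single ioctl",         74.78 },
    };

    for (int i = 0; i < 4; i++)
        printf("%-28s %6.0f Mbit/s\n", runs[i].name,
               bytes * 8.0 / runs[i].secs / 1e6);
    return 0;
}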

--b.

On Thu, Jun 15, 2017 at 07:29:24PM +0000, Mora, Jorge wrote:
> Here are the new numbers using latest SSC code for an 8GB copy.
> The code has a delayed unmount on the destination server which allows for single
> mount when multiple COPY calls are made back to back.
> Also, there is a third option which is using ioctl with a 64 bit copy length in order to
> issue a single call for copy lengths >= 4GB.
>
> Setup:
> Client: 16 CPUs, 32GB
> SRC server: 4 CPUs, 8GB
> DST server: 4 CPUs, 8GB
>
> Traditional copy:
> DBG2: 20:31:43.683595 - Traditional COPY returns 8589934590 (96.8432810307 seconds)
> SSC (2 copy_file_range calls back to back):
> DBG2: 20:30:00.268203 - Server-side COPY returns 8589934590 (83.0517759323 seconds)
> PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 16%
> SSC (2 copy_file_range calls in parallel):
> DBG2: 20:34:49.686573 - Server-side COPY returns 8589934590 (79.3080010414 seconds)
> PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 20%
> SSC (1 ioctl call):
> DBG2: 20:38:41.323774 - Server-side COPY returns 8589934590 (74.7774350643 seconds)
> PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 28%
>
> Since I don’t have three similar systems to test with, having the best machine (more cpu’s and more memory)
> as the client gives a better performance for the traditional copy. The following results are done using
> the best machine as the destination server instead.
>
> Setup (using the best machine as the destination server instead):
> Client: 4 CPUs, 8GB
> SRC server: 4 CPUs, 8GB
> DST server: 16 CPUs, 32GB
>
> Traditional copy:
> DBG2: 21:52:15.039625 - Traditional COPY returns 8589934590 (178.686635971 seconds)
> SSC (2 copy_file_range calls back to back):
> DBG2: 21:49:08.961384 - Server-side COPY returns 8589934590 (173.071172953 seconds)
> PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 3%
> SSC (2 copy_file_range calls in parallel):
> DBG2: 21:35:59.822467 - Server-side COPY returns 8589934590 (159.743849993 seconds)
> PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 18%
> SSC (1 ioctl call):
> DBG2: 21:28:33.461528 - Server-side COPY returns 8589934590 (83.9983980656 seconds)
> PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 119%
>
> As you can see a single 8GB copy (ioctl with 64 bit copy length) performs the same as before (about 80 seconds)
> but in this case the traditional copy takes a lot longer.
>
>
> --Jorge
>
>
> On 4/18/17, 12:33 PM, "[email protected] on behalf of J. Bruce Fields" <[email protected] on behalf of [email protected]> wrote:
>
> On Tue, Apr 18, 2017 at 01:28:39PM -0400, Olga Kornievskaia wrote:
> > Given how the code is written now it looks like it's not possible to
> > save up commits....
> >
> > Here's what I can see happening:
> >
> > nfs42_proc_clone() as well as nfs42_proc_copy() will call
> > nfs_sync_inode(dst) "to make sure server(s) have the latest data"
> > prior to initiating the clone/copy. So even if we just queue up (not
> > send) the commit after executing nfs42_proc_copy(), the next call
> > into vfs_copy_file_range() will send out that queued-up commit.
> >
> > Is it ok to relax that requirement? I'm not sure...
>
> Well, if the typical case of copy_file_range is just opening a file,
> doing a single big copy_file_range(), then closing the file, then this
> doesn't matter.
>
> The linux server is currently limiting COPY to 4MB at a time, which will
> make the commits more annoying.
>
> Even there the typical case will probably still be an open, followed by
> a series of non-overlapping copies, then close, and that shouldn't
> require the commits.
>
> --b.