Greetings:
The purpose of this RFC is to gauge initial thoughts/reactions to adding a
path in af_unix for nontemporal copies in the write path. The network stack
supports something similar, but it is enabled for the entire NIC via the
NETIF_F_NOCACHE_COPY bit; it cannot (AFAICT) be controlled or adjusted per
socket or per write, and it does not affect unix sockets.
This work seeks to build on the existing nontemporal (NT) copy work in the
kernel by adding support in the unix socket write path via a new sendmsg
flag: MSG_NTCOPY. This could also be accomplished via a setsockopt, but this
initial implementation adds MSG_NTCOPY for ease of use and to save an extra
system call or two.
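For illustration, the userspace side is a one-line change; below is a minimal
sketch (assuming MSG_NTCOPY is picked up from the patched socket headers; the
fallback define simply mirrors the value used by this series):

#include <sys/types.h>
#include <sys/uio.h>
#include <sys/socket.h>

#ifndef MSG_NTCOPY
#define MSG_NTCOPY 0x2000000	/* mirrors the value added by this series */
#endif

/* Request a nontemporal copy for this write only; the semantics of the
 * send are otherwise unchanged. */
static ssize_t send_nt(int fd, const void *buf, size_t len)
{
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

	return sendmsg(fd, &msg, MSG_NTCOPY);
}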
In the future, MSG_NTCOPY could be supported by other protocols, and
perhaps used in place of NETIF_F_NOCACHE_COPY to allow user programs to
enable this functionality on a per-write (or per-socket) basis.
If supporting NT copies in the unix write path is acceptable in principle,
I am open to making whatever modifications are requested or needed to get
this RFC closer to a v1. I am sure there will be many; this is just a PoC
in its current form.
As you'll see below, NT copies in the unix write path have a large
measurable impact on certain application architectures and CPUs.
Initial benchmarks are extremely encouraging. I wrote a simple C program to
benchmark this patchset. The program:
- Creates a unix socket pair
- Forks a child process
- The parent process writes to the unix socket using MSG_NTCOPY - or not -
depending on the command line flags
- The child process uses splice to move the data from the unix socket to
a pipe buffer, followed by a second splice call to move the data from
the pipe buffer to a file descriptor opened on /dev/null.
- taskset is used when launching the benchmark to ensure the parent and
child run on appropriate CPUs for various scenarios
The source of the test program is available for examination [1] and results
for three benchmarks I ran are provided below.
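For convenience, the relevant parts of [1] boil down to roughly the following
(condensed sketch, not the exact program; argument parsing, error handling,
buffer sizing, and timing are omitted, and obj_size, iters, and use_ntcopy
stand in for the command line arguments):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef MSG_NTCOPY
#define MSG_NTCOPY 0x2000000	/* from this series' patched socket.h */
#endif

static void run(size_t obj_size, long iters, int use_ntcopy)
{
	int sv[2], pfd[2], devnull;
	char *buf = malloc(obj_size);

	socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
	pipe(pfd);
	devnull = open("/dev/null", O_WRONLY);

	if (fork() == 0) {
		/* child: splice unix socket -> pipe -> /dev/null; the data
		 * is never mapped or touched by userspace */
		for (long i = 0; i < iters; i++) {
			size_t left = obj_size;
			while (left > 0) {
				ssize_t n = splice(sv[1], NULL, pfd[1], NULL,
						   left, SPLICE_F_MOVE);
				splice(pfd[0], NULL, devnull, NULL, n,
				       SPLICE_F_MOVE);
				left -= n;
			}
		}
	} else {
		/* parent: write the object repeatedly, with or without
		 * MSG_NTCOPY */
		int flags = use_ntcopy ? MSG_NTCOPY : 0;

		for (long i = 0; i < iters; i++)
			send(sv[0], buf, obj_size, flags);
	}
}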
Test system: AMD EPYC 7662 64-Core Processor,
64 cores / 128 threads,
512KB L2 per core shared by sibling CPUs,
16MB L3 per NUMA zone,
AMD specific settings: NPS=1 and L3 as NUMA enabled
Test: 1048576 byte object,
100,000 iterations,
512KB pipe buffer size,
512KB unix socket send buffer size
Sample command lines for running the tests are provided below. Note that each
command line shows how to run a "normal" copy benchmark. To run the
benchmark in MSG_NTCOPY mode, change command line argument 3 from 0 to 1.
Test pinned to CPUs 1 and 2, which do *not* share an L2 cache but do share
an L3.
Command line for "normal" copy:
% time taskset -ac 1,2 ./unix-nt-bench 1048576 100000 0 524288 524288
Mode              real time (sec.)    throughput (Mb/s)
"Normal" copy     10.630              78,928
MSG_NTCOPY        7.429               112,935
Same test as above, but pinned to CPUs 1 and 65, which share an L2 cache
(512KB) and an L3 cache (16MB).
Command line for "normal" copy:
% time taskset -ac 1,65 ./unix-nt-bench 1048576 100000 0 524288 524288
Mode              real time (sec.)    throughput (Mb/s)
"Normal" copy     12.532              66,941
MSG_NTCOPY        9.445               88,826
Same test as above, pinned to CPUs 1 and 65, but with 128KB unix send
buffer and pipe buffer sizes (to avoid spilling L2).
Command line for "normal" copy:
% time taskset -ac 1,65 ./unix-nt-bench 1048576 100000 0 131072 131072
Mode              real time (sec.)    throughput (Mb/s)
"Normal" copy     12.451              67,377
MSG_NTCOPY        9.451               88,768
Thanks,
Joe
[1]: https://gist.githubusercontent.com/jdamato-fsly/03a2f0cd4e71ebe0fef97f7f2980d9e5/raw/19cfd3aca59109ebf5b03871d952ea1360f3e982/unix-nt-copy-bench.c
Joe Damato (6):
arch, x86, uaccess: Add nontemporal copy functions
iov_iter: Allow custom copyin function
iov_iter: Add a nocache copy iov iterator
net: Add a struct for managing copy functions
net: Add a way to copy skbs without affecting the cache
net: unix: Add MSG_NTCOPY
arch/x86/include/asm/uaccess_64.h | 6 ++++
include/linux/skbuff.h | 2 ++
include/linux/socket.h | 1 +
include/linux/uaccess.h | 6 ++++
include/linux/uio.h | 2 ++
lib/iov_iter.c | 34 ++++++++++++++++++----
net/core/datagram.c | 61 ++++++++++++++++++++++++++++-----------
net/unix/af_unix.c | 13 +++++++--
8 files changed, 100 insertions(+), 25 deletions(-)
--
2.7.4
Add a new sendmsg flag, MSG_NTCOPY, which user programs can use to signal
to the kernel that data passed to sendmsg should be copied into the kernel
using nontemporal copies, if the architecture supports them.
Signed-off-by: Joe Damato <[email protected]>
---
include/linux/socket.h | 1 +
net/unix/af_unix.c | 13 +++++++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/include/linux/socket.h b/include/linux/socket.h
index 12085c9..c9b10aa 100644
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -318,6 +318,7 @@ struct ucred {
* plain text and require encryption
*/
+#define MSG_NTCOPY 0x2000000 /* Use a non-temporal copy */
#define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */
#define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */
#define MSG_CMSG_CLOEXEC 0x40000000 /* Set close_on_exec for file
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index e1dd9e9..ccbd643 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1907,7 +1907,11 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
skb_put(skb, len - data_len);
skb->data_len = data_len;
skb->len = len;
- err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, len);
+ if (msg->msg_flags & MSG_NTCOPY)
+ err = skb_copy_datagram_from_iter_nocache(skb, 0, &msg->msg_iter, len);
+ else
+ err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, len);
+
if (err)
goto out_free;
@@ -2167,7 +2171,12 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
skb_put(skb, size - data_len);
skb->data_len = data_len;
skb->len = size;
- err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, size);
+
+ if (msg->msg_flags & MSG_NTCOPY)
+ err = skb_copy_datagram_from_iter_nocache(skb, 0, &msg->msg_iter, size);
+ else
+ err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, size);
+
if (err) {
kfree_skb(skb);
goto out_err;
--
2.7.4
On Wed, May 11, 2022 at 04:25:20PM -0700, Jakub Kicinski wrote:
> On Tue, 10 May 2022 20:54:21 -0700 Joe Damato wrote:
> > Initial benchmarks are extremely encouraging. I wrote a simple C program to
> > benchmark this patchset, the program:
> > - Creates a unix socket pair
> > - Forks a child process
> > - The parent process writes to the unix socket using MSG_NTCOPY - or not -
> > depending on the command line flags
> > - The child process uses splice to move the data from the unix socket to
> > a pipe buffer, followed by a second splice call to move the data from
> > the pipe buffer to a file descriptor opened on /dev/null.
> > - taskset is used when launching the benchmark to ensure the parent and
> > child run on appropriate CPUs for various scenarios
>
> Is there a practical use case?
Yes; for us there seems to be - especially with AMD Zen2. I'll try to
describe such a setup and my synthetic HTTP benchmark results.
Imagine a program, call it storageD, which is responsible for storing and
retrieving data from a data store. Other programs can request data from
storageD by communicating with it over a Unix socket.
One such program that could request data via the Unix socket is an HTTP
daemon. For some client connections that the HTTP daemon receives, the
daemon may determine that responses can be sent in plain text.
In this case, the HTTP daemon can use splice to move data from the unix
socket connection with storageD directly to the client TCP socket via a
pipe. splice saves CPU cycles and avoids incurring any memory access
latency since the data itself is not accessed.
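Concretely, the daemon's plaintext fast path looks something like the sketch
below (simplified; unix_fd is the connection to storageD, client_fd is the
client's TCP socket, and pipe_r/pipe_w are the ends of a pipe created for the
transfer):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Move len bytes from the unix socket to the client TCP socket through a
 * pipe; the payload never enters the daemon's address space. */
static int relay(int unix_fd, int client_fd, int pipe_r, int pipe_w, size_t len)
{
	while (len > 0) {
		ssize_t in = splice(unix_fd, NULL, pipe_w, NULL, len,
				    SPLICE_F_MOVE | SPLICE_F_MORE);
		if (in <= 0)
			return -1;

		for (ssize_t out = 0; out < in; ) {
			ssize_t n = splice(pipe_r, NULL, client_fd, NULL,
					   in - out,
					   SPLICE_F_MOVE | SPLICE_F_MORE);
			if (n <= 0)
				return -1;
			out += n;
		}
		len -= in;
	}
	return 0;
}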
Because we'll use splice (instead of accessing the data and potentially
affecting the CPU cache) it is advantageous for storageD to use NT copies
when it writes to the Unix socket to avoid evicting hot data from the CPU
cache. After all, once the data is copied into the kernel on the unix
socket write path, it won't be touched again; only spliced.
In my synthetic HTTP benchmarks for this setup, we've been able to increase
network throughput of the HTTP daemon by roughly 30% while reducing
the system time of storageD. We're still collecting data on production
workloads.
The motivation, IMHO, is very similar to the motivation for
NETIF_F_NOCACHE_COPY, as far as I understand.
In some cases, when an application writes to a network socket, the data
written to the socket won't be accessed again once it is copied into the
kernel. In these cases, NETIF_F_NOCACHE_COPY can improve performance by
helping to preserve the CPU cache and avoid evicting hot data.
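For reference, the existing stack makes that choice per device in the skb
copy helpers; simplified (paraphrasing skb_do_copy_data_nocache() in
include/net/sock.h, and details vary by kernel version), the decision is just:

	/* Simplified paraphrase, not the exact in-tree code */
	if (sk->sk_route_caps & NETIF_F_NOCACHE_COPY) {
		if (!copy_from_iter_full_nocache(to, copy, from))
			return -EFAULT;
	} else if (!copy_from_iter_full(to, copy, from)) {
		return -EFAULT;
	}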
We get a sizable benefit from this option, too, in situations where we
can't use splice and have to call write to transmit data to client
connections. We want to get the same benefit of NETIF_F_NOCACHE_COPY, but
when writing to Unix sockets as well.
Let me know if that makes it more clear.
> The patches look like a lot of extra indirect calls.
Yup. As I mentioned in the cover letter, this was mostly a PoC that seems to
work and increases network throughput in a real-world scenario.
If this general line of thinking (NT copies on write to a Unix socket) is
acceptable, I'm happy to refactor the code however you (and others) would
like to get it to an acceptable state.
Thanks for taking a look,
Joe
On Tue, 10 May 2022 20:54:21 -0700 Joe Damato wrote:
> Initial benchmarks are extremely encouraging. I wrote a simple C program to
> benchmark this patchset, the program:
> - Creates a unix socket pair
> - Forks a child process
> - The parent process writes to the unix socket using MSG_NTCOPY - or not -
> depending on the command line flags
> - The child process uses splice to move the data from the unix socket to
> a pipe buffer, followed by a second splice call to move the data from
> the pipe buffer to a file descriptor opened on /dev/null.
> - taskset is used when launching the benchmark to ensure the parent and
> child run on appropriate CPUs for various scenarios
Is there a practical use case?
The patches look like a lot of extra indirect calls.
On Wed, 11 May 2022 18:01:54 -0700 Joe Damato wrote:
> > Is there a practical use case?
>
> Yes; for us there seems to be - especially with AMD Zen2. I'll try to
> describe such a setup and my synthetic HTTP benchmark results.
>
> Imagine a program, call it storageD, which is responsible for storing and
> retrieving data from a data store. Other programs can request data from
> storageD via communicating with it on a Unix socket.
>
> One such program that could request data via the Unix socket is an HTTP
> daemon. For some client connections that the HTTP daemon receives, the
> daemon may determine that responses can be sent in plain text.
>
> In this case, the HTTP daemon can use splice to move data from the unix
> socket connection with storageD directly to the client TCP socket via a
> pipe. splice saves CPU cycles and avoids incurring any memory access
> latency since the data itself is not accessed.
>
> Because we'll use splice (instead of accessing the data and potentially
> affecting the CPU cache) it is advantageous for storageD to use NT copies
> when it writes to the Unix socket to avoid evicting hot data from the CPU
> cache. After all, once the data is copied into the kernel on the unix
> socket write path, it won't be touched again; only spliced.
>
> In my synthetic HTTP benchmarks for this setup, we've been able to increase
> > network throughput of the HTTP daemon by roughly 30% while reducing
> the system time of storageD. We're still collecting data on production
> workloads.
>
> The motivation, IMHO, is very similar to the motivation for
> NETIF_F_NOCACHE_COPY, as far as I understand.
>
> In some cases, when an application writes to a network socket the data
> written to the socket won't be accessed again once it is copied into the
> kernel. In these cases, NETIF_F_NOCACHE_COPY can improve performance and
> helps to preserve the CPU cache and avoid evicting hot data.
>
> We get a sizable benefit from this option, too, in situations where we
> can't use splice and have to call write to transmit data to client
> connections. We want to get the same benefit of NETIF_F_NOCACHE_COPY, but
> when writing to Unix sockets as well.
>
> Let me know if that makes it more clear.
Makes sense, thanks for the explainer.
> > The patches look like a lot of extra indirect calls.
>
> Yup. As I mentioned in the cover letter this was mostly a PoC that seems to
> work and increases network throughput in a real world scenario.
>
> If this general line of thinking (NT copies on write to a Unix socket) is
> acceptable, I'm happy to refactor the code however you (and others) would
> like to get it to an acceptable state.
My only concern is that in a post-Spectre world the indirect calls are
going to be more expensive than a branch would be. But I'm not really
a micro-optimization expert :)
On Thu, May 12, 2022 at 12:46:08PM -0700, Jakub Kicinski wrote:
> On Wed, 11 May 2022 18:01:54 -0700 Joe Damato wrote:
> > > Is there a practical use case?
> >
> > Yes; for us there seems to be - especially with AMD Zen2. I'll try to
> > describe such a setup and my synthetic HTTP benchmark results.
> >
> > Imagine a program, call it storageD, which is responsible for storing and
> > retrieving data from a data store. Other programs can request data from
> > storageD via communicating with it on a Unix socket.
> >
> > One such program that could request data via the Unix socket is an HTTP
> > daemon. For some client connections that the HTTP daemon receives, the
> > daemon may determine that responses can be sent in plain text.
> >
> > In this case, the HTTP daemon can use splice to move data from the unix
> > socket connection with storageD directly to the client TCP socket via a
> > pipe. splice saves CPU cycles and avoids incurring any memory access
> > latency since the data itself is not accessed.
> >
> > Because we'll use splice (instead of accessing the data and potentially
> > affecting the CPU cache) it is advantageous for storageD to use NT copies
> > when it writes to the Unix socket to avoid evicting hot data from the CPU
> > cache. After all, once the data is copied into the kernel on the unix
> > socket write path, it won't be touched again; only spliced.
> >
> > In my synthetic HTTP benchmarks for this setup, we've been able to increase
> > network throughput of the HTTP daemon by roughly 30% while reducing
> > the system time of storageD. We're still collecting data on production
> > workloads.
> >
> > The motivation, IMHO, is very similar to the motivation for
> > NETIF_F_NOCACHE_COPY, as far as I understand.
> >
> > In some cases, when an application writes to a network socket the data
> > written to the socket won't be accessed again once it is copied into the
> > kernel. In these cases, NETIF_F_NOCACHE_COPY can improve performance and
> > helps to preserve the CPU cache and avoid evicting hot data.
> >
> > We get a sizable benefit from this option, too, in situations where we
> > can't use splice and have to call write to transmit data to client
> > connections. We want to get the same benefit of NETIF_F_NOCACHE_COPY, but
> > when writing to Unix sockets as well.
> >
> > Let me know if that makes it more clear.
>
> Makes sense, thanks for the explainer.
>
> > > The patches look like a lot of extra indirect calls.
> >
> > Yup. As I mentioned in the cover letter this was mostly a PoC that seems to
> > work and increases network throughput in a real world scenario.
> >
> > If this general line of thinking (NT copies on write to a Unix socket) is
> > acceptable, I'm happy to refactor the code however you (and others) would
> > like to get it to an acceptable state.
>
> My only concern is that in a post-Spectre world the indirect calls are
> going to be more expensive than a branch would be. But I'm not really
> a micro-optimization expert :)
Makes sense; neither am I, FWIW :)
For whatever reason, on AMD Zen2 it seems that using non-temporal
instructions when copying data sizes above the L2 size is a huge
performance win (compared to the kernel's normal temporal copy code) even
if that size fits in L3.
This is why both NETIF_F_NOCACHE_COPY and MSG_NTCOPY from this series seem
to have such a large, measurable impact in the contrived benchmark I
included in the cover letter and also in synthetic HTTP workloads.
I'll plan on including numbers from the benchmark program on a few other
CPUs I have access to in the cover letter for any follow-up RFCs or
revisions.
As a data point, there has been similar-ish work done in glibc [1] to
determine when non-temporal copies should be used on Zen2 based on the size
of the copy. I'm certainly not a micro-arch expert by any stretch, but the
glibc work plus the benchmark results I've measured seem to suggest that
NT-copies can be very helpful on Zen2.
Two questions for you:
1. Do you have any strong opinions on the sendmsg flag vs a socket option?
2. If I can think of a way to avoid the indirect calls, do you think this
series is ready for a v1? I'm not sure if there's anything major that
needs to be addressed aside from the indirect calls.
I'll include some documentation and cosmetic cleanup in the v1, as well.
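To make question 1 concrete, the socket option variant would look something
like this from userspace (SO_NTCOPY is a purely hypothetical name and value;
nothing in this series defines it):

#include <stdio.h>
#include <sys/socket.h>

#define SO_NTCOPY 75	/* placeholder value, illustration only */

static void enable_ntcopy(int fd)
{
	int one = 1;

	if (setsockopt(fd, SOL_SOCKET, SO_NTCOPY, &one, sizeof(one)) < 0)
		perror("setsockopt(SO_NTCOPY)");

	/* Every subsequent write()/send() on fd would then use nontemporal
	 * copies, instead of opting in per call with MSG_NTCOPY. */
}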
Thanks,
Joe
[1]: https://sourceware.org/pipermail/libc-alpha/2020-October/118895.html
On Thu, 12 May 2022 15:53:05 -0700 Joe Damato wrote:
> 1. Do you have any strong opinions on the sendmsg flag vs a socket option?
It sounded like you want to mix NT and non-NT writes on a single socket, hence
the flag was a requirement. A socket option is better because we can have
many more of those than there are bits for flags, obviously.
> 2. If I can think of a way to avoid the indirect calls, do you think this
> series is ready for a v1? I'm not sure if there's anything major that
> needs to be addressed aside from the indirect calls.
Nothing comes to mind, seems pretty straightforward to me.
From the iov_iter point of view: please follow the way the inatomic
nocache helpers are implemented instead of adding costly function
pointers.
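Concretely, something along these lines (sketch only; it reuses the existing
copy_from_iter_full_nocache() entry point with a plain branch instead of a
copy-function pointer, and the helper name is made up):

/* Sketch of the suggested shape: branch onto the existing nocache iov_iter
 * entry point, no copy-function pointer passed around. */
static int skb_copy_chunk_from_iter(char *to, int copy, struct iov_iter *from,
				    bool nocache)
{
	if (nocache) {
		if (!copy_from_iter_full_nocache(to, copy, from))
			return -EFAULT;
	} else {
		if (!copy_from_iter_full(to, copy, from))
			return -EFAULT;
	}
	return 0;
}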