Hi,
This is v3 of my series to enable visibility into SKB paged fragments'
lifecycles. v1 is at [0] and contains some more background and
rationale, but in short the series allows entities which inject pages
into the networking stack to receive a notification when the stack has
really finished with those pages (i.e. including retransmissions,
clones, pull-ups etc.), not just when the original skb is finished
with. This is beneficial to many subsystems which wish to inject pages
into the network stack without giving up full ownership of those
pages' lifecycles. It implements something broadly along the lines of
what was described in [1].
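For anyone new to the series, the interface boils down to roughly the
following (names as used by the sunrpc and sendpage patches quoted
further down; see those patches for the precise definitions, this is
just a sketch):

        struct skb_frag_destructor {
                atomic_t ref;               /* outstanding references */
                int (*destroy)(void *data); /* run when the last ref drops */
                void *data;
        };

        /* attach a destructor when filling a paged fragment: */
        skb_fill_page_desc(skb, i, page, offset, len);
        skb_frag_set_destructor(skb, i, destroy);
        skb_frag_ref(skb, i);

        /* sendpage callers pass the destructor alongside the page: */
        err = kernel_sendpage(sock, page, destroy, offset, size, flags);

        /* paths which copy the data instead of mapping the page drop
         * their reference immediately: */
        skb_frag_destructor_unref(destroy);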
I have updated based on the feedback from v2. In particular:
* Rebase to v3.2-rc2
* Rework to move reliance on the core struct page const'ness change to
  the end of the series (David Miller).
* Remove inadvertent semantic change to drivers/atm/eni.c (David)
* Add skb_frag_set_destructor instead of adding a parameter to
skb_fill_page_desc (Michał Mirosław)
* Resolved conflict with skb zero-copy buffers (nb: I resolved
this with no functional change but I have a suspicion that this
could be vastly simplified using the new destructor hook,
although I've not looked all that closely)
* Split drivers/* patches up from the huge monolithic patch.
* Build tested defconfig on a bunch of architectures (arm i386 m68k
mips64 mips s390x sparc64 x86_64) and fixed a variety of issues
arising from this.
* Unfortunately I found an interesting breakage in
kernel/signals.c arising from the additional #include of
highmem.h in skbuff.h. See:
  http://lkml.kernel.org/r/[email protected]
  I split the addition of skb_frag_kmap out and moved it to the end of
  the series to avoid this blocking the rest.
* s/kmap_skb_frag/skb_frag_kmap_atomic/ (similar for kunmap) since
the names kmap_skb_frag and skb_frag_kmap were confusingly
similar. At the same time I moved them to skbuff.h since they
are used in a couple of places outside net/core (open coded in
drivers/scsi/fcoe/fcoe.c and #include "../core/kmap_skb.h" in
net/appletalk/ddp.c)
Changes between v1 and v2:
* Added destructor directly to sendpage() protocol hooks instead
of inventing sendpage_destructor() (David Miller)
* Dropped skb_frag_pci_map in favour of skb_frag_dma_map on Michał
  Mirosław's advice (a sketch of the conversion follows this list)
* Pushed the NFS fix down into the RPC layer. (Trond)
* I also split out the patches to make protocols use the new
interface into per-protocol patches. I held back on splitting up
the patch to drivers/* (since that will cause an explosion in
the series length) -- I'll do this split for v3.
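As a concrete illustration of the skb_frag_dma_map change (not taken
from any particular driver):

        /* before: */
        dma = pci_map_page(pdev, frag->page, frag->page_offset,
                           frag->size, PCI_DMA_TODEVICE);

        /* after: */
        dma = skb_frag_dma_map(&pdev->dev, frag, 0, frag->size,
                               DMA_TO_DEVICE);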
I've a bit of an open dilemma regarding what to do about the small
number of drivers which use skb_frag_t as part of their internal
data structures (it's a convenient page+offset+len container). It's a
bit ugly sprinkling ".p" around those drivers. I've considered
leaving skb_frag_t alone and defining skb_shared_info->frags as an
anonymous struct with the new layout (i.e. repurposing the existing
skb_frag_t for the alternative use and making a new struct for the
traditional/proper use). This would need a new accessor to copy one to
the other. Alternatively perhaps a generic struct subpage would be
useful in its own right and I could transition those drivers to that?
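To make the options concrete, the two layouts would look something
like this (layout illustrative only):

        /* Option A (the current series): change skb_frag_t itself, so
         * driver-internal users grow a ".p" at each use of the page
         * pointer: */
        typedef struct skb_frag_struct {
                struct {
                        struct page *p;
                } page;
                __u32 page_offset;
                __u32 size;
        } skb_frag_t;

        /* Option B: leave skb_frag_t as the plain page+offset+len
         * container for those drivers, give skb_shared_info->frags its
         * own struct with the layout above and add an accessor to copy
         * one into the other. */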
Opinions?
For v4 I will investigate that issue, collect maintainers' ACKs and
rebase over the big drivers/net re-org which is going on (let's see what
git rebase is made of ;-))
Cheers,
Ian.
[0] http://marc.info/?l=linux-netdev&m=131072801125521&w=2
[1] http://marc.info/?l=linux-netdev&m=130925719513084&w=2
From: Ian Campbell <[email protected]>
Date: Fri, 19 Aug 2011 14:26:33 +0100
> This is v3 of my series to enable visibility into SKB paged fragments'
Please tone down the patch count :-/ I'm not going to review anything more
than ~20 or so patches at a time from any one person.
From: David Miller <[email protected]>
Date: Fri, 19 Aug 2011 06:29:58 -0700 (PDT)
> From: Ian Campbell <[email protected]>
> Date: Fri, 19 Aug 2011 14:26:33 +0100
>
>> This is v3 of my series to enable visibility into SKB paged fragments'
>
> Please tone down the patch count :-/ I'm not going to review anything more
> than ~20 or so patches at a time from any one person.
Also none of your patches will even apply to net-next GIT since nearly
all the ethernet drivers have been moved under drivers/net/ethernet
This prevents an issue where an ACK is delayed, a retransmit is queued (either
at the RPC or TCP level) and the ACK arrives before the retransmission hits the
wire. If this happens to an NFS WRITE RPC then the write() system call
completes and the userspace process can continue, potentially modifying data
referenced by the retransmission before the retransmission occurs.
Signed-off-by: Ian Campbell <[email protected]>
Acked-by: Trond Myklebust <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: "J. Bruce Fields" <[email protected]>
Cc: [email protected]
Cc: [email protected]
[since v1:
Push down from NFS layer into RPC layer
]
---
include/linux/sunrpc/xdr.h | 2 ++
include/linux/sunrpc/xprt.h | 5 ++++-
net/sunrpc/clnt.c | 27 ++++++++++++++++++++++-----
net/sunrpc/svcsock.c | 3 ++-
net/sunrpc/xprt.c | 13 +++++++++++++
net/sunrpc/xprtsock.c | 3 ++-
6 files changed, 45 insertions(+), 8 deletions(-)
diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index a20970e..172f81e 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -16,6 +16,7 @@
#include <asm/byteorder.h>
#include <asm/unaligned.h>
#include <linux/scatterlist.h>
+#include <linux/skbuff.h>
/*
* Buffer adjustment
@@ -57,6 +58,7 @@ struct xdr_buf {
tail[1]; /* Appended after page data */
struct page ** pages; /* Array of contiguous pages */
+ struct skb_frag_destructor *destructor;
unsigned int page_base, /* Start of page data */
page_len, /* Length of page data */
flags; /* Flags for data disposition */
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 15518a1..75131eb 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -92,7 +92,10 @@ struct rpc_rqst {
/* A cookie used to track the
state of the transport
connection */
-
+ struct skb_frag_destructor destructor; /* SKB paged fragment
+ * destructor for
+ * transmitted pages*/
+
/*
* Partial send handling
*/
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index c5347d2..919538d 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -61,6 +61,7 @@ static void call_reserve(struct rpc_task *task);
static void call_reserveresult(struct rpc_task *task);
static void call_allocate(struct rpc_task *task);
static void call_decode(struct rpc_task *task);
+static void call_complete(struct rpc_task *task);
static void call_bind(struct rpc_task *task);
static void call_bind_status(struct rpc_task *task);
static void call_transmit(struct rpc_task *task);
@@ -1113,6 +1114,8 @@ rpc_xdr_encode(struct rpc_task *task)
(char *)req->rq_buffer + req->rq_callsize,
req->rq_rcvsize);
+ req->rq_snd_buf.destructor = &req->destructor;
+
p = rpc_encode_header(task);
if (p == NULL) {
printk(KERN_INFO "RPC: couldn't encode RPC header, exit EIO\n");
@@ -1276,6 +1279,7 @@ call_connect_status(struct rpc_task *task)
static void
call_transmit(struct rpc_task *task)
{
+ struct rpc_rqst *req = task->tk_rqstp;
dprint_status(task);
task->tk_action = call_status;
@@ -1309,8 +1313,8 @@ call_transmit(struct rpc_task *task)
call_transmit_status(task);
if (rpc_reply_expected(task))
return;
- task->tk_action = rpc_exit_task;
- rpc_wake_up_queued_task(&task->tk_xprt->pending, task);
+ task->tk_action = call_complete;
+ skb_frag_destructor_unref(&req->destructor);
}
/*
@@ -1383,7 +1387,8 @@ call_bc_transmit(struct rpc_task *task)
return;
}
- task->tk_action = rpc_exit_task;
+ task->tk_action = call_complete;
+ skb_frag_destructor_unref(&req->destructor);
if (task->tk_status < 0) {
printk(KERN_NOTICE "RPC: Could not send backchannel reply "
"error: %d\n", task->tk_status);
@@ -1423,7 +1428,6 @@ call_bc_transmit(struct rpc_task *task)
"error: %d\n", task->tk_status);
break;
}
- rpc_wake_up_queued_task(&req->rq_xprt->pending, task);
}
#endif /* CONFIG_SUNRPC_BACKCHANNEL */
@@ -1589,12 +1593,14 @@ call_decode(struct rpc_task *task)
return;
}
- task->tk_action = rpc_exit_task;
+ task->tk_action = call_complete;
if (decode) {
task->tk_status = rpcauth_unwrap_resp(task, decode, req, p,
task->tk_msg.rpc_resp);
}
+ rpc_sleep_on(&req->rq_xprt->pending, task, NULL);
+ skb_frag_destructor_unref(&req->destructor);
dprintk("RPC: %5u call_decode result %d\n", task->tk_pid,
task->tk_status);
return;
@@ -1609,6 +1615,17 @@ out_retry:
}
}
+/*
+ * 8. Wait for pages to be released by the network stack.
+ */
+static void
+call_complete(struct rpc_task *task)
+{
+ dprintk("RPC: %5u call_complete result %d\n",
+ task->tk_pid, task->tk_status);
+ task->tk_action = rpc_exit_task;
+}
+
static __be32 *
rpc_encode_header(struct rpc_task *task)
{
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 852a258..3685cad 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -196,7 +196,8 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
while (pglen > 0) {
if (slen == size)
flags = 0;
- result = kernel_sendpage(sock, *ppage, NULL, base, size, flags);
+ result = kernel_sendpage(sock, *ppage, xdr->destructor,
+ base, size, flags);
if (result > 0)
len += result;
if (result != size)
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index f4385e4..925aa0c 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -1103,6 +1103,16 @@ static inline void xprt_init_xid(struct rpc_xprt *xprt)
xprt->xid = net_random();
}
+static int xprt_complete_skb_pages(void *calldata)
+{
+ struct rpc_task *task = calldata;
+ struct rpc_rqst *req = task->tk_rqstp;
+
+ dprintk("RPC: %5u completing skb pages\n", task->tk_pid);
+ rpc_wake_up_queued_task(&req->rq_xprt->pending, task);
+ return 0;
+}
+
static void xprt_request_init(struct rpc_task *task, struct rpc_xprt *xprt)
{
struct rpc_rqst *req = task->tk_rqstp;
@@ -1115,6 +1125,9 @@ static void xprt_request_init(struct rpc_task *task, struct rpc_xprt *xprt)
req->rq_xid = xprt_alloc_xid(xprt);
req->rq_release_snd_buf = NULL;
xprt_reset_majortimeo(req);
+ atomic_set(&req->destructor.ref, 1);
+ req->destructor.destroy = &xprt_complete_skb_pages;
+ req->destructor.data = task;
dprintk("RPC: %5u reserved req %p xid %08x\n", task->tk_pid,
req, ntohl(req->rq_xid));
}
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index f79e40e..af3a106 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -408,7 +408,8 @@ static int xs_send_pagedata(struct socket *sock, struct xdr_buf *xdr, unsigned i
remainder -= len;
if (remainder != 0 || more)
flags |= MSG_MORE;
- err = sock->ops->sendpage(sock, *ppage, NULL, base, len, flags);
+ err = sock->ops->sendpage(sock, *ppage, xdr->destructor,
+ base, len, flags);
if (remainder == 0 || err != len)
break;
sent += err;
--
1.7.2.5
On Fri, 2011-08-19 at 14:34 +0100, David Miller wrote:
> From: David Miller <[email protected]>
> Date: Fri, 19 Aug 2011 06:29:58 -0700 (PDT)
>
> > From: Ian Campbell <[email protected]>
> > Date: Fri, 19 Aug 2011 14:26:33 +0100
> >
> >> This is v3 of my series to enable visibility into SKB paged fragments'
> >
> > Please tone down the patch count :-/ I'm not going to review anything more
> > than ~20 or so patches at a time from any one person.
Unfortunately I didn't see any middle ground between one massive patch
munging drivers with lots of different maintainers and one patch per
maintainer (which is loads).
I'll resend in batches of ~20. After the first patch all the "convert to
SKB paged frag API" patches are independent.
Would you be willing to just review patches 1-20 of this series? If so,
I could avoid spamming the list with those again now.
> Also none of your patches will even apply to net-next GIT since nearly
> all the ethernet drivers have been moved under drivers/net/ethernet
Yes, I noted that in my intro mail and will rebase for v4. Since it's
mostly just been code motion I think review in the old place will still
be largely valid.
Ian.
From: David Miller <[email protected]>
Date: Fri, 19 Aug 2011 06:34:10 -0700 (PDT)
> From: David Miller <[email protected]>
> Date: Fri, 19 Aug 2011 06:29:58 -0700 (PDT)
>
>> From: Ian Campbell <[email protected]>
>> Date: Fri, 19 Aug 2011 14:26:33 +0100
>>
>>> This is v3 of my series to enable visibility into SKB paged fragments'
>>
>> Please tone down the patch count :-/ I'm not going to review anything more
>> than ~20 or so patches at a time from any one person.
>
> Also none of your patches will even apply to net-next GIT since nearly
> all the ethernet drivers have been moved under drivers/net/ethernet
Ian, please acknowledge my grievances here. I see you replying to other
people, but not to me and I'm the one who has to process all of this
stuff eventually.
If you want this series to be taken seriously:
1) Make your patches against net-next GIT, none of your driver patches will
apply because they have all been moved around to different locations
under drivers/net
2) Submit this in a _SANE_ way. This means: get the first patch that adds
   the new interfaces approved and merged. Then slowly and carefully submit
   small, reasonably sized sets of patches that convert the drivers over.
Otherwise there is no way I'm even looking at this stuff, let alone actually
apply it.
Realize that every time you patch bomb ridiculously like this I have
to click and classify every single patch in your bomb at
http://patchwork.ozlabs.org/project/netdev/list/
Signed-off-by: Ian Campbell <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Alexey Kuznetsov <[email protected]>
Cc: "Pekka Savola (ipv6)" <[email protected]>
Cc: James Morris <[email protected]>
Cc: Hideaki YOSHIFUJI <[email protected]>
Cc: Patrick McHardy <[email protected]>
Cc: Trond Myklebust <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
[since v2:
Use skb_frag_set_destructor
since v1:
Drop sendpage_destructor and just add an argument to sendpage protocol hooks
]
---
drivers/block/drbd/drbd_main.c | 1 +
drivers/scsi/iscsi_tcp.c | 4 ++--
drivers/scsi/iscsi_tcp.h | 3 ++-
drivers/staging/pohmelfs/trans.c | 3 ++-
drivers/target/iscsi/iscsi_target_util.c | 3 ++-
fs/dlm/lowcomms.c | 4 ++--
fs/ocfs2/cluster/tcp.c | 1 +
include/linux/net.h | 6 +++++-
include/net/inet_common.h | 4 +++-
include/net/ip.h | 4 +++-
include/net/sock.h | 8 +++++---
include/net/tcp.h | 4 +++-
net/ceph/messenger.c | 2 +-
net/core/sock.c | 6 +++++-
net/ipv4/af_inet.c | 9 ++++++---
net/ipv4/ip_output.c | 6 ++++--
net/ipv4/tcp.c | 24 ++++++++++++++++--------
net/ipv4/udp.c | 11 ++++++-----
net/ipv4/udp_impl.h | 5 +++--
net/rds/tcp_send.c | 1 +
net/socket.c | 11 +++++++----
net/sunrpc/svcsock.c | 6 +++---
net/sunrpc/xprtsock.c | 2 +-
23 files changed, 84 insertions(+), 44 deletions(-)
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 0358e55..49c7346 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2584,6 +2584,7 @@ static int _drbd_send_page(struct drbd_conf *mdev, struct page *page,
set_fs(KERNEL_DS);
do {
sent = mdev->data.socket->ops->sendpage(mdev->data.socket, page,
+ NULL,
offset, len,
msg_flags);
if (sent == -EAGAIN) {
diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 7724414..2eb6801 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -283,8 +283,8 @@ static int iscsi_sw_tcp_xmit_segment(struct iscsi_tcp_conn *tcp_conn,
if (!segment->data) {
sg = segment->sg;
offset += segment->sg_offset + sg->offset;
- r = tcp_sw_conn->sendpage(sk, sg_page(sg), offset,
- copy, flags);
+ r = tcp_sw_conn->sendpage(sk, sg_page(sg), NULL,
+ offset, copy, flags);
} else {
struct msghdr msg = { .msg_flags = flags };
struct kvec iov = {
diff --git a/drivers/scsi/iscsi_tcp.h b/drivers/scsi/iscsi_tcp.h
index 666fe09..1e23265 100644
--- a/drivers/scsi/iscsi_tcp.h
+++ b/drivers/scsi/iscsi_tcp.h
@@ -52,7 +52,8 @@ struct iscsi_sw_tcp_conn {
uint32_t sendpage_failures_cnt;
uint32_t discontiguous_hdr_cnt;
- ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int);
+ ssize_t (*sendpage)(struct socket *, struct page *,
+ struct skb_frag_destructor *, int, size_t, int);
};
struct iscsi_sw_tcp_host {
diff --git a/drivers/staging/pohmelfs/trans.c b/drivers/staging/pohmelfs/trans.c
index 36a2535..f897fdb 100644
--- a/drivers/staging/pohmelfs/trans.c
+++ b/drivers/staging/pohmelfs/trans.c
@@ -104,7 +104,8 @@ static int netfs_trans_send_pages(struct netfs_trans *t, struct netfs_state *st)
msg.msg_flags = MSG_WAITALL | (attached_pages == 1 ? 0 :
MSG_MORE);
- err = kernel_sendpage(st->socket, page, 0, size, msg.msg_flags);
+ err = kernel_sendpage(st->socket, page, NULL,
+ 0, size, msg.msg_flags);
if (err <= 0) {
printk("%s: %d/%d failed to send transaction page: t: %p, gen: %u, size: %u, err: %d.\n",
__func__, i, t->page_num, t, t->gen, size, err);
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
index a1acb01..3448aaa 100644
--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -1323,7 +1323,8 @@ send_hdr:
u32 sub_len = min_t(u32, data_len, space);
send_pg:
tx_sent = conn->sock->ops->sendpage(conn->sock,
- sg_page(sg), sg->offset + offset, sub_len, 0);
+ sg_page(sg), NULL,
+ sg->offset + offset, sub_len, 0);
if (tx_sent != sub_len) {
if (tx_sent == -EAGAIN) {
pr_err("tcp_sendpage() returned"
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 990626e..98ace05 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1342,8 +1342,8 @@ static void send_to_sock(struct connection *con)
ret = 0;
if (len) {
- ret = kernel_sendpage(con->sock, e->page, offset, len,
- msg_flags);
+ ret = kernel_sendpage(con->sock, e->page, NULL,
+ offset, len, msg_flags);
if (ret == -EAGAIN || ret == 0) {
if (ret == -EAGAIN &&
test_bit(SOCK_ASYNC_NOSPACE, &con->sock->flags) &&
diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
index db5ee4b..81366a0 100644
--- a/fs/ocfs2/cluster/tcp.c
+++ b/fs/ocfs2/cluster/tcp.c
@@ -982,6 +982,7 @@ static void o2net_sendpage(struct o2net_sock_container *sc,
mutex_lock(&sc->sc_send_lock);
ret = sc->sc_sock->ops->sendpage(sc->sc_sock,
virt_to_page(kmalloced_virt),
+ NULL,
(long)kmalloced_virt & ~PAGE_MASK,
size, MSG_DONTWAIT);
mutex_unlock(&sc->sc_send_lock);
diff --git a/include/linux/net.h b/include/linux/net.h
index b299230..db562ba 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -157,6 +157,7 @@ struct kiocb;
struct sockaddr;
struct msghdr;
struct module;
+struct skb_frag_destructor;
struct proto_ops {
int family;
@@ -203,6 +204,7 @@ struct proto_ops {
int (*mmap) (struct file *file, struct socket *sock,
struct vm_area_struct * vma);
ssize_t (*sendpage) (struct socket *sock, struct page *page,
+ struct skb_frag_destructor *destroy,
int offset, size_t size, int flags);
ssize_t (*splice_read)(struct socket *sock, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len, unsigned int flags);
@@ -273,7 +275,9 @@ extern int kernel_getsockopt(struct socket *sock, int level, int optname,
char *optval, int *optlen);
extern int kernel_setsockopt(struct socket *sock, int level, int optname,
char *optval, unsigned int optlen);
-extern int kernel_sendpage(struct socket *sock, struct page *page, int offset,
+extern int kernel_sendpage(struct socket *sock, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset,
size_t size, int flags);
extern int kernel_sock_ioctl(struct socket *sock, int cmd, unsigned long arg);
extern int kernel_sock_shutdown(struct socket *sock,
diff --git a/include/net/inet_common.h b/include/net/inet_common.h
index 22fac98..91cd8d0 100644
--- a/include/net/inet_common.h
+++ b/include/net/inet_common.h
@@ -21,7 +21,9 @@ extern int inet_dgram_connect(struct socket *sock, struct sockaddr * uaddr,
extern int inet_accept(struct socket *sock, struct socket *newsock, int flags);
extern int inet_sendmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t size);
-extern ssize_t inet_sendpage(struct socket *sock, struct page *page, int offset,
+extern ssize_t inet_sendpage(struct socket *sock, struct page *page,
+ struct skb_frag_destructor *frag,
+ int offset,
size_t size, int flags);
extern int inet_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t size, int flags);
diff --git a/include/net/ip.h b/include/net/ip.h
index aa76c7a..bcbbd15 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -114,7 +114,9 @@ extern int ip_append_data(struct sock *sk, struct flowi4 *fl4,
struct rtable **rt,
unsigned int flags);
extern int ip_generic_getfrag(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb);
-extern ssize_t ip_append_page(struct sock *sk, struct flowi4 *fl4, struct page *page,
+extern ssize_t ip_append_page(struct sock *sk, struct flowi4 *fl4,
+ struct page *page,
+ struct skb_frag_destructor *destroy,
int offset, size_t size, int flags);
extern struct sk_buff *__ip_make_skb(struct sock *sk,
struct flowi4 *fl4,
diff --git a/include/net/sock.h b/include/net/sock.h
index 8e4062f..feb6266 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -765,6 +765,7 @@ struct proto {
size_t len, int noblock, int flags,
int *addr_len);
int (*sendpage)(struct sock *sk, struct page *page,
+ struct skb_frag_destructor *destroy,
int offset, size_t size, int flags);
int (*bind)(struct sock *sk,
struct sockaddr *uaddr, int addr_len);
@@ -1153,9 +1154,10 @@ extern int sock_no_mmap(struct file *file,
struct socket *sock,
struct vm_area_struct *vma);
extern ssize_t sock_no_sendpage(struct socket *sock,
- struct page *page,
- int offset, size_t size,
- int flags);
+ struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset, size_t size,
+ int flags);
/*
* Functions to fill in entries in struct proto_ops when a protocol
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 149a415..6dd0e35 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -323,7 +323,9 @@ extern void *tcp_v4_tw_get_peer(struct sock *sk);
extern int tcp_v4_tw_remember_stamp(struct inet_timewait_sock *tw);
extern int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
size_t size);
-extern int tcp_sendpage(struct sock *sk, struct page *page, int offset,
+extern int tcp_sendpage(struct sock *sk, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset,
size_t size, int flags);
extern int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg);
extern int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index c340e2e..2304360 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -849,7 +849,7 @@ static int write_partial_msg_pages(struct ceph_connection *con)
cpu_to_le32(crc32c(tmpcrc, base, len));
con->out_msg_pos.did_page_crc = 1;
}
- ret = kernel_sendpage(con->sock, page,
+ ret = kernel_sendpage(con->sock, page, NULL,
con->out_msg_pos.page_pos + page_shift,
len,
MSG_DONTWAIT | MSG_NOSIGNAL |
diff --git a/net/core/sock.c b/net/core/sock.c
index 72b8f06..5c1332e 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1863,7 +1863,9 @@ int sock_no_mmap(struct file *file, struct socket *sock, struct vm_area_struct *
}
EXPORT_SYMBOL(sock_no_mmap);
-ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, size_t size, int flags)
+ssize_t sock_no_sendpage(struct socket *sock, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset, size_t size, int flags)
{
ssize_t res;
struct msghdr msg = {.msg_flags = flags};
@@ -1873,6 +1875,8 @@ ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, siz
iov.iov_len = size;
res = kernel_sendmsg(sock, &msg, &iov, 1, size);
kunmap(page);
+ /* kernel_sendmsg copies so we can destroy immediately */
+ skb_frag_destructor_unref(destroy);
return res;
}
EXPORT_SYMBOL(sock_no_sendpage);
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 1b745d4..ace46eb 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -740,7 +740,9 @@ int inet_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
}
EXPORT_SYMBOL(inet_sendmsg);
-ssize_t inet_sendpage(struct socket *sock, struct page *page, int offset,
+ssize_t inet_sendpage(struct socket *sock, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset,
size_t size, int flags)
{
struct sock *sk = sock->sk;
@@ -753,8 +755,9 @@ ssize_t inet_sendpage(struct socket *sock, struct page *page, int offset,
return -EAGAIN;
if (sk->sk_prot->sendpage)
- return sk->sk_prot->sendpage(sk, page, offset, size, flags);
- return sock_no_sendpage(sock, page, offset, size, flags);
+ return sk->sk_prot->sendpage(sk, page, destroy,
+ offset, size, flags);
+ return sock_no_sendpage(sock, page, destroy, offset, size, flags);
}
EXPORT_SYMBOL(inet_sendpage);
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index f198745..71284d3 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -1116,6 +1116,7 @@ int ip_append_data(struct sock *sk, struct flowi4 *fl4,
}
ssize_t ip_append_page(struct sock *sk, struct flowi4 *fl4, struct page *page,
+ struct skb_frag_destructor *destroy,
int offset, size_t size, int flags)
{
struct inet_sock *inet = inet_sk(sk);
@@ -1229,11 +1230,12 @@ ssize_t ip_append_page(struct sock *sk, struct flowi4 *fl4, struct page *page,
i = skb_shinfo(skb)->nr_frags;
if (len > size)
len = size;
- if (skb_can_coalesce(skb, i, page, NULL, offset)) {
+ if (skb_can_coalesce(skb, i, page, destroy, offset)) {
skb_shinfo(skb)->frags[i-1].size += len;
} else if (i < MAX_SKB_FRAGS) {
- get_page(page);
skb_fill_page_desc(skb, i, page, offset, len);
+ skb_frag_set_destructor(skb, i, destroy);
+ skb_frag_ref(skb, i);
} else {
err = -EMSGSIZE;
goto error;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index dd93333..5dd6d50 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -757,7 +757,10 @@ static int tcp_send_mss(struct sock *sk, int *size_goal, int flags)
return mss_now;
}
-static ssize_t do_tcp_sendpages(struct sock *sk, struct page **pages, int poffset,
+static ssize_t do_tcp_sendpages(struct sock *sk,
+ struct page **pages,
+ struct skb_frag_destructor **destructors,
+ int poffset,
size_t psize, int flags)
{
struct tcp_sock *tp = tcp_sk(sk);
@@ -783,6 +786,8 @@ static ssize_t do_tcp_sendpages(struct sock *sk, struct page **pages, int poffse
while (psize > 0) {
struct sk_buff *skb = tcp_write_queue_tail(sk);
struct page *page = pages[poffset / PAGE_SIZE];
+ struct skb_frag_destructor *destroy =
+ destructors ? destructors[poffset / PAGE_SIZE] : NULL;
int copy, i, can_coalesce;
int offset = poffset % PAGE_SIZE;
int size = min_t(size_t, psize, PAGE_SIZE - offset);
@@ -804,7 +809,7 @@ new_segment:
copy = size;
i = skb_shinfo(skb)->nr_frags;
- can_coalesce = skb_can_coalesce(skb, i, page, NULL, offset);
+ can_coalesce = skb_can_coalesce(skb, i, page, destroy, offset);
if (!can_coalesce && i >= MAX_SKB_FRAGS) {
tcp_mark_push(tp, skb);
goto new_segment;
@@ -815,8 +820,9 @@ new_segment:
if (can_coalesce) {
skb_shinfo(skb)->frags[i - 1].size += copy;
} else {
- get_page(page);
skb_fill_page_desc(skb, i, page, offset, copy);
+ skb_frag_set_destructor(skb, i, destroy);
+ skb_frag_ref(skb, i);
}
skb->len += copy;
@@ -871,18 +877,20 @@ out_err:
return sk_stream_error(sk, flags, err);
}
-int tcp_sendpage(struct sock *sk, struct page *page, int offset,
- size_t size, int flags)
+int tcp_sendpage(struct sock *sk, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset, size_t size, int flags)
{
ssize_t res;
if (!(sk->sk_route_caps & NETIF_F_SG) ||
!(sk->sk_route_caps & NETIF_F_ALL_CSUM))
- return sock_no_sendpage(sk->sk_socket, page, offset, size,
- flags);
+ return sock_no_sendpage(sk->sk_socket, page, destroy,
+ offset, size, flags);
lock_sock(sk);
- res = do_tcp_sendpages(sk, &page, offset, size, flags);
+ res = do_tcp_sendpages(sk, &page, &destroy,
+ offset, size, flags);
release_sock(sk);
return res;
}
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 1b5a193..f4bab14 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1028,8 +1028,9 @@ do_confirm:
}
EXPORT_SYMBOL(udp_sendmsg);
-int udp_sendpage(struct sock *sk, struct page *page, int offset,
- size_t size, int flags)
+int udp_sendpage(struct sock *sk, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset, size_t size, int flags)
{
struct inet_sock *inet = inet_sk(sk);
struct udp_sock *up = udp_sk(sk);
@@ -1057,11 +1058,11 @@ int udp_sendpage(struct sock *sk, struct page *page, int offset,
}
ret = ip_append_page(sk, &inet->cork.fl.u.ip4,
- page, offset, size, flags);
+ page, destroy, offset, size, flags);
if (ret == -EOPNOTSUPP) {
release_sock(sk);
- return sock_no_sendpage(sk->sk_socket, page, offset,
- size, flags);
+ return sock_no_sendpage(sk->sk_socket, page, destroy,
+ offset, size, flags);
}
if (ret < 0) {
udp_flush_pending_frames(sk);
diff --git a/net/ipv4/udp_impl.h b/net/ipv4/udp_impl.h
index aaad650..4923d82 100644
--- a/net/ipv4/udp_impl.h
+++ b/net/ipv4/udp_impl.h
@@ -23,8 +23,9 @@ extern int compat_udp_getsockopt(struct sock *sk, int level, int optname,
#endif
extern int udp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
size_t len, int noblock, int flags, int *addr_len);
-extern int udp_sendpage(struct sock *sk, struct page *page, int offset,
- size_t size, int flags);
+extern int udp_sendpage(struct sock *sk, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset, size_t size, int flags);
extern int udp_queue_rcv_skb(struct sock * sk, struct sk_buff *skb);
extern void udp_destroy_sock(struct sock *sk);
diff --git a/net/rds/tcp_send.c b/net/rds/tcp_send.c
index 1b4fd68..71503ad 100644
--- a/net/rds/tcp_send.c
+++ b/net/rds/tcp_send.c
@@ -119,6 +119,7 @@ int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm,
while (sg < rm->data.op_nents) {
ret = tc->t_sock->ops->sendpage(tc->t_sock,
sg_page(&rm->data.op_sg[sg]),
+ NULL,
rm->data.op_sg[sg].offset + off,
rm->data.op_sg[sg].length - off,
MSG_DONTWAIT|MSG_NOSIGNAL);
diff --git a/net/socket.c b/net/socket.c
index 24a7740..cd927a2 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -795,7 +795,7 @@ static ssize_t sock_sendpage(struct file *file, struct page *page,
if (more)
flags |= MSG_MORE;
- return kernel_sendpage(sock, page, offset, size, flags);
+ return kernel_sendpage(sock, page, NULL, offset, size, flags);
}
static ssize_t sock_splice_read(struct file *file, loff_t *ppos,
@@ -3350,15 +3350,18 @@ int kernel_setsockopt(struct socket *sock, int level, int optname,
}
EXPORT_SYMBOL(kernel_setsockopt);
-int kernel_sendpage(struct socket *sock, struct page *page, int offset,
+int kernel_sendpage(struct socket *sock, struct page *page,
+ struct skb_frag_destructor *destroy,
+ int offset,
size_t size, int flags)
{
sock_update_classid(sock->sk);
if (sock->ops->sendpage)
- return sock->ops->sendpage(sock, page, offset, size, flags);
+ return sock->ops->sendpage(sock, page, destroy,
+ offset, size, flags);
- return sock_no_sendpage(sock, page, offset, size, flags);
+ return sock_no_sendpage(sock, page, destroy, offset, size, flags);
}
EXPORT_SYMBOL(kernel_sendpage);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 767d494..852a258 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -183,7 +183,7 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
/* send head */
if (slen == xdr->head[0].iov_len)
flags = 0;
- len = kernel_sendpage(sock, headpage, headoffset,
+ len = kernel_sendpage(sock, headpage, NULL, headoffset,
xdr->head[0].iov_len, flags);
if (len != xdr->head[0].iov_len)
goto out;
@@ -196,7 +196,7 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
while (pglen > 0) {
if (slen == size)
flags = 0;
- result = kernel_sendpage(sock, *ppage, base, size, flags);
+ result = kernel_sendpage(sock, *ppage, NULL, base, size, flags);
if (result > 0)
len += result;
if (result != size)
@@ -210,7 +210,7 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
/* send tail */
if (xdr->tail[0].iov_len) {
- result = kernel_sendpage(sock, tailpage, tailoffset,
+ result = kernel_sendpage(sock, tailpage, NULL, tailoffset,
xdr->tail[0].iov_len, 0);
if (result > 0)
len += result;
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index d7f97ef..f79e40e 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -408,7 +408,7 @@ static int xs_send_pagedata(struct socket *sock, struct xdr_buf *xdr, unsigned i
remainder -= len;
if (remainder != 0 || more)
flags |= MSG_MORE;
- err = sock->ops->sendpage(sock, *ppage, base, len, flags);
+ err = sock->ops->sendpage(sock, *ppage, NULL, base, len, flags);
if (remainder == 0 || err != len)
break;
sent += err;
--
1.7.2.5
On Fri, 2011-08-19 at 07:04 -0700, David Miller wrote:
> From: David Miller <[email protected]>
> Date: Fri, 19 Aug 2011 06:34:10 -0700 (PDT)
>
> > From: David Miller <[email protected]>
> > Date: Fri, 19 Aug 2011 06:29:58 -0700 (PDT)
> >
> >> From: Ian Campbell <[email protected]>
> >> Date: Fri, 19 Aug 2011 14:26:33 +0100
> >>
> >>> This is v3 of my series to enable visibility into SKB paged fragments'
> >>
> >> Please tone down the patch count :-/ I'm not going to review anything more
> >> than ~20 or so patches at a time from any one person.
> >
> > Also none of your patches will even apply to net-next GIT since nearly
> > all the ethernet drivers have been moved under drivers/net/ethernet
>
> Ian, please acknowledge my grievances here. I see you replying to other
> people, but not to me and I'm the one who has to process all of this
> stuff eventually.
Sorry, I thought I had replied to you first, did it go missing?
> If you want this series to be taken seriously:
>
> 1) Make your patches against net-next GIT, none of your driver patches will
> apply because they have all been moved around to different locations
> under drivers/net
> 2) Submit this in a _SANE_ way. This means: get the first patch that adds
>    the new interfaces approved and merged. Then slowly and carefully submit
>    small, reasonably sized sets of patches that convert the drivers over.
That all makes sense, I'll do it that way.
> Otherwise there is no way I'm even looking at this stuff, let alone actually
> apply it.
> Realize that every time you patch bomb ridiculously like this I have
> to click and classify every single patch in your bomb at
> http://patchwork.ozlabs.org/project/netdev/list/
Ah, right, I hadn't considered the patchwork thing, I'm very sorry about
that!
Ian.