2023-07-19 19:38:27

by Arseniy Krasnov

Subject: [RFC PATCH v2 1/4] virtio/vsock: rework MSG_PEEK for SOCK_STREAM

This reworks the current implementation of the MSG_PEEK logic:
1) Replaces 'skb_queue_walk_safe()' with 'skb_queue_walk()'. The 'safe'
   variant is not needed, as no skbs are removed from the queue in the
   loop.
2) Removes the nested while loop - the MSG_PEEK logic can be implemented
   without it: just iterate over the skbs without removing them and copy
   data from each one until the destination buffer is full.

Signed-off-by: Arseniy Krasnov <[email protected]>
Reviewed-by: Bobby Eshleman <[email protected]>
---
net/vmw_vsock/virtio_transport_common.c | 41 ++++++++++++-------------
1 file changed, 19 insertions(+), 22 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index b769fc258931..2ee40574c339 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -348,37 +348,34 @@ virtio_transport_stream_do_peek(struct vsock_sock *vsk,
size_t len)
{
struct virtio_vsock_sock *vvs = vsk->trans;
- size_t bytes, total = 0, off;
- struct sk_buff *skb, *tmp;
- int err = -EFAULT;
+ struct sk_buff *skb;
+ size_t total = 0;
+ int err;

spin_lock_bh(&vvs->rx_lock);

- skb_queue_walk_safe(&vvs->rx_queue, skb, tmp) {
- off = 0;
+ skb_queue_walk(&vvs->rx_queue, skb) {
+ size_t bytes;

- if (total == len)
- break;
+ bytes = len - total;
+ if (bytes > skb->len)
+ bytes = skb->len;

- while (total < len && off < skb->len) {
- bytes = len - total;
- if (bytes > skb->len - off)
- bytes = skb->len - off;
+ spin_unlock_bh(&vvs->rx_lock);

- /* sk_lock is held by caller so no one else can dequeue.
- * Unlock rx_lock since memcpy_to_msg() may sleep.
- */
- spin_unlock_bh(&vvs->rx_lock);
+ /* sk_lock is held by caller so no one else can dequeue.
+ * Unlock rx_lock since memcpy_to_msg() may sleep.
+ */
+ err = memcpy_to_msg(msg, skb->data, bytes);
+ if (err)
+ goto out;

- err = memcpy_to_msg(msg, skb->data + off, bytes);
- if (err)
- goto out;
+ total += bytes;

- spin_lock_bh(&vvs->rx_lock);
+ spin_lock_bh(&vvs->rx_lock);

- total += bytes;
- off += bytes;
- }
+ if (total == len)
+ break;
}

spin_unlock_bh(&vvs->rx_lock);
--
2.25.1
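
For reference, here is roughly how virtio_transport_stream_do_peek() reads
once the hunk above is applied. The loop body is taken straight from the
diff; the function's opening lines, the final return and the 'out:' error
path sit outside the hunk, so those parts are a reconstruction and should
be checked against the actual source.

static ssize_t
virtio_transport_stream_do_peek(struct vsock_sock *vsk,
                                struct msghdr *msg,
                                size_t len)
{
        struct virtio_vsock_sock *vvs = vsk->trans;
        struct sk_buff *skb;
        size_t total = 0;
        int err;

        spin_lock_bh(&vvs->rx_lock);

        skb_queue_walk(&vvs->rx_queue, skb) {
                size_t bytes;

                /* Copy at most what is left in the caller's buffer,
                 * and at most the payload of this skb.
                 */
                bytes = len - total;
                if (bytes > skb->len)
                        bytes = skb->len;

                spin_unlock_bh(&vvs->rx_lock);

                /* sk_lock is held by caller so no one else can dequeue.
                 * Unlock rx_lock since memcpy_to_msg() may sleep.
                 */
                err = memcpy_to_msg(msg, skb->data, bytes);
                if (err)
                        goto out;

                total += bytes;

                spin_lock_bh(&vvs->rx_lock);

                if (total == len)
                        break;
        }

        spin_unlock_bh(&vvs->rx_lock);

        return total;

out:
        /* Outside the hunk: report a short peek if some data was copied. */
        if (total)
                err = total;
        return err;
}

The key point, unchanged from the old code, is the locking dance in the
loop: rx_lock is dropped around memcpy_to_msg() because it may sleep, and
the caller's sk_lock is what prevents another thread from dequeueing skbs
in the meantime; since nothing is removed, skb_queue_walk() is sufficient.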



2023-07-25 16:19:00

by Stefano Garzarella

Subject: Re: [RFC PATCH v2 1/4] virtio/vsock: rework MSG_PEEK for SOCK_STREAM

On Wed, Jul 19, 2023 at 10:27:05PM +0300, Arseniy Krasnov wrote:
>This reworks the current implementation of the MSG_PEEK logic:
>1) Replaces 'skb_queue_walk_safe()' with 'skb_queue_walk()'. The 'safe'
>   variant is not needed, as no skbs are removed from the queue in the
>   loop.
>2) Removes the nested while loop - the MSG_PEEK logic can be implemented
>   without it: just iterate over the skbs without removing them and copy
>   data from each one until the destination buffer is full.
>
>Signed-off-by: Arseniy Krasnov <[email protected]>
>Reviewed-by: Bobby Eshleman <[email protected]>
>---
> net/vmw_vsock/virtio_transport_common.c | 41 ++++++++++++-------------
> 1 file changed, 19 insertions(+), 22 deletions(-)

Reviewed-by: Stefano Garzarella <[email protected]>
