2019-05-28 10:58:02

by Stefano Garzarella

Subject: [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove()

During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
before registering the driver", Stefan pointed out some possible issues
in the .probe() and .remove() callbacks of the virtio-vsock driver.

This series tries to solve these issues:
- Patch 1 postpones the 'the_virtio_vsock' assignment to the end of
.probe(), to prevent sockets from queueing works while the
initialization is not finished.
- Patches 2 and 3 stop the workers before calling
vdev->config->reset(vdev), to be sure that no one is accessing the
device, and add another flush at the end of .remove() to avoid a
use-after-free.
- Patch 4 also frees the used buffers in the virtqueues during
.remove().
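
Taken together, the series leaves virtio_vsock_remove() with roughly this
shape. This is an illustrative sketch combining the four patches, not the
literal merged code; the locking around the buffer freeing is elided:

```c
static void virtio_vsock_remove(struct virtio_device *vdev)
{
	struct virtio_vsock *vsock = vdev->priv;

	mutex_lock(&the_virtio_vsock_mutex);
	the_virtio_vsock = NULL;                 /* patch 1: unpublish early */

	vsock_for_each_connected_socket(virtio_vsock_reset_sock);

	/* patch 2: stop workers so no one touches the device */
	mutex_lock(&vsock->tx_lock);
	vsock->tx_run = false;
	mutex_unlock(&vsock->tx_lock);
	/* ... same for rx_run and event_run ... */

	virtio_vsock_flush_works(vsock);         /* patch 3: first flush */

	vdev->config->reset(vdev);

	/* patch 4: free unused *and* used buffers (under the vq locks) */
	virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_RX]);
	virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_TX]);

	vdev->config->del_vqs(vdev);

	virtio_vsock_flush_works(vsock);         /* patch 3: flush again */

	kfree(vsock);
	mutex_unlock(&the_virtio_vsock_mutex);
}
```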

Stefano Garzarella (4):
vsock/virtio: fix locking around 'the_virtio_vsock'
vsock/virtio: stop workers during the .remove()
vsock/virtio: fix flush of works during the .remove()
vsock/virtio: free used buffers during the .remove()

net/vmw_vsock/virtio_transport.c | 105 ++++++++++++++++++++++++++-----
1 file changed, 90 insertions(+), 15 deletions(-)

--
2.20.1


2019-05-28 10:58:24

by Stefano Garzarella

Subject: [PATCH 2/4] vsock/virtio: stop workers during the .remove()

Before calling vdev->config->reset(vdev) we need to be sure that
no one is accessing the device. For this reason, we add new variables
to struct virtio_vsock to stop the workers during the .remove().

This patch also adds a few comments before vdev->config->reset(vdev)
and vdev->config->del_vqs(vdev).

Suggested-by: Stefan Hajnoczi <[email protected]>
Suggested-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
net/vmw_vsock/virtio_transport.c | 49 +++++++++++++++++++++++++++++++-
1 file changed, 48 insertions(+), 1 deletion(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index d3ba7747aa73..e694df10ab61 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -39,6 +39,7 @@ struct virtio_vsock {
* must be accessed with tx_lock held.
*/
struct mutex tx_lock;
+ bool tx_run;

struct work_struct send_pkt_work;
spinlock_t send_pkt_list_lock;
@@ -54,6 +55,7 @@ struct virtio_vsock {
* must be accessed with rx_lock held.
*/
struct mutex rx_lock;
+ bool rx_run;
int rx_buf_nr;
int rx_buf_max_nr;

@@ -61,6 +63,7 @@ struct virtio_vsock {
* vqs[VSOCK_VQ_EVENT] must be accessed with event_lock held.
*/
struct mutex event_lock;
+ bool event_run;
struct virtio_vsock_event event_list[8];

u32 guest_cid;
@@ -98,6 +101,10 @@ static void virtio_transport_loopback_work(struct work_struct *work)
spin_unlock_bh(&vsock->loopback_list_lock);

mutex_lock(&vsock->rx_lock);
+
+ if (!vsock->rx_run)
+ goto out;
+
while (!list_empty(&pkts)) {
struct virtio_vsock_pkt *pkt;

@@ -106,6 +113,7 @@ static void virtio_transport_loopback_work(struct work_struct *work)

virtio_transport_recv_pkt(pkt);
}
+out:
mutex_unlock(&vsock->rx_lock);
}

@@ -134,6 +142,9 @@ virtio_transport_send_pkt_work(struct work_struct *work)

mutex_lock(&vsock->tx_lock);

+ if (!vsock->tx_run)
+ goto out;
+
vq = vsock->vqs[VSOCK_VQ_TX];

for (;;) {
@@ -192,6 +203,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
if (added)
virtqueue_kick(vq);

+out:
mutex_unlock(&vsock->tx_lock);

if (restart_rx)
@@ -314,6 +326,10 @@ static void virtio_transport_tx_work(struct work_struct *work)

vq = vsock->vqs[VSOCK_VQ_TX];
mutex_lock(&vsock->tx_lock);
+
+ if (!vsock->tx_run)
+ goto out;
+
do {
struct virtio_vsock_pkt *pkt;
unsigned int len;
@@ -324,6 +340,8 @@ static void virtio_transport_tx_work(struct work_struct *work)
added = true;
}
} while (!virtqueue_enable_cb(vq));
+
+out:
mutex_unlock(&vsock->tx_lock);

if (added)
@@ -352,6 +370,9 @@ static void virtio_transport_rx_work(struct work_struct *work)

mutex_lock(&vsock->rx_lock);

+ if (!vsock->rx_run)
+ goto out;
+
do {
virtqueue_disable_cb(vq);
for (;;) {
@@ -461,6 +482,9 @@ static void virtio_transport_event_work(struct work_struct *work)

mutex_lock(&vsock->event_lock);

+ if (!vsock->event_run)
+ goto out;
+
do {
struct virtio_vsock_event *event;
unsigned int len;
@@ -475,7 +499,7 @@ static void virtio_transport_event_work(struct work_struct *work)
} while (!virtqueue_enable_cb(vq));

virtqueue_kick(vsock->vqs[VSOCK_VQ_EVENT]);
-
+out:
mutex_unlock(&vsock->event_lock);
}

@@ -611,12 +635,18 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
INIT_WORK(&vsock->loopback_work, virtio_transport_loopback_work);

+ mutex_lock(&vsock->tx_lock);
+ vsock->tx_run = true;
+ mutex_unlock(&vsock->tx_lock);
+
mutex_lock(&vsock->rx_lock);
virtio_vsock_rx_fill(vsock);
+ vsock->rx_run = true;
mutex_unlock(&vsock->rx_lock);

mutex_lock(&vsock->event_lock);
virtio_vsock_event_fill(vsock);
+ vsock->event_run = true;
mutex_unlock(&vsock->event_lock);

the_virtio_vsock = vsock;
@@ -647,6 +677,22 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
/* Reset all connected sockets when the device disappear */
vsock_for_each_connected_socket(virtio_vsock_reset_sock);

+ /* Stop all work handlers to make sure no one is accessing the device */
+ mutex_lock(&vsock->rx_lock);
+ vsock->rx_run = false;
+ mutex_unlock(&vsock->rx_lock);
+
+ mutex_lock(&vsock->tx_lock);
+ vsock->tx_run = false;
+ mutex_unlock(&vsock->tx_lock);
+
+ mutex_lock(&vsock->event_lock);
+ vsock->event_run = false;
+ mutex_unlock(&vsock->event_lock);
+
+ /* Flush all device writes and interrupts, device will not use any
+ * more buffers.
+ */
vdev->config->reset(vdev);

mutex_lock(&vsock->rx_lock);
@@ -677,6 +723,7 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
}
spin_unlock_bh(&vsock->loopback_list_lock);

+ /* Delete virtqueues and flush outstanding callbacks if any */
vdev->config->del_vqs(vdev);

kfree(vsock);
--
2.20.1

2019-05-28 10:59:22

by Stefano Garzarella

Subject: [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'

This patch protects the reading of 'the_virtio_vsock' by taking the
mutex used when it is set.
We also move the 'the_virtio_vsock' assignment to the end of
.probe(), after all the initialization is finished, and to the
beginning of .remove(), before releasing resources, holding the
lock until the end of the function.

Signed-off-by: Stefano Garzarella <[email protected]>
---
net/vmw_vsock/virtio_transport.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 96ab344f17bb..d3ba7747aa73 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -68,7 +68,13 @@ struct virtio_vsock {

static struct virtio_vsock *virtio_vsock_get(void)
{
- return the_virtio_vsock;
+ struct virtio_vsock *vsock;
+
+ mutex_lock(&the_virtio_vsock_mutex);
+ vsock = the_virtio_vsock;
+ mutex_unlock(&the_virtio_vsock_mutex);
+
+ return vsock;
}

static u32 virtio_transport_get_local_cid(void)
@@ -592,7 +598,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
atomic_set(&vsock->queued_replies, 0);

vdev->priv = vsock;
- the_virtio_vsock = vsock;
mutex_init(&vsock->tx_lock);
mutex_init(&vsock->rx_lock);
mutex_init(&vsock->event_lock);
@@ -614,6 +619,8 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
virtio_vsock_event_fill(vsock);
mutex_unlock(&vsock->event_lock);

+ the_virtio_vsock = vsock;
+
mutex_unlock(&the_virtio_vsock_mutex);
return 0;

@@ -628,6 +635,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
struct virtio_vsock *vsock = vdev->priv;
struct virtio_vsock_pkt *pkt;

+ mutex_lock(&the_virtio_vsock_mutex);
+ the_virtio_vsock = NULL;
+
flush_work(&vsock->loopback_work);
flush_work(&vsock->rx_work);
flush_work(&vsock->tx_work);
@@ -667,13 +677,10 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
}
spin_unlock_bh(&vsock->loopback_list_lock);

- mutex_lock(&the_virtio_vsock_mutex);
- the_virtio_vsock = NULL;
- mutex_unlock(&the_virtio_vsock_mutex);
-
vdev->config->del_vqs(vdev);

kfree(vsock);
+ mutex_unlock(&the_virtio_vsock_mutex);
}

static struct virtio_device_id id_table[] = {
--
2.20.1

2019-05-28 10:59:26

by Stefano Garzarella

Subject: [PATCH 4/4] vsock/virtio: free used buffers during the .remove()

Before this patch, we only freed unused buffers, but there may
still be used buffers to be freed.

Signed-off-by: Stefano Garzarella <[email protected]>
---
net/vmw_vsock/virtio_transport.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index ad093ce96693..6a2afb989562 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -669,6 +669,18 @@ static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
flush_work(&vsock->send_pkt_work);
}

+static void virtio_vsock_free_buf(struct virtqueue *vq)
+{
+ struct virtio_vsock_pkt *pkt;
+ unsigned int len;
+
+ while ((pkt = virtqueue_detach_unused_buf(vq)))
+ virtio_transport_free_pkt(pkt);
+
+ while ((pkt = virtqueue_get_buf(vq, &len)))
+ virtio_transport_free_pkt(pkt);
+}
+
static void virtio_vsock_remove(struct virtio_device *vdev)
{
struct virtio_vsock *vsock = vdev->priv;
@@ -702,13 +714,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
vdev->config->reset(vdev);

mutex_lock(&vsock->rx_lock);
- while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
- virtio_transport_free_pkt(pkt);
+ virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_RX]);
mutex_unlock(&vsock->rx_lock);

mutex_lock(&vsock->tx_lock);
- while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_TX])))
- virtio_transport_free_pkt(pkt);
+ virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_TX]);
mutex_unlock(&vsock->tx_lock);

spin_lock_bh(&vsock->send_pkt_list_lock);
--
2.20.1

2019-05-28 14:03:14

by Stefano Garzarella

Subject: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

We flush all pending works before calling vdev->config->reset(vdev),
but other works can be queued before vdev->config->del_vqs(vdev),
so we add another flush after it, to avoid a use-after-free.

Suggested-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index e694df10ab61..ad093ce96693 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
return ret;
}

+static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
+{
+ flush_work(&vsock->loopback_work);
+ flush_work(&vsock->rx_work);
+ flush_work(&vsock->tx_work);
+ flush_work(&vsock->event_work);
+ flush_work(&vsock->send_pkt_work);
+}
+
static void virtio_vsock_remove(struct virtio_device *vdev)
{
struct virtio_vsock *vsock = vdev->priv;
@@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
mutex_lock(&the_virtio_vsock_mutex);
the_virtio_vsock = NULL;

- flush_work(&vsock->loopback_work);
- flush_work(&vsock->rx_work);
- flush_work(&vsock->tx_work);
- flush_work(&vsock->event_work);
- flush_work(&vsock->send_pkt_work);
-
/* Reset all connected sockets when the device disappear */
vsock_for_each_connected_socket(virtio_vsock_reset_sock);

@@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
vsock->event_run = false;
mutex_unlock(&vsock->event_lock);

+ /* Flush all pending works */
+ virtio_vsock_flush_works(vsock);
+
/* Flush all device writes and interrupts, device will not use any
* more buffers.
*/
@@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
/* Delete virtqueues and flush outstanding callbacks if any */
vdev->config->del_vqs(vdev);

+ /* Other works can be queued before 'config->del_vqs()', so we flush
+ * all works before to free the vsock object to avoid use after free.
+ */
+ virtio_vsock_flush_works(vsock);
+
kfree(vsock);
mutex_unlock(&the_virtio_vsock_mutex);
}
--
2.20.1

2019-05-29 03:24:15

by Jason Wang

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()


On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> We flush all pending works before to call vdev->config->reset(vdev),
> but other works can be queued before the vdev->config->del_vqs(vdev),
> so we add another flush after it, to avoid use after free.
>
> Suggested-by: Michael S. Tsirkin <[email protected]>
> Signed-off-by: Stefano Garzarella <[email protected]>
> ---
> net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> 1 file changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index e694df10ab61..ad093ce96693 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> return ret;
> }
>
> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> +{
> + flush_work(&vsock->loopback_work);
> + flush_work(&vsock->rx_work);
> + flush_work(&vsock->tx_work);
> + flush_work(&vsock->event_work);
> + flush_work(&vsock->send_pkt_work);
> +}
> +
> static void virtio_vsock_remove(struct virtio_device *vdev)
> {
> struct virtio_vsock *vsock = vdev->priv;
> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> mutex_lock(&the_virtio_vsock_mutex);
> the_virtio_vsock = NULL;
>
> - flush_work(&vsock->loopback_work);
> - flush_work(&vsock->rx_work);
> - flush_work(&vsock->tx_work);
> - flush_work(&vsock->event_work);
> - flush_work(&vsock->send_pkt_work);
> -
> /* Reset all connected sockets when the device disappear */
> vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>
> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> vsock->event_run = false;
> mutex_unlock(&vsock->event_lock);
>
> + /* Flush all pending works */
> + virtio_vsock_flush_works(vsock);
> +
> /* Flush all device writes and interrupts, device will not use any
> * more buffers.
> */
> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> /* Delete virtqueues and flush outstanding callbacks if any */
> vdev->config->del_vqs(vdev);
>
> + /* Other works can be queued before 'config->del_vqs()', so we flush
> + * all works before to free the vsock object to avoid use after free.
> + */
> + virtio_vsock_flush_works(vsock);


Some questions after a quick glance:

1) It looks to me that the work could be queued from the path of
vsock_transport_cancel_pkt(). Is that synchronized here?

2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run
still needed? It looks to me we've already done that, except that we need
to flush rx_work at the end, since send_pkt_work can requeue rx_work.

Thanks


> +
> kfree(vsock);
> mutex_unlock(&the_virtio_vsock_mutex);
> }

2019-05-29 11:00:01

by Stefano Garzarella

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>
> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > We flush all pending works before to call vdev->config->reset(vdev),
> > but other works can be queued before the vdev->config->del_vqs(vdev),
> > so we add another flush after it, to avoid use after free.
> >
> > Suggested-by: Michael S. Tsirkin <[email protected]>
> > Signed-off-by: Stefano Garzarella <[email protected]>
> > ---
> > net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> > 1 file changed, 17 insertions(+), 6 deletions(-)
> >
> > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > index e694df10ab61..ad093ce96693 100644
> > --- a/net/vmw_vsock/virtio_transport.c
> > +++ b/net/vmw_vsock/virtio_transport.c
> > @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> > return ret;
> > }
> > +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> > +{
> > + flush_work(&vsock->loopback_work);
> > + flush_work(&vsock->rx_work);
> > + flush_work(&vsock->tx_work);
> > + flush_work(&vsock->event_work);
> > + flush_work(&vsock->send_pkt_work);
> > +}
> > +
> > static void virtio_vsock_remove(struct virtio_device *vdev)
> > {
> > struct virtio_vsock *vsock = vdev->priv;
> > @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > mutex_lock(&the_virtio_vsock_mutex);
> > the_virtio_vsock = NULL;
> > - flush_work(&vsock->loopback_work);
> > - flush_work(&vsock->rx_work);
> > - flush_work(&vsock->tx_work);
> > - flush_work(&vsock->event_work);
> > - flush_work(&vsock->send_pkt_work);
> > -
> > /* Reset all connected sockets when the device disappear */
> > vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > vsock->event_run = false;
> > mutex_unlock(&vsock->event_lock);
> > + /* Flush all pending works */
> > + virtio_vsock_flush_works(vsock);
> > +
> > /* Flush all device writes and interrupts, device will not use any
> > * more buffers.
> > */
> > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > /* Delete virtqueues and flush outstanding callbacks if any */
> > vdev->config->del_vqs(vdev);
> > + /* Other works can be queued before 'config->del_vqs()', so we flush
> > + * all works before to free the vsock object to avoid use after free.
> > + */
> > + virtio_vsock_flush_works(vsock);
>
>
> Some questions after a quick glance:
>
> 1) It looks to me that the work could be queued from the path of
> vsock_transport_cancel_pkt() . Is that synchronized here?
>

Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
queue work from the upper layer (socket).

Setting the_virtio_vsock to NULL should synchronize, but after a careful look
a rare issue could happen:
we set the_virtio_vsock to NULL at the start of .remove() and we
free the object it points to at the end of .remove(), so
virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
running, accessing the object that we have just freed.

Should I use something like RCU to prevent this issue?

virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
{
rcu_read_lock();
vsock = rcu_dereference(the_virtio_vsock);
...
rcu_read_unlock();
}

virtio_vsock_remove()
{
rcu_assign_pointer(the_virtio_vsock, NULL);
synchronize_rcu();

...

free(vsock);
}

Could there be a better approach?
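
For reference, the RCU scheme sketched above would look roughly like this
when fleshed out in kernel style. This is only an illustration of the idea
under discussion, not the patch that was eventually merged, and the bodies
are abbreviated:

```c
static struct virtio_vsock __rcu *the_virtio_vsock;

static int virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
{
	struct virtio_vsock *vsock;

	rcu_read_lock();
	vsock = rcu_dereference(the_virtio_vsock);
	if (!vsock) {
		rcu_read_unlock();
		virtio_transport_free_pkt(pkt);
		return -ENODEV;
	}
	/* ... queue pkt and schedule send_pkt_work ... */
	rcu_read_unlock();
	return 0; /* simplified: the real code returns the packet length */
}

static void virtio_vsock_remove(struct virtio_device *vdev)
{
	struct virtio_vsock *vsock = vdev->priv;

	rcu_assign_pointer(the_virtio_vsock, NULL);
	synchronize_rcu();   /* wait for all readers to drop the pointer */

	/* ... stop workers, reset device, free buffers, del_vqs ... */

	kfree(vsock);
}
```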


> 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
> needed? It looks to me we've already done except that we need flush rx_work
> in the end since send_pkt_work can requeue rx_work.

The main reason for tx_run/rx_run/event_run is to prevent a worker
function from running while we are calling config->reset().

E.g. if an interrupt comes between virtio_vsock_flush_works() and
config->reset(), it can queue new works that can access the device while
we are in config->reset().

IMHO they are still needed.

What do you think?


Thanks for your questions,
Stefano

2019-05-30 04:31:21

by David Miller

Subject: Re: [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'

From: Stefano Garzarella <[email protected]>
Date: Tue, 28 May 2019 12:56:20 +0200

> @@ -68,7 +68,13 @@ struct virtio_vsock {
>
> static struct virtio_vsock *virtio_vsock_get(void)
> {
> - return the_virtio_vsock;
> + struct virtio_vsock *vsock;
> +
> + mutex_lock(&the_virtio_vsock_mutex);
> + vsock = the_virtio_vsock;
> + mutex_unlock(&the_virtio_vsock_mutex);
> +
> + return vsock;

This doesn't do anything as far as I can tell.

No matter what, you will either get the value before it's changed or
after it's changed.

Since you should never publish the pointer by assigning it until the
object is fully initialized, this can never be a problem even without
the mutex being there.

Even if you sampled the the_virtio_vsock value right before it's being
set to NULL by the remove function, that can still happen with the
mutex held too.

This function is also terribly named btw, it implies that a reference
count is being taken. But that's not what this function does, it
just returns the pointer value as-is.

2019-05-30 09:48:06

by Jason Wang

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()


On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
>>> We flush all pending works before to call vdev->config->reset(vdev),
>>> but other works can be queued before the vdev->config->del_vqs(vdev),
>>> so we add another flush after it, to avoid use after free.
>>>
>>> Suggested-by: Michael S. Tsirkin <[email protected]>
>>> Signed-off-by: Stefano Garzarella <[email protected]>
>>> ---
>>> net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
>>> 1 file changed, 17 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>> index e694df10ab61..ad093ce96693 100644
>>> --- a/net/vmw_vsock/virtio_transport.c
>>> +++ b/net/vmw_vsock/virtio_transport.c
>>> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
>>> return ret;
>>> }
>>> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
>>> +{
>>> + flush_work(&vsock->loopback_work);
>>> + flush_work(&vsock->rx_work);
>>> + flush_work(&vsock->tx_work);
>>> + flush_work(&vsock->event_work);
>>> + flush_work(&vsock->send_pkt_work);
>>> +}
>>> +
>>> static void virtio_vsock_remove(struct virtio_device *vdev)
>>> {
>>> struct virtio_vsock *vsock = vdev->priv;
>>> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>> mutex_lock(&the_virtio_vsock_mutex);
>>> the_virtio_vsock = NULL;
>>> - flush_work(&vsock->loopback_work);
>>> - flush_work(&vsock->rx_work);
>>> - flush_work(&vsock->tx_work);
>>> - flush_work(&vsock->event_work);
>>> - flush_work(&vsock->send_pkt_work);
>>> -
>>> /* Reset all connected sockets when the device disappear */
>>> vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>> vsock->event_run = false;
>>> mutex_unlock(&vsock->event_lock);
>>> + /* Flush all pending works */
>>> + virtio_vsock_flush_works(vsock);
>>> +
>>> /* Flush all device writes and interrupts, device will not use any
>>> * more buffers.
>>> */
>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>> /* Delete virtqueues and flush outstanding callbacks if any */
>>> vdev->config->del_vqs(vdev);
>>> + /* Other works can be queued before 'config->del_vqs()', so we flush
>>> + * all works before to free the vsock object to avoid use after free.
>>> + */
>>> + virtio_vsock_flush_works(vsock);
>>
>> Some questions after a quick glance:
>>
>> 1) It looks to me that the work could be queued from the path of
>> vsock_transport_cancel_pkt() . Is that synchronized here?
>>
> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> queue work from the upper layer (socket).
>
> Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> a rare issue could happen:
> we are setting the_virtio_vsock to NULL at the start of .remove() and we
> are freeing the object pointed by it at the end of .remove(), so
> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> running, accessing the object that we are freed.


Yes, that's my point.


>
> Should I use something like RCU to prevent this issue?
>
> virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> {
> rcu_read_lock();
> vsock = rcu_dereference(the_virtio_vsock);


RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).


> ...
> rcu_read_unlock();
> }
>
> virtio_vsock_remove()
> {
> rcu_assign_pointer(the_virtio_vsock, NULL);
> synchronize_rcu();
>
> ...
>
> free(vsock);
> }
>
> Could there be a better approach?
>
>
>> 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
>> needed? It looks to me we've already done except that we need flush rx_work
>> in the end since send_pkt_work can requeue rx_work.
> The main reason of tx_run/rx_run/event_run is to prevent that a worker
> function is running while we are calling config->reset().
>
> E.g. if an interrupt comes between virtio_vsock_flush_works() and
> config->reset(), it can queue new works that can access the device while
> we are in config->reset().
>
> IMHO they are still needed.
>
> What do you think?


I mean, could we simply do the flush after reset once, without the
tx_run/rx_run tricks?

reset();

virtio_vsock_flush_works();

virtio_vsock_free_buf();


Thanks


>
>
> Thanks for your questions,
> Stefano

2019-05-30 10:12:01

by Stefano Garzarella

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>
> On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > We flush all pending works before to call vdev->config->reset(vdev),
> > > > but other works can be queued before the vdev->config->del_vqs(vdev),
> > > > so we add another flush after it, to avoid use after free.
> > > >
> > > > Suggested-by: Michael S. Tsirkin <[email protected]>
> > > > Signed-off-by: Stefano Garzarella <[email protected]>
> > > > ---
> > > > net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> > > > 1 file changed, 17 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > > > index e694df10ab61..ad093ce96693 100644
> > > > --- a/net/vmw_vsock/virtio_transport.c
> > > > +++ b/net/vmw_vsock/virtio_transport.c
> > > > @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> > > > return ret;
> > > > }
> > > > +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> > > > +{
> > > > + flush_work(&vsock->loopback_work);
> > > > + flush_work(&vsock->rx_work);
> > > > + flush_work(&vsock->tx_work);
> > > > + flush_work(&vsock->event_work);
> > > > + flush_work(&vsock->send_pkt_work);
> > > > +}
> > > > +
> > > > static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > {
> > > > struct virtio_vsock *vsock = vdev->priv;
> > > > @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > mutex_lock(&the_virtio_vsock_mutex);
> > > > the_virtio_vsock = NULL;
> > > > - flush_work(&vsock->loopback_work);
> > > > - flush_work(&vsock->rx_work);
> > > > - flush_work(&vsock->tx_work);
> > > > - flush_work(&vsock->event_work);
> > > > - flush_work(&vsock->send_pkt_work);
> > > > -
> > > > /* Reset all connected sockets when the device disappear */
> > > > vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > vsock->event_run = false;
> > > > mutex_unlock(&vsock->event_lock);
> > > > + /* Flush all pending works */
> > > > + virtio_vsock_flush_works(vsock);
> > > > +
> > > > /* Flush all device writes and interrupts, device will not use any
> > > > * more buffers.
> > > > */
> > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > /* Delete virtqueues and flush outstanding callbacks if any */
> > > > vdev->config->del_vqs(vdev);
> > > > + /* Other works can be queued before 'config->del_vqs()', so we flush
> > > > + * all works before to free the vsock object to avoid use after free.
> > > > + */
> > > > + virtio_vsock_flush_works(vsock);
> > >
> > > Some questions after a quick glance:
> > >
> > > 1) It looks to me that the work could be queued from the path of
> > > vsock_transport_cancel_pkt() . Is that synchronized here?
> > >
> > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > queue work from the upper layer (socket).
> >
> > Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> > a rare issue could happen:
> > we are setting the_virtio_vsock to NULL at the start of .remove() and we
> > are freeing the object pointed by it at the end of .remove(), so
> > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > running, accessing the object that we are freed.
>
>
> Yes, that's my point.
>
>
> >
> > Should I use something like RCU to prevent this issue?
> >
> > virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > {
> > rcu_read_lock();
> > vsock = rcu_dereference(the_virtio_vsock);
>
>
> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>

Okay, I'm going this way.

>
> > ...
> > rcu_read_unlock();
> > }
> >
> > virtio_vsock_remove()
> > {
> > rcu_assign_pointer(the_virtio_vsock, NULL);
> > synchronize_rcu();
> >
> > ...
> >
> > free(vsock);
> > }
> >
> > Could there be a better approach?
> >
> >
> > > 2) If we decide to flush after dev_vqs(), is tx_run/rx_run/event_run still
> > > needed? It looks to me we've already done except that we need flush rx_work
> > > in the end since send_pkt_work can requeue rx_work.
> > The main reason of tx_run/rx_run/event_run is to prevent that a worker
> > function is running while we are calling config->reset().
> >
> > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > config->reset(), it can queue new works that can access the device while
> > we are in config->reset().
> >
> > IMHO they are still needed.
> >
> > What do you think?
>
>
> I mean, could we simply do the flush after reset once, without the
> tx_run/rx_run tricks?
>
> reset();
>
> virtio_vsock_flush_works();
>
> virtio_vsock_free_buf();

My only doubt is:
is it safe to call config->reset() while a worker function could access
the device?

I had this doubt after reading Michael's advice [1] and looking at
virtnet_remove(), where these lines appear before the config->reset():

/* Make sure no work handler is accessing the device. */
flush_work(&vi->config_work);

Thanks,
Stefano

[1] https://lore.kernel.org/netdev/[email protected]

2019-05-30 10:29:47

by Stefano Garzarella

Subject: Re: [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'

On Wed, May 29, 2019 at 09:28:52PM -0700, David Miller wrote:
> From: Stefano Garzarella <[email protected]>
> Date: Tue, 28 May 2019 12:56:20 +0200
>
> > @@ -68,7 +68,13 @@ struct virtio_vsock {
> >
> > static struct virtio_vsock *virtio_vsock_get(void)
> > {
> > - return the_virtio_vsock;
> > + struct virtio_vsock *vsock;
> > +
> > + mutex_lock(&the_virtio_vsock_mutex);
> > + vsock = the_virtio_vsock;
> > + mutex_unlock(&the_virtio_vsock_mutex);
> > +
> > + return vsock;
>
> This doesn't do anything as far as I can tell.
>
> No matter what, you will either get the value before it's changed or
> after it's changed.
>
> Since you should never publish the pointer by assigning it until the
> object is fully initialized, this can never be a problem even without
> the mutex being there.
>
> Even if you sampled the the_virtio_vsock value right before it's being
> set to NULL by the remove function, that can still happen with the
> mutex held too.

Yes, I found out when I was answering Jason's question [1]. :(

I proposed using RCU to solve this issue; do you agree?
Let me know if there is a better way.

>
> This function is also terribly named btw, it implies that a reference
> count is being taken. But that's not what this function does, it
> just returns the pointer value as-is.

What do you think if I remove the function, using directly the_virtio_vsock?
(should be easier to use with RCU API)

Thanks,
Stefano

[1] https://lore.kernel.org/netdev/20190529105832.oz3sagbne5teq3nt@steredhat
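[Editor's note: the RCU-based replacement for virtio_vsock_get() being proposed
here would look roughly as follows — a sketch only, so treat names and error
codes as illustrative rather than the final patch:]

```c
/* Global pointer, published only when fully initialized */
static struct virtio_vsock __rcu *the_virtio_vsock;

/* Reader side, e.g. in virtio_transport_send_pkt() */
rcu_read_lock();
vsock = rcu_dereference(the_virtio_vsock);
if (!vsock) {
	rcu_read_unlock();
	return -ENODEV;		/* device gone or not yet probed */
}
/* ... vsock is safe to use until rcu_read_unlock() ... */
rcu_read_unlock();

/* Writer side in .probe(): publish last, after full initialization */
rcu_assign_pointer(the_virtio_vsock, vsock);

/* Writer side in .remove(): unpublish, wait for readers, then free */
rcu_assign_pointer(the_virtio_vsock, NULL);
synchronize_rcu();		/* no reader still holds the old pointer */
kfree(vsock);
```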

2019-05-30 12:00:46

by Jason Wang

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()


On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>> On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
>>>>> We flush all pending works before to call vdev->config->reset(vdev),
>>>>> but other works can be queued before the vdev->config->del_vqs(vdev),
>>>>> so we add another flush after it, to avoid use after free.
>>>>>
>>>>> Suggested-by: Michael S. Tsirkin <[email protected]>
>>>>> Signed-off-by: Stefano Garzarella <[email protected]>
>>>>> ---
>>>>> net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
>>>>> 1 file changed, 17 insertions(+), 6 deletions(-)
>>>>>
>>>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>>>> index e694df10ab61..ad093ce96693 100644
>>>>> --- a/net/vmw_vsock/virtio_transport.c
>>>>> +++ b/net/vmw_vsock/virtio_transport.c
>>>>> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
>>>>> return ret;
>>>>> }
>>>>> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
>>>>> +{
>>>>> + flush_work(&vsock->loopback_work);
>>>>> + flush_work(&vsock->rx_work);
>>>>> + flush_work(&vsock->tx_work);
>>>>> + flush_work(&vsock->event_work);
>>>>> + flush_work(&vsock->send_pkt_work);
>>>>> +}
>>>>> +
>>>>> static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> {
>>>>> struct virtio_vsock *vsock = vdev->priv;
>>>>> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> mutex_lock(&the_virtio_vsock_mutex);
>>>>> the_virtio_vsock = NULL;
>>>>> - flush_work(&vsock->loopback_work);
>>>>> - flush_work(&vsock->rx_work);
>>>>> - flush_work(&vsock->tx_work);
>>>>> - flush_work(&vsock->event_work);
>>>>> - flush_work(&vsock->send_pkt_work);
>>>>> -
>>>>> /* Reset all connected sockets when the device disappear */
>>>>> vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> vsock->event_run = false;
>>>>> mutex_unlock(&vsock->event_lock);
>>>>> + /* Flush all pending works */
>>>>> + virtio_vsock_flush_works(vsock);
>>>>> +
>>>>> /* Flush all device writes and interrupts, device will not use any
>>>>> * more buffers.
>>>>> */
>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> /* Delete virtqueues and flush outstanding callbacks if any */
>>>>> vdev->config->del_vqs(vdev);
>>>>> + /* Other works can be queued before 'config->del_vqs()', so we flush
>>>>> + * all works before to free the vsock object to avoid use after free.
>>>>> + */
>>>>> + virtio_vsock_flush_works(vsock);
>>>> Some questions after a quick glance:
>>>>
>>>> 1) It looks to me that the work could be queued from the path of
>>>> vsock_transport_cancel_pkt() . Is that synchronized here?
>>>>
>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>> queue work from the upper layer (socket).
>>>
>>> Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
>>> a rare issue could happen:
>>> we are setting the_virtio_vsock to NULL at the start of .remove() and we
>>> are freeing the object pointed by it at the end of .remove(), so
>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>> running, accessing the object that we have freed.
>>
>> Yes, that's my point.
>>
>>
>>> Should I use something like RCU to prevent this issue?
>>>
>>> virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>> {
>>> rcu_read_lock();
>>> vsock = rcu_dereference(the_virtio_vsock);
>>
>> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>>
> Okay, I'm going this way.
>
>>> ...
>>> rcu_read_unlock();
>>> }
>>>
>>> virtio_vsock_remove()
>>> {
>>> rcu_assign_pointer(the_virtio_vsock, NULL);
>>> synchronize_rcu();
>>>
>>> ...
>>>
>>> free(vsock);
>>> }
>>>
>>> Could there be a better approach?
>>>
>>>
>>>> 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
>>>> needed? It looks to me we've already done except that we need flush rx_work
>>>> in the end since send_pkt_work can requeue rx_work.
>>> The main reason of tx_run/rx_run/event_run is to prevent that a worker
>>> function is running while we are calling config->reset().
>>>
>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>> config->reset(), it can queue new works that can access the device while
>>> we are in config->reset().
>>>
>>> IMHO they are still needed.
>>>
>>> What do you think?
>>
>> I mean could we simply do flush after reset once and without tx_run/rx_run
>> tricks?
>>
>> reset();
>>
>> virtio_vsock_flush_work();
>>
>> virtio_vsock_free_buf();
> My only doubt is:
> is it safe to call config->reset() while a worker function could access
> the device?
>
> I had this doubt reading Michael's advice[1] and looking at
> virtnet_remove() where there are these lines before the config->reset():
>
> /* Make sure no work handler is accessing the device. */
> flush_work(&vi->config_work);
>
> Thanks,
> Stefano
>
> [1] https://lore.kernel.org/netdev/[email protected]


Good point. Then I agree with you. But if we can use the RCU to detect
the detach of device from socket for these, it would be even better.

Thanks


2019-05-31 08:19:53

by Stefano Garzarella

[permalink] [raw]
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
>
> On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > vsock->event_run = false;
> > > > > > mutex_unlock(&vsock->event_lock);
> > > > > > + /* Flush all pending works */
> > > > > > + virtio_vsock_flush_works(vsock);
> > > > > > +
> > > > > > /* Flush all device writes and interrupts, device will not use any
> > > > > > * more buffers.
> > > > > > */
> > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > /* Delete virtqueues and flush outstanding callbacks if any */
> > > > > > vdev->config->del_vqs(vdev);
> > > > > > + /* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > + * all works before to free the vsock object to avoid use after free.
> > > > > > + */
> > > > > > + virtio_vsock_flush_works(vsock);
> > > > > Some questions after a quick glance:
> > > > >
> > > > > 1) It looks to me that the work could be queued from the path of
> > > > > vsock_transport_cancel_pkt() . Is that synchronized here?
> > > > >
> > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > queue work from the upper layer (socket).
> > > >
> > > > Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> > > > a rare issue could happen:
> > > > we are setting the_virtio_vsock to NULL at the start of .remove() and we
> > > > are freeing the object pointed by it at the end of .remove(), so
> > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > running, accessing the object that we have freed.
> > >
> > > Yes, that's my point.
> > >
> > >
> > > > Should I use something like RCU to prevent this issue?
> > > >
> > > > virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > > {
> > > > rcu_read_lock();
> > > > vsock = rcu_dereference(the_virtio_vsock);
> > >
> > > RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
> > >
> > Okay, I'm going this way.
> >
> > > > ...
> > > > rcu_read_unlock();
> > > > }
> > > >
> > > > virtio_vsock_remove()
> > > > {
> > > > rcu_assign_pointer(the_virtio_vsock, NULL);
> > > > synchronize_rcu();
> > > >
> > > > ...
> > > >
> > > > free(vsock);
> > > > }
> > > >
> > > > Could there be a better approach?
> > > >
> > > >
> > > > > 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
> > > > > needed? It looks to me we've already done except that we need flush rx_work
> > > > > in the end since send_pkt_work can requeue rx_work.
> > > > The main reason of tx_run/rx_run/event_run is to prevent that a worker
> > > > function is running while we are calling config->reset().
> > > >
> > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > config->reset(), it can queue new works that can access the device while
> > > > we are in config->reset().
> > > >
> > > > IMHO they are still needed.
> > > >
> > > > What do you think?
> > >
> > > I mean could we simply do flush after reset once and without tx_run/rx_run
> > > tricks?
> > >
> > > reset();
> > >
> > > virtio_vsock_flush_work();
> > >
> > > virtio_vsock_free_buf();
> > My only doubt is:
> > is it safe to call config->reset() while a worker function could access
> > the device?
> >
> > I had this doubt reading Michael's advice[1] and looking at
> > virtnet_remove() where there are these lines before the config->reset():
> >
> > /* Make sure no work handler is accessing the device. */
> > flush_work(&vi->config_work);
> >
> > Thanks,
> > Stefano
> >
> > [1] https://lore.kernel.org/netdev/[email protected]
>
>
> Good point. Then I agree with you. But if we can use the RCU to detect the
> detach of device from socket for these, it would be even better.
>

What about checking 'the_virtio_vsock' in the worker functions in an RCU
critical section?
In this way, I can remove the rx_run/tx_run/event_run.

Do you think it's cleaner?

Thank you very much,
Stefano

2019-05-31 09:58:40

by Jason Wang

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()


On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
>> On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
>>> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>>>> On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
>>>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>>>> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
>>>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>> vsock->event_run = false;
>>>>>>> mutex_unlock(&vsock->event_lock);
>>>>>>> + /* Flush all pending works */
>>>>>>> + virtio_vsock_flush_works(vsock);
>>>>>>> +
>>>>>>> /* Flush all device writes and interrupts, device will not use any
>>>>>>> * more buffers.
>>>>>>> */
>>>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>> /* Delete virtqueues and flush outstanding callbacks if any */
>>>>>>> vdev->config->del_vqs(vdev);
>>>>>>> + /* Other works can be queued before 'config->del_vqs()', so we flush
>>>>>>> + * all works before to free the vsock object to avoid use after free.
>>>>>>> + */
>>>>>>> + virtio_vsock_flush_works(vsock);
>>>>>> Some questions after a quick glance:
>>>>>>
>>>>>> 1) It looks to me that the work could be queued from the path of
>>>>>> vsock_transport_cancel_pkt() . Is that synchronized here?
>>>>>>
>>>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>>>> queue work from the upper layer (socket).
>>>>>
>>>>> Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
>>>>> a rare issue could happen:
>>>>> we are setting the_virtio_vsock to NULL at the start of .remove() and we
>>>>> are freeing the object pointed by it at the end of .remove(), so
>>>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>>>> running, accessing the object that we have freed.
>>>> Yes, that's my point.
>>>>
>>>>
>>>>> Should I use something like RCU to prevent this issue?
>>>>>
>>>>> virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>>>> {
>>>>> rcu_read_lock();
>>>>> vsock = rcu_dereference(the_virtio_vsock);
>>>> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>>>>
>>> Okay, I'm going this way.
>>>
>>>>> ...
>>>>> rcu_read_unlock();
>>>>> }
>>>>>
>>>>> virtio_vsock_remove()
>>>>> {
>>>>> rcu_assign_pointer(the_virtio_vsock, NULL);
>>>>> synchronize_rcu();
>>>>>
>>>>> ...
>>>>>
>>>>> free(vsock);
>>>>> }
>>>>>
>>>>> Could there be a better approach?
>>>>>
>>>>>
>>>>>> 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
>>>>>> needed? It looks to me we've already done except that we need flush rx_work
>>>>>> in the end since send_pkt_work can requeue rx_work.
>>>>> The main reason of tx_run/rx_run/event_run is to prevent that a worker
>>>>> function is running while we are calling config->reset().
>>>>>
>>>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>>>> config->reset(), it can queue new works that can access the device while
>>>>> we are in config->reset().
>>>>>
>>>>> IMHO they are still needed.
>>>>>
>>>>> What do you think?
>>>> I mean could we simply do flush after reset once and without tx_run/rx_run
>>>> tricks?
>>>>
>>>> reset();
>>>>
>>>> virtio_vsock_flush_work();
>>>>
>>>> virtio_vsock_free_buf();
>>> My only doubt is:
>>> is it safe to call config->reset() while a worker function could access
>>> the device?
>>>
>>> I had this doubt reading Michael's advice[1] and looking at
>>> virtnet_remove() where there are these lines before the config->reset():
>>>
>>> /* Make sure no work handler is accessing the device. */
>>> flush_work(&vi->config_work);
>>>
>>> Thanks,
>>> Stefano
>>>
>>> [1] https://lore.kernel.org/netdev/[email protected]
>>
>> Good point. Then I agree with you. But if we can use the RCU to detect the
>> detach of device from socket for these, it would be even better.
>>
> What about checking 'the_virtio_vsock' in the worker functions in an RCU
> critical section?
> In this way, I can remove the rx_run/tx_run/event_run.
>
> Do you think it's cleaner?


Yes, I think so.

Thanks


>
> Thank you very much,
> Stefano

2019-06-06 08:13:04

by Stefano Garzarella

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> > > On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > > > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > > > On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > > > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > > > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > > vsock->event_run = false;
> > > > > > > > mutex_unlock(&vsock->event_lock);
> > > > > > > > + /* Flush all pending works */
> > > > > > > > + virtio_vsock_flush_works(vsock);
> > > > > > > > +
> > > > > > > > /* Flush all device writes and interrupts, device will not use any
> > > > > > > > * more buffers.
> > > > > > > > */
> > > > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > > /* Delete virtqueues and flush outstanding callbacks if any */
> > > > > > > > vdev->config->del_vqs(vdev);
> > > > > > > > + /* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > > > + * all works before to free the vsock object to avoid use after free.
> > > > > > > > + */
> > > > > > > > + virtio_vsock_flush_works(vsock);
> > > > > > > Some questions after a quick glance:
> > > > > > >
> > > > > > > 1) It looks to me that the work could be queued from the path of
> > > > > > > vsock_transport_cancel_pkt() . Is that synchronized here?
> > > > > > >
> > > > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > > > queue work from the upper layer (socket).
> > > > > >
> > > > > > Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> > > > > > a rare issue could happen:
> > > > > > we are setting the_virtio_vsock to NULL at the start of .remove() and we
> > > > > > are freeing the object pointed by it at the end of .remove(), so
> > > > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > > > running, accessing the object that we have freed.
> > > > > Yes, that's my point.
> > > > >
> > > > >
> > > > > > Should I use something like RCU to prevent this issue?
> > > > > >
> > > > > > virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > > > > {
> > > > > > rcu_read_lock();
> > > > > > vsock = rcu_dereference(the_virtio_vsock);
> > > > > RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
> > > > >
> > > > Okay, I'm going this way.
> > > >
> > > > > > ...
> > > > > > rcu_read_unlock();
> > > > > > }
> > > > > >
> > > > > > virtio_vsock_remove()
> > > > > > {
> > > > > > rcu_assign_pointer(the_virtio_vsock, NULL);
> > > > > > synchronize_rcu();
> > > > > >
> > > > > > ...
> > > > > >
> > > > > > free(vsock);
> > > > > > }
> > > > > >
> > > > > > Could there be a better approach?
> > > > > >
> > > > > >
> > > > > > > 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
> > > > > > > needed? It looks to me we've already done except that we need flush rx_work
> > > > > > > in the end since send_pkt_work can requeue rx_work.
> > > > > > The main reason of tx_run/rx_run/event_run is to prevent that a worker
> > > > > > function is running while we are calling config->reset().
> > > > > >
> > > > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > > > config->reset(), it can queue new works that can access the device while
> > > > > > we are in config->reset().
> > > > > >
> > > > > > IMHO they are still needed.
> > > > > >
> > > > > > What do you think?
> > > > > I mean could we simply do flush after reset once and without tx_run/rx_run
> > > > > tricks?
> > > > >
> > > > > reset();
> > > > >
> > > > > virtio_vsock_flush_work();
> > > > >
> > > > > virtio_vsock_free_buf();
> > > > My only doubt is:
> > > > is it safe to call config->reset() while a worker function could access
> > > > the device?
> > > >
> > > > I had this doubt reading Michael's advice[1] and looking at
> > > > virtnet_remove() where there are these lines before the config->reset():
> > > >
> > > > /* Make sure no work handler is accessing the device. */
> > > > flush_work(&vi->config_work);
> > > >
> > > > Thanks,
> > > > Stefano
> > > >
> > > > [1] https://lore.kernel.org/netdev/[email protected]
> > >
> > > Good point. Then I agree with you. But if we can use the RCU to detect the
> > > detach of device from socket for these, it would be even better.
> > >
> > What about checking 'the_virtio_vsock' in the worker functions in an RCU
> > critical section?
> > In this way, I can remove the rx_run/tx_run/event_run.
> >
> > Do you think it's cleaner?
>
>
> Yes, I think so.
>

Hi Jason,
while I was trying to use RCU also for the workers, I discovered that it
cannot be used in sections that may sleep (the workers take mutexes,
allocate memory, etc.).
There is SRCU, but I think the rx_run/tx_run/event_run approach is cleaner.

So, if you agree I'd send a v2 using RCU only for the
virtio_transport_send_pkt() or vsock_transport_cancel_pkt(), and leave
this patch as is to be sure that no one is accessing the device while we
call config->reset().

Thanks,
Stefano
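[Editor's note: the *_run flags Stefano refers to work roughly like this
inside each worker — a sketch based on patch 2, with the check done under the
same lock that .remove() takes to clear the flag:]

```c
static void virtio_transport_rx_work(struct work_struct *work)
{
	struct virtio_vsock *vsock =
		container_of(work, struct virtio_vsock, rx_work);

	mutex_lock(&vsock->rx_lock);
	if (!vsock->rx_run)
		goto out;	/* .remove() is tearing the device down */

	/* ... drain the rx virtqueue; this body may sleep (mutexes,
	 * allocations), which is why plain RCU cannot protect it ...
	 */
out:
	mutex_unlock(&vsock->rx_lock);
}
```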

2019-06-10 13:32:40

by Stefan Hajnoczi

Subject: Re: [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove()

On Tue, May 28, 2019 at 12:56:19PM +0200, Stefano Garzarella wrote:
> During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> before registering the driver", Stefan pointed out some possible issues
> in the .probe() and .remove() callbacks of the virtio-vsock driver.
>
> This series tries to solve these issues:
> - Patch 1 postpones the 'the_virtio_vsock' assignment at the end of the
> .probe() to avoid that some sockets queue works when the initialization
> is not finished.
> - Patches 2 and 3 stop workers before to call vdev->config->reset(vdev) to
> be sure that no one is accessing the device, and adds another flush at the
> end of the .remove() to avoid use after free.
> - Patch 4 free also used buffers in the virtqueues during the .remove().
>
> Stefano Garzarella (4):
> vsock/virtio: fix locking around 'the_virtio_vsock'
> vsock/virtio: stop workers during the .remove()
> vsock/virtio: fix flush of works during the .remove()
> vsock/virtio: free used buffers during the .remove()
>
> net/vmw_vsock/virtio_transport.c | 105 ++++++++++++++++++++++++++-----
> 1 file changed, 90 insertions(+), 15 deletions(-)

Looking forward to v2. I took a look at the discussion and I'll review
v2 from scratch. Just keep in mind that the mutex is used more for
mutual exclusion of the init/exit code than to protect the_virtio_vsock,
so we'll still need protection of init/exit code even with RCU.

Stefan
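[Editor's note: Stefan's point can be sketched like this — illustrative only;
RCU protects readers of the_virtio_vsock, while the mutex keeps .probe() and
.remove() from racing with each other:]

```c
static int virtio_vsock_probe(struct virtio_device *vdev)
{
	struct virtio_vsock *vsock;
	int ret = 0;

	mutex_lock(&the_virtio_vsock_mutex);	/* exclude concurrent .remove() */
	if (rcu_dereference_protected(the_virtio_vsock,
			lockdep_is_held(&the_virtio_vsock_mutex))) {
		ret = -EBUSY;			/* only one device supported */
		goto out;
	}

	/* ... allocate vsock, find vqs, start the workers ... */

	rcu_assign_pointer(the_virtio_vsock, vsock);	/* publish last */
out:
	mutex_unlock(&the_virtio_vsock_mutex);
	return ret;
}
```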



2019-06-13 15:55:28

by Jason Wang

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()


On 2019/6/6 4:11 PM, Stefano Garzarella wrote:
> On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
>> On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
>>> On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
>>>> On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
>>>>> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>>>>>> On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
>>>>>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>>>>>> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
>>>>>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>>>> vsock->event_run = false;
>>>>>>>>> mutex_unlock(&vsock->event_lock);
>>>>>>>>> + /* Flush all pending works */
>>>>>>>>> + virtio_vsock_flush_works(vsock);
>>>>>>>>> +
>>>>>>>>> /* Flush all device writes and interrupts, device will not use any
>>>>>>>>> * more buffers.
>>>>>>>>> */
>>>>>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>>>>>> /* Delete virtqueues and flush outstanding callbacks if any */
>>>>>>>>> vdev->config->del_vqs(vdev);
>>>>>>>>> + /* Other works can be queued before 'config->del_vqs()', so we flush
>>>>>>>>> + * all works before to free the vsock object to avoid use after free.
>>>>>>>>> + */
>>>>>>>>> + virtio_vsock_flush_works(vsock);
>>>>>>>> Some questions after a quick glance:
>>>>>>>>
>>>>>>>> 1) It looks to me that the work could be queued from the path of
>>>>>>>> vsock_transport_cancel_pkt() . Is that synchronized here?
>>>>>>>>
>>>>>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>>>>>> queue work from the upper layer (socket).
>>>>>>>
>>>>>>> Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
>>>>>>> a rare issue could happen:
>>>>>>> we are setting the_virtio_vsock to NULL at the start of .remove() and we
>>>>>>> are freeing the object pointed by it at the end of .remove(), so
>>>>>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>>>>>> running, accessing the object that we have freed.
>>>>>> Yes, that's my point.
>>>>>>
>>>>>>
>>>>>>> Should I use something like RCU to prevent this issue?
>>>>>>>
>>>>>>> virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>>>>>> {
>>>>>>> rcu_read_lock();
>>>>>>> vsock = rcu_dereference(the_virtio_vsock);
>>>>>> RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
>>>>>>
>>>>> Okay, I'm going this way.
>>>>>
>>>>>>> ...
>>>>>>> rcu_read_unlock();
>>>>>>> }
>>>>>>>
>>>>>>> virtio_vsock_remove()
>>>>>>> {
>>>>>>> rcu_assign_pointer(the_virtio_vsock, NULL);
>>>>>>> synchronize_rcu();
>>>>>>>
>>>>>>> ...
>>>>>>>
>>>>>>> free(vsock);
>>>>>>> }
>>>>>>>
>>>>>>> Could there be a better approach?
>>>>>>>
>>>>>>>
>>>>>>>> 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
>>>>>>>> needed? It looks to me we've already done except that we need flush rx_work
>>>>>>>> in the end since send_pkt_work can requeue rx_work.
>>>>>>> The main reason of tx_run/rx_run/event_run is to prevent that a worker
>>>>>>> function is running while we are calling config->reset().
>>>>>>>
>>>>>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>>>>>> config->reset(), it can queue new works that can access the device while
>>>>>>> we are in config->reset().
>>>>>>>
>>>>>>> IMHO they are still needed.
>>>>>>>
>>>>>>> What do you think?
>>>>>> I mean could we simply do flush after reset once and without tx_run/rx_run
>>>>>> tricks?
>>>>>>
>>>>>> reset();
>>>>>>
>>>>>> virtio_vsock_flush_work();
>>>>>>
>>>>>> virtio_vsock_free_buf();
>>>>> My only doubt is:
>>>>> is it safe to call config->reset() while a worker function could access
>>>>> the device?
>>>>>
>>>>> I had this doubt reading Michael's advice[1] and looking at
>>>>> virtnet_remove() where there are these lines before the config->reset():
>>>>>
>>>>> /* Make sure no work handler is accessing the device. */
>>>>> flush_work(&vi->config_work);
>>>>>
>>>>> Thanks,
>>>>> Stefano
>>>>>
>>>>> [1] https://lore.kernel.org/netdev/[email protected]
>>>> Good point. Then I agree with you. But if we can use the RCU to detect the
>>>> detach of device from socket for these, it would be even better.
>>>>
>>> What about checking 'the_virtio_vsock' in the worker functions in an RCU
>>> critical section?
>>> In this way, I can remove the rx_run/tx_run/event_run.
>>>
>>> Do you think it's cleaner?
>>
>> Yes, I think so.
>>
> Hi Jason,
> while I was trying to use RCU also for workers, I discovered that it can
> not be used if we can sleep. (Workers have mutex, memory allocation, etc.).
> There is SRCU, but I think the rx_run/tx_run/event_run is cleaner.
>
> So, if you agree I'd send a v2 using RCU only for the
> virtio_transport_send_pkt() or vsock_transport_cancel_pkt(), and leave
> this patch as is to be sure that no one is accessing the device while we
> call config->reset().
>
> Thanks,
> Stefano


If it works, I don't object to using that approach, considering it was
suggested by Michael. You can go this way and let's see.

Personally, I would like something cleaner, e.g. RCU + some kind of
reference count (kref?).

Thanks
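[Editor's note: Jason's "RCU + reference count" idea would look something
like the following — a purely hypothetical sketch, not part of the posted
series:]

```c
struct virtio_vsock {
	struct kref refcnt;
	/* ... vqs, works, locks ... */
};

static void virtio_vsock_release(struct kref *kref)
{
	kfree(container_of(kref, struct virtio_vsock, refcnt));
}

/* Worker/socket paths pin the object instead of relying on run flags */
rcu_read_lock();
vsock = rcu_dereference(the_virtio_vsock);
if (vsock && !kref_get_unless_zero(&vsock->refcnt))
	vsock = NULL;		/* .remove() already dropped the last ref */
rcu_read_unlock();

if (vsock) {
	/* ... may sleep here; the reference keeps vsock alive ... */
	kref_put(&vsock->refcnt, virtio_vsock_release);
}
```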

2019-06-27 10:06:44

by Stefano Garzarella

Subject: Re: [PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove()

On Mon, Jun 10, 2019 at 02:09:45PM +0100, Stefan Hajnoczi wrote:
> On Tue, May 28, 2019 at 12:56:19PM +0200, Stefano Garzarella wrote:
> > During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> > before registering the driver", Stefan pointed out some possible issues
> > in the .probe() and .remove() callbacks of the virtio-vsock driver.
> >
> > This series tries to solve these issues:
> > - Patch 1 postpones the 'the_virtio_vsock' assignment at the end of the
> > .probe() to avoid that some sockets queue works when the initialization
> > is not finished.
> > - Patches 2 and 3 stop workers before to call vdev->config->reset(vdev) to
> > be sure that no one is accessing the device, and adds another flush at the
> > end of the .remove() to avoid use after free.
> > - Patch 4 free also used buffers in the virtqueues during the .remove().
> >
> > Stefano Garzarella (4):
> > vsock/virtio: fix locking around 'the_virtio_vsock'
> > vsock/virtio: stop workers during the .remove()
> > vsock/virtio: fix flush of works during the .remove()
> > vsock/virtio: free used buffers during the .remove()
> >
> > net/vmw_vsock/virtio_transport.c | 105 ++++++++++++++++++++++++++-----
> > 1 file changed, 90 insertions(+), 15 deletions(-)
>
> Looking forward to v2. I took a look at the discussion and I'll review
> v2 from scratch. Just keep in mind that the mutex is used more for
> mutual exclusion of the init/exit code than to protect the_virtio_vsock,
> so we'll still need protection of init/exit code even with RCU.

Thanks for the advice! I'll send the v2 ASAP.

Thanks,
Stefano

2019-06-27 10:27:17

by Stefano Garzarella

Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

On Thu, Jun 13, 2019 at 04:57:15PM +0800, Jason Wang wrote:
>
> On 2019/6/6 4:11 PM, Stefano Garzarella wrote:
> > On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> > > On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> > > > On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> > > > > On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > > > > > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > > > > > On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > > > > > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > > > > > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > > > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > > > > vsock->event_run = false;
> > > > > > > > > > mutex_unlock(&vsock->event_lock);
> > > > > > > > > > + /* Flush all pending works */
> > > > > > > > > > + virtio_vsock_flush_works(vsock);
> > > > > > > > > > +
> > > > > > > > > > /* Flush all device writes and interrupts, device will not use any
> > > > > > > > > > * more buffers.
> > > > > > > > > > */
> > > > > > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > > > > > > /* Delete virtqueues and flush outstanding callbacks if any */
> > > > > > > > > > vdev->config->del_vqs(vdev);
> > > > > > > > > > + /* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > > > > > + * all works before to free the vsock object to avoid use after free.
> > > > > > > > > > + */
> > > > > > > > > > + virtio_vsock_flush_works(vsock);
> > > > > > > > > Some questions after a quick glance:
> > > > > > > > >
> > > > > > > > > 1) It looks to me that the work could be queued from the path of
> > > > > > > > > vsock_transport_cancel_pkt() . Is that synchronized here?
> > > > > > > > >
> > > > > > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > > > > > queue work from the upper layer (socket).
> > > > > > > >
> > > > > > > > Setting the_virtio_vsock to NULL, should synchronize, but after a careful look
> > > > > > > > a rare issue could happen:
> > > > > > > > we are setting the_virtio_vsock to NULL at the start of .remove() and we
> > > > > > > > are freeing the object pointed by it at the end of .remove(), so
> > > > > > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > > > > > running, accessing the object that we have freed.
> > > > > > > Yes, that's my point.
> > > > > > >
> > > > > > >
> > > > > > > > Should I use something like RCU to prevent this issue?
> > > > > > > >
> > > > > > > > virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > > > > > > {
> > > > > > > > rcu_read_lock();
> > > > > > > > vsock = rcu_dereference(the_virtio_vsock);
> > > > > > > RCU is probably a way to go. (Like what vhost_transport_send_pkt() did).
> > > > > > >
> > > > > > Okay, I'm going this way.
> > > > > >
> > > > > > > > ...
> > > > > > > > rcu_read_unlock();
> > > > > > > > }
> > > > > > > >
> > > > > > > > virtio_vsock_remove()
> > > > > > > > {
> > > > > > > > rcu_assign_pointer(the_virtio_vsock, NULL);
> > > > > > > > synchronize_rcu();
> > > > > > > >
> > > > > > > > ...
> > > > > > > >
> > > > > > > > free(vsock);
> > > > > > > > }
> > > > > > > >
> > > > > > > > Could there be a better approach?
> > > > > > > >
> > > > > > > >
> > > > > > > > > 2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run still
> > > > > > > > > needed? It looks to me we've already done that, except that we need to flush
> > > > > > > > > rx_work at the end, since send_pkt_work can requeue rx_work.
> > > > > > > > The main reason for tx_run/rx_run/event_run is to prevent a worker
> > > > > > > > function from running while we are calling config->reset().
> > > > > > > >
> > > > > > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > > > > > config->reset(), it can queue new works that can access the device while
> > > > > > > > we are in config->reset().
> > > > > > > >
> > > > > > > > IMHO they are still needed.
> > > > > > > >
> > > > > > > > What do you think?
> > > > > > > I mean, could we simply do the flush once after reset, without the
> > > > > > > tx_run/rx_run tricks?
> > > > > > >
> > > > > > > reset();
> > > > > > >
> > > > > > > virtio_vsock_flush_work();
> > > > > > >
> > > > > > > virtio_vsock_free_buf();
> > > > > > My only doubt is:
> > > > > > is it safe to call config->reset() while a worker function could access
> > > > > > the device?
> > > > > >
> > > > > > I had this doubt reading Michael's advice[1] and looking at
> > > > > > virtnet_remove() where there are these lines before the config->reset():
> > > > > >
> > > > > > /* Make sure no work handler is accessing the device. */
> > > > > > flush_work(&vi->config_work);
> > > > > >
> > > > > > Thanks,
> > > > > > Stefano
> > > > > >
> > > > > > [1] https://lore.kernel.org/netdev/[email protected]
> > > > > Good point. Then I agree with you. But if we could use RCU to detect the
> > > > > detach of the device from the socket for these as well, it would be even better.
> > > > >
> > > > What about checking 'the_virtio_vsock' in the worker functions in an RCU
> > > > critical section?
> > > > In this way, I can remove the rx_run/tx_run/event_run.
> > > >
> > > > Do you think it's cleaner?
> > >
> > > Yes, I think so.
> > >
> > Hi Jason,
> > while I was trying to use RCU also for the workers, I discovered that it
> > cannot be used if we may sleep (the workers take mutexes, allocate memory, etc.).
> > There is SRCU, but I think the rx_run/tx_run/event_run approach is cleaner.
> >
> > So, if you agree, I'd send a v2 using RCU only for
> > virtio_transport_send_pkt() and vsock_transport_cancel_pkt(), and leave
> > this patch as is to be sure that no one is accessing the device while we
> > call config->reset().
> >
> > Thanks,
> > Stefano
>
>
> If it works, I don't object to using that, considering it was suggested by Michael.
> You can go this way and let's see.

Okay, I'll try if it works.

>
> Personally I would like something more cleaner. E.g RCU + some kind of
> reference count (kref?).

I'll try to check if kref can help to have a cleaner solution in this
case.

Thanks for your comments,
Stefano