From: Stefano Garzarella
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, Stefan Hajnoczi, "David S. Miller",
    "Michael S. Tsirkin"
Tsirkin" Subject: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove() Date: Tue, 28 May 2019 12:56:22 +0200 Message-Id: <20190528105623.27983-4-sgarzare@redhat.com> In-Reply-To: <20190528105623.27983-1-sgarzare@redhat.com> References: <20190528105623.27983-1-sgarzare@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.47]); Tue, 28 May 2019 10:56:45 +0000 (UTC) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org We flush all pending works before to call vdev->config->reset(vdev), but other works can be queued before the vdev->config->del_vqs(vdev), so we add another flush after it, to avoid use after free. Suggested-by: Michael S. Tsirkin Signed-off-by: Stefano Garzarella --- net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------ 1 file changed, 17 insertions(+), 6 deletions(-) diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c index e694df10ab61..ad093ce96693 100644 --- a/net/vmw_vsock/virtio_transport.c +++ b/net/vmw_vsock/virtio_transport.c @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev) return ret; } +static void virtio_vsock_flush_works(struct virtio_vsock *vsock) +{ + flush_work(&vsock->loopback_work); + flush_work(&vsock->rx_work); + flush_work(&vsock->tx_work); + flush_work(&vsock->event_work); + flush_work(&vsock->send_pkt_work); +} + static void virtio_vsock_remove(struct virtio_device *vdev) { struct virtio_vsock *vsock = vdev->priv; @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev) mutex_lock(&the_virtio_vsock_mutex); the_virtio_vsock = NULL; - flush_work(&vsock->loopback_work); - flush_work(&vsock->rx_work); - flush_work(&vsock->tx_work); - flush_work(&vsock->event_work); - flush_work(&vsock->send_pkt_work); - /* Reset all connected sockets when the device disappear */ vsock_for_each_connected_socket(virtio_vsock_reset_sock); @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev) vsock->event_run = false; mutex_unlock(&vsock->event_lock); + /* Flush all pending works */ + virtio_vsock_flush_works(vsock); + /* Flush all device writes and interrupts, device will not use any * more buffers. */ @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev) /* Delete virtqueues and flush outstanding callbacks if any */ vdev->config->del_vqs(vdev); + /* Other works can be queued before 'config->del_vqs()', so we flush + * all works before to free the vsock object to avoid use after free. + */ + virtio_vsock_flush_works(vsock); + kfree(vsock); mutex_unlock(&the_virtio_vsock_mutex); } -- 2.20.1