From: Stefano Garzarella
To: netdev@vger.kernel.org
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
    Stefan Hajnoczi, "Michael S. Tsirkin", "David S. Miller",
    Jason Wang, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/3] vsock/virtio: fix flush of works during the .remove()
Date: Fri, 28 Jun 2019 14:36:59 +0200
Message-Id: <20190628123659.139576-4-sgarzare@redhat.com>
In-Reply-To: <20190628123659.139576-1-sgarzare@redhat.com>
References: <20190628123659.139576-1-sgarzare@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch moves the flush of works to after vdev->config->del_vqs(vdev),
because we need to be sure that no worker runs before we free the
'vsock' object.

Since we stopped the workers using the [tx|rx|event]_run flags, we are
sure no one is accessing the device while we are calling
vdev->config->reset(vdev), so we can safely move the workers' flush.

Before vdev->config->del_vqs(vdev), workers can still be scheduled by
VQ callbacks, so we must flush them after del_vqs() to avoid a
use-after-free of the 'vsock' object.

Suggested-by: Michael S. Tsirkin
Signed-off-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 1b44ec6f3f6c..96dafa978268 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -680,12 +680,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	rcu_assign_pointer(the_virtio_vsock, NULL);
 	synchronize_rcu();
 
-	flush_work(&vsock->loopback_work);
-	flush_work(&vsock->rx_work);
-	flush_work(&vsock->tx_work);
-	flush_work(&vsock->event_work);
-	flush_work(&vsock->send_pkt_work);
-
 	/* Reset all connected sockets when the device disappear */
 	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
 
@@ -740,6 +734,15 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	/* Delete virtqueues and flush outstanding callbacks if any */
 	vdev->config->del_vqs(vdev);
 
+	/* Other works can be queued before 'config->del_vqs()', so we flush
+	 * all works before freeing the vsock object to avoid a use-after-free.
+	 */
+	flush_work(&vsock->loopback_work);
+	flush_work(&vsock->rx_work);
+	flush_work(&vsock->tx_work);
+	flush_work(&vsock->event_work);
+	flush_work(&vsock->send_pkt_work);
+
 	mutex_unlock(&the_virtio_vsock_mutex);
 
 	kfree(vsock);
-- 
2.20.1