From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
    syzbot+e3e074963495f92a89ed@syzkaller.appspotmail.com,
    syzbot+d5a0a170c5069658b141@syzkaller.appspotmail.com,
    Stefan Hajnoczi, "Michael S. Tsirkin", Jason Wang,
    syzbot+bd391451452fb0b93039@syzkaller.appspotmail.com
Subject: [PATCH 4.19 079/118] vhost/vsock: fix use-after-free in network stack callers
Date: Tue, 11 Dec 2018 16:41:38 +0100
Message-Id: <20181211151647.450861711@linuxfoundation.org>
X-Mailer: git-send-email 2.20.0
In-Reply-To: <20181211151644.216668863@linuxfoundation.org>
References: <20181211151644.216668863@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
X-Patchwork-Hint: ignore
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Stefan Hajnoczi

commit 834e772c8db0c6a275d75315d90aba4ebbb1e249 upstream.

If the network stack calls .send_pkt()/.cancel_pkt() during .release(),
a struct vhost_vsock use-after-free is possible.  This occurs because
.release() does not wait for other CPUs to stop using struct
vhost_vsock.

Switch to an RCU-enabled hashtable (indexed by guest CID) so that
.release() can wait for other CPUs by calling synchronize_rcu().  This
also eliminates vhost_vsock_lock acquisition in the data path, so it
could have a positive effect on performance.

This is CVE-2018-14625 "kernel: use-after-free Read in
vhost_transport_send_pkt".

Cc: stable@vger.kernel.org
Reported-and-tested-by: syzbot+bd391451452fb0b93039@syzkaller.appspotmail.com
Reported-by: syzbot+e3e074963495f92a89ed@syzkaller.appspotmail.com
Reported-by: syzbot+d5a0a170c5069658b141@syzkaller.appspotmail.com
Signed-off-by: Stefan Hajnoczi
Signed-off-by: Michael S. Tsirkin
Acked-by: Jason Wang
Signed-off-by: Greg Kroah-Hartman

---
 drivers/vhost/vsock.c |   57 ++++++++++++++++++++++++++++----------------------
 1 file changed, 33 insertions(+), 24 deletions(-)

--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -15,6 +15,7 @@
 #include <net/sock.h>
 #include <linux/virtio_vsock.h>
 #include <linux/vhost.h>
+#include <linux/hashtable.h>
 
 #include <net/af_vsock.h>
 #include "vhost.h"
@@ -27,14 +28,14 @@ enum {
 
 /* Used to track all the vhost_vsock instances on the system. */
 static DEFINE_SPINLOCK(vhost_vsock_lock);
-static LIST_HEAD(vhost_vsock_list);
+static DEFINE_READ_MOSTLY_HASHTABLE(vhost_vsock_hash, 8);
 
 struct vhost_vsock {
	struct vhost_dev dev;
	struct vhost_virtqueue vqs[2];
 
-	/* Link to global vhost_vsock_list, protected by vhost_vsock_lock */
-	struct list_head list;
+	/* Link to global vhost_vsock_hash, writes use vhost_vsock_lock */
+	struct hlist_node hash;
 
	struct vhost_work send_pkt_work;
	spinlock_t send_pkt_list_lock;
@@ -50,11 +51,14 @@ static u32 vhost_transport_get_local_cid
	return VHOST_VSOCK_DEFAULT_HOST_CID;
 }
 
-static struct vhost_vsock *__vhost_vsock_get(u32 guest_cid)
+/* Callers that dereference the return value must hold vhost_vsock_lock or the
+ * RCU read lock.
+ */
+static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
 {
	struct vhost_vsock *vsock;
 
-	list_for_each_entry(vsock, &vhost_vsock_list, list) {
+	hash_for_each_possible_rcu(vhost_vsock_hash, vsock, hash, guest_cid) {
		u32 other_cid = vsock->guest_cid;
 
		/* Skip instances that have no CID yet */
@@ -69,17 +73,6 @@ static struct vhost_vsock *__vhost_vsock
	return NULL;
 }
 
-static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
-{
-	struct vhost_vsock *vsock;
-
-	spin_lock_bh(&vhost_vsock_lock);
-	vsock = __vhost_vsock_get(guest_cid);
-	spin_unlock_bh(&vhost_vsock_lock);
-
-	return vsock;
-}
-
 static void
 vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
			    struct vhost_virtqueue *vq)
@@ -210,9 +203,12 @@ vhost_transport_send_pkt(struct virtio_v
	struct vhost_vsock *vsock;
	int len = pkt->len;
 
+	rcu_read_lock();
+
	/* Find the vhost_vsock according to guest context id */
	vsock = vhost_vsock_get(le64_to_cpu(pkt->hdr.dst_cid));
	if (!vsock) {
+		rcu_read_unlock();
		virtio_transport_free_pkt(pkt);
		return -ENODEV;
	}
@@ -225,6 +221,8 @@ vhost_transport_send_pkt(struct virtio_v
	spin_unlock_bh(&vsock->send_pkt_list_lock);
 
	vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
+
+	rcu_read_unlock();
	return len;
 }
 
@@ -234,12 +232,15 @@ vhost_transport_cancel_pkt(struct vsock_
	struct vhost_vsock *vsock;
	struct virtio_vsock_pkt *pkt, *n;
	int cnt = 0;
+	int ret = -ENODEV;
	LIST_HEAD(freeme);
 
+	rcu_read_lock();
+
	/* Find the vhost_vsock according to guest context id */
	vsock = vhost_vsock_get(vsk->remote_addr.svm_cid);
	if (!vsock)
-		return -ENODEV;
+		goto out;
 
	spin_lock_bh(&vsock->send_pkt_list_lock);
	list_for_each_entry_safe(pkt, n, &vsock->send_pkt_list, list) {
@@ -265,7 +266,10 @@ vhost_transport_cancel_pkt(struct vsock_
		vhost_poll_queue(&tx_vq->poll);
	}
 
-	return 0;
+	ret = 0;
+out:
+	rcu_read_unlock();
+	return ret;
 }
 
 static struct virtio_vsock_pkt *
@@ -533,10 +537,6 @@ static int vhost_vsock_dev_open(struct i
	spin_lock_init(&vsock->send_pkt_list_lock);
	INIT_LIST_HEAD(&vsock->send_pkt_list);
	vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work);
-
-	spin_lock_bh(&vhost_vsock_lock);
-	list_add_tail(&vsock->list, &vhost_vsock_list);
-	spin_unlock_bh(&vhost_vsock_lock);
	return 0;
 
 out:
@@ -577,9 +577,13 @@ static int vhost_vsock_dev_release(struc
	struct vhost_vsock *vsock = file->private_data;
 
	spin_lock_bh(&vhost_vsock_lock);
-	list_del(&vsock->list);
+	if (vsock->guest_cid)
+		hash_del_rcu(&vsock->hash);
	spin_unlock_bh(&vhost_vsock_lock);
 
+	/* Wait for other CPUs to finish using vsock */
+	synchronize_rcu();
+
	/* Iterating over all connections for all CIDs to find orphans is
	 * inefficient.  Room for improvement here. */
	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
@@ -620,12 +624,17 @@ static int vhost_vsock_set_cid(struct vh
 
	/* Refuse if CID is already in use */
	spin_lock_bh(&vhost_vsock_lock);
-	other = __vhost_vsock_get(guest_cid);
+	other = vhost_vsock_get(guest_cid);
	if (other && other != vsock) {
		spin_unlock_bh(&vhost_vsock_lock);
		return -EADDRINUSE;
	}
+
+	if (vsock->guest_cid)
+		hash_del_rcu(&vsock->hash);
+
	vsock->guest_cid = guest_cid;
+	hash_add_rcu(vhost_vsock_hash, &vsock->hash, guest_cid);
	spin_unlock_bh(&vhost_vsock_lock);
 
	return 0;