From: Jason Wang
To: mst@redhat.com, jasowang@redhat.com
Cc: pbonzini@redhat.com, stefanha@redhat.com, bcodding@redhat.com,
    virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Al Viro, Nicholas Bellinger
Subject: [PATCH V2] vhost-scsi: unbreak any layout for response
Date: Thu, 19 Jan 2023 15:36:47 +0800
Message-Id: <20230119073647.76467-1-jasowang@redhat.com>

Al Viro said:

"""
Since "vhost/scsi: fix reuse of &vq->iov[out] in response" we
have this:

	cmd->tvc_resp_iov = vq->iov[vc.out];
	cmd->tvc_in_iovs = vc.in;

combined with

	iov_iter_init(&iov_iter, ITER_DEST, &cmd->tvc_resp_iov,
		      cmd->tvc_in_iovs, sizeof(v_rsp));

in vhost_scsi_complete_cmd_work().

We used to have ->tvc_resp_iov _pointing_ to vq->iov[vc.out]; back then
iov_iter_init() asked to set an iovec-backed iov_iter over the tail of
vq->iov[], with length being the amount of iovecs in the tail.

Now we have a copy of one element of that array. Fortunately, the members
following it in the containing structure are two non-NULL kernel pointers,
so copy_to_iter() will not copy anything beyond the first iovec - a kernel
pointer is not (on the majority of architectures) going to be accepted by
access_ok() in copyout() and it won't be skipped since the "length" (in
reality - another non-NULL kernel pointer) won't be zero.

So it's not going to give a guest-to-qemu escalation, but it's definitely
a bug. Frankly, my preference would be to verify that the very first iovec
is long enough to hold rsp_size. Due to the above, any users that try to
give us vq->iov[vc.out].iov_len < sizeof(struct virtio_scsi_cmd_resp)
would currently get a failure in vhost_scsi_complete_cmd_work() anyway.
"""

However, the spec doesn't say anything about the legacy descriptor layout
for the response. So this patch stops assuming that the response resides
in a single separate descriptor, which is what commit 79c14141a487
("vhost/scsi: Convert completion path to use") tried to achieve on the
way towards ANY_LAYOUT support.

This is done by allocating and using a dedicated response iov array in
the command. To be safe, start with UIO_MAXIOV entries, which is
consistent with the limit we advertise to vhost_get_vq_desc().

Tested with a hacked virtio-scsi driver that uses one descriptor per byte
of the response.
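As a side note, the layout issue above can be illustrated outside the
kernel. Below is a minimal, self-contained user-space sketch (not part of
this patch; every name in it is invented for illustration): a toy scatter
copy shows how a single copied iovec truncates a response that the guest
split across two buffers, while a dedicated array of iovec copies covers
the whole response.

	/*
	 * Minimal user-space sketch: a copy of a single iovec cannot
	 * describe a response spread over several descriptors.
	 * All names below are invented for the illustration.
	 */
	#include <stdio.h>
	#include <string.h>
	#include <sys/uio.h>

	/* Toy stand-in for copy_to_iter(): scatter src across an iovec array. */
	static size_t scatter_copy(const void *src, size_t len,
				   const struct iovec *iov, int niov)
	{
		size_t done = 0;
		int i;

		for (i = 0; i < niov && done < len; i++) {
			size_t chunk = len - done;

			if (chunk > iov[i].iov_len)
				chunk = iov[i].iov_len;
			memcpy(iov[i].iov_base, (const char *)src + done, chunk);
			done += chunk;
		}
		return done;
	}

	int main(void)
	{
		char rsp[12];			/* pretend response header */
		char buf0[4], buf1[8];		/* guest supplied two in-buffers */
		struct iovec vq_iov[2] = {
			{ .iov_base = buf0, .iov_len = sizeof(buf0) },
			{ .iov_base = buf1, .iov_len = sizeof(buf1) },
		};
		struct iovec one = vq_iov[0];	/* broken: copy of one element */
		struct iovec resp_iov[2];	/* fixed: dedicated array of copies */

		memset(rsp, 'R', sizeof(rsp));
		memcpy(resp_iov, vq_iov, sizeof(resp_iov));

		printf("single copied iovec:   %zu of %zu bytes reach the buffers\n",
		       scatter_copy(rsp, sizeof(rsp), &one, 1), sizeof(rsp));
		printf("dedicated iovec array: %zu of %zu bytes reach the buffers\n",
		       scatter_copy(rsp, sizeof(rsp), resp_iov, 2), sizeof(rsp));
		return 0;
	}

The loop over vq->iov[vc.out + i] added below plays the role of the
dedicated array copy in this sketch.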
Reported-by: Al Viro
Cc: Benjamin Coddington
Cc: Nicholas Bellinger
Fixes: a77ec83a5789 ("vhost/scsi: fix reuse of &vq->iov[out] in response")
Signed-off-by: Jason Wang
---
Changes since V1:
- tweak the changelog
- fix the allocation size for tvc_resp_iov (should be sizeof(struct iovec))
---
 drivers/vhost/scsi.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index dca6346d75b3..d5ecb8876fc9 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -80,7 +80,7 @@ struct vhost_scsi_cmd {
 	struct scatterlist *tvc_prot_sgl;
 	struct page **tvc_upages;
 	/* Pointer to response header iovec */
-	struct iovec tvc_resp_iov;
+	struct iovec *tvc_resp_iov;
 	/* Pointer to vhost_scsi for our device */
 	struct vhost_scsi *tvc_vhost;
 	/* Pointer to vhost_virtqueue for the cmd */
@@ -563,7 +563,7 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
 			memcpy(v_rsp.sense, cmd->tvc_sense_buf,
 			       se_cmd->scsi_sense_length);
 
-		iov_iter_init(&iov_iter, ITER_DEST, &cmd->tvc_resp_iov,
+		iov_iter_init(&iov_iter, ITER_DEST, cmd->tvc_resp_iov,
 			      cmd->tvc_in_iovs, sizeof(v_rsp));
 		ret = copy_to_iter(&v_rsp, sizeof(v_rsp), &iov_iter);
 		if (likely(ret == sizeof(v_rsp))) {
@@ -594,6 +594,7 @@ vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
 	struct vhost_scsi_cmd *cmd;
 	struct vhost_scsi_nexus *tv_nexus;
 	struct scatterlist *sg, *prot_sg;
+	struct iovec *tvc_resp_iov;
 	struct page **pages;
 	int tag;
 
@@ -613,6 +614,7 @@ vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
 	sg = cmd->tvc_sgl;
 	prot_sg = cmd->tvc_prot_sgl;
 	pages = cmd->tvc_upages;
+	tvc_resp_iov = cmd->tvc_resp_iov;
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->tvc_sgl = sg;
 	cmd->tvc_prot_sgl = prot_sg;
@@ -625,6 +627,7 @@ vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
 	cmd->tvc_data_direction = data_direction;
 	cmd->tvc_nexus = tv_nexus;
 	cmd->inflight = vhost_scsi_get_inflight(vq);
+	cmd->tvc_resp_iov = tvc_resp_iov;
 
 	memcpy(cmd->tvc_cdb, cdb, VHOST_SCSI_MAX_CDB_SIZE);
 
@@ -935,7 +938,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	struct iov_iter in_iter, prot_iter, data_iter;
 	u64 tag;
 	u32 exp_data_len, data_direction;
-	int ret, prot_bytes, c = 0;
+	int ret, prot_bytes, i, c = 0;
 	u16 lun;
 	u8 task_attr;
 	bool t10_pi = vhost_has_feature(vq, VIRTIO_SCSI_F_T10_PI);
@@ -1092,7 +1095,8 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 		}
 		cmd->tvc_vhost = vs;
 		cmd->tvc_vq = vq;
-		cmd->tvc_resp_iov = vq->iov[vc.out];
+		for (i = 0; i < vc.in ; i++)
+			cmd->tvc_resp_iov[i] = vq->iov[vc.out + i];
 		cmd->tvc_in_iovs = vc.in;
 
 		pr_debug("vhost_scsi got command opcode: %#02x, lun: %d\n",
@@ -1461,6 +1465,7 @@ static void vhost_scsi_destroy_vq_cmds(struct vhost_virtqueue *vq)
 		kfree(tv_cmd->tvc_sgl);
 		kfree(tv_cmd->tvc_prot_sgl);
 		kfree(tv_cmd->tvc_upages);
+		kfree(tv_cmd->tvc_resp_iov);
 	}
 
 	sbitmap_free(&svq->scsi_tags);
@@ -1508,6 +1513,14 @@ static int vhost_scsi_setup_vq_cmds(struct vhost_virtqueue *vq, int max_cmds)
 			goto out;
 		}
 
+		tv_cmd->tvc_resp_iov = kcalloc(UIO_MAXIOV,
+					       sizeof(struct iovec),
+					       GFP_KERNEL);
+		if (!tv_cmd->tvc_resp_iov) {
+			pr_err("Unable to allocate tv_cmd->tvc_resp_iov\n");
+			goto out;
+		}
+
 		tv_cmd->tvc_prot_sgl = kcalloc(VHOST_SCSI_PREALLOC_PROT_SGLS,
 					       sizeof(struct scatterlist),
 					       GFP_KERNEL);
-- 
2.25.1