Date: Thu, 1 Aug 2019 09:21:15 -0400
From: "Michael S. Tsirkin"
To: Stefano Garzarella
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Stefan Hajnoczi, "David S. Miller",
	virtualization@lists.linux-foundation.org, Jason Wang,
	kvm@vger.kernel.org
Subject: Re: [PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
Message-ID: <20190801091106-mutt-send-email-mst@kernel.org>
References: <20190717113030.163499-2-sgarzare@redhat.com>
	<20190729095956-mutt-send-email-mst@kernel.org>
	<20190729153656.zk4q4rob5oi6iq7l@steredhat>
	<20190729114302-mutt-send-email-mst@kernel.org>
	<20190729161903.yhaj5rfcvleexkhc@steredhat>
	<20190729165056.r32uzj6om3o6vfvp@steredhat>
	<20190729143622-mutt-send-email-mst@kernel.org>
	<20190730093539.dcksure3vrykir3g@steredhat>
	<20190730163807-mutt-send-email-mst@kernel.org>
	<20190801104754.lb3ju5xjfmnxioii@steredhat>
In-Reply-To: <20190801104754.lb3ju5xjfmnxioii@steredhat>

On Thu, Aug 01, 2019 at 12:47:54PM +0200, Stefano Garzarella wrote:
> On Tue, Jul 30, 2019 at 04:42:25PM -0400, Michael S. Tsirkin wrote:
> > On Tue, Jul 30, 2019 at 11:35:39AM +0200, Stefano Garzarella wrote:
> >
> > (...)
> >
> > > The problem here is the compatibility. Before this series the
> > > virtio-vsock and vhost-vsock modules had the RX buffer size
> > > hard-coded (VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE = 4K). So, if we send
> > > a buffer smaller than 4K, there might be issues.
> >
> > Shouldn't be if they are following the spec. If not let's fix
> > the broken parts.
> >
> > > Maybe it is time to add 'features' to the virtio-vsock device.
> > >
> > > Thanks,
> > > Stefano
> >
> > Why would a remote care about buffer sizes?
> >
> > Let's first see what the issues are. If they exist
> > we can either fix the bugs, or code the bug as a feature in the spec.
>
> The vhost_transport '.stream_enqueue' callback
> [virtio_transport_stream_enqueue()] calls virtio_transport_send_pkt_info(),
> passing the user message. This function allocates a new packet, copying
> the user message, but (before this series) it limits the packet size to
> VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (4K):
>
> static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
> 					   struct virtio_vsock_pkt_info *info)
> {
> 	...
> 	/* we can send less than pkt_len bytes */
> 	if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
> 		pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
>
> 	/* virtio_transport_get_credit might return less than pkt_len credit */
> 	pkt_len = virtio_transport_get_credit(vvs, pkt_len);
>
> 	/* Do not send zero length OP_RW pkt */
> 	if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
> 		return pkt_len;
> 	...
> }
>
> then it queues the packet for the TX worker by calling .send_pkt()
> [vhost_transport_send_pkt() in the vhost_transport case].
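
For reference, the queueing step looks roughly like this (a simplified
sketch of drivers/vhost/vsock.c from around that time; reply accounting
and some locking details are omitted, so it may not match the exact
source):

static int vhost_transport_send_pkt(struct virtio_vsock_pkt *pkt)
{
	struct vhost_vsock *vsock;
	int len = pkt->len;

	/* Look up the destination guest by CID */
	vsock = vhost_vsock_get(le64_to_cpu(pkt->hdr.dst_cid));
	if (!vsock) {
		virtio_transport_free_pkt(pkt);
		return -ENODEV;
	}

	/* The packet is only queued here; the copy into a guest RX
	 * buffer happens later, in the TX worker.
	 */
	spin_lock_bh(&vsock->send_pkt_list_lock);
	list_add_tail(&pkt->list, &vsock->send_pkt_list);
	spin_unlock_bh(&vsock->send_pkt_list_lock);

	/* Kick the worker that runs vhost_transport_do_send_pkt() */
	vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);

	return len;
}

So the 4K cap is applied when the packet is built, while the guest's
buffer size only comes into play when the worker copies the packet out.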
> The main function executed by the TX worker is
> vhost_transport_do_send_pkt(), which picks up a buffer from the
> virtqueue and tries to copy the packet (up to 4K) into it. If the
> buffer allocated by the guest is smaller than 4K, I think the packet
> will be discarded here with an error:
>
> static void
> vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
> 			    struct vhost_virtqueue *vq)
> {
> 	...
> 	nbytes = copy_to_iter(pkt->buf, pkt->len, &iov_iter);

isn't pkt len the actual length though?

> 	if (nbytes != pkt->len) {
> 		virtio_transport_free_pkt(pkt);
> 		vq_err(vq, "Faulted on copying pkt buf\n");
> 		break;
> 	}
> 	...
> }
>
> This series changes this behavior: now we split the packet in
> vhost_transport_do_send_pkt() depending on the buffer found in the
> virtqueue.
>
> We didn't change the buffer size in this series, so we are still
> backward compatible, but if we used buffers smaller than 4K, we would
> hit the error described above.
>
> How do you suggest we proceed if we want to change the buffer size?
> Maybe by adding a feature to "support any buffer size"?
>
> Thanks,
> Stefano
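
The splitting described above could look roughly like the sketch below
inside vhost_transport_do_send_pkt(): bound each copy by the space the
guest actually provided and requeue whatever is left. This is only an
illustration of the idea, not the code from the series; in particular
the 'off' bookkeeping and the iov length calculation are assumptions
here, and the packet header would also need to carry the length actually
sent in each chunk.

	/* Space available in the guest buffer for the payload */
	iov_len = iov_iter_count(&iov_iter);
	payload_len = min_t(size_t, pkt->len - pkt->off, iov_len);

	nbytes = copy_to_iter(pkt->buf + pkt->off, payload_len, &iov_iter);
	if (nbytes != payload_len) {
		virtio_transport_free_pkt(pkt);
		vq_err(vq, "Faulted on copying pkt buf\n");
		break;
	}

	pkt->off += payload_len;
	if (pkt->off < pkt->len) {
		/* The guest buffer was smaller than the packet:
		 * requeue the remainder instead of dropping it.
		 */
		spin_lock_bh(&vsock->send_pkt_list_lock);
		list_add(&pkt->list, &vsock->send_pkt_list);
		spin_unlock_bh(&vsock->send_pkt_list_lock);
	} else {
		virtio_transport_free_pkt(pkt);
	}

With something like this, a guest buffer smaller than the packet no
longer triggers the "Faulted on copying pkt buf" error; it only means
more chunks per packet. The remaining question is then how (and whether)
to negotiate buffer sizes, e.g. via a new feature bit in the spec.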