Date: Thu, 1 Aug 2019 12:47:54 +0200
From: Stefano Garzarella
To: "Michael S. Tsirkin"
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Stefan Hajnoczi, "David S. Miller",
    virtualization@lists.linux-foundation.org, Jason Wang,
    kvm@vger.kernel.org
Subject: Re: [PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
Message-ID: <20190801104754.lb3ju5xjfmnxioii@steredhat>
In-Reply-To: <20190730163807-mutt-send-email-mst@kernel.org>

On Tue, Jul 30, 2019 at 04:42:25PM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 30, 2019 at 11:35:39AM +0200, Stefano Garzarella wrote:

(...)

> > The problem here is the compatibility. Before this series the
> > virtio-vsock and vhost-vsock modules had the RX buffer size
> > hard-coded (VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE = 4K). So, if we send
> > a buffer smaller than 4K, there might be issues.
>
> Shouldn't be if they are following the spec. If not, let's fix
> the broken parts.
>
> > Maybe it is time to add 'features' to the virtio-vsock device.
> >
> > Thanks,
> > Stefano
>
> Why would a remote care about buffer sizes?
>
> Let's first see what the issues are. If they exist
> we can either fix the bugs, or code the bug as a feature in spec.
>

The vhost_transport '.stream_enqueue' callback
[virtio_transport_stream_enqueue()] calls
virtio_transport_send_pkt_info(), passing the user message. This
function allocates a new packet, copying the user message, but (before
this series) it limits the packet size to
VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (4K):

    static int
    virtio_transport_send_pkt_info(struct vsock_sock *vsk,
                                   struct virtio_vsock_pkt_info *info)
    {
        ...
        /* we can send less than pkt_len bytes */
        if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
            pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;

        /* virtio_transport_get_credit might return less than pkt_len credit */
        pkt_len = virtio_transport_get_credit(vvs, pkt_len);

        /* Do not send zero length OP_RW pkt */
        if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
            return pkt_len;
        ...
    }

Then it queues the packet for the TX worker, calling .send_pkt()
[vhost_transport_send_pkt() in the vhost_transport case].

The main function executed by the TX worker is
vhost_transport_do_send_pkt(), which picks up a buffer from the
virtqueue and tries to copy the packet (up to 4K) into it. If the
buffer allocated by the guest is smaller than 4K, I think it will be
discarded here with an error:

    static void
    vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
                                struct vhost_virtqueue *vq)
    {
        ...
        nbytes = copy_to_iter(pkt->buf, pkt->len, &iov_iter);
        if (nbytes != pkt->len) {
            virtio_transport_free_pkt(pkt);
            vq_err(vq, "Faulted on copying pkt buf\n");
            break;
        }
        ...
    }

This series changes that behavior: we now split the packet in
vhost_transport_do_send_pkt() depending on the buffer found in the
virtqueue.

We didn't change the buffer size in this series, so we remain backward
compatible, but if we start using buffers smaller than 4K, we will hit
the error described above.

How do you suggest we proceed if we want to change the buffer size?
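To make the difference concrete, here is a stand-alone user-space
sketch of the splitting idea (this is not the kernel code; the 2K
buffer size and the buffers_needed() helper are invented for
illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical guest RX buffer smaller than the 4K default. */
#define GUEST_BUF_SIZE 2048

/* How many guest buffers a pkt_len-byte packet consumes when each
 * copy is capped at buf_size, i.e. the packet is split across
 * buffers instead of being dropped with "Faulted on copying pkt buf". */
static size_t buffers_needed(size_t pkt_len, size_t buf_size)
{
    size_t n = 0;

    while (pkt_len > 0) {
        /* one copy of up to buf_size bytes per virtqueue buffer */
        size_t chunk = pkt_len < buf_size ? pkt_len : buf_size;

        pkt_len -= chunk;
        n++;
    }
    return n;
}
```

With a 2K guest buffer, a 4K packet would now consume two buffers
instead of failing the copy_to_iter() length check.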
Maybe adding a feature to "support any buffer size"?

Thanks,
Stefano