Date: Fri, 19 Jul 2019 10:39:20 +0200
From: Stefano Garzarella
To: Jason Wang
Cc: "Michael S. Tsirkin", netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Stefan Hajnoczi, "David S. Miller", virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org
Subject: Re: [PATCH v4 4/5] vhost/vsock: split packets to send using multiple buffers

On Fri, Jul 19, 2019 at 04:21:52PM +0800, Jason Wang wrote:
>
> On 2019/7/19 4:08 PM, Stefano Garzarella wrote:
> > On Thu, Jul 18, 2019 at 07:35:46AM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Jul 18, 2019 at 11:37:30AM +0200, Stefano Garzarella wrote:
> > > > On Thu, Jul 18, 2019 at 10:13 AM Michael S. Tsirkin wrote:
> > > > > On Thu, Jul 18, 2019 at 09:50:14AM +0200, Stefano Garzarella wrote:
> > > > > > On Wed, Jul 17, 2019 at 4:55 PM Michael S. Tsirkin wrote:
> > > > > > > On Wed, Jul 17, 2019 at 01:30:29PM +0200, Stefano Garzarella wrote:
> > > > > > > > If the packets to be sent to the guest are bigger than the buffers
> > > > > > > > available, we can split them, using multiple buffers and fixing
> > > > > > > > the length in the packet header.
> > > > > > > > This is safe since virtio-vsock supports only stream sockets.
> > > > > > > >
> > > > > > > > Signed-off-by: Stefano Garzarella
> > > > > > > So how does it work right now? If an app
> > > > > > > does sendmsg with a 64K buffer and the other
> > > > > > > side publishes 4K buffers - does it just stall?
> > > > > > Before this series, 64K (or bigger) user messages were split into 4K packets
> > > > > > (fixed in the code) and queued in an internal list for the TX worker.
> > > > > >
> > > > > > After this series, we will queue packets of up to 64K and they will be split
> > > > > > in the TX worker, depending on the size of the buffers available in the
> > > > > > vring. (The idea was to allow EWMA or a configurable buffer size, but
> > > > > > for now we postponed it.)
> > > > > Got it. Using workers for xmit is IMHO a bad idea btw.
> > > > > Why is it done like this?
> > > > Honestly, I don't know the exact reasons for this design, but I suppose
> > > > that the idea was to have only one worker that uses the vring, and
> > > > multiple user threads that enqueue packets in the list.
> > > > This can simplify the code, and we can put the user threads to sleep if
> > > > we don't have "credit" available (this means that the receiver doesn't
> > > > have space to receive the packet).
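(For illustration only, a minimal user-space sketch of the splitting idea
described above: clamp each transmitted chunk to the capacity the other side
offers and fix up the length in the per-packet header. struct pkt_hdr,
send_split() and the 4K capacity are made-up placeholders, not the actual
vhost/vsock code.)

#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct pkt_hdr {
	unsigned int len;	/* payload bytes carried by this packet */
};

/* Split 'total' payload bytes into packets of at most 'cap' bytes each. */
static void send_split(const char *payload, size_t total, size_t cap)
{
	size_t off = 0;

	(void)payload;		/* the actual data copy is elided in this sketch */

	while (off < total) {
		struct pkt_hdr hdr;
		size_t chunk = total - off;

		if (chunk > cap)
			chunk = cap;		/* clamp to the available buffer */
		hdr.len = (unsigned int)chunk;	/* fix the length in the header */

		/* A real implementation would copy hdr plus
		 * payload[off .. off + chunk) into the guest buffer here
		 * and mark the descriptor as used. */
		printf("packet: len=%u offset=%zu\n", hdr.len, off);
		off += chunk;
	}
}

int main(void)
{
	char data[64 * 1024];

	memset(data, 'x', sizeof(data));
	send_split(data, sizeof(data), 4096);	/* 64K payload, 4K buffers */
	return 0;
}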
> > > I think you mean the reverse: even without credits you can copy from
> > > user and queue up data, then process it without waking up the user
> > > thread.
> > I checked the code more carefully, but it doesn't seem to do that.
> > The .sendmsg callback of af_vsock checks if the transport has space
> > (the virtio-vsock transport returns the credit available). If there is no
> > space, it puts the thread to sleep on the 'sk_sleep(sk)' wait_queue.
> >
> > When the transport receives an update of the credit available on the other
> > peer, it calls 'sk->sk_write_space(sk)', which wakes up the sleeping
> > thread, which will then queue the new packet.
> >
> > So, in the current implementation, the TX worker doesn't check the
> > credit available, it only sends the packets.
> >
> > > Does it help though? It certainly adds work outside of
> > > user thread context, which means it's not accounted for
> > > correctly.
> > I can try to xmit the packet directly in the user thread context, to see
> > the improvements.
>
> It will then look more like what virtio-net (and other networking devices)
> did.

I'll try ASAP, the changes should not be too complicated... I hope :)

>
> > > Maybe we want more VQs. Would help improve parallelism. The question
> > > would then become how to map sockets to VQs. With a simple hash
> > > it's easy to create collisions ...
> > Yes, more VQs can help, but the mapping question is not simple to answer.
> > Maybe we can do a hash on the (cid, port), or do some kind of estimation
> > of queue utilization and try to balance.
> > Should the mapping be unique?
>
> It sounds to me you want some kind of fair queuing? We've already had
> several qdiscs that do this.

Thanks for pointing it out!

> So if we use the kernel networking xmit path, all those issues could be
> addressed.

One more point in favour of AF_VSOCK + net-stack, but we have to evaluate
possible drawbacks of using the net-stack
(e.g. more latency due to the complexity of the net-stack?).

Thanks,
Stefano
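(Also purely illustrative: one way the "hash on the (cid, port)" idea above
could pick a TX virtqueue. pick_tx_vq() and its mixing constant are arbitrary
examples, not proposed code, and a real mapping would still have to deal with
the collisions Michael mentions.)

#include <stdint.h>
#include <stdio.h>

/* Map a (cid, port) flow to one of 'nvqs' TX virtqueues. */
static unsigned int pick_tx_vq(uint64_t cid, uint32_t port, unsigned int nvqs)
{
	uint64_t h = (cid << 32) ^ port;

	h *= 0x9E3779B97F4A7C15ULL;	/* Fibonacci-style multiplicative mix */
	h ^= h >> 33;
	return (unsigned int)(h % nvqs);
}

int main(void)
{
	/* Two flows from the same guest CID may land on different queues. */
	printf("flow (3, 1234) -> VQ %u\n", pick_tx_vq(3, 1234, 4));
	printf("flow (3, 1235) -> VQ %u\n", pick_tx_vq(3, 1235, 4));
	return 0;
}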