Date: Fri, 19 Jul 2019 10:08:32 +0200
From: Stefano Garzarella
To: "Michael S. Tsirkin"
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Stefan Hajnoczi, "David S. Miller",
    virtualization@lists.linux-foundation.org, Jason Wang,
    kvm@vger.kernel.org
Subject: Re: [PATCH v4 4/5] vhost/vsock: split packets to send using multiple buffers
Message-ID: <20190719080832.7hoeus23zjyrx3cc@steredhat>
References: <20190717113030.163499-1-sgarzare@redhat.com>
 <20190717113030.163499-5-sgarzare@redhat.com>
 <20190717105336-mutt-send-email-mst@kernel.org>
 <20190718041234-mutt-send-email-mst@kernel.org>
 <20190718072741-mutt-send-email-mst@kernel.org>
In-Reply-To: <20190718072741-mutt-send-email-mst@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: NeoMutt/20180716
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jul 18, 2019 at 07:35:46AM -0400, Michael S. Tsirkin wrote:
> On Thu, Jul 18, 2019 at 11:37:30AM +0200, Stefano Garzarella wrote:
> > On Thu, Jul 18, 2019 at 10:13 AM Michael S. Tsirkin wrote:
> > > On Thu, Jul 18, 2019 at 09:50:14AM +0200, Stefano Garzarella wrote:
> > > > On Wed, Jul 17, 2019 at 4:55 PM Michael S. Tsirkin wrote:
> > > > > On Wed, Jul 17, 2019 at 01:30:29PM +0200, Stefano Garzarella wrote:
> > > > > > If the packets to send to the guest are bigger than the buffer
> > > > > > available, we can split them, using multiple buffers and fixing
> > > > > > the length in the packet header.
> > > > > > This is safe since virtio-vsock supports only stream sockets.
> > > > > >
> > > > > > Signed-off-by: Stefano Garzarella
> > > > >
> > > > > So how does it work right now? If an app
> > > > > does sendmsg with a 64K buffer and the other
> > > > > side publishes 4K buffers - does it just stall?
> > > >
> > > > Before this series, the 64K (or bigger) user messages were split into
> > > > 4K packets (a size fixed in the code) and queued in an internal list
> > > > for the TX worker.
> > > >
> > > > After this series, we will queue packets of up to 64K and split them
> > > > in the TX worker, depending on the size of the buffers available in
> > > > the vring. (The idea was to allow EWMA or a configuration of the
> > > > buffer size, but for now we postponed it.)
> > >
> > > Got it. Using workers for xmit is IMHO a bad idea btw.
> > > Why is it done like this?
> >
> > Honestly, I don't know the exact reasons for this design, but I suppose
> > that the idea was to have only one worker that uses the vring, and
> > multiple user threads that enqueue packets in the list.
> > This can simplify the code, and we can put the user threads to sleep if
> > we don't have "credit" available (meaning that the receiver doesn't
> > have space to receive the packet).
>
> I think you mean the reverse: even without credits you can copy from
> user and queue up data, then process it without waking up the user
> thread.

I checked the code more carefully, but it doesn't seem to do that.

The .sendmsg callback of af_vsock checks whether the transport has space
(the virtio-vsock transport returns the credit available).
If there is no space, it puts the thread to sleep on the 'sk_sleep(sk)'
wait queue.

When the transport receives an update of the credit available from the
other peer, it calls 'sk->sk_write_space(sk)', which wakes up the
sleeping thread so it can queue the new packet.

So, in the current implementation, the TX worker doesn't check the
credit available; it only sends the packets.
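To make the splitting step discussed above a bit more concrete, here is a
minimal, self-contained sketch (plain userspace C, all names made up; it is
not the actual vhost/vsock code) of the idea: cap each fragment to the space
offered by the current guest buffer and fix the length in every header.

/* Illustrative sketch only -- not the actual vhost/vsock code.
 * All names are hypothetical. It just shows the splitting logic:
 * cap each fragment to the space available in the current guest
 * buffer and fix up the length field in the fragment header.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct fake_hdr {
	size_t len;		/* per-fragment length written in the header */
};

/* Pretend every guest buffer can take 'buf_space' bytes of payload. */
static size_t send_split(const char *payload, size_t total, size_t buf_space)
{
	size_t sent = 0;

	(void)payload;		/* payload bytes are not copied in this sketch */

	while (sent < total) {
		struct fake_hdr hdr;
		size_t chunk = total - sent;

		if (chunk > buf_space)	/* cap to the available buffer */
			chunk = buf_space;

		hdr.len = chunk;	/* fix the length in the header */

		/* The real code would copy hdr plus payload[sent..sent+chunk)
		 * into the guest buffer and push it to the virtqueue here. */
		printf("fragment of %zu bytes at offset %zu\n", hdr.len, sent);

		sent += chunk;
	}
	return sent;
}

int main(void)
{
	static char data[64 * 1024];

	memset(data, 'x', sizeof(data));
	/* 64 KiB payload with 4 KiB guest buffers -> 16 fragments */
	send_split(data, sizeof(data), 4096);
	return 0;
}

The patch does the equivalent inside the TX worker, against the descriptors
it actually gets from the vring, but the loop has the same shape.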
> Does it help though? It certainly adds work outside of the
> user thread context, which means it's not accounted for
> correctly.

I can try to xmit the packet directly in the user thread context, to see
whether it improves things.

>
> Maybe we want more VQs. Would help improve parallelism. The question
> would then become how to map sockets to VQs. With a simple hash
> it's easy to create collisions ...

Yes, more VQs can help, but the mapping question is not simple to answer.
Maybe we can do a hash on the (cid, port) pair, or do some kind of
estimation of queue utilization and try to balance. (A rough sketch of
such a hash is appended below, after my signature.)
Should the mapping be unique?

> > What are the drawbacks in your opinion?
> >
> > Thanks,
> > Stefano
>
> - More pressure on scheduler
> - Increased latency

Thanks,
Stefano
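P.S.: here is the rough (cid, port) -> VQ hash mentioned above. It is only
an illustration of the idea (and of the collision concern), written as
standalone C with hypothetical names; it is not a proposal for the actual
code.

/* Illustrative only: map a (cid, port) pair to one of n_vqs TX virtqueues.
 * A simple multiplicative hash like this is cheap, but two busy sockets
 * can still land on the same queue, which is the collision problem
 * mentioned above. All names are made up.
 */
#include <stdint.h>
#include <stdio.h>

static unsigned int vsock_pick_vq(uint64_t cid, uint32_t port,
				  unsigned int n_vqs)
{
	/* combine cid and port into one 64-bit key */
	uint64_t key = cid ^ ((uint64_t)port << 32);

	/* 64-bit golden-ratio multiplicative hash, folded into [0, n_vqs) */
	key *= 0x61C8864680B583EBULL;
	return (unsigned int)(key >> 32) % n_vqs;
}

int main(void)
{
	/* e.g. guest cid 3, port 1234, 4 TX queues */
	printf("vq index = %u\n", vsock_pick_vq(3, 1234, 4));
	return 0;
}

Whether the mapping has to stay stable for the lifetime of a socket (so a
connection never migrates between queues) is exactly the open question
above.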