Subject: Re: [PATCH net-next V2 0/8] Packed virtqueue support for vhost
From: Jason Wang
To: "Michael S. Tsirkin"
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, wexu@redhat.com,
    jfreimann@redhat.com, tiwei.bie@intel.com, maxime.coquelin@redhat.com
Date: Tue, 17 Jul 2018 08:45:16 +0800
Message-ID: <5ba5c927-a0b4-f399-7a88-b90763765142@redhat.com>
In-Reply-To: <20180716154102-mutt-send-email-mst@kernel.org>
References: <1531711691-6769-1-git-send-email-jasowang@redhat.com>
            <20180716113720-mutt-send-email-mst@kernel.org>
            <33f4643f-f226-0389-1f4f-607c289db94e@redhat.com>
            <20180716154102-mutt-send-email-mst@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2018-07-16 20:49, Michael S. Tsirkin wrote:
> On Mon, Jul 16, 2018 at 05:46:33PM +0800, Jason Wang wrote:
>>
>> On 2018-07-16 16:39, Michael S. Tsirkin wrote:
>>> On Mon, Jul 16, 2018 at 11:28:03AM +0800, Jason Wang wrote:
>>>> Hi all:
>>>>
>>>> This series implements packed virtqueues. The code was tested with
>>>> Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/
>>>>
>>>> Pktgen tests for both RX and TX show no obvious difference from split
>>>> virtqueues. The main bottleneck is the guest Linux driver, since it
>>>> cannot stress vhost to 100% CPU utilization.
>>>> A full TCP benchmark is ongoing. Will test the virtio-net pmd as well
>>>> when it is ready.
>>> Well the question then is why we should bother merging this
>>> if this doesn't give a performance gain.
>> We hit bottlenecks at other places. I can only test the Linux driver,
>> which has lots of overheads, e.g. interrupts, and perf shows that only a
>> small fraction of time is spent on e.g. virtqueue manipulation. I hope
>> the virtio-net pmd can give us a different result, but we don't have one
>> ready for testing now (Jen's V4 has bugs and thus cannot work with this
>> series).
> Can't linux busy poll?

For vhost busy polling, there's no difference, since the guest cannot give
vhost enough stress. For guest busy polling, it does not work for the
packets generated by pktgen.

> And how about testing loopback with XDP?

No difference. I even shortcut both tun_get_user() on the host and
netif_receive_skb() in the guest.

>>> Do you see a gain in CPU utilization maybe?
>> Unfortunately not.
>>
>>> If not - let's wait for that TCP benchmark result?
>> We can, but you know a TCP_STREAM result is sometimes misleading.
>>
>> A bunch of other patches of mine were rebased on this and are now
>> blocked on this series. Considering we see no regression, maybe we can
>> merge this first and try optimizations or fixups on top?
>>
>> Thanks
> I'm not sure I understand this approach. Packed ring is just an optimization.
> What value is there in merging it if it does not help speed?

It helps if you want to support migration from a dpdk or vDPA backend, and
we still have the chance to see the performance with the virtio-net pmd in
the future. If this does not make sense to you, I will leave this series
until we can get results from the virtio-net pmd (or find a way in which
packed virtqueues outperform), and I will start to post other optimizations
for vhost.
Thanks

>>>> Notes:
>>>> - This version depends on Tiwei's series at https://patchwork.ozlabs.org/cover/942297/
>>>>
>>>> This version was tested with:
>>>>
>>>> - Zerocopy (Out of Order) support
>>>> - vIOMMU support
>>>> - mergeable buffer on/off
>>>> - busy polling on/off
>>>> - vsock (nc-vsock)
>>>>
>>>> Changes from V1:
>>>> - drop the uapi patch and use Tiwei's
>>>> - split the enablement of packed virtqueues into a separate patch
>>>>
>>>> Changes from RFC V5:
>>>> - avoid unnecessary barriers during vhost_add_used_packed_n()
>>>> - more compact math for event idx
>>>> - fix failure of SET_VRING_BASE when avail_wrap_counter is true
>>>> - fix failure to copy avail_wrap_counter during GET_VRING_BASE
>>>> - introduce SET_VRING_USED_BASE/GET_VRING_USED_BASE for syncing last_used_idx
>>>> - rename used_wrap_counter to last_used_wrap_counter
>>>> - rebase to net-next
>>>>
>>>> Changes from RFC V4:
>>>> - fix signalled_used index recording
>>>> - track avail index correctly
>>>> - various minor fixes
>>>>
>>>> Changes from RFC V3:
>>>> - fix math on event idx checking
>>>> - sync last avail wrap counter through GET/SET_VRING_BASE
>>>> - remove the desc_event prefix in the driver/device structure
>>>>
>>>> Changes from RFC V2:
>>>> - do not use & when checking desc_event_flags
>>>> - off should be the most significant bit
>>>> - remove the workaround of mergeable buffers for the dpdk prototype
>>>> - id should be in the last descriptor in the chain
>>>> - keep _F_WRITE for write descriptors when adding used
>>>> - device flag updates should use the ADDR_USED type
>>>> - return an error on an unexpected unavailable descriptor in a chain
>>>> - return false in vhost_ve_avail_empty if a descriptor is available
>>>> - track the last seen avail_wrap_counter
>>>> - correctly examine available descriptors in get_indirect_packed()
>>>> - vhost_idx_diff should return u16 instead of bool
>>>>
>>>> Changes from RFC V1:
>>>> - refactor vhost used elem code to avoid open coding on used elem
>>>> - Event suppression support (compile test only).
>>>> - Indirect descriptor support (compile test only).
>>>> - Zerocopy support.
>>>> - vIOMMU support.
>>>> - SCSI/VSOCK support (compile test only).
>>>> - Fix several bugs
>>>>
>>>> Jason Wang (8):
>>>>   vhost: move get_rx_bufs to vhost.c
>>>>   vhost: hide used ring layout from device
>>>>   vhost: do not use vring_used_elem
>>>>   vhost_net: do not explicitly manipulate vhost_used_elem
>>>>   vhost: vhost_put_user() can accept metadata type
>>>>   vhost: packed ring support
>>>>   vhost: event suppression for packed ring
>>>>   vhost: enable packed virtqueues
>>>>
>>>>  drivers/vhost/net.c        | 143 ++-----
>>>>  drivers/vhost/scsi.c       |  62 +--
>>>>  drivers/vhost/vhost.c      | 994 ++++++++++++++++++++++++++++++++++++++++-----
>>>>  drivers/vhost/vhost.h      |  55 ++-
>>>>  drivers/vhost/vsock.c      |  42 +-
>>>>  include/uapi/linux/vhost.h |   7 +
>>>>  6 files changed, 1035 insertions(+), 268 deletions(-)
>>>>
>>>> --
>>>> 2.7.4