From: Vitaly Mayatskih
Date: Sun, 4 Nov 2018 22:40:34 -0500
Subject: Re: [PATCH 0/1] vhost: parallel virtqueue handling
To: Jason Wang
Cc: "Michael S. Tsirkin", kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org

On Sun, Nov 4, 2018 at 9:52 PM Jason Wang wrote:

> Thanks a lot for the patches. Here are some thoughts:
>
> - This is not the first attempt that tries to parallelize vhost
>   workers, so we need a comparison among them:
>
>   1) Multiple vhost workers from Anthony,
>      https://www.spinics.net/lists/netdev/msg189432.html
>
>   2) ELVIS from IBM, http://www.mulix.org/pubs/eli/elvis-h319.pdf
>
>   3) CMWQ from Bandan,
>      http://www.linux-kvm.org/images/5/52/02x08-Aspen-Bandan_Das-vhost-sharing_is_better.pdf
>
> - vhost-net uses a different multiqueue model. Each vhost device on
>   the host deals only with a specific queue pair instead of a whole
>   device. This allows great flexibility, and multiqueue can be
>   implemented without touching the vhost code.

I'm in no way a network expert, but I think this is because it follows
the combined-queue model of the NIC. Having a TX/RX queue pair looks
like a natural choice for that case.

> - The current vhost-net implementation depends heavily on the
>   single-thread assumption, especially in its busy-polling code. It
>   would be broken by this attempt. If we decide to go this way, this
>   needs to be fixed. And we do need performance results for
>   networking.

Thanks for noting that; I'm missing a lot of the historical
background. I will read up on it.

> - Having more threads is not necessarily a win, so at the very least
>   we need a module parameter or some other knob to control the number
>   of threads, I believe.

I agree. I didn't think fully through the other cases, but for disk it
is already under control: QEMU's num-queues disk parameter. There is a
saturation point past which adding more threads does not yield much
more performance; in my environment it is at about 12 queues.

So, how does this sound: the default behaviour stays at 1 worker per
vhost device, and if the user needs per-vq workers, he requests them
with a new VHOST_SET_ ioctl? (A purely hypothetical sketch of what
that could look like is at the end of this mail.)

--
wbr, Vitaly
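P.S. To make the proposal above a bit more concrete, here is a minimal
sketch of one way such an ioctl could look. Everything in it (the
VHOST_SET_NUM_WORKERS name, the request number 0x7f, the struct, and
the helper) is invented for illustration and does not exist in the
vhost UAPI today:

    #include <sys/ioctl.h>   /* ioctl() */
    #include <linux/ioctl.h> /* _IOW() */
    #include <linux/types.h> /* __u32 */

    /* Hypothetical request payload: how many workers to run. */
    struct vhost_worker_state {
            /* 1 = today's behaviour (one worker per vhost device);
             * N > 1 = up to one worker per virtqueue, capped at N. */
            __u32 num_workers;
    };

    /* 0xAF is the existing VHOST_VIRTIO ioctl magic; 0x7f is an
     * arbitrary unused slot picked only for this sketch. */
    #define VHOST_SET_NUM_WORKERS \
            _IOW(0xAF, 0x7f, struct vhost_worker_state)

    /* Userspace side (QEMU-like): ask for one worker per virtqueue.
     * On an older kernel the ioctl fails and the caller silently
     * keeps the single-worker default. */
    static int vhost_request_workers(int vhost_fd, unsigned int nvqs)
    {
            struct vhost_worker_state s = { .num_workers = nvqs };

            return ioctl(vhost_fd, VHOST_SET_NUM_WORKERS, &s);
    }

QEMU could then wire its existing num-queues property (e.g. -device
virtio-blk-pci,num-queues=12) to this knob, so the worker count stays
under the user's control.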