From: Bandan Das
To: "Michael Rapoport"
Cc: tj@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, mst@redhat.com, jiangshanlai@gmail.com
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues
Date: Mon, 21 Mar 2016 13:43:41 -0400
In-Reply-To: <201603210758.u2L7wiXA028101@d06av09.portsmouth.uk.ibm.com> (Michael Rapoport's message of "Mon, 21 Mar 2016 09:58:39 +0200")
References: <1458339291-4093-1-git-send-email-bsd@redhat.com> <201603210758.u2L7wiXA028101@d06av09.portsmouth.uk.ibm.com>

"Michael Rapoport" writes:

> Hi Bandan,
>
>> From: Bandan Das
>>
>> At Linuxcon last year, based on our presentation "vhost: sharing is
>> better" [1], we had briefly discussed the idea of cgroup aware
>> workqueues with Tejun. The following patches are a result of that
>> discussion. They are in no way complete in that the changes are for
>> unbounded workqueues only, but I just wanted to present my unfinished
>> work as an RFC and get some feedback.
>>
>> 1/4 and 3/4 are simple cgroup changes and add a helper function.
>> 2/4 is the main implementation.
>> 4/4 changes vhost to use workqueues with support for cgroups.
>>
>> Example:
>> vhost creates a worker thread when invoked for a kvm guest. Since
>> the guest is a normal process, the kernel thread servicing it should
>> be attached to the vm process' cgroups.
>
> I did some performance evaluation of different threading models in
> vhost, and in most tests replacing vhost kthreads with workqueues
> degrades the performance.

Workqueues use kthread_create() internally, so if calling one over the
other impacts performance, I think we should investigate that. Which
patches did you use? Note that an earlier version of the workqueue
patches I posted used per-cpu workqueues.

> Moreover, having thread management inside vhost provides opportunity
> for optimization, at least for some workloads...

What exactly is the advantage of doing our own thread management? Do
you have any examples? (Besides doing our own scheduling like in the
original Elvis paper, which I don't think is gonna happen.) Also, note
that there is still a possibility to affect how our work gets executed
by using the optional switches to alloc_workqueue(), so all is not
lost; see the sketch at the end of this mail.

> That said, I believe that switching vhost to use workqueues is not
> that good an idea after all.
>
>> Netperf:
>> Two guests running netperf in parallel.
>>
>>                                  Without patches    With patches
>>
>> TCP_STREAM (10^6 bits/second)    975.45             978.88
>> TCP_RR (Trans/second)            20121              18820.82
>> UDP_STREAM (10^6 bits/second)    1287.82            1184.5
>> UDP_RR (Trans/second)            20766.72           19667.08
>> Time a 4G iso download           2m 33 seconds      3m 02 seconds
>
> --
> Sincerely yours,
> Mike.
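
For concreteness, here is a minimal sketch of the kind of switches to
alloc_workqueue() I mean above. The queue name, the work function and
the particular flag combination are made up for illustration and are
not taken from the patch series; WQ_UNBOUND, WQ_SYSFS and max_active
are just the stock workqueue knobs.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static void example_work_fn(struct work_struct *work)
{
        /* a real user would drain its pending virtqueue work here */
}

static DECLARE_WORK(example_work, example_work_fn);

static int __init example_init(void)
{
        /*
         * WQ_UNBOUND lets the scheduler place the worker on any CPU
         * instead of the submitting CPU; WQ_SYSFS exposes the
         * workqueue's attributes (cpumask, nice) in sysfs so placement
         * can be tuned from userspace; max_active = 1 keeps the work
         * items serialized.
         */
        example_wq = alloc_workqueue("example_wq", WQ_UNBOUND | WQ_SYSFS, 1);
        if (!example_wq)
                return -ENOMEM;

        queue_work(example_wq, &example_work);
        return 0;
}

static void __exit example_exit(void)
{
        destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");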
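
And for readers who haven't looked at vhost, a rough sketch of the
existing kthread model that the quoted example above describes: spawn
a worker and pull it into the cgroups of the owner (the QEMU process).
This is simplified and not the actual driver code: error handling is
dropped, the names are made up, and the real driver performs the
attach from inside the worker via a queued work item rather than
inline.

#include <linux/cgroup.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static int example_worker(void *data)
{
        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE);
                if (kthread_should_stop()) {
                        __set_current_state(TASK_RUNNING);
                        break;
                }
                /* the real worker drains a list of vhost work items here */
                schedule();
        }
        return 0;
}

static struct task_struct *example_spawn_worker(void)
{
        struct task_struct *worker;

        worker = kthread_create(example_worker, NULL, "vhost-%d",
                                current->pid);
        if (IS_ERR(worker))
                return worker;

        /* inherit the cgroups of the owner, i.e. the VM process */
        cgroup_attach_task_all(current, worker);

        wake_up_process(worker);
        return worker;
}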