From: "Michael Rapoport" <rapoport@il.ibm.com>
To: Bandan Das
Cc: jiangshanlai@gmail.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mst@redhat.com, tj@kernel.org
Subject: vhost threading model (was: Re: [RFC PATCH 0/4] cgroup aware workqueues)
Date: Tue, 22 Mar 2016 09:12:08 +0200
Message-Id: <201603220712.u2M7CFdO004636@d06av03.portsmouth.uk.ibm.com>
References: <1458339291-4093-1-git-send-email-bsd@redhat.com> <201603210758.u2L7wiXA028101@d06av09.portsmouth.uk.ibm.com>

> Bandan Das wrote on 03/21/2016 07:43:41 PM:
>
> "Michael Rapoport" writes:
> >
> > Hi Bandan,
> >
> >> From: Bandan Das
> >>
> >> At Linuxcon last year, based on our presentation "vhost: sharing is
> >> better" [1], we had briefly discussed the idea of cgroup aware
> >> workqueues with Tejun. The following patches are a result of that
> >> discussion. They are in no way complete, in that the changes are for
> >> unbound workqueues only, but I just wanted to present my unfinished
> >> work as RFC and get some feedback.
> >>
> >> 1/4 and 3/4 are simple cgroup changes and add a helper function.
> >> 2/4 is the main implementation.
> >> 4/4 changes vhost to use workqueues with support for cgroups.
> >>
> >> Example:
> >> vhost creates a worker thread when invoked for a KVM guest. Since
> >> the guest is a normal process, the kernel thread servicing it should
> >> be attached to the VM process's cgroups.
> >
> > I did some performance evaluation of different threading models in
> > vhost, and in most tests replacing vhost kthreads with workqueues
> > degrades the
>
> Workqueues use kthread_create() internally, and if calling one over the
> other impacts performance, I think we should investigate that.

Agree. Didn't have time to do it myself yet...

> Which patches did you use? Note that an earlier version of the workqueue
> patches that I posted used per-cpu workqueues.

I used your earlier version of the workqueue patches, then I modified it to
use unbound workqueues, and then I even restored, to some extent, the
original vhost workqueue usage. In all the cases I saw performance
degradation relative to the baseline.
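Just to be concrete about what I mean by "unbound workqueues": the conversion
is roughly of the following shape. This is a simplified sketch, not the code
from any version of the patches -- vhost_wq, vhost_dev_work and
vhost_work_queue_sketch are made-up names -- and WQ_UNBOUND is only one of
the optional switches to alloc_workqueue() that affect how the work gets
executed.

#include <linux/kernel.h>
#include <linux/workqueue.h>

/*
 * Simplified sketch only, not the actual patch: shows the general shape of
 * replacing the per-device vhost worker kthread with an unbound workqueue,
 * i.e. queueing work items to a shared WQ_UNBOUND pool instead of waking a
 * dedicated kthread.  All names below are made up for illustration.
 */
static struct workqueue_struct *vhost_wq;

struct vhost_dev_work {
	struct work_struct work;
	void (*fn)(struct vhost_dev_work *w);	/* e.g. handle_tx/handle_rx */
};

static void vhost_dev_work_fn(struct work_struct *work)
{
	struct vhost_dev_work *w = container_of(work, struct vhost_dev_work, work);

	w->fn(w);				/* run the virtqueue handler */
}

static int vhost_wq_init(void)
{
	/*
	 * WQ_UNBOUND: work is not bound to the submitting CPU, the scheduler
	 * decides where the kworkers run.  Other switches (WQ_HIGHPRI,
	 * WQ_CPU_INTENSIVE, a max_active limit) could be combined here.
	 */
	vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);
	return vhost_wq ? 0 : -ENOMEM;
}

/* Instead of list_add() + wake_up_process(dev->worker) in vhost_work_queue() */
static void vhost_work_queue_sketch(struct vhost_dev_work *w)
{
	INIT_WORK(&w->work, vhost_dev_work_fn);	/* done once at setup in real code */
	queue_work(vhost_wq, &w->work);
}

The per-cpu variant from the earlier patches differs mainly in that the work
items stay bound to the CPU that queued them instead of being placed by the
scheduler.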
> > performance. Moreover, having thread management inside the vhost provides
>
> What exactly is the advantage of doing our own thread management? Do you
> have any examples? (Besides doing our own scheduling like in the original
> Elvis paper, which I don't think is gonna happen.) Also, note here that
> there is a possibility to affect how our work gets executed by using
> optional switches to alloc_workqueue(), so all is not lost.

Well, Elvis is a _theoretical_ example that showed that I/O scheduling in
vhost improves performance. I'm not saying we should take Elvis and try to
squeeze it into vhost; I just want to say that we cannot switch vhost to use
workqueues if it causes performance degradation.

My opinion is that we need to give it some more thought and much more
performance evaluation, so that we can find the best model.

> > opportunity for optimization, at least for some workloads...
> > That said, I believe that switching vhost to use workqueues is not that
> > good idea after all.

--
Sincerely yours,
Mike.