From: "Michael Rapoport" <rapoport@il.ibm.com>
To: Bandan Das
Cc: jiangshanlai@gmail.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	mst@redhat.com, Tejun Heo
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues
Date: Sun, 3 Apr 2016 13:43:46 +0300
Message-Id: <201604031043.u33AhpQJ007249@d06av07.portsmouth.uk.ibm.com>
References: <1458339291-4093-1-git-send-email-bsd@redhat.com>
	<201603210758.u2L7wiY9003907@d06av07.portsmouth.uk.ibm.com>
	<20160330170419.GG7822@mtj.duckdns.org>
	<201603310617.u2V6HIkt008006@d06av12.portsmouth.uk.ibm.com>
	<20160331171435.GD24661@htj.duckdns.org>

Hi Bandan,

> Bandan Das wrote on 03/31/2016 09:45:43 PM:
> >
> >> > > opportunity for optimization, at least for some workloads...
> >> >
> >> > What sort of optimizations are we talking about?
> >>
> >> Well, if we take Elvis [1] as the theoretical base, there could be
> >> a benefit to doing I/O scheduling inside vhost.
> >
> > Yeah, if that actually is beneficial, take full control of the
> > kworker thread.
>
> Well, even if it actually is beneficial (which I am sure it is), it seems
> a little impractical to block current improvements based on a future
> prospect that (as far as I know) no one is working on?

I'm not suggesting we block current improvements based on a future
prospect. But, unfortunately, the results you've posted show a regression
rather than an improvement.
Also, I thought you were working on comparing different approaches to
vhost threading, such as workqueues and a shared vhost thread [1] ;-)
Anyway, I'm working on this in the background, and, frankly, I cannot say
I have a clear vision of the best route.
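For concreteness, here is a rough sketch of the per-device worker model
we keep referring to. The names (the _sketch suffixes) are mine and the
details are simplified (for instance, the real code does the cgroup
attach from a work item executed by the worker itself); the actual
implementation lives in drivers/vhost/vhost.c. But it shows the two
points that matter for this discussion: the FIFO dequeue where "I/O
scheduling inside vhost" would plug in, and the cgroup_attach_task_all()
call that a workqueue-backed vhost needs cgroup-aware workqueues to
replace:

/* Simplified sketch of vhost's dedicated-worker model; not the
 * verbatim drivers/vhost/vhost.c code. */
#include <linux/cgroup.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

struct vhost_work_sketch {
	struct list_head node;
	void (*fn)(struct vhost_work_sketch *work);
};

struct vhost_dev_sketch {
	spinlock_t work_lock;
	struct list_head work_list;
	struct task_struct *worker;
};

/* Producers enqueue strictly FIFO and kick the worker; the real code
 * also guards against double-queueing the same work item. */
static void vhost_work_queue_sketch(struct vhost_dev_sketch *dev,
				    struct vhost_work_sketch *work)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->work_lock, flags);
	list_add_tail(&work->node, &dev->work_list);
	spin_unlock_irqrestore(&dev->work_lock, flags);
	wake_up_process(dev->worker);
}

/* The worker drains the FIFO; replacing this list_first_entry_or_null()
 * dequeue with a policy-driven pick is where "I/O scheduling inside
 * vhost" would live. */
static int vhost_worker_sketch(void *data)
{
	struct vhost_dev_sketch *dev = data;
	struct vhost_work_sketch *work;

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (kthread_should_stop()) {
			__set_current_state(TASK_RUNNING);
			break;
		}

		spin_lock_irq(&dev->work_lock);
		work = list_first_entry_or_null(&dev->work_list,
						struct vhost_work_sketch,
						node);
		if (work)
			list_del_init(&work->node);
		spin_unlock_irq(&dev->work_lock);

		if (work) {
			__set_current_state(TASK_RUNNING);
			work->fn(work);		/* e.g. tx/rx kick handler */
		} else {
			schedule();		/* until the next queue+wake */
		}
	}
	return 0;
}

/* The device owner creates the worker and moves it into the owner's
 * cgroups; this is exactly the behaviour a plain-workqueue vhost loses
 * unless the workqueue itself becomes cgroup aware. */
static int vhost_dev_set_owner_sketch(struct vhost_dev_sketch *dev)
{
	struct task_struct *worker;
	int err;

	spin_lock_init(&dev->work_lock);
	INIT_LIST_HEAD(&dev->work_list);

	worker = kthread_create(vhost_worker_sketch, dev,
				"vhost-sketch-%d", current->pid);
	if (IS_ERR(worker))
		return PTR_ERR(worker);

	err = cgroup_attach_task_all(current, worker);
	if (err) {
		kthread_stop(worker);
		return err;
	}

	dev->worker = worker;
	wake_up_process(worker);
	return 0;
}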
The "byos" route seems more promising with respect to possible performance gains, but it will definitely add complexity, and I cannot say if the added complexity will be worth performance improvements. Meanwhile, I'd suggest we better understand what causes regression with your current patches and maybe then we'll be smarter to get to the right direction. :) > *byos = bring your own scheduling ;) > > > Thanks. -- Sincerely yours, Mike. [1] https://lwn.net/Articles/650857/