From: Bandan Das
To: Tejun Heo
Cc: Michael Rapoport, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, mst@redhat.com, jiangshanlai@gmail.com
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues
Date: Thu, 31 Mar 2016 14:45:43 -0400
In-Reply-To: <20160331171435.GD24661@htj.duckdns.org> (Tejun Heo's message of "Thu, 31 Mar 2016 13:14:35 -0400")
References: <1458339291-4093-1-git-send-email-bsd@redhat.com> <201603210758.u2L7wiY9003907@d06av07.portsmouth.uk.ibm.com> <20160330170419.GG7822@mtj.duckdns.org> <201603310617.u2V6HIkt008006@d06av12.portsmouth.uk.ibm.com> <20160331171435.GD24661@htj.duckdns.org>

Tejun Heo writes:

> Hello, Michael.
>
> On Thu, Mar 31, 2016 at 08:17:13AM +0200, Michael Rapoport wrote:
>> > There really shouldn't be any difference when using unbound
>> > workqueues. workqueue becomes a convenience thing which manages
>> > worker pools and there shouldn't be any difference between workqueue
>> > workers and kthreads in terms of behavior.
>>
>> I agree that there really shouldn't be any performance difference, but
>> the tests I've run show otherwise. I have no idea why, and I haven't had
>> time yet to investigate it.
>
> I'd be happy to help dig into what's going on. If kvm wants full
> control over the worker thread, kvm can use workqueue as a pure
> threadpool. Schedule a work item to grab a worker thread with the
> matching attributes and keep using it as if it were a kthread. While
> that wouldn't be able to take advantage of work item flushing and so on,
> it'd still be a simpler way to manage worker threads, and the extra
> stuff like cgroup membership handling wouldn't have to be duplicated.
>
>> > > opportunity for optimization, at least for some workloads...
>> >
>> > What sort of optimizations are we talking about?
>>
>> Well, if we take Elvis (1) as the theoretical base, there could be a
>> benefit to doing I/O scheduling inside vhost.
>
> Yeah, if that actually is beneficial, take full control of the
> kworker thread.

Well, even if it actually is beneficial (which I am sure it is), it seems
a little impractical to block current improvements based on a future
prospect that, as far as I know, no one is working on. There have been
discussions about this in the past, and IIRC most people agree about not
going the byos* route. But I am still all for such a proposal, and if it's
good/clean enough, I think we can definitely tear down what we have and
throw it away! The I/O scheduling part is intrusive enough that even the
current code base would have to change quite a bit.

*byos = bring your own scheduling ;)

> Thanks.
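
For the archives, here is a minimal sketch of the "pure threadpool"
pattern Tejun describes above: a work item queued on an unbound workqueue
whose function parks inside its kworker and services a driver-private job
list until shutdown. All names here (pool_ctx, pool_worker_fn, etc.) are
hypothetical, invented for illustration; this is not actual kvm/vhost
code, just one way the idea could look.

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/llist.h>
#include <linux/wait.h>

/* Hypothetical example -- not actual kvm/vhost code. */
struct pool_ctx {
	struct workqueue_struct	*wq;
	struct work_struct	work;
	struct llist_head	jobs;	/* driver-private job list */
	wait_queue_head_t	wait;
	bool			stop;
};

static void pool_worker_fn(struct work_struct *work)
{
	struct pool_ctx *ctx = container_of(work, struct pool_ctx, work);

	/*
	 * This function does not return until shutdown: it occupies one
	 * kworker and uses it as a dedicated thread, so the worker pool's
	 * attributes (and, with cgroup-aware workqueues, its cgroup
	 * membership) apply without duplicating any plumbing here.
	 */
	for (;;) {
		struct llist_node *jobs;

		if (wait_event_interruptible(ctx->wait,
				!llist_empty(&ctx->jobs) || ctx->stop))
			continue;	/* kworkers ignore signals anyway */
		if (ctx->stop)
			return;

		jobs = llist_del_all(&ctx->jobs);
		/* ... run each job; llist_del_all() hands them back LIFO ... */
	}
}

/* Producer side: hand a job to the parked worker. */
static void pool_queue_job(struct pool_ctx *ctx, struct llist_node *job)
{
	llist_add(job, &ctx->jobs);
	wake_up(&ctx->wait);
}

static int pool_start(struct pool_ctx *ctx)
{
	ctx->wq = alloc_workqueue("demo_pool", WQ_UNBOUND, 0);
	if (!ctx->wq)
		return -ENOMEM;

	init_llist_head(&ctx->jobs);
	init_waitqueue_head(&ctx->wait);
	ctx->stop = false;
	INIT_WORK(&ctx->work, pool_worker_fn);
	queue_work(ctx->wq, &ctx->work);	/* grabs one kworker and keeps it */
	return 0;
}

Shutdown would be the usual dance: set ctx->stop, wake_up(&ctx->wait),
then flush_work(&ctx->work) and destroy_workqueue(ctx->wq). As Tejun
notes, you give up per-work-item flushing with this scheme, since the one
work item never completes until the very end.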