Date: Tue, 14 Jul 2015 16:04:32 +0200
From: Frederic Weisbecker
To: Christoph Lameter
Cc: Oleg Nesterov, LKML, Rik van Riel, Andrew Morton
Subject: Re: [PATCH 2/5] kmod: Use system_unbound_wq instead of khelper
Message-ID: <20150714140428.GB29441@lerouge>

On Fri, Jul 10, 2015 at 02:05:56PM -0500, Christoph Lameter wrote:
> On Fri, 10 Jul 2015, Frederic Weisbecker wrote:
>
> > Note that nohz full is perfectly fine with that. The issue I'm worried about
> > is the case where drivers spawn hundreds of jobs and they all happen on the
> > same node, because the kernel threads inherit the workqueue affinity instead
> > of the global affinity that khelper had.
>
> Well if this is working as intended here then the kernel threads will only
> run on a specific CPU. As far as we can tell the amount of kernel threads
> spawned is rather low

Quite high, actually: I count 578 calls on my machine. Most of them are
launched by the crypto subsystem trying to load modules. And it takes more
than one second to complete all of these requests...

> and also the performance requirements on those
> threads are low.
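For context, the affinity question comes from the direction of the patch
itself: the dedicated khelper workqueue goes away and the helper work is
queued on the unbound workqueue instead, whose workers carry the workqueue's
NUMA affinity. A minimal sketch of that idea (kernel-side, not
standalone-runnable; the function and field names here are illustrative
rather than the exact kmod.c ones):

```c
/* Sketch: queue the usermodehelper work on system_unbound_wq
 * instead of the old dedicated khelper_wq.  Unbound workers are
 * not pinned to one CPU, but they do follow the workqueue's
 * per-NUMA-node affinity, which is the concern discussed above. */
static void call_usermodehelper_queue(struct subprocess_info *sub_info)
{
	INIT_WORK(&sub_info->work, call_usermodehelper_exec_work);
	queue_work(system_unbound_wq, &sub_info->work);
	/* was: queue_work(khelper_wq, &sub_info->work); */
}
```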
I think it is sensitive given the possibly high number of instances launched.
For now at least, the crypto subsystem hasn't optimized that at all, because
all these instances are serialized: on my machine, all of them run on CPU 0.
Now I'm worried about other configs that may launch loads of parallel
usermodehelper threads.

That said, I tend to think that if such a thing hasn't been seen as a problem
on small SMP systems, why would it be an issue if we affine them to a NUMA
node that is usually at least 4 CPUs wide? Or is it possible to see lower
numbers of CPUs in a NUMA node?
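On the node-width question, kernel code can check how many CPUs a given node
actually has; a sketch (kernel-side, illustrative only):

```c
/* Sketch: count the possible CPUs in NUMA node nid.  Nothing
 * guarantees this is >= 4: single-CPU nodes exist, and some
 * configurations even have memory-only nodes with no CPUs at
 * all, so the weight here can be 1 or 0. */
int cpus_in_node = cpumask_weight(cpumask_of_node(nid));
```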