Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753922AbbGFPeB (ORCPT ); Mon, 6 Jul 2015 11:34:01 -0400
Received: from mail-wi0-f172.google.com ([209.85.212.172]:36613 "EHLO
	mail-wi0-f172.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752606AbbGFPd5 (ORCPT ); Mon, 6 Jul 2015 11:33:57 -0400
From: Frederic Weisbecker 
To: LKML 
Cc: Frederic Weisbecker ,
	Oleg Nesterov ,
	Christoph Lameter ,
	Rik van Riel ,
	Andrew Morton 
Subject: [PATCH 2/3] kmod: Add up-to-date explanations on the purpose of each asynchronous level
Date: Mon, 6 Jul 2015 17:33:40 +0200
Message-Id: <1436196821-13962-3-git-send-email-fweisbec@gmail.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1436196821-13962-1-git-send-email-fweisbec@gmail.com>
References: <1436196821-13962-1-git-send-email-fweisbec@gmail.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3411
Lines: 88

There seems to be quite some confusion in the comments, likely due to
changes that came after they were written.

Since it's not at all obvious why we have 3 levels of asynchronous code
to implement usermodehelpers, it's important to explain in detail the
reasons for this layout.

Cc: Rik van Riel 
Cc: Oleg Nesterov 
Cc: Andrew Morton 
Cc: Christoph Lameter 
Signed-off-by: Frederic Weisbecker 
---
 kernel/kmod.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/kernel/kmod.c b/kernel/kmod.c
index 4682e91..f940b21 100644
--- a/kernel/kmod.c
+++ b/kernel/kmod.c
@@ -269,7 +269,11 @@ out:
 	do_exit(0);
 }
 
-/* Keventd can't block, but this (a child) can. */
+/*
+ * We can't wait for usermodehelper completion from khelper without
+ * blocking other pending concurrent usermodehelper targets. This is why
+ * the UMH_WAIT_PROC flavour runs in its own thread.
+ */
 static int call_usermodehelper_exec_sync(void *data)
 {
 	struct subprocess_info *sub_info = data;
@@ -285,8 +289,8 @@ static int call_usermodehelper_exec_sync(void *data)
 	/*
 	 * Normally it is bogus to call wait4() from in-kernel because
 	 * wait4() wants to write the exit code to a userspace address.
-	 * But call_usermodehelper_exec_sync() always runs as keventd,
-	 * and put_user() to a kernel address works OK for kernel
+	 * But call_usermodehelper_exec_sync() always runs as a kernel
+	 * thread, and put_user() to a kernel address works OK for kernel
 	 * threads, due to their having an mm_segment_t which spans the
 	 * entire address space.
 	 *
@@ -307,7 +311,15 @@ static int call_usermodehelper_exec_sync(void *data)
 	do_exit(0);
 }
 
-/* This is run by khelper thread */
+/*
+ * This function doesn't need to be called asynchronously. But we need to create
+ * the usermodehelper kernel threads from a task that is affine to all CPUs
+ * (or the nohz housekeeping ones) so that they inherit a wide affinity. The
+ * khelper workqueue simply provides that.
+ * call_usermodehelper() can be called from tasks with a reduced CPU
+ * affinity (eg: per-cpu workqueues) and we don't want usermodehelper targets
+ * to contend with any busy CPU.
+ */
 static void call_usermodehelper_exec_work(struct work_struct *work)
 {
 	struct subprocess_info *sub_info =
@@ -693,6 +705,18 @@ struct ctl_table usermodehelper_table[] = {
 
 void __init usermodehelper_init(void)
 {
+	/*
+	 * The singlethread property here stands for the need of a workqueue
+	 * with a wide CPU affinity, in order to create usermodehelper kernel
+	 * threads that inherit this attribute irrespective of the
+	 * call_usermodehelper() callers. Non-singlethread workqueues are
+	 * otherwise per-cpu and wouldn't produce the desired effect.
+	 *
+	 * The ordering guarantee that comes as a side effect isn't necessary
+	 * but shouldn't introduce a performance issue. All we do is create two
+	 * kernel threads, which should be fast enough not to block
+	 * concurrent usermodehelper callers.
+	 */
 	khelper_wq = create_singlethread_workqueue("khelper");
 	BUG_ON(!khelper_wq);
 }
-- 
2.1.4
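
For reference, below is a minimal, hypothetical sketch of a usermodehelper
call site, i.e. the kind of caller that ends up going through the khelper
workqueue, the exec work and, for UMH_WAIT_PROC, the extra wait thread
described above. The helper path and arguments are made up purely for
illustration; only call_usermodehelper() and the UMH_* flags come from the
kernel API discussed in the patch.

#include <linux/kmod.h>

/*
 * Illustration only: /sbin/foo-helper and its arguments are hypothetical.
 * call_usermodehelper() queues work on the khelper workqueue, which in
 * turn forks the helper kernel thread; with UMH_WAIT_PROC yet another
 * thread is created so the caller can sleep until the helper exits.
 */
static int run_foo_helper(void)
{
	char *argv[] = { "/sbin/foo-helper", "start", NULL };
	char *envp[] = { "HOME=/", "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };

	return call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
}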