Subject: Re: [PATCH v4 4/5] async: Add support for queueing on specific node
From: Alexander Duyck
To: Dan Williams
Cc: Linux MM, Linux Kernel Mailing List, linux-nvdimm, Pasha Tatashin,
 Michal Hocko, Dave Jiang, Ingo Molnar, Dave Hansen, Jérôme Glisse,
 Andrew Morton, Logan Gunthorpe, "Kirill A. Shutemov"
Date: Fri, 21 Sep 2018 10:02:13 -0700
References: <20180920215824.19464.8884.stgit@localhost.localdomain>
 <20180920222938.19464.34102.stgit@localhost.localdomain>
X-Mailing-List: linux-kernel@vger.kernel.org

On 9/21/2018 7:57 AM, Dan Williams wrote:
> On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck wrote:
>>
>> This patch introduces two new variants of the async_schedule_ functions
>> that allow scheduling on a specific node. These functions are
>> async_schedule_on and async_schedule_on_domain, which map to
>> async_schedule and async_schedule_domain but provide NUMA-node-specific
>> functionality. The original functions were converted to inline
>> definitions that call the new functions while passing NUMA_NO_NODE.
>>
>> The main motivation behind this is the need to schedule NVDIMM init
>> work on specific NUMA nodes in order to improve the performance of
>> memory initialization.
>>
>> One additional change I made is that I dropped the "extern" from the
>> function prototypes in the async.h kernel header, since it isn't
>> needed.
>>
>> Signed-off-by: Alexander Duyck
>> ---
>>  include/linux/async.h |   20 +++++++++++++++++---
>>  kernel/async.c        |   36 +++++++++++++++++++++++++-----------
>>  2 files changed, 42 insertions(+), 14 deletions(-)
>>
>> diff --git a/include/linux/async.h b/include/linux/async.h
>> index 6b0226bdaadc..9878b99cbb01 100644
>> --- a/include/linux/async.h
>> +++ b/include/linux/async.h
>> @@ -14,6 +14,7 @@
>>
>>  #include
>>  #include
>> +#include
>>
>>  typedef u64 async_cookie_t;
>>  typedef void (*async_func_t) (void *data, async_cookie_t cookie);
>> @@ -37,9 +38,22 @@ struct async_domain {
>>         struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
>>                                       .registered = 0 }
>>
>> -extern async_cookie_t async_schedule(async_func_t func, void *data);
>> -extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
>> -                                            struct async_domain *domain);
>> +async_cookie_t async_schedule_on(async_func_t func, void *data, int node);
>> +async_cookie_t async_schedule_on_domain(async_func_t func, void *data, int node,
>> +                                        struct async_domain *domain);
>
> I would expect this to take a cpu instead of a node to not surprise
> users coming from queue_work_on() / schedule_work_on()...

The thing is, queue_work_on() actually queues the work on a CPU in most
cases. The problem is that we are running on an unbound workqueue, so
what we actually get is node-specific rather than CPU-specific behavior.
That is why I opted for this.
https://elixir.bootlin.com/linux/v4.19-rc4/source/kernel/workqueue.c#L1390

>> +
>> +static inline async_cookie_t async_schedule(async_func_t func, void *data)
>> +{
>> +       return async_schedule_on(func, data, NUMA_NO_NODE);
>> +}
>> +
>> +static inline async_cookie_t
>> +async_schedule_domain(async_func_t func, void *data,
>> +                     struct async_domain *domain)
>> +{
>> +       return async_schedule_on_domain(func, data, NUMA_NO_NODE, domain);
>> +}
>> +
>>  void async_unregister_domain(struct async_domain *domain);
>>  extern void async_synchronize_full(void);
>>  extern void async_synchronize_full_domain(struct async_domain *domain);
>> diff --git a/kernel/async.c b/kernel/async.c
>> index a893d6170944..1d7ce81c1949 100644
>> --- a/kernel/async.c
>> +++ b/kernel/async.c
>> @@ -56,6 +56,7 @@ synchronization with the async_synchronize_full() function, before returning
>>
>>  #include
>>  #include
>>  #include
>> +#include
>>
>>  #include "workqueue_internal.h"
>>
>> @@ -149,8 +150,11 @@ static void async_run_entry_fn(struct work_struct *work)
>>         wake_up(&async_done);
>>  }
>>
>> -static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
>> +static async_cookie_t __async_schedule(async_func_t func, void *data,
>> +                                      struct async_domain *domain,
>> +                                      int node)
>>  {
>> +       int cpu = WORK_CPU_UNBOUND;
>>         struct async_entry *entry;
>>         unsigned long flags;
>>         async_cookie_t newcookie;
>>
>> @@ -194,30 +198,40 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
>>         /* mark that this task has queued an async job, used by module init */
>>         current->flags |= PF_USED_ASYNC;
>>
>> +       /* guarantee cpu_online_mask doesn't change during scheduling */
>> +       get_online_cpus();
>> +
>> +       if (node >= 0 && node < MAX_NUMNODES && node_online(node))
>> +               cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
>
> ...I think this node to cpu helper should be up-leveled for callers.
> I suspect taking the cpu_hotplug_lock() via get_online_cpus() within a
> "do_something_on()" routine may cause lockdep problems. For example, I
> found this when auditing queue_work_on() users:

Yeah, after looking over the code I think I do see an issue. I will
probably need to add something like an "unbound_cpu_by_node" type of
function in order to pair it up with the unbound_pwq_by_node() call that
is in __queue_work(). Otherwise I run the risk of scheduling on a CPU
that I shouldn't be scheduling on.

> /*
>  * Doesn't need any cpu hotplug locking because we do rely on per-cpu
>  * kworkers being shut down before our page_alloc_cpu_dead callback is
>  * executed on the offlined cpu.
>  * Calling this function with cpu hotplug locks held can actually lead
>  * to obscure indirect dependencies via WQ context.
>  */
> void lru_add_drain_all(void)
>
> I think it's a gotcha waiting to happen if async_schedule_on() has more
> restrictive calling contexts than queue_work_on().

I can look into that. If nothing else, it looks like queue_work_on()
puts the onus on the caller to make certain the CPU cannot go away, so I
can push the responsibility up the call chain in order to maintain
parity.