From: Donghai Qiao <dqiao@redhat.com>
To: akpm@linux-foundation.org, sfr@canb.auug.org.au, arnd@arndb.de,
	peterz@infradead.org, heying24@huawei.com,
	andriy.shevchenko@linux.intel.com, axboe@kernel.dk,
	rdunlap@infradead.org, tglx@linutronix.de, gor@linux.ibm.com
Cc: donghai.w.qiao@gmail.com, linux-kernel@vger.kernel.org,
	Donghai Qiao <dqiao@redhat.com>
Subject: [PATCH v4 05/11] smp: replace smp_call_function_single_async with smp_call_csd
Date: Thu, 19 May 2022 16:49:37 -0400
Message-Id: <20220519204943.1079578-6-dqiao@redhat.com>
In-Reply-To: <20220519204943.1079578-1-dqiao@redhat.com>
References: <20220519204943.1079578-1-dqiao@redhat.com>
Reply-To: dqiao@redhat.com
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Replace smp_call_function_single_async() with smp_call_csd() and extend
it to support synchronous calls to a single CPU with a preallocated csd
structure.

Signed-off-by: Donghai Qiao <dqiao@redhat.com>
---
v1 -> v2: removed 'x' from the function names and changed XCALL to
          SMP_CALL in the new macros
v2 -> v3: changed the call of smp_call_private() to smp_call_csd()

 include/linux/smp.h |   3 +-
 kernel/smp.c        | 157 ++++++++++++++++++++------------------
 2 files changed, 75 insertions(+), 85 deletions(-)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index 06a20454fd53..b4885e45690b 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -193,7 +193,8 @@ int smp_call_function_single(int cpuid, smp_call_func_t func, void *info,
 void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
 			   void *info, bool wait, const struct cpumask *mask);
 
-int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
+#define smp_call_function_single_async(cpu, csd) \
+	smp_call_csd(cpu, csd, SMP_CALL_TYPE_ASYNC)
 
 /*
  * Cpus stopping functions in panic. All have default weak definitions.
diff --git a/kernel/smp.c b/kernel/smp.c
index 8fdea9547502..f08135ad70e3 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -444,41 +444,6 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
 	send_call_function_single_ipi(cpu);
 }
 
-/*
- * Insert a previously allocated call_single_data_t element
- * for execution on the given CPU. data must already have
- * ->func, ->info, and ->flags set.
- */
-static int generic_exec_single(int cpu, struct __call_single_data *csd)
-{
-	if (cpu == smp_processor_id()) {
-		smp_call_func_t func = csd->func;
-		void *info = csd->info;
-		unsigned long flags;
-
-		/*
-		 * We can unlock early even for the synchronous on-stack case,
-		 * since we're doing this from the same CPU..
-		 */
-		csd_lock_record(csd);
-		csd_unlock(csd);
-		local_irq_save(flags);
-		func(info);
-		csd_lock_record(NULL);
-		local_irq_restore(flags);
-		return 0;
-	}
-
-	if ((unsigned)cpu >= nr_cpu_ids || !cpu_online(cpu)) {
-		csd_unlock(csd);
-		return -ENXIO;
-	}
-
-	__smp_call_single_queue(cpu, &csd->node.llist);
-
-	return 0;
-}
-
 /**
  * generic_smp_call_function_single_interrupt - Execute SMP IPI callbacks
  *
@@ -676,52 +641,6 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 }
 EXPORT_SYMBOL(smp_call_function_single);
 
-/**
- * smp_call_function_single_async() - Run an asynchronous function on a
- *			 specific CPU.
- * @cpu: The CPU to run on.
- * @csd: Pre-allocated and setup data structure
- *
- * Like smp_call_function_single(), but the call is asynchonous and
- * can thus be done from contexts with disabled interrupts.
- *
- * The caller passes his own pre-allocated data structure
- * (ie: embedded in an object) and is responsible for synchronizing it
- * such that the IPIs performed on the @csd are strictly serialized.
- *
- * If the function is called with one csd which has not yet been
- * processed by previous call to smp_call_function_single_async(), the
- * function will return immediately with -EBUSY showing that the csd
- * object is still in progress.
- *
- * NOTE: Be careful, there is unfortunately no current debugging facility to
- * validate the correctness of this serialization.
- *
- * Return: %0 on success or negative errno value on error
- */
-int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
-{
-	int err = 0;
-
-	preempt_disable();
-
-	if (csd->node.u_flags & CSD_FLAG_LOCK) {
-		err = -EBUSY;
-		goto out;
-	}
-
-	csd->node.u_flags = CSD_FLAG_LOCK;
-	smp_wmb();
-
-	err = generic_exec_single(cpu, csd);
-
-out:
-	preempt_enable();
-
-	return err;
-}
-EXPORT_SYMBOL_GPL(smp_call_function_single_async);
-
 /*
  * smp_call_function_any - Run a function on any of the given cpus
  * @mask: The mask of cpus it can run on.
@@ -1304,15 +1223,85 @@ EXPORT_SYMBOL(smp_call_mask_cond);
  * Because of that, this function can be used from the contexts with disabled
  * interrupts.
  *
- * Parameters
+ * The bit CSD_FLAG_LOCK will be set to csd->node.u_flags only if the call
+ * is made as type CSD_TYPE_SYNC or CSD_TYPE_ASYNC.
  *
+ * Parameters
  * cpu: Must be a positive value less than nr_cpu_id.
  * csd: The private csd provided by the callers.
- *
  * Others: see smp_call().
+ *
+ * Return: %0 on success or negative errno value on error.
+ *
+ * The following comments are from smp_call_function_single_async():
+ *
+ *	The call is asynchronous and can thus be done from contexts with
+ *	disabled interrupts. If the function is called with one csd which
+ *	has not yet been processed by previous call, the function will
+ *	return immediately with -EBUSY showing that the csd object is
+ *	still in progress.
+ *
+ *	NOTE: Be careful, there is unfortunately no current debugging
+ *	facility to validate the correctness of this serialization.
  */
 int smp_call_csd(int cpu, call_single_data_t *csd, unsigned int flags)
 {
-	return 0;
+	int err = 0;
+
+	if ((unsigned int)cpu >= nr_cpu_ids || !cpu_online(cpu)) {
+		pr_warn("cpu ID must be a positive number < nr_cpu_ids and must be currently online\n");
+		return -EINVAL;
+	}
+
+	if (csd == NULL) {
+		pr_warn("csd must not be NULL\n");
+		return -EINVAL;
+	}
+
+	preempt_disable();
+	if (csd->node.u_flags & CSD_FLAG_LOCK) {
+		err = -EBUSY;
+		goto out;
+	}
+
+	/*
+	 * CSD_FLAG_LOCK is set for CSD_TYPE_SYNC or CSD_TYPE_ASYNC only.
+	 */
+	if ((flags & ~(CSD_TYPE_SYNC | CSD_TYPE_ASYNC)) == 0)
+		csd->node.u_flags = CSD_FLAG_LOCK | flags;
+	else
+		csd->node.u_flags = flags;
+
+	if (cpu == smp_processor_id()) {
+		smp_call_func_t func = csd->func;
+		void *info = csd->info;
+		unsigned long flags;
+
+		/*
+		 * We can unlock early even for the synchronous on-stack case,
+		 * since we're doing this from the same CPU..
+		 */
+		csd_lock_record(csd);
+		csd_unlock(csd);
+		local_irq_save(flags);
+		func(info);
+		csd_lock_record(NULL);
+		local_irq_restore(flags);
+		goto out;
+	}
+
+	/*
+	 * Ensure the flags are visible before the csd
+	 * goes to the queue.
+	 */
+	smp_wmb();
+
+	__smp_call_single_queue(cpu, &csd->node.llist);
+
+	if (flags & CSD_TYPE_SYNC)
+		csd_lock_wait(csd);
+out:
+	preempt_enable();
+	return err;
 }
 EXPORT_SYMBOL(smp_call_csd);
-- 
2.27.0
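
For illustration only, not part of the patch: a minimal caller sketch of the
consolidated interface. The names demo_func/demo_csd/demo_kick are
hypothetical, and the sketch assumes the SMP_CALL_TYPE_* flags introduced
earlier in this series plus the INIT_CSD() helper from mainline <linux/smp.h>.

#include <linux/smp.h>

static void demo_func(void *info)
{
	/* Runs on the target CPU (from the IPI handler, or locally with
	 * interrupts disabled when the target is the calling CPU). */
}

/* Preallocated csd, e.g. embedded in the caller's own object. */
static call_single_data_t demo_csd;

static void demo_setup(void)
{
	/* One-time initialization of ->func and ->info. */
	INIT_CSD(&demo_csd, demo_func, NULL);
}

static int demo_kick(int cpu)
{
	/*
	 * Asynchronous cross call: returns -EBUSY if demo_csd is still
	 * pending from an earlier call, -EINVAL if cpu is invalid or
	 * offline.  Passing a synchronous type flag instead would make
	 * the call wait for demo_func() to finish on the target CPU.
	 */
	return smp_call_csd(cpu, &demo_csd, SMP_CALL_TYPE_ASYNC);
}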