From: Donghai Qiao <dqiao@redhat.com>
To: akpm@linux-foundation.org, sfr@canb.auug.org.au, arnd@arndb.de,
    peterz@infradead.org, heying24@huawei.com,
    andriy.shevchenko@linux.intel.com, axboe@kernel.dk,
    rdunlap@infradead.org, tglx@linutronix.de, gor@linux.ibm.com
Cc: donghai.w.qiao@gmail.com, linux-kernel@vger.kernel.org,
    Donghai Qiao <dqiao@redhat.com>
Subject: [PATCH v2 05/11] smp: replace smp_call_function_single_async() with smp_call_private()
Date: Fri, 22 Apr 2022 16:00:34 -0400
Message-Id: <20220422200040.93813-6-dqiao@redhat.com>
In-Reply-To: <20220422200040.93813-1-dqiao@redhat.com>
References: <20220422200040.93813-1-dqiao@redhat.com>
Reply-To: dqiao@redhat.com
Replace smp_call_function_single_async() with smp_call_private(), and
extend smp_call_private() to support synchronous calls to one CPU with
preallocated csd structures.

Ideally, the new interface smp_call() should be able to do what
smp_call_function_single_async() does. But because the csd is provided
and maintained by the callers, it exposes the risk of corrupting the
call_single_queue[cpu] linked list if the clients manipulate their csd
inappropriately. On the other hand, there should be no noticeable
performance advantage to providing a preallocated csd for cross-call
kernel consumers. Thus, in the long run, the consumers should be changed
to stop using this type of preallocated csd.

Signed-off-by: Donghai Qiao <dqiao@redhat.com>
---
v1 -> v2: removed 'x' from the function names and changed XCALL to
          SMP_CALL in the new macros

 include/linux/smp.h |   3 +-
 kernel/smp.c        | 163 +++++++++++++++++++++-----------------------
 2 files changed, 81 insertions(+), 85 deletions(-)
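For reviewers, a minimal sketch of the caller-visible change (illustrative
only, not part of the patch: "my_obj", "my_ipi_func" and "obj" are made-up
names, and SMP_CALL_TYPE_ASYNC is assumed to be the flag introduced
earlier in this series):

	struct my_obj {
		call_single_data_t csd;	/* pre-allocated, owned by the caller */
		int data;
	};

	/* Runs on the target CPU from the IPI handler. */
	static void my_ipi_func(void *info)
	{
		struct my_obj *obj = info;

		obj->data++;
	}

	obj->csd.func = my_ipi_func;
	obj->csd.info = obj;

	/* Old call site; still compiles, now expands to smp_call_private(): */
	err = smp_call_function_single_async(cpu, &obj->csd);

	/* Equivalent direct use of the new interface: */
	err = smp_call_private(cpu, &obj->csd, SMP_CALL_TYPE_ASYNC);

	/* Either form returns -EBUSY if this csd is still in flight. */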
diff --git a/include/linux/smp.h b/include/linux/smp.h
index bee1e6b5b2fd..0301faf270bf 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -206,7 +206,8 @@ int smp_call_function_single(int cpuid, smp_call_func_t func, void *info,
 void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
 			   void *info, bool wait, const struct cpumask *mask);
 
-int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
+#define smp_call_function_single_async(cpu, csd) \
+	smp_call_private(cpu, csd, SMP_CALL_TYPE_ASYNC)
 
 /*
  * Cpus stopping functions in panic. All have default weak definitions.
diff --git a/kernel/smp.c b/kernel/smp.c
index 8998b97d5f72..51715633b4f7 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -429,41 +429,6 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
 	send_call_function_single_ipi(cpu);
 }
 
-/*
- * Insert a previously allocated call_single_data_t element
- * for execution on the given CPU. data must already have
- * ->func, ->info, and ->flags set.
- */
-static int generic_exec_single(int cpu, struct __call_single_data *csd)
-{
-	if (cpu == smp_processor_id()) {
-		smp_call_func_t func = csd->func;
-		void *info = csd->info;
-		unsigned long flags;
-
-		/*
-		 * We can unlock early even for the synchronous on-stack case,
-		 * since we're doing this from the same CPU..
-		 */
-		csd_lock_record(csd);
-		csd_unlock(csd);
-		local_irq_save(flags);
-		func(info);
-		csd_lock_record(NULL);
-		local_irq_restore(flags);
-		return 0;
-	}
-
-	if ((unsigned)cpu >= nr_cpu_ids || !cpu_online(cpu)) {
-		csd_unlock(csd);
-		return -ENXIO;
-	}
-
-	__smp_call_single_queue(cpu, &csd->node.llist);
-
-	return 0;
-}
-
 /**
  * generic_smp_call_function_single_interrupt - Execute SMP IPI callbacks
  *
@@ -661,52 +626,6 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 }
 EXPORT_SYMBOL(smp_call_function_single);
 
-/**
- * smp_call_function_single_async() - Run an asynchronous function on a
- *				      specific CPU.
- * @cpu: The CPU to run on.
- * @csd: Pre-allocated and setup data structure
- *
- * Like smp_call_function_single(), but the call is asynchonous and
- * can thus be done from contexts with disabled interrupts.
- *
- * The caller passes his own pre-allocated data structure
- * (ie: embedded in an object) and is responsible for synchronizing it
- * such that the IPIs performed on the @csd are strictly serialized.
- *
- * If the function is called with one csd which has not yet been
- * processed by previous call to smp_call_function_single_async(), the
- * function will return immediately with -EBUSY showing that the csd
- * object is still in progress.
- *
- * NOTE: Be careful, there is unfortunately no current debugging facility to
- * validate the correctness of this serialization.
- *
- * Return: %0 on success or negative errno value on error
- */
-int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
-{
-	int err = 0;
-
-	preempt_disable();
-
-	if (csd->node.u_flags & CSD_FLAG_LOCK) {
-		err = -EBUSY;
-		goto out;
-	}
-
-	csd->node.u_flags = CSD_FLAG_LOCK;
-	smp_wmb();
-
-	err = generic_exec_single(cpu, csd);
-
-out:
-	preempt_enable();
-
-	return err;
-}
-EXPORT_SYMBOL_GPL(smp_call_function_single_async);
-
 /*
  * smp_call_function_any - Run a function on any of the given cpus
  * @mask: The mask of cpus it can run on.
@@ -1251,16 +1170,92 @@ EXPORT_SYMBOL(smp_call_mask_cond);
  * Because the call is asynchonous with a preallocated csd structure, thus
  * it can be called from contexts with disabled interrupts.
  *
- * Parameters
+ * Ideally this functionality should be part of smp_call_mask_cond().
+ * Because the csd is provided and maintained by the callers, merging this
+ * functionality into smp_call_mask_cond() will result in some extra
+ * complications in it. Until there is a better way to facilitate all
+ * kinds of calls, let's still handle this case with a separate function.
+ *
+ * The bit CSD_FLAG_LOCK will be set to csd->node.u_flags only if the
+ * call is made as type CSD_TYPE_SYNC or CSD_TYPE_ASYNC.
  *
+ * Parameters:
  * cpu: Must be a positive value less than nr_cpu_id.
  * csd: The private csd provided by the caller.
- *
  * Others: see smp_call().
+ *
+ * Return: %0 on success or negative errno value on error.
+ *
+ * The following comments are from smp_call_function_single_async():
+ *
+ *	The call is asynchronous and can thus be done from contexts with
+ *	disabled interrupts. If the function is called with one csd which
+ *	has not yet been processed by previous call, the function will
+ *	return immediately with -EBUSY showing that the csd object is
+ *	still in progress.
+ *
+ *	NOTE: Be careful, there is unfortunately no current debugging
+ *	facility to validate the correctness of this serialization.
  */
 int smp_call_private(int cpu, call_single_data_t *csd, unsigned int flags)
 {
-	return 0;
+	int err = 0;
+
+	if ((unsigned int)cpu >= nr_cpu_ids || !cpu_online(cpu)) {
+		pr_warn("cpu ID must be a positive number < nr_cpu_ids and must be currently online\n");
+		return -EINVAL;
+	}
+
+	if (csd == NULL) {
+		pr_warn("csd must not be NULL\n");
+		return -EINVAL;
+	}
+
+	preempt_disable();
+	if (csd->node.u_flags & CSD_FLAG_LOCK) {
+		err = -EBUSY;
+		goto out;
+	}
+
+	/*
+	 * CSD_FLAG_LOCK is set for CSD_TYPE_SYNC or CSD_TYPE_ASYNC only.
+	 */
+	if ((flags & ~(CSD_TYPE_SYNC | CSD_TYPE_ASYNC)) == 0)
+		csd->node.u_flags = CSD_FLAG_LOCK | flags;
+	else
+		csd->node.u_flags = flags;
+
+	if (cpu == smp_processor_id()) {
+		smp_call_func_t func = csd->func;
+		void *info = csd->info;
+		unsigned long flags;
+
+		/*
+		 * We can unlock early even for the synchronous on-stack case,
+		 * since we're doing this from the same CPU..
+		 */
+		csd_lock_record(csd);
+		csd_unlock(csd);
+		local_irq_save(flags);
+		func(info);
+		csd_lock_record(NULL);
+		local_irq_restore(flags);
+		goto out;
+	}
+
+	/*
+	 * Ensure the flags are visible before the csd
+	 * goes to the queue.
+	 */
+	smp_wmb();
+
+	__smp_call_single_queue(cpu, &csd->node.llist);
+
+	if (flags & CSD_TYPE_SYNC)
+		csd_lock_wait(csd);
+out:
+	preempt_enable();
+	return err;
 }
 EXPORT_SYMBOL(smp_call_private);
-- 
2.27.0