From: Usama Arif
To: dwmw2@infradead.org, tglx@linutronix.de, kim.phillips@amd.com
Cc: piotrgorski@cachyos.org, oleksandr@natalenko.name, arjan@linux.intel.com,
    mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
    x86@kernel.org, pbonzini@redhat.com, paulmck@kernel.org,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, rcu@vger.kernel.org,
    mimoja@mimoja.de, hewenliang4@huawei.com, thomas.lendacky@amd.com,
    seanjc@google.com, pmenzel@molgen.mpg.de, fam.zheng@bytedance.com,
    punit.agrawal@bytedance.com, simon.evans@bytedance.com,
    liangma@liangbit.com, David Woodhouse, Usama Arif
Subject: [PATCH v10 3/8] cpu/hotplug: Add dynamic parallel bringup states before CPUHP_BRINGUP_CPU
Date: Tue, 21 Feb 2023 22:33:47 +0000
Message-Id: <20230221223352.2288528-4-usama.arif@bytedance.com>
In-Reply-To: <20230221223352.2288528-1-usama.arif@bytedance.com>
References: <20230221223352.2288528-1-usama.arif@bytedance.com>

From: David Woodhouse

There is often significant latency in the early stages of CPU bringup, and
time is wasted by waking each CPU (e.g. with INIT/SIPI/SIPI on x86) and then
waiting for it to respond before moving on to the next.

Allow a platform to register a set of pre-bringup CPUHP states to which each
CPU can be stepped in parallel, thus absorbing some of that latency.

There is a subtlety here: even with an empty CPUHP_BP_PARALLEL_DYN step, this
means that *all* CPUs are brought through the prepare states and to
CPUHP_BP_PREPARE_DYN before any of them are taken to CPUHP_BRINGUP_CPU and
then allowed to run for themselves to CPUHP_ONLINE.

So any combination of prepare/start calls which depends on A-B ordering for
each CPU in turn would explode horribly. The X2APIC code, for example, used
to allocate a cluster mask 'just in case', store it in a global variable in
the prep stage, then potentially consume that preallocated structure from
the AP and set the global pointer to NULL to be reallocated in
CPUHP_X2APIC_PREPARE for the next CPU.

Any platform enabling the CPUHP_BP_PARALLEL_DYN steps must be reviewed and
tested to ensure that such issues do not exist, and the existing behaviour
of bringing CPUs to CPUHP_BP_PREPARE_DYN and then immediately to
CPUHP_BRINGUP_CPU and CPUHP_ONLINE only one at a time does not change unless
such a state is registered.

Note that the new parallel stages do *not* yet bring each AP to the
CPUHP_BRINGUP_CPU state at the same time, only to the new states which exist
before it. The final loop in bringup_nonboot_cpus() is untouched, bringing
each AP in turn from the final PARALLEL_DYN state (or all the way from
CPUHP_OFFLINE) to CPUHP_BRINGUP_CPU and then waiting for that AP to do its
own processing and reach CPUHP_ONLINE before releasing the next.
Parallelising that part by bringing them all to CPUHP_BRINGUP_CPU and then
waiting for them all is an exercise for the future.

Signed-off-by: David Woodhouse
Signed-off-by: Usama Arif
Tested-by: Paul E. McKenney
Tested-by: Kim Phillips
Tested-by: Oleksandr Natalenko
---
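[Note for readers, not part of the patch: the sketch below shows roughly how
an architecture could opt in by registering one of the new parallel
pre-bringup states. The function names and the "arch/cpu:kick" state name
are made-up placeholders rather than anything from this series; the real
architecture wiring lives in the later patches.]

#include <linux/cpuhotplug.h>
#include <linux/init.h>

/*
 * Invoked on the control CPU for each target @cpu as it is stepped to the
 * registered state, i.e. before CPUHP_BRINGUP_CPU. An architecture would
 * kick the AP here (e.g. send INIT/SIPI/SIPI on x86) and return without
 * waiting for the AP to respond.
 */
static int arch_cpu_kick(unsigned int cpu)
{
        /* fire-and-forget wakeup of @cpu would go here */
        return 0;
}

/*
 * Would be called from the architecture's smp_prepare_cpus() path, before
 * bringup_nonboot_cpus() walks the present CPUs. CPUHP_BP_PARALLEL_DYN is a
 * dynamic range: cpuhp_store_callbacks() hands out the next free slot via
 * cpuhp_reserve_state(), so more than one independent pre-bringup step can
 * be registered. A negative return value indicates failure.
 */
static int __init arch_register_parallel_bringup(void)
{
        int ret;

        ret = cpuhp_setup_state_nocalls(CPUHP_BP_PARALLEL_DYN, "arch/cpu:kick",
                                        arch_cpu_kick, NULL);
        return ret < 0 ? ret : 0;
}

[If no such state is registered, the while loop added to
bringup_nonboot_cpus() below finds no named CPUHP_BP_PARALLEL_DYN entry and
the existing one-CPU-at-a-time behaviour is unchanged.]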
 include/linux/cpuhotplug.h |  2 ++
 kernel/cpu.c               | 31 +++++++++++++++++++++++++++++--
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 6c6859bfc454..e5a73ae6ccc0 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -133,6 +133,8 @@ enum cpuhp_state {
 	CPUHP_MIPS_SOC_PREPARE,
 	CPUHP_BP_PREPARE_DYN,
 	CPUHP_BP_PREPARE_DYN_END	= CPUHP_BP_PREPARE_DYN + 20,
+	CPUHP_BP_PARALLEL_DYN,
+	CPUHP_BP_PARALLEL_DYN_END	= CPUHP_BP_PARALLEL_DYN + 4,
 	CPUHP_BRINGUP_CPU,
 
 	/*
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6c0a92ca6bb5..fffb0da61ccc 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1504,8 +1504,30 @@ int bringup_hibernate_cpu(unsigned int sleep_cpu)
 
 void bringup_nonboot_cpus(unsigned int setup_max_cpus)
 {
+	unsigned int n = setup_max_cpus - num_online_cpus();
 	unsigned int cpu;
 
+	/*
+	 * An architecture may have registered parallel pre-bringup states to
+	 * which each CPU may be brought in parallel. For each such state,
+	 * bring N CPUs to it in turn before the final round of bringing them
+	 * online.
+	 */
+	if (n > 0) {
+		enum cpuhp_state st = CPUHP_BP_PARALLEL_DYN;
+
+		while (st <= CPUHP_BP_PARALLEL_DYN_END && cpuhp_hp_states[st].name) {
+			int i = n;
+
+			for_each_present_cpu(cpu) {
+				cpu_up(cpu, st);
+				if (!--i)
+					break;
+			}
+			st++;
+		}
+	}
+
 	for_each_present_cpu(cpu) {
 		if (num_online_cpus() >= setup_max_cpus)
 			break;
@@ -1882,6 +1904,10 @@ static int cpuhp_reserve_state(enum cpuhp_state state)
 		step = cpuhp_hp_states + CPUHP_BP_PREPARE_DYN;
 		end = CPUHP_BP_PREPARE_DYN_END;
 		break;
+	case CPUHP_BP_PARALLEL_DYN:
+		step = cpuhp_hp_states + CPUHP_BP_PARALLEL_DYN;
+		end = CPUHP_BP_PARALLEL_DYN_END;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1906,14 +1932,15 @@ static int cpuhp_store_callbacks(enum cpuhp_state state, const char *name,
 	/*
 	 * If name is NULL, then the state gets removed.
 	 *
-	 * CPUHP_AP_ONLINE_DYN and CPUHP_BP_PREPARE_DYN are handed out on
+	 * CPUHP_AP_ONLINE_DYN and CPUHP_BP_P*_DYN are handed out on
 	 * the first allocation from these dynamic ranges, so the removal
 	 * would trigger a new allocation and clear the wrong (already
 	 * empty) state, leaving the callbacks of the to be cleared state
 	 * dangling, which causes wreckage on the next hotplug operation.
 	 */
 	if (name && (state == CPUHP_AP_ONLINE_DYN ||
-		     state == CPUHP_BP_PREPARE_DYN)) {
+		     state == CPUHP_BP_PREPARE_DYN ||
+		     state == CPUHP_BP_PARALLEL_DYN)) {
 		ret = cpuhp_reserve_state(state);
 		if (ret < 0)
 			return ret;
-- 
2.25.1