From: Zqiang
To: paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org
Cc: rcu@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug
Date: Mon, 28 Nov 2022 22:34:28 +0800
Message-Id: <20221128143428.1703744-1-qiang1.zhang@intel.com>

Currently, rcu_tasks_rude_wait_gp() is invoked to wait for one rude
RCU-Tasks grace period. If __num_online_cpus == 1, it returns
immediately, which is taken to mean that the rude RCU-Tasks grace
period has already ended.
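For reference, the fastpath in question currently reads roughly as
follows (this is simply the unpatched code quoted from the diff
context further below, trimmed for illustration):

	// Wait for one rude RCU-tasks grace period.
	static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
	{
		if (num_online_cpus() <= 1)
			return;	// Fastpath for only one CPU.
		rtp->n_ipis += cpumask_weight(cpu_online_mask);
		schedule_on_each_cpu(rcu_tasks_be_rude);
	}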
Suppose the system has two CPUs, and consider the following scenario:

        CPU0                                CPU1 (going offline)
                                       migration/1 task:
                                       cpu_stopper_thread
                                        -> take_cpu_down
                                           -> _cpu_disable
                                             (dec __num_online_cpus)
                                           -> cpuhp_invoke_callback
                                       preempt_disable
                                       access old_data0
        task1
        del old_data0                  .....
        synchronize_rcu_tasks_rude()
        task1 schedule out
        ....
        task2 schedule in
        rcu_tasks_rude_wait_gp()
            ->__num_online_cpus == 1
              ->return
        ....
        task1 schedule in
        ->free old_data0
                                       preempt_enable

When CPU1 decrements __num_online_cpus down to one, CPU1 has not yet
finished going offline: the stop-machine task (migration/1) is still
running on CPU1 and may still be accessing 'old_data0', even though
'old_data0' has already been freed on CPU0.

This commit therefore adds cpus_read_lock()/cpus_read_unlock()
protection around the access to __num_online_cpus, ensuring that any
CPU partway through the offline process has fully completed it before
the rude RCU-Tasks grace period is allowed to end.

Signed-off-by: Zqiang
---
 kernel/rcu/tasks.h | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 4a991311be9b..08e72c6462d8 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
 {
 }
 
+static DEFINE_PER_CPU(struct work_struct, rude_work);
+
 // Wait for one rude RCU-tasks grace period.
 static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
 {
+	int cpu;
+	struct work_struct *work;
+
+	cpus_read_lock();
 	if (num_online_cpus() <= 1)
-		return;	// Fastpath for only one CPU.
+		goto end;// Fastpath for only one CPU.
 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
-	schedule_on_each_cpu(rcu_tasks_be_rude);
+	for_each_online_cpu(cpu) {
+		work = per_cpu_ptr(&rude_work, cpu);
+		INIT_WORK(work, rcu_tasks_be_rude);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_online_cpu(cpu)
+		flush_work(per_cpu_ptr(&rude_work, cpu));
+
+end:
+	cpus_read_unlock();
 }
 
 void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
-- 
2.25.1
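For ease of review, this is roughly how rcu_tasks_rude_wait_gp() would
read with the hunk above applied (a consolidated sketch assembled from
the patch itself, not an additional change):

	static DEFINE_PER_CPU(struct work_struct, rude_work);

	// Wait for one rude RCU-tasks grace period.
	static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
	{
		int cpu;
		struct work_struct *work;

		cpus_read_lock();
		if (num_online_cpus() <= 1)
			goto end;	// Fastpath for only one CPU.
		rtp->n_ipis += cpumask_weight(cpu_online_mask);

		// Queue and then flush one work item per online CPU,
		// forcing a schedule (and thus a voluntary context
		// switch) on each of them.
		for_each_online_cpu(cpu) {
			work = per_cpu_ptr(&rude_work, cpu);
			INIT_WORK(work, rcu_tasks_be_rude);
			schedule_work_on(cpu, work);
		}
		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(&rude_work, cpu));
	end:
		cpus_read_unlock();
	}

Because the CPU-hotplug path holds the hotplug lock for write across
the entire offline operation, holding cpus_read_lock() here means no
CPU can be partway through going offline while __num_online_cpus is
examined, which closes the race shown in the scenario above. The
per-CPU work items are presumably open-coded (rather than reusing
schedule_on_each_cpu()) so that the queue-and-flush sequence can run
inside that same read-side critical section.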