Date: Tue, 24 Nov 2020 00:21:06 +0100
From: Frederic Weisbecker
To: Alex Belits
Cc: Prasun Kapoor, "linux-api@vger.kernel.org", "davem@davemloft.net",
	"trix@redhat.com", "mingo@kernel.org", "linux-kernel@vger.kernel.org",
	"rostedt@goodmis.org", "peterx@redhat.com", "tglx@linutronix.de",
	"nitesh@redhat.com", "linux-arch@vger.kernel.org", "mtosatti@redhat.com",
	"will@kernel.org", "peterz@infradead.org", "leon@sidebranch.com",
	"linux-arm-kernel@lists.infradead.org", "catalin.marinas@arm.com",
	"pauld@redhat.com", "netdev@vger.kernel.org"
Subject: Re: [EXT] Re: [PATCH v5 9/9] task_isolation: kick_all_cpus_sync: don't kick isolated cpus
Message-ID: <20201123232106.GD1751@lothringen>
References: <8d887e59ca713726f4fcb25a316e1e932b02823e.camel@marvell.com>
 <3236b13f42679031960c5605be20664e90e75223.camel@marvell.com>
 <20201123222907.GC1751@lothringen>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 23, 2020 at 10:39:34PM +0000, Alex Belits wrote:
>
> On Mon, 2020-11-23 at 23:29 +0100, Frederic Weisbecker wrote:
> > External Email
> >
> > ----------------------------------------------------------------------
> > On Mon, Nov 23, 2020 at 05:58:42PM +0000, Alex Belits wrote:
> > > From: Yuri Norov
> > >
> > > Make sure that kick_all_cpus_sync() does not call CPUs that are
> > > running isolated tasks.
> > >
> > > Signed-off-by: Yuri Norov
> > > [abelits@marvell.com: use safe task_isolation_cpumask() implementation]
> > > Signed-off-by: Alex Belits
> > > ---
> > >  kernel/smp.c | 14 +++++++++++++-
> > >  1 file changed, 13 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > index 4d17501433be..b2faecf58ed0 100644
> > > --- a/kernel/smp.c
> > > +++ b/kernel/smp.c
> > > @@ -932,9 +932,21 @@ static void do_nothing(void *unused)
> > >   */
> > >  void kick_all_cpus_sync(void)
> > >  {
> > > +	struct cpumask mask;
> > > +
> > >  	/* Make sure the change is visible before we kick the cpus */
> > >  	smp_mb();
> > > -	smp_call_function(do_nothing, NULL, 1);
> > > +
> > > +	preempt_disable();
> > > +#ifdef CONFIG_TASK_ISOLATION
> > > +	cpumask_clear(&mask);
> > > +	task_isolation_cpumask(&mask);
> > > +	cpumask_complement(&mask, &mask);
> > > +#else
> > > +	cpumask_setall(&mask);
> > > +#endif
> > > +	smp_call_function_many(&mask, do_nothing, NULL, 1);
> > > +	preempt_enable();
> >
> > Same comment about IPIs here.
>
> This is different from timers. The original design was based on the
> idea that every CPU should be able to enter the kernel at any time and
> run kernel code with no additional preparation. Then the only solution
> is to always do a full broadcast and require all CPUs to process it.
>
> What I am trying to introduce is the idea of a CPU that is not likely
> to run kernel code any time soon, and can afford to go through an
> additional synchronization procedure on the next entry into the kernel.
> The synchronization is not skipped, it simply happens later, early in
> the kernel entry code.

Ah I see, so it is ordered this way:

              ll_isol_flags = ISOLATED

       CPU 0                                  CPU 1
  ------------------                     -----------------
                                         // kernel entry
  data_to_sync = 1                       ll_isol_flags = ISOLATED_BROKEN
  smp_mb()                               smp_mb()
  if ll_isol_flags(CPU 1) == ISOLATED    READ data_to_sync
      smp_call(CPU 1)

You should document that, i.e. explain why what you are doing is safe.

Also beware that the data to sync in question must not be needed by the
entry code before task_isolation_kernel_enter() has run. You need to
audit all the callers of kick_all_cpus_sync().
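
For illustration, here is a rough sketch of how that flag/barrier pairing
could look in code. It only mirrors the diagram above, it is not the actual
series: the per-CPU variable ll_isol_flags and the LL_ISOLATED /
LL_ISOLATED_BROKEN values are assumed names, and kick_cpu_needs_ipi() is a
hypothetical helper; only task_isolation_kernel_enter() and
kick_all_cpus_sync() are taken from the discussion.

	/* Per-CPU isolation state, written only by the owning CPU. */
	static DEFINE_PER_CPU(int, ll_isol_flags);

	#define LL_ISOLATED		1
	#define LL_ISOLATED_BROKEN	2

	/* Isolated CPU side: runs early in the kernel entry path. */
	void task_isolation_kernel_enter(void)
	{
		/* Leave isolation before touching any kernel data. */
		this_cpu_write(ll_isol_flags, LL_ISOLATED_BROKEN);
		/*
		 * Pairs with the smp_mb() in kick_all_cpus_sync(): once the
		 * sender can observe LL_ISOLATED_BROKEN, this CPU is
		 * guaranteed to observe data published before the sender
		 * sampled the flag, so a skipped IPI is not a problem.
		 */
		smp_mb();
	}

	/* Sender side: does the remote @cpu still need the IPI? */
	static bool kick_cpu_needs_ipi(int cpu)
	{
		/*
		 * The caller has already published its data and executed
		 * the smp_mb() in kick_all_cpus_sync(), ordering that store
		 * before this flag read.
		 */
		return per_cpu(ll_isol_flags, cpu) != LL_ISOLATED;
	}

Either the sender reads LL_ISOLATED and the isolated CPU picks the data up
through the barrier pair on its next kernel entry, or the sender reads
LL_ISOLATED_BROKEN and falls back to sending the IPI, so the ordering holds
in both cases.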