From: Michael Ellerman
To: Peter Zijlstra, Mathieu Desnoyers
Cc: "Paul E. McKenney", linux-kernel@vger.kernel.org, Boqun Feng,
	Andrew Hunter, Maged Michael, gromer@google.com, Avi Kivity,
	Nicholas Piggin, Benjamin Herrenschmidt, Palmer Dabbelt
Subject: Re: [RFC PATCH v2] membarrier: expedited private command
Date: Mon, 31 Jul 2017 23:20:59 +1000
Message-ID: <87tw1s4u9w.fsf@concordia.ellerman.id.au>
In-Reply-To: <20170728115702.5vgnvwhmbbmyrxbf@hirez.programming.kicks-ass.net>
References: <20170727211314.32666-1-mathieu.desnoyers@efficios.com>
	<20170728085532.ylhuz2irwmgpmejv@hirez.programming.kicks-ass.net>
	<20170728115702.5vgnvwhmbbmyrxbf@hirez.programming.kicks-ass.net>

Peter Zijlstra writes:

> On Fri, Jul 28, 2017 at 10:55:32AM +0200, Peter Zijlstra wrote:
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index e9785f7aed75..33f34a201255 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2641,8 +2641,18 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>>  	finish_arch_post_lock_switch();
>>
>>  	fire_sched_in_preempt_notifiers(current);
>> +
>> +	/*
>> +	 * For CONFIG_MEMBARRIER we need a full memory barrier after the
>> +	 * rq->curr assignment. Not all architectures have one in either
>> +	 * switch_to() or switch_mm() so we use (and complement) the one
>> +	 * implied by mmdrop()'s atomic_dec_and_test().
>> +	 */
>>  	if (mm)
>>  		mmdrop(mm);
>> +	else if (IS_ENABLED(CONFIG_MEMBARRIER))
>> +		smp_mb();
>> +
>>  	if (unlikely(prev_state == TASK_DEAD)) {
>>  		if (prev->sched_class->task_dead)
>>  			prev->sched_class->task_dead(prev);
>>
>
>> a whole bunch of architectures don't in fact need this extra barrier at all.
>
> In fact, I'm fairly sure it's only PPC.
>
> Because only ARM64 and PPC actually implement ACQUIRE/RELEASE with
> anything other than smp_mb() (for now, RISC-V is in this same boat and
> MIPS could be if they ever sort out their fancy barriers).
>
> TSO archs use a regular STORE for RELEASE, but all their atomics imply a
> smp_mb() and there are enough around to make one happen (typically
> mm_cpumask updates).
>
> Everybody else, aside from ARM64 and PPC, must use smp_mb() for
> ACQUIRE/RELEASE.
>
> ARM64 has a super duper barrier in switch_to().
>
> Which only leaves PPC stranded... but the 'good' news is that mpe says
> they'll probably need a barrier in switch_mm() in any case.

I may have been sleep deprived. We have a patch, probably soon to be
merged, which will add an smp_mb() in switch_mm(), but *only* when we add
a CPU to mm_cpumask, i.e. when we run on a CPU we haven't run on before.

I'm not across membarrier enough to know whether that's sufficient, but
it seems unlikely?

cheers