Date: Fri, 28 Jul 2017 17:06:53 +0000 (UTC)
From: Mathieu Desnoyers
To: Peter Zijlstra
Cc: "Paul E. McKenney", linux-kernel, Boqun Feng, Andrew Hunter,
 maged michael, gromer, Avi Kivity, Michael Ellerman, Nicholas Piggin,
 Benjamin Herrenschmidt, Palmer Dabbelt
Message-ID: <856243469.29609.1501261613685.JavaMail.zimbra@efficios.com>
In-Reply-To: <20170728164642.jolhwyqs3swhzmrb@hirez.programming.kicks-ass.net>
References: <20170727211314.32666-1-mathieu.desnoyers@efficios.com>
 <20170728085532.ylhuz2irwmgpmejv@hirez.programming.kicks-ass.net>
 <20170728115702.5vgnvwhmbbmyrxbf@hirez.programming.kicks-ass.net>
 <2118431661.29566.1501256295573.JavaMail.zimbra@efficios.com>
 <20170728164642.jolhwyqs3swhzmrb@hirez.programming.kicks-ass.net>
Subject: Re: [RFC PATCH v2] membarrier: expedited private command

----- On Jul 28, 2017, at 12:46 PM, Peter Zijlstra peterz@infradead.org wrote:

> On Fri, Jul 28, 2017 at 03:38:15PM +0000, Mathieu Desnoyers wrote:
>> > Which only leaves PPC stranded.. but the 'good' news is that mpe says
>> > they'll probably need a barrier in switch_mm() in any case.
>>
>> As I pointed out in my other email, I plan to do this:
>>
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2636,6 +2636,11 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>>  	vtime_task_switch(prev);
>>  	perf_event_task_sched_in(prev, current);
>
> Putting it here would place it _inside_ the rq->lock, which seems to
> make more sense given the purpose of the barrier, but either way works
> given its definition.

Given its name, "...after_unlock_lock", I thought it would be clearer to
put it after the unlock. Anyway, this barrier does not seem to be there
to provide the release barrier per se (the unlock already has release
semantics), but rather to guarantee a full memory barrier with respect
to memory accesses that are synchronized by means other than this lock.

>
>>  	finish_lock_switch(rq, prev);
>
> You could put the whole thing inside IS_ENABLED(CONFIG_SYSMEMBARRIER) or
> something.

I'm tempted to wait until we hear from the powerpc maintainers, so we
learn whether they deeply care about this extra barrier in
finish_task_switch(), before making it conditional on CONFIG_MEMBARRIER.

Having a guaranteed barrier after context switch on all architectures
may have other uses.

Thanks,

Mathieu

>
>> +	/*
>> +	 * The membarrier system call requires a full memory barrier
>> +	 * after storing to rq->curr, before going back to user-space.
>> +	 */
>> +	smp_mb__after_unlock_lock();
>>  	finish_arch_post_lock_switch();
>>
>
>  	fire_sched_in_preempt_notifiers(current);

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
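
P.S.: a rough sketch, for illustration only and not part of the posted
patch, of what the conditional variant discussed above could look like,
assuming the barrier keeps the placement shown in the hunk and is gated
on CONFIG_MEMBARRIER via IS_ENABLED():

	/* In finish_task_switch(), after rq->lock has been released: */
	finish_lock_switch(rq, prev);
	/*
	 * The membarrier system call requires a full memory barrier
	 * after storing to rq->curr, before going back to user-space.
	 * Only emit the barrier when the membarrier syscall is built in.
	 */
	if (IS_ENABLED(CONFIG_MEMBARRIER))
		smp_mb__after_unlock_lock();
	finish_arch_post_lock_switch();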