Date: Tue, 19 Sep 2017 22:02:27 +0000 (UTC)
From: Mathieu Desnoyers
To: Andrea Parri
Cc: "Paul E. McKenney", linux-kernel, Peter Zijlstra, Boqun Feng,
	Andrew Hunter, maged michael, gromer, Avi Kivity,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Dave Watson
Message-ID: <1204490306.14054.1505858546999.JavaMail.zimbra@efficios.com>
In-Reply-To: <20170919214702.GA2793@andrea>
References: <20170919195631.17865-1-mathieu.desnoyers@efficios.com>
	<20170919214702.GA2793@andrea>
Subject: Re: [PATCH v2] membarrier: Document scheduler barrier requirements

----- On Sep 19, 2017, at 5:47 PM, Andrea Parri parri.andrea@gmail.com wrote:

> On Tue, Sep 19, 2017 at 03:56:31PM -0400, Mathieu Desnoyers wrote:
>> Document the membarrier requirement on having a full memory barrier in
>> __schedule() after coming from user-space, before storing to rq->curr.
>> It is provided by smp_mb__before_spinlock() in __schedule().
>
> It is smp_mb__after_spinlock(). (Yes: I missed it in my previous email.)

OK. Will send a v3 fixing the changelog.

Thanks,

Mathieu

>
>   Andrea
>
>
>>
>> Document that membarrier requires a full barrier on transition from
>> kernel thread to userspace thread. We currently have an implicit barrier
>> from atomic_dec_and_test() in mmdrop() that ensures this.
>>
>> The x86 switch_mm_irqs_off() full barrier is currently provided by many
>> cpumask update operations as well as write_cr3(). Document that
>> write_cr3() provides this barrier.
>>
>> Changes since v1:
>> - Update comments to match reality for code paths which are after
>>   storing to rq->curr, before returning to user-space.
>>
>> Signed-off-by: Mathieu Desnoyers
>> CC: Peter Zijlstra
>> CC: Paul E. McKenney
>> CC: Boqun Feng
>> CC: Andrew Hunter
>> CC: Maged Michael
>> CC: gromer@google.com
>> CC: Avi Kivity
>> CC: Benjamin Herrenschmidt
>> CC: Paul Mackerras
>> CC: Michael Ellerman
>> CC: Dave Watson
>> ---
>>  arch/x86/mm/tlb.c        | 5 +++++
>>  include/linux/sched/mm.h | 5 +++++
>>  kernel/sched/core.c      | 9 +++++++++
>>  3 files changed, 19 insertions(+)
>>
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index 1ab3821f9e26..74f94fe4aded 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -144,6 +144,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>>  	}
>>  #endif
>>
>> +	/*
>> +	 * The membarrier system call requires a full memory barrier
>> +	 * before returning to user-space, after storing to rq->curr.
>> +	 * Writing to CR3 provides that full memory barrier.
>> +	 */
>>  	if (real_prev == next) {
>>  		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>>  			  next->context.ctx_id);
>>
>> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
>> index 3a19c253bdb1..766cc47c4d7c 100644
>> --- a/include/linux/sched/mm.h
>> +++ b/include/linux/sched/mm.h
>> @@ -38,6 +38,11 @@ static inline void mmgrab(struct mm_struct *mm)
>>  extern void __mmdrop(struct mm_struct *);
>>  static inline void mmdrop(struct mm_struct *mm)
>>  {
>> +	/*
>> +	 * The implicit full barrier implied by atomic_dec_and_test is
>> +	 * required by the membarrier system call before returning to
>> +	 * user-space, after storing to rq->curr.
>> +	 */
>>  	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
>>  		__mmdrop(mm);
>>  }
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 18a6966567da..7977b25acf54 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2658,6 +2658,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>>  	finish_arch_post_lock_switch();
>>
>>  	fire_sched_in_preempt_notifiers(current);
>> +	/*
>> +	 * When transitioning from a kernel thread to a userspace
>> +	 * thread, mmdrop()'s implicit full barrier is required by the
>> +	 * membarrier system call, because the current active_mm can
>> +	 * become the current mm without going through switch_mm().
>> +	 */
>>  	if (mm)
>>  		mmdrop(mm);
>>  	if (unlikely(prev_state == TASK_DEAD)) {
>> @@ -3299,6 +3305,9 @@ static void __sched notrace __schedule(bool preempt)
>>  	 * Make sure that signal_pending_state()->signal_pending() below
>>  	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>>  	 * done by the caller to avoid the race with signal_wake_up().
>> +	 *
>> +	 * The membarrier system call requires a full memory barrier
>> +	 * after coming from user-space, before storing to rq->curr.
>>  	 */
>>  	rq_lock(rq, &rf);
>>  	smp_mb__after_spinlock();
>> --
>> 2.11.0

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
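
[Editor's note] For readers who have not used the interface under discussion,
below is a minimal user-space sketch of the membarrier(2) call that the
scheduler-side barriers documented in this patch exist to support. It is
illustrative only and not part of the patch: the membarrier() helper is a
local wrapper around syscall(2) (glibc provides no wrapper), and it assumes a
kernel built with CONFIG_MEMBARRIER (Linux 4.3+) so that the
MEMBARRIER_CMD_SHARED command is available.

	#include <linux/membarrier.h>	/* MEMBARRIER_CMD_* */
	#include <sys/syscall.h>	/* __NR_membarrier */
	#include <unistd.h>
	#include <stdio.h>

	/* Local wrapper: glibc does not expose membarrier(2). */
	static int membarrier(int cmd, int flags)
	{
		return syscall(__NR_membarrier, cmd, flags);
	}

	int main(void)
	{
		int supported;

		/* Ask the kernel which membarrier commands it supports. */
		supported = membarrier(MEMBARRIER_CMD_QUERY, 0);
		if (supported < 0 || !(supported & MEMBARRIER_CMD_SHARED)) {
			fprintf(stderr, "membarrier: CMD_SHARED unsupported\n");
			return 1;
		}

		/*
		 * When this call returns, every running thread in the system
		 * has executed a full memory barrier. The scheduler barriers
		 * documented in the patch above are part of what upholds that
		 * guarantee across context switches.
		 */
		if (membarrier(MEMBARRIER_CMD_SHARED, 0)) {
			perror("membarrier");
			return 1;
		}
		return 0;
	}

The documentation added by the patch describes the pairing both ways around
the store to rq->curr: a full barrier after coming from user-space, before the
store (smp_mb__after_spinlock() following rq_lock() in __schedule()), and a
full barrier after the store, before returning to user-space (write_cr3() in
switch_mm_irqs_off() on x86, or atomic_dec_and_test() in mmdrop() when the
previous task was a kernel thread).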