Date: Tue, 19 Sep 2017 23:47:02 +0200
From: Andrea Parri
To: Mathieu Desnoyers
Cc: "Paul E. McKenney", linux-kernel@vger.kernel.org, Peter Zijlstra,
	Boqun Feng, Andrew Hunter, Maged Michael, gromer@google.com,
	Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Dave Watson
Subject: Re: [PATCH v2] membarrier: Document scheduler barrier requirements
Message-ID: <20170919214702.GA2793@andrea>
References: <20170919195631.17865-1-mathieu.desnoyers@efficios.com>
In-Reply-To: <20170919195631.17865-1-mathieu.desnoyers@efficios.com>

On Tue, Sep 19, 2017 at 03:56:31PM -0400, Mathieu Desnoyers wrote:
> Document the membarrier requirement on having a full memory barrier in
> __schedule() after coming from user-space, before storing to rq->curr.
> It is provided by smp_mb__before_spinlock() in __schedule().

It is smp_mb__after_spinlock().  (Yes: I missed it in my previous email.)

  Andrea

> 
> Document that membarrier requires a full barrier on transition from
> kernel thread to userspace thread. We currently have an implicit barrier
> from atomic_dec_and_test() in mmdrop() that ensures this.
> 
> The x86 switch_mm_irqs_off() full barrier is currently provided by many
> cpumask update operations as well as write_cr3(). Document that
> write_cr3() provides this barrier.
> 
> Changes since v1:
> - Update comments to match reality for code paths which are after
>   storing to rq->curr, before returning to user-space.
> 
> Signed-off-by: Mathieu Desnoyers
> CC: Peter Zijlstra
> CC: Paul E. McKenney
> CC: Boqun Feng
> CC: Andrew Hunter
> CC: Maged Michael
> CC: gromer@google.com
> CC: Avi Kivity
> CC: Benjamin Herrenschmidt
> CC: Paul Mackerras
> CC: Michael Ellerman
> CC: Dave Watson
> ---
>  arch/x86/mm/tlb.c        | 5 +++++
>  include/linux/sched/mm.h | 5 +++++
>  kernel/sched/core.c      | 9 +++++++++
>  3 files changed, 19 insertions(+)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 1ab3821f9e26..74f94fe4aded 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -144,6 +144,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>          }
>  #endif
>  
> +        /*
> +         * The membarrier system call requires a full memory barrier
> +         * before returning to user-space, after storing to rq->curr.
> +         * Writing to CR3 provides that full memory barrier.
> +         */
>          if (real_prev == next) {
>                  VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>                            next->context.ctx_id);
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 3a19c253bdb1..766cc47c4d7c 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -38,6 +38,11 @@ static inline void mmgrab(struct mm_struct *mm)
>  extern void __mmdrop(struct mm_struct *);
>  static inline void mmdrop(struct mm_struct *mm)
>  {
> +        /*
> +         * The implicit full barrier implied by atomic_dec_and_test is
> +         * required by the membarrier system call before returning to
> +         * user-space, after storing to rq->curr.
> +         */
>          if (unlikely(atomic_dec_and_test(&mm->mm_count)))
>                  __mmdrop(mm);
>  }
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 18a6966567da..7977b25acf54 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2658,6 +2658,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>          finish_arch_post_lock_switch();
>  
>          fire_sched_in_preempt_notifiers(current);
> +        /*
> +         * When transitioning from a kernel thread to a userspace
> +         * thread, mmdrop()'s implicit full barrier is required by the
> +         * membarrier system call, because the current active_mm can
> +         * become the current mm without going through switch_mm().
> +         */
>          if (mm)
>                  mmdrop(mm);
>          if (unlikely(prev_state == TASK_DEAD)) {
> @@ -3299,6 +3305,9 @@ static void __sched notrace __schedule(bool preempt)
>           * Make sure that signal_pending_state()->signal_pending() below
>           * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>           * done by the caller to avoid the race with signal_wake_up().
> +         *
> +         * The membarrier system call requires a full memory barrier
> +         * after coming from user-space, before storing to rq->curr.
>           */
>          rq_lock(rq, &rf);
>          smp_mb__after_spinlock();
> -- 
> 2.11.0
> 
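
As an aside for readers following along, here is a minimal user-space sketch of
the guarantee these scheduler-side barriers exist to preserve.  It is
illustrative only, not part of the patch or the reply above: it assumes a
kernel built with CONFIG_MEMBARRIER (v4.3 or later), and the sys_membarrier()
wrapper is defined locally for the example; only MEMBARRIER_CMD_QUERY and
MEMBARRIER_CMD_SHARED from <linux/membarrier.h> are used.

#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Call the raw system call; the wrapper name is local to this example. */
static int sys_membarrier(int cmd, int flags)
{
        return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
        /* MEMBARRIER_CMD_QUERY returns a bit mask of supported commands. */
        int cmds = sys_membarrier(MEMBARRIER_CMD_QUERY, 0);

        if (cmds < 0 || !(cmds & MEMBARRIER_CMD_SHARED)) {
                fprintf(stderr, "MEMBARRIER_CMD_SHARED not supported\n");
                return 1;
        }

        /*
         * Typical pairing: hot paths in other threads rely on a plain
         * compiler barrier, e.g. __asm__ __volatile__("" ::: "memory"),
         * and this rare slow path upgrades those compiler barriers to
         * full memory barriers.  Roughly: when the call below returns,
         * every thread that was running user-space code at the time of
         * the call has gone through a full memory barrier -- which is
         * what the ordering around the rq->curr store in the patch has
         * to uphold.
         */
        if (sys_membarrier(MEMBARRIER_CMD_SHARED, 0)) {
                perror("membarrier");
                return 1;
        }
        return 0;
}

The point of the pattern is its asymmetry: the cost is paid by the rare
membarrier() caller rather than on every execution of the hot path, which is
why the scheduler has to provide the full barriers around the rq->curr update
on the caller's behalf.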