Date: Thu, 14 Jan 2010 13:37:39 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, "Paul E. McKenney", Steven Rostedt,
	Oleg Nesterov, Ingo Molnar, akpm@linux-foundation.org,
	josh@joshtriplett.org, tglx@linutronix.de, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, laijs@cn.fujitsu.com, dipankar@in.ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory barrier (v5)
Message-ID: <20100114183739.GA18435@Krystal>
In-Reply-To: <20100114175449.GA15387@Krystal>

* Mathieu Desnoyers (mathieu.desnoyers@polymtl.ca) wrote:
> * Peter Zijlstra (peterz@infradead.org) wrote:
> > On Thu, 2010-01-14 at 11:26 -0500, Mathieu Desnoyers wrote:
> >
> > > It's this scenario that is causing the problem.
> > > Let's consider this execution:
>
> (slightly augmented)
>
> CPU 0 (membarrier)                CPU 1 (another mm -> our mm)
>                                   switch_mm()
>                                   smp_mb()
>                                   clear_mm_cpumask()
>                                   set_mm_cpumask()
>                                   smp_mb() (by load_cr3() on x86)
>                                   switch_to()
> memory access before membarrier
>
> smp_mb()
> mm_cpumask includes CPU 1
> rcu_read_lock()
> if (CPU 1 mm != our mm)
>   skip CPU 1.
> rcu_read_unlock()
> smp_mb()
>                                   current = next (1)
>                                   urcu read lock()
>                                     read gp
>                                     store local gp (2)
>                                     barrier()
>                                     access critical section data (3)
> memory access after membarrier
>
> So if we don't have any memory barrier between (1) and (3), the memory
> operations can be reordered in such a way that CPU 0 will not send an IPI
> to a CPU that would need to have its barrier() promoted into an
> smp_mb().
>
> > I'm still not getting it; sure, we don't send an IPI, but it will have
> > done an mb() in switch_mm() to become our mm, so even without the IPI it
> > will have executed that mb we were after.
>
> The augmented race window above shows that it would be possible for (2)
> and (3) to be reordered across the barrier(), and therefore the critical
> section access could spill over into an rcu-unlocked state.

To make this painfully clear, I'll reorder the accesses to match the
order in which they reach memory:

CPU 0 (membarrier)                CPU 1 (another mm -> our mm)
                                  switch_mm()
                                  smp_mb()
                                  clear_mm_cpumask()
                                  set_mm_cpumask()
                                  smp_mb() (by load_cr3() on x86)
                                  switch_to()
                                  urcu read lock()
                                  access critical section data (3)
memory access before membarrier

smp_mb()
mm_cpumask includes CPU 1
rcu_read_lock()
if (CPU 1 mm != our mm)
  skip CPU 1.
rcu_read_unlock()
smp_mb()

memory access after membarrier
                                  current = next (1) (buffer flush)
                                  read gp
                                  store local gp (2)

This should make the problem more evident: access (3) is done outside of
the read-side critical section as far as the userspace synchronize_rcu()
is concerned.
Thanks,

Mathieu

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68