Date: Fri, 26 Apr 2013 10:48:16 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: Simon Horman, Eric Dumazet, Julian Anastasov, Ingo Molnar,
	lvs-devel@vger.kernel.org, netdev@vger.kernel.org,
	netfilter-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Pablo Neira Ayuso, Dipankar Sarma, dhaval.giani@gmail.com
Subject: Re: [PATCH 2/2] ipvs: Use cond_resched_rcu_lock() helper when dumping connections
Message-ID: <20130426174815.GI3860@linux.vnet.ibm.com>
In-Reply-To: <20130426171948.GA31467@dyad.programming.kicks-ass.net>
References: <1366940708-10180-1-git-send-email-horms@verge.net.au>
	<1366940708-10180-3-git-send-email-horms@verge.net.au>
	<20130426080313.GC8669@dyad.programming.kicks-ass.net>
	<20130426154547.GC3860@linux.vnet.ibm.com>
	<20130426171948.GA31467@dyad.programming.kicks-ass.net>

On Fri, Apr 26, 2013 at 07:19:49PM +0200, Peter Zijlstra wrote:
> On Fri, Apr 26, 2013 at 08:45:47AM -0700, Paul E. McKenney wrote:
> > On Fri, Apr 26, 2013 at 10:03:13AM +0200, Peter Zijlstra wrote:
> > > On Fri, Apr 26, 2013 at 10:45:08AM +0900, Simon Horman wrote:
> > > >
> > > > @@ -975,8 +975,7 @@ static void *ip_vs_conn_array(struct seq_file *seq, loff_t pos)
> > > >  				return cp;
> > > >  			}
> > > >  		}
> > > > -		rcu_read_unlock();
> > > > -		rcu_read_lock();
> > > > +		cond_resched_rcu_lock();
> > > >  	}
> > >
> > > While I agree with the sentiment, I do find it a somewhat dangerous construct
> > > in that it might become far too easy to keep an RCU reference over this break
> > > and thus violate the RCU premise.
> > >
> > > Is there anything that can detect this? Sparse / Coccinelle / smatch? If so,
> > > it would be great to add this to these checkers.
> >
> > I have done some crude Coccinelle patterns in the past, but they are
> > subject to false positives (from when you transfer the pointer from
> > RCU protection to reference-count protection) and also false negatives
> > (when you atomically increment some statistic unrelated to protection).
> >
> > I could imagine maintaining a per-thread count of the number of outermost
> > RCU read-side critical sections at runtime, and then associating that
> > counter with a given pointer at rcu_dereference() time, but this would
> > require either compiler magic or an API for using a pointer returned
> > by rcu_dereference(). This API could in theory be enforced by sparse.
>
> Luckily cond_resched_rcu_lock() will typically only occur within loops, and
> loops tend to be contained in a single source file.
>
> This would suggest that a simple static checker should be able to tell without
> too much magic, right? All it needs to do is track pointers returned from
> rcu_dereference*() and see if they're used after cond_resched_rcu_lock().
>
> Also, cond_resched_rcu_lock() will only drop a single level of RCU refs, so
> that should be easier still.
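To make the hazard concrete, here is a minimal sketch of exactly the
pattern such a checker would need to flag: an RCU-protected pointer held
across the voluntary break. All names here (struct conn, broken_scan, and
so on) are made up for illustration:

#include <linux/rculist.h>
#include <linux/rcupdate.h>

struct conn {
	struct list_head list;
	int id;
};

/* Hypothetical example of code a checker should reject. */
static int broken_scan(struct list_head *head)
{
	struct conn *c;
	int last_id = -1;

	rcu_read_lock();
	list_for_each_entry_rcu(c, head, list) {
		/*
		 * BUG: "c" was obtained under rcu_read_lock(), but
		 * cond_resched_rcu_lock() may drop that lock and allow
		 * a grace period to elapse, after which "c" may already
		 * have been freed. Both the dereference below and the
		 * iterator's step to the next element are then
		 * use-after-free hazards.
		 */
		cond_resched_rcu_lock();
		last_id = c->id;
	}
	rcu_read_unlock();
	return last_id;
}

The safe alternatives are the ones already hinted at above: drop every
RCU-protected pointer before the break and re-look it up afterwards, or
transfer the object from RCU protection to reference-count protection
first (the very case that produces false positives in the Coccinelle
patterns mentioned earlier).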
Don't get me wrong, I am not opposing cond_resched_rcu_lock() because it
will be difficult to validate. For one thing, until there are a lot of
uses of it, manual inspection is quite possible. So feel free to apply
my Acked-by to the patch.

But it is definitely not too early to start thinking about how best to
automatically validate this sort of thing!

							Thanx, Paul
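For reference, patch 1/2 of this series (not shown here) introduces the
helper itself. A minimal sketch of the semantics this thread assumes --
momentarily exiting the RCU read-side critical section to allow a
voluntary reschedule -- might look like the following; this illustrates
the intended behavior rather than the exact patch text:

#include <linux/rcupdate.h>
#include <linux/sched.h>

/*
 * Sketch only: briefly exit the RCU read-side critical section to
 * allow rescheduling, then re-enter it. Callers must not hold any
 * RCU-protected pointer across this call, and must be in an outermost
 * rcu_read_lock() section, since only one nesting level is dropped.
 */
static inline void cond_resched_rcu_lock(void)
{
	if (need_resched()) {
		rcu_read_unlock();
		cond_resched();
		rcu_read_lock();
	}
}

As Peter notes above, dropping exactly one level of rcu_read_lock()
nesting is what makes the construct comparatively easy to check: a tool
only needs to reason about the outermost critical section.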