Date: Thu, 3 Sep 2009 15:03:00 -0700
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Christoph Lameter
Cc: Eric Dumazet, Pekka Enberg, Zdenek Kabelac, Patrick McHardy,
	Robin Holt, Linux Kernel Mailing List, Jesper Dangaard Brouer,
	Linux Netdev List, Netfilter Developers
Subject: Re: [PATCH] slub: fix slab_pad_check()
Message-ID: <20090903220300.GN6761@linux.vnet.ibm.com>
References: <4A9F1620.2080105@gmail.com>
	<84144f020909022331x2b275aa5n428f88670e0ae8bc@mail.gmail.com>
	<4A9F7283.1090306@gmail.com> <4A9FCDC6.3060003@gmail.com>
	<4A9FDA72.8060001@gmail.com> <20090903174435.GF6761@linux.vnet.ibm.com>

On Thu, Sep 03, 2009 at 05:43:12PM -0500, Christoph Lameter wrote:
> On Thu, 3 Sep 2009, Paul E. McKenney wrote:
>
> > 2.	CPU 0 discovers that the slab cache can now be destroyed.
> >
> >	It determines that there are no users, and has guaranteed
> >	that there will be no future users.  So it knows that it
> >	can safely do kmem_cache_destroy().
> >
> > 3.	In absence of rcu_barrier(), kmem_cache_destroy() would
> >	immediately tear down the slab data structures.
>
> Of course. This has been discussed before.
>
> You need to ensure that no objects are in use before destroying a slab.
> In case of DESTROY_BY_RCU you must ensure that there are no potential
> readers. So use a suitable rcu barrier or something else like a
> synchronize_rcu...

If by "you must ensure" you mean "kmem_cache_destroy() must ensure", then
we are in complete agreement.  Otherwise, not a chance.

> > > But going through the RCU period is pointless since no user of the
> > > cache remains.
> >
> > Which is irrelevant.  The outstanding RCU callback was posted by the
> > slab cache itself, -not- by the user of the slab cache.
>
> There will be no rcu callbacks generated at kmem_cache_destroy with the
> patch I posted.

That is true.  However, there might well be RCU callbacks generated by
kmem_cache_free() that have not yet been invoked.  Since kmem_cache_free()
generated them, it is ridiculous to insist that the user account for them.
That responsibility must instead fall on kmem_cache_destroy().

> > > The dismantling does not need RCU since there are no operations on
> > > the objects in progress. So simply switch DESTROY_BY_RCU off for
> > > close.
> >
> > Unless I am missing something, this patch re-introduces the bug that
> > the rcu_barrier() was added to prevent.  So, in absence of a better
> > explanation of what I am missing:
>
> The "fix" was ill advised. Slab users must ensure that no objects are in
> use before destroying a slab. Only the slab users know how the objects
> are being used. The slab allocator itself cannot know how to ensure that
> there are no pending references. Putting an rcu_barrier() in there
> creates an inconsistency in the operation of kmem_cache_destroy() and an
> expectation of functionality that the function cannot provide.

If by "must ensure that no objects are in use", you mean "must have no
further references to them", then we are in agreement.  And in my scenario
above, it is not the -user- who later references the memory, but rather
the slab code itself.

Put the rcu_barrier() in kmem_cache_destroy().  Please.
							Thanx, Paul