Date: Fri, 1 Apr 2016 11:04:41 +0200
From: Peter Zijlstra
To: Vladimir Davydov
Cc: Andrew Morton, Christoph Lameter, Joonsoo Kim, Pekka Enberg,
	David Rientjes, Johannes Weiner, Michal Hocko,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -mm v2 3/3] slub: make dead caches discard free slabs immediately
Message-ID: <20160401090441.GD12845@twins.programming.kicks-ass.net>
In-Reply-To: <6eecfafdc6c3dcbb98d2176cdebcb65abbc180b4.1422461573.git.vdavydov@parallels.com>

On Wed, Jan 28, 2015 at 07:22:51PM +0300, Vladimir Davydov wrote:
> +++ b/mm/slub.c
> @@ -2007,6 +2007,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  	int pages;
>  	int pobjects;
>  
> +	preempt_disable();
>  	do {
>  		pages = 0;
>  		pobjects = 0;
> @@ -2040,6 +2041,14 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
>  
>  	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
>  								!= oldpage);
> +	if (unlikely(!s->cpu_partial)) {
> +		unsigned long flags;
> +
> +		local_irq_save(flags);
> +		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> +		local_irq_restore(flags);
> +	}
> +	preempt_enable();
>  #endif
>  }
>  
> @@ -3369,7 +3378,7 @@ EXPORT_SYMBOL(kfree);
>   * being allocated from last increasing the chance that the last objects
>   * are freed in them.
>   */
> -int __kmem_cache_shrink(struct kmem_cache *s)
> +int __kmem_cache_shrink(struct kmem_cache *s, bool deactivate)
>  {
>  	int node;
>  	int i;
> @@ -3381,14 +3390,26 @@ int __kmem_cache_shrink(struct kmem_cache *s)
>  	unsigned long flags;
>  	int ret = 0;
>  
> +	if (deactivate) {
> +		/*
> +		 * Disable empty slabs caching. Used to avoid pinning offline
> +		 * memory cgroups by kmem pages that can be freed.
> +		 */
> +		s->cpu_partial = 0;
> +		s->min_partial = 0;
> +
> +		/*
> +		 * s->cpu_partial is checked locklessly (see put_cpu_partial),
> +		 * so we have to make sure the change is visible.
> +		 */
> +		kick_all_cpus_sync();
> +	}

Argh! what the heck! and without a single mention in the changelog.

Why are you spraying IPIs across the entire machine? Why isn't
synchronize_sched() good enough? That would allow you to get rid of the
local_irq_save/restore as well.