Date: Tue, 22 Nov 2016 14:53:02 +0100
From: Greg KH
To: Ganesh Mahendran
Cc: arve@android.com, riandrews@android.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] binder: replace kzalloc with kmem_cache
Message-ID: <20161122135302.GA10497@kroah.com>
References: <1479813450-4462-1-git-send-email-opensource.ganesh@gmail.com>
In-Reply-To: <1479813450-4462-1-git-send-email-opensource.ganesh@gmail.com>

On Tue, Nov 22, 2016 at 07:17:30PM +0800, Ganesh Mahendran wrote:
> This patch uses kmem_cache to allocate/free binder objects.

Why do this?

> It will have better memory efficiency.

Really?  How?  It should be the same, if not a bit worse.  Have you
tested this?  What are the results?

> And we can also get object usage details in /sys/kernel/slab/* for
> further analysis.

Why do we need this?  Who needs this information, and what are you going
to do with it?

> Signed-off-by: Ganesh Mahendran
> ---
>  drivers/android/binder.c | 127 ++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 104 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 3c71b98..f1f8362 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -54,6 +54,14 @@
>  static HLIST_HEAD(binder_deferred_list);
>  static HLIST_HEAD(binder_dead_nodes);
>
> +static struct kmem_cache *binder_proc_cachep;
> +static struct kmem_cache *binder_thread_cachep;
> +static struct kmem_cache *binder_node_cachep;
> +static struct kmem_cache *binder_ref_cachep;
> +static struct kmem_cache *binder_transaction_cachep;
> +static struct kmem_cache *binder_work_cachep;
> +static struct kmem_cache *binder_ref_death_cachep;

That's a lot of different caches; are you sure they don't just all get
merged together anyway by most allocators?

Don't create lots of little caches for no good reason, and without any
benchmark numbers I'd prefer to leave this alone.  You are going to have
to prove this is a win before this kind of churn can be accepted.

thanks,

greg k-h
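
For readers following the thread, the allocation pattern being debated looks
roughly like the sketch below.  Only the cache name (binder_proc_cachep) and
the kmem_cache_*() API come from the quoted hunk and the discussion; the stub
struct layout, the function names, and the module wrapper are assumptions made
purely for illustration, not taken from the actual patch.

	#include <linux/module.h>
	#include <linux/slab.h>

	/* Stand-in layout only; the real struct binder_proc is defined in
	 * drivers/android/binder.c. */
	struct binder_proc {
		int pid;
		void *buffer;
	};

	static struct kmem_cache *binder_proc_cachep;	/* name from the quoted hunk */

	static int __init binder_cache_example_init(void)
	{
		struct binder_proc *proc;

		/*
		 * One dedicated cache per object type.  Unless the kernel is
		 * booted with slab_nomerge (or the flags defeat merging), SLUB
		 * may alias a small cache like this with a compatible existing
		 * one, which is the merging concern raised above.
		 */
		binder_proc_cachep = kmem_cache_create("binder_proc",
						       sizeof(struct binder_proc),
						       0, SLAB_HWCACHE_ALIGN, NULL);
		if (!binder_proc_cachep)
			return -ENOMEM;

		/* The call sites then change from kzalloc()/kfree() to: */
		proc = kmem_cache_zalloc(binder_proc_cachep, GFP_KERNEL);
		if (!proc) {
			kmem_cache_destroy(binder_proc_cachep);
			return -ENOMEM;
		}
		kmem_cache_free(binder_proc_cachep, proc);

		return 0;
	}

	static void __exit binder_cache_example_exit(void)
	{
		kmem_cache_destroy(binder_proc_cachep);
	}

	module_init(binder_cache_example_init);
	module_exit(binder_cache_example_exit);
	MODULE_LICENSE("GPL");

Whether such a cache actually appears as a separate entry under
/sys/kernel/slab/ (the visibility the commit message claims) therefore depends
on whether the allocator merges it with an existing compatible cache; booting
with slab_nomerge is one way to check.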