Date: Wed, 14 Dec 2016 10:39:16 +0800
From: Ganesh Mahendran
To: Greg KH
Cc: arve@android.com, riandrews@android.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] binder: replace kzalloc with kmem_cache
Message-ID: <20161214023916.GB4665@leo-test>
References: <1479813450-4462-1-git-send-email-opensource.ganesh@gmail.com> <20161122135302.GA10497@kroah.com>
In-Reply-To: <20161122135302.GA10497@kroah.com>

Hi, Greg:

Sorry for the late response.

On Tue, Nov 22, 2016 at 02:53:02PM +0100, Greg KH wrote:
> On Tue, Nov 22, 2016 at 07:17:30PM +0800, Ganesh Mahendran wrote:
> > This patch uses kmem_cache to allocate/free binder objects.
>
> Why do this?

I am not very familiar with kmem_cache, but I think that with thousands
of active binder objects in the system, kmem_cache would be the better
fit.

Below are the binder object counts on my Android system:
-----
$ cat /d/binder/stats
...
proc: active 100 total 6735
thread: active 1456 total 180807
node: active 5668 total 1027387
ref: active 7141 total 1214877
death: active 844 total 468056
transaction: active 0 total 54736890
transaction_complete: active 0 total 54736890
-----

Binder objects are allocated and freed frequently.

> > It will have better memory efficiency.
>
> Really?  How?  It should be the same, if not a bit worse.  Have you
> tested this?  What is the results?

kzalloc() is served by the general kmalloc caches, so a request is
rounded up to the next available kmalloc size.

Take struct binder_thread as an example: its size is 296 bytes. With
kzalloc(), the slab system stores those 296 bytes in a 512-byte object.
With a dedicated kmem_cache (which may still be merged with a compatible
existing cache), the 296 bytes are stored in a 304-byte object.

Below is the corresponding line from /proc/slabinfo:
------
# name          <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
binder_thread             858        858       304           26             2
------

Memory efficiency is: (296 * 26) / (2 * 4096) = 93.9%

> > And we can also get object usage details in /sys/kernel/slab/* for
> > further analysis.
>
> Why do we need this?  Who needs this information and what are you going
> to do with it?

This is only for debugging purposes, to see how much memory is used by
binder.
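To make the comparison concrete, below is a minimal sketch of the
allocation change being discussed, assuming a dedicated cache for
struct binder_thread. This is not code from the patch; the cache name,
the flags, and the binder_thread_cache_init() / binder_thread_alloc() /
binder_thread_free() helpers are only illustrative:

-----
#include <linux/slab.h>

static struct kmem_cache *binder_thread_cachep;

static int __init binder_thread_cache_init(void)
{
	/*
	 * With a dedicated cache, the object size is padded only up to the
	 * required alignment (304 bytes in the slabinfo output above)
	 * instead of up to the next kmalloc size (512 bytes).
	 */
	binder_thread_cachep = kmem_cache_create("binder_thread",
						 sizeof(struct binder_thread),
						 0, 0, NULL);
	return binder_thread_cachep ? 0 : -ENOMEM;
}

static struct binder_thread *binder_thread_alloc(void)
{
	/* replaces kzalloc(sizeof(struct binder_thread), GFP_KERNEL) */
	return kmem_cache_zalloc(binder_thread_cachep, GFP_KERNEL);
}

static void binder_thread_free(struct binder_thread *thread)
{
	/* replaces kfree(thread) */
	kmem_cache_free(binder_thread_cachep, thread);
}
-----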
> > Signed-off-by: Ganesh Mahendran
> > ---
> >  drivers/android/binder.c | 127 ++++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 104 insertions(+), 23 deletions(-)
> >
> > diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> > index 3c71b98..f1f8362 100644
> > --- a/drivers/android/binder.c
> > +++ b/drivers/android/binder.c
> > @@ -54,6 +54,14 @@
> >  static HLIST_HEAD(binder_deferred_list);
> >  static HLIST_HEAD(binder_dead_nodes);
> >
> > +static struct kmem_cache *binder_proc_cachep;
> > +static struct kmem_cache *binder_thread_cachep;
> > +static struct kmem_cache *binder_node_cachep;
> > +static struct kmem_cache *binder_ref_cachep;
> > +static struct kmem_cache *binder_transaction_cachep;
> > +static struct kmem_cache *binder_work_cachep;
> > +static struct kmem_cache *binder_ref_death_cachep;
>
> That's a lot of different caches, are you sure they don't just all get
> merged together anyway for most allocators?

If a binder kmem_cache has the same object size and flags as an existing
cache, it may indeed be merged with that cache. But I still think it
would be better than using kzalloc().

> Don't create lots of little caches for no good reason, and without any
> benchmark numbers, I'd prefer to leave this alone.  You are going to
> have to prove this is a win to allow this type of churn.

I tested binder with this patch; there is no performance regression.
---
I ran the command below 10 times:
$ binderThroughputTest -w 100

        Before      After (with patch)
avg:    9848.4      9878.8
---

Thanks.

> thanks,
>
> greg k-h
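For reference, below is a rough sketch of how the caches declared in the
hunk above could be created and torn down in binder_init(). This is not
taken from the patch; the binder_create_caches() / binder_destroy_caches()
helpers, the KMEM_CACHE() flags, and the error handling are assumptions:

-----
/* kmem_cache_destroy() accepts NULL, so one cleanup path is enough. */
static void binder_destroy_caches(void)
{
	kmem_cache_destroy(binder_proc_cachep);
	kmem_cache_destroy(binder_thread_cachep);
	kmem_cache_destroy(binder_node_cachep);
	kmem_cache_destroy(binder_ref_cachep);
	kmem_cache_destroy(binder_transaction_cachep);
	kmem_cache_destroy(binder_work_cachep);
	kmem_cache_destroy(binder_ref_death_cachep);
}

static int __init binder_create_caches(void)
{
	/* KMEM_CACHE() names each cache after its struct. */
	binder_proc_cachep = KMEM_CACHE(binder_proc, 0);
	binder_thread_cachep = KMEM_CACHE(binder_thread, 0);
	binder_node_cachep = KMEM_CACHE(binder_node, 0);
	binder_ref_cachep = KMEM_CACHE(binder_ref, 0);
	binder_transaction_cachep = KMEM_CACHE(binder_transaction, 0);
	binder_work_cachep = KMEM_CACHE(binder_work, 0);
	binder_ref_death_cachep = KMEM_CACHE(binder_ref_death, 0);

	if (!binder_proc_cachep || !binder_thread_cachep ||
	    !binder_node_cachep || !binder_ref_cachep ||
	    !binder_transaction_cachep || !binder_work_cachep ||
	    !binder_ref_death_cachep) {
		binder_destroy_caches();
		return -ENOMEM;
	}
	return 0;
}
-----

Note that with the allocator's default cache merging, some of these
caches may end up aliased to existing caches of the same size, which is
the merging Greg mentions above.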