Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757983AbYAJCre (ORCPT );
	Wed, 9 Jan 2008 21:47:34 -0500
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1754137AbYAJCr0 (ORCPT );
	Wed, 9 Jan 2008 21:47:26 -0500
Received: from waste.org ([66.93.16.53]:35118 "EHLO waste.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754001AbYAJCrZ (ORCPT );
	Wed, 9 Jan 2008 21:47:25 -0500
Subject: Re: [RFC PATCH] greatly reduce SLOB external fragmentation
From: Matt Mackall
To: Pekka J Enberg
Cc: Christoph Lameter, Ingo Molnar, Linus Torvalds, Hugh Dickins,
	Andi Kleen, Peter Zijlstra, Linux Kernel Mailing List
In-Reply-To:
References: <84144f020801021109v78e06c6k10d26af0e330fc85@mail.gmail.com>
	<1199314218.4497.109.camel@cinder.waste.org>
	<20080103085239.GA10813@elte.hu>
	<1199378818.8274.25.camel@cinder.waste.org>
	<1199419890.4608.77.camel@cinder.waste.org>
	<1199641910.8215.28.camel@cinder.waste.org>
	<1199906151.6245.57.camel@cinder.waste.org>
Content-Type: text/plain
Date: Wed, 09 Jan 2008 20:46:36 -0600
Message-Id: <1199933196.6245.125.camel@cinder.waste.org>
Mime-Version: 1.0
X-Mailer: Evolution 2.12.2
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3076
Lines: 95

On Thu, 2008-01-10 at 00:43 +0200, Pekka J Enberg wrote:
> Hi Matt,
>
> On Wed, 9 Jan 2008, Matt Mackall wrote:
> > I kicked this around for a while, slept on it, and then came up with
> > this little hack first thing this morning:
> >
> > ------------
> > slob: split free list by size
> >
> > [snip]
> >
> > And the results are fairly miraculous, so please double-check them on
> > your setup. The resulting statistics change to this:
>
> [snip]
>
> > So the average jumped by 714k from before the patch, became much more
> > stable, and beat SLUB by 287k. There are also 7 perfectly filled pages
> > now, up from 1 before.
> > And we can't get a whole lot better than this: we're using 259 pages
> > for 244 pages of actual data, so our total overhead is only 6%! For
> > comparison, SLUB's using about 70 pages more for the same data, so
> > its total overhead appears to be about 35%.
>
> Unfortunately I only see a slight improvement to SLOB (but it still gets
> beaten by SLUB):
>
> [ the minimum, maximum, and average are captured from 10 individual runs ]
>
>                                 Free (kB)               Used (kB)
>                    Total (kB)   min    max    average   min   max   average
> SLUB (no debug)    26536        23868  23892  23877.6   2644  2668  2658.4
> SLOB (patched)     26548        23456  23708  23603.2   2840  3092  2944.8
> SLOB (vanilla)     26548        23472  23640  23579.6   2908  3076  2968.4
> SLAB (no debug)    26544        23316  23364  23343.2   3180  3228  3200.8
> SLUB (with debug)  26484        23120  23136  23127.2   3348  3364  3356.8

With your kernel config and my lguest+busybox setup, I get:

SLUB:
MemFree:         24208 kB
MemFree:         24212 kB
MemFree:         24212 kB
MemFree:         24212 kB
MemFree:         24216 kB
MemFree:         24216 kB
MemFree:         24220 kB
MemFree:         24220 kB
MemFree:         24224 kB
MemFree:         24232 kB
avg: 24217.2

SLOB with two lists:
MemFree:         24204 kB
MemFree:         24260 kB
MemFree:         24260 kB
MemFree:         24276 kB
MemFree:         24288 kB
MemFree:         24292 kB
MemFree:         24312 kB
MemFree:         24320 kB
MemFree:         24336 kB
MemFree:         24396 kB
avg: 24294.4

Not sure why this result is so different from yours. Hacked this up to
three lists to experiment and we now have:

MemFree:         24348 kB
MemFree:         24372 kB
MemFree:         24372 kB
MemFree:         24372 kB
MemFree:         24372 kB
MemFree:         24380 kB
MemFree:         24384 kB
MemFree:         24404 kB
MemFree:         24404 kB
MemFree:         24408 kB
avg: 24381.6

Even the last version is still using about 250 pages of storage for 209
pages of data, so it's got about 20% overhead still.

-- 
Mathematics is the supreme nostalgia of our time.