Date: Thu, 22 Jan 2009 12:47:11 +0000 (GMT)
From: Hugh Dickins
To: Pekka Enberg
cc: Nick Piggin, Linux Memory Management List, Linux Kernel Mailing List,
    Andrew Morton, Lin Ming, "Zhang, Yanmin", Christoph Lameter
Subject: Re: [patch] SLQB slab allocator
In-Reply-To: <84144f020901220201g6bdc2d5maf3395fc8b21fe67@mail.gmail.com>
References: <20090121143008.GV24891@wotan.suse.de>
    <84144f020901220201g6bdc2d5maf3395fc8b21fe67@mail.gmail.com>

On Thu, 22 Jan 2009, Pekka Enberg wrote:
> On Wed, Jan 21, 2009 at 8:10 PM, Hugh Dickins wrote:
> > I was initially _very_ impressed by how well it did on my venerable
> > tmpfs loop swapping loads, where I'd expected next to no effect; but
> > that turned out to be because on three machines I'd been using SLUB,
> > without remembering how default slub_max_order got raised from 1 to 3
> > in 2.6.26 (hmm, and Documentation/vm/slub.txt not updated).
> >
> > That's been making SLUB behave pretty badly (e.g. elapsed time 30%
> > more than SLAB) with swapping loads on most of my machines.  Though
> > oddly one seems immune, and another takes four times as long: guess
> > it depends on how close to thrashing, but probably more to investigate
> > there.  I think my original SLUB versus SLAB comparisons were done on
> > the immune one: as I remember, SLUB and SLAB were equivalent on those
> > loads when SLUB came in, but even with boot option slub_max_order=1,
> > SLUB is still slower than SLAB on such tests (e.g. 2% slower).
>
> FWIW - swapping loads are not what anybody should tune for.
>
> What kind of machine are you seeing this on? It sounds like it could
> be a side-effect from commit 9b2cd506e5f2117f94c28a0040bf5da058105316
> ("slub: Calculate min_objects based on number of processors").

Thanks, yes, that could well account for the residual difference: the
machines in question have 2 or 4 cpus, so the old slub_min_objects=4
has effectively become slub_min_objects=12 or slub_min_objects=16.

I'm now trying with slub_max_order=1 slub_min_objects=4 on the boot
lines (though I'll need to curtail tests on a couple of machines),
and will report back later.

It's great that SLUB provides these knobs; not so great that it needs
them.

Hugh
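
For reference, the 12-and-16 arithmetic above is consistent with the
heuristic that the quoted commit appears to introduce, namely
min_objects = 4 * (fls(nr_cpu_ids) + 1).  Below is a minimal userspace
sketch of that calculation, under the assumption that this is the
formula in question, with a hand-rolled fls() standing in for the
kernel's find-last-set helper:

#include <stdio.h>

/*
 * Sketch of the slub_min_objects heuristic referenced above.  The
 * formula 4 * (fls(nr_cpu_ids) + 1) is an assumption based on the
 * quoted commit subject and the 12/16 figures in the mail; fls()
 * here is a userspace stand-in for the kernel's find-last-set helper.
 */
static int fls(unsigned int x)
{
	int bit = 0;

	while (x) {
		bit++;
		x >>= 1;
	}
	return bit;
}

int main(void)
{
	unsigned int cpus[] = { 1, 2, 4, 8 };
	unsigned int i;

	for (i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
		printf("nr_cpu_ids=%u -> min_objects=%d\n",
		       cpus[i], 4 * (fls(cpus[i]) + 1));
	return 0;
}

Compiled with any C compiler, this prints min_objects=12 for 2 cpus and
16 for 4, matching the numbers reported above; whether the in-kernel
calculation applies further adjustments is not confirmed here.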