Date: Sat, 17 Nov 2012 00:02:37 +0530
From: "Srivatsa S. Bhat"
To: Dave Hansen
CC: Mel Gorman, Vaidyanathan Srinivasan, akpm@linux-foundation.org, mjg59@srcf.ucam.org, paulmck@linux.vnet.ibm.com, maxime.coquelin@stericsson.com, loic.pallardy@stericsson.com, arjan@linux.intel.com, kmpark@infradead.org, kamezawa.hiroyu@jp.fujitsu.com, lenb@kernel.org, rjw@sisk.pl, gargankita@gmail.com, amit.kachhap@linaro.org, thomas.abraham@linaro.org, santosh.shilimkar@ti.com, linux-pm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, andi@firstfloor.org, SrinivasPandruvada
Subject: Re: [RFC PATCH 0/8][Sorted-buddy] mm: Linux VM Infrastructure to support Memory Power Management

On 11/09/2012 10:22 PM, Srivatsa S. Bhat wrote:
> On 11/09/2012 10:13 PM, Srivatsa S. Bhat wrote:
>> On 11/09/2012 10:04 PM, Srivatsa S. Bhat wrote:
>>> On 11/09/2012 09:43 PM, Dave Hansen wrote:
>>>> On 11/09/2012 07:23 AM, Srivatsa S. Bhat wrote:
>>>>> FWIW, kernbench is actually (and surprisingly) showing a slight performance
>>>>> *improvement* with this patchset over vanilla 3.7-rc3, as I mentioned in
>>>>> my other email to Dave.
>>>>>
>>>>> https://lkml.org/lkml/2012/11/7/428
>>>>>
>>>>> I don't think I can dismiss it as an experimental error, because I am
>>>>> seeing those results consistently... I'm trying to find out what's
>>>>> behind that.
>>>>
>>>> The only numbers in that link are in the date. :) Let's see the
>>>> numbers, please.
>>>>
>>>
>>> Sure :) The reason I didn't post the numbers very eagerly was that I didn't
>>> want them to look ridiculous if they later turned out to be an experimental
>>> error ;) But since I have seen it happening consistently, I think I can
>>> post the numbers here with some non-zero confidence.
>>>
>>>> If you really have a performance improvement to the memory allocator (or
>>>> something else) here, then surely it can be pared out of your patches
>>>> and merged quickly by itself. Those kinds of optimizations are hard to
>>>> come by!
>>>>
>>>
>>> :-)
>>>
>>> Anyway, here it goes:
>>>
>>> Test setup:
>>> ----------
>>> x86 2-socket quad-core machine. (CONFIG_NUMA=n because I figured that my
>>> patchset might not handle NUMA properly). Mem region size = 512 MB.
>>>
>>
>> For CONFIG_NUMA=y on the same machine, the difference between the two
>> kernels was much smaller, but this patchset nevertheless performed better.
>> I wouldn't vouch that my patchset handles NUMA correctly, but here are the
>> numbers from that run anyway (at least to show that I really found the
>> results to be repeatable):
>>

I have since fixed up the NUMA case (I'll post the updated patch for that
soon) and ran a fresh set of kernbench runs. The difference between mainline
and this patchset is now quite tiny: the average elapsed times differ by only
about 0.4 s, well within the run-to-run standard deviation (up to ~2.8 s). So
we can't really claim that this patchset shows a performance improvement over
mainline. However, I can safely conclude that it shows no performance
_degradation_ w.r.t. mainline in kernbench.

Results from one of the recent kernbench runs:
----------------------------------------------

Kernbench log for Vanilla 3.7-rc3
=================================
Kernel: 3.7.0-rc3
Average Optimal load -j 32 Run (std deviation):
Elapsed Time      330.39   (0.746257)
User Time         4283.63  (3.39617)
System Time       604.783  (2.72629)
Percent CPU       1479     (3.60555)
Context Switches  845634   (6031.22)
Sleeps            833655   (6652.17)

Kernbench log for Sorted-buddy
==============================
Kernel: 3.7.0-rc3-sorted-buddy
Average Optimal load -j 32 Run (std deviation):
Elapsed Time      329.967  (2.76789)
User Time         4230.02  (2.15324)
System Time       599.793  (1.09988)
Percent CPU       1463.33  (11.3725)
Context Switches  840530   (1646.75)
Sleeps            833732   (2227.68)

Regards,
Srivatsa S. Bhat
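
The elapsed-time comparison above can be sanity-checked with a few lines of
Python; a minimal sketch (combining the two standard deviations in quadrature
is an assumption of this sketch, not something kernbench itself reports):

from math import sqrt

# (mean elapsed time in seconds, std deviation), copied from the logs above
vanilla      = (330.39, 0.746257)   # 3.7.0-rc3
sorted_buddy = (329.967, 2.76789)   # 3.7.0-rc3-sorted-buddy

delta = vanilla[0] - sorted_buddy[0]
# Rough noise estimate: independent deviations add in quadrature.
noise = sqrt(vanilla[1] ** 2 + sorted_buddy[1] ** 2)

print(f"delta = {delta:.3f}s, combined stddev = {noise:.3f}s")
print("within noise" if abs(delta) < noise else "outside noise")

# Output: delta = 0.423s, combined stddev = 2.867s -> the difference is
# well inside the measurement noise, consistent with the "no degradation,
# no improvement" conclusion drawn above.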