Date: Sun, 31 Aug 2003 10:03:09 -0700
From: "Martin J. Bligh"
To: Dan Kegel, GCC Mailing List, linux-kernel@vger.kernel.org
Subject: Re: LMbench as gcc performance regression test?

> http://cs.nmu.edu/~benchmark/ has an interesting little graph
> of LMBench results vs. Linux kernel version, all done with the
> same compiler.
>
> Has anyone seen a similar graph showing LMBench results vs. gcc
> version, all done with the same Linux kernel?
> And does everyone agree that's a meaningful way to compare the
> performance of code generated by different compilers?

I've done similar things with kernbench before (always using 2.95 to
run the test, but comparing kernels compiled with gcc 2.95 vs 3.2 vs
3.3, and -Os vs -O2, etc.). The summary was that 3.x takes *much*
longer to compile the kernel and produces worse code (though 3.3 is
almost back up to the performance of 2.95, and is better than 3.2).
-O2 is better than -Os, at least on a machine with 2MB L2 cache.
Search the archives for the results I posted if you want them, but I
never bothered graphing them.

> I happen to have a number of versions of gcc handy, and was
> considering making such a graph, but was hoping somebody
> else had already done it.
>
> (There seem to be large variations in successive runs of LMBench
> when I try it, so it may take me a bit of work to get repeatable
> results.)

I'd just throw away any of the subtests that give you > 1% variation;
deriving anything meaningful from crap data is hard (and dubious). I
have similar problems with some of the results in lmbench - Larry
suggested setting "ENOUGH=" or something, which helped a few tests,
but most of them still aren't stable enough to be useful to me.

I'd also use something "bigger" than just a microbenchmark - you need
to exercise a realistic set of the kernel functions in order to see
space vs. time tradeoffs, etc. If you want to see whether it's faster
for you, you need a benchmark that simulates roughly what you do with
the machine (i.e. a system-level benchmark).

M.
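
P.S. For what it's worth, the idea behind the "ENOUGH=" knob is just
to run the measured operation long enough that timer granularity and
scheduling noise stop mattering. A minimal sketch of that auto-scaling
loop (this is not LMbench's actual code; the 100ms floor and the
getpid() victim are made up for illustration):

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

/* Microseconds since the epoch, via gettimeofday(). */
static long long now_usec(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

int main(void)
{
	long long enough = 100000;	/* keep each timed run >= 100ms */
	long iters = 1;
	long long elapsed;

	/* Double the iteration count until one timed run lasts long
	 * enough that clock granularity is down in the noise. */
	for (;;) {
		long long start = now_usec();
		for (long i = 0; i < iters; i++)
			getpid();	/* stand-in for the op under test */
		elapsed = now_usec() - start;
		if (elapsed >= enough)
			break;
		iters *= 2;
	}
	printf("%ld calls in %lld usec: %.3f usec/call\n",
	       iters, elapsed, (double)elapsed / iters);
	return 0;
}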
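
P.P.S. And "throw away > 1% variation" concretely: run each subtest
several times, compute the spread relative to the mean, and ignore
anything above the cutoff. A sketch (the sample numbers are invented;
link with -lm):

#include <stdio.h>
#include <math.h>

/* Relative spread (stddev / mean) of n samples. */
static double rel_spread(const double *v, int n)
{
	double mean = 0, var = 0;
	for (int i = 0; i < n; i++)
		mean += v[i];
	mean /= n;
	for (int i = 0; i < n; i++)
		var += (v[i] - mean) * (v[i] - mean);
	return sqrt(var / n) / mean;
}

int main(void)
{
	/* Five runs of one imaginary subtest, in usec. */
	double runs[] = { 4.02, 4.05, 3.98, 4.01, 4.04 };
	int n = sizeof(runs) / sizeof(runs[0]);
	double spread = rel_spread(runs, n);

	if (spread > 0.01)	/* the "> 1% variation" cutoff */
		printf("unstable (%.2f%%), discard\n", spread * 100);
	else
		printf("stable (%.2f%%), keep\n", spread * 100);
	return 0;
}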