Date: Sat, 21 Feb 2015 17:38:40 +0100
From: Borislav Petkov
To: Ingo Molnar
Cc: Andy Lutomirski, Oleg Nesterov, Rik van Riel, x86@kernel.org,
	linux-kernel@vger.kernel.org, Linus Torvalds
Subject: Re: [RFC PATCH] x86, fpu: Use eagerfpu by default on all CPUs
Message-ID: <20150221163840.GA32073@pd.tnic>
References: <20150221093150.GA27841@gmail.com>
In-Reply-To: <20150221093150.GA27841@gmail.com>
User-Agent: Mutt/1.5.23 (2014-03-12)

On Sat, Feb 21, 2015 at 10:31:50AM +0100, Ingo Molnar wrote:
> So it would be nice to test this on at least one reasonably old (but
> not uncomfortably old - say 5 years old) system, to get a feel for
> what kind of performance impact it has there.

Yeah, this is exactly what Andy and I were talking about yesterday on
IRC. So let's measure our favourite workload - the kernel build! :-)

My assumption is that libc uses SSE for memcpy and thus the FPU will be
used. (I'll trace FPU-specific PMCs later to confirm - see the sketch
further down.)

The machine is an AMD F10h, which should be 5-10 years old depending on
what you're looking at (uarch, revision, ...).

The numbers look great to me in the sense that we have a very small
improvement and the rest stays the same. Which would mean: killing lazy
FPU does not bring a slowdown, if no improvement, but it will bring a
huge improvement in code quality and in the handling of the FPU state
by getting rid of the laziness...
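As to the SSE-memcpy assumption above, something like this should do
for confirming it later - an untested sketch; the raw event encoding is
my reading of the Fam10h BKDG (event 0xCB "Retired MMX/FP Instructions"
with unit mask 0x04 selecting SSE/SSE2 ops), so double-check it against
perf list and the manual before trusting the numbers:

  # which FP/SSE events does perf know about on this box?
  perf list | grep -i -e fp -e sse

  # count retired SSE/SSE2 instructions system-wide during the build
  perf stat -a -e r04cb --repeat 10 --sync \
      --pre ~/bin/pre-build-kernel.sh make -s -j12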
IPC is the same, branch misses are *down* a bit, cache misses go up a
bit probably because we're shuffling FPU state more often to mem, page
faults go down and runtime goes down by half a second:

plain 3.19:
==========

perf stat -a -e task-clock,cycles,instructions,branch-misses,cache-misses,faults,context-switches,migrations \
	--repeat 10 --sync --pre ~/bin/pre-build-kernel.sh make -s -j12

 Performance counter stats for 'system wide' (10 runs):

    1408897.576594      task-clock (msec)        #    6.003 CPUs utilized            ( +- 0.15% ) [100.00%]
 3,137,565,760,188      cycles                   #    2.227 GHz                      ( +- 0.02% ) [100.00%]
 2,849,228,161,721      instructions             #    0.91  insns per cycle          ( +- 0.00% ) [100.00%]
    32,391,188,891      branch-misses            #   22.990 M/sec                    ( +- 0.02% ) [100.00%]
    27,879,813,595      cache-misses             #   19.788 M/sec                    ( +- 0.01% )
        27,195,402      faults                   #    0.019 M/sec                    ( +- 0.01% ) [100.00%]
         1,293,241      context-switches         #    0.918 K/sec                    ( +- 0.09% ) [100.00%]
            69,548      migrations               #    0.049 K/sec                    ( +- 0.22% )

     234.681331200 seconds time elapsed                                          ( +- 0.15% )

eagerfpu=ENABLE
===============

 Performance counter stats for 'system wide' (10 runs):

    1405208.771580      task-clock (msec)        #    6.003 CPUs utilized            ( +- 0.19% ) [100.00%]
 3,137,381,829,748      cycles                   #    2.233 GHz                      ( +- 0.03% ) [100.00%]
 2,849,059,336,718      instructions             #    0.91  insns per cycle          ( +- 0.00% ) [100.00%]
    32,380,999,636      branch-misses            #   23.044 M/sec                    ( +- 0.02% ) [100.00%]
    27,884,281,327      cache-misses             #   19.844 M/sec                    ( +- 0.01% )
        27,193,985      faults                   #    0.019 M/sec                    ( +- 0.01% ) [100.00%]
         1,293,300      context-switches         #    0.920 K/sec                    ( +- 0.08% ) [100.00%]
            69,791      migrations               #    0.050 K/sec                    ( +- 0.18% )

     234.066525648 seconds time elapsed                                          ( +- 0.19% )

--
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.