Date: Tue, 8 Dec 2015 08:00:03 +0100
From: Ingo Molnar
To: Andy Lutomirski
Cc: Andy Lutomirski, X86 ML, "linux-kernel@vger.kernel.org", Brian Gerst,
    Borislav Petkov, Frédéric Weisbecker, Denys Vlasenko, Linus Torvalds
Subject: Re: [PATCH 00/12] x86: Rewrite 64-bit syscall code
Message-ID: <20151208070003.GA26154@gmail.com>
References: <20151208044254.GA3058@gmail.com>


* Andy Lutomirski wrote:

> On Mon, Dec 7, 2015 at 8:42 PM, Ingo Molnar wrote:
> >
> > * Andy Lutomirski wrote:
> >
> >> On Mon, Dec 7, 2015 at 1:51 PM, Andy Lutomirski wrote:
> >>
> >> > This is kind of like the 32-bit and compat code, except that I preserved the
> >> > fast path this time. I was unable to measure any significant performance
> >> > change on my laptop in the fast path.
> >> >
> >> > What do you all think?
> >>
> >> For completeness, if I zap the fast path entirely (see attached), I lose 20
> >> cycles (148 cycles vs 128 cycles) on Skylake. Switching between movq and pushq
> >> for stack setup makes no difference whatsoever, interestingly. I haven't tried
> >> to figure out exactly where those 20 cycles go.
> >
> > So I asked for this before, and I'll do so again: could you please stick the
> > cycle-granular system call performance test into a 'perf bench' variant so that:
> >
> >  1) More people can run it on various pieces of hardware and help quantify
> >     the patches.
> >
> >  2) We can keep an eye on not regressing base system call performance in the
> >     future, with a good in-tree testcase.
>
> Is it okay if it's not particularly shiny or modular? [...]

Absolutely!

> [...] The tool I'm using is here:
>
>   https://git.kernel.org/cgit/linux/kernel/git/luto/misc-tests.git/tree/tight_loop/perf_self_monitor.c
>
> and I can certainly stick it into 'perf bench' pretty easily. Can I
> leave making it into a proper library to some future contributor?

Sure - 'perf bench' tests aren't generally librarized - the goal is to make it
easy to create a new measurement.

> It's actually decently fancy. It allocates a perf self-monitoring
> instance that counts cycles, and then it takes a bunch of samples and
> discards any that flagged a context switch. It does some very
> rudimentary statistics on the rest. It's utterly devoid of a fancy
> UI, though.
>
> It works very well on native, and it works better than I had expected
> under KVM. (KVM traps RDPMC because neither Intel nor AMD has seen
> fit to provide any sensible way to virtualize RDPMC without exiting.)

Sounds fantastic to me!

Thanks,

	Ingo
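
For reference, below is a minimal sketch of the self-monitoring scheme Andy
describes: open a cycles counter on the current task, mmap its user page, and
read it with RDPMC around the code under test, throwing away any sample during
which the kernel touched the page (for instance because the counter was
rescheduled across a context switch). This only illustrates the mechanism; it
is not the actual perf_self_monitor.c, and the discard heuristic (the
mmap-page seqlock moving between the two reads) and the statistics (just the
minimum) are simplifying assumptions made here.

/*
 * Sketch of a perf self-monitoring cycle counter (x86 only), assuming
 * perf_event_paranoid is low enough to count kernel cycles and that the
 * kernel exposes cap_user_rdpmc (user-space RDPMC allowed).
 * Build: gcc -O2 -o psm-sketch psm-sketch.c
 */
#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t rdpmc(uint32_t counter)
{
	uint32_t lo, hi;
	__asm__ volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (counter));
	return ((uint64_t)hi << 32) | lo;
}

/*
 * One seqlock-protected read of the counter, as documented in
 * perf_event_open(2).  Sign extension to pmc_width is omitted because we
 * only look at deltas taken within a single seqlock generation.
 */
static uint64_t read_cycles(struct perf_event_mmap_page *pc, uint32_t *seq_out)
{
	uint32_t seq, idx;
	uint64_t count;

	do {
		seq = pc->lock;
		__asm__ volatile("" ::: "memory");
		idx = pc->index;
		count = pc->offset;
		if (pc->cap_user_rdpmc && idx)
			count += rdpmc(idx - 1);
		__asm__ volatile("" ::: "memory");
	} while (pc->lock != seq);

	*seq_out = seq;
	return count;
}

int main(void)
{
	struct perf_event_attr attr;
	struct perf_event_mmap_page *pc;
	uint64_t best = ~0ULL;
	int fd, i, kept = 0;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	/* exclude_kernel stays 0: we are timing a syscall, so count kernel cycles */

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	if (pc == MAP_FAILED || !pc->cap_user_rdpmc) {
		fprintf(stderr, "self-monitoring RDPMC not available\n");
		return 1;
	}

	for (i = 0; i < 100000; i++) {
		uint32_t seq1, seq2;
		uint64_t start, end;

		start = read_cycles(pc, &seq1);
		syscall(__NR_getppid);			/* code under test */
		end = read_cycles(pc, &seq2);

		/*
		 * If the seqlock moved between the two reads, the counter was
		 * rescheduled (context switch, migration): discard the sample.
		 */
		if (seq1 != seq2)
			continue;
		kept++;
		if (end - start < best)
			best = end - start;
	}

	printf("kept %d samples, best: %llu cycles (incl. measurement overhead)\n",
	       kept, (unsigned long long)best);
	return 0;
}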