From: Måns Rullgård <mans@mansr.com>
To: Nicolas Pitre
Cc: Russell King - ARM Linux, Stephen Boyd, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org, Michal Marek,
	linux-kbuild@vger.kernel.org, Arnd Bergmann, Steven Rostedt, Thomas Petazzoni
Subject: Re: [PATCH v2 2/2] ARM: Replace calls to __aeabi_{u}idiv with udiv/sdiv instructions
References: <1448488264-23400-1-git-send-email-sboyd@codeaurora.org>
	<1448488264-23400-3-git-send-email-sboyd@codeaurora.org>
	<20151126012859.GX8644@n2100.arm.linux.org.uk>
Date: Thu, 26 Nov 2015 12:41:54 +0000
In-Reply-To: Nicolas Pitre's message of "Thu, 26 Nov 2015 00:32:45 -0500 (EST)"

Nicolas Pitre writes:

> On Thu, 26 Nov 2015, Måns Rullgård wrote:
>
>> Russell King - ARM Linux writes:
>>
>> > On Thu, Nov 26, 2015 at 12:50:08AM +0000, Måns Rullgård wrote:
>> >> If not calling the function saves an I-cache miss, the benefit can be
>> >> substantial. No, I have no proof of this being a problem, but it's
>> >> something that could happen.
>> >
>> > That's a simplistic view of modern CPUs.
>> >
>> > As I've already said, modern CPUs not only have branch prediction,
>> > they also have speculative instruction fetching and speculative data
>> > prefetching - which the CPUs that have idiv support will have.
>> >
>> > With such features, the branch predictor is able to learn that the
>> > branch will be taken, and because of the speculative instruction
>> > fetching, it can bring the cache line in so that it has the
>> > instructions it needs with minimal stalling or, if working
>> > correctly, without stalling the CPU pipeline at all.
>>
>> It doesn't matter how many fancy features the CPU has. Executing more
>> branches and using more cache lines puts additional pressure on those
>> resources, reducing overall performance. Besides, the performance
>> counters readily show that the prediction is nowhere near as perfect
>> as you seem to believe.
>
> OK... Let's try to come up with actual numbers.
>
> We know that letting gcc emit idiv by itself is the ultimate solution.
> And it is free of maintenance on our side, besides passing the
> appropriate argument to gcc of course. So this is worth doing.
>
> For the case where your kernel targets a set of machines that may or
> may not have idiv, the first step should be to patch __aeabi_uidiv and
> __aeabi_idiv. This is a pretty small and simple change that might turn
> out to be more than good enough. It is necessary anyway, as the full
> patching solution does not cover all cases.
>
> Then, IMHO, it would be a good idea to get performance numbers
> comparing that first step and the full patching solution. Of course
> the full patching will yield better performance. It has to. But if the
> difference is not significant enough, then it might not be worth
> introducing the implied complexity into mainline. And that is not
> because the approach is bad. In fact I think this is a very cool hack.
> But it comes with a maintenance cost, and that cost has to be
> justified.
>
> Just to get an idea, I produced the attached micro benchmark. I tested
> on a TC2 forced to a single Cortex-A15 core and got these results:
>
> Testing INLINE_DIV ...
>
> real    0m7.182s
> user    0m7.170s
> sys     0m0.000s
>
> Testing PATCHED_DIV ...
>
> real    0m7.181s
> user    0m7.170s
> sys     0m0.000s
>
> Testing OUTOFLINE_DIV ...
>
> real    0m7.181s
> user    0m7.170s
> sys     0m0.005s
>
> Testing LIBGCC_DIV ...
>
> real    0m18.659s
> user    0m18.635s
> sys     0m0.000s
>
> As you can see, whether the div is inline or out-of-line, and whether
> arguments are moved into r0-r1 or not, makes no difference at all on a
> Cortex-A15.
>
> Now forcing it onto a Cortex-A7 core:
>
> Testing INLINE_DIV ...
>
> real    0m8.917s
> user    0m8.895s
> sys     0m0.005s
>
> Testing PATCHED_DIV ...
>
> real    0m11.666s
> user    0m11.645s
> sys     0m0.000s
>
> Testing OUTOFLINE_DIV ...
>
> real    0m13.065s
> user    0m13.025s
> sys     0m0.000s
>
> Testing LIBGCC_DIV ...
>
> real    0m51.815s
> user    0m51.750s
> sys     0m0.005s
>
> So on a Cortex-A7 the various overheads become visible. How
> significant that is in practice with normal kernel usage, I don't
> know.

Bear in mind that in a trivial test like this, everything fits in the
L1 caches and branch prediction works perfectly. It would be more
informative to measure the effect on a load that already has some cache
and branch prediction misses.

-- 
Måns Rullgård
mans@mansr.com