Message-ID: <54D33263.4060707@linaro.org>
Date: Thu, 05 Feb 2015 09:05:39 +0000
From: Daniel Thompson
To: Stephen Boyd
CC: Thomas Gleixner, John Stultz, linux-kernel@vger.kernel.org,
    patches@linaro.org, linaro-kernel@lists.linaro.org, Sumit Semwal,
    Steven Rostedt
Subject: Re: [PATCH v3 0/4] sched_clock: Optimize and avoid deadlock during read from NMI
In-Reply-To: <20150205005034.GA30372@codeaurora.org>
References: <1421859236-19782-1-git-send-email-daniel.thompson@linaro.org>
    <1422644602-11953-1-git-send-email-daniel.thompson@linaro.org>
    <20150205005034.GA30372@codeaurora.org>

On 05/02/15 00:50, Stephen Boyd wrote:
> On 01/30, Daniel Thompson wrote:
>> This patchset optimizes the generic sched_clock implementation to
>> significantly reduce the data cache profile. It also makes it safe to call
>> sched_clock() from NMI (or FIQ on ARM).
>>
>> The data cache profile of sched_clock() in both the original code and
>> my previous patch was somewhere between 2 and 3 (64-byte) cache lines,
>> depending on alignment of struct clock_data. After patching, the cache
>> profile for the normal case should be a single cacheline.
>>
>> NMI safety was tested on i.MX6 with perf drowning the system in FIQs and
>> using the perf handler to check that sched_clock() returned monotonic
>> values. At the same time I forcefully reduced kt_wrap so that
>> update_sched_clock() is being called at >1000Hz.
>>
>> Without the patches the above system is grossly unstable, surviving
>> [9K, 115K, 25K] perf event cycles during three separate runs. With the
>> patch I ran for over 9M perf event cycles before getting bored.
>
> I wanted to see if there was any speedup from these changes so I
> made a tight loop around sched_clock() that ran for 10 seconds
> and I ran it 10 times before and after this patch series:
>
> unsigned long long clock, start_clock;
> int count = 0;
>
> clock = start_clock = sched_clock();
> while ((clock - start_clock) < 10ULL * NSEC_PER_SEC) {
> 	clock = sched_clock();
> 	count++;
> }
>
> pr_info("Made %d calls in %llu ns\n", count, clock - start_clock);
>
> Before
> ------
> Made 19218953 calls in 10000000439 ns
> Made 19212790 calls in 10000000438 ns
> Made 19217121 calls in 10000000142 ns
> Made 19227304 calls in 10000000142 ns
> Made 19217559 calls in 10000000142 ns
> Made 19230193 calls in 10000000290 ns
> Made 19212715 calls in 10000000290 ns
> Made 19234446 calls in 10000000438 ns
> Made 19226274 calls in 10000000439 ns
> Made 19236118 calls in 10000000143 ns
>
> After
> -----
> Made 19434797 calls in 10000000438 ns
> Made 19435733 calls in 10000000439 ns
> Made 19434499 calls in 10000000438 ns
> Made 19438482 calls in 10000000438 ns
> Made 19435604 calls in 10000000142 ns
> Made 19438551 calls in 10000000438 ns
> Made 19444550 calls in 10000000290 ns
> Made 19437580 calls in 10000000290 ns
> Made 19439429 calls in 10000048142 ns
> Made 19439493 calls in 10000000438 ns
>
> So it seems to be a small improvement.
>

Awesome! I guess this is mostly the effect of simplifying the suspend
logic, since the changes to the cache profile probably wouldn't reveal
much in such a tight loop. I will re-run this after acting on your other
review comments.

BTW, what device did you run this on?


Daniel.
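
(For readers who don't have the series in front of them: the NMI/FIQ-safety
claim in the cover letter rests on a seqcount/latch style read path with two
banks of read-side data. The sketch below is illustrative only; the struct,
field and function names are invented here and the overflow handling around
the mult/shift conversion is elided, so it is not code lifted from the v3
patches. It does show why a reader running in NMI context can never
deadlock: it never waits for an in-progress update, it only retries, and it
always reads from a bank that the updater is not currently touching.)

	/*
	 * Illustrative sketch only: names are invented and the cycle-to-ns
	 * conversion is simplified; not code taken from the series itself.
	 */
	#include <linux/compiler.h>
	#include <linux/seqlock.h>
	#include <linux/types.h>

	struct clock_read_sketch {
		u64 epoch_ns;			/* ns value at the last update */
		u64 epoch_cyc;			/* counter value at the last update */
		u64 mask;			/* counter wrap mask */
		u64 (*read_cycles)(void);	/* raw counter read hook */
		u32 mult;
		u32 shift;
	};

	static struct {
		seqcount_t seq;
		struct clock_read_sketch bank[2];	/* readers use bank[seq & 1] */
	} sketch;

	/* Reader: safe from NMI/FIQ because it never waits, it only retries. */
	static u64 notrace sketch_sched_clock(void)
	{
		struct clock_read_sketch *rd;
		unsigned long seq;
		u64 cyc, ns;

		do {
			seq = raw_read_seqcount(&sketch.seq);
			rd = &sketch.bank[seq & 1];
			cyc = (rd->read_cycles() - rd->epoch_cyc) & rd->mask;
			ns = rd->epoch_ns + ((cyc * rd->mult) >> rd->shift);
		} while (read_seqcount_retry(&sketch.seq, seq));

		return ns;
	}

	/* Writer: updates are serialised elsewhere, e.g. by the wrap timer. */
	static void sketch_update(const struct clock_read_sketch *rd_new)
	{
		sketch.bank[1] = *rd_new;		/* refresh the idle copy */
		raw_write_seqcount_latch(&sketch.seq);	/* steer readers to bank[1] */
		sketch.bank[0] = *rd_new;		/* now safe to refresh bank[0] */
		raw_write_seqcount_latch(&sketch.seq);	/* steer readers back */
	}

Packing every field the fast path needs into one small per-bank structure is
also what pushes the common-case footprint down towards the single cache line
mentioned in the cover letter.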