Date: Tue, 30 Jun 2015 14:18:12 +0200
From: Peter Zijlstra
To: Fredrik Markström
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, Rik van Riel, Jason Low, Frédéric Weisbecker
Subject: Re: [PATCH 1/1] cputime: Make the reported utime+stime correspond to the actual runtime.
Message-ID: <20150630121812.GG3644@twins.programming.kicks-ass.net>

On Tue, Jun 30, 2015 at 01:50:15PM +0200, Fredrik Markström wrote:
> Excellent,

Please do not top post.

> The reason I replaced the early bail with that last test is that I
> believe it needs to be done within the lock and I wanted to keep that
> region short. To be honest I'm not sure this test is needed at all
> anymore, but I couldn't make sense of the comment above the early bail
> so I didn't dare to remove it.

Ah, there's a simple reason we should keep it, apart from the wobblies
in calculating the division.

Imagine two concurrent callers, one with an rtime ahead of the other.
Let the caller with the later rtime acquire the lock first and compute
s/u-time. Once the second caller acquires the lock, we observe that its
rtime is in the past and keep the latest values.

> Regarding the lock, have you considered how many cores you need
> hammering at rusage to introduce some substantial congestion ?

Spinlock contention across 120 cores and 4 nodes is pretty bad, even
with hardly any hold time :-)

I've not investigated where the absolute pain threshold is, but given
the size (and growth) of machines these days, it seems like a prudent
thing.

> Sorry for not letting this go (I know I should) but I always feel bad
> introducing per thread data.

Yes, agreed, but a global lock is just asking for trouble. Esp. when
it's as easy as this to avoid.