Date: Wed, 20 May 2020 17:24:40 +0200
From: Oleg Nesterov
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, Andrew Fox, Stephen Johnston,
    linux-kernel@vger.kernel.org, Stanislaw Gruszka, Linus Torvalds
Subject: Re: [PATCH v2] sched/cputime: make scale_stime() more precise
Message-ID: <20200520152439.GC26470@redhat.com>
References: <20190718131834.GA22211@redhat.com>
 <20200127122817.GA10957@redhat.com>
 <20200519172506.GA317395@hirez.programming.kicks-ass.net>
In-Reply-To: <20200519172506.GA317395@hirez.programming.kicks-ass.net>

On 05/19, Peter Zijlstra wrote:
>
> > The new implementation does the additional div64_u64_rem() but according
> > to my naive measurements it is faster on x86_64, much faster if rtime/etc
> > are big enough. See
> >
> > https://lore.kernel.org/lkml/20200123130541.GA30620@redhat.com/
>
> Right, so -m32 when ran on x86_64 CPUs isn't really fair, because then
> it still has hardware fls() for ilog2() and a massively fast mult and
> division instruction. Try and run this on a puny 32bit ARM that maybe
> has a hardware multiplier on.

OK,

> Anyway, how about we write it like the below and then when some puny
> architecture comes complaining we can use Linus' original algorithm for
> their arch implementation.

Sure, I am fine either way, but...

> +static inline u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
>  {
>  	u64 q;
>
>  	asm ("mulq %2; divq %3" : "=a" (q)
> -		: "a" (a), "rm" ((u64)mul), "rm" ((u64)div)
> +		: "a" (a), "rm" (mul), "rm" (div)
>  		: "rdx");

...

> +#ifndef mul_u64_u64_div_u64
> +static inline u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 c)
> +{
> +	u64 res = 0, div, rem;
> +	int shift;
> +
> +	/* can a * b overflow ? */
> +	if (ilog2(a) + ilog2(b) > 62) {
> +		/*
> +		 * (b * a) / c is equal to
> +		 *
> +		 *	(b / c) * a +
> +		 *	(b % c) * a / c
> +		 *
> +		 * if nothing overflows. Can the 1st multiplication
> +		 * overflow? Yes, but we do not care: this can only
> +		 * happen if the end result can't fit in u64 anyway.
> +		 *
> +		 * So the code below does
> +		 *
> +		 *	res = (b / c) * a;
> +		 *	b = b % c;
> +		 */
> +		div = div64_u64_rem(b, c, &rem);
> +		res = div * a;
> +		b = rem;
> +
> +		shift = ilog2(a) + ilog2(b) - 62;
> +		if (shift > 0) {
> +			/* drop precision */
> +			b >>= shift;
> +			c >>= shift;
> +			if (!c)
> +				return res;
> +		}
> +	}
> +
> +	return res + div64_u64(a * b, c);
> +}

Note that according to my measurements the "asm" version is slower than
the generic code above when "a * b" doesn't fit u64.

Nevermind, I agree with your version. Will you send this patch or do you
want me to make V3?

Oleg.
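
[Editor's note, not part of the original thread: the generic fallback quoted
above can be exercised in userspace against a 128-bit reference, which is one
way to see the "drop precision" step in action. The sketch below is only an
illustration under stated assumptions: it assumes GCC or Clang on a 64-bit
host (for __builtin_clzll and unsigned __int128), replaces the kernel's
ilog2()/div64_u64()/div64_u64_rem() helpers with plain C operations, and the
names ilog2_u64 and mul_u64_u64_div_u64_generic are made up here.]

#include <stdint.h>
#include <stdio.h>

/* floor(log2(x)); returns 0 for x == 0, which is good enough for this test */
static inline int ilog2_u64(uint64_t x)
{
	return x ? 63 - __builtin_clzll(x) : 0;
}

/* userspace rewrite of the generic fallback quoted in the patch above */
static uint64_t mul_u64_u64_div_u64_generic(uint64_t a, uint64_t b, uint64_t c)
{
	uint64_t res = 0;
	int shift;

	/* can a * b overflow 64 bits? (same test as in the patch) */
	if (ilog2_u64(a) + ilog2_u64(b) > 62) {
		/* res = (b / c) * a;  b = b % c; */
		res = (b / c) * a;
		b %= c;

		shift = ilog2_u64(a) + ilog2_u64(b) - 62;
		if (shift > 0) {
			/* drop precision so that a * b fits in 64 bits */
			b >>= shift;
			c >>= shift;
			if (!c)
				return res;
		}
	}
	return res + a * b / c;
}

int main(void)
{
	/* values chosen so that a * b does not fit in 64 bits */
	uint64_t a = 0x00ffffffffffffffULL;
	uint64_t b = 0x0000001234567890ULL;
	uint64_t c = 0x00000000deadbeefULL;

	/* exact 128-bit result for comparison */
	unsigned __int128 exact = (unsigned __int128)a * b / c;
	uint64_t approx = mul_u64_u64_div_u64_generic(a, b, c);

	printf("exact  : %llu\n", (unsigned long long)exact);
	printf("approx : %llu\n", (unsigned long long)approx);
	return 0;
}

The two printed values agree in the high bits and differ only in the low bits
that the shift deliberately discards, which is the precision trade-off the
comment in the patch describes.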