From: Thomas Gleixner
To: Andy Lutomirski
Cc: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, nathanl@linux.ibm.com, Arnd Bergmann, Vincenzo Frascino, Andrew Lutomirski, LKML, linuxppc-dev, linux-arm-kernel, "open list: MIPS", X86 ML
Subject: Re: [RFC PATCH v4 10/11] lib: vdso: Allow arches to override the ns shift operation
In-Reply-To:
References: <877e1rfa40.fsf@nanos.tec.linutronix.de>
Date: Wed, 29 Jan 2020 08:14:27 +0100
Message-ID: <87mua64tv0.fsf@nanos.tec.linutronix.de>
Andy Lutomirski writes:
> On Thu, Jan 16, 2020 at 11:57 AM Thomas Gleixner wrote:
>>
>> Andy Lutomirski writes:
>> > On Thu, Jan 16, 2020 at 9:58 AM Christophe Leroy
>> >
>> > Would mul_u64_u64_shr() be a good alternative? Could we adjust it to
>> > assume the shift is less than 32? That function exists to benefit
>> > 32-bit arches.
>>
>> We'd want mul_u64_u32_shr() for this. The rules for mult and shift are:
>>
>
> That's what I meant to type...

Just that it does not work. The math is:

     ns = d->nsecs;    // That's the nsec value shifted left by d->shift
     ns += ((cur - d->last) & d->mask) * mult;
     ns >>= d->shift;

So we cannot use mul_u64_u32_shr() because we need the addition there
before shifting. And no, we can't drop the fractional part of d->nsecs.
Been there, done that, got sporadic time-going-backwards problems as a
reward. Need to look at that again as stuff has changed over time.

On x86 we enforce that mask is 64bit, so the & operation is not there,
but due to the nasties of TSC we have that conditional:

     if (cur > last)
             return (cur - last) * mult;
     return 0;

Christophe, on PPC the decrementer/RTC clocksource masks are 64bit as
well, so you can spare that & operation there too.

Thanks,

        tglx
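[Editor's note: the conversion tglx spells out above can be sketched as a
self-contained C program. The struct and field names below are simplified
assumptions for illustration, not the real vdso_data layout; the point is
that the base value is added in the shifted domain *before* the final
right shift, which is why a fused multiply-and-shift helper like
mul_u64_u32_shr() cannot be substituted.]

     #include <stdint.h>
     #include <stdio.h>

     /* Simplified stand-in for the vdso timekeeping data (names assumed). */
     struct vd {
             uint64_t nsecs;   /* base nanoseconds, pre-shifted left by 'shift' */
             uint64_t last;    /* cycle counter value at the last update */
             uint64_t mask;    /* clocksource counter mask */
             uint32_t mult;    /* cycles -> shifted-nanoseconds multiplier */
             uint32_t shift;
     };

     /* The exact sequence from the mail: add the shifted base, the delta
      * contribution, and only then shift down. Shifting d->nsecs down
      * first would drop its fractional part. */
     static uint64_t cycles_to_ns(const struct vd *d, uint64_t cur)
     {
             uint64_t ns = d->nsecs;

             ns += ((cur - d->last) & d->mask) * d->mult;
             return ns >> d->shift;
     }

     int main(void)
     {
             /* Toy numbers: shift = 8 and mult = 256 make one cycle
              * equal exactly one nanosecond. */
             struct vd d = {
                     .nsecs = 1000ull << 8,
                     .last  = 100,
                     .mask  = ~0ull,
                     .mult  = 256,
                     .shift = 8,
             };

             /* 50 cycles after 'last': 1000 ns base + 50 ns delta. */
             printf("%llu\n", (unsigned long long)cycles_to_ns(&d, 150));
             return 0;
     }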