Date: Tue, 12 Dec 2017 12:04:09 -0800
From: Jakub Kicinski
To: Al Viro
Cc: Linus Torvalds, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH] new byteorder primitives - ..._{replace,get}_bits()
Message-ID: <20171212120409.64b6362e@cakuba.netronome.com>
In-Reply-To: <20171212194532.GA7062@ZenIV.linux.org.uk>
References: <20171210045326.GO21978@ZenIV.linux.org.uk>
 <420a198d-61f8-81cf-646d-10446cb41def@synopsys.com>
 <20171211050520.GV21978@ZenIV.linux.org.uk>
 <20171211053803.GW21978@ZenIV.linux.org.uk>
 <20171211155422.GA12326@ZenIV.linux.org.uk>
 <20171211200224.23bc5df4@cakuba.netronome.com>
 <20171212062002.GY21978@ZenIV.linux.org.uk>
 <20171212194532.GA7062@ZenIV.linux.org.uk>

On Tue, 12 Dec 2017 19:45:32 +0000, Al Viro wrote:
> On Tue, Dec 12, 2017 at 06:20:02AM +0000, Al Viro wrote:
> 
> > Umm... What's wrong with
> > 
> > #define FIELD_FOO 0,4
> > #define FIELD_BAR 6,12
> > #define FIELD_BAZ 18,14
> > 
> > A macro can bloody well expand to any sequence of tokens - le32_get_bits(v, FIELD_BAZ)
> > will become le32_get_bits(v, 18, 14) just fine.  What's the problem with that?
> 
> FWIW, if you want to use the mask, __builtin_ffsll() is not the only way to do
> it - you don't need the shift.  Multiplier would do just as well, and that can
> be had easier.
> If mask = (2*a + 1) << n (i.e. n is the number of trailing zeroes in mask), then
> 
> 	mask - 1 = ((2*a) << n) + ((1 << n) - 1)
> 	mask ^ (mask - 1) = (1 << (n + 1)) - 1
> 
> and
> 
> 	mask & (mask ^ (mask - 1)) = 1 << n
> 
> IOW, with
> 
> static __always_inline u64 mask_to_multiplier(u64 mask)
> {
> 	return mask & (mask ^ (mask - 1));
> }
> 
> we could do
> 
> static __always_inline __le64 le64_replace_bits(__le64 old, u64 v, u64 mask)
> {
> 	__le64 m = cpu_to_le64(mask);
> 	return (old & ~m) | (cpu_to_le64(v * mask_to_multiplier(mask)) & m);
> }
> 
> static __always_inline u64 le64_get_bits(__le64 v, u64 mask)
> {
> 	return (le64_to_cpu(v) & mask) / mask_to_multiplier(mask);
> }
> 
> etc.  Compiler will turn those into shifts...  I can live with either calling
> convention.
> 
> Comments?

Very nice!  The compile-time check that the value fits in the field covered
by the mask (when both are known constants) has helped me catch bugs early a
few times over the years, so if it could be preserved we could maybe even
drop the FIELD_* macros and just use this approach?