From: "Jason A. Donenfeld"
Date: Mon, 12 Dec 2016 22:57:14 +0100
Subject: Re: [PATCH v2] siphash: add cryptographically secure hashtable function
To: Linus Torvalds
Cc: "kernel-hardening@lists.openwall.com", LKML, Linux Crypto Mailing List,
    George Spelvin, Scott Bauer, Andi Kleen, Andy Lutomirski, Greg KH,
    Jean-Philippe Aumasson, "Daniel J. Bernstein"

On Mon, Dec 12, 2016 at 10:44 PM, Jason A. Donenfeld wrote:
> #if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
>         switch (left) {
>         case 0: break;
>         case 1: b |= data[0]; break;
>         case 2: b |= get_unaligned_le16(data); break;
>         case 4: b |= get_unaligned_le32(data); break;
>         default:
>                 b |= le64_to_cpu(load_unaligned_zeropad(data) &
>                                  bytemask_from_count(left));
>                 break;
>         }
> #else
>         switch (left) {
>         case 7: b |= ((u64)data[6]) << 48;
>         case 6: b |= ((u64)data[5]) << 40;
>         case 5: b |= ((u64)data[4]) << 32;
>         case 4: b |= get_unaligned_le32(data); break;
>         case 3: b |= ((u64)data[2]) << 16;
>         case 2: b |= get_unaligned_le16(data); break;
>         case 1: b |= data[0];
>         }
> #endif

As it turns out, perhaps unsurprisingly, the code generation here is
really not nice, resulting in many branches instead of a computed
jump. I'll submit v3 with just a branch-less load_unaligned_zeropad
for the 64-bit/dcache case and Duff's device for the other case.
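
For reference, a minimal sketch of the tail handling described above,
reusing the b/data/left names from the quoted hunk and assuming the usual
word-at-a-time helpers are available. This is illustrative only, not the
actual v3 patch:

#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
        /* Single masked word load, no switch and no per-length branch:
         * load_unaligned_zeropad() reads a full word without faulting past
         * the buffer, and bytemask_from_count() keeps only the `left`
         * valid low-order bytes.
         */
        if (left)
                b |= le64_to_cpu(load_unaligned_zeropad(data) &
                                 bytemask_from_count(left));
#else
        /* Fall-through switch (the "Duff's device" mentioned above),
         * folding in one trailing byte per case, which compiles to a
         * computed jump rather than a chain of branches.
         */
        switch (left) {
        case 7: b |= ((u64)data[6]) << 48;
        case 6: b |= ((u64)data[5]) << 40;
        case 5: b |= ((u64)data[4]) << 32;
        case 4: b |= ((u64)data[3]) << 24;
        case 3: b |= ((u64)data[2]) << 16;
        case 2: b |= ((u64)data[1]) << 8;
        case 1: b |= data[0];
        }
#endif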