From: "Jason A. Donenfeld"
Subject: Re: [PATCH v2] siphash: add cryptographically secure hashtable function
Date: Mon, 12 Dec 2016 22:57:14 +0100
To: Linus Torvalds
Cc: "kernel-hardening@lists.openwall.com", LKML, Linux Crypto Mailing List, George Spelvin, Scott Bauer, Andi Kleen, Andy Lutomirski, Greg KH, Jean-Philippe Aumasson, "Daniel J. Bernstein"

On Mon, Dec 12, 2016 at 10:44 PM, Jason A. Donenfeld wrote:
> #if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
>         switch (left) {
>         case 0: break;
>         case 1: b |= data[0]; break;
>         case 2: b |= get_unaligned_le16(data); break;
>         case 4: b |= get_unaligned_le32(data); break;
>         default:
>                 b |= le64_to_cpu(load_unaligned_zeropad(data) &
>                                  bytemask_from_count(left));
>                 break;
>         }
> #else
>         switch (left) {
>         case 7: b |= ((u64)data[6]) << 48;
>         case 6: b |= ((u64)data[5]) << 40;
>         case 5: b |= ((u64)data[4]) << 32;
>         case 4: b |= get_unaligned_le32(data); break;
>         case 3: b |= ((u64)data[2]) << 16;
>         case 2: b |= get_unaligned_le16(data); break;
>         case 1: b |= data[0];
>         }
> #endif

As it turns out, perhaps unsurprisingly, the code generation here is really not nice, resulting in many branches instead of a computed jump. I'll submit v3 with just a branch-less load_unaligned_zeropad for the 64-bit/dcache case and the Duff's-device-style fall-through switch for the other case.