From: "Jason A. Donenfeld" <Jason@zx2c4.com>
Date: Wed, 14 Dec 2016 22:01:29 +0100
Subject: Re: [PATCH v2 3/4] secure_seq: use siphash24 instead of md5_transform
To: Tom Herbert
Cc: David Laight, Netdev, kernel-hardening@lists.openwall.com, Andi Kleen, LKML, Linux Crypto Mailing List

On Wed, Dec 14, 2016 at 9:12 PM, Tom Herbert wrote:
> If you pad the data structure to 64 bits then we can call the version
> of siphash that only deals in 64-bit words. Writing a zero in the
> padding will be cheaper than dealing with odd lengths in siphash24.

On Wed, Dec 14, 2016 at 9:27 PM, Hannes Frederic Sowa wrote:
> What I don't really understand is that the addition of this complexity
> actually reduces the performance, as you have to take the "if (left)"
> branch during hashing and it causes you to make a load_unaligned_zeropad.

Oh, duh, you guys are right. Fixed in my repo [1]. I'll submit the next
version in a day or so to let some other comments come in.

Thanks again for your reviews.

Jason

[1] https://git.zx2c4.com/linux-dev/log/?h=siphash