Subject: Re: [PATCH 2/2] The new jhash implementation
From: Eric Dumazet
To: Changli Gao
Cc: Jozsef Kadlecsik, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, netfilter-devel@vger.kernel.org, Linus Torvalds, Rusty Russell
Date: Thu, 25 Nov 2010 15:05:46 +0100
Message-ID: <1290693946.2858.323.camel@edumazet-laptop>
References: <1290690908-794-1-git-send-email-kadlec@blackhole.kfki.hu> <1290690908-794-2-git-send-email-kadlec@blackhole.kfki.hu> <1290690908-794-3-git-send-email-kadlec@blackhole.kfki.hu> <1290692943.2858.303.camel@edumazet-laptop>

On Thursday, 25 November 2010 at 21:55 +0800, Changli Gao wrote:
> > I suggest:
> >
> > #include
> > ...
> > a += __get_unaligned_cpu32(k);
> > b += __get_unaligned_cpu32(k+4);
> > c += __get_unaligned_cpu32(k+8);
> >
> > Fits nicely in registers.
> >
> I think you mean get_unaligned_le32().
No, I meant __get_unaligned_cpu32(). We do the same thing in jhash2():

	a += k[0];
	b += k[1];
	c += k[2];

We don't care about the byte order of the 32-bit quantity we are adding to a, b, or c, as long as it is consistent for the current machine ;)

get_unaligned_le32() would be slow on big-endian arches.