From: "Darrick J. Wong" Subject: [PATCH 08/13] crc32: Optimize loop counter for x86 Date: Wed, 18 Jan 2012 14:28:29 -0800 Message-ID: <20120118222829.4079.22103.stgit@elm3c44.beaverton.ibm.com> References: <20120118222733.4079.13377.stgit@elm3c44.beaverton.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 7bit Cc: Theodore Tso , Joakim Tjernlund , Bob Pearson , linux-kernel , Andreas Dilger , linux-crypto , linux-fsdevel , Mingming Cao , linux-ext4@vger.kernel.org To: Andrew Morton , Herbert Xu , "Darrick J. Wong" Return-path: In-Reply-To: <20120118222733.4079.13377.stgit@elm3c44.beaverton.ibm.com> Sender: linux-kernel-owner@vger.kernel.org List-Id: linux-ext4.vger.kernel.org Add two changes that improve the performance of x86 systems 1. replace main loop with incrementing counter this change improves the performance of the selftest by about 5-6% on Nehalem CPUs. The apparent reason is that the compiler can use the loop index to perform an indexed memory access. This is reported to make the performance of PowerPC CPUs to get worse. 2. replace the rem_len loop with incrementing counter this change improves the performance of the selftest, which has more than the usual number of occurances, by about 1-2% on x86 CPUs. In actual work loads the length is most often a multiple of 4 bytes and this code does not get executed as often if at all. Again this change is reported to make the performance of PowerPC get worse. From: Bob Pearson Signed-off-by: Bob Pearson [djwong@us.ibm.com: Minor changelog tweaks] Signed-off-by: Darrick J. Wong --- lib/crc32.c | 13 +++++++++++++ 1 files changed, 13 insertions(+), 0 deletions(-) diff --git a/lib/crc32.c b/lib/crc32.c index 826e163..4eac9c7 100644 --- a/lib/crc32.c +++ b/lib/crc32.c @@ -66,6 +66,9 @@ crc32_body(u32 crc, unsigned char const *buf, size_t len, const u32 (*tab)[256]) # endif const u32 *b; size_t rem_len; +# ifdef CONFIG_X86 + size_t i; +# endif const u32 *t0=tab[0], *t1=tab[1], *t2=tab[2], *t3=tab[3]; const u32 *t4 = tab[4], *t5 = tab[5], *t6 = tab[6], *t7 = tab[7]; u32 q; @@ -86,7 +89,12 @@ crc32_body(u32 crc, unsigned char const *buf, size_t len, const u32 (*tab)[256]) # endif b = (const u32 *)buf; +# ifdef CONFIG_X86 + --b; + for (i = 0; i < len; i++) { +# else for (--b; len; --len) { +# endif q = crc ^ *++b; /* use pre increment for speed */ # if CRC_LE_BITS == 32 crc = DO_CRC4; @@ -100,9 +108,14 @@ crc32_body(u32 crc, unsigned char const *buf, size_t len, const u32 (*tab)[256]) /* And the last few bytes */ if (len) { u8 *p = (u8 *)(b + 1) - 1; +# ifdef CONFIG_X86 + for (i = 0; i < len; i++) + DO_CRC(*++p); /* use pre increment for speed */ +# else do { DO_CRC(*++p); /* use pre increment for speed */ } while (--len); +# endif } return crc; #undef DO_CRC