From: Andrew Pinski
Date: Tue, 22 Dec 2015 15:32:19 -0800
Subject: Re: [PATCH] ARM64: Improve copy_page for 128 cache line sizes.
To: "linux-arm-kernel@lists.infradead.org", Arnd Bergmann
Cc: Will Deacon, Andrew Pinski, pinsia@gmail.com, LKML

On Tue, Dec 21, 2015 at 5:43 AM, Arnd Bergmann wrote:
>
> On Monday 21 December 2015, Will Deacon wrote:
>> On Sat, Dec 19, 2015 at 04:11:18PM -0800, Andrew Pinski wrote:
>> > Adding a check for the cache line size is not much overhead.
>> > Special case the 128 byte cache line size.
>> > This improves copy_page by 85% on ThunderX compared to the original
>> > implementation.
>>
>> So this patch seems to:
>>
>>   - Align the loop
>>   - Increase the prefetch size
>>   - Unroll the loop once
>>
>> Do you know where your 85% boost comes from between these? I'd really
>> like to avoid having multiple versions of copy_page, if possible, but
>> maybe we could end up with something that works well enough regardless
>> of cacheline size. Understanding what your bottleneck is would help to
>> lead us in the right direction.

I think it is the prefetching. ThunderX T88 pass 1 and pass 2 do not have
a hardware prefetcher, so prefetching only half a cache line ahead does
not help at all.

>>
>> Also, how are you measuring the improvement? If you can share your
>> test somewhere, I can see how it affects the other systems I have
>> access to.

You can find my benchmark at
https://github.com/apinski-cavium/copy_page_benchmark .
In that repository:
  copy_page         is my previous patch,
  copy_page128      is just the unrolled version with only 128 byte prefetching,
  copy_page64       is the original code, and
  copy_page64unroll is the new patch which I will be sending out soon.

>
> A related question would be how other CPU cores are affected by the change.
> The test for the cache line size is going to take a few cycles, possibly a
> lot on certain implementations, e.g. if we ever get one where 'mrs' is
> microcoded or trapped by a hypervisor.
>
> Are there any possible downsides to using the ThunderX version on other
> microarchitectures too and skipping the check?

Yes, that is a good idea. I will send out a new patch shortly that just
unrolls the loop while keeping the two prefetch instructions.

Thanks,
Andrew Pinski

>
> Arnd
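
P.S. To make that concrete, here is a rough C model of the loop shape I have
in mind: move 128 bytes per iteration and keep two prefetches running ahead
of the loads, so the same code behaves sensibly whether the cache line is
64 or 128 bytes. This is only a sketch for discussion (the real change stays
in arch/arm64/lib/copy_page.S as hand-written assembly), and the function
and macro names below are invented for the example, not anything in the tree.

    #include <stddef.h>
    #include <string.h>

    #define SKETCH_PAGE_SIZE 4096   /* illustrative; the kernel uses PAGE_SIZE */

    void copy_page_sketch(void *dst, const void *src)
    {
            const char *s = src;
            char *d = dst;
            size_t off;

            for (off = 0; off < SKETCH_PAGE_SIZE; off += 128) {
                    /*
                     * Keep two prefetches ahead of the loads.  On a 64 byte
                     * line machine this touches the next two lines; on a
                     * 128 byte line machine both land in the same line,
                     * which is harmless.
                     */
                    if (off + 256 <= SKETCH_PAGE_SIZE) {
                            __builtin_prefetch(s + off + 128, 0, 0);
                            __builtin_prefetch(s + off + 192, 0, 0);
                    }

                    /*
                     * Move one 128 byte block; the compiler can expand this
                     * fixed-size memcpy inline into a short run of load/store
                     * pair instructions on AArch64.
                     */
                    memcpy(d + off, s + off, 128);
            }
    }

The bound on the prefetches is only there to keep the C well defined near the
end of the page; the assembly does not need it because prfm is a hint and
never faults.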