Date: Mon, 27 Mar 2023 11:11:40 +0100
From: Catalin Marinas
To: David Laight
Cc: 'Mark Rutland', linux-kernel@vger.kernel.org, agordeev@linux.ibm.com,
    aou@eecs.berkeley.edu, bp@alien8.de, dave.hansen@linux.intel.com,
    davem@davemloft.net, gor@linux.ibm.com, hca@linux.ibm.com,
    linux-arch@vger.kernel.org, linux@armlinux.org.uk, mingo@redhat.com,
    palmer@dabbelt.com, paul.walmsley@sifive.com, robin.murphy@arm.com,
    tglx@linutronix.de, torvalds@linux-foundation.org,
    viro@zeniv.linux.org.uk, will@kernel.org
, "hca@linux.ibm.com" , "linux-arch@vger.kernel.org" , "linux@armlinux.org.uk" , "mingo@redhat.com" , "palmer@dabbelt.com" , "paul.walmsley@sifive.com" , "robin.murphy@arm.com" , "tglx@linutronix.de" , "torvalds@linux-foundation.org" , "viro@zeniv.linux.org.uk" , "will@kernel.org" Subject: Re: [PATCH v2 1/4] lib: test copy_{to,from}_user() Message-ID: References: <20230321122514.1743889-1-mark.rutland@arm.com> <20230321122514.1743889-2-mark.rutland@arm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Spam-Status: No, score=-2.0 required=5.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Mar 23, 2023 at 10:16:12PM +0000, David Laight wrote: > From: Mark Rutland > > Sent: 22 March 2023 14:05 > .... > > > IIUC, in such tests you only vary the destination offset. Our copy > > > routines in general try to align the source and leave the destination > > > unaligned for performance. It would be interesting to add some variation > > > on the source offset as well to spot potential issues with that part of > > > the memcpy routines. > > > > I have that on my TODO list; I had intended to drop that into the > > usercopy_params. The only problem is that the cross product of size, > > src_offset, and dst_offset gets quite large. > > I thought that is was better to align the writes and do misaligned reads. We inherited the memcpy/memset routines from the optimised cortex strings library (fine-tuned by the toolchain people for various Arm microarchitectures). For some CPUs with less aggressive prefetching it's probably marginally faster to align the reads instead of writes (as multiple unaligned writes are usually combined in the write buffer somewhere). Also, IIRC for some small copies (less than 16 bytes), our routines don't bother with any alignment at all. > Although maybe copy_to/from_user() would be best aligning the user address > (to avoid page faults part way through a misaligned access). In theory only copy_to_user() needs the write aligned if we want strict guarantees of what was written. For copy_from_user() we can work around by falling back to a byte read. > OTOH, on x86, is it even worth bothering at all. > I have measured a performance drop for misaligned reads, but it > was less than 1 clock per cache line in a test that was doing > 2 misaligned reads in at least some of the clock cycles. > I think the memory read path can do two AVX reads each clock. > So doing two misaligned 64bit reads isn't stressing it. I think that's what Mark found as well in his testing, though I'm sure one can build a very specific benchmark that shows a small degradation. -- Catalin