Date: Wed, 22 Mar 2023 14:48:26 +0000
From: Catalin Marinas
To: Mark Rutland
Cc: linux-kernel@vger.kernel.org, agordeev@linux.ibm.com,
	aou@eecs.berkeley.edu, bp@alien8.de, dave.hansen@linux.intel.com,
	davem@davemloft.net, gor@linux.ibm.com, hca@linux.ibm.com,
	linux-arch@vger.kernel.org, linux@armlinux.org.uk, mingo@redhat.com,
	palmer@dabbelt.com, paul.walmsley@sifive.com, robin.murphy@arm.com,
	tglx@linutronix.de, torvalds@linux-foundation.org,
	viro@zeniv.linux.org.uk, will@kernel.org
Subject: Re: [PATCH v2 3/4] arm64: fix __raw_copy_to_user semantics
References: <20230321122514.1743889-1-mark.rutland@arm.com>
 <20230321122514.1743889-4-mark.rutland@arm.com>
In-Reply-To: <20230321122514.1743889-4-mark.rutland@arm.com>

On Tue, Mar 21, 2023 at 12:25:13PM +0000, Mark Rutland wrote:
> For some combinations of sizes and alignments __{arch,raw}_copy_to_user
> will copy some bytes between (to + size - N) and (to + size), but will
> never modify bytes past (to + size).
>
> This violates the documentation in <linux/uaccess.h>, which states:
>
> > If raw_copy_{to,from}_user(to, from, size) returns N, size - N bytes
> > starting at to must become equal to the bytes fetched from the
> > corresponding area starting at from. All data past to + size - N must
> > be left unmodified.
>
> This can be demonstrated through testing, e.g.
>
> | # test_copy_to_user: EXPECTATION FAILED at lib/usercopy_kunit.c:287
> | post-destination bytes modified (dst_page[4082]=0x1, offset=4081, size=16, ret=15)
> | [FAILED] 16 byte copy
>
> This happens because __arch_copy_to_user() can make unaligned stores
> to the userspace buffer, and the ARM architecture permits (but does not
> require) that such unaligned stores write some bytes before raising a
> fault (per ARM DDI 0487I.a Section B2.2.1 and Section B2.7.1). The
> extable fixup handlers in __arch_copy_to_user() assume that any faulting
> store has failed entirely, and so under-report the number of bytes
> copied when an unaligned store writes some bytes before faulting.
I find the Arm ARM hard to parse (no surprise here). Do you happen to
know what the behaviour is for the new CPY instructions? I'd very much
like to use those for uaccess as well eventually, but if they have the
same imp def behaviour, I'd rather relax the documentation and continue
to live with the current behaviour.

> The only architecturally guaranteed way to avoid this is to only use
> aligned stores to write to user memory. This patch rewrites
> __arch_copy_to_user() to only access the user buffer with aligned
> stores, such that the bytes written can always be determined reliably.

Can we not fall back to byte-at-a-time? There's still a potential race
if the page becomes read-only, for example. Well, probably not worth it
if we decide to go this route.

Where we may notice some small performance degradation is copy_to_user()
where the reads from the source end up unaligned due to the destination
buffer alignment. I doubt that's a common case though, and most CPUs can
probably cope with this just fine.

--
Catalin