From: Christoph Hellwig <hch@lst.de>
To: Paul Walmsley, Palmer Dabbelt, Arnd Bergmann, Alexander Viro
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-arch@vger.kernel.org
Subject: [PATCH 3/8] asm-generic: fix unaligned access handling in raw_copy_{from,to}_user
Date: Fri, 4 Sep 2020 18:52:11 +0200
Message-Id: <20200904165216.1799796-4-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200904165216.1799796-1-hch@lst.de>
References: <20200904165216.1799796-1-hch@lst.de>

Use get_unaligned and put_unaligned for the small constant-size cases
in the generic uaccess routines.  This ensures they can be used on
architectures that do not support unaligned loads and stores, while
remaining a no-op on those that do.  It also allows dropping the
CONFIG_64BIT ifdefs around the 8-byte case, as the helpers handle u64
accesses on 32-bit architectures as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/asm-generic/uaccess.h | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h
index cc3b2c8b68fab4..768502bbfb154e 100644
--- a/include/asm-generic/uaccess.h
+++ b/include/asm-generic/uaccess.h
@@ -36,19 +36,17 @@ raw_copy_from_user(void *to, const void __user * from, unsigned long n)
 	if (__builtin_constant_p(n)) {
 		switch(n) {
 		case 1:
-			*(u8 *)to = *(u8 __force *)from;
+			*(u8 *)to = get_unaligned((u8 __force *)from);
 			return 0;
 		case 2:
-			*(u16 *)to = *(u16 __force *)from;
+			*(u16 *)to = get_unaligned((u16 __force *)from);
 			return 0;
 		case 4:
-			*(u32 *)to = *(u32 __force *)from;
+			*(u32 *)to = get_unaligned((u32 __force *)from);
 			return 0;
-#ifdef CONFIG_64BIT
 		case 8:
-			*(u64 *)to = *(u64 __force *)from;
+			*(u64 *)to = get_unaligned((u64 __force *)from);
 			return 0;
-#endif
 		}
 	}

@@ -62,19 +60,17 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	if (__builtin_constant_p(n)) {
 		switch(n) {
 		case 1:
-			*(u8 __force *)to = *(u8 *)from;
+			put_unaligned(*(u8 *)from, (u8 __force *)to);
 			return 0;
 		case 2:
-			*(u16 __force *)to = *(u16 *)from;
+			put_unaligned(*(u16 *)from, (u16 __force *)to);
 			return 0;
 		case 4:
-			*(u32 __force *)to = *(u32 *)from;
+			put_unaligned(*(u32 *)from, (u32 __force *)to);
 			return 0;
-#ifdef CONFIG_64BIT
 		case 8:
-			*(u64 __force *)to = *(u64 *)from;
+			put_unaligned(*(u64 *)from, (u64 __force *)to);
 			return 0;
-#endif
 		default:
 			break;
 		}
-- 
2.28.0
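
[Editor's note, for context on why the change is safe on strict-alignment
CPUs: get_unaligned() and put_unaligned() avoid the alignment assumption
that a plain pointer dereference carries. Below is a minimal userspace
sketch of the common memcpy-based fallback idea; the my_* names are
hypothetical stand-ins for illustration, not the kernel's actual helpers,
and this assumes the compiler lowers the fixed-size memcpy to byte
accesses where the hardware requires it.]

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Read a 32-bit value from a possibly misaligned address. */
static inline uint32_t my_get_unaligned_u32(const void *p)
{
	uint32_t val;

	/*
	 * memcpy makes no alignment assumption; compilers lower this
	 * fixed-size copy to a single load on machines with efficient
	 * unaligned access, and to byte loads elsewhere, so it never
	 * traps.
	 */
	memcpy(&val, p, sizeof(val));
	return val;
}

/* Write a 32-bit value to a possibly misaligned address. */
static inline void my_put_unaligned_u32(uint32_t val, void *p)
{
	memcpy(p, &val, sizeof(val));
}

int main(void)
{
	unsigned char buf[8] = { 0 };

	/*
	 * buf + 1 is misaligned for a u32; a plain
	 * *(uint32_t *)(buf + 1) store could fault on CPUs without
	 * hardware unaligned access support.
	 */
	my_put_unaligned_u32(0xdeadbeefu, buf + 1);
	printf("0x%08x\n", my_get_unaligned_u32(buf + 1));
	return 0;
}

[With helpers of this shape, the constant-size cases in the patch compile
to the same single load or store as before on architectures with efficient
unaligned access, which is why the change is a no-op there.]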