From: Linus Torvalds
Date: Thu, 16 Dec 2021 09:42:41 -0800
Subject: Re: [PATCH v2 00/13] Unify asm/unaligned.h around struct helper
To: Ard Biesheuvel
Cc: Arnd Bergmann, "Jason A. Donenfeld", Johannes Berg, Kees Cook,
    Nick Desaulniers, linux-arch, Vineet Gupta, Amitkumar Karwar,
    Benjamin Herrenschmidt, Borislav Petkov, Eric Dumazet,
    Florian Fainelli, Ganapathi Bhat, Geert Uytterhoeven,
    "H. Peter Anvin", Ingo Molnar, Jakub Kicinski, James Morris,
    Jens Axboe, John Johansen, Jonas Bonn, Kalle Valo,
    Michael Ellerman, Paul Mackerras, Rich Felker,
    "Richard Russon (FlatCap)", Russell King, "Serge E. Hallyn",
    Sharvari Harisangam, Stafford Horne, Stefan Kristiansson,
    Thomas Gleixner, Vladimir Oltean, Xinming Hu, Yoshinori Sato,
    X86 ML, Linux Kernel Mailing List, Linux ARM, linux-m68k,
    Linux Crypto Mailing List, openrisc@lists.librecores.org,
    "open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)", Linux-sh list,
    "open list:SPARC + UltraSPARC (sparc/sparc64)",
    linux-ntfs-dev@lists.sourceforge.net, linux-block, linux-wireless,
    "open list:BPF JIT for MIPS (32-BIT AND 64-BIT)", LSM List
List-ID: linux-kernel@vger.kernel.org

On Thu, Dec 16, 2021 at 9:29 AM Ard Biesheuvel wrote:
>
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is used in many places to
> conditionally emit code that violates C alignment rules. E.g., there
> is this example in Documentation/core-api/unaligned-memory-access.rst:
>
>   bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
>   {
>   #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>           u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
>                      ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));
>           return fold == 0;
>   #else

It probably works fine in practice - the one case we had was really
pretty special, and was about the vectorizer doing odd things.

But I think we should strive to convert these to use "get_unaligned()",
since the code generation for it is fine.

It still often makes sense to have that test for the config variable,
simply because the approach might be different if we know unaligned
accesses are slow.

So I'll happily take patches that do obvious conversions to
get_unaligned() where they make sense, but I don't think we should
consider this some huge hard requirement.

               Linus
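
[Editorial note: the conversion Linus describes can be sketched in plain C. The memcpy-through-a-local idiom below is the standard portable way to express an unaligned load, and compilers lower it to a single load instruction on targets where unaligned access is cheap; the `get_unaligned_u32`/`get_unaligned_u16` helpers here are illustrative stand-ins for the kernel's actual `get_unaligned()` API, not its real implementation.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef uint8_t u8;
typedef uint16_t u16;
typedef uint32_t u32;

/* Hypothetical stand-ins for the kernel's get_unaligned() helpers.
 * A memcpy into a local is valid C regardless of alignment, and on
 * architectures with efficient unaligned access it compiles down to
 * a single load. */
static inline u32 get_unaligned_u32(const void *p)
{
	u32 v;
	memcpy(&v, p, sizeof(v));
	return v;
}

static inline u16 get_unaligned_u16(const void *p)
{
	u16 v;
	memcpy(&v, p, sizeof(v));
	return v;
}

/* The documentation example rewritten without the cast-based
 * accesses; the #ifdef fast path becomes unnecessary because the
 * helpers are always-correct C. */
bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
{
	u32 fold = (get_unaligned_u32(addr1) ^ get_unaligned_u32(addr2)) |
		   (get_unaligned_u16(addr1 + 4) ^ get_unaligned_u16(addr2 + 4));
	return fold == 0;
}
```

This matches the point in the reply: keeping the `CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS` test can still be worthwhile when an algorithmically different slow path exists, but a plain cast-based load has no advantage over the helper.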