From: Linus Torvalds
Date: Thu, 16 Dec 2021 09:42:41 -0800
Subject: Re: [PATCH v2 00/13] Unify asm/unaligned.h around struct helper
To: Ard Biesheuvel
Cc: Arnd Bergmann, "Jason A. Donenfeld", Johannes Berg, Kees Cook, Nick Desaulniers, linux-arch, Vineet Gupta, Amitkumar Karwar, Benjamin Herrenschmidt, Borislav Petkov, Eric Dumazet, Florian Fainelli, Ganapathi Bhat, Geert Uytterhoeven, "H. Peter Anvin", Ingo Molnar, Jakub Kicinski, James Morris, Jens Axboe, John Johansen, Jonas Bonn, Kalle Valo, Michael Ellerman, Paul Mackerras, Rich Felker, "Richard Russon (FlatCap)", Russell King, "Serge E. Hallyn", Sharvari Harisangam, Stafford Horne, Stefan Kristiansson, Thomas Gleixner, Vladimir Oltean, Xinming Hu, Yoshinori Sato, X86 ML, Linux Kernel Mailing List, Linux ARM, linux-m68k, Linux Crypto Mailing List, openrisc@lists.librecores.org, "open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)", Linux-sh list, "open list:SPARC + UltraSPARC (sparc/sparc64)", linux-ntfs-dev@lists.sourceforge.net, linux-block, linux-wireless, "open list:BPF JIT for MIPS (32-BIT AND 64-BIT)", LSM List
References: <20210514100106.3404011-1-arnd@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

On Thu, Dec 16, 2021 at 9:29 AM Ard Biesheuvel wrote:
>
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is used in many places to
> conditionally emit code that violates C alignment rules. E.g., there
> is this example in Documentation/core-api/unaligned-memory-access.rst:
>
>     bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
>     {
>     #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>             u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
>                        ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));
>
>             return fold == 0;
>     #else

It probably works fine in practice - the one case we had was really
pretty special, and about the vectorizer doing odd things.

But I think we should strive to convert these to use
"get_unaligned()", since the code generation is fine.

It still often makes sense to have that test for the config variable,
simply because the approach might be different if we know unaligned
accesses are slow.

So I'll happily take patches that do obvious conversions to
get_unaligned() where they make sense, but I don't think we should
consider this some huge hard requirement.

             Linus