From: Ard Biesheuvel
Date: Mon, 8 Nov 2021 12:29:04 +0100
Subject: Re: [PATCH v6 2/2] arm64: implement support for static call trampolines
To: Peter Zijlstra
Cc: Linux ARM, Linux Kernel Mailing List, Mark Rutland, Quentin Perret,
	Catalin Marinas, James Morse, Will Deacon, Frederic Weisbecker,
	Kees Cook, Sami Tolvanen, Andy Lutomirski, Josh Poimboeuf,
	Steven Rostedt
References: <20211105145917.2828911-1-ardb@kernel.org>
	<20211105145917.2828911-3-ardb@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 8 Nov 2021 at 11:23, Peter Zijlstra wrote:
>
> On Fri, Nov 05, 2021 at 03:59:17PM +0100, Ard Biesheuvel wrote:
> > diff --git a/arch/arm64/include/asm/static_call.h b/arch/arm64/include/asm/static_call.h
> > new file mode 100644
> > index 000000000000..6ee918991510
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/static_call.h
> > @@ -0,0 +1,40 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _ASM_STATIC_CALL_H
> > +#define _ASM_STATIC_CALL_H
> > +
> > +/*
> > + * The sequence below is laid out in a way that guarantees that the literal and
> > + * the instruction are always covered by the same cacheline, and can be updated
> > + * using a single store-pair instruction (provided that we rewrite the BTI C
> > + * instruction as well). This means the literal and the instruction are always
> > + * in sync when observed via the D-side.
> > + *
> > + * However, this does not guarantee that the I-side will catch up immediately
> > + * as well: until the I-cache maintenance completes, CPUs may branch to the old
> > + * target, or execute a stale NOP or RET. We deal with this by writing the
> > + * literal unconditionally, even if it is 0x0 or the branch is in range. That
> > + * way, a stale NOP will fall through and call the new target via an indirect
> > + * call. Stale RETs or Bs will be taken as before, and branch to the old
> > + * target.
> > + */
>
> Thanks for the comment!
>
> > diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> > index 771f543464e0..a265a87d4d9e 100644
> > --- a/arch/arm64/kernel/patching.c
> > +++ b/arch/arm64/kernel/patching.c
> >
> > +static void *strip_cfi_jt(void *addr)
> > +{
> > +	if (IS_ENABLED(CONFIG_CFI_CLANG)) {
> > +		void *p = addr;
> > +		u32 insn;
> > +
> > +		/*
> > +		 * Taking the address of a function produces the address of the
> > +		 * jump table entry when Clang CFI is enabled. Such entries are
> > +		 * ordinary jump instructions, preceded by a BTI C instruction
> > +		 * if BTI is enabled for the kernel.
> > +		 */
> > +		if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
> > +			p += 4;
>
> Perhaps:
> 	if (aarch64_insn_is_bti(le32_to_cpup(p)))

That instruction does not exist yet, and it raises the question of which
type of BTI instruction we want to detect.

> 		p += 4;
>
> Perhapser still, add:
> 	else
> 		WARN_ON(IS_ENABLED(CONFIG_ARM64_BTI_KERNEL));
>

There's already a WARN() below that will trigger and return the original
address if the entry did not have the expected layout, i.e., a direct
branch at offset 0x0 or 0x4, depending on whether BTI is on.

So I could add a WARN() here as well, but I'd prefer to keep the one at
the bottom, which makes the one here slightly redundant.

> > +
> > +		insn = le32_to_cpup(p);
> > +		if (aarch64_insn_is_b(insn))
> > +			return p + aarch64_get_branch_offset(insn);
> > +
> > +		WARN_ON(1);
> > +	}
> > +	return addr;
> > +}

Also, can this please have a comment decrying the lack of a built-in for
this?

Sure.
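[Editor's note: for readers following the decode that strip_cfi_jt() relies
on, the architectural encoding of an unconditional B is fixed: 0b000101 in
the top six bits, with a signed 26-bit word offset (imm26) in the low bits.
A standalone C sketch of roughly what aarch64_insn_is_b() and
aarch64_get_branch_offset() do follows; the helper names here are
hypothetical stand-ins, not the kernel's implementation.]

```c
#include <stdint.h>

/*
 * Sketch only, with hypothetical names. An unconditional B instruction
 * encodes 0b000101 in bits [31:26]; BL differs only in bit 31, so we
 * must compare all six top bits. Bits [25:0] hold a signed word offset.
 */
static int insn_is_b(uint32_t insn)
{
	return (insn & 0xfc000000u) == 0x14000000u;
}

static int64_t branch_offset(uint32_t insn)
{
	int64_t imm26 = insn & 0x03ffffffu;

	if (imm26 & 0x02000000)		/* sign-extend the 26-bit field */
		imm26 -= 0x04000000;
	return imm26 * 4;		/* offset is in 4-byte instruction units */
}
```

So for the jump table entry above, a B one instruction forward would decode
to an offset of +4, which is what `p + aarch64_get_branch_offset(insn)`
adds to reach the real function.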