From: Jinjie Ruan
Subject: [PATCH v5.15 08/15] arm64: factor out EL1 SSBS emulation hook
Date: Wed, 11 Oct 2023 10:06:48 +0000
Message-ID: <20231011100655.979626-9-ruanjinjie@huawei.com>
In-Reply-To: <20231011100655.979626-1-ruanjinjie@huawei.com>
References: <20231011100655.979626-1-ruanjinjie@huawei.com>

From: Mark Rutland

commit bff8f413c71ffc3cb679dbd9a5632b33af563f9f upstream.

Currently call_undef_hook() is used to handle UNDEFINED exceptions from
EL0 and EL1. As support for deprecated instructions may be enabled
independently, the handlers for individual instructions are organised
as a linked list of struct undef_hook which can be manipulated
dynamically. As this can be manipulated dynamically, the list is
protected with a raw_spinlock which must be acquired when handling
UNDEFINED exceptions or when manipulating the list of handlers.
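(Aside, for readers unfamiliar with the mechanism above: it looks roughly
like the sketch below. The struct fields and register_undef_hook() follow
the hunks removed later in this patch; the list and lock names here are
illustrative, and the pstate-matching fields are omitted.)

    /* simplified sketch of the dynamic UNDEF hook machinery */
    struct undef_hook {
            struct list_head node;
            u32 instr_mask;                 /* bits of the encoding that must match */
            u32 instr_val;                  /* expected value under instr_mask */
            int (*fn)(struct pt_regs *regs, u32 instr);
            /* pstate matching fields omitted */
    };

    void register_undef_hook(struct undef_hook *hook)
    {
            unsigned long flags;

            /*
             * The raw_spinlock referred to above: it serialises both
             * hook registration and UNDEFINED exception handling.
             */
            raw_spin_lock_irqsave(&undef_lock, flags);
            list_add(&hook->node, &undef_hook_list);
            raw_spin_unlock_irqrestore(&undef_lock, flags);
    }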
This locking is unfortunate as it serialises handling of UNDEFINED
exceptions, and requires RCU to be enabled for lockdep, requiring the
use of RCU_NONIDLE() in the resume path of cpu_suspend() since commit:

  a2c42bbabbe260b7 ("arm64: spectre: Prevent lockdep splat on v4 mitigation enable path")

The list of UNDEFINED handlers largely consists of handlers for
exceptions taken from EL0, and the only handler for exceptions taken
from EL1 handles `MSR SSBS, #imm` on CPUs which feature PSTATE.SSBS but
lack the corresponding MSR (Immediate) instruction. Other than this we
never expect to take an UNDEFINED exception from EL1 in normal
operation.

This patch reworks do_el1_undef() to invoke the EL1 SSBS handler
directly, relegating call_undef_hook() to only handle EL0 UNDEFs. This
removes redundant work to iterate the list for EL1 UNDEFs, and removes
the need for locking, permitting EL1 UNDEFs to be handled in parallel
without contention.

The RCU_NONIDLE() call in cpu_suspend() will be removed in a subsequent
patch, as there are other potential issues with the use of
instrumentable code and RCU in the CPU suspend code.

I've tested this by forcing the detection of SSBS on a CPU that doesn't
have it, and verifying that the try_emulate_el1_ssbs() callback is
invoked.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: James Morse
Cc: Joey Gouly
Cc: Peter Zijlstra
Cc: Will Deacon
Link: https://lore.kernel.org/r/20221019144123.612388-4-mark.rutland@arm.com
Signed-off-by: Will Deacon
Signed-off-by: Jinjie Ruan
---
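Note on the encoding check (illustrative only, not part of the patch; the
function name below is made up, the constants are those used in the hunk
for proton-pack.c): 0xd500401f is the MSR (immediate) opcode with the
pstate field selector and immediate cleared, OR-ing in PSTATE_SSBS selects
the SSBS field, and masking out only bit PSTATE_Imm_shift leaves the single
immediate bit free to carry the requested value. A commented restatement:

    /* simplified restatement of try_emulate_el1_ssbs() below */
    static bool emulate_ssbs_msr_imm(struct pt_regs *regs, u32 instr)
    {
            /*
             * MSR (immediate) with the SSBS field selected and every
             * immediate bit other than PSTATE_Imm_shift required to be 0,
             * i.e. only MSR SSBS, #0 and MSR SSBS, #1 match.
             */
            const u32 mask = ~(1U << PSTATE_Imm_shift);
            const u32 val  = 0xd500401f | PSTATE_SSBS;

            if ((instr & mask) != val)
                    return false;                    /* not MSR SSBS, #imm */

            /* the immediate bit is the requested PSTATE.SSBS value */
            if (instr & BIT(PSTATE_Imm_shift))
                    regs->pstate |= PSR_SSBS_BIT;    /* MSR SSBS, #1 */
            else
                    regs->pstate &= ~PSR_SSBS_BIT;   /* MSR SSBS, #0 */

            arm64_skip_faulting_instruction(regs, 4);  /* step past the UNDEF */
            return true;
    }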
 arch/arm64/include/asm/spectre.h |  2 ++
 arch/arm64/kernel/proton-pack.c  | 26 +++++++-------------------
 arch/arm64/kernel/traps.c        | 15 ++++++++-------
 3 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index aa3d3607d5c8..db7b371b367c 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -26,6 +26,7 @@ enum mitigation_state {
 	SPECTRE_VULNERABLE,
 };
 
+struct pt_regs;
 struct task_struct;
 
 /*
@@ -98,5 +99,6 @@ enum mitigation_state arm64_get_spectre_bhb_state(void);
 bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
 u8 spectre_bhb_loop_affected(int scope);
 void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
+bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
 #endif	/* __ASSEMBLY__ */
 #endif	/* __ASM_SPECTRE_H */
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index 428cfabd11c4..7515ed1f0669 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -521,10 +521,13 @@ bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope)
 	return state != SPECTRE_UNAFFECTED;
 }
 
-static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
+bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr)
 {
-	if (user_mode(regs))
-		return 1;
+	const u32 instr_mask = ~(1U << PSTATE_Imm_shift);
+	const u32 instr_val = 0xd500401f | PSTATE_SSBS;
+
+	if ((instr & instr_mask) != instr_val)
+		return false;
 
 	if (instr & BIT(PSTATE_Imm_shift))
 		regs->pstate |= PSR_SSBS_BIT;
@@ -532,19 +535,11 @@ static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
 		regs->pstate &= ~PSR_SSBS_BIT;
 
 	arm64_skip_faulting_instruction(regs, 4);
-	return 0;
+	return true;
 }
 
-static struct undef_hook ssbs_emulation_hook = {
-	.instr_mask	= ~(1U << PSTATE_Imm_shift),
-	.instr_val	= 0xd500401f | PSTATE_SSBS,
-	.fn		= ssbs_emulation_handler,
-};
-
 static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
 {
-	static bool undef_hook_registered = false;
-	static DEFINE_RAW_SPINLOCK(hook_lock);
 	enum mitigation_state state;
 
 	/*
@@ -555,13 +550,6 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void)
 	if (state != SPECTRE_MITIGATED || !this_cpu_has_cap(ARM64_SSBS))
 		return state;
 
-	raw_spin_lock(&hook_lock);
-	if (!undef_hook_registered) {
-		register_undef_hook(&ssbs_emulation_hook);
-		undef_hook_registered = true;
-	}
-	raw_spin_unlock(&hook_lock);
-
 	if (spectre_v4_mitigations_off()) {
 		sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS);
 		set_pstate_ssbs(1);
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 2999ccf4c117..7d9bdd79797f 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -402,12 +402,7 @@ static int call_undef_hook(struct pt_regs *regs)
 	int (*fn)(struct pt_regs *regs, u32 instr) = NULL;
 	void __user *pc = (void __user *)instruction_pointer(regs);
 
-	if (!user_mode(regs)) {
-		__le32 instr_le;
-		if (get_kernel_nofault(instr_le, (__force __le32 *)pc))
-			goto exit;
-		instr = le32_to_cpu(instr_le);
-	} else if (compat_thumb_mode(regs)) {
+	if (compat_thumb_mode(regs)) {
 		/* 16-bit Thumb instruction */
 		__le16 instr_le;
 		if (get_user(instr_le, (__le16 __user *)pc))
@@ -500,9 +495,15 @@ void do_el0_undef(struct pt_regs *regs, unsigned long esr)
 
 void do_el1_undef(struct pt_regs *regs, unsigned long esr)
 {
-	if (call_undef_hook(regs) == 0)
+	u32 insn;
+
+	if (aarch64_insn_read((void *)regs->pc, &insn))
+		goto out_err;
+
+	if (try_emulate_el1_ssbs(regs, insn))
 		return;
 
+out_err:
 	die("Oops - Undefined instruction", regs, esr);
 }
-- 
2.34.1