Subject: Re: [PATCH v5 08/10] arm64: Always enable ssb vulnerability detection
From: Andre Przywara
To: Jeremy Linton, linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com, suzuki.poulose@arm.com, Dave.Martin@arm.com, shankerd@codeaurora.org, julien.thierry@arm.com, mlangsdo@redhat.com, stefan.wahren@i2e.com, linux-kernel@vger.kernel.org
Date: Fri, 1 Mar 2019 01:02:35 -0600
Message-ID: <5c76075d-1889-ff75-0567-f3e2df7079f4@foss.arm.com>
In-Reply-To: <20190227010544.597579-9-jeremy.linton@arm.com>
References: <20190227010544.597579-1-jeremy.linton@arm.com> <20190227010544.597579-9-jeremy.linton@arm.com>
Hi,

On 2/26/19 7:05 PM, Jeremy Linton wrote:
> The ssb detection logic is necessary regardless of whether
> the vulnerability mitigation code is built into the kernel.
> Break it out so that the CONFIG option only controls the
> mitigation logic and not the vulnerability detection.
>
> Signed-off-by: Jeremy Linton
> ---
>  arch/arm64/include/asm/cpufeature.h |  4 ----
>  arch/arm64/kernel/cpu_errata.c      | 11 +++++++----
>  2 files changed, 7 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index dfcfba725d72..c2b60a021437 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -628,11 +628,7 @@ static inline int arm64_get_ssbd_state(void)
>  #endif
>  }
>
> -#ifdef CONFIG_ARM64_SSBD
>  void arm64_set_ssbd_mitigation(bool state);
> -#else
> -static inline void arm64_set_ssbd_mitigation(bool state) {}
> -#endif
>
>  extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
>
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index 0f6e8f5d67bc..5f5611d17dc1 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -276,7 +276,6 @@ static int detect_harden_bp_fw(void)
>  	return 1;
>  }
>
> -#ifdef CONFIG_ARM64_SSBD
>  DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
>
>  int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
> @@ -347,6 +346,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
>  	*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>  }
>
> +#ifdef CONFIG_ARM64_SSBD
>  void arm64_set_ssbd_mitigation(bool state)
>  {
>  	if (this_cpu_has_cap(ARM64_SSBS)) {
> @@ -371,6 +371,12 @@ void arm64_set_ssbd_mitigation(bool state)
>  		break;
>  	}
>  }
> +#else
> +void arm64_set_ssbd_mitigation(bool state)
> +{
> +	pr_info_once("SSBD, disabled by kernel configuration\n");

Is there a stray comma here, or is this the continuation of some previous printout?

Regardless of that, it looks good and compiles both with and without CONFIG_ARM64_SSBD defined:

Reviewed-by: Andre Przywara

Cheers,
Andre.

> +}
> +#endif /* CONFIG_ARM64_SSBD */
>
>  static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>  				int scope)
> @@ -468,7 +474,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
>
>  	return required;
>  }
> -#endif /* CONFIG_ARM64_SSBD */
>
>  static void __maybe_unused
>  cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
> @@ -760,14 +765,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
>  		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
>  	},
>  #endif
> -#ifdef CONFIG_ARM64_SSBD
>  	{
>  		.desc = "Speculative Store Bypass Disable",
>  		.capability = ARM64_SSBD,
>  		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
>  		.matches = has_ssbd_mitigation,
>  	},
> -#endif
> #ifdef CONFIG_ARM64_ERRATUM_1188873
>  	{
>  		/* Cortex-A76 r0p0 to r2p0 */
>