2021-02-12 05:16:40

by Jian Cai

Subject: [PATCH] ARM: Implement Clang's SLS mitigation

This patch adds a config CONFIG_HARDEN_SLS_ALL that can be used to turn
on -mharden-sls=all, which mitigates the straight-line speculation
vulnerability, or more commonly known as Spectre, Meldown.

Notice -mharden-sls= has other options as below, and this config turns
on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Link: https://reviews.llvm.org/D93221
Link: https://reviews.llvm.org/D81404
Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
Link: https://crbug.com/1171521

Suggested-by: Manoj Gupta <[email protected]>
Signed-off-by: Jian Cai <[email protected]>
---
arch/arm/Makefile | 4 ++++
arch/arm64/Makefile | 5 +++++
security/Kconfig.hardening | 11 +++++++++++
3 files changed, 20 insertions(+)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
KBUILD_LDFLAGS += -EL
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
#
# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
# later may result in code being generated that handles signed short and signed
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..8fd0ccd87eca 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,11 @@ $(warning LSE atomics not supported by binutils)
endif
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
+
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..d83c406c81a3 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,17 @@ choice

endchoice

+
+config CC_HAS_HARDEN_SLS_ALL
+ def_bool $(cc-option,-mharden-sls=all)
+
+ config HARDEN_SLS_ALL
+ bool "enable SLS vulnerability hardening"
+ depends on CC_HAS_HARDEN_SLS_ALL
+ help
+ Enables straight-line speculation vulnerability hardening
+ at the highest level.
+
config GCC_PLUGIN_STRUCTLEAK_VERBOSE
bool "Report forcefully initialized variables"
depends on GCC_PLUGIN_STRUCTLEAK
--
2.30.0.478.g8a0d178c01-goog


2021-02-12 05:57:46

by Nathan Chancellor

Subject: Re: [PATCH] ARM: Implement Clang's SLS mitigation

Hi Jian,

On Thu, Feb 11, 2021 at 09:14:58PM -0800, Jian Cai wrote:
> This patch adds a config CONFIG_HARDEN_SLS_ALL that can be used to turn

Drop "a config".

> on -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, or more commonly known as Spectre, Meldown.

^ I would drop "or" here
^ drop comma,
use "and",
typo: "Meltdown"

Although, is that a fair statement? SLS is not called Spectre or
Meltdown by ARM; it is a speculative processor vulnerability. It
might just be better to drop everything after the first comma (although
maybe that is just being pedantic).

>
> Notice -mharden-sls= has other options as below, and this config turns
> on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.

I cannot comment on whether or not this is worth doing, I will leave
that up to Will, Catalin, et al. The following comments are more from a
"regular kernel developer" perspective, rather than an "arm64 kernel
developer" :)

> Link: https://reviews.llvm.org/D93221
> Link: https://reviews.llvm.org/D81404
> Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation

This is also a useful article it seems:

https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

> Link: https://crbug.com/1171521

This crbug is private. If it is going into a commit message, please
publicize it.

> Suggested-by: Manoj Gupta <[email protected]>
> Signed-off-by: Jian Cai <[email protected]>
> ---
> arch/arm/Makefile | 4 ++++
> arch/arm64/Makefile | 5 +++++
> security/Kconfig.hardening | 11 +++++++++++
> 3 files changed, 20 insertions(+)
>
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 4aaec9599e8a..11d89ef32da9 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
> KBUILD_LDFLAGS += -EL
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> #
> # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
> # later may result in code being generated that handles signed short and signed
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index 90309208bb28..8fd0ccd87eca 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -34,6 +34,11 @@ $(warning LSE atomics not supported by binutils)
> endif
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> +

Extra space here

> cc_has_k_constraint := $(call try-run,echo \
> 'int main(void) { \
> asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
> diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
> index 269967c4fc1b..d83c406c81a3 100644
> --- a/security/Kconfig.hardening
> +++ b/security/Kconfig.hardening
> @@ -121,6 +121,17 @@ choice
>
> endchoice
>
> +
> +config CC_HAS_HARDEN_SLS_ALL
> + def_bool $(cc-option,-mharden-sls=all)

I do not think that CONFIG_CC_HAS_HARDEN_SLS_ALL serves much purpose.
Moving the cc-option into CONFIG_HARDEN_SLS_ALL is just as clean.

config HARDEN_SLS_ALL
bool "enable SLS vulnerability hardening"
depends on $(cc-option,-mharden-sls=all)
help
Enables straight-line speculation vulnerability hardening
at the highest level.

> +
> + config HARDEN_SLS_ALL
> + bool "enable SLS vulnerability hardening"

The spacing here seems messed up, I corrected it above.

> + depends on CC_HAS_HARDEN_SLS_ALL
> + help
> + Enables straight-line speculation vulnerability hardening
> + at the highest level.
> +
> config GCC_PLUGIN_STRUCTLEAK_VERBOSE
> bool "Report forcefully initialized variables"
> depends on GCC_PLUGIN_STRUCTLEAK
> --
> 2.30.0.478.g8a0d178c01-goog
>
>

Cheers,
Nathan

2021-02-12 10:45:36

by David Laight

Subject: RE: [PATCH] ARM: Implement Clang's SLS mitigation

> > on -mharden-sls=all, which mitigates the straight-line speculation
> > vulnerability, or more commonly known as Spectre, Meldown.
>
> ^ I would drop "or" here
> ^ drop comma,
> use "and",
> typo: "Meltdown"
>
> Although, is that a fair statement? SLS is not called Spectre or
> Meltdown by ARM; it is a speculative processor vulnerability. It
> might just be better to drop everything after the first comma (although
> maybe that is just being pedantic).

or replace with:
(speculative execution of the instruction following some unconditional jumps).

Which will save a lot of people looking up what it means.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2021-02-12 19:57:41

by Jian Cai

Subject: [PATCH v2] ARM: Implement Clang's SLS mitigation

This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Link: https://reviews.llvm.org/D93221
Link: https://reviews.llvm.org/D81404
Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <[email protected]>
Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: David Laight <[email protected]>
Signed-off-by: Jian Cai <[email protected]>
---

Changes v1 -> v2:
Update the description and patch based on Nathan and David's comments.

arch/arm/Makefile | 4 ++++
arch/arm64/Makefile | 4 ++++
security/Kconfig.hardening | 7 +++++++
3 files changed, 15 insertions(+)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
KBUILD_LDFLAGS += -EL
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
#
# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
# later may result in code being generated that handles signed short and signed
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
endif
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..9266d8d1f78f 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,13 @@ choice

endchoice

+config HARDEN_SLS_ALL
+ bool "enable SLS vulnerability hardening"
+ def_bool $(cc-option,-mharden-sls=all)
+ help
+ Enables straight-line speculation vulnerability hardening
+ at the highest level.
+
config GCC_PLUGIN_STRUCTLEAK_VERBOSE
bool "Report forcefully initialized variables"
depends on GCC_PLUGIN_STRUCTLEAK
--
2.30.0.478.g8a0d178c01-goog

2021-02-17 11:12:13

by David Laight

Subject: RE: [PATCH v2] ARM: Implement Clang's SLS mitigation

From: Will Deacon
> Sent: 17 February 2021 09:49
>
> On Fri, Feb 12, 2021 at 11:52:53AM -0800, Jian Cai wrote:
> > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > -mharden-sls=all, which mitigates the straight-line speculation
> > vulnerability, speculative execution of the instruction following some
> > unconditional jumps. Notice -mharden-sls= has other options as below,
> > and this config turns on the strongest option.
> >
> > all: enable all mitigations against Straight Line Speculation that are implemented.
> > none: disable all mitigations against Straight Line Speculation.
> > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> What exactly does this mitigation do? This should be documented somewhere,
> maybe in the Kconfig text?

I looked it up, it adds some fairly heavy serialising instructions
after the unconditional jump.
For BLR (call indirect) it has to use a BL (call) to an indirect jump.
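
Roughly, for arm64 it emits a speculation barrier after the
instruction (a sketch; the compiler uses SB instead of DSB+ISB when
the target implements the SB extension):

        ret
        dsb     sy      // nothing speculates past the return
        isb

and rewrites indirect calls to go through a per-register thunk:

        bl      __llvm_slsblr_thunk_x0  // was: blr x0
        ...
__llvm_slsblr_thunk_x0:
        br      x0
        dsb     sy
        isb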

I don't know if the execution of the serialising instructions
gets aborted.
If not, you could end up with unexpected delays - like those on
some x86 CPUs when they speculatively executed trig functions.

It all seems pretty broken though.
I'd expect the branch prediction unit to speculate at the jump
target for 'predicted taken' conditional jumps.
So you'd really expect unconditional jumps to behave the same way.
BLR ought to be using the branch target buffer (BTB).

(It isn't actually 100% clear that some processors don't use the BTB
for non-indirect jumps though....)

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2021-02-17 13:20:31

by Will Deacon

Subject: Re: [PATCH v2] ARM: Implement Clang's SLS mitigation

On Fri, Feb 12, 2021 at 11:52:53AM -0800, Jian Cai wrote:
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.

What exactly does this mitigation do? This should be documented somewhere,
maybe in the Kconfig text?

> Link: https://reviews.llvm.org/D93221
> Link: https://reviews.llvm.org/D81404
> Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <[email protected]>
> Suggested-by: Nathan Chancellor <[email protected]>
> Suggested-by: David Laight <[email protected]>
> Signed-off-by: Jian Cai <[email protected]>
> ---
>
> Changes v1 -> v2:
> Update the description and patch based on Nathan and David's comments.
>
> arch/arm/Makefile | 4 ++++
> arch/arm64/Makefile | 4 ++++
> security/Kconfig.hardening | 7 +++++++
> 3 files changed, 15 insertions(+)
>
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 4aaec9599e8a..11d89ef32da9 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
> KBUILD_LDFLAGS += -EL
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> #
> # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
> # later may result in code being generated that handles signed short and signed
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index 90309208bb28..ca7299b356a9 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
> endif
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif

The big problem I have with this is that it's a compile-time decision.
For the other spectre crap we have a combination of the "mitigations=off"
command-line and CPU detection to avoid the cost of the mitigation where
it is not deemed necessary.

So I think that either we enable this unconditionally, or we don't enable it
at all (and people can hack their CFLAGS themselves if they want to). It
would be helpful for one of the Arm folks to chime in, as I'm yet to see any
evidence that this is actually exploitable. Is it any worse than Spectre-v1,
where we _don't_ have a compiler mitigation?

Finally, do we have to worry about our assembly code?

Will

2021-02-17 20:59:00

by Nick Desaulniers

Subject: Re: [PATCH v2] ARM: Implement Clang's SLS mitigation

On Fri, Feb 12, 2021 at 11:53 AM 'Jian Cai' via Clang Built Linux
<[email protected]> wrote:

The oneline of the commit is "ARM: Implement Clang's SLS mitigation,"
but that's not precise. GCC implements the same flag with the same
arguments. There is nothing compiler specific about this patch.
(Though perhaps different section names are used, see below).

>
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> Link: https://reviews.llvm.org/D93221
> Link: https://reviews.llvm.org/D81404
> Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <[email protected]>
> Suggested-by: Nathan Chancellor <[email protected]>
> Suggested-by: David Laight <[email protected]>
> Signed-off-by: Jian Cai <[email protected]>

I observe lots of linker warnings with this applied on linux-next:
ld.lld: warning:
init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_x0) is being placed
in '.text.__llvm_slsblr_thunk_x0'
You need to modify arch/arm64/kernel/vmlinux.lds.S and
arch/arm/kernel/vmlinux.lds.S (and possibly
arch/arm/boot/compressed/vmlinux.lds.S as well) to add these sections
back into .text so that the linkers don't place these orphaned
sections in wild places. The resulting aarch64 kernel image doesn't
even boot (under emulation).
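
Something along these lines in the .text output section should keep
them in place (sketch):

        .text : {
                ...
                *(.text.__llvm_slsblr_thunk_*)
        }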

For 32b ARM:
ld.lld: warning:
init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_arm_r0) is being
placed in '.text.__llvm_slsblr_thunk_arm_r0'
...
ld.lld: warning:
init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_thumb_r0) is being
placed in '.text.__llvm_slsblr_thunk_thumb_r0'
...
<trimmed, but there's close to 60 of these>

And the image doesn't boot (under emulation).

> ---
>
> Changes v1 -> v2:
> Update the description and patch based on Nathan and David's comments.
>
> arch/arm/Makefile | 4 ++++
> arch/arm64/Makefile | 4 ++++
> security/Kconfig.hardening | 7 +++++++
> 3 files changed, 15 insertions(+)
>
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 4aaec9599e8a..11d89ef32da9 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
> KBUILD_LDFLAGS += -EL
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> #
> # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
> # later may result in code being generated that handles signed short and signed
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index 90309208bb28..ca7299b356a9 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
> endif
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> cc_has_k_constraint := $(call try-run,echo \
> 'int main(void) { \
> asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
> diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
> index 269967c4fc1b..9266d8d1f78f 100644
> --- a/security/Kconfig.hardening
> +++ b/security/Kconfig.hardening
> @@ -121,6 +121,13 @@ choice
>
> endchoice
>
> +config HARDEN_SLS_ALL
> + bool "enable SLS vulnerability hardening"
> + def_bool $(cc-option,-mharden-sls=all)

This fails to set CONFIG_HARDEN_SLS_ALL for me with:
$ ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- make LLVM=1 LLVM_IAS=1
-j72 defconfig
$ grep SLS_ALL .config
# CONFIG_HARDEN_SLS_ALL is not set

but it's flipped on there for arm64 defconfig:
$ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make LLVM=1 LLVM_IAS=1
-j72 defconfig
$ grep SLS_ALL .config
CONFIG_HARDEN_SLS_ALL=y

What's going on there? Is the cc-option Kconfig macro broken for
Clang when cross compiling 32b ARM? I can still enable
CONFIG_HARDEN_SLS_ALL via menuconfig, but I wonder if the default
value is funny because the cc-option check is failing?
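
One way to check is to run by hand roughly what the cc-option macro
runs (a sketch; the exact invocation lives in scripts/Kconfig.include):

$ clang --target=arm-linux-gnueabi -Werror -mharden-sls=all \
  -S -x c /dev/null -o /dev/null

If the 32b backend rejects the flag, that would explain the unset
default.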

> + help
> + Enables straight-line speculation vulnerability hardening
> + at the highest level.
> +
> config GCC_PLUGIN_STRUCTLEAK_VERBOSE
> bool "Report forcefully initialized variables"
> depends on GCC_PLUGIN_STRUCTLEAK
> --

--
Thanks,
~Nick Desaulniers

2021-02-19 20:23:11

by Jian Cai

Subject: [PATCH v3] ARM: Implement SLS mitigation

This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <[email protected]>
Suggested-by: Nick Desaulniers <[email protected]>
Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: David Laight <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Reviewed-by: Nathan Chancellor <[email protected]>
Signed-off-by: Jian Cai <[email protected]>
---

Changes v2 -> v3:
Modify linker scripts as Nick suggested to address boot failure
(verified with qemu). Add more details to the Kconfig.hardening
description. Disable the config by default.

arch/arm/Makefile | 4 ++++
arch/arm/include/asm/vmlinux.lds.h | 4 ++++
arch/arm/kernel/vmlinux.lds.S | 1 +
arch/arm64/Makefile | 4 ++++
arch/arm64/kernel/vmlinux.lds.S | 5 +++++
security/Kconfig.hardening | 10 ++++++++++
6 files changed, 28 insertions(+)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
KBUILD_LDFLAGS += -EL
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
#
# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
# later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@
__edtcm_data = .; \
} \
. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@ SECTIONS
.text : { /* Real text segment */
_stext = .; /* Text and read-only data */
ARM_TEXT
+ SLS_TEXT
}

#ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
endif
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c0b0c89ad59..f8912e42ffcd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -93,6 +93,10 @@ jiffies = jiffies_64;
#define TRAMP_TEXT
#endif

+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -144,6 +148,7 @@ SECTIONS
HIBERNATE_TEXT
TRAMP_TEXT
*(.fixup)
+ SLS_TEXT
*(.gnu.warning)
. = ALIGN(16);
*(.got) /* Global offset table */
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..e70f227019e1 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,16 @@ choice

endchoice

+config HARDEN_SLS_ALL
+ bool "enable SLS vulnerability hardening"
+ default n
+ def_bool $(cc-option,-mharden-sls=all)
+ help
+ Enables straight-line speculation vulnerability hardening on ARM and ARM64
+ architectures. It inserts speculation barrier sequences (SB or DSB+ISB
+ depending on the target architecture) after RET and BR, and replaces
+ BLR with a BL+BR sequence.
+
config GCC_PLUGIN_STRUCTLEAK_VERBOSE
bool "Report forcefully initialized variables"
depends on GCC_PLUGIN_STRUCTLEAK
--
2.30.0.617.g56c4b15f3c-goog

2021-02-19 20:35:52

by Nathan Chancellor

Subject: Re: [PATCH v3] ARM: Implement SLS mitigation

Hi Jian,

On Fri, Feb 19, 2021 at 12:18:40PM -0800, 'Jian Cai' via Clang Built Linux wrote:
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> Links:
> https://reviews.llvm.org/D93221
> https://reviews.llvm.org/D81404
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <[email protected]>
> Suggested-by: Nick Desaulniers <[email protected]>
> Suggested-by: Nathan Chancellor <[email protected]>
> Suggested-by: David Laight <[email protected]>
> Suggested-by: Will Deacon <[email protected]>
> Reviewed-by: Nathan Chancellor <[email protected]>

My review still stands but in the future, if you significantly change
how a patch is structured or works, please drop my tag and let me re-add
it.

One comment below.

> Signed-off-by: Jian Cai <[email protected]>
> ---
>
> Changes v2 -> v3:
> Modify linker scripts as Nick suggested to address boot failure
> (verified with qemu). Added more details in Kconfig.hardening
> description. Disable the config by default.
>
> arch/arm/Makefile | 4 ++++
> arch/arm/include/asm/vmlinux.lds.h | 4 ++++
> arch/arm/kernel/vmlinux.lds.S | 1 +
> arch/arm64/Makefile | 4 ++++
> arch/arm64/kernel/vmlinux.lds.S | 5 +++++
> security/Kconfig.hardening | 10 ++++++++++
> 6 files changed, 28 insertions(+)
>
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index 4aaec9599e8a..11d89ef32da9 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
> KBUILD_LDFLAGS += -EL
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> #
> # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
> # later may result in code being generated that handles signed short and signed
> diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
> index 4a91428c324d..c7f9717511ca 100644
> --- a/arch/arm/include/asm/vmlinux.lds.h
> +++ b/arch/arm/include/asm/vmlinux.lds.h
> @@ -145,3 +145,7 @@
> __edtcm_data = .; \
> } \
> . = __dtcm_start + SIZEOF(.data_dtcm);
> +
> +#define SLS_TEXT \
> + ALIGN_FUNCTION(); \
> + *(.text.__llvm_slsblr_thunk_*)
> diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
> index f7f4620d59c3..e71f2bc97bae 100644
> --- a/arch/arm/kernel/vmlinux.lds.S
> +++ b/arch/arm/kernel/vmlinux.lds.S
> @@ -63,6 +63,7 @@ SECTIONS
> .text : { /* Real text segment */
> _stext = .; /* Text and read-only data */
> ARM_TEXT
> + SLS_TEXT
> }
>
> #ifdef CONFIG_DEBUG_ALIGN_RODATA
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index 90309208bb28..ca7299b356a9 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
> endif
> endif
>
> +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
> +KBUILD_CFLAGS += -mharden-sls=all
> +endif
> +
> cc_has_k_constraint := $(call try-run,echo \
> 'int main(void) { \
> asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index 4c0b0c89ad59..f8912e42ffcd 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -93,6 +93,10 @@ jiffies = jiffies_64;
> #define TRAMP_TEXT
> #endif
>
> +#define SLS_TEXT \
> + ALIGN_FUNCTION(); \
> + *(.text.__llvm_slsblr_thunk_*)
> +
> /*
> * The size of the PE/COFF section that covers the kernel image, which
> * runs from _stext to _edata, must be a round multiple of the PE/COFF
> @@ -144,6 +148,7 @@ SECTIONS
> HIBERNATE_TEXT
> TRAMP_TEXT
> *(.fixup)
> + SLS_TEXT
> *(.gnu.warning)
> . = ALIGN(16);
> *(.got) /* Global offset table */
> diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
> index 269967c4fc1b..e70f227019e1 100644
> --- a/security/Kconfig.hardening
> +++ b/security/Kconfig.hardening
> @@ -121,6 +121,16 @@ choice
>
> endchoice
>
> +config HARDEN_SLS_ALL
> + bool "enable SLS vulnerability hardening"
> + default n
> + def_bool $(cc-option,-mharden-sls=all)

This is a much more convoluted way of writing:

depends on $(cc-option,-mharden-sls=all)

"default n" is the default and "def_bool" is short for:

bool
default <expr>

which is defeated by the "default n".

> + help
> + Enables straight-line speculation vulnerability hardening on ARM and ARM64
> + architectures. It inserts speculation barrier sequences (SB or DSB+ISB
> + depending on the target architecture) after RET and BR, and replaces
> + BLR with a BL+BR sequence.
> +
> config GCC_PLUGIN_STRUCTLEAK_VERBOSE
> bool "Report forcefully initialized variables"
> depends on GCC_PLUGIN_STRUCTLEAK
> --
> 2.30.0.617.g56c4b15f3c-goog
>

2021-02-19 23:12:48

by Jian Cai

Subject: [PATCH v4] ARM: Implement SLS mitigation

This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <[email protected]>
Suggested-by: Nick Desaulniers <[email protected]>
Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: David Laight <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Reviewed-by: Nathan Chancellor <[email protected]>
Signed-off-by: Jian Cai <[email protected]>
---

Changes v3 -> v4:
Address Nathan's comment and replace def_bool with depends on in
HARDEN_SLS_ALL.

arch/arm/Makefile | 4 ++++
arch/arm/include/asm/vmlinux.lds.h | 4 ++++
arch/arm/kernel/vmlinux.lds.S | 1 +
arch/arm64/Makefile | 4 ++++
arch/arm64/kernel/vmlinux.lds.S | 5 +++++
security/Kconfig.hardening | 10 ++++++++++
6 files changed, 28 insertions(+)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
KBUILD_LDFLAGS += -EL
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
#
# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
# later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@
__edtcm_data = .; \
} \
. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@ SECTIONS
.text : { /* Real text segment */
_stext = .; /* Text and read-only data */
ARM_TEXT
+ SLS_TEXT
}

#ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
endif
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c0b0c89ad59..f8912e42ffcd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -93,6 +93,10 @@ jiffies = jiffies_64;
#define TRAMP_TEXT
#endif

+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -144,6 +148,7 @@ SECTIONS
HIBERNATE_TEXT
TRAMP_TEXT
*(.fixup)
+ SLS_TEXT
*(.gnu.warning)
. = ALIGN(16);
*(.got) /* Global offset table */
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..146b75a79d9e 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,16 @@ choice

endchoice

+config HARDEN_SLS_ALL
+ bool "enable SLS vulnerability hardening"
+ default n
+ depends on $(cc-option,-mharden-sls=all)
+ help
+ Enables straight-line speculation vulnerability hardening on ARM and ARM64
+ architectures. It inserts speculation barrier sequences (SB or DSB+ISB
+ depending on the target architecture) after RET and BR, and replaces
+ BLR with a BL+BR sequence.
+
config GCC_PLUGIN_STRUCTLEAK_VERBOSE
bool "Report forcefully initialized variables"
depends on GCC_PLUGIN_STRUCTLEAK
--
2.30.0.617.g56c4b15f3c-goog

2021-02-21 10:18:51

by Russell King (Oracle)

Subject: Re: [PATCH v4] ARM: Implement SLS mitigation

On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
> index 269967c4fc1b..146b75a79d9e 100644
> --- a/security/Kconfig.hardening
> +++ b/security/Kconfig.hardening
> @@ -121,6 +121,16 @@ choice
>
> endchoice
>
> +config HARDEN_SLS_ALL
> + bool "enable SLS vulnerability hardening"
> + default n

Please get rid of this useless "default n"

> + depends on $(cc-option,-mharden-sls=all)
> + help
> + Enables straight-line speculation vulnerability hardening on ARM and ARM64
> + architectures. It inserts speculation barrier sequences (SB or DSB+ISB
> + depending on the target architecture) after RET and BR, and replaces
> + BLR with a BL+BR sequence.

Given that this is in an architecture independent Kconfig file, and it
detects support in CC for this feature, why should this help text be
written to be specific to a couple of architectures? Will this feature
only ever be available on these two architectures? What if someone adds
support for another architecture?

--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!

2021-02-22 12:02:26

by Will Deacon

Subject: Re: [PATCH v4] ARM: Implement SLS mitigation

On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> Links:
> https://reviews.llvm.org/D93221
> https://reviews.llvm.org/D81404
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <[email protected]>
> Suggested-by: Nick Desaulniers <[email protected]>
> Suggested-by: Nathan Chancellor <[email protected]>
> Suggested-by: David Laight <[email protected]>
> Suggested-by: Will Deacon <[email protected]>
> Reviewed-by: Nathan Chancellor <[email protected]>
> Signed-off-by: Jian Cai <[email protected]>
> ---

Please can you reply to my previous questions?

https://lore.kernel.org/linux-arm-kernel/20210217094859.GA3706@willie-the-truck/

(apologies if you did, but I don't see them in the archive or my inbox)

Will

2021-02-22 21:55:12

by Jian Cai

Subject: Re: [PATCH v4] ARM: Implement SLS mitigation

Please see my comments inlined below.

Thanks,
Jian

On Mon, Feb 22, 2021 at 3:58 AM Will Deacon <[email protected]> wrote:
>
> On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > -mharden-sls=all, which mitigates the straight-line speculation
> > vulnerability, speculative execution of the instruction following some
> > unconditional jumps. Notice -mharden-sls= has other options as below,
> > and this config turns on the strongest option.
> >
> > all: enable all mitigations against Straight Line Speculation that are implemented.
> > none: disable all mitigations against Straight Line Speculation.
> > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> >
> > Links:
> > https://reviews.llvm.org/D93221
> > https://reviews.llvm.org/D81404
> > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
> >
> > Suggested-by: Manoj Gupta <[email protected]>
> > Suggested-by: Nick Desaulniers <[email protected]>
> > Suggested-by: Nathan Chancellor <[email protected]>
> > Suggested-by: David Laight <[email protected]>
> > Suggested-by: Will Deacon <[email protected]>
> > Reviewed-by: Nathan Chancellor <[email protected]>
> > Signed-off-by: Jian Cai <[email protected]>
> > ---
>
> Please can you reply to my previous questions?
>
> https://lore.kernel.org/linux-arm-kernel/20210217094859.GA3706@willie-the-truck/
>
> (apologies if you did, but I don't see them in the archive or my inbox)

I should have clarified the suggested-by tag was in regard to the
Kconfig text change. Regarding your earlier questions, please see my
comments below.

> So I think that either we enable this unconditionally, or we don't enable it
> at all (and people can hack their CFLAGS themselves if they want to).

Not sure if this answers your question but this config should provide
a way for people to turn on the mitigation at their own risk.

> It would be helpful for one of the Arm folks to chime in, as I'm yet to see any
> evidence that this is actually exploitable. Is it any worse than Spectre-v1,
> where we _don't_ have a compiler mitigation?

> Finally, do we have to worry about our assembly code?

I am not sure if there are any plans to protect assembly code and I
will leave it to the Arm folks since they know a whole lot better. But
even without that part, we should still have better protection,
especially when overhead does not look too bad: I did some preliminary
experiments on ChromeOS, code size of vmlinux increased 3%, and there
were no noticeable changes to run-time performance of the benchmarks I
used.

2021-02-23 02:42:21

by Jian Cai

Subject: [PATCH v5] ARM: Implement SLS mitigation

This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <[email protected]>
Suggested-by: Nick Desaulniers <[email protected]>
Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: David Laight <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Suggested-by: Russell King <[email protected]>
Reviewed-by: Nathan Chancellor <[email protected]>
Signed-off-by: Jian Cai <[email protected]>
---

Changes v4->v5:
Removed "default n" and made the description target indepdent in
Kconfig.hardening.


arch/arm/Makefile | 4 ++++
arch/arm/include/asm/vmlinux.lds.h | 4 ++++
arch/arm/kernel/vmlinux.lds.S | 1 +
arch/arm64/Makefile | 4 ++++
arch/arm64/kernel/vmlinux.lds.S | 5 +++++
security/Kconfig.hardening | 10 ++++++++++
6 files changed, 28 insertions(+)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
KBUILD_LDFLAGS += -EL
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
#
# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
# later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@
__edtcm_data = .; \
} \
. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@ SECTIONS
.text : { /* Real text segment */
_stext = .; /* Text and read-only data */
ARM_TEXT
+ SLS_TEXT
}

#ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
endif
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c0b0c89ad59..f8912e42ffcd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -93,6 +93,10 @@ jiffies = jiffies_64;
#define TRAMP_TEXT
#endif

+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -144,6 +148,7 @@ SECTIONS
HIBERNATE_TEXT
TRAMP_TEXT
*(.fixup)
+ SLS_TEXT
*(.gnu.warning)
. = ALIGN(16);
*(.got) /* Global offset table */
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..146b75a79d9e 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,16 @@ choice

endchoice

+config HARDEN_SLS_ALL
+ bool "enable SLS vulnerability hardening"
+ default n
+ depends on $(cc-option,-mharden-sls=all)
+ help
+ Enables straight-line speculation vulnerability hardening on ARM and ARM64
+ architectures. It inserts speculation barrier sequences (SB or DSB+ISB
+ depending on the target architecture) after RET and BR, and replaces
+ BLR with a BL+BR sequence.
+
config GCC_PLUGIN_STRUCTLEAK_VERBOSE
bool "Report forcefully initialized variables"
depends on GCC_PLUGIN_STRUCTLEAK
--
2.30.0.617.g56c4b15f3c-goog

2021-02-23 02:43:08

by Jian Cai

Subject: [PATCH v5] ARM: Implement SLS mitigation

This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <[email protected]>
Suggested-by: Nick Desaulniers <[email protected]>
Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: David Laight <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Suggested-by: Russell King <[email protected]>
Reviewed-by: Nathan Chancellor <[email protected]>
Signed-off-by: Jian Cai <[email protected]>
---
Changes v4->v5:
Removed "default n" and made the description target indepdent in
Kconfig.hardening. Please ignore my last email, it did not include the
changes.

arch/arm/Makefile | 4 ++++
arch/arm/include/asm/vmlinux.lds.h | 4 ++++
arch/arm/kernel/vmlinux.lds.S | 1 +
arch/arm64/Makefile | 4 ++++
arch/arm64/kernel/vmlinux.lds.S | 5 +++++
security/Kconfig.hardening | 8 ++++++++
6 files changed, 26 insertions(+)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 4aaec9599e8a..11d89ef32da9 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
KBUILD_LDFLAGS += -EL
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
#
# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
# later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@
__edtcm_data = .; \
} \
. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@ SECTIONS
.text : { /* Real text segment */
_stext = .; /* Text and read-only data */
ARM_TEXT
+ SLS_TEXT
}

#ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 90309208bb28..ca7299b356a9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
endif
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c0b0c89ad59..f8912e42ffcd 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -93,6 +93,10 @@ jiffies = jiffies_64;
#define TRAMP_TEXT
#endif

+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -144,6 +148,7 @@ SECTIONS
HIBERNATE_TEXT
TRAMP_TEXT
*(.fixup)
+ SLS_TEXT
*(.gnu.warning)
. = ALIGN(16);
*(.got) /* Global offset table */
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..db76ad732c14 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,14 @@ choice

endchoice

+config HARDEN_SLS_ALL
+ bool "enable SLS vulnerability hardening"
+ depends on $(cc-option,-mharden-sls=all)
+ help
+ Enables straight-line speculation vulnerability hardening. This inserts
+ speculation barrier instruction sequences after certain unconditional jumps
+ to prevent speculative execution past those barriers.
+
config GCC_PLUGIN_STRUCTLEAK_VERBOSE
bool "Report forcefully initialized variables"
depends on GCC_PLUGIN_STRUCTLEAK
--
2.30.0.617.g56c4b15f3c-goog

2021-02-23 10:08:58

by Will Deacon

Subject: Re: [PATCH v4] ARM: Implement SLS mitigation

On Mon, Feb 22, 2021 at 01:50:06PM -0800, Jian Cai wrote:
> Please see my comments inlined below.
>
> Thanks,
> Jian
>
> On Mon, Feb 22, 2021 at 3:58 AM Will Deacon <[email protected]> wrote:
> >
> > On Fri, Feb 19, 2021 at 03:08:13PM -0800, Jian Cai wrote:
> > > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > > -mharden-sls=all, which mitigates the straight-line speculation
> > > vulnerability, speculative execution of the instruction following some
> > > unconditional jumps. Notice -mharden-sls= has other options as below,
> > > and this config turns on the strongest option.
> > >
> > > all: enable all mitigations against Straight Line Speculation that are implemented.
> > > none: disable all mitigations against Straight Line Speculation.
> > > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> > >
> > > Links:
> > > https://reviews.llvm.org/D93221
> > > https://reviews.llvm.org/D81404
> > > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> > > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
> > >
> > > Suggested-by: Manoj Gupta <[email protected]>
> > > Suggested-by: Nick Desaulniers <[email protected]>
> > > Suggested-by: Nathan Chancellor <[email protected]>
> > > Suggested-by: David Laight <[email protected]>
> > > Suggested-by: Will Deacon <[email protected]>
> > > Reviewed-by: Nathan Chancellor <[email protected]>
> > > Signed-off-by: Jian Cai <[email protected]>
> > > ---
> >
> > Please can you reply to my previous questions?
> >
> > https://lore.kernel.org/linux-arm-kernel/20210217094859.GA3706@willie-the-truck/
> >
> > (apologies if you did, but I don't see them in the archive or my inbox)
>
> I should have clarified the suggested-by tag was in regard to the
> Kconfig text change. Regarding your earlier questions, please see my
> comments below.
>
> > So I think that either we enable this unconditionally, or we don't enable it
> > at all (and people can hack their CFLAGS themselves if they want to).
>
> Not sure if this answers your question but this config should provide
> a way for people to turn on the mitigation at their own risk.

I'm not sure I see the point; either it's needed or it's not. I wonder if
there's a plan to fix this in future CPUs (another question for the Arm
folks).

> > It would be helpful for one of the Arm folks to chime in, as I'm yet to see any
> > evidence that this is actually exploitable. Is it any worse than Spectre-v1,
> > where we _don't_ have a compiler mitigation?
>
> > Finally, do we have to worry about our assembly code?
>
> I am not sure if there are any plans to protect assembly code and I
> will leave it to the Arm folks since they know a whole lot better. But
> even without that part, we should still have better protection,
> especially when overhead does not look too bad: I did some preliminary
> experiments on ChromeOS, code size of vmlinux increased 3%, and there
> were no noticeable changes to run-time performance of the benchmarks I
> used.

If the mitigation is required, I'm not sure I see a lot of point in only
doing a half-baked job of it. It feels a bit like a box-ticking exercise,
in which case any overhead is too much.

Will

2021-03-04 23:23:16

by Linus Walleij

Subject: Re: [PATCH v5] ARM: Implement SLS mitigation

On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <[email protected]> wrote:

> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.

I heard about compiler protection for this, so nice to see it happening!

Would you happen to know if there is any plan to do the same for GCC?
I know you folks at Google like LLVM, but if you know, let us know.

> +config HARDEN_SLS_ALL
> + bool "enable SLS vulnerability hardening"

I would go in and also edit arch/arm/mm/Kconfig: under
config HARDEN_BRANCH_PREDICTOR, add
select HARDEN_SLS_ALL
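
i.e. something like this (a sketch; the existing option's other lines
and help text abbreviated):

config HARDEN_BRANCH_PREDICTOR
        bool "Harden the branch predictor against aliasing attacks"
        depends on CPU_SPECTRE
        default y
        select HARDEN_SLS_ALL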

Because if the user wants hardening for branch prediction
in general then the user certainly wants this as well, if
available. The help text for that option literally says:

"This config option will take CPU-specific actions to harden
the branch predictor against aliasing attacks and may rely on
specific instruction sequences or control bits being set by
the system firmware."

Notice this only turns on for CPUs with CPU_SPECTRE
defined, which makes sense. Also it is default y, which fulfils
Will's request that it be turned on by default where
applicable. Notably it will not be turned on for pre-v7 silicon,
which would be unhelpful as they don't suffer from
these bugs.

Reading Kristof's compiler patch here:
https://reviews.llvm.org/rG195f44278c4361a4a32377a98a1e3a15203d3647

I take it that for affected CPUs we should also patch all assembly
in the kernel containing a RET, BR or BLR with
DSB SYS followed by ISB?

I suppose we would also need to look for any mov PC, <>
code...

I guess we can invent a "SB" macro to mimic what AArch64 is
doing so the code is easy to read. (Thinking aloud.)

Yours,
Linus Walleij

2021-03-04 23:23:52

by David Laight

[permalink] [raw]
Subject: RE: [PATCH v4] ARM: Implement SLS mitigation

From: Linus Walleij
> Sent: 03 March 2021 15:19
>
> On Tue, Feb 23, 2021 at 11:05 AM Will Deacon <[email protected]> wrote:
> > On Mon, Feb 22, 2021 at 01:50:06PM -0800, Jian Cai wrote:
> > > I am not sure if there are any plans to protect assembly code and I
> > > will leave it to the Arm folks since they know a whole lot better. But
> > > even without that part, we should still have better protection,
> > > especially when overhead does not look too bad: I did some preliminary
> > > experiments on ChromeOS, code size of vmlinux increased 3%, and there
> > > were no noticeable changes to run-time performance of the benchmarks I
> > > used.
> >
> > If the mitigation is required, I'm not sure I see a lot of point in only
> > doing a half-baked job of it. It feels a bit like a box-ticking exercise,
> > in which case any overhead is too much.
>
> I wrote some suggestions on follow-ups in my reply, and I can
> help out doing some of the patches, I think.
>
> Since ARM32 RET is mov pc, <>
> git grep 'mov.*pc,' | wc -l gives 93 sites in arch/arm.
> I suppose these need to come out:
>
> mov pc, lr
> dsb(nsh);
> isb();

Won't that go horribly wrong for conditional returns?
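
E.g. for something like this (illustrative):

        teq     r0, #0
        moveq   pc, lr          @ conditional return
        dsb     nsh             @ these now also execute on the
        isb                     @ not-taken (fall-through) path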

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2021-03-04 23:24:05

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH v4] ARM: Implement SLS mitigation

On Tue, Feb 23, 2021 at 11:05 AM Will Deacon <[email protected]> wrote:
> On Mon, Feb 22, 2021 at 01:50:06PM -0800, Jian Cai wrote:
> > I am not sure if there are any plans to protect assembly code and I
> > will leave it to the Arm folks since they know a whole lot better. But
> > even without that part, we should still have better protection,
> > especially when overhead does not look too bad: I did some preliminary
> > experiments on ChromeOS, code size of vmlinux increased 3%, and there
> > were no noticeable changes to run-time performance of the benchmarks I
> > used.
>
> If the mitigation is required, I'm not sure I see a lot of point in only
> doing a half-baked job of it. It feels a bit like a box-ticking exercise,
> in which case any overhead is too much.

I wrote some suggestions on follow-ups in my reply, and I can
help out doing some of the patches, I think.

Since ARM32 RET is mov pc, <>
git grep 'mov.*pc,' | wc -l gives 93 sites in arch/arm.
I suppose these need to come out:

mov pc, lr
dsb(nsh);
isb();

As ARM32 doesn't have SB, my idea is to make a macro
"sb" that resolves to dsb/isb when this is enabled and then
we could start patching all the assembly users with that as
well. I need the Kconfig symbol from this patch though.
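
Something like this, perhaps (sketch only, name and placement
still to be decided):

#ifdef CONFIG_HARDEN_SLS_ALL
        .macro  sb
        dsb     nsh
        isb
        .endm
#else
        .macro  sb
        .endm
#endif

so that a return site would simply be "mov pc, lr" followed
by "sb".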

I also suggest selecting this mitigation as part of
HARDEN_BRANCH_PREDICTOR, on the grounds that either
you want all of them or none of them.

Yours,
Linus Walleij

2021-03-04 23:25:23

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH v4] ARM: Implement SLS mitigation

On Wed, Mar 3, 2021 at 4:29 PM David Laight <[email protected]> wrote:
> > On Tue, Feb 23, 2021 at 11:05 AM Will Deacon <[email protected]> wrote:

> > I wrote some suggestions on follow-ups in my reply, and I can
> > help out doing some of the patches, I think.
> >
> > Since ARM32 RET is mov pc, <>
> > git grep 'mov.*pc,' | wc -l gives 93 sites in arch/arm.
> > I suppose these need to come out:
> >
> > mov pc, lr
> > dsb(nsh);
> > isb();
>
> Won't that go horribly wrong for conditional returns?

It will, so I would not insert it after those. It has to be
done on a case-by-case basis; I am not planning any
search-and-replace operations.

Yours,
Linus Walleij

2021-03-05 01:02:49

by Jian Cai

[permalink] [raw]
Subject: Re: [PATCH v5] ARM: Implement SLS mitigation

On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <[email protected]> wrote:
>
> On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <[email protected]> wrote:
>
> > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > -mharden-sls=all, which mitigates the straight-line speculation
> > vulnerability, speculative execution of the instruction following some
> > unconditional jumps. Notice -mharden-sls= has other options as below,
> > and this config turns on the strongest option.
> >
> > all: enable all mitigations against Straight Line Speculation that are implemented.
> > none: disable all mitigations against Straight Line Speculation.
> > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> I heard about compiler protection for this, so nice to see it happening!
>
> Would you happen to know if there is any plan to do the same for GCC?
> I know you folks at Google like LLVM, but if you know let us know.

I think gcc also has these options.
https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html

>
> > +config HARDEN_SLS_ALL
> > + bool "enable SLS vulnerability hardening"
>
> I would go in and also edit arch/arm/mm/Kconfig under:
> config HARDEN_BRANCH_PREDICTOR add
> select HARDEN_SLS_ALL
>
> Because if the user wants hardening for branch prediction
> in general then the user certainly wants this as well, if
> available. The help text for that option literally says:
>
> "This config option will take CPU-specific actions to harden
> the branch predictor against aliasing attacks and may rely on
> specific instruction sequences or control bits being set by
> the system firmware."
>
> Notice this only turns on for CPUs with CPU_SPECTRE
> defined which makes sense. Also it is default y which fulfils
> Will's request that it be turned on by default where
> applicable. Notably it will not be turned on for pre-v7 silicon
> which would be unhelpful as they don't suffer from
> these bugs.

Thanks for the suggestion. I will update the patch.

>
> Reading Kristof's compiler patch here:
> https://reviews.llvm.org/rG195f44278c4361a4a32377a98a1e3a15203d3647
>
> I take it that for affected CPUs we should also patch all assembly
> in the kernel containing a RET, BR or BLR with
> DSB SYS followed by ISB?
>
> I suppose we would also need to look for any mov PC, <>
> code...
>
> I guess we can invent a "SB" macro to mimic what AArch64 is
> doing so the code is easy to read. (Thinking aloud.)
>
> Yours,
> Linus Walleij

2021-03-05 01:05:40

by Jian Cai

[permalink] [raw]
Subject: [PATCH v6] ARM: Implement SLS mitigation

This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
-mharden-sls=all, which mitigates the straight-line speculation
vulnerability, speculative execution of the instruction following some
unconditional jumps. Notice -mharden-sls= has other options as below,
and this config turns on the strongest option.

all: enable all mitigations against Straight Line Speculation that are implemented.
none: disable all mitigations against Straight Line Speculation.
retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
blr: enable the mitigation against Straight Line Speculation for BLR instructions.

Links:
https://reviews.llvm.org/D93221
https://reviews.llvm.org/D81404
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2

Suggested-by: Manoj Gupta <[email protected]>
Suggested-by: Nick Desaulniers <[email protected]>
Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: David Laight <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Suggested-by: Russell King <[email protected]>
Suggested-by: Linus Walleij <[email protected]>
Signed-off-by: Jian Cai <[email protected]>
---

Changes v1 -> v2:
Update the description and patch based on Nathan and David's comments.

Changes v2 -> v3:
Modify linker scripts as Nick suggested to address boot failure
(verified with qemu). Added more details in Kconfig.hardening
description. Disable the config by default.

Changes v3 -> v4:
Address Nathan's comment and replace def_bool with depends on in
HARDEN_SLS_ALL.

Changes v4 -> v5:
Removed "default n" and made the description target indepdent in
Kconfig.hardening.

Changes v5 -> v6:
Add select HARDEN_SLS_ALL under config HARDEN_BRANCH_PREDICTOR. This
turns on HARDEN_SLS_ALL by default where applicable.


arch/arm/Makefile | 4 ++++
arch/arm/include/asm/vmlinux.lds.h | 4 ++++
arch/arm/kernel/vmlinux.lds.S | 1 +
arch/arm/mm/Kconfig | 1 +
arch/arm64/Makefile | 4 ++++
arch/arm64/kernel/vmlinux.lds.S | 5 +++++
security/Kconfig.hardening | 8 ++++++++
7 files changed, 27 insertions(+)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index dad5502ecc28..54f9b5ff9682 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
KBUILD_LDFLAGS += -EL
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
#
# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
# later may result in code being generated that handles signed short and signed
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 4a91428c324d..c7f9717511ca 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -145,3 +145,7 @@
__edtcm_data = .; \
} \
. = __dtcm_start + SIZEOF(.data_dtcm);
+
+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index f7f4620d59c3..e71f2bc97bae 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -63,6 +63,7 @@ SECTIONS
.text : { /* Real text segment */
_stext = .; /* Text and read-only data */
ARM_TEXT
+ SLS_TEXT
}

#ifdef CONFIG_DEBUG_ALIGN_RODATA
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 35f43d0aa056..bdb63e7b1bec 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -837,6 +837,7 @@ config HARDEN_BRANCH_PREDICTOR
bool "Harden the branch predictor against aliasing attacks" if EXPERT
depends on CPU_SPECTRE
default y
+ select HARDEN_SLS_ALL
help
Speculation attacks against some high-performance processors rely
on being able to manipulate the branch predictor for a victim
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 5b84aec31ed3..e233bfbdb1c2 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
endif
endif

+ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
+KBUILD_CFLAGS += -mharden-sls=all
+endif
+
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7eea7888bb02..d5c892605862 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -103,6 +103,10 @@ jiffies = jiffies_64;
#define TRAMP_TEXT
#endif

+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from _stext to _edata, must be a round multiple of the PE/COFF
@@ -154,6 +158,7 @@ SECTIONS
HIBERNATE_TEXT
TRAMP_TEXT
*(.fixup)
+ SLS_TEXT
*(.gnu.warning)
. = ALIGN(16);
*(.got) /* Global offset table */
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 269967c4fc1b..db76ad732c14 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -121,6 +121,14 @@ choice

endchoice

+config HARDEN_SLS_ALL
+ bool "enable SLS vulnerability hardening"
+ depends on $(cc-option,-mharden-sls=all)
+ help
+ Enables straight-line speculation vulnerability hardening. This inserts
+ speculation barrier instruction sequences after certain unconditional jumps
+ to prevent speculative execution past those barriers.
+
config GCC_PLUGIN_STRUCTLEAK_VERBOSE
bool "Report forcefully initialized variables"
depends on GCC_PLUGIN_STRUCTLEAK
--
2.30.1.766.gb4fecdf3b7-goog

2021-03-05 09:57:09

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH v6] ARM: Implement SLS mitigation

On Thu, Mar 04, 2021 at 04:53:18PM -0800, Jian Cai wrote:
> This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> -mharden-sls=all, which mitigates the straight-line speculation
> vulnerability, speculative execution of the instruction following some
> unconditional jumps. Notice -mharden-sls= has other options as below,
> and this config turns on the strongest option.
>
> all: enable all mitigations against Straight Line Speculation that are implemented.
> none: disable all mitigations against Straight Line Speculation.
> retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> blr: enable the mitigation against Straight Line Speculation for BLR instructions.
>
> Links:
> https://reviews.llvm.org/D93221
> https://reviews.llvm.org/D81404
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
> https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
>
> Suggested-by: Manoj Gupta <[email protected]>
> Suggested-by: Nick Desaulniers <[email protected]>
> Suggested-by: Nathan Chancellor <[email protected]>
> Suggested-by: David Laight <[email protected]>
> Suggested-by: Will Deacon <[email protected]>

I'm still reasonably opposed to this patch, so please don't add my
"Suggested-by" here as, if I were to suggest anything, it would be not
to apply this patch :)

I still don't see why SLS is worth a compiler mitigation which will affect
all CPUs that run the kernel binary, but Spectre-v1 is not. In other words,
the big thing missing from this is a justification as to why SLS is a
problem worth working around for general C code.

Will

2021-03-06 12:37:10

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH v5] ARM: Implement SLS mitigation

On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <[email protected]> wrote:
> On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <[email protected]> wrote:
> >
> > On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <[email protected]> wrote:
> >
> > > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > > -mharden-sls=all, which mitigates the straight-line speculation
> > > vulnerability, speculative execution of the instruction following some
> > > unconditional jumps. Notice -mharden-sls= has other options as below,
> > > and this config turns on the strongest option.
> > >
> > > all: enable all mitigations against Straight Line Speculation that are implemented.
> > > none: disable all mitigations against Straight Line Speculation.
> > > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> >
> > I heard about compiler protection for this, so nice to see it happening!
> >
> > Would you happen to know if there is any plan to do the same for GCC?
> > I know you folks at Google like LLVM, but if you know let us know.
>
> I think gcc also has these options.
> https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html

And how does that work with this part of your patch:

+#define SLS_TEXT \
+ ALIGN_FUNCTION(); \
+ *(.text.__llvm_slsblr_thunk_*)

This does not look compiler agnostic?

Yours,
Linus Walleij

2021-03-06 12:37:36

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH v6] ARM: Implement SLS mitigation

On Fri, Mar 5, 2021 at 10:53 AM Will Deacon <[email protected]> wrote:

> I still don't see why SLS is worth a compiler mitigation which will affect
> all CPUs that run the kernel binary, but Spectre-v1 is not. In other words,
> the big thing missing from this is a justification as to why SLS is a
> problem worth working around for general C code.

I might be on the Dunning-Kruger scale of "a little knowledge is dangerous"
here, but AFAICT it is because it is mitigating all branches that result
from the compilation.

I think the people who devised this approach didn't think about the
kernel problem per se but about "any code".

They would have to go back to the compilers, have them introduce
a marker instead for each branch or return, and then emit symbols
that the kernel can run-time patch to mitigate the vulnerability.
Something like that. (I guess.)

Notice that these symbols/pointers would first have a
footprint impact, though maybe they could be discarded if
not applicable.

The patch says:

It inserts speculation barrier sequences (SB or DSB+ISB
depending on the target architecture) after RET and BR, and
replaces BLR with a BL+BR sequence.
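
In concrete terms I believe that looks roughly like this
(my sketch, not actual compiler output):

  ret            becomes:   ret
                            dsb sy
                            isb

  blr x1         becomes:   bl __llvm_slsblr_thunk_x1

with a thunk emitted per register used:

  __llvm_slsblr_thunk_x1:
          br x1
          dsb sy
          isb

which is also where the .text.__llvm_slsblr_thunk_* sections
in the linker script additions come from.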

How would you do that at runtime? If you slot in NOPs
around the branch for mitigating, there will still be impact.
If you want to make the code look the same unless vulnerable,
you would have to patch the branch with a branch to another
place to do the barriers... that patched branch in turn could
be speculated? I feel stupid here. But I guess someone could
come up with something?
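
On arm64 I could imagine leaning on the existing alternatives
framework for the run-time part, something like (purely
hypothetical, ARM64_WORKAROUND_SLS is a capability bit I just
made up):

        ret
alternative_if ARM64_WORKAROUND_SLS
        dsb     sy
        isb
alternative_else_nop_endif

so unaffected CPUs would only pay for two NOPs, but as you say
there is still some impact.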

So the simple, straightforward solution becomes a really
complicated, awkward one that generates a few thousand more
man-hours and delays the mitigations. So I understand if the
authors would want to try the simple approach first.

It may however be our job to say "no, go do the really
complicated thing", I guess that is what you're saying. :)

Yours,
Linus Walleij

2021-03-10 05:04:05

by Jian Cai

[permalink] [raw]
Subject: Re: [PATCH v5] ARM: Implement SLS mitigation

On Sat, Mar 6, 2021 at 4:25 AM Linus Walleij <[email protected]> wrote:
>
> On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <[email protected]> wrote:
> > On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <[email protected]> wrote:
> > >
> > > On Tue, Feb 23, 2021 at 3:36 AM Jian Cai <[email protected]> wrote:
> > >
> > > > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
> > > > -mharden-sls=all, which mitigates the straight-line speculation
> > > > vulnerability, speculative execution of the instruction following some
> > > > unconditional jumps. Notice -mharden-sls= has other options as below,
> > > > and this config turns on the strongest option.
> > > >
> > > > all: enable all mitigations against Straight Line Speculation that are implemented.
> > > > none: disable all mitigations against Straight Line Speculation.
> > > > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
> > > > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
> > >
> > > I heard about compiler protection for this, so nice to see it happening!
> > >
> > > Would you happen to know if there is any plan to do the same for GCC?
> > > I know you folks at Google like LLVM, but if you know let us know.
> >
> > I think gcc also has these options.
> > https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
>
> And how does that work with this part of your patch:
>
> +#define SLS_TEXT \
> + ALIGN_FUNCTION(); \
> + *(.text.__llvm_slsblr_thunk_*)
>
> This does not look compiler agnostic?
>

You are right, GCC does generate different orphan section names. I
will address it in the next version of the patch. Also, it seems only
arm64 gcc supports -mharden-sls=* at this moment; arm32 gcc does not
support it yet. I don't know if there is any plan to implement it for
32-bit gcc, but should we patch the arm32 linker script preemptively,
assuming the sections will be named with the same pattern clang uses,
so the kernel would not fail to boot when the flag is implemented?

Thanks,
Jian

> Yours,
> Linus Walleij

2021-03-22 11:49:36

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH v5] ARM: Implement SLS mitigation

On Wed, Mar 10, 2021 at 5:43 AM Jian Cai <[email protected]> wrote:
> On Sat, Mar 6, 2021 at 4:25 AM Linus Walleij <[email protected]> wrote:
> > On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <[email protected]> wrote:
> > > On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <[email protected]> wrote:

> > > I think gcc also has these options.
> > > https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
> >
> > And how does that work with this part of your patch:
> >
> > +#define SLS_TEXT \
> > + ALIGN_FUNCTION(); \
> > + *(.text.__llvm_slsblr_thunk_*)
> >
> > This does not look compiler agnostic?
>
> You are right, GCC does generate different orphan section names. I
> will address it in the next version of the patch. Also, it seems only
> arm64 gcc supports -mharden-sls=* at this moment; arm32 gcc does not
> support it yet. I don't know if there is any plan to implement it for
> 32-bit gcc, but should we patch the arm32 linker script preemptively,
> assuming the sections will be named with the same pattern clang uses,
> so the kernel would not fail to boot when the flag is implemented?

I think the best thing is to have something like this:
Implement a macro such as this in
include/linux/compiler-clang.h

#define SLS_TEXT_SECTION *(.text.__llvm_slsblr_thunk_*)

then the corresponding macro in include/linux/compiler-gcc.h,
but here also add a

#define SLS_TEXT_SECTION #error "no compiler support"

if the compiler version does not have this.
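
Spelled out a bit more (sketch only; note a #define body cannot
itself contain #error, so the GCC side would have to fail at the
top level instead):

/* include/linux/compiler-clang.h */
#define SLS_TEXT_SECTION *(.text.__llvm_slsblr_thunk_*)

/* include/linux/compiler-gcc.h, until GCC's section names
 * are known */
#ifdef CONFIG_HARDEN_SLS_ALL
#error "SLS thunk section names not known for GCC"
#endif

and then the linker scripts use SLS_TEXT_SECTION instead of
hardcoding the Clang pattern.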

I don't know the exact best approach, sadly; as the patch
looks now it seems a bit fragile. I wonder if you get linker
warnings when this section is unused?

Yours,
Linus Walleij

2021-03-23 22:42:48

by Jian Cai

[permalink] [raw]
Subject: Re: [PATCH v5] ARM: Implement SLS mitigation

Thanks for the suggestion. I've sent an inquiry to the author of
-mharden-sls* in GCC and hopefully that will shed some more light. We
do get warnings for orphan sections when using lld. The other linkers
do not seem to provide such warnings, although the boot failure also
does not seem to happen with them.

On Mon, Mar 22, 2021 at 4:45 AM Linus Walleij <[email protected]> wrote:
>
> On Wed, Mar 10, 2021 at 5:43 AM Jian Cai <[email protected]> wrote:
> > On Sat, Mar 6, 2021 at 4:25 AM Linus Walleij <[email protected]> wrote:
> > > On Fri, Mar 5, 2021 at 12:23 AM Jian Cai <[email protected]> wrote:
> > > > On Wed, Mar 3, 2021 at 7:04 AM Linus Walleij <[email protected]> wrote:
>
> > > > I think gcc also has these options.
> > > > https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
> > >
> > > And how does that work with this part of your patch:
> > >
> > > +#define SLS_TEXT \
> > > + ALIGN_FUNCTION(); \
> > > + *(.text.__llvm_slsblr_thunk_*)
> > >
> > > This does not look compiler agnostic?
> >
> > You are right, GCC does generate different orphan section names. I
> > will address it in the next version of the patch. Also, it seems only
> > arm64 gcc supports -mharden-sls=* at this moment; arm32 gcc does not
> > support it yet. I don't know if there is any plan to implement it for
> > 32-bit gcc, but should we patch the arm32 linker script preemptively,
> > assuming the sections will be named with the same pattern clang uses,
> > so the kernel would not fail to boot when the flag is implemented?
>
> I think the best thing is to have something like this:
> Implement a macro such as this in
> include/linux/compiler-clang.h
>
> #define SLS_TEXT_SECTION *(.text.__llvm_slsblr_thunk_*)
>
> then the corresponding in include/linux/compiler-gcc.h
> but here also add a
>
> #define SLS_TEXT_SECTION #error "no compiler support"
>
> if the compiler version does not have this.
>
> I don't know the exact best approach, sadly; as the patch
> looks now it seems a bit fragile. I wonder if you get linker
> warnings when this section is unused?
>
> Yours,
> Linus Walleij

2021-03-25 14:03:59

by Linus Walleij

[permalink] [raw]
Subject: Re: [PATCH v2] ARM: Implement Clang's SLS mitigation

Hi Will,

I went back and found this feedback which is kind of the heart of the
issues regarding SLS.

On Wed, Feb 17, 2021 at 10:51 AM Will Deacon <[email protected]> wrote:

> The big problem I have with this is that it's a compile-time decision.
> For the other spectre crap we have a combination of the "mitigations=off"
> command-line and CPU detection to avoid the cost of the mitigation where
> it is not deemed necessary.

For newcomers, the way this works today can be found in e.g.:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/kernel/proton-pack.c
mitigations=off turns off Spectre v2 and v4 mitigations.

AFAICT this is achieved with misc parameterization to firmware
and hypervisors and no runtime-patching of any code
at all?

(On ARM32 it has no effect whatsoever; we just turn on all
Spectre v2 mitigations by default. No runtime choice.)

The way I understand it is that for SLS the compiler must at least
put in some kind of placeholders, but that it *might* be possible to do
runtime mitigations on top of that.

We need feedback from the compiler people as to what is
possible here.

If it is *not* possible to mitigate at run-time, then I don't know
what is the right thing to do. Certainly not to turn it on by default
as is done today?

> So I think that either we enable this unconditionally, or we don't enable it
> at all (and people can hack their CFLAGS themselves if they want to). It
> would be helpful for one of the Arm folks to chime in, as I'm yet to see any
> evidence that this is actually exploitable.
(...)
> Is it any worse that Spectre-v1,
> where we _don't_ have a compiler mitigation?

There is such a compiler mitigation for Spectre v1, under
the name "Speculative load hardening"; the kernel
is not (yet) enabling it.

https://llvm.org/docs/SpeculativeLoadHardening.html
it comes with the intuitive command line switch
-mspeculative-load-hardening

Certainly a separate patch can add speculative load
hardening support on top of this, or before this patch,
if there is desire and/or it feels like a more coherent
approach.

As the article says "The performance overhead of this style of
comprehensive mitigation is very high (...) most large applications
seeing a 30% overhead or less."

I suppose it can be enabled while compiling the kernel just
like this patch enables -mharden-sls=all
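
e.g. something like this (sketch, the config symbol is made up):

ifeq ($(CONFIG_SPECULATIVE_LOAD_HARDENING), y)
KBUILD_CFLAGS += -mspeculative-load-hardening
endif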

I don't know if your comment means that if we enable one
of them we should just as well enable both, or neither, since
otherwise there is no real protection: attackers can
just use the other, similar attack vector?

> Finally, do we have to worry about our assembly code?

AFAICT yes, and you seem to have hardened
AArch64's ERETs, which seemed especially vulnerable
in commit
679db70801da9fda91d26caf13bf5b5ccc74e8e8
"arm64: entry: Place an SB sequence following an ERET instruction"
Link for people without kernel source:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=679db70801da9fda91d26caf13bf5b5ccc74e8e8

So it seems the most vulnerable spot was already
fixed by you, thanks! But I bet there are some more
spots.
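
If I read that commit right, the pattern is simply (from
memory):

        eret
        sb

where the arm64 "sb" macro expands to DSB NSH + ISB on CPUs
that lack the dedicated SB instruction, as far as I recall.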

Yours,
Linus Walleij