2019-11-08 19:00:15

by Greg Kroah-Hartman

Subject: [PATCH 4.4 00/75] 4.4.200-stable review

This is the start of the stable review cycle for the 4.4.200 release.
There are 75 patches in this series, all of which will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Sun 10 Nov 2019 05:42:11 PM UTC.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.200-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 4.4.200-rc1

zhangyi (F) <[email protected]>
fs/dcache: move security_d_instantiate() behind attaching dentry to inode

Petr Vorel <[email protected]>
alarmtimer: Change remaining ENOTSUPP to EOPNOTSUPP

Russell King <[email protected]>
ARM: fix the cockup in the previous patch

Russell King <[email protected]>
ARM: ensure that processor vtables is not lost after boot

Russell King <[email protected]>
ARM: spectre-v2: per-CPU vtables to work around big.Little systems

Russell King <[email protected]>
ARM: add PROC_VTABLE and PROC_TABLE macros

Russell King <[email protected]>
ARM: clean up per-processor check_bugs method call

Russell King <[email protected]>
ARM: split out processor lookup

Russell King <[email protected]>
ARM: make lookup_processor_type() non-__init

Julien Thierry <[email protected]>
ARM: 8810/1: vfp: Fix wrong assignement to ufp_exc

Julien Thierry <[email protected]>
ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization

Julien Thierry <[email protected]>
ARM: 8795/1: spectre-v1.1: use put_user() for __put_user()

Julien Thierry <[email protected]>
ARM: 8794/1: uaccess: Prevent speculative use of the current addr_limit

Julien Thierry <[email protected]>
ARM: 8793/1: signal: replace __put_user_error with __put_user

Julien Thierry <[email protected]>
ARM: 8792/1: oabi-compat: copy oabi events using __copy_to_user()

Julien Thierry <[email protected]>
ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state

Julien Thierry <[email protected]>
ARM: 8789/1: signal: copy registers using __copy_to_user()

Russell King <[email protected]>
ARM: spectre-v1: mitigate user accesses

Russell King <[email protected]>
ARM: spectre-v1: use get_user() for __get_user()

Russell King <[email protected]>
ARM: use __inttype() in get_user()

Russell King <[email protected]>
ARM: oabi-compat: copy semops using __copy_from_user()

Russell King <[email protected]>
ARM: vfp: use __copy_from_user() when restoring VFP state

Russell King <[email protected]>
ARM: signal: copy registers using __copy_from_user()

Russell King <[email protected]>
ARM: spectre-v1: fix syscall entry

Russell King <[email protected]>
ARM: spectre-v1: add array_index_mask_nospec() implementation

Russell King <[email protected]>
ARM: spectre-v1: add speculation barrier (csdb) macros

Russell King <[email protected]>
ARM: spectre-v2: warn about incorrect context switching functions

Russell King <[email protected]>
ARM: spectre-v2: add firmware based hardening

Russell King <[email protected]>
ARM: spectre-v2: harden user aborts in kernel space

Russell King <[email protected]>
ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit

Russell King <[email protected]>
ARM: spectre-v2: harden branch predictor on context switches

Russell King <[email protected]>
ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre

Russell King <[email protected]>
ARM: bugs: add support for per-processor bug checking

Russell King <[email protected]>
ARM: bugs: hook processor bug checking into SMP and suspend paths

Russell King <[email protected]>
ARM: bugs: prepare processor bug infrastructure

Russell King <[email protected]>
ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs

Marc Zyngier <[email protected]>
arm/arm64: smccc-1.1: Handle function result as parameters

Marc Zyngier <[email protected]>
arm/arm64: smccc-1.1: Make return values unsigned long

Marc Zyngier <[email protected]>
arm/arm64: smccc: Add SMCCC-specific return codes

Marc Zyngier <[email protected]>
arm/arm64: smccc: Implement SMCCC v1.1 inline primitive

Marc Zyngier <[email protected]>
arm/arm64: smccc: Make function identifiers an unsigned quantity

Marc Zyngier <[email protected]>
firmware/psci: Expose SMCCC version through psci_ops

Marc Zyngier <[email protected]>
firmware/psci: Expose PSCI conduit

Marc Zyngier <[email protected]>
arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support

Marc Zyngier <[email protected]>
arm/arm64: KVM: Advertise SMCCC v1.1

Vladimir Murzin <[email protected]>
ARM: Move system register accessors to asm/cp15.h

Russell King <[email protected]>
ARM: uaccess: remove put_user() code duplication

Jens Wiklander <[email protected]>
ARM: 8481/2: drivers: psci: replace psci firmware calls

Jens Wiklander <[email protected]>
ARM: 8480/2: arm64: add implementation for arm-smccc

Jens Wiklander <[email protected]>
ARM: 8479/2: add implementation for arm-smccc

Jens Wiklander <[email protected]>
ARM: 8478/2: arm/arm64: add arm-smccc

Andrey Ryabinin <[email protected]>
ARM: 8051/1: put_user: fix possible data corruption in put_user

Jeffrey Hugo <[email protected]>
dmaengine: qcom: bam_dma: Fix resource leak

Eric Dumazet <[email protected]>
net/flow_dissector: switch to siphash

Eric Dumazet <[email protected]>
inet: stop leaking jiffies on the wire

Eran Ben Elisha <[email protected]>
net/mlx4_core: Dynamically set guaranteed amount of counters per VF

Xin Long <[email protected]>
vxlan: check tun_info options_len properly

Eric Dumazet <[email protected]>
net: add READ_ONCE() annotation in __skb_wait_for_more_packets()

zhanglin <[email protected]>
net: Zeroing the structure ethtool_wolinfo in ethtool_get_wol()

Jiangfeng Xiao <[email protected]>
net: hisilicon: Fix ping latency when deal with high throughput

Tejun Heo <[email protected]>
net: fix sk_page_frag() recursion from memory reclaim

Eric Dumazet <[email protected]>
dccp: do not leak jiffies on the wire

Dave Wysochanski <[email protected]>
cifs: Fix cifsInodeInfo lock_sem deadlock when reconnect occurs

Jonas Gorski <[email protected]>
MIPS: bmips: mark exception vectors as char arrays

Navid Emamdoost <[email protected]>
of: unittest: fix memory leak in unittest_data_add

Bodo Stroesser <[email protected]>
scsi: target: core: Do not overwrite CDB byte 1

Yunfeng Ye <[email protected]>
perf kmem: Fix memory leak in compact_gfp_flags()

Thomas Bogendoerfer <[email protected]>
scsi: fix kconfig dependency warning related to 53C700_LE_ON_BE

Thomas Bogendoerfer <[email protected]>
scsi: sni_53c710: fix compilation error

Russell King <[email protected]>
ARM: mm: fix alignment handler faults under memory pressure

Adam Ford <[email protected]>
ARM: dts: logicpd-torpedo-som: Remove twl_keypad

Robin Murphy <[email protected]>
ASoC: rockchip: i2s: Fix RPM imbalance

Yizhuo <[email protected]>
regulator: pfuze100-regulator: Variable "val" in pfuze100_regulator_probe() could be uninitialized

Axel Lin <[email protected]>
regulator: ti-abb: Fix timeout in ti_abb_wait_txdone/ti_abb_clear_all_txdone

Seth Forshee <[email protected]>
kbuild: add -fcf-protection=none when using retpoline flags


-------------

Diffstat:

Makefile | 10 +-
arch/arm/Kconfig | 3 +-
arch/arm/boot/dts/logicpd-torpedo-som.dtsi | 4 +
arch/arm/include/asm/arch_gicv3.h | 27 +-
arch/arm/include/asm/assembler.h | 23 ++
arch/arm/include/asm/barrier.h | 34 +++
arch/arm/include/asm/bugs.h | 6 +-
arch/arm/include/asm/cp15.h | 18 ++
arch/arm/include/asm/cputype.h | 9 +
arch/arm/include/asm/proc-fns.h | 65 ++++-
arch/arm/include/asm/system_misc.h | 15 ++
arch/arm/include/asm/thread_info.h | 8 +-
arch/arm/include/asm/uaccess.h | 177 ++++++++-----
arch/arm/kernel/Makefile | 4 +-
arch/arm/kernel/armksyms.c | 6 +
arch/arm/kernel/bugs.c | 18 ++
arch/arm/kernel/entry-common.S | 18 +-
arch/arm/kernel/entry-header.S | 25 ++
arch/arm/kernel/head-common.S | 6 +-
arch/arm/kernel/psci-call.S | 31 ---
arch/arm/kernel/setup.c | 40 +--
arch/arm/kernel/signal.c | 125 ++++-----
arch/arm/kernel/smccc-call.S | 62 +++++
arch/arm/kernel/smp.c | 36 +++
arch/arm/kernel/suspend.c | 2 +
arch/arm/kernel/sys_oabi-compat.c | 16 +-
arch/arm/lib/copy_from_user.S | 5 +
arch/arm/mm/Kconfig | 23 ++
arch/arm/mm/Makefile | 2 +-
arch/arm/mm/alignment.c | 44 +++-
arch/arm/mm/fault.c | 3 +
arch/arm/mm/proc-macros.S | 13 +-
arch/arm/mm/proc-v7-2level.S | 6 -
arch/arm/mm/proc-v7-bugs.c | 161 ++++++++++++
arch/arm/mm/proc-v7.S | 154 ++++++++---
arch/arm/vfp/vfpmodule.c | 37 ++-
arch/arm64/Kconfig | 1 +
arch/arm64/kernel/Makefile | 4 +-
arch/arm64/kernel/arm64ksyms.c | 5 +
arch/arm64/kernel/asm-offsets.c | 3 +
arch/arm64/kernel/psci-call.S | 28 --
arch/arm64/kernel/smccc-call.S | 43 ++++
arch/mips/bcm63xx/prom.c | 2 +-
arch/mips/include/asm/bmips.h | 10 +-
arch/mips/kernel/smp-bmips.c | 8 +-
drivers/dma/qcom_bam_dma.c | 14 +
drivers/firmware/Kconfig | 3 +
drivers/firmware/psci.c | 78 +++++-
drivers/net/ethernet/hisilicon/hip04_eth.c | 15 +-
.../net/ethernet/mellanox/mlx4/resource_tracker.c | 42 +--
drivers/net/vxlan.c | 5 +-
drivers/of/unittest.c | 1 +
drivers/regulator/pfuze100-regulator.c | 8 +-
drivers/regulator/ti-abb-regulator.c | 26 +-
drivers/scsi/Kconfig | 2 +-
drivers/scsi/sni_53c710.c | 4 +-
drivers/target/target_core_device.c | 21 --
fs/cifs/cifsglob.h | 5 +
fs/cifs/cifsproto.h | 1 +
fs/cifs/file.c | 23 +-
fs/cifs/smb2file.c | 2 +-
fs/dcache.c | 2 +-
include/linux/arm-smccc.h | 283 +++++++++++++++++++++
include/linux/gfp.h | 23 ++
include/linux/psci.h | 13 +
include/linux/skbuff.h | 3 +-
include/net/flow_dissector.h | 3 +-
include/net/sock.h | 11 +-
kernel/time/alarmtimer.c | 4 +-
net/core/datagram.c | 2 +-
net/core/ethtool.c | 4 +-
net/core/flow_dissector.c | 48 ++--
net/dccp/ipv4.c | 4 +-
net/ipv4/datagram.c | 2 +-
net/ipv4/tcp_ipv4.c | 4 +-
net/sched/sch_fq_codel.c | 6 +-
net/sched/sch_hhf.c | 8 +-
net/sched/sch_sfb.c | 13 +-
net/sched/sch_sfq.c | 14 +-
net/sctp/socket.c | 2 +-
sound/soc/rockchip/rockchip_i2s.c | 2 +-
tools/perf/builtin-kmem.c | 1 +
82 files changed, 1556 insertions(+), 486 deletions(-)



2019-11-08 19:00:18

by Greg Kroah-Hartman

Subject: [PATCH 4.4 05/75] ARM: dts: logicpd-torpedo-som: Remove twl_keypad

From: Adam Ford <[email protected]>

[ Upstream commit 6b512b0ee091edcb8e46218894e4c917d919d3dc ]

The TWL4030 used on the Logic PD Torpedo SOM does not have the
keypad pins routed. This patch disables the twl_keypad driver
to remove some splat during boot:

twl4030_keypad 48070000.i2c:twl@48:keypad: missing or malformed property linux,keymap: -22
twl4030_keypad 48070000.i2c:twl@48:keypad: Failed to build keymap
twl4030_keypad: probe of 48070000.i2c:twl@48:keypad failed with error -22

Signed-off-by: Adam Ford <[email protected]>
[[email protected]: removed error time stamps]
Signed-off-by: Tony Lindgren <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/arm/boot/dts/logicpd-torpedo-som.dtsi | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
index e05670423d8b7..a6c59bf698b31 100644
--- a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
+++ b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
@@ -169,3 +169,7 @@
&twl_gpio {
ti,use-leds;
};
+
+&twl_keypad {
+ status = "disabled";
+};
--
2.20.1



2019-11-08 19:00:20

by Greg Kroah-Hartman

Subject: [PATCH 4.4 46/75] ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit

From: Russell King <[email protected]>

Commit e388b80288aade31135aca23d32eee93dd106795 upstream.

When the branch predictor hardening is enabled, firmware must have set
the IBE bit in the auxiliary control register. If this bit has not
been set, the Spectre workarounds will not be functional.

Add validation that this bit is set, and print a warning at alert level
if this is not the case.

Signed-off-by: Russell King <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/mm/Makefile | 2 +-
arch/arm/mm/proc-v7-bugs.c | 36 ++++++++++++++++++++++++++++++++++++
arch/arm/mm/proc-v7.S | 4 ++--
3 files changed, 39 insertions(+), 3 deletions(-)

--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -92,7 +92,7 @@ obj-$(CONFIG_CPU_MOHAWK) += proc-mohawk.
obj-$(CONFIG_CPU_FEROCEON) += proc-feroceon.o
obj-$(CONFIG_CPU_V6) += proc-v6.o
obj-$(CONFIG_CPU_V6K) += proc-v6.o
-obj-$(CONFIG_CPU_V7) += proc-v7.o
+obj-$(CONFIG_CPU_V7) += proc-v7.o proc-v7-bugs.o
obj-$(CONFIG_CPU_V7M) += proc-v7m.o

AFLAGS_proc-v6.o :=-Wa,-march=armv6
--- /dev/null
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kernel.h>
+#include <linux/smp.h>
+
+static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned,
+ u32 mask, const char *msg)
+{
+ u32 aux_cr;
+
+ asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (aux_cr));
+
+ if ((aux_cr & mask) != mask) {
+ if (!*warned)
+ pr_err("CPU%u: %s", smp_processor_id(), msg);
+ *warned = true;
+ }
+}
+
+static DEFINE_PER_CPU(bool, spectre_warned);
+
+static void check_spectre_auxcr(bool *warned, u32 bit)
+{
+ if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+ cpu_v7_check_auxcr_set(warned, bit,
+ "Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
+}
+
+void cpu_v7_ca8_ibe(void)
+{
+ check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6));
+}
+
+void cpu_v7_ca15_ibe(void)
+{
+ check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0));
+}
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -511,7 +511,7 @@ __v7_setup_stack:
globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend
globl_equ cpu_ca8_do_resume, cpu_v7_do_resume
#endif
- define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+ define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca8_ibe

@ Cortex-A9 - needs more registers preserved across suspend/resume
@ and bpiall switch_mm for hardening
@@ -544,7 +544,7 @@ __v7_setup_stack:
globl_equ cpu_ca15_suspend_size, cpu_v7_suspend_size
globl_equ cpu_ca15_do_suspend, cpu_v7_do_suspend
globl_equ cpu_ca15_do_resume, cpu_v7_do_resume
- define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+ define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca15_ibe
#ifdef CONFIG_CPU_PJ4B
define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#endif


2019-11-08 19:00:25

by Greg Kroah-Hartman

Subject: [PATCH 4.4 07/75] scsi: sni_53c710: fix compilation error

From: Thomas Bogendoerfer <[email protected]>

[ Upstream commit 0ee6211408a8e939428f662833c7301394125b80 ]

Drop the out-of-memory dev_printk(), which passed the wrong device pointer argument.

[mkp: typo]

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Thomas Bogendoerfer <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/scsi/sni_53c710.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/scsi/sni_53c710.c b/drivers/scsi/sni_53c710.c
index 76278072147e2..b0f5220ae23a8 100644
--- a/drivers/scsi/sni_53c710.c
+++ b/drivers/scsi/sni_53c710.c
@@ -78,10 +78,8 @@ static int snirm710_probe(struct platform_device *dev)

base = res->start;
hostdata = kzalloc(sizeof(*hostdata), GFP_KERNEL);
- if (!hostdata) {
- dev_printk(KERN_ERR, dev, "Failed to allocate host data\n");
+ if (!hostdata)
return -ENOMEM;
- }

hostdata->dev = &dev->dev;
dma_set_mask(&dev->dev, DMA_BIT_MASK(32));
--
2.20.1



2019-11-08 19:00:53

by Greg Kroah-Hartman

Subject: [PATCH 4.4 47/75] ARM: spectre-v2: harden user aborts in kernel space

From: Russell King <[email protected]>

Commit f5fe12b1eaee220ce62ff9afb8b90929c396595f upstream.

In order to prevent aliasing attacks on the branch predictor,
invalidate the BTB or instruction cache on CPUs that are known to be
affected when taking an abort on an address that is outside of a user
task limit:

Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate icache.

If the IBE bit is not set, then there is little point to enabling the
workaround.

Signed-off-by: Russell King <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/cp15.h | 3 +
arch/arm/include/asm/system_misc.h | 15 +++++++
arch/arm/mm/fault.c | 3 +
arch/arm/mm/proc-v7-bugs.c | 73 ++++++++++++++++++++++++++++++++++---
arch/arm/mm/proc-v7.S | 8 ++--
5 files changed, 94 insertions(+), 8 deletions(-)

--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -64,6 +64,9 @@
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)

+#define BPIALL __ACCESS_CP15(c7, 0, c5, 6)
+#define ICIALLU __ACCESS_CP15(c7, 0, c5, 0)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */

static inline unsigned long get_cr(void)
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -7,6 +7,7 @@
#include <linux/linkage.h>
#include <linux/irqflags.h>
#include <linux/reboot.h>
+#include <linux/percpu.h>

extern void cpu_init(void);

@@ -14,6 +15,20 @@ void soft_restart(unsigned long);
extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
extern void (*arm_pm_idle)(void);

+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+typedef void (*harden_branch_predictor_fn_t)(void);
+DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+static inline void harden_branch_predictor(void)
+{
+ harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn,
+ smp_processor_id());
+ if (fn)
+ fn();
+}
+#else
+#define harden_branch_predictor() do { } while (0)
+#endif
+
#define UDBG_UNDEFINED (1 << 0)
#define UDBG_SYSCALL (1 << 1)
#define UDBG_BADABORT (1 << 2)
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk,
{
struct siginfo si;

+ if (addr > TASK_SIZE)
+ harden_branch_predictor();
+
#ifdef CONFIG_DEBUG_USER
if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
((user_debug & UDBG_BUS) && (sig == SIGBUS))) {
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -2,7 +2,61 @@
#include <linux/kernel.h>
#include <linux/smp.h>

-static __maybe_unused void cpu_v7_check_auxcr_set(bool *warned,
+#include <asm/cp15.h>
+#include <asm/cputype.h>
+#include <asm/system_misc.h>
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
+
+static void harden_branch_predictor_bpiall(void)
+{
+ write_sysreg(0, BPIALL);
+}
+
+static void harden_branch_predictor_iciallu(void)
+{
+ write_sysreg(0, ICIALLU);
+}
+
+static void cpu_v7_spectre_init(void)
+{
+ const char *spectre_v2_method = NULL;
+ int cpu = smp_processor_id();
+
+ if (per_cpu(harden_branch_predictor_fn, cpu))
+ return;
+
+ switch (read_cpuid_part()) {
+ case ARM_CPU_PART_CORTEX_A8:
+ case ARM_CPU_PART_CORTEX_A9:
+ case ARM_CPU_PART_CORTEX_A12:
+ case ARM_CPU_PART_CORTEX_A17:
+ case ARM_CPU_PART_CORTEX_A73:
+ case ARM_CPU_PART_CORTEX_A75:
+ per_cpu(harden_branch_predictor_fn, cpu) =
+ harden_branch_predictor_bpiall;
+ spectre_v2_method = "BPIALL";
+ break;
+
+ case ARM_CPU_PART_CORTEX_A15:
+ case ARM_CPU_PART_BRAHMA_B15:
+ per_cpu(harden_branch_predictor_fn, cpu) =
+ harden_branch_predictor_iciallu;
+ spectre_v2_method = "ICIALLU";
+ break;
+ }
+ if (spectre_v2_method)
+ pr_info("CPU%u: Spectre v2: using %s workaround\n",
+ smp_processor_id(), spectre_v2_method);
+}
+#else
+static void cpu_v7_spectre_init(void)
+{
+}
+#endif
+
+static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
u32 mask, const char *msg)
{
u32 aux_cr;
@@ -13,24 +67,33 @@ static __maybe_unused void cpu_v7_check_
if (!*warned)
pr_err("CPU%u: %s", smp_processor_id(), msg);
*warned = true;
+ return false;
}
+ return true;
}

static DEFINE_PER_CPU(bool, spectre_warned);

-static void check_spectre_auxcr(bool *warned, u32 bit)
+static bool check_spectre_auxcr(bool *warned, u32 bit)
{
- if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+ return IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR) &&
cpu_v7_check_auxcr_set(warned, bit,
"Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
}

void cpu_v7_ca8_ibe(void)
{
- check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6));
+ if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
+ cpu_v7_spectre_init();
}

void cpu_v7_ca15_ibe(void)
{
- check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0));
+ if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
+ cpu_v7_spectre_init();
+}
+
+void cpu_v7_bugs_init(void)
+{
+ cpu_v7_spectre_init();
}
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -474,8 +474,10 @@ __v7_setup_stack:

__INITDATA

+ .weak cpu_v7_bugs_init
+
@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
- define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+ define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init

#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@ generic v7 bpiall on context switch
@@ -490,7 +492,7 @@ __v7_setup_stack:
globl_equ cpu_v7_bpiall_do_suspend, cpu_v7_do_suspend
globl_equ cpu_v7_bpiall_do_resume, cpu_v7_do_resume
#endif
- define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+ define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init

#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
#else
@@ -526,7 +528,7 @@ __v7_setup_stack:
globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm
#endif
globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext
- define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+ define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init
#endif

@ Cortex-A15 - needs iciallu switch_mm for hardening


2019-11-08 19:00:58

by Greg Kroah-Hartman

Subject: [PATCH 4.4 72/75] ARM: ensure that processor vtables is not lost after boot

From: Russell King <[email protected]>

Commit 3a4d0c2172bcf15b7a3d9d498b2b355f9864286b upstream.

Marek Szyprowski reported problems with CPU hotplug in current kernels.
This was tracked down to the processor vtables being located in an
init section, and therefore discarded after kernel boot, despite being
required after boot to properly initialise the non-boot CPUs.

Arrange for these tables to end up in .rodata when required.

Reported-by: Marek Szyprowski <[email protected]>
Tested-by: Krzysztof Kozlowski <[email protected]>
Fixes: 383fb3ee8024 ("ARM: spectre-v2: per-CPU vtables to work around big.Little systems")
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/mm/proc-macros.S | 10 ++++++++++
1 file changed, 10 insertions(+)

--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -259,6 +259,13 @@
.endm

.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0
+/*
+ * If we are building for big.Little with branch predictor hardening,
+ * we need the processor function tables to remain available after boot.
+ */
+#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+ .section ".rodata"
+#endif
.type \name\()_processor_functions, #object
.align 2
ENTRY(\name\()_processor_functions)
@@ -294,6 +301,9 @@ ENTRY(\name\()_processor_functions)
.endif

.size \name\()_processor_functions, . - \name\()_processor_functions
+#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+ .previous
+#endif
.endm

.macro define_cache_functions name:req


2019-11-08 19:01:06

by Greg Kroah-Hartman

Subject: [PATCH 4.4 58/75] ARM: spectre-v1: mitigate user accesses

From: Russell King <[email protected]>

Commit a3c0f84765bb429ba0fd23de1c57b5e1591c9389 upstream.

Spectre variant 1 attacks are about this sequence of pseudo-code:

index = load(user-manipulated pointer);
access(base + index * stride);

In order for the cache side-channel to work, the access() must be made
to memory for which userspace can detect whether cache lines have been
loaded. On 32-bit ARM, this must be either user accessible memory, or
a kernel mapping of that same user accessible memory.

The problem occurs when the load() speculatively loads privileged data,
and the subsequent access() is made to user accessible memory.

Any load() which makes use of a user-manipulated pointer is a potential
problem if the data it has loaded is used in a subsequent access. This
also applies for the access() if the data loaded by that access is used
by a subsequent access.

Harden the get_user() accessors against Spectre attacks by forcing out
of bounds addresses to a NULL pointer. This prevents get_user() being
used as the load() step above. As a side effect, put_user() will also
be affected even though it isn't implicated.

Also harden copy_from_user() by redoing the bounds check within the
arm_copy_from_user() code, and NULLing the pointer if out of bounds.
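
In rough C, the pointer sanitisation amounts to the sketch below
(illustrative only: the helper name is invented, and the real
implementation is the branchless assembly in this patch, which ends
with a csdb speculation barrier that plain C cannot express):

static inline void __user *sanitize_uptr(void __user *ptr,
					 unsigned long size,
					 unsigned long addr_limit)
{
	unsigned long addr = (unsigned long)ptr;

	/*
	 * Out of range (addr + size walks past addr_limit) or the
	 * addition wrapped: force the pointer to NULL so that a
	 * speculated access cannot reveal privileged data.
	 */
	if (addr + size > addr_limit || addr + size < addr)
		ptr = NULL;
	/* the csdb barrier would follow here in the real code */
	return ptr;
}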

Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/assembler.h | 4 ++++
arch/arm/lib/copy_from_user.S | 9 +++++++++
2 files changed, 13 insertions(+)

--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -454,6 +454,10 @@ THUMB( orr \reg , \reg , #PSR_T_BIT )
adds \tmp, \addr, #\size - 1
sbcccs \tmp, \tmp, \limit
bcs \bad
+#ifdef CONFIG_CPU_SPECTRE
+ movcs \addr, #0
+ csdb
+#endif
#endif
.endm

--- a/arch/arm/lib/copy_from_user.S
+++ b/arch/arm/lib/copy_from_user.S
@@ -90,6 +90,15 @@
.text

ENTRY(arm_copy_from_user)
+#ifdef CONFIG_CPU_SPECTRE
+ get_thread_info r3
+ ldr r3, [r3, #TI_ADDR_LIMIT]
+ adds ip, r1, r2 @ ip=addr+size
+ sub r3, r3, #1 @ addr_limit - 1
+ cmpcc ip, r3 @ if (addr+size > addr_limit - 1)
+ movcs r1, #0 @ addr = NULL
+ csdb
+#endif

#include "copy_template.S"



2019-11-08 19:01:09

by Greg Kroah-Hartman

Subject: [PATCH 4.4 61/75] ARM: 8792/1: oabi-compat: copy oabi events using __copy_to_user()

From: Julien Thierry <[email protected]>

Commit 319508902600c2688e057750148487996396e9ca upstream.

Copy events to user using __copy_to_user() rather than copying members
individually with __put_user_error().
This has the benefit of disabling/enabling PAN once per event instead of
once per event member.

Signed-off-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/kernel/sys_oabi-compat.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

--- a/arch/arm/kernel/sys_oabi-compat.c
+++ b/arch/arm/kernel/sys_oabi-compat.c
@@ -276,6 +276,7 @@ asmlinkage long sys_oabi_epoll_wait(int
int maxevents, int timeout)
{
struct epoll_event *kbuf;
+ struct oabi_epoll_event e;
mm_segment_t fs;
long ret, err, i;

@@ -294,8 +295,11 @@ asmlinkage long sys_oabi_epoll_wait(int
set_fs(fs);
err = 0;
for (i = 0; i < ret; i++) {
- __put_user_error(kbuf[i].events, &events->events, err);
- __put_user_error(kbuf[i].data, &events->data, err);
+ e.events = kbuf[i].events;
+ e.data = kbuf[i].data;
+ err = __copy_to_user(events, &e, sizeof(e));
+ if (err)
+ break;
events++;
}
kfree(kbuf);


2019-11-08 19:01:09

by Greg Kroah-Hartman

Subject: [PATCH 4.4 73/75] ARM: fix the cockup in the previous patch

From: Russell King <[email protected]>

Commit d6951f582cc50ba0ad22ef46b599740966599b14 upstream.

The intention in the previous patch was to only place the processor
tables in the .rodata section if big.Little was being built and we
wanted the branch target hardening, but instead (due to the way it
was tested) it ended up always placing the tables into the .rodata
section.

Although harmless, let's correct this anyway.

Fixes: 3a4d0c2172bc ("ARM: ensure that processor vtables is not lost after boot")
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/mm/proc-macros.S | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -263,7 +263,7 @@
* If we are building for big.Little with branch predictor hardening,
* we need the processor function tables to remain available after boot.
*/
-#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
.section ".rodata"
#endif
.type \name\()_processor_functions, #object
@@ -301,7 +301,7 @@ ENTRY(\name\()_processor_functions)
.endif

.size \name\()_processor_functions, . - \name\()_processor_functions
-#if 1 // defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
.previous
#endif
.endm


2019-11-08 19:01:11

by Greg Kroah-Hartman

Subject: [PATCH 4.4 64/75] ARM: 8795/1: spectre-v1.1: use put_user() for __put_user()

From: Julien Thierry <[email protected]>

Commit e3aa6243434fd9a82e84bb79ab1abd14f2d9a5a7 upstream.

When Spectre mitigation is required, __put_user() needs to include
check_uaccess. This is already the case for put_user(), so just make
__put_user() an alias of put_user().
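
To make the pattern concrete, here is a hedged sketch (function and
values invented for illustration) of the caller this change targets:
a range validated once with access_ok(), then written through the
unchecked accessor, which is exactly the store a Spectre-v1.1 gadget
needs unless __put_user() re-verifies the pointer:

#include <linux/uaccess.h>

static int put_pair(u32 __user *p, u32 a, u32 b)
{
	if (!access_ok(VERIFY_WRITE, p, 2 * sizeof(u32)))
		return -EFAULT;

	/*
	 * Before this patch, __put_user() trusted the access_ok()
	 * above and skipped its own check; with CONFIG_CPU_SPECTRE it
	 * is now an alias of put_user(), so each store re-verifies
	 * (and sanitises) the address.
	 */
	if (__put_user(a, p) || __put_user(b, p + 1))
		return -EFAULT;
	return 0;
}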

Signed-off-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/uaccess.h | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)

--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -408,6 +408,14 @@ do { \
__pu_err; \
})

+#ifdef CONFIG_CPU_SPECTRE
+/*
+ * When mitigating Spectre variant 1.1, all accessors need to include
+ * verification of the address space.
+ */
+#define __put_user(x, ptr) put_user(x, ptr)
+
+#else
#define __put_user(x, ptr) \
({ \
long __pu_err = 0; \
@@ -415,12 +423,6 @@ do { \
__pu_err; \
})

-#define __put_user_error(x, ptr, err) \
-({ \
- __put_user_switch((x), (ptr), (err), __put_user_nocheck); \
- (void) 0; \
-})
-
#define __put_user_nocheck(x, __pu_ptr, __err, __size) \
do { \
unsigned long __pu_addr = (unsigned long)__pu_ptr; \
@@ -500,6 +502,7 @@ do { \
: "r" (x), "i" (-EFAULT) \
: "cc")

+#endif /* !CONFIG_CPU_SPECTRE */

#ifdef CONFIG_MMU
extern unsigned long __must_check


2019-11-08 19:01:18

by Greg Kroah-Hartman

Subject: [PATCH 4.4 67/75] ARM: make lookup_processor_type() non-__init

From: Russell King <[email protected]>

Commit 899a42f836678a595f7d2bc36a5a0c2b03d08cbc upstream.

Move lookup_processor_type() out of the __init section so it is callable
from (e.g.) the secondary startup code during hotplug.

Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/kernel/head-common.S | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/arm/kernel/head-common.S
+++ b/arch/arm/kernel/head-common.S
@@ -122,6 +122,9 @@ __mmap_switched_data:
.long init_thread_union + THREAD_START_SP @ sp
.size __mmap_switched_data, . - __mmap_switched_data

+ __FINIT
+ .text
+
/*
* This provides a C-API version of __lookup_processor_type
*/
@@ -133,9 +136,6 @@ ENTRY(lookup_processor_type)
ldmfd sp!, {r4 - r6, r9, pc}
ENDPROC(lookup_processor_type)

- __FINIT
- .text
-
/*
* Read processor ID register (CP#15, CR0), and look up in the linker-built
* supported processor list. Note that we can't use the absolute addresses


2019-11-08 19:01:27

by Greg Kroah-Hartman

Subject: [PATCH 4.4 74/75] alarmtimer: Change remaining ENOTSUPP to EOPNOTSUPP

From: Petr Vorel <[email protected]>

Fix backport of commit f18ddc13af981ce3c7b7f26925f099e7c6929aba upstream.

Update backport to change ENOTSUPP to EOPNOTSUPP in
alarm_timer_{del,set}(), which were removed in
f2c45807d3992fe0f173f34af9c347d907c31686 in v4.13-rc1.

Fixes: c22df8ea7c5831d6fdca2f6f136f0d32d7064ff9 ("alarmtimer: Change remaining ENOTSUPP to EOPNOTSUPP")

Signed-off-by: Petr Vorel <[email protected]>
Acked-by: Thadeu Lima de Souza Cascardo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/time/alarmtimer.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/time/alarmtimer.c
+++ b/kernel/time/alarmtimer.c
@@ -573,7 +573,7 @@ static void alarm_timer_get(struct k_iti
static int alarm_timer_del(struct k_itimer *timr)
{
if (!rtcdev)
- return -ENOTSUPP;
+ return -EOPNOTSUPP;

if (alarm_try_to_cancel(&timr->it.alarm.alarmtimer) < 0)
return TIMER_RETRY;
@@ -597,7 +597,7 @@ static int alarm_timer_set(struct k_itim
ktime_t exp;

if (!rtcdev)
- return -ENOTSUPP;
+ return -EOPNOTSUPP;

if (flags & ~TIMER_ABSTIME)
return -EINVAL;


2019-11-08 19:01:29

by Greg Kroah-Hartman

Subject: [PATCH 4.4 41/75] ARM: bugs: prepare processor bug infrastructure

From: Russell King <[email protected]>

Commit a5b9177f69329314721aa7022b7e69dab23fa1f0 upstream.

Prepare the processor bug infrastructure so that it can be expanded to
check for per-processor bugs.

Signed-off-by: Russell King <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/bugs.h | 4 ++--
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/bugs.c | 9 +++++++++
3 files changed, 12 insertions(+), 2 deletions(-)

--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -10,10 +10,10 @@
#ifndef __ASM_BUGS_H
#define __ASM_BUGS_H

-#ifdef CONFIG_MMU
extern void check_writebuffer_bugs(void);

-#define check_bugs() check_writebuffer_bugs()
+#ifdef CONFIG_MMU
+extern void check_bugs(void);
#else
#define check_bugs() do { } while (0)
#endif
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -30,6 +30,7 @@ else
obj-y += entry-armv.o
endif

+obj-$(CONFIG_MMU) += bugs.o
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
obj-$(CONFIG_ISA_DMA_API) += dma.o
obj-$(CONFIG_FIQ) += fiq.o fiqasm.o
--- /dev/null
+++ b/arch/arm/kernel/bugs.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/init.h>
+#include <asm/bugs.h>
+#include <asm/proc-fns.h>
+
+void __init check_bugs(void)
+{
+ check_writebuffer_bugs();
+}


2019-11-08 19:01:32

by Greg Kroah-Hartman

Subject: [PATCH 4.4 43/75] ARM: bugs: add support for per-processor bug checking

From: Russell King <[email protected]>

Commit 9d3a04925deeabb97c8e26d940b501a2873e8af3 upstream.

Add support for per-processor bug checking - each processor function
descriptor gains a function pointer for this check, which must not be
an __init function. If non-NULL, this will be called whenever a CPU
enters the kernel via which ever path (boot CPU, secondary CPU startup,
CPU resuming, etc.)

This allows processor specific bug checks to validate that workaround
bits are properly enabled by firmware via all entry paths to the kernel.

Signed-off-by: Russell King <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/proc-fns.h | 4 ++++
arch/arm/kernel/bugs.c | 4 ++++
arch/arm/mm/proc-macros.S | 3 ++-
3 files changed, 10 insertions(+), 1 deletion(-)

--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -37,6 +37,10 @@ extern struct processor {
*/
void (*_proc_init)(void);
/*
+ * Check for processor bugs
+ */
+ void (*check_bugs)(void);
+ /*
* Disable any processor specifics
*/
void (*_proc_fin)(void);
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -5,6 +5,10 @@

void check_other_bugs(void)
{
+#ifdef MULTI_CPU
+ if (processor.check_bugs)
+ processor.check_bugs();
+#endif
}

void __init check_bugs(void)
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -258,13 +258,14 @@
mcr p15, 0, ip, c7, c10, 4 @ data write barrier
.endm

-.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0
+.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0
.type \name\()_processor_functions, #object
.align 2
ENTRY(\name\()_processor_functions)
.word \dabort
.word \pabort
.word cpu_\name\()_proc_init
+ .word \bugs
.word cpu_\name\()_proc_fin
.word cpu_\name\()_reset
.word cpu_\name\()_do_idle


2019-11-08 19:01:32

by Greg Kroah-Hartman

Subject: [PATCH 4.4 45/75] ARM: spectre-v2: harden branch predictor on context switches

From: Russell King <[email protected]>

Commit 06c23f5ffe7ad45b908d0fff604dae08a7e334b9 upstream.

Required manual merge of arch/arm/mm/proc-v7.S.

Harden the branch predictor against Spectre v2 attacks on context
switches for ARMv7 and later CPUs. We do this by:

Cortex A9, A12, A17, A73, A75: invalidating the BTB.
Cortex A15, Brahma B15: invalidating the instruction cache.

Cortex A57 and Cortex A72 are not addressed in this patch.

Cortex R7 and Cortex R8 are also not addressed as we do not enforce
memory protection on these cores.
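
In C terms, each hardened switch_mm variant below performs one extra
coprocessor write before the ordinary context switch; a hedged
illustration of the BPIALL case (the helper name is invented, the
real code is the assembly added to proc-v7.S):

static inline void harden_switch_mm_bpiall(void)
{
	/* BPIALL: invalidate the entire branch predictor array. */
	asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" (0) : "memory");

	/* ... then continue with the normal cpu_v7_switch_mm work. */
}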

Signed-off-by: Russell King <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/mm/Kconfig | 19 ++++++
arch/arm/mm/proc-v7-2level.S | 6 --
arch/arm/mm/proc-v7.S | 125 +++++++++++++++++++++++++++++++++----------
3 files changed, 115 insertions(+), 35 deletions(-)

--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -797,6 +797,25 @@ config CPU_BPREDICT_DISABLE
config CPU_SPECTRE
bool

+config HARDEN_BRANCH_PREDICTOR
+ bool "Harden the branch predictor against aliasing attacks" if EXPERT
+ depends on CPU_SPECTRE
+ default y
+ help
+ Speculation attacks against some high-performance processors rely
+ on being able to manipulate the branch predictor for a victim
+ context by executing aliasing branches in the attacker context.
+ Such attacks can be partially mitigated against by clearing
+ internal branch predictor state and limiting the prediction
+ logic in some situations.
+
+ This config option will take CPU-specific actions to harden
+ the branch predictor against aliasing attacks and may rely on
+ specific instruction sequences or control bits being set by
+ the system firmware.
+
+ If unsure, say Y.
+
config TLS_REG_EMUL
bool
select NEED_KUSER_HELPERS
--- a/arch/arm/mm/proc-v7-2level.S
+++ b/arch/arm/mm/proc-v7-2level.S
@@ -41,11 +41,6 @@
* even on Cortex-A8 revisions not affected by 430973.
* If IBE is not set, the flush BTAC/BTB won't do anything.
*/
-ENTRY(cpu_ca8_switch_mm)
-#ifdef CONFIG_MMU
- mov r2, #0
- mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB
-#endif
ENTRY(cpu_v7_switch_mm)
#ifdef CONFIG_MMU
mmid r1, r1 @ get mm->context.id
@@ -66,7 +61,6 @@ ENTRY(cpu_v7_switch_mm)
#endif
bx lr
ENDPROC(cpu_v7_switch_mm)
-ENDPROC(cpu_ca8_switch_mm)

/*
* cpu_v7_set_pte_ext(ptep, pte)
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -87,6 +87,17 @@ ENTRY(cpu_v7_dcache_clean_area)
ret lr
ENDPROC(cpu_v7_dcache_clean_area)

+ENTRY(cpu_v7_iciallu_switch_mm)
+ mov r3, #0
+ mcr p15, 0, r3, c7, c5, 0 @ ICIALLU
+ b cpu_v7_switch_mm
+ENDPROC(cpu_v7_iciallu_switch_mm)
+ENTRY(cpu_v7_bpiall_switch_mm)
+ mov r3, #0
+ mcr p15, 0, r3, c7, c5, 6 @ flush BTAC/BTB
+ b cpu_v7_switch_mm
+ENDPROC(cpu_v7_bpiall_switch_mm)
+
string cpu_v7_name, "ARMv7 Processor"
.align

@@ -152,31 +163,6 @@ ENTRY(cpu_v7_do_resume)
ENDPROC(cpu_v7_do_resume)
#endif

-/*
- * Cortex-A8
- */
- globl_equ cpu_ca8_proc_init, cpu_v7_proc_init
- globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin
- globl_equ cpu_ca8_reset, cpu_v7_reset
- globl_equ cpu_ca8_do_idle, cpu_v7_do_idle
- globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
- globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext
- globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size
-#ifdef CONFIG_ARM_CPU_SUSPEND
- globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend
- globl_equ cpu_ca8_do_resume, cpu_v7_do_resume
-#endif
-
-/*
- * Cortex-A9 processor functions
- */
- globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init
- globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin
- globl_equ cpu_ca9mp_reset, cpu_v7_reset
- globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle
- globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
- globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm
- globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext
.globl cpu_ca9mp_suspend_size
.equ cpu_ca9mp_suspend_size, cpu_v7_suspend_size + 4 * 2
#ifdef CONFIG_ARM_CPU_SUSPEND
@@ -490,10 +476,75 @@ __v7_setup_stack:

@ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ @ generic v7 bpiall on context switch
+ globl_equ cpu_v7_bpiall_proc_init, cpu_v7_proc_init
+ globl_equ cpu_v7_bpiall_proc_fin, cpu_v7_proc_fin
+ globl_equ cpu_v7_bpiall_reset, cpu_v7_reset
+ globl_equ cpu_v7_bpiall_do_idle, cpu_v7_do_idle
+ globl_equ cpu_v7_bpiall_dcache_clean_area, cpu_v7_dcache_clean_area
+ globl_equ cpu_v7_bpiall_set_pte_ext, cpu_v7_set_pte_ext
+ globl_equ cpu_v7_bpiall_suspend_size, cpu_v7_suspend_size
+#ifdef CONFIG_ARM_CPU_SUSPEND
+ globl_equ cpu_v7_bpiall_do_suspend, cpu_v7_do_suspend
+ globl_equ cpu_v7_bpiall_do_resume, cpu_v7_do_resume
+#endif
+ define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions
+#else
+#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_processor_functions
+#endif
+
#ifndef CONFIG_ARM_LPAE
+ @ Cortex-A8 - always needs bpiall switch_mm implementation
+ globl_equ cpu_ca8_proc_init, cpu_v7_proc_init
+ globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin
+ globl_equ cpu_ca8_reset, cpu_v7_reset
+ globl_equ cpu_ca8_do_idle, cpu_v7_do_idle
+ globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
+ globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext
+ globl_equ cpu_ca8_switch_mm, cpu_v7_bpiall_switch_mm
+ globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size
+#ifdef CONFIG_ARM_CPU_SUSPEND
+ globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend
+ globl_equ cpu_ca8_do_resume, cpu_v7_do_resume
+#endif
define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
+
+ @ Cortex-A9 - needs more registers preserved across suspend/resume
+ @ and bpiall switch_mm for hardening
+ globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init
+ globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin
+ globl_equ cpu_ca9mp_reset, cpu_v7_reset
+ globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle
+ globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ globl_equ cpu_ca9mp_switch_mm, cpu_v7_bpiall_switch_mm
+#else
+ globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm
+#endif
+ globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext
define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#endif
+
+ @ Cortex-A15 - needs iciallu switch_mm for hardening
+ globl_equ cpu_ca15_proc_init, cpu_v7_proc_init
+ globl_equ cpu_ca15_proc_fin, cpu_v7_proc_fin
+ globl_equ cpu_ca15_reset, cpu_v7_reset
+ globl_equ cpu_ca15_do_idle, cpu_v7_do_idle
+ globl_equ cpu_ca15_dcache_clean_area, cpu_v7_dcache_clean_area
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ globl_equ cpu_ca15_switch_mm, cpu_v7_iciallu_switch_mm
+#else
+ globl_equ cpu_ca15_switch_mm, cpu_v7_switch_mm
+#endif
+ globl_equ cpu_ca15_set_pte_ext, cpu_v7_set_pte_ext
+ globl_equ cpu_ca15_suspend_size, cpu_v7_suspend_size
+ globl_equ cpu_ca15_do_suspend, cpu_v7_do_suspend
+ globl_equ cpu_ca15_do_resume, cpu_v7_do_resume
+ define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#ifdef CONFIG_CPU_PJ4B
define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#endif
@@ -600,7 +651,7 @@ __v7_ca7mp_proc_info:
__v7_ca12mp_proc_info:
.long 0x410fc0d0
.long 0xff0ffff0
- __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup
+ __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
.size __v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info

/*
@@ -610,7 +661,7 @@ __v7_ca12mp_proc_info:
__v7_ca15mp_proc_info:
.long 0x410fc0f0
.long 0xff0ffff0
- __v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup
+ __v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup, proc_fns = ca15_processor_functions
.size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info

/*
@@ -620,7 +671,7 @@ __v7_ca15mp_proc_info:
__v7_b15mp_proc_info:
.long 0x420f00f0
.long 0xff0ffff0
- __v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup
+ __v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup, proc_fns = ca15_processor_functions
.size __v7_b15mp_proc_info, . - __v7_b15mp_proc_info

/*
@@ -630,9 +681,25 @@ __v7_b15mp_proc_info:
__v7_ca17mp_proc_info:
.long 0x410fc0e0
.long 0xff0ffff0
- __v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup
+ __v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
.size __v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info

+ /* ARM Ltd. Cortex A73 processor */
+ .type __v7_ca73_proc_info, #object
+__v7_ca73_proc_info:
+ .long 0x410fd090
+ .long 0xff0ffff0
+ __v7_proc __v7_ca73_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
+ .size __v7_ca73_proc_info, . - __v7_ca73_proc_info
+
+ /* ARM Ltd. Cortex A75 processor */
+ .type __v7_ca75_proc_info, #object
+__v7_ca75_proc_info:
+ .long 0x410fd0a0
+ .long 0xff0ffff0
+ __v7_proc __v7_ca75_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS
+ .size __v7_ca75_proc_info, . - __v7_ca75_proc_info
+
/*
* Qualcomm Inc. Krait processors.
*/


2019-11-08 19:01:33

by Greg Kroah-Hartman

Subject: [PATCH 4.4 44/75] ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre

From: Russell King <[email protected]>

Commit c58d237d0852a57fde9bc2c310972e8f4e3d155d upstream.

Add a Kconfig symbol for CPUs which are vulnerable to the Spectre
attacks.

Signed-off-by: Russell King <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/mm/Kconfig | 4 ++++
1 file changed, 4 insertions(+)

--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -396,6 +396,7 @@ config CPU_V7
select CPU_CP15_MPU if !MMU
select CPU_HAS_ASID if MMU
select CPU_PABRT_V7
+ select CPU_SPECTRE if MMU
select CPU_TLB_V7 if MMU

# ARMv7M
@@ -793,6 +794,9 @@ config CPU_BPREDICT_DISABLE
help
Say Y here to disable branch prediction. If unsure, say N.

+config CPU_SPECTRE
+ bool
+
config TLS_REG_EMUL
bool
select NEED_KUSER_HELPERS


2019-11-08 19:02:30

by Greg Kroah-Hartman

Subject: [PATCH 4.4 66/75] ARM: 8810/1: vfp: Fix wrong assignement to ufp_exc

From: Julien Thierry <[email protected]>

Commit 5df7a99bdd0de4a0480320264c44c04543c29d5a upstream.

In vfp_preserve_user_clear_hwstate, ufp_exc->fpinst2 gets assigned to
itself. It should actually be hwstate->fpinst2 that gets assigned to the
ufp_exc field.

Fixes commit 3aa2df6ec2ca6bc143a65351cca4266d03a8bc41 ("ARM: 8791/1:
vfp: use __copy_to_user() when saving VFP state").

Reported-by: David Binderman <[email protected]>
Signed-off-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/vfp/vfpmodule.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -583,7 +583,7 @@ int vfp_preserve_user_clear_hwstate(stru
*/
ufp_exc->fpexc = hwstate->fpexc;
ufp_exc->fpinst = hwstate->fpinst;
- ufp_exc->fpinst2 = ufp_exc->fpinst2;
+ ufp_exc->fpinst2 = hwstate->fpinst2;

/* Ensure that VFP is disabled. */
vfp_flush_hwstate(thread);


2019-11-08 19:02:30

by Greg Kroah-Hartman

Subject: [PATCH 4.4 68/75] ARM: split out processor lookup

From: Russell King <[email protected]>

Commit 65987a8553061515b5851b472081aedb9837a391 upstream.

Split out the lookup of the processor type and associated error handling
from the rest of setup_processor() - we will need to use this in the
secondary CPU bringup path for big.Little Spectre variant 2 mitigation.

Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/cputype.h | 1 +
arch/arm/kernel/setup.c | 31 +++++++++++++++++++------------
2 files changed, 20 insertions(+), 12 deletions(-)

--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -93,6 +93,7 @@
#define ARM_CPU_PART_SCORPION 0x510002d0

extern unsigned int processor_id;
+struct proc_info_list *lookup_processor(u32 midr);

#ifdef CONFIG_CPU_CP15
#define read_cpuid(reg) \
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -599,22 +599,29 @@ static void __init smp_build_mpidr_hash(
}
#endif

-static void __init setup_processor(void)
+/*
+ * locate processor in the list of supported processor types. The linker
+ * builds this table for us from the entries in arch/arm/mm/proc-*.S
+ */
+struct proc_info_list *lookup_processor(u32 midr)
{
- struct proc_info_list *list;
+ struct proc_info_list *list = lookup_processor_type(midr);

- /*
- * locate processor in the list of supported processor
- * types. The linker builds this table for us from the
- * entries in arch/arm/mm/proc-*.S
- */
- list = lookup_processor_type(read_cpuid_id());
if (!list) {
- pr_err("CPU configuration botched (ID %08x), unable to continue.\n",
- read_cpuid_id());
- while (1);
+ pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n",
+ smp_processor_id(), midr);
+ while (1)
+ /* can't use cpu_relax() here as it may require MMU setup */;
}

+ return list;
+}
+
+static void __init setup_processor(void)
+{
+ unsigned int midr = read_cpuid_id();
+ struct proc_info_list *list = lookup_processor(midr);
+
cpu_name = list->cpu_name;
__cpu_architecture = __get_cpu_architecture();

@@ -632,7 +639,7 @@ static void __init setup_processor(void)
#endif

pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n",
- cpu_name, read_cpuid_id(), read_cpuid_id() & 15,
+ list->cpu_name, midr, midr & 15,
proc_arch[cpu_architecture()], get_cr());

snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c",


2019-11-08 19:03:50

by Greg Kroah-Hartman

Subject: [PATCH 4.4 36/75] arm/arm64: smccc: Implement SMCCC v1.1 inline primitive

From: Marc Zyngier <[email protected]>

commit f2d3b2e8759a5833df6f022e42df2d581e6d843c upstream.

One of the major improvements of SMCCC v1.1 is that it only clobbers
the first 4 registers, on both 32-bit and 64-bit. This means that it
becomes very easy to provide an inline version of the SMC call
primitive, and avoid performing a function call to stash the
registers that would otherwise be clobbered by SMCCC v1.0.
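
For example, probing the firmware for a workaround becomes a single
inline call; this hedged sketch mirrors how later patches in this
series use the primitive (the ARM_SMCCC_ARCH_* function IDs are
assumed to come from the same header):

#include <linux/arm-smccc.h>

static bool arch_workaround_1_supported(void)
{
	struct arm_smccc_res res;

	/* One SMC, no out-of-line register-stashing call needed. */
	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
			  ARM_SMCCC_ARCH_WORKAROUND_1, &res);

	/* A negative a0 means the function is not implemented. */
	return (int)res.a0 >= 0;
}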

Reviewed-by: Robin Murphy <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Mark Rutland <[email protected]> [v4.9 backport]
Tested-by: Greg Hackmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/arm-smccc.h | 141 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 141 insertions(+)

--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -123,5 +123,146 @@ asmlinkage void arm_smccc_hvc(unsigned l
unsigned long a5, unsigned long a6, unsigned long a7,
struct arm_smccc_res *res);

+/* SMCCC v1.1 implementation madness follows */
+#ifdef CONFIG_ARM64
+
+#define SMCCC_SMC_INST "smc #0"
+#define SMCCC_HVC_INST "hvc #0"
+
+#elif defined(CONFIG_ARM)
+#include <asm/opcodes-sec.h>
+#include <asm/opcodes-virt.h>
+
+#define SMCCC_SMC_INST __SMC(0)
+#define SMCCC_HVC_INST __HVC(0)
+
+#endif
+
+#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
+
+#define __count_args(...) \
+ ___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0)
+
+#define __constraint_write_0 \
+ "+r" (r0), "=&r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_1 \
+ "+r" (r0), "+r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_2 \
+ "+r" (r0), "+r" (r1), "+r" (r2), "=&r" (r3)
+#define __constraint_write_3 \
+ "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
+#define __constraint_write_4 __constraint_write_3
+#define __constraint_write_5 __constraint_write_4
+#define __constraint_write_6 __constraint_write_5
+#define __constraint_write_7 __constraint_write_6
+
+#define __constraint_read_0
+#define __constraint_read_1
+#define __constraint_read_2
+#define __constraint_read_3
+#define __constraint_read_4 "r" (r4)
+#define __constraint_read_5 __constraint_read_4, "r" (r5)
+#define __constraint_read_6 __constraint_read_5, "r" (r6)
+#define __constraint_read_7 __constraint_read_6, "r" (r7)
+
+#define __declare_arg_0(a0, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register unsigned long r1 asm("r1"); \
+ register unsigned long r2 asm("r2"); \
+ register unsigned long r3 asm("r3")
+
+#define __declare_arg_1(a0, a1, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register typeof(a1) r1 asm("r1") = a1; \
+ register unsigned long r2 asm("r2"); \
+ register unsigned long r3 asm("r3")
+
+#define __declare_arg_2(a0, a1, a2, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register typeof(a1) r1 asm("r1") = a1; \
+ register typeof(a2) r2 asm("r2") = a2; \
+ register unsigned long r3 asm("r3")
+
+#define __declare_arg_3(a0, a1, a2, a3, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register typeof(a1) r1 asm("r1") = a1; \
+ register typeof(a2) r2 asm("r2") = a2; \
+ register typeof(a3) r3 asm("r3") = a3
+
+#define __declare_arg_4(a0, a1, a2, a3, a4, res) \
+ __declare_arg_3(a0, a1, a2, a3, res); \
+ register typeof(a4) r4 asm("r4") = a4
+
+#define __declare_arg_5(a0, a1, a2, a3, a4, a5, res) \
+ __declare_arg_4(a0, a1, a2, a3, a4, res); \
+ register typeof(a5) r5 asm("r5") = a5
+
+#define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res) \
+ __declare_arg_5(a0, a1, a2, a3, a4, a5, res); \
+ register typeof(a6) r6 asm("r6") = a6
+
+#define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res) \
+ __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res); \
+ register typeof(a7) r7 asm("r7") = a7
+
+#define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
+#define __declare_args(count, ...) ___declare_args(count, __VA_ARGS__)
+
+#define ___constraints(count) \
+ : __constraint_write_ ## count \
+ : __constraint_read_ ## count \
+ : "memory"
+#define __constraints(count) ___constraints(count)
+
+/*
+ * We have an output list that is not necessarily used, and GCC feels
+ * entitled to optimise the whole sequence away. "volatile" is what
+ * makes it stick.
+ */
+#define __arm_smccc_1_1(inst, ...) \
+ do { \
+ __declare_args(__count_args(__VA_ARGS__), __VA_ARGS__); \
+ asm volatile(inst "\n" \
+ __constraints(__count_args(__VA_ARGS__))); \
+ if (___res) \
+ *___res = (typeof(*___res)){r0, r1, r2, r3}; \
+ } while (0)
+
+/*
+ * arm_smccc_1_1_smc() - make an SMCCC v1.1 compliant SMC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make SMC calls following SMC Calling Convention v1.1.
+ * The contents of the supplied parameters are copied to registers 0 to 7
+ * prior to the SMC instruction. If @res is not NULL, it is updated with
+ * the contents of registers 0 to 3 on return from the SMC instruction.
+ */
+#define arm_smccc_1_1_smc(...) __arm_smccc_1_1(SMCCC_SMC_INST, __VA_ARGS__)
+
+/*
+ * arm_smccc_1_1_hvc() - make an SMCCC v1.1 compliant HVC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make HVC calls following SMC Calling Convention v1.1.
+ * The contents of the supplied parameters are copied to registers 0 to 7
+ * prior to the HVC instruction. If @res is not NULL, it is updated with
+ * the contents of registers 0 to 3 on return from the HVC instruction.
+ */
+#define arm_smccc_1_1_hvc(...) __arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
+
#endif /*__ASSEMBLY__*/
#endif /*__LINUX_ARM_SMCCC_H*/
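As a usage sketch (not part of the patch; the feature-query function
ID and the workaround ID are assumed from other patches in this
series), discovering and invoking the Spectre-v2 firmware mitigation
looks like this, mirroring what cpu_v7_spectre_init() does later in
this review cycle:

  struct arm_smccc_res res;

  /* Is ARM_SMCCC_ARCH_WORKAROUND_1 implemented by the firmware? */
  arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                    ARM_SMCCC_ARCH_WORKAROUND_1, &res);
  if ((int)res.a0 == 0)
          /* Supported: invoke it; no result structure is needed. */
          arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);

Because v1.1 guarantees that only r0-r3 are clobbered, the compiler
can emit this inline with no spill/reload of the remaining registers,
which is the whole point of the patch.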


2019-11-08 19:23:41

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 42/75] ARM: bugs: hook processor bug checking into SMP and suspend paths

From: Russell King <[email protected]>

Commit 26602161b5ba795928a5a719fe1d5d9f2ab5c3ef upstream.

Check for CPU bugs when secondary processors are being brought online,
and also when CPUs are resuming from a low power mode. This gives an
opportunity to check that processor specific bug workarounds are
correctly enabled for all paths that a CPU re-enters the kernel.

Signed-off-by: Russell King <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/bugs.h | 2 ++
arch/arm/kernel/bugs.c | 5 +++++
arch/arm/kernel/smp.c | 4 ++++
arch/arm/kernel/suspend.c | 2 ++
4 files changed, 13 insertions(+)

--- a/arch/arm/include/asm/bugs.h
+++ b/arch/arm/include/asm/bugs.h
@@ -14,8 +14,10 @@ extern void check_writebuffer_bugs(void)

#ifdef CONFIG_MMU
extern void check_bugs(void);
+extern void check_other_bugs(void);
#else
#define check_bugs() do { } while (0)
+#define check_other_bugs() do { } while (0)
#endif

#endif
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -3,7 +3,12 @@
#include <asm/bugs.h>
#include <asm/proc-fns.h>

+void check_other_bugs(void)
+{
+}
+
void __init check_bugs(void)
{
check_writebuffer_bugs();
+ check_other_bugs();
}
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -29,6 +29,7 @@
#include <linux/irq_work.h>

#include <linux/atomic.h>
+#include <asm/bugs.h>
#include <asm/smp.h>
#include <asm/cacheflush.h>
#include <asm/cpu.h>
@@ -396,6 +397,9 @@ asmlinkage void secondary_start_kernel(v
* before we continue - which happens after __cpu_up returns.
*/
set_cpu_online(cpu, true);
+
+ check_other_bugs();
+
complete(&cpu_running);

local_irq_enable();
--- a/arch/arm/kernel/suspend.c
+++ b/arch/arm/kernel/suspend.c
@@ -1,6 +1,7 @@
#include <linux/init.h>
#include <linux/slab.h>

+#include <asm/bugs.h>
#include <asm/cacheflush.h>
#include <asm/idmap.h>
#include <asm/pgalloc.h>
@@ -34,6 +35,7 @@ int cpu_suspend(unsigned long arg, int (
cpu_switch_mm(mm->pgd, mm);
local_flush_bp_all();
local_flush_tlb_all();
+ check_other_bugs();
}

return ret;


2019-11-08 19:23:53

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 40/75] ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs

From: Russell King <[email protected]>

Commit f5683e76f35b4ec5891031b6a29036efe0a1ff84 upstream.

Add CPU part numbers for Cortex A53, A57, A72, A73, A75 and the
Broadcom Brahma B15 CPU.

Signed-off-by: Russell King <[email protected]>
Acked-by: Florian Fainelli <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/cputype.h | 8 ++++++++
1 file changed, 8 insertions(+)

--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -74,8 +74,16 @@
#define ARM_CPU_PART_CORTEX_A12 0x4100c0d0
#define ARM_CPU_PART_CORTEX_A17 0x4100c0e0
#define ARM_CPU_PART_CORTEX_A15 0x4100c0f0
+#define ARM_CPU_PART_CORTEX_A53 0x4100d030
+#define ARM_CPU_PART_CORTEX_A57 0x4100d070
+#define ARM_CPU_PART_CORTEX_A72 0x4100d080
+#define ARM_CPU_PART_CORTEX_A73 0x4100d090
+#define ARM_CPU_PART_CORTEX_A75 0x4100d0a0
#define ARM_CPU_PART_MASK 0xff00fff0

+/* Broadcom cores */
+#define ARM_CPU_PART_BRAHMA_B15 0x420000f0
+
#define ARM_CPU_XSCALE_ARCH_MASK 0xe000
#define ARM_CPU_XSCALE_ARCH_V1 0x2000
#define ARM_CPU_XSCALE_ARCH_V2 0x4000
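For reference, a sketch of how these defines are consumed (the MIDR
value is hypothetical but representative of a Cortex-A72 r0p3): the
ID register is masked down to its implementer and part-number fields
before comparison, which is what the Spectre-v2 code later in this
series does via a switch.

  /*
   * 0x410fd083 & 0xff00fff0 == 0x4100d080 == ARM_CPU_PART_CORTEX_A72
   */
  if ((read_cpuid_id() & ARM_CPU_PART_MASK) == ARM_CPU_PART_CORTEX_A72)
          /* select the A72-appropriate mitigation */;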


2019-11-08 19:24:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 71/75] ARM: spectre-v2: per-CPU vtables to work around big.Little systems

From: Russell King <[email protected]>

Commit 383fb3ee8024d596f488d2dbaf45e572897acbdb upstream.

In big.Little systems, some CPUs require the Spectre workarounds in
paths such as the context switch, but other CPUs do not. In order
to handle these differences, we need per-CPU vtables.

We are unable to use the kernel's per-CPU variables to support this
as per-CPU is not initialised at times when we need access to the
vtables, so we have to use an array indexed by logical CPU number.

We use an array-of-pointers to avoid having function pointers in
the kernel's read/write .data section.

Note: Added include of linux/slab.h in arch/arm/kernel/smp.c.

Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/proc-fns.h | 23 +++++++++++++++++++++++
arch/arm/kernel/setup.c | 5 +++++
arch/arm/kernel/smp.c | 32 ++++++++++++++++++++++++++++++++
arch/arm/mm/proc-v7-bugs.c | 17 ++---------------
4 files changed, 62 insertions(+), 15 deletions(-)

--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -104,12 +104,35 @@ extern void cpu_do_resume(void *);
#else

extern struct processor processor;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+#include <linux/smp.h>
+/*
+ * This can't be a per-cpu variable because we need to access it before
+ * per-cpu has been initialised. We have a couple of functions that are
+ * called in a pre-emptible context, and so can't use smp_processor_id()
+ * there, hence PROC_TABLE(). We insist in init_proc_vtable() that the
+ * function pointers for these are identical across all CPUs.
+ */
+extern struct processor *cpu_vtable[];
+#define PROC_VTABLE(f) cpu_vtable[smp_processor_id()]->f
+#define PROC_TABLE(f) cpu_vtable[0]->f
+static inline void init_proc_vtable(const struct processor *p)
+{
+ unsigned int cpu = smp_processor_id();
+ *cpu_vtable[cpu] = *p;
+ WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area !=
+ cpu_vtable[0]->dcache_clean_area);
+ WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext !=
+ cpu_vtable[0]->set_pte_ext);
+}
+#else
#define PROC_VTABLE(f) processor.f
#define PROC_TABLE(f) processor.f
static inline void init_proc_vtable(const struct processor *p)
{
processor = *p;
}
+#endif

#define cpu_proc_init PROC_VTABLE(_proc_init)
#define cpu_check_bugs PROC_VTABLE(check_bugs)
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -113,6 +113,11 @@ EXPORT_SYMBOL(elf_hwcap2);

#ifdef MULTI_CPU
struct processor processor __read_mostly;
+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+struct processor *cpu_vtable[NR_CPUS] = {
+ [0] = &processor,
+};
+#endif
#endif
#ifdef MULTI_TLB
struct cpu_tlb_fns cpu_tlb __read_mostly;
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -27,6 +27,7 @@
#include <linux/completion.h>
#include <linux/cpufreq.h>
#include <linux/irq_work.h>
+#include <linux/slab.h>

#include <linux/atomic.h>
#include <asm/bugs.h>
@@ -40,6 +41,7 @@
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
+#include <asm/procinfo.h>
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>
@@ -96,6 +98,30 @@ static unsigned long get_arch_pgd(pgd_t
#endif
}

+#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR)
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+ if (!cpu_vtable[cpu])
+ cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL);
+
+ return cpu_vtable[cpu] ? 0 : -ENOMEM;
+}
+
+static void secondary_biglittle_init(void)
+{
+ init_proc_vtable(lookup_processor(read_cpuid_id())->proc);
+}
+#else
+static int secondary_biglittle_prepare(unsigned int cpu)
+{
+ return 0;
+}
+
+static void secondary_biglittle_init(void)
+{
+}
+#endif
+
int __cpu_up(unsigned int cpu, struct task_struct *idle)
{
int ret;
@@ -103,6 +129,10 @@ int __cpu_up(unsigned int cpu, struct ta
if (!smp_ops.smp_boot_secondary)
return -ENOSYS;

+ ret = secondary_biglittle_prepare(cpu);
+ if (ret)
+ return ret;
+
/*
* We need to tell the secondary core where to find
* its stack and the page tables.
@@ -354,6 +384,8 @@ asmlinkage void secondary_start_kernel(v
struct mm_struct *mm = &init_mm;
unsigned int cpu;

+ secondary_biglittle_init();
+
/*
* The identity mapping is uncached (strongly ordered), so
* switch away from it before attempting any exclusive accesses.
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -52,8 +52,6 @@ static void cpu_v7_spectre_init(void)
case ARM_CPU_PART_CORTEX_A17:
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
- if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_bpiall;
spectre_v2_method = "BPIALL";
@@ -61,8 +59,6 @@ static void cpu_v7_spectre_init(void)

case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
- if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_iciallu;
spectre_v2_method = "ICIALLU";
@@ -88,11 +84,9 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
- if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
- processor.switch_mm = cpu_v7_hvc_switch_mm;
+ cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
break;

@@ -101,11 +95,9 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
- if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
- goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
- processor.switch_mm = cpu_v7_smc_switch_mm;
+ cpu_do_switch_mm = cpu_v7_smc_switch_mm;
spectre_v2_method = "firmware";
break;

@@ -119,11 +111,6 @@ static void cpu_v7_spectre_init(void)
if (spectre_v2_method)
pr_info("CPU%u: Spectre v2: using %s workaround\n",
smp_processor_id(), spectre_v2_method);
- return;
-
-bl_error:
- pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
- cpu);
}
#else
static void cpu_v7_spectre_init(void)
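Spelling out what the new indirection does (an expansion sketch using
the cpu_do_switch_mm -> PROC_VTABLE(switch_mm) mapping from the
companion "add PROC_VTABLE and PROC_TABLE macros" patch): on a kernel
with CONFIG_BIG_LITTLE and CONFIG_HARDEN_BRANCH_PREDICTOR,

  cpu_do_switch_mm(pgd_phys, mm);

now expands to

  cpu_vtable[smp_processor_id()]->switch_mm(pgd_phys, mm);

so each logical CPU dispatches through its own vtable, and a core
that needs a hardened switch_mm gets one without dragging the
unaffected cores in the same system along.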


2019-11-08 19:24:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 70/75] ARM: add PROC_VTABLE and PROC_TABLE macros

From: Russell King <[email protected]>

Commit e209950fdd065d2cc46e6338e47e52841b830cba upstream.

Allow the way we access members of the processor vtable to be changed
at compile time. We will need to move to per-CPU vtables to fix the
Spectre variant 2 issues on big.Little systems.

However, we have a couple of calls that do not need the vtable
treatment, and indeed cause a kernel warning due to the (later) use
of smp_processor_id(), so also introduce the PROC_TABLE macro for
these which always use CPU 0's function pointers.

Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/proc-fns.h | 39 ++++++++++++++++++++++++++-------------
arch/arm/kernel/setup.c | 4 +---
2 files changed, 27 insertions(+), 16 deletions(-)

--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -23,7 +23,7 @@ struct mm_struct;
/*
* Don't change this structure - ASM code relies on it.
*/
-extern struct processor {
+struct processor {
/* MISC
* get data abort address/flags
*/
@@ -79,9 +79,13 @@ extern struct processor {
unsigned int suspend_size;
void (*do_suspend)(void *);
void (*do_resume)(void *);
-} processor;
+};

#ifndef MULTI_CPU
+static inline void init_proc_vtable(const struct processor *p)
+{
+}
+
extern void cpu_proc_init(void);
extern void cpu_proc_fin(void);
extern int cpu_do_idle(void);
@@ -98,18 +102,27 @@ extern void cpu_reset(unsigned long addr
extern void cpu_do_suspend(void *);
extern void cpu_do_resume(void *);
#else
-#define cpu_proc_init processor._proc_init
-#define cpu_check_bugs processor.check_bugs
-#define cpu_proc_fin processor._proc_fin
-#define cpu_reset processor.reset
-#define cpu_do_idle processor._do_idle
-#define cpu_dcache_clean_area processor.dcache_clean_area
-#define cpu_set_pte_ext processor.set_pte_ext
-#define cpu_do_switch_mm processor.switch_mm

-/* These three are private to arch/arm/kernel/suspend.c */
-#define cpu_do_suspend processor.do_suspend
-#define cpu_do_resume processor.do_resume
+extern struct processor processor;
+#define PROC_VTABLE(f) processor.f
+#define PROC_TABLE(f) processor.f
+static inline void init_proc_vtable(const struct processor *p)
+{
+ processor = *p;
+}
+
+#define cpu_proc_init PROC_VTABLE(_proc_init)
+#define cpu_check_bugs PROC_VTABLE(check_bugs)
+#define cpu_proc_fin PROC_VTABLE(_proc_fin)
+#define cpu_reset PROC_VTABLE(reset)
+#define cpu_do_idle PROC_VTABLE(_do_idle)
+#define cpu_dcache_clean_area PROC_TABLE(dcache_clean_area)
+#define cpu_set_pte_ext PROC_TABLE(set_pte_ext)
+#define cpu_do_switch_mm PROC_VTABLE(switch_mm)
+
+/* These two are private to arch/arm/kernel/suspend.c */
+#define cpu_do_suspend PROC_VTABLE(do_suspend)
+#define cpu_do_resume PROC_VTABLE(do_resume)
#endif

extern void cpu_resume(void);
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -625,9 +625,7 @@ static void __init setup_processor(void)
cpu_name = list->cpu_name;
__cpu_architecture = __get_cpu_architecture();

-#ifdef MULTI_CPU
- processor = *list->proc;
-#endif
+ init_proc_vtable(list->proc);
#ifdef MULTI_TLB
cpu_tlb = *list->tlb;
#endif


2019-11-08 19:24:07

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 65/75] ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization

From: Julien Thierry <[email protected]>

Commit afaf6838f4bc896a711180b702b388b8cfa638fc upstream.

Introduce C and asm helpers to sanitize user addresses, taking the
address range they target into account.

Use the asm helper for the existing sanitization in __copy_from_user().

Signed-off-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/assembler.h | 11 +++++++++++
arch/arm/include/asm/uaccess.h | 26 ++++++++++++++++++++++++++
arch/arm/lib/copy_from_user.S | 6 +-----
3 files changed, 38 insertions(+), 5 deletions(-)

--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -461,6 +461,17 @@ THUMB( orr \reg , \reg , #PSR_T_BIT )
#endif
.endm

+ .macro uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req
+#ifdef CONFIG_CPU_SPECTRE
+ sub \tmp, \limit, #1
+ subs \tmp, \tmp, \addr @ tmp = limit - 1 - addr
+ addhs \tmp, \tmp, #1 @ if (tmp >= 0) {
+ subhss \tmp, \tmp, \size @ tmp = limit - (addr + size) }
+ movlo \addr, #0 @ if (tmp < 0) addr = NULL
+ csdb
+#endif
+ .endm
+
.macro uaccess_disable, tmp, isb=1
#ifdef CONFIG_CPU_SW_DOMAIN_PAN
/*
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -138,6 +138,32 @@ static inline void set_fs(mm_segment_t f
__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))

/*
+ * Sanitise a uaccess pointer such that it becomes NULL if addr+size
+ * is above the current addr_limit.
+ */
+#define uaccess_mask_range_ptr(ptr, size) \
+ ((__typeof__(ptr))__uaccess_mask_range_ptr(ptr, size))
+static inline void __user *__uaccess_mask_range_ptr(const void __user *ptr,
+ size_t size)
+{
+ void __user *safe_ptr = (void __user *)ptr;
+ unsigned long tmp;
+
+ asm volatile(
+ " sub %1, %3, #1\n"
+ " subs %1, %1, %0\n"
+ " addhs %1, %1, #1\n"
+ " subhss %1, %1, %2\n"
+ " movlo %0, #0\n"
+ : "+r" (safe_ptr), "=&r" (tmp)
+ : "r" (size), "r" (current_thread_info()->addr_limit)
+ : "cc");
+
+ csdb();
+ return safe_ptr;
+}
+
+/*
* Single-value transfer routines. They automatically use the right
* size if we just have the right pointer type. Note that the functions
* which read from user space (*get_*) need to take care not to leak
--- a/arch/arm/lib/copy_from_user.S
+++ b/arch/arm/lib/copy_from_user.S
@@ -93,11 +93,7 @@ ENTRY(arm_copy_from_user)
#ifdef CONFIG_CPU_SPECTRE
get_thread_info r3
ldr r3, [r3, #TI_ADDR_LIMIT]
- adds ip, r1, r2 @ ip=addr+size
- sub r3, r3, #1 @ addr_limit - 1
- cmpcc ip, r3 @ if (addr+size > addr_limit - 1)
- movcs r1, #0 @ addr = NULL
- csdb
+ uaccess_mask_range_ptr r1, r2, r3, ip
#endif

#include "copy_template.S"


2019-11-08 19:24:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 62/75] ARM: 8793/1: signal: replace __put_user_error with __put_user

From: Julien Thierry <[email protected]>

Commit 18ea66bd6e7a95bdc598223d72757190916af28b upstream.

With Spectre-v1.1 mitigations, __put_user_error is pointless. In an attempt
to remove it, replace its references in frame setups with __put_user.

Signed-off-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/kernel/signal.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -302,7 +302,7 @@ setup_sigframe(struct sigframe __user *s
if (err == 0)
err |= preserve_vfp_context(&aux->vfp);
#endif
- __put_user_error(0, &aux->end_magic, err);
+ err |= __put_user(0, &aux->end_magic);

return err;
}
@@ -434,7 +434,7 @@ setup_frame(struct ksignal *ksig, sigset
/*
* Set uc.uc_flags to a value which sc.trap_no would never have.
*/
- __put_user_error(0x5ac3c35a, &frame->uc.uc_flags, err);
+ err = __put_user(0x5ac3c35a, &frame->uc.uc_flags);

err |= setup_sigframe(frame, regs, set);
if (err == 0)
@@ -454,8 +454,8 @@ setup_rt_frame(struct ksignal *ksig, sig

err |= copy_siginfo_to_user(&frame->info, &ksig->info);

- __put_user_error(0, &frame->sig.uc.uc_flags, err);
- __put_user_error(NULL, &frame->sig.uc.uc_link, err);
+ err |= __put_user(0, &frame->sig.uc.uc_flags);
+ err |= __put_user(NULL, &frame->sig.uc.uc_link);

err |= __save_altstack(&frame->sig.uc.uc_stack, regs->ARM_sp);
err |= setup_sigframe(&frame->sig, regs, set);
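The transformation is mechanical, and the two forms accumulate
faults identically (names as in the patch):

  /* old: write 0, OR any fault into err as a side effect */
  __put_user_error(0, &aux->end_magic, err);

  /* new: the same accumulation, without the special-purpose helper */
  err |= __put_user(0, &aux->end_magic);

With __put_user() hardened against Spectre-v1.1 earlier in this
series, there is nothing the _error variant does more cheaply, so it
can go.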


2019-11-08 19:24:21

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 75/75] fs/dcache: move security_d_instantiate() behind attaching dentry to inode

From: "zhangyi (F)" <[email protected]>

During the backport of 1e2e547a93a "do d_instantiate/unlock_new_inode
combinations safely", the sequence of attaching the dentry to the
inode and calling security_d_instantiate() ended up in the wrong
order.

Before commit ce23e640133 "->getxattr(): pass dentry and inode as
separate arguments" and b96809173e9 "security_d_instantiate(): move to
the point prior to attaching dentry to inode", security_d_instantiate()
must be called after __d_instantiate(); otherwise it triggers the
problem below when CONFIG_SECURITY_SMACK is enabled on ext4, because
d_inode(dentry), which is used by ->getxattr(), is still NULL until
__d_instantiate() attaches the inode.

[ 31.858026] BUG: unable to handle kernel paging request at ffffffffffffff70
...
[ 31.882024] Call Trace:
[ 31.882378] [<ffffffffa347f75c>] ext4_xattr_get+0x8c/0x3e0
[ 31.883195] [<ffffffffa3489454>] ext4_xattr_security_get+0x24/0x40
[ 31.884086] [<ffffffffa336a56b>] generic_getxattr+0x5b/0x90
[ 31.884907] [<ffffffffa3700514>] smk_fetch+0xb4/0x150
[ 31.885634] [<ffffffffa3700772>] smack_d_instantiate+0x1c2/0x550
[ 31.886508] [<ffffffffa36f9a5a>] security_d_instantiate+0x3a/0x80
[ 31.887389] [<ffffffffa3353b26>] d_instantiate_new+0x36/0x130
[ 31.888223] [<ffffffffa342b1ef>] ext4_mkdir+0x4af/0x6a0
[ 31.888928] [<ffffffffa3343470>] vfs_mkdir+0x100/0x280
[ 31.889536] [<ffffffffa334b086>] SyS_mkdir+0xb6/0x170
[ 31.890255] [<ffffffffa307c855>] ? trace_do_page_fault+0x95/0x2b0
[ 31.891134] [<ffffffffa3c5e078>] entry_SYSCALL_64_fastpath+0x18/0x73

Cc: <[email protected]> # 3.16, 4.4
Signed-off-by: zhangyi (F) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/dcache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1903,7 +1903,6 @@ void d_instantiate_new(struct dentry *en
BUG_ON(!hlist_unhashed(&entry->d_u.d_alias));
BUG_ON(!inode);
lockdep_annotate_inode_mutex_key(inode);
- security_d_instantiate(entry, inode);
spin_lock(&inode->i_lock);
__d_instantiate(entry, inode);
WARN_ON(!(inode->i_state & I_NEW));
@@ -1911,6 +1910,7 @@ void d_instantiate_new(struct dentry *en
smp_mb();
wake_up_bit(&inode->i_state, __I_NEW);
spin_unlock(&inode->i_lock);
+ security_d_instantiate(entry, inode);
}
EXPORT_SYMBOL(d_instantiate_new);



2019-11-08 19:24:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 51/75] ARM: spectre-v1: add array_index_mask_nospec() implementation

From: Russell King <[email protected]>

Commit 1d4238c56f9816ce0f9c8dbe42d7f2ad81cb6613 upstream.

Add an implementation of the array_index_mask_nospec() function for
mitigating Spectre variant 1 throughout the kernel.

Signed-off-by: Russell King <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/barrier.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -108,5 +108,26 @@ do { \
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()

+#ifdef CONFIG_CPU_SPECTRE
+static inline unsigned long array_index_mask_nospec(unsigned long idx,
+ unsigned long sz)
+{
+ unsigned long mask;
+
+ asm volatile(
+ "cmp %1, %2\n"
+ " sbc %0, %1, %1\n"
+ CSDB
+ : "=r" (mask)
+ : "r" (idx), "Ir" (sz)
+ : "cc");
+
+ return mask;
+}
+#define array_index_mask_nospec array_index_mask_nospec
+#endif
+
+#include <asm-generic/barrier.h>
+
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_BARRIER_H */
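For context, a minimal sketch of how the mask is consumed: normally
via array_index_nospec() from <linux/nospec.h>, which ANDs the index
with this mask so that a bounds check cannot be speculated past.

  #include <linux/nospec.h>

  if (idx < sz) {
          idx = array_index_nospec(idx, sz); /* 0 on a mispredicted path */
          val = arr[idx];                    /* bounded even under speculation */
  }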


2019-11-08 19:24:42

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 35/75] arm/arm64: smccc: Make function identifiers an unsigned quantity

From: Marc Zyngier <[email protected]>

commit ded4c39e93f3b72968fdb79baba27f3b83dad34c upstream.

Function identifiers are a 32-bit unsigned quantity, but we never
tell the compiler so, resulting in the following:

4ac: b26187e0 mov x0, #0xffffffff80000001

We thus rely on the firmware narrowing it for us, which is not
always a reasonable expectation.

Cc: [email protected]
Reported-by: Ard Biesheuvel <[email protected]>
Acked-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Mark Rutland <[email protected]> [v4.9 backport]
Tested-by: Greg Hackmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/arm-smccc.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -14,14 +14,16 @@
#ifndef __LINUX_ARM_SMCCC_H
#define __LINUX_ARM_SMCCC_H

+#include <uapi/linux/const.h>
+
/*
* This file provides common defines for ARM SMC Calling Convention as
* specified in
* http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html
*/

-#define ARM_SMCCC_STD_CALL 0
-#define ARM_SMCCC_FAST_CALL 1
+#define ARM_SMCCC_STD_CALL _AC(0,U)
+#define ARM_SMCCC_FAST_CALL _AC(1,U)
#define ARM_SMCCC_TYPE_SHIFT 31

#define ARM_SMCCC_SMC_32 0
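The quoted mov falls straight out of C integer promotion; a minimal
illustration with the constants spelled out (not the actual macro
expansion):

  u64 bad  = (1 << 31) | 1;   /* signed int, widens to 0xffffffff80000001 */
  u64 good = (1U << 31) | 1;  /* with _AC(1,U): stays 0x0000000080000001 */

Strictly, shifting a 1 into the sign bit of an int is undefined; in
practice the negative value is sign-extended on widening, which is
exactly the 0xffffffff80000001 seen in the disassembly above.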


2019-11-08 19:24:42

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 06/75] ARM: mm: fix alignment handler faults under memory pressure

From: Russell King <[email protected]>

[ Upstream commit 67e15fa5b487adb9b78a92789eeff2d6ec8f5cee ]

When the system has high memory pressure, the page containing the
instruction may be paged out. Using probe_kernel_address() means that
if the page is swapped out, the resulting page fault will not be
handled because page faults are disabled by this function.

Use get_user() to read the instruction instead.

Reported-by: Jing Xiangfeng <[email protected]>
Fixes: b255188f90e2 ("ARM: fix scheduling while atomic warning in alignment handling code")
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/arm/mm/alignment.c | 44 +++++++++++++++++++++++++++++++++--------
1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
index 7d5f4c736a16b..cd18eda014c24 100644
--- a/arch/arm/mm/alignment.c
+++ b/arch/arm/mm/alignment.c
@@ -767,6 +767,36 @@ do_alignment_t32_to_handler(unsigned long *pinstr, struct pt_regs *regs,
return NULL;
}

+static int alignment_get_arm(struct pt_regs *regs, u32 *ip, unsigned long *inst)
+{
+ u32 instr = 0;
+ int fault;
+
+ if (user_mode(regs))
+ fault = get_user(instr, ip);
+ else
+ fault = probe_kernel_address(ip, instr);
+
+ *inst = __mem_to_opcode_arm(instr);
+
+ return fault;
+}
+
+static int alignment_get_thumb(struct pt_regs *regs, u16 *ip, u16 *inst)
+{
+ u16 instr = 0;
+ int fault;
+
+ if (user_mode(regs))
+ fault = get_user(instr, ip);
+ else
+ fault = probe_kernel_address(ip, instr);
+
+ *inst = __mem_to_opcode_thumb16(instr);
+
+ return fault;
+}
+
static int
do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
@@ -774,10 +804,10 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
unsigned long instr = 0, instrptr;
int (*handler)(unsigned long addr, unsigned long instr, struct pt_regs *regs);
unsigned int type;
- unsigned int fault;
u16 tinstr = 0;
int isize = 4;
int thumb2_32b = 0;
+ int fault;

if (interrupts_enabled(regs))
local_irq_enable();
@@ -786,15 +816,14 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)

if (thumb_mode(regs)) {
u16 *ptr = (u16 *)(instrptr & ~1);
- fault = probe_kernel_address(ptr, tinstr);
- tinstr = __mem_to_opcode_thumb16(tinstr);
+
+ fault = alignment_get_thumb(regs, ptr, &tinstr);
if (!fault) {
if (cpu_architecture() >= CPU_ARCH_ARMv7 &&
IS_T32(tinstr)) {
/* Thumb-2 32-bit */
- u16 tinst2 = 0;
- fault = probe_kernel_address(ptr + 1, tinst2);
- tinst2 = __mem_to_opcode_thumb16(tinst2);
+ u16 tinst2;
+ fault = alignment_get_thumb(regs, ptr + 1, &tinst2);
instr = __opcode_thumb32_compose(tinstr, tinst2);
thumb2_32b = 1;
} else {
@@ -803,8 +832,7 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
}
}
} else {
- fault = probe_kernel_address((void *)instrptr, instr);
- instr = __mem_to_opcode_arm(instr);
+ fault = alignment_get_arm(regs, (void *)instrptr, &instr);
}

if (fault) {
--
2.20.1



2019-11-08 19:24:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 32/75] arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support

From: Marc Zyngier <[email protected]>

commit 6167ec5c9145cdf493722dfd80a5d48bafc4a18a upstream.

A new feature of SMCCC 1.1 is that it offers firmware-based CPU
workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides
BP hardening for CVE-2017-5715.

If the host has some mitigation for this issue, report that
we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the
host workaround on every guest exit.

Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
[v4.9: account for files moved to virt/ upstream]
Signed-off-by: Mark Rutland <[email protected]> [v4.9 backport]
Tested-by: Greg Hackmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[ardb: restrict to include/linux/arm-smccc.h]
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/arm-smccc.h | 5 +++++
1 file changed, 5 insertions(+)

--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -70,6 +70,11 @@
ARM_SMCCC_SMC_32, \
0, 1)

+#define ARM_SMCCC_ARCH_WORKAROUND_1 \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ 0, 0x8000)
+
#ifndef __ASSEMBLY__

#include <linux/linkage.h>


2019-11-08 19:24:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 30/75] ARM: Move system register accessors to asm/cp15.h

From: Vladimir Murzin <[email protected]>

Commit 4f2546384150e78cad8045e59a9587fabcd9f9fe upstream.

Headers linux/irqchip/arm-gic-v3.h and arch/arm/include/asm/kvm_hyp.h
are both included in virt/kvm/arm/hyp/vgic-v3-sr.c, and both define
macros called __ACCESS_CP15 and __ACCESS_CP15_64, which obviously
creates a conflict. These macros were introduced independently for GIC
and KVM and, in fact, do the same thing.

As an option we could add prefixes to the KVM and GIC versions of the
macros so they won't clash, but that would introduce code duplication.
Alternatively, we could keep the macros in, say, the GIC header and
include it in the KVM one (or vice versa), but such a dependency would
not look any nicer.

So we follow the arm64 way (it handles this via sysreg.h) and move a
single set of macros to asm/cp15.h.

Cc: Russell King <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: Vladimir Murzin <[email protected]>
Signed-off-by: Christoffer Dall <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/arch_gicv3.h | 27 +++++++++++----------------
arch/arm/include/asm/cp15.h | 15 +++++++++++++++
2 files changed, 26 insertions(+), 16 deletions(-)

--- a/arch/arm/include/asm/arch_gicv3.h
+++ b/arch/arm/include/asm/arch_gicv3.h
@@ -22,9 +22,7 @@

#include <linux/io.h>
#include <asm/barrier.h>
-
-#define __ACCESS_CP15(CRn, Op1, CRm, Op2) p15, Op1, %0, CRn, CRm, Op2
-#define __ACCESS_CP15_64(Op1, CRm) p15, Op1, %Q0, %R0, CRm
+#include <asm/cp15.h>

#define ICC_EOIR1 __ACCESS_CP15(c12, 0, c12, 1)
#define ICC_DIR __ACCESS_CP15(c12, 0, c11, 1)
@@ -102,58 +100,55 @@

static inline void gic_write_eoir(u32 irq)
{
- asm volatile("mcr " __stringify(ICC_EOIR1) : : "r" (irq));
+ write_sysreg(irq, ICC_EOIR1);
isb();
}

static inline void gic_write_dir(u32 val)
{
- asm volatile("mcr " __stringify(ICC_DIR) : : "r" (val));
+ write_sysreg(val, ICC_DIR);
isb();
}

static inline u32 gic_read_iar(void)
{
- u32 irqstat;
+ u32 irqstat = read_sysreg(ICC_IAR1);

- asm volatile("mrc " __stringify(ICC_IAR1) : "=r" (irqstat));
dsb(sy);
+
return irqstat;
}

static inline void gic_write_pmr(u32 val)
{
- asm volatile("mcr " __stringify(ICC_PMR) : : "r" (val));
+ write_sysreg(val, ICC_PMR);
}

static inline void gic_write_ctlr(u32 val)
{
- asm volatile("mcr " __stringify(ICC_CTLR) : : "r" (val));
+ write_sysreg(val, ICC_CTLR);
isb();
}

static inline void gic_write_grpen1(u32 val)
{
- asm volatile("mcr " __stringify(ICC_IGRPEN1) : : "r" (val));
+ write_sysreg(val, ICC_IGRPEN1);
isb();
}

static inline void gic_write_sgi1r(u64 val)
{
- asm volatile("mcrr " __stringify(ICC_SGI1R) : : "r" (val));
+ write_sysreg(val, ICC_SGI1R);
}

static inline u32 gic_read_sre(void)
{
- u32 val;
-
- asm volatile("mrc " __stringify(ICC_SRE) : "=r" (val));
- return val;
+ return read_sysreg(ICC_SRE);
}

static inline void gic_write_sre(u32 val)
{
- asm volatile("mcr " __stringify(ICC_SRE) : : "r" (val));
+ write_sysreg(val, ICC_SRE);
isb();
}

--- a/arch/arm/include/asm/cp15.h
+++ b/arch/arm/include/asm/cp15.h
@@ -49,6 +49,21 @@

#ifdef CONFIG_CPU_CP15

+#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \
+ "mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
+#define __ACCESS_CP15_64(Op1, CRm) \
+ "mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64
+
+#define __read_sysreg(r, w, c, t) ({ \
+ t __val; \
+ asm volatile(r " " c : "=r" (__val)); \
+ __val; \
+})
+#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
+
+#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
+#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
+
extern unsigned long cr_alignment; /* defined in entry-armv.S */

static inline unsigned long get_cr(void)
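To see what the shared accessors expand to, take ICC_EOIR1 from the
hunk above, defined as __ACCESS_CP15(c12, 0, c12, 1); after the token
pasting and stringification, write_sysreg(irq, ICC_EOIR1) in
gic_write_eoir() becomes roughly:

  asm volatile("mcr " "p15, 0, %0, c12, c12, 1" : : "r" ((u32)(irq)));

That is the same mcr as before, but the register encoding and access
width now live in one place instead of being open-coded in every
accessor.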


2019-11-08 19:25:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 52/75] ARM: spectre-v1: fix syscall entry

From: Russell King <[email protected]>

Commit 10573ae547c85b2c61417ff1a106cffbfceada35 upstream.

Prevent speculation at the syscall table decoding by clamping the index
used to zero on invalid system call numbers, and using the csdb
speculative barrier.

Signed-off-by: Russell King <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/kernel/entry-common.S | 18 +++++++-----------
arch/arm/kernel/entry-header.S | 25 +++++++++++++++++++++++++
2 files changed, 32 insertions(+), 11 deletions(-)

--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -223,9 +223,7 @@ local_restart:
tst r10, #_TIF_SYSCALL_WORK @ are we tracing syscalls?
bne __sys_trace

- cmp scno, #NR_syscalls @ check upper syscall limit
- badr lr, ret_fast_syscall @ return address
- ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine
+ invoke_syscall tbl, scno, r10, ret_fast_syscall

add r1, sp, #S_OFF
2: cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE)
@@ -258,14 +256,8 @@ __sys_trace:
mov r1, scno
add r0, sp, #S_OFF
bl syscall_trace_enter
-
- badr lr, __sys_trace_return @ return address
- mov scno, r0 @ syscall number (possibly new)
- add r1, sp, #S_R0 + S_OFF @ pointer to regs
- cmp scno, #NR_syscalls @ check upper syscall limit
- ldmccia r1, {r0 - r6} @ have to reload r0 - r6
- stmccia sp, {r4, r5} @ and update the stack args
- ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine
+ mov scno, r0
+ invoke_syscall tbl, scno, r10, __sys_trace_return, reload=1
cmp scno, #-1 @ skip the syscall?
bne 2b
add sp, sp, #S_OFF @ restore stack
@@ -317,6 +309,10 @@ sys_syscall:
bic scno, r0, #__NR_OABI_SYSCALL_BASE
cmp scno, #__NR_syscall - __NR_SYSCALL_BASE
cmpne scno, #NR_syscalls @ check range
+#ifdef CONFIG_CPU_SPECTRE
+ movhs scno, #0
+ csdb
+#endif
stmloia sp, {r5, r6} @ shuffle args
movlo r0, r1
movlo r1, r2
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -373,6 +373,31 @@
#endif
.endm

+ .macro invoke_syscall, table, nr, tmp, ret, reload=0
+#ifdef CONFIG_CPU_SPECTRE
+ mov \tmp, \nr
+ cmp \tmp, #NR_syscalls @ check upper syscall limit
+ movcs \tmp, #0
+ csdb
+ badr lr, \ret @ return address
+ .if \reload
+ add r1, sp, #S_R0 + S_OFF @ pointer to regs
+ ldmccia r1, {r0 - r6} @ reload r0-r6
+ stmccia sp, {r4, r5} @ update stack arguments
+ .endif
+ ldrcc pc, [\table, \tmp, lsl #2] @ call sys_* routine
+#else
+ cmp \nr, #NR_syscalls @ check upper syscall limit
+ badr lr, \ret @ return address
+ .if \reload
+ add r1, sp, #S_R0 + S_OFF @ pointer to regs
+ ldmccia r1, {r0 - r6} @ reload r0-r6
+ stmccia sp, {r4, r5} @ update stack arguments
+ .endif
+ ldrcc pc, [\table, \nr, lsl #2] @ call sys_* routine
+#endif
+ .endm
+
/*
* These are the registers used in the syscall handler, and allow us to
* have in theory up to 7 arguments to a function - r0 to r6.
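In C terms, the CONFIG_CPU_SPECTRE path of the macro is the standard
Spectre-v1 clamp; a sketch of the logic (the function shape is
hypothetical, the real dispatch stays in assembler):

  typedef long (*syscall_fn)(void);

  static long invoke_syscall_sketch(syscall_fn *table, unsigned int scno)
  {
          unsigned int idx = scno;

          if (idx >= NR_syscalls)
                  idx = 0;        /* speculation can only reach entry 0 */
          csdb();                 /* barrier added earlier in this series */

          if (scno < NR_syscalls) /* the ldrcc condition */
                  return table[idx]();
          return -ENOSYS;         /* invalid numbers fall through in the asm */
  }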


2019-11-08 19:25:18

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 14/75] dccp: do not leak jiffies on the wire

From: Eric Dumazet <[email protected]>

[ Upstream commit 3d1e5039f5f87a8731202ceca08764ee7cb010d3 ]

For some reason I missed the case of DCCP passive
flows in my previous patch.

Fixes: a904a0693c18 ("inet: stop leaking jiffies on the wire")
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: Thiemo Nagel <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/dccp/ipv4.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -417,7 +417,7 @@ struct sock *dccp_v4_request_recv_sock(c
RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt));
newinet->mc_index = inet_iif(skb);
newinet->mc_ttl = ip_hdr(skb)->ttl;
- newinet->inet_id = jiffies;
+ newinet->inet_id = prandom_u32();

if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL)
goto put_and_exit;


2019-11-08 19:26:05

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 69/75] ARM: clean up per-processor check_bugs method call

From: Russell King <[email protected]>

Commit 945aceb1db8885d3a35790cf2e810f681db52756 upstream.

Call the per-processor type check_bugs() method in the same way as we
do other per-processor functions - move the "processor." detail into
proc-fns.h.

Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/proc-fns.h | 1 +
arch/arm/kernel/bugs.c | 4 ++--
2 files changed, 3 insertions(+), 2 deletions(-)

--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -99,6 +99,7 @@ extern void cpu_do_suspend(void *);
extern void cpu_do_resume(void *);
#else
#define cpu_proc_init processor._proc_init
+#define cpu_check_bugs processor.check_bugs
#define cpu_proc_fin processor._proc_fin
#define cpu_reset processor.reset
#define cpu_do_idle processor._do_idle
--- a/arch/arm/kernel/bugs.c
+++ b/arch/arm/kernel/bugs.c
@@ -6,8 +6,8 @@
void check_other_bugs(void)
{
#ifdef MULTI_CPU
- if (processor.check_bugs)
- processor.check_bugs();
+ if (cpu_check_bugs)
+ cpu_check_bugs();
#endif
}



2019-11-08 19:26:18

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 57/75] ARM: spectre-v1: use get_user() for __get_user()

From: Russell King <[email protected]>

Commit b1cd0a14806321721aae45f5446ed83a3647c914 upstream.

Fixing __get_user() for Spectre variant 1 is not sane: we would have to
add address space bounds checking in order to validate that the location
should be accessed, and then zero the address if found to be invalid.

Since __get_user() is supposed to avoid the bounds check, and this is
exactly what get_user() does, there's no point having two different
implementations that are doing the same thing. So, when the Spectre
workarounds are required, make __get_user() an alias of get_user().

Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/uaccess.h | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -288,6 +288,16 @@ static inline void set_fs(mm_segment_t f
#define user_addr_max() \
(segment_eq(get_fs(), KERNEL_DS) ? ~0UL : get_fs())

+#ifdef CONFIG_CPU_SPECTRE
+/*
+ * When mitigating Spectre variant 1, it is not worth fixing the non-
+ * verifying accessors, because we need to add verification of the
+ * address space there. Force these to use the standard get_user()
+ * version instead.
+ */
+#define __get_user(x, ptr) get_user(x, ptr)
+#else
+
/*
* The "__xxx" versions of the user access functions do not verify the
* address space - it must have been done previously with a separate
@@ -304,12 +314,6 @@ static inline void set_fs(mm_segment_t f
__gu_err; \
})

-#define __get_user_error(x, ptr, err) \
-({ \
- __get_user_err((x), (ptr), err); \
- (void) 0; \
-})
-
#define __get_user_err(x, ptr, err) \
do { \
unsigned long __gu_addr = (unsigned long)(ptr); \
@@ -369,6 +373,7 @@ do { \

#define __get_user_asm_word(x, addr, err) \
__get_user_asm(x, addr, err, ldr)
+#endif


#define __put_user_switch(x, ptr, __err, __fn) \


2019-11-08 19:26:25

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 38/75] arm/arm64: smccc-1.1: Make return values unsigned long

From: Marc Zyngier <[email protected]>

[ Upstream commit 1d8f574708a3fb6f18c85486d0c5217df893c0cf ]

An unfortunate consequence of having strong typing for the input
values to the SMC call is that it also affects the type of the
return values, limiting r0 to 32 bits and r{1,2,3} to whatever
was passed as an input.

Let's turn everything into "unsigned long", which satisfies the
requirements of both architectures, and allows for the full
range of return values.

Reported-by: Julien Grall <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/arm-smccc.h | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -167,31 +167,31 @@ asmlinkage void arm_smccc_hvc(unsigned l

#define __declare_arg_0(a0, res) \
struct arm_smccc_res *___res = res; \
- register u32 r0 asm("r0") = a0; \
+ register unsigned long r0 asm("r0") = (u32)a0; \
register unsigned long r1 asm("r1"); \
register unsigned long r2 asm("r2"); \
register unsigned long r3 asm("r3")

#define __declare_arg_1(a0, a1, res) \
struct arm_smccc_res *___res = res; \
- register u32 r0 asm("r0") = a0; \
- register typeof(a1) r1 asm("r1") = a1; \
+ register unsigned long r0 asm("r0") = (u32)a0; \
+ register unsigned long r1 asm("r1") = a1; \
register unsigned long r2 asm("r2"); \
register unsigned long r3 asm("r3")

#define __declare_arg_2(a0, a1, a2, res) \
struct arm_smccc_res *___res = res; \
- register u32 r0 asm("r0") = a0; \
- register typeof(a1) r1 asm("r1") = a1; \
- register typeof(a2) r2 asm("r2") = a2; \
+ register unsigned long r0 asm("r0") = (u32)a0; \
+ register unsigned long r1 asm("r1") = a1; \
+ register unsigned long r2 asm("r2") = a2; \
register unsigned long r3 asm("r3")

#define __declare_arg_3(a0, a1, a2, a3, res) \
struct arm_smccc_res *___res = res; \
- register u32 r0 asm("r0") = a0; \
- register typeof(a1) r1 asm("r1") = a1; \
- register typeof(a2) r2 asm("r2") = a2; \
- register typeof(a3) r3 asm("r3") = a3
+ register unsigned long r0 asm("r0") = (u32)a0; \
+ register unsigned long r1 asm("r1") = a1; \
+ register unsigned long r2 asm("r2") = a2; \
+ register unsigned long r3 asm("r3") = a3

#define __declare_arg_4(a0, a1, a2, a3, a4, res) \
__declare_arg_3(a0, a1, a2, a3, res); \
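The truncation being fixed is easy to show in miniature (the function
ID and argument here are hypothetical):

  struct arm_smccc_res res;

  arm_smccc_1_1_smc(fn_id, (u32)arg, &res);
  /*
   * Before: r1 was 'register typeof((u32)arg) r1', i.e. 32 bits wide,
   * so a full 64-bit value returned by firmware in x1 reached res.a1
   * truncated.  Declaring the registers unsigned long gives them
   * native width on both arm and arm64.
   */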


2019-11-08 19:26:32

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 56/75] ARM: use __inttype() in get_user()

From: Russell King <[email protected]>

Commit d09fbb327d670737ab40fd8bbb0765ae06b8b739 upstream.

Borrow the x86 implementation of __inttype() to use in get_user() to
select an integer type suitable to temporarily hold the result value.
This is necessary to avoid propagating the volatile nature of the
result argument, which can cause the following warning:

lib/iov_iter.c:413:5: warning: optimization may eliminate reads and/or writes to register variables [-Wvolatile-register-var]

Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/uaccess.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -123,6 +123,13 @@ static inline void set_fs(mm_segment_t f
flag; })

/*
+ * This is a type: either unsigned long, if the argument fits into
+ * that type, or otherwise unsigned long long.
+ */
+#define __inttype(x) \
+ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+
+/*
* Single-value transfer routines. They automatically use the right
* size if we just have the right pointer type. Note that the functions
* which read from user space (*get_*) need to take care not to leak
@@ -191,7 +198,7 @@ extern int __get_user_64t_4(void *);
({ \
unsigned long __limit = current_thread_info()->addr_limit - 1; \
register const typeof(*(p)) __user *__p asm("r0") = (p);\
- register typeof(x) __r2 asm("r2"); \
+ register __inttype(x) __r2 asm("r2"); \
register unsigned long __l asm("r1") = __limit; \
register int __e asm("r0"); \
unsigned int __ua_flags = uaccess_save_and_enable(); \
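A worked instance of the selector (declarations illustrative only):

  u32 small;
  u64 big;

  __inttype(small) t1;  /* unsigned long: sizeof(u32) <= sizeof(0UL) */
  __inttype(big)   t2;  /* unsigned long long, even on 32-bit ARM */

Unlike typeof(x), the chosen type never inherits qualifiers such as
volatile from the destination, which is what silences the warning
quoted above.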


2019-11-08 19:26:36

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 49/75] ARM: spectre-v2: warn about incorrect context switching functions

From: Russell King <[email protected]>

Commit c44f366ea7c85e1be27d08f2f0880f4120698125 upstream.

Warn at error level if the context switching function is not what we
are expecting. This can happen with big.Little systems, which we
currently do not support.

Signed-off-by: Russell King <[email protected]>
Boot-tested-by: Tony Lindgren <[email protected]>
Reviewed-by: Tony Lindgren <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: David A. Long <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/mm/proc-v7-bugs.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)

--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -12,6 +12,8 @@
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);

+extern void cpu_v7_iciallu_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
+extern void cpu_v7_bpiall_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);

@@ -50,6 +52,8 @@ static void cpu_v7_spectre_init(void)
case ARM_CPU_PART_CORTEX_A17:
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
+ if (processor.switch_mm != cpu_v7_bpiall_switch_mm)
+ goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_bpiall;
spectre_v2_method = "BPIALL";
@@ -57,6 +61,8 @@ static void cpu_v7_spectre_init(void)

case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
+ if (processor.switch_mm != cpu_v7_iciallu_switch_mm)
+ goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_iciallu;
spectre_v2_method = "ICIALLU";
@@ -82,6 +88,8 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
+ if (processor.switch_mm != cpu_v7_hvc_switch_mm && cpu)
+ goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
processor.switch_mm = cpu_v7_hvc_switch_mm;
@@ -93,6 +101,8 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
+ if (processor.switch_mm != cpu_v7_smc_switch_mm && cpu)
+ goto bl_error;
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
processor.switch_mm = cpu_v7_smc_switch_mm;
@@ -109,6 +119,11 @@ static void cpu_v7_spectre_init(void)
if (spectre_v2_method)
pr_info("CPU%u: Spectre v2: using %s workaround\n",
smp_processor_id(), spectre_v2_method);
+ return;
+
+bl_error:
+ pr_err("CPU%u: Spectre v2: incorrect context switching function, system vulnerable\n",
+ cpu);
}
#else
static void cpu_v7_spectre_init(void)


2019-11-08 19:27:14

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.4 23/75] dmaengine: qcom: bam_dma: Fix resource leak

From: Jeffrey Hugo <[email protected]>

commit 7667819385457b4aeb5fac94f67f52ab52cc10d5 upstream.

bam_dma_terminate_all() will leak resources if any of the transactions are
committed to the hardware (present in the desc fifo), and not complete.
Since bam_dma_terminate_all() does not cause the hardware to be updated,
the hardware will still operate on any previously committed transactions.
This can cause memory corruption if the memory for the transaction has been
reassigned, and will cause a sync issue between the BAM and its client(s).

Fix this by properly updating the hardware in bam_dma_terminate_all().

Fixes: e7c0fe2a5c84 ("dmaengine: add Qualcomm BAM dma driver")
Signed-off-by: Jeffrey Hugo <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>


---
drivers/dma/qcom_bam_dma.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)

--- a/drivers/dma/qcom_bam_dma.c
+++ b/drivers/dma/qcom_bam_dma.c
@@ -671,7 +671,21 @@ static int bam_dma_terminate_all(struct

/* remove all transactions, including active transaction */
spin_lock_irqsave(&bchan->vc.lock, flag);
+ /*
+ * If we have transactions queued, then some might be committed to the
+ * hardware in the desc fifo. The only way to reset the desc fifo is
+ * to do a hardware reset (either by pipe or the entire block).
+ * bam_chan_init_hw() will trigger a pipe reset, and also reinit the
+ * pipe. If the pipe is left disabled (default state after pipe reset)
+ * and is accessed by a connected hardware engine, a fatal error in
+ * the BAM will occur. There is a small window where this could happen
+ * with bam_chan_init_hw(), but it is assumed that the caller has
+ * stopped activity on any attached hardware engine. Make sure to do
+ * this first so that the BAM hardware doesn't cause memory corruption
+ * by accessing freed resources.
+ */
if (bchan->curr_txd) {
+ bam_chan_init_hw(bchan, bchan->curr_txd->dir);
list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
bchan->curr_txd = NULL;
}


2019-11-09 01:19:14

by kernelci.org bot

[permalink] [raw]
Subject: Re: [PATCH 4.4 00/75] 4.4.200-stable review

stable-rc/linux-4.4.y boot: 80 boots: 0 failed, 72 passed with 7 offline, 1 conflict (v4.4.199-76-g6afbf4832d8a)

Full Boot Summary: https://kernelci.org/boot/all/job/stable-rc/branch/linux-4.4.y/kernel/v4.4.199-76-g6afbf4832d8a/
Full Build Summary: https://kernelci.org/build/stable-rc/branch/linux-4.4.y/kernel/v4.4.199-76-g6afbf4832d8a/

Tree: stable-rc
Branch: linux-4.4.y
Git Describe: v4.4.199-76-g6afbf4832d8a
Git Commit: 6afbf4832d8a30149fdb36136e222f795f2e9ec9
Git URL: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
Tested: 41 unique boards, 17 SoC families, 13 builds out of 190

Offline Platforms:

arm:

sunxi_defconfig:
gcc-8
sun5i-r8-chip: 1 offline lab
sun7i-a20-bananapi: 1 offline lab

multi_v7_defconfig:
gcc-8
qcom-apq8064-cm-qs600: 1 offline lab
sun5i-r8-chip: 1 offline lab
sun7i-a20-bananapi: 1 offline lab

davinci_all_defconfig:
gcc-8
dm365evm,legacy: 1 offline lab

qcom_defconfig:
gcc-8
qcom-apq8064-cm-qs600: 1 offline lab

Conflicting Boot Failure Detected: (These likely are not failures as other labs are reporting PASS. Needs review.)

i386:
i386_defconfig:
qemu_i386:
lab-collabora: PASS (gcc-8)
lab-baylibre: FAIL (gcc-8)

---
For more info write to <[email protected]>

2019-11-09 10:36:29

by Naresh Kamboju

[permalink] [raw]
Subject: Re: [PATCH 4.4 00/75] 4.4.200-stable review

On Sat, 9 Nov 2019 at 00:24, Greg Kroah-Hartman
<[email protected]> wrote:
>
> This is the start of the stable review cycle for the 4.4.200 release.
> There are 75 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun 10 Nov 2019 05:42:11 PM UTC.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.4.200-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.4.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

Results from Linaro’s test farm.
No regressions on arm64, arm, x86_64, and i386.

Summary
------------------------------------------------------------------------

kernel: 4.4.200-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.4.y
git commit: 6afbf4832d8a30149fdb36136e222f795f2e9ec9
git describe: v4.4.199-76-g6afbf4832d8a
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.4-oe/build/v4.4.199-76-g6afbf4832d8a


No regressions (compared to build v4.4.199)

No fixes (compared to build v4.4.199)

Ran 16997 total tests in the following environments and test suites.

Environments
--------------
- i386
- juno-r2 - arm64
- qemu_arm
- qemu_arm64
- qemu_i386
- qemu_x86_64
- x15 - arm
- x86_64

Test Suites
-----------
* build
* kselftest
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-cpuhotplug-tests
* ltp-dio-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* network-basic-tests
* perf
* prep-tmp-disk
* kvm-unit-tests
* ltp-containers-tests
* ltp-cve-tests
* ltp-open-posix-tests
* spectre-meltdown-checker-test
* v4l2-compliance
* install-android-platform-tools-r2600
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-none
* ssuite

--
Linaro LKFT
https://lkft.linaro.org

2019-11-09 15:39:48

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 4.4 00/75] 4.4.200-stable review

On 11/8/19 10:49 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.4.200 release.
> There are 75 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun 10 Nov 2019 05:42:11 PM UTC.
> Anything received after that time might be too late.
>

Build results:
total: 170 pass: 170 fail: 0
Qemu test results:
total: 324 pass: 324 fail: 0

Guenter