This is the start of the stable review cycle for the 4.15.4 release.
There are 202 patches in this series, all of which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Sat Feb 17 15:16:28 UTC 2018.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.15.4-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.15.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <[email protected]>
Linux 4.15.4-rc1
Uma Krishnan <[email protected]>
scsi: cxlflash: Reset command ioasc
James Smart <[email protected]>
scsi: lpfc: Fix crash after bad bar setup on driver attachment
Bart Van Assche <[email protected]>
scsi: core: Ensure that the SCSI error handler gets woken up
Steven Rostedt (VMware) <[email protected]>
ftrace: Remove incorrect setting of glob search field
Eric Biggers <[email protected]>
devpts: fix error handling in devpts_mntget()
Eric W. Biederman <[email protected]>
mn10300/misalignment: Use SIGSEGV SEGV_MAPERR to report a failed user copy
Amir Goldstein <[email protected]>
ovl: hash directory inodes for fsnotify
Amir Goldstein <[email protected]>
ovl: take mnt_want_write() for removing impure xattr
Amir Goldstein <[email protected]>
ovl: take mnt_want_write() for work/index dir setup
Amir Goldstein <[email protected]>
ovl: fix failure to fsync lower dir
Amir Goldstein <[email protected]>
ovl: force r/o mount when index dir creation fails
Toshi Kani <[email protected]>
acpi, nfit: fix register dimm error handling
Greg Kroah-Hartman <[email protected]>
ACPI: sbshc: remove raw pointer from printk() message
Imre Deak <[email protected]>
drm/i915: Avoid PPS HW/SW state mismatch due to rounding
Yan Markman <[email protected]>
arm64: dts: marvell: add Ethernet aliases
Peter Zijlstra <[email protected]>
objtool: Fix switch-table detection
Andrey Ryabinin <[email protected]>
lib/ubsan: add type mismatch handler for new GCC/Clang
Andrew Morton <[email protected]>
lib/ubsan.c: s/missaligned/misaligned/
Daniel Lezcano <[email protected]>
clocksource/drivers/stm32: Fix kernel panic with multiple timers
Ming Lei <[email protected]>
blk-mq: quiesce queue before freeing queue
Bart Van Assche <[email protected]>
pktcdvd: Fix a recently introduced NULL pointer dereference
Bart Van Assche <[email protected]>
pktcdvd: Fix pkt_setup_dev() error path
Peter Rosin <[email protected]>
pinctrl: sx150x: Add a static gpio/pinctrl pin range mapping
Peter Rosin <[email protected]>
pinctrl: sx150x: Register pinctrl before adding the gpiochip
Peter Rosin <[email protected]>
pinctrl: sx150x: Unregister the pinctrl on release
Dmitry Mastykin <[email protected]>
pinctrl: mcp23s08: fix irq setup order
Mika Westerberg <[email protected]>
pinctrl: intel: Initialize GPIO properly when used through irqchip
Thomas Gleixner <[email protected]>
genirq: Make legacy autoprobing work again
James Hogan <[email protected]>
EDAC, octeon: Fix an uninitialized variable warning
Max Filippov <[email protected]>
xtensa: fix futex_atomic_cmpxchg_inatomic
Mikulas Patocka <[email protected]>
alpha: fix formating of stack content
Mikulas Patocka <[email protected]>
alpha: fix reboot on Avanti platform
Michael Cree <[email protected]>
alpha: Fix mixed up args in EXC macro in futex operations
Arnd Bergmann <[email protected]>
alpha: osf_sys.c: fix put_tv32 regression
Mikulas Patocka <[email protected]>
alpha: fix crash if pthread_create races with signal delivery
Eric W. Biederman <[email protected]>
signal/sh: Ensure si_signo is initialized in do_divide_error
Eric W. Biederman <[email protected]>
signal/openrisc: Fix do_unaligned_access to send the proper signal
John Garry <[email protected]>
ipmi: use dynamic memory for DMI driver override
Hans de Goede <[email protected]>
Bluetooth: btusb: Restore QCA Rome suspend/resume fix with a "rewritten" version
Kai-Heng Feng <[email protected]>
Revert "Bluetooth: btusb: fix QCA Rome suspend/resume"
Hans de Goede <[email protected]>
Bluetooth: btsdio: Do not bind to non-removable BCM43341
Hans de Goede <[email protected]>
HID: quirks: Fix keyboard + touchpad on Toshiba Click Mini not working
Eric Biggers <[email protected]>
pipe: fix off-by-one error when checking buffer limits
Eric Biggers <[email protected]>
pipe: actually allow root to exceed the pipe buffer limits
Eric Biggers <[email protected]>
kernel/relay.c: revert "kernel/relay.c: fix potential memory leak"
Rasmus Villemoes <[email protected]>
kernel/async.c: revert "async: simplify lowest_in_progress()"
Heiko Carstens <[email protected]>
fs/proc/kcore.c: use probe_kernel_read() instead of memcpy()
Mauro Carvalho Chehab <[email protected]>
media: cxusb, dib0700: ignore XC2028_I2C_FLUSH
Hans Verkuil <[email protected]>
media: vivid: fix module load error when enabling fb and no_error_inj=1
Mauro Carvalho Chehab <[email protected]>
media: ts2020: avoid integer overflows on 32 bit machines
Hans Verkuil <[email protected]>
media: dt-bindings/media/cec-gpio.txt: mention the CEC/HPD max voltages
Arnd Bergmann <[email protected]>
media: dvb-frontends: fix i2c access helpers for KASAN
Mauro Carvalho Chehab <[email protected]>
media: dvb_frontend: be sure to init dvb_frontend_handle_ioctl() return code
Arnd Bergmann <[email protected]>
kasan: rework Kconfig settings
Andrey Konovalov <[email protected]>
kasan: don't emit builtin calls when sanitization is off
Liu Bo <[email protected]>
Btrfs: raid56: iterate raid56 internal bio with bio_for_each_segment_all
Nikolay Borisov <[email protected]>
btrfs: Handle btrfs_set_extent_delalloc failure in fixup worker
David Howells <[email protected]>
afs: Fix server list handling
David Howells <[email protected]>
afs: Fix missing cursor clearance
David Howells <[email protected]>
afs: Need to clear responded flag in addr cursor
David Howells <[email protected]>
afs: Add missing afs_put_cell()
Martin Kaiser <[email protected]>
watchdog: imx2_wdt: restore previous timeout after suspend+resume
Charles Keepax <[email protected]>
ASoC: compress: Correct handling of copy callback
Takashi Iwai <[email protected]>
ASoC: skl: Fix kernel warning due to zero NHTL entry
John Keeping <[email protected]>
ASoC: rockchip: i2s: fix playback after runtime resume
Pierre-Louis Bossart <[email protected]>
ASoC: acpi: fix machine driver selection based on quirk
Ulf Magnusson <[email protected]>
KVM: PPC: Book3S PR: Fix broken select due to misspelling
James Morse <[email protected]>
KVM: arm/arm64: Handle CPU_PM_ENTER_FAILED
Paul Mackerras <[email protected]>
KVM: PPC: Book3S HV: Drop locks before reading guest memory
Paul Mackerras <[email protected]>
KVM: PPC: Book3S HV: Make sure we don't re-enter guest without XIVE loaded
Liran Alon <[email protected]>
KVM: nVMX: Fix bug of injecting L2 exception into L1
Liran Alon <[email protected]>
KVM: nVMX: Fix races when sending nested PI while dest enters/leaves L2
Marc Zyngier <[email protected]>
arm: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls
LEROY Christophe <[email protected]>
crypto: talitos - fix Kernel Oops on hashing an empty file
Eric Biggers <[email protected]>
crypto: sha512-mb - initialize pending lengths correctly
Horia Geantă <[email protected]>
crypto: caam - fix endless loop when DECO acquire fails
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: make ctrl_is_pointer work for subdevs
Daniel Mentz <[email protected]>
media: v4l2-compat-ioctl32.c: refactor compat ioctl32 logic
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: don't copy back the result for certain errors
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: drop pr_info for unknown buffer type
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: copy clip list in put_v4l2_window32
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: fix ctrl_is_pointer
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: copy m.userptr in put_v4l2_plane32
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: avoid sizeof(type)
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: move 'helper' functions to __get/put_v4l2_format32
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: fix the indentation
Hans Verkuil <[email protected]>
media: v4l2-compat-ioctl32.c: add missing VIDIOC_PREPARE_BUF
Hans Verkuil <[email protected]>
media: v4l2-ioctl.c: don't copy back the result for -ENOTTY
Hans Verkuil <[email protected]>
media: v4l2-ioctl.c: use check_fmt for enum/g/s/try_fmt
Eric Biggers <[email protected]>
crypto: hash - prevent using keyed hashes without setting key
Eric Biggers <[email protected]>
crypto: hash - annotate algorithms taking optional key
Eric Biggers <[email protected]>
crypto: poly1305 - remove ->setkey() method
Eric Biggers <[email protected]>
crypto: mcryptd - pass through absence of ->setkey()
Eric Biggers <[email protected]>
crypto: cryptd - pass through absence of ->setkey()
Eric Biggers <[email protected]>
crypto: hash - introduce crypto_hash_alg_has_setkey()
Mika Westerberg <[email protected]>
ahci: Add Intel Cannon Lake PCH-H PCI ID
Hans de Goede <[email protected]>
ahci: Add PCI ids for Intel Bay Trail, Cherry Trail and Apollo Lake AHCI
Hans de Goede <[email protected]>
ahci: Annotate PCI ids for mobile Intel chipsets as such
Ivan Vecera <[email protected]>
kernfs: fix regression in kernfs_fop_write caused by wrong type
Trond Myklebust <[email protected]>
nfsd: Detect unhashed stids in nfsd4_verify_open_stid()
Trond Myklebust <[email protected]>
NFS: Fix a race between mmap() and O_DIRECT
Eric Biggers <[email protected]>
NFS: reject request for id_legacy key without auxdata
J. Bruce Fields <[email protected]>
NFS: commit direct writes even if they fail partially
Trond Myklebust <[email protected]>
NFS: Fix nfsstat breakage due to LOOKUPP
Trond Myklebust <[email protected]>
NFS: Add a cond_resched() to nfs_commit_release_pages()
Tigran Mkrtchyan <[email protected]>
nfs41: do not return ENOMEM on LAYOUTUNAVAILABLE
Scott Mayhew <[email protected]>
nfs/pnfs: fix nfs_direct_req ref leak when i/o falls back to the mds
Eric Biggers <[email protected]>
ubifs: free the encrypted symlink target
Bradley Bolen <[email protected]>
ubi: block: Fix locking for idr_alloc/idr_remove
Sascha Hauer <[email protected]>
ubi: fastmap: Erase outdated anchor PEBs during attach
Clay McClure <[email protected]>
ubi: Fix race condition between ubi volume creation and udev
Miquel Raynal <[email protected]>
mtd: nand: sunxi: Fix ECC strength choice
Miquel Raynal <[email protected]>
mtd: nand: Fix nand_do_read_oob() return value
Kamal Dasu <[email protected]>
mtd: nand: brcmnand: Disable prefetch by default
Arnd Bergmann <[email protected]>
mtd: cfi: convert inline functions to macros
Marc Zyngier <[email protected]>
arm64: Kill PSCI_GET_VERSION as a variant-2 workaround
Marc Zyngier <[email protected]>
arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support
Marc Zyngier <[email protected]>
arm/arm64: smccc: Implement SMCCC v1.1 inline primitive
Marc Zyngier <[email protected]>
arm/arm64: smccc: Make function identifiers an unsigned quantity
Marc Zyngier <[email protected]>
firmware/psci: Expose SMCCC version through psci_ops
Marc Zyngier <[email protected]>
firmware/psci: Expose PSCI conduit
Marc Zyngier <[email protected]>
arm64: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
Marc Zyngier <[email protected]>
arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support
Marc Zyngier <[email protected]>
arm/arm64: KVM: Turn kvm_psci_version into a static inline
Marc Zyngier <[email protected]>
arm64: KVM: Make PSCI_VERSION a fast path
Marc Zyngier <[email protected]>
arm/arm64: KVM: Advertise SMCCC v1.1
Marc Zyngier <[email protected]>
arm/arm64: KVM: Implement PSCI 1.0 support
Marc Zyngier <[email protected]>
arm/arm64: KVM: Add smccc accessors to PSCI code
Marc Zyngier <[email protected]>
arm/arm64: KVM: Add PSCI_VERSION helper
Marc Zyngier <[email protected]>
arm/arm64: KVM: Consolidate the PSCI include files
Marc Zyngier <[email protected]>
arm64: KVM: Increment PC after handling an SMC trap
Jayachandran C <[email protected]>
arm64: Branch predictor hardening for Cavium ThunderX2
Shanker Donthineni <[email protected]>
arm64: Implement branch predictor hardening for Falkor
Will Deacon <[email protected]>
arm64: Implement branch predictor hardening for affected Cortex-A CPUs
Will Deacon <[email protected]>
arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
Will Deacon <[email protected]>
arm64: entry: Apply BP hardening for suspicious interrupts from EL0
Will Deacon <[email protected]>
arm64: entry: Apply BP hardening for high-priority synchronous exceptions
Marc Zyngier <[email protected]>
arm64: KVM: Use per-CPU vector when BP hardening is enabled
Marc Zyngier <[email protected]>
arm64: Move BP hardening to check_and_switch_context
Will Deacon <[email protected]>
arm64: Add skeleton to harden the branch predictor against aliasing attacks
Marc Zyngier <[email protected]>
arm64: Move post_ttbr_update_workaround to C code
Will Deacon <[email protected]>
drivers/firmware: Expose psci_get_version through psci_ops structure
Will Deacon <[email protected]>
arm64: cpufeature: Pass capability structure to ->enable callback
Suzuki K Poulose <[email protected]>
arm64: Run enable method for errata work arounds on late CPUs
James Morse <[email protected]>
arm64: cpufeature: __this_cpu_has_cap() shouldn't stop early
Will Deacon <[email protected]>
arm64: futex: Mask __user pointers prior to dereference
Will Deacon <[email protected]>
arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user
Will Deacon <[email protected]>
arm64: uaccess: Don't bother eliding access_ok checks in __{get, put}_user
Will Deacon <[email protected]>
arm64: uaccess: Prevent speculative use of the current addr_limit
Will Deacon <[email protected]>
arm64: entry: Ensure branch through syscall table is bounded under speculation
Robin Murphy <[email protected]>
arm64: Use pointer masking to limit uaccess speculation
Robin Murphy <[email protected]>
arm64: Make USER_DS an inclusive limit
Robin Murphy <[email protected]>
arm64: Implement array_index_mask_nospec()
Will Deacon <[email protected]>
arm64: barrier: Add CSDB macros to control data-value prediction
Will Deacon <[email protected]>
perf: arm_spe: Fail device probe when arm64_kernel_unmapped_at_el0()
Will Deacon <[email protected]>
arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives
Will Deacon <[email protected]>
arm64: entry: Reword comment about post_ttbr_update_workaround
Marc Zyngier <[email protected]>
arm64: Force KPTI to be disabled on Cavium ThunderX
Will Deacon <[email protected]>
arm64: kpti: Add ->enable callback to remap swapper using nG mappings
Will Deacon <[email protected]>
arm64: mm: Permit transitioning from Global to Non-Global without BBM
Will Deacon <[email protected]>
arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()
Jayachandran C <[email protected]>
arm64: Turn on KPTI only on CPUs that need it
Jayachandran C <[email protected]>
arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
Catalin Marinas <[email protected]>
arm64: kpti: Fix the interaction between ASID switching and software PAN
Will Deacon <[email protected]>
arm64: mm: Introduce TTBR_ASID_MASK for getting at the ASID in the TTBR
Suzuki K Poulose <[email protected]>
arm64: capabilities: Handle duplicate entries for a capability
Will Deacon <[email protected]>
arm64: Take into account ID_AA64PFR0_EL1.CSV3
Will Deacon <[email protected]>
arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
Will Deacon <[email protected]>
arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
Will Deacon <[email protected]>
arm64: use RET instruction for exiting the trampoline
Will Deacon <[email protected]>
arm64: kaslr: Put kernel vectors address in separate data page
Will Deacon <[email protected]>
arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
Will Deacon <[email protected]>
arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
Stephen Boyd <[email protected]>
arm64: cpu_errata: Add Kryo to Falkor 1003 errata
Will Deacon <[email protected]>
arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
Will Deacon <[email protected]>
arm64: entry: Hook up entry trampoline to exception vectors
Will Deacon <[email protected]>
arm64: entry: Explicitly pass exception level to kernel_ventry macro
Will Deacon <[email protected]>
arm64: mm: Map entry trampoline into trampoline and kernel page tables
Will Deacon <[email protected]>
arm64: entry: Add exception trampoline page for exceptions from EL0
Will Deacon <[email protected]>
arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
Will Deacon <[email protected]>
arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
Will Deacon <[email protected]>
arm64: mm: Allocate ASIDs in pairs
Will Deacon <[email protected]>
arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
Will Deacon <[email protected]>
arm64: mm: Rename post_ttbr0_update_workaround
Will Deacon <[email protected]>
arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003
Will Deacon <[email protected]>
arm64: mm: Move ASID from TTBR0 to TTBR1
Will Deacon <[email protected]>
arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN
Will Deacon <[email protected]>
arm64: mm: Use non-global mappings for kernel space
Arvind Yadav <[email protected]>
media: hdpvr: Fix an error handling path in hdpvr_probe()
Malcolm Priestley <[email protected]>
media: dvb-usb-v2: lmedm04: move ts2020 attach to dm04_lme2510_tuner
Malcolm Priestley <[email protected]>
media: dvb-usb-v2: lmedm04: Improve logic checking of warm start
Steven Rostedt (VMware) <[email protected]>
sched/rt: Up the root domain ref count when passing it around via IPIs
Steven Rostedt (VMware) <[email protected]>
sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
Lionel Landwerlin <[email protected]>
Revert "drm/i915: mark all device info struct with __initconst"
Rasmus Villemoes <[email protected]>
watchdog: gpio_wdt: set WDOG_HW_RUNNING in gpio_wdt_stop
Sven Joachim <[email protected]>
ssb: Do not disable PCI host on non-Mips
Yang Shunyong <[email protected]>
dmaengine: dmatest: fix container_of member in dmatest_callback
Andrew-sh Cheng <[email protected]>
cpufreq: mediatek: add mediatek related projects into blacklist
Aurelien Aptel <[email protected]>
CIFS: zero sensitive data when freeing
Daniel N Pettersson <[email protected]>
cifs: Fix autonegotiate security settings mismatch
Matthew Wilcox <[email protected]>
cifs: Fix missing put_xid in cifs_file_strict_mmap
Matt Redfearn <[email protected]>
watchdog: indydog: Add dependency on SGI_HAS_INDYDOG
-------------
Diffstat:
Documentation/arm64/silicon-errata.txt | 2 +-
.../devicetree/bindings/media/cec-gpio.txt | 6 +-
Makefile | 7 +-
arch/alpha/include/asm/futex.h | 8 +-
arch/alpha/kernel/osf_sys.c | 4 +-
arch/alpha/kernel/pci_impl.h | 3 +-
arch/alpha/kernel/process.c | 3 +-
arch/alpha/kernel/traps.c | 13 +-
arch/arm/crypto/crc32-ce-glue.c | 2 +
arch/arm/include/asm/kvm_host.h | 6 +
arch/arm/include/asm/kvm_mmu.h | 10 +
arch/arm/include/asm/kvm_psci.h | 27 -
arch/arm/kvm/handle_exit.c | 17 +-
arch/arm64/Kconfig | 46 +-
arch/arm64/boot/dts/marvell/armada-7040-db.dts | 6 +
arch/arm64/boot/dts/marvell/armada-8040-db.dts | 7 +
arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts | 6 +
arch/arm64/crypto/crc32-ce-glue.c | 2 +
arch/arm64/include/asm/asm-uaccess.h | 36 +-
arch/arm64/include/asm/assembler.h | 54 +-
arch/arm64/include/asm/barrier.h | 22 +
arch/arm64/include/asm/cpucaps.h | 5 +-
arch/arm64/include/asm/cputype.h | 9 +
arch/arm64/include/asm/efi.h | 12 +-
arch/arm64/include/asm/fixmap.h | 5 +
arch/arm64/include/asm/futex.h | 9 +-
arch/arm64/include/asm/kvm_asm.h | 2 +
arch/arm64/include/asm/kvm_host.h | 5 +
arch/arm64/include/asm/kvm_mmu.h | 38 +
arch/arm64/include/asm/kvm_psci.h | 27 -
arch/arm64/include/asm/mmu.h | 48 +
arch/arm64/include/asm/mmu_context.h | 12 +-
arch/arm64/include/asm/pgtable-hwdef.h | 1 +
arch/arm64/include/asm/pgtable-prot.h | 35 +-
arch/arm64/include/asm/pgtable.h | 1 +
arch/arm64/include/asm/proc-fns.h | 6 -
arch/arm64/include/asm/processor.h | 3 +
arch/arm64/include/asm/sysreg.h | 2 +
arch/arm64/include/asm/tlbflush.h | 16 +-
arch/arm64/include/asm/uaccess.h | 181 +++-
arch/arm64/kernel/Makefile | 4 +
arch/arm64/kernel/arm64ksyms.c | 4 +-
arch/arm64/kernel/asm-offsets.c | 6 +-
arch/arm64/kernel/bpi.S | 83 ++
arch/arm64/kernel/cpu-reset.S | 2 +-
arch/arm64/kernel/cpu_errata.c | 239 ++++-
arch/arm64/kernel/cpufeature.c | 138 ++-
arch/arm64/kernel/entry.S | 228 ++++-
arch/arm64/kernel/head.S | 2 +-
arch/arm64/kernel/process.c | 12 +-
arch/arm64/kernel/sleep.S | 2 +-
arch/arm64/kernel/vmlinux.lds.S | 22 +-
arch/arm64/kvm/handle_exit.c | 14 +-
arch/arm64/kvm/hyp/entry.S | 12 +
arch/arm64/kvm/hyp/hyp-entry.S | 20 +-
arch/arm64/kvm/hyp/switch.c | 13 +-
arch/arm64/lib/clear_user.S | 10 +-
arch/arm64/lib/copy_from_user.S | 4 +-
arch/arm64/lib/copy_in_user.S | 9 +-
arch/arm64/lib/copy_to_user.S | 4 +-
arch/arm64/mm/cache.S | 4 +-
arch/arm64/mm/context.c | 48 +-
arch/arm64/mm/fault.c | 36 +-
arch/arm64/mm/mmu.c | 35 +
arch/arm64/mm/proc.S | 224 ++++-
arch/arm64/xen/hypercall.S | 4 +-
arch/mn10300/mm/misalignment.c | 2 +-
arch/openrisc/kernel/traps.c | 10 +-
arch/powerpc/crypto/crc32c-vpmsum_glue.c | 1 +
arch/powerpc/kvm/Kconfig | 2 +-
arch/powerpc/kvm/book3s_hv.c | 16 +-
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 40 +-
arch/s390/crypto/crc32-vx.c | 3 +
arch/sh/kernel/traps_32.c | 3 +-
arch/sparc/crypto/crc32c_glue.c | 1 +
arch/x86/crypto/crc32-pclmul_glue.c | 1 +
arch/x86/crypto/crc32c-intel_glue.c | 1 +
arch/x86/crypto/poly1305_glue.c | 1 -
.../x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c | 10 +-
arch/x86/kvm/vmx.c | 6 +-
arch/x86/kvm/x86.h | 1 +
arch/xtensa/include/asm/futex.h | 23 +-
block/blk-core.c | 9 +
crypto/ahash.c | 33 +-
crypto/algif_hash.c | 52 +-
crypto/crc32_generic.c | 1 +
crypto/crc32c_generic.c | 1 +
crypto/cryptd.c | 10 +-
crypto/mcryptd.c | 10 +-
crypto/poly1305_generic.c | 17 +-
crypto/shash.c | 25 +-
drivers/acpi/nfit/core.c | 3 +
drivers/acpi/sbshc.c | 4 +-
drivers/ata/ahci.c | 37 +-
drivers/block/pktcdvd.c | 12 +-
drivers/bluetooth/btsdio.c | 9 +
drivers/bluetooth/btusb.c | 20 +-
drivers/char/ipmi/ipmi_dmi.c | 5 +-
drivers/clocksource/timer-stm32.c | 7 +-
drivers/cpufreq/cpufreq-dt-platdev.c | 8 +
drivers/crypto/bfin_crc.c | 3 +-
drivers/crypto/caam/ctrl.c | 8 +-
drivers/crypto/stm32/stm32_crc32.c | 2 +
drivers/crypto/talitos.c | 4 +
drivers/dma/dmatest.c | 2 +-
drivers/edac/octeon_edac-lmc.c | 1 +
drivers/firmware/psci.c | 57 +-
drivers/gpu/drm/i915/i915_pci.c | 94 +-
drivers/gpu/drm/i915/intel_dp.c | 6 +
drivers/hid/hid-core.c | 12 +-
drivers/media/dvb-core/dvb_frontend.c | 4 +-
drivers/media/dvb-frontends/ascot2e.c | 4 +-
drivers/media/dvb-frontends/cxd2841er.c | 4 +-
drivers/media/dvb-frontends/helene.c | 4 +-
drivers/media/dvb-frontends/horus3a.c | 4 +-
drivers/media/dvb-frontends/itd1000.c | 5 +-
drivers/media/dvb-frontends/mt312.c | 5 +-
drivers/media/dvb-frontends/stb0899_drv.c | 3 +-
drivers/media/dvb-frontends/stb6100.c | 6 +-
drivers/media/dvb-frontends/stv0367.c | 4 +-
drivers/media/dvb-frontends/stv090x.c | 4 +-
drivers/media/dvb-frontends/stv6110x.c | 4 +-
drivers/media/dvb-frontends/ts2020.c | 4 +-
drivers/media/dvb-frontends/zl10039.c | 4 +-
drivers/media/platform/vivid/vivid-core.h | 1 +
drivers/media/platform/vivid/vivid-ctrls.c | 35 +-
drivers/media/usb/dvb-usb-v2/lmedm04.c | 39 +-
drivers/media/usb/dvb-usb/cxusb.c | 2 +
drivers/media/usb/dvb-usb/dib0700_devices.c | 1 +
drivers/media/usb/hdpvr/hdpvr-core.c | 26 +-
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 1032 ++++++++++++--------
drivers/media/v4l2-core/v4l2-ioctl.c | 145 ++-
drivers/mtd/nand/brcmnand/brcmnand.c | 13 +-
drivers/mtd/nand/nand_base.c | 5 +-
drivers/mtd/nand/sunxi_nand.c | 8 +-
drivers/mtd/ubi/block.c | 42 +-
drivers/mtd/ubi/vmt.c | 15 +-
drivers/mtd/ubi/wl.c | 77 +-
drivers/perf/arm_spe_pmu.c | 9 +
drivers/pinctrl/intel/pinctrl-intel.c | 23 +-
drivers/pinctrl/pinctrl-mcp23s08.c | 8 +-
drivers/pinctrl/pinctrl-sx150x.c | 40 +-
drivers/scsi/cxlflash/main.c | 1 +
drivers/scsi/hosts.c | 6 +
drivers/scsi/lpfc/lpfc_init.c | 84 +-
drivers/scsi/scsi_error.c | 18 +-
drivers/scsi/scsi_lib.c | 39 +-
drivers/ssb/Kconfig | 2 +-
.../lustre/lnet/libcfs/linux/linux-crypto-adler.c | 1 +
drivers/watchdog/Kconfig | 2 +-
drivers/watchdog/gpio_wdt.c | 3 +-
drivers/watchdog/imx2_wdt.c | 20 +-
fs/afs/addr_list.c | 13 +-
fs/afs/rotate.c | 20 +-
fs/afs/server_list.c | 3 +-
fs/afs/vlclient.c | 10 +-
fs/afs/volume.c | 47 +-
fs/btrfs/inode.c | 11 +-
fs/btrfs/raid56.c | 11 +-
fs/cifs/cifsencrypt.c | 3 +-
fs/cifs/connect.c | 6 +-
fs/cifs/file.c | 26 +-
fs/cifs/misc.c | 14 +-
fs/cifs/smb2pdu.c | 3 +-
fs/devpts/inode.c | 4 +-
fs/kernfs/file.c | 2 +-
fs/nfs/direct.c | 4 +-
fs/nfs/filelayout/filelayout.c | 4 +-
fs/nfs/io.c | 2 +-
fs/nfs/nfs4idmap.c | 6 +-
fs/nfs/nfs4xdr.c | 64 +-
fs/nfs/pnfs.c | 4 +-
fs/nfs/write.c | 2 +
fs/nfsd/nfs4state.c | 1 +
fs/overlayfs/inode.c | 39 +-
fs/overlayfs/readdir.c | 17 +-
fs/overlayfs/super.c | 38 +-
fs/overlayfs/util.c | 4 +-
fs/pipe.c | 15 +-
fs/proc/kcore.c | 18 +-
fs/ubifs/dir.c | 10 +-
include/crypto/hash.h | 34 +-
include/crypto/internal/hash.h | 2 +
include/crypto/poly1305.h | 2 -
include/kvm/arm_psci.h | 51 +
include/linux/arm-smccc.h | 165 +++-
include/linux/crypto.h | 8 +
include/linux/mtd/map.h | 130 ++-
include/linux/nfs4.h | 12 +-
include/linux/psci.h | 14 +
include/scsi/scsi_host.h | 2 +
include/uapi/linux/psci.h | 3 +
kernel/async.c | 20 +-
kernel/irq/autoprobe.c | 2 +-
kernel/irq/chip.c | 6 +-
kernel/irq/internals.h | 2 +-
kernel/relay.c | 1 -
kernel/sched/rt.c | 24 +-
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 13 +
kernel/trace/ftrace.c | 1 -
lib/Kconfig.debug | 2 +-
lib/Kconfig.kasan | 11 +
lib/ubsan.c | 50 +-
lib/ubsan.h | 14 +
scripts/Makefile.kasan | 5 +
scripts/Makefile.lib | 2 +-
sound/soc/intel/skylake/skl-nhlt.c | 3 +-
sound/soc/rockchip/rockchip_i2s.c | 6 +
sound/soc/soc-acpi.c | 8 +-
sound/soc/soc-compress.c | 8 +-
tools/objtool/check.c | 41 +-
tools/objtool/check.h | 1 +
virt/kvm/arm/arm.c | 11 +-
virt/kvm/arm/psci.c | 143 ++-
215 files changed, 3788 insertions(+), 1624 deletions(-)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 85d13c001497 upstream.
The pre_ttbr0_update_workaround hook is called prior to context-switching
TTBR0 because Falkor erratum E1003 can cause TLB allocation with the wrong
ASID if both the ASID and the base address of the TTBR are updated at
the same time.
With the ASID sitting safely in TTBR1, we no longer update things
atomically, so we can remove the pre_ttbr0_update_workaround macro as
it's no longer required. The erratum infrastructure and documentation
is left around for #E1003, as it will be required by the entry
trampoline code in a future patch.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 22 ----------------------
arch/arm64/include/asm/mmu_context.h | 2 --
arch/arm64/mm/context.c | 11 -----------
arch/arm64/mm/proc.S | 1 -
4 files changed, 36 deletions(-)
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -26,7 +26,6 @@
#include <asm/asm-offsets.h>
#include <asm/cpufeature.h>
#include <asm/debug-monitors.h>
-#include <asm/mmu_context.h>
#include <asm/page.h>
#include <asm/pgtable-hwdef.h>
#include <asm/ptrace.h>
@@ -478,27 +477,6 @@ alternative_endif
.endm
/*
- * Errata workaround prior to TTBR0_EL1 update
- *
- * val: TTBR value with new BADDR, preserved
- * tmp0: temporary register, clobbered
- * tmp1: other temporary register, clobbered
- */
- .macro pre_ttbr0_update_workaround, val, tmp0, tmp1
-#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
-alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
- mrs \tmp0, ttbr0_el1
- mov \tmp1, #FALKOR_RESERVED_ASID
- bfi \tmp0, \tmp1, #48, #16 // reserved ASID + old BADDR
- msr ttbr0_el1, \tmp0
- isb
- bfi \tmp0, \val, #0, #48 // reserved ASID + new BADDR
- msr ttbr0_el1, \tmp0
- isb
-alternative_else_nop_endif
-#endif
- .endm
-
/*
* Errata workaround post TTBR0_EL1 update.
*/
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -19,8 +19,6 @@
#ifndef __ASM_MMU_CONTEXT_H
#define __ASM_MMU_CONTEXT_H
-#define FALKOR_RESERVED_ASID 1
-
#ifndef __ASSEMBLY__
#include <linux/compiler.h>
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -79,13 +79,6 @@ void verify_cpu_asid_bits(void)
}
}
-static void set_reserved_asid_bits(void)
-{
- if (IS_ENABLED(CONFIG_QCOM_FALKOR_ERRATUM_1003) &&
- cpus_have_const_cap(ARM64_WORKAROUND_QCOM_FALKOR_E1003))
- __set_bit(FALKOR_RESERVED_ASID, asid_map);
-}
-
static void flush_context(unsigned int cpu)
{
int i;
@@ -94,8 +87,6 @@ static void flush_context(unsigned int c
/* Update the list of reserved ASIDs and the ASID bitmap. */
bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
- set_reserved_asid_bits();
-
for_each_possible_cpu(i) {
asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
/*
@@ -254,8 +245,6 @@ static int asids_init(void)
panic("Failed to allocate bitmap for %lu ASIDs\n",
NUM_USER_ASIDS);
- set_reserved_asid_bits();
-
pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
return 0;
}
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -138,7 +138,6 @@ ENDPROC(cpu_do_resume)
* - pgd_phys - physical address of new TTB
*/
ENTRY(cpu_do_switch_mm)
- pre_ttbr0_update_workaround x0, x2, x3
mrs x2, ttbr1_el1
mmid x1, x1 // get mm->context.id
bfi x2, x1, #48, #16 // set the ASID
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 158d495899ce upstream.
The post_ttbr0_update_workaround hook applies to any change to TTBRx_EL1.
Since we're using TTBR1 for the ASID, rename the hook to make it clearer
as to what it's doing.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 5 ++---
arch/arm64/kernel/entry.S | 2 +-
arch/arm64/mm/proc.S | 2 +-
3 files changed, 4 insertions(+), 5 deletions(-)
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -477,10 +477,9 @@ alternative_endif
.endm
/*
-/*
- * Errata workaround post TTBR0_EL1 update.
+ * Errata workaround post TTBRx_EL1 update.
*/
- .macro post_ttbr0_update_workaround
+ .macro post_ttbr_update_workaround
#ifdef CONFIG_CAVIUM_ERRATUM_27456
alternative_if ARM64_WORKAROUND_CAVIUM_27456
ic iallu
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -257,7 +257,7 @@ alternative_else_nop_endif
* Cavium erratum 27456 (broadcast TLBI instructions may cause I-cache
* corruption).
*/
- post_ttbr0_update_workaround
+ post_ttbr_update_workaround
.endif
1:
.if \el != 0
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -145,7 +145,7 @@ ENTRY(cpu_do_switch_mm)
isb
msr ttbr0_el1, x0 // now update TTBR0
isb
- post_ttbr0_update_workaround
+ post_ttbr_update_workaround
ret
ENDPROC(cpu_do_switch_mm)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit e78eef554a91 upstream.
Since PSCI 1.0 allows the SMCCC version to be (indirectly) probed,
let's do that at boot time, and expose the version of the calling
convention as part of the psci_ops structure.
Acked-by: Lorenzo Pieralisi <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/firmware/psci.c | 27 +++++++++++++++++++++++++++
include/linux/psci.h | 6 ++++++
2 files changed, 33 insertions(+)
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -61,6 +61,7 @@ bool psci_tos_resident_on(int cpu)
struct psci_operations psci_ops = {
.conduit = PSCI_CONDUIT_NONE,
+ .smccc_version = SMCCC_VERSION_1_0,
};
typedef unsigned long (psci_fn)(unsigned long, unsigned long,
@@ -511,6 +512,31 @@ static void __init psci_init_migrate(voi
pr_info("Trusted OS resident on physical CPU 0x%lx\n", cpuid);
}
+static void __init psci_init_smccc(void)
+{
+ u32 ver = ARM_SMCCC_VERSION_1_0;
+ int feature;
+
+ feature = psci_features(ARM_SMCCC_VERSION_FUNC_ID);
+
+ if (feature != PSCI_RET_NOT_SUPPORTED) {
+ u32 ret;
+ ret = invoke_psci_fn(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0);
+ if (ret == ARM_SMCCC_VERSION_1_1) {
+ psci_ops.smccc_version = SMCCC_VERSION_1_1;
+ ver = ret;
+ }
+ }
+
+ /*
+ * Conveniently, the SMCCC and PSCI versions are encoded the
+ * same way. No, this isn't accidental.
+ */
+ pr_info("SMC Calling Convention v%d.%d\n",
+ PSCI_VERSION_MAJOR(ver), PSCI_VERSION_MINOR(ver));
+
+}
+
static void __init psci_0_2_set_functions(void)
{
pr_info("Using standard PSCI v0.2 function IDs\n");
@@ -559,6 +585,7 @@ static int __init psci_probe(void)
psci_init_migrate();
if (PSCI_VERSION_MAJOR(ver) >= 1) {
+ psci_init_smccc();
psci_init_cpu_suspend();
psci_init_system_suspend();
}
--- a/include/linux/psci.h
+++ b/include/linux/psci.h
@@ -31,6 +31,11 @@ enum psci_conduit {
PSCI_CONDUIT_HVC,
};
+enum smccc_version {
+ SMCCC_VERSION_1_0,
+ SMCCC_VERSION_1_1,
+};
+
struct psci_operations {
u32 (*get_version)(void);
int (*cpu_suspend)(u32 state, unsigned long entry_point);
@@ -41,6 +46,7 @@ struct psci_operations {
unsigned long lowest_affinity_level);
int (*migrate_info_type)(void);
enum psci_conduit conduit;
+ enum smccc_version smccc_version;
};
extern struct psci_operations psci_ops;
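For context, a minimal sketch (not part of this patch; the helper name is made up) of how code later in this series can consume the newly exposed field, for instance when deciding whether SMCCC 1.1 fast calls such as ARM_SMCCC_ARCH_WORKAROUND_1 are worth probing for:

#include <linux/psci.h>

static bool smccc_1_1_available(void)
{
	/* Both fields are filled in by psci_probe() via this series. */
	return psci_ops.smccc_version >= SMCCC_VERSION_1_1 &&
	       psci_ops.conduit != PSCI_CONDUIT_NONE;
}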
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit 7f1bda447c9bd48b415acedba6b830f61591601f upstream.
The commit list can get very large, and so we need a cond_resched()
in nfs_commit_release_pages() in order to ensure we don't hog the CPU
for excessive periods of time.
Reported-by: Mike Galbraith <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfs/write.c | 2 ++
1 file changed, 2 insertions(+)
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -1837,6 +1837,8 @@ static void nfs_commit_release_pages(str
set_bit(NFS_CONTEXT_RESEND_WRITES, &req->wb_context->flags);
next:
nfs_unlock_and_release_request(req);
+ /* Latency breaker */
+ cond_resched();
}
nfss = NFS_SERVER(data->inode);
if (atomic_long_read(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Charles Keepax <[email protected]>
commit 290df4d3ab192821b66857c05346b23056ee9545 upstream.
The soc_compr_copy callback is currently broken. Since the
changes to move the compr_ops over to the component the return
value is not correctly propagated, always returning zero on
success rather than the number of bytes copied. This causes
user-space to stall continuously reading as it does not believe
it has received any data.
Furthermore, the changes to move the compr_ops over to the
component iterate through the list of components and will call
the copy callback for any that have compressed ops. There isn't
currently any consensus on the mechanism to combine the results
of multiple copy callbacks.
To fix this issue for now halt searching the component list when
we locate a copy callback and return the result of that single
callback. Additional work should probably be done to look at the
other ops, tidy things up, and work out if we want to support
multiple components on a single compressed stream, but this is the only
fix required to get things working again.
Fixes: 9e7e3738ab0e ("ASoC: snd_soc_component_driver has snd_compr_ops")
Signed-off-by: Charles Keepax <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
sound/soc/soc-compress.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/sound/soc/soc-compress.c
+++ b/sound/soc/soc-compress.c
@@ -944,7 +944,7 @@ static int soc_compr_copy(struct snd_com
struct snd_soc_platform *platform = rtd->platform;
struct snd_soc_component *component;
struct snd_soc_rtdcom_list *rtdcom;
- int ret = 0, __ret;
+ int ret = 0;
mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass);
@@ -965,10 +965,10 @@ static int soc_compr_copy(struct snd_com
!component->driver->compr_ops->copy)
continue;
- __ret = component->driver->compr_ops->copy(cstream, buf, count);
- if (__ret < 0)
- ret = __ret;
+ ret = component->driver->compr_ops->copy(cstream, buf, count);
+ break;
}
+
err:
mutex_unlock(&rtd->pcm_mutex);
return ret;
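To illustrate the user-space stall described above, here is a hypothetical reader loop (the drain() helper and fd are made up, only read() is real): when the copy callback reports 0 instead of the number of bytes copied, the loop never makes progress.

#include <unistd.h>

static ssize_t drain(int fd, char *buf, size_t len)
{
	size_t got = 0;

	while (got < len) {
		ssize_t n = read(fd, buf + got, len - got);

		if (n < 0)
			return -1;	/* genuine error */
		if (n == 0)
			continue;	/* broken copy callback: looks like "no data yet" */
		got += n;		/* fixed callback: real progress */
	}
	return got;
}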
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann <[email protected]>
commit e7c52b84fb18f08ce49b6067ae6285aca79084a8 upstream.
We get a lot of very large stack frames using gcc-7.0.1 with the default
-fsanitize-address-use-after-scope --param asan-stack=1 options, which can
easily cause an overflow of the kernel stack, e.g.
drivers/gpu/drm/i915/gvt/handlers.c:2434:1: warning: the frame size of 46176 bytes is larger than 3072 bytes
drivers/net/wireless/ralink/rt2x00/rt2800lib.c:5650:1: warning: the frame size of 23632 bytes is larger than 3072 bytes
lib/atomic64_test.c:250:1: warning: the frame size of 11200 bytes is larger than 3072 bytes
drivers/gpu/drm/i915/gvt/handlers.c:2621:1: warning: the frame size of 9208 bytes is larger than 3072 bytes
drivers/media/dvb-frontends/stv090x.c:3431:1: warning: the frame size of 6816 bytes is larger than 3072 bytes
fs/fscache/stats.c:287:1: warning: the frame size of 6536 bytes is larger than 3072 bytes
To reduce this risk, -fsanitize-address-use-after-scope is now split out
into a separate CONFIG_KASAN_EXTRA Kconfig option, leading to stack
frames that are smaller than 2 kilobytes most of the time on x86_64. An
earlier version of this patch also prevented combining KASAN_EXTRA with
KASAN_INLINE, but that is no longer necessary with gcc-7.0.1.
All patches to get the frame size below 2048 bytes with CONFIG_KASAN=y
and CONFIG_KASAN_EXTRA=n have been merged by maintainers now, so we can
bring back that default now. KASAN_EXTRA=y still causes lots of
warnings but now defaults to !COMPILE_TEST to disable it in
allmodconfig, and it remains disabled in all other defconfigs since it
is a new option. I arbitrarily raise the warning limit for KASAN_EXTRA
to 3072 to reduce the noise, but an allmodconfig kernel still has around
50 warnings on gcc-7.
I experimented a bit more with smaller stack frames and have another
follow-up series that reduces the warning limit for 64-bit architectures
to 1280 bytes (without CONFIG_KASAN).
With earlier versions of this patch series, I also had patches to address
the warnings we get with KASAN and/or KASAN_EXTRA, using a
"noinline_if_stackbloat" annotation.
That annotation now got replaced with a gcc-8 bugfix (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715) and a workaround for
older compilers, which means that KASAN_EXTRA is now just as bad as
before and will lead to an instant stack overflow in a few extreme
cases.
This reverts parts of commit 3f181b4d8652 ("lib/Kconfig.debug: disable
-Wframe-larger-than warnings with KASAN=y"). Two patches in linux-next
should be merged first to avoid introducing warnings in an allmodconfig
build:
3cd890dbe2a4 ("media: dvb-frontends: fix i2c access helpers for KASAN")
16c3ada89cff ("media: r820t: fix r820t_write_reg for KASAN")
Do we really need to backport this?
I think we do: without this patch, enabling KASAN will lead to
unavoidable kernel stack overflow in certain device drivers when built
with gcc-7 or higher on linux-4.10+ or any version that contains a
backport of commit c5caf21ab0cf8. Most people are probably still on
older compilers, but it will get worse over time as they upgrade their
distros.
The warnings we get on kernels older than this should all be for code
that uses dangerously large stack frames, though most of them do not
cause an actual stack overflow by themselves. The asan-stack option was
added in linux-4.0, and commit 3f181b4d8652 ("lib/Kconfig.debug:
disable -Wframe-larger-than warnings with KASAN=y") effectively turned
off the warning for allmodconfig kernels, so I would like to see this
fix backported to any kernels later than 4.0.
I have done dozens of fixes for individual functions with stack frames
larger than 2048 bytes with asan-stack, and I plan to make sure that
all those fixes make it into the stable kernels as well (most are
already there).
Part of the complication here is that asan-stack (from 4.0) was
originally assumed to always require much larger stacks, but that
turned out to be a combination of multiple gcc bugs that we have now
worked around and fixed, but sanitize-address-use-after-scope (from
v4.10) has a much higher inherent stack usage and also suffers from at
least three other problems that we have analyzed but not yet fixed
upstream, each of them makes the stack usage more severe than it should
be.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnd Bergmann <[email protected]>
Acked-by: Andrey Ryabinin <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
lib/Kconfig.debug | 2 +-
lib/Kconfig.kasan | 11 +++++++++++
scripts/Makefile.kasan | 2 ++
3 files changed, 14 insertions(+), 1 deletion(-)
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -217,7 +217,7 @@ config ENABLE_MUST_CHECK
config FRAME_WARN
int "Warn for stack frames larger than (needs gcc 4.4)"
range 0 8192
- default 0 if KASAN
+ default 3072 if KASAN_EXTRA
default 2048 if GCC_PLUGIN_LATENT_ENTROPY
default 1280 if (!64BIT && PARISC)
default 1024 if (!64BIT && !PARISC)
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -20,6 +20,17 @@ config KASAN
Currently CONFIG_KASAN doesn't work with CONFIG_DEBUG_SLAB
(the resulting kernel does not boot).
+config KASAN_EXTRA
+ bool "KAsan: extra checks"
+ depends on KASAN && DEBUG_KERNEL && !COMPILE_TEST
+ help
+ This enables further checks in the kernel address sanitizer, for now
+ it only includes the address-use-after-scope check that can lead
+ to excessive kernel stack usage, frame size warnings and longer
+ compile time.
+ https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 has more
+
+
choice
prompt "Instrumentation type"
depends on KASAN
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -30,7 +30,9 @@ else
endif
endif
+ifdef CONFIG_KASAN_EXTRA
CFLAGS_KASAN += $(call cc-option, -fsanitize-address-use-after-scope)
+endif
CFLAGS_KASAN_NOSANITIZE := -fno-builtin
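As a rough illustration (not from this patch; the function below is made up), this is the class of bug that the address-use-after-scope instrumentation behind the new CONFIG_KASAN_EXTRA option catches, and why every such scope needs its own instrumented stack slot:

int use_after_scope(void)
{
	int *p;

	{
		int x = 42;	/* gets its own instrumented stack slot */
		p = &x;
	}			/* x goes out of scope here */

	return *p;		/* KASAN_EXTRA reports a use-after-scope access */
}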
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells <[email protected]>
commit 8305e579c653b127b292fcdce551e930f9560260 upstream.
In afs_select_fileserver(), we need to clear the ->responded flag in the
address list when reusing it. We should also clear it in
afs_select_current_fileserver().
To this end, just memset() the object before initialising it.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: David Howells <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/afs/rotate.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
--- a/fs/afs/rotate.c
+++ b/fs/afs/rotate.c
@@ -383,6 +383,7 @@ use_server:
afs_get_addrlist(alist);
read_unlock(&server->fs_lock);
+ memset(&fc->ac, 0, sizeof(fc->ac));
/* Probe the current fileserver if we haven't done so yet. */
if (!test_bit(AFS_SERVER_FL_PROBED, &server->flags)) {
@@ -397,11 +398,8 @@ use_server:
else
afs_put_addrlist(alist);
- fc->ac.addr = NULL;
fc->ac.start = READ_ONCE(alist->index);
fc->ac.index = fc->ac.start;
- fc->ac.error = 0;
- fc->ac.begun = false;
goto iterate_address;
iterate_address:
@@ -458,12 +456,10 @@ bool afs_select_current_fileserver(struc
return false;
}
+ memset(&fc->ac, 0, sizeof(fc->ac));
fc->ac.alist = alist;
- fc->ac.addr = NULL;
fc->ac.start = READ_ONCE(alist->index);
fc->ac.index = fc->ac.start;
- fc->ac.error = 0;
- fc->ac.begun = false;
goto iterate_address;
case 0:
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: LEROY Christophe <[email protected]>
commit 87a81dce53b1ea61acaeefa5191a0376a2d1d721 upstream.
Performing the hash of an empty file leads to a kernel Oops
[ 44.504600] Unable to handle kernel paging request for data at address 0x0000000c
[ 44.512819] Faulting instruction address: 0xc02d2be8
[ 44.524088] Oops: Kernel access of bad area, sig: 11 [#1]
[ 44.529171] BE PREEMPT CMPC885
[ 44.532232] CPU: 0 PID: 491 Comm: md5sum Not tainted 4.15.0-rc8-00211-g3a968610b6ea #81
[ 44.540814] NIP: c02d2be8 LR: c02d2984 CTR: 00000000
[ 44.545812] REGS: c6813c90 TRAP: 0300 Not tainted (4.15.0-rc8-00211-g3a968610b6ea)
[ 44.554223] MSR: 00009032 <EE,ME,IR,DR,RI> CR: 48222822 XER: 20000000
[ 44.560855] DAR: 0000000c DSISR: c0000000
[ 44.560855] GPR00: c02d28fc c6813d40 c6828000 c646fa40 00000001 00000001 00000001 00000000
[ 44.560855] GPR08: 0000004c 00000000 c000bfcc 00000000 28222822 100280d4 00000000 10020008
[ 44.560855] GPR16: 00000000 00000020 00000000 00000000 10024008 00000000 c646f9f0 c6179a10
[ 44.560855] GPR24: 00000000 00000001 c62f0018 c6179a10 00000000 c6367a30 c62f0000 c646f9c0
[ 44.598542] NIP [c02d2be8] ahash_process_req+0x448/0x700
[ 44.603751] LR [c02d2984] ahash_process_req+0x1e4/0x700
[ 44.608868] Call Trace:
[ 44.611329] [c6813d40] [c02d28fc] ahash_process_req+0x15c/0x700 (unreliable)
[ 44.618302] [c6813d90] [c02060c4] hash_recvmsg+0x11c/0x210
[ 44.623716] [c6813db0] [c0331354] ___sys_recvmsg+0x98/0x138
[ 44.629226] [c6813eb0] [c03332c0] __sys_recvmsg+0x40/0x84
[ 44.634562] [c6813f10] [c03336c0] SyS_socketcall+0xb8/0x1d4
[ 44.640073] [c6813f40] [c000d1ac] ret_from_syscall+0x0/0x38
[ 44.645530] Instruction dump:
[ 44.648465] 38c00001 7f63db78 4e800421 7c791b78 54690ffe 0f090000 80ff0190 2f870000
[ 44.656122] 40befe50 2f990001 409e0210 813f01bc <8129000c> b39e003a 7d29c214 913e003c
This patch fixes that Oops by checking if src is NULL.
Fixes: 6a1e8d14156d4 ("crypto: talitos - making mapping helpers more generic")
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/crypto/talitos.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -1138,6 +1138,10 @@ static int talitos_sg_map(struct device
struct talitos_private *priv = dev_get_drvdata(dev);
bool is_sec1 = has_ftr_sec1(priv);
+ if (!src) {
+ to_talitos_ptr(ptr, 0, 0, is_sec1);
+ return 1;
+ }
if (sg_count == 1) {
to_talitos_ptr(ptr, sg_dma_address(src) + offset, len, is_sec1);
return sg_count;
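A hedged user-space reproducer sketch (not part of the patch, error handling omitted) that requests the digest of a zero-length message through AF_ALG, matching the hash_recvmsg() frames in the oops above:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "md5",
	};
	unsigned char digest[16];
	int tfm, op;

	tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfm, (struct sockaddr *)&sa, sizeof(sa));
	op = accept(tfm, NULL, 0);

	/* No send() at all: ask for the digest of an empty message. */
	if (read(op, digest, sizeof(digest)) == sizeof(digest))
		printf("empty-message digest computed\n");

	close(op);
	close(tfm);
	return 0;
}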
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Mackerras <[email protected]>
commit 43ff3f65234061e08d234bdef5a9aadc19832b74 upstream.
This fixes a bug where it is possible to enter a guest on a POWER9
system without having the XIVE (interrupt controller) context loaded.
This can happen because we unload the XIVE context from the CPU
before doing the real-mode handling for machine checks. After the
real-mode handler runs, it is possible that we re-enter the guest
via a fast path which does not load the XIVE context.
To fix this, we move the unloading of the XIVE context to come after
the real-mode machine check handler is called.
Fixes: 5af50993850a ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller")
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 40 ++++++++++++++++----------------
1 file changed, 20 insertions(+), 20 deletions(-)
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1423,6 +1423,26 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
blt deliver_guest_interrupt
guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
+ /* Save more register state */
+ mfdar r6
+ mfdsisr r7
+ std r6, VCPU_DAR(r9)
+ stw r7, VCPU_DSISR(r9)
+ /* don't overwrite fault_dar/fault_dsisr if HDSI */
+ cmpwi r12,BOOK3S_INTERRUPT_H_DATA_STORAGE
+ beq mc_cont
+ std r6, VCPU_FAULT_DAR(r9)
+ stw r7, VCPU_FAULT_DSISR(r9)
+
+ /* See if it is a machine check */
+ cmpwi r12, BOOK3S_INTERRUPT_MACHINE_CHECK
+ beq machine_check_realmode
+mc_cont:
+#ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
+ addi r3, r9, VCPU_TB_RMEXIT
+ mr r4, r9
+ bl kvmhv_accumulate_time
+#endif
#ifdef CONFIG_KVM_XICS
/* We are exiting, pull the VP from the XIVE */
lwz r0, VCPU_XIVE_PUSHED(r9)
@@ -1460,26 +1480,6 @@ guest_exit_cont: /* r9 = vcpu, r12 = tr
eieio
1:
#endif /* CONFIG_KVM_XICS */
- /* Save more register state */
- mfdar r6
- mfdsisr r7
- std r6, VCPU_DAR(r9)
- stw r7, VCPU_DSISR(r9)
- /* don't overwrite fault_dar/fault_dsisr if HDSI */
- cmpwi r12,BOOK3S_INTERRUPT_H_DATA_STORAGE
- beq mc_cont
- std r6, VCPU_FAULT_DAR(r9)
- stw r7, VCPU_FAULT_DSISR(r9)
-
- /* See if it is a machine check */
- cmpwi r12, BOOK3S_INTERRUPT_MACHINE_CHECK
- beq machine_check_realmode
-mc_cont:
-#ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
- addi r3, r9, VCPU_TB_RMEXIT
- mr r4, r9
- bl kvmhv_accumulate_time
-#endif
mr r3, r12
/* Increment exit count, poke other threads to exit */
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells <[email protected]>
commit fe4d774c847398c2a45c10a780ccfde069840793 upstream.
afs_select_fileserver() ends the address cursor it is using in the case in
which we get some sort of network error and run out of addresses to iterate
through, before it jumps to try the next server. This also needs to be
done when the server aborts with some sort of error that means we should
try the next server.
Fix this by:
(1) Move the iterate_address afs_end_cursor() call to the next_server
case.
(2) End the cursor in the failed case.
(3) Make afs_end_cursor() clear the ->begun flag and ->addr pointer in the
address cursor.
(4) Make afs_end_cursor() able to be called on an already cleared cursor.
Without this, something like the following oops may occur:
AFS: Assertion failed
18446612134397189888 == 0 is false
0xffff88007c279f00 == 0x0 is false
------------[ cut here ]------------
kernel BUG at fs/afs/rotate.c:360!
RIP: 0010:afs_select_fileserver+0x79b/0xa30 [kafs]
Call Trace:
afs_statfs+0xcc/0x180 [kafs]
? p9_client_statfs+0x9e/0x110 [9pnet]
? _cond_resched+0x19/0x40
statfs_by_dentry+0x6d/0x90
vfs_statfs+0x1b/0xc0
user_statfs+0x4b/0x80
SYSC_statfs+0x15/0x30
SyS_statfs+0xe/0x10
entry_SYSCALL_64_fastpath+0x20/0x83
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Marc Dionne <[email protected]>
Signed-off-by: David Howells <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/afs/addr_list.c | 13 ++++++++++---
fs/afs/rotate.c | 12 ++++++------
2 files changed, 16 insertions(+), 9 deletions(-)
--- a/fs/afs/addr_list.c
+++ b/fs/afs/addr_list.c
@@ -332,11 +332,18 @@ bool afs_iterate_addresses(struct afs_ad
*/
int afs_end_cursor(struct afs_addr_cursor *ac)
{
- if (ac->responded && ac->index != ac->start)
- WRITE_ONCE(ac->alist->index, ac->index);
+ struct afs_addr_list *alist;
- afs_put_addrlist(ac->alist);
+ alist = ac->alist;
+ if (alist) {
+ if (ac->responded && ac->index != ac->start)
+ WRITE_ONCE(alist->index, ac->index);
+ afs_put_addrlist(alist);
+ }
+
+ ac->addr = NULL;
ac->alist = NULL;
+ ac->begun = false;
return ac->error;
}
--- a/fs/afs/rotate.c
+++ b/fs/afs/rotate.c
@@ -334,6 +334,7 @@ start:
next_server:
_debug("next");
+ afs_end_cursor(&fc->ac);
afs_put_cb_interest(afs_v2net(vnode), fc->cbi);
fc->cbi = NULL;
fc->index++;
@@ -408,16 +409,15 @@ iterate_address:
/* Iterate over the current server's address list to try and find an
* address on which it will respond to us.
*/
- if (afs_iterate_addresses(&fc->ac)) {
- _leave(" = t");
- return true;
- }
+ if (!afs_iterate_addresses(&fc->ac))
+ goto next_server;
- afs_end_cursor(&fc->ac);
- goto next_server;
+ _leave(" = t");
+ return true;
failed:
fc->flags |= AFS_FS_CURSOR_STOP;
+ afs_end_cursor(&fc->ac);
_leave(" = f [failed %d]", fc->ac.error);
return false;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Rosin <[email protected]>
commit 1a1d39e1b8dd1d0f92a79da4fcc1ab0be3ae9bfc upstream.
Various gpiolib activity depend on the pinctrl to be up and kicking.
Therefore, register the pinctrl before adding a gpiochip.
Suggested-by: Linus Walleij <[email protected]>
Signed-off-by: Peter Rosin <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/pinctrl/pinctrl-sx150x.c | 35 +++++++++++++++++++++--------------
1 file changed, 21 insertions(+), 14 deletions(-)
--- a/drivers/pinctrl/pinctrl-sx150x.c
+++ b/drivers/pinctrl/pinctrl-sx150x.c
@@ -1144,6 +1144,27 @@ static int sx150x_probe(struct i2c_clien
if (ret)
return ret;
+ /* Pinctrl_desc */
+ pctl->pinctrl_desc.name = "sx150x-pinctrl";
+ pctl->pinctrl_desc.pctlops = &sx150x_pinctrl_ops;
+ pctl->pinctrl_desc.confops = &sx150x_pinconf_ops;
+ pctl->pinctrl_desc.pins = pctl->data->pins;
+ pctl->pinctrl_desc.npins = pctl->data->npins;
+ pctl->pinctrl_desc.owner = THIS_MODULE;
+
+ ret = devm_pinctrl_register_and_init(dev, &pctl->pinctrl_desc,
+ pctl, &pctl->pctldev);
+ if (ret) {
+ dev_err(dev, "Failed to register pinctrl device\n");
+ return ret;
+ }
+
+ ret = pinctrl_enable(pctl->pctldev);
+ if (ret) {
+ dev_err(dev, "Failed to enable pinctrl device\n");
+ return ret;
+ }
+
/* Register GPIO controller */
pctl->gpio.label = devm_kstrdup(dev, client->name, GFP_KERNEL);
pctl->gpio.base = -1;
@@ -1217,20 +1238,6 @@ static int sx150x_probe(struct i2c_clien
client->irq);
}
- /* Pinctrl_desc */
- pctl->pinctrl_desc.name = "sx150x-pinctrl";
- pctl->pinctrl_desc.pctlops = &sx150x_pinctrl_ops;
- pctl->pinctrl_desc.confops = &sx150x_pinconf_ops;
- pctl->pinctrl_desc.pins = pctl->data->pins;
- pctl->pinctrl_desc.npins = pctl->data->npins;
- pctl->pinctrl_desc.owner = THIS_MODULE;
-
- pctl->pctldev = devm_pinctrl_register(dev, &pctl->pinctrl_desc, pctl);
- if (IS_ERR(pctl->pctldev)) {
- dev_err(dev, "Failed to register pinctrl device\n");
- return PTR_ERR(pctl->pctldev);
- }
-
return 0;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Garry <[email protected]>
commit 5516e21a1e95e9b9f39985598431a25477d91643 upstream.
Currently a crash can be seen if we reach the "err"
label in dmi_add_platform_ipmi(), calling
platform_device_put(), like here:
[ 7.270584] (null): ipmi:dmi: Unable to add resources: -16
[ 7.330229] ------------[ cut here ]------------
[ 7.334889] kernel BUG at mm/slub.c:3894!
[ 7.338936] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 7.344475] Modules linked in:
[ 7.347556] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc2-00004-gbe9cb7b-dirty #114
[ 7.355907] Hardware name: Huawei Taishan 2280 /D05, BIOS Hisilicon D05 IT17 Nemo 2.0 RC0 11/29/2017
[ 7.365137] task: 00000000c211f6d3 task.stack: 00000000f276e9af
[ 7.371116] pstate: 60000005 (nZCv daif -PAN -UAO)
[ 7.375957] pc : kfree+0x194/0x1b4
[ 7.379389] lr : platform_device_release+0xcc/0xd8
[ 7.384225] sp : ffff0000092dba90
[ 7.387567] x29: ffff0000092dba90 x28: ffff000008a83000
[ 7.392933] x27: ffff0000092dbc10 x26: 00000000000000e6
[ 7.398297] x25: 0000000000000003 x24: ffff0000085b51e8
[ 7.403662] x23: 0000000000000100 x22: ffff7e0000234cc0
[ 7.409027] x21: ffff000008af3660 x20: ffff8017d21acc10
[ 7.414392] x19: ffff8017d21acc00 x18: 0000000000000002
[ 7.419757] x17: 0000000000000001 x16: 0000000000000008
[ 7.425121] x15: 0000000000000001 x14: 6666666678303d65
[ 7.430486] x13: 6469727265766f5f x12: 7265766972642e76
[ 7.435850] x11: 6564703e2d617020 x10: 6530326435373638
[ 7.441215] x9 : 3030303030303030 x8 : 3d76656420657361
[ 7.446580] x7 : ffff000008f59df8 x6 : ffff8017fbe0ea50
[ 7.451945] x5 : 0000000000000000 x4 : 0000000000000000
[ 7.457309] x3 : ffffffffffffffff x2 : 0000000000000000
[ 7.462674] x1 : 0fffc00000000800 x0 : ffff7e0000234ce0
[ 7.468039] Process swapper/0 (pid: 1, stack limit = 0x00000000f276e9af)
[ 7.474809] Call trace:
[ 7.477272] kfree+0x194/0x1b4
[ 7.480351] platform_device_release+0xcc/0xd8
[ 7.484837] device_release+0x34/0x90
[ 7.488531] kobject_put+0x70/0xcc
[ 7.491961] put_device+0x14/0x1c
[ 7.495304] platform_device_put+0x14/0x1c
[ 7.499439] dmi_add_platform_ipmi+0x348/0x3ac
[ 7.503923] scan_for_dmi_ipmi+0xfc/0x10c
[ 7.507970] do_one_initcall+0x38/0x124
[ 7.511840] kernel_init_freeable+0x188/0x228
[ 7.516238] kernel_init+0x10/0x100
[ 7.519756] ret_from_fork+0x10/0x18
[ 7.523362] Code: f94002c0 37780080 f94012c0 37000040 (d4210000)
[ 7.529552] ---[ end trace 11750e4787deef9e ]---
[ 7.534228] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[ 7.534228]
This is because when the device is released in
platform_device_release(), we try to free
pdev.driver_override. This is a const string, hence
the crash.
Fix by using dynamic memory for pdev->driver_override.
Signed-off-by: John Garry <[email protected]>
[Removed the free of driver_override from ipmi_si_remove_by_dev(). The
free is done in platform_device_release(), and would result in a double
free, and ipmi_si_remove_by_dev() is called by non-platform devices.]
Signed-off-by: Corey Minyard <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/char/ipmi/ipmi_dmi.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/char/ipmi/ipmi_dmi.c
+++ b/drivers/char/ipmi/ipmi_dmi.c
@@ -106,7 +106,10 @@ static void __init dmi_add_platform_ipmi
pr_err("ipmi:dmi: Error allocation IPMI platform device\n");
return;
}
- pdev->driver_override = override;
+ pdev->driver_override = kasprintf(GFP_KERNEL, "%s",
+ override);
+ if (!pdev->driver_override)
+ goto err;
if (type == IPMI_DMI_TYPE_SSIF) {
set_prop_entry(p[pidx++], "i2c-addr", u16, base_addr);
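For reference, a minimal sketch of the rule the fix follows (set_override() and the "ipmi_si" name are illustrative only, not the driver code): platform_device_release() kfree()s pdev->driver_override, so it must point to heap memory rather than a string literal.

#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/string.h>

static int set_override(struct platform_device *pdev)
{
	/*
	 * pdev->driver_override = "ipmi_si";  -- would crash later:
	 * platform_device_release() would kfree() a pointer into .rodata.
	 */
	pdev->driver_override = kstrdup("ipmi_si", GFP_KERNEL);
	if (!pdev->driver_override)
		return -ENOMEM;

	return 0;
}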
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede <[email protected]>
commit 61f5acea8737d9b717fcc22bb6679924f3c82b98 upstream.
Commit 7d06d5895c15 ("Revert "Bluetooth: btusb: fix QCA...suspend/resume"")
removed the setting of the BTUSB_RESET_RESUME quirk for QCA Rome devices,
instead favoring adding USB_QUIRK_RESET_RESUME quirks in usb/core/quirks.c.
This was done because the DIY BTUSB_RESET_RESUME reset-resume handling
has several issues (see the original commit message). An added advantage
of moving over to the USB-core reset-resume handling is that it also
disables autosuspend for these devices, which is similarly broken on these.
But there are 2 issues with this approach:
1) It leaves the broken DIY BTUSB_RESET_RESUME code in place for Realtek
devices.
2) So far only 2 of the 10 QCA devices known to the btusb code have been
added to usb/core/quirks.c, and if we fix the Realtek case the same way
we need to add an additional 14 entries. So in essence we need to
duplicate a large part of the usb_device_id table in btusb.c in
usb/core/quirks.c and manually keep them in sync.
This commit instead restores setting a reset-resume quirk for QCA devices
in the btusb.c code, avoiding the duplicate usb_device_id table problem.
This commit avoids the problems with the original DIY BTUSB_RESET_RESUME
code by simply setting the USB_QUIRK_RESET_RESUME quirk directly on the
usb_device.
This commit also moves the BTUSB_REALTEK case over to directly setting the
USB_QUIRK_RESET_RESUME on the usb_device and removes the now unused
BTUSB_RESET_RESUME code.
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1514836
Fixes: 7d06d5895c15 ("Revert "Bluetooth: btusb: fix QCA...suspend/resume"")
Cc: Leif Liddy <[email protected]>
Cc: Matthias Kaehlcke <[email protected]>
Cc: Brian Norris <[email protected]>
Cc: Daniel Drake <[email protected]>
Cc: Kai-Heng Feng <[email protected]>
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Marcel Holtmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/bluetooth/btusb.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -23,6 +23,7 @@
#include <linux/module.h>
#include <linux/usb.h>
+#include <linux/usb/quirks.h>
#include <linux/firmware.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
@@ -387,9 +388,8 @@ static const struct usb_device_id blackl
#define BTUSB_FIRMWARE_LOADED 7
#define BTUSB_FIRMWARE_FAILED 8
#define BTUSB_BOOTING 9
-#define BTUSB_RESET_RESUME 10
-#define BTUSB_DIAG_RUNNING 11
-#define BTUSB_OOB_WAKE_ENABLED 12
+#define BTUSB_DIAG_RUNNING 10
+#define BTUSB_OOB_WAKE_ENABLED 11
struct btusb_data {
struct hci_dev *hdev;
@@ -3117,6 +3117,12 @@ static int btusb_probe(struct usb_interf
if (id->driver_info & BTUSB_QCA_ROME) {
data->setup_on_usb = btusb_setup_qca;
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
+
+ /* QCA Rome devices lose their updated firmware over suspend,
+ * but the USB hub doesn't notice any status change.
+ * explicitly request a device reset on resume.
+ */
+ interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
}
#ifdef CONFIG_BT_HCIBTUSB_RTL
@@ -3127,7 +3133,7 @@ static int btusb_probe(struct usb_interf
* but the USB hub doesn't notice any status change.
* Explicitly request a device reset on resume.
*/
- set_bit(BTUSB_RESET_RESUME, &data->flags);
+ interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
}
#endif
@@ -3293,14 +3299,6 @@ static int btusb_suspend(struct usb_inte
enable_irq(data->oob_wake_irq);
}
- /* Optionally request a device reset on resume, but only when
- * wakeups are disabled. If wakeups are enabled we assume the
- * device will stay powered up throughout suspend.
- */
- if (test_bit(BTUSB_RESET_RESUME, &data->flags) &&
- !device_may_wakeup(&data->udev->dev))
- data->udev->reset_resume = 1;
-
return 0;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Uma Krishnan <[email protected]>
commit 96cf727fe8f102bf92150b741db71ee39fb8c521 upstream.
In the event of a command failure, cxlflash returns the failure to the upper
layers to process. After processing the error, when the command is queued
again, the private command structure will not be zeroed and the ioasc could be
stale. Per the SISLite specification, the AFU only sets the ioasc in the
presence of a failure. Thus, even though the original command succeeds the
second time, the command is considered a failure due to stale ioasc. This
cycle repeats indefinitely and can cause a hang or IO failure.
To fix the issue, clear the ioasc before queuing any command.
[mkp: added Cc: stable per request]
Fixes: 479ad8e9d48c ("scsi: cxlflash: Remove zeroing of private command data")
Signed-off-by: Uma Krishnan <[email protected]>
Acked-by: Matthew R. Ochs <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/scsi/cxlflash/main.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/scsi/cxlflash/main.c
+++ b/drivers/scsi/cxlflash/main.c
@@ -620,6 +620,7 @@ static int cxlflash_queuecommand(struct
cmd->parent = afu;
cmd->hwq_index = hwq_index;
+ cmd->sa.ioasc = 0;
cmd->rcb.ctx_id = hwq->ctx_hndl;
cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
cmd->rcb.port_sel = CHAN2PORTMASK(scp->device->channel);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Cree <[email protected]>
commit 84e455361ec97ea6037d31d42a2955628ea2094b upstream.
Fix the typo (mixed up arguments) in the EXC macro in the futex
definitions introduced by commit ca282f697381 (alpha: add a
helper for emitting exception table entries).
Signed-off-by: Michael Cree <[email protected]>
Signed-off-by: Matt Turner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/alpha/include/asm/futex.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/alpha/include/asm/futex.h
+++ b/arch/alpha/include/asm/futex.h
@@ -20,8 +20,8 @@
"3: .subsection 2\n" \
"4: br 1b\n" \
" .previous\n" \
- EXC(1b,3b,%1,$31) \
- EXC(2b,3b,%1,$31) \
+ EXC(1b,3b,$31,%1) \
+ EXC(2b,3b,$31,%1) \
: "=&r" (oldval), "=&r"(ret) \
: "r" (uaddr), "r"(oparg) \
: "memory")
@@ -82,8 +82,8 @@ futex_atomic_cmpxchg_inatomic(u32 *uval,
"3: .subsection 2\n"
"4: br 1b\n"
" .previous\n"
- EXC(1b,3b,%0,$31)
- EXC(2b,3b,%0,$31)
+ EXC(1b,3b,$31,%0)
+ EXC(2b,3b,$31,%0)
: "+r"(ret), "=&r"(prev), "=&r"(cmp)
: "r"(uaddr), "r"((long)(int)oldval), "r"(newval)
: "memory");
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann <[email protected]>
commit 47669fb6b5951d0e09fc99719653e0ac92b50b99 upstream.
There was a typo in the new version of put_tv32() that caused an unguarded
access of a user space pointer, and failed to return the correct result in
gettimeofday(), wait4(), usleep_thread() and old_adjtimex().
This fixes it to give the correct behavior again.
Fixes: 1cc6c4635e9f ("osf_sys.c: switch handling of timeval32/itimerval32 to copy_{to,from}_user()")
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/alpha/kernel/osf_sys.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -964,8 +964,8 @@ static inline long
put_tv32(struct timeval32 __user *o, struct timeval *i)
{
return copy_to_user(o, &(struct timeval32){
- .tv_sec = o->tv_sec,
- .tv_usec = o->tv_usec},
+ .tv_sec = i->tv_sec,
+ .tv_usec = i->tv_usec},
sizeof(struct timeval32));
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mikulas Patocka <[email protected]>
commit 4b01abdb32fc36abe877503bfbd33019159fad71 upstream.
Since version 4.9, the kernel automatically breaks consecutive printk calls
onto separate lines unless pr_cont is used. Fix the alpha stacktrace code
so that it prints the stack trace in four columns, as originally intended.
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Matt Turner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/alpha/kernel/traps.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
--- a/arch/alpha/kernel/traps.c
+++ b/arch/alpha/kernel/traps.c
@@ -160,11 +160,16 @@ void show_stack(struct task_struct *task
for(i=0; i < kstack_depth_to_print; i++) {
if (((long) stack & (THREAD_SIZE-1)) == 0)
break;
- if (i && ((i % 4) == 0))
- printk("\n ");
- printk("%016lx ", *stack++);
+ if ((i % 4) == 0) {
+ if (i)
+ pr_cont("\n");
+ printk(" ");
+ } else {
+ pr_cont(" ");
+ }
+ pr_cont("%016lx", *stack++);
}
- printk("\n");
+ pr_cont("\n");
dik_show_trace(sp);
}
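For reference, a minimal sketch of the printk()/pr_cont() behaviour the fix relies on (illustrative code, not the alpha implementation): since v4.9 each printk() without KERN_CONT starts a new line, so an output row built from several calls must use pr_cont() for the continuations.

#include <linux/kernel.h>
#include <linux/printk.h>

static void show_row_sketch(const unsigned long *stack)
{
	int i;

	printk("       ");		/* start a new output row */
	for (i = 0; i < 4; i++)
		pr_cont("%016lx ", stack[i]);
	pr_cont("\n");			/* terminate the row */
}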
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: James Smart <[email protected]>
commit e4b9794efdce13242f4af6682f3ed48ce3864a87 upstream.
In test cases where an instance of the driver is detached and
reattached, the driver will crash on reattachment. There is a compound
if statement that will skip over the BAR setup if the pci_resource_start
call is not successful. The driver erroneously returns success from its
BAR setup in this scenario even though the BARs aren't properly
configured.
Rework the offending code segment for proper initialization steps. If
the pci_resource_start call fails, -ENOMEM is now returned.
Sample stack:
rport-5:0-10: blocked FC remote port time out: removing rport
BUG: unable to handle kernel NULL pointer dereference at (null)
... lpfc_sli4_wait_bmbx_ready+0x32/0x70 [lpfc]
...
... RIP: 0010:... ... lpfc_sli4_wait_bmbx_ready+0x32/0x70 [lpfc]
Call Trace:
... lpfc_sli4_post_sync_mbox+0x106/0x4d0 [lpfc]
... ? __alloc_pages_nodemask+0x176/0x420
... ? __kmalloc+0x2e/0x230
... lpfc_sli_issue_mbox_s4+0x533/0x720 [lpfc]
... ? mempool_alloc+0x69/0x170
... ? dma_generic_alloc_coherent+0x8f/0x140
... lpfc_sli_issue_mbox+0xf/0x20 [lpfc]
... lpfc_sli4_driver_resource_setup+0xa6f/0x1130 [lpfc]
... ? lpfc_pci_probe_one+0x23e/0x16f0 [lpfc]
... lpfc_pci_probe_one+0x445/0x16f0 [lpfc]
... local_pci_probe+0x45/0xa0
... work_for_cpu_fn+0x14/0x20
... process_one_work+0x17a/0x440
Signed-off-by: Dick Kennedy <[email protected]>
Signed-off-by: James Smart <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/scsi/lpfc/lpfc_init.c | 84 +++++++++++++++++++++++++-----------------
1 file changed, 51 insertions(+), 33 deletions(-)
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -9421,44 +9421,62 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba
lpfc_sli4_bar0_register_memmap(phba, if_type);
}
- if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
- (pci_resource_start(pdev, PCI_64BIT_BAR2))) {
- /*
- * Map SLI4 if type 0 HBA Control Register base to a kernel
- * virtual address and setup the registers.
- */
- phba->pci_bar1_map = pci_resource_start(pdev, PCI_64BIT_BAR2);
- bar1map_len = pci_resource_len(pdev, PCI_64BIT_BAR2);
- phba->sli4_hba.ctrl_regs_memmap_p =
- ioremap(phba->pci_bar1_map, bar1map_len);
- if (!phba->sli4_hba.ctrl_regs_memmap_p) {
- dev_printk(KERN_ERR, &pdev->dev,
- "ioremap failed for SLI4 HBA control registers.\n");
+ if (if_type == LPFC_SLI_INTF_IF_TYPE_0) {
+ if (pci_resource_start(pdev, PCI_64BIT_BAR2)) {
+ /*
+ * Map SLI4 if type 0 HBA Control Register base to a
+ * kernel virtual address and setup the registers.
+ */
+ phba->pci_bar1_map = pci_resource_start(pdev,
+ PCI_64BIT_BAR2);
+ bar1map_len = pci_resource_len(pdev, PCI_64BIT_BAR2);
+ phba->sli4_hba.ctrl_regs_memmap_p =
+ ioremap(phba->pci_bar1_map,
+ bar1map_len);
+ if (!phba->sli4_hba.ctrl_regs_memmap_p) {
+ dev_err(&pdev->dev,
+ "ioremap failed for SLI4 HBA "
+ "control registers.\n");
+ error = -ENOMEM;
+ goto out_iounmap_conf;
+ }
+ phba->pci_bar2_memmap_p =
+ phba->sli4_hba.ctrl_regs_memmap_p;
+ lpfc_sli4_bar1_register_memmap(phba);
+ } else {
+ error = -ENOMEM;
goto out_iounmap_conf;
}
- phba->pci_bar2_memmap_p = phba->sli4_hba.ctrl_regs_memmap_p;
- lpfc_sli4_bar1_register_memmap(phba);
}
- if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
- (pci_resource_start(pdev, PCI_64BIT_BAR4))) {
- /*
- * Map SLI4 if type 0 HBA Doorbell Register base to a kernel
- * virtual address and setup the registers.
- */
- phba->pci_bar2_map = pci_resource_start(pdev, PCI_64BIT_BAR4);
- bar2map_len = pci_resource_len(pdev, PCI_64BIT_BAR4);
- phba->sli4_hba.drbl_regs_memmap_p =
- ioremap(phba->pci_bar2_map, bar2map_len);
- if (!phba->sli4_hba.drbl_regs_memmap_p) {
- dev_printk(KERN_ERR, &pdev->dev,
- "ioremap failed for SLI4 HBA doorbell registers.\n");
- goto out_iounmap_ctrl;
- }
- phba->pci_bar4_memmap_p = phba->sli4_hba.drbl_regs_memmap_p;
- error = lpfc_sli4_bar2_register_memmap(phba, LPFC_VF0);
- if (error)
+ if (if_type == LPFC_SLI_INTF_IF_TYPE_0) {
+ if (pci_resource_start(pdev, PCI_64BIT_BAR4)) {
+ /*
+ * Map SLI4 if type 0 HBA Doorbell Register base to
+ * a kernel virtual address and setup the registers.
+ */
+ phba->pci_bar2_map = pci_resource_start(pdev,
+ PCI_64BIT_BAR4);
+ bar2map_len = pci_resource_len(pdev, PCI_64BIT_BAR4);
+ phba->sli4_hba.drbl_regs_memmap_p =
+ ioremap(phba->pci_bar2_map,
+ bar2map_len);
+ if (!phba->sli4_hba.drbl_regs_memmap_p) {
+ dev_err(&pdev->dev,
+ "ioremap failed for SLI4 HBA"
+ " doorbell registers.\n");
+ error = -ENOMEM;
+ goto out_iounmap_ctrl;
+ }
+ phba->pci_bar4_memmap_p =
+ phba->sli4_hba.drbl_regs_memmap_p;
+ error = lpfc_sli4_bar2_register_memmap(phba, LPFC_VF0);
+ if (error)
+ goto out_iounmap_all;
+ } else {
+ error = -ENOMEM;
goto out_iounmap_all;
+ }
}
return 0;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amir Goldstein <[email protected]>
commit 31747eda41ef3c30c09c5c096b380bf54013746a upstream.
fsnotify pins a watched directory inode in cache, but if directory dentry
is released, new lookup will allocate a new dentry and a new inode.
Directory events will be notified on the new inode, while fsnotify listener
is watching the old pinned inode.
Hash all directory inodes to reuse the pinned inode on lookup. Pure upper
dirs are hashed by the real upper inode; merge and lower dirs are hashed by
the real lower inode.
The reference to lower inode was being held by the lower dentry object
in the overlay dentry (oe->lowerstack[0]). Releasing the overlay dentry
may drop lower inode refcount to zero. Add a refcount on behalf of the
overlay inode to prevent that.
As a by-product, hashing directory inodes also detects multiple
redirected dirs to the same lower dir and an uncovered redirected dir
target, and returns -ESTALE on lookup.
The reported issue dates back to initial version of overlayfs, but this
patch depends on ovl_inode code that was introduced in kernel v4.13.
Reported-by: Niklas Cassel <[email protected]>
Signed-off-by: Amir Goldstein <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
Tested-by: Niklas Cassel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/overlayfs/inode.c | 39 ++++++++++++++++++++++++++++-----------
fs/overlayfs/super.c | 1 +
fs/overlayfs/util.c | 4 ++--
3 files changed, 31 insertions(+), 13 deletions(-)
--- a/fs/overlayfs/inode.c
+++ b/fs/overlayfs/inode.c
@@ -606,6 +606,16 @@ static int ovl_inode_set(struct inode *i
static bool ovl_verify_inode(struct inode *inode, struct dentry *lowerdentry,
struct dentry *upperdentry)
{
+ if (S_ISDIR(inode->i_mode)) {
+ /* Real lower dir moved to upper layer under us? */
+ if (!lowerdentry && ovl_inode_lower(inode))
+ return false;
+
+ /* Lookup of an uncovered redirect origin? */
+ if (!upperdentry && ovl_inode_upper(inode))
+ return false;
+ }
+
/*
* Allow non-NULL lower inode in ovl_inode even if lowerdentry is NULL.
* This happens when finding a copied up overlay inode for a renamed
@@ -633,6 +643,8 @@ struct inode *ovl_get_inode(struct dentr
struct inode *inode;
/* Already indexed or could be indexed on copy up? */
bool indexed = (index || (ovl_indexdir(dentry->d_sb) && !upperdentry));
+ struct dentry *origin = indexed ? lowerdentry : NULL;
+ bool is_dir;
if (WARN_ON(upperdentry && indexed && !lowerdentry))
return ERR_PTR(-EIO);
@@ -641,15 +653,19 @@ struct inode *ovl_get_inode(struct dentr
realinode = d_inode(lowerdentry);
/*
- * Copy up origin (lower) may exist for non-indexed upper, but we must
- * not use lower as hash key in that case.
- * Hash inodes that are or could be indexed by origin inode and
- * non-indexed upper inodes that could be hard linked by upper inode.
+ * Copy up origin (lower) may exist for non-indexed non-dir upper, but
+ * we must not use lower as hash key in that case.
+ * Hash non-dir that is or could be indexed by origin inode.
+ * Hash dir that is or could be merged by origin inode.
+ * Hash pure upper and non-indexed non-dir by upper inode.
*/
- if (!S_ISDIR(realinode->i_mode) && (upperdentry || indexed)) {
- struct inode *key = d_inode(indexed ? lowerdentry :
- upperdentry);
- unsigned int nlink;
+ is_dir = S_ISDIR(realinode->i_mode);
+ if (is_dir)
+ origin = lowerdentry;
+
+ if (upperdentry || origin) {
+ struct inode *key = d_inode(origin ?: upperdentry);
+ unsigned int nlink = is_dir ? 1 : realinode->i_nlink;
inode = iget5_locked(dentry->d_sb, (unsigned long) key,
ovl_inode_test, ovl_inode_set, key);
@@ -670,8 +686,9 @@ struct inode *ovl_get_inode(struct dentr
goto out;
}
- nlink = ovl_get_nlink(lowerdentry, upperdentry,
- realinode->i_nlink);
+ /* Recalculate nlink for non-dir due to indexing */
+ if (!is_dir)
+ nlink = ovl_get_nlink(lowerdentry, upperdentry, nlink);
set_nlink(inode, nlink);
} else {
inode = new_inode(dentry->d_sb);
@@ -685,7 +702,7 @@ struct inode *ovl_get_inode(struct dentr
ovl_set_flag(OVL_IMPURE, inode);
/* Check for non-merge dir that may have whiteouts */
- if (S_ISDIR(realinode->i_mode)) {
+ if (is_dir) {
struct ovl_entry *oe = dentry->d_fsdata;
if (((upperdentry && lowerdentry) || oe->numlower > 1) ||
--- a/fs/overlayfs/super.c
+++ b/fs/overlayfs/super.c
@@ -211,6 +211,7 @@ static void ovl_destroy_inode(struct ino
struct ovl_inode *oi = OVL_I(inode);
dput(oi->__upperdentry);
+ iput(oi->lower);
kfree(oi->redirect);
ovl_dir_cache_free(inode);
mutex_destroy(&oi->lock);
--- a/fs/overlayfs/util.c
+++ b/fs/overlayfs/util.c
@@ -257,7 +257,7 @@ void ovl_inode_init(struct inode *inode,
if (upperdentry)
OVL_I(inode)->__upperdentry = upperdentry;
if (lowerdentry)
- OVL_I(inode)->lower = d_inode(lowerdentry);
+ OVL_I(inode)->lower = igrab(d_inode(lowerdentry));
ovl_copyattr(d_inode(upperdentry ?: lowerdentry), inode);
}
@@ -273,7 +273,7 @@ void ovl_inode_update(struct inode *inod
*/
smp_wmb();
OVL_I(inode)->__upperdentry = upperdentry;
- if (!S_ISDIR(upperinode->i_mode) && inode_unhashed(inode)) {
+ if (inode_unhashed(inode)) {
inode->i_private = upperinode;
__insert_inode_hash(inode, (unsigned long) upperinode);
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amir Goldstein <[email protected]>
commit a5a927a7c82e28ea76599dee4019c41e372c911f upstream.
The optimization in ovl_cache_get_impure() that tries to remove an
unneeded "impure" xattr needs to take mnt_want_write() on upper fs.
Fixes: 4edb83bb1041 ("ovl: constant d_ino for non-merge dirs")
Signed-off-by: Amir Goldstein <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/overlayfs/readdir.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
--- a/fs/overlayfs/readdir.c
+++ b/fs/overlayfs/readdir.c
@@ -593,8 +593,15 @@ static struct ovl_dir_cache *ovl_cache_g
return ERR_PTR(res);
}
if (list_empty(&cache->entries)) {
- /* Good oportunity to get rid of an unnecessary "impure" flag */
- ovl_do_removexattr(ovl_dentry_upper(dentry), OVL_XATTR_IMPURE);
+ /*
+ * A good opportunity to get rid of an unneeded "impure" flag.
+ * Removing the "impure" xattr is best effort.
+ */
+ if (!ovl_want_write(dentry)) {
+ ovl_do_removexattr(ovl_dentry_upper(dentry),
+ OVL_XATTR_IMPURE);
+ ovl_drop_write(dentry);
+ }
ovl_clear_flag(OVL_IMPURE, d_inode(dentry));
kfree(cache);
return NULL;
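For reference, a minimal, generic sketch of the rule being applied (illustrative code, not overlayfs internals): modifications of the upper filesystem must be bracketed by mnt_want_write()/mnt_drop_write(), and since this particular removal is best effort, failure to get write access simply skips it.

#include <linux/dcache.h>
#include <linux/mount.h>

static void best_effort_cleanup(struct vfsmount *upper_mnt,
				struct dentry *upper,
				void (*remove_xattr)(struct dentry *))
{
	if (!mnt_want_write(upper_mnt)) {
		remove_xattr(upper);	/* e.g. an xattr removal helper */
		mnt_drop_write(upper_mnt);
	}
}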
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amir Goldstein <[email protected]>
commit 972d0093c2f7b1bd57e47a1780a552dde528fd16 upstream.
When work dir creation fails, a warning is emitted and overlay is
mounted r/o. Trying to remount r/w will fail with no work dir.
When index dir creation fails, the same warning is emitted and overlay
is mounted r/o, but trying to remount r/w will succeed. This may cause
unintentional corruption of filesystem consistency.
Adjust the behavior of index dir creation failure to that of work dir
creation failure and do not allow remounting r/w. The user needs to state
an explicit intention to work without an index, by mounting with
option 'index=off', to allow an r/w mount with no index dir.
When mounting with option 'index=on' and no 'upperdir', index is
implicitly disabled, so do not warn about no file handle support.
The issue was introduced with the inodes index feature in v4.13, but this
patch will not apply cleanly before the ovl_fill_super() re-factoring in
v4.15.
Fixes: 02bcd1577400 ("ovl: introduce the inodes index dir feature")
Signed-off-by: Amir Goldstein <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/overlayfs/super.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
--- a/fs/overlayfs/super.c
+++ b/fs/overlayfs/super.c
@@ -703,7 +703,8 @@ static int ovl_lower_dir(const char *nam
* The inodes index feature needs to encode and decode file
* handles, so it requires that all layers support them.
*/
- if (ofs->config.index && !ovl_can_decode_fh(path->dentry->d_sb)) {
+ if (ofs->config.index && ofs->config.upperdir &&
+ !ovl_can_decode_fh(path->dentry->d_sb)) {
ofs->config.index = false;
pr_warn("overlayfs: fs on '%s' does not support file handles, falling back to index=off.\n", name);
}
@@ -1257,11 +1258,16 @@ static int ovl_fill_super(struct super_b
if (err)
goto out_free_oe;
- if (!ofs->indexdir)
+ /* Force r/o mount with no index dir */
+ if (!ofs->indexdir) {
+ dput(ofs->workdir);
+ ofs->workdir = NULL;
sb->s_flags |= SB_RDONLY;
+ }
+
}
- /* Show index=off/on in /proc/mounts for any of the reasons above */
+ /* Show index=off in /proc/mounts for forced r/o mount */
if (!ofs->indexdir)
ofs->config.index = false;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Ryabinin <[email protected]>
commit 42440c1f9911b4b7b8ba3dc4e90c1197bc561211 upstream.
UBSAN=y fails to build with new GCC/clang:
arch/x86/kernel/head64.o: In function `sanitize_boot_params':
arch/x86/include/asm/bootparam_utils.h:37: undefined reference to `__ubsan_handle_type_mismatch_v1'
because Clang and GCC 8 slightly changed the ABI for 'type mismatch' errors.
Compiler now uses new __ubsan_handle_type_mismatch_v1() function with
slightly modified 'struct type_mismatch_data'.
Let's add new 'struct type_mismatch_data_common' which is independent from
compiler's layout of 'struct type_mismatch_data'. And make
__ubsan_handle_type_mismatch[_v1]() functions transform compiler-dependent
type mismatch data to our internal representation. This way, we can
support both old and new compilers with a minimal amount of change.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Andrey Ryabinin <[email protected]>
Reported-by: Sodagudi Prasad <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
lib/ubsan.c | 48 ++++++++++++++++++++++++++++++++++++++----------
lib/ubsan.h | 14 ++++++++++++++
2 files changed, 52 insertions(+), 10 deletions(-)
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -265,14 +265,14 @@ void __ubsan_handle_divrem_overflow(stru
}
EXPORT_SYMBOL(__ubsan_handle_divrem_overflow);
-static void handle_null_ptr_deref(struct type_mismatch_data *data)
+static void handle_null_ptr_deref(struct type_mismatch_data_common *data)
{
unsigned long flags;
- if (suppress_report(&data->location))
+ if (suppress_report(data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(data->location, &flags);
pr_err("%s null pointer of type %s\n",
type_check_kinds[data->type_check_kind],
@@ -281,15 +281,15 @@ static void handle_null_ptr_deref(struct
ubsan_epilogue(&flags);
}
-static void handle_misaligned_access(struct type_mismatch_data *data,
+static void handle_misaligned_access(struct type_mismatch_data_common *data,
unsigned long ptr)
{
unsigned long flags;
- if (suppress_report(&data->location))
+ if (suppress_report(data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(data->location, &flags);
pr_err("%s misaligned address %p for type %s\n",
type_check_kinds[data->type_check_kind],
@@ -299,15 +299,15 @@ static void handle_misaligned_access(str
ubsan_epilogue(&flags);
}
-static void handle_object_size_mismatch(struct type_mismatch_data *data,
+static void handle_object_size_mismatch(struct type_mismatch_data_common *data,
unsigned long ptr)
{
unsigned long flags;
- if (suppress_report(&data->location))
+ if (suppress_report(data->location))
return;
- ubsan_prologue(&data->location, &flags);
+ ubsan_prologue(data->location, &flags);
pr_err("%s address %p with insufficient space\n",
type_check_kinds[data->type_check_kind],
(void *) ptr);
@@ -315,7 +315,7 @@ static void handle_object_size_mismatch(
ubsan_epilogue(&flags);
}
-void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
+static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data,
unsigned long ptr)
{
@@ -326,8 +326,36 @@ void __ubsan_handle_type_mismatch(struct
else
handle_object_size_mismatch(data, ptr);
}
+
+void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
+ unsigned long ptr)
+{
+ struct type_mismatch_data_common common_data = {
+ .location = &data->location,
+ .type = data->type,
+ .alignment = data->alignment,
+ .type_check_kind = data->type_check_kind
+ };
+
+ ubsan_type_mismatch_common(&common_data, ptr);
+}
EXPORT_SYMBOL(__ubsan_handle_type_mismatch);
+void __ubsan_handle_type_mismatch_v1(struct type_mismatch_data_v1 *data,
+ unsigned long ptr)
+{
+
+ struct type_mismatch_data_common common_data = {
+ .location = &data->location,
+ .type = data->type,
+ .alignment = 1UL << data->log_alignment,
+ .type_check_kind = data->type_check_kind
+ };
+
+ ubsan_type_mismatch_common(&common_data, ptr);
+}
+EXPORT_SYMBOL(__ubsan_handle_type_mismatch_v1);
+
void __ubsan_handle_nonnull_return(struct nonnull_return_data *data)
{
unsigned long flags;
--- a/lib/ubsan.h
+++ b/lib/ubsan.h
@@ -37,6 +37,20 @@ struct type_mismatch_data {
unsigned char type_check_kind;
};
+struct type_mismatch_data_v1 {
+ struct source_location location;
+ struct type_descriptor *type;
+ unsigned char log_alignment;
+ unsigned char type_check_kind;
+};
+
+struct type_mismatch_data_common {
+ struct source_location *location;
+ struct type_descriptor *type;
+ unsigned long alignment;
+ unsigned char type_check_kind;
+};
+
struct nonnull_arg_data {
struct source_location location;
struct source_location attr_location;
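For reference, a small, self-contained model of the approach (illustrative structure and function names only, built as plain C): both compiler ABIs are converted into one internal representation so that a single handler body serves old and new compilers.

#include <stdio.h>

struct mismatch_v0     { unsigned long alignment; };
struct mismatch_v1     { unsigned char log_alignment; };
struct mismatch_common { unsigned long alignment; };

static void handle_common(const struct mismatch_common *c)
{
	printf("alignment=%lu\n", c->alignment);
}

static void handle_v0(const struct mismatch_v0 *d)
{
	struct mismatch_common c = { .alignment = d->alignment };

	handle_common(&c);
}

static void handle_v1(const struct mismatch_v1 *d)
{
	/* the new ABI stores log2 of the alignment */
	struct mismatch_common c = { .alignment = 1UL << d->log_alignment };

	handle_common(&c);
}

int main(void)
{
	struct mismatch_v0 a = { .alignment = 8 };
	struct mismatch_v1 b = { .log_alignment = 3 };

	handle_v0(&a);	/* prints alignment=8 */
	handle_v1(&b);	/* prints alignment=8 */
	return 0;
}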
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra <[email protected]>
commit 99ce7962d52d1948ad6f2785e308d48e76e0a6ef upstream.
Linus reported that GCC-7.3 generated a switch-table construct that
confused objtool. It turns out that, in particular due to KASAN, it is
possible to have unrelated .rodata usage in between the .rodata setup
for the switch-table and the following indirect jump.
The simple linear reverse search from the indirect jump would hit upon
the KASAN .rodata usage first and fail to find a switch_table,
resulting in a spurious 'sibling call with modified stack frame'
warning.
Fix this by creating a 'jump-stack' which we can 'unwind' during
reversal, thereby skipping over much of the in-between code.
This is not foolproof by any means, but it is sufficient to make the
known cases work. Future work would be to construct more comprehensive
flow analysis code.
Reported-and-tested-by: Linus Torvalds <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
tools/objtool/check.c | 41 +++++++++++++++++++++++++++++++++++++++--
tools/objtool/check.h | 1 +
2 files changed, 40 insertions(+), 2 deletions(-)
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -851,8 +851,14 @@ static int add_switch_table(struct objto
* This is a fairly uncommon pattern which is new for GCC 6. As of this
* writing, there are 11 occurrences of it in the allmodconfig kernel.
*
+ * As of GCC 7 there are quite a few more of these and the 'in between' code
+ * is significant. Esp. with KASAN enabled some of the code between the mov
+ * and jmpq uses .rodata itself, which can confuse things.
+ *
* TODO: Once we have DWARF CFI and smarter instruction decoding logic,
* ensure the same register is used in the mov and jump instructions.
+ *
+ * NOTE: RETPOLINE made it harder still to decode dynamic jumps.
*/
static struct rela *find_switch_table(struct objtool_file *file,
struct symbol *func,
@@ -874,12 +880,25 @@ static struct rela *find_switch_table(st
text_rela->addend + 4);
if (!rodata_rela)
return NULL;
+
file->ignore_unreachables = true;
return rodata_rela;
}
/* case 3 */
- func_for_each_insn_continue_reverse(file, func, insn) {
+ /*
+ * Backward search using the @first_jump_src links, these help avoid
+ * much of the 'in between' code. Which avoids us getting confused by
+ * it.
+ */
+ for (insn = list_prev_entry(insn, list);
+
+ &insn->list != &file->insn_list &&
+ insn->sec == func->sec &&
+ insn->offset >= func->offset;
+
+ insn = insn->first_jump_src ?: list_prev_entry(insn, list)) {
+
if (insn->type == INSN_JUMP_DYNAMIC)
break;
@@ -909,14 +928,32 @@ static struct rela *find_switch_table(st
return NULL;
}
+
static int add_func_switch_tables(struct objtool_file *file,
struct symbol *func)
{
- struct instruction *insn, *prev_jump = NULL;
+ struct instruction *insn, *last = NULL, *prev_jump = NULL;
struct rela *rela, *prev_rela = NULL;
int ret;
func_for_each_insn(file, func, insn) {
+ if (!last)
+ last = insn;
+
+ /*
+ * Store back-pointers for unconditional forward jumps such
+ * that find_switch_table() can back-track using those and
+ * avoid some potentially confusing code.
+ */
+ if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->jump_dest &&
+ insn->offset > last->offset &&
+ insn->jump_dest->offset > insn->offset &&
+ !insn->jump_dest->first_jump_src) {
+
+ insn->jump_dest->first_jump_src = insn;
+ last = insn->jump_dest;
+ }
+
if (insn->type != INSN_JUMP_DYNAMIC)
continue;
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -47,6 +47,7 @@ struct instruction {
bool alt_group, visited, dead_end, ignore, hint, save, restore, ignore_alts;
struct symbol *call_dest;
struct instruction *jump_dest;
+ struct instruction *first_jump_src;
struct list_head alts;
struct symbol *func;
struct stack_op stack_op;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Lezcano <[email protected]>
commit e0aeca3d8cbaea514eb98df1149faa918f9ec42d upstream.
The current code hides a couple of bugs:
- The global variable 'clock_event_ddata' is overwritten each time the
init function is invoked.
This is fixed with a kmemdup() instead of assigning the global variable. That
prevents memory corruption when several timers are defined in the DT.
- The clockevent's event_handler is NULL if the time framework does
not select the clockevent when registering it. This is fine, but the init
code generates an interrupt in any case, leading to a dereference of this
NULL pointer.
The stm32 timer works with shadow registers, a mechanism to cache the
registers. When a change is done in one buffered register, we need to
artificially generate an event to force the timer to copy the content
of the register to the shadowed register.
The auto-reload register (ARR) is one of the shadowed registers, as is
the prescaler register (PSC), so in order to force the copy we issue an
event, which in turn leads to an interrupt and the NULL dereference.
This is fixed by inverting two lines where we clear the status register
before enabling the update event interrupt.
As this kernel crash is resulting from the combination of these two bugs,
the fixes are grouped into a single patch.
Tested-by: Benjamin Gaignard <[email protected]>
Signed-off-by: Daniel Lezcano <[email protected]>
Acked-by: Benjamin Gaignard <[email protected]>
Cc: Alexandre Torgue <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Maxime Coquelin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/clocksource/timer-stm32.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
--- a/drivers/clocksource/timer-stm32.c
+++ b/drivers/clocksource/timer-stm32.c
@@ -106,6 +106,10 @@ static int __init stm32_clockevent_init(
unsigned long rate, max_delta;
int irq, ret, bits, prescaler = 1;
+ data = kmemdup(&clock_event_ddata, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
@@ -156,8 +160,8 @@ static int __init stm32_clockevent_init(
writel_relaxed(prescaler - 1, data->base + TIM_PSC);
writel_relaxed(TIM_EGR_UG, data->base + TIM_EGR);
- writel_relaxed(TIM_DIER_UIE, data->base + TIM_DIER);
writel_relaxed(0, data->base + TIM_SR);
+ writel_relaxed(TIM_DIER_UIE, data->base + TIM_DIER);
data->periodic_top = DIV_ROUND_CLOSEST(rate, prescaler * HZ);
@@ -184,6 +188,7 @@ err_iomap:
err_clk_enable:
clk_put(clk);
err_clk_get:
+ kfree(data);
return ret;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Morton <[email protected]>
commit b8fe1120b4ba342b4f156d24e952d6e686b20298 upstream.
A visit from the spelling fairy.
Cc: David Laight <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
lib/ubsan.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -281,7 +281,7 @@ static void handle_null_ptr_deref(struct
ubsan_epilogue(&flags);
}
-static void handle_missaligned_access(struct type_mismatch_data *data,
+static void handle_misaligned_access(struct type_mismatch_data *data,
unsigned long ptr)
{
unsigned long flags;
@@ -322,7 +322,7 @@ void __ubsan_handle_type_mismatch(struct
if (!ptr)
handle_null_ptr_deref(data);
else if (data->alignment && !IS_ALIGNED(ptr, data->alignment))
- handle_missaligned_access(data, ptr);
+ handle_misaligned_access(data, ptr);
else
handle_object_size_mismatch(data, ptr);
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yan Markman <[email protected]>
commit 474c5885582c4a79c21bcf01ed98f98c935f1f4a upstream.
This patch adds Ethernet aliases in the Marvell Armada 7040 DB, 8040 DB
and 8040 mcbin device trees so that the bootloader sets up the MAC
addresses correctly.
Signed-off-by: Yan Markman <[email protected]>
[Antoine: commit message, small fixes]
Signed-off-by: Antoine Tenart <[email protected]>
Signed-off-by: Gregory CLEMENT <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/boot/dts/marvell/armada-7040-db.dts | 6 ++++++
arch/arm64/boot/dts/marvell/armada-8040-db.dts | 7 +++++++
arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts | 6 ++++++
3 files changed, 19 insertions(+)
--- a/arch/arm64/boot/dts/marvell/armada-7040-db.dts
+++ b/arch/arm64/boot/dts/marvell/armada-7040-db.dts
@@ -61,6 +61,12 @@
reg = <0x0 0x0 0x0 0x80000000>;
};
+ aliases {
+ ethernet0 = &cpm_eth0;
+ ethernet1 = &cpm_eth1;
+ ethernet2 = &cpm_eth2;
+ };
+
cpm_reg_usb3_0_vbus: cpm-usb3-0-vbus {
compatible = "regulator-fixed";
regulator-name = "usb3h0-vbus";
--- a/arch/arm64/boot/dts/marvell/armada-8040-db.dts
+++ b/arch/arm64/boot/dts/marvell/armada-8040-db.dts
@@ -61,6 +61,13 @@
reg = <0x0 0x0 0x0 0x80000000>;
};
+ aliases {
+ ethernet0 = &cpm_eth0;
+ ethernet1 = &cpm_eth2;
+ ethernet2 = &cps_eth0;
+ ethernet3 = &cps_eth1;
+ };
+
cpm_reg_usb3_0_vbus: cpm-usb3-0-vbus {
compatible = "regulator-fixed";
regulator-name = "cpm-usb3h0-vbus";
--- a/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts
+++ b/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts
@@ -62,6 +62,12 @@
reg = <0x0 0x0 0x0 0x80000000>;
};
+ aliases {
+ ethernet0 = &cpm_eth0;
+ ethernet1 = &cps_eth0;
+ ethernet2 = &cps_eth1;
+ };
+
/* Regulator labels correspond with schematics */
v_3_3: regulator-3-3v {
compatible = "regulator-fixed";
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ming Lei <[email protected]>
commit c2856ae2f315d754a0b6a268e4c6745b332b42e7 upstream.
After the queue is frozen, dispatch may still happen, for example:
1) requests are submitted from several contexts
2) requests from all these contexts are inserted into the queue, but may be
dispatched to the LLD in only one of these paths, while the other paths still
need to move on even after all these requests are completed (that means
blk_mq_freeze_queue_wait() returns at that time)
3) dispatch after queue freezing still moves on and causes a use-after-free,
because the request queue is freed
This patch quiesces the queue after it is frozen and makes sure all
in-progress dispatch is completed.
This patch fixes the following kernel crash when running heavy IOs vs.
deleting device:
[ 36.719251] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
[ 36.720318] IP: kyber_has_work+0x14/0x40
[ 36.720847] PGD 254bf5067 P4D 254bf5067 PUD 255e6a067 PMD 0
[ 36.721584] Oops: 0000 [#1] PREEMPT SMP
[ 36.722105] Dumping ftrace buffer:
[ 36.722570] (ftrace buffer empty)
[ 36.723057] Modules linked in: scsi_debug ebtable_filter ebtables ip6table_filter ip6_tables tcm_loop iscsi_target_mod target_core_file target_core_iblock target_core_pscsi target_core_mod xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack libcrc32c bridge stp llc fuse iptable_filter ip_tables sd_mod sg btrfs xor zstd_decompress zstd_compress xxhash raid6_pq mptsas mptscsih bcache crc32c_intel ahci mptbase libahci serio_raw scsi_transport_sas nvme libata shpchp lpc_ich virtio_scsi nvme_core binfmt_misc dm_mod iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi null_blk configs
[ 36.733438] CPU: 2 PID: 2374 Comm: fio Not tainted 4.15.0-rc2.blk_mq_quiesce+ #714
[ 36.735143] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.9.3-1.fc25 04/01/2014
[ 36.736688] RIP: 0010:kyber_has_work+0x14/0x40
[ 36.737515] RSP: 0018:ffffc9000209bca0 EFLAGS: 00010202
[ 36.738431] RAX: 0000000000000008 RBX: ffff88025578bfc8 RCX: ffff880257bf4ed0
[ 36.739581] RDX: 0000000000000038 RSI: ffffffff81a98c6d RDI: ffff88025578bfc8
[ 36.740730] RBP: ffff880253cebfc8 R08: ffffc9000209bda0 R09: ffff8802554f3480
[ 36.741885] R10: ffffc9000209be60 R11: ffff880263f72538 R12: ffff88025573e9e8
[ 36.743036] R13: ffff88025578bfd0 R14: 0000000000000001 R15: 0000000000000000
[ 36.744189] FS: 00007f9b9bee67c0(0000) GS:ffff88027fc80000(0000) knlGS:0000000000000000
[ 36.746617] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 36.748483] CR2: 0000000000000008 CR3: 0000000254bf4001 CR4: 00000000003606e0
[ 36.750164] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 36.751455] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 36.752796] Call Trace:
[ 36.753992] blk_mq_do_dispatch_sched+0x7f/0xe0
[ 36.755110] blk_mq_sched_dispatch_requests+0x119/0x190
[ 36.756179] __blk_mq_run_hw_queue+0x83/0x90
[ 36.757144] __blk_mq_delay_run_hw_queue+0xaf/0x110
[ 36.758046] blk_mq_run_hw_queue+0x24/0x70
[ 36.758845] blk_mq_flush_plug_list+0x1e7/0x270
[ 36.759676] blk_flush_plug_list+0xd6/0x240
[ 36.760463] blk_finish_plug+0x27/0x40
[ 36.761195] do_io_submit+0x19b/0x780
[ 36.761921] ? entry_SYSCALL_64_fastpath+0x1a/0x7d
[ 36.762788] entry_SYSCALL_64_fastpath+0x1a/0x7d
[ 36.763639] RIP: 0033:0x7f9b9699f697
[ 36.764352] RSP: 002b:00007ffc10f991b8 EFLAGS: 00000206 ORIG_RAX: 00000000000000d1
[ 36.765773] RAX: ffffffffffffffda RBX: 00000000008f6f00 RCX: 00007f9b9699f697
[ 36.766965] RDX: 0000000000a5e6c0 RSI: 0000000000000001 RDI: 00007f9b8462a000
[ 36.768377] RBP: 0000000000000000 R08: 0000000000000001 R09: 00000000008f6420
[ 36.769649] R10: 00007f9b846e5000 R11: 0000000000000206 R12: 00007f9b795d6a70
[ 36.770807] R13: 00007f9b795e4140 R14: 00007f9b795e3fe0 R15: 0000000100000000
[ 36.771955] Code: 83 c7 10 e9 3f 68 d1 ff 0f 1f 44 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 8b 97 b0 00 00 00 48 8d 42 08 48 83 c2 38 <48> 3b 00 74 06 b8 01 00 00 00 c3 48 3b 40 08 75 f4 48 83 c0 10
[ 36.775004] RIP: kyber_has_work+0x14/0x40 RSP: ffffc9000209bca0
[ 36.776012] CR2: 0000000000000008
[ 36.776690] ---[ end trace 4045cbce364ff2a4 ]---
[ 36.777527] Kernel panic - not syncing: Fatal exception
[ 36.778526] Dumping ftrace buffer:
[ 36.779313] (ftrace buffer empty)
[ 36.780081] Kernel Offset: disabled
[ 36.780877] ---[ end Kernel panic - not syncing: Fatal exception
Reviewed-by: Christoph Hellwig <[email protected]>
Tested-by: Yi Zhang <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
block/blk-core.c | 9 +++++++++
1 file changed, 9 insertions(+)
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -699,6 +699,15 @@ void blk_cleanup_queue(struct request_qu
queue_flag_set(QUEUE_FLAG_DEAD, q);
spin_unlock_irq(lock);
+ /*
+ * make sure all in-progress dispatch are completed because
+ * blk_freeze_queue() can only complete all requests, and
+ * dispatch may still be in-progress since we dispatch requests
+ * from more than one contexts
+ */
+ if (q->mq_ops)
+ blk_mq_quiesce_queue(q);
+
/* for synchronous bio-based driver finish in-flight integrity i/o */
blk_flush_integrity();
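For reference, a minimal sketch of the teardown ordering the patch establishes, using exported blk-mq helpers (a model of the idea, not the block core code itself): freezing waits for submitted requests to complete, while quiescing additionally waits for any dispatch that is still running.

#include <linux/blk-mq.h>
#include <linux/blkdev.h>

static void drain_queue_sketch(struct request_queue *q)
{
	blk_freeze_queue_start(q);
	blk_mq_freeze_queue_wait(q);		/* all requests have completed */

	if (q->mq_ops)
		blk_mq_quiesce_queue(q);	/* all dispatch paths have drained */

	/* only now is it safe to tear the queue down */
}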
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <[email protected]>
commit 882d4171a8950646413b1a3cbe0e4a6a612fe82e upstream.
Call bdev_get_queue(bdev) after bdev->bd_disk has been initialized
instead of just before that pointer has been initialized. This patch
prevents the following command
pktsetup 1 /dev/sr0
from triggering the following kernel crash:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000548
IP: pkt_setup_dev+0x2db/0x670 [pktcdvd]
CPU: 2 PID: 724 Comm: pktsetup Not tainted 4.15.0-rc4-dbg+ #1
Call Trace:
pkt_ctl_ioctl+0xce/0x1c0 [pktcdvd]
do_vfs_ioctl+0x8e/0x670
SyS_ioctl+0x3c/0x70
entry_SYSCALL_64_fastpath+0x23/0x9a
Reported-by: Maciej S. Szmigiero <[email protected]>
Fixes: commit ca18d6f769d2 ("block: Make most scsi_req_init() calls implicit")
Signed-off-by: Bart Van Assche <[email protected]>
Tested-by: Maciej S. Szmigiero <[email protected]>
Cc: Maciej S. Szmigiero <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/block/pktcdvd.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2579,14 +2579,14 @@ static int pkt_new_dev(struct pktcdvd_de
bdev = bdget(dev);
if (!bdev)
return -ENOMEM;
+ ret = blkdev_get(bdev, FMODE_READ | FMODE_NDELAY, NULL);
+ if (ret)
+ return ret;
if (!blk_queue_scsi_passthrough(bdev_get_queue(bdev))) {
WARN_ONCE(true, "Attempt to register a non-SCSI queue\n");
- bdput(bdev);
+ blkdev_put(bdev, FMODE_READ | FMODE_NDELAY);
return -EINVAL;
}
- ret = blkdev_get(bdev, FMODE_READ | FMODE_NDELAY, NULL);
- if (ret)
- return ret;
/* This is safe, since we have a reference from open(). */
__module_get(THIS_MODULE);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <[email protected]>
commit 5a0ec388ef0f6e33841aeb810d7fa23f049ec4cd upstream.
Commit 523e1d399ce0 ("block: make gendisk hold a reference to its queue")
modified add_disk() and disk_release() but did not update any of the
error paths that trigger a put_disk() call after disk->queue has been
assigned. That introduced the following behavior in the pktcdvd driver
if pkt_new_dev() fails:
Kernel BUG at 00000000e98fd882 [verbose debug info unavailable]
Since disk_release() calls blk_put_queue() anyway if disk->queue != NULL,
fix this by removing the blk_cleanup_queue() call from the pkt_setup_dev()
error path.
Fixes: commit 523e1d399ce0 ("block: make gendisk hold a reference to its queue")
Signed-off-by: Bart Van Assche <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Maciej S. Szmigiero <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/block/pktcdvd.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2745,7 +2745,7 @@ static int pkt_setup_dev(dev_t dev, dev_
pd->pkt_dev = MKDEV(pktdev_major, idx);
ret = pkt_new_dev(pd, dev);
if (ret)
- goto out_new_dev;
+ goto out_mem2;
/* inherit events of the host device */
disk->events = pd->bdev->bd_disk->events;
@@ -2763,8 +2763,6 @@ static int pkt_setup_dev(dev_t dev, dev_
mutex_unlock(&ctl_mutex);
return 0;
-out_new_dev:
- blk_cleanup_queue(disk->queue);
out_mem2:
put_disk(disk);
out_mem:
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Rosin <[email protected]>
commit b930151e5b55a0e62a3aad06876de891ac980471 upstream.
Without such a range, gpiolib fails with -EPROBE_DEFER, pending the
addition of the range. So, without a range, gpiolib will keep
deferring indefinitely.
Fixes: 9e80f9064e73 ("pinctrl: Add SX150X GPIO Extender Pinctrl Driver")
Fixes: e10f72bf4b3e ("gpio: gpiolib: Generalise state persistence beyond sleep")
Suggested-by: Linus Walleij <[email protected]>
Signed-off-by: Peter Rosin <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/pinctrl/pinctrl-sx150x.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/drivers/pinctrl/pinctrl-sx150x.c
+++ b/drivers/pinctrl/pinctrl-sx150x.c
@@ -1193,6 +1193,11 @@ static int sx150x_probe(struct i2c_clien
if (ret)
return ret;
+ ret = gpiochip_add_pin_range(&pctl->gpio, dev_name(dev),
+ 0, 0, pctl->data->npins);
+ if (ret)
+ return ret;
+
/* Add Interrupt support if an irq is specified */
if (client->irq > 0) {
pctl->irq_chip.name = devm_kstrdup(dev, client->name,
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mika Westerberg <[email protected]>
commit f5a26acf0162477af6ee4c11b4fb9cffe5d3e257 upstream.
When a GPIO is requested using the gpiod_get_* APIs, the Intel pinctrl driver
switches the pin to GPIO mode and makes sure interrupts are routed to
the GPIO hardware instead of the IOAPIC. However, if the GPIO is used
directly through the irqchip, as is the case with many I2C-HID devices where
the I2C core automatically configures the interrupt for the device, the pin is
not initialized as a GPIO. Instead we rely on the BIOS to configure the
pin accordingly, which seems not to be the case, at least on the Asus X540NA
SKU3 with a Focaltech touchpad.
When the pin is not properly configured it might result in weird behaviour,
like interrupts suddenly ceasing to fire completely and the touchpad no longer
responding to user input.
Fix this by properly initializing the pin to GPIO mode also when it is
used directly through the irqchip.
Fixes: 7981c0015af2 ("pinctrl: intel: Add Intel Sunrisepoint pin controller and GPIO support")
Reported-by: Daniel Drake <[email protected]>
Reported-and-tested-by: Chris Chiu <[email protected]>
Signed-off-by: Mika Westerberg <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/pinctrl/intel/pinctrl-intel.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
--- a/drivers/pinctrl/intel/pinctrl-intel.c
+++ b/drivers/pinctrl/intel/pinctrl-intel.c
@@ -425,6 +425,18 @@ static void __intel_gpio_set_direction(v
writel(value, padcfg0);
}
+static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)
+{
+ u32 value;
+
+ /* Put the pad into GPIO mode */
+ value = readl(padcfg0) & ~PADCFG0_PMODE_MASK;
+ /* Disable SCI/SMI/NMI generation */
+ value &= ~(PADCFG0_GPIROUTIOXAPIC | PADCFG0_GPIROUTSCI);
+ value &= ~(PADCFG0_GPIROUTSMI | PADCFG0_GPIROUTNMI);
+ writel(value, padcfg0);
+}
+
static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
struct pinctrl_gpio_range *range,
unsigned pin)
@@ -432,7 +444,6 @@ static int intel_gpio_request_enable(str
struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
void __iomem *padcfg0;
unsigned long flags;
- u32 value;
raw_spin_lock_irqsave(&pctrl->lock, flags);
@@ -442,13 +453,7 @@ static int intel_gpio_request_enable(str
}
padcfg0 = intel_get_padcfg(pctrl, pin, PADCFG0);
- /* Put the pad into GPIO mode */
- value = readl(padcfg0) & ~PADCFG0_PMODE_MASK;
- /* Disable SCI/SMI/NMI generation */
- value &= ~(PADCFG0_GPIROUTIOXAPIC | PADCFG0_GPIROUTSCI);
- value &= ~(PADCFG0_GPIROUTSMI | PADCFG0_GPIROUTNMI);
- writel(value, padcfg0);
-
+ intel_gpio_set_gpio_mode(padcfg0);
/* Disable TX buffer and enable RX (this will be input) */
__intel_gpio_set_direction(padcfg0, true);
@@ -935,6 +940,8 @@ static int intel_gpio_irq_type(struct ir
raw_spin_lock_irqsave(&pctrl->lock, flags);
+ intel_gpio_set_gpio_mode(reg);
+
value = readl(reg);
value &= ~(PADCFG0_RXEVCFG_MASK | PADCFG0_RXINV);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Rosin <[email protected]>
commit 0657cb50b5a75abd92956028727dc255d690a4a6 upstream.
There is no matching call to pinctrl_unregister, so switch to the
managed devm_pinctrl_register to clean up properly when done.
Fixes: 9e80f9064e73 ("pinctrl: Add SX150X GPIO Extender Pinctrl Driver")
Signed-off-by: Peter Rosin <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/pinctrl/pinctrl-sx150x.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pinctrl/pinctrl-sx150x.c
+++ b/drivers/pinctrl/pinctrl-sx150x.c
@@ -1225,7 +1225,7 @@ static int sx150x_probe(struct i2c_clien
pctl->pinctrl_desc.npins = pctl->data->npins;
pctl->pinctrl_desc.owner = THIS_MODULE;
- pctl->pctldev = pinctrl_register(&pctl->pinctrl_desc, dev, pctl);
+ pctl->pctldev = devm_pinctrl_register(dev, &pctl->pinctrl_desc, pctl);
if (IS_ERR(pctl->pctldev)) {
dev_err(dev, "Failed to register pinctrl device\n");
return PTR_ERR(pctl->pctldev);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <[email protected]>
commit 1beaeacdc88b537703d04d5536235d0bbb36db93 upstream.
Meelis reported the following warning on a quad P3 HP NetServer museum piece:
WARNING: CPU: 3 PID: 258 at kernel/irq/chip.c:244 __irq_startup+0x80/0x100
EIP: __irq_startup+0x80/0x100
irq_startup+0x7e/0x170
probe_irq_on+0x128/0x2b0
parport_irq_probe.constprop.18+0x8d/0x1af [parport_pc]
parport_pc_probe_port+0xf11/0x1260 [parport_pc]
parport_pc_init+0x78a/0xf10 [parport_pc]
parport_parse_param.constprop.16+0xf0/0xf0 [parport_pc]
do_one_initcall+0x45/0x1e0
This is caused by the rewrite of the irq activation/startup sequence, which
missed converting a call site in the legacy irq auto-probing code.
To fix this, irq_activate_and_startup() needs to gain a return value so the
pending logic can work properly.
Fixes: c942cee46bba ("genirq: Separate activation and startup")
Reported-by: Meelis Roos <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Meelis Roos <[email protected]>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801301935410.1797@nanos
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/irq/autoprobe.c | 2 +-
kernel/irq/chip.c | 6 +++---
kernel/irq/internals.h | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
--- a/kernel/irq/autoprobe.c
+++ b/kernel/irq/autoprobe.c
@@ -71,7 +71,7 @@ unsigned long probe_irq_on(void)
raw_spin_lock_irq(&desc->lock);
if (!desc->action && irq_settings_can_probe(desc)) {
desc->istate |= IRQS_AUTODETECT | IRQS_WAITING;
- if (irq_startup(desc, IRQ_NORESEND, IRQ_START_FORCE))
+ if (irq_activate_and_startup(desc, IRQ_NORESEND))
desc->istate |= IRQS_PENDING;
}
raw_spin_unlock_irq(&desc->lock);
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -294,11 +294,11 @@ int irq_activate(struct irq_desc *desc)
return 0;
}
-void irq_activate_and_startup(struct irq_desc *desc, bool resend)
+int irq_activate_and_startup(struct irq_desc *desc, bool resend)
{
if (WARN_ON(irq_activate(desc)))
- return;
- irq_startup(desc, resend, IRQ_START_FORCE);
+ return 0;
+ return irq_startup(desc, resend, IRQ_START_FORCE);
}
static void __irq_disable(struct irq_desc *desc, bool mask);
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -76,7 +76,7 @@ extern void __enable_irq(struct irq_desc
#define IRQ_START_COND false
extern int irq_activate(struct irq_desc *desc);
-extern void irq_activate_and_startup(struct irq_desc *desc, bool resend);
+extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
extern void irq_shutdown(struct irq_desc *desc);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Mastykin <[email protected]>
commit 02e389e63e3523828fc3832f27e0341885f60f6f upstream.
When using the mcp23s08 module with gpio-keys, it often (on about 50% of
boots) fails to get irq numbers, with the message:
"gpio-keys keys: Unable to get irq number for GPIO 0, error -6".
It seems that irqs must be set up before devm_gpiochip_add_data().
Signed-off-by: Dmitry Mastykin <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/pinctrl/pinctrl-mcp23s08.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/pinctrl/pinctrl-mcp23s08.c
+++ b/drivers/pinctrl/pinctrl-mcp23s08.c
@@ -896,16 +896,16 @@ static int mcp23s08_probe_one(struct mcp
goto fail;
}
- ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp);
- if (ret < 0)
- goto fail;
-
if (mcp->irq && mcp->irq_controller) {
ret = mcp23s08_irq_setup(mcp);
if (ret)
goto fail;
}
+ ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp);
+ if (ret < 0)
+ goto fail;
+
mcp->pinctrl_desc.name = "mcp23xxx-pinctrl";
mcp->pinctrl_desc.pctlops = &mcp_pinctrl_ops;
mcp->pinctrl_desc.confops = &mcp_pinconf_ops;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Max Filippov <[email protected]>
commit ca47480921587ae30417dd234a9f79af188e3666 upstream.
Return 0 if the operation was successful, not the userspace memory
value. Check that the userspace value equals the passed oldval, not
itself. Don't update *uval if the value wasn't read from userspace memory.
This fixes process hang due to infinite loop in futex_lock_pi.
It also fixes a bunch of glibc tests nptl/tst-mutexpi*.
Signed-off-by: Max Filippov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/xtensa/include/asm/futex.h | 23 ++++++++++-------------
1 file changed, 10 insertions(+), 13 deletions(-)
--- a/arch/xtensa/include/asm/futex.h
+++ b/arch/xtensa/include/asm/futex.h
@@ -92,7 +92,6 @@ futex_atomic_cmpxchg_inatomic(u32 *uval,
u32 oldval, u32 newval)
{
int ret = 0;
- u32 prev;
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
@@ -103,26 +102,24 @@ futex_atomic_cmpxchg_inatomic(u32 *uval,
__asm__ __volatile__ (
" # futex_atomic_cmpxchg_inatomic\n"
- "1: l32i %1, %3, 0\n"
- " mov %0, %5\n"
- " wsr %1, scompare1\n"
- "2: s32c1i %0, %3, 0\n"
- "3:\n"
+ " wsr %5, scompare1\n"
+ "1: s32c1i %1, %4, 0\n"
+ " s32i %1, %6, 0\n"
+ "2:\n"
" .section .fixup,\"ax\"\n"
" .align 4\n"
- "4: .long 3b\n"
- "5: l32r %1, 4b\n"
- " movi %0, %6\n"
+ "3: .long 2b\n"
+ "4: l32r %1, 3b\n"
+ " movi %0, %7\n"
" jx %1\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .long 1b,5b,2b,5b\n"
+ " .long 1b,4b\n"
" .previous\n"
- : "+r" (ret), "=&r" (prev), "+m" (*uaddr)
- : "r" (uaddr), "r" (oldval), "r" (newval), "I" (-EFAULT)
+ : "+r" (ret), "+r" (newval), "+m" (*uaddr), "+m" (*uval)
+ : "r" (uaddr), "r" (oldval), "r" (uval), "I" (-EFAULT)
: "memory");
- *uval = prev;
return ret;
}
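To make the intended semantics concrete, here is a plain-C sketch of the
contract the fixed assembly implements. It is illustrative only: atomicity
and the user-access fault path (which must leave *uval untouched and return
-EFAULT) are left out, and futex_cmpxchg_sketch() is a made-up name rather
than a kernel function.
#include <stdint.h>
/* Sketch of futex_atomic_cmpxchg_inatomic() semantics, minus atomicity
 * and fault handling.
 */
int futex_cmpxchg_sketch(uint32_t *uval, uint32_t *uaddr,
			 uint32_t oldval, uint32_t newval)
{
	uint32_t cur = *uaddr;		/* value read from "userspace" */

	if (cur == oldval)		/* compare against the caller's oldval, */
		*uaddr = newval;	/* not against itself */

	*uval = cur;			/* report what was actually read */
	return 0;			/* 0 = success, not the memory value */
}
The buggy version instead returned the value read back from memory and
effectively compared it against itself, which is what produced the
futex_lock_pi() hang described above.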
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit d83a8243aaefe62ace433e4384a4f077bed86acb upstream.
Some ioctls need to copy back the result even if the ioctl returned
an error. However, don't do this for the error code -ENOTTY, since it
makes no sense in that case.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -968,6 +968,9 @@ static long do_video_ioctl(struct file *
set_fs(old_fs);
}
+ if (err == -ENOTTY)
+ return err;
+
/* Special case: even after an error we need to put the
results back for these ioctls since the error_idx will
contain information on which control failed. */
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
commit 20e8175d246e9f9deb377f2784b3e7dfb2ad3e86 upstream.
KVM doesn't follow the SMCCC when it comes to unimplemented calls,
and injects an UNDEF instead of returning an error. Since firmware
calls are now used for security mitigations, they are becoming more
common, and the UNDEF is counterproductive.
Instead, let's follow the SMCCC which states that -1 must be returned
to the caller when getting an unknown function number.
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/kvm/handle_exit.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -38,7 +38,7 @@ static int handle_hvc(struct kvm_vcpu *v
ret = kvm_hvc_call_handler(vcpu);
if (ret < 0) {
- kvm_inject_undefined(vcpu);
+ vcpu_set_reg(vcpu, 0, ~0UL);
return 1;
}
@@ -47,7 +47,16 @@ static int handle_hvc(struct kvm_vcpu *v
static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
- kvm_inject_undefined(vcpu);
+ /*
+ * "If an SMC instruction executed at Non-secure EL1 is
+ * trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
+ * Trap exception, not a Secure Monitor Call exception [...]"
+ *
+ * We need to advance the PC after the trap, as it would
+ * otherwise return to the same address...
+ */
+ vcpu_set_reg(vcpu, 0, ~0UL);
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 1;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede <[email protected]>
commit edfc3722cfef4217c7fe92b272cbe0288ba1ff57 upstream.
The Toshiba Click Mini uses an i2c-attached keyboard/touchpad combo
(a single i2c_hid device for both) with a vid:pid of 04F3:0401. That id
is also used by a number of Elan touchpads handled by the
drivers/input/mouse/elan_i2c driver, but that driver deals with pure
touchpads and does not work for a combo device such as the one on the
Toshiba Click Mini.
The combo on the Mini has an ACPI id of ELAN0800, which is not claimed
by the elan_i2c driver, so check for that and, if it is found, do not
ignore the device. This fixes the keyboard/touchpad combo on the Mini
not working (albeit with the touchpad in mouse-emulation mode).
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Jiri Kosina <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/hid/hid-core.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
--- a/drivers/hid/hid-core.c
+++ b/drivers/hid/hid-core.c
@@ -2643,7 +2643,6 @@ static const struct hid_device_id hid_ig
{ HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EARTHMATE) },
{ HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EM_LT20) },
{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, 0x0400) },
- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, 0x0401) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ESSENTIAL_REALITY, USB_DEVICE_ID_ESSENTIAL_REALITY_P5) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC5UH) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC4UM) },
@@ -2913,6 +2912,17 @@ bool hid_ignore(struct hid_device *hdev)
strncmp(hdev->name, "http://www.masterkit.ru MA901", 22) == 0)
return true;
break;
+ case USB_VENDOR_ID_ELAN:
+ /*
+ * Many Elan devices have a product id of 0x0401 and are handled
+ * by the elan_i2c input driver. But the ACPI HID ELAN0800 dev
+ * is not (and cannot be) handled by that driver ->
+ * Ignore all 0x0401 devs except for the ELAN0800 dev.
+ */
+ if (hdev->product == 0x0401 &&
+ strncmp(hdev->name, "ELAN0800", 8) != 0)
+ return true;
+ break;
}
if (hdev->type == HID_TYPE_USBMOUSE &&
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede <[email protected]>
commit b4cdaba274247c9c841c6a682c08fa91fb3aa549 upstream.
BCM43341 devices soldered onto the PCB (non-removable) always (AFAICT)
use a UART connection for bluetooth. But they also advertise btsdio
support on their 3rd sdio function, which causes 2 problems:
1) A non functioning BT HCI getting registered
2) Since the btsdio driver does not have suspend/resume callbacks,
mmc_sdio_pre_suspend will return -ENOSYS, causing mmc_pm_notify()
to react as if the SDIO-card is removed and since the slot is
marked as non-removable it will never get detected as inserted again.
Which results in wifi no longer working after a suspend/resume.
This commit fixes both by making btsdio ignore BCM43341 devices
when connected to a slot which is marked non-removable.
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Marcel Holtmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/bluetooth/btsdio.c | 9 +++++++++
1 file changed, 9 insertions(+)
--- a/drivers/bluetooth/btsdio.c
+++ b/drivers/bluetooth/btsdio.c
@@ -31,6 +31,7 @@
#include <linux/errno.h>
#include <linux/skbuff.h>
+#include <linux/mmc/host.h>
#include <linux/mmc/sdio_ids.h>
#include <linux/mmc/sdio_func.h>
@@ -292,6 +293,14 @@ static int btsdio_probe(struct sdio_func
tuple = tuple->next;
}
+ /* BCM43341 devices soldered onto the PCB (non-removable) use an
+ * uart connection for bluetooth, ignore the BT SDIO interface.
+ */
+ if (func->vendor == SDIO_VENDOR_ID_BROADCOM &&
+ func->device == SDIO_DEVICE_ID_BROADCOM_43341 &&
+ !mmc_card_is_removable(func->card->host))
+ return -ENODEV;
+
data = devm_kzalloc(&func->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit a1be1f3931bfe0a42b46fef77a04593c2b136e7f upstream.
This reverts commit ba62bafe942b ("kernel/relay.c: fix potential memory leak").
This commit introduced a double free bug, because 'chan' is already
freed by the line:
kref_put(&chan->kref, relay_destroy_channel);
This bug was found by syzkaller, using the BLKTRACESETUP ioctl.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: ba62bafe942b ("kernel/relay.c: fix potential memory leak")
Signed-off-by: Eric Biggers <[email protected]>
Reported-by: syzbot <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Cc: Zhouyi Zhou <[email protected]>
Cc: Jens Axboe <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/relay.c | 1 -
1 file changed, 1 deletion(-)
--- a/kernel/relay.c
+++ b/kernel/relay.c
@@ -611,7 +611,6 @@ free_bufs:
kref_put(&chan->kref, relay_destroy_channel);
mutex_unlock(&relay_channels_mutex);
- kfree(chan);
return NULL;
}
EXPORT_SYMBOL_GPL(relay_open);
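For readers unfamiliar with the kref pattern, a minimal userspace sketch of
the same double-free shape (with a plain counter standing in for struct kref
and made-up names, not the relay code itself) looks like this:
#include <stdio.h>
#include <stdlib.h>
struct obj {
	int refcount;
};
static void put_obj(struct obj *o)
{
	if (--o->refcount == 0)
		free(o);		/* the release callback frees the object */
}
int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	if (!o)
		return 1;
	o->refcount = 1;

	/* Error path analogous to relay_open(): the put drops the last
	 * reference, so 'o' is already gone here ...
	 */
	put_obj(o);
	/* ... and a second free(o) at this point would free it twice.
	 * (Left commented out so the sketch stays well-defined; valgrind
	 * or ASan flag it immediately when enabled.)
	 */
	/* free(o); */
	return 0;
}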
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rasmus Villemoes <[email protected]>
commit 4f7e988e63e336827f4150de48163bed05d653bd upstream.
This reverts commit 92266d6ef60c ("async: simplify lowest_in_progress()")
which was simply wrong: In the case where domain is NULL, we now use the
wrong offsetof() in the list_first_entry macro, so we don't actually
fetch the ->cookie value, but rather the eight bytes located
sizeof(struct list_head) further into the struct async_entry.
On 64 bit, that's the data member, while on 32 bit, that's a u64 built
from func and data in some order.
I think the bug happens to be harmless in practice: It obviously only
affects callers which pass a NULL domain, and AFAICT the only such
caller is
async_synchronize_full() ->
async_synchronize_full_domain(NULL) ->
async_synchronize_cookie_domain(ASYNC_COOKIE_MAX, NULL)
and the ASYNC_COOKIE_MAX means that in practice we end up waiting for
the async_global_pending list to be empty - but it would break if
somebody happened to pass (void*)-1 as the data element to
async_schedule, and of course also if somebody ever does an
async_synchronize_cookie_domain(, NULL) with a "finite" cookie value.
Maybe the "harmless in practice" means this isn't -stable material. But
I'm not completely confident my quick git grep'ing is enough, and there
might be affected code in one of the earlier kernels that has since been
removed, so I'll leave the decision to the stable guys.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 92266d6ef60c "async: simplify lowest_in_progress()"
Signed-off-by: Rasmus Villemoes <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Arjan van de Ven <[email protected]>
Cc: Adam Wallis <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/async.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -84,20 +84,24 @@ static atomic_t entry_count;
static async_cookie_t lowest_in_progress(struct async_domain *domain)
{
- struct list_head *pending;
+ struct async_entry *first = NULL;
async_cookie_t ret = ASYNC_COOKIE_MAX;
unsigned long flags;
spin_lock_irqsave(&async_lock, flags);
- if (domain)
- pending = &domain->pending;
- else
- pending = &async_global_pending;
+ if (domain) {
+ if (!list_empty(&domain->pending))
+ first = list_first_entry(&domain->pending,
+ struct async_entry, domain_list);
+ } else {
+ if (!list_empty(&async_global_pending))
+ first = list_first_entry(&async_global_pending,
+ struct async_entry, global_list);
+ }
- if (!list_empty(pending))
- ret = list_first_entry(pending, struct async_entry,
- domain_list)->cookie;
+ if (first)
+ ret = first->cookie;
spin_unlock_irqrestore(&async_lock, flags);
return ret;
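The offsetof() mix-up is easy to reproduce in plain C. The sketch below uses
a reduced stand-in for struct async_entry (the real struct has more fields)
and the usual list_entry() arithmetic; the offsets noted assume a 64-bit
build.
#include <stddef.h>
#include <stdio.h>
struct list_head { struct list_head *next, *prev; };
/* Reduced stand-in for struct async_entry. */
struct async_entry {
	struct list_head domain_list;	/* offset 0            */
	struct list_head global_list;	/* offset 16 on 64-bit */
	unsigned long long cookie;
	void *func;
	void *data;
};
#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
int main(void)
{
	struct async_entry e = { .cookie = 42, .data = (void *)0xdeadbeef };

	/* Correct: the global pending list is threaded through ->global_list. */
	struct async_entry *ok =
		list_entry(&e.global_list, struct async_entry, global_list);

	/* Buggy: using ->domain_list's offset (0) leaves the computed base
	 * sizeof(struct list_head) too far into the object, so reading
	 * ->cookie actually fetches a later field (->data in this layout).
	 */
	struct async_entry *bad =
		list_entry(&e.global_list, struct async_entry, domain_list);

	printf("ok->cookie  = %llu\n", ok->cookie);	/* 42 */
	printf("bad->cookie = %llu\n", bad->cookie);	/* garbage */
	return 0;
}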
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit eff84b379089cd8b4e83599639c1f5f6e34ef7bf upstream.
The SHA-512 multibuffer code keeps track of the number of blocks pending
in each lane. The minimum of these values is used to identify the next
lane that will be completed. Unused lanes are set to a large number
(0xFFFFFFFF) so that they don't affect this calculation.
However, the lengths were never set to this value in the initial
state, where all lanes are unused. As a result it was possible
for sha512_mb_mgr_get_comp_job_avx2() to select an unused lane, causing
a NULL pointer dereference. Specifically this could happen in the case
where ->update() was passed fewer than SHA512_BLOCK_SIZE bytes of data,
so it then called sha_complete_job() without having actually submitted
any blocks to the multi-buffer code. This hit a NULL pointer
dereference if another task happened to have submitted blocks
concurrently to the same CPU and the flush timer had not yet expired.
Fix this by initializing sha512_mb_mgr->lens correctly.
As usual, this bug was found by syzkaller.
Fixes: 45691e2d9b18 ("crypto: sha512-mb - submit/flush routines for AVX2")
Reported-by: syzbot <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
+++ b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
@@ -57,10 +57,12 @@ void sha512_mb_mgr_init_avx2(struct sha5
{
unsigned int j;
- state->lens[0] = 0;
- state->lens[1] = 1;
- state->lens[2] = 2;
- state->lens[3] = 3;
+ /* initially all lanes are unused */
+ state->lens[0] = 0xFFFFFFFF00000000;
+ state->lens[1] = 0xFFFFFFFF00000001;
+ state->lens[2] = 0xFFFFFFFF00000002;
+ state->lens[3] = 0xFFFFFFFF00000003;
+
state->unused_lanes = 0xFF03020100;
for (j = 0; j < 4; j++)
state->ldata[j].job_in_lane = NULL;
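For illustration, a reduced standalone sketch of the lane selection that the
fix protects is shown below. It is not the AVX2 manager code: the real
implementation packs the lane index and length into one 64-bit lens[] entry
and does the selection in assembly, but the "pick the minimum, treat the
sentinel as unused" idea is the same.
#include <stdio.h>
#include <stdint.h>
#define NUM_LANES   4
#define LANE_UNUSED 0xFFFFFFFFu	/* sentinel: no blocks pending in this lane */
struct mb_mgr_sketch {
	uint32_t lens[NUM_LANES];
	void *job_in_lane[NUM_LANES];	/* NULL while a lane is unused */
};
/* Return the lane whose job completes next (smallest pending length),
 * or -1 if every lane is unused.
 */
static int get_comp_lane(const struct mb_mgr_sketch *m)
{
	int i, min_idx = 0;

	for (i = 1; i < NUM_LANES; i++)
		if (m->lens[i] < m->lens[min_idx])
			min_idx = i;

	return m->lens[min_idx] == LANE_UNUSED ? -1 : min_idx;
}
int main(void)
{
	/* Old, buggy initial state: lens = {0, 1, 2, 3}. */
	struct mb_mgr_sketch buggy = { .lens = { 0, 1, 2, 3 } };
	/* Fixed initial state: every lane marked unused. */
	struct mb_mgr_sketch fixed = { .lens = { LANE_UNUSED, LANE_UNUSED,
						 LANE_UNUSED, LANE_UNUSED } };

	/* The buggy manager "finds" lane 0 even though job_in_lane[0] is
	 * NULL; the real code then dereferences that NULL job.
	 */
	printf("buggy: lane %d\n", get_comp_lane(&buggy));	/* 0  */
	printf("fixed: lane %d\n", get_comp_lane(&fixed));	/* -1 */
	return 0;
}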
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mauro Carvalho Chehab <[email protected]>
commit 81742be14b6a90c9fd0ff6eb4218bdf696ad8e46 upstream.
Before this patch, when compiled for arm32, the signal strength was
reported as:
Lock (0x1f) Signal= 4294908.66dBm C/N= 12.79dB
because of a 32-bit integer overflow. After the patch, it is properly
reported as:
Lock (0x1f) Signal= -58.64dBm C/N= 12.79dB
Fixes: 0f91c9d6bab9 ("[media] TS2020: Calculate tuner gain correctly")
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/dvb-frontends/ts2020.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/media/dvb-frontends/ts2020.c
+++ b/drivers/media/dvb-frontends/ts2020.c
@@ -368,7 +368,7 @@ static int ts2020_read_tuner_gain(struct
gain2 = clamp_t(long, gain2, 0, 13);
v_agc = clamp_t(long, v_agc, 400, 1100);
- *_gain = -(gain1 * 2330 +
+ *_gain = -((__s64)gain1 * 2330 +
gain2 * 3500 +
v_agc * 24 / 10 * 10 +
10000);
@@ -386,7 +386,7 @@ static int ts2020_read_tuner_gain(struct
gain3 = clamp_t(long, gain3, 0, 6);
v_agc = clamp_t(long, v_agc, 600, 1600);
- *_gain = -(gain1 * 2650 +
+ *_gain = -((__s64)gain1 * 2650 +
gain2 * 3380 +
gain3 * 2850 +
v_agc * 176 / 100 * 10 -
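One way to see the failure mode is the standalone sketch below, with made-up
readings and simplified types (the driver's own declarations differ): when
the sum is computed in 32-bit unsigned arithmetic, negating it wraps modulo
2^32 and the wrapped value is then widened into the 64-bit signal-strength
field as a huge positive number. Casting one operand to a 64-bit type first,
as the patch does, keeps the whole expression signed 64-bit.
#include <stdio.h>
#include <stdint.h>
int main(void)
{
	/* Made-up readings; results are in 0.001 dB units like the log. */
	uint32_t gain1 = 13, gain2 = 13, v_agc = 1100;
	int64_t bad, good;

	/* 32-bit unsigned arithmetic: the negation wraps to 2^32 - sum,
	 * and that huge positive value is what gets widened to 64 bits.
	 */
	bad = -(gain1 * 2330 + gain2 * 3500 + v_agc * 24 / 10 * 10 + 10000);

	/* Widen first, then negate: the result stays a proper negative. */
	good = -((int64_t)gain1 * 2330 + gain2 * 3500 +
		 v_agc * 24 / 10 * 10 + 10000);

	printf("bad  = %lld (%.2f dBm)\n", (long long)bad, bad / 1000.0);
	printf("good = %lld (%.2f dBm)\n", (long long)good, good / 1000.0);
	return 0;
}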
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mauro Carvalho Chehab <[email protected]>
commit a9cb97c3e628902e37583d8a40bb28cf76522cf1 upstream.
As smatch warned:
drivers/media/dvb-core/dvb_frontend.c:2468 dvb_frontend_handle_ioctl() error: uninitialized symbol 'err'.
The ioctl handler actually got a regression here: before changeset
d73dcf0cdb95 ("media: dvb_frontend: cleanup ioctl handling logic"),
the code used to return -EOPNOTSUPP if an ioctl handler was not
implemented on a driver. After the change, it may return a random
value.
Fixes: d73dcf0cdb95 ("media: dvb_frontend: cleanup ioctl handling logic")
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Tested-by: Daniel Scheller <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/dvb-core/dvb_frontend.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/media/dvb-core/dvb_frontend.c
+++ b/drivers/media/dvb-core/dvb_frontend.c
@@ -2110,7 +2110,7 @@ static int dvb_frontend_handle_ioctl(str
struct dvb_frontend *fe = dvbdev->priv;
struct dvb_frontend_private *fepriv = fe->frontend_priv;
struct dtv_frontend_properties *c = &fe->dtv_property_cache;
- int i, err;
+ int i, err = -EOPNOTSUPP;
dev_dbg(fe->dvb->device, "%s:\n", __func__);
@@ -2145,6 +2145,7 @@ static int dvb_frontend_handle_ioctl(str
}
}
kfree(tvp);
+ err = 0;
break;
}
case FE_GET_PROPERTY: {
@@ -2196,6 +2197,7 @@ static int dvb_frontend_handle_ioctl(str
return -EFAULT;
}
kfree(tvp);
+ err = 0;
break;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit dac15ed62dc9d7898d3695a3c393d6f712d66185 upstream.
Mention the maximum voltages of the CEC and HPD lines. Since, in the
example, these lines are connected to a Raspberry Pi and the RPi GPIO
lines are 3.3V, it is a good idea to warn against directly connecting
the HPD line to the Raspberry Pi's GPIO line.
Signed-off-by: Hans Verkuil <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
Signed-off-by: Hans Verkuil <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/devicetree/bindings/media/cec-gpio.txt | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/Documentation/devicetree/bindings/media/cec-gpio.txt
+++ b/Documentation/devicetree/bindings/media/cec-gpio.txt
@@ -4,6 +4,10 @@ The HDMI CEC GPIO module supports CEC im
is hooked up to a pull-up GPIO line and - optionally - the HPD line is
hooked up to another GPIO line.
+Please note: the maximum voltage for the CEC line is 3.63V, for the HPD
+line it is 5.3V. So you may need some sort of level conversion circuitry
+when connecting them to a GPIO line.
+
Required properties:
- compatible: value must be "cec-gpio".
- cec-gpios: gpio that the CEC line is connected to. The line should be
@@ -21,7 +25,7 @@ the following property is optional:
Example for the Raspberry Pi 3 where the CEC line is connected to
pin 26 aka BCM7 aka CE1 on the GPIO pin header and the HPD line is
-connected to pin 11 aka BCM17:
+connected to pin 11 aka BCM17 (some level shifter is needed for this!):
#include <dt-bindings/gpio/gpio.h>
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Liu Bo <[email protected]>
commit 0198e5b707cfeb5defbd1b71b1ec6e71580d7db9 upstream.
The bio iterated by set_bio_pages_uptodate() is a raid56-internal one,
so it will never be a BIO_CLONED bio. And since this is called from
end_io functions, bio->bi_iter.bi_size is zero, so we mustn't use
bio_for_each_segment(), which is a no-op when bi_size is zero.
Fixes: 6592e58c6b68e61f003a01ba29a3716e7e2e9484 ("Btrfs: fix write corruption due to bio cloning on raid5/6")
Signed-off-by: Liu Bo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/btrfs/raid56.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1435,14 +1435,13 @@ static int fail_bio_stripe(struct btrfs_
*/
static void set_bio_pages_uptodate(struct bio *bio)
{
- struct bio_vec bvec;
- struct bvec_iter iter;
+ struct bio_vec *bvec;
+ int i;
- if (bio_flagged(bio, BIO_CLONED))
- bio->bi_iter = btrfs_io_bio(bio)->iter;
+ ASSERT(!bio_flagged(bio, BIO_CLONED));
- bio_for_each_segment(bvec, bio, iter)
- SetPageUptodate(bvec.bv_page);
+ bio_for_each_segment_all(bvec, bio, i)
+ SetPageUptodate(bvec->bv_page);
}
/*
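To spell out the difference between the two iterators, here is a reduced
userspace sketch. It only models the two iteration styles with a fake
structure, not the real struct bio: one loop is driven by the remaining
byte count (like bio_for_each_segment()), the other walks the whole vector
array (like bio_for_each_segment_all()).
#include <stdio.h>
#define NR_VECS 3
struct fake_bio {
	int nr_vecs;			/* like bio->bi_vcnt             */
	unsigned int remaining;		/* like bio->bi_iter.bi_size     */
	const char *pages[NR_VECS];	/* stand-ins for bi_io_vec pages */
};
int main(void)
{
	struct fake_bio bio = {
		.nr_vecs = NR_VECS,
		.remaining = 0,		/* end_io time: all bytes consumed */
		.pages = { "page0", "page1", "page2" },
	};
	unsigned int page_size = 4096;
	int i;

	/* Size-driven iteration: with remaining == 0 at end_io time this
	 * loop never runs, so no page would be marked uptodate.
	 */
	for (i = 0; bio.remaining > 0 && i < bio.nr_vecs; i++) {
		printf("size-driven:   %s\n", bio.pages[i]);
		bio.remaining = bio.remaining > page_size ?
				bio.remaining - page_size : 0;
	}

	/* Vector-driven iteration: walks every entry regardless of the
	 * iterator state, so every page gets visited.
	 */
	for (i = 0; i < bio.nr_vecs; i++)
		printf("vector-driven: %s\n", bio.pages[i]);

	return 0;
}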
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells <[email protected]>
commit 45df8462730d2149834980d3db16e2d2b9daaf60 upstream.
Fix server list handling in the following ways:
(1) In afs_alloc_volume(), remove duplicate server list build code. This
was already done by afs_alloc_server_list() which afs_alloc_volume()
previously called. This just results in twice as many VL RPCs.
(2) In afs_deliver_vl_get_entry_by_name_u(), use the number of server
records indicated by ->nServers in the UVLDB record returned by the
VL.GetEntryByNameU RPC call rather than scanning all NMAXNSERVERS
slots. Unused slots may contain garbage.
(3) In afs_alloc_server_list(), don't stop converting a UVLDB record into
a server list just because we can't look up one of the servers. Just
skip that server and go on to the next. If we can't look up any of
the servers then we'll fail at the end.
Without this patch, an attempt to view the umich.edu root cell using
something like "ls /afs/umich.edu" on a dynamic root (future patch) mount
or an autocell mount will result in ENOMEDIUM. The failure is due to kafs
not stopping after nServers' worth of records have been read, but then
trying to access a server with a garbage UUID and getting an error, which
aborts the server list build.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Reported-by: Jonathan Billings <[email protected]>
Signed-off-by: David Howells <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/afs/server_list.c | 3 ++-
fs/afs/vlclient.c | 10 +++++++---
fs/afs/volume.c | 46 ++--------------------------------------------
3 files changed, 11 insertions(+), 48 deletions(-)
--- a/fs/afs/server_list.c
+++ b/fs/afs/server_list.c
@@ -58,7 +58,8 @@ struct afs_server_list *afs_alloc_server
server = afs_lookup_server(cell, key, &vldb->fs_server[i]);
if (IS_ERR(server)) {
ret = PTR_ERR(server);
- if (ret == -ENOENT)
+ if (ret == -ENOENT ||
+ ret == -ENOMEDIUM)
continue;
goto error_2;
}
--- a/fs/afs/vlclient.c
+++ b/fs/afs/vlclient.c
@@ -23,7 +23,7 @@ static int afs_deliver_vl_get_entry_by_n
struct afs_uvldbentry__xdr *uvldb;
struct afs_vldb_entry *entry;
bool new_only = false;
- u32 tmp;
+ u32 tmp, nr_servers;
int i, ret;
_enter("");
@@ -36,6 +36,10 @@ static int afs_deliver_vl_get_entry_by_n
uvldb = call->buffer;
entry = call->reply[0];
+ nr_servers = ntohl(uvldb->nServers);
+ if (nr_servers > AFS_NMAXNSERVERS)
+ nr_servers = AFS_NMAXNSERVERS;
+
for (i = 0; i < ARRAY_SIZE(uvldb->name) - 1; i++)
entry->name[i] = (u8)ntohl(uvldb->name[i]);
entry->name[i] = 0;
@@ -44,14 +48,14 @@ static int afs_deliver_vl_get_entry_by_n
/* If there is a new replication site that we can use, ignore all the
* sites that aren't marked as new.
*/
- for (i = 0; i < AFS_NMAXNSERVERS; i++) {
+ for (i = 0; i < nr_servers; i++) {
tmp = ntohl(uvldb->serverFlags[i]);
if (!(tmp & AFS_VLSF_DONTUSE) &&
(tmp & AFS_VLSF_NEWREPSITE))
new_only = true;
}
- for (i = 0; i < AFS_NMAXNSERVERS; i++) {
+ for (i = 0; i < nr_servers; i++) {
struct afs_uuid__xdr *xdr;
struct afs_uuid *uuid;
int j;
--- a/fs/afs/volume.c
+++ b/fs/afs/volume.c
@@ -26,9 +26,8 @@ static struct afs_volume *afs_alloc_volu
unsigned long type_mask)
{
struct afs_server_list *slist;
- struct afs_server *server;
struct afs_volume *volume;
- int ret = -ENOMEM, nr_servers = 0, i, j;
+ int ret = -ENOMEM, nr_servers = 0, i;
for (i = 0; i < vldb->nr_servers; i++)
if (vldb->fs_mask[i] & type_mask)
@@ -58,49 +57,8 @@ static struct afs_volume *afs_alloc_volu
refcount_set(&slist->usage, 1);
volume->servers = slist;
-
- /* Make sure a records exists for each server this volume occupies. */
- for (i = 0; i < nr_servers; i++) {
- if (!(vldb->fs_mask[i] & type_mask))
- continue;
-
- server = afs_lookup_server(params->cell, params->key,
- &vldb->fs_server[i]);
- if (IS_ERR(server)) {
- ret = PTR_ERR(server);
- if (ret == -ENOENT)
- continue;
- goto error_2;
- }
-
- /* Insertion-sort by server pointer */
- for (j = 0; j < slist->nr_servers; j++)
- if (slist->servers[j].server >= server)
- break;
- if (j < slist->nr_servers) {
- if (slist->servers[j].server == server) {
- afs_put_server(params->net, server);
- continue;
- }
-
- memmove(slist->servers + j + 1,
- slist->servers + j,
- (slist->nr_servers - j) * sizeof(struct afs_server_entry));
- }
-
- slist->servers[j].server = server;
- slist->nr_servers++;
- }
-
- if (slist->nr_servers == 0) {
- ret = -EDESTADDRREQ;
- goto error_2;
- }
-
return volume;
-error_2:
- afs_put_serverlist(params->net, slist);
error_1:
afs_put_cell(params->net, volume->cell);
kfree(volume);
@@ -328,7 +286,7 @@ static int afs_update_volume_status(stru
/* See if the volume's server list got updated. */
new = afs_alloc_server_list(volume->cell, key,
- vldb, (1 << volume->type));
+ vldb, (1 << volume->type));
if (IS_ERR(new)) {
ret = PTR_ERR(new);
goto error_vldb;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells <[email protected]>
commit e44150157f42219fa5c074588efdb31ccfb197fc upstream.
afs_alloc_volume() needs to release the cell ref it obtained in the case of
an error. Fix this by adding an afs_put_cell() call into the error path.
This can be triggered when a lookup for a cell in a dynamic root or an
autocell mount returns an error whilst trying to look up the server (such
as ENOMEDIUM). This results in an assertion failure oops when the module
is unloaded due to outstanding refs on a cell record.
Fixes: d2ddc776a458 ("afs: Overhaul volume and server record caching and fileserver rotation")
Signed-off-by: David Howells <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/afs/volume.c | 1 +
1 file changed, 1 insertion(+)
--- a/fs/afs/volume.c
+++ b/fs/afs/volume.c
@@ -102,6 +102,7 @@ static struct afs_volume *afs_alloc_volu
error_2:
afs_put_serverlist(params->net, slist);
error_1:
+ afs_put_cell(params->net, volume->cell);
kfree(volume);
error_0:
return ERR_PTR(ret);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikolay Borisov <[email protected]>
commit f3038ee3a3f1017a1cbe9907e31fa12d366c5dcb upstream.
This function was introduced by 247e743cbe6e ("Btrfs: Use async helpers
to deal with pages that have been improperly dirtied") and it didn't do
any error handling then. The function might very well fail in an ENOMEM
situation, yet the failure is not handled, which could lead to an
inconsistent state.
So let's handle the failure by setting the mapping error bit.
Signed-off-by: Nikolay Borisov <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/btrfs/inode.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2098,8 +2098,15 @@ again:
goto out;
}
- btrfs_set_extent_delalloc(inode, page_start, page_end, 0, &cached_state,
- 0);
+ ret = btrfs_set_extent_delalloc(inode, page_start, page_end, 0,
+ &cached_state, 0);
+ if (ret) {
+ mapping_set_error(page->mapping, ret);
+ end_extent_writepage(page, ret, page_start, page_end);
+ ClearPageChecked(page);
+ goto out;
+ }
+
ClearPageChecked(page);
set_page_dirty(page);
btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai <[email protected]>
commit 20a1ea2222e7cbf96e9bf8579362e971491e6aea upstream.
I got the following kernel warning when loading the snd-soc-skl module
on a Dell Latitude 7270 laptop:
memremap attempted on mixed range 0x0000000000000000 size: 0x0
WARNING: CPU: 0 PID: 484 at kernel/memremap.c:98 memremap+0x8a/0x180
Call Trace:
skl_nhlt_init+0x82/0xf0 [snd_soc_skl]
skl_probe+0x2ee/0x7c0 [snd_soc_skl]
....
It seems that a machine that doesn't support the SKL DSP gives an empty
NHLT entry, and this triggers the warning. To avoid it, let's do a zero
check before calling memremap().
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
sound/soc/intel/skylake/skl-nhlt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/sound/soc/intel/skylake/skl-nhlt.c
+++ b/sound/soc/intel/skylake/skl-nhlt.c
@@ -43,7 +43,8 @@ struct nhlt_acpi_table *skl_nhlt_init(st
obj = acpi_evaluate_dsm(handle, &osc_guid, 1, 1, NULL);
if (obj && obj->type == ACPI_TYPE_BUFFER) {
nhlt_ptr = (struct nhlt_resource_desc *)obj->buffer.pointer;
- nhlt_table = (struct nhlt_acpi_table *)
+ if (nhlt_ptr->length)
+ nhlt_table = (struct nhlt_acpi_table *)
memremap(nhlt_ptr->min_addr, nhlt_ptr->length,
MEMREMAP_WB);
ACPI_FREE(obj);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Martin Kaiser <[email protected]>
commit 0be267255cef64e1c58475baa7b25568355a3816 upstream.
When the watchdog device is suspended, its timeout is set to the maximum
value. During resume, the previously set timeout should be restored.
This does not work at the moment.
The suspend function calls
imx2_wdt_set_timeout(wdog, IMX2_WDT_MAX_TIME);
and resume reverts this by calling
imx2_wdt_set_timeout(wdog, wdog->timeout);
However, imx2_wdt_set_timeout() updates wdog->timeout. Therefore,
wdog->timeout is set to IMX2_WDT_MAX_TIME when we enter the resume
function.
Fix this by adding a new function __imx2_wdt_set_timeout() which
only updates the hardware settings. imx2_wdt_set_timeout() now calls
__imx2_wdt_set_timeout() and then saves the new timeout to
wdog->timeout.
During suspend, we call __imx2_wdt_set_timeout() directly so that
wdog->timeout won't be updated and we can restore the previous value
during resume. This approach makes wdog->timeout different from the
actual setting in the hardware which is usually not a good thing.
However, the two differ only while we're suspended and no kernel code is
running, so it should be ok in this case.
Signed-off-by: Martin Kaiser <[email protected]>
Reviewed-by: Guenter Roeck <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Wim Van Sebroeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/watchdog/imx2_wdt.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
--- a/drivers/watchdog/imx2_wdt.c
+++ b/drivers/watchdog/imx2_wdt.c
@@ -169,15 +169,21 @@ static int imx2_wdt_ping(struct watchdog
return 0;
}
-static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
- unsigned int new_timeout)
+static void __imx2_wdt_set_timeout(struct watchdog_device *wdog,
+ unsigned int new_timeout)
{
struct imx2_wdt_device *wdev = watchdog_get_drvdata(wdog);
- wdog->timeout = new_timeout;
-
regmap_update_bits(wdev->regmap, IMX2_WDT_WCR, IMX2_WDT_WCR_WT,
WDOG_SEC_TO_COUNT(new_timeout));
+}
+
+static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
+ unsigned int new_timeout)
+{
+ __imx2_wdt_set_timeout(wdog, new_timeout);
+
+ wdog->timeout = new_timeout;
return 0;
}
@@ -371,7 +377,11 @@ static int imx2_wdt_suspend(struct devic
/* The watchdog IP block is running */
if (imx2_wdt_is_running(wdev)) {
- imx2_wdt_set_timeout(wdog, IMX2_WDT_MAX_TIME);
+ /*
+ * Don't update wdog->timeout, we'll restore the current value
+ * during resume.
+ */
+ __imx2_wdt_set_timeout(wdog, IMX2_WDT_MAX_TIME);
imx2_wdt_ping(wdog);
}
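The cached-vs-hardware distinction is the whole fix, so here is a trivial
standalone sketch of it. Names only loosely mirror the driver
(hw_set_timeout() stands in for __imx2_wdt_set_timeout(), set_timeout() for
imx2_wdt_set_timeout()) and the "hardware" is just a variable.
#include <stdio.h>
#define WDT_MAX_TIME 128		/* stand-in for IMX2_WDT_MAX_TIME */
static unsigned int hw_timeout;		/* stands in for the WCR timeout bits */
static unsigned int cached_timeout;	/* stands in for wdog->timeout        */
/* Program the "hardware" only. */
static void hw_set_timeout(unsigned int t)
{
	hw_timeout = t;
}
/* Program the "hardware" and cache the value, like the normal setter. */
static void set_timeout(unsigned int t)
{
	hw_set_timeout(t);
	cached_timeout = t;
}
int main(void)
{
	/* Buggy suspend path: the normal setter clobbers the cached value,
	 * so "resume" restores the maximum instead of the user's 30s.
	 */
	set_timeout(30);
	set_timeout(WDT_MAX_TIME);		/* suspend */
	set_timeout(cached_timeout);		/* resume  */
	printf("buggy resume: %u\n", hw_timeout);	/* 128 */

	/* Fixed suspend path: only the hardware is touched, the cache
	 * keeps the user's value.
	 */
	set_timeout(30);
	hw_set_timeout(WDT_MAX_TIME);		/* suspend */
	set_timeout(cached_timeout);		/* resume  */
	printf("fixed resume: %u\n", hw_timeout);	/* 30 */
	return 0;
}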
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Mackerras <[email protected]>
commit 36ee41d161c67a6fcf696d4817a0da31f778938c upstream.
Running with CONFIG_DEBUG_ATOMIC_SLEEP reveals that HV KVM tries to
read guest memory, in order to emulate guest instructions, while
preempt is disabled and a vcore lock is held. This occurs in
kvmppc_handle_exit_hv(), called from post_guest_process(), when
emulating guest doorbell instructions on POWER9 systems, and also
when checking whether we have hit a hypervisor breakpoint.
Reading guest memory can cause a page fault and thus cause the
task to sleep, so we need to avoid reading guest memory while
holding a spinlock or when preempt is disabled.
To fix this, we move the preempt_enable() in kvmppc_run_core() to
before the loop that calls post_guest_process() for each vcore that
has just run, and we drop and re-take the vcore lock around the calls
to kvmppc_emulate_debug_inst() and kvmppc_emulate_doorbell_instr().
Dropping the lock is safe with respect to the iteration over the
runnable vcpus in post_guest_process(); for_each_runnable_thread
is actually safe to use locklessly. It is possible for a vcpu
to become runnable and add itself to the runnable_threads array
(code near the beginning of kvmppc_run_vcpu()) and then get included
in the iteration in post_guest_process despite the fact that it
has not just run. This is benign because vcpu->arch.trap and
vcpu->arch.ceded will be zero.
Fixes: 579006944e0d ("KVM: PPC: Book3S HV: Virtualize doorbell facility on POWER9")
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/powerpc/kvm/book3s_hv.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1005,8 +1005,6 @@ static int kvmppc_emulate_doorbell_instr
struct kvm *kvm = vcpu->kvm;
struct kvm_vcpu *tvcpu;
- if (!cpu_has_feature(CPU_FTR_ARCH_300))
- return EMULATE_FAIL;
if (kvmppc_get_last_inst(vcpu, INST_GENERIC, &inst) != EMULATE_DONE)
return RESUME_GUEST;
if (get_op(inst) != 31)
@@ -1056,6 +1054,7 @@ static int kvmppc_emulate_doorbell_instr
return RESUME_GUEST;
}
+/* Called with vcpu->arch.vcore->lock held */
static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
struct task_struct *tsk)
{
@@ -1176,7 +1175,10 @@ static int kvmppc_handle_exit_hv(struct
swab32(vcpu->arch.emul_inst) :
vcpu->arch.emul_inst;
if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) {
+ /* Need vcore unlocked to call kvmppc_get_last_inst */
+ spin_unlock(&vcpu->arch.vcore->lock);
r = kvmppc_emulate_debug_inst(run, vcpu);
+ spin_lock(&vcpu->arch.vcore->lock);
} else {
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
r = RESUME_GUEST;
@@ -1191,8 +1193,13 @@ static int kvmppc_handle_exit_hv(struct
*/
case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
r = EMULATE_FAIL;
- if ((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG)
+ if (((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG) &&
+ cpu_has_feature(CPU_FTR_ARCH_300)) {
+ /* Need vcore unlocked to call kvmppc_get_last_inst */
+ spin_unlock(&vcpu->arch.vcore->lock);
r = kvmppc_emulate_doorbell_instr(vcpu);
+ spin_lock(&vcpu->arch.vcore->lock);
+ }
if (r == EMULATE_FAIL) {
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
r = RESUME_GUEST;
@@ -2934,13 +2941,14 @@ static noinline void kvmppc_run_core(str
/* make sure updates to secondary vcpu structs are visible now */
smp_mb();
+ preempt_enable();
+
for (sub = 0; sub < core_info.n_subcores; ++sub) {
pvc = core_info.vc[sub];
post_guest_process(pvc, pvc == vc);
}
spin_lock(&vc->lock);
- preempt_enable();
out:
vc->vcore_state = VCORE_INACTIVE;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miquel Raynal <[email protected]>
commit 87e89ce8d0d14f573c068c61bec2117751fb5103 upstream.
Starting from commit 041e4575f034 ("mtd: nand: handle ECC errors in
OOB"), nand_do_read_oob() (from the NAND core) returned 0 or a
negative error, and the MTD layer expected that.
However, the trend for the NAND layer is now to return an error or a
positive number of bitflips. Deciding which status to return to the user
belongs to the MTD layer.
Commit e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
brought this logic to the mtd_read_oob() function while the return value
coming from nand_do_read_oob() (called by the ->_read_oob() hook) was
left unchanged.
Fixes: e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
Signed-off-by: Miquel Raynal <[email protected]>
Signed-off-by: Boris Brezillon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/mtd/nand/nand_base.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/mtd/nand/nand_base.c
+++ b/drivers/mtd/nand/nand_base.c
@@ -2199,6 +2199,7 @@ EXPORT_SYMBOL(nand_write_oob_syndrome);
static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
struct mtd_oob_ops *ops)
{
+ unsigned int max_bitflips = 0;
int page, realpage, chipnr;
struct nand_chip *chip = mtd_to_nand(mtd);
struct mtd_ecc_stats stats;
@@ -2256,6 +2257,8 @@ static int nand_do_read_oob(struct mtd_i
nand_wait_ready(mtd);
}
+ max_bitflips = max_t(unsigned int, max_bitflips, ret);
+
readlen -= len;
if (!readlen)
break;
@@ -2281,7 +2284,7 @@ static int nand_do_read_oob(struct mtd_i
if (mtd->ecc_stats.failed - stats.failed)
return -EBADMSG;
- return mtd->ecc_stats.corrected - stats.corrected ? -EUCLEAN : 0;
+ return max_bitflips;
}
/**
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 486c521510c44a04cd756a9267e7d1e271c8a4ba upstream.
These helper functions do not really help. Move the code to the
__get/put_v4l2_format32 functions.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 124 +++++---------------------
1 file changed, 24 insertions(+), 100 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -92,92 +92,6 @@ static int put_v4l2_window32(struct v4l2
return 0;
}
-static inline int get_v4l2_pix_format(struct v4l2_pix_format *kp, struct v4l2_pix_format __user *up)
-{
- if (copy_from_user(kp, up, sizeof(struct v4l2_pix_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int get_v4l2_pix_format_mplane(struct v4l2_pix_format_mplane *kp,
- struct v4l2_pix_format_mplane __user *up)
-{
- if (copy_from_user(kp, up, sizeof(struct v4l2_pix_format_mplane)))
- return -EFAULT;
- return 0;
-}
-
-static inline int put_v4l2_pix_format(struct v4l2_pix_format *kp, struct v4l2_pix_format __user *up)
-{
- if (copy_to_user(up, kp, sizeof(struct v4l2_pix_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int put_v4l2_pix_format_mplane(struct v4l2_pix_format_mplane *kp,
- struct v4l2_pix_format_mplane __user *up)
-{
- if (copy_to_user(up, kp, sizeof(struct v4l2_pix_format_mplane)))
- return -EFAULT;
- return 0;
-}
-
-static inline int get_v4l2_vbi_format(struct v4l2_vbi_format *kp, struct v4l2_vbi_format __user *up)
-{
- if (copy_from_user(kp, up, sizeof(struct v4l2_vbi_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int put_v4l2_vbi_format(struct v4l2_vbi_format *kp, struct v4l2_vbi_format __user *up)
-{
- if (copy_to_user(up, kp, sizeof(struct v4l2_vbi_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int get_v4l2_sliced_vbi_format(struct v4l2_sliced_vbi_format *kp, struct v4l2_sliced_vbi_format __user *up)
-{
- if (copy_from_user(kp, up, sizeof(struct v4l2_sliced_vbi_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int put_v4l2_sliced_vbi_format(struct v4l2_sliced_vbi_format *kp, struct v4l2_sliced_vbi_format __user *up)
-{
- if (copy_to_user(up, kp, sizeof(struct v4l2_sliced_vbi_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int get_v4l2_sdr_format(struct v4l2_sdr_format *kp, struct v4l2_sdr_format __user *up)
-{
- if (copy_from_user(kp, up, sizeof(struct v4l2_sdr_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int put_v4l2_sdr_format(struct v4l2_sdr_format *kp, struct v4l2_sdr_format __user *up)
-{
- if (copy_to_user(up, kp, sizeof(struct v4l2_sdr_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int get_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
-{
- if (copy_from_user(kp, up, sizeof(struct v4l2_meta_format)))
- return -EFAULT;
- return 0;
-}
-
-static inline int put_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
-{
- if (copy_to_user(up, kp, sizeof(struct v4l2_meta_format)))
- return -EFAULT;
- return 0;
-}
-
struct v4l2_format32 {
__u32 type; /* enum v4l2_buf_type */
union {
@@ -217,25 +131,30 @@ static int __get_v4l2_format32(struct v4
switch (kp->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- return get_v4l2_pix_format(&kp->fmt.pix, &up->fmt.pix);
+ return copy_from_user(&kp->fmt.pix, &up->fmt.pix,
+ sizeof(kp->fmt.pix)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- return get_v4l2_pix_format_mplane(&kp->fmt.pix_mp,
- &up->fmt.pix_mp);
+ return copy_from_user(&kp->fmt.pix_mp, &up->fmt.pix_mp,
+ sizeof(kp->fmt.pix_mp)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
return get_v4l2_window32(&kp->fmt.win, &up->fmt.win);
case V4L2_BUF_TYPE_VBI_CAPTURE:
case V4L2_BUF_TYPE_VBI_OUTPUT:
- return get_v4l2_vbi_format(&kp->fmt.vbi, &up->fmt.vbi);
+ return copy_from_user(&kp->fmt.vbi, &up->fmt.vbi,
+ sizeof(kp->fmt.vbi)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
- return get_v4l2_sliced_vbi_format(&kp->fmt.sliced, &up->fmt.sliced);
+ return copy_from_user(&kp->fmt.sliced, &up->fmt.sliced,
+ sizeof(kp->fmt.sliced)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SDR_CAPTURE:
case V4L2_BUF_TYPE_SDR_OUTPUT:
- return get_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
+ return copy_from_user(&kp->fmt.sdr, &up->fmt.sdr,
+ sizeof(kp->fmt.sdr)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_META_CAPTURE:
- return get_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
+ return copy_from_user(&kp->fmt.meta, &up->fmt.meta,
+ sizeof(kp->fmt.meta)) ? -EFAULT : 0;
default:
pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
kp->type);
@@ -266,25 +185,30 @@ static int __put_v4l2_format32(struct v4
switch (kp->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- return put_v4l2_pix_format(&kp->fmt.pix, &up->fmt.pix);
+ return copy_to_user(&up->fmt.pix, &kp->fmt.pix,
+ sizeof(kp->fmt.pix)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- return put_v4l2_pix_format_mplane(&kp->fmt.pix_mp,
- &up->fmt.pix_mp);
+ return copy_to_user(&up->fmt.pix_mp, &kp->fmt.pix_mp,
+ sizeof(kp->fmt.pix_mp)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
return put_v4l2_window32(&kp->fmt.win, &up->fmt.win);
case V4L2_BUF_TYPE_VBI_CAPTURE:
case V4L2_BUF_TYPE_VBI_OUTPUT:
- return put_v4l2_vbi_format(&kp->fmt.vbi, &up->fmt.vbi);
+ return copy_to_user(&up->fmt.vbi, &kp->fmt.vbi,
+ sizeof(kp->fmt.vbi)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
- return put_v4l2_sliced_vbi_format(&kp->fmt.sliced, &up->fmt.sliced);
+ return copy_to_user(&up->fmt.sliced, &kp->fmt.sliced,
+ sizeof(kp->fmt.sliced)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SDR_CAPTURE:
case V4L2_BUF_TYPE_SDR_OUTPUT:
- return put_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
+ return copy_to_user(&up->fmt.sdr, &kp->fmt.sdr,
+ sizeof(kp->fmt.sdr)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_META_CAPTURE:
- return put_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
+ return copy_to_user(&up->fmt.meta, &kp->fmt.meta,
+ sizeof(kp->fmt.meta)) ? -EFAULT : 0;
default:
pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
kp->type);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
commit 3a0a397ff5ff upstream.
Now that we've standardised on SMCCC v1.1 to perform the branch
prediction invalidation, let's drop the previous band-aid.
If vendors haven't updated their firmware to do SMCCC 1.1, they
haven't updated PSCI either, so we don't lose anything.
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/bpi.S | 24 ---------------------
arch/arm64/kernel/cpu_errata.c | 45 +++++++++++------------------------------
arch/arm64/kvm/hyp/switch.c | 14 ------------
3 files changed, 13 insertions(+), 70 deletions(-)
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -54,30 +54,6 @@ ENTRY(__bp_harden_hyp_vecs_start)
vectors __kvm_hyp_vector
.endr
ENTRY(__bp_harden_hyp_vecs_end)
-ENTRY(__psci_hyp_bp_inval_start)
- sub sp, sp, #(8 * 18)
- stp x16, x17, [sp, #(16 * 0)]
- stp x14, x15, [sp, #(16 * 1)]
- stp x12, x13, [sp, #(16 * 2)]
- stp x10, x11, [sp, #(16 * 3)]
- stp x8, x9, [sp, #(16 * 4)]
- stp x6, x7, [sp, #(16 * 5)]
- stp x4, x5, [sp, #(16 * 6)]
- stp x2, x3, [sp, #(16 * 7)]
- stp x0, x1, [sp, #(16 * 8)]
- mov x0, #0x84000000
- smc #0
- ldp x16, x17, [sp, #(16 * 0)]
- ldp x14, x15, [sp, #(16 * 1)]
- ldp x12, x13, [sp, #(16 * 2)]
- ldp x10, x11, [sp, #(16 * 3)]
- ldp x8, x9, [sp, #(16 * 4)]
- ldp x6, x7, [sp, #(16 * 5)]
- ldp x4, x5, [sp, #(16 * 6)]
- ldp x2, x3, [sp, #(16 * 7)]
- ldp x0, x1, [sp, #(16 * 8)]
- add sp, sp, #(8 * 18)
-ENTRY(__psci_hyp_bp_inval_end)
ENTRY(__qcom_hyp_sanitize_link_stack_start)
stp x29, x30, [sp, #-16]!
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -67,7 +67,6 @@ static int cpu_enable_trap_ctr_access(vo
DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
#ifdef CONFIG_KVM
-extern char __psci_hyp_bp_inval_start[], __psci_hyp_bp_inval_end[];
extern char __qcom_hyp_sanitize_link_stack_start[];
extern char __qcom_hyp_sanitize_link_stack_end[];
extern char __smccc_workaround_1_smc_start[];
@@ -116,8 +115,6 @@ static void __install_bp_hardening_cb(bp
spin_unlock(&bp_lock);
}
#else
-#define __psci_hyp_bp_inval_start NULL
-#define __psci_hyp_bp_inval_end NULL
#define __qcom_hyp_sanitize_link_stack_start NULL
#define __qcom_hyp_sanitize_link_stack_end NULL
#define __smccc_workaround_1_smc_start NULL
@@ -164,24 +161,25 @@ static void call_hvc_arch_workaround_1(v
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}
-static bool check_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+static int enable_smccc_arch_workaround_1(void *data)
{
+ const struct arm64_cpu_capabilities *entry = data;
bp_hardening_cb_t cb;
void *smccc_start, *smccc_end;
struct arm_smccc_res res;
if (!entry->matches(entry, SCOPE_LOCAL_CPU))
- return false;
+ return 0;
if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
- return false;
+ return 0;
switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if (res.a0)
- return false;
+ return 0;
cb = call_hvc_arch_workaround_1;
smccc_start = __smccc_workaround_1_hvc_start;
smccc_end = __smccc_workaround_1_hvc_end;
@@ -191,35 +189,18 @@ static bool check_smccc_arch_workaround_
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if (res.a0)
- return false;
+ return 0;
cb = call_smc_arch_workaround_1;
smccc_start = __smccc_workaround_1_smc_start;
smccc_end = __smccc_workaround_1_smc_end;
break;
default:
- return false;
+ return 0;
}
install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
- return true;
-}
-
-static int enable_psci_bp_hardening(void *data)
-{
- const struct arm64_cpu_capabilities *entry = data;
-
- if (psci_ops.get_version) {
- if (check_smccc_arch_workaround_1(entry))
- return 0;
-
- install_bp_hardening_cb(entry,
- (bp_hardening_cb_t)psci_ops.get_version,
- __psci_hyp_bp_inval_start,
- __psci_hyp_bp_inval_end);
- }
-
return 0;
}
@@ -399,22 +380,22 @@ const struct arm64_cpu_capabilities arm6
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
- .enable = enable_psci_bp_hardening,
+ .enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
- .enable = enable_psci_bp_hardening,
+ .enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
- .enable = enable_psci_bp_hardening,
+ .enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
- .enable = enable_psci_bp_hardening,
+ .enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
@@ -428,12 +409,12 @@ const struct arm64_cpu_capabilities arm6
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
- .enable = enable_psci_bp_hardening,
+ .enable = enable_smccc_arch_workaround_1,
},
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
- .enable = enable_psci_bp_hardening,
+ .enable = enable_smccc_arch_workaround_1,
},
#endif
{
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -344,20 +344,6 @@ again:
if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
goto again;
- if (exit_code == ARM_EXCEPTION_TRAP &&
- (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC64 ||
- kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC32)) {
- u32 val = vcpu_get_reg(vcpu, 0);
-
- if (val == PSCI_0_2_FN_PSCI_VERSION) {
- val = kvm_psci_version(vcpu, kern_hyp_va(vcpu->kvm));
- if (unlikely(val == KVM_ARM_PSCI_0_1))
- val = PSCI_RET_NOT_SUPPORTED;
- vcpu_set_reg(vcpu, 0, val);
- goto again;
- }
- }
-
if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
exit_code == ARM_EXCEPTION_TRAP) {
bool valid;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 3ee6d040719ae09110e5cdf24d5386abe5d1b776 upstream.
The result of the VIDIOC_PREPARE_BUF ioctl was never copied back
to userspace since it was missing in the switch.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -1052,6 +1052,7 @@ static long do_video_ioctl(struct file *
err = put_v4l2_create32(&karg.v2crt, up);
break;
+ case VIDIOC_PREPARE_BUF:
case VIDIOC_QUERYBUF:
case VIDIOC_QBUF:
case VIDIOC_DQBUF:
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit b2469c814fbc8f1f19676dd4912717b798df511e upstream.
Don't duplicate the buffer type checks in enum/g/s/try_fmt.
The check_fmt function does that already.
It is hard to keep the checks in sync for all these functions, and in
fact the check for VBI was wrong in the _fmt functions, as it allowed
SDR types as well. This caused a v4l2-compliance failure for
/dev/swradio0 using vivid.
This simplifies the code, keeps the check in one place and fixes the
SDR/VBI bug.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-ioctl.c | 140 +++++++++++++----------------------
1 file changed, 54 insertions(+), 86 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1311,52 +1311,50 @@ static int v4l_enum_fmt(const struct v4l
struct file *file, void *fh, void *arg)
{
struct v4l2_fmtdesc *p = arg;
- struct video_device *vfd = video_devdata(file);
- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
- int ret = -EINVAL;
+ int ret = check_fmt(file, p->type);
+
+ if (ret)
+ return ret;
+ ret = -EINVAL;
switch (p->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_enum_fmt_vid_cap))
+ if (unlikely(!ops->vidioc_enum_fmt_vid_cap))
break;
ret = ops->vidioc_enum_fmt_vid_cap(file, fh, arg);
break;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_vid_cap_mplane))
+ if (unlikely(!ops->vidioc_enum_fmt_vid_cap_mplane))
break;
ret = ops->vidioc_enum_fmt_vid_cap_mplane(file, fh, arg);
break;
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_vid_overlay))
+ if (unlikely(!ops->vidioc_enum_fmt_vid_overlay))
break;
ret = ops->vidioc_enum_fmt_vid_overlay(file, fh, arg);
break;
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_enum_fmt_vid_out))
+ if (unlikely(!ops->vidioc_enum_fmt_vid_out))
break;
ret = ops->vidioc_enum_fmt_vid_out(file, fh, arg);
break;
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_enum_fmt_vid_out_mplane))
+ if (unlikely(!ops->vidioc_enum_fmt_vid_out_mplane))
break;
ret = ops->vidioc_enum_fmt_vid_out_mplane(file, fh, arg);
break;
case V4L2_BUF_TYPE_SDR_CAPTURE:
- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_enum_fmt_sdr_cap))
+ if (unlikely(!ops->vidioc_enum_fmt_sdr_cap))
break;
ret = ops->vidioc_enum_fmt_sdr_cap(file, fh, arg);
break;
case V4L2_BUF_TYPE_SDR_OUTPUT:
- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_enum_fmt_sdr_out))
+ if (unlikely(!ops->vidioc_enum_fmt_sdr_out))
break;
ret = ops->vidioc_enum_fmt_sdr_out(file, fh, arg);
break;
case V4L2_BUF_TYPE_META_CAPTURE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_meta_cap))
+ if (unlikely(!ops->vidioc_enum_fmt_meta_cap))
break;
ret = ops->vidioc_enum_fmt_meta_cap(file, fh, arg);
break;
@@ -1370,13 +1368,10 @@ static int v4l_g_fmt(const struct v4l2_i
struct file *file, void *fh, void *arg)
{
struct v4l2_format *p = arg;
- struct video_device *vfd = video_devdata(file);
- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
- int ret;
+ int ret = check_fmt(file, p->type);
+
+ if (ret)
+ return ret;
/*
* fmt can't be cleared for these overlay types due to the 'clips'
@@ -1404,7 +1399,7 @@ static int v4l_g_fmt(const struct v4l2_i
switch (p->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_g_fmt_vid_cap))
+ if (unlikely(!ops->vidioc_g_fmt_vid_cap))
break;
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
ret = ops->vidioc_g_fmt_vid_cap(file, fh, arg);
@@ -1412,23 +1407,15 @@ static int v4l_g_fmt(const struct v4l2_i
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
return ret;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_vid_cap_mplane))
- break;
return ops->vidioc_g_fmt_vid_cap_mplane(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_vid_overlay))
- break;
return ops->vidioc_g_fmt_vid_overlay(file, fh, arg);
case V4L2_BUF_TYPE_VBI_CAPTURE:
- if (unlikely(!is_rx || is_vid || !ops->vidioc_g_fmt_vbi_cap))
- break;
return ops->vidioc_g_fmt_vbi_cap(file, fh, arg);
case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
- if (unlikely(!is_rx || is_vid || !ops->vidioc_g_fmt_sliced_vbi_cap))
- break;
return ops->vidioc_g_fmt_sliced_vbi_cap(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_g_fmt_vid_out))
+ if (unlikely(!ops->vidioc_g_fmt_vid_out))
break;
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
ret = ops->vidioc_g_fmt_vid_out(file, fh, arg);
@@ -1436,32 +1423,18 @@ static int v4l_g_fmt(const struct v4l2_i
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
return ret;
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_g_fmt_vid_out_mplane))
- break;
return ops->vidioc_g_fmt_vid_out_mplane(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_g_fmt_vid_out_overlay))
- break;
return ops->vidioc_g_fmt_vid_out_overlay(file, fh, arg);
case V4L2_BUF_TYPE_VBI_OUTPUT:
- if (unlikely(!is_tx || is_vid || !ops->vidioc_g_fmt_vbi_out))
- break;
return ops->vidioc_g_fmt_vbi_out(file, fh, arg);
case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
- if (unlikely(!is_tx || is_vid || !ops->vidioc_g_fmt_sliced_vbi_out))
- break;
return ops->vidioc_g_fmt_sliced_vbi_out(file, fh, arg);
case V4L2_BUF_TYPE_SDR_CAPTURE:
- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_g_fmt_sdr_cap))
- break;
return ops->vidioc_g_fmt_sdr_cap(file, fh, arg);
case V4L2_BUF_TYPE_SDR_OUTPUT:
- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_g_fmt_sdr_out))
- break;
return ops->vidioc_g_fmt_sdr_out(file, fh, arg);
case V4L2_BUF_TYPE_META_CAPTURE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_meta_cap))
- break;
return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
}
return -EINVAL;
@@ -1487,12 +1460,10 @@ static int v4l_s_fmt(const struct v4l2_i
{
struct v4l2_format *p = arg;
struct video_device *vfd = video_devdata(file);
- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
- int ret;
+ int ret = check_fmt(file, p->type);
+
+ if (ret)
+ return ret;
ret = v4l_enable_media_source(vfd);
if (ret)
@@ -1501,37 +1472,37 @@ static int v4l_s_fmt(const struct v4l2_i
switch (p->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_s_fmt_vid_cap))
+ if (unlikely(!ops->vidioc_s_fmt_vid_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.pix);
ret = ops->vidioc_s_fmt_vid_cap(file, fh, arg);
/* just in case the driver zeroed it again */
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
- if (is_tch)
+ if (vfd->vfl_type == VFL_TYPE_TOUCH)
v4l_pix_format_touch(&p->fmt.pix);
return ret;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_vid_cap_mplane))
+ if (unlikely(!ops->vidioc_s_fmt_vid_cap_mplane))
break;
CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
return ops->vidioc_s_fmt_vid_cap_mplane(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_vid_overlay))
+ if (unlikely(!ops->vidioc_s_fmt_vid_overlay))
break;
CLEAR_AFTER_FIELD(p, fmt.win);
return ops->vidioc_s_fmt_vid_overlay(file, fh, arg);
case V4L2_BUF_TYPE_VBI_CAPTURE:
- if (unlikely(!is_rx || is_vid || !ops->vidioc_s_fmt_vbi_cap))
+ if (unlikely(!ops->vidioc_s_fmt_vbi_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.vbi);
return ops->vidioc_s_fmt_vbi_cap(file, fh, arg);
case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
- if (unlikely(!is_rx || is_vid || !ops->vidioc_s_fmt_sliced_vbi_cap))
+ if (unlikely(!ops->vidioc_s_fmt_sliced_vbi_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.sliced);
return ops->vidioc_s_fmt_sliced_vbi_cap(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_s_fmt_vid_out))
+ if (unlikely(!ops->vidioc_s_fmt_vid_out))
break;
CLEAR_AFTER_FIELD(p, fmt.pix);
ret = ops->vidioc_s_fmt_vid_out(file, fh, arg);
@@ -1539,37 +1510,37 @@ static int v4l_s_fmt(const struct v4l2_i
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
return ret;
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_s_fmt_vid_out_mplane))
+ if (unlikely(!ops->vidioc_s_fmt_vid_out_mplane))
break;
CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
return ops->vidioc_s_fmt_vid_out_mplane(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_s_fmt_vid_out_overlay))
+ if (unlikely(!ops->vidioc_s_fmt_vid_out_overlay))
break;
CLEAR_AFTER_FIELD(p, fmt.win);
return ops->vidioc_s_fmt_vid_out_overlay(file, fh, arg);
case V4L2_BUF_TYPE_VBI_OUTPUT:
- if (unlikely(!is_tx || is_vid || !ops->vidioc_s_fmt_vbi_out))
+ if (unlikely(!ops->vidioc_s_fmt_vbi_out))
break;
CLEAR_AFTER_FIELD(p, fmt.vbi);
return ops->vidioc_s_fmt_vbi_out(file, fh, arg);
case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
- if (unlikely(!is_tx || is_vid || !ops->vidioc_s_fmt_sliced_vbi_out))
+ if (unlikely(!ops->vidioc_s_fmt_sliced_vbi_out))
break;
CLEAR_AFTER_FIELD(p, fmt.sliced);
return ops->vidioc_s_fmt_sliced_vbi_out(file, fh, arg);
case V4L2_BUF_TYPE_SDR_CAPTURE:
- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_s_fmt_sdr_cap))
+ if (unlikely(!ops->vidioc_s_fmt_sdr_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.sdr);
return ops->vidioc_s_fmt_sdr_cap(file, fh, arg);
case V4L2_BUF_TYPE_SDR_OUTPUT:
- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_s_fmt_sdr_out))
+ if (unlikely(!ops->vidioc_s_fmt_sdr_out))
break;
CLEAR_AFTER_FIELD(p, fmt.sdr);
return ops->vidioc_s_fmt_sdr_out(file, fh, arg);
case V4L2_BUF_TYPE_META_CAPTURE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_meta_cap))
+ if (unlikely(!ops->vidioc_s_fmt_meta_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.meta);
return ops->vidioc_s_fmt_meta_cap(file, fh, arg);
@@ -1581,19 +1552,16 @@ static int v4l_try_fmt(const struct v4l2
struct file *file, void *fh, void *arg)
{
struct v4l2_format *p = arg;
- struct video_device *vfd = video_devdata(file);
- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
- int ret;
+ int ret = check_fmt(file, p->type);
+
+ if (ret)
+ return ret;
v4l_sanitize_format(p);
switch (p->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_try_fmt_vid_cap))
+ if (unlikely(!ops->vidioc_try_fmt_vid_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.pix);
ret = ops->vidioc_try_fmt_vid_cap(file, fh, arg);
@@ -1601,27 +1569,27 @@ static int v4l_try_fmt(const struct v4l2
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
return ret;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_vid_cap_mplane))
+ if (unlikely(!ops->vidioc_try_fmt_vid_cap_mplane))
break;
CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
return ops->vidioc_try_fmt_vid_cap_mplane(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_vid_overlay))
+ if (unlikely(!ops->vidioc_try_fmt_vid_overlay))
break;
CLEAR_AFTER_FIELD(p, fmt.win);
return ops->vidioc_try_fmt_vid_overlay(file, fh, arg);
case V4L2_BUF_TYPE_VBI_CAPTURE:
- if (unlikely(!is_rx || is_vid || !ops->vidioc_try_fmt_vbi_cap))
+ if (unlikely(!ops->vidioc_try_fmt_vbi_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.vbi);
return ops->vidioc_try_fmt_vbi_cap(file, fh, arg);
case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
- if (unlikely(!is_rx || is_vid || !ops->vidioc_try_fmt_sliced_vbi_cap))
+ if (unlikely(!ops->vidioc_try_fmt_sliced_vbi_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.sliced);
return ops->vidioc_try_fmt_sliced_vbi_cap(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_try_fmt_vid_out))
+ if (unlikely(!ops->vidioc_try_fmt_vid_out))
break;
CLEAR_AFTER_FIELD(p, fmt.pix);
ret = ops->vidioc_try_fmt_vid_out(file, fh, arg);
@@ -1629,37 +1597,37 @@ static int v4l_try_fmt(const struct v4l2
p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
return ret;
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_try_fmt_vid_out_mplane))
+ if (unlikely(!ops->vidioc_try_fmt_vid_out_mplane))
break;
CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
return ops->vidioc_try_fmt_vid_out_mplane(file, fh, arg);
case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
- if (unlikely(!is_tx || !is_vid || !ops->vidioc_try_fmt_vid_out_overlay))
+ if (unlikely(!ops->vidioc_try_fmt_vid_out_overlay))
break;
CLEAR_AFTER_FIELD(p, fmt.win);
return ops->vidioc_try_fmt_vid_out_overlay(file, fh, arg);
case V4L2_BUF_TYPE_VBI_OUTPUT:
- if (unlikely(!is_tx || is_vid || !ops->vidioc_try_fmt_vbi_out))
+ if (unlikely(!ops->vidioc_try_fmt_vbi_out))
break;
CLEAR_AFTER_FIELD(p, fmt.vbi);
return ops->vidioc_try_fmt_vbi_out(file, fh, arg);
case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
- if (unlikely(!is_tx || is_vid || !ops->vidioc_try_fmt_sliced_vbi_out))
+ if (unlikely(!ops->vidioc_try_fmt_sliced_vbi_out))
break;
CLEAR_AFTER_FIELD(p, fmt.sliced);
return ops->vidioc_try_fmt_sliced_vbi_out(file, fh, arg);
case V4L2_BUF_TYPE_SDR_CAPTURE:
- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_try_fmt_sdr_cap))
+ if (unlikely(!ops->vidioc_try_fmt_sdr_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.sdr);
return ops->vidioc_try_fmt_sdr_cap(file, fh, arg);
case V4L2_BUF_TYPE_SDR_OUTPUT:
- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_try_fmt_sdr_out))
+ if (unlikely(!ops->vidioc_try_fmt_sdr_out))
break;
CLEAR_AFTER_FIELD(p, fmt.sdr);
return ops->vidioc_try_fmt_sdr_out(file, fh, arg);
case V4L2_BUF_TYPE_META_CAPTURE:
- if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_meta_cap))
+ if (unlikely(!ops->vidioc_try_fmt_meta_cap))
break;
CLEAR_AFTER_FIELD(p, fmt.meta);
return ops->vidioc_try_fmt_meta_cap(file, fh, arg);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit a208fa8f33031b9e0aba44c7d1b7e68eb0cbd29e upstream.
We need to consistently enforce that keyed hashes cannot be used without
setting the key. To do this we need a reliable way to determine whether
a given hash algorithm is keyed or not. AF_ALG currently does this by
checking for the presence of a ->setkey() method. However, this is
actually slightly broken because the CRC-32 algorithms implement
->setkey() but can also be used without a key. (The CRC-32 "key" is not
actually a cryptographic key but rather represents the initial state.
If not overridden, then a default initial state is used.)
Prepare to fix this by introducing a flag CRYPTO_ALG_OPTIONAL_KEY which
indicates that the algorithm has a ->setkey() method, but it is not
required to be called. Then set it on all the CRC-32 algorithms.
The same also applies to the Adler-32 implementation in Lustre.
Also, the cryptd and mcryptd templates have to pass through the flag
from their underlying algorithm.
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
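Not part of the upstream patch: a short usage sketch of the case this flag
legitimizes, computing a CRC-32 through the shash API without ever calling
crypto_shash_setkey(), so the default initial state is used. The function
name and input are illustrative and error handling is trimmed.

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/types.h>

static int crc32_without_setkey_demo(void)
{
        struct crypto_shash *tfm;
        u8 out[4];
        int err;

        tfm = crypto_alloc_shash("crc32", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        {
                SHASH_DESC_ON_STACK(desc, tfm);

                desc->tfm = tfm;
                desc->flags = 0;        /* shash_desc still has flags on 4.15 */
                /* No crypto_shash_setkey(): the default seed is used. */
                err = crypto_shash_digest(desc, "abc", 3, out);
        }

        crypto_free_shash(tfm);
        return err;
}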
arch/arm/crypto/crc32-ce-glue.c | 2 ++
arch/arm64/crypto/crc32-ce-glue.c | 2 ++
arch/powerpc/crypto/crc32c-vpmsum_glue.c | 1 +
arch/s390/crypto/crc32-vx.c | 3 +++
arch/sparc/crypto/crc32c_glue.c | 1 +
arch/x86/crypto/crc32-pclmul_glue.c | 1 +
arch/x86/crypto/crc32c-intel_glue.c | 1 +
crypto/crc32_generic.c | 1 +
crypto/crc32c_generic.c | 1 +
crypto/cryptd.c | 7 +++----
crypto/mcryptd.c | 7 +++----
drivers/crypto/bfin_crc.c | 3 ++-
drivers/crypto/stm32/stm32_crc32.c | 2 ++
drivers/staging/lustre/lnet/libcfs/linux/linux-crypto-adler.c | 1 +
include/linux/crypto.h | 6 ++++++
15 files changed, 30 insertions(+), 9 deletions(-)
--- a/arch/arm/crypto/crc32-ce-glue.c
+++ b/arch/arm/crypto/crc32-ce-glue.c
@@ -188,6 +188,7 @@ static struct shash_alg crc32_pmull_algs
.base.cra_name = "crc32",
.base.cra_driver_name = "crc32-arm-ce",
.base.cra_priority = 200,
+ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.base.cra_blocksize = 1,
.base.cra_module = THIS_MODULE,
}, {
@@ -203,6 +204,7 @@ static struct shash_alg crc32_pmull_algs
.base.cra_name = "crc32c",
.base.cra_driver_name = "crc32c-arm-ce",
.base.cra_priority = 200,
+ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.base.cra_blocksize = 1,
.base.cra_module = THIS_MODULE,
} };
--- a/arch/arm64/crypto/crc32-ce-glue.c
+++ b/arch/arm64/crypto/crc32-ce-glue.c
@@ -185,6 +185,7 @@ static struct shash_alg crc32_pmull_algs
.base.cra_name = "crc32",
.base.cra_driver_name = "crc32-arm64-ce",
.base.cra_priority = 200,
+ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.base.cra_blocksize = 1,
.base.cra_module = THIS_MODULE,
}, {
@@ -200,6 +201,7 @@ static struct shash_alg crc32_pmull_algs
.base.cra_name = "crc32c",
.base.cra_driver_name = "crc32c-arm64-ce",
.base.cra_priority = 200,
+ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.base.cra_blocksize = 1,
.base.cra_module = THIS_MODULE,
} };
--- a/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+++ b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
@@ -141,6 +141,7 @@ static struct shash_alg alg = {
.cra_name = "crc32c",
.cra_driver_name = "crc32c-vpmsum",
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
.cra_module = THIS_MODULE,
--- a/arch/s390/crypto/crc32-vx.c
+++ b/arch/s390/crypto/crc32-vx.c
@@ -239,6 +239,7 @@ static struct shash_alg crc32_vx_algs[]
.cra_name = "crc32",
.cra_driver_name = "crc32-vx",
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CRC32_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crc_ctx),
.cra_module = THIS_MODULE,
@@ -259,6 +260,7 @@ static struct shash_alg crc32_vx_algs[]
.cra_name = "crc32be",
.cra_driver_name = "crc32be-vx",
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CRC32_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crc_ctx),
.cra_module = THIS_MODULE,
@@ -279,6 +281,7 @@ static struct shash_alg crc32_vx_algs[]
.cra_name = "crc32c",
.cra_driver_name = "crc32c-vx",
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CRC32_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crc_ctx),
.cra_module = THIS_MODULE,
--- a/arch/sparc/crypto/crc32c_glue.c
+++ b/arch/sparc/crypto/crc32c_glue.c
@@ -133,6 +133,7 @@ static struct shash_alg alg = {
.cra_name = "crc32c",
.cra_driver_name = "crc32c-sparc64",
.cra_priority = SPARC_CR_OPCODE_PRIORITY,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
.cra_alignmask = 7,
--- a/arch/x86/crypto/crc32-pclmul_glue.c
+++ b/arch/x86/crypto/crc32-pclmul_glue.c
@@ -162,6 +162,7 @@ static struct shash_alg alg = {
.cra_name = "crc32",
.cra_driver_name = "crc32-pclmul",
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
.cra_module = THIS_MODULE,
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -226,6 +226,7 @@ static struct shash_alg alg = {
.cra_name = "crc32c",
.cra_driver_name = "crc32c-intel",
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
.cra_module = THIS_MODULE,
--- a/crypto/crc32_generic.c
+++ b/crypto/crc32_generic.c
@@ -133,6 +133,7 @@ static struct shash_alg alg = {
.cra_name = "crc32",
.cra_driver_name = "crc32-generic",
.cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
.cra_module = THIS_MODULE,
--- a/crypto/crc32c_generic.c
+++ b/crypto/crc32c_generic.c
@@ -146,6 +146,7 @@ static struct shash_alg alg = {
.cra_name = "crc32c",
.cra_driver_name = "crc32c-generic",
.cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_alignmask = 3,
.cra_ctxsize = sizeof(struct chksum_ctx),
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -893,10 +893,9 @@ static int cryptd_create_hash(struct cry
if (err)
goto out_free_inst;
- type = CRYPTO_ALG_ASYNC;
- if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
- type |= CRYPTO_ALG_INTERNAL;
- inst->alg.halg.base.cra_flags = type;
+ inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC |
+ (alg->cra_flags & (CRYPTO_ALG_INTERNAL |
+ CRYPTO_ALG_OPTIONAL_KEY));
inst->alg.halg.digestsize = salg->digestsize;
inst->alg.halg.statesize = salg->statesize;
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -517,10 +517,9 @@ static int mcryptd_create_hash(struct cr
if (err)
goto out_free_inst;
- type = CRYPTO_ALG_ASYNC;
- if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
- type |= CRYPTO_ALG_INTERNAL;
- inst->alg.halg.base.cra_flags = type;
+ inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC |
+ (alg->cra_flags & (CRYPTO_ALG_INTERNAL |
+ CRYPTO_ALG_OPTIONAL_KEY));
inst->alg.halg.digestsize = halg->digestsize;
inst->alg.halg.statesize = halg->statesize;
--- a/drivers/crypto/bfin_crc.c
+++ b/drivers/crypto/bfin_crc.c
@@ -494,7 +494,8 @@ static struct ahash_alg algs = {
.cra_driver_name = DRIVER_NAME,
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
- CRYPTO_ALG_ASYNC,
+ CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct bfin_crypto_crc_ctx),
.cra_alignmask = 3,
--- a/drivers/crypto/stm32/stm32_crc32.c
+++ b/drivers/crypto/stm32/stm32_crc32.c
@@ -208,6 +208,7 @@ static struct shash_alg algs[] = {
.cra_name = "crc32",
.cra_driver_name = DRIVER_NAME,
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_alignmask = 3,
.cra_ctxsize = sizeof(struct stm32_crc_ctx),
@@ -229,6 +230,7 @@ static struct shash_alg algs[] = {
.cra_name = "crc32c",
.cra_driver_name = DRIVER_NAME,
.cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_alignmask = 3,
.cra_ctxsize = sizeof(struct stm32_crc_ctx),
--- a/drivers/staging/lustre/lnet/libcfs/linux/linux-crypto-adler.c
+++ b/drivers/staging/lustre/lnet/libcfs/linux/linux-crypto-adler.c
@@ -120,6 +120,7 @@ static struct shash_alg alg = {
.cra_name = "adler32",
.cra_driver_name = "adler32-zlib",
.cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32),
.cra_module = THIS_MODULE,
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -107,6 +107,12 @@
#define CRYPTO_ALG_INTERNAL 0x00002000
/*
+ * Set if the algorithm has a ->setkey() method but can be used without
+ * calling it first, i.e. there is a default key.
+ */
+#define CRYPTO_ALG_OPTIONAL_KEY 0x00004000
+
+/*
* Transform masks and values (for crt_flags).
*/
#define CRYPTO_TFM_REQ_MASK 0x000fff00
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit 841a3ff329713f796a63356fef6e2f72e4a3f6a3 upstream.
When the cryptd template is used to wrap an unkeyed hash algorithm,
don't install a ->setkey() method on the cryptd instance. This change
is necessary for cryptd to keep working with unkeyed hash algorithms
once we start enforcing that ->setkey() is called when present.
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
crypto/cryptd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -911,7 +911,8 @@ static int cryptd_create_hash(struct cry
inst->alg.finup = cryptd_hash_finup_enqueue;
inst->alg.export = cryptd_hash_export;
inst->alg.import = cryptd_hash_import;
- inst->alg.setkey = cryptd_hash_setkey;
+ if (crypto_shash_alg_has_setkey(salg))
+ inst->alg.setkey = cryptd_hash_setkey;
inst->alg.digest = cryptd_hash_digest_enqueue;
err = ahash_register_instance(tmpl, inst);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit 49686cbbb3ebafe42e63868222f269d8053ead00 upstream.
nfs_idmap_legacy_upcall() is supposed to be called with 'aux' pointing
to a 'struct idmap', via the call to request_key_with_auxdata() in
nfs_idmap_request_key().
However it can also be reached via the request_key() system call in
which case 'aux' will be NULL, causing a NULL pointer dereference in
nfs_idmap_prepare_pipe_upcall(), assuming that the key description is
valid enough to get that far.
Fix this by making nfs_idmap_legacy_upcall() negate the key if no
auxdata is provided.
As usual, this bug was found by syzkaller. A simple reproducer using
the command-line keyctl program is:
keyctl request2 id_legacy uid:0 '' @s
Fixes: 57e62324e469 ("NFS: Store the legacy idmapper result in the keyring")
Reported-by: [email protected]
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfs/nfs4idmap.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/fs/nfs/nfs4idmap.c
+++ b/fs/nfs/nfs4idmap.c
@@ -568,9 +568,13 @@ static int nfs_idmap_legacy_upcall(struc
struct idmap_msg *im;
struct idmap *idmap = (struct idmap *)aux;
struct key *key = cons->key;
- int ret = -ENOMEM;
+ int ret = -ENOKEY;
+
+ if (!aux)
+ goto out1;
/* msg and im are freed in idmap_pipe_destroy_msg */
+ ret = -ENOMEM;
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
goto out1;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: J. Bruce Fields <[email protected]>
commit 1b8d97b0a837beaf48a8449955b52c650a7114b4 upstream.
If some of the WRITE calls making up an O_DIRECT write syscall fail,
we neglect to commit, even if some of the WRITEs succeed.
We also depend on the commit code to free the reference count on the
nfs_page taken in the "if (request_commit)" case at the end of
nfs_direct_write_completion(). The problem was originally noticed
because ENOSPC errors encountered partway through a write would result in a
closed file being sillyrenamed when it should have been unlinked.
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfs/direct.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -775,10 +775,8 @@ static void nfs_direct_write_completion(
spin_lock(&dreq->lock);
- if (test_bit(NFS_IOHDR_ERROR, &hdr->flags)) {
- dreq->flags = 0;
+ if (test_bit(NFS_IOHDR_ERROR, &hdr->flags))
dreq->error = hdr->error;
- }
if (dreq->error == 0) {
nfs_direct_good_bytes(dreq, hdr);
if (nfs_write_need_commit(hdr)) {
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Scott Mayhew <[email protected]>
commit ba4a76f703ab7eb72941fdaac848502073d6e9ee upstream.
Currently when falling back to doing I/O through the MDS (via
pnfs_{read|write}_through_mds), the client frees the nfs_pgio_header
without releasing the reference taken on the dreq
via pnfs_generic_pg_{read|write}pages -> nfs_pgheader_init ->
nfs_direct_pgio_init. It then takes another reference on the dreq via
nfs_generic_pg_pgios -> nfs_pgheader_init -> nfs_direct_pgio_init and
as a result the requester will become stuck in inode_dio_wait. Once
that happens, other processes accessing the inode will become stuck as
well.
Ensure that pnfs_read_through_mds() and pnfs_write_through_mds() clean
up correctly by calling hdr->completion_ops->completion() instead of
calling hdr->release() directly.
This can be reproduced (sometimes) by performing "storage failover
takeover" commands on NetApp filer while doing direct I/O from a client.
This can also be reproduced using SystemTap to simulate a failure while
doing direct I/O from a client (from Dave Wysochanski
<[email protected]>):
stap -v -g -e 'probe module("nfs_layout_nfsv41_files").function("nfs4_fl_prepare_ds").return { $return=NULL; exit(); }'
Suggested-by: Trond Myklebust <[email protected]>
Signed-off-by: Scott Mayhew <[email protected]>
Fixes: 1ca018d28d ("pNFS: Fix a memory leak when attempted pnfs fails")
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfs/pnfs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -2255,7 +2255,7 @@ pnfs_write_through_mds(struct nfs_pageio
nfs_pageio_reset_write_mds(desc);
mirror->pg_recoalesce = 1;
}
- hdr->release(hdr);
+ hdr->completion_ops->completion(hdr);
}
static enum pnfs_try_status
@@ -2378,7 +2378,7 @@ pnfs_read_through_mds(struct nfs_pageio_
nfs_pageio_reset_read_mds(desc);
mirror->pg_recoalesce = 1;
}
- hdr->release(hdr);
+ hdr->completion_ops->completion(hdr);
}
/*
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 6dc52b15c4a4 upstream.
Cavium ThunderX's erratum 27456 results in a corruption of icache
entries that are loaded from memory that is mapped as non-global
(i.e. ASID-tagged).
As KPTI is based on memory being mapped non-global, let's prevent
it from kicking in if this erratum is detected.
Signed-off-by: Marc Zyngier <[email protected]>
[will: Update comment]
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
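Not part of the upstream patch, and an assumption rather than something this
diff shows: the "command line option" referred to above is the arm64 kpti=
early parameter, so the forced ON/OFF path can be exercised with a boot
argument along these lines:

        kpti=0          # force kernel page table isolation off
        kpti=1          # force it on regardless of CPU detection

With this patch, the erratum check overwrites any such forcing on affected
ThunderX parts, so KPTI stays off there either way.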
arch/arm64/kernel/cpufeature.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -853,12 +853,23 @@ static int __kpti_forced; /* 0: not forc
static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
int __unused)
{
+ char const *str = "command line option";
u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
- /* Forced on command line? */
+ /*
+ * For reasons that aren't entirely clear, enabling KPTI on Cavium
+ * ThunderX leads to apparent I-cache corruption of kernel text, which
+ * ends as well as you might imagine. Don't even try.
+ */
+ if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
+ str = "ARM64_WORKAROUND_CAVIUM_27456";
+ __kpti_forced = -1;
+ }
+
+ /* Forced? */
if (__kpti_forced) {
- pr_info_once("kernel page table isolation forced %s by command line option\n",
- __kpti_forced > 0 ? "ON" : "OFF");
+ pr_info_once("kernel page table isolation forced %s by %s\n",
+ __kpti_forced > 0 ? "ON" : "OFF", str);
return __kpti_forced > 0;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 09a8d6d48499 upstream.
In order to call into the firmware to apply workarounds, it is
useful to find out whether we're using HVC or SMC. Let's expose
this through the psci_ops.
Acked-by: Lorenzo Pieralisi <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
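Not part of the upstream patch: a sketch of the kind of consumer this
enables. Later mitigation code needs to issue a firmware call through
whichever conduit PSCI already uses; the function name and the choice of
PSCI_VERSION as the call are illustrative only.

#include <linux/arm-smccc.h>
#include <linux/psci.h>
#include <uapi/linux/psci.h>

static void demo_call_via_psci_conduit(void)
{
        struct arm_smccc_res res;

        switch (psci_ops.conduit) {
        case PSCI_CONDUIT_HVC:
                arm_smccc_hvc(PSCI_0_2_FN_PSCI_VERSION,
                              0, 0, 0, 0, 0, 0, 0, &res);
                break;
        case PSCI_CONDUIT_SMC:
                arm_smccc_smc(PSCI_0_2_FN_PSCI_VERSION,
                              0, 0, 0, 0, 0, 0, 0, &res);
                break;
        default:
                return;         /* PSCI_CONDUIT_NONE: no firmware conduit */
        }
        /* res.a0 now holds the PSCI version reported by firmware. */
}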
drivers/firmware/psci.c | 28 +++++++++++++++++++++++-----
include/linux/psci.h | 7 +++++++
2 files changed, 30 insertions(+), 5 deletions(-)
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -59,7 +59,9 @@ bool psci_tos_resident_on(int cpu)
return cpu == resident_cpu;
}
-struct psci_operations psci_ops;
+struct psci_operations psci_ops = {
+ .conduit = PSCI_CONDUIT_NONE,
+};
typedef unsigned long (psci_fn)(unsigned long, unsigned long,
unsigned long, unsigned long);
@@ -210,6 +212,22 @@ static unsigned long psci_migrate_info_u
0, 0, 0);
}
+static void set_conduit(enum psci_conduit conduit)
+{
+ switch (conduit) {
+ case PSCI_CONDUIT_HVC:
+ invoke_psci_fn = __invoke_psci_fn_hvc;
+ break;
+ case PSCI_CONDUIT_SMC:
+ invoke_psci_fn = __invoke_psci_fn_smc;
+ break;
+ default:
+ WARN(1, "Unexpected PSCI conduit %d\n", conduit);
+ }
+
+ psci_ops.conduit = conduit;
+}
+
static int get_set_conduit_method(struct device_node *np)
{
const char *method;
@@ -222,9 +240,9 @@ static int get_set_conduit_method(struct
}
if (!strcmp("hvc", method)) {
- invoke_psci_fn = __invoke_psci_fn_hvc;
+ set_conduit(PSCI_CONDUIT_HVC);
} else if (!strcmp("smc", method)) {
- invoke_psci_fn = __invoke_psci_fn_smc;
+ set_conduit(PSCI_CONDUIT_SMC);
} else {
pr_warn("invalid \"method\" property: %s\n", method);
return -EINVAL;
@@ -654,9 +672,9 @@ int __init psci_acpi_init(void)
pr_info("probing for conduit method from ACPI.\n");
if (acpi_psci_use_hvc())
- invoke_psci_fn = __invoke_psci_fn_hvc;
+ set_conduit(PSCI_CONDUIT_HVC);
else
- invoke_psci_fn = __invoke_psci_fn_smc;
+ set_conduit(PSCI_CONDUIT_SMC);
return psci_probe();
}
--- a/include/linux/psci.h
+++ b/include/linux/psci.h
@@ -25,6 +25,12 @@ bool psci_tos_resident_on(int cpu);
int psci_cpu_init_idle(unsigned int cpu);
int psci_cpu_suspend_enter(unsigned long index);
+enum psci_conduit {
+ PSCI_CONDUIT_NONE,
+ PSCI_CONDUIT_SMC,
+ PSCI_CONDUIT_HVC,
+};
+
struct psci_operations {
u32 (*get_version)(void);
int (*cpu_suspend)(u32 state, unsigned long entry_point);
@@ -34,6 +40,7 @@ struct psci_operations {
int (*affinity_info)(unsigned long target_affinity,
unsigned long lowest_affinity_level);
int (*migrate_info_type)(void);
+ enum psci_conduit conduit;
};
extern struct psci_operations psci_ops;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 6167ec5c9145 upstream.
A new feature of SMCCC 1.1 is that it offers firmware-based CPU
workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides
BP hardening for CVE-2017-5715.
If the host has some mitigation for this issue, report that
we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the
host workaround on every guest exit.
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Conflicts:
arch/arm/include/asm/kvm_host.h
arch/arm64/include/asm/kvm_host.h
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
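Not part of the upstream patch: a sketch of how an SMCCC 1.1-aware guest
would discover the workaround this patch advertises, by querying
ARCH_FEATURES for ARCH_WORKAROUND_1 and treating a non-negative return as
"implemented". The conduit choice and error handling are simplified
assumptions.

#include <linux/arm-smccc.h>
#include <linux/types.h>

static bool demo_guest_has_arch_workaround_1(void)
{
        struct arm_smccc_res res;

        /* Guests of this KVM host use HVC as the SMCCC conduit. */
        arm_smccc_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                      ARM_SMCCC_ARCH_WORKAROUND_1, 0, 0, 0, 0, 0, 0, &res);

        return (long)res.a0 >= 0;
}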
arch/arm/include/asm/kvm_host.h | 6 ++++++
arch/arm64/include/asm/kvm_host.h | 5 +++++
include/linux/arm-smccc.h | 5 +++++
virt/kvm/arm/psci.c | 9 ++++++++-
4 files changed, 24 insertions(+), 1 deletion(-)
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -301,4 +301,10 @@ int kvm_arm_vcpu_arch_has_attr(struct kv
/* All host FP/SIMD state is restored on guest exit, so nothing to save: */
static inline void kvm_fpsimd_flush_cpu_state(void) {}
+static inline bool kvm_arm_harden_branch_predictor(void)
+{
+ /* No way to detect it yet, pretend it is not there. */
+ return false;
+}
+
#endif /* __ARM_KVM_HOST_H__ */
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -396,4 +396,9 @@ static inline void kvm_fpsimd_flush_cpu_
sve_flush_cpu_state();
}
+static inline bool kvm_arm_harden_branch_predictor(void)
+{
+ return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
+}
+
#endif /* __ARM64_KVM_HOST_H__ */
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -73,6 +73,11 @@
ARM_SMCCC_SMC_32, \
0, 1)
+#define ARM_SMCCC_ARCH_WORKAROUND_1 \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ 0, 0x8000)
+
#ifndef __ASSEMBLY__
#include <linux/linkage.h>
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -405,13 +405,20 @@ int kvm_hvc_call_handler(struct kvm_vcpu
{
u32 func_id = smccc_get_function(vcpu);
u32 val = PSCI_RET_NOT_SUPPORTED;
+ u32 feature;
switch (func_id) {
case ARM_SMCCC_VERSION_FUNC_ID:
val = ARM_SMCCC_VERSION_1_1;
break;
case ARM_SMCCC_ARCH_FEATURES_FUNC_ID:
- /* Nothing supported yet */
+ feature = smccc_get_arg1(vcpu);
+ switch(feature) {
+ case ARM_SMCCC_ARCH_WORKAROUND_1:
+ if (kvm_arm_harden_branch_predictor())
+ val = 0;
+ break;
+ }
break;
default:
return kvm_psci_call(vcpu);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 90348689d500 upstream.
For those CPUs that require PSCI to perform a BP invalidation,
going all the way to the PSCI code for not much is a waste of
precious cycles. Let's terminate that call as early as possible.
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kvm/hyp/switch.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -17,6 +17,7 @@
#include <linux/types.h>
#include <linux/jump_label.h>
+#include <uapi/linux/psci.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
@@ -341,6 +342,18 @@ again:
if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
goto again;
+ if (exit_code == ARM_EXCEPTION_TRAP &&
+ (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC64 ||
+ kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC32) &&
+ vcpu_get_reg(vcpu, 0) == PSCI_0_2_FN_PSCI_VERSION) {
+ u64 val = PSCI_RET_NOT_SUPPORTED;
+ if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
+ val = 2;
+
+ vcpu_set_reg(vcpu, 0, val);
+ goto again;
+ }
+
if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
exit_code == ARM_EXCEPTION_TRAP) {
bool valid;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 84684fecd7ea upstream.
Instead of open coding the accesses to the various registers,
let's add explicit SMCCC accessors.
Reviewed-by: Christoffer Dall <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
virt/kvm/arm/psci.c | 52 ++++++++++++++++++++++++++++++++++++++++++----------
1 file changed, 42 insertions(+), 10 deletions(-)
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -32,6 +32,38 @@
#define AFFINITY_MASK(level) ~((0x1UL << ((level) * MPIDR_LEVEL_BITS)) - 1)
+static u32 smccc_get_function(struct kvm_vcpu *vcpu)
+{
+ return vcpu_get_reg(vcpu, 0);
+}
+
+static unsigned long smccc_get_arg1(struct kvm_vcpu *vcpu)
+{
+ return vcpu_get_reg(vcpu, 1);
+}
+
+static unsigned long smccc_get_arg2(struct kvm_vcpu *vcpu)
+{
+ return vcpu_get_reg(vcpu, 2);
+}
+
+static unsigned long smccc_get_arg3(struct kvm_vcpu *vcpu)
+{
+ return vcpu_get_reg(vcpu, 3);
+}
+
+static void smccc_set_retval(struct kvm_vcpu *vcpu,
+ unsigned long a0,
+ unsigned long a1,
+ unsigned long a2,
+ unsigned long a3)
+{
+ vcpu_set_reg(vcpu, 0, a0);
+ vcpu_set_reg(vcpu, 1, a1);
+ vcpu_set_reg(vcpu, 2, a2);
+ vcpu_set_reg(vcpu, 3, a3);
+}
+
static unsigned long psci_affinity_mask(unsigned long affinity_level)
{
if (affinity_level <= 3)
@@ -77,7 +109,7 @@ static unsigned long kvm_psci_vcpu_on(st
unsigned long context_id;
phys_addr_t target_pc;
- cpu_id = vcpu_get_reg(source_vcpu, 1) & MPIDR_HWID_BITMASK;
+ cpu_id = smccc_get_arg1(source_vcpu) & MPIDR_HWID_BITMASK;
if (vcpu_mode_is_32bit(source_vcpu))
cpu_id &= ~((u32) 0);
@@ -96,8 +128,8 @@ static unsigned long kvm_psci_vcpu_on(st
return PSCI_RET_INVALID_PARAMS;
}
- target_pc = vcpu_get_reg(source_vcpu, 2);
- context_id = vcpu_get_reg(source_vcpu, 3);
+ target_pc = smccc_get_arg2(source_vcpu);
+ context_id = smccc_get_arg3(source_vcpu);
kvm_reset_vcpu(vcpu);
@@ -116,7 +148,7 @@ static unsigned long kvm_psci_vcpu_on(st
* NOTE: We always update r0 (or x0) because for PSCI v0.1
* the general puspose registers are undefined upon CPU_ON.
*/
- vcpu_set_reg(vcpu, 0, context_id);
+ smccc_set_retval(vcpu, context_id, 0, 0, 0);
vcpu->arch.power_off = false;
smp_mb(); /* Make sure the above is visible */
@@ -136,8 +168,8 @@ static unsigned long kvm_psci_vcpu_affin
struct kvm *kvm = vcpu->kvm;
struct kvm_vcpu *tmp;
- target_affinity = vcpu_get_reg(vcpu, 1);
- lowest_affinity_level = vcpu_get_reg(vcpu, 2);
+ target_affinity = smccc_get_arg1(vcpu);
+ lowest_affinity_level = smccc_get_arg2(vcpu);
/* Determine target affinity mask */
target_affinity_mask = psci_affinity_mask(lowest_affinity_level);
@@ -210,7 +242,7 @@ int kvm_psci_version(struct kvm_vcpu *vc
static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
- unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0);
+ u32 psci_fn = smccc_get_function(vcpu);
unsigned long val;
int ret = 1;
@@ -277,14 +309,14 @@ static int kvm_psci_0_2_call(struct kvm_
break;
}
- vcpu_set_reg(vcpu, 0, val);
+ smccc_set_retval(vcpu, val, 0, 0, 0);
return ret;
}
static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
- unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0);
+ u32 psci_fn = smccc_get_function(vcpu);
unsigned long val;
switch (psci_fn) {
@@ -302,7 +334,7 @@ static int kvm_psci_0_1_call(struct kvm_
break;
}
- vcpu_set_reg(vcpu, 0, val);
+ smccc_set_retval(vcpu, val, 0, 0, 0);
return 1;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 1a2fb94e6a77 upstream.
As we're about to update the PSCI support, and because I'm lazy,
let's move the PSCI include file to include/kvm so that both
ARM architectures can find it.
Acked-by: Christoffer Dall <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/kvm_psci.h | 27 ---------------------------
arch/arm/kvm/handle_exit.c | 2 +-
arch/arm64/include/asm/kvm_psci.h | 27 ---------------------------
arch/arm64/kvm/handle_exit.c | 3 ++-
include/kvm/arm_psci.h | 27 +++++++++++++++++++++++++++
virt/kvm/arm/arm.c | 2 +-
virt/kvm/arm/psci.c | 3 ++-
7 files changed, 33 insertions(+), 58 deletions(-)
delete mode 100644 arch/arm/include/asm/kvm_psci.h
rename arch/arm64/include/asm/kvm_psci.h => include/kvm/arm_psci.h (89%)
--- a/arch/arm/include/asm/kvm_psci.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Copyright (C) 2012 - ARM Ltd
- * Author: Marc Zyngier <[email protected]>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-
-#ifndef __ARM_KVM_PSCI_H__
-#define __ARM_KVM_PSCI_H__
-
-#define KVM_ARM_PSCI_0_1 1
-#define KVM_ARM_PSCI_0_2 2
-
-int kvm_psci_version(struct kvm_vcpu *vcpu);
-int kvm_psci_call(struct kvm_vcpu *vcpu);
-
-#endif /* __ARM_KVM_PSCI_H__ */
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -21,7 +21,7 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_mmu.h>
-#include <asm/kvm_psci.h>
+#include <kvm/arm_psci.h>
#include <trace/events/kvm.h>
#include "trace.h"
--- a/arch/arm64/include/asm/kvm_psci.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Copyright (C) 2012,2013 - ARM Ltd
- * Author: Marc Zyngier <[email protected]>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-
-#ifndef __ARM64_KVM_PSCI_H__
-#define __ARM64_KVM_PSCI_H__
-
-#define KVM_ARM_PSCI_0_1 1
-#define KVM_ARM_PSCI_0_2 2
-
-int kvm_psci_version(struct kvm_vcpu *vcpu);
-int kvm_psci_call(struct kvm_vcpu *vcpu);
-
-#endif /* __ARM64_KVM_PSCI_H__ */
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -22,12 +22,13 @@
#include <linux/kvm.h>
#include <linux/kvm_host.h>
+#include <kvm/arm_psci.h>
+
#include <asm/esr.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_mmu.h>
-#include <asm/kvm_psci.h>
#include <asm/debug-monitors.h>
#define CREATE_TRACE_POINTS
--- /dev/null
+++ b/include/kvm/arm_psci.h
@@ -0,0 +1,27 @@
+/*
+ * Copyright (C) 2012,2013 - ARM Ltd
+ * Author: Marc Zyngier <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __KVM_ARM_PSCI_H__
+#define __KVM_ARM_PSCI_H__
+
+#define KVM_ARM_PSCI_0_1 1
+#define KVM_ARM_PSCI_0_2 2
+
+int kvm_psci_version(struct kvm_vcpu *vcpu);
+int kvm_psci_call(struct kvm_vcpu *vcpu);
+
+#endif /* __KVM_ARM_PSCI_H__ */
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -31,6 +31,7 @@
#include <linux/irqbypass.h>
#include <trace/events/kvm.h>
#include <kvm/arm_pmu.h>
+#include <kvm/arm_psci.h>
#define CREATE_TRACE_POINTS
#include "trace.h"
@@ -46,7 +47,6 @@
#include <asm/kvm_mmu.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
-#include <asm/kvm_psci.h>
#include <asm/sections.h>
#ifdef REQUIRES_VIRT
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -21,9 +21,10 @@
#include <asm/cputype.h>
#include <asm/kvm_emulate.h>
-#include <asm/kvm_psci.h>
#include <asm/kvm_host.h>
+#include <kvm/arm_psci.h>
+
#include <uapi/linux/psci.h>
/*
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 58e0b2239a4d upstream.
PSCI 1.0 can be trivially implemented by providing the FEATURES
call on top of PSCI 0.2 and returning 1.0 as the PSCI version.
We happily ignore everything else, as they are either optional or
are clarifications that do not require any additional change.
PSCI 1.0 is now the default until we decide to add a userspace
selection API.
Reviewed-by: Christoffer Dall <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
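Not part of the upstream patch: a sketch of what the new PSCI_FEATURES
support lets a guest do, namely probe whether an individual PSCI function is
implemented before relying on it. The conduit and the probed function are
illustrative assumptions.

#include <linux/arm-smccc.h>
#include <linux/types.h>
#include <uapi/linux/psci.h>

static bool demo_guest_psci_fn_supported(u32 psci_fn)
{
        struct arm_smccc_res res;

        arm_smccc_hvc(PSCI_1_0_FN_PSCI_FEATURES, psci_fn,
                      0, 0, 0, 0, 0, 0, &res);

        /* Zero or a positive feature flag set means "implemented". */
        return (long)res.a0 >= 0;
}

/* e.g. demo_guest_psci_fn_supported(PSCI_0_2_FN_SYSTEM_RESET) */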
include/kvm/arm_psci.h | 3 +++
virt/kvm/arm/psci.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 47 insertions(+), 1 deletion(-)
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -22,6 +22,9 @@
#define KVM_ARM_PSCI_0_1 PSCI_VERSION(0, 1)
#define KVM_ARM_PSCI_0_2 PSCI_VERSION(0, 2)
+#define KVM_ARM_PSCI_1_0 PSCI_VERSION(1, 0)
+
+#define KVM_ARM_PSCI_LATEST KVM_ARM_PSCI_1_0
int kvm_psci_version(struct kvm_vcpu *vcpu);
int kvm_psci_call(struct kvm_vcpu *vcpu);
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -234,7 +234,7 @@ static void kvm_psci_system_reset(struct
int kvm_psci_version(struct kvm_vcpu *vcpu)
{
if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
- return KVM_ARM_PSCI_0_2;
+ return KVM_ARM_PSCI_LATEST;
return KVM_ARM_PSCI_0_1;
}
@@ -313,6 +313,47 @@ static int kvm_psci_0_2_call(struct kvm_
return ret;
}
+static int kvm_psci_1_0_call(struct kvm_vcpu *vcpu)
+{
+ u32 psci_fn = smccc_get_function(vcpu);
+ u32 feature;
+ unsigned long val;
+ int ret = 1;
+
+ switch(psci_fn) {
+ case PSCI_0_2_FN_PSCI_VERSION:
+ val = KVM_ARM_PSCI_1_0;
+ break;
+ case PSCI_1_0_FN_PSCI_FEATURES:
+ feature = smccc_get_arg1(vcpu);
+ switch(feature) {
+ case PSCI_0_2_FN_PSCI_VERSION:
+ case PSCI_0_2_FN_CPU_SUSPEND:
+ case PSCI_0_2_FN64_CPU_SUSPEND:
+ case PSCI_0_2_FN_CPU_OFF:
+ case PSCI_0_2_FN_CPU_ON:
+ case PSCI_0_2_FN64_CPU_ON:
+ case PSCI_0_2_FN_AFFINITY_INFO:
+ case PSCI_0_2_FN64_AFFINITY_INFO:
+ case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+ case PSCI_0_2_FN_SYSTEM_OFF:
+ case PSCI_0_2_FN_SYSTEM_RESET:
+ case PSCI_1_0_FN_PSCI_FEATURES:
+ val = 0;
+ break;
+ default:
+ val = PSCI_RET_NOT_SUPPORTED;
+ break;
+ }
+ break;
+ default:
+ return kvm_psci_0_2_call(vcpu);
+ }
+
+ smccc_set_retval(vcpu, val, 0, 0, 0);
+ return ret;
+}
+
static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
@@ -355,6 +396,8 @@ static int kvm_psci_0_1_call(struct kvm_
int kvm_psci_call(struct kvm_vcpu *vcpu)
{
switch (kvm_psci_version(vcpu)) {
+ case KVM_ARM_PSCI_1_0:
+ return kvm_psci_1_0_call(vcpu);
case KVM_ARM_PSCI_0_2:
return kvm_psci_0_2_call(vcpu);
case KVM_ARM_PSCI_0_1:
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit d0a144f12a7c upstream.
As we're about to trigger a PSCI version explosion, it doesn't
hurt to introduce a PSCI_VERSION helper that is going to be
used everywhere.
Reviewed-by: Christoffer Dall <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
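Not part of the upstream patch: a small worked example of what the helper
encodes, matching the existing MAJOR/MINOR decode macros (major version in
bits [31:16], minor version in bits [15:0]); the checks are compile-time
only.

#include <linux/bug.h>
#include <linux/types.h>
#include <uapi/linux/psci.h>

/* PSCI_VERSION(0, 2) == 0x00000002, PSCI_VERSION(1, 0) == 0x00010000 */
static void demo_psci_version_encoding(void)
{
        BUILD_BUG_ON(PSCI_VERSION_MAJOR(PSCI_VERSION(1, 0)) != 1);
        BUILD_BUG_ON(PSCI_VERSION_MINOR(PSCI_VERSION(1, 0)) != 0);
        BUILD_BUG_ON(PSCI_VERSION(0, 2) != 2);
}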
include/kvm/arm_psci.h | 6 ++++--
include/uapi/linux/psci.h | 3 +++
virt/kvm/arm/psci.c | 4 +---
3 files changed, 8 insertions(+), 5 deletions(-)
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -18,8 +18,10 @@
#ifndef __KVM_ARM_PSCI_H__
#define __KVM_ARM_PSCI_H__
-#define KVM_ARM_PSCI_0_1 1
-#define KVM_ARM_PSCI_0_2 2
+#include <uapi/linux/psci.h>
+
+#define KVM_ARM_PSCI_0_1 PSCI_VERSION(0, 1)
+#define KVM_ARM_PSCI_0_2 PSCI_VERSION(0, 2)
int kvm_psci_version(struct kvm_vcpu *vcpu);
int kvm_psci_call(struct kvm_vcpu *vcpu);
--- a/include/uapi/linux/psci.h
+++ b/include/uapi/linux/psci.h
@@ -88,6 +88,9 @@
(((ver) & PSCI_VERSION_MAJOR_MASK) >> PSCI_VERSION_MAJOR_SHIFT)
#define PSCI_VERSION_MINOR(ver) \
((ver) & PSCI_VERSION_MINOR_MASK)
+#define PSCI_VERSION(maj, min) \
+ ((((maj) << PSCI_VERSION_MAJOR_SHIFT) & PSCI_VERSION_MAJOR_MASK) | \
+ ((min) & PSCI_VERSION_MINOR_MASK))
/* PSCI features decoding (>=1.0) */
#define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT 1
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -25,8 +25,6 @@
#include <kvm/arm_psci.h>
-#include <uapi/linux/psci.h>
-
/*
* This is an implementation of the Power State Coordination Interface
* as described in ARM document number ARM DEN 0022A.
@@ -222,7 +220,7 @@ static int kvm_psci_0_2_call(struct kvm_
* Bits[31:16] = Major Version = 0
* Bits[15:0] = Minor Version = 2
*/
- val = 2;
+ val = KVM_ARM_PSCI_0_2;
break;
case PSCI_0_2_FN_CPU_SUSPEND:
case PSCI_0_2_FN64_CPU_SUSPEND:
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit f5115e8869e1 upstream.
When handling an SMC trap, the "preferred return address" is set
to that of the SMC, and not the next PC (which is a departure from
the behaviour of an SMC that isn't trapped).
Increment PC in the handler, as the guest is otherwise forever
stuck...
Cc: [email protected]
Fixes: acfb3b883f6d ("arm64: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls")
Reviewed-by: Christoffer Dall <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kvm/handle_exit.c | 9 +++++++++
1 file changed, 9 insertions(+)
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -54,7 +54,16 @@ static int handle_hvc(struct kvm_vcpu *v
static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
+ /*
+ * "If an SMC instruction executed at Non-secure EL1 is
+ * trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
+ * Trap exception, not a Secure Monitor Call exception [...]"
+ *
+ * We need to advance the PC after the trap, as it would
+ * otherwise return to the same address...
+ */
vcpu_set_reg(vcpu, 0, ~0UL);
+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 1;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit aa6acde65e03 upstream.
Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
and can theoretically be attacked by malicious code.
This patch implements a PSCI-based mitigation for these CPUs when available.
The call into firmware will invalidate the branch predictor state, preventing
any malicious entries from affecting other victim contexts.
Co-developed-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
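Not part of the upstream patch: a sketch of what the hardening callback
installed below reduces to on the affected Cortex cores, namely a PSCI
version query issued purely for its branch-predictor-invalidating side
effect in updated firmware (the hyp-mode variant in bpi.S additionally
preserves x0-x17 around the SMC).

#include <linux/psci.h>

/* Illustrative only; the real callback is installed per CPU via
 * install_bp_hardening_cb() and invoked on exception entry. */
static void demo_psci_bp_hardening_cb(void)
{
        if (psci_ops.get_version)
                (void)psci_ops.get_version();
}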
arch/arm64/kernel/bpi.S | 24 +++++++++++++++++++++++
arch/arm64/kernel/cpu_errata.c | 42 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 66 insertions(+)
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -53,3 +53,27 @@ ENTRY(__bp_harden_hyp_vecs_start)
vectors __kvm_hyp_vector
.endr
ENTRY(__bp_harden_hyp_vecs_end)
+ENTRY(__psci_hyp_bp_inval_start)
+ sub sp, sp, #(8 * 18)
+ stp x16, x17, [sp, #(16 * 0)]
+ stp x14, x15, [sp, #(16 * 1)]
+ stp x12, x13, [sp, #(16 * 2)]
+ stp x10, x11, [sp, #(16 * 3)]
+ stp x8, x9, [sp, #(16 * 4)]
+ stp x6, x7, [sp, #(16 * 5)]
+ stp x4, x5, [sp, #(16 * 6)]
+ stp x2, x3, [sp, #(16 * 7)]
+ stp x0, x1, [sp, #(16 * 8)]
+ mov x0, #0x84000000
+ smc #0
+ ldp x16, x17, [sp, #(16 * 0)]
+ ldp x14, x15, [sp, #(16 * 1)]
+ ldp x12, x13, [sp, #(16 * 2)]
+ ldp x10, x11, [sp, #(16 * 3)]
+ ldp x8, x9, [sp, #(16 * 4)]
+ ldp x6, x7, [sp, #(16 * 5)]
+ ldp x4, x5, [sp, #(16 * 6)]
+ ldp x2, x3, [sp, #(16 * 7)]
+ ldp x0, x1, [sp, #(16 * 8)]
+ add sp, sp, #(8 * 18)
+ENTRY(__psci_hyp_bp_inval_end)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -67,6 +67,8 @@ static int cpu_enable_trap_ctr_access(vo
DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
#ifdef CONFIG_KVM
+extern char __psci_hyp_bp_inval_start[], __psci_hyp_bp_inval_end[];
+
static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end)
{
@@ -108,6 +110,9 @@ static void __install_bp_hardening_cb(bp
spin_unlock(&bp_lock);
}
#else
+#define __psci_hyp_bp_inval_start NULL
+#define __psci_hyp_bp_inval_end NULL
+
static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start,
const char *hyp_vecs_end)
@@ -132,6 +137,21 @@ static void install_bp_hardening_cb(con
__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
}
+
+#include <linux/psci.h>
+
+static int enable_psci_bp_hardening(void *data)
+{
+ const struct arm64_cpu_capabilities *entry = data;
+
+ if (psci_ops.get_version)
+ install_bp_hardening_cb(entry,
+ (bp_hardening_cb_t)psci_ops.get_version,
+ __psci_hyp_bp_inval_start,
+ __psci_hyp_bp_inval_end);
+
+ return 0;
+}
#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
#define MIDR_RANGE(model, min, max) \
@@ -282,6 +302,28 @@ const struct arm64_cpu_capabilities arm6
MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
},
#endif
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+ .enable = enable_psci_bp_hardening,
+ },
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+ .enable = enable_psci_bp_hardening,
+ },
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+ .enable = enable_psci_bp_hardening,
+ },
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+ .enable = enable_psci_bp_hardening,
+ },
+#endif
{
}
};
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shanker Donthineni <[email protected]>
Commit ec82b567a74f upstream.
Falkor is susceptible to branch predictor aliasing and can
theoretically be attacked by malicious code. This patch
implements a mitigation for these attacks, preventing any
malicious entries from affecting other victim contexts.
Signed-off-by: Shanker Donthineni <[email protected]>
[will: fix label name when !CONFIG_KVM and remove references to MIDR_FALKOR]
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/kvm_asm.h | 2 +
arch/arm64/kernel/bpi.S | 8 +++++++
arch/arm64/kernel/cpu_errata.c | 40 +++++++++++++++++++++++++++++++++++++--
arch/arm64/kvm/hyp/entry.S | 12 +++++++++++
arch/arm64/kvm/hyp/switch.c | 8 +++++++
6 files changed, 70 insertions(+), 3 deletions(-)
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -43,7 +43,8 @@
#define ARM64_SVE 22
#define ARM64_UNMAP_KERNEL_AT_EL0 23
#define ARM64_HARDEN_BRANCH_PREDICTOR 24
+#define ARM64_HARDEN_BP_POST_GUEST_EXIT 25
-#define ARM64_NCAPS 25
+#define ARM64_NCAPS 26
#endif /* __ASM_CPUCAPS_H */
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,8 @@ extern u32 __kvm_get_mdcr_el2(void);
extern u32 __init_stage2_translation(void);
+extern void __qcom_hyp_sanitize_btac_predictors(void);
+
#endif
#endif /* __ARM_KVM_ASM_H__ */
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -77,3 +77,11 @@ ENTRY(__psci_hyp_bp_inval_start)
ldp x0, x1, [sp, #(16 * 8)]
add sp, sp, #(8 * 18)
ENTRY(__psci_hyp_bp_inval_end)
+
+ENTRY(__qcom_hyp_sanitize_link_stack_start)
+ stp x29, x30, [sp, #-16]!
+ .rept 16
+ bl . + 4
+ .endr
+ ldp x29, x30, [sp], #16
+ENTRY(__qcom_hyp_sanitize_link_stack_end)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -68,6 +68,8 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_har
#ifdef CONFIG_KVM
extern char __psci_hyp_bp_inval_start[], __psci_hyp_bp_inval_end[];
+extern char __qcom_hyp_sanitize_link_stack_start[];
+extern char __qcom_hyp_sanitize_link_stack_end[];
static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end)
@@ -110,8 +112,10 @@ static void __install_bp_hardening_cb(bp
spin_unlock(&bp_lock);
}
#else
-#define __psci_hyp_bp_inval_start NULL
-#define __psci_hyp_bp_inval_end NULL
+#define __psci_hyp_bp_inval_start NULL
+#define __psci_hyp_bp_inval_end NULL
+#define __qcom_hyp_sanitize_link_stack_start NULL
+#define __qcom_hyp_sanitize_link_stack_end NULL
static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start,
@@ -152,6 +156,29 @@ static int enable_psci_bp_hardening(void
return 0;
}
+
+static void qcom_link_stack_sanitization(void)
+{
+ u64 tmp;
+
+ asm volatile("mov %0, x30 \n"
+ ".rept 16 \n"
+ "bl . + 4 \n"
+ ".endr \n"
+ "mov x30, %0 \n"
+ : "=&r" (tmp));
+}
+
+static int qcom_enable_link_stack_sanitization(void *data)
+{
+ const struct arm64_cpu_capabilities *entry = data;
+
+ install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
+ __qcom_hyp_sanitize_link_stack_start,
+ __qcom_hyp_sanitize_link_stack_end);
+
+ return 0;
+}
#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
#define MIDR_RANGE(model, min, max) \
@@ -323,6 +350,15 @@ const struct arm64_cpu_capabilities arm6
MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
.enable = enable_psci_bp_hardening,
},
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+ .enable = qcom_enable_link_stack_sanitization,
+ },
+ {
+ .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
+ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+ },
#endif
{
}
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -196,3 +196,15 @@ alternative_endif
eret
ENDPROC(__fpsimd_guest_restore)
+
+ENTRY(__qcom_hyp_sanitize_btac_predictors)
+ /**
+ * Call SMC64 with Silicon provider serviceID 23<<8 (0xc2001700)
+ * 0xC2000000-0xC200FFFF: assigned to SiP Service Calls
+ * b15-b0: contains SiP functionID
+ */
+ movz x0, #0x1700
+ movk x0, #0xc200, lsl #16
+ smc #0
+ ret
+ENDPROC(__qcom_hyp_sanitize_btac_predictors)
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -393,6 +393,14 @@ again:
/* 0 falls through to be handled out of EL2 */
}
+ if (cpus_have_const_cap(ARM64_HARDEN_BP_POST_GUEST_EXIT)) {
+ u32 midr = read_cpuid_id();
+
+ /* Apply BTAC predictors mitigation to all Falkor chips */
+ if ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)
+ __qcom_hyp_sanitize_btac_predictors();
+ }
+
fp_enabled = __fpsimd_enabled();
__sysreg_save_guest_state(guest_ctxt);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit a65d219fe5dc upstream.
Hook up MIDR values for the Cortex-A72 and Cortex-A75 CPUs, since they
will soon need MIDR matches for hardening the branch predictor.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cputype.h | 4 ++++
1 file changed, 4 insertions(+)
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -79,8 +79,10 @@
#define ARM_CPU_PART_AEM_V8 0xD0F
#define ARM_CPU_PART_FOUNDATION 0xD00
#define ARM_CPU_PART_CORTEX_A57 0xD07
+#define ARM_CPU_PART_CORTEX_A72 0xD08
#define ARM_CPU_PART_CORTEX_A53 0xD03
#define ARM_CPU_PART_CORTEX_A73 0xD09
+#define ARM_CPU_PART_CORTEX_A75 0xD0A
#define APM_CPU_PART_POTENZA 0x000
@@ -97,7 +99,9 @@
#define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
#define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
+#define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
#define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
+#define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
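As a quick aside on how such MIDR values are consumed, the standalone sketch below (not kernel code; the helper names and the sample MIDR value are illustrative) decomposes a MIDR_EL1-style value into its architectural fields and shows how a match that ignores variant and revision -- the "all versions" case used by the hardening entries -- works.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MIDR_IMPLEMENTER(m)	(((m) >> 24) & 0xff)
#define MIDR_VARIANT(m)		(((m) >> 20) & 0xf)
#define MIDR_PARTNUM(m)		(((m) >> 4) & 0xfff)
#define MIDR_REVISION(m)	((m) & 0xf)

static bool midr_matches_all_versions(uint32_t midr, unsigned int imp, unsigned int part)
{
	/* "All versions": compare implementer and part number, ignore variant/revision. */
	return MIDR_IMPLEMENTER(midr) == imp && MIDR_PARTNUM(midr) == part;
}

int main(void)
{
	uint32_t midr = 0x411fd0a0;	/* made-up sample: ARM (0x41), part 0xd0a (Cortex-A75), r1p0 */

	printf("implementer=0x%02x part=0x%03x variant=%u revision=%u\n",
	       (unsigned)MIDR_IMPLEMENTER(midr), (unsigned)MIDR_PARTNUM(midr),
	       (unsigned)MIDR_VARIANT(midr), (unsigned)MIDR_REVISION(midr));
	printf("matches Cortex-A75, any version: %d\n",
	       midr_matches_all_versions(midr, 0x41, 0xd0a));
	return 0;
}

The kernel's MIDR_CPU_MODEL()/MIDR_ALL_VERSIONS() helpers express roughly the same idea over the implementer, architecture and part-number fields.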
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 5dfc6ed27710 upstream.
Software-step and PC alignment fault exceptions have higher priority than
instruction abort exceptions, so apply the BP hardening hooks there too
if the user PC appears to reside in kernel space.
Reported-by: Dan Hettena <[email protected]>
Reviewed-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 5 ++++-
arch/arm64/mm/fault.c | 9 +++++++++
2 files changed, 13 insertions(+), 1 deletion(-)
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -767,7 +767,10 @@ el0_sp_pc:
* Stack or PC alignment exception handling
*/
mrs x26, far_el1
- enable_daif
+ enable_da_f
+#ifdef CONFIG_TRACE_IRQFLAGS
+ bl trace_hardirqs_off
+#endif
ct_user_exit
mov x0, x26
mov x1, x25
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -731,6 +731,12 @@ asmlinkage void __exception do_sp_pc_abo
struct siginfo info;
struct task_struct *tsk = current;
+ if (user_mode(regs)) {
+ if (instruction_pointer(regs) > TASK_SIZE)
+ arm64_apply_bp_hardening();
+ local_irq_enable();
+ }
+
if (show_unhandled_signals && unhandled_signal(tsk, SIGBUS))
pr_info_ratelimited("%s[%d]: %s exception: pc=%p sp=%p\n",
tsk->comm, task_pid_nr(tsk),
@@ -790,6 +796,9 @@ asmlinkage int __exception do_debug_exce
if (interrupts_enabled(regs))
trace_hardirqs_off();
+ if (user_mode(regs) && instruction_pointer(regs) > TASK_SIZE)
+ arm64_apply_bp_hardening();
+
if (!inf->fn(addr, esr, regs)) {
rv = 1;
} else {
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 6840bdd73d07 upstream.
Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Conflicts:
arch/arm/include/asm/kvm_mmu.h
arch/arm64/include/asm/kvm_mmu.h
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/include/asm/kvm_mmu.h | 10 ++++++++++
arch/arm64/include/asm/kvm_mmu.h | 38 ++++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/hyp/switch.c | 2 +-
virt/kvm/arm/arm.c | 8 +++++++-
4 files changed, 56 insertions(+), 2 deletions(-)
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -221,6 +221,16 @@ static inline unsigned int kvm_get_vmid_
return 8;
}
+static inline void *kvm_get_hyp_vector(void)
+{
+ return kvm_ksym_ref(__kvm_hyp_vector);
+}
+
+static inline int kvm_map_vectors(void)
+{
+ return 0;
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* __ARM_KVM_MMU_H__ */
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -309,5 +309,43 @@ static inline unsigned int kvm_get_vmid_
return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8;
}
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#include <asm/mmu.h>
+
+static inline void *kvm_get_hyp_vector(void)
+{
+ struct bp_hardening_data *data = arm64_get_bp_hardening_data();
+ void *vect = kvm_ksym_ref(__kvm_hyp_vector);
+
+ if (data->fn) {
+ vect = __bp_harden_hyp_vecs_start +
+ data->hyp_vectors_slot * SZ_2K;
+
+ if (!has_vhe())
+ vect = lm_alias(vect);
+ }
+
+ return vect;
+}
+
+static inline int kvm_map_vectors(void)
+{
+ return create_hyp_mappings(kvm_ksym_ref(__bp_harden_hyp_vecs_start),
+ kvm_ksym_ref(__bp_harden_hyp_vecs_end),
+ PAGE_HYP_EXEC);
+}
+
+#else
+static inline void *kvm_get_hyp_vector(void)
+{
+ return kvm_ksym_ref(__kvm_hyp_vector);
+}
+
+static inline int kvm_map_vectors(void)
+{
+ return 0;
+}
+#endif
+
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -52,7 +52,7 @@ static void __hyp_text __activate_traps_
val &= ~(CPACR_EL1_FPEN | CPACR_EL1_ZEN);
write_sysreg(val, cpacr_el1);
- write_sysreg(__kvm_hyp_vector, vbar_el1);
+ write_sysreg(kvm_get_hyp_vector(), vbar_el1);
}
static void __hyp_text __activate_traps_nvhe(void)
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1158,7 +1158,7 @@ static void cpu_init_hyp_mode(void *dumm
pgd_ptr = kvm_mmu_get_httbr();
stack_page = __this_cpu_read(kvm_arm_hyp_stack_page);
hyp_stack_ptr = stack_page + PAGE_SIZE;
- vector_ptr = (unsigned long)kvm_ksym_ref(__kvm_hyp_vector);
+ vector_ptr = (unsigned long)kvm_get_hyp_vector();
__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
__cpu_init_stage2();
@@ -1403,6 +1403,12 @@ static int init_hyp_mode(void)
goto out_err;
}
+ err = kvm_map_vectors();
+ if (err) {
+ kvm_err("Cannot map vectors\n");
+ goto out_err;
+ }
+
/*
* Map the Hyp stack pages
*/
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit a8e4c0a919ae upstream.
We call arm64_apply_bp_hardening() from post_ttbr_update_workaround,
which has the unexpected consequence of being triggered on every
exception return to userspace when ARM64_SW_TTBR0_PAN is selected,
even if no context switch actually occurred.
This is a bit suboptimal, and it would be more logical to only
invalidate the branch predictor when we actually switch to
a different mm.
In order to solve this, move the call to arm64_apply_bp_hardening()
into check_and_switch_context(), where we're guaranteed to pick
a different mm context.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/mm/context.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -231,6 +231,9 @@ void check_and_switch_context(struct mm_
raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
switch_mm_fastpath:
+
+ arm64_apply_bp_hardening();
+
/*
* Defer TTBR0_EL1 setting for user threads to uaccess_enable() when
* emulating PAN.
@@ -246,8 +249,6 @@ asmlinkage void post_ttbr_update_workaro
"ic iallu; dsb nsh; isb",
ARM64_WORKAROUND_CAVIUM_27456,
CONFIG_CAVIUM_ERRATUM_27456));
-
- arm64_apply_bp_hardening();
}
static int asids_init(void)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 439e70e27a51 upstream.
The identity map is mapped as both writeable and executable by the
SWAPPER_MM_MMUFLAGS and this is relied upon by the kpti code to manage
a synchronisation flag. Update the .pushsection flags to reflect the
actual mapping attributes.
Reported-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpu-reset.S | 2 +-
arch/arm64/kernel/head.S | 2 +-
arch/arm64/kernel/sleep.S | 2 +-
arch/arm64/mm/proc.S | 8 ++++----
4 files changed, 7 insertions(+), 7 deletions(-)
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -16,7 +16,7 @@
#include <asm/virt.h>
.text
-.pushsection .idmap.text, "ax"
+.pushsection .idmap.text, "awx"
/*
* __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -371,7 +371,7 @@ ENDPROC(__primary_switched)
* end early head section, begin head code that is also used for
* hotplug and needs to have the same protections as the text region
*/
- .section ".idmap.text","ax"
+ .section ".idmap.text","awx"
ENTRY(kimage_vaddr)
.quad _text - TEXT_OFFSET
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -96,7 +96,7 @@ ENTRY(__cpu_suspend_enter)
ret
ENDPROC(__cpu_suspend_enter)
- .pushsection ".idmap.text", "ax"
+ .pushsection ".idmap.text", "awx"
ENTRY(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly
bl __cpu_setup
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -86,7 +86,7 @@ ENDPROC(cpu_do_suspend)
*
* x0: Address of context pointer
*/
- .pushsection ".idmap.text", "ax"
+ .pushsection ".idmap.text", "awx"
ENTRY(cpu_do_resume)
ldp x2, x3, [x0]
ldp x4, x5, [x0, #16]
@@ -152,7 +152,7 @@ ENTRY(cpu_do_switch_mm)
ret
ENDPROC(cpu_do_switch_mm)
- .pushsection ".idmap.text", "ax"
+ .pushsection ".idmap.text", "awx"
.macro __idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
adrp \tmp1, empty_zero_page
@@ -184,7 +184,7 @@ ENDPROC(idmap_cpu_replace_ttbr1)
.popsection
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
- .pushsection ".idmap.text", "ax"
+ .pushsection ".idmap.text", "awx"
.macro __idmap_kpti_get_pgtable_ent, type
dc cvac, cur_\()\type\()p // Ensure any existing dirty
@@ -373,7 +373,7 @@ ENDPROC(idmap_kpti_install_ng_mappings)
* Initialise the processor for turning the MMU on. Return in x0 the
* value of the SCTLR_EL1 register.
*/
- .pushsection ".idmap.text", "ax"
+ .pushsection ".idmap.text", "awx"
ENTRY(__cpu_setup)
tlbi vmalle1 // Invalidate local TLB
dsb nsh
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 0f15adbb2861 upstream.
Aliasing attacks against CPU branch predictors can allow an attacker to
redirect speculative control flow on some CPUs and potentially divulge
information from one context to another.
This patch adds initial skeleton code behind a new Kconfig option to
enable implementation-specific mitigations against these attacks for
CPUs that are affected.
Co-developed-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Conflicts:
arch/arm64/kernel/cpufeature.c
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 17 ++++++++
arch/arm64/include/asm/cpucaps.h | 3 +
arch/arm64/include/asm/mmu.h | 37 +++++++++++++++++++
arch/arm64/include/asm/sysreg.h | 1
arch/arm64/kernel/Makefile | 4 ++
arch/arm64/kernel/bpi.S | 55 ++++++++++++++++++++++++++++
arch/arm64/kernel/cpu_errata.c | 74 +++++++++++++++++++++++++++++++++++++++
arch/arm64/kernel/cpufeature.c | 1
arch/arm64/kernel/entry.S | 7 ++-
arch/arm64/mm/context.c | 2 +
arch/arm64/mm/fault.c | 17 ++++++++
11 files changed, 215 insertions(+), 3 deletions(-)
create mode 100644 arch/arm64/kernel/bpi.S
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -855,6 +855,23 @@ config UNMAP_KERNEL_AT_EL0
If unsure, say Y.
+config HARDEN_BRANCH_PREDICTOR
+ bool "Harden the branch predictor against aliasing attacks" if EXPERT
+ default y
+ help
+ Speculation attacks against some high-performance processors rely on
+ being able to manipulate the branch predictor for a victim context by
+ executing aliasing branches in the attacker context. Such attacks
+ can be partially mitigated against by clearing internal branch
+ predictor state and limiting the prediction logic in some situations.
+
+ This config option will take CPU-specific actions to harden the
+ branch predictor against aliasing attacks and may rely on specific
+ instruction sequences or control bits being set by the system
+ firmware.
+
+ If unsure, say Y.
+
menuconfig ARMV8_DEPRECATED
bool "Emulate deprecated/obsolete ARMv8 instructions"
depends on COMPAT
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -42,7 +42,8 @@
#define ARM64_HAS_DCPOP 21
#define ARM64_SVE 22
#define ARM64_UNMAP_KERNEL_AT_EL0 23
+#define ARM64_HARDEN_BRANCH_PREDICTOR 24
-#define ARM64_NCAPS 24
+#define ARM64_NCAPS 25
#endif /* __ASM_CPUCAPS_H */
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -41,6 +41,43 @@ static inline bool arm64_kernel_unmapped
cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
}
+typedef void (*bp_hardening_cb_t)(void);
+
+struct bp_hardening_data {
+ int hyp_vectors_slot;
+ bp_hardening_cb_t fn;
+};
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
+
+DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+
+static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
+{
+ return this_cpu_ptr(&bp_hardening_data);
+}
+
+static inline void arm64_apply_bp_hardening(void)
+{
+ struct bp_hardening_data *d;
+
+ if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
+ return;
+
+ d = arm64_get_bp_hardening_data();
+ if (d->fn)
+ d->fn();
+}
+#else
+static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
+{
+ return NULL;
+}
+
+static inline void arm64_apply_bp_hardening(void) { }
+#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
+
extern void paging_init(void);
extern void bootmem_init(void);
extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -438,6 +438,7 @@
/* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60
+#define ID_AA64PFR0_CSV2_SHIFT 56
#define ID_AA64PFR0_SVE_SHIFT 32
#define ID_AA64PFR0_GIC_SHIFT 24
#define ID_AA64PFR0_ASIMD_SHIFT 20
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -53,6 +53,10 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST) +=
arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
arm64-obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
+ifeq ($(CONFIG_KVM),y)
+arm64-obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR) += bpi.o
+endif
+
obj-y += $(arm64-obj-y) vdso/ probes/
obj-m += $(arm64-obj-m)
head-y := head.o
--- /dev/null
+++ b/arch/arm64/kernel/bpi.S
@@ -0,0 +1,55 @@
+/*
+ * Contains CPU specific branch predictor invalidation sequences
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/linkage.h>
+
+.macro ventry target
+ .rept 31
+ nop
+ .endr
+ b \target
+.endm
+
+.macro vectors target
+ ventry \target + 0x000
+ ventry \target + 0x080
+ ventry \target + 0x100
+ ventry \target + 0x180
+
+ ventry \target + 0x200
+ ventry \target + 0x280
+ ventry \target + 0x300
+ ventry \target + 0x380
+
+ ventry \target + 0x400
+ ventry \target + 0x480
+ ventry \target + 0x500
+ ventry \target + 0x580
+
+ ventry \target + 0x600
+ ventry \target + 0x680
+ ventry \target + 0x700
+ ventry \target + 0x780
+.endm
+
+ .align 11
+ENTRY(__bp_harden_hyp_vecs_start)
+ .rept 4
+ vectors __kvm_hyp_vector
+ .endr
+ENTRY(__bp_harden_hyp_vecs_end)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -60,6 +60,80 @@ static int cpu_enable_trap_ctr_access(vo
return 0;
}
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#include <asm/mmu_context.h>
+#include <asm/cacheflush.h>
+
+DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+
+#ifdef CONFIG_KVM
+static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
+ const char *hyp_vecs_end)
+{
+ void *dst = lm_alias(__bp_harden_hyp_vecs_start + slot * SZ_2K);
+ int i;
+
+ for (i = 0; i < SZ_2K; i += 0x80)
+ memcpy(dst + i, hyp_vecs_start, hyp_vecs_end - hyp_vecs_start);
+
+ flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
+}
+
+static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+ const char *hyp_vecs_start,
+ const char *hyp_vecs_end)
+{
+ static int last_slot = -1;
+ static DEFINE_SPINLOCK(bp_lock);
+ int cpu, slot = -1;
+
+ spin_lock(&bp_lock);
+ for_each_possible_cpu(cpu) {
+ if (per_cpu(bp_hardening_data.fn, cpu) == fn) {
+ slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu);
+ break;
+ }
+ }
+
+ if (slot == -1) {
+ last_slot++;
+ BUG_ON(((__bp_harden_hyp_vecs_end - __bp_harden_hyp_vecs_start)
+ / SZ_2K) <= last_slot);
+ slot = last_slot;
+ __copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end);
+ }
+
+ __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
+ __this_cpu_write(bp_hardening_data.fn, fn);
+ spin_unlock(&bp_lock);
+}
+#else
+static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+ const char *hyp_vecs_start,
+ const char *hyp_vecs_end)
+{
+ __this_cpu_write(bp_hardening_data.fn, fn);
+}
+#endif /* CONFIG_KVM */
+
+static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
+ bp_hardening_cb_t fn,
+ const char *hyp_vecs_start,
+ const char *hyp_vecs_end)
+{
+ u64 pfr0;
+
+ if (!entry->matches(entry, SCOPE_LOCAL_CPU))
+ return;
+
+ pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+ if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
+ return;
+
+ __install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
+}
+#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
+
#define MIDR_RANGE(model, min, max) \
.def_scope = SCOPE_LOCAL_CPU, \
.matches = is_affected_midr_range, \
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -146,6 +146,7 @@ static const struct arm64_ftr_bits ftr_i
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -722,12 +722,15 @@ el0_ia:
* Instruction abort handling
*/
mrs x26, far_el1
- enable_daif
+ enable_da_f
+#ifdef CONFIG_TRACE_IRQFLAGS
+ bl trace_hardirqs_off
+#endif
ct_user_exit
mov x0, x26
mov x1, x25
mov x2, sp
- bl do_mem_abort
+ bl do_el0_ia_bp_hardening
b ret_to_user
el0_fpsimd_acc:
/*
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaro
"ic iallu; dsb nsh; isb",
ARM64_WORKAROUND_CAVIUM_27456,
CONFIG_CAVIUM_ERRATUM_27456));
+
+ arm64_apply_bp_hardening();
}
static int asids_init(void)
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -707,6 +707,23 @@ asmlinkage void __exception do_mem_abort
arm64_notify_die("", regs, &info, esr);
}
+asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
+ unsigned int esr,
+ struct pt_regs *regs)
+{
+ /*
+ * We've taken an instruction abort from userspace and not yet
+ * re-enabled IRQs. If the address is a kernel address, apply
+ * BP hardening prior to enabling IRQs and pre-emption.
+ */
+ if (addr > TASK_SIZE)
+ arm64_apply_bp_hardening();
+
+ local_irq_enable();
+ do_mem_abort(addr, esr, regs);
+}
+
+
asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
unsigned int esr,
struct pt_regs *regs)
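To make the shape of the skeleton above easier to follow, here is a deliberately simplified userspace model (my own sketch; the names mirror the patch, but nothing here is kernel code and there is no real per-CPU machinery): each CPU owns a slot holding an optional hardening callback, affected CPUs get one installed, and the apply path is a cheap indirect call that is a no-op everywhere else.

#include <stdio.h>

#define NR_CPUS 4

typedef void (*bp_hardening_cb_t)(void);

struct bp_hardening_data {
	int hyp_vectors_slot;		/* which hyp vector slot; unused in this model */
	bp_hardening_cb_t fn;		/* mitigation to run, or NULL */
};

/* Stand-in for the per-CPU variable: one slot per CPU. */
static struct bp_hardening_data bp_data[NR_CPUS];

static void psci_based_mitigation(void)
{
	puts("invalidate the branch predictor (e.g. by trapping into firmware)");
}

static void install_bp_hardening_cb(int cpu, bp_hardening_cb_t fn, int slot)
{
	bp_data[cpu].fn = fn;
	bp_data[cpu].hyp_vectors_slot = slot;
}

static void apply_bp_hardening(int cpu)
{
	if (bp_data[cpu].fn)		/* cheap no-op on CPUs that need nothing */
		bp_data[cpu].fn();
}

int main(void)
{
	install_bp_hardening_cb(0, psci_based_mitigation, 0);	/* pretend CPU0 is affected */

	apply_bp_hardening(0);		/* runs the mitigation */
	apply_bp_hardening(1);		/* nothing installed: no-op */
	return 0;
}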
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit d68e3ba5303f upstream.
Entry into recent versions of ARM Trusted Firmware will invalidate the CPU
branch predictor state in order to protect against aliasing attacks.
This patch exposes the PSCI "VERSION" function via psci_ops, so that it
can be invoked outside of the PSCI driver where necessary.
Acked-by: Lorenzo Pieralisi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/firmware/psci.c | 2 ++
include/linux/psci.h | 1 +
2 files changed, 3 insertions(+)
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -496,6 +496,8 @@ static void __init psci_init_migrate(voi
static void __init psci_0_2_set_functions(void)
{
pr_info("Using standard PSCI v0.2 function IDs\n");
+ psci_ops.get_version = psci_get_version;
+
psci_function_id[PSCI_FN_CPU_SUSPEND] =
PSCI_FN_NATIVE(0_2, CPU_SUSPEND);
psci_ops.cpu_suspend = psci_cpu_suspend;
--- a/include/linux/psci.h
+++ b/include/linux/psci.h
@@ -26,6 +26,7 @@ int psci_cpu_init_idle(unsigned int cpu)
int psci_cpu_suspend_enter(unsigned long index);
struct psci_operations {
+ u32 (*get_version)(void);
int (*cpu_suspend)(u32 state, unsigned long entry_point);
int (*cpu_off)(u32 state);
int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
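As the commit message notes, simply entering recent firmware invalidates the predictor, so exposing the VERSION call gives callers outside the PSCI driver a convenient entry point. A hedged sketch of such a caller follows; the struct layout and version macros are local stand-ins that follow the PSCI major/minor packing, not the kernel definitions, and the "firmware" is faked.

#include <stdint.h>
#include <stdio.h>

/* PSCI packs the major version into bits [31:16] and the minor into [15:0]. */
#define PSCI_VERSION_MAJOR(v)	(((v) >> 16) & 0xffff)
#define PSCI_VERSION_MINOR(v)	((v) & 0xffff)

struct psci_operations {
	uint32_t (*get_version)(void);	/* the hook added by the patch above */
};

static uint32_t fake_firmware_get_version(void)
{
	return (1u << 16) | 0;		/* pretend the firmware reports PSCI 1.0 */
}

static struct psci_operations psci_ops = {
	.get_version = fake_firmware_get_version,
};

int main(void)
{
	if (psci_ops.get_version) {
		uint32_t v = psci_ops.get_version();

		printf("firmware implements PSCI %u.%u\n",
		       (unsigned)PSCI_VERSION_MAJOR(v), (unsigned)PSCI_VERSION_MINOR(v));
	} else {
		puts("get_version not set (PSCI 0.1 or no PSCI)");
	}
	return 0;
}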
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Suzuki K Poulose <[email protected]>
Commit 55b35d070c25 upstream.
When a CPU is brought up after we have finalised the system-wide
capabilities (i.e. features and errata), we make sure the new CPU
doesn't need a new errata workaround which has not been detected
already. However, we don't run the enable() method on the new CPU
for the errata workarounds already detected, which could leave the
new CPU running without the required workarounds.
It is up to the "enable()" method to decide if this CPU should do
something about the erratum.
Fixes: commit 6a6efbb45b7d95c84 ("arm64: Verify CPU errata work arounds on hotplugged CPU")
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Andre Przywara <[email protected]>
Cc: Dave Martin <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -221,15 +221,18 @@ void verify_local_cpu_errata_workarounds
{
const struct arm64_cpu_capabilities *caps = arm64_errata;
- for (; caps->matches; caps++)
- if (!cpus_have_cap(caps->capability) &&
- caps->matches(caps, SCOPE_LOCAL_CPU)) {
+ for (; caps->matches; caps++) {
+ if (cpus_have_cap(caps->capability)) {
+ if (caps->enable)
+ caps->enable((void *)caps);
+ } else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
pr_crit("CPU%d: Requires work around for %s, not detected"
" at boot time\n",
smp_processor_id(),
caps->desc ? : "an erratum");
cpu_die_early();
}
+ }
}
void update_cpu_errata_workarounds(void)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit f71c2ffcb20d upstream.
Like we've done for get_user and put_user, ensure that user pointers
are masked before invoking the underlying __arch_{clear,copy_*}_user
operations.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/uaccess.h | 29 ++++++++++++++++++++++-------
arch/arm64/kernel/arm64ksyms.c | 4 ++--
arch/arm64/lib/clear_user.S | 6 +++---
arch/arm64/lib/copy_in_user.S | 5 +++--
4 files changed, 30 insertions(+), 14 deletions(-)
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -391,20 +391,35 @@ do { \
#define put_user __put_user
extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
-#define raw_copy_from_user __arch_copy_from_user
+#define raw_copy_from_user(to, from, n) \
+({ \
+ __arch_copy_from_user((to), __uaccess_mask_ptr(from), (n)); \
+})
+
extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
-#define raw_copy_to_user __arch_copy_to_user
-extern unsigned long __must_check raw_copy_in_user(void __user *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
+#define raw_copy_to_user(to, from, n) \
+({ \
+ __arch_copy_to_user(__uaccess_mask_ptr(to), (from), (n)); \
+})
+
+extern unsigned long __must_check __arch_copy_in_user(void __user *to, const void __user *from, unsigned long n);
+#define raw_copy_in_user(to, from, n) \
+({ \
+ __arch_copy_in_user(__uaccess_mask_ptr(to), \
+ __uaccess_mask_ptr(from), (n)); \
+})
+
#define INLINE_COPY_TO_USER
#define INLINE_COPY_FROM_USER
-static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
+extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned long n);
+static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
{
if (access_ok(VERIFY_WRITE, to, n))
- n = __clear_user(__uaccess_mask_ptr(to), n);
+ n = __arch_clear_user(__uaccess_mask_ptr(to), n);
return n;
}
+#define clear_user __clear_user
extern long strncpy_from_user(char *dest, const char __user *src, long count);
@@ -418,7 +433,7 @@ extern unsigned long __must_check __copy
static inline int __copy_from_user_flushcache(void *dst, const void __user *src, unsigned size)
{
kasan_check_write(dst, size);
- return __copy_user_flushcache(dst, src, size);
+ return __copy_user_flushcache(dst, __uaccess_mask_ptr(src), size);
}
#endif
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -37,8 +37,8 @@ EXPORT_SYMBOL(clear_page);
/* user mem (segment) */
EXPORT_SYMBOL(__arch_copy_from_user);
EXPORT_SYMBOL(__arch_copy_to_user);
-EXPORT_SYMBOL(__clear_user);
-EXPORT_SYMBOL(raw_copy_in_user);
+EXPORT_SYMBOL(__arch_clear_user);
+EXPORT_SYMBOL(__arch_copy_in_user);
/* physical memory */
EXPORT_SYMBOL(memstart_addr);
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -21,7 +21,7 @@
.text
-/* Prototype: int __clear_user(void *addr, size_t sz)
+/* Prototype: int __arch_clear_user(void *addr, size_t sz)
* Purpose : clear some user memory
* Params : addr - user memory address to clear
* : sz - number of bytes to clear
@@ -29,7 +29,7 @@
*
* Alignment fixed up by hardware.
*/
-ENTRY(__clear_user)
+ENTRY(__arch_clear_user)
uaccess_enable_not_uao x2, x3, x4
mov x2, x1 // save the size for fixup return
subs x1, x1, #8
@@ -52,7 +52,7 @@ uao_user_alternative 9f, strb, sttrb, wz
5: mov x0, #0
uaccess_disable_not_uao x2, x3
ret
-ENDPROC(__clear_user)
+ENDPROC(__arch_clear_user)
.section .fixup,"ax"
.align 2
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -64,14 +64,15 @@
.endm
end .req x5
-ENTRY(raw_copy_in_user)
+
+ENTRY(__arch_copy_in_user)
uaccess_enable_not_uao x3, x4, x5
add end, x0, x2
#include "copy_template.S"
uaccess_disable_not_uao x3, x4
mov x0, #0
ret
-ENDPROC(raw_copy_in_user)
+ENDPROC(__arch_copy_in_user)
.section .fixup,"ax"
.align 2
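The masking itself is done by __uaccess_mask_ptr(), which this diff only references. As a rough C model of the idea (an illustrative sketch with a made-up limit, not the arm64 implementation, which does this branchlessly in inline assembly followed by a speculation barrier): a pointer with any bit set above the user address limit is forced to NULL before the copy routine can dereference it, so a mispredicted access_ok() cannot be used to speculatively read kernel memory through these helpers.

#include <stdint.h>
#include <stdio.h>

/* Arbitrary stand-in for the task's user address limit. */
#define USER_ADDR_LIMIT		((1UL << 48) - 1)

static const void *mask_user_ptr(const void *ptr)
{
	uintptr_t addr = (uintptr_t)ptr;

	/* Any bit set above the limit? Force the pointer to NULL. */
	return (addr & ~(uintptr_t)USER_ADDR_LIMIT) ? NULL : ptr;
}

int main(void)
{
	printf("%p\n", (void *)mask_user_ptr((void *)0x00007fffdeadbeefUL)); /* in range: kept */
	printf("%p\n", (void *)mask_user_ptr((void *)0xffff000012345678UL)); /* kernel-looking: NULL */
	return 0;
}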
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 0a0d111d40fd upstream.
In order to invoke the CPU capability ->matches callback from the ->enable
callback for applying local-CPU workarounds, we need a handle on the
capability structure.
This patch passes a pointer to the capability structure to the ->enable
callback.
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1215,7 +1215,7 @@ void __init enable_cpu_capabilities(cons
* uses an IPI, giving us a PSTATE that disappears when
* we return.
*/
- stop_machine(caps->enable, NULL, cpu_online_mask);
+ stop_machine(caps->enable, (void *)caps, cpu_online_mask);
}
}
}
@@ -1259,7 +1259,7 @@ verify_local_cpu_features(const struct a
cpu_die_early();
}
if (caps->enable)
- caps->enable(NULL);
+ caps->enable((void *)caps);
}
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 91b2d3442f6a upstream.
The arm64 futex code has some explicit dereferencing of user pointers
when performing atomic operations in response to a futex command. This
patch uses masking to limit any speculative futex operations to within
the user address space.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/futex.h | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -48,9 +48,10 @@ do { \
} while (0)
static inline int
-arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
+arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
{
int oldval = 0, ret, tmp;
+ u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
pagefault_disable();
@@ -88,15 +89,17 @@ arch_futex_atomic_op_inuser(int op, int
}
static inline int
-futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
u32 oldval, u32 newval)
{
int ret = 0;
u32 val, tmp;
+ u32 __user *uaddr;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
+ if (!access_ok(VERIFY_WRITE, _uaddr, sizeof(u32)))
return -EFAULT;
+ uaddr = __uaccess_mask_ptr(_uaddr);
uaccess_enable();
asm volatile("// futex_atomic_cmpxchg_inatomic\n"
" prfm pstl1strm, %2\n"
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arvind Yadav <[email protected]>
commit c0f71bbb810237a38734607ca4599632f7f5d47f upstream.
Here, hdpvr_register_videodev() is responsible for setting up and
registering the video device, and it also defines and initializes a
worker. Since hdpvr_register_videodev() is called last from
hdpvr_probe(), there is no need to flush any work in the probe error
path. Instead, if hdpvr_probe() fails, unregister v4l2 and free the
buffers and memory that were allocated.
Signed-off-by: Arvind Yadav <[email protected]>
Reported-by: Andrey Konovalov <[email protected]>
Tested-by: Andrey Konovalov <[email protected]>
Signed-off-by: Hans Verkuil <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/usb/hdpvr/hdpvr-core.c | 26 +++++++++++++++-----------
1 file changed, 15 insertions(+), 11 deletions(-)
--- a/drivers/media/usb/hdpvr/hdpvr-core.c
+++ b/drivers/media/usb/hdpvr/hdpvr-core.c
@@ -292,7 +292,7 @@ static int hdpvr_probe(struct usb_interf
/* register v4l2_device early so it can be used for printks */
if (v4l2_device_register(&interface->dev, &dev->v4l2_dev)) {
dev_err(&interface->dev, "v4l2_device_register failed\n");
- goto error;
+ goto error_free_dev;
}
mutex_init(&dev->io_mutex);
@@ -301,7 +301,7 @@ static int hdpvr_probe(struct usb_interf
dev->usbc_buf = kmalloc(64, GFP_KERNEL);
if (!dev->usbc_buf) {
v4l2_err(&dev->v4l2_dev, "Out of memory\n");
- goto error;
+ goto error_v4l2_unregister;
}
init_waitqueue_head(&dev->wait_buffer);
@@ -339,13 +339,13 @@ static int hdpvr_probe(struct usb_interf
}
if (!dev->bulk_in_endpointAddr) {
v4l2_err(&dev->v4l2_dev, "Could not find bulk-in endpoint\n");
- goto error;
+ goto error_put_usb;
}
/* init the device */
if (hdpvr_device_init(dev)) {
v4l2_err(&dev->v4l2_dev, "device init failed\n");
- goto error;
+ goto error_put_usb;
}
mutex_lock(&dev->io_mutex);
@@ -353,7 +353,7 @@ static int hdpvr_probe(struct usb_interf
mutex_unlock(&dev->io_mutex);
v4l2_err(&dev->v4l2_dev,
"allocating transfer buffers failed\n");
- goto error;
+ goto error_put_usb;
}
mutex_unlock(&dev->io_mutex);
@@ -361,7 +361,7 @@ static int hdpvr_probe(struct usb_interf
retval = hdpvr_register_i2c_adapter(dev);
if (retval < 0) {
v4l2_err(&dev->v4l2_dev, "i2c adapter register failed\n");
- goto error;
+ goto error_free_buffers;
}
client = hdpvr_register_ir_rx_i2c(dev);
@@ -394,13 +394,17 @@ static int hdpvr_probe(struct usb_interf
reg_fail:
#if IS_ENABLED(CONFIG_I2C)
i2c_del_adapter(&dev->i2c_adapter);
+error_free_buffers:
#endif
+ hdpvr_free_buffers(dev);
+error_put_usb:
+ usb_put_dev(dev->udev);
+ kfree(dev->usbc_buf);
+error_v4l2_unregister:
+ v4l2_device_unregister(&dev->v4l2_dev);
+error_free_dev:
+ kfree(dev);
error:
- if (dev) {
- flush_work(&dev->worker);
- /* this frees allocated memory */
- hdpvr_delete(dev);
- }
return retval;
}
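The fix follows the usual staged-unwind idiom for probe error paths: each failure point jumps to a label that releases only what has already been set up, in reverse order. A generic sketch of the idiom (the resources and names here are illustrative, not the driver's):

#include <stdio.h>
#include <stdlib.h>

static int setup_demo(int fail_second_step)
{
	char *a, *b;

	a = malloc(16);					/* step 1 */
	if (!a)
		goto err;

	b = fail_second_step ? NULL : malloc(16);	/* step 2 (simulated failure) */
	if (!b)
		goto err_free_a;

	puts("setup complete");
	free(b);					/* tidy up for the demo only */
	free(a);
	return 0;

err_free_a:						/* undo step 1 only */
	free(a);
err:
	return -1;
}

int main(void)
{
	return setup_demo(1) ? EXIT_FAILURE : EXIT_SUCCESS;
}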
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt (VMware) <[email protected]>
commit ad0f1d9d65938aec72a698116cd73a980916895e upstream.
When the rto_push_irq_work_func() is called, it looks at the RT overloaded
bitmask in the root domain via the runqueue (rq->rd). The problem is that
during CPU up and down, nothing here stops rq->rd from changing between
taking the rq->rd->rto_lock and releasing it. That means the lock that is
released is not the same lock that was taken.
Instead of using this_rq()->rd to get the root domain, as the irq work is
part of the root domain, we can simply get the root domain from the irq work
that is passed to the routine:
container_of(work, struct root_domain, rto_push_work)
This keeps the root domain consistent.
Reported-by: Pavan Kondeti <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/sched/rt.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1907,9 +1907,8 @@ static void push_rt_tasks(struct rq *rq)
* the rt_loop_next will cause the iterator to perform another scan.
*
*/
-static int rto_next_cpu(struct rq *rq)
+static int rto_next_cpu(struct root_domain *rd)
{
- struct root_domain *rd = rq->rd;
int next;
int cpu;
@@ -1985,7 +1984,7 @@ static void tell_cpu_to_push(struct rq *
* Otherwise it is finishing up and an ipi needs to be sent.
*/
if (rq->rd->rto_cpu < 0)
- cpu = rto_next_cpu(rq);
+ cpu = rto_next_cpu(rq->rd);
raw_spin_unlock(&rq->rd->rto_lock);
@@ -1998,6 +1997,8 @@ static void tell_cpu_to_push(struct rq *
/* Called from hardirq context */
void rto_push_irq_work_func(struct irq_work *work)
{
+ struct root_domain *rd =
+ container_of(work, struct root_domain, rto_push_work);
struct rq *rq;
int cpu;
@@ -2013,18 +2014,18 @@ void rto_push_irq_work_func(struct irq_w
raw_spin_unlock(&rq->lock);
}
- raw_spin_lock(&rq->rd->rto_lock);
+ raw_spin_lock(&rd->rto_lock);
/* Pass the IPI to the next rt overloaded queue */
- cpu = rto_next_cpu(rq);
+ cpu = rto_next_cpu(rd);
- raw_spin_unlock(&rq->rd->rto_lock);
+ raw_spin_unlock(&rd->rto_lock);
if (cpu < 0)
return;
/* Try the next RT overloaded CPU */
- irq_work_queue_on(&rq->rd->rto_push_work, cpu);
+ irq_work_queue_on(&rd->rto_push_work, cpu);
}
#endif /* HAVE_RT_PUSH_IPI */
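The fix hinges on the container_of() pattern: because rto_push_work is embedded in struct root_domain, the work pointer alone is enough to recover the owning root domain, with no detour through this_rq()->rd. A standalone illustration follows (my own sketch; the kernel's container_of() additionally type-checks the member, and the structures here are trivial stand-ins).

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct irq_work {
	int pending;
};

struct root_domain {
	int id;
	struct irq_work rto_push_work;	/* embedded member */
};

static void rto_push_irq_work_func(struct irq_work *work)
{
	/* Recover the root_domain the work item is embedded in. */
	struct root_domain *rd = container_of(work, struct root_domain, rto_push_work);

	printf("handling push work for root domain %d\n", rd->id);
}

int main(void)
{
	struct root_domain rd = { .id = 42 };

	rto_push_irq_work_func(&rd.rto_push_work);
	return 0;
}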
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 4e6020565596 upstream.
Break-before-make is not needed when transitioning from Global to
Non-Global mappings, provided that the contiguous hint is not being used.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/mm/mmu.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -117,6 +117,10 @@ static bool pgattr_change_is_safe(u64 ol
if ((old | new) & PTE_CONT)
return false;
+ /* Transitioning from Global to Non-Global is safe */
+ if (((old ^ new) == PTE_NG) && (new & PTE_NG))
+ return true;
+
return ((old ^ new) & ~mask) == 0;
}
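For clarity, the new test accepts a transition only when the XOR of the old and new descriptors is exactly the nG bit and the new value has it set, i.e. Global to Non-Global and nothing else. A minimal standalone check (the descriptor attribute values are made up for illustration):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_NG	(1ULL << 11)	/* the nG bit in a stage-1 descriptor */

static bool global_to_nonglobal_only(uint64_t old, uint64_t new)
{
	/* XOR isolates the bits that differ; accept only "nG set, nothing else changed". */
	return ((old ^ new) == PTE_NG) && (new & PTE_NG);
}

int main(void)
{
	uint64_t old = 0x0040000000000703ULL;	/* made-up attributes with nG clear */

	printf("%d\n", global_to_nonglobal_only(old, old | PTE_NG));		   /* 1: safe */
	printf("%d\n", global_to_nonglobal_only(old | PTE_NG, old));		   /* 0: nG being cleared */
	printf("%d\n", global_to_nonglobal_only(old, old | PTE_NG | (1ULL << 7))); /* 0: another bit changed too */
	return 0;
}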
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit f992b4dfd58b upstream.
Defaulting to global mappings for kernel space is generally good for
performance and appears to be necessary for Cavium ThunderX. If we
subsequently decide that we need to enable kpti, then we need to rewrite
our existing page table entries to be non-global. This is fiddly, and
made worse by the possible use of contiguous mappings, which require
a strict break-before-make sequence.
Since the enable callback runs on each online CPU from stop_machine
context, we can have all CPUs enter the idmap, where secondaries can
wait for the primary CPU to rewrite swapper with its MMU off. It's all
fairly horrible, but at least it only runs once.
Tested-by: Marc Zyngier <[email protected]>
Reviewed-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Conflicts:
arch/arm64/mm/proc.S
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 4
arch/arm64/kernel/cpufeature.c | 25 ++++
arch/arm64/mm/proc.S | 202 +++++++++++++++++++++++++++++++++++--
3 files changed, 224 insertions(+), 7 deletions(-)
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -489,6 +489,10 @@ alternative_else_nop_endif
#endif
.endm
+ .macro pte_to_phys, phys, pte
+ and \phys, \pte, #(((1 << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
+ .endm
+
/**
* Errata workaround prior to disable MMU. Insert an ISB immediately prior
* to executing the MSR that will change SCTLR_ELn[M] from a value of 1 to 0.
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -878,6 +878,30 @@ static bool unmap_kernel_at_el0(const st
ID_AA64PFR0_CSV3_SHIFT);
}
+static int kpti_install_ng_mappings(void *__unused)
+{
+ typedef void (kpti_remap_fn)(int, int, phys_addr_t);
+ extern kpti_remap_fn idmap_kpti_install_ng_mappings;
+ kpti_remap_fn *remap_fn;
+
+ static bool kpti_applied = false;
+ int cpu = smp_processor_id();
+
+ if (kpti_applied)
+ return 0;
+
+ remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
+
+ cpu_install_idmap();
+ remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
+ cpu_uninstall_idmap();
+
+ if (!cpu)
+ kpti_applied = true;
+
+ return 0;
+}
+
static int __init parse_kpti(char *str)
{
bool enabled;
@@ -984,6 +1008,7 @@ static const struct arm64_cpu_capabiliti
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
.def_scope = SCOPE_SYSTEM,
.matches = unmap_kernel_at_el0,
+ .enable = kpti_install_ng_mappings,
},
#endif
{
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -153,6 +153,16 @@ ENTRY(cpu_do_switch_mm)
ENDPROC(cpu_do_switch_mm)
.pushsection ".idmap.text", "ax"
+
+.macro __idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
+ adrp \tmp1, empty_zero_page
+ msr ttbr1_el1, \tmp2
+ isb
+ tlbi vmalle1
+ dsb nsh
+ isb
+.endm
+
/*
* void idmap_cpu_replace_ttbr1(phys_addr_t new_pgd)
*
@@ -162,13 +172,7 @@ ENDPROC(cpu_do_switch_mm)
ENTRY(idmap_cpu_replace_ttbr1)
save_and_disable_daif flags=x2
- adrp x1, empty_zero_page
- msr ttbr1_el1, x1
- isb
-
- tlbi vmalle1
- dsb nsh
- isb
+ __idmap_cpu_set_reserved_ttbr1 x1, x3
msr ttbr1_el1, x0
isb
@@ -179,6 +183,190 @@ ENTRY(idmap_cpu_replace_ttbr1)
ENDPROC(idmap_cpu_replace_ttbr1)
.popsection
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ .pushsection ".idmap.text", "ax"
+
+ .macro __idmap_kpti_get_pgtable_ent, type
+ dc cvac, cur_\()\type\()p // Ensure any existing dirty
+ dmb sy // lines are written back before
+ ldr \type, [cur_\()\type\()p] // loading the entry
+ tbz \type, #0, next_\()\type // Skip invalid entries
+ .endm
+
+ .macro __idmap_kpti_put_pgtable_ent_ng, type
+ orr \type, \type, #PTE_NG // Same bit for blocks and pages
+ str \type, [cur_\()\type\()p] // Update the entry and ensure it
+ dc civac, cur_\()\type\()p // is visible to all CPUs.
+ .endm
+
+/*
+ * void __kpti_install_ng_mappings(int cpu, int num_cpus, phys_addr_t swapper)
+ *
+ * Called exactly once from stop_machine context by each CPU found during boot.
+ */
+__idmap_kpti_flag:
+ .long 1
+ENTRY(idmap_kpti_install_ng_mappings)
+ cpu .req w0
+ num_cpus .req w1
+ swapper_pa .req x2
+ swapper_ttb .req x3
+ flag_ptr .req x4
+ cur_pgdp .req x5
+ end_pgdp .req x6
+ pgd .req x7
+ cur_pudp .req x8
+ end_pudp .req x9
+ pud .req x10
+ cur_pmdp .req x11
+ end_pmdp .req x12
+ pmd .req x13
+ cur_ptep .req x14
+ end_ptep .req x15
+ pte .req x16
+
+ mrs swapper_ttb, ttbr1_el1
+ adr flag_ptr, __idmap_kpti_flag
+
+ cbnz cpu, __idmap_kpti_secondary
+
+ /* We're the boot CPU. Wait for the others to catch up */
+ sevl
+1: wfe
+ ldaxr w18, [flag_ptr]
+ eor w18, w18, num_cpus
+ cbnz w18, 1b
+
+ /* We need to walk swapper, so turn off the MMU. */
+ pre_disable_mmu_workaround
+ mrs x18, sctlr_el1
+ bic x18, x18, #SCTLR_ELx_M
+ msr sctlr_el1, x18
+ isb
+
+ /* Everybody is enjoying the idmap, so we can rewrite swapper. */
+ /* PGD */
+ mov cur_pgdp, swapper_pa
+ add end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
+do_pgd: __idmap_kpti_get_pgtable_ent pgd
+ tbnz pgd, #1, walk_puds
+ __idmap_kpti_put_pgtable_ent_ng pgd
+next_pgd:
+ add cur_pgdp, cur_pgdp, #8
+ cmp cur_pgdp, end_pgdp
+ b.ne do_pgd
+
+ /* Publish the updated tables and nuke all the TLBs */
+ dsb sy
+ tlbi vmalle1is
+ dsb ish
+ isb
+
+ /* We're done: fire up the MMU again */
+ mrs x18, sctlr_el1
+ orr x18, x18, #SCTLR_ELx_M
+ msr sctlr_el1, x18
+ isb
+
+ /* Set the flag to zero to indicate that we're all done */
+ str wzr, [flag_ptr]
+ ret
+
+ /* PUD */
+walk_puds:
+ .if CONFIG_PGTABLE_LEVELS > 3
+ pte_to_phys cur_pudp, pgd
+ add end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
+do_pud: __idmap_kpti_get_pgtable_ent pud
+ tbnz pud, #1, walk_pmds
+ __idmap_kpti_put_pgtable_ent_ng pud
+next_pud:
+ add cur_pudp, cur_pudp, 8
+ cmp cur_pudp, end_pudp
+ b.ne do_pud
+ b next_pgd
+ .else /* CONFIG_PGTABLE_LEVELS <= 3 */
+ mov pud, pgd
+ b walk_pmds
+next_pud:
+ b next_pgd
+ .endif
+
+ /* PMD */
+walk_pmds:
+ .if CONFIG_PGTABLE_LEVELS > 2
+ pte_to_phys cur_pmdp, pud
+ add end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
+do_pmd: __idmap_kpti_get_pgtable_ent pmd
+ tbnz pmd, #1, walk_ptes
+ __idmap_kpti_put_pgtable_ent_ng pmd
+next_pmd:
+ add cur_pmdp, cur_pmdp, #8
+ cmp cur_pmdp, end_pmdp
+ b.ne do_pmd
+ b next_pud
+ .else /* CONFIG_PGTABLE_LEVELS <= 2 */
+ mov pmd, pud
+ b walk_ptes
+next_pmd:
+ b next_pud
+ .endif
+
+ /* PTE */
+walk_ptes:
+ pte_to_phys cur_ptep, pmd
+ add end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
+do_pte: __idmap_kpti_get_pgtable_ent pte
+ __idmap_kpti_put_pgtable_ent_ng pte
+next_pte:
+ add cur_ptep, cur_ptep, #8
+ cmp cur_ptep, end_ptep
+ b.ne do_pte
+ b next_pmd
+
+ /* Secondary CPUs end up here */
+__idmap_kpti_secondary:
+ /* Uninstall swapper before surgery begins */
+ __idmap_cpu_set_reserved_ttbr1 x18, x17
+
+ /* Increment the flag to let the boot CPU we're ready */
+1: ldxr w18, [flag_ptr]
+ add w18, w18, #1
+ stxr w17, w18, [flag_ptr]
+ cbnz w17, 1b
+
+ /* Wait for the boot CPU to finish messing around with swapper */
+ sevl
+1: wfe
+ ldxr w18, [flag_ptr]
+ cbnz w18, 1b
+
+ /* All done, act like nothing happened */
+ msr ttbr1_el1, swapper_ttb
+ isb
+ ret
+
+ .unreq cpu
+ .unreq num_cpus
+ .unreq swapper_pa
+ .unreq swapper_ttb
+ .unreq flag_ptr
+ .unreq cur_pgdp
+ .unreq end_pgdp
+ .unreq pgd
+ .unreq cur_pudp
+ .unreq end_pudp
+ .unreq pud
+ .unreq cur_pmdp
+ .unreq end_pmdp
+ .unreq pmd
+ .unreq cur_ptep
+ .unreq end_ptep
+ .unreq pte
+ENDPROC(idmap_kpti_install_ng_mappings)
+ .popsection
+#endif
+
/*
* __cpu_setup
*
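The rendezvous between the boot CPU and the secondaries in idmap_kpti_install_ng_mappings() above can be modelled in plain C with a shared counter. The sketch below is a userspace analogy only (threads stand in for CPUs, build with -pthread, no idmap or MMU toggling): the flag starts at 1, each secondary increments it, the boot CPU waits for it to reach the CPU count, rewrites the tables alone, then zeroes the flag to release everyone.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NUM_CPUS 4

/* Mirrors __idmap_kpti_flag, which also starts at 1. */
static atomic_int flag = 1;

static void *cpu_thread(void *arg)
{
	long cpu = (long)arg;

	if (cpu == 0) {
		/* Boot CPU: wait until every secondary has announced itself. */
		while (atomic_load(&flag) != NUM_CPUS)
			;
		puts("boot CPU: rewriting the page tables with nG set");
		atomic_store(&flag, 0);			/* release the secondaries */
	} else {
		atomic_fetch_add(&flag, 1);		/* "parked in the idmap" */
		while (atomic_load(&flag) != 0)		/* wait for the rewrite */
			;
	}
	printf("cpu %ld: done\n", cpu);
	return NULL;
}

int main(void)
{
	pthread_t t[NUM_CPUS];
	long i;

	for (i = 0; i < NUM_CPUS; i++)
		pthread_create(&t[i], NULL, cpu_thread, (void *)i);
	for (i = 0; i < NUM_CPUS; i++)
		pthread_join(t[i], NULL);
	return 0;
}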
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 41acec624087 upstream.
To allow systems which do not require kpti to continue running with
global kernel mappings (which appears to be a requirement for Cavium
ThunderX due to a CPU erratum), make the use of nG in the kernel page
tables dependent on arm64_kernel_unmapped_at_el0(), which is resolved
at runtime.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/kernel-pgtable.h | 12 ++----------
arch/arm64/include/asm/pgtable-prot.h | 30 ++++++++++++++----------------
2 files changed, 16 insertions(+), 26 deletions(-)
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -78,16 +78,8 @@
/*
* Initial memory map attributes.
*/
-#define _SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define _SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define SWAPPER_PTE_FLAGS (_SWAPPER_PTE_FLAGS | PTE_NG)
-#define SWAPPER_PMD_FLAGS (_SWAPPER_PMD_FLAGS | PMD_SECT_NG)
-#else
-#define SWAPPER_PTE_FLAGS _SWAPPER_PTE_FLAGS
-#define SWAPPER_PMD_FLAGS _SWAPPER_PMD_FLAGS
-#endif
+#define SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
#if ARM64_SWAPPER_USES_SECTION_MAPS
#define SWAPPER_MM_MMUFLAGS (PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -37,13 +37,11 @@
#define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
#define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define PROT_DEFAULT (_PROT_DEFAULT | PTE_NG)
-#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_SECT_NG)
-#else
-#define PROT_DEFAULT _PROT_DEFAULT
-#define PROT_SECT_DEFAULT _PROT_SECT_DEFAULT
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+#define PTE_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
+#define PMD_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)
+
+#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG)
+#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
#define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
@@ -55,22 +53,22 @@
#define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
#define PROT_SECT_NORMAL_EXEC (PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
-#define _PAGE_DEFAULT (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
-#define _HYP_PAGE_DEFAULT (_PAGE_DEFAULT & ~PTE_NG)
+#define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+#define _HYP_PAGE_DEFAULT _PAGE_DEFAULT
-#define PAGE_KERNEL __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
-#define PAGE_KERNEL_RO __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
-#define PAGE_KERNEL_ROX __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
-#define PAGE_KERNEL_EXEC __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
-#define PAGE_KERNEL_EXEC_CONT __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
+#define PAGE_KERNEL __pgprot(PROT_NORMAL)
+#define PAGE_KERNEL_RO __pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
+#define PAGE_KERNEL_ROX __pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
+#define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN)
+#define PAGE_KERNEL_EXEC_CONT __pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
#define PAGE_HYP __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
#define PAGE_HYP_EXEC __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
#define PAGE_HYP_RO __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
#define PAGE_HYP_DEVICE __pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
-#define PAGE_S2 __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
-#define PAGE_S2_DEVICE __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
+#define PAGE_S2 __pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
+#define PAGE_S2_DEVICE __pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
#define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit b519538dfefc upstream.
There are now a handful of open-coded masks to extract the ASID from a
TTBR value, so introduce a TTBR_ASID_MASK and use that instead.
Suggested-by: Mark Rutland <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/asm-uaccess.h | 3 ++-
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/uaccess.h | 4 ++--
arch/arm64/kernel/entry.S | 2 +-
4 files changed, 6 insertions(+), 4 deletions(-)
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -4,6 +4,7 @@
#include <asm/alternative.h>
#include <asm/kernel-pgtable.h>
+#include <asm/mmu.h>
#include <asm/sysreg.h>
#include <asm/assembler.h>
@@ -17,7 +18,7 @@
msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1
isb
sub \tmp1, \tmp1, #SWAPPER_DIR_SIZE
- bic \tmp1, \tmp1, #(0xffff << 48)
+ bic \tmp1, \tmp1, #TTBR_ASID_MASK
msr ttbr1_el1, \tmp1 // set reserved ASID
isb
.endm
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -18,6 +18,7 @@
#define MMCF_AARCH32 0x1 /* mm context flag for AArch32 executables */
#define USER_ASID_FLAG (UL(1) << 48)
+#define TTBR_ASID_MASK (UL(0xffff) << 48)
#ifndef __ASSEMBLY__
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -112,7 +112,7 @@ static inline void __uaccess_ttbr0_disab
write_sysreg(ttbr + SWAPPER_DIR_SIZE, ttbr0_el1);
isb();
/* Set reserved ASID */
- ttbr &= ~(0xffffUL << 48);
+ ttbr &= ~TTBR_ASID_MASK;
write_sysreg(ttbr, ttbr1_el1);
isb();
}
@@ -131,7 +131,7 @@ static inline void __uaccess_ttbr0_enabl
/* Restore active ASID */
ttbr1 = read_sysreg(ttbr1_el1);
- ttbr1 |= ttbr0 & (0xffffUL << 48);
+ ttbr1 |= ttbr0 & TTBR_ASID_MASK;
write_sysreg(ttbr1, ttbr1_el1);
isb();
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -205,7 +205,7 @@ alternative_else_nop_endif
.if \el != 0
mrs x21, ttbr1_el1
- tst x21, #0xffff << 48 // Check for the reserved ASID
+ tst x21, #TTBR_ASID_MASK // Check for the reserved ASID
orr x23, x23, #PSR_PAN_BIT // Set the emulated PAN in the saved SPSR
b.eq 1f // TTBR0 access already disabled
and x23, x23, #~PSR_PAN_BIT // Clear the emulated PAN in the saved SPSR
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 0617052ddde3 upstream.
Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
actually more useful as a mitigation against speculation attacks that
can leak arbitrary kernel data to userspace through speculation.
Reword the Kconfig help message to reflect this, and make the option
depend on EXPERT so that it is on by default for the majority of users.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -844,15 +844,14 @@ config FORCE_MAX_ZONEORDER
4M allocations matching the default size used by generic code.
config UNMAP_KERNEL_AT_EL0
- bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+ bool "Unmap kernel when running in userspace (aka \"KAISER\")" if EXPERT
default y
help
- Some attacks against KASLR make use of the timing difference between
- a permission fault which could arise from a page table entry that is
- present in the TLB, and a translation fault which always requires a
- page table walk. This option defends against these attacks by unmapping
- the kernel whilst running in userspace, therefore forcing translation
- faults for all of kernel space.
+ Speculation attacks against some high-performance processors can
+ be used to bypass MMU permission checks and leak kernel data to
+ userspace. This can be defended against by unmapping the kernel
+ when running in userspace, mapping it back in on exception entry
+ via a trampoline page in the vector table.
If unsure, say Y.
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 084eb77cd3a8 upstream.
Add a Kconfig entry to control use of the entry trampoline, which allows
us to unmap the kernel whilst running in userspace and improve the
robustness of KASLR.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 13 +++++++++++++
1 file changed, 13 insertions(+)
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -843,6 +843,19 @@ config FORCE_MAX_ZONEORDER
However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
4M allocations matching the default size used by generic code.
+config UNMAP_KERNEL_AT_EL0
+ bool "Unmap kernel when running in userspace (aka \"KAISER\")"
+ default y
+ help
+ Some attacks against KASLR make use of the timing difference between
+ a permission fault which could arise from a page table entry that is
+ present in the TLB, and a translation fault which always requires a
+ page table walk. This option defends against these attacks by unmapping
+ the kernel whilst running in userspace, therefore forcing translation
+ faults for all of kernel space.
+
+ If unsure, say Y.
+
menuconfig ARMV8_DEPRECATED
bool "Emulate deprecated/obsolete ARMv8 instructions"
depends on COMPAT
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rasmus Villemoes <[email protected]>
commit bc137dfdbec27c0ec5731a89002daded4a4aa1ea upstream.
The first patch above (https://patchwork.kernel.org/patch/9970181/)
makes the oops go away, but it just papers over the problem. The real
problem is that the watchdog core clears WDOG_HW_RUNNING in
watchdog_stop, and the gpio driver fails to set it in its stop
function when it doesn't actually stop it. This means that the core
doesn't know that it now has responsibility for petting the device, in
turn causing the device to reset the system (I hadn't noticed this
because the board I'm working on has that reset logic disabled).
How about this (other drivers may of course have the same problem, I
haven't checked). One might say that ->stop should return an error
when the device can't be stopped, but OTOH this brings parity between
a device without a ->stop method and a GPIO wd that has always-running
set. IOW, I think ->stop should only return an error when an actual
attempt to stop the hardware failed.
From: Rasmus Villemoes <[email protected]>
The watchdog framework clears WDOG_HW_RUNNING before calling
->stop. If the driver is unable to stop the device, it is supposed to
set that bit again so that the watchdog core takes care of sending
heart-beats while the device is not open from user-space. Update the
gpio_wdt driver to honour that contract (and get rid of the redundant
clearing of WDOG_HW_RUNNING).
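A minimal, hedged sketch of a ->stop handler that honours this contract
(the example_wdt driver, its priv fields and example_wdt_disable() are
hypothetical and not part of this patch):

#include <linux/watchdog.h>

static int example_wdt_stop(struct watchdog_device *wdd)
{
	struct example_wdt *priv = watchdog_get_drvdata(wdd);

	if (priv->can_stop_hw) {
		/* hardware actually stops: nothing more to report */
		example_wdt_disable(priv);
	} else {
		/*
		 * Hardware keeps running: set WDOG_HW_RUNNING again so
		 * the core keeps pinging it while userspace is away.
		 */
		set_bit(WDOG_HW_RUNNING, &wdd->status);
	}

	return 0;
}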
Fixes: 3c10bbde10 ("watchdog: core: Clear WDOG_HW_RUNNING before calling the stop function")
Signed-off-by: Rasmus Villemoes <[email protected]>
Reviewed-by: Guenter Roeck <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Wim Van Sebroeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/watchdog/gpio_wdt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/watchdog/gpio_wdt.c
+++ b/drivers/watchdog/gpio_wdt.c
@@ -80,7 +80,8 @@ static int gpio_wdt_stop(struct watchdog
if (!priv->always_running) {
gpio_wdt_disable(priv);
- clear_bit(WDOG_HW_RUNNING, &wdd->status);
+ } else {
+ set_bit(WDOG_HW_RUNNING, &wdd->status);
}
return 0;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sven Joachim <[email protected]>
commit a9e6d44ddeccd3522670e641f1ed9b068e746ff7 upstream.
After upgrading an old laptop to 4.15-rc9, I found that the eth0 and
wlan0 interfaces had disappeared. It turns out that the b43 and b44
drivers require SSB_PCIHOST_POSSIBLE which depends on
PCI_DRIVERS_LEGACY, a config option that only exists on Mips.
Fixes: 58eae1416b80 ("ssb: Disable PCI host for PCI_DRIVERS_GENERIC")
Signed-off-by: Sven Joachim <[email protected]>
Reviewed-by: James Hogan <[email protected]>
Signed-off-by: Kalle Valo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/ssb/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/ssb/Kconfig
+++ b/drivers/ssb/Kconfig
@@ -32,7 +32,7 @@ config SSB_BLOCKIO
config SSB_PCIHOST_POSSIBLE
bool
- depends on SSB && (PCI = y || PCI = SSB) && PCI_DRIVERS_LEGACY
+ depends on SSB && (PCI = y || PCI = SSB) && (PCI_DRIVERS_LEGACY || !MIPS)
default y
config SSB_PCIHOST
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 51a0048beb44 upstream.
The exception entry trampoline needs to be mapped at the same virtual
address in both the trampoline page table (which maps nothing else)
and also the kernel page table, so that we can swizzle TTBR1_EL1 on
exceptions from and returns to EL0.
This patch maps the trampoline at a fixed virtual address in the fixmap
area of the kernel virtual address space, which allows the kernel proper
to be randomized with respect to the trampoline when KASLR is enabled.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/fixmap.h | 4 ++++
arch/arm64/include/asm/pgtable.h | 1 +
arch/arm64/kernel/asm-offsets.c | 6 +++++-
arch/arm64/mm/mmu.c | 23 +++++++++++++++++++++++
4 files changed, 33 insertions(+), 1 deletion(-)
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -58,6 +58,10 @@ enum fixed_addresses {
FIX_APEI_GHES_NMI,
#endif /* CONFIG_ACPI_APEI_GHES */
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ FIX_ENTRY_TRAMP_TEXT,
+#define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
__end_of_permanent_fixed_addresses,
/*
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -683,6 +683,7 @@ static inline void pmdp_set_wrprotect(st
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
/*
* Encode and decode a swap entry:
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -24,6 +24,7 @@
#include <linux/kvm_host.h>
#include <linux/suspend.h>
#include <asm/cpufeature.h>
+#include <asm/fixmap.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/smp_plat.h>
@@ -148,11 +149,14 @@ int main(void)
DEFINE(ARM_SMCCC_RES_X2_OFFS, offsetof(struct arm_smccc_res, a2));
DEFINE(ARM_SMCCC_QUIRK_ID_OFFS, offsetof(struct arm_smccc_quirk, id));
DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state));
-
BLANK();
DEFINE(HIBERN_PBE_ORIG, offsetof(struct pbe, orig_address));
DEFINE(HIBERN_PBE_ADDR, offsetof(struct pbe, address));
DEFINE(HIBERN_PBE_NEXT, offsetof(struct pbe, next));
DEFINE(ARM64_FTR_SYSVAL, offsetof(struct arm64_ftr_reg, sys_val));
+ BLANK();
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ DEFINE(TRAMP_VALIAS, TRAMP_VALIAS);
+#endif
return 0;
}
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -525,6 +525,29 @@ static int __init parse_rodata(char *arg
}
early_param("rodata", parse_rodata);
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __init map_entry_trampoline(void)
+{
+ extern char __entry_tramp_text_start[];
+
+ pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
+ phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
+
+ /* The trampoline is always mapped and can therefore be global */
+ pgprot_val(prot) &= ~PTE_NG;
+
+ /* Map only the text into the trampoline page table */
+ memset(tramp_pg_dir, 0, PGD_SIZE);
+ __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+ prot, pgd_pgtable_alloc, 0);
+
+ /* ...as well as the kernel page table */
+ __set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
+ return 0;
+}
+core_initcall(map_entry_trampoline);
+#endif
+
/*
* Create fine-grained mappings for the kernel.
*/
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 5b1f7fe41909 upstream.
We will need to treat exceptions from EL0 differently in kernel_ventry,
so rework the macro to take the exception level as an argument and
construct the branch target using that.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 50 +++++++++++++++++++++++-----------------------
1 file changed, 25 insertions(+), 25 deletions(-)
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -71,7 +71,7 @@
#define BAD_FIQ 2
#define BAD_ERROR 3
- .macro kernel_ventry label
+ .macro kernel_ventry, el, label, regsize = 64
.align 7
sub sp, sp, #S_FRAME_SIZE
#ifdef CONFIG_VMAP_STACK
@@ -84,7 +84,7 @@
tbnz x0, #THREAD_SHIFT, 0f
sub x0, sp, x0 // x0'' = sp' - x0' = (sp + x0) - sp = x0
sub sp, sp, x0 // sp'' = sp' - x0 = (sp + x0) - x0 = sp
- b \label
+ b el\()\el\()_\label
0:
/*
@@ -116,7 +116,7 @@
sub sp, sp, x0
mrs x0, tpidrro_el0
#endif
- b \label
+ b el\()\el\()_\label
.endm
.macro kernel_entry, el, regsize = 64
@@ -369,31 +369,31 @@ tsk .req x28 // current thread_info
.align 11
ENTRY(vectors)
- kernel_ventry el1_sync_invalid // Synchronous EL1t
- kernel_ventry el1_irq_invalid // IRQ EL1t
- kernel_ventry el1_fiq_invalid // FIQ EL1t
- kernel_ventry el1_error_invalid // Error EL1t
-
- kernel_ventry el1_sync // Synchronous EL1h
- kernel_ventry el1_irq // IRQ EL1h
- kernel_ventry el1_fiq_invalid // FIQ EL1h
- kernel_ventry el1_error // Error EL1h
-
- kernel_ventry el0_sync // Synchronous 64-bit EL0
- kernel_ventry el0_irq // IRQ 64-bit EL0
- kernel_ventry el0_fiq_invalid // FIQ 64-bit EL0
- kernel_ventry el0_error // Error 64-bit EL0
+ kernel_ventry 1, sync_invalid // Synchronous EL1t
+ kernel_ventry 1, irq_invalid // IRQ EL1t
+ kernel_ventry 1, fiq_invalid // FIQ EL1t
+ kernel_ventry 1, error_invalid // Error EL1t
+
+ kernel_ventry 1, sync // Synchronous EL1h
+ kernel_ventry 1, irq // IRQ EL1h
+ kernel_ventry 1, fiq_invalid // FIQ EL1h
+ kernel_ventry 1, error // Error EL1h
+
+ kernel_ventry 0, sync // Synchronous 64-bit EL0
+ kernel_ventry 0, irq // IRQ 64-bit EL0
+ kernel_ventry 0, fiq_invalid // FIQ 64-bit EL0
+ kernel_ventry 0, error // Error 64-bit EL0
#ifdef CONFIG_COMPAT
- kernel_ventry el0_sync_compat // Synchronous 32-bit EL0
- kernel_ventry el0_irq_compat // IRQ 32-bit EL0
- kernel_ventry el0_fiq_invalid_compat // FIQ 32-bit EL0
- kernel_ventry el0_error_compat // Error 32-bit EL0
+ kernel_ventry 0, sync_compat, 32 // Synchronous 32-bit EL0
+ kernel_ventry 0, irq_compat, 32 // IRQ 32-bit EL0
+ kernel_ventry 0, fiq_invalid_compat, 32 // FIQ 32-bit EL0
+ kernel_ventry 0, error_compat, 32 // Error 32-bit EL0
#else
- kernel_ventry el0_sync_invalid // Synchronous 32-bit EL0
- kernel_ventry el0_irq_invalid // IRQ 32-bit EL0
- kernel_ventry el0_fiq_invalid // FIQ 32-bit EL0
- kernel_ventry el0_error_invalid // Error 32-bit EL0
+ kernel_ventry 0, sync_invalid, 32 // Synchronous 32-bit EL0
+ kernel_ventry 0, irq_invalid, 32 // IRQ 32-bit EL0
+ kernel_ventry 0, fiq_invalid, 32 // FIQ 32-bit EL0
+ kernel_ventry 0, error_invalid, 32 // Error 32-bit EL0
#endif
END(vectors)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit e046eb0c9bf2 upstream.
In preparation for unmapping the kernel whilst running in userspace,
make the kernel mappings non-global so we can avoid expensive TLB
invalidation on kernel exit to userspace.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/kernel-pgtable.h | 12 ++++++++++--
arch/arm64/include/asm/pgtable-prot.h | 21 +++++++++++++++------
2 files changed, 25 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 7803343e5881..77a27af01371 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -78,8 +78,16 @@
/*
* Initial memory map attributes.
*/
-#define SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _SWAPPER_PTE_FLAGS (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _SWAPPER_PMD_FLAGS (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define SWAPPER_PTE_FLAGS (_SWAPPER_PTE_FLAGS | PTE_NG)
+#define SWAPPER_PMD_FLAGS (_SWAPPER_PMD_FLAGS | PMD_SECT_NG)
+#else
+#define SWAPPER_PTE_FLAGS _SWAPPER_PTE_FLAGS
+#define SWAPPER_PMD_FLAGS _SWAPPER_PMD_FLAGS
+#endif
#if ARM64_SWAPPER_USES_SECTION_MAPS
#define SWAPPER_MM_MMUFLAGS (PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 0a5635fb0ef9..22a926825e3f 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -34,8 +34,16 @@
#include <asm/pgtable-types.h>
-#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+#define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define PROT_DEFAULT (_PROT_DEFAULT | PTE_NG)
+#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_SECT_NG)
+#else
+#define PROT_DEFAULT _PROT_DEFAULT
+#define PROT_SECT_DEFAULT _PROT_SECT_DEFAULT
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
#define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
@@ -48,6 +56,7 @@
#define PROT_SECT_NORMAL_EXEC (PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
#define _PAGE_DEFAULT (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+#define _HYP_PAGE_DEFAULT (_PAGE_DEFAULT & ~PTE_NG)
#define PAGE_KERNEL __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
#define PAGE_KERNEL_RO __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
@@ -55,15 +64,15 @@
#define PAGE_KERNEL_EXEC __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
#define PAGE_KERNEL_EXEC_CONT __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
-#define PAGE_HYP __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
-#define PAGE_HYP_EXEC __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
-#define PAGE_HYP_RO __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
+#define PAGE_HYP __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
+#define PAGE_HYP_EXEC __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
+#define PAGE_HYP_RO __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
#define PAGE_HYP_DEVICE __pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
#define PAGE_S2 __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
#define PAGE_S2_DEVICE __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
-#define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_PXN | PTE_UXN)
+#define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
#define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
#define PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yang Shunyong <[email protected]>
commit 66b3bd2356e0a1531c71a3dcf96944621e25c17c upstream.
The type of arg passed to dmatest_callback is struct dmatest_done.
It refers to test_done in struct dmatest_thread, not done_wait.
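For reference, container_of() recovers the enclosing structure from a
pointer to one of its members, so the member name must match what 'arg'
actually points to. A hedged sketch, with the struct layout abbreviated
from the driver:

struct dmatest_done {
	bool			done;
	wait_queue_head_t	*wait;
};

struct dmatest_thread {
	/* ...other fields elided... */
	wait_queue_head_t	done_wait;
	struct dmatest_done	test_done;
};

static void example_callback(void *arg)
{
	/*
	 * arg is &thread->test_done, so the member must be test_done;
	 * naming done_wait would subtract the wrong offset and hand
	 * back a bogus dmatest_thread pointer.
	 */
	struct dmatest_done *done = arg;
	struct dmatest_thread *thread =
		container_of(done, struct dmatest_thread, test_done);

	(void)thread;
}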
Fixes: 6f6a23a213be ("dmaengine: dmatest: move callback wait ...")
Signed-off-by: Yang Shunyong <[email protected]>
Acked-by: Adam Wallis <[email protected]>
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/dma/dmatest.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -355,7 +355,7 @@ static void dmatest_callback(void *arg)
{
struct dmatest_done *done = arg;
struct dmatest_thread *thread =
- container_of(arg, struct dmatest_thread, done_wait);
+ container_of(done, struct dmatest_thread, test_done);
if (!thread->done) {
done->done = true;
wake_up_all(done->wait);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel N Pettersson <[email protected]>
commit 9aca7e454415f7878b28524e76bebe1170911a88 upstream.
Autonegotiation gives a security settings mismatch error if the SMB
server selects an SMBv3 dialect that isn't SMB3.02. The exact error is
"protocol revalidation - security settings mismatch".
This can be tested using Samba v4.2 or by setting the global Samba
setting max protocol = SMB3_00.
The check that fails in smb3_validate_negotiate is the dialect
verification of the negotiate info response. This is because it tries
to verify against the protocol_id in the global smbdefault_values. The
protocol_id in smbdefault_values is SMB3.02.
In SMB2_negotiate the protocol_id in smbdefault_values isn't updated
(it is global, so it probably shouldn't be), but server->dialect is.
This patch changes the check in smb3_validate_negotiate to use
server->dialect instead of server->vals->protocol_id. The patch works
with autonegotiate and when using a specific version in the vers mount
option.
Signed-off-by: Daniel N Pettersson <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/cifs/smb2pdu.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -733,8 +733,7 @@ int smb3_validate_negotiate(const unsign
}
/* check validate negotiate info response matches what we got earlier */
- if (pneg_rsp->Dialect !=
- cpu_to_le16(tcon->ses->server->vals->protocol_id))
+ if (pneg_rsp->Dialect != cpu_to_le16(tcon->ses->server->dialect))
goto vneg_out;
if (pneg_rsp->SecurityMode != cpu_to_le16(tcon->ses->server->sec_mode))
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthew Wilcox <[email protected]>
commit f04a703c3d613845ae3141bfaf223489de8ab3eb upstream.
If cifs_zap_mapping() returned an error, we would return without putting
the xid that we got earlier. Restructure cifs_file_strict_mmap() and
cifs_file_mmap() to be more similar to each other and have a single
point of return that always puts the xid.
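The reshaped functions both follow the usual single-exit pattern in
which the xid is always put. A hedged sketch of the resulting shape
(function name is illustrative, error logging omitted):

int cifs_file_mmap_shape(struct file *file, struct vm_area_struct *vma)
{
	int rc, xid;

	xid = get_xid();		/* acquired exactly once */

	rc = cifs_revalidate_file(file);
	if (!rc)
		rc = generic_file_mmap(file, vma);
	if (!rc)
		vma->vm_ops = &cifs_file_vm_ops;

	free_xid(xid);			/* released on every path */
	return rc;
}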
Signed-off-by: Matthew Wilcox <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/cifs/file.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -3471,20 +3471,18 @@ static const struct vm_operations_struct
int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
{
- int rc, xid;
+ int xid, rc = 0;
struct inode *inode = file_inode(file);
xid = get_xid();
- if (!CIFS_CACHE_READ(CIFS_I(inode))) {
+ if (!CIFS_CACHE_READ(CIFS_I(inode)))
rc = cifs_zap_mapping(inode);
- if (rc)
- return rc;
- }
-
- rc = generic_file_mmap(file, vma);
- if (rc == 0)
+ if (!rc)
+ rc = generic_file_mmap(file, vma);
+ if (!rc)
vma->vm_ops = &cifs_file_vm_ops;
+
free_xid(xid);
return rc;
}
@@ -3494,16 +3492,16 @@ int cifs_file_mmap(struct file *file, st
int rc, xid;
xid = get_xid();
+
rc = cifs_revalidate_file(file);
- if (rc) {
+ if (rc)
cifs_dbg(FYI, "Validation prior to mmap failed, error=%d\n",
rc);
- free_xid(xid);
- return rc;
- }
- rc = generic_file_mmap(file, vma);
- if (rc == 0)
+ if (!rc)
+ rc = generic_file_mmap(file, vma);
+ if (!rc)
vma->vm_ops = &cifs_file_vm_ops;
+
free_xid(xid);
return rc;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matt Redfearn <[email protected]>
commit 24f8d233074badd4c18e4dafd2fb97d65838afed upstream.
Commit da2a68b3eb47 ("watchdog: Enable COMPILE_TEST where possible")
enabled building the Indy watchdog driver when COMPILE_TEST is enabled.
However, the driver makes reference to symbols that are only defined when
certain platforms are selected in the config. These platforms select
SGI_HAS_INDYDOG. Without this, link time errors result, for example
when building a MIPS allyesconfig.
drivers/watchdog/indydog.o: In function `indydog_write':
indydog.c:(.text+0x18): undefined reference to `sgimc'
indydog.c:(.text+0x1c): undefined reference to `sgimc'
drivers/watchdog/indydog.o: In function `indydog_start':
indydog.c:(.text+0x54): undefined reference to `sgimc'
indydog.c:(.text+0x58): undefined reference to `sgimc'
drivers/watchdog/indydog.o: In function `indydog_stop':
indydog.c:(.text+0xa4): undefined reference to `sgimc'
drivers/watchdog/indydog.o:indydog.c:(.text+0xa8): more undefined
references to `sgimc' follow
make: *** [Makefile:1005: vmlinux] Error 1
Fix this by ensuring that CONFIG_INDYDOG can only be selected when the
necessary dependent platform symbols are built in.
Fixes: da2a68b3eb47 ("watchdog: Enable COMPILE_TEST where possible")
Signed-off-by: Matt Redfearn <[email protected]>
Signed-off-by: Ralf Baechle <[email protected]>
Suggested-by: James Hogan <[email protected]>
Reviewed-by: Guenter Roeck <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Wim Van Sebroeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/watchdog/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/watchdog/Kconfig
+++ b/drivers/watchdog/Kconfig
@@ -1451,7 +1451,7 @@ config RC32434_WDT
config INDYDOG
tristate "Indy/I2 Hardware Watchdog"
- depends on SGI_HAS_INDYDOG || (MIPS && COMPILE_TEST)
+ depends on SGI_HAS_INDYDOG
help
Hardware driver for the Indy's/I2's watchdog. This is a
watchdog timer that will reboot the machine after a 60 second
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Aurelien Aptel <[email protected]>
commit 97f4b7276b829a8927ac903a119bef2f963ccc58 upstream.
also replaces memset()+kfree() by kzfree().
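For context, a hedged before/after sketch of the pattern being replaced
(kzfree() zeroes the whole allocation before freeing and, like kfree(),
accepts NULL):

/* before: zero the secret by hand, then free it */
if (buf_to_free->password) {
	memset(buf_to_free->password, 0, strlen(buf_to_free->password));
	kfree(buf_to_free->password);
}

/* after: one call, NULL-safe, zeroes the full allocation */
kzfree(buf_to_free->password);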
Signed-off-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
Reviewed-by: Pavel Shilovsky <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/cifs/cifsencrypt.c | 3 +--
fs/cifs/connect.c | 6 +++---
fs/cifs/misc.c | 14 ++++----------
3 files changed, 8 insertions(+), 15 deletions(-)
--- a/fs/cifs/cifsencrypt.c
+++ b/fs/cifs/cifsencrypt.c
@@ -325,9 +325,8 @@ int calc_lanman_hash(const char *passwor
{
int i;
int rc;
- char password_with_pad[CIFS_ENCPWD_SIZE];
+ char password_with_pad[CIFS_ENCPWD_SIZE] = {0};
- memset(password_with_pad, 0, CIFS_ENCPWD_SIZE);
if (password)
strncpy(password_with_pad, password, CIFS_ENCPWD_SIZE);
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -1707,7 +1707,7 @@ cifs_parse_mount_options(const char *mou
tmp_end++;
if (!(tmp_end < end && tmp_end[1] == delim)) {
/* No it is not. Set the password to NULL */
- kfree(vol->password);
+ kzfree(vol->password);
vol->password = NULL;
break;
}
@@ -1745,7 +1745,7 @@ cifs_parse_mount_options(const char *mou
options = end;
}
- kfree(vol->password);
+ kzfree(vol->password);
/* Now build new password string */
temp_len = strlen(value);
vol->password = kzalloc(temp_len+1, GFP_KERNEL);
@@ -4235,7 +4235,7 @@ cifs_construct_tcon(struct cifs_sb_info
reset_cifs_unix_caps(0, tcon, NULL, vol_info);
out:
kfree(vol_info->username);
- kfree(vol_info->password);
+ kzfree(vol_info->password);
kfree(vol_info);
return tcon;
--- a/fs/cifs/misc.c
+++ b/fs/cifs/misc.c
@@ -98,14 +98,11 @@ sesInfoFree(struct cifs_ses *buf_to_free
kfree(buf_to_free->serverOS);
kfree(buf_to_free->serverDomain);
kfree(buf_to_free->serverNOS);
- if (buf_to_free->password) {
- memset(buf_to_free->password, 0, strlen(buf_to_free->password));
- kfree(buf_to_free->password);
- }
+ kzfree(buf_to_free->password);
kfree(buf_to_free->user_name);
kfree(buf_to_free->domainName);
- kfree(buf_to_free->auth_key.response);
- kfree(buf_to_free);
+ kzfree(buf_to_free->auth_key.response);
+ kzfree(buf_to_free);
}
struct cifs_tcon *
@@ -136,10 +133,7 @@ tconInfoFree(struct cifs_tcon *buf_to_fr
}
atomic_dec(&tconInfoAllocCount);
kfree(buf_to_free->nativeFileSystem);
- if (buf_to_free->password) {
- memset(buf_to_free->password, 0, strlen(buf_to_free->password));
- kfree(buf_to_free->password);
- }
+ kzfree(buf_to_free->password);
kfree(buf_to_free);
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 27a921e75711 upstream.
With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN
by ensuring that we switch to a reserved ASID of zero when disabling
user access and restore the active user ASID on the uaccess enable path.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 1 -
arch/arm64/include/asm/asm-uaccess.h | 25 +++++++++++++++++--------
arch/arm64/include/asm/uaccess.h | 21 +++++++++++++++++----
arch/arm64/kernel/entry.S | 4 ++--
arch/arm64/lib/clear_user.S | 2 +-
arch/arm64/lib/copy_from_user.S | 2 +-
arch/arm64/lib/copy_in_user.S | 2 +-
arch/arm64/lib/copy_to_user.S | 2 +-
arch/arm64/mm/cache.S | 2 +-
arch/arm64/xen/hypercall.S | 2 +-
10 files changed, 42 insertions(+), 21 deletions(-)
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -920,7 +920,6 @@ endif
config ARM64_SW_TTBR0_PAN
bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
- depends on BROKEN # Temporary while switch_mm is reworked
help
Enabling this option prevents the kernel from accessing
user-space memory directly by pointing TTBR0_EL1 to a reserved
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -16,11 +16,20 @@
add \tmp1, \tmp1, #SWAPPER_DIR_SIZE // reserved_ttbr0 at the end of swapper_pg_dir
msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1
isb
+ sub \tmp1, \tmp1, #SWAPPER_DIR_SIZE
+ bic \tmp1, \tmp1, #(0xffff << 48)
+ msr ttbr1_el1, \tmp1 // set reserved ASID
+ isb
.endm
- .macro __uaccess_ttbr0_enable, tmp1
+ .macro __uaccess_ttbr0_enable, tmp1, tmp2
get_thread_info \tmp1
ldr \tmp1, [\tmp1, #TSK_TI_TTBR0] // load saved TTBR0_EL1
+ mrs \tmp2, ttbr1_el1
+ extr \tmp2, \tmp2, \tmp1, #48
+ ror \tmp2, \tmp2, #16
+ msr ttbr1_el1, \tmp2 // set the active ASID
+ isb
msr ttbr0_el1, \tmp1 // set the non-PAN TTBR0_EL1
isb
.endm
@@ -31,18 +40,18 @@ alternative_if_not ARM64_HAS_PAN
alternative_else_nop_endif
.endm
- .macro uaccess_ttbr0_enable, tmp1, tmp2
+ .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
alternative_if_not ARM64_HAS_PAN
- save_and_disable_irq \tmp2 // avoid preemption
- __uaccess_ttbr0_enable \tmp1
- restore_irq \tmp2
+ save_and_disable_irq \tmp3 // avoid preemption
+ __uaccess_ttbr0_enable \tmp1, \tmp2
+ restore_irq \tmp3
alternative_else_nop_endif
.endm
#else
.macro uaccess_ttbr0_disable, tmp1
.endm
- .macro uaccess_ttbr0_enable, tmp1, tmp2
+ .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
.endm
#endif
@@ -56,8 +65,8 @@ alternative_if ARM64_ALT_PAN_NOT_UAO
alternative_else_nop_endif
.endm
- .macro uaccess_enable_not_uao, tmp1, tmp2
- uaccess_ttbr0_enable \tmp1, \tmp2
+ .macro uaccess_enable_not_uao, tmp1, tmp2, tmp3
+ uaccess_ttbr0_enable \tmp1, \tmp2, \tmp3
alternative_if ARM64_ALT_PAN_NOT_UAO
SET_PSTATE_PAN(0)
alternative_else_nop_endif
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -107,15 +107,19 @@ static inline void __uaccess_ttbr0_disab
{
unsigned long ttbr;
+ ttbr = read_sysreg(ttbr1_el1);
/* reserved_ttbr0 placed at the end of swapper_pg_dir */
- ttbr = read_sysreg(ttbr1_el1) + SWAPPER_DIR_SIZE;
- write_sysreg(ttbr, ttbr0_el1);
+ write_sysreg(ttbr + SWAPPER_DIR_SIZE, ttbr0_el1);
+ isb();
+ /* Set reserved ASID */
+ ttbr &= ~(0xffffUL << 48);
+ write_sysreg(ttbr, ttbr1_el1);
isb();
}
static inline void __uaccess_ttbr0_enable(void)
{
- unsigned long flags;
+ unsigned long flags, ttbr0, ttbr1;
/*
* Disable interrupts to avoid preemption between reading the 'ttbr0'
@@ -123,7 +127,16 @@ static inline void __uaccess_ttbr0_enabl
* roll-over and an update of 'ttbr0'.
*/
local_irq_save(flags);
- write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
+ ttbr0 = current_thread_info()->ttbr0;
+
+ /* Restore active ASID */
+ ttbr1 = read_sysreg(ttbr1_el1);
+ ttbr1 |= ttbr0 & (0xffffUL << 48);
+ write_sysreg(ttbr1, ttbr1_el1);
+ isb();
+
+ /* Restore user page table */
+ write_sysreg(ttbr0, ttbr0_el1);
isb();
local_irq_restore(flags);
}
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -184,7 +184,7 @@ alternative_if ARM64_HAS_PAN
alternative_else_nop_endif
.if \el != 0
- mrs x21, ttbr0_el1
+ mrs x21, ttbr1_el1
tst x21, #0xffff << 48 // Check for the reserved ASID
orr x23, x23, #PSR_PAN_BIT // Set the emulated PAN in the saved SPSR
b.eq 1f // TTBR0 access already disabled
@@ -248,7 +248,7 @@ alternative_else_nop_endif
tbnz x22, #22, 1f // Skip re-enabling TTBR0 access if the PSR_PAN_BIT is set
.endif
- __uaccess_ttbr0_enable x0
+ __uaccess_ttbr0_enable x0, x1
.if \el == 0
/*
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -30,7 +30,7 @@
* Alignment fixed up by hardware.
*/
ENTRY(__clear_user)
- uaccess_enable_not_uao x2, x3
+ uaccess_enable_not_uao x2, x3, x4
mov x2, x1 // save the size for fixup return
subs x1, x1, #8
b.mi 2f
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -64,7 +64,7 @@
end .req x5
ENTRY(__arch_copy_from_user)
- uaccess_enable_not_uao x3, x4
+ uaccess_enable_not_uao x3, x4, x5
add end, x0, x2
#include "copy_template.S"
uaccess_disable_not_uao x3
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -65,7 +65,7 @@
end .req x5
ENTRY(raw_copy_in_user)
- uaccess_enable_not_uao x3, x4
+ uaccess_enable_not_uao x3, x4, x5
add end, x0, x2
#include "copy_template.S"
uaccess_disable_not_uao x3
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -63,7 +63,7 @@
end .req x5
ENTRY(__arch_copy_to_user)
- uaccess_enable_not_uao x3, x4
+ uaccess_enable_not_uao x3, x4, x5
add end, x0, x2
#include "copy_template.S"
uaccess_disable_not_uao x3
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -49,7 +49,7 @@ ENTRY(flush_icache_range)
* - end - virtual end address of region
*/
ENTRY(__flush_cache_user_range)
- uaccess_ttbr0_enable x2, x3
+ uaccess_ttbr0_enable x2, x3, x4
dcache_line_size x2, x3
sub x3, x2, #1
bic x4, x0, x3
--- a/arch/arm64/xen/hypercall.S
+++ b/arch/arm64/xen/hypercall.S
@@ -101,7 +101,7 @@ ENTRY(privcmd_call)
* need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
* is enabled (it implies that hardware UAO and PAN disabled).
*/
- uaccess_ttbr0_enable x6, x7
+ uaccess_ttbr0_enable x6, x7, x8
hvc XEN_IMM
/*
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 9b0de864b5bc upstream.
Since an mm has both a kernel and a user ASID, we need to ensure that
broadcast TLB maintenance targets both address spaces so that things
like CoW continue to work with the uaccess primitives in the kernel.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/tlbflush.h | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -23,6 +23,7 @@
#include <linux/sched.h>
#include <asm/cputype.h>
+#include <asm/mmu.h>
/*
* Raw TLBI operations.
@@ -54,6 +55,11 @@
#define __tlbi(op, ...) __TLBI_N(op, ##__VA_ARGS__, 1, 0)
+#define __tlbi_user(op, arg) do { \
+ if (arm64_kernel_unmapped_at_el0()) \
+ __tlbi(op, (arg) | USER_ASID_FLAG); \
+} while (0)
+
/*
* TLB Management
* ==============
@@ -115,6 +121,7 @@ static inline void flush_tlb_mm(struct m
dsb(ishst);
__tlbi(aside1is, asid);
+ __tlbi_user(aside1is, asid);
dsb(ish);
}
@@ -125,6 +132,7 @@ static inline void flush_tlb_page(struct
dsb(ishst);
__tlbi(vale1is, addr);
+ __tlbi_user(vale1is, addr);
dsb(ish);
}
@@ -151,10 +159,13 @@ static inline void __flush_tlb_range(str
dsb(ishst);
for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
- if (last_level)
+ if (last_level) {
__tlbi(vale1is, addr);
- else
+ __tlbi_user(vale1is, addr);
+ } else {
__tlbi(vae1is, addr);
+ __tlbi_user(vae1is, addr);
+ }
}
dsb(ish);
}
@@ -194,6 +205,7 @@ static inline void __flush_tlb_pgtable(s
unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);
__tlbi(vae1is, addr);
+ __tlbi_user(vae1is, addr);
dsb(ish);
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 4bf3286d29f3 upstream.
Hook up the entry trampoline to our exception vectors so that all
exceptions from and returns to EL0 go via the trampoline, which swizzles
the vector base register accordingly. Transitioning to and from the
kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch
registers for native tasks.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 39 ++++++++++++++++++++++++++++++++++++---
1 file changed, 36 insertions(+), 3 deletions(-)
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -73,6 +73,17 @@
.macro kernel_ventry, el, label, regsize = 64
.align 7
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ .if \el == 0
+ .if \regsize == 64
+ mrs x30, tpidrro_el0
+ msr tpidrro_el0, xzr
+ .else
+ mov x30, xzr
+ .endif
+ .endif
+#endif
+
sub sp, sp, #S_FRAME_SIZE
#ifdef CONFIG_VMAP_STACK
/*
@@ -119,6 +130,11 @@
b el\()\el\()_\label
.endm
+ .macro tramp_alias, dst, sym
+ mov_q \dst, TRAMP_VALIAS
+ add \dst, \dst, #(\sym - .entry.tramp.text)
+ .endm
+
.macro kernel_entry, el, regsize = 64
.if \regsize == 32
mov w0, w0 // zero upper 32 bits of x0
@@ -271,18 +287,20 @@ alternative_else_nop_endif
.if \el == 0
ldr x23, [sp, #S_SP] // load return stack pointer
msr sp_el0, x23
+ tst x22, #PSR_MODE32_BIT // native task?
+ b.eq 3f
+
#ifdef CONFIG_ARM64_ERRATUM_845719
alternative_if ARM64_WORKAROUND_845719
- tbz x22, #4, 1f
#ifdef CONFIG_PID_IN_CONTEXTIDR
mrs x29, contextidr_el1
msr contextidr_el1, x29
#else
msr contextidr_el1, xzr
#endif
-1:
alternative_else_nop_endif
#endif
+3:
.endif
msr elr_el1, x21 // set up the return data
@@ -304,7 +322,22 @@ alternative_else_nop_endif
ldp x28, x29, [sp, #16 * 14]
ldr lr, [sp, #S_LR]
add sp, sp, #S_FRAME_SIZE // restore sp
- eret // return to kernel
+
+#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
+ eret
+#else
+ .if \el == 0
+ bne 4f
+ msr far_el1, x30
+ tramp_alias x30, tramp_exit_native
+ br x30
+4:
+ tramp_alias x30, tramp_exit_compat
+ br x30
+ .else
+ eret
+ .endif
+#endif
.endm
.macro irq_stack_entry
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit be04a6d1126b upstream.
Speculation attacks against the entry trampoline can potentially resteer
the speculative instruction stream through the indirect branch and into
arbitrary gadgets within the kernel.
This patch defends against these attacks by forcing a misprediction
through the return stack: a dummy BL instruction loads an entry into
the stack, so that the predicted program flow of the subsequent RET
instruction is to a branch-to-self instruction which is finally resolved
as a branch to the kernel vectors with speculation suppressed.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1029,6 +1029,14 @@ alternative_else_nop_endif
.if \regsize == 64
msr tpidrro_el0, x30 // Restored in kernel_ventry
.endif
+ /*
+ * Defend against branch aliasing attacks by pushing a dummy
+ * entry onto the return stack and using a RET instruction to
+ * enter the full-fat kernel vectors.
+ */
+ bl 2f
+ b .
+2:
tramp_map_kernel x30
#ifdef CONFIG_RANDOMIZE_BASE
adr x30, tramp_vectors + PAGE_SIZE
@@ -1041,7 +1049,7 @@ alternative_insn isb, nop, ARM64_WORKARO
msr vbar_el1, x30
add x30, x30, #(1b - tramp_vectors)
isb
- br x30
+ ret
.endm
.macro tramp_exit, regsize = 64
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stephen Boyd <[email protected]>
Commit bb48711800e6 upstream.
The Kryo CPUs are also affected by the Falkor 1003 errata, so
we need to do the same workaround on Kryo CPUs. The MIDR is
slightly more complicated here, where the PART number is not
always the same when looking at all the bits from 15 to 4. Drop
the lower 8 bits and just look at the top 4 to see if it's '2'
and then consider those as Kryo CPUs. This covers all the
combinations without having to list them all out.
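A hedged worked example of that masking (the sample MIDR value is made
up; the macros are the ones touched by this patch):

/* Hypothetical Kryo: implementer 0x51 (Qualcomm), variant 1,
 * architecture 0xf, part number 0x205, revision 1. */
u32 midr = (0x51 << 24) | (1 << 20) | (0xf << 16) | (0x205 << 4) | 1;
					/* == 0x511f2051 */

u32 model = midr & (MIDR_IMPLEMENTOR_MASK |
		    (0xf00 << MIDR_PARTNUM_SHIFT) |
		    MIDR_ARCHITECTURE_MASK);
					/* == 0x510f2000 */

/* MIDR_QCOM_KRYO == MIDR_CPU_MODEL(0x51, 0x200) == 0x510f2000, so any
 * Qualcomm part 0x2xx matches, regardless of the low 8 part bits. */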
Fixes: 38fd94b0275c ("arm64: Work around Falkor erratum 1003")
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Stephen Boyd <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Conflicts:
arch/arm64/include/asm/cputype.h
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/arm64/silicon-errata.txt | 2 +-
arch/arm64/include/asm/cputype.h | 2 ++
arch/arm64/kernel/cpu_errata.c | 21 +++++++++++++++++++++
3 files changed, 24 insertions(+), 1 deletion(-)
--- a/Documentation/arm64/silicon-errata.txt
+++ b/Documentation/arm64/silicon-errata.txt
@@ -72,7 +72,7 @@ stable kernels.
| Hisilicon | Hip0{6,7} | #161010701 | N/A |
| Hisilicon | Hip07 | #161600802 | HISILICON_ERRATUM_161600802 |
| | | | |
-| Qualcomm Tech. | Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
+| Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
| Qualcomm Tech. | Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 |
| Qualcomm Tech. | QDF2400 ITS | E0065 | QCOM_QDF2400_ERRATUM_0065 |
| Qualcomm Tech. | Falkor v{1,2} | E1041 | QCOM_FALKOR_ERRATUM_1041 |
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -92,6 +92,7 @@
#define QCOM_CPU_PART_FALKOR_V1 0x800
#define QCOM_CPU_PART_FALKOR 0xC00
+#define QCOM_CPU_PART_KRYO 0x200
#define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
#define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
@@ -101,6 +102,7 @@
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
#define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
#define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
+#define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
#ifndef __ASSEMBLY__
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -30,6 +30,20 @@ is_affected_midr_range(const struct arm6
entry->midr_range_max);
}
+static bool __maybe_unused
+is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
+{
+ u32 model;
+
+ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+ model = read_cpuid_id();
+ model &= MIDR_IMPLEMENTOR_MASK | (0xf00 << MIDR_PARTNUM_SHIFT) |
+ MIDR_ARCHITECTURE_MASK;
+
+ return model == entry->midr_model;
+}
+
static bool
has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
int scope)
@@ -169,6 +183,13 @@ const struct arm64_cpu_capabilities arm6
MIDR_CPU_VAR_REV(0, 0),
MIDR_CPU_VAR_REV(0, 0)),
},
+ {
+ .desc = "Qualcomm Technologies Kryo erratum 1003",
+ .capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
+ .def_scope = SCOPE_LOCAL_CPU,
+ .midr_model = MIDR_QCOM_KRYO,
+ .matches = is_kryo_midr,
+ },
#endif
#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009
{
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jayachandran C <[email protected]>
Commit 0d90718871fe upstream.
Add the older Broadcom ID as well as the new Cavium ID for ThunderX2
CPUs.
Signed-off-by: Jayachandran C <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cputype.h | 3 +++
1 file changed, 3 insertions(+)
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -87,6 +87,7 @@
#define CAVIUM_CPU_PART_THUNDERX 0x0A1
#define CAVIUM_CPU_PART_THUNDERX_81XX 0x0A2
#define CAVIUM_CPU_PART_THUNDERX_83XX 0x0A3
+#define CAVIUM_CPU_PART_THUNDERX2 0x0AF
#define BRCM_CPU_PART_VULCAN 0x516
@@ -100,6 +101,8 @@
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
+#define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
#define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
#define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
#define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Malcolm Priestley <[email protected]>
commit 3d932ee27e852e4904647f15b64dedca51187ad7 upstream.
Warm start has no check as to whether a genuine device has
connected, and proceeds to the next execution path.
Check that the device returns 0x47 at offset 2 of the USB descriptor read
and that the full requested amount of 6 bytes was returned.
Fix for
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access as
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: Malcolm Priestley <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/usb/dvb-usb-v2/lmedm04.c | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
--- a/drivers/media/usb/dvb-usb-v2/lmedm04.c
+++ b/drivers/media/usb/dvb-usb-v2/lmedm04.c
@@ -494,18 +494,23 @@ static int lme2510_pid_filter(struct dvb
static int lme2510_return_status(struct dvb_usb_device *d)
{
- int ret = 0;
+ int ret;
u8 *data;
- data = kzalloc(10, GFP_KERNEL);
+ data = kzalloc(6, GFP_KERNEL);
if (!data)
return -ENOMEM;
- ret |= usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0),
- 0x06, 0x80, 0x0302, 0x00, data, 0x0006, 200);
- info("Firmware Status: %x (%x)", ret , data[2]);
+ ret = usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0),
+ 0x06, 0x80, 0x0302, 0x00,
+ data, 0x6, 200);
+ if (ret != 6)
+ ret = -EINVAL;
+ else
+ ret = data[2];
+
+ info("Firmware Status: %6ph", data);
- ret = (ret < 0) ? -ENODEV : data[2];
kfree(data);
return ret;
}
@@ -1189,6 +1194,7 @@ static int lme2510_get_adapter_count(str
static int lme2510_identify_state(struct dvb_usb_device *d, const char **name)
{
struct lme2510_state *st = d->priv;
+ int status;
usb_reset_configuration(d->udev);
@@ -1197,12 +1203,16 @@ static int lme2510_identify_state(struct
st->dvb_usb_lme2510_firmware = dvb_usb_lme2510_firmware;
- if (lme2510_return_status(d) == 0x44) {
+ status = lme2510_return_status(d);
+ if (status == 0x44) {
*name = lme_firmware_switch(d, 0);
return COLD;
}
- return 0;
+ if (status != 0x47)
+ return -EINVAL;
+
+ return WARM;
}
static int lme2510_get_stream_config(struct dvb_frontend *fe, u8 *ts_type,
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt (VMware) <[email protected]>
commit 364f56653708ba8bcdefd4f0da2a42904baa8eeb upstream.
When issuing an IPI RT push, where an IPI is sent to each CPU that has more
than one RT task scheduled on it, it references the root domain's rto_mask,
which contains all the CPUs within the root domain that have more than one RT
task in the runnable state. The problem is, after the IPIs are initiated, the
rq->lock is released. This means that the root domain that is associated to
the run queue could be freed while the IPIs are going around.
Add a sched_get_rd() and a sched_put_rd() that will increment and decrement
the root domain's ref count respectively. This way when initiating the IPIs,
the scheduler will up the root domain's ref count before releasing the
rq->lock, ensuring that the root domain does not go away until the IPI round
is complete.
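A hedged sketch of the resulting lifetime rule, condensed from the hunks
below:

/* tell_cpu_to_push(): pin the root domain before rq->lock is dropped */
if (cpu >= 0) {
	sched_get_rd(rq->rd);	/* atomic_inc(&rd->refcount) */
	irq_work_queue_on(&rq->rd->rto_push_work, cpu);
}

/* rto_push_irq_work_func(): drop the pin once the IPI round is over */
if (cpu < 0) {
	sched_put_rd(rd);	/* last ref frees via call_rcu_sched() */
	return;
}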
Reported-by: Pavan Kondeti <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/sched/rt.c | 9 +++++++--
kernel/sched/sched.h | 2 ++
kernel/sched/topology.c | 13 +++++++++++++
3 files changed, 22 insertions(+), 2 deletions(-)
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1990,8 +1990,11 @@ static void tell_cpu_to_push(struct rq *
rto_start_unlock(&rq->rd->rto_loop_start);
- if (cpu >= 0)
+ if (cpu >= 0) {
+ /* Make sure the rd does not get freed while pushing */
+ sched_get_rd(rq->rd);
irq_work_queue_on(&rq->rd->rto_push_work, cpu);
+ }
}
/* Called from hardirq context */
@@ -2021,8 +2024,10 @@ void rto_push_irq_work_func(struct irq_w
raw_spin_unlock(&rd->rto_lock);
- if (cpu < 0)
+ if (cpu < 0) {
+ sched_put_rd(rd);
return;
+ }
/* Try the next RT overloaded CPU */
irq_work_queue_on(&rd->rto_push_work, cpu);
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -665,6 +665,8 @@ extern struct mutex sched_domains_mutex;
extern void init_defrootdomain(void);
extern int sched_init_domains(const struct cpumask *cpu_map);
extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
+extern void sched_get_rd(struct root_domain *rd);
+extern void sched_put_rd(struct root_domain *rd);
#ifdef HAVE_RT_PUSH_IPI
extern void rto_push_irq_work_func(struct irq_work *work);
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -259,6 +259,19 @@ void rq_attach_root(struct rq *rq, struc
call_rcu_sched(&old_rd->rcu, free_rootdomain);
}
+void sched_get_rd(struct root_domain *rd)
+{
+ atomic_inc(&rd->refcount);
+}
+
+void sched_put_rd(struct root_domain *rd)
+{
+ if (!atomic_dec_and_test(&rd->refcount))
+ return;
+
+ call_rcu_sched(&rd->rcu, free_rootdomain);
+}
+
static int init_rootdomain(struct root_domain *rd)
{
if (!zalloc_cpumask_var(&rd->span, GFP_KERNEL))
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit f72af90c3783 upstream.
We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
So let's intercept it as early as we can by testing for the
function call number as soon as we've identified a HVC call
coming from the guest.
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kvm/hyp/hyp-entry.S | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -15,6 +15,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
+#include <linux/arm-smccc.h>
#include <linux/linkage.h>
#include <asm/alternative.h>
@@ -64,10 +65,11 @@ alternative_endif
lsr x0, x1, #ESR_ELx_EC_SHIFT
cmp x0, #ESR_ELx_EC_HVC64
+ ccmp x0, #ESR_ELx_EC_HVC32, #4, ne
b.ne el1_trap
- mrs x1, vttbr_el2 // If vttbr is valid, the 64bit guest
- cbnz x1, el1_trap // called HVC
+ mrs x1, vttbr_el2 // If vttbr is valid, the guest
+ cbnz x1, el1_hvc_guest // called HVC
/* Here, we're pretty sure the host called HVC. */
ldp x0, x1, [sp], #16
@@ -100,6 +102,20 @@ alternative_endif
eret
+el1_hvc_guest:
+ /*
+ * Fastest possible path for ARM_SMCCC_ARCH_WORKAROUND_1.
+ * The workaround has already been applied on the host,
+ * so let's quickly get back to the guest. We don't bother
+ * restoring x1, as it can be clobbered anyway.
+ */
+ ldr x1, [sp] // Guest's x0
+ eor w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
+ cbnz w1, el1_trap
+ mov x0, x1
+ add sp, sp, #16
+ eret
+
el1_trap:
/*
* x0: ESR_EC
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit a4097b351118 upstream.
We're about to need kvm_psci_version in HYP too. So let's turn it
into a static inline, and pass the kvm structure as a second
parameter (so that HYP can do a kern_hyp_va on it).
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kvm/hyp/switch.c | 18 +++++++++++-------
include/kvm/arm_psci.h | 21 ++++++++++++++++++++-
virt/kvm/arm/psci.c | 12 ++----------
3 files changed, 33 insertions(+), 18 deletions(-)
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -19,6 +19,8 @@
#include <linux/jump_label.h>
#include <uapi/linux/psci.h>
+#include <kvm/arm_psci.h>
+
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
@@ -344,14 +346,16 @@ again:
if (exit_code == ARM_EXCEPTION_TRAP &&
(kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC64 ||
- kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC32) &&
- vcpu_get_reg(vcpu, 0) == PSCI_0_2_FN_PSCI_VERSION) {
- u64 val = PSCI_RET_NOT_SUPPORTED;
- if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
- val = 2;
+ kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_HVC32)) {
+ u32 val = vcpu_get_reg(vcpu, 0);
- vcpu_set_reg(vcpu, 0, val);
- goto again;
+ if (val == PSCI_0_2_FN_PSCI_VERSION) {
+ val = kvm_psci_version(vcpu, kern_hyp_va(vcpu->kvm));
+ if (unlikely(val == KVM_ARM_PSCI_0_1))
+ val = PSCI_RET_NOT_SUPPORTED;
+ vcpu_set_reg(vcpu, 0, val);
+ goto again;
+ }
}
if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -18,6 +18,7 @@
#ifndef __KVM_ARM_PSCI_H__
#define __KVM_ARM_PSCI_H__
+#include <linux/kvm_host.h>
#include <uapi/linux/psci.h>
#define KVM_ARM_PSCI_0_1 PSCI_VERSION(0, 1)
@@ -26,7 +27,25 @@
#define KVM_ARM_PSCI_LATEST KVM_ARM_PSCI_1_0
-int kvm_psci_version(struct kvm_vcpu *vcpu);
+/*
+ * We need the KVM pointer independently from the vcpu as we can call
+ * this from HYP, and need to apply kern_hyp_va on it...
+ */
+static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
+{
+ /*
+ * Our PSCI implementation stays the same across versions from
+ * v0.2 onward, only adding the few mandatory functions (such
+ * as FEATURES with 1.0) that are required by newer
+ * revisions. It is thus safe to return the latest.
+ */
+ if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
+ return KVM_ARM_PSCI_LATEST;
+
+ return KVM_ARM_PSCI_0_1;
+}
+
+
int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);
#endif /* __KVM_ARM_PSCI_H__ */
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -123,7 +123,7 @@ static unsigned long kvm_psci_vcpu_on(st
if (!vcpu)
return PSCI_RET_INVALID_PARAMS;
if (!vcpu->arch.power_off) {
- if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
+ if (kvm_psci_version(source_vcpu, kvm) != KVM_ARM_PSCI_0_1)
return PSCI_RET_ALREADY_ON;
else
return PSCI_RET_INVALID_PARAMS;
@@ -232,14 +232,6 @@ static void kvm_psci_system_reset(struct
kvm_prepare_system_event(vcpu, KVM_SYSTEM_EVENT_RESET);
}
-int kvm_psci_version(struct kvm_vcpu *vcpu)
-{
- if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
- return KVM_ARM_PSCI_LATEST;
-
- return KVM_ARM_PSCI_0_1;
-}
-
static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
@@ -397,7 +389,7 @@ static int kvm_psci_0_1_call(struct kvm_
*/
static int kvm_psci_call(struct kvm_vcpu *vcpu)
{
- switch (kvm_psci_version(vcpu)) {
+ switch (kvm_psci_version(vcpu, vcpu->kvm)) {
case KVM_ARM_PSCI_1_0:
return kvm_psci_1_0_call(vcpu);
case KVM_ARM_PSCI_0_2:
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Robin Murphy <[email protected]>
Commit 51369e398d0d upstream.
Currently, USER_DS represents an exclusive limit while KERNEL_DS is
inclusive. In order to do some clever trickery for speculation-safe
masking, we need them both to behave equivalently - there aren't enough
bits to make KERNEL_DS exclusive, so we have precisely one option. This
also happens to correct a longstanding false negative for a range
ending on the very top byte of kernel memory.
Mark Rutland points out that we've actually got the semantics of
addresses vs. segments muddled up in most of the places we need to
amend, so shuffle the {USER,KERNEL}_DS definitions around such that we
can correct those properly instead of just pasting "-1"s everywhere.
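The new semantics are easier to see in plain C. A hedged reference model
(not the kernel code; it leans on the compiler's 128-bit integer
extension to stand in for the "u65" arithmetic):

#include <stdbool.h>
#include <stdint.h>

/* Both limits are now inclusive: the access [addr, addr + size) is OK
 * iff addr + size <= limit + 1, evaluated without 64-bit overflow. */
static bool range_ok_model(uint64_t addr, uint64_t size, uint64_t limit)
{
	return (unsigned __int128)addr + size <= (unsigned __int128)limit + 1;
}

/* With USER_DS == TASK_SIZE_64 - 1, an access ending exactly at
 * TASK_SIZE_64 passes; with KERNEL_DS == ~0UL, a range ending on the
 * very top byte of the address space now passes too. */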
Signed-off-by: Robin Murphy <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/processor.h | 3 ++
arch/arm64/include/asm/uaccess.h | 45 +++++++++++++++++++++----------------
arch/arm64/kernel/entry.S | 4 +--
arch/arm64/mm/fault.c | 4 +--
4 files changed, 33 insertions(+), 23 deletions(-)
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -21,6 +21,9 @@
#define TASK_SIZE_64 (UL(1) << VA_BITS)
+#define KERNEL_DS UL(-1)
+#define USER_DS (TASK_SIZE_64 - 1)
+
#ifndef __ASSEMBLY__
/*
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -35,10 +35,7 @@
#include <asm/compiler.h>
#include <asm/extable.h>
-#define KERNEL_DS (-1UL)
#define get_ds() (KERNEL_DS)
-
-#define USER_DS TASK_SIZE_64
#define get_fs() (current_thread_info()->addr_limit)
static inline void set_fs(mm_segment_t fs)
@@ -66,22 +63,32 @@ static inline void set_fs(mm_segment_t f
* Returns 1 if the range is valid, 0 otherwise.
*
* This is equivalent to the following test:
- * (u65)addr + (u65)size <= current->addr_limit
- *
- * This needs 65-bit arithmetic.
+ * (u65)addr + (u65)size <= (u65)current->addr_limit + 1
*/
-#define __range_ok(addr, size) \
-({ \
- unsigned long __addr = (unsigned long)(addr); \
- unsigned long flag, roksum; \
- __chk_user_ptr(addr); \
- asm("adds %1, %1, %3; ccmp %1, %4, #2, cc; cset %0, ls" \
- : "=&r" (flag), "=&r" (roksum) \
- : "1" (__addr), "Ir" (size), \
- "r" (current_thread_info()->addr_limit) \
- : "cc"); \
- flag; \
-})
+static inline unsigned long __range_ok(unsigned long addr, unsigned long size)
+{
+ unsigned long limit = current_thread_info()->addr_limit;
+
+ __chk_user_ptr(addr);
+ asm volatile(
+ // A + B <= C + 1 for all A,B,C, in four easy steps:
+ // 1: X = A + B; X' = X % 2^64
+ " adds %0, %0, %2\n"
+ // 2: Set C = 0 if X > 2^64, to guarantee X' > C in step 4
+ " csel %1, xzr, %1, hi\n"
+ // 3: Set X' = ~0 if X >= 2^64. For X == 2^64, this decrements X'
+ // to compensate for the carry flag being set in step 4. For
+ // X > 2^64, X' merely has to remain nonzero, which it does.
+ " csinv %0, %0, xzr, cc\n"
+ // 4: For X < 2^64, this gives us X' - C - 1 <= 0, where the -1
+ // comes from the carry in being clear. Otherwise, we are
+ // testing X' - C == 0, subject to the previous adjustments.
+ " sbcs xzr, %0, %1\n"
+ " cset %0, ls\n"
+ : "+r" (addr), "+r" (limit) : "Ir" (size) : "cc");
+
+ return addr;
+}
/*
* When dealing with data aborts, watchpoints, or instruction traps we may end
@@ -90,7 +97,7 @@ static inline void set_fs(mm_segment_t f
*/
#define untagged_addr(addr) sign_extend64(addr, 55)
-#define access_ok(type, addr, size) __range_ok(addr, size)
+#define access_ok(type, addr, size) __range_ok((unsigned long)(addr), size)
#define user_addr_max get_fs
#define _ASM_EXTABLE(from, to) \
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -167,10 +167,10 @@ alternative_else_nop_endif
.else
add x21, sp, #S_FRAME_SIZE
get_thread_info tsk
- /* Save the task's original addr_limit and set USER_DS (TASK_SIZE_64) */
+ /* Save the task's original addr_limit and set USER_DS */
ldr x20, [tsk, #TSK_TI_ADDR_LIMIT]
str x20, [sp, #S_ORIG_ADDR_LIMIT]
- mov x20, #TASK_SIZE_64
+ mov x20, #USER_DS
str x20, [tsk, #TSK_TI_ADDR_LIMIT]
/* No need to reset PSTATE.UAO, hardware's already set it to 0 for us */
.endif /* \el == 0 */
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -240,7 +240,7 @@ static inline bool is_permission_fault(u
if (fsc_type == ESR_ELx_FSC_PERM)
return true;
- if (addr < USER_DS && system_uses_ttbr0_pan())
+ if (addr < TASK_SIZE && system_uses_ttbr0_pan())
return fsc_type == ESR_ELx_FSC_FAULT &&
(regs->pstate & PSR_PAN_BIT);
@@ -414,7 +414,7 @@ static int __kprobes do_page_fault(unsig
mm_flags |= FAULT_FLAG_WRITE;
}
- if (addr < USER_DS && is_permission_fault(esr, regs, addr)) {
+ if (addr < TASK_SIZE && is_permission_fault(esr, regs, addr)) {
/* regs->orig_addr_limit may be 0 if we entered from EL0 */
if (regs->orig_addr_limit == KERNEL_DS)
die("Accessing user space memory with fs=KERNEL_DS", regs, esr);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit f2d3b2e8759a upstream.
One of the major improvements of SMCCC v1.1 is that it only clobbers
the first 4 registers, on both 32 and 64-bit. This means that it
becomes very easy to provide an inline version of the SMC call
primitive, and avoid performing a function call to stash the
registers that would otherwise be clobbered by SMCCC v1.0.
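As an aside, the header relies on a standard preprocessor argument-counting
idiom to select the matching __declare_arg_N and constraint lists at compile
time. A standalone sketch of just that idiom (not kernel code; the string
arguments are placeholders):

#include <stdio.h>

#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
#define __count_args(...) \
	___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0)

int main(void)
{
	/*
	 * For a call shaped like arm_smccc_1_1_smc(fid, a1, a2, &res) this
	 * evaluates to 2: the number of arguments following the function ID,
	 * with the trailing result pointer not counted.
	 */
	printf("%d\n", __count_args("fid", "a1", "a2", (void *)0));
	return 0;
}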
Reviewed-by: Robin Murphy <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/arm-smccc.h | 141 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 141 insertions(+)
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -150,5 +150,146 @@ asmlinkage void __arm_smccc_hvc(unsigned
#define arm_smccc_hvc_quirk(...) __arm_smccc_hvc(__VA_ARGS__)
+/* SMCCC v1.1 implementation madness follows */
+#ifdef CONFIG_ARM64
+
+#define SMCCC_SMC_INST "smc #0"
+#define SMCCC_HVC_INST "hvc #0"
+
+#elif defined(CONFIG_ARM)
+#include <asm/opcodes-sec.h>
+#include <asm/opcodes-virt.h>
+
+#define SMCCC_SMC_INST __SMC(0)
+#define SMCCC_HVC_INST __HVC(0)
+
+#endif
+
+#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
+
+#define __count_args(...) \
+ ___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0)
+
+#define __constraint_write_0 \
+ "+r" (r0), "=&r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_1 \
+ "+r" (r0), "+r" (r1), "=&r" (r2), "=&r" (r3)
+#define __constraint_write_2 \
+ "+r" (r0), "+r" (r1), "+r" (r2), "=&r" (r3)
+#define __constraint_write_3 \
+ "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
+#define __constraint_write_4 __constraint_write_3
+#define __constraint_write_5 __constraint_write_4
+#define __constraint_write_6 __constraint_write_5
+#define __constraint_write_7 __constraint_write_6
+
+#define __constraint_read_0
+#define __constraint_read_1
+#define __constraint_read_2
+#define __constraint_read_3
+#define __constraint_read_4 "r" (r4)
+#define __constraint_read_5 __constraint_read_4, "r" (r5)
+#define __constraint_read_6 __constraint_read_5, "r" (r6)
+#define __constraint_read_7 __constraint_read_6, "r" (r7)
+
+#define __declare_arg_0(a0, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register unsigned long r1 asm("r1"); \
+ register unsigned long r2 asm("r2"); \
+ register unsigned long r3 asm("r3")
+
+#define __declare_arg_1(a0, a1, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register typeof(a1) r1 asm("r1") = a1; \
+ register unsigned long r2 asm("r2"); \
+ register unsigned long r3 asm("r3")
+
+#define __declare_arg_2(a0, a1, a2, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register typeof(a1) r1 asm("r1") = a1; \
+ register typeof(a2) r2 asm("r2") = a2; \
+ register unsigned long r3 asm("r3")
+
+#define __declare_arg_3(a0, a1, a2, a3, res) \
+ struct arm_smccc_res *___res = res; \
+ register u32 r0 asm("r0") = a0; \
+ register typeof(a1) r1 asm("r1") = a1; \
+ register typeof(a2) r2 asm("r2") = a2; \
+ register typeof(a3) r3 asm("r3") = a3
+
+#define __declare_arg_4(a0, a1, a2, a3, a4, res) \
+ __declare_arg_3(a0, a1, a2, a3, res); \
+ register typeof(a4) r4 asm("r4") = a4
+
+#define __declare_arg_5(a0, a1, a2, a3, a4, a5, res) \
+ __declare_arg_4(a0, a1, a2, a3, a4, res); \
+ register typeof(a5) r5 asm("r5") = a5
+
+#define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res) \
+ __declare_arg_5(a0, a1, a2, a3, a4, a5, res); \
+ register typeof(a6) r6 asm("r6") = a6
+
+#define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res) \
+ __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res); \
+ register typeof(a7) r7 asm("r7") = a7
+
+#define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
+#define __declare_args(count, ...) ___declare_args(count, __VA_ARGS__)
+
+#define ___constraints(count) \
+ : __constraint_write_ ## count \
+ : __constraint_read_ ## count \
+ : "memory"
+#define __constraints(count) ___constraints(count)
+
+/*
+ * We have an output list that is not necessarily used, and GCC feels
+ * entitled to optimise the whole sequence away. "volatile" is what
+ * makes it stick.
+ */
+#define __arm_smccc_1_1(inst, ...) \
+ do { \
+ __declare_args(__count_args(__VA_ARGS__), __VA_ARGS__); \
+ asm volatile(inst "\n" \
+ __constraints(__count_args(__VA_ARGS__))); \
+ if (___res) \
+ *___res = (typeof(*___res)){r0, r1, r2, r3}; \
+ } while (0)
+
+/*
+ * arm_smccc_1_1_smc() - make an SMCCC v1.1 compliant SMC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make SMC calls following SMC Calling Convention v1.1.
+ * The content of the supplied param are copied to registers 0 to 7 prior
+ * to the SMC instruction. The return values are updated with the content
+ * from register 0 to 3 on return from the SMC instruction if not NULL.
+ */
+#define arm_smccc_1_1_smc(...) __arm_smccc_1_1(SMCCC_SMC_INST, __VA_ARGS__)
+
+/*
+ * arm_smccc_1_1_hvc() - make an SMCCC v1.1 compliant HVC call
+ *
+ * This is a variadic macro taking one to eight source arguments, and
+ * an optional return structure.
+ *
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+ *
+ * This macro is used to make HVC calls following SMC Calling Convention v1.1.
+ * The content of the supplied param are copied to registers 0 to 7 prior
+ * to the HVC instruction. The return values are updated with the content
+ * from register 0 to 3 on return from the HVC instruction if not NULL.
+ */
+#define arm_smccc_1_1_hvc(...) __arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
+
#endif /*__ASSEMBLY__*/
#endif /*__LINUX_ARM_SMCCC_H*/
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 6314d90e6493 upstream.
In a similar manner to array_index_mask_nospec, this patch introduces an
assembly macro (mask_nospec64) which can be used to bound a value under
speculation. This macro is then used to ensure that the indirect branch
through the syscall table is bounded under speculation, with out-of-range
addresses speculating as calls to sys_io_setup (0).
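For clarity, a C model of what the macro computes (illustrative only - the
real thing is the asm macro below, which additionally ends with a CSDB
barrier that plain C cannot express):

#include <stdint.h>

/* Returns idx when idx < limit (with idx's top bit clear), 0 otherwise, so
 * even on a mispredicted path an out-of-range syscall number indexes entry
 * 0 of the table (sys_io_setup). */
static uint64_t mask_nospec64_model(uint64_t idx, uint64_t limit)
{
	/* bit 63 set iff idx < limit and idx's own top bit is clear */
	uint64_t tmp = (idx - limit) & ~idx;
	/* arithmetic shift (as GCC/Clang implement it) -> all ones or zeros */
	uint64_t mask = (uint64_t)((int64_t)tmp >> 63);

	return idx & mask;
}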
Reviewed-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 11 +++++++++++
arch/arm64/kernel/entry.S | 2 ++
2 files changed, 13 insertions(+)
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -116,6 +116,17 @@
.endm
/*
+ * Sanitise a 64-bit bounded index wrt speculation, returning zero if out
+ * of bounds.
+ */
+ .macro mask_nospec64, idx, limit, tmp
+ sub \tmp, \idx, \limit
+ bic \tmp, \tmp, \idx
+ and \idx, \idx, \tmp, asr #63
+ csdb
+ .endm
+
+/*
* NOP sequence
*/
.macro nops, num
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -378,6 +378,7 @@ alternative_insn eret, nop, ARM64_UNMAP_
* x7 is reserved for the system call number in 32-bit mode.
*/
wsc_nr .req w25 // number of system calls
+xsc_nr .req x25 // number of system calls (zero-extended)
wscno .req w26 // syscall number
xscno .req x26 // syscall number (zero-extended)
stbl .req x27 // syscall table pointer
@@ -932,6 +933,7 @@ el0_svc_naked: // compat entry point
b.ne __sys_trace
cmp wscno, wsc_nr // check upper syscall limit
b.hs ni_sys
+ mask_nospec64 xscno, xsc_nr, x19 // enforce bounds for syscall number
ldr x16, [stbl, xscno, lsl #3] // address in the syscall table
blr x16 // call sys_* routine
b ret_fast_syscall
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bradley Bolen <[email protected]>
commit 7f29ae9f977bcdc3654e68bc36d170223c52fd48 upstream.
This fixes a race with idr_alloc where gd->first_minor can be set to the
same value for two simultaneous calls to ubiblock_create. Each instance
calls device_add_disk with the same first_minor. device_add_disk calls
bdi_register_owner which generates several warnings.
WARNING: CPU: 1 PID: 179 at kernel-source/fs/sysfs/dir.c:31
sysfs_warn_dup+0x68/0x88
sysfs: cannot create duplicate filename '/devices/virtual/bdi/252:2'
WARNING: CPU: 1 PID: 179 at kernel-source/lib/kobject.c:240
kobject_add_internal+0x1ec/0x2f8
kobject_add_internal failed for 252:2 with -EEXIST, don't try to
register things with the same name in the same directory
WARNING: CPU: 1 PID: 179 at kernel-source/fs/sysfs/dir.c:31
sysfs_warn_dup+0x68/0x88
sysfs: cannot create duplicate filename '/dev/block/252:2'
However, device_add_disk does not error out when bdi_register_owner
returns an error. Control continues until reaching blk_register_queue.
It then BUGs.
kernel BUG at kernel-source/fs/sysfs/group.c:113!
[<c01e26cc>] (internal_create_group) from [<c01e2950>]
(sysfs_create_group+0x20/0x24)
[<c01e2950>] (sysfs_create_group) from [<c00e3d38>]
(blk_trace_init_sysfs+0x18/0x20)
[<c00e3d38>] (blk_trace_init_sysfs) from [<c02bdfbc>]
(blk_register_queue+0xd8/0x154)
[<c02bdfbc>] (blk_register_queue) from [<c02cec84>]
(device_add_disk+0x194/0x44c)
[<c02cec84>] (device_add_disk) from [<c0436ec8>]
(ubiblock_create+0x284/0x2e0)
[<c0436ec8>] (ubiblock_create) from [<c0427bb8>]
(vol_cdev_ioctl+0x450/0x554)
[<c0427bb8>] (vol_cdev_ioctl) from [<c0189110>] (vfs_ioctl+0x30/0x44)
[<c0189110>] (vfs_ioctl) from [<c01892e0>] (do_vfs_ioctl+0xa0/0x790)
[<c01892e0>] (do_vfs_ioctl) from [<c0189a14>] (SyS_ioctl+0x44/0x68)
[<c0189a14>] (SyS_ioctl) from [<c0010640>] (ret_fast_syscall+0x0/0x34)
Locking idr_alloc/idr_remove removes the race and keeps gd->first_minor
unique.
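The shape of the fix, as a generic sketch rather than the driver code:
allocate the minor and publish the device under one and the same lock, so
two concurrent creators can never observe the same value.

#include <pthread.h>

static pthread_mutex_t devices_mutex = PTHREAD_MUTEX_INITIALIZER;
static int next_minor;	/* toy stand-in for the ubiblock_minor_idr IDR */

static int create_dev(void)
{
	int minor;

	pthread_mutex_lock(&devices_mutex);
	minor = next_minor++;	/* allocate the minor ... */
	/* ... set gd->first_minor, add to the device list, add_disk() ... */
	pthread_mutex_unlock(&devices_mutex);	/* ... and publish atomically */

	return minor;
}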
Fixes: 2bf50d42f3a4 ("UBI: block: Dynamically allocate minor numbers")
Signed-off-by: Bradley Bolen <[email protected]>
Reviewed-by: Boris Brezillon <[email protected]>
Signed-off-by: Richard Weinberger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/mtd/ubi/block.c | 42 ++++++++++++++++++++++++++----------------
1 file changed, 26 insertions(+), 16 deletions(-)
--- a/drivers/mtd/ubi/block.c
+++ b/drivers/mtd/ubi/block.c
@@ -99,6 +99,8 @@ struct ubiblock {
/* Linked list of all ubiblock instances */
static LIST_HEAD(ubiblock_devices);
+static DEFINE_IDR(ubiblock_minor_idr);
+/* Protects ubiblock_devices and ubiblock_minor_idr */
static DEFINE_MUTEX(devices_mutex);
static int ubiblock_major;
@@ -351,8 +353,6 @@ static const struct blk_mq_ops ubiblock_
.init_request = ubiblock_init_request,
};
-static DEFINE_IDR(ubiblock_minor_idr);
-
int ubiblock_create(struct ubi_volume_info *vi)
{
struct ubiblock *dev;
@@ -365,14 +365,15 @@ int ubiblock_create(struct ubi_volume_in
/* Check that the volume isn't already handled */
mutex_lock(&devices_mutex);
if (find_dev_nolock(vi->ubi_num, vi->vol_id)) {
- mutex_unlock(&devices_mutex);
- return -EEXIST;
+ ret = -EEXIST;
+ goto out_unlock;
}
- mutex_unlock(&devices_mutex);
dev = kzalloc(sizeof(struct ubiblock), GFP_KERNEL);
- if (!dev)
- return -ENOMEM;
+ if (!dev) {
+ ret = -ENOMEM;
+ goto out_unlock;
+ }
mutex_init(&dev->dev_mutex);
@@ -437,14 +438,13 @@ int ubiblock_create(struct ubi_volume_in
goto out_free_queue;
}
- mutex_lock(&devices_mutex);
list_add_tail(&dev->list, &ubiblock_devices);
- mutex_unlock(&devices_mutex);
/* Must be the last step: anyone can call file ops from now on */
add_disk(dev->gd);
dev_info(disk_to_dev(dev->gd), "created from ubi%d:%d(%s)",
dev->ubi_num, dev->vol_id, vi->name);
+ mutex_unlock(&devices_mutex);
return 0;
out_free_queue:
@@ -457,6 +457,8 @@ out_put_disk:
put_disk(dev->gd);
out_free_dev:
kfree(dev);
+out_unlock:
+ mutex_unlock(&devices_mutex);
return ret;
}
@@ -478,30 +480,36 @@ static void ubiblock_cleanup(struct ubib
int ubiblock_remove(struct ubi_volume_info *vi)
{
struct ubiblock *dev;
+ int ret;
mutex_lock(&devices_mutex);
dev = find_dev_nolock(vi->ubi_num, vi->vol_id);
if (!dev) {
- mutex_unlock(&devices_mutex);
- return -ENODEV;
+ ret = -ENODEV;
+ goto out_unlock;
}
/* Found a device, let's lock it so we can check if it's busy */
mutex_lock(&dev->dev_mutex);
if (dev->refcnt > 0) {
- mutex_unlock(&dev->dev_mutex);
- mutex_unlock(&devices_mutex);
- return -EBUSY;
+ ret = -EBUSY;
+ goto out_unlock_dev;
}
/* Remove from device list */
list_del(&dev->list);
- mutex_unlock(&devices_mutex);
-
ubiblock_cleanup(dev);
mutex_unlock(&dev->dev_mutex);
+ mutex_unlock(&devices_mutex);
+
kfree(dev);
return 0;
+
+out_unlock_dev:
+ mutex_unlock(&dev->dev_mutex);
+out_unlock:
+ mutex_unlock(&devices_mutex);
+ return ret;
}
static int ubiblock_resize(struct ubi_volume_info *vi)
@@ -630,6 +638,7 @@ static void ubiblock_remove_all(void)
struct ubiblock *next;
struct ubiblock *dev;
+ mutex_lock(&devices_mutex);
list_for_each_entry_safe(dev, next, &ubiblock_devices, list) {
/* The module is being forcefully removed */
WARN_ON(dev->desc);
@@ -638,6 +647,7 @@ static void ubiblock_remove_all(void)
ubiblock_cleanup(dev);
kfree(dev);
}
+ mutex_unlock(&devices_mutex);
}
int __init ubiblock_init(void)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit 6b46d444146eb8d0b99562795cea8086639d7282 upstream.
ubifs_symlink() forgot to free the kmalloc()'ed buffer holding the
encrypted symlink target, creating a memory leak. Fix it.
(UBIFS could actually encrypt directly into ui->data, removing the
temporary buffer, but that is left for the patch that switches to use
the symlink helper functions.)
Fixes: ca7f85be8d6c ("ubifs: Add support for encrypted symlinks")
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/ubifs/dir.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
--- a/fs/ubifs/dir.c
+++ b/fs/ubifs/dir.c
@@ -1216,10 +1216,8 @@ static int ubifs_symlink(struct inode *d
ostr.len = disk_link.len;
err = fscrypt_fname_usr_to_disk(inode, &istr, &ostr);
- if (err) {
- kfree(sd);
+ if (err)
goto out_inode;
- }
sd->len = cpu_to_le16(ostr.len);
disk_link.name = (char *)sd;
@@ -1251,11 +1249,10 @@ static int ubifs_symlink(struct inode *d
goto out_cancel;
mutex_unlock(&dir_ui->ui_mutex);
- ubifs_release_budget(c, &req);
insert_inode_hash(inode);
d_instantiate(dentry, inode);
- fscrypt_free_filename(&nm);
- return 0;
+ err = 0;
+ goto out_fname;
out_cancel:
dir->i_size -= sz_change;
@@ -1268,6 +1265,7 @@ out_fname:
fscrypt_free_filename(&nm);
out_budg:
ubifs_release_budget(c, &req);
+ kfree(sd);
return err;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tigran Mkrtchyan <[email protected]>
commit 7ff4cff637aa0bd2abbd81f53b2a6206c50afd95 upstream.
A pNFS server may return LAYOUTUNAVAILABLE error on LAYOUTGET for files
which don't have any layout. In this situation pnfs_update_layout
currently returns NULL. As this NULL is converted into ENOMEM, IO
requests fail instead of falling back to MDS.
Do not return ENOMEM on LAYOUTUNAVAILABLE and let client retry through
MDS.
Fixes 8d40b0f14846f. I would suggest backporting this fix to the
affected stable branches.
Signed-off-by: Tigran Mkrtchyan <[email protected]>
[trondmy: Use IS_ERR_OR_NULL()]
Fixes: 8d40b0f14846 ("NFS filelayout:call GETDEVICEINFO after...")
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfs/filelayout/filelayout.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/fs/nfs/filelayout/filelayout.c
+++ b/fs/nfs/filelayout/filelayout.c
@@ -895,9 +895,7 @@ fl_pnfs_update_layout(struct inode *ino,
lseg = pnfs_update_layout(ino, ctx, pos, count, iomode, strict_iomode,
gfp_flags);
- if (!lseg)
- lseg = ERR_PTR(-ENOMEM);
- if (IS_ERR(lseg))
+ if (IS_ERR_OR_NULL(lseg))
goto out;
lo = NFS_I(ino)->layout;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit 4f1764172a0aa7395d12b96cae640ca1438c5085 upstream.
The state of the stid is guaranteed by 2 locks:
- The nfs4_client 'cl_lock' spinlock
- The nfs4_ol_stateid 'st_mutex' mutex
so it is quite possible for the stid to be unhashed after lookup,
but before calling nfsd4_lock_ol_stateid(). So we do need to check
for a zero value for 'sc_type' in nfsd4_verify_open_stid().
Signed-off-by: Trond Myklebust <[email protected]>
Tested-by: Chuck Lever <[email protected]>
Fixes: 659aefb68eca ("nfsd: Ensure we don't recognise lock stateids...")
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfsd/nfs4state.c | 1 +
1 file changed, 1 insertion(+)
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -3590,6 +3590,7 @@ nfsd4_verify_open_stid(struct nfs4_stid
switch (s->sc_type) {
default:
break;
+ case 0:
case NFS4_CLOSED_STID:
case NFS4_CLOSED_DELEG_STID:
ret = nfserr_bad_stateid;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit b092201e0020 upstream.
Add the detection and runtime code for ARM_SMCCC_ARCH_WORKAROUND_1.
It is lovely. Really.
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/bpi.S | 20 ++++++++++++
arch/arm64/kernel/cpu_errata.c | 68 ++++++++++++++++++++++++++++++++++++++++-
2 files changed, 87 insertions(+), 1 deletion(-)
--- a/arch/arm64/kernel/bpi.S
+++ b/arch/arm64/kernel/bpi.S
@@ -17,6 +17,7 @@
*/
#include <linux/linkage.h>
+#include <linux/arm-smccc.h>
.macro ventry target
.rept 31
@@ -85,3 +86,22 @@ ENTRY(__qcom_hyp_sanitize_link_stack_sta
.endr
ldp x29, x30, [sp], #16
ENTRY(__qcom_hyp_sanitize_link_stack_end)
+
+.macro smccc_workaround_1 inst
+ sub sp, sp, #(8 * 4)
+ stp x2, x3, [sp, #(8 * 0)]
+ stp x0, x1, [sp, #(8 * 2)]
+ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_1
+ \inst #0
+ ldp x2, x3, [sp, #(8 * 0)]
+ ldp x0, x1, [sp, #(8 * 2)]
+ add sp, sp, #(8 * 4)
+.endm
+
+ENTRY(__smccc_workaround_1_smc_start)
+ smccc_workaround_1 smc
+ENTRY(__smccc_workaround_1_smc_end)
+
+ENTRY(__smccc_workaround_1_hvc_start)
+ smccc_workaround_1 hvc
+ENTRY(__smccc_workaround_1_hvc_end)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -70,6 +70,10 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_har
extern char __psci_hyp_bp_inval_start[], __psci_hyp_bp_inval_end[];
extern char __qcom_hyp_sanitize_link_stack_start[];
extern char __qcom_hyp_sanitize_link_stack_end[];
+extern char __smccc_workaround_1_smc_start[];
+extern char __smccc_workaround_1_smc_end[];
+extern char __smccc_workaround_1_hvc_start[];
+extern char __smccc_workaround_1_hvc_end[];
static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
const char *hyp_vecs_end)
@@ -116,6 +120,10 @@ static void __install_bp_hardening_cb(bp
#define __psci_hyp_bp_inval_end NULL
#define __qcom_hyp_sanitize_link_stack_start NULL
#define __qcom_hyp_sanitize_link_stack_end NULL
+#define __smccc_workaround_1_smc_start NULL
+#define __smccc_workaround_1_smc_end NULL
+#define __smccc_workaround_1_hvc_start NULL
+#define __smccc_workaround_1_hvc_end NULL
static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start,
@@ -142,17 +150,75 @@ static void install_bp_hardening_cb(con
__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
}
+#include <uapi/linux/psci.h>
+#include <linux/arm-smccc.h>
#include <linux/psci.h>
+static void call_smc_arch_workaround_1(void)
+{
+ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+}
+
+static void call_hvc_arch_workaround_1(void)
+{
+ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+}
+
+static bool check_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+{
+ bp_hardening_cb_t cb;
+ void *smccc_start, *smccc_end;
+ struct arm_smccc_res res;
+
+ if (!entry->matches(entry, SCOPE_LOCAL_CPU))
+ return false;
+
+ if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+ return false;
+
+ switch (psci_ops.conduit) {
+ case PSCI_CONDUIT_HVC:
+ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+ ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+ if (res.a0)
+ return false;
+ cb = call_hvc_arch_workaround_1;
+ smccc_start = __smccc_workaround_1_hvc_start;
+ smccc_end = __smccc_workaround_1_hvc_end;
+ break;
+
+ case PSCI_CONDUIT_SMC:
+ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+ ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+ if (res.a0)
+ return false;
+ cb = call_smc_arch_workaround_1;
+ smccc_start = __smccc_workaround_1_smc_start;
+ smccc_end = __smccc_workaround_1_smc_end;
+ break;
+
+ default:
+ return false;
+ }
+
+ install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+
+ return true;
+}
+
static int enable_psci_bp_hardening(void *data)
{
const struct arm64_cpu_capabilities *entry = data;
- if (psci_ops.get_version)
+ if (psci_ops.get_version) {
+ if (check_smccc_arch_workaround_1(entry))
+ return 0;
+
install_bp_hardening_cb(entry,
(bp_hardening_cb_t)psci_ops.get_version,
__psci_hyp_bp_inval_start,
__psci_hyp_bp_inval_end);
+ }
return 0;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede <[email protected]>
commit ca1b4974bd237f2373b0e980b11957aac3499b56 upstream.
Intel uses different SATA PCI ids for the Desktop and Mobile SKUs of their
chipsets. For older models, the comment describing which chipset the PCI id
is for also indicates when we're dealing with a mobile SKU. Extend the
comments for recent chipsets to also indicate mobile SKUs.
The information this commit adds comes from Intel's chipset datasheets.
This commit is a preparation patch for allowing a different default
SATA link power management policy for mobile chipsets.
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/ata/ahci.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -268,9 +268,9 @@ static const struct pci_device_id ahci_p
{ PCI_VDEVICE(INTEL, 0x3b23), board_ahci }, /* PCH AHCI */
{ PCI_VDEVICE(INTEL, 0x3b24), board_ahci }, /* PCH RAID */
{ PCI_VDEVICE(INTEL, 0x3b25), board_ahci }, /* PCH RAID */
- { PCI_VDEVICE(INTEL, 0x3b29), board_ahci }, /* PCH AHCI */
+ { PCI_VDEVICE(INTEL, 0x3b29), board_ahci }, /* PCH M AHCI */
{ PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
- { PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH RAID */
+ { PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH M RAID */
{ PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
{ PCI_VDEVICE(INTEL, 0x19b0), board_ahci }, /* DNV AHCI */
{ PCI_VDEVICE(INTEL, 0x19b1), board_ahci }, /* DNV AHCI */
@@ -293,9 +293,9 @@ static const struct pci_device_id ahci_p
{ PCI_VDEVICE(INTEL, 0x19cE), board_ahci }, /* DNV AHCI */
{ PCI_VDEVICE(INTEL, 0x19cF), board_ahci }, /* DNV AHCI */
{ PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
- { PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT M AHCI */
{ PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */
- { PCI_VDEVICE(INTEL, 0x1c05), board_ahci }, /* CPT RAID */
+ { PCI_VDEVICE(INTEL, 0x1c05), board_ahci }, /* CPT M RAID */
{ PCI_VDEVICE(INTEL, 0x1c06), board_ahci }, /* CPT RAID */
{ PCI_VDEVICE(INTEL, 0x1c07), board_ahci }, /* CPT RAID */
{ PCI_VDEVICE(INTEL, 0x1d02), board_ahci }, /* PBG AHCI */
@@ -304,20 +304,20 @@ static const struct pci_device_id ahci_p
{ PCI_VDEVICE(INTEL, 0x2826), board_ahci }, /* PBG RAID */
{ PCI_VDEVICE(INTEL, 0x2323), board_ahci }, /* DH89xxCC AHCI */
{ PCI_VDEVICE(INTEL, 0x1e02), board_ahci }, /* Panther Point AHCI */
- { PCI_VDEVICE(INTEL, 0x1e03), board_ahci }, /* Panther Point AHCI */
+ { PCI_VDEVICE(INTEL, 0x1e03), board_ahci }, /* Panther Point M AHCI */
{ PCI_VDEVICE(INTEL, 0x1e04), board_ahci }, /* Panther Point RAID */
{ PCI_VDEVICE(INTEL, 0x1e05), board_ahci }, /* Panther Point RAID */
{ PCI_VDEVICE(INTEL, 0x1e06), board_ahci }, /* Panther Point RAID */
- { PCI_VDEVICE(INTEL, 0x1e07), board_ahci }, /* Panther Point RAID */
+ { PCI_VDEVICE(INTEL, 0x1e07), board_ahci }, /* Panther Point M RAID */
{ PCI_VDEVICE(INTEL, 0x1e0e), board_ahci }, /* Panther Point RAID */
{ PCI_VDEVICE(INTEL, 0x8c02), board_ahci }, /* Lynx Point AHCI */
- { PCI_VDEVICE(INTEL, 0x8c03), board_ahci }, /* Lynx Point AHCI */
+ { PCI_VDEVICE(INTEL, 0x8c03), board_ahci }, /* Lynx Point M AHCI */
{ PCI_VDEVICE(INTEL, 0x8c04), board_ahci }, /* Lynx Point RAID */
- { PCI_VDEVICE(INTEL, 0x8c05), board_ahci }, /* Lynx Point RAID */
+ { PCI_VDEVICE(INTEL, 0x8c05), board_ahci }, /* Lynx Point M RAID */
{ PCI_VDEVICE(INTEL, 0x8c06), board_ahci }, /* Lynx Point RAID */
- { PCI_VDEVICE(INTEL, 0x8c07), board_ahci }, /* Lynx Point RAID */
+ { PCI_VDEVICE(INTEL, 0x8c07), board_ahci }, /* Lynx Point M RAID */
{ PCI_VDEVICE(INTEL, 0x8c0e), board_ahci }, /* Lynx Point RAID */
- { PCI_VDEVICE(INTEL, 0x8c0f), board_ahci }, /* Lynx Point RAID */
+ { PCI_VDEVICE(INTEL, 0x8c0f), board_ahci }, /* Lynx Point M RAID */
{ PCI_VDEVICE(INTEL, 0x9c02), board_ahci }, /* Lynx Point-LP AHCI */
{ PCI_VDEVICE(INTEL, 0x9c03), board_ahci }, /* Lynx Point-LP AHCI */
{ PCI_VDEVICE(INTEL, 0x9c04), board_ahci }, /* Lynx Point-LP RAID */
@@ -358,21 +358,21 @@ static const struct pci_device_id ahci_p
{ PCI_VDEVICE(INTEL, 0x9c87), board_ahci }, /* Wildcat Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x9c8f), board_ahci }, /* Wildcat Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x8c82), board_ahci }, /* 9 Series AHCI */
- { PCI_VDEVICE(INTEL, 0x8c83), board_ahci }, /* 9 Series AHCI */
+ { PCI_VDEVICE(INTEL, 0x8c83), board_ahci }, /* 9 Series M AHCI */
{ PCI_VDEVICE(INTEL, 0x8c84), board_ahci }, /* 9 Series RAID */
- { PCI_VDEVICE(INTEL, 0x8c85), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c85), board_ahci }, /* 9 Series M RAID */
{ PCI_VDEVICE(INTEL, 0x8c86), board_ahci }, /* 9 Series RAID */
- { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series M RAID */
{ PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
- { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */
+ { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series M RAID */
{ PCI_VDEVICE(INTEL, 0x9d03), board_ahci }, /* Sunrise Point-LP AHCI */
{ PCI_VDEVICE(INTEL, 0x9d05), board_ahci }, /* Sunrise Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0x9d07), board_ahci }, /* Sunrise Point-LP RAID */
{ PCI_VDEVICE(INTEL, 0xa102), board_ahci }, /* Sunrise Point-H AHCI */
- { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H AHCI */
+ { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H M AHCI */
{ PCI_VDEVICE(INTEL, 0xa105), board_ahci }, /* Sunrise Point-H RAID */
{ PCI_VDEVICE(INTEL, 0xa106), board_ahci }, /* Sunrise Point-H RAID */
- { PCI_VDEVICE(INTEL, 0xa107), board_ahci }, /* Sunrise Point-H RAID */
+ { PCI_VDEVICE(INTEL, 0xa107), board_ahci }, /* Sunrise Point-H M RAID */
{ PCI_VDEVICE(INTEL, 0xa10f), board_ahci }, /* Sunrise Point-H RAID */
{ PCI_VDEVICE(INTEL, 0x2822), board_ahci }, /* Lewisburg RAID*/
{ PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Lewisburg AHCI*/
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mika Westerberg <[email protected]>
commit f919dde0772a894c693a1eeabc77df69d6a9b937 upstream.
Add Intel Cannon Lake PCH-H PCI ID to the list of supported controllers.
Signed-off-by: Mika Westerberg <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/ata/ahci.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -386,6 +386,7 @@ static const struct pci_device_id ahci_p
{ PCI_VDEVICE(INTEL, 0xa206), board_ahci }, /* Lewisburg RAID*/
{ PCI_VDEVICE(INTEL, 0xa252), board_ahci }, /* Lewisburg RAID*/
{ PCI_VDEVICE(INTEL, 0xa256), board_ahci }, /* Lewisburg RAID*/
+ { PCI_VDEVICE(INTEL, 0xa356), board_ahci }, /* Cannon Lake PCH-H RAID */
{ PCI_VDEVICE(INTEL, 0x0f22), board_ahci }, /* Bay Trail AHCI */
{ PCI_VDEVICE(INTEL, 0x0f23), board_ahci }, /* Bay Trail AHCI */
{ PCI_VDEVICE(INTEL, 0x22a3), board_ahci }, /* Cherry Trail AHCI */
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Suzuki K Poulose <[email protected]>
Commit 67948af41f2e upstream.
Sometimes a single capability could be listed multiple times with
differing matches(), e.g., CPU errata for different MIDR versions.
This breaks verify_local_cpu_feature() and this_cpu_has_cap() as
we stop checking for a capability on a CPU at the first
entry in the given table, which is not sufficient. Make sure we
run the checks for all entries of the same capability. We do
this by fixing __this_cpu_has_cap() to run through all the
entries in the given table for a match and reuse it for
verify_local_cpu_feature().
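A standalone sketch of why the lookup must keep scanning (not the kernel
code; the names are made up for illustration): with two entries for the same
capability but different matches(), stopping at the first entry can miss a
CPU that only the second entry matches.

#include <stdbool.h>
#include <stddef.h>

struct cap_entry {
	int capability;
	bool (*matches)(void);
};

static bool table_has_cap(const struct cap_entry *tbl, size_t n, int cap)
{
	for (size_t i = 0; i < n; i++) {
		if (tbl[i].capability != cap || !tbl[i].matches)
			continue;
		if (tbl[i].matches())
			return true;
		/* same capability, but this entry does not match this CPU:
		 * keep scanning instead of giving up here */
	}
	return false;
}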
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 44 +++++++++++++++++++++--------------------
1 file changed, 23 insertions(+), 21 deletions(-)
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1118,6 +1118,26 @@ static void __init setup_elf_hwcaps(cons
cap_set_elf_hwcap(hwcaps);
}
+/*
+ * Check if the current CPU has a given feature capability.
+ * Should be called from non-preemptible context.
+ */
+static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+ unsigned int cap)
+{
+ const struct arm64_cpu_capabilities *caps;
+
+ if (WARN_ON(preemptible()))
+ return false;
+
+ for (caps = cap_array; caps->desc; caps++)
+ if (caps->capability == cap &&
+ caps->matches &&
+ caps->matches(caps, SCOPE_LOCAL_CPU))
+ return true;
+ return false;
+}
+
void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
const char *info)
{
@@ -1181,8 +1201,9 @@ verify_local_elf_hwcaps(const struct arm
}
static void
-verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
+verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
{
+ const struct arm64_cpu_capabilities *caps = caps_list;
for (; caps->matches; caps++) {
if (!cpus_have_cap(caps->capability))
continue;
@@ -1190,7 +1211,7 @@ verify_local_cpu_features(const struct a
* If the new CPU misses an advertised feature, we cannot proceed
* further, park the cpu.
*/
- if (!caps->matches(caps, SCOPE_LOCAL_CPU)) {
+ if (!__this_cpu_has_cap(caps_list, caps->capability)) {
pr_crit("CPU%d: missing feature: %s\n",
smp_processor_id(), caps->desc);
cpu_die_early();
@@ -1272,25 +1293,6 @@ static void __init mark_const_caps_ready
static_branch_enable(&arm64_const_caps_ready);
}
-/*
- * Check if the current CPU has a given feature capability.
- * Should be called from non-preemptible context.
- */
-static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
- unsigned int cap)
-{
- const struct arm64_cpu_capabilities *caps;
-
- if (WARN_ON(preemptible()))
- return false;
-
- for (caps = cap_array; caps->desc; caps++)
- if (caps->capability == cap && caps->matches)
- return caps->matches(caps, SCOPE_LOCAL_CPU);
-
- return false;
-}
-
extern const struct arm64_cpu_capabilities arm64_errata[];
bool this_cpu_has_cap(unsigned int cap)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit ea1e3de85e94 upstream.
Allow explicit disabling of the entry trampoline on the kernel command
line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0)
that can be used to toggle the alternative sequences in our entry code and
avoid use of the trampoline altogether if desired. This also allows us to
make use of a static key in arm64_kernel_unmapped_at_el0().
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/cpucaps.h | 3 +-
arch/arm64/include/asm/mmu.h | 3 +-
arch/arm64/kernel/cpufeature.c | 41 +++++++++++++++++++++++++++++++++++++++
arch/arm64/kernel/entry.S | 9 ++++----
4 files changed, 50 insertions(+), 6 deletions(-)
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -41,7 +41,8 @@
#define ARM64_WORKAROUND_CAVIUM_30115 20
#define ARM64_HAS_DCPOP 21
#define ARM64_SVE 22
+#define ARM64_UNMAP_KERNEL_AT_EL0 23
-#define ARM64_NCAPS 23
+#define ARM64_NCAPS 24
#endif /* __ASM_CPUCAPS_H */
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -36,7 +36,8 @@ typedef struct {
static inline bool arm64_kernel_unmapped_at_el0(void)
{
- return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+ return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
+ cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
}
extern void paging_init(void);
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -846,6 +846,40 @@ static bool has_no_fpsimd(const struct a
ID_AA64PFR0_FP_SHIFT) < 0;
}
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
+
+static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+ int __unused)
+{
+ /* Forced on command line? */
+ if (__kpti_forced) {
+ pr_info_once("kernel page table isolation forced %s by command line option\n",
+ __kpti_forced > 0 ? "ON" : "OFF");
+ return __kpti_forced > 0;
+ }
+
+ /* Useful for KASLR robustness */
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+ return true;
+
+ return false;
+}
+
+static int __init parse_kpti(char *str)
+{
+ bool enabled;
+ int ret = strtobool(str, &enabled);
+
+ if (ret)
+ return ret;
+
+ __kpti_forced = enabled ? 1 : -1;
+ return 0;
+}
+__setup("kpti=", parse_kpti);
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
@@ -932,6 +966,13 @@ static const struct arm64_cpu_capabiliti
.def_scope = SCOPE_SYSTEM,
.matches = hyp_offset_low,
},
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ {
+ .capability = ARM64_UNMAP_KERNEL_AT_EL0,
+ .def_scope = SCOPE_SYSTEM,
+ .matches = unmap_kernel_at_el0,
+ },
+#endif
{
/* FP/SIMD is not implemented */
.capability = ARM64_HAS_NO_FPSIMD,
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -74,6 +74,7 @@
.macro kernel_ventry, el, label, regsize = 64
.align 7
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+alternative_if ARM64_UNMAP_KERNEL_AT_EL0
.if \el == 0
.if \regsize == 64
mrs x30, tpidrro_el0
@@ -82,6 +83,7 @@
mov x30, xzr
.endif
.endif
+alternative_else_nop_endif
#endif
sub sp, sp, #S_FRAME_SIZE
@@ -323,10 +325,9 @@ alternative_else_nop_endif
ldr lr, [sp, #S_LR]
add sp, sp, #S_FRAME_SIZE // restore sp
-#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
- eret
-#else
.if \el == 0
+alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
bne 4f
msr far_el1, x30
tramp_alias x30, tramp_exit_native
@@ -334,10 +335,10 @@ alternative_else_nop_endif
4:
tramp_alias x30, tramp_exit_compat
br x30
+#endif
.else
eret
.endif
-#endif
.endm
.macro irq_stack_entry
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 179a56f6f9fb upstream.
For non-KASLR kernels where the KPTI behaviour has not been overridden
on the command line we can use ID_AA64PFR0_EL1.CSV3 to determine whether
or not we should unmap the kernel whilst running at EL0.
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Conflicts:
arch/arm64/kernel/cpufeature.c
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/sysreg.h | 1 +
arch/arm64/kernel/cpufeature.c | 8 +++++++-
2 files changed, 8 insertions(+), 1 deletion(-)
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -437,6 +437,7 @@
#define ID_AA64ISAR1_DPB_SHIFT 0
/* id_aa64pfr0 */
+#define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_SVE_SHIFT 32
#define ID_AA64PFR0_GIC_SHIFT 24
#define ID_AA64PFR0_ASIMD_SHIFT 20
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -145,6 +145,7 @@ static const struct arm64_ftr_bits ftr_i
};
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
@@ -852,6 +853,8 @@ static int __kpti_forced; /* 0: not forc
static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
int __unused)
{
+ u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
/* Forced on command line? */
if (__kpti_forced) {
pr_info_once("kernel page table isolation forced %s by command line option\n",
@@ -863,7 +866,9 @@ static bool unmap_kernel_at_el0(const st
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
return true;
- return false;
+ /* Defer to CPU feature registers */
+ return !cpuid_feature_extract_unsigned_field(pfr0,
+ ID_AA64PFR0_CSV3_SHIFT);
}
static int __init parse_kpti(char *str)
@@ -968,6 +973,7 @@ static const struct arm64_cpu_capabiliti
},
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
{
+ .desc = "Kernel page table isolation (KPTI)",
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
.def_scope = SCOPE_SYSTEM,
.matches = unmap_kernel_at_el0,
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Catalin Marinas <[email protected]>
Commit 6b88a32c7af6 upstream.
With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the
active ASID to decide whether user access was enabled (non-zero ASID)
when the exception was taken. On return from exception, if user access
was previously disabled, it re-instates TTBR0_EL1 from the per-thread
saved value (updated in switch_mm() or efi_set_pgd()).
Commit 7655abb95386 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the
TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e75711
("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the
__uaccess_ttbr0_disable() function and asm macro to first write the
reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an
exception occurs between these two, the exception return code will
re-instate a valid TTBR0_EL1. A similar scenario can happen in
cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID
update in cpu_do_switch_mm().
This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and
disables the interrupts around the TTBR0_EL1 and ASID switching code in
__uaccess_ttbr0_disable(). It also ensures that, when returning from the
EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in
TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.
The accesses to current_thread_info()->ttbr0 are updated to use
READ_ONCE/WRITE_ONCE.
As a safety measure, __uaccess_ttbr0_enable() always masks out any
existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.
Fixes: 27a921e75711 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
Acked-by: Will Deacon <[email protected]>
Reported-by: Ard Biesheuvel <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: James Morse <[email protected]>
Tested-by: James Morse <[email protected]>
Co-developed-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Conflicts:
arch/arm64/include/asm/asm-uaccess.h
arch/arm64/include/asm/uaccess.h
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/asm-uaccess.h | 12 +++++++-----
arch/arm64/include/asm/efi.h | 12 +++++++-----
arch/arm64/include/asm/mmu_context.h | 3 ++-
arch/arm64/include/asm/uaccess.h | 9 ++++++---
arch/arm64/kernel/entry.S | 2 +-
arch/arm64/lib/clear_user.S | 2 +-
arch/arm64/lib/copy_from_user.S | 2 +-
arch/arm64/lib/copy_in_user.S | 2 +-
arch/arm64/lib/copy_to_user.S | 2 +-
arch/arm64/mm/cache.S | 2 +-
arch/arm64/mm/proc.S | 3 +++
arch/arm64/xen/hypercall.S | 2 +-
12 files changed, 32 insertions(+), 21 deletions(-)
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -14,11 +14,11 @@
#ifdef CONFIG_ARM64_SW_TTBR0_PAN
.macro __uaccess_ttbr0_disable, tmp1
mrs \tmp1, ttbr1_el1 // swapper_pg_dir
+ bic \tmp1, \tmp1, #TTBR_ASID_MASK
add \tmp1, \tmp1, #SWAPPER_DIR_SIZE // reserved_ttbr0 at the end of swapper_pg_dir
msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1
isb
sub \tmp1, \tmp1, #SWAPPER_DIR_SIZE
- bic \tmp1, \tmp1, #TTBR_ASID_MASK
msr ttbr1_el1, \tmp1 // set reserved ASID
isb
.endm
@@ -35,9 +35,11 @@
isb
.endm
- .macro uaccess_ttbr0_disable, tmp1
+ .macro uaccess_ttbr0_disable, tmp1, tmp2
alternative_if_not ARM64_HAS_PAN
+ save_and_disable_irq \tmp2 // avoid preemption
__uaccess_ttbr0_disable \tmp1
+ restore_irq \tmp2
alternative_else_nop_endif
.endm
@@ -49,7 +51,7 @@ alternative_if_not ARM64_HAS_PAN
alternative_else_nop_endif
.endm
#else
- .macro uaccess_ttbr0_disable, tmp1
+ .macro uaccess_ttbr0_disable, tmp1, tmp2
.endm
.macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
@@ -59,8 +61,8 @@ alternative_else_nop_endif
/*
* These macros are no-ops when UAO is present.
*/
- .macro uaccess_disable_not_uao, tmp1
- uaccess_ttbr0_disable \tmp1
+ .macro uaccess_disable_not_uao, tmp1, tmp2
+ uaccess_ttbr0_disable \tmp1, \tmp2
alternative_if ARM64_ALT_PAN_NOT_UAO
SET_PSTATE_PAN(1)
alternative_else_nop_endif
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -121,19 +121,21 @@ static inline void efi_set_pgd(struct mm
if (mm != current->active_mm) {
/*
* Update the current thread's saved ttbr0 since it is
- * restored as part of a return from exception. Set
- * the hardware TTBR0_EL1 using cpu_switch_mm()
- * directly to enable potential errata workarounds.
+ * restored as part of a return from exception. Enable
+ * access to the valid TTBR0_EL1 and invoke the errata
+ * workaround directly since there is no return from
+ * exception when invoking the EFI run-time services.
*/
update_saved_ttbr0(current, mm);
- cpu_switch_mm(mm->pgd, mm);
+ uaccess_ttbr0_enable();
+ post_ttbr_update_workaround();
} else {
/*
* Defer the switch to the current thread's TTBR0_EL1
* until uaccess_enable(). Restore the current
* thread's saved ttbr0 corresponding to its active_mm
*/
- cpu_set_reserved_ttbr0();
+ uaccess_ttbr0_disable();
update_saved_ttbr0(current, current->active_mm);
}
}
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -175,7 +175,7 @@ static inline void update_saved_ttbr0(st
else
ttbr = virt_to_phys(mm->pgd) | ASID(mm) << 48;
- task_thread_info(tsk)->ttbr0 = ttbr;
+ WRITE_ONCE(task_thread_info(tsk)->ttbr0, ttbr);
}
#else
static inline void update_saved_ttbr0(struct task_struct *tsk,
@@ -230,6 +230,7 @@ switch_mm(struct mm_struct *prev, struct
#define activate_mm(prev,next) switch_mm(prev, next, current)
void verify_cpu_asid_bits(void);
+void post_ttbr_update_workaround(void);
#endif /* !__ASSEMBLY__ */
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -105,16 +105,18 @@ static inline void set_fs(mm_segment_t f
#ifdef CONFIG_ARM64_SW_TTBR0_PAN
static inline void __uaccess_ttbr0_disable(void)
{
- unsigned long ttbr;
+ unsigned long flags, ttbr;
+ local_irq_save(flags);
ttbr = read_sysreg(ttbr1_el1);
+ ttbr &= ~TTBR_ASID_MASK;
/* reserved_ttbr0 placed at the end of swapper_pg_dir */
write_sysreg(ttbr + SWAPPER_DIR_SIZE, ttbr0_el1);
isb();
/* Set reserved ASID */
- ttbr &= ~TTBR_ASID_MASK;
write_sysreg(ttbr, ttbr1_el1);
isb();
+ local_irq_restore(flags);
}
static inline void __uaccess_ttbr0_enable(void)
@@ -127,10 +129,11 @@ static inline void __uaccess_ttbr0_enabl
* roll-over and an update of 'ttbr0'.
*/
local_irq_save(flags);
- ttbr0 = current_thread_info()->ttbr0;
+ ttbr0 = READ_ONCE(current_thread_info()->ttbr0);
/* Restore active ASID */
ttbr1 = read_sysreg(ttbr1_el1);
+ ttbr1 &= ~TTBR_ASID_MASK; /* safety measure */
ttbr1 |= ttbr0 & TTBR_ASID_MASK;
write_sysreg(ttbr1, ttbr1_el1);
isb();
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -204,7 +204,7 @@ alternative_if ARM64_HAS_PAN
alternative_else_nop_endif
.if \el != 0
- mrs x21, ttbr1_el1
+ mrs x21, ttbr0_el1
tst x21, #TTBR_ASID_MASK // Check for the reserved ASID
orr x23, x23, #PSR_PAN_BIT // Set the emulated PAN in the saved SPSR
b.eq 1f // TTBR0 access already disabled
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -50,7 +50,7 @@ uao_user_alternative 9f, strh, sttrh, wz
b.mi 5f
uao_user_alternative 9f, strb, sttrb, wzr, x0, 0
5: mov x0, #0
- uaccess_disable_not_uao x2
+ uaccess_disable_not_uao x2, x3
ret
ENDPROC(__clear_user)
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -67,7 +67,7 @@ ENTRY(__arch_copy_from_user)
uaccess_enable_not_uao x3, x4, x5
add end, x0, x2
#include "copy_template.S"
- uaccess_disable_not_uao x3
+ uaccess_disable_not_uao x3, x4
mov x0, #0 // Nothing to copy
ret
ENDPROC(__arch_copy_from_user)
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -68,7 +68,7 @@ ENTRY(raw_copy_in_user)
uaccess_enable_not_uao x3, x4, x5
add end, x0, x2
#include "copy_template.S"
- uaccess_disable_not_uao x3
+ uaccess_disable_not_uao x3, x4
mov x0, #0
ret
ENDPROC(raw_copy_in_user)
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -66,7 +66,7 @@ ENTRY(__arch_copy_to_user)
uaccess_enable_not_uao x3, x4, x5
add end, x0, x2
#include "copy_template.S"
- uaccess_disable_not_uao x3
+ uaccess_disable_not_uao x3, x4
mov x0, #0
ret
ENDPROC(__arch_copy_to_user)
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -72,7 +72,7 @@ USER(9f, ic ivau, x4 ) // invalidate I
isb
mov x0, #0
1:
- uaccess_ttbr0_disable x1
+ uaccess_ttbr0_disable x1, x2
ret
9:
mov x0, #-EFAULT
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -140,6 +140,9 @@ ENDPROC(cpu_do_resume)
ENTRY(cpu_do_switch_mm)
mrs x2, ttbr1_el1
mmid x1, x1 // get mm->context.id
+#ifdef CONFIG_ARM64_SW_TTBR0_PAN
+ bfi x0, x1, #48, #16 // set the ASID field in TTBR0
+#endif
bfi x2, x1, #48, #16 // set the ASID
msr ttbr1_el1, x2 // in TTBR1 (since TCR.A1 is set)
isb
--- a/arch/arm64/xen/hypercall.S
+++ b/arch/arm64/xen/hypercall.S
@@ -107,6 +107,6 @@ ENTRY(privcmd_call)
/*
* Disable userspace access from kernel once the hyp call completed.
*/
- uaccess_ttbr0_disable x6
+ uaccess_ttbr0_disable x6, x7
ret
ENDPROC(privcmd_call);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lionel Landwerlin <[email protected]>
commit b5a756a722286af9702d565501e1f690d075d16b upstream.
This reverts commit 5b54eddd3920e9f6f1a6d972454baf350cbae77e.
Conflicts:
drivers/gpu/drm/i915/i915_pci.c
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104805
Signed-off-by: Lionel Landwerlin <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Fixes: 5b54eddd3920 ("drm/i915: mark all device info struct with __initconst")
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
(cherry picked from commit 5db47e37b38755c5e26e6b8fbc1a32ce73495940)
Signed-off-by: Rodrigo Vivi <[email protected]>
Cc: Ozkan Sezer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/gpu/drm/i915/i915_pci.c | 94 ++++++++++++++++++++--------------------
1 file changed, 47 insertions(+), 47 deletions(-)
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -74,19 +74,19 @@
GEN_DEFAULT_PAGE_SIZES, \
CURSOR_OFFSETS
-static const struct intel_device_info intel_i830_info __initconst = {
+static const struct intel_device_info intel_i830_info = {
GEN2_FEATURES,
.platform = INTEL_I830,
.is_mobile = 1, .cursor_needs_physical = 1,
.num_pipes = 2, /* legal, last one wins */
};
-static const struct intel_device_info intel_i845g_info __initconst = {
+static const struct intel_device_info intel_i845g_info = {
GEN2_FEATURES,
.platform = INTEL_I845G,
};
-static const struct intel_device_info intel_i85x_info __initconst = {
+static const struct intel_device_info intel_i85x_info = {
GEN2_FEATURES,
.platform = INTEL_I85X, .is_mobile = 1,
.num_pipes = 2, /* legal, last one wins */
@@ -94,7 +94,7 @@ static const struct intel_device_info in
.has_fbc = 1,
};
-static const struct intel_device_info intel_i865g_info __initconst = {
+static const struct intel_device_info intel_i865g_info = {
GEN2_FEATURES,
.platform = INTEL_I865G,
};
@@ -108,7 +108,7 @@ static const struct intel_device_info in
GEN_DEFAULT_PAGE_SIZES, \
CURSOR_OFFSETS
-static const struct intel_device_info intel_i915g_info __initconst = {
+static const struct intel_device_info intel_i915g_info = {
GEN3_FEATURES,
.platform = INTEL_I915G, .cursor_needs_physical = 1,
.has_overlay = 1, .overlay_needs_physical = 1,
@@ -116,7 +116,7 @@ static const struct intel_device_info in
.unfenced_needs_alignment = 1,
};
-static const struct intel_device_info intel_i915gm_info __initconst = {
+static const struct intel_device_info intel_i915gm_info = {
GEN3_FEATURES,
.platform = INTEL_I915GM,
.is_mobile = 1,
@@ -128,7 +128,7 @@ static const struct intel_device_info in
.unfenced_needs_alignment = 1,
};
-static const struct intel_device_info intel_i945g_info __initconst = {
+static const struct intel_device_info intel_i945g_info = {
GEN3_FEATURES,
.platform = INTEL_I945G,
.has_hotplug = 1, .cursor_needs_physical = 1,
@@ -137,7 +137,7 @@ static const struct intel_device_info in
.unfenced_needs_alignment = 1,
};
-static const struct intel_device_info intel_i945gm_info __initconst = {
+static const struct intel_device_info intel_i945gm_info = {
GEN3_FEATURES,
.platform = INTEL_I945GM, .is_mobile = 1,
.has_hotplug = 1, .cursor_needs_physical = 1,
@@ -148,14 +148,14 @@ static const struct intel_device_info in
.unfenced_needs_alignment = 1,
};
-static const struct intel_device_info intel_g33_info __initconst = {
+static const struct intel_device_info intel_g33_info = {
GEN3_FEATURES,
.platform = INTEL_G33,
.has_hotplug = 1,
.has_overlay = 1,
};
-static const struct intel_device_info intel_pineview_info __initconst = {
+static const struct intel_device_info intel_pineview_info = {
GEN3_FEATURES,
.platform = INTEL_PINEVIEW, .is_mobile = 1,
.has_hotplug = 1,
@@ -172,7 +172,7 @@ static const struct intel_device_info in
GEN_DEFAULT_PAGE_SIZES, \
CURSOR_OFFSETS
-static const struct intel_device_info intel_i965g_info __initconst = {
+static const struct intel_device_info intel_i965g_info = {
GEN4_FEATURES,
.platform = INTEL_I965G,
.has_overlay = 1,
@@ -180,7 +180,7 @@ static const struct intel_device_info in
.has_snoop = false,
};
-static const struct intel_device_info intel_i965gm_info __initconst = {
+static const struct intel_device_info intel_i965gm_info = {
GEN4_FEATURES,
.platform = INTEL_I965GM,
.is_mobile = 1, .has_fbc = 1,
@@ -190,13 +190,13 @@ static const struct intel_device_info in
.has_snoop = false,
};
-static const struct intel_device_info intel_g45_info __initconst = {
+static const struct intel_device_info intel_g45_info = {
GEN4_FEATURES,
.platform = INTEL_G45,
.ring_mask = RENDER_RING | BSD_RING,
};
-static const struct intel_device_info intel_gm45_info __initconst = {
+static const struct intel_device_info intel_gm45_info = {
GEN4_FEATURES,
.platform = INTEL_GM45,
.is_mobile = 1, .has_fbc = 1,
@@ -213,12 +213,12 @@ static const struct intel_device_info in
GEN_DEFAULT_PAGE_SIZES, \
CURSOR_OFFSETS
-static const struct intel_device_info intel_ironlake_d_info __initconst = {
+static const struct intel_device_info intel_ironlake_d_info = {
GEN5_FEATURES,
.platform = INTEL_IRONLAKE,
};
-static const struct intel_device_info intel_ironlake_m_info __initconst = {
+static const struct intel_device_info intel_ironlake_m_info = {
GEN5_FEATURES,
.platform = INTEL_IRONLAKE,
.is_mobile = 1, .has_fbc = 1,
@@ -241,12 +241,12 @@ static const struct intel_device_info in
GEN6_FEATURES, \
.platform = INTEL_SANDYBRIDGE
-static const struct intel_device_info intel_sandybridge_d_gt1_info __initconst = {
+static const struct intel_device_info intel_sandybridge_d_gt1_info = {
SNB_D_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_sandybridge_d_gt2_info __initconst = {
+static const struct intel_device_info intel_sandybridge_d_gt2_info = {
SNB_D_PLATFORM,
.gt = 2,
};
@@ -257,12 +257,12 @@ static const struct intel_device_info in
.is_mobile = 1
-static const struct intel_device_info intel_sandybridge_m_gt1_info __initconst = {
+static const struct intel_device_info intel_sandybridge_m_gt1_info = {
SNB_M_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_sandybridge_m_gt2_info __initconst = {
+static const struct intel_device_info intel_sandybridge_m_gt2_info = {
SNB_M_PLATFORM,
.gt = 2,
};
@@ -286,12 +286,12 @@ static const struct intel_device_info in
.platform = INTEL_IVYBRIDGE, \
.has_l3_dpf = 1
-static const struct intel_device_info intel_ivybridge_d_gt1_info __initconst = {
+static const struct intel_device_info intel_ivybridge_d_gt1_info = {
IVB_D_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_ivybridge_d_gt2_info __initconst = {
+static const struct intel_device_info intel_ivybridge_d_gt2_info = {
IVB_D_PLATFORM,
.gt = 2,
};
@@ -302,17 +302,17 @@ static const struct intel_device_info in
.is_mobile = 1, \
.has_l3_dpf = 1
-static const struct intel_device_info intel_ivybridge_m_gt1_info __initconst = {
+static const struct intel_device_info intel_ivybridge_m_gt1_info = {
IVB_M_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_ivybridge_m_gt2_info __initconst = {
+static const struct intel_device_info intel_ivybridge_m_gt2_info = {
IVB_M_PLATFORM,
.gt = 2,
};
-static const struct intel_device_info intel_ivybridge_q_info __initconst = {
+static const struct intel_device_info intel_ivybridge_q_info = {
GEN7_FEATURES,
.platform = INTEL_IVYBRIDGE,
.gt = 2,
@@ -320,7 +320,7 @@ static const struct intel_device_info in
.has_l3_dpf = 1,
};
-static const struct intel_device_info intel_valleyview_info __initconst = {
+static const struct intel_device_info intel_valleyview_info = {
.platform = INTEL_VALLEYVIEW,
.gen = 7,
.is_lp = 1,
@@ -356,17 +356,17 @@ static const struct intel_device_info in
.platform = INTEL_HASWELL, \
.has_l3_dpf = 1
-static const struct intel_device_info intel_haswell_gt1_info __initconst = {
+static const struct intel_device_info intel_haswell_gt1_info = {
HSW_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_haswell_gt2_info __initconst = {
+static const struct intel_device_info intel_haswell_gt2_info = {
HSW_PLATFORM,
.gt = 2,
};
-static const struct intel_device_info intel_haswell_gt3_info __initconst = {
+static const struct intel_device_info intel_haswell_gt3_info = {
HSW_PLATFORM,
.gt = 3,
};
@@ -386,17 +386,17 @@ static const struct intel_device_info in
.gen = 8, \
.platform = INTEL_BROADWELL
-static const struct intel_device_info intel_broadwell_gt1_info __initconst = {
+static const struct intel_device_info intel_broadwell_gt1_info = {
BDW_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_broadwell_gt2_info __initconst = {
+static const struct intel_device_info intel_broadwell_gt2_info = {
BDW_PLATFORM,
.gt = 2,
};
-static const struct intel_device_info intel_broadwell_rsvd_info __initconst = {
+static const struct intel_device_info intel_broadwell_rsvd_info = {
BDW_PLATFORM,
.gt = 3,
/* According to the device ID those devices are GT3, they were
@@ -404,13 +404,13 @@ static const struct intel_device_info in
*/
};
-static const struct intel_device_info intel_broadwell_gt3_info __initconst = {
+static const struct intel_device_info intel_broadwell_gt3_info = {
BDW_PLATFORM,
.gt = 3,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
};
-static const struct intel_device_info intel_cherryview_info __initconst = {
+static const struct intel_device_info intel_cherryview_info = {
.gen = 8, .num_pipes = 3,
.has_hotplug = 1,
.is_lp = 1,
@@ -453,12 +453,12 @@ static const struct intel_device_info in
.gen = 9, \
.platform = INTEL_SKYLAKE
-static const struct intel_device_info intel_skylake_gt1_info __initconst = {
+static const struct intel_device_info intel_skylake_gt1_info = {
SKL_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_skylake_gt2_info __initconst = {
+static const struct intel_device_info intel_skylake_gt2_info = {
SKL_PLATFORM,
.gt = 2,
};
@@ -468,12 +468,12 @@ static const struct intel_device_info in
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING
-static const struct intel_device_info intel_skylake_gt3_info __initconst = {
+static const struct intel_device_info intel_skylake_gt3_info = {
SKL_GT3_PLUS_PLATFORM,
.gt = 3,
};
-static const struct intel_device_info intel_skylake_gt4_info __initconst = {
+static const struct intel_device_info intel_skylake_gt4_info = {
SKL_GT3_PLUS_PLATFORM,
.gt = 4,
};
@@ -509,13 +509,13 @@ static const struct intel_device_info in
IVB_CURSOR_OFFSETS, \
BDW_COLORS
-static const struct intel_device_info intel_broxton_info __initconst = {
+static const struct intel_device_info intel_broxton_info = {
GEN9_LP_FEATURES,
.platform = INTEL_BROXTON,
.ddb_size = 512,
};
-static const struct intel_device_info intel_geminilake_info __initconst = {
+static const struct intel_device_info intel_geminilake_info = {
GEN9_LP_FEATURES,
.platform = INTEL_GEMINILAKE,
.ddb_size = 1024,
@@ -527,17 +527,17 @@ static const struct intel_device_info in
.gen = 9, \
.platform = INTEL_KABYLAKE
-static const struct intel_device_info intel_kabylake_gt1_info __initconst = {
+static const struct intel_device_info intel_kabylake_gt1_info = {
KBL_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_kabylake_gt2_info __initconst = {
+static const struct intel_device_info intel_kabylake_gt2_info = {
KBL_PLATFORM,
.gt = 2,
};
-static const struct intel_device_info intel_kabylake_gt3_info __initconst = {
+static const struct intel_device_info intel_kabylake_gt3_info = {
KBL_PLATFORM,
.gt = 3,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
@@ -548,17 +548,17 @@ static const struct intel_device_info in
.gen = 9, \
.platform = INTEL_COFFEELAKE
-static const struct intel_device_info intel_coffeelake_gt1_info __initconst = {
+static const struct intel_device_info intel_coffeelake_gt1_info = {
CFL_PLATFORM,
.gt = 1,
};
-static const struct intel_device_info intel_coffeelake_gt2_info __initconst = {
+static const struct intel_device_info intel_coffeelake_gt2_info = {
CFL_PLATFORM,
.gt = 2,
};
-static const struct intel_device_info intel_coffeelake_gt3_info __initconst = {
+static const struct intel_device_info intel_coffeelake_gt3_info = {
CFL_PLATFORM,
.gt = 3,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
@@ -569,7 +569,7 @@ static const struct intel_device_info in
.ddb_size = 1024, \
GLK_COLORS
-static const struct intel_device_info intel_cannonlake_gt2_info __initconst = {
+static const struct intel_device_info intel_cannonlake_gt2_info = {
GEN10_FEATURES,
.is_alpha_support = 1,
.platform = INTEL_CANNONLAKE,
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew-sh Cheng <[email protected]>
commit 6066998cbd2b1012a8d5bc9a2957cfd0ad53150e upstream.
MediaTek projects will use mediatek-cpufreq.c as their cpufreq driver
instead of cpufreq-dt.c.
Add the MediaTek-related compatibles to the cpufreq-dt blacklist.
Signed-off-by: Andrew-sh Cheng <[email protected]>
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Sean Wang <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/cpufreq/cpufreq-dt-platdev.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -108,6 +108,14 @@ static const struct of_device_id blackli
{ .compatible = "marvell,armadaxp", },
+ { .compatible = "mediatek,mt2701", },
+ { .compatible = "mediatek,mt2712", },
+ { .compatible = "mediatek,mt7622", },
+ { .compatible = "mediatek,mt7623", },
+ { .compatible = "mediatek,mt817x", },
+ { .compatible = "mediatek,mt8173", },
+ { .compatible = "mediatek,mt8176", },
+
{ .compatible = "nvidia,tegra124", },
{ .compatible = "st,stih407", },
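For context on how the new entries take effect: cpufreq-dt-platdev matches the
root node's compatible string against this blacklist at boot and, on a match,
skips creating the generic "cpufreq-dt" platform device so the SoC-specific
driver (here mediatek-cpufreq) can bind instead. The sketch below is
illustrative only; the real cpufreq_dt_platdev_init() also consults a whitelist
and the CPU nodes, so treat the exact flow as an assumption.
#include <linux/err.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Illustrative sketch, not the verbatim 4.15 function. */
static int __init cpufreq_dt_platdev_init(void)
{
	struct device_node *np = of_find_node_by_path("/");
	bool blacklisted;

	if (!np)
		return -ENODEV;

	blacklisted = of_match_node(blacklist, np) != NULL;
	of_node_put(np);
	if (blacklisted)
		return -ENODEV;	/* e.g. mediatek,mt8173: SoC driver owns cpufreq */

	return PTR_ERR_OR_ZERO(platform_device_register_simple("cpufreq-dt",
							       -1, NULL, 0));
}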
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 30d88c0e3ace upstream.
It is possible to take an IRQ from EL0 following a branch to a kernel
address in such a way that the IRQ is prioritised over the instruction
abort. Whilst an attacker would need to get the stars to align here,
it might be sufficient with enough calibration, so perform BP hardening
in the rare case that we see a kernel address in the ELR when handling
an IRQ from EL0.
Reported-by: Dan Hettena <[email protected]>
Reviewed-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 5 +++++
arch/arm64/mm/fault.c | 6 ++++++
2 files changed, 11 insertions(+)
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -828,6 +828,11 @@ el0_irq_naked:
#endif
ct_user_exit
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+ tbz x22, #55, 1f
+ bl do_el0_irq_bp_hardening
+1:
+#endif
irq_handler
#ifdef CONFIG_TRACE_IRQFLAGS
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -707,6 +707,12 @@ asmlinkage void __exception do_mem_abort
arm64_notify_die("", regs, &info, esr);
}
+asmlinkage void __exception do_el0_irq_bp_hardening(void)
+{
+ /* PC has already been checked in entry.S */
+ arm64_apply_bp_hardening();
+}
+
asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
unsigned int esr,
struct pt_regs *regs)
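The new entry.S hunk only drops into C when the saved exception return address
looks like a kernel virtual address: x22 holds ELR_EL1 as saved by kernel_entry,
and bit 55 is set for TTBR1 (kernel) addresses even with address tagging in use,
so a single tbz suffices. A hedged C rendering of that test (illustrative, not
actual kernel code; arm64_apply_bp_hardening() is the helper the new
do_el0_irq_bp_hardening() calls):
/* Equivalent logic of "tbz x22, #55, 1f; bl do_el0_irq_bp_hardening; 1:" */
static inline void el0_irq_check_branch_target(unsigned long elr)
{
	if (elr & (1UL << 55))		/* EL0 branched to a kernel VA */
		arm64_apply_bp_hardening();
}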
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 95e3de3590e3 upstream.
We will soon need to invoke a CPU-specific function pointer after changing
page tables, so move post_ttbr_update_workaround out into C code to make
this possible.
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Conflicts:
arch/arm64/include/asm/assembler.h
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 13 -------------
arch/arm64/kernel/entry.S | 2 +-
arch/arm64/mm/context.c | 9 +++++++++
arch/arm64/mm/proc.S | 3 +--
4 files changed, 11 insertions(+), 16 deletions(-)
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -494,19 +494,6 @@ alternative_endif
mrs \rd, sp_el0
.endm
-/*
- * Errata workaround post TTBRx_EL1 update.
- */
- .macro post_ttbr_update_workaround
-#ifdef CONFIG_CAVIUM_ERRATUM_27456
-alternative_if ARM64_WORKAROUND_CAVIUM_27456
- ic iallu
- dsb nsh
- isb
-alternative_else_nop_endif
-#endif
- .endm
-
.macro pte_to_phys, phys, pte
and \phys, \pte, #(((1 << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
.endm
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -277,7 +277,7 @@ alternative_else_nop_endif
* Cavium erratum 27456 (broadcast TLBI instructions may cause I-cache
* corruption).
*/
- post_ttbr_update_workaround
+ bl post_ttbr_update_workaround
.endif
1:
.if \el != 0
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -239,6 +239,15 @@ switch_mm_fastpath:
cpu_switch_mm(mm->pgd, mm);
}
+/* Errata workaround post TTBRx_EL1 update. */
+asmlinkage void post_ttbr_update_workaround(void)
+{
+ asm(ALTERNATIVE("nop; nop; nop",
+ "ic iallu; dsb nsh; isb",
+ ARM64_WORKAROUND_CAVIUM_27456,
+ CONFIG_CAVIUM_ERRATUM_27456));
+}
+
static int asids_init(void)
{
asid_bits = get_cpu_asid_bits();
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -148,8 +148,7 @@ ENTRY(cpu_do_switch_mm)
isb
msr ttbr0_el1, x0 // now update TTBR0
isb
- post_ttbr_update_workaround
- ret
+ b post_ttbr_update_workaround // Back to C code...
ENDPROC(cpu_do_switch_mm)
.pushsection ".idmap.text", "awx"
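The "CPU-specific function pointer" the changelog alludes to is the per-CPU
branch-predictor hardening callback added later in this series; once
post_ttbr_update_workaround() lives in C, such an indirect per-CPU call is easy
to make, which it is not from an assembler macro. A rough, assumption-labelled
sketch of that pattern follows (the real structure and call site belong to the
later BP-hardening patches):
#include <linux/percpu.h>

/* Names and layout are illustrative, not the real bp_hardening_data. */
struct bp_hardening_data {
	void (*fn)(void);	/* e.g. SMC into firmware to invalidate the BP */
};

static DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

static inline void arm64_apply_bp_hardening(void)
{
	struct bp_hardening_data *d = this_cpu_ptr(&bp_hardening_data);

	if (d->fn)
		d->fn();
}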
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 7a4a0c1555b8 upstream.
When running with the kernel unmapped whilst at EL0, the virtually-addressed
SPE buffer is also unmapped, which can lead to buffer faults if userspace
profiling is enabled and potentially also when writing back kernel samples
unless an expensive drain operation is performed on exception return.
For now, fail the SPE driver probe when arm64_kernel_unmapped_at_el0().
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/perf/arm_spe_pmu.c | 9 +++++++++
1 file changed, 9 insertions(+)
--- a/drivers/perf/arm_spe_pmu.c
+++ b/drivers/perf/arm_spe_pmu.c
@@ -1164,6 +1164,15 @@ static int arm_spe_pmu_device_dt_probe(s
struct arm_spe_pmu *spe_pmu;
struct device *dev = &pdev->dev;
+ /*
+ * If kernelspace is unmapped when running at EL0, then the SPE
+ * buffer will fault and prematurely terminate the AUX session.
+ */
+ if (arm64_kernel_unmapped_at_el0()) {
+ dev_warn_once(dev, "profiling buffer inaccessible. Try passing \"kpti=off\" on the kernel command line\n");
+ return -EPERM;
+ }
+
spe_pmu = devm_kzalloc(dev, sizeof(*spe_pmu), GFP_KERNEL);
if (!spe_pmu) {
dev_err(dev, "failed to allocate spe_pmu\n");
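The probe gate is cheap and deterministic because
arm64_kernel_unmapped_at_el0() boils down to a Kconfig check plus a
CPU-capability test; booting with kpti=off clears the capability and lets the
driver probe again. Paraphrased from arch/arm64/include/asm/mmu.h (not a
verbatim copy):
static inline bool arm64_kernel_unmapped_at_el0(void)
{
	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
}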
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 8ed5a59dcb47a6f76034ee760b36e089f3e82529 upstream.
The struct v4l2_plane32 handling should set m.userptr as well. The same
is done for v4l2_buffer32, and v4l2-compliance tests for this.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 47 +++++++++++++++-----------
1 file changed, 28 insertions(+), 19 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -310,19 +310,24 @@ static int get_v4l2_plane32(struct v4l2_
sizeof(up->data_offset)))
return -EFAULT;
- if (memory == V4L2_MEMORY_USERPTR) {
+ switch (memory) {
+ case V4L2_MEMORY_MMAP:
+ case V4L2_MEMORY_OVERLAY:
+ if (copy_in_user(&up->m.mem_offset, &up32->m.mem_offset,
+ sizeof(up32->m.mem_offset)))
+ return -EFAULT;
+ break;
+ case V4L2_MEMORY_USERPTR:
if (get_user(p, &up32->m.userptr))
return -EFAULT;
up_pln = compat_ptr(p);
if (put_user((unsigned long)up_pln, &up->m.userptr))
return -EFAULT;
- } else if (memory == V4L2_MEMORY_DMABUF) {
+ break;
+ case V4L2_MEMORY_DMABUF:
if (copy_in_user(&up->m.fd, &up32->m.fd, sizeof(up32->m.fd)))
return -EFAULT;
- } else {
- if (copy_in_user(&up->m.mem_offset, &up32->m.mem_offset,
- sizeof(up32->m.mem_offset)))
- return -EFAULT;
+ break;
}
return 0;
@@ -331,22 +336,32 @@ static int get_v4l2_plane32(struct v4l2_
static int put_v4l2_plane32(struct v4l2_plane __user *up, struct v4l2_plane32 __user *up32,
enum v4l2_memory memory)
{
+ unsigned long p;
+
if (copy_in_user(up32, up, 2 * sizeof(__u32)) ||
copy_in_user(&up32->data_offset, &up->data_offset,
sizeof(up->data_offset)))
return -EFAULT;
- /* For MMAP, driver might've set up the offset, so copy it back.
- * USERPTR stays the same (was userspace-provided), so no copying. */
- if (memory == V4L2_MEMORY_MMAP)
+ switch (memory) {
+ case V4L2_MEMORY_MMAP:
+ case V4L2_MEMORY_OVERLAY:
if (copy_in_user(&up32->m.mem_offset, &up->m.mem_offset,
sizeof(up->m.mem_offset)))
return -EFAULT;
- /* For DMABUF, driver might've set up the fd, so copy it back. */
- if (memory == V4L2_MEMORY_DMABUF)
+ break;
+ case V4L2_MEMORY_USERPTR:
+ if (get_user(p, &up->m.userptr) ||
+ put_user((compat_ulong_t)ptr_to_compat((__force void *)p),
+ &up32->m.userptr))
+ return -EFAULT;
+ break;
+ case V4L2_MEMORY_DMABUF:
if (copy_in_user(&up32->m.fd, &up->m.fd,
sizeof(up->m.fd)))
return -EFAULT;
+ break;
+ }
return 0;
}
@@ -408,6 +423,7 @@ static int get_v4l2_buffer32(struct v4l2
} else {
switch (kp->memory) {
case V4L2_MEMORY_MMAP:
+ case V4L2_MEMORY_OVERLAY:
if (get_user(kp->m.offset, &up->m.offset))
return -EFAULT;
break;
@@ -421,10 +437,6 @@ static int get_v4l2_buffer32(struct v4l2
kp->m.userptr = (unsigned long)compat_ptr(tmp);
}
break;
- case V4L2_MEMORY_OVERLAY:
- if (get_user(kp->m.offset, &up->m.offset))
- return -EFAULT;
- break;
case V4L2_MEMORY_DMABUF:
if (get_user(kp->m.fd, &up->m.fd))
return -EFAULT;
@@ -481,6 +493,7 @@ static int put_v4l2_buffer32(struct v4l2
} else {
switch (kp->memory) {
case V4L2_MEMORY_MMAP:
+ case V4L2_MEMORY_OVERLAY:
if (put_user(kp->m.offset, &up->m.offset))
return -EFAULT;
break;
@@ -488,10 +501,6 @@ static int put_v4l2_buffer32(struct v4l2
if (put_user(kp->m.userptr, &up->m.userptr))
return -EFAULT;
break;
- case V4L2_MEMORY_OVERLAY:
- if (put_user(kp->m.offset, &up->m.offset))
- return -EFAULT;
- break;
case V4L2_MEMORY_DMABUF:
if (put_user(kp->m.fd, &up->m.fd))
return -EFAULT;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Clay McClure <[email protected]>
commit a51a0c8d213594bc094cb8e54aad0cb6d7f7b9a6 upstream.
Similar to commit 714fb87e8bc0 ("ubi: Fix race condition between ubi
device creation and udev"), we should make the volume active before
registering it.
Signed-off-by: Clay McClure <[email protected]>
Signed-off-by: Richard Weinberger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/mtd/ubi/vmt.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
--- a/drivers/mtd/ubi/vmt.c
+++ b/drivers/mtd/ubi/vmt.c
@@ -270,6 +270,12 @@ int ubi_create_volume(struct ubi_device
vol->last_eb_bytes = vol->usable_leb_size;
}
+ /* Make volume "available" before it becomes accessible via sysfs */
+ spin_lock(&ubi->volumes_lock);
+ ubi->volumes[vol_id] = vol;
+ ubi->vol_count += 1;
+ spin_unlock(&ubi->volumes_lock);
+
/* Register character device for the volume */
cdev_init(&vol->cdev, &ubi_vol_cdev_operations);
vol->cdev.owner = THIS_MODULE;
@@ -298,11 +304,6 @@ int ubi_create_volume(struct ubi_device
if (err)
goto out_sysfs;
- spin_lock(&ubi->volumes_lock);
- ubi->volumes[vol_id] = vol;
- ubi->vol_count += 1;
- spin_unlock(&ubi->volumes_lock);
-
ubi_volume_notify(ubi, vol, UBI_VOLUME_ADDED);
self_check_volumes(ubi);
return err;
@@ -315,6 +316,10 @@ out_sysfs:
*/
cdev_device_del(&vol->cdev, &vol->dev);
out_mapping:
+ spin_lock(&ubi->volumes_lock);
+ ubi->volumes[vol_id] = NULL;
+ ubi->vol_count -= 1;
+ spin_unlock(&ubi->volumes_lock);
ubi_eba_destroy_table(eba_tbl);
out_acc:
spin_lock(&ubi->volumes_lock);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit a16e772e664b9a261424107784804cffc8894977 upstream.
Since Poly1305 requires a nonce per invocation, the Linux kernel
implementations of Poly1305 don't use the crypto API's keying mechanism
and instead expect the key and nonce as the first 32 bytes of the data.
But ->setkey() is still defined as a stub returning an error code. This
prevents Poly1305 from being used through AF_ALG and will also break it
completely once we start enforcing that all crypto API users (not just
AF_ALG) call ->setkey() if present.
Fix it by removing crypto_poly1305_setkey(), leaving ->setkey as NULL.
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/crypto/poly1305_glue.c | 1 -
crypto/poly1305_generic.c | 17 +++++------------
include/crypto/poly1305.h | 2 --
3 files changed, 5 insertions(+), 15 deletions(-)
--- a/arch/x86/crypto/poly1305_glue.c
+++ b/arch/x86/crypto/poly1305_glue.c
@@ -164,7 +164,6 @@ static struct shash_alg alg = {
.init = poly1305_simd_init,
.update = poly1305_simd_update,
.final = crypto_poly1305_final,
- .setkey = crypto_poly1305_setkey,
.descsize = sizeof(struct poly1305_simd_desc_ctx),
.base = {
.cra_name = "poly1305",
--- a/crypto/poly1305_generic.c
+++ b/crypto/poly1305_generic.c
@@ -47,17 +47,6 @@ int crypto_poly1305_init(struct shash_de
}
EXPORT_SYMBOL_GPL(crypto_poly1305_init);
-int crypto_poly1305_setkey(struct crypto_shash *tfm,
- const u8 *key, unsigned int keylen)
-{
- /* Poly1305 requires a unique key for each tag, which implies that
- * we can't set it on the tfm that gets accessed by multiple users
- * simultaneously. Instead we expect the key as the first 32 bytes in
- * the update() call. */
- return -ENOTSUPP;
-}
-EXPORT_SYMBOL_GPL(crypto_poly1305_setkey);
-
static void poly1305_setrkey(struct poly1305_desc_ctx *dctx, const u8 *key)
{
/* r &= 0xffffffc0ffffffc0ffffffc0fffffff */
@@ -76,6 +65,11 @@ static void poly1305_setskey(struct poly
dctx->s[3] = get_unaligned_le32(key + 12);
}
+/*
+ * Poly1305 requires a unique key for each tag, which implies that we can't set
+ * it on the tfm that gets accessed by multiple users simultaneously. Instead we
+ * expect the key as the first 32 bytes in the update() call.
+ */
unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx,
const u8 *src, unsigned int srclen)
{
@@ -281,7 +275,6 @@ static struct shash_alg poly1305_alg = {
.init = crypto_poly1305_init,
.update = crypto_poly1305_update,
.final = crypto_poly1305_final,
- .setkey = crypto_poly1305_setkey,
.descsize = sizeof(struct poly1305_desc_ctx),
.base = {
.cra_name = "poly1305",
--- a/include/crypto/poly1305.h
+++ b/include/crypto/poly1305.h
@@ -31,8 +31,6 @@ struct poly1305_desc_ctx {
};
int crypto_poly1305_init(struct shash_desc *desc);
-int crypto_poly1305_setkey(struct crypto_shash *tfm,
- const u8 *key, unsigned int keylen);
unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx,
const u8 *src, unsigned int srclen);
int crypto_poly1305_update(struct shash_desc *desc,
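With ->setkey gone, an in-kernel caller simply feeds the 32 key/nonce bytes as
the first chunk of update() data. A minimal sketch, assuming a context where
GFP_KERNEL allocation and the generic "poly1305" shash are available:
#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/types.h>

/* Minimal sketch: the 32-byte key||nonce block is just the first data fed
 * to update(); there is no setkey() step any more. */
static int poly1305_one_shot(const u8 key_nonce[32],
			     const u8 *msg, unsigned int len, u8 mac[16])
{
	struct crypto_shash *tfm;
	int err;

	tfm = crypto_alloc_shash("poly1305", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	{
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		desc->flags = 0;

		err = crypto_shash_init(desc) ?:
		      crypto_shash_update(desc, key_nonce, 32) ?:
		      crypto_shash_update(desc, msg, len) ?:
		      crypto_shash_final(desc, mac);
	}

	crypto_free_shash(tfm);
	return err;
}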
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit 9fa68f620041be04720d0cbfb1bd3ddfc6310b24 upstream.
Currently, almost none of the keyed hash algorithms check whether a key
has been set before proceeding. Some algorithms are okay with this and
will effectively just use a key of all 0's or some other bogus default.
However, others will severely break, as demonstrated using
"hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
via a (potentially exploitable) stack buffer overflow.
A while ago, this problem was solved for AF_ALG by pairing each hash
transform with a 'has_key' bool. However, there are still other places
in the kernel where userspace can specify an arbitrary hash algorithm by
name, and the kernel uses it as unkeyed hash without checking whether it
is really unkeyed. Examples of this include:
- KEYCTL_DH_COMPUTE, via the KDF extension
- dm-verity
- dm-crypt, via the ESSIV support
- dm-integrity, via the "internal hash" mode with no key given
- drbd (Distributed Replicated Block Device)
This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
privileges to call.
Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
->crt_flags of each hash transform that indicates whether the transform
still needs to be keyed or not. Then, make the hash init, import, and
digest functions return -ENOKEY if the key is still needed.
The new flag also replaces the 'has_key' bool which algif_hash was
previously using, thereby simplifying the algif_hash implementation.
Reported-by: syzbot <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
crypto/ahash.c | 22 ++++++++++++++++----
crypto/algif_hash.c | 52 ++++++++++---------------------------------------
crypto/shash.c | 25 +++++++++++++++++++----
include/crypto/hash.h | 34 ++++++++++++++++++++++----------
include/linux/crypto.h | 2 +
5 files changed, 75 insertions(+), 60 deletions(-)
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -193,11 +193,18 @@ int crypto_ahash_setkey(struct crypto_ah
unsigned int keylen)
{
unsigned long alignmask = crypto_ahash_alignmask(tfm);
+ int err;
if ((unsigned long)key & alignmask)
- return ahash_setkey_unaligned(tfm, key, keylen);
+ err = ahash_setkey_unaligned(tfm, key, keylen);
+ else
+ err = tfm->setkey(tfm, key, keylen);
+
+ if (err)
+ return err;
- return tfm->setkey(tfm, key, keylen);
+ crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
}
EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
@@ -368,7 +375,12 @@ EXPORT_SYMBOL_GPL(crypto_ahash_finup);
int crypto_ahash_digest(struct ahash_request *req)
{
- return crypto_ahash_op(req, crypto_ahash_reqtfm(req)->digest);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+
+ return crypto_ahash_op(req, tfm->digest);
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);
@@ -450,7 +462,6 @@ static int crypto_ahash_init_tfm(struct
struct ahash_alg *alg = crypto_ahash_alg(hash);
hash->setkey = ahash_nosetkey;
- hash->has_setkey = false;
hash->export = ahash_no_export;
hash->import = ahash_no_import;
@@ -465,7 +476,8 @@ static int crypto_ahash_init_tfm(struct
if (alg->setkey) {
hash->setkey = alg->setkey;
- hash->has_setkey = true;
+ if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+ crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
}
if (alg->export)
hash->export = alg->export;
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -34,11 +34,6 @@ struct hash_ctx {
struct ahash_request req;
};
-struct algif_hash_tfm {
- struct crypto_ahash *hash;
- bool has_key;
-};
-
static int hash_alloc_result(struct sock *sk, struct hash_ctx *ctx)
{
unsigned ds;
@@ -307,7 +302,7 @@ static int hash_check_key(struct socket
int err = 0;
struct sock *psk;
struct alg_sock *pask;
- struct algif_hash_tfm *tfm;
+ struct crypto_ahash *tfm;
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
@@ -321,7 +316,7 @@ static int hash_check_key(struct socket
err = -ENOKEY;
lock_sock_nested(psk, SINGLE_DEPTH_NESTING);
- if (!tfm->has_key)
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
goto unlock;
if (!pask->refcnt++)
@@ -412,41 +407,17 @@ static struct proto_ops algif_hash_ops_n
static void *hash_bind(const char *name, u32 type, u32 mask)
{
- struct algif_hash_tfm *tfm;
- struct crypto_ahash *hash;
-
- tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
- if (!tfm)
- return ERR_PTR(-ENOMEM);
-
- hash = crypto_alloc_ahash(name, type, mask);
- if (IS_ERR(hash)) {
- kfree(tfm);
- return ERR_CAST(hash);
- }
-
- tfm->hash = hash;
-
- return tfm;
+ return crypto_alloc_ahash(name, type, mask);
}
static void hash_release(void *private)
{
- struct algif_hash_tfm *tfm = private;
-
- crypto_free_ahash(tfm->hash);
- kfree(tfm);
+ crypto_free_ahash(private);
}
static int hash_setkey(void *private, const u8 *key, unsigned int keylen)
{
- struct algif_hash_tfm *tfm = private;
- int err;
-
- err = crypto_ahash_setkey(tfm->hash, key, keylen);
- tfm->has_key = !err;
-
- return err;
+ return crypto_ahash_setkey(private, key, keylen);
}
static void hash_sock_destruct(struct sock *sk)
@@ -461,11 +432,10 @@ static void hash_sock_destruct(struct so
static int hash_accept_parent_nokey(void *private, struct sock *sk)
{
- struct hash_ctx *ctx;
+ struct crypto_ahash *tfm = private;
struct alg_sock *ask = alg_sk(sk);
- struct algif_hash_tfm *tfm = private;
- struct crypto_ahash *hash = tfm->hash;
- unsigned len = sizeof(*ctx) + crypto_ahash_reqsize(hash);
+ struct hash_ctx *ctx;
+ unsigned int len = sizeof(*ctx) + crypto_ahash_reqsize(tfm);
ctx = sock_kmalloc(sk, len, GFP_KERNEL);
if (!ctx)
@@ -478,7 +448,7 @@ static int hash_accept_parent_nokey(void
ask->private = ctx;
- ahash_request_set_tfm(&ctx->req, hash);
+ ahash_request_set_tfm(&ctx->req, tfm);
ahash_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, &ctx->wait);
@@ -489,9 +459,9 @@ static int hash_accept_parent_nokey(void
static int hash_accept_parent(void *private, struct sock *sk)
{
- struct algif_hash_tfm *tfm = private;
+ struct crypto_ahash *tfm = private;
- if (!tfm->has_key && crypto_ahash_has_setkey(tfm->hash))
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
return hash_accept_parent_nokey(private, sk);
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -58,11 +58,18 @@ int crypto_shash_setkey(struct crypto_sh
{
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
+ int err;
if ((unsigned long)key & alignmask)
- return shash_setkey_unaligned(tfm, key, keylen);
+ err = shash_setkey_unaligned(tfm, key, keylen);
+ else
+ err = shash->setkey(tfm, key, keylen);
+
+ if (err)
+ return err;
- return shash->setkey(tfm, key, keylen);
+ crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
}
EXPORT_SYMBOL_GPL(crypto_shash_setkey);
@@ -181,6 +188,9 @@ int crypto_shash_digest(struct shash_des
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
+ if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+
if (((unsigned long)data | (unsigned long)out) & alignmask)
return shash_digest_unaligned(desc, data, len, out);
@@ -360,7 +370,8 @@ int crypto_init_shash_ops_async(struct c
crt->digest = shash_async_digest;
crt->setkey = shash_async_setkey;
- crt->has_setkey = alg->setkey != shash_no_setkey;
+ crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
+ CRYPTO_TFM_NEED_KEY);
if (alg->export)
crt->export = shash_async_export;
@@ -375,8 +386,14 @@ int crypto_init_shash_ops_async(struct c
static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_shash *hash = __crypto_shash_cast(tfm);
+ struct shash_alg *alg = crypto_shash_alg(hash);
+
+ hash->descsize = alg->descsize;
+
+ if (crypto_shash_alg_has_setkey(alg) &&
+ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+ crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
- hash->descsize = crypto_shash_alg(hash)->descsize;
return 0;
}
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -210,7 +210,6 @@ struct crypto_ahash {
unsigned int keylen);
unsigned int reqsize;
- bool has_setkey;
struct crypto_tfm base;
};
@@ -410,11 +409,6 @@ static inline void *ahash_request_ctx(st
int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen);
-static inline bool crypto_ahash_has_setkey(struct crypto_ahash *tfm)
-{
- return tfm->has_setkey;
-}
-
/**
* crypto_ahash_finup() - update and finalize message digest
* @req: reference to the ahash_request handle that holds all information
@@ -487,7 +481,12 @@ static inline int crypto_ahash_export(st
*/
static inline int crypto_ahash_import(struct ahash_request *req, const void *in)
{
- return crypto_ahash_reqtfm(req)->import(req, in);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+
+ return tfm->import(req, in);
}
/**
@@ -503,7 +502,12 @@ static inline int crypto_ahash_import(st
*/
static inline int crypto_ahash_init(struct ahash_request *req)
{
- return crypto_ahash_reqtfm(req)->init(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+
+ return tfm->init(req);
}
/**
@@ -855,7 +859,12 @@ static inline int crypto_shash_export(st
*/
static inline int crypto_shash_import(struct shash_desc *desc, const void *in)
{
- return crypto_shash_alg(desc->tfm)->import(desc, in);
+ struct crypto_shash *tfm = desc->tfm;
+
+ if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+
+ return crypto_shash_alg(tfm)->import(desc, in);
}
/**
@@ -871,7 +880,12 @@ static inline int crypto_shash_import(st
*/
static inline int crypto_shash_init(struct shash_desc *desc)
{
- return crypto_shash_alg(desc->tfm)->init(desc);
+ struct crypto_shash *tfm = desc->tfm;
+
+ if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+
+ return crypto_shash_alg(tfm)->init(desc);
}
/**
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -115,6 +115,8 @@
/*
* Transform masks and values (for crt_flags).
*/
+#define CRYPTO_TFM_NEED_KEY 0x00000001
+
#define CRYPTO_TFM_REQ_MASK 0x000fff00
#define CRYPTO_TFM_RES_MASK 0xfff00000
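For the AF_ALG side, the new flag backs the behaviour userspace already sees
for keyed hashes: accept() on an unkeyed transform fails with ENOKEY, and the
same error is now also returned to in-kernel users via crypto_ahash_init() and
crypto_ahash_digest(). A small userspace demonstration (assumes AF_ALG and
hmac(sha256) are available; the fallback #defines cover older libc headers):
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef AF_ALG
#define AF_ALG 38
#endif
#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "hmac(sha256)",
	};
	int tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);

	if (tfmfd < 0 || bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa))) {
		perror("bind");
		return 1;
	}

	/* No ALG_SET_KEY on purpose: accept() is expected to fail with ENOKEY. */
	if (accept(tfmfd, NULL, NULL) < 0)
		perror("unkeyed accept");

	/* After setting a key, accept() succeeds and the op socket is usable. */
	if (setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, "0123456789abcdef", 16) < 0 ||
	    accept(tfmfd, NULL, NULL) < 0)
		perror("keyed accept");

	close(tfmfd);
	return 0;
}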
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 181a4a2d5a0a7b43cab08a70710d727e7764ccdd upstream.
If the ioctl returned -ENOTTY, then don't bother copying
back the result as there is no point.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-ioctl.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -2895,8 +2895,11 @@ video_usercopy(struct file *file, unsign
/* Handles IOCTL */
err = func(file, cmd, parg);
- if (err == -ENOIOCTLCMD)
+ if (err == -ENOTTY || err == -ENOIOCTLCMD) {
err = -ENOTTY;
+ goto out;
+ }
+
if (err == 0) {
if (cmd == VIDIOC_DQBUF)
trace_v4l2_dqbuf(video_devdata(file)->minor, parg);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit b7b957d429f601d6d1942122b339474f31191d75 upstream.
The indentation of this source is all over the place. Fix this.
This patch only changes whitespace.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 212 +++++++++++++-------------
1 file changed, 107 insertions(+), 105 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -49,12 +49,12 @@ struct v4l2_window32 {
static int get_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
{
if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_window32)) ||
- copy_from_user(&kp->w, &up->w, sizeof(up->w)) ||
- get_user(kp->field, &up->field) ||
- get_user(kp->chromakey, &up->chromakey) ||
- get_user(kp->clipcount, &up->clipcount) ||
- get_user(kp->global_alpha, &up->global_alpha))
- return -EFAULT;
+ copy_from_user(&kp->w, &up->w, sizeof(up->w)) ||
+ get_user(kp->field, &up->field) ||
+ get_user(kp->chromakey, &up->chromakey) ||
+ get_user(kp->clipcount, &up->clipcount) ||
+ get_user(kp->global_alpha, &up->global_alpha))
+ return -EFAULT;
if (kp->clipcount > 2048)
return -EINVAL;
if (kp->clipcount) {
@@ -84,11 +84,11 @@ static int get_v4l2_window32(struct v4l2
static int put_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
{
if (copy_to_user(&up->w, &kp->w, sizeof(kp->w)) ||
- put_user(kp->field, &up->field) ||
- put_user(kp->chromakey, &up->chromakey) ||
- put_user(kp->clipcount, &up->clipcount) ||
- put_user(kp->global_alpha, &up->global_alpha))
- return -EFAULT;
+ put_user(kp->field, &up->field) ||
+ put_user(kp->chromakey, &up->chromakey) ||
+ put_user(kp->clipcount, &up->clipcount) ||
+ put_user(kp->global_alpha, &up->global_alpha))
+ return -EFAULT;
return 0;
}
@@ -100,7 +100,7 @@ static inline int get_v4l2_pix_format(st
}
static inline int get_v4l2_pix_format_mplane(struct v4l2_pix_format_mplane *kp,
- struct v4l2_pix_format_mplane __user *up)
+ struct v4l2_pix_format_mplane __user *up)
{
if (copy_from_user(kp, up, sizeof(struct v4l2_pix_format_mplane)))
return -EFAULT;
@@ -115,7 +115,7 @@ static inline int put_v4l2_pix_format(st
}
static inline int put_v4l2_pix_format_mplane(struct v4l2_pix_format_mplane *kp,
- struct v4l2_pix_format_mplane __user *up)
+ struct v4l2_pix_format_mplane __user *up)
{
if (copy_to_user(up, kp, sizeof(struct v4l2_pix_format_mplane)))
return -EFAULT;
@@ -238,7 +238,7 @@ static int __get_v4l2_format32(struct v4
return get_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
default:
pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
- kp->type);
+ kp->type);
return -EINVAL;
}
}
@@ -287,7 +287,7 @@ static int __put_v4l2_format32(struct v4
return put_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
default:
pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
- kp->type);
+ kp->type);
return -EINVAL;
}
}
@@ -321,7 +321,7 @@ static int get_v4l2_standard32(struct v4
{
/* other fields are not set by the user, nor used by the driver */
if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_standard32)) ||
- get_user(kp->index, &up->index))
+ get_user(kp->index, &up->index))
return -EFAULT;
return 0;
}
@@ -329,13 +329,14 @@ static int get_v4l2_standard32(struct v4
static int put_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32 __user *up)
{
if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_standard32)) ||
- put_user(kp->index, &up->index) ||
- put_user(kp->id, &up->id) ||
- copy_to_user(up->name, kp->name, 24) ||
- copy_to_user(&up->frameperiod, &kp->frameperiod, sizeof(kp->frameperiod)) ||
- put_user(kp->framelines, &up->framelines) ||
- copy_to_user(up->reserved, kp->reserved, 4 * sizeof(__u32)))
- return -EFAULT;
+ put_user(kp->index, &up->index) ||
+ put_user(kp->id, &up->id) ||
+ copy_to_user(up->name, kp->name, 24) ||
+ copy_to_user(&up->frameperiod, &kp->frameperiod,
+ sizeof(kp->frameperiod)) ||
+ put_user(kp->framelines, &up->framelines) ||
+ copy_to_user(up->reserved, kp->reserved, 4 * sizeof(__u32)))
+ return -EFAULT;
return 0;
}
@@ -375,14 +376,14 @@ struct v4l2_buffer32 {
};
static int get_v4l2_plane32(struct v4l2_plane __user *up, struct v4l2_plane32 __user *up32,
- enum v4l2_memory memory)
+ enum v4l2_memory memory)
{
void __user *up_pln;
compat_long_t p;
if (copy_in_user(up, up32, 2 * sizeof(__u32)) ||
- copy_in_user(&up->data_offset, &up32->data_offset,
- sizeof(__u32)))
+ copy_in_user(&up->data_offset, &up32->data_offset,
+ sizeof(__u32)))
return -EFAULT;
if (memory == V4L2_MEMORY_USERPTR) {
@@ -396,7 +397,7 @@ static int get_v4l2_plane32(struct v4l2_
return -EFAULT;
} else {
if (copy_in_user(&up->m.mem_offset, &up32->m.mem_offset,
- sizeof(__u32)))
+ sizeof(__u32)))
return -EFAULT;
}
@@ -404,23 +405,23 @@ static int get_v4l2_plane32(struct v4l2_
}
static int put_v4l2_plane32(struct v4l2_plane __user *up, struct v4l2_plane32 __user *up32,
- enum v4l2_memory memory)
+ enum v4l2_memory memory)
{
if (copy_in_user(up32, up, 2 * sizeof(__u32)) ||
- copy_in_user(&up32->data_offset, &up->data_offset,
- sizeof(__u32)))
+ copy_in_user(&up32->data_offset, &up->data_offset,
+ sizeof(__u32)))
return -EFAULT;
/* For MMAP, driver might've set up the offset, so copy it back.
* USERPTR stays the same (was userspace-provided), so no copying. */
if (memory == V4L2_MEMORY_MMAP)
if (copy_in_user(&up32->m.mem_offset, &up->m.mem_offset,
- sizeof(__u32)))
+ sizeof(__u32)))
return -EFAULT;
/* For DMABUF, driver might've set up the fd, so copy it back. */
if (memory == V4L2_MEMORY_DMABUF)
if (copy_in_user(&up32->m.fd, &up->m.fd,
- sizeof(int)))
+ sizeof(int)))
return -EFAULT;
return 0;
@@ -434,19 +435,19 @@ static int get_v4l2_buffer32(struct v4l2
int ret;
if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_buffer32)) ||
- get_user(kp->index, &up->index) ||
- get_user(kp->type, &up->type) ||
- get_user(kp->flags, &up->flags) ||
- get_user(kp->memory, &up->memory) ||
- get_user(kp->length, &up->length))
- return -EFAULT;
+ get_user(kp->index, &up->index) ||
+ get_user(kp->type, &up->type) ||
+ get_user(kp->flags, &up->flags) ||
+ get_user(kp->memory, &up->memory) ||
+ get_user(kp->length, &up->length))
+ return -EFAULT;
if (V4L2_TYPE_IS_OUTPUT(kp->type))
if (get_user(kp->bytesused, &up->bytesused) ||
- get_user(kp->field, &up->field) ||
- get_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
- get_user(kp->timestamp.tv_usec,
- &up->timestamp.tv_usec))
+ get_user(kp->field, &up->field) ||
+ get_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
+ get_user(kp->timestamp.tv_usec,
+ &up->timestamp.tv_usec))
return -EFAULT;
if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
@@ -466,7 +467,7 @@ static int get_v4l2_buffer32(struct v4l2
uplane32 = compat_ptr(p);
if (!access_ok(VERIFY_READ, uplane32,
- kp->length * sizeof(struct v4l2_plane32)))
+ kp->length * sizeof(struct v4l2_plane32)))
return -EFAULT;
/* We don't really care if userspace decides to kill itself
@@ -490,12 +491,12 @@ static int get_v4l2_buffer32(struct v4l2
break;
case V4L2_MEMORY_USERPTR:
{
- compat_long_t tmp;
+ compat_long_t tmp;
- if (get_user(tmp, &up->m.userptr))
- return -EFAULT;
+ if (get_user(tmp, &up->m.userptr))
+ return -EFAULT;
- kp->m.userptr = (unsigned long)compat_ptr(tmp);
+ kp->m.userptr = (unsigned long)compat_ptr(tmp);
}
break;
case V4L2_MEMORY_OVERLAY:
@@ -521,22 +522,23 @@ static int put_v4l2_buffer32(struct v4l2
int ret;
if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_buffer32)) ||
- put_user(kp->index, &up->index) ||
- put_user(kp->type, &up->type) ||
- put_user(kp->flags, &up->flags) ||
- put_user(kp->memory, &up->memory))
- return -EFAULT;
+ put_user(kp->index, &up->index) ||
+ put_user(kp->type, &up->type) ||
+ put_user(kp->flags, &up->flags) ||
+ put_user(kp->memory, &up->memory))
+ return -EFAULT;
if (put_user(kp->bytesused, &up->bytesused) ||
- put_user(kp->field, &up->field) ||
- put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
- put_user(kp->timestamp.tv_usec, &up->timestamp.tv_usec) ||
- copy_to_user(&up->timecode, &kp->timecode, sizeof(struct v4l2_timecode)) ||
- put_user(kp->sequence, &up->sequence) ||
- put_user(kp->reserved2, &up->reserved2) ||
- put_user(kp->reserved, &up->reserved) ||
- put_user(kp->length, &up->length))
- return -EFAULT;
+ put_user(kp->field, &up->field) ||
+ put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
+ put_user(kp->timestamp.tv_usec, &up->timestamp.tv_usec) ||
+ copy_to_user(&up->timecode, &kp->timecode,
+ sizeof(struct v4l2_timecode)) ||
+ put_user(kp->sequence, &up->sequence) ||
+ put_user(kp->reserved2, &up->reserved2) ||
+ put_user(kp->reserved, &up->reserved) ||
+ put_user(kp->length, &up->length))
+ return -EFAULT;
if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
num_planes = kp->length;
@@ -600,11 +602,11 @@ static int get_v4l2_framebuffer32(struct
u32 tmp;
if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_framebuffer32)) ||
- get_user(tmp, &up->base) ||
- get_user(kp->capability, &up->capability) ||
- get_user(kp->flags, &up->flags) ||
- copy_from_user(&kp->fmt, &up->fmt, sizeof(up->fmt)))
- return -EFAULT;
+ get_user(tmp, &up->base) ||
+ get_user(kp->capability, &up->capability) ||
+ get_user(kp->flags, &up->flags) ||
+ copy_from_user(&kp->fmt, &up->fmt, sizeof(up->fmt)))
+ return -EFAULT;
kp->base = (__force void *)compat_ptr(tmp);
return 0;
}
@@ -614,11 +616,11 @@ static int put_v4l2_framebuffer32(struct
u32 tmp = (u32)((unsigned long)kp->base);
if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_framebuffer32)) ||
- put_user(tmp, &up->base) ||
- put_user(kp->capability, &up->capability) ||
- put_user(kp->flags, &up->flags) ||
- copy_to_user(&up->fmt, &kp->fmt, sizeof(up->fmt)))
- return -EFAULT;
+ put_user(tmp, &up->base) ||
+ put_user(kp->capability, &up->capability) ||
+ put_user(kp->flags, &up->flags) ||
+ copy_to_user(&up->fmt, &kp->fmt, sizeof(up->fmt)))
+ return -EFAULT;
return 0;
}
@@ -694,12 +696,12 @@ static int get_v4l2_ext_controls32(struc
compat_caddr_t p;
if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_ext_controls32)) ||
- get_user(kp->which, &up->which) ||
- get_user(kp->count, &up->count) ||
- get_user(kp->error_idx, &up->error_idx) ||
- copy_from_user(kp->reserved, up->reserved,
- sizeof(kp->reserved)))
- return -EFAULT;
+ get_user(kp->which, &up->which) ||
+ get_user(kp->count, &up->count) ||
+ get_user(kp->error_idx, &up->error_idx) ||
+ copy_from_user(kp->reserved, up->reserved,
+ sizeof(kp->reserved)))
+ return -EFAULT;
if (kp->count == 0) {
kp->controls = NULL;
return 0;
@@ -710,7 +712,7 @@ static int get_v4l2_ext_controls32(struc
return -EFAULT;
ucontrols = compat_ptr(p);
if (!access_ok(VERIFY_READ, ucontrols,
- kp->count * sizeof(struct v4l2_ext_control32)))
+ kp->count * sizeof(struct v4l2_ext_control32)))
return -EFAULT;
kcontrols = compat_alloc_user_space(kp->count *
sizeof(struct v4l2_ext_control));
@@ -746,11 +748,11 @@ static int put_v4l2_ext_controls32(struc
compat_caddr_t p;
if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_ext_controls32)) ||
- put_user(kp->which, &up->which) ||
- put_user(kp->count, &up->count) ||
- put_user(kp->error_idx, &up->error_idx) ||
- copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
- return -EFAULT;
+ put_user(kp->which, &up->which) ||
+ put_user(kp->count, &up->count) ||
+ put_user(kp->error_idx, &up->error_idx) ||
+ copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
+ return -EFAULT;
if (!kp->count)
return 0;
@@ -758,7 +760,7 @@ static int put_v4l2_ext_controls32(struc
return -EFAULT;
ucontrols = compat_ptr(p);
if (!access_ok(VERIFY_WRITE, ucontrols,
- n * sizeof(struct v4l2_ext_control32)))
+ n * sizeof(struct v4l2_ext_control32)))
return -EFAULT;
while (--n >= 0) {
@@ -796,15 +798,15 @@ struct v4l2_event32 {
static int put_v4l2_event32(struct v4l2_event *kp, struct v4l2_event32 __user *up)
{
if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_event32)) ||
- put_user(kp->type, &up->type) ||
- copy_to_user(&up->u, &kp->u, sizeof(kp->u)) ||
- put_user(kp->pending, &up->pending) ||
- put_user(kp->sequence, &up->sequence) ||
- put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
- put_user(kp->timestamp.tv_nsec, &up->timestamp.tv_nsec) ||
- put_user(kp->id, &up->id) ||
- copy_to_user(up->reserved, kp->reserved, 8 * sizeof(__u32)))
- return -EFAULT;
+ put_user(kp->type, &up->type) ||
+ copy_to_user(&up->u, &kp->u, sizeof(kp->u)) ||
+ put_user(kp->pending, &up->pending) ||
+ put_user(kp->sequence, &up->sequence) ||
+ put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
+ put_user(kp->timestamp.tv_nsec, &up->timestamp.tv_nsec) ||
+ put_user(kp->id, &up->id) ||
+ copy_to_user(up->reserved, kp->reserved, 8 * sizeof(__u32)))
+ return -EFAULT;
return 0;
}
@@ -821,12 +823,12 @@ static int get_v4l2_edid32(struct v4l2_e
u32 tmp;
if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_edid32)) ||
- get_user(kp->pad, &up->pad) ||
- get_user(kp->start_block, &up->start_block) ||
- get_user(kp->blocks, &up->blocks) ||
- get_user(tmp, &up->edid) ||
- copy_from_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
- return -EFAULT;
+ get_user(kp->pad, &up->pad) ||
+ get_user(kp->start_block, &up->start_block) ||
+ get_user(kp->blocks, &up->blocks) ||
+ get_user(tmp, &up->edid) ||
+ copy_from_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
+ return -EFAULT;
kp->edid = (__force u8 *)compat_ptr(tmp);
return 0;
}
@@ -836,12 +838,12 @@ static int put_v4l2_edid32(struct v4l2_e
u32 tmp = (u32)((unsigned long)kp->edid);
if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_edid32)) ||
- put_user(kp->pad, &up->pad) ||
- put_user(kp->start_block, &up->start_block) ||
- put_user(kp->blocks, &up->blocks) ||
- put_user(tmp, &up->edid) ||
- copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
- return -EFAULT;
+ put_user(kp->pad, &up->pad) ||
+ put_user(kp->start_block, &up->start_block) ||
+ put_user(kp->blocks, &up->blocks) ||
+ put_user(tmp, &up->edid) ||
+ copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
+ return -EFAULT;
return 0;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit a751be5b142ef6bcbbb96d9899516f4d9c8d0ef4 upstream.
put_v4l2_window32() didn't copy back the clip list to userspace.
Drivers can update the clip rectangles, so this should be done.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 59 +++++++++++++++++---------
1 file changed, 40 insertions(+), 19 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -50,6 +50,11 @@ struct v4l2_window32 {
static int get_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
{
+ struct v4l2_clip32 __user *uclips;
+ struct v4l2_clip __user *kclips;
+ compat_caddr_t p;
+ u32 n;
+
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
copy_from_user(&kp->w, &up->w, sizeof(up->w)) ||
get_user(kp->field, &up->field) ||
@@ -59,38 +64,54 @@ static int get_v4l2_window32(struct v4l2
return -EFAULT;
if (kp->clipcount > 2048)
return -EINVAL;
- if (kp->clipcount) {
- struct v4l2_clip32 __user *uclips;
- struct v4l2_clip __user *kclips;
- int n = kp->clipcount;
- compat_caddr_t p;
+ if (!kp->clipcount) {
+ kp->clips = NULL;
+ return 0;
+ }
- if (get_user(p, &up->clips))
+ n = kp->clipcount;
+ if (get_user(p, &up->clips))
+ return -EFAULT;
+ uclips = compat_ptr(p);
+ kclips = compat_alloc_user_space(n * sizeof(*kclips));
+ kp->clips = kclips;
+ while (n--) {
+ if (copy_in_user(&kclips->c, &uclips->c, sizeof(uclips->c)))
return -EFAULT;
- uclips = compat_ptr(p);
- kclips = compat_alloc_user_space(n * sizeof(*kclips));
- kp->clips = kclips;
- while (--n >= 0) {
- if (copy_in_user(&kclips->c, &uclips->c, sizeof(uclips->c)))
- return -EFAULT;
- if (put_user(n ? kclips + 1 : NULL, &kclips->next))
- return -EFAULT;
- uclips += 1;
- kclips += 1;
- }
- } else
- kp->clips = NULL;
+ if (put_user(n ? kclips + 1 : NULL, &kclips->next))
+ return -EFAULT;
+ uclips++;
+ kclips++;
+ }
return 0;
}
static int put_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
{
+ struct v4l2_clip __user *kclips = kp->clips;
+ struct v4l2_clip32 __user *uclips;
+ u32 n = kp->clipcount;
+ compat_caddr_t p;
+
if (copy_to_user(&up->w, &kp->w, sizeof(kp->w)) ||
put_user(kp->field, &up->field) ||
put_user(kp->chromakey, &up->chromakey) ||
put_user(kp->clipcount, &up->clipcount) ||
put_user(kp->global_alpha, &up->global_alpha))
return -EFAULT;
+
+ if (!kp->clipcount)
+ return 0;
+
+ if (get_user(p, &up->clips))
+ return -EFAULT;
+ uclips = compat_ptr(p);
+ while (n--) {
+ if (copy_in_user(&uclips->c, &kclips->c, sizeof(uclips->c)))
+ return -EFAULT;
+ uclips++;
+ kclips++;
+ }
return 0;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: James Morse <[email protected]>
commit 58d6b15e9da5042a99c9c30ad725792e4569150e upstream.
cpu_pm_enter() calls the pm notifier chain with CPU_PM_ENTER, then if
there is a failure: CPU_PM_ENTER_FAILED.
When KVM receives CPU_PM_ENTER it calls cpu_hyp_reset() which will
return us to the hyp-stub. If we subsequently get a CPU_PM_ENTER_FAILED,
KVM does nothing, leaving the CPU running with the hyp-stub, at odds
with kvm_arm_hardware_enabled.
Add CPU_PM_ENTER_FAILED as a fallthrough for CPU_PM_EXIT; this reloads
KVM based on kvm_arm_hardware_enabled. This is safe even if CPU_PM_ENTER
never gets as far as KVM, as cpu_hyp_reinit() calls cpu_hyp_reset()
to make sure the hyp-stub is loaded before reloading KVM.
Fixes: 67f691976662 ("arm64: kvm: allows kvm cpu hotplug")
CC: Lorenzo Pieralisi <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Christoffer Dall <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
virt/kvm/arm/arm.c | 1 +
1 file changed, 1 insertion(+)
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1239,6 +1239,7 @@ static int hyp_init_cpu_pm_notifier(stru
cpu_hyp_reset();
return NOTIFY_OK;
+ case CPU_PM_ENTER_FAILED:
case CPU_PM_EXIT:
if (__this_cpu_read(kvm_arm_hardware_enabled))
/* The hardware was enabled before suspend. */
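Why CPU_PM_ENTER_FAILED can arrive at all: cpu_pm_enter() rolls back the
notifiers that already saw CPU_PM_ENTER when a later one fails. The snippet
below is paraphrased and simplified from kernel/cpu_pm.c (locking omitted), so
treat it as a sketch rather than the exact 4.15 code:
#include <linux/cpu_pm.h>

int cpu_pm_enter(void)
{
	int nr_calls;
	int ret;

	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
	if (ret)
		/*
		 * Callbacks that already ran for CPU_PM_ENTER are unwound
		 * with CPU_PM_ENTER_FAILED, hence KVM must treat it like
		 * CPU_PM_EXIT and reload the hyp state.
		 */
		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);

	return ret;
}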
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit ded4c39e93f3 upstream.
Function identifiers are a 32-bit, unsigned quantity, but we never
tell the compiler so, resulting in the following:
4ac: b26187e0 mov x0, #0xffffffff80000001
We thus rely on the firmware narrowing it for us, which is not
always a reasonable expectation.
Cc: [email protected]
Reported-by: Ard Biesheuvel <[email protected]>
Acked-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/arm-smccc.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -14,14 +14,16 @@
#ifndef __LINUX_ARM_SMCCC_H
#define __LINUX_ARM_SMCCC_H
+#include <uapi/linux/const.h>
+
/*
* This file provides common defines for ARM SMC Calling Convention as
* specified in
* http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html
*/
-#define ARM_SMCCC_STD_CALL 0
-#define ARM_SMCCC_FAST_CALL 1
+#define ARM_SMCCC_STD_CALL _AC(0,U)
+#define ARM_SMCCC_FAST_CALL _AC(1,U)
#define ARM_SMCCC_TYPE_SHIFT 31
#define ARM_SMCCC_SMC_32 0
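The disassembly in the changelog is ordinary C integer promotion at work: a
32-bit value with bit 31 set, built from signed macros, sign-extends when it
lands in a 64-bit register, whereas the _AC(x,U) variants keep the arithmetic
unsigned. A standalone userspace reproduction (not kernel code):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* A fast-call function ID has bit 31 (ARM_SMCCC_TYPE_SHIFT) set. */
	int32_t  as_signed   = (int32_t)0x80000001u;  /* old macros: int arithmetic */
	uint32_t as_unsigned = 0x80000001u;           /* new macros: forced unsigned */

	/* The ID is passed in a 64-bit register (x0) for the SMC/HVC call. */
	uint64_t x0_signed   = as_signed;    /* sign-extends */
	uint64_t x0_unsigned = as_unsigned;  /* zero-extends */

	printf("signed:   %#llx\n", (unsigned long long)x0_signed);   /* 0xffffffff80000001 */
	printf("unsigned: %#llx\n", (unsigned long long)x0_unsigned); /* 0x80000001 */
	return 0;
}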
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Keeping <[email protected]>
commit c66234cfedfc3e6e3b62563a5f2c1562be09a35d upstream.
When restoring registers during runtime resume, we must not write to
I2S_TXDR which is the transmit FIFO as this queues up a sample to be
output and pushes all of the output channels down by one.
This can be demonstrated with the speaker-test utility:
for i in a b c; do speaker-test -c 2 -s 1; done
which should play a test through the left speaker three times but if the
I2S hardware starts runtime suspended the first sample will be played
through the right speaker.
Fix this by marking I2S_TXDR as volatile (which also requires marking it
as readable, even though it technically isn't). This seems to be the
most robust fix, the alternative of giving I2S_TXDR a default value is
more fragile since it does not prevent regcache writing to the register
in all circumstances.
While here, also fix the configuration of I2S_RXDR and I2S_FIFOLR; these
are not writable so they do not suffer from the same problem as I2S_TXDR
but reading from I2S_RXDR does suffer from a similar problem.
Fixes: f0447f6cbb20 ("ASoC: rockchip: i2s: restore register during runtime_suspend/resume cycle", 2016-09-07)
Signed-off-by: John Keeping <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
sound/soc/rockchip/rockchip_i2s.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/sound/soc/rockchip/rockchip_i2s.c
+++ b/sound/soc/rockchip/rockchip_i2s.c
@@ -504,6 +504,7 @@ static bool rockchip_i2s_rd_reg(struct d
case I2S_INTCR:
case I2S_XFER:
case I2S_CLR:
+ case I2S_TXDR:
case I2S_RXDR:
case I2S_FIFOLR:
case I2S_INTSR:
@@ -518,6 +519,9 @@ static bool rockchip_i2s_volatile_reg(st
switch (reg) {
case I2S_INTSR:
case I2S_CLR:
+ case I2S_FIFOLR:
+ case I2S_TXDR:
+ case I2S_RXDR:
return true;
default:
return false;
@@ -527,6 +531,8 @@ static bool rockchip_i2s_volatile_reg(st
static bool rockchip_i2s_precious_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
+ case I2S_RXDR:
+ return true;
default:
return false;
}
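How the three regmap callbacks fit together after this change: volatile
registers are never served from (or synced out of) the register cache, so a
runtime-resume regcache sync can no longer push a stale sample into the TX
FIFO, and marking RXDR precious keeps debugfs register dumps from draining the
receive FIFO. The config below is a condensed, illustrative sketch (the field
values are assumptions; only the callback names correspond to the functions
touched by this patch):
#include <linux/regmap.h>

static const struct regmap_config rockchip_i2s_regmap_sketch = {
	.reg_bits     = 32,
	.val_bits     = 32,
	.reg_stride   = 4,
	.cache_type   = REGCACHE_FLAT,
	.readable_reg = rockchip_i2s_rd_reg,       /* now includes I2S_TXDR */
	.volatile_reg = rockchip_i2s_volatile_reg, /* TXDR/RXDR/FIFOLR uncached */
	.precious_reg = rockchip_i2s_precious_reg, /* RXDR never read as a side effect */
};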
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Mentz <[email protected]>
commit a1dfb4c48cc1e64eeb7800a27c66a6f7e88d075a upstream.
The 32-bit compat v4l2 ioctl handling is implemented based on its 64-bit
equivalent. It converts 32-bit data structures into its 64-bit
equivalents and needs to provide the data to the 64-bit ioctl in user
space memory which is commonly allocated using
compat_alloc_user_space().
However, due to how that function is implemented, it can only be called
a single time for every syscall invocation.
Supposedly to avoid this limitation, the existing code uses a mix of
memory from the kernel stack and memory allocated through
compat_alloc_user_space().
Under normal circumstances, this would not work, because the 64-bit
ioctl expects all pointers to point to user space memory. As a
workaround, set_fs(KERNEL_DS) is called to temporarily disable this
extra safety check and allow kernel pointers. However, this might
introduce a security vulnerability: The result of the 32-bit to 64-bit
conversion is writeable by user space because the output buffer has been
allocated via compat_alloc_user_space(). A malicious user space process
could then manipulate pointers inside this output buffer, and due to the
previous set_fs(KERNEL_DS) call, functions like get_user() or put_user()
no longer prevent kernel memory access.
The new approach is to pre-calculate the total amount of user space
memory that is needed, allocate it using compat_alloc_user_space() and
then divide up the allocated memory to accommodate all data structures
that need to be converted.
An alternative approach would have been to retain the union type karg
that they allocated on the kernel stack in do_video_ioctl(), copy all
data from user space into karg and then back to user space. However, we
decided against this approach because it does not align with other
compat syscall implementations. Instead, we tried to replicate the
get_user/put_user pairs as found in other places in the kernel:
	if (get_user(clipcount, &up->clipcount) ||
	    put_user(clipcount, &kp->clipcount))
		return -EFAULT;
Notes from [email protected]:
This patch was taken from:
https://github.com/LineageOS/android_kernel_samsung_apq8084/commit/97b733953c06e4f0398ade18850f0817778255f7
Clearly nobody could be bothered to upstream this patch or at minimum
tell us :-( We only heard about this a week ago.
This patch was rebased and cleaned up. Compared to the original I
also swapped the order of the convert_in_user arguments so that they
matched copy_in_user. It was hard to review otherwise. I also replaced
the ALLOC_USER_SPACE/ALLOC_AND_GET by a normal function.
Fixes: 6b5a9492ca ("v4l: introduce string control support.")
Signed-off-by: Daniel Mentz <[email protected]>
Co-developed-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Hans Verkuil <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 752 ++++++++++++++++----------
1 file changed, 483 insertions(+), 269 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -22,6 +22,14 @@
#include <media/v4l2-ctrls.h>
#include <media/v4l2-ioctl.h>
+/* Use the same argument order as copy_in_user */
+#define assign_in_user(to, from) \
+({ \
+ typeof(*from) __assign_tmp; \
+ \
+ get_user(__assign_tmp, from) || put_user(__assign_tmp, to); \
+})
+
static long native_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
long ret = -ENOIOCTLCMD;
@@ -48,37 +56,41 @@ struct v4l2_window32 {
__u8 global_alpha;
};
-static int get_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
+static int get_v4l2_window32(struct v4l2_window __user *kp,
+ struct v4l2_window32 __user *up,
+ void __user *aux_buf, u32 aux_space)
{
struct v4l2_clip32 __user *uclips;
struct v4l2_clip __user *kclips;
compat_caddr_t p;
- u32 n;
+ u32 clipcount;
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
- copy_from_user(&kp->w, &up->w, sizeof(up->w)) ||
- get_user(kp->field, &up->field) ||
- get_user(kp->chromakey, &up->chromakey) ||
- get_user(kp->clipcount, &up->clipcount) ||
- get_user(kp->global_alpha, &up->global_alpha))
+ copy_in_user(&kp->w, &up->w, sizeof(up->w)) ||
+ assign_in_user(&kp->field, &up->field) ||
+ assign_in_user(&kp->chromakey, &up->chromakey) ||
+ assign_in_user(&kp->global_alpha, &up->global_alpha) ||
+ get_user(clipcount, &up->clipcount) ||
+ put_user(clipcount, &kp->clipcount))
return -EFAULT;
- if (kp->clipcount > 2048)
+ if (clipcount > 2048)
return -EINVAL;
- if (!kp->clipcount) {
- kp->clips = NULL;
- return 0;
- }
+ if (!clipcount)
+ return put_user(NULL, &kp->clips);
- n = kp->clipcount;
if (get_user(p, &up->clips))
return -EFAULT;
uclips = compat_ptr(p);
- kclips = compat_alloc_user_space(n * sizeof(*kclips));
- kp->clips = kclips;
- while (n--) {
+ if (aux_space < clipcount * sizeof(*kclips))
+ return -EFAULT;
+ kclips = aux_buf;
+ if (put_user(kclips, &kp->clips))
+ return -EFAULT;
+
+ while (clipcount--) {
if (copy_in_user(&kclips->c, &uclips->c, sizeof(uclips->c)))
return -EFAULT;
- if (put_user(n ? kclips + 1 : NULL, &kclips->next))
+ if (put_user(clipcount ? kclips + 1 : NULL, &kclips->next))
return -EFAULT;
uclips++;
kclips++;
@@ -86,27 +98,28 @@ static int get_v4l2_window32(struct v4l2
return 0;
}
-static int put_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
+static int put_v4l2_window32(struct v4l2_window __user *kp,
+ struct v4l2_window32 __user *up)
{
struct v4l2_clip __user *kclips = kp->clips;
struct v4l2_clip32 __user *uclips;
- u32 n = kp->clipcount;
compat_caddr_t p;
+ u32 clipcount;
- if (copy_to_user(&up->w, &kp->w, sizeof(kp->w)) ||
- put_user(kp->field, &up->field) ||
- put_user(kp->chromakey, &up->chromakey) ||
- put_user(kp->clipcount, &up->clipcount) ||
- put_user(kp->global_alpha, &up->global_alpha))
+ if (copy_in_user(&up->w, &kp->w, sizeof(kp->w)) ||
+ assign_in_user(&up->field, &kp->field) ||
+ assign_in_user(&up->chromakey, &kp->chromakey) ||
+ assign_in_user(&up->global_alpha, &kp->global_alpha) ||
+ get_user(clipcount, &kp->clipcount) ||
+ put_user(clipcount, &up->clipcount))
return -EFAULT;
-
- if (!kp->clipcount)
+ if (!clipcount)
return 0;
if (get_user(p, &up->clips))
return -EFAULT;
uclips = compat_ptr(p);
- while (n--) {
+ while (clipcount--) {
if (copy_in_user(&uclips->c, &kclips->c, sizeof(uclips->c)))
return -EFAULT;
uclips++;
@@ -146,107 +159,164 @@ struct v4l2_create_buffers32 {
__u32 reserved[8];
};
-static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
+static int __bufsize_v4l2_format(struct v4l2_format32 __user *up, u32 *size)
+{
+ u32 type;
+
+ if (get_user(type, &up->type))
+ return -EFAULT;
+
+ switch (type) {
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY: {
+ u32 clipcount;
+
+ if (get_user(clipcount, &up->fmt.win.clipcount))
+ return -EFAULT;
+ if (clipcount > 2048)
+ return -EINVAL;
+ *size = clipcount * sizeof(struct v4l2_clip);
+ return 0;
+ }
+ default:
+ *size = 0;
+ return 0;
+ }
+}
+
+static int bufsize_v4l2_format(struct v4l2_format32 __user *up, u32 *size)
+{
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)))
+ return -EFAULT;
+ return __bufsize_v4l2_format(up, size);
+}
+
+static int __get_v4l2_format32(struct v4l2_format __user *kp,
+ struct v4l2_format32 __user *up,
+ void __user *aux_buf, u32 aux_space)
{
- if (get_user(kp->type, &up->type))
+ u32 type;
+
+ if (get_user(type, &up->type) || put_user(type, &kp->type))
return -EFAULT;
- switch (kp->type) {
+ switch (type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- return copy_from_user(&kp->fmt.pix, &up->fmt.pix,
- sizeof(kp->fmt.pix)) ? -EFAULT : 0;
+ return copy_in_user(&kp->fmt.pix, &up->fmt.pix,
+ sizeof(kp->fmt.pix)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- return copy_from_user(&kp->fmt.pix_mp, &up->fmt.pix_mp,
- sizeof(kp->fmt.pix_mp)) ? -EFAULT : 0;
+ return copy_in_user(&kp->fmt.pix_mp, &up->fmt.pix_mp,
+ sizeof(kp->fmt.pix_mp)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
- return get_v4l2_window32(&kp->fmt.win, &up->fmt.win);
+ return get_v4l2_window32(&kp->fmt.win, &up->fmt.win,
+ aux_buf, aux_space);
case V4L2_BUF_TYPE_VBI_CAPTURE:
case V4L2_BUF_TYPE_VBI_OUTPUT:
- return copy_from_user(&kp->fmt.vbi, &up->fmt.vbi,
- sizeof(kp->fmt.vbi)) ? -EFAULT : 0;
+ return copy_in_user(&kp->fmt.vbi, &up->fmt.vbi,
+ sizeof(kp->fmt.vbi)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
- return copy_from_user(&kp->fmt.sliced, &up->fmt.sliced,
- sizeof(kp->fmt.sliced)) ? -EFAULT : 0;
+ return copy_in_user(&kp->fmt.sliced, &up->fmt.sliced,
+ sizeof(kp->fmt.sliced)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SDR_CAPTURE:
case V4L2_BUF_TYPE_SDR_OUTPUT:
- return copy_from_user(&kp->fmt.sdr, &up->fmt.sdr,
- sizeof(kp->fmt.sdr)) ? -EFAULT : 0;
+ return copy_in_user(&kp->fmt.sdr, &up->fmt.sdr,
+ sizeof(kp->fmt.sdr)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_META_CAPTURE:
- return copy_from_user(&kp->fmt.meta, &up->fmt.meta,
- sizeof(kp->fmt.meta)) ? -EFAULT : 0;
+ return copy_in_user(&kp->fmt.meta, &up->fmt.meta,
+ sizeof(kp->fmt.meta)) ? -EFAULT : 0;
default:
return -EINVAL;
}
}
-static int get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
+static int get_v4l2_format32(struct v4l2_format __user *kp,
+ struct v4l2_format32 __user *up,
+ void __user *aux_buf, u32 aux_space)
{
if (!access_ok(VERIFY_READ, up, sizeof(*up)))
return -EFAULT;
- return __get_v4l2_format32(kp, up);
+ return __get_v4l2_format32(kp, up, aux_buf, aux_space);
}
-static int get_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up)
+static int bufsize_v4l2_create(struct v4l2_create_buffers32 __user *up,
+ u32 *size)
+{
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)))
+ return -EFAULT;
+ return __bufsize_v4l2_format(&up->format, size);
+}
+
+static int get_v4l2_create32(struct v4l2_create_buffers __user *kp,
+ struct v4l2_create_buffers32 __user *up,
+ void __user *aux_buf, u32 aux_space)
{
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
- copy_from_user(kp, up, offsetof(struct v4l2_create_buffers32, format)))
+ copy_in_user(kp, up,
+ offsetof(struct v4l2_create_buffers32, format)))
return -EFAULT;
- return __get_v4l2_format32(&kp->format, &up->format);
+ return __get_v4l2_format32(&kp->format, &up->format,
+ aux_buf, aux_space);
}
-static int __put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
+static int __put_v4l2_format32(struct v4l2_format __user *kp,
+ struct v4l2_format32 __user *up)
{
- if (put_user(kp->type, &up->type))
+ u32 type;
+
+ if (get_user(type, &kp->type))
return -EFAULT;
- switch (kp->type) {
+ switch (type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT:
- return copy_to_user(&up->fmt.pix, &kp->fmt.pix,
+ return copy_in_user(&up->fmt.pix, &kp->fmt.pix,
sizeof(kp->fmt.pix)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
- return copy_to_user(&up->fmt.pix_mp, &kp->fmt.pix_mp,
+ return copy_in_user(&up->fmt.pix_mp, &kp->fmt.pix_mp,
sizeof(kp->fmt.pix_mp)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_VIDEO_OVERLAY:
case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
return put_v4l2_window32(&kp->fmt.win, &up->fmt.win);
case V4L2_BUF_TYPE_VBI_CAPTURE:
case V4L2_BUF_TYPE_VBI_OUTPUT:
- return copy_to_user(&up->fmt.vbi, &kp->fmt.vbi,
+ return copy_in_user(&up->fmt.vbi, &kp->fmt.vbi,
sizeof(kp->fmt.vbi)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
- return copy_to_user(&up->fmt.sliced, &kp->fmt.sliced,
+ return copy_in_user(&up->fmt.sliced, &kp->fmt.sliced,
sizeof(kp->fmt.sliced)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_SDR_CAPTURE:
case V4L2_BUF_TYPE_SDR_OUTPUT:
- return copy_to_user(&up->fmt.sdr, &kp->fmt.sdr,
+ return copy_in_user(&up->fmt.sdr, &kp->fmt.sdr,
sizeof(kp->fmt.sdr)) ? -EFAULT : 0;
case V4L2_BUF_TYPE_META_CAPTURE:
- return copy_to_user(&up->fmt.meta, &kp->fmt.meta,
+ return copy_in_user(&up->fmt.meta, &kp->fmt.meta,
sizeof(kp->fmt.meta)) ? -EFAULT : 0;
default:
return -EINVAL;
}
}
-static int put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
+static int put_v4l2_format32(struct v4l2_format __user *kp,
+ struct v4l2_format32 __user *up)
{
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)))
return -EFAULT;
return __put_v4l2_format32(kp, up);
}
-static int put_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up)
+static int put_v4l2_create32(struct v4l2_create_buffers __user *kp,
+ struct v4l2_create_buffers32 __user *up)
{
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
- copy_to_user(up, kp, offsetof(struct v4l2_create_buffers32, format)) ||
- copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
+ copy_in_user(up, kp,
+ offsetof(struct v4l2_create_buffers32, format)) ||
+ copy_in_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
return -EFAULT;
return __put_v4l2_format32(&kp->format, &up->format);
}
@@ -260,25 +330,27 @@ struct v4l2_standard32 {
__u32 reserved[4];
};
-static int get_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32 __user *up)
+static int get_v4l2_standard32(struct v4l2_standard __user *kp,
+ struct v4l2_standard32 __user *up)
{
/* other fields are not set by the user, nor used by the driver */
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
- get_user(kp->index, &up->index))
+ assign_in_user(&kp->index, &up->index))
return -EFAULT;
return 0;
}
-static int put_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32 __user *up)
+static int put_v4l2_standard32(struct v4l2_standard __user *kp,
+ struct v4l2_standard32 __user *up)
{
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
- put_user(kp->index, &up->index) ||
- put_user(kp->id, &up->id) ||
- copy_to_user(up->name, kp->name, sizeof(up->name)) ||
- copy_to_user(&up->frameperiod, &kp->frameperiod,
- sizeof(kp->frameperiod)) ||
- put_user(kp->framelines, &up->framelines) ||
- copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
+ assign_in_user(&up->index, &kp->index) ||
+ assign_in_user(&up->id, &kp->id) ||
+ copy_in_user(up->name, kp->name, sizeof(up->name)) ||
+ copy_in_user(&up->frameperiod, &kp->frameperiod,
+ sizeof(up->frameperiod)) ||
+ assign_in_user(&up->framelines, &kp->framelines) ||
+ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)))
return -EFAULT;
return 0;
}
@@ -318,11 +390,11 @@ struct v4l2_buffer32 {
__u32 reserved;
};
-static int get_v4l2_plane32(struct v4l2_plane __user *up, struct v4l2_plane32 __user *up32,
+static int get_v4l2_plane32(struct v4l2_plane __user *up,
+ struct v4l2_plane32 __user *up32,
enum v4l2_memory memory)
{
- void __user *up_pln;
- compat_long_t p;
+ compat_ulong_t p;
if (copy_in_user(up, up32, 2 * sizeof(__u32)) ||
copy_in_user(&up->data_offset, &up32->data_offset,
@@ -337,10 +409,8 @@ static int get_v4l2_plane32(struct v4l2_
return -EFAULT;
break;
case V4L2_MEMORY_USERPTR:
- if (get_user(p, &up32->m.userptr))
- return -EFAULT;
- up_pln = compat_ptr(p);
- if (put_user((unsigned long)up_pln, &up->m.userptr))
+ if (get_user(p, &up32->m.userptr) ||
+ put_user((unsigned long)compat_ptr(p), &up->m.userptr))
return -EFAULT;
break;
case V4L2_MEMORY_DMABUF:
@@ -352,7 +422,8 @@ static int get_v4l2_plane32(struct v4l2_
return 0;
}
-static int put_v4l2_plane32(struct v4l2_plane __user *up, struct v4l2_plane32 __user *up32,
+static int put_v4l2_plane32(struct v4l2_plane __user *up,
+ struct v4l2_plane32 __user *up32,
enum v4l2_memory memory)
{
unsigned long p;
@@ -376,8 +447,7 @@ static int put_v4l2_plane32(struct v4l2_
return -EFAULT;
break;
case V4L2_MEMORY_DMABUF:
- if (copy_in_user(&up32->m.fd, &up->m.fd,
- sizeof(up->m.fd)))
+ if (copy_in_user(&up32->m.fd, &up->m.fd, sizeof(up->m.fd)))
return -EFAULT;
break;
}
@@ -385,79 +455,121 @@ static int put_v4l2_plane32(struct v4l2_
return 0;
}
-static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user *up)
+static int bufsize_v4l2_buffer(struct v4l2_buffer32 __user *up, u32 *size)
{
+ u32 type;
+ u32 length;
+
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
+ get_user(type, &up->type) ||
+ get_user(length, &up->length))
+ return -EFAULT;
+
+ if (V4L2_TYPE_IS_MULTIPLANAR(type)) {
+ if (length > VIDEO_MAX_PLANES)
+ return -EINVAL;
+
+ /*
+ * We don't really care if userspace decides to kill itself
+ * by passing a very big length value
+ */
+ *size = length * sizeof(struct v4l2_plane);
+ } else {
+ *size = 0;
+ }
+ return 0;
+}
+
+static int get_v4l2_buffer32(struct v4l2_buffer __user *kp,
+ struct v4l2_buffer32 __user *up,
+ void __user *aux_buf, u32 aux_space)
+{
+ u32 type;
+ u32 length;
+ enum v4l2_memory memory;
struct v4l2_plane32 __user *uplane32;
struct v4l2_plane __user *uplane;
compat_caddr_t p;
int ret;
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
- get_user(kp->index, &up->index) ||
- get_user(kp->type, &up->type) ||
- get_user(kp->flags, &up->flags) ||
- get_user(kp->memory, &up->memory) ||
- get_user(kp->length, &up->length))
- return -EFAULT;
-
- if (V4L2_TYPE_IS_OUTPUT(kp->type))
- if (get_user(kp->bytesused, &up->bytesused) ||
- get_user(kp->field, &up->field) ||
- get_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
- get_user(kp->timestamp.tv_usec, &up->timestamp.tv_usec))
- return -EFAULT;
-
- if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
- unsigned int num_planes;
-
- if (kp->length == 0) {
- kp->m.planes = NULL;
- /* num_planes == 0 is legal, e.g. when userspace doesn't
- * need planes array on DQBUF*/
- return 0;
- } else if (kp->length > VIDEO_MAX_PLANES) {
- return -EINVAL;
+ assign_in_user(&kp->index, &up->index) ||
+ get_user(type, &up->type) ||
+ put_user(type, &kp->type) ||
+ assign_in_user(&kp->flags, &up->flags) ||
+ get_user(memory, &up->memory) ||
+ put_user(memory, &kp->memory) ||
+ get_user(length, &up->length) ||
+ put_user(length, &kp->length))
+ return -EFAULT;
+
+ if (V4L2_TYPE_IS_OUTPUT(type))
+ if (assign_in_user(&kp->bytesused, &up->bytesused) ||
+ assign_in_user(&kp->field, &up->field) ||
+ assign_in_user(&kp->timestamp.tv_sec,
+ &up->timestamp.tv_sec) ||
+ assign_in_user(&kp->timestamp.tv_usec,
+ &up->timestamp.tv_usec))
+ return -EFAULT;
+
+ if (V4L2_TYPE_IS_MULTIPLANAR(type)) {
+ u32 num_planes = length;
+
+ if (num_planes == 0) {
+ /*
+ * num_planes == 0 is legal, e.g. when userspace doesn't
+ * need planes array on DQBUF
+ */
+ return put_user(NULL, &kp->m.planes);
}
+ if (num_planes > VIDEO_MAX_PLANES)
+ return -EINVAL;
if (get_user(p, &up->m.planes))
return -EFAULT;
uplane32 = compat_ptr(p);
if (!access_ok(VERIFY_READ, uplane32,
- kp->length * sizeof(*uplane32)))
+ num_planes * sizeof(*uplane32)))
return -EFAULT;
- /* We don't really care if userspace decides to kill itself
- * by passing a very big num_planes value */
- uplane = compat_alloc_user_space(kp->length * sizeof(*uplane));
- kp->m.planes = (__force struct v4l2_plane *)uplane;
+ /*
+ * We don't really care if userspace decides to kill itself
+ * by passing a very big num_planes value
+ */
+ if (aux_space < num_planes * sizeof(*uplane))
+ return -EFAULT;
- for (num_planes = 0; num_planes < kp->length; num_planes++) {
- ret = get_v4l2_plane32(uplane, uplane32, kp->memory);
+ uplane = aux_buf;
+ if (put_user((__force struct v4l2_plane *)uplane,
+ &kp->m.planes))
+ return -EFAULT;
+
+ while (num_planes--) {
+ ret = get_v4l2_plane32(uplane, uplane32, memory);
if (ret)
return ret;
- ++uplane;
- ++uplane32;
+ uplane++;
+ uplane32++;
}
} else {
- switch (kp->memory) {
+ switch (memory) {
case V4L2_MEMORY_MMAP:
case V4L2_MEMORY_OVERLAY:
- if (get_user(kp->m.offset, &up->m.offset))
+ if (assign_in_user(&kp->m.offset, &up->m.offset))
return -EFAULT;
break;
- case V4L2_MEMORY_USERPTR:
- {
- compat_long_t tmp;
+ case V4L2_MEMORY_USERPTR: {
+ compat_ulong_t userptr;
- if (get_user(tmp, &up->m.userptr))
- return -EFAULT;
-
- kp->m.userptr = (unsigned long)compat_ptr(tmp);
- }
+ if (get_user(userptr, &up->m.userptr) ||
+ put_user((unsigned long)compat_ptr(userptr),
+ &kp->m.userptr))
+ return -EFAULT;
break;
+ }
case V4L2_MEMORY_DMABUF:
- if (get_user(kp->m.fd, &up->m.fd))
+ if (assign_in_user(&kp->m.fd, &up->m.fd))
return -EFAULT;
break;
}
@@ -466,62 +578,70 @@ static int get_v4l2_buffer32(struct v4l2
return 0;
}
-static int put_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user *up)
+static int put_v4l2_buffer32(struct v4l2_buffer __user *kp,
+ struct v4l2_buffer32 __user *up)
{
+ u32 type;
+ u32 length;
+ enum v4l2_memory memory;
struct v4l2_plane32 __user *uplane32;
struct v4l2_plane __user *uplane;
compat_caddr_t p;
- int num_planes;
int ret;
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
- put_user(kp->index, &up->index) ||
- put_user(kp->type, &up->type) ||
- put_user(kp->flags, &up->flags) ||
- put_user(kp->memory, &up->memory))
- return -EFAULT;
-
- if (put_user(kp->bytesused, &up->bytesused) ||
- put_user(kp->field, &up->field) ||
- put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
- put_user(kp->timestamp.tv_usec, &up->timestamp.tv_usec) ||
- copy_to_user(&up->timecode, &kp->timecode, sizeof(kp->timecode)) ||
- put_user(kp->sequence, &up->sequence) ||
- put_user(kp->reserved2, &up->reserved2) ||
- put_user(kp->reserved, &up->reserved) ||
- put_user(kp->length, &up->length))
+ assign_in_user(&up->index, &kp->index) ||
+ get_user(type, &kp->type) ||
+ put_user(type, &up->type) ||
+ assign_in_user(&up->flags, &kp->flags) ||
+ get_user(memory, &kp->memory) ||
+ put_user(memory, &up->memory))
+ return -EFAULT;
+
+ if (assign_in_user(&up->bytesused, &kp->bytesused) ||
+ assign_in_user(&up->field, &kp->field) ||
+ assign_in_user(&up->timestamp.tv_sec, &kp->timestamp.tv_sec) ||
+ assign_in_user(&up->timestamp.tv_usec, &kp->timestamp.tv_usec) ||
+ copy_in_user(&up->timecode, &kp->timecode, sizeof(kp->timecode)) ||
+ assign_in_user(&up->sequence, &kp->sequence) ||
+ assign_in_user(&up->reserved2, &kp->reserved2) ||
+ assign_in_user(&up->reserved, &kp->reserved) ||
+ get_user(length, &kp->length) ||
+ put_user(length, &up->length))
return -EFAULT;
- if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
- num_planes = kp->length;
+ if (V4L2_TYPE_IS_MULTIPLANAR(type)) {
+ u32 num_planes = length;
+
if (num_planes == 0)
return 0;
- uplane = (__force struct v4l2_plane __user *)kp->m.planes;
+ if (get_user(uplane, ((__force struct v4l2_plane __user **)&kp->m.planes)))
+ return -EFAULT;
if (get_user(p, &up->m.planes))
return -EFAULT;
uplane32 = compat_ptr(p);
- while (--num_planes >= 0) {
- ret = put_v4l2_plane32(uplane, uplane32, kp->memory);
+ while (num_planes--) {
+ ret = put_v4l2_plane32(uplane, uplane32, memory);
if (ret)
return ret;
++uplane;
++uplane32;
}
} else {
- switch (kp->memory) {
+ switch (memory) {
case V4L2_MEMORY_MMAP:
case V4L2_MEMORY_OVERLAY:
- if (put_user(kp->m.offset, &up->m.offset))
+ if (assign_in_user(&up->m.offset, &kp->m.offset))
return -EFAULT;
break;
case V4L2_MEMORY_USERPTR:
- if (put_user(kp->m.userptr, &up->m.userptr))
+ if (assign_in_user(&up->m.userptr, &kp->m.userptr))
return -EFAULT;
break;
case V4L2_MEMORY_DMABUF:
- if (put_user(kp->m.fd, &up->m.fd))
+ if (assign_in_user(&up->m.fd, &kp->m.fd))
return -EFAULT;
break;
}
@@ -546,29 +666,32 @@ struct v4l2_framebuffer32 {
} fmt;
};
-static int get_v4l2_framebuffer32(struct v4l2_framebuffer *kp, struct v4l2_framebuffer32 __user *up)
+static int get_v4l2_framebuffer32(struct v4l2_framebuffer __user *kp,
+ struct v4l2_framebuffer32 __user *up)
{
- u32 tmp;
+ compat_caddr_t tmp;
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
get_user(tmp, &up->base) ||
- get_user(kp->capability, &up->capability) ||
- get_user(kp->flags, &up->flags) ||
- copy_from_user(&kp->fmt, &up->fmt, sizeof(up->fmt)))
+ put_user((__force void *)compat_ptr(tmp), &kp->base) ||
+ assign_in_user(&kp->capability, &up->capability) ||
+ assign_in_user(&kp->flags, &up->flags) ||
+ copy_in_user(&kp->fmt, &up->fmt, sizeof(kp->fmt)))
return -EFAULT;
- kp->base = (__force void *)compat_ptr(tmp);
return 0;
}
-static int put_v4l2_framebuffer32(struct v4l2_framebuffer *kp, struct v4l2_framebuffer32 __user *up)
+static int put_v4l2_framebuffer32(struct v4l2_framebuffer __user *kp,
+ struct v4l2_framebuffer32 __user *up)
{
- u32 tmp = (u32)((unsigned long)kp->base);
+ void *base;
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
- put_user(tmp, &up->base) ||
- put_user(kp->capability, &up->capability) ||
- put_user(kp->flags, &up->flags) ||
- copy_to_user(&up->fmt, &kp->fmt, sizeof(up->fmt)))
+ get_user(base, &kp->base) ||
+ put_user(ptr_to_compat(base), &up->base) ||
+ assign_in_user(&up->capability, &kp->capability) ||
+ assign_in_user(&up->flags, &kp->flags) ||
+ copy_in_user(&up->fmt, &kp->fmt, sizeof(kp->fmt)))
return -EFAULT;
return 0;
}
@@ -585,18 +708,22 @@ struct v4l2_input32 {
__u32 reserved[3];
};
-/* The 64-bit v4l2_input struct has extra padding at the end of the struct.
- Otherwise it is identical to the 32-bit version. */
-static inline int get_v4l2_input32(struct v4l2_input *kp, struct v4l2_input32 __user *up)
+/*
+ * The 64-bit v4l2_input struct has extra padding at the end of the struct.
+ * Otherwise it is identical to the 32-bit version.
+ */
+static inline int get_v4l2_input32(struct v4l2_input __user *kp,
+ struct v4l2_input32 __user *up)
{
- if (copy_from_user(kp, up, sizeof(*up)))
+ if (copy_in_user(kp, up, sizeof(*up)))
return -EFAULT;
return 0;
}
-static inline int put_v4l2_input32(struct v4l2_input *kp, struct v4l2_input32 __user *up)
+static inline int put_v4l2_input32(struct v4l2_input __user *kp,
+ struct v4l2_input32 __user *up)
{
- if (copy_to_user(up, kp, sizeof(*up)))
+ if (copy_in_user(up, kp, sizeof(*up)))
return -EFAULT;
return 0;
}
@@ -650,41 +777,64 @@ static inline bool ctrl_is_pointer(struc
(qec.flags & V4L2_CTRL_FLAG_HAS_PAYLOAD);
}
+static int bufsize_v4l2_ext_controls(struct v4l2_ext_controls32 __user *up,
+ u32 *size)
+{
+ u32 count;
+
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
+ get_user(count, &up->count))
+ return -EFAULT;
+ if (count > V4L2_CID_MAX_CTRLS)
+ return -EINVAL;
+ *size = count * sizeof(struct v4l2_ext_control);
+ return 0;
+}
+
static int get_v4l2_ext_controls32(struct file *file,
- struct v4l2_ext_controls *kp,
- struct v4l2_ext_controls32 __user *up)
+ struct v4l2_ext_controls __user *kp,
+ struct v4l2_ext_controls32 __user *up,
+ void __user *aux_buf, u32 aux_space)
{
struct v4l2_ext_control32 __user *ucontrols;
struct v4l2_ext_control __user *kcontrols;
- unsigned int n;
+ u32 count;
+ u32 n;
compat_caddr_t p;
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
- get_user(kp->which, &up->which) ||
- get_user(kp->count, &up->count) ||
- get_user(kp->error_idx, &up->error_idx) ||
- copy_from_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
+ assign_in_user(&kp->which, &up->which) ||
+ get_user(count, &up->count) ||
+ put_user(count, &kp->count) ||
+ assign_in_user(&kp->error_idx, &up->error_idx) ||
+ copy_in_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
return -EFAULT;
- if (kp->count == 0) {
- kp->controls = NULL;
- return 0;
- } else if (kp->count > V4L2_CID_MAX_CTRLS) {
+
+ if (count == 0)
+ return put_user(NULL, &kp->controls);
+ if (count > V4L2_CID_MAX_CTRLS)
return -EINVAL;
- }
if (get_user(p, &up->controls))
return -EFAULT;
ucontrols = compat_ptr(p);
- if (!access_ok(VERIFY_READ, ucontrols, kp->count * sizeof(*ucontrols)))
+ if (!access_ok(VERIFY_READ, ucontrols, count * sizeof(*ucontrols)))
return -EFAULT;
- kcontrols = compat_alloc_user_space(kp->count * sizeof(*kcontrols));
- kp->controls = (__force struct v4l2_ext_control *)kcontrols;
- for (n = 0; n < kp->count; n++) {
+ if (aux_space < count * sizeof(*kcontrols))
+ return -EFAULT;
+ kcontrols = aux_buf;
+ if (put_user((__force struct v4l2_ext_control *)kcontrols,
+ &kp->controls))
+ return -EFAULT;
+
+ for (n = 0; n < count; n++) {
u32 id;
if (copy_in_user(kcontrols, ucontrols, sizeof(*ucontrols)))
return -EFAULT;
+
if (get_user(id, &kcontrols->id))
return -EFAULT;
+
if (ctrl_is_pointer(file, id)) {
void __user *s;
@@ -701,43 +851,54 @@ static int get_v4l2_ext_controls32(struc
}
static int put_v4l2_ext_controls32(struct file *file,
- struct v4l2_ext_controls *kp,
+ struct v4l2_ext_controls __user *kp,
struct v4l2_ext_controls32 __user *up)
{
struct v4l2_ext_control32 __user *ucontrols;
- struct v4l2_ext_control __user *kcontrols =
- (__force struct v4l2_ext_control __user *)kp->controls;
- int n = kp->count;
+ struct v4l2_ext_control __user *kcontrols;
+ u32 count;
+ u32 n;
compat_caddr_t p;
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
- put_user(kp->which, &up->which) ||
- put_user(kp->count, &up->count) ||
- put_user(kp->error_idx, &up->error_idx) ||
- copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
+ assign_in_user(&up->which, &kp->which) ||
+ get_user(count, &kp->count) ||
+ put_user(count, &up->count) ||
+ assign_in_user(&up->error_idx, &kp->error_idx) ||
+ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)) ||
+ get_user(kcontrols, &kp->controls))
return -EFAULT;
- if (!kp->count)
- return 0;
+ if (!count)
+ return 0;
if (get_user(p, &up->controls))
return -EFAULT;
ucontrols = compat_ptr(p);
- if (!access_ok(VERIFY_WRITE, ucontrols, n * sizeof(*ucontrols)))
+ if (!access_ok(VERIFY_WRITE, ucontrols, count * sizeof(*ucontrols)))
return -EFAULT;
- while (--n >= 0) {
- unsigned size = sizeof(*ucontrols);
+ for (n = 0; n < count; n++) {
+ unsigned int size = sizeof(*ucontrols);
u32 id;
- if (get_user(id, &kcontrols->id))
+ if (get_user(id, &kcontrols->id) ||
+ put_user(id, &ucontrols->id) ||
+ assign_in_user(&ucontrols->size, &kcontrols->size) ||
+ copy_in_user(&ucontrols->reserved2, &kcontrols->reserved2,
+ sizeof(ucontrols->reserved2)))
return -EFAULT;
- /* Do not modify the pointer when copying a pointer control.
- The contents of the pointer was changed, not the pointer
- itself. */
+
+ /*
+ * Do not modify the pointer when copying a pointer control.
+ * The contents of the pointer was changed, not the pointer
+ * itself.
+ */
if (ctrl_is_pointer(file, id))
size -= sizeof(ucontrols->value64);
+
if (copy_in_user(ucontrols, kcontrols, size))
return -EFAULT;
+
ucontrols++;
kcontrols++;
}
@@ -757,17 +918,18 @@ struct v4l2_event32 {
__u32 reserved[8];
};
-static int put_v4l2_event32(struct v4l2_event *kp, struct v4l2_event32 __user *up)
+static int put_v4l2_event32(struct v4l2_event __user *kp,
+ struct v4l2_event32 __user *up)
{
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
- put_user(kp->type, &up->type) ||
- copy_to_user(&up->u, &kp->u, sizeof(kp->u)) ||
- put_user(kp->pending, &up->pending) ||
- put_user(kp->sequence, &up->sequence) ||
- put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
- put_user(kp->timestamp.tv_nsec, &up->timestamp.tv_nsec) ||
- put_user(kp->id, &up->id) ||
- copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
+ assign_in_user(&up->type, &kp->type) ||
+ copy_in_user(&up->u, &kp->u, sizeof(kp->u)) ||
+ assign_in_user(&up->pending, &kp->pending) ||
+ assign_in_user(&up->sequence, &kp->sequence) ||
+ assign_in_user(&up->timestamp.tv_sec, &kp->timestamp.tv_sec) ||
+ assign_in_user(&up->timestamp.tv_nsec, &kp->timestamp.tv_nsec) ||
+ assign_in_user(&up->id, &kp->id) ||
+ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)))
return -EFAULT;
return 0;
}
@@ -780,31 +942,34 @@ struct v4l2_edid32 {
compat_caddr_t edid;
};
-static int get_v4l2_edid32(struct v4l2_edid *kp, struct v4l2_edid32 __user *up)
+static int get_v4l2_edid32(struct v4l2_edid __user *kp,
+ struct v4l2_edid32 __user *up)
{
- u32 tmp;
+ compat_uptr_t tmp;
if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
- get_user(kp->pad, &up->pad) ||
- get_user(kp->start_block, &up->start_block) ||
- get_user(kp->blocks, &up->blocks) ||
+ assign_in_user(&kp->pad, &up->pad) ||
+ assign_in_user(&kp->start_block, &up->start_block) ||
+ assign_in_user(&kp->blocks, &up->blocks) ||
get_user(tmp, &up->edid) ||
- copy_from_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
+ put_user(compat_ptr(tmp), &kp->edid) ||
+ copy_in_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
return -EFAULT;
- kp->edid = (__force u8 *)compat_ptr(tmp);
return 0;
}
-static int put_v4l2_edid32(struct v4l2_edid *kp, struct v4l2_edid32 __user *up)
+static int put_v4l2_edid32(struct v4l2_edid __user *kp,
+ struct v4l2_edid32 __user *up)
{
- u32 tmp = (u32)((unsigned long)kp->edid);
+ void *edid;
if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
- put_user(kp->pad, &up->pad) ||
- put_user(kp->start_block, &up->start_block) ||
- put_user(kp->blocks, &up->blocks) ||
- put_user(tmp, &up->edid) ||
- copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
+ assign_in_user(&up->pad, &kp->pad) ||
+ assign_in_user(&up->start_block, &kp->start_block) ||
+ assign_in_user(&up->blocks, &kp->blocks) ||
+ get_user(edid, &kp->edid) ||
+ put_user(ptr_to_compat(edid), &up->edid) ||
+ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)))
return -EFAULT;
return 0;
}
@@ -837,22 +1002,23 @@ static int put_v4l2_edid32(struct v4l2_e
#define VIDIOC_G_OUTPUT32 _IOR ('V', 46, s32)
#define VIDIOC_S_OUTPUT32 _IOWR('V', 47, s32)
+static int alloc_userspace(unsigned int size, u32 aux_space,
+ void __user **up_native)
+{
+ *up_native = compat_alloc_user_space(size + aux_space);
+ if (!*up_native)
+ return -ENOMEM;
+ if (clear_user(*up_native, size))
+ return -EFAULT;
+ return 0;
+}
+
static long do_video_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
- union {
- struct v4l2_format v2f;
- struct v4l2_buffer v2b;
- struct v4l2_framebuffer v2fb;
- struct v4l2_input v2i;
- struct v4l2_standard v2s;
- struct v4l2_ext_controls v2ecs;
- struct v4l2_event v2ev;
- struct v4l2_create_buffers v2crt;
- struct v4l2_edid v2edid;
- unsigned long vx;
- int vi;
- } karg;
void __user *up = compat_ptr(arg);
+ void __user *up_native = NULL;
+ void __user *aux_buf;
+ u32 aux_space;
int compatible_arg = 1;
long err = 0;
@@ -891,30 +1057,52 @@ static long do_video_ioctl(struct file *
case VIDIOC_STREAMOFF:
case VIDIOC_S_INPUT:
case VIDIOC_S_OUTPUT:
- err = get_user(karg.vi, (s32 __user *)up);
+ err = alloc_userspace(sizeof(unsigned int), 0, &up_native);
+ if (!err && assign_in_user((unsigned int __user *)up_native,
+ (compat_uint_t __user *)up))
+ err = -EFAULT;
compatible_arg = 0;
break;
case VIDIOC_G_INPUT:
case VIDIOC_G_OUTPUT:
+ err = alloc_userspace(sizeof(unsigned int), 0, &up_native);
compatible_arg = 0;
break;
case VIDIOC_G_EDID:
case VIDIOC_S_EDID:
- err = get_v4l2_edid32(&karg.v2edid, up);
+ err = alloc_userspace(sizeof(struct v4l2_edid), 0, &up_native);
+ if (!err)
+ err = get_v4l2_edid32(up_native, up);
compatible_arg = 0;
break;
case VIDIOC_G_FMT:
case VIDIOC_S_FMT:
case VIDIOC_TRY_FMT:
- err = get_v4l2_format32(&karg.v2f, up);
+ err = bufsize_v4l2_format(up, &aux_space);
+ if (!err)
+ err = alloc_userspace(sizeof(struct v4l2_format),
+ aux_space, &up_native);
+ if (!err) {
+ aux_buf = up_native + sizeof(struct v4l2_format);
+ err = get_v4l2_format32(up_native, up,
+ aux_buf, aux_space);
+ }
compatible_arg = 0;
break;
case VIDIOC_CREATE_BUFS:
- err = get_v4l2_create32(&karg.v2crt, up);
+ err = bufsize_v4l2_create(up, &aux_space);
+ if (!err)
+ err = alloc_userspace(sizeof(struct v4l2_create_buffers),
+ aux_space, &up_native);
+ if (!err) {
+ aux_buf = up_native + sizeof(struct v4l2_create_buffers);
+ err = get_v4l2_create32(up_native, up,
+ aux_buf, aux_space);
+ }
compatible_arg = 0;
break;
@@ -922,36 +1110,63 @@ static long do_video_ioctl(struct file *
case VIDIOC_QUERYBUF:
case VIDIOC_QBUF:
case VIDIOC_DQBUF:
- err = get_v4l2_buffer32(&karg.v2b, up);
+ err = bufsize_v4l2_buffer(up, &aux_space);
+ if (!err)
+ err = alloc_userspace(sizeof(struct v4l2_buffer),
+ aux_space, &up_native);
+ if (!err) {
+ aux_buf = up_native + sizeof(struct v4l2_buffer);
+ err = get_v4l2_buffer32(up_native, up,
+ aux_buf, aux_space);
+ }
compatible_arg = 0;
break;
case VIDIOC_S_FBUF:
- err = get_v4l2_framebuffer32(&karg.v2fb, up);
+ err = alloc_userspace(sizeof(struct v4l2_framebuffer), 0,
+ &up_native);
+ if (!err)
+ err = get_v4l2_framebuffer32(up_native, up);
compatible_arg = 0;
break;
case VIDIOC_G_FBUF:
+ err = alloc_userspace(sizeof(struct v4l2_framebuffer), 0,
+ &up_native);
compatible_arg = 0;
break;
case VIDIOC_ENUMSTD:
- err = get_v4l2_standard32(&karg.v2s, up);
+ err = alloc_userspace(sizeof(struct v4l2_standard), 0,
+ &up_native);
+ if (!err)
+ err = get_v4l2_standard32(up_native, up);
compatible_arg = 0;
break;
case VIDIOC_ENUMINPUT:
- err = get_v4l2_input32(&karg.v2i, up);
+ err = alloc_userspace(sizeof(struct v4l2_input), 0, &up_native);
+ if (!err)
+ err = get_v4l2_input32(up_native, up);
compatible_arg = 0;
break;
case VIDIOC_G_EXT_CTRLS:
case VIDIOC_S_EXT_CTRLS:
case VIDIOC_TRY_EXT_CTRLS:
- err = get_v4l2_ext_controls32(file, &karg.v2ecs, up);
+ err = bufsize_v4l2_ext_controls(up, &aux_space);
+ if (!err)
+ err = alloc_userspace(sizeof(struct v4l2_ext_controls),
+ aux_space, &up_native);
+ if (!err) {
+ aux_buf = up_native + sizeof(struct v4l2_ext_controls);
+ err = get_v4l2_ext_controls32(file, up_native, up,
+ aux_buf, aux_space);
+ }
compatible_arg = 0;
break;
case VIDIOC_DQEVENT:
+ err = alloc_userspace(sizeof(struct v4l2_event), 0, &up_native);
compatible_arg = 0;
break;
}
@@ -960,29 +1175,26 @@ static long do_video_ioctl(struct file *
if (compatible_arg)
err = native_ioctl(file, cmd, (unsigned long)up);
- else {
- mm_segment_t old_fs = get_fs();
-
- set_fs(KERNEL_DS);
- err = native_ioctl(file, cmd, (unsigned long)&karg);
- set_fs(old_fs);
- }
+ else
+ err = native_ioctl(file, cmd, (unsigned long)up_native);
if (err == -ENOTTY)
return err;
- /* Special case: even after an error we need to put the
- results back for these ioctls since the error_idx will
- contain information on which control failed. */
+ /*
+ * Special case: even after an error we need to put the
+ * results back for these ioctls since the error_idx will
+ * contain information on which control failed.
+ */
switch (cmd) {
case VIDIOC_G_EXT_CTRLS:
case VIDIOC_S_EXT_CTRLS:
case VIDIOC_TRY_EXT_CTRLS:
- if (put_v4l2_ext_controls32(file, &karg.v2ecs, up))
+ if (put_v4l2_ext_controls32(file, up_native, up))
err = -EFAULT;
break;
case VIDIOC_S_EDID:
- if (put_v4l2_edid32(&karg.v2edid, up))
+ if (put_v4l2_edid32(up_native, up))
err = -EFAULT;
break;
}
@@ -994,44 +1206,46 @@ static long do_video_ioctl(struct file *
case VIDIOC_S_OUTPUT:
case VIDIOC_G_INPUT:
case VIDIOC_G_OUTPUT:
- err = put_user(((s32)karg.vi), (s32 __user *)up);
+ if (assign_in_user((compat_uint_t __user *)up,
+ ((unsigned int __user *)up_native)))
+ err = -EFAULT;
break;
case VIDIOC_G_FBUF:
- err = put_v4l2_framebuffer32(&karg.v2fb, up);
+ err = put_v4l2_framebuffer32(up_native, up);
break;
case VIDIOC_DQEVENT:
- err = put_v4l2_event32(&karg.v2ev, up);
+ err = put_v4l2_event32(up_native, up);
break;
case VIDIOC_G_EDID:
- err = put_v4l2_edid32(&karg.v2edid, up);
+ err = put_v4l2_edid32(up_native, up);
break;
case VIDIOC_G_FMT:
case VIDIOC_S_FMT:
case VIDIOC_TRY_FMT:
- err = put_v4l2_format32(&karg.v2f, up);
+ err = put_v4l2_format32(up_native, up);
break;
case VIDIOC_CREATE_BUFS:
- err = put_v4l2_create32(&karg.v2crt, up);
+ err = put_v4l2_create32(up_native, up);
break;
case VIDIOC_PREPARE_BUF:
case VIDIOC_QUERYBUF:
case VIDIOC_QBUF:
case VIDIOC_DQBUF:
- err = put_v4l2_buffer32(&karg.v2b, up);
+ err = put_v4l2_buffer32(up_native, up);
break;
case VIDIOC_ENUMSTD:
- err = put_v4l2_standard32(&karg.v2s, up);
+ err = put_v4l2_standard32(up_native, up);
break;
case VIDIOC_ENUMINPUT:
- err = put_v4l2_input32(&karg.v2i, up);
+ err = put_v4l2_input32(up_native, up);
break;
}
return err;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann <[email protected]>
commit 9e343e87d2c4c707ef8fae2844864d4dde3a2d13 upstream.
The map_word_() functions, dating back to linux-2.6.8, try to perform
bitwise operations on a 'map_word' structure. This may have worked
with compilers that were current then (gcc-3.4 or earlier), but end
up being rather inefficient on any version I could try now (gcc-4.4 or
higher). Specifically we hit a problem analyzed in gcc PR81715 where we
fail to reuse the stack space for local variables.
This can be seen immediately in the stack consumption for
cfi_staa_erase_varsize() and other functions that (with CONFIG_KASAN)
can be up to 2200 bytes. Changing the inline functions into macros brings
this down to 1280 bytes. Without KASAN, the same problem exists, but
the stack consumption is lower to start with; my patch shrinks it from
920 to 496 bytes with arm-linux-gnueabi-gcc-5.4, and saves around
1KB in .text size for cfi_cmdset_0020.c, as it avoids copying map_word
structures for each call to one of these helpers.
With the latest gcc-8 snapshot, the problem is fixed in upstream gcc,
but nobody uses that yet, so we should still work around it in mainline
kernels and probably backport the workaround to stable kernels as well.
We had a couple of other functions that suffered from the same gcc bug,
and all of those had a simpler workaround involving dummy variables
in the inline function (sketched below). Unfortunately that did not work
here; the macro hack was the best I could come up with.
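For illustration only, the dummy-variable workaround referred to above looks
roughly like this (the foo_* names are placeholders; the dvb-frontends patch
later in this series applies exactly this pattern):

	static int foo_write_reg(struct foo_priv *priv, u8 reg, u8 val)
	{
		u8 tmp = val;	/* local copy lets gcc reuse the stack slot (PR81715) */

		return foo_write_regs(priv, reg, &tmp, 1);
	}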
It would also be helpful to have someone do a little performance testing
on the patch, to see how much it helps in terms of CPU utilization.
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715
Signed-off-by: Arnd Bergmann <[email protected]>
Acked-by: Richard Weinberger <[email protected]>
Signed-off-by: Boris Brezillon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/mtd/map.h | 130 ++++++++++++++++++++++--------------------------
1 file changed, 61 insertions(+), 69 deletions(-)
--- a/include/linux/mtd/map.h
+++ b/include/linux/mtd/map.h
@@ -270,75 +270,67 @@ void map_destroy(struct mtd_info *mtd);
#define INVALIDATE_CACHED_RANGE(map, from, size) \
do { if (map->inval_cache) map->inval_cache(map, from, size); } while (0)
-
-static inline int map_word_equal(struct map_info *map, map_word val1, map_word val2)
-{
- int i;
-
- for (i = 0; i < map_words(map); i++) {
- if (val1.x[i] != val2.x[i])
- return 0;
- }
-
- return 1;
-}
-
-static inline map_word map_word_and(struct map_info *map, map_word val1, map_word val2)
-{
- map_word r;
- int i;
-
- for (i = 0; i < map_words(map); i++)
- r.x[i] = val1.x[i] & val2.x[i];
-
- return r;
-}
-
-static inline map_word map_word_clr(struct map_info *map, map_word val1, map_word val2)
-{
- map_word r;
- int i;
-
- for (i = 0; i < map_words(map); i++)
- r.x[i] = val1.x[i] & ~val2.x[i];
-
- return r;
-}
-
-static inline map_word map_word_or(struct map_info *map, map_word val1, map_word val2)
-{
- map_word r;
- int i;
-
- for (i = 0; i < map_words(map); i++)
- r.x[i] = val1.x[i] | val2.x[i];
-
- return r;
-}
-
-static inline int map_word_andequal(struct map_info *map, map_word val1, map_word val2, map_word val3)
-{
- int i;
-
- for (i = 0; i < map_words(map); i++) {
- if ((val1.x[i] & val2.x[i]) != val3.x[i])
- return 0;
- }
-
- return 1;
-}
-
-static inline int map_word_bitsset(struct map_info *map, map_word val1, map_word val2)
-{
- int i;
-
- for (i = 0; i < map_words(map); i++) {
- if (val1.x[i] & val2.x[i])
- return 1;
- }
-
- return 0;
-}
+#define map_word_equal(map, val1, val2) \
+({ \
+ int i, ret = 1; \
+ for (i = 0; i < map_words(map); i++) \
+ if ((val1).x[i] != (val2).x[i]) { \
+ ret = 0; \
+ break; \
+ } \
+ ret; \
+})
+
+#define map_word_and(map, val1, val2) \
+({ \
+ map_word r; \
+ int i; \
+ for (i = 0; i < map_words(map); i++) \
+ r.x[i] = (val1).x[i] & (val2).x[i]; \
+ r; \
+})
+
+#define map_word_clr(map, val1, val2) \
+({ \
+ map_word r; \
+ int i; \
+ for (i = 0; i < map_words(map); i++) \
+ r.x[i] = (val1).x[i] & ~(val2).x[i]; \
+ r; \
+})
+
+#define map_word_or(map, val1, val2) \
+({ \
+ map_word r; \
+ int i; \
+ for (i = 0; i < map_words(map); i++) \
+ r.x[i] = (val1).x[i] | (val2).x[i]; \
+ r; \
+})
+
+#define map_word_andequal(map, val1, val2, val3) \
+({ \
+ int i, ret = 1; \
+ for (i = 0; i < map_words(map); i++) { \
+ if (((val1).x[i] & (val2).x[i]) != (val2).x[i]) { \
+ ret = 0; \
+ break; \
+ } \
+ } \
+ ret; \
+})
+
+#define map_word_bitsset(map, val1, val2) \
+({ \
+ int i, ret = 0; \
+ for (i = 0; i < map_words(map); i++) { \
+ if ((val1).x[i] & (val2).x[i]) { \
+ ret = 1; \
+ break; \
+ } \
+ } \
+ ret; \
+})
static inline map_word map_word_load(struct map_info *map, const void *ptr)
{
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 0fa2c5f954c41e870fe327907c01cb8f7ea8d5a2 upstream.
If the framebuffer is enabled and error injection is disabled, then
creating the controls for the video output device would fail with an
error.
This is because the Clear Framebuffer control uses the 'vivid control
class' and that control class isn't added if error injection is disabled.
In addition, this control was added to e.g. vbi devices as well, which
makes no sense.
Move this control to its own control handler and handle it correctly.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/platform/vivid/vivid-core.h | 1
drivers/media/platform/vivid/vivid-ctrls.c | 35 ++++++++++++++++++++++++-----
2 files changed, 30 insertions(+), 6 deletions(-)
--- a/drivers/media/platform/vivid/vivid-core.h
+++ b/drivers/media/platform/vivid/vivid-core.h
@@ -154,6 +154,7 @@ struct vivid_dev {
struct v4l2_ctrl_handler ctrl_hdl_streaming;
struct v4l2_ctrl_handler ctrl_hdl_sdtv_cap;
struct v4l2_ctrl_handler ctrl_hdl_loop_cap;
+ struct v4l2_ctrl_handler ctrl_hdl_fb;
struct video_device vid_cap_dev;
struct v4l2_ctrl_handler ctrl_hdl_vid_cap;
struct video_device vid_out_dev;
--- a/drivers/media/platform/vivid/vivid-ctrls.c
+++ b/drivers/media/platform/vivid/vivid-ctrls.c
@@ -120,9 +120,6 @@ static int vivid_user_gen_s_ctrl(struct
clear_bit(V4L2_FL_REGISTERED, &dev->radio_rx_dev.flags);
clear_bit(V4L2_FL_REGISTERED, &dev->radio_tx_dev.flags);
break;
- case VIVID_CID_CLEAR_FB:
- vivid_clear_fb(dev);
- break;
case VIVID_CID_BUTTON:
dev->button_pressed = 30;
break;
@@ -274,8 +271,28 @@ static const struct v4l2_ctrl_config viv
.type = V4L2_CTRL_TYPE_BUTTON,
};
+
+/* Framebuffer Controls */
+
+static int vivid_fb_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct vivid_dev *dev = container_of(ctrl->handler,
+ struct vivid_dev, ctrl_hdl_fb);
+
+ switch (ctrl->id) {
+ case VIVID_CID_CLEAR_FB:
+ vivid_clear_fb(dev);
+ break;
+ }
+ return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_fb_ctrl_ops = {
+ .s_ctrl = vivid_fb_s_ctrl,
+};
+
static const struct v4l2_ctrl_config vivid_ctrl_clear_fb = {
- .ops = &vivid_user_gen_ctrl_ops,
+ .ops = &vivid_fb_ctrl_ops,
.id = VIVID_CID_CLEAR_FB,
.name = "Clear Framebuffer",
.type = V4L2_CTRL_TYPE_BUTTON,
@@ -1357,6 +1374,7 @@ int vivid_create_controls(struct vivid_d
struct v4l2_ctrl_handler *hdl_streaming = &dev->ctrl_hdl_streaming;
struct v4l2_ctrl_handler *hdl_sdtv_cap = &dev->ctrl_hdl_sdtv_cap;
struct v4l2_ctrl_handler *hdl_loop_cap = &dev->ctrl_hdl_loop_cap;
+ struct v4l2_ctrl_handler *hdl_fb = &dev->ctrl_hdl_fb;
struct v4l2_ctrl_handler *hdl_vid_cap = &dev->ctrl_hdl_vid_cap;
struct v4l2_ctrl_handler *hdl_vid_out = &dev->ctrl_hdl_vid_out;
struct v4l2_ctrl_handler *hdl_vbi_cap = &dev->ctrl_hdl_vbi_cap;
@@ -1384,10 +1402,12 @@ int vivid_create_controls(struct vivid_d
v4l2_ctrl_new_custom(hdl_sdtv_cap, &vivid_ctrl_class, NULL);
v4l2_ctrl_handler_init(hdl_loop_cap, 1);
v4l2_ctrl_new_custom(hdl_loop_cap, &vivid_ctrl_class, NULL);
+ v4l2_ctrl_handler_init(hdl_fb, 1);
+ v4l2_ctrl_new_custom(hdl_fb, &vivid_ctrl_class, NULL);
v4l2_ctrl_handler_init(hdl_vid_cap, 55);
v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_class, NULL);
v4l2_ctrl_handler_init(hdl_vid_out, 26);
- if (!no_error_inj)
+ if (!no_error_inj || dev->has_fb)
v4l2_ctrl_new_custom(hdl_vid_out, &vivid_ctrl_class, NULL);
v4l2_ctrl_handler_init(hdl_vbi_cap, 21);
v4l2_ctrl_new_custom(hdl_vbi_cap, &vivid_ctrl_class, NULL);
@@ -1561,7 +1581,7 @@ int vivid_create_controls(struct vivid_d
v4l2_ctrl_new_custom(hdl_loop_cap, &vivid_ctrl_loop_video, NULL);
if (dev->has_fb)
- v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_clear_fb, NULL);
+ v4l2_ctrl_new_custom(hdl_fb, &vivid_ctrl_clear_fb, NULL);
if (dev->has_radio_rx) {
v4l2_ctrl_new_custom(hdl_radio_rx, &vivid_ctrl_radio_hw_seek_mode, NULL);
@@ -1658,6 +1678,7 @@ int vivid_create_controls(struct vivid_d
v4l2_ctrl_add_handler(hdl_vid_cap, hdl_streaming, NULL);
v4l2_ctrl_add_handler(hdl_vid_cap, hdl_sdtv_cap, NULL);
v4l2_ctrl_add_handler(hdl_vid_cap, hdl_loop_cap, NULL);
+ v4l2_ctrl_add_handler(hdl_vid_cap, hdl_fb, NULL);
if (hdl_vid_cap->error)
return hdl_vid_cap->error;
dev->vid_cap_dev.ctrl_handler = hdl_vid_cap;
@@ -1666,6 +1687,7 @@ int vivid_create_controls(struct vivid_d
v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_gen, NULL);
v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_aud, NULL);
v4l2_ctrl_add_handler(hdl_vid_out, hdl_streaming, NULL);
+ v4l2_ctrl_add_handler(hdl_vid_out, hdl_fb, NULL);
if (hdl_vid_out->error)
return hdl_vid_out->error;
dev->vid_out_dev.ctrl_handler = hdl_vid_out;
@@ -1725,4 +1747,5 @@ void vivid_free_controls(struct vivid_de
v4l2_ctrl_handler_free(&dev->ctrl_hdl_streaming);
v4l2_ctrl_handler_free(&dev->ctrl_hdl_sdtv_cap);
v4l2_ctrl_handler_free(&dev->ctrl_hdl_loop_cap);
+ v4l2_ctrl_handler_free(&dev->ctrl_hdl_fb);
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann <[email protected]>
commit 3cd890dbe2a4f14cc44c85bb6cf37e5e22d4dd0e upstream.
A typical code fragment was copied across many dvb-frontend drivers and
causes large stack frames when built with CONFIG_KASAN on gcc-5/6/7:
drivers/media/dvb-frontends/cxd2841er.c:3225:1: error: the frame size of 3992 bytes is larger than 3072 bytes [-Werror=frame-larger-than=]
drivers/media/dvb-frontends/cxd2841er.c:3404:1: error: the frame size of 3136 bytes is larger than 3072 bytes [-Werror=frame-larger-than=]
drivers/media/dvb-frontends/stv0367.c:3143:1: error: the frame size of 4016 bytes is larger than 3072 bytes [-Werror=frame-larger-than=]
drivers/media/dvb-frontends/stv090x.c:3430:1: error: the frame size of 5312 bytes is larger than 3072 bytes [-Werror=frame-larger-than=]
drivers/media/dvb-frontends/stv090x.c:4248:1: error: the frame size of 4872 bytes is larger than 3072 bytes [-Werror=frame-larger-than=]
gcc-8 now solves this by consolidating the stack slots for the argument
variables, but on older compilers we can get the same behavior by taking
the pointer of a local variable rather than the inline function argument.
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/dvb-frontends/ascot2e.c | 4 +++-
drivers/media/dvb-frontends/cxd2841er.c | 4 +++-
drivers/media/dvb-frontends/helene.c | 4 +++-
drivers/media/dvb-frontends/horus3a.c | 4 +++-
drivers/media/dvb-frontends/itd1000.c | 5 +++--
drivers/media/dvb-frontends/mt312.c | 5 ++++-
drivers/media/dvb-frontends/stb0899_drv.c | 3 ++-
drivers/media/dvb-frontends/stb6100.c | 6 ++++--
drivers/media/dvb-frontends/stv0367.c | 4 +++-
drivers/media/dvb-frontends/stv090x.c | 4 +++-
drivers/media/dvb-frontends/stv6110x.c | 4 +++-
drivers/media/dvb-frontends/zl10039.c | 4 +++-
12 files changed, 37 insertions(+), 14 deletions(-)
--- a/drivers/media/dvb-frontends/ascot2e.c
+++ b/drivers/media/dvb-frontends/ascot2e.c
@@ -155,7 +155,9 @@ static int ascot2e_write_regs(struct asc
static int ascot2e_write_reg(struct ascot2e_priv *priv, u8 reg, u8 val)
{
- return ascot2e_write_regs(priv, reg, &val, 1);
+ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return ascot2e_write_regs(priv, reg, &tmp, 1);
}
static int ascot2e_read_regs(struct ascot2e_priv *priv,
--- a/drivers/media/dvb-frontends/cxd2841er.c
+++ b/drivers/media/dvb-frontends/cxd2841er.c
@@ -257,7 +257,9 @@ static int cxd2841er_write_regs(struct c
static int cxd2841er_write_reg(struct cxd2841er_priv *priv,
u8 addr, u8 reg, u8 val)
{
- return cxd2841er_write_regs(priv, addr, reg, &val, 1);
+ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return cxd2841er_write_regs(priv, addr, reg, &tmp, 1);
}
static int cxd2841er_read_regs(struct cxd2841er_priv *priv,
--- a/drivers/media/dvb-frontends/helene.c
+++ b/drivers/media/dvb-frontends/helene.c
@@ -331,7 +331,9 @@ static int helene_write_regs(struct hele
static int helene_write_reg(struct helene_priv *priv, u8 reg, u8 val)
{
- return helene_write_regs(priv, reg, &val, 1);
+ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return helene_write_regs(priv, reg, &tmp, 1);
}
static int helene_read_regs(struct helene_priv *priv,
--- a/drivers/media/dvb-frontends/horus3a.c
+++ b/drivers/media/dvb-frontends/horus3a.c
@@ -89,7 +89,9 @@ static int horus3a_write_regs(struct hor
static int horus3a_write_reg(struct horus3a_priv *priv, u8 reg, u8 val)
{
- return horus3a_write_regs(priv, reg, &val, 1);
+ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return horus3a_write_regs(priv, reg, &tmp, 1);
}
static int horus3a_enter_power_save(struct horus3a_priv *priv)
--- a/drivers/media/dvb-frontends/itd1000.c
+++ b/drivers/media/dvb-frontends/itd1000.c
@@ -95,8 +95,9 @@ static int itd1000_read_reg(struct itd10
static inline int itd1000_write_reg(struct itd1000_state *state, u8 r, u8 v)
{
- int ret = itd1000_write_regs(state, r, &v, 1);
- state->shadow[r] = v;
+ u8 tmp = v; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+ int ret = itd1000_write_regs(state, r, &tmp, 1);
+ state->shadow[r] = tmp;
return ret;
}
--- a/drivers/media/dvb-frontends/mt312.c
+++ b/drivers/media/dvb-frontends/mt312.c
@@ -142,7 +142,10 @@ static inline int mt312_readreg(struct m
static inline int mt312_writereg(struct mt312_state *state,
const enum mt312_reg_addr reg, const u8 val)
{
- return mt312_write(state, reg, &val, 1);
+ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+
+ return mt312_write(state, reg, &tmp, 1);
}
static inline u32 mt312_div(u32 a, u32 b)
--- a/drivers/media/dvb-frontends/stb0899_drv.c
+++ b/drivers/media/dvb-frontends/stb0899_drv.c
@@ -539,7 +539,8 @@ int stb0899_write_regs(struct stb0899_st
int stb0899_write_reg(struct stb0899_state *state, unsigned int reg, u8 data)
{
- return stb0899_write_regs(state, reg, &data, 1);
+ u8 tmp = data;
+ return stb0899_write_regs(state, reg, &tmp, 1);
}
/*
--- a/drivers/media/dvb-frontends/stb6100.c
+++ b/drivers/media/dvb-frontends/stb6100.c
@@ -226,12 +226,14 @@ static int stb6100_write_reg_range(struc
static int stb6100_write_reg(struct stb6100_state *state, u8 reg, u8 data)
{
+ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
if (unlikely(reg >= STB6100_NUMREGS)) {
dprintk(verbose, FE_ERROR, 1, "Invalid register offset 0x%x", reg);
return -EREMOTEIO;
}
- data = (data & stb6100_template[reg].mask) | stb6100_template[reg].set;
- return stb6100_write_reg_range(state, &data, reg, 1);
+ tmp = (tmp & stb6100_template[reg].mask) | stb6100_template[reg].set;
+ return stb6100_write_reg_range(state, &tmp, reg, 1);
}
--- a/drivers/media/dvb-frontends/stv0367.c
+++ b/drivers/media/dvb-frontends/stv0367.c
@@ -166,7 +166,9 @@ int stv0367_writeregs(struct stv0367_sta
static int stv0367_writereg(struct stv0367_state *state, u16 reg, u8 data)
{
- return stv0367_writeregs(state, reg, &data, 1);
+ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return stv0367_writeregs(state, reg, &tmp, 1);
}
static u8 stv0367_readreg(struct stv0367_state *state, u16 reg)
--- a/drivers/media/dvb-frontends/stv090x.c
+++ b/drivers/media/dvb-frontends/stv090x.c
@@ -755,7 +755,9 @@ static int stv090x_write_regs(struct stv
static int stv090x_write_reg(struct stv090x_state *state, unsigned int reg, u8 data)
{
- return stv090x_write_regs(state, reg, &data, 1);
+ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return stv090x_write_regs(state, reg, &tmp, 1);
}
static int stv090x_i2c_gate_ctrl(struct stv090x_state *state, int enable)
--- a/drivers/media/dvb-frontends/stv6110x.c
+++ b/drivers/media/dvb-frontends/stv6110x.c
@@ -97,7 +97,9 @@ static int stv6110x_write_regs(struct st
static int stv6110x_write_reg(struct stv6110x_state *stv6110x, u8 reg, u8 data)
{
- return stv6110x_write_regs(stv6110x, reg, &data, 1);
+ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return stv6110x_write_regs(stv6110x, reg, &tmp, 1);
}
static int stv6110x_init(struct dvb_frontend *fe)
--- a/drivers/media/dvb-frontends/zl10039.c
+++ b/drivers/media/dvb-frontends/zl10039.c
@@ -134,7 +134,9 @@ static inline int zl10039_writereg(struc
const enum zl10039_reg_addr reg,
const u8 val)
{
- return zl10039_write(state, reg, &val, 1);
+ const u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+
+ return zl10039_write(state, reg, &tmp, 1);
}
static int zl10039_init(struct dvb_frontend *fe)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mauro Carvalho Chehab <[email protected]>
commit 9893b905e743ded332575ca04486bd586c0772f7 upstream.
The XC2028_I2C_FLUSH only needs to be implemented on a few
devices. Others can safely ignore it.
That prevents filling the dmesg with lots of messages like:
dib0700: stk7700ph_xc3028_callback: unknown command 2, arg 0
Fixes: 4d37ece757a8 ("[media] tuner/xc2028: Add I2C flush callback")
Reported-by: Enrico Mioso <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/usb/dvb-usb/cxusb.c | 2 ++
drivers/media/usb/dvb-usb/dib0700_devices.c | 1 +
2 files changed, 3 insertions(+)
--- a/drivers/media/usb/dvb-usb/cxusb.c
+++ b/drivers/media/usb/dvb-usb/cxusb.c
@@ -677,6 +677,8 @@ static int dvico_bluebird_xc2028_callbac
case XC2028_RESET_CLK:
deb_info("%s: XC2028_RESET_CLK %d\n", __func__, arg);
break;
+ case XC2028_I2C_FLUSH:
+ break;
default:
deb_info("%s: unknown command %d, arg %d\n", __func__,
command, arg);
--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
+++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
@@ -430,6 +430,7 @@ static int stk7700ph_xc3028_callback(voi
state->dib7000p_ops.set_gpio(adap->fe_adap[0].fe, 8, 0, 1);
break;
case XC2028_RESET_CLK:
+ case XC2028_I2C_FLUSH:
break;
default:
err("%s: unknown command %d, arg %d\n", __func__,
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens <[email protected]>
commit d0290bc20d4739b7a900ae37eb5d4cc3be2b393f upstream.
Commit df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext
data") added a bounce buffer to avoid hardened usercopy checks. Copying
to the bounce buffer was implemented with a simple memcpy() assuming
that it is always valid to read from kernel memory iff the
kern_addr_valid() check passed.
A simple, but pointless, test case like "dd if=/proc/kcore of=/dev/null"
now can easily crash the kernel, since the former exception handling on
invalid kernel addresses no longer works.
Also adding a kern_addr_valid() implementation wouldn't help here. Most
architectures simply return 1 here, while a couple implemented a page
table walk to figure out if something is mapped at the address in
question.
With DEBUG_PAGEALLOC active mappings are established and removed all the
time, so that relying on the result of kern_addr_valid() before
executing the memcpy() also doesn't work.
Therefore simply use probe_kernel_read() to copy to the bounce buffer.
This also allows us to simplify read_kcore().
At least on s390 this fixes the observed crashes and doesn't introduce
warnings that were removed with df04abfd181a ("fs/proc/kcore.c: Add
bounce buffer for ktext data"), even though the generic
probe_kernel_read() implementation uses uaccess functions.
While looking into this I'm also wondering if kern_addr_valid() could be
completely removed...(?)
Link: http://lkml.kernel.org/r/[email protected]
Fixes: df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data")
Fixes: f5509cc18daa ("mm: Hardened usercopy")
Signed-off-by: Heiko Carstens <[email protected]>
Acked-by: Kees Cook <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/proc/kcore.c | 18 +++++-------------
1 file changed, 5 insertions(+), 13 deletions(-)
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -512,23 +512,15 @@ read_kcore(struct file *file, char __use
return -EFAULT;
} else {
if (kern_addr_valid(start)) {
- unsigned long n;
-
/*
* Using bounce buffer to bypass the
* hardened user copy kernel text checks.
*/
- memcpy(buf, (char *) start, tsz);
- n = copy_to_user(buffer, buf, tsz);
- /*
- * We cannot distinguish between fault on source
- * and fault on destination. When this happens
- * we clear too and hope it will trigger the
- * EFAULT again.
- */
- if (n) {
- if (clear_user(buffer + tsz - n,
- n))
+ if (probe_kernel_read(buf, (void *) start, tsz)) {
+ if (clear_user(buffer, tsz))
+ return -EFAULT;
+ } else {
+ if (copy_to_user(buffer, buf, tsz))
return -EFAULT;
}
} else {
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit 9903a91c763ecdae333a04a9d89d79d2b8966503 upstream.
With pipe-user-pages-hard set to 'N', users were actually only allowed up
to 'N - 1' buffers; and likewise for pipe-user-pages-soft.
Fix this to allow up to 'N' buffers, as would be expected.
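The change is a one-character boundary fix; a minimal sketch of the old and
new semantics, using illustrative helper names rather than the pipe.c ones:
/* Illustrative only: with a limit of N, the old check already trips
 * at user_bufs == N, so only N - 1 buffers were usable. */
static bool over_limit_old(unsigned long user_bufs, unsigned long limit)
{
	return limit && user_bufs >= limit;
}

/* New check: exactly N buffers are allowed before the limit applies. */
static bool over_limit_new(unsigned long user_bufs, unsigned long limit)
{
	return limit && user_bufs > limit;
}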
Link: http://lkml.kernel.org/r/[email protected]
Fixes: b0b91d18e2e9 ("pipe: fix limit checking in pipe_set_size()")
Signed-off-by: Eric Biggers <[email protected]>
Acked-by: Willy Tarreau <[email protected]>
Acked-by: Kees Cook <[email protected]>
Acked-by: Joe Lawrence <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: "Luis R . Rodriguez" <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Mikulas Patocka <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/pipe.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -610,12 +610,12 @@ static unsigned long account_pipe_buffer
static bool too_many_pipe_buffers_soft(unsigned long user_bufs)
{
- return pipe_user_pages_soft && user_bufs >= pipe_user_pages_soft;
+ return pipe_user_pages_soft && user_bufs > pipe_user_pages_soft;
}
static bool too_many_pipe_buffers_hard(unsigned long user_bufs)
{
- return pipe_user_pages_hard && user_bufs >= pipe_user_pages_hard;
+ return pipe_user_pages_hard && user_bufs > pipe_user_pages_hard;
}
static bool is_unprivileged_user(void)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Toshi Kani <[email protected]>
commit 23fbd7c70aec7600e3227eb24259fc55bf6e4881 upstream.
A NULL pointer dereference kernel bug was observed when the
acpi_nfit_add_dimm() call in acpi_nfit_register_dimms() failed. This
error path does not set nfit_mem->nvdimm, but the 2nd
list_for_each_entry() loop in the function assumes it's always set. Add
a check for nfit_mem->nvdimm.
Fixes: ba9c8dd3c222 ("acpi, nfit: add dimm device notification support")
Signed-off-by: Toshi Kani <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/acpi/nfit/core.c | 3 +++
1 file changed, 3 insertions(+)
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -1867,6 +1867,9 @@ static int acpi_nfit_register_dimms(stru
struct kernfs_node *nfit_kernfs;
nvdimm = nfit_mem->nvdimm;
+ if (!nvdimm)
+ continue;
+
nfit_kernfs = sysfs_get_dirent(nvdimm_kobj(nvdimm)->sd, "nfit");
if (nfit_kernfs)
nfit_mem->flags_attr = sysfs_get_dirent(nfit_kernfs,
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Greg Kroah-Hartman <[email protected]>
commit 43cdd1b716b26f6af16da4e145b6578f98798bf6 upstream.
There's no need to be printing a raw kernel pointer to the kernel log at
every boot. So just remove it, and change the whole message to use the
correct dev_info() call at the same time.
Reported-by: Wang Qize <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/acpi/sbshc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/acpi/sbshc.c
+++ b/drivers/acpi/sbshc.c
@@ -275,8 +275,8 @@ static int acpi_smbus_hc_add(struct acpi
device->driver_data = hc;
acpi_ec_add_query_handler(hc->ec, hc->query_bit, NULL, smbus_alarm, hc);
- printk(KERN_INFO PREFIX "SBS HC: EC = 0x%p, offset = 0x%0x, query_bit = 0x%0x\n",
- hc->ec, hc->offset, hc->query_bit);
+ dev_info(&device->dev, "SBS HC: offset = 0x%0x, query_bit = 0x%0x\n",
+ hc->offset, hc->query_bit);
return 0;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amir Goldstein <[email protected]>
commit 2ba9d57e65044859f7ff133bcb0a902769bf3bc6 upstream.
There are several write operations on upper fs not covered by
mnt_want_write():
- test set/remove OPAQUE xattr
- test create O_TMPFILE
- set ORIGIN xattr in ovl_verify_origin()
- cleanup of index entries in ovl_indexdir_cleanup()
Some of these go way back, but this patch only applies over the
v4.14 re-factoring of ovl_fill_super().
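Each of the paths above is now wrapped in the usual pairing; a condensed
sketch of the pattern (hypothetical helper, not a literal hunk from this
patch):
/* Sketch only: take write access on the upper mount around the
 * upper-fs modifications and drop it on every exit path. */
static int ovl_do_upper_setup(struct vfsmount *mnt)
{
	int err;

	err = mnt_want_write(mnt);
	if (err)
		return err;

	/* ... set/remove xattrs, create work/index dirs, clean up index ... */

	mnt_drop_write(mnt);
	return err;
}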
Signed-off-by: Amir Goldstein <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/overlayfs/super.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)
--- a/fs/overlayfs/super.c
+++ b/fs/overlayfs/super.c
@@ -520,10 +520,6 @@ static struct dentry *ovl_workdir_create
bool retried = false;
bool locked = false;
- err = mnt_want_write(mnt);
- if (err)
- goto out_err;
-
inode_lock_nested(dir, I_MUTEX_PARENT);
locked = true;
@@ -588,7 +584,6 @@ retry:
goto out_err;
}
out_unlock:
- mnt_drop_write(mnt);
if (locked)
inode_unlock(dir);
@@ -930,12 +925,17 @@ out:
static int ovl_make_workdir(struct ovl_fs *ofs, struct path *workpath)
{
+ struct vfsmount *mnt = ofs->upper_mnt;
struct dentry *temp;
int err;
+ err = mnt_want_write(mnt);
+ if (err)
+ return err;
+
ofs->workdir = ovl_workdir_create(ofs, OVL_WORKDIR_NAME, false);
if (!ofs->workdir)
- return 0;
+ goto out;
/*
* Upper should support d_type, else whiteouts are visible. Given
@@ -945,7 +945,7 @@ static int ovl_make_workdir(struct ovl_f
*/
err = ovl_check_d_type_supported(workpath);
if (err < 0)
- return err;
+ goto out;
/*
* We allowed this configuration and don't want to break users over
@@ -969,6 +969,7 @@ static int ovl_make_workdir(struct ovl_f
if (err) {
ofs->noxattr = true;
pr_warn("overlayfs: upper fs does not support xattr.\n");
+ err = 0;
} else {
vfs_removexattr(ofs->workdir, OVL_XATTR_OPAQUE);
}
@@ -980,7 +981,9 @@ static int ovl_make_workdir(struct ovl_f
pr_warn("overlayfs: upper fs does not support file handles, falling back to index=off.\n");
}
- return 0;
+out:
+ mnt_drop_write(mnt);
+ return err;
}
static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
@@ -1027,8 +1030,13 @@ out:
static int ovl_get_indexdir(struct ovl_fs *ofs, struct ovl_entry *oe,
struct path *upperpath)
{
+ struct vfsmount *mnt = ofs->upper_mnt;
int err;
+ err = mnt_want_write(mnt);
+ if (err)
+ return err;
+
/* Verify lower root is upper root origin */
err = ovl_verify_origin(upperpath->dentry, oe->lowerstack[0].dentry,
false, true);
@@ -1056,6 +1064,7 @@ static int ovl_get_indexdir(struct ovl_f
pr_warn("overlayfs: try deleting index dir or mounting with '-o index=off' to disable inodes index.\n");
out:
+ mnt_drop_write(mnt);
return err;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <[email protected]>
commit 3bd6f43f5cb3714f70c591514f344389df593501 upstream.
If scsi_eh_scmd_add() is called concurrently with
scsi_host_queue_ready() while shost->host_blocked > 0 then it can
happen that neither function wakes up the SCSI error handler. Fix
this by making every function that decreases the host_busy counter
wake up the error handler if necessary and by protecting the
host_failed checks with the SCSI host lock.
Reported-by: Pavel Tikhomirov <[email protected]>
References: https://marc.info/?l=linux-kernel&m=150461610630736
Fixes: 746650160866 ("scsi: convert host_busy to atomic_t")
Signed-off-by: Bart Van Assche <[email protected]>
Reviewed-by: Pavel Tikhomirov <[email protected]>
Tested-by: Stuart Hayes <[email protected]>
Cc: Konstantin Khorenko <[email protected]>
Cc: Stuart Hayes <[email protected]>
Cc: Pavel Tikhomirov <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: Johannes Thumshirn <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/scsi/hosts.c | 6 ++++++
drivers/scsi/scsi_error.c | 18 ++++++++++++++++--
drivers/scsi/scsi_lib.c | 39 ++++++++++++++++++++++++++++-----------
include/scsi/scsi_host.h | 2 ++
4 files changed, 52 insertions(+), 13 deletions(-)
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -318,6 +318,9 @@ static void scsi_host_dev_release(struct
scsi_proc_hostdir_rm(shost->hostt);
+ /* Wait for functions invoked through call_rcu(&shost->rcu, ...) */
+ rcu_barrier();
+
if (shost->tmf_work_q)
destroy_workqueue(shost->tmf_work_q);
if (shost->ehandler)
@@ -325,6 +328,8 @@ static void scsi_host_dev_release(struct
if (shost->work_q)
destroy_workqueue(shost->work_q);
+ destroy_rcu_head(&shost->rcu);
+
if (shost->shost_state == SHOST_CREATED) {
/*
* Free the shost_dev device name here if scsi_host_alloc()
@@ -399,6 +404,7 @@ struct Scsi_Host *scsi_host_alloc(struct
INIT_LIST_HEAD(&shost->starved_list);
init_waitqueue_head(&shost->host_wait);
mutex_init(&shost->scan_mutex);
+ init_rcu_head(&shost->rcu);
index = ida_simple_get(&host_index_ida, 0, 0, GFP_KERNEL);
if (index < 0)
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -220,6 +220,17 @@ static void scsi_eh_reset(struct scsi_cm
}
}
+static void scsi_eh_inc_host_failed(struct rcu_head *head)
+{
+ struct Scsi_Host *shost = container_of(head, typeof(*shost), rcu);
+ unsigned long flags;
+
+ spin_lock_irqsave(shost->host_lock, flags);
+ shost->host_failed++;
+ scsi_eh_wakeup(shost);
+ spin_unlock_irqrestore(shost->host_lock, flags);
+}
+
/**
* scsi_eh_scmd_add - add scsi cmd to error handling.
* @scmd: scmd to run eh on.
@@ -242,9 +253,12 @@ void scsi_eh_scmd_add(struct scsi_cmnd *
scsi_eh_reset(scmd);
list_add_tail(&scmd->eh_entry, &shost->eh_cmd_q);
- shost->host_failed++;
- scsi_eh_wakeup(shost);
spin_unlock_irqrestore(shost->host_lock, flags);
+ /*
+ * Ensure that all tasks observe the host state change before the
+ * host_failed change.
+ */
+ call_rcu(&shost->rcu, scsi_eh_inc_host_failed);
}
/**
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -318,22 +318,39 @@ static void scsi_init_cmd_errh(struct sc
cmd->cmd_len = scsi_command_size(cmd->cmnd);
}
-void scsi_device_unbusy(struct scsi_device *sdev)
+/*
+ * Decrement the host_busy counter and wake up the error handler if necessary.
+ * Avoid as follows that the error handler is not woken up if shost->host_busy
+ * == shost->host_failed: use call_rcu() in scsi_eh_scmd_add() in combination
+ * with an RCU read lock in this function to ensure that this function in its
+ * entirety either finishes before scsi_eh_scmd_add() increases the
+ * host_failed counter or that it notices the shost state change made by
+ * scsi_eh_scmd_add().
+ */
+static void scsi_dec_host_busy(struct Scsi_Host *shost)
{
- struct Scsi_Host *shost = sdev->host;
- struct scsi_target *starget = scsi_target(sdev);
unsigned long flags;
+ rcu_read_lock();
atomic_dec(&shost->host_busy);
- if (starget->can_queue > 0)
- atomic_dec(&starget->target_busy);
-
- if (unlikely(scsi_host_in_recovery(shost) &&
- (shost->host_failed || shost->host_eh_scheduled))) {
+ if (unlikely(scsi_host_in_recovery(shost))) {
spin_lock_irqsave(shost->host_lock, flags);
- scsi_eh_wakeup(shost);
+ if (shost->host_failed || shost->host_eh_scheduled)
+ scsi_eh_wakeup(shost);
spin_unlock_irqrestore(shost->host_lock, flags);
}
+ rcu_read_unlock();
+}
+
+void scsi_device_unbusy(struct scsi_device *sdev)
+{
+ struct Scsi_Host *shost = sdev->host;
+ struct scsi_target *starget = scsi_target(sdev);
+
+ scsi_dec_host_busy(shost);
+
+ if (starget->can_queue > 0)
+ atomic_dec(&starget->target_busy);
atomic_dec(&sdev->device_busy);
}
@@ -1532,7 +1549,7 @@ starved:
list_add_tail(&sdev->starved_entry, &shost->starved_list);
spin_unlock_irq(shost->host_lock);
out_dec:
- atomic_dec(&shost->host_busy);
+ scsi_dec_host_busy(shost);
return 0;
}
@@ -2020,7 +2037,7 @@ static blk_status_t scsi_queue_rq(struct
return BLK_STS_OK;
out_dec_host_busy:
- atomic_dec(&shost->host_busy);
+ scsi_dec_host_busy(shost);
out_dec_target_busy:
if (scsi_target(sdev)->can_queue > 0)
atomic_dec(&scsi_target(sdev)->target_busy);
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -571,6 +571,8 @@ struct Scsi_Host {
struct blk_mq_tag_set tag_set;
};
+ struct rcu_head rcu;
+
atomic_t host_busy; /* commands actually active on low-level */
atomic_t host_blocked;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric W. Biederman <[email protected]>
commit 0e88bb002a9b2ee8cc3cc9478ce2dc126f849696 upstream.
Set si_signo.
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: [email protected]
Fixes: 0983b31849bb ("sh: Wire up division and address error exceptions on SH-2A.")
Signed-off-by: "Eric W. Biederman" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/sh/kernel/traps_32.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/sh/kernel/traps_32.c
+++ b/arch/sh/kernel/traps_32.c
@@ -609,7 +609,8 @@ asmlinkage void do_divide_error(unsigned
break;
}
- force_sig_info(SIGFPE, &info, current);
+ info.si_signo = SIGFPE;
+ force_sig_info(info.si_signo, &info, current);
}
#endif
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric W. Biederman <[email protected]>
commit 500d58300571b6602341b041f97c082a461ef994 upstream.
While reviewing the signal sending on openrisc, the do_unaligned_access
function stood out because it is obviously wrong. A comment claims that
si_code was set above, when in fact si_code is never set, leading to a
random si_code being sent to userspace in the event of an unaligned
access.
Looking further, SIGBUS with BUS_ADRALN is the proper pair of signal and
si_code to send for an unaligned access. That is what other
architectures do and what is required by POSIX.
Given that do_unaligned_access is broken in a way that no one can be
relying on, fix the openrisc code to just do the right thing.
Fixes: 769a8a96229e ("OpenRISC: Traps")
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: [email protected]
Acked-by: Stafford Horne <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/openrisc/kernel/traps.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
--- a/arch/openrisc/kernel/traps.c
+++ b/arch/openrisc/kernel/traps.c
@@ -266,12 +266,12 @@ asmlinkage void do_unaligned_access(stru
siginfo_t info;
if (user_mode(regs)) {
- /* Send a SIGSEGV */
- info.si_signo = SIGSEGV;
+ /* Send a SIGBUS */
+ info.si_signo = SIGBUS;
info.si_errno = 0;
- /* info.si_code has been set above */
- info.si_addr = (void *)address;
- force_sig_info(SIGSEGV, &info, current);
+ info.si_code = BUS_ADRALN;
+ info.si_addr = (void __user *)address;
+ force_sig_info(SIGBUS, &info, current);
} else {
printk("KERNEL: Unaligned Access 0x%.8lx\n", address);
show_registers(regs);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt (VMware) <[email protected]>
commit 7b6586562708d2b3a04fe49f217ddbadbbbb0546 upstream.
__unregister_ftrace_function_probe() will incorrectly parse the glob filter
because it resets the search variable that was set up by filter_parse_regex().
Al Viro reported this:
After that call of filter_parse_regex() we could have func_g.search not
equal to glob only if glob started with '!' or '*'. In the former case
we would've buggered off with -EINVAL (not = 1). In the latter we
would've set func_g.search equal to glob + 1, calculated the length of
that thing in func_g.len and proceeded to reset func_g.search back to
glob.
Suppose the glob is e.g. *foo*. We end up with
func_g.type = MATCH_MIDDLE_ONLY;
func_g.len = 3;
func_g.search = "*foo";
Feeding that to ftrace_match_record() will not do anything sane - we
will be looking for names containing "*foo" (->len is ignored for that
one).
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 3ba009297149f ("ftrace: Introduce ftrace_glob structure")
Reviewed-by: Dmitry Safonov <[email protected]>
Reviewed-by: Masami Hiramatsu <[email protected]>
Reported-by: Al Viro <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/trace/ftrace.c | 1 -
1 file changed, 1 deletion(-)
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4456,7 +4456,6 @@ unregister_ftrace_function_probe_func(ch
func_g.type = filter_parse_regex(glob, strlen(glob),
&func_g.search, ¬);
func_g.len = strlen(func_g.search);
- func_g.search = glob;
/* we do not support '!' for function probes */
if (WARN_ON(not))
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mikulas Patocka <[email protected]>
commit 21ffceda1c8b3807615c40d440d7815e0c85d366 upstream.
On alpha, a process will crash if it attempts to start a thread and a
signal is delivered at the same time. The crash can be reproduced with
this program: https://cygwin.com/ml/cygwin/2014-11/msg00473.html
The reason for the crash is this:
* we call the clone syscall
* we go to the function copy_process
* copy process calls copy_thread_tls, it is a wrapper around copy_thread
* copy_thread sets the tls pointer: childti->pcb.unique = regs->r20
* copy_thread sets regs->r20 to zero
* we go back to copy_process
* copy process checks "if (signal_pending(current))" and returns
-ERESTARTNOINTR
* the clone syscall is restarted, but this time, regs->r20 is zero, so
the new thread is created with zero tls pointer
* the new thread crashes in start_thread when attempting to access tls
The comment in the code says that setting the register r20 is some
compatibility with OSF/1. But OSF/1 doesn't use the CLONE_SETTLS flag, so
we don't have to zero r20 if CLONE_SETTLS is set. This patch fixes the bug
by zeroing regs->r20 only if CLONE_SETTLS is not set.
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Matt Turner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/alpha/kernel/process.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/alpha/kernel/process.c
+++ b/arch/alpha/kernel/process.c
@@ -269,12 +269,13 @@ copy_thread(unsigned long clone_flags, u
application calling fork. */
if (clone_flags & CLONE_SETTLS)
childti->pcb.unique = regs->r20;
+ else
+ regs->r20 = 0; /* OSF/1 has some strange fork() semantics. */
childti->pcb.usp = usp ?: rdusp();
*childregs = *regs;
childregs->r0 = 0;
childregs->r19 = 0;
childregs->r20 = 1; /* OSF/1 has some strange fork() semantics. */
- regs->r20 = 0;
stack = ((struct switch_stack *) regs) - 1;
*childstack = *stack;
childstack->r26 = (unsigned long) ret_from_fork;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric W. Biederman <[email protected]>
commit 6ac1dc736b323011a55ecd1fc5897c24c4f77cbd upstream.
Setting si_code to 0 is the same as setting si_code to SI_USER, which is
definitely not correct. With si_code set to SI_USER, si_pid and si_uid will be
copied to userspace instead of si_addr, which is very wrong.
So fix this by using a sensible si_code (SEGV_MAPERR) for this failure.
Fixes: b920de1b77b7 ("mn10300: add the MN10300/AM33 architecture to the kernel")
Cc: David Howells <[email protected]>
Cc: Masakazu Urade <[email protected]>
Cc: Koichi Yasutake <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/mn10300/mm/misalignment.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/mn10300/mm/misalignment.c
+++ b/arch/mn10300/mm/misalignment.c
@@ -437,7 +437,7 @@ transfer_failed:
info.si_signo = SIGSEGV;
info.si_errno = 0;
- info.si_code = 0;
+ info.si_code = SEGV_MAPERR;
info.si_addr = (void *) regs->pc;
force_sig_info(SIGSEGV, &info, current);
return;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mikulas Patocka <[email protected]>
commit 55fc633c41a08ce9244ff5f528f420b16b1e04d6 upstream.
We need to define NEED_SRM_SAVE_RESTORE on the Avanti, otherwise we get a
machine check exception when attempting to reboot the machine.
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Matt Turner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/alpha/kernel/pci_impl.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/alpha/kernel/pci_impl.h
+++ b/arch/alpha/kernel/pci_impl.h
@@ -144,7 +144,8 @@ struct pci_iommu_arena
};
#if defined(CONFIG_ALPHA_SRM) && \
- (defined(CONFIG_ALPHA_CIA) || defined(CONFIG_ALPHA_LCA))
+ (defined(CONFIG_ALPHA_CIA) || defined(CONFIG_ALPHA_LCA) || \
+ defined(CONFIG_ALPHA_AVANTI))
# define NEED_SRM_SAVE_RESTORE
#else
# undef NEED_SRM_SAVE_RESTORE
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kai-Heng Feng <[email protected]>
commit 7d06d5895c159f64c46560dc258e553ad8670fe0 upstream.
This reverts commit fd865802c66bc451dc515ed89360f84376ce1a56.
This commit causes a regression on some QCA ROME chips. The USB device
reset happens in btusb_open(), hence firmware loading gets interrupted.
Furthermore, this commit stops working after commit
a0085f2510e8976614ad8f766b209448b385492f ("Bluetooth: btusb: driver to
enable the usb-wakeup feature"). The reset-resume quirk only gets enabled
in btusb_suspend() when the device is not a wakeup source.
If we really want to reset the USB device, we need to do it before
btusb_open(). Let's handle it in drivers/usb/core/quirks.c.
Cc: Leif Liddy <[email protected]>
Cc: Matthias Kaehlcke <[email protected]>
Cc: Brian Norris <[email protected]>
Cc: Daniel Drake <[email protected]>
Signed-off-by: Kai-Heng Feng <[email protected]>
Reviewed-by: Brian Norris <[email protected]>
Tested-by: Brian Norris <[email protected]>
Signed-off-by: Marcel Holtmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/bluetooth/btusb.c | 6 ------
1 file changed, 6 deletions(-)
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -3117,12 +3117,6 @@ static int btusb_probe(struct usb_interf
if (id->driver_info & BTUSB_QCA_ROME) {
data->setup_on_usb = btusb_setup_qca;
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
-
- /* QCA Rome devices lose their updated firmware over suspend,
- * but the USB hub doesn't notice any status change.
- * Explicitly request a device reset on resume.
- */
- set_bit(BTUSB_RESET_RESUME, &data->flags);
}
#ifdef CONFIG_BT_HCIBTUSB_RTL
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit c9cc8d01fb04117928830449388512a5047569c9 upstream.
If devpts_ptmx_path() returns an error code, then devpts_mntget()
dereferences an ERR_PTR():
BUG: unable to handle kernel paging request at fffffffffffffff5
IP: devpts_mntget+0x13f/0x280 fs/devpts/inode.c:173
Fix it by returning early in the error paths.
Reproducer:
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <sys/ioctl.h>
#define TIOCGPTPEER _IO('T', 0x41)
int main()
{
for (;;) {
int fd = open("/dev/ptmx", 0);
unshare(CLONE_NEWNS);
ioctl(fd, TIOCGPTPEER, 0);
}
}
Fixes: 311fc65c9fb9 ("pty: Repair TIOCGPTPEER")
Reported-by: syzbot <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/devpts/inode.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/devpts/inode.c
+++ b/fs/devpts/inode.c
@@ -168,11 +168,11 @@ struct vfsmount *devpts_mntget(struct fi
dput(path.dentry);
if (err) {
mntput(path.mnt);
- path.mnt = ERR_PTR(err);
+ return ERR_PTR(err);
}
if (DEVPTS_SB(path.mnt->mnt_sb) != fsi) {
mntput(path.mnt);
- path.mnt = ERR_PTR(-ENODEV);
+ return ERR_PTR(-ENODEV);
}
return path.mnt;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit 85c2dd5473b2718b4b63e74bfeb1ca876868e11f upstream.
pipe-user-pages-hard and pipe-user-pages-soft are only supposed to apply
to unprivileged users, as documented in both Documentation/sysctl/fs.txt
and the pipe(7) man page.
However, the capabilities are actually only checked when increasing a
pipe's size using F_SETPIPE_SZ, not when creating a new pipe. Therefore,
if pipe-user-pages-hard has been set, the root user can run into it and be
unable to create pipes. Similarly, if pipe-user-pages-soft has been set,
the root user can run into it and have their pipes limited to 1 page each.
Fix this by allowing the privileged override in both cases.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 759c01142a5d ("pipe: limit the per-user amount of pages allocated in pipes")
Signed-off-by: Eric Biggers <[email protected]>
Acked-by: Kees Cook <[email protected]>
Acked-by: Joe Lawrence <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: "Luis R . Rodriguez" <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Mikulas Patocka <[email protected]>
Cc: Willy Tarreau <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/pipe.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -618,6 +618,11 @@ static bool too_many_pipe_buffers_hard(u
return pipe_user_pages_hard && user_bufs >= pipe_user_pages_hard;
}
+static bool is_unprivileged_user(void)
+{
+ return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN);
+}
+
struct pipe_inode_info *alloc_pipe_info(void)
{
struct pipe_inode_info *pipe;
@@ -634,12 +639,12 @@ struct pipe_inode_info *alloc_pipe_info(
user_bufs = account_pipe_buffers(user, 0, pipe_bufs);
- if (too_many_pipe_buffers_soft(user_bufs)) {
+ if (too_many_pipe_buffers_soft(user_bufs) && is_unprivileged_user()) {
user_bufs = account_pipe_buffers(user, pipe_bufs, 1);
pipe_bufs = 1;
}
- if (too_many_pipe_buffers_hard(user_bufs))
+ if (too_many_pipe_buffers_hard(user_bufs) && is_unprivileged_user())
goto out_revert_acct;
pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer),
@@ -1069,7 +1074,7 @@ static long pipe_set_size(struct pipe_in
if (nr_pages > pipe->buffers &&
(too_many_pipe_buffers_hard(user_bufs) ||
too_many_pipe_buffers_soft(user_bufs)) &&
- !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) {
+ is_unprivileged_user()) {
ret = -EPERM;
goto out_revert_acct;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ulf Magnusson <[email protected]>
commit 57ea5f161a7de5b1913c212d04f57a175b159fdf upstream.
Commit 76d837a4c0f9 ("KVM: PPC: Book3S PR: Don't include SPAPR TCE code
on non-pseries platforms") added a reference to the globally undefined
symbol PPC_SERIES. Looking at the rest of the commit, PPC_PSERIES was
probably intended.
Change PPC_SERIES to PPC_PSERIES.
Discovered with the
https://github.com/ulfalizer/Kconfiglib/blob/master/examples/list_undefined.py
script.
Fixes: 76d837a4c0f9 ("KVM: PPC: Book3S PR: Don't include SPAPR TCE code on non-pseries platforms")
Signed-off-by: Ulf Magnusson <[email protected]>
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/powerpc/kvm/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -68,7 +68,7 @@ config KVM_BOOK3S_64
select KVM_BOOK3S_64_HANDLER
select KVM
select KVM_BOOK3S_PR_POSSIBLE if !KVM_BOOK3S_HV_POSSIBLE
- select SPAPR_TCE_IOMMU if IOMMU_SUPPORT && (PPC_SERIES || PPC_POWERNV)
+ select SPAPR_TCE_IOMMU if IOMMU_SUPPORT && (PPC_PSERIES || PPC_POWERNV)
---help---
Support running unmodified book3s_64 and book3s_32 guest kernels
in virtual machines on book3s_64 host processors.
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sascha Hauer <[email protected]>
commit f78e5623f45bab2b726eec29dc5cefbbab2d0b1c upstream.
The fastmap update code might erase the current fastmap anchor PEB
in case it doesn't find any new free PEB. When a power cut happens
in this situation, we must not have any outdated fastmap anchor PEB
on the device, because it would be used to attach during the next
boot.
The easiest way to make that sure is to erase all outdated fastmap
anchor PEBs synchronously during attach.
Signed-off-by: Sascha Hauer <[email protected]>
Reviewed-by: Richard Weinberger <[email protected]>
Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/mtd/ubi/wl.c | 77 +++++++++++++++++++++++++++++++++++++--------------
1 file changed, 57 insertions(+), 20 deletions(-)
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -1529,6 +1529,46 @@ static void shutdown_work(struct ubi_dev
}
/**
+ * erase_aeb - erase a PEB given in UBI attach info PEB
+ * @ubi: UBI device description object
+ * @aeb: UBI attach info PEB
+ * @sync: If true, erase synchronously. Otherwise schedule for erasure
+ */
+static int erase_aeb(struct ubi_device *ubi, struct ubi_ainf_peb *aeb, bool sync)
+{
+ struct ubi_wl_entry *e;
+ int err;
+
+ e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
+ if (!e)
+ return -ENOMEM;
+
+ e->pnum = aeb->pnum;
+ e->ec = aeb->ec;
+ ubi->lookuptbl[e->pnum] = e;
+
+ if (sync) {
+ err = sync_erase(ubi, e, false);
+ if (err)
+ goto out_free;
+
+ wl_tree_add(e, &ubi->free);
+ ubi->free_count++;
+ } else {
+ err = schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0, false);
+ if (err)
+ goto out_free;
+ }
+
+ return 0;
+
+out_free:
+ wl_entry_destroy(ubi, e);
+
+ return err;
+}
+
+/**
* ubi_wl_init - initialize the WL sub-system using attaching information.
* @ubi: UBI device description object
* @ai: attaching information
@@ -1566,17 +1606,9 @@ int ubi_wl_init(struct ubi_device *ubi,
list_for_each_entry_safe(aeb, tmp, &ai->erase, u.list) {
cond_resched();
- e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
- if (!e)
- goto out_free;
-
- e->pnum = aeb->pnum;
- e->ec = aeb->ec;
- ubi->lookuptbl[e->pnum] = e;
- if (schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0, false)) {
- wl_entry_destroy(ubi, e);
+ err = erase_aeb(ubi, aeb, false);
+ if (err)
goto out_free;
- }
found_pebs++;
}
@@ -1635,6 +1667,8 @@ int ubi_wl_init(struct ubi_device *ubi,
ubi_assert(!ubi->lookuptbl[e->pnum]);
ubi->lookuptbl[e->pnum] = e;
} else {
+ bool sync = false;
+
/*
* Usually old Fastmap PEBs are scheduled for erasure
* and we don't have to care about them but if we face
@@ -1644,18 +1678,21 @@ int ubi_wl_init(struct ubi_device *ubi,
if (ubi->lookuptbl[aeb->pnum])
continue;
- e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
- if (!e)
- goto out_free;
+ /*
+ * The fastmap update code might not find a free PEB for
+ * writing the fastmap anchor to and then reuses the
+ * current fastmap anchor PEB. When this PEB gets erased
+ * and a power cut happens before it is written again we
+ * must make sure that the fastmap attach code doesn't
+ * find any outdated fastmap anchors, hence we erase the
+ * outdated fastmap anchor PEBs synchronously here.
+ */
+ if (aeb->vol_id == UBI_FM_SB_VOLUME_ID)
+ sync = true;
- e->pnum = aeb->pnum;
- e->ec = aeb->ec;
- ubi_assert(!ubi->lookuptbl[e->pnum]);
- ubi->lookuptbl[e->pnum] = e;
- if (schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0, false)) {
- wl_entry_destroy(ubi, e);
+ err = erase_aeb(ubi, aeb, sync);
+ if (err)
goto out_free;
- }
}
found_pebs++;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Konovalov <[email protected]>
commit 0e410e158e5baa1300bdf678cea4f4e0cf9d8b94 upstream.
With KASAN enabled, the kernel has two different memset() functions, one
with KASAN checks (memset) and one without (__memset). KASAN uses some
macro tricks to use the proper version where required. For example,
memset() calls in mm/slub.c are without KASAN checks, since they operate
on poisoned slab object metadata.
The issue is that clang emits memset() calls even when there is no
memset() in the source code. They get linked against the improper memset()
implementation, and the kernel fails to boot due to a huge number of KASAN
reports during the early boot stages.
The solution is to add the -fno-builtin flag for files with the
KASAN_SANITIZE := n marker.
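For illustration only (not from the patch), this is the kind of code where
a compiler may emit a memset() call that the source never wrote:
/* Illustrative only: there is no memset() in this source, yet the
 * compiler may lower the zero-initialization to a memset() call.
 * In a KASAN_SANITIZE := n file that call must not resolve to the
 * instrumented memset(), hence -fno-builtin. */
struct big_object {
	char buf[256];
};

void init_big_object(struct big_object *obj)
{
	*obj = (struct big_object){ };	/* may compile to memset(obj, 0, 256) */
}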
Link: http://lkml.kernel.org/r/8ffecfffe04088c52c42b92739c2bd8a0bcb3f5e.1516384594.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Acked-by: Nick Desaulniers <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
Makefile | 3 ++-
scripts/Makefile.kasan | 3 +++
scripts/Makefile.lib | 2 +-
3 files changed, 6 insertions(+), 2 deletions(-)
--- a/Makefile
+++ b/Makefile
@@ -432,7 +432,8 @@ export MAKE AWK GENKSYMS INSTALLKERNEL P
export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_KASAN CFLAGS_UBSAN
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
+export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -31,4 +31,7 @@ else
endif
CFLAGS_KASAN += $(call cc-option, -fsanitize-address-use-after-scope)
+
+CFLAGS_KASAN_NOSANITIZE := -fno-builtin
+
endif
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -121,7 +121,7 @@ endif
ifeq ($(CONFIG_KASAN),y)
_c_flags += $(if $(patsubst n%,, \
$(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
- $(CFLAGS_KASAN))
+ $(CFLAGS_KASAN), $(CFLAGS_KASAN_NOSANITIZE))
endif
ifeq ($(CONFIG_UBSAN),y)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 273caa260035c03d89ad63d72d8cd3d9e5c5e3f1 upstream.
If the device is of type VFL_TYPE_SUBDEV then vdev->ioctl_ops
is NULL so the 'if (!ops->vidioc_query_ext_ctrl)' check would crash.
Add a test for !ops to the condition.
All sub-devices that have controls will use the control framework,
so they do not have an equivalent to ops->vidioc_query_ext_ctrl.
Returning false if ops is NULL is the correct thing to do here.
Fixes: b8c601e8af ("v4l2-compat-ioctl32.c: fix ctrl_is_pointer")
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Reported-by: Laurent Pinchart <[email protected]>
Reviewed-by: Laurent Pinchart <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -770,7 +770,7 @@ static inline bool ctrl_is_pointer(struc
return ctrl && ctrl->is_ptr;
}
- if (!ops->vidioc_query_ext_ctrl)
+ if (!ops || !ops->vidioc_query_ext_ctrl)
return false;
return !ops->vidioc_query_ext_ctrl(file, fh, &qec) &&
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miquel Raynal <[email protected]>
commit f4c6cd1a7f2275d5bc0e494b21fff26f8dde80f0 upstream.
When the requested ECC strength does not exactly match the strengths
supported by the ECC engine, the driver selects the closest
strength meeting the 'selected_strength > requested_strength'
constraint. Fix the fact that, in this particular case, the ecc->strength
value was not updated to match the 'selected_strength'.
For instance, one can encounter this issue when no ECC requirement is
filled in the device tree while the NAND chip minimum requirement is not
a strength/step_size combo natively supported by the ECC engine.
Fixes: 1fef62c1423b ("mtd: nand: add sunxi NAND flash controller support")
Suggested-by: Boris Brezillon <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
Signed-off-by: Boris Brezillon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/mtd/nand/sunxi_nand.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -1853,8 +1853,14 @@ static int sunxi_nand_hw_common_ecc_ctrl
/* Add ECC info retrieval from DT */
for (i = 0; i < ARRAY_SIZE(strengths); i++) {
- if (ecc->strength <= strengths[i])
+ if (ecc->strength <= strengths[i]) {
+ /*
+ * Update ecc->strength value with the actual strength
+ * that will be used by the ECC engine.
+ */
+ ecc->strength = strengths[i];
break;
+ }
}
if (i >= ARRAY_SIZE(strengths)) {
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kamal Dasu <[email protected]>
commit f953f0f89663c39f08f4baaa8a4a881401b65654 upstream.
The Brcm NAND controller's prefetch feature needs to be disabled
by default. Enabling it hurts performance on random reads as
well as DMA reads.
Signed-off-by: Kamal Dasu <[email protected]>
Fixes: 27c5b17cd1b1 ("mtd: nand: add NAND driver "library" for Broadcom STB NAND controller")
Acked-by: Florian Fainelli <[email protected]>
Signed-off-by: Boris Brezillon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/mtd/nand/brcmnand/brcmnand.c | 13 +++----------
1 file changed, 3 insertions(+), 10 deletions(-)
--- a/drivers/mtd/nand/brcmnand/brcmnand.c
+++ b/drivers/mtd/nand/brcmnand/brcmnand.c
@@ -2193,16 +2193,9 @@ static int brcmnand_setup_dev(struct brc
if (ctrl->nand_version >= 0x0702)
tmp |= ACC_CONTROL_RD_ERASED;
tmp &= ~ACC_CONTROL_FAST_PGM_RDIN;
- if (ctrl->features & BRCMNAND_HAS_PREFETCH) {
- /*
- * FIXME: Flash DMA + prefetch may see spurious erased-page ECC
- * errors
- */
- if (has_flash_dma(ctrl))
- tmp &= ~ACC_CONTROL_PREFETCH;
- else
- tmp |= ACC_CONTROL_PREFETCH;
- }
+ if (ctrl->features & BRCMNAND_HAS_PREFETCH)
+ tmp &= ~ACC_CONTROL_PREFETCH;
+
nand_writereg(ctrl, offs, tmp);
return 0;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 169f24ca68bf0f247d111aef07af00dd3a02ae88 upstream.
There is nothing wrong with using an unknown buffer type. So
stop spamming the kernel log whenever this happens. The kernel
will just return -EINVAL to signal this.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 4 ----
1 file changed, 4 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -179,8 +179,6 @@ static int __get_v4l2_format32(struct v4
return copy_from_user(&kp->fmt.meta, &up->fmt.meta,
sizeof(kp->fmt.meta)) ? -EFAULT : 0;
default:
- pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
- kp->type);
return -EINVAL;
}
}
@@ -233,8 +231,6 @@ static int __put_v4l2_format32(struct v4
return copy_to_user(&up->fmt.meta, &kp->fmt.meta,
sizeof(kp->fmt.meta)) ? -EFAULT : 0;
default:
- pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
- kp->type);
return -EINVAL;
}
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit b8c601e8af2d08f733d74defa8465303391bb930 upstream.
ctrl_is_pointer just hardcoded two known string controls, but that
caused problems when using e.g. custom controls that use a pointer
for the payload.
Reimplement this function: it now finds the v4l2_ctrl (if the driver
uses the control framework) or it calls vidioc_query_ext_ctrl (if the
driver implements that directly).
In both cases it can now check if the control is a pointer control
or not.
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 59 +++++++++++++++++---------
1 file changed, 39 insertions(+), 20 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -18,6 +18,8 @@
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>
#include <media/v4l2-dev.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-ctrls.h>
#include <media/v4l2-ioctl.h>
static long native_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
@@ -601,24 +603,39 @@ struct v4l2_ext_control32 {
};
} __attribute__ ((packed));
-/* The following function really belong in v4l2-common, but that causes
- a circular dependency between modules. We need to think about this, but
- for now this will do. */
-
-/* Return non-zero if this control is a pointer type. Currently only
- type STRING is a pointer type. */
-static inline int ctrl_is_pointer(u32 id)
-{
- switch (id) {
- case V4L2_CID_RDS_TX_PS_NAME:
- case V4L2_CID_RDS_TX_RADIO_TEXT:
- return 1;
- default:
- return 0;
+/* Return true if this control is a pointer type. */
+static inline bool ctrl_is_pointer(struct file *file, u32 id)
+{
+ struct video_device *vdev = video_devdata(file);
+ struct v4l2_fh *fh = NULL;
+ struct v4l2_ctrl_handler *hdl = NULL;
+ struct v4l2_query_ext_ctrl qec = { id };
+ const struct v4l2_ioctl_ops *ops = vdev->ioctl_ops;
+
+ if (test_bit(V4L2_FL_USES_V4L2_FH, &vdev->flags))
+ fh = file->private_data;
+
+ if (fh && fh->ctrl_handler)
+ hdl = fh->ctrl_handler;
+ else if (vdev->ctrl_handler)
+ hdl = vdev->ctrl_handler;
+
+ if (hdl) {
+ struct v4l2_ctrl *ctrl = v4l2_ctrl_find(hdl, id);
+
+ return ctrl && ctrl->is_ptr;
}
+
+ if (!ops->vidioc_query_ext_ctrl)
+ return false;
+
+ return !ops->vidioc_query_ext_ctrl(file, fh, &qec) &&
+ (qec.flags & V4L2_CTRL_FLAG_HAS_PAYLOAD);
}
-static int get_v4l2_ext_controls32(struct v4l2_ext_controls *kp, struct v4l2_ext_controls32 __user *up)
+static int get_v4l2_ext_controls32(struct file *file,
+ struct v4l2_ext_controls *kp,
+ struct v4l2_ext_controls32 __user *up)
{
struct v4l2_ext_control32 __user *ucontrols;
struct v4l2_ext_control __user *kcontrols;
@@ -651,7 +668,7 @@ static int get_v4l2_ext_controls32(struc
return -EFAULT;
if (get_user(id, &kcontrols->id))
return -EFAULT;
- if (ctrl_is_pointer(id)) {
+ if (ctrl_is_pointer(file, id)) {
void __user *s;
if (get_user(p, &ucontrols->string))
@@ -666,7 +683,9 @@ static int get_v4l2_ext_controls32(struc
return 0;
}
-static int put_v4l2_ext_controls32(struct v4l2_ext_controls *kp, struct v4l2_ext_controls32 __user *up)
+static int put_v4l2_ext_controls32(struct file *file,
+ struct v4l2_ext_controls *kp,
+ struct v4l2_ext_controls32 __user *up)
{
struct v4l2_ext_control32 __user *ucontrols;
struct v4l2_ext_control __user *kcontrols =
@@ -698,7 +717,7 @@ static int put_v4l2_ext_controls32(struc
/* Do not modify the pointer when copying a pointer control.
The contents of the pointer was changed, not the pointer
itself. */
- if (ctrl_is_pointer(id))
+ if (ctrl_is_pointer(file, id))
size -= sizeof(ucontrols->value64);
if (copy_in_user(ucontrols, kcontrols, size))
return -EFAULT;
@@ -912,7 +931,7 @@ static long do_video_ioctl(struct file *
case VIDIOC_G_EXT_CTRLS:
case VIDIOC_S_EXT_CTRLS:
case VIDIOC_TRY_EXT_CTRLS:
- err = get_v4l2_ext_controls32(&karg.v2ecs, up);
+ err = get_v4l2_ext_controls32(file, &karg.v2ecs, up);
compatible_arg = 0;
break;
case VIDIOC_DQEVENT:
@@ -939,7 +958,7 @@ static long do_video_ioctl(struct file *
case VIDIOC_G_EXT_CTRLS:
case VIDIOC_S_EXT_CTRLS:
case VIDIOC_TRY_EXT_CTRLS:
- if (put_v4l2_ext_controls32(&karg.v2ecs, up))
+ if (put_v4l2_ext_controls32(file, &karg.v2ecs, up))
err = -EFAULT;
break;
case VIDIOC_S_EDID:
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans Verkuil <[email protected]>
commit 333b1e9f96ce05f7498b581509bb30cde03018bf upstream.
Instead of doing sizeof(struct foo), use sizeof(*up). There were even
cases where 4 * sizeof(__u32) was used instead of sizeof(kp->reserved),
which is very dangerous should the size of the reserved array ever change.
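The idiom being applied throughout, sketched on a hypothetical structure:
/* Illustrative only: sizeof(*up) follows the pointed-to type if the
 * declaration ever changes, while a spelled-out struct name (or a
 * hand-counted 4 * sizeof(__u32)) can silently go stale. */
struct foo32 {
	__u32 value;
	__u32 reserved[4];
};

static int check_foo32(struct foo32 __user *up)
{
	if (!access_ok(VERIFY_READ, up, sizeof(*up)))	/* not sizeof(struct foo32) */
		return -EFAULT;
	return 0;
}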
Signed-off-by: Hans Verkuil <[email protected]>
Acked-by: Sakari Ailus <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 79 +++++++++++---------------
1 file changed, 36 insertions(+), 43 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -48,7 +48,7 @@ struct v4l2_window32 {
static int get_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
{
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_window32)) ||
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
copy_from_user(&kp->w, &up->w, sizeof(up->w)) ||
get_user(kp->field, &up->field) ||
get_user(kp->chromakey, &up->chromakey) ||
@@ -66,7 +66,7 @@ static int get_v4l2_window32(struct v4l2
if (get_user(p, &up->clips))
return -EFAULT;
uclips = compat_ptr(p);
- kclips = compat_alloc_user_space(n * sizeof(struct v4l2_clip));
+ kclips = compat_alloc_user_space(n * sizeof(*kclips));
kp->clips = kclips;
while (--n >= 0) {
if (copy_in_user(&kclips->c, &uclips->c, sizeof(uclips->c)))
@@ -164,14 +164,14 @@ static int __get_v4l2_format32(struct v4
static int get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
{
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_format32)))
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)))
return -EFAULT;
return __get_v4l2_format32(kp, up);
}
static int get_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up)
{
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_create_buffers32)) ||
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
copy_from_user(kp, up, offsetof(struct v4l2_create_buffers32, format)))
return -EFAULT;
return __get_v4l2_format32(&kp->format, &up->format);
@@ -218,14 +218,14 @@ static int __put_v4l2_format32(struct v4
static int put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
{
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_format32)))
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)))
return -EFAULT;
return __put_v4l2_format32(kp, up);
}
static int put_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up)
{
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_create_buffers32)) ||
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
copy_to_user(up, kp, offsetof(struct v4l2_create_buffers32, format)) ||
copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
return -EFAULT;
@@ -244,7 +244,7 @@ struct v4l2_standard32 {
static int get_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32 __user *up)
{
/* other fields are not set by the user, nor used by the driver */
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_standard32)) ||
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
get_user(kp->index, &up->index))
return -EFAULT;
return 0;
@@ -252,14 +252,14 @@ static int get_v4l2_standard32(struct v4
static int put_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32 __user *up)
{
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_standard32)) ||
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
put_user(kp->index, &up->index) ||
put_user(kp->id, &up->id) ||
- copy_to_user(up->name, kp->name, 24) ||
+ copy_to_user(up->name, kp->name, sizeof(up->name)) ||
copy_to_user(&up->frameperiod, &kp->frameperiod,
sizeof(kp->frameperiod)) ||
put_user(kp->framelines, &up->framelines) ||
- copy_to_user(up->reserved, kp->reserved, 4 * sizeof(__u32)))
+ copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
return -EFAULT;
return 0;
}
@@ -307,7 +307,7 @@ static int get_v4l2_plane32(struct v4l2_
if (copy_in_user(up, up32, 2 * sizeof(__u32)) ||
copy_in_user(&up->data_offset, &up32->data_offset,
- sizeof(__u32)))
+ sizeof(up->data_offset)))
return -EFAULT;
if (memory == V4L2_MEMORY_USERPTR) {
@@ -317,11 +317,11 @@ static int get_v4l2_plane32(struct v4l2_
if (put_user((unsigned long)up_pln, &up->m.userptr))
return -EFAULT;
} else if (memory == V4L2_MEMORY_DMABUF) {
- if (copy_in_user(&up->m.fd, &up32->m.fd, sizeof(int)))
+ if (copy_in_user(&up->m.fd, &up32->m.fd, sizeof(up32->m.fd)))
return -EFAULT;
} else {
if (copy_in_user(&up->m.mem_offset, &up32->m.mem_offset,
- sizeof(__u32)))
+ sizeof(up32->m.mem_offset)))
return -EFAULT;
}
@@ -333,19 +333,19 @@ static int put_v4l2_plane32(struct v4l2_
{
if (copy_in_user(up32, up, 2 * sizeof(__u32)) ||
copy_in_user(&up32->data_offset, &up->data_offset,
- sizeof(__u32)))
+ sizeof(up->data_offset)))
return -EFAULT;
/* For MMAP, driver might've set up the offset, so copy it back.
* USERPTR stays the same (was userspace-provided), so no copying. */
if (memory == V4L2_MEMORY_MMAP)
if (copy_in_user(&up32->m.mem_offset, &up->m.mem_offset,
- sizeof(__u32)))
+ sizeof(up->m.mem_offset)))
return -EFAULT;
/* For DMABUF, driver might've set up the fd, so copy it back. */
if (memory == V4L2_MEMORY_DMABUF)
if (copy_in_user(&up32->m.fd, &up->m.fd,
- sizeof(int)))
+ sizeof(up->m.fd)))
return -EFAULT;
return 0;
@@ -358,7 +358,7 @@ static int get_v4l2_buffer32(struct v4l2
compat_caddr_t p;
int ret;
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_buffer32)) ||
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
get_user(kp->index, &up->index) ||
get_user(kp->type, &up->type) ||
get_user(kp->flags, &up->flags) ||
@@ -370,8 +370,7 @@ static int get_v4l2_buffer32(struct v4l2
if (get_user(kp->bytesused, &up->bytesused) ||
get_user(kp->field, &up->field) ||
get_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
- get_user(kp->timestamp.tv_usec,
- &up->timestamp.tv_usec))
+ get_user(kp->timestamp.tv_usec, &up->timestamp.tv_usec))
return -EFAULT;
if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
@@ -391,13 +390,12 @@ static int get_v4l2_buffer32(struct v4l2
uplane32 = compat_ptr(p);
if (!access_ok(VERIFY_READ, uplane32,
- kp->length * sizeof(struct v4l2_plane32)))
+ kp->length * sizeof(*uplane32)))
return -EFAULT;
/* We don't really care if userspace decides to kill itself
* by passing a very big num_planes value */
- uplane = compat_alloc_user_space(kp->length *
- sizeof(struct v4l2_plane));
+ uplane = compat_alloc_user_space(kp->length * sizeof(*uplane));
kp->m.planes = (__force struct v4l2_plane *)uplane;
for (num_planes = 0; num_planes < kp->length; num_planes++) {
@@ -445,7 +443,7 @@ static int put_v4l2_buffer32(struct v4l2
int num_planes;
int ret;
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_buffer32)) ||
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
put_user(kp->index, &up->index) ||
put_user(kp->type, &up->type) ||
put_user(kp->flags, &up->flags) ||
@@ -456,8 +454,7 @@ static int put_v4l2_buffer32(struct v4l2
put_user(kp->field, &up->field) ||
put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
put_user(kp->timestamp.tv_usec, &up->timestamp.tv_usec) ||
- copy_to_user(&up->timecode, &kp->timecode,
- sizeof(struct v4l2_timecode)) ||
+ copy_to_user(&up->timecode, &kp->timecode, sizeof(kp->timecode)) ||
put_user(kp->sequence, &up->sequence) ||
put_user(kp->reserved2, &up->reserved2) ||
put_user(kp->reserved, &up->reserved) ||
@@ -525,7 +522,7 @@ static int get_v4l2_framebuffer32(struct
{
u32 tmp;
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_framebuffer32)) ||
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
get_user(tmp, &up->base) ||
get_user(kp->capability, &up->capability) ||
get_user(kp->flags, &up->flags) ||
@@ -539,7 +536,7 @@ static int put_v4l2_framebuffer32(struct
{
u32 tmp = (u32)((unsigned long)kp->base);
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_framebuffer32)) ||
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
put_user(tmp, &up->base) ||
put_user(kp->capability, &up->capability) ||
put_user(kp->flags, &up->flags) ||
@@ -564,14 +561,14 @@ struct v4l2_input32 {
Otherwise it is identical to the 32-bit version. */
static inline int get_v4l2_input32(struct v4l2_input *kp, struct v4l2_input32 __user *up)
{
- if (copy_from_user(kp, up, sizeof(struct v4l2_input32)))
+ if (copy_from_user(kp, up, sizeof(*up)))
return -EFAULT;
return 0;
}
static inline int put_v4l2_input32(struct v4l2_input *kp, struct v4l2_input32 __user *up)
{
- if (copy_to_user(up, kp, sizeof(struct v4l2_input32)))
+ if (copy_to_user(up, kp, sizeof(*up)))
return -EFAULT;
return 0;
}
@@ -619,12 +616,11 @@ static int get_v4l2_ext_controls32(struc
unsigned int n;
compat_caddr_t p;
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_ext_controls32)) ||
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
get_user(kp->which, &up->which) ||
get_user(kp->count, &up->count) ||
get_user(kp->error_idx, &up->error_idx) ||
- copy_from_user(kp->reserved, up->reserved,
- sizeof(kp->reserved)))
+ copy_from_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
return -EFAULT;
if (kp->count == 0) {
kp->controls = NULL;
@@ -635,11 +631,9 @@ static int get_v4l2_ext_controls32(struc
if (get_user(p, &up->controls))
return -EFAULT;
ucontrols = compat_ptr(p);
- if (!access_ok(VERIFY_READ, ucontrols,
- kp->count * sizeof(struct v4l2_ext_control32)))
+ if (!access_ok(VERIFY_READ, ucontrols, kp->count * sizeof(*ucontrols)))
return -EFAULT;
- kcontrols = compat_alloc_user_space(kp->count *
- sizeof(struct v4l2_ext_control));
+ kcontrols = compat_alloc_user_space(kp->count * sizeof(*kcontrols));
kp->controls = (__force struct v4l2_ext_control *)kcontrols;
for (n = 0; n < kp->count; n++) {
u32 id;
@@ -671,7 +665,7 @@ static int put_v4l2_ext_controls32(struc
int n = kp->count;
compat_caddr_t p;
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_ext_controls32)) ||
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
put_user(kp->which, &up->which) ||
put_user(kp->count, &up->count) ||
put_user(kp->error_idx, &up->error_idx) ||
@@ -683,8 +677,7 @@ static int put_v4l2_ext_controls32(struc
if (get_user(p, &up->controls))
return -EFAULT;
ucontrols = compat_ptr(p);
- if (!access_ok(VERIFY_WRITE, ucontrols,
- n * sizeof(struct v4l2_ext_control32)))
+ if (!access_ok(VERIFY_WRITE, ucontrols, n * sizeof(*ucontrols)))
return -EFAULT;
while (--n >= 0) {
@@ -721,7 +714,7 @@ struct v4l2_event32 {
static int put_v4l2_event32(struct v4l2_event *kp, struct v4l2_event32 __user *up)
{
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_event32)) ||
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
put_user(kp->type, &up->type) ||
copy_to_user(&up->u, &kp->u, sizeof(kp->u)) ||
put_user(kp->pending, &up->pending) ||
@@ -729,7 +722,7 @@ static int put_v4l2_event32(struct v4l2_
put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
put_user(kp->timestamp.tv_nsec, &up->timestamp.tv_nsec) ||
put_user(kp->id, &up->id) ||
- copy_to_user(up->reserved, kp->reserved, 8 * sizeof(__u32)))
+ copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
return -EFAULT;
return 0;
}
@@ -746,7 +739,7 @@ static int get_v4l2_edid32(struct v4l2_e
{
u32 tmp;
- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_edid32)) ||
+ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
get_user(kp->pad, &up->pad) ||
get_user(kp->start_block, &up->start_block) ||
get_user(kp->blocks, &up->blocks) ||
@@ -761,7 +754,7 @@ static int put_v4l2_edid32(struct v4l2_e
{
u32 tmp = (u32)((unsigned long)kp->edid);
- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_edid32)) ||
+ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
put_user(kp->pad, &up->pad) ||
put_user(kp->start_block, &up->start_block) ||
put_user(kp->blocks, &up->blocks) ||
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit fa59b92d299f2787e6bae1ff078ee0982e80211f upstream.
When the mcryptd template is used to wrap an unkeyed hash algorithm,
don't install a ->setkey() method on the mcryptd instance. This change
is necessary for mcryptd to keep working with unkeyed hash algorithms
once we start enforcing that ->setkey() is called when present.
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
crypto/mcryptd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -535,7 +535,8 @@ static int mcryptd_create_hash(struct cr
inst->alg.finup = mcryptd_hash_finup_enqueue;
inst->alg.export = mcryptd_hash_export;
inst->alg.import = mcryptd_hash_import;
- inst->alg.setkey = mcryptd_hash_setkey;
+ if (crypto_hash_alg_has_setkey(halg))
+ inst->alg.setkey = mcryptd_hash_setkey;
inst->alg.digest = mcryptd_hash_digest_enqueue;
err = ahash_register_instance(tmpl, inst);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers <[email protected]>
commit cd6ed77ad5d223dc6299fb58f62e0f5267f7e2ba upstream.
Templates that use an shash spawn can use crypto_shash_alg_has_setkey()
to determine whether the underlying algorithm requires a key or not.
But there was no corresponding function for ahash spawns. Add it.
Note that the new function actually has to support both shash and ahash
algorithms, since the ahash API can be used with either.
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
crypto/ahash.c | 11 +++++++++++
include/crypto/internal/hash.h | 2 ++
2 files changed, 13 insertions(+)
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -649,5 +649,16 @@ struct hash_alg_common *ahash_attr_alg(s
}
EXPORT_SYMBOL_GPL(ahash_attr_alg);
+bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
+{
+ struct crypto_alg *alg = &halg->base;
+
+ if (alg->cra_type != &crypto_ahash_type)
+ return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));
+
+ return __crypto_ahash_alg(alg)->setkey != NULL;
+}
+EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);
+
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -90,6 +90,8 @@ static inline bool crypto_shash_alg_has_
return alg->setkey != shash_no_setkey;
}
+bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg);
+
int crypto_init_ahash_spawn(struct crypto_ahash_spawn *spawn,
struct hash_alg_common *alg,
struct crypto_instance *inst);
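For orientation, a minimal sketch of how a template might consume the new
helper, mirroring the mcryptd change earlier in this series;
example_setkey() and example_init_instance() are hypothetical names, not
part of the patch:

#include <crypto/internal/hash.h>

/* Hypothetical template fragment: only expose a setkey method when the
 * wrapped hash actually has one, so unkeyed hashes keep working once
 * setkey enforcement lands.
 */
static int example_setkey(struct crypto_ahash *tfm, const u8 *key,
			  unsigned int keylen)
{
	return 0;	/* placeholder */
}

static void example_init_instance(struct ahash_instance *inst,
				  struct hash_alg_common *halg)
{
	if (crypto_hash_alg_has_setkey(halg))
		inst->alg.setkey = example_setkey;
}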
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede <[email protected]>
commit 998008b779e424bd7513c434d0ab9c1268459009 upstream.
Add PCI ids for Intel Bay Trail, Cherry Trail and Apollo Lake AHCI
SATA controllers. This is a preparation patch for allowing a different
default SATA link power management policy on mobile chipsets.
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/ata/ahci.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -386,6 +386,10 @@ static const struct pci_device_id ahci_p
{ PCI_VDEVICE(INTEL, 0xa206), board_ahci }, /* Lewisburg RAID*/
{ PCI_VDEVICE(INTEL, 0xa252), board_ahci }, /* Lewisburg RAID*/
{ PCI_VDEVICE(INTEL, 0xa256), board_ahci }, /* Lewisburg RAID*/
+ { PCI_VDEVICE(INTEL, 0x0f22), board_ahci }, /* Bay Trail AHCI */
+ { PCI_VDEVICE(INTEL, 0x0f23), board_ahci }, /* Bay Trail AHCI */
+ { PCI_VDEVICE(INTEL, 0x22a3), board_ahci }, /* Cherry Trail AHCI */
+ { PCI_VDEVICE(INTEL, 0x5ae3), board_ahci }, /* Apollo Lake AHCI */
/* JMicron 360/1/3/5/6, match class to avoid IDE function */
{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Robin Murphy <[email protected]>
Commit 4d8efc2d5ee4 upstream.
Similarly to x86, mitigate speculation past an access_ok() check by
masking the pointer against the address limit before use.
Even if we don't expect speculative writes per se, it is plausible that
a CPU may still speculate at least as far as fetching a cache line for
writing, hence we also harden put_user() and clear_user() for peace of
mind.
Signed-off-by: Robin Murphy <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/uaccess.h | 26 +++++++++++++++++++++++---
1 file changed, 23 insertions(+), 3 deletions(-)
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -216,6 +216,26 @@ static inline void uaccess_enable_not_ua
}
/*
+ * Sanitise a uaccess pointer such that it becomes NULL if above the
+ * current addr_limit.
+ */
+#define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
+static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
+{
+ void __user *safe_ptr;
+
+ asm volatile(
+ " bics xzr, %1, %2\n"
+ " csel %0, %1, xzr, eq\n"
+ : "=&r" (safe_ptr)
+ : "r" (ptr), "r" (current_thread_info()->addr_limit)
+ : "cc");
+
+ csdb();
+ return safe_ptr;
+}
+
+/*
* The "__xxx" versions of the user access functions do not verify the address
* space - it must have been done previously with a separate "access_ok()"
* call.
@@ -285,7 +305,7 @@ do { \
__typeof__(*(ptr)) __user *__p = (ptr); \
might_fault(); \
access_ok(VERIFY_READ, __p, sizeof(*__p)) ? \
- __get_user((x), __p) : \
+ __p = uaccess_mask_ptr(__p), __get_user((x), __p) : \
((x) = 0, -EFAULT); \
})
@@ -349,7 +369,7 @@ do { \
__typeof__(*(ptr)) __user *__p = (ptr); \
might_fault(); \
access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ? \
- __put_user((x), __p) : \
+ __p = uaccess_mask_ptr(__p), __put_user((x), __p) : \
-EFAULT; \
})
@@ -365,7 +385,7 @@ extern unsigned long __must_check __clea
static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
{
if (access_ok(VERIFY_WRITE, to, n))
- n = __clear_user(to, n);
+ n = __clear_user(__uaccess_mask_ptr(to), n);
return n;
}
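As an aside, a portable C sketch of the masking idea (illustrative only,
names hypothetical): collapse the pointer to NULL when it lies above the
limit, without a data-dependent branch. The real arm64 code uses
BICS/CSEL plus CSDB precisely because a plain C version like this could
be compiled back into a predictable branch:

/* Illustrative only: all-ones mask when ptr <= limit, zero otherwise,
 * ANDed into the pointer so out-of-range pointers collapse to NULL.
 */
static inline void *mask_user_ptr(void *ptr, unsigned long limit)
{
	unsigned long p = (unsigned long)ptr;
	unsigned long mask = 0UL - (unsigned long)(p <= limit);

	return (void *)(p & mask);
}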
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Robin Murphy <[email protected]>
Commit 022620eed3d0 upstream.
Provide an optimised, assembly implementation of array_index_mask_nospec()
for arm64 so that the compiler is not in a position to transform the code
in ways which affect its ability to inhibit speculation (e.g. by introducing
conditional branches).
This is similar to the sequence used by x86, modulo architectural differences
in the carry/borrow flags.
Reviewed-by: Mark Rutland <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/barrier.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -41,6 +41,27 @@
#define dma_rmb() dmb(oshld)
#define dma_wmb() dmb(oshst)
+/*
+ * Generate a mask for array_index__nospec() that is ~0UL when 0 <= idx < sz
+ * and 0 otherwise.
+ */
+#define array_index_mask_nospec array_index_mask_nospec
+static inline unsigned long array_index_mask_nospec(unsigned long idx,
+ unsigned long sz)
+{
+ unsigned long mask;
+
+ asm volatile(
+ " cmp %1, %2\n"
+ " sbc %0, xzr, xzr\n"
+ : "=r" (mask)
+ : "r" (idx), "Ir" (sz)
+ : "cc");
+
+ csdb();
+ return mask;
+}
+
#define __smp_mb() dmb(ish)
#define __smp_rmb() dmb(ishld)
#define __smp_wmb() dmb(ishst)
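For context, a hedged usage sketch: callers typically consume this mask
through array_index_nospec() from <linux/nospec.h> after an ordinary
bounds check. The table and function below are hypothetical:

#include <linux/errno.h>
#include <linux/nospec.h>

/* Hypothetical bounds-checked lookup: the index is clamped under
 * speculation so a mispredicted branch cannot leak out-of-bounds data.
 */
static int example_lookup(const int *table, unsigned long idx,
			  unsigned long size)
{
	if (idx >= size)
		return -EINVAL;

	idx = array_index_nospec(idx, size);
	return table[idx];
}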
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 669474e772b9 upstream.
For CPUs capable of data value prediction, CSDB waits for any outstanding
predictions to architecturally resolve before allowing speculative execution
to continue. Provide macros to expose it to the arch code.
Reviewed-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Conflicts:
arch/arm64/include/asm/assembler.h
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/assembler.h | 7 +++++++
arch/arm64/include/asm/barrier.h | 1 +
2 files changed, 8 insertions(+)
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -109,6 +109,13 @@
.endm
/*
+ * Value prediction barrier
+ */
+ .macro csdb
+ hint #20
+ .endm
+
+/*
* NOP sequence
*/
.macro nops, num
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -32,6 +32,7 @@
#define dsb(opt) asm volatile("dsb " #opt : : : "memory")
#define psb_csync() asm volatile("hint #17" : : : "memory")
+#define csdb() asm volatile("hint #20" : : : "memory")
#define mb() dsb(sy)
#define rmb() dsb(ld)
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ivan Vecera <[email protected]>
commit ba87977a49913129962af8ac35b0e13e0fa4382d upstream.
Commit b7ce40cff0b9 ("kernfs: cache atomic_write_len in
kernfs_open_file") changes type of local variable 'len' from ssize_t
to size_t. This change caused that the *ppos value is updated also
when the previous write callback failed.
Mentioned snippet:
...
len = ops->write(...); <- return value can be negative
...
if (len > 0) <- true here in this case
*ppos += len;
...
Fixes: b7ce40cff0b9 ("kernfs: cache atomic_write_len in kernfs_open_file")
Acked-by: Tejun Heo <[email protected]>
Signed-off-by: Ivan Vecera <[email protected]>
Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/kernfs/file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -275,7 +275,7 @@ static ssize_t kernfs_fop_write(struct f
{
struct kernfs_open_file *of = kernfs_of(file);
const struct kernfs_ops *ops;
- size_t len;
+ ssize_t len;
char *buf;
if (of->atomic_write_len) {
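For illustration, a self-contained userspace demo of the signedness
pitfall being fixed (hypothetical values, not kernel code):

#include <stdio.h>

int main(void)
{
	long ret = -22;			/* e.g. -EINVAL returned by ->write() */
	size_t unsigned_len = ret;	/* old type: wraps to a huge value */
	long signed_len = ret;		/* fixed type: stays negative */

	if (unsigned_len > 0)
		printf("unsigned len: the error would still advance *ppos\n");
	if (signed_len > 0)
		printf("signed len: never reached on error\n");

	return 0;
}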
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit e231c6879cfd44e4fffd384bb6dd7d313249a523 upstream.
When locking the file in order to do O_DIRECT on it, we must unmap
any mmapped ranges on the pagecache so that we can flush out the
dirty data.
Fixes: a5864c999de67 ("NFS: Do not serialise O_DIRECT reads and writes")
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfs/io.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/nfs/io.c
+++ b/fs/nfs/io.c
@@ -99,7 +99,7 @@ static void nfs_block_buffered(struct nf
{
if (!test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
set_bit(NFS_INO_ODIRECT, &nfsi->flags);
- nfs_wb_all(inode);
+ nfs_sync_mapping(inode->i_mapping);
}
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit 8634ef5e05311f32d7f2aee06f6b27a8834a3bd6 upstream.
The LOOKUPP operation was inserted into the nfs4_procedures array
rather than being appended, which put /proc/net/rpc/nfs out of
whack, and broke the nfsstat utility.
Fix by moving the LOOKUPP operation to the end of the array, and
by ensuring that it keeps the same length whether or not NFSv4.1
and NFSv4.2 are compiled in.
Fixes: 5b5faaf6df734 ("nfs4: add NFSv4 LOOKUPP handlers")
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/nfs/nfs4xdr.c | 64 ++++++++++++++++++++++++++++++---------------------
include/linux/nfs4.h | 12 ++++++---
2 files changed, 46 insertions(+), 30 deletions(-)
--- a/fs/nfs/nfs4xdr.c
+++ b/fs/nfs/nfs4xdr.c
@@ -7678,6 +7678,22 @@ nfs4_stat_to_errno(int stat)
.p_name = #proc, \
}
+#if defined(CONFIG_NFS_V4_1)
+#define PROC41(proc, argtype, restype) \
+ PROC(proc, argtype, restype)
+#else
+#define PROC41(proc, argtype, restype) \
+ STUB(proc)
+#endif
+
+#if defined(CONFIG_NFS_V4_2)
+#define PROC42(proc, argtype, restype) \
+ PROC(proc, argtype, restype)
+#else
+#define PROC42(proc, argtype, restype) \
+ STUB(proc)
+#endif
+
const struct rpc_procinfo nfs4_procedures[] = {
PROC(READ, enc_read, dec_read),
PROC(WRITE, enc_write, dec_write),
@@ -7698,7 +7714,6 @@ const struct rpc_procinfo nfs4_procedure
PROC(ACCESS, enc_access, dec_access),
PROC(GETATTR, enc_getattr, dec_getattr),
PROC(LOOKUP, enc_lookup, dec_lookup),
- PROC(LOOKUPP, enc_lookupp, dec_lookupp),
PROC(LOOKUP_ROOT, enc_lookup_root, dec_lookup_root),
PROC(REMOVE, enc_remove, dec_remove),
PROC(RENAME, enc_rename, dec_rename),
@@ -7717,33 +7732,30 @@ const struct rpc_procinfo nfs4_procedure
PROC(RELEASE_LOCKOWNER, enc_release_lockowner, dec_release_lockowner),
PROC(SECINFO, enc_secinfo, dec_secinfo),
PROC(FSID_PRESENT, enc_fsid_present, dec_fsid_present),
-#if defined(CONFIG_NFS_V4_1)
- PROC(EXCHANGE_ID, enc_exchange_id, dec_exchange_id),
- PROC(CREATE_SESSION, enc_create_session, dec_create_session),
- PROC(DESTROY_SESSION, enc_destroy_session, dec_destroy_session),
- PROC(SEQUENCE, enc_sequence, dec_sequence),
- PROC(GET_LEASE_TIME, enc_get_lease_time, dec_get_lease_time),
- PROC(RECLAIM_COMPLETE, enc_reclaim_complete, dec_reclaim_complete),
- PROC(GETDEVICEINFO, enc_getdeviceinfo, dec_getdeviceinfo),
- PROC(LAYOUTGET, enc_layoutget, dec_layoutget),
- PROC(LAYOUTCOMMIT, enc_layoutcommit, dec_layoutcommit),
- PROC(LAYOUTRETURN, enc_layoutreturn, dec_layoutreturn),
- PROC(SECINFO_NO_NAME, enc_secinfo_no_name, dec_secinfo_no_name),
- PROC(TEST_STATEID, enc_test_stateid, dec_test_stateid),
- PROC(FREE_STATEID, enc_free_stateid, dec_free_stateid),
+ PROC41(EXCHANGE_ID, enc_exchange_id, dec_exchange_id),
+ PROC41(CREATE_SESSION, enc_create_session, dec_create_session),
+ PROC41(DESTROY_SESSION, enc_destroy_session, dec_destroy_session),
+ PROC41(SEQUENCE, enc_sequence, dec_sequence),
+ PROC41(GET_LEASE_TIME, enc_get_lease_time, dec_get_lease_time),
+ PROC41(RECLAIM_COMPLETE,enc_reclaim_complete, dec_reclaim_complete),
+ PROC41(GETDEVICEINFO, enc_getdeviceinfo, dec_getdeviceinfo),
+ PROC41(LAYOUTGET, enc_layoutget, dec_layoutget),
+ PROC41(LAYOUTCOMMIT, enc_layoutcommit, dec_layoutcommit),
+ PROC41(LAYOUTRETURN, enc_layoutreturn, dec_layoutreturn),
+ PROC41(SECINFO_NO_NAME, enc_secinfo_no_name, dec_secinfo_no_name),
+ PROC41(TEST_STATEID, enc_test_stateid, dec_test_stateid),
+ PROC41(FREE_STATEID, enc_free_stateid, dec_free_stateid),
STUB(GETDEVICELIST),
- PROC(BIND_CONN_TO_SESSION,
+ PROC41(BIND_CONN_TO_SESSION,
enc_bind_conn_to_session, dec_bind_conn_to_session),
- PROC(DESTROY_CLIENTID, enc_destroy_clientid, dec_destroy_clientid),
-#endif /* CONFIG_NFS_V4_1 */
-#ifdef CONFIG_NFS_V4_2
- PROC(SEEK, enc_seek, dec_seek),
- PROC(ALLOCATE, enc_allocate, dec_allocate),
- PROC(DEALLOCATE, enc_deallocate, dec_deallocate),
- PROC(LAYOUTSTATS, enc_layoutstats, dec_layoutstats),
- PROC(CLONE, enc_clone, dec_clone),
- PROC(COPY, enc_copy, dec_copy),
-#endif /* CONFIG_NFS_V4_2 */
+ PROC41(DESTROY_CLIENTID,enc_destroy_clientid, dec_destroy_clientid),
+ PROC42(SEEK, enc_seek, dec_seek),
+ PROC42(ALLOCATE, enc_allocate, dec_allocate),
+ PROC42(DEALLOCATE, enc_deallocate, dec_deallocate),
+ PROC42(LAYOUTSTATS, enc_layoutstats, dec_layoutstats),
+ PROC42(CLONE, enc_clone, dec_clone),
+ PROC42(COPY, enc_copy, dec_copy),
+ PROC(LOOKUPP, enc_lookupp, dec_lookupp),
};
static unsigned int nfs_version4_counts[ARRAY_SIZE(nfs4_procedures)];
--- a/include/linux/nfs4.h
+++ b/include/linux/nfs4.h
@@ -457,7 +457,12 @@ enum lock_type4 {
#define NFS4_DEBUG 1
-/* Index of predefined Linux client operations */
+/*
+ * Index of predefined Linux client operations
+ *
+ * To ensure that /proc/net/rpc/nfs remains correctly ordered, please
+ * append only to this enum when adding new client operations.
+ */
enum {
NFSPROC4_CLNT_NULL = 0, /* Unused */
@@ -480,7 +485,6 @@ enum {
NFSPROC4_CLNT_ACCESS,
NFSPROC4_CLNT_GETATTR,
NFSPROC4_CLNT_LOOKUP,
- NFSPROC4_CLNT_LOOKUPP,
NFSPROC4_CLNT_LOOKUP_ROOT,
NFSPROC4_CLNT_REMOVE,
NFSPROC4_CLNT_RENAME,
@@ -500,7 +504,6 @@ enum {
NFSPROC4_CLNT_SECINFO,
NFSPROC4_CLNT_FSID_PRESENT,
- /* nfs41 */
NFSPROC4_CLNT_EXCHANGE_ID,
NFSPROC4_CLNT_CREATE_SESSION,
NFSPROC4_CLNT_DESTROY_SESSION,
@@ -518,13 +521,14 @@ enum {
NFSPROC4_CLNT_BIND_CONN_TO_SESSION,
NFSPROC4_CLNT_DESTROY_CLIENTID,
- /* nfs42 */
NFSPROC4_CLNT_SEEK,
NFSPROC4_CLNT_ALLOCATE,
NFSPROC4_CLNT_DEALLOCATE,
NFSPROC4_CLNT_LAYOUTSTATS,
NFSPROC4_CLNT_CLONE,
NFSPROC4_CLNT_COPY,
+
+ NFSPROC4_CLNT_LOOKUPP,
};
/* nfs41 types */
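A small, hypothetical illustration of why the array must stay
append-only: the statistics are reported positionally, so inserting a new
operation in the middle shifts every later counter as seen by parsers
such as nfsstat:

#include <stdio.h>

/* Hypothetical miniature: counters are printed by index, so new
 * operations must be appended, never inserted, to keep the positional
 * output stable for existing userspace parsers.
 */
enum { OP_READ, OP_WRITE, OP_LOOKUP, OP_LOOKUPP /* appended last */, OP_MAX };

static const unsigned long counts[OP_MAX] = { 3, 1, 7, 2 };

int main(void)
{
	int i;

	for (i = 0; i < OP_MAX; i++)
		printf("%lu ", counts[i]);
	printf("\n");
	return 0;
}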
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit c2f0ad4fc089 upstream.
A mispredicted conditional call to set_fs could result in the wrong
addr_limit being forwarded under speculation to a subsequent access_ok
check, potentially forming part of a spectre-v1 attack using uaccess
routines.
This patch prevents this forwarding from taking place by putting heavy
barriers in set_fs after writing the addr_limit.
Reviewed-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/uaccess.h | 7 +++++++
1 file changed, 7 insertions(+)
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -42,6 +42,13 @@ static inline void set_fs(mm_segment_t f
{
current_thread_info()->addr_limit = fs;
+ /*
+ * Prevent a mispredicted conditional call to set_fs from forwarding
+ * the wrong address limit to access_ok under speculation.
+ */
+ dsb(nsh);
+ isb();
+
/* On user-mode return, check fs is correct */
set_thread_flag(TIF_FSCHECK);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier <[email protected]>
Commit 09e6be12effd upstream.
The new SMC Calling Convention (v1.1) allows for a reduced overhead
when calling into the firmware, and provides a new feature discovery
mechanism.
Make it visible to KVM guests.
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm/kvm/handle_exit.c | 2 +-
arch/arm64/kvm/handle_exit.c | 2 +-
include/kvm/arm_psci.h | 2 +-
include/linux/arm-smccc.h | 13 +++++++++++++
virt/kvm/arm/psci.c | 24 +++++++++++++++++++++++-
5 files changed, 39 insertions(+), 4 deletions(-)
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -36,7 +36,7 @@ static int handle_hvc(struct kvm_vcpu *v
kvm_vcpu_hvc_get_imm(vcpu));
vcpu->stat.hvc_exit_stat++;
- ret = kvm_psci_call(vcpu);
+ ret = kvm_hvc_call_handler(vcpu);
if (ret < 0) {
kvm_inject_undefined(vcpu);
return 1;
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -44,7 +44,7 @@ static int handle_hvc(struct kvm_vcpu *v
kvm_vcpu_hvc_get_imm(vcpu));
vcpu->stat.hvc_exit_stat++;
- ret = kvm_psci_call(vcpu);
+ ret = kvm_hvc_call_handler(vcpu);
if (ret < 0) {
vcpu_set_reg(vcpu, 0, ~0UL);
return 1;
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -27,6 +27,6 @@
#define KVM_ARM_PSCI_LATEST KVM_ARM_PSCI_1_0
int kvm_psci_version(struct kvm_vcpu *vcpu);
-int kvm_psci_call(struct kvm_vcpu *vcpu);
+int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);
#endif /* __KVM_ARM_PSCI_H__ */
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -60,6 +60,19 @@
#define ARM_SMCCC_QUIRK_NONE 0
#define ARM_SMCCC_QUIRK_QCOM_A6 1 /* Save/restore register a6 */
+#define ARM_SMCCC_VERSION_1_0 0x10000
+#define ARM_SMCCC_VERSION_1_1 0x10001
+
+#define ARM_SMCCC_VERSION_FUNC_ID \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ 0, 0)
+
+#define ARM_SMCCC_ARCH_FEATURES_FUNC_ID \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ 0, 1)
+
#ifndef __ASSEMBLY__
#include <linux/linkage.h>
--- a/virt/kvm/arm/psci.c
+++ b/virt/kvm/arm/psci.c
@@ -15,6 +15,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
+#include <linux/arm-smccc.h>
#include <linux/preempt.h>
#include <linux/kvm_host.h>
#include <linux/wait.h>
@@ -339,6 +340,7 @@ static int kvm_psci_1_0_call(struct kvm_
case PSCI_0_2_FN_SYSTEM_OFF:
case PSCI_0_2_FN_SYSTEM_RESET:
case PSCI_1_0_FN_PSCI_FEATURES:
+ case ARM_SMCCC_VERSION_FUNC_ID:
val = 0;
break;
default:
@@ -393,7 +395,7 @@ static int kvm_psci_0_1_call(struct kvm_
* Errors:
* -EINVAL: Unrecognized PSCI function
*/
-int kvm_psci_call(struct kvm_vcpu *vcpu)
+static int kvm_psci_call(struct kvm_vcpu *vcpu)
{
switch (kvm_psci_version(vcpu)) {
case KVM_ARM_PSCI_1_0:
@@ -406,3 +408,23 @@ int kvm_psci_call(struct kvm_vcpu *vcpu)
return -EINVAL;
};
}
+
+int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
+{
+ u32 func_id = smccc_get_function(vcpu);
+ u32 val = PSCI_RET_NOT_SUPPORTED;
+
+ switch (func_id) {
+ case ARM_SMCCC_VERSION_FUNC_ID:
+ val = ARM_SMCCC_VERSION_1_1;
+ break;
+ case ARM_SMCCC_ARCH_FEATURES_FUNC_ID:
+ /* Nothing supported yet */
+ break;
+ default:
+ return kvm_psci_call(vcpu);
+ }
+
+ smccc_set_retval(vcpu, val, 0, 0, 0);
+ return 1;
+}
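For orientation, a hedged guest-side sketch of the discovery flow this
enables, assuming the guest's conduit is HVC; guest_has_smccc_1_1() is a
hypothetical helper, and error handling is elided:

#include <linux/arm-smccc.h>

/* Hypothetical probe: ask the hypervisor for its SMCCC version via the
 * new function ID. Older firmware returns NOT_SUPPORTED (negative), so
 * anything below 1.1 is treated as legacy.
 */
static bool guest_has_smccc_1_1(void)
{
	struct arm_smccc_res res;

	arm_smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);

	return (long)res.a0 >= ARM_SMCCC_VERSION_1_1;
}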
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit f167211a93ac upstream.
We don't fully understand the Cavium ThunderX erratum, but it appears
that mapping the kernel as nG can lead to horrible consequences such as
attempting to execute userspace from kernel context. Since kpti isn't
enabled for these CPUs anyway, simplify the comment justifying the lack
of post_ttbr_update_workaround in the exception trampoline.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 13 +++----------
1 file changed, 3 insertions(+), 10 deletions(-)
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1010,16 +1010,9 @@ alternative_else_nop_endif
orr \tmp, \tmp, #USER_ASID_FLAG
msr ttbr1_el1, \tmp
/*
- * We avoid running the post_ttbr_update_workaround here because the
- * user and kernel ASIDs don't have conflicting mappings, so any
- * "blessing" as described in:
- *
- * http://lkml.kernel.org/r/[email protected]
- *
- * will not hurt correctness. Whilst this may partially defeat the
- * point of using split ASIDs in the first place, it avoids
- * the hit of invalidating the entire I-cache on every return to
- * userspace.
+ * We avoid running the post_ttbr_update_workaround here because
+ * it's only needed by Cavium ThunderX, which requires KPTI to be
+ * disabled.
*/
.endm
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Malcolm Priestley <[email protected]>
commit 7bf7a7116ed313c601307f7e585419369926ab05 upstream.
When the tuner was split from m88rs2000, the attach function was left in
the wrong place.
Move it to dm04_lme2510_tuner to trap errors on failure, and remove the
call to lme_coldreset.
This prevents the driver from starting up without any tuner connected.
It also traps the ts2020 probe failure:
LME2510(C): FE Found M88RS2000
ts2020: probe of 0-0060 failed with error -11
...
LME2510(C): TUN Found RS2000 tuner
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] PREEMPT SMP KASAN
Reported-by: Andrey Konovalov <[email protected]>
Signed-off-by: Malcolm Priestley <[email protected]>
Tested-by: Andrey Konovalov <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/media/usb/dvb-usb-v2/lmedm04.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
--- a/drivers/media/usb/dvb-usb-v2/lmedm04.c
+++ b/drivers/media/usb/dvb-usb-v2/lmedm04.c
@@ -1076,8 +1076,6 @@ static int dm04_lme2510_frontend_attach(
if (adap->fe[0]) {
info("FE Found M88RS2000");
- dvb_attach(ts2020_attach, adap->fe[0], &ts2020_config,
- &d->i2c_adap);
st->i2c_tuner_gate_w = 5;
st->i2c_tuner_gate_r = 5;
st->i2c_tuner_addr = 0x60;
@@ -1143,17 +1141,18 @@ static int dm04_lme2510_tuner(struct dvb
ret = st->tuner_config;
break;
case TUNER_RS2000:
- ret = st->tuner_config;
+ if (dvb_attach(ts2020_attach, adap->fe[0],
+ &ts2020_config, &d->i2c_adap))
+ ret = st->tuner_config;
break;
default:
break;
}
- if (ret)
+ if (ret) {
info("TUN Found %s tuner", tun_msg[ret]);
- else {
- info("TUN No tuner found --- resetting device");
- lme_coldreset(d);
+ } else {
+ info("TUN No tuner found");
return -ENODEV;
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jayachandran C <[email protected]>
Commit f3d795d9b360 upstream.
Use a PSCI-based mitigation for speculative execution attacks targeting
the branch predictor. We use the same mechanism as the one used for
Cortex-A CPUs; we expect the PSCI version call to have the side effect
of clearing the BTBs.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Jayachandran C <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 10 ++++++++++
1 file changed, 10 insertions(+)
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -359,6 +359,16 @@ const struct arm64_cpu_capabilities arm6
.capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
},
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+ .enable = enable_psci_bp_hardening,
+ },
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+ .enable = enable_psci_bp_hardening,
+ },
#endif
{
}
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: James Morse <[email protected]>
Commit edf298cfce47 upstream.
this_cpu_has_cap() tests caps->desc not caps->matches, so it stops
walking the list when it finds a 'silent' feature, instead of
walking to the end of the list.
Prior to v4.6's 644c2ae198412 ("arm64: cpufeature: Test 'matches' pointer
to find the end of the list") we always tested desc to find the end of
a capability list. This was changed for dubious things like PAN_NOT_UAO.
v4.7's e3661b128e53e ("arm64: Allow a capability to be checked on
single CPU") added this_cpu_has_cap() using the old desc style test.
CC: Suzuki K Poulose <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1173,9 +1173,8 @@ static bool __this_cpu_has_cap(const str
if (WARN_ON(preemptible()))
return false;
- for (caps = cap_array; caps->desc; caps++)
+ for (caps = cap_array; caps->matches; caps++)
if (caps->capability == cap &&
- caps->matches &&
caps->matches(caps, SCOPE_LOCAL_CPU))
return true;
return false;
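To make the list-termination bug concrete, a self-contained userspace
miniature (hypothetical data): ending the walk on the description field
stops at a silent entry, while ending it on the matches pointer reaches
the true end of the list:

#include <stdbool.h>
#include <stdio.h>

struct cap {
	const char *desc;
	bool (*matches)(void);
};

static bool always_true(void) { return true; }

static const struct cap caps[] = {
	{ "feature A", always_true },
	{ NULL,        always_true },	/* silent entry: no description */
	{ "feature B", always_true },
	{ NULL,        NULL },		/* real terminator */
};

int main(void)
{
	int by_desc = 0, by_matches = 0;
	const struct cap *c;

	for (c = caps; c->desc; c++)
		by_desc++;		/* stops early at the silent entry */
	for (c = caps; c->matches; c++)
		by_matches++;		/* walks the full list */

	printf("desc-terminated: %d, matches-terminated: %d\n",
	       by_desc, by_matches);
	return 0;
}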
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 84624087dd7e upstream.
access_ok isn't an expensive operation once the addr_limit for the current
thread has been loaded into the cache. Given that the initial access_ok
check preceding a sequence of __{get,put}_user operations will take
the brunt of the miss, we can make the __* variants identical to the
full-fat versions, which brings with it the benefits of address masking.
The likely cost in these sequences will be from toggling PAN/UAO, which
we can address later by implementing the *_unsafe versions.
Reviewed-by: Robin Murphy <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/uaccess.h | 54 +++++++++++++++++++++++----------------
1 file changed, 32 insertions(+), 22 deletions(-)
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -294,28 +294,33 @@ do { \
(x) = (__force __typeof__(*(ptr)))__gu_val; \
} while (0)
-#define __get_user(x, ptr) \
+#define __get_user_check(x, ptr, err) \
({ \
- int __gu_err = 0; \
- __get_user_err((x), (ptr), __gu_err); \
- __gu_err; \
+ __typeof__(*(ptr)) __user *__p = (ptr); \
+ might_fault(); \
+ if (access_ok(VERIFY_READ, __p, sizeof(*__p))) { \
+ __p = uaccess_mask_ptr(__p); \
+ __get_user_err((x), __p, (err)); \
+ } else { \
+ (x) = 0; (err) = -EFAULT; \
+ } \
})
#define __get_user_error(x, ptr, err) \
({ \
- __get_user_err((x), (ptr), (err)); \
+ __get_user_check((x), (ptr), (err)); \
(void)0; \
})
-#define get_user(x, ptr) \
+#define __get_user(x, ptr) \
({ \
- __typeof__(*(ptr)) __user *__p = (ptr); \
- might_fault(); \
- access_ok(VERIFY_READ, __p, sizeof(*__p)) ? \
- __p = uaccess_mask_ptr(__p), __get_user((x), __p) : \
- ((x) = 0, -EFAULT); \
+ int __gu_err = 0; \
+ __get_user_check((x), (ptr), __gu_err); \
+ __gu_err; \
})
+#define get_user __get_user
+
#define __put_user_asm(instr, alt_instr, reg, x, addr, err, feature) \
asm volatile( \
"1:"ALTERNATIVE(instr " " reg "1, [%2]\n", \
@@ -358,28 +363,33 @@ do { \
uaccess_disable_not_uao(); \
} while (0)
-#define __put_user(x, ptr) \
+#define __put_user_check(x, ptr, err) \
({ \
- int __pu_err = 0; \
- __put_user_err((x), (ptr), __pu_err); \
- __pu_err; \
+ __typeof__(*(ptr)) __user *__p = (ptr); \
+ might_fault(); \
+ if (access_ok(VERIFY_WRITE, __p, sizeof(*__p))) { \
+ __p = uaccess_mask_ptr(__p); \
+ __put_user_err((x), __p, (err)); \
+ } else { \
+ (err) = -EFAULT; \
+ } \
})
#define __put_user_error(x, ptr, err) \
({ \
- __put_user_err((x), (ptr), (err)); \
+ __put_user_check((x), (ptr), (err)); \
(void)0; \
})
-#define put_user(x, ptr) \
+#define __put_user(x, ptr) \
({ \
- __typeof__(*(ptr)) __user *__p = (ptr); \
- might_fault(); \
- access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ? \
- __p = uaccess_mask_ptr(__p), __put_user((x), __p) : \
- -EFAULT; \
+ int __pu_err = 0; \
+ __put_user_check((x), (ptr), __pu_err); \
+ __pu_err; \
})
+#define put_user __put_user
+
extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
#define raw_copy_from_user __arch_copy_from_user
extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jayachandran C <[email protected]>
Commit 0ba2e29c7fc1 upstream.
Whitelist Broadcom Vulcan/Cavium ThunderX2 processors in
unmap_kernel_at_el0(). These CPUs are not vulnerable to
CVE-2017-5754 and do not need KPTI when KASLR is off.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Jayachandran C <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 7 +++++++
1 file changed, 7 insertions(+)
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -866,6 +866,13 @@ static bool unmap_kernel_at_el0(const st
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
return true;
+ /* Don't force KPTI for CPUs that are not vulnerable */
+ switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
+ case MIDR_CAVIUM_THUNDERX2:
+ case MIDR_BRCM_VULCAN:
+ return false;
+ }
+
/* Defer to CPU feature registers */
return !cpuid_feature_extract_unsigned_field(pfr0,
ID_AA64PFR0_CSV3_SHIFT);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 18011eac28c7 upstream.
When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register
during exception entry from native tasks and subsequently zero it in
the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0
in the context-switch path for native tasks using the entry trampoline.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/process.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -370,16 +370,14 @@ void tls_preserve_current_state(void)
static void tls_thread_switch(struct task_struct *next)
{
- unsigned long tpidr, tpidrro;
-
tls_preserve_current_state();
- tpidr = *task_user_tls(next);
- tpidrro = is_compat_thread(task_thread_info(next)) ?
- next->thread.tp_value : 0;
+ if (is_compat_thread(task_thread_info(next)))
+ write_sysreg(next->thread.tp_value, tpidrro_el0);
+ else if (!arm64_kernel_unmapped_at_el0())
+ write_sysreg(0, tpidrro_el0);
- write_sysreg(tpidr, tpidr_el0);
- write_sysreg(tpidrro, tpidrro_el0);
+ write_sysreg(*task_user_tls(next), tpidr_el0);
}
/* Restore the UAO state depending on next's addr_limit */
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit d1777e686ad1 upstream.
We rely on an atomic swizzling of TTBR1 when transitioning from the entry
trampoline to the kernel proper on an exception. We can't rely on this
atomicity in the face of Falkor erratum #E1003, so on affected cores we
can issue a TLB invalidation to invalidate the walk cache prior to
jumping into the kernel. There is still the possibility of a TLB conflict
here due to conflicting walk cache entries prior to the invalidation, but
this doesn't appear to be the case on these CPUs in practice.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 17 +++++------------
arch/arm64/kernel/entry.S | 12 ++++++++++++
2 files changed, 17 insertions(+), 12 deletions(-)
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -522,20 +522,13 @@ config CAVIUM_ERRATUM_30115
config QCOM_FALKOR_ERRATUM_1003
bool "Falkor E1003: Incorrect translation due to ASID change"
default y
- select ARM64_PAN if ARM64_SW_TTBR0_PAN
help
On Falkor v1, an incorrect ASID may be cached in the TLB when ASID
- and BADDR are changed together in TTBRx_EL1. The workaround for this
- issue is to use a reserved ASID in cpu_do_switch_mm() before
- switching to the new ASID. Saying Y here selects ARM64_PAN if
- ARM64_SW_TTBR0_PAN is selected. This is done because implementing and
- maintaining the E1003 workaround in the software PAN emulation code
- would be an unnecessary complication. The affected Falkor v1 CPU
- implements ARMv8.1 hardware PAN support and using hardware PAN
- support versus software PAN emulation is mutually exclusive at
- runtime.
-
- If unsure, say Y.
+ and BADDR are changed together in TTBRx_EL1. Since we keep the ASID
+ in TTBR1_EL1, this situation only occurs in the entry trampoline and
+ then only for entries in the walk cache, since the leaf translation
+ is unchanged. Work around the erratum by invalidating the walk cache
+ entries for the trampoline before entering the kernel proper.
config QCOM_FALKOR_ERRATUM_1009
bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -989,6 +989,18 @@ __ni_sys_trace:
sub \tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
bic \tmp, \tmp, #USER_ASID_FLAG
msr ttbr1_el1, \tmp
+#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
+alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
+ /* ASID already in \tmp[63:48] */
+ movk \tmp, #:abs_g2_nc:(TRAMP_VALIAS >> 12)
+ movk \tmp, #:abs_g1_nc:(TRAMP_VALIAS >> 12)
+ /* 2MB boundary containing the vectors, so we nobble the walk cache */
+ movk \tmp, #:abs_g0_nc:((TRAMP_VALIAS & ~(SZ_2M - 1)) >> 12)
+ isb
+ tlbi vae1, \tmp
+ dsb nsh
+alternative_else_nop_endif
+#endif /* CONFIG_QCOM_FALKOR_ERRATUM_1003 */
.endm
.macro tramp_unmap_kernel, tmp
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 376133b7edc2 upstream.
We're about to rework the way ASIDs are allocated, switch_mm is
implemented and low-level kernel entry/exit is handled, so keep the
ARM64_SW_TTBR0_PAN code out of the way whilst we do the heavy lifting.
It will be re-enabled in a subsequent patch.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -920,6 +920,7 @@ endif
config ARM64_SW_TTBR0_PAN
bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
+ depends on BROKEN # Temporary while switch_mm is reworked
help
Enabling this option prevents the kernel from accessing
user-space memory directly by pointing TTBR0_EL1 to a reserved
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 6c27c4082f4f upstream.
The literal pool entry for identifying the vectors base is the only piece
of information in the trampoline page that identifies the true location
of the kernel.
This patch moves it into a page-aligned region of the .rodata section
and maps this adjacent to the trampoline text via an additional fixmap
entry, which protects against any accidental leakage of the trampoline
contents.
Suggested-by: Ard Biesheuvel <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/fixmap.h | 1 +
arch/arm64/kernel/entry.S | 14 ++++++++++++++
arch/arm64/kernel/vmlinux.lds.S | 5 ++++-
arch/arm64/mm/mmu.c | 10 +++++++++-
4 files changed, 28 insertions(+), 2 deletions(-)
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -59,6 +59,7 @@ enum fixed_addresses {
#endif /* CONFIG_ACPI_APEI_GHES */
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ FIX_ENTRY_TRAMP_DATA,
FIX_ENTRY_TRAMP_TEXT,
#define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1030,7 +1030,13 @@ alternative_else_nop_endif
msr tpidrro_el0, x30 // Restored in kernel_ventry
.endif
tramp_map_kernel x30
+#ifdef CONFIG_RANDOMIZE_BASE
+ adr x30, tramp_vectors + PAGE_SIZE
+alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
+ ldr x30, [x30]
+#else
ldr x30, =vectors
+#endif
prfm plil1strm, [x30, #(1b - tramp_vectors)]
msr vbar_el1, x30
add x30, x30, #(1b - tramp_vectors)
@@ -1073,6 +1079,14 @@ END(tramp_exit_compat)
.ltorg
.popsection // .entry.tramp.text
+#ifdef CONFIG_RANDOMIZE_BASE
+ .pushsection ".rodata", "a"
+ .align PAGE_SHIFT
+ .globl __entry_tramp_data_start
+__entry_tramp_data_start:
+ .quad vectors
+ .popsection // .rodata
+#endif /* CONFIG_RANDOMIZE_BASE */
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
/*
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -251,7 +251,10 @@ ASSERT(__idmap_text_end - (__idmap_text_
ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
<= SZ_4K, "Hibernate exit text too big or misaligned")
#endif
-
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
+ "Entry trampoline text too big")
+#endif
/*
* If padding is applied before .head.text, virt<->phys conversions will fail.
*/
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -541,8 +541,16 @@ static int __init map_entry_trampoline(v
__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
prot, pgd_pgtable_alloc, 0);
- /* ...as well as the kernel page table */
+ /* Map both the text and data into the kernel page table */
__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ extern char __entry_tramp_data_start[];
+
+ __set_fixmap(FIX_ENTRY_TRAMP_DATA,
+ __pa_symbol(__entry_tramp_data_start),
+ PAGE_KERNEL_RO);
+ }
+
return 0;
}
core_initcall(map_entry_trampoline);
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit c7b9adaf85f8 upstream.
To allow unmapping of the kernel whilst running at EL0, we need to
point the exception vectors at an entry trampoline that can map/unmap
the kernel on entry/exit respectively.
This patch adds the trampoline page, although it is not yet plugged
into the vector table and is therefore unused.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/kernel/entry.S | 86 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/kernel/vmlinux.lds.S | 17 +++++++
2 files changed, 103 insertions(+)
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -28,6 +28,8 @@
#include <asm/errno.h>
#include <asm/esr.h>
#include <asm/irq.h>
+#include <asm/memory.h>
+#include <asm/mmu.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/thread_info.h>
@@ -943,6 +945,90 @@ __ni_sys_trace:
.popsection // .entry.text
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+/*
+ * Exception vectors trampoline.
+ */
+ .pushsection ".entry.tramp.text", "ax"
+
+ .macro tramp_map_kernel, tmp
+ mrs \tmp, ttbr1_el1
+ sub \tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
+ bic \tmp, \tmp, #USER_ASID_FLAG
+ msr ttbr1_el1, \tmp
+ .endm
+
+ .macro tramp_unmap_kernel, tmp
+ mrs \tmp, ttbr1_el1
+ add \tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
+ orr \tmp, \tmp, #USER_ASID_FLAG
+ msr ttbr1_el1, \tmp
+ /*
+ * We avoid running the post_ttbr_update_workaround here because the
+ * user and kernel ASIDs don't have conflicting mappings, so any
+ * "blessing" as described in:
+ *
+ * http://lkml.kernel.org/r/[email protected]
+ *
+ * will not hurt correctness. Whilst this may partially defeat the
+ * point of using split ASIDs in the first place, it avoids
+ * the hit of invalidating the entire I-cache on every return to
+ * userspace.
+ */
+ .endm
+
+ .macro tramp_ventry, regsize = 64
+ .align 7
+1:
+ .if \regsize == 64
+ msr tpidrro_el0, x30 // Restored in kernel_ventry
+ .endif
+ tramp_map_kernel x30
+ ldr x30, =vectors
+ prfm plil1strm, [x30, #(1b - tramp_vectors)]
+ msr vbar_el1, x30
+ add x30, x30, #(1b - tramp_vectors)
+ isb
+ br x30
+ .endm
+
+ .macro tramp_exit, regsize = 64
+ adr x30, tramp_vectors
+ msr vbar_el1, x30
+ tramp_unmap_kernel x30
+ .if \regsize == 64
+ mrs x30, far_el1
+ .endif
+ eret
+ .endm
+
+ .align 11
+ENTRY(tramp_vectors)
+ .space 0x400
+
+ tramp_ventry
+ tramp_ventry
+ tramp_ventry
+ tramp_ventry
+
+ tramp_ventry 32
+ tramp_ventry 32
+ tramp_ventry 32
+ tramp_ventry 32
+END(tramp_vectors)
+
+ENTRY(tramp_exit_native)
+ tramp_exit
+END(tramp_exit_native)
+
+ENTRY(tramp_exit_compat)
+ tramp_exit 32
+END(tramp_exit_compat)
+
+ .ltorg
+ .popsection // .entry.tramp.text
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+
/*
* Special system call wrappers.
*/
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -57,6 +57,17 @@ jiffies = jiffies_64;
#define HIBERNATE_TEXT
#endif
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define TRAMP_TEXT \
+ . = ALIGN(PAGE_SIZE); \
+ VMLINUX_SYMBOL(__entry_tramp_text_start) = .; \
+ *(.entry.tramp.text) \
+ . = ALIGN(PAGE_SIZE); \
+ VMLINUX_SYMBOL(__entry_tramp_text_end) = .;
+#else
+#define TRAMP_TEXT
+#endif
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from stext to _edata, must be a round multiple of the PE/COFF
@@ -113,6 +124,7 @@ SECTIONS
HYPERVISOR_TEXT
IDMAP_TEXT
HIBERNATE_TEXT
+ TRAMP_TEXT
*(.fixup)
*(.gnu.warning)
. = ALIGN(16);
@@ -214,6 +226,11 @@ SECTIONS
. += RESERVED_TTBR0_SIZE;
#endif
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+ tramp_pg_dir = .;
+ . += PAGE_SIZE;
+#endif
+
__pecoff_data_size = ABSOLUTE(. - __initdata_begin);
_end = .;
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit fc0e1299da54 upstream.
In order for code such as TLB invalidation to operate efficiently when
the decision to map the kernel at EL0 is determined at runtime, this
patch introduces a helper function, arm64_kernel_unmapped_at_el0, to
determine whether or not the kernel is mapped whilst running in userspace.
Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0,
but will later be hooked up to a fake CPU capability using a static key.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/mmu.h | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,8 @@
#define MMCF_AARCH32 0x1 /* mm context flag for AArch32 executables */
#define USER_ASID_FLAG (UL(1) << 48)
+#ifndef __ASSEMBLY__
+
typedef struct {
atomic64_t id;
void *vdso;
@@ -32,6 +34,11 @@ typedef struct {
*/
#define ASID(mm) ((mm)->context.id.counter & 0xffff)
+static inline bool arm64_kernel_unmapped_at_el0(void)
+{
+ return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+}
+
extern void paging_init(void);
extern void bootmem_init(void);
extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
@@ -42,4 +49,5 @@ extern void create_pgd_mapping(struct mm
extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
extern void mark_linear_text_alias_ro(void);
+#endif /* !__ASSEMBLY__ */
#endif
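An illustrative-only usage sketch (the caller is hypothetical): code can
cheaply skip work that only matters when the kernel really is hidden
from EL0, and with the current implementation the helper folds down to a
compile-time constant:

#include <asm/mmu.h>

static void example_extra_maintenance(void)
{
	if (!arm64_kernel_unmapped_at_el0())
		return;

	/* ... hypothetical extra TLB/cache maintenance for the split
	 * kernel/user mappings would go here ... */
}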
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 0c8ea531b774 upstream.
In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/mm/context.c | 25 +++++++++++++++++--------
2 files changed, 18 insertions(+), 8 deletions(-)
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -17,6 +17,7 @@
#define __ASM_MMU_H
#define MMCF_AARCH32 0x1 /* mm context flag for AArch32 executables */
+#define USER_ASID_FLAG (UL(1) << 48)
typedef struct {
atomic64_t id;
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
#define ASID_MASK (~GENMASK(asid_bits - 1, 0))
#define ASID_FIRST_VERSION (1UL << asid_bits)
-#define NUM_USER_ASIDS ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS (ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid) (((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx) (((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS (ASID_FIRST_VERSION)
+#define asid2idx(asid) ((asid) & ~ASID_MASK)
+#define idx2asid(idx) asid2idx(idx)
+#endif
/* Get the ASIDBits supported by the current CPU */
static u32 get_cpu_asid_bits(void)
@@ -98,7 +107,7 @@ static void flush_context(unsigned int c
*/
if (asid == 0)
asid = per_cpu(reserved_asids, i);
- __set_bit(asid & ~ASID_MASK, asid_map);
+ __set_bit(asid2idx(asid), asid_map);
per_cpu(reserved_asids, i) = asid;
}
@@ -153,16 +162,16 @@ static u64 new_context(struct mm_struct
* We had a valid ASID in a previous life, so try to re-use
* it if possible.
*/
- asid &= ~ASID_MASK;
- if (!__test_and_set_bit(asid, asid_map))
+ if (!__test_and_set_bit(asid2idx(asid), asid_map))
return newasid;
}
/*
* Allocate a free ASID. If we can't find one, take a note of the
- * currently active ASIDs and mark the TLBs as requiring flushes.
- * We always count from ASID #1, as we use ASID #0 when setting a
- * reserved TTBR0 for the init_mm.
+ * currently active ASIDs and mark the TLBs as requiring flushes. We
+ * always count from ASID #2 (index 1), as we use ASID #0 when setting
+ * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+ * pairs.
*/
asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
if (asid != NUM_USER_ASIDS)
@@ -179,7 +188,7 @@ static u64 new_context(struct mm_struct
set_asid:
__set_bit(asid, asid_map);
cur_idx = asid;
- return asid | generation;
+ return idx2asid(asid) | generation;
}
void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
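A standalone demo (hypothetical numbers) of the even/odd pairing
introduced here: the allocator hands out one index per mm, the kernel
uses the even ASID and userspace the same value with the bottom bit set:

#include <stdio.h>

int main(void)
{
	unsigned long idx = 5;				/* allocator index */
	unsigned long kernel_asid = idx << 1;		/* even: kernel */
	unsigned long user_asid = kernel_asid | 1;	/* odd: userspace */

	printf("idx %lu -> kernel ASID %lu, user ASID %lu\n",
	       idx, kernel_asid, user_asid);
	return 0;
}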
4.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <[email protected]>
Commit 7655abb95386 upstream.
In preparation for mapping kernelspace and userspace with different
ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
TTBR0 via an invalid mapping (the zero page).
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Tested-by: Shanker Donthineni <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/arm64/include/asm/mmu_context.h | 7 +++++++
arch/arm64/include/asm/pgtable-hwdef.h | 1 +
arch/arm64/include/asm/proc-fns.h | 6 ------
arch/arm64/mm/proc.S | 9 ++++++---
4 files changed, 14 insertions(+), 9 deletions(-)
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -57,6 +57,13 @@ static inline void cpu_set_reserved_ttbr
isb();
}
+static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
+{
+ BUG_ON(pgd == swapper_pg_dir);
+ cpu_set_reserved_ttbr0();
+ cpu_do_switch_mm(virt_to_phys(pgd),mm);
+}
+
/*
* TCR.T0SZ value to use when the ID map is active. Usually equals
* TCR_T0SZ(VA_BITS), unless system RAM is positioned very high in
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -272,6 +272,7 @@
#define TCR_TG1_4K (UL(2) << TCR_TG1_SHIFT)
#define TCR_TG1_64K (UL(3) << TCR_TG1_SHIFT)
+#define TCR_A1 (UL(1) << 22)
#define TCR_ASID16 (UL(1) << 36)
#define TCR_TBI0 (UL(1) << 37)
#define TCR_HA (UL(1) << 39)
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -35,12 +35,6 @@ extern u64 cpu_do_resume(phys_addr_t ptr
#include <asm/memory.h>
-#define cpu_switch_mm(pgd,mm) \
-do { \
- BUG_ON(pgd == swapper_pg_dir); \
- cpu_do_switch_mm(virt_to_phys(pgd),mm); \
-} while (0)
-
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* __ASM_PROCFNS_H */
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -139,9 +139,12 @@ ENDPROC(cpu_do_resume)
*/
ENTRY(cpu_do_switch_mm)
pre_ttbr0_update_workaround x0, x2, x3
+ mrs x2, ttbr1_el1
mmid x1, x1 // get mm->context.id
- bfi x0, x1, #48, #16 // set the ASID
- msr ttbr0_el1, x0 // set TTBR0
+ bfi x2, x1, #48, #16 // set the ASID
+ msr ttbr1_el1, x2 // in TTBR1 (since TCR.A1 is set)
+ isb
+ msr ttbr0_el1, x0 // now update TTBR0
isb
post_ttbr0_update_workaround
ret
@@ -224,7 +227,7 @@ ENTRY(__cpu_setup)
* both user and kernel.
*/
ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
- TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0
+ TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
tcr_set_idmap_t0sz x10, x9
/*
On 02/15/2018 08:15 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.15.4 release.
> There are 202 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sat Feb 17 15:16:28 UTC 2018.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.15.4-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.15.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>
Compiled and booted on my test system. No dmesg regressions.
thanks,
-- Shuah
On 15 February 2018 at 20:45, Greg Kroah-Hartman
<[email protected]> wrote:
> This is the start of the stable review cycle for the 4.15.4 release.
> There are 202 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sat Feb 17 15:16:28 UTC 2018.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.15.4-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.15.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
Results from Linaro’s test farm.
No regressions on arm64, arm and x86_64.
NOTE:
LTP: the netns_netlink test has been failing since 4.15 on (armv7) beagleboard-x15.
tun: section 3 reloc 18 sym '__memzero': relocation 28 out of range
(0xbf022224 -> 0xc10a3ac0)open: No such device
https://bugs.linaro.org/show_bug.cgi?id=3484
A patch to fix this problem is under discussion:
[PATCHv2] arch/arm/Kconfig: default ARM_MODULE_PLTS to 'y'
https://lkml.org/lkml/2018/1/31/577
Summary
------------------------------------------------------------------------
kernel: 4.15.4-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.15.y
git commit: 01be67fd216daa47afa25bbfdd555eff8a66ce3e
git describe: v4.15.3-204-g01be67fd216d
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.15-oe/build/v4.15.3-204-g01be67fd216d
No regressions (compared to build v4.15.3-88-gcc4b654d33e8)
Boards, architectures and test suites:
-------------------------------------
hi6220-hikey - arm64
* boot - pass: 20,
* kselftest - skip: 9, pass: 57,
* libhugetlbfs - skip: 1, pass: 90,
* ltp-cap_bounds-tests - pass: 2,
* ltp-containers-tests - skip: 17, pass: 64,
* ltp-fcntl-locktests-tests - pass: 2,
* ltp-filecaps-tests - pass: 2,
* ltp-fs-tests - skip: 2, pass: 61,
* ltp-fs_bind-tests - pass: 2,
* ltp-fs_perms_simple-tests - pass: 19,
* ltp-fsx-tests - pass: 2,
* ltp-hugetlb-tests - skip: 1, pass: 21,
* ltp-io-tests - pass: 3,
* ltp-ipc-tests - pass: 9,
* ltp-math-tests - pass: 11,
* ltp-nptl-tests - pass: 2,
* ltp-pty-tests - pass: 4,
* ltp-sched-tests - skip: 4, pass: 10,
* ltp-securebits-tests - pass: 4,
* ltp-syscalls-tests - skip: 151, pass: 999,
* ltp-timers-tests - skip: 1, pass: 12,
juno-r2 - arm64
* boot - pass: 20,
* kselftest - skip: 10, pass: 56,
* libhugetlbfs - skip: 1, pass: 90,
* ltp-cap_bounds-tests - pass: 2,
* ltp-containers-tests - skip: 17, pass: 64,
* ltp-fcntl-locktests-tests - pass: 2,
* ltp-filecaps-tests - pass: 2,
* ltp-fs-tests - skip: 2, pass: 61,
* ltp-fs_bind-tests - pass: 2,
* ltp-fs_perms_simple-tests - pass: 19,
* ltp-fsx-tests - pass: 2,
* ltp-hugetlb-tests - pass: 22,
* ltp-io-tests - pass: 3,
* ltp-ipc-tests - pass: 9,
* ltp-math-tests - pass: 11,
* ltp-nptl-tests - pass: 2,
* ltp-pty-tests - pass: 4,
* ltp-sched-tests - skip: 4, pass: 10,
* ltp-securebits-tests - pass: 4,
* ltp-syscalls-tests - skip: 148, pass: 1002,
* ltp-timers-tests - skip: 1, pass: 12,
x15 - arm
* boot - pass: 20,
* kselftest - skip: 12, pass: 53,
* libhugetlbfs - skip: 1, pass: 87,
* ltp-cap_bounds-tests - pass: 2,
* ltp-containers-tests - skip: 17, pass: 62, fail: 2
* ltp-fcntl-locktests-tests - pass: 2,
* ltp-filecaps-tests - pass: 2,
* ltp-fs-tests - skip: 2, pass: 61,
* ltp-fs_bind-tests - pass: 2,
* ltp-fs_perms_simple-tests - pass: 19,
* ltp-fsx-tests - pass: 2,
* ltp-hugetlb-tests - skip: 2, pass: 20,
* ltp-io-tests - pass: 3,
* ltp-ipc-tests - pass: 9,
* ltp-math-tests - pass: 11,
* ltp-nptl-tests - pass: 2,
* ltp-pty-tests - pass: 4,
* ltp-sched-tests - skip: 1, pass: 13,
* ltp-securebits-tests - pass: 4,
* ltp-syscalls-tests - skip: 97, pass: 1053,
* ltp-timers-tests - skip: 1, pass: 12,
x86_64
* boot - pass: 20,
* kselftest - skip: 10, pass: 71,
* libhugetlbfs - skip: 1, pass: 90,
* ltp-cap_bounds-tests - pass: 2,
* ltp-containers-tests - skip: 17, pass: 64,
* ltp-fcntl-locktests-tests - pass: 2,
* ltp-filecaps-tests - pass: 2,
* ltp-fs-tests - skip: 1, pass: 62,
* ltp-fs_bind-tests - pass: 2,
* ltp-fs_perms_simple-tests - pass: 19,
* ltp-fsx-tests - pass: 2,
* ltp-hugetlb-tests - pass: 22,
* ltp-io-tests - pass: 3,
* ltp-ipc-tests - pass: 9,
* ltp-math-tests - pass: 10,
* ltp-nptl-tests - pass: 2,
* ltp-pty-tests - pass: 4,
* ltp-sched-tests - skip: 5, pass: 9,
* ltp-securebits-tests - pass: 4,
* ltp-syscalls-tests - skip: 118, pass: 1032,
* ltp-timers-tests - skip: 1, pass: 12,
Documentation - https://collaborate.linaro.org/display/LKFT/Email+Reports
Tested-by: Naresh Kamboju <[email protected]>

On Thu, Feb 15, 2018 at 02:59:50PM -0700, Shuah Khan wrote:
> On 02/15/2018 08:15 AM, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.15.4 release.
> > [...]
>
> Compiled and booted on my test system. No dmesg regressions.
Thanks for testing all of these and letting me know.
greg k-h

On Fri, Feb 16, 2018 at 11:31:34AM +0530, Naresh Kamboju wrote:
> On 15 February 2018 at 20:45, Greg Kroah-Hartman
> <[email protected]> wrote:
> > This is the start of the stable review cycle for the 4.15.4 release.
> > [...]
>
> Results from Linaro’s test farm.
> No regressions on arm64, arm and x86_64.
Thanks for testing all of these and letting me know.
greg k-h

On 02/15/2018 07:15 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.15.4 release.
> [...]

Build results:
total: 147 pass: 147 fail: 0
Qemu test results:
total: 126 pass: 126 fail: 0
Details are available at http://kerneltests.org/builders.
Guenter

On Fri, Feb 16, 2018 at 06:28:23AM -0800, Guenter Roeck wrote:
> On 02/15/2018 07:15 AM, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.15.4 release.
> > [...]
>
> Build results:
> total: 147 pass: 147 fail: 0
> Qemu test results:
> total: 126 pass: 126 fail: 0
>
> Details are available at http://kerneltests.org/builders.
Nice, thanks for testing and letting me know.
greg k-h