2017-08-22 19:23:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 00/41] 4.12.9-stable review

This is the start of the stable review cycle for the 4.12.9 release.
There are 41 patches in this series, all of which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Thu Aug 24 19:09:29 UTC 2017.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.12.9-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.12.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 4.12.9-rc1

Hector Martin <[email protected]>
usb: qmi_wwan: add D-Link DWM-222 device ID

Mathias Nyman <[email protected]>
usb: optimize acpi companion search for usb port devices

Josh Poimboeuf <[email protected]>
debug: Fix WARN_ON_ONCE() for modules

Oleg Nesterov <[email protected]>
pids: make task_tgid_nr_ns() safe

Linus Torvalds <[email protected]>
Sanitize 'move_pages()' permission checks

Thomas Gleixner <[email protected]>
kernel/watchdog: Prevent false positives with turbo modes

Alexey Dobriyan <[email protected]>
genirq/ipi: Fixup checks against nr_cpu_ids

Marc Zyngier <[email protected]>
genirq: Restore trigger settings in irq_modify_status()

Boris Brezillon <[email protected]>
irqchip/atmel-aic: Fix unbalanced refcount in aic_common_rtc_irq_fixup()

Boris Brezillon <[email protected]>
irqchip/atmel-aic: Fix unbalanced of_node_put() in aic_common_irq_fixup()

Oleg Nesterov <[email protected]>
x86/elf: Remove the unnecessary ADDR_NO_RANDOMIZE checks

Oleg Nesterov <[email protected]>
x86: Fix norandmaps/ADDR_NO_RANDOMIZE

Andy Lutomirski <[email protected]>
x86/asm/64: Clear AC on NMI entries

Peter Zijlstra <[email protected]>
perf/x86: Fix RDPMC vs. mm_struct tracking

Munehisa Kamata <[email protected]>
xen-blkfront: use a right index when checking requests

Benjamin Herrenschmidt <[email protected]>
powerpc: Fix VSX enabling/flushing to also test MSR_FP and MSR_VEC

Christoph Hellwig <[email protected]>
blk-mq-pci: add a fallback when pci_irq_get_affinity returns NULL

Gary Bisson <[email protected]>
ARM: dts: imx6qdl-nitrogen6_som2: fix PCIe reset

Roger Pau Monne <[email protected]>
xen: fix bio vec merging

Kees Cook <[email protected]>
mm: revert x86_64 and arm64 ELF_ET_DYN_BASE base changes

Laura Abbott <[email protected]>
mm/vmalloc.c: don't unconditonally use __GFP_HIGHMEM

zhong jiang <[email protected]>
mm/mempolicy: fix use after free when calling get_mempolicy

Prakash Gupta <[email protected]>
mm/cma_debug.c: fix stack corruption due to sprintf usage

Michal Hocko <[email protected]>
mm: fix double mmap_sem unlock on MMF_UNSTABLE enforced SIGBUS

Vladimir Davydov <[email protected]>
slub: fix per memcg cache leak on css offline

Pavel Tatashin <[email protected]>
mm: discard memblock data later

Jussi Laako <[email protected]>
ALSA: usb-audio: add DSD support for new Amanero PID

Takashi Iwai <[email protected]>
ALSA: usb-audio: Add mute TLV for playback volumes on C-Media devices

Takashi Iwai <[email protected]>
ALSA: usb-audio: Apply sample rate quirk to Sennheiser headset

Daniel Mentz <[email protected]>
ALSA: seq: 2nd attempt at fixing race creating a queue

Shaohua Li <[email protected]>
MD: not clear ->safemode for external metadata array

NeilBrown <[email protected]>
md: always clear ->safemode when md_check_recovery gets the mddev lock.

NeilBrown <[email protected]>
md: fix test in md_write_start()

KT Liao <[email protected]>
Input: elan_i2c - Add antoher Lenovo ACPI ID for upcoming Lenovo NB

Kai-Heng Feng <[email protected]>
Input: elan_i2c - add ELAN0608 to the ACPI table

Chunming Zhou <[email protected]>
drm/amdgpu: save list length when fence is signaled

Chris Wilson <[email protected]>
drm/i915: Perform an invalidate prior to executing golden renderstate

[email protected] <[email protected]>
crypto: x86/sha1 - Fix reads beyond the number of blocks passed

Herbert Xu <[email protected]>
crypto: ixp4xx - Fix error handling path in 'aead_perform()'

Thomas Bogendoerfer <[email protected]>
parisc: pci memory bar assignment fails with 64bit kernels on dino/cujo

Jan Kara <[email protected]>
audit: Fix use after free in audit_remove_watch_rule()


-------------

Diffstat:

Makefile | 4 +-
arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi | 4 +-
arch/arm/include/asm/bug.h | 2 +-
arch/arm64/include/asm/bug.h | 2 +-
arch/arm64/include/asm/elf.h | 4 +-
arch/blackfin/include/asm/bug.h | 4 +-
arch/mn10300/include/asm/bug.h | 2 +-
arch/parisc/include/asm/bug.h | 6 +--
arch/powerpc/include/asm/bug.h | 8 ++--
arch/powerpc/kernel/process.c | 5 +-
arch/s390/include/asm/bug.h | 4 +-
arch/sh/include/asm/bug.h | 4 +-
arch/x86/Kconfig | 1 +
arch/x86/crypto/sha1_avx2_x86_64_asm.S | 67 ++++++++++++++-------------
arch/x86/crypto/sha1_ssse3_glue.c | 2 +-
arch/x86/entry/entry_64.S | 2 +
arch/x86/events/core.c | 16 +++----
arch/x86/include/asm/bug.h | 4 +-
arch/x86/include/asm/elf.h | 4 +-
arch/x86/mm/mmap.c | 7 ++-
block/blk-mq-pci.c | 8 +++-
drivers/block/xen-blkfront.c | 6 +--
drivers/crypto/ixp4xx_crypto.c | 6 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 13 +++---
drivers/gpu/drm/i915/i915_gem_render_state.c | 4 ++
drivers/input/mouse/elan_i2c_core.c | 4 ++
drivers/irqchip/irq-atmel-aic-common.c | 5 +-
drivers/md/md.c | 5 +-
drivers/net/usb/qmi_wwan.c | 1 +
drivers/parisc/dino.c | 2 +-
drivers/usb/core/usb-acpi.c | 26 +++++++++--
drivers/xen/biomerge.c | 3 +-
fs/binfmt_elf.c | 3 +-
include/linux/memblock.h | 6 ++-
include/linux/nmi.h | 8 ++++
include/linux/perf_event.h | 4 +-
include/linux/pid.h | 4 +-
include/linux/sched.h | 51 ++++++++++----------
kernel/audit_watch.c | 12 +++--
kernel/events/core.c | 6 +--
kernel/irq/chip.c | 10 +++-
kernel/irq/ipi.c | 4 +-
kernel/pid.c | 11 ++---
kernel/watchdog.c | 1 +
kernel/watchdog_hld.c | 59 +++++++++++++++++++++++
lib/Kconfig.debug | 7 +++
mm/cma_debug.c | 2 +-
mm/memblock.c | 38 +++++++--------
mm/memory.c | 12 ++++-
mm/mempolicy.c | 5 --
mm/migrate.c | 11 ++---
mm/nobootmem.c | 16 -------
mm/page_alloc.c | 4 ++
mm/slub.c | 3 +-
mm/vmalloc.c | 13 ++++--
sound/core/seq/seq_clientmgr.c | 13 ++----
sound/core/seq/seq_queue.c | 14 ++++--
sound/core/seq/seq_queue.h | 2 +-
sound/usb/mixer.c | 2 +
sound/usb/mixer.h | 1 +
sound/usb/mixer_quirks.c | 6 +++
sound/usb/quirks.c | 5 ++
62 files changed, 348 insertions(+), 220 deletions(-)



2017-08-22 19:14:34

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 03/41] crypto: ixp4xx - Fix error handling path in aead_perform()

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Herbert Xu <[email protected]>

commit 28389575a8cf933a5f3c378556b9f4d3cce0efd2 upstream.

In commit 0f987e25cb8a, the source processing has been moved in front of
the destination processing, but the error handling path has not been
modified accordingly.
Free resources in the correct order to avoid some leaks.

Fixes: 0f987e25cb8a ("crypto: ixp4xx - Fix false lastlen uninitialised warning")
Reported-by: Christophe JAILLET <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Reviewed-by: Arnd Bergmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/crypto/ixp4xx_crypto.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
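
For illustration, here is a minimal, self-contained sketch of the goto-unwind
pattern this fix restores (plain userspace C with made-up names, not the
driver code): each error label releases exactly the resources acquired before
the failing step, in reverse order of allocation.

#include <stdlib.h>

struct ctx { void *src; void *dst; void *hmac; };

static int setup(struct ctx *c)
{
        c->src = malloc(64);            /* step 1 */
        if (!c->src)
                goto err;
        c->dst = malloc(64);            /* step 2 */
        if (!c->dst)
                goto free_src;
        c->hmac = malloc(64);           /* step 3: on failure, both dst */
        if (!c->hmac)                   /* and src must be unwound      */
                goto free_dst;
        return 0;

free_dst:
        free(c->dst);
free_src:
        free(c->src);
err:
        return -1;
}

int main(void)
{
        struct ctx c;

        return setup(&c);
}

In the driver, the source chain is now built before the destination chain, so
the label order had to be swapped to match: a failed hmac allocation jumps to
the label that frees the destination first and then falls through to free the
source.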

--- a/drivers/crypto/ixp4xx_crypto.c
+++ b/drivers/crypto/ixp4xx_crypto.c
@@ -1074,7 +1074,7 @@ static int aead_perform(struct aead_requ
req_ctx->hmac_virt = dma_pool_alloc(buffer_pool, flags,
&crypt->icv_rev_aes);
if (unlikely(!req_ctx->hmac_virt))
- goto free_buf_src;
+ goto free_buf_dst;
if (!encrypt) {
scatterwalk_map_and_copy(req_ctx->hmac_virt,
req->src, cryptlen, authsize, 0);
@@ -1089,10 +1089,10 @@ static int aead_perform(struct aead_requ
BUG_ON(qmgr_stat_overflow(SEND_QID));
return -EINPROGRESS;

-free_buf_src:
- free_buf_chain(dev, req_ctx->src, crypt->src_buf);
free_buf_dst:
free_buf_chain(dev, req_ctx->dst, crypt->dst_buf);
+free_buf_src:
+ free_buf_chain(dev, req_ctx->src, crypt->src_buf);
crypt->ctl_flags = CTL_FLAG_UNUSED;
return -ENOMEM;
}


2017-08-22 19:14:38

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 07/41] Input: elan_i2c - add ELAN0608 to the ACPI table

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kai-Heng Feng <[email protected]>

commit 1874064eed0502bd9bef7be8023757b0c4f26883 upstream.

Similar to commit 722c5ac708b4f ("Input: elan_i2c - add ELAN0605 to the
ACPI table"), ELAN0608 should be handled by elan_i2c.

This touchpad can be found in Lenovo ideapad 320-14IKB.

BugLink: https://bugs.launchpad.net/bugs/1708852

Signed-off-by: Kai-Heng Feng <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/input/mouse/elan_i2c_core.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/input/mouse/elan_i2c_core.c
+++ b/drivers/input/mouse/elan_i2c_core.c
@@ -1224,6 +1224,7 @@ static const struct acpi_device_id elan_
{ "ELAN0100", 0 },
{ "ELAN0600", 0 },
{ "ELAN0605", 0 },
+ { "ELAN0608", 0 },
{ "ELAN1000", 0 },
{ }
};


2017-08-22 19:14:46

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 20/41] mm/mempolicy: fix use after free when calling get_mempolicy

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: zhong jiang <[email protected]>

commit 73223e4e2e3867ebf033a5a8eb2e5df0158ccc99 upstream.

I hit a use-after-free issue when executing trinity and reproduced it
with KASAN enabled. The related call trace is as follows.

BUG: KASan: use after free in SyS_get_mempolicy+0x3c8/0x960 at addr ffff8801f582d766
Read of size 2 by task syz-executor1/798

INFO: Allocated in mpol_new.part.2+0x74/0x160 age=3 cpu=1 pid=799
__slab_alloc+0x768/0x970
kmem_cache_alloc+0x2e7/0x450
mpol_new.part.2+0x74/0x160
mpol_new+0x66/0x80
SyS_mbind+0x267/0x9f0
system_call_fastpath+0x16/0x1b
INFO: Freed in __mpol_put+0x2b/0x40 age=4 cpu=1 pid=799
__slab_free+0x495/0x8e0
kmem_cache_free+0x2f3/0x4c0
__mpol_put+0x2b/0x40
SyS_mbind+0x383/0x9f0
system_call_fastpath+0x16/0x1b
INFO: Slab 0xffffea0009cb8dc0 objects=23 used=8 fp=0xffff8801f582de40 flags=0x200000000004080
INFO: Object 0xffff8801f582d760 @offset=5984 fp=0xffff8801f582d600

Bytes b4 ffff8801f582d750: ae 01 ff ff 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
Object ffff8801f582d760: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Object ffff8801f582d770: 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkk.
Redzone ffff8801f582d778: bb bb bb bb bb bb bb bb ........
Padding ffff8801f582d8b8: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
Memory state around the buggy address:
ffff8801f582d600: fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff8801f582d680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8801f582d700: fc fc fc fc fc fc fc fc fc fc fc fc fb fb fb fc

A shared memory policy is not protected against parallel removal by another
thread, which is normally prevented by holding the mmap_sem.
do_get_mempolicy, however, drops the lock midway while we can still
access the policy later.

The early, premature up_read is a historical artifact from the times when
put_user was called in this path (see https://lwn.net/Articles/124754/),
but that call has been gone since 8bccd85ffbaf ("[PATCH] Implement sys_*
do_* layering in the memory policy layer."). With the current mempolicy
reference count model, the premature unlock turned into this
use-after-free.

Fix the issue by removing the premature release.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: zhong jiang <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/mempolicy.c | 5 -----
1 file changed, 5 deletions(-)
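
The bug class is worth spelling out: a pointer obtained from a vma (here
the vma's mempolicy) is only guaranteed to stay alive while mmap_sem is
held for reading, so the lock has to be kept until the last access. A
tiny userspace model of the same rule, with invented names (a pthread
rwlock standing in for mmap_sem):

#include <pthread.h>

static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
static int *shared_policy;      /* writers may free and replace this */

int read_policy_value(void)
{
        int val;

        pthread_rwlock_rdlock(&table_lock);
        int *pol = shared_policy;
        /*
         * Buggy pattern: unlocking here and dereferencing pol afterwards
         * races with a writer freeing it (use-after-free). The fix keeps
         * the lock held across every access, exactly as below.
         */
        val = pol ? *pol : 0;
        pthread_rwlock_unlock(&table_lock);
        return val;
}

int main(void)
{
        int policy = 42;

        shared_policy = &policy;
        return read_policy_value() == 42 ? 0 : 1;
}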

--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -931,11 +931,6 @@ static long do_get_mempolicy(int *policy
*policy |= (pol->flags & MPOL_MODE_FLAGS);
}

- if (vma) {
- up_read(&current->mm->mmap_sem);
- vma = NULL;
- }
-
err = 0;
if (nmask) {
if (mpol_store_user_nodemask(pol)) {


2017-08-22 19:14:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 30/41] x86: Fix norandmaps/ADDR_NO_RANDOMIZE

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Oleg Nesterov <[email protected]>

commit 47ac5484fd961420e5ec0bb5b972fde381f57365 upstream.

Documentation/admin-guide/kernel-parameters.txt says:

norandmaps Don't use address space randomization. Equivalent
to echo 0 > /proc/sys/kernel/randomize_va_space

but it doesn't work because arch_rnd(), which is used to randomize
mm->mmap_base, returns a random value unconditionally. And as Kirill
pointed out, ADDR_NO_RANDOMIZE is broken for the same reason.

Just shift the PF_RANDOMIZE check from arch_mmap_rnd() to arch_rnd().

Fixes: 1b028f784e8c ("x86/mm: Introduce mmap_compat_base() for 32-bit mmap()")
Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Cyrill Gorcunov <[email protected]>
Reviewed-by: Dmitry Safonov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/mm/mmap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
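
A quick way to observe the effect from userspace is sketched below; this
is an illustrative test, not part of the patch, and it assumes
<sys/personality.h> provides ADDR_NO_RANDOMIZE (glibc does). The program
clears randomization for itself, re-executes, and prints the address of a
fresh anonymous mapping: with this fix (or randomize_va_space=0) the
address is stable across runs, whereas with the bug mmap_base kept being
randomized.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/personality.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        if (argc == 1) {
                /* first pass: clear randomization, then re-exec ourselves */
                personality(ADDR_NO_RANDOMIZE);
                execl("/proc/self/exe", argv[0], "child", (char *)NULL);
                perror("execl");
                return 1;
        }

        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        printf("first anonymous mmap(): %p\n", p);
        return 0;
}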

--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -82,13 +82,13 @@ static int mmap_is_legacy(void)

static unsigned long arch_rnd(unsigned int rndbits)
{
+ if (!(current->flags & PF_RANDOMIZE))
+ return 0;
return (get_random_long() & ((1UL << rndbits) - 1)) << PAGE_SHIFT;
}

unsigned long arch_mmap_rnd(void)
{
- if (!(current->flags & PF_RANDOMIZE))
- return 0;
return arch_rnd(mmap_is_ia32() ? mmap32_rnd_bits : mmap64_rnd_bits);
}



2017-08-22 19:14:51

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 26/41] powerpc: Fix VSX enabling/flushing to also test MSR_FP and MSR_VEC

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Benjamin Herrenschmidt <[email protected]>

commit 5a69aec945d27e78abac9fd032533d3aaebf7c1e upstream.

VSX uses a combination of the old vector registers, the old FP
registers and new "second halves" of the FP registers.

Thus when we need to see the VSX state in the thread struct
(flush_vsx_to_thread()) or when we'll use the VSX in the kernel
(enable_kernel_vsx()) we need to ensure they are all flushed into
the thread struct if any of them is individually enabled.

Unfortunately we only tested if the whole VSX was enabled, not if they
were individually enabled.

Fixes: 72cd7b44bc99 ("powerpc: Uncomment and make enable_kernel_vsx() routine available")
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/powerpc/kernel/process.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -362,7 +362,8 @@ void enable_kernel_vsx(void)

cpumsr = msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);

- if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
+ if (current->thread.regs &&
+ (current->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP))) {
check_if_tm_restore_required(current);
/*
* If a thread has already been reclaimed then the
@@ -386,7 +387,7 @@ void flush_vsx_to_thread(struct task_str
{
if (tsk->thread.regs) {
preempt_disable();
- if (tsk->thread.regs->msr & MSR_VSX) {
+ if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) {
BUG_ON(tsk != current);
giveup_vsx(tsk);
}


2017-08-22 19:15:10

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 39/41] debug: Fix WARN_ON_ONCE() for modules

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Josh Poimboeuf <[email protected]>

commit 325cdacd03c12629aa5f9ee2ace49b1f3dc184a8 upstream.

Mike Galbraith reported a situation where a WARN_ON_ONCE() call in DRM
code turned into an oops. As it turns out, WARN_ON_ONCE() seems to be
completely broken when called from a module.

The bug was introduced with the following commit:

19d436268dde ("debug: Add _ONCE() logic to report_bug()")

That commit changed WARN_ON_ONCE() to move its 'once' logic into the bug
trap handler. It requires a writable bug table so that the BUGFLAG_DONE
bit can be written to the flags to indicate the first warning has
occurred.

The bug table was made writable for vmlinux, which relies on
vmlinux.lds.S and vmlinux.lds.h for laying out the sections. However,
it wasn't made writable for modules, which rely on the ELF section
header flags.

Reported-by: Mike Galbraith <[email protected]>
Tested-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: 19d436268dde ("debug: Add _ONCE() logic to report_bug()")
Link: http://lkml.kernel.org/r/a53b04235a65478dd9afc51f5b329fdc65c84364.1500095401.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Changbin Du <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm/include/asm/bug.h | 2 +-
arch/arm64/include/asm/bug.h | 2 +-
arch/blackfin/include/asm/bug.h | 4 ++--
arch/mn10300/include/asm/bug.h | 2 +-
arch/parisc/include/asm/bug.h | 6 +++---
arch/powerpc/include/asm/bug.h | 8 ++++----
arch/s390/include/asm/bug.h | 4 ++--
arch/sh/include/asm/bug.h | 4 ++--
arch/x86/include/asm/bug.h | 4 ++--
9 files changed, 18 insertions(+), 18 deletions(-)
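
To make the mechanism concrete, here is a stripped-down userspace model of
the _ONCE logic (simplified struct, not the kernel's real bug_entry
layout): the trap handler records "already warned" by writing a flag into
the bug-table entry itself, which is exactly why the __bug_table section
has to be writable ("aw") for modules as well.

#include <stdio.h>

#define BUGFLAG_ONCE 0x0001
#define BUGFLAG_DONE 0x0002

struct bug_entry {
        unsigned long bug_addr;
        unsigned short flags;           /* written at run time */
};

static void report_bug(struct bug_entry *bug)
{
        if (bug->flags & BUGFLAG_ONCE) {
                if (bug->flags & BUGFLAG_DONE)
                        return;                 /* warned once already */
                bug->flags |= BUGFLAG_DONE;     /* needs a writable table */
        }
        printf("WARNING at %#lx\n", bug->bug_addr);
}

int main(void)
{
        struct bug_entry warn_site = { 0x1234, BUGFLAG_ONCE };

        report_bug(&warn_site);         /* prints the warning */
        report_bug(&warn_site);         /* silent: DONE bit is now set */
        return 0;
}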

--- a/arch/arm/include/asm/bug.h
+++ b/arch/arm/include/asm/bug.h
@@ -37,7 +37,7 @@ do { \
".pushsection .rodata.str, \"aMS\", %progbits, 1\n" \
"2:\t.asciz " #__file "\n" \
".popsection\n" \
- ".pushsection __bug_table,\"a\"\n" \
+ ".pushsection __bug_table,\"aw\"\n" \
".align 2\n" \
"3:\t.word 1b, 2b\n" \
"\t.hword " #__line ", 0\n" \
--- a/arch/arm64/include/asm/bug.h
+++ b/arch/arm64/include/asm/bug.h
@@ -36,7 +36,7 @@
#ifdef CONFIG_GENERIC_BUG

#define __BUG_ENTRY(flags) \
- ".pushsection __bug_table,\"a\"\n\t" \
+ ".pushsection __bug_table,\"aw\"\n\t" \
".align 2\n\t" \
"0: .long 1f - 0b\n\t" \
_BUGVERBOSE_LOCATION(__FILE__, __LINE__) \
--- a/arch/blackfin/include/asm/bug.h
+++ b/arch/blackfin/include/asm/bug.h
@@ -21,7 +21,7 @@
#define _BUG_OR_WARN(flags) \
asm volatile( \
"1: .hword %0\n" \
- " .section __bug_table,\"a\",@progbits\n" \
+ " .section __bug_table,\"aw\",@progbits\n" \
"2: .long 1b\n" \
" .long %1\n" \
" .short %2\n" \
@@ -38,7 +38,7 @@
#define _BUG_OR_WARN(flags) \
asm volatile( \
"1: .hword %0\n" \
- " .section __bug_table,\"a\",@progbits\n" \
+ " .section __bug_table,\"aw\",@progbits\n" \
"2: .long 1b\n" \
" .short %1\n" \
" .org 2b + %2\n" \
--- a/arch/mn10300/include/asm/bug.h
+++ b/arch/mn10300/include/asm/bug.h
@@ -21,7 +21,7 @@ do { \
asm volatile( \
" syscall 15 \n" \
"0: \n" \
- " .section __bug_table,\"a\" \n" \
+ " .section __bug_table,\"aw\" \n" \
" .long 0b,%0,%1 \n" \
" .previous \n" \
: \
--- a/arch/parisc/include/asm/bug.h
+++ b/arch/parisc/include/asm/bug.h
@@ -27,7 +27,7 @@
do { \
asm volatile("\n" \
"1:\t" PARISC_BUG_BREAK_ASM "\n" \
- "\t.pushsection __bug_table,\"a\"\n" \
+ "\t.pushsection __bug_table,\"aw\"\n" \
"2:\t" ASM_WORD_INSN "1b, %c0\n" \
"\t.short %c1, %c2\n" \
"\t.org 2b+%c3\n" \
@@ -50,7 +50,7 @@
do { \
asm volatile("\n" \
"1:\t" PARISC_BUG_BREAK_ASM "\n" \
- "\t.pushsection __bug_table,\"a\"\n" \
+ "\t.pushsection __bug_table,\"aw\"\n" \
"2:\t" ASM_WORD_INSN "1b, %c0\n" \
"\t.short %c1, %c2\n" \
"\t.org 2b+%c3\n" \
@@ -64,7 +64,7 @@
do { \
asm volatile("\n" \
"1:\t" PARISC_BUG_BREAK_ASM "\n" \
- "\t.pushsection __bug_table,\"a\"\n" \
+ "\t.pushsection __bug_table,\"aw\"\n" \
"2:\t" ASM_WORD_INSN "1b\n" \
"\t.short %c0\n" \
"\t.org 2b+%c1\n" \
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -18,7 +18,7 @@
#include <asm/asm-offsets.h>
#ifdef CONFIG_DEBUG_BUGVERBOSE
.macro EMIT_BUG_ENTRY addr,file,line,flags
- .section __bug_table,"a"
+ .section __bug_table,"aw"
5001: PPC_LONG \addr, 5002f
.short \line, \flags
.org 5001b+BUG_ENTRY_SIZE
@@ -29,7 +29,7 @@
.endm
#else
.macro EMIT_BUG_ENTRY addr,file,line,flags
- .section __bug_table,"a"
+ .section __bug_table,"aw"
5001: PPC_LONG \addr
.short \flags
.org 5001b+BUG_ENTRY_SIZE
@@ -42,14 +42,14 @@
sizeof(struct bug_entry), respectively */
#ifdef CONFIG_DEBUG_BUGVERBOSE
#define _EMIT_BUG_ENTRY \
- ".section __bug_table,\"a\"\n" \
+ ".section __bug_table,\"aw\"\n" \
"2:\t" PPC_LONG "1b, %0\n" \
"\t.short %1, %2\n" \
".org 2b+%3\n" \
".previous\n"
#else
#define _EMIT_BUG_ENTRY \
- ".section __bug_table,\"a\"\n" \
+ ".section __bug_table,\"aw\"\n" \
"2:\t" PPC_LONG "1b\n" \
"\t.short %2\n" \
".org 2b+%3\n" \
--- a/arch/s390/include/asm/bug.h
+++ b/arch/s390/include/asm/bug.h
@@ -14,7 +14,7 @@
".section .rodata.str,\"aMS\",@progbits,1\n" \
"2: .asciz \""__FILE__"\"\n" \
".previous\n" \
- ".section __bug_table,\"a\"\n" \
+ ".section __bug_table,\"aw\"\n" \
"3: .long 1b-3b,2b-3b\n" \
" .short %0,%1\n" \
" .org 3b+%2\n" \
@@ -30,7 +30,7 @@
asm volatile( \
"0: j 0b+2\n" \
"1:\n" \
- ".section __bug_table,\"a\"\n" \
+ ".section __bug_table,\"aw\"\n" \
"2: .long 1b-2b\n" \
" .short %0\n" \
" .org 2b+%1\n" \
--- a/arch/sh/include/asm/bug.h
+++ b/arch/sh/include/asm/bug.h
@@ -24,14 +24,14 @@
*/
#ifdef CONFIG_DEBUG_BUGVERBOSE
#define _EMIT_BUG_ENTRY \
- "\t.pushsection __bug_table,\"a\"\n" \
+ "\t.pushsection __bug_table,\"aw\"\n" \
"2:\t.long 1b, %O1\n" \
"\t.short %O2, %O3\n" \
"\t.org 2b+%O4\n" \
"\t.popsection\n"
#else
#define _EMIT_BUG_ENTRY \
- "\t.pushsection __bug_table,\"a\"\n" \
+ "\t.pushsection __bug_table,\"aw\"\n" \
"2:\t.long 1b\n" \
"\t.short %O3\n" \
"\t.org 2b+%O4\n" \
--- a/arch/x86/include/asm/bug.h
+++ b/arch/x86/include/asm/bug.h
@@ -35,7 +35,7 @@
#define _BUG_FLAGS(ins, flags) \
do { \
asm volatile("1:\t" ins "\n" \
- ".pushsection __bug_table,\"a\"\n" \
+ ".pushsection __bug_table,\"aw\"\n" \
"2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n" \
"\t" __BUG_REL(%c0) "\t# bug_entry::file\n" \
"\t.word %c1" "\t# bug_entry::line\n" \
@@ -52,7 +52,7 @@ do { \
#define _BUG_FLAGS(ins, flags) \
do { \
asm volatile("1:\t" ins "\n" \
- ".pushsection __bug_table,\"a\"\n" \
+ ".pushsection __bug_table,\"aw\"\n" \
"2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n" \
"\t.word %c0" "\t# bug_entry::flags\n" \
"\t.org 2b+%c1\n" \


2017-08-22 19:15:13

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 16/41] mm: discard memblock data later

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Pavel Tatashin <[email protected]>

commit 3010f876500f9ba921afaeccec30c45ca6584dc8 upstream.

There is an existing use-after-free bug when deferred struct pages are
enabled:

The memblock_add() allocates memory for the memory array if more than
128 entries are needed. See comment in e820__memblock_setup():

* The bootstrap memblock region count maximum is 128 entries
* (INIT_MEMBLOCK_REGIONS), but EFI might pass us more E820 entries
* than that - so allow memblock resizing.

This memblock memory is freed here:
free_low_memory_core_early()

We access the freed memblock.memory later in boot when deferred pages
are initialized in this path:

deferred_init_memmap()
for_each_mem_pfn_range()
__next_mem_pfn_range()
type = &memblock.memory;

One possible explanation for why this use-after-free hasn't been hit
before is that the limit of INIT_MEMBLOCK_REGIONS has never been
exceeded at least on systems where deferred struct pages were enabled.

Tested by reducing INIT_MEMBLOCK_REGIONS down to 4 from the current 128,
and verifying in qemu that this code is getting executed and that the
freed pages are sane.

Link: http://lkml.kernel.org/r/[email protected]
Fixes: 7e18adb4f80b ("mm: meminit: initialise remaining struct pages in parallel with kswapd")
Signed-off-by: Pavel Tatashin <[email protected]>
Reviewed-by: Steven Sistare <[email protected]>
Reviewed-by: Daniel Jordan <[email protected]>
Reviewed-by: Bob Picco <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/memblock.h | 6 ++++--
mm/memblock.c | 40 ++++++++++++++++++----------------------
mm/nobootmem.c | 16 ----------------
mm/page_alloc.c | 4 ++++
4 files changed, 26 insertions(+), 40 deletions(-)
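
The lifetime problem can be modelled in a few lines of plain C (invented
names, error handling trimmed): a registry that starts in a static array,
moves to heap memory once it outgrows the initial 128-slot capacity, and
must not be discarded while late consumers still iterate over it, which is
why the free is moved from early boot to page_alloc_init_late().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define INIT_REGIONS 128

static long static_regions[INIT_REGIONS];
static long *regions = static_regions;
static int nr_regions, max_regions = INIT_REGIONS;

static void add_region(long base)
{
        if (nr_regions == max_regions) {
                /* grow, roughly what memblock_double_array() does */
                max_regions *= 2;
                long *bigger = malloc(max_regions * sizeof(*bigger));
                if (!bigger)
                        exit(1);
                memcpy(bigger, regions, nr_regions * sizeof(*bigger));
                if (regions != static_regions)
                        free(regions);
                regions = bigger;
        }
        regions[nr_regions++] = base;
}

static void discard_registry(void)
{
        /* only safe once every late consumer has finished iterating */
        if (regions != static_regions)
                free(regions);
}

int main(void)
{
        for (int i = 0; i < 200; i++)   /* >128 entries: data moves to the heap */
                add_region(i);
        /* deferred consumers would run here, before the discard */
        discard_registry();
        printf("registry discarded after its last user\n");
        return 0;
}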

--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -65,6 +65,7 @@ extern bool movable_node_enabled;
#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
#define __init_memblock __meminit
#define __initdata_memblock __meminitdata
+void memblock_discard(void);
#else
#define __init_memblock
#define __initdata_memblock
@@ -78,8 +79,6 @@ phys_addr_t memblock_find_in_range_node(
int nid, ulong flags);
phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
phys_addr_t size, phys_addr_t align);
-phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr);
-phys_addr_t get_allocated_memblock_memory_regions_info(phys_addr_t *addr);
void memblock_allow_resize(void);
int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid);
int memblock_add(phys_addr_t base, phys_addr_t size);
@@ -114,6 +113,9 @@ void __next_mem_range_rev(u64 *idx, int
void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start,
phys_addr_t *out_end);

+void __memblock_free_early(phys_addr_t base, phys_addr_t size);
+void __memblock_free_late(phys_addr_t base, phys_addr_t size);
+
/**
* for_each_mem_range - iterate through memblock areas from type_a and not
* included in type_b. Or just type_a if type_b is NULL.
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -288,31 +288,27 @@ static void __init_memblock memblock_rem
}

#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
-
-phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info(
- phys_addr_t *addr)
-{
- if (memblock.reserved.regions == memblock_reserved_init_regions)
- return 0;
-
- *addr = __pa(memblock.reserved.regions);
-
- return PAGE_ALIGN(sizeof(struct memblock_region) *
- memblock.reserved.max);
-}
-
-phys_addr_t __init_memblock get_allocated_memblock_memory_regions_info(
- phys_addr_t *addr)
+/**
+ * Discard memory and reserved arrays if they were allocated
+ */
+void __init memblock_discard(void)
{
- if (memblock.memory.regions == memblock_memory_init_regions)
- return 0;
+ phys_addr_t addr, size;

- *addr = __pa(memblock.memory.regions);
-
- return PAGE_ALIGN(sizeof(struct memblock_region) *
- memblock.memory.max);
+ if (memblock.reserved.regions != memblock_reserved_init_regions) {
+ addr = __pa(memblock.reserved.regions);
+ size = PAGE_ALIGN(sizeof(struct memblock_region) *
+ memblock.reserved.max);
+ __memblock_free_late(addr, size);
+ }
+
+ if (memblock.memory.regions != memblock_memory_init_regions) {
+ addr = __pa(memblock.memory.regions);
+ size = PAGE_ALIGN(sizeof(struct memblock_region) *
+ memblock.memory.max);
+ __memblock_free_late(addr, size);
+ }
}
-
#endif

/**
--- a/mm/nobootmem.c
+++ b/mm/nobootmem.c
@@ -146,22 +146,6 @@ static unsigned long __init free_low_mem
NULL)
count += __free_memory_core(start, end);

-#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
- {
- phys_addr_t size;
-
- /* Free memblock.reserved array if it was allocated */
- size = get_allocated_memblock_reserved_regions_info(&start);
- if (size)
- count += __free_memory_core(start, start + size);
-
- /* Free memblock.memory array if it was allocated */
- size = get_allocated_memblock_memory_regions_info(&start);
- if (size)
- count += __free_memory_core(start, start + size);
- }
-#endif
-
return count;
}

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1582,6 +1582,10 @@ void __init page_alloc_init_late(void)
/* Reinit limits that are based on free pages after the kernel is up */
files_maxfiles_init();
#endif
+#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
+ /* Discard memblock private memory */
+ memblock_discard();
+#endif

for_each_populated_zone(zone)
set_zone_contiguous(zone);


2017-08-22 19:15:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 37/41] Sanitize move_pages() permission checks

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Linus Torvalds <[email protected]>

commit 197e7e521384a23b9e585178f3f11c9fa08274b9 upstream.

The 'move_pages()' system call was introduced long long ago with the
same permission checks as for sending a signal (except using
CAP_SYS_NICE instead of CAP_KILL for the overriding capability).

That turns out to not be a great choice - while the system call really
only moves physical page allocations around (and you need other
capabilities to do a lot of it), you can check the return value to map
out some of the virtual address choices and defeat ASLR of a binary that
still shares your uid.

So change the access checks to the more common 'ptrace_may_access()'
model instead.

This tightens the access checks for the uid, and also effectively
changes the CAP_SYS_NICE check to CAP_SYS_PTRACE, but it's unlikely that
anybody really _uses_ this legacy system call any more (we have better
NUMA placement models these days), so I expect nobody to notice.

Famous last words.

Reported-by: Otto Ebeling <[email protected]>
Acked-by: Eric W. Biederman <[email protected]>
Cc: Willy Tarreau <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/migrate.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
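
To see what the old check permitted, here is a hedged userspace sketch
(assumes a Linux with __NR_move_pages; error handling trimmed; the target
pid and guessed address are just command-line arguments): with a NULL
nodes array, move_pages() only reports which node each page lives on, so
the per-page status value (a node number versus -EFAULT/-ENOENT) tells a
same-uid attacker whether a guessed address is mapped in the target.
After this patch the call fails with EPERM unless the caller would also
pass the ptrace access check.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        if (argc < 3) {
                fprintf(stderr, "usage: %s <pid> <hex-address>\n", argv[0]);
                return 1;
        }

        pid_t target = (pid_t)atoi(argv[1]);
        void *guess = (void *)strtoul(argv[2], NULL, 0);
        int status = 0;

        /* nodes == NULL: query-only mode, nothing is actually migrated */
        long ret = syscall(SYS_move_pages, target, 1UL, &guess,
                           NULL, &status, 0);
        if (ret < 0)
                perror("move_pages");   /* EPERM with this patch applied */
        else
                printf("status[0] = %d (>= 0 means the page is mapped)\n",
                       status);
        return 0;
}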

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -41,6 +41,7 @@
#include <linux/page_idle.h>
#include <linux/page_owner.h>
#include <linux/sched/mm.h>
+#include <linux/ptrace.h>

#include <asm/tlbflush.h>

@@ -1649,7 +1650,6 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid,
const int __user *, nodes,
int __user *, status, int, flags)
{
- const struct cred *cred = current_cred(), *tcred;
struct task_struct *task;
struct mm_struct *mm;
int err;
@@ -1673,14 +1673,9 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid,

/*
* Check if this process has the right to modify the specified
- * process. The right exists if the process has administrative
- * capabilities, superuser privileges or the same
- * userid as the target process.
- */
- tcred = __task_cred(task);
- if (!uid_eq(cred->euid, tcred->suid) && !uid_eq(cred->euid, tcred->uid) &&
- !uid_eq(cred->uid, tcred->suid) && !uid_eq(cred->uid, tcred->uid) &&
- !capable(CAP_SYS_NICE)) {
+ * process. Use the regular "ptrace_may_access()" checks.
+ */
+ if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) {
rcu_read_unlock();
err = -EPERM;
goto out;


2017-08-22 19:15:24

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 08/41] Input: elan_i2c - Add antoher Lenovo ACPI ID for upcoming Lenovo NB

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: KT Liao <[email protected]>

commit 76988690402dde2880bfe06ecccf381d48ba8e1c upstream.

Add 2 new IDs (ELAN0609 and ELAN060B) to the list of ACPI IDs that should
be handled by the driver.

Signed-off-by: KT Liao <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/input/mouse/elan_i2c_core.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/input/mouse/elan_i2c_core.c
+++ b/drivers/input/mouse/elan_i2c_core.c
@@ -1225,6 +1225,8 @@ static const struct acpi_device_id elan_
{ "ELAN0600", 0 },
{ "ELAN0605", 0 },
{ "ELAN0608", 0 },
+ { "ELAN0609", 0 },
+ { "ELAN060B", 0 },
{ "ELAN1000", 0 },
{ }
};


2017-08-22 19:15:53

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 17/41] slub: fix per memcg cache leak on css offline

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Vladimir Davydov <[email protected]>

commit f6ba488073fe8159851fe398cc3c5ee383bb4c7a upstream.

To avoid a possible deadlock, sysfs_slab_remove() schedules an
asynchronous work to delete sysfs entries corresponding to the kmem
cache. To ensure the cache isn't freed before the work function is
called, it takes a reference to the cache kobject. The reference is
supposed to be released by the work function.

However, the work function (sysfs_slab_remove_workfn()) does nothing in
case the cache sysfs entry has already been deleted, leaking the kobject
and the corresponding cache.

This may happen on a per memcg cache destruction, because sysfs entries
of a per memcg cache are deleted on memcg offline if the cache is empty
(see __kmemcg_cache_deactivate()).

The kmemleak report looks like this:

unreferenced object 0xffff9f798a79f540 (size 32):
comm "kworker/1:4", pid 15416, jiffies 4307432429 (age 28687.554s)
hex dump (first 32 bytes):
6b 6d 61 6c 6c 6f 63 2d 31 36 28 31 35 39 39 3a kmalloc-16(1599:
6e 65 77 72 6f 6f 74 29 00 23 6b c0 ff ff ff ff newroot).#k.....
backtrace:
kmemleak_alloc+0x4a/0xa0
__kmalloc_track_caller+0x148/0x2c0
kvasprintf+0x66/0xd0
kasprintf+0x49/0x70
memcg_create_kmem_cache+0xe6/0x160
memcg_kmem_cache_create_func+0x20/0x110
process_one_work+0x205/0x5d0
worker_thread+0x4e/0x3a0
kthread+0x109/0x140
ret_from_fork+0x2a/0x40
unreferenced object 0xffff9f79b6136840 (size 416):
comm "kworker/1:4", pid 15416, jiffies 4307432429 (age 28687.573s)
hex dump (first 32 bytes):
40 fb 80 c2 3e 33 00 00 00 00 00 40 00 00 00 00 @...>3.....@....
00 00 00 00 00 00 00 00 10 00 00 00 10 00 00 00 ................
backtrace:
kmemleak_alloc+0x4a/0xa0
kmem_cache_alloc+0x128/0x280
create_cache+0x3b/0x1e0
memcg_create_kmem_cache+0x118/0x160
memcg_kmem_cache_create_func+0x20/0x110
process_one_work+0x205/0x5d0
worker_thread+0x4e/0x3a0
kthread+0x109/0x140
ret_from_fork+0x2a/0x40

Fix the leak by adding the missing call to kobject_put() to
sysfs_slab_remove_workfn().

Link: http://lkml.kernel.org/r/[email protected]
Fixes: 3b7b314053d02 ("slub: make sysfs file removal asynchronous")
Signed-off-by: Vladimir Davydov <[email protected]>
Reported-by: Andrei Vagin <[email protected]>
Tested-by: Andrei Vagin <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Acked-by: David Rientjes <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/slub.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
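
The broken pattern generalises beyond slub, so a tiny userspace model may
help (plain C, invented names): whoever takes a reference before queueing
asynchronous work relies on the work function dropping that reference on
every exit path, and the early bare return in the "sysfs entry already
gone" case skipped exactly that final put.

#include <stdio.h>
#include <stdlib.h>

struct obj {
        int refcount;
        int already_removed;
};

static void put_obj(struct obj *o)
{
        if (--o->refcount == 0) {
                printf("object freed\n");
                free(o);
        }
}

static void remove_workfn(struct obj *o)
{
        if (o->already_removed)
                goto out;               /* was a bare return: reference leaked */

        printf("removing sysfs entries\n");
out:
        put_obj(o);                     /* always drop the submitter's reference */
}

int main(void)
{
        struct obj *o = calloc(1, sizeof(*o));

        if (!o)
                return 1;
        o->refcount = 1;                /* taken before the work was queued */
        o->already_removed = 1;         /* entries deleted on memcg offline */
        remove_workfn(o);               /* object is still freed */
        return 0;
}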

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5637,13 +5637,14 @@ static void sysfs_slab_remove_workfn(str
* A cache is never shut down before deactivation is
* complete, so no need to worry about synchronization.
*/
- return;
+ goto out;

#ifdef CONFIG_MEMCG
kset_unregister(s->memcg_kset);
#endif
kobject_uevent(&s->kobj, KOBJ_REMOVE);
kobject_del(&s->kobj);
+out:
kobject_put(&s->kobj);
}



2017-08-22 19:16:25

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 15/41] ALSA: usb-audio: add DSD support for new Amanero PID

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jussi Laako <[email protected]>

commit ed993c6fdfa7734881a4516852d95ae2d3b604d3 upstream.

Add DSD support for new Amanero Combo384 firmware version with a new
PID. This firmware uses DSD_U32_BE.

Fixes: 3eff682d765b ("ALSA: usb-audio: Support both DSD LE/BE Amanero firmware versions")
Signed-off-by: Jussi Laako <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/usb/quirks.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -1375,6 +1375,10 @@ u64 snd_usb_interface_dsd_format_quirks(
}
}
break;
+ case USB_ID(0x16d0, 0x0a23):
+ if (fp->altsetting == 2)
+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ break;

default:
break;


2017-08-22 19:15:08

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 11/41] MD: not clear ->safemode for external metadata array

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Shaohua Li <[email protected]>

commit afc1f55ca44e257f69da8f43e0714a76686ae8d1 upstream.

->safemode should be triggered by mdadm for external metadata arrays; otherwise
the array's state confuses mdadm.

Fixes: 33182d15c6bf ("md: always clear ->safemode when md_check_recovery gets the mddev lock.")
Cc: NeilBrown <[email protected]>
Signed-off-by: Shaohua Li <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/md.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8639,7 +8639,7 @@ void md_check_recovery(struct mddev *mdd
if (mddev_trylock(mddev)) {
int spares = 0;

- if (mddev->safemode == 1)
+ if (!mddev->external && mddev->safemode == 1)
mddev->safemode = 0;

if (mddev->ro) {


2017-08-22 19:16:42

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 14/41] ALSA: usb-audio: Add mute TLV for playback volumes on C-Media devices

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Takashi Iwai <[email protected]>

commit 0f174b3525a43bd51f9397394763925e0ebe7bc7 upstream.

C-Media devices (at least some models) mute the playback stream when
volumes are set to the minimum value. But this isn't reported via TLV,
so user space, typically PulseAudio, gets confused as if the stream were
still playing at a low volume.

This patch adds a new flag, min_mute, to struct usb_mixer_elem_info
to indicate that the mixer element mutes at its minimum volume.
This flag is then set for the known C-Media devices in
snd_usb_mixer_fu_apply_quirk().

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196669
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/usb/mixer.c | 2 ++
sound/usb/mixer.h | 1 +
sound/usb/mixer_quirks.c | 6 ++++++
3 files changed, 9 insertions(+)

--- a/sound/usb/mixer.c
+++ b/sound/usb/mixer.c
@@ -542,6 +542,8 @@ int snd_usb_mixer_vol_tlv(struct snd_kco

if (size < sizeof(scale))
return -ENOMEM;
+ if (cval->min_mute)
+ scale[0] = SNDRV_CTL_TLVT_DB_MINMAX_MUTE;
scale[2] = cval->dBmin;
scale[3] = cval->dBmax;
if (copy_to_user(_tlv, scale, sizeof(scale)))
--- a/sound/usb/mixer.h
+++ b/sound/usb/mixer.h
@@ -64,6 +64,7 @@ struct usb_mixer_elem_info {
int cached;
int cache_val[MAX_CHANNELS];
u8 initialized;
+ u8 min_mute;
void *private_data;
};

--- a/sound/usb/mixer_quirks.c
+++ b/sound/usb/mixer_quirks.c
@@ -1878,6 +1878,12 @@ void snd_usb_mixer_fu_apply_quirk(struct
if (unitid == 7 && cval->control == UAC_FU_VOLUME)
snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
break;
+ /* lowest playback value is muted on C-Media devices */
+ case USB_ID(0x0d8c, 0x000c):
+ case USB_ID(0x0d8c, 0x0014):
+ if (strstr(kctl->id.name, "Playback"))
+ cval->min_mute = 1;
+ break;
}
}



2017-08-22 19:17:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 13/41] ALSA: usb-audio: Apply sample rate quirk to Sennheiser headset

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Takashi Iwai <[email protected]>

commit a8e800fe0f68bc28ce309914f47e432742b865ed upstream.

A Sennheiser headset requires the typical sample-rate quirk for
avoiding spurious errors from inquiring the current sample rate like:
usb 1-1: 2:1: cannot get freq at ep 0x4
usb 1-1: 3:1: cannot get freq at ep 0x83

The USB ID 1395:740a has to be added to the entries in
snd_usb_get_sample_rate_quirk().

Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=1052580
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/usb/quirks.c | 1 +
1 file changed, 1 insertion(+)

--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -1142,6 +1142,7 @@ bool snd_usb_get_sample_rate_quirk(struc
case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */
case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */
case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */
+ case USB_ID(0x1395, 0x740a): /* Sennheiser DECT */
case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */
case USB_ID(0x1de7, 0x0013): /* Phoenix Audio MT202exe */
case USB_ID(0x1de7, 0x0014): /* Phoenix Audio TMX320 */


2017-08-22 19:15:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 36/41] kernel/watchdog: Prevent false positives with turbo modes

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit 7edaeb6841dfb27e362288ab8466ebdc4972e867 upstream.

The hardlockup detector on x86 uses a performance counter based on unhalted
CPU cycles and a periodic hrtimer. The hrtimer period is about 2/5 of the
performance counter period, so the hrtimer should fire 2-3 times before the
performance counter NMI fires. The NMI code checks whether the hrtimer
fired since the last invocation. If not, it assumes a hard lockup.

The calculation of those periods is based on the nominal CPU
frequency. Turbo modes increase the CPU clock frequency and therefore
shorten the period of the perf/NMI watchdog. With extreme Turbo-modes (3x
nominal frequency) the perf/NMI period is shorter than the hrtimer period
which leads to false positives.

A simple fix would be to shorten the hrtimer period, but that comes with
the side effect of more frequent hrtimer and softlockup thread wakeups,
which is not desired.

Implement a low pass filter, which checks the perf/NMI period against
kernel time. If the perf/NMI fires before 4/5 of the watchdog period has
elapsed then the event is ignored and postponed to the next perf/NMI.

That solves the problem and avoids the overhead of shorter hrtimer periods
and more frequent softlockup thread wakeups.

Fixes: 58687acba592 ("lockup_detector: Combine nmi_watchdog and softlockup detector")
Reported-and-tested-by: Kan Liang <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1708150931310.1886@nanos
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/Kconfig | 1 +
include/linux/nmi.h | 8 ++++++
kernel/watchdog.c | 1 +
kernel/watchdog_hld.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++
lib/Kconfig.debug | 7 +++++
5 files changed, 76 insertions(+)
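
A worked example of the arithmetic, assuming the default watchdog_thresh
of 10 seconds (the numbers below are illustrative, not taken from the
patch): the hrtimer period is 2/5 of the threshold, the perf NMI fires
roughly every threshold seconds of unhalted cycles at the nominal clock, a
3x turbo shrinks that below the hrtimer period, and the new filter ignores
NMIs arriving before 2 * sample_period, i.e. 4/5 of the threshold.

#include <stdio.h>

int main(void)
{
        double thresh = 10.0;                   /* watchdog_thresh, seconds */
        double sample_period = thresh * 2 / 5;  /* hrtimer period: 4 s */
        double nmi_nominal = thresh;            /* NMI period at nominal clock */
        double nmi_turbo = thresh / 3.0;        /* about 3.3 s with a 3x turbo */
        double filter = sample_period * 2;      /* new threshold: 8 s = 4/5 * 10 s */

        printf("hrtimer period      : %.1f s\n", sample_period);
        printf("NMI period, nominal : %.1f s\n", nmi_nominal);
        printf("NMI period, turbo   : %.1f s (beats the hrtimer -> false positive)\n",
               nmi_turbo);
        printf("filter threshold    : %.1f s (earlier NMIs are postponed)\n",
               filter);
        return 0;
}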

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -94,6 +94,7 @@ config X86
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select GENERIC_TIME_VSYSCALL
+ select HARDLOCKUP_CHECK_TIMESTAMP if X86_64
select HAVE_ACPI_APEI if ACPI
select HAVE_ACPI_APEI_NMI if ACPI
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -155,6 +155,14 @@ extern int sysctl_hardlockup_all_cpu_bac
#define sysctl_softlockup_all_cpu_backtrace 0
#define sysctl_hardlockup_all_cpu_backtrace 0
#endif
+
+#if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \
+ defined(CONFIG_HARDLOCKUP_DETECTOR)
+void watchdog_update_hrtimer_threshold(u64 period);
+#else
+static inline void watchdog_update_hrtimer_threshold(u64 period) { }
+#endif
+
extern bool is_hardlockup(void);
struct ctl_table;
extern int proc_watchdog(struct ctl_table *, int ,
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -161,6 +161,7 @@ static void set_sample_period(void)
* hardlockup detector generates a warning
*/
sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5);
+ watchdog_update_hrtimer_threshold(sample_period);
}

/* Commands for resetting the watchdog */
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -70,6 +70,62 @@ void touch_nmi_watchdog(void)
}
EXPORT_SYMBOL(touch_nmi_watchdog);

+#ifdef CONFIG_HARDLOCKUP_CHECK_TIMESTAMP
+static DEFINE_PER_CPU(ktime_t, last_timestamp);
+static DEFINE_PER_CPU(unsigned int, nmi_rearmed);
+static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;
+
+void watchdog_update_hrtimer_threshold(u64 period)
+{
+ /*
+ * The hrtimer runs with a period of (watchdog_threshold * 2) / 5
+ *
+ * So it runs effectively with 2.5 times the rate of the NMI
+ * watchdog. That means the hrtimer should fire 2-3 times before
+ * the NMI watchdog expires. The NMI watchdog on x86 is based on
+ * unhalted CPU cycles, so if Turbo-Mode is enabled the CPU cycles
+ * might run way faster than expected and the NMI fires in a
+ * smaller period than the one deduced from the nominal CPU
+ * frequency. Depending on the Turbo-Mode factor this might be fast
+ * enough to get the NMI period smaller than the hrtimer watchdog
+ * period and trigger false positives.
+ *
+ * The sample threshold is used to check in the NMI handler whether
+ * the minimum time between two NMI samples has elapsed. That
+ * prevents false positives.
+ *
+ * Set this to 4/5 of the actual watchdog threshold period so the
+ * hrtimer is guaranteed to fire at least once within the real
+ * watchdog threshold.
+ */
+ watchdog_hrtimer_sample_threshold = period * 2;
+}
+
+static bool watchdog_check_timestamp(void)
+{
+ ktime_t delta, now = ktime_get_mono_fast_ns();
+
+ delta = now - __this_cpu_read(last_timestamp);
+ if (delta < watchdog_hrtimer_sample_threshold) {
+ /*
+ * If ktime is jiffies based, a stalled timer would prevent
+ * jiffies from being incremented and the filter would look
+ * at a stale timestamp and never trigger.
+ */
+ if (__this_cpu_inc_return(nmi_rearmed) < 10)
+ return false;
+ }
+ __this_cpu_write(nmi_rearmed, 0);
+ __this_cpu_write(last_timestamp, now);
+ return true;
+}
+#else
+static inline bool watchdog_check_timestamp(void)
+{
+ return true;
+}
+#endif
+
static struct perf_event_attr wd_hw_attr = {
.type = PERF_TYPE_HARDWARE,
.config = PERF_COUNT_HW_CPU_CYCLES,
@@ -94,6 +150,9 @@ static void watchdog_overflow_callback(s
return;
}

+ if (!watchdog_check_timestamp())
+ return;
+
/* check for a hardlockup
* This is done by making sure our timer interrupt
* is incrementing. The timer interrupt should have
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -345,6 +345,13 @@ config SECTION_MISMATCH_WARN_ONLY
If unsure, say Y.

#
+# Enables a timestamp based low pass filter to compensate for perf based
+# hard lockup detection which runs too fast due to turbo modes.
+#
+config HARDLOCKUP_CHECK_TIMESTAMP
+ bool
+
+#
# Select this config option from the architecture Kconfig, if it
# is preferred to always offer frame pointers as a config
# option on the architecture (regardless of KERNEL_DEBUG):


2017-08-22 19:17:24

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 12/41] ALSA: seq: 2nd attempt at fixing race creating a queue

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Daniel Mentz <[email protected]>

commit 7e1d90f60a0d501c8503e636942ca704a454d910 upstream.

commit 4842e98f26dd80be3623c4714a244ba52ea096a8 ("ALSA: seq: Fix race at
creating a queue") attempted to fix a race reported by syzkaller. That
fix has been described as follows:

"
When a sequencer queue is created in snd_seq_queue_alloc(),it adds the
new queue element to the public list before referencing it. Thus the
queue might be deleted before the call of snd_seq_queue_use(), and it
results in the use-after-free error, as spotted by syzkaller.

The fix is to reference the queue object at the right time.
"

Even with that fix in place, syzkaller reported a use-after-free error.
It specifically pointed to the last instruction "return q->queue" in
snd_seq_queue_alloc(). The pointer q is being used after kfree() has
been called on it.

It turned out that there is still a small window where a race can
happen. The window opens at
snd_seq_ioctl_create_queue()->snd_seq_queue_alloc()->queue_list_add()
and closes at
snd_seq_ioctl_create_queue()->queueptr()->snd_use_lock_use(). Between
these two calls, a different thread could delete the queue and possibly
re-create a different queue in the same location in queue_list.

This change prevents this situation by calling snd_use_lock_use() from
snd_seq_queue_alloc() prior to calling queue_list_add(). It is then the
caller's responsibility to call snd_use_lock_free(&q->use_lock).

Fixes: 4842e98f26dd ("ALSA: seq: Fix race at creating a queue")
Reported-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Daniel Mentz <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/core/seq/seq_clientmgr.c | 13 ++++---------
sound/core/seq/seq_queue.c | 14 +++++++++-----
sound/core/seq/seq_queue.h | 2 +-
3 files changed, 14 insertions(+), 15 deletions(-)
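
The rule the fix enforces is a general one and can be sketched in a few
lines (hypothetical names, not the ALSA code): take the reference on a new
object before it is published on a globally visible list, hand the
referenced pointer back, and let the caller drop the reference once it is
done, so a concurrent delete can never win the race between "findable" and
"first use".

#include <stdlib.h>

struct queue {
        int use_count;                  /* stands in for snd_use_lock_t */
        int id;
};

static struct queue *global_list;       /* toy single-slot "list" */

static struct queue *queue_alloc(int id)
{
        struct queue *q = calloc(1, sizeof(*q));

        if (!q)
                return NULL;
        q->id = id;
        q->use_count = 1;               /* reference taken before publishing */
        global_list = q;                /* publish: other threads may now see it */
        return q;                       /* caller drops the reference when done */
}

int main(void)
{
        struct queue *q = queue_alloc(7);

        if (!q)
                return 1;
        /* ... fill in name, flags, etc. while the reference is held ... */
        q->use_count--;                 /* caller's snd_use_lock_free() */
        return 0;
}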

--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -1502,16 +1502,11 @@ static int snd_seq_ioctl_unsubscribe_por
static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, void *arg)
{
struct snd_seq_queue_info *info = arg;
- int result;
struct snd_seq_queue *q;

- result = snd_seq_queue_alloc(client->number, info->locked, info->flags);
- if (result < 0)
- return result;
-
- q = queueptr(result);
- if (q == NULL)
- return -EINVAL;
+ q = snd_seq_queue_alloc(client->number, info->locked, info->flags);
+ if (IS_ERR(q))
+ return PTR_ERR(q);

info->queue = q->queue;
info->locked = q->locked;
@@ -1521,7 +1516,7 @@ static int snd_seq_ioctl_create_queue(st
if (!info->name[0])
snprintf(info->name, sizeof(info->name), "Queue-%d", q->queue);
strlcpy(q->name, info->name, sizeof(q->name));
- queuefree(q);
+ snd_use_lock_free(&q->use_lock);

return 0;
}
--- a/sound/core/seq/seq_queue.c
+++ b/sound/core/seq/seq_queue.c
@@ -184,22 +184,26 @@ void __exit snd_seq_queues_delete(void)
static void queue_use(struct snd_seq_queue *queue, int client, int use);

/* allocate a new queue -
- * return queue index value or negative value for error
+ * return pointer to new queue or ERR_PTR(-errno) for error
+ * The new queue's use_lock is set to 1. It is the caller's responsibility to
+ * call snd_use_lock_free(&q->use_lock).
*/
-int snd_seq_queue_alloc(int client, int locked, unsigned int info_flags)
+struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int info_flags)
{
struct snd_seq_queue *q;

q = queue_new(client, locked);
if (q == NULL)
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);
q->info_flags = info_flags;
queue_use(q, client, 1);
+ snd_use_lock_use(&q->use_lock);
if (queue_list_add(q) < 0) {
+ snd_use_lock_free(&q->use_lock);
queue_delete(q);
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);
}
- return q->queue;
+ return q;
}

/* delete a queue - queue must be owned by the client */
--- a/sound/core/seq/seq_queue.h
+++ b/sound/core/seq/seq_queue.h
@@ -71,7 +71,7 @@ void snd_seq_queues_delete(void);


/* create new queue (constructor) */
-int snd_seq_queue_alloc(int client, int locked, unsigned int flags);
+struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int flags);

/* delete queue (destructor) */
int snd_seq_queue_delete(int client, int queueid);


2017-08-22 19:17:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 41/41] usb: qmi_wwan: add D-Link DWM-222 device ID

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Hector Martin <[email protected]>

commit bed9ff165960921303a100228585f2d1691b42eb upstream.

Signed-off-by: Hector Martin <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/usb/qmi_wwan.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -1175,6 +1175,7 @@ static const struct usb_device_id produc
{QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */
{QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */
{QMI_FIXED_INTF(0x2001, 0x7e19, 4)}, /* D-Link DWM-221 B1 */
+ {QMI_FIXED_INTF(0x2001, 0x7e35, 4)}, /* D-Link DWM-222 */
{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */
{QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */
{QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */


2017-08-22 19:17:45

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 40/41] usb: optimize acpi companion search for usb port devices

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mathias Nyman <[email protected]>

commit ed18c5fa945768a9bec994e786edbbbc7695acf6 upstream.

This optimization significantly reduces xhci driver load time.

In ACPI tables, the ACPI companion port devices are children of
the hub device. The port devices are identified by their port number
returned by the ACPI _ADR method.
_ADR 0 is reserved for the root hub device.

The current implementation for finding an ACPI companion port device
loops through all ACPI port devices under that parent hub, evaluating
their _ADR method each time a new port device is added.

For an xHC controller with 25 ports under its root hub, it
will end up invoking ACPI bytecode 625 times before all ports
are ready, making it really slow.

The _ADR values are already read and cached earlier. So instead of
running the bytecode again we can check the cached _ADR value first,
and then fall back to the old way.

As one of the more significant changes, the xhci load time on
Intel Kaby Lake was reduced by 70% (28 ms), from
initcall xhci_pci_init+0x0/0x49 returned 0 after 39537 usecs
to
initcall xhci_pci_init+0x0/0x49 returned 0 after 11270 usecs

Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/core/usb-acpi.c | 26 +++++++++++++++++++++++---
1 file changed, 23 insertions(+), 3 deletions(-)
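
The 625 figure is just the quadratic cost of the old lookup, which the
tiny back-of-the-envelope program below reproduces (illustrative only):
every one of the 25 port registrations used to re-walk all 25 children and
execute their _ADR method, while the new code compares the _ADR values
already cached at enumeration time and only falls back to
acpi_find_child_device() if nothing matches.

#include <stdio.h>

int main(void)
{
        int ports = 25, aml_runs = 0;

        /* old path: each port registration evaluates _ADR on every child */
        for (int registered = 1; registered <= ports; registered++)
                for (int child = 0; child < ports; child++)
                        aml_runs++;

        printf("_ADR evaluations, old path: %d\n", aml_runs);  /* 625 */
        printf("_ADR evaluations, new path: 0 (cached values are compared)\n");
        return 0;
}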

--- a/drivers/usb/core/usb-acpi.c
+++ b/drivers/usb/core/usb-acpi.c
@@ -127,6 +127,22 @@ out:
*/
#define USB_ACPI_LOCATION_VALID (1 << 31)

+static struct acpi_device *usb_acpi_find_port(struct acpi_device *parent,
+ int raw)
+{
+ struct acpi_device *adev;
+
+ if (!parent)
+ return NULL;
+
+ list_for_each_entry(adev, &parent->children, node) {
+ if (acpi_device_adr(adev) == raw)
+ return adev;
+ }
+
+ return acpi_find_child_device(parent, raw, false);
+}
+
static struct acpi_device *usb_acpi_find_companion(struct device *dev)
{
struct usb_device *udev;
@@ -174,8 +190,10 @@ static struct acpi_device *usb_acpi_find
int raw;

raw = usb_hcd_find_raw_port_number(hcd, port1);
- adev = acpi_find_child_device(ACPI_COMPANION(&udev->dev),
- raw, false);
+
+ adev = usb_acpi_find_port(ACPI_COMPANION(&udev->dev),
+ raw);
+
if (!adev)
return NULL;
} else {
@@ -186,7 +204,9 @@ static struct acpi_device *usb_acpi_find
return NULL;

acpi_bus_get_device(parent_handle, &adev);
- adev = acpi_find_child_device(adev, port1, false);
+
+ adev = usb_acpi_find_port(adev, port1);
+
if (!adev)
return NULL;
}


2017-08-22 19:15:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 34/41] genirq: Restore trigger settings in irq_modify_status()

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Marc Zyngier <[email protected]>

commit e8f241893dfbbebe2813c01eac54f263e6a5e59c upstream.

irq_modify_status starts by clearing the trigger settings from
irq_data before applying the new settings, but doesn't restore them,
leaving them set to IRQ_TYPE_NONE.

That's pretty confusing to the potential request_irq() that could
follow. Instead, snapshot the settings before clearing them, and restore
them if the irq_modify_status() invocation was not changing the trigger.

Fixes: 1e2a7d78499e ("irqdomain: Don't set type when mapping an IRQ")
Reported-and-tested-by: jeffy <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Jon Hunter <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/irq/chip.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)

--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -898,13 +898,15 @@ EXPORT_SYMBOL_GPL(irq_set_chip_and_handl

void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set)
{
- unsigned long flags;
+ unsigned long flags, trigger, tmp;
struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0);

if (!desc)
return;
irq_settings_clr_and_set(desc, clr, set);

+ trigger = irqd_get_trigger_type(&desc->irq_data);
+
irqd_clear(&desc->irq_data, IRQD_NO_BALANCING | IRQD_PER_CPU |
IRQD_TRIGGER_MASK | IRQD_LEVEL | IRQD_MOVE_PCNTXT);
if (irq_settings_has_no_balance_set(desc))
@@ -916,7 +918,11 @@ void irq_modify_status(unsigned int irq,
if (irq_settings_is_level(desc))
irqd_set(&desc->irq_data, IRQD_LEVEL);

- irqd_set(&desc->irq_data, irq_settings_get_trigger_mask(desc));
+ tmp = irq_settings_get_trigger_mask(desc);
+ if (tmp != IRQ_TYPE_NONE)
+ trigger = tmp;
+
+ irqd_set(&desc->irq_data, trigger);

irq_put_desc_unlock(desc, flags);
}
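
The same snapshot-and-restore idea in a standalone form (the flag
values are made up, not the kernel's): a clear/re-derive sequence must
fall back to the saved trigger whenever the re-derived value is "none".

#include <assert.h>

#define TRIGGER_MASK   0x0f
#define TYPE_NONE      0x00
#define TYPE_EDGE_RISE 0x01

static unsigned int modify_status(unsigned int state, unsigned int new_trigger)
{
        unsigned int trigger = state & TRIGGER_MASK;    /* snapshot */

        state &= ~TRIGGER_MASK;                         /* clear, as before */

        if (new_trigger != TYPE_NONE)                   /* only override if set */
                trigger = new_trigger;

        return state | trigger;                         /* restore */
}

int main(void)
{
        unsigned int state = 0x80 | TYPE_EDGE_RISE;

        /* a modify that does not touch the trigger keeps the old setting */
        assert((modify_status(state, TYPE_NONE) & TRIGGER_MASK) == TYPE_EDGE_RISE);
        return 0;
}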


2017-08-22 19:18:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 38/41] pids: make task_tgid_nr_ns() safe

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Oleg Nesterov <[email protected]>

commit dd1c1f2f2028a7b851f701fc6a8ebe39dcb95e7c upstream.

This was reported many times, and it was even mentioned in commit
52ee2dfdd4f5 ("pids: refactor vnr/nr_ns helpers to make them safe"), but
somehow nobody bothered to fix the obvious problem: task_tgid_nr_ns() is
not safe because task->group_leader points to nowhere after the exiting
task passes exit_notify(); rcu_read_lock() cannot help.

We really need to change __unhash_process() to nullify group_leader,
parent, and real_parent, but this needs some cleanups. Until then we
can turn task_tgid_nr_ns() into another user of __task_pid_nr_ns() and
fix the problem.

Reported-by: Troy Kensinger <[email protected]>
Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/pid.h | 4 ++-
include/linux/sched.h | 51 ++++++++++++++++++++++++++------------------------
kernel/pid.c | 11 +++-------
3 files changed, 34 insertions(+), 32 deletions(-)

--- a/include/linux/pid.h
+++ b/include/linux/pid.h
@@ -8,7 +8,9 @@ enum pid_type
PIDTYPE_PID,
PIDTYPE_PGID,
PIDTYPE_SID,
- PIDTYPE_MAX
+ PIDTYPE_MAX,
+ /* only valid to __task_pid_nr_ns() */
+ __PIDTYPE_TGID
};

/*
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1132,13 +1132,6 @@ static inline pid_t task_tgid_nr(struct
return tsk->tgid;
}

-extern pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns);
-
-static inline pid_t task_tgid_vnr(struct task_struct *tsk)
-{
- return pid_vnr(task_tgid(tsk));
-}
-
/**
* pid_alive - check that a task structure is not stale
* @p: Task structure to be checked.
@@ -1154,23 +1147,6 @@ static inline int pid_alive(const struct
return p->pids[PIDTYPE_PID].pid != NULL;
}

-static inline pid_t task_ppid_nr_ns(const struct task_struct *tsk, struct pid_namespace *ns)
-{
- pid_t pid = 0;
-
- rcu_read_lock();
- if (pid_alive(tsk))
- pid = task_tgid_nr_ns(rcu_dereference(tsk->real_parent), ns);
- rcu_read_unlock();
-
- return pid;
-}
-
-static inline pid_t task_ppid_nr(const struct task_struct *tsk)
-{
- return task_ppid_nr_ns(tsk, &init_pid_ns);
-}
-
static inline pid_t task_pgrp_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
{
return __task_pid_nr_ns(tsk, PIDTYPE_PGID, ns);
@@ -1192,6 +1168,33 @@ static inline pid_t task_session_vnr(str
return __task_pid_nr_ns(tsk, PIDTYPE_SID, NULL);
}

+static inline pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
+{
+ return __task_pid_nr_ns(tsk, __PIDTYPE_TGID, ns);
+}
+
+static inline pid_t task_tgid_vnr(struct task_struct *tsk)
+{
+ return __task_pid_nr_ns(tsk, __PIDTYPE_TGID, NULL);
+}
+
+static inline pid_t task_ppid_nr_ns(const struct task_struct *tsk, struct pid_namespace *ns)
+{
+ pid_t pid = 0;
+
+ rcu_read_lock();
+ if (pid_alive(tsk))
+ pid = task_tgid_nr_ns(rcu_dereference(tsk->real_parent), ns);
+ rcu_read_unlock();
+
+ return pid;
+}
+
+static inline pid_t task_ppid_nr(const struct task_struct *tsk)
+{
+ return task_ppid_nr_ns(tsk, &init_pid_ns);
+}
+
/* Obsolete, do not use: */
static inline pid_t task_pgrp_nr(struct task_struct *tsk)
{
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -527,8 +527,11 @@ pid_t __task_pid_nr_ns(struct task_struc
if (!ns)
ns = task_active_pid_ns(current);
if (likely(pid_alive(task))) {
- if (type != PIDTYPE_PID)
+ if (type != PIDTYPE_PID) {
+ if (type == __PIDTYPE_TGID)
+ type = PIDTYPE_PID;
task = task->group_leader;
+ }
nr = pid_nr_ns(rcu_dereference(task->pids[type].pid), ns);
}
rcu_read_unlock();
@@ -537,12 +540,6 @@ pid_t __task_pid_nr_ns(struct task_struc
}
EXPORT_SYMBOL(__task_pid_nr_ns);

-pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
-{
- return pid_nr_ns(task_tgid(tsk), ns);
-}
-EXPORT_SYMBOL(task_tgid_nr_ns);
-
struct pid_namespace *task_active_pid_ns(struct task_struct *tsk)
{
return ns_of_pid(task_pid(tsk));
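
A compressed userspace sketch of the new helper shape (simplified
stand-in types, not the kernel structures): the sentinel __PIDTYPE_TGID
is never used as an array index, it only tells the common helper to
redirect to the group leader and then look up PIDTYPE_PID.

#include <stdio.h>

enum pid_type { PIDTYPE_PID, PIDTYPE_PGID, PIDTYPE_SID, PIDTYPE_MAX,
                __PIDTYPE_TGID /* only valid for task_pid_nr() below */ };

struct task {
        int pids[PIDTYPE_MAX];          /* per-type numbers, simplified */
        struct task *group_leader;
};

static int task_pid_nr(struct task *t, enum pid_type type)
{
        if (type != PIDTYPE_PID) {
                if (type == __PIDTYPE_TGID)
                        type = PIDTYPE_PID;     /* tgid == leader's pid */
                t = t->group_leader;
        }
        return t->pids[type];                   /* never indexed by the sentinel */
}

int main(void)
{
        struct task leader = { { 100, 10, 1 }, NULL };
        struct task thread = { { 101, 10, 1 }, &leader };

        leader.group_leader = &leader;
        printf("tgid of thread 101 = %d\n", task_pid_nr(&thread, __PIDTYPE_TGID));
        return 0;
}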


2017-08-22 19:18:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 35/41] genirq/ipi: Fixup checks against nr_cpu_ids

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alexey Dobriyan <[email protected]>

commit 8fbbe2d7cc478d1544f41f2271787c993c23a4f6 upstream.

Valid CPU ids are [0, nr_cpu_ids-1] inclusive.

Fixes: 3b8e29a82dd1 ("genirq: Implement ipi_send_mask/single()")
Fixes: f9bce791ae2a ("genirq: Add a new function to get IPI reverse mapping")
Signed-off-by: Alexey Dobriyan <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/20170819095751.GB27864@avx2
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/irq/ipi.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/irq/ipi.c
+++ b/kernel/irq/ipi.c
@@ -165,7 +165,7 @@ irq_hw_number_t ipi_get_hwirq(unsigned i
struct irq_data *data = irq_get_irq_data(irq);
struct cpumask *ipimask = data ? irq_data_get_affinity_mask(data) : NULL;

- if (!data || !ipimask || cpu > nr_cpu_ids)
+ if (!data || !ipimask || cpu >= nr_cpu_ids)
return INVALID_HWIRQ;

if (!cpumask_test_cpu(cpu, ipimask))
@@ -195,7 +195,7 @@ static int ipi_send_verify(struct irq_ch
if (!chip->ipi_send_single && !chip->ipi_send_mask)
return -EINVAL;

- if (cpu > nr_cpu_ids)
+ if (cpu >= nr_cpu_ids)
return -EINVAL;

if (dest) {
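
The off-by-one in isolation: with nr_cpu_ids CPUs the valid ids are
0..nr_cpu_ids-1, so the old `cpu > nr_cpu_ids` test wrongly accepted
the id nr_cpu_ids itself, while `cpu >= nr_cpu_ids` rejects it. A
standalone check:

#include <assert.h>
#include <stdbool.h>

static bool cpu_id_valid(unsigned int cpu, unsigned int nr_cpu_ids)
{
        return cpu < nr_cpu_ids;        /* i.e. reject when cpu >= nr_cpu_ids */
}

int main(void)
{
        unsigned int nr_cpu_ids = 4;    /* valid ids: 0, 1, 2, 3 */

        assert(cpu_id_valid(3, nr_cpu_ids));
        assert(!cpu_id_valid(4, nr_cpu_ids));   /* the old `>` test let this through */
        return 0;
}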


2017-08-22 19:14:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 29/41] x86/asm/64: Clear AC on NMI entries

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski <[email protected]>

commit e93c17301ac55321fc18e0f8316e924e58a83c8c upstream.

This closes a hole in our SMAP implementation.

This patch comes from grsecurity. Good catch!

Signed-off-by: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/314cc9f294e8f14ed85485727556ad4f15bb1659.1502159503.git.luto@kernel.org
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/entry/entry_64.S | 2 ++
1 file changed, 2 insertions(+)

--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1209,6 +1209,8 @@ ENTRY(nmi)
* other IST entries.
*/

+ ASM_CLAC
+
/* Use %rdx as our temp variable throughout */
pushq %rdx



2017-08-22 19:19:10

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 33/41] irqchip/atmel-aic: Fix unbalanced refcount in aic_common_rtc_irq_fixup()

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Boris Brezillon <[email protected]>

commit 277867ade8262583f4280cadbe90e0031a3706a7 upstream.

of_find_compatible_node() is calling of_node_put() on its first argument
thus leading to an unbalanced of_node_get/put() issue if the node has not
been retained before that.

Instead of passing the root node, pass NULL, which does exactly the same:
iterate over all DT nodes, starting from the root node.

Signed-off-by: Boris Brezillon <[email protected]>
Reported-by: Alexandre Belloni <[email protected]>
Fixes: 3d61467f9bab ("irqchip: atmel-aic: Implement RTC irq fixup")
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/irqchip/irq-atmel-aic-common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/irqchip/irq-atmel-aic-common.c
+++ b/drivers/irqchip/irq-atmel-aic-common.c
@@ -142,9 +142,9 @@ void __init aic_common_rtc_irq_fixup(str
struct device_node *np;
void __iomem *regs;

- np = of_find_compatible_node(root, NULL, "atmel,at91rm9200-rtc");
+ np = of_find_compatible_node(NULL, NULL, "atmel,at91rm9200-rtc");
if (!np)
- np = of_find_compatible_node(root, NULL,
+ np = of_find_compatible_node(NULL, NULL,
"atmel,at91sam9x5-rtc");

if (!np)


2017-08-22 19:19:28

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 32/41] irqchip/atmel-aic: Fix unbalanced of_node_put() in aic_common_irq_fixup()

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Boris Brezillon <[email protected]>

commit 469bcef53c546bb792aa66303933272991b7831d upstream.

aic_common_irq_fixup() is calling twice of_node_put() on the same node
thus leading to an unbalanced refcount on the root node.

Signed-off-by: Boris Brezillon <[email protected]>
Reported-by: Alexandre Belloni <[email protected]>
Fixes: b2f579b58e93 ("irqchip: atmel-aic: Add irq fixup infrastructure")
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/irqchip/irq-atmel-aic-common.c | 1 -
1 file changed, 1 deletion(-)

--- a/drivers/irqchip/irq-atmel-aic-common.c
+++ b/drivers/irqchip/irq-atmel-aic-common.c
@@ -196,7 +196,6 @@ static void __init aic_common_irq_fixup(
return;

match = of_match_node(matches, root);
- of_node_put(root);

if (match) {
void (*fixup)(struct device_node *) = match->data;


2017-08-22 19:19:50

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 31/41] x86/elf: Remove the unnecessary ADDR_NO_RANDOMIZE checks

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Oleg Nesterov <[email protected]>

commit 01578e36163cdd0e4fd61d9976de15f13364e26d upstream.

The ADDR_NO_RANDOMIZE checks in stack_maxrandom_size() and
randomize_stack_top() are not required.

PF_RANDOMIZE is set by load_elf_binary() only if ADDR_NO_RANDOMIZE is not
set, so there is no need to re-check it after that.

Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Dmitry Safonov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/mm/mmap.c | 3 +--
fs/binfmt_elf.c | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)

--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -50,8 +50,7 @@ unsigned long tasksize_64bit(void)
static unsigned long stack_maxrandom_size(unsigned long task_size)
{
unsigned long max = 0;
- if ((current->flags & PF_RANDOMIZE) &&
- !(current->personality & ADDR_NO_RANDOMIZE)) {
+ if (current->flags & PF_RANDOMIZE) {
max = (-1UL) & __STACK_RND_MASK(task_size == tasksize_32bit());
max <<= PAGE_SHIFT;
}
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -666,8 +666,7 @@ static unsigned long randomize_stack_top
{
unsigned long random_variable = 0;

- if ((current->flags & PF_RANDOMIZE) &&
- !(current->personality & ADDR_NO_RANDOMIZE)) {
+ if (current->flags & PF_RANDOMIZE) {
random_variable = get_random_long();
random_variable &= STACK_RND_MASK;
random_variable <<= PAGE_SHIFT;


2017-08-22 19:20:32

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 27/41] xen-blkfront: use a right index when checking requests

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Munehisa Kamata <[email protected]>

commit b15bd8cb37598afb2963f7eb9e2de468d2d60a2f upstream.

Since commit d05d7f40791c ("Merge branch 'for-4.8/core' of
git://git.kernel.dk/linux-block") and 3fc9d690936f ("Merge branch
'for-4.8/drivers' of git://git.kernel.dk/linux-block"), blkfront_resume()
has been using the ring_info iteration index to check requests while
iterating blk_shadow in an inner loop. This seems to have been
accidentally introduced during the massive rewrite of the block layer
macros in those commits.

This may cause a crash like this:

[11798.057074] BUG: unable to handle kernel NULL pointer dereference at 0000000000000048
[11798.058832] IP: [<ffffffff814411fa>] blkfront_resume+0x10a/0x610
....
[11798.061063] Call Trace:
[11798.061063] [<ffffffff8139ce93>] xenbus_dev_resume+0x53/0x140
[11798.061063] [<ffffffff8139ce40>] ? xenbus_dev_probe+0x150/0x150
[11798.061063] [<ffffffff813f359e>] dpm_run_callback+0x3e/0x110
[11798.061063] [<ffffffff813f3a08>] device_resume+0x88/0x190
[11798.061063] [<ffffffff813f4cc0>] dpm_resume+0x100/0x2d0
[11798.061063] [<ffffffff813f5221>] dpm_resume_end+0x11/0x20
[11798.061063] [<ffffffff813950a8>] do_suspend+0xe8/0x1a0
[11798.061063] [<ffffffff813954bd>] shutdown_handler+0xfd/0x130
[11798.061063] [<ffffffff8139aba0>] ? split+0x110/0x110
[11798.061063] [<ffffffff8139ac26>] xenwatch_thread+0x86/0x120
[11798.061063] [<ffffffff810b4570>] ? prepare_to_wait_event+0x110/0x110
[11798.061063] [<ffffffff8108fe57>] kthread+0xd7/0xf0
[11798.061063] [<ffffffff811da811>] ? kfree+0x121/0x170
[11798.061063] [<ffffffff8108fd80>] ? kthread_park+0x60/0x60
[11798.061063] [<ffffffff810863b0>] ? call_usermodehelper_exec_work+0xb0/0xb0
[11798.061063] [<ffffffff810864ea>] ? call_usermodehelper_exec_async+0x13a/0x140
[11798.061063] [<ffffffff81534a45>] ret_from_fork+0x25/0x30

Use the right index in the inner loop.

Fixes: d05d7f40791c ("Merge branch 'for-4.8/core' of git://git.kernel.dk/linux-block")
Fixes: 3fc9d690936f ("Merge branch 'for-4.8/drivers' of git://git.kernel.dk/linux-block")
Signed-off-by: Munehisa Kamata <[email protected]>
Reviewed-by: Thomas Friebel <[email protected]>
Reviewed-by: Eduardo Valentin <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Reviewed-by: Roger Pau Monne <[email protected]>
Cc: [email protected]
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/block/xen-blkfront.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2119,9 +2119,9 @@ static int blkfront_resume(struct xenbus
/*
* Get the bios in the request so we can re-queue them.
*/
- if (req_op(shadow[i].request) == REQ_OP_FLUSH ||
- req_op(shadow[i].request) == REQ_OP_DISCARD ||
- req_op(shadow[i].request) == REQ_OP_SECURE_ERASE ||
+ if (req_op(shadow[j].request) == REQ_OP_FLUSH ||
+ req_op(shadow[j].request) == REQ_OP_DISCARD ||
+ req_op(shadow[j].request) == REQ_OP_SECURE_ERASE ||
shadow[j].request->cmd_flags & REQ_FUA) {
/*
* Flush operations don't contain bios, so
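
The bug class in a standalone form, with illustrative data: when an
inner loop walks the per-ring shadow array, every access in that loop
body must use the inner index.

#include <stdio.h>

struct ring { int shadow[3]; };

int main(void)
{
        struct ring rings[2] = { { { 1, 2, 3 } }, { { 4, 5, 6 } } };

        for (int i = 0; i < 2; i++)                     /* per-ring loop */
                for (int j = 0; j < 3; j++)             /* per-request loop */
                        /* must be shadow[j]; shadow[i] compiles but inspects
                         * the wrong request, as in the blkfront bug above */
                        printf("ring %d req %d = %d\n", i, j, rings[i].shadow[j]);
        return 0;
}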


2017-08-22 19:20:10

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 28/41] perf/x86: Fix RDPMC vs. mm_struct tracking

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit bfe334924ccd9f4a53f30240c03cf2f43f5b2df1 upstream.

Vince reported the following rdpmc() testcase failure:

> Failing test case:
>
> fd=perf_event_open();
> addr=mmap(fd);
> exec() // without closing or unmapping the event
> fd=perf_event_open();
> addr=mmap(fd);
> rdpmc() // GPFs due to rdpmc being disabled

The problem is of course that exec() plays tricks with what is
current->mm, only destroying the old mappings after having
installed the new mm.

Fix this confusion by passing along vma->vm_mm instead of relying on
current->mm.

Reported-by: Vince Weaver <[email protected]>
Tested-by: Vince Weaver <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Andy Lutomirski <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: 1e0fb9ec679c ("perf: Add pmu callbacks to track event mapping and unmapping")
Link: http://lkml.kernel.org/r/[email protected]
[ Minor cleanups. ]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/events/core.c | 16 +++++++---------
include/linux/perf_event.h | 4 ++--
kernel/events/core.c | 6 +++---
3 files changed, 12 insertions(+), 14 deletions(-)

--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2105,7 +2105,7 @@ static void refresh_pce(void *ignored)
load_mm_cr4(current->active_mm);
}

-static void x86_pmu_event_mapped(struct perf_event *event)
+static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
{
if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
return;
@@ -2120,22 +2120,20 @@ static void x86_pmu_event_mapped(struct
* For now, this can't happen because all callers hold mmap_sem
* for write. If this changes, we'll need a different solution.
*/
- lockdep_assert_held_exclusive(&current->mm->mmap_sem);
+ lockdep_assert_held_exclusive(&mm->mmap_sem);

- if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1)
- on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+ if (atomic_inc_return(&mm->context.perf_rdpmc_allowed) == 1)
+ on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
}

-static void x86_pmu_event_unmapped(struct perf_event *event)
+static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
{
- if (!current->mm)
- return;

if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
return;

- if (atomic_dec_and_test(&current->mm->context.perf_rdpmc_allowed))
- on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+ if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
+ on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
}

static int x86_pmu_event_idx(struct perf_event *event)
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -310,8 +310,8 @@ struct pmu {
* Notification that the event was mapped or unmapped. Called
* in the context of the mapping task.
*/
- void (*event_mapped) (struct perf_event *event); /*optional*/
- void (*event_unmapped) (struct perf_event *event); /*optional*/
+ void (*event_mapped) (struct perf_event *event, struct mm_struct *mm); /* optional */
+ void (*event_unmapped) (struct perf_event *event, struct mm_struct *mm); /* optional */

/*
* Flags for ->add()/->del()/ ->start()/->stop(). There are
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5084,7 +5084,7 @@ static void perf_mmap_open(struct vm_are
atomic_inc(&event->rb->aux_mmap_count);

if (event->pmu->event_mapped)
- event->pmu->event_mapped(event);
+ event->pmu->event_mapped(event, vma->vm_mm);
}

static void perf_pmu_output_stop(struct perf_event *event);
@@ -5107,7 +5107,7 @@ static void perf_mmap_close(struct vm_ar
unsigned long size = perf_data_size(rb);

if (event->pmu->event_unmapped)
- event->pmu->event_unmapped(event);
+ event->pmu->event_unmapped(event, vma->vm_mm);

/*
* rb->aux_mmap_count will always drop before rb->mmap_count and
@@ -5405,7 +5405,7 @@ aux_unlock:
vma->vm_ops = &perf_mmap_vmops;

if (event->pmu->event_mapped)
- event->pmu->event_mapped(event);
+ event->pmu->event_mapped(event, vma->vm_mm);

return ret;
}
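
The shape of the fix as a standalone sketch (illustrative types, not
the kernel structures): the callback receives the mm it should account
against as an explicit argument, instead of reaching for an ambient
current->mm that exec() may already have replaced.

#include <stdio.h>

struct mm { int rdpmc_allowed; };
struct event { int flags; };

/* callback now receives the mm explicitly instead of using current->mm */
struct pmu_ops {
        void (*event_mapped)(struct event *e, struct mm *mm);
};

static void x86_event_mapped(struct event *e, struct mm *mm)
{
        (void)e;
        mm->rdpmc_allowed++;            /* count against the mapping's mm */
}

static const struct pmu_ops ops = { .event_mapped = x86_event_mapped };

int main(void)
{
        struct mm vma_mm = { 0 };       /* the mm the vma belongs to */
        struct event ev = { 0 };

        ops.event_mapped(&ev, &vma_mm); /* caller supplies vma->vm_mm */
        printf("rdpmc_allowed = %d\n", vma_mm.rdpmc_allowed);
        return 0;
}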


2017-08-22 19:20:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 10/41] md: always clear ->safemode when md_check_recovery gets the mddev lock.

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: NeilBrown <[email protected]>

commit 33182d15c6bf182f7ae32a66ea4a547d979cd6d7 upstream.

If ->safemode == 1, md_check_recovery() will try to get the mddev lock
and perform various other checks.
If mddev->in_sync is zero, it will call set_in_sync() and clear
->safemode. However, if mddev->in_sync is not zero, ->safemode will not
be cleared.

When md_check_recovery() drops the mddev lock, the thread is woken
up again. Normally it would just check if there was anything else to
do, find nothing, and go to sleep. However as ->safemode was not
cleared, it will take the mddev lock again, then wake itself up
when unlocking.

This results in an infinite loop, repeatedly calling
md_check_recovery(), which RCU or the soft-lockup detector
will eventually complain about.

Prior to commit 4ad23a976413 ("MD: use per-cpu counter for
writes_pending"), safemode would only be set to one when the
writes_pending counter reached zero, and would be cleared again
when writes_pending is incremented. Since that patch, safemode
is set more freely, but is not reliably cleared.

So in md_check_recovery() clear ->safemode before checking ->in_sync.

Fixes: 4ad23a976413 ("MD: use per-cpu counter for writes_pending")
Reported-by: Dominik Brodowski <[email protected]>
Reported-by: David R <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Shaohua Li <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/md.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8639,6 +8639,9 @@ void md_check_recovery(struct mddev *mdd
if (mddev_trylock(mddev)) {
int spares = 0;

+ if (mddev->safemode == 1)
+ mddev->safemode = 0;
+
if (mddev->ro) {
struct md_rdev *rdev;
if (!mddev->external && mddev->in_sync)


2017-08-22 19:21:25

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 25/41] blk-mq-pci: add a fallback when pci_irq_get_affinity returns NULL

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Christoph Hellwig <[email protected]>

commit c005390374957baacbc38eef96ea360559510aa7 upstream.

While pci_irq_get_affinity() should never fail for SMP kernels that
implement the affinity mapping, it will always return NULL in the
UP case, so provide a fallback mapping of all queues to CPU 0 in
that case.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Omar Sandoval <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
block/blk-mq-pci.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)

--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -36,12 +36,18 @@ int blk_mq_pci_map_queues(struct blk_mq_
for (queue = 0; queue < set->nr_hw_queues; queue++) {
mask = pci_irq_get_affinity(pdev, queue);
if (!mask)
- return -EINVAL;
+ goto fallback;

for_each_cpu(cpu, mask)
set->mq_map[cpu] = queue;
}

return 0;
+
+fallback:
+ WARN_ON_ONCE(set->nr_hw_queues > 1);
+ for_each_possible_cpu(cpu)
+ set->mq_map[cpu] = 0;
+ return 0;
}
EXPORT_SYMBOL_GPL(blk_mq_pci_map_queues);
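
A standalone sketch of the fallback (plain C, not the blk-mq code;
array sizes and the affinity stand-in are illustrative): map each CPU
in a queue's affinity mask to that queue, and if the mask cannot be
obtained (the UP case), point every CPU at queue 0 instead of failing.

#include <stdio.h>

#define NR_CPUS 4

/* stand-in for pci_irq_get_affinity(): returns NULL in the "UP" case */
static const int *get_affinity(int queue, int up)
{
        static const int masks[2][NR_CPUS] = {
                { 1, 1, 0, 0 },         /* queue 0 -> cpus 0,1 */
                { 0, 0, 1, 1 },         /* queue 1 -> cpus 2,3 */
        };
        return up ? NULL : masks[queue];
}

static void map_queues(int mq_map[NR_CPUS], int nr_queues, int up)
{
        for (int q = 0; q < nr_queues; q++) {
                const int *mask = get_affinity(q, up);

                if (!mask)
                        goto fallback;
                for (int cpu = 0; cpu < NR_CPUS; cpu++)
                        if (mask[cpu])
                                mq_map[cpu] = q;
        }
        return;

fallback:
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                mq_map[cpu] = 0;        /* everything to queue 0 */
}

int main(void)
{
        int map[NR_CPUS];

        map_queues(map, 2, /*up=*/1);
        printf("cpu0->%d cpu3->%d\n", map[0], map[3]);   /* both 0 */
        return 0;
}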


2017-08-22 19:14:45

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 18/41] mm: fix double mmap_sem unlock on MMF_UNSTABLE enforced SIGBUS

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Michal Hocko <[email protected]>

commit 5b53a6ea886700a128b697a6fe8375340dea2c30 upstream.

Tetsuo Handa has noticed that the MMF_UNSTABLE SIGBUS path in
handle_mm_fault() causes a lockdep splat:

Out of memory: Kill process 1056 (a.out) score 603 or sacrifice child
Killed process 1056 (a.out) total-vm:4268108kB, anon-rss:2246048kB, file-rss:0kB, shmem-rss:0kB
a.out (1169) used greatest stack depth: 11664 bytes left
DEBUG_LOCKS_WARN_ON(depth <= 0)
------------[ cut here ]------------
WARNING: CPU: 6 PID: 1339 at kernel/locking/lockdep.c:3617 lock_release+0x172/0x1e0
CPU: 6 PID: 1339 Comm: a.out Not tainted 4.13.0-rc3-next-20170803+ #142
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/02/2015
RIP: 0010:lock_release+0x172/0x1e0
Call Trace:
up_read+0x1a/0x40
__do_page_fault+0x28e/0x4c0
do_page_fault+0x30/0x80
page_fault+0x28/0x30

The reason is that the page fault path might have dropped the mmap_sem
and returned with VM_FAULT_RETRY. The MMF_UNSTABLE check however
rewrites the error path to VM_FAULT_SIGBUS and we always expect
mmap_sem to be held in that path. Fix this by taking the mmap_sem again
when VM_FAULT_RETRY is returned in the MMF_UNSTABLE path.

We cannot simply add VM_FAULT_SIGBUS to the existing error code because
all arch specific page fault handlers and g-u-p would have to learn a
new error code combination.

Link: http://lkml.kernel.org/r/[email protected]
Fixes: 3f70dc38cec2 ("mm: make sure that kthreads will not refault oom reaped memory")
Reported-by: Tetsuo Handa <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: David Rientjes <[email protected]>
Cc: Andrea Argangeli <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Wenwei Tao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/memory.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)

--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3882,8 +3882,18 @@ int handle_mm_fault(struct vm_area_struc
* further.
*/
if (unlikely((current->flags & PF_KTHREAD) && !(ret & VM_FAULT_ERROR)
- && test_bit(MMF_UNSTABLE, &vma->vm_mm->flags)))
+ && test_bit(MMF_UNSTABLE, &vma->vm_mm->flags))) {
+
+ /*
+ * We are going to enforce SIGBUS but the PF path might have
+ * dropped the mmap_sem already so take it again so that
+ * we do not break expectations of all arch specific PF paths
+ * and g-u-p
+ */
+ if (ret & VM_FAULT_RETRY)
+ down_read(&vma->vm_mm->mmap_sem);
ret = VM_FAULT_SIGBUS;
+ }

return ret;
}
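
A userspace sketch of the locking invariant being restored (a pthread
rwlock stands in for mmap_sem; the flags and helpers are illustrative):
if the fault path reported RETRY it already dropped the lock, so it
must be re-taken before returning an error that the caller will unlock
after.

#include <pthread.h>
#include <stdio.h>

#define VM_FAULT_RETRY  0x1
#define VM_FAULT_SIGBUS 0x2

static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

/* stand-in for the slow fault path that drops mmap_sem and asks for a retry */
static int do_fault(void)
{
        pthread_rwlock_unlock(&mmap_sem);
        return VM_FAULT_RETRY;
}

static int handle_fault(int force_sigbus)
{
        int ret = do_fault();

        if (force_sigbus) {                     /* the MMF_UNSTABLE case */
                if (ret & VM_FAULT_RETRY)       /* lock was dropped above */
                        pthread_rwlock_rdlock(&mmap_sem);
                ret = VM_FAULT_SIGBUS;          /* caller unlocks exactly once */
        }
        return ret;
}

int main(void)
{
        pthread_rwlock_rdlock(&mmap_sem);       /* fault handler takes it */
        if (handle_fault(1) == VM_FAULT_SIGBUS)
                pthread_rwlock_unlock(&mmap_sem);
        printf("unlocked exactly once\n");
        return 0;
}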


2017-08-22 19:21:48

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 24/41] ARM: dts: imx6qdl-nitrogen6_som2: fix PCIe reset

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Gary Bisson <[email protected]>

commit c40bc54fdf2d52a80f66b365f1eac9d43b32e107 upstream.

The previous value was a bad copy from the nitrogen6_max device tree.

Signed-off-by: Gary Bisson <[email protected]>
Fixes: 3faa1bb2e89c ("ARM: dts: imx: add Boundary Devices Nitrogen6_SOM2 support")
Signed-off-by: Shawn Guo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
@@ -507,7 +507,7 @@
pinctrl_pcie: pciegrp {
fsl,pins = <
/* PCIe reset */
- MX6QDL_PAD_EIM_BCLK__GPIO6_IO31 0x030b0
+ MX6QDL_PAD_EIM_DA0__GPIO3_IO00 0x030b0
MX6QDL_PAD_EIM_DA4__GPIO3_IO04 0x030b0
>;
};
@@ -668,7 +668,7 @@
&pcie {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pcie>;
- reset-gpio = <&gpio6 31 GPIO_ACTIVE_LOW>;
+ reset-gpio = <&gpio3 0 GPIO_ACTIVE_LOW>;
status = "okay";
};



2017-08-22 19:14:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 19/41] mm/cma_debug.c: fix stack corruption due to sprintf usage

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Prakash Gupta <[email protected]>

commit da094e42848e3c36feaa3b5271e53983fd45424f upstream.

name[] in cma_debugfs_add_one() can only accommodate 16 chars including
the NUL terminator for the sprintf output. It's common for a cma device
name to be longer than 15 chars. This can cause stack corruption. If
the gcc stack protector is turned on, this can cause a panic due to
stack corruption.

Below is one example trace:

Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in:
ffffff8e69a75730
Call trace:
dump_backtrace+0x0/0x2c4
show_stack+0x20/0x28
dump_stack+0xb8/0xf4
panic+0x154/0x2b0
print_tainted+0x0/0xc0
cma_debugfs_init+0x274/0x290
do_one_initcall+0x5c/0x168
kernel_init_freeable+0x1c8/0x280

Fix the short sprintf buffer in cma_debugfs_add_one() by using
scnprintf() instead of sprintf().

Link: http://lkml.kernel.org/r/[email protected]
Fixes: f318dd083c81 ("cma: Store a name in the cma structure")
Signed-off-by: Prakash Gupta <[email protected]>
Acked-by: Laura Abbott <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/cma_debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -167,7 +167,7 @@ static void cma_debugfs_add_one(struct c
char name[16];
int u32s;

- sprintf(name, "cma-%s", cma->name);
+ scnprintf(name, sizeof(name), "cma-%s", cma->name);

tmp = debugfs_create_dir(name, cma_debugfs_root);
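
A standalone demonstration of why the bounded form matters: sprintf()
into a 16-byte stack buffer overflows for names longer than 11
characters ("cma-" plus the NUL leaves 11), while the bounded form
truncates safely. The userspace snprintf() below stands in for the
kernel's scnprintf(), which differs only in returning the number of
bytes actually stored rather than the would-be length.

#include <stdio.h>
#include <string.h>

int main(void)
{
        char name[16];
        const char *cma_name = "reserved_region_for_device";   /* > 11 chars */

        /* sprintf(name, "cma-%s", cma_name) would write past name[15] and
         * corrupt the stack; the bounded form truncates instead */
        int n = snprintf(name, sizeof(name), "cma-%s", cma_name);

        printf("wanted %d bytes, stored \"%s\" (%zu bytes)\n",
               n, name, strlen(name));
        return 0;
}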



2017-08-22 19:22:12

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 22/41] mm: revert x86_64 and arm64 ELF_ET_DYN_BASE base changes

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit c715b72c1ba406f133217b509044c38d8e714a37 upstream.

Moving the x86_64 and arm64 PIE base from 0x555555554000 to 0x000100000000
broke AddressSanitizer. This is a partial revert of:

eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")

The AddressSanitizer tool has hard-coded expectations about where
executable mappings are loaded.

The motivation for changing the PIE base in the above commits was to
avoid the Stack-Clash CVEs that allowed executable mappings to get too
close to heap and stack. This was mainly a problem on 32-bit, but the
64-bit bases were moved too, in an effort to proactively protect those
systems (proofs of concept do exist that show 64-bit collisions, but
other recent changes to fix stack accounting and setuid behaviors will
minimize the impact).

The new 32-bit PIE base is fine for ASan (since it matches the ET_EXEC
base), so only the 64-bit PIE base needs to be reverted to let x86 and
arm64 ASan binaries run again. Future changes to the 64-bit PIE base on
these architectures can be made optional once a more dynamic method for
dealing with AddressSanitizer is found. (e.g. always loading PIE into
the mmap region for marked binaries.)

Link: http://lkml.kernel.org/r/20170807201542.GA21271@beast
Fixes: eab09532d400 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
Fixes: 02445990a96e ("arm64: move ELF_ET_DYN_BASE to 4GB / 4MB")
Signed-off-by: Kees Cook <[email protected]>
Reported-by: Kostya Serebryany <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/include/asm/elf.h | 4 ++--
arch/x86/include/asm/elf.h | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)

--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -114,10 +114,10 @@

/*
* This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
* space open for things that want to use the area for 32-bit pointers.
*/
-#define ELF_ET_DYN_BASE 0x100000000UL
+#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)

#ifndef __ASSEMBLY__

--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -247,11 +247,11 @@ extern int force_personality32;

/*
* This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
* space open for things that want to use the area for 32-bit pointers.
*/
#define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \
- 0x100000000UL)
+ (TASK_SIZE / 3 * 2))

/* This yields a mask that user programs can use to figure out what
instruction set this CPU supports. This could be done in user space,
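
A quick check of the restored formula, assuming the usual 47-bit
x86-64 user address space (the TASK_SIZE constant below is an
assumption, not taken from this patch): 2/3 of it lands around
0x555555554aaa, well above 4GB and back near the pre-change PIE base
that ASan binaries were built around.

#include <stdio.h>

int main(void)
{
        /* assumed 47-bit x86-64 user VA limit; illustrative only */
        unsigned long long task_size = 0x00007ffffffff000ULL;
        unsigned long long base = task_size / 3 * 2;

        /* prints 0x555555554aaa -- above 4GB, close to the classic
         * 0x555555554000 base once the loader rounds/aligns it */
        printf("ELF_ET_DYN_BASE ~= %#llx (4GB = %#llx)\n", base, 1ULL << 32);
        return 0;
}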


2017-08-22 19:22:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 21/41] mm/vmalloc.c: don't unconditonally use __GFP_HIGHMEM

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Laura Abbott <[email protected]>

commit 704b862f9efd6d4c87a8d0a344dda19bda9c6b69 upstream.

Commit 19809c2da28a ("mm, vmalloc: use __GFP_HIGHMEM implicitly") added
use of __GFP_HIGHMEM for allocations. vmalloc_32 may use
GFP_DMA/GFP_DMA32 which does not play nice with __GFP_HIGHMEM and will
trigger a BUG in gfp_zone.

Only add __GFP_HIGHMEM if we aren't using GFP_DMA/GFP_DMA32.

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1482249
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 19809c2da28a ("mm, vmalloc: use __GFP_HIGHMEM implicitly")
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/vmalloc.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1669,7 +1669,10 @@ static void *__vmalloc_area_node(struct
struct page **pages;
unsigned int nr_pages, array_size, i;
const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
- const gfp_t alloc_mask = gfp_mask | __GFP_HIGHMEM | __GFP_NOWARN;
+ const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN;
+ const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ?
+ 0 :
+ __GFP_HIGHMEM;

nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
array_size = (nr_pages * sizeof(struct page *));
@@ -1677,7 +1680,7 @@ static void *__vmalloc_area_node(struct
area->nr_pages = nr_pages;
/* Please note that the recursion is strictly bounded. */
if (array_size > PAGE_SIZE) {
- pages = __vmalloc_node(array_size, 1, nested_gfp|__GFP_HIGHMEM,
+ pages = __vmalloc_node(array_size, 1, nested_gfp|highmem_mask,
PAGE_KERNEL, node, area->caller);
} else {
pages = kmalloc_node(array_size, nested_gfp, node);
@@ -1698,9 +1701,9 @@ static void *__vmalloc_area_node(struct
}

if (node == NUMA_NO_NODE)
- page = alloc_page(alloc_mask);
+ page = alloc_page(alloc_mask|highmem_mask);
else
- page = alloc_pages_node(node, alloc_mask, 0);
+ page = alloc_pages_node(node, alloc_mask|highmem_mask, 0);

if (unlikely(!page)) {
/* Successfully allocated i pages, free them in __vunmap() */
@@ -1708,7 +1711,7 @@ static void *__vmalloc_area_node(struct
goto fail;
}
area->pages[i] = page;
- if (gfpflags_allow_blocking(gfp_mask))
+ if (gfpflags_allow_blocking(gfp_mask|highmem_mask))
cond_resched();
}
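
The flag selection in isolation, with illustrative bit values (not the
kernel's): __GFP_HIGHMEM is only OR-ed in when the caller did not ask
for a DMA zone, since the two are an invalid combination.

#include <assert.h>

#define GFP_DMA       0x01
#define GFP_DMA32     0x02
#define GFP_HIGHMEM   0x04

static unsigned int highmem_mask(unsigned int gfp_mask)
{
        return (gfp_mask & (GFP_DMA | GFP_DMA32)) ? 0 : GFP_HIGHMEM;
}

int main(void)
{
        assert(highmem_mask(0) == GFP_HIGHMEM);         /* normal vmalloc */
        assert(highmem_mask(GFP_DMA32) == 0);           /* vmalloc_32 path */
        return 0;
}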



2017-08-22 19:23:12

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 09/41] md: fix test in md_write_start()

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: NeilBrown <[email protected]>

commit 81fe48e9aa00bdd509bd3c37a76d1132da6b9f09 upstream.

md_write_start() needs to clear the in_sync flag if it is set, or if
there might be a race with set_in_sync() such that the latter will
set it very soon. In the latter case it is sufficient to take the
spinlock to synchronize with set_in_sync(), and then set the flag
if needed.

The current test is incorrect.
It should be:
if "flag is set" or "race is possible"

"flag is set" is trivially "mddev->in_sync".
"race is possible" should be tested by "mddev->sync_checkers".

If sync_checkers is 0, then there can be no race. set_in_sync() will
wait in percpu_ref_switch_to_atomic_sync() for an RCU grace period,
and as md_write_start() holds the rcu_read_lock(), set_in_sync() will
be sure to see the update to writes_pending.

If sync_checkers is > 0, there could be a race. If md_write_start()
happened entirely between
if (!mddev->in_sync &&
percpu_ref_is_zero(&mddev->writes_pending)) {
and
mddev->in_sync = 1;
in set_in_sync(), then it would not see that in_sync had been set,
and set_in_sync() would not see that writes_pending had been
incremented.

This bug means that in_sync is sometimes not set when it should be.
Consequently there is a small chance that the array will be marked as
"clean" when in fact it is inconsistent.

Fixes: 4ad23a976413 ("MD: use per-cpu counter for writes_pending")
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Shaohua Li <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/md.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7979,7 +7979,7 @@ bool md_write_start(struct mddev *mddev,
if (mddev->safemode == 1)
mddev->safemode = 0;
/* sync_checkers is always 0 when writes_pending is in per-cpu mode */
- if (mddev->in_sync || !mddev->sync_checkers) {
+ if (mddev->in_sync || mddev->sync_checkers) {
spin_lock(&mddev->lock);
if (mddev->in_sync) {
mddev->in_sync = 0;


2017-08-22 19:23:56

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 05/41] drm/i915: Perform an invalidate prior to executing golden renderstate

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Chris Wilson <[email protected]>

commit a0125a932e917cb507b682cb66645efdca1f8cab upstream.

As we may have just bound the renderstate into the GGTT for execution, we
need to ensure that the GTT TLBs are also flushed.

On snb-gt2, this would cause a random GPU hang at the start of a new
context (e.g. boot) and on snb-gt1, it was causing the renderstate batch
to take ~10s. It was the GPU hang that revealed the truth, as the CS
gleefully executed beyond the end of the golden renderstate batch, a good
indicator for a GTT TLB miss.

Fixes: 20fe17aa52dc ("drm/i915: Remove redundant TLB invalidate on switching contexts")
Signed-off-by: Chris Wilson <[email protected]>
Cc: Mika Kuoppala <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Reviewed-by: Mika Kuoppala <[email protected]>
Cc: <[email protected]> # v4.12-rc1+
(cherry picked from commit 802673d66f8a6ded5d2689d597853c7bb3a70163)
Signed-off-by: Jani Nikula <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/i915/i915_gem_render_state.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -242,6 +242,10 @@ int i915_gem_render_state_emit(struct dr
goto err_unpin;
}

+ ret = req->engine->emit_flush(req, EMIT_INVALIDATE);
+ if (ret)
+ goto err_unpin;
+
ret = req->engine->emit_bb_start(req,
so->batch_offset, so->batch_size,
I915_DISPATCH_SECURE);


2017-08-22 19:14:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 02/41] parisc: pci memory bar assignment fails with 64bit kernels on dino/cujo

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Bogendoerfer <[email protected]>

commit 4098116039911e8870d84c975e2ec22dab65a909 upstream.

For 64-bit kernels the lmmio_space_offset of the host bridge window
isn't set correctly on systems with dino/cujo PCI host bridges.
This leads to unassigned memory BARs and failing drivers that
need to use these BARs.

Signed-off-by: Thomas Bogendoerfer <[email protected]>
Acked-by: Helge Deller <[email protected]>
Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/parisc/dino.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/parisc/dino.c
+++ b/drivers/parisc/dino.c
@@ -956,7 +956,7 @@ static int __init dino_probe(struct pari

dino_dev->hba.dev = dev;
dino_dev->hba.base_addr = ioremap_nocache(hpa, 4096);
- dino_dev->hba.lmmio_space_offset = 0; /* CPU addrs == bus addrs */
+ dino_dev->hba.lmmio_space_offset = PCI_F_EXTEND;
spin_lock_init(&dino_dev->dinosaur_pen);
dino_dev->hba.iommu = ccio_get_iommu(dev);



2017-08-22 19:24:21

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 04/41] crypto: x86/sha1 - Fix reads beyond the number of blocks passed

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: [email protected] <[email protected]>

commit 8861249c740fc4af9ddc5aee321eafefb960d7c6 upstream.

It was reported that the sha1 AVX2 function (sha1_transform_avx2) is
reading ahead beyond its intended data, and causing a crash if the next
block is beyond the page boundary:
http://marc.info/?l=linux-crypto-vger&m=149373371023377

This patch makes sure that there is no overflow for any buffer length.

It passes the tests written by Jan Stancek that revealed this problem:
https://github.com/jstancek/sha1-avx2-crash

I have re-enabled sha1-avx2 by reverting commit
b82ce24426a4071da9529d726057e4e642948667

Fixes: b82ce24426a4 ("crypto: sha1-ssse3 - Disable avx2")
Originally-by: Ilya Albrekht <[email protected]>
Tested-by: Jan Stancek <[email protected]>
Signed-off-by: Megha Dey <[email protected]>
Reported-by: Jan Stancek <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/crypto/sha1_avx2_x86_64_asm.S | 67 +++++++++++++++++----------------
arch/x86/crypto/sha1_ssse3_glue.c | 2
2 files changed, 37 insertions(+), 32 deletions(-)

--- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
+++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
@@ -117,11 +117,10 @@
.set T1, REG_T1
.endm

-#define K_BASE %r8
#define HASH_PTR %r9
+#define BLOCKS_CTR %r8
#define BUFFER_PTR %r10
#define BUFFER_PTR2 %r13
-#define BUFFER_END %r11

#define PRECALC_BUF %r14
#define WK_BUF %r15
@@ -205,14 +204,14 @@
* blended AVX2 and ALU instruction scheduling
* 1 vector iteration per 8 rounds
*/
- vmovdqu ((i * 2) + PRECALC_OFFSET)(BUFFER_PTR), W_TMP
+ vmovdqu (i * 2)(BUFFER_PTR), W_TMP
.elseif ((i & 7) == 1)
- vinsertf128 $1, (((i-1) * 2)+PRECALC_OFFSET)(BUFFER_PTR2),\
+ vinsertf128 $1, ((i-1) * 2)(BUFFER_PTR2),\
WY_TMP, WY_TMP
.elseif ((i & 7) == 2)
vpshufb YMM_SHUFB_BSWAP, WY_TMP, WY
.elseif ((i & 7) == 4)
- vpaddd K_XMM(K_BASE), WY, WY_TMP
+ vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
.elseif ((i & 7) == 7)
vmovdqu WY_TMP, PRECALC_WK(i&~7)

@@ -255,7 +254,7 @@
vpxor WY, WY_TMP, WY_TMP
.elseif ((i & 7) == 7)
vpxor WY_TMP2, WY_TMP, WY
- vpaddd K_XMM(K_BASE), WY, WY_TMP
+ vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
vmovdqu WY_TMP, PRECALC_WK(i&~7)

PRECALC_ROTATE_WY
@@ -291,7 +290,7 @@
vpsrld $30, WY, WY
vpor WY, WY_TMP, WY
.elseif ((i & 7) == 7)
- vpaddd K_XMM(K_BASE), WY, WY_TMP
+ vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
vmovdqu WY_TMP, PRECALC_WK(i&~7)

PRECALC_ROTATE_WY
@@ -446,6 +445,16 @@

.endm

+/* Add constant only if (%2 > %3) condition met (uses RTA as temp)
+ * %1 + %2 >= %3 ? %4 : 0
+ */
+.macro ADD_IF_GE a, b, c, d
+ mov \a, RTA
+ add $\d, RTA
+ cmp $\c, \b
+ cmovge RTA, \a
+.endm
+
/*
* macro implements 80 rounds of SHA-1, for multiple blocks with s/w pipelining
*/
@@ -463,13 +472,16 @@
lea (2*4*80+32)(%rsp), WK_BUF

# Precalc WK for first 2 blocks
- PRECALC_OFFSET = 0
+ ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 2, 64
.set i, 0
.rept 160
PRECALC i
.set i, i + 1
.endr
- PRECALC_OFFSET = 128
+
+ /* Go to next block if needed */
+ ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 3, 128
+ ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 4, 128
xchg WK_BUF, PRECALC_BUF

.align 32
@@ -479,8 +491,8 @@ _loop:
* we use K_BASE value as a signal of a last block,
* it is set below by: cmovae BUFFER_PTR, K_BASE
*/
- cmp K_BASE, BUFFER_PTR
- jne _begin
+ test BLOCKS_CTR, BLOCKS_CTR
+ jnz _begin
.align 32
jmp _end
.align 32
@@ -512,10 +524,10 @@ _loop0:
.set j, j+2
.endr

- add $(2*64), BUFFER_PTR /* move to next odd-64-byte block */
- cmp BUFFER_END, BUFFER_PTR /* is current block the last one? */
- cmovae K_BASE, BUFFER_PTR /* signal the last iteration smartly */
-
+ /* Update Counter */
+ sub $1, BLOCKS_CTR
+ /* Move to the next block only if needed*/
+ ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 4, 128
/*
* rounds
* 60,62,64,66,68
@@ -532,8 +544,8 @@ _loop0:
UPDATE_HASH 12(HASH_PTR), D
UPDATE_HASH 16(HASH_PTR), E

- cmp K_BASE, BUFFER_PTR /* is current block the last one? */
- je _loop
+ test BLOCKS_CTR, BLOCKS_CTR
+ jz _loop

mov TB, B

@@ -575,10 +587,10 @@ _loop2:
.set j, j+2
.endr

- add $(2*64), BUFFER_PTR2 /* move to next even-64-byte block */
-
- cmp BUFFER_END, BUFFER_PTR2 /* is current block the last one */
- cmovae K_BASE, BUFFER_PTR /* signal the last iteration smartly */
+ /* update counter */
+ sub $1, BLOCKS_CTR
+ /* Move to the next block only if needed*/
+ ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 4, 128

jmp _loop3
_loop3:
@@ -641,19 +653,12 @@ _loop3:

avx2_zeroupper

- lea K_XMM_AR(%rip), K_BASE
-
+ /* Setup initial values */
mov CTX, HASH_PTR
mov BUF, BUFFER_PTR
- lea 64(BUF), BUFFER_PTR2
-
- shl $6, CNT /* mul by 64 */
- add BUF, CNT
- add $64, CNT
- mov CNT, BUFFER_END

- cmp BUFFER_END, BUFFER_PTR2
- cmovae K_BASE, BUFFER_PTR2
+ mov BUF, BUFFER_PTR2
+ mov CNT, BLOCKS_CTR

xmm_mov BSWAP_SHUFB_CTL(%rip), YMM_SHUFB_BSWAP

--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -201,7 +201,7 @@ asmlinkage void sha1_transform_avx2(u32

static bool avx2_usable(void)
{
- if (false && avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
+ if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
&& boot_cpu_has(X86_FEATURE_BMI1)
&& boot_cpu_has(X86_FEATURE_BMI2))
return true;
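
The control-flow change in a standalone, purely scalar form (no SIMD;
names are illustrative): drive the loop with a block counter and only
advance the read pointer while blocks remain, so nothing past the
caller's last block is ever touched, whereas the original end-pointer
scheme could read ahead of the caller's data.

#include <stdio.h>
#include <string.h>

#define BLK 64

/* process `blocks` 64-byte blocks; never read beyond the last one */
static unsigned int sum_blocks(const unsigned char *buf, unsigned long blocks)
{
        unsigned int acc = 0;

        while (blocks) {
                for (int i = 0; i < BLK; i++)
                        acc += buf[i];
                blocks--;
                if (blocks)             /* advance only if another block exists */
                        buf += BLK;
        }
        return acc;
}

int main(void)
{
        unsigned char data[2 * BLK];

        memset(data, 1, sizeof(data));
        printf("sum = %u\n", sum_blocks(data, 2));      /* 128 */
        return 0;
}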


2017-08-22 19:24:46

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.12 01/41] audit: Fix use after free in audit_remove_watch_rule()

4.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jan Kara <[email protected]>

commit d76036ab47eafa6ce52b69482e91ca3ba337d6d6 upstream.

audit_remove_watch_rule() drops the watch's reference to the parent but
then continues to work with it. That is not safe, as the parent can get
freed once we drop our reference. The following is a trivial reproducer:

mount -o loop image /mnt
touch /mnt/file
auditctl -w /mnt/file -p wax
umount /mnt
auditctl -D
<crash in fsnotify_destroy_mark()>

Grab our own reference in audit_remove_watch_rule() earlier to make sure
the mark does not get freed under us.

Reported-by: Tony Jones <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Tested-by: Tony Jones <[email protected]>
Signed-off-by: Paul Moore <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/audit_watch.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

--- a/kernel/audit_watch.c
+++ b/kernel/audit_watch.c
@@ -457,13 +457,15 @@ void audit_remove_watch_rule(struct audi
list_del(&krule->rlist);

if (list_empty(&watch->rules)) {
+ /*
+ * audit_remove_watch() drops our reference to 'parent' which
+ * can get freed. Grab our own reference to be safe.
+ */
+ audit_get_parent(parent);
audit_remove_watch(watch);
-
- if (list_empty(&parent->watches)) {
- audit_get_parent(parent);
+ if (list_empty(&parent->watches))
fsnotify_destroy_mark(&parent->mark, audit_watch_group);
- audit_put_parent(parent);
- }
+ audit_put_parent(parent);
}
}
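
The refcount rule in a standalone sketch (names are illustrative, not
the audit code): if a helper you call may drop the last reference to an
object you still need, take your own reference first and put it once
you are done touching the object.

#include <stdio.h>
#include <stdlib.h>

struct parent {
        int refcount;
};

static struct parent *parent_get(struct parent *p) { p->refcount++; return p; }

static void parent_put(struct parent *p)
{
        if (--p->refcount == 0) {
                printf("parent freed\n");
                free(p);
        }
}

/* stand-in for audit_remove_watch(): drops the watch's parent reference */
static void remove_watch(struct parent *p)
{
        parent_put(p);
}

int main(void)
{
        struct parent *p = calloc(1, sizeof(*p));

        if (!p)
                return 1;
        p->refcount = 1;                /* the watch's reference */

        parent_get(p);                  /* our own reference, taken first */
        remove_watch(p);                /* may drop the last other reference */
        printf("still safe to touch p: refcount=%d\n", p->refcount);
        parent_put(p);                  /* now it really goes away */
        return 0;
}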



2017-08-23 00:33:52

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 4.12 00/41] 4.12.9-stable review

On 08/22/2017 01:13 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.12.9 release.
> There are 41 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Aug 24 19:09:29 UTC 2017.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.12.9-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.12.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Compiled and booted on my test system. No dmesg regressions.

thanks,
-- Shuah

2017-08-23 00:48:44

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 4.12 00/41] 4.12.9-stable review

On Tue, Aug 22, 2017 at 06:33:41PM -0600, Shuah Khan wrote:
> On 08/22/2017 01:13 PM, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.12.9 release.
> > There are 41 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Thu Aug 24 19:09:29 UTC 2017.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.12.9-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.12.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
> >
>
> Compiled and booted on my test system. No dmesg regressions.

Great, thanks for testing all of these and letting me know.

greg k-h

2017-08-27 18:18:10

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 4.12 00/41] 4.12.9-stable review

On 08/22/2017 12:13 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.12.9 release.
> There are 41 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Aug 24 19:09:29 UTC 2017.
> Anything received after that time might be too late.
>

Build results:
total: 145 pass: 145 fail: 0
Qemu test results:
total: 122 pass: 122 fail: 0

Details are available at http://kerneltests.org/builders.

Guenter