2017-04-19 14:37:54

by Greg Kroah-Hartman

Subject: [PATCH 4.10 00/69] 4.10.12-stable review

This is the start of the stable review cycle for the 4.10.12 release.
There are 69 patches in this series, all of which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Fri Apr 21 14:15:37 UTC 2017.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.10.12-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.10.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 4.10.12-rc1

Omar Sandoval <[email protected]>
virtio-console: avoid DMA from stack

Stefan Brüns <[email protected]>
cxusb: Use a dma capable buffer also for reading

Kees Cook <[email protected]>
mm: Tighten x86 /dev/mem with zeroing reads

Thierry Reding <[email protected]>
rtc: tegra: Implement clock handling

Lv Zheng <[email protected]>
ACPI / EC: Use busy polling mode when GPE is not enabled

Mohit Gambhir <[email protected]>
x86/xen: Fix APIC id mismatch warning on Intel

Lee, Chun-Yi <[email protected]>
platform/x86: acer-wmi: setup accelerometer when machine has appropriate notify event

Andy Shevchenko <[email protected]>
ASoC: Intel: select DW_DMAC_CORE since it's mandatory

Arnd Bergmann <[email protected]>
dvb-usb-v2: avoid use-after-free

Helge Deller <[email protected]>
parisc: Fix get_user() for 64-bit value on 32-bit kernel

Herbert Xu <[email protected]>
crypto: lrw - Fix use-after-free on EINPROGRESS

Herbert Xu <[email protected]>
crypto: ahash - Fix EINPROGRESS notification callback

Herbert Xu <[email protected]>
crypto: xts - Fix use-after-free on EINPROGRESS

Herbert Xu <[email protected]>
crypto: algif_aead - Fix bogus request dereference in completion function

Namhyung Kim <[email protected]>
ftrace: Fix function pid filter on instances

Minchan Kim <[email protected]>
zram: do not use copy_page with non-page aligned address

Greg Kroah-Hartman <[email protected]>
Revert "MIPS: Lantiq: Fix cascaded IRQ setup"

Max Bires <[email protected]>
char: lack of bool string made CONFIG_DEVPORT always on

Min He <[email protected]>
drm/i915/gvt: set the correct default value of CTX STATUS PTR

Steven Rostedt (VMware) <[email protected]>
ftrace: Fix removing of second function probe

Tyler Baker <[email protected]>
irqchip/irq-imx-gpcv2: Fix spinlock initialization

Chen Yu <[email protected]>
cpufreq: Bring CPUs up even if cpufreq_online() failed

David Wu <[email protected]>
pwm: rockchip: State of PWM clock should synchronize with PWM enabled state

Markus Marb <[email protected]>
can: ifi: use correct register to read rx status

Dan Williams <[email protected]>
libnvdimm: band aid btt vs clear poison locking

Dan Williams <[email protected]>
libnvdimm: fix reconfig_mutex, mmap_sem, and jbd2_handle lockdep splat

Dan Williams <[email protected]>
libnvdimm: fix blk free space accounting

Al Viro <[email protected]>
make skb_copy_datagram_msg() et.al. preserve ->msg_iter on error

Al Viro <[email protected]>
new primitive: iov_iter_revert()

Juergen Gross <[email protected]>
xen, fbfront: fix connecting to backend

Nicholas Bellinger <[email protected]>
target: Avoid mappedlun symlink creation during lun shutdown

Martin K. Petersen <[email protected]>
scsi: sd: Fix capacity calculation with 32-bit sector_t

Sawan Chandak <[email protected]>
scsi: qla2xxx: Add fix to read correct register value for ISP82xx.

Fam Zheng <[email protected]>
scsi: sd: Consider max_xfer_blocks if opt_xfer_blocks is unusable

Martin K. Petersen <[email protected]>
scsi: sr: Sanity check returned mode data

Nicholas Bellinger <[email protected]>
iscsi-target: Drop work-around for legacy GlobalSAN initiator

Nicholas Bellinger <[email protected]>
iscsi-target: Fix TMR reference leak during session shutdown

Ard Biesheuvel <[email protected]>
efi/fb: Avoid reconfiguration of BAR that covers the framebuffer

Cohen, Eugene <[email protected]>
efi/libstub: Skip GOP with PIXEL_BLT_ONLY format

Mikulas Patocka <[email protected]>
parisc: fix bugs in pa_memcpy

Rafael J. Wysocki <[email protected]>
ACPI / scan: Set the visited flag for all enumerated devices

Dan Williams <[email protected]>
acpi, nfit, libnvdimm: fix interleave set cookie calculation (64-bit comparison)

Thomas Gleixner <[email protected]>
x86/vdso: Plug race between mapping and ELF header setup

Mathias Krause <[email protected]>
x86/vdso: Ensure vdso32_enabled gets set to valid values only

Dan Williams <[email protected]>
x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions

Jiri Olsa <[email protected]>
x86/intel_rdt: Fix locking in rdtgroup_schemata_write()

Joerg Roedel <[email protected]>
x86/signals: Fix lower/upper bound reporting in compat siginfo

Omar Sandoval <[email protected]>
x86/efi: Don't try to reserve runtime regions

Peter Zijlstra <[email protected]>
perf/x86: Avoid exposing wrong/stale data in intel_pmu_lbr_read_32()

Christian Borntraeger <[email protected]>
perf annotate s390: Fix perf annotate error -95 (4.10 regression)

Cameron Gutman <[email protected]>
Input: xpad - add support for Razer Wildcat gamepad

Germano Percossi <[email protected]>
CIFS: store results of cifs_reopen_file to avoid infinite wait

Germano Percossi <[email protected]>
CIFS: reconnect thread reschedule itself

Michel Dänzer <[email protected]>
drm/fb-helper: Allow var->x/yres(_virtual) < fb->width/height again

Wei Yongjun <[email protected]>
drm/etnaviv: fix missing unlock on error in etnaviv_gpu_submit()

Ben Skeggs <[email protected]>
drm/nouveau: initial support (display-only) for GP107

Ben Skeggs <[email protected]>
drm/nouveau/kms/nv50: fix double dma_fence_put() when destroying plane state

Ben Skeggs <[email protected]>
drm/nouveau/kms/nv50: fix setting of HeadSetRasterVertBlankDmi method

Ilia Mirkin <[email protected]>
drm/nouveau/mmu/nv4a: use nv04 mmu rather than the nv44 one

Ilia Mirkin <[email protected]>
drm/nouveau/mpeg: mthd returns true on success now

Martin Brandenburg <[email protected]>
orangefs: free superblock when mount fails

Minchan Kim <[email protected]>
zsmalloc: expand class bit

Kirill A. Shutemov <[email protected]>
thp: fix MADV_DONTNEED vs clear soft dirty race

Kirill A. Shutemov <[email protected]>
thp: fix MADV_DONTNEED vs. MADV_FREE race

Xiubo Li <[email protected]>
tcmu: Skip Data-Out blocks before gathering Data-In buffer for BIDI case

Xiubo Li <[email protected]>
tcmu: Fix wrongly calculating of the base_command_size

Xiubo Li <[email protected]>
tcmu: Fix possible overwrite of t_data_sg's last iov[]

Paul Moore <[email protected]>
audit: make sure we don't let the retry queue grow without bounds

Tejun Heo <[email protected]>
cgroup, kthread: close race window where new kthreads can be migrated to non-root cgroups


-------------

Diffstat:

Makefile | 4 +-
arch/mips/lantiq/irq.c | 38 +++++-----
arch/parisc/include/asm/uaccess.h | 86 +++++++++++++--------
arch/parisc/lib/lusercopy.S | 27 +++----
arch/x86/entry/vdso/vdso32-setup.c | 11 ++-
arch/x86/events/intel/lbr.c | 3 +
arch/x86/include/asm/elf.h | 2 +-
arch/x86/include/asm/pmem.h | 42 ++++++++---
arch/x86/kernel/cpu/intel_rdt_schemata.c | 2 +-
arch/x86/kernel/signal_compat.c | 4 +-
arch/x86/mm/init.c | 41 +++++++---
arch/x86/platform/efi/quirks.c | 4 +
arch/x86/xen/apic.c | 2 +-
crypto/ahash.c | 79 ++++++++++++-------
crypto/algif_aead.c | 12 +--
crypto/lrw.c | 16 ++++
crypto/xts.c | 16 ++++
drivers/acpi/ec.c | 62 ++++++++-------
drivers/acpi/internal.h | 4 +-
drivers/acpi/nfit/core.c | 6 +-
drivers/acpi/scan.c | 19 +++--
drivers/block/zram/zram_drv.c | 6 +-
drivers/char/Kconfig | 5 +-
drivers/char/mem.c | 82 ++++++++++++--------
drivers/char/virtio_console.c | 12 ++-
drivers/cpufreq/cpufreq.c | 18 ++++-
drivers/firmware/efi/libstub/gop.c | 6 +-
drivers/gpu/drm/drm_fb_helper.c | 6 +-
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 3 +-
drivers/gpu/drm/i915/gvt/execlist.c | 3 +-
drivers/gpu/drm/nouveau/nv50_display.c | 10 +--
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c | 32 +++++++-
drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c | 2 +-
drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c | 2 +-
drivers/input/joystick/xpad.c | 2 +
drivers/irqchip/irq-imx-gpcv2.c | 2 +
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c | 9 ++-
drivers/media/usb/dvb-usb/cxusb.c | 15 ++--
drivers/net/can/ifi_canfd/ifi_canfd.c | 2 +-
drivers/nvdimm/bus.c | 6 ++
drivers/nvdimm/claim.c | 10 ++-
drivers/nvdimm/dimm_devs.c | 77 +++----------------
drivers/platform/x86/acer-wmi.c | 22 +++++-
drivers/pwm/pwm-rockchip.c | 40 ++++++++--
drivers/rtc/rtc-tegra.c | 28 ++++++-
drivers/scsi/qla2xxx/qla_os.c | 7 +-
drivers/scsi/sd.c | 23 +++++-
drivers/scsi/sr.c | 6 +-
drivers/target/iscsi/iscsi_target_parameters.c | 16 ----
drivers/target/iscsi/iscsi_target_util.c | 12 +--
drivers/target/target_core_fabric_configfs.c | 5 ++
drivers/target/target_core_tpg.c | 4 +
drivers/target/target_core_user.c | 92 ++++++++++++++++-------
drivers/video/fbdev/efifb.c | 66 +++++++++++++++-
drivers/video/fbdev/xen-fbfront.c | 4 +-
fs/cifs/file.c | 6 +-
fs/cifs/smb2pdu.c | 10 ++-
fs/orangefs/devorangefs-req.c | 9 ++-
fs/orangefs/orangefs-kernel.h | 1 +
fs/orangefs/super.c | 23 ++++--
fs/proc/task_mmu.c | 9 ++-
include/crypto/internal/hash.h | 10 +++
include/linux/cgroup.h | 21 ++++++
include/linux/sched.h | 4 +
include/linux/uio.h | 6 +-
include/target/target_core_base.h | 1 +
kernel/audit.c | 67 ++++++++---------
kernel/cgroup.c | 9 ++-
kernel/kthread.c | 3 +
kernel/trace/ftrace.c | 29 ++++++-
kernel/trace/trace.c | 1 +
kernel/trace/trace.h | 2 +
lib/iov_iter.c | 63 ++++++++++++++++
mm/huge_memory.c | 3 +-
mm/zsmalloc.c | 2 +-
net/core/datagram.c | 23 +++---
sound/soc/intel/Kconfig | 31 ++++----
tools/perf/util/annotate.c | 6 ++
78 files changed, 994 insertions(+), 460 deletions(-)



2017-04-19 14:38:07

by Greg Kroah-Hartman

Subject: [PATCH 4.10 11/69] drm/nouveau/mmu/nv4a: use nv04 mmu rather than the nv44 one

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ilia Mirkin <[email protected]>

commit f94773b9f5ecd1df7c88c2e921924dd41d2020cc upstream.

The NV4A (aka NV44A) is an oddity in the family. It only comes in AGP
and PCI varieties, rather than a core PCIE chip with a bridge for
AGP/PCI as necessary. As a result, it appears that the MMU is also
non-functional. For AGP cards, the vast majority of the NV4A lineup,
this worked out since we force AGP cards to use the nv04 mmu. However
for PCI variants, this did not work.

Switching to the NV04 MMU makes it work like a charm. Thanks to mwk for
the suggestion. This should be a no-op for NV4A AGP boards, as they were
using it already.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70388
Signed-off-by: Ilia Mirkin <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
@@ -714,7 +714,7 @@ nv4a_chipset = {
.i2c = nv04_i2c_new,
.imem = nv40_instmem_new,
.mc = nv44_mc_new,
- .mmu = nv44_mmu_new,
+ .mmu = nv04_mmu_new,
.pci = nv40_pci_new,
.therm = nv40_therm_new,
.timer = nv41_timer_new,


2017-04-19 14:38:13

by Greg Kroah-Hartman

Subject: [PATCH 4.10 14/69] drm/nouveau: initial support (display-only) for GP107

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ben Skeggs <[email protected]>

commit da2ba564a6dcf46df4f828624ff55531ff11d5b0 upstream.

Forked from GP106 implementation.

Split out from commit enabling secboot/gr support so that it can be
added to earlier kernels.

Signed-off-by: Ben Skeggs <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c | 30 ++++++++++++++++++++++
1 file changed, 30 insertions(+)

--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
@@ -2270,6 +2270,35 @@ nv136_chipset = {
.fifo = gp100_fifo_new,
};

+static const struct nvkm_device_chip
+nv137_chipset = {
+ .name = "GP107",
+ .bar = gf100_bar_new,
+ .bios = nvkm_bios_new,
+ .bus = gf100_bus_new,
+ .devinit = gm200_devinit_new,
+ .fb = gp102_fb_new,
+ .fuse = gm107_fuse_new,
+ .gpio = gk104_gpio_new,
+ .i2c = gm200_i2c_new,
+ .ibus = gm200_ibus_new,
+ .imem = nv50_instmem_new,
+ .ltc = gp100_ltc_new,
+ .mc = gp100_mc_new,
+ .mmu = gf100_mmu_new,
+ .pci = gp100_pci_new,
+ .pmu = gp102_pmu_new,
+ .timer = gk20a_timer_new,
+ .top = gk104_top_new,
+ .ce[0] = gp102_ce_new,
+ .ce[1] = gp102_ce_new,
+ .ce[2] = gp102_ce_new,
+ .ce[3] = gp102_ce_new,
+ .disp = gp102_disp_new,
+ .dma = gf119_dma_new,
+ .fifo = gp100_fifo_new,
+};
+
static int
nvkm_device_event_ctor(struct nvkm_object *object, void *data, u32 size,
struct nvkm_notify *notify)
@@ -2707,6 +2736,7 @@ nvkm_device_ctor(const struct nvkm_devic
case 0x132: device->chip = &nv132_chipset; break;
case 0x134: device->chip = &nv134_chipset; break;
case 0x136: device->chip = &nv136_chipset; break;
+ case 0x137: device->chip = &nv137_chipset; break;
default:
nvdev_error(device, "unknown chipset (%08x)\n", boot0);
goto done;


2017-04-19 14:38:23

by Greg Kroah-Hartman

Subject: [PATCH 4.10 18/69] CIFS: store results of cifs_reopen_file to avoid infinite wait

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Germano Percossi <[email protected]>

commit 1fa839b4986d648b907d117275869a0e46c324b9 upstream.

This fixes Continuous Availability when errors during
file reopen are encountered.

cifs_user_readv and cifs_user_writev would wait forever if the
result of cifs_reopen_file is not stored for later inspection.

In fact, the result is checked and, in case of an error, the chain
of function calls that schedules the reads and writes in a separate
thread is skipped.
Those threads wake up the corresponding waiters once the reads
and writes are done.

However, because the return value is not stored, when rc is checked
for errors a previous value (always zero) is inspected instead.
This leads to pending reads/writes being added to the list, making
cifs_user_readv and cifs_user_writev wait forever.

Signed-off-by: Germano Percossi <[email protected]>
Reviewed-by: Pavel Shilovsky <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
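For reference, the failure mode reduces to the pattern below. This is an
illustrative condensation of one call site (the real before/after is in the
diff that follows), not additional code for the patch:

    int rc = 0;
    ...
    if (!wdata->cfile->invalidHandle ||
        !cifs_reopen_file(wdata->cfile, false))   /* return value discarded */
            rc = server->ops->async_writev(wdata,
                            cifs_uncached_writedata_release);
    if (rc) {   /* a reopen failure never lands here; rc is still 0 */
            ...
    }

When the reopen fails, async_writev() is never scheduled, rc stays zero, the
request stays on the pending list, and cifs_user_writev() waits forever for a
completion that never arrives. Storing the reopen result in rc, as the patch
does, lets the error path run instead.
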
fs/cifs/file.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2597,7 +2597,7 @@ cifs_write_from_iter(loff_t offset, size
wdata->credits = credits;

if (!wdata->cfile->invalidHandle ||
- !cifs_reopen_file(wdata->cfile, false))
+ !(rc = cifs_reopen_file(wdata->cfile, false)))
rc = server->ops->async_writev(wdata,
cifs_uncached_writedata_release);
if (rc) {
@@ -3002,7 +3002,7 @@ cifs_send_async_read(loff_t offset, size
rdata->credits = credits;

if (!rdata->cfile->invalidHandle ||
- !cifs_reopen_file(rdata->cfile, true))
+ !(rc = cifs_reopen_file(rdata->cfile, true)))
rc = server->ops->async_readv(rdata);
error:
if (rc) {
@@ -3577,7 +3577,7 @@ static int cifs_readpages(struct file *f
}

if (!rdata->cfile->invalidHandle ||
- !cifs_reopen_file(rdata->cfile, true))
+ !(rc = cifs_reopen_file(rdata->cfile, true)))
rc = server->ops->async_readv(rdata);
if (rc) {
add_credits_and_wake_if(server, rdata->credits, 0);


2017-04-19 14:38:17

by Greg Kroah-Hartman

Subject: [PATCH 4.10 15/69] drm/etnaviv: fix missing unlock on error in etnaviv_gpu_submit()

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Wei Yongjun <[email protected]>

commit 45abdf35cf82e4270328c7237e7812de960ac560 upstream.

Add the missing unlock before return from function etnaviv_gpu_submit()
in the error handling case.

lst: fixed label name.

Fixes: f3cd1b064f11 ("drm/etnaviv: (re-)protect fence allocation with GPU mutex")
Signed-off-by: Wei Yongjun <[email protected]>
Signed-off-by: Lucas Stach <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -1309,7 +1309,7 @@ int etnaviv_gpu_submit(struct etnaviv_gp
if (!fence) {
event_free(gpu, event);
ret = -ENOMEM;
- goto out_pm_put;
+ goto out_unlock;
}

gpu->event[event].fence = fence;
@@ -1349,6 +1349,7 @@ int etnaviv_gpu_submit(struct etnaviv_gp
hangcheck_timer_reset(gpu);
ret = 0;

+out_unlock:
mutex_unlock(&gpu->lock);

out_pm_put:


2017-04-19 14:38:31

by Greg Kroah-Hartman

Subject: [PATCH 4.10 21/69] perf/x86: Avoid exposing wrong/stale data in intel_pmu_lbr_read_32()

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit f2200ac311302fcdca6556fd0c5127eab6c65a3e upstream.

When the perf_branch_entry::{in_tx,abort,cycles} fields were added,
intel_pmu_lbr_read_32() wasn't updated to initialize them.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 135c5612c460 ("perf/x86/intel: Support Haswell/v4 LBR format")
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/events/intel/lbr.c | 3 +++
1 file changed, 3 insertions(+)

--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -507,6 +507,9 @@ static void intel_pmu_lbr_read_32(struct
cpuc->lbr_entries[i].to = msr_lastbranch.to;
cpuc->lbr_entries[i].mispred = 0;
cpuc->lbr_entries[i].predicted = 0;
+ cpuc->lbr_entries[i].in_tx = 0;
+ cpuc->lbr_entries[i].abort = 0;
+ cpuc->lbr_entries[i].cycles = 0;
cpuc->lbr_entries[i].reserved = 0;
}
cpuc->lbr_stack.nr = i;


2017-04-19 14:38:36

by Greg Kroah-Hartman

Subject: [PATCH 4.10 24/69] x86/intel_rdt: Fix locking in rdtgroup_schemata_write()

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jiri Olsa <[email protected]>

commit 7f00f388712b29005782bad7e4b25942620f3b9c upstream.

The schemata lock is released before freeing the resource's temporary
tmp_cbms allocation. That's racy versus another write which allocates and
uses new temporary storage, resulting in memory leaks, freeing of in-use
memory, a double free, or any combination of those.

Move the unlock after the release code.

Fixes: 60ec2440c63d ("x86/intel_rdt: Add schemata file")
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Shaohua Li <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/kernel/cpu/intel_rdt_schemata.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/kernel/cpu/intel_rdt_schemata.c
+++ b/arch/x86/kernel/cpu/intel_rdt_schemata.c
@@ -200,11 +200,11 @@ ssize_t rdtgroup_schemata_write(struct k
}

out:
- rdtgroup_kn_unlock(of->kn);
for_each_enabled_rdt_resource(r) {
kfree(r->tmp_cbms);
r->tmp_cbms = NULL;
}
+ rdtgroup_kn_unlock(of->kn);
return ret ?: nbytes;
}



2017-04-19 14:38:45

by Greg Kroah-Hartman

Subject: [PATCH 4.10 20/69] perf annotate s390: Fix perf annotate error -95 (4.10 regression)

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Christian Borntraeger <[email protected]>

commit 3c1a427954399fd1bda1ee7e1b356f47b61cee74 upstream.

Since 4.10, perf annotate exits on s390 with an "unknown error -95".
It turns out that commit 786c1b51844d ("perf annotate: Start supporting
cross arch annotation") added a hard requirement for architecture
support when objdump is used, but only provided x86 and arm support.
Meanwhile power support was added, so let's add s390 as well.

While at it, make sure to implement the branch and jump types.

Signed-off-by: Christian Borntraeger <[email protected]>
Cc: Andreas Krebbel <[email protected]>
Cc: Hendrik Brueckner <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: linux-s390 <[email protected]>
Fixes: 786c1b51844 "perf annotate: Start supporting cross arch annotation"
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
tools/perf/util/annotate.c | 6 ++++++
1 file changed, 6 insertions(+)

--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -130,6 +130,12 @@ static struct arch architectures[] = {
.name = "powerpc",
.init = powerpc__annotate_init,
},
+ {
+ .name = "s390",
+ .objdump = {
+ .comment_char = '#',
+ },
+ },
};

static void ins__delete(struct ins_operands *ops)


2017-04-19 14:38:47

by Greg Kroah-Hartman

Subject: [PATCH 4.10 26/69] x86/vdso: Ensure vdso32_enabled gets set to valid values only

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mathias Krause <[email protected]>

commit c06989da39cdb10604d572c8c7ea8c8c97f3c483 upstream.

vdso_enabled can be set to arbitrary integer values via the kernel command
line 'vdso32=' parameter or via 'sysctl abi.vsyscall32'.

load_vdso32() only maps the VDSO if vdso_enabled == 1, but ARCH_DLINFO_IA32
merely checks for vdso_enabled != 0. As a consequence, the AT_SYSINFO_EHDR
auxiliary vector for the VDSO_ENTRY is emitted with a NULL pointer, which
causes a segfault when the application tries to use the VDSO.

Restrict the valid arguments on the command line and the sysctl to 0 and 1.

Fixes: b0b49f2673f0 ("x86, vdso: Remove compat vdso support")
Signed-off-by: Mathias Krause <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Roland McGrath <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
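A minimal sketch of the mismatch described above (simplified helpers, not the
exact kernel source):

    /* what load_vdso32() effectively requires before mapping the vDSO */
    static bool vdso32_mapped(unsigned int val)     { return val == 1; }

    /* what ARCH_DLINFO_IA32 effectively tests before emitting the auxv entry */
    static bool vdso32_advertised(unsigned int val) { return val != 0; }

With e.g. vdso32=2 the second check passes while the first fails: the
AT_SYSINFO_EHDR entry is emitted even though no vDSO was mapped, and the
process faults on its first vDSO call. Clamping the value to 0 or 1, as the
patch does for both the command line and the sysctl, keeps the two checks
consistent.
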
arch/x86/entry/vdso/vdso32-setup.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

--- a/arch/x86/entry/vdso/vdso32-setup.c
+++ b/arch/x86/entry/vdso/vdso32-setup.c
@@ -30,8 +30,10 @@ static int __init vdso32_setup(char *s)
{
vdso32_enabled = simple_strtoul(s, NULL, 0);

- if (vdso32_enabled > 1)
+ if (vdso32_enabled > 1) {
pr_warn("vdso32 values other than 0 and 1 are no longer allowed; vdso disabled\n");
+ vdso32_enabled = 0;
+ }

return 1;
}
@@ -62,13 +64,18 @@ subsys_initcall(sysenter_setup);
/* Register vsyscall32 into the ABI table */
#include <linux/sysctl.h>

+static const int zero;
+static const int one = 1;
+
static struct ctl_table abi_table2[] = {
{
.procname = "vsyscall32",
.data = &vdso32_enabled,
.maxlen = sizeof(int),
.mode = 0644,
- .proc_handler = proc_dointvec
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = (int *)&zero,
+ .extra2 = (int *)&one,
},
{}
};


2017-04-19 14:38:53

by Greg Kroah-Hartman

Subject: [PATCH 4.10 30/69] parisc: fix bugs in pa_memcpy

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mikulas Patocka <[email protected]>

commit 409c1b250e30ad0e48b4d15d7319b4e18c046c4f upstream.

The patch 554bfeceb8a22d448cd986fc9efce25e833278a1 ("parisc: Fix access
fault handling in pa_memcpy()") reimplements the pa_memcpy function.
Unfortunately, it makes the kernel unbootable. The crash happens in the
function ide_complete_cmd where memcpy is called with the same source
and destination address.

This patch fixes a few bugs in pa_memcpy:

* When jumping to .Lcopy_loop_16 for the first time, don't skip the
instruction "ldi 31,t0" (this bug made the kernel unbootable)
* Use the COND macro when comparing length, so that the comparison is
64-bit (a theoretical issue, in case the length is greater than
0xffffffff)
* Don't use the COND macro after the "extru" instruction (the PA-RISC
specification says that the upper 32-bits of extru result are undefined,
although they are set to zero in practice)
* Fix exception addresses in .Lcopy16_fault and .Lcopy8_fault
* Rename .Lcopy_loop_4 to .Lcopy_loop_8 (so that it is consistent with
.Lcopy8_fault)

Fixes: 554bfeceb8a2 ("parisc: Fix access fault handling in pa_memcpy()")
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/parisc/lib/lusercopy.S | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)

--- a/arch/parisc/lib/lusercopy.S
+++ b/arch/parisc/lib/lusercopy.S
@@ -201,7 +201,7 @@ ENTRY_CFI(pa_memcpy)
add dst,len,end

/* short copy with less than 16 bytes? */
- cmpib,>>=,n 15,len,.Lbyte_loop
+ cmpib,COND(>>=),n 15,len,.Lbyte_loop

/* same alignment? */
xor src,dst,t0
@@ -216,7 +216,7 @@ ENTRY_CFI(pa_memcpy)
/* loop until we are 64-bit aligned */
.Lalign_loop64:
extru dst,31,3,t1
- cmpib,=,n 0,t1,.Lcopy_loop_16
+ cmpib,=,n 0,t1,.Lcopy_loop_16_start
20: ldb,ma 1(srcspc,src),t1
21: stb,ma t1,1(dstspc,dst)
b .Lalign_loop64
@@ -225,6 +225,7 @@ ENTRY_CFI(pa_memcpy)
ASM_EXCEPTIONTABLE_ENTRY(20b,.Lcopy_done)
ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)

+.Lcopy_loop_16_start:
ldi 31,t0
.Lcopy_loop_16:
cmpb,COND(>>=),n t0,len,.Lword_loop
@@ -267,7 +268,7 @@ ENTRY_CFI(pa_memcpy)
/* loop until we are 32-bit aligned */
.Lalign_loop32:
extru dst,31,2,t1
- cmpib,=,n 0,t1,.Lcopy_loop_4
+ cmpib,=,n 0,t1,.Lcopy_loop_8
20: ldb,ma 1(srcspc,src),t1
21: stb,ma t1,1(dstspc,dst)
b .Lalign_loop32
@@ -277,7 +278,7 @@ ENTRY_CFI(pa_memcpy)
ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)


-.Lcopy_loop_4:
+.Lcopy_loop_8:
cmpib,COND(>>=),n 15,len,.Lbyte_loop

10: ldw 0(srcspc,src),t1
@@ -299,7 +300,7 @@ ENTRY_CFI(pa_memcpy)
ASM_EXCEPTIONTABLE_ENTRY(16b,.Lcopy_done)
ASM_EXCEPTIONTABLE_ENTRY(17b,.Lcopy_done)

- b .Lcopy_loop_4
+ b .Lcopy_loop_8
ldo -16(len),len

.Lbyte_loop:
@@ -324,7 +325,7 @@ ENTRY_CFI(pa_memcpy)
.Lunaligned_copy:
/* align until dst is 32bit-word-aligned */
extru dst,31,2,t1
- cmpib,COND(=),n 0,t1,.Lcopy_dstaligned
+ cmpib,=,n 0,t1,.Lcopy_dstaligned
20: ldb 0(srcspc,src),t1
ldo 1(src),src
21: stb,ma t1,1(dstspc,dst)
@@ -362,7 +363,7 @@ ENTRY_CFI(pa_memcpy)
cmpiclr,<> 1,t0,%r0
b,n .Lcase1
.Lcase0:
- cmpb,= %r0,len,.Lcda_finish
+ cmpb,COND(=) %r0,len,.Lcda_finish
nop

1: ldw,ma 4(srcspc,src), a3
@@ -376,7 +377,7 @@ ENTRY_CFI(pa_memcpy)
1: ldw,ma 4(srcspc,src), a3
ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
ldo -1(len),len
- cmpb,=,n %r0,len,.Ldo0
+ cmpb,COND(=),n %r0,len,.Ldo0
.Ldo4:
1: ldw,ma 4(srcspc,src), a0
ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
@@ -402,7 +403,7 @@ ENTRY_CFI(pa_memcpy)
1: stw,ma t0, 4(dstspc,dst)
ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcopy_done)
ldo -4(len),len
- cmpb,<> %r0,len,.Ldo4
+ cmpb,COND(<>) %r0,len,.Ldo4
nop
.Ldo0:
shrpw a2, a3, %sar, t0
@@ -436,14 +437,14 @@ ENTRY_CFI(pa_memcpy)
/* fault exception fixup handlers: */
#ifdef CONFIG_64BIT
.Lcopy16_fault:
-10: b .Lcopy_done
- std,ma t1,8(dstspc,dst)
+ b .Lcopy_done
+10: std,ma t1,8(dstspc,dst)
ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)
#endif

.Lcopy8_fault:
-10: b .Lcopy_done
- stw,ma t1,4(dstspc,dst)
+ b .Lcopy_done
+10: stw,ma t1,4(dstspc,dst)
ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)

.exit


2017-04-19 14:39:07

by Greg Kroah-Hartman

Subject: [PATCH 4.10 33/69] iscsi-target: Fix TMR reference leak during session shutdown

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nicholas Bellinger <[email protected]>

commit efb2ea770bb3b0f40007530bc8b0c22f36e1c5eb upstream.

This patch fixes an iscsi-target specific TMR reference leak
during session shutdown, that could occur when a TMR was
quiesced before the hand-off back to iscsi-target code
via transport_cmd_check_stop_to_fabric().

The reference leak happens because iscsit_free_cmd() was
incorrectly skipping the final target_put_sess_cmd() for
TMRs when transport_generic_free_cmd() returned zero because
the se_cmd->cmd_kref did not reach zero, due to the missing
se_cmd assignment in original code.

The result was that the iscsi_cmd and its associated se_cmd memory
would be freed once se_sess->sess_cmd_map was released,
but the associated se_tmr_req was leaked and remained part
of se_device->dev_tmr_list.

This bug would manifest itself as kernel paging request
OOPsen in core_tmr_lun_reset(), when a left-over se_tmr_req
attempted to dereference its se_cmd pointer that had
already been released during normal session shutdown.

To address this bug, go ahead and treat ISCSI_OP_SCSI_CMD
and ISCSI_OP_SCSI_TMFUNC the same when there is an extra
se_cmd->cmd_kref to drop in iscsit_free_cmd(), and use
op_scsi to signal __iscsit_free_cmd() when the former
needs to clear any further iscsi related I/O state.

Reported-by: Rob Millner <[email protected]>
Cc: Rob Millner <[email protected]>
Reported-by: Chu Yuan Lin <[email protected]>
Cc: Chu Yuan Lin <[email protected]>
Tested-by: Chu Yuan Lin <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/iscsi/iscsi_target_util.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -736,21 +736,23 @@ void iscsit_free_cmd(struct iscsi_cmd *c
{
struct se_cmd *se_cmd = NULL;
int rc;
+ bool op_scsi = false;
/*
* Determine if a struct se_cmd is associated with
* this struct iscsi_cmd.
*/
switch (cmd->iscsi_opcode) {
case ISCSI_OP_SCSI_CMD:
- se_cmd = &cmd->se_cmd;
- __iscsit_free_cmd(cmd, true, shutdown);
+ op_scsi = true;
/*
* Fallthrough
*/
case ISCSI_OP_SCSI_TMFUNC:
- rc = transport_generic_free_cmd(&cmd->se_cmd, shutdown);
- if (!rc && shutdown && se_cmd && se_cmd->se_sess) {
- __iscsit_free_cmd(cmd, true, shutdown);
+ se_cmd = &cmd->se_cmd;
+ __iscsit_free_cmd(cmd, op_scsi, shutdown);
+ rc = transport_generic_free_cmd(se_cmd, shutdown);
+ if (!rc && shutdown && se_cmd->se_sess) {
+ __iscsit_free_cmd(cmd, op_scsi, shutdown);
target_put_sess_cmd(se_cmd);
}
break;


2017-04-19 14:39:20

by Greg Kroah-Hartman

Subject: [PATCH 4.10 08/69] zsmalloc: expand class bit

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Minchan Kim <[email protected]>

commit 85d492f28d056c40629fc25d79f54da618a29dc4 upstream.

On a 64K page system, zsmalloc has 257 classes, so 8 class bits are not
enough. As a result, the system is corrupted when zsmalloc stores
65536-byte data (i.e., class index 256), so this patch increases the
class bits as a simple fix for stable backport. We should clean up this
mess soon.

index size
0 32
1 288
..
..
204 52256
256 65536

Fixes: 3783689a1 ("zsmalloc: introduce zspage structure")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Minchan Kim <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
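The arithmetic behind the fix, as a minimal sketch (assuming CLASS_BITS is 8
in this kernel, as the context below shows):

    struct demo {
            unsigned int class:8;     /* holds class indices 0..255 only */
    };

With 64K pages there are 257 size classes, so the largest class index is 256.
Storing 256 in an 8-bit field truncates it, which is how the 65536-byte class
ends up corrupting state. Widening the field by one bit, as the patch does,
makes indices up to 511 representable, which is more than enough.
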
mm/zsmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -280,7 +280,7 @@ struct zs_pool {
struct zspage {
struct {
unsigned int fullness:FULLNESS_BITS;
- unsigned int class:CLASS_BITS;
+ unsigned int class:CLASS_BITS + 1;
unsigned int isolated:ISOLATED_BITS;
unsigned int magic:MAGIC_VAL_BITS;
};


2017-04-19 14:39:26

by Greg Kroah-Hartman

Subject: [PATCH 4.10 35/69] scsi: sr: Sanity check returned mode data

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Martin K. Petersen <[email protected]>

commit a00a7862513089f17209b732f230922f1942e0b9 upstream.

Kefeng Wang discovered that old versions of the QEMU CD driver would
return mangled mode data, causing us to walk off the end of the buffer in
an attempt to parse it. Sanity check the returned mode sense data.

Reported-by: Kefeng Wang <[email protected]>
Tested-by: Kefeng Wang <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/sr.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -833,6 +833,7 @@ static void get_capabilities(struct scsi
unsigned char *buffer;
struct scsi_mode_data data;
struct scsi_sense_hdr sshdr;
+ unsigned int ms_len = 128;
int rc, n;

static const char *loadmech[] =
@@ -859,10 +860,11 @@ static void get_capabilities(struct scsi
scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr);

/* ask for mode page 0x2a */
- rc = scsi_mode_sense(cd->device, 0, 0x2a, buffer, 128,
+ rc = scsi_mode_sense(cd->device, 0, 0x2a, buffer, ms_len,
SR_TIMEOUT, 3, &data, NULL);

- if (!scsi_status_is_good(rc)) {
+ if (!scsi_status_is_good(rc) || data.length > ms_len ||
+ data.header_length + data.block_descriptor_length > data.length) {
/* failed, drive doesn't have capabilities mode page */
cd->cdi.speed = 1;
cd->cdi.mask |= (CDC_CD_R | CDC_CD_RW | CDC_DVD_R |


2017-04-19 14:39:43

by Greg Kroah-Hartman

Subject: [PATCH 4.10 52/69] char: lack of bool string made CONFIG_DEVPORT always on

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Max Bires <[email protected]>

commit f2cfa58b136e4b06a9b9db7af5ef62fbb5992f62 upstream.

Without a bool string present, using "# CONFIG_DEVPORT is not set" in
defconfig files would not actually unset devport. This ensured that
/dev/port was always on, but there are reasons a user may wish to
disable it (smaller kernel, attack surface reduction) if it's not being
used. Adding a bool string here in order to make this option user-visible.

Signed-off-by: Max Bires <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/char/Kconfig | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -571,9 +571,12 @@ config TELCLOCK
controlling the behavior of this hardware.

config DEVPORT
- bool
+ bool "/dev/port character device"
depends on ISA || PCI
default y
+ help
+ Say Y here if you want to support the /dev/port device. The /dev/port
+ device is similar to /dev/mem, but for I/O ports.

source "drivers/s390/char/Kconfig"



2017-04-19 14:39:34

by Greg Kroah-Hartman

Subject: [PATCH 4.10 48/69] cpufreq: Bring CPUs up even if cpufreq_online() failed

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Chen Yu <[email protected]>

commit c4a3fa261b16858416f1fd7db03a33d7ef5fc0b3 upstream.

There is a report that after commit 27622b061eb4 ("cpufreq: Convert
to hotplug state machine"), the normal CPU offline/online cycle
fails on some platforms.

According to the ftrace result, this problem was triggered on
platforms using acpi-cpufreq as the default cpufreq driver,
and due to the lack of some ACPI freq method (e.g. _PCT),
cpufreq_online() failed and returned a negative value, so the CPU
hotplug state machine rolled back the CPU online process. Actually,
from the user's perspective, the failure of cpufreq_online() should
not prevent that CPU from being brought up, although cpufreq might
not work on that CPU.

BTW, during system startup cpufreq_online() is not invoked via CPU
online but by the cpufreq device creation process, so the APs can be
brought up even though cpufreq_online() fails in that stage.

This patch ignores the return value of cpufreq_online/offline() and
lets the cpufreq framework deal with the failure. cpufreq_online()
itself will do a proper rollback in that case and if _PCT is missing,
the ACPI cpufreq driver will print a warning if the corresponding
debug options have been enabled.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=194581
Fixes: 27622b061eb4 ("cpufreq: Convert to hotplug state machine")
Reported-and-tested-by: Tomasz Maciej Nowak <[email protected]>
Signed-off-by: Chen Yu <[email protected]>
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/cpufreq/cpufreq.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)

--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -2405,6 +2405,20 @@ EXPORT_SYMBOL_GPL(cpufreq_boost_enabled)
*********************************************************************/
static enum cpuhp_state hp_online;

+static int cpuhp_cpufreq_online(unsigned int cpu)
+{
+ cpufreq_online(cpu);
+
+ return 0;
+}
+
+static int cpuhp_cpufreq_offline(unsigned int cpu)
+{
+ cpufreq_offline(cpu);
+
+ return 0;
+}
+
/**
* cpufreq_register_driver - register a CPU Frequency driver
* @driver_data: A struct cpufreq_driver containing the values#
@@ -2467,8 +2481,8 @@ int cpufreq_register_driver(struct cpufr
}

ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "cpufreq:online",
- cpufreq_online,
- cpufreq_offline);
+ cpuhp_cpufreq_online,
+ cpuhp_cpufreq_offline);
if (ret < 0)
goto err_if_unreg;
hp_online = ret;


2017-04-19 14:40:03

by Greg Kroah-Hartman

Subject: [PATCH 4.10 37/69] scsi: qla2xxx: Add fix to read correct register value for ISP82xx.

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sawan Chandak <[email protected]>

commit bf6061b17a8d47ef0d9344d3ef576a4ff0edf793 upstream.

Add a fix to read the correct register value for ISP82xx during the check
for register disconnect. ISP82xx has a different register base.

Fixes: a465537ad1a4 ("qla2xxx: Disable the adapter and skip error recovery in case of register disconnect")
Signed-off-by: Sawan Chandak <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/qla2xxx/qla_os.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -1125,8 +1125,13 @@ static inline
uint32_t qla2x00_isp_reg_stat(struct qla_hw_data *ha)
{
struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
+ struct device_reg_82xx __iomem *reg82 = &ha->iobase->isp82;

- return ((RD_REG_DWORD(&reg->host_status)) == ISP_REG_DISCONNECT);
+ if (IS_P3P_TYPE(ha))
+ return ((RD_REG_DWORD(&reg82->host_int)) == ISP_REG_DISCONNECT);
+ else
+ return ((RD_REG_DWORD(&reg->host_status)) ==
+ ISP_REG_DISCONNECT);
}

/**************************************************************************


2017-04-19 14:40:17

by Greg Kroah-Hartman

Subject: [PATCH 4.10 65/69] ACPI / EC: Use busy polling mode when GPE is not enabled

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lv Zheng <[email protected]>

commit c3a696b6e8f8f75f9f75e556a9f9f6472eae2655 upstream.

When the GPE is not enabled, it is not efficient to use the wait polling mode,
as it introduces an unexpected scheduler delay.
So before the GPE handler is installed, this patch uses busy polling mode
for all EC(s), and the same logic can be applied to non-boot EC(s) during the
suspend/resume process.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=191561
Tested-by: Jakobus Schurz <[email protected]>
Tested-by: Chen Yu <[email protected]>
Signed-off-by: Lv Zheng <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/acpi/ec.c | 62 +++++++++++++++++++++++-------------------------
drivers/acpi/internal.h | 4 +--
2 files changed, 32 insertions(+), 34 deletions(-)

--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -729,12 +729,12 @@ static void start_transaction(struct acp

static int ec_guard(struct acpi_ec *ec)
{
- unsigned long guard = usecs_to_jiffies(ec_polling_guard);
+ unsigned long guard = usecs_to_jiffies(ec->polling_guard);
unsigned long timeout = ec->timestamp + guard;

/* Ensure guarding period before polling EC status */
do {
- if (ec_busy_polling) {
+ if (ec->busy_polling) {
/* Perform busy polling */
if (ec_transaction_completed(ec))
return 0;
@@ -998,6 +998,28 @@ static void acpi_ec_stop(struct acpi_ec
spin_unlock_irqrestore(&ec->lock, flags);
}

+static void acpi_ec_enter_noirq(struct acpi_ec *ec)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ec->lock, flags);
+ ec->busy_polling = true;
+ ec->polling_guard = 0;
+ ec_log_drv("interrupt blocked");
+ spin_unlock_irqrestore(&ec->lock, flags);
+}
+
+static void acpi_ec_leave_noirq(struct acpi_ec *ec)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ec->lock, flags);
+ ec->busy_polling = ec_busy_polling;
+ ec->polling_guard = ec_polling_guard;
+ ec_log_drv("interrupt unblocked");
+ spin_unlock_irqrestore(&ec->lock, flags);
+}
+
void acpi_ec_block_transactions(void)
{
struct acpi_ec *ec = first_ec;
@@ -1278,7 +1300,7 @@ acpi_ec_space_handler(u32 function, acpi
if (function != ACPI_READ && function != ACPI_WRITE)
return AE_BAD_PARAMETER;

- if (ec_busy_polling || bits > 8)
+ if (ec->busy_polling || bits > 8)
acpi_ec_burst_enable(ec);

for (i = 0; i < bytes; ++i, ++address, ++value)
@@ -1286,7 +1308,7 @@ acpi_ec_space_handler(u32 function, acpi
acpi_ec_read(ec, address, value) :
acpi_ec_write(ec, address, *value);

- if (ec_busy_polling || bits > 8)
+ if (ec->busy_polling || bits > 8)
acpi_ec_burst_disable(ec);

switch (result) {
@@ -1329,6 +1351,8 @@ static struct acpi_ec *acpi_ec_alloc(voi
spin_lock_init(&ec->lock);
INIT_WORK(&ec->work, acpi_ec_event_handler);
ec->timestamp = jiffies;
+ ec->busy_polling = true;
+ ec->polling_guard = 0;
return ec;
}

@@ -1390,6 +1414,7 @@ static int ec_install_handlers(struct ac
acpi_ec_start(ec, false);

if (!test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) {
+ acpi_ec_enter_noirq(ec);
status = acpi_install_address_space_handler(ec->handle,
ACPI_ADR_SPACE_EC,
&acpi_ec_space_handler,
@@ -1429,6 +1454,7 @@ static int ec_install_handlers(struct ac
/* This is not fatal as we can poll EC events */
if (ACPI_SUCCESS(status)) {
set_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags);
+ acpi_ec_leave_noirq(ec);
if (test_bit(EC_FLAGS_STARTED, &ec->flags) &&
ec->reference_count >= 1)
acpi_ec_enable_gpe(ec, true);
@@ -1839,34 +1865,6 @@ error:
}

#ifdef CONFIG_PM_SLEEP
-static void acpi_ec_enter_noirq(struct acpi_ec *ec)
-{
- unsigned long flags;
-
- if (ec == first_ec) {
- spin_lock_irqsave(&ec->lock, flags);
- ec->saved_busy_polling = ec_busy_polling;
- ec->saved_polling_guard = ec_polling_guard;
- ec_busy_polling = true;
- ec_polling_guard = 0;
- ec_log_drv("interrupt blocked");
- spin_unlock_irqrestore(&ec->lock, flags);
- }
-}
-
-static void acpi_ec_leave_noirq(struct acpi_ec *ec)
-{
- unsigned long flags;
-
- if (ec == first_ec) {
- spin_lock_irqsave(&ec->lock, flags);
- ec_busy_polling = ec->saved_busy_polling;
- ec_polling_guard = ec->saved_polling_guard;
- ec_log_drv("interrupt unblocked");
- spin_unlock_irqrestore(&ec->lock, flags);
- }
-}
-
static int acpi_ec_suspend_noirq(struct device *dev)
{
struct acpi_ec *ec =
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -172,8 +172,8 @@ struct acpi_ec {
struct work_struct work;
unsigned long timestamp;
unsigned long nr_pending_queries;
- bool saved_busy_polling;
- unsigned int saved_polling_guard;
+ bool busy_polling;
+ unsigned int polling_guard;
};

extern struct acpi_ec *first_ec;


2017-04-19 14:40:25

by Greg Kroah-Hartman

Subject: [PATCH 4.10 67/69] mm: Tighten x86 /dev/mem with zeroing reads

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit a4866aa812518ed1a37d8ea0c881dc946409de94 upstream.

Under CONFIG_STRICT_DEVMEM, reading System RAM through /dev/mem is
disallowed. However, on x86, the first 1MB was always allowed for BIOS
and similar things, regardless of it actually being System RAM. It was
possible for the heap to end up getting allocated in low 1MB RAM, and then
read by things like x86info or dd, which would trip hardened usercopy:

usercopy: kernel memory exposure attempt detected from ffff880000090000 (dma-kmalloc-256) (4096 bytes)

This changes the x86 exception for the low 1MB by reading back zeros for
System RAM areas instead of blindly allowing them. More work is needed to
extend this to mmap, but currently mmap doesn't go through usercopy, so
hardened usercopy won't Oops the kernel.

Reported-by: Tommi Rantala <[email protected]>
Tested-by: Tommi Rantala <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Cc: Brad Spengler <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
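The new return-value convention, condensed from the read_mem() change in this
patch into a short sketch:

    int allowed = page_is_allowed(p >> PAGE_SHIFT);

    if (!allowed)                           /* 0: reject the access */
            return -EPERM;
    if (allowed == 2)                       /* 2: restricted System RAM */
            remaining = clear_user(buf, sz);        /* hand back zeros */
    else                                    /* 1: normal access */
            remaining = copy_to_user(buf, ptr, sz);

Reads of the low 1MB keep working for BIOS data and MMIO, while low-1MB pages
that are actually System RAM now read back as zeros instead of exposing heap
contents.
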
arch/x86/mm/init.c | 41 +++++++++++++++++++-------
drivers/char/mem.c | 82 +++++++++++++++++++++++++++++++++--------------------
2 files changed, 82 insertions(+), 41 deletions(-)

--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -643,21 +643,40 @@ void __init init_mem_mapping(void)
* devmem_is_allowed() checks to see if /dev/mem access to a certain address
* is valid. The argument is a physical page number.
*
- *
- * On x86, access has to be given to the first megabyte of ram because that area
- * contains BIOS code and data regions used by X and dosemu and similar apps.
- * Access has to be given to non-kernel-ram areas as well, these contain the PCI
- * mmio resources as well as potential bios/acpi data regions.
+ * On x86, access has to be given to the first megabyte of RAM because that
+ * area traditionally contains BIOS code and data regions used by X, dosemu,
+ * and similar apps. Since they map the entire memory range, the whole range
+ * must be allowed (for mapping), but any areas that would otherwise be
+ * disallowed are flagged as being "zero filled" instead of rejected.
+ * Access has to be given to non-kernel-ram areas as well, these contain the
+ * PCI mmio resources as well as potential bios/acpi data regions.
*/
int devmem_is_allowed(unsigned long pagenr)
{
- if (pagenr < 256)
- return 1;
- if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
+ if (page_is_ram(pagenr)) {
+ /*
+ * For disallowed memory regions in the low 1MB range,
+ * request that the page be shown as all zeros.
+ */
+ if (pagenr < 256)
+ return 2;
+
+ return 0;
+ }
+
+ /*
+ * This must follow RAM test, since System RAM is considered a
+ * restricted resource under CONFIG_STRICT_IOMEM.
+ */
+ if (iomem_is_exclusive(pagenr << PAGE_SHIFT)) {
+ /* Low 1MB bypasses iomem restrictions. */
+ if (pagenr < 256)
+ return 1;
+
return 0;
- if (!page_is_ram(pagenr))
- return 1;
- return 0;
+ }
+
+ return 1;
}

void free_init_pages(char *what, unsigned long begin, unsigned long end)
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -60,6 +60,10 @@ static inline int valid_mmap_phys_addr_r
#endif

#ifdef CONFIG_STRICT_DEVMEM
+static inline int page_is_allowed(unsigned long pfn)
+{
+ return devmem_is_allowed(pfn);
+}
static inline int range_is_allowed(unsigned long pfn, unsigned long size)
{
u64 from = ((u64)pfn) << PAGE_SHIFT;
@@ -75,6 +79,10 @@ static inline int range_is_allowed(unsig
return 1;
}
#else
+static inline int page_is_allowed(unsigned long pfn)
+{
+ return 1;
+}
static inline int range_is_allowed(unsigned long pfn, unsigned long size)
{
return 1;
@@ -122,23 +130,31 @@ static ssize_t read_mem(struct file *fil

while (count > 0) {
unsigned long remaining;
+ int allowed;

sz = size_inside_page(p, count);

- if (!range_is_allowed(p >> PAGE_SHIFT, count))
+ allowed = page_is_allowed(p >> PAGE_SHIFT);
+ if (!allowed)
return -EPERM;
+ if (allowed == 2) {
+ /* Show zeros for restricted memory. */
+ remaining = clear_user(buf, sz);
+ } else {
+ /*
+ * On ia64 if a page has been mapped somewhere as
+ * uncached, then it must also be accessed uncached
+ * by the kernel or data corruption may occur.
+ */
+ ptr = xlate_dev_mem_ptr(p);
+ if (!ptr)
+ return -EFAULT;

- /*
- * On ia64 if a page has been mapped somewhere as uncached, then
- * it must also be accessed uncached by the kernel or data
- * corruption may occur.
- */
- ptr = xlate_dev_mem_ptr(p);
- if (!ptr)
- return -EFAULT;
+ remaining = copy_to_user(buf, ptr, sz);
+
+ unxlate_dev_mem_ptr(p, ptr);
+ }

- remaining = copy_to_user(buf, ptr, sz);
- unxlate_dev_mem_ptr(p, ptr);
if (remaining)
return -EFAULT;

@@ -181,30 +197,36 @@ static ssize_t write_mem(struct file *fi
#endif

while (count > 0) {
+ int allowed;
+
sz = size_inside_page(p, count);

- if (!range_is_allowed(p >> PAGE_SHIFT, sz))
+ allowed = page_is_allowed(p >> PAGE_SHIFT);
+ if (!allowed)
return -EPERM;

- /*
- * On ia64 if a page has been mapped somewhere as uncached, then
- * it must also be accessed uncached by the kernel or data
- * corruption may occur.
- */
- ptr = xlate_dev_mem_ptr(p);
- if (!ptr) {
- if (written)
- break;
- return -EFAULT;
- }
+ /* Skip actual writing when a page is marked as restricted. */
+ if (allowed == 1) {
+ /*
+ * On ia64 if a page has been mapped somewhere as
+ * uncached, then it must also be accessed uncached
+ * by the kernel or data corruption may occur.
+ */
+ ptr = xlate_dev_mem_ptr(p);
+ if (!ptr) {
+ if (written)
+ break;
+ return -EFAULT;
+ }

- copied = copy_from_user(ptr, buf, sz);
- unxlate_dev_mem_ptr(p, ptr);
- if (copied) {
- written += sz - copied;
- if (written)
- break;
- return -EFAULT;
+ copied = copy_from_user(ptr, buf, sz);
+ unxlate_dev_mem_ptr(p, ptr);
+ if (copied) {
+ written += sz - copied;
+ if (written)
+ break;
+ return -EFAULT;
+ }
}

buf += sz;


2017-04-19 14:40:35

by Greg Kroah-Hartman

Subject: [PATCH 4.10 34/69] iscsi-target: Drop work-around for legacy GlobalSAN initiator

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nicholas Bellinger <[email protected]>

commit 1c99de981f30b3e7868b8d20ce5479fa1c0fea46 upstream.

Once upon a time back in 2009, a work-around was added to support
the GlobalSAN iSCSI initiator v3.3 for MacOSX, which during login
did not propose nor respond to MaxBurstLength, FirstBurstLength,
DefaultTime2Wait and DefaultTime2Retain keys.

The work-around in iscsi_check_proposer_for_optional_reply()
allowed the missing keys to be proposed, but did not require
waiting for a response before moving to full feature phase
operation. This allowed GlobalSAN v3.3 to work out of the
box, and for many years we didn't run into login interop
issues with any other initiators.

Until recently, when Martin tried a QLogic 57840S iSCSI Offload
HBA on Windows 2016 which completed login, but subsequently
failed with:

Got unknown iSCSI OpCode: 0x43

The issue was that the QLogic MSFT side did not propose DefaultTime2Wait +
DefaultTime2Retain, so LIO proposes them itself, and immediately
transitions to full feature phase because of the GlobalSAN hack.
However, the QLogic MSFT side still attempts to respond to
DefaultTime2Retain + DefaultTime2Wait, even though LIO has set
ISCSI_FLAG_LOGIN_NEXT_STAGE3 + ISCSI_FLAG_LOGIN_TRANSIT
in the last login response.

So while the QLogic MSFT side should have been proposing these
two keys to start, it was doing the correct thing per RFC-3720
attempting to respond to proposed keys before transitioning to
full feature phase.

All that said, recent versions of GlobalSAN iSCSI (v5.3.0.541)
do correctly propose the four keys during login, making the
original work-around moot.

So in order to allow QLogic MSFT to run unmodified as-is, go
ahead and drop this long-standing work-around.

Reported-by: Martin Svec <[email protected]>
Cc: Martin Svec <[email protected]>
Cc: Himanshu Madhani <[email protected]>
Cc: Arun Easi <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/iscsi/iscsi_target_parameters.c | 16 ----------------
1 file changed, 16 deletions(-)

--- a/drivers/target/iscsi/iscsi_target_parameters.c
+++ b/drivers/target/iscsi/iscsi_target_parameters.c
@@ -782,22 +782,6 @@ static void iscsi_check_proposer_for_opt
if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH))
SET_PSTATE_REPLY_OPTIONAL(param);
/*
- * The GlobalSAN iSCSI Initiator for MacOSX does
- * not respond to MaxBurstLength, FirstBurstLength,
- * DefaultTime2Wait or DefaultTime2Retain parameter keys.
- * So, we set them to 'reply optional' here, and assume the
- * the defaults from iscsi_parameters.h if the initiator
- * is not RFC compliant and the keys are not negotiated.
- */
- if (!strcmp(param->name, MAXBURSTLENGTH))
- SET_PSTATE_REPLY_OPTIONAL(param);
- if (!strcmp(param->name, FIRSTBURSTLENGTH))
- SET_PSTATE_REPLY_OPTIONAL(param);
- if (!strcmp(param->name, DEFAULTTIME2WAIT))
- SET_PSTATE_REPLY_OPTIONAL(param);
- if (!strcmp(param->name, DEFAULTTIME2RETAIN))
- SET_PSTATE_REPLY_OPTIONAL(param);
- /*
* Required for gPXE iSCSI boot client
*/
if (!strcmp(param->name, MAXCONNECTIONS))


2017-04-19 14:40:32

by Greg Kroah-Hartman

Subject: [PATCH 4.10 42/69] make skb_copy_datagram_msg() et.al. preserve ->msg_iter on error

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit 3278682123811dd8ef07de5eb701fc4548fcebf2 upstream.

Fixes the mess observed in e.g. rsync over a noisy link that we'd been
seeing since last summer. What happens is that we copy part of
a datagram before noticing a checksum mismatch. The datagram will be
resent, all right, but we want the next try to go into the same place,
not after it...

This whole family of primitives (copy/checksum and copy a datagram
into destination) is an "all or nothing" sort of interface - either
we get 0 (meaning that the copy had been successful) or we get an
error (with no way to tell how much had been copied before we ran
into whatever error it had been). Make all of them leave the iterator
unadvanced in case of errors - all callers must be able to cope
with that (an error might've been caught before the iterator had
been advanced), it costs very little to arrange, it's safer for
callers and it actually fixes at least one bug in said callers.

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
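From the caller's side the contract is simple; a rough sketch (the helper name
and arguments are real, the surrounding handling is illustrative only):

    err = skb_copy_datagram_msg(skb, 0, msg, len);
    if (err) {
            /* msg->msg_iter is left where it was, so when the datagram is
             * retransmitted the next copy lands at the same offset in the
             * user buffer, not after the partial data. */
    }

Internally this is done by tracking how far the iterator was advanced and
reverting by that amount on the fault/csum_error paths, using the
iov_iter_revert() primitive added by patch 41/69 in this series.
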
net/core/datagram.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)

--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -398,7 +398,7 @@ int skb_copy_datagram_iter(const struct
struct iov_iter *to, int len)
{
int start = skb_headlen(skb);
- int i, copy = start - offset;
+ int i, copy = start - offset, start_off = offset, n;
struct sk_buff *frag_iter;

trace_skb_copy_datagram_iovec(skb, len);
@@ -407,11 +407,12 @@ int skb_copy_datagram_iter(const struct
if (copy > 0) {
if (copy > len)
copy = len;
- if (copy_to_iter(skb->data + offset, copy, to) != copy)
+ n = copy_to_iter(skb->data + offset, copy, to);
+ offset += n;
+ if (n != copy)
goto short_copy;
if ((len -= copy) == 0)
return 0;
- offset += copy;
}

/* Copy paged appendix. Hmm... why does this look so complicated? */
@@ -425,13 +426,14 @@ int skb_copy_datagram_iter(const struct
if ((copy = end - offset) > 0) {
if (copy > len)
copy = len;
- if (copy_page_to_iter(skb_frag_page(frag),
+ n = copy_page_to_iter(skb_frag_page(frag),
frag->page_offset + offset -
- start, copy, to) != copy)
+ start, copy, to);
+ offset += n;
+ if (n != copy)
goto short_copy;
if (!(len -= copy))
return 0;
- offset += copy;
}
start = end;
}
@@ -463,6 +465,7 @@ int skb_copy_datagram_iter(const struct
*/

fault:
+ iov_iter_revert(to, offset - start_off);
return -EFAULT;

short_copy:
@@ -613,7 +616,7 @@ static int skb_copy_and_csum_datagram(co
__wsum *csump)
{
int start = skb_headlen(skb);
- int i, copy = start - offset;
+ int i, copy = start - offset, start_off = offset;
struct sk_buff *frag_iter;
int pos = 0;
int n;
@@ -623,11 +626,11 @@ static int skb_copy_and_csum_datagram(co
if (copy > len)
copy = len;
n = csum_and_copy_to_iter(skb->data + offset, copy, csump, to);
+ offset += n;
if (n != copy)
goto fault;
if ((len -= copy) == 0)
return 0;
- offset += copy;
pos = copy;
}

@@ -649,12 +652,12 @@ static int skb_copy_and_csum_datagram(co
offset - start, copy,
&csum2, to);
kunmap(page);
+ offset += n;
if (n != copy)
goto fault;
*csump = csum_block_add(*csump, csum2, pos);
if (!(len -= copy))
return 0;
- offset += copy;
pos += copy;
}
start = end;
@@ -687,6 +690,7 @@ static int skb_copy_and_csum_datagram(co
return 0;

fault:
+ iov_iter_revert(to, offset - start_off);
return -EFAULT;
}

@@ -771,6 +775,7 @@ int skb_copy_and_csum_datagram_msg(struc
}
return 0;
csum_error:
+ iov_iter_revert(&msg->msg_iter, chunk);
return -EINVAL;
fault:
return -EFAULT;


2017-04-19 14:41:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 41/69] [iov_iter] new primitive: iov_iter_revert()

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit 27c0e3748e41ca79171ffa3e97415a20af6facd0 upstream.

The opposite of iov_iter_advance(); the caller is responsible for never
using it to move back past the initial position.
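
A minimal sketch of the intended calling pattern (copy_or_rollback() is a
made-up illustrative helper, not an actual caller in the tree):

#include <linux/errno.h>
#include <linux/uio.h>

/* Advance the iterator while copying, then revert exactly what was
 * consumed if the whole operation has to be undone. Never revert past
 * the iterator's initial position. */
static int copy_or_rollback(void *buf, size_t len, struct iov_iter *to)
{
	size_t n = copy_to_iter(buf, len, to);	/* may advance partially */

	if (n != len) {
		iov_iter_revert(to, n);		/* undo the partial advance */
		return -EFAULT;
	}
	return 0;
}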

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/uio.h | 6 ++++
lib/iov_iter.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 68 insertions(+), 1 deletion(-)

--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -39,7 +39,10 @@ struct iov_iter {
};
union {
unsigned long nr_segs;
- int idx;
+ struct {
+ int idx;
+ int start_idx;
+ };
};
};

@@ -81,6 +84,7 @@ unsigned long iov_shorten(struct iovec *
size_t iov_iter_copy_from_user_atomic(struct page *page,
struct iov_iter *i, unsigned long offset, size_t bytes);
void iov_iter_advance(struct iov_iter *i, size_t bytes);
+void iov_iter_revert(struct iov_iter *i, size_t bytes);
int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes);
size_t iov_iter_single_seg_count(const struct iov_iter *i);
size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -786,6 +786,68 @@ void iov_iter_advance(struct iov_iter *i
}
EXPORT_SYMBOL(iov_iter_advance);

+void iov_iter_revert(struct iov_iter *i, size_t unroll)
+{
+ if (!unroll)
+ return;
+ i->count += unroll;
+ if (unlikely(i->type & ITER_PIPE)) {
+ struct pipe_inode_info *pipe = i->pipe;
+ int idx = i->idx;
+ size_t off = i->iov_offset;
+ while (1) {
+ size_t n = off - pipe->bufs[idx].offset;
+ if (unroll < n) {
+ off -= (n - unroll);
+ break;
+ }
+ unroll -= n;
+ if (!unroll && idx == i->start_idx) {
+ off = 0;
+ break;
+ }
+ if (!idx--)
+ idx = pipe->buffers - 1;
+ off = pipe->bufs[idx].offset + pipe->bufs[idx].len;
+ }
+ i->iov_offset = off;
+ i->idx = idx;
+ pipe_truncate(i);
+ return;
+ }
+ if (unroll <= i->iov_offset) {
+ i->iov_offset -= unroll;
+ return;
+ }
+ unroll -= i->iov_offset;
+ if (i->type & ITER_BVEC) {
+ const struct bio_vec *bvec = i->bvec;
+ while (1) {
+ size_t n = (--bvec)->bv_len;
+ i->nr_segs++;
+ if (unroll <= n) {
+ i->bvec = bvec;
+ i->iov_offset = n - unroll;
+ return;
+ }
+ unroll -= n;
+ }
+ } else { /* same logics for iovec and kvec */
+ const struct iovec *iov = i->iov;
+ while (1) {
+ size_t n = (--iov)->iov_len;
+ i->nr_segs++;
+ if (unroll <= n) {
+ i->iov = iov;
+ i->iov_offset = n - unroll;
+ return;
+ }
+ unroll -= n;
+ }
+ }
+}
+EXPORT_SYMBOL(iov_iter_revert);
+
/*
* Return the count of just the current iov_iter segment.
*/
@@ -839,6 +901,7 @@ void iov_iter_pipe(struct iov_iter *i, i
i->idx = (pipe->curbuf + pipe->nrbufs) & (pipe->buffers - 1);
i->iov_offset = 0;
i->count = count;
+ i->start_idx = i->idx;
}
EXPORT_SYMBOL(iov_iter_pipe);



2017-04-19 14:41:53

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 43/69] libnvdimm: fix blk free space accounting

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Williams <[email protected]>

commit fe514739d8538783749d3ce72f78e5a999ea5668 upstream.

Commit a1f3e4d6a0c3 "libnvdimm, region: update nd_region_available_dpa()
for multi-pmem support" reworked blk dpa (DIMM Physical Address)
accounting to comprehend multiple pmem namespace allocations aliasing
with a given blk-dpa range.

The following call trace is a result of failing to account for allocated
blk capacity.

WARNING: CPU: 1 PID: 2433 at tools/testing/nvdimm/../../../drivers/nvdimm/names
4 size_store+0x6f3/0x930 [libnvdimm]
nd_region region5: allocation underrun: 0x0 of 0x1000000 bytes
[..]
Call Trace:
dump_stack+0x86/0xc3
__warn+0xcb/0xf0
warn_slowpath_fmt+0x5f/0x80
size_store+0x6f3/0x930 [libnvdimm]
dev_attr_store+0x18/0x30

If a given blk-dpa allocation does not alias with any pmem ranges then
the full allocation should be accounted as busy space, not the size of
the current pmem contribution to the region.

The thinko that led to this confusion was not realizing that the struct
resource management already guarantees no collisions between pmem
allocations and blk allocations on the same dimm. Also, we do not try to
support blk allocations in aliased pmem holes.

This patch also fixes a case where the available blk goes negative.

Fixes: a1f3e4d6a0c3 ("libnvdimm, region: update nd_region_available_dpa() for multi-pmem support").
Reported-by: Dariusz Dokupil <[email protected]>
Reported-by: Dave Jiang <[email protected]>
Reported-by: Vishal Verma <[email protected]>
Tested-by: Dave Jiang <[email protected]>
Tested-by: Vishal Verma <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/nvdimm/dimm_devs.c | 77 ++++++---------------------------------------
1 file changed, 11 insertions(+), 66 deletions(-)

--- a/drivers/nvdimm/dimm_devs.c
+++ b/drivers/nvdimm/dimm_devs.c
@@ -395,7 +395,7 @@ EXPORT_SYMBOL_GPL(nvdimm_create);

int alias_dpa_busy(struct device *dev, void *data)
{
- resource_size_t map_end, blk_start, new, busy;
+ resource_size_t map_end, blk_start, new;
struct blk_alloc_info *info = data;
struct nd_mapping *nd_mapping;
struct nd_region *nd_region;
@@ -436,29 +436,19 @@ int alias_dpa_busy(struct device *dev, v
retry:
/*
* Find the free dpa from the end of the last pmem allocation to
- * the end of the interleave-set mapping that is not already
- * covered by a blk allocation.
+ * the end of the interleave-set mapping.
*/
- busy = 0;
for_each_dpa_resource(ndd, res) {
+ if (strncmp(res->name, "pmem", 4) != 0)
+ continue;
if ((res->start >= blk_start && res->start < map_end)
|| (res->end >= blk_start
&& res->end <= map_end)) {
- if (strncmp(res->name, "pmem", 4) == 0) {
- new = max(blk_start, min(map_end + 1,
- res->end + 1));
- if (new != blk_start) {
- blk_start = new;
- goto retry;
- }
- } else
- busy += min(map_end, res->end)
- - max(nd_mapping->start, res->start) + 1;
- } else if (nd_mapping->start > res->start
- && map_end < res->end) {
- /* total eclipse of the PMEM region mapping */
- busy += nd_mapping->size;
- break;
+ new = max(blk_start, min(map_end + 1, res->end + 1));
+ if (new != blk_start) {
+ blk_start = new;
+ goto retry;
+ }
}
}

@@ -470,52 +460,11 @@ int alias_dpa_busy(struct device *dev, v
return 1;
}

- info->available -= blk_start - nd_mapping->start + busy;
+ info->available -= blk_start - nd_mapping->start;

return 0;
}

-static int blk_dpa_busy(struct device *dev, void *data)
-{
- struct blk_alloc_info *info = data;
- struct nd_mapping *nd_mapping;
- struct nd_region *nd_region;
- resource_size_t map_end;
- int i;
-
- if (!is_nd_pmem(dev))
- return 0;
-
- nd_region = to_nd_region(dev);
- for (i = 0; i < nd_region->ndr_mappings; i++) {
- nd_mapping = &nd_region->mapping[i];
- if (nd_mapping->nvdimm == info->nd_mapping->nvdimm)
- break;
- }
-
- if (i >= nd_region->ndr_mappings)
- return 0;
-
- map_end = nd_mapping->start + nd_mapping->size - 1;
- if (info->res->start >= nd_mapping->start
- && info->res->start < map_end) {
- if (info->res->end <= map_end) {
- info->busy = 0;
- return 1;
- } else {
- info->busy -= info->res->end - map_end;
- return 0;
- }
- } else if (info->res->end >= nd_mapping->start
- && info->res->end <= map_end) {
- info->busy -= nd_mapping->start - info->res->start;
- return 0;
- } else {
- info->busy -= nd_mapping->size;
- return 0;
- }
-}
-
/**
* nd_blk_available_dpa - account the unused dpa of BLK region
* @nd_mapping: container of dpa-resource-root + labels
@@ -545,11 +494,7 @@ resource_size_t nd_blk_available_dpa(str
for_each_dpa_resource(ndd, res) {
if (strncmp(res->name, "blk", 3) != 0)
continue;
-
- info.res = res;
- info.busy = resource_size(res);
- device_for_each_child(&nvdimm_bus->dev, &info, blk_dpa_busy);
- info.available -= info.busy;
+ info.available -= resource_size(res);
}

return info.available;


2017-04-19 14:40:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 66/69] rtc: tegra: Implement clock handling

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thierry Reding <[email protected]>

commit 5fa4086987506b2ab8c92f8f99f2295db9918856 upstream.

Accessing the registers of the RTC block on Tegra requires the module
clock to be enabled. This only works because the RTC module clock will
be enabled by default during early boot. However, because the clock is
unused, the CCF will disable it at late_init time. This causes the RTC
to become unusable afterwards. This can easily be reproduced by trying
to use the RTC:

$ hwclock --rtc /dev/rtc1

This will hang the system. I ran into this by following up on a report
by Martin Michlmayr that reboot wasn't working on Tegra210 systems. It
turns out that the rtc-tegra driver's ->shutdown() implementation will
hang the CPU, because of the disabled clock, before the system can be
rebooted.

What confused me for a while is that the same driver is used on prior
Tegra generations where the hang cannot be observed. However, as Peter
De Schrijver pointed out, this is because on 32-bit Tegra chips the RTC
clock is enabled by the tegra20_timer.c clocksource driver, which uses
the RTC to provide a persistent clock. This code is never enabled on
64-bit Tegra because the persistent clock infrastructure does not exist
on 64-bit ARM.

The proper fix for this is to add proper clock handling to the RTC
driver in order to ensure that the clock is enabled when the driver
requires it. All device trees contain the clock already, therefore
no additional changes are required.

Reported-by: Martin Michlmayr <[email protected]>
Acked-by: Peter De Schrijver <[email protected]>
Signed-off-by: Thierry Reding <[email protected]>
Signed-off-by: Alexandre Belloni <[email protected]>
[bwh: Backported to 4.9: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/rtc/rtc-tegra.c | 28 ++++++++++++++++++++++++++--
1 file changed, 26 insertions(+), 2 deletions(-)

--- a/drivers/rtc/rtc-tegra.c
+++ b/drivers/rtc/rtc-tegra.c
@@ -18,6 +18,7 @@
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <linux/kernel.h>
+#include <linux/clk.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
@@ -59,6 +60,7 @@ struct tegra_rtc_info {
struct platform_device *pdev;
struct rtc_device *rtc_dev;
void __iomem *rtc_base; /* NULL if not initialized. */
+ struct clk *clk;
int tegra_rtc_irq; /* alarm and periodic irq */
spinlock_t tegra_rtc_lock;
};
@@ -326,6 +328,14 @@ static int __init tegra_rtc_probe(struct
if (info->tegra_rtc_irq <= 0)
return -EBUSY;

+ info->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(info->clk))
+ return PTR_ERR(info->clk);
+
+ ret = clk_prepare_enable(info->clk);
+ if (ret < 0)
+ return ret;
+
/* set context info. */
info->pdev = pdev;
spin_lock_init(&info->tegra_rtc_lock);
@@ -346,7 +356,7 @@ static int __init tegra_rtc_probe(struct
ret = PTR_ERR(info->rtc_dev);
dev_err(&pdev->dev, "Unable to register device (err=%d).\n",
ret);
- return ret;
+ goto disable_clk;
}

ret = devm_request_irq(&pdev->dev, info->tegra_rtc_irq,
@@ -356,12 +366,25 @@ static int __init tegra_rtc_probe(struct
dev_err(&pdev->dev,
"Unable to request interrupt for device (err=%d).\n",
ret);
- return ret;
+ goto disable_clk;
}

dev_notice(&pdev->dev, "Tegra internal Real Time Clock\n");

return 0;
+
+disable_clk:
+ clk_disable_unprepare(info->clk);
+ return ret;
+}
+
+static int tegra_rtc_remove(struct platform_device *pdev)
+{
+ struct tegra_rtc_info *info = platform_get_drvdata(pdev);
+
+ clk_disable_unprepare(info->clk);
+
+ return 0;
}

#ifdef CONFIG_PM_SLEEP
@@ -413,6 +436,7 @@ static void tegra_rtc_shutdown(struct pl

MODULE_ALIAS("platform:tegra_rtc");
static struct platform_driver tegra_rtc_driver = {
+ .remove = tegra_rtc_remove,
.shutdown = tegra_rtc_shutdown,
.driver = {
.name = "tegra_rtc",


2017-04-19 14:42:28

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 40/69] xen, fbfront: fix connecting to backend

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Juergen Gross <[email protected]>

commit 9121b15b5628b38b4695282dc18c553440e0f79b upstream.

Connecting to the backend isn't working reliably in xen-fbfront: if the
backend's XenbusStateInitWait has been missed, the backend's transition
to XenbusStateConnected will only set the connected state without
performing the actions required when the backend has connected.

Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Bartlomiej Zolnierkiewicz <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/video/fbdev/xen-fbfront.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/video/fbdev/xen-fbfront.c
+++ b/drivers/video/fbdev/xen-fbfront.c
@@ -643,7 +643,6 @@ static void xenfb_backend_changed(struct
break;

case XenbusStateInitWait:
-InitWait:
xenbus_switch_state(dev, XenbusStateConnected);
break;

@@ -654,7 +653,8 @@ InitWait:
* get Connected twice here.
*/
if (dev->state != XenbusStateConnected)
- goto InitWait; /* no InitWait seen yet, fudge it */
+ /* no InitWait seen yet, fudge it */
+ xenbus_switch_state(dev, XenbusStateConnected);

if (xenbus_read_unsigned(info->xbdev->otherend,
"request-update", 0))


2017-04-19 14:42:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 39/69] target: Avoid mappedlun symlink creation during lun shutdown

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nicholas Bellinger <[email protected]>

commit 49cb77e297dc611a1b795cfeb79452b3002bd331 upstream.

This patch closes a race between se_lun deletion during configfs
unlink in target_fabric_port_unlink() -> core_dev_del_lun()
-> core_tpg_remove_lun(), when transport_clear_lun_ref() blocks
waiting for percpu_ref RCU grace period to finish, but a new
NodeACL mappedlun is added before the RCU grace period has
completed.

This can happen in target_fabric_mappedlun_link() because it
only checks for se_lun->lun_se_dev, which is not cleared until
after transport_clear_lun_ref() percpu_ref RCU grace period
finishes.

This bug originally manifested as NULL pointer dereference
OOPsen in target_stat_scsi_att_intr_port_show_attr_dev() on
v4.1.y code, because it dereferences lun->lun_se_dev without
an explicit NULL pointer check.

In post v4.1 code with target-core RCU conversion, the code
in target_stat_scsi_att_intr_port_show_attr_dev() no longer
uses se_lun->lun_se_dev, but the same race still exists.

To address the bug, go ahead and set se_lun->lun_shutdown as
early as possible in core_tpg_remove_lun(), and ensure new
NodeACL mappedlun creation in target_fabric_mappedlun_link()
fails during se_lun shutdown.

Reported-by: James Shen <[email protected]>
Cc: James Shen <[email protected]>
Tested-by: James Shen <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/target_core_fabric_configfs.c | 5 +++++
drivers/target/target_core_tpg.c | 4 ++++
include/target/target_core_base.h | 1 +
3 files changed, 10 insertions(+)

--- a/drivers/target/target_core_fabric_configfs.c
+++ b/drivers/target/target_core_fabric_configfs.c
@@ -92,6 +92,11 @@ static int target_fabric_mappedlun_link(
pr_err("Source se_lun->lun_se_dev does not exist\n");
return -EINVAL;
}
+ if (lun->lun_shutdown) {
+ pr_err("Unable to create mappedlun symlink because"
+ " lun->lun_shutdown=true\n");
+ return -EINVAL;
+ }
se_tpg = lun->lun_tpg;

nacl_ci = &lun_acl_ci->ci_parent->ci_group->cg_item;
--- a/drivers/target/target_core_tpg.c
+++ b/drivers/target/target_core_tpg.c
@@ -640,6 +640,8 @@ void core_tpg_remove_lun(
*/
struct se_device *dev = rcu_dereference_raw(lun->lun_se_dev);

+ lun->lun_shutdown = true;
+
core_clear_lun_from_tpg(lun, tpg);
/*
* Wait for any active I/O references to percpu se_lun->lun_ref to
@@ -661,6 +663,8 @@ void core_tpg_remove_lun(
}
if (!(dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE))
hlist_del_rcu(&lun->link);
+
+ lun->lun_shutdown = false;
mutex_unlock(&tpg->tpg_lun_mutex);

percpu_ref_exit(&lun->lun_ref);
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -705,6 +705,7 @@ struct se_lun {
u64 unpacked_lun;
#define SE_LUN_LINK_MAGIC 0xffff7771
u32 lun_link_magic;
+ bool lun_shutdown;
bool lun_access_ro;
u32 lun_index;



2017-04-19 14:43:09

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 69/69] virtio-console: avoid DMA from stack

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Omar Sandoval <[email protected]>

commit c4baad50297d84bde1a7ad45e50c73adae4a2192 upstream.

put_chars() stuffs the buffer it gets into an sg, but that buffer may be
on the stack. This breaks with CONFIG_VMAP_STACK=y (for me, it
manifested as printks getting turned into NUL bytes).

Signed-off-by: Omar Sandoval <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Amit Shah <[email protected]>
Cc: Ben Hutchings <[email protected]>
Cc: Brad Spengler <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/char/virtio_console.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

--- a/drivers/char/virtio_console.c
+++ b/drivers/char/virtio_console.c
@@ -1136,6 +1136,8 @@ static int put_chars(u32 vtermno, const
{
struct port *port;
struct scatterlist sg[1];
+ void *data;
+ int ret;

if (unlikely(early_put_chars))
return early_put_chars(vtermno, buf, count);
@@ -1144,8 +1146,14 @@ static int put_chars(u32 vtermno, const
if (!port)
return -EPIPE;

- sg_init_one(sg, buf, count);
- return __send_to_port(port, sg, 1, count, (void *)buf, false);
+ data = kmemdup(buf, count, GFP_ATOMIC);
+ if (!data)
+ return -ENOMEM;
+
+ sg_init_one(sg, data, count);
+ ret = __send_to_port(port, sg, 1, count, data, false);
+ kfree(data);
+ return ret;
}

/*


2017-04-19 15:17:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 38/69] scsi: sd: Fix capacity calculation with 32-bit sector_t

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Martin K. Petersen <[email protected]>

commit 7c856152cb92f8eee2df29ef325a1b1f43161aff upstream.

We previously made sure that the reported disk capacity was less than
0xffffffff blocks when the kernel was not compiled with large sector_t
support (CONFIG_LBDAF). However, this check assumed that the capacity
was reported in units of 512 bytes.

Add a sanity check function to ensure that we only enable disks if the
entire reported capacity can be expressed in terms of sector_t.
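
As a concrete illustration of the overflow being guarded against, here is
a stand-alone user-space sketch of the same arithmetic (function name and
values are made up; this is not the kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Model of the check: convert the last LBA to a 512-byte sector index
 * and see whether it still fits in a 32-bit sector_t. sector_size is
 * assumed to be a power of two >= 512. */
static bool addressable_with_32bit_sector_t(uint64_t lba, unsigned int sector_size)
{
	uint64_t last_sector = (lba + 1ULL) << (__builtin_ctz(sector_size) - 9);

	return last_sector <= UINT32_MAX;
}

int main(void)
{
	/* The same LBA fits with 512-byte blocks but overflows with 4 KiB
	 * blocks, because the 512-byte sector index is 8 times larger. */
	printf("4096: %d\n", addressable_with_32bit_sector_t(0xfffffff0ULL, 4096));
	printf(" 512: %d\n", addressable_with_32bit_sector_t(0xfffffff0ULL, 512));
	return 0;
}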

Reported-by: Steve Magnani <[email protected]>
Cc: Bart Van Assche <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/sd.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)

--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2109,6 +2109,22 @@ static void read_capacity_error(struct s

#define READ_CAPACITY_RETRIES_ON_RESET 10

+/*
+ * Ensure that we don't overflow sector_t when CONFIG_LBDAF is not set
+ * and the reported logical block size is bigger than 512 bytes. Note
+ * that last_sector is a u64 and therefore logical_to_sectors() is not
+ * applicable.
+ */
+static bool sd_addressable_capacity(u64 lba, unsigned int sector_size)
+{
+ u64 last_sector = (lba + 1ULL) << (ilog2(sector_size) - 9);
+
+ if (sizeof(sector_t) == 4 && last_sector > U32_MAX)
+ return false;
+
+ return true;
+}
+
static int read_capacity_16(struct scsi_disk *sdkp, struct scsi_device *sdp,
unsigned char *buffer)
{
@@ -2174,7 +2190,7 @@ static int read_capacity_16(struct scsi_
return -ENODEV;
}

- if ((sizeof(sdkp->capacity) == 4) && (lba >= 0xffffffffULL)) {
+ if (!sd_addressable_capacity(lba, sector_size)) {
sd_printk(KERN_ERR, sdkp, "Too big for this kernel. Use a "
"kernel compiled with support for large block "
"devices.\n");
@@ -2263,7 +2279,7 @@ static int read_capacity_10(struct scsi_
return sector_size;
}

- if ((sizeof(sdkp->capacity) == 4) && (lba == 0xffffffff)) {
+ if (!sd_addressable_capacity(lba, sector_size)) {
sd_printk(KERN_ERR, sdkp, "Too big for this kernel. Use a "
"kernel compiled with support for large block "
"devices.\n");


2017-04-19 15:18:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 64/69] x86/xen: Fix APIC id mismatch warning on Intel

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mohit Gambhir <[email protected]>

commit cc272163ea554a97dac180fa8dd6cd54c2810bd1 upstream.

This patch fixes the following warning message seen when booting the
kernel as Dom0 with Xen on Intel machines.

[0.003000] [Firmware Bug]: CPU1: APIC id mismatch. Firmware: 0 APIC: 1]

The code generating the warning in validate_apic_and_package_id() matches
cpu_data(cpu).apicid (initialized in init_intel()->
detect_extended_topology() using cpuid) against the apicid returned from
xen_apic_read(). Now, xen_apic_read() makes a hypercall to retrieve apicid
for the boot cpu but returns 0 otherwise. Hence the warning gets thrown
for all but the boot cpu.

The idea behind xen_apic_read() returning 0 for apicid is that the
guests (even Dom0) should not need to know what physical processor their
vcpus are running on. This is because we currently do not have topology
information in Xen and also because xen allows more vcpus than physical
processors. However, the boot cpu's apicid is required for loading the
xen-acpi-processor driver on AMD machines. See the following patch for
details:

commit 558daa289a40 ("xen/apic: Return the APIC ID (and version) for CPU
0.")

So to get rid of the warning, this patch modifies
xen_cpu_present_to_apicid() to return cpu_data(cpu).apicid instead of
calling xen_apic_read().

The warning is not seen on AMD machines because init_amd() populates
cpu_data(cpu).apicid by calling hard_smp_processor_id()->xen_apic_read()
as opposed to using apicid from cpuid as is done on Intel machines.

Signed-off-by: Mohit Gambhir <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Signed-off-by: Boris Ostrovsky <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/xen/apic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/xen/apic.c
+++ b/arch/x86/xen/apic.c
@@ -145,7 +145,7 @@ static void xen_silent_inquire(int apici
static int xen_cpu_present_to_apicid(int cpu)
{
if (cpu_present(cpu))
- return xen_get_apic_id(xen_apic_read(APIC_ID));
+ return cpu_data(cpu).apicid;
else
return BAD_APICID;
}


2017-04-19 14:39:59

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 59/69] crypto: lrw - Fix use-after-free on EINPROGRESS

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Herbert Xu <[email protected]>

commit 4702bbeefb490e315189636a5588628c1151223d upstream.

When we get an EINPROGRESS completion in lrw, we will end up marking
the request as done and freeing it. This then blows up when the
request is really completed as we've already freed the memory.

Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
crypto/lrw.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -345,6 +345,13 @@ static void encrypt_done(struct crypto_a
struct rctx *rctx;

rctx = skcipher_request_ctx(req);
+
+ if (err == -EINPROGRESS) {
+ if (rctx->left != req->cryptlen)
+ return;
+ goto out;
+ }
+
subreq = &rctx->subreq;
subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;

@@ -352,6 +359,7 @@ static void encrypt_done(struct crypto_a
if (rctx->left)
return;

+out:
skcipher_request_complete(req, err);
}

@@ -389,6 +397,13 @@ static void decrypt_done(struct crypto_a
struct rctx *rctx;

rctx = skcipher_request_ctx(req);
+
+ if (err == -EINPROGRESS) {
+ if (rctx->left != req->cryptlen)
+ return;
+ goto out;
+ }
+
subreq = &rctx->subreq;
subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;

@@ -396,6 +411,7 @@ static void decrypt_done(struct crypto_a
if (rctx->left)
return;

+out:
skcipher_request_complete(req, err);
}



2017-04-19 14:39:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 58/69] crypto: ahash - Fix EINPROGRESS notification callback

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Herbert Xu <[email protected]>

commit ef0579b64e93188710d48667cb5e014926af9f1b upstream.

The ahash API modifies the request's callback function in order
to clean up after itself in some corner cases (unaligned final
and missing finup).

When the request is complete ahash will restore the original
callback and everything is fine. However, when the request gets
an EBUSY on a full queue, an EINPROGRESS callback is made while
the request is still ongoing.

In this case the ahash API will incorrectly call its own callback.

This patch fixes the problem by creating a temporary request
object on the stack which is used to relay EINPROGRESS back to
the original completion function.

This patch also adds code to preserve the original flags value.
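
The relay described above boils down to the control flow below: a
simplified user-space model whose struct and function names are made up
for illustration and merely stand in for crypto_async_request, the saved
request state, and the adjusted request's callback.

#include <stdio.h>

#define EINPROGRESS 115			/* errno value used by the kernel */

struct async_req {			/* stands in for crypto_async_request */
	void *data;
};

typedef void (*completion_t)(struct async_req *req, int err);

struct saved_state {			/* stands in for the saved request state */
	completion_t complete;		/* original completion function */
	void *data;			/* original callback data */
};

static void original_callback(struct async_req *req, int err)
{
	printf("caller notified: err=%d\n", err);
}

/* Adjusted request's callback: on -EINPROGRESS, build a temporary
 * request carrying the original callback data and relay the
 * notification; the saved state stays alive for the real completion. */
static void adjusted_done(struct saved_state *priv, int err)
{
	if (err == -EINPROGRESS) {
		struct async_req oreq = { .data = priv->data };

		priv->complete(&oreq, err);
		return;
	}
	/* real completion: restore the original request and finish it */
}

int main(void)
{
	struct saved_state priv = { original_callback, NULL };

	adjusted_done(&priv, -EINPROGRESS);	/* backlog notification */
	adjusted_done(&priv, 0);		/* final completion */
	return 0;
}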

Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
Reported-by: Sabrina Dubroca <[email protected]>
Tested-by: Sabrina Dubroca <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
crypto/ahash.c | 79 +++++++++++++++++++++++++----------------
include/crypto/internal/hash.h | 10 +++++
2 files changed, 60 insertions(+), 29 deletions(-)

--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -31,6 +31,7 @@ struct ahash_request_priv {
crypto_completion_t complete;
void *data;
u8 *result;
+ u32 flags;
void *ubuf[] CRYPTO_MINALIGN_ATTR;
};

@@ -252,6 +253,8 @@ static int ahash_save_req(struct ahash_r
priv->result = req->result;
priv->complete = req->base.complete;
priv->data = req->base.data;
+ priv->flags = req->base.flags;
+
/*
* WARNING: We do not backup req->priv here! The req->priv
* is for internal use of the Crypto API and the
@@ -266,38 +269,44 @@ static int ahash_save_req(struct ahash_r
return 0;
}

-static void ahash_restore_req(struct ahash_request *req)
+static void ahash_restore_req(struct ahash_request *req, int err)
{
struct ahash_request_priv *priv = req->priv;

+ if (!err)
+ memcpy(priv->result, req->result,
+ crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
+
/* Restore the original crypto request. */
req->result = priv->result;
- req->base.complete = priv->complete;
- req->base.data = priv->data;
+
+ ahash_request_set_callback(req, priv->flags,
+ priv->complete, priv->data);
req->priv = NULL;

/* Free the req->priv.priv from the ADJUSTED request. */
kzfree(priv);
}

-static void ahash_op_unaligned_finish(struct ahash_request *req, int err)
+static void ahash_notify_einprogress(struct ahash_request *req)
{
struct ahash_request_priv *priv = req->priv;
+ struct crypto_async_request oreq;

- if (err == -EINPROGRESS)
- return;
-
- if (!err)
- memcpy(priv->result, req->result,
- crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
+ oreq.data = priv->data;

- ahash_restore_req(req);
+ priv->complete(&oreq, -EINPROGRESS);
}

static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
{
struct ahash_request *areq = req->data;

+ if (err == -EINPROGRESS) {
+ ahash_notify_einprogress(areq);
+ return;
+ }
+
/*
* Restore the original request, see ahash_op_unaligned() for what
* goes where.
@@ -308,7 +317,7 @@ static void ahash_op_unaligned_done(stru
*/

/* First copy req->result into req->priv.result */
- ahash_op_unaligned_finish(areq, err);
+ ahash_restore_req(areq, err);

/* Complete the ORIGINAL request. */
areq->base.complete(&areq->base, err);
@@ -324,7 +333,12 @@ static int ahash_op_unaligned(struct aha
return err;

err = op(req);
- ahash_op_unaligned_finish(req, err);
+ if (err == -EINPROGRESS ||
+ (err == -EBUSY && (ahash_request_flags(req) &
+ CRYPTO_TFM_REQ_MAY_BACKLOG)))
+ return err;
+
+ ahash_restore_req(req, err);

return err;
}
@@ -359,25 +373,14 @@ int crypto_ahash_digest(struct ahash_req
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);

-static void ahash_def_finup_finish2(struct ahash_request *req, int err)
+static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
{
- struct ahash_request_priv *priv = req->priv;
+ struct ahash_request *areq = req->data;

if (err == -EINPROGRESS)
return;

- if (!err)
- memcpy(priv->result, req->result,
- crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
-
- ahash_restore_req(req);
-}
-
-static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
-{
- struct ahash_request *areq = req->data;
-
- ahash_def_finup_finish2(areq, err);
+ ahash_restore_req(areq, err);

areq->base.complete(&areq->base, err);
}
@@ -388,11 +391,15 @@ static int ahash_def_finup_finish1(struc
goto out;

req->base.complete = ahash_def_finup_done2;
- req->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+
err = crypto_ahash_reqtfm(req)->final(req);
+ if (err == -EINPROGRESS ||
+ (err == -EBUSY && (ahash_request_flags(req) &
+ CRYPTO_TFM_REQ_MAY_BACKLOG)))
+ return err;

out:
- ahash_def_finup_finish2(req, err);
+ ahash_restore_req(req, err);
return err;
}

@@ -400,7 +407,16 @@ static void ahash_def_finup_done1(struct
{
struct ahash_request *areq = req->data;

+ if (err == -EINPROGRESS) {
+ ahash_notify_einprogress(areq);
+ return;
+ }
+
+ areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+
err = ahash_def_finup_finish1(areq, err);
+ if (areq->priv)
+ return;

areq->base.complete(&areq->base, err);
}
@@ -415,6 +431,11 @@ static int ahash_def_finup(struct ahash_
return err;

err = tfm->update(req);
+ if (err == -EINPROGRESS ||
+ (err == -EBUSY && (ahash_request_flags(req) &
+ CRYPTO_TFM_REQ_MAY_BACKLOG)))
+ return err;
+
return ahash_def_finup_finish1(req, err);
}

--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -166,6 +166,16 @@ static inline struct ahash_instance *aha
return crypto_alloc_instance2(name, alg, ahash_instance_headroom());
}

+static inline void ahash_request_complete(struct ahash_request *req, int err)
+{
+ req->base.complete(&req->base, err);
+}
+
+static inline u32 ahash_request_flags(struct ahash_request *req)
+{
+ return req->base.flags;
+}
+
static inline struct crypto_ahash *crypto_spawn_ahash(
struct crypto_ahash_spawn *spawn)
{


2017-04-19 15:18:37

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 56/69] crypto: algif_aead - Fix bogus request dereference in completion function

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Herbert Xu <[email protected]>

commit e6534aebb26e32fbab14df9c713c65e8507d17e4 upstream.

The algif_aead completion function tries to deduce the aead_request
from the crypto_async_request argument. This is broken because
the API does not guarantee that the same request will be passed to
the completion function. Only the value of req->data can be used
in the completion function.

This patch fixes it by storing a pointer to sk in areq and using
that instead of passing in sk through req->data.

Fixes: 83094e5e9e49 ("crypto: af_alg - add async support to...")
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
crypto/algif_aead.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -39,6 +39,7 @@ struct aead_async_req {
struct aead_async_rsgl first_rsgl;
struct list_head list;
struct kiocb *iocb;
+ struct sock *sk;
unsigned int tsgls;
char iv[];
};
@@ -378,12 +379,10 @@ unlock:

static void aead_async_cb(struct crypto_async_request *_req, int err)
{
- struct sock *sk = _req->data;
- struct alg_sock *ask = alg_sk(sk);
- struct aead_ctx *ctx = ask->private;
- struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
- struct aead_request *req = aead_request_cast(_req);
+ struct aead_request *req = _req->data;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
struct aead_async_req *areq = GET_ASYM_REQ(req, tfm);
+ struct sock *sk = areq->sk;
struct scatterlist *sg = areq->tsgl;
struct aead_async_rsgl *rsgl;
struct kiocb *iocb = areq->iocb;
@@ -446,11 +445,12 @@ static int aead_recvmsg_async(struct soc
memset(&areq->first_rsgl, '\0', sizeof(areq->first_rsgl));
INIT_LIST_HEAD(&areq->list);
areq->iocb = msg->msg_iocb;
+ areq->sk = sk;
memcpy(areq->iv, ctx->iv, crypto_aead_ivsize(tfm));
aead_request_set_tfm(req, tfm);
aead_request_set_ad(req, ctx->aead_assoclen);
aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
- aead_async_cb, sk);
+ aead_async_cb, req);
used -= ctx->aead_assoclen;

/* take over all tx sgls from ctx */


2017-04-19 14:39:51

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 54/69] zram: do not use copy_page with non-page aligned address

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Minchan Kim <[email protected]>

commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream.

copy_page() is an optimized memcpy for page-aligned addresses. If it is
used with a non-page-aligned address, it can corrupt memory, which means
system corruption. With zram, it can happen with

1. 64K architecture
2. partial IO
3. slub debug

Partial IO needs to allocate a page, and zram allocates it via kmalloc.
With slub debug, kmalloc(PAGE_SIZE) doesn't return a page-size aligned
address. And finally, copy_page(mem, cmem) corrupts memory.

So, this patch changes it to memcpy.

Actually, we don't need to change the zram_bvec_write part because
zsmalloc returns a page-aligned address in case of the PAGE_SIZE class,
but it's not good to rely on the internals of zsmalloc.
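
A minimal sketch of the constraint in question (copy_one_page() is an
illustrative helper, not part of the patch): copy_page() may only be used
when both buffers are page aligned, so anything kmalloc'd should go
through memcpy().

#include <linux/mm.h>
#include <linux/string.h>

/* Hypothetical helper: take the fast copy_page() path only when both
 * addresses are PAGE_SIZE aligned, otherwise fall back to memcpy(). */
static void copy_one_page(void *dst, void *src)
{
	if (PAGE_ALIGNED(dst) && PAGE_ALIGNED(src))
		copy_page(dst, src);		/* page-aligned fast path */
	else
		memcpy(dst, src, PAGE_SIZE);	/* safe for kmalloc'd buffers */
}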

Note:
When this patch is merged to stable, clear_page should be fixed, too.
Unfortunately, recent zram removes it via the "same page merge" feature,
so it's hard to backport this patch to the -stable tree.

I will handle it when I receive the mail from the stable tree maintainer
about merging this patch for backport.

Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Minchan Kim <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/block/zram/zram_drv.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -583,13 +583,13 @@ static int zram_decompress_page(struct z

if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
- clear_page(mem);
+ memset(mem, 0, PAGE_SIZE);
return 0;
}

cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
if (size == PAGE_SIZE) {
- copy_page(mem, cmem);
+ memcpy(mem, cmem, PAGE_SIZE);
} else {
struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);

@@ -781,7 +781,7 @@ compress_again:

if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
src = kmap_atomic(page);
- copy_page(cmem, src);
+ memcpy(cmem, src, PAGE_SIZE);
kunmap_atomic(src);
} else {
memcpy(cmem, src, clen);


2017-04-19 15:19:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 62/69] ASoC: Intel: select DW_DMAC_CORE since it's mandatory

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andy Shevchenko <[email protected]>

commit ebf79091bf85d9b2270ab29191de9cd3aaf888c5 upstream.

Select DW_DMAC_CORE like the rest of glue drivers do, e.g.
drivers/dma/dw/Kconfig.

While here group selectors under SND_SOC_INTEL_HASWELL and
SND_SOC_INTEL_BAYTRAIL.

Make the platforms that use the common SST firmware driver depend on
DMADEVICES.

Signed-off-by: Andy Shevchenko <[email protected]>
Acked-by: Liam Girdwood <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/soc/intel/Kconfig | 31 +++++++++++++------------------
1 file changed, 13 insertions(+), 18 deletions(-)

--- a/sound/soc/intel/Kconfig
+++ b/sound/soc/intel/Kconfig
@@ -33,11 +33,9 @@ config SND_SOC_INTEL_SST
select SND_SOC_INTEL_SST_MATCH if ACPI
depends on (X86 || COMPILE_TEST)

-# firmware stuff depends DW_DMAC_CORE; since there is no depends-on from
-# the reverse selection, each machine driver needs to select
-# SND_SOC_INTEL_SST_FIRMWARE carefully depending on DW_DMAC_CORE
config SND_SOC_INTEL_SST_FIRMWARE
tristate
+ select DW_DMAC_CORE

config SND_SOC_INTEL_SST_ACPI
tristate
@@ -47,16 +45,18 @@ config SND_SOC_INTEL_SST_MATCH

config SND_SOC_INTEL_HASWELL
tristate
+ select SND_SOC_INTEL_SST
select SND_SOC_INTEL_SST_FIRMWARE

config SND_SOC_INTEL_BAYTRAIL
tristate
+ select SND_SOC_INTEL_SST
+ select SND_SOC_INTEL_SST_FIRMWARE

config SND_SOC_INTEL_HASWELL_MACH
tristate "ASoC Audio DSP support for Intel Haswell Lynxpoint"
depends on X86_INTEL_LPSS && I2C && I2C_DESIGNWARE_PLATFORM
- depends on DW_DMAC_CORE
- select SND_SOC_INTEL_SST
+ depends on DMADEVICES
select SND_SOC_INTEL_HASWELL
select SND_SOC_RT5640
help
@@ -99,9 +99,8 @@ config SND_SOC_INTEL_BXT_RT298_MACH
config SND_SOC_INTEL_BYT_RT5640_MACH
tristate "ASoC Audio driver for Intel Baytrail with RT5640 codec"
depends on X86_INTEL_LPSS && I2C
- depends on DW_DMAC_CORE && (SND_SST_IPC_ACPI = n)
- select SND_SOC_INTEL_SST
- select SND_SOC_INTEL_SST_FIRMWARE
+ depends on DMADEVICES
+ depends on SND_SST_IPC_ACPI = n
select SND_SOC_INTEL_BAYTRAIL
select SND_SOC_RT5640
help
@@ -112,9 +111,8 @@ config SND_SOC_INTEL_BYT_RT5640_MACH
config SND_SOC_INTEL_BYT_MAX98090_MACH
tristate "ASoC Audio driver for Intel Baytrail with MAX98090 codec"
depends on X86_INTEL_LPSS && I2C
- depends on DW_DMAC_CORE && (SND_SST_IPC_ACPI = n)
- select SND_SOC_INTEL_SST
- select SND_SOC_INTEL_SST_FIRMWARE
+ depends on DMADEVICES
+ depends on SND_SST_IPC_ACPI = n
select SND_SOC_INTEL_BAYTRAIL
select SND_SOC_MAX98090
help
@@ -123,9 +121,8 @@ config SND_SOC_INTEL_BYT_MAX98090_MACH

config SND_SOC_INTEL_BDW_RT5677_MACH
tristate "ASoC Audio driver for Intel Broadwell with RT5677 codec"
- depends on X86_INTEL_LPSS && GPIOLIB && I2C && DW_DMAC
- depends on DW_DMAC_CORE=y
- select SND_SOC_INTEL_SST
+ depends on X86_INTEL_LPSS && GPIOLIB && I2C
+ depends on DMADEVICES
select SND_SOC_INTEL_HASWELL
select SND_SOC_RT5677
help
@@ -134,10 +131,8 @@ config SND_SOC_INTEL_BDW_RT5677_MACH

config SND_SOC_INTEL_BROADWELL_MACH
tristate "ASoC Audio DSP support for Intel Broadwell Wildcatpoint"
- depends on X86_INTEL_LPSS && I2C && DW_DMAC && \
- I2C_DESIGNWARE_PLATFORM
- depends on DW_DMAC_CORE
- select SND_SOC_INTEL_SST
+ depends on X86_INTEL_LPSS && I2C && I2C_DESIGNWARE_PLATFORM
+ depends on DMADEVICES
select SND_SOC_INTEL_HASWELL
select SND_SOC_RT286
help


2017-04-19 15:19:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 61/69] [media] dvb-usb-v2: avoid use-after-free

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Arnd Bergmann <[email protected]>

commit 005145378c9ad7575a01b6ce1ba118fb427f583a upstream.

I ran into a stack frame size warning because of the on-stack copy of
the USB device structure:

drivers/media/usb/dvb-usb-v2/dvb_usb_core.c: In function 'dvb_usbv2_disconnect':
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:1029:1: error: the frame size of 1104 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]

Copying a device structure like this is wrong for a number of other
reasons besides the possible stack overflow. One of them is that the
dev_info() call will print the name of the device later, but AFAICT
we have only copied a pointer to the name earlier and the actual name
has been freed by the time it gets printed.

This removes the on-stack copy of the device and instead copies the
device name using kstrdup(). I'm ignoring the possible failure here
as both printk() and kfree() are able to deal with NULL pointers.

Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)

--- a/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
+++ b/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
@@ -1013,8 +1013,8 @@ EXPORT_SYMBOL(dvb_usbv2_probe);
void dvb_usbv2_disconnect(struct usb_interface *intf)
{
struct dvb_usb_device *d = usb_get_intfdata(intf);
- const char *name = d->name;
- struct device dev = d->udev->dev;
+ const char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);
+ const char *drvname = d->name;

dev_dbg(&d->udev->dev, "%s: bInterfaceNumber=%d\n", __func__,
intf->cur_altsetting->desc.bInterfaceNumber);
@@ -1024,8 +1024,9 @@ void dvb_usbv2_disconnect(struct usb_int

dvb_usbv2_exit(d);

- dev_info(&dev, "%s: '%s' successfully deinitialized and disconnected\n",
- KBUILD_MODNAME, name);
+ pr_info("%s: '%s:%s' successfully deinitialized and disconnected\n",
+ KBUILD_MODNAME, drvname, devname);
+ kfree(devname);
}
EXPORT_SYMBOL(dvb_usbv2_disconnect);



2017-04-19 15:19:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 57/69] crypto: xts - Fix use-after-free on EINPROGRESS

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Herbert Xu <[email protected]>

commit aa4a829bdaced81e70c215a84ef6595ce8bd4308 upstream.

When we get an EINPROGRESS completion in xts, we will end up marking
the request as done and freeing it. This then blows up when the
request is really completed as we've already freed the memory.

Fixes: f1c131b45410 ("crypto: xts - Convert to skcipher")
Reported-by: Nathan Royce <[email protected]>
Reported-by: Krzysztof Kozlowski <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Tested-by: Krzysztof Kozlowski <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
crypto/xts.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

--- a/crypto/xts.c
+++ b/crypto/xts.c
@@ -286,6 +286,13 @@ static void encrypt_done(struct crypto_a
struct rctx *rctx;

rctx = skcipher_request_ctx(req);
+
+ if (err == -EINPROGRESS) {
+ if (rctx->left != req->cryptlen)
+ return;
+ goto out;
+ }
+
subreq = &rctx->subreq;
subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;

@@ -293,6 +300,7 @@ static void encrypt_done(struct crypto_a
if (rctx->left)
return;

+out:
skcipher_request_complete(req, err);
}

@@ -330,6 +338,13 @@ static void decrypt_done(struct crypto_a
struct rctx *rctx;

rctx = skcipher_request_ctx(req);
+
+ if (err == -EINPROGRESS) {
+ if (rctx->left != req->cryptlen)
+ return;
+ goto out;
+ }
+
subreq = &rctx->subreq;
subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;

@@ -337,6 +352,7 @@ static void decrypt_done(struct crypto_a
if (rctx->left)
return;

+out:
skcipher_request_complete(req, err);
}



2017-04-19 15:19:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 36/69] scsi: sd: Consider max_xfer_blocks if opt_xfer_blocks is unusable

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Fam Zheng <[email protected]>

commit 6780414519f91c2a84da9baa963a940ac916f803 upstream.

If a device reports a small max_xfer_blocks and a zero opt_xfer_blocks,
we end up using BLK_DEF_MAX_SECTORS, which is wrong, and reads/writes of
that size may fail.

[mkp: tweaked to avoid setting rw_max twice and added typecast]

Fixes: ca369d51b3e ("block/sd: Fix device-imposed transfer length limits")
Signed-off-by: Fam Zheng <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/sd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -2963,7 +2963,8 @@ static int sd_revalidate_disk(struct gen
q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks);
} else
- rw_max = BLK_DEF_MAX_SECTORS;
+ rw_max = min_not_zero(logical_to_sectors(sdp, dev_max),
+ (sector_t)BLK_DEF_MAX_SECTORS);

/* Combine with controller limits */
q->limits.max_sectors = min(rw_max, queue_max_hw_sectors(q));


2017-04-19 15:20:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 60/69] parisc: Fix get_user() for 64-bit value on 32-bit kernel

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Helge Deller <[email protected]>

commit 3f795cef0ecdf9bc980dd058d49bdab4b19af1d3 upstream.

This fixes a bug in which the upper 32 bits of a 64-bit value read by
get_user() were lost on a 32-bit kernel.
While touching this code, split out the pre-loading of the %sr2 space
register and clean up the code indentation.
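
For reference, a minimal sketch of the affected call pattern
(read_u64_from_user() is an invented example, not code from the tree): a
get_user() of a u64 on a 32-bit kernel must preserve both halves of the
value.

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Illustrative only: fetch a 64-bit value from user space; after the
 * fix, both the upper and lower 32 bits of *uptr reach *out intact. */
static int read_u64_from_user(u64 __user *uptr, u64 *out)
{
	u64 val;

	if (get_user(val, uptr))	/* 8-byte load via LDD_USER/asm64 */
		return -EFAULT;

	*out = val;
	return 0;
}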

Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/parisc/include/asm/uaccess.h | 86 ++++++++++++++++++++++++--------------
1 file changed, 55 insertions(+), 31 deletions(-)

--- a/arch/parisc/include/asm/uaccess.h
+++ b/arch/parisc/include/asm/uaccess.h
@@ -42,10 +42,10 @@ static inline long access_ok(int type, c
#define get_user __get_user

#if !defined(CONFIG_64BIT)
-#define LDD_USER(ptr) __get_user_asm64(ptr)
+#define LDD_USER(val, ptr) __get_user_asm64(val, ptr)
#define STD_USER(x, ptr) __put_user_asm64(x, ptr)
#else
-#define LDD_USER(ptr) __get_user_asm("ldd", ptr)
+#define LDD_USER(val, ptr) __get_user_asm(val, "ldd", ptr)
#define STD_USER(x, ptr) __put_user_asm("std", x, ptr)
#endif

@@ -100,63 +100,87 @@ struct exception_data {
" mtsp %0,%%sr2\n\t" \
: : "r"(get_fs()) : )

-#define __get_user(x, ptr) \
-({ \
- register long __gu_err __asm__ ("r8") = 0; \
- register long __gu_val; \
- \
- load_sr2(); \
- switch (sizeof(*(ptr))) { \
- case 1: __get_user_asm("ldb", ptr); break; \
- case 2: __get_user_asm("ldh", ptr); break; \
- case 4: __get_user_asm("ldw", ptr); break; \
- case 8: LDD_USER(ptr); break; \
- default: BUILD_BUG(); break; \
- } \
- \
- (x) = (__force __typeof__(*(ptr))) __gu_val; \
- __gu_err; \
+#define __get_user_internal(val, ptr) \
+({ \
+ register long __gu_err __asm__ ("r8") = 0; \
+ \
+ switch (sizeof(*(ptr))) { \
+ case 1: __get_user_asm(val, "ldb", ptr); break; \
+ case 2: __get_user_asm(val, "ldh", ptr); break; \
+ case 4: __get_user_asm(val, "ldw", ptr); break; \
+ case 8: LDD_USER(val, ptr); break; \
+ default: BUILD_BUG(); \
+ } \
+ \
+ __gu_err; \
})

-#define __get_user_asm(ldx, ptr) \
+#define __get_user(val, ptr) \
+({ \
+ load_sr2(); \
+ __get_user_internal(val, ptr); \
+})
+
+#define __get_user_asm(val, ldx, ptr) \
+{ \
+ register long __gu_val; \
+ \
__asm__("1: " ldx " 0(%%sr2,%2),%0\n" \
"9:\n" \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
: "=r"(__gu_val), "=r"(__gu_err) \
- : "r"(ptr), "1"(__gu_err));
+ : "r"(ptr), "1"(__gu_err)); \
+ \
+ (val) = (__force __typeof__(*(ptr))) __gu_val; \
+}

#if !defined(CONFIG_64BIT)

-#define __get_user_asm64(ptr) \
+#define __get_user_asm64(val, ptr) \
+{ \
+ union { \
+ unsigned long long l; \
+ __typeof__(*(ptr)) t; \
+ } __gu_tmp; \
+ \
__asm__(" copy %%r0,%R0\n" \
"1: ldw 0(%%sr2,%2),%0\n" \
"2: ldw 4(%%sr2,%2),%R0\n" \
"9:\n" \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \
- : "=r"(__gu_val), "=r"(__gu_err) \
- : "r"(ptr), "1"(__gu_err));
+ : "=&r"(__gu_tmp.l), "=r"(__gu_err) \
+ : "r"(ptr), "1"(__gu_err)); \
+ \
+ (val) = __gu_tmp.t; \
+}

#endif /* !defined(CONFIG_64BIT) */


-#define __put_user(x, ptr) \
+#define __put_user_internal(x, ptr) \
({ \
register long __pu_err __asm__ ("r8") = 0; \
__typeof__(*(ptr)) __x = (__typeof__(*(ptr)))(x); \
\
- load_sr2(); \
switch (sizeof(*(ptr))) { \
- case 1: __put_user_asm("stb", __x, ptr); break; \
- case 2: __put_user_asm("sth", __x, ptr); break; \
- case 4: __put_user_asm("stw", __x, ptr); break; \
- case 8: STD_USER(__x, ptr); break; \
- default: BUILD_BUG(); break; \
- } \
+ case 1: __put_user_asm("stb", __x, ptr); break; \
+ case 2: __put_user_asm("sth", __x, ptr); break; \
+ case 4: __put_user_asm("stw", __x, ptr); break; \
+ case 8: STD_USER(__x, ptr); break; \
+ default: BUILD_BUG(); \
+ } \
\
__pu_err; \
})

+#define __put_user(x, ptr) \
+({ \
+ load_sr2(); \
+ __put_user_internal(x, ptr); \
+})
+
+
/*
* The "__put_user/kernel_asm()" macros tell gcc they read from memory
* instead of writing. This is because they do not write to any memory


2017-04-19 15:20:19

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 51/69] drm/i915/gvt: set the correct default value of CTX STATUS PTR

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Min He <[email protected]>

commit a34f83639490a5cc11a9d5c1b3773d4b6eb69a9e upstream.

Fix the wrong initial csb read pointer value. This fixes a random
engine timeout issue in the guest when the guest boots up.

Fixes: 8453d674ae7e ("drm/i915/gvt: vGPU execlist virtualization")
Signed-off-by: Min He <[email protected]>
Signed-off-by: Zhenyu Wang <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/i915/gvt/execlist.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -778,7 +778,8 @@ static void init_vgpu_execlist(struct in
_EL_OFFSET_STATUS_PTR);

ctx_status_ptr.dw = vgpu_vreg(vgpu, ctx_status_ptr_reg);
- ctx_status_ptr.read_ptr = ctx_status_ptr.write_ptr = 0x7;
+ ctx_status_ptr.read_ptr = 0;
+ ctx_status_ptr.write_ptr = 0x7;
vgpu_vreg(vgpu, ctx_status_ptr_reg) = ctx_status_ptr.dw;
}



2017-04-19 15:20:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 55/69] ftrace: Fix function pid filter on instances

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Namhyung Kim <[email protected]>

commit d879d0b8c183aabeb9a65eba91f3f9e3c7e7b905 upstream.

When the function tracer has a pid filter, it adds a probe to
sched_switch to track whether the current task can be ignored. The probe
checks ftrace_ignore_pid from the current tr to filter tasks. But it
fails to delete the probe when an instance is removed, which can cause a
crash due to the invalid tr pointer (use-after-free).

This is easily reproducible with the following:

# cd /sys/kernel/debug/tracing
# mkdir instances/buggy
# echo $$ > instances/buggy/set_ftrace_pid
# rmdir instances/buggy

============================================================================
BUG: KASAN: use-after-free in ftrace_filter_pid_sched_switch_probe+0x3d/0x90
Read of size 8 by task kworker/0:1/17
CPU: 0 PID: 17 Comm: kworker/0:1 Tainted: G B 4.11.0-rc3 #198
Call Trace:
dump_stack+0x68/0x9f
kasan_object_err+0x21/0x70
kasan_report.part.1+0x22b/0x500
? ftrace_filter_pid_sched_switch_probe+0x3d/0x90
kasan_report+0x25/0x30
__asan_load8+0x5e/0x70
ftrace_filter_pid_sched_switch_probe+0x3d/0x90
? fpid_start+0x130/0x130
__schedule+0x571/0xce0
...

To fix it, use ftrace_clear_pids() to unregister the probe. As
instance_rmdir() has already updated the ftrace code, it can just free
the filter safely.

Link: http://lkml.kernel.org/r/[email protected]

Fixes: 0c8916c34203 ("tracing: Add rmdir to remove multibuffer instances")
Cc: Ingo Molnar <[email protected]>
Reviewed-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/trace/ftrace.c | 9 +++++++++
kernel/trace/trace.c | 1 +
kernel/trace/trace.h | 2 ++
3 files changed, 12 insertions(+)

--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5422,6 +5422,15 @@ static void clear_ftrace_pids(struct tra
trace_free_pid_list(pid_list);
}

+void ftrace_clear_pids(struct trace_array *tr)
+{
+ mutex_lock(&ftrace_lock);
+
+ clear_ftrace_pids(tr);
+
+ mutex_unlock(&ftrace_lock);
+}
+
static void ftrace_pid_reset(struct trace_array *tr)
{
mutex_lock(&ftrace_lock);
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7409,6 +7409,7 @@ static int instance_rmdir(const char *na

tracing_set_nop(tr);
event_trace_del_tracer(tr);
+ ftrace_clear_pids(tr);
ftrace_destroy_function_files(tr);
tracefs_remove_recursive(tr->dir);
free_trace_buffers(tr);
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -884,6 +884,7 @@ int using_ftrace_ops_list_func(void);
void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d_tracer);
void ftrace_init_tracefs_toplevel(struct trace_array *tr,
struct dentry *d_tracer);
+void ftrace_clear_pids(struct trace_array *tr);
#else
static inline int ftrace_trace_task(struct trace_array *tr)
{
@@ -902,6 +903,7 @@ ftrace_init_global_array_ops(struct trac
static inline void ftrace_reset_array_ops(struct trace_array *tr) { }
static inline void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d) { }
static inline void ftrace_init_tracefs_toplevel(struct trace_array *tr, struct dentry *d) { }
+static inline void ftrace_clear_pids(struct trace_array *tr) { }
/* ftace_func_t type is not defined, use macro instead of static inline */
#define ftrace_init_array_ops(tr, func) do { } while (0)
#endif /* CONFIG_FUNCTION_TRACER */


2017-04-19 15:20:58

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 53/69] Revert "MIPS: Lantiq: Fix cascaded IRQ setup"

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Greg Kroah-Hartman <[email protected]>

This reverts commit b576c58331340c87bcf61f1205003a8fdffdff24 which is
commit 6c356eda225e3ee134ed4176b9ae3a76f793f4dd upstream.

It shouldn't have been included in a stable release.

Reported-by: Amit Pundir <[email protected]>
Cc: Felix Fietkau <[email protected]>
Cc: John Crispin <[email protected]>
Cc: James Hogan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/mips/lantiq/irq.c | 36 ++++++++++++++++++++----------------
1 file changed, 20 insertions(+), 16 deletions(-)

--- a/arch/mips/lantiq/irq.c
+++ b/arch/mips/lantiq/irq.c
@@ -269,11 +269,6 @@ static void ltq_hw5_irqdispatch(void)
DEFINE_HWx_IRQDISPATCH(5)
#endif

-static void ltq_hw_irq_handler(struct irq_desc *desc)
-{
- ltq_hw_irqdispatch(irq_desc_get_irq(desc) - 2);
-}
-
#ifdef CONFIG_MIPS_MT_SMP
void __init arch_init_ipiirq(int irq, struct irqaction *action)
{
@@ -318,19 +313,23 @@ static struct irqaction irq_call = {
asmlinkage void plat_irq_dispatch(void)
{
unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
- int irq;
+ unsigned int i;

- if (!pending) {
- spurious_interrupt();
- return;
+ if ((MIPS_CPU_TIMER_IRQ == 7) && (pending & CAUSEF_IP7)) {
+ do_IRQ(MIPS_CPU_TIMER_IRQ);
+ goto out;
+ } else {
+ for (i = 0; i < MAX_IM; i++) {
+ if (pending & (CAUSEF_IP2 << i)) {
+ ltq_hw_irqdispatch(i);
+ goto out;
+ }
+ }
}
+ pr_alert("Spurious IRQ: CAUSE=0x%08x\n", read_c0_status());

- pending >>= CAUSEB_IP;
- while (pending) {
- irq = fls(pending) - 1;
- do_IRQ(MIPS_CPU_IRQ_BASE + irq);
- pending &= ~BIT(irq);
- }
+out:
+ return;
}

static int icu_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
@@ -355,6 +354,11 @@ static const struct irq_domain_ops irq_d
.map = icu_map,
};

+static struct irqaction cascade = {
+ .handler = no_action,
+ .name = "cascade",
+};
+
int __init icu_of_init(struct device_node *node, struct device_node *parent)
{
struct device_node *eiu_node;
@@ -386,7 +390,7 @@ int __init icu_of_init(struct device_nod
mips_cpu_irq_init();

for (i = 0; i < MAX_IM; i++)
- irq_set_chained_handler(i + 2, ltq_hw_irq_handler);
+ setup_irq(i + 2, &cascade);

if (cpu_has_vint) {
pr_info("Setting up vectored interrupts\n");


2017-04-19 14:39:39

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 45/69] libnvdimm: band aid btt vs clear poison locking

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Williams <[email protected]>

commit 4aa5615e080a9855e607accc75b07ab79b252dde upstream.

The following warning results from holding a lane spinlock,
preempt_disable(), or the btt map spinlock and then trying to take the
reconfig_mutex to walk the poison list and potentially add new entries.

BUG: sleeping function called from invalid context at kernel/locking/mutex.c:747
in_atomic(): 1, irqs_disabled(): 0, pid: 17159, name: dd
[..]
Call Trace:
dump_stack+0x85/0xc8
___might_sleep+0x184/0x250
__might_sleep+0x4a/0x90
__mutex_lock+0x58/0x9b0
? nvdimm_bus_lock+0x21/0x30 [libnvdimm]
? __nvdimm_bus_badblocks_clear+0x2f/0x60 [libnvdimm]
? acpi_nfit_forget_poison+0x79/0x80 [nfit]
? _raw_spin_unlock+0x27/0x40
mutex_lock_nested+0x1b/0x20
nvdimm_bus_lock+0x21/0x30 [libnvdimm]
nvdimm_forget_poison+0x25/0x50 [libnvdimm]
nvdimm_clear_poison+0x106/0x140 [libnvdimm]
nsio_rw_bytes+0x164/0x270 [libnvdimm]
btt_write_pg+0x1de/0x3e0 [nd_btt]
? blk_queue_enter+0x30/0x290
btt_make_request+0x11a/0x310 [nd_btt]
? blk_queue_enter+0xb7/0x290
? blk_queue_enter+0x30/0x290
generic_make_request+0x118/0x3b0

As a minimal fix, disable error clearing when the BTT is enabled for the
namespace. For the final fix a larger rework of the poison list locking
is needed.

Note that this is not a problem in the blk case since that path never
calls nvdimm_clear_poison().

Fixes: 82bf1037f2ca ("libnvdimm: check and clear poison before writing to pmem")
Cc: Dave Jiang <[email protected]>
[jeff: dynamically disable error clearing in the btt case]
Suggested-by: Jeff Moyer <[email protected]>
Reviewed-by: Jeff Moyer <[email protected]>
Reported-by: Vishal Verma <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/nvdimm/claim.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

--- a/drivers/nvdimm/claim.c
+++ b/drivers/nvdimm/claim.c
@@ -243,7 +243,15 @@ static int nsio_rw_bytes(struct nd_names
}

if (unlikely(is_bad_pmem(&nsio->bb, sector, sz_align))) {
- if (IS_ALIGNED(offset, 512) && IS_ALIGNED(size, 512)) {
+ /*
+ * FIXME: nsio_rw_bytes() may be called from atomic
+ * context in the btt case and nvdimm_clear_poison()
+ * takes a sleeping lock. Until the locking can be
+ * reworked this capability requires that the namespace
+ * is not claimed by btt.
+ */
+ if (IS_ALIGNED(offset, 512) && IS_ALIGNED(size, 512)
+ && (!ndns->claim || !is_nd_btt(ndns->claim))) {
long cleared;

cleared = nvdimm_clear_poison(&ndns->dev, offset, size);


2017-04-19 15:21:34

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 50/69] ftrace: Fix removing of second function probe

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Steven Rostedt (VMware) <[email protected]>

commit 82cc4fc2e70ec5baeff8f776f2773abc8b2cc0ae upstream.

When two function probes are added to set_ftrace_filter and one of them
is then removed, the update to the function locations is not performed,
the record keeping of the function states becomes corrupted, and an
ftrace_bug() is triggered.

This is easily reproducible by adding two probes, removing one, and then
adding it back again.

# cd /sys/kernel/debug/tracing
# echo schedule:traceoff > set_ftrace_filter
# echo do_IRQ:traceoff > set_ftrace_filter
# echo \!do_IRQ:traceoff > /debug/tracing/set_ftrace_filter
# echo do_IRQ:traceoff > set_ftrace_filter

Causes:
------------[ cut here ]------------
WARNING: CPU: 2 PID: 1098 at kernel/trace/ftrace.c:2369 ftrace_get_addr_curr+0x143/0x220
Modules linked in: [...]
CPU: 2 PID: 1098 Comm: bash Not tainted 4.10.0-test+ #405
Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012
Call Trace:
dump_stack+0x68/0x9f
__warn+0x111/0x130
? trace_irq_work_interrupt+0xa0/0xa0
warn_slowpath_null+0x1d/0x20
ftrace_get_addr_curr+0x143/0x220
? __fentry__+0x10/0x10
ftrace_replace_code+0xe3/0x4f0
? ftrace_int3_handler+0x90/0x90
? printk+0x99/0xb5
? 0xffffffff81000000
ftrace_modify_all_code+0x97/0x110
arch_ftrace_update_code+0x10/0x20
ftrace_run_update_code+0x1c/0x60
ftrace_run_modify_code.isra.48.constprop.62+0x8e/0xd0
register_ftrace_function_probe+0x4b6/0x590
? ftrace_startup+0x310/0x310
? debug_lockdep_rcu_enabled.part.4+0x1a/0x30
? update_stack_state+0x88/0x110
? ftrace_regex_write.isra.43.part.44+0x1d3/0x320
? preempt_count_sub+0x18/0xd0
? mutex_lock_nested+0x104/0x800
? ftrace_regex_write.isra.43.part.44+0x1d3/0x320
? __unwind_start+0x1c0/0x1c0
? _mutex_lock_nest_lock+0x800/0x800
ftrace_trace_probe_callback.isra.3+0xc0/0x130
? func_set_flag+0xe0/0xe0
? __lock_acquire+0x642/0x1790
? __might_fault+0x1e/0x20
? trace_get_user+0x398/0x470
? strcmp+0x35/0x60
ftrace_trace_onoff_callback+0x48/0x70
ftrace_regex_write.isra.43.part.44+0x251/0x320
? match_records+0x420/0x420
ftrace_filter_write+0x2b/0x30
__vfs_write+0xd7/0x330
? do_loop_readv_writev+0x120/0x120
? locks_remove_posix+0x90/0x2f0
? do_lock_file_wait+0x160/0x160
? __lock_is_held+0x93/0x100
? rcu_read_lock_sched_held+0x5c/0xb0
? preempt_count_sub+0x18/0xd0
? __sb_start_write+0x10a/0x230
? vfs_write+0x222/0x240
vfs_write+0xef/0x240
SyS_write+0xab/0x130
? SyS_read+0x130/0x130
? trace_hardirqs_on_caller+0x182/0x280
? trace_hardirqs_on_thunk+0x1a/0x1c
entry_SYSCALL_64_fastpath+0x18/0xad
RIP: 0033:0x7fe61c157c30
RSP: 002b:00007ffe87890258 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: ffffffff8114a410 RCX: 00007fe61c157c30
RDX: 0000000000000010 RSI: 000055814798f5e0 RDI: 0000000000000001
RBP: ffff8800c9027f98 R08: 00007fe61c422740 R09: 00007fe61ca53700
R10: 0000000000000073 R11: 0000000000000246 R12: 0000558147a36400
R13: 00007ffe8788f160 R14: 0000000000000024 R15: 00007ffe8788f15c
? trace_hardirqs_off_caller+0xc0/0x110
---[ end trace 99fa09b3d9869c2c ]---
Bad trampoline accounting at: ffffffff81cc3b00 (do_IRQ+0x0/0x150)

Fixes: 59df055f1991 ("ftrace: trace different functions with a different tracer")
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/trace/ftrace.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)

--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3740,23 +3740,24 @@ static void __enable_ftrace_function_pro
ftrace_probe_registered = 1;
}

-static void __disable_ftrace_function_probe(void)
+static bool __disable_ftrace_function_probe(void)
{
int i;

if (!ftrace_probe_registered)
- return;
+ return false;

for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
struct hlist_head *hhd = &ftrace_func_hash[i];
if (hhd->first)
- return;
+ return false;
}

/* no more funcs left */
ftrace_shutdown(&trace_probe_ops, 0);

ftrace_probe_registered = 0;
+ return true;
}


@@ -3886,6 +3887,7 @@ static void
__unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
void *data, int flags)
{
+ struct ftrace_ops_hash old_hash_ops;
struct ftrace_func_entry *rec_entry;
struct ftrace_func_probe *entry;
struct ftrace_func_probe *p;
@@ -3897,6 +3899,7 @@ __unregister_ftrace_function_probe(char
struct hlist_node *tmp;
char str[KSYM_SYMBOL_LEN];
int i, ret;
+ bool disabled;

if (glob && (strcmp(glob, "*") == 0 || !strlen(glob)))
func_g.search = NULL;
@@ -3915,6 +3918,10 @@ __unregister_ftrace_function_probe(char

mutex_lock(&trace_probe_ops.func_hash->regex_lock);

+ old_hash_ops.filter_hash = old_hash;
+ /* Probes only have filters */
+ old_hash_ops.notrace_hash = NULL;
+
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
if (!hash)
/* Hmm, should report this somehow */
@@ -3952,12 +3959,17 @@ __unregister_ftrace_function_probe(char
}
}
mutex_lock(&ftrace_lock);
- __disable_ftrace_function_probe();
+ disabled = __disable_ftrace_function_probe();
/*
* Remove after the disable is called. Otherwise, if the last
* probe is removed, a null hash means *all enabled*.
*/
ret = ftrace_hash_move(&trace_probe_ops, 1, orig_hash, hash);
+
+ /* still need to update the function call sites */
+ if (ftrace_enabled && !disabled)
+ ftrace_run_modify_code(&trace_probe_ops, FTRACE_UPDATE_CALLS,
+ &old_hash_ops);
synchronize_sched();
if (!ret)
free_ftrace_hash_rcu(old_hash);


2017-04-19 15:21:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 49/69] irqchip/irq-imx-gpcv2: Fix spinlock initialization

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Tyler Baker <[email protected]>

commit 75eb5e1e7b4edbc8e8f930de59004d21cb46961f upstream.

The raw_spinlock in the IMX GPCV2 interrupt chip is not initialized before
use. That results in a lockdep splat:

INFO: trying to register non-static key.
the code is fine but needs lockdep annotation.
turning off the locking correctness validator.

Add the missing raw_spin_lock_init() to the setup code.

Fixes: e324c4dc4a59 ("irqchip/imx-gpcv2: IMX GPCv2 driver for wakeup sources")
Signed-off-by: Tyler Baker <[email protected]>
Reviewed-by: Fabio Estevam <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/irqchip/irq-imx-gpcv2.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/irqchip/irq-imx-gpcv2.c
+++ b/drivers/irqchip/irq-imx-gpcv2.c
@@ -230,6 +230,8 @@ static int __init imx_gpcv2_irqchip_init
return -ENOMEM;
}

+ raw_spin_lock_init(&cd->rlock);
+
cd->gpc_base = of_iomap(node, 0);
if (!cd->gpc_base) {
pr_err("fsl-gpcv2: unable to map gpc registers\n");


2017-04-19 15:22:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 47/69] pwm: rockchip: State of PWM clock should synchronize with PWM enabled state

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: David Wu <[email protected]>

commit a900152b5c29aea8134cc7a4c5db25552b3cd8f7 upstream.

If the PWM was not enabled by the U-Boot loader, the PWM cannot work
because its clock stays disabled in the PWM driver: the clock is enabled
at the beginning of pwm_apply() but disabled again at the end of
pwm_apply().

If the PWM was enabled by the U-Boot loader, the PWM clock is always
enabled unless it is shut off by the ATF. The pwm-backlight driver may
turn the power off at early suspend and should then be able to disable
the PWM clock to save power.

It is therefore important that the PWM driver itself can enable and
disable the clock: the PWM consumer must call PWM enable and disable in
the correct order, and the PWM driver keeps the state of the PWM clock
synchronized with the PWM enabled state.
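
[Editorial illustration, not part of the patch: a minimal consumer-side
sketch of the behaviour this change relies on, using the atomic PWM API.
The pwm handle and error handling are assumed to exist in the consumer
driver; return values are ignored for brevity.]

/* Sketch only: a pwm-backlight style consumer toggling the PWM. */
struct pwm_state state;

pwm_get_state(pwm, &state);

state.enabled = false;
pwm_apply_state(pwm, &state);   /* driver now also gates the PWM clock off */

state.enabled = true;
pwm_apply_state(pwm, &state);   /* clock is re-enabled before the PWM runs */

With the patch, the clock reference follows state.enabled, so an unused
PWM no longer keeps its clock running.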

Fixes: 2bf1c98aa5a4 ("pwm: rockchip: Add support for atomic update")
Signed-off-by: David Wu <[email protected]>
Reviewed-by: Boris Brezillon <[email protected]>
Signed-off-by: Thierry Reding <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/pwm/pwm-rockchip.c | 40 +++++++++++++++++++++++++++++++++-------
1 file changed, 33 insertions(+), 7 deletions(-)

--- a/drivers/pwm/pwm-rockchip.c
+++ b/drivers/pwm/pwm-rockchip.c
@@ -191,6 +191,28 @@ static int rockchip_pwm_config(struct pw
return 0;
}

+static int rockchip_pwm_enable(struct pwm_chip *chip,
+ struct pwm_device *pwm,
+ bool enable,
+ enum pwm_polarity polarity)
+{
+ struct rockchip_pwm_chip *pc = to_rockchip_pwm_chip(chip);
+ int ret;
+
+ if (enable) {
+ ret = clk_enable(pc->clk);
+ if (ret)
+ return ret;
+ }
+
+ pc->data->set_enable(chip, pwm, enable, polarity);
+
+ if (!enable)
+ clk_disable(pc->clk);
+
+ return 0;
+}
+
static int rockchip_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
struct pwm_state *state)
{
@@ -207,22 +229,26 @@ static int rockchip_pwm_apply(struct pwm
return ret;

if (state->polarity != curstate.polarity && enabled) {
- pc->data->set_enable(chip, pwm, false, state->polarity);
+ ret = rockchip_pwm_enable(chip, pwm, false, state->polarity);
+ if (ret)
+ goto out;
enabled = false;
}

ret = rockchip_pwm_config(chip, pwm, state->duty_cycle, state->period);
if (ret) {
if (enabled != curstate.enabled)
- pc->data->set_enable(chip, pwm, !enabled,
- state->polarity);
-
+ rockchip_pwm_enable(chip, pwm, !enabled,
+ state->polarity);
goto out;
}

- if (state->enabled != enabled)
- pc->data->set_enable(chip, pwm, state->enabled,
- state->polarity);
+ if (state->enabled != enabled) {
+ ret = rockchip_pwm_enable(chip, pwm, state->enabled,
+ state->polarity);
+ if (ret)
+ goto out;
+ }

/*
* Update the state with the real hardware, which can differ a bit


2017-04-19 15:22:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 46/69] can: ifi: use correct register to read rx status

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Markus Marb <[email protected]>

commit 57c1d4c33e8f7ec90976d79127059c1919cc0651 upstream.

The incorrect offset was used when trying to read the RXSTCMD register.

Signed-off-by: Markus Marb <[email protected]>
Signed-off-by: Marc Kleine-Budde <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/can/ifi_canfd/ifi_canfd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
+++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
@@ -557,7 +557,7 @@ static int ifi_canfd_poll(struct napi_st
int work_done = 0;

u32 stcmd = readl(priv->base + IFI_CANFD_STCMD);
- u32 rxstcmd = readl(priv->base + IFI_CANFD_STCMD);
+ u32 rxstcmd = readl(priv->base + IFI_CANFD_RXSTCMD);
u32 errctr = readl(priv->base + IFI_CANFD_ERROR_CTR);

/* Handle bus state changes */


2017-04-19 15:23:27

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 44/69] libnvdimm: fix reconfig_mutex, mmap_sem, and jbd2_handle lockdep splat

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Williams <[email protected]>

commit 0beb2012a1722633515c8aaa263c73449636c893 upstream.

Holding the reconfig_mutex over a potential userspace fault sets up a
lockdep dependency chain between filesystem-DAX and the libnvdimm ioctl
path. Move the user access outside of the lock.
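
[Editorial sketch, not the exact driver code: the shape of the fix is the
usual "do not fault on user memory while holding the lock" pattern;
run_command() below is a stand-in for the ndctl command dispatch.]

nvdimm_bus_lock(dev);
rc = run_command(buf);               /* command executes under the lock   */
nvdimm_bus_unlock(dev);              /* drop the lock before faulting ... */

if (copy_to_user(uptr, buf, buf_len))  /* ... user access outside the lock */
	rc = -EFAULT;
vfree(buf);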

[ INFO: possible circular locking dependency detected ]
4.11.0-rc3+ #13 Tainted: G W O
-------------------------------------------------------
fallocate/16656 is trying to acquire lock:
(&nvdimm_bus->reconfig_mutex){+.+.+.}, at: [<ffffffffa00080b1>] nvdimm_bus_lock+0x21/0x30 [libnvdimm]
but task is already holding lock:
(jbd2_handle){++++..}, at: [<ffffffff813b4944>] start_this_handle+0x104/0x460

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (jbd2_handle){++++..}:
lock_acquire+0xbd/0x200
start_this_handle+0x16a/0x460
jbd2__journal_start+0xe9/0x2d0
__ext4_journal_start_sb+0x89/0x1c0
ext4_dirty_inode+0x32/0x70
__mark_inode_dirty+0x235/0x670
generic_update_time+0x87/0xd0
touch_atime+0xa9/0xd0
ext4_file_mmap+0x90/0xb0
mmap_region+0x370/0x5b0
do_mmap+0x415/0x4f0
vm_mmap_pgoff+0xd7/0x120
SyS_mmap_pgoff+0x1c5/0x290
SyS_mmap+0x22/0x30
entry_SYSCALL_64_fastpath+0x1f/0xc2

-> #1 (&mm->mmap_sem){++++++}:
lock_acquire+0xbd/0x200
__might_fault+0x70/0xa0
__nd_ioctl+0x683/0x720 [libnvdimm]
nvdimm_ioctl+0x8b/0xe0 [libnvdimm]
do_vfs_ioctl+0xa8/0x740
SyS_ioctl+0x79/0x90
do_syscall_64+0x6c/0x200
return_from_SYSCALL_64+0x0/0x7a

-> #0 (&nvdimm_bus->reconfig_mutex){+.+.+.}:
__lock_acquire+0x16b6/0x1730
lock_acquire+0xbd/0x200
__mutex_lock+0x88/0x9b0
mutex_lock_nested+0x1b/0x20
nvdimm_bus_lock+0x21/0x30 [libnvdimm]
nvdimm_forget_poison+0x25/0x50 [libnvdimm]
nvdimm_clear_poison+0x106/0x140 [libnvdimm]
pmem_do_bvec+0x1c2/0x2b0 [nd_pmem]
pmem_make_request+0xf9/0x270 [nd_pmem]
generic_make_request+0x118/0x3b0
submit_bio+0x75/0x150

Fixes: 62232e45f4a2 ("libnvdimm: control (ioctl) messages for nvdimm_bus and nvdimm devices")
Cc: Dave Jiang <[email protected]>
Reported-by: Vishal Verma <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/nvdimm/bus.c | 6 ++++++
1 file changed, 6 insertions(+)

--- a/drivers/nvdimm/bus.c
+++ b/drivers/nvdimm/bus.c
@@ -934,8 +934,14 @@ static int __nd_ioctl(struct nvdimm_bus
rc = nd_desc->ndctl(nd_desc, nvdimm, cmd, buf, buf_len, NULL);
if (rc < 0)
goto out_unlock;
+ nvdimm_bus_unlock(&nvdimm_bus->dev);
+
if (copy_to_user(p, buf, buf_len))
rc = -EFAULT;
+
+ vfree(buf);
+ return rc;
+
out_unlock:
nvdimm_bus_unlock(&nvdimm_bus->dev);
out:


2017-04-19 14:39:17

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 09/69] orangefs: free superblock when mount fails

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Martin Brandenburg <[email protected]>

commit 1ec1688c5360e14dde4094d6acbf7516bf6db37e upstream.

Otherwise lockdep says:

[ 1337.483798] ================================================
[ 1337.483999] [ BUG: lock held when returning to user space! ]
[ 1337.484252] 4.11.0-rc6 #19 Not tainted
[ 1337.484423] ------------------------------------------------
[ 1337.484626] mount/14766 is leaving the kernel with locks still held!
[ 1337.484841] 1 lock held by mount/14766:
[ 1337.485017] #0: (&type->s_umount_key#33/1){+.+.+.}, at: [<ffffffff8124171f>] sget_userns+0x2af/0x520

Caught by xfstests generic/413 which tried to mount with the unsupported
mount option dax. Then xfstests generic/422 ran sync which deadlocks.

Signed-off-by: Martin Brandenburg <[email protected]>
Acked-by: Mike Marshall <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/orangefs/devorangefs-req.c | 9 +++++++--
fs/orangefs/orangefs-kernel.h | 1 +
fs/orangefs/super.c | 23 ++++++++++++++++-------
3 files changed, 24 insertions(+), 9 deletions(-)

--- a/fs/orangefs/devorangefs-req.c
+++ b/fs/orangefs/devorangefs-req.c
@@ -208,14 +208,19 @@ restart:
continue;
/*
* Skip ops whose filesystem we don't know about unless
- * it is being mounted.
+ * it is being mounted or unmounted. It is possible for
+ * a filesystem we don't know about to be unmounted if
+ * it fails to mount in the kernel after userspace has
+ * been sent the mount request.
*/
/* XXX: is there a better way to detect this? */
} else if (ret == -1 &&
!(op->upcall.type ==
ORANGEFS_VFS_OP_FS_MOUNT ||
op->upcall.type ==
- ORANGEFS_VFS_OP_GETATTR)) {
+ ORANGEFS_VFS_OP_GETATTR ||
+ op->upcall.type ==
+ ORANGEFS_VFS_OP_FS_UMOUNT)) {
gossip_debug(GOSSIP_DEV_DEBUG,
"orangefs: skipping op tag %llu %s\n",
llu(op->tag), get_opname_string(op));
--- a/fs/orangefs/orangefs-kernel.h
+++ b/fs/orangefs/orangefs-kernel.h
@@ -249,6 +249,7 @@ struct orangefs_sb_info_s {
char devname[ORANGEFS_MAX_SERVER_ADDR_LEN];
struct super_block *sb;
int mount_pending;
+ int no_list;
struct list_head list;
};

--- a/fs/orangefs/super.c
+++ b/fs/orangefs/super.c
@@ -493,7 +493,7 @@ struct dentry *orangefs_mount(struct fil

if (ret) {
d = ERR_PTR(ret);
- goto free_op;
+ goto free_sb_and_op;
}

/*
@@ -519,6 +519,9 @@ struct dentry *orangefs_mount(struct fil
spin_unlock(&orangefs_superblocks_lock);
op_release(new_op);

+ /* Must be removed from the list now. */
+ ORANGEFS_SB(sb)->no_list = 0;
+
if (orangefs_userspace_version >= 20906) {
new_op = op_alloc(ORANGEFS_VFS_OP_FEATURES);
if (!new_op)
@@ -533,6 +536,10 @@ struct dentry *orangefs_mount(struct fil

return dget(sb->s_root);

+free_sb_and_op:
+ /* Will call orangefs_kill_sb with sb not in list. */
+ ORANGEFS_SB(sb)->no_list = 1;
+ deactivate_locked_super(sb);
free_op:
gossip_err("orangefs_mount: mount request failed with %d\n", ret);
if (ret == -EINVAL) {
@@ -558,12 +565,14 @@ void orangefs_kill_sb(struct super_block
*/
orangefs_unmount_sb(sb);

- /* remove the sb from our list of orangefs specific sb's */
-
- spin_lock(&orangefs_superblocks_lock);
- __list_del_entry(&ORANGEFS_SB(sb)->list); /* not list_del_init */
- ORANGEFS_SB(sb)->list.prev = NULL;
- spin_unlock(&orangefs_superblocks_lock);
+ if (!ORANGEFS_SB(sb)->no_list) {
+ /* remove the sb from our list of orangefs specific sb's */
+ spin_lock(&orangefs_superblocks_lock);
+ /* not list_del_init */
+ __list_del_entry(&ORANGEFS_SB(sb)->list);
+ ORANGEFS_SB(sb)->list.prev = NULL;
+ spin_unlock(&orangefs_superblocks_lock);
+ }

/*
* make sure that ORANGEFS_DEV_REMOUNT_ALL loop that might've seen us


2017-04-19 15:25:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 07/69] thp: fix MADV_DONTNEED vs clear soft dirty race

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kirill A. Shutemov <[email protected]>

commit 5b7abeae3af8c08c577e599dd0578b9e3ee6687b upstream.

Yet another instance of the same race.

Fix is identical to change_huge_pmd().

See "thp: fix MADV_DONTNEED vs. numa balancing race" for more details.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/proc/task_mmu.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -899,7 +899,14 @@ static inline void clear_soft_dirty(stru
static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmdp)
{
- pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
+ pmd_t pmd = *pmdp;
+
+ /* See comment in change_huge_pmd() */
+ pmdp_invalidate(vma, addr, pmdp);
+ if (pmd_dirty(*pmdp))
+ pmd = pmd_mkdirty(pmd);
+ if (pmd_young(*pmdp))
+ pmd = pmd_mkyoung(pmd);

pmd = pmd_wrprotect(pmd);
pmd = pmd_clear_soft_dirty(pmd);


2017-04-19 15:26:13

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 05/69] tcmu: Skip Data-Out blocks before gathering Data-In buffer for BIDI case

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Xiubo Li <[email protected]>

commit a5d68ba85801a78c892a0eb8efb711e293ed314b upstream.

For the bidirectional case, the Data-Out buffer blocks are always at
the head of the tcmu_cmd's bitmap, so before gathering the Data-In
buffer the Data-Out blocks must be skipped first, or devices that
support BIDI commands won't work.
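
[Illustrative numbers only; DATA_BLOCK_SIZE is 4096 in
target_core_user.c: with an 8192-byte Data-Out payload, the first
DIV_ROUND_UP(8192, 4096) = 2 bits set in the command's bitmap belong to
Data-Out, so the gather path must find and clear those two bits before
it starts copying Data-In blocks.]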

Fixed: 26418649eead ("target/user: Introduce data_bitmap, replace
data_length/data_head/data_tail")
Reported-by: Ilias Tsitsimpis <[email protected]>
Tested-by: Ilias Tsitsimpis <[email protected]>
Signed-off-by: Xiubo Li <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/target_core_user.c | 48 ++++++++++++++++++++++++++------------
1 file changed, 33 insertions(+), 15 deletions(-)

--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -307,24 +307,50 @@ static void free_data_area(struct tcmu_d
DATA_BLOCK_BITS);
}

-static void gather_data_area(struct tcmu_dev *udev, unsigned long *cmd_bitmap,
- struct scatterlist *data_sg, unsigned int data_nents)
+static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
+ bool bidi)
{
+ struct se_cmd *se_cmd = cmd->se_cmd;
int i, block;
int block_remaining = 0;
void *from, *to;
size_t copy_bytes, from_offset;
- struct scatterlist *sg;
+ struct scatterlist *sg, *data_sg;
+ unsigned int data_nents;
+ DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
+
+ bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
+
+ if (!bidi) {
+ data_sg = se_cmd->t_data_sg;
+ data_nents = se_cmd->t_data_nents;
+ } else {
+ uint32_t count;
+
+ /*
+ * For bidi case, the first count blocks are for Data-Out
+ * buffer blocks, and before gathering the Data-In buffer
+ * the Data-Out buffer blocks should be discarded.
+ */
+ count = DIV_ROUND_UP(se_cmd->data_length, DATA_BLOCK_SIZE);
+ while (count--) {
+ block = find_first_bit(bitmap, DATA_BLOCK_BITS);
+ clear_bit(block, bitmap);
+ }
+
+ data_sg = se_cmd->t_bidi_data_sg;
+ data_nents = se_cmd->t_bidi_data_nents;
+ }

for_each_sg(data_sg, sg, data_nents, i) {
int sg_remaining = sg->length;
to = kmap_atomic(sg_page(sg)) + sg->offset;
while (sg_remaining > 0) {
if (block_remaining == 0) {
- block = find_first_bit(cmd_bitmap,
+ block = find_first_bit(bitmap,
DATA_BLOCK_BITS);
block_remaining = DATA_BLOCK_SIZE;
- clear_bit(block, cmd_bitmap);
+ clear_bit(block, bitmap);
}
copy_bytes = min_t(size_t, sg_remaining,
block_remaining);
@@ -601,19 +627,11 @@ static void tcmu_handle_completion(struc
se_cmd->scsi_sense_length);
free_data_area(udev, cmd);
} else if (se_cmd->se_cmd_flags & SCF_BIDI) {
- DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
-
/* Get Data-In buffer before clean up */
- bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
- gather_data_area(udev, bitmap,
- se_cmd->t_bidi_data_sg, se_cmd->t_bidi_data_nents);
+ gather_data_area(udev, cmd, true);
free_data_area(udev, cmd);
} else if (se_cmd->data_direction == DMA_FROM_DEVICE) {
- DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
-
- bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
- gather_data_area(udev, bitmap,
- se_cmd->t_data_sg, se_cmd->t_data_nents);
+ gather_data_area(udev, cmd, false);
free_data_area(udev, cmd);
} else if (se_cmd->data_direction == DMA_TO_DEVICE) {
free_data_area(udev, cmd);


2017-04-19 15:26:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 04/69] tcmu: Fix wrongly calculating of the base_command_size

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Xiubo Li <[email protected]>

commit abe342a5b4b5aa579f6bf40ba73447c699e6b579 upstream.

t_data_nents and t_bidi_data_nents are the numbers of scatterlist
segments, but there is no guarantee that the block size equals the
segment size.

In the worst case all the blocks are discontiguous and just as many
iovecs are needed, that is: blocks == iovs. So just set the number of
iovs to the block count needed by the tcmu command.
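
[Worked example, assuming DATA_BLOCK_SIZE is 4096: a command whose
rounded-up data length is 12288 bytes occupies 12288 / 4096 = 3 blocks.
If every block lands in a discontiguous area, 3 iovecs are needed even
if the scatterlist had a single 12288-byte segment, so sizing the iov
array by t_data_nents could under-allocate; sizing it by the block count
is always sufficient.]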

Tested-by: Ilias Tsitsimpis <[email protected]>
Reviewed-by: Mike Christie <[email protected]>
Signed-off-by: Xiubo Li <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/target_core_user.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)

--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -404,6 +404,13 @@ static inline size_t tcmu_cmd_get_data_l
return data_length;
}

+static inline uint32_t tcmu_cmd_get_block_cnt(struct tcmu_cmd *tcmu_cmd)
+{
+ size_t data_length = tcmu_cmd_get_data_length(tcmu_cmd);
+
+ return data_length / DATA_BLOCK_SIZE;
+}
+
static sense_reason_t
tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
{
@@ -431,8 +438,7 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcm
* expensive to tell how many regions are freed in the bitmap
*/
base_command_size = max(offsetof(struct tcmu_cmd_entry,
- req.iov[se_cmd->t_bidi_data_nents +
- se_cmd->t_data_nents]),
+ req.iov[tcmu_cmd_get_block_cnt(tcmu_cmd)]),
sizeof(struct tcmu_cmd_entry));
command_size = base_command_size
+ round_up(scsi_command_size(se_cmd->t_task_cdb), TCMU_OP_ALIGN_SIZE);


2017-04-19 15:26:08

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 06/69] thp: fix MADV_DONTNEED vs. MADV_FREE race

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kirill A. Shutemov <[email protected]>

commit 58ceeb6bec86d9140f9d91d71a710e963523d063 upstream.

Both MADV_DONTNEED and MADV_FREE are handled with down_read(mmap_sem).

It's critical not to clear the pmd intermittently while handling
MADV_FREE, to avoid a race with MADV_DONTNEED:

CPU0:                                CPU1:
                                     madvise_free_huge_pmd()
                                      pmdp_huge_get_and_clear_full()
madvise_dontneed()
 zap_pmd_range()
  pmd_trans_huge(*pmd) == 0 (without ptl)
  // skip the pmd
                                      set_pmd_at();
                                      // pmd is re-established

It results in MADV_DONTNEED skipping the pmd, leaving it not cleared.
It violates the MADV_DONTNEED interface and can result in userspace
misbehaviour.

Basically it's the same race as with numa balancing in
change_huge_pmd(), but a bit simpler to mitigate: we don't need to
preserve dirty/young flags here due to MADV_FREE functionality.

[[email protected]: Urgh... Power is special again]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Acked-by: Minchan Kim <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/huge_memory.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1393,8 +1393,7 @@ bool madvise_free_huge_pmd(struct mmu_ga
deactivate_page(page);

if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
- orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
- tlb->fullmm);
+ pmdp_invalidate(vma, addr, pmd);
orig_pmd = pmd_mkold(orig_pmd);
orig_pmd = pmd_mkclean(orig_pmd);



2017-04-19 15:26:54

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 32/69] efi/fb: Avoid reconfiguration of BAR that covers the framebuffer

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ard Biesheuvel <[email protected]>

commit 55d728a40d368ba80443be85c02e641fc9082a3f upstream.

On UEFI systems, the PCI subsystem is enumerated by the firmware,
and if a graphical framebuffer is exposed via a PCI device, its base
address and size are exposed to the OS via the Graphics Output
Protocol (GOP).

On arm64 PCI systems, the entire PCI hierarchy is reconfigured from
scratch at boot. This may result in the GOP framebuffer address
becoming stale if the BAR covering the framebuffer is modified. This
will cause the framebuffer to become unresponsive, and may in some
cases result in unpredictable behavior if the range is reassigned to
another device.

So add a non-x86 quirk to the EFI fb driver to find the BAR associated
with the GOP base address, and claim the BAR resource so that the PCI
core will not move it.

Signed-off-by: Ard Biesheuvel <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matt Fleming <[email protected]>
Cc: Peter Jones <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: 9822504c1fa5 ("efifb: Enable the efi-framebuffer platform driver ...")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/video/fbdev/efifb.c | 66 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 65 insertions(+), 1 deletion(-)

--- a/drivers/video/fbdev/efifb.c
+++ b/drivers/video/fbdev/efifb.c
@@ -10,6 +10,7 @@
#include <linux/efi.h>
#include <linux/errno.h>
#include <linux/fb.h>
+#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/screen_info.h>
#include <video/vga.h>
@@ -143,6 +144,8 @@ static struct attribute *efifb_attrs[] =
};
ATTRIBUTE_GROUPS(efifb);

+static bool pci_dev_disabled; /* FB base matches BAR of a disabled device */
+
static int efifb_probe(struct platform_device *dev)
{
struct fb_info *info;
@@ -152,7 +155,7 @@ static int efifb_probe(struct platform_d
unsigned int size_total;
char *option = NULL;

- if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI)
+ if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI || pci_dev_disabled)
return -ENODEV;

if (fb_get_options("efifb", &option))
@@ -360,3 +363,64 @@ static struct platform_driver efifb_driv
};

builtin_platform_driver(efifb_driver);
+
+#if defined(CONFIG_PCI) && !defined(CONFIG_X86)
+
+static bool pci_bar_found; /* did we find a BAR matching the efifb base? */
+
+static void claim_efifb_bar(struct pci_dev *dev, int idx)
+{
+ u16 word;
+
+ pci_bar_found = true;
+
+ pci_read_config_word(dev, PCI_COMMAND, &word);
+ if (!(word & PCI_COMMAND_MEMORY)) {
+ pci_dev_disabled = true;
+ dev_err(&dev->dev,
+ "BAR %d: assigned to efifb but device is disabled!\n",
+ idx);
+ return;
+ }
+
+ if (pci_claim_resource(dev, idx)) {
+ pci_dev_disabled = true;
+ dev_err(&dev->dev,
+ "BAR %d: failed to claim resource for efifb!\n", idx);
+ return;
+ }
+
+ dev_info(&dev->dev, "BAR %d: assigned to efifb\n", idx);
+}
+
+static void efifb_fixup_resources(struct pci_dev *dev)
+{
+ u64 base = screen_info.lfb_base;
+ u64 size = screen_info.lfb_size;
+ int i;
+
+ if (pci_bar_found || screen_info.orig_video_isVGA != VIDEO_TYPE_EFI)
+ return;
+
+ if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE)
+ base |= (u64)screen_info.ext_lfb_base << 32;
+
+ if (!base)
+ return;
+
+ for (i = 0; i < PCI_STD_RESOURCE_END; i++) {
+ struct resource *res = &dev->resource[i];
+
+ if (!(res->flags & IORESOURCE_MEM))
+ continue;
+
+ if (res->start <= base && res->end >= base + size - 1) {
+ claim_efifb_bar(dev, i);
+ break;
+ }
+ }
+}
+DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_ANY_ID, PCI_ANY_ID, PCI_BASE_CLASS_DISPLAY,
+ 16, efifb_fixup_resources);
+
+#endif


2017-04-19 15:27:07

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 03/69] tcmu: Fix possible overwrite of t_data_sg's last iov[]

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Xiubo Li <[email protected]>

commit ab22d2604c86ceb01bb2725c9860b88a7dd383bb upstream.

If there is BIDI data, its first iov[] will overwrite the last
iov[] for se_cmd->t_data_sg.

To fix this, we can just increase the iov pointer, but this may
introduce a new memory leak: if se_cmd->data_length and
se_cmd->t_bidi_data_sg->length are both not aligned up to
DATA_BLOCK_SIZE, the actual length needed may be larger than just
their sum.

So this is avoided by rounding all the data lengths up
to DATA_BLOCK_SIZE.
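
[Worked example with invented numbers, DATA_BLOCK_SIZE = 4096: for
data_length = 5000 and t_bidi_data_sg->length = 5000, the raw sum is
10000 bytes, but each direction actually consumes round_up(5000, 4096) =
8192 bytes of block space, 16384 bytes in total, so allocating from the
plain sum would fall 6384 bytes short.]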

Reviewed-by: Mike Christie <[email protected]>
Tested-by: Ilias Tsitsimpis <[email protected]>
Reviewed-by: Bryant G. Ly <[email protected]>
Signed-off-by: Xiubo Li <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/target_core_user.c | 34 +++++++++++++++++++++++-----------
1 file changed, 23 insertions(+), 11 deletions(-)

--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -390,6 +390,20 @@ static bool is_ring_space_avail(struct t
return true;
}

+static inline size_t tcmu_cmd_get_data_length(struct tcmu_cmd *tcmu_cmd)
+{
+ struct se_cmd *se_cmd = tcmu_cmd->se_cmd;
+ size_t data_length = round_up(se_cmd->data_length, DATA_BLOCK_SIZE);
+
+ if (se_cmd->se_cmd_flags & SCF_BIDI) {
+ BUG_ON(!(se_cmd->t_bidi_data_sg && se_cmd->t_bidi_data_nents));
+ data_length += round_up(se_cmd->t_bidi_data_sg->length,
+ DATA_BLOCK_SIZE);
+ }
+
+ return data_length;
+}
+
static sense_reason_t
tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
{
@@ -403,7 +417,7 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcm
uint32_t cmd_head;
uint64_t cdb_off;
bool copy_to_data_area;
- size_t data_length;
+ size_t data_length = tcmu_cmd_get_data_length(tcmu_cmd);
DECLARE_BITMAP(old_bitmap, DATA_BLOCK_BITS);

if (test_bit(TCMU_DEV_BIT_BROKEN, &udev->flags))
@@ -429,11 +443,6 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcm

mb = udev->mb_addr;
cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
- data_length = se_cmd->data_length;
- if (se_cmd->se_cmd_flags & SCF_BIDI) {
- BUG_ON(!(se_cmd->t_bidi_data_sg && se_cmd->t_bidi_data_nents));
- data_length += se_cmd->t_bidi_data_sg->length;
- }
if ((command_size > (udev->cmdr_size / 2)) ||
data_length > udev->data_size) {
pr_warn("TCMU: Request of size %zu/%zu is too big for %u/%zu "
@@ -503,11 +512,14 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcm
entry->req.iov_dif_cnt = 0;

/* Handle BIDI commands */
- iov_cnt = 0;
- alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
- se_cmd->t_bidi_data_nents, &iov, &iov_cnt, false);
- entry->req.iov_bidi_cnt = iov_cnt;
-
+ if (se_cmd->se_cmd_flags & SCF_BIDI) {
+ iov_cnt = 0;
+ iov++;
+ alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
+ se_cmd->t_bidi_data_nents, &iov, &iov_cnt,
+ false);
+ entry->req.iov_bidi_cnt = iov_cnt;
+ }
/* cmd's data_bitmap is what changed in process */
bitmap_xor(tcmu_cmd->data_bitmap, old_bitmap, udev->data_bitmap,
DATA_BLOCK_BITS);


2017-04-19 15:27:24

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 31/69] efi/libstub: Skip GOP with PIXEL_BLT_ONLY format

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Cohen, Eugene <[email protected]>

commit 540f4c0e894f7e46a66dfa424b16424cbdc12c38 upstream.

The UEFI Specification permits Graphics Output Protocol (GOP) instances
without direct framebuffer access. This is indicated in the Mode structure
with a PixelFormat enumeration value of PIXEL_BLT_ONLY. Given that the
kernel does not know how to drive a Blt() only framebuffer (which is only
permitted before ExitBootServices() anyway), we should disregard such
framebuffers when looking for a GOP instance that is suitable for use as
the boot console.

So modify the EFI GOP initialization to not use a PIXEL_BLT_ONLY instance,
preventing attempts later in boot to use an invalid screen_info.lfb_base
address.

Signed-off-by: Eugene Cohen <[email protected]>
[ Moved the Blt() only check into the loop and clarified that Blt() only GOPs are unusable by the kernel. ]
Signed-off-by: Ard Biesheuvel <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matt Fleming <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: 9822504c1fa5 ("efifb: Enable the efi-framebuffer platform driver ...")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/firmware/efi/libstub/gop.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/firmware/efi/libstub/gop.c
+++ b/drivers/firmware/efi/libstub/gop.c
@@ -149,7 +149,8 @@ setup_gop32(efi_system_table_t *sys_tabl

status = __gop_query32(sys_table_arg, gop32, &info, &size,
&current_fb_base);
- if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
+ if (status == EFI_SUCCESS && (!first_gop || conout_found) &&
+ info->pixel_format != PIXEL_BLT_ONLY) {
/*
* Systems that use the UEFI Console Splitter may
* provide multiple GOP devices, not all of which are
@@ -266,7 +267,8 @@ setup_gop64(efi_system_table_t *sys_tabl

status = __gop_query64(sys_table_arg, gop64, &info, &size,
&current_fb_base);
- if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
+ if (status == EFI_SUCCESS && (!first_gop || conout_found) &&
+ info->pixel_format != PIXEL_BLT_ONLY) {
/*
* Systems that use the UEFI Console Splitter may
* provide multiple GOP devices, not all of which are


2017-04-19 15:28:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 28/69] acpi, nfit, libnvdimm: fix interleave set cookie calculation (64-bit comparison)

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Williams <[email protected]>

commit b03b99a329a14b7302f37c3ea6da3848db41c8c5 upstream.

While reviewing the -stable patch for commit 86ef58a4e35e "nfit,
libnvdimm: fix interleave set cookie calculation" Ben noted:

"This is returning an int, thus it's effectively doing a 32-bit
comparison and not the 64-bit comparison you say is needed."

Update the compare operation to be immune to this integer demotion problem.
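
[To make the demotion concrete, a stand-alone editorial sketch with
invented offsets; this is userspace illustration, not kernel code, and
the exact value of the truncation is implementation-defined, though 0 on
the usual two's-complement ABIs.]

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t off0 = 0x200000000ULL;  /* pretend map0->region_offset */
	uint64_t off1 = 0x100000000ULL;  /* pretend map1->region_offset */
	int cmp = off0 - off1;           /* 0x100000000 truncated to 32 bits */

	printf("cmp = %d\n", cmp);       /* prints 0: the sort treats two
					    distinct offsets as equal */
	return 0;
}

With the patch the comparator returns -1/0/1 from explicit 64-bit
comparisons, so no such demotion can occur.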

Cc: Nicholas Moulin <[email protected]>
Fixes: 86ef58a4e35e ("nfit, libnvdimm: fix interleave set cookie calculation")
Reported-by: Ben Hutchings <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/acpi/nfit/core.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -1617,7 +1617,11 @@ static int cmp_map(const void *m0, const
const struct nfit_set_info_map *map0 = m0;
const struct nfit_set_info_map *map1 = m1;

- return map0->region_offset - map1->region_offset;
+ if (map0->region_offset < map1->region_offset)
+ return -1;
+ else if (map0->region_offset > map1->region_offset)
+ return 1;
+ return 0;
}

/* Retrieve the nth entry referencing this spa */


2017-04-19 15:28:17

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 27/69] x86/vdso: Plug race between mapping and ELF header setup

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit 6fdc6dd90272ce7e75d744f71535cfbd8d77da81 upstream.

The vsyscall32 sysctl can race against a concurrent fork when it switches
from disabled to enabled:

arch_setup_additional_pages()
  if (vdso32_enabled)
    --> No mapping
                                     sysctl.vsysscall32()
                                       --> vdso32_enabled = true
create_elf_tables()
  ARCH_DLINFO_IA32
    if (vdso32_enabled) {
      --> Add VDSO entry with NULL pointer

Make ARCH_DLINFO_IA32 check whether the VDSO mapping has been set up for
the newly forked process or not.

Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Mathias Krause <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/include/asm/elf.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -278,7 +278,7 @@ struct task_struct;

#define ARCH_DLINFO_IA32 \
do { \
- if (vdso32_enabled) { \
+ if (VDSO_CURRENT_BASE) { \
NEW_AUX_ENT(AT_SYSINFO, VDSO_ENTRY); \
NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_CURRENT_BASE); \
} \


2017-04-19 15:28:39

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 29/69] ACPI / scan: Set the visited flag for all enumerated devices

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Rafael J. Wysocki <[email protected]>

commit f406270bf73d71ea7b35ee3f7a08a44f6594c9b1 upstream.

Commit 10c7e20b2ff3 (ACPI / scan: fix enumeration (visited) flags for
bus rescans) attempted to fix a problem with ACPI-based enumeration
of I2C/SPI devices, but it forgot to ensure that the visited flag
will be set for all of the other enumerated devices, so fix that.

Fixes: 10c7e20b2ff3 (ACPI / scan: fix enumeration (visited) flags for bus rescans)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=194885
Reported-and-tested-by: Kevin Locke <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Reviewed-by: Mika Westerberg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/acpi/scan.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)

--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -1857,15 +1857,20 @@ static void acpi_bus_attach(struct acpi_
return;

device->flags.match_driver = true;
- if (!ret) {
- ret = device_attach(&device->dev);
- if (ret < 0)
- return;
-
- if (!ret && device->pnp.type.platform_id)
- acpi_default_enumeration(device);
+ if (ret > 0) {
+ acpi_device_set_enumerated(device);
+ goto ok;
}

+ ret = device_attach(&device->dev);
+ if (ret < 0)
+ return;
+
+ if (ret > 0 || !device->pnp.type.platform_id)
+ acpi_device_set_enumerated(device);
+ else
+ acpi_default_enumeration(device);
+
ok:
list_for_each_entry(child, &device->children, node)
acpi_bus_attach(child);


2017-04-19 15:28:37

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 22/69] x86/efi: Don't try to reserve runtime regions

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Omar Sandoval <[email protected]>

commit 6f6266a561306e206e0e31a5038f029b6a7b1d89 upstream.

Reserving a runtime region results in splitting the EFI memory
descriptors for the runtime region. This results in runtime region
descriptors with bogus memory mappings, leading to interesting crashes
like the following during a kexec:

general protection fault: 0000 [#1] SMP
Modules linked in:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.11.0-rc1 #53
Hardware name: Wiwynn Leopard-Orv2/Leopard-DDR BW, BIOS LBM05 09/30/2016
RIP: 0010:virt_efi_set_variable()
...
Call Trace:
efi_delete_dummy_variable()
efi_enter_virtual_mode()
start_kernel()
? set_init_arg()
x86_64_start_reservations()
x86_64_start_kernel()
start_cpu()
...
Kernel panic - not syncing: Fatal exception

Runtime regions will not be freed and do not need to be reserved, so
skip the memmap modification in this case.

Signed-off-by: Omar Sandoval <[email protected]>
Signed-off-by: Matt Fleming <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Jones <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 8e80632fb23f ("efi/esrt: Use efi_mem_reserve() and avoid a kmalloc()")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/platform/efi/quirks.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -201,6 +201,10 @@ void __init efi_arch_mem_reserve(phys_ad
return;
}

+ /* No need to reserve regions that will never be freed. */
+ if (md.attribute & EFI_MEMORY_RUNTIME)
+ return;
+
size += addr % EFI_PAGE_SIZE;
size = round_up(size, EFI_PAGE_SIZE);
addr = round_down(addr, EFI_PAGE_SIZE);


2017-04-19 15:29:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 25/69] x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Williams <[email protected]>

commit 11e63f6d920d6f2dfd3cd421e939a4aec9a58dcd upstream.

Before we rework the "pmem api" to stop abusing __copy_user_nocache()
for memcpy_to_pmem() we need to fix cases where we may strand dirty data
in the cpu cache. The problem occurs when copy_from_iter_pmem() is used
for arbitrary data transfers from userspace. There is no guarantee that
these transfers, performed by dax_iomap_actor(), will have aligned
destinations or aligned transfer lengths. Backstop the usage of
__copy_user_nocache() with explicit cache management in these unaligned
cases.
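
[A few worked cases per the checks introduced in the hunk below, cache
line size assumed to be 64 bytes: a 4-byte copy to a 4-byte-aligned
destination uses non-temporal stores and needs no write-back; any other
sub-8-byte copy gets one cache line written back; a 100-byte copy to an
8-byte-aligned destination leaves a 4-byte cached tail (100 is not a
multiple of 8), so only the final line is written back, while a 96-byte
copy to the same destination needs no write-back at all.]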

Yes, copy_from_iter_pmem() is now too big for an inline, but addressing
that is saved for a later patch that moves the entirety of the "pmem
api" into the pmem driver directly.

Fixes: 5de490daec8b ("pmem: add copy_from_iter_pmem() and clear_pmem()")
Cc: <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Reviewed-by: Ross Zwisler <[email protected]>
Signed-off-by: Toshi Kani <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/include/asm/pmem.h | 42 +++++++++++++++++++++++++++++++-----------
1 file changed, 31 insertions(+), 11 deletions(-)

--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -55,7 +55,8 @@ static inline int arch_memcpy_from_pmem(
* @size: number of bytes to write back
*
* Write back a cache range using the CLWB (cache line write back)
- * instruction.
+ * instruction. Note that @size is internally rounded up to be cache
+ * line size aligned.
*/
static inline void arch_wb_cache_pmem(void *addr, size_t size)
{
@@ -69,15 +70,6 @@ static inline void arch_wb_cache_pmem(vo
clwb(p);
}

-/*
- * copy_from_iter_nocache() on x86 only uses non-temporal stores for iovec
- * iterators, so for other types (bvec & kvec) we must do a cache write-back.
- */
-static inline bool __iter_needs_pmem_wb(struct iov_iter *i)
-{
- return iter_is_iovec(i) == false;
-}
-
/**
* arch_copy_from_iter_pmem - copy data from an iterator to PMEM
* @addr: PMEM destination address
@@ -94,7 +86,35 @@ static inline size_t arch_copy_from_iter
/* TODO: skip the write-back by always using non-temporal stores */
len = copy_from_iter_nocache(addr, bytes, i);

- if (__iter_needs_pmem_wb(i))
+ /*
+ * In the iovec case on x86_64 copy_from_iter_nocache() uses
+ * non-temporal stores for the bulk of the transfer, but we need
+ * to manually flush if the transfer is unaligned. A cached
+ * memory copy is used when destination or size is not naturally
+ * aligned. That is:
+ * - Require 8-byte alignment when size is 8 bytes or larger.
+ * - Require 4-byte alignment when size is 4 bytes.
+ *
+ * In the non-iovec case the entire destination needs to be
+ * flushed.
+ */
+ if (iter_is_iovec(i)) {
+ unsigned long flushed, dest = (unsigned long) addr;
+
+ if (bytes < 8) {
+ if (!IS_ALIGNED(dest, 4) || (bytes != 4))
+ arch_wb_cache_pmem(addr, 1);
+ } else {
+ if (!IS_ALIGNED(dest, 8)) {
+ dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
+ arch_wb_cache_pmem(addr, 1);
+ }
+
+ flushed = dest - (unsigned long) addr;
+ if (bytes > flushed && !IS_ALIGNED(bytes - flushed, 8))
+ arch_wb_cache_pmem(addr + bytes - 1, 1);
+ }
+ } else
arch_wb_cache_pmem(addr, bytes);

return len;


2017-04-19 15:29:35

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 23/69] x86/signals: Fix lower/upper bound reporting in compat siginfo

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Joerg Roedel <[email protected]>

commit cfac6dfa42bddfa9711b20d486e521d1a41ab09f upstream.

Put the right values from the original siginfo into the
userspace compat-siginfo.

This fixes the 32-bit MPX "tabletest" testcase on 64-bit kernels.

Signed-off-by: Joerg Roedel <[email protected]>
Acked-by: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: a4455082dc6f0 ('x86/signals: Add missing signal_compat code for x86 features')
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/kernel/signal_compat.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/signal_compat.c
+++ b/arch/x86/kernel/signal_compat.c
@@ -151,8 +151,8 @@ int __copy_siginfo_to_user32(compat_sigi

if (from->si_signo == SIGSEGV) {
if (from->si_code == SEGV_BNDERR) {
- compat_uptr_t lower = (unsigned long)&to->si_lower;
- compat_uptr_t upper = (unsigned long)&to->si_upper;
+ compat_uptr_t lower = (unsigned long)from->si_lower;
+ compat_uptr_t upper = (unsigned long)from->si_upper;
put_user_ex(lower, &to->si_lower);
put_user_ex(upper, &to->si_upper);
}


2017-04-19 15:30:30

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 19/69] Input: xpad - add support for Razer Wildcat gamepad

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Cameron Gutman <[email protected]>

commit 5376366886251e2f8f248704adb620a4bc4c0937 upstream.

Signed-off-by: Cameron Gutman <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/input/joystick/xpad.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/input/joystick/xpad.c
+++ b/drivers/input/joystick/xpad.c
@@ -202,6 +202,7 @@ static const struct xpad_device {
{ 0x1430, 0x8888, "TX6500+ Dance Pad (first generation)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
{ 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
{ 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ { 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
{ 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 },
{ 0x15e4, 0x3f0a, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 },
{ 0x15e4, 0x3f10, "Batarang Xbox 360 controller", 0, XTYPE_XBOX360 },
@@ -330,6 +331,7 @@ static struct usb_device_id xpad_table[]
XPAD_XBOX360_VENDOR(0x24c6), /* PowerA Controllers */
XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */
XPAD_XBOX360_VENDOR(0x1532), /* Razer Sabertooth */
+ XPAD_XBOXONE_VENDOR(0x1532), /* Razer Wildcat */
XPAD_XBOX360_VENDOR(0x15e4), /* Numark X-Box 360 controllers */
XPAD_XBOX360_VENDOR(0x162e), /* Joytech X-Box 360 controllers */
{ }


2017-04-19 15:30:27

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 02/69] audit: make sure we don't let the retry queue grow without bounds

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Paul Moore <[email protected]>

commit 264d509637d95f9404e52ced5003ad352e0f6a26 upstream.

The retry queue is intended to provide a temporary buffer in the case
of transient errors when communicating with auditd; it is not meant
as a long-lived queue, as that functionality is provided by the hold
queue.

This patch fixes a problem identified by Seth where the retry queue
could grow uncontrollably if an auditd instance did not connect to
the kernel to drain the queues. It does so by doing the
following:

* Make sure we always call auditd_reset() if we decide the connection
with auditd is really dead. There were some cases in
kauditd_hold_skb() where we did not reset the connection, this patch
relocates the reset calls to kauditd_thread() so all the error
conditions are caught and the connection reset. As a side effect,
this means we could move auditd_reset() and get rid of the forward
definition at the top of kernel/audit.c.

* We never checked the status of the auditd connection when
processing the main audit queue which meant that the retry queue
could grow unchecked. This patch adds a call to auditd_reset()
after the main queue has been processed if auditd is not connected,
the auditd_reset() call will make sure the retry and hold queues are
correctly managed/flushed so that the retry queue remains reasonable.

Reported-by: Seth Forshee <[email protected]>
Signed-off-by: Paul Moore <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/audit.c | 67 +++++++++++++++++++++++++++------------------------------
1 file changed, 32 insertions(+), 35 deletions(-)

--- a/kernel/audit.c
+++ b/kernel/audit.c
@@ -160,7 +160,6 @@ static LIST_HEAD(audit_freelist);

/* queue msgs to send via kauditd_task */
static struct sk_buff_head audit_queue;
-static void kauditd_hold_skb(struct sk_buff *skb);
/* queue msgs due to temporary unicast send problems */
static struct sk_buff_head audit_retry_queue;
/* queue msgs waiting for new auditd connection */
@@ -454,30 +453,6 @@ static void auditd_set(int pid, u32 port
}

/**
- * auditd_reset - Disconnect the auditd connection
- *
- * Description:
- * Break the auditd/kauditd connection and move all the queued records into the
- * hold queue in case auditd reconnects.
- */
-static void auditd_reset(void)
-{
- struct sk_buff *skb;
-
- /* if it isn't already broken, break the connection */
- rcu_read_lock();
- if (auditd_conn.pid)
- auditd_set(0, 0, NULL);
- rcu_read_unlock();
-
- /* flush all of the main and retry queues to the hold queue */
- while ((skb = skb_dequeue(&audit_retry_queue)))
- kauditd_hold_skb(skb);
- while ((skb = skb_dequeue(&audit_queue)))
- kauditd_hold_skb(skb);
-}
-
-/**
* kauditd_print_skb - Print the audit record to the ring buffer
* @skb: audit record
*
@@ -505,9 +480,6 @@ static void kauditd_rehold_skb(struct sk
{
/* put the record back in the queue at the same place */
skb_queue_head(&audit_hold_queue, skb);
-
- /* fail the auditd connection */
- auditd_reset();
}

/**
@@ -544,9 +516,6 @@ static void kauditd_hold_skb(struct sk_b
/* we have no other options - drop the message */
audit_log_lost("kauditd hold queue overflow");
kfree_skb(skb);
-
- /* fail the auditd connection */
- auditd_reset();
}

/**
@@ -567,6 +536,30 @@ static void kauditd_retry_skb(struct sk_
}

/**
+ * auditd_reset - Disconnect the auditd connection
+ *
+ * Description:
+ * Break the auditd/kauditd connection and move all the queued records into the
+ * hold queue in case auditd reconnects.
+ */
+static void auditd_reset(void)
+{
+ struct sk_buff *skb;
+
+ /* if it isn't already broken, break the connection */
+ rcu_read_lock();
+ if (auditd_conn.pid)
+ auditd_set(0, 0, NULL);
+ rcu_read_unlock();
+
+ /* flush all of the main and retry queues to the hold queue */
+ while ((skb = skb_dequeue(&audit_retry_queue)))
+ kauditd_hold_skb(skb);
+ while ((skb = skb_dequeue(&audit_queue)))
+ kauditd_hold_skb(skb);
+}
+
+/**
* auditd_send_unicast_skb - Send a record via unicast to auditd
* @skb: audit record
*
@@ -758,6 +751,7 @@ static int kauditd_thread(void *dummy)
NULL, kauditd_rehold_skb);
if (rc < 0) {
sk = NULL;
+ auditd_reset();
goto main_queue;
}

@@ -767,6 +761,7 @@ static int kauditd_thread(void *dummy)
NULL, kauditd_hold_skb);
if (rc < 0) {
sk = NULL;
+ auditd_reset();
goto main_queue;
}

@@ -775,16 +770,18 @@ main_queue:
* unicast, dump failed record sends to the retry queue; if
* sk == NULL due to previous failures we will just do the
* multicast send and move the record to the retry queue */
- kauditd_send_queue(sk, portid, &audit_queue, 1,
- kauditd_send_multicast_skb,
- kauditd_retry_skb);
+ rc = kauditd_send_queue(sk, portid, &audit_queue, 1,
+ kauditd_send_multicast_skb,
+ kauditd_retry_skb);
+ if (sk == NULL || rc < 0)
+ auditd_reset();
+ sk = NULL;

/* drop our netns reference, no auditd sends past this line */
if (net) {
put_net(net);
net = NULL;
}
- sk = NULL;

/* we have processed all the queues so wake everyone */
wake_up(&audit_backlog_wait);


2017-04-19 15:31:14

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 17/69] CIFS: reconnect thread reschedules itself

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Germano Percossi <[email protected]>

commit 18ea43113f5b74a97dd4be9bddbac10d68b1a6ce upstream.

In case of error, smb2_reconnect_server now reschedules itself
with a delay, to avoid being too aggressive.
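
A condensed sketch of the resulting loop, using only identifiers from
the diff below (locking and the rest of the function are elided):

    list_for_each_entry_safe(tcon, tcon2, &tmp_list, rlist) {
        rc = smb2_reconnect(SMB2_INTERNAL_CMD, tcon);
        if (!rc)
            cifs_reopen_persistent_handles(tcon);
        else
            resched = true;  /* remember that at least one tcon failed */
        list_del_init(&tcon->rlist);
        cifs_put_tcon(tcon);
    }

    if (resched)             /* try again in ~2s instead of giving up */
        queue_delayed_work(cifsiod_wq, &server->reconnect, 2 * HZ);
    mutex_unlock(&server->reconnect_mutex);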

Signed-off-by: Germano Percossi <[email protected]>
Reviewed-by: Pavel Shilovsky <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/cifs/smb2pdu.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -1987,6 +1987,9 @@ void smb2_reconnect_server(struct work_s
struct cifs_tcon *tcon, *tcon2;
struct list_head tmp_list;
int tcon_exist = false;
+ int rc;
+ int resched = false;
+

/* Prevent simultaneous reconnects that can corrupt tcon->rlist list */
mutex_lock(&server->reconnect_mutex);
@@ -2014,13 +2017,18 @@ void smb2_reconnect_server(struct work_s
spin_unlock(&cifs_tcp_ses_lock);

list_for_each_entry_safe(tcon, tcon2, &tmp_list, rlist) {
- if (!smb2_reconnect(SMB2_INTERNAL_CMD, tcon))
+ rc = smb2_reconnect(SMB2_INTERNAL_CMD, tcon);
+ if (!rc)
cifs_reopen_persistent_handles(tcon);
+ else
+ resched = true;
list_del_init(&tcon->rlist);
cifs_put_tcon(tcon);
}

cifs_dbg(FYI, "Reconnecting tcons finished\n");
+ if (resched)
+ queue_delayed_work(cifsiod_wq, &server->reconnect, 2 * HZ);
mutex_unlock(&server->reconnect_mutex);

/* now we can safely release srv struct */


2017-04-19 14:38:04

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 01/69] cgroup, kthread: close race window where new kthreads can be migrated to non-root cgroups

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Tejun Heo <[email protected]>

commit 77f88796cee819b9c4562b0b6b44691b3b7755b1 upstream.

Creation of a kthread goes through a couple of interlocked stages between
the kthread itself and its creator. Once the new kthread starts
running, it initializes itself and wakes up the creator. The creator
then can further configure the kthread and then let it start doing its
job by waking it up.

In this configuration-by-creator stage, the creator is the only one
that can wake it up but the kthread is visible to userland. When
altering the kthread's attributes from userland is allowed, this is
fine; however, for cases where CPU affinity is critical,
kthread_bind() is used to first disable affinity changes from userland
and then set the affinity. This also prevents the kthread from being
migrated into non-root cgroups as that can affect the CPU affinity and
many other things.

Unfortunately, the cgroup side of protection is racy. While the
PF_NO_SETAFFINITY flag prevents further migrations, userland can win
the race before the creator sets the flag with kthread_bind() and put
the kthread in a non-root cgroup, which can lead to all sorts of
problems including incorrect CPU affinity and starvation.

This bug got triggered by userland which periodically tries to migrate
all processes in the root cpuset cgroup to a non-root one. Per-cpu
workqueue workers got caught while being created and ended up with
incorrect CPU affinity, breaking concurrency management and sometimes
stalling workqueue execution.

This patch adds task->no_cgroup_migration, which prevents the task
from being migrated by userland. kthreadd starts with the flag set,
making every child kthread start in the root cgroup with migration
disallowed. The flag is cleared after the kthread finishes
initialization, by which time PF_NO_SETAFFINITY is set if the kthread
should stay in the root cgroup.
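
A condensed view of how the new bit is used, assembled from the hunks
below (not literal kernel source):

    /* kernel/kthread.c: kthreadd() runs with the bit set, so every
     * kthread it forks starts out protected from userland migration */
    cgroup_init_kthreadd();      /* current->no_cgroup_migration = 1 */

    /* kernel/kthread.c: cleared by the new kthread once it has finished
     * initializing; by then the creator has set PF_NO_SETAFFINITY if
     * the kthread must stay in the root cgroup */
    cgroup_kthread_ready();      /* current->no_cgroup_migration = 0 */

    /* kernel/cgroup.c: userland-initiated migration is refused while
     * either protection is in effect */
    if (tsk->no_cgroup_migration || (tsk->flags & PF_NO_SETAFFINITY))
        ret = -EINVAL;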

It'd be better to wait for the initialization instead of failing, but
I couldn't think of a way of implementing that without adding either a
new PF flag or sleeping and retrying from the waiting side. Even if
userland depends on changing the cgroup membership of a kthread, it
either has to be synchronized with kthread_create() or repeated
periodically, so it's unlikely that this would break anything.

v2: Switch to a simpler implementation using a new task_struct bit
field suggested by Oleg.

Signed-off-by: Tejun Heo <[email protected]>
Suggested-by: Oleg Nesterov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Peter Zijlstra (Intel) <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Reported-and-debugged-by: Chris Mason <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/cgroup.h | 21 +++++++++++++++++++++
include/linux/sched.h | 4 ++++
kernel/cgroup.c | 9 +++++----
kernel/kthread.c | 3 +++
4 files changed, 33 insertions(+), 4 deletions(-)

--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -570,6 +570,25 @@ static inline void pr_cont_cgroup_path(s
pr_cont_kernfs_path(cgrp->kn);
}

+static inline void cgroup_init_kthreadd(void)
+{
+ /*
+ * kthreadd is inherited by all kthreads, keep it in the root so
+ * that the new kthreads are guaranteed to stay in the root until
+ * initialization is finished.
+ */
+ current->no_cgroup_migration = 1;
+}
+
+static inline void cgroup_kthread_ready(void)
+{
+ /*
+ * This kthread finished initialization. The creator should have
+ * set PF_NO_SETAFFINITY if this kthread should stay in the root.
+ */
+ current->no_cgroup_migration = 0;
+}
+
#else /* !CONFIG_CGROUPS */

struct cgroup_subsys_state;
@@ -590,6 +609,8 @@ static inline void cgroup_free(struct ta

static inline int cgroup_init_early(void) { return 0; }
static inline int cgroup_init(void) { return 0; }
+static inline void cgroup_init_kthreadd(void) {}
+static inline void cgroup_kthread_ready(void) {}

static inline bool task_under_cgroup_hierarchy(struct task_struct *task,
struct cgroup *ancestor)
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1620,6 +1620,10 @@ struct task_struct {
#ifdef CONFIG_COMPAT_BRK
unsigned brk_randomized:1;
#endif
+#ifdef CONFIG_CGROUPS
+ /* disallow userland-initiated cgroup migration */
+ unsigned no_cgroup_migration:1;
+#endif

unsigned long atomic_flags; /* Flags needing atomic access. */

--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2920,11 +2920,12 @@ static ssize_t __cgroup_procs_write(stru
tsk = tsk->group_leader;

/*
- * Workqueue threads may acquire PF_NO_SETAFFINITY and become
- * trapped in a cpuset, or RT worker may be born in a cgroup
- * with no rt_runtime allocated. Just say no.
+ * kthreads may acquire PF_NO_SETAFFINITY during initialization.
+ * If userland migrates such a kthread to a non-root cgroup, it can
+ * become trapped in a cpuset, or RT kthread may be born in a
+ * cgroup with no rt_runtime allocated. Just say no.
*/
- if (tsk == kthreadd_task || (tsk->flags & PF_NO_SETAFFINITY)) {
+ if (tsk->no_cgroup_migration || (tsk->flags & PF_NO_SETAFFINITY)) {
ret = -EINVAL;
goto out_unlock_rcu;
}
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -18,6 +18,7 @@
#include <linux/freezer.h>
#include <linux/ptrace.h>
#include <linux/uaccess.h>
+#include <linux/cgroup.h>
#include <trace/events/sched.h>

static DEFINE_SPINLOCK(kthread_create_lock);
@@ -223,6 +224,7 @@ static int kthread(void *_create)

ret = -EINTR;
if (!test_bit(KTHREAD_SHOULD_STOP, &self->flags)) {
+ cgroup_kthread_ready();
__kthread_parkme(self);
ret = threadfn(data);
}
@@ -536,6 +538,7 @@ int kthreadd(void *unused)
set_mems_allowed(node_states[N_MEMORY]);

current->flags |= PF_NOFREEZE;
+ cgroup_init_kthreadd();

for (;;) {
set_current_state(TASK_INTERRUPTIBLE);


2017-04-19 15:32:25

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 13/69] drm/nouveau/kms/nv50: fix double dma_fence_put() when destroying plane state

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ben Skeggs <[email protected]>

commit 2907e8670b6ef253bffb33bf47fd2182969cf2a0 upstream.

When atomic support was added to nouveau, the DRM core did not drop
the plane state's fence reference when destroying plane state, so
nouveau did it itself.

However, later in the same merge window, a commit (drm/fence: add
in-fences support) was merged that added this to the core, leaving
nouveau with a double dma_fence_put() and use-after-frees of the fence
object.
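
For context, a rough paraphrase of the core helper after the in-fences
work (this is my reading of the DRM helper, not part of this patch):

    /* drivers/gpu/drm/drm_atomic_helper.c (paraphrased) */
    void __drm_atomic_helper_plane_destroy_state(struct drm_plane_state *state)
    {
        /* ... */
        if (state->fence)
            dma_fence_put(state->fence);  /* core now drops the reference */
        /* ... */
    }

    /* hence nv50_wndw_atomic_destroy_state() must no longer call
     * dma_fence_put(asyw->state.fence) itself, which is exactly what the
     * hunks below remove */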

Signed-off-by: Ben Skeggs <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/nouveau/nv50_display.c | 2 --
1 file changed, 2 deletions(-)

--- a/drivers/gpu/drm/nouveau/nv50_display.c
+++ b/drivers/gpu/drm/nouveau/nv50_display.c
@@ -995,7 +995,6 @@ nv50_wndw_atomic_destroy_state(struct dr
{
struct nv50_wndw_atom *asyw = nv50_wndw_atom(state);
__drm_atomic_helper_plane_destroy_state(&asyw->state);
- dma_fence_put(asyw->state.fence);
kfree(asyw);
}

@@ -1007,7 +1006,6 @@ nv50_wndw_atomic_duplicate_state(struct
if (!(asyw = kmalloc(sizeof(*asyw), GFP_KERNEL)))
return NULL;
__drm_atomic_helper_plane_duplicate_state(plane, &asyw->state);
- asyw->state.fence = NULL;
asyw->interval = 1;
asyw->sema = armw->sema;
asyw->ntfy = armw->ntfy;


2017-04-19 15:33:16

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 12/69] drm/nouveau/kms/nv50: fix setting of HeadSetRasterVertBlankDmi method

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ben Skeggs <[email protected]>

commit d639fbcc102745187f747a21bdcffcf628e174c8 upstream.

Signed-off-by: Ben Skeggs <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/nouveau/nv50_display.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)

--- a/drivers/gpu/drm/nouveau/nv50_display.c
+++ b/drivers/gpu/drm/nouveau/nv50_display.c
@@ -2038,6 +2038,7 @@ nv50_head_atomic_check_mode(struct nv50_
u32 vbackp = (mode->vtotal - mode->vsync_end) * vscan / ilace;
u32 hfrontp = mode->hsync_start - mode->hdisplay;
u32 vfrontp = (mode->vsync_start - mode->vdisplay) * vscan / ilace;
+ u32 blankus;
struct nv50_head_mode *m = &asyh->mode;

m->h.active = mode->htotal;
@@ -2051,9 +2052,10 @@ nv50_head_atomic_check_mode(struct nv50_
m->v.blanks = m->v.active - vfrontp - 1;

/*XXX: Safe underestimate, even "0" works */
- m->v.blankus = (m->v.active - mode->vdisplay - 2) * m->h.active;
- m->v.blankus *= 1000;
- m->v.blankus /= mode->clock;
+ blankus = (m->v.active - mode->vdisplay - 2) * m->h.active;
+ blankus *= 1000;
+ blankus /= mode->clock;
+ m->v.blankus = blankus;

if (mode->flags & DRM_MODE_FLAG_INTERLACE) {
m->v.blank2e = m->v.active + m->v.synce + vbackp;


2017-04-19 14:38:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.10 10/69] drm/nouveau/mpeg: mthd returns true on success now

4.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ilia Mirkin <[email protected]>

commit 83bce9c2baa51e439480a713119a73d3c8b61083 upstream.

Signed-off-by: Ilia Mirkin <[email protected]>
Fixes: 590801c1a3 ("drm/nouveau/mpeg: remove dependence on namedb/engctx lookup")
Signed-off-by: Ben Skeggs <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c | 2 +-
drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c
@@ -198,7 +198,7 @@ nv31_mpeg_intr(struct nvkm_engine *engin
}

if (type == 0x00000010) {
- if (!nv31_mpeg_mthd(mpeg, mthd, data))
+ if (nv31_mpeg_mthd(mpeg, mthd, data))
show &= ~0x01000000;
}
}
--- a/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
@@ -172,7 +172,7 @@ nv44_mpeg_intr(struct nvkm_engine *engin
}

if (type == 0x00000010) {
- if (!nv44_mpeg_mthd(subdev->device, mthd, data))
+ if (nv44_mpeg_mthd(subdev->device, mthd, data))
show &= ~0x01000000;
}
}


2017-04-19 20:38:34

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 4.10 00/69] 4.10.12-stable review

On 04/19/2017 08:36 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.10.12 release.
> There are 69 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri Apr 21 14:15:37 UTC 2017.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.10.12-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.10.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Compiled and booted on my test system. No dmesg regressions.

thanks,
-- Shuah


2017-04-19 23:22:14

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 4.10 00/69] 4.10.12-stable review

On Wed, Apr 19, 2017 at 04:36:29PM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.10.12 release.
> There are 69 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri Apr 21 14:15:37 UTC 2017.
> Anything received after that time might be too late.
>

Build results:
total: 149 pass: 149 fail: 0
Qemu test results:
total: 122 pass: 122 fail: 0

Details are available at http://kerneltests.org/builders.

Guenter

2017-04-20 06:30:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 4.10 00/69] 4.10.12-stable review

On Wed, Apr 19, 2017 at 04:22:09PM -0700, Guenter Roeck wrote:
> On Wed, Apr 19, 2017 at 04:36:29PM +0200, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.10.12 release.
> > There are 69 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri Apr 21 14:15:37 UTC 2017.
> > Anything received after that time might be too late.
> >
>
> Build results:
> total: 149 pass: 149 fail: 0
> Qemu test results:
> total: 122 pass: 122 fail: 0
>
> Details are available at http://kerneltests.org/builders.

Thanks for testing all of these and letting me know.

greg k-h

2017-04-20 06:34:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 4.10 00/69] 4.10.12-stable review

On Wed, Apr 19, 2017 at 02:38:21PM -0600, Shuah Khan wrote:
> On 04/19/2017 08:36 AM, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.10.12 release.
> > There are 69 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri Apr 21 14:15:37 UTC 2017.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.10.12-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.10.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
> >
>
> Compiled and booted on my test system. No dmesg regressions.

Thanks for testing all of these and letting me know.

greg k-h