2016-03-02 02:09:49

by Greg Kroah-Hartman

Subject: [PATCH 3.14 000/130] 3.14.63-stable review

This is the start of the stable review cycle for the 3.14.63 release.
There are 130 patches in this series, all of which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Thu Mar 3 23:44:39 UTC 2016.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v3.x/stable-review/patch-3.14.63-rc1.gz
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 3.14.63-rc1

Oren Givon <[email protected]>
iwlwifi: update and fix 7265 series PCI IDs

Konrad Rzeszutek Wilk <[email protected]>
xen/pcifront: Fix mysterious crashes when NUMA locality information was extracted.

Al Viro <[email protected]>
do_last(): don't let a bogus return value from ->open() et.al. to confuse us

Simon Guinot <[email protected]>
kernel/resource.c: fix muxed resource handling in __request_region()

Stefan Hajnoczi <[email protected]>
sunrpc/cache: fix off-by-one in qword_get()

Steven Rostedt (Red Hat) <[email protected]>
tracing: Fix showing function event in available_events

Christian Borntraeger <[email protected]>
KVM: async_pf: do not warn on page allocation failures

Benjamin Coddington <[email protected]>
NFSv4: Fix a dentry leak on alias use

Christoph Hellwig <[email protected]>
nfs: fix nfs_size_to_loff_t

Sebastian Andrzej Siewior <[email protected]>
PCI/AER: Flush workqueue on device remove to avoid use-after-free

Tejun Heo <[email protected]>
libata: fix sff host state machine locking while polling

Tejun Heo <[email protected]>
Revert "workqueue: make sure delayed work run in local cpu"

Johannes Berg <[email protected]>
rfkill: fix rfkill_fop_read wait_event usage

Oliver Neukum <[email protected]>
cdc-acm:exclude Samsung phone 04e8:685d

Ilya Dryomov <[email protected]>
libceph: don't bail early from try_read() when skipping a message

Peter Rosin <[email protected]>
hwmon: (ads1015) Handle negative conversion values correctly

Mike Marciniszyn <[email protected]>
IB/qib: fix mcast detach when qp not attached

Insu Yun <[email protected]>
ACPI / PCI / hotplug: unlock in error path in acpiphp_enable_slot()

Alex Deucher <[email protected]>
drm/radeon/pm: adjust display configuration after powerstate

Rasmus Villemoes <[email protected]>
drm/radeon: use post-decrement in error handling

Gerd Hoffmann <[email protected]>
drm/qxl: use kmalloc_array to alloc reloc_info in qxl_process_single_command

Jani Nikula <[email protected]>
drm/i915/dp: fall back to 18 bpp when sink capability is unknown

Nicolai Hähnle <[email protected]>
drm/radeon: hold reference to fences in radeon_sa_bo_new

Alex Deucher <[email protected]>
drm/radeon: clean up fujitsu quirks

Rob Clark <[email protected]>
drm/vmwgfx: respect 'nomodeset'

Dmitry V. Levin <[email protected]>
sparc64: fix incorrect sign extension in sys_sparc64_personality

Borislav Petkov <[email protected]>
EDAC: Robustify workqueues destruction

zengtao <[email protected]>
cputime: Prevent 32bit overflow in time[val|spec]_to_cputime()

Linus Walleij <[email protected]>
mmc: mmci: fix an ages old detection error

Adrian Hunter <[email protected]>
mmc: sdhci: Fix sdhci_runtime_pm_bus_on/off()

Adrian Hunter <[email protected]>
mmc: sdio: Fix invalid vdd in voltage switch power cycle

Richard Cochran <[email protected]>
posix-clock: Fix return code on the poll method's error path

Mikulas Patocka <[email protected]>
dm snapshot: fix hung bios when copy error occurs

Mike Snitzer <[email protected]>
dm space map metadata: remove unused variable in brb_pop()

Mauro Carvalho Chehab <[email protected]>
tda1004x: only update the frontend properties if locked

Antonio Ospite <[email protected]>
gspca: ov534/topro: prevent a division by 0

Malcolm Priestley <[email protected]>
media: dvb-core: Don't force CAN_INVERSION_AUTO in oneshot mode

Vegard Nossum <[email protected]>
uml: fix hostfs mknod()

Vegard Nossum <[email protected]>
uml: flush stdout before forking

Stefan Haberland <[email protected]>
s390/dasd: fix refcount for PAV reassignment

Stefan Haberland <[email protected]>
s390/dasd: prevent incorrect length error under z/VM after PAV changes

Ard Biesheuvel <[email protected]>
s390: fix normalization bug in exception table sorting

Filipe Manana <[email protected]>
Btrfs: fix number of transaction units required to create symlink

Filipe Manana <[email protected]>
Btrfs: send, don't BUG_ON() when an empty symlink is found

Josef Bacik <[email protected]>
Btrfs: igrab inode in writepage

Anand Jain <[email protected]>
Btrfs: add missing brelse when superblock checksum fails

Russell King <[email protected]>
scripts: recordmcount: break hardlinks

Prarit Bhargava <[email protected]>
powercap / RAPL: fix BIOS lock check

James Bottomley <[email protected]>
ses: fix additional element traversal bug

James Bottomley <[email protected]>
ses: Fix problems with simple enclosures

Johannes Berg <[email protected]>
rfkill: copy the name into the rfkill struct

Kirill A. Shutemov <[email protected]>
vgaarb: fix signal handling in vga_get()

Guillaume Delbergue <[email protected]>
irqchip/versatile-fpga: Fix PCI IRQ mapping on Versatile PB

Joe Thornber <[email protected]>
dm btree: fix bufio buffer leaks in dm_btree_del() error path

Joe Thornber <[email protected]>
dm space map metadata: fix ref counting bug when bootstrapping a new space map

Mikulas Patocka <[email protected]>
sata_sil: disable trim

Sasha Levin <[email protected]>
sched/core: Remove false-positive warning from wake_up_process()

Xunlei Pang <[email protected]>
sched/core: Clear the root_domain cpumasks in init_rootdomain()

Mirza Krak <[email protected]>
can: sja1000: clear interrupts on start

Quentin Casasnovas <[email protected]>
RDS: fix race condition when sending a message on unbound socket

Johannes Berg <[email protected]>
mac80211: mesh: fix call_rcu() usage

Suman Anna <[email protected]>
virtio: fix memory leak of virtio ida cache layers

Steven Rostedt (Red Hat) <[email protected]>
ring-buffer: Update read stamp with first real commit on page

Jan Engelhardt <[email protected]>
target: fix COMPARE_AND_WRITE non zero SGL offset data corruption

Nicholas Bellinger <[email protected]>
target: Fix race for SCF_COMPARE_AND_WRITE_POST checking

Jan Kara <[email protected]>
vfs: Avoid softlockups with sendfile(2)

Vineet Gupta <[email protected]>
ARC: dw2 unwind: Remove falllback linear search thru FDE entries

Kees Cook <[email protected]>
mac: validate mac_partition is within sector

Luca Porzio <[email protected]>
mmc: remove bondage between REQ_META and reliable write

K. Y. Srinivasan <[email protected]>
storvsc: Don't set the SRB_FLAGS_QUEUE_ACTION_ENABLE flag

[email protected] <[email protected]>
megaraid_sas : SMAP restriction--do not access user memory from IOCTL code

[email protected] <[email protected]>
megaraid_sas: Do not use PAGE_SIZE for max_sectors

Andy Shevchenko <[email protected]>
dmaengine: dw: convert to __ffs()

Valentin Rothberg <[email protected]>
wm831x_power: Use IRQF_ONESHOT to request threaded IRQs

Dan Carpenter <[email protected]>
devres: fix a for loop bounds check

Andrey Ryabinin <[email protected]>
lockd: create NSM handles per net namespace

Alex Deucher <[email protected]>
drm/radeon: make rv770_set_sw_state failures non-fatal

Alex Deucher <[email protected]>
drm/radeon: unconditionally set sysfs_initialized

NeilBrown <[email protected]>
async_tx: use GFP_NOWAIT rather than GFP_IO

Roman Volkov <[email protected]>
clocksource/drivers/vt8500: Increase the minimum delta

Roman Volkov <[email protected]>
dts: vt8500: Add SDHC node to DTS file for WM8650

Thomas Gleixner <[email protected]>
genirq: Prevent chip buslock deadlock

Peter Zijlstra <[email protected]>
sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks

Peter Zijlstra <[email protected]>
sched,dl: Remove return value from pull_dl_task()

Peter Zijlstra <[email protected]>
sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks

Peter Zijlstra <[email protected]>
sched,rt: Remove return value from pull_rt_task()

Peter Zijlstra <[email protected]>
sched: Allow balance callbacks for check_class_changed()

Peter Zijlstra <[email protected]>
sched: Replace post_schedule with a balance callback list

Peter Zijlstra <[email protected]>
sched: Clean up idle task SMP logic

Hannes Frederic Sowa <[email protected]>
unix: correctly track in-flight fds in sending process user_struct

Manish Chopra <[email protected]>
bnx2x: Don't notify about scratchpad parities

Olga Kornievskaia <[email protected]>
Failing to send a CLOSE if file is opened WRONLY and server reboots on a 4.x mount

Christophe Leroy <[email protected]>
splice: sendfile() at once fails for big files

Sasha Levin <[email protected]>
RDS: verify the underlying transport exists before creating a connection

Marcelo Leitner <[email protected]>
ipv6: addrconf: validate new MTU before applying it

James Hogan <[email protected]>
MIPS: KVM: Uninit VCPU in vcpu_create error path

James Hogan <[email protected]>
MIPS: KVM: Fix CACHE immediate offset sign extension

James Hogan <[email protected]>
MIPS: KVM: Fix ASID restoration logic

Al Viro <[email protected]>
lock_parent: don't step on stale ->d_parent of all-but-freed one

Linus Torvalds <[email protected]>
dcache: add missing lockdep annotation

Al Viro <[email protected]>
dentry_kill() doesn't need the second argument now

Al Viro <[email protected]>
dealing with the rest of shrink_dentry_list() livelock

Al Viro <[email protected]>
shrink_dentry_list(): take parent's ->d_lock earlier

Al Viro <[email protected]>
expand dentry_kill(dentry, 0) in shrink_dentry_list()

Al Viro <[email protected]>
split dentry_kill()

Al Viro <[email protected]>
lift the "already marked killed" case into shrink_dentry_list()

Hariprasad S <[email protected]>
iw_cxgb3: Fix incorrectly returning error on success

Corey Wright <[email protected]>
proc: Fix ptrace-based permission checks for accessing task maps

Bjørn Mork <[email protected]>
USB: option: add "4G LTE usb-modem U901"

Andrey Skvortsov <[email protected]>
USB: option: add support for SIM7100E

Ken Lin <[email protected]>
USB: cp210x: add IDs for GE B650V3 and B850V3 boards

Gerhard Uttenthaler <[email protected]>
can: ems_usb: Fix possible tx overflow

Nikolay Borisov <[email protected]>
dm thin: fix race condition when destroying thin pool workqueue

Joe Thornber <[email protected]>
dm thin metadata: fix bug when taking a metadata snapshot

Mike Snitzer <[email protected]>
dm thin: restore requested 'error_if_no_space' setting on OODS to WRITE transition

Ingo Molnar <[email protected]>
efi: Disable interrupts around EFI calls, not in the epilog/prolog calls

Dave Airlie <[email protected]>
drm/radeon: fix hotplug race at startup

Kamal Mostafa <[email protected]>
tools: Add a "make all" rule

Kent Overstreet <[email protected]>
bcache: Change refill_dirty() to always scan entire disk if necessary

Stefan Bader <[email protected]>
bcache: prevent crash on changing writeback_running

Zheng Liu <[email protected]>
bcache: unregister reboot notifier if bcache fails to unregister device

Al Viro <[email protected]>
bcache: fix a leak in bch_cached_dev_run()

Zheng Liu <[email protected]>
bcache: clear BCACHE_DEV_UNLINK_DONE flag when attaching a backing device

Kent Overstreet <[email protected]>
bcache: Add a cond_resched() call to gc

Zheng Liu <[email protected]>
bcache: fix a livelock when we cause a huge number of cache misses

Phil Sutter <[email protected]>
netfilter: ip6t_SYNPROXY: fix NULL pointer dereference

lucien <[email protected]>
netfilter: ipt_rpfilter: remove the nh_scope test in rpfilter_lookup_reverse

Mirek Kratochvil <[email protected]>
netfilter: nf_tables: fix bogus warning in nft_data_uninit()

Egbert Eich <[email protected]>
drm/ast: Initialized data needed to map fbdev memory

Steven Rostedt (Red Hat) <[email protected]>
tracepoints: Do not trace when cpu is offline


-------------

Diffstat:

Makefile | 4 +-
arch/arc/kernel/unwind.c | 37 +----
arch/arm/boot/dts/wm8650.dtsi | 9 ++
arch/mips/kvm/kvm_locore.S | 16 ++-
arch/mips/kvm/kvm_mips.c | 5 +-
arch/mips/kvm/kvm_mips_emul.c | 2 +-
arch/s390/mm/extable.c | 8 +-
arch/sparc/kernel/sys_sparc_64.c | 2 +-
arch/um/os-Linux/start_up.c | 2 +
arch/x86/platform/efi/efi.c | 7 +
arch/x86/platform/efi/efi_32.c | 11 +-
arch/x86/platform/efi/efi_64.c | 3 -
block/partitions/mac.c | 10 +-
crypto/async_tx/async_memcpy.c | 2 +-
crypto/async_tx/async_pq.c | 4 +-
crypto/async_tx/async_raid6_recov.c | 4 +-
crypto/async_tx/async_xor.c | 4 +-
drivers/ata/libata-sff.c | 32 ++---
drivers/ata/sata_sil.c | 3 +
drivers/clocksource/vt8500_timer.c | 6 +-
drivers/dma/dw/core.c | 12 +-
drivers/edac/edac_device.c | 11 +-
drivers/edac/edac_mc.c | 14 +-
drivers/edac/edac_pci.c | 9 +-
drivers/gpu/drm/ast/ast_drv.h | 1 +
drivers/gpu/drm/ast/ast_fb.c | 7 +
drivers/gpu/drm/ast/ast_main.c | 1 +
drivers/gpu/drm/ast/ast_mode.c | 2 +
drivers/gpu/drm/i915/intel_display.c | 20 ++-
drivers/gpu/drm/qxl/qxl_ioctl.c | 3 +-
drivers/gpu/drm/radeon/radeon_atombios.c | 12 +-
drivers/gpu/drm/radeon/radeon_irq_kms.c | 5 +
drivers/gpu/drm/radeon/radeon_pm.c | 8 +-
drivers/gpu/drm/radeon/radeon_sa.c | 5 +
drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
drivers/gpu/drm/radeon/rv770_dpm.c | 2 +-
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 7 +
drivers/gpu/vga/vgaarb.c | 6 +-
drivers/hwmon/ads1015.c | 2 +-
drivers/infiniband/hw/cxgb3/iwch_cm.c | 4 +-
drivers/infiniband/hw/qib/qib_verbs_mcast.c | 35 ++---
drivers/irqchip/irq-versatile-fpga.c | 5 +
drivers/md/bcache/btree.c | 5 +-
drivers/md/bcache/super.c | 11 +-
drivers/md/bcache/writeback.c | 37 ++++-
drivers/md/bcache/writeback.h | 3 +-
drivers/md/dm-exception-store.h | 2 +-
drivers/md/dm-snap-persistent.c | 5 +-
drivers/md/dm-snap-transient.c | 4 +-
drivers/md/dm-snap.c | 20 +--
drivers/md/dm-thin-metadata.c | 6 +
drivers/md/dm-thin.c | 5 +-
drivers/md/persistent-data/dm-btree.c | 16 ++-
drivers/md/persistent-data/dm-space-map-metadata.c | 29 ++--
drivers/media/dvb-core/dvb_frontend.c | 6 +-
drivers/media/dvb-frontends/tda1004x.c | 9 ++
drivers/media/usb/gspca/ov534.c | 9 +-
drivers/media/usb/gspca/topro.c | 6 +-
drivers/mmc/card/block.c | 11 +-
drivers/mmc/core/sdio.c | 2 +-
drivers/mmc/host/mmci.c | 2 +-
drivers/mmc/host/sdhci.c | 4 +-
drivers/net/can/sja1000/sja1000.c | 3 +
drivers/net/can/usb/ems_usb.c | 14 +-
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h | 11 +-
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 20 ++-
drivers/net/wireless/iwlwifi/pcie/drv.c | 5 +-
drivers/pci/hotplug/acpiphp_glue.c | 4 +-
drivers/pci/pcie/aer/aerdrv.c | 4 +-
drivers/pci/pcie/aer/aerdrv.h | 1 -
drivers/pci/pcie/aer/aerdrv_core.c | 2 -
drivers/pci/xen-pcifront.c | 10 +-
drivers/power/wm831x_power.c | 6 +-
drivers/powercap/intel_rapl.c | 7 +-
drivers/s390/block/dasd_alias.c | 23 ++-
drivers/scsi/megaraid/megaraid_sas.h | 2 +
drivers/scsi/megaraid/megaraid_sas_base.c | 15 +-
drivers/scsi/ses.c | 30 +++-
drivers/scsi/storvsc_drv.c | 3 +-
drivers/target/target_core_sbc.c | 17 ++-
drivers/target/target_core_transport.c | 14 +-
drivers/usb/class/cdc-acm.c | 5 +
drivers/usb/serial/cp210x.c | 2 +
drivers/usb/serial/option.c | 9 ++
drivers/virtio/virtio.c | 1 +
fs/btrfs/disk-io.c | 1 +
fs/btrfs/inode.c | 21 ++-
fs/btrfs/send.c | 16 ++-
fs/dcache.c | 155 +++++++++++++++------
fs/hostfs/hostfs_kern.c | 4 +-
fs/lockd/host.c | 7 +-
fs/lockd/mon.c | 36 +++--
fs/lockd/netns.h | 1 +
fs/lockd/svc.c | 1 +
fs/lockd/svc4proc.c | 2 +-
fs/lockd/svcproc.c | 2 +-
fs/namei.c | 4 +
fs/nfs/nfs4proc.c | 4 +-
fs/nfs/nfs4state.c | 2 +-
fs/proc/task_mmu.c | 4 +-
fs/proc/task_nommu.c | 2 +-
fs/splice.c | 13 +-
include/asm-generic/cputime_nsecs.h | 5 +-
include/linux/enclosure.h | 4 +
include/linux/lockd/lockd.h | 9 +-
include/linux/nfs_fs.h | 4 +-
include/linux/tracepoint.h | 6 +
include/net/af_unix.h | 4 +-
include/net/scm.h | 1 +
include/target/target_core_base.h | 2 +-
kernel/irq/manage.c | 6 +-
kernel/resource.c | 5 +-
kernel/sched/core.c | 69 ++++++---
kernel/sched/deadline.c | 65 ++++++---
kernel/sched/idle_task.c | 9 +-
kernel/sched/rt.c | 71 +++++-----
kernel/sched/sched.h | 19 ++-
kernel/time/posix-clock.c | 4 +-
kernel/trace/ring_buffer.c | 12 +-
kernel/trace/trace_events.c | 3 +-
kernel/workqueue.c | 8 +-
lib/devres.c | 2 +-
net/ceph/messenger.c | 4 +-
net/core/scm.c | 7 +
net/ipv4/netfilter/ipt_rpfilter.c | 4 +-
net/ipv6/addrconf.c | 17 ++-
net/ipv6/netfilter/ip6t_SYNPROXY.c | 18 +--
net/mac80211/mesh_pathtbl.c | 8 +-
net/netfilter/nf_tables_api.c | 4 +-
net/rds/send.c | 4 +-
net/rfkill/core.c | 22 +--
net/sunrpc/cache.c | 2 +-
net/unix/af_unix.c | 4 +-
net/unix/garbage.c | 8 +-
scripts/recordmcount.c | 14 ++
tools/Makefile | 9 ++
virt/kvm/async_pf.c | 2 +-
137 files changed, 935 insertions(+), 519 deletions(-)



2016-03-01 23:47:35

by Greg Kroah-Hartman

Subject: [PATCH 3.14 011/130] bcache: prevent crash on changing writeback_running

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Stefan Bader <[email protected]>

commit 8d16ce540c94c9d366eb36fc91b7154d92d6397b upstream.

Added a safeguard in the shutdown case. At least while not being
attached it is also possible to trigger a kernel bug by writing into
writeback_running. This change adds the same check before trying to
wake up the thread for that case.

Signed-off-by: Stefan Bader <[email protected]>
Cc: Kent Overstreet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/bcache/writeback.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -63,7 +63,8 @@ static inline bool should_writeback(stru

static inline void bch_writeback_queue(struct cached_dev *dc)
{
- wake_up_process(dc->writeback_thread);
+ if (!IS_ERR_OR_NULL(dc->writeback_thread))
+ wake_up_process(dc->writeback_thread);
}

static inline void bch_writeback_add(struct cached_dev *dc)
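
To illustrate the guard, here is a minimal standalone C sketch (not bcache code; ERR_PTR()/IS_ERR_OR_NULL() are re-implemented locally only to keep it self-contained, the kernel versions live in <linux/err.h>). The point is that dc->writeback_thread can legitimately be NULL (device not attached) or an ERR_PTR() value (kthread creation failed), and only a real task pointer may be passed to wake_up_process().

#include <stdio.h>

#define MAX_ERRNO 4095
#define ERR_PTR(err)      ((void *)(long)(err))
#define IS_ERR_OR_NULL(p) ((p) == NULL || \
                           (unsigned long)(p) >= (unsigned long)-MAX_ERRNO)

static void wake_up_process(void *task)        /* stand-in for the real one */
{
	printf("waking thread %p\n", task);
}

static void bch_writeback_queue(void *writeback_thread)
{
	if (!IS_ERR_OR_NULL(writeback_thread))     /* the check added above */
		wake_up_process(writeback_thread);
}

int main(void)
{
	bch_writeback_queue(NULL);                 /* not attached: skipped */
	bch_writeback_queue(ERR_PTR(-12));         /* kthread_create() failed: skipped */
	bch_writeback_queue((void *)0x1000);       /* plausible task pointer: woken */
	return 0;
}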


2016-03-01 23:49:49

by Greg Kroah-Hartman

Subject: [PATCH 3.14 006/130] bcache: fix a livelock when we cause a huge number of cache misses

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Zheng Liu <[email protected]>

commit 2ef9ccbfcb90cf84bdba320a571b18b05c41101b upstream.

Subject : [PATCH v2] bcache: fix a livelock in btree lock
Date : Wed, 25 Feb 2015 20:32:09 +0800 (02/25/2015 04:32:09 AM)

This commit tries to fix a livelock in bcache. This livelock can
happen when we cause a huge number of cache misses simultaneously.

When we get a cache miss, bcache will execute the following path.

->cached_dev_make_request()
->cached_dev_read()
->cached_lookup()
->bch->btree_map_keys()
->btree_root() <------------------------
->bch_btree_map_keys_recurse() |
->cache_lookup_fn() |
->cached_dev_cache_miss() |
->bch_btree_insert_check_key() -|
[If btree->seq is not equal to seq + 1, we should return
EINTR and traverse btree again.]

In the bch_btree_insert_check_key() function we first check the upgrade
flag (op->lock == -1), and when this flag is set we release the read
btree->lock and try to take the write btree->lock. While this write lock
is taken and released, btree->seq is monotonically increased so that
modifications made by other threads during the cache miss can be detected
(see btree.h:74). But if there are many cache misses caused by concurrent
requests, we can hit a livelock because btree->seq keeps being changed by
others, and no one can make progress.

This commit takes the write btree->lock when it encounters such a race
while traversing the btree. Although this sacrifices some scalability, it
ensures that only one thread at a time can modify the btree, so progress
is guaranteed.

Signed-off-by: Zheng Liu <[email protected]>
Tested-by: Joshua Schmid <[email protected]>
Tested-by: Eric Wheeler <[email protected]>
Cc: Joshua Schmid <[email protected]>
Cc: Zhu Yanhai <[email protected]>
Cc: Kent Overstreet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/bcache/btree.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -2037,8 +2037,10 @@ int bch_btree_insert_check_key(struct bt
rw_lock(true, b, b->level);

if (b->key.ptr[0] != btree_ptr ||
- b->seq != seq + 1)
+ b->seq != seq + 1) {
+ op->lock = b->level;
goto out;
+ }
}

SET_KEY_PTRS(check_key, 1);
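
A loose userspace analogy of the escalation the patch introduces (not bcache code; a pthread rwlock and a plain counter stand in for btree->lock and btree->seq, and the function names are invented): the first attempt is optimistic and can be invalidated by other threads bumping the sequence number, but the retry takes the write lock up front, so at least one thread is guaranteed to make progress even under a storm of cache misses.

#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t btree_lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned long btree_seq;      /* bumped whenever the write lock is taken */

/* Optimistic attempt: read-lock, sample seq, then upgrade. Returns false if
 * some other thread took the write lock in the window. */
static bool insert_check_key_optimistic(void)
{
	unsigned long seq;

	pthread_rwlock_rdlock(&btree_lock);
	seq = btree_seq;
	pthread_rwlock_unlock(&btree_lock);

	pthread_rwlock_wrlock(&btree_lock);
	btree_seq++;
	bool ok = (btree_seq == seq + 1);    /* mirrors the b->seq != seq + 1 test */
	if (ok) {
		/* ... insert the check key ... */
	}
	pthread_rwlock_unlock(&btree_lock);
	return ok;
}

/* Escalated attempt (what op->lock = b->level arranges for the retry):
 * take the write lock from the start, so no concurrent bump can invalidate us. */
static void insert_check_key_locked(void)
{
	pthread_rwlock_wrlock(&btree_lock);
	btree_seq++;
	/* ... insert the check key ... */
	pthread_rwlock_unlock(&btree_lock);
}

static void handle_cache_miss(void)
{
	if (!insert_check_key_optimistic())
		insert_check_key_locked();   /* costs concurrency, guarantees progress */
}

int main(void)
{
	handle_cache_miss();
	return 0;
}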


2016-03-01 23:50:37

by Greg Kroah-Hartman

Subject: [PATCH 3.14 027/130] expand dentry_kill(dentry, 0) in shrink_dentry_list()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit ff2fde9929feb2aef45377ce56b8b12df85dda69 upstream.

Result will be massaged to saner shape in the next commits. It is
ugly, no questions - the point of that one is to be a provably
equivalent transformation (and it might be worth splitting a bit
more).

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 30 +++++++++++++++++-------------
1 file changed, 17 insertions(+), 13 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -802,6 +802,7 @@ static void shrink_dentry_list(struct li
struct dentry *dentry, *parent;

while (!list_empty(list)) {
+ struct inode *inode;
dentry = list_entry(list->prev, struct dentry, d_lru);
spin_lock(&dentry->d_lock);
/*
@@ -829,23 +830,26 @@ static void shrink_dentry_list(struct li
continue;
}

- parent = dentry_kill(dentry, 0);
- /*
- * If dentry_kill returns NULL, we have nothing more to do.
- */
- if (!parent)
- continue;
-
- if (unlikely(parent == dentry)) {
- /*
- * trylocks have failed and d_lock has been held the
- * whole time, so it could not have been added to any
- * other lists. Just add it back to the shrink list.
- */
+ inode = dentry->d_inode;
+ if (inode && unlikely(!spin_trylock(&inode->i_lock))) {
d_shrink_add(dentry, list);
spin_unlock(&dentry->d_lock);
continue;
}
+
+ parent = NULL;
+ if (!IS_ROOT(dentry)) {
+ parent = dentry->d_parent;
+ if (unlikely(!spin_trylock(&parent->d_lock))) {
+ if (inode)
+ spin_unlock(&inode->i_lock);
+ d_shrink_add(dentry, list);
+ spin_unlock(&dentry->d_lock);
+ continue;
+ }
+ }
+
+ __dentry_kill(dentry);
/*
* We need to prune ancestors too. This is necessary to prevent
* quadratic behavior of shrink_dcache_parent(), but is also


2016-03-01 23:50:52

by Greg Kroah-Hartman

Subject: [PATCH 3.14 026/130] split dentry_kill()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit e55fd011549eae01a230e3cace6f4d031b6a3453 upstream.

... into trylocks and everything else. The latter (actual killing)
is __dentry_kill().

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 62 ++++++++++++++++++++++++++++++++++--------------------------
1 file changed, 36 insertions(+), 26 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -439,36 +439,12 @@ void d_drop(struct dentry *dentry)
}
EXPORT_SYMBOL(d_drop);

-/*
- * Finish off a dentry we've decided to kill.
- * dentry->d_lock must be held, returns with it unlocked.
- * If ref is non-zero, then decrement the refcount too.
- * Returns dentry requiring refcount drop, or NULL if we're done.
- */
-static struct dentry *
-dentry_kill(struct dentry *dentry, int unlock_on_failure)
- __releases(dentry->d_lock)
+static void __dentry_kill(struct dentry *dentry)
{
- struct inode *inode;
struct dentry *parent = NULL;
bool can_free = true;
-
- inode = dentry->d_inode;
- if (inode && !spin_trylock(&inode->i_lock)) {
-relock:
- if (unlock_on_failure) {
- spin_unlock(&dentry->d_lock);
- cpu_relax();
- }
- return dentry; /* try again with same dentry */
- }
if (!IS_ROOT(dentry))
parent = dentry->d_parent;
- if (parent && !spin_trylock(&parent->d_lock)) {
- if (inode)
- spin_unlock(&inode->i_lock);
- goto relock;
- }

/*
* The dentry is now unrecoverably dead to the world.
@@ -512,10 +488,44 @@ relock:
can_free = false;
}
spin_unlock(&dentry->d_lock);
-out:
if (likely(can_free))
dentry_free(dentry);
+}
+
+/*
+ * Finish off a dentry we've decided to kill.
+ * dentry->d_lock must be held, returns with it unlocked.
+ * If ref is non-zero, then decrement the refcount too.
+ * Returns dentry requiring refcount drop, or NULL if we're done.
+ */
+static struct dentry *
+dentry_kill(struct dentry *dentry, int unlock_on_failure)
+ __releases(dentry->d_lock)
+{
+ struct inode *inode = dentry->d_inode;
+ struct dentry *parent = NULL;
+
+ if (inode && unlikely(!spin_trylock(&inode->i_lock)))
+ goto failed;
+
+ if (!IS_ROOT(dentry)) {
+ parent = dentry->d_parent;
+ if (unlikely(!spin_trylock(&parent->d_lock))) {
+ if (inode)
+ spin_unlock(&inode->i_lock);
+ goto failed;
+ }
+ }
+
+ __dentry_kill(dentry);
return parent;
+
+failed:
+ if (unlock_on_failure) {
+ spin_unlock(&dentry->d_lock);
+ cpu_relax();
+ }
+ return dentry; /* try again with same dentry */
}

/*


2016-03-01 23:51:20

by Greg Kroah-Hartman

Subject: [PATCH 3.14 039/130] Failing to send a CLOSE if file is opened WRONLY and server reboots on a 4.x mount

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Olga Kornievskaia <[email protected]>

commit a41cbe86df3afbc82311a1640e20858c0cd7e065 upstream.

A test case is as the description says:
open(foobar, O_WRONLY);
sleep() --> reboot the server
close(foobar)

The bug is that in nfs4state.c, in nfs4_reclaim_open_state(), a few
lines before going to restart, there is
clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &state->flags).

NFS4CLNT_RECLAIM_NOGRACE is a flag for the client state, not the open
owner state. The value of NFS4CLNT_RECLAIM_NOGRACE is 4, which is the
value of NFS_O_WRONLY_STATE in nfs4_state->flags. So clearing it wipes
out that state, and when we go to close the file, “call_close” doesn’t get
set because the state flag is not set, and the CLOSE doesn’t go on the wire.

Signed-off-by: Olga Kornievskaia <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/nfs/nfs4state.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1482,7 +1482,7 @@ restart:
spin_unlock(&state->state_lock);
}
nfs4_put_open_state(state);
- clear_bit(NFS4CLNT_RECLAIM_NOGRACE,
+ clear_bit(NFS_STATE_RECLAIM_NOGRACE,
&state->flags);
spin_lock(&sp->so_lock);
goto restart;
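
A tiny standalone illustration of this bug class (not NFS code; the bit number 4 is taken from the commit message above, and the flag words are plain unsigned longs): using a bit number from one flag namespace on a word that belongs to another silently clears an unrelated flag.

#include <stdio.h>

#define NFS4CLNT_RECLAIM_NOGRACE 4   /* bit number meant for clp->cl_state     */
#define NFS_O_WRONLY_STATE       4   /* same number, but lives in state->flags */

int main(void)
{
	unsigned long state_flags = 1UL << NFS_O_WRONLY_STATE;  /* file opened WRONLY */

	/* The bug: a client-state bit number applied to the open-state word. */
	state_flags &= ~(1UL << NFS4CLNT_RECLAIM_NOGRACE);

	printf("WRONLY state still set: %s\n",
	       (state_flags & (1UL << NFS_O_WRONLY_STATE)) ? "yes"
	                                                   : "no -> CLOSE is skipped");
	return 0;
}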

2016-03-01 23:51:38

by Greg Kroah-Hartman

Subject: [PATCH 3.14 055/130] lockd: create NSM handles per net namespace

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andrey Ryabinin <[email protected]>

commit 0ad95472bf169a3501991f8f33f5147f792a8116 upstream.

Commit cb7323fffa85 ("lockd: create and use per-net NSM
RPC clients on MON/UNMON requests") introduced per-net
NSM RPC clients. Unfortunately this doesn't make any sense
without per-net nsm_handle.

E.g. the following scenario could happen:
Two hosts (X and Y) in different namespaces (A and B) share
the same nsm struct.

1. nsm_monitor(host_X) called => NSM rpc client created,
nsm->sm_monitored bit set.
2. nsm_monitor(host_Y) called => nsm->sm_monitored already set,
we just exit. Thus in namespace B ln->nsm_clnt == NULL.
3. host X destroyed => nsm->sm_count decremented to 1
4. host Y destroyed => nsm_unmonitor() => nsm_mon_unmon() => NULL-ptr
dereference of *ln->nsm_clnt

So this can be fixed by making the nsm_handles list per-net
instead of global. Thus different net namespaces will not be able to
share the same nsm_handle.

Signed-off-by: Andrey Ryabinin <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/lockd/host.c | 7 ++++---
fs/lockd/mon.c | 36 ++++++++++++++++++++++--------------
fs/lockd/netns.h | 1 +
fs/lockd/svc.c | 1 +
fs/lockd/svc4proc.c | 2 +-
fs/lockd/svcproc.c | 2 +-
include/linux/lockd/lockd.h | 9 ++++++---
7 files changed, 36 insertions(+), 22 deletions(-)

--- a/fs/lockd/host.c
+++ b/fs/lockd/host.c
@@ -116,7 +116,7 @@ static struct nlm_host *nlm_alloc_host(s
atomic_inc(&nsm->sm_count);
else {
host = NULL;
- nsm = nsm_get_handle(ni->sap, ni->salen,
+ nsm = nsm_get_handle(ni->net, ni->sap, ni->salen,
ni->hostname, ni->hostname_len);
if (unlikely(nsm == NULL)) {
dprintk("lockd: %s failed; no nsm handle\n",
@@ -534,17 +534,18 @@ static struct nlm_host *next_host_state(

/**
* nlm_host_rebooted - Release all resources held by rebooted host
+ * @net: network namespace
* @info: pointer to decoded results of NLM_SM_NOTIFY call
*
* We were notified that the specified host has rebooted. Release
* all resources held by that peer.
*/
-void nlm_host_rebooted(const struct nlm_reboot *info)
+void nlm_host_rebooted(const struct net *net, const struct nlm_reboot *info)
{
struct nsm_handle *nsm;
struct nlm_host *host;

- nsm = nsm_reboot_lookup(info);
+ nsm = nsm_reboot_lookup(net, info);
if (unlikely(nsm == NULL))
return;

--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -51,7 +51,6 @@ struct nsm_res {
};

static const struct rpc_program nsm_program;
-static LIST_HEAD(nsm_handles);
static DEFINE_SPINLOCK(nsm_lock);

/*
@@ -259,33 +258,35 @@ void nsm_unmonitor(const struct nlm_host
}
}

-static struct nsm_handle *nsm_lookup_hostname(const char *hostname,
- const size_t len)
+static struct nsm_handle *nsm_lookup_hostname(const struct list_head *nsm_handles,
+ const char *hostname, const size_t len)
{
struct nsm_handle *nsm;

- list_for_each_entry(nsm, &nsm_handles, sm_link)
+ list_for_each_entry(nsm, nsm_handles, sm_link)
if (strlen(nsm->sm_name) == len &&
memcmp(nsm->sm_name, hostname, len) == 0)
return nsm;
return NULL;
}

-static struct nsm_handle *nsm_lookup_addr(const struct sockaddr *sap)
+static struct nsm_handle *nsm_lookup_addr(const struct list_head *nsm_handles,
+ const struct sockaddr *sap)
{
struct nsm_handle *nsm;

- list_for_each_entry(nsm, &nsm_handles, sm_link)
+ list_for_each_entry(nsm, nsm_handles, sm_link)
if (rpc_cmp_addr(nsm_addr(nsm), sap))
return nsm;
return NULL;
}

-static struct nsm_handle *nsm_lookup_priv(const struct nsm_private *priv)
+static struct nsm_handle *nsm_lookup_priv(const struct list_head *nsm_handles,
+ const struct nsm_private *priv)
{
struct nsm_handle *nsm;

- list_for_each_entry(nsm, &nsm_handles, sm_link)
+ list_for_each_entry(nsm, nsm_handles, sm_link)
if (memcmp(nsm->sm_priv.data, priv->data,
sizeof(priv->data)) == 0)
return nsm;
@@ -350,6 +351,7 @@ static struct nsm_handle *nsm_create_han

/**
* nsm_get_handle - Find or create a cached nsm_handle
+ * @net: network namespace
* @sap: pointer to socket address of handle to find
* @salen: length of socket address
* @hostname: pointer to C string containing hostname to find
@@ -362,11 +364,13 @@ static struct nsm_handle *nsm_create_han
* @hostname cannot be found in the handle cache. Returns NULL if
* an error occurs.
*/
-struct nsm_handle *nsm_get_handle(const struct sockaddr *sap,
+struct nsm_handle *nsm_get_handle(const struct net *net,
+ const struct sockaddr *sap,
const size_t salen, const char *hostname,
const size_t hostname_len)
{
struct nsm_handle *cached, *new = NULL;
+ struct lockd_net *ln = net_generic(net, lockd_net_id);

if (hostname && memchr(hostname, '/', hostname_len) != NULL) {
if (printk_ratelimit()) {
@@ -381,9 +385,10 @@ retry:
spin_lock(&nsm_lock);

if (nsm_use_hostnames && hostname != NULL)
- cached = nsm_lookup_hostname(hostname, hostname_len);
+ cached = nsm_lookup_hostname(&ln->nsm_handles,
+ hostname, hostname_len);
else
- cached = nsm_lookup_addr(sap);
+ cached = nsm_lookup_addr(&ln->nsm_handles, sap);

if (cached != NULL) {
atomic_inc(&cached->sm_count);
@@ -397,7 +402,7 @@ retry:
}

if (new != NULL) {
- list_add(&new->sm_link, &nsm_handles);
+ list_add(&new->sm_link, &ln->nsm_handles);
spin_unlock(&nsm_lock);
dprintk("lockd: created nsm_handle for %s (%s)\n",
new->sm_name, new->sm_addrbuf);
@@ -414,19 +419,22 @@ retry:

/**
* nsm_reboot_lookup - match NLMPROC_SM_NOTIFY arguments to an nsm_handle
+ * @net: network namespace
* @info: pointer to NLMPROC_SM_NOTIFY arguments
*
* Returns a matching nsm_handle if found in the nsm cache. The returned
* nsm_handle's reference count is bumped. Otherwise returns NULL if some
* error occurred.
*/
-struct nsm_handle *nsm_reboot_lookup(const struct nlm_reboot *info)
+struct nsm_handle *nsm_reboot_lookup(const struct net *net,
+ const struct nlm_reboot *info)
{
struct nsm_handle *cached;
+ struct lockd_net *ln = net_generic(net, lockd_net_id);

spin_lock(&nsm_lock);

- cached = nsm_lookup_priv(&info->priv);
+ cached = nsm_lookup_priv(&ln->nsm_handles, &info->priv);
if (unlikely(cached == NULL)) {
spin_unlock(&nsm_lock);
dprintk("lockd: never saw rebooted peer '%.*s' before\n",
--- a/fs/lockd/netns.h
+++ b/fs/lockd/netns.h
@@ -16,6 +16,7 @@ struct lockd_net {
spinlock_t nsm_clnt_lock;
unsigned int nsm_users;
struct rpc_clnt *nsm_clnt;
+ struct list_head nsm_handles;
};

extern int lockd_net_id;
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -583,6 +583,7 @@ static int lockd_init_net(struct net *ne
INIT_DELAYED_WORK(&ln->grace_period_end, grace_ender);
INIT_LIST_HEAD(&ln->grace_list);
spin_lock_init(&ln->nsm_clnt_lock);
+ INIT_LIST_HEAD(&ln->nsm_handles);
return 0;
}

--- a/fs/lockd/svc4proc.c
+++ b/fs/lockd/svc4proc.c
@@ -421,7 +421,7 @@ nlm4svc_proc_sm_notify(struct svc_rqst *
return rpc_system_err;
}

- nlm_host_rebooted(argp);
+ nlm_host_rebooted(SVC_NET(rqstp), argp);
return rpc_success;
}

--- a/fs/lockd/svcproc.c
+++ b/fs/lockd/svcproc.c
@@ -464,7 +464,7 @@ nlmsvc_proc_sm_notify(struct svc_rqst *r
return rpc_system_err;
}

- nlm_host_rebooted(argp);
+ nlm_host_rebooted(SVC_NET(rqstp), argp);
return rpc_success;
}

--- a/include/linux/lockd/lockd.h
+++ b/include/linux/lockd/lockd.h
@@ -236,7 +236,8 @@ void nlm_rebind_host(struct nlm_host
struct nlm_host * nlm_get_host(struct nlm_host *);
void nlm_shutdown_hosts(void);
void nlm_shutdown_hosts_net(struct net *net);
-void nlm_host_rebooted(const struct nlm_reboot *);
+void nlm_host_rebooted(const struct net *net,
+ const struct nlm_reboot *);

/*
* Host monitoring
@@ -244,11 +245,13 @@ void nlm_host_rebooted(const struct n
int nsm_monitor(const struct nlm_host *host);
void nsm_unmonitor(const struct nlm_host *host);

-struct nsm_handle *nsm_get_handle(const struct sockaddr *sap,
+struct nsm_handle *nsm_get_handle(const struct net *net,
+ const struct sockaddr *sap,
const size_t salen,
const char *hostname,
const size_t hostname_len);
-struct nsm_handle *nsm_reboot_lookup(const struct nlm_reboot *info);
+struct nsm_handle *nsm_reboot_lookup(const struct net *net,
+ const struct nlm_reboot *info);
void nsm_release(struct nsm_handle *nsm);

/*

2016-03-01 23:51:46

by Greg Kroah-Hartman

Subject: [PATCH 3.14 074/130] sched/core: Remove false-positive warning from wake_up_process()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sasha Levin <[email protected]>

commit 119d6f6a3be8b424b200dcee56e74484d5445f7e upstream.

Because wakeups can (fundamentally) be late, a task might not be in
the expected state. Therefore testing against a task's state is racy,
and can yield false positives.

Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 9067ac85d533 ("wake_up_process() should be never used to wakeup a TASK_STOPPED/TRACED task")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/core.c | 1 -
1 file changed, 1 deletion(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1696,7 +1696,6 @@ out:
*/
int wake_up_process(struct task_struct *p)
{
- WARN_ON(task_is_stopped_or_traced(p));
return try_to_wake_up(p, TASK_NORMAL, 0);
}
EXPORT_SYMBOL(wake_up_process);

2016-03-01 23:51:57

by Greg Kroah-Hartman

Subject: [PATCH 3.14 084/130] scripts: recordmcount: break hardlinks

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Russell King <[email protected]>

commit dd39a26538e37f6c6131e829a4a510787e43c783 upstream.

recordmcount edits the file in-place, which can cause problems when
using ccache in hardlink mode. Arrange for recordmcount to break a
hardlinked object.

Link: http://lkml.kernel.org/r/[email protected]

Signed-off-by: Russell King <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
scripts/recordmcount.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)

--- a/scripts/recordmcount.c
+++ b/scripts/recordmcount.c
@@ -189,6 +189,20 @@ static void *mmap_file(char const *fname
addr = umalloc(sb.st_size);
uread(fd_map, addr, sb.st_size);
}
+ if (sb.st_nlink != 1) {
+ /* file is hard-linked, break the hard link */
+ close(fd_map);
+ if (unlink(fname) < 0) {
+ perror(fname);
+ fail_file();
+ }
+ fd_map = open(fname, O_RDWR | O_CREAT, sb.st_mode);
+ if (fd_map < 0) {
+ perror(fname);
+ fail_file();
+ }
+ uwrite(fd_map, addr, sb.st_size);
+ }
return addr;
}
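
For reference, a self-contained userspace sketch of the same idea (not the recordmcount code itself; the helper name is made up, and it assumes the caller has already read the whole file into buf, as recordmcount's mmap_file() does): if the file to be edited in place has more than one link, unlink it and write a private copy so the other links, e.g. ccache's cached object, stay untouched.

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns an fd open on a file that is safe to edit in place, or -1 on error. */
static int break_hardlink_and_reopen(const char *fname, const void *buf, size_t len)
{
	struct stat sb;
	int fd = open(fname, O_RDWR);

	if (fd < 0 || fstat(fd, &sb) < 0)
		goto fail;

	if (sb.st_nlink > 1) {                 /* file is hard-linked elsewhere */
		close(fd);
		if (unlink(fname) < 0)         /* drop only this directory entry */
			return -1;
		fd = open(fname, O_RDWR | O_CREAT, sb.st_mode & 07777);
		if (fd < 0)
			return -1;
		if (write(fd, buf, len) != (ssize_t)len)
			goto fail;
	}
	return fd;

fail:
	if (fd >= 0)
		close(fd);
	return -1;
}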

2016-03-01 23:52:05

by Greg Kroah-Hartman

Subject: [PATCH 3.14 073/130] sched/core: Clear the root_domain cpumasks in init_rootdomain()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Xunlei Pang <[email protected]>

commit 8295c69925ad53ec32ca54ac9fc194ff21bc40e2 upstream.

root_domain::rto_mask, allocated through alloc_cpumask_var(),
contains garbage data, and this may cause problems. For instance,
when doing pull_rt_task(), it may do useless iterations if
rto_mask retains some extra garbage bits. Worse still, this
violates the isolated-domain rule for clustered scheduling
using cpuset, because tasks (with all CPUs allowed) that
belong to one root domain can be pulled away into another
root domain.

The patch cleans the garbage by using zalloc_cpumask_var()
instead of alloc_cpumask_var() for root_domain::rto_mask
allocation, thereby addressing the issues.

Do the same thing for root_domain's other cpumask members:
dlo_mask, span, and online.

Signed-off-by: Xunlei Pang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/core.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5420,13 +5420,13 @@ static int init_rootdomain(struct root_d
{
memset(rd, 0, sizeof(*rd));

- if (!alloc_cpumask_var(&rd->span, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&rd->span, GFP_KERNEL))
goto out;
- if (!alloc_cpumask_var(&rd->online, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&rd->online, GFP_KERNEL))
goto free_span;
- if (!alloc_cpumask_var(&rd->dlo_mask, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&rd->dlo_mask, GFP_KERNEL))
goto free_online;
- if (!alloc_cpumask_var(&rd->rto_mask, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&rd->rto_mask, GFP_KERNEL))
goto free_dlo_mask;

init_dl_bw(&rd->dl_bw);
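
A userspace analogue of the difference (illustrative only; NBITS and the bit layout are arbitrary): malloc(), like alloc_cpumask_var(), hands back indeterminate memory, so a loop that walks "set" bits can visit CPUs that were never added, whereas calloc(), like zalloc_cpumask_var(), starts from an all-clear mask. The scratch allocation just makes heap reuse, and therefore visible garbage, more likely.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBITS  128
#define BPL    (8 * sizeof(unsigned long))
#define NLONGS (NBITS / BPL)

int main(void)
{
	/* Dirty a chunk of heap so the next malloc() is likely to reuse it. */
	unsigned long *scratch = malloc(NLONGS * sizeof(*scratch));
	memset(scratch, 0xff, NLONGS * sizeof(*scratch));
	free(scratch);

	unsigned long *mask  = malloc(NLONGS * sizeof(*mask));   /* indeterminate  */
	unsigned long *zmask = calloc(NLONGS, sizeof(*zmask));   /* all bits clear */

	unsigned stray = 0;
	for (unsigned i = 0; i < NBITS; i++)
		if (mask[i / BPL] & (1UL << (i % BPL)))
			stray++;                 /* "CPUs" nobody ever set */

	printf("stray bits in malloc'd mask: %u, in calloc'd mask: always 0\n", stray);
	free(mask);
	free(zmask);
	return 0;
}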

2016-03-01 23:52:17

by Greg Kroah-Hartman

Subject: [PATCH 3.14 087/130] Btrfs: send, don't BUG_ON() when an empty symlink is found

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Filipe Manana <[email protected]>

commit a879719b8c90e15c9e7fa7266d5e3c0ca962f9df upstream.

When a symlink is successfully created it always has an inline extent
containing the source path. However if an error happens when creating
the symlink, we can leave in the subvolume's tree a symlink inode without
any such inline extent item - this happens if after btrfs_symlink() calls
btrfs_end_transaction() and before it calls the inode eviction handler
(through the final iput() call), the transaction gets committed and a
crash happens before the eviction handler gets called, or if a snapshot
of the subvolume is made before the eviction handler gets called. Sadly
we can't just avoid this by making btrfs_symlink() call
btrfs_end_transaction() after it calls the eviction handler, because the
latter can commit the current transaction before it removes any items from
the subvolume tree (if it encounters ENOSPC errors while reserving space
for removing all the items).

So make send fail more gracefully, with an -EIO error, and print a
message to dmesg/syslog informing that there's an empty symlink inode,
so that the user can delete the empty symlink or do something else
about it.

Reported-by: Stephen R. van den Berg <[email protected]>
Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/btrfs/send.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)

--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -1377,7 +1377,21 @@ static int read_symlink(struct btrfs_roo
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
if (ret < 0)
goto out;
- BUG_ON(ret);
+ if (ret) {
+ /*
+ * An empty symlink inode. Can happen in rare error paths when
+ * creating a symlink (transaction committed before the inode
+ * eviction handler removed the symlink inode items and a crash
+ * happened in between or the subvol was snapshoted in between).
+ * Print an informative message to dmesg/syslog so that the user
+ * can delete the symlink.
+ */
+ btrfs_err(root->fs_info,
+ "Found empty symlink inode %llu at root %llu",
+ ino, root->root_key.objectid);
+ ret = -EIO;
+ goto out;
+ }

ei = btrfs_item_ptr(path->nodes[0], path->slots[0],
struct btrfs_file_extent_item);

2016-03-01 23:52:10

by Greg Kroah-Hartman

Subject: [PATCH 3.14 082/130] ses: fix additional element traversal bug

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: James Bottomley <[email protected]>

commit 5e1033561da1152c57b97ee84371dba2b3d64c25 upstream.

KASAN found that our additional element processing scripts drop off
the end of the VPD page into unallocated space. The reason is that
not every element has additional information but our traversal
routines think they do, leading to them expecting far more additional
information than is present. Fix this by adding a gate to the
traversal routine so that it only processes elements that are expected
to have additional information (list is in SES-2 section 6.1.13.1:
Additional Element Status diagnostic page overview)

Reported-by: Pavel Tikhomirov <[email protected]>
Tested-by: Pavel Tikhomirov <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/ses.c | 10 +++++++++-
include/linux/enclosure.h | 4 ++++
2 files changed, 13 insertions(+), 1 deletion(-)

--- a/drivers/scsi/ses.c
+++ b/drivers/scsi/ses.c
@@ -454,7 +454,15 @@ static void ses_enclosure_data_process(s
if (desc_ptr)
desc_ptr += len;

- if (addl_desc_ptr)
+ if (addl_desc_ptr &&
+ /* only find additional descriptions for specific devices */
+ (type_ptr[0] == ENCLOSURE_COMPONENT_DEVICE ||
+ type_ptr[0] == ENCLOSURE_COMPONENT_ARRAY_DEVICE ||
+ type_ptr[0] == ENCLOSURE_COMPONENT_SAS_EXPANDER ||
+ /* these elements are optional */
+ type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_TARGET_PORT ||
+ type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT ||
+ type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS))
addl_desc_ptr += addl_desc_ptr[1] + 2;

}
--- a/include/linux/enclosure.h
+++ b/include/linux/enclosure.h
@@ -29,7 +29,11 @@
/* A few generic types ... taken from ses-2 */
enum enclosure_component_type {
ENCLOSURE_COMPONENT_DEVICE = 0x01,
+ ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS = 0x07,
+ ENCLOSURE_COMPONENT_SCSI_TARGET_PORT = 0x14,
+ ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT = 0x15,
ENCLOSURE_COMPONENT_ARRAY_DEVICE = 0x17,
+ ENCLOSURE_COMPONENT_SAS_EXPANDER = 0x18,
};

/* ses-2 common element status */

2016-03-01 23:52:38

by Greg Kroah-Hartman

Subject: [PATCH 3.14 086/130] Btrfs: igrab inode in writepage

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Josef Bacik <[email protected]>

commit be7bd730841e69fe8f70120098596f648cd1f3ff upstream.

We hit this panic on a few of our boxes this week where we have an
ordered_extent with a NULL inode. We do an igrab() of the inode in writepages,
but weren't doing it in writepage, which can be called directly from the VM on
dirty pages. If the inode has been unlinked then we could have I_FREEING set,
which means igrab() would return NULL and we get this panic. Fix this by trying
to igrab in btrfs_writepage, and if it returns NULL then just redirty the page
and return AOP_WRITEPAGE_ACTIVATE so the VM knows it wasn't successful. Thanks,

Signed-off-by: Josef Bacik <[email protected]>
Reviewed-by: Liu Bo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/btrfs/inode.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)

--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7511,15 +7511,28 @@ int btrfs_readpage(struct file *file, st
static int btrfs_writepage(struct page *page, struct writeback_control *wbc)
{
struct extent_io_tree *tree;
-
+ struct inode *inode = page->mapping->host;
+ int ret;

if (current->flags & PF_MEMALLOC) {
redirty_page_for_writepage(wbc, page);
unlock_page(page);
return 0;
}
+
+ /*
+ * If we are under memory pressure we will call this directly from the
+ * VM, we need to make sure we have the inode referenced for the ordered
+ * extent. If not just return like we didn't do anything.
+ */
+ if (!igrab(inode)) {
+ redirty_page_for_writepage(wbc, page);
+ return AOP_WRITEPAGE_ACTIVATE;
+ }
tree = &BTRFS_I(page->mapping->host)->io_tree;
- return extent_write_full_page(tree, page, btrfs_get_extent, wbc);
+ ret = extent_write_full_page(tree, page, btrfs_get_extent, wbc);
+ btrfs_add_delayed_iput(inode);
+ return ret;
}

static int btrfs_writepages(struct address_space *mapping,
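
A loose, userspace-only analogue of the pattern (struct obj and the helper names are invented for the example, not Btrfs code): before starting work on behalf of an object that someone else may be tearing down, take a reference with a try-get that fails once the object is dying, and back off cleanly instead of touching freed state.

#include <stdbool.h>
#include <stdio.h>

struct obj {
	int refs;                      /* 0 means the object is being freed */
};

static bool try_get(struct obj *o)     /* plays the role of igrab() */
{
	if (o->refs == 0)
		return false;
	o->refs++;
	return true;
}

static void put(struct obj *o)         /* plays the role of btrfs_add_delayed_iput() */
{
	o->refs--;
}

static int writepage(struct obj *inode)
{
	if (!try_get(inode)) {
		/* redirty the page and tell the caller to come back later,
		 * i.e. return AOP_WRITEPAGE_ACTIVATE in the real code */
		return 1;
	}
	/* ... write the page out while holding the reference ... */
	put(inode);
	return 0;
}

int main(void)
{
	struct obj live = { .refs = 1 }, dying = { .refs = 0 };

	printf("live inode:  %d\n", writepage(&live));   /* 0: written            */
	printf("dying inode: %d\n", writepage(&dying));  /* 1: redirtied, skipped */
	return 0;
}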

2016-03-01 23:53:07

by Greg Kroah-Hartman

Subject: [PATCH 3.14 063/130] mac: validate mac_partition is within sector

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit 02e2a5bfebe99edcf9d694575a75032d53fe1b73 upstream.

If md->signature == MAC_DRIVER_MAGIC and md->block_size == 1023, a single
512 byte sector would be read (secsize / 512). However the partition
structure would be located past the end of the buffer (secsize % 512).

Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
block/partitions/mac.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

--- a/block/partitions/mac.c
+++ b/block/partitions/mac.c
@@ -32,7 +32,7 @@ int mac_partition(struct parsed_partitio
Sector sect;
unsigned char *data;
int slot, blocks_in_map;
- unsigned secsize;
+ unsigned secsize, datasize, partoffset;
#ifdef CONFIG_PPC_PMAC
int found_root = 0;
int found_root_goodness = 0;
@@ -50,10 +50,14 @@ int mac_partition(struct parsed_partitio
}
secsize = be16_to_cpu(md->block_size);
put_dev_sector(sect);
- data = read_part_sector(state, secsize/512, &sect);
+ datasize = round_down(secsize, 512);
+ data = read_part_sector(state, datasize / 512, &sect);
if (!data)
return -1;
- part = (struct mac_partition *) (data + secsize%512);
+ partoffset = secsize % 512;
+ if (partoffset + sizeof(*part) > datasize)
+ return -1;
+ part = (struct mac_partition *) (data + partoffset);
if (be16_to_cpu(part->signature) != MAC_PARTITION_MAGIC) {
put_dev_sector(sect);
return 0; /* not a MacOS disk */
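
Worked through with the numbers from the commit message in a small standalone program (the struct below is a shortened stand-in for the real struct mac_partition, which is larger): secsize = 1023 means only round_down(1023, 512) = 512 bytes are read, yet the entry is expected at offset 1023 % 512 = 511, so the added bounds check rejects it.

#include <stdio.h>

#define round_down(x, y) ((x) - ((x) % (y)))

struct mac_partition_stub {      /* shortened stand-in for struct mac_partition */
	unsigned short signature;
	unsigned short res1;
	unsigned int   map_count;
	unsigned int   start_block;
	unsigned int   block_count;
};

int main(void)
{
	unsigned secsize    = 1023;                      /* md->block_size from the disk */
	unsigned datasize   = round_down(secsize, 512);  /* bytes actually read: 512     */
	unsigned partoffset = secsize % 512;             /* entry would start at: 511    */

	if (partoffset + sizeof(struct mac_partition_stub) > datasize)
		printf("reject: %u + %zu > %u, entry would overrun the sector buffer\n",
		       partoffset, sizeof(struct mac_partition_stub), datasize);
	return 0;
}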

2016-03-01 23:53:27

by Greg Kroah-Hartman

Subject: [PATCH 3.14 088/130] Btrfs: fix number of transaction units required to create symlink

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Filipe Manana <[email protected]>

commit 9269d12b2d57d9e3d13036bb750762d1110d425c upstream.

We weren't accounting for the insertion of an inline extent item for the
symlink inode nor that we need to update the parent inode item (through
the call to btrfs_add_nondir()). So fix this by including two more
transaction units.

Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/btrfs/inode.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8625,9 +8625,11 @@ static int btrfs_symlink(struct inode *d
/*
* 2 items for inode item and ref
* 2 items for dir items
+ * 1 item for updating parent inode item
+ * 1 item for the inline extent item
* 1 item for xattr if selinux is on
*/
- trans = btrfs_start_transaction(root, 5);
+ trans = btrfs_start_transaction(root, 7);
if (IS_ERR(trans))
return PTR_ERR(trans);

2016-03-01 23:53:36

by Greg Kroah-Hartman

Subject: [PATCH 3.14 093/130] uml: fix hostfs mknod()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Vegard Nossum <[email protected]>

commit 9f2dfda2f2f1c6181c3732c16b85c59ab2d195e0 upstream.

An inverted return value check in hostfs_mknod() caused the function
to return success after handling it as an error (and cleaning up).

It resulted in the following segfault when trying to bind() a named
unix socket:

Pid: 198, comm: a.out Not tainted 4.4.0-rc4
RIP: 0033:[<0000000061077df6>]
RSP: 00000000daae5d60 EFLAGS: 00010202
RAX: 0000000000000000 RBX: 000000006092a460 RCX: 00000000dfc54208
RDX: 0000000061073ef1 RSI: 0000000000000070 RDI: 00000000e027d600
RBP: 00000000daae5de0 R08: 00000000da980ac0 R09: 0000000000000000
R10: 0000000000000003 R11: 00007fb1ae08f72a R12: 0000000000000000
R13: 000000006092a460 R14: 00000000daaa97c0 R15: 00000000daaa9a88
Kernel panic - not syncing: Kernel mode fault at addr 0x40, ip 0x61077df6
CPU: 0 PID: 198 Comm: a.out Not tainted 4.4.0-rc4 #1
Stack:
e027d620 dfc54208 0000006f da981398
61bee000 0000c1ed daae5de0 0000006e
e027d620 dfcd4208 00000005 6092a460
Call Trace:
[<60dedc67>] SyS_bind+0xf7/0x110
[<600587be>] handle_syscall+0x7e/0x80
[<60066ad7>] userspace+0x3e7/0x4e0
[<6006321f>] ? save_registers+0x1f/0x40
[<6006c88e>] ? arch_prctl+0x1be/0x1f0
[<60054985>] fork_handler+0x85/0x90

Let's also get rid of the "cosmic ray protection" while we're at it.

Fixes: e9193059b1b3 "hostfs: fix races in dentry_name() and inode_name()"
Signed-off-by: Vegard Nossum <[email protected]>
Cc: Jeff Dike <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Richard Weinberger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/hostfs/hostfs_kern.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

--- a/fs/hostfs/hostfs_kern.c
+++ b/fs/hostfs/hostfs_kern.c
@@ -720,15 +720,13 @@ static int hostfs_mknod(struct inode *di

init_special_inode(inode, mode, dev);
err = do_mknod(name, mode, MAJOR(dev), MINOR(dev));
- if (!err)
+ if (err)
goto out_free;

err = read_name(inode, name);
__putname(name);
if (err)
goto out_put;
- if (err)
- goto out_put;

d_instantiate(dentry, inode);
return 0;

2016-03-01 23:53:41

by Greg Kroah-Hartman

Subject: [PATCH 3.14 097/130] dm space map metadata: remove unused variable in brb_pop()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mike Snitzer <[email protected]>

commit 512167788a6fe9481a33a3cce5f80b684631a1bb upstream.

Remove the unused struct block_op pointer that was inadvertently
introduced, via cut-and-paste of previous brb_op() code, as part of
commit 50dd842ad.

(Cc'ing stable@ because commit 50dd842ad did)

Fixes: 50dd842ad ("dm space map metadata: fix ref counting bug when bootstrapping a new space map")
Reported-by: David Binderman <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/persistent-data/dm-space-map-metadata.c | 3 ---
1 file changed, 3 deletions(-)

--- a/drivers/md/persistent-data/dm-space-map-metadata.c
+++ b/drivers/md/persistent-data/dm-space-map-metadata.c
@@ -152,12 +152,9 @@ static int brb_peek(struct bop_ring_buff

static int brb_pop(struct bop_ring_buffer *brb)
{
- struct block_op *bop;
-
if (brb_empty(brb))
return -ENODATA;

- bop = brb->bops + brb->begin;
brb->begin = brb_next(brb, brb->begin);

return 0;

2016-03-01 23:53:47

by Greg Kroah-Hartman

Subject: [PATCH 3.14 100/130] mmc: sdio: Fix invalid vdd in voltage switch power cycle

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Adrian Hunter <[email protected]>

commit d9bfbb95ed598a09cf336adb0f190ee0ff802f0d upstream.

The 'ocr' parameter passed to mmc_set_signal_voltage()
defines the power-on voltage used when power cycling
after a failure to set the voltage. However, in the
case of mmc_sdio_init_card(), the value passed has the
R4_18V_PRESENT flag set which is not valid for power-on
and results in an invalid vdd. Fix by passing the card's
ocr value which does not have the flag.

Signed-off-by: Adrian Hunter <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/mmc/core/sdio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/mmc/core/sdio.c
+++ b/drivers/mmc/core/sdio.c
@@ -670,7 +670,7 @@ try_again:
*/
if (!powered_resume && (rocr & ocr & R4_18V_PRESENT)) {
err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180,
- ocr);
+ ocr_card);
if (err == -EAGAIN) {
sdio_reset(host);
mmc_go_idle(host);

2016-03-01 23:53:58

by Greg Kroah-Hartman

Subject: [PATCH 3.14 109/130] drm/i915/dp: fall back to 18 bpp when sink capability is unknown

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jani Nikula <[email protected]>

commit 5efd407674068dede403551bea3b0b134c32513a upstream.

Per DP spec, the source device should fall back to 18 bpp, VESA range
RGB when the sink capability is unknown. Fix the color depth
clamping. 18 bpp color depth should ensure full color range in automatic
mode.

The clamping has been HDMI specific since its introduction in

commit 996a2239f93b03c5972923f04b097f65565c5bed
Author: Daniel Vetter <[email protected]>
Date: Fri Apr 19 11:24:34 2013 +0200

drm/i915: Disable high-bpc on pre-1.4 EDID screens

Reported-and-tested-by: Dihan Wickremasuriya <[email protected]>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=105331
Reviewed-by: Ville Syrjälä <[email protected]>
Signed-off-by: Jani Nikula <[email protected]>
Link: http://patchwork.freedesktop.org/patch/msgid/[email protected]
(cherry picked from commit 013dd9e038723bbd2aa67be51847384b75be8253)
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/i915/intel_display.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)

--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -8821,11 +8821,21 @@ connected_sink_compute_bpp(struct intel_
pipe_config->pipe_bpp = connector->base.display_info.bpc*3;
}

- /* Clamp bpp to 8 on screens without EDID 1.4 */
- if (connector->base.display_info.bpc == 0 && bpp > 24) {
- DRM_DEBUG_KMS("clamping display bpp (was %d) to default limit of 24\n",
- bpp);
- pipe_config->pipe_bpp = 24;
+ /* Clamp bpp to default limit on screens without EDID 1.4 */
+ if (connector->base.display_info.bpc == 0) {
+ int type = connector->base.connector_type;
+ int clamp_bpp = 24;
+
+ /* Fall back to 18 bpp when DP sink capability is unknown. */
+ if (type == DRM_MODE_CONNECTOR_DisplayPort ||
+ type == DRM_MODE_CONNECTOR_eDP)
+ clamp_bpp = 18;
+
+ if (bpp > clamp_bpp) {
+ DRM_DEBUG_KMS("clamping display bpp (was %d) to default limit of %d\n",
+ bpp, clamp_bpp);
+ pipe_config->pipe_bpp = clamp_bpp;
+ }
}
}

2016-03-01 23:53:53

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 101/130] mmc: sdhci: Fix sdhci_runtime_pm_bus_on/off()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Adrian Hunter <[email protected]>

commit 5c671c410c8704800f4f1673b6f572137e7e6ddd upstream.

sdhci has a legacy facility to prevent runtime suspend if the
bus power is on. This is needed in cases where the power to
the card is dependent on the bus power. It is controlled by
a pair of functions: sdhci_runtime_pm_bus_on() and
sdhci_runtime_pm_bus_off(). These functions use a boolean
variable 'bus_on' to ensure changes are always paired.
There is an additional check for 'runtime_suspended' which is
the problem. In fact, its use is ill-conceived as the only
requirement for the logic is that 'on' and 'off' are paired,
which is actually broken by the check, for example if the bus
power is turned on during runtime resume. So remove the check.

Signed-off-by: Adrian Hunter <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/mmc/host/sdhci.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -2663,7 +2663,7 @@ static int sdhci_runtime_pm_put(struct s

static void sdhci_runtime_pm_bus_on(struct sdhci_host *host)
{
- if (host->runtime_suspended || host->bus_on)
+ if (host->bus_on)
return;
host->bus_on = true;
pm_runtime_get_noresume(host->mmc->parent);
@@ -2671,7 +2671,7 @@ static void sdhci_runtime_pm_bus_on(stru

static void sdhci_runtime_pm_bus_off(struct sdhci_host *host)
{
- if (host->runtime_suspended || !host->bus_on)
+ if (!host->bus_on)
return;
host->bus_on = false;
pm_runtime_put_noidle(host->mmc->parent);

2016-03-01 23:54:28

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 118/130] rfkill: fix rfkill_fop_read wait_event usage

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Johannes Berg <[email protected]>

commit 6736fde9672ff6717ac576e9bba2fd5f3dfec822 upstream.

The code within wait_event_interruptible() is called with
!TASK_RUNNING, so mustn't call any functions that can sleep,
like mutex_lock().

Since we re-check the list_empty() in a loop after the wait,
it's safe to simply use list_empty() without locking.

This bug has existed forever, but was only discovered now
because all userspace implementations, including the default
'rfkill' tool, use poll() or select() to get a readable fd
before attempting to read.

Fixes: c64fb01627e24 ("rfkill: create useful userspace interface")
Reported-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/rfkill/core.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)

--- a/net/rfkill/core.c
+++ b/net/rfkill/core.c
@@ -1078,17 +1078,6 @@ static unsigned int rfkill_fop_poll(stru
return res;
}

-static bool rfkill_readable(struct rfkill_data *data)
-{
- bool r;
-
- mutex_lock(&data->mtx);
- r = !list_empty(&data->events);
- mutex_unlock(&data->mtx);
-
- return r;
-}
-
static ssize_t rfkill_fop_read(struct file *file, char __user *buf,
size_t count, loff_t *pos)
{
@@ -1105,8 +1094,11 @@ static ssize_t rfkill_fop_read(struct fi
goto out;
}
mutex_unlock(&data->mtx);
+ /* since we re-check and it just compares pointers,
+ * using !list_empty() without locking isn't a problem
+ */
ret = wait_event_interruptible(data->read_wait,
- rfkill_readable(data));
+ !list_empty(&data->events));
mutex_lock(&data->mtx);

if (ret)

2016-03-02 01:36:10

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 119/130] Revert "workqueue: make sure delayed work run in local cpu"

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Tejun Heo <[email protected]>

commit 041bd12e272c53a35c54c13875839bcb98c999ce upstream.

This reverts commit 874bbfe600a660cba9c776b3957b1ce393151b76.

Workqueue used to implicitly guarantee that work items queued without
explicit CPU specified are put on the local CPU. Recent changes in
timer broke the guarantee and led to vmstat breakage which was fixed
by 176bed1de5bf ("vmstat: explicitly schedule per-cpu work on the CPU
we need it to run on").

vmstat is the most likely to expose the issue and it's quite possible
that there are other similar problems which are a lot more difficult
to trigger. As a preventive measure, 874bbfe600a6 ("workqueue: make
sure delayed work run in local cpu") was applied to restore the local
CPU guarantee. Unfortunately, the change exposed a bug in timer code
which got fixed by 22b886dd1018 ("timers: Use proper base migration in
add_timer_on()"). Due to code restructuring, the commit couldn't be
backported beyond certain point and stable kernels which only had
874bbfe600a6 started crashing.

The local CPU guarantee was accidental more than anything else and we
want to get rid of it anyway. As, with the vmstat case fixed,
874bbfe600a6 is causing more problems than it's fixing, it has been
decided to take the chance and officially break the guarantee by
reverting the commit. A debug feature will be added to force foreign
CPU assignment to expose cases relying on the guarantee and fixes for
the individual cases will be backported to stable as necessary.

Signed-off-by: Tejun Heo <[email protected]>
Fixes: 874bbfe600a6 ("workqueue: make sure delayed work run in local cpu")
Link: http://lkml.kernel.org/g/[email protected]
Cc: Mike Galbraith <[email protected]>
Cc: Henrique de Moraes Holschuh <[email protected]>
Cc: Daniel Bilik <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Sasha Levin <[email protected]>
Cc: Ben Hutchings <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Daniel Bilik <[email protected]>
Cc: Jiri Slaby <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/workqueue.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1475,13 +1475,13 @@ static void __queue_delayed_work(int cpu
timer_stats_timer_set_start_info(&dwork->timer);

dwork->wq = wq;
- /* timer isn't guaranteed to run in this cpu, record earlier */
- if (cpu == WORK_CPU_UNBOUND)
- cpu = raw_smp_processor_id();
dwork->cpu = cpu;
timer->expires = jiffies + delay;

- add_timer_on(timer, cpu);
+ if (unlikely(cpu != WORK_CPU_UNBOUND))
+ add_timer_on(timer, cpu);
+ else
+ add_timer(timer);
}

/**
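
With the implicit local-CPU guarantee gone, a caller that genuinely needs
per-CPU execution (as vmstat does) has to ask for it explicitly. A rough
kernel-style sketch of that pattern follows; the names are illustrative
and it is not taken from this patch:

  #include <linux/workqueue.h>
  #include <linux/jiffies.h>
  #include <linux/smp.h>

  static void my_work_fn(struct work_struct *work)
  {
          /* runs on the CPU it was explicitly queued on */
  }

  static DECLARE_DELAYED_WORK(my_dwork, my_work_fn);

  static void arm_local_work(void)
  {
          /* request the current CPU instead of relying on the
           * reverted implicit guarantee */
          schedule_delayed_work_on(raw_smp_processor_id(), &my_dwork, HZ);
  }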

2016-03-01 23:54:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 117/130] cdc-acm:exclude Samsung phone 04e8:685d

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Oliver Neukum <[email protected]>

commit e912e685f372ab62a2405a1acd923597f524e94a upstream.

This phone needs to be handled by a specialised firmware tool
and is reported to crash irrevocably if cdc-acm takes it.

Signed-off-by: Oliver Neukum <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/class/cdc-acm.c | 5 +++++
1 file changed, 5 insertions(+)

--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -1810,6 +1810,11 @@ static const struct usb_device_id acm_id
},
#endif

+ /*Samsung phone in firmware update mode */
+ { USB_DEVICE(0x04e8, 0x685d),
+ .driver_info = IGNORE_DEVICE,
+ },
+
/* Exclude Infineon Flash Loader utility */
{ USB_DEVICE(0x058b, 0x0041),
.driver_info = IGNORE_DEVICE,

2016-03-02 01:36:55

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 3.14 000/130] 3.14.63-stable review

On 03/01/2016 04:44 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.14.63 release.
> There are 130 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Mar 3 23:44:39 UTC 2016.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v3.x/stable-review/patch-3.14.63-rc1.gz
> and the diffstat can be found below.
>

Compiled and booted on my test system.
No dmesg regressions.

thanks,
-- Shuah

--
Shuah Khan
Sr. Linux Kernel Developer
Open Source Innovation Group
Samsung Research America (Silicon Valley)
[email protected] | (970) 217-8978

2016-03-01 23:54:18

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 115/130] hwmon: (ads1015) Handle negative conversion values correctly

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Rosin <[email protected]>

commit acc146943957d7418a6846f06e029b2c5e87e0d5 upstream.

Make the divisor signed as DIV_ROUND_CLOSEST is undefined for negative
dividends when the divisor is unsigned.

Signed-off-by: Peter Rosin <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hwmon/ads1015.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/hwmon/ads1015.c
+++ b/drivers/hwmon/ads1015.c
@@ -126,7 +126,7 @@ static int ads1015_reg_to_mv(struct i2c_
struct ads1015_data *data = i2c_get_clientdata(client);
unsigned int pga = data->channel_data[channel].pga;
int fullscale = fullscale_table[pga];
- const unsigned mask = data->id == ads1115 ? 0x7fff : 0x7ff0;
+ const int mask = data->id == ads1115 ? 0x7fff : 0x7ff0;

return DIV_ROUND_CLOSEST(reg * fullscale, mask);
}
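
The underlying issue is plain C integer conversion: with an unsigned
divisor the negative dividend is converted to a huge unsigned value
before the division. A minimal userspace sketch (the macro below is a
simplified stand-in for the kernel's DIV_ROUND_CLOSEST):

  #include <stdio.h>

  /* simplified: the real kernel macro also rounds negative values properly */
  #define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))

  int main(void)
  {
          int reg = -100;                      /* negative ADC conversion result */
          int fullscale = 2048;
          const unsigned old_mask = 0x7fff;    /* unsigned divisor: old code */
          const int new_mask = 0x7fff;         /* signed divisor: fixed code */

          /* usual arithmetic conversions make the whole expression unsigned */
          printf("unsigned mask: %u\n",
                 DIV_ROUND_CLOSEST(reg * fullscale, old_mask));
          /* with a signed divisor the division stays signed */
          printf("signed mask:   %d\n",
                 DIV_ROUND_CLOSEST(reg * fullscale, new_mask));
          return 0;
  }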

2016-03-02 01:38:50

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 116/130] libceph: don't bail early from try_read() when skipping a message

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ilya Dryomov <[email protected]>

commit e7a88e82fe380459b864e05b372638aeacb0f52d upstream.

The contract between try_read() and try_write() is that when called
each processes as much data as possible. When instructed by osd_client
to skip a message, try_read() is violating this contract by returning
after receiving and discarding a single message instead of checking for
more. try_write() then gets a chance to write out more requests,
generating more replies/skips for try_read() to handle, forcing the
messenger into a starvation loop.

Reported-by: Varada Kari <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
Tested-by: Varada Kari <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/ceph/messenger.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2279,7 +2279,7 @@ static int read_partial_message(struct c
con->in_base_pos = -front_len - middle_len - data_len -
sizeof(m->footer);
con->in_tag = CEPH_MSGR_TAG_READY;
- return 0;
+ return 1;
} else if ((s64)seq - (s64)con->in_seq > 1) {
pr_err("read_partial_message bad seq %lld expected %lld\n",
seq, con->in_seq + 1);
@@ -2312,7 +2312,7 @@ static int read_partial_message(struct c
sizeof(m->footer);
con->in_tag = CEPH_MSGR_TAG_READY;
con->in_seq++;
- return 0;
+ return 1;
}

BUG_ON(!con->in_msg);

2016-03-01 23:54:15

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 122/130] nfs: fix nfs_size_to_loff_t

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Christoph Hellwig <[email protected]>

commit 50ab8ec74a153eb30db26529088bc57dd700b24c upstream.

We support OFFSET_MAX just fine, so don't round down below it. Also
switch to using min_t to make the helper more readable.

Signed-off-by: Christoph Hellwig <[email protected]>
Fixes: 433c92379d9c ("NFS: Clean up nfs_size_to_loff_t()")
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/nfs_fs.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -580,9 +580,7 @@ static inline int nfs3_proc_setacls(stru

static inline loff_t nfs_size_to_loff_t(__u64 size)
{
- if (size > (__u64) OFFSET_MAX - 1)
- return OFFSET_MAX - 1;
- return (loff_t) size;
+ return min_t(u64, size, OFFSET_MAX);
}

static inline ino_t
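
For illustration, the clamp in userspace terms, with LLONG_MAX playing
the role of the kernel's OFFSET_MAX:

  #include <stdio.h>
  #include <limits.h>

  /* sketch of the fixed helper: clamp an unsigned 64-bit size to the
   * largest value a signed 64-bit offset can hold */
  static long long size_to_loff(unsigned long long size)
  {
          return size < (unsigned long long)LLONG_MAX ?
                 (long long)size : LLONG_MAX;
  }

  int main(void)
  {
          printf("%lld\n", size_to_loff(1234));      /* passes through */
          printf("%lld\n", size_to_loff(~0ULL));     /* clamps to LLONG_MAX */
          /* LLONG_MAX itself is now allowed; the old code capped it at
           * LLONG_MAX - 1 */
          printf("%lld\n", size_to_loff((unsigned long long)LLONG_MAX));
          return 0;
  }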

2016-03-02 01:40:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 114/130] IB/qib: fix mcast detach when qp not attached

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mike Marciniszyn <[email protected]>

commit 09dc9cd6528f5b52bcbd3292a6312e762c85260f upstream.

The code produces the following trace:

[1750924.419007] general protection fault: 0000 [#3] SMP
[1750924.420364] Modules linked in: nfnetlink autofs4 rpcsec_gss_krb5 nfsv4
dcdbas rfcomm bnep bluetooth nfsd auth_rpcgss nfs_acl dm_multipath nfs lockd
scsi_dh sunrpc fscache radeon ttm drm_kms_helper drm serio_raw parport_pc
ppdev i2c_algo_bit lpc_ich ipmi_si ib_mthca ib_qib dca lp parport ib_ipoib
mac_hid ib_cm i3000_edac ib_sa ib_uverbs edac_core ib_umad ib_mad ib_core
ib_addr tg3 ptp dm_mirror dm_region_hash dm_log psmouse pps_core
[1750924.420364] CPU: 1 PID: 8401 Comm: python Tainted: G D
3.13.0-39-generic #66-Ubuntu
[1750924.420364] Hardware name: Dell Computer Corporation PowerEdge
860/0XM089, BIOS A04 07/24/2007
[1750924.420364] task: ffff8800366a9800 ti: ffff88007af1c000 task.ti:
ffff88007af1c000
[1750924.420364] RIP: 0010:[<ffffffffa0131d51>] [<ffffffffa0131d51>]
qib_mcast_qp_free+0x11/0x50 [ib_qib]
[1750924.420364] RSP: 0018:ffff88007af1dd70 EFLAGS: 00010246
[1750924.420364] RAX: 0000000000000001 RBX: ffff88007b822688 RCX:
000000000000000f
[1750924.420364] RDX: ffff88007b822688 RSI: ffff8800366c15a0 RDI:
6764697200000000
[1750924.420364] RBP: ffff88007af1dd78 R08: 0000000000000001 R09:
0000000000000000
[1750924.420364] R10: 0000000000000011 R11: 0000000000000246 R12:
ffff88007baa1d98
[1750924.420364] R13: ffff88003ecab000 R14: ffff88007b822660 R15:
0000000000000000
[1750924.420364] FS: 00007ffff7fd8740(0000) GS:ffff88007fc80000(0000)
knlGS:0000000000000000
[1750924.420364] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1750924.420364] CR2: 00007ffff597c750 CR3: 000000006860b000 CR4:
00000000000007e0
[1750924.420364] Stack:
[1750924.420364] ffff88007b822688 ffff88007af1ddf0 ffffffffa0132429
000000007af1de20
[1750924.420364] ffff88007baa1dc8 ffff88007baa0000 ffff88007af1de70
ffffffffa00cb313
[1750924.420364] 00007fffffffde88 0000000000000000 0000000000000008
ffff88003ecab000
[1750924.420364] Call Trace:
[1750924.420364] [<ffffffffa0132429>] qib_multicast_detach+0x1e9/0x350
[ib_qib]
[1750924.568035] [<ffffffffa00cb313>] ? ib_uverbs_modify_qp+0x323/0x3d0
[ib_uverbs]
[1750924.568035] [<ffffffffa0092d61>] ib_detach_mcast+0x31/0x50 [ib_core]
[1750924.568035] [<ffffffffa00cc213>] ib_uverbs_detach_mcast+0x93/0x170
[ib_uverbs]
[1750924.568035] [<ffffffffa00c61f6>] ib_uverbs_write+0xc6/0x2c0 [ib_uverbs]
[1750924.568035] [<ffffffff81312e68>] ? apparmor_file_permission+0x18/0x20
[1750924.568035] [<ffffffff812d4cd3>] ? security_file_permission+0x23/0xa0
[1750924.568035] [<ffffffff811bd214>] vfs_write+0xb4/0x1f0
[1750924.568035] [<ffffffff811bdc49>] SyS_write+0x49/0xa0
[1750924.568035] [<ffffffff8172f7ed>] system_call_fastpath+0x1a/0x1f
[1750924.568035] Code: 66 2e 0f 1f 84 00 00 00 00 00 31 c0 5d c3 66 2e 0f 1f
84 00 00 00 00 00 66 90 0f 1f 44 00 00 55 48 89 e5 53 48 89 fb 48 8b 7f 10
<f0> ff 8f 40 01 00 00 74 0e 48 89 df e8 8e f8 06 e1 5b 5d c3 0f
[1750924.568035] RIP [<ffffffffa0131d51>] qib_mcast_qp_free+0x11/0x50
[ib_qib]
[1750924.568035] RSP <ffff88007af1dd70>
[1750924.650439] ---[ end trace 73d5d4b3f8ad4851 ]

The fix is to note the qib_mcast_qp that was found. If none is found, then
return EINVAL indicating the error.

Reviewed-by: Dennis Dalessandro <[email protected]>
Reported-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Mike Marciniszyn <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/infiniband/hw/qib/qib_verbs_mcast.c | 35 ++++++++++++----------------
1 file changed, 15 insertions(+), 20 deletions(-)

--- a/drivers/infiniband/hw/qib/qib_verbs_mcast.c
+++ b/drivers/infiniband/hw/qib/qib_verbs_mcast.c
@@ -286,15 +286,13 @@ int qib_multicast_detach(struct ib_qp *i
struct qib_ibdev *dev = to_idev(ibqp->device);
struct qib_ibport *ibp = to_iport(ibqp->device, qp->port_num);
struct qib_mcast *mcast = NULL;
- struct qib_mcast_qp *p, *tmp;
+ struct qib_mcast_qp *p, *tmp, *delp = NULL;
struct rb_node *n;
int last = 0;
int ret;

- if (ibqp->qp_num <= 1 || qp->state == IB_QPS_RESET) {
- ret = -EINVAL;
- goto bail;
- }
+ if (ibqp->qp_num <= 1 || qp->state == IB_QPS_RESET)
+ return -EINVAL;

spin_lock_irq(&ibp->lock);

@@ -303,8 +301,7 @@ int qib_multicast_detach(struct ib_qp *i
while (1) {
if (n == NULL) {
spin_unlock_irq(&ibp->lock);
- ret = -EINVAL;
- goto bail;
+ return -EINVAL;
}

mcast = rb_entry(n, struct qib_mcast, rb_node);
@@ -328,6 +325,7 @@ int qib_multicast_detach(struct ib_qp *i
*/
list_del_rcu(&p->list);
mcast->n_attached--;
+ delp = p;

/* If this was the last attached QP, remove the GID too. */
if (list_empty(&mcast->qp_list)) {
@@ -338,15 +336,16 @@ int qib_multicast_detach(struct ib_qp *i
}

spin_unlock_irq(&ibp->lock);
+ /* QP not attached */
+ if (!delp)
+ return -EINVAL;
+ /*
+ * Wait for any list walkers to finish before freeing the
+ * list element.
+ */
+ wait_event(mcast->wait, atomic_read(&mcast->refcount) <= 1);
+ qib_mcast_qp_free(delp);

- if (p) {
- /*
- * Wait for any list walkers to finish before freeing the
- * list element.
- */
- wait_event(mcast->wait, atomic_read(&mcast->refcount) <= 1);
- qib_mcast_qp_free(p);
- }
if (last) {
atomic_dec(&mcast->refcount);
wait_event(mcast->wait, !atomic_read(&mcast->refcount));
@@ -355,11 +354,7 @@ int qib_multicast_detach(struct ib_qp *i
dev->n_mcast_grps_allocated--;
spin_unlock_irq(&dev->n_mcast_grps_lock);
}
-
- ret = 0;
-
-bail:
- return ret;
+ return 0;
}

int qib_mcast_tree_empty(struct qib_ibport *ibp)

2016-03-01 23:54:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 120/130] libata: fix sff host state machine locking while polling

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Tejun Heo <[email protected]>

commit 8eee1d3ed5b6fc8e14389567c9a6f53f82bb7224 upstream.

The bulk of ATA host state machine is implemented by
ata_sff_hsm_move(). The function is called from either the interrupt
handler or, if polling, a work item. Unlike from the interrupt path,
the polling path calls the function without holding the host lock and
ata_sff_hsm_move() selectively grabs the lock.

This is completely broken. If an IRQ triggers while polling is in
progress, the two can easily race and end up accessing the hardware
and updating state machine state at the same time. This can put the
state machine in an illegal state and lead to a crash like the
following.

kernel BUG at drivers/ata/libata-sff.c:1302!
invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN
Modules linked in:
CPU: 1 PID: 10679 Comm: syz-executor Not tainted 4.5.0-rc1+ #300
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff88002bd00000 ti: ffff88002e048000 task.ti: ffff88002e048000
RIP: 0010:[<ffffffff83a83409>] [<ffffffff83a83409>] ata_sff_hsm_move+0x619/0x1c60
...
Call Trace:
<IRQ>
[<ffffffff83a84c31>] __ata_sff_port_intr+0x1e1/0x3a0 drivers/ata/libata-sff.c:1584
[<ffffffff83a85611>] ata_bmdma_port_intr+0x71/0x400 drivers/ata/libata-sff.c:2877
[< inline >] __ata_sff_interrupt drivers/ata/libata-sff.c:1629
[<ffffffff83a85bf3>] ata_bmdma_interrupt+0x253/0x580 drivers/ata/libata-sff.c:2902
[<ffffffff81479f98>] handle_irq_event_percpu+0x108/0x7e0 kernel/irq/handle.c:157
[<ffffffff8147a717>] handle_irq_event+0xa7/0x140 kernel/irq/handle.c:205
[<ffffffff81484573>] handle_edge_irq+0x1e3/0x8d0 kernel/irq/chip.c:623
[< inline >] generic_handle_irq_desc include/linux/irqdesc.h:146
[<ffffffff811a92bc>] handle_irq+0x10c/0x2a0 arch/x86/kernel/irq_64.c:78
[<ffffffff811a7e4d>] do_IRQ+0x7d/0x1a0 arch/x86/kernel/irq.c:240
[<ffffffff86653d4c>] common_interrupt+0x8c/0x8c arch/x86/entry/entry_64.S:520
<EOI>
[< inline >] rcu_lock_acquire include/linux/rcupdate.h:490
[< inline >] rcu_read_lock include/linux/rcupdate.h:874
[<ffffffff8164b4a1>] filemap_map_pages+0x131/0xba0 mm/filemap.c:2145
[< inline >] do_fault_around mm/memory.c:2943
[< inline >] do_read_fault mm/memory.c:2962
[< inline >] do_fault mm/memory.c:3133
[< inline >] handle_pte_fault mm/memory.c:3308
[< inline >] __handle_mm_fault mm/memory.c:3418
[<ffffffff816efb16>] handle_mm_fault+0x2516/0x49a0 mm/memory.c:3447
[<ffffffff8127dc16>] __do_page_fault+0x376/0x960 arch/x86/mm/fault.c:1238
[<ffffffff8127e358>] trace_do_page_fault+0xe8/0x420 arch/x86/mm/fault.c:1331
[<ffffffff8126f514>] do_async_page_fault+0x14/0xd0 arch/x86/kernel/kvm.c:264
[<ffffffff86655578>] async_page_fault+0x28/0x30 arch/x86/entry/entry_64.S:986

Fix it by ensuring that the polling path is holding the host lock
before entering ata_sff_hsm_move() so that all hardware accesses and
state updates are performed under the host lock.

Signed-off-by: Tejun Heo <[email protected]>
Reported-and-tested-by: Dmitry Vyukov <[email protected]>
Link: http://lkml.kernel.org/g/CACT4Y+b_JsOxJu2EZyEf+mOXORc_zid5V1-pLZSroJVxyWdSpw@mail.gmail.com
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/ata/libata-sff.c | 32 +++++++++++---------------------
1 file changed, 11 insertions(+), 21 deletions(-)

--- a/drivers/ata/libata-sff.c
+++ b/drivers/ata/libata-sff.c
@@ -997,12 +997,9 @@ static inline int ata_hsm_ok_in_wq(struc
static void ata_hsm_qc_complete(struct ata_queued_cmd *qc, int in_wq)
{
struct ata_port *ap = qc->ap;
- unsigned long flags;

if (ap->ops->error_handler) {
if (in_wq) {
- spin_lock_irqsave(ap->lock, flags);
-
/* EH might have kicked in while host lock is
* released.
*/
@@ -1014,8 +1011,6 @@ static void ata_hsm_qc_complete(struct a
} else
ata_port_freeze(ap);
}
-
- spin_unlock_irqrestore(ap->lock, flags);
} else {
if (likely(!(qc->err_mask & AC_ERR_HSM)))
ata_qc_complete(qc);
@@ -1024,10 +1019,8 @@ static void ata_hsm_qc_complete(struct a
}
} else {
if (in_wq) {
- spin_lock_irqsave(ap->lock, flags);
ata_sff_irq_on(ap);
ata_qc_complete(qc);
- spin_unlock_irqrestore(ap->lock, flags);
} else
ata_qc_complete(qc);
}
@@ -1048,9 +1041,10 @@ int ata_sff_hsm_move(struct ata_port *ap
{
struct ata_link *link = qc->dev->link;
struct ata_eh_info *ehi = &link->eh_info;
- unsigned long flags = 0;
int poll_next;

+ lockdep_assert_held(ap->lock);
+
WARN_ON_ONCE((qc->flags & ATA_QCFLAG_ACTIVE) == 0);

/* Make sure ata_sff_qc_issue() does not throw things
@@ -1112,14 +1106,6 @@ fsm_start:
}
}

- /* Send the CDB (atapi) or the first data block (ata pio out).
- * During the state transition, interrupt handler shouldn't
- * be invoked before the data transfer is complete and
- * hsm_task_state is changed. Hence, the following locking.
- */
- if (in_wq)
- spin_lock_irqsave(ap->lock, flags);
-
if (qc->tf.protocol == ATA_PROT_PIO) {
/* PIO data out protocol.
* send first data block.
@@ -1135,9 +1121,6 @@ fsm_start:
/* send CDB */
atapi_send_cdb(ap, qc);

- if (in_wq)
- spin_unlock_irqrestore(ap->lock, flags);
-
/* if polling, ata_sff_pio_task() handles the rest.
* otherwise, interrupt handler takes over from here.
*/
@@ -1361,12 +1344,14 @@ static void ata_sff_pio_task(struct work
u8 status;
int poll_next;

+ spin_lock_irq(ap->lock);
+
BUG_ON(ap->sff_pio_task_link == NULL);
/* qc can be NULL if timeout occurred */
qc = ata_qc_from_tag(ap, link->active_tag);
if (!qc) {
ap->sff_pio_task_link = NULL;
- return;
+ goto out_unlock;
}

fsm_start:
@@ -1381,11 +1366,14 @@ fsm_start:
*/
status = ata_sff_busy_wait(ap, ATA_BUSY, 5);
if (status & ATA_BUSY) {
+ spin_unlock_irq(ap->lock);
ata_msleep(ap, 2);
+ spin_lock_irq(ap->lock);
+
status = ata_sff_busy_wait(ap, ATA_BUSY, 10);
if (status & ATA_BUSY) {
ata_sff_queue_pio_task(link, ATA_SHORT_PAUSE);
- return;
+ goto out_unlock;
}
}

@@ -1402,6 +1390,8 @@ fsm_start:
*/
if (poll_next)
goto fsm_start;
+out_unlock:
+ spin_unlock_irq(ap->lock);
}

/**

2016-03-02 01:41:20

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 130/130] iwlwifi: update and fix 7265 series PCI IDs

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Oren Givon <[email protected]>

commit 006bda75d81fd27a583a3b310e9444fea2aa6ef2 upstream.

Update and fix some 7265 PCI IDs entries.

Signed-off-by: Oren Givon <[email protected]>
Signed-off-by: Emmanuel Grumbach <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/wireless/iwlwifi/pcie/drv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/drivers/net/wireless/iwlwifi/pcie/drv.c
+++ b/drivers/net/wireless/iwlwifi/pcie/drv.c
@@ -367,6 +367,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_ca
{IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_n_cfg)},
{IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095A, 0x5C10, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)},
@@ -383,10 +384,10 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_ca
{IWL_PCI_DEVICE(0x095A, 0x9012, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)},
- {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x9210, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095B, 0x9200, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9510, iwl7265_2ac_cfg)},
- {IWL_PCI_DEVICE(0x095A, 0x9310, iwl7265_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x095B, 0x9310, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x5020, iwl7265_2n_cfg)},
{IWL_PCI_DEVICE(0x095A, 0x502A, iwl7265_2n_cfg)},

2016-03-02 01:42:10

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 129/130] xen/pcifront: Fix mysterious crashes when NUMA locality information was extracted.

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Konrad Rzeszutek Wilk <[email protected]>

commit 4d8c8bd6f2062c9988817183a91fe2e623c8aa5e upstream.

Occasionally PV guests would crash with:

pciback 0000:00:00.1: Xen PCI mapped GSI0 to IRQ16
BUG: unable to handle kernel paging request at 0000000d1a8c0be0
.. snip..
<ffffffff8139ce1b>] find_next_bit+0xb/0x10
[<ffffffff81387f22>] cpumask_next_and+0x22/0x40
[<ffffffff813c1ef8>] pci_device_probe+0xb8/0x120
[<ffffffff81529097>] ? driver_sysfs_add+0x77/0xa0
[<ffffffff815293e4>] driver_probe_device+0x1a4/0x2d0
[<ffffffff813c1ddd>] ? pci_match_device+0xdd/0x110
[<ffffffff81529657>] __device_attach_driver+0xa7/0xb0
[<ffffffff815295b0>] ? __driver_attach+0xa0/0xa0
[<ffffffff81527622>] bus_for_each_drv+0x62/0x90
[<ffffffff8152978d>] __device_attach+0xbd/0x110
[<ffffffff815297fb>] device_attach+0xb/0x10
[<ffffffff813b75ac>] pci_bus_add_device+0x3c/0x70
[<ffffffff813b7618>] pci_bus_add_devices+0x38/0x80
[<ffffffff813dc34e>] pcifront_scan_root+0x13e/0x1a0
[<ffffffff817a0692>] pcifront_backend_changed+0x262/0x60b
[<ffffffff814644c6>] ? xenbus_gather+0xd6/0x160
[<ffffffff8120900f>] ? put_object+0x2f/0x50
[<ffffffff81465c1d>] xenbus_otherend_changed+0x9d/0xa0
[<ffffffff814678ee>] backend_changed+0xe/0x10
[<ffffffff81463a28>] xenwatch_thread+0xc8/0x190
[<ffffffff810f22f0>] ? woken_wake_function+0x10/0x10

which was the result of two things:

When we call pci_scan_root_bus we would pass in an 'sd' (sysdata)
pointer which was a 'pcifront_sd' structure. However, pci_device_add()
expects that 'sd' is a 'struct pci_sysdata' and sets dev->node to what
is in sd->node (offset 4):

  set_dev_node(&dev->dev, pcibus_to_node(bus));

  __pcibus_to_node(const struct pci_bus *bus)
  {
          const struct pci_sysdata *sd = bus->sysdata;

          return sd->node;
  }

However our structure was pcifront_sd which had nothing at that
offset:

  struct pcifront_sd {
          int                      domain;  /*     0     4 */
          /* XXX 4 bytes hole, try to pack */
          struct pcifront_device * pdev;    /*     8     8 */
  }

That is a hole - filled with garbage because we used kmalloc instead of
kzalloc (the second problem).

This patch fixes the issue by:
1) Use kzalloc to initialize to a well known state.
2) Put 'struct pci_sysdata' at the start of 'pcifront_sd'. That
way access to the 'node' will access the right offset.

Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/pci/xen-pcifront.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)

--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -52,7 +52,7 @@ struct pcifront_device {
};

struct pcifront_sd {
- int domain;
+ struct pci_sysdata sd;
struct pcifront_device *pdev;
};

@@ -66,7 +66,9 @@ static inline void pcifront_init_sd(stru
unsigned int domain, unsigned int bus,
struct pcifront_device *pdev)
{
- sd->domain = domain;
+ /* Because we do not expose that information via XenBus. */
+ sd->sd.node = first_online_node;
+ sd->sd.domain = domain;
sd->pdev = pdev;
}

@@ -464,8 +466,8 @@ static int pcifront_scan_root(struct pci
dev_info(&pdev->xdev->dev, "Creating PCI Frontend Bus %04x:%02x\n",
domain, bus);

- bus_entry = kmalloc(sizeof(*bus_entry), GFP_KERNEL);
- sd = kmalloc(sizeof(*sd), GFP_KERNEL);
+ bus_entry = kzalloc(sizeof(*bus_entry), GFP_KERNEL);
+ sd = kzalloc(sizeof(*sd), GFP_KERNEL);
if (!bus_entry || !sd) {
err = -ENOMEM;
goto err_out;
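
The layout mismatch can be reproduced with a few userspace structs; the
following is only an illustration, the models mirror but are not the
kernel definitions:

  #include <stdio.h>
  #include <stddef.h>

  struct pci_sysdata_model {            /* stands in for struct pci_sysdata */
          int domain;
          int node;                     /* generic PCI code reads this offset */
  };

  struct pcifront_device_model { int dummy; };

  struct pcifront_sd_old {              /* what pcifront used to hand out */
          int domain;
          struct pcifront_device_model *pdev;
  };

  struct pcifront_sd_new {              /* the fix: generic header comes first */
          struct pci_sysdata_model sd;
          struct pcifront_device_model *pdev;
  };

  int main(void)
  {
          printf("node is read at offset %zu\n",
                 offsetof(struct pci_sysdata_model, node));
          printf("old layout has only padding there, pdev at offset %zu\n",
                 offsetof(struct pcifront_sd_old, pdev));
          printf("new layout has node at offset %zu\n",
                 offsetof(struct pcifront_sd_new, sd.node));
          return 0;
  }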

2016-03-02 01:42:32

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 127/130] kernel/resource.c: fix muxed resource handling in __request_region()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Simon Guinot <[email protected]>

commit 59ceeaaf355fa0fb16558ef7c24413c804932ada upstream.

In __request_region, if a conflict with a BUSY and MUXED resource is
detected, then the caller goes to sleep and waits for the resource to be
released. A pointer to the conflicting resource is kept. At wake-up
this pointer is used as a parent to retry requesting the region.

A first problem is that this pointer might well be invalid (if for
example the conflicting resource has already been freed). Another
problem is that the next call to __request_region() fails to detect a
remaining conflict. The previously conflicting resource is passed as a
parameter and __request_region() will look for a conflict among the
children of this resource and not at the resource itself. It is likely
to succeed anyway, even if there is still a conflict.

Instead, the parent of the conflicting resource should be passed to
__request_region().

As a fix, this patch doesn't update the parent resource pointer in the
case we have to wait for a muxed region right after.

Reported-and-tested-by: Vincent Pelletier <[email protected]>
Signed-off-by: Simon Guinot <[email protected]>
Tested-by: Vincent Donnefort <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/resource.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -961,9 +961,10 @@ struct resource * __request_region(struc
if (!conflict)
break;
if (conflict != parent) {
- parent = conflict;
- if (!(conflict->flags & IORESOURCE_BUSY))
+ if (!(conflict->flags & IORESOURCE_BUSY)) {
+ parent = conflict;
continue;
+ }
}
if (conflict->flags & flags & IORESOURCE_MUXED) {
add_wait_queue(&muxed_resource_wait, &wait);

2016-03-02 01:42:30

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 110/130] drm/qxl: use kmalloc_array to alloc reloc_info in qxl_process_single_command

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Gerd Hoffmann <[email protected]>

commit 34855706c30d52b0a744da44348b5d1cc39fbe51 upstream.

This avoids integer overflows on 32bit machines when calculating
reloc_info size, as reported by Alan Cox.

Cc: [email protected]
Signed-off-by: Gerd Hoffmann <[email protected]>
Reviewed-by: Daniel Vetter <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/qxl/qxl_ioctl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/gpu/drm/qxl/qxl_ioctl.c
+++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
@@ -168,7 +168,8 @@ static int qxl_process_single_command(st
cmd->command_size))
return -EFAULT;

- reloc_info = kmalloc(sizeof(struct qxl_reloc_info) * cmd->relocs_num, GFP_KERNEL);
+ reloc_info = kmalloc_array(cmd->relocs_num,
+ sizeof(struct qxl_reloc_info), GFP_KERNEL);
if (!reloc_info)
return -ENOMEM;
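
What kmalloc_array() adds over the open-coded multiplication is an
overflow check; a userspace sketch of the idea (the struct is
illustrative, not the real qxl_reloc_info):

  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>

  struct reloc_model { void *dst; void *src; uint64_t offset; };

  /* refuse element counts that would overflow size_t, as kmalloc_array() does */
  static void *alloc_array(size_t n, size_t size)
  {
          if (size != 0 && n > SIZE_MAX / size)
                  return NULL;
          return malloc(n * size);
  }

  int main(void)
  {
          /* on a 32-bit machine a crafted relocs_num like this would wrap
           * a plain n * size multiplication into a tiny allocation */
          size_t bogus = SIZE_MAX / sizeof(struct reloc_model) + 1;
          void *p = alloc_array(bogus, sizeof(struct reloc_model));
          void *q = alloc_array(16, sizeof(struct reloc_model));

          printf("bogus count -> %p\n", p);      /* NULL: rejected */
          printf("sane count  -> %p\n", q);
          free(q);
          return 0;
  }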

2016-03-01 23:54:08

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 105/130] sparc64: fix incorrect sign extension in sys_sparc64_personality

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dmitry V. Levin <[email protected]>

commit 525fd5a94e1be0776fa652df5c687697db508c91 upstream.

The value returned by sys_personality has type "long int".
It is saved to a variable of type "int", which is not a problem
yet because the type of task_struct->personality is "unsigned int".
The problem is the sign extension from "int" to "long int"
that happens on return from sys_sparc64_personality.

For example, a userspace call personality((unsigned) -EINVAL) will
result in any subsequent personality call, including absolutely
harmless read-only personality(0xffffffff) call, failing with
errno set to EINVAL.

Signed-off-by: Dmitry V. Levin <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/sparc/kernel/sys_sparc_64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -412,7 +412,7 @@ out:

SYSCALL_DEFINE1(sparc64_personality, unsigned long, personality)
{
- int ret;
+ long ret;

if (personality(current->personality) == PER_LINUX32 &&
personality(personality) == PER_LINUX)
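
The sign extension is plain C and easy to reproduce in userspace; a
minimal sketch (the conversion of 0xffffffff to int is what the old code
did implicitly):

  #include <stdio.h>

  int main(void)
  {
          /* sys_personality() returns the previous personality as a long;
           * 0xffffffff is a legitimate value for it */
          unsigned int old = 0xffffffffu;

          int ret_int = old;          /* old code: int ret */
          long ret_long = old;        /* fixed code: long ret */

          /* on a 64-bit kernel the int sign-extends to -1, which looks
           * like an error to userspace; the long keeps the real value */
          printf("via int : %ld\n", (long)ret_int);
          printf("via long: %ld\n", ret_long);
          return 0;
  }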

2016-03-02 01:42:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 113/130] ACPI / PCI / hotplug: unlock in error path in acpiphp_enable_slot()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Insu Yun <[email protected]>

commit 2c3033a0664dfae91e1dee7fabac10f24354b958 upstream.

In acpiphp_enable_slot(), there is a missing unlock path
when error occurred. It needs to be unlocked before returning
an error.

Signed-off-by: Insu Yun <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/pci/hotplug/acpiphp_glue.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -1133,8 +1133,10 @@ int acpiphp_enable_slot(struct acpiphp_s
{
pci_lock_rescan_remove();

- if (slot->flags & SLOT_IS_GOING_AWAY)
+ if (slot->flags & SLOT_IS_GOING_AWAY) {
+ pci_unlock_rescan_remove();
return -ENODEV;
+ }

mutex_lock(&slot->crit_sect);
/* configure all functions */

2016-03-02 01:43:26

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 112/130] drm/radeon/pm: adjust display configuration after powerstate

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alex Deucher <[email protected]>

commit 39d4275058baf53e89203407bf3841ff2c74fa32 upstream.

set_power_state defaults to no displays, so we need to update
the display configuration after setting up the powerstate on the
first call. In most cases this is not an issue since it ends up
getting called multiple times at any given modeset and the proper
order is achieved in the display changed handling at the top of
the function.

Reviewed-by: Christian König <[email protected]>
Acked-by: Jordan Lazare <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/radeon_pm.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -915,8 +915,6 @@ static void radeon_dpm_change_power_stat

/* update display watermarks based on new power state */
radeon_bandwidth_update(rdev);
- /* update displays */
- radeon_dpm_display_configuration_changed(rdev);

rdev->pm.dpm.current_active_crtcs = rdev->pm.dpm.new_active_crtcs;
rdev->pm.dpm.current_active_crtc_count = rdev->pm.dpm.new_active_crtc_count;
@@ -936,6 +934,9 @@ static void radeon_dpm_change_power_stat

radeon_dpm_post_set_power_state(rdev);

+ /* update displays */
+ radeon_dpm_display_configuration_changed(rdev);
+
if (rdev->asic->dpm.force_performance_level) {
if (rdev->pm.dpm.thermal_active) {
enum radeon_dpm_forced_level level = rdev->pm.dpm.forced_level;

2016-03-02 01:43:28

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 128/130] do_last(): don't let a bogus return value from ->open() et.al. to confuse us

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit c80567c82ae4814a41287618e315a60ecf513be6 upstream.

... into returning a positive to path_openat(), which would interpret that
as "symlink had been encountered" and proceed to corrupt memory, etc.
It can only happen due to a bug in some ->open() instance or in some LSM
hook, etc., so we report any such event *and* make sure it doesn't trick
us into further unpleasantness.

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/namei.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/fs/namei.c
+++ b/fs/namei.c
@@ -3085,6 +3085,10 @@ opened:
goto exit_fput;
}
out:
+ if (unlikely(error > 0)) {
+ WARN_ON(1);
+ error = -EINVAL;
+ }
if (got_write)
mnt_drop_write(nd->path.mnt);
path_put(&save_parent);

2016-03-02 01:43:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 126/130] sunrpc/cache: fix off-by-one in qword_get()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Stefan Hajnoczi <[email protected]>

commit b7052cd7bcf3c1478796e93e3dff2b44c9e82943 upstream.

The qword_get() function NUL-terminates its output buffer. If the input
string is in hex format \xXXXX... and the same length as the output
buffer, there is an off-by-one:

int qword_get(char **bpp, char *dest, int bufsize)
{
        ...
        while (len < bufsize) {
                ...
                *dest++ = (h << 4) | l;
                len++;
        }
        ...
        *dest = '\0';
        return len;
}

This patch ensures the NUL terminator doesn't fall outside the output
buffer.

Signed-off-by: Stefan Hajnoczi <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/sunrpc/cache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -1230,7 +1230,7 @@ int qword_get(char **bpp, char *dest, in
if (bp[0] == '\\' && bp[1] == 'x') {
/* HEX STRING */
bp += 2;
- while (len < bufsize) {
+ while (len < bufsize - 1) {
int h, l;

h = hex_to_bin(bp[0]);
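
The off-by-one reduced to its loop bound, as a userspace sketch (the hex
parsing itself is irrelevant to the bug):

  #include <stdio.h>

  #define BUFSIZE 4

  int main(void)
  {
          int len;

          /* old bound: len can reach BUFSIZE, so the terminating
           * dest[len] = '\0' would land one byte past the buffer */
          for (len = 0; len < BUFSIZE; )
                  len++;
          printf("old bound: NUL written at dest[%d] of a %d-byte buffer\n",
                 len, BUFSIZE);

          /* fixed bound: len stops at BUFSIZE - 1, leaving room for the NUL */
          for (len = 0; len < BUFSIZE - 1; )
                  len++;
          printf("new bound: NUL written at dest[%d]\n", len);
          return 0;
  }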

2016-03-02 01:44:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 121/130] PCI/AER: Flush workqueue on device remove to avoid use-after-free

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <[email protected]>

commit 4ae2182b1e3407de369f8c5d799543b7db74221b upstream.

A Root Port's AER structure (rpc) contains a queue of events. aer_irq()
enqueues AER status information and schedules aer_isr() to dequeue and
process it. When we remove a device, aer_remove() waits for the queue to
be empty, then frees the rpc struct.

But aer_isr() references the rpc struct after dequeueing and possibly
emptying the queue, which can cause a use-after-free error as in the
following scenario with two threads, aer_isr() on the left and a
concurrent aer_remove() on the right:

  Thread A                         Thread B
  --------                         --------
  aer_irq():
    rpc->prod_idx++
                                   aer_remove():
                                     wait_event(rpc->prod_idx == rpc->cons_idx)
                                     # now blocked until queue becomes empty
  aer_isr():                         # ...
    rpc->cons_idx++                  # unblocked because queue is now empty
    ...                              kfree(rpc)
    mutex_unlock(&rpc->rpc_mutex)

To prevent this problem, use flush_work() to wait until the last scheduled
instance of aer_isr() has completed before freeing the rpc struct in
aer_remove().

I reproduced this use-after-free by flashing a device FPGA and
re-enumerating the bus to find the new device. With SLUB debug, this
crashes with 0x6b bytes (POISON_FREE, the use-after-free magic number) in
GPR25:

pcieport 0000:00:00.0: AER: Multiple Corrected error received: id=0000
Unable to handle kernel paging request for data at address 0x27ef9e3e
Workqueue: events aer_isr
GPR24: dd6aa000 6b6b6b6b 605f8378 605f8360 d99b12c0 604fc674 606b1704 d99b12c0
NIP [602f5328] pci_walk_bus+0xd4/0x104

[bhelgaas: changelog, stable tag]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/pci/pcie/aer/aerdrv.c | 4 +---
drivers/pci/pcie/aer/aerdrv.h | 1 -
drivers/pci/pcie/aer/aerdrv_core.c | 2 --
3 files changed, 1 insertion(+), 6 deletions(-)

--- a/drivers/pci/pcie/aer/aerdrv.c
+++ b/drivers/pci/pcie/aer/aerdrv.c
@@ -262,7 +262,6 @@ static struct aer_rpc *aer_alloc_rpc(str
rpc->rpd = dev;
INIT_WORK(&rpc->dpc_handler, aer_isr);
mutex_init(&rpc->rpc_mutex);
- init_waitqueue_head(&rpc->wait_release);

/* Use PCIe bus function to store rpc into PCIe device */
set_service_data(dev, rpc);
@@ -285,8 +284,7 @@ static void aer_remove(struct pcie_devic
if (rpc->isr)
free_irq(dev->irq, dev);

- wait_event(rpc->wait_release, rpc->prod_idx == rpc->cons_idx);
-
+ flush_work(&rpc->dpc_handler);
aer_disable_rootport(rpc);
kfree(rpc);
set_service_data(dev, NULL);
--- a/drivers/pci/pcie/aer/aerdrv.h
+++ b/drivers/pci/pcie/aer/aerdrv.h
@@ -72,7 +72,6 @@ struct aer_rpc {
* recovery on the same
* root port hierarchy
*/
- wait_queue_head_t wait_release;
};

struct aer_broadcast_data {
--- a/drivers/pci/pcie/aer/aerdrv_core.c
+++ b/drivers/pci/pcie/aer/aerdrv_core.c
@@ -785,8 +785,6 @@ void aer_isr(struct work_struct *work)
while (get_e_source(rpc, &e_src))
aer_isr_one_error(p_device, &e_src);
mutex_unlock(&rpc->rpc_mutex);
-
- wake_up(&rpc->wait_release);
}

/**
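
The teardown pattern the fix adopts, reduced to a rough kernel-style
sketch with made-up names (not the AER driver's actual structures):

  #include <linux/workqueue.h>
  #include <linux/slab.h>

  struct my_rpc {
          struct work_struct dpc_handler;
          /* ... event queue consumed by the handler ... */
  };

  static void my_isr(struct work_struct *work)
  {
          struct my_rpc *rpc = container_of(work, struct my_rpc, dpc_handler);

          /* dequeue and process events, then touch rpc again afterwards */
          (void)rpc;
  }

  static struct my_rpc *my_alloc(void)
  {
          struct my_rpc *rpc = kzalloc(sizeof(*rpc), GFP_KERNEL);

          if (rpc)
                  INIT_WORK(&rpc->dpc_handler, my_isr);
          return rpc;
  }

  static void my_remove(struct my_rpc *rpc)
  {
          /* waiting for a "queue empty" condition is not enough: the work
           * function may still be running past that point; flush_work()
           * waits for the last queued instance to return */
          flush_work(&rpc->dpc_handler);
          kfree(rpc);
  }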

2016-03-02 01:44:15

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 124/130] KVM: async_pf: do not warn on page allocation failures

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Christian Borntraeger <[email protected]>

commit d7444794a02ff655eda87e3cc54e86b940e7736f upstream.

In async_pf we try to allocate with NOWAIT to get an element quickly
or fail. This code also handles failures gracefully. Let's silence
potential page allocation failures under load.

qemu-system-s39: page allocation failure: order:0,mode:0x2200000
[...]
Call Trace:
([<00000000001146b8>] show_trace+0xf8/0x148)
[<000000000011476a>] show_stack+0x62/0xe8
[<00000000004a36b8>] dump_stack+0x70/0x98
[<0000000000272c3a>] warn_alloc_failed+0xd2/0x148
[<000000000027709e>] __alloc_pages_nodemask+0x94e/0xb38
[<00000000002cd36a>] new_slab+0x382/0x400
[<00000000002cf7ac>] ___slab_alloc.constprop.30+0x2dc/0x378
[<00000000002d03d0>] kmem_cache_alloc+0x160/0x1d0
[<0000000000133db4>] kvm_setup_async_pf+0x6c/0x198
[<000000000013dee8>] kvm_arch_vcpu_ioctl_run+0xd48/0xd58
[<000000000012fcaa>] kvm_vcpu_ioctl+0x372/0x690
[<00000000002f66f6>] do_vfs_ioctl+0x3be/0x510
[<00000000002f68ec>] SyS_ioctl+0xa4/0xb8
[<0000000000781c5e>] system_call+0xd6/0x264
[<000003ffa24fa06a>] 0x3ffa24fa06a

Signed-off-by: Christian Borntraeger <[email protected]>
Reviewed-by: Dominik Dingel <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
virt/kvm/async_pf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -152,7 +152,7 @@ int kvm_setup_async_pf(struct kvm_vcpu *
* do alloc nowait since if we are going to sleep anyway we
* may as well sleep faulting in page
*/
- work = kmem_cache_zalloc(async_pf_cache, GFP_NOWAIT);
+ work = kmem_cache_zalloc(async_pf_cache, GFP_NOWAIT | __GFP_NOWARN);
if (!work)
return 0;

2016-03-02 01:44:09

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 125/130] tracing: Fix showing function event in available_events

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Steven Rostedt (Red Hat) <[email protected]>

commit d045437a169f899dfb0f6f7ede24cc042543ced9 upstream.

The ftrace:function event is only displayed for parsing the function tracer
data. It is not used to enable function tracing, and does not include an
"enable" file in its event directory.

Originally, this event was kept separate from other events because it did
not have a ->reg parameter. But perf added a "reg" parameter for its use
which caused issues, because it made the event available to functions where
it was not compatible for.

Commit 9b63776fa3ca9 "tracing: Do not enable function event with enable"
added a TRACE_EVENT_FL_IGNORE_ENABLE flag that prevented the function event
from being enabled by normal trace events. But this commit missed keeping
the function event from being displayed by the "available_events" directory,
which is used to show what events can be enabled by set_event.

One documented way to enable all events is to:

cat available_events > set_event

But because the function event is displayed in the available_events, this
now causes an INVALID error:

cat: write error: Invalid argument

Reported-by: Chunyu Hu <[email protected]>
Fixes: 9b63776fa3ca9 "tracing: Do not enable function event with enable"
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/trace/trace_events.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -606,7 +606,8 @@ t_next(struct seq_file *m, void *v, loff
* The ftrace subsystem is for showing formats only.
* They can not be enabled or disabled via the event files.
*/
- if (call->class && call->class->reg)
+ if (call->class && call->class->reg &&
+ !(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE))
return file;
}

2016-03-01 23:54:04

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 108/130] drm/radeon: hold reference to fences in radeon_sa_bo_new

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nicolai Hähnle <[email protected]>

commit f6ff4f67cdf8455d0a4226eeeaf5af17c37d05eb upstream.

An arbitrary amount of time can pass between spin_unlock and
radeon_fence_wait_any, so we need to ensure that nobody frees the
fences from under us.

Based on the analogous fix for amdgpu.

Signed-off-by: Nicolai Hähnle <[email protected]>
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/radeon_sa.c | 5 +++++
1 file changed, 5 insertions(+)

--- a/drivers/gpu/drm/radeon/radeon_sa.c
+++ b/drivers/gpu/drm/radeon/radeon_sa.c
@@ -349,8 +349,13 @@ int radeon_sa_bo_new(struct radeon_devic
/* see if we can skip over some allocations */
} while (radeon_sa_bo_next_hole(sa_manager, fences, tries));

+ for (i = 0; i < RADEON_NUM_RINGS; ++i)
+ radeon_fence_ref(fences[i]);
+
spin_unlock(&sa_manager->wq.lock);
r = radeon_fence_wait_any(rdev, fences, false);
+ for (i = 0; i < RADEON_NUM_RINGS; ++i)
+ radeon_fence_unref(&fences[i]);
spin_lock(&sa_manager->wq.lock);
/* if we have nothing to wait for block */
if (r == -ENOENT && block) {
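
The general rule being applied here, sketched with generic kernel
refcounting rather than radeon's fence API (illustrative only):

  #include <linux/kref.h>
  #include <linux/spinlock.h>
  #include <linux/slab.h>

  struct obj {
          struct kref ref;
          /* ... */
  };

  static void obj_release(struct kref *ref)
  {
          kfree(container_of(ref, struct obj, ref));
  }

  static void use_after_unlock(spinlock_t *lock, struct obj *o)
  {
          spin_lock(lock);
          kref_get(&o->ref);      /* keep o alive once the lock is dropped */
          spin_unlock(lock);

          /* an arbitrary amount of time may pass here, just as between
           * spin_unlock() and radeon_fence_wait_any() */

          kref_put(&o->ref, obj_release);
  }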

2016-03-02 01:45:54

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 123/130] NFSv4: Fix a dentry leak on alias use

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Benjamin Coddington <[email protected]>

commit d9dfd8d741683347ee159d25f5b50c346a0df557 upstream.

In the case where d_add_unique() finds an appropriate alias to use it will
have already incremented the reference count. An additional dget() to swap
the open context's dentry is unnecessary and will leak a reference.

Signed-off-by: Benjamin Coddington <[email protected]>
Fixes: 275bb307865a3 ("NFSv4: Move dentry instantiation into the NFSv4-...")
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/nfs/nfs4proc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2187,9 +2187,9 @@ static int _nfs4_open_and_get_state(stru
dentry = d_add_unique(dentry, igrab(state->inode));
if (dentry == NULL) {
dentry = opendata->dentry;
- } else if (dentry != ctx->dentry) {
+ } else {
dput(ctx->dentry);
- ctx->dentry = dget(dentry);
+ ctx->dentry = dentry;
}
nfs_set_verifier(dentry,
nfs_save_change_attribute(opendata->dir->d_inode));

2016-03-02 01:46:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 061/130] storvsc: Don't set the SRB_FLAGS_QUEUE_ACTION_ENABLE flag

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: K. Y. Srinivasan <[email protected]>

commit 8cf308e1225f5f93575f03cc4dbef24516fa81c9 upstream.

Don't set the SRB_FLAGS_QUEUE_ACTION_ENABLE flag since we are not specifying
tags. Without this, the qlogic driver doesn't work properly with storvsc.

Signed-off-by: K. Y. Srinivasan <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/storvsc_drv.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1610,8 +1610,7 @@ static int storvsc_queuecommand(struct S
vm_srb->win8_extension.time_out_value = 60;

vm_srb->win8_extension.srb_flags |=
- (SRB_FLAGS_QUEUE_ACTION_ENABLE |
- SRB_FLAGS_DISABLE_SYNCH_TRANSFER);
+ SRB_FLAGS_DISABLE_SYNCH_TRANSFER;

/* Build the SRB */
switch (scmnd->sc_data_direction) {

2016-03-02 01:46:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 111/130] drm/radeon: use post-decrement in error handling

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Rasmus Villemoes <[email protected]>

commit bc3f5d8c4ca01555820617eb3b6c0857e4df710d upstream.

We need to use post-decrement to get the pci_map_page undone also for
i==0, and to avoid some very unpleasant behaviour if pci_map_page
failed already at i==0.

Reviewed-by: Christian König <[email protected]>
Signed-off-by: Rasmus Villemoes <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -623,7 +623,7 @@ static int radeon_ttm_tt_populate(struct
0, PAGE_SIZE,
PCI_DMA_BIDIRECTIONAL);
if (pci_dma_mapping_error(rdev->pdev, gtt->ttm.dma_address[i])) {
- while (--i) {
+ while (i--) {
pci_unmap_page(rdev->pdev, gtt->ttm.dma_address[i],
PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
gtt->ttm.dma_address[i] = 0;
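
The difference between the two decrement forms, shown as a standalone
userspace loop:

  #include <stdio.h>

  int main(void)
  {
          /* suppose mappings 0..2 succeeded and mapping 3 failed */
          int i = 3;

          /* buggy form: while (--i) visits 2 and 1 but never undoes 0,
           * and wraps around if the very first mapping (i == 0) fails */

          /* fixed form: i-- visits 2, 1, 0 and then stops */
          while (i--)
                  printf("unmapping %d\n", i);

          return 0;
  }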

2016-03-02 01:47:26

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 068/130] ring-buffer: Update read stamp with first real commit on page

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Steven Rostedt (Red Hat) <[email protected]>

commit b81f472a208d3e2b4392faa6d17037a89442f4ce upstream.

Do not update the read stamp after swapping out the reader page from the
write buffer. If the reader page is swapped out of the buffer before an
event is written to it, then the read_stamp may get an out of date
timestamp, as the page timestamp is updated on the first commit to that
page.

rb_get_reader_page() only returns a page if it has an event on it, otherwise
it will return NULL. At that point, check if the page being returned has
events and has not been read yet. Then at that point update the read_stamp
to match the time stamp of the reader page.

Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/trace/ring_buffer.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)

--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1949,12 +1949,6 @@ rb_set_commit_to_write(struct ring_buffe
goto again;
}

-static void rb_reset_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
-{
- cpu_buffer->read_stamp = cpu_buffer->reader_page->page->time_stamp;
- cpu_buffer->reader_page->read = 0;
-}
-
static void rb_inc_iter(struct ring_buffer_iter *iter)
{
struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
@@ -3592,7 +3586,7 @@ rb_get_reader_page(struct ring_buffer_pe

/* Finally update the reader page to the new head */
cpu_buffer->reader_page = reader;
- rb_reset_reader_page(cpu_buffer);
+ cpu_buffer->reader_page->read = 0;

if (overwrite != cpu_buffer->last_overrun) {
cpu_buffer->lost_events = overwrite - cpu_buffer->last_overrun;
@@ -3602,6 +3596,10 @@ rb_get_reader_page(struct ring_buffer_pe
goto again;

out:
+ /* Update the read_stamp on the first event */
+ if (reader && reader->read == 0)
+ cpu_buffer->read_stamp = reader->page->time_stamp;
+
arch_spin_unlock(&cpu_buffer->lock);
local_irq_restore(flags);

2016-03-02 01:47:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 069/130] virtio: fix memory leak of virtio ida cache layers

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Suman Anna <[email protected]>

commit c13f99b7e945dad5273a8b7ee230f4d1f22d3354 upstream.

The virtio core uses a static ida named virtio_index_ida for
assigning index numbers to virtio devices during registration.
The ida core may allocate some internal idr cache layers and
an ida bitmap upon any ida allocation, and all these layers are
truely freed only upon the ida destruction. The virtio_index_ida
is not destroyed at present, leading to a memory leak when using
the virtio core as a module and atleast one virtio device is
registered and unregistered.

Fix this by invoking ida_destroy() in the virtio core module
exit.

Signed-off-by: Suman Anna <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/virtio/virtio.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -249,6 +249,7 @@ static int virtio_init(void)
static void __exit virtio_exit(void)
{
bus_unregister(&virtio_bus);
+ ida_destroy(&virtio_index_ida);
}
core_initcall(virtio_init);
module_exit(virtio_exit);
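
The same pattern in stand-alone module form, with made-up names and the
ida API as it looked around 3.14:

  #include <linux/idr.h>
  #include <linux/gfp.h>
  #include <linux/module.h>

  static DEFINE_IDA(my_index_ida);

  static int __init my_init(void)
  {
          int id = ida_simple_get(&my_index_ida, 0, 0, GFP_KERNEL);

          if (id >= 0)
                  ida_simple_remove(&my_index_ida, id);
          /* the ida still holds cached idr layers and a bitmap here */
          return 0;
  }

  static void __exit my_exit(void)
  {
          /* frees the cached layers; without this the module leaks them
           * on unload */
          ida_destroy(&my_index_ida);
  }

  module_init(my_init);
  module_exit(my_exit);
  MODULE_LICENSE("GPL");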

2016-03-02 01:47:54

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 070/130] mac80211: mesh: fix call_rcu() usage

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Johannes Berg <[email protected]>

commit c2e703a55245bfff3db53b1f7cbe59f1ee8a4339 upstream.

When using call_rcu(), the called function may be delayed quite
significantly, and without a matching rcu_barrier() there's no
way to be sure it has finished.
Therefore, global state that could be gone/freed/reused should
never be touched in the callback.

Fix this in mesh by moving the atomic_dec() into the caller;
that's not really a problem since we already unlinked the path
and it will be destroyed anyway.

This fixes a crash Jouni observed when running certain tests in
a certain order, in which the mesh interface was torn down, the
memory reused for a function pointer (work struct) and running
that then crashed since the pointer had been decremented by 1,
resulting in an invalid instruction byte stream.

Fixes: eb2b9311fd00 ("mac80211: mesh path table implementation")
Reported-by: Jouni Malinen <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/mac80211/mesh_pathtbl.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--- a/net/mac80211/mesh_pathtbl.c
+++ b/net/mac80211/mesh_pathtbl.c
@@ -746,10 +746,8 @@ void mesh_plink_broken(struct sta_info *
static void mesh_path_node_reclaim(struct rcu_head *rp)
{
struct mpath_node *node = container_of(rp, struct mpath_node, rcu);
- struct ieee80211_sub_if_data *sdata = node->mpath->sdata;

del_timer_sync(&node->mpath->timer);
- atomic_dec(&sdata->u.mesh.mpaths);
kfree(node->mpath);
kfree(node);
}
@@ -757,8 +755,9 @@ static void mesh_path_node_reclaim(struc
/* needs to be called with the corresponding hashwlock taken */
static void __mesh_path_del(struct mesh_table *tbl, struct mpath_node *node)
{
- struct mesh_path *mpath;
- mpath = node->mpath;
+ struct mesh_path *mpath = node->mpath;
+ struct ieee80211_sub_if_data *sdata = node->mpath->sdata;
+
spin_lock(&mpath->state_lock);
mpath->flags |= MESH_PATH_RESOLVING;
if (mpath->is_gate)
@@ -766,6 +765,7 @@ static void __mesh_path_del(struct mesh_
hlist_del_rcu(&node->list);
call_rcu(&node->rcu, mesh_path_node_reclaim);
spin_unlock(&mpath->state_lock);
+ atomic_dec(&sdata->u.mesh.mpaths);
atomic_dec(&tbl->entries);
}

2016-03-02 01:48:13

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 066/130] target: Fix race for SCF_COMPARE_AND_WRITE_POST checking

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nicholas Bellinger <[email protected]>

commit 057085e522f8bf94c2e691a5b76880f68060f8ba upstream.

This patch addresses a race + use after free where the first
stage of COMPARE_AND_WRITE in compare_and_write_callback()
is rescheduled after the backend sends the secondary WRITE,
resulting in second stage compare_and_write_post() callback
completing in target_complete_ok_work() before the first
can return.

Because current code depends on checking se_cmd->se_cmd_flags
after return from se_cmd->transport_complete_callback(),
this results in first stage having SCF_COMPARE_AND_WRITE_POST
set, which incorrectly falls through into second stage CAW
processing code, eventually triggering a NULL pointer
dereference due to use after free.

To address this bug, pass in a new *post_ret parameter into
se_cmd->transport_complete_callback(), and depend upon this
value instead of ->se_cmd_flags to determine when to return
or fall through into ->queue_status() code for CAW.

Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/target_core_sbc.c | 13 +++++++++----
drivers/target/target_core_transport.c | 16 +++++++++-------
include/target/target_core_base.h | 2 +-
3 files changed, 19 insertions(+), 12 deletions(-)

--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -314,7 +314,8 @@ sbc_setup_write_same(struct se_cmd *cmd,
return 0;
}

-static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success)
+static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success,
+ int *post_ret)
{
unsigned char *buf, *addr;
struct scatterlist *sg;
@@ -378,7 +379,8 @@ sbc_execute_rw(struct se_cmd *cmd)
cmd->data_direction);
}

-static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success)
+static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success,
+ int *post_ret)
{
struct se_device *dev = cmd->se_dev;

@@ -388,8 +390,10 @@ static sense_reason_t compare_and_write_
* sent to the backend driver.
*/
spin_lock_irq(&cmd->t_state_lock);
- if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status)
+ if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status) {
cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST;
+ *post_ret = 1;
+ }
spin_unlock_irq(&cmd->t_state_lock);

/*
@@ -401,7 +405,8 @@ static sense_reason_t compare_and_write_
return TCM_NO_SENSE;
}

-static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success)
+static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success,
+ int *post_ret)
{
struct se_device *dev = cmd->se_dev;
struct scatterlist *write_sg = NULL, *sg;
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1581,7 +1581,7 @@ bool target_stop_cmd(struct se_cmd *cmd,
void transport_generic_request_failure(struct se_cmd *cmd,
sense_reason_t sense_reason)
{
- int ret = 0;
+ int ret = 0, post_ret = 0;

pr_debug("-----[ Storage Engine Exception for cmd: %p ITT: 0x%08x"
" CDB: 0x%02x\n", cmd, cmd->se_tfo->get_task_tag(cmd),
@@ -1604,7 +1604,7 @@ void transport_generic_request_failure(s
*/
if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
cmd->transport_complete_callback)
- cmd->transport_complete_callback(cmd, false);
+ cmd->transport_complete_callback(cmd, false, &post_ret);

switch (sense_reason) {
case TCM_NON_EXISTENT_LUN:
@@ -1940,11 +1940,13 @@ static void target_complete_ok_work(stru
*/
if (cmd->transport_complete_callback) {
sense_reason_t rc;
-
- rc = cmd->transport_complete_callback(cmd, true);
- if (!rc && !(cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE_POST)) {
- if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
- !cmd->data_length)
+ bool caw = (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE);
+ bool zero_dl = !(cmd->data_length);
+ int post_ret = 0;
+
+ rc = cmd->transport_complete_callback(cmd, true, &post_ret);
+ if (!rc && !post_ret) {
+ if (caw && zero_dl)
goto queue_rsp;

return;
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -513,7 +513,7 @@ struct se_cmd {
sense_reason_t (*execute_cmd)(struct se_cmd *);
sense_reason_t (*execute_rw)(struct se_cmd *, struct scatterlist *,
u32, enum dma_data_direction);
- sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool);
+ sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool, int *);

unsigned char *t_task_cdb;
unsigned char __t_task_cdb[TCM_MAX_COMMAND_SIZE];

2016-03-02 01:48:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 067/130] target: fix COMPARE_AND_WRITE non zero SGL offset data corruption

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jan Engelhardt <[email protected]>

commit d94e5a61357a04938ce14d6033b4d33a3c5fd780 upstream.

target_core_sbc's compare_and_write functionality suffers from taking
data at the wrong memory location when writing a CAW request to disk
when a SGL offset is non-zero.

This can happen with loopback and vhost-scsi fabric drivers when
SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is used to map existing user-space
SGL memory into COMPARE_AND_WRITE READ/WRITE payload buffers.

Given the following sample LIO subtopology,

% targetcli ls /loopback/
o- loopback ................................. [1 Target]
o- naa.6001405ebb8df14a ....... [naa.60014059143ed2b3]
o- luns ................................... [2 LUNs]
o- lun0 ................ [iblock/ram0 (/dev/ram0)]
o- lun1 ................ [iblock/ram1 (/dev/ram1)]
% lsscsi -g
[3:0:1:0] disk LIO-ORG IBLOCK 4.0 /dev/sdc /dev/sg3
[3:0:1:1] disk LIO-ORG IBLOCK 4.0 /dev/sdd /dev/sg4

the following bug can be observed in Linux 4.3 and 4.4~rc1:

% perl -e 'print chr$_ for 0..255,reverse 0..255' >rand
% perl -e 'print "\0" x 512' >zero
% cat rand >/dev/sdd
% sg_compare_and_write -i rand -D zero --lba 0 /dev/sdd
% sg_compare_and_write -i zero -D rand --lba 0 /dev/sdd
Miscompare reported
% hexdump -Cn 512 /dev/sdd
00000000 0f 0e 0d 0c 0b 0a 09 08 07 06 05 04 03 02 01 00
00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
00000200

Rather than writing all-zeroes as instructed with the -D file, it
corrupts the data in the sector by splicing some of the original
bytes in. The page of the first entry of cmd->t_data_sg includes the
CDB, and sg->offset is set to a position past the CDB. I presume that
sg->offset is also the right choice to use for subsequent sglist
members.

Signed-off-by: Jan Engelhardt <[email protected]>
Tested-by: Douglas Gilbert <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/target_core_sbc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -502,11 +502,11 @@ static sense_reason_t compare_and_write_

if (block_size < PAGE_SIZE) {
sg_set_page(&write_sg[i], m.page, block_size,
- block_size);
+ m.piter.sg->offset + block_size);
} else {
sg_miter_next(&m);
sg_set_page(&write_sg[i], m.page, block_size,
- 0);
+ m.piter.sg->offset);
}
len -= block_size;
i++;

2016-03-02 01:48:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 107/130] drm/radeon: clean up fujitsu quirks

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alex Deucher <[email protected]>

commit 0eb1c3d4084eeb6fb3a703f88d6ce1521f8fcdd1 upstream.

Combine the two quirks.

bug:
https://bugzilla.kernel.org/show_bug.cgi?id=109481

Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/radeon_atombios.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

--- a/drivers/gpu/drm/radeon/radeon_atombios.c
+++ b/drivers/gpu/drm/radeon/radeon_atombios.c
@@ -436,7 +436,9 @@ static bool radeon_atom_apply_quirks(str
}

/* Fujitsu D3003-S2 board lists DVI-I as DVI-D and VGA */
- if (((dev->pdev->device == 0x9802) || (dev->pdev->device == 0x9806)) &&
+ if (((dev->pdev->device == 0x9802) ||
+ (dev->pdev->device == 0x9805) ||
+ (dev->pdev->device == 0x9806)) &&
(dev->pdev->subsystem_vendor == 0x1734) &&
(dev->pdev->subsystem_device == 0x11bd)) {
if (*connector_type == DRM_MODE_CONNECTOR_VGA) {
@@ -447,14 +449,6 @@ static bool radeon_atom_apply_quirks(str
}
}

- /* Fujitsu D3003-S2 board lists DVI-I as DVI-I and VGA */
- if ((dev->pdev->device == 0x9805) &&
- (dev->pdev->subsystem_vendor == 0x1734) &&
- (dev->pdev->subsystem_device == 0x11bd)) {
- if (*connector_type == DRM_MODE_CONNECTOR_VGA)
- return false;
- }
-
return true;
}

2016-03-01 23:53:50

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 099/130] posix-clock: Fix return code on the poll methods error path

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Richard Cochran <[email protected]>

commit 1b9f23727abb92c5e58f139e7d180befcaa06fe0 upstream.

The posix_clock_poll function is supposed to return a bit mask of
POLLxxx values. However, in case the hardware has disappeared (due to
hot plugging for example) this code returns -ENODEV in a futile
attempt to throw an error at the file descriptor level. The kernel's
file_operations interface does not accept such error codes from the
poll method. Instead, this function ought to return POLLERR.

The value -ENODEV does, in fact, contain the POLLERR bit (and almost
all the other POLLxxx bits as well), but only by chance. This patch
fixes the code to return a proper bit mask.
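
A minimal sketch of the expected shape of such a poll method (the
example_* helpers are invented for illustration):

static unsigned int example_poll(struct file *fp, poll_table *wait)
{
        struct example_dev *dev = example_get_dev(fp);  /* invented helper */
        unsigned int mask = 0;

        if (!dev)
                return POLLERR; /* device gone: report it through the mask */

        poll_wait(fp, &dev->waitq, wait);
        if (example_data_ready(dev))                    /* invented helper */
                mask |= POLLIN | POLLRDNORM;

        example_put_dev(dev);                           /* invented helper */
        return mask;
}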

Credit goes to Markus Elfring for pointing out the suspicious
signed/unsigned mismatch.

Reported-by: Markus Elfring <[email protected]>
Signed-off-by: Richard Cochran <[email protected]>
Cc: John Stultz <[email protected]>
Cc: Julia Lawall <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/time/posix-clock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/time/posix-clock.c
+++ b/kernel/time/posix-clock.c
@@ -69,10 +69,10 @@ static ssize_t posix_clock_read(struct f
static unsigned int posix_clock_poll(struct file *fp, poll_table *wait)
{
struct posix_clock *clk = get_posix_clock(fp);
- int result = 0;
+ unsigned int result = 0;

if (!clk)
- return -ENODEV;
+ return POLLERR;

if (clk->ops.poll)
result = clk->ops.poll(clk, fp, wait);

2016-03-02 01:49:15

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 106/130] drm/vmwgfx: respect 'nomodeset'

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Rob Clark <[email protected]>

commit 96c5d076f0a5e2023ecdb44d8261f87641ee71e0 upstream.

Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 7 +++++++
1 file changed, 7 insertions(+)

--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -25,6 +25,7 @@
*
**************************************************************************/
#include <linux/module.h>
+#include <linux/console.h>

#include <drm/drmP.h>
#include "vmwgfx_drv.h"
@@ -1383,6 +1384,12 @@ static int vmw_probe(struct pci_dev *pde
static int __init vmwgfx_init(void)
{
int ret;
+
+#ifdef CONFIG_VGA_CONSOLE
+ if (vgacon_text_force())
+ return -EINVAL;
+#endif
+
ret = drm_pci_init(&driver, &vmw_pci_driver);
if (ret)
DRM_ERROR("Failed initializing DRM.\n");

2016-03-02 01:49:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 104/130] EDAC: Robustify workqueues destruction

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Borislav Petkov <[email protected]>

commit fcd5c4dd8201595d4c598c9cca5e54760277d687 upstream.

EDAC workqueue destruction is really fragile. We cancel delayed work
but if it is still running and requeues itself, we still go ahead and
destroy the workqueue, and the queued work explodes when the workqueue core
attempts to run it.

Make the destruction more robust by switching op_state to offline so
that requeuing stops. Cancel any pending work *synchronously* too.
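
A rough sketch of the teardown ordering (schematic names, not the EDAC
sources): first stop the work from requeuing itself, then cancel it
synchronously, then drain the workqueue.

static void example_work_fn(struct work_struct *work)
{
        struct example_ctl *ctl =
                container_of(to_delayed_work(work), struct example_ctl, work);

        example_poll_hardware(ctl);             /* invented helper */

        /* requeue only while still online */
        if (ctl->op_state == OP_RUNNING_POLL)
                queue_delayed_work(example_wq, &ctl->work, ctl->period);
}

static void example_teardown(struct example_ctl *ctl)
{
        ctl->op_state = OP_OFFLINE;             /* 1. stop further requeuing      */
        cancel_delayed_work_sync(&ctl->work);   /* 2. wait out a running instance */
        flush_workqueue(example_wq);            /* 3. drain anything still queued */
}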

EDAC i7core: Driver loaded.
general protection fault: 0000 [#1] SMP
CPU 12
Modules linked in:
Supported: Yes
Pid: 0, comm: kworker/0:1 Tainted: G IE 3.0.101-0-default #1 HP ProLiant DL380 G7
RIP: 0010:[<ffffffff8107dcd7>] [<ffffffff8107dcd7>] __queue_work+0x17/0x3f0
< ... regs ...>
Process kworker/0:1 (pid: 0, threadinfo ffff88019def6000, task ffff88019def4600)
Stack:
...
Call Trace:
call_timer_fn
run_timer_softirq
__do_softirq
call_softirq
do_softirq
irq_exit
smp_apic_timer_interrupt
apic_timer_interrupt
intel_idle
cpuidle_idle_call
cpu_idle
Code: ...
RIP __queue_work
RSP <...>

Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/edac/edac_device.c | 11 ++++-------
drivers/edac/edac_mc.c | 14 +++-----------
drivers/edac/edac_pci.c | 9 ++++-----
3 files changed, 11 insertions(+), 23 deletions(-)

--- a/drivers/edac/edac_device.c
+++ b/drivers/edac/edac_device.c
@@ -435,16 +435,13 @@ void edac_device_workq_setup(struct edac
*/
void edac_device_workq_teardown(struct edac_device_ctl_info *edac_dev)
{
- int status;
-
if (!edac_dev->edac_check)
return;

- status = cancel_delayed_work(&edac_dev->work);
- if (status == 0) {
- /* workq instance might be running, wait for it */
- flush_workqueue(edac_workqueue);
- }
+ edac_dev->op_state = OP_OFFLINE;
+
+ cancel_delayed_work_sync(&edac_dev->work);
+ flush_workqueue(edac_workqueue);
}

/*
--- a/drivers/edac/edac_mc.c
+++ b/drivers/edac/edac_mc.c
@@ -584,18 +584,10 @@ static void edac_mc_workq_setup(struct m
*/
static void edac_mc_workq_teardown(struct mem_ctl_info *mci)
{
- int status;
+ mci->op_state = OP_OFFLINE;

- if (mci->op_state != OP_RUNNING_POLL)
- return;
-
- status = cancel_delayed_work(&mci->work);
- if (status == 0) {
- edac_dbg(0, "not canceled, flush the queue\n");
-
- /* workq instance might be running, wait for it */
- flush_workqueue(edac_workqueue);
- }
+ cancel_delayed_work_sync(&mci->work);
+ flush_workqueue(edac_workqueue);
}

/*
--- a/drivers/edac/edac_pci.c
+++ b/drivers/edac/edac_pci.c
@@ -274,13 +274,12 @@ static void edac_pci_workq_setup(struct
*/
static void edac_pci_workq_teardown(struct edac_pci_ctl_info *pci)
{
- int status;
-
edac_dbg(0, "\n");

- status = cancel_delayed_work(&pci->work);
- if (status == 0)
- flush_workqueue(edac_workqueue);
+ pci->op_state = OP_OFFLINE;
+
+ cancel_delayed_work_sync(&pci->work);
+ flush_workqueue(edac_workqueue);
}

/*

2016-03-02 01:49:41

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 103/130] cputime: Prevent 32bit overflow in time[val|spec]_to_cputime()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: zengtao <[email protected]>

commit 0f26922fe5dc5724b1adbbd54b21bad03590b4f3 upstream.

The datatype __kernel_time_t is u32 on 32-bit platforms, so it is subject to
overflow in the timeval/timespec to cputime conversion.

Currently the following functions are affected:
1. setitimer()
2. timer_create/timer_settime()
3. sys_clock_nanosleep

This can happen on MIPS32 and ARM32 with "Full dynticks CPU time accounting"
enabled, which is required for CONFIG_NO_HZ_FULL.

Enforce u64 conversion to prevent the overflow.
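
A small userspace sketch (not kernel code) makes the failure mode
concrete:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t tv_sec = 5;            /* __kernel_time_t is 32-bit here */
        uint64_t wrong = tv_sec * 1000000000U;           /* 32-bit multiply */
        uint64_t right = (uint64_t)tv_sec * 1000000000U; /* 64-bit multiply */

        /* prints "705032704 vs 5000000000": the cast must precede the multiply */
        printf("%llu vs %llu\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
}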

Fixes: 31c1fc818715 ("ARM: Kconfig: allow full nohz CPU accounting")
Signed-off-by: zengtao <[email protected]>
Reviewed-by: Arnd Bergmann <[email protected]>
Cc: <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/asm-generic/cputime_nsecs.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/include/asm-generic/cputime_nsecs.h
+++ b/include/asm-generic/cputime_nsecs.h
@@ -70,7 +70,7 @@ typedef u64 __nocast cputime64_t;
*/
static inline cputime_t timespec_to_cputime(const struct timespec *val)
{
- u64 ret = val->tv_sec * NSEC_PER_SEC + val->tv_nsec;
+ u64 ret = (u64)val->tv_sec * NSEC_PER_SEC + val->tv_nsec;
return (__force cputime_t) ret;
}
static inline void cputime_to_timespec(const cputime_t ct, struct timespec *val)
@@ -86,7 +86,8 @@ static inline void cputime_to_timespec(c
*/
static inline cputime_t timeval_to_cputime(const struct timeval *val)
{
- u64 ret = val->tv_sec * NSEC_PER_SEC + val->tv_usec * NSEC_PER_USEC;
+ u64 ret = (u64)val->tv_sec * NSEC_PER_SEC +
+ val->tv_usec * NSEC_PER_USEC;
return (__force cputime_t) ret;
}
static inline void cputime_to_timeval(const cputime_t ct, struct timeval *val)

2016-03-02 01:50:20

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 102/130] mmc: mmci: fix an ages old detection error

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Linus Walleij <[email protected]>

commit 0bcb7efdff63564e80fe84dd36a9fbdfbf6697a4 upstream.

commit 4956e10903fd ("ARM: 6244/1: mmci: add variant data and default
MCICLOCK support") added variant data for ARM, U300 and Ux500 variants.
The Nomadik NHK8815/8820 variant was erroneously labeled as a U300
variant, and when the proper Nomadik variant was later introduced in
commit 34fd421349ff ("ARM: 7378/1: mmci: add support for the Nomadik MMCI
variant") this was not fixes. Let's say this fixes the latter commit as
there was no proper Nomadik support until then.

Fixes: 34fd421349ff ("ARM: 7378/1: mmci: add support for the Nomadik...")
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/mmc/host/mmci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/mmc/host/mmci.c
+++ b/drivers/mmc/host/mmci.c
@@ -1860,7 +1860,7 @@ static struct amba_id mmci_ids[] = {
{
.id = 0x00280180,
.mask = 0x00ffffff,
- .data = &variant_u300,
+ .data = &variant_nomadik,
},
{
.id = 0x00480180,

2016-03-02 01:51:23

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 065/130] vfs: Avoid softlockups with sendfile(2)

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jan Kara <[email protected]>

commit c2489e07c0a71a56fb2c84bc0ee66cddfca7d068 upstream.

The following test program from Dmitry can cause softlockups or RCU
stalls as it copies 1GB from tmpfs into eventfd and we don't have any
scheduling point at that path in sendfile(2) implementation:

int r1 = eventfd(0, 0);
int r2 = memfd_create("", 0);
unsigned long n = 1<<30;
fallocate(r2, 0, 0, n);
sendfile(r1, r2, 0, n);

Add cond_resched() into __splice_from_pipe() to fix the problem.

CC: Dmitry Vyukov <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/splice.c | 1 +
1 file changed, 1 insertion(+)

--- a/fs/splice.c
+++ b/fs/splice.c
@@ -949,6 +949,7 @@ ssize_t __splice_from_pipe(struct pipe_i

splice_from_pipe_begin(sd);
do {
+ cond_resched();
ret = splice_from_pipe_next(pipe, sd);
if (ret > 0)
ret = splice_from_pipe_feed(pipe, sd, actor);

2016-03-01 23:53:39

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 098/130] dm snapshot: fix hung bios when copy error occurs

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mikulas Patocka <[email protected]>

commit 385277bfb57faac44e92497104ba542cdd82d5fe upstream.

When there is an error copying a chunk, dm-snapshot can incorrectly hold
associated bios indefinitely, resulting in hung IO.

The function copy_callback sets pe->error if there was an error copying the
chunk, and then calls complete_exception. complete_exception calls
pending_complete on error, otherwise it calls commit_exception with
commit_callback (and commit_callback calls pending_complete).

The persistent exception store (dm-snap-persistent.c) assumes that calls
to prepare_exception and commit_exception are paired.
persistent_prepare_exception increases ps->pending_count and
persistent_commit_exception decreases it.

If there is a copy error, persistent_prepare_exception is called but
persistent_commit_exception is not. This results in the variable
ps->pending_count never returning to zero and that causes some pending
exceptions (and their associated bios) to be held forever.

Fix this by unconditionally calling commit_exception regardless of
whether the copy was successful. A new "valid" parameter is added to
commit_exception -- when the copy fails this parameter is set to zero so
that the chunk that failed to copy (and all following chunks) is not
recorded in the snapshot store. Also, remove commit_callback now that
it is merely a wrapper around pending_complete.

Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/dm-exception-store.h | 2 +-
drivers/md/dm-snap-persistent.c | 5 ++++-
drivers/md/dm-snap-transient.c | 4 ++--
drivers/md/dm-snap.c | 20 +++++---------------
4 files changed, 12 insertions(+), 19 deletions(-)

--- a/drivers/md/dm-exception-store.h
+++ b/drivers/md/dm-exception-store.h
@@ -70,7 +70,7 @@ struct dm_exception_store_type {
* Update the metadata with this exception.
*/
void (*commit_exception) (struct dm_exception_store *store,
- struct dm_exception *e,
+ struct dm_exception *e, int valid,
void (*callback) (void *, int success),
void *callback_context);

--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -700,7 +700,7 @@ static int persistent_prepare_exception(
}

static void persistent_commit_exception(struct dm_exception_store *store,
- struct dm_exception *e,
+ struct dm_exception *e, int valid,
void (*callback) (void *, int success),
void *callback_context)
{
@@ -709,6 +709,9 @@ static void persistent_commit_exception(
struct core_exception ce;
struct commit_callback *cb;

+ if (!valid)
+ ps->valid = 0;
+
ce.old_chunk = e->old_chunk;
ce.new_chunk = e->new_chunk;
write_exception(ps, ps->current_committed++, &ce);
--- a/drivers/md/dm-snap-transient.c
+++ b/drivers/md/dm-snap-transient.c
@@ -52,12 +52,12 @@ static int transient_prepare_exception(s
}

static void transient_commit_exception(struct dm_exception_store *store,
- struct dm_exception *e,
+ struct dm_exception *e, int valid,
void (*callback) (void *, int success),
void *callback_context)
{
/* Just succeed */
- callback(callback_context, 1);
+ callback(callback_context, valid);
}

static void transient_usage(struct dm_exception_store *store,
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -1388,8 +1388,9 @@ static void __invalidate_snapshot(struct
dm_table_event(s->ti->table);
}

-static void pending_complete(struct dm_snap_pending_exception *pe, int success)
+static void pending_complete(void *context, int success)
{
+ struct dm_snap_pending_exception *pe = context;
struct dm_exception *e;
struct dm_snapshot *s = pe->snap;
struct bio *origin_bios = NULL;
@@ -1460,24 +1461,13 @@ out:
free_pending_exception(pe);
}

-static void commit_callback(void *context, int success)
-{
- struct dm_snap_pending_exception *pe = context;
-
- pending_complete(pe, success);
-}
-
static void complete_exception(struct dm_snap_pending_exception *pe)
{
struct dm_snapshot *s = pe->snap;

- if (unlikely(pe->copy_error))
- pending_complete(pe, 0);
-
- else
- /* Update the metadata if we are persistent */
- s->store->type->commit_exception(s->store, &pe->e,
- commit_callback, pe);
+ /* Update the metadata if we are persistent */
+ s->store->type->commit_exception(s->store, &pe->e, !pe->copy_error,
+ pending_complete, pe);
}

/*

2016-03-01 23:53:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 090/130] s390/dasd: prevent incorrect length error under z/VM after PAV changes

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Stefan Haberland <[email protected]>

commit 020bf042e5b397479c1174081b935d0ff15d1a64 upstream.

The channel checks the specified length and the provided amount of
data for CCWs and provides an incorrect length error if the size does
not match. Under z/VM with simulation activated the length may get
changed. Having the suppress length indication bit set is stated as
good CCW coding practice and avoids errors under z/VM.

Signed-off-by: Stefan Haberland <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/s390/block/dasd_alias.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/s390/block/dasd_alias.c
+++ b/drivers/s390/block/dasd_alias.c
@@ -722,7 +722,7 @@ static int reset_summary_unit_check(stru
ASCEBC((char *) &cqr->magic, 4);
ccw = cqr->cpaddr;
ccw->cmd_code = DASD_ECKD_CCW_RSCK;
- ccw->flags = 0 ;
+ ccw->flags = CCW_FLAG_SLI;
ccw->count = 16;
ccw->cda = (__u32)(addr_t) cqr->data;
((char *)cqr->data)[0] = reason;

2016-03-02 01:52:15

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 096/130] [media] tda1004x: only update the frontend properties if locked

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mauro Carvalho Chehab <[email protected]>

commit e8beb02343e7582980c6705816cd957cf4f74c7a upstream.

The tda1004x was updating the properties cache before locking.
If the device is not locked, the data at the registers are just
random values with no real meaning.

This caused the driver to fail with libdvbv5, as that library
calls GET_PROPERTY from time to time, in order to return the
DVB stats.

Tested with a saa7134 card 78:
ASUSTeK P7131 Dual, vendor PCI ID: 1043:4862

Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/media/dvb-frontends/tda1004x.c | 9 +++++++++
1 file changed, 9 insertions(+)

--- a/drivers/media/dvb-frontends/tda1004x.c
+++ b/drivers/media/dvb-frontends/tda1004x.c
@@ -903,9 +903,18 @@ static int tda1004x_get_fe(struct dvb_fr
{
struct dtv_frontend_properties *fe_params = &fe->dtv_property_cache;
struct tda1004x_state* state = fe->demodulator_priv;
+ int status;

dprintk("%s\n", __func__);

+ status = tda1004x_read_byte(state, TDA1004X_STATUS_CD);
+ if (status == -1)
+ return -EIO;
+
+ /* Only update the properties cache if device is locked */
+ if (!(status & 8))
+ return 0;
+
// inversion status
fe_params->inversion = INVERSION_OFF;
if (tda1004x_read_byte(state, TDA1004X_CONFC1) & 0x20)

2016-03-02 01:52:18

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 095/130] [media] gspca: ov534/topro: prevent a division by 0

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Antonio Ospite <[email protected]>

commit dcc7fdbec53a960588f2c40232db2c6466c09917 upstream.

v4l2-compliance sends a zeroed struct v4l2_streamparm in
v4l2-test-formats.cpp::testParmType(), and this results in a division by
0 in some gspca subdrivers:

divide error: 0000 [#1] SMP
Modules linked in: gspca_ov534 gspca_main ...
CPU: 0 PID: 17201 Comm: v4l2-compliance Not tainted 4.3.0-rc2-ao2 #1
Hardware name: System manufacturer System Product Name/M2N-E SLI, BIOS
ASUS M2N-E SLI ACPI BIOS Revision 1301 09/16/2010
task: ffff8800818306c0 ti: ffff880095c4c000 task.ti: ffff880095c4c000
RIP: 0010:[<ffffffffa079bd62>] [<ffffffffa079bd62>] sd_set_streamparm+0x12/0x60 [gspca_ov534]
RSP: 0018:ffff880095c4fce8 EFLAGS: 00010296
RAX: 0000000000000000 RBX: ffff8800c9522000 RCX: ffffffffa077a140
RDX: 0000000000000000 RSI: ffff880095e0c100 RDI: ffff8800c9522000
RBP: ffff880095e0c100 R08: ffffffffa077a100 R09: 00000000000000cc
R10: ffff880067ec7740 R11: 0000000000000016 R12: ffffffffa07bb400
R13: 0000000000000000 R14: ffff880081b6a800 R15: 0000000000000000
FS: 00007fda0de78740(0000) GS:ffff88012fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000014630f8 CR3: 00000000cf349000 CR4: 00000000000006f0
Stack:
ffffffffa07a6431 ffff8800c9522000 ffffffffa077656e 00000000c0cc5616
ffff8800c9522000 ffffffffa07a5e20 ffff880095e0c100 0000000000000000
ffff880067ec7740 ffffffffa077a140 ffff880067ec7740 0000000000000016
Call Trace:
[<ffffffffa07a6431>] ? v4l_s_parm+0x21/0x50 [videodev]
[<ffffffffa077656e>] ? vidioc_s_parm+0x4e/0x60 [gspca_main]
[<ffffffffa07a5e20>] ? __video_do_ioctl+0x280/0x2f0 [videodev]
[<ffffffffa07a5ba0>] ? video_ioctl2+0x20/0x20 [videodev]
[<ffffffffa07a59b9>] ? video_usercopy+0x319/0x4e0 [videodev]
[<ffffffff81182dc1>] ? page_add_new_anon_rmap+0x71/0xa0
[<ffffffff811afb92>] ? mem_cgroup_commit_charge+0x52/0x90
[<ffffffff81179b18>] ? handle_mm_fault+0xc18/0x1680
[<ffffffffa07a15cc>] ? v4l2_ioctl+0xac/0xd0 [videodev]
[<ffffffff811c846f>] ? do_vfs_ioctl+0x28f/0x480
[<ffffffff811c86d4>] ? SyS_ioctl+0x74/0x80
[<ffffffff8154a8b6>] ? entry_SYSCALL_64_fastpath+0x16/0x75
Code: c7 93 d9 79 a0 5b 5d e9 f1 f3 9a e0 0f 1f 00 66 2e 0f 1f 84 00
00 00 00 00 66 66 66 66 90 53 31 d2 48 89 fb 48 83 ec 08 8b 46 10 <f7>
76 0c 80 bf ac 0c 00 00 00 88 87 4e 0e 00 00 74 09 80 bf 4f
RIP [<ffffffffa079bd62>] sd_set_streamparm+0x12/0x60 [gspca_ov534]
RSP <ffff880095c4fce8>
---[ end trace 279710c2c6c72080 ]---

Following what the doc says about a zeroed timeperframe (see
http://www.linuxtv.org/downloads/v4l-dvb-apis/vidioc-g-parm.html):

...
To reset manually applications can just set this field to zero.

fix the issue by resetting the frame rate to a default value in case of
an unusable timeperframe.

The fix is done in the subdrivers instead of gspca.c because only the
subdrivers have notion of a default frame rate to reset the camera to.

Signed-off-by: Antonio Ospite <[email protected]>
Reviewed-by: Hans de Goede <[email protected]>
Signed-off-by: Hans Verkuil <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/media/usb/gspca/ov534.c | 9 +++++++--
drivers/media/usb/gspca/topro.c | 6 +++++-
2 files changed, 12 insertions(+), 3 deletions(-)

--- a/drivers/media/usb/gspca/ov534.c
+++ b/drivers/media/usb/gspca/ov534.c
@@ -1490,8 +1490,13 @@ static void sd_set_streamparm(struct gsp
struct v4l2_fract *tpf = &cp->timeperframe;
struct sd *sd = (struct sd *) gspca_dev;

- /* Set requested framerate */
- sd->frame_rate = tpf->denominator / tpf->numerator;
+ if (tpf->numerator == 0 || tpf->denominator == 0)
+ /* Set default framerate */
+ sd->frame_rate = 30;
+ else
+ /* Set requested framerate */
+ sd->frame_rate = tpf->denominator / tpf->numerator;
+
if (gspca_dev->streaming)
set_frame_rate(gspca_dev);

--- a/drivers/media/usb/gspca/topro.c
+++ b/drivers/media/usb/gspca/topro.c
@@ -4792,7 +4792,11 @@ static void sd_set_streamparm(struct gsp
struct v4l2_fract *tpf = &cp->timeperframe;
int fr, i;

- sd->framerate = tpf->denominator / tpf->numerator;
+ if (tpf->numerator == 0 || tpf->denominator == 0)
+ sd->framerate = 30;
+ else
+ sd->framerate = tpf->denominator / tpf->numerator;
+
if (gspca_dev->streaming)
setframerate(gspca_dev, v4l2_ctrl_g_ctrl(gspca_dev->exposure));

2016-03-02 01:53:05

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 094/130] [media] media: dvb-core: Don't force CAN_INVERSION_AUTO in oneshot mode

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Malcolm Priestley <[email protected]>

commit c9d57de6103e343f2d4e04ea8d9e417e10a24da7 upstream.

When in FE_TUNE_MODE_ONESHOT the frontend must report
the actual capabilities so the user can take appropriate
action.

With frontends that can't do auto inversion this is done
by dvb-core automatically so CAN_INVERSION_AUTO is valid.

However, when in FE_TUNE_MODE_ONESHOT this is not true.

So only set FE_CAN_INVERSION_AUTO in modes other than
FE_TUNE_MODE_ONESHOT.

Signed-off-by: Malcolm Priestley <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/media/dvb-core/dvb_frontend.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/drivers/media/dvb-core/dvb_frontend.c
+++ b/drivers/media/dvb-core/dvb_frontend.c
@@ -2195,9 +2195,9 @@ static int dvb_frontend_ioctl_legacy(str
dev_dbg(fe->dvb->device, "%s: current delivery system on cache: %d, V3 type: %d\n",
__func__, c->delivery_system, fe->ops.info.type);

- /* Force the CAN_INVERSION_AUTO bit on. If the frontend doesn't
- * do it, it is done for it. */
- info->caps |= FE_CAN_INVERSION_AUTO;
+ /* Set CAN_INVERSION_AUTO bit on in other than oneshot mode */
+ if (!(fepriv->tune_mode_flags & FE_TUNE_MODE_ONESHOT))
+ info->caps |= FE_CAN_INVERSION_AUTO;
err = 0;
break;
}

2016-03-01 23:53:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 089/130] s390: fix normalization bug in exception table sorting

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ard Biesheuvel <[email protected]>

commit bcb7825a77f41c7dd91da6f7ac10b928156a322e upstream.

The normalization pass in the sorting routine of the relative exception
table serves two purposes:
- it ensures that the address fields of the exception table entries are
fully ordered, so that no ambiguities arise between entries with
identical instruction offsets (i.e., when two instructions that are
exactly 8 bytes apart each have an exception table entry associated with
them)
- it ensures that the offsets of both the instruction and the fixup fields
of each entry are relative to their final location after sorting.

Commit eb608fb366de ("s390/exceptions: switch to relative exception table
entries") ported the relative exception table format from x86, but modified
the sorting routine to only normalize the instruction offset field and not
the fixup offset field. The result is that the fixup offset of each entry
will be relative to the original location of the entry before sorting,
likely leading to crashes when those entries are dereferenced.
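
A sketch of why both fields matter (modeled on the generic relative
extable layout, not copied from arch/s390): each field encodes an offset
from its own address, so an entry that moves during sorting needs both
offsets adjusted.

struct rel_extable_entry {
        int insn;       /* offset from &entry->insn to the faulting insn  */
        int fixup;      /* offset from &entry->fixup to the fixup handler */
};

static unsigned long entry_insn_addr(const struct rel_extable_entry *e)
{
        return (unsigned long)&e->insn + e->insn;
}

static unsigned long entry_fixup_addr(const struct rel_extable_entry *e)
{
        return (unsigned long)&e->fixup + e->fixup;
}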

Fixes: eb608fb366de ("s390/exceptions: switch to relative exception table entries")
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/s390/mm/extable.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

--- a/arch/s390/mm/extable.c
+++ b/arch/s390/mm/extable.c
@@ -52,12 +52,16 @@ void sort_extable(struct exception_table
int i;

/* Normalize entries to being relative to the start of the section */
- for (p = start, i = 0; p < finish; p++, i += 8)
+ for (p = start, i = 0; p < finish; p++, i += 8) {
p->insn += i;
+ p->fixup += i + 4;
+ }
sort(start, finish - start, sizeof(*start), cmp_ex, NULL);
/* Denormalize all entries */
- for (p = start, i = 0; p < finish; p++, i += 8)
+ for (p = start, i = 0; p < finish; p++, i += 8) {
p->insn -= i;
+ p->fixup -= i + 4;
+ }
}

#ifdef CONFIG_MODULES

2016-03-02 01:53:28

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 092/130] uml: flush stdout before forking

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Vegard Nossum <[email protected]>

commit 0754fb298f2f2719f0393491d010d46cfb25d043 upstream.

I was seeing some really weird behaviour where piping UML's output
somewhere would cause output to get duplicated:

$ ./vmlinux | head -n 40
Checking that ptrace can change system call numbers...Core dump limits :
soft - 0
hard - NONE
OK
Checking syscall emulation patch for ptrace...Core dump limits :
soft - 0
hard - NONE
OK
Checking advanced syscall emulation patch for ptrace...Core dump limits :
soft - 0
hard - NONE
OK
Core dump limits :
soft - 0
hard - NONE

This is because these tests do a fork() which duplicates the non-empty
stdout buffer, then glibc flushes the duplicated buffer as each child
exits.

A simple workaround is to flush before forking.
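
A small userspace sketch of the effect (not the UML sources): with stdout
piped, the buffered text is duplicated into the child and flushed twice
unless it is flushed before fork().

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        printf("before fork");  /* no newline: stays in the stdio buffer     */
        fflush(stdout);         /* drop this line and the text appears twice */

        if (fork() == 0)
                exit(0);        /* child flushes its copy of the buffer here */

        wait(NULL);
        printf(" ... after fork\n");
        return 0;
}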

Signed-off-by: Vegard Nossum <[email protected]>
Signed-off-by: Richard Weinberger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/um/os-Linux/start_up.c | 2 ++
1 file changed, 2 insertions(+)

--- a/arch/um/os-Linux/start_up.c
+++ b/arch/um/os-Linux/start_up.c
@@ -95,6 +95,8 @@ static int start_ptraced_child(void)
{
int pid, n, status;

+ fflush(stdout);
+
pid = fork();
if (pid == 0)
ptrace_child();

2016-03-02 01:53:51

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 091/130] s390/dasd: fix refcount for PAV reassignment

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Stefan Haberland <[email protected]>

commit 9d862ababb609439c5d6987f6d3ddd09e703aa0b upstream.

Add refcount to the DASD device when a summary unit check worker is
scheduled. This prevents the device from being set offline while the
worker is in place.

Signed-off-by: Stefan Haberland <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/s390/block/dasd_alias.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)

--- a/drivers/s390/block/dasd_alias.c
+++ b/drivers/s390/block/dasd_alias.c
@@ -264,8 +264,10 @@ void dasd_alias_disconnect_device_from_l
spin_unlock_irqrestore(&lcu->lock, flags);
cancel_work_sync(&lcu->suc_data.worker);
spin_lock_irqsave(&lcu->lock, flags);
- if (device == lcu->suc_data.device)
+ if (device == lcu->suc_data.device) {
+ dasd_put_device(device);
lcu->suc_data.device = NULL;
+ }
}
was_pending = 0;
if (device == lcu->ruac_data.device) {
@@ -273,8 +275,10 @@ void dasd_alias_disconnect_device_from_l
was_pending = 1;
cancel_delayed_work_sync(&lcu->ruac_data.dwork);
spin_lock_irqsave(&lcu->lock, flags);
- if (device == lcu->ruac_data.device)
+ if (device == lcu->ruac_data.device) {
+ dasd_put_device(device);
lcu->ruac_data.device = NULL;
+ }
}
private->lcu = NULL;
spin_unlock_irqrestore(&lcu->lock, flags);
@@ -549,8 +553,10 @@ static void lcu_update_work(struct work_
if ((rc && (rc != -EOPNOTSUPP)) || (lcu->flags & NEED_UAC_UPDATE)) {
DBF_DEV_EVENT(DBF_WARNING, device, "could not update"
" alias data in lcu (rc = %d), retry later", rc);
- schedule_delayed_work(&lcu->ruac_data.dwork, 30*HZ);
+ if (!schedule_delayed_work(&lcu->ruac_data.dwork, 30*HZ))
+ dasd_put_device(device);
} else {
+ dasd_put_device(device);
lcu->ruac_data.device = NULL;
lcu->flags &= ~UPDATE_PENDING;
}
@@ -593,8 +599,10 @@ static int _schedule_lcu_update(struct a
*/
if (!usedev)
return -EINVAL;
+ dasd_get_device(usedev);
lcu->ruac_data.device = usedev;
- schedule_delayed_work(&lcu->ruac_data.dwork, 0);
+ if (!schedule_delayed_work(&lcu->ruac_data.dwork, 0))
+ dasd_put_device(usedev);
return 0;
}

@@ -926,6 +934,7 @@ static void summary_unit_check_handling_
/* 3. read new alias configuration */
_schedule_lcu_update(lcu, device);
lcu->suc_data.device = NULL;
+ dasd_put_device(device);
spin_unlock_irqrestore(&lcu->lock, flags);
}

@@ -985,6 +994,8 @@ void dasd_alias_handle_summary_unit_chec
}
lcu->suc_data.reason = reason;
lcu->suc_data.device = device;
+ dasd_get_device(device);
spin_unlock(&lcu->lock);
- schedule_work(&lcu->suc_data.worker);
+ if (!schedule_work(&lcu->suc_data.worker))
+ dasd_put_device(device);
};

2016-03-02 01:54:09

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 064/130] ARC: dw2 unwind: Remove falllback linear search thru FDE entries

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Vineet Gupta <[email protected]>

commit 2e22502c080f27afeab5e6f11e618fb7bc7aea53 upstream.

Fixes STAR 9000953410: "perf callgraph profiling causing RCU stalls"

| perf record -g -c 15000 -e cycles /sbin/hackbench
|
| INFO: rcu_preempt self-detected stall on CPU
| 1: (1 GPs behind) idle=609/140000000000002/0 softirq=2914/2915 fqs=603
| Task dump for CPU 1:

The in-kernel dwarf unwinder has a fast binary lookup and a fallback linear
search (which iterates through each of ~11K entries) and thus takes 2 orders
of magnitude longer (~3 million cycles vs. 2000). Routines written in hand
assembler lack dwarf info (as we don't support assembler CFI pseudo-ops
yet), so they fail the unwinder binary lookup, hit the linear search, and
nevertheless fail in the end.

However, the linear search is pointless, as the binary lookup tables are
created from the same data in the first place. It is impossible for the
binary lookup to fail while the linear search succeeds. It is a pure waste
of cycles and is thus removed by this patch.

This manifested as RCU stalls / NMI watchdog splat when running
hackbench under perf with callgraph profiling. The triggering condition
was a perf counter overflowing in a routine lacking dwarf info (like memset),
leading to the pathetic 3-million-cycle unwinder slow path; by the time it
returned, new interrupts were already pending (Timer, IPI) and were taken
right away. The original memset didn't make forward progress, the system kept
accruing more interrupts and more unwinder delays in a vicious feedback
loop, ultimately triggering the NMI diagnostic.

Signed-off-by: Vineet Gupta <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arc/kernel/unwind.c | 37 ++++---------------------------------
1 file changed, 4 insertions(+), 33 deletions(-)

--- a/arch/arc/kernel/unwind.c
+++ b/arch/arc/kernel/unwind.c
@@ -986,42 +986,13 @@ int arc_unwind(struct unwind_frame_info
(const u8 *)(fde +
1) +
*fde, ptrType);
- if (pc >= endLoc)
+ if (pc >= endLoc) {
fde = NULL;
- } else
- fde = NULL;
- }
- if (fde == NULL) {
- for (fde = table->address, tableSize = table->size;
- cie = NULL, tableSize > sizeof(*fde)
- && tableSize - sizeof(*fde) >= *fde;
- tableSize -= sizeof(*fde) + *fde,
- fde += 1 + *fde / sizeof(*fde)) {
- cie = cie_for_fde(fde, table);
- if (cie == &bad_cie) {
cie = NULL;
- break;
}
- if (cie == NULL
- || cie == &not_fde
- || (ptrType = fde_pointer_type(cie)) < 0)
- continue;
- ptr = (const u8 *)(fde + 2);
- startLoc = read_pointer(&ptr,
- (const u8 *)(fde + 1) +
- *fde, ptrType);
- if (!startLoc)
- continue;
- if (!(ptrType & DW_EH_PE_indirect))
- ptrType &=
- DW_EH_PE_FORM | DW_EH_PE_signed;
- endLoc =
- startLoc + read_pointer(&ptr,
- (const u8 *)(fde +
- 1) +
- *fde, ptrType);
- if (pc >= startLoc && pc < endLoc)
- break;
+ } else {
+ fde = NULL;
+ cie = NULL;
}
}
}

2016-03-02 01:54:38

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 077/130] dm btree: fix bufio buffer leaks in dm_btree_del() error path

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Joe Thornber <[email protected]>

commit ed8b45a3679eb49069b094c0711b30833f27c734 upstream.

If dm_btree_del()'s call to push_frame() fails, e.g. due to
btree_node_validator finding invalid metadata, the dm_btree_del() error
path must unlock all frames (which have active dm-bufio buffers) that
were pushed onto the del_stack.

Otherwise, dm_bufio_client_destroy() will BUG_ON() because dm-bufio
buffers have leaked, e.g.:
device-mapper: bufio: leaked buffer 3, hold count 1, list 0

Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/persistent-data/dm-btree.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)

--- a/drivers/md/persistent-data/dm-btree.c
+++ b/drivers/md/persistent-data/dm-btree.c
@@ -250,6 +250,16 @@ static void pop_frame(struct del_stack *
dm_tm_unlock(s->tm, f->b);
}

+static void unlock_all_frames(struct del_stack *s)
+{
+ struct frame *f;
+
+ while (unprocessed_frames(s)) {
+ f = s->spine + s->top--;
+ dm_tm_unlock(s->tm, f->b);
+ }
+}
+
int dm_btree_del(struct dm_btree_info *info, dm_block_t root)
{
int r;
@@ -306,9 +316,13 @@ int dm_btree_del(struct dm_btree_info *i
pop_frame(s);
}
}
-
out:
+ if (r) {
+ /* cleanup all frames of del_stack */
+ unlock_all_frames(s);
+ }
kfree(s);
+
return r;
}
EXPORT_SYMBOL_GPL(dm_btree_del);

2016-03-01 23:51:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 083/130] powercap / RAPL: fix BIOS lock check

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Prarit Bhargava <[email protected]>

commit 79a21dbfae3cd40d5a801778071a9967b79c2c20 upstream.

Intel RAPL initialized on several systems where the BIOS lock bit (msr
0x610, bit 63) was set. This occurred because the return value of
rapl_read_data_raw() was being checked, rather than the value of the output
variable passed in, 'locked'.

This patch properly implements the rapl_read_data_raw() call to check the
variable 'locked', and now the Intel RAPL driver outputs the warning:

intel_rapl: RAPL package 0 domain package locked by BIOS

and does not initialize for the package.
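
The general calling pattern, as a hedged sketch with invented names: the
helper's return value only says whether the read worked, while the answer
itself comes back through the out-parameter, so both must be checked.

static int example_detect_lock(struct example_domain *rd)
{
        u64 locked = 0;
        int ret;

        ret = example_read_raw(rd, EXAMPLE_FW_LOCK, &locked);   /* invented */
        if (ret)
                return ret;     /* the read itself failed */

        if (locked)             /* only now trust the value read back */
                rd->state |= EXAMPLE_BIOS_LOCKED;

        return 0;
}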

Signed-off-by: Prarit Bhargava <[email protected]>
Acked-by: Jacob Pan <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/powercap/intel_rapl.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

--- a/drivers/powercap/intel_rapl.c
+++ b/drivers/powercap/intel_rapl.c
@@ -1194,10 +1194,13 @@ static int rapl_detect_domains(struct ra

for (rd = rp->domains; rd < rp->domains + rp->nr_domains; rd++) {
/* check if the domain is locked by BIOS */
- if (rapl_read_data_raw(rd, FW_LOCK, false, &locked)) {
+ ret = rapl_read_data_raw(rd, FW_LOCK, false, &locked);
+ if (ret)
+ return ret;
+ if (locked) {
pr_info("RAPL package %d domain %s locked by BIOS\n",
rp->id, rd->name);
- rd->state |= DOMAIN_STATE_BIOS_LOCKED;
+ rd->state |= DOMAIN_STATE_BIOS_LOCKED;
}
}

2016-03-01 23:51:53

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 081/130] ses: Fix problems with simple enclosures

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: James Bottomley <[email protected]>

commit 3417c1b5cb1fdc10261dbed42b05cc93166a78fd upstream.

Simple enclosure implementations (mostly USB) are allowed to return only
page 8 to every diagnostic query. That really confuses our
implementation because we assume the return is the page we asked for and
end up doing incorrect offsets based on bogus information leading to
accesses outside of allocated ranges. Fix that by checking the page
code of the return and giving an error if it isn't the one we asked for.
This should fix reported bugs with USB storage by simply refusing to
attach to enclosures that behave like this. It's also good defensive
practice now that we're starting to see more USB enclosures.

Reported-by: Andrea Gelmini <[email protected]>
Reviewed-by: Ewan D. Milne <[email protected]>
Reviewed-by: Tomas Henzl <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/ses.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)

--- a/drivers/scsi/ses.c
+++ b/drivers/scsi/ses.c
@@ -70,6 +70,7 @@ static int ses_probe(struct device *dev)
static int ses_recv_diag(struct scsi_device *sdev, int page_code,
void *buf, int bufflen)
{
+ int ret;
unsigned char cmd[] = {
RECEIVE_DIAGNOSTIC,
1, /* Set PCV bit */
@@ -78,9 +79,26 @@ static int ses_recv_diag(struct scsi_dev
bufflen & 0xff,
0
};
+ unsigned char recv_page_code;

- return scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
+ ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,
NULL, SES_TIMEOUT, SES_RETRIES, NULL);
+ if (unlikely(!ret))
+ return ret;
+
+ recv_page_code = ((unsigned char *)buf)[0];
+
+ if (likely(recv_page_code == page_code))
+ return ret;
+
+ /* successful diagnostic but wrong page code. This happens to some
+ * USB devices, just print a message and pretend there was an error */
+
+ sdev_printk(KERN_ERR, sdev,
+ "Wrong diagnostic page; asked for %d got %u\n",
+ page_code, recv_page_code);
+
+ return -EINVAL;
}

static int ses_send_diag(struct scsi_device *sdev, int page_code,

2016-03-02 01:55:19

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 072/130] can: sja1000: clear interrupts on start

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mirza Krak <[email protected]>

commit 7cecd9ab80f43972c056dc068338f7bcc407b71c upstream.

According to the SJA1000 data sheet, the error-warning (EI) interrupt is not
cleared by setting the controller into reset mode.

Then if we have the following case:
- system is suspended (echo mem > /sys/power/state) and SJA1000 is left
in operating state
- A bus error condition occurs which activates the EI interrupt; the system
is still suspended, which means the EI interrupt will not be handled nor
cleared.

If the above two events occur, on resume there is no way to return the
SJA1000 to operating state, except to cycle power to it.

By simply reading the IR register on start we will clear any previous
conditions that could be present.

Signed-off-by: Mirza Krak <[email protected]>
Reported-by: Christian Magnusson <[email protected]>
Signed-off-by: Marc Kleine-Budde <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/can/sja1000/sja1000.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/net/can/sja1000/sja1000.c
+++ b/drivers/net/can/sja1000/sja1000.c
@@ -187,6 +187,9 @@ static void sja1000_start(struct net_dev
/* clear interrupt flags */
priv->read_reg(priv, SJA1000_IR);

+ /* clear interrupt flags */
+ priv->read_reg(priv, SJA1000_IR);
+
/* leave reset mode */
set_normal_mode(dev);
}

2016-03-01 23:51:51

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 013/130] tools: Add a "make all" rule

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kamal Mostafa <[email protected]>

commit f6ba98c5dc78708cb7fd29950c4a50c4c7e88f95 upstream.


Signed-off-by: Kamal Mostafa <[email protected]>
Acked-by: Pavel Machek <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Pali Rohar <[email protected]>
Cc: Roberta Dobrescu <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
[ kamal: backport to 3.14-stable: build all tools for this version ]
Signed-off-by: Kamal Mostafa <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
tools/Makefile | 9 +++++++++
1 file changed, 9 insertions(+)

--- a/tools/Makefile
+++ b/tools/Makefile
@@ -24,6 +24,10 @@ help:
@echo ' from the kernel command line to build and install one of'
@echo ' the tools above'
@echo ''
+ @echo ' $$ make tools/all'
+ @echo ''
+ @echo ' builds all tools.'
+ @echo ''
@echo ' $$ make tools/install'
@echo ''
@echo ' installs all tools.'
@@ -58,6 +62,11 @@ turbostat x86_energy_perf_policy: FORCE
tmon: FORCE
$(call descend,thermal/$@)

+all: acpi cgroup cpupower firewire lguest \
+ perf selftests turbostat usb \
+ virtio vm net x86_energy_perf_policy \
+ tmon
+
acpi_install:
$(call descend,power/$(@:_install=),install)

2016-03-02 01:55:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 071/130] RDS: fix race condition when sending a message on unbound socket

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Quentin Casasnovas <[email protected]>

commit 8c7188b23474cca017b3ef354c4a58456f68303a upstream.

Sasha's found a NULL pointer dereference in the RDS connection code when
sending a message to an apparently unbound socket. The problem is caused
by the code checking if the socket is bound in rds_sendmsg(), which checks
the rs_bound_addr field without taking a lock on the socket. This opens a
race where rs_bound_addr is temporarily set but where the transport is not
in rds_bind(), leading to a NULL pointer dereference when trying to
dereference 'trans' in __rds_conn_create().

Vegard wrote a reproducer for this issue, so kindly ask him to share if
you're interested.

I cannot reproduce the NULL pointer dereference using Vegard's reproducer
with this patch, whereas I could without.

Complete earlier incomplete fix to CVE-2015-6937:

74e98eb08588 ("RDS: verify the underlying transport exists before creating a connection")

Reviewed-by: Vegard Nossum <[email protected]>
Reviewed-by: Sasha Levin <[email protected]>
Acked-by: Santosh Shilimkar <[email protected]>
Signed-off-by: Quentin Casasnovas <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/rds/connection.c | 6 ------
net/rds/send.c | 4 +++-
2 files changed, 3 insertions(+), 7 deletions(-)

--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -189,12 +189,6 @@ static struct rds_connection *__rds_conn
goto out;
}

- if (trans == NULL) {
- kmem_cache_free(rds_conn_slab, conn);
- conn = ERR_PTR(-ENODEV);
- goto out;
- }
-
conn->c_trans = trans;

ret = trans->conn_alloc(conn, gfp);
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -955,11 +955,13 @@ int rds_sendmsg(struct kiocb *iocb, stru
release_sock(sk);
}

- /* racing with another thread binding seems ok here */
+ lock_sock(sk);
if (daddr == 0 || rs->rs_bound_addr == 0) {
+ release_sock(sk);
ret = -ENOTCONN; /* XXX not a great errno */
goto out;
}
+ release_sock(sk);

/* size of rm including all sgs */
ret = rds_rm_size(msg, payload_len);

2016-03-02 01:56:26

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 085/130] Btrfs: add missing brelse when superblock checksum fails

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Anand Jain <[email protected]>

commit b2acdddfad13c38a1e8b927d83c3cf321f63601a upstream.

Looks like an oversight: call brelse() when the checksum fails. Further down
the code, in the non-error path, we do call brelse(), and so we don't see
brelse() in the goto error paths.

Signed-off-by: Anand Jain <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/btrfs/disk-io.c | 1 +
1 file changed, 1 insertion(+)

--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2336,6 +2336,7 @@ int open_ctree(struct super_block *sb,
if (btrfs_check_super_csum(bh->b_data)) {
printk(KERN_ERR "BTRFS: superblock checksum mismatch\n");
err = -EINVAL;
+ brelse(bh);
goto fail_alloc;
}

2016-03-02 01:56:41

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 080/130] rfkill: copy the name into the rfkill struct

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Johannes Berg <[email protected]>

commit b7bb110008607a915298bf0f47d25886ecb94477 upstream.

Some users of rfkill, like NFC and cfg80211, pass a dynamically generated
name when allocating rfkill, in those cases dev_name(). The pointer passed
to rfkill_alloc() is therefore not guaranteed to stay valid forever; I
specifically hit a case where the rfkill name was quite obviously an
invalid pointer (or at least garbage) after the wiphy had been renamed.

Fix this by making a copy of the rfkill name in rfkill_alloc().
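
The fix uses the common trailing flexible-array-member pattern, so the
name's storage is allocated together with the object and shares its
lifetime. A generic sketch of that pattern (not the rfkill code itself):

	struct obj {
		int type;
		char name[];	/* storage allocated along with the struct */
	};

	struct obj *obj_alloc(const char *name, gfp_t gfp)
	{
		struct obj *o = kzalloc(sizeof(*o) + strlen(name) + 1, gfp);

		if (!o)
			return NULL;
		strcpy(o->name, name);	/* safe: buffer sized above */
		return o;
	}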

Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/rfkill/core.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/net/rfkill/core.c
+++ b/net/rfkill/core.c
@@ -49,7 +49,6 @@
struct rfkill {
spinlock_t lock;

- const char *name;
enum rfkill_type type;

unsigned long state;
@@ -73,6 +72,7 @@ struct rfkill {
struct delayed_work poll_work;
struct work_struct uevent_work;
struct work_struct sync_work;
+ char name[];
};
#define to_rfkill(d) container_of(d, struct rfkill, dev)

@@ -861,14 +861,14 @@ struct rfkill * __must_check rfkill_allo
if (WARN_ON(type == RFKILL_TYPE_ALL || type >= NUM_RFKILL_TYPES))
return NULL;

- rfkill = kzalloc(sizeof(*rfkill), GFP_KERNEL);
+ rfkill = kzalloc(sizeof(*rfkill) + strlen(name) + 1, GFP_KERNEL);
if (!rfkill)
return NULL;

spin_lock_init(&rfkill->lock);
INIT_LIST_HEAD(&rfkill->node);
rfkill->type = type;
- rfkill->name = name;
+ strcpy(rfkill->name, name);
rfkill->ops = ops;
rfkill->data = ops_data;

2016-03-02 01:56:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 062/130] mmc: remove bondage between REQ_META and reliable write

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Luca Porzio <[email protected]>

commit d3df0465db00cf4ed9f90d0bfc3b827d32b9c796 upstream.

Any time a write operation is performed with the Reliable Write flag set,
the JEDEC specification requires the eMMC device to bypass its cache and
write through to the underlying NVM; this causes a performance penalty,
since such writes cannot be optimized by the device cache.

In our tests, we replayed a typical mobile daily trace pattern and found
~9% overall time reduction in trace replay by using this patch. Also the
write ops within 4KB~64KB chunk size range get a 40~60% performance
improvement by using the patch (as this range of write chunks are the ones
affected by REQ_META).

This patch has been discussed in the Mobile & Embedded Linux Storage Forum
and is the result of feedback from many people. We also checked with the
fsdevel and f2fs mailing list developers that this change in the usage of
REQ_META does not affect filesystem behavior, and the feedback was positive.
The relevant threads are:
http://comments.gmane.org/gmane.linux.file-systems/97219
http://thread.gmane.org/gmane.linux.file-systems.f2fs/3178/focus=3183

Signed-off-by: Bruce Ford <[email protected]>
Signed-off-by: Luca Porzio <[email protected]>
Fixes: ce39f9d17c14 ("mmc: support packed write command for eMMC4.5 devices")
Signed-off-by: Ulf Hansson <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/mmc/card/block.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)

--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -62,8 +62,7 @@ MODULE_ALIAS("mmc:block");
#define MMC_SANITIZE_REQ_TIMEOUT 240000
#define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)

-#define mmc_req_rel_wr(req) (((req->cmd_flags & REQ_FUA) || \
- (req->cmd_flags & REQ_META)) && \
+#define mmc_req_rel_wr(req) ((req->cmd_flags & REQ_FUA) && \
(rq_data_dir(req) == WRITE))
#define PACKED_CMD_VER 0x01
#define PACKED_CMD_WR 0x02
@@ -1328,13 +1327,9 @@ static void mmc_blk_rw_rq_prep(struct mm

/*
* Reliable writes are used to implement Forced Unit Access and
- * REQ_META accesses, and are supported only on MMCs.
- *
- * XXX: this really needs a good explanation of why REQ_META
- * is treated special.
+ * are supported only on MMCs.
*/
- bool do_rel_wr = ((req->cmd_flags & REQ_FUA) ||
- (req->cmd_flags & REQ_META)) &&
+ bool do_rel_wr = (req->cmd_flags & REQ_FUA) &&
(rq_data_dir(req) == WRITE) &&
(md->flags & MMC_BLK_REL_WR);

2016-03-01 23:51:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 019/130] can: ems_usb: Fix possible tx overflow

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Gerhard Uttenthaler <[email protected]>

commit 90cfde46586d2286488d8ed636929e936c0c9ab2 upstream.

This patch fixes the problem that more CAN messages could be sent to the
interface than could be transmitted on the CAN bus. This was more likely at
slow baud rates. The sleeping _start_xmit was woken up in
_write_bulk_callback; under heavy TX load this produced another bulk
transfer without checking the free_slots variable and hence caused the
overflow in the interface.
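
The patch replaces the wake-on-completion behaviour with watermark-based
flow control driven by free_slots, which the device reports in its
interrupt-in messages. Condensed from the diff below (the two trigger
constants are introduced by this patch):

	/* TX path: stop the queue when the device is getting full */
	if (atomic_read(&dev->active_tx_urbs) >= MAX_TX_URBS ||
	    dev->free_slots < CPC_TX_QUEUE_TRIGGER_LOW)
		netif_stop_queue(netdev);

	/* interrupt-in path: wake it only once enough slots are free again */
	dev->free_slots = dev->intr_in_buffer[1];
	if (dev->free_slots > CPC_TX_QUEUE_TRIGGER_HIGH &&
	    netif_queue_stopped(netdev))
		netif_wake_queue(netdev);

The gap between the low and high watermarks gives hysteresis, so the queue
is not toggled on every completion.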

Signed-off-by: Gerhard Uttenthaler <[email protected]>
Signed-off-by: Marc Kleine-Budde <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/can/usb/ems_usb.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)

--- a/drivers/net/can/usb/ems_usb.c
+++ b/drivers/net/can/usb/ems_usb.c
@@ -117,6 +117,9 @@ MODULE_LICENSE("GPL v2");
*/
#define EMS_USB_ARM7_CLOCK 8000000

+#define CPC_TX_QUEUE_TRIGGER_LOW 25
+#define CPC_TX_QUEUE_TRIGGER_HIGH 35
+
/*
* CAN-Message representation in a CPC_MSG. Message object type is
* CPC_MSG_TYPE_CAN_FRAME or CPC_MSG_TYPE_RTR_FRAME or
@@ -278,6 +281,11 @@ static void ems_usb_read_interrupt_callb
switch (urb->status) {
case 0:
dev->free_slots = dev->intr_in_buffer[1];
+ if(dev->free_slots > CPC_TX_QUEUE_TRIGGER_HIGH){
+ if (netif_queue_stopped(netdev)){
+ netif_wake_queue(netdev);
+ }
+ }
break;

case -ECONNRESET: /* unlink */
@@ -529,8 +537,6 @@ static void ems_usb_write_bulk_callback(
/* Release context */
context->echo_index = MAX_TX_URBS;

- if (netif_queue_stopped(netdev))
- netif_wake_queue(netdev);
}

/*
@@ -590,7 +596,7 @@ static int ems_usb_start(struct ems_usb
int err, i;

dev->intr_in_buffer[0] = 0;
- dev->free_slots = 15; /* initial size */
+ dev->free_slots = 50; /* initial size */

for (i = 0; i < MAX_RX_URBS; i++) {
struct urb *urb = NULL;
@@ -841,7 +847,7 @@ static netdev_tx_t ems_usb_start_xmit(st

/* Slow down tx path */
if (atomic_read(&dev->active_tx_urbs) >= MAX_TX_URBS ||
- dev->free_slots < 5) {
+ dev->free_slots < CPC_TX_QUEUE_TRIGGER_LOW) {
netif_stop_queue(netdev);
}
}

2016-03-02 01:57:16

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 079/130] vgaarb: fix signal handling in vga_get()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kirill A. Shutemov <[email protected]>

commit 9f5bd30818c42c6c36a51f93b4df75a2ea2bd85e upstream.

There are a few defects in vga_get() related to signal handling:

- we shouldn't check for pending signals in the TASK_UNINTERRUPTIBLE
  case;

- if we find a pending signal we must remove ourselves from the wait
  queue and set the task state back to running;

- -ERESTARTSYS is more appropriate than -EINTR, I guess.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: David Herrmann <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/vga/vgaarb.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/gpu/vga/vgaarb.c
+++ b/drivers/gpu/vga/vgaarb.c
@@ -392,8 +392,10 @@ int vga_get(struct pci_dev *pdev, unsign
set_current_state(interruptible ?
TASK_INTERRUPTIBLE :
TASK_UNINTERRUPTIBLE);
- if (signal_pending(current)) {
- rc = -EINTR;
+ if (interruptible && signal_pending(current)) {
+ __set_current_state(TASK_RUNNING);
+ remove_wait_queue(&vga_wait_queue, &wait);
+ rc = -ERESTARTSYS;
break;
}
schedule();

2016-03-02 01:57:42

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 078/130] irqchip/versatile-fpga: Fix PCI IRQ mapping on Versatile PB

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Guillaume Delbergue <[email protected]>

commit d5d4fdd86f5759924fe54efa793e22eccf508db6 upstream.

This patch is specifically for PCI support on the Versatile PB board using
a DT. Currently, the dynamic IRQ mapping is broken when using DTs. For
example, on QEMU, the SCSI driver is unable to request the IRQ. To fix
this issue, this patch replaces the current dynamic mechanism with a
static value as is done in the non-DT case.

Signed-off-by: Guillaume Delbergue <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/irqchip/irq-versatile-fpga.c | 5 +++++
1 file changed, 5 insertions(+)

--- a/drivers/irqchip/irq-versatile-fpga.c
+++ b/drivers/irqchip/irq-versatile-fpga.c
@@ -204,7 +204,12 @@ int __init fpga_irq_of_init(struct devic
if (!parent_irq)
parent_irq = -1;

+#ifdef CONFIG_ARCH_VERSATILE
+ fpga_irq_init(base, node->name, IRQ_SIC_START, parent_irq, valid_mask,
+ node);
+#else
fpga_irq_init(base, node->name, 0, parent_irq, valid_mask, node);
+#endif

writel(clear_mask, base + IRQ_ENABLE_CLEAR);
writel(clear_mask, base + FIQ_ENABLE_CLEAR);

2016-03-01 23:51:41

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 054/130] drm/radeon: make rv770_set_sw_state failures non-fatal

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alex Deucher <[email protected]>

commit 4e7697ed79d0c0d5f869c87a6b3ce3d5cd1a07d6 upstream.

On some cards it takes a relatively long time for the change
to take place. Make a timeout non-fatal.

bug:
https://bugs.freedesktop.org/show_bug.cgi?id=76130

Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/rv770_dpm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/gpu/drm/radeon/rv770_dpm.c
+++ b/drivers/gpu/drm/radeon/rv770_dpm.c
@@ -1415,7 +1415,7 @@ int rv770_resume_smc(struct radeon_devic
int rv770_set_sw_state(struct radeon_device *rdev)
{
if (rv770_send_msg_to_smc(rdev, PPSMC_MSG_SwitchToSwState) != PPSMC_Result_OK)
- return -EINVAL;
+ DRM_ERROR("rv770_set_sw_state failed\n");
return 0;
}

2016-03-02 01:57:59

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 076/130] dm space map metadata: fix ref counting bug when bootstrapping a new space map

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Joe Thornber <[email protected]>

commit 50dd842ad83b43bed71790efb31cfb2f6c05c9c1 upstream.

When applying block operations (BOPs) do not remove them from the
uncommitted BOP ring-buffer until after they've been applied -- in case
we recurse.

Also, perform BOP_INC operation, in dm_sm_metadata_create() and
sm_metadata_extend(), in terms of the uncommitted BOP ring-buffer rather
than using direct calls to sm_ll_inc().
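
The apply loop therefore becomes peek -> commit -> pop, so a commit that
recurses back into the ring buffer never sees a half-removed entry.
Condensed from the diff below:

	while (!brb_empty(&smm->uncommitted)) {
		struct block_op bop;

		r = brb_peek(&smm->uncommitted, &bop);	/* leave it queued */
		if (r)
			break;

		r = commit_bop(smm, &bop);	/* may recurse and push more BOPs */
		if (r)
			break;

		brb_pop(&smm->uncommitted);	/* remove only after success */
	}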

Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/persistent-data/dm-space-map-metadata.c | 32 ++++++++++++++-------
1 file changed, 22 insertions(+), 10 deletions(-)

--- a/drivers/md/persistent-data/dm-space-map-metadata.c
+++ b/drivers/md/persistent-data/dm-space-map-metadata.c
@@ -136,7 +136,7 @@ static int brb_push(struct bop_ring_buff
return 0;
}

-static int brb_pop(struct bop_ring_buffer *brb, struct block_op *result)
+static int brb_peek(struct bop_ring_buffer *brb, struct block_op *result)
{
struct block_op *bop;

@@ -147,6 +147,17 @@ static int brb_pop(struct bop_ring_buffe
result->type = bop->type;
result->block = bop->block;

+ return 0;
+}
+
+static int brb_pop(struct bop_ring_buffer *brb)
+{
+ struct block_op *bop;
+
+ if (brb_empty(brb))
+ return -ENODATA;
+
+ bop = brb->bops + brb->begin;
brb->begin = brb_next(brb, brb->begin);

return 0;
@@ -211,7 +222,7 @@ static int apply_bops(struct sm_metadata
while (!brb_empty(&smm->uncommitted)) {
struct block_op bop;

- r = brb_pop(&smm->uncommitted, &bop);
+ r = brb_peek(&smm->uncommitted, &bop);
if (r) {
DMERR("bug in bop ring buffer");
break;
@@ -220,6 +231,8 @@ static int apply_bops(struct sm_metadata
r = commit_bop(smm, &bop);
if (r)
break;
+
+ brb_pop(&smm->uncommitted);
}

return r;
@@ -681,7 +694,6 @@ static struct dm_space_map bootstrap_ops
static int sm_metadata_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
{
int r, i;
- enum allocation_event ev;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
dm_block_t old_len = smm->ll.nr_blocks;

@@ -703,11 +715,12 @@ static int sm_metadata_extend(struct dm_
* allocate any new blocks.
*/
do {
- for (i = old_len; !r && i < smm->begin; i++) {
- r = sm_ll_inc(&smm->ll, i, &ev);
- if (r)
- goto out;
- }
+ for (i = old_len; !r && i < smm->begin; i++)
+ r = add_bop(smm, BOP_INC, i);
+
+ if (r)
+ goto out;
+
old_len = smm->begin;

r = apply_bops(smm);
@@ -752,7 +765,6 @@ int dm_sm_metadata_create(struct dm_spac
{
int r;
dm_block_t i;
- enum allocation_event ev;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);

smm->begin = superblock + 1;
@@ -780,7 +792,7 @@ int dm_sm_metadata_create(struct dm_spac
* allocated blocks that they were built from.
*/
for (i = superblock; !r && i < smm->begin; i++)
- r = sm_ll_inc(&smm->ll, i, &ev);
+ r = add_bop(smm, BOP_INC, i);

if (r)
return r;

2016-03-02 01:58:45

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 075/130] sata_sil: disable trim

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mikulas Patocka <[email protected]>

commit d98f1cd0a3b70ea91f1dfda3ac36c3b2e1a4d5e2 upstream.

When I connect an Intel SSD to a SATA SIL controller (PCI ID 1095:3114), any
TRIM command results in I/O errors being reported in the log. There is
another, similar error reported with TRIM on the SIL controller:
https://bugs.centos.org/view.php?id=5880

Apparently the controller doesn't support TRIM commands. This patch
disables TRIM support on the SATA SIL controller.

ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
ata7.00: BMDMA2 stat 0x50001
ata7.00: failed command: DATA SET MANAGEMENT
ata7.00: cmd 06/01:01:00:00:00/00:00:00:00:00/a0 tag 0 dma 512 out
res 51/04:01:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
ata7.00: status: { DRDY ERR }
ata7.00: error: { ABRT }
ata7.00: device reported invalid CHS sector 0
sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 8:0:0:0: [sdb] tag#0 Sense Key : Illegal Request [current] [descriptor]
sd 8:0:0:0: [sdb] tag#0 Add. Sense: Unaligned write command
sd 8:0:0:0: [sdb] tag#0 CDB: Write same(16) 93 08 00 00 00 00 00 21 95 88 00 20 00 00 00 00
blk_update_request: I/O error, dev sdb, sector 2200968

Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/ata/sata_sil.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/ata/sata_sil.c
+++ b/drivers/ata/sata_sil.c
@@ -631,6 +631,9 @@ static void sil_dev_config(struct ata_de
unsigned int n, quirks = 0;
unsigned char model_num[ATA_ID_PROD_LEN + 1];

+ /* This controller doesn't support trim */
+ dev->horkage |= ATA_HORKAGE_NOTRIM;
+
ata_id_c_string(dev->id, model_num, ATA_ID_PROD, sizeof(model_num));

for (n = 0; sil_blacklist[n].product; n++)

2016-03-02 01:58:59

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 020/130] USB: cp210x: add IDs for GE B650V3 and B850V3 boards

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ken Lin <[email protected]>

commit 6627ae19385283b89356a199d7f03c75ba35fb29 upstream.

Add USB IDs for cp2104/5 devices on GE B650v3 and B850v3 boards.

Signed-off-by: Ken Lin <[email protected]>
Signed-off-by: Akshay Bhat <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/serial/cp210x.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/usb/serial/cp210x.c
+++ b/drivers/usb/serial/cp210x.c
@@ -162,6 +162,8 @@ static const struct usb_device_id id_tab
{ USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
{ USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
{ USB_DEVICE(0x18EF, 0xE025) }, /* ELV Marble Sound Board 1 */
+ { USB_DEVICE(0x1901, 0x0190) }, /* GE B850 CP2105 Recorder interface */
+ { USB_DEVICE(0x1901, 0x0193) }, /* GE B650 CP2104 PMC interface */
{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
{ USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
{ USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */

2016-03-02 01:58:58

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 059/130] megaraid_sas: Do not use PAGE_SIZE for max_sectors

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: [email protected] <[email protected]>

commit 357ae967ad66e357f78b5cfb5ab6ca07fb4a7758 upstream.

Do not use the PAGE_SIZE macro to calculate max_sectors per I/O
request. The driver code assumes PAGE_SIZE is always 4096, which can
lead to a wrongly calculated value if PAGE_SIZE is not 4096. This issue
was reported in Ubuntu Bugzilla Bug #1475166.

Signed-off-by: Sumit Saxena <[email protected]>
Signed-off-by: Kashyap Desai <[email protected]>
Reviewed-by: Tomas Henzl <[email protected]>
Reviewed-by: Martin K. Petersen <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/megaraid/megaraid_sas.h | 2 ++
drivers/scsi/megaraid/megaraid_sas_base.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -334,6 +334,8 @@ enum MR_EVT_ARGS {
MR_EVT_ARGS_GENERIC,
};

+
+#define SGE_BUFFER_SIZE 4096
/*
* define constants for device list query options
*/
--- a/drivers/scsi/megaraid/megaraid_sas_base.c
+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
@@ -3821,7 +3821,7 @@ static int megasas_init_fw(struct megasa
}
}
instance->max_sectors_per_req = instance->max_num_sge *
- PAGE_SIZE / 512;
+ SGE_BUFFER_SIZE / 512;
if (tmp_sectors && (instance->max_sectors_per_req > tmp_sectors))
instance->max_sectors_per_req = tmp_sectors;

2016-03-02 01:59:34

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 022/130] USB: option: add "4G LTE usb-modem U901"

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Bjørn Mork <[email protected]>

commit d061c1caa31d4d9792cfe48a2c6b309a0e01ef46 upstream.

Thomas reports:

T: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 4 Spd=480 MxCh= 0
D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
P: Vendor=05c6 ProdID=6001 Rev=00.00
S: Manufacturer=USB Modem
S: Product=USB Modem
S: SerialNumber=1234567890ABCDEF
C: #Ifs= 5 Cfg#= 1 Atr=e0 MxPwr=500mA
I: If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I: If#= 1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I: If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I: If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
I: If#= 4 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

Reported-by: Thomas Schäfer <[email protected]>
Signed-off-by: Bjørn Mork <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/serial/option.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -1135,6 +1135,8 @@ static const struct usb_device_id option
{ USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC650) },
{ USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) },
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6000)}, /* ZTE AC8700 */
+ { USB_DEVICE_AND_INTERFACE_INFO(QUALCOMM_VENDOR_ID, 0x6001, 0xff, 0xff, 0xff), /* 4G LTE usb-modem U901 */
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */

2016-03-02 01:59:53

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 021/130] USB: option: add support for SIM7100E

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andrey Skvortsov <[email protected]>

commit 3158a8d416f4e1b79dcc867d67cb50013140772c upstream.

$ lsusb:
Bus 001 Device 101: ID 1e0e:9001 Qualcomm / Option

$ usb-devices:
T: Bus=01 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#=101 Spd=480 MxCh= 0
D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 2
P: Vendor=1e0e ProdID=9001 Rev= 2.32
S: Manufacturer=SimTech, Incorporated
S: Product=SimTech, Incorporated
S: SerialNumber=0123456789ABCDEF
C:* #Ifs= 7 Cfg#= 1 Atr=80 MxPwr=500mA
I:* If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I:* If#= 1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:* If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:* If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:* If#= 5 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
I:* If#= 6 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=42 Prot=01 Driver=(none)

The last interface (6) is used for Android Composite ADB interface.

Serial port layout:
0: QCDM/DIAG
1: NMEA
2: AT
3: AT/PPP
4: audio

Signed-off-by: Andrey Skvortsov <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/serial/option.c | 7 +++++++
1 file changed, 7 insertions(+)

--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -315,6 +315,7 @@ static void option_instat_callback(struc
#define TOSHIBA_PRODUCT_G450 0x0d45

#define ALINK_VENDOR_ID 0x1e0e
+#define SIMCOM_PRODUCT_SIM7100E 0x9001 /* Yes, ALINK_VENDOR_ID */
#define ALINK_PRODUCT_PH300 0x9100
#define ALINK_PRODUCT_3GU 0x9200

@@ -615,6 +616,10 @@ static const struct option_blacklist_inf
.reserved = BIT(3) | BIT(4),
};

+static const struct option_blacklist_info simcom_sim7100e_blacklist = {
+ .reserved = BIT(5) | BIT(6),
+};
+
static const struct option_blacklist_info telit_le910_blacklist = {
.sendsetup = BIT(0),
.reserved = BIT(1) | BIT(2),
@@ -1645,6 +1650,8 @@ static const struct usb_device_id option
{ USB_DEVICE(ALINK_VENDOR_ID, 0x9000) },
{ USB_DEVICE(ALINK_VENDOR_ID, ALINK_PRODUCT_PH300) },
{ USB_DEVICE_AND_INTERFACE_INFO(ALINK_VENDOR_ID, ALINK_PRODUCT_3GU, 0xff, 0xff, 0xff) },
+ { USB_DEVICE(ALINK_VENDOR_ID, SIMCOM_PRODUCT_SIM7100E),
+ .driver_info = (kernel_ulong_t)&simcom_sim7100e_blacklist },
{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200),
.driver_info = (kernel_ulong_t)&alcatel_x200_blacklist
},

2016-03-02 01:59:56

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 018/130] dm thin: fix race condition when destroying thin pool workqueue

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nikolay Borisov <[email protected]>

commit 18d03e8c25f173f4107a40d0b8c24defb6ed69f3 upstream.

When a thin pool is being destroyed, delayed work items are cancelled
using cancel_delayed_work(), which doesn't guarantee that the delayed item
isn't still running on return. This can cause the work item to requeue
itself on an already destroyed workqueue. Fix this by using
cancel_delayed_work_sync(), which guarantees that the work item is no
longer running on return.
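
A minimal sketch of the resulting teardown ordering (the general pattern,
assuming the workqueue itself is destroyed later in the target destructor):

	/* wait for any in-flight, self-requeueing work to finish */
	cancel_delayed_work_sync(&pool->waker);
	cancel_delayed_work_sync(&pool->no_space_timeout);

	/* drain anything already queued; only then is teardown safe */
	flush_workqueue(pool->wq);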

Fixes: 905e51b39a555 ("dm thin: commit outstanding data every second")
Fixes: 85ad643b7e7e5 ("dm thin: add timeout to stop out-of-data-space mode holding IO forever")
Signed-off-by: Nikolay Borisov <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/dm-thin.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2568,8 +2568,8 @@ static void pool_postsuspend(struct dm_t
struct pool_c *pt = ti->private;
struct pool *pool = pt->pool;

- cancel_delayed_work(&pool->waker);
- cancel_delayed_work(&pool->no_space_timeout);
+ cancel_delayed_work_sync(&pool->waker);
+ cancel_delayed_work_sync(&pool->no_space_timeout);
flush_workqueue(pool->wq);
(void) commit(pool);
}

2016-03-01 23:51:35

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 046/130] sched, rt: Convert switched_{from, to}_rt() / prio_changed_rt() to balance callbacks

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit fd7a4bed183523275279c9addbf42fce550c2e90 upstream.

Remove the direct {push,pull} balancing operations from
switched_{from,to}_rt() / prio_changed_rt() and use the balance
callback queue.

Again, err on the side of too many reschedules; too few is a hard bug,
while too many is just annoying.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Byungchul Park <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/rt.c | 35 +++++++++++++++++++----------------
1 file changed, 19 insertions(+), 16 deletions(-)

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -315,16 +315,23 @@ static inline int has_pushable_tasks(str
return !plist_head_empty(&rq->rt.pushable_tasks);
}

-static DEFINE_PER_CPU(struct callback_head, rt_balance_head);
+static DEFINE_PER_CPU(struct callback_head, rt_push_head);
+static DEFINE_PER_CPU(struct callback_head, rt_pull_head);

static void push_rt_tasks(struct rq *);
+static void pull_rt_task(struct rq *);

static inline void queue_push_tasks(struct rq *rq)
{
if (!has_pushable_tasks(rq))
return;

- queue_balance_callback(rq, &per_cpu(rt_balance_head, rq->cpu), push_rt_tasks);
+ queue_balance_callback(rq, &per_cpu(rt_push_head, rq->cpu), push_rt_tasks);
+}
+
+static inline void queue_pull_task(struct rq *rq)
+{
+ queue_balance_callback(rq, &per_cpu(rt_pull_head, rq->cpu), pull_rt_task);
}

static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
@@ -1837,7 +1844,7 @@ static void switched_from_rt(struct rq *
if (!p->on_rq || rq->rt.rt_nr_running)
return;

- pull_rt_task(rq);
+ queue_pull_task(rq);
}

void init_sched_rt_class(void)
@@ -1858,8 +1865,6 @@ void init_sched_rt_class(void)
*/
static void switched_to_rt(struct rq *rq, struct task_struct *p)
{
- int check_resched = 1;
-
/*
* If we are already running, then there's nothing
* that needs to be done. But if we are not running
@@ -1869,13 +1874,12 @@ static void switched_to_rt(struct rq *rq
*/
if (p->on_rq && rq->curr != p) {
#ifdef CONFIG_SMP
- if (rq->rt.overloaded && push_rt_task(rq) &&
- /* Don't resched if we changed runqueues */
- rq != task_rq(p))
- check_resched = 0;
-#endif /* CONFIG_SMP */
- if (check_resched && p->prio < rq->curr->prio)
+ if (rq->rt.overloaded)
+ queue_push_tasks(rq);
+#else
+ if (p->prio < rq->curr->prio)
resched_task(rq->curr);
+#endif /* CONFIG_SMP */
}
}

@@ -1896,14 +1900,13 @@ prio_changed_rt(struct rq *rq, struct ta
* may need to pull tasks to this runqueue.
*/
if (oldprio < p->prio)
- pull_rt_task(rq);
+ queue_pull_task(rq);
+
/*
* If there's a higher priority task waiting to run
- * then reschedule. Note, the above pull_rt_task
- * can release the rq lock and p could migrate.
- * Only reschedule if p is still on the same runqueue.
+ * then reschedule.
*/
- if (p->prio > rq->rt.highest_prio.curr && rq->curr == p)
+ if (p->prio > rq->rt.highest_prio.curr)
resched_task(p);
#else
/* For UP simply resched on drop of prio */

2016-03-02 02:00:51

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 057/130] wm831x_power: Use IRQF_ONESHOT to request threaded IRQs

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Valentin Rothberg <[email protected]>

commit 90adf98d9530054b8e665ba5a928de4307231d84 upstream.

Since commit 1c6c69525b40 ("genirq: Reject bogus threaded irq requests")
threaded IRQs without a primary handler need to be requested with
IRQF_ONESHOT, otherwise the request will fail.
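
A minimal sketch of the requirement (generic example, not the wm831x code;
my_thread_fn and my_data are placeholders):

	/* No primary handler (NULL), so the line must stay masked until the
	 * thread function completes; genirq rejects such a request unless
	 * IRQF_ONESHOT is set.
	 */
	ret = request_threaded_irq(irq, NULL, my_thread_fn,
				   IRQF_TRIGGER_RISING | IRQF_ONESHOT,
				   "my-device", my_data);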

scripts/coccinelle/misc/irqf_oneshot.cocci detected this issue.

Fixes: b5874f33bbaf ("wm831x_power: Use genirq")
Signed-off-by: Valentin Rothberg <[email protected]>
Signed-off-by: Sebastian Reichel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/power/wm831x_power.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/drivers/power/wm831x_power.c
+++ b/drivers/power/wm831x_power.c
@@ -567,7 +567,7 @@ static int wm831x_power_probe(struct pla

irq = wm831x_irq(wm831x, platform_get_irq_byname(pdev, "SYSLO"));
ret = request_threaded_irq(irq, NULL, wm831x_syslo_irq,
- IRQF_TRIGGER_RISING, "System power low",
+ IRQF_TRIGGER_RISING | IRQF_ONESHOT, "System power low",
power);
if (ret != 0) {
dev_err(&pdev->dev, "Failed to request SYSLO IRQ %d: %d\n",
@@ -577,7 +577,7 @@ static int wm831x_power_probe(struct pla

irq = wm831x_irq(wm831x, platform_get_irq_byname(pdev, "PWR SRC"));
ret = request_threaded_irq(irq, NULL, wm831x_pwr_src_irq,
- IRQF_TRIGGER_RISING, "Power source",
+ IRQF_TRIGGER_RISING | IRQF_ONESHOT, "Power source",
power);
if (ret != 0) {
dev_err(&pdev->dev, "Failed to request PWR SRC IRQ %d: %d\n",
@@ -590,7 +590,7 @@ static int wm831x_power_probe(struct pla
platform_get_irq_byname(pdev,
wm831x_bat_irqs[i]));
ret = request_threaded_irq(irq, NULL, wm831x_bat_irq,
- IRQF_TRIGGER_RISING,
+ IRQF_TRIGGER_RISING | IRQF_ONESHOT,
wm831x_bat_irqs[i],
power);
if (ret != 0) {

2016-03-02 02:00:37

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 060/130] megaraid_sas : SMAP restriction--do not access user memory from IOCTL code

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: [email protected] <[email protected]>

commit 323c4a02c631d00851d8edc4213c4d184ef83647 upstream.

This is an issue on SMAP-enabled CPUs with 32-bit applications running on a
64-bit OS: kernel code must not access user memory directly, because the
SMAP feature restricts such accesses.
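
The fix follows the usual pattern: copy the user-supplied values into kernel
locals with get_user() and only then compute with them, instead of
dereferencing the user pointer directly. Condensed from the diff below:

	unsigned long local_raw_ptr;
	u32 local_sense_off, local_sense_len;

	if (get_user(local_raw_ptr, ioc->frame.raw) ||
	    get_user(local_sense_off, &ioc->sense_off) ||
	    get_user(local_sense_len, &ioc->sense_len))
		return -EFAULT;

	if (local_sense_len) {
		void __user **sense_ioc_ptr =
			(void __user **)((u8 *)local_raw_ptr + local_sense_off);
		...
	}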

Signed-off-by: Sumit Saxena <[email protected]>
Signed-off-by: Kashyap Desai <[email protected]>
Reviewed-by: Tomas Henzl <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/megaraid/megaraid_sas_base.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)

--- a/drivers/scsi/megaraid/megaraid_sas_base.c
+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
@@ -5281,6 +5281,9 @@ static int megasas_mgmt_compat_ioctl_fw(
int i;
int error = 0;
compat_uptr_t ptr;
+ unsigned long local_raw_ptr;
+ u32 local_sense_off;
+ u32 local_sense_len;

if (clear_user(ioc, sizeof(*ioc)))
return -EFAULT;
@@ -5298,9 +5301,15 @@ static int megasas_mgmt_compat_ioctl_fw(
* sense_len is not null, so prepare the 64bit value under
* the same condition.
*/
- if (ioc->sense_len) {
+ if (get_user(local_raw_ptr, ioc->frame.raw) ||
+ get_user(local_sense_off, &ioc->sense_off) ||
+ get_user(local_sense_len, &ioc->sense_len))
+ return -EFAULT;
+
+
+ if (local_sense_len) {
void __user **sense_ioc_ptr =
- (void __user **)(ioc->frame.raw + ioc->sense_off);
+ (void __user **)((u8*)local_raw_ptr + local_sense_off);
compat_uptr_t *sense_cioc_ptr =
(compat_uptr_t *)(cioc->frame.raw + cioc->sense_off);
if (get_user(ptr, sense_cioc_ptr) ||

2016-03-02 02:01:19

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 058/130] dmaengine: dw: convert to __ffs()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andy Shevchenko <[email protected]>

commit 39416677b95bf1ab8bbfa229ec7e511c96ad5d0c upstream.

We replace __fls() by __ffs() since we have to find a *minimum* data width that
satisfies both source and destination.
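
As an illustrative example only: __ffs() returns the index of the least
significant set bit, __fls() that of the most significant one. For a single
power-of-two bus width they agree, but once several alignment constraints
are OR'd together only __ffs() yields a width every operand can satisfy:

	__ffs(0x8 | 0x2) == 1	/* 2^1 = 2 bytes, fine for both operands */
	__fls(0x8 | 0x2) == 3	/* 2^3 = 8 bytes, too wide for the 2-byte one */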

While here, rename dwc_fast_fls() to dwc_fast_ffs(), which is what it really is.

Fixes: 4c2d56c574db (dw_dmac: introduce dwc_fast_fls())
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/dma/dw/core.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -176,7 +176,7 @@ static void dwc_initialize(struct dw_dma

/*----------------------------------------------------------------------*/

-static inline unsigned int dwc_fast_fls(unsigned long long v)
+static inline unsigned int dwc_fast_ffs(unsigned long long v)
{
/*
* We can be a lot more clever here, but this should take care
@@ -720,7 +720,7 @@ dwc_prep_dma_memcpy(struct dma_chan *cha
dw->data_width[dwc->dst_master]);

src_width = dst_width = min_t(unsigned int, data_width,
- dwc_fast_fls(src | dest | len));
+ dwc_fast_ffs(src | dest | len));

ctllo = DWC_DEFAULT_CTLLO(chan)
| DWC_CTLL_DST_WIDTH(dst_width)
@@ -799,7 +799,7 @@ dwc_prep_slave_sg(struct dma_chan *chan,

switch (direction) {
case DMA_MEM_TO_DEV:
- reg_width = __fls(sconfig->dst_addr_width);
+ reg_width = __ffs(sconfig->dst_addr_width);
reg = sconfig->dst_addr;
ctllo = (DWC_DEFAULT_CTLLO(chan)
| DWC_CTLL_DST_WIDTH(reg_width)
@@ -819,7 +819,7 @@ dwc_prep_slave_sg(struct dma_chan *chan,
len = sg_dma_len(sg);

mem_width = min_t(unsigned int,
- data_width, dwc_fast_fls(mem | len));
+ data_width, dwc_fast_ffs(mem | len));

slave_sg_todev_fill_desc:
desc = dwc_desc_get(dwc);
@@ -859,7 +859,7 @@ slave_sg_todev_fill_desc:
}
break;
case DMA_DEV_TO_MEM:
- reg_width = __fls(sconfig->src_addr_width);
+ reg_width = __ffs(sconfig->src_addr_width);
reg = sconfig->src_addr;
ctllo = (DWC_DEFAULT_CTLLO(chan)
| DWC_CTLL_SRC_WIDTH(reg_width)
@@ -879,7 +879,7 @@ slave_sg_todev_fill_desc:
len = sg_dma_len(sg);

mem_width = min_t(unsigned int,
- data_width, dwc_fast_fls(mem | len));
+ data_width, dwc_fast_ffs(mem | len));

slave_sg_fromdev_fill_desc:
desc = dwc_desc_get(dwc);

2016-03-01 23:51:32

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 016/130] dm thin: restore requested error_if_no_space setting on OODS to WRITE transition

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mike Snitzer <[email protected]>

commit 172c238612ebf81cabccc86b788c9209af591f61 upstream.

A thin-pool that is in out-of-data-space (OODS) mode may transition back
to write mode -- without the admin adding more space to the thin-pool --
if/when blocks are released (either by deleting thin devices or
discarding provisioned blocks).

But as part of the thin-pool's earlier transition to out-of-data-space
mode the thin-pool may have set the 'error_if_no_space' flag to true if
the no_space_timeout expires without more space having been made
available. That implementation detail, of changing the pool's
error_if_no_space setting, needs to be reset back to the default that
the user specified when the thin-pool's table was loaded.

Otherwise we'll drop the user-requested behaviour on the floor when this
out-of-data-space to write-mode transition occurs.

Reported-by: Vivek Goyal <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Acked-by: Joe Thornber <[email protected]>
Fixes: 2c43fd26e4 ("dm thin: fix missing out-of-data-space to write mode transition if blocks are released")
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/dm-thin.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -1619,6 +1619,7 @@ static void set_pool_mode(struct pool *p
case PM_WRITE:
if (old_mode != new_mode)
notify_of_pool_mode_change(pool, "write");
+ pool->pf.error_if_no_space = pt->requested_pf.error_if_no_space;
dm_pool_metadata_read_write(pool->pmd);
pool->process_bio = process_bio;
pool->process_discard = process_discard;

2016-03-02 02:01:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 051/130] clocksource/drivers/vt8500: Increase the minimum delta

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Roman Volkov <[email protected]>

commit f9eccf24615672896dc13251410c3f2f33a14f95 upstream.

The vt8500 clocksource driver declares itself capable of handling a
minimum delay of 4 cycles by passing that value into
clockevents_config_and_register(). vt8500_timer_set_next_event()
requires the passed cycles value to be at least 16. The impact is that
userspace hangs in nanosleep() calls with small delay intervals.

This problem is reproducible in Linux 4.2 starting from:
c6eb3f70d448 ('hrtimer: Get rid of hrtimer softirq')

From Russell King, a more detailed explanation:

"It's a speciality of the StrongARM/PXA hardware. It takes a certain
number of OSCR cycles for the value written to hit the compare registers.
So, if a very small delta is written (eg, the compare register is written
with a value of OSCR + 1), the OSCR will have incremented past this value
before it hits the underlying hardware. The result is, that you end up
waiting a very long time for the OSCR to wrap before the event fires.

So, we introduce a check in set_next_event() to detect this and return
-ETIME if the calculated delta is too small, which causes the generic
clockevents code to retry after adding the min_delta specified in
clockevents_config_and_register() to the current time value.

min_delta must be sufficient that we don't re-trip the -ETIME check - if
we do, we will return -ETIME, forward the next event time, try to set it,
return -ETIME again, and basically lock the system up. So, min_delta
must be larger than the check inside set_next_event(). A factor of two
was chosen to ensure that this situation would never occur.

The PXA code worked on PXA systems for years, and I'd suggest no one
changes this mechanism without access to a wide range of PXA systems,
otherwise they're risking breakage."
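
Concretely for this driver: set_next_event() rejects deltas of
MIN_OSCR_DELTA (16) cycles or less, so the clockevent is now registered
with a minimum delta of MIN_OSCR_DELTA * 2 = 32 cycles, matching the
factor-of-two rule described above (condensed from the diff below):

	if ((signed)(alarm - clocksource.read(&clocksource)) <= MIN_OSCR_DELTA)
		return -ETIME;

	clockevents_config_and_register(&clockevent, VT8500_TIMER_HZ,
					MIN_OSCR_DELTA * 2, 0xf0000000);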

Cc: Russell King <[email protected]>
Acked-by: Alexey Charkov <[email protected]>
Signed-off-by: Roman Volkov <[email protected]>
Signed-off-by: Daniel Lezcano <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/clocksource/vt8500_timer.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/clocksource/vt8500_timer.c
+++ b/drivers/clocksource/vt8500_timer.c
@@ -50,6 +50,8 @@

#define msecs_to_loops(t) (loops_per_jiffy / 1000 * HZ * t)

+#define MIN_OSCR_DELTA 16
+
static void __iomem *regbase;

static cycle_t vt8500_timer_read(struct clocksource *cs)
@@ -80,7 +82,7 @@ static int vt8500_timer_set_next_event(u
cpu_relax();
writel((unsigned long)alarm, regbase + TIMER_MATCH_VAL);

- if ((signed)(alarm - clocksource.read(&clocksource)) <= 16)
+ if ((signed)(alarm - clocksource.read(&clocksource)) <= MIN_OSCR_DELTA)
return -ETIME;

writel(1, regbase + TIMER_IER_VAL);
@@ -160,7 +162,7 @@ static void __init vt8500_timer_init(str
pr_err("%s: setup_irq failed for %s\n", __func__,
clockevent.name);
clockevents_config_and_register(&clockevent, VT8500_TIMER_HZ,
- 4, 0xf0000000);
+ MIN_OSCR_DELTA * 2, 0xf0000000);
}

CLOCKSOURCE_OF_DECLARE(vt8500, "via,vt8500-timer", vt8500_timer_init);

2016-03-02 02:01:37

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 056/130] devres: fix a for loop bounds check

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Carpenter <[email protected]>

commit 1f35d04a02a652f14566f875aef3a6f2af4cb77b upstream.

The iomap[] array has PCIM_IOMAP_MAX (6) elements and not
DEVICE_COUNT_RESOURCE (16). This bug was found using a static checker.
It may be that the "if (!(mask & (1 << i)))" check means we never
actually go past the end of the array in real life.

Fixes: ec04b075843d ('iomap: implement pcim_iounmap_regions()')
Signed-off-by: Dan Carpenter <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
lib/devres.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/lib/devres.c
+++ b/lib/devres.c
@@ -423,7 +423,7 @@ void pcim_iounmap_regions(struct pci_dev
if (!iomap)
return;

- for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
+ for (i = 0; i < PCIM_IOMAP_MAX; i++) {
if (!(mask & (1 << i)))
continue;

2016-03-01 23:51:28

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 043/130] sched: Replace post_schedule with a balance callback list

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit e3fca9e7cbfb72694a21c886fcdf9f059cfded9c upstream.

Generalize the post_schedule() stuff into a balance callback list.
This allows us to more easily use it outside of schedule() and across
sched_classes.
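
Under the new scheme a sched_class no longer implements ->post_schedule();
instead it queues a per-CPU callback while picking the next task, and the
core runs the whole list once the rq lock has been dropped. Condensed from
the diff below, using the rt class as the example:

	/* class side (kernel/sched/rt.c) */
	static DEFINE_PER_CPU(struct callback_head, rt_balance_head);

	static inline void queue_push_tasks(struct rq *rq)
	{
		if (!has_pushable_tasks(rq))
			return;

		queue_balance_callback(rq, &per_cpu(rt_balance_head, rq->cpu),
				       push_rt_tasks);
	}

	/* core side (kernel/sched/core.c), after schedule()/schedule_tail() */
	balance_callback(rq);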

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Byungchul Park <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/core.c | 36 ++++++++++++++++++++++++------------
kernel/sched/deadline.c | 23 ++++++++++++++++-------
kernel/sched/rt.c | 27 ++++++++++++++++-----------
kernel/sched/sched.h | 19 +++++++++++++++++--
4 files changed, 73 insertions(+), 32 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2179,18 +2179,30 @@ static inline void pre_schedule(struct r
}

/* rq->lock is NOT held, but preemption is disabled */
-static inline void post_schedule(struct rq *rq)
+static void __balance_callback(struct rq *rq)
{
- if (rq->post_schedule) {
- unsigned long flags;
+ struct callback_head *head, *next;
+ void (*func)(struct rq *rq);
+ unsigned long flags;

- raw_spin_lock_irqsave(&rq->lock, flags);
- if (rq->curr->sched_class->post_schedule)
- rq->curr->sched_class->post_schedule(rq);
- raw_spin_unlock_irqrestore(&rq->lock, flags);
+ raw_spin_lock_irqsave(&rq->lock, flags);
+ head = rq->balance_callback;
+ rq->balance_callback = NULL;
+ while (head) {
+ func = (void (*)(struct rq *))head->func;
+ next = head->next;
+ head->next = NULL;
+ head = next;

- rq->post_schedule = 0;
+ func(rq);
}
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
+}
+
+static inline void balance_callback(struct rq *rq)
+{
+ if (unlikely(rq->balance_callback))
+ __balance_callback(rq);
}

#else
@@ -2199,7 +2211,7 @@ static inline void pre_schedule(struct r
{
}

-static inline void post_schedule(struct rq *rq)
+static inline void balance_callback(struct rq *rq)
{
}

@@ -2220,7 +2232,7 @@ asmlinkage void schedule_tail(struct tas
* FIXME: do we need to worry about rq being invalidated by the
* task_switch?
*/
- post_schedule(rq);
+ balance_callback(rq);

#ifdef __ARCH_WANT_UNLOCKED_CTXSW
/* In this case, finish_task_switch does not reenable preemption */
@@ -2732,7 +2744,7 @@ need_resched:
} else
raw_spin_unlock_irq(&rq->lock);

- post_schedule(rq);
+ balance_callback(rq);

sched_preempt_enable_no_resched();
if (need_resched())
@@ -6902,7 +6914,7 @@ void __init sched_init(void)
rq->sd = NULL;
rq->rd = NULL;
rq->cpu_power = SCHED_POWER_SCALE;
- rq->post_schedule = 0;
+ rq->balance_callback = NULL;
rq->active_balance = 0;
rq->next_balance = jiffies;
rq->push_cpu = 0;
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -210,6 +210,18 @@ static inline int has_pushable_dl_tasks(

static int push_dl_task(struct rq *rq);

+static DEFINE_PER_CPU(struct callback_head, dl_balance_head);
+
+static void push_dl_tasks(struct rq *);
+
+static inline void queue_push_tasks(struct rq *rq)
+{
+ if (!has_pushable_dl_tasks(rq))
+ return;
+
+ queue_balance_callback(rq, &per_cpu(dl_balance_head, rq->cpu), push_dl_tasks);
+}
+
#else

static inline
@@ -232,6 +244,9 @@ void dec_dl_migration(struct sched_dl_en
{
}

+static inline void queue_push_tasks(struct rq *rq)
+{
+}
#endif /* CONFIG_SMP */

static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
@@ -1005,7 +1020,7 @@ struct task_struct *pick_next_task_dl(st
#endif

#ifdef CONFIG_SMP
- rq->post_schedule = has_pushable_dl_tasks(rq);
+ queue_push_tasks(rq);
#endif /* CONFIG_SMP */

return p;
@@ -1422,11 +1437,6 @@ static void pre_schedule_dl(struct rq *r
pull_dl_task(rq);
}

-static void post_schedule_dl(struct rq *rq)
-{
- push_dl_tasks(rq);
-}
-
/*
* Since the task is not running and a reschedule is not going to happen
* anytime soon on its runqueue, we try pushing it away now.
@@ -1615,7 +1625,6 @@ const struct sched_class dl_sched_class
.rq_online = rq_online_dl,
.rq_offline = rq_offline_dl,
.pre_schedule = pre_schedule_dl,
- .post_schedule = post_schedule_dl,
.task_woken = task_woken_dl,
#endif

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -315,6 +315,18 @@ static inline int has_pushable_tasks(str
return !plist_head_empty(&rq->rt.pushable_tasks);
}

+static DEFINE_PER_CPU(struct callback_head, rt_balance_head);
+
+static void push_rt_tasks(struct rq *);
+
+static inline void queue_push_tasks(struct rq *rq)
+{
+ if (!has_pushable_tasks(rq))
+ return;
+
+ queue_balance_callback(rq, &per_cpu(rt_balance_head, rq->cpu), push_rt_tasks);
+}
+
static void enqueue_pushable_task(struct rq *rq, struct task_struct *p)
{
plist_del(&p->pushable_tasks, &rq->rt.pushable_tasks);
@@ -359,6 +371,9 @@ void dec_rt_migration(struct sched_rt_en
{
}

+static inline void queue_push_tasks(struct rq *rq)
+{
+}
#endif /* CONFIG_SMP */

static inline int on_rt_rq(struct sched_rt_entity *rt_se)
@@ -1349,11 +1364,7 @@ static struct task_struct *pick_next_tas
dequeue_pushable_task(rq, p);

#ifdef CONFIG_SMP
- /*
- * We detect this state here so that we can avoid taking the RQ
- * lock again later if there is no need to push
- */
- rq->post_schedule = has_pushable_tasks(rq);
+ queue_push_tasks(rq);
#endif

return p;
@@ -1731,11 +1742,6 @@ static void pre_schedule_rt(struct rq *r
pull_rt_task(rq);
}

-static void post_schedule_rt(struct rq *rq)
-{
- push_rt_tasks(rq);
-}
-
/*
* If we are not running and we are not going to reschedule soon, we should
* try to push tasks away now
@@ -2008,7 +2014,6 @@ const struct sched_class rt_sched_class
.rq_online = rq_online_rt,
.rq_offline = rq_offline_rt,
.pre_schedule = pre_schedule_rt,
- .post_schedule = post_schedule_rt,
.task_woken = task_woken_rt,
.switched_from = switched_from_rt,
#endif
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -587,9 +587,10 @@ struct rq {

unsigned long cpu_power;

+ struct callback_head *balance_callback;
+
unsigned char idle_balance;
/* For active balancing */
- int post_schedule;
int active_balance;
int push_cpu;
struct cpu_stop_work active_balance_work;
@@ -690,6 +691,21 @@ extern int migrate_swap(struct task_stru

#ifdef CONFIG_SMP

+static inline void
+queue_balance_callback(struct rq *rq,
+ struct callback_head *head,
+ void (*func)(struct rq *rq))
+{
+ lockdep_assert_held(&rq->lock);
+
+ if (unlikely(head->next))
+ return;
+
+ head->func = (void (*)(struct callback_head *))func;
+ head->next = rq->balance_callback;
+ rq->balance_callback = head;
+}
+
#define rcu_dereference_check_sched_domain(p) \
rcu_dereference_check((p), \
lockdep_is_held(&sched_domains_mutex))
@@ -1131,7 +1147,6 @@ struct sched_class {
void (*migrate_task_rq)(struct task_struct *p, int next_cpu);

void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
- void (*post_schedule) (struct rq *this_rq);
void (*task_waking) (struct task_struct *task);
void (*task_woken) (struct rq *this_rq, struct task_struct *task);

2016-03-02 02:02:24

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 017/130] dm thin metadata: fix bug when taking a metadata snapshot

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Joe Thornber <[email protected]>

commit 49e99fc717f624aa75ca755d6e7bc029efd3f0e9 upstream.

When you take a metadata snapshot the btree roots for the mapping and
details tree need to have their reference counts incremented so they
persist for the lifetime of the metadata snap.

The roots being incremented were those currently written in the
superblock, which could possibly be out of date if concurrent IO is
triggering new mappings, breaking of sharing, etc.

Fix this by performing a commit with the metadata lock held while taking
a metadata snapshot.
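
In outline (simplified), the reservation path now flushes the open
transaction before pinning any roots, all under the metadata lock:

	/* inside __reserve_metadata_snap(), with the metadata lock held */
	__commit_transaction(pmd);	/* make the btree roots current */

	/* copy the superblock and take references on the now up-to-date
	 * mapping and details tree roots
	 */
	dm_sm_inc_block(pmd->metadata_sm, THIN_SUPERBLOCK_LOCATION);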

Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/dm-thin-metadata.c | 6 ++++++
1 file changed, 6 insertions(+)

--- a/drivers/md/dm-thin-metadata.c
+++ b/drivers/md/dm-thin-metadata.c
@@ -1205,6 +1205,12 @@ static int __reserve_metadata_snap(struc
dm_block_t held_root;

/*
+ * We commit to ensure the btree roots which we increment in a
+ * moment are up to date.
+ */
+ __commit_transaction(pmd);
+
+ /*
* Copy the superblock.
*/
dm_sm_inc_block(pmd->metadata_sm, THIN_SUPERBLOCK_LOCATION);

2016-03-02 02:02:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 048/130] sched, dl: Convert switched_{from, to}_dl() / prio_changed_dl() to balance callbacks

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 9916e214998a4a363b152b637245e5c958067350 upstream.

Remove the direct {push,pull} balancing operations from
switched_{from,to}_dl() / prio_changed_dl() and use the balance
callback queue.

Again, err on the side of too many reschedules; too few is a hard bug,
while too many is just annoying.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Byungchul Park <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/deadline.c | 34 +++++++++++++++++++++-------------
1 file changed, 21 insertions(+), 13 deletions(-)

--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -210,16 +210,23 @@ static inline int has_pushable_dl_tasks(

static int push_dl_task(struct rq *rq);

-static DEFINE_PER_CPU(struct callback_head, dl_balance_head);
+static DEFINE_PER_CPU(struct callback_head, dl_push_head);
+static DEFINE_PER_CPU(struct callback_head, dl_pull_head);

static void push_dl_tasks(struct rq *);
+static void pull_dl_task(struct rq *);

static inline void queue_push_tasks(struct rq *rq)
{
if (!has_pushable_dl_tasks(rq))
return;

- queue_balance_callback(rq, &per_cpu(dl_balance_head, rq->cpu), push_dl_tasks);
+ queue_balance_callback(rq, &per_cpu(dl_push_head, rq->cpu), push_dl_tasks);
+}
+
+static inline void queue_pull_task(struct rq *rq)
+{
+ queue_balance_callback(rq, &per_cpu(dl_pull_head, rq->cpu), pull_dl_task);
}

#else
@@ -247,6 +254,10 @@ void dec_dl_migration(struct sched_dl_en
static inline void queue_push_tasks(struct rq *rq)
{
}
+
+static inline void queue_pull_task(struct rq *rq)
+{
+}
#endif /* CONFIG_SMP */

static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags);
@@ -1541,7 +1552,7 @@ static void switched_from_dl(struct rq *
* from an overloaded cpu, if any.
*/
if (!rq->dl.dl_nr_running)
- pull_dl_task(rq);
+ queue_pull_task(rq);
#endif
}

@@ -1551,8 +1562,6 @@ static void switched_from_dl(struct rq *
*/
static void switched_to_dl(struct rq *rq, struct task_struct *p)
{
- int check_resched = 1;
-
/*
* If p is throttled, don't consider the possibility
* of preempting rq->curr, the check will be done right
@@ -1563,12 +1572,12 @@ static void switched_to_dl(struct rq *rq

if (p->on_rq || rq->curr != p) {
#ifdef CONFIG_SMP
- if (rq->dl.overloaded && push_dl_task(rq) && rq != task_rq(p))
- /* Only reschedule if pushing failed */
- check_resched = 0;
-#endif /* CONFIG_SMP */
- if (check_resched && task_has_dl_policy(rq->curr))
+ if (rq->dl.overloaded)
+ queue_push_tasks(rq);
+#else
+ if (task_has_dl_policy(rq->curr))
check_preempt_curr_dl(rq, p, 0);
+#endif /* CONFIG_SMP */
}
}

@@ -1588,15 +1597,14 @@ static void prio_changed_dl(struct rq *r
* or lowering its prio, so...
*/
if (!rq->dl.overloaded)
- pull_dl_task(rq);
+ queue_pull_task(rq);

/*
* If we now have a earlier deadline task than p,
* then reschedule, provided p is still on this
* runqueue.
*/
- if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline) &&
- rq->curr == p)
+ if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline))
resched_task(p);
#else
/*

2016-03-02 02:02:20

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 053/130] drm/radeon: unconditionally set sysfs_initialized

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alex Deucher <[email protected]>

commit 24dd2f64c5a877392925202321c7c2c46c2b0ddf upstream.

Avoids spew on resume for systems where sysfs may
fail even on init.

bug:
https://bugzilla.kernel.org/show_bug.cgi?id=106851

Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/radeon_pm.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -1364,8 +1364,7 @@ int radeon_pm_late_init(struct radeon_de
ret = device_create_file(rdev->dev, &dev_attr_power_method);
if (ret)
DRM_ERROR("failed to create device file for power method\n");
- if (!ret)
- rdev->pm.sysfs_initialized = true;
+ rdev->pm.sysfs_initialized = true;
}

mutex_lock(&rdev->pm.mutex);

2016-03-02 02:03:07

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 047/130] sched,dl: Remove return value from pull_dl_task()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 0ea60c2054fc3b0c3eb68ac4f6884f3ee78d9925 upstream.

In order to be able to use pull_dl_task() from a callback, we need to
do away with the return value.

Since the return value indicates if we should reschedule, do this
inside the function. Since not all callers currently do this, this can
increase the number of reschedules due to dl balancing.

Too many reschedules are not a correctness issue; too few are.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Byungchul Park <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/deadline.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1351,15 +1351,16 @@ static void push_dl_tasks(struct rq *rq)
;
}

-static int pull_dl_task(struct rq *this_rq)
+static void pull_dl_task(struct rq *this_rq)
{
- int this_cpu = this_rq->cpu, ret = 0, cpu;
+ int this_cpu = this_rq->cpu, cpu;
struct task_struct *p;
+ bool resched = false;
struct rq *src_rq;
u64 dmin = LONG_MAX;

if (likely(!dl_overloaded(this_rq)))
- return 0;
+ return;

/*
* Match the barrier from dl_set_overloaded; this guarantees that if we
@@ -1414,7 +1415,7 @@ static int pull_dl_task(struct rq *this_
src_rq->curr->dl.deadline))
goto skip;

- ret = 1;
+ resched = true;

deactivate_task(src_rq, p, 0);
set_task_cpu(p, this_cpu);
@@ -1427,7 +1428,8 @@ skip:
double_unlock_balance(this_rq, src_rq);
}

- return ret;
+ if (resched)
+ resched_task(this_rq->curr);
}

static void pre_schedule_dl(struct rq *rq, struct task_struct *prev)

2016-03-02 02:03:04

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 052/130] async_tx: use GFP_NOWAIT rather than GFP_IO

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: NeilBrown <[email protected]>

commit b02bab6b0f928d49dbfb03e1e4e9dd43647623d7 upstream.

These async_XX functions are called from md/raid5 in an atomic
section, between get_cpu() and put_cpu(), so they must not sleep.
So use GFP_NOWAIT rather than GFP_NOIO.

Dan Williams writes: Longer term async_tx needs to be merged into md
directly as we can allocate this unmap data statically per-stripe
rather than per request.

Fixes: 7476bd79fc01 ("async_pq: convert to dmaengine_unmap_data")
Reported-and-tested-by: Stanislav Samsonov <[email protected]>
Acked-by: Dan Williams <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
crypto/async_tx/async_memcpy.c | 2 +-
crypto/async_tx/async_pq.c | 4 ++--
crypto/async_tx/async_raid6_recov.c | 4 ++--
crypto/async_tx/async_xor.c | 4 ++--
4 files changed, 7 insertions(+), 7 deletions(-)

--- a/crypto/async_tx/async_memcpy.c
+++ b/crypto/async_tx/async_memcpy.c
@@ -53,7 +53,7 @@ async_memcpy(struct page *dest, struct p
struct dmaengine_unmap_data *unmap = NULL;

if (device)
- unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOIO);
+ unmap = dmaengine_get_unmap_data(device->dev, 2, GFP_NOWAIT);

if (unmap && is_dma_copy_aligned(device, src_offset, dest_offset, len)) {
unsigned long dma_prep_flags = 0;
--- a/crypto/async_tx/async_pq.c
+++ b/crypto/async_tx/async_pq.c
@@ -176,7 +176,7 @@ async_gen_syndrome(struct page **blocks,
BUG_ON(disks > 255 || !(P(blocks, disks) || Q(blocks, disks)));

if (device)
- unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
+ unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOWAIT);

if (unmap &&
(src_cnt <= dma_maxpq(device, 0) ||
@@ -294,7 +294,7 @@ async_syndrome_val(struct page **blocks,
BUG_ON(disks < 4);

if (device)
- unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOIO);
+ unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOWAIT);

if (unmap && disks <= dma_maxpq(device, 0) &&
is_dma_pq_aligned(device, offset, 0, len)) {
--- a/crypto/async_tx/async_raid6_recov.c
+++ b/crypto/async_tx/async_raid6_recov.c
@@ -41,7 +41,7 @@ async_sum_product(struct page *dest, str
u8 *a, *b, *c;

if (dma)
- unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+ unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOWAIT);

if (unmap) {
struct device *dev = dma->dev;
@@ -105,7 +105,7 @@ async_mult(struct page *dest, struct pag
u8 *d, *s;

if (dma)
- unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOIO);
+ unmap = dmaengine_get_unmap_data(dma->dev, 3, GFP_NOWAIT);

if (unmap) {
dma_addr_t dma_dest[2];
--- a/crypto/async_tx/async_xor.c
+++ b/crypto/async_tx/async_xor.c
@@ -182,7 +182,7 @@ async_xor(struct page *dest, struct page
BUG_ON(src_cnt <= 1);

if (device)
- unmap = dmaengine_get_unmap_data(device->dev, src_cnt+1, GFP_NOIO);
+ unmap = dmaengine_get_unmap_data(device->dev, src_cnt+1, GFP_NOWAIT);

if (unmap && is_dma_xor_aligned(device, offset, 0, len)) {
struct dma_async_tx_descriptor *tx;
@@ -278,7 +278,7 @@ async_xor_val(struct page *dest, struct
BUG_ON(src_cnt <= 1);

if (device)
- unmap = dmaengine_get_unmap_data(device->dev, src_cnt, GFP_NOIO);
+ unmap = dmaengine_get_unmap_data(device->dev, src_cnt, GFP_NOWAIT);

if (unmap && src_cnt <= device->max_xor &&
is_dma_xor_aligned(device, offset, 0, len)) {

2016-03-02 02:03:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 050/130] dts: vt8500: Add SDHC node to DTS file for WM8650

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Roman Volkov <[email protected]>

commit 0f090bf14e51e7eefb71d9d1c545807f8b627986 upstream.

Since the WM8650 has the same 'WMT' SDHC controller as the WM8505, and the
driver is already in the kernel, this node enables controller support for
the WM8650.

Signed-off-by: Roman Volkov <[email protected]>
Reviewed-by: Alexey Charkov <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm/boot/dts/wm8650.dtsi | 9 +++++++++
1 file changed, 9 insertions(+)

--- a/arch/arm/boot/dts/wm8650.dtsi
+++ b/arch/arm/boot/dts/wm8650.dtsi
@@ -187,6 +187,15 @@
interrupts = <43>;
};

+ sdhc@d800a000 {
+ compatible = "wm,wm8505-sdhc";
+ reg = <0xd800a000 0x400>;
+ interrupts = <20>, <21>;
+ clocks = <&clksdhc>;
+ bus-width = <4>;
+ sdon-inverted;
+ };
+
fb: fb@d8050800 {
compatible = "wm,wm8505-fb";
reg = <0xd8050800 0x200>;

2016-03-02 02:04:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 049/130] genirq: Prevent chip buslock deadlock

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit abc7e40c81d113ef4bacb556f0a77ca63ac81d85 upstream.

If an interrupt chip utilizes chip->buslock then free_irq() can
deadlock in the following way:

CPU0                            CPU1
                                interrupt(X) (Shared or spurious)
free_irq(X)                     interrupt_thread(X)
chip_bus_lock(X)
                                irq_finalize_oneshot(X)
                                  chip_bus_lock(X)
synchronize_irq(X)

synchronize_irq() waits for the interrupt thread to complete,
i.e. forever.

Solution is simple: Drop chip_bus_lock() before calling
synchronize_irq() as we do with the irq_desc lock. There is nothing to
be protected after the point where irq_desc lock has been released.

This adds chip_bus_lock/unlock() to the remove_irq() code path, but
that's actually correct in the case where remove_irq() is called on
such an interrupt. The current users of remove_irq() are not affected
as none of those interrupts is on a chip which requires buslock.

Reported-by: Fredrik Markström <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/irq/manage.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1230,6 +1230,7 @@ static struct irqaction *__free_irq(unsi
if (!desc)
return NULL;

+ chip_bus_lock(desc);
raw_spin_lock_irqsave(&desc->lock, flags);

/*
@@ -1243,7 +1244,7 @@ static struct irqaction *__free_irq(unsi
if (!action) {
WARN(1, "Trying to free already-free IRQ %d\n", irq);
raw_spin_unlock_irqrestore(&desc->lock, flags);
-
+ chip_bus_sync_unlock(desc);
return NULL;
}

@@ -1266,6 +1267,7 @@ static struct irqaction *__free_irq(unsi
#endif

raw_spin_unlock_irqrestore(&desc->lock, flags);
+ chip_bus_sync_unlock(desc);

unregister_handler_proc(irq, action);

@@ -1339,9 +1341,7 @@ void free_irq(unsigned int irq, void *de
desc->affinity_notify = NULL;
#endif

- chip_bus_lock(desc);
kfree(__free_irq(irq, dev_id));
- chip_bus_sync_unlock(desc);
}
EXPORT_SYMBOL(free_irq);
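
For readers less familiar with this locking pattern, a minimal userspace
sketch of the same principle, using pthreads purely as an analogy (this is
an illustration, not kernel code): the lock that the handler thread needs
must be released before waiting for that thread, which is why the bus lock
is now dropped inside __free_irq() before synchronize_irq() runs.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t bus_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for the threaded handler: it takes the bus lock to finalize,
 * like irq_finalize_oneshot() -> chip_bus_lock(). */
static void *irq_thread(void *arg)
{
    pthread_mutex_lock(&bus_lock);
    puts("handler finalized");
    pthread_mutex_unlock(&bus_lock);
    return NULL;
}

int main(void)
{
    pthread_t handler;

    pthread_create(&handler, NULL, irq_thread, NULL);

    /* Tear down state under the bus lock, as __free_irq() does ... */
    pthread_mutex_lock(&bus_lock);
    pthread_mutex_unlock(&bus_lock);

    /* ... and only then wait for the handler (the synchronize_irq()
     * analogue).  Joining while still holding bus_lock could deadlock
     * exactly as in the scenario above. */
    pthread_join(handler, NULL);
    return 0;
}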

2016-03-01 23:51:25

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 038/130] splice: sendfile() at once fails for big files

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Christophe Leroy <[email protected]>

commit 0ff28d9f4674d781e492bcff6f32f0fe48cf0fed upstream.

Using sendfile() with the small program below to get MD5 sums of some files,
it appears that big files (over 64 kbytes on a 4k page system) get a wrong
MD5 sum while small files get the correct sum.
This program uses sendfile() to send a file to an AF_ALG socket
for hashing.

/* md5sum2.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <linux/if_alg.h>

int main(int argc, char **argv)
{
    int sk = socket(AF_ALG, SOCK_SEQPACKET, 0);
    struct stat st;
    struct sockaddr_alg sa = {
        .salg_family = AF_ALG,
        .salg_type = "hash",
        .salg_name = "md5",
    };
    int n;

    bind(sk, (struct sockaddr *)&sa, sizeof(sa));

    for (n = 1; n < argc; n++) {
        int size;
        off_t offset = 0;
        char buf[4096];
        int fd;
        int sko;
        int i;

        fd = open(argv[n], O_RDONLY);
        sko = accept(sk, NULL, 0);
        fstat(fd, &st);
        size = st.st_size;
        sendfile(sko, fd, &offset, size);
        size = read(sko, buf, sizeof(buf));
        for (i = 0; i < size; i++)
            printf("%2.2x", buf[i]);
        printf(" %s\n", argv[n]);
        close(fd);
        close(sko);
    }
    exit(0);
}

The test below is done using official Linux patch files. The first result is
with a software-based md5sum. The second result is with the program above.

root@vgoip:~# ls -l patch-3.6.*
-rw-r--r-- 1 root root 64011 Aug 24 12:01 patch-3.6.2.gz
-rw-r--r-- 1 root root 94131 Aug 24 12:01 patch-3.6.3.gz

root@vgoip:~# md5sum patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz

root@vgoip:~# ./md5sum2 patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
5fd77b24e68bb24dcc72d6e57c64790e patch-3.6.3.gz

After investigation, it appears that sendfile() sends the files in blocks
of 64 kbytes (16 times PAGE_SIZE). The problem is that at the end of each
block, the SPLICE_F_MORE flag is missing; therefore the hashing operation
is reset as if it were the end of the file.

This patch adds SPLICE_F_MORE to the flags when more data is pending.

With the patch applied, we get the correct sums:

root@vgoip:~# md5sum patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz

root@vgoip:~# ./md5sum2 patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz

Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/splice.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)

--- a/fs/splice.c
+++ b/fs/splice.c
@@ -1175,7 +1175,7 @@ ssize_t splice_direct_to_actor(struct fi
long ret, bytes;
umode_t i_mode;
size_t len;
- int i, flags;
+ int i, flags, more;

/*
* We require the input being a regular file, as we don't want to
@@ -1218,6 +1218,7 @@ ssize_t splice_direct_to_actor(struct fi
* Don't block on output, we have to drain the direct pipe.
*/
sd->flags &= ~SPLICE_F_NONBLOCK;
+ more = sd->flags & SPLICE_F_MORE;

while (len) {
size_t read_len;
@@ -1231,6 +1232,15 @@ ssize_t splice_direct_to_actor(struct fi
sd->total_len = read_len;

/*
+ * If more data is pending, set SPLICE_F_MORE
+ * If this is the last data and SPLICE_F_MORE was not set
+ * initially, clears it.
+ */
+ if (read_len < len)
+ sd->flags |= SPLICE_F_MORE;
+ else if (!more)
+ sd->flags &= ~SPLICE_F_MORE;
+ /*
* NOTE: nonblocking mode only applies to the input. We
* must not do the output in nonblocking mode as then we
* could get stuck data in the internal pipe:
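
As a side note, the "more data follows" contract that SPLICE_F_MORE expresses
for sendfile() is the same one userspace sees as MSG_MORE on an AF_ALG socket.
A minimal sketch of that (an illustration only, not part of the patch): the
digest below comes out right only because every chunk except the last is sent
with MSG_MORE, mirroring what the fix does for each 64k block of a file.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
    struct sockaddr_alg sa = {
        .salg_family = AF_ALG,
        .salg_type = "hash",
        .salg_name = "md5",
    };
    static unsigned char data[128 * 1024];
    unsigned char digest[16];
    size_t off, chunk = 64 * 1024;
    int sk, op, i;

    memset(data, 'a', sizeof(data));

    sk = socket(AF_ALG, SOCK_SEQPACKET, 0);
    bind(sk, (struct sockaddr *)&sa, sizeof(sa));
    op = accept(sk, NULL, 0);

    /* Hash in 64k chunks: MSG_MORE on all but the final chunk. */
    for (off = 0; off < sizeof(data); off += chunk) {
        int last = (off + chunk >= sizeof(data));

        send(op, data + off, chunk, last ? 0 : MSG_MORE);
    }

    read(op, digest, sizeof(digest));
    for (i = 0; i < (int)sizeof(digest); i++)
        printf("%02x", digest[i]);
    printf("\n");

    close(op);
    close(sk);
    return 0;
}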

2016-03-02 02:04:20

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 042/130] sched: Clean up idle task SMP logic

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 6c3b4d44ba2838f00614a5a2d777d4401e0bfd71 upstream.

The idle post_schedule flag is just a vile waste of time; furthermore, it
appears unneeded, so move the idle_enter_fair() call into
pick_next_task_idle().

Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Steven Rostedt <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Byungchul Park <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/idle_task.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)

--- a/kernel/sched/idle_task.c
+++ b/kernel/sched/idle_task.c
@@ -19,11 +19,6 @@ static void pre_schedule_idle(struct rq
idle_exit_fair(rq);
rq_last_tick_reset(rq);
}
-
-static void post_schedule_idle(struct rq *rq)
-{
- idle_enter_fair(rq);
-}
#endif /* CONFIG_SMP */
/*
* Idle tasks are unconditionally rescheduled:
@@ -37,8 +32,7 @@ static struct task_struct *pick_next_tas
{
schedstat_inc(rq, sched_goidle);
#ifdef CONFIG_SMP
- /* Trigger the post schedule to do an idle_enter for CFS */
- rq->post_schedule = 1;
+ idle_enter_fair(rq);
#endif
return rq->idle;
}
@@ -102,7 +96,6 @@ const struct sched_class idle_sched_clas
#ifdef CONFIG_SMP
.select_task_rq = select_task_rq_idle,
.pre_schedule = pre_schedule_idle,
- .post_schedule = post_schedule_idle,
#endif

.set_curr_task = set_curr_task_idle,

2016-03-02 02:04:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 045/130] sched,rt: Remove return value from pull_rt_task()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 8046d6806247088de5725eaf8a2580b29e50ac5a upstream.

In order to be able to use pull_rt_task() from a callback, we need to
do away with the return value.

Since the return value indicates if we should reschedule, do this
inside the function. Since not all callers currently do this, this can
increase the number of reschedules due to rt balancing.

Too many reschedules are not a correctness issue; too few are.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Byungchul Park <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/rt.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1652,14 +1652,15 @@ static void push_rt_tasks(struct rq *rq)
;
}

-static int pull_rt_task(struct rq *this_rq)
+static void pull_rt_task(struct rq *this_rq)
{
- int this_cpu = this_rq->cpu, ret = 0, cpu;
+ int this_cpu = this_rq->cpu, cpu;
+ bool resched = false;
struct task_struct *p;
struct rq *src_rq;

if (likely(!rt_overloaded(this_rq)))
- return 0;
+ return;

/*
* Match the barrier from rt_set_overloaded; this guarantees that if we
@@ -1716,7 +1717,7 @@ static int pull_rt_task(struct rq *this_
if (p->prio < src_rq->curr->prio)
goto skip;

- ret = 1;
+ resched = true;

deactivate_task(src_rq, p, 0);
set_task_cpu(p, this_cpu);
@@ -1732,7 +1733,8 @@ skip:
double_unlock_balance(this_rq, src_rq);
}

- return ret;
+ if (resched)
+ resched_task(this_rq->curr);
}

static void pre_schedule_rt(struct rq *rq, struct task_struct *prev)
@@ -1835,8 +1837,7 @@ static void switched_from_rt(struct rq *
if (!p->on_rq || rq->rt.rt_nr_running)
return;

- if (pull_rt_task(rq))
- resched_task(rq->curr);
+ pull_rt_task(rq);
}

void init_sched_rt_class(void)

2016-03-02 02:04:58

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 040/130] bnx2x: Don't notify about scratchpad parities

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Manish Chopra <[email protected]>

commit ad6afbe9578d1fa26680faf78c846bd8c00d1d6e upstream.

The scratchpad is a shared block between all functions of a given device.
Due to HW limitations, we can't properly close its parity notifications
to all functions on legal flows.
E.g., it's possible that while taking a register dump from one function
a parity error would be triggered on other functions.

Today the driver doesn't consider this parity a 'real' parity unless it is
accompanied by additional indications [which would happen in a real
parity scenario]; but it does print notifications for such events in the
system logs.

This eliminates such prints - in case of real parities the driver would have
additional indications; but if this is the only signal, the user will not
even see a parity being logged in the system.

Signed-off-by: Manish Chopra <[email protected]>
Signed-off-by: Yuval Mintz <[email protected]>
Signed-off-by: Ariel Elior <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Patrick Schaaf <[email protected]>
Tested-by: Patrick Schaaf <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>


---
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h | 11 +++++++----
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c | 20 ++++++++++++++------
2 files changed, 21 insertions(+), 10 deletions(-)

--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -2401,10 +2401,13 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x
AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR | \
AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR)

-#define HW_PRTY_ASSERT_SET_3 (AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY | \
- AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY | \
- AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY | \
- AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY)
+#define HW_PRTY_ASSERT_SET_3_WITHOUT_SCPAD \
+ (AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY | \
+ AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY | \
+ AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY)
+
+#define HW_PRTY_ASSERT_SET_3 (HW_PRTY_ASSERT_SET_3_WITHOUT_SCPAD | \
+ AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY)

#define HW_PRTY_ASSERT_SET_4 (AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR | \
AEU_INPUTS_ATTN_BITS_ATC_PARITY_ERROR)
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -4631,9 +4631,7 @@ static bool bnx2x_check_blocks_with_pari
res |= true;
break;
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY:
- if (print)
- _print_next_block((*par_num)++,
- "MCP SCPAD");
+ (*par_num)++;
/* clear latched SCPAD PATIRY from MCP */
REG_WR(bp, MISC_REG_AEU_CLR_LATCH_SIGNAL,
1UL << 10);
@@ -4695,6 +4693,7 @@ static bool bnx2x_parity_attn(struct bnx
(sig[3] & HW_PRTY_ASSERT_SET_3) ||
(sig[4] & HW_PRTY_ASSERT_SET_4)) {
int par_num = 0;
+
DP(NETIF_MSG_HW, "Was parity error: HW block parity attention:\n"
"[0]:0x%08x [1]:0x%08x [2]:0x%08x [3]:0x%08x [4]:0x%08x\n",
sig[0] & HW_PRTY_ASSERT_SET_0,
@@ -4702,9 +4701,18 @@ static bool bnx2x_parity_attn(struct bnx
sig[2] & HW_PRTY_ASSERT_SET_2,
sig[3] & HW_PRTY_ASSERT_SET_3,
sig[4] & HW_PRTY_ASSERT_SET_4);
- if (print)
- netdev_err(bp->dev,
- "Parity errors detected in blocks: ");
+ if (print) {
+ if (((sig[0] & HW_PRTY_ASSERT_SET_0) ||
+ (sig[1] & HW_PRTY_ASSERT_SET_1) ||
+ (sig[2] & HW_PRTY_ASSERT_SET_2) ||
+ (sig[4] & HW_PRTY_ASSERT_SET_4)) ||
+ (sig[3] & HW_PRTY_ASSERT_SET_3_WITHOUT_SCPAD)) {
+ netdev_err(bp->dev,
+ "Parity errors detected in blocks: ");
+ } else {
+ print = false;
+ }
+ }
res |= bnx2x_check_blocks_with_parity0(bp,
sig[0] & HW_PRTY_ASSERT_SET_0, &par_num, print);
res |= bnx2x_check_blocks_with_parity1(bp,

2016-03-02 02:05:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 044/130] sched: Allow balance callbacks for check_class_changed()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 4c9a4bc89a9cca8128bce67d6bc8870d6b7ee0b2 upstream.

In order to remove dropping rq->lock from the
switched_{to,from}()/prio_changed() sched_class methods, run the
balance callbacks after it.

We need to remove dropping rq->lock because it's buggy: suppose using
sched_setattr()/sched_setscheduler() to change a running task from FIFO
to OTHER.

By the time we get to switched_from_rt() the task is already enqueued
on the cfs runqueues. If switched_from_rt() does pull_rt_task() and
drops rq->lock, load-balancing can come in and move our task @p to
another rq.

The subsequent switched_to_fair() still assumes @p is on @rq and bad
things will happen.

By using balance callbacks we delay the load-balancing operations
{rt,dl}x{push,pull} until we've done all the important work and the
task is fully set up.

Furthermore, the balance callbacks do not know about @p, therefore
they cannot get confused like this.

Reported-by: Mike Galbraith <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Byungchul Park <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/core.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -937,6 +937,13 @@ inline int task_curr(const struct task_s
return cpu_curr(task_cpu(p)) == p;
}

+/*
+ * switched_from, switched_to and prio_changed must _NOT_ drop rq->lock,
+ * use the balance_callback list if you want balancing.
+ *
+ * this means any call to check_class_changed() must be followed by a call to
+ * balance_callback().
+ */
static inline void check_class_changed(struct rq *rq, struct task_struct *p,
const struct sched_class *prev_class,
int oldprio)
@@ -1423,8 +1430,12 @@ ttwu_do_wakeup(struct rq *rq, struct tas

p->state = TASK_RUNNING;
#ifdef CONFIG_SMP
- if (p->sched_class->task_woken)
+ if (p->sched_class->task_woken) {
+ /*
+ * XXX can drop rq->lock; most likely ok.
+ */
p->sched_class->task_woken(rq, p);
+ }

if (rq->idle_stamp) {
u64 delta = rq_clock(rq) - rq->idle_stamp;
@@ -3006,7 +3017,11 @@ void rt_mutex_setprio(struct task_struct

check_class_changed(rq, p, prev_class, oldprio);
out_unlock:
+ preempt_disable(); /* avoid rq from going away on us */
__task_rq_unlock(rq);
+
+ balance_callback(rq);
+ preempt_enable();
}
#endif

@@ -3512,10 +3527,17 @@ change:
enqueue_task(rq, p, 0);

check_class_changed(rq, p, prev_class, oldprio);
+ preempt_disable(); /* avoid rq from going away on us */
task_rq_unlock(rq, p, &flags);

rt_mutex_adjust_pi(p);

+ /*
+ * Run balance callbacks after we've adjusted the PI chain.
+ */
+ balance_callback(rq);
+ preempt_enable();
+
return 0;
}

2016-03-02 02:06:04

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 041/130] unix: correctly track in-flight fds in sending process user_struct

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Hannes Frederic Sowa <[email protected]>

commit 415e3d3e90ce9e18727e8843ae343eda5a58fad6 upstream.

The commit referenced in the Fixes tag incorrectly accounted the number
of in-flight fds over a unix domain socket to the original opener
of the file-descriptor. This allows another process to arbitrarily
deplete the original file-opener's resource limit for the maximum number
of open files. Instead the sending process and its struct cred should
be credited.

To do so, we add a reference counted struct user_struct pointer to the
scm_fp_list and use it to account for the number of inflight unix fds.

Fixes: 712f4aad406bb1 ("unix: properly account for FDs passed over unix sockets")
Reported-by: David Herrmann <[email protected]>
Cc: David Herrmann <[email protected]>
Cc: Willy Tarreau <[email protected]>
Cc: Linus Torvalds <[email protected]>
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Hannes Frederic Sowa <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/net/af_unix.h | 4 ++--
include/net/scm.h | 1 +
net/core/scm.c | 7 +++++++
net/unix/af_unix.c | 4 ++--
net/unix/garbage.c | 8 ++++----
5 files changed, 16 insertions(+), 8 deletions(-)

--- a/include/net/af_unix.h
+++ b/include/net/af_unix.h
@@ -6,8 +6,8 @@
#include <linux/mutex.h>
#include <net/sock.h>

-void unix_inflight(struct file *fp);
-void unix_notinflight(struct file *fp);
+void unix_inflight(struct user_struct *user, struct file *fp);
+void unix_notinflight(struct user_struct *user, struct file *fp);
void unix_gc(void);
void wait_for_unix_gc(void);
struct sock *unix_get_socket(struct file *filp);
--- a/include/net/scm.h
+++ b/include/net/scm.h
@@ -21,6 +21,7 @@ struct scm_creds {
struct scm_fp_list {
short count;
short max;
+ struct user_struct *user;
struct file *fp[SCM_MAX_FD];
};

--- a/net/core/scm.c
+++ b/net/core/scm.c
@@ -87,6 +87,7 @@ static int scm_fp_copy(struct cmsghdr *c
*fplp = fpl;
fpl->count = 0;
fpl->max = SCM_MAX_FD;
+ fpl->user = NULL;
}
fpp = &fpl->fp[fpl->count];

@@ -107,6 +108,10 @@ static int scm_fp_copy(struct cmsghdr *c
*fpp++ = file;
fpl->count++;
}
+
+ if (!fpl->user)
+ fpl->user = get_uid(current_user());
+
return num;
}

@@ -119,6 +124,7 @@ void __scm_destroy(struct scm_cookie *sc
scm->fp = NULL;
for (i=fpl->count-1; i>=0; i--)
fput(fpl->fp[i]);
+ free_uid(fpl->user);
kfree(fpl);
}
}
@@ -337,6 +343,7 @@ struct scm_fp_list *scm_fp_dup(struct sc
for (i = 0; i < fpl->count; i++)
get_file(fpl->fp[i]);
new_fpl->max = new_fpl->count;
+ new_fpl->user = get_uid(fpl->user);
}
return new_fpl;
}
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1469,7 +1469,7 @@ static void unix_detach_fds(struct scm_c
UNIXCB(skb).fp = NULL;

for (i = scm->fp->count-1; i >= 0; i--)
- unix_notinflight(scm->fp->fp[i]);
+ unix_notinflight(scm->fp->user, scm->fp->fp[i]);
}

static void unix_destruct_scm(struct sk_buff *skb)
@@ -1534,7 +1534,7 @@ static int unix_attach_fds(struct scm_co
return -ENOMEM;

for (i = scm->fp->count - 1; i >= 0; i--)
- unix_inflight(scm->fp->fp[i]);
+ unix_inflight(scm->fp->user, scm->fp->fp[i]);
return max_level;
}

--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -122,7 +122,7 @@ struct sock *unix_get_socket(struct file
* descriptor if it is for an AF_UNIX socket.
*/

-void unix_inflight(struct file *fp)
+void unix_inflight(struct user_struct *user, struct file *fp)
{
struct sock *s = unix_get_socket(fp);

@@ -139,11 +139,11 @@ void unix_inflight(struct file *fp)
}
unix_tot_inflight++;
}
- fp->f_cred->user->unix_inflight++;
+ user->unix_inflight++;
spin_unlock(&unix_gc_lock);
}

-void unix_notinflight(struct file *fp)
+void unix_notinflight(struct user_struct *user, struct file *fp)
{
struct sock *s = unix_get_socket(fp);

@@ -157,7 +157,7 @@ void unix_notinflight(struct file *fp)
list_del_init(&u->link);
unix_tot_inflight--;
}
- fp->f_cred->user->unix_inflight--;
+ user->unix_inflight--;
spin_unlock(&unix_gc_lock);
}
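
For context, a small self-contained demo of what "in flight" means here (an
illustration, not the patched code): once a descriptor has been queued with
SCM_RIGHTS but not yet received, it is in flight, and with this change it is
the sending user's unix_inflight count that is charged for it rather than
the original opener's.

#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/uio.h>

static void send_fd(int sock, int fd)
{
    char dummy = 'x';
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));

    sendmsg(sock, &msg, 0);
}

int main(void)
{
    int sv[2], fd;

    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    fd = open("/dev/null", O_RDONLY);

    /* While the message sits unread in sv[1]'s receive queue, the
     * duplicated /dev/null descriptor is accounted as in flight. */
    send_fd(sv[0], fd);

    close(fd);
    close(sv[0]);
    close(sv[1]);
    return 0;
}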

2016-03-02 02:06:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 036/130] ipv6: addrconf: validate new MTU before applying it

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Marcelo Leitner <[email protected]>

commit 77751427a1ff25b27d47a4c36b12c3c8667855ac upstream.

Currently we don't check whether the new MTU is valid, and this allows
one to configure an MTU smaller than the minimum allowed by the RFCs or
even bigger than the interface's own MTU, which is a problem as it may
lead to packet drops.

If you have a daemon like NetworkManager running, this may be exploited
by remote attackers by forging RA packets with an invalid MTU, possibly
leading to a DoS. (NetworkManager currently only validates for values
too small, but not for too big ones.)

The fix is just to make sure the new value is valid. That is, between
IPV6_MIN_MTU and the interface's MTU.

Note that a similar check is already performed in
ndisc_router_discovery(), for when the kernel itself parses the RA.

Signed-off-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: Sabrina Dubroca <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Cc: "Charles (Chas) Williams" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/ipv6/addrconf.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)

--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -4793,6 +4793,21 @@ int addrconf_sysctl_forward(struct ctl_t
return ret;
}

+static
+int addrconf_sysctl_mtu(struct ctl_table *ctl, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ struct inet6_dev *idev = ctl->extra1;
+ int min_mtu = IPV6_MIN_MTU;
+ struct ctl_table lctl;
+
+ lctl = *ctl;
+ lctl.extra1 = &min_mtu;
+ lctl.extra2 = idev ? &idev->dev->mtu : NULL;
+
+ return proc_dointvec_minmax(&lctl, write, buffer, lenp, ppos);
+}
+
static void dev_disable_change(struct inet6_dev *idev)
{
struct netdev_notifier_info info;
@@ -4944,7 +4959,7 @@ static struct addrconf_sysctl_table
.data = &ipv6_devconf.mtu6,
.maxlen = sizeof(int),
.mode = 0644,
- .proc_handler = proc_dointvec,
+ .proc_handler = addrconf_sysctl_mtu,
},
{
.procname = "accept_ra",

2016-03-01 23:51:17

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 034/130] MIPS: KVM: Fix CACHE immediate offset sign extension

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: James Hogan <[email protected]>

commit c5c2a3b998f1ff5a586f9d37e154070b8d550d17 upstream.

The immediate field of the CACHE instruction is signed, so ensure that
it gets sign extended by casting it to an int16_t rather than just
masking the low 16 bits.

Fixes: e685c689f3a8 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
Signed-off-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Gleb Natapov <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: James Hogan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/mips/kvm/kvm_mips_emul.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/mips/kvm/kvm_mips_emul.c
+++ b/arch/mips/kvm/kvm_mips_emul.c
@@ -935,7 +935,7 @@ kvm_mips_emulate_cache(uint32_t inst, ui

base = (inst >> 21) & 0x1f;
op_inst = (inst >> 16) & 0x1f;
- offset = inst & 0xffff;
+ offset = (int16_t)inst;
cache = (inst >> 16) & 0x3;
op = (inst >> 18) & 0x7;
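
The difference is easy to see in plain C; a hedged userspace example
(unrelated to the kernel sources): for a negative 16-bit offset, masking
keeps a large positive value while the int16_t cast sign-extends it.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Instruction word whose low 16 bits encode the offset -16 (0xfff0). */
    uint32_t inst = 0xbc35fff0;
    int32_t masked = inst & 0xffff;   /* 65520: sign lost */
    int32_t extended = (int16_t)inst; /* -16: correctly sign-extended */

    printf("masked=%d extended=%d\n", masked, extended);
    return 0;
}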

2016-03-01 23:51:14

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 015/130] efi: Disable interrupts around EFI calls, not in the epilog/prolog calls

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ingo Molnar <[email protected]>

commit 23a0d4e8fa6d3a1d7fb819f79bcc0a3739c30ba9 upstream.

Tapasweni Pathak reported that we do a kmalloc() in efi_call_phys_prolog()
on x86-64 while having interrupts disabled, which is a big no-no, as
kmalloc() can sleep.

Solve this by removing the irq disabling from the prolog/epilog calls
around EFI calls: it's unnecessary, as in this stage we are single
threaded in the boot thread, and we don't ever execute this from
interrupt contexts.

Reported-by: Tapasweni Pathak <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Matt Fleming <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/platform/efi/efi.c | 7 +++++++
arch/x86/platform/efi/efi_32.c | 11 +++--------
arch/x86/platform/efi/efi_64.c | 3 ---
3 files changed, 10 insertions(+), 11 deletions(-)

--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -248,12 +248,19 @@ static efi_status_t __init phys_efi_set_
efi_memory_desc_t *virtual_map)
{
efi_status_t status;
+ unsigned long flags;

efi_call_phys_prelog();
+
+ /* Disable interrupts around EFI calls: */
+ local_irq_save(flags);
status = efi_call_phys4(efi_phys.set_virtual_address_map,
memory_map_size, descriptor_size,
descriptor_version, virtual_map);
+ local_irq_restore(flags);
+
efi_call_phys_epilog();
+
return status;
}

--- a/arch/x86/platform/efi/efi_32.c
+++ b/arch/x86/platform/efi/efi_32.c
@@ -33,11 +33,10 @@

/*
* To make EFI call EFI runtime service in physical addressing mode we need
- * prelog/epilog before/after the invocation to disable interrupt, to
- * claim EFI runtime service handler exclusively and to duplicate a memory in
- * low memory space say 0 - 3G.
+ * prolog/epilog before/after the invocation to claim the EFI runtime service
+ * handler exclusively and to duplicate a memory mapping in low memory space,
+ * say 0 - 3G.
*/
-static unsigned long efi_rt_eflags;

void efi_sync_low_kernel_mappings(void) {}
void __init efi_dump_pagetable(void) {}
@@ -59,8 +58,6 @@ void efi_call_phys_prelog(void)
{
struct desc_ptr gdt_descr;

- local_irq_save(efi_rt_eflags);
-
load_cr3(initial_page_table);
__flush_tlb_all();

@@ -79,8 +76,6 @@ void efi_call_phys_epilog(void)

load_cr3(swapper_pg_dir);
__flush_tlb_all();
-
- local_irq_restore(efi_rt_eflags);
}

void __init efi_runtime_mkexec(void)
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -41,7 +41,6 @@
#include <asm/realmode.h>

static pgd_t *save_pgd __initdata;
-static unsigned long efi_flags __initdata;

/*
* We allocate runtime services regions bottom-up, starting from -4G, i.e.
@@ -87,7 +86,6 @@ void __init efi_call_phys_prelog(void)
return;

early_code_mapping_set_exec(1);
- local_irq_save(efi_flags);

n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT), PGDIR_SIZE);
save_pgd = kmalloc(n_pgds * sizeof(pgd_t), GFP_KERNEL);
@@ -115,7 +113,6 @@ void __init efi_call_phys_epilog(void)
set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), save_pgd[pgd]);
kfree(save_pgd);
__flush_tlb_all();
- local_irq_restore(efi_flags);
early_code_mapping_set_exec(0);
}

2016-03-02 02:06:54

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 035/130] MIPS: KVM: Uninit VCPU in vcpu_create error path

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: James Hogan <[email protected]>

commit 585bb8f9a5e592f2ce7abbe5ed3112d5438d2754 upstream.

If either of the memory allocations in kvm_arch_vcpu_create() fails, the
vcpu which has been allocated and kvm_vcpu_init'd doesn't get uninit'd
in the error handling path. Add a call to kvm_vcpu_uninit() to fix this.

Fixes: 669e846e6c4e ("KVM/MIPS32: MIPS arch specific APIs for KVM")
Signed-off-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Gleb Natapov <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: James Hogan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/mips/kvm/kvm_mips.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -313,7 +313,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(st

if (!gebase) {
err = -ENOMEM;
- goto out_free_cpu;
+ goto out_uninit_cpu;
}
kvm_info("Allocated %d bytes for KVM Exception Handlers @ %p\n",
ALIGN(size, PAGE_SIZE), gebase);
@@ -373,6 +373,9 @@ struct kvm_vcpu *kvm_arch_vcpu_create(st
out_free_gebase:
kfree(gebase);

+out_uninit_cpu:
+ kvm_vcpu_uninit(vcpu);
+
out_free_cpu:
kfree(vcpu);

2016-03-02 02:07:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 037/130] RDS: verify the underlying transport exists before creating a connection

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sasha Levin <[email protected]>

commit 74e98eb085889b0d2d4908f59f6e00026063014f upstream.

There was no verification that an underlying transport exists when creating
a connection; this would cause a NULL pointer dereference.

It might happen on sockets that weren't properly bound before attempting to
send a message, which will cause a NULL ptr deref:

[135546.047719] kasan: GPF could be caused by NULL-ptr deref or user memory accessgeneral protection fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC KASAN
[135546.051270] Modules linked in:
[135546.051781] CPU: 4 PID: 15650 Comm: trinity-c4 Not tainted 4.2.0-next-20150902-sasha-00041-gbaa1222-dirty #2527
[135546.053217] task: ffff8800835bc000 ti: ffff8800bc708000 task.ti: ffff8800bc708000
[135546.054291] RIP: __rds_conn_create (net/rds/connection.c:194)
[135546.055666] RSP: 0018:ffff8800bc70fab0 EFLAGS: 00010202
[135546.056457] RAX: dffffc0000000000 RBX: 0000000000000f2c RCX: ffff8800835bc000
[135546.057494] RDX: 0000000000000007 RSI: ffff8800835bccd8 RDI: 0000000000000038
[135546.058530] RBP: ffff8800bc70fb18 R08: 0000000000000001 R09: 0000000000000000
[135546.059556] R10: ffffed014d7a3a23 R11: ffffed014d7a3a21 R12: 0000000000000000
[135546.060614] R13: 0000000000000001 R14: ffff8801ec3d0000 R15: 0000000000000000
[135546.061668] FS: 00007faad4ffb700(0000) GS:ffff880252000000(0000) knlGS:0000000000000000
[135546.062836] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[135546.063682] CR2: 000000000000846a CR3: 000000009d137000 CR4: 00000000000006a0
[135546.064723] Stack:
[135546.065048] ffffffffafe2055c ffffffffafe23fc1 ffffed00493097bf ffff8801ec3d0008
[135546.066247] 0000000000000000 00000000000000d0 0000000000000000 ac194a24c0586342
[135546.067438] 1ffff100178e1f78 ffff880320581b00 ffff8800bc70fdd0 ffff880320581b00
[135546.068629] Call Trace:
[135546.069028] ? __rds_conn_create (include/linux/rcupdate.h:856 net/rds/connection.c:134)
[135546.069989] ? rds_message_copy_from_user (net/rds/message.c:298)
[135546.071021] rds_conn_create_outgoing (net/rds/connection.c:278)
[135546.071981] rds_sendmsg (net/rds/send.c:1058)
[135546.072858] ? perf_trace_lock (include/trace/events/lock.h:38)
[135546.073744] ? lockdep_init (kernel/locking/lockdep.c:3298)
[135546.074577] ? rds_send_drop_to (net/rds/send.c:976)
[135546.075508] ? __might_fault (./arch/x86/include/asm/current.h:14 mm/memory.c:3795)
[135546.076349] ? __might_fault (mm/memory.c:3795)
[135546.077179] ? rds_send_drop_to (net/rds/send.c:976)
[135546.078114] sock_sendmsg (net/socket.c:611 net/socket.c:620)
[135546.078856] SYSC_sendto (net/socket.c:1657)
[135546.079596] ? SYSC_connect (net/socket.c:1628)
[135546.080510] ? trace_dump_stack (kernel/trace/trace.c:1926)
[135546.081397] ? ring_buffer_unlock_commit (kernel/trace/ring_buffer.c:2479 kernel/trace/ring_buffer.c:2558 kernel/trace/ring_buffer.c:2674)
[135546.082390] ? trace_buffer_unlock_commit (kernel/trace/trace.c:1749)
[135546.083410] ? trace_event_raw_event_sys_enter (include/trace/events/syscalls.h:16)
[135546.084481] ? do_audit_syscall_entry (include/trace/events/syscalls.h:16)
[135546.085438] ? trace_buffer_unlock_commit (kernel/trace/trace.c:1749)
[135546.085515] rds_ib_laddr_check(): addr 36.74.25.172 ret -99 node type -1

Acked-by: Santosh Shilimkar <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Cc: "Charles (Chas) Williams" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/rds/connection.c | 6 ++++++
1 file changed, 6 insertions(+)

--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -189,6 +189,12 @@ static struct rds_connection *__rds_conn
goto out;
}

+ if (trans == NULL) {
+ kmem_cache_free(rds_conn_slab, conn);
+ conn = ERR_PTR(-ENODEV);
+ goto out;
+ }
+
conn->c_trans = trans;

ret = trans->conn_alloc(conn, gfp);

2016-03-02 02:07:49

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 033/130] MIPS: KVM: Fix ASID restoration logic

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: James Hogan <[email protected]>

commit 002374f371bd02df864cce1fe85d90dc5b292837 upstream.

ASID restoration on guest resume should determine the guest execution
mode based on the guest Status register rather than bit 30 of the guest
PC.

Fix the two places in locore.S that do this, loading the guest status
from the cop0 area. Note, this assembly is specific to the trap &
emulate implementation of KVM, so it doesn't need to check the
supervisor bit as that mode is not implemented in the guest.

Fixes: b680f70fc111 ("KVM/MIPS32: Entry point for trampolining to...")
Signed-off-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Gleb Natapov <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: James Hogan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/mips/kvm/kvm_locore.S | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)

--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -159,9 +159,11 @@ FEXPORT(__kvm_mips_vcpu_run)

FEXPORT(__kvm_mips_load_asid)
/* Set the ASID for the Guest Kernel */
- INT_SLL t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */
- /* addresses shift to 0x80000000 */
- bltz t0, 1f /* If kernel */
+ PTR_L t0, VCPU_COP0(k1)
+ LONG_L t0, COP0_STATUS(t0)
+ andi t0, KSU_USER | ST0_ERL | ST0_EXL
+ xori t0, KSU_USER
+ bnez t0, 1f /* If kernel */
INT_ADDIU t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */
INT_ADDIU t1, k1, VCPU_GUEST_USER_ASID /* else user */
1:
@@ -438,9 +440,11 @@ __kvm_mips_return_to_guest:
mtc0 t0, CP0_EPC

/* Set the ASID for the Guest Kernel */
- INT_SLL t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */
- /* addresses shift to 0x80000000 */
- bltz t0, 1f /* If kernel */
+ PTR_L t0, VCPU_COP0(k1)
+ LONG_L t0, COP0_STATUS(t0)
+ andi t0, KSU_USER | ST0_ERL | ST0_EXL
+ xori t0, KSU_USER
+ bnez t0, 1f /* If kernel */
INT_ADDIU t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */
INT_ADDIU t1, k1, VCPU_GUEST_USER_ASID /* else user */
1:

2016-03-01 23:51:09

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 028/130] shrink_dentry_list(): take parents ->d_lock earlier

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit 046b961b45f93a92e4c70525a12f3d378bced130 upstream.

The cause of livelocks there is that we are taking ->d_lock on
dentry and its parent in the wrong order, forcing us to use
trylock on the parent's one. d_walk() takes them in the right
order, and unfortunately it's not hard to create a situation
when shrink_dentry_list() can't make progress since trylock
keeps failing, and shrink_dcache_parent() or check_submounts_and_drop()
keeps calling d_walk() disrupting the very shrink_dentry_list() it's
waiting for.

Solution is straightforward - if that trylock fails, let's unlock
the dentry itself and take locks in the right order. We need to
stabilize ->d_parent without holding ->d_lock, but that's doable
using RCU. And we'd better do that in the very beginning of the
loop in shrink_dentry_list(), since the checks on refcount, etc.
would need to be redone anyway.

That deals with a half of the problem - killing dentries on the
shrink list itself. Another one (dropping their parents) is
in the next commit.

Locking the parent is interesting - it would be easy to do rcu_read_lock(),
lock whatever we think is a parent, lock dentry itself and check
if the parent is still the right one. Except that we need to check
that *before* locking the dentry, or we are risking taking ->d_lock
out of order. Fortunately, once the D1 is locked, we can check if
D2->d_parent is equal to D1 without the need to lock D2; D2->d_parent
can start or stop pointing to D1 only under D1->d_lock, so taking
D1->d_lock is enough. In other words, the right solution is
rcu_read_lock/lock what looks like parent right now/check if it's
still our parent/rcu_read_unlock/lock the child.

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 53 +++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 41 insertions(+), 12 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -528,6 +528,38 @@ failed:
return dentry; /* try again with same dentry */
}

+static inline struct dentry *lock_parent(struct dentry *dentry)
+{
+ struct dentry *parent = dentry->d_parent;
+ if (IS_ROOT(dentry))
+ return NULL;
+ if (likely(spin_trylock(&parent->d_lock)))
+ return parent;
+ spin_unlock(&dentry->d_lock);
+ rcu_read_lock();
+again:
+ parent = ACCESS_ONCE(dentry->d_parent);
+ spin_lock(&parent->d_lock);
+ /*
+ * We can't blindly lock dentry until we are sure
+ * that we won't violate the locking order.
+ * Any changes of dentry->d_parent must have
+ * been done with parent->d_lock held, so
+ * spin_lock() above is enough of a barrier
+ * for checking if it's still our child.
+ */
+ if (unlikely(parent != dentry->d_parent)) {
+ spin_unlock(&parent->d_lock);
+ goto again;
+ }
+ rcu_read_unlock();
+ if (parent != dentry)
+ spin_lock(&dentry->d_lock);
+ else
+ parent = NULL;
+ return parent;
+}
+
/*
* This is dput
*
@@ -805,6 +837,8 @@ static void shrink_dentry_list(struct li
struct inode *inode;
dentry = list_entry(list->prev, struct dentry, d_lru);
spin_lock(&dentry->d_lock);
+ parent = lock_parent(dentry);
+
/*
* The dispose list is isolated and dentries are not accounted
* to the LRU here, so we can simply remove it from the list
@@ -818,6 +852,8 @@ static void shrink_dentry_list(struct li
*/
if ((int)dentry->d_lockref.count > 0) {
spin_unlock(&dentry->d_lock);
+ if (parent)
+ spin_unlock(&parent->d_lock);
continue;
}

@@ -825,6 +861,8 @@ static void shrink_dentry_list(struct li
if (unlikely(dentry->d_flags & DCACHE_DENTRY_KILLED)) {
bool can_free = dentry->d_flags & DCACHE_MAY_FREE;
spin_unlock(&dentry->d_lock);
+ if (parent)
+ spin_unlock(&parent->d_lock);
if (can_free)
dentry_free(dentry);
continue;
@@ -834,22 +872,13 @@ static void shrink_dentry_list(struct li
if (inode && unlikely(!spin_trylock(&inode->i_lock))) {
d_shrink_add(dentry, list);
spin_unlock(&dentry->d_lock);
+ if (parent)
+ spin_unlock(&parent->d_lock);
continue;
}

- parent = NULL;
- if (!IS_ROOT(dentry)) {
- parent = dentry->d_parent;
- if (unlikely(!spin_trylock(&parent->d_lock))) {
- if (inode)
- spin_unlock(&inode->i_lock);
- d_shrink_add(dentry, list);
- spin_unlock(&dentry->d_lock);
- continue;
- }
- }
-
__dentry_kill(dentry);
+
/*
* We need to prune ancestors too. This is necessary to prevent
* quadratic behavior of shrink_dcache_parent(), but is also

2016-03-01 23:51:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 031/130] dcache: add missing lockdep annotation

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Linus Torvalds <[email protected]>

commit 9f12600fe425bc28f0ccba034a77783c09c15af4 upstream.

lock_parent() very much on purpose does nested locking of dentries, and
is careful to maintain the right order (lock parent first). But because
it didn't annotate the nested locking order, lockdep thought it might be
a deadlock on d_lock, and complained.

Add the proper annotation for the inner locking of the child dentry to
make lockdep happy.

Introduced by commit 046b961b45f9 ("shrink_dentry_list(): take parent's
->d_lock earlier").

Reported-and-tested-by: Josh Boyer <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -551,7 +551,7 @@ again:
}
rcu_read_unlock();
if (parent != dentry)
- spin_lock(&dentry->d_lock);
+ spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
else
parent = NULL;
return parent;

2016-03-02 02:08:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 030/130] dentry_kill() doesn't need the second argument now

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit 8cbf74da435d1bd13dbb790f94c7ff67b2fb6af4 upstream.

it's 1 in the only remaining caller.

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -498,8 +498,7 @@ static void __dentry_kill(struct dentry
* If ref is non-zero, then decrement the refcount too.
* Returns dentry requiring refcount drop, or NULL if we're done.
*/
-static struct dentry *
-dentry_kill(struct dentry *dentry, int unlock_on_failure)
+static struct dentry *dentry_kill(struct dentry *dentry)
__releases(dentry->d_lock)
{
struct inode *inode = dentry->d_inode;
@@ -521,10 +520,8 @@ dentry_kill(struct dentry *dentry, int u
return parent;

failed:
- if (unlock_on_failure) {
- spin_unlock(&dentry->d_lock);
- cpu_relax();
- }
+ spin_unlock(&dentry->d_lock);
+ cpu_relax();
return dentry; /* try again with same dentry */
}

@@ -616,7 +613,7 @@ repeat:
return;

kill_it:
- dentry = dentry_kill(dentry, 1);
+ dentry = dentry_kill(dentry);
if (dentry)
goto repeat;
}

2016-03-01 23:51:04

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 029/130] dealing with the rest of shrink_dentry_list() livelock

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit b2b80195d8829921506880f6dccd21cabd163d0d upstream.

We have the same problem with ->d_lock order in the inner loop, where
we are dropping references to ancestors. Same solution, basically -
instead of using dentry_kill() we use lock_parent() (introduced in the
previous commit) to get that lock in a safe way, recheck ->d_count
(in case if lock_parent() has ended up dropping and retaking ->d_lock
and somebody managed to grab a reference during that window), trylock
the inode->i_lock and use __dentry_kill() to do the rest.

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 22 ++++++++++++++++++++--
1 file changed, 20 insertions(+), 2 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -886,8 +886,26 @@ static void shrink_dentry_list(struct li
* fragmentation.
*/
dentry = parent;
- while (dentry && !lockref_put_or_lock(&dentry->d_lockref))
- dentry = dentry_kill(dentry, 1);
+ while (dentry && !lockref_put_or_lock(&dentry->d_lockref)) {
+ parent = lock_parent(dentry);
+ if (dentry->d_lockref.count != 1) {
+ dentry->d_lockref.count--;
+ spin_unlock(&dentry->d_lock);
+ if (parent)
+ spin_unlock(&parent->d_lock);
+ break;
+ }
+ inode = dentry->d_inode; /* can't be NULL */
+ if (unlikely(!spin_trylock(&inode->i_lock))) {
+ spin_unlock(&dentry->d_lock);
+ if (parent)
+ spin_unlock(&parent->d_lock);
+ cpu_relax();
+ continue;
+ }
+ __dentry_kill(dentry);
+ dentry = parent;
+ }
}
}

2016-03-02 02:08:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 032/130] lock_parent: don't step on stale ->d_parent of all-but-freed one

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit c2338f2dc7c1e9f6202f370c64ffd7f44f3d4b51 upstream.

Dentry that had been through (or into) __dentry_kill() might be seen
by shrink_dentry_list(); that's normal, it'll be taken off the shrink
list and freed if __dentry_kill() has already finished. The problem
is, its ->d_parent might be pointing to already freed dentry, so
lock_parent() needs to be careful.

We need to check that dentry hasn't already gone into __dentry_kill()
*and* grab rcu_read_lock() before dropping ->d_lock - the latter makes
sure that whatever we see in ->d_parent after dropping ->d_lock it
won't be freed until we drop rcu_read_lock().

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -530,10 +530,12 @@ static inline struct dentry *lock_parent
struct dentry *parent = dentry->d_parent;
if (IS_ROOT(dentry))
return NULL;
+ if (unlikely((int)dentry->d_lockref.count < 0))
+ return NULL;
if (likely(spin_trylock(&parent->d_lock)))
return parent;
- spin_unlock(&dentry->d_lock);
rcu_read_lock();
+ spin_unlock(&dentry->d_lock);
again:
parent = ACCESS_ONCE(dentry->d_parent);
spin_lock(&parent->d_lock);

2016-03-02 02:09:23

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 025/130] lift the "already marked killed" case into shrink_dentry_list()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit 64fd72e0a44bdd62c5ca277cb24d0d02b2d8e9dc upstream.

It can happen only when dentry_kill() is called with unlock_on_failure
equal to 0 - other callers had dentry pinned until the moment they've
got ->d_lock and DCACHE_DENTRY_KILLED is set only after lockref_mark_dead().

IOW, only one of three call sites of dentry_kill() might end up reaching
that code. Just move it there.

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -453,12 +453,6 @@ dentry_kill(struct dentry *dentry, int u
struct dentry *parent = NULL;
bool can_free = true;

- if (unlikely(dentry->d_flags & DCACHE_DENTRY_KILLED)) {
- can_free = dentry->d_flags & DCACHE_MAY_FREE;
- spin_unlock(&dentry->d_lock);
- goto out;
- }
-
inode = dentry->d_inode;
if (inode && !spin_trylock(&inode->i_lock)) {
relock:
@@ -816,6 +810,15 @@ static void shrink_dentry_list(struct li
continue;
}

+
+ if (unlikely(dentry->d_flags & DCACHE_DENTRY_KILLED)) {
+ bool can_free = dentry->d_flags & DCACHE_MAY_FREE;
+ spin_unlock(&dentry->d_lock);
+ if (can_free)
+ dentry_free(dentry);
+ continue;
+ }
+
parent = dentry_kill(dentry, 0);
/*
* If dentry_kill returns NULL, we have nothing more to do.


2016-03-02 02:09:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 024/130] iw_cxgb3: Fix incorrectly returning error on success

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Hariprasad S <[email protected]>

commit 67f1aee6f45059fd6b0f5b0ecb2c97ad0451f6b3 upstream.

The cxgb3_*_send() functions return NET_XMIT_ values, which are
positive integer values. So don't treat positive return values
as an error.

Signed-off-by: Steve Wise <[email protected]>
Signed-off-by: Hariprasad Shenai <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
[a pox on developers and maintainers who do not cc: stable for bug fixes like this - gregkh]
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/infiniband/hw/cxgb3/iwch_cm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/infiniband/hw/cxgb3/iwch_cm.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_cm.c
@@ -149,7 +149,7 @@ static int iwch_l2t_send(struct t3cdev *
error = l2t_send(tdev, skb, l2e);
if (error < 0)
kfree_skb(skb);
- return error;
+ return error < 0 ? error : 0;
}

int iwch_cxgb3_ofld_send(struct t3cdev *tdev, struct sk_buff *skb)
@@ -165,7 +165,7 @@ int iwch_cxgb3_ofld_send(struct t3cdev *
error = cxgb3_ofld_send(tdev, skb);
if (error < 0)
kfree_skb(skb);
- return error;
+ return error < 0 ? error : 0;
}

static void release_tid(struct t3cdev *tdev, u32 hwtid, struct sk_buff *skb)


2016-03-02 02:09:45

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 023/130] proc: Fix ptrace-based permission checks for accessing task maps

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Corey Wright <[email protected]>

Modify mm_access() calls in fs/proc/task_mmu.c and fs/proc/task_nommu.c to
have the mode include PTRACE_MODE_FSCREDS so accessing /proc/pid/maps and
/proc/pid/pagemap is not denied to all users.

In backporting upstream commit caaee623 to pre-3.18 kernel versions, it was
overlooked that mm_access() is used in fs/proc/task_*mmu.c, as those calls
were removed in 3.18 (by upstream commit 29a40ace) and did not exist at the
time of the original commit.

Signed-off-by: Corey Wright <[email protected]>
Acked-by: Jann Horn <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/proc/task_mmu.c | 4 ++--
fs/proc/task_nommu.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -165,7 +165,7 @@ static void *m_start(struct seq_file *m,
if (!priv->task)
return ERR_PTR(-ESRCH);

- mm = mm_access(priv->task, PTRACE_MODE_READ);
+ mm = mm_access(priv->task, PTRACE_MODE_READ_FSCREDS);
if (!mm || IS_ERR(mm))
return mm;
down_read(&mm->mmap_sem);
@@ -1182,7 +1182,7 @@ static ssize_t pagemap_read(struct file
if (!pm.buffer)
goto out_task;

- mm = mm_access(task, PTRACE_MODE_READ);
+ mm = mm_access(task, PTRACE_MODE_READ_FSCREDS);
ret = PTR_ERR(mm);
if (!mm || IS_ERR(mm))
goto out_free;
--- a/fs/proc/task_nommu.c
+++ b/fs/proc/task_nommu.c
@@ -216,7 +216,7 @@ static void *m_start(struct seq_file *m,
if (!priv->task)
return ERR_PTR(-ESRCH);

- mm = mm_access(priv->task, PTRACE_MODE_READ);
+ mm = mm_access(priv->task, PTRACE_MODE_READ_FSCREDS);
if (!mm || IS_ERR(mm)) {
put_task_struct(priv->task);
priv->task = NULL;


2016-03-02 02:09:47

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 014/130] drm/radeon: fix hotplug race at startup

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dave Airlie <[email protected]>

commit 7f98ca454ad373fc1b76be804fa7138ff68c1d27 upstream.

We apparently get a hotplug irq before we've initialised
modesetting:

[drm] Loading R100 Microcode
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<c125f56f>] __mutex_lock_slowpath+0x23/0x91
*pde = 00000000
Oops: 0002 [#1]
Modules linked in: radeon(+) drm_kms_helper ttm drm i2c_algo_bit backlight pcspkr psmouse evdev sr_mod input_leds led_class cdrom sg parport_pc parport floppy intel_agp intel_gtt lpc_ich acpi_cpufreq processor button mfd_core agpgart uhci_hcd ehci_hcd rng_core snd_intel8x0 snd_ac97_codec ac97_bus snd_pcm usbcore usb_common i2c_i801 i2c_core snd_timer snd soundcore thermal_sys
CPU: 0 PID: 15 Comm: kworker/0:1 Not tainted 4.2.0-rc7-00015-gbf67402 #111
Hardware name: MicroLink /D850MV , BIOS MV85010A.86A.0067.P24.0304081124 04/08/2003
Workqueue: events radeon_hotplug_work_func [radeon]
task: f6ca5900 ti: f6d3e000 task.ti: f6d3e000
EIP: 0060:[<c125f56f>] EFLAGS: 00010282 CPU: 0
EIP is at __mutex_lock_slowpath+0x23/0x91
EAX: 00000000 EBX: f5e900fc ECX: 00000000 EDX: fffffffe
ESI: f6ca5900 EDI: f5e90100 EBP: f5e90000 ESP: f6d3ff0c
DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
CR0: 8005003b CR2: 00000000 CR3: 36f61000 CR4: 000006d0
Stack:
f5e90100 00000000 c103c4c1 f6d2a5a0 f5e900fc f6df394c c125f162 f8b0faca
f6d2a5a0 c138ca00 f6df394c f7395600 c1034741 00d40000 00000000 f6d2a5a0
c138ca00 f6d2a5b8 c138ca10 c1034b58 00000001 f6d40000 f6ca5900 f6d0c940
Call Trace:
[<c103c4c1>] ? dequeue_task_fair+0xa4/0xb7
[<c125f162>] ? mutex_lock+0x9/0xa
[<f8b0faca>] ? radeon_hotplug_work_func+0x17/0x57 [radeon]
[<c1034741>] ? process_one_work+0xfc/0x194
[<c1034b58>] ? worker_thread+0x18d/0x218
[<c10349cb>] ? rescuer_thread+0x1d5/0x1d5
[<c103742a>] ? kthread+0x7b/0x80
[<c12601c0>] ? ret_from_kernel_thread+0x20/0x30
[<c10373af>] ? init_completion+0x18/0x18
Code: 42 08 e8 8e a6 dd ff c3 57 56 53 83 ec 0c 8b 35 48 f7 37 c1 8b 10 4a 74 1a 89 c3 8d 78 04 8b 40 08 89 63

Reported-and-Tested-by: Meelis Roos <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/radeon_irq_kms.c | 5 +++++
1 file changed, 5 insertions(+)

--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
@@ -79,6 +79,11 @@ static void radeon_hotplug_work_func(str
struct drm_mode_config *mode_config = &dev->mode_config;
struct drm_connector *connector;

+ /* we can race here at startup, some boards seem to trigger
+ * hotplug irqs when they shouldn't. */
+ if (!rdev->mode_info.mode_config_initialized)
+ return;
+
mutex_lock(&mode_config->mutex);
if (mode_config->num_connector) {
list_for_each_entry(connector, &mode_config->connector_list, head)


2016-03-02 02:11:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 008/130] bcache: clear BCACHE_DEV_UNLINK_DONE flag when attaching a backing device

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Zheng Liu <[email protected]>

commit fecaee6f20ee122ad75402c53d8278f9bb142ddc upstream.

This bug can be reproduced by the following script:

#!/bin/bash

bcache_sysfs="/sys/fs/bcache"

function clear_cache()
{
    if [ ! -e $bcache_sysfs ]; then
        echo "no bcache sysfs"
        exit
    fi

    # grab the cache set uuid, then detach and re-attach the backing device
    cset_uuid=$(ls -l $bcache_sysfs|head -n 2|tail -n 1|awk '{print $9}')
    sudo sh -c "echo $cset_uuid > /sys/block/sdb/sdb1/bcache/detach"
    sleep 5
    sudo sh -c "echo $cset_uuid > /sys/block/sdb/sdb1/bcache/attach"
}

for ((i=0;i<10;i++)); do
    clear_cache
done

The warning messages look like below:
[ 275.948611] ------------[ cut here ]------------
[ 275.963840] WARNING: at fs/sysfs/dir.c:512 sysfs_add_one+0xb8/0xd0() (Tainted: P W
--------------- )
[ 275.979253] Hardware name: Tecal RH2285
[ 275.994106] sysfs: cannot create duplicate filename '/devices/pci0000:00/0000:00:09.0/0000:08:00.0/host4/target4:2:1/4:2:1:0/block/sdb/sdb1/bcache/cache'
[ 276.024105] Modules linked in: bcache tcp_diag inet_diag ipmi_devintf ipmi_si ipmi_msghandler
bonding 8021q garp stp llc ipv6 ext3 jbd loop sg iomemory_vsl(P) bnx2 microcode serio_raw i2c_i801
i2c_core iTCO_wdt iTCO_vendor_support i7core_edac edac_core shpchp ext4 jbd2 mbcache megaraid_sas
pata_acpi ata_generic ata_piix dm_mod [last unloaded: scsi_wait_scan]
[ 276.072643] Pid: 2765, comm: sh Tainted: P W --------------- 2.6.32 #1
[ 276.089315] Call Trace:
[ 276.105801] [<ffffffff81070fe7>] ? warn_slowpath_common+0x87/0xc0
[ 276.122650] [<ffffffff810710d6>] ? warn_slowpath_fmt+0x46/0x50
[ 276.139361] [<ffffffff81205c08>] ? sysfs_add_one+0xb8/0xd0
[ 276.156012] [<ffffffff8120609b>] ? sysfs_do_create_link+0x12b/0x170
[ 276.172682] [<ffffffff81206113>] ? sysfs_create_link+0x13/0x20
[ 276.189282] [<ffffffffa03bda21>] ? bcache_device_link+0xc1/0x110 [bcache]
[ 276.205993] [<ffffffffa03bfa08>] ? bch_cached_dev_attach+0x478/0x4f0 [bcache]
[ 276.222794] [<ffffffffa03c4a17>] ? bch_cached_dev_store+0x627/0x780 [bcache]
[ 276.239680] [<ffffffff8116783a>] ? alloc_pages_current+0xaa/0x110
[ 276.256594] [<ffffffff81203b15>] ? sysfs_write_file+0xe5/0x170
[ 276.273364] [<ffffffff811887b8>] ? vfs_write+0xb8/0x1a0
[ 276.290133] [<ffffffff811890b1>] ? sys_write+0x51/0x90
[ 276.306368] [<ffffffff8100c072>] ? system_call_fastpath+0x16/0x1b
[ 276.322301] ---[ end trace 9f5d4fcdd0c3edfb ]---
[ 276.338241] ------------[ cut here ]------------
[ 276.354109] WARNING: at /home/wenqing.lz/bcache/bcache/super.c:720
bcache_device_link+0xdf/0x110 [bcache]() (Tainted: P W --------------- )
[ 276.386017] Hardware name: Tecal RH2285
[ 276.401430] Couldn't create device <-> cache set symlinks
[ 276.401759] Modules linked in: bcache tcp_diag inet_diag ipmi_devintf ipmi_si ipmi_msghandler
bonding 8021q garp stp llc ipv6 ext3 jbd loop sg iomemory_vsl(P) bnx2 microcode serio_raw i2c_i801
i2c_core iTCO_wdt iTCO_vendor_support i7core_edac edac_core shpchp ext4 jbd2 mbcache megaraid_sas
pata_acpi ata_generic ata_piix dm_mod [last unloaded: scsi_wait_scan]
[ 276.465477] Pid: 2765, comm: sh Tainted: P W --------------- 2.6.32 #1
[ 276.482169] Call Trace:
[ 276.498610] [<ffffffff81070fe7>] ? warn_slowpath_common+0x87/0xc0
[ 276.515405] [<ffffffff810710d6>] ? warn_slowpath_fmt+0x46/0x50
[ 276.532059] [<ffffffffa03bda3f>] ? bcache_device_link+0xdf/0x110 [bcache]
[ 276.548808] [<ffffffffa03bfa08>] ? bch_cached_dev_attach+0x478/0x4f0 [bcache]
[ 276.565569] [<ffffffffa03c4a17>] ? bch_cached_dev_store+0x627/0x780 [bcache]
[ 276.582418] [<ffffffff8116783a>] ? alloc_pages_current+0xaa/0x110
[ 276.599341] [<ffffffff81203b15>] ? sysfs_write_file+0xe5/0x170
[ 276.616142] [<ffffffff811887b8>] ? vfs_write+0xb8/0x1a0
[ 276.632607] [<ffffffff811890b1>] ? sys_write+0x51/0x90
[ 276.648671] [<ffffffff8100c072>] ? system_call_fastpath+0x16/0x1b
[ 276.664756] ---[ end trace 9f5d4fcdd0c3edfc ]---

We forgot to clear the BCACHE_DEV_UNLINK_DONE flag in the bcache_device_attach()
function when we attach a backing device for the first time. After detaching this
backing device, this flag will still be set, so sysfs_remove_link() isn't called in
bcache_device_unlink(). Then, when we attach this backing device again,
sysfs_create_link() will return an EEXIST error in bcache_device_link().

So the fix is trivial: we clear this flag in bcache_device_link().
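
For context, a rough sketch (paraphrased, not the exact driver code) of the
unlink side: the sysfs links are removed only when the flag was not already
set, so a stale flag left over from a previous detach skips the removal and
the next attach then hits EEXIST:

    /* paraphrased sketch of bcache_device_unlink(), drivers/md/bcache/super.c */
    static void bcache_device_unlink(struct bcache_device *d)
    {
            if (d->c && !test_and_set_bit(BCACHE_DEV_UNLINK_DONE, &d->flags)) {
                    sysfs_remove_link(&d->c->kobj, d->name);
                    sysfs_remove_link(&d->kobj, "cache");
            }
    }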

Signed-off-by: Zheng Liu <[email protected]>
Tested-by: Joshua Schmid <[email protected]>
Tested-by: Eric Wheeler <[email protected]>
Cc: Kent Overstreet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/bcache/super.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -712,6 +712,8 @@ static void bcache_device_link(struct bc
WARN(sysfs_create_link(&d->kobj, &c->kobj, "cache") ||
sysfs_create_link(&c->kobj, &d->kobj, d->name),
"Couldn't create device <-> cache set symlinks");
+
+ clear_bit(BCACHE_DEV_UNLINK_DONE, &d->flags);
}

static void bcache_device_detach(struct bcache_device *d)


2016-03-02 02:11:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 009/130] bcache: fix a leak in bch_cached_dev_run()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Al Viro <[email protected]>

commit 4d4d8573a8451acc9f01cbea24b7e55f04a252fe upstream.

Signed-off-by: Al Viro <[email protected]>
Tested-by: Joshua Schmid <[email protected]>
Tested-by: Eric Wheeler <[email protected]>
Cc: Kent Overstreet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/bcache/super.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -884,8 +884,11 @@ void bch_cached_dev_run(struct cached_de
buf[SB_LABEL_SIZE] = '\0';
env[2] = kasprintf(GFP_KERNEL, "CACHED_LABEL=%s", buf);

- if (atomic_xchg(&dc->running, 1))
+ if (atomic_xchg(&dc->running, 1)) {
+ kfree(env[1]);
+ kfree(env[2]);
return;
+ }

if (!d->c &&
BDEV_STATE(&dc->sb) != BDEV_STATE_NONE) {


2016-03-02 02:11:38

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 007/130] bcache: Add a cond_resched() call to gc

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kent Overstreet <[email protected]>

commit c5f1e5adf956e3ba82d204c7c141a75da9fa449a upstream.

Signed-off-by: Takashi Iwai <[email protected]>
Tested-by: Eric Wheeler <[email protected]>
Cc: Kent Overstreet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/bcache/btree.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1641,6 +1641,7 @@ static void bch_btree_gc(struct cache_se
do {
ret = btree_root(gc_root, c, &op, &writes, &stats);
closure_sync(&writes);
+ cond_resched();

if (ret && ret != -EAGAIN)
pr_warn("gc failed!");


2016-03-02 02:12:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 004/130] netfilter: ipt_rpfilter: remove the nh_scope test in rpfilter_lookup_reverse

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: lucien <[email protected]>

commit cc4998febd567d1c671684abce5595344bd4e8b2 upstream.

The --accept-local option works for res.type == RTN_LOCAL, which should come
from the local table, but there the fib_info's nh->nh_scope is set to
RT_SCOPE_NOWHERE (> RT_SCOPE_HOST) in fib_create_info():

    if (cfg->fc_scope == RT_SCOPE_HOST) {
        struct fib_nh *nh = fi->fib_nh;

        /* Local address is added. */
        if (nhs != 1 || nh->nh_gw)
            goto err_inval;
        nh->nh_scope = RT_SCOPE_NOWHERE;   <===
        nh->nh_dev = dev_get_by_index(net, fi->fib_nh->nh_oif);
        err = -ENODEV;
        if (!nh->nh_dev)
            goto failure;

but in our rpfilter_lookup_reverse():

    if (dev_match || flags & XT_RPFILTER_LOOSE)
        return FIB_RES_NH(res).nh_scope <= RT_SCOPE_HOST;

If nh->nh_scope > RT_SCOPE_HOST, this check fails, so the --accept-local
option never takes effect.

It seems the test is bogus and can be removed to fix this issue:

    if (dev_match || flags & XT_RPFILTER_LOOSE)
        return FIB_RES_NH(res).nh_scope <= RT_SCOPE_HOST;

ipv6 does not have this issue.

Signed-off-by: Xin Long <[email protected]>
Acked-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/ipv4/netfilter/ipt_rpfilter.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

--- a/net/ipv4/netfilter/ipt_rpfilter.c
+++ b/net/ipv4/netfilter/ipt_rpfilter.c
@@ -61,9 +61,7 @@ static bool rpfilter_lookup_reverse(stru
if (FIB_RES_DEV(res) == dev)
dev_match = true;
#endif
- if (dev_match || flags & XT_RPFILTER_LOOSE)
- return FIB_RES_NH(res).nh_scope <= RT_SCOPE_HOST;
- return dev_match;
+ return dev_match || flags & XT_RPFILTER_LOOSE;
}

static bool rpfilter_is_local(const struct sk_buff *skb)


2016-03-02 02:11:58

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 005/130] netfilter: ip6t_SYNPROXY: fix NULL pointer dereference

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Phil Sutter <[email protected]>

commit 96fffb4f23f124f297d51dedc9cf51d19eb88ee1 upstream.

This happens when network namespaces are enabled: synproxy_send_server_ack()
and synproxy_send_client_ack() pass a NULL nfct, so deriving the namespace from
that pointer with nf_ct_net() dereferences NULL. Use the synproxy template
conntrack (snet->tmpl) to look up the namespace instead.

Suggested-by: Patrick McHardy <[email protected]>
Signed-off-by: Phil Sutter <[email protected]>
Acked-by: Patrick McHardy <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/ipv6/netfilter/ip6t_SYNPROXY.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)

--- a/net/ipv6/netfilter/ip6t_SYNPROXY.c
+++ b/net/ipv6/netfilter/ip6t_SYNPROXY.c
@@ -37,12 +37,13 @@ synproxy_build_ip(struct sk_buff *skb, c
}

static void
-synproxy_send_tcp(const struct sk_buff *skb, struct sk_buff *nskb,
+synproxy_send_tcp(const struct synproxy_net *snet,
+ const struct sk_buff *skb, struct sk_buff *nskb,
struct nf_conntrack *nfct, enum ip_conntrack_info ctinfo,
struct ipv6hdr *niph, struct tcphdr *nth,
unsigned int tcp_hdr_size)
{
- struct net *net = nf_ct_net((struct nf_conn *)nfct);
+ struct net *net = nf_ct_net(snet->tmpl);
struct dst_entry *dst;
struct flowi6 fl6;

@@ -83,7 +84,8 @@ free_nskb:
}

static void
-synproxy_send_client_synack(const struct sk_buff *skb, const struct tcphdr *th,
+synproxy_send_client_synack(const struct synproxy_net *snet,
+ const struct sk_buff *skb, const struct tcphdr *th,
const struct synproxy_options *opts)
{
struct sk_buff *nskb;
@@ -119,7 +121,7 @@ synproxy_send_client_synack(const struct

synproxy_build_options(nth, opts);

- synproxy_send_tcp(skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY,
+ synproxy_send_tcp(snet, skb, nskb, skb->nfct, IP_CT_ESTABLISHED_REPLY,
niph, nth, tcp_hdr_size);
}

@@ -163,7 +165,7 @@ synproxy_send_server_syn(const struct sy

synproxy_build_options(nth, opts);

- synproxy_send_tcp(skb, nskb, &snet->tmpl->ct_general, IP_CT_NEW,
+ synproxy_send_tcp(snet, skb, nskb, &snet->tmpl->ct_general, IP_CT_NEW,
niph, nth, tcp_hdr_size);
}

@@ -203,7 +205,7 @@ synproxy_send_server_ack(const struct sy

synproxy_build_options(nth, opts);

- synproxy_send_tcp(skb, nskb, NULL, 0, niph, nth, tcp_hdr_size);
+ synproxy_send_tcp(snet, skb, nskb, NULL, 0, niph, nth, tcp_hdr_size);
}

static void
@@ -241,7 +243,7 @@ synproxy_send_client_ack(const struct sy

synproxy_build_options(nth, opts);

- synproxy_send_tcp(skb, nskb, NULL, 0, niph, nth, tcp_hdr_size);
+ synproxy_send_tcp(snet, skb, nskb, NULL, 0, niph, nth, tcp_hdr_size);
}

static bool
@@ -301,7 +303,7 @@ synproxy_tg6(struct sk_buff *skb, const
XT_SYNPROXY_OPT_SACK_PERM |
XT_SYNPROXY_OPT_ECN);

- synproxy_send_client_synack(skb, th, &opts);
+ synproxy_send_client_synack(snet, skb, th, &opts);
return NF_DROP;

} else if (th->ack && !(th->fin || th->rst || th->syn)) {


2016-03-02 02:12:34

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 003/130] netfilter: nf_tables: fix bogus warning in nft_data_uninit()

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mirek Kratochvil <[email protected]>

commit 960bd2c26421d321e890f1936938196ead41976f upstream.

The values 0x00000000-0xfffffeff are reserved for userspace datatypes. When
deleting set elements with maps, a bogus warning is triggered:

WARNING: CPU: 0 PID: 11133 at net/netfilter/nf_tables_api.c:4481 nft_data_uninit+0x35/0x40 [nf_tables]()

This fixes the check according to the enum definition in
include/linux/netfilter/nf_tables.h.
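
For reference, the relevant enum (quoted from that header as context for the
new check; verdict types start at 0xffffff00, so any value below
NFT_DATA_VERDICT is a userspace data type):

    enum nft_data_types {
            NFT_DATA_VALUE,
            NFT_DATA_VERDICT = 0xffffff00U,
    };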

Fixes: https://bugzilla.netfilter.org/show_bug.cgi?id=1013
Signed-off-by: Mirek Kratochvil <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/netfilter/nf_tables_api.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -3316,9 +3316,9 @@ EXPORT_SYMBOL_GPL(nft_data_init);
*/
void nft_data_uninit(const struct nft_data *data, enum nft_data_types type)
{
- switch (type) {
- case NFT_DATA_VALUE:
+ if (type < NFT_DATA_VERDICT)
return;
+ switch (type) {
case NFT_DATA_VERDICT:
return nft_verdict_uninit(data);
default:


2016-03-02 02:12:51

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 002/130] drm/ast: Initialized data needed to map fbdev memory

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Egbert Eich <[email protected]>

commit 28fb4cb7fa6f63dc2fbdb5f2564dcbead8e3eee0 upstream.

Due to a missing initialization there was no way to map fbdev memory, so,
for example, using the X server with the fbdev driver failed.
This fix adds initialization of fix.smem_start and fix.smem_len
in the fb_info structure, which solves this problem.
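
For background, a minimal illustrative userspace sketch (not part of the patch;
the device path and error handling are assumptions) of why this matters: the
fbdev mmap path is driven by fix.smem_start/fix.smem_len, so with both left at
zero a client such as the X fbdev driver cannot map the framebuffer:

    /* Illustrative only: how a userspace client maps the framebuffer. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/fb.h>

    int main(void)
    {
            struct fb_fix_screeninfo fix;
            int fd = open("/dev/fb0", O_RDWR);

            if (fd < 0 || ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0)
                    return 1;
            /* With smem_len left at 0 (the bug fixed here), this cannot work. */
            void *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
            return fb == MAP_FAILED;
    }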

Requested-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Egbert Eich <[email protected]>
[pulled from SuSE tree by me - airlied]
Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/ast/ast_drv.h | 1 +
drivers/gpu/drm/ast/ast_fb.c | 7 +++++++
drivers/gpu/drm/ast/ast_main.c | 1 +
drivers/gpu/drm/ast/ast_mode.c | 2 ++
4 files changed, 11 insertions(+)

--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -296,6 +296,7 @@ int ast_framebuffer_init(struct drm_devi
int ast_fbdev_init(struct drm_device *dev);
void ast_fbdev_fini(struct drm_device *dev);
void ast_fbdev_set_suspend(struct drm_device *dev, int state);
+void ast_fbdev_set_base(struct ast_private *ast, unsigned long gpu_addr);

struct ast_bo {
struct ttm_buffer_object bo;
--- a/drivers/gpu/drm/ast/ast_fb.c
+++ b/drivers/gpu/drm/ast/ast_fb.c
@@ -367,3 +367,10 @@ void ast_fbdev_set_suspend(struct drm_de

fb_set_suspend(ast->fbdev->helper.fbdev, state);
}
+
+void ast_fbdev_set_base(struct ast_private *ast, unsigned long gpu_addr)
+{
+ ast->fbdev->helper.fbdev->fix.smem_start =
+ ast->fbdev->helper.fbdev->apertures->ranges[0].base + gpu_addr;
+ ast->fbdev->helper.fbdev->fix.smem_len = ast->vram_size - gpu_addr;
+}
--- a/drivers/gpu/drm/ast/ast_main.c
+++ b/drivers/gpu/drm/ast/ast_main.c
@@ -312,6 +312,7 @@ int ast_driver_load(struct drm_device *d
dev->mode_config.min_height = 0;
dev->mode_config.preferred_depth = 24;
dev->mode_config.prefer_shadow = 1;
+ dev->mode_config.fb_base = pci_resource_start(ast->dev->pdev, 0);

if (ast->chip == AST2100 ||
ast->chip == AST2200 ||
--- a/drivers/gpu/drm/ast/ast_mode.c
+++ b/drivers/gpu/drm/ast/ast_mode.c
@@ -509,6 +509,8 @@ static int ast_crtc_do_set_base(struct d
ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
if (ret)
DRM_ERROR("failed to kmap fbcon\n");
+ else
+ ast_fbdev_set_base(ast, gpu_addr);
}
ast_bo_unreserve(bo);



2016-03-02 02:13:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 012/130] bcache: Change refill_dirty() to always scan entire disk if necessary

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kent Overstreet <[email protected]>

commit 627ccd20b4ad3ba836472468208e2ac4dfadbf03 upstream.

Previously, it would only scan the entire disk if it was starting from
the very start of the disk - i.e. if the previous scan got to the end.

This was broken by refill_full_stripes(), which updates last_scanned so
that refill_dirty() never triggered the searched_from_start path.

But if we change refill_dirty() to always scan the entire disk if
necessary, regardless of what last_scanned was, the code gets cleaner
and we fix that bug too.
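
In short, the new flow (a condensed sketch of the diff below, not a drop-in
replacement) scans from last_scanned to the end of the disk and, if it reaches
the end, wraps around and scans from the start up to where it began:

    /* condensed sketch of the new refill_dirty() logic */
    start_pos = buf->last_scanned;
    bch_refill_keybuf(dc->disk.c, buf, &end, dirty_pred);
    if (bkey_cmp(&buf->last_scanned, &end) < 0)
            return false;              /* keybuf filled up before the end */
    buf->last_scanned = start;         /* wrap to the start of the disk */
    bch_refill_keybuf(dc->disk.c, buf, &start_pos, dirty_pred);
    return bkey_cmp(&buf->last_scanned, &start_pos) >= 0;  /* covered it all */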

Signed-off-by: Kent Overstreet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/bcache/writeback.c | 37 ++++++++++++++++++++++++++++++-------
1 file changed, 30 insertions(+), 7 deletions(-)

--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -323,6 +323,10 @@ void bcache_dev_sectors_dirty_add(struct

static bool dirty_pred(struct keybuf *buf, struct bkey *k)
{
+ struct cached_dev *dc = container_of(buf, struct cached_dev, writeback_keys);
+
+ BUG_ON(KEY_INODE(k) != dc->disk.id);
+
return KEY_DIRTY(k);
}

@@ -372,11 +376,24 @@ next:
}
}

+/*
+ * Returns true if we scanned the entire disk
+ */
static bool refill_dirty(struct cached_dev *dc)
{
struct keybuf *buf = &dc->writeback_keys;
+ struct bkey start = KEY(dc->disk.id, 0, 0);
struct bkey end = KEY(dc->disk.id, MAX_KEY_OFFSET, 0);
- bool searched_from_start = false;
+ struct bkey start_pos;
+
+ /*
+ * make sure keybuf pos is inside the range for this disk - at bringup
+ * we might not be attached yet so this disk's inode nr isn't
+ * initialized then
+ */
+ if (bkey_cmp(&buf->last_scanned, &start) < 0 ||
+ bkey_cmp(&buf->last_scanned, &end) > 0)
+ buf->last_scanned = start;

if (dc->partial_stripes_expensive) {
refill_full_stripes(dc);
@@ -384,14 +401,20 @@ static bool refill_dirty(struct cached_d
return false;
}

- if (bkey_cmp(&buf->last_scanned, &end) >= 0) {
- buf->last_scanned = KEY(dc->disk.id, 0, 0);
- searched_from_start = true;
- }
-
+ start_pos = buf->last_scanned;
bch_refill_keybuf(dc->disk.c, buf, &end, dirty_pred);

- return bkey_cmp(&buf->last_scanned, &end) >= 0 && searched_from_start;
+ if (bkey_cmp(&buf->last_scanned, &end) < 0)
+ return false;
+
+ /*
+ * If we get to the end start scanning again from the beginning, and
+ * only scan up to where we initially started scanning from:
+ */
+ buf->last_scanned = start;
+ bch_refill_keybuf(dc->disk.c, buf, &start_pos, dirty_pred);
+
+ return bkey_cmp(&buf->last_scanned, &start_pos) >= 0;
}

static int bch_writeback_thread(void *arg)


2016-03-02 02:13:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 010/130] bcache: unregister reboot notifier if bcache fails to unregister device

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Zheng Liu <[email protected]>

commit 2ecf0cdb2b437402110ab57546e02abfa68a716b upstream.

The bcache_init() function forgot to unregister the reboot notifier if
bcache failed to register a block device. This commit fixes that.

Signed-off-by: Zheng Liu <[email protected]>
Tested-by: Joshua Schmid <[email protected]>
Tested-by: Eric Wheeler <[email protected]>
Cc: Kent Overstreet <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/bcache/super.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2086,8 +2086,10 @@ static int __init bcache_init(void)
closure_debug_init();

bcache_major = register_blkdev(0, "bcache");
- if (bcache_major < 0)
+ if (bcache_major < 0) {
+ unregister_reboot_notifier(&reboot);
return bcache_major;
+ }

if (!(bcache_wq = create_workqueue("bcache")) ||
!(bcache_kobj = kobject_create_and_add("bcache", fs_kobj)) ||


2016-03-02 02:13:45

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.14 001/130] tracepoints: Do not trace when cpu is offline

3.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Steven Rostedt (Red Hat) <[email protected]>

commit f37755490fe9bf76f6ba1d8c6591745d3574a6a6 upstream.

The tracepoint infrastructure uses RCU sched protection to enable and
disable tracepoints safely. There are some instances where tracepoints are
used in infrastructure code (like kfree()) that get called after a CPU is
going offline, and perhaps when it is coming back online but hasn't been
registered yet.

This can produce the following warning:

[ INFO: suspicious RCU usage. ]
4.4.0-00006-g0fe53e8-dirty #34 Tainted: G S
-------------------------------
include/trace/events/kmem.h:141 suspicious rcu_dereference_check() usage!

other info that might help us debug this:

RCU used illegally from offline CPU! rcu_scheduler_active = 1, debug_locks = 1
no locks held by swapper/8/0.

stack backtrace:
CPU: 8 PID: 0 Comm: swapper/8 Tainted: G S 4.4.0-00006-g0fe53e8-dirty #34
Call Trace:
[c0000005b76c78d0] [c0000000008b9540] .dump_stack+0x98/0xd4 (unreliable)
[c0000005b76c7950] [c00000000010c898] .lockdep_rcu_suspicious+0x108/0x170
[c0000005b76c79e0] [c00000000029adc0] .kfree+0x390/0x440
[c0000005b76c7a80] [c000000000055f74] .destroy_context+0x44/0x100
[c0000005b76c7b00] [c0000000000934a0] .__mmdrop+0x60/0x150
[c0000005b76c7b90] [c0000000000e3ff0] .idle_task_exit+0x130/0x140
[c0000005b76c7c20] [c000000000075804] .pseries_mach_cpu_die+0x64/0x310
[c0000005b76c7cd0] [c000000000043e7c] .cpu_die+0x3c/0x60
[c0000005b76c7d40] [c0000000000188d8] .arch_cpu_idle_dead+0x28/0x40
[c0000005b76c7db0] [c000000000101e6c] .cpu_startup_entry+0x50c/0x560
[c0000005b76c7ed0] [c000000000043bd8] .start_secondary+0x328/0x360
[c0000005b76c7f90] [c000000000008a6c] start_secondary_prolog+0x10/0x14

This warning is not a false positive either. RCU is not protecting code that
is being executed while the CPU is offline.

Instead of playing "whack-a-mole(TM)" and adding conditional statements to
the tracepoints we find that are used in this instance, simply add a
cpu_online() test to the tracepoint code where the tracepoint will be
ignored if the CPU is offline.

Use of raw_smp_processor_id() is fine, as there should never be a case where
the tracepoint code goes from running on a CPU that is online and suddenly
gets migrated to a CPU that is offline.
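
Concretely (mirroring the hunk below), the check sits at the top of the
__DO_TRACE() body, so a tracepoint fired on an offline CPU is silently skipped:

    if (!cpu_online(raw_smp_processor_id()))
            return;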

Link: http://lkml.kernel.org/r/[email protected]

Reported-by: Denis Kirjanov <[email protected]>
Fixes: 97e1c18e8d17b ("tracing: Kernel Tracepoints")
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/tracepoint.h | 6 ++++++
1 file changed, 6 insertions(+)

--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -14,8 +14,11 @@
* See the file COPYING for more details.
*/

+#include <linux/smp.h>
#include <linux/errno.h>
#include <linux/types.h>
+#include <linux/percpu.h>
+#include <linux/cpumask.h>
#include <linux/rcupdate.h>
#include <linux/static_key.h>

@@ -126,6 +129,9 @@ static inline void tracepoint_synchroniz
void *it_func; \
void *__data; \
\
+ if (!cpu_online(raw_smp_processor_id())) \
+ return; \
+ \
if (!(cond)) \
return; \
prercu; \


2016-03-02 14:34:12

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 3.14 000/130] 3.14.63-stable review

On 03/01/2016 03:44 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.14.63 release.
> There are 130 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Mar 3 23:44:39 UTC 2016.
> Anything received after that time might be too late.
>

Build results:
total: 128 pass: 128 fail: 0
Qemu test results:
total: 83 pass: 83 fail: 0

Details are available at http://kerneltests.org/builders.

Guenter