From: Zefan Li <[email protected]>
This is the start of the stable review cycle for the 3.4.112 release.
There are 92 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Wed Apr 20 10:44:28 UTC 2016.
Anything received after that time might be too late.
A combined patch relative to 3.4.111 will be posted as an additional
response to this. A shortlog and diffstat can be found below.
thanks,
Zefan Li
--------------------
Andreas Schwab (1):
m68k: Define asmlinkage_protect
Andrey Ryabinin (1):
crypto: ghash-clmulni: specify context size for ghash async algorithm
Andy Lutomirski (1):
x86/iopl/64: Properly context-switch IOPL on Xen PV
Ard Biesheuvel (1):
ARM: 8429/1: disable GCC SRA optimization
Arnaldo Carvalho de Melo (1):
perf header: Fixup reading of HEADER_NRCPUS feature
Ben Hutchings (3):
genirq: Fix race in register_irq_proc()
usb: Use the USB_SS_MULT() macro to decode burst multiplier for log
message
pipe: Fix buffer offset after partially failed read
Bjorn Helgaas (1):
PCI: Fix TI816X class code quirk
Bob Copeland (1):
mac80211: enable assoc check for mesh interfaces
Cathy Avery (1):
xen-blkfront: check for null drvdata in blkback_changed
(XenbusStateClosing)
Charles Keepax (1):
ASoC: wm8904: Correct number of EQ registers
Christian Zander (1):
iommu/vt-d: fix range computation when making room for large pages
Christoph Hellwig (2):
IB/uverbs: reject invalid or unknown opcodes
scsi_dh: fix randconfig build error
Christophe Leroy (1):
splice: sendfile() at once fails for big files
Chuck Lever (1):
svcrdma: Fix send_reply() scatter/gather set-up
Dan Carpenter (1):
drm: crtc: integer overflow in drm_property_create_blob()
David Daney (1):
of/address: Don't loop forever in of_find_matching_node_by_address().
David Härdeman (1):
rc-core: fix remove uevent generation
David Woodhouse (1):
x86/platform: Fix Geode LX timekeeping in the generic x86 build
Doron Tsur (1):
IB/cm: Fix rb-tree duplicate free and use-after-free
Dāvis Mosāns (1):
mvsas: Fix NULL pointer dereference in mvs_slot_task_free
Felix Fietkau (1):
ath9k: declare required extra tx headroom
Grant Likely (1):
drivercore: Fix unregistration path of platform devices
Guenter Roeck (1):
spi: Fix documentation of spi_alloc_master()
Herbert Xu (2):
ipv6: Fix IPsec pre-encap fragmentation check
crypto: api - Only abort operations on fatal signal
Hin-Tak Leung (2):
hfs,hfsplus: cache pages correctly between bnode_create and bnode_free
hfs: fix B-tree corruption after insertion at position 0
Ilia Mirkin (1):
drm/nouveau/gem: return only valid domain when there's only one
James Hogan (1):
MIPS: dma-default: Fix 32-bit fall back to GFP_DMA
Jan Kara (1):
mm: make sendfile(2) killable
Jann Horn (1):
drivers/tty: require read access for controlling terminal
Jeff Mahoney (1):
btrfs: skip waiting on ordered range for special files
Jeffery Miller (1):
Add radeon suspend/resume quirk for HP Compaq dc5750.
Joerg Roedel (1):
iommu/amd: Don't clear DTE flags when modifying it
Johannes Berg (1):
iwlwifi: dvm: fix D3 firmware PN programming
John Stultz (1):
clocksource: Fix abs() usage w/ 64bit values
Joseph Qi (1):
ocfs2/dlm: fix deadlock when dispatch assert master
Kees Cook (1):
fs: create and use seq_show_option for escaping
Kosuke Tatsukawa (1):
tty: fix stall caused by missing memory barrier in drivers/tty/n_tty.c
Laura Abbott (1):
xhci: Add spurious wakeup quirk for LynxPoint-LP controllers
Malcolm Crossley (1):
x86/xen: Do not clip xen_e820_map to xen_e820_map_entries when
sanitizing map
Mark Brown (2):
regmap: debugfs: Ensure we don't underflow when printing access masks
regmap: debugfs: Don't bother actually printing when calculating max
length
Mark Rustad (2):
PCI: Add dev_flags bit to access VPD through function 0
PCI: Add VPD function 0 quirk for Intel Ethernet devices
Masahiro Yamada (1):
devres: fix devres_get()
Mathias Nyman (4):
usb: Use the USB_SS_MULT() macro to get the burst multiplier.
xhci: give command abortion one more chance before killing xhci
xhci: change xhci 1.0 only restrictions to support xhci 1.1
xhci: handle no ping response error properly
Matthijs Kooijman (1):
USB: ftdi_sio: Added custom PID for CustomWare products
Mel Gorman (1):
mm: hugetlbfs: skip shared VMAs when unmapping private pages to
satisfy a fault
Mike Snitzer (1):
dm btree: fix leak of bufio-backed block in btree_split_beneath error
path
Mikulas Patocka (1):
hpfs: update ctime and mtime on directory modification
Nate Dailey (1):
raid1: include bio_end_io_list in nr_queued to prevent freeze_array
hang
NeilBrown (7):
md/raid0: update queue parameter in a safer location.
NFSv4: don't set SETATTR for O_RDONLY|O_EXCL
md/raid0: apply base queue limits *before* disk_stack_limits
md/raid10: ensure device failure recorded before write request
returns.
md/raid10: don't clear bitmap bit when bad-block-list write fails.
md/raid1: ensure device failure recorded before write request returns.
md/raid1: don't clear bitmap bit when bad-block-list write fails.
Noa Osherovich (1):
IB/mlx4: Use correct SL on AH query under RoCE
Paolo Bonzini (1):
KVM: x86: trap AMD MSRs for the TSeg base and mask
Paul Bolle (1):
windfarm: decrement client count when unregistering
Paul Mackerras (1):
powerpc/MSI: Fix race condition in tearing down MSI interrupts
Peter Chen (1):
usb: host: ehci-sys: delete useless bus_to_hcd conversion
Peter Seiderer (1):
cifs: use server timestamp for ntlmv2 authentication
Peter Zijlstra (2):
module: Fix locking in symbol_put_addr()
sched/core: Fix TASK_DEAD race in finish_task_switch()
Richard Weinberger (1):
UBI: Validate data_size
Robert Jarzmik (1):
ASoC: fix broken pxa SoC support
Roger Quadros (1):
usb: xhci: Clear XHCI_STATE_DYING on start
Russell King (2):
ARM: fix Thumb2 signal handling when ARMv6 is enabled
crypto: ahash - ensure statesize is non-zero
Stephen Chandler Paul (1):
DRM - radeon: Don't link train DisplayPort on HPD until we get the
dpcd
Sudip Mukherjee (1):
auxdisplay: ks0108: fix refcount
T.J. Purtell (1):
ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode
Takashi Iwai (1):
ALSA: synth: Fix conflicting OSS device registration on AWE32
Tan, Jui Nee (1):
spi: spi-pxa2xx: Check status register to determine if SSSR_TINT is
disabled
Thomas Gleixner (1):
x86/process: Add proper bound checks in 64bit get_wchan()
Thomas Huth (1):
powerpc/rtas: Introduce rtas_get_sensor_fast() for IRQ handlers
Trond Myklebust (1):
SUNRPC: xs_reset_transport must mark the connection as disconnected
Tyler Hicks (1):
eCryptfs: Invalidate dcache entries when lower i_nlink is zero
Vasant Hegde (1):
powerpc/rtas: Validate rtas.entry before calling enter_rtas()
Vincent Palatin (1):
usb: Add device quirk for Logitech PTZ cameras
Yao-Wen Mao (1):
USB: Add reset-resume quirk for two Plantronics usb headphones.
Yishai Hadas (1):
IB/uverbs: Fix race between ib_uverbs_open and remove_one
shengyong (1):
UBI: return ENOSPC if no enough space available
arch/arm/Makefile | 8 ++++
arch/arm/kernel/signal.c | 19 +++++++--
arch/m68k/include/asm/linkage.h | 30 ++++++++++++++
arch/mips/mm/dma-default.c | 2 +-
arch/powerpc/include/asm/rtas.h | 1 +
arch/powerpc/kernel/rtas.c | 20 ++++++++++
arch/powerpc/platforms/powernv/pci.c | 4 +-
arch/powerpc/platforms/pseries/ras.c | 3 +-
arch/powerpc/sysdev/fsl_msi.c | 5 ++-
arch/powerpc/sysdev/mpic_pasemi_msi.c | 5 ++-
arch/powerpc/sysdev/mpic_u3msi.c | 5 ++-
arch/powerpc/sysdev/ppc4xx_msi.c | 5 ++-
arch/x86/crypto/ghash-clmulni-intel_glue.c | 1 +
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/include/asm/xen/hypervisor.h | 2 +
arch/x86/kernel/process_64.c | 64 +++++++++++++++++++++++++-----
arch/x86/kernel/tsc.c | 17 ++++----
arch/x86/kvm/x86.c | 2 +
arch/x86/xen/enlighten.c | 2 +-
arch/x86/xen/setup.c | 2 +-
crypto/ablkcipher.c | 2 +-
crypto/ahash.c | 3 +-
crypto/algapi.c | 2 +-
crypto/api.c | 6 +--
crypto/crypto_user.c | 2 +-
drivers/auxdisplay/ks0108.c | 1 +
drivers/base/devres.c | 4 +-
drivers/base/platform.c | 8 +---
drivers/base/regmap/regmap-debugfs.c | 5 +--
drivers/block/xen-blkfront.c | 3 +-
drivers/gpu/drm/drm_crtc.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_gem.c | 5 ++-
drivers/gpu/drm/radeon/radeon_combios.c | 8 ++++
drivers/gpu/drm/radeon/radeon_connectors.c | 5 +++
drivers/infiniband/core/cm.c | 10 ++++-
drivers/infiniband/core/uverbs.h | 3 +-
drivers/infiniband/core/uverbs_cmd.c | 10 ++++-
drivers/infiniband/core/uverbs_main.c | 43 ++++++++++++++------
drivers/infiniband/hw/mlx4/ah.c | 6 ++-
drivers/iommu/amd_iommu.c | 4 +-
drivers/iommu/amd_iommu_types.h | 1 +
drivers/iommu/intel-iommu.c | 19 ++++++---
drivers/macintosh/windfarm_core.c | 2 +-
drivers/md/Kconfig | 2 +-
drivers/md/md.c | 1 +
drivers/md/persistent-data/dm-btree.c | 2 +-
drivers/md/raid0.c | 55 ++++++++++++++++---------
drivers/md/raid1.c | 41 +++++++++++++++++--
drivers/md/raid1.h | 5 +++
drivers/md/raid10.c | 42 ++++++++++++++++++--
drivers/md/raid10.h | 6 +++
drivers/media/rc/rc-main.c | 3 --
drivers/mtd/ubi/io.c | 5 +++
drivers/mtd/ubi/vtbl.c | 1 +
drivers/mtd/ubi/wl.c | 1 +
drivers/net/wireless/ath/ath9k/init.c | 1 +
drivers/net/wireless/iwlwifi/iwl-agn-lib.c | 2 +-
drivers/of/address.c | 6 +--
drivers/pci/access.c | 61 +++++++++++++++++++++++++++-
drivers/pci/quirks.c | 18 +++++++--
drivers/scsi/mvsas/mv_sas.c | 2 +
drivers/spi/spi-pxa2xx.c | 4 ++
drivers/spi/spi.c | 3 +-
drivers/tty/n_tty.c | 6 +--
drivers/tty/tty_io.c | 31 +++++++++++++--
drivers/usb/core/config.c | 8 ++--
drivers/usb/core/quirks.c | 13 ++++++
drivers/usb/host/ehci-sysfs.c | 8 ++--
drivers/usb/host/xhci-mem.c | 6 +--
drivers/usb/host/xhci-pci.c | 1 +
drivers/usb/host/xhci-ring.c | 33 +++++++++++----
drivers/usb/host/xhci.c | 3 +-
drivers/usb/serial/ftdi_sio.c | 4 ++
drivers/usb/serial/ftdi_sio_ids.h | 8 ++++
fs/btrfs/inode.c | 3 +-
fs/ceph/super.c | 8 ++--
fs/cifs/cifsencrypt.c | 51 +++++++++++++++++++++++-
fs/cifs/cifsfs.c | 4 +-
fs/ecryptfs/dentry.c | 32 +++++++--------
fs/ext4/super.c | 4 +-
fs/gfs2/super.c | 6 +--
fs/hfs/bnode.c | 9 ++---
fs/hfs/brec.c | 20 +++++-----
fs/hfs/super.c | 4 +-
fs/hfsplus/bnode.c | 3 --
fs/hfsplus/options.c | 4 +-
fs/hostfs/hostfs_kern.c | 2 +-
fs/hpfs/namei.c | 25 +++++++++++-
fs/nfs/nfs4proc.c | 2 +-
fs/ocfs2/dlm/dlmmaster.c | 4 +-
fs/ocfs2/dlm/dlmrecovery.c | 6 ++-
fs/ocfs2/super.c | 4 +-
fs/pipe.c | 5 ++-
fs/reiserfs/super.c | 8 ++--
fs/splice.c | 12 +++++-
fs/xfs/xfs_super.c | 4 +-
include/linux/pci.h | 2 +
include/linux/seq_file.h | 35 ++++++++++++++++
include/sound/wm8904.h | 2 +-
kernel/cgroup.c | 7 ++--
kernel/irq/proc.c | 19 ++++++++-
kernel/module.c | 8 +++-
kernel/sched/core.c | 10 ++---
kernel/sched/sched.h | 4 +-
kernel/time/clocksource.c | 2 +-
mm/filemap.c | 9 +++--
mm/hugetlb.c | 8 ++++
net/ipv6/xfrm6_output.c | 16 +++++---
net/mac80211/tx.c | 3 --
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 11 ++++-
net/sunrpc/xprtsock.c | 2 +
security/selinux/hooks.c | 2 +-
sound/arm/Kconfig | 15 +++----
sound/soc/pxa/Kconfig | 2 -
sound/synth/emux/emux_oss.c | 3 +-
tools/perf/util/header.c | 22 ++++------
116 files changed, 871 insertions(+), 265 deletions(-)
--
1.9.1
From: David Härdeman <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit a66b0c41ad277ae62a3ae6ac430a71882f899557 upstream.
The input_dev is already gone when the rc device is being unregistered
so checking for its presence only means that no remove uevent will be
generated.
Signed-off-by: David Härdeman <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/media/rc/rc-main.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
index cec1f8c..a7ff6b5 100644
--- a/drivers/media/rc/rc-main.c
+++ b/drivers/media/rc/rc-main.c
@@ -946,9 +946,6 @@ static int rc_dev_uevent(struct device *device, struct kobj_uevent_env *env)
{
struct rc_dev *dev = to_rc_dev(device);
- if (!dev || !dev->input_dev)
- return -ENODEV;
-
if (dev->rc_map.name)
ADD_HOTPLUG_VAR("NAME=%s", dev->rc_map.name);
if (dev->driver_name)
--
1.9.1
From: Bjorn Helgaas <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit d1541dc977d376406f4584d8eb055488655c98ec upstream.
In fixup_ti816x_class(), we assigned "class = PCI_CLASS_MULTIMEDIA_VIDEO".
But PCI_CLASS_MULTIMEDIA_VIDEO is only the two-byte base class/sub-class
and needs to be shifted to make space for the low-order interface byte.
Shift PCI_CLASS_MULTIMEDIA_VIDEO to set the correct class code.
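For reference (illustration only, not part of the patch): dev->class holds the 24-bit class
code laid out as (base class << 16) | (sub-class << 8) | prog-if, so the 16-bit
PCI_CLASS_MULTIMEDIA_VIDEO constant has to move up by one byte:

	/* Illustration only: PCI_CLASS_MULTIMEDIA_VIDEO is 0x0400
	 * (base class 0x04, sub-class 0x00).  Shifting it left by 8
	 * leaves room for the programming-interface byte:
	 *
	 *   0x0400 << 8 == 0x040000
	 */
	dev->class = PCI_CLASS_MULTIMEDIA_VIDEO << 8;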
Fixes: 63c4408074cb ("PCI: Add quirk for setting valid class for TI816X Endpoint")
Signed-off-by: Bjorn Helgaas <[email protected]>
CC: Hemant Pedanekar <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/pci/quirks.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index c030024..887af797 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -2834,12 +2834,15 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x3c28, vtd_mask_spec_errors);
static void __devinit fixup_ti816x_class(struct pci_dev* dev)
{
+ u32 class = dev->class;
+
/* TI 816x devices do not have class code set when in PCIe boot mode */
- dev_info(&dev->dev, "Setting PCI class for 816x PCIe device\n");
- dev->class = PCI_CLASS_MULTIMEDIA_VIDEO;
+ dev->class = PCI_CLASS_MULTIMEDIA_VIDEO << 8;
+ dev_info(&dev->dev, "PCI class overridden (%#08x -> %#08x)\n",
+ class, dev->class);
}
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_TI, 0xb800,
- PCI_CLASS_NOT_DEFINED, 0, fixup_ti816x_class);
+ PCI_CLASS_NOT_DEFINED, 0, fixup_ti816x_class);
/* Some PCIe devices do not work reliably with the claimed maximum
* payload size supported.
--
1.9.1
From: Bob Copeland <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 3633ebebab2bbe88124388b7620442315c968e8f upstream.
We already set a station to be associated when peering completes, both
in user space and in the kernel. Thus we should always have an
associated sta before sending data frames to that station.
Failure to check assoc state can cause crashes in the lower-level driver
due to transmitting unicast data frames before driver sta structures
(e.g. ampdu state in ath9k) are initialized. This occurred when
forwarding in the presence of fixed mesh paths: frames were transmitted
to stations with whom we hadn't yet completed peering.
Reported-by: Alexis Green <[email protected]>
Tested-by: Jesse Jones <[email protected]>
Signed-off-by: Bob Copeland <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
net/mac80211/tx.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index f4f24be..67cd0f1 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -284,9 +284,6 @@ ieee80211_tx_h_check_assoc(struct ieee80211_tx_data *tx)
if (tx->sdata->vif.type == NL80211_IFTYPE_WDS)
return TX_CONTINUE;
- if (tx->sdata->vif.type == NL80211_IFTYPE_MESH_POINT)
- return TX_CONTINUE;
-
if (tx->flags & IEEE80211_TX_PS_BUFFERED)
return TX_CONTINUE;
--
1.9.1
From: Mark Rustad <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 932c435caba8a2ce473a91753bad0173269ef334 upstream.
Add a dev_flags bit, PCI_DEV_FLAGS_VPD_REF_F0, to access VPD through
function 0 to provide VPD access on other functions. This is for hardware
devices that provide copies of the same VPD capability registers in
multiple functions. Because the kernel expects that each function has its
own registers, both the locking and the state tracking are affected by VPD
accesses to different functions.
On such devices for example, if a VPD write is performed on function 0,
*any* later attempt to read VPD from any other function of that device will
hang. This has to do with how the kernel tracks the expected value of the
F bit per function.
Concurrent accesses to different functions of the same device can not only
hang but also corrupt both read and write VPD data.
When hangs occur, typically the error message:
vpd r/w failed. This is likely a firmware bug on this device.
will be seen.
Never set this bit on function 0 or there will be an infinite recursion.
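For context, a later patch in this series ("PCI: Add VPD function 0 quirk for Intel
Ethernet devices") sets the flag from a fixup. A rough sketch of how such a quirk
could look follows; the code is illustrative, not the exact upstream quirk:

	/* Sketch only: point VPD accesses of non-zero functions at function 0.
	 * The PCI_FUNC() check matters because of the recursion noted above.
	 */
	static void quirk_f0_vpd_link(struct pci_dev *dev)
	{
		if (!PCI_FUNC(dev->devfn))
			return;
		dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
	}
	DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
				      PCI_CLASS_NETWORK_ETHERNET, 8, quirk_f0_vpd_link);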
Signed-off-by: Mark Rustad <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Acked-by: Alexander Duyck <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/pci/access.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++-
include/linux/pci.h | 2 ++
2 files changed, 62 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index 2a58164..f49d961c 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -357,6 +357,56 @@ static const struct pci_vpd_ops pci_vpd_pci22_ops = {
.release = pci_vpd_pci22_release,
};
+static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
+ void *arg)
+{
+ struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
+ ssize_t ret;
+
+ if (!tdev)
+ return -ENODEV;
+
+ ret = pci_read_vpd(tdev, pos, count, arg);
+ pci_dev_put(tdev);
+ return ret;
+}
+
+static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count,
+ const void *arg)
+{
+ struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
+ ssize_t ret;
+
+ if (!tdev)
+ return -ENODEV;
+
+ ret = pci_write_vpd(tdev, pos, count, arg);
+ pci_dev_put(tdev);
+ return ret;
+}
+
+static const struct pci_vpd_ops pci_vpd_f0_ops = {
+ .read = pci_vpd_f0_read,
+ .write = pci_vpd_f0_write,
+ .release = pci_vpd_pci22_release,
+};
+
+static int pci_vpd_f0_dev_check(struct pci_dev *dev)
+{
+ struct pci_dev *tdev = pci_get_slot(dev->bus, PCI_SLOT(dev->devfn));
+ int ret = 0;
+
+ if (!tdev)
+ return -ENODEV;
+ if (!tdev->vpd || !tdev->multifunction ||
+ dev->class != tdev->class || dev->vendor != tdev->vendor ||
+ dev->device != tdev->device)
+ ret = -ENODEV;
+
+ pci_dev_put(tdev);
+ return ret;
+}
+
int pci_vpd_pci22_init(struct pci_dev *dev)
{
struct pci_vpd_pci22 *vpd;
@@ -365,12 +415,21 @@ int pci_vpd_pci22_init(struct pci_dev *dev)
cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
if (!cap)
return -ENODEV;
+ if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
+ int ret = pci_vpd_f0_dev_check(dev);
+
+ if (ret)
+ return ret;
+ }
vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC);
if (!vpd)
return -ENOMEM;
vpd->base.len = PCI_VPD_PCI22_SIZE;
- vpd->base.ops = &pci_vpd_pci22_ops;
+ if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
+ vpd->base.ops = &pci_vpd_f0_ops;
+ else
+ vpd->base.ops = &pci_vpd_pci22_ops;
mutex_init(&vpd->lock);
vpd->cap = cap;
vpd->busy = false;
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 469c953..579baf0 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -176,6 +176,8 @@ enum pci_dev_flags {
PCI_DEV_FLAGS_NO_D3 = (__force pci_dev_flags_t) 2,
/* Provide indication device is assigned by a Virtual Machine Manager */
PCI_DEV_FLAGS_ASSIGNED = (__force pci_dev_flags_t) 4,
+ /* Get VPD from function 0 VPD */
+ PCI_DEV_FLAGS_VPD_REF_F0 = (__force pci_dev_flags_t) (1 << 8),
};
enum pci_irq_reroute_variant {
--
1.9.1
From: Chuck Lever <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 9d11b51ce7c150a69e761e30518f294fc73d55ff upstream.
The Linux NFS server returns garbage in the data payload of inline
NFS/RDMA READ replies. These are READs of under 1000 bytes or so
where the client has not provided either a reply chunk or a write
list.
The NFS server delivers the data payload for an NFS READ reply to
the transport in an xdr_buf page list. If the NFS client did not
provide a reply chunk or a write list, send_reply() is supposed to
set up a separate sge for the page containing the READ data, and
another sge for XDR padding if needed, then post all of the sges via
a single SEND Work Request.
The problem is send_reply() does not advance through the xdr_buf
when setting up scatter/gather entries for SEND WR. It always calls
dma_map_xdr with xdr_off set to zero. When there's more than one
sge, dma_map_xdr() sets up the SEND sge's so they all point to the
xdr_buf's head.
The current Linux NFS/RDMA client always provides a reply chunk or
a write list when performing an NFS READ over RDMA. Therefore, it
does not exercise this particular case. The Linux server has never
had to use more than one extra sge for building RPC/RDMA replies
with a Linux client.
However, an NFS/RDMA client _is_ allowed to send small NFS READs
without setting up a write list or reply chunk. The NFS READ reply
fits entirely within the inline reply buffer in this case. This is
perhaps a more efficient way of performing NFS READs that the Linux
NFS/RDMA client may some day adopt.
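In other words, the offset into the xdr_buf has to accumulate across sges instead of
being reset inside the loop. A simplified sketch of the intended behaviour (not the
literal send_reply() code):

	/* Simplified sketch: xdr_off lives outside the loop so that the
	 * per-sge advancement accumulates across iterations instead of
	 * restarting from the xdr_buf head every time.
	 */
	xdr_off = 0;
	for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) {
		sge_bytes = min_t(size_t, vec->sge[sge_no].iov_len, byte_count);
		byte_count -= sge_bytes;
		/* ... map sge_bytes of the xdr_buf starting at xdr_off ... */
		xdr_off += sge_bytes;
	}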
Fixes: b432e6b3d9c1 ('svcrdma: Change DMA mapping logic to . . .')
BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=285
Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 42eb7ba..897a5f1 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -545,6 +545,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
{
struct ib_send_wr send_wr;
struct ib_send_wr inv_wr;
+ u32 xdr_off;
int sge_no;
int sge_bytes;
int page_no;
@@ -584,8 +585,8 @@ static int send_reply(struct svcxprt_rdma *rdma,
ctxt->direction = DMA_TO_DEVICE;
/* Map the payload indicated by 'byte_count' */
+ xdr_off = 0;
for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) {
- int xdr_off = 0;
sge_bytes = min_t(size_t, vec->sge[sge_no].iov_len, byte_count);
byte_count -= sge_bytes;
if (!vec->frmr) {
@@ -623,6 +624,14 @@ static int send_reply(struct svcxprt_rdma *rdma,
if (page_no+1 >= sge_no)
ctxt->sge[page_no+1].length = 0;
}
+
+ /* The loop above bumps sc_dma_used for each sge. The
+ * xdr_buf.tail gets a separate sge, but resides in the
+ * same page as xdr_buf.head. Don't count it twice.
+ */
+ if (sge_no > ctxt->count)
+ atomic_dec(&rdma->sc_dma_used);
+
BUG_ON(sge_no > rdma->sc_max_sge);
memset(&send_wr, 0, sizeof send_wr);
ctxt->wr_op = IB_WR_SEND;
--
1.9.1
From: NeilBrown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 199dc6ed5179251fa6158a461499c24bdd99c836 upstream.
When a (e.g.) RAID5 array is reshaped to RAID0, the updating
of queue parameters (e.g. max number of sectors per bio) is
done in the wrong place.
It should be part of ->run, but it is actually part of ->takeover.
This means it happens before level_store() calls:
blk_set_stacking_limits(&mddev->queue->limits);
and so is ineffective. This can lead to errors from underlying
devices.
So move all the relevant settings out of create_strip_zones()
and into raid0_run().
As this can lead to a bug-on, it is suitable for any -stable
kernel which supports reshape to RAID0. So 2.6.35 or later.
As the bug has been present for five years, there is no urgency,
so no need to rush into -stable.
Fixes: 9af204cf720c ("md: Add support for Raid5->Raid0 and Raid10->Raid0 takeover")
Reported-by: Yi Zhang <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
[lizf: Backported to 3.4:
- adjust context
- remove changes to discard and write-same features]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/raid0.c | 55 ++++++++++++++++++++++++++++++++++++------------------
1 file changed, 37 insertions(+), 18 deletions(-)
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 06a0257..3e285e6 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -88,6 +88,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
char b[BDEVNAME_SIZE];
char b2[BDEVNAME_SIZE];
struct r0conf *conf = kzalloc(sizeof(*conf), GFP_KERNEL);
+ unsigned short blksize = 512;
if (!conf)
return -ENOMEM;
@@ -102,6 +103,9 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
sector_div(sectors, mddev->chunk_sectors);
rdev1->sectors = sectors * mddev->chunk_sectors;
+ blksize = max(blksize, queue_logical_block_size(
+ rdev1->bdev->bd_disk->queue));
+
rdev_for_each(rdev2, mddev) {
pr_debug("md/raid0:%s: comparing %s(%llu)"
" with %s(%llu)\n",
@@ -138,6 +142,18 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
}
pr_debug("md/raid0:%s: FINAL %d zones\n",
mdname(mddev), conf->nr_strip_zones);
+ /*
+ * now since we have the hard sector sizes, we can make sure
+ * chunk size is a multiple of that sector size
+ */
+ if ((mddev->chunk_sectors << 9) % blksize) {
+ printk(KERN_ERR "md/raid0:%s: chunk_size of %d not multiple of block size %d\n",
+ mdname(mddev),
+ mddev->chunk_sectors << 9, blksize);
+ err = -EINVAL;
+ goto abort;
+ }
+
err = -ENOMEM;
conf->strip_zone = kzalloc(sizeof(struct strip_zone)*
conf->nr_strip_zones, GFP_KERNEL);
@@ -186,9 +202,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
}
dev[j] = rdev1;
- disk_stack_limits(mddev->gendisk, rdev1->bdev,
- rdev1->data_offset << 9);
-
if (rdev1->bdev->bd_disk->queue->merge_bvec_fn)
conf->has_merge_bvec = 1;
@@ -257,21 +270,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
mddev->queue->backing_dev_info.congested_fn = raid0_congested;
mddev->queue->backing_dev_info.congested_data = mddev;
- /*
- * now since we have the hard sector sizes, we can make sure
- * chunk size is a multiple of that sector size
- */
- if ((mddev->chunk_sectors << 9) % queue_logical_block_size(mddev->queue)) {
- printk(KERN_ERR "md/raid0:%s: chunk_size of %d not valid\n",
- mdname(mddev),
- mddev->chunk_sectors << 9);
- goto abort;
- }
-
- blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
- blk_queue_io_opt(mddev->queue,
- (mddev->chunk_sectors << 9) * mddev->raid_disks);
-
pr_debug("md/raid0:%s: done.\n", mdname(mddev));
*private_conf = conf;
@@ -432,6 +430,27 @@ static int raid0_run(struct mddev *mddev)
mddev->private = conf;
}
conf = mddev->private;
+ if (mddev->queue) {
+ struct md_rdev *rdev;
+ bool discard_supported = false;
+
+ rdev_for_each(rdev, mddev) {
+ disk_stack_limits(mddev->gendisk, rdev->bdev,
+ rdev->data_offset << 9);
+ if (blk_queue_discard(bdev_get_queue(rdev->bdev)))
+ discard_supported = true;
+ }
+ blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
+
+ blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
+ blk_queue_io_opt(mddev->queue,
+ (mddev->chunk_sectors << 9) * mddev->raid_disks);
+
+ if (!discard_supported)
+ queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
+ else
+ queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
+ }
/* calculate array device size */
md_set_array_sectors(mddev, raid0_size(mddev, 0, 0));
--
1.9.1
From: Masahiro Yamada <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 64526370d11ce8868ca495723d595b61e8697fbf upstream.
Currently, devres_get() passes devres_free() the pointer to devres,
but devres_free() should be given the pointer to the resource data.
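For background, devres_alloc() returns a pointer to the resource data area, and that
same pointer is what devres_get() and devres_free() operate on. A minimal usage sketch
with hypothetical my_state/my_release names (not from the patch):

	/* Hypothetical example: the data pointer returned by devres_alloc()
	 * is what devres_get() takes, and it is also what must be handed to
	 * devres_free() when the new allocation turns out to be a duplicate.
	 */
	struct my_state { int value; };

	static void my_release(struct device *dev, void *res)
	{
		/* nothing beyond the devres allocation itself to release */
	}

	static struct my_state *my_get_state(struct device *dev)
	{
		struct my_state *state;

		state = devres_alloc(my_release, sizeof(*state), GFP_KERNEL);
		if (!state)
			return NULL;
		/* Returns either an existing matching resource (freeing the
		 * new one internally) or the newly added one. */
		return devres_get(dev, state, NULL, NULL);
	}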
Fixes: 9ac7849e35f7 ("devres: device resource management")
Signed-off-by: Masahiro Yamada <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/base/devres.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/base/devres.c b/drivers/base/devres.c
index 524bf96..06c541d 100644
--- a/drivers/base/devres.c
+++ b/drivers/base/devres.c
@@ -254,10 +254,10 @@ void * devres_get(struct device *dev, void *new_res,
if (!dr) {
add_dr(dev, &new_dr->node);
dr = new_dr;
- new_dr = NULL;
+ new_res = NULL;
}
spin_unlock_irqrestore(&dev->devres_lock, flags);
- devres_free(new_dr);
+ devres_free(new_res);
return dr->data;
}
--
1.9.1
From: NeilBrown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit efcbc04e16dfa95fef76309f89710dd1d99a5453 upstream.
It is unusual to combine the open flags O_RDONLY and O_EXCL, but
it appears that libre-office does just that.
[pid 3250] stat("/home/USER/.config", {st_mode=S_IFDIR|0700, st_size=8192, ...}) = 0
[pid 3250] open("/home/USER/.config/libreoffice/4-suse/user/extensions/buildid", O_RDONLY|O_EXCL <unfinished ...>
NFSv4 takes O_EXCL as a sign that a setattr command should be sent,
probably to reset the timestamps.
When it was an O_RDONLY open, the SETATTR command does not
identify any actual attributes to change.
If no delegation was provided to the open, the SETATTR uses the
all-zeros stateid and the request is accepted (at least by the
Linux NFS server - no harm, no foul).
If a read-delegation was provided, this is used in the SETATTR
request, and a Netapp filer will justifiably claim
NFS4ERR_BAD_STATEID, which the Linux client takes as a sign
to retry - indefinitely.
So only treat O_EXCL specially if O_CREAT was also given.
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
fs/nfs/nfs4proc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 3d344ab..92eff4d 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1851,7 +1851,7 @@ static int _nfs4_do_open(struct inode *dir, struct dentry *dentry, fmode_t fmode
if (server->caps & NFS_CAP_POSIX_LOCK)
set_bit(NFS_STATE_POSIX_LOCKS, &state->flags);
- if (opendata->o_arg.open_flags & O_EXCL) {
+ if ((opendata->o_arg.open_flags & (O_CREAT|O_EXCL)) == (O_CREAT|O_EXCL)) {
nfs4_exclusive_attrset(opendata, sattr);
nfs_fattr_init(opendata->o_res.f_attr);
--
1.9.1
From: Stephen Chandler Paul <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 924f92bf12bfbef3662619e3ed24a1cea7c1cbcd upstream.
Most of the time this isn't an issue since hotplugging an adaptor will
trigger a crtc mode change which in turn, causes the driver to probe
every DisplayPort for a dpcd. However, in cases where hotplugging
doesn't cause a mode change (specifically when one unplugs a monitor
from a DisplayPort connector, then plugs that same monitor back in
seconds later on the same port without any other monitors connected), we
never probe for the dpcd before starting the initial link training. What
happens from there looks like this:
- GPU has only one monitor connected. It's connected via
DisplayPort, and does not go through an adaptor of any sort.
- User unplugs DisplayPort connector from GPU.
- Change in HPD is detected by the driver, we probe every
DisplayPort for a possible connection.
- Probe the port the user originally had the monitor connected
on for its dpcd. This fails, and we clear the first (and only
the first) byte of the dpcd to indicate we no longer have a
dpcd for this port.
- User plugs the previously disconnected monitor back into the
same DisplayPort.
- radeon_connector_hotplug() is called before everyone else,
and tries to handle the link training. Since only the first
byte of the dpcd is zeroed, the driver is able to complete
link training but does so against the wrong dpcd, causing it
to initialize the link with the wrong settings.
- Display stays blank (usually), dpcd is probed after the
initial link training, and the driver prints no obvious
messages to the log.
In theory, since only one byte of the dpcd is chopped off (specifically,
the byte that contains the revision information for DisplayPort), it's
not entirely impossible that this bug may not show on certain monitors.
For instance, the only reason this bug was visible on my ASUS PB238
monitor was that this monitor uses the enhanced framing
symbol sequence, the flag for which is ignored if the radeon driver
thinks that the DisplayPort version is below 1.1.
Signed-off-by: Stephen Chandler Paul <[email protected]>
Reviewed-by: Jerome Glisse <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/gpu/drm/radeon/radeon_connectors.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index 9184bbe..9c5d96c 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -82,6 +82,11 @@ void radeon_connector_hotplug(struct drm_connector *connector)
if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
} else if (radeon_dp_needs_link_train(radeon_connector)) {
+ /* Don't try to start link training before we
+ * have the dpcd */
+ if (!radeon_dp_getdpcd(radeon_connector))
+ return;
+
/* set it to OFF so that drm_helper_connector_dpms()
* won't return immediately since the current state
* is ON at this point.
--
1.9.1
From: Trond Myklebust <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 0c78789e3a030615c6650fde89546cadf40ec2cc upstream.
In case the reconnection attempt fails.
Signed-off-by: Trond Myklebust <[email protected]>
[lizf: Backported to 3.4: add definition of variable xprt]
Signed-off-by: Zefan Li <[email protected]>
---
net/sunrpc/xprtsock.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 31275e5..d4a564f 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -811,6 +811,7 @@ static void xs_reset_transport(struct sock_xprt *transport)
{
struct socket *sock = transport->sock;
struct sock *sk = transport->inet;
+ struct rpc_xprt *xprt = &transport->xprt;
if (sk == NULL)
return;
@@ -824,6 +825,7 @@ static void xs_reset_transport(struct sock_xprt *transport)
sk->sk_user_data = NULL;
xs_restore_old_callbacks(transport, sk);
+ xprt_clear_connected(xprt);
write_unlock_bh(&sk->sk_callback_lock);
sk->sk_no_check = 0;
--
1.9.1
From: Grant Likely <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 7f5dcaf1fdf289767a126a0a5cc3ef39b5254b06 upstream.
The unregister path of platform_device is broken. On registration, it
will register all resources with either a parent already set, or
type==IORESOURCE_{IO,MEM}. However, on unregister it will release
everything with type==IORESOURCE_{IO,MEM}, but ignore the others. There
are also cases where resources don't get registered in the first place,
like with devices created by of_platform_populate()*.
Fix the unregister path to be symmetrical with the register path by
checking the parent pointer instead of the type field to decide which
resources to unregister. This is safe because the upshot of the
registration path algorithm is that registered resources have a parent
pointer, and non-registered resources do not.
* It can be argued that of_platform_populate() should be registering
its resources, and that argument has some merit. However, there are
quite a few platforms that end up broken if we try to do that due to
overlapping resources in the device tree. Until that is fixed, we need
to solve the immediate problem.
Cc: Pantelis Antoniou <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Ricardo Ribalda Delgado <[email protected]>
Signed-off-by: Grant Likely <[email protected]>
Tested-by: Ricardo Ribalda Delgado <[email protected]>
Tested-by: Wolfram Sang <[email protected]>
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/base/platform.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/base/platform.c b/drivers/base/platform.c
index a1a7225..5a13733 100644
--- a/drivers/base/platform.c
+++ b/drivers/base/platform.c
@@ -311,9 +311,7 @@ int platform_device_add(struct platform_device *pdev)
failed:
while (--i >= 0) {
struct resource *r = &pdev->resource[i];
- unsigned long type = resource_type(r);
-
- if (type == IORESOURCE_MEM || type == IORESOURCE_IO)
+ if (r->parent)
release_resource(r);
}
@@ -338,9 +336,7 @@ void platform_device_del(struct platform_device *pdev)
for (i = 0; i < pdev->num_resources; i++) {
struct resource *r = &pdev->resource[i];
- unsigned long type = resource_type(r);
-
- if (type == IORESOURCE_MEM || type == IORESOURCE_IO)
+ if (r->parent)
release_resource(r);
}
}
--
1.9.1
From: Jeffery Miller <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 09bfda10e6efd7b65bcc29237bee1765ed779657 upstream.
With the radeon driver loaded, the HP Compaq dc5750
Small Form Factor machine fails to resume from suspend.
Adding a quirk similar to other devices avoids
the problem and the system resumes properly.
Signed-off-by: Jeffery Miller <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/gpu/drm/radeon/radeon_combios.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c
index b72eb50..69c2dd0 100644
--- a/drivers/gpu/drm/radeon/radeon_combios.c
+++ b/drivers/gpu/drm/radeon/radeon_combios.c
@@ -3399,6 +3399,14 @@ void radeon_combios_asic_init(struct drm_device *dev)
rdev->pdev->subsystem_device == 0x30ae)
return;
+ /* quirk for rs4xx HP Compaq dc5750 Small Form Factor to make it resume
+ * - it hangs on resume inside the dynclk 1 table.
+ */
+ if (rdev->family == CHIP_RS480 &&
+ rdev->pdev->subsystem_vendor == 0x103c &&
+ rdev->pdev->subsystem_device == 0x280a)
+ return;
+
/* DYN CLK 1 */
table = combios_get_table_offset(dev, COMBIOS_DYN_CLK_1_TABLE);
if (table)
--
1.9.1
From: Mikulas Patocka <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit f49a26e7718dd30b49e3541e3e25aecf5e7294e2 upstream.
Update ctime and mtime when a directory is modified. (though OS/2 doesn't
update them anyway)
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
fs/hpfs/namei.c | 25 ++++++++++++++++++++++++-
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/fs/hpfs/namei.c b/fs/hpfs/namei.c
index 30dd7b1..bdb86a8 100644
--- a/fs/hpfs/namei.c
+++ b/fs/hpfs/namei.c
@@ -8,6 +8,17 @@
#include <linux/sched.h>
#include "hpfs_fn.h"
+static void hpfs_update_directory_times(struct inode *dir)
+{
+ time_t t = get_seconds();
+ if (t == dir->i_mtime.tv_sec &&
+ t == dir->i_ctime.tv_sec)
+ return;
+ dir->i_mtime.tv_sec = dir->i_ctime.tv_sec = t;
+ dir->i_mtime.tv_nsec = dir->i_ctime.tv_nsec = 0;
+ hpfs_write_inode_nolock(dir);
+}
+
static int hpfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
{
const unsigned char *name = dentry->d_name.name;
@@ -99,6 +110,7 @@ static int hpfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
result->i_mode = mode | S_IFDIR;
hpfs_write_inode_nolock(result);
}
+ hpfs_update_directory_times(dir);
d_instantiate(dentry, result);
hpfs_unlock(dir->i_sb);
return 0;
@@ -187,6 +199,7 @@ static int hpfs_create(struct inode *dir, struct dentry *dentry, umode_t mode, s
result->i_mode = mode | S_IFREG;
hpfs_write_inode_nolock(result);
}
+ hpfs_update_directory_times(dir);
d_instantiate(dentry, result);
hpfs_unlock(dir->i_sb);
return 0;
@@ -262,6 +275,7 @@ static int hpfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, de
insert_inode_hash(result);
hpfs_write_inode_nolock(result);
+ hpfs_update_directory_times(dir);
d_instantiate(dentry, result);
brelse(bh);
hpfs_unlock(dir->i_sb);
@@ -340,6 +354,7 @@ static int hpfs_symlink(struct inode *dir, struct dentry *dentry, const char *sy
insert_inode_hash(result);
hpfs_write_inode_nolock(result);
+ hpfs_update_directory_times(dir);
d_instantiate(dentry, result);
hpfs_unlock(dir->i_sb);
return 0;
@@ -423,6 +438,8 @@ again:
out1:
hpfs_brelse4(&qbh);
out:
+ if (!err)
+ hpfs_update_directory_times(dir);
hpfs_unlock(dir->i_sb);
return err;
}
@@ -477,6 +494,8 @@ static int hpfs_rmdir(struct inode *dir, struct dentry *dentry)
out1:
hpfs_brelse4(&qbh);
out:
+ if (!err)
+ hpfs_update_directory_times(dir);
hpfs_unlock(dir->i_sb);
return err;
}
@@ -595,7 +614,7 @@ static int hpfs_rename(struct inode *old_dir, struct dentry *old_dentry,
goto end1;
}
- end:
+end:
hpfs_i(i)->i_parent_dir = new_dir->i_ino;
if (S_ISDIR(i->i_mode)) {
inc_nlink(new_dir);
@@ -610,6 +629,10 @@ static int hpfs_rename(struct inode *old_dir, struct dentry *old_dentry,
brelse(bh);
}
end1:
+ if (!err) {
+ hpfs_update_directory_times(old_dir);
+ hpfs_update_directory_times(new_dir);
+ }
hpfs_unlock(i->i_sb);
return err;
}
--
1.9.1
From: Kees Cook <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit a068acf2ee77693e0bf39d6e07139ba704f461c3 upstream.
Many file systems that implement the show_options hook fail to correctly
escape their output which could lead to unescaped characters (e.g. new
lines) leaking into /proc/mounts and /proc/[pid]/mountinfo files. This
could lead to confusion, spoofed entries (resulting in things like
systemd issuing false d-bus "mount" notifications), and who knows what
else. This looks like it would only be the root user stepping on
themselves, but it's possible weird things could happen in containers or
in other situations with delegated mount privileges.
Here's an example using overlay with setuid fusermount trusting the
contents of /proc/mounts (via the /etc/mtab symlink). Imagine the use
of "sudo" is something more sneaky:
$ BASE="ovl"
$ MNT="$BASE/mnt"
$ LOW="$BASE/lower"
$ UP="$BASE/upper"
$ WORK="$BASE/work/ 0 0
none /proc fuse.pwn user_id=1000"
$ mkdir -p "$LOW" "$UP" "$WORK"
$ sudo mount -t overlay -o "lowerdir=$LOW,upperdir=$UP,workdir=$WORK" none /mnt
$ cat /proc/mounts
none /root/ovl/mnt overlay rw,relatime,lowerdir=ovl/lower,upperdir=ovl/upper,workdir=ovl/work/ 0 0
none /proc fuse.pwn user_id=1000 0 0
$ fusermount -u /proc
$ cat /proc/mounts
cat: /proc/mounts: No such file or directory
This fixes the problem by adding new seq_show_option and
seq_show_option_n helpers, and updating the vulnerable show_option
handlers to use them as needed. Some, like SELinux, need to be open
coded due to unusual existing escape mechanisms.
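As an illustration of the new helper (hypothetical filesystem, not from the patch):
seq_show_option() escapes ',', '=', whitespace and '\' in the option name, and ',',
whitespace and '\' in the value, so an embedded newline shows up as an octal escape
in /proc/mounts instead of forging a new mount entry. A converted handler might look
like this:

	/* Hypothetical conversion: example_sb_info and snapdir_name are
	 * made-up names standing in for a real filesystem's private data.
	 */
	struct example_sb_info { const char *snapdir_name; };

	static int example_show_options(struct seq_file *seq, struct dentry *root)
	{
		struct example_sb_info *sbi = root->d_sb->s_fs_info;

		/* before: seq_printf(seq, ",snapdirname=%s", sbi->snapdir_name); */
		if (sbi->snapdir_name)
			seq_show_option(seq, "snapdirname", sbi->snapdir_name);
		return 0;
	}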
[[email protected]: add lost chunk, per Kees]
[[email protected]: seq_show_option should be using const parameters]
Signed-off-by: Kees Cook <[email protected]>
Acked-by: Serge Hallyn <[email protected]>
Acked-by: Jan Kara <[email protected]>
Acked-by: Paul Moore <[email protected]>
Cc: J. R. Okajima <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
[lizf: Backported to 3.4:
- adjust context
- one more place in ceph needs to be changed
- drop changes to overlayfs
- drop showing vers in cifs]
Signed-off-by: Zefan Li <[email protected]>
---
fs/ceph/super.c | 8 +++++---
fs/cifs/cifsfs.c | 4 ++--
fs/ext4/super.c | 4 ++--
fs/gfs2/super.c | 6 +++---
fs/hfs/super.c | 4 ++--
fs/hfsplus/options.c | 4 ++--
fs/hostfs/hostfs_kern.c | 2 +-
fs/ocfs2/super.c | 4 ++--
fs/reiserfs/super.c | 8 +++++---
fs/xfs/xfs_super.c | 4 ++--
include/linux/seq_file.h | 35 +++++++++++++++++++++++++++++++++++
kernel/cgroup.c | 7 ++++---
security/selinux/hooks.c | 2 +-
13 files changed, 66 insertions(+), 26 deletions(-)
diff --git a/fs/ceph/super.c b/fs/ceph/super.c
index f4fa5cf..e5eacd9 100644
--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ -383,8 +383,10 @@ static int ceph_show_options(struct seq_file *m, struct dentry *root)
if (opt->flags & CEPH_OPT_NOCRC)
seq_puts(m, ",nocrc");
- if (opt->name)
- seq_printf(m, ",name=%s", opt->name);
+ if (opt->name) {
+ seq_puts(m, ",name=");
+ seq_escape(m, opt->name, ", \t\n\\");
+ }
if (opt->key)
seq_puts(m, ",secret=<hidden>");
@@ -429,7 +431,7 @@ static int ceph_show_options(struct seq_file *m, struct dentry *root)
if (fsopt->max_readdir_bytes != CEPH_MAX_READDIR_BYTES_DEFAULT)
seq_printf(m, ",readdir_max_bytes=%d", fsopt->max_readdir_bytes);
if (strcmp(fsopt->snapdir_name, CEPH_SNAPDIRNAME_DEFAULT))
- seq_printf(m, ",snapdirname=%s", fsopt->snapdir_name);
+ seq_show_option(m, "snapdirname", fsopt->snapdir_name);
return 0;
}
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index c0f65e8..5b730ba 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -373,10 +373,10 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER)
seq_printf(s, ",multiuser");
else if (tcon->ses->user_name)
- seq_printf(s, ",username=%s", tcon->ses->user_name);
+ seq_show_option(s, "username", tcon->ses->user_name);
if (tcon->ses->domainName)
- seq_printf(s, ",domain=%s", tcon->ses->domainName);
+ seq_show_option(s, "domain", tcon->ses->domainName);
if (srcaddr->sa_family != AF_UNSPEC) {
struct sockaddr_in *saddr4;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 2e26a54..3de888c3 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1682,10 +1682,10 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
}
if (sbi->s_qf_names[USRQUOTA])
- seq_printf(seq, ",usrjquota=%s", sbi->s_qf_names[USRQUOTA]);
+ seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
if (sbi->s_qf_names[GRPQUOTA])
- seq_printf(seq, ",grpjquota=%s", sbi->s_qf_names[GRPQUOTA]);
+ seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
if (test_opt(sb, USRQUOTA))
seq_puts(seq, ",usrquota");
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index 6172fa7..4db9a9a 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -1298,11 +1298,11 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root)
if (is_ancestor(root, sdp->sd_master_dir))
seq_printf(s, ",meta");
if (args->ar_lockproto[0])
- seq_printf(s, ",lockproto=%s", args->ar_lockproto);
+ seq_show_option(s, "lockproto", args->ar_lockproto);
if (args->ar_locktable[0])
- seq_printf(s, ",locktable=%s", args->ar_locktable);
+ seq_show_option(s, "locktable", args->ar_locktable);
if (args->ar_hostdata[0])
- seq_printf(s, ",hostdata=%s", args->ar_hostdata);
+ seq_show_option(s, "hostdata", args->ar_hostdata);
if (args->ar_spectator)
seq_printf(s, ",spectator");
if (args->ar_localflocks)
diff --git a/fs/hfs/super.c b/fs/hfs/super.c
index 7b4c537..be0e218 100644
--- a/fs/hfs/super.c
+++ b/fs/hfs/super.c
@@ -138,9 +138,9 @@ static int hfs_show_options(struct seq_file *seq, struct dentry *root)
struct hfs_sb_info *sbi = HFS_SB(root->d_sb);
if (sbi->s_creator != cpu_to_be32(0x3f3f3f3f))
- seq_printf(seq, ",creator=%.4s", (char *)&sbi->s_creator);
+ seq_show_option_n(seq, "creator", (char *)&sbi->s_creator, 4);
if (sbi->s_type != cpu_to_be32(0x3f3f3f3f))
- seq_printf(seq, ",type=%.4s", (char *)&sbi->s_type);
+ seq_show_option_n(seq, "type", (char *)&sbi->s_type, 4);
seq_printf(seq, ",uid=%u,gid=%u", sbi->s_uid, sbi->s_gid);
if (sbi->s_file_umask != 0133)
seq_printf(seq, ",file_umask=%o", sbi->s_file_umask);
diff --git a/fs/hfsplus/options.c b/fs/hfsplus/options.c
index 06fa561..38e41d0 100644
--- a/fs/hfsplus/options.c
+++ b/fs/hfsplus/options.c
@@ -211,9 +211,9 @@ int hfsplus_show_options(struct seq_file *seq, struct dentry *root)
struct hfsplus_sb_info *sbi = HFSPLUS_SB(root->d_sb);
if (sbi->creator != HFSPLUS_DEF_CR_TYPE)
- seq_printf(seq, ",creator=%.4s", (char *)&sbi->creator);
+ seq_show_option_n(seq, "creator", (char *)&sbi->creator, 4);
if (sbi->type != HFSPLUS_DEF_CR_TYPE)
- seq_printf(seq, ",type=%.4s", (char *)&sbi->type);
+ seq_show_option_n(seq, "type", (char *)&sbi->type, 4);
seq_printf(seq, ",umask=%o,uid=%u,gid=%u", sbi->umask,
sbi->uid, sbi->gid);
if (sbi->part >= 0)
diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
index 07c516b..fe63b15 100644
--- a/fs/hostfs/hostfs_kern.c
+++ b/fs/hostfs/hostfs_kern.c
@@ -264,7 +264,7 @@ static int hostfs_show_options(struct seq_file *seq, struct dentry *root)
size_t offset = strlen(root_ino) + 1;
if (strlen(root_path) > offset)
- seq_printf(seq, ",%s", root_path + offset);
+ seq_show_option(seq, root_path + offset, NULL);
return 0;
}
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index 68f4541..91a0020 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -1578,8 +1578,8 @@ static int ocfs2_show_options(struct seq_file *s, struct dentry *root)
seq_printf(s, ",localflocks,");
if (osb->osb_cluster_stack[0])
- seq_printf(s, ",cluster_stack=%.*s", OCFS2_STACK_LABEL_LEN,
- osb->osb_cluster_stack);
+ seq_show_option_n(s, "cluster_stack", osb->osb_cluster_stack,
+ OCFS2_STACK_LABEL_LEN);
if (opts & OCFS2_MOUNT_USRQUOTA)
seq_printf(s, ",usrquota");
if (opts & OCFS2_MOUNT_GRPQUOTA)
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index 8169be9..e12357b 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -645,18 +645,20 @@ static int reiserfs_show_options(struct seq_file *seq, struct dentry *root)
seq_puts(seq, ",acl");
if (REISERFS_SB(s)->s_jdev)
- seq_printf(seq, ",jdev=%s", REISERFS_SB(s)->s_jdev);
+ seq_show_option(seq, "jdev", REISERFS_SB(s)->s_jdev);
if (journal->j_max_commit_age != journal->j_default_max_commit_age)
seq_printf(seq, ",commit=%d", journal->j_max_commit_age);
#ifdef CONFIG_QUOTA
if (REISERFS_SB(s)->s_qf_names[USRQUOTA])
- seq_printf(seq, ",usrjquota=%s", REISERFS_SB(s)->s_qf_names[USRQUOTA]);
+ seq_show_option(seq, "usrjquota",
+ REISERFS_SB(s)->s_qf_names[USRQUOTA]);
else if (opts & (1 << REISERFS_USRQUOTA))
seq_puts(seq, ",usrquota");
if (REISERFS_SB(s)->s_qf_names[GRPQUOTA])
- seq_printf(seq, ",grpjquota=%s", REISERFS_SB(s)->s_qf_names[GRPQUOTA]);
+ seq_show_option(seq, "grpjquota",
+ REISERFS_SB(s)->s_qf_names[GRPQUOTA]);
else if (opts & (1 << REISERFS_GRPQUOTA))
seq_puts(seq, ",grpquota");
if (REISERFS_SB(s)->s_jquota_fmt) {
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index dab9a5f..d6c787d 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -523,9 +523,9 @@ xfs_showargs(
seq_printf(m, "," MNTOPT_LOGBSIZE "=%dk", mp->m_logbsize >> 10);
if (mp->m_logname)
- seq_printf(m, "," MNTOPT_LOGDEV "=%s", mp->m_logname);
+ seq_show_option(m, MNTOPT_LOGDEV, mp->m_logname);
if (mp->m_rtname)
- seq_printf(m, "," MNTOPT_RTDEV "=%s", mp->m_rtname);
+ seq_show_option(m, MNTOPT_RTDEV, mp->m_rtname);
if (mp->m_dalign > 0)
seq_printf(m, "," MNTOPT_SUNIT "=%d",
diff --git a/include/linux/seq_file.h b/include/linux/seq_file.h
index fc61854..149b92f 100644
--- a/include/linux/seq_file.h
+++ b/include/linux/seq_file.h
@@ -127,6 +127,41 @@ int seq_put_decimal_ull(struct seq_file *m, char delimiter,
int seq_put_decimal_ll(struct seq_file *m, char delimiter,
long long num);
+/**
+ * seq_show_options - display mount options with appropriate escapes.
+ * @m: the seq_file handle
+ * @name: the mount option name
+ * @value: the mount option name's value, can be NULL
+ */
+static inline void seq_show_option(struct seq_file *m, const char *name,
+ const char *value)
+{
+ seq_putc(m, ',');
+ seq_escape(m, name, ",= \t\n\\");
+ if (value) {
+ seq_putc(m, '=');
+ seq_escape(m, value, ", \t\n\\");
+ }
+}
+
+/**
+ * seq_show_option_n - display mount options with appropriate escapes
+ * where @value must be a specific length.
+ * @m: the seq_file handle
+ * @name: the mount option name
+ * @value: the mount option name's value, cannot be NULL
+ * @length: the length of @value to display
+ *
+ * This is a macro since this uses "length" to define the size of the
+ * stack buffer.
+ */
+#define seq_show_option_n(m, name, value, length) { \
+ char val_buf[length + 1]; \
+ strncpy(val_buf, value, length); \
+ val_buf[length] = '\0'; \
+ seq_show_option(m, name, val_buf); \
+}
+
#define SEQ_START_TOKEN ((void *)1)
/*
* Helpers for iteration over list_head-s in seq_files
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 34eda95..7ff5702 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1071,15 +1071,16 @@ static int cgroup_show_options(struct seq_file *seq, struct dentry *dentry)
mutex_lock(&cgroup_root_mutex);
for_each_subsys(root, ss)
- seq_printf(seq, ",%s", ss->name);
+ seq_show_option(seq, ss->name, NULL);
if (test_bit(ROOT_NOPREFIX, &root->flags))
seq_puts(seq, ",noprefix");
if (strlen(root->release_agent_path))
- seq_printf(seq, ",release_agent=%s", root->release_agent_path);
+ seq_show_option(seq, "release_agent",
+ root->release_agent_path);
if (clone_children(&root->top_cgroup))
seq_puts(seq, ",clone_children");
if (strlen(root->name))
- seq_printf(seq, ",name=%s", root->name);
+ seq_show_option(seq, "name", root->name);
mutex_unlock(&cgroup_root_mutex);
return 0;
}
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index cbae6d3..312d2fb 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -1012,7 +1012,7 @@ static void selinux_write_opts(struct seq_file *m,
seq_puts(m, prefix);
if (has_comma)
seq_putc(m, '\"');
- seq_puts(m, opts->mnt_opts[i]);
+ seq_escape(m, opts->mnt_opts[i], "\"\n\\");
if (has_comma)
seq_putc(m, '\"');
}
--
1.9.1
From: Hin-Tak Leung <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit b4cc0efea4f0bfa2477c56af406cfcf3d3e58680 upstream.
Fix B-tree corruption when a new record is inserted at position 0 in the
node in hfs_brec_insert().
This change to the corresponding hfs b-tree code is identical to Sergei
Antonov's "hfsplus: fix B-tree corruption after insertion at position 0",
to keep similar code paths in the hfs and hfsplus drivers in sync, where
appropriate.
Signed-off-by: Hin-Tak Leung <[email protected]>
Cc: Sergei Antonov <[email protected]>
Cc: Joe Perches <[email protected]>
Reviewed-by: Vyacheslav Dubeyko <[email protected]>
Cc: Anton Altaparmakov <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
fs/hfs/brec.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/fs/hfs/brec.c b/fs/hfs/brec.c
index 92fb358..db240c5 100644
--- a/fs/hfs/brec.c
+++ b/fs/hfs/brec.c
@@ -132,13 +132,16 @@ skip:
hfs_bnode_write(node, entry, data_off + key_len, entry_len);
hfs_bnode_dump(node);
- if (new_node) {
- /* update parent key if we inserted a key
- * at the start of the first node
- */
- if (!rec && new_node != node)
- hfs_brec_update_parent(fd);
+ /*
+ * update parent key if we inserted a key
+ * at the start of the node and it is not the new node
+ */
+ if (!rec && new_node != node) {
+ hfs_bnode_read_key(node, fd->search_key, data_off + size);
+ hfs_brec_update_parent(fd);
+ }
+ if (new_node) {
hfs_bnode_put(fd->bnode);
if (!new_node->parent) {
hfs_btree_inc_height(tree);
@@ -167,9 +170,6 @@ skip:
goto again;
}
- if (!rec)
- hfs_brec_update_parent(fd);
-
return 0;
}
@@ -366,6 +366,8 @@ again:
if (IS_ERR(parent))
return PTR_ERR(parent);
__hfs_brec_find(parent, fd);
+ if (fd->record < 0)
+ return -ENOENT;
hfs_bnode_dump(parent);
rec = fd->record;
--
1.9.1
From: Christoph Hellwig <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 294ab783ad98066b87296db1311c7ba2a60206a5 upstream.
It looks like the Kconfig check that was meant to fix this (commit
fe9233fb6914a0eb20166c967e3020f7f0fba2c9, "[SCSI] scsi_dh: fix kconfig related
build errors") was actually reversed, but no-one noticed until the new set of
patches which separated DM and SCSI_DH.
Fixes: fe9233fb6914a0eb20166c967e3020f7f0fba2c9
Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Mike Snitzer <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index da4dc25..5187030 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -330,7 +330,7 @@ config DM_MULTIPATH
# of SCSI_DH if the latter isn't defined but if
# it is, DM_MULTIPATH must depend on it. We get a build
# error if SCSI_DH=m and DM_MULTIPATH=y
- depends on SCSI_DH || !SCSI_DH
+ depends on !SCSI_DH || SCSI
---help---
Allow volume managers to support multipath hardware.
--
1.9.1
From: Arnaldo Carvalho de Melo <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit caa470475d9b59eeff093ae650800d34612c4379 upstream.
The original patch introducing this header wrote the number of CPUs available
and online in one order but swapped those values when reading; fix it.
Before:
# perf record usleep 1
# perf report --header-only | grep 'nrcpus \(online\|avail\)'
# nrcpus online : 4
# nrcpus avail : 4
# echo 0 > /sys/devices/system/cpu/cpu2/online
# perf record usleep 1
# perf report --header-only | grep 'nrcpus \(online\|avail\)'
# nrcpus online : 4
# nrcpus avail : 3
# echo 0 > /sys/devices/system/cpu/cpu1/online
# perf record usleep 1
# perf report --header-only | grep 'nrcpus \(online\|avail\)'
# nrcpus online : 4
# nrcpus avail : 2
After the fix, bringing back the CPUs online:
# perf report --header-only | grep 'nrcpus \(online\|avail\)'
# nrcpus online : 2
# nrcpus avail : 4
# echo 1 > /sys/devices/system/cpu/cpu2/online
# perf record usleep 1
# perf report --header-only | grep 'nrcpus \(online\|avail\)'
# nrcpus online : 3
# nrcpus avail : 4
# echo 1 > /sys/devices/system/cpu/cpu1/online
# perf record usleep 1
# perf report --header-only | grep 'nrcpus \(online\|avail\)'
# nrcpus online : 4
# nrcpus avail : 4
Acked-by: Namhyung Kim <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Wang Nan <[email protected]>
Fixes: fbe96f29ce4b ("perf tools: Make perf.data more self-descriptive (v8)")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
[lizf: Backported to 3.4: fix it by saving the values in an array and then
printing them in reverse order]
Signed-off-by: Zefan Li <[email protected]>
---
tools/perf/util/header.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index c0b70c6..5a4482c 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -1060,25 +1060,19 @@ static void print_cpudesc(struct perf_header *ph, int fd, FILE *fp)
static void print_nrcpus(struct perf_header *ph, int fd, FILE *fp)
{
ssize_t ret;
- u32 nr;
+ u32 nr[2];
ret = read(fd, &nr, sizeof(nr));
if (ret != (ssize_t)sizeof(nr))
- nr = -1; /* interpreted as error */
+ nr[0] = nr[1] = -1; /* interpreted as error */
- if (ph->needs_swap)
- nr = bswap_32(nr);
-
- fprintf(fp, "# nrcpus online : %u\n", nr);
-
- ret = read(fd, &nr, sizeof(nr));
- if (ret != (ssize_t)sizeof(nr))
- nr = -1; /* interpreted as error */
-
- if (ph->needs_swap)
- nr = bswap_32(nr);
+ if (ph->needs_swap) {
+ nr[0] = bswap_32(nr[0]);
+ nr[1] = bswap_32(nr[1]);
+ }
- fprintf(fp, "# nrcpus avail : %u\n", nr);
+ fprintf(fp, "# nrcpus online : %u\n", nr[1]);
+ fprintf(fp, "# nrcpus avail : %u\n", nr[0]);
}
static void print_version(struct perf_header *ph, int fd, FILE *fp)
--
1.9.1
From: Paul Mackerras <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit e297c939b745e420ef0b9dc989cb87bda617b399 upstream.
This fixes a race which can result in the same virtual IRQ number
being assigned to two different MSI interrupts. The most visible
consequence of that is usually a warning and stack trace from the
sysfs code about an attempt to create a duplicate entry in sysfs.
The race happens when one CPU (say CPU 0) is disposing of an MSI
while another CPU (say CPU 1) is setting up an MSI. CPU 0 calls
(for example) pnv_teardown_msi_irqs(), which calls
msi_bitmap_free_hwirqs() to indicate that the MSI (i.e. its
hardware IRQ number) is no longer in use. Then, before CPU 0 gets
to calling irq_dispose_mapping() to free up the virtual IRQ number,
CPU 1 comes in and calls msi_bitmap_alloc_hwirqs() to allocate an
MSI, and gets the same hardware IRQ number that CPU 0 just freed.
CPU 1 then calls irq_create_mapping() to get a virtual IRQ number,
which sees that there is currently a mapping for that hardware IRQ
number and returns the corresponding virtual IRQ number (which is
the same virtual IRQ number that CPU 0 was using). CPU 0 then
calls irq_dispose_mapping() and frees that virtual IRQ number.
Now, if another CPU comes along and calls irq_create_mapping(), it
is likely to get the virtual IRQ number that was just freed,
resulting in the same virtual IRQ number apparently being used for
two different hardware interrupts.
To fix this race, we just move the call to msi_bitmap_free_hwirqs()
to after the call to irq_dispose_mapping(). Since virq_to_hw()
doesn't work for the virtual IRQ number after irq_dispose_mapping()
has been called, we need to call it before irq_dispose_mapping() and
remember the result for the msi_bitmap_free_hwirqs() call.
The pattern of calling msi_bitmap_free_hwirqs() before
irq_dispose_mapping() appears in 5 places under arch/powerpc, and
appears to have originated in commit 05af7bd2d75e ("[POWERPC] MPIC
U3/U4 MSI backend") from 2007.
Fixes: 05af7bd2d75e ("[POWERPC] MPIC U3/U4 MSI backend")
Reported-by: Alexey Kardashevskiy <[email protected]>
Signed-off-by: Paul Mackerras <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
[bwh: Backported to 3.2:
- powernv uses a private function instead of msi_bitmap_free_hwirqs()
- Adjust filename, context]
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/powerpc/platforms/powernv/pci.c | 4 +++-
arch/powerpc/sysdev/fsl_msi.c | 5 +++--
arch/powerpc/sysdev/mpic_pasemi_msi.c | 5 +++--
arch/powerpc/sysdev/mpic_u3msi.c | 5 +++--
arch/powerpc/sysdev/ppc4xx_msi.c | 5 +++--
5 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
index be3cfc5..5b127c8 100644
--- a/arch/powerpc/platforms/powernv/pci.c
+++ b/arch/powerpc/platforms/powernv/pci.c
@@ -137,6 +137,7 @@ static void pnv_teardown_msi_irqs(struct pci_dev *pdev)
struct pci_controller *hose = pci_bus_to_host(pdev->bus);
struct pnv_phb *phb = hose->private_data;
struct msi_desc *entry;
+ irq_hw_number_t hwirq;
if (WARN_ON(!phb))
return;
@@ -144,9 +145,10 @@ static void pnv_teardown_msi_irqs(struct pci_dev *pdev)
list_for_each_entry(entry, &pdev->msi_list, list) {
if (entry->irq == NO_IRQ)
continue;
+ hwirq = virq_to_hw(entry->irq);
irq_set_msi_desc(entry->irq, NULL);
- pnv_put_msi(phb, virq_to_hw(entry->irq));
irq_dispose_mapping(entry->irq);
+ pnv_put_msi(phb, hwirq);
}
}
#endif /* CONFIG_PCI_MSI */
diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
index 6e097de..fd1a96b 100644
--- a/arch/powerpc/sysdev/fsl_msi.c
+++ b/arch/powerpc/sysdev/fsl_msi.c
@@ -108,15 +108,16 @@ static void fsl_teardown_msi_irqs(struct pci_dev *pdev)
{
struct msi_desc *entry;
struct fsl_msi *msi_data;
+ irq_hw_number_t hwirq;
list_for_each_entry(entry, &pdev->msi_list, list) {
if (entry->irq == NO_IRQ)
continue;
+ hwirq = virq_to_hw(entry->irq);
msi_data = irq_get_chip_data(entry->irq);
irq_set_msi_desc(entry->irq, NULL);
- msi_bitmap_free_hwirqs(&msi_data->bitmap,
- virq_to_hw(entry->irq), 1);
irq_dispose_mapping(entry->irq);
+ msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
}
return;
diff --git a/arch/powerpc/sysdev/mpic_pasemi_msi.c b/arch/powerpc/sysdev/mpic_pasemi_msi.c
index 38e6238..e873616 100644
--- a/arch/powerpc/sysdev/mpic_pasemi_msi.c
+++ b/arch/powerpc/sysdev/mpic_pasemi_msi.c
@@ -74,6 +74,7 @@ static int pasemi_msi_check_device(struct pci_dev *pdev, int nvec, int type)
static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
{
struct msi_desc *entry;
+ irq_hw_number_t hwirq;
pr_debug("pasemi_msi_teardown_msi_irqs, pdev %p\n", pdev);
@@ -81,10 +82,10 @@ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
if (entry->irq == NO_IRQ)
continue;
+ hwirq = virq_to_hw(entry->irq);
irq_set_msi_desc(entry->irq, NULL);
- msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
- virq_to_hw(entry->irq), ALLOC_CHUNK);
irq_dispose_mapping(entry->irq);
+ msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, ALLOC_CHUNK);
}
return;
diff --git a/arch/powerpc/sysdev/mpic_u3msi.c b/arch/powerpc/sysdev/mpic_u3msi.c
index 9a7aa0e..dfc3486 100644
--- a/arch/powerpc/sysdev/mpic_u3msi.c
+++ b/arch/powerpc/sysdev/mpic_u3msi.c
@@ -124,15 +124,16 @@ static int u3msi_msi_check_device(struct pci_dev *pdev, int nvec, int type)
static void u3msi_teardown_msi_irqs(struct pci_dev *pdev)
{
struct msi_desc *entry;
+ irq_hw_number_t hwirq;
list_for_each_entry(entry, &pdev->msi_list, list) {
if (entry->irq == NO_IRQ)
continue;
+ hwirq = virq_to_hw(entry->irq);
irq_set_msi_desc(entry->irq, NULL);
- msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
- virq_to_hw(entry->irq), 1);
irq_dispose_mapping(entry->irq);
+ msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, 1);
}
return;
diff --git a/arch/powerpc/sysdev/ppc4xx_msi.c b/arch/powerpc/sysdev/ppc4xx_msi.c
index 1c2d7af..4aae9c8 100644
--- a/arch/powerpc/sysdev/ppc4xx_msi.c
+++ b/arch/powerpc/sysdev/ppc4xx_msi.c
@@ -114,16 +114,17 @@ void ppc4xx_teardown_msi_irqs(struct pci_dev *dev)
{
struct msi_desc *entry;
struct ppc4xx_msi *msi_data = &ppc4xx_msi;
+ irq_hw_number_t hwirq;
dev_dbg(&dev->dev, "PCIE-MSI: tearing down msi irqs\n");
list_for_each_entry(entry, &dev->msi_list, list) {
if (entry->irq == NO_IRQ)
continue;
+ hwirq = virq_to_hw(entry->irq);
irq_set_msi_desc(entry->irq, NULL);
- msi_bitmap_free_hwirqs(&msi_data->bitmap,
- virq_to_hw(entry->irq), 1);
irq_dispose_mapping(entry->irq);
+ msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
}
}
--
1.9.1
From: David Woodhouse <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 03da3ff1cfcd7774c8780d2547ba0d995f7dc03d upstream.
In 2007, commit 07190a08eef36 ("Mark TSC on GeodeLX reliable")
bypassed verification of the TSC on Geode LX. However, this code
(now in the check_system_tsc_reliable() function in
arch/x86/kernel/tsc.c) was only present if CONFIG_MGEODE_LX was
set.
OpenWRT has recently started building its generic Geode target
for Geode GX, not LX, to include support for additional
platforms. This broke the timekeeping on LX-based devices,
because the TSC wasn't marked as reliable:
https://dev.openwrt.org/ticket/20531
By adding a runtime check on is_geode_lx(), we can also include
the fix if CONFIG_MGEODEGX1 or CONFIG_X86_GENERIC are set, thus
fixing the problem.
Signed-off-by: David Woodhouse <[email protected]>
Cc: Andres Salomon <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/x86/kernel/tsc.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 8652aa4..ffded61 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -18,6 +18,7 @@
#include <asm/hypervisor.h>
#include <asm/nmi.h>
#include <asm/x86_init.h>
+#include <asm/geode.h>
unsigned int __read_mostly cpu_khz; /* TSC clocks / usec, not used here */
EXPORT_SYMBOL(cpu_khz);
@@ -800,15 +801,17 @@ EXPORT_SYMBOL_GPL(mark_tsc_unstable);
static void __init check_system_tsc_reliable(void)
{
-#ifdef CONFIG_MGEODE_LX
- /* RTSC counts during suspend */
+#if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
+ if (is_geode_lx()) {
+ /* RTSC counts during suspend */
#define RTSC_SUSP 0x100
- unsigned long res_low, res_high;
+ unsigned long res_low, res_high;
- rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
- /* Geode_LX - the OLPC CPU has a very reliable TSC */
- if (res_low & RTSC_SUSP)
- tsc_clocksource_reliable = 1;
+ rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
+ /* Geode_LX - the OLPC CPU has a very reliable TSC */
+ if (res_low & RTSC_SUSP)
+ tsc_clocksource_reliable = 1;
+ }
#endif
if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
tsc_clocksource_reliable = 1;
--
1.9.1
From: Russell King <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 9b55613f42e8d40d5c9ccb8970bde6af4764b2ab upstream.
When a kernel is built covering ARMv6 to ARMv7, we omit to clear the
IT state when entering a signal handler. This can cause the first
few instructions to be conditionally executed depending on the parent
context.
In any case, the original test for >= ARMv7 is broken - ARMv6 can have
Thumb-2 support as well, and an ARMv6T2 specific build would omit this
code too.
Relax the test back to ARMv6 or greater. This results in us always
clearing the IT state bits in the PSR, even on CPUs where these bits
are reserved. However, they're reserved for the IT state, so this
should cause no harm.
Fixes: d71e1352e240 ("Clear the IT state when invoking a Thumb-2 signal handler")
Acked-by: Tony Lindgren <[email protected]>
Tested-by: H. Nikolaus Schaller <[email protected]>
Tested-by: Grazvydas Ignotas <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/arm/kernel/signal.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index ffc0902..13579aa 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -437,12 +437,17 @@ setup_return(struct pt_regs *regs, struct k_sigaction *ka,
*/
thumb = handler & 1;
-#if __LINUX_ARM_ARCH__ >= 7
+#if __LINUX_ARM_ARCH__ >= 6
/*
- * Clear the If-Then Thumb-2 execution state
- * ARM spec requires this to be all 000s in ARM mode
- * Snapdragon S4/Krait misbehaves on a Thumb=>ARM
- * signal transition without this.
+ * Clear the If-Then Thumb-2 execution state. ARM spec
+ * requires this to be all 000s in ARM mode. Snapdragon
+ * S4/Krait misbehaves on a Thumb=>ARM signal transition
+ * without this.
+ *
+ * We must do this whenever we are running on a Thumb-2
+ * capable CPU, which includes ARMv6T2. However, we elect
+ * to do this whenever we're on an ARMv6 or later CPU for
+ * simplicity.
*/
cpsr &= ~PSR_IT_MASK;
#endif
--
1.9.1
From: Herbert Xu <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 93efac3f2e03321129de67a3c0ba53048bb53e31 upstream.
The IPv6 IPsec pre-encap path performs fragmentation for tunnel-mode
packets. That is, we perform fragmentation pre-encap rather than
post-encap.
A check was added later to ensure that proper MTU information is
passed back for locally generated traffic. Unfortunately this
check was performed on all IPsec packets, including transport-mode
packets.
What's more, the check failed to take GSO into account.
The end result is that transport-mode GSO packets get dropped at
the check.
This patch fixes it by moving the tunnel mode check forward as well
as adding the GSO check.
Fixes: dd767856a36e ("xfrm6: Don't call icmpv6_send on local error")
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Steffen Klassert <[email protected]>
[lizf: Backported to 3.4:
- adjust context
- s/ignore_df/local_df]
Signed-off-by: Zefan Li <[email protected]>
---
net/ipv6/xfrm6_output.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
index 8755a30..7fc10b9 100644
--- a/net/ipv6/xfrm6_output.c
+++ b/net/ipv6/xfrm6_output.c
@@ -137,20 +137,24 @@ static int __xfrm6_output(struct sk_buff *skb)
struct dst_entry *dst = skb_dst(skb);
struct xfrm_state *x = dst->xfrm;
int mtu = ip6_skb_dst_mtu(skb);
+ bool toobig;
- if (skb->len > mtu && xfrm6_local_dontfrag(skb)) {
+ if (x->props.mode != XFRM_MODE_TUNNEL)
+ goto skip_frag;
+
+ toobig = skb->len > mtu && !skb_is_gso(skb);
+
+ if (toobig && xfrm6_local_dontfrag(skb)) {
xfrm6_local_rxpmtu(skb, mtu);
return -EMSGSIZE;
- } else if (!skb->local_df && skb->len > mtu && skb->sk) {
+ } else if (!skb->local_df && toobig && skb->sk) {
xfrm6_local_error(skb, mtu);
return -EMSGSIZE;
}
- if (x->props.mode == XFRM_MODE_TUNNEL &&
- ((skb->len > mtu && !skb_is_gso(skb)) ||
- dst_allfrag(skb_dst(skb)))) {
+ if (toobig || dst_allfrag(skb_dst(skb)))
return ip6_fragment(skb, x->outer_mode->afinfo->output_finish);
- }
+skip_frag:
return x->outer_mode->afinfo->output_finish(skb);
}
--
1.9.1
From: James Hogan <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 53960059d56ecef67d4ddd546731623641a3d2d1 upstream.
If there is a DMA zone (usually 24bit = 16MB I believe), but no DMA32
zone, as is the case for some 32-bit kernels, then massage_gfp_flags()
will cause DMA memory allocated for devices with a 32..63-bit
coherent_dma_mask to fall back to using __GFP_DMA, even though there may
only be 32 bits of physical address available anyway.
Correct that case to compare against a mask the size of phys_addr_t
instead of always using a 64-bit mask.
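For context (not part of the patch), DMA_BIT_MASK() is defined in
<linux/dma-mapping.h> roughly as shown below; with a 32-bit phys_addr_t
the fall-back check therefore compares against 0xffffffff instead of a
full 64-bit mask:
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
    /* Illustrative effect on a kernel with a 32-bit phys_addr_t:
     *   old: dev->coherent_dma_mask < DMA_BIT_MASK(64)  -> true for any
     *        32..63-bit mask, so __GFP_DMA was (wrongly) forced
     *   new: dev->coherent_dma_mask < DMA_BIT_MASK(32)  -> false for a
     *        full 32-bit mask, so normal memory can be used
     */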
Signed-off-by: James Hogan <[email protected]>
Fixes: a2e715a86c6d ("MIPS: DMA: Fix computation of DMA flags from device's coherent_dma_mask.")
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/9610/
Signed-off-by: Ralf Baechle <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/mips/mm/dma-default.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
index 0eea2d2..f395d5d 100644
--- a/arch/mips/mm/dma-default.c
+++ b/arch/mips/mm/dma-default.c
@@ -71,7 +71,7 @@ static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
else
#endif
#if defined(CONFIG_ZONE_DMA) && !defined(CONFIG_ZONE_DMA32)
- if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
+ if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
dma_flag = __GFP_DMA;
else
#endif
--
1.9.1
From: Peter Zijlstra <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 95913d97914f44db2b81271c2e2ebd4d2ac2df83 upstream.
So the problem this patch is trying to address is as follows:
 CPU0                            CPU1
 context_switch(A, B)
                                 ttwu(A)
                                   LOCK A->pi_lock
                                   A->on_cpu == 0
 finish_task_switch(A)
   prev_state = A->state  <-.
   WMB                      |
   A->on_cpu = 0;           |
   UNLOCK rq0->lock         |
                            |    context_switch(C, A)
                            `--  A->state = TASK_DEAD
   prev_state == TASK_DEAD
     put_task_struct(A)
                                 context_switch(A, C)
                                 finish_task_switch(A)
                                   A->state == TASK_DEAD
                                     put_task_struct(A)
The argument being that the WMB will allow the load of A->state on CPU0
to cross over and observe CPU1's store of A->state, which will then
result in a double-drop and use-after-free.
Now the comment states (and this was true once upon a long time ago)
that we need to observe A->state while holding rq->lock because that
will order us against the wakeup; however the wakeup will not in fact
acquire (that) rq->lock; it takes A->pi_lock these days.
We can obviously fix this by upgrading the WMB to an MB, but that is
expensive, so we'd rather avoid that.
The alternative this patch takes is: smp_store_release(&A->on_cpu, 0),
which avoids the MB on some archs, but not important ones like ARM.
Reported-by: Oleg Nesterov <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: e4a52bcb9a18 ("sched: Remove rq->lock from the first half of ttwu()")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
[lizf: Backported to 3.4: use smp_mb() instead of smp_store_release(), which
is not defined in 3.4.y]
Signed-off-by: Zefan Li <[email protected]>
---
kernel/sched/core.c | 10 +++++-----
kernel/sched/sched.h | 4 +++-
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 15be435..609a226 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1949,11 +1949,11 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
* If a task dies, then it sets TASK_DEAD in tsk->state and calls
* schedule one last time. The schedule call will never return, and
* the scheduled task must drop that reference.
- * The test for TASK_DEAD must occur while the runqueue locks are
- * still held, otherwise prev could be scheduled on another cpu, die
- * there before we look at prev->state, and then the reference would
- * be dropped twice.
- * Manfred Spraul <[email protected]>
+ *
+ * We must observe prev->state before clearing prev->on_cpu (in
+ * finish_lock_switch), otherwise a concurrent wakeup can get prev
+ * running on another CPU and we could race with its RUNNING -> DEAD
+ * transition, resulting in a double drop.
*/
prev_state = prev->state;
finish_arch_switch(prev);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4a5e739..44f4058 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -702,8 +702,10 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
* After ->on_cpu is cleared, the task can be moved to a different CPU.
* We must ensure this doesn't happen until the switch is completely
* finished.
+ *
+ * Pairs with the control dependency and rmb in try_to_wake_up().
*/
- smp_wmb();
+ smp_mb();
prev->on_cpu = 0;
#endif
#ifdef CONFIG_DEBUG_SPINLOCK
--
1.9.1
From: Johannes Berg <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 5bd166872d8f99f156fac191299d24f828bb2348 upstream.
The code to send the RX PN data (for each TID) to the firmware
has a devastating bug: it overwrites the data for TID 0 with
all the TID data, leaving the remaining TIDs zeroed. This will
allow replays to actually be accepted by the firmware, which
could allow waking up the system.
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Luca Coelho <[email protected]>
[lizf: Backported to 3.4: adjust filename]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/net/wireless/iwlwifi/iwl-agn-lib.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/iwlwifi/iwl-agn-lib.c b/drivers/net/wireless/iwlwifi/iwl-agn-lib.c
index 56f41c9..6314e24 100644
--- a/drivers/net/wireless/iwlwifi/iwl-agn-lib.c
+++ b/drivers/net/wireless/iwlwifi/iwl-agn-lib.c
@@ -1063,7 +1063,7 @@ static void iwlagn_wowlan_program_keys(struct ieee80211_hw *hw,
u8 *pn = seq.ccmp.pn;
ieee80211_get_key_rx_seq(key, i, &seq);
- aes_sc->pn = cpu_to_le64(
+ aes_sc[i].pn = cpu_to_le64(
(u64)pn[5] |
((u64)pn[4] << 8) |
((u64)pn[3] << 16) |
--
1.9.1
From: Vasant Hegde <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 8832317f662c06f5c06e638f57bfe89a71c9b266 upstream.
Currently we do not validate rtas.entry before calling enter_rtas(). This
leads to a kernel oops when user space calls the rtas system call on a powernv
platform (see below). This patch adds code to validate rtas.entry before
making enter_rtas() call.
Oops: Exception in kernel mode, sig: 4 [#1]
SMP NR_CPUS=1024 NUMA PowerNV
task: c000000004294b80 ti: c0000007e1a78000 task.ti: c0000007e1a78000
NIP: 0000000000000000 LR: 0000000000009c14 CTR: c000000000423140
REGS: c0000007e1a7b920 TRAP: 0e40 Not tainted (3.18.17-340.el7_1.pkvm3_1_0.2400.1.ppc64le)
MSR: 1000000000081000 <HV,ME> CR: 00000000 XER: 00000000
CFAR: c000000000009c0c SOFTE: 0
NIP [0000000000000000] (null)
LR [0000000000009c14] 0x9c14
Call Trace:
[c0000007e1a7bba0] [c00000000041a7f4] avc_has_perm_noaudit+0x54/0x110 (unreliable)
[c0000007e1a7bd80] [c00000000002ddc0] ppc_rtas+0x150/0x2d0
[c0000007e1a7be30] [c000000000009358] syscall_exit+0x0/0x98
Fixes: 55190f88789a ("powerpc: Add skeleton PowerNV platform")
Reported-by: NAGESWARA R. SASTRY <[email protected]>
Signed-off-by: Vasant Hegde <[email protected]>
[mpe: Reword change log, trim oops, and add stable + fixes]
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/powerpc/kernel/rtas.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
index b42cdc3..8178294 100644
--- a/arch/powerpc/kernel/rtas.c
+++ b/arch/powerpc/kernel/rtas.c
@@ -1042,6 +1042,9 @@ asmlinkage int ppc_rtas(struct rtas_args __user *uargs)
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
+ if (!rtas.entry)
+ return -EINVAL;
+
if (copy_from_user(&args, uargs, 3 * sizeof(u32)) != 0)
return -EFAULT;
--
1.9.1
From: NeilBrown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit c340702ca26a628832fade4f133d8160a55c29cc upstream.
When a write fails and a bad-block-list is present, we can
update the bad-block-list instead of writing the data. If
this succeeds then it is OK to clear the relevant bitmap-bit as
no further 'sync' of the block is needed.
However if writing the bad-block-list fails then we need to
treat the write as failed and particularly must not clear
the bitmap bit. Otherwise the device can be re-added (after
any hardware connection issues are resolved) and because the
relevant bit in the bitmap is clear, that block will not be
resynced. This leads to data corruption.
We already delay the final bio_endio() on the write until
the bad-block-list is written so that when the write
returns: either that data is safe, the bad-block record is
safe, or the fact that the device is faulty is safe.
However we *don't* delay the clearing of the bitmap, so the
bitmap bit can be recorded as cleared before we know if the
bad-block-list was written safely.
So: delay that until the write really is safe.
i.e. move the call to close_write() until just before
calling bio_endio(), and recheck the 'is array degraded'
status before making that call.
This bug goes back to v3.1 when bad-block-lists were
introduced, though it only affects arrays created with
mdadm-3.3 or later as only those have bad-block lists.
Backports will require at least
Commit: 95af587e95aa ("md/raid10: ensure device failure recorded before write request returns.")
as well. I'll send that to 'stable' separately.
Note that of the two tests of R10BIO_WriteError that this
patch adds, the first is certain to fail and the second is
certain to succeed. However doing it this way makes the
patch more obviously correct. I will tidy the code up in a
future merge window.
Reported-by: Nate Dailey <[email protected]>
Fixes: bd870a16c594 ("md/raid10: Handle write errors by updating badblock log.")
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/raid10.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index b0ad772..1b77980 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2568,16 +2568,17 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
rdev_dec_pending(rdev, conf->mddev);
}
}
- if (test_bit(R10BIO_WriteError,
- &r10_bio->state))
- close_write(r10_bio);
if (fail) {
spin_lock_irq(&conf->device_lock);
list_add(&r10_bio->retry_list, &conf->bio_end_io_list);
spin_unlock_irq(&conf->device_lock);
md_wakeup_thread(conf->mddev->thread);
- } else
+ } else {
+ if (test_bit(R10BIO_WriteError,
+ &r10_bio->state))
+ close_write(r10_bio);
raid_end_bio_io(r10_bio);
+ }
}
}
@@ -2604,6 +2605,12 @@ static void raid10d(struct mddev *mddev)
r10_bio = list_first_entry(&conf->bio_end_io_list,
struct r10bio, retry_list);
list_del(&r10_bio->retry_list);
+ if (mddev->degraded)
+ set_bit(R10BIO_Degraded, &r10_bio->state);
+
+ if (test_bit(R10BIO_WriteError,
+ &r10_bio->state))
+ close_write(r10_bio);
raid_end_bio_io(r10_bio);
}
}
--
1.9.1
From: NeilBrown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 95af587e95aacb9cfda4a9641069a5244a540dc8 upstream.
When a write to one of the legs of a RAID10 fails, the failure is
recorded in the metadata of the other legs so that after a restart
the data on the failed drive wont be trusted even if that drive seems
to be working again (maybe a cable was unplugged).
Currently there is no interlock between the write request completing
and the metadata update. So it is possible that the write will
complete, the app will confirm success in some way, and then the
machine will crash before the metadata update completes.
This is an extremely small hole for a race to fit in, but it is
theoretically possible and so should be closed.
So:
- set MD_CHANGE_PENDING when requesting a metadata update for a
failed device, so we can know with certainty when it completes
- queue requests that experienced an error on a new queue which
is only processed after the metadata update completes
- call raid_end_bio_io() on bios in that queue when the time comes.
Signed-off-by: NeilBrown <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/raid10.c | 29 ++++++++++++++++++++++++++++-
drivers/md/raid10.h | 6 ++++++
2 files changed, 34 insertions(+), 1 deletion(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 149426c..b0ad772 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1463,6 +1463,7 @@ static void error(struct mddev *mddev, struct md_rdev *rdev)
set_bit(Blocked, &rdev->flags);
set_bit(Faulty, &rdev->flags);
set_bit(MD_CHANGE_DEVS, &mddev->flags);
+ set_bit(MD_CHANGE_PENDING, &mddev->flags);
printk(KERN_ALERT
"md/raid10:%s: Disk failure on %s, disabling device.\n"
"md/raid10:%s: Operation continuing on %d devices.\n",
@@ -2536,6 +2537,7 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
}
put_buf(r10_bio);
} else {
+ bool fail = false;
for (m = 0; m < conf->copies; m++) {
int dev = r10_bio->devs[m].devnum;
struct bio *bio = r10_bio->devs[m].bio;
@@ -2548,6 +2550,7 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
rdev_dec_pending(rdev, conf->mddev);
} else if (bio != NULL &&
!test_bit(BIO_UPTODATE, &bio->bi_flags)) {
+ fail = true;
if (!narrow_write_error(r10_bio, m)) {
md_error(conf->mddev, rdev);
set_bit(R10BIO_Degraded,
@@ -2568,7 +2571,13 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
if (test_bit(R10BIO_WriteError,
&r10_bio->state))
close_write(r10_bio);
- raid_end_bio_io(r10_bio);
+ if (fail) {
+ spin_lock_irq(&conf->device_lock);
+ list_add(&r10_bio->retry_list, &conf->bio_end_io_list);
+ spin_unlock_irq(&conf->device_lock);
+ md_wakeup_thread(conf->mddev->thread);
+ } else
+ raid_end_bio_io(r10_bio);
}
}
@@ -2582,6 +2591,23 @@ static void raid10d(struct mddev *mddev)
md_check_recovery(mddev);
+ if (!list_empty_careful(&conf->bio_end_io_list) &&
+ !test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
+ LIST_HEAD(tmp);
+ spin_lock_irqsave(&conf->device_lock, flags);
+ if (!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
+ list_add(&tmp, &conf->bio_end_io_list);
+ list_del_init(&conf->bio_end_io_list);
+ }
+ spin_unlock_irqrestore(&conf->device_lock, flags);
+ while (!list_empty(&tmp)) {
+ r10_bio = list_first_entry(&conf->bio_end_io_list,
+ struct r10bio, retry_list);
+ list_del(&r10_bio->retry_list);
+ raid_end_bio_io(r10_bio);
+ }
+ }
+
blk_start_plug(&plug);
for (;;) {
@@ -3286,6 +3312,7 @@ static struct r10conf *setup_conf(struct mddev *mddev)
spin_lock_init(&conf->device_lock);
INIT_LIST_HEAD(&conf->retry_list);
+ INIT_LIST_HEAD(&conf->bio_end_io_list);
spin_lock_init(&conf->resync_lock);
init_waitqueue_head(&conf->wait_barrier);
diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index 24d45b8..8085d90 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -42,6 +42,12 @@ struct r10conf {
sector_t chunk_mask;
struct list_head retry_list;
+ /* A separate list of r1bio which just need raid_end_bio_io called.
+ * This mustn't happen for writes which had any errors if the superblock
+ * needs to be written.
+ */
+ struct list_head bio_end_io_list;
+
/* queue pending writes and submit them on unplug */
struct bio_list pending_bio_list;
int pending_count;
--
1.9.1
From: Guenter Roeck <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit a394d635193b641f2c86ead5ada5b115d57c51f8 upstream.
Actually, spi_master_put() after spi_alloc_master() must _not_ be followed
by kfree(). The memory is already freed with the call to spi_master_put()
through spi_master_class, which registers a release function. Calling both
spi_master_put() and kfree() results in often nasty (and delayed) crashes
elsewhere in the kernel, often in the networking stack.
This reverts commit eb4af0f5349235df2e4a5057a72fc8962d00308a.
Link to patch and concerns: https://lkml.org/lkml/2012/9/3/269
or
http://lkml.iu.edu/hypermail/linux/kernel/1209.0/00790.html
Alexey Klimov: This revert becomes valid after
94c69f765f1b4a658d96905ec59928e3e3e07e6a when spi-imx.c
has been fixed and there is no need to call kfree() so comment
for spi_alloc_master() should be fixed.
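As a usage illustration (the driver, struct and function names below are
made up, not from a real driver), the error path the corrected comment
describes looks roughly like this:
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/spi/spi.h>
    struct foo_priv { int dummy; };
    static int foo_spi_probe(struct platform_device *pdev)
    {
            struct spi_master *master;
            int ret;
            master = spi_alloc_master(&pdev->dev, sizeof(struct foo_priv));
            if (!master)
                    return -ENOMEM;
            ret = spi_register_master(master);
            if (ret) {
                    /* Drop the reference; the class release function frees
                     * the allocation, so no kfree(master) here. */
                    spi_master_put(master);
                    return ret;
            }
            return 0;
    }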
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Alexey Klimov <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/spi/spi.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 3d8f662..a3f31e9 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -831,8 +831,7 @@ static struct class spi_master_class = {
*
* The caller is responsible for assigning the bus number and initializing
* the master's methods before calling spi_register_master(); and (after errors
- * adding the device) calling spi_master_put() and kfree() to prevent a memory
- * leak.
+ * adding the device) calling spi_master_put() to prevent a memory leak.
*/
struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
{
--
1.9.1
From: Jeff Mahoney <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit a30e577c96f59b1e1678ea5462432b09bf7d5cbc upstream.
In btrfs_evict_inode, we properly truncate the page cache for evicted
inodes but then we call btrfs_wait_ordered_range for every inode as well.
It's the right thing to do for regular files but results in incorrect
behavior for device inodes for block devices.
filemap_fdatawrite_range gets called with inode->i_mapping which gets
resolved to the block device inode before getting passed to
wbc_attach_fdatawrite_inode and ultimately to inode_to_bdi. What happens
next depends on whether there's an open file handle associated with the
inode. If there is, we write to the block device, which is unexpected
behavior. If there isn't, we proceed normally and inode->i_data is used.
We can also end up racing against open/close which can result in crashes
when i_mapping points to a block device inode that has been closed.
Since there can't be any page cache associated with special file inodes,
it's safe to skip the btrfs_wait_ordered_range call entirely and avoid
the problem.
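For reference, special_file() (from include/linux/fs.h) covers exactly the
inode types in question:
    #define special_file(m) (S_ISCHR(m) || S_ISBLK(m) || S_ISFIFO(m) || S_ISSOCK(m))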
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=100911
Tested-by: Christoph Biedl <[email protected]>
Signed-off-by: Jeff Mahoney <[email protected]>
Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
fs/btrfs/inode.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9e51325..575c190 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -3685,7 +3685,8 @@ void btrfs_evict_inode(struct inode *inode)
goto no_delete;
}
/* do we really want it for ->i_nlink > 0 and zero btrfs_root_refs? */
- btrfs_wait_ordered_range(inode, 0, (u64)-1);
+ if (!special_file(inode->i_mode))
+ btrfs_wait_ordered_range(inode, 0, (u64)-1);
if (root->fs_info->log_root_recovering) {
BUG_ON(!list_empty(&BTRFS_I(inode)->i_orphan));
--
1.9.1
From: Mathias Nyman <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit ff30cbc8da425754e8ab96904db1d295bd034f27 upstream.
Bits 1:0 of the bmAttributes are used for the burst multiplier.
The rest of the bits used to be reserved (zero), but USB3.1 takes bit 7
into use.
Use the existing USB_SS_MULT() macro instead to make sure the mult value
and hence max packet calculations are correct for USB3.1 devices.
Note that burst multiplier in bmAttributes is zero based and that
the USB_SS_MULT() macro adds one.
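For reference, USB_SS_MULT() (from include/uapi/linux/usb/ch9.h) and a
worked example with made-up descriptor values:
    #define USB_SS_MULT(p)          (1 + ((p) & 0x3))
    /* Illustrative only: bmAttributes = 0x02, bMaxBurst = 1, maxp = 1024
     *   max_tx = (bMaxBurst + 1) * USB_SS_MULT(bmAttributes) * maxp
     *          = 2 * 3 * 1024 = 6144 bytes per service interval
     * A USB 3.1 device setting bit 7 no longer inflates the multiplier,
     * because only bits 1:0 are taken into account.
     */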
Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/core/config.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
index cc1004a..6baa836 100644
--- a/drivers/usb/core/config.c
+++ b/drivers/usb/core/config.c
@@ -114,7 +114,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
cfgno, inum, asnum, ep->desc.bEndpointAddress);
ep->ss_ep_comp.bmAttributes = 16;
} else if (usb_endpoint_xfer_isoc(&ep->desc) &&
- desc->bmAttributes > 2) {
+ USB_SS_MULT(desc->bmAttributes) > 3) {
dev_warn(ddev, "Isoc endpoint has Mult of %d in "
"config %d interface %d altsetting %d ep %d: "
"setting to 3\n", desc->bmAttributes + 1,
@@ -123,7 +123,8 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
}
if (usb_endpoint_xfer_isoc(&ep->desc))
- max_tx = (desc->bMaxBurst + 1) * (desc->bmAttributes + 1) *
+ max_tx = (desc->bMaxBurst + 1) *
+ (USB_SS_MULT(desc->bmAttributes)) *
usb_endpoint_maxp(&ep->desc);
else if (usb_endpoint_xfer_int(&ep->desc))
max_tx = usb_endpoint_maxp(&ep->desc) *
--
1.9.1
From: Roger Quadros <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit e5bfeab0ad515b4f6df39fe716603e9dc6d3dfd0 upstream.
For whatever reason, if XHCI died in the previous instant, then it will
never recover on the next xhci_start unless we
clear the DYING flag.
Signed-off-by: Roger Quadros <[email protected]>
Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/host/xhci.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index fd52e1e..88be7a5 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -141,7 +141,8 @@ static int xhci_start(struct xhci_hcd *xhci)
"waited %u microseconds.\n",
XHCI_MAX_HALT_USEC);
if (!ret)
- xhci->xhc_state &= ~XHCI_STATE_HALTED;
+ xhci->xhc_state &= ~(XHCI_STATE_HALTED | XHCI_STATE_DYING);
+
return ret;
}
--
1.9.1
From: Peter Seiderer <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 98ce94c8df762d413b3ecb849e2b966b21606d04 upstream.
Linux cifs mount with ntlmssp against a Mac OS X (Yosemite
10.10.5) share fails in case the clocks differ by more than +/-2h:
digest-service: digest-request: od failed with 2 proto=ntlmv2
digest-service: digest-request: kdc failed with -1561745592 proto=ntlmv2
Fix this by (re-)using the given server timestamp for the
ntlmv2 authentication (as Windows 7 does).
A related problem was also reported earlier by Namjae Jeon (see below):
Windows machine has extended security feature which refuse to allow
authentication when there is time difference between server time and
client time when ntlmv2 negotiation is used. This problem is prevalent
in embedded environments where the system time defaults to 1970.
Modern servers send the server timestamp in the TargetInfo Av_Pair
structure in the challenge message [see MS-NLMP 2.2.2.1]
In [MS-NLMP 3.1.5.1.2] it is explicitly mentioned that the client must
use the server provided timestamp if present OR current time if it is
not
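For orientation (not part of the patch), each target-info entry the new
find_timestamp() helper walks is a little-endian type/length/value record
along the lines sketched below; the struct name here is illustrative:
    /* Assumes <linux/types.h>; layout per MS-NLMP 2.2.2.1 (AV_PAIR). */
    struct av_pair {
            __le16 type;    /* e.g. NTLMSSP_AV_TIMESTAMP */
            __le16 length;  /* size of value[] in bytes */
            __u8   value[]; /* for the timestamp: a 64-bit little-endian
                             * Windows FILETIME (100ns units since 1601) */
    } __packed;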
Reported-by: Namjae Jeon <[email protected]>
Signed-off-by: Peter Seiderer <[email protected]>
Signed-off-by: Steve French <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
fs/cifs/cifsencrypt.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 50 insertions(+), 1 deletion(-)
diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
index 6dd3b61..8431216 100644
--- a/fs/cifs/cifsencrypt.c
+++ b/fs/cifs/cifsencrypt.c
@@ -388,6 +388,48 @@ find_domain_name(struct cifs_ses *ses, const struct nls_table *nls_cp)
return 0;
}
+/* Server has provided av pairs/target info in the type 2 challenge
+ * packet and we have plucked it and stored within smb session.
+ * We parse that blob here to find the server given timestamp
+ * as part of ntlmv2 authentication (or local current time as
+ * default in case of failure)
+ */
+static __le64
+find_timestamp(struct cifs_ses *ses)
+{
+ unsigned int attrsize;
+ unsigned int type;
+ unsigned int onesize = sizeof(struct ntlmssp2_name);
+ unsigned char *blobptr;
+ unsigned char *blobend;
+ struct ntlmssp2_name *attrptr;
+
+ if (!ses->auth_key.len || !ses->auth_key.response)
+ return 0;
+
+ blobptr = ses->auth_key.response;
+ blobend = blobptr + ses->auth_key.len;
+
+ while (blobptr + onesize < blobend) {
+ attrptr = (struct ntlmssp2_name *) blobptr;
+ type = le16_to_cpu(attrptr->type);
+ if (type == NTLMSSP_AV_EOL)
+ break;
+ blobptr += 2; /* advance attr type */
+ attrsize = le16_to_cpu(attrptr->length);
+ blobptr += 2; /* advance attr size */
+ if (blobptr + attrsize > blobend)
+ break;
+ if (type == NTLMSSP_AV_TIMESTAMP) {
+ if (attrsize == sizeof(u64))
+ return *((__le64 *)blobptr);
+ }
+ blobptr += attrsize; /* advance attr value */
+ }
+
+ return cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
+}
+
static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
const struct nls_table *nls_cp)
{
@@ -549,6 +591,7 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
struct ntlmv2_resp *buf;
char ntlmv2_hash[16];
unsigned char *tiblob = NULL; /* target info blob */
+ __le64 rsp_timestamp;
if (ses->server->secType == RawNTLMSSP) {
if (!ses->domainName) {
@@ -566,6 +609,12 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
}
}
+ /* Must be within 5 minutes of the server (or in range +/-2h
+ * in case of Mac OS X), so simply carry over server timestamp
+ * (as Windows 7 does)
+ */
+ rsp_timestamp = find_timestamp(ses);
+
baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
tilen = ses->auth_key.len;
tiblob = ses->auth_key.response;
@@ -583,7 +632,7 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
(ses->auth_key.response + CIFS_SESS_KEY_SIZE);
buf->blob_signature = cpu_to_le32(0x00000101);
buf->reserved = 0;
- buf->time = cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
+ buf->time = rsp_timestamp;
get_random_bytes(&buf->client_chal, sizeof(buf->client_chal));
buf->reserved2 = 0;
--
1.9.1
From: Joseph Qi <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 012572d4fc2e4ddd5c8ec8614d51414ec6cae02a upstream.
The order of the following three spinlocks should be:
dlm_domain_lock < dlm_ctxt->spinlock < dlm_lock_resource->spinlock
But dlm_dispatch_assert_master() is called while holding
dlm_ctxt->spinlock and dlm_lock_resource->spinlock, and then it calls
dlm_grab() which will take dlm_domain_lock.
Once another thread (for example, dlm_query_join_handler) has already
taken dlm_domain_lock and tries to take dlm_ctxt->spinlock, a deadlock
happens.
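Spelled out (illustrative only, not actual dlm code), the required nesting
from outermost to innermost is:
    spin_lock(&dlm_domain_lock);            /* outermost */
    spin_lock(&dlm->spinlock);
    spin_lock(&res->spinlock);              /* innermost */
    /* ... */
    spin_unlock(&res->spinlock);
    spin_unlock(&dlm->spinlock);
    spin_unlock(&dlm_domain_lock);
    /* The deadlock arises because dlm_dispatch_assert_master() is entered
     * with dlm->spinlock and res->spinlock held and then calls dlm_grab(),
     * which takes dlm_domain_lock -- the reverse of the order above. */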
Signed-off-by: Joseph Qi <[email protected]>
Cc: Joel Becker <[email protected]>
Cc: Mark Fasheh <[email protected]>
Cc: "Junxiao Bi" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
fs/ocfs2/dlm/dlmmaster.c | 4 +++-
fs/ocfs2/dlm/dlmrecovery.c | 6 +++++-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
index 7ba6ac1..751efa8 100644
--- a/fs/ocfs2/dlm/dlmmaster.c
+++ b/fs/ocfs2/dlm/dlmmaster.c
@@ -1411,6 +1411,7 @@ int dlm_master_request_handler(struct o2net_msg *msg, u32 len, void *data,
int found, ret;
int set_maybe;
int dispatch_assert = 0;
+ int dispatched = 0;
if (!dlm_grab(dlm))
return DLM_MASTER_RESP_NO;
@@ -1617,6 +1618,8 @@ send_response:
mlog(ML_ERROR, "failed to dispatch assert master work\n");
response = DLM_MASTER_RESP_ERROR;
dlm_lockres_put(res);
+ } else {
+ dispatched = 1;
}
} else {
if (res)
@@ -2041,7 +2044,6 @@ int dlm_dispatch_assert_master(struct dlm_ctxt *dlm,
/* queue up work for dlm_assert_master_worker */
- dlm_grab(dlm); /* get an extra ref for the work item */
dlm_init_work_item(dlm, item, dlm_assert_master_worker, NULL);
item->u.am.lockres = res; /* already have a ref */
/* can optionally ignore node numbers higher than this node */
diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index d15b071..0e5013e 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -1689,6 +1689,7 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
unsigned int hash;
int master = DLM_LOCK_RES_OWNER_UNKNOWN;
u32 flags = DLM_ASSERT_MASTER_REQUERY;
+ int dispatched = 0;
if (!dlm_grab(dlm)) {
/* since the domain has gone away on this
@@ -1710,6 +1711,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
mlog_errno(-ENOMEM);
/* retry!? */
BUG();
+ } else {
+ dispatched = 1;
}
} else /* put.. incase we are not the master */
dlm_lockres_put(res);
@@ -1717,7 +1720,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
}
spin_unlock(&dlm->spinlock);
- dlm_put(dlm);
+ if (!dispatched)
+ dlm_put(dlm);
return master;
}
--
1.9.1
From: Malcolm Crossley <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 64c98e7f49100b637cd20a6c63508caed6bbba7a upstream.
Sanitizing the e820 map may produce extra E820 entries which would result in
the topmost E820 entries being removed. The removed entries would typically
include the top E820 usable RAM region and thus result in the domain having
significantly less RAM available to it.
Fix by allowing sanitize_e820_map to use the full size of the allocated E820
array.
Signed-off-by: Malcolm Crossley <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
[lizf: Backported to 3.4: s/map/xen_e820_map]
Signed-off-by: Zefan Li <[email protected]>
---
arch/x86/xen/setup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index f8b0260..f60abe6 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -274,7 +274,7 @@ char * __init xen_memory_setup(void)
xen_ignore_unusable(map, memmap.nr_entries);
/* Make sure the Xen-supplied memory map is well-ordered. */
- sanitize_e820_map(map, memmap.nr_entries, &memmap.nr_entries);
+ sanitize_e820_map(map, ARRAY_SIZE(map), &memmap.nr_entries);
max_pages = xen_get_max_pages();
if (max_pages > max_pfn)
--
1.9.1
From: Thomas Gleixner <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit eddd3826a1a0190e5235703d1e666affa4d13b96 upstream.
Dmitry Vyukov reported the following using trinity and the memory
error detector AddressSanitizer
(https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel).
[ 124.575597] ERROR: AddressSanitizer: heap-buffer-overflow on
address ffff88002e280000
[ 124.576801] ffff88002e280000 is located 131938492886538 bytes to
the left of 28857600-byte region [ffffffff81282e0a, ffffffff82e0830a)
[ 124.578633] Accessed by thread T10915:
[ 124.579295] inlined in describe_heap_address
./arch/x86/mm/asan/report.c:164
[ 124.579295] #0 ffffffff810dd277 in asan_report_error
./arch/x86/mm/asan/report.c:278
[ 124.580137] #1 ffffffff810dc6a0 in asan_check_region
./arch/x86/mm/asan/asan.c:37
[ 124.581050] #2 ffffffff810dd423 in __tsan_read8 ??:0
[ 124.581893] #3 ffffffff8107c093 in get_wchan
./arch/x86/kernel/process_64.c:444
The address checks in the 64bit implementation of get_wchan() are
wrong in several ways:
- The lower bound of the stack is not the start of the stack
page. It's the start of the stack page plus sizeof (struct
thread_info)
- The upper bound must be:
top_of_stack - TOP_OF_KERNEL_STACK_PADDING - 2 * sizeof(unsigned long).
The 2 * sizeof(unsigned long) is required because the stack pointer
points at the frame pointer. The layout on the stack is: ... IP FP
... IP FP. So we need to make sure that both IP and FP are in the
bounds.
Fix the bound checks and get rid of the mix of numeric constants, u64
and unsigned long. Making all unsigned long allows us to use the same
function for 32bit as well.
Use READ_ONCE() when accessing the stack. This does not prevent a
concurrent wakeup of the task and the stack changing, but at least it
avoids TOCTOU.
Also check task state at the end of the loop. Again that does not
prevent concurrent changes, but it avoids walking for nothing.
Add proper comments while at it.
Reported-by: Dmitry Vyukov <[email protected]>
Reported-by: Sasha Levin <[email protected]>
Based-on-patch-from: Wolfram Gloger <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Borislav Petkov <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Kostya Serebryany <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: kasan-dev <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Wolfram Gloger <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
[lizf: Backported to 3.4:
- s/READ_ONCE/ACCESS_ONCE
- remove TOP_OF_KERNEL_STACK_PADDING]
Signed-off-by: Zefan Li <[email protected]>
---
arch/x86/kernel/process_64.c | 52 +++++++++++++++++++++++++++++++++++---------
1 file changed, 42 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index d5d7313..f6698ad 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -470,27 +470,59 @@ void set_personality_ia32(bool x32)
}
EXPORT_SYMBOL_GPL(set_personality_ia32);
+/*
+ * Called from fs/proc with a reference on @p to find the function
+ * which called into schedule(). This needs to be done carefully
+ * because the task might wake up and we might look at a stack
+ * changing under us.
+ */
unsigned long get_wchan(struct task_struct *p)
{
- unsigned long stack;
- u64 fp, ip;
+ unsigned long start, bottom, top, sp, fp, ip;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
return 0;
- stack = (unsigned long)task_stack_page(p);
- if (p->thread.sp < stack || p->thread.sp >= stack+THREAD_SIZE)
+
+ start = (unsigned long)task_stack_page(p);
+ if (!start)
+ return 0;
+
+ /*
+ * Layout of the stack page:
+ *
+ * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)
+ * PADDING
+ * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING
+ * stack
+ * ----------- bottom = start + sizeof(thread_info)
+ * thread_info
+ * ----------- start
+ *
+ * The tasks stack pointer points at the location where the
+ * framepointer is stored. The data on the stack is:
+ * ... IP FP ... IP FP
+ *
+ * We need to read FP and IP, so we need to adjust the upper
+ * bound by another unsigned long.
+ */
+ top = start + THREAD_SIZE;
+ top -= 2 * sizeof(unsigned long);
+ bottom = start + sizeof(struct thread_info);
+
+ sp = ACCESS_ONCE(p->thread.sp);
+ if (sp < bottom || sp > top)
return 0;
- fp = *(u64 *)(p->thread.sp);
+
+ fp = ACCESS_ONCE(*(unsigned long *)sp);
do {
- if (fp < (unsigned long)stack ||
- fp >= (unsigned long)stack+THREAD_SIZE)
+ if (fp < bottom || fp > top)
return 0;
- ip = *(u64 *)(fp+8);
+ ip = ACCESS_ONCE(*(unsigned long *)(fp + sizeof(unsigned long)));
if (!in_sched_functions(ip))
return ip;
- fp = *(u64 *)fp;
- } while (count++ < 16);
+ fp = ACCESS_ONCE(*(unsigned long *)fp);
+ } while (count++ < 16 && p->state != TASK_RUNNING);
return 0;
}
--
1.9.1
From: Mel Gorman <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 2f84a8990ebbe235c59716896e017c6b2ca1200f upstream.
SunDong reported the following on
https://bugzilla.kernel.org/show_bug.cgi?id=103841
I think I find a linux bug, I have the test cases is constructed. I
can stable recurring problems in fedora22(4.0.4) kernel version,
arch for x86_64. I construct transparent huge page, when the parent
and child process with MAP_SHARE, MAP_PRIVATE way to access the same
huge page area, it has the opportunity to lead to huge page copy on
write failure, and then it will munmap the child corresponding mmap
area, but then the child mmap area with VM_MAYSHARE attributes, child
process munmap this area can trigger VM_BUG_ON in set_vma_resv_flags
functions (vma - > vm_flags & VM_MAYSHARE).
There were a number of problems with the report (e.g. it's hugetlbfs that
triggers this, not transparent huge pages) but it was fundamentally
correct in that a VM_BUG_ON in set_vma_resv_flags() can be triggered that
looks like this
vma ffff8804651fd0d0 start 00007fc474e00000 end 00007fc475e00000
next ffff8804651fd018 prev ffff8804651fd188 mm ffff88046b1b1800
prot 8000000000000027 anon_vma (null) vm_ops ffffffff8182a7a0
pgoff 0 file ffff88106bdb9800 private_data (null)
flags: 0x84400fb(read|write|shared|mayread|maywrite|mayexec|mayshare|dontexpand|hugetlb)
------------
kernel BUG at mm/hugetlb.c:462!
SMP
Modules linked in: xt_pkttype xt_LOG xt_limit [..]
CPU: 38 PID: 26839 Comm: map Not tainted 4.0.4-default #1
Hardware name: Dell Inc. PowerEdge R810/0TT6JF, BIOS 2.7.4 04/26/2012
set_vma_resv_flags+0x2d/0x30
The VM_BUG_ON is correct because private and shared mappings have
different reservation accounting but the warning clearly shows that the
VMA is shared.
When a private COW fails to allocate a new page then only the process
that created the VMA gets the page -- all the children unmap the page.
If the children access that data in the future then they get killed.
The problem is that the same file is mapped shared and private. During
the COW, the allocation fails, the VMAs are traversed to unmap the other
private pages but a shared VMA is found and the bug is triggered. This
patch identifies such VMAs and skips them.
Signed-off-by: Mel Gorman <[email protected]>
Reported-by: SunDong <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: David Rientjes <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
mm/hugetlb.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bc36e28..e622aab 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2504,6 +2504,14 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
continue;
/*
+ * Shared VMAs have their own reserves and do not affect
+ * MAP_PRIVATE accounting but it is possible that a shared
+ * VMA is using the same page so check and skip such VMAs.
+ */
+ if (iter_vma->vm_flags & VM_MAYSHARE)
+ continue;
+
+ /*
* Unmap the page from other VMAs without their own reserves.
* They get marked to be SIGKILLed if they fault in these
* areas. This is because a future no-page fault on this VMA
--
1.9.1
From: Ben Hutchings <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 95c2b17534654829db428f11bcf4297c059a2a7e upstream.
Per-IRQ directories in procfs are created only when a handler is first
added to the irqdesc, not when the irqdesc is created. In the case of
a shared IRQ, multiple tasks can race to create a directory. This
race condition seems to have been present forever, but is easier to
hit with async probing.
Signed-off-by: Ben Hutchings <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
kernel/irq/proc.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
index fb655f5f..15374d0 100644
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -12,6 +12,7 @@
#include <linux/seq_file.h>
#include <linux/interrupt.h>
#include <linux/kernel_stat.h>
+#include <linux/mutex.h>
#include "internals.h"
@@ -326,18 +327,29 @@ void register_handler_proc(unsigned int irq, struct irqaction *action)
void register_irq_proc(unsigned int irq, struct irq_desc *desc)
{
+ static DEFINE_MUTEX(register_lock);
char name [MAX_NAMELEN];
- if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip) || desc->dir)
+ if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip))
return;
+ /*
+ * irq directories are registered only when a handler is
+ * added, not when the descriptor is created, so multiple
+ * tasks might try to register at the same time.
+ */
+ mutex_lock(®ister_lock);
+
+ if (desc->dir)
+ goto out_unlock;
+
memset(name, 0, MAX_NAMELEN);
sprintf(name, "%d", irq);
/* create /proc/irq/1234 */
desc->dir = proc_mkdir(name, root_irq_dir);
if (!desc->dir)
- return;
+ goto out_unlock;
#ifdef CONFIG_SMP
/* create /proc/irq/<irq>/smp_affinity */
@@ -358,6 +370,9 @@ void register_irq_proc(unsigned int irq, struct irq_desc *desc)
proc_create_data("spurious", 0444, desc->dir,
&irq_spurious_proc_fops, (void *)(long)irq);
+
+out_unlock:
+ mutex_unlock(®ister_lock);
}
void unregister_irq_proc(unsigned int irq, struct irq_desc *desc)
--
1.9.1
From: Kosuke Tatsukawa <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit e81107d4c6bd098878af9796b24edc8d4a9524fd upstream.
My colleague ran into a program stall on an x86_64 server, where
n_tty_read() was waiting for data even though there was data in the buffer
in the pty. The kernel stack for the stuck process looks like this:
#0 [ffff88303d107b58] __schedule at ffffffff815c4b20
#1 [ffff88303d107bd0] schedule at ffffffff815c513e
#2 [ffff88303d107bf0] schedule_timeout at ffffffff815c7818
#3 [ffff88303d107ca0] wait_woken at ffffffff81096bd2
#4 [ffff88303d107ce0] n_tty_read at ffffffff8136fa23
#5 [ffff88303d107dd0] tty_read at ffffffff81368013
#6 [ffff88303d107e20] __vfs_read at ffffffff811a3704
#7 [ffff88303d107ec0] vfs_read at ffffffff811a3a57
#8 [ffff88303d107f00] sys_read at ffffffff811a4306
#9 [ffff88303d107f50] entry_SYSCALL_64_fastpath at ffffffff815c86d7
There seem to be two problems causing this issue.
First, in drivers/tty/n_tty.c, __receive_buf() stores the data and
updates ldata->commit_head using smp_store_release() and then checks
the wait queue using waitqueue_active(). However, since there is no
memory barrier, __receive_buf() could return without calling
wake_up_interruptible_poll(), and at the same time, n_tty_read() could
start to wait in wait_woken() as in the following chart.
__receive_buf() n_tty_read()
------------------------------------------------------------------------
if (waitqueue_active(&tty->read_wait))
/* Memory operations issued after the
RELEASE may be completed before the
RELEASE operation has completed */
add_wait_queue(&tty->read_wait, &wait);
...
if (!input_available_p(tty, 0)) {
smp_store_release(&ldata->commit_head,
ldata->read_head);
...
timeout = wait_woken(&wait,
TASK_INTERRUPTIBLE, timeout);
------------------------------------------------------------------------
The second problem is that n_tty_read() also lacks a memory barrier
call and could also cause __receive_buf() to return without calling
wake_up_interruptible_poll(), and n_tty_read() to wait in wait_woken()
as in the chart below.
__receive_buf() n_tty_read()
------------------------------------------------------------------------
spin_lock_irqsave(&q->lock, flags);
/* from add_wait_queue() */
...
if (!input_available_p(tty, 0)) {
/* Memory operations issued after the
RELEASE may be completed before the
RELEASE operation has completed */
smp_store_release(&ldata->commit_head,
ldata->read_head);
if (waitqueue_active(&tty->read_wait))
__add_wait_queue(q, wait);
spin_unlock_irqrestore(&q->lock,flags);
/* from add_wait_queue() */
...
timeout = wait_woken(&wait,
TASK_INTERRUPTIBLE, timeout);
------------------------------------------------------------------------
There are also other places in drivers/tty/n_tty.c which have similar
calls to waitqueue_active(), so instead of adding many memory barrier
calls, this patch simply removes the call to waitqueue_active(),
leaving just wake_up*() behind.
This fixes both problems because, even though the memory access before
or after the spinlocks in both wake_up*() and add_wait_queue() can
sneak into the critical section, it cannot go past it and the critical
section assures that they will be serialized (please see "INTER-CPU
ACQUIRING BARRIER EFFECTS" in Documentation/memory-barriers.txt for a
better explanation). Moreover, the resulting code is much simpler.
Latency measurement using a ping-pong test over a pty doesn't show any
visible performance drop.
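A userspace analogue of the resulting pattern (a hedged sketch using pthread
primitives, not the kernel wait queue API): the producer signals
unconditionally while holding the lock, so a reader that has just checked for
data and is about to sleep cannot miss the wakeup:
/* wakeup.c - always signal under the lock instead of checking for waiters */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int data_ready;

static void *producer(void *arg)
{
        pthread_mutex_lock(&lock);
        data_ready = 1;
        /* no "is anyone waiting?" check here -- that is the racy pattern
         * the patch removes; signalling unconditionally is always safe */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        return NULL;
}

static void *consumer(void *arg)
{
        pthread_mutex_lock(&lock);
        while (!data_ready)
                pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        puts("woken with data available");
        return NULL;
}

int main(void)
{
        pthread_t p, c;

        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
}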
Signed-off-by: Kosuke Tatsukawa <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[lizf: Backported to 3.4:
- adjust context
- s/wake_up_interruptible_poll/wake_up_interruptible/
- drop changes to __receive_buf() and n_tty_set_termios()]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/tty/n_tty.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
index fa0376b..fc8822f 100644
--- a/drivers/tty/n_tty.c
+++ b/drivers/tty/n_tty.c
@@ -1297,8 +1297,7 @@ handle_newline:
tty->canon_data++;
spin_unlock_irqrestore(&tty->read_lock, flags);
kill_fasync(&tty->fasync, SIGIO, POLL_IN);
- if (waitqueue_active(&tty->read_wait))
- wake_up_interruptible(&tty->read_wait);
+ wake_up_interruptible(&tty->read_wait);
return;
}
}
@@ -1421,8 +1420,7 @@ static void n_tty_receive_buf(struct tty_struct *tty, const unsigned char *cp,
if ((!tty->icanon && (tty->read_cnt >= tty->minimum_to_wake)) ||
L_EXTPROC(tty)) {
kill_fasync(&tty->fasync, SIGIO, POLL_IN);
- if (waitqueue_active(&tty->read_wait))
- wake_up_interruptible(&tty->read_wait);
+ wake_up_interruptible(&tty->read_wait);
}
/*
--
1.9.1
From: Takashi Iwai <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 225db5762dc1a35b26850477ffa06e5cd0097243 upstream.
When OSS emulation is loaded on an ISA SB AWE32 chip, we now get kernel
warnings like:
WARNING: CPU: 0 PID: 2791 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x51/0x80()
sysfs: cannot create duplicate filename '/devices/isa/sbawe.0/sound/card0/seq-oss-0-0'
It's because both the emux synth and opl3 drivers try to register their
OSS device object with the same static index number 0. This hasn't
been a big problem until the recent rewrite of the device management code
(which exposes sysfs at the same time), but it's been an obvious bug.
This patch works around it just by using a different index number for the
emux synth object. There may be a more elegant way to fix this, but it's
enough for now, as this code isn't touched very often anyway.
Reported-and-tested-by: Michael Shell <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
sound/synth/emux/emux_oss.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/sound/synth/emux/emux_oss.c b/sound/synth/emux/emux_oss.c
index daf61ab..646b667 100644
--- a/sound/synth/emux/emux_oss.c
+++ b/sound/synth/emux/emux_oss.c
@@ -69,7 +69,8 @@ snd_emux_init_seq_oss(struct snd_emux *emu)
struct snd_seq_oss_reg *arg;
struct snd_seq_device *dev;
- if (snd_seq_device_new(emu->card, 0, SNDRV_SEQ_DEV_ID_OSS,
+ /* using device#1 here for avoiding conflicts with OPL3 */
+ if (snd_seq_device_new(emu->card, 1, SNDRV_SEQ_DEV_ID_OSS,
sizeof(struct snd_seq_oss_reg), &dev) < 0)
return;
--
1.9.1
From: Yao-Wen Mao <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 8484bf2981b3d006426ac052a3642c9ce1d8d980 upstream.
These two headphones need a reset-resume quirk to properly resume to the
original volume level.
Signed-off-by: Yao-Wen Mao <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/core/quirks.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
index 9fac46d..8d75403 100644
--- a/drivers/usb/core/quirks.c
+++ b/drivers/usb/core/quirks.c
@@ -73,6 +73,12 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Philips PSC805 audio device */
{ USB_DEVICE(0x0471, 0x0155), .driver_info = USB_QUIRK_RESET_RESUME },
+ /* Plantronic Audio 655 DSP */
+ { USB_DEVICE(0x047f, 0xc008), .driver_info = USB_QUIRK_RESET_RESUME },
+
+ /* Plantronic Audio 648 USB */
+ { USB_DEVICE(0x047f, 0xc013), .driver_info = USB_QUIRK_RESET_RESUME },
+
/* Artisman Watchdog Dongle */
{ USB_DEVICE(0x04b4, 0x0526), .driver_info =
USB_QUIRK_CONFIG_INTF_STRINGS },
--
1.9.1
From: Russell King <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 8996eafdcbad149ac0f772fb1649fbb75c482a6a upstream.
Unlike shash algorithms, ahash drivers must implement export
and import as their descriptors may contain hardware state and
cannot be exported as is. Unfortunately some ahash drivers did
not provide them and end up causing crashes with algif_hash.
This patch adds a check to prevent these drivers from registering
ahash algorithms until they are fixed.
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
crypto/ahash.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 0ec05fe..5824191 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -462,7 +462,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
struct crypto_alg *base = &alg->halg.base;
if (alg->halg.digestsize > PAGE_SIZE / 8 ||
- alg->halg.statesize > PAGE_SIZE / 8)
+ alg->halg.statesize > PAGE_SIZE / 8 ||
+ alg->halg.statesize == 0)
return -EINVAL;
base->cra_type = &crypto_ahash_type;
--
1.9.1
From: Laura Abbott <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit fd7cd061adcf5f7503515ba52b6a724642a839c8 upstream.
We received several reports of systems rebooting and powering on
after an attempted shutdown. Testing showed that setting
XHCI_SPURIOUS_WAKEUP quirk in addition to the XHCI_SPURIOUS_REBOOT
quirk allowed the system to shutdown as expected for LynxPoint-LP
xHCI controllers. Set the quirk back.
Note that the quirk was originally introduced for LynxPoint and
LynxPoint-LP just for this same reason. See:
commit 638298dc66ea ("xhci: Fix spurious wakeups after S5 on Haswell")
It was later limited to only concern HP machines as it caused
regression on some machines, see both bug and commit:
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=66171
commit 6962d914f317 ("xhci: Limit the spurious wakeup fix only to HP machines")
Later it was discovered that the powering on after shutdown
was limited to LynxPoint-LP (Haswell-ULT) and that some non-LP HP
machines suffered from spontaneous resume from S3 (which should
not be related to the SPURIOUS_WAKEUP quirk at all). An attempt
to fix this then removed the SPURIOUS_WAKEUP flag usage completely.
commit b45abacde3d5 ("xhci: no switching back on non-ULT Haswell")
The current understanding is that LynxPoint-LP (Haswell ULT) machines
need the SPURIOUS_WAKEUP quirk, otherwise they will restart, and
plain LynxPoint (Haswell) machines must _not_ have the quirk
set, otherwise they too will restart.
Signed-off-by: Laura Abbott <[email protected]>
Cc: Takashi Iwai <[email protected]>
Cc: Oliver Neukum <[email protected]>
[Added more history to commit message -Mathias]
Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/host/xhci-pci.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 710b2e9..3053933 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -121,6 +121,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
* PPT chipsets.
*/
xhci->quirks |= XHCI_SPURIOUS_REBOOT;
+ xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
}
if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
(pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
--
1.9.1
From: Christian Zander <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit ba2374fd2bf379f933773811fdb06cb6a5445f41 upstream.
In preparation for the installation of a large page, any small page
tables that may still exist in the target IOV address range are
removed. However, if a scatter/gather list entry is large enough to
fit more than one large page, the address space for any subsequent
large pages is not cleared of conflicting small page tables.
This can cause legitimate mapping requests to fail with errors of the
form below, potentially followed by a series of IOMMU faults:
ERROR: DMA PTE for vPFN 0xfde00 already set (to 7f83a4003 not 7e9e00083)
In this example, a 4MiB scatter/gather list entry resulted in the
successful installation of a large page @ vPFN 0xfdc00, followed by
a failed attempt to install another large page @ vPFN 0xfde00, due to
the presence of a pointer to a small page table @ 0x7f83a4000.
To address this problem, compute the number of large pages that fit
into a given scatter/gather list entry, and use it to derive the
last vPFN covered by the large page(s).
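The arithmetic for the example above can be checked with a small standalone
sketch (hedged; plain userspace arithmetic using the values from the error
message, not the driver code):
/* superpage_range.c - range covered by a 4MiB sg entry with 2MiB pages */
#include <stdio.h>

int main(void)
{
        unsigned long iov_pfn = 0xfdc00;        /* first vPFN of the entry  */
        unsigned long sg_res = 4UL * 1024 * 1024 / 4096; /* 1024 small pages */
        unsigned long lvl_pages = 512;          /* 4KiB pages per 2MiB page */

        unsigned long nr_superpages = sg_res / lvl_pages;          /* 2 */
        unsigned long end_pfn = iov_pfn + nr_superpages * lvl_pages - 1;

        /* the old code only cleared up to iov_pfn + lvl_pages - 1 (0xfddff),
         * leaving stale small page tables at 0xfde00, as in the error log */
        printf("clear/free vPFN range: 0x%lx - 0x%lx\n", iov_pfn, end_pfn);
        return 0;
}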
Signed-off-by: Christian Zander <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
[bwh: Backported to 3.2:
- Add the lvl_pages variable, added by an earlier commit upstream
- Also change arguments to dma_pte_clear_range(), which is called by
dma_pte_free_pagetable() upstream]
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/iommu/intel-iommu.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 28af276..bd400f2 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1827,13 +1827,20 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
return -ENOMEM;
/* It is large page*/
if (largepage_lvl > 1) {
+ unsigned long nr_superpages, end_pfn, lvl_pages;
+
pteval |= DMA_PTE_LARGE_PAGE;
- /* Ensure that old small page tables are removed to make room
- for superpage, if they exist. */
- dma_pte_clear_range(domain, iov_pfn,
- iov_pfn + lvl_to_nr_pages(largepage_lvl) - 1);
- dma_pte_free_pagetable(domain, iov_pfn,
- iov_pfn + lvl_to_nr_pages(largepage_lvl) - 1);
+ lvl_pages = lvl_to_nr_pages(largepage_lvl);
+
+ nr_superpages = sg_res / lvl_pages;
+ end_pfn = iov_pfn + nr_superpages * lvl_pages - 1;
+
+ /*
+ * Ensure that old small page tables are
+ * removed to make room for superpage(s).
+ */
+ dma_pte_clear_range(domain, iov_pfn, end_pfn);
+ dma_pte_free_pagetable(domain, iov_pfn, end_pfn);
} else {
pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
}
--
1.9.1
From: Ilia Mirkin <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 2a6c521bb41ce862e43db46f52e7681d33e8d771 upstream.
On nv50+, we restrict the valid domains to just the one where the buffer
was originally created. However after the buffer is evicted to system
memory, we might move it back to a different domain that was not
originally valid. When sharing the buffer and retrieving its GEM_INFO
data, we still want the domain that will be valid for this buffer in a
pushbuf, not the one where it currently happens to be.
This resolves fdo#92504 and several others. These are due to suspend
evicting all buffers, making it more likely that they temporarily end up
in the wrong place.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92504
Signed-off-by: Ilia Mirkin <[email protected]>
Signed-off-by: Ben Skeggs <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/gpu/drm/nouveau/nouveau_gem.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 2f46bbf..b242534 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -172,11 +172,12 @@ nouveau_gem_info(struct drm_file *file_priv, struct drm_gem_object *gem,
struct nouveau_bo *nvbo = nouveau_gem_object(gem);
struct nouveau_vma *vma;
- if (nvbo->bo.mem.mem_type == TTM_PL_TT)
+ if (is_power_of_2(nvbo->valid_domains))
+ rep->domain = nvbo->valid_domains;
+ else if (nvbo->bo.mem.mem_type == TTM_PL_TT)
rep->domain = NOUVEAU_GEM_DOMAIN_GART;
else
rep->domain = NOUVEAU_GEM_DOMAIN_VRAM;
-
rep->offset = nvbo->bo.offset;
if (fpriv->vm) {
vma = nouveau_bo_vma_find(nvbo, fpriv->vm);
--
1.9.1
From: Charles Keepax <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 97aff2c03a1e4d343266adadb52313613efb027f upstream.
There are 24 EQ registers not 25, I suspect this bug came about because
the registers start at EQ1 not zero. The bug is relatively harmless as
the extra register written is an unused one.
Signed-off-by: Charles Keepax <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
include/sound/wm8904.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/sound/wm8904.h b/include/sound/wm8904.h
index 898be3a..6d8f8fb 100644
--- a/include/sound/wm8904.h
+++ b/include/sound/wm8904.h
@@ -119,7 +119,7 @@
#define WM8904_MIC_REGS 2
#define WM8904_GPIO_REGS 4
#define WM8904_DRC_REGS 4
-#define WM8904_EQ_REGS 25
+#define WM8904_EQ_REGS 24
/**
* DRC configurations are specified with a label and a set of register
--
1.9.1
From: Jan Kara <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 296291cdd1629c308114504b850dc343eabc2782 upstream.
Currently, the simple program below issues a sendfile(2) system call which
takes about 62 days to complete in my test KVM instance.
int fd;
off_t off = 0;
fd = open("file", O_RDWR | O_TRUNC | O_SYNC | O_CREAT, 0644);
ftruncate(fd, 2);
lseek(fd, 0, SEEK_END);
sendfile(fd, fd, &off, 0xfffffff);
Now, you should not ask the kernel to do stupid stuff like copying 256MB in
2-byte chunks and calling fsync(2) after each chunk, but if you do, the
sysadmin should have a way to stop you.
We actually do have a check for fatal_signal_pending() in
generic_perform_write() which triggers in this path; however, because we
always succeed in writing something before the check is done, we return a
value > 0 from generic_perform_write() and thus the information about the
signal gets lost.
Fix the problem by doing the signal check before writing anything. That
way generic_perform_write() returns -EINTR, the error gets propagated up
and the sendfile loop terminates early.
Signed-off-by: Jan Kara <[email protected]>
Reported-by: Dmitry Vyukov <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
mm/filemap.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 79c4b2b..448f9ca 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2402,6 +2402,11 @@ again:
break;
}
+ if (fatal_signal_pending(current)) {
+ status = -EINTR;
+ break;
+ }
+
status = a_ops->write_begin(file, mapping, pos, bytes, flags,
&page, &fsdata);
if (unlikely(status))
@@ -2442,10 +2447,6 @@ again:
written += copied;
balance_dirty_pages_ratelimited(mapping);
- if (fatal_signal_pending(current)) {
- status = -EINTR;
- break;
- }
} while (iov_iter_count(i));
return written ? written : status;
--
1.9.1
From: Dāvis Mosāns <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 2280521719e81919283b82902ac24058f87dfc1b upstream.
When pci_pool_alloc() fails in mvs_task_prep(), task->lldd_task stays
NULL, but it is later used in mvs_abort_task() as the slot which is passed
to mvs_slot_task_free(), causing a NULL pointer dereference.
Just return from mvs_slot_task_free() when it is passed a NULL slot.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=101891
Signed-off-by: Dāvis Mosāns <[email protected]>
Reviewed-by: Tomas Henzl <[email protected]>
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/scsi/mvsas/mv_sas.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
index dbb8edf..06da698 100644
--- a/drivers/scsi/mvsas/mv_sas.c
+++ b/drivers/scsi/mvsas/mv_sas.c
@@ -984,6 +984,8 @@ static void mvs_slot_free(struct mvs_info *mvi, u32 rx_desc)
static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
struct mvs_slot_info *slot, u32 slot_idx)
{
+ if (!slot)
+ return;
if (!slot->task)
return;
if (!sas_protocol_ata(task->task_proto))
--
1.9.1
From: Ben Hutchings <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 5377adb092664d336ac212499961cac5e8728794 upstream.
usb_parse_ss_endpoint_companion() now decodes the burst multiplier
correctly in order to check that it's <= 3, but still uses the wrong
expression when warning that it's > 3.
Fixes: ff30cbc8da42 ("usb: Use the USB_SS_MULT() macro to get the ...")
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/core/config.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
index 6baa836..bfc9b69 100644
--- a/drivers/usb/core/config.c
+++ b/drivers/usb/core/config.c
@@ -117,7 +117,8 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
USB_SS_MULT(desc->bmAttributes) > 3) {
dev_warn(ddev, "Isoc endpoint has Mult of %d in "
"config %d interface %d altsetting %d ep %d: "
- "setting to 3\n", desc->bmAttributes + 1,
+ "setting to 3\n",
+ USB_SS_MULT(desc->bmAttributes),
cfgno, inum, asnum, ep->desc.bEndpointAddress);
ep->ss_ep_comp.bmAttributes = 2;
}
--
1.9.1
From: Christophe Leroy <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 0ff28d9f4674d781e492bcff6f32f0fe48cf0fed upstream.
Using sendfile with the small program below to get MD5 sums of some files,
it appears that big files (over 64 kbytes on a system with 4k pages) get a
wrong MD5 sum, while small files get the correct sum.
The program uses sendfile() to send a file to an AF_ALG socket
for hashing.
/* md5sum2.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <linux/if_alg.h>
int main(int argc, char **argv)
{
        int sk = socket(AF_ALG, SOCK_SEQPACKET, 0);
        struct stat st;
        struct sockaddr_alg sa = {
                .salg_family = AF_ALG,
                .salg_type = "hash",
                .salg_name = "md5",
        };
        int n;

        bind(sk, (struct sockaddr*)&sa, sizeof(sa));

        for (n = 1; n < argc; n++) {
                int size;
                int offset = 0;
                char buf[4096];
                int fd;
                int sko;
                int i;

                fd = open(argv[n], O_RDONLY);
                sko = accept(sk, NULL, 0);
                fstat(fd, &st);
                size = st.st_size;
                sendfile(sko, fd, &offset, size);
                size = read(sko, buf, sizeof(buf));
                for (i = 0; i < size; i++)
                        printf("%2.2x", buf[i]);
                printf(" %s\n", argv[n]);
                close(fd);
                close(sko);
        }
        exit(0);
}
The test below is done using official Linux patch files. The first result
is from a software-based md5sum; the second is from the program above.
root@vgoip:~# ls -l patch-3.6.*
-rw-r--r-- 1 root root 64011 Aug 24 12:01 patch-3.6.2.gz
-rw-r--r-- 1 root root 94131 Aug 24 12:01 patch-3.6.3.gz
root@vgoip:~# md5sum patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz
root@vgoip:~# ./md5sum2 patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
5fd77b24e68bb24dcc72d6e57c64790e patch-3.6.3.gz
After investigation, it appears that sendfile() sends the files in blocks
of 64 kbytes (16 times PAGE_SIZE). The problem is that at the end of each
block the SPLICE_F_MORE flag is missing, so the hashing operation
is reset as if it were the end of the file.
This patch adds SPLICE_F_MORE to the flags when more data is pending.
With the patch applied, we get the correct sums:
root@vgoip:~# md5sum patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz
root@vgoip:~# ./md5sum2 patch-3.6.*
b3ffb9848196846f31b2ff133d2d6443 patch-3.6.2.gz
c5e8f687878457db77cb7158c38a7e43 patch-3.6.3.gz
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Cc: Ben Hutchings <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
fs/splice.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/fs/splice.c b/fs/splice.c
index 67c5210..2864177 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -1165,7 +1165,7 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
long ret, bytes;
umode_t i_mode;
size_t len;
- int i, flags;
+ int i, flags, more;
/*
* We require the input being a regular file, as we don't want to
@@ -1208,6 +1208,7 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
* Don't block on output, we have to drain the direct pipe.
*/
sd->flags &= ~SPLICE_F_NONBLOCK;
+ more = sd->flags & SPLICE_F_MORE;
while (len) {
size_t read_len;
@@ -1221,6 +1222,15 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
sd->total_len = read_len;
/*
+ * If more data is pending, set SPLICE_F_MORE
+ * If this is the last data and SPLICE_F_MORE was not set
+ * initially, clears it.
+ */
+ if (read_len < len)
+ sd->flags |= SPLICE_F_MORE;
+ else if (!more)
+ sd->flags &= ~SPLICE_F_MORE;
+ /*
* NOTE: nonblocking mode only applies to the input. We
* must not do the output in nonblocking mode as then we
* could get stuck data in the internal pipe:
--
1.9.1
From: Ben Hutchings <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
Quoting the RHEL advisory:
> It was found that the fix for CVE-2015-1805 incorrectly kept buffer
> offset and buffer length in sync on a failed atomic read, potentially
> resulting in a pipe buffer state corruption. A local, unprivileged user
> could use this flaw to crash the system or leak kernel memory to user
> space. (CVE-2016-0774, Moderate)
The same flawed fix was applied to stable branches from 2.6.32.y to
3.14.y inclusive, and I was able to reproduce the issue on 3.2.y.
We need to give pipe_iov_copy_to_user() a separate offset variable
and only update the buffer offset if it succeeds.
References: https://rhn.redhat.com/errata/RHSA-2016-0103.html
Signed-off-by: Ben Hutchings <[email protected]>
Cc: Jeffrey Vander Stoep <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
fs/pipe.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/pipe.c b/fs/pipe.c
index abfb935..6049235 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -390,6 +390,7 @@ pipe_read(struct kiocb *iocb, const struct iovec *_iov,
void *addr;
size_t chars = buf->len, remaining;
int error, atomic;
+ int offset;
if (chars > total_len)
chars = total_len;
@@ -403,9 +404,10 @@ pipe_read(struct kiocb *iocb, const struct iovec *_iov,
atomic = !iov_fault_in_pages_write(iov, chars);
remaining = chars;
+ offset = buf->offset;
redo:
addr = ops->map(pipe, buf, atomic);
- error = pipe_iov_copy_to_user(iov, addr, &buf->offset,
+ error = pipe_iov_copy_to_user(iov, addr, &offset,
&remaining, atomic);
ops->unmap(pipe, buf, addr);
if (unlikely(error)) {
@@ -421,6 +423,7 @@ redo:
break;
}
ret += chars;
+ buf->offset += chars;
buf->len -= chars;
/* Was it a packet buffer? Clean up and exit */
--
1.9.1
From: Nate Dailey <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit ccfc7bf1f09d6190ef86693ddc761d5fe3fa47cb upstream.
If raid1d is handling a mix of read and write errors, handle_read_error's
call to freeze_array can get stuck.
This can happen because, though the bio_end_io_list is initially drained,
writes can be added to it via handle_write_finished as the retry_list
is processed. These writes contribute to nr_pending but are not included
in nr_queued.
If a later entry on the retry_list triggers a call to handle_read_error,
freeze_array hangs waiting for nr_pending == nr_queued+extra. The writes
on the bio_end_io_list aren't included in nr_queued so the condition will
never be satisfied.
To prevent the hang, include bio_end_io_list writes in nr_queued.
There's probably a better way to handle decrementing nr_queued, but this
seemed like the safest way to avoid breaking surrounding code.
I'm happy to supply the script I used to repro this hang.
Fixes: 55ce74d4bfe1b(md/raid1: ensure device failure recorded before write request returns.)
Signed-off-by: Nate Dailey <[email protected]>
Signed-off-by: Shaohua Li <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/raid1.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 32d1f1a..a548eed 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2088,6 +2088,7 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
if (fail) {
spin_lock_irq(&conf->device_lock);
list_add(&r1_bio->retry_list, &conf->bio_end_io_list);
+ conf->nr_queued++;
spin_unlock_irq(&conf->device_lock);
md_wakeup_thread(conf->mddev->thread);
} else {
@@ -2202,8 +2203,10 @@ static void raid1d(struct mddev *mddev)
LIST_HEAD(tmp);
spin_lock_irqsave(&conf->device_lock, flags);
if (!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
- list_add(&tmp, &conf->bio_end_io_list);
- list_del_init(&conf->bio_end_io_list);
+ while (!list_empty(&conf->bio_end_io_list)) {
+ list_move(conf->bio_end_io_list.prev, &tmp);
+ conf->nr_queued--;
+ }
}
spin_unlock_irqrestore(&conf->device_lock, flags);
while (!list_empty(&tmp)) {
--
1.9.1
From: Mike Snitzer <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 4dcb8b57df3593dcb20481d9d6cf79d1dc1534be upstream.
btree_split_beneath()'s error path had an outstanding FIXME that speaks
directly to the potential for _not_ cleaning up a previously allocated
bufio-backed block.
Fix this by releasing the previously allocated bufio block using
unlock_block().
Reported-by: Mikulas Patocka <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Acked-by: Joe Thornber <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/persistent-data/dm-btree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
index dddd5a4..be86d59 100644
--- a/drivers/md/persistent-data/dm-btree.c
+++ b/drivers/md/persistent-data/dm-btree.c
@@ -502,7 +502,7 @@ static int btree_split_beneath(struct shadow_spine *s, uint64_t key)
r = new_block(s->info, &right);
if (r < 0) {
- /* FIXME: put left */
+ unlock_block(s->info, left);
return r;
}
--
1.9.1
From: Joerg Roedel <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit cbf3ccd09d683abf1cacd36e3640872ee912d99b upstream.
During device assignment/deassignment the flags in the DTE
get lost, which might cause spurious faults, for example
when the device tries to access the system management range.
Fix this by not clearing the flags with the rest of the DTE.
Reported-by: G. Richard Bellamy <[email protected]>
Tested-by: G. Richard Bellamy <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/iommu/amd_iommu.c | 4 ++--
drivers/iommu/amd_iommu_types.h | 1 +
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index a55353c3..30e6d5f 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1931,8 +1931,8 @@ static void set_dte_entry(u16 devid, struct protection_domain *domain, bool ats)
static void clear_dte_entry(u16 devid)
{
/* remove entry from the device table seen by the hardware */
- amd_iommu_dev_table[devid].data[0] = IOMMU_PTE_P | IOMMU_PTE_TV;
- amd_iommu_dev_table[devid].data[1] = 0;
+ amd_iommu_dev_table[devid].data[0] = IOMMU_PTE_P | IOMMU_PTE_TV;
+ amd_iommu_dev_table[devid].data[1] &= DTE_FLAG_MASK;
amd_iommu_apply_erratum_63(devid);
}
diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index c4ffacb..42f2090 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -277,6 +277,7 @@
#define IOMMU_PTE_IR (1ULL << 61)
#define IOMMU_PTE_IW (1ULL << 62)
+#define DTE_FLAG_MASK (0x3ffULL << 32)
#define DTE_FLAG_IOTLB (0x01UL << 32)
#define DTE_FLAG_GV (0x01ULL << 55)
#define DTE_GLX_SHIFT (56)
--
1.9.1
From: Mathias Nyman <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 3b4739b8951d650becbcd855d7d6f18ac98a9a85 upstream.
If a host fails to wake up an isochronous SuperSpeed device from U1/U2
in time for an isoch transfer, it will generate a "No ping response error".
The host will then move to the next transfer descriptor.
Handle this case in the same way as missed service errors, tag the
current TD as skipped and handle it on the next transfer event.
Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/host/xhci-ring.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 02c6dc8..f389328 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2340,6 +2340,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
u32 trb_comp_code;
int ret = 0;
int td_num = 0;
+ bool handling_skipped_tds = false;
slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
xdev = xhci->devs[slot_id];
@@ -2473,6 +2474,10 @@ static int handle_tx_event(struct xhci_hcd *xhci,
ep->skip = true;
xhci_dbg(xhci, "Miss service interval error, set skip flag\n");
goto cleanup;
+ case COMP_PING_ERR:
+ ep->skip = true;
+ xhci_dbg(xhci, "No Ping response error, Skip one Isoc TD\n");
+ goto cleanup;
default:
if (xhci_is_vendor_info_code(xhci, trb_comp_code)) {
status = 0;
@@ -2604,13 +2609,18 @@ static int handle_tx_event(struct xhci_hcd *xhci,
ep, &status);
cleanup:
+
+
+ handling_skipped_tds = ep->skip &&
+ trb_comp_code != COMP_MISSED_INT &&
+ trb_comp_code != COMP_PING_ERR;
+
/*
- * Do not update event ring dequeue pointer if ep->skip is set.
- * Will roll back to continue process missed tds.
+ * Do not update event ring dequeue pointer if we're in a loop
+ * processing missed tds.
*/
- if (trb_comp_code == COMP_MISSED_INT || !ep->skip) {
+ if (!handling_skipped_tds)
inc_deq(xhci, xhci->event_ring);
- }
if (ret) {
urb = td->urb;
@@ -2645,7 +2655,7 @@ cleanup:
* Process them as short transfer until reach the td pointed by
* the event.
*/
- } while (ep->skip && trb_comp_code != COMP_MISSED_INT);
+ } while (handling_skipped_tds);
return 0;
}
--
1.9.1
From: Herbert Xu <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 3fc89adb9fa4beff31374a4bf50b3d099d88ae83 upstream.
Currently a number of Crypto API operations may fail when a signal
occurs. This causes nasty problems as the callers of those operations
are often not in a good position to restart the operation.
In fact there is currently no need for those operations to be
interrupted by user signals at all. All we need is for them to
be killable.
This patch replaces the relevant calls of signal_pending with
fatal_signal_pending, and wait_for_completion_interruptible with
wait_for_completion_killable, respectively.
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
crypto/ablkcipher.c | 2 +-
crypto/algapi.c | 2 +-
crypto/api.c | 6 +++---
crypto/crypto_user.c | 2 +-
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
index 45fe410..4a9c499 100644
--- a/crypto/ablkcipher.c
+++ b/crypto/ablkcipher.c
@@ -700,7 +700,7 @@ struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
err:
if (err != -EAGAIN)
break;
- if (signal_pending(current)) {
+ if (fatal_signal_pending(current)) {
err = -EINTR;
break;
}
diff --git a/crypto/algapi.c b/crypto/algapi.c
index b4c046c..7bae610 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -342,7 +342,7 @@ static void crypto_wait_for_test(struct crypto_larval *larval)
crypto_alg_tested(larval->alg.cra_driver_name, 0);
}
- err = wait_for_completion_interruptible(&larval->completion);
+ err = wait_for_completion_killable(&larval->completion);
WARN_ON(err);
out:
diff --git a/crypto/api.c b/crypto/api.c
index 4f98dd5..c9c2f47 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -178,7 +178,7 @@ static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
struct crypto_larval *larval = (void *)alg;
long timeout;
- timeout = wait_for_completion_interruptible_timeout(
+ timeout = wait_for_completion_killable_timeout(
&larval->completion, 60 * HZ);
alg = larval->adult;
@@ -441,7 +441,7 @@ struct crypto_tfm *crypto_alloc_base(const char *alg_name, u32 type, u32 mask)
err:
if (err != -EAGAIN)
break;
- if (signal_pending(current)) {
+ if (fatal_signal_pending(current)) {
err = -EINTR;
break;
}
@@ -558,7 +558,7 @@ void *crypto_alloc_tfm(const char *alg_name,
err:
if (err != -EAGAIN)
break;
- if (signal_pending(current)) {
+ if (fatal_signal_pending(current)) {
err = -EINTR;
break;
}
diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c
index 910497b..0c19d03 100644
--- a/crypto/crypto_user.c
+++ b/crypto/crypto_user.c
@@ -350,7 +350,7 @@ static struct crypto_alg *crypto_user_aead_alg(const char *name, u32 type,
err = PTR_ERR(alg);
if (err != -EAGAIN)
break;
- if (signal_pending(current)) {
+ if (fatal_signal_pending(current)) {
err = -EINTR;
break;
}
--
1.9.1
From: Cathy Avery <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit a54c8f0f2d7df525ff997e2afe71866a1a013064 upstream.
xen-blkfront will crash if the call to talk_to_blkback()
in blkback_changed() (XenbusStateInitWait) returns an error:
the driver data is freed and info is set to NULL. Later, during
the close process triggered by talk_to_blkback's call to xenbus_dev_fatal(),
the null pointer is passed to and dereferenced in blkfront_closing().
Signed-off-by: Cathy Avery <[email protected]>
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/block/xen-blkfront.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index a81cdd7..16477b2 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1314,7 +1314,8 @@ static void blkback_changed(struct xenbus_device *dev,
break;
/* Missed the backend's Closing state -- fallthrough */
case XenbusStateClosing:
- blkfront_closing(info);
+ if (info)
+ blkfront_closing(info);
break;
}
}
--
1.9.1
From: Jann Horn <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 0c55627167870255158db1cde0d28366f91c8872 upstream.
This is mostly a hardening fix, given that write-only access to other
users' ttys is usually only given through setgid tty executables.
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/tty/tty_io.c | 31 +++++++++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 5f0b4a4..3ea4150 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -2018,8 +2018,24 @@ retry_open:
if (!noctty &&
current->signal->leader &&
!current->signal->tty &&
- tty->session == NULL)
- __proc_set_tty(current, tty);
+ tty->session == NULL) {
+ /*
+ * Don't let a process that only has write access to the tty
+ * obtain the privileges associated with having a tty as
+ * controlling terminal (being able to reopen it with full
+ * access through /dev/tty, being able to perform pushback).
+ * Many distributions set the group of all ttys to "tty" and
+ * grant write-only access to all terminals for setgid tty
+ * binaries, which should not imply full privileges on all ttys.
+ *
+ * This could theoretically break old code that performs open()
+ * on a write-only file descriptor. In that case, it might be
+ * necessary to also permit this if
+ * inode_permission(inode, MAY_READ) == 0.
+ */
+ if (filp->f_mode & FMODE_READ)
+ __proc_set_tty(current, tty);
+ }
spin_unlock_irq(¤t->sighand->siglock);
tty_unlock();
mutex_unlock(&tty_mutex);
@@ -2308,7 +2324,7 @@ static int fionbio(struct file *file, int __user *p)
* Takes ->siglock() when updating signal->tty
*/
-static int tiocsctty(struct tty_struct *tty, int arg)
+static int tiocsctty(struct tty_struct *tty, struct file *file, int arg)
{
int ret = 0;
if (current->signal->leader && (task_session(current) == tty->session))
@@ -2341,6 +2357,13 @@ static int tiocsctty(struct tty_struct *tty, int arg)
goto unlock;
}
}
+
+ /* See the comment in tty_open(). */
+ if ((file->f_mode & FMODE_READ) == 0 && !capable(CAP_SYS_ADMIN)) {
+ ret = -EPERM;
+ goto unlock;
+ }
+
proc_set_tty(current, tty);
unlock:
mutex_unlock(&tty_mutex);
@@ -2695,7 +2718,7 @@ long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
no_tty();
return 0;
case TIOCSCTTY:
- return tiocsctty(tty, arg);
+ return tiocsctty(tty, file, arg);
case TIOCGPGRP:
return tiocgpgrp(tty, real_tty, p);
case TIOCSPGRP:
--
1.9.1
From: Vincent Palatin <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 72194739f54607bbf8cfded159627a2015381557 upstream.
Add a device quirk for the Logitech PTZ Pro Camera and its sibling the
ConferenceCam CC3000e Camera.
This fixes the failed camera enumeration on some boots, particularly on
machines with a fast CPU.
Tested by connecting a Logitech PTZ Pro Camera to a machine with a
Haswell Core i7-4600U CPU @ 2.10GHz, and doing thousands of reboot cycles
while recording the kernel logs and taking a camera picture after each boot.
Before the patch, more than 7% of the boots showed enumeration transfer
failures, and in a few of them the kernel gave up before actually
enumerating the webcam. After the patch, the enumeration has been correct
on every reboot.
Signed-off-by: Vincent Palatin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/core/quirks.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
index 8d75403..fd8e60e 100644
--- a/drivers/usb/core/quirks.c
+++ b/drivers/usb/core/quirks.c
@@ -49,6 +49,13 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Microsoft LifeCam-VX700 v2.0 */
{ USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME },
+ /* Logitech ConferenceCam CC3000e */
+ { USB_DEVICE(0x046d, 0x0847), .driver_info = USB_QUIRK_DELAY_INIT },
+ { USB_DEVICE(0x046d, 0x0848), .driver_info = USB_QUIRK_DELAY_INIT },
+
+ /* Logitech PTZ Pro Camera */
+ { USB_DEVICE(0x046d, 0x0853), .driver_info = USB_QUIRK_DELAY_INIT },
+
/* Logitech Quickcam Fusion */
{ USB_DEVICE(0x046d, 0x08c1), .driver_info = USB_QUIRK_RESET_RESUME },
--
1.9.1
From: Andy Lutomirski <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit b7a584598aea7ca73140cb87b40319944dd3393f upstream.
On Xen PV, regs->flags doesn't reliably reflect IOPL and the
exit-to-userspace code doesn't change IOPL. We need to context
switch it manually.
I'm doing this without going through paravirt because this is
specific to Xen PV. After the dust settles, we can merge this with
the 32-bit code, tidy up the iopl syscall implementation, and remove
the set_iopl pvop entirely.
Fixes XSA-171.
Reviewed-by: Jan Beulich <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
Cc: Andrew Cooper <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: David Vrabel <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jan Beulich <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/693c3bd7aeb4d3c27c92c622b7d0f554a458173c.1458162709.git.luto@kernel.org
Signed-off-by: Ingo Molnar <[email protected]>
[ kamal: backport to 3.19-stable: no X86_FEATURE_XENPV so just call
xen_pv_domain() directly ]
Acked-by: Andy Lutomirski <[email protected]>
Signed-off-by: Kamal Mostafa <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/x86/include/asm/xen/hypervisor.h | 2 ++
arch/x86/kernel/process_64.c | 12 ++++++++++++
arch/x86/xen/enlighten.c | 2 +-
3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h
index 66d0fff..fc500f9 100644
--- a/arch/x86/include/asm/xen/hypervisor.h
+++ b/arch/x86/include/asm/xen/hypervisor.h
@@ -72,4 +72,6 @@ static inline bool xen_x2apic_para_available(void)
}
#endif
+extern void xen_set_iopl_mask(unsigned mask);
+
#endif /* _ASM_X86_XEN_HYPERVISOR_H */
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index f6698ad..9f341bb 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -49,6 +49,7 @@
#include <asm/syscalls.h>
#include <asm/debugreg.h>
#include <asm/switch_to.h>
+#include <asm/xen/hypervisor.h>
asmlinkage extern void ret_from_fork(void);
@@ -419,6 +420,17 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
__switch_to_xtra(prev_p, next_p, tss);
+#ifdef CONFIG_XEN
+ /*
+ * On Xen PV, IOPL bits in pt_regs->flags have no effect, and
+ * current_pt_regs()->flags may not match the current task's
+ * intended IOPL. We need to switch it manually.
+ */
+ if (unlikely(xen_pv_domain() &&
+ prev->iopl != next->iopl))
+ xen_set_iopl_mask(next->iopl);
+#endif
+
return prev_p;
}
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 8ade106..761c086 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -860,7 +860,7 @@ static void xen_load_sp0(struct tss_struct *tss,
xen_mc_issue(PARAVIRT_LAZY_CPU);
}
-static void xen_set_iopl_mask(unsigned mask)
+void xen_set_iopl_mask(unsigned mask)
{
struct physdev_set_iopl set_iopl;
--
1.9.1
From: John Stultz <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 67dfae0cd72fec5cd158b6e5fb1647b7dbe0834c upstream.
This patch fixes one case where abs() was being used with 64-bit
nanosecond values, where the result may be capped at 32 bits.
This could potentially cause watchdog false negatives on 32-bit
systems, so address the issue by using abs64().
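To see the truncation in userspace terms (a hedged sketch, not the kernel
code): narrowing a 64-bit nanosecond delta to 32 bits before taking the
absolute value can make a ~4.3 second deviation look like a few tens of
microseconds, hiding it from the watchdog:
/* abs_trunc.c - why abs() on a 64-bit delta is wrong on 32-bit systems */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        /* a clocksource deviation of ~4.295s, just over 2^32 ns */
        int64_t delta_ns = 4295000000LL;

        /* what a 32-bit abs() effectively does: truncate, then negate */
        int32_t narrowed = (int32_t)delta_ns;
        int64_t capped = narrowed < 0 ? -(int64_t)narrowed : narrowed;

        /* what abs64() does: keep all 64 bits */
        int64_t full = delta_ns < 0 ? -delta_ns : delta_ns;

        printf("capped abs: %" PRId64 " ns, abs64: %" PRId64 " ns\n",
               capped, full);
        return 0;
}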
Signed-off-by: John Stultz <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Ingo Molnar <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
kernel/time/clocksource.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index c958338..b3f5407 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -291,7 +291,7 @@ static void clocksource_watchdog(unsigned long data)
continue;
/* Check the deviation from the watchdog clocksource. */
- if ((abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD)) {
+ if ((abs64(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD)) {
clocksource_unstable(cs, cs_nsec - wd_nsec);
continue;
}
--
1.9.1
From: shengyong <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 7c7feb2ebfc9c0552c51f0c050db1d1a004faac5 upstream.
UBI: attaching mtd1 to ubi0
UBI: scanning is finished
UBI error: init_volumes: not enough PEBs, required 706, available 686
UBI error: ubi_wl_init: no enough physical eraseblocks (-20, need 1)
UBI error: ubi_attach_mtd_dev: failed to attach mtd1, error -12 <= NOT ENOMEM
UBI error: ubi_init: cannot attach mtd1
If available PEBs are not enough when initializing volumes, return -ENOSPC
directly. If available PEBs are not enough when initializing WL, return
-ENOSPC instead of -ENOMEM.
Signed-off-by: Sheng Yong <[email protected]>
Signed-off-by: Richard Weinberger <[email protected]>
Reviewed-by: David Gstir <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/mtd/ubi/vtbl.c | 1 +
drivers/mtd/ubi/wl.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
index c015fc0..4105a50 100644
--- a/drivers/mtd/ubi/vtbl.c
+++ b/drivers/mtd/ubi/vtbl.c
@@ -656,6 +656,7 @@ static int init_volumes(struct ubi_device *ubi, const struct ubi_scan_info *si,
if (ubi->corr_peb_count)
ubi_err("%d PEBs are corrupted and not used",
ubi->corr_peb_count);
+ return -ENOSPC;
}
ubi->rsvd_pebs += reserved_pebs;
ubi->avail_pebs -= reserved_pebs;
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 284d144..3e42cd6 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -1513,6 +1513,7 @@ int ubi_wl_init_scan(struct ubi_device *ubi, struct ubi_scan_info *si)
if (ubi->corr_peb_count)
ubi_err("%d PEBs are corrupted and not used",
ubi->corr_peb_count);
+ err = -ENOSPC;
goto out_free;
}
ubi->avail_pebs -= WL_RESERVED_PEBS;
--
1.9.1
From: Richard Weinberger <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 281fda27673f833a01d516658a64d22a32c8e072 upstream.
Make sure that data_size is less than LEB size.
Otherwise a handcrafted UBI image is able to trigger
an out of bounds memory access in ubi_compare_lebs().
Signed-off-by: Richard Weinberger <[email protected]>
Reviewed-by: David Gstir <[email protected]>
[lizf: Backported to 3.4: use dbg_err() instead of ubi_err()]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/mtd/ubi/io.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/mtd/ubi/io.c b/drivers/mtd/ubi/io.c
index 43f1a00..8f793ea 100644
--- a/drivers/mtd/ubi/io.c
+++ b/drivers/mtd/ubi/io.c
@@ -942,6 +942,11 @@ static int validate_vid_hdr(const struct ubi_device *ubi,
goto bad;
}
+ if (data_size > ubi->leb_size) {
+ dbg_err("bad data_size");
+ goto bad;
+ }
+
if (vol_type == UBI_VID_STATIC) {
/*
* Although from high-level point of view static volumes may
--
1.9.1
From: Andreas Schwab <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 8474ba74193d302e8340dddd1e16c85cc4b98caf upstream.
Make sure the compiler does not modify arguments of syscall functions.
This can happen if the compiler generates a tailcall to another
function. For example, without asmlinkage_protect sys_openat is compiled
into this function:
sys_openat:
clr.l %d0
move.w 18(%sp),%d0
move.l %d0,16(%sp)
jbra do_sys_open
Note how the fourth argument is modified in place, modifying the register
%d4 that gets restored from this stack slot when the function returns to
user-space. The caller may expect the register to be unmodified across
system calls.
Signed-off-by: Andreas Schwab <[email protected]>
Signed-off-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/m68k/include/asm/linkage.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/arch/m68k/include/asm/linkage.h b/arch/m68k/include/asm/linkage.h
index 5a822bb..066e74f 100644
--- a/arch/m68k/include/asm/linkage.h
+++ b/arch/m68k/include/asm/linkage.h
@@ -4,4 +4,34 @@
#define __ALIGN .align 4
#define __ALIGN_STR ".align 4"
+/*
+ * Make sure the compiler doesn't do anything stupid with the
+ * arguments on the stack - they are owned by the *caller*, not
+ * the callee. This just fools gcc into not spilling into them,
+ * and keeps it from doing tailcall recursion and/or using the
+ * stack slots for temporaries, since they are live and "used"
+ * all the way to the end of the function.
+ */
+#define asmlinkage_protect(n, ret, args...) \
+ __asmlinkage_protect##n(ret, ##args)
+#define __asmlinkage_protect_n(ret, args...) \
+ __asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), ##args)
+#define __asmlinkage_protect0(ret) \
+ __asmlinkage_protect_n(ret)
+#define __asmlinkage_protect1(ret, arg1) \
+ __asmlinkage_protect_n(ret, "m" (arg1))
+#define __asmlinkage_protect2(ret, arg1, arg2) \
+ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2))
+#define __asmlinkage_protect3(ret, arg1, arg2, arg3) \
+ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3))
+#define __asmlinkage_protect4(ret, arg1, arg2, arg3, arg4) \
+ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
+ "m" (arg4))
+#define __asmlinkage_protect5(ret, arg1, arg2, arg3, arg4, arg5) \
+ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
+ "m" (arg4), "m" (arg5))
+#define __asmlinkage_protect6(ret, arg1, arg2, arg3, arg4, arg5, arg6) \
+ __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
+ "m" (arg4), "m" (arg5), "m" (arg6))
+
#endif
--
1.9.1
From: Felix Fietkau <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 029cd0370241641eb70235d205aa0b90c84dce44 upstream.
ath9k inserts padding between the 802.11 header and the data area (to
align it). Since it didn't declare this extra required headroom, this
led to some nasty issues like randomly dropped packets in some setups.
Signed-off-by: Felix Fietkau <[email protected]>
Signed-off-by: Kalle Valo <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/net/wireless/ath/ath9k/init.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
index cac5b25..37534a0 100644
--- a/drivers/net/wireless/ath/ath9k/init.c
+++ b/drivers/net/wireless/ath/ath9k/init.c
@@ -683,6 +683,7 @@ void ath9k_set_hw_capab(struct ath_softc *sc, struct ieee80211_hw *hw)
hw->max_rate_tries = 10;
hw->sta_data_size = sizeof(struct ath_node);
hw->vif_data_size = sizeof(struct ath_vif);
+ hw->extra_tx_headroom = 4;
hw->wiphy->available_antennas_rx = BIT(ah->caps.max_rxchains) - 1;
hw->wiphy->available_antennas_tx = BIT(ah->caps.max_txchains) - 1;
--
1.9.1
From: Mathias Nyman <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit dca7794539eff04b786fb6907186989e5eaaa9c2 upstream.
Some changes between the xhci 0.96 and xhci 1.0 specifications forced us to
check the hci version in code; some of these checks were implemented as
hci_version == 1.0, which will not work with new xhci 1.1 controllers.
xhci 1.1 behaves similarly to xhci 1.0 in these cases, so change these
checks to hci_version >= 1.0.
Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/host/xhci-mem.c | 6 +++---
drivers/usb/host/xhci-ring.c | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 048cc38..cad4a174 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1493,10 +1493,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
* use Event Data TRBs, and we don't chain in a link TRB on short
* transfers, we're basically dividing by 1.
*
- * xHCI 1.0 specification indicates that the Average TRB Length should
- * be set to 8 for control endpoints.
+ * xHCI 1.0 and 1.1 specification indicates that the Average TRB Length
+ * should be set to 8 for control endpoints.
*/
- if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100)
+ if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100)
ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8));
else
ep_ctx->tx_info |=
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 3cfe7e3..02c6dc8 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -3496,8 +3496,8 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
if (start_cycle == 0)
field |= 0x1;
- /* xHCI 1.0 6.4.1.2.1: Transfer Type field */
- if (xhci->hci_version == 0x100) {
+ /* xHCI 1.0/1.1 6.4.1.2.1: Transfer Type field */
+ if (xhci->hci_version >= 0x100) {
if (urb->transfer_buffer_length > 0) {
if (setup->bRequestType & USB_DIR_IN)
field |= TRB_TX_TYPE(TRB_DATA_IN);
--
1.9.1
From: Mathias Nyman <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit a6809ffd1687b3a8c192960e69add559b9d32649 upstream.
We want to give the command abortion an additional try to stop
the command ring before we completely hose xhci.
Tested-by: Vincent Pelletier <[email protected]>
Signed-off-by: Mathias Nyman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[lizf: Backported to 3.4: call handshake() instead of xhci_handshake()]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/host/xhci-ring.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 5623785..3cfe7e3 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -331,6 +331,15 @@ static int xhci_abort_cmd_ring(struct xhci_hcd *xhci)
ret = handshake(xhci, &xhci->op_regs->cmd_ring,
CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
if (ret < 0) {
+ /* we are about to kill xhci, give it one more chance */
+ xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
+ &xhci->op_regs->cmd_ring);
+ udelay(1000);
+ ret = handshake(xhci, &xhci->op_regs->cmd_ring,
+ CMD_RING_RUNNING, 0, 3 * 1000 * 1000);
+ if (ret == 0)
+ return 0;
+
xhci_err(xhci, "Stopped the command ring failed, "
"maybe the host is dead\n");
xhci->xhc_state |= XHCI_STATE_DYING;
--
1.9.1
From: "Tan, Jui Nee" <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 02bc933ebb59208f42c2e6305b2c17fd306f695d upstream.
On Intel Baytrail, there are cases where the interrupt handler gets called
but no SPI message is captured. The RX FIFO is indeed empty when the RX
timeout pending interrupt (SSSR_TINT) happens.
This happens with the BIOS version where both HSUART and SPI are on the same
IRQ. Both drivers use IRQF_SHARED when calling the request_irq function. When
running two separate and independent SPI and HSUART applications that
generate data traffic on both components, the user will see messages like
the one below on the console:
pxa2xx-spi pxa2xx-spi.0: bad message state in interrupt handler
This commit fixes this by first checking the Receiver Time-out Interrupt;
if it is disabled, the request is ignored and the handler returns without
servicing it.
Signed-off-by: Tan, Jui Nee <[email protected]>
Acked-by: Jarkko Nikula <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/spi/spi-pxa2xx.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
index dc25bee..2ecc2d6 100644
--- a/drivers/spi/spi-pxa2xx.c
+++ b/drivers/spi/spi-pxa2xx.c
@@ -799,6 +799,10 @@ static irqreturn_t ssp_int(int irq, void *dev_id)
if (!(sccr1_reg & SSCR1_TIE))
mask &= ~SSSR_TFS;
+ /* Ignore RX timeout interrupt if it is disabled */
+ if (!(sccr1_reg & SSCR1_TINTE))
+ mask &= ~SSSR_TINT;
+
if (!(status & mask))
return IRQ_NONE;
--
1.9.1
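The fix above applies a rule common to shared-IRQ handlers: a pending status
bit only belongs to this device if the corresponding interrupt source is
actually enabled, otherwise the handler must not claim the interrupt. A
reduced stand-alone sketch of that masking logic (plain C; the register bit
names and positions are illustrative only, not the real pxa2xx-spi
definitions):

  #include <stdint.h>
  #include <stdbool.h>

  #define STATUS_TX_FIFO    (1u << 5)   /* TX service request pending */
  #define STATUS_RX_TIMEOUT (1u << 19)  /* RX timeout pending */
  #define CTRL_TX_IRQ_EN    (1u << 23)  /* TX interrupt enabled */
  #define CTRL_RX_TO_EN     (1u << 19)  /* RX timeout interrupt enabled */

  /* Decide whether a shared interrupt belongs to this device: only status
   * bits whose interrupt source is enabled in the control register count. */
  bool irq_is_ours(uint32_t status, uint32_t ctrl)
  {
          uint32_t mask = STATUS_TX_FIFO | STATUS_RX_TIMEOUT;

          if (!(ctrl & CTRL_TX_IRQ_EN))
                  mask &= ~STATUS_TX_FIFO;      /* TX irq disabled: not ours */
          if (!(ctrl & CTRL_RX_TO_EN))
                  mask &= ~STATUS_RX_TIMEOUT;   /* RX timeout disabled: not ours */

          return (status & mask) != 0;          /* false maps to IRQ_NONE */
  }

In the driver, the equivalent of irq_is_ours() returning false is the
IRQ_NONE return, which lets the other device sharing the line handle the
interrupt.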
From: Mark Brown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 176fc2d5770a0990eebff903ba680d2edd32e718 upstream.
The in-kernel snprintf() will conveniently return the actual length of
the printed string even if not given an output buffer at all, so just do
that rather than relying on the caller to pass in a suitable buffer,
ensuring that we don't need to worry about the output being truncated due
to the size of the buffer passed in.
Reported-by: Rasmus Villemoes <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/base/regmap/regmap-debugfs.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
index d805eb5..023a9d7 100644
--- a/drivers/base/regmap/regmap-debugfs.c
+++ b/drivers/base/regmap/regmap-debugfs.c
@@ -23,8 +23,7 @@ static struct dentry *regmap_debugfs_root;
/* Calculate the length of a fixed format */
static size_t regmap_calc_reg_len(int max_val, char *buf, size_t buf_size)
{
- snprintf(buf, buf_size, "%x", max_val);
- return strlen(buf);
+ return snprintf(NULL, 0, "%x", max_val);
}
static ssize_t regmap_name_read_file(struct file *file,
--
1.9.1
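The length-only snprintf() idiom the patch relies on is standard C behaviour
rather than anything regmap-specific: with a NULL buffer and zero size,
snprintf() performs no output and just reports how many characters the
formatted string would need. A minimal user-space sketch (plain C; max_val is
an arbitrary example value):

  #include <stdio.h>

  int main(void)
  {
          unsigned int max_val = 0x3fff;

          /* NULL buffer + zero size: snprintf() writes nothing and returns
           * the number of characters the formatted string would need. */
          int len = snprintf(NULL, 0, "%x", max_val);

          printf("formatting %#x as hex needs %d characters\n", max_val, len);
          return 0;
  }

This is exactly what the new regmap_calc_reg_len() does, so no scratch buffer
needs to be passed in or sized correctly by the caller.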
From: Paolo Bonzini <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 3afb1121800128aae9f5722e50097fcf1a9d4d88 upstream.
These have roughly the same purpose as the SMRR, which we do not need
to implement in KVM. However, Linux accesses MSR_K8_TSEG_ADDR at
boot, which causes problems when running a Xen dom0 under KVM.
Just return 0, meaning that processor protection of SMRAM is not
in effect.
Reported-by: M A Young <[email protected]>
Acked-by: Borislav Petkov <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/kvm/x86.c | 2 ++
2 files changed, 3 insertions(+)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index e92e1e4..033b8a0 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -166,6 +166,7 @@
/* C1E active bits in int pending message */
#define K8_INTP_C1E_ACTIVE_MASK 0x18000000
#define MSR_K8_TSEG_ADDR 0xc0010112
+#define MSR_K8_TSEG_MASK 0xc0010113
#define K8_MTRRFIXRANGE_DRAM_ENABLE 0x00040000 /* MtrrFixDramEn bit */
#define K8_MTRRFIXRANGE_DRAM_MODIFY 0x00080000 /* MtrrFixDramModEn bit */
#define K8_MTRR_RDMEM_WRMEM_MASK 0x18181818 /* Mask: RdMem|WrMem */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9cc83e2..32a6521 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1914,6 +1914,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
case MSR_IA32_LASTINTFROMIP:
case MSR_IA32_LASTINTTOIP:
case MSR_K8_SYSCFG:
+ case MSR_K8_TSEG_ADDR:
+ case MSR_K8_TSEG_MASK:
case MSR_K7_HWCR:
case MSR_VM_HSAVE_PA:
case MSR_K7_EVNTSEL0:
--
1.9.1
From: Mark Brown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit b763ec17ac762470eec5be8ebcc43e4f8b2c2b82 upstream.
If a read is attempted which is smaller than the line length then we may
underflow the subtraction we're doing with the unsigned size_t type, so
move some of the calculation to be additions on the right-hand side
instead in order to avoid this.
Reported-by: Rasmus Villemoes <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/base/regmap/regmap-debugfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
index 1db1289..d805eb5 100644
--- a/drivers/base/regmap/regmap-debugfs.c
+++ b/drivers/base/regmap/regmap-debugfs.c
@@ -205,7 +205,7 @@ static ssize_t regmap_access_read_file(struct file *file,
/* If we're in the region the user is trying to read */
if (p >= *ppos) {
/* ...but not beyond it */
- if (buf_pos >= count - 1 - tot_len)
+ if (buf_pos + tot_len + 1 >= count)
break;
/* Format the register */
--
1.9.1
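The rearrangement matters because every operand involved is unsigned: when
count is smaller than tot_len + 1, the old expression count - 1 - tot_len
wraps around to a huge value and the "stop before the end of the buffer"
break is never taken. Keeping the arithmetic additive avoids the wrap. A
stand-alone sketch of the two forms (plain C; the values are made up purely
to trigger the wrap):

  #include <stdio.h>
  #include <stddef.h>

  int main(void)
  {
          size_t count = 4, tot_len = 10, buf_pos = 0;

          /* Old form: count - 1 - tot_len wraps to a huge unsigned value,
           * so the comparison is false and the break never fires. */
          printf("old check fires: %d\n", buf_pos >= count - 1 - tot_len);

          /* New form: additions only, nothing wraps for these values. */
          printf("new check fires: %d\n", buf_pos + tot_len + 1 >= count);

          return 0;
  }

Run as-is this prints 0 for the old check and 1 for the new one, i.e. only
the rewritten test notices that the output would not fit in the buffer.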
From: Dan Carpenter <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 9ac0934bbe52290e4e4c2a58ec41cab9b6ca8c96 upstream.
The size here comes from the user via the ioctl; it is a number between
1 and U32_MAX, so the addition here could overflow on 32-bit systems.
Fixes: f453ba046074 ('DRM: add mode setting support')
Signed-off-by: Dan Carpenter <[email protected]>
Reviewed-by: Daniel Stone <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <[email protected]>
---
drivers/gpu/drm/drm_crtc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index ed4b748..93c5b2f 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -2945,7 +2945,7 @@ static struct drm_property_blob *drm_property_create_blob(struct drm_device *dev
struct drm_property_blob *blob;
int ret;
- if (!length || !data)
+ if (!length || length > ULONG_MAX - sizeof(struct drm_property_blob) || !data)
return NULL;
blob = kzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL);
--
1.9.1
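The added check follows the usual pattern for guarding an allocation whose
size is a fixed header plus an untrusted payload length: test the length
against the headroom left below the type's maximum before doing the
addition, because on 32-bit the addition itself would silently wrap and the
allocator would be asked for a much smaller buffer than intended. A generic
user-space sketch of the same pattern (plain C; struct blob and blob_create()
are illustrative stand-ins, not the DRM types):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  struct blob {
          size_t length;
          unsigned char data[];           /* payload follows the header */
  };

  struct blob *blob_create(const void *data, size_t length)
  {
          struct blob *b;

          /* Reject lengths that would make sizeof(*b) + length wrap. */
          if (!length || length > SIZE_MAX - sizeof(*b) || !data)
                  return NULL;

          b = malloc(sizeof(*b) + length);
          if (!b)
                  return NULL;

          b->length = length;
          memcpy(b->data, data, length);
          return b;
  }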
From: NeilBrown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit bd8688a199b864944bf62eebed0ca13b46249453 upstream.
When a write fails and a bad-block-list is present, we can
update the bad-block-list instead of writing the data. If
this succeeds then it is OK to clear the relevant bitmap-bit as
no further 'sync' of the block is needed.
However if writing the bad-block-list fails then we need to
treat the write as failed and particularly must not clear
the bitmap bit. Otherwise the device can be re-added (after
any hardware connection issues are resolved) and because the
relevant bit in the bitmap is clear, that block will not be
resynced. This leads to data corruption.
We already delay the final bio_endio() on the write until
the bad-block-list is written so that when the write
returns: either that data is safe, the bad-block record is
safe, or the fact that the device is faulty is safe.
However we *don't* delay the clearing of the bitmap, so the
bitmap bit can be recorded as cleared before we know if the
bad-block-list was written safely.
So: delay that until the write really is safe.
i.e. move the call to close_write() until just before
calling bio_endio(), and recheck the 'is array degraded'
status before making that call.
This bug goes back to v3.1 when bad-block-lists were
introduced, though it only affects arrays created with
mdadm-3.3 or later as only those have bad-block lists.
Backports will require at least
Commit: 55ce74d4bfe1 ("md/raid1: ensure device failure recorded before write request returns.")
as well. I'll send that to 'stable' separately.
Note that of the two tests of R1BIO_WriteError that this
patch adds, the first is certain to fail and the second is
certain to succeed. However doing it this way makes the
patch more obviously correct. I will tidy the code up in a
future merge window.
Reported-and-tested-by: Nate Dailey <[email protected]>
Cc: Jes Sorensen <[email protected]>
Fixes: cd5ff9a16f08 ("md/raid1: Handle write errors by updating badblock log.")
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/raid1.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 64d2351c..32d1f1a 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2085,15 +2085,16 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
rdev_dec_pending(conf->mirrors[m].rdev,
conf->mddev);
}
- if (test_bit(R1BIO_WriteError, &r1_bio->state))
- close_write(r1_bio);
if (fail) {
spin_lock_irq(&conf->device_lock);
list_add(&r1_bio->retry_list, &conf->bio_end_io_list);
spin_unlock_irq(&conf->device_lock);
md_wakeup_thread(conf->mddev->thread);
- } else
+ } else {
+ if (test_bit(R1BIO_WriteError, &r1_bio->state))
+ close_write(r1_bio);
raid_end_bio_io(r1_bio);
+ }
}
static void handle_read_error(struct r1conf *conf, struct r1bio *r1_bio)
@@ -2209,6 +2210,10 @@ static void raid1d(struct mddev *mddev)
r1_bio = list_first_entry(&conf->bio_end_io_list,
struct r1bio, retry_list);
list_del(&r1_bio->retry_list);
+ if (mddev->degraded)
+ set_bit(R1BIO_Degraded, &r1_bio->state);
+ if (test_bit(R1BIO_WriteError, &r1_bio->state))
+ close_write(r1_bio);
raid_end_bio_io(r1_bio);
}
}
--
1.9.1
From: NeilBrown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 55ce74d4bfe1b9444436264c637f39a152d1e5ac upstream.
When a write to one of the legs of a RAID1 fails, the failure is
recorded in the metadata of the other leg(s) so that after a restart
the data on the failed drive won't be trusted even if that drive seems
to be working again (maybe a cable was unplugged).
Similarly when we record a bad-block in response to a write failure,
we must not let the write complete until the bad-block update is safe.
Currently there is no interlock between the write request completing
and the metadata update. So it is possible that the write will
complete, the app will confirm success in some way, and then the
machine will crash before the metadata update completes.
This is an extremely small hole for a race to fit in, but it is
theoretically possible and so should be closed.
So:
- set MD_CHANGE_PENDING when requesting a metadata update for a
failed device, so we can know with certainty when it completes
- queue requests that experienced an error on a new queue which
is only processed after the metadata update completes
- call raid_end_bio_io() on bios in that queue when the time comes.
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/md.c | 1 +
drivers/md/raid1.c | 29 ++++++++++++++++++++++++++++-
drivers/md/raid1.h | 5 +++++
3 files changed, 34 insertions(+), 1 deletion(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index a875348..9085ba9 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7954,6 +7954,7 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
/* Make sure they get written out promptly */
sysfs_notify_dirent_safe(rdev->sysfs_state);
set_bit(MD_CHANGE_CLEAN, &rdev->mddev->flags);
+ set_bit(MD_CHANGE_PENDING, &rdev->mddev->flags);
md_wakeup_thread(rdev->mddev->thread);
}
return rv;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 189eedb..64d2351c 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1285,6 +1285,7 @@ static void error(struct mddev *mddev, struct md_rdev *rdev)
set_bit(Faulty, &rdev->flags);
spin_unlock_irqrestore(&conf->device_lock, flags);
set_bit(MD_CHANGE_DEVS, &mddev->flags);
+ set_bit(MD_CHANGE_PENDING, &mddev->flags);
printk(KERN_ALERT
"md/raid1:%s: Disk failure on %s, disabling device.\n"
"md/raid1:%s: Operation continuing on %d devices.\n",
@@ -2061,6 +2062,7 @@ static void handle_sync_write_finished(struct r1conf *conf, struct r1bio *r1_bio
static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
{
int m;
+ bool fail = false;
for (m = 0; m < conf->raid_disks * 2 ; m++)
if (r1_bio->bios[m] == IO_MADE_GOOD) {
struct md_rdev *rdev = conf->mirrors[m].rdev;
@@ -2073,6 +2075,7 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
* narrow down and record precise write
* errors.
*/
+ fail = true;
if (!narrow_write_error(r1_bio, m)) {
md_error(conf->mddev,
conf->mirrors[m].rdev);
@@ -2084,7 +2087,13 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
}
if (test_bit(R1BIO_WriteError, &r1_bio->state))
close_write(r1_bio);
- raid_end_bio_io(r1_bio);
+ if (fail) {
+ spin_lock_irq(&conf->device_lock);
+ list_add(&r1_bio->retry_list, &conf->bio_end_io_list);
+ spin_unlock_irq(&conf->device_lock);
+ md_wakeup_thread(conf->mddev->thread);
+ } else
+ raid_end_bio_io(r1_bio);
}
static void handle_read_error(struct r1conf *conf, struct r1bio *r1_bio)
@@ -2187,6 +2196,23 @@ static void raid1d(struct mddev *mddev)
md_check_recovery(mddev);
+ if (!list_empty_careful(&conf->bio_end_io_list) &&
+ !test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
+ LIST_HEAD(tmp);
+ spin_lock_irqsave(&conf->device_lock, flags);
+ if (!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
+ list_add(&tmp, &conf->bio_end_io_list);
+ list_del_init(&conf->bio_end_io_list);
+ }
+ spin_unlock_irqrestore(&conf->device_lock, flags);
+ while (!list_empty(&tmp)) {
+ r1_bio = list_first_entry(&conf->bio_end_io_list,
+ struct r1bio, retry_list);
+ list_del(&r1_bio->retry_list);
+ raid_end_bio_io(r1_bio);
+ }
+ }
+
blk_start_plug(&plug);
for (;;) {
@@ -2596,6 +2622,7 @@ static struct r1conf *setup_conf(struct mddev *mddev)
conf->raid_disks = mddev->raid_disks;
conf->mddev = mddev;
INIT_LIST_HEAD(&conf->retry_list);
+ INIT_LIST_HEAD(&conf->bio_end_io_list);
spin_lock_init(&conf->resync_lock);
init_waitqueue_head(&conf->wait_barrier);
diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
index 80ded13..50086cf 100644
--- a/drivers/md/raid1.h
+++ b/drivers/md/raid1.h
@@ -48,6 +48,11 @@ struct r1conf {
* block, or anything else.
*/
struct list_head retry_list;
+ /* A separate list of r1bio which just need raid_end_bio_io called.
+ * This mustn't happen for writes which had any errors if the superblock
+ * needs to be written.
+ */
+ struct list_head bio_end_io_list;
/* queue pending writes to be submitted on unplug */
struct bio_list pending_bio_list;
--
1.9.1
From: Doron Tsur <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 0ca81a2840f77855bbad1b9f172c545c4dc9e6a4 upstream.
ib_send_cm_sidr_rep could sometimes erase the node from the sidr
(depending on errors in the process). Since ib_send_cm_sidr_rep is
called both from cm_sidr_req_handler and cm_destroy_id, cm_id_priv
could be either erased from the rb_tree twice or not erased at all.
Fix that by making sure it's erased only once before freeing
cm_id_priv.
Fixes: a977049dacde ('[PATCH] IB: Add the kernel CM implementation')
Signed-off-by: Doron Tsur <[email protected]>
Signed-off-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/infiniband/core/cm.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index c889aae..90104c6 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -856,6 +856,11 @@ retest:
case IB_CM_SIDR_REQ_RCVD:
spin_unlock_irq(&cm_id_priv->lock);
cm_reject_sidr_req(cm_id_priv, IB_SIDR_REJECT);
+ spin_lock_irq(&cm.lock);
+ if (!RB_EMPTY_NODE(&cm_id_priv->sidr_id_node))
+ rb_erase(&cm_id_priv->sidr_id_node,
+ &cm.remote_sidr_table);
+ spin_unlock_irq(&cm.lock);
break;
case IB_CM_REQ_SENT:
ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
@@ -3092,7 +3097,10 @@ int ib_send_cm_sidr_rep(struct ib_cm_id *cm_id,
spin_unlock_irqrestore(&cm_id_priv->lock, flags);
spin_lock_irqsave(&cm.lock, flags);
- rb_erase(&cm_id_priv->sidr_id_node, &cm.remote_sidr_table);
+ if (!RB_EMPTY_NODE(&cm_id_priv->sidr_id_node)) {
+ rb_erase(&cm_id_priv->sidr_id_node, &cm.remote_sidr_table);
+ RB_CLEAR_NODE(&cm_id_priv->sidr_id_node);
+ }
spin_unlock_irqrestore(&cm.lock, flags);
return 0;
--
1.9.1
From: NeilBrown <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 66eefe5de11db1e0d8f2edc3880d50e7c36a9d43 upstream.
Calling e.g. blk_queue_max_hw_sectors() after calls to
disk_stack_limits() discards the settings determined by
disk_stack_limits().
So we need to make those calls first.
Fixes: 199dc6ed5179 ("md/raid0: update queue parameter in a safer location.")
Reported-by: Jes Sorensen <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/md/raid0.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 3e285e6..1c9094d 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -434,18 +434,18 @@ static int raid0_run(struct mddev *mddev)
struct md_rdev *rdev;
bool discard_supported = false;
- rdev_for_each(rdev, mddev) {
- disk_stack_limits(mddev->gendisk, rdev->bdev,
- rdev->data_offset << 9);
- if (blk_queue_discard(bdev_get_queue(rdev->bdev)))
- discard_supported = true;
- }
blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
blk_queue_io_opt(mddev->queue,
(mddev->chunk_sectors << 9) * mddev->raid_disks);
+ rdev_for_each(rdev, mddev) {
+ disk_stack_limits(mddev->gendisk, rdev->bdev,
+ rdev->data_offset << 9);
+ if (blk_queue_discard(bdev_get_queue(rdev->bdev)))
+ discard_supported = true;
+ }
if (!discard_supported)
queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
else
--
1.9.1
From: Robert Jarzmik <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 3c8f7710c1c44fb650bc29b6ef78ed8b60cfaa28 upstream.
The previous fix of pxa library support, which was introduced to fix the
library dependency, broke the previous SoC behavior, where machine
code binding pxa2xx-ac97 with a codec relied on:
- sound/soc/pxa/pxa2xx-ac97.c
- sound/soc/codecs/XXX.c
For example, the mioa701_wm9713.c machine code is currently broken. The
"select ARM" statement wrongly selects the soc/arm/pxa2xx-ac97 for
compilation since, by an unfortunate coincidence, SND_PXA2XX_AC97 is declared both
in sound/arm/Kconfig and sound/soc/pxa/Kconfig.
Fix this by ensuring that SND_PXA2XX_SOC correctly triggers the correct
pxa2xx-ac97 compilation.
Fixes: 846172dfe33c ("ASoC: fix SND_PXA2XX_LIB Kconfig warning")
Signed-off-by: Robert Jarzmik <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
sound/arm/Kconfig | 15 ++++++++-------
sound/soc/pxa/Kconfig | 2 --
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/sound/arm/Kconfig b/sound/arm/Kconfig
index 885683a..e040621 100644
--- a/sound/arm/Kconfig
+++ b/sound/arm/Kconfig
@@ -9,6 +9,14 @@ menuconfig SND_ARM
Drivers that are implemented on ASoC can be found in
"ALSA for SoC audio support" section.
+config SND_PXA2XX_LIB
+ tristate
+ select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
+ select SND_DMAENGINE_PCM
+
+config SND_PXA2XX_LIB_AC97
+ bool
+
if SND_ARM
config SND_ARMAACI
@@ -21,13 +29,6 @@ config SND_PXA2XX_PCM
tristate
select SND_PCM
-config SND_PXA2XX_LIB
- tristate
- select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
-
-config SND_PXA2XX_LIB_AC97
- bool
-
config SND_PXA2XX_AC97
tristate "AC97 driver for the Intel PXA2xx chip"
depends on ARCH_PXA
diff --git a/sound/soc/pxa/Kconfig b/sound/soc/pxa/Kconfig
index a0f7d3c..23deb67 100644
--- a/sound/soc/pxa/Kconfig
+++ b/sound/soc/pxa/Kconfig
@@ -1,7 +1,6 @@
config SND_PXA2XX_SOC
tristate "SoC Audio for the Intel PXA2xx chip"
depends on ARCH_PXA
- select SND_ARM
select SND_PXA2XX_LIB
help
Say Y or M if you want to add support for codecs attached to
@@ -15,7 +14,6 @@ config SND_PXA2XX_AC97
config SND_PXA2XX_SOC_AC97
tristate
select AC97_BUS
- select SND_ARM
select SND_PXA2XX_LIB_AC97
select SND_SOC_AC97_BUS
--
1.9.1
From: Peter Zijlstra <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 275d7d44d802ef271a42dc87ac091a495ba72fc5 upstream.
Poma (on the way to another bug) reported an assertion triggering:
[<ffffffff81150529>] module_assert_mutex_or_preempt+0x49/0x90
[<ffffffff81150822>] __module_address+0x32/0x150
[<ffffffff81150956>] __module_text_address+0x16/0x70
[<ffffffff81150f19>] symbol_put_addr+0x29/0x40
[<ffffffffa04b77ad>] dvb_frontend_detach+0x7d/0x90 [dvb_core]
Laura Abbott <[email protected]> produced a patch which lead us to
inspect symbol_put_addr(). This function has a comment claiming it
doesn't need to disable preemption around the module lookup
because it holds a reference to the module it wants to find, which
therefore cannot go away.
This is wrong (and a false optimization too, preempt_disable() is really
rather cheap, and I doubt any of this is on uber critical paths,
otherwise it would've retained a pointer to the actual module anyway and
avoided the second lookup).
While it's true that the module cannot go away while we hold a reference
on it, the data structure we do the lookup in very much _CAN_ change
while we do the lookup. Therefore fix the comment and add the
required preempt_disable().
Reported-by: poma <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
Fixes: a6e6abd575fc ("module: remove module_text_address()")
Signed-off-by: Zefan Li <[email protected]>
---
kernel/module.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/module.c b/kernel/module.c
index 5e39896..18e0879 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -887,11 +887,15 @@ void symbol_put_addr(void *addr)
if (core_kernel_text(a))
return;
- /* module_text_address is safe here: we're supposed to have reference
- * to module from symbol_get, so it can't go away. */
+ /*
+ * Even though we hold a reference on the module; we still need to
+ * disable preemption in order to safely traverse the data structure.
+ */
+ preempt_disable();
modaddr = __module_text_address(a);
BUG_ON(!modaddr);
module_put(modaddr);
+ preempt_enable();
}
EXPORT_SYMBOL_GPL(symbol_put_addr);
--
1.9.1
From: "T.J. Purtell" <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 6ecf830e5029598732e04067e325d946097519cb upstream.
The ARM architecture reference specifies that the IT state bits in the
PSR must be all zeros in ARM mode or behavior is unspecified. On the
Qualcomm Snapdragon S4/Krait architecture CPUs the processor continues
to consider the IT state bits while in ARM mode. This makes it so
that some instructions are skipped by the CPU.
Signed-off-by: T.J. Purtell <[email protected]>
[[email protected]: fixed whitespace formatting in patch]
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/arm/kernel/signal.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
index d68d1b6..ffc0902 100644
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -437,12 +437,18 @@ setup_return(struct pt_regs *regs, struct k_sigaction *ka,
*/
thumb = handler & 1;
- if (thumb) {
- cpsr |= PSR_T_BIT;
#if __LINUX_ARM_ARCH__ >= 7
- /* clear the If-Then Thumb-2 execution state */
- cpsr &= ~PSR_IT_MASK;
+ /*
+ * Clear the If-Then Thumb-2 execution state
+ * ARM spec requires this to be all 000s in ARM mode
+ * Snapdragon S4/Krait misbehaves on a Thumb=>ARM
+ * signal transition without this.
+ */
+ cpsr &= ~PSR_IT_MASK;
#endif
+
+ if (thumb) {
+ cpsr |= PSR_T_BIT;
} else
cpsr &= ~PSR_T_BIT;
}
--
1.9.1
From: Hin-Tak Leung <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 7cb74be6fd827e314f81df3c5889b87e4c87c569 upstream.
Pages looked up by __hfs_bnode_create() (called by hfs_bnode_create() and
hfs_bnode_find() for finding or creating pages corresponding to an inode)
are immediately kmap()'ed and used (both read and write) and kunmap()'ed,
and should not be page_cache_release()'ed until hfs_bnode_free().
This patch fixes a problem I first saw in July 2012: merely running "du"
on a large hfsplus-mounted directory a few times on a reasonably loaded
system would get the hfsplus driver all confused and complaining about
B-tree inconsistencies, and generates a "BUG: Bad page state". Most
recently, I can generate this problem on up-to-date Fedora 22 with shipped
kernel 4.0.5, by running "du /" (="/" + "/home" + "/mnt" + other smaller
mounts) and "du /mnt" simultaneously on two windows, where /mnt is a
lightly-used QEMU VM image of the full Mac OS X 10.9:
$ df -i / /home /mnt
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 551665 2725135 17% /
/dev/mapper/fedora-home 52879360 716221 52163139 2% /home
/dev/nbd0p2 4294967295 1387818 4293579477 1% /mnt
After applying the patch, I was able to run "du /" (60+ times) and "du
/mnt" (150+ times) continuously and simultaneously for 6+ hours.
There are many reports of the hfsplus driver getting confused under load
and generating "BUG: Bad page state" or other similar issues over the
years. [1]
The unpatched code [2] has always been wrong since it entered the kernel
tree. The only reason why it gets away with it is that the
kmap/memcpy/kunmap follow very quickly after the page_cache_release() so
the kernel has not had a chance to reuse the memory for something else,
most of the time.
The current RW driver appears to have followed the design and development
of the earlier read-only hfsplus driver [3], whereby version 0.1 (Dec
2001) had a B-tree node-centric approach to
read_cache_page()/page_cache_release() per bnode_get()/bnode_put(),
migrating towards version 0.2 (June 2002) of caching and releasing pages
per inode extents. When the current RW code first entered the kernel [2]
in 2005, there was an REF_PAGES conditional (and "//" commented out code)
to switch between B-node centric paging to inode-centric paging. There
was a mistake with the direction of one of the REF_PAGES conditionals in
__hfs_bnode_create(). In a subsequent "remove debug code" commit [4], the
read_cache_page()/page_cache_release() per bnode_get()/bnode_put() were
removed, but a page_cache_release() was mistakenly left in (propagating
the "REF_PAGES <-> !REF_PAGE" mistake), and the commented-out
page_cache_release() in bnode_release() (which should be spanned by
!REF_PAGES) was never enabled.
References:
[1]:
Michael Fox, Apr 2013
http://www.spinics.net/lists/linux-fsdevel/msg63807.html
("hfsplus volume suddenly inaccessable after 'hfs: recoff %d too large'")
Sasha Levin, Feb 2015
http://lkml.org/lkml/2015/2/20/85 ("use after free")
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/740814
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1027887
https://bugzilla.kernel.org/show_bug.cgi?id=42342
https://bugzilla.kernel.org/show_bug.cgi?id=63841
https://bugzilla.kernel.org/show_bug.cgi?id=78761
[2]:
http://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/\
fs/hfs/bnode.c?id=d1081202f1d0ee35ab0beb490da4b65d4bc763db
commit d1081202f1d0ee35ab0beb490da4b65d4bc763db
Author: Andrew Morton <[email protected]>
Date: Wed Feb 25 16:17:36 2004 -0800
[PATCH] HFS rewrite
http://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/\
fs/hfsplus/bnode.c?id=91556682e0bf004d98a529bf829d339abb98bbbd
commit 91556682e0bf004d98a529bf829d339abb98bbbd
Author: Andrew Morton <[email protected]>
Date: Wed Feb 25 16:17:48 2004 -0800
[PATCH] HFS+ support
[3]:
http://sourceforge.net/projects/linux-hfsplus/
http://sourceforge.net/projects/linux-hfsplus/files/Linux%202.4.x%20patch/hfsplus%200.1/
http://sourceforge.net/projects/linux-hfsplus/files/Linux%202.4.x%20patch/hfsplus%200.2/
http://linux-hfsplus.cvs.sourceforge.net/viewvc/linux-hfsplus/linux/\
fs/hfsplus/bnode.c?r1=1.4&r2=1.5
Date: Thu Jun 6 09:45:14 2002 +0000
Use buffer cache instead of page cache in bnode.c. Cache inode extents.
[4]:
http://git.kernel.org/cgit/linux/kernel/git/\
stable/linux-stable.git/commit/?id=a5e3985fa014029eb6795664c704953720cc7f7d
commit a5e3985fa014029eb6795664c704953720cc7f7d
Author: Roman Zippel <[email protected]>
Date: Tue Sep 6 15:18:47 2005 -0700
[PATCH] hfs: remove debug code
Signed-off-by: Hin-Tak Leung <[email protected]>
Signed-off-by: Sergei Antonov <[email protected]>
Reviewed-by: Anton Altaparmakov <[email protected]>
Reported-by: Sasha Levin <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Vyacheslav Dubeyko <[email protected]>
Cc: Sougata Santra <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
fs/hfs/bnode.c | 9 ++++-----
fs/hfsplus/bnode.c | 3 ---
2 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
index cdb41a1..8daea16 100644
--- a/fs/hfs/bnode.c
+++ b/fs/hfs/bnode.c
@@ -287,7 +287,6 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
page_cache_release(page);
goto fail;
}
- page_cache_release(page);
node->page[i] = page;
}
@@ -397,11 +396,11 @@ node_error:
void hfs_bnode_free(struct hfs_bnode *node)
{
- //int i;
+ int i;
- //for (i = 0; i < node->tree->pages_per_bnode; i++)
- // if (node->page[i])
- // page_cache_release(node->page[i]);
+ for (i = 0; i < node->tree->pages_per_bnode; i++)
+ if (node->page[i])
+ page_cache_release(node->page[i]);
kfree(node);
}
diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 1c42cc5..a1e9109 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -454,7 +454,6 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
page_cache_release(page);
goto fail;
}
- page_cache_release(page);
node->page[i] = page;
}
@@ -566,13 +565,11 @@ node_error:
void hfs_bnode_free(struct hfs_bnode *node)
{
-#if 0
int i;
for (i = 0; i < node->tree->pages_per_bnode; i++)
if (node->page[i])
page_cache_release(node->page[i]);
-#endif
kfree(node);
}
--
1.9.1
From: Ard Biesheuvel <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit a077224fd35b2f7fbc93f14cf67074fc792fbac2 upstream.
While working on the 32-bit ARM port of UEFI, I noticed a strange
corruption in the kernel log. The following snprintf() statement
(in drivers/firmware/efi/efi.c:efi_md_typeattr_format())
snprintf(pos, size, "|%3s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]",
was producing the following output in the log:
| | | | | |WB|WT|WC|UC]
| | | | | |WB|WT|WC|UC]
| | | | | |WB|WT|WC|UC]
|RUN| | | | |WB|WT|WC|UC]*
|RUN| | | | |WB|WT|WC|UC]*
| | | | | |WB|WT|WC|UC]
|RUN| | | | |WB|WT|WC|UC]*
| | | | | |WB|WT|WC|UC]
|RUN| | | | | | | |UC]
|RUN| | | | | | | |UC]
As it turns out, this is caused by incorrect code being emitted for
the string() function in lib/vsprintf.c. The following code
if (!(spec.flags & LEFT)) {
while (len < spec.field_width--) {
if (buf < end)
*buf = ' ';
++buf;
}
}
for (i = 0; i < len; ++i) {
if (buf < end)
*buf = *s;
++buf; ++s;
}
while (len < spec.field_width--) {
if (buf < end)
*buf = ' ';
++buf;
}
when called with len == 0, triggers an issue in the GCC SRA optimization
pass (Scalar Replacement of Aggregates), which handles promotion of signed
struct members incorrectly. This is a known but as yet unresolved issue.
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932). In this particular
case, it is causing the second while loop to be executed erroneously a
single time, causing the additional space characters to be printed.
So disable the optimization by passing -fno-ipa-sra.
Acked-by: Nicolas Pitre <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/arm/Makefile | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 1d6402c..4533386 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -53,6 +53,14 @@ endif
comma = ,
+#
+# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
+# later may result in code being generated that handles signed short and signed
+# char struct members incorrectly. So disable it.
+# (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932)
+#
+KBUILD_CFLAGS += $(call cc-option,-fno-ipa-sra)
+
# This selects which instruction set is used.
# Note that GCC does not numerically define an architecture version
# macro, but instead defines a whole series of macros which makes
--
1.9.1
From: Andrey Ryabinin <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 71c6da846be478a61556717ef1ee1cea91f5d6a8 upstream.
Currently the context size (cra_ctxsize) isn't specified for
ghash_async_alg, which means it's zero. Thus crypto_create_tfm()
doesn't allocate the needed space for ghash_async_ctx, so any
read/write to ctx (e.g. in ghash_async_init_tfm()) is not valid.
Signed-off-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/x86/crypto/ghash-clmulni-intel_glue.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index c07446d..21069e9 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -292,6 +292,7 @@ static struct ahash_alg ghash_async_alg = {
.cra_name = "ghash",
.cra_driver_name = "ghash-clmulni",
.cra_priority = 400,
+ .cra_ctxsize = sizeof(struct ghash_async_ctx),
.cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_ASYNC,
.cra_blocksize = GHASH_BLOCK_SIZE,
.cra_type = &crypto_ahash_type,
--
1.9.1
From: Noa Osherovich <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 5e99b139f1b68acd65e36515ca347b03856dfb5a upstream.
The mlx4 IB driver implementation of ib_query_ah used a wrong offset
(28 instead of 29) when the link type is Ethernet. Fixed to use the correct one.
Fixes: fa417f7b520e ('IB/mlx4: Add support for IBoE')
Signed-off-by: Shani Michaeli <[email protected]>
Signed-off-by: Noa Osherovich <[email protected]>
Signed-off-by: Or Gerlitz <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/infiniband/hw/mlx4/ah.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx4/ah.c b/drivers/infiniband/hw/mlx4/ah.c
index a251bec..890c23b 100644
--- a/drivers/infiniband/hw/mlx4/ah.c
+++ b/drivers/infiniband/hw/mlx4/ah.c
@@ -169,9 +169,13 @@ int mlx4_ib_query_ah(struct ib_ah *ibah, struct ib_ah_attr *ah_attr)
enum rdma_link_layer ll;
memset(ah_attr, 0, sizeof *ah_attr);
- ah_attr->sl = be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 28;
ah_attr->port_num = be32_to_cpu(ah->av.ib.port_pd) >> 24;
ll = rdma_port_get_link_layer(ibah->device, ah_attr->port_num);
+ if (ll == IB_LINK_LAYER_ETHERNET)
+ ah_attr->sl = be32_to_cpu(ah->av.eth.sl_tclass_flowlabel) >> 29;
+ else
+ ah_attr->sl = be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 28;
+
ah_attr->dlid = ll == IB_LINK_LAYER_INFINIBAND ? be16_to_cpu(ah->av.ib.dlid) : 0;
if (ah->av.ib.stat_rate)
ah_attr->static_rate = ah->av.ib.stat_rate - MLX4_STAT_RATE_OFFSET;
--
1.9.1
From: Yishai Hadas <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 35d4a0b63dc0c6d1177d4f532a9deae958f0662c upstream.
Fixes: 2a72f212263701b927559f6850446421d5906c41 ("IB/uverbs: Remove dev_table")
Before this commit there was a device look-up table that was protected
by a spin_lock used by ib_uverbs_open and by ib_uverbs_remove_one. When
it was dropped and container_of was used instead, it enabled the race
with remove_one as dev might be freed just after:
dev = container_of(inode->i_cdev, struct ib_uverbs_device, cdev) but
before the kref_get.
In addition, this buggy patch added some dead code as
container_of(x,y,z) can never be NULL and so dev can never be NULL.
As a result the comment above ib_uverbs_open saying "the open method
will either immediately run -ENXIO" is wrong as it can never happen.
The solution follows Jason Gunthorpe suggestion from below URL:
https://www.mail-archive.com/[email protected]/msg25692.html
cdev will hold a kref on the parent (the containing structure,
ib_uverbs_device) and only when that kref is released it is
guaranteed that open will never be called again.
In addition, fixes the active count scheme to use an atomic
not a kref to prevent WARN_ON as pointed by above comment
from Jason.
Signed-off-by: Yishai Hadas <[email protected]>
Signed-off-by: Shachar Raindel <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/infiniband/core/uverbs.h | 3 ++-
drivers/infiniband/core/uverbs_main.c | 43 ++++++++++++++++++++++++-----------
2 files changed, 32 insertions(+), 14 deletions(-)
diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
index 5bcb2af..228af18 100644
--- a/drivers/infiniband/core/uverbs.h
+++ b/drivers/infiniband/core/uverbs.h
@@ -69,7 +69,7 @@
*/
struct ib_uverbs_device {
- struct kref ref;
+ atomic_t refcount;
int num_comp_vectors;
struct completion comp;
struct device *dev;
@@ -78,6 +78,7 @@ struct ib_uverbs_device {
struct cdev cdev;
struct rb_root xrcd_tree;
struct mutex xrcd_tree_mutex;
+ struct kobject kobj;
};
struct ib_uverbs_event_file {
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index 5b51e4e..c8e7669 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -117,14 +117,18 @@ static ssize_t (*uverbs_cmd_table[])(struct ib_uverbs_file *file,
static void ib_uverbs_add_one(struct ib_device *device);
static void ib_uverbs_remove_one(struct ib_device *device);
-static void ib_uverbs_release_dev(struct kref *ref)
+static void ib_uverbs_release_dev(struct kobject *kobj)
{
struct ib_uverbs_device *dev =
- container_of(ref, struct ib_uverbs_device, ref);
+ container_of(kobj, struct ib_uverbs_device, kobj);
- complete(&dev->comp);
+ kfree(dev);
}
+static struct kobj_type ib_uverbs_dev_ktype = {
+ .release = ib_uverbs_release_dev,
+};
+
static void ib_uverbs_release_event_file(struct kref *ref)
{
struct ib_uverbs_event_file *file =
@@ -273,13 +277,19 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
return context->device->dealloc_ucontext(context);
}
+static void ib_uverbs_comp_dev(struct ib_uverbs_device *dev)
+{
+ complete(&dev->comp);
+}
+
static void ib_uverbs_release_file(struct kref *ref)
{
struct ib_uverbs_file *file =
container_of(ref, struct ib_uverbs_file, ref);
module_put(file->device->ib_dev->owner);
- kref_put(&file->device->ref, ib_uverbs_release_dev);
+ if (atomic_dec_and_test(&file->device->refcount))
+ ib_uverbs_comp_dev(file->device);
kfree(file);
}
@@ -621,9 +631,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
int ret;
dev = container_of(inode->i_cdev, struct ib_uverbs_device, cdev);
- if (dev)
- kref_get(&dev->ref);
- else
+ if (!atomic_inc_not_zero(&dev->refcount))
return -ENXIO;
if (!try_module_get(dev->ib_dev->owner)) {
@@ -644,6 +652,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
mutex_init(&file->mutex);
filp->private_data = file;
+ kobject_get(&dev->kobj);
return nonseekable_open(inode, filp);
@@ -651,13 +660,16 @@ err_module:
module_put(dev->ib_dev->owner);
err:
- kref_put(&dev->ref, ib_uverbs_release_dev);
+ if (atomic_dec_and_test(&dev->refcount))
+ ib_uverbs_comp_dev(dev);
+
return ret;
}
static int ib_uverbs_close(struct inode *inode, struct file *filp)
{
struct ib_uverbs_file *file = filp->private_data;
+ struct ib_uverbs_device *dev = file->device;
ib_uverbs_cleanup_ucontext(file, file->ucontext);
@@ -665,6 +677,7 @@ static int ib_uverbs_close(struct inode *inode, struct file *filp)
kref_put(&file->async_file->ref, ib_uverbs_release_event_file);
kref_put(&file->ref, ib_uverbs_release_file);
+ kobject_put(&dev->kobj);
return 0;
}
@@ -760,10 +773,11 @@ static void ib_uverbs_add_one(struct ib_device *device)
if (!uverbs_dev)
return;
- kref_init(&uverbs_dev->ref);
+ atomic_set(&uverbs_dev->refcount, 1);
init_completion(&uverbs_dev->comp);
uverbs_dev->xrcd_tree = RB_ROOT;
mutex_init(&uverbs_dev->xrcd_tree_mutex);
+ kobject_init(&uverbs_dev->kobj, &ib_uverbs_dev_ktype);
spin_lock(&map_lock);
devnum = find_first_zero_bit(dev_map, IB_UVERBS_MAX_DEVICES);
@@ -790,6 +804,7 @@ static void ib_uverbs_add_one(struct ib_device *device)
cdev_init(&uverbs_dev->cdev, NULL);
uverbs_dev->cdev.owner = THIS_MODULE;
uverbs_dev->cdev.ops = device->mmap ? &uverbs_mmap_fops : &uverbs_fops;
+ uverbs_dev->cdev.kobj.parent = &uverbs_dev->kobj;
kobject_set_name(&uverbs_dev->cdev.kobj, "uverbs%d", uverbs_dev->devnum);
if (cdev_add(&uverbs_dev->cdev, base, 1))
goto err_cdev;
@@ -820,9 +835,10 @@ err_cdev:
clear_bit(devnum, overflow_map);
err:
- kref_put(&uverbs_dev->ref, ib_uverbs_release_dev);
+ if (atomic_dec_and_test(&uverbs_dev->refcount))
+ ib_uverbs_comp_dev(uverbs_dev);
wait_for_completion(&uverbs_dev->comp);
- kfree(uverbs_dev);
+ kobject_put(&uverbs_dev->kobj);
return;
}
@@ -842,9 +858,10 @@ static void ib_uverbs_remove_one(struct ib_device *device)
else
clear_bit(uverbs_dev->devnum - IB_UVERBS_MAX_DEVICES, overflow_map);
- kref_put(&uverbs_dev->ref, ib_uverbs_release_dev);
+ if (atomic_dec_and_test(&uverbs_dev->refcount))
+ ib_uverbs_comp_dev(uverbs_dev);
wait_for_completion(&uverbs_dev->comp);
- kfree(uverbs_dev);
+ kobject_put(&uverbs_dev->kobj);
}
static char *uverbs_devnode(struct device *dev, umode_t *mode)
--
1.9.1
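The conversion above leans on the "take a reference only if the object is
still alive" primitive: atomic_inc_not_zero() fails once the count has hit
zero, so ib_uverbs_open() cannot resurrect a device that remove_one is
already tearing down. A user-space sketch of that primitive built from C11
atomics (the function name is illustrative):

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Increment *refcount unless it is already zero;
   * mirrors the kernel's atomic_inc_not_zero(). */
  bool get_ref_not_zero(atomic_int *refcount)
  {
          int old = atomic_load(refcount);

          while (old != 0) {
                  /* On failure the CAS reloads 'old' with the current value
                   * and we retry; on success the reference is taken. */
                  if (atomic_compare_exchange_weak(refcount, &old, old + 1))
                          return true;
          }
          return false;                   /* already zero: object going away */
  }

In the patch a failed increment maps to the -ENXIO return in ib_uverbs_open(),
and the final decrement that reaches zero completes uverbs_dev->comp so that
ib_uverbs_remove_one() can finish and the kobject release frees the memory.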
From: Christoph Hellwig <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit b632ffa7cee439ba5dce3b3bc4a5cbe2b3e20133 upstream.
We have many WR opcodes that are only supported in kernel space
and/or require optional information to be copied into the WR
structure. Reject all those not explicitly handled so that we
can't pass invalid information to drivers.
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Reviewed-by: Sagi Grimberg <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/infiniband/core/uverbs_cmd.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 4d27e4c3..95885b4 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -1979,6 +1979,12 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
next->send_flags = user_wr->send_flags;
if (is_ud) {
+ if (next->opcode != IB_WR_SEND &&
+ next->opcode != IB_WR_SEND_WITH_IMM) {
+ ret = -EINVAL;
+ goto out_put;
+ }
+
next->wr.ud.ah = idr_read_ah(user_wr->wr.ud.ah,
file->ucontext);
if (!next->wr.ud.ah) {
@@ -2015,9 +2021,11 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
user_wr->wr.atomic.compare_add;
next->wr.atomic.swap = user_wr->wr.atomic.swap;
next->wr.atomic.rkey = user_wr->wr.atomic.rkey;
+ case IB_WR_SEND:
break;
default:
- break;
+ ret = -EINVAL;
+ goto out_put;
}
}
--
1.9.1
From: David Daney <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 3a496b00b6f90c41bd21a410871dfc97d4f3c7ab upstream.
If the internal call to of_address_to_resource() fails, we end up
looping forever in of_find_matching_node_by_address(). This can be
caused by a defective device tree, or calling with an incorrect
matches argument.
Fix by calling of_find_matching_node() unconditionally at the end of
the loop.
Signed-off-by: David Daney <[email protected]>
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/of/address.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/of/address.c b/drivers/of/address.c
index c059ce1..f89fc0f 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -604,10 +604,10 @@ struct device_node *of_find_matching_node_by_address(struct device_node *from,
struct resource res;
while (dn) {
- if (of_address_to_resource(dn, 0, &res))
- continue;
- if (res.start == base_address)
+ if (!of_address_to_resource(dn, 0, &res) &&
+ res.start == base_address)
return dn;
+
dn = of_find_matching_node(dn, matches);
}
--
1.9.1
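The underlying bug is a generic C loop-control mistake rather than anything
OF-specific: "continue" inside a while loop jumps straight back to the loop
condition without executing the statement that advances the iterator, so once
of_address_to_resource() fails for a node the loop keeps re-testing that same
node forever. A stand-alone sketch of the broken shape next to the fixed one
(plain C; struct node and next_node() are illustrative stand-ins for the
device-tree types):

  #include <stddef.h>

  struct node { int addr; struct node *next; };

  static struct node *next_node(struct node *n) { return n->next; }

  /* Broken shape: when the address lookup fails (addr < 0 here), "continue"
   * skips the advance at the bottom and the loop spins on the same node. */
  struct node *find_broken(struct node *dn, int base)
  {
          while (dn) {
                  if (dn->addr < 0)
                          continue;               /* dn never advances! */
                  if (dn->addr == base)
                          return dn;
                  dn = next_node(dn);
          }
          return NULL;
  }

  /* Fixed shape: always advance at the end of the loop, as the patch does. */
  struct node *find_fixed(struct node *dn, int base)
  {
          while (dn) {
                  if (dn->addr >= 0 && dn->addr == base)
                          return dn;
                  dn = next_node(dn);
          }
          return NULL;
  }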
From: Tyler Hicks <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 5556e7e6d30e8e9b5ee51b0e5edd526ee80e5e36 upstream.
Consider eCryptfs dcache entries to be stale when the corresponding
lower inode's i_nlink count is zero. This solves a problem caused by the
lower inode being directly modified, without going through the eCryptfs
mount, leaving stale eCryptfs dentries cached and the eCryptfs inode's
i_nlink count not being cleared.
Signed-off-by: Tyler Hicks <[email protected]>
Reported-by: Richard Weinberger <[email protected]>
[bwh: Backported to 3.2:
- Test d_revalidate pointer directly rather than a DCACHE_OP flag
- Open-code d_inode()
- Adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
fs/ecryptfs/dentry.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
index 534c1d4..eba8f1d 100644
--- a/fs/ecryptfs/dentry.c
+++ b/fs/ecryptfs/dentry.c
@@ -55,26 +55,26 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, struct nameidata *nd)
lower_dentry = ecryptfs_dentry_to_lower(dentry);
lower_mnt = ecryptfs_dentry_to_lower_mnt(dentry);
- if (!lower_dentry->d_op || !lower_dentry->d_op->d_revalidate)
- goto out;
- if (nd) {
- dentry_save = nd->path.dentry;
- vfsmount_save = nd->path.mnt;
- nd->path.dentry = lower_dentry;
- nd->path.mnt = lower_mnt;
- }
- rc = lower_dentry->d_op->d_revalidate(lower_dentry, nd);
- if (nd) {
- nd->path.dentry = dentry_save;
- nd->path.mnt = vfsmount_save;
+ if (lower_dentry->d_op && lower_dentry->d_op->d_revalidate) {
+ if (nd) {
+ dentry_save = nd->path.dentry;
+ vfsmount_save = nd->path.mnt;
+ nd->path.dentry = lower_dentry;
+ nd->path.mnt = lower_mnt;
+ }
+ rc = lower_dentry->d_op->d_revalidate(lower_dentry, nd);
+ if (nd) {
+ nd->path.dentry = dentry_save;
+ nd->path.mnt = vfsmount_save;
+ }
}
if (dentry->d_inode) {
- struct inode *lower_inode =
- ecryptfs_inode_to_lower(dentry->d_inode);
+ struct inode *inode = dentry->d_inode;
- fsstack_copy_attr_all(dentry->d_inode, lower_inode);
+ fsstack_copy_attr_all(inode, ecryptfs_inode_to_lower(inode));
+ if (!inode->i_nlink)
+ return 0;
}
-out:
return rc;
}
--
1.9.1
From: Peter Chen <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 0521cfd06e1ebcd575e7ae36aab068b38df23850 upstream.
The ehci platform device's drvdata is already the pointer to the
struct usb_hcd, so we don't need to do the bus_to_hcd conversion again.
Signed-off-by: Peter Chen <[email protected]>
Acked-by: Alan Stern <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/host/ehci-sysfs.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/usb/host/ehci-sysfs.c b/drivers/usb/host/ehci-sysfs.c
index 14ced00..0659024 100644
--- a/drivers/usb/host/ehci-sysfs.c
+++ b/drivers/usb/host/ehci-sysfs.c
@@ -29,7 +29,7 @@ static ssize_t show_companion(struct device *dev,
int count = PAGE_SIZE;
char *ptr = buf;
- ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
+ ehci = hcd_to_ehci(dev_get_drvdata(dev));
nports = HCS_N_PORTS(ehci->hcs_params);
for (index = 0; index < nports; ++index) {
@@ -54,7 +54,7 @@ static ssize_t store_companion(struct device *dev,
struct ehci_hcd *ehci;
int portnum, new_owner;
- ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
+ ehci = hcd_to_ehci(dev_get_drvdata(dev));
new_owner = PORT_OWNER; /* Owned by companion */
if (sscanf(buf, "%d", &portnum) != 1)
return -EINVAL;
@@ -85,7 +85,7 @@ static ssize_t show_uframe_periodic_max(struct device *dev,
struct ehci_hcd *ehci;
int n;
- ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
+ ehci = hcd_to_ehci(dev_get_drvdata(dev));
n = scnprintf(buf, PAGE_SIZE, "%d\n", ehci->uframe_periodic_max);
return n;
}
@@ -102,7 +102,7 @@ static ssize_t store_uframe_periodic_max(struct device *dev,
unsigned long flags;
ssize_t ret;
- ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
+ ehci = hcd_to_ehci(dev_get_drvdata(dev));
if (kstrtouint(buf, 0, &uframe_periodic_max) < 0)
return -EINVAL;
--
1.9.1
From: Matthijs Kooijman <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 1fb8dc36384ae1140ee6ccc470de74397606a9d5 upstream.
CustomWare uses the FTDI VID with custom PIDs for their ShipModul MiniPlex
products.
Signed-off-by: Matthijs Kooijman <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/usb/serial/ftdi_sio.c | 4 ++++
drivers/usb/serial/ftdi_sio_ids.h | 8 ++++++++
2 files changed, 12 insertions(+)
diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
index 1e4899c..4038789 100644
--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -629,6 +629,10 @@ static struct usb_device_id id_table_combined [] = {
{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID),
.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
{ USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2WI_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX3_PID) },
/*
* ELV devices:
*/
diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
index 1fee973..70b24c0 100644
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -568,6 +568,14 @@
*/
#define FTDI_SYNAPSE_SS200_PID 0x9090 /* SS200 - SNAP Stick 200 */
+/*
+ * CustomWare / ShipModul NMEA multiplexers product ids (FTDI_VID)
+ */
+#define FTDI_CUSTOMWARE_MINIPLEX_PID 0xfd48 /* MiniPlex first generation NMEA Multiplexer */
+#define FTDI_CUSTOMWARE_MINIPLEX2_PID 0xfd49 /* MiniPlex-USB and MiniPlex-2 series */
+#define FTDI_CUSTOMWARE_MINIPLEX2WI_PID 0xfd4a /* MiniPlex-2Wi */
+#define FTDI_CUSTOMWARE_MINIPLEX3_PID 0xfd4b /* MiniPlex-3 series */
+
/********************************/
/** third-party VID/PID combos **/
--
1.9.1
From: Paul Bolle <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit fe2b592173ff0274e70dc44d1d28c19bb995aa7c upstream.
wf_unregister_client() increments the client count when a client
unregisters. That is obviously incorrect. Decrement that client count
instead.
Fixes: 75722d3992f5 ("[PATCH] ppc64: Thermal control for SMU based machines")
Signed-off-by: Paul Bolle <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/macintosh/windfarm_core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/macintosh/windfarm_core.c b/drivers/macintosh/windfarm_core.c
index ce88979..004fa10 100644
--- a/drivers/macintosh/windfarm_core.c
+++ b/drivers/macintosh/windfarm_core.c
@@ -421,7 +421,7 @@ int wf_unregister_client(struct notifier_block *nb)
{
mutex_lock(&wf_lock);
blocking_notifier_chain_unregister(&wf_client_list, nb);
- wf_client_count++;
+ wf_client_count--;
if (wf_client_count == 0)
wf_stop_thread();
mutex_unlock(&wf_lock);
--
1.9.1
From: Sudip Mukherjee <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit bab383de3b84e584b0f09227151020b2a43dc34c upstream.
parport_find_base() will implicitly do parport_get_port() which
increases the refcount. Then parport_register_device() will again
increment the refcount. But while unloading the module we only do
parport_unregister_device(), decrementing the refcount just once.
We add a parport_put_port() to neutralize the effect of
parport_get_port().
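For illustration, the intended reference pairing looks roughly like this
(a minimal sketch, not the actual ks0108 code; the names my_port, my_pardev
and io are made up):
    struct parport *my_port;
    struct pardevice *my_pardev;
    my_port = parport_find_base(io);        /* implicitly does parport_get_port() */
    my_pardev = parport_register_device(my_port, "mydev",
                    NULL, NULL, NULL, PARPORT_DEV_EXCL, NULL); /* takes its own reference */
    parport_put_port(my_port);              /* balance the implicit get */
    /* ... on module unload ... */
    parport_unregister_device(my_pardev);   /* drops the remaining reference */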
Signed-off-by: Sudip Mukherjee <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/auxdisplay/ks0108.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/auxdisplay/ks0108.c b/drivers/auxdisplay/ks0108.c
index 5b93852..0d75285 100644
--- a/drivers/auxdisplay/ks0108.c
+++ b/drivers/auxdisplay/ks0108.c
@@ -139,6 +139,7 @@ static int __init ks0108_init(void)
ks0108_pardevice = parport_register_device(ks0108_parport, KS0108_NAME,
NULL, NULL, NULL, PARPORT_DEV_EXCL, NULL);
+ parport_put_port(ks0108_parport);
if (ks0108_pardevice == NULL) {
printk(KERN_ERR KS0108_NAME ": ERROR: "
"parport didn't register new device\n");
--
1.9.1
From: Thomas Huth <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 1c2cb594441d02815d304cccec9742ff5c707495 upstream.
The EPOW interrupt handler uses rtas_get_sensor(), which in turn
uses rtas_busy_delay() to wait for RTAS to become ready when
necessary. But rtas_busy_delay() is annotated with might_sleep()
and thus must not be used by interrupt handlers like the EPOW handler!
This leads to the following BUG when CONFIG_DEBUG_ATOMIC_SLEEP is
enabled:
BUG: sleeping function called from invalid context at arch/powerpc/kernel/rtas.c:496
in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/1
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.2.0-rc2-thuth #6
Call Trace:
[c00000007ffe7b90] [c000000000807670] dump_stack+0xa0/0xdc (unreliable)
[c00000007ffe7bc0] [c0000000000e1f14] ___might_sleep+0x134/0x180
[c00000007ffe7c20] [c00000000002aec0] rtas_busy_delay+0x30/0xd0
[c00000007ffe7c50] [c00000000002bde4] rtas_get_sensor+0x74/0xe0
[c00000007ffe7ce0] [c000000000083264] ras_epow_interrupt+0x44/0x450
[c00000007ffe7d90] [c000000000120260] handle_irq_event_percpu+0xa0/0x300
[c00000007ffe7e70] [c000000000120524] handle_irq_event+0x64/0xc0
[c00000007ffe7eb0] [c000000000124dbc] handle_fasteoi_irq+0xec/0x260
[c00000007ffe7ef0] [c00000000011f4f0] generic_handle_irq+0x50/0x80
[c00000007ffe7f20] [c000000000010f3c] __do_irq+0x8c/0x200
[c00000007ffe7f90] [c0000000000236cc] call_do_irq+0x14/0x24
[c00000007e6f39e0] [c000000000011144] do_IRQ+0x94/0x110
[c00000007e6f3a30] [c000000000002594] hardware_interrupt_common+0x114/0x180
Fix this issue by introducing a new rtas_get_sensor_fast() function
that does not use rtas_busy_delay() - and thus can only be used for
sensors that do not cause a BUSY condition - known as "fast" sensors.
The EPOW sensor is defined to be "fast" in sPAPR - mpe.
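To illustrate the intended usage (a sketch only, reusing the token names
from the ras.c diff below; error handling omitted):
    int state, rc;
    /* process context: may sleep while RTAS reports busy/extended delay */
    rc = rtas_get_sensor(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX, &state);
    /* interrupt context, e.g. the EPOW handler: must not sleep, so only
     * sensors defined as "fast" may be read this way */
    rc = rtas_get_sensor_fast(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX, &state);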
Fixes: 587f83e8dd50 ("powerpc/pseries: Use rtas_get_sensor in RAS code")
Signed-off-by: Thomas Huth <[email protected]>
Reviewed-by: Nathan Fontenot <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
arch/powerpc/include/asm/rtas.h | 1 +
arch/powerpc/kernel/rtas.c | 17 +++++++++++++++++
arch/powerpc/platforms/pseries/ras.c | 3 ++-
3 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
index 5e7e008..8e17206 100644
--- a/arch/powerpc/include/asm/rtas.h
+++ b/arch/powerpc/include/asm/rtas.h
@@ -253,6 +253,7 @@ extern void rtas_power_off(void);
extern void rtas_halt(void);
extern void rtas_os_term(char *str);
extern int rtas_get_sensor(int sensor, int index, int *state);
+extern int rtas_get_sensor_fast(int sensor, int index, int *state);
extern int rtas_get_power_level(int powerdomain, int *level);
extern int rtas_set_power_level(int powerdomain, int level, int *setlevel);
extern bool rtas_indicator_present(int token, int *maxindex);
diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
index 225e9f2..b42cdc3 100644
--- a/arch/powerpc/kernel/rtas.c
+++ b/arch/powerpc/kernel/rtas.c
@@ -585,6 +585,23 @@ int rtas_get_sensor(int sensor, int index, int *state)
}
EXPORT_SYMBOL(rtas_get_sensor);
+int rtas_get_sensor_fast(int sensor, int index, int *state)
+{
+ int token = rtas_token("get-sensor-state");
+ int rc;
+
+ if (token == RTAS_UNKNOWN_SERVICE)
+ return -ENOENT;
+
+ rc = rtas_call(token, 2, 2, state, sensor, index);
+ WARN_ON(rc == RTAS_BUSY || (rc >= RTAS_EXTENDED_DELAY_MIN &&
+ rc <= RTAS_EXTENDED_DELAY_MAX));
+
+ if (rc < 0)
+ return rtas_error_rc(rc);
+ return rc;
+}
+
bool rtas_indicator_present(int token, int *maxindex)
{
int proplen, count, i;
diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
index c4dfccd..2338e6e 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -187,7 +187,8 @@ static irqreturn_t ras_epow_interrupt(int irq, void *dev_id)
int state;
int critical;
- status = rtas_get_sensor(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX, &state);
+ status = rtas_get_sensor_fast(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX,
+ &state);
if (state > 3)
critical = 1; /* Time Critical */
--
1.9.1
From: Mark Rustad <[email protected]>
3.4.112-rc1 review patch. If anyone has any objections, please let me know.
------------------
commit 7aa6ca4d39edf01f997b9e02cf6d2fdeb224f351 upstream.
Set the PCI_DEV_FLAGS_VPD_REF_F0 flag on all Intel Ethernet device
functions other than function 0, so that on multi-function devices, we will
always read VPD from function 0 instead of from the other functions.
[bhelgaas: changelog]
Signed-off-by: Mark Rustad <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Acked-by: Alexander Duyck <[email protected]>
[bwh: Backported to 3.2:
- Put the class check in the new function as there is no
DECLARE_PCI_FIXUP_CLASS_EARLY()
- Adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Zefan Li <[email protected]>
---
drivers/pci/quirks.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 887af797..3ce87c8 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -1883,6 +1883,15 @@ static void __devinit quirk_netmos(struct pci_dev *dev)
DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_NETMOS, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL, 8, quirk_netmos);
+static void quirk_f0_vpd_link(struct pci_dev *dev)
+{
+ if ((dev->class >> 8) != PCI_CLASS_NETWORK_ETHERNET ||
+ !dev->multifunction || !PCI_FUNC(dev->devfn))
+ return;
+ dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, quirk_f0_vpd_link);
+
static void __devinit quirk_e100_interrupt(struct pci_dev *dev)
{
u16 command, pmcsr;
--
1.9.1
Hi Zefan,
In dlm_master_request_handler, the following change is missing:
- dlm_put(dlm);
+ if (!dispatched)
+ dlm_put(dlm);
Thanks,
Joseph
On 2016/4/18 18:46, [email protected] wrote:
> From: Joseph Qi <[email protected]>
>
> 3.4.112-rc1 review patch. If anyone has any objections, please let me know.
>
> ------------------
>
>
> commit 012572d4fc2e4ddd5c8ec8614d51414ec6cae02a upstream.
>
> The order of the following three spinlocks should be:
> dlm_domain_lock < dlm_ctxt->spinlock < dlm_lock_resource->spinlock
>
> But dlm_dispatch_assert_master() is called while holding
> dlm_ctxt->spinlock and dlm_lock_resource->spinlock, and then it calls
> dlm_grab() which will take dlm_domain_lock.
>
> Once another thread (for example, dlm_query_join_handler) has already
> taken dlm_domain_lock, and tries to take dlm_ctxt->spinlock, deadlock
> happens.
>
> Signed-off-by: Joseph Qi <[email protected]>
> Cc: Joel Becker <[email protected]>
> Cc: Mark Fasheh <[email protected]>
> Cc: "Junxiao Bi" <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> Signed-off-by: Linus Torvalds <[email protected]>
> [lizf: Backported to 3.4: adjust context]
> Signed-off-by: Zefan Li <[email protected]>
> ---
> fs/ocfs2/dlm/dlmmaster.c | 4 +++-
> fs/ocfs2/dlm/dlmrecovery.c | 6 +++++-
> 2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
> index 7ba6ac1..751efa8 100644
> --- a/fs/ocfs2/dlm/dlmmaster.c
> +++ b/fs/ocfs2/dlm/dlmmaster.c
> @@ -1411,6 +1411,7 @@ int dlm_master_request_handler(struct o2net_msg *msg, u32 len, void *data,
> int found, ret;
> int set_maybe;
> int dispatch_assert = 0;
> + int dispatched = 0;
>
> if (!dlm_grab(dlm))
> return DLM_MASTER_RESP_NO;
> @@ -1617,6 +1618,8 @@ send_response:
> mlog(ML_ERROR, "failed to dispatch assert master work\n");
> response = DLM_MASTER_RESP_ERROR;
> dlm_lockres_put(res);
> + } else {
> + dispatched = 1;
> }
> } else {
> if (res)
> @@ -2041,7 +2044,6 @@ int dlm_dispatch_assert_master(struct dlm_ctxt *dlm,
>
>
> /* queue up work for dlm_assert_master_worker */
> - dlm_grab(dlm); /* get an extra ref for the work item */
> dlm_init_work_item(dlm, item, dlm_assert_master_worker, NULL);
> item->u.am.lockres = res; /* already have a ref */
> /* can optionally ignore node numbers higher than this node */
> diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
> index d15b071..0e5013e 100644
> --- a/fs/ocfs2/dlm/dlmrecovery.c
> +++ b/fs/ocfs2/dlm/dlmrecovery.c
> @@ -1689,6 +1689,7 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
> unsigned int hash;
> int master = DLM_LOCK_RES_OWNER_UNKNOWN;
> u32 flags = DLM_ASSERT_MASTER_REQUERY;
> + int dispatched = 0;
>
> if (!dlm_grab(dlm)) {
> /* since the domain has gone away on this
> @@ -1710,6 +1711,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
> mlog_errno(-ENOMEM);
> /* retry!? */
> BUG();
> + } else {
> + dispatched = 1;
> }
> } else /* put.. incase we are not the master */
> dlm_lockres_put(res);
> @@ -1717,7 +1720,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
> }
> spin_unlock(&dlm->spinlock);
>
> - dlm_put(dlm);
> + if (!dispatched)
> + dlm_put(dlm);
> return master;
> }
>
>
On Mon, Apr 18, 2016 at 06:45:38PM +0800, [email protected] wrote:
> From: Zefan Li <[email protected]>
>
> This is the start of the stable review cycle for the 3.4.112 release.
> There are 92 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Wed Apr 20 10:44:28 UTC 2016.
> Anything received after that time might be too late.
>
Build results:
total: 99 pass: 99 fail: 0
Qemu test results:
total: 64 pass: 64 fail: 0
Details are available at http://kerneltests.org/builders.
Guenter
On 2016/4/18 19:29, Joseph Qi wrote:
> Hi Zefan,
> In dlm_master_request_handler, the following change is missing:
> - dlm_put(dlm);
> + if (!dispatched)
> + dlm_put(dlm);
Oops... I guess I forgot to refresh the patch after adjusting the context.
Thanks for the review!
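For reference, once the missing hunk is applied, the end of
dlm_master_request_handler() should look roughly like this (a sketch based
on the upstream commit; the surrounding 3.4 context may differ slightly):
	if (!dispatched)
		dlm_put(dlm);
	return response;
    }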
>
> Thanks,
> Joseph
>
> On 2016/4/18 18:46, [email protected] wrote:
>> From: Joseph Qi <[email protected]>
>>
>> 3.4.112-rc1 review patch. If anyone has any objections, please let me know.
>>
>> ------------------
>>
>>
>> commit 012572d4fc2e4ddd5c8ec8614d51414ec6cae02a upstream.
>>
>> The order of the following three spinlocks should be:
>> dlm_domain_lock < dlm_ctxt->spinlock < dlm_lock_resource->spinlock
>>
>> But dlm_dispatch_assert_master() is called while holding
>> dlm_ctxt->spinlock and dlm_lock_resource->spinlock, and then it calls
>> dlm_grab() which will take dlm_domain_lock.
>>
>> Once another thread (for example, dlm_query_join_handler) has already
>> taken dlm_domain_lock, and tries to take dlm_ctxt->spinlock, deadlock
>> happens.
>>
>> Signed-off-by: Joseph Qi <[email protected]>
>> Cc: Joel Becker <[email protected]>
>> Cc: Mark Fasheh <[email protected]>
>> Cc: "Junxiao Bi" <[email protected]>
>> Signed-off-by: Andrew Morton <[email protected]>
>> Signed-off-by: Linus Torvalds <[email protected]>
>> [lizf: Backported to 3.4: adjust context]
>> Signed-off-by: Zefan Li <[email protected]>
>> ---
>> fs/ocfs2/dlm/dlmmaster.c | 4 +++-
>> fs/ocfs2/dlm/dlmrecovery.c | 6 +++++-
>> 2 files changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
>> index 7ba6ac1..751efa8 100644
>> --- a/fs/ocfs2/dlm/dlmmaster.c
>> +++ b/fs/ocfs2/dlm/dlmmaster.c
>> @@ -1411,6 +1411,7 @@ int dlm_master_request_handler(struct o2net_msg *msg, u32 len, void *data,
>> int found, ret;
>> int set_maybe;
>> int dispatch_assert = 0;
>> + int dispatched = 0;
>>
>> if (!dlm_grab(dlm))
>> return DLM_MASTER_RESP_NO;
>> @@ -1617,6 +1618,8 @@ send_response:
>> mlog(ML_ERROR, "failed to dispatch assert master work\n");
>> response = DLM_MASTER_RESP_ERROR;
>> dlm_lockres_put(res);
>> + } else {
>> + dispatched = 1;
>> }
>> } else {
>> if (res)
>> @@ -2041,7 +2044,6 @@ int dlm_dispatch_assert_master(struct dlm_ctxt *dlm,
>>
>>
>> /* queue up work for dlm_assert_master_worker */
>> - dlm_grab(dlm); /* get an extra ref for the work item */
>> dlm_init_work_item(dlm, item, dlm_assert_master_worker, NULL);
>> item->u.am.lockres = res; /* already have a ref */
>> /* can optionally ignore node numbers higher than this node */
>> diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
>> index d15b071..0e5013e 100644
>> --- a/fs/ocfs2/dlm/dlmrecovery.c
>> +++ b/fs/ocfs2/dlm/dlmrecovery.c
>> @@ -1689,6 +1689,7 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>> unsigned int hash;
>> int master = DLM_LOCK_RES_OWNER_UNKNOWN;
>> u32 flags = DLM_ASSERT_MASTER_REQUERY;
>> + int dispatched = 0;
>>
>> if (!dlm_grab(dlm)) {
>> /* since the domain has gone away on this
>> @@ -1710,6 +1711,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>> mlog_errno(-ENOMEM);
>> /* retry!? */
>> BUG();
>> + } else {
>> + dispatched = 1;
>> }
>> } else /* put.. incase we are not the master */
>> dlm_lockres_put(res);
>> @@ -1717,7 +1720,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
>> }
>> spin_unlock(&dlm->spinlock);
>>
>> - dlm_put(dlm);
>> + if (!dispatched)
>> + dlm_put(dlm);
>> return master;
>> }
>>
>>
>
>
> .
>
On 2016/4/19 0:37, Guenter Roeck wrote:
> On Mon, Apr 18, 2016 at 06:45:38PM +0800, [email protected] wrote:
>> From: Zefan Li <[email protected]>
>>
>> This is the start of the stable review cycle for the 3.4.112 release.
>> There are 92 patches in this series, all will be posted as a response
>> to this one. If anyone has any issues with these being applied, please
>> let me know.
>>
>> Responses should be made by Wed Apr 20 10:44:28 UTC 2016.
>> Anything received after that time might be too late.
>>
> Build results:
> total: 99 pass: 99 fail: 0
> Qemu test results:
> total: 64 pass: 64 fail: 0
>
> Details are available at http://kerneltests.org/builders.
>
Thanks for testing!