This is the start of the stable review cycle for the 3.2.32 release.
There are 147 patches in this series, which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Tue Oct 16 23:00:00 UTC 2012.
Anything received after that time might be too late.
A combined patch relative to 3.2.31 will be posted as an additional
response to this, and the diffstat can be found below.
Ben.
-------------
Documentation/virtual/lguest/lguest.c | 1 +
Makefile | 4 +-
arch/arm/plat-omap/counter_32k.c | 21 +-
arch/mips/Makefile | 2 +-
arch/mips/kernel/Makefile | 2 +-
arch/mn10300/Makefile | 2 +-
arch/powerpc/platforms/pseries/eeh_driver.c | 94 ++++++--
arch/x86/include/asm/pgtable.h | 11 +-
arch/x86/platform/efi/efi.c | 1 +
block/blk-core.c | 2 +-
drivers/acpi/bus.c | 8 +-
drivers/bluetooth/btusb.c | 2 +-
drivers/char/ttyprintk.c | 2 +-
drivers/dma/dmaengine.c | 4 +-
drivers/firewire/core-cdev.c | 4 +-
drivers/firmware/efivars.c | 17 +-
drivers/gpu/drm/i915/i915_gem.c | 17 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 3 +-
drivers/gpu/drm/i915/i915_gem_tiling.c | 4 +-
drivers/gpu/drm/i915/i915_irq.c | 24 +-
drivers/gpu/drm/i915/i915_reg.h | 7 +
drivers/gpu/drm/i915/intel_display.c | 35 ++-
drivers/gpu/drm/i915/intel_hdmi.c | 17 +-
drivers/gpu/drm/radeon/evergreen.c | 185 +++++++--------
drivers/gpu/drm/radeon/evergreen_reg.h | 3 +
drivers/gpu/drm/radeon/evergreend.h | 7 +
drivers/gpu/drm/radeon/radeon_asic.h | 1 +
drivers/gpu/drm/radeon/radeon_irq_kms.c | 10 +
drivers/gpu/drm/radeon/radeon_pm.c | 8 +-
drivers/infiniband/ulp/ipoib/ipoib_main.c | 2 +-
drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 19 +-
drivers/infiniband/ulp/srp/ib_srp.c | 3 +-
drivers/input/mouse/synaptics.c | 31 ++-
drivers/iommu/intel-iommu.c | 4 +-
drivers/media/rc/ite-cir.c | 2 +-
drivers/media/video/gspca/pac7302.c | 2 +
drivers/mmc/host/omap_hsmmc.c | 4 +-
drivers/mmc/host/sdhci-s3c.c | 2 +-
drivers/mmc/host/sh_mmcif.c | 4 +
drivers/mtd/maps/autcpu12-nvram.c | 19 +-
drivers/mtd/mtdpart.c | 5 +-
drivers/mtd/nand/nand_bbt.c | 2 +-
drivers/mtd/nand/nandsim.c | 1 +
drivers/mtd/nand/omap2.c | 3 +-
drivers/mtd/ubi/build.c | 5 +
drivers/mtd/ubi/scan.c | 2 +-
drivers/net/can/mscan/mpc5xxx_can.c | 4 +-
drivers/net/ethernet/intel/e1000/e1000_main.c | 8 +-
drivers/net/ethernet/realtek/r8169.c | 21 +-
drivers/net/rionet.c | 20 +-
drivers/net/wireless/ath/ath9k/pci.c | 5 +-
drivers/pci/probe.c | 6 +-
drivers/s390/scsi/zfcp_aux.c | 1 +
drivers/s390/scsi/zfcp_ccw.c | 73 +++++-
drivers/s390/scsi/zfcp_cfdc.c | 2 +-
drivers/s390/scsi/zfcp_dbf.c | 22 +-
drivers/s390/scsi/zfcp_dbf.h | 1 +
drivers/s390/scsi/zfcp_def.h | 2 +
drivers/s390/scsi/zfcp_ext.h | 2 +
drivers/s390/scsi/zfcp_fsf.c | 23 +-
drivers/s390/scsi/zfcp_qdio.c | 16 +-
drivers/s390/scsi/zfcp_sysfs.c | 18 +-
drivers/s390/scsi/zfcp_unit.c | 36 ++-
drivers/scsi/atp870u.c | 11 +-
drivers/scsi/device_handler/scsi_dh_alua.c | 3 +-
drivers/scsi/hpsa.c | 41 +++-
drivers/scsi/hpsa.h | 2 +
drivers/scsi/hpsa_cmd.h | 1 +
drivers/scsi/ibmvscsi/ibmvscsi.c | 3 +
drivers/scsi/isci/init.c | 1 -
drivers/scsi/isci/probe_roms.c | 1 -
drivers/scsi/scsi_sysfs.c | 30 ++-
drivers/staging/comedi/comedi_fops.c | 5 +-
drivers/staging/comedi/drivers/jr3_pci.c | 2 +-
drivers/staging/comedi/drivers/s626.c | 2 +-
drivers/staging/speakup/speakup_soft.c | 13 +-
drivers/target/iscsi/iscsi_target.c | 2 +-
drivers/target/iscsi/iscsi_target_core.h | 4 +-
drivers/target/iscsi/iscsi_target_tpg.c | 12 +
drivers/target/target_core_configfs.c | 8 +-
drivers/target/target_core_file.c | 41 +++-
drivers/target/target_core_file.h | 1 +
drivers/tty/n_gsm.c | 104 +++++----
drivers/tty/n_tty.c | 3 +-
drivers/tty/serial/8250_pci.c | 9 +-
drivers/tty/serial/amba-pl011.c | 15 +-
drivers/usb/host/xhci-mem.c | 7 +
drivers/usb/host/xhci-pci.c | 1 +
drivers/usb/host/xhci-ring.c | 286 +++++++++++++++++++++++-
drivers/usb/host/xhci.c | 39 +++-
drivers/usb/host/xhci.h | 20 ++
drivers/usb/serial/ftdi_sio.c | 2 +
drivers/usb/serial/ftdi_sio_ids.h | 5 +
drivers/usb/serial/option.c | 3 +-
drivers/usb/serial/qcaux.c | 10 +-
fs/autofs4/root.c | 6 +-
fs/binfmt_elf.c | 19 +-
fs/ecryptfs/ecryptfs_kernel.h | 2 +
fs/ecryptfs/file.c | 100 ++++-----
fs/ecryptfs/inode.c | 65 +++---
fs/ecryptfs/main.c | 1 +
fs/ecryptfs/mmap.c | 39 ++--
fs/ext4/inode.c | 25 ++-
fs/ext4/move_extent.c | 174 +++++---------
fs/ext4/namei.c | 2 -
fs/fs-writeback.c | 1 +
fs/jffs2/wbuf.c | 8 +-
fs/lockd/mon.c | 4 +-
fs/nfs/blocklayout/blocklayout.c | 176 ++++++++++++++-
fs/nfs/blocklayout/blocklayout.h | 1 +
fs/udf/super.c | 5 +-
include/linux/mempolicy.h | 2 +-
include/linux/pci_ids.h | 1 -
include/net/ip_vs.h | 2 +-
kernel/rcutree.c | 4 +-
kernel/sched_stoptask.c | 22 +-
kernel/sys.c | 1 +
kernel/workqueue.c | 27 ++-
lib/gcd.c | 3 +
mm/hugetlb.c | 4 +-
mm/mempolicy.c | 116 ++++++----
mm/slab.c | 6 +-
mm/truncate.c | 3 +-
net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c | 8 +
net/ipv4/netfilter/nf_nat_sip.c | 10 +-
net/netfilter/nf_conntrack_expect.c | 29 +--
net/netfilter/xt_hashlimit.c | 8 +-
net/netfilter/xt_limit.c | 13 +-
net/sunrpc/xprtsock.c | 21 +-
scripts/Kbuild.include | 14 +-
scripts/Makefile.fwinst | 2 +-
scripts/gcc-version.sh | 6 +-
scripts/gcc-x86_32-has-stack-protector.sh | 2 +-
scripts/gcc-x86_64-has-stack-protector.sh | 2 +-
scripts/kconfig/check.sh | 2 +-
scripts/kconfig/lxdialog/check-lxdialog.sh | 2 +-
scripts/kconfig/streamline_config.pl | 2 +
scripts/package/buildtar | 2 +-
sound/drivers/aloop.c | 6 +
sound/pci/hda/patch_conexant.c | 94 ++++++--
tools/hv/hv_kvp_daemon.c | 8 +-
tools/perf/Makefile | 2 +-
tools/power/cpupower/Makefile | 2 +-
143 files changed, 1815 insertions(+), 794 deletions(-)
--
Ben Hutchings
Always try to do things in chronological order;
it's less confusing that way.
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Russ Gorby <[email protected]>
commit 5e44708f75b0f8712da715d6babb0c21089b2317 upstream.
There were some locking holes in the management of the MUX's
message queue for 2 code paths:
1) gsmld_write_wakeup
2) receipt of CMD_FCON flow-control message
In both cases gsm_data_kick is called without locking, so it can collide
with other instances of gsm_data_kick (pulling messages off tx_tail)
or potentially with instances of __gsm_data_queue (adding messages to tx_head).
Changed to take the tx_lock in these 2 cases.
Signed-off-by: Russ Gorby <[email protected]>
Tested-by: Yin, Fengwei <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index 5f68f2a..0d93e51 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -1205,6 +1205,8 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
u8 *data, int clen)
{
u8 buf[1];
+ unsigned long flags;
+
switch (command) {
case CMD_CLD: {
struct gsm_dlci *dlci = gsm->dlci[0];
@@ -1225,7 +1227,9 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
gsm->constipated = 0;
gsm_control_reply(gsm, CMD_FCON, NULL, 0);
/* Kick the link in case it is idling */
+ spin_lock_irqsave(&gsm->tx_lock, flags);
gsm_data_kick(gsm);
+ spin_unlock_irqrestore(&gsm->tx_lock, flags);
break;
case CMD_FCOFF:
/* Modem wants us to STFU */
@@ -2392,12 +2396,12 @@ static void gsmld_write_wakeup(struct tty_struct *tty)
/* Queue poll */
clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ spin_lock_irqsave(&gsm->tx_lock, flags);
gsm_data_kick(gsm);
if (gsm->tx_bytes < TX_THRESH_LO) {
- spin_lock_irqsave(&gsm->tx_lock, flags);
gsm_dlci_data_sweep(gsm);
- spin_unlock_irqrestore(&gsm->tx_lock, flags);
}
+ spin_unlock_irqrestore(&gsm->tx_lock, flags);
}
/**
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Frederic Berat <[email protected]>
commit c01af4fec2c8f303d6b3354d44308d9e6bef8026 upstream.
- Correcting handling of FCon/FCoff in order to respect the 27.010 spec.
- FCon/FCoff will override flow control on all DLCIs except for
dlci0, since we must still be able to send control frames.
- Handling DLCI constipation according to the FC, RTC and RTR values.
- Modifying gsm_dlci_data_kick and gsm_dlci_data_sweep according
to the dlci constipated value.
Signed-off-by: Frederic Berat <[email protected]>
Signed-off-by: Russ Gorby <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 79 +++++++++++++++++++++++++++++++++++++--------------
1 file changed, 57 insertions(+), 22 deletions(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index c028f35..db15b56 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -673,6 +673,8 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
*
* The tty device has called us to indicate that room has appeared in
* the transmit queue. Ram more data into the pipe if we have any
+ * If we have been flow-stopped by a CMD_FCOFF, then we can only
+ * send messages on DLCI0 until CMD_FCON
*
* FIXME: lock against link layer control transmissions
*/
@@ -680,15 +682,19 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
static void gsm_data_kick(struct gsm_mux *gsm)
{
struct gsm_msg *msg = gsm->tx_head;
+ struct gsm_msg *free_msg;
int len;
int skip_sof = 0;
- /* FIXME: We need to apply this solely to data messages */
- if (gsm->constipated)
- return;
-
- while (gsm->tx_head != NULL) {
- msg = gsm->tx_head;
+ while (msg) {
+ if (gsm->constipated && msg->addr) {
+ msg = msg->next;
+ continue;
+ }
+ if (gsm->dlci[msg->addr]->constipated) {
+ msg = msg->next;
+ continue;
+ }
if (gsm->encoding != 0) {
gsm->txframe[0] = GSM1_SOF;
len = gsm_stuff_frame(msg->data,
@@ -711,15 +717,19 @@ static void gsm_data_kick(struct gsm_mux *gsm)
len - skip_sof) < 0)
break;
/* FIXME: Can eliminate one SOF in many more cases */
- gsm->tx_head = msg->next;
- if (gsm->tx_head == NULL)
- gsm->tx_tail = NULL;
gsm->tx_bytes -= msg->len;
- kfree(msg);
/* For a burst of frames skip the extra SOF within the
burst */
skip_sof = 1;
+
+ if (gsm->tx_head == msg)
+ gsm->tx_head = msg->next;
+ free_msg = msg;
+ msg = msg->next;
+ kfree(free_msg);
}
+ if (!gsm->tx_head)
+ gsm->tx_tail = NULL;
}
/**
@@ -738,6 +748,8 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
u8 *dp = msg->data;
u8 *fcs = dp + msg->len;
+ WARN_ONCE(dlci->constipated, "%s: queueing from a constipated DLCI",
+ __func__);
/* Fill in the header */
if (gsm->encoding == 0) {
if (msg->len < 128)
@@ -944,6 +956,9 @@ static void gsm_dlci_data_sweep(struct gsm_mux *gsm)
break;
dlci = gsm->dlci[i];
if (dlci == NULL || dlci->constipated) {
+ if (dlci && (debug & 0x20))
+ pr_info("%s: DLCI %d is constipated",
+ __func__, i);
i++;
continue;
}
@@ -973,6 +988,13 @@ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
unsigned long flags;
int sweep;
+ if (dlci->constipated) {
+ if (debug & 0x20)
+ pr_info("%s: DLCI %d is constipated",
+ __func__, dlci->addr);
+ return;
+ }
+
spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
/* If we have nothing running then we need to fire up */
sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO);
@@ -1030,6 +1052,7 @@ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
{
int mlines = 0;
u8 brk = 0;
+ int fc;
/* The modem status command can either contain one octet (v.24 signals)
or two octets (v.24 signals + break signals). The length field will
@@ -1041,19 +1064,27 @@ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
else {
brk = modem & 0x7f;
modem = (modem >> 7) & 0x7f;
- };
+ }
/* Flow control/ready to communicate */
- if (modem & MDM_FC) {
+ fc = (modem & MDM_FC) || !(modem & MDM_RTR);
+ if (fc && !dlci->constipated) {
+ if (debug & 0x20)
+ pr_info("%s: DLCI %d START constipated (tx_bytes=%d)",
+ __func__, dlci->addr, dlci->gsm->tx_bytes);
/* Need to throttle our output on this device */
dlci->constipated = 1;
- }
- if (modem & MDM_RTC) {
- mlines |= TIOCM_DSR | TIOCM_DTR;
+ } else if (!fc && dlci->constipated) {
+ if (debug & 0x20)
+ pr_info("%s: DLCI %d END constipated (tx_bytes=%d)",
+ __func__, dlci->addr, dlci->gsm->tx_bytes);
dlci->constipated = 0;
gsm_dlci_data_kick(dlci);
}
+
/* Map modem bits */
+ if (modem & MDM_RTC)
+ mlines |= TIOCM_DSR | TIOCM_DTR;
if (modem & MDM_RTR)
mlines |= TIOCM_RTS | TIOCM_CTS;
if (modem & MDM_IC)
@@ -1209,17 +1240,21 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
gsm_control_reply(gsm, CMD_TEST, data, clen);
break;
case CMD_FCON:
- /* Modem wants us to STFU */
- gsm->constipated = 1;
- gsm_control_reply(gsm, CMD_FCON, NULL, 0);
- break;
- case CMD_FCOFF:
/* Modem can accept data again */
+ if (debug & 0x20)
+ pr_info("%s: GSM END constipation", __func__);
gsm->constipated = 0;
- gsm_control_reply(gsm, CMD_FCOFF, NULL, 0);
+ gsm_control_reply(gsm, CMD_FCON, NULL, 0);
/* Kick the link in case it is idling */
gsm_data_kick(gsm);
break;
+ case CMD_FCOFF:
+ /* Modem wants us to STFU */
+ if (debug & 0x20)
+ pr_info("%s: GSM START constipation", __func__);
+ gsm->constipated = 1;
+ gsm_control_reply(gsm, CMD_FCOFF, NULL, 0);
+ break;
case CMD_MSC:
/* Out of band modem line change indicator for a DLCI */
gsm_control_modem(gsm, data, clen);
@@ -2276,7 +2311,7 @@ static void gsmld_receive_buf(struct tty_struct *tty, const unsigned char *cp,
gsm->error(gsm, *dp, flags);
break;
default:
- WARN_ONCE("%s: unknown flag %d\n",
+ WARN_ONCE(1, "%s: unknown flag %d\n",
tty_name(tty, buf), flags);
break;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chris Wilson <[email protected]>
commit 7dd4906586274f3945f2aeaaa5a33b451c3b4bba upstream.
The BLT commands on gen2/3 utilize the fence registers and so we cannot
modify any fences for the object whilst those commands are in flight.
Currently we marked tiled commands as occupying a fence, but forgot to
restrict the untiled commands from preventing a fence being assigned
before they were completed.
One side-effect is that we then have to double-check that a fence was
allocated for a fenced buffer during move-to-active.
Reported-by: Jiri Slaby <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=43427
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=47990
Reviewed-by: Daniel Vetter <[email protected]>
Testcase: i-g-t/tests/gem_tiled_after_untiled_blt
Tested-by: Daniel Vetter <[email protected]>
Signed-off-by: Chris Wilson <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
[bwh: Backported to 3.2: The nesting of if-statements in the old
i915_gem_execbuffer_reserve() differs from pin_and_fence_object(),
so don't move the assignment of obj->pending_fenced_gpu_access but
adjust the boolean expression as recommended by Daniel Vetter.]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/i915/i915_gem.c | 15 +++++++++------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 2 +-
2 files changed, 10 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1543,16 +1543,19 @@ i915_gem_object_move_to_active(struct dr
list_move_tail(&obj->ring_list, &ring->active_list);
obj->last_rendering_seqno = seqno;
- if (obj->fenced_gpu_access) {
- struct drm_i915_fence_reg *reg;
-
- BUG_ON(obj->fence_reg == I915_FENCE_REG_NONE);
+ if (obj->fenced_gpu_access) {
obj->last_fenced_seqno = seqno;
obj->last_fenced_ring = ring;
- reg = &dev_priv->fence_regs[obj->fence_reg];
- list_move_tail(®->lru_list, &dev_priv->mm.fence_list);
+ /* Bump MRU to take account of the delayed flush */
+ if (obj->fence_reg != I915_FENCE_REG_NONE) {
+ struct drm_i915_fence_reg *reg;
+
+ reg = &dev_priv->fence_regs[obj->fence_reg];
+ list_move_tail(®->lru_list,
+ &dev_priv->mm.fence_list);
+ }
}
}
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -574,7 +574,8 @@ i915_gem_execbuffer_reserve(struct intel
if (ret)
break;
}
- obj->pending_fenced_gpu_access = need_fence;
+ obj->pending_fenced_gpu_access =
+ !!(entry->flags & EXEC_OBJECT_NEEDS_FENCE);
}
entry->offset = obj->gtt_offset;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Engelhardt <[email protected]>
commit 82e6bfe2fbc4d48852114c4f979137cd5bf1d1a8 upstream.
Commit v2.6.19-rc1~1272^2~41 tells us that r->cost != 0 can happen when
a running state is saved to userspace and then reinstated from there.
Make sure that private xt_limit area is initialized with correct values.
Otherwise, random matchings due to use of uninitialized memory.
Signed-off-by: Jan Engelhardt <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
net/netfilter/xt_limit.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/netfilter/xt_limit.c b/net/netfilter/xt_limit.c
index 5c22ce8..a4c1e45 100644
--- a/net/netfilter/xt_limit.c
+++ b/net/netfilter/xt_limit.c
@@ -117,11 +117,11 @@ static int limit_mt_check(const struct xt_mtchk_param *par)
/* For SMP, we only want to use one set of state. */
r->master = priv;
+ /* User avg in seconds * XT_LIMIT_SCALE: convert to jiffies *
+ 128. */
+ priv->prev = jiffies;
+ priv->credit = user2credits(r->avg * r->burst); /* Credits full. */
if (r->cost == 0) {
- /* User avg in seconds * XT_LIMIT_SCALE: convert to jiffies *
- 128. */
- priv->prev = jiffies;
- priv->credit = user2credits(r->avg * r->burst); /* Credits full. */
r->credit_cap = priv->credit; /* Credits full. */
r->cost = user2credits(r->avg);
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Elric Fu <[email protected]>
commit b63f4053cc8aa22a98e3f9a97845afe6c15d0a0d upstream.
According to xHCI spec section 4.6.1.1 and section 4.6.1.2,
after aborting a command on the command ring, xHC will
generate a command completion event with its completion
code set to Command Ring Stopped at least. If a command is
currently executing at the time of aborting a command, the xHC
will also generate a command completion event with its completion
code set to Command Abort. When the command ring is stopped,
software may remove, add, or rearrange Command Descriptors.
To cancel a command, software will initialize a command
descriptor for the cancel command, and add it into a
cancel_cmd_list of xhci. When the command ring is stopped,
software will find the command trbs described by the command
descriptors in cancel_cmd_list and modify them into No Op
commands. If software can't find the matching trbs, we can
assume they have already finished.
This patch should be backported to kernels as old as 3.0, that contain
the commit 7ed603ecf8b68ab81f4c83097d3063d43ec73bb8 "xhci: Add an
assertion to check for virt_dev=0 bug." That commit papers over a NULL
pointer dereference, and this patch fixes the underlying issue that
caused the NULL pointer dereference.
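The hunk below refers to struct xhci_cd and xhci->cancel_cmd_list, which are
added to xhci.h elsewhere in this series; as a rough sketch (field names taken
from how they are used below, not the verbatim definition), a command
descriptor carries:
	struct xhci_cd {
		struct list_head	cancel_cmd_list;	/* link in xhci->cancel_cmd_list */
		struct xhci_command	*command;		/* waiter to complete, if any */
		union xhci_trb		*cmd_trb;		/* command ring TRB to cancel */
	};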
Signed-off-by: Elric Fu <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Tested-by: Miroslav Sabljic <[email protected]>
[bwh: Backported to 3.2: inc_deq() needs an additional 'consumer' argument;
Jonathan Nieder worked out that this should be false]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci-ring.c | 171 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 165 insertions(+), 6 deletions(-)
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -1154,6 +1154,20 @@ static void handle_reset_ep_completion(s
}
}
+/* Complete the command and detele it from the devcie's command queue.
+ */
+static void xhci_complete_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
+ struct xhci_command *command, u32 status)
+{
+ command->status = status;
+ list_del(&command->cmd_list);
+ if (command->completion)
+ complete(command->completion);
+ else
+ xhci_free_command(xhci, command);
+}
+
+
/* Check to see if a command in the device's command queue matches this one.
* Signal the completion or free the command, and return 1. Return 0 if the
* completed command isn't at the head of the command list.
@@ -1172,15 +1186,144 @@ static int handle_cmd_in_cmd_wait_list(s
if (xhci->cmd_ring->dequeue != command->command_trb)
return 0;
- command->status = GET_COMP_CODE(le32_to_cpu(event->status));
- list_del(&command->cmd_list);
- if (command->completion)
- complete(command->completion);
- else
- xhci_free_command(xhci, command);
+ xhci_complete_cmd_in_cmd_wait_list(xhci, command,
+ GET_COMP_CODE(le32_to_cpu(event->status)));
return 1;
}
+/*
+ * Finding the command trb need to be cancelled and modifying it to
+ * NO OP command. And if the command is in device's command wait
+ * list, finishing and freeing it.
+ *
+ * If we can't find the command trb, we think it had already been
+ * executed.
+ */
+static void xhci_cmd_to_noop(struct xhci_hcd *xhci, struct xhci_cd *cur_cd)
+{
+ struct xhci_segment *cur_seg;
+ union xhci_trb *cmd_trb;
+ u32 cycle_state;
+
+ if (xhci->cmd_ring->dequeue == xhci->cmd_ring->enqueue)
+ return;
+
+ /* find the current segment of command ring */
+ cur_seg = find_trb_seg(xhci->cmd_ring->first_seg,
+ xhci->cmd_ring->dequeue, &cycle_state);
+
+ /* find the command trb matched by cd from command ring */
+ for (cmd_trb = xhci->cmd_ring->dequeue;
+ cmd_trb != xhci->cmd_ring->enqueue;
+ next_trb(xhci, xhci->cmd_ring, &cur_seg, &cmd_trb)) {
+ /* If the trb is link trb, continue */
+ if (TRB_TYPE_LINK_LE32(cmd_trb->generic.field[3]))
+ continue;
+
+ if (cur_cd->cmd_trb == cmd_trb) {
+
+ /* If the command in device's command list, we should
+ * finish it and free the command structure.
+ */
+ if (cur_cd->command)
+ xhci_complete_cmd_in_cmd_wait_list(xhci,
+ cur_cd->command, COMP_CMD_STOP);
+
+ /* get cycle state from the origin command trb */
+ cycle_state = le32_to_cpu(cmd_trb->generic.field[3])
+ & TRB_CYCLE;
+
+ /* modify the command trb to NO OP command */
+ cmd_trb->generic.field[0] = 0;
+ cmd_trb->generic.field[1] = 0;
+ cmd_trb->generic.field[2] = 0;
+ cmd_trb->generic.field[3] = cpu_to_le32(
+ TRB_TYPE(TRB_CMD_NOOP) | cycle_state);
+ break;
+ }
+ }
+}
+
+static void xhci_cancel_cmd_in_cd_list(struct xhci_hcd *xhci)
+{
+ struct xhci_cd *cur_cd, *next_cd;
+
+ if (list_empty(&xhci->cancel_cmd_list))
+ return;
+
+ list_for_each_entry_safe(cur_cd, next_cd,
+ &xhci->cancel_cmd_list, cancel_cmd_list) {
+ xhci_cmd_to_noop(xhci, cur_cd);
+ list_del(&cur_cd->cancel_cmd_list);
+ kfree(cur_cd);
+ }
+}
+
+/*
+ * traversing the cancel_cmd_list. If the command descriptor according
+ * to cmd_trb is found, the function free it and return 1, otherwise
+ * return 0.
+ */
+static int xhci_search_cmd_trb_in_cd_list(struct xhci_hcd *xhci,
+ union xhci_trb *cmd_trb)
+{
+ struct xhci_cd *cur_cd, *next_cd;
+
+ if (list_empty(&xhci->cancel_cmd_list))
+ return 0;
+
+ list_for_each_entry_safe(cur_cd, next_cd,
+ &xhci->cancel_cmd_list, cancel_cmd_list) {
+ if (cur_cd->cmd_trb == cmd_trb) {
+ if (cur_cd->command)
+ xhci_complete_cmd_in_cmd_wait_list(xhci,
+ cur_cd->command, COMP_CMD_STOP);
+ list_del(&cur_cd->cancel_cmd_list);
+ kfree(cur_cd);
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * If the cmd_trb_comp_code is COMP_CMD_ABORT, we just check whether the
+ * trb pointed by the command ring dequeue pointer is the trb we want to
+ * cancel or not. And if the cmd_trb_comp_code is COMP_CMD_STOP, we will
+ * traverse the cancel_cmd_list to trun the all of the commands according
+ * to command descriptor to NO-OP trb.
+ */
+static int handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+ int cmd_trb_comp_code)
+{
+ int cur_trb_is_good = 0;
+
+ /* Searching the cmd trb pointed by the command ring dequeue
+ * pointer in command descriptor list. If it is found, free it.
+ */
+ cur_trb_is_good = xhci_search_cmd_trb_in_cd_list(xhci,
+ xhci->cmd_ring->dequeue);
+
+ if (cmd_trb_comp_code == COMP_CMD_ABORT)
+ xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+ else if (cmd_trb_comp_code == COMP_CMD_STOP) {
+ /* traversing the cancel_cmd_list and canceling
+ * the command according to command descriptor
+ */
+ xhci_cancel_cmd_in_cd_list(xhci);
+
+ xhci->cmd_ring_state = CMD_RING_STATE_RUNNING;
+ /*
+ * ring command ring doorbell again to restart the
+ * command ring
+ */
+ if (xhci->cmd_ring->dequeue != xhci->cmd_ring->enqueue)
+ xhci_ring_cmd_db(xhci);
+ }
+ return cur_trb_is_good;
+}
+
static void handle_cmd_completion(struct xhci_hcd *xhci,
struct xhci_event_cmd *event)
{
@@ -1206,6 +1349,22 @@ static void handle_cmd_completion(struct
xhci->error_bitmask |= 1 << 5;
return;
}
+
+ if ((GET_COMP_CODE(le32_to_cpu(event->status)) == COMP_CMD_ABORT) ||
+ (GET_COMP_CODE(le32_to_cpu(event->status)) == COMP_CMD_STOP)) {
+ /* If the return value is 0, we think the trb pointed by
+ * command ring dequeue pointer is a good trb. The good
+ * trb means we don't want to cancel the trb, but it have
+ * been stopped by host. So we should handle it normally.
+ * Otherwise, driver should invoke inc_deq() and return.
+ */
+ if (handle_stopped_cmd_ring(xhci,
+ GET_COMP_CODE(le32_to_cpu(event->status)))) {
+ inc_deq(xhci, xhci->cmd_ring, false);
+ return;
+ }
+ }
+
switch (le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3])
& TRB_TYPE_BITMASK) {
case TRB_TYPE(TRB_ENABLE_SLOT):
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Stephen M. Cameron" <[email protected]>
commit e85c59746957fd6e3595d02cf614370056b5816e upstream.
Dial back the aggressiveness of the controller lockup detection thread.
Currently it will declare the controller to be locked up if it goes
for 10 seconds with no interrupts and no change in the heartbeat
register. Dial this back to 30 seconds with no heartbeat change, and
also snoop the ioctl path and if a firmware flash command is detected,
dial it back further to 4 minutes until the firmware flash command
completes. The reason for this is that during the firmware flash
operation, the controller apparently doesn't update the heartbeat
register as frequently as it is supposed to, and we can get a false
positive.
Signed-off-by: Stephen M. Cameron <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/scsi/hpsa.c | 39 ++++++++++++++++++++++++++++++++++-----
drivers/scsi/hpsa.h | 2 ++
drivers/scsi/hpsa_cmd.h | 1 +
3 files changed, 37 insertions(+), 5 deletions(-)
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -532,12 +532,42 @@ static void set_performant_mode(struct c
c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1);
}
+static int is_firmware_flash_cmd(u8 *cdb)
+{
+ return cdb[0] == BMIC_WRITE && cdb[6] == BMIC_FLASH_FIRMWARE;
+}
+
+/*
+ * During firmware flash, the heartbeat register may not update as frequently
+ * as it should. So we dial down lockup detection during firmware flash. and
+ * dial it back up when firmware flash completes.
+ */
+#define HEARTBEAT_SAMPLE_INTERVAL_DURING_FLASH (240 * HZ)
+#define HEARTBEAT_SAMPLE_INTERVAL (30 * HZ)
+static void dial_down_lockup_detection_during_fw_flash(struct ctlr_info *h,
+ struct CommandList *c)
+{
+ if (!is_firmware_flash_cmd(c->Request.CDB))
+ return;
+ atomic_inc(&h->firmware_flash_in_progress);
+ h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL_DURING_FLASH;
+}
+
+static void dial_up_lockup_detection_on_fw_flash_complete(struct ctlr_info *h,
+ struct CommandList *c)
+{
+ if (is_firmware_flash_cmd(c->Request.CDB) &&
+ atomic_dec_and_test(&h->firmware_flash_in_progress))
+ h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
+}
+
static void enqueue_cmd_and_start_io(struct ctlr_info *h,
struct CommandList *c)
{
unsigned long flags;
set_performant_mode(h, c);
+ dial_down_lockup_detection_during_fw_flash(h, c);
spin_lock_irqsave(&h->lock, flags);
addQ(&h->reqQ, c);
h->Qdepth++;
@@ -3032,6 +3062,7 @@ static inline int bad_tag(struct ctlr_in
static inline void finish_cmd(struct CommandList *c, u32 raw_tag)
{
removeQ(c);
+ dial_up_lockup_detection_on_fw_flash_complete(c->h, c);
if (likely(c->cmd_type == CMD_SCSI))
complete_scsi_command(c);
else if (c->cmd_type == CMD_IOCTL_PEND)
@@ -4172,9 +4203,6 @@ static void controller_lockup_detected(s
spin_unlock_irqrestore(&h->lock, flags);
}
-#define HEARTBEAT_SAMPLE_INTERVAL (10 * HZ)
-#define HEARTBEAT_CHECK_MINIMUM_INTERVAL (HEARTBEAT_SAMPLE_INTERVAL / 2)
-
static void detect_controller_lockup(struct ctlr_info *h)
{
u64 now;
@@ -4185,7 +4213,7 @@ static void detect_controller_lockup(str
now = get_jiffies_64();
/* If we've received an interrupt recently, we're ok. */
if (time_after64(h->last_intr_timestamp +
- (HEARTBEAT_CHECK_MINIMUM_INTERVAL), now))
+ (h->heartbeat_sample_interval), now))
return;
/*
@@ -4194,7 +4222,7 @@ static void detect_controller_lockup(str
* otherwise don't care about signals in this thread.
*/
if (time_after64(h->last_heartbeat_timestamp +
- (HEARTBEAT_CHECK_MINIMUM_INTERVAL), now))
+ (h->heartbeat_sample_interval), now))
return;
/* If heartbeat has not changed since we last looked, we're not ok. */
@@ -4236,6 +4264,7 @@ static void add_ctlr_to_lockup_detector_
{
unsigned long flags;
+ h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
spin_lock_irqsave(&lockup_detector_lock, flags);
list_add_tail(&h->lockup_list, &hpsa_ctlr_list);
spin_unlock_irqrestore(&lockup_detector_lock, flags);
--- a/drivers/scsi/hpsa.h
+++ b/drivers/scsi/hpsa.h
@@ -124,6 +124,8 @@ struct ctlr_info {
u64 last_intr_timestamp;
u32 last_heartbeat;
u64 last_heartbeat_timestamp;
+ u32 heartbeat_sample_interval;
+ atomic_t firmware_flash_in_progress;
u32 lockup_detected;
struct list_head lockup_list;
};
--- a/drivers/scsi/hpsa_cmd.h
+++ b/drivers/scsi/hpsa_cmd.h
@@ -163,6 +163,7 @@ struct SenseSubsystem_info {
#define BMIC_WRITE 0x27
#define BMIC_CACHE_FLUSH 0xc2
#define HPSA_CACHE_FLUSH 0x01 /* C2 was already being used by HPSA */
+#define BMIC_FLASH_FIRMWARE 0xF7
/* Command List Structure */
union SCSI3Addr {
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Kent <[email protected]>
commit 49999ab27eab6289a8e4f450e148bdab521361b2 upstream.
In autofs4_d_automount(), if a mount failure occurs, the AUTOFS_INF_PENDING
mount pending flag is not cleared.
One effect of this is that, when using the "browse" option, directory entry
attributes show up as all "?"s due to the incorrect callback and
subsequent failure return (when in fact no callback should be made).
Signed-off-by: Ian Kent <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/autofs4/root.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/autofs4/root.c b/fs/autofs4/root.c
index e7396cf..91b1165 100644
--- a/fs/autofs4/root.c
+++ b/fs/autofs4/root.c
@@ -392,10 +392,12 @@ static struct vfsmount *autofs4_d_automount(struct path *path)
ino->flags |= AUTOFS_INF_PENDING;
spin_unlock(&sbi->fs_lock);
status = autofs4_mount_wait(dentry);
- if (status)
- return ERR_PTR(status);
spin_lock(&sbi->fs_lock);
ino->flags &= ~AUTOFS_INF_PENDING;
+ if (status) {
+ spin_unlock(&sbi->fs_lock);
+ return ERR_PTR(status);
+ }
}
done:
if (!(ino->flags & AUTOFS_INF_EXPIRING)) {
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Colin Cross <[email protected]>
commit 9d7d6e363b06934221b81a859d509844c97380df upstream.
read_persistent_clock uses a global variable; use a spinlock to
ensure non-atomic updates to the variable don't overlap and cause
time to move backwards.
Signed-off-by: Colin Cross <[email protected]>
Signed-off-by: R Sricharan <[email protected]>
Signed-off-by: Tony Lindgren <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
arch/arm/plat-omap/counter_32k.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
--- a/arch/arm/plat-omap/counter_32k.c
+++ b/arch/arm/plat-omap/counter_32k.c
@@ -82,22 +82,29 @@ static void notrace omap_update_sched_cl
* nsecs and adds to a monotonically increasing timespec.
*/
static struct timespec persistent_ts;
-static cycles_t cycles, last_cycles;
+static cycles_t cycles;
static unsigned int persistent_mult, persistent_shift;
+static DEFINE_SPINLOCK(read_persistent_clock_lock);
+
void read_persistent_clock(struct timespec *ts)
{
unsigned long long nsecs;
- cycles_t delta;
- struct timespec *tsp = &persistent_ts;
+ cycles_t last_cycles;
+ unsigned long flags;
+
+ spin_lock_irqsave(&read_persistent_clock_lock, flags);
last_cycles = cycles;
cycles = timer_32k_base ? __raw_readl(timer_32k_base) : 0;
- delta = cycles - last_cycles;
- nsecs = clocksource_cyc2ns(delta, persistent_mult, persistent_shift);
+ nsecs = clocksource_cyc2ns(cycles - last_cycles,
+ persistent_mult, persistent_shift);
+
+ timespec_add_ns(&persistent_ts, nsecs);
+
+ *ts = persistent_ts;
- timespec_add_ns(tsp, nsecs);
- *ts = *tsp;
+ spin_unlock_irqrestore(&read_persistent_clock_lock, flags);
}
int __init omap_init_clocksource_32k(void)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Martin Michlmayr <[email protected]>
commit 0f6d93aa9d96cc9022b51bd10d462b03296be146 upstream.
The ACARD driver calls udelay() with a value > 2000, which leads to
the following compilation error on ARM:
ERROR: "__bad_udelay" [drivers/scsi/atp870u.ko] undefined!
make[1]: *** [__modpost] Error 1
This is because udelay is defined on ARM, roughly speaking, as
#define udelay(n) ((n) > 2000 ? __bad_udelay() : \
__const_udelay((n) * ((2199023U*HZ)>>11)))
The argument to __const_udelay is the number of jiffies to wait divided
by 4, but this does not work unless the multiplication does not
overflow, and that is what the build error is designed to prevent. The
intended behavior can be achieved by using mdelay to call udelay
multiple times in a loop.
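As a sketch of that pattern (a hypothetical helper, not part of the patch;
the patch below simply open-codes mdelay(2) followed by udelay(48) to cover
the original 0x800 = 2048 microsecond delay):
	/* Needs <linux/delay.h>.  Keep each udelay() below ARM's 2000 us
	 * __bad_udelay() limit by splitting a long busy-wait into
	 * millisecond-sized chunks. */
	static void long_udelay(unsigned int usecs)
	{
		while (usecs >= 1000) {
			udelay(1000);
			usecs -= 1000;
		}
		if (usecs)
			udelay(usecs);
	}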
[[email protected]: adding context]
Signed-off-by: Martin Michlmayr <[email protected]>
Signed-off-by: Jonathan Nieder <[email protected]>
Cc: James Bottomley <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/scsi/atp870u.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/atp870u.c b/drivers/scsi/atp870u.c
index 68ce085..a540162 100644
--- a/drivers/scsi/atp870u.c
+++ b/drivers/scsi/atp870u.c
@@ -1173,7 +1173,16 @@ wait_io1:
outw(val, tmport);
outb(2, 0x80);
TCM_SYNC:
- udelay(0x800);
+ /*
+ * The funny division into multiple delays is to accomodate
+ * arches like ARM where udelay() multiplies its argument by
+ * a large number to initialize a loop counter. To avoid
+ * overflow, the maximum supported udelay is 2000 microseconds.
+ *
+ * XXX it would be more polite to find a way to use msleep()
+ */
+ mdelay(2);
+ udelay(48);
if ((inb(tmport) & 0x80) == 0x00) { /* bsy ? */
outw(0, tmport--);
outb(0, tmport);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ben Widawsky <[email protected]>
commit f8f2ac9a76b0f80a6763ca316116a7bab8486997 upstream.
I can't even find how I figured this might be needed anymore. But sure
enough, the value I'm reading back on platforms doesn't match what the
docs recommend.
It seemed to fix Chris' GT1 in limited testing as well.
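As noted in the backport annotation below, _MASKED_BIT_{ENABLE,DISABLE} are
open-coded here; for reference, those helpers amount to the following (a
sketch of the convention, where the upper 16 bits of such registers act as a
per-bit write-enable mask):
	#define MASKED_BIT_ENABLE(a)	(((a) << 16) | (a))	/* set bit(s) a */
	#define MASKED_BIT_DISABLE(a)	((a) << 16)		/* clear bit(s) a */
	/* So I915_WRITE(GEN6_GT_MODE, 0xffff << 16) clears the whole low
	 * half, and (GEN6_GT_MODE_HI << 16 | GEN6_GT_MODE_HI) then sets
	 * bit 9, matching the intel_display.c hunk below. */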
Tested-by: Chris Wilson <[email protected]>
Signed-off-by: Ben Widawsky <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
[bwh: Backported to 3.2: open-code _MASKED_BIT_{ENABLE,DISABLE}]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/i915/i915_reg.h | 3 +++
drivers/gpu/drm/i915/intel_display.c | 5 +++++
2 files changed, 8 insertions(+)
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -401,6 +401,9 @@
# define VS_TIMER_DISPATCH (1 << 6)
# define MI_FLUSH_ENABLE (1 << 11)
+#define GEN6_GT_MODE 0x20d0
+#define GEN6_GT_MODE_HI (1 << 9)
+
#define GFX_MODE 0x02520
#define GFX_MODE_GEN7 0x0229c
#define GFX_RUN_LIST_ENABLE (1<<15)
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -8298,6 +8298,11 @@ static void gen6_init_clock_gating(struc
DISPPLANE_TRICKLE_FEED_DISABLE);
intel_flush_display_plane(dev_priv, pipe);
}
+
+ /* The default value should be 0x200 according to docs, but the two
+ * platforms I checked have a 0 for this. (Maybe BIOS overrides?) */
+ I915_WRITE(GEN6_GT_MODE, 0xffff << 16);
+ I915_WRITE(GEN6_GT_MODE, GEN6_GT_MODE_HI << 16 | GEN6_GT_MODE_HI);
}
static void gen7_setup_fixed_func_scheduler(struct drm_i915_private *dev_priv)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Fabio Estevam <[email protected]>
commit 0eb5a35801df3c438ce3fc91310a415ea4452c00 upstream.
Do the same as commit a03a202e95fd ("dmaengine: failure to get a
specific DMA channel is not critical") to get rid of the following
messages during kernel boot:
dmaengine_get: failed to get dma1chan0: (-22)
dmaengine_get: failed to get dma1chan1: (-22)
dmaengine_get: failed to get dma1chan2: (-22)
dmaengine_get: failed to get dma1chan3: (-22)
..
Signed-off-by: Fabio Estevam <[email protected]>
Cc: Vinod Koul <[email protected]>
Cc: Dan Williams <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
[bwh: Backported to 3.2: also apply changes to this logging statement
from commit 634332502366 ('dmaengine: Cleanup logging messages')]
Signed-off-by: Ben Hutchings <[email protected]>
---
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -564,8 +564,8 @@ void dmaengine_get(void)
list_del_rcu(&device->global_node);
break;
} else if (err)
- pr_err("dmaengine: failed to get %s: (%d)\n",
- dma_chan_name(chan), err);
+ pr_debug("%s: failed to get %s: (%d)\n",
+ __func__, dma_chan_name(chan), err);
}
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chris Wilson <[email protected]>
commit 5bb61643f6a70d48de9cfe91ad0fee0d618b6816 upstream.
This was meant to be the purpose of the
intel_crtc_wait_for_pending_flips() function which is called whilst
preparing the CRTC for a modeset or before disabling. However, as Ville
Syrjala pointed out, we set the pending flip notification on the old
framebuffer that is no longer attached to the CRTC by the time we come
to flush the pending operations. Instead, we can simply wait on the
pending unpin work to be finished on this CRTC, knowing that the
hardware has therefore finished modifying the registers, before proceeding
with our direct access.
Fixes i-g-t/flip_test on non-pch platforms. pch platforms simply
schedule the flip immediately when the pipe is disabled, leading
to other funny issues.
Signed-off-by: Chris Wilson <[email protected]>
[danvet: Added i-g-t note and cc: stable]
Signed-off-by: Daniel Vetter <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2850,13 +2850,34 @@ static void intel_clear_scanline_wait(st
I915_WRITE_CTL(ring, tmp);
}
+static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
+{
+ struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned long flags;
+ bool pending;
+
+ if (atomic_read(&dev_priv->mm.wedged))
+ return false;
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+ pending = to_intel_crtc(crtc)->unpin_work != NULL;
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+
+ return pending;
+}
+
static void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
if (crtc->fb == NULL)
return;
+ wait_event(dev_priv->pending_flip_queue,
+ !intel_crtc_has_pending_flip(crtc));
+
mutex_lock(&dev->struct_mutex);
intel_finish_fb(crtc->fb);
mutex_unlock(&dev->struct_mutex);
@@ -6952,9 +6973,8 @@ static void do_intel_finish_page_flip(st
atomic_clear_mask(1 << intel_crtc->plane,
&obj->pending_flip.counter);
- if (atomic_read(&obj->pending_flip) == 0)
- wake_up(&dev_priv->pending_flip_queue);
+ wake_up(&dev_priv->pending_flip_queue);
schedule_work(&work->work);
trace_i915_flip_complete(intel_crtc->plane, work->pending_flip_obj);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peng Tao <[email protected]>
commit fe6e1e8d9fad86873eb74a26e80a8f91f9e870b5 upstream.
If applications use flock to protect their write ranges, generic NFS
will not do a read-modify-write cycle at the page cache level. Therefore
the LD (layout driver) should know how to handle non-sector-aligned writes.
Otherwise there will be data corruption.
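To illustrate the sector-alignment arithmetic the backported code below
relies on (assuming SECTOR_SIZE = 512 and the usual round_down()/round_up()
helpers):
	/* Widen a non-sector-aligned dirty range to sector boundaries; the
	 * partial leading/trailing sectors must be read in first. */
	unsigned int start = round_down(dirty_offset, SECTOR_SIZE);
	unsigned int end   = round_up(dirty_offset + dirty_len, SECTOR_SIZE);
	/* e.g. dirty_offset = 100, dirty_len = 1000: start = 0, end = 1536,
	 * so bytes 0..99 and 1100..1535 of the page are read from disk
	 * before the whole 0..1535 span is written back. */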
Signed-off-by: Peng Tao <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
[bwh: Backported to Linux 3.2:
- Adjust context
- s/wdata->pages\.npages/wdata->npages/
- s/header->pnfs_error/wdata->pnfs_error/
- Drop change in missing out_mds exit path]
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 177 +++++++++++++++++++++++++++++++++++---
fs/nfs/blocklayout/blocklayout.h | 1 +
2 files changed, 166 insertions(+), 12 deletions(-)
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -164,25 +164,39 @@ static struct bio *bl_alloc_init_bio(int
return bio;
}
-static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
+static struct bio *do_add_page_to_bio(struct bio *bio, int npg, int rw,
sector_t isect, struct page *page,
struct pnfs_block_extent *be,
void (*end_io)(struct bio *, int err),
- struct parallel_io *par)
+ struct parallel_io *par,
+ unsigned int offset, int len)
{
+ isect = isect + (offset >> SECTOR_SHIFT);
+ dprintk("%s: npg %d rw %d isect %llu offset %u len %d\n", __func__,
+ npg, rw, (unsigned long long)isect, offset, len);
retry:
if (!bio) {
bio = bl_alloc_init_bio(npg, isect, be, end_io, par);
if (!bio)
return ERR_PTR(-ENOMEM);
}
- if (bio_add_page(bio, page, PAGE_CACHE_SIZE, 0) < PAGE_CACHE_SIZE) {
+ if (bio_add_page(bio, page, len, offset) < len) {
bio = bl_submit_bio(rw, bio);
goto retry;
}
return bio;
}
+static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
+ sector_t isect, struct page *page,
+ struct pnfs_block_extent *be,
+ void (*end_io)(struct bio *, int err),
+ struct parallel_io *par)
+{
+ return do_add_page_to_bio(bio, npg, rw, isect, page, be,
+ end_io, par, 0, PAGE_CACHE_SIZE);
+}
+
/* This is basically copied from mpage_end_io_read */
static void bl_end_io_read(struct bio *bio, int err)
{
@@ -446,6 +460,106 @@ map_block(struct buffer_head *bh, sector
return;
}
+static void
+bl_read_single_end_io(struct bio *bio, int error)
+{
+ struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+ struct page *page = bvec->bv_page;
+
+ /* Only one page in bvec */
+ unlock_page(page);
+}
+
+static int
+bl_do_readpage_sync(struct page *page, struct pnfs_block_extent *be,
+ unsigned int offset, unsigned int len)
+{
+ struct bio *bio;
+ struct page *shadow_page;
+ sector_t isect;
+ char *kaddr, *kshadow_addr;
+ int ret = 0;
+
+ dprintk("%s: offset %u len %u\n", __func__, offset, len);
+
+ shadow_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
+ if (shadow_page == NULL)
+ return -ENOMEM;
+
+ bio = bio_alloc(GFP_NOIO, 1);
+ if (bio == NULL)
+ return -ENOMEM;
+
+ isect = (page->index << PAGE_CACHE_SECTOR_SHIFT) +
+ (offset / SECTOR_SIZE);
+
+ bio->bi_sector = isect - be->be_f_offset + be->be_v_offset;
+ bio->bi_bdev = be->be_mdev;
+ bio->bi_end_io = bl_read_single_end_io;
+
+ lock_page(shadow_page);
+ if (bio_add_page(bio, shadow_page,
+ SECTOR_SIZE, round_down(offset, SECTOR_SIZE)) == 0) {
+ unlock_page(shadow_page);
+ bio_put(bio);
+ return -EIO;
+ }
+
+ submit_bio(READ, bio);
+ wait_on_page_locked(shadow_page);
+ if (unlikely(!test_bit(BIO_UPTODATE, &bio->bi_flags))) {
+ ret = -EIO;
+ } else {
+ kaddr = kmap_atomic(page);
+ kshadow_addr = kmap_atomic(shadow_page);
+ memcpy(kaddr + offset, kshadow_addr + offset, len);
+ kunmap_atomic(kshadow_addr);
+ kunmap_atomic(kaddr);
+ }
+ __free_page(shadow_page);
+ bio_put(bio);
+
+ return ret;
+}
+
+static int
+bl_read_partial_page_sync(struct page *page, struct pnfs_block_extent *be,
+ unsigned int dirty_offset, unsigned int dirty_len,
+ bool full_page)
+{
+ int ret = 0;
+ unsigned int start, end;
+
+ if (full_page) {
+ start = 0;
+ end = PAGE_CACHE_SIZE;
+ } else {
+ start = round_down(dirty_offset, SECTOR_SIZE);
+ end = round_up(dirty_offset + dirty_len, SECTOR_SIZE);
+ }
+
+ dprintk("%s: offset %u len %d\n", __func__, dirty_offset, dirty_len);
+ if (!be) {
+ zero_user_segments(page, start, dirty_offset,
+ dirty_offset + dirty_len, end);
+ if (start == 0 && end == PAGE_CACHE_SIZE &&
+ trylock_page(page)) {
+ SetPageUptodate(page);
+ unlock_page(page);
+ }
+ return ret;
+ }
+
+ if (start != dirty_offset)
+ ret = bl_do_readpage_sync(page, be, start, dirty_offset - start);
+
+ if (!ret && (dirty_offset + dirty_len < end))
+ ret = bl_do_readpage_sync(page, be, dirty_offset + dirty_len,
+ end - dirty_offset - dirty_len);
+
+ return ret;
+}
+
/* Given an unmapped page, zero it or read in page for COW, page is locked
* by caller.
*/
@@ -479,7 +593,6 @@ init_page_for_write(struct page *page, s
SetPageUptodate(page);
cleanup:
- bl_put_extent(cow_read);
if (bh)
free_buffer_head(bh);
if (ret) {
@@ -501,6 +614,7 @@ bl_write_pagelist(struct nfs_write_data
struct parallel_io *par;
loff_t offset = wdata->args.offset;
size_t count = wdata->args.count;
+ unsigned int pg_offset, pg_len, saved_len;
struct page **pages = wdata->args.pages;
struct page *page;
pgoff_t index;
@@ -615,10 +729,11 @@ next_page:
if (!extent_length) {
/* We've used up the previous extent */
bl_put_extent(be);
+ bl_put_extent(cow_read);
bio = bl_submit_bio(WRITE, bio);
/* Get the next one */
be = bl_find_get_extent(BLK_LSEG2EXT(wdata->lseg),
- isect, NULL);
+ isect, &cow_read);
if (!be || !is_writable(be, isect)) {
wdata->pnfs_error = -EINVAL;
goto out;
@@ -626,7 +741,26 @@ next_page:
extent_length = be->be_length -
(isect - be->be_f_offset);
}
- if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
+
+ dprintk("%s offset %lld count %Zu\n", __func__, offset, count);
+ pg_offset = offset & ~PAGE_CACHE_MASK;
+ if (pg_offset + count > PAGE_CACHE_SIZE)
+ pg_len = PAGE_CACHE_SIZE - pg_offset;
+ else
+ pg_len = count;
+
+ saved_len = pg_len;
+ if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
+ !bl_is_sector_init(be->be_inval, isect)) {
+ ret = bl_read_partial_page_sync(pages[i], cow_read,
+ pg_offset, pg_len, true);
+ if (ret) {
+ dprintk("%s bl_read_partial_page_sync fail %d\n",
+ __func__, ret);
+ wdata->pnfs_error = ret;
+ goto out;
+ }
+
ret = bl_mark_sectors_init(be->be_inval, isect,
PAGE_CACHE_SECTORS,
NULL);
@@ -636,15 +770,35 @@ next_page:
wdata->pnfs_error = ret;
goto out;
}
+
+ /* Expand to full page write */
+ pg_offset = 0;
+ pg_len = PAGE_CACHE_SIZE;
+ } else if ((pg_offset & (SECTOR_SIZE - 1)) ||
+ (pg_len & (SECTOR_SIZE - 1))){
+ /* ahh, nasty case. We have to do sync full sector
+ * read-modify-write cycles.
+ */
+ unsigned int saved_offset = pg_offset;
+ ret = bl_read_partial_page_sync(pages[i], be, pg_offset,
+ pg_len, false);
+ pg_offset = round_down(pg_offset, SECTOR_SIZE);
+ pg_len = round_up(saved_offset + pg_len, SECTOR_SIZE)
+ - pg_offset;
}
- bio = bl_add_page_to_bio(bio, wdata->npages - i, WRITE,
+
+
+ bio = do_add_page_to_bio(bio, wdata->npages - i, WRITE,
isect, pages[i], be,
- bl_end_io_write, par);
+ bl_end_io_write, par,
+ pg_offset, pg_len);
if (IS_ERR(bio)) {
wdata->pnfs_error = PTR_ERR(bio);
bio = NULL;
goto out;
}
+ offset += saved_len;
+ count -= saved_len;
isect += PAGE_CACHE_SECTORS;
last_isect = isect;
extent_length -= PAGE_CACHE_SECTORS;
@@ -662,12 +816,10 @@ next_page:
}
write_done:
- wdata->res.count = (last_isect << SECTOR_SHIFT) - (offset);
- if (count < wdata->res.count) {
- wdata->res.count = count;
- }
+ wdata->res.count = wdata->args.count;
out:
bl_put_extent(be);
+ bl_put_extent(cow_read);
bl_submit_bio(WRITE, bio);
put_parallel(par);
return PNFS_ATTEMPTED;
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -40,6 +40,7 @@
#define PAGE_CACHE_SECTORS (PAGE_CACHE_SIZE >> SECTOR_SHIFT)
#define PAGE_CACHE_SECTOR_SHIFT (PAGE_CACHE_SHIFT - SECTOR_SHIFT)
+#define SECTOR_SIZE (1 << SECTOR_SHIFT)
struct block_mount_id {
spinlock_t bm_lock; /* protects list */
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Deucher <[email protected]>
commit fb6ca6d154cdcd53e7f27f8dbba513830372699b upstream.
There are so many quirks, let's just try to force
this for all RS690s. See:
https://bugs.freedesktop.org/show_bug.cgi?id=37679
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/radeon/radeon_irq_kms.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
index 2da4465..b06fa59 100644
--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
@@ -208,6 +208,10 @@ static bool radeon_msi_ok(struct radeon_device *rdev)
(rdev->pdev->subsystem_device == 0x0185))
return true;
+ /* try and enable MSIs by default on all RS690s */
+ if (rdev->family == CHIP_RS690)
+ return true;
+
/* RV515 seems to have MSI issues where it loses
* MSI rearms occasionally. This leads to lockups and freezes.
* disable it by default.
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Monakhov <[email protected]>
commit f066055a3449f0e5b0ae4f3ceab4445bead47638 upstream.
Proper block swap for inodes with full journaling enabled is
a truly non-obvious task. In order to be on the safe side, let's
explicitly disable it for now.
Signed-off-by: Dmitry Monakhov <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ext4/move_extent.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index df5cde5..e2016f3 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -1142,7 +1142,12 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
orig_inode->i_ino, donor_inode->i_ino);
return -EINVAL;
}
-
+ /* TODO: This is non obvious task to swap blocks for inodes with full
+ jornaling enabled */
+ if (ext4_should_journal_data(orig_inode) ||
+ ext4_should_journal_data(donor_inode)) {
+ return -EINVAL;
+ }
/* Protect orig and donor inodes against a truncate */
mext_inode_double_lock(orig_inode, donor_inode);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lin Ming <[email protected]>
commit fc54ab72959edbf229b65ac74b2f122d799ca002 upstream.
The _OSC method may exist in module-level code,
so it must be called after ACPI_FULL_INITIALIZATION.
On some new platforms with Zero-Power-Optical-Disk-Drive (ZPODD)
support, this fix is necessary to save power.
Signed-off-by: Lin Ming <[email protected]>
Tested-by: Aaron Lu <[email protected]>
Signed-off-by: Len Brown <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/acpi/bus.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
index e059695..d59175e 100644
--- a/drivers/acpi/bus.c
+++ b/drivers/acpi/bus.c
@@ -994,8 +994,6 @@ static int __init acpi_bus_init(void)
status = acpi_ec_ecdt_probe();
/* Ignore result. Not having an ECDT is not fatal. */
- acpi_bus_osc_support();
-
status = acpi_initialize_objects(ACPI_FULL_INITIALIZATION);
if (ACPI_FAILURE(status)) {
printk(KERN_ERR PREFIX "Unable to initialize ACPI objects\n");
@@ -1003,6 +1001,12 @@ static int __init acpi_bus_init(void)
}
/*
+ * _OSC method may exist in module level code,
+ * so it must be run after ACPI_FULL_INITIALIZATION
+ */
+ acpi_bus_osc_support();
+
+ /*
* _PDC control method may load dynamic SSDT tables,
* and we need to install the table handler before that.
*/
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams <[email protected]>
commit bc3f02a795d3b4faa99d37390174be2a75d091bd upstream.
John reports:
BUG: soft lockup - CPU#2 stuck for 23s! [kworker/u:8:2202]
[..]
Call Trace:
[<ffffffff8141782a>] scsi_remove_target+0xda/0x1f0
[<ffffffff81421de5>] sas_rphy_remove+0x55/0x60
[<ffffffff81421e01>] sas_rphy_delete+0x11/0x20
[<ffffffff81421e35>] sas_port_delete+0x25/0x160
[<ffffffff814549a3>] mptsas_del_end_device+0x183/0x270
...introduced by commit 3b661a9 "[SCSI] fix hot unplug vs async scan race".
Don't restart lookup of more stargets in the multi-target case, just
arrange to traverse the list once, on the assumption that new targets
are always added at the end. There is no guarantee that the target will
change state in scsi_target_reap() so we can end up spinning if we
restart.
Acked-by: Jack Wang <[email protected]>
LKML-Reference: <CAEhu1-6wq1YsNiscGMwP4ud0Q+MrViRzv=kcWCQSBNc8c68N5Q@mail.gmail.com>
Reported-by: John Drescher <[email protected]>
Tested-by: John Drescher <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/scsi/scsi_sysfs.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index 093d4f6..ce5224c 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -1031,33 +1031,31 @@ static void __scsi_remove_target(struct scsi_target *starget)
void scsi_remove_target(struct device *dev)
{
struct Scsi_Host *shost = dev_to_shost(dev->parent);
- struct scsi_target *starget, *found;
+ struct scsi_target *starget, *last = NULL;
unsigned long flags;
- restart:
- found = NULL;
+ /* remove targets being careful to lookup next entry before
+ * deleting the last
+ */
spin_lock_irqsave(shost->host_lock, flags);
list_for_each_entry(starget, &shost->__targets, siblings) {
if (starget->state == STARGET_DEL)
continue;
if (starget->dev.parent == dev || &starget->dev == dev) {
- found = starget;
- found->reap_ref++;
- break;
+ /* assuming new targets arrive at the end */
+ starget->reap_ref++;
+ spin_unlock_irqrestore(shost->host_lock, flags);
+ if (last)
+ scsi_target_reap(last);
+ last = starget;
+ __scsi_remove_target(starget);
+ spin_lock_irqsave(shost->host_lock, flags);
}
}
spin_unlock_irqrestore(shost->host_lock, flags);
- if (found) {
- __scsi_remove_target(found);
- scsi_target_reap(found);
- /* in the case where @dev has multiple starget children,
- * continue removing.
- *
- * FIXME: does such a case exist?
- */
- goto restart;
- }
+ if (last)
+ scsi_target_reap(last);
}
EXPORT_SYMBOL(scsi_remove_target);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Julia Lawall <[email protected]>
commit ca579c9f136af4274ccfd1bcaee7f38a29a0e2e9 upstream.
If list_for_each_entry, etc. completes a traversal of the list, the iterator
variable ends up pointing to an address at an offset from the list head,
and not a meaningful structure. Thus this value should not be used after
the end of the iterator. Replace port->adapter->scsi_host by
adapter->scsi_host.
This problem was found using Coccinelle (http://coccinelle.lip6.fr/).
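A minimal illustration of the pitfall (hypothetical types, not zfcp code):
	struct item {
		struct list_head node;
		int val;
	};
	static int sum(struct list_head *head)
	{
		struct item *it;
		int total = 0;
		list_for_each_entry(it, head, node)
			total += it->val;
		/* Here 'it' no longer points at a real struct item: the loop
		 * stopped because it reached the list head, so 'it' is
		 * container_of(head, struct item, node), an address computed
		 * back from the head rather than a valid element.  Using
		 * it->... after the loop reads garbage, which is the class
		 * of bug fixed below. */
		return total;
	}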
Oversight in upstream commit of v2.6.37
a1ca48319a9aa1c5b57ce142f538e76050bb8972
"[SCSI] zfcp: Move ACL/CFDC code to zfcp_cfdc.c"
which merged the content of zfcp_erp_port_access_changed().
Signed-off-by: Julia Lawall <[email protected]>
Signed-off-by: Steffen Maier <[email protected]>
Reviewed-by: Martin Peschke <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/s390/scsi/zfcp_cfdc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/s390/scsi/zfcp_cfdc.c b/drivers/s390/scsi/zfcp_cfdc.c
index fbd8b4d..49b82e4 100644
--- a/drivers/s390/scsi/zfcp_cfdc.c
+++ b/drivers/s390/scsi/zfcp_cfdc.c
@@ -293,7 +293,7 @@ void zfcp_cfdc_adapter_access_changed(struct zfcp_adapter *adapter)
}
read_unlock_irqrestore(&adapter->port_list_lock, flags);
- shost_for_each_device(sdev, port->adapter->scsi_host) {
+ shost_for_each_device(sdev, adapter->scsi_host) {
zfcp_sdev = sdev_to_zfcp(sdev);
status = atomic_read(&zfcp_sdev->status);
if ((status & ZFCP_STATUS_COMMON_ACCESS_DENIED) ||
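As a quick illustration of the pitfall this fixes, here is a minimal sketch with hypothetical names (not the zfcp code): once list_for_each_entry() runs to completion, the cursor variable no longer points at a real element and must not be dereferenced.
#include <linux/errno.h>
#include <linux/list.h>
struct port {
        struct list_head node;
        int id;
};
static int find_port_id(struct list_head *head, int wanted)
{
        struct port *p;
        list_for_each_entry(p, head, node) {
                if (p->id == wanted)
                        return p->id;   /* p is only valid inside the loop */
        }
        /*
         * If the loop completed, p now aliases the list head itself
         * (container_of(head, struct port, node)); dereferencing it here
         * would read unrelated memory, which is the class of bug fixed above.
         */
        return -ENOENT;
}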
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bjørn Mork <[email protected]>
commit 160c9425ac52cb30502be2d9c5e848cec91bb115 upstream.
Interface #5 on ZTE MF683 is a QMI/wwan interface.
Signed-off-by: Bjørn Mork <[email protected]>
Cc: Shawn J. Goff <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/serial/option.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index 53bd5f4..30cff03 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -870,7 +870,8 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0153, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0155, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0156, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0157, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0157, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0158, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0159, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0161, 0xff, 0xff, 0xff) },
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Frank Schäfer <[email protected]>
commit 97d2fbf501e3cf105ac957086c7e40e62e15cdf8 upstream.
Signed-off-by: Frank Schäfer <[email protected]>
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
[bwh: Backported to 3.2: adjust filename]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/media/video/gspca/pac7302.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/media/video/gspca/pac7302.c
+++ b/drivers/media/video/gspca/pac7302.c
@@ -1198,6 +1198,7 @@ static const struct usb_device_id device
{USB_DEVICE(0x093a, 0x262a)},
{USB_DEVICE(0x093a, 0x262c)},
{USB_DEVICE(0x145f, 0x013c)},
+ {USB_DEVICE(0x1ae7, 0x2001)}, /* SpeedLink Snappy Mic SL-6825-SBK */
{}
};
MODULE_DEVICE_TABLE(usb, device_table);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hans de Goede <[email protected]>
commit 4d6454dbae935825e729f34dc7410bb1b22c7944 upstream.
Reported by: Grzegorz Woźniak
Signed-off-by: Hans de Goede <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/media/video/gspca/pac7302.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/media/video/gspca/pac7302.c b/drivers/media/video/gspca/pac7302.c
index 1c44f78..edbc04c 100644
--- a/drivers/media/video/gspca/pac7302.c
+++ b/drivers/media/video/gspca/pac7302.c
@@ -1197,6 +1197,7 @@ static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x093a, 0x2629), .driver_info = FL_VFLIP},
{USB_DEVICE(0x093a, 0x262a)},
{USB_DEVICE(0x093a, 0x262c)},
+ {USB_DEVICE(0x145f, 0x013c)},
{}
};
MODULE_DEVICE_TABLE(usb, device_table);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Elric Fu <[email protected]>
commit b92cc66c047ff7cf587b318fe377061a353c120f upstream.
Software has to abort the command ring and cancel a command
when the command fails or hangs. Otherwise, the command
ring will hang up and can't handle other commands. An example
of a command that may hang is the Address Device Command,
because waiting for a SET_ADDRESS request to be acknowledged
by a USB device is outside of the xHC's ability to control.
To cancel a command, software will initialize a command
descriptor for the cancel command, and add it into the
cancel_cmd_list of the xhci.
Sarah: Fixed missing newline on "Have the command ring been stopped?"
debugging statement.
This patch should be backported to kernels as old as 3.0, that contain
the commit 7ed603ecf8b68ab81f4c83097d3063d43ec73bb8 "xhci: Add an
assertion to check for virt_dev=0 bug." That commit papers over a NULL
pointer dereference, and this patch fixes the underlying issue that
caused the NULL pointer dereference.
Signed-off-by: Elric Fu <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Tested-by: Miroslav Sabljic <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci-mem.c | 7 +++
drivers/usb/host/xhci-ring.c | 108 ++++++++++++++++++++++++++++++++++++++++++
drivers/usb/host/xhci.c | 2 +-
drivers/usb/host/xhci.h | 12 +++++
4 files changed, 128 insertions(+), 1 deletion(-)
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1680,6 +1680,7 @@ void xhci_mem_cleanup(struct xhci_hcd *x
{
struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
struct dev_info *dev_info, *next;
+ struct xhci_cd *cur_cd, *next_cd;
unsigned long flags;
int size;
int i, j, num_ports;
@@ -1701,6 +1702,11 @@ void xhci_mem_cleanup(struct xhci_hcd *x
xhci_ring_free(xhci, xhci->cmd_ring);
xhci->cmd_ring = NULL;
xhci_dbg(xhci, "Freed command ring\n");
+ list_for_each_entry_safe(cur_cd, next_cd,
+ &xhci->cancel_cmd_list, cancel_cmd_list) {
+ list_del(&cur_cd->cancel_cmd_list);
+ kfree(cur_cd);
+ }
for (i = 1; i < MAX_HC_SLOTS; ++i)
xhci_free_virt_device(xhci, i);
@@ -2246,6 +2252,7 @@ int xhci_mem_init(struct xhci_hcd *xhci,
xhci->cmd_ring = xhci_ring_alloc(xhci, 1, true, false, flags);
if (!xhci->cmd_ring)
goto fail;
+ INIT_LIST_HEAD(&xhci->cancel_cmd_list);
xhci_dbg(xhci, "Allocated command ring at %p\n", xhci->cmd_ring);
xhci_dbg(xhci, "First segment DMA is 0x%llx\n",
(unsigned long long)xhci->cmd_ring->first_seg->dma);
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -318,6 +318,114 @@ void xhci_ring_cmd_db(struct xhci_hcd *x
xhci_readl(xhci, &xhci->dba->doorbell[0]);
}
+static int xhci_abort_cmd_ring(struct xhci_hcd *xhci)
+{
+ u64 temp_64;
+ int ret;
+
+ xhci_dbg(xhci, "Abort command ring\n");
+
+ if (!(xhci->cmd_ring_state & CMD_RING_STATE_RUNNING)) {
+ xhci_dbg(xhci, "The command ring isn't running, "
+ "Have the command ring been stopped?\n");
+ return 0;
+ }
+
+ temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
+ if (!(temp_64 & CMD_RING_RUNNING)) {
+ xhci_dbg(xhci, "Command ring had been stopped\n");
+ return 0;
+ }
+ xhci->cmd_ring_state = CMD_RING_STATE_ABORTED;
+ xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
+ &xhci->op_regs->cmd_ring);
+
+ /* Section 4.6.1.2 of xHCI 1.0 spec says software should
+ * time the completion od all xHCI commands, including
+ * the Command Abort operation. If software doesn't see
+ * CRR negated in a timely manner (e.g. longer than 5
+ * seconds), then it should assume that the there are
+ * larger problems with the xHC and assert HCRST.
+ */
+ ret = handshake(xhci, &xhci->op_regs->cmd_ring,
+ CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
+ if (ret < 0) {
+ xhci_err(xhci, "Stopped the command ring failed, "
+ "maybe the host is dead\n");
+ xhci->xhc_state |= XHCI_STATE_DYING;
+ xhci_quiesce(xhci);
+ xhci_halt(xhci);
+ return -ESHUTDOWN;
+ }
+
+ return 0;
+}
+
+static int xhci_queue_cd(struct xhci_hcd *xhci,
+ struct xhci_command *command,
+ union xhci_trb *cmd_trb)
+{
+ struct xhci_cd *cd;
+ cd = kzalloc(sizeof(struct xhci_cd), GFP_ATOMIC);
+ if (!cd)
+ return -ENOMEM;
+ INIT_LIST_HEAD(&cd->cancel_cmd_list);
+
+ cd->command = command;
+ cd->cmd_trb = cmd_trb;
+ list_add_tail(&cd->cancel_cmd_list, &xhci->cancel_cmd_list);
+
+ return 0;
+}
+
+/*
+ * Cancel the command which has issue.
+ *
+ * Some commands may hang due to waiting for acknowledgement from
+ * usb device. It is outside of the xHC's ability to control and
+ * will cause the command ring is blocked. When it occurs software
+ * should intervene to recover the command ring.
+ * See Section 4.6.1.1 and 4.6.1.2
+ */
+int xhci_cancel_cmd(struct xhci_hcd *xhci, struct xhci_command *command,
+ union xhci_trb *cmd_trb)
+{
+ int retval = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&xhci->lock, flags);
+
+ if (xhci->xhc_state & XHCI_STATE_DYING) {
+ xhci_warn(xhci, "Abort the command ring,"
+ " but the xHCI is dead.\n");
+ retval = -ESHUTDOWN;
+ goto fail;
+ }
+
+ /* queue the cmd desriptor to cancel_cmd_list */
+ retval = xhci_queue_cd(xhci, command, cmd_trb);
+ if (retval) {
+ xhci_warn(xhci, "Queuing command descriptor failed.\n");
+ goto fail;
+ }
+
+ /* abort command ring */
+ retval = xhci_abort_cmd_ring(xhci);
+ if (retval) {
+ xhci_err(xhci, "Abort command ring failed\n");
+ if (unlikely(retval == -ESHUTDOWN)) {
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ usb_hc_died(xhci_to_hcd(xhci)->primary_hcd);
+ xhci_dbg(xhci, "xHCI host controller is dead.\n");
+ return retval;
+ }
+ }
+
+fail:
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ return retval;
+}
+
void xhci_ring_ep_doorbell(struct xhci_hcd *xhci,
unsigned int slot_id,
unsigned int ep_index,
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -52,7 +52,7 @@ MODULE_PARM_DESC(link_quirk, "Don't clea
* handshake done). There are two failure modes: "usec" have passed (major
* hardware flakeout), or the register reads as all-ones (hardware removed).
*/
-static int handshake(struct xhci_hcd *xhci, void __iomem *ptr,
+int handshake(struct xhci_hcd *xhci, void __iomem *ptr,
u32 mask, u32 done, int usec)
{
u32 result;
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1255,6 +1255,13 @@ struct xhci_td {
union xhci_trb *last_trb;
};
+/* command descriptor */
+struct xhci_cd {
+ struct list_head cancel_cmd_list;
+ struct xhci_command *command;
+ union xhci_trb *cmd_trb;
+};
+
struct xhci_dequeue_state {
struct xhci_segment *new_deq_seg;
union xhci_trb *new_deq_ptr;
@@ -1406,6 +1413,7 @@ struct xhci_hcd {
#define CMD_RING_STATE_RUNNING (1 << 0)
#define CMD_RING_STATE_ABORTED (1 << 1)
#define CMD_RING_STATE_STOPPED (1 << 2)
+ struct list_head cancel_cmd_list;
unsigned int cmd_ring_reserved_trbs;
struct xhci_ring *event_ring;
struct xhci_erst erst;
@@ -1670,6 +1678,8 @@ static inline void xhci_unregister_pci(v
/* xHCI host controller glue */
typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
+int handshake(struct xhci_hcd *xhci, void __iomem *ptr,
+ u32 mask, u32 done, int usec);
void xhci_quiesce(struct xhci_hcd *xhci);
int xhci_halt(struct xhci_hcd *xhci);
int xhci_reset(struct xhci_hcd *xhci);
@@ -1760,6 +1770,8 @@ void xhci_queue_config_ep_quirk(struct x
unsigned int slot_id, unsigned int ep_index,
struct xhci_dequeue_state *deq_state);
void xhci_stop_endpoint_command_watchdog(unsigned long arg);
+int xhci_cancel_cmd(struct xhci_hcd *xhci, struct xhci_command *command,
+ union xhci_trb *cmd_trb);
void xhci_ring_ep_doorbell(struct xhci_hcd *xhci, unsigned int slot_id,
unsigned int ep_index, unsigned int stream_id);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Elric Fu <[email protected]>
commit c181bc5b5d5c79b71203cd10cef97f802fb6f9c1 upstream.
Add a cmd_ring_state field for the command ring. It helps to verify
the current command ring state when controlling command ring
operations.
This patch should be backported to kernels as old as 3.0. The commit
7ed603ecf8b68ab81f4c83097d3063d43ec73bb8 "xhci: Add an assertion to
check for virt_dev=0 bug." papers over the NULL pointer dereference that
I now believe is related to a timed out Set Address command. This (and
the four patches that follow it) contain the real fix that also allows
VIA USB 3.0 hubs to consistently re-enumerate during the plug/unplug
stress tests.
Signed-off-by: Elric Fu <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Tested-by: Miroslav Sabljic <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci-ring.c | 3 +++
drivers/usb/host/xhci.c | 6 ++++--
drivers/usb/host/xhci.h | 4 ++++
3 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 643c2f3..75c857e 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -280,6 +280,9 @@ static inline int room_on_ring(struct xhci_hcd *xhci, struct xhci_ring *ring,
/* Ring the host controller doorbell after placing a command on the ring */
void xhci_ring_cmd_db(struct xhci_hcd *xhci)
{
+ if (!(xhci->cmd_ring_state & CMD_RING_STATE_RUNNING))
+ return;
+
xhci_dbg(xhci, "// Ding dong!\n");
xhci_writel(xhci, DB_VALUE_HOST, &xhci->dba->doorbell[0]);
/* Flush PCI posted writes */
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index c59d5b5..f425356 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -104,9 +104,10 @@ int xhci_halt(struct xhci_hcd *xhci)
ret = handshake(xhci, &xhci->op_regs->status,
STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
- if (!ret)
+ if (!ret) {
xhci->xhc_state |= XHCI_STATE_HALTED;
- else
+ xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+ } else
xhci_warn(xhci, "Host not halted after %u microseconds.\n",
XHCI_MAX_HALT_USEC);
return ret;
@@ -485,6 +486,7 @@ static int xhci_run_finished(struct xhci_hcd *xhci)
return -ENODEV;
}
xhci->shared_hcd->state = HC_STATE_RUNNING;
+ xhci->cmd_ring_state = CMD_RING_STATE_RUNNING;
if (xhci->quirks & XHCI_NEC_HOST)
xhci_ring_cmd_db(xhci);
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index c713256..33f24e9 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1421,6 +1421,10 @@ struct xhci_hcd {
/* data structures */
struct xhci_device_context_array *dcbaa;
struct xhci_ring *cmd_ring;
+ unsigned int cmd_ring_state;
+#define CMD_RING_STATE_RUNNING (1 << 0)
+#define CMD_RING_STATE_ABORTED (1 << 1)
+#define CMD_RING_STATE_STOPPED (1 << 2)
unsigned int cmd_ring_reserved_trbs;
struct xhci_ring *event_ring;
struct xhci_erst erst;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Elric Fu <[email protected]>
commit 6e4468b9a0793dfb53eb80d9fe52c739b13b27fd upstream.
The patch is used to cancel a command when the command isn't
acknowledged and a timeout occurs.
This patch should be backported to kernels as old as 3.0, that contain
the commit 7ed603ecf8b68ab81f4c83097d3063d43ec73bb8 "xhci: Add an
assertion to check for virt_dev=0 bug." That commit papers over a NULL
pointer dereference, and this patch fixes the underlying issue that
caused the NULL pointer dereference.
Signed-off-by: Elric Fu <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Tested-by: Miroslav Sabljic <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci.c | 26 +++++++++++++++++++-------
drivers/usb/host/xhci.h | 3 +++
2 files changed, 22 insertions(+), 7 deletions(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 3f0763d..b95e3e8 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -2402,6 +2402,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
struct completion *cmd_completion;
u32 *cmd_status;
struct xhci_virt_device *virt_dev;
+ union xhci_trb *cmd_trb;
spin_lock_irqsave(&xhci->lock, flags);
virt_dev = xhci->devs[udev->slot_id];
@@ -2447,6 +2448,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
}
init_completion(cmd_completion);
+ cmd_trb = xhci->cmd_ring->dequeue;
if (!ctx_change)
ret = xhci_queue_configure_endpoint(xhci, in_ctx->dma,
udev->slot_id, must_succeed);
@@ -2468,14 +2470,17 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
/* Wait for the configure endpoint command to complete */
timeleft = wait_for_completion_interruptible_timeout(
cmd_completion,
- USB_CTRL_SET_TIMEOUT);
+ XHCI_CMD_DEFAULT_TIMEOUT);
if (timeleft <= 0) {
xhci_warn(xhci, "%s while waiting for %s command\n",
timeleft == 0 ? "Timeout" : "Signal",
ctx_change == 0 ?
"configure endpoint" :
"evaluate context");
- /* FIXME cancel the configure endpoint command */
+ /* cancel the configure endpoint command */
+ ret = xhci_cancel_cmd(xhci, command, cmd_trb);
+ if (ret < 0)
+ return ret;
return -ETIME;
}
@@ -3424,8 +3429,10 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
unsigned long flags;
int timeleft;
int ret;
+ union xhci_trb *cmd_trb;
spin_lock_irqsave(&xhci->lock, flags);
+ cmd_trb = xhci->cmd_ring->dequeue;
ret = xhci_queue_slot_control(xhci, TRB_ENABLE_SLOT, 0);
if (ret) {
spin_unlock_irqrestore(&xhci->lock, flags);
@@ -3437,12 +3444,12 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
/* XXX: how much time for xHC slot assignment? */
timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
- USB_CTRL_SET_TIMEOUT);
+ XHCI_CMD_DEFAULT_TIMEOUT);
if (timeleft <= 0) {
xhci_warn(xhci, "%s while waiting for a slot\n",
timeleft == 0 ? "Timeout" : "Signal");
- /* FIXME cancel the enable slot request */
- return 0;
+ /* cancel the enable slot request */
+ return xhci_cancel_cmd(xhci, NULL, cmd_trb);
}
if (!xhci->slot_id) {
@@ -3503,6 +3510,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
struct xhci_slot_ctx *slot_ctx;
struct xhci_input_control_ctx *ctrl_ctx;
u64 temp_64;
+ union xhci_trb *cmd_trb;
if (!udev->slot_id) {
xhci_dbg(xhci, "Bad Slot ID %d\n", udev->slot_id);
@@ -3541,6 +3549,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
xhci_dbg_ctx(xhci, virt_dev->in_ctx, 2);
spin_lock_irqsave(&xhci->lock, flags);
+ cmd_trb = xhci->cmd_ring->dequeue;
ret = xhci_queue_address_device(xhci, virt_dev->in_ctx->dma,
udev->slot_id);
if (ret) {
@@ -3553,7 +3562,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
/* ctrl tx can take up to 5 sec; XXX: need more time for xHC? */
timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
- USB_CTRL_SET_TIMEOUT);
+ XHCI_CMD_DEFAULT_TIMEOUT);
/* FIXME: From section 4.3.4: "Software shall be responsible for timing
* the SetAddress() "recovery interval" required by USB and aborting the
* command on a timeout.
@@ -3561,7 +3570,10 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
if (timeleft <= 0) {
xhci_warn(xhci, "%s while waiting for address device command\n",
timeleft == 0 ? "Timeout" : "Signal");
- /* FIXME cancel the address device command */
+ /* cancel the address device command */
+ ret = xhci_cancel_cmd(xhci, NULL, cmd_trb);
+ if (ret < 0)
+ return ret;
return -ETIME;
}
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index fdfcebf..e81ccfa 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1256,6 +1256,9 @@ struct xhci_td {
union xhci_trb *last_trb;
};
+/* xHCI command default timeout value */
+#define XHCI_CMD_DEFAULT_TIMEOUT (5 * HZ)
+
/* command descriptor */
struct xhci_cd {
struct list_head cancel_cmd_list;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sarah Sharp <[email protected]>
commit 5af98bb06dee79d28c805f9fd0805ce791121784 upstream.
Eric Fu reports a problem with his VIA host controller fetching a zeroed
event ring pointer on resume from suspend. The host should have been
halted, but we can't be sure because that code ignores the return value
from xhci_halt(). Print a warning when the host controller refuses to
halt within XHCI_MAX_HALT_USEC (currently 16 seconds).
(Update: it turns out that the VIA host controller is reporting a halted
state when it fetches the zeroed event ring pointer. However, we still
need this warning for other host controllers.)
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index e1963d4..f68bc15 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -106,6 +106,9 @@ int xhci_halt(struct xhci_hcd *xhci)
STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
if (!ret)
xhci->xhc_state |= XHCI_STATE_HALTED;
+ else
+ xhci_warn(xhci, "Host not halted after %u microseconds.\n",
+ XHCI_MAX_HALT_USEC);
return ret;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Williamson <[email protected]>
commit 2e12bc29fc5a12242d68e11875db3dd58efad9ff upstream.
domain_update_iommu_coherency() currently defaults to setting domains
as coherent when the domain is not attached to any iommus. This
allows for a window in domain_context_mapping_one() where such a
domain can update context entries non-coherently, and only after
update the domain capability to clear iommu_coherency.
This can be seen using KVM device assignment on VT-d systems that
do not support coherency in the ecap register. When a device is
added to a guest, a domain is created (iommu_coherency = 0), the
device is attached, and ranges are mapped. If we then hot unplug
the device, the coherency is updated and set to the default (1)
since no iommus are attached to the domain. A subsequent attach
of a device makes use of the same dmar domain (now marked coherent)
updates context entries with coherency enabled, and only disables
coherency as the last step in the process.
To fix this, switch domain_update_iommu_coherency() to use the
safer, non-coherent default for domains not attached to iommus.
Signed-off-by: Alex Williamson <[email protected]>
Tested-by: Donald Dutile <[email protected]>
Acked-by: Donald Dutile <[email protected]>
Acked-by: Chris Wright <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
[bwh: Backported to 3.2: dmar_domain::iommu_bmp is a single unsigned long
not an array, so add &]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/iommu/intel-iommu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -563,7 +563,9 @@ static void domain_update_iommu_coherenc
{
int i;
- domain->iommu_coherency = 1;
+ i = find_first_bit(&domain->iommu_bmp, g_num_of_iommus);
+
+ domain->iommu_coherency = i < g_num_of_iommus ? 1 : 0;
for_each_set_bit(i, &domain->iommu_bmp, g_num_of_iommus) {
if (!ecap_coherent(g_iommus[i]->ecap)) {
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gavin Shan <[email protected]>
commit feadf7c0a1a7c08c74bebb4a13b755f8c40e3bbc upstream.
The EEH core talks with the PCI device driver to determine the
action (pure reset, or PCI device removal). During this period, the
driver might be unloaded, which in turn causes a kernel crash as follows:
EEH: Detected PCI bus error on PHB#4-PE#10000
EEH: This PCI device has failed 3 times in the last hour
lpfc 0004:01:00.0: 0:2710 PCI channel disable preparing for reset
Unable to handle kernel paging request for data at address 0x00000490
Faulting instruction address: 0xd00000000e682c90
cpu 0x1: Vector: 300 (Data Access) at [c000000fc75ffa20]
pc: d00000000e682c90: .lpfc_io_error_detected+0x30/0x240 [lpfc]
lr: d00000000e682c8c: .lpfc_io_error_detected+0x2c/0x240 [lpfc]
sp: c000000fc75ffca0
msr: 8000000000009032
dar: 490
dsisr: 40000000
current = 0xc000000fc79b88b0
paca = 0xc00000000edb0380 softe: 0 irq_happened: 0x00
pid = 3386, comm = eehd
enter ? for help
[c000000fc75ffca0] c000000fc75ffd30 (unreliable)
[c000000fc75ffd30] c00000000004fd3c .eeh_report_error+0x7c/0xf0
[c000000fc75ffdc0] c00000000004ee00 .eeh_pe_dev_traverse+0xa0/0x180
[c000000fc75ffe70] c00000000004ffd8 .eeh_handle_event+0x68/0x300
[c000000fc75fff00] c0000000000503a0 .eeh_event_handler+0x130/0x1a0
[c000000fc75fff90] c000000000020138 .kernel_thread+0x54/0x70
1:mon>
The patch increases the reference count of the corresponding driver
module while the EEH core negotiates with the PCI device driver, so
that the driver module can't be unloaded during that period and it is
safe to refer to its callbacks.
Reported-by: Alexey Kardashevskiy <[email protected]>
Signed-off-by: Gavin Shan <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
[bwh: Backported to 3.2:
- Adjust context
- Reporting functions return int (success = 0), not void * (success = NULL)
- Assume that the 'dev' arguments are non-null]
Signed-off-by: Ben Hutchings <[email protected]>
---
--- a/arch/powerpc/platforms/pseries/eeh_driver.c
+++ b/arch/powerpc/platforms/pseries/eeh_driver.c
@@ -25,6 +25,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
+#include <linux/module.h>
#include <linux/pci.h>
#include <asm/eeh.h>
#include <asm/eeh_event.h>
@@ -41,6 +42,41 @@ static inline const char * pcid_name (st
return "";
}
+/**
+ * eeh_pcid_get - Get the PCI device driver
+ * @pdev: PCI device
+ *
+ * The function is used to retrieve the PCI device driver for
+ * the indicated PCI device. Besides, we will increase the reference
+ * of the PCI device driver to prevent that being unloaded on
+ * the fly. Otherwise, kernel crash would be seen.
+ */
+static inline struct pci_driver *eeh_pcid_get(struct pci_dev *pdev)
+{
+ if (!pdev || !pdev->driver)
+ return NULL;
+
+ if (!try_module_get(pdev->driver->driver.owner))
+ return NULL;
+
+ return pdev->driver;
+}
+
+/**
+ * eeh_pcid_put - Dereference on the PCI device driver
+ * @pdev: PCI device
+ *
+ * The function is called to do dereference on the PCI device
+ * driver of the indicated PCI device.
+ */
+static inline void eeh_pcid_put(struct pci_dev *pdev)
+{
+ if (!pdev || !pdev->driver)
+ return;
+
+ module_put(pdev->driver->driver.owner);
+}
+
#if 0
static void print_device_node_tree(struct pci_dn *pdn, int dent)
{
@@ -109,18 +145,20 @@ static void eeh_enable_irq(struct pci_de
static int eeh_report_error(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_frozen;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_disable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->error_detected)
+ !driver->err_handler->error_detected) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->error_detected (dev, pci_channel_io_frozen);
@@ -128,6 +166,7 @@ static int eeh_report_error(struct pci_d
if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
if (*res == PCI_ERS_RESULT_NONE) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -142,12 +181,15 @@ static int eeh_report_error(struct pci_d
static int eeh_report_mmio_enabled(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
- if (!driver ||
- !driver->err_handler ||
- !driver->err_handler->mmio_enabled)
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
+ if (!driver->err_handler ||
+ !driver->err_handler->mmio_enabled) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->mmio_enabled (dev);
@@ -155,6 +197,7 @@ static int eeh_report_mmio_enabled(struc
if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
if (*res == PCI_ERS_RESULT_NONE) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -165,18 +208,20 @@ static int eeh_report_mmio_enabled(struc
static int eeh_report_reset(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
-
- if (!driver)
- return 0;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_normal;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
+
eeh_enable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->slot_reset)
+ !driver->err_handler->slot_reset) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->slot_reset(dev);
if ((*res == PCI_ERS_RESULT_NONE) ||
@@ -184,6 +229,7 @@ static int eeh_report_reset(struct pci_d
if (*res == PCI_ERS_RESULT_DISCONNECT &&
rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -193,21 +239,24 @@ static int eeh_report_reset(struct pci_d
static int eeh_report_resume(struct pci_dev *dev, void *userdata)
{
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_normal;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_enable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->resume)
+ !driver->err_handler->resume) {
+ eeh_pcid_put(dev);
return 0;
+ }
driver->err_handler->resume(dev);
+ eeh_pcid_put(dev);
return 0;
}
@@ -220,21 +269,24 @@ static int eeh_report_resume(struct pci_
static int eeh_report_failure(struct pci_dev *dev, void *userdata)
{
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_perm_failure;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_disable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->error_detected)
+ !driver->err_handler->error_detected) {
+ eeh_pcid_put(dev);
return 0;
+ }
driver->err_handler->error_detected(dev, pci_channel_io_perm_failure);
+ eeh_pcid_put(dev);
return 0;
}
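The underlying idiom is the usual module-pinning pattern; a minimal hedged sketch with a hypothetical call site (not the eeh_driver.c code) looks like this:
#include <linux/module.h>
#include <linux/pci.h>
static void call_resume_handler(struct pci_dev *pdev)
{
        struct pci_driver *drv = pdev->driver;
        /* pin the module so it cannot be unloaded while we call into it */
        if (!drv || !try_module_get(drv->driver.owner))
                return;
        if (drv->err_handler && drv->err_handler->resume)
                drv->err_handler->resume(pdev);
        module_put(drv->driver.owner);
}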
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ben Hutchings <[email protected]>
commit 4b961180ef275035b1538317ffd0e21e80e63e77 upstream.
ite_dev::rdev is currently initialised in ite_probe() after
rc_register_device() returns. If a newly registered device is opened
quickly enough, we may enable interrupts and try to use ite_dev::rdev
before it has been initialised. Move it up to the earliest point we
can, right after calling rc_allocate_device().
Reported-and-tested-by: YunQiang Su <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
---
drivers/media/rc/ite-cir.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/media/rc/ite-cir.c b/drivers/media/rc/ite-cir.c
index 36fe5a3..24c77a4 100644
--- a/drivers/media/rc/ite-cir.c
+++ b/drivers/media/rc/ite-cir.c
@@ -1473,6 +1473,7 @@ static int ite_probe(struct pnp_dev *pdev, const struct pnp_device_id
rdev = rc_allocate_device();
if (!rdev)
goto failure;
+ itdev->rdev = rdev;
ret = -ENODEV;
@@ -1604,7 +1605,6 @@ static int ite_probe(struct pnp_dev *pdev, const struct pnp_device_id
if (ret)
goto failure;
- itdev->rdev = rdev;
ite_pr(KERN_NOTICE, "driver has been successfully loaded\n");
return 0;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Yongjun <[email protected]>
commit f61bd0585dfc7d99db4936d7467de4ca8e2f7ea0 upstream.
In case of error, the function clk_get() returns ERR_PTR()
and never returns a NULL pointer. The NULL test in the error
handling should be replaced with IS_ERR().
The dpatch engine was used to auto-generate this patch.
(https://github.com/weiyj/dpatch)
Signed-off-by: Wei Yongjun <[email protected]>
Acked-by: Wolfgang Grandegger <[email protected]>
Signed-off-by: Marc Kleine-Budde <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/net/can/mscan/mpc5xxx_can.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/can/mscan/mpc5xxx_can.c b/drivers/net/can/mscan/mpc5xxx_can.c
index 06adf88..524ef96 100644
--- a/drivers/net/can/mscan/mpc5xxx_can.c
+++ b/drivers/net/can/mscan/mpc5xxx_can.c
@@ -181,7 +181,7 @@ static u32 __devinit mpc512x_can_get_clock(struct platform_device *ofdev,
if (!clock_name || !strcmp(clock_name, "sys")) {
sys_clk = clk_get(&ofdev->dev, "sys_clk");
- if (!sys_clk) {
+ if (IS_ERR(sys_clk)) {
dev_err(&ofdev->dev, "couldn't get sys_clk\n");
goto exit_unmap;
}
@@ -204,7 +204,7 @@ static u32 __devinit mpc512x_can_get_clock(struct platform_device *ofdev,
if (clocksrc < 0) {
ref_clk = clk_get(&ofdev->dev, "ref_clk");
- if (!ref_clk) {
+ if (IS_ERR(ref_clk)) {
dev_err(&ofdev->dev, "couldn't get ref_clk\n");
goto exit_unmap;
}
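For completeness, the clk_get() error convention in a minimal sketch (hypothetical caller, not the mscan driver): errors are encoded in the returned pointer, so IS_ERR()/PTR_ERR() are the right tests and a NULL check never fires.
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
static int example_get_clock(struct device *dev)
{
        struct clk *clk;
        clk = clk_get(dev, "sys_clk");
        if (IS_ERR(clk)) {
                dev_err(dev, "couldn't get sys_clk: %ld\n", PTR_ERR(clk));
                return PTR_ERR(clk);
        }
        /* ... use the clock ... */
        clk_put(clk);
        return 0;
}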
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Stephen M. Cameron" <[email protected]>
commit 21e89afd325849eb38adccf382df16cc895911f9 upstream.
It turns out Smart Array logical drives do not support target
reset and when the target reset fails, the logical drive will
be taken off line. Symptoms look like this:
hpsa 0000:03:00.0: Abort request on C1:B0:T0:L0
hpsa 0000:03:00.0: resetting device 1:0:0:0
hpsa 0000:03:00.0: cp ffff880037c56000 is reported invalid (probably means target device no longer present)
hpsa 0000:03:00.0: resetting device failed.
sd 1:0:0:0: Device offlined - not ready after error recovery
sd 1:0:0:0: rejecting I/O to offline device
EXT3-fs error (device sdb1): read_block_bitmap:
LUN reset is supported though, and is what we should be using.
Target reset is also disruptive in shared SAS situations,
for example, an external MSA1210m which does support target
reset attached to Smart Arrays in multiple hosts -- a target
reset from one host is disruptive to other hosts as all LUNs
on the target will be reset and will abort all outstanding i/os
back to all the attached hosts. So we should use LUN reset,
not target reset.
Tested this with Smart Array logical drives and with tape drives.
Not sure how this bug survived since 2009, except it must be very
rare for a Smart Array to require more than 30s to complete a request.
Signed-off-by: Stephen M. Cameron <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/scsi/hpsa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index 796482b..015a6c8 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -3265,7 +3265,7 @@ static void fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
c->Request.Timeout = 0; /* Don't time out */
memset(&c->Request.CDB[0], 0, sizeof(c->Request.CDB));
c->Request.CDB[0] = cmd;
- c->Request.CDB[1] = 0x03; /* Reset target above */
+ c->Request.CDB[1] = HPSA_RESET_TYPE_LUN;
/* If bytes 4-7 are zero, it means reset the */
/* LunID device */
c->Request.CDB[4] = 0x00;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Abbott <[email protected]>
commit c8cad4c89ee3b15935c532210ae6ebb5c0a2734d upstream.
When `do_cmd_ioctl()` allocates memory for the kernel copy of a channel
list, it frees any previously allocated channel list in
`async->cmd.chanlist` and replaces it with the new one. However, if the
device is ever removed (or "detached") the cleanup code in
`cleanup_device()` in "drivers.c" does not free this memory so it is
lost.
A sensible place to free the kernel copy of the channel list is in
`do_become_nonbusy()` as at that point the comedi asynchronous command
associated with the channel list is no longer valid. Free the channel
list in `do_become_nonbusy()` instead of `do_cmd_ioctl()` and clear the
pointer to prevent it being freed more than once.
Note that `cleanup_device()` could be called at an inappropriate time
while the comedi device is open, but that's a separate bug not related
to this patch.
Signed-off-by: Ian Abbott <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/staging/comedi/comedi_fops.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/staging/comedi/comedi_fops.c
+++ b/drivers/staging/comedi/comedi_fops.c
@@ -1088,7 +1088,6 @@ static int do_cmd_ioctl(struct comedi_de
goto cleanup;
}
- kfree(async->cmd.chanlist);
async->cmd = user_cmd;
async->cmd.data = NULL;
/* load channel/gain list */
@@ -1833,6 +1832,8 @@ void do_become_nonbusy(struct comedi_dev
if (async) {
comedi_reset_async_buf(async);
async->inttrig = NULL;
+ kfree(async->cmd.chanlist);
+ async->cmd.chanlist = NULL;
} else {
printk(KERN_ERR
"BUG: (?) do_become_nonbusy called with async=0\n");
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ben Hutchings <[email protected]>
commit 40fe4f89671fb3c7ded94190fb267402a38b0261 upstream.
softsynth_read() reads a character at a time from the init string;
when it finds the null terminator it sets the initialized flag but
then repeats the last character.
Additionally, if the read() buffer is not big enough for the init
string, the next read() will start reading from the beginning again.
So the caller may never progress to reading anything else.
Replace the simple initialized flag with the current position in
the init string, carried over between calls. Switch to reading
real data once this reaches the null terminator.
(This assumes that the length of the init string can't change, which
seems to be the case. Really, the string and position belong together
in a per-file private struct.)
Tested-by: Samuel Thibault <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/staging/speakup/speakup_soft.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
index 2a67610..e2f5c81 100644
--- a/drivers/staging/speakup/speakup_soft.c
+++ b/drivers/staging/speakup/speakup_soft.c
@@ -40,7 +40,7 @@ static int softsynth_is_alive(struct spk_synth *synth);
static unsigned char get_index(void);
static struct miscdevice synth_device;
-static int initialized;
+static int init_pos;
static int misc_registered;
static struct var_t vars[] = {
@@ -194,7 +194,7 @@ static int softsynth_close(struct inode *inode, struct file *fp)
unsigned long flags;
spk_lock(flags);
synth_soft.alive = 0;
- initialized = 0;
+ init_pos = 0;
spk_unlock(flags);
/* Make sure we let applications go before leaving */
speakup_start_ttys();
@@ -239,13 +239,8 @@ static ssize_t softsynth_read(struct file *fp, char *buf, size_t count,
ch = '\x18';
} else if (synth_buffer_empty()) {
break;
- } else if (!initialized) {
- if (*init) {
- ch = *init;
- init++;
- } else {
- initialized = 1;
- }
+ } else if (init[init_pos]) {
+ ch = init[init_pos++];
} else {
ch = synth_buffer_getc();
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yinghai Lu <[email protected]>
commit 1965f66e7db08d1ebccd24a59043eba826cc1ce8 upstream.
For bridges with "secondary > subordinate", i.e., invalid bus number
apertures, we don't enumerate anything behind the bridge unless the
user specified "pci=assign-busses".
This patch makes us automatically try to reassign the downstream bus
numbers in this case (just for that bridge, not for all bridges as
"pci=assign-busses" does).
We don't discover all the devices on the Intel DP43BF motherboard
without this change (or "pci=assign-busses") because its BIOS configures
a bridge as:
pci 0000:00:1e.0: PCI bridge to [bus 20-08] (subtractive decode)
[bhelgaas: changelog, change message to dev_info]
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=18412
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=625754
Reported-by: Brian C. Huffman <[email protected]>
Reported-by: VL <[email protected]>
Tested-by: VL <[email protected]>
Signed-off-by: Yinghai Lu <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/pci/probe.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 9f8a6b7..61859d0 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -729,8 +729,10 @@ int __devinit pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max,
/* Check if setup is sensible at all */
if (!pass &&
- (primary != bus->number || secondary <= bus->number)) {
- dev_dbg(&dev->dev, "bus configuration invalid, reconfiguring\n");
+ (primary != bus->number || secondary <= bus->number ||
+ secondary > subordinate)) {
+ dev_info(&dev->dev, "bridge configuration invalid ([bus %02x-%02x]), reconfiguring\n",
+ secondary, subordinate);
broken = 1;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Theodore Ts'o <[email protected]>
commit 00d4e7362ed01987183e9528295de3213031309c upstream.
In ext4_nonda_switch(), if the file system is getting full we used to
call writeback_inodes_sb_if_idle(). The problem is that we can be
holding i_mutex already, and this causes a potential deadlock in
writeback_inodes_sb_if_idle() when it tries to take s_umount. (See
lockdep output below).
As it turns out we don't need to hold s_umount; the fact that we
are in the middle of the write(2) system call will keep the superblock
pinned. Unfortunately writeback_inodes_sb() checks to make sure
s_umount is taken, and the VFS uses a different mechanism for making
sure the file system doesn't get unmounted out from under us. The
simplest way of dealing with this is to just simply grab s_umount
using a trylock, and skip kicking the writeback flusher thread in the
very unlikely case that we can't take a read lock on s_umount without
blocking.
Also, we now check the criteria for kicking the writeback thread
before we decide whether to fall back to non-delayed writeback, so
if there are any outstanding delayed allocation writes, we try to get
them resolved as soon as possible.
[ INFO: possible circular locking dependency detected ]
3.6.0-rc1-00042-gce894ca #367 Not tainted
-------------------------------------------------------
dd/8298 is trying to acquire lock:
(&type->s_umount_key#18){++++..}, at: [<c02277d4>] writeback_inodes_sb_if_idle+0x28/0x46
but task is already holding lock:
(&sb->s_type->i_mutex_key#8){+.+...}, at: [<c01ddcce>] generic_file_aio_write+0x5f/0xd3
which lock already depends on the new lock.
2 locks held by dd/8298:
#0: (sb_writers#2){.+.+.+}, at: [<c01ddcc5>] generic_file_aio_write+0x56/0xd3
#1: (&sb->s_type->i_mutex_key#8){+.+...}, at: [<c01ddcce>] generic_file_aio_write+0x5f/0xd3
stack backtrace:
Pid: 8298, comm: dd Not tainted 3.6.0-rc1-00042-gce894ca #367
Call Trace:
[<c015b79c>] ? console_unlock+0x345/0x372
[<c06d62a1>] print_circular_bug+0x190/0x19d
[<c019906c>] __lock_acquire+0x86d/0xb6c
[<c01999db>] ? mark_held_locks+0x5c/0x7b
[<c0199724>] lock_acquire+0x66/0xb9
[<c02277d4>] ? writeback_inodes_sb_if_idle+0x28/0x46
[<c06db935>] down_read+0x28/0x58
[<c02277d4>] ? writeback_inodes_sb_if_idle+0x28/0x46
[<c02277d4>] writeback_inodes_sb_if_idle+0x28/0x46
[<c026f3b2>] ext4_nonda_switch+0xe1/0xf4
[<c0271ece>] ext4_da_write_begin+0x27/0x193
[<c01dcdb0>] generic_file_buffered_write+0xc8/0x1bb
[<c01ddc47>] __generic_file_aio_write+0x1dd/0x205
[<c01ddce7>] generic_file_aio_write+0x78/0xd3
[<c026d336>] ext4_file_write+0x480/0x4a6
[<c0198c1d>] ? __lock_acquire+0x41e/0xb6c
[<c0180944>] ? sched_clock_cpu+0x11a/0x13e
[<c01967e9>] ? trace_hardirqs_off+0xb/0xd
[<c018099f>] ? local_clock+0x37/0x4e
[<c0209f2c>] do_sync_write+0x67/0x9d
[<c0209ec5>] ? wait_on_retry_sync_kiocb+0x44/0x44
[<c020a7b9>] vfs_write+0x7b/0xe6
[<c020a9a6>] sys_write+0x3b/0x64
[<c06dd4bd>] syscall_call+0x7/0xb
Signed-off-by: "Theodore Ts'o" <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ext4/inode.c | 17 ++++++++++-------
fs/fs-writeback.c | 1 +
2 files changed, 11 insertions(+), 7 deletions(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index ca76b5e..0a31197 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2462,6 +2462,16 @@ static int ext4_nonda_switch(struct super_block *sb)
free_blocks = EXT4_C2B(sbi,
percpu_counter_read_positive(&sbi->s_freeclusters_counter));
dirty_blocks = percpu_counter_read_positive(&sbi->s_dirtyclusters_counter);
+ /*
+ * Start pushing delalloc when 1/2 of free blocks are dirty.
+ */
+ if (dirty_blocks && (free_blocks < 2 * dirty_blocks) &&
+ !writeback_in_progress(sb->s_bdi) &&
+ down_read_trylock(&sb->s_umount)) {
+ writeback_inodes_sb(sb, WB_REASON_FS_FREE_SPACE);
+ up_read(&sb->s_umount);
+ }
+
if (2 * free_blocks < 3 * dirty_blocks ||
free_blocks < (dirty_blocks + EXT4_FREECLUSTERS_WATERMARK)) {
/*
@@ -2470,13 +2480,6 @@ static int ext4_nonda_switch(struct super_block *sb)
*/
return 1;
}
- /*
- * Even if we don't switch but are nearing capacity,
- * start pushing delalloc when 1/2 of free blocks are dirty.
- */
- if (free_blocks < 2 * dirty_blocks)
- writeback_inodes_sb_if_idle(sb, WB_REASON_FS_FREE_SPACE);
-
return 0;
}
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index be3efc4..5602d73 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -63,6 +63,7 @@ int writeback_in_progress(struct backing_dev_info *bdi)
{
return test_bit(BDI_writeback_running, &bdi->state);
}
+EXPORT_SYMBOL(writeback_in_progress);
static inline struct backing_dev_info *inode_to_bdi(struct inode *inode)
{
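The trylock pattern the fix relies on, as a minimal standalone sketch (hypothetical caller, not the ext4 code): since the kick is only an optimisation, it is safe to skip whenever the read lock on s_umount cannot be taken without blocking.
#include <linux/backing-dev.h>
#include <linux/fs.h>
#include <linux/writeback.h>
static void maybe_kick_writeback(struct super_block *sb)
{
        if (writeback_in_progress(sb->s_bdi))
                return;                 /* flusher is already running */
        /*
         * We may already hold locks that rank below s_umount (such as
         * i_mutex), so never block on it here; skipping is harmless.
         */
        if (!down_read_trylock(&sb->s_umount))
                return;
        writeback_inodes_sb(sb, WB_REASON_FS_FREE_SPACE);
        up_read(&sb->s_umount);
}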
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Senna Tschudin <[email protected]>
commit 37bb7899ca366dc212b71b150e78566d04808cc0 upstream.
This patch fixes error cases within target_core_init_configfs() to
properly set ret = -ENOMEM before jumping to the out_global exception
path.
This was originally discovered with the following Coccinelle semantic
match information:
Convert a nonnegative error return code to a negative one, as returned
elsewhere in the function. A simplified version of the semantic match
that finds this problem is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
(
if@p1 (\(ret < 0\|ret != 0\))
{ ... return ret; }
|
ret@p1 = 0
)
... when != ret = e1
when != &ret
*if(...)
{
... when != ret = e2
when forall
return ret;
}
// </smpl>
Signed-off-by: Peter Senna Tschudin <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/target/target_core_configfs.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index a1b4171..015f5be 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -3124,6 +3124,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!target_cg->default_groups) {
pr_err("Unable to allocate target_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
@@ -3139,6 +3140,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!hba_cg->default_groups) {
pr_err("Unable to allocate hba_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
config_group_init_type_name(&alua_group,
@@ -3154,6 +3156,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!alua_cg->default_groups) {
pr_err("Unable to allocate alua_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
@@ -3165,14 +3168,17 @@ static int __init target_core_init_configfs(void)
* Add core/alua/lu_gps/default_lu_gp
*/
lu_gp = core_alua_allocate_lu_gp("default_lu_gp", 1);
- if (IS_ERR(lu_gp))
+ if (IS_ERR(lu_gp)) {
+ ret = -ENOMEM;
goto out_global;
+ }
lu_gp_cg = &alua_lu_gps_group;
lu_gp_cg->default_groups = kzalloc(sizeof(struct config_group) * 2,
GFP_KERNEL);
if (!lu_gp_cg->default_groups) {
pr_err("Unable to allocate lu_gp_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
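The convention being enforced, in a minimal hypothetical init function (not the target_core code): every failing branch must load a negative errno into the variable returned at the common exit label, otherwise the function cleans up yet still reports success.
#include <linux/init.h>
#include <linux/slab.h>
static void *example_buf;
static int __init example_init(void)
{
        int ret = 0;
        example_buf = kzalloc(256, GFP_KERNEL);
        if (!example_buf) {
                ret = -ENOMEM;          /* without this, the exit path returns 0 */
                goto out;
        }
        /* ... further setup; every failing branch sets ret before goto out ... */
        return 0;
out:
        kfree(example_buf);             /* kfree(NULL) is a no-op */
        example_buf = NULL;
        return ret;
}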
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lai Jiangshan <[email protected]>
commit 3aa62497594430ea522050b75c033f71f2c60ee6 upstream.
Currently, when try_to_grab_pending() grabs a delayed work item, it
leaves its linked work items alone on the delayed_works. The linked
work items are always NO_COLOR and will cause future
cwq_activate_first_delayed() increase cwq->nr_active incorrectly, and
may cause the whole cwq to stall. For example,
state: cwq->max_active = 1, cwq->nr_active = 1
one work in cwq->pool, many in cwq->delayed_works.
step1: try_to_grab_pending() removes a work item from delayed_works
but leaves its NO_COLOR linked work items on it.
step2: Later on, cwq_activate_first_delayed() activates the linked
work item increasing ->nr_active.
step3: cwq->nr_active = 1, but all activated work items of the cwq are
NO_COLOR. When they finish, cwq->nr_active will not be
decreased due to NO_COLOR, and no further work items will be
activated from cwq->delayed_works. The cwq stalls.
Fix it by ensuring the target work item is activated before stealing
PENDING in try_to_grab_pending(). This ensures that all the linked
work items are activated without incorrectly bumping cwq->nr_active.
tj: Updated comment and description.
Signed-off-by: Lai Jiangshan <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
kernel/workqueue.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1726,10 +1726,9 @@ static void move_linked_works(struct wor
*nextp = n;
}
-static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
+static void cwq_activate_delayed_work(struct work_struct *work)
{
- struct work_struct *work = list_first_entry(&cwq->delayed_works,
- struct work_struct, entry);
+ struct cpu_workqueue_struct *cwq = get_work_cwq(work);
struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
trace_workqueue_activate_work(work);
@@ -1738,6 +1737,14 @@ static void cwq_activate_first_delayed(s
cwq->nr_active++;
}
+static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
+{
+ struct work_struct *work = list_first_entry(&cwq->delayed_works,
+ struct work_struct, entry);
+
+ cwq_activate_delayed_work(work);
+}
+
/**
* cwq_dec_nr_in_flight - decrement cwq's nr_in_flight
* @cwq: cwq of interest
@@ -2628,6 +2635,18 @@ static int try_to_grab_pending(struct wo
smp_rmb();
if (gcwq == get_work_gcwq(work)) {
debug_work_deactivate(work);
+
+ /*
+ * A delayed work item cannot be grabbed directly
+ * because it might have linked NO_COLOR work items
+ * which, if left on the delayed_list, will confuse
+ * cwq->nr_active management later on and cause
+ * stall. Make sure the work item is activated
+ * before grabbing.
+ */
+ if (*work_data_bits(work) & WORK_STRUCT_DELAYED)
+ cwq_activate_delayed_work(work);
+
list_del_init(&work->entry);
cwq_dec_nr_in_flight(get_work_cwq(work),
get_work_color(work),
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: "Paul E. McKenney" <[email protected]>
commit a10d206ef1a83121ab7430cb196e0376a7145b22 upstream.
Each grace period is supposed to have at least one callback waiting
for that grace period to complete. However, if CONFIG_NO_HZ=n, an
extra callback-free grace period is no big problem -- it will chew up
a tiny bit of CPU time, but it will complete normally. In contrast,
CONFIG_NO_HZ=y kernels have the potential for all the CPUs to go to
sleep indefinitely, in turn indefinitely delaying completion of the
callback-free grace period. Given that nothing is waiting on this grace
period, this is also not a problem.
That is, unless RCU CPU stall warnings are also enabled, as they are
in recent kernels. In this case, if a CPU wakes up after at least one
minute of inactivity, an RCU CPU stall warning will result. The reason
that no one noticed until quite recently is that most systems have enough
OS noise that they will never remain absolutely idle for a full minute.
But there are some embedded systems with cut-down userspace configurations
that consistently get into this situation.
All this begs the question of exactly how a callback-free grace period
gets started in the first place. This can happen due to the fact that
CPUs do not necessarily agree on which grace period is in progress.
If a CPU still believes that the grace period that just completed is
still ongoing, it will believe that it has callbacks that need to wait for
another grace period, never mind the fact that the grace period that they
were waiting for just completed. This CPU can therefore erroneously
decide to start a new grace period. Note that this can happen in
TREE_RCU and TREE_PREEMPT_RCU even on a single-CPU system: Deadlock
considerations mean that the CPU that detected the end of the grace
period is not necessarily officially informed of this fact for some time.
Once this CPU notices that the earlier grace period completed, it will
invoke its callbacks. It then won't have any callbacks left. If no
other CPU has any callbacks, we now have a callback-free grace period.
This commit therefore makes CPUs check more carefully before starting a
new grace period. This new check relies on an array of tail pointers
into each CPU's list of callbacks. If the CPU is up to date on which
grace periods have completed, it checks to see if any callbacks follow
the RCU_DONE_TAIL segment, otherwise it checks to see if any callbacks
follow the RCU_WAIT_TAIL segment. The reason that this works is that
the RCU_WAIT_TAIL segment will be promoted to the RCU_DONE_TAIL segment
as soon as the CPU is officially notified that the old grace period
has ended.
This change is to cpu_needs_another_gp(), which is called in a number
of places. The only one that really matters is in rcu_start_gp(), where
the root rcu_node structure's ->lock is held, which prevents any
other CPU from starting or completing a grace period, so that the
comparison that determines whether the CPU is missing the completion
of a grace period is stable.
Reported-by: Becky Bruce <[email protected]>
Reported-by: Subodh Nijsure <[email protected]>
Reported-by: Paul Walmsley <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Tested-by: Paul Walmsley <[email protected]> # OMAP3730, OMAP4430
Signed-off-by: Ben Hutchings <[email protected]>
---
kernel/rcutree.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index f280e54..f7bcd9e 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -305,7 +305,9 @@ cpu_has_callbacks_ready_to_invoke(struct rcu_data *rdp)
static int
cpu_needs_another_gp(struct rcu_state *rsp, struct rcu_data *rdp)
{
- return *rdp->nxttail[RCU_DONE_TAIL] && !rcu_gp_in_progress(rsp);
+ return *rdp->nxttail[RCU_DONE_TAIL +
+ ACCESS_ONCE(rsp->completed) != rdp->completed] &&
+ !rcu_gp_in_progress(rsp);
}
/*
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Abbott <[email protected]>
commit 5d06e3df280bd230e2eadc16372e62818c63e894 upstream.
`parse_insn()` is dereferencing the user-space pointer `insn->data`
directly when handling the `INSN_INTTRIG` comedi instruction. It
shouldn't be using `insn->data` at all; it should be using the separate
`data` pointer passed to the function. Fix it.
Signed-off-by: Ian Abbott <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/staging/comedi/comedi_fops.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
index 9b9d4e8..f496f4d 100644
--- a/drivers/staging/comedi/comedi_fops.c
+++ b/drivers/staging/comedi/comedi_fops.c
@@ -950,7 +950,7 @@ static int parse_insn(struct comedi_device *dev, struct comedi_insn *insn,
ret = -EAGAIN;
break;
}
- ret = s->async->inttrig(dev, s, insn->data[0]);
+ ret = s->async->inttrig(dev, s, data[0]);
if (ret >= 0)
ret = 1;
break;
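The rule being enforced, in a generic sketch with a hypothetical handler (not the comedi code): copy user-supplied data into a kernel buffer with copy_from_user() and only ever dereference the kernel copy.
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/uaccess.h>
static int read_first_value(unsigned int __user *udata, unsigned int n)
{
        unsigned int kdata[16];
        if (n == 0 || n > ARRAY_SIZE(kdata))
                return -EINVAL;
        if (copy_from_user(kdata, udata, n * sizeof(kdata[0])))
                return -EFAULT;
        return kdata[0];        /* operate on the kernel copy, never on udata[0] */
}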
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tejun Heo <[email protected]>
commit 60ea8226cbd5c8301f9a39edc574ddabcb8150e0 upstream.
A queue newly allocated with blk_alloc_queue_node() has only
QUEUE_FLAG_BYPASS set. For request-based drivers,
blk_init_allocated_queue() is called and q->queue_flags is overwritten
with QUEUE_FLAG_DEFAULT which doesn't include BYPASS even though the
initial bypass is still in effect.
In blk_init_allocated_queue(), OR QUEUE_FLAG_DEFAULT into q->queue_flags
instead of overwriting it.
Signed-off-by: Tejun Heo <[email protected]>
Acked-by: Vivek Goyal <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
block/blk-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 80e29c9..a17869f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -696,7 +696,7 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
q->request_fn = rfn;
q->prep_rq_fn = NULL;
q->unprep_rq_fn = NULL;
- q->queue_flags = QUEUE_FLAG_DEFAULT;
+ q->queue_flags |= QUEUE_FLAG_DEFAULT;
/* Override internal queue lock with supplied lock pointer */
if (lock)
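The difference being fixed, reduced to a short sketch with hypothetical flag values: plain assignment discards bits set earlier, while OR-ing preserves them.
#define FLAG_BYPASS     (1 << 0)        /* set when the queue is allocated */
#define FLAG_DEFAULT    (1 << 1)
static void set_default_flags(unsigned long *queue_flags)
{
        /* *queue_flags = FLAG_DEFAULT;  would silently clear FLAG_BYPASS */
        *queue_flags |= FLAG_DEFAULT;   /* keeps the initial bypass bit */
}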
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <[email protected]>
commit a519fc7a70d1a918574bb826cc6905b87b482eb9 upstream.
Instead of doing a shutdown() call, we need to do an actual close().
Ditto if/when the server is sending us junk RPC headers.
Signed-off-by: Trond Myklebust <[email protected]>
Tested-by: Simon Kirby <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
net/sunrpc/xprtsock.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index a35b8e5..d1988cf 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1025,6 +1025,16 @@ static void xs_udp_data_ready(struct sock *sk, int len)
read_unlock_bh(&sk->sk_callback_lock);
}
+/*
+ * Helper function to force a TCP close if the server is sending
+ * junk and/or it has put us in CLOSE_WAIT
+ */
+static void xs_tcp_force_close(struct rpc_xprt *xprt)
+{
+ set_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
+ xprt_force_disconnect(xprt);
+}
+
static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_reader *desc)
{
struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
@@ -1051,7 +1061,7 @@ static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_rea
/* Sanity check of the record length */
if (unlikely(transport->tcp_reclen < 8)) {
dprintk("RPC: invalid TCP record fragment length\n");
- xprt_force_disconnect(xprt);
+ xs_tcp_force_close(xprt);
return;
}
dprintk("RPC: reading TCP record fragment of length %d\n",
@@ -1132,7 +1142,7 @@ static inline void xs_tcp_read_calldir(struct sock_xprt *transport,
break;
default:
dprintk("RPC: invalid request message type\n");
- xprt_force_disconnect(&transport->xprt);
+ xs_tcp_force_close(&transport->xprt);
}
xs_tcp_check_fraghdr(transport);
}
@@ -1455,6 +1465,8 @@ static void xs_tcp_cancel_linger_timeout(struct rpc_xprt *xprt)
static void xs_sock_mark_closed(struct rpc_xprt *xprt)
{
smp_mb__before_clear_bit();
+ clear_bit(XPRT_CONNECTION_ABORT, &xprt->state);
+ clear_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
clear_bit(XPRT_CLOSING, &xprt->state);
smp_mb__after_clear_bit();
@@ -1512,8 +1524,8 @@ static void xs_tcp_state_change(struct sock *sk)
break;
case TCP_CLOSE_WAIT:
/* The server initiated a shutdown of the socket */
- xprt_force_disconnect(xprt);
xprt->connect_cookie++;
+ xs_tcp_force_close(xprt);
case TCP_CLOSING:
/*
* If the server closed down the connection, make sure that
@@ -2199,8 +2211,7 @@ static void xs_tcp_setup_socket(struct work_struct *work)
/* We're probably in TIME_WAIT. Get rid of existing socket,
* and retry
*/
- set_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
- xprt_force_disconnect(xprt);
+ xs_tcp_force_close(xprt);
break;
case -ECONNREFUSED:
case -ECONNRESET:
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit f25590f39d543272f7ae7b00d533359c8d7ff331 upstream.
This patch adds a missing iscsi_reject->ffffffff assignment within
iscsit_send_reject() code to properly follow RFC-3720 Section 10.17
Bytes 16 -> 19 for the PDU format definition of ISCSI_OP_REJECT.
We've not seen any initiators care about these bytes in practice, but
as Ronnie reported that this was causing trouble with wireshark packet
decoding, let's go ahead and fix this up now.
Reported-by: Ronnie Sahlberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/target/iscsi/iscsi_target.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 9cfdeed..30842e1 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3427,6 +3427,7 @@ static int iscsit_send_reject(
hdr->opcode = ISCSI_OP_REJECT;
hdr->flags |= ISCSI_FLAG_CMD_FINAL;
hton24(hdr->dlength, ISCSI_HDR_LEN);
+ hdr->ffffffff = 0xffffffff;
cmd->stat_sn = conn->stat_sn++;
hdr->statsn = cpu_to_be32(cmd->stat_sn);
hdr->exp_cmdsn = cpu_to_be32(conn->sess->exp_cmd_sn);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gustavo Padovan <[email protected]>
commit ee66401bb94b1f2ce51089c341dcdd25d26ae631 upstream.
Foxconn devices have a vendor-specific device class; we will match them
differently now.
Signed-off-by: Gustavo Padovan <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/bluetooth/btusb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -108,7 +108,7 @@ static struct usb_device_id btusb_table[
{ USB_DEVICE(0x413c, 0x8197) },
/* Foxconn - Hon Hai */
- { USB_DEVICE(0x0489, 0xe033) },
+ { USB_VENDOR_AND_INTERFACE_INFO(0x0489, 0xff, 0x01, 0x01) },
/*Broadcom devices with vendor specific id */
{ USB_VENDOR_AND_INTERFACE_INFO(0x0a5c, 0xff, 0x01, 0x01) },
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steffen Maier <[email protected]>
commit 01e60527f0a49b3d7df603010bd6079bb4b6cf07 upstream.
The pl vector has scount elements, i.e. pl[scount-1] is the last valid
element. For maximum sized requests, payload->counter == scount after
the last loop iteration. Therefore, do the bounds check first (relying
on boolean short-circuit evaluation) so that the invalid element
pl[scount] is never accessed.
Do not trust the maximum sbale->scount value from the HBA
but ensure we won't access the pl vector out of our allocated bounds.
While at it, clean up scoping and prevent unnecessary memset.
Minor fix for 86a9668a8d29ea711613e1cb37efa68e7c4db564
"[SCSI] zfcp: support for hardware data router"
Signed-off-by: Steffen Maier <[email protected]>
Reviewed-by: Martin Peschke <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/s390/scsi/zfcp_dbf.c | 2 +-
drivers/s390/scsi/zfcp_qdio.c | 16 ++++++++++------
2 files changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
index 3c1d220..c6e47d5 100644
--- a/drivers/s390/scsi/zfcp_dbf.c
+++ b/drivers/s390/scsi/zfcp_dbf.c
@@ -191,7 +191,7 @@ void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount,
length = min((u16)sizeof(struct qdio_buffer),
(u16)ZFCP_DBF_PAY_MAX_REC);
- while ((char *)pl[payload->counter] && payload->counter < scount) {
+ while (payload->counter < scount && (char *)pl[payload->counter]) {
memcpy(payload->data, (char *)pl[payload->counter], length);
debug_event(dbf->pay, 1, payload, zfcp_dbf_plen(length));
payload->counter++;
diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
index b9fffc8..50b5615 100644
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -102,18 +102,22 @@ static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
{
struct zfcp_qdio *qdio = (struct zfcp_qdio *) parm;
struct zfcp_adapter *adapter = qdio->adapter;
- struct qdio_buffer_element *sbale;
int sbal_no, sbal_idx;
- void *pl[ZFCP_QDIO_MAX_SBALS_PER_REQ + 1];
- u64 req_id;
- u8 scount;
if (unlikely(qdio_err)) {
- memset(pl, 0, ZFCP_QDIO_MAX_SBALS_PER_REQ * sizeof(void *));
if (zfcp_adapter_multi_buffer_active(adapter)) {
+ void *pl[ZFCP_QDIO_MAX_SBALS_PER_REQ + 1];
+ struct qdio_buffer_element *sbale;
+ u64 req_id;
+ u8 scount;
+
+ memset(pl, 0,
+ ZFCP_QDIO_MAX_SBALS_PER_REQ * sizeof(void *));
sbale = qdio->res_q[idx]->element;
req_id = (u64) sbale->addr;
- scount = sbale->scount + 1; /* incl. signaling SBAL */
+ scount = min(sbale->scount + 1,
+ ZFCP_QDIO_MAX_SBALS_PER_REQ + 1);
+ /* incl. signaling SBAL */
for (sbal_no = 0; sbal_no < scount; sbal_no++) {
sbal_idx = (idx + sbal_no) %
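The swapped while-condition relies on the left-to-right short-circuit
evaluation of &&: once the counter reaches scount, the out-of-bounds element
pl[scount] is never evaluated at all. A standalone sketch of that ordering
rule, not the zfcp code itself:
#include <stdio.h>
#include <stddef.h>
int main(void)
{
	void *pl[4] = { "a", "b", "c", "d" };	/* exactly scount valid slots */
	size_t scount = 4;
	size_t counter = 0;
	/* bounds test first: when counter == scount, pl[counter] is not
	 * evaluated; with the operands swapped, pl[4] would be read here */
	while (counter < scount && pl[counter]) {
		printf("slot %zu: %s\n", counter, (char *)pl[counter]);
		counter++;
	}
	return 0;
}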
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steffen Maier <[email protected]>
commit 0100998dbfe6dfcd90a6e912ca7ed6f255d48f25 upstream.
The duplicate tag fssrh_2 from a54ca0f62f953898b05549391ac2a8a4dad6482b
"[SCSI] zfcp: Redesign of the debug tracing for HBA records."
makes it hard to distinguish a generic status read response from
a local link up.
The duplicate tag fsscth1 from 2c55b750a884b86dea8b4cc5f15e1484cc47a25c
"[SCSI] zfcp: Redesign of the debug tracing for SAN records."
makes it hard to distinguish a good common transport response from
an invalid port handle.
Signed-off-by: Steffen Maier <[email protected]>
Reviewed-by: Martin Peschke <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/s390/scsi/zfcp_fsf.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
index 30436d5..6e2892e 100644
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -219,7 +219,7 @@ static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req)
return;
}
- zfcp_dbf_hba_fsf_uss("fssrh_2", req);
+ zfcp_dbf_hba_fsf_uss("fssrh_4", req);
switch (sr_buf->status_type) {
case FSF_STATUS_READ_PORT_CLOSED:
@@ -915,7 +915,7 @@ static void zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *req)
switch (header->fsf_status) {
case FSF_GOOD:
- zfcp_dbf_san_res("fsscth1", req);
+ zfcp_dbf_san_res("fsscth2", req);
ct->status = 0;
break;
case FSF_SERVICE_CLASS_NOT_SUPPORTED:
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steffen Maier <[email protected]>
commit cb45214960bc989af8b911ebd77da541c797717d upstream.
If the mapping of FCP device bus ID and corresponding subchannel
is modified while the Linux image is suspended, the resume of FCP
devices can fail. During resume, zfcp gets callbacks from cio regarding
the modified subchannels but they can be arbitrarily mixed with the
restore/resume callback. Since the cio callbacks would trigger
adapter recovery, zfcp could wake up before the resume callback.
Therefore, ignore the cio callbacks regarding subchannels while
being suspended. We can safely do so, since zfcp does not deal itself
with subchannels. For problem determination purposes, we still trace the
ignored callback events.
The following kernel messages could be seen on resume:
kernel: <WWPN>: parent <FCP device bus ID> should not be sleeping
As part of adapter reopen recovery, zfcp performs auto port scanning
which can erroneously try to register new remote ports with
scsi_transport_fc and the device core code complains about the parent
(adapter) still sleeping.
kernel: zfcp.3dff9c: <FCP device bus ID>:\
Setting up the QDIO connection to the FCP adapter failed
<last kernel message repeated 3 more times>
kernel: zfcp.574d43: <FCP device bus ID>:\
ERP cannot recover an error on the FCP device
In such cases, the adapter gave up recovery and remained blocked along
with its child objects: remote ports and LUNs/scsi devices. Even the
adapter shutdown as part of giving up recovery failed because the ccw
device state remained disconnected. Later, the corresponding remote
ports ran into dev_loss_tmo. As a result, the LUNs were erroneously
not available again after resume.
Even a manually triggered adapter recovery (e.g. sysfs attribute
failed, or device offline/online via sysfs) could not recover the
adapter due to the remaining disconnected state of the corresponding
ccw device.
Signed-off-by: Steffen Maier <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/s390/scsi/zfcp_ccw.c | 73 ++++++++++++++++++++++++++++++++++++------
drivers/s390/scsi/zfcp_dbf.c | 20 ++++++++++++
drivers/s390/scsi/zfcp_dbf.h | 1 +
drivers/s390/scsi/zfcp_def.h | 1 +
drivers/s390/scsi/zfcp_ext.h | 1 +
5 files changed, 86 insertions(+), 10 deletions(-)
diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c
index e37f045..9646766 100644
--- a/drivers/s390/scsi/zfcp_ccw.c
+++ b/drivers/s390/scsi/zfcp_ccw.c
@@ -39,17 +39,23 @@ void zfcp_ccw_adapter_put(struct zfcp_adapter *adapter)
spin_unlock_irqrestore(&zfcp_ccw_adapter_ref_lock, flags);
}
-static int zfcp_ccw_activate(struct ccw_device *cdev)
-
+/**
+ * zfcp_ccw_activate - activate adapter and wait for it to finish
+ * @cdev: pointer to belonging ccw device
+ * @clear: Status flags to clear.
+ * @tag: s390dbf trace record tag
+ */
+static int zfcp_ccw_activate(struct ccw_device *cdev, int clear, char *tag)
{
struct zfcp_adapter *adapter = zfcp_ccw_adapter_by_cdev(cdev);
if (!adapter)
return 0;
+ zfcp_erp_clear_adapter_status(adapter, clear);
zfcp_erp_set_adapter_status(adapter, ZFCP_STATUS_COMMON_RUNNING);
zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED,
- "ccresu2");
+ tag);
zfcp_erp_wait(adapter);
flush_work(&adapter->scan_work);
@@ -164,26 +170,29 @@ static int zfcp_ccw_set_online(struct ccw_device *cdev)
BUG_ON(!zfcp_reqlist_isempty(adapter->req_list));
adapter->req_no = 0;
- zfcp_ccw_activate(cdev);
+ zfcp_ccw_activate(cdev, 0, "ccsonl1");
zfcp_ccw_adapter_put(adapter);
return 0;
}
/**
- * zfcp_ccw_set_offline - set_offline function of zfcp driver
+ * zfcp_ccw_offline_sync - shut down adapter and wait for it to finish
* @cdev: pointer to belonging ccw device
+ * @set: Status flags to set.
+ * @tag: s390dbf trace record tag
*
* This function gets called by the common i/o layer and sets an adapter
* into state offline.
*/
-static int zfcp_ccw_set_offline(struct ccw_device *cdev)
+static int zfcp_ccw_offline_sync(struct ccw_device *cdev, int set, char *tag)
{
struct zfcp_adapter *adapter = zfcp_ccw_adapter_by_cdev(cdev);
if (!adapter)
return 0;
- zfcp_erp_adapter_shutdown(adapter, 0, "ccsoff1");
+ zfcp_erp_set_adapter_status(adapter, set);
+ zfcp_erp_adapter_shutdown(adapter, 0, tag);
zfcp_erp_wait(adapter);
zfcp_ccw_adapter_put(adapter);
@@ -191,6 +200,18 @@ static int zfcp_ccw_set_offline(struct ccw_device *cdev)
}
/**
+ * zfcp_ccw_set_offline - set_offline function of zfcp driver
+ * @cdev: pointer to belonging ccw device
+ *
+ * This function gets called by the common i/o layer and sets an adapter
+ * into state offline.
+ */
+static int zfcp_ccw_set_offline(struct ccw_device *cdev)
+{
+ return zfcp_ccw_offline_sync(cdev, 0, "ccsoff1");
+}
+
+/**
* zfcp_ccw_notify - ccw notify function
* @cdev: pointer to belonging ccw device
* @event: indicates if adapter was detached or attached
@@ -207,6 +228,11 @@ static int zfcp_ccw_notify(struct ccw_device *cdev, int event)
switch (event) {
case CIO_GONE:
+ if (atomic_read(&adapter->status) &
+ ZFCP_STATUS_ADAPTER_SUSPENDED) { /* notification ignore */
+ zfcp_dbf_hba_basic("ccnigo1", adapter);
+ break;
+ }
dev_warn(&cdev->dev, "The FCP device has been detached\n");
zfcp_erp_adapter_shutdown(adapter, 0, "ccnoti1");
break;
@@ -216,6 +242,11 @@ static int zfcp_ccw_notify(struct ccw_device *cdev, int event)
zfcp_erp_adapter_shutdown(adapter, 0, "ccnoti2");
break;
case CIO_OPER:
+ if (atomic_read(&adapter->status) &
+ ZFCP_STATUS_ADAPTER_SUSPENDED) { /* notification ignore */
+ zfcp_dbf_hba_basic("ccniop1", adapter);
+ break;
+ }
dev_info(&cdev->dev, "The FCP device is operational again\n");
zfcp_erp_set_adapter_status(adapter,
ZFCP_STATUS_COMMON_RUNNING);
@@ -251,6 +282,28 @@ static void zfcp_ccw_shutdown(struct ccw_device *cdev)
zfcp_ccw_adapter_put(adapter);
}
+static int zfcp_ccw_suspend(struct ccw_device *cdev)
+{
+ zfcp_ccw_offline_sync(cdev, ZFCP_STATUS_ADAPTER_SUSPENDED, "ccsusp1");
+ return 0;
+}
+
+static int zfcp_ccw_thaw(struct ccw_device *cdev)
+{
+ /* trace records for thaw and final shutdown during suspend
+ can only be found in system dump until the end of suspend
+ but not after resume because it's based on the memory image
+ right after the very first suspend (freeze) callback */
+ zfcp_ccw_activate(cdev, 0, "ccthaw1");
+ return 0;
+}
+
+static int zfcp_ccw_resume(struct ccw_device *cdev)
+{
+ zfcp_ccw_activate(cdev, ZFCP_STATUS_ADAPTER_SUSPENDED, "ccresu1");
+ return 0;
+}
+
struct ccw_driver zfcp_ccw_driver = {
.driver = {
.owner = THIS_MODULE,
@@ -263,7 +316,7 @@ struct ccw_driver zfcp_ccw_driver = {
.set_offline = zfcp_ccw_set_offline,
.notify = zfcp_ccw_notify,
.shutdown = zfcp_ccw_shutdown,
- .freeze = zfcp_ccw_set_offline,
- .thaw = zfcp_ccw_activate,
- .restore = zfcp_ccw_activate,
+ .freeze = zfcp_ccw_suspend,
+ .thaw = zfcp_ccw_thaw,
+ .restore = zfcp_ccw_resume,
};
diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
index c6e47d5..e1a8cc2 100644
--- a/drivers/s390/scsi/zfcp_dbf.c
+++ b/drivers/s390/scsi/zfcp_dbf.c
@@ -200,6 +200,26 @@ void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount,
spin_unlock_irqrestore(&dbf->pay_lock, flags);
}
+/**
+ * zfcp_dbf_hba_basic - trace event for basic adapter events
+ * @adapter: pointer to struct zfcp_adapter
+ */
+void zfcp_dbf_hba_basic(char *tag, struct zfcp_adapter *adapter)
+{
+ struct zfcp_dbf *dbf = adapter->dbf;
+ struct zfcp_dbf_hba *rec = &dbf->hba_buf;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dbf->hba_lock, flags);
+ memset(rec, 0, sizeof(*rec));
+
+ memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
+ rec->id = ZFCP_DBF_HBA_BASIC;
+
+ debug_event(dbf->hba, 1, rec, sizeof(*rec));
+ spin_unlock_irqrestore(&dbf->hba_lock, flags);
+}
+
static void zfcp_dbf_set_common(struct zfcp_dbf_rec *rec,
struct zfcp_adapter *adapter,
struct zfcp_port *port,
diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h
index 714f087..3ac7a4b 100644
--- a/drivers/s390/scsi/zfcp_dbf.h
+++ b/drivers/s390/scsi/zfcp_dbf.h
@@ -154,6 +154,7 @@ enum zfcp_dbf_hba_id {
ZFCP_DBF_HBA_RES = 1,
ZFCP_DBF_HBA_USS = 2,
ZFCP_DBF_HBA_BIT = 3,
+ ZFCP_DBF_HBA_BASIC = 4,
};
/**
diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h
index 2955e1a..5bc4afba 100644
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -77,6 +77,7 @@ struct zfcp_reqlist;
#define ZFCP_STATUS_ADAPTER_SIOSL_ISSUED 0x00000004
#define ZFCP_STATUS_ADAPTER_XCONFIG_OK 0x00000008
#define ZFCP_STATUS_ADAPTER_HOST_CON_INIT 0x00000010
+#define ZFCP_STATUS_ADAPTER_SUSPENDED 0x00000040
#define ZFCP_STATUS_ADAPTER_ERP_PENDING 0x00000100
#define ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED 0x00000200
#define ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED 0x00000400
diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
index 36f4227..4eeef78 100644
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -54,6 +54,7 @@ extern void zfcp_dbf_hba_fsf_res(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_bit_err(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_berr(struct zfcp_dbf *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_def_err(struct zfcp_adapter *, u64, u16, void **);
+extern void zfcp_dbf_hba_basic(char *, struct zfcp_adapter *);
extern void zfcp_dbf_san_req(char *, struct zfcp_fsf_req *, u32);
extern void zfcp_dbf_san_res(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_san_in_els(char *, struct zfcp_fsf_req *);
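At its core the fix is a status bit that the freeze callback sets and the
notify callback checks, so subchannel events racing with suspend are only
traced instead of triggering recovery. A much-simplified, single-threaded
sketch of that gating pattern with made-up names (the real code uses atomic
status bits and s390dbf tracing):
#include <stdio.h>
#define DEMO_SUSPENDED	(1u << 0)
static unsigned int adapter_status;
static void demo_freeze(void)
{
	adapter_status |= DEMO_SUSPENDED;	/* set before going to sleep */
}
static void demo_restore(void)
{
	adapter_status &= ~DEMO_SUSPENDED;	/* cleared on resume */
}
static void demo_notify(const char *event)
{
	if (adapter_status & DEMO_SUSPENDED) {
		/* only trace it; acting on it would wake recovery too early */
		printf("trace: ignored '%s' while suspended\n", event);
		return;
	}
	printf("handling '%s'\n", event);
}
int main(void)
{
	demo_freeze();
	demo_notify("subchannel gone");		/* ignored, only traced */
	demo_restore();
	demo_notify("subchannel operational");	/* handled normally */
	return 0;
}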
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sujith Manoharan <[email protected]>
commit 046b6802c8d3c8a57448485513bf7291633e0fa3 upstream.
Currently, ASPM is disabled for all WLAN+BT combo chipsets
when BTCOEX is enabled. This is incorrect since the workaround
is required only for WB195, which is a AR9285+AR3011 combo
solution. Fix this by checking for the HW version when enabling
the workaround.
Signed-off-by: Sujith Manoharan <[email protected]>
Tested-by: Paul Stewart <[email protected]>
Signed-off-by: John W. Linville <[email protected]>
[bwh: Backported to 3.2: ath9k_hw_get_btcoex_scheme() function is missing]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/net/wireless/ath/ath9k/pci.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/net/wireless/ath/ath9k/pci.c
+++ b/drivers/net/wireless/ath/ath9k/pci.c
@@ -122,8 +122,9 @@ static void ath_pci_aspm_init(struct ath
if (!parent)
return;
- if (ah->btcoex_hw.scheme != ATH_BTCOEX_CFG_NONE) {
- /* Bluetooth coexistance requires disabling ASPM. */
+ if ((ah->btcoex_hw.scheme != ATH_BTCOEX_CFG_NONE) &&
+ (AR_SREV_9285(ah))) {
+ /* Bluetooth coexistance requires disabling ASPM for AR9285. */
pci_read_config_byte(pdev, pos + PCI_EXP_LNKCTL, &aspm);
aspm &= ~(PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
pci_write_config_byte(pdev, pos + PCI_EXP_LNKCTL, aspm);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steffen Maier <[email protected]>
commit d99b601b63386f3395dc26a699ae703a273d9982 upstream.
Upstream commit f3450c7b917201bb49d67032e9f60d5125675d6a
"[SCSI] zfcp: Replace local reference counting with common kref"
accidentally dropped a reference count check before tearing down
zfcp_ports that are potentially in use by zfcp_units.
Even remote ports that are in use can be removed, leaving behind
unreachable garbage zfcp_port objects together with their zfcp_units.
Thus units won't come back even after a manual port_rescan.
The kref of zfcp_port->dev.kobj is already used by the driver core.
We cannot re-use it to track the number of zfcp_units.
Re-introduce our own counter for units per port
and check it on port_remove.
Signed-off-by: Steffen Maier <[email protected]>
Reviewed-by: Heiko Carstens <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/s390/scsi/zfcp_aux.c | 1 +
drivers/s390/scsi/zfcp_def.h | 1 +
drivers/s390/scsi/zfcp_ext.h | 1 +
drivers/s390/scsi/zfcp_sysfs.c | 18 ++++++++++++++++--
drivers/s390/scsi/zfcp_unit.c | 36 ++++++++++++++++++++++++++----------
5 files changed, 45 insertions(+), 12 deletions(-)
diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c
index aff8621..f6adde4 100644
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -519,6 +519,7 @@ struct zfcp_port *zfcp_port_enqueue(struct zfcp_adapter *adapter, u64 wwpn,
rwlock_init(&port->unit_list_lock);
INIT_LIST_HEAD(&port->unit_list);
+ atomic_set(&port->units, 0);
INIT_WORK(&port->gid_pn_work, zfcp_fc_port_did_lookup);
INIT_WORK(&port->test_link_work, zfcp_fc_link_test_work);
diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h
index 5bc4afba..1305955 100644
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -205,6 +205,7 @@ struct zfcp_port {
struct zfcp_adapter *adapter; /* adapter used to access port */
struct list_head unit_list; /* head of logical unit list */
rwlock_t unit_list_lock; /* unit list lock */
+ atomic_t units; /* zfcp_unit count */
atomic_t status; /* status of this remote port */
u64 wwnn; /* WWNN if known */
u64 wwpn; /* WWPN */
diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
index 4eeef78..03441a7f 100644
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -159,6 +159,7 @@ extern void zfcp_scsi_dif_sense_error(struct scsi_cmnd *, int);
extern struct attribute_group zfcp_sysfs_unit_attrs;
extern struct attribute_group zfcp_sysfs_adapter_attrs;
extern struct attribute_group zfcp_sysfs_port_attrs;
+extern struct mutex zfcp_sysfs_port_units_mutex;
extern struct device_attribute *zfcp_sysfs_sdev_attrs[];
extern struct device_attribute *zfcp_sysfs_shost_attrs[];
diff --git a/drivers/s390/scsi/zfcp_sysfs.c b/drivers/s390/scsi/zfcp_sysfs.c
index c66af27..1e0eb08 100644
--- a/drivers/s390/scsi/zfcp_sysfs.c
+++ b/drivers/s390/scsi/zfcp_sysfs.c
@@ -227,6 +227,8 @@ static ssize_t zfcp_sysfs_port_rescan_store(struct device *dev,
static ZFCP_DEV_ATTR(adapter, port_rescan, S_IWUSR, NULL,
zfcp_sysfs_port_rescan_store);
+DEFINE_MUTEX(zfcp_sysfs_port_units_mutex);
+
static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
@@ -249,6 +251,16 @@ static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
else
retval = 0;
+ mutex_lock(&zfcp_sysfs_port_units_mutex);
+ if (atomic_read(&port->units) > 0) {
+ retval = -EBUSY;
+ mutex_unlock(&zfcp_sysfs_port_units_mutex);
+ goto out;
+ }
+ /* port is about to be removed, so no more unit_add */
+ atomic_set(&port->units, -1);
+ mutex_unlock(&zfcp_sysfs_port_units_mutex);
+
write_lock_irq(&adapter->port_list_lock);
list_del(&port->list);
write_unlock_irq(&adapter->port_list_lock);
@@ -289,12 +301,14 @@ static ssize_t zfcp_sysfs_unit_add_store(struct device *dev,
{
struct zfcp_port *port = container_of(dev, struct zfcp_port, dev);
u64 fcp_lun;
+ int retval;
if (strict_strtoull(buf, 0, (unsigned long long *) &fcp_lun))
return -EINVAL;
- if (zfcp_unit_add(port, fcp_lun))
- return -EINVAL;
+ retval = zfcp_unit_add(port, fcp_lun);
+ if (retval)
+ return retval;
return count;
}
diff --git a/drivers/s390/scsi/zfcp_unit.c b/drivers/s390/scsi/zfcp_unit.c
index 3f2bff0..1cd2b99 100644
--- a/drivers/s390/scsi/zfcp_unit.c
+++ b/drivers/s390/scsi/zfcp_unit.c
@@ -104,7 +104,7 @@ static void zfcp_unit_release(struct device *dev)
{
struct zfcp_unit *unit = container_of(dev, struct zfcp_unit, dev);
- put_device(&unit->port->dev);
+ atomic_dec(&unit->port->units);
kfree(unit);
}
@@ -119,16 +119,27 @@ static void zfcp_unit_release(struct device *dev)
int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
{
struct zfcp_unit *unit;
+ int retval = 0;
+
+ mutex_lock(&zfcp_sysfs_port_units_mutex);
+ if (atomic_read(&port->units) == -1) {
+ /* port is already gone */
+ retval = -ENODEV;
+ goto out;
+ }
unit = zfcp_unit_find(port, fcp_lun);
if (unit) {
put_device(&unit->dev);
- return -EEXIST;
+ retval = -EEXIST;
+ goto out;
}
unit = kzalloc(sizeof(struct zfcp_unit), GFP_KERNEL);
- if (!unit)
- return -ENOMEM;
+ if (!unit) {
+ retval = -ENOMEM;
+ goto out;
+ }
unit->port = port;
unit->fcp_lun = fcp_lun;
@@ -139,28 +150,33 @@ int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
if (dev_set_name(&unit->dev, "0x%016llx",
(unsigned long long) fcp_lun)) {
kfree(unit);
- return -ENOMEM;
+ retval = -ENOMEM;
+ goto out;
}
- get_device(&port->dev);
-
if (device_register(&unit->dev)) {
put_device(&unit->dev);
- return -ENOMEM;
+ retval = -ENOMEM;
+ goto out;
}
if (sysfs_create_group(&unit->dev.kobj, &zfcp_sysfs_unit_attrs)) {
device_unregister(&unit->dev);
- return -EINVAL;
+ retval = -EINVAL;
+ goto out;
}
+ atomic_inc(&port->units); /* under zfcp_sysfs_port_units_mutex ! */
+
write_lock_irq(&port->unit_list_lock);
list_add_tail(&unit->list, &port->unit_list);
write_unlock_irq(&port->unit_list_lock);
zfcp_unit_scsi_scan(unit);
- return 0;
+out:
+ mutex_unlock(&zfcp_sysfs_port_units_mutex);
+ return retval;
}
/**
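The counter doubles as a tombstone: port_remove sets it to -1 under the
mutex once no units remain, so a racing unit_add observes either a busy port
(count > 0) or a dead one (-1) and backs off. A compact user-space sketch of
that protocol with made-up names (the real code uses an atomic_t under
zfcp_sysfs_port_units_mutex):
#include <stdio.h>
#include <pthread.h>
static pthread_mutex_t units_mutex = PTHREAD_MUTEX_INITIALIZER;
static int units;	/* >= 0: live port with that many units, -1: removed */
static int demo_unit_add(void)
{
	int ret = 0;
	pthread_mutex_lock(&units_mutex);
	if (units == -1)
		ret = -1;	/* port already gone, refuse */
	else
		units++;	/* counted under the same mutex */
	pthread_mutex_unlock(&units_mutex);
	return ret;
}
static int demo_port_remove(void)
{
	int ret = 0;
	pthread_mutex_lock(&units_mutex);
	if (units > 0)
		ret = -1;	/* busy: units still attached */
	else
		units = -1;	/* tombstone: no unit_add from now on */
	pthread_mutex_unlock(&units_mutex);
	return ret;
}
int main(void)
{
	printf("add:    %d\n", demo_unit_add());	/* 0: succeeded */
	printf("remove: %d\n", demo_port_remove());	/* -1: still busy */
	return 0;
}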
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bjørn Mork <[email protected]>
commit c638eb2872b3af079501e7ee44cbb8a5cce9b4b5 upstream.
The three Pantech devices UML190 (106c:3716), UML290 (106c:3718) and
P4200 (106c:3721) all use the same subclasses to identify vendor
specific functions. Replace the existing device specific entries
with generic vendor matching, adding support for the P4200.
Signed-off-by: Bjørn Mork <[email protected]>
Cc: Thomas Schäfer <[email protected]>
Acked-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/serial/qcaux.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/usb/serial/qcaux.c b/drivers/usb/serial/qcaux.c
index a4edc7e..9b1b96f 100644
--- a/drivers/usb/serial/qcaux.c
+++ b/drivers/usb/serial/qcaux.c
@@ -36,8 +36,6 @@
#define UTSTARCOM_PRODUCT_UM175_V1 0x3712
#define UTSTARCOM_PRODUCT_UM175_V2 0x3714
#define UTSTARCOM_PRODUCT_UM175_ALLTEL 0x3715
-#define PANTECH_PRODUCT_UML190_VZW 0x3716
-#define PANTECH_PRODUCT_UML290_VZW 0x3718
/* CMOTECH devices */
#define CMOTECH_VENDOR_ID 0x16d8
@@ -68,11 +66,9 @@ static struct usb_device_id id_table[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(LG_VENDOR_ID, LG_PRODUCT_VX4400_6000, 0xff, 0xff, 0x00) },
{ USB_DEVICE_AND_INTERFACE_INFO(SANYO_VENDOR_ID, SANYO_PRODUCT_KATANA_LX, 0xff, 0xff, 0x00) },
{ USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_U520, 0xff, 0x00, 0x00) },
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML190_VZW, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML190_VZW, 0xff, 0xfe, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xfd, 0xff) }, /* NMEA */
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xfe, 0xff) }, /* WMC */
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xff, 0xff) }, /* DIAG */
+ { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xfd, 0xff) }, /* NMEA */
+ { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xfe, 0xff) }, /* WMC */
+ { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xff, 0xff) }, /* DIAG */
{ },
};
MODULE_DEVICE_TABLE(usb, id_table);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Martin Peschke <[email protected]>
commit d436de8ce25f53a8a880a931886821f632247943 upstream.
__scsi_remove_device (e.g. due to dev_loss_tmo) calls
zfcp_scsi_slave_destroy which in turn sends a close LUN FSF request to
the adapter. After 30 seconds without response,
zfcp_erp_timeout_handler kicks the ERP thread failing the close LUN
ERP action. zfcp_erp_wait in zfcp_erp_lun_shutdown_wait and thus
zfcp_scsi_slave_destroy returns and then scsi_device is no longer
valid. Sometime later the response to the close LUN FSF request may
finally come in. However, commit
b62a8d9b45b971a67a0f8413338c230e3117dff5
"[SCSI] zfcp: Use SCSI device data zfcp_scsi_dev instead of zfcp_unit"
introduced a number of attempts to unconditionally access struct
zfcp_scsi_dev through struct scsi_device causing a use-after-free.
This leads to an Oops due to kernel page fault in one of:
zfcp_fsf_abort_fcp_command_handler, zfcp_fsf_open_lun_handler,
zfcp_fsf_close_lun_handler, zfcp_fsf_req_trace,
zfcp_fsf_fcp_handler_common.
Move dereferencing of zfcp private data zfcp_scsi_dev allocated in
scsi_device via scsi_transport_reserve_device after the check for
potentially aborted FSF request and thus no longer valid scsi_device.
Only then assign sdev_to_zfcp(sdev) to the local auto variable struct
zfcp_scsi_dev *zfcp_sdev.
Signed-off-by: Martin Peschke <[email protected]>
Signed-off-by: Steffen Maier <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/s390/scsi/zfcp_fsf.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
index 9ffdc33..c96320d 100644
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -801,12 +801,14 @@ out:
static void zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *req)
{
struct scsi_device *sdev = req->data;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
union fsf_status_qual *fsq = &req->qtcb->header.fsf_status_qual;
if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
switch (req->qtcb->header.fsf_status) {
case FSF_PORT_HANDLE_NOT_VALID:
if (fsq->word[0] == fsq->word[1]) {
@@ -1769,13 +1771,15 @@ static void zfcp_fsf_open_lun_handler(struct zfcp_fsf_req *req)
{
struct zfcp_adapter *adapter = req->adapter;
struct scsi_device *sdev = req->data;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
struct fsf_qtcb_header *header = &req->qtcb->header;
struct fsf_qtcb_bottom_support *bottom = &req->qtcb->bottom.support;
if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED |
ZFCP_STATUS_COMMON_ACCESS_BOXED |
ZFCP_STATUS_LUN_SHARED |
@@ -1886,11 +1890,13 @@ out:
static void zfcp_fsf_close_lun_handler(struct zfcp_fsf_req *req)
{
struct scsi_device *sdev = req->data;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
switch (req->qtcb->header.fsf_status) {
case FSF_PORT_HANDLE_NOT_VALID:
zfcp_erp_adapter_reopen(zfcp_sdev->port->adapter, 0, "fscuh_1");
@@ -1980,7 +1986,7 @@ static void zfcp_fsf_req_trace(struct zfcp_fsf_req *req, struct scsi_cmnd *scsi)
{
struct fsf_qual_latency_info *lat_in;
struct latency_cont *lat = NULL;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scsi->device);
+ struct zfcp_scsi_dev *zfcp_sdev;
struct zfcp_blk_drv_data blktrc;
int ticks = req->adapter->timer_ticks;
@@ -1995,6 +2001,7 @@ static void zfcp_fsf_req_trace(struct zfcp_fsf_req *req, struct scsi_cmnd *scsi)
if (req->adapter->adapter_features & FSF_FEATURE_MEASUREMENT_DATA &&
!(req->status & ZFCP_STATUS_FSFREQ_ERROR)) {
+ zfcp_sdev = sdev_to_zfcp(scsi->device);
blktrc.flags |= ZFCP_BLK_LAT_VALID;
blktrc.channel_lat = lat_in->channel_lat * ticks;
blktrc.fabric_lat = lat_in->fabric_lat * ticks;
@@ -2032,12 +2039,14 @@ static void zfcp_fsf_fcp_handler_common(struct zfcp_fsf_req *req)
{
struct scsi_cmnd *scmnd = req->data;
struct scsi_device *sdev = scmnd->device;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
struct fsf_qtcb_header *header = &req->qtcb->header;
if (unlikely(req->status & ZFCP_STATUS_FSFREQ_ERROR))
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
switch (header->fsf_status) {
case FSF_HANDLE_MISMATCH:
case FSF_PORT_HANDLE_NOT_VALID:
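The pattern repeated in all the touched handlers is the same: leave the
scsi_device pointer alone until the 'request failed/aborted' check has
passed, and only then derive the private data from it. A simplified
standalone sketch of that ordering, with made-up names rather than the zfcp
structures:
#include <stdio.h>
struct demo_dev {
	int handle;
};
struct demo_req {
	unsigned int status;	/* error bit set if the request was aborted */
	struct demo_dev *dev;	/* may already be stale on the error path */
};
#define DEMO_STATUS_ERROR 0x1u
static void demo_handler(struct demo_req *req)
{
	struct demo_dev *dev;
	/* bail out on the aborted/error case before any dereference ... */
	if (req->status & DEMO_STATUS_ERROR)
		return;
	/* ... only now is it safe to follow the pointer */
	dev = req->dev;
	printf("handle %d completed\n", dev->handle);
}
int main(void)
{
	struct demo_dev d = { .handle = 42 };
	struct demo_req ok = { .status = 0, .dev = &d };
	struct demo_req aborted = { .status = DEMO_STATUS_ERROR, .dev = NULL };
	demo_handler(&ok);
	demo_handler(&aborted);	/* returns early, never touches ->dev */
	return 0;
}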
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vivek Gautam <[email protected]>
commit 457a73d346187c2cc5d599072f38676f18f130e0 upstream.
In 71c731a: usb: host: xhci: Fix Compliance Mode on SN65LVPE502CP Hardware
when extracting DMI strings (vendor or product_name) to mark them as a
quirk, we may get a NULL pointer on non-x86 systems which don't define
CONFIG_DMI. Hence subsequent strstr() calls crash during driver probing.
So, return 'false' here in case we get a NULL vendor or product_name.
This is tested with ARM (exynos) system.
This patch should be backported to stable kernels as old as 3.6, that
contain the commit 71c731a296f1b08a3724bd1b514b64f1bda87a23 "usb: host:
xhci: Fix Compliance Mode on SN65LVPE502CP Hardware"
Signed-off-by: Vivek Gautam <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Reported-by: Sebastian Gottschall (DD-WRT) <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 52b04b0..8d7fcbb 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -471,6 +471,8 @@ static bool compliance_mode_recovery_timer_quirk_check(void)
dmi_product_name = dmi_get_system_info(DMI_PRODUCT_NAME);
dmi_sys_vendor = dmi_get_system_info(DMI_SYS_VENDOR);
+ if (!dmi_product_name || !dmi_sys_vendor)
+ return false;
if (!(strstr(dmi_sys_vendor, "Hewlett-Packard")))
return false;
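The guard is needed because dmi_get_system_info() may return NULL when no
DMI data is available (for instance with CONFIG_DMI disabled), and passing a
NULL haystack to strstr() is undefined behaviour. A standalone sketch of the
defensive check, with a stand-in for the DMI helper:
#include <stdio.h>
#include <string.h>
/* stand-in for dmi_get_system_info(); may legitimately return NULL */
static const char *demo_get_vendor(void)
{
	return NULL;	/* e.g. no DMI data on this platform */
}
static int demo_quirk_check(void)
{
	const char *vendor = demo_get_vendor();
	/* bail out before strstr(): a NULL haystack is undefined behaviour */
	if (!vendor)
		return 0;
	return strstr(vendor, "Hewlett-Packard") != NULL;
}
int main(void)
{
	printf("quirk needed: %d\n", demo_quirk_check());
	return 0;
}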
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Herrenschmidt <[email protected]>
commit 225c56960fcafeccc2b6304f96cd3f0dbf42a16a upstream.
The length field in the host config packet is only 16 bits long, so
passing it 0x10000 (64K, which is our standard PAGE_SIZE) doesn't
work and results in an empty config from the server.
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
Acked-by: Robert Jennings <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/scsi/ibmvscsi/ibmvscsi.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
index a846217..ef9a54c 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
@@ -1851,6 +1851,9 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
host_config = &evt_struct->iu.mad.host_config;
+ /* The transport length field is only 16-bit */
+ length = min(0xffff, length);
+
/* Set up a lun reset SRP command */
memset(host_config, 0x00, sizeof(*host_config));
host_config->common.type = VIOSRP_HOST_CONFIG_TYPE;
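Since the wire field is only 16 bits wide, anything larger would be
truncated on the way out; clamping keeps the request well-formed. A trivial
standalone illustration of the clamp, not the ibmvscsi code:
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
int main(void)
{
	size_t length = 0x10000;	/* e.g. a 64K PAGE_SIZE buffer */
	uint16_t wire_len;
	/* clamp before assigning to the 16-bit wire field */
	wire_len = (uint16_t)(length > 0xffff ? 0xffff : length);
	printf("requested %zu bytes, advertising %u\n",
	       length, (unsigned int)wire_len);
	return 0;
}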
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Vetter <[email protected]>
commit c9c4b6f6c28354f1df9bd288dc33ba7ae0e66aaa upstream.
It looks like the desktop variants of i915 and i945 also have the DCC
register to control dram channel interleave and cpu side bit6
swizzling.
Unfortunately internal Cspec/ConfigDB documentation for these ancient chips
has already been dropped and there seem to be no archives. Also
somebody thought the swizzling behaviour is surely a worthy secret to
keep and redacted any mention of these fields from the published Intel
datasheets.
I suspect the hw engineers were really proud of the page coloring
they achieved in their first dual channel dram controller with
bit17 - after all, Bspec explains at great length the optimal layout of
page frame numbers modulo 4 for the color and depth buffers, too.
Later on, when they started to work on VT-d, they shamefully
discovered their stupidity and tried to cover their tracks ...
Tested-by: Daniel Vetter <[email protected]> (i915g)
Tested-by: Pavel Ondračka <[email protected]> (i945g)
Tested-by: Chris Wilson <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=42625
Signed-Off-by: Daniel Vetter <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/i915/i915_gem_tiling.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index 31d334d..861223b 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -107,10 +107,10 @@ i915_gem_detect_bit_6_swizzle(struct drm_device *dev)
*/
swizzle_x = I915_BIT_6_SWIZZLE_NONE;
swizzle_y = I915_BIT_6_SWIZZLE_NONE;
- } else if (IS_MOBILE(dev)) {
+ } else if (IS_MOBILE(dev) || (IS_GEN3(dev) && !IS_G33(dev))) {
uint32_t dcc;
- /* On mobile 9xx chipsets, channel interleave by the CPU is
+ /* On 9xx chipsets, channel interleave by the CPU is
* determined by DCC. For single-channel, neither the CPU
* nor the GPU do swizzling. For dual channel interleaved,
* the GPU's interleave is bit 9 and 10 for X tiled, and bit
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <[email protected]>
commit e47f8976d8e573928824a06748f7bc82c58d747f upstream.
A quote from SPC-4: "While in the unavailable primary target port
asymmetric access state, the device server shall support those of
the following commands that it supports while in the active/optimized
state: [ ... ] d) SET TARGET PORT GROUPS; [ ... ]". Hence enable
sending STPG to a target port group that is in the unavailable state.
Signed-off-by: Bart Van Assche <[email protected]>
Reviewed-by: Mike Christie <[email protected]>
Acked-by: Hannes Reinecke <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/scsi/device_handler/scsi_dh_alua.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
index 08d80a6..6f4d8e6 100644
--- a/drivers/scsi/device_handler/scsi_dh_alua.c
+++ b/drivers/scsi/device_handler/scsi_dh_alua.c
@@ -641,8 +641,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_dh_data *h)
h->state = TPGS_STATE_STANDBY;
break;
case TPGS_STATE_OFFLINE:
- case TPGS_STATE_UNAVAILABLE:
- /* Path unusable for unavailable/offline */
+ /* Path unusable */
err = SCSI_DH_DEV_OFFLINED;
break;
default:
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Antonio Ospite <[email protected]>
commit 54575b05af36959dfb6a49a3e9ca0c2b456b7126 upstream.
TIAO/DIYGADGET USB Multi-Protocol Adapter (TUMPA) is an FTDI FT2232H
based device which provides an easily accessible JTAG, SPI, I2C, serial
breakout.
http://www.diygadget.com/tiao-usb-multi-protocol-adapter-jtag-spi-i2c-serial.html
http://www.tiaowiki.com/w/TIAO_USB_Multi_Protocol_Adapter_User%27s_Manual
FTDI FT2232H provides two serial channels (A and B), but on the TUMPA
channel A is dedicated to JTAG/SPI while channel B can be used for
UART/RS-232: use the ftdi_jtag_quirk to expose only channel B as
a usb-serial interface to userspace.
Signed-off-by: Antonio Ospite <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/serial/ftdi_sio.c | 2 ++
drivers/usb/serial/ftdi_sio_ids.h | 5 +++++
2 files changed, 7 insertions(+)
diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
index 8742a0e..9169f51 100644
--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -579,6 +579,8 @@ static struct usb_device_id id_table_combined [] = {
{ USB_DEVICE(FTDI_VID, FTDI_IBS_PEDO_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_IBS_PROD_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_TAVIR_STK500_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_TIAO_UMPA_PID),
+ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
/*
* ELV devices:
*/
diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
index 41fe582..57c12ef 100644
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -517,6 +517,11 @@
*/
#define FTDI_TAVIR_STK500_PID 0xFA33 /* STK500 AVR programmer */
+/*
+ * TIAO product ids (FTDI_VID)
+ * http://www.tiaowiki.com/w/Main_Page
+ */
+#define FTDI_TIAO_UMPA_PID 0x8a98 /* TIAO/DIYGADGET USB Multi-Protocol Adapter */
/********************************/
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paulo Zanoni <[email protected]>
commit 9d9740f099f2eaf309c4c9cbc0d732507140db28 upstream.
On IVB and older, we basically have two registers: the control and the
data register. We write a few consecutive times to the control
register, and we need these writes to arrive exactly in the specified
order.
Also, when we're changing the data register, we need to guarantee that
anything written to the control register already arrived (since
changing the control register can change where the data register
points to). Also, we need to make sure all the writes to the data
register happen exactly in the specified order, and we also *can't*
read the data register during this process, since reading and/or
writing it will change the place it points to.
So invoke the "better safe than sorry" rule and just be careful and
put barriers everywhere :)
On HSW we still have a control register that we write many times, but
we have many data registers.
Demanded-by: Chris Wilson <[email protected]>
Signed-off-by: Paulo Zanoni <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
[bwh: Backported to 3.2:
- There are only two write_infoframe functions to be modified
- The other VIDEO_DIP_CTL writes are in entirely different functions]
Signed-off-by: Ben Hutchings <[email protected]>
---
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -138,14 +138,17 @@ static void i9xx_write_infoframe(struct
I915_WRITE(VIDEO_DIP_CTL, VIDEO_DIP_ENABLE | val | port | flags);
+ mmiowb();
for (i = 0; i < len; i += 4) {
I915_WRITE(VIDEO_DIP_DATA, *data);
data++;
}
+ mmiowb();
flags |= intel_infoframe_flags(frame);
I915_WRITE(VIDEO_DIP_CTL, VIDEO_DIP_ENABLE | val | port | flags);
+ POSTING_READ(VIDEO_DIP_CTL);
}
static void ironlake_write_infoframe(struct drm_encoder *encoder,
@@ -168,14 +171,17 @@ static void ironlake_write_infoframe(str
I915_WRITE(reg, VIDEO_DIP_ENABLE | val | flags);
+ mmiowb();
for (i = 0; i < len; i += 4) {
I915_WRITE(TVIDEO_DIP_DATA(intel_crtc->pipe), *data);
data++;
}
+ mmiowb();
flags |= intel_infoframe_flags(frame);
I915_WRITE(reg, VIDEO_DIP_ENABLE | val | flags);
+ POSTING_READ(reg);
}
static void intel_set_infoframe(struct drm_encoder *encoder,
struct dip_infoframe *frame)
@@ -546,10 +552,13 @@ void intel_hdmi_init(struct drm_device *
if (!HAS_PCH_SPLIT(dev)) {
intel_hdmi->write_infoframe = i9xx_write_infoframe;
I915_WRITE(VIDEO_DIP_CTL, 0);
+ POSTING_READ(VIDEO_DIP_CTL);
} else {
intel_hdmi->write_infoframe = ironlake_write_infoframe;
- for_each_pipe(i)
+ for_each_pipe(i) {
I915_WRITE(TVIDEO_DIP_CTL(i), 0);
+ POSTING_READ(TVIDEO_DIP_CTL(i));
+ }
}
drm_encoder_helper_add(&intel_encoder->base, &intel_hdmi_helper_funcs);
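A generic sketch of the ordering pattern the patch applies: write the
control register, fence, stream out the data register writes, fence again,
then finish with a posting read so the writes are flushed to the device.
The register offsets and names below are invented and this is not the i915
code; it only illustrates the use of writel()/readl() and mmiowb():
/* illustrative MMIO ordering sketch, made-up device */
#include <linux/io.h>
#include <linux/types.h>
#define DEMO_CTL	0x00	/* control: selects where DATA points */
#define DEMO_DATA	0x04	/* data: auto-incrementing window */
static void demo_write_block(void __iomem *base, const u32 *buf, int words)
{
	int i;
	writel(0x1, base + DEMO_CTL);	/* select/enable the window */
	mmiowb();			/* order CTL write before DATA writes */
	for (i = 0; i < words; i++)
		writel(buf[i], base + DEMO_DATA);
	mmiowb();			/* order DATA writes before final CTL write */
	writel(0x3, base + DEMO_CTL);	/* commit */
	readl(base + DEMO_CTL);		/* posting read flushes to the device */
}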
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paulo Zanoni <[email protected]>
commit adf00b26d18e1b3570451296e03bcb20e4798cdd upstream.
... even if the actual infoframe is smaller than the maximum possible
size.
If we don't write all the 32 DIP data bytes the InfoFrame ECC may not
be correctly calculated in some cases (e.g., when changing the port),
and this will lead to black screens on HDMI monitors. The ECC value is
generated by the hardware.
I don't see how this should break anything since we're writing 0 and
that should be the correct value, so this patch should be safe.
Notice that on IVB and older we actually have 64 bytes available for
VIDEO_DIP_DATA, but only bytes 0-31 actually store infoframe data: the
others are either read-only ECC values or marked as "reserved". On HSW
we only have 32 bytes, and the ECC value is stored on its own separate
read-only register. See BSpec.
This patch fixes bug #46761, which is marked as a regression
introduced by commit 4e89ee174bb2da341bf90a84321c7008a3c9210d:
drm/i915: set the DIP port on ibx_write_infoframe
Before commit 4e89 we were just failing to send AVI infoframes when we
needed to change the port, which can lead to black screens in some
cases. After commit 4e89 we started sending infoframes, but with a
possibly wrong ECC value. After this patch I hope we start sending
correct infoframes.
Version 2:
- Improve commit message
- Try to make the code more clear
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=46761
Signed-off-by: Paulo Zanoni <[email protected]>
Reviewed-by: Rodrigo Vivi <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
[bwh: Backported to 3.2: only two write_infoframe functions to be modified]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/i915/i915_reg.h | 4 ++++
drivers/gpu/drm/i915/intel_hdmi.c | 15 +++++++++++++++
2 files changed, 19 insertions(+)
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -1557,6 +1557,10 @@
/* Video Data Island Packet control */
#define VIDEO_DIP_DATA 0x61178
+/* Read the description of VIDEO_DIP_DATA (before Haswel) or VIDEO_DIP_ECC
+ * (Haswell and newer) to see which VIDEO_DIP_DATA byte corresponds to each byte
+ * of the infoframe structure specified by CEA-861. */
+#define VIDEO_DIP_DATA_SIZE 32
#define VIDEO_DIP_CTL 0x61170
#define VIDEO_DIP_ENABLE (1 << 31)
#define VIDEO_DIP_PORT_B (1 << 29)
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -143,6 +143,9 @@ static void i9xx_write_infoframe(struct
I915_WRITE(VIDEO_DIP_DATA, *data);
data++;
}
+ /* Write every possible data byte to force correct ECC calculation. */
+ for (; i < VIDEO_DIP_DATA_SIZE; i += 4)
+ I915_WRITE(VIDEO_DIP_DATA, 0);
mmiowb();
flags |= intel_infoframe_flags(frame);
@@ -176,6 +179,9 @@ static void ironlake_write_infoframe(str
I915_WRITE(TVIDEO_DIP_DATA(intel_crtc->pipe), *data);
data++;
}
+ /* Write every possible data byte to force correct ECC calculation. */
+ for (; i < VIDEO_DIP_DATA_SIZE; i += 4)
+ I915_WRITE(TVIDEO_DIP_DATA(intel_crtc->pipe), 0);
mmiowb();
flags |= intel_infoframe_flags(frame);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sarah Sharp <[email protected]>
commit 80fab3b244a22e0ca539d2439bdda50e81e5666f upstream.
When a device with an isochronous endpoint is behind a hub plugged into
the Intel Panther Point xHCI host controller, and the driver submits
multiple frames per URB, the xHCI driver will set the Block Event
Interrupt (BEI) flag on all but the last TD for the URB. This causes
the host controller to place an event on the event ring, but not send an
interrupt. When the last TD for the URB completes, BEI is cleared, and
we get an interrupt for the whole URB.
However, under a Panther Point xHCI host controller, if the parent hub
is unplugged when one or more events from transfers with BEI set are on
the event ring, a port status change event is placed on the event ring,
but no interrupt is generated. This means URBs stop completing, and the
USB device disconnect is not noticed. Something like a USB headset will
cause mplayer to hang when the device is disconnected.
If another transfer is sent (such as running `sudo lsusb -v`), the next
transfer event seems to "unstick" the event ring, the xHCI driver gets
an interrupt, and the disconnect is reported to the USB core.
The fix is not to use the BEI flag under the Panther Point xHCI host.
This will impact power consumption and system responsiveness, because
the xHCI driver will receive an interrupt for every frame in all
isochronous URBs instead of once per URB.
Intel chipset developers confirm that this bug will be hit if the BEI
flag is used on any endpoint, not just ones that are behind a hub.
This patch should be backported to kernels as old as 3.0, that contain
the commit 69e848c2090aebba5698a1620604c7dccb448684 "Intel xhci: Support
EHCI/xHCI port switching."
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci-pci.c | 1 +
drivers/usb/host/xhci-ring.c | 4 +++-
drivers/usb/host/xhci.h | 1 +
3 files changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -99,6 +99,7 @@ static void xhci_pci_quirks(struct devic
* PPT chipsets.
*/
xhci->quirks |= XHCI_SPURIOUS_REBOOT;
+ xhci->quirks |= XHCI_AVOID_BEI;
}
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -3467,7 +3467,9 @@ static int xhci_queue_isoc_tx(struct xhc
} else {
td->last_trb = ep_ring->enqueue;
field |= TRB_IOC;
- if (xhci->hci_version == 0x100) {
+ if (xhci->hci_version == 0x100 &&
+ !(xhci->quirks &
+ XHCI_AVOID_BEI)) {
/* Set BEI bit except for the last td */
if (i < num_tds - 1)
field |= TRB_BEI;
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1488,6 +1488,7 @@ struct xhci_hcd {
#define XHCI_TRUST_TX_LENGTH (1 << 10)
#define XHCI_SPURIOUS_REBOOT (1 << 13)
#define XHCI_COMP_MODE_QUIRK (1 << 14)
+#define XHCI_AVOID_BEI (1 << 15)
unsigned int num_active_eps;
unsigned int limit_active_eps;
/* There are two roothubs to keep track of bus suspend info for */
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Spang <[email protected]>
commit a6e097dfdfd189b6929af6efa1d289af61858386 upstream.
The Intel XHCI specification says that after clearing the run/stop bit
the controller may take up to 16ms to halt. We've seen a device take
14ms, which with the current timeout of 10ms causes the kernel to
abort the suspend. Increasing the timeout to the recommended value
fixes the problem.
This patch should be backported to kernels as old as 2.6.37, that
contain the commit 5535b1d5f8885695c6ded783c692e3c0d0eda8ca "USB: xHCI:
PCI power management implementation".
Signed-off-by: Michael Spang <[email protected]>
Signed-off-by: Sarah Sharp <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/usb/host/xhci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index a4b0ce1..52b04b0 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -888,7 +888,7 @@ int xhci_suspend(struct xhci_hcd *xhci)
command &= ~CMD_RUN;
xhci_writel(xhci, command, &xhci->op_regs->command);
if (handshake(xhci, &xhci->op_regs->status,
- STS_HALT, STS_HALT, 100*100)) {
+ STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) {
xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n");
spin_unlock_irq(&xhci->lock);
return -ETIMEDOUT;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry Monakhov <[email protected]>
commit 03bd8b9b896c8e940f282f540e6b4de90d666b7c upstream.
- Remove useless checks, because it is too late to check that inode != NULL
at the point where it has already been referenced several times.
- The double-lock routines look very ugly, and the locking order relies on
the order of i_ino, while other kernel code relies on the order of pointers.
Let's make them simple and clean.
- Check that the inodes belong to the same SB as soon as possible.
Signed-off-by: Dmitry Monakhov <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ext4/move_extent.c | 167 ++++++++++++++-----------------------------------
1 file changed, 47 insertions(+), 120 deletions(-)
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index c5826c6..df5cde5 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -141,55 +141,21 @@ mext_next_extent(struct inode *inode, struct ext4_ext_path *path,
}
/**
- * mext_check_null_inode - NULL check for two inodes
- *
- * If inode1 or inode2 is NULL, return -EIO. Otherwise, return 0.
- */
-static int
-mext_check_null_inode(struct inode *inode1, struct inode *inode2,
- const char *function, unsigned int line)
-{
- int ret = 0;
-
- if (inode1 == NULL) {
- __ext4_error(inode2->i_sb, function, line,
- "Both inodes should not be NULL: "
- "inode1 NULL inode2 %lu", inode2->i_ino);
- ret = -EIO;
- } else if (inode2 == NULL) {
- __ext4_error(inode1->i_sb, function, line,
- "Both inodes should not be NULL: "
- "inode1 %lu inode2 NULL", inode1->i_ino);
- ret = -EIO;
- }
- return ret;
-}
-
-/**
* double_down_write_data_sem - Acquire two inodes' write lock of i_data_sem
*
- * @orig_inode: original inode structure
- * @donor_inode: donor inode structure
- * Acquire write lock of i_data_sem of the two inodes (orig and donor) by
- * i_ino order.
+ * Acquire write lock of i_data_sem of the two inodes
*/
static void
-double_down_write_data_sem(struct inode *orig_inode, struct inode *donor_inode)
+double_down_write_data_sem(struct inode *first, struct inode *second)
{
- struct inode *first = orig_inode, *second = donor_inode;
+ if (first < second) {
+ down_write(&EXT4_I(first)->i_data_sem);
+ down_write_nested(&EXT4_I(second)->i_data_sem, SINGLE_DEPTH_NESTING);
+ } else {
+ down_write(&EXT4_I(second)->i_data_sem);
+ down_write_nested(&EXT4_I(first)->i_data_sem, SINGLE_DEPTH_NESTING);
- /*
- * Use the inode number to provide the stable locking order instead
- * of its address, because the C language doesn't guarantee you can
- * compare pointers that don't come from the same array.
- */
- if (donor_inode->i_ino < orig_inode->i_ino) {
- first = donor_inode;
- second = orig_inode;
}
-
- down_write(&EXT4_I(first)->i_data_sem);
- down_write_nested(&EXT4_I(second)->i_data_sem, SINGLE_DEPTH_NESTING);
}
/**
@@ -969,14 +935,6 @@ mext_check_arguments(struct inode *orig_inode,
return -EINVAL;
}
- /* Files should be in the same ext4 FS */
- if (orig_inode->i_sb != donor_inode->i_sb) {
- ext4_debug("ext4 move extent: The argument files "
- "should be in same FS [ino:orig %lu, donor %lu]\n",
- orig_inode->i_ino, donor_inode->i_ino);
- return -EINVAL;
- }
-
/* Ext4 move extent supports only extent based file */
if (!(ext4_test_inode_flag(orig_inode, EXT4_INODE_EXTENTS))) {
ext4_debug("ext4 move extent: orig file is not extents "
@@ -1072,35 +1030,19 @@ mext_check_arguments(struct inode *orig_inode,
* @inode1: the inode structure
* @inode2: the inode structure
*
- * Lock two inodes' i_mutex by i_ino order.
- * If inode1 or inode2 is NULL, return -EIO. Otherwise, return 0.
+ * Lock two inodes' i_mutex
*/
-static int
+static void
mext_inode_double_lock(struct inode *inode1, struct inode *inode2)
{
- int ret = 0;
-
- BUG_ON(inode1 == NULL && inode2 == NULL);
-
- ret = mext_check_null_inode(inode1, inode2, __func__, __LINE__);
- if (ret < 0)
- goto out;
-
- if (inode1 == inode2) {
- mutex_lock(&inode1->i_mutex);
- goto out;
- }
-
- if (inode1->i_ino < inode2->i_ino) {
+ BUG_ON(inode1 == inode2);
+ if (inode1 < inode2) {
mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT);
mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD);
} else {
mutex_lock_nested(&inode2->i_mutex, I_MUTEX_PARENT);
mutex_lock_nested(&inode1->i_mutex, I_MUTEX_CHILD);
}
-
-out:
- return ret;
}
/**
@@ -1109,28 +1051,13 @@ out:
* @inode1: the inode that is released first
* @inode2: the inode that is released second
*
- * If inode1 or inode2 is NULL, return -EIO. Otherwise, return 0.
*/
-static int
+static void
mext_inode_double_unlock(struct inode *inode1, struct inode *inode2)
{
- int ret = 0;
-
- BUG_ON(inode1 == NULL && inode2 == NULL);
-
- ret = mext_check_null_inode(inode1, inode2, __func__, __LINE__);
- if (ret < 0)
- goto out;
-
- if (inode1)
- mutex_unlock(&inode1->i_mutex);
-
- if (inode2 && inode2 != inode1)
- mutex_unlock(&inode2->i_mutex);
-
-out:
- return ret;
+ mutex_unlock(&inode1->i_mutex);
+ mutex_unlock(&inode2->i_mutex);
}
/**
@@ -1187,16 +1114,23 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
ext4_lblk_t block_end, seq_start, add_blocks, file_end, seq_blocks = 0;
ext4_lblk_t rest_blocks;
pgoff_t orig_page_offset = 0, seq_end_page;
- int ret1, ret2, depth, last_extent = 0;
+ int ret, depth, last_extent = 0;
int blocks_per_page = PAGE_CACHE_SIZE >> orig_inode->i_blkbits;
int data_offset_in_page;
int block_len_in_page;
int uninit;
- /* orig and donor should be different file */
- if (orig_inode->i_ino == donor_inode->i_ino) {
+ if (orig_inode->i_sb != donor_inode->i_sb) {
+ ext4_debug("ext4 move extent: The argument files "
+ "should be in same FS [ino:orig %lu, donor %lu]\n",
+ orig_inode->i_ino, donor_inode->i_ino);
+ return -EINVAL;
+ }
+
+ /* orig and donor should be different inodes */
+ if (orig_inode == donor_inode) {
ext4_debug("ext4 move extent: The argument files should not "
- "be same file [ino:orig %lu, donor %lu]\n",
+ "be same inode [ino:orig %lu, donor %lu]\n",
orig_inode->i_ino, donor_inode->i_ino);
return -EINVAL;
}
@@ -1210,16 +1144,14 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
}
/* Protect orig and donor inodes against a truncate */
- ret1 = mext_inode_double_lock(orig_inode, donor_inode);
- if (ret1 < 0)
- return ret1;
+ mext_inode_double_lock(orig_inode, donor_inode);
/* Protect extent tree against block allocations via delalloc */
double_down_write_data_sem(orig_inode, donor_inode);
/* Check the filesystem environment whether move_extent can be done */
- ret1 = mext_check_arguments(orig_inode, donor_inode, orig_start,
+ ret = mext_check_arguments(orig_inode, donor_inode, orig_start,
donor_start, &len);
- if (ret1)
+ if (ret)
goto out;
file_end = (i_size_read(orig_inode) - 1) >> orig_inode->i_blkbits;
@@ -1227,13 +1159,13 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
if (file_end < block_end)
len -= block_end - file_end;
- ret1 = get_ext_path(orig_inode, block_start, &orig_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, block_start, &orig_path);
+ if (ret)
goto out;
/* Get path structure to check the hole */
- ret1 = get_ext_path(orig_inode, block_start, &holecheck_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, block_start, &holecheck_path);
+ if (ret)
goto out;
depth = ext_depth(orig_inode);
@@ -1252,13 +1184,13 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
last_extent = mext_next_extent(orig_inode,
holecheck_path, &ext_cur);
if (last_extent < 0) {
- ret1 = last_extent;
+ ret = last_extent;
goto out;
}
last_extent = mext_next_extent(orig_inode, orig_path,
&ext_dummy);
if (last_extent < 0) {
- ret1 = last_extent;
+ ret = last_extent;
goto out;
}
seq_start = le32_to_cpu(ext_cur->ee_block);
@@ -1272,7 +1204,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
if (le32_to_cpu(ext_cur->ee_block) > block_end) {
ext4_debug("ext4 move extent: The specified range of file "
"may be the hole\n");
- ret1 = -EINVAL;
+ ret = -EINVAL;
goto out;
}
@@ -1292,7 +1224,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
last_extent = mext_next_extent(orig_inode, holecheck_path,
&ext_cur);
if (last_extent < 0) {
- ret1 = last_extent;
+ ret = last_extent;
break;
}
add_blocks = ext4_ext_get_actual_len(ext_cur);
@@ -1349,18 +1281,18 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
orig_page_offset,
data_offset_in_page,
block_len_in_page, uninit,
- &ret1);
+ &ret);
/* Count how many blocks we have exchanged */
*moved_len += block_len_in_page;
- if (ret1 < 0)
+ if (ret < 0)
break;
if (*moved_len > len) {
EXT4_ERROR_INODE(orig_inode,
"We replaced blocks too much! "
"sum of replaced: %llu requested: %llu",
*moved_len, len);
- ret1 = -EIO;
+ ret = -EIO;
break;
}
@@ -1374,22 +1306,22 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
}
double_down_write_data_sem(orig_inode, donor_inode);
- if (ret1 < 0)
+ if (ret < 0)
break;
/* Decrease buffer counter */
if (holecheck_path)
ext4_ext_drop_refs(holecheck_path);
- ret1 = get_ext_path(orig_inode, seq_start, &holecheck_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, seq_start, &holecheck_path);
+ if (ret)
break;
depth = holecheck_path->p_depth;
/* Decrease buffer counter */
if (orig_path)
ext4_ext_drop_refs(orig_path);
- ret1 = get_ext_path(orig_inode, seq_start, &orig_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, seq_start, &orig_path);
+ if (ret)
break;
ext_cur = holecheck_path[depth].p_ext;
@@ -1412,12 +1344,7 @@ out:
kfree(holecheck_path);
}
double_up_write_data_sem(orig_inode, donor_inode);
- ret2 = mext_inode_double_unlock(orig_inode, donor_inode);
+ mext_inode_double_unlock(orig_inode, donor_inode);
- if (ret1)
- return ret1;
- else if (ret2)
- return ret2;
-
- return 0;
+ return ret;
}
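
For readers reviewing the locking change above, here is a minimal userspace sketch of the address-ordered double-lock pattern the patch switches to (build with -pthread). It is illustrative only: pthread mutexes stand in for the two inodes' i_data_sem, and the pointer comparison mirrors what the new double_down_write_data_sem() does.

  #include <pthread.h>
  #include <stdio.h>

  /* Lock two mutexes in a stable order (by address) so that two tasks
   * locking the same pair with swapped arguments cannot deadlock. */
  static void double_lock(pthread_mutex_t *a, pthread_mutex_t *b)
  {
          if (a < b) {
                  pthread_mutex_lock(a);
                  pthread_mutex_lock(b);
          } else {
                  pthread_mutex_lock(b);
                  pthread_mutex_lock(a);
          }
  }

  static void double_unlock(pthread_mutex_t *a, pthread_mutex_t *b)
  {
          pthread_mutex_unlock(a);
          pthread_mutex_unlock(b);
  }

  int main(void)
  {
          pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
          pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

          double_lock(&m1, &m2);          /* caller A's argument order */
          puts("locked both, low address first");
          double_unlock(&m1, &m2);

          double_lock(&m2, &m1);          /* caller B's swapped order */
          puts("same lock order again, so A and B cannot deadlock");
          double_unlock(&m2, &m1);
          return 0;
  }

Strictly speaking, comparing pointers to unrelated objects is not guaranteed by the C standard; like the kernel code above, the sketch relies on it behaving sensibly on all supported platforms.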
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Abbott <[email protected]>
commit b655c2c4782ed3e2e71d2608154e295a3e860311 upstream.
`s626_enc_insn_config()` incorrectly dereferences `insn->data`, which is a pointer to user memory. It should instead dereference the separate `data` parameter, which points to a copy of the data in kernel memory.
Signed-off-by: Ian Abbott <[email protected]>
Reviewed-by: H Hartley Sweeten <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/staging/comedi/drivers/s626.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/comedi/drivers/s626.c b/drivers/staging/comedi/drivers/s626.c
index e1d10f3..430e26b 100644
--- a/drivers/staging/comedi/drivers/s626.c
+++ b/drivers/staging/comedi/drivers/s626.c
@@ -1825,7 +1825,7 @@ static int s626_enc_insn_config(struct comedi_device *dev,
/* (data==NULL) ? (Preloadvalue=0) : (Preloadvalue=data[0]); */
k->SetMode(dev, k, Setup, TRUE);
- Preload(dev, k, *(insn->data));
+ Preload(dev, k, data[0]);
k->PulseIndex(dev, k);
SetLatchSource(dev, k, valueSrclatch);
k->SetEnable(dev, k, (uint16_t) (enab != 0));
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Flavio Leitner <[email protected]>
commit 26e8220adb0aec43b7acafa0f1431760eee28522 upstream.
Apparently the same card model has two IDs, so this patch complements commit 39aced68d664291db3324d0fcf0985ab5626aac2 by adding the missing one.
Signed-off-by: Flavio Leitner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[bwh: Backported to 3.2: adjust filename]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/serial/8250_pci.c | 9 +++++++--
include/linux/pci_ids.h | 1 -
2 files changed, 7 insertions(+), 3 deletions(-)
--- a/drivers/tty/serial/8250_pci.c
+++ b/drivers/tty/serial/8250_pci.c
@@ -1118,6 +1118,8 @@ pci_xr17c154_setup(struct serial_private
#define PCI_SUBDEVICE_ID_OCTPRO422 0x0208
#define PCI_SUBDEVICE_ID_POCTAL232 0x0308
#define PCI_SUBDEVICE_ID_POCTAL422 0x0408
+#define PCI_SUBDEVICE_ID_SIIG_DUAL_00 0x2500
+#define PCI_SUBDEVICE_ID_SIIG_DUAL_30 0x2530
#define PCI_VENDOR_ID_ADVANTECH 0x13fe
#define PCI_DEVICE_ID_INTEL_CE4100_UART 0x2e66
#define PCI_DEVICE_ID_ADVANTECH_PCI3620 0x3620
@@ -3168,8 +3170,11 @@ static struct pci_device_id serial_pci_t
* For now just used the hex ID 0x950a.
*/
{ PCI_VENDOR_ID_OXSEMI, 0x950a,
- PCI_SUBVENDOR_ID_SIIG, PCI_SUBDEVICE_ID_SIIG_DUAL_SERIAL, 0, 0,
- pbn_b0_2_115200 },
+ PCI_SUBVENDOR_ID_SIIG, PCI_SUBDEVICE_ID_SIIG_DUAL_00,
+ 0, 0, pbn_b0_2_115200 },
+ { PCI_VENDOR_ID_OXSEMI, 0x950a,
+ PCI_SUBVENDOR_ID_SIIG, PCI_SUBDEVICE_ID_SIIG_DUAL_30,
+ 0, 0, pbn_b0_2_115200 },
{ PCI_VENDOR_ID_OXSEMI, 0x950a,
PCI_ANY_ID, PCI_ANY_ID, 0, 0,
pbn_b0_2_1130000 },
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -1823,7 +1823,6 @@
#define PCI_DEVICE_ID_SIIG_8S_20x_650 0x2081
#define PCI_DEVICE_ID_SIIG_8S_20x_850 0x2082
#define PCI_SUBDEVICE_ID_SIIG_QUARTET_SERIAL 0x2050
-#define PCI_SUBDEVICE_ID_SIIG_DUAL_SERIAL 0x2530
#define PCI_VENDOR_ID_RADISYS 0x1331
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Deucher <[email protected]>
commit 2e3b3b105ab3bb5b6a37198da4f193cd13781d13 upstream.
SI ASICs store voltage information differently, so we don't have a way to deal with it properly yet.
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/radeon/radeon_pm.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
index c15e505..e024435 100644
--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -521,7 +521,9 @@ void radeon_pm_suspend(struct radeon_device *rdev)
void radeon_pm_resume(struct radeon_device *rdev)
{
/* set up the default clocks if the MC ucode is loaded */
- if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) {
+ if ((rdev->family >= CHIP_BARTS) &&
+ (rdev->family <= CHIP_CAYMAN) &&
+ rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
SET_VOLTAGE_TYPE_ASIC_VDDC);
@@ -576,7 +578,9 @@ int radeon_pm_init(struct radeon_device *rdev)
radeon_pm_print_states(rdev);
radeon_pm_init_profile(rdev);
/* set up the default clocks if the MC ucode is loaded */
- if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) {
+ if ((rdev->family >= CHIP_BARTS) &&
+ (rdev->family <= CHIP_CAYMAN) &&
+ rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
SET_VOLTAGE_TYPE_ASIC_VDDC);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Linus Walleij <[email protected]>
commit c5dd553b9fd069892c9e2de734f4f604e280fa7a upstream.
This works around a few glitches in the ST version of the PL011
serial driver when using very high baud rates, as we do in the
Ux500: 3, 3.25, 4 and 4.05 Mbps.
Problem observed / root cause:
When using high baud rates, where baudrate*8 gets close to the provided clock frequency (i.e. a division factor close to 1), and characters are sent in bursts (so they are abutted), there seems to be insufficient time to detect the beginning of the start bit, which is the timing reference for the entire character. The sampling moment of the character bits therefore drifts towards the end of each bit instead of staying in the middle.
Fix:
Slightly increase the RX baud rate of the UART above the theoretical baud rate, by 5%. This gives the UART RX more margin to correctly sample the data in the middle of the bit period.
Also fix the ages-old copy-paste error in the heavily emphasized comment: it references the registers used in the PL010 driver rather than the PL011 ones.
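
As a rough illustration of the effect, here is a small standalone program that mirrors the divisor adjustment above. The 38.4 MHz UARTCLK value and the oversampling-by-8 divisor formula (quot carrying 6 fractional bits, split into IBRD/FBRD) are assumptions made for the example, not values taken from any particular board.

  #include <stdio.h>

  static unsigned int div_round_closest(unsigned int x, unsigned int d)
  {
          return (x + d / 2) / d;
  }

  int main(void)
  {
          const unsigned int uartclk = 38400000;  /* assumed reference clock */
          const unsigned int bauds[] = { 3000000, 3250000, 4000000 };

          for (int i = 0; i < 3; i++) {
                  unsigned int baud = bauds[i];
                  /* divisor with 6 fractional bits, oversample-by-8 path */
                  unsigned int quot = div_round_closest(uartclk * 8, baud);

                  /* same adjustment as the workaround in the patch */
                  if (baud >= 3000000 && baud < 3250000 && quot > 1)
                          quot -= 1;
                  else if (baud > 3250000 && quot > 2)
                          quot -= 2;

                  printf("baud %7u -> IBRD %u FBRD %2u, effective %.0f baud\n",
                         baud, quot >> 6, quot & 0x3f,
                         (double)uartclk * 8.0 / quot);
          }
          return 0;
  }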
Signed-off-by: Guillaume Jaunet <[email protected]>
Signed-off-by: Christophe Arnal <[email protected]>
Signed-off-by: Matthias Locher <[email protected]>
Signed-off-by: Rajanikanth HV <[email protected]>
Cc: Bibek Basu <[email protected]>
Cc: Par-Gunnar Hjalmdahl <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/serial/amba-pl011.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
index cede938..925eb88 100644
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -1595,13 +1595,26 @@ pl011_set_termios(struct uart_port *port, struct ktermios *termios,
old_cr &= ~ST_UART011_CR_OVSFACT;
}
+ /*
+ * Workaround for the ST Micro oversampling variants to
+ * increase the bitrate slightly, by lowering the divisor,
+ * to avoid delayed sampling of start bit at high speeds,
+ * else we see data corruption.
+ */
+ if (uap->vendor->oversampling) {
+ if ((baud >= 3000000) && (baud < 3250000) && (quot > 1))
+ quot -= 1;
+ else if ((baud > 3250000) && (quot > 2))
+ quot -= 2;
+ }
/* Set baud rate */
writew(quot & 0x3f, port->membase + UART011_FBRD);
writew(quot >> 6, port->membase + UART011_IBRD);
/*
* ----------v----------v----------v----------v-----
- * NOTE: MUST BE WRITTEN AFTER UARTLCR_M & UARTLCR_L
+ * NOTE: lcrh_tx and lcrh_rx MUST BE WRITTEN AFTER
+ * UART011_FBRD & UART011_IBRD.
* ----------^----------^----------^----------^-----
*/
writew(lcr_h, port->membase + uap->lcrh_rx);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bernd Schubert <[email protected]>
commit 6a08f447facb4f9e29fcc30fb68060bb5a0d21c2 upstream.
ext4_special_inode_operations already has its own #ifdef CONFIG_EXT4_FS_XATTR to mask those methods, and ext4_iget also always sets it, so there is an inconsistency.
Signed-off-by: Bernd Schubert <[email protected]>
Signed-off-by: "Theodore Ts'o" <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ext4/namei.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
index 8f4bda7..bd87b7a 100644
--- a/fs/ext4/namei.c
+++ b/fs/ext4/namei.c
@@ -2156,9 +2156,7 @@ retry:
err = PTR_ERR(inode);
if (!IS_ERR(inode)) {
init_special_inode(inode, inode->i_mode, rdev);
-#ifdef CONFIG_EXT4_FS_XATTR
inode->i_op = &ext4_special_inode_operations;
-#endif
err = ext4_add_nondir(handle, dentry, inode);
}
ext4_journal_stop(handle);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Shiyan <[email protected]>
commit d1f55c680e5d021e7066f4461dd678d42af18898 upstream.
Update the autcpu12-nvram.c driver so it compiles: map_read32/map_write32 no longer exist in the kernel, so the driver is totally broken. Additionally, the map_info name passed to simple_map_init is incorrect.
Signed-off-by: Alexander Shiyan <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/maps/autcpu12-nvram.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/drivers/mtd/maps/autcpu12-nvram.c b/drivers/mtd/maps/autcpu12-nvram.c
index e5bfd0e..0598d52 100644
--- a/drivers/mtd/maps/autcpu12-nvram.c
+++ b/drivers/mtd/maps/autcpu12-nvram.c
@@ -43,7 +43,8 @@ struct map_info autcpu12_sram_map = {
static int __init init_autcpu12_sram (void)
{
- int err, save0, save1;
+ map_word tmp, save0, save1;
+ int err;
autcpu12_sram_map.virt = ioremap(0x12000000, SZ_128K);
if (!autcpu12_sram_map.virt) {
@@ -51,7 +52,7 @@ static int __init init_autcpu12_sram (void)
err = -EIO;
goto out;
}
- simple_map_init(&autcpu_sram_map);
+ simple_map_init(&autcpu12_sram_map);
/*
* Check for 32K/128K
@@ -61,20 +62,22 @@ static int __init init_autcpu12_sram (void)
* Read and check result on ofs 0x0
* Restore contents
*/
- save0 = map_read32(&autcpu12_sram_map,0);
- save1 = map_read32(&autcpu12_sram_map,0x10000);
- map_write32(&autcpu12_sram_map,~save0,0x10000);
+ save0 = map_read(&autcpu12_sram_map, 0);
+ save1 = map_read(&autcpu12_sram_map, 0x10000);
+ tmp.x[0] = ~save0.x[0];
+ map_write(&autcpu12_sram_map, tmp, 0x10000);
/* if we find this pattern on 0x0, we have 32K size
* restore contents and exit
*/
- if ( map_read32(&autcpu12_sram_map,0) != save0) {
- map_write32(&autcpu12_sram_map,save0,0x0);
+ tmp = map_read(&autcpu12_sram_map, 0);
+ if (!map_word_equal(&autcpu12_sram_map, tmp, save0)) {
+ map_write(&autcpu12_sram_map, save0, 0x0);
goto map;
}
/* We have a 128K found, restore 0x10000 and set size
* to 128K
*/
- map_write32(&autcpu12_sram_map,save1,0x10000);
+ map_write(&autcpu12_sram_map, save1, 0x10000);
autcpu12_sram_map.size = SZ_128K;
map:
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Deucher <[email protected]>
commit 3a6d59df80897cc87812b6826d70085905bed013 upstream.
Fixes another system on:
https://bugs.freedesktop.org/show_bug.cgi?id=37679
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/radeon/radeon_irq_kms.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
index c4e638d..2da4465 100644
--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
@@ -202,6 +202,12 @@ static bool radeon_msi_ok(struct radeon_device *rdev)
(rdev->pdev->subsystem_device == 0x01fd))
return true;
+ /* Gateway RS690 only seems to work with MSIs. */
+ if ((rdev->pdev->device == 0x791f) &&
+ (rdev->pdev->subsystem_vendor == 0x107b) &&
+ (rdev->pdev->subsystem_device == 0x0185))
+ return true;
+
/* RV515 seems to have MSI issues where it loses
* MSI rearms occasionally. This leads to lockups and freezes.
* disable it by default.
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Denys Vlasenko <[email protected]>
commit f34f9d186df35e5c39163444c43b4fc6255e39c5 upstream.
In the !CORE_DUMP_USE_REGSET case, if elf_note_info_init fails to allocate memory for info->fields, it frees the already-allocated members and returns an error to its caller, fill_note_info, which in turn returns the error to its caller, elf_core_dump. That function jumps to the cleanup label and calls free_note_info, which will happily try to free all info->fields again. BOOM.
This is the fix.
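
A minimal userspace sketch of the ownership rule the fix adopts (the structure and field names are illustrative, not the kernel's): the init helper no longer frees anything on failure, so the single cleanup helper is the only place that frees, and nothing can be freed twice.

  #include <stdlib.h>

  struct note_info {
          void *notes, *psinfo, *prstatus;
  };

  /* On failure, return 0 and leave whatever was allocated in place;
   * cleanup is the sole responsibility of free_note_info() below. */
  static int note_info_init(struct note_info *info)
  {
          info->notes = calloc(1, 64);
          if (!info->notes)
                  return 0;
          info->psinfo = calloc(1, 64);
          if (!info->psinfo)
                  return 0;
          info->prstatus = calloc(1, 64);
          if (!info->prstatus)
                  return 0;
          return 1;
  }

  static void free_note_info(struct note_info *info)
  {
          free(info->prstatus);
          free(info->psinfo);
          free(info->notes);      /* free(NULL) is harmless */
  }

  int main(void)
  {
          struct note_info info = { 0 };

          if (!note_info_init(&info)) {
                  /* Allocation failed part-way: do NOT free here; the
                   * single cleanup call below handles whatever was set. */
          }
          free_note_info(&info);
          return 0;
  }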
Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Denys Vlasenko <[email protected]>
Cc: Venu Byravarasu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/binfmt_elf.c | 19 ++++---------------
1 file changed, 4 insertions(+), 15 deletions(-)
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 1b52956..0225fdd 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1696,30 +1696,19 @@ static int elf_note_info_init(struct elf_note_info *info)
return 0;
info->psinfo = kmalloc(sizeof(*info->psinfo), GFP_KERNEL);
if (!info->psinfo)
- goto notes_free;
+ return 0;
info->prstatus = kmalloc(sizeof(*info->prstatus), GFP_KERNEL);
if (!info->prstatus)
- goto psinfo_free;
+ return 0;
info->fpu = kmalloc(sizeof(*info->fpu), GFP_KERNEL);
if (!info->fpu)
- goto prstatus_free;
+ return 0;
#ifdef ELF_CORE_COPY_XFPREGS
info->xfpu = kmalloc(sizeof(*info->xfpu), GFP_KERNEL);
if (!info->xfpu)
- goto fpu_free;
+ return 0;
#endif
return 1;
-#ifdef ELF_CORE_COPY_XFPREGS
- fpu_free:
- kfree(info->fpu);
-#endif
- prstatus_free:
- kfree(info->prstatus);
- psinfo_free:
- kfree(info->psinfo);
- notes_free:
- kfree(info->notes);
- return 0;
}
static int fill_note_info(struct elfhdr *elf, int phdrs,
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Seth Forshee <[email protected]>
commit 824efd37415961d38821ecbd9694e213fb2e8b32 upstream.
Commit c039450 (Input: synaptics - handle out of bounds values from the
hardware) caused any hardware-reported value over 7167 to be treated as a wrapped-around negative value. It turns out that some firmware uses
the value 8176 to indicate a finger near the edge of the touchpad whose
actual position cannot be determined. This value now gets treated as
negative, which can cause pointer jumps and broken edge scrolling on
these machines.
I only know of one touchpad which reports negative values, and this
hardware never reports any value lower than -8 (i.e. 8184). Moving the
threshold for treating a value as negative up to 8176 should work fine
then for any hardware we currently know about, and since we're dealing
with unspecified behavior it's probably the best we can do. The special
8176 value is also likely to result in sudden jumps in position, so
let's also clamp this to the maximum specified value for the axis.
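
A small standalone sketch of the decoding rule described above; ABS_POS_BITS and the 8176 cutoff come from the patch, while the XMAX value here is just an assumed axis maximum for illustration.

  #include <stdio.h>

  #define ABS_POS_BITS   13
  #define X_MAX_POSITIVE 8176
  #define XMAX           6143     /* assumed axis maximum for illustration */

  static int decode_x(int raw)
  {
          if (raw > X_MAX_POSITIVE)
                  return raw - (1 << ABS_POS_BITS);  /* wrapped-around negative */
          if (raw == X_MAX_POSITIVE)
                  return XMAX;                       /* "finger at edge" marker */
          return raw;
  }

  int main(void)
  {
          int samples[] = { 1234, 8176, 8184, 8190 };

          for (int i = 0; i < 4; i++)
                  printf("raw %4d -> %d\n", samples[i], decode_x(samples[i]));
          return 0;
  }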
BugLink: http://bugs.launchpad.net/bugs/1046512
https://bugzilla.kernel.org/show_bug.cgi?id=46371
Signed-off-by: Seth Forshee <[email protected]>
Reviewed-by: Daniel Kurtz <[email protected]>
Tested-by: Alan Swanson <[email protected]>
Tested-by: Arteom <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/input/mouse/synaptics.c | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
--- a/drivers/input/mouse/synaptics.c
+++ b/drivers/input/mouse/synaptics.c
@@ -53,14 +53,19 @@
#define ABS_POS_BITS 13
/*
- * Any position values from the hardware above the following limits are
- * treated as "wrapped around negative" values that have been truncated to
- * the 13-bit reporting range of the hardware. These are just reasonable
- * guesses and can be adjusted if hardware is found that operates outside
- * of these parameters.
+ * These values should represent the absolute maximum value that will
+ * be reported for a positive position value. Some Synaptics firmware
+ * uses this value to indicate a finger near the edge of the touchpad
+ * whose precise position cannot be determined.
+ *
+ * At least one touchpad is known to report positions in excess of this
+ * value which are actually negative values truncated to the 13-bit
+ * reporting range. These values have never been observed to be lower
+ * than 8184 (i.e. -8), so we treat all values greater than 8176 as
+ * negative and any other value as positive.
*/
-#define X_MAX_POSITIVE (((1 << ABS_POS_BITS) + XMAX) / 2)
-#define Y_MAX_POSITIVE (((1 << ABS_POS_BITS) + YMAX) / 2)
+#define X_MAX_POSITIVE 8176
+#define Y_MAX_POSITIVE 8176
/*
* Synaptics touchpads report the y coordinate from bottom to top, which is
@@ -561,11 +566,21 @@ static int synaptics_parse_hw_state(cons
hw->right = (buf[0] & 0x02) ? 1 : 0;
}
- /* Convert wrap-around values to negative */
+ /*
+ * Convert wrap-around values to negative. (X|Y)_MAX_POSITIVE
+ * is used by some firmware to indicate a finger at the edge of
+ * the touchpad whose precise position cannot be determined, so
+ * convert these values to the maximum axis value.
+ */
if (hw->x > X_MAX_POSITIVE)
hw->x -= 1 << ABS_POS_BITS;
+ else if (hw->x == X_MAX_POSITIVE)
+ hw->x = XMAX;
+
if (hw->y > Y_MAX_POSITIVE)
hw->y -= 1 << ABS_POS_BITS;
+ else if (hw->y == Y_MAX_POSITIVE)
+ hw->y = YMAX;
return 0;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ian Abbott <[email protected]>
commit e1878957b4676a17cf398f7f5723b365e9a2ca48 upstream.
Correct a direct dereference of I/O memory to use an appropriate I/O
memory access function. Note that the pointer being dereferenced is not
currently tagged with `__iomem` but I plan to correct that for 3.7.
Signed-off-by: Ian Abbott <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/staging/comedi/drivers/jr3_pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/staging/comedi/drivers/jr3_pci.c
+++ b/drivers/staging/comedi/drivers/jr3_pci.c
@@ -913,7 +913,7 @@ static int jr3_pci_attach(struct comedi_
}
/* Reset DSP card */
- devpriv->iobase->channel[0].reset = 0;
+ writel(0, &devpriv->iobase->channel[0].reset);
result = comedi_load_firmware(dev, "jr3pci.idm", jr3_download_firmware);
printk("Firmare load %d\n", result);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jani Nikula <[email protected]>
commit 0c96c65b48fba3ffe9822a554cbc0cd610765cd5 upstream.
The dithering introduced in
commit 3b5c78a35cf7511c15e09a9b0ffab290a42d9bcf
Author: Adam Jackson <[email protected]>
Date: Tue Dec 13 15:41:00 2011 -0800
drm/i915/dp: Dither down to 6bpc if it makes the mode fit
stores the INTEL_MODE_DP_FORCE_6BPC flag in the private_flags of the
adjusted mode, while i9xx_crtc_mode_set() and ironlake_crtc_mode_set() use
the original mode, without the flag, so it would never have any
effect. However, the BPC was clamped by VBT settings, making things work by
coincidence, until that part was removed in
commit 4344b813f105a19f793f1fd93ad775b784648b95
Author: Daniel Vetter <[email protected]>
Date: Fri Aug 10 11:10:20 2012 +0200
Use adjusted_mode instead of mode when checking for
INTEL_MODE_DP_FORCE_6BPC to make the flag have effect.
v2: Don't forget to fix this in i9xx_crtc_mode_set() also, pointed out by
Daniel both before and after sending the first patch.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=47621
CC: Adam Jackson <[email protected]>
Signed-off-by: Jani Nikula <[email protected]>
Reviewed-by: Adam Jackson <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
[bwh: Backported to 3.2:
- Adjust context
- intel_choose_pipe_bpp_dither() doesn't take a drm_framebuffer argument]
Signed-off-by: Ben Hutchings <[email protected]>
---
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -5027,7 +5027,7 @@ static int i9xx_crtc_mode_set(struct drm
/* default to 8bpc */
pipeconf &= ~(PIPECONF_BPP_MASK | PIPECONF_DITHER_EN);
if (is_dp) {
- if (mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
+ if (adjusted_mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
pipeconf |= PIPECONF_BPP_6 |
PIPECONF_DITHER_EN |
PIPECONF_DITHER_TYPE_SP;
@@ -5495,7 +5495,7 @@ static int ironlake_crtc_mode_set(struct
/* determine panel color depth */
temp = I915_READ(PIPECONF(pipe));
temp &= ~PIPE_BPC_MASK;
- dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp, mode);
+ dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp, adjusted_mode);
switch (pipe_bpp) {
case 18:
temp |= PIPE_6BPC;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <[email protected]>
commit d8536670916a685df116b5c2cb256573fd25e4e3 upstream.
We need to call scsi_done() for commands after we abort them.
Signed-off-by: Bart Van Assche <[email protected]>
Acked-by: David Dillow <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/infiniband/ulp/srp/ib_srp.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index ac66e6b..922d845 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -1687,6 +1687,7 @@ static int srp_abort(struct scsi_cmnd *scmnd)
SRP_TSK_ABORT_TASK);
srp_free_req(target, req, scmnd, 0);
scmnd->result = DID_ABORT << 16;
+ scmnd->scsi_done(scmnd);
return SUCCESS;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Bießmann <[email protected]>
commit 7d9b110269253b1d5858cfa57d68dfc7bf50dd77 upstream.
Do not kfree() the mtd_info; it is handled by the mtd subsystem and already freed by nand_release(). Instead, kfree() the struct omap_nand_info allocated in omap_nand_probe, which was not freed before.
This patch fixes the following error when unloading the omap2 module:
---8<---
~ $ rmmod omap2
------------[ cut here ]------------
kernel BUG at mm/slab.c:3126!
Internal error: Oops - BUG: 0 [#1] PREEMPT ARM
Modules linked in: omap2(-)
CPU: 0 Not tainted (3.6.0-rc3-00230-g155e36d-dirty #3)
PC is at cache_free_debugcheck+0x2d4/0x36c
LR is at kfree+0xc8/0x2ac
pc : [<c01125a0>] lr : [<c0112efc>] psr: 200d0193
sp : c521fe08 ip : c0e8ef90 fp : c521fe5c
r10: bf0001fc r9 : c521e000 r8 : c0d99c8c
r7 : c661ebc0 r6 : c065d5a4 r5 : c65c4060 r4 : c78005c0
r3 : 00000000 r2 : 00001000 r1 : c65c4000 r0 : 00000001
Flags: nzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user
Control: 10c5387d Table: 86694019 DAC: 00000015
Process rmmod (pid: 549, stack limit = 0xc521e2f0)
Stack: (0xc521fe08 to 0xc5220000)
fe00: c008a874 c00bf44c c515c6d0 200d0193 c65c4860 c515c240
fe20: c521fe3c c521fe30 c008a9c0 c008a854 c521fe5c c65c4860 c78005c0 bf0001fc
fe40: c780ff40 a00d0113 c521e000 00000000 c521fe84 c521fe60 c0112efc c01122d8
fe60: c65c4860 c0673778 c06737ac 00000000 00070013 00000000 c521fe9c c521fe88
fe80: bf0001fc c0112e40 c0673778 bf001ca8 c521feac c521fea0 c02ca11c bf0001ac
fea0: c521fec4 c521feb0 c02c82c4 c02ca100 c0673778 bf001ca8 c521fee4 c521fec8
fec0: c02c8dd8 c02c8250 00000000 bf001ca8 bf001ca8 c0804ee0 c521ff04 c521fee8
fee0: c02c804c c02c8d20 bf001924 00000000 bf001ca8 c521e000 c521ff1c c521ff08
ff00: c02c950c c02c7fbc bf001d48 00000000 c521ff2c c521ff20 c02ca3a4 c02c94b8
ff20: c521ff3c c521ff30 bf001938 c02ca394 c521ffa4 c521ff40 c009beb4 bf001930
ff40: c521ff6c 70616d6f b6fe0032 c0014f84 70616d6f b6fe0032 00000081 60070010
ff60: c521ff84 c521ff70 c008e1f4 c00bf328 0001a004 70616d6f c521ff94 0021ff88
ff80: c008e368 0001a004 70616d6f b6fe0032 00000081 c0015028 00000000 c521ffa8
ffa0: c0014dc0 c009bcd0 0001a004 70616d6f bec2ab38 00000880 bec2ab38 00000880
ffc0: 0001a004 70616d6f b6fe0032 00000081 00000319 00000000 b6fe1000 00000000
ffe0: bec2ab30 bec2ab20 00019f00 b6f539c0 60070010 bec2ab38 aaaaaaaa aaaaaaaa
Backtrace:
[<c01122cc>] (cache_free_debugcheck+0x0/0x36c) from [<c0112efc>] (kfree+0xc8/0x2ac)
[<c0112e34>] (kfree+0x0/0x2ac) from [<bf0001fc>] (omap_nand_remove+0x5c/0x64 [omap2])
[<bf0001a0>] (omap_nand_remove+0x0/0x64 [omap2]) from [<c02ca11c>] (platform_drv_remove+0x28/0x2c)
r5:bf001ca8 r4:c0673778
[<c02ca0f4>] (platform_drv_remove+0x0/0x2c) from [<c02c82c4>] (__device_release_driver+0x80/0xdc)
[<c02c8244>] (__device_release_driver+0x0/0xdc) from [<c02c8dd8>] (driver_detach+0xc4/0xc8)
r5:bf001ca8 r4:c0673778
[<c02c8d14>] (driver_detach+0x0/0xc8) from [<c02c804c>] (bus_remove_driver+0x9c/0x104)
r6:c0804ee0 r5:bf001ca8 r4:bf001ca8 r3:00000000
[<c02c7fb0>] (bus_remove_driver+0x0/0x104) from [<c02c950c>] (driver_unregister+0x60/0x80)
r6:c521e000 r5:bf001ca8 r4:00000000 r3:bf001924
[<c02c94ac>] (driver_unregister+0x0/0x80) from [<c02ca3a4>] (platform_driver_unregister+0x1c/0x20)
r5:00000000 r4:bf001d48
[<c02ca388>] (platform_driver_unregister+0x0/0x20) from [<bf001938>] (omap_nand_driver_exit+0x14/0x1c [omap2])
[<bf001924>] (omap_nand_driver_exit+0x0/0x1c [omap2]) from [<c009beb4>] (sys_delete_module+0x1f0/0x2ec)
[<c009bcc4>] (sys_delete_module+0x0/0x2ec) from [<c0014dc0>] (ret_fast_syscall+0x0/0x48)
r8:c0015028 r7:00000081 r6:b6fe0032 r5:70616d6f r4:0001a004
Code: e1a00005 eb0d9172 e7f001f2 e7f001f2 (e7f001f2)
---[ end trace 6a30b24d8c0cc2ee ]---
Segmentation fault
--->8---
This error was introduced in 67ce04bf2746f8a1f8c2a104b313d20c63f68378 which
was the first commit of this driver.
Signed-off-by: Andreas Bießmann <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/nand/omap2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c
index f47c422..e604a45 100644
--- a/drivers/mtd/nand/omap2.c
+++ b/drivers/mtd/nand/omap2.c
@@ -1364,7 +1364,7 @@ static int omap_nand_remove(struct platform_device *pdev)
/* Release NAND device, its internal structures and partitions */
nand_release(&info->mtd);
iounmap(info->nand.IO_ADDR_R);
- kfree(&info->mtd);
+ kfree(info);
return 0;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <[email protected]>
commit 9b796d06d5d1b1e85ae2316a283ea11dd739ef96 upstream.
srp_free_req() uses the scsi_cmnd structure contents to unmap
buffers, so we must invoke srp_free_req() before we release
ownership of that structure.
Signed-off-by: Bart Van Assche <[email protected]>
Acked-by: David Dillow <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/infiniband/ulp/srp/ib_srp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 1b5b0c7..ac66e6b 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -638,9 +638,9 @@ static void srp_reset_req(struct srp_target_port *target, struct srp_request *re
struct scsi_cmnd *scmnd = srp_claim_req(target, req, NULL);
if (scmnd) {
+ srp_free_req(target, req, scmnd, 0);
scmnd->result = DID_RESET << 16;
scmnd->scsi_done(scmnd);
- srp_free_req(target, req, scmnd, 0);
}
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michal Marek <[email protected]>
commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
There were reports of users destroying their Fedora installs with a kernel tarball that replaces the /lib -> /usr/lib symlink. Let's remove the top-level directories from the tarball to prevent this from happening.
Reported-by: Andi Kleen <[email protected]>
Suggested-by: Ben Hutchings <[email protected]>
Signed-off-by: Michal Marek <[email protected]>
[bwh: Backported to 3.2: drop redundant changes]
Signed-off-by: Ben Hutchings <[email protected]>
---
scripts/Makefile.fwinst | 4 ++--
scripts/package/buildtar | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/scripts/Makefile.fwinst b/scripts/Makefile.fwinst
index 4d908d1..c3f69ae 100644
--- a/scripts/Makefile.fwinst
+++ b/scripts/Makefile.fwinst
@@ -42,7 +42,7 @@ quiet_cmd_install = INSTALL $(subst $(srctree)/,,$@)
$(installed-fw-dirs):
$(call cmd,mkdir)
-$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $(INSTALL_FW_PATH)/$$(dir %)
+$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $$(dir $(INSTALL_FW_PATH)/%)
$(call cmd,install)
PHONY += __fw_install __fw_modinst FORCE
diff --git a/scripts/package/buildtar b/scripts/package/buildtar
index 8a7b155..d0d748e 100644
--- a/scripts/package/buildtar
+++ b/scripts/package/buildtar
@@ -109,7 +109,7 @@ esac
if tar --owner=root --group=root --help >/dev/null 2>&1; then
opts="--owner=root --group=root"
fi
- tar cf - . $opts | ${compress} > "${tarball}${file_ext}"
+ tar cf - boot/* lib/* $opts | ${compress} > "${tarball}${file_ext}"
)
echo "Tarball successfully created in ${tarball}${file_ext}"
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Kara <[email protected]>
commit b71fc079b5d8f42b2a52743c8d2f1d35d655b1c5 upstream.
The code tracking when a transaction needs to be committed on fdatasync(2) forgets to handle the situation where only the inode's i_size is changed. Thus in such situations fdatasync(2) doesn't force the transaction with the new i_size to disk, and that can result in a wrong i_size after a crash.
Fix the issue by updating inode's i_datasync_tid whenever its size is
updated.
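
A minimal sketch of the idea behind the fix, with illustrative names rather than the real ext4 structures: the datasync marker is only raised when the on-disk size actually changes.

  #include <stdio.h>

  struct fake_inode {
          long long disksize;          /* in-memory size to be written out */
          long long raw_size;          /* size currently recorded on disk */
          int datasync_needed;         /* stands in for the i_datasync_tid update */
  };

  static void update_on_disk_inode(struct fake_inode *inode)
  {
          int need_datasync = 0;

          if (inode->disksize != inode->raw_size) {
                  inode->raw_size = inode->disksize;
                  need_datasync = 1;   /* i_size changed: fdatasync must commit */
          }
          inode->datasync_needed = need_datasync;
  }

  int main(void)
  {
          struct fake_inode inode = { .disksize = 8192, .raw_size = 4096 };

          update_on_disk_inode(&inode);
          printf("datasync needed: %d\n", inode.datasync_needed);
          return 0;
  }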
Reported-by: Kristian Nielsen <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ext4/inode.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4007,6 +4007,7 @@ static int ext4_do_update_inode(handle_t
struct ext4_inode_info *ei = EXT4_I(inode);
struct buffer_head *bh = iloc->bh;
int err = 0, rc, block;
+ int need_datasync = 0;
/* For fields not not tracking in the in-memory inode,
* initialise them to zero for new inodes. */
@@ -4055,7 +4056,10 @@ static int ext4_do_update_inode(handle_t
raw_inode->i_file_acl_high =
cpu_to_le16(ei->i_file_acl >> 32);
raw_inode->i_file_acl_lo = cpu_to_le32(ei->i_file_acl);
- ext4_isize_set(raw_inode, ei->i_disksize);
+ if (ei->i_disksize != ext4_isize(raw_inode)) {
+ ext4_isize_set(raw_inode, ei->i_disksize);
+ need_datasync = 1;
+ }
if (ei->i_disksize > 0x7fffffffULL) {
struct super_block *sb = inode->i_sb;
if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
@@ -4108,7 +4112,7 @@ static int ext4_do_update_inode(handle_t
err = rc;
ext4_clear_inode_state(inode, EXT4_STATE_NEW);
- ext4_update_inode_fsync_trans(handle, inode, 0);
+ ext4_update_inode_fsync_trans(handle, inode, need_datasync);
out_brelse:
brelse(bh);
ext4_std_error(inode->i_sb, err);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Shijie <[email protected]>
commit c51803ddba10d80d9f246066802c6e359cf1d44c upstream.
We may cause a memory leak when @types has more than one parser.
Take `default_mtd_part_types` for example. It currently has two parsers: `cmdlinepart` and `ofpart`.
Assume the following case:
The kernel command line sets the partitions like:
#gpmi-nand:20m(boot),20m(kernel),1g(rootfs),-(user)
But the devicetree file (such as arch/arm/boot/dts/imx28-evk.dts) also sets the same partitions as the kernel command line does.
In the current code, the partitions parsed out by the `ofpart` parser will overwrite the @pparts that have already been set by the `cmdlinepart` parser, and the partitions parsed out by `cmdlinepart` are lost. A memory leak occurs.
So we should break out of the loop as soon as one parser has parsed out the partitions. In effect, this patch establishes a priority order among the parsers: if one parser has already parsed out the partitions successfully, there is no need to use another parser.
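
A standalone sketch of the "first successful parser wins" loop this patch introduces; the parser names and the return convention (a positive count of partitions on success) mirror the description above, everything else is illustrative.

  #include <stdio.h>

  struct parser {
          const char *name;
          int (*parse)(void);          /* >0: number of partitions found */
  };

  static int cmdline_parse(void) { return 4; }  /* pretend cmdline set 4 parts */
  static int ofpart_parse(void)  { return 2; }  /* device tree also has parts  */

  int main(void)
  {
          struct parser parsers[] = {
                  { "cmdlinepart", cmdline_parse },
                  { "ofpart",      ofpart_parse  },
          };
          int ret = 0;

          for (int i = 0; i < 2; i++) {
                  ret = parsers[i].parse();
                  if (ret > 0) {
                          printf("%d partitions from %s\n", ret, parsers[i].name);
                          break;       /* don't let a later parser overwrite them */
                  }
          }
          return 0;
  }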
Signed-off-by: Huang Shijie <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/mtdpart.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
index d518e4d..f8c08ec 100644
--- a/drivers/mtd/mtdpart.c
+++ b/drivers/mtd/mtdpart.c
@@ -711,6 +711,8 @@ static const char *default_mtd_part_types[] = {
* partition parsers, specified in @types. However, if @types is %NULL, then
* the default list of parsers is used. The default list contains only the
* "cmdlinepart" and "ofpart" parsers ATM.
+ * Note: If there are more then one parser in @types, the kernel only takes the
+ * partitions parsed out by the first parser.
*
* This function may return:
* o a negative error code in case of failure
@@ -735,11 +737,12 @@ int parse_mtd_partitions(struct mtd_info *master, const char **types,
if (!parser)
continue;
ret = (*parser->parse_fn)(master, pparts, data);
+ put_partition_parser(parser);
if (ret > 0) {
printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n",
ret, parser->name, master->name);
+ break;
}
- put_partition_parser(parser);
}
return ret;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Geert Uytterhoeven <[email protected]>
commit 9957423f035c2071f6d1c5d2f095cdafbeb25ad7 upstream.
It seems that current gcc (4.6.3) no longer provides this option, so make it conditional.
As reported by Tony before, the mn10300 architecture cross-compiles with gcc 4.6.3 if -mmem-funcs is not added to KBUILD_CFLAGS.
Reported-by: Tony Breeds <[email protected]>
Signed-off-by: Geert Uytterhoeven <[email protected]>
Cc: David Howells <[email protected]>
Cc: Koichi Yasutake <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
arch/mn10300/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mn10300/Makefile b/arch/mn10300/Makefile
index 33188b6..a3d0fef 100644
--- a/arch/mn10300/Makefile
+++ b/arch/mn10300/Makefile
@@ -26,7 +26,7 @@ CHECKFLAGS +=
PROCESSOR := unset
UNIT := unset
-KBUILD_CFLAGS += -mam33 -mmem-funcs -DCPU=AM33
+KBUILD_CFLAGS += -mam33 -DCPU=AM33 $(call cc-option,-mmem-funcs,)
KBUILD_AFLAGS += -mam33 -DCPU=AM33
ifeq ($(CONFIG_MN10300_CURRENT_IN_E2),y)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian Norris <[email protected]>
commit 74d83beaa229aac7d126ac1ed9414658ff1a89d2 upstream.
JFFS2 was designed without thought for OOB bitflips, it seems, but they
can occur and will be reported to JFFS2 via mtd_read_oob()[1]. We don't
want to fail on these transactions, since the data was corrected.
[1] Few drivers report bitflips for OOB-only transactions. With such
drivers, this patch should have no effect.
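
A userspace sketch of the error-handling pattern the patch applies, under the MTD convention that a corrected bitflip is reported as -EUCLEAN and should not be treated as a failed read; the helper names here are illustrative.

  #include <errno.h>   /* EUCLEAN is Linux-specific */
  #include <stdio.h>

  static int mtd_is_bitflip(int err)
  {
          return err == -EUCLEAN;      /* corrected bitflip(s), data is good */
  }

  static int check_oob(int read_ret, size_t oobretlen, size_t ooblen)
  {
          if ((read_ret && !mtd_is_bitflip(read_ret)) || oobretlen != ooblen) {
                  fprintf(stderr, "cannot read OOB, error %d\n", read_ret);
                  return (read_ret && !mtd_is_bitflip(read_ret)) ? read_ret : -EIO;
          }
          return 0;    /* 0 or -EUCLEAN both count as a successful read */
  }

  int main(void)
  {
          printf("%d\n", check_oob(0, 64, 64));         /* clean read   -> 0 */
          printf("%d\n", check_oob(-EUCLEAN, 64, 64));  /* bitflip      -> 0 */
          printf("%d\n", check_oob(-EIO, 64, 64));      /* real failure -> -EIO */
          return 0;
  }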
Signed-off-by: Brian Norris <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/jffs2/wbuf.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/fs/jffs2/wbuf.c
+++ b/fs/jffs2/wbuf.c
@@ -1032,11 +1032,11 @@ int jffs2_check_oob_empty(struct jffs2_s
ops.datbuf = NULL;
ret = c->mtd->read_oob(c->mtd, jeb->offset, &ops);
- if (ret || ops.oobretlen != ops.ooblen) {
+ if ((ret && !mtd_is_bitflip(ret)) || ops.oobretlen != ops.ooblen) {
printk(KERN_ERR "cannot read OOB for EB at %08x, requested %zd"
" bytes, read %zd bytes, error %d\n",
jeb->offset, ops.ooblen, ops.oobretlen, ret);
- if (!ret)
+ if (!ret || mtd_is_bitflip(ret))
ret = -EIO;
return ret;
}
@@ -1075,11 +1075,11 @@ int jffs2_check_nand_cleanmarker(struct
ops.datbuf = NULL;
ret = c->mtd->read_oob(c->mtd, jeb->offset, &ops);
- if (ret || ops.oobretlen != ops.ooblen) {
+ if ((ret && !mtd_is_bitflip(ret)) || ops.oobretlen != ops.ooblen) {
printk(KERN_ERR "cannot read OOB for EB at %08x, requested %zd"
" bytes, read %zd bytes, error %d\n",
jeb->offset, ops.ooblen, ops.oobretlen, ret);
- if (!ret)
+ if (!ret || mtd_is_bitflip(ret))
ret = -EIO;
return ret;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Bießmann <[email protected]>
commit 4d3d688da8e7016f15483e9319b41311e1db9515 upstream.
Unloading the omap2 nand driver failed to release the memory region, so it cannot be requested again if one wants to load the driver later on.
This patch fixes the following error when loading the omap2 module after unloading it:
---8<---
~ $ rmmod omap2
~ $ modprobe omap2
[ 37.420928] omap2-nand: probe of omap2-nand.0 failed with error -16
~ $
--->8---
This error was introduced in 67ce04bf2746f8a1f8c2a104b313d20c63f68378 which
was the first commit of this driver.
Signed-off-by: Andreas Bießmann <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/nand/omap2.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c
index e604a45..9142005 100644
--- a/drivers/mtd/nand/omap2.c
+++ b/drivers/mtd/nand/omap2.c
@@ -1364,6 +1364,7 @@ static int omap_nand_remove(struct platform_device *pdev)
/* Release NAND device, its internal structures and partitions */
nand_release(&info->mtd);
iounmap(info->nand.IO_ADDR_R);
+ release_mem_region(info->phys_base, NAND_IO_SIZE);
kfree(info);
return 0;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Richard Genoud <[email protected]>
commit bb0a13a13411c4ce24c48c8ff3cdf7b48d237240 upstream.
If the override size was too big, the module was actually loaded instead of failing, because retval was not set.
This led to memory corruption through use of the freed structs nandsim and nand_chip.
Signed-off-by: Richard Genoud <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/nand/nandsim.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/mtd/nand/nandsim.c b/drivers/mtd/nand/nandsim.c
index 21e64b5..a932c48 100644
--- a/drivers/mtd/nand/nandsim.c
+++ b/drivers/mtd/nand/nandsim.c
@@ -2317,6 +2317,7 @@ static int __init ns_init_module(void)
uint64_t new_size = (uint64_t)nsmtd->erasesize << overridesize;
if (new_size >> overridesize != nsmtd->erasesize) {
NS_ERR("overridesize is too big\n");
+ retval = -EINVAL;
goto err_exit;
}
/* N.B. This relies on nand_scan not doing anything with the size before we change it */
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yuta Ando <[email protected]>
commit 4eae518d4b01b0cbf2f0d8edb5a6f3d6245ee8fb upstream.
The kbuild target 'localyesconfig' has been the same as 'localmodconfig' since commit 50bce3e "kconfig/streamline_config.pl: merge local{mod,yes}config". That commit expects this script to generate a different configuration depending on the target, but this was never implemented.
So add code that sets the config symbols to 'y' when the target is 'localyesconfig'.
Link: http://lkml.kernel.org/r/[email protected]
Cc: [email protected]
Signed-off-by: Yuta Ando <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
scripts/kconfig/streamline_config.pl | 2 ++
1 file changed, 2 insertions(+)
diff --git a/scripts/kconfig/streamline_config.pl b/scripts/kconfig/streamline_config.pl
index 39b6314..3368939 100644
--- a/scripts/kconfig/streamline_config.pl
+++ b/scripts/kconfig/streamline_config.pl
@@ -601,6 +601,8 @@ foreach my $line (@config_file) {
if (defined($configs{$1})) {
if ($localyesconfig) {
$setconfigs{$1} = 'y';
+ print "$1=y\n";
+ next;
} else {
$setconfigs{$1} = $2;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit 38b11bae6ba02da352340aff12ee25755977b222 upstream.
We've had reports in the past about this specific case, so it's time to
go ahead and explicitly set cache_dynamic_acls=1 for generate_node_acls=1
(TPG demo-mode) operation.
During normal generate_node_acls=0 operation with explicit NodeACLs ->
se_node_acl memory is persistent to the configfs group located at
/sys/kernel/config/target/$TARGETNAME/$TPGT/acls/$INITIATORNAME, so in
the generate_node_acls=1 case we want the reservation logic to reference
existing per initiator IQN se_node_acl memory (not to generate a new
se_node_acl), so go ahead and always set cache_dynamic_acls=1 when
TPG demo-mode is enabled.
Reported-by: Ronnie Sahlberg <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/target/iscsi/iscsi_target_tpg.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
index a38a3f8..de9ea32 100644
--- a/drivers/target/iscsi/iscsi_target_tpg.c
+++ b/drivers/target/iscsi/iscsi_target_tpg.c
@@ -677,6 +677,12 @@ int iscsit_ta_generate_node_acls(
pr_debug("iSCSI_TPG[%hu] - Generate Initiator Portal Group ACLs: %s\n",
tpg->tpgt, (a->generate_node_acls) ? "Enabled" : "Disabled");
+ if (flag == 1 && a->cache_dynamic_acls == 0) {
+ pr_debug("Explicitly setting cache_dynamic_acls=1 when "
+ "generate_node_acls=1\n");
+ a->cache_dynamic_acls = 1;
+ }
+
return 0;
}
@@ -716,6 +722,12 @@ int iscsit_ta_cache_dynamic_acls(
return -EINVAL;
}
+ if (a->generate_node_acls == 1 && flag == 0) {
+ pr_debug("Skipping cache_dynamic_acls=0 when"
+ " generate_node_acls=1\n");
+ return 0;
+ }
+
a->cache_dynamic_acls = flag;
pr_debug("iSCSI_TPG[%hu] - Cache Dynamic Initiator Portal Group"
" ACLs %s\n", tpg->tpgt, (a->cache_dynamic_acls) ?
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrick McHardy <[email protected]>
commit bea1e22df494a729978e7f2c54f7bda328f74bc3 upstream.
Fix a crash in ipoib_mcast_join_task(). (with help from Or Gerlitz)
Commit c8c2afe360b7 ("IPoIB: Use rtnl lock/unlock when changing device
flags") added a call to rtnl_lock() in ipoib_mcast_join_task(), which
is run from the ipoib_workqueue, and hence the workqueue can't be
flushed from the context of ipoib_stop().
In the current code, ipoib_stop() (which doesn't flush the workqueue)
calls ipoib_mcast_dev_flush(), which goes and deletes all the
multicast entries. This takes place without any synchronization with
a possible running instance of ipoib_mcast_join_task() for the same
ipoib device, leading to a crash due to NULL pointer dereference.
Fix this by making sure that the workqueue is flushed before
ipoib_mcast_dev_flush() is called. To make that possible, we move the
RTNL-lock wrapped code to ipoib_mcast_join_finish().
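
For intuition, a small userspace sketch of the ordering requirement described above (build with -pthread): the teardown path must wait for the in-flight worker (the "flush") before freeing the data the worker touches. All names are illustrative; pthread_join stands in for flushing the workqueue.

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static char *mcast_list;             /* stands in for the multicast entries */

  static void *join_task(void *arg)
  {
          (void)arg;
          /* Safe only because the teardown path waits for us first. */
          printf("join task sees: %s\n", mcast_list);
          return NULL;
  }

  int main(void)
  {
          pthread_t worker;

          mcast_list = malloc(32);
          if (!mcast_list)
                  return 1;
          strcpy(mcast_list, "broadcast group");

          pthread_create(&worker, NULL, join_task, NULL);

          /* "Flush the workqueue" before flushing the multicast entries ... */
          pthread_join(worker, NULL);

          /* ... only then is it safe to free them. */
          free(mcast_list);
          mcast_list = NULL;
          return 0;
  }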
Signed-off-by: Patrick McHardy <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/infiniband/ulp/ipoib/ipoib_main.c | 2 +-
drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 19 ++++++++++---------
2 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 1e19b5a..ea0dfc7 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -150,7 +150,7 @@ static int ipoib_stop(struct net_device *dev)
netif_stop_queue(dev);
- ipoib_ib_dev_down(dev, 0);
+ ipoib_ib_dev_down(dev, 1);
ipoib_ib_dev_stop(dev, 0);
if (!test_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags)) {
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 7536724..cecb98a 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -175,7 +175,9 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
mcast->mcmember = *mcmember;
- /* Set the cached Q_Key before we attach if it's the broadcast group */
+ /* Set the multicast MTU and cached Q_Key before we attach if it's
+ * the broadcast group.
+ */
if (!memcmp(mcast->mcmember.mgid.raw, priv->dev->broadcast + 4,
sizeof (union ib_gid))) {
spin_lock_irq(&priv->lock);
@@ -183,10 +185,17 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
spin_unlock_irq(&priv->lock);
return -EAGAIN;
}
+ priv->mcast_mtu = IPOIB_UD_MTU(ib_mtu_enum_to_int(priv->broadcast->mcmember.mtu));
priv->qkey = be32_to_cpu(priv->broadcast->mcmember.qkey);
spin_unlock_irq(&priv->lock);
priv->tx_wr.wr.ud.remote_qkey = priv->qkey;
set_qkey = 1;
+
+ if (!ipoib_cm_admin_enabled(dev)) {
+ rtnl_lock();
+ dev_set_mtu(dev, min(priv->mcast_mtu, priv->admin_mtu));
+ rtnl_unlock();
+ }
}
if (!test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
@@ -574,14 +583,6 @@ void ipoib_mcast_join_task(struct work_struct *work)
return;
}
- priv->mcast_mtu = IPOIB_UD_MTU(ib_mtu_enum_to_int(priv->broadcast->mcmember.mtu));
-
- if (!ipoib_cm_admin_enabled(dev)) {
- rtnl_lock();
- dev_set_mtu(dev, min(priv->mcast_mtu, priv->admin_mtu));
- rtnl_unlock();
- }
-
ipoib_dbg_mcast(priv, "successfully joined all multicast groups\n");
clear_bit(IPOIB_MCAST_RUN, &priv->flags);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jean Delvare <[email protected]>
commit b1e0d8b70fa31821ebca3965f2ef8619d7c5e316 upstream.
The correct syntax for gcc -x is "gcc -x assembler", not
"gcc -xassembler". Even though the latter happens to work, the former
is what is documented in the manual page and thus what gcc wrappers
such as icecream do expect.
This isn't a cosmetic change. The missing space prevents icecream from
recognizing compilation tasks it can't handle, leading to silent kernel
miscompilations.
Besides me, credits go to Michael Matz and Dirk Mueller for
investigating the miscompilation issue and tracking it down to this
incorrect -x parameter syntax.
Signed-off-by: Jean Delvare <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Cc: Bernhard Walle <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: Ralf Baechle <[email protected]>
Signed-off-by: Michal Marek <[email protected]>
[bwh: Backported to 3.2: drop unneeded change to arch/x86/Makefile]
Signed-off-by: Ben Hutchings <[email protected]>
---
arch/mips/Makefile | 2 +-
arch/mips/kernel/Makefile | 2 +-
arch/x86/Makefile | 2 +-
scripts/Kbuild.include | 12 ++++++------
scripts/gcc-version.sh | 6 +++---
scripts/gcc-x86_32-has-stack-protector.sh | 2 +-
scripts/gcc-x86_64-has-stack-protector.sh | 2 +-
scripts/kconfig/check.sh | 2 +-
scripts/kconfig/lxdialog/check-lxdialog.sh | 2 +-
tools/perf/Makefile | 2 +-
tools/power/cpupower/Makefile | 2 +-
11 files changed, 18 insertions(+), 18 deletions(-)
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -224,7 +224,7 @@ KBUILD_CPPFLAGS += -D"DATAOFFSET=$(if $(
LDFLAGS += -m $(ld-emul)
ifdef CONFIG_MIPS
-CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -xc /dev/null | \
+CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/")
ifdef CONFIG_64BIT
--- a/arch/mips/kernel/Makefile
+++ b/arch/mips/kernel/Makefile
@@ -102,7 +102,7 @@ obj-$(CONFIG_MIPS_MACHINE) += mips_machi
obj-$(CONFIG_OF) += prom.o
-CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(KBUILD_CFLAGS) -Wa,-mdaddi -c -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-DHAVE_AS_SET_DADDI"; fi)
+CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(KBUILD_CFLAGS) -Wa,-mdaddi -c -o /dev/null -x c /dev/null >/dev/null 2>&1; then echo "-DHAVE_AS_SET_DADDI"; fi)
obj-$(CONFIG_HAVE_STD_PC_SERIAL_PORT) += 8250-platform.o
--- a/scripts/Kbuild.include
+++ b/scripts/Kbuild.include
@@ -98,24 +98,24 @@ try-run = $(shell set -e; \
# Usage: cflags-y += $(call as-option,-Wa$(comma)-isa=foo,)
as-option = $(call try-run,\
- $(CC) $(KBUILD_CFLAGS) $(1) -c -xassembler /dev/null -o "$$TMP",$(1),$(2))
+ $(CC) $(KBUILD_CFLAGS) $(1) -c -x assembler /dev/null -o "$$TMP",$(1),$(2))
# as-instr
# Usage: cflags-y += $(call as-instr,instr,option1,option2)
as-instr = $(call try-run,\
- printf "%b\n" "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" -,$(2),$(3))
+ printf "%b\n" "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -x assembler -o "$$TMP" -,$(2),$(3))
# cc-option
# Usage: cflags-y += $(call cc-option,-march=winchip-c6,-march=i586)
cc-option = $(call try-run,\
- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -xc /dev/null -o "$$TMP",$(1),$(2))
+ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
# cc-option-yn
# Usage: flag := $(call cc-option-yn,-march=winchip-c6)
cc-option-yn = $(call try-run,\
- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -xc /dev/null -o "$$TMP",y,n)
+ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",y,n)
# cc-option-align
# Prefix align with either -falign or -malign
@@ -125,7 +125,7 @@ cc-option-align = $(subst -functions=0,,
# cc-disable-warning
# Usage: cflags-y += $(call cc-disable-warning,unused-but-set-variable)
cc-disable-warning = $(call try-run,\
- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -W$(strip $(1)) -c -xc /dev/null -o "$$TMP",-Wno-$(strip $(1)))
+ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
# cc-version
# Usage gcc-ver := $(call cc-version)
@@ -143,7 +143,7 @@ cc-ifversion = $(shell [ $(call cc-versi
# cc-ldoption
# Usage: ldflags += $(call cc-ldoption, -Wl$(comma)--hash-style=both)
cc-ldoption = $(call try-run,\
- $(CC) $(1) -nostdlib -xc /dev/null -o "$$TMP",$(1),$(2))
+ $(CC) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
# ld-option
# Usage: LDFLAGS += $(call ld-option, -X)
--- a/scripts/gcc-version.sh
+++ b/scripts/gcc-version.sh
@@ -22,10 +22,10 @@ if [ ${#compiler} -eq 0 ]; then
exit 1
fi
-MAJOR=$(echo __GNUC__ | $compiler -E -xc - | tail -n 1)
-MINOR=$(echo __GNUC_MINOR__ | $compiler -E -xc - | tail -n 1)
+MAJOR=$(echo __GNUC__ | $compiler -E -x c - | tail -n 1)
+MINOR=$(echo __GNUC_MINOR__ | $compiler -E -x c - | tail -n 1)
if [ "x$with_patchlevel" != "x" ] ; then
- PATCHLEVEL=$(echo __GNUC_PATCHLEVEL__ | $compiler -E -xc - | tail -n 1)
+ PATCHLEVEL=$(echo __GNUC_PATCHLEVEL__ | $compiler -E -x c - | tail -n 1)
printf "%02d%02d%02d\\n" $MAJOR $MINOR $PATCHLEVEL
else
printf "%02d%02d\\n" $MAJOR $MINOR
--- a/scripts/gcc-x86_32-has-stack-protector.sh
+++ b/scripts/gcc-x86_32-has-stack-protector.sh
@@ -1,6 +1,6 @@
#!/bin/sh
-echo "int foo(void) { char X[200]; return 3; }" | $* -S -xc -c -O0 -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
if [ "$?" -eq "0" ] ; then
echo y
else
--- a/scripts/gcc-x86_64-has-stack-protector.sh
+++ b/scripts/gcc-x86_64-has-stack-protector.sh
@@ -1,6 +1,6 @@
#!/bin/sh
-echo "int foo(void) { char X[200]; return 3; }" | $* -S -xc -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
if [ "$?" -eq "0" ] ; then
echo y
else
--- a/scripts/kconfig/check.sh
+++ b/scripts/kconfig/check.sh
@@ -1,6 +1,6 @@
#!/bin/sh
# Needed for systems without gettext
-$* -xc -o /dev/null - > /dev/null 2>&1 << EOF
+$* -x c -o /dev/null - > /dev/null 2>&1 << EOF
#include <libintl.h>
int main()
{
--- a/scripts/kconfig/lxdialog/check-lxdialog.sh
+++ b/scripts/kconfig/lxdialog/check-lxdialog.sh
@@ -38,7 +38,7 @@ trap "rm -f $tmp" 0 1 2 3 15
# Check if we can link to ncurses
check() {
- $cc -xc - -o $tmp 2>/dev/null <<'EOF'
+ $cc -x c - -o $tmp 2>/dev/null <<'EOF'
#include CURSES_LOC
main() {}
EOF
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -56,7 +56,7 @@ ifeq ($(ARCH),x86_64)
ARCH := x86
IS_X86_64 := 0
ifeq (, $(findstring m32,$(EXTRA_CFLAGS)))
- IS_X86_64 := $(shell echo __x86_64__ | ${CC} -E -xc - | tail -n 1)
+ IS_X86_64 := $(shell echo __x86_64__ | ${CC} -E -x c - | tail -n 1)
endif
ifeq (${IS_X86_64}, 1)
RAW_ARCH := x86_64
--- a/tools/power/cpupower/Makefile
+++ b/tools/power/cpupower/Makefile
@@ -100,7 +100,7 @@ GMO_FILES = ${shell for HLANG in ${LANGU
export CROSS CC AR STRIP RANLIB CFLAGS LDFLAGS LIB_OBJS
# check if compiler option is supported
-cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -xc /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
+cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -x c /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
# use '-Os' optimization if available, else use -O2
OPTIMIZATION := $(call cc-supports,-Os,-O2)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit b32f4c7ed85c5cee2a21a55c9f59ebc9d57a2463 upstream.
This patch re-adds the ability to optionally run in buffered FILEIO mode
(eg: w/o O_DSYNC) for device backends in order to once again use the
Linux buffered cache as a write-back storage mechanism.
This logic was originally dropped with mainline v3.5-rc commit:
commit a4dff3043c231d57f982af635c9d2192ee40e5ae
Author: Nicholas Bellinger <[email protected]>
Date: Wed May 30 16:25:41 2012 -0700
target/file: Use O_DSYNC by default for FILEIO backends
The difference with this patch is that fd_create_virtdevice() now
forces the explicit setting of emulate_write_cache=1 when buffered FILEIO
operation has been enabled.
(v2: Switch to FDBD_HAS_BUFFERED_IO_WCE + add more detailed
comment as requested by hch)
Reported-by: Ferry <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/target/target_core_file.c | 41 ++++++++++++++++++++++++++++++++++---
drivers/target/target_core_file.h | 1 +
2 files changed, 39 insertions(+), 3 deletions(-)
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -139,6 +139,19 @@ static struct se_device *fd_create_virtd
* of pure timestamp updates.
*/
flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;
+ /*
+ * Optionally allow fd_buffered_io=1 to be enabled for people
+ * who want use the fs buffer cache as an WriteCache mechanism.
+ *
+ * This means that in event of a hard failure, there is a risk
+ * of silent data-loss if the SCSI client has *not* performed a
+ * forced unit access (FUA) write, or issued SYNCHRONIZE_CACHE
+ * to write-out the entire device cache.
+ */
+ if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+ pr_debug("FILEIO: Disabling O_DSYNC, using buffered FILEIO\n");
+ flags &= ~O_DSYNC;
+ }
file = filp_open(dev_p, flags, 0600);
if (IS_ERR(file)) {
@@ -206,6 +219,12 @@ static struct se_device *fd_create_virtd
if (!dev)
goto fail;
+ if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+ pr_debug("FILEIO: Forcing setting of emulate_write_cache=1"
+ " with FDBD_HAS_BUFFERED_IO_WCE\n");
+ dev->se_sub_dev->se_dev_attrib.emulate_write_cache = 1;
+ }
+
fd_dev->fd_dev_id = fd_host->fd_host_dev_id_count++;
fd_dev->fd_queue_depth = dev->queue_depth;
@@ -450,6 +469,7 @@ enum {
static match_table_t tokens = {
{Opt_fd_dev_name, "fd_dev_name=%s"},
{Opt_fd_dev_size, "fd_dev_size=%s"},
+ {Opt_fd_buffered_io, "fd_buffered_io=%d"},
{Opt_err, NULL}
};
@@ -461,7 +481,7 @@ static ssize_t fd_set_configfs_dev_param
struct fd_dev *fd_dev = se_dev->se_dev_su_ptr;
char *orig, *ptr, *arg_p, *opts;
substring_t args[MAX_OPT_ARGS];
- int ret = 0, token;
+ int ret = 0, arg, token;
opts = kstrdup(page, GFP_KERNEL);
if (!opts)
@@ -505,6 +525,19 @@ static ssize_t fd_set_configfs_dev_param
" bytes\n", fd_dev->fd_dev_size);
fd_dev->fbd_flags |= FBDF_HAS_SIZE;
break;
+ case Opt_fd_buffered_io:
+ match_int(args, &arg);
+ if (arg != 1) {
+ pr_err("bogus fd_buffered_io=%d value\n", arg);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ pr_debug("FILEIO: Using buffered I/O"
+ " operations for struct fd_dev\n");
+
+ fd_dev->fbd_flags |= FDBD_HAS_BUFFERED_IO_WCE;
+ break;
default:
break;
}
@@ -536,8 +569,10 @@ static ssize_t fd_show_configfs_dev_para
ssize_t bl = 0;
bl = sprintf(b + bl, "TCM FILEIO ID: %u", fd_dev->fd_dev_id);
- bl += sprintf(b + bl, " File: %s Size: %llu Mode: O_DSYNC\n",
- fd_dev->fd_dev_name, fd_dev->fd_dev_size);
+ bl += sprintf(b + bl, " File: %s Size: %llu Mode: %s\n",
+ fd_dev->fd_dev_name, fd_dev->fd_dev_size,
+ (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) ?
+ "Buffered-WCE" : "O_DSYNC");
return bl;
}
--- a/drivers/target/target_core_file.h
+++ b/drivers/target/target_core_file.h
@@ -18,6 +18,7 @@ struct fd_request {
#define FBDF_HAS_PATH 0x01
#define FBDF_HAS_SIZE 0x02
+#define FDBD_HAS_BUFFERED_IO_WCE 0x04
struct fd_dev {
u32 fbd_flags;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanislav Kinsbursky <[email protected]>
commit 303a7ce92064c285a04c870f2dc0192fdb2968cb upstream.
Taking the hostname from the uts namespace is not safe, because this could be
performed during a umount operation on child reaper death. And in this case
current->nsproxy is NULL already.
Signed-off-by: Stanislav Kinsbursky <[email protected]>
Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/lockd/mon.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index 38f240e..e0bc36e 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -42,6 +42,7 @@ struct nsm_args {
u32 proc;
char *mon_name;
+ char *nodename;
};
struct nsm_res {
@@ -141,6 +142,7 @@ static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res,
.vers = 3,
.proc = NLMPROC_NSM_NOTIFY,
.mon_name = nsm->sm_mon_name,
+ .nodename = utsname()->nodename,
};
struct rpc_message msg = {
.rpc_argp = &args,
@@ -477,7 +479,7 @@ static void encode_my_id(struct xdr_stream *xdr, const struct nsm_args *argp)
{
__be32 *p;
- encode_nsm_string(xdr, utsname()->nodename);
+ encode_nsm_string(xdr, argp->nodename);
p = xdr_reserve_space(xdr, 4 + 4 + 4);
*p++ = cpu_to_be32(argp->prog);
*p++ = cpu_to_be32(argp->vers);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger <[email protected]>
commit cf0eb28d3ba60098865bf7dbcbfdd6b1cc483e3b upstream.
This patch increases the default for nopin_timeout to 15 seconds (wait
between sending a new NopIN ping) and nopin_response_timeout to 30 seconds
(wait for NopOUT response before failing the connection) in order to avoid
false positives by iSCSI Initiators who are not always able (under load) to
respond to NopIN echo PING requests within the current 5 second window.
False positives have been observed recently using Open-iSCSI code on v3.3.x
with heavy large-block READ workloads over small MTU 1 Gb/sec ports, and
increasing these values to more reasonable defaults significantly reduces
the possibility of false positive NopIN response timeout events under
this specific workload.
Historically these have been set low to initiate connection recovery as
soon as possible if we don't hear a ping back, but for modern v3.x code
on 1 -> 10 Gb/sec ports these new defaults make a lot more sense.
Cc: Christoph Hellwig <[email protected]>
Cc: Andy Grover <[email protected]>
Cc: Mike Christie <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/target/iscsi/iscsi_target_core.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h
index ff3f482..2ba9f9b 100644
--- a/drivers/target/iscsi/iscsi_target_core.h
+++ b/drivers/target/iscsi/iscsi_target_core.h
@@ -25,10 +25,10 @@
#define NA_DATAOUT_TIMEOUT_RETRIES 5
#define NA_DATAOUT_TIMEOUT_RETRIES_MAX 15
#define NA_DATAOUT_TIMEOUT_RETRIES_MIN 1
-#define NA_NOPIN_TIMEOUT 5
+#define NA_NOPIN_TIMEOUT 15
#define NA_NOPIN_TIMEOUT_MAX 60
#define NA_NOPIN_TIMEOUT_MIN 3
-#define NA_NOPIN_RESPONSE_TIMEOUT 5
+#define NA_NOPIN_RESPONSE_TIMEOUT 30
#define NA_NOPIN_RESPONSE_TIMEOUT_MAX 60
#define NA_NOPIN_RESPONSE_TIMEOUT_MIN 3
#define NA_RANDOM_DATAIN_PDU_OFFSETS 0
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Guennadi Liakhovetski <[email protected]>
commit 8464dd52d3198dd05cafb005371d76e5339eb842 upstream.
On some systems, e.g., kzm9g, MMCIF interfaces can produce spurious
interrupts without any active request. To prevent the Oops that results
in such cases, don't dereference the mmc request pointer until we make
sure that we are indeed processing such a request.
Reported-by: Tetsuyuki Kobayashi <[email protected]>
Signed-off-by: Guennadi Liakhovetski <[email protected]>
Signed-off-by: Chris Ball <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mmc/host/sh_mmcif.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/drivers/mmc/host/sh_mmcif.c
+++ b/drivers/mmc/host/sh_mmcif.c
@@ -1003,6 +1003,10 @@ static irqreturn_t sh_mmcif_intr(int irq
host->sd_error = true;
dev_dbg(&host->pd->dev, "int err state = %08x\n", state);
}
+ if (host->state == STATE_IDLE) {
+ dev_info(&host->pd->dev, "Spurious IRQ status 0x%x", state);
+ return IRQ_HANDLED;
+ }
if (state & ~(INT_CMD12RBE | INT_CMD12CRE))
complete(&host->intr_wait);
else
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christoph Hellwig <[email protected]>
commit 904753da183566c71211d23c169a80184648c121 upstream.
Fix a potential multiple spin-unlock -> deadlock scenario during the
overflow check within iscsit_build_sendtargets_resp() as found by
sparse static checking.
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Nicholas Bellinger <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/target/iscsi/iscsi_target.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index cca6d13..29f3b24 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3274,7 +3274,6 @@ static int iscsit_build_sendtargets_response(struct iscsi_cmd *cmd)
len += 1;
if ((len + payload_len) > buffer_len) {
- spin_unlock(&tiqn->tiqn_tpg_lock);
end_of_buf = 1;
goto eob;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexandre Bounine <[email protected]>
commit 7c4a6106d6451fc03c491e61df37c044505d843a upstream.
Fix multicast packet transmit logic to account for repetitive transmission
of single skb:
- correct check for available buffers (this bug may produce a NULL pointer
crash dump in case of heavy traffic);
- update skb user count (an incorrect user counter causes a warning dump from
the net_tx_action routine during multicast transfers in systems with three or
more rionet participants).
Signed-off-by: Alexandre Bounine <[email protected]>
Cc: Matt Porter <[email protected]>
Cc: David S. Miller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/net/rionet.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c
index 91d2588..1470d3e 100644
--- a/drivers/net/rionet.c
+++ b/drivers/net/rionet.c
@@ -79,6 +79,7 @@ static int rionet_capable = 1;
* on system trade-offs.
*/
static struct rio_dev **rionet_active;
+static int nact; /* total number of active rionet peers */
#define is_rionet_capable(src_ops, dst_ops) \
((src_ops & RIO_SRC_OPS_DATA_MSG) && \
@@ -175,6 +176,7 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
struct ethhdr *eth = (struct ethhdr *)skb->data;
u16 destid;
unsigned long flags;
+ int add_num = 1;
local_irq_save(flags);
if (!spin_trylock(&rnet->tx_lock)) {
@@ -182,7 +184,10 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
return NETDEV_TX_LOCKED;
}
- if ((rnet->tx_cnt + 1) > RIONET_TX_RING_SIZE) {
+ if (is_multicast_ether_addr(eth->h_dest))
+ add_num = nact;
+
+ if ((rnet->tx_cnt + add_num) > RIONET_TX_RING_SIZE) {
netif_stop_queue(ndev);
spin_unlock_irqrestore(&rnet->tx_lock, flags);
printk(KERN_ERR "%s: BUG! Tx Ring full when queue awake!\n",
@@ -191,11 +196,16 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
}
if (is_multicast_ether_addr(eth->h_dest)) {
+ int count = 0;
for (i = 0; i < RIO_MAX_ROUTE_ENTRIES(rnet->mport->sys_size);
i++)
- if (rionet_active[i])
+ if (rionet_active[i]) {
rionet_queue_tx_msg(skb, ndev,
rionet_active[i]);
+ if (count)
+ atomic_inc(&skb->users);
+ count++;
+ }
} else if (RIONET_MAC_MATCH(eth->h_dest)) {
destid = RIONET_GET_DESTID(eth->h_dest);
if (rionet_active[destid])
@@ -220,14 +230,17 @@ static void rionet_dbell_event(struct rio_mport *mport, void *dev_id, u16 sid, u
if (info == RIONET_DOORBELL_JOIN) {
if (!rionet_active[sid]) {
list_for_each_entry(peer, &rionet_peers, node) {
- if (peer->rdev->destid == sid)
+ if (peer->rdev->destid == sid) {
rionet_active[sid] = peer->rdev;
+ nact++;
+ }
}
rio_mport_send_doorbell(mport, sid,
RIONET_DOORBELL_JOIN);
}
} else if (info == RIONET_DOORBELL_LEAVE) {
rionet_active[sid] = NULL;
+ nact--;
} else {
if (netif_msg_intr(rnet))
printk(KERN_WARNING "%s: unhandled doorbell\n",
@@ -523,6 +536,7 @@ static int rionet_probe(struct rio_dev *rdev, const struct rio_device_id *id)
rc = rionet_setup_netdev(rdev->net->hport, ndev);
rionet_check = 1;
+ nact = 0;
}
/*
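The second half of the fix is a reference-counting pattern: when one skb is
queued to several peers and each completion path later drops a reference, the
sender has to take an extra reference for every peer after the first (and the
ring-space check has to reserve one slot per active peer, which is what nact
tracks). A minimal userspace sketch of the refcount part, using a hypothetical
buf type rather than sk_buff:

    #include <stdio.h>

    struct buf {
        int users;                  /* reference count, analogous to skb->users */
    };

    /* hypothetical consumer: drops its reference when transmission completes */
    static void consume_and_put(struct buf *b)
    {
        if (--b->users == 0)
            printf("buffer freed\n");
    }

    int main(void)
    {
        struct buf b = { .users = 1 };      /* producer holds one reference */
        int peers = 3, count = 0, i;

        for (i = 0; i < peers; i++) {
            if (count)
                b.users++;          /* every peer after the first needs its own ref */
            count++;
        }
        /* hand the same buffer to all peers; each one drops a reference */
        for (i = 0; i < peers; i++)
            consume_and_put(&b);

        return 0;                   /* prints "buffer freed" exactly once */
    }

Without the extra references, the buffer would be released after the first
peer finishes while the other peers still hold pointers to it.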
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sascha Hauer <[email protected]>
commit c353acba28fb3fa1fd05fd6b85a9fc7938330f9c upstream.
The if_changed mechanism does not work when the command contains
backslashes. This basically is an issue with lzo and bzip2 compressed
kernels. The compressed binaries do not contain the uncompressed image
size, so these use size_append to append the size. This results in
backslashes in the executed command. With this if_changed always
detects a change in the command and rebuilds the compressed image even
if nothing has changed.
Fix this by escaping backslashes in make-cmd.
Signed-off-by: Sascha Hauer <[email protected]>
Signed-off-by: Jan Luebbe <[email protected]>
Cc: Sam Ravnborg <[email protected]>
Cc: Bernhard Walle <[email protected]>
Cc: Michal Marek <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
scripts/Kbuild.include | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
index 6a3ee98..afa4459 100644
--- a/scripts/Kbuild.include
+++ b/scripts/Kbuild.include
@@ -209,7 +209,7 @@ endif
# >$< substitution to preserve $ when reloading .cmd file
# note: when using inline perl scripts [perl -e '...$$t=1;...']
# in $(cmd_xxx) double $$ your perl vars
-make-cmd = $(subst \#,\\\#,$(subst $$,$$$$,$(call escsq,$(cmd_$(1)))))
+make-cmd = $(subst \\,\\\\,$(subst \#,\\\#,$(subst $$,$$$$,$(call escsq,$(cmd_$(1))))))
# Find any prerequisites that is newer than target or that does not exist.
# PHONY targets skipped in both cases.
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rusty Russell <[email protected]>
commit ca16f580a5db7e60bfafe59a50bb133bd3347491 upstream.
We usually got away with ->next on the final entry being NULL, but it
finally bit me.
Signed-off-by: Rusty Russell <[email protected]>
[bwh: Backported to 3.2: adjust filename]
Signed-off-by: Ben Hutchings <[email protected]>
---
Documentation/virtual/lguest/lguest.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/Documentation/virtual/lguest/lguest.c b/Documentation/virtual/lguest/lguest.c
index f759f4f..fd2f922 100644
--- a/Documentation/virtual/lguest/lguest.c
+++ b/Documentation/virtual/lguest/lguest.c
@@ -1299,6 +1299,7 @@ static struct device *new_device(const char *name, u16 type)
dev->feature_len = 0;
dev->num_vq = 0;
dev->running = false;
+ dev->next = NULL;
/*
* Append to device list. Prepending to a single-linked list is
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Vetter <[email protected]>
commit 74d44445afb9f50126eba052adeb89827cee88f3 upstream.
... since finish_page_flip needs the vblank timestamp generated
in drm_handle_vblank. Somehow all the gmch platforms get it right,
but all the pch platform irq handlers get is wrong. Hooray for copy&
pasting!
Currently this gets papered over by a gross hack in finish_page_flip.
A second patch will remove that.
Note that without this, the new timestamp sanity checks in flip_test
occasionally get tripped up, hence the cc: stable tag.
Reviewed-by: [email protected]
Tested-by: Imre Deak <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
[bwh: Backported to 3.2: no loop over pipes in ivybridge_irq_handler(),
so make a similar change to that in ironlake_irq_handler()]
Signed-off-by: Ben Hutchings <[email protected]>
---
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -530,6 +530,12 @@ static irqreturn_t ivybridge_irq_handler
if (de_iir & DE_GSE_IVB)
intel_opregion_gse_intr(dev);
+ if (de_iir & DE_PIPEA_VBLANK_IVB)
+ drm_handle_vblank(dev, 0);
+
+ if (de_iir & DE_PIPEB_VBLANK_IVB)
+ drm_handle_vblank(dev, 1);
+
if (de_iir & DE_PLANEA_FLIP_DONE_IVB) {
intel_prepare_page_flip(dev, 0);
intel_finish_page_flip_plane(dev, 0);
@@ -540,12 +546,6 @@ static irqreturn_t ivybridge_irq_handler
intel_finish_page_flip_plane(dev, 1);
}
- if (de_iir & DE_PIPEA_VBLANK_IVB)
- drm_handle_vblank(dev, 0);
-
- if (de_iir & DE_PIPEB_VBLANK_IVB)
- drm_handle_vblank(dev, 1);
-
/* check event from PCH */
if (de_iir & DE_PCH_EVENT_IVB) {
if (pch_iir & SDE_HOTPLUG_MASK_CPT)
@@ -622,6 +622,12 @@ static irqreturn_t ironlake_irq_handler(
if (de_iir & DE_GSE)
intel_opregion_gse_intr(dev);
+ if (de_iir & DE_PIPEA_VBLANK)
+ drm_handle_vblank(dev, 0);
+
+ if (de_iir & DE_PIPEB_VBLANK)
+ drm_handle_vblank(dev, 1);
+
if (de_iir & DE_PLANEA_FLIP_DONE) {
intel_prepare_page_flip(dev, 0);
intel_finish_page_flip_plane(dev, 0);
@@ -632,12 +638,6 @@ static irqreturn_t ironlake_irq_handler(
intel_finish_page_flip_plane(dev, 1);
}
- if (de_iir & DE_PIPEA_VBLANK)
- drm_handle_vblank(dev, 0);
-
- if (de_iir & DE_PIPEB_VBLANK)
- drm_handle_vblank(dev, 1);
-
/* check event from PCH */
if (de_iir & DE_PCH_EVENT) {
if (pch_iir & hotplug_mask)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vaibhav Bedia <[email protected]>
commit c4c8eeb4df00aabb641553d6fbcd46f458e56cd9 upstream.
In some cases mmc_suspend_host() is not able to claim the
host and proceed with the suspend process. The core returns
-EBUSY to the host controller driver. Unfortunately, the
host controller driver does not pass on this information
to the PM core and hence the system suspend process continues.
    ret = mmc_suspend_host(host->mmc);
    if (ret) {
        host->suspended = 0;
        if (host->pdata->resume) {
            ret = host->pdata->resume(dev, host->slot_id);
The return status from mmc_suspend_host() is overwritten by the return
status from host->pdata->resume(), so the original return status is lost.
In these cases the MMC core gets to an unexpected state
during resume and multiple issues related to MMC crop up.
1. Host controller driver starts accessing the device registers
before the clocks are enabled which leads to a prefetch abort.
2. A file copy thread which was launched before suspend gets
stuck due to the host not being reclaimed during resume.
To avoid such problems pass on the -EBUSY status to the PM core
from the host controller driver. With this change, MMC core
suspend might still fail but it does not end up making the
system unusable. Suspend gets aborted and the user can try
suspending the system again.
Signed-off-by: Vaibhav Bedia <[email protected]>
Signed-off-by: Hebbar, Gururaja <[email protected]>
Acked-by: Venkatraman S <[email protected]>
Signed-off-by: Chris Ball <[email protected]>
[bwh: Backported to 3.2:
- Adjust context, indentation
- s/dev/\&pdev->dev/]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mmc/host/omap_hsmmc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -2188,9 +2188,7 @@ static int omap_hsmmc_suspend(struct dev
} else {
host->suspended = 0;
if (host->pdata->resume) {
- ret = host->pdata->resume(&pdev->dev,
- host->slot_id);
- if (ret)
+ if (host->pdata->resume(&pdev->dev, host->slot_id))
dev_dbg(mmc_dev(host->mmc),
"Unmask interrupt failed\n");
}
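The essence of the fix is to keep the original failure code intact instead of
letting the rollback path overwrite it. A small userspace sketch of that
pattern, with hypothetical suspend_device()/resume_device() stand-ins rather
than the real MMC calls:

    #include <stdio.h>
    #include <errno.h>

    /* hypothetical stand-ins for mmc_suspend_host() and pdata->resume() */
    static int suspend_device(void) { return -EBUSY; }
    static int resume_device(void)  { return 0; }

    static int driver_suspend(void)
    {
        int ret = suspend_device();

        if (ret) {
            /* roll back, but do NOT overwrite ret with the cleanup status */
            if (resume_device())
                fprintf(stderr, "resume during rollback failed\n");
            return ret;     /* -EBUSY reaches the PM core and aborts suspend */
        }
        return 0;
    }

    int main(void)
    {
        printf("driver_suspend() = %d\n", driver_suspend());  /* prints -16 */
        return 0;
    }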
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bernhard Walle <[email protected]>
commit 875de98623fa2b29f0cb19915fe3292ab6daa1cb upstream.
"echo -e" is a GNU extension. When cross-compiling the kernel on a
BSD-like operating system (Mac OS X in my case), this doesn't work.
One could install a GNU version of echo, put that in the $PATH before
the system echo and use "/usr/bin/env echo", but the solution with
printf is simpler.
Since there is no disadvantage on Linux, I hope this gets accepted even if
cross-compiling the Linux kernel on another Unix operating system is
quite a rare use case.
Signed-off-by: Bernhard Walle <[email protected]>
Andreas Bießmann <[email protected]>
Signed-off-by: Michal Marek <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
scripts/Kbuild.include | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
index d897278..6a3ee98 100644
--- a/scripts/Kbuild.include
+++ b/scripts/Kbuild.include
@@ -104,7 +104,7 @@ as-option = $(call try-run,\
# Usage: cflags-y += $(call as-instr,instr,option1,option2)
as-instr = $(call try-run,\
- /bin/echo -e "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" -,$(2),$(3))
+ printf "%b\n" "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" -,$(2),$(3))
# cc-option
# Usage: cflags-y += $(call cc-option,-march=winchip-c6,-march=i586)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shawn Guo <[email protected]>
commit f96972f2dc6365421cf2366ebd61ee4cf060c8d5 upstream.
As kernel_power_off() calls disable_nonboot_cpus(), we may also want to
have kernel_restart() call disable_nonboot_cpus(). Doing so can help
machines that require boot cpu be the last alive cpu during reboot to
survive with kernel restart.
This fixes one reboot issue seen on imx6q (Cortex-A9 Quad). The machine
requires that the restart routine be run on the primary cpu rather than
secondary ones. Otherwise, the secondary core running the restart
routine will fail to come online after reboot.
Signed-off-by: Shawn Guo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
kernel/sys.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sys.c b/kernel/sys.c
index f949228..fdad206 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -368,6 +368,7 @@ EXPORT_SYMBOL(unregister_reboot_notifier);
void kernel_restart(char *cmd)
{
kernel_restart_prepare(cmd);
+ disable_nonboot_cpus();
if (!cmd)
printk(KERN_EMERG "Restarting system.\n");
else
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Davidlohr Bueso <[email protected]>
commit e96875677fb2b7cb739c5d7769824dff7260d31d upstream.
Account for all properties when a and/or b are 0:
gcd(0, 0) = 0
gcd(a, 0) = a
gcd(0, b) = b
Fixes no known problems in current kernels.
Signed-off-by: Davidlohr Bueso <[email protected]>
Cc: Eric Dumazet <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
lib/gcd.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lib/gcd.c b/lib/gcd.c
index cce4f3c..3657f12 100644
--- a/lib/gcd.c
+++ b/lib/gcd.c
@@ -9,6 +9,9 @@ unsigned long gcd(unsigned long a, unsigned long b)
if (a < b)
swap(a, b);
+
+ if (!b)
+ return a;
while ((r = a % b) != 0) {
a = b;
b = r;
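The three properties listed above can be exercised directly: with the added
check, the zero cases return early instead of reaching the a % b division with
b == 0. A standalone C sketch mirroring the fixed helper (userspace copy for
illustration, not lib/gcd.c itself):

    #include <stdio.h>

    static unsigned long gcd(unsigned long a, unsigned long b)
    {
        unsigned long r;

        if (a < b) {            /* keep a >= b, as the kernel helper does */
            unsigned long t = a;
            a = b;
            b = t;
        }
        if (!b)                 /* added check: gcd(a, 0) = a, gcd(0, 0) = 0 */
            return a;
        while ((r = a % b) != 0) {
            a = b;
            b = r;
        }
        return b;
    }

    int main(void)
    {
        printf("%lu %lu %lu %lu\n", gcd(0, 0), gcd(12, 0), gcd(0, 18), gcd(12, 18));
        /* expected: 0 12 18 6; without the check, the zero cases divide by zero */
        return 0;
    }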
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Omair Mohammed Abdullah <[email protected]>
commit d4f1e48bd11e3df6a26811f7a1f06c4225d92f7d upstream.
When the loopback timer handler is running, calling del_timer() (for STOP
trigger) will not wait for the handler to complete before deactivating the
timer. The timer gets rescheduled in the handler as usual. Then a subsequent
START trigger will try to start the timer using add_timer() with a timer pending,
leading to a kernel panic.
Serialize the calls to add_timer() and del_timer() using a spin lock to avoid
this.
Signed-off-by: Omair Mohammed Abdullah <[email protected]>
Signed-off-by: Vinod Koul <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
sound/drivers/aloop.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/sound/drivers/aloop.c
+++ b/sound/drivers/aloop.c
@@ -119,6 +119,7 @@ struct loopback_pcm {
unsigned int period_size_frac;
unsigned long last_jiffies;
struct timer_list timer;
+ spinlock_t timer_lock;
};
static struct platform_device *devices[SNDRV_CARDS];
@@ -169,6 +170,7 @@ static void loopback_timer_start(struct
unsigned long tick;
unsigned int rate_shift = get_rate_shift(dpcm);
+ spin_lock(&dpcm->timer_lock);
if (rate_shift != dpcm->pcm_rate_shift) {
dpcm->pcm_rate_shift = rate_shift;
dpcm->period_size_frac = frac_pos(dpcm, dpcm->pcm_period_size);
@@ -181,12 +183,15 @@ static void loopback_timer_start(struct
tick = (tick + dpcm->pcm_bps - 1) / dpcm->pcm_bps;
dpcm->timer.expires = jiffies + tick;
add_timer(&dpcm->timer);
+ spin_unlock(&dpcm->timer_lock);
}
static inline void loopback_timer_stop(struct loopback_pcm *dpcm)
{
+ spin_lock(&dpcm->timer_lock);
del_timer(&dpcm->timer);
dpcm->timer.expires = 0;
+ spin_unlock(&dpcm->timer_lock);
}
#define CABLE_VALID_PLAYBACK (1 << SNDRV_PCM_STREAM_PLAYBACK)
@@ -659,6 +664,7 @@ static int loopback_open(struct snd_pcm_
dpcm->substream = substream;
setup_timer(&dpcm->timer, loopback_timer_function,
(unsigned long)dpcm);
+ spin_lock_init(&dpcm->timer_lock);
cable = loopback->cables[substream->number][dev];
if (!cable) {
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michal Hocko <[email protected]>
commit 36e4f20af833d1ce196e6a4ade05dc26c44652d1 upstream.
Commit 0c176d52b0b2 ("mm: hugetlb: fix pgoff computation when unmapping
page from vma") fixed pgoff calculation but it has replaced it by
vma_hugecache_offset() which is not appropriate for offsets used for
vma_prio_tree_foreach() because that one expects index in page units
rather than in huge_page_shift.
Johannes said:
: The resulting index may not be too big, but it can be too small: assume
: hpage size of 2M and the address to unmap to be 0x200000. This is regular
: page index 512 and hpage index 1. If you have a VMA that maps the file
: only starting at the second huge page, that VMAs vm_pgoff will be 512 but
: you ask for offset 1 and miss it even though it does map the page of
: interest. hugetlb_cow() will try to unmap, miss the vma, and retry the
: cow until the allocation succeeds or the skipped vma(s) go away.
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Hillf Danton <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: David Rientjes <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
mm/hugetlb.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8536741..de5d1dc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2480,7 +2480,8 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
* from page cache lookup which is in HPAGE_SIZE units.
*/
address = address & huge_page_mask(h);
- pgoff = vma_hugecache_offset(h, vma, address);
+ pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) +
+ vma->vm_pgoff;
mapping = vma->vm_file->f_dentry->d_inode->i_mapping;
/*
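Johannes' numbers are easy to reproduce: the prio-tree offsets for hugetlb
files are kept in base-page (PAGE_SIZE) units, so for a VMA whose vm_pgoff is
512 the lookup must use index 512, while a huge-page-unit computation yields 1
and misses the VMA. A standalone C illustration of both computations
(illustrative shift constants, not the kernel macros):

    #include <stdio.h>

    #define PAGE_SHIFT   12             /* 4 KiB base pages */
    #define HPAGE_SHIFT  21             /* 2 MiB huge pages */

    int main(void)
    {
        unsigned long address  = 0x200000;  /* address being unmapped */
        unsigned long vm_start = 0x200000;  /* VMA starts at the 2nd huge page */
        unsigned long vm_pgoff = 512;       /* file offset of vm_start, in base pages */

        /* correct: index in base-page units, as vma_prio_tree_foreach() expects */
        unsigned long pgoff = ((address - vm_start) >> PAGE_SHIFT) + vm_pgoff;

        /* roughly what vma_hugecache_offset() computes: huge-page units */
        unsigned long hindex = ((address - vm_start) >> HPAGE_SHIFT) +
                               (vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT));

        printf("pgoff=%lu hindex=%lu\n", pgoff, hindex);    /* 512 vs 1 */
        return 0;
    }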
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hugh Dickins <[email protected]>
commit ec4d9f626d5908b6052c2973f37992f1db52e967 upstream.
In fuzzing with trinity, lockdep protested "possible irq lock inversion
dependency detected" when isolate_lru_page() reenabled interrupts while
still holding the supposedly irq-safe tree_lock:
invalidate_inode_pages2
  invalidate_complete_page2
    spin_lock_irq(&mapping->tree_lock)
    clear_page_mlock
      isolate_lru_page
        spin_unlock_irq(&zone->lru_lock)
isolate_lru_page() is correct to enable interrupts unconditionally:
invalidate_complete_page2() is incorrect to call clear_page_mlock() while
holding tree_lock, which is supposed to nest inside lru_lock.
Both truncate_complete_page() and invalidate_complete_page() call
clear_page_mlock() before taking tree_lock to remove page from radix_tree.
I guess invalidate_complete_page2() preferred to test PageDirty (again)
under tree_lock before committing to the munlock; but since the page has
already been unmapped, its state is already somewhat inconsistent, and no
worse if clear_page_mlock() moved up.
Reported-by: Sasha Levin <[email protected]>
Deciphered-by: Andrew Morton <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Ying Han <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
mm/truncate.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index 75801ac..f38055c 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -394,11 +394,12 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
return 0;
+ clear_page_mlock(page);
+
spin_lock_irq(&mapping->tree_lock);
if (PageDirty(page))
goto failed;
- clear_page_mlock(page);
BUG_ON(page_has_private(page));
__delete_from_page_cache(page);
spin_unlock_irq(&mapping->tree_lock);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hillf Danton <[email protected]>
commit 0c176d52b0b2619f231b2bbf329b90c028134f58 upstream.
The computation for pgoff is incorrect, at least with
(vma->vm_pgoff >> PAGE_SHIFT)
involved. It is fixed with the available method, since page cache lookup
here works in HPAGE_SIZE units.
[[email protected]: use vma_hugecache_offset() directly, per Michal]
Signed-off-by: Hillf Danton <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: David Rientjes <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
mm/hugetlb.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2429,8 +2429,7 @@ static int unmap_ref_private(struct mm_s
* from page cache lookup which is in HPAGE_SIZE units.
*/
address = address & huge_page_mask(h);
- pgoff = ((address - vma->vm_start) >> PAGE_SHIFT)
- + (vma->vm_pgoff >> PAGE_SHIFT);
+ pgoff = vma_hugecache_offset(h, vma, address);
mapping = vma->vm_file->f_dentry->d_inode->i_mapping;
/*
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stefan Richter <[email protected]>
commit 790198f74c9d1b46b6a89504361b1a844670d050 upstream.
Fix two bugs of the /dev/fw* character device concerning the
FW_CDEV_IOC_GET_INFO ioctl with nonzero fw_cdev_get_info.bus_reset.
(Practically all /dev/fw* clients issue this ioctl right after opening
the device.)
Both bugs are caused by sizeof(struct fw_cdev_event_bus_reset) being 36
without natural alignment and 40 with natural alignment.
1) Memory corruption, affecting i386 userland on amd64 kernel:
Userland reserves a 36 bytes large buffer, kernel writes 40 bytes.
This has been first found and reported against libraw1394 if
compiled with gcc 4.7 which happens to order libraw1394's stack such
that the bug became visible as data corruption.
2) Information leak, affecting all kernel architectures except i386:
4 bytes of random kernel stack data were leaked to userspace.
Hence limit the respective copy_to_user() to the 32-bit aligned size of
struct fw_cdev_event_bus_reset.
Reported-by: Simon Kirby <[email protected]>
Signed-off-by: Stefan Richter <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/firewire/core-cdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
index 2783f69..f8d2287 100644
--- a/drivers/firewire/core-cdev.c
+++ b/drivers/firewire/core-cdev.c
@@ -473,8 +473,8 @@ static int ioctl_get_info(struct client *client, union ioctl_arg *arg)
client->bus_reset_closure = a->bus_reset_closure;
if (a->bus_reset != 0) {
fill_bus_reset_event(&bus_reset, client);
- ret = copy_to_user(u64_to_uptr(a->bus_reset),
- &bus_reset, sizeof(bus_reset));
+ /* unaligned size of bus_reset is 36 bytes */
+ ret = copy_to_user(u64_to_uptr(a->bus_reset), &bus_reset, 36);
}
if (ret == 0 && list_empty(&client->link))
list_add_tail(&client->link, &client->device->client_list);
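The 36-vs-40 byte difference is ordinary struct padding: the event is a 64-bit
closure followed by seven 32-bit fields (36 bytes of payload), so on ABIs where
u64 requires 8-byte alignment the struct is padded out to 40 bytes, while
i386's 4-byte u64 alignment leaves it at 36. A small C illustration using a
struct of the same shape (not the exact UAPI definition):

    #include <stdio.h>
    #include <stdint.h>

    /* same shape as the bus-reset event: one u64 followed by seven u32s */
    struct bus_reset_like {
        uint64_t closure;
        uint32_t fields[7];
    };

    int main(void)
    {
        /* 36 bytes of payload; sizeof is 40 on x86-64 but 36 on i386 */
        printf("payload=%zu sizeof=%zu\n",
               sizeof(uint64_t) + 7 * sizeof(uint32_t),
               sizeof(struct bus_reset_like));
        return 0;
    }

Copying sizeof() bytes therefore overruns a 36-byte user buffer on a 64-bit
kernel and leaks 4 bytes of stack elsewhere, which is why the fix copies the
32-bit-aligned size explicitly.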
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Henningsson <[email protected]>
commit 18dcd3044e4c4b3ab6341c98e8d0e81e0d58d5e3 upstream.
The internal mic input is phase inverted on one channel.
To avoid people in userspace summing the channels together
and getting a zero result, use a separate mixer control for the
inverted channel.
BugLink: https://bugs.launchpad.net/bugs/903853
Signed-off-by: David Henningsson <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
[bwh: Backported to 3.2:
- Adjust context
- Change both invocations of apply_pin_fixup()]
Signed-off-by: Ben Hutchings <[email protected]>
---
sound/pci/hda/patch_conexant.c | 88 ++++++++++++++++++++++++++++++++++------
1 file changed, 75 insertions(+), 13 deletions(-)
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -139,6 +139,7 @@ struct conexant_spec {
unsigned int asus:1;
unsigned int pin_eapd_ctrls:1;
unsigned int single_adc_amp:1;
+ unsigned int fixup_stereo_dmic:1;
unsigned int adc_switching:1;
@@ -4113,9 +4114,9 @@ static int cx_auto_init(struct hda_codec
static int cx_auto_add_volume_idx(struct hda_codec *codec, const char *basename,
const char *dir, int cidx,
- hda_nid_t nid, int hda_dir, int amp_idx)
+ hda_nid_t nid, int hda_dir, int amp_idx, int chs)
{
- static char name[32];
+ static char name[44];
static struct snd_kcontrol_new knew[] = {
HDA_CODEC_VOLUME(name, 0, 0, 0),
HDA_CODEC_MUTE(name, 0, 0, 0),
@@ -4125,7 +4126,7 @@ static int cx_auto_add_volume_idx(struct
for (i = 0; i < 2; i++) {
struct snd_kcontrol *kctl;
- knew[i].private_value = HDA_COMPOSE_AMP_VAL(nid, 3, amp_idx,
+ knew[i].private_value = HDA_COMPOSE_AMP_VAL(nid, chs, amp_idx,
hda_dir);
knew[i].subdevice = HDA_SUBDEV_AMP_FLAG;
knew[i].index = cidx;
@@ -4144,7 +4145,7 @@ static int cx_auto_add_volume_idx(struct
}
#define cx_auto_add_volume(codec, str, dir, cidx, nid, hda_dir) \
- cx_auto_add_volume_idx(codec, str, dir, cidx, nid, hda_dir, 0)
+ cx_auto_add_volume_idx(codec, str, dir, cidx, nid, hda_dir, 0, 3)
#define cx_auto_add_pb_volume(codec, nid, str, idx) \
cx_auto_add_volume(codec, str, " Playback", idx, nid, HDA_OUTPUT)
@@ -4214,6 +4215,36 @@ static int cx_auto_build_output_controls
return 0;
}
+/* Returns zero if this is a normal stereo channel, and non-zero if it should
+ be split in two independent channels.
+ dest_label must be at least 44 characters. */
+static int cx_auto_get_rightch_label(struct hda_codec *codec, const char *label,
+ char *dest_label, int nid)
+{
+ struct conexant_spec *spec = codec->spec;
+ int i;
+
+ if (!spec->fixup_stereo_dmic)
+ return 0;
+
+ for (i = 0; i < AUTO_CFG_MAX_INS; i++) {
+ int def_conf;
+ if (spec->autocfg.inputs[i].pin != nid)
+ continue;
+
+ if (spec->autocfg.inputs[i].type != AUTO_PIN_MIC)
+ return 0;
+ def_conf = snd_hda_codec_get_pincfg(codec, nid);
+ if (snd_hda_get_input_pin_attr(def_conf) != INPUT_PIN_ATTR_INT)
+ return 0;
+
+ /* Finally found the inverted internal mic! */
+ snprintf(dest_label, 44, "Inverted %s", label);
+ return 1;
+ }
+ return 0;
+}
+
static int cx_auto_add_capture_volume(struct hda_codec *codec, hda_nid_t nid,
const char *label, const char *pfx,
int cidx)
@@ -4222,14 +4253,25 @@ static int cx_auto_add_capture_volume(st
int i;
for (i = 0; i < spec->num_adc_nids; i++) {
+ char rightch_label[44];
hda_nid_t adc_nid = spec->adc_nids[i];
int idx = get_input_connection(codec, adc_nid, nid);
if (idx < 0)
continue;
if (spec->single_adc_amp)
idx = 0;
+
+ if (cx_auto_get_rightch_label(codec, label, rightch_label, nid)) {
+ /* Make two independent kcontrols for left and right */
+ int err = cx_auto_add_volume_idx(codec, label, pfx,
+ cidx, adc_nid, HDA_INPUT, idx, 1);
+ if (err < 0)
+ return err;
+ return cx_auto_add_volume_idx(codec, rightch_label, pfx,
+ cidx, adc_nid, HDA_INPUT, idx, 2);
+ }
return cx_auto_add_volume_idx(codec, label, pfx,
- cidx, adc_nid, HDA_INPUT, idx);
+ cidx, adc_nid, HDA_INPUT, idx, 3);
}
return 0;
}
@@ -4242,9 +4284,19 @@ static int cx_auto_add_boost_volume(stru
int i, con;
nid = spec->imux_info[idx].pin;
- if (get_wcaps(codec, nid) & AC_WCAP_IN_AMP)
+ if (get_wcaps(codec, nid) & AC_WCAP_IN_AMP) {
+ char rightch_label[44];
+ if (cx_auto_get_rightch_label(codec, label, rightch_label, nid)) {
+ int err = cx_auto_add_volume_idx(codec, label, " Boost",
+ cidx, nid, HDA_INPUT, 0, 1);
+ if (err < 0)
+ return err;
+ return cx_auto_add_volume_idx(codec, rightch_label, " Boost",
+ cidx, nid, HDA_INPUT, 0, 2);
+ }
return cx_auto_add_volume(codec, label, " Boost", cidx,
nid, HDA_INPUT);
+ }
con = __select_input_connection(codec, spec->imux_info[idx].adc, nid,
&mux, false, 0);
if (con < 0)
@@ -4398,23 +4450,31 @@ static void apply_pincfg(struct hda_code
}
-static void apply_pin_fixup(struct hda_codec *codec,
+enum {
+ CXT_PINCFG_LENOVO_X200,
+ CXT_PINCFG_LENOVO_TP410,
+ CXT_FIXUP_STEREO_DMIC,
+};
+
+static void apply_fixup(struct hda_codec *codec,
const struct snd_pci_quirk *quirk,
const struct cxt_pincfg **table)
{
+ struct conexant_spec *spec = codec->spec;
+
quirk = snd_pci_quirk_lookup(codec->bus->pci, quirk);
- if (quirk) {
+ if (quirk && table[quirk->value]) {
snd_printdd(KERN_INFO "hda_codec: applying pincfg for %s\n",
quirk->name);
apply_pincfg(codec, table[quirk->value]);
}
+ if (quirk->value == CXT_FIXUP_STEREO_DMIC) {
+ snd_printdd(KERN_INFO "hda_codec: applying internal mic workaround for %s\n",
+ quirk->name);
+ spec->fixup_stereo_dmic = 1;
+ }
}
-enum {
- CXT_PINCFG_LENOVO_X200,
- CXT_PINCFG_LENOVO_TP410,
-};
-
/* ThinkPad X200 & co with cxt5051 */
static const struct cxt_pincfg cxt_pincfg_lenovo_x200[] = {
{ 0x16, 0x042140ff }, /* HP (seq# overridden) */
@@ -4434,6 +4494,7 @@ static const struct cxt_pincfg cxt_pincf
static const struct cxt_pincfg *cxt_pincfg_tbl[] = {
[CXT_PINCFG_LENOVO_X200] = cxt_pincfg_lenovo_x200,
[CXT_PINCFG_LENOVO_TP410] = cxt_pincfg_lenovo_tp410,
+ [CXT_FIXUP_STEREO_DMIC] = NULL,
};
static const struct snd_pci_quirk cxt5051_fixups[] = {
@@ -4447,6 +4508,7 @@ static const struct snd_pci_quirk cxt506
SND_PCI_QUIRK(0x17aa, 0x215f, "Lenovo T510", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21ce, "Lenovo T420", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
{}
};
@@ -4486,10 +4548,10 @@ static int patch_conexant_auto(struct hd
break;
case 0x14f15051:
add_cx5051_fake_mutes(codec);
- apply_pin_fixup(codec, cxt5051_fixups, cxt_pincfg_tbl);
+ apply_fixup(codec, cxt5051_fixups, cxt_pincfg_tbl);
break;
default:
- apply_pin_fixup(codec, cxt5066_fixups, cxt_pincfg_tbl);
+ apply_fixup(codec, cxt5066_fixups, cxt_pincfg_tbl);
break;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jaehoon Chung <[email protected]>
commit 5feb54a1ab91a237e247c013b8c4fb100ea347b1 upstream.
We can use up to four bus clocks, but on module remove, we didn't
disable the fourth bus clock.
Signed-off-by: Jaehoon Chung <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
Signed-off-by: Chris Ball <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mmc/host/sdhci-s3c.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/mmc/host/sdhci-s3c.c
+++ b/drivers/mmc/host/sdhci-s3c.c
@@ -601,7 +601,7 @@ static int __devexit sdhci_s3c_remove(st
sdhci_remove_host(host, 1);
- for (ptr = 0; ptr < 3; ptr++) {
+ for (ptr = 0; ptr < MAX_BUS_CLK; ptr++) {
if (sc->clk_bus[ptr]) {
clk_disable(sc->clk_bus[ptr]);
clk_put(sc->clk_bus[ptr]);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Kaechele <[email protected]>
commit e4db0952e542090c605fd41d31d761f1b4624f4a upstream.
The Lenovo IdeaPad U310 has an internal mic where the right channel
is phase inverted.
Signed-off-by: Felix Kaechele <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
sound/pci/hda/patch_conexant.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index 902ea0b..47938c7 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -4459,6 +4459,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK(0x17aa, 0x21ce, "Lenovo T420", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
{}
};
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nikola Pajkovsky <[email protected]>
commit 68766a2edcd5cd744262a70a2f67a320ac944760 upstream.
In case we detect a problem and bail out, we fail to set "ret" to a
nonzero value, and udf_load_logicalvol will mistakenly report success.
Signed-off-by: Nikola Pajkovsky <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/udf/super.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/udf/super.c b/fs/udf/super.c
index 9f55f79..18fc038 100644
--- a/fs/udf/super.c
+++ b/fs/udf/super.c
@@ -1344,6 +1344,7 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
udf_err(sb, "error loading logical volume descriptor: "
"Partition table too long (%u > %lu)\n", table_len,
sb->s_blocksize - sizeof(*lvd));
+ ret = 1;
goto out_bh;
}
@@ -1388,8 +1389,10 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
UDF_ID_SPARABLE,
strlen(UDF_ID_SPARABLE))) {
if (udf_load_sparable_map(sb, map,
- (struct sparablePartitionMap *)gpm) < 0)
+ (struct sparablePartitionMap *)gpm) < 0) {
+ ret = 1;
goto out_bh;
+ }
} else if (!strncmp(upm2->partIdent.ident,
UDF_ID_METADATA,
strlen(UDF_ID_METADATA))) {
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Henningsson <[email protected]>
commit b3c5dce81584391af8b6dedb0647e65c17aab3a2 upstream.
The Lenovo Ideapad S205 has an internal mic where the right channel
is phase inverted.
BugLink: https://bugs.launchpad.net/bugs/884652
Signed-off-by: David Henningsson <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
sound/pci/hda/patch_conexant.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index 172370b..2af0868 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -4466,6 +4466,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK(0x17aa, 0x21ce, "Lenovo T420", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
{}
};
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrea Arcangeli <[email protected]>
commit 027ef6c87853b0a9df53175063028edb4950d476 upstream.
In many places !pmd_present has been converted to pmd_none. For pmds
that's equivalent and pmd_none is quicker so using pmd_none is better.
However (unless we delete pmd_present) we should provide an accurate
pmd_present too. This will avoid the risk of code thinking the pmd is non
present because it's under __split_huge_page_map, see the pmd_mknotpresent
there and the comment above it.
If the page has been mprotected as PROT_NONE, it would also lead to a
pmd_present false negative in the same way as the race with
split_huge_page.
Because the PSE bit stays on at all times (both during split_huge_page and
when the _PAGE_PROTNONE bit gets set), we could check only for the PSE bit,
but checking the PROTNONE bit too is still good to remember that pmd_present
must always take PROT_NONE into account.
This explains a non-reproducible BUG_ON that was seldom reported on the
lists.
The same issue is in pmd_large, it would go wrong with both PROT_NONE and
if it races with split_huge_page.
Signed-off-by: Andrea Arcangeli <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
arch/x86/include/asm/pgtable.h | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index fc99484..a1f780d 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -146,8 +146,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
static inline int pmd_large(pmd_t pte)
{
- return (pmd_flags(pte) & (_PAGE_PSE | _PAGE_PRESENT)) ==
- (_PAGE_PSE | _PAGE_PRESENT);
+ return pmd_flags(pte) & _PAGE_PSE;
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -415,7 +414,13 @@ static inline int pte_hidden(pte_t pte)
static inline int pmd_present(pmd_t pmd)
{
- return pmd_flags(pmd) & _PAGE_PRESENT;
+ /*
+ * Checking for _PAGE_PSE is needed too because
+ * split_huge_page will temporarily clear the present bit (but
+ * the _PAGE_PSE flag will remain set at all times while the
+ * _PAGE_PRESENT bit is clear).
+ */
+ return pmd_flags(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PSE);
}
static inline int pmd_none(pmd_t pmd)
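The failure mode is easy to see with the flags spelled out: while a huge pmd is
being split (or marked PROT_NONE), _PAGE_PRESENT is clear but _PAGE_PSE and/or
_PAGE_PROTNONE remain set, so a check of the present bit alone reports the pmd
as not present. A standalone sketch with illustrative bit positions (not the
kernel's pgtable definitions):

    #include <stdio.h>

    /* illustrative bit positions only */
    #define _PAGE_PRESENT   0x001
    #define _PAGE_PSE       0x080
    #define _PAGE_PROTNONE  0x100

    static int pmd_present_old(unsigned long flags)
    {
        return !!(flags & _PAGE_PRESENT);
    }

    static int pmd_present_new(unsigned long flags)
    {
        /* PSE (and PROTNONE) survive while the present bit is transiently clear */
        return !!(flags & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PSE));
    }

    int main(void)
    {
        unsigned long splitting = _PAGE_PSE;                  /* mid split_huge_page */
        unsigned long protnone  = _PAGE_PROTNONE | _PAGE_PSE; /* PROT_NONE huge pmd */

        printf("splitting: old=%d new=%d\n",
               pmd_present_old(splitting), pmd_present_new(splitting)); /* 0 vs 1 */
        printf("protnone:  old=%d new=%d\n",
               pmd_present_old(protnone), pmd_present_new(protnone));   /* 0 vs 1 */
        return 0;
    }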
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 821f7494a77627fb1ab539591c57b22cdca702d6 upstream.
A change was made about a year ago to get eCryptfs to better utilize its
page cache during writes. The idea was to do the page encryption
operations during page writeback, rather than doing them when initially
writing into the page cache, to reduce the number of page encryption
operations during sequential writes. This meant that the encrypted page
would only be written to the lower filesystem during page writeback,
which was a change from how eCryptfs had previously written to the lower
filesystem in ecryptfs_write_end().
The change caused a few eCryptfs-internal bugs that were shaken out.
Unfortunately, more grave side effects have been identified that will
force changes outside of eCryptfs. Because the lower filesystem isn't
consulted until page writeback, eCryptfs has no way to pass lower write
errors (ENOSPC, mainly) back to userspace. Additionally, it was reported
that quotas could be bypassed because of the way eCryptfs may sometimes
open the lower filesystem using a privileged kthread.
It would be nice to resolve the latest issues, but it is best if the
eCryptfs commits be reverted to the old behavior in the meantime.
This reverts:
32001d6f "eCryptfs: Flush file in vma close"
5be79de2 "eCryptfs: Flush dirty pages in setattr"
57db4e8d "ecryptfs: modify write path to encrypt page in writepage"
Signed-off-by: Tyler Hicks <[email protected]>
Tested-by: Colin King <[email protected]>
Cc: Colin King <[email protected]>
Cc: Thieu Le <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ecryptfs/file.c | 33 ++-------------------------------
fs/ecryptfs/inode.c | 6 ------
fs/ecryptfs/mmap.c | 39 +++++++++++++--------------------------
3 files changed, 15 insertions(+), 63 deletions(-)
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index baf8b05..44ce5c6 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -138,27 +138,6 @@ out:
return rc;
}
-static void ecryptfs_vma_close(struct vm_area_struct *vma)
-{
- filemap_write_and_wait(vma->vm_file->f_mapping);
-}
-
-static const struct vm_operations_struct ecryptfs_file_vm_ops = {
- .close = ecryptfs_vma_close,
- .fault = filemap_fault,
-};
-
-static int ecryptfs_file_mmap(struct file *file, struct vm_area_struct *vma)
-{
- int rc;
-
- rc = generic_file_mmap(file, vma);
- if (!rc)
- vma->vm_ops = &ecryptfs_file_vm_ops;
-
- return rc;
-}
-
struct kmem_cache *ecryptfs_file_info_cache;
static int read_or_initialize_metadata(struct dentry *dentry)
@@ -311,15 +290,7 @@ static int ecryptfs_release(struct inode *inode, struct file *file)
static int
ecryptfs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
{
- int rc = 0;
-
- rc = generic_file_fsync(file, start, end, datasync);
- if (rc)
- goto out;
- rc = vfs_fsync_range(ecryptfs_file_to_lower(file), start, end,
- datasync);
-out:
- return rc;
+ return vfs_fsync(ecryptfs_file_to_lower(file), datasync);
}
static int ecryptfs_fasync(int fd, struct file *file, int flag)
@@ -388,7 +359,7 @@ const struct file_operations ecryptfs_main_fops = {
#ifdef CONFIG_COMPAT
.compat_ioctl = ecryptfs_compat_ioctl,
#endif
- .mmap = ecryptfs_file_mmap,
+ .mmap = generic_file_mmap,
.open = ecryptfs_open,
.flush = ecryptfs_flush,
.release = ecryptfs_release,
diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
index 2d4143f..769fb85 100644
--- a/fs/ecryptfs/inode.c
+++ b/fs/ecryptfs/inode.c
@@ -981,12 +981,6 @@ static int ecryptfs_setattr(struct dentry *dentry, struct iattr *ia)
goto out;
}
- if (S_ISREG(inode->i_mode)) {
- rc = filemap_write_and_wait(inode->i_mapping);
- if (rc)
- goto out;
- fsstack_copy_attr_all(inode, lower_inode);
- }
memcpy(&lower_ia, ia, sizeof(lower_ia));
if (ia->ia_valid & ATTR_FILE)
lower_ia.ia_file = ecryptfs_file_to_lower(ia->ia_file);
diff --git a/fs/ecryptfs/mmap.c b/fs/ecryptfs/mmap.c
index a46b3a8..bd1d57f 100644
--- a/fs/ecryptfs/mmap.c
+++ b/fs/ecryptfs/mmap.c
@@ -66,18 +66,6 @@ static int ecryptfs_writepage(struct page *page, struct writeback_control *wbc)
{
int rc;
- /*
- * Refuse to write the page out if we are called from reclaim context
- * since our writepage() path may potentially allocate memory when
- * calling into the lower fs vfs_write() which may in turn invoke
- * us again.
- */
- if (current->flags & PF_MEMALLOC) {
- redirty_page_for_writepage(wbc, page);
- rc = 0;
- goto out;
- }
-
rc = ecryptfs_encrypt_page(page);
if (rc) {
ecryptfs_printk(KERN_WARNING, "Error encrypting "
@@ -498,7 +486,6 @@ static int ecryptfs_write_end(struct file *file,
struct ecryptfs_crypt_stat *crypt_stat =
&ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat;
int rc;
- int need_unlock_page = 1;
ecryptfs_printk(KERN_DEBUG, "Calling fill_zeros_to_end_of_page"
"(page w/ index = [0x%.16lx], to = [%d])\n", index, to);
@@ -519,26 +506,26 @@ static int ecryptfs_write_end(struct file *file,
"zeros in page with index = [0x%.16lx]\n", index);
goto out;
}
- set_page_dirty(page);
- unlock_page(page);
- need_unlock_page = 0;
+ rc = ecryptfs_encrypt_page(page);
+ if (rc) {
+ ecryptfs_printk(KERN_WARNING, "Error encrypting page (upper "
+ "index [0x%.16lx])\n", index);
+ goto out;
+ }
if (pos + copied > i_size_read(ecryptfs_inode)) {
i_size_write(ecryptfs_inode, pos + copied);
ecryptfs_printk(KERN_DEBUG, "Expanded file size to "
"[0x%.16llx]\n",
(unsigned long long)i_size_read(ecryptfs_inode));
- balance_dirty_pages_ratelimited(mapping);
- rc = ecryptfs_write_inode_size_to_metadata(ecryptfs_inode);
- if (rc) {
- printk(KERN_ERR "Error writing inode size to metadata; "
- "rc = [%d]\n", rc);
- goto out;
- }
}
- rc = copied;
+ rc = ecryptfs_write_inode_size_to_metadata(ecryptfs_inode);
+ if (rc)
+ printk(KERN_ERR "Error writing inode size to metadata; "
+ "rc = [%d]\n", rc);
+ else
+ rc = copied;
out:
- if (need_unlock_page)
- unlock_page(page);
+ unlock_page(page);
page_cache_release(page);
return rc;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 8bc2d3cf612994a960c2e8eaea37f6676f67082a upstream.
ecryptfs_create() creates a lower inode, allocates an eCryptfs inode,
initializes the eCryptfs inode and cryptographic metadata attached to
the inode, and then writes the metadata to the header of the file.
If an error was to occur after the lower inode was created, an empty
lower file would be left in the lower filesystem. This is a problem
because ecryptfs_open() refuses to open any lower files which do not
have the appropriate metadata in the file header.
This patch properly unlinks the lower inode when an error occurs in the
later stages of ecryptfs_create(), reducing the chance that an empty
lower file will be left in the lower filesystem.
https://launchpad.net/bugs/872905
Signed-off-by: Tyler Hicks <[email protected]>
Cc: John Johansen <[email protected]>
Cc: Colin Ian King <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ecryptfs/inode.c | 55 ++++++++++++++++++++++++++++++---------------------
1 file changed, 32 insertions(+), 23 deletions(-)
--- a/fs/ecryptfs/inode.c
+++ b/fs/ecryptfs/inode.c
@@ -161,6 +161,31 @@ ecryptfs_create_underlying_file(struct i
return vfs_create(lower_dir_inode, lower_dentry, mode, NULL);
}
+static int ecryptfs_do_unlink(struct inode *dir, struct dentry *dentry,
+ struct inode *inode)
+{
+ struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
+ struct inode *lower_dir_inode = ecryptfs_inode_to_lower(dir);
+ struct dentry *lower_dir_dentry;
+ int rc;
+
+ dget(lower_dentry);
+ lower_dir_dentry = lock_parent(lower_dentry);
+ rc = vfs_unlink(lower_dir_inode, lower_dentry);
+ if (rc) {
+ printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc);
+ goto out_unlock;
+ }
+ fsstack_copy_attr_times(dir, lower_dir_inode);
+ set_nlink(inode, ecryptfs_inode_to_lower(inode)->i_nlink);
+ inode->i_ctime = dir->i_ctime;
+ d_drop(dentry);
+out_unlock:
+ unlock_dir(lower_dir_dentry);
+ dput(lower_dentry);
+ return rc;
+}
+
/**
* ecryptfs_do_create
* @directory_inode: inode of the new file's dentry's parent in ecryptfs
@@ -201,8 +226,10 @@ ecryptfs_do_create(struct inode *directo
}
inode = __ecryptfs_get_inode(lower_dentry->d_inode,
directory_inode->i_sb);
- if (IS_ERR(inode))
+ if (IS_ERR(inode)) {
+ vfs_unlink(lower_dir_dentry->d_inode, lower_dentry);
goto out_lock;
+ }
fsstack_copy_attr_times(directory_inode, lower_dir_dentry->d_inode);
fsstack_copy_inode_size(directory_inode, lower_dir_dentry->d_inode);
out_lock:
@@ -284,7 +311,9 @@ ecryptfs_create(struct inode *directory_
* that this on disk file is prepared to be an ecryptfs file */
rc = ecryptfs_initialize_file(ecryptfs_dentry, ecryptfs_inode);
if (rc) {
- drop_nlink(ecryptfs_inode);
+ ecryptfs_do_unlink(directory_inode, ecryptfs_dentry,
+ ecryptfs_inode);
+ make_bad_inode(ecryptfs_inode);
unlock_new_inode(ecryptfs_inode);
iput(ecryptfs_inode);
goto out;
@@ -496,27 +525,7 @@ out_lock:
static int ecryptfs_unlink(struct inode *dir, struct dentry *dentry)
{
- int rc = 0;
- struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
- struct inode *lower_dir_inode = ecryptfs_inode_to_lower(dir);
- struct dentry *lower_dir_dentry;
-
- dget(lower_dentry);
- lower_dir_dentry = lock_parent(lower_dentry);
- rc = vfs_unlink(lower_dir_inode, lower_dentry);
- if (rc) {
- printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc);
- goto out_unlock;
- }
- fsstack_copy_attr_times(dir, lower_dir_inode);
- set_nlink(dentry->d_inode,
- ecryptfs_inode_to_lower(dentry->d_inode)->i_nlink);
- dentry->d_inode->i_ctime = dir->i_ctime;
- d_drop(dentry);
-out_unlock:
- unlock_dir(lower_dir_dentry);
- dput(lower_dentry);
- return rc;
+ return ecryptfs_do_unlink(dir, dentry, dentry->d_inode);
}
static int ecryptfs_symlink(struct inode *dir, struct dentry *dentry,
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit e3ccaa9761200952cc269b1f4b7d7bb77a5e071b upstream.
Historically, eCryptfs has only initialized lower files in the
ecryptfs_create() path. Lower file initialization is the act of writing
the cryptographic metadata from the inode's crypt_stat to the header of
the file. The ecryptfs_open() path already expects that metadata to be
in the header of the file.
A number of users have reported empty lower files beneath their
eCryptfs mounts. Most of the causes for those empty files being left
around have been addressed, but the presence of empty files causes
problems due to the lack of proper cryptographic metadata.
To transparently solve this problem, this patch initializes empty lower
files in the ecryptfs_open() error path. If the metadata is unreadable
due to the lower inode size being 0, plaintext passthrough support is
not in use, and the metadata is stored in the header of the file (as
opposed to the user.ecryptfs extended attribute), the lower file will be
initialized.
The number of nested conditionals in ecryptfs_open() was getting out of
hand, so a helper function was created. To avoid the same nested
conditional problem, the conditional logic was reversed inside of the
helper function.
https://launchpad.net/bugs/911507
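Written as a single forward condition rather than the reversed logic used in the new helper, the rule above is roughly (a sketch; the boolean names are hypothetical stand-ins for the checks in read_or_initialize_metadata()):

	/*
	 * Sketch of the rule described above, not the actual helper code:
	 * initialize the lower file only when all of these hold.
	 */
	if (metadata_unreadable && lower_size_is_zero &&
	    !passthrough_enabled && !xattr_metadata_enabled)
		rc = ecryptfs_initialize_file(dentry, inode);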
Signed-off-by: Tyler Hicks <[email protected]>
Cc: John Johansen <[email protected]>
Cc: Colin Ian King <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ecryptfs/ecryptfs_kernel.h | 2 ++
fs/ecryptfs/file.c | 71 ++++++++++++++++++++++++++---------------
fs/ecryptfs/inode.c | 4 +--
3 files changed, 49 insertions(+), 28 deletions(-)
diff --git a/fs/ecryptfs/ecryptfs_kernel.h b/fs/ecryptfs/ecryptfs_kernel.h
index 0deb4f2..9f77ff8 100644
--- a/fs/ecryptfs/ecryptfs_kernel.h
+++ b/fs/ecryptfs/ecryptfs_kernel.h
@@ -563,6 +563,8 @@ struct ecryptfs_open_req {
struct inode *ecryptfs_get_inode(struct inode *lower_inode,
struct super_block *sb);
void ecryptfs_i_size_init(const char *page_virt, struct inode *inode);
+int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
+ struct inode *ecryptfs_inode);
int ecryptfs_decode_and_decrypt_filename(char **decrypted_name,
size_t *decrypted_name_size,
struct dentry *ecryptfs_dentry,
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index 2b17f2f..baf8b05 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -161,6 +161,48 @@ static int ecryptfs_file_mmap(struct file *file, struct vm_area_struct *vma)
struct kmem_cache *ecryptfs_file_info_cache;
+static int read_or_initialize_metadata(struct dentry *dentry)
+{
+ struct inode *inode = dentry->d_inode;
+ struct ecryptfs_mount_crypt_stat *mount_crypt_stat;
+ struct ecryptfs_crypt_stat *crypt_stat;
+ int rc;
+
+ crypt_stat = &ecryptfs_inode_to_private(inode)->crypt_stat;
+ mount_crypt_stat = &ecryptfs_superblock_to_private(
+ inode->i_sb)->mount_crypt_stat;
+ mutex_lock(&crypt_stat->cs_mutex);
+
+ if (crypt_stat->flags & ECRYPTFS_POLICY_APPLIED &&
+ crypt_stat->flags & ECRYPTFS_KEY_VALID) {
+ rc = 0;
+ goto out;
+ }
+
+ rc = ecryptfs_read_metadata(dentry);
+ if (!rc)
+ goto out;
+
+ if (mount_crypt_stat->flags & ECRYPTFS_PLAINTEXT_PASSTHROUGH_ENABLED) {
+ crypt_stat->flags &= ~(ECRYPTFS_I_SIZE_INITIALIZED
+ | ECRYPTFS_ENCRYPTED);
+ rc = 0;
+ goto out;
+ }
+
+ if (!(mount_crypt_stat->flags & ECRYPTFS_XATTR_METADATA_ENABLED) &&
+ !i_size_read(ecryptfs_inode_to_lower(inode))) {
+ rc = ecryptfs_initialize_file(dentry, inode);
+ if (!rc)
+ goto out;
+ }
+
+ rc = -EIO;
+out:
+ mutex_unlock(&crypt_stat->cs_mutex);
+ return rc;
+}
+
/**
* ecryptfs_open
* @inode: inode speciying file to open
@@ -236,32 +278,9 @@ static int ecryptfs_open(struct inode *inode, struct file *file)
rc = 0;
goto out;
}
- mutex_lock(&crypt_stat->cs_mutex);
- if (!(crypt_stat->flags & ECRYPTFS_POLICY_APPLIED)
- || !(crypt_stat->flags & ECRYPTFS_KEY_VALID)) {
- rc = ecryptfs_read_metadata(ecryptfs_dentry);
- if (rc) {
- ecryptfs_printk(KERN_DEBUG,
- "Valid headers not found\n");
- if (!(mount_crypt_stat->flags
- & ECRYPTFS_PLAINTEXT_PASSTHROUGH_ENABLED)) {
- rc = -EIO;
- printk(KERN_WARNING "Either the lower file "
- "is not in a valid eCryptfs format, "
- "or the key could not be retrieved. "
- "Plaintext passthrough mode is not "
- "enabled; returning -EIO\n");
- mutex_unlock(&crypt_stat->cs_mutex);
- goto out_put;
- }
- rc = 0;
- crypt_stat->flags &= ~(ECRYPTFS_I_SIZE_INITIALIZED
- | ECRYPTFS_ENCRYPTED);
- mutex_unlock(&crypt_stat->cs_mutex);
- goto out;
- }
- }
- mutex_unlock(&crypt_stat->cs_mutex);
+ rc = read_or_initialize_metadata(ecryptfs_dentry);
+ if (rc)
+ goto out_put;
ecryptfs_printk(KERN_DEBUG, "inode w/ addr = [0x%p], i_ino = "
"[0x%.16lx] size: [0x%.16llx]\n", inode, inode->i_ino,
(unsigned long long)i_size_read(inode));
diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
index 65efe5f..2d4143f 100644
--- a/fs/ecryptfs/inode.c
+++ b/fs/ecryptfs/inode.c
@@ -227,8 +227,8 @@ out:
*
* Returns zero on success
*/
-static int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
- struct inode *ecryptfs_inode)
+int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
+ struct inode *ecryptfs_inode)
{
struct ecryptfs_crypt_stat *crypt_stat =
&ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman <[email protected]>
commit 00442ad04a5eac08a98255697c510e708f6082e2 upstream.
Commit cc9a6c877661 ("cpuset: mm: reduce large amounts of memory barrier
related damage v3") introduced a potential memory corruption.
shmem_alloc_page() uses a pseudo vma and it has one significant unique
combination, vma->vm_ops=NULL and vma->policy->flags & MPOL_F_SHARED.
get_vma_policy() does NOT increase a policy ref when vma->vm_ops=NULL
and mpol_cond_put() DOES decrease a policy ref when a policy has
MPOL_F_SHARED. Therefore, when a cpuset update race occurs,
alloc_pages_vma() falls into the 'goto retry_cpuset' path, decrements the
reference count and frees the policy prematurely.
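Schematically, the imbalance looks like this (a simplified sketch of the call pattern described above, not the literal code paths):

	/*
	 * Simplified sketch of the refcount imbalance:
	 *
	 *   pol = get_vma_policy(current, vma, addr);
	 *       -> vma->vm_ops == NULL here, so no mpol_get(pol) is taken
	 *
	 *   ...cpuset update race hits, alloc_pages_vma() takes "goto retry_cpuset"...
	 *
	 *   mpol_cond_put(pol);
	 *       -> pol->flags & MPOL_F_SHARED, so mpol_put(pol) drops a
	 *          reference that was never taken and frees the policy early
	 */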
Signed-off-by: KOSAKI Motohiro <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Christoph Lameter <[email protected]>
Cc: Josh Boyer <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
mm/mempolicy.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1763418..3d64b36 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1552,8 +1552,18 @@ struct mempolicy *get_vma_policy(struct task_struct *task,
addr);
if (vpol)
pol = vpol;
- } else if (vma->vm_policy)
+ } else if (vma->vm_policy) {
pol = vma->vm_policy;
+
+ /*
+ * shmem_alloc_page() passes MPOL_F_SHARED policy with
+ * a pseudo vma whose vma->vm_ops=NULL. Take a reference
+ * count on these policies which will be dropped by
+ * mpol_cond_put() later
+ */
+ if (mpol_needs_cond_ref(pol))
+ mpol_get(pol);
+ }
}
if (!pol)
pol = &default_policy;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jozsef Kadlecsik <[email protected]>
commit 07153c6ec074257ade76a461429b567cff2b3a1e upstream.
It was reported that the Linux kernel sometimes logs:
klogd: [2629147.402413] kernel BUG at net / netfilter /
nf_conntrack_proto_tcp.c: 447!
klogd: [1072212.887368] kernel BUG at net / netfilter /
nf_conntrack_proto_tcp.c: 392
ipv4_get_l4proto() in nf_conntrack_l3proto_ipv4.c and tcp_error() in
nf_conntrack_proto_tcp.c should catch malformed packets, so the errors
at the indicated lines - TCP options parsing - should not happen.
However, tcp_error() relies on the "dataoff" offset to the TCP header,
calculated by ipv4_get_l4proto(). But ipv4_get_l4proto() does not check
for bogus ihl values in IPv4 packets, which can then slip through
tcp_error() and get caught in the TCP options parsing routines.
The patch fixes ipv4_get_l4proto() by invalidating packets with a bogus
ihl value.
The patch closes netfilter bugzilla id 771.
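As a worked example of how a bogus ihl slips past the earlier checks (hypothetical numbers):

	/*
	 * Worked example with hypothetical numbers: for a 40-byte IPv4 packet
	 * whose header claims ihl = 15 (i.e. 60 bytes of IP header),
	 *
	 *     *dataoff = nhoff + (iph->ihl << 2) = 0 + 60 = 60 > skb->len (40)
	 *
	 * so the new check rejects the packet with -NF_ACCEPT before the TCP
	 * option parser ever sees the bogus offset.
	 */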
Signed-off-by: Jozsef Kadlecsik <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
index 750b06a..cf73cc7 100644
--- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+++ b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
@@ -84,6 +84,14 @@ static int ipv4_get_l4proto(const struct sk_buff *skb, unsigned int nhoff,
*dataoff = nhoff + (iph->ihl << 2);
*protonum = iph->protocol;
+ /* Check bogus IP headers */
+ if (*dataoff > skb->len) {
+ pr_debug("nf_conntrack_ipv4: bogus IPv4 packet: "
+ "nhoff %u, ihl %u, skblen %u\n",
+ nhoff, iph->ihl << 2, skb->len);
+ return -NF_ACCEPT;
+ }
+
return NF_ACCEPT;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Florian Westphal <[email protected]>
commit 7a909ac70f6b0823d9f23a43f19598d4b57ac901 upstream.
credit_cap can be set to credit, which avoids inlining user2credits
twice. Also, remove the inline keyword and let the compiler decide.
old:
   text   data   bss    dec    hex  filename
    684    192     0    876    36c  net/netfilter/xt_limit.o
   4927    344    32   5303   14b7  net/netfilter/xt_hashlimit.o
now:
    668    192     0    860    35c  net/netfilter/xt_limit.o
   4793    344    32   5169   1431  net/netfilter/xt_hashlimit.o
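The change itself is small; in sketch form (simplified from the hashlimit hunk below):

	/* Sketch of the change: compute the credit value once and reuse it for
	 * the cap instead of converting the same avg * burst value twice. */
	dh->rateinfo.credit     = user2credits(hinfo->cfg.avg * hinfo->cfg.burst);
	dh->rateinfo.credit_cap = dh->rateinfo.credit;	/* was: user2credits(avg * burst) */
	dh->rateinfo.cost       = user2credits(hinfo->cfg.avg);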
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
net/netfilter/xt_hashlimit.c | 8 +++-----
net/netfilter/xt_limit.c | 5 ++---
2 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
index d95f9c9..2195eb0 100644
--- a/net/netfilter/xt_hashlimit.c
+++ b/net/netfilter/xt_hashlimit.c
@@ -389,8 +389,7 @@ static void htable_put(struct xt_hashlimit_htable *hinfo)
#define CREDITS_PER_JIFFY POW2_BELOW32(MAX_CPJ)
/* Precision saver. */
-static inline u_int32_t
-user2credits(u_int32_t user)
+static u32 user2credits(u32 user)
{
/* If multiplying would overflow... */
if (user > 0xFFFFFFFF / (HZ*CREDITS_PER_JIFFY))
@@ -400,7 +399,7 @@ user2credits(u_int32_t user)
return (user * HZ * CREDITS_PER_JIFFY) / XT_HASHLIMIT_SCALE;
}
-static inline void rateinfo_recalc(struct dsthash_ent *dh, unsigned long now)
+static void rateinfo_recalc(struct dsthash_ent *dh, unsigned long now)
{
dh->rateinfo.credit += (now - dh->rateinfo.prev) * CREDITS_PER_JIFFY;
if (dh->rateinfo.credit > dh->rateinfo.credit_cap)
@@ -535,8 +534,7 @@ hashlimit_mt(const struct sk_buff *skb, struct xt_action_param *par)
dh->rateinfo.prev = jiffies;
dh->rateinfo.credit = user2credits(hinfo->cfg.avg *
hinfo->cfg.burst);
- dh->rateinfo.credit_cap = user2credits(hinfo->cfg.avg *
- hinfo->cfg.burst);
+ dh->rateinfo.credit_cap = dh->rateinfo.credit;
dh->rateinfo.cost = user2credits(hinfo->cfg.avg);
} else {
/* update expiration timeout */
diff --git a/net/netfilter/xt_limit.c b/net/netfilter/xt_limit.c
index 32b7a57..5c22ce8 100644
--- a/net/netfilter/xt_limit.c
+++ b/net/netfilter/xt_limit.c
@@ -88,8 +88,7 @@ limit_mt(const struct sk_buff *skb, struct xt_action_param *par)
}
/* Precision saver. */
-static u_int32_t
-user2credits(u_int32_t user)
+static u32 user2credits(u32 user)
{
/* If multiplying would overflow... */
if (user > 0xFFFFFFFF / (HZ*CREDITS_PER_JIFFY))
@@ -123,7 +122,7 @@ static int limit_mt_check(const struct xt_mtchk_param *par)
128. */
priv->prev = jiffies;
priv->credit = user2credits(r->avg * r->burst); /* Credits full. */
- r->credit_cap = user2credits(r->avg * r->burst); /* Credits full. */
+ r->credit_cap = priv->credit; /* Credits full. */
r->cost = user2credits(r->avg);
}
return 0;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 7149f2558d5b5b988726662fe58b1c388337805b upstream.
Fixes a regression caused by:
821f749 eCryptfs: Revert to a writethrough cache model
That patch reverted some code (specifically, 32001d6f) that was
necessary to properly handle open() -> mmap() -> close() -> dirty pages
-> munmap(), because the lower file could be closed before the dirty
pages are written out.
Rather than reapplying 32001d6f, this approach is a better way of
ensuring that the lower file is still open in order to handle writing
out the dirty pages. It is called from ecryptfs_release(), while we have
a lock on the lower file pointer, just before the lower file gets the
final fput() and we overwrite the pointer.
https://launchpad.net/bugs/1047261
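In sketch form, the ordering this enforces inside ecryptfs_put_lower_file() is (simplified; locking and the refcount test are abbreviated, and the condition name is hypothetical):

	/*
	 * Simplified sketch: dirty pages written through a mapping that
	 * outlived the last close must reach the lower file before its
	 * final fput().
	 */
	if (this_was_the_last_reference) {			/* hypothetical condition */
		filemap_write_and_wait(inode->i_mapping);	/* flush dirty pages first */
		fput(inode_info->lower_file);			/* then drop the lower file */
		inode_info->lower_file = NULL;
	}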
Signed-off-by: Tyler Hicks <[email protected]>
Reported-by: Artemy Tregubenko <[email protected]>
Tested-by: Artemy Tregubenko <[email protected]>
Tested-by: Colin Ian King <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ecryptfs/main.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
index 2768138..9b627c1 100644
--- a/fs/ecryptfs/main.c
+++ b/fs/ecryptfs/main.c
@@ -162,6 +162,7 @@ void ecryptfs_put_lower_file(struct inode *inode)
inode_info = ecryptfs_inode_to_private(inode);
if (atomic_dec_and_mutex_lock(&inode_info->lower_file_count,
&inode_info->lower_file_mutex)) {
+ filemap_write_and_wait(inode->i_mapping);
fput(inode_info->lower_file);
inode_info->lower_file = NULL;
mutex_unlock(&inode_info->lower_file_mutex);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Seiji Aguchi <[email protected]>
commit d6cf86d8f23253225fe2a763d627ecf7dfee9dae upstream.
The value of efi.runtime_version is checked before calling
update_capsule()/query_variable_info() as follows.
But it isn't initialized anywhere.
<snip>
static efi_status_t virt_efi_query_variable_info(u32 attr,
u64 *storage_space,
u64 *remaining_space,
u64 *max_variable_size)
{
if (efi.runtime_version < EFI_2_00_SYSTEM_TABLE_REVISION)
return EFI_UNSUPPORTED;
<snip>
This patch initializes efi.runtime_version at boot time.
Signed-off-by: Seiji Aguchi <[email protected]>
Acked-by: Matthew Garrett <[email protected]>
Signed-off-by: Matt Fleming <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
arch/x86/platform/efi/efi.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 92660eda..f55a4ce 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -890,6 +890,7 @@ void __init efi_enter_virtual_mode(void)
*
* Call EFI services through wrapper functions.
*/
+ efi.runtime_version = efi_systab.fw_revision;
efi.get_time = virt_efi_get_time;
efi.set_time = virt_efi_set_time;
efi.get_wakeup_time = virt_efi_get_wakeup_time;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Deucher <[email protected]>
commit 62444b7462a2b98bc78d68736c03a7c4e66ba7e2 upstream.
- Stop the displays from accessing the FB
- Block CPU access
- Turn off MC client access
This should fix issues some users have seen, especially
with UEFI, when changing the MC FB location that result
in hangs or display corruption.
v2: fix crtc enabled check noticed by Luca Tettamanti
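Condensed, the save/restore sequence in the backported hunks below is roughly:

	/*
	 * Rough outline of the sequence (a reading aid for the large diff
	 * below, not additional code):
	 *
	 *   evergreen_mc_stop():
	 *     - disable VGA rendering
	 *     - for each enabled CRTC: set DISP_READ_REQUEST_DISABLE, wait for
	 *       the next frame, and remember it in save->crtc_enabled[]
	 *     - evergreen_mc_wait_for_idle()
	 *     - WREG32(BIF_FB_EN, 0) to block CPU framebuffer access
	 *     - enter blackout via MC_SHARED_BLACKOUT_CNTL
	 *
	 *   evergreen_mc_resume():
	 *     - reprogram the CRTC/VGA surface addresses to the new location
	 *     - leave blackout, re-enable BIF_FB_EN read/write access
	 *     - clear DISP_READ_REQUEST_DISABLE on the saved CRTCs and wait
	 *       for the next frame
	 */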
Signed-off-by: Alex Deucher <[email protected]>
[bwh: Backported to 3.2:
- Drop DCE6 cases
- Call evergreen_mc_wait_for_idle() directly
- Add dce4_wait_for_vblank() (commits 3ae19b750bdc09ce233e1504348320141593ffda
and 4a15903db02026728d0cf2755c6fabae16b8db6a) and call it directly]
Signed-off-by: Ben Hutchings <[email protected]>
---
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -37,6 +37,16 @@
#define EVERGREEN_PFP_UCODE_SIZE 1120
#define EVERGREEN_PM4_UCODE_SIZE 1376
+static const u32 crtc_offsets[6] =
+{
+ EVERGREEN_CRTC0_REGISTER_OFFSET,
+ EVERGREEN_CRTC1_REGISTER_OFFSET,
+ EVERGREEN_CRTC2_REGISTER_OFFSET,
+ EVERGREEN_CRTC3_REGISTER_OFFSET,
+ EVERGREEN_CRTC4_REGISTER_OFFSET,
+ EVERGREEN_CRTC5_REGISTER_OFFSET
+};
+
static void evergreen_gpu_init(struct radeon_device *rdev);
void evergreen_fini(struct radeon_device *rdev);
void evergreen_pcie_gen2_enable(struct radeon_device *rdev);
@@ -66,6 +76,27 @@ void evergreen_fix_pci_max_read_req_size
}
}
+void dce4_wait_for_vblank(struct radeon_device *rdev, int crtc)
+{
+ int i;
+
+ if (crtc >= rdev->num_crtc)
+ return;
+
+ if (RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[crtc]) & EVERGREEN_CRTC_MASTER_EN) {
+ for (i = 0; i < rdev->usec_timeout; i++) {
+ if (!(RREG32(EVERGREEN_CRTC_STATUS + crtc_offsets[crtc]) & EVERGREEN_CRTC_V_BLANK))
+ break;
+ udelay(1);
+ }
+ for (i = 0; i < rdev->usec_timeout; i++) {
+ if (RREG32(EVERGREEN_CRTC_STATUS + crtc_offsets[crtc]) & EVERGREEN_CRTC_V_BLANK)
+ break;
+ udelay(1);
+ }
+ }
+}
+
void evergreen_pre_page_flip(struct radeon_device *rdev, int crtc)
{
/* enable the pflip int */
@@ -1065,116 +1096,88 @@ void evergreen_agp_enable(struct radeon_
void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save)
{
+ u32 crtc_enabled, tmp, frame_count, blackout;
+ int i, j;
+
save->vga_render_control = RREG32(VGA_RENDER_CONTROL);
save->vga_hdp_control = RREG32(VGA_HDP_CONTROL);
- /* Stop all video */
+ /* disable VGA render */
WREG32(VGA_RENDER_CONTROL, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC0_REGISTER_OFFSET, 1);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC1_REGISTER_OFFSET, 1);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC2_REGISTER_OFFSET, 1);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC3_REGISTER_OFFSET, 1);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC4_REGISTER_OFFSET, 1);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC5_REGISTER_OFFSET, 1);
- }
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC2_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC3_REGISTER_OFFSET, 0);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC4_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC5_REGISTER_OFFSET, 0);
- }
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC0_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC2_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC3_REGISTER_OFFSET, 0);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC4_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC5_REGISTER_OFFSET, 0);
- }
-
- WREG32(D1VGA_CONTROL, 0);
- WREG32(D2VGA_CONTROL, 0);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_D3VGA_CONTROL, 0);
- WREG32(EVERGREEN_D4VGA_CONTROL, 0);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_D5VGA_CONTROL, 0);
- WREG32(EVERGREEN_D6VGA_CONTROL, 0);
+ /* blank the display controllers */
+ for (i = 0; i < rdev->num_crtc; i++) {
+ crtc_enabled = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]) & EVERGREEN_CRTC_MASTER_EN;
+ if (crtc_enabled) {
+ save->crtc_enabled[i] = true;
+ tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
+ if (!(tmp & EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE)) {
+ dce4_wait_for_vblank(rdev, i);
+ tmp |= EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ }
+ /* wait for the next frame */
+ frame_count = radeon_get_vblank_counter(rdev, i);
+ for (j = 0; j < rdev->usec_timeout; j++) {
+ if (radeon_get_vblank_counter(rdev, i) != frame_count)
+ break;
+ udelay(1);
+ }
+ }
+ }
+
+ evergreen_mc_wait_for_idle(rdev);
+
+ blackout = RREG32(MC_SHARED_BLACKOUT_CNTL);
+ if ((blackout & BLACKOUT_MODE_MASK) != 1) {
+ /* Block CPU access */
+ WREG32(BIF_FB_EN, 0);
+ /* blackout the MC */
+ blackout &= ~BLACKOUT_MODE_MASK;
+ WREG32(MC_SHARED_BLACKOUT_CNTL, blackout | 1);
}
}
void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *save)
{
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC0_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC0_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC0_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC0_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
-
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC1_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC1_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC1_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC1_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
-
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC2_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC2_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC2_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC2_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
+ u32 tmp, frame_count;
+ int i, j;
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC3_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC3_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC3_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC3_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC4_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC4_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC4_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC4_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
-
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ /* update crtc base addresses */
+ for (i = 0; i < rdev->num_crtc; i++) {
+ WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)rdev->mc.vram_start);
}
-
WREG32(EVERGREEN_VGA_MEMORY_BASE_ADDRESS_HIGH, upper_32_bits(rdev->mc.vram_start));
WREG32(EVERGREEN_VGA_MEMORY_BASE_ADDRESS, (u32)rdev->mc.vram_start);
- /* Unlock host access */
+
+ /* unblackout the MC */
+ tmp = RREG32(MC_SHARED_BLACKOUT_CNTL);
+ tmp &= ~BLACKOUT_MODE_MASK;
+ WREG32(MC_SHARED_BLACKOUT_CNTL, tmp);
+ /* allow CPU access */
+ WREG32(BIF_FB_EN, FB_READ_EN | FB_WRITE_EN);
+
+ for (i = 0; i < rdev->num_crtc; i++) {
+ if (save->crtc_enabled) {
+ tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
+ tmp &= ~EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ /* wait for the next frame */
+ frame_count = radeon_get_vblank_counter(rdev, i);
+ for (j = 0; j < rdev->usec_timeout; j++) {
+ if (radeon_get_vblank_counter(rdev, i) != frame_count)
+ break;
+ udelay(1);
+ }
+ }
+ }
+ /* Unlock vga access */
WREG32(VGA_HDP_CONTROL, save->vga_hdp_control);
mdelay(1);
WREG32(VGA_RENDER_CONTROL, save->vga_render_control);
--- a/drivers/gpu/drm/radeon/evergreen_reg.h
+++ b/drivers/gpu/drm/radeon/evergreen_reg.h
@@ -210,7 +210,10 @@
#define EVERGREEN_CRTC_CONTROL 0x6e70
# define EVERGREEN_CRTC_MASTER_EN (1 << 0)
# define EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE (1 << 24)
+#define EVERGREEN_CRTC_BLANK_CONTROL 0x6e74
+# define EVERGREEN_CRTC_BLANK_DATA_EN (1 << 8)
#define EVERGREEN_CRTC_STATUS 0x6e8c
+# define EVERGREEN_CRTC_V_BLANK (1 << 0)
#define EVERGREEN_CRTC_STATUS_POSITION 0x6e90
#define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8
#define EVERGREEN_CRTC_UPDATE_LOCK 0x6ed4
--- a/drivers/gpu/drm/radeon/evergreend.h
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -77,6 +77,10 @@
#define CONFIG_MEMSIZE 0x5428
+#define BIF_FB_EN 0x5490
+#define FB_READ_EN (1 << 0)
+#define FB_WRITE_EN (1 << 1)
+
#define CP_ME_CNTL 0x86D8
#define CP_ME_HALT (1 << 28)
#define CP_PFP_HALT (1 << 26)
@@ -194,6 +198,9 @@
#define NOOFCHAN_MASK 0x00003000
#define MC_SHARED_CHREMAP 0x2008
+#define MC_SHARED_BLACKOUT_CNTL 0x20ac
+#define BLACKOUT_MODE_MASK 0x00000007
+
#define MC_ARB_RAMCFG 0x2760
#define NOOFBANK_SHIFT 0
#define NOOFBANK_MASK 0x00000003
--- a/drivers/gpu/drm/radeon/radeon_asic.h
+++ b/drivers/gpu/drm/radeon/radeon_asic.h
@@ -386,6 +386,7 @@ void r700_cp_fini(struct radeon_device *
struct evergreen_mc_save {
u32 vga_render_control;
u32 vga_hdp_control;
+ bool crtc_enabled[RADEON_MAX_CRTCS];
};
void evergreen_pcie_gart_tlb_flush(struct radeon_device *rdev);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tyler Hicks <[email protected]>
commit 64e6651dcc10e9d2cc6230208a8e6c2cfd19ae18 upstream.
Since eCryptfs only calls fput() on the lower file in
ecryptfs_release(), eCryptfs should call the lower filesystem's
->flush() from ecryptfs_flush().
If the lower filesystem implements ->flush(), then eCryptfs should try
to flush out any dirty pages prior to calling the lower ->flush(). If
the lower filesystem does not implement ->flush(), then eCryptfs has no
need to do anything in ecryptfs_flush() since dirty pages are now
written out to the lower filesystem in ecryptfs_release().
Signed-off-by: Tyler Hicks <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
fs/ecryptfs/file.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index 44ce5c6..d45ba45 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -275,8 +275,14 @@ out:
static int ecryptfs_flush(struct file *file, fl_owner_t td)
{
- return file->f_mode & FMODE_WRITE
- ? filemap_write_and_wait(file->f_mapping) : 0;
+ struct file *lower_file = ecryptfs_file_to_lower(file);
+
+ if (lower_file->f_op && lower_file->f_op->flush) {
+ filemap_write_and_wait(file->f_mapping);
+ return lower_file->f_op->flush(lower_file, td);
+ }
+
+ return 0;
}
static int ecryptfs_release(struct inode *inode, struct file *file)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: KOSAKI Motohiro <[email protected]>
commit 869833f2c5c6e4dd09a5378cfc665ffb4615e5d2 upstream.
Dave Jones' system call fuzz testing tool "trinity" triggered the
following bug error with slab debugging enabled
=============================================================================
BUG numa_policy (Not tainted): Poison overwritten
-----------------------------------------------------------------------------
INFO: 0xffff880146498250-0xffff880146498250. First byte 0x6a instead of 0x6b
INFO: Allocated in mpol_new+0xa3/0x140 age=46310 cpu=6 pid=32154
__slab_alloc+0x3d3/0x445
kmem_cache_alloc+0x29d/0x2b0
mpol_new+0xa3/0x140
sys_mbind+0x142/0x620
system_call_fastpath+0x16/0x1b
INFO: Freed in __mpol_put+0x27/0x30 age=46268 cpu=6 pid=32154
__slab_free+0x2e/0x1de
kmem_cache_free+0x25a/0x260
__mpol_put+0x27/0x30
remove_vma+0x68/0x90
exit_mmap+0x118/0x140
mmput+0x73/0x110
exit_mm+0x108/0x130
do_exit+0x162/0xb90
do_group_exit+0x4f/0xc0
sys_exit_group+0x17/0x20
system_call_fastpath+0x16/0x1b
INFO: Slab 0xffffea0005192600 objects=27 used=27 fp=0x (null) flags=0x20000000004080
INFO: Object 0xffff880146498250 @offset=592 fp=0xffff88014649b9d0
The problem is that the structure is being prematurely freed due to a
reference count imbalance. In the following case mbind(addr, len) should
replace the memory policies of both vma1 and vma2 and thus they will
come to share the same mempolicy, and the new mempolicy will have the
MPOL_F_SHARED flag.
+-------------------+-------------------+
|       vma1        |    vma2(shmem)    |
+-------------------+-------------------+
|                                       |
addr                             addr+len
alloc_pages_vma() uses get_vma_policy() and mpol_cond_put() pair for
maintaining the mempolicy reference count. The current rule is that
get_vma_policy() only increments refcount for shmem VMA and
mpol_conf_put() only decrements refcount if the policy has
MPOL_F_SHARED.
In above case, vma1 is not shmem vma and vma->policy has MPOL_F_SHARED!
The reference count will be decreased even though was not increased
whenever alloc_page_vma() is called. This has been broken since commit
[52cd3b07: mempolicy: rework mempolicy Reference Counting] in 2008.
There is another serious bug with the sharing of memory policies.
Currently, the mempolicy rebind logic (called from cpuset rebinding)
ignores the refcount of the mempolicy and overrides it forcibly. Thus, any
mempolicy sharing may cause mempolicy corruption. The bug was
introduced by commit [68860ec1: cpusets: automatic numa mempolicy
rebinding].
Ideally, the shared policy handling would be rewritten to either
properly handle COW of the policy structures or at least reference count
MPOL_F_SHARED based exclusively on information within the policy.
However, this patch takes the easier approach of disabling any policy
sharing between VMAs. Each new range allocated with sp_alloc will
allocate a new policy, set the reference count to 1 and drop the
reference count of the old policy. This increases the memory footprint
but is not expected to be a major problem as mbind() is unlikely to be
used for fine-grained ranges. It is also inefficient because it means
we allocate a new policy even in cases where mbind_range() could use the
new_policy passed to it. However, it is more straight-forward and the
change should be invisible to the user.
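The core of the new ownership rule, in sketch form (condensed from vma_replace_policy() in the hunk below):

	/*
	 * Condensed sketch: every VMA (and every sp_node, via sp_alloc()) now
	 * holds its own mpol_dup() copy of the policy rather than sharing one
	 * refcounted object.
	 */
	new = mpol_dup(pol);			/* private copy, refcount 1 */
	if (IS_ERR(new))
		return PTR_ERR(new);
	old = vma->vm_policy;
	vma->vm_policy = new;			/* mmap_sem held for writing */
	mpol_put(old);				/* drop the old policy's reference */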
[[email protected]: Edited changelog]
Reported-by: Dave Jones <[email protected]>,
Cc: Christoph Lameter <[email protected]>,
Reviewed-by: Christoph Lameter <[email protected]>
Signed-off-by: KOSAKI Motohiro <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Cc: Josh Boyer <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
mm/mempolicy.c | 52 ++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 38 insertions(+), 14 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 92daa26..f0728ae 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -607,24 +607,39 @@ check_range(struct mm_struct *mm, unsigned long start, unsigned long end,
return first;
}
-/* Apply policy to a single VMA */
-static int policy_vma(struct vm_area_struct *vma, struct mempolicy *new)
+/*
+ * Apply policy to a single VMA
+ * This must be called with the mmap_sem held for writing.
+ */
+static int vma_replace_policy(struct vm_area_struct *vma,
+ struct mempolicy *pol)
{
- int err = 0;
- struct mempolicy *old = vma->vm_policy;
+ int err;
+ struct mempolicy *old;
+ struct mempolicy *new;
pr_debug("vma %lx-%lx/%lx vm_ops %p vm_file %p set_policy %p\n",
vma->vm_start, vma->vm_end, vma->vm_pgoff,
vma->vm_ops, vma->vm_file,
vma->vm_ops ? vma->vm_ops->set_policy : NULL);
- if (vma->vm_ops && vma->vm_ops->set_policy)
+ new = mpol_dup(pol);
+ if (IS_ERR(new))
+ return PTR_ERR(new);
+
+ if (vma->vm_ops && vma->vm_ops->set_policy) {
err = vma->vm_ops->set_policy(vma, new);
- if (!err) {
- mpol_get(new);
- vma->vm_policy = new;
- mpol_put(old);
+ if (err)
+ goto err_out;
}
+
+ old = vma->vm_policy;
+ vma->vm_policy = new; /* protected by mmap_sem */
+ mpol_put(old);
+
+ return 0;
+ err_out:
+ mpol_put(new);
return err;
}
@@ -676,7 +691,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
if (err)
goto out;
}
- err = policy_vma(vma, new_pol);
+ err = vma_replace_policy(vma, new_pol);
if (err)
goto out;
}
@@ -2153,15 +2168,24 @@ static void sp_delete(struct shared_policy *sp, struct sp_node *n)
static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
struct mempolicy *pol)
{
- struct sp_node *n = kmem_cache_alloc(sn_cache, GFP_KERNEL);
+ struct sp_node *n;
+ struct mempolicy *newpol;
+ n = kmem_cache_alloc(sn_cache, GFP_KERNEL);
if (!n)
return NULL;
+
+ newpol = mpol_dup(pol);
+ if (IS_ERR(newpol)) {
+ kmem_cache_free(sn_cache, n);
+ return NULL;
+ }
+ newpol->flags |= MPOL_F_SHARED;
+
n->start = start;
n->end = end;
- mpol_get(pol);
- pol->flags |= MPOL_F_SHARED; /* for unref */
- n->policy = pol;
+ n->policy = newpol;
+
return n;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman <[email protected]>
commit b22d127a39ddd10d93deee3d96e643657ad53a49 upstream.
shared_policy_replace() use of sp_alloc() is unsafe. 1) sp_node cannot
be dereferenced if sp->lock is not held and 2) another thread can modify
sp_node between spin_unlock for allocating a new sp node and next
spin_lock. The bug was introduced before 2.6.12-rc2.
Kosaki's original patch for this problem was to allocate an sp node and
policy within shared_policy_replace and initialise it when the lock is
reacquired. I was not keen on this approach because it partially
duplicates sp_alloc(). As the paths where sp->lock is taken are not that
performance critical, this patch converts sp->lock to sp->mutex so it can
sleep when calling sp_alloc().
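In sketch form, the conversion lets the allocation happen without dropping the lock:

	/*
	 * Sketch of the before/after shape of shared_policy_replace()
	 * (simplified):
	 *
	 *   before: spin_lock(&sp->lock);
	 *           ... need new2 -> spin_unlock(), sp_alloc() (may sleep),
	 *               goto restart and re-validate everything ...
	 *
	 *   after:  mutex_lock(&sp->mutex);
	 *           ... new2 = sp_alloc(end, n->end, n->policy);  /\* may sleep,
	 *               safely, while the mutex is held *\/
	 *           mutex_unlock(&sp->mutex);
	 */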
[[email protected]: Original patch]
Signed-off-by: Mel Gorman <[email protected]>
Acked-by: KOSAKI Motohiro <[email protected]>
Reviewed-by: Christoph Lameter <[email protected]>
Cc: Josh Boyer <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
include/linux/mempolicy.h | 2 +-
mm/mempolicy.c | 37 ++++++++++++++++---------------------
2 files changed, 17 insertions(+), 22 deletions(-)
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 95b738c..df08254 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -188,7 +188,7 @@ struct sp_node {
struct shared_policy {
struct rb_root root;
- spinlock_t lock;
+ struct mutex mutex;
};
void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f0728ae..b2f12ec 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2083,7 +2083,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
*/
/* lookup first element intersecting start-end */
-/* Caller holds sp->lock */
+/* Caller holds sp->mutex */
static struct sp_node *
sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
{
@@ -2147,13 +2147,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
if (!sp->root.rb_node)
return NULL;
- spin_lock(&sp->lock);
+ mutex_lock(&sp->mutex);
sn = sp_lookup(sp, idx, idx+1);
if (sn) {
mpol_get(sn->policy);
pol = sn->policy;
}
- spin_unlock(&sp->lock);
+ mutex_unlock(&sp->mutex);
return pol;
}
@@ -2193,10 +2193,10 @@ static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
unsigned long end, struct sp_node *new)
{
- struct sp_node *n, *new2 = NULL;
+ struct sp_node *n;
+ int ret = 0;
-restart:
- spin_lock(&sp->lock);
+ mutex_lock(&sp->mutex);
n = sp_lookup(sp, start, end);
/* Take care of old policies in the same range. */
while (n && n->start < end) {
@@ -2209,16 +2209,14 @@ restart:
} else {
/* Old policy spanning whole new range. */
if (n->end > end) {
+ struct sp_node *new2;
+ new2 = sp_alloc(end, n->end, n->policy);
if (!new2) {
- spin_unlock(&sp->lock);
- new2 = sp_alloc(end, n->end, n->policy);
- if (!new2)
- return -ENOMEM;
- goto restart;
+ ret = -ENOMEM;
+ goto out;
}
n->end = start;
sp_insert(sp, new2);
- new2 = NULL;
break;
} else
n->end = start;
@@ -2229,12 +2227,9 @@ restart:
}
if (new)
sp_insert(sp, new);
- spin_unlock(&sp->lock);
- if (new2) {
- mpol_put(new2->policy);
- kmem_cache_free(sn_cache, new2);
- }
- return 0;
+out:
+ mutex_unlock(&sp->mutex);
+ return ret;
}
/**
@@ -2252,7 +2247,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
int ret;
sp->root = RB_ROOT; /* empty tree == default mempolicy */
- spin_lock_init(&sp->lock);
+ mutex_init(&sp->mutex);
if (mpol) {
struct vm_area_struct pvma;
@@ -2318,7 +2313,7 @@ void mpol_free_shared_policy(struct shared_policy *p)
if (!p->root.rb_node)
return;
- spin_lock(&p->lock);
+ mutex_lock(&p->mutex);
next = rb_first(&p->root);
while (next) {
n = rb_entry(next, struct sp_node, nd);
@@ -2327,7 +2322,7 @@ void mpol_free_shared_policy(struct shared_policy *p)
mpol_put(n->policy);
kmem_cache_free(sn_cache, n);
}
- spin_unlock(&p->lock);
+ mutex_unlock(&p->mutex);
}
/* assumes fs == KERNEL_DS */
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Francois Romieu <[email protected]>
commit d387b427c973974dd619a33549c070ac5d0e089f upstream.
The new 84xx stopped flying below the radars.
Signed-off-by: Francois Romieu <[email protected]>
Cc: Hayes Wang <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/net/ethernet/realtek/r8169.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 9b0b00b..207aadc 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -315,6 +315,8 @@ enum rtl_registers {
Config0 = 0x51,
Config1 = 0x52,
Config2 = 0x53,
+#define PME_SIGNAL (1 << 5) /* 8168c and later */
+
Config3 = 0x54,
Config4 = 0x55,
Config5 = 0x56,
@@ -1422,6 +1424,10 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
RTL_W8(Config1, options);
break;
default:
+ options = RTL_R8(Config2) & ~PME_SIGNAL;
+ if (wolopts)
+ options |= PME_SIGNAL;
+ RTL_W8(Config2, options);
break;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrick McHardy <[email protected]>
commit f22eb25cf5b1157b29ef88c793b71972efc47143 upstream.
Via-headers are parsed beginning at the first character after the Via-address.
When the address is translated first and its length decreases, the offset to
start parsing at is incorrect and header parameters might be missed.
Update the offset after translating the Via-address to fix this.
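A worked example of the offset adjustment (hypothetical addresses):

	/*
	 * Worked example, hypothetical addresses: if the Via-address
	 * "192.168.100.100" (15 bytes) is rewritten to "10.0.0.1" (8 bytes),
	 * *datalen shrinks by 7, so the point where parameter parsing resumes
	 * must move back by the same amount:
	 *
	 *     olen = *datalen;
	 *     map_addr(...);                                      // *datalen -= 7
	 *     matchend = matchoff + matchlen + (*datalen - olen); // i.e. -7
	 */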
Signed-off-by: Patrick McHardy <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
net/ipv4/netfilter/nf_nat_sip.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/netfilter/nf_nat_sip.c b/net/ipv4/netfilter/nf_nat_sip.c
index eef8f29..4ad9cf1 100644
--- a/net/ipv4/netfilter/nf_nat_sip.c
+++ b/net/ipv4/netfilter/nf_nat_sip.c
@@ -148,7 +148,7 @@ static unsigned int ip_nat_sip(struct sk_buff *skb, unsigned int dataoff,
if (ct_sip_parse_header_uri(ct, *dptr, NULL, *datalen,
hdr, NULL, &matchoff, &matchlen,
&addr, &port) > 0) {
- unsigned int matchend, poff, plen, buflen, n;
+ unsigned int olen, matchend, poff, plen, buflen, n;
char buffer[sizeof("nnn.nnn.nnn.nnn:nnnnn")];
/* We're only interested in headers related to this
@@ -163,11 +163,12 @@ static unsigned int ip_nat_sip(struct sk_buff *skb, unsigned int dataoff,
goto next;
}
+ olen = *datalen;
if (!map_addr(skb, dataoff, dptr, datalen, matchoff, matchlen,
&addr, port))
return NF_DROP;
- matchend = matchoff + matchlen;
+ matchend = matchoff + matchlen + *datalen - olen;
/* The maddr= parameter (RFC 2361) specifies where to send
* the reply. */
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lin Ming <[email protected]>
commit 9e33ce453f8ac8452649802bee1f410319408f4b upstream.
IPVS should not reset skb->nf_bridge in FORWARD hook
by calling nf_reset for NAT replies. It triggers oops in
br_nf_forward_finish.
[ 579.781508] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
[ 579.781669] IP: [<ffffffff817b1ca5>] br_nf_forward_finish+0x58/0x112
[ 579.781792] PGD 218f9067 PUD 0
[ 579.781865] Oops: 0000 [#1] SMP
[ 579.781945] CPU 0
[ 579.781983] Modules linked in:
[ 579.782047]
[ 579.782080]
[ 579.782114] Pid: 4644, comm: qemu Tainted: G W 3.5.0-rc5-00006-g95e69f9 #282 Hewlett-Packard /30E8
[ 579.782300] RIP: 0010:[<ffffffff817b1ca5>] [<ffffffff817b1ca5>] br_nf_forward_finish+0x58/0x112
[ 579.782455] RSP: 0018:ffff88007b003a98 EFLAGS: 00010287
[ 579.782541] RAX: 0000000000000008 RBX: ffff8800762ead00 RCX: 000000000001670a
[ 579.782653] RDX: 0000000000000000 RSI: 000000000000000a RDI: ffff8800762ead00
[ 579.782845] RBP: ffff88007b003ac8 R08: 0000000000016630 R09: ffff88007b003a90
[ 579.782957] R10: ffff88007b0038e8 R11: ffff88002da37540 R12: ffff88002da01a02
[ 579.783066] R13: ffff88002da01a80 R14: ffff88002d83c000 R15: ffff88002d82a000
[ 579.783177] FS: 0000000000000000(0000) GS:ffff88007b000000(0063) knlGS:00000000f62d1b70
[ 579.783306] CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
[ 579.783395] CR2: 0000000000000004 CR3: 00000000218fe000 CR4: 00000000000027f0
[ 579.783505] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 579.783684] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 579.783795] Process qemu (pid: 4644, threadinfo ffff880021b20000, task ffff880021aba760)
[ 579.783919] Stack:
[ 579.783959] ffff88007693cedc ffff8800762ead00 ffff88002da01a02 ffff8800762ead00
[ 579.784110] ffff88002da01a02 ffff88002da01a80 ffff88007b003b18 ffffffff817b26c7
[ 579.784260] ffff880080000000 ffffffff81ef59f0 ffff8800762ead00 ffffffff81ef58b0
[ 579.784477] Call Trace:
[ 579.784523] <IRQ>
[ 579.784562]
[ 579.784603] [<ffffffff817b26c7>] br_nf_forward_ip+0x275/0x2c8
[ 579.784707] [<ffffffff81704b58>] nf_iterate+0x47/0x7d
[ 579.784797] [<ffffffff817ac32e>] ? br_dev_queue_push_xmit+0xae/0xae
[ 579.784906] [<ffffffff81704bfb>] nf_hook_slow+0x6d/0x102
[ 579.784995] [<ffffffff817ac32e>] ? br_dev_queue_push_xmit+0xae/0xae
[ 579.785175] [<ffffffff8187fa95>] ? _raw_write_unlock_bh+0x19/0x1b
[ 579.785179] [<ffffffff817ac417>] __br_forward+0x97/0xa2
[ 579.785179] [<ffffffff817ad366>] br_handle_frame_finish+0x1a6/0x257
[ 579.785179] [<ffffffff817b2386>] br_nf_pre_routing_finish+0x26d/0x2cb
[ 579.785179] [<ffffffff817b2cf0>] br_nf_pre_routing+0x55d/0x5c1
[ 579.785179] [<ffffffff81704b58>] nf_iterate+0x47/0x7d
[ 579.785179] [<ffffffff817ad1c0>] ? br_handle_local_finish+0x44/0x44
[ 579.785179] [<ffffffff81704bfb>] nf_hook_slow+0x6d/0x102
[ 579.785179] [<ffffffff817ad1c0>] ? br_handle_local_finish+0x44/0x44
[ 579.785179] [<ffffffff81551525>] ? sky2_poll+0xb35/0xb54
[ 579.785179] [<ffffffff817ad62a>] br_handle_frame+0x213/0x229
[ 579.785179] [<ffffffff817ad417>] ? br_handle_frame_finish+0x257/0x257
[ 579.785179] [<ffffffff816e3b47>] __netif_receive_skb+0x2b4/0x3f1
[ 579.785179] [<ffffffff816e69fc>] process_backlog+0x99/0x1e2
[ 579.785179] [<ffffffff816e6800>] net_rx_action+0xdf/0x242
[ 579.785179] [<ffffffff8107e8a8>] __do_softirq+0xc1/0x1e0
[ 579.785179] [<ffffffff8135a5ba>] ? trace_hardirqs_off_thunk+0x3a/0x6c
[ 579.785179] [<ffffffff8188812c>] call_softirq+0x1c/0x30
The steps to reproduce are as follows:
1. On Host1, set up bridge br0 (192.168.1.106)
2. Boot a KVM guest (192.168.1.105) on Host1 and start httpd
3. Start IPVS service on Host1
ipvsadm -A -t 192.168.1.106:80 -s rr
ipvsadm -a -t 192.168.1.106:80 -r 192.168.1.105:80 -m
4. Run apache benchmark on Host2(192.168.1.101)
ab -n 1000 http://192.168.1.106/
ip_vs_reply4
  ip_vs_out
    handle_response
      ip_vs_notrack
        nf_reset()
        {
                skb->nf_bridge = NULL;
        }
Actually, IPVS wants in this case just to replace nfct
with untracked version. So replace the nf_reset(skb) call
in ip_vs_notrack() with a nf_conntrack_put(skb->nfct) call.
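The difference between the two calls, in sketch form:

	/*
	 * Sketch of the difference (simplified):
	 *
	 *   nf_reset(skb):               puts skb->nfct AND frees/clears
	 *                                skb->nf_bridge
	 *   nf_conntrack_put(skb->nfct): only drops the conntrack reference
	 *
	 * ip_vs_notrack() only needs to swap in the untracked conntrack entry,
	 * so it must leave skb->nf_bridge alone -- br_nf_forward_finish()
	 * dereferences it later on the bridge forwarding path.
	 */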
Signed-off-by: Lin Ming <[email protected]>
Signed-off-by: Julian Anastasov <[email protected]>
Signed-off-by: Simon Horman <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
include/net/ip_vs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
index d6146b4..95374d1 100644
--- a/include/net/ip_vs.h
+++ b/include/net/ip_vs.h
@@ -1425,7 +1425,7 @@ static inline void ip_vs_notrack(struct sk_buff *skb)
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
if (!ct || !nf_ct_is_untracked(ct)) {
- nf_reset(skb);
+ nf_conntrack_put(skb->nfct);
skb->nfct = &nf_ct_untracked_get()->ct_general;
skb->nfctinfo = IP_CT_NEW;
nf_conntrack_get(skb->nfct);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: KOSAKI Motohiro <[email protected]>
commit 63f74ca21f1fad36d075e063f06dcc6d39fe86b2 upstream.
When shared_policy_replace() fails to allocate, new->policy is not freed
correctly by mpol_set_shared_policy(). The problem is that the shared
mempolicy code directly calls kmem_cache_free() in multiple places, where
it is easy to make a mistake.
This patch creates an sp_free wrapper function and uses it. The bug was
introduced in the pre-git age (IOW, before 2.6.12-rc2).
[[email protected]: Edited changelog]
Signed-off-by: KOSAKI Motohiro <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Christoph Lameter <[email protected]>
Cc: Josh Boyer <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
mm/mempolicy.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b2f12ec..1763418 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2157,12 +2157,17 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
return pol;
}
+static void sp_free(struct sp_node *n)
+{
+ mpol_put(n->policy);
+ kmem_cache_free(sn_cache, n);
+}
+
static void sp_delete(struct shared_policy *sp, struct sp_node *n)
{
pr_debug("deleting %lx-l%lx\n", n->start, n->end);
rb_erase(&n->nd, &sp->root);
- mpol_put(n->policy);
- kmem_cache_free(sn_cache, n);
+ sp_free(n);
}
static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
@@ -2301,7 +2306,7 @@ int mpol_set_shared_policy(struct shared_policy *info,
}
err = shared_policy_replace(info, vma->vm_pgoff, vma->vm_pgoff+sz, new);
if (err && new)
- kmem_cache_free(sn_cache, new);
+ sp_free(new);
return err;
}
@@ -2318,9 +2323,7 @@ void mpol_free_shared_policy(struct shared_policy *p)
while (next) {
n = rb_entry(next, struct sp_node, nd);
next = rb_next(&n->nd);
- rb_erase(&n->nd, &p->root);
- mpol_put(n->policy);
- kmem_cache_free(sn_cache, n);
+ sp_delete(p, n);
}
mutex_unlock(&p->mutex);
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Francois Romieu <[email protected]>
commit 851e60221926a53344b4227879858bef841b0477 upstream.
Suggested by Hayes.
Signed-off-by: Francois Romieu <[email protected]>
Cc: Hayes Wang <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/net/ethernet/realtek/r8169.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 71393ea..9b0b00b 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -1396,7 +1396,6 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
u16 reg;
u8 mask;
} cfg[] = {
- { WAKE_ANY, Config1, PMEnable },
{ WAKE_PHY, Config3, LinkUp },
{ WAKE_MAGIC, Config3, MagicPacket },
{ WAKE_UCAST, Config5, UWF },
@@ -1404,16 +1403,28 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
{ WAKE_MCAST, Config5, MWF },
{ WAKE_ANY, Config5, LanWake }
};
+ u8 options;
RTL_W8(Cfg9346, Cfg9346_Unlock);
for (i = 0; i < ARRAY_SIZE(cfg); i++) {
- u8 options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
+ options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
if (wolopts & cfg[i].opt)
options |= cfg[i].mask;
RTL_W8(cfg[i].reg, options);
}
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_17:
+ options = RTL_R8(Config1) & ~PMEnable;
+ if (wolopts)
+ options |= PMEnable;
+ RTL_W8(Config1, options);
+ break;
+ default:
+ break;
+ }
+
RTL_W8(Cfg9346, Cfg9346_Lock);
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso <[email protected]>
commit 2614f86490122bf51eb7c12ec73927f1900f4e7d upstream.
In __nf_ct_expect_check, the function refresh_timer returns 1
if a matching expectation is found and its timer is successfully
refreshed. This results in nf_ct_expect_related returning 0.
Note that at this point:
- the passed expectation is not inserted in the expectation table
and its timer was not initialized, since we have refreshed one
matching/existing expectation.
- nf_ct_expect_alloc uses kmem_cache_alloc, so the expectation
timer is in some undefined state just after the allocation,
until it is appropriately initialized.
This can be a problem for the SIP helper during the expectation
addition:
...
if (nf_ct_expect_related(rtp_exp) == 0) {
if (nf_ct_expect_related(rtcp_exp) != 0)
nf_ct_unexpect_related(rtp_exp);
...
Note that nf_ct_expect_related(rtp_exp) may return 0 for the timer refresh
case that is detailed above. Then, if nf_ct_expect_related(rtcp_exp)
returns != 0, nf_ct_unexpect_related(rtp_exp) is called, which does:
spin_lock_bh(&nf_conntrack_lock);
if (del_timer(&exp->timeout)) {
nf_ct_unlink_expect(exp);
nf_ct_expect_put(exp);
}
spin_unlock_bh(&nf_conntrack_lock);
Note that del_timer always returns false if the timer has been
initialized. However, the timer was not initialized since setup_timer
was not called; therefore, the expectation timer remains in some
undefined state. If I'm not missing anything, this may lead to the
removal of a nonexistent expectation.
To fix this, the optimization that allows refreshing an expectation
is removed. Now nf_conntrack_expect_related looks more consistent
to me since it always adds the expectation when it returns
success.
Thanks to Patrick McHardy for participating in the discussion of
this patch.
I think this may be the source of the problem described by:
http://marc.info/?l=netfilter-devel&m=134073514719421&w=2
Reported-by: Rafal Fitt <[email protected]>
Acked-by: Patrick McHardy <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
net/netfilter/nf_conntrack_expect.c | 29 ++++++-----------------------
1 file changed, 6 insertions(+), 23 deletions(-)
--- a/net/netfilter/nf_conntrack_expect.c
+++ b/net/netfilter/nf_conntrack_expect.c
@@ -366,23 +366,6 @@ static void evict_oldest_expect(struct n
}
}
-static inline int refresh_timer(struct nf_conntrack_expect *i)
-{
- struct nf_conn_help *master_help = nfct_help(i->master);
- const struct nf_conntrack_expect_policy *p;
-
- if (!del_timer(&i->timeout))
- return 0;
-
- p = &rcu_dereference_protected(
- master_help->helper,
- lockdep_is_held(&nf_conntrack_lock)
- )->expect_policy[i->class];
- i->timeout.expires = jiffies + p->timeout * HZ;
- add_timer(&i->timeout);
- return 1;
-}
-
static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
{
const struct nf_conntrack_expect_policy *p;
@@ -390,7 +373,7 @@ static inline int __nf_ct_expect_check(s
struct nf_conn *master = expect->master;
struct nf_conn_help *master_help = nfct_help(master);
struct net *net = nf_ct_exp_net(expect);
- struct hlist_node *n;
+ struct hlist_node *n, *next;
unsigned int h;
int ret = 1;
@@ -401,12 +384,12 @@ static inline int __nf_ct_expect_check(s
goto out;
}
h = nf_ct_expect_dst_hash(&expect->tuple);
- hlist_for_each_entry(i, n, &net->ct.expect_hash[h], hnode) {
+ hlist_for_each_entry_safe(i, n, next, &net->ct.expect_hash[h], hnode) {
if (expect_matches(i, expect)) {
- /* Refresh timer: if it's dying, ignore.. */
- if (refresh_timer(i)) {
- ret = 0;
- goto out;
+ if (del_timer(&i->timeout)) {
+ nf_ct_unlink_expect(i);
+ nf_ct_expect_put(i);
+ break;
}
} else if (expect_clash(i, expect)) {
ret = -EBUSY;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso <[email protected]>
commit 3f509c689a07a4aa989b426893d8491a7ffcc410 upstream.
We're hitting bug while trying to reinsert an already existing
expectation:
kernel BUG at kernel/timer.c:895!
invalid opcode: 0000 [#1] SMP
[...]
Call Trace:
<IRQ>
[<ffffffffa0069563>] nf_ct_expect_related_report+0x4a0/0x57a [nf_conntrack]
[<ffffffff812d423a>] ? in4_pton+0x72/0x131
[<ffffffffa00ca69e>] ip_nat_sdp_media+0xeb/0x185 [nf_nat_sip]
[<ffffffffa00b5b9b>] set_expected_rtp_rtcp+0x32d/0x39b [nf_conntrack_sip]
[<ffffffffa00b5f15>] process_sdp+0x30c/0x3ec [nf_conntrack_sip]
[<ffffffff8103f1eb>] ? irq_exit+0x9a/0x9c
[<ffffffffa00ca738>] ? ip_nat_sdp_media+0x185/0x185 [nf_nat_sip]
We have to remove the RTP expectation if the RTCP expectation hits EBUSY
since we keep trying with other ports until we succeed.
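The resulting retry loop behaves roughly like this (a simplified sketch of the shape of ip_nat_sdp_media(); port selection and tuple setup are abbreviated, and base_port is a hypothetical name):

	/* Simplified sketch of the loop after the change. */
	for (port = base_port; port != 0; port += 2) {
		if (nf_ct_expect_related(rtp_exp) != 0)
			continue;			/* RTP port busy, try next pair */
		ret = nf_ct_expect_related(rtcp_exp);
		if (ret == 0)
			break;				/* both expectations registered */
		nf_ct_unexpect_related(rtp_exp);	/* drop RTP on any RTCP failure */
		if (ret != -EBUSY) {
			port = 0;			/* hard error: give up */
			break;
		}
		/* -EBUSY: keep trying with the next port pair */
	}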
Reported-by: Rafal Fitt <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
net/ipv4/netfilter/nf_nat_sip.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/netfilter/nf_nat_sip.c b/net/ipv4/netfilter/nf_nat_sip.c
index 4ad9cf1..9c87cde 100644
--- a/net/ipv4/netfilter/nf_nat_sip.c
+++ b/net/ipv4/netfilter/nf_nat_sip.c
@@ -502,7 +502,10 @@ static unsigned int ip_nat_sdp_media(struct sk_buff *skb, unsigned int dataoff,
ret = nf_ct_expect_related(rtcp_exp);
if (ret == 0)
break;
- else if (ret != -EBUSY) {
+ else if (ret == -EBUSY) {
+ nf_ct_unexpect_related(rtp_exp);
+ continue;
+ } else if (ret < 0) {
nf_ct_unexpect_related(rtp_exp);
port = 0;
break;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mike Galbraith <[email protected]>
commit 8f6189684eb4e85e6c593cd710693f09c944450a upstream.
Make the stop scheduler class do the same accounting as the other classes.
Migration threads can be caught in the act while doing exec balancing,
leading to the below due to use of unmaintained ->se.exec_start. The
load that triggered this particular instance was an apparently out of
control heavily threaded application that does system monitoring in
what equated to an exec bomb, with one of the VERY frequently migrated
tasks being ps.
%CPU PID USER CMD
99.3 45 root [migration/10]
97.7 53 root [migration/12]
97.0 57 root [migration/13]
90.1 49 root [migration/11]
89.6 65 root [migration/15]
88.7 17 root [migration/3]
80.4 37 root [migration/8]
78.1 41 root [migration/9]
44.2 13 root [migration/2]
Signed-off-by: Mike Galbraith <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
[Steven Rostedt: backport for 3.2.]
Signed-off-by: Ben Hutchings <[email protected]>
---
kernel/sched_stoptask.c | 22 +++++++++++++++++++++-
1 files changed, 21 insertions(+), 1 deletions(-)
--- a/kernel/sched_stoptask.c
+++ b/kernel/sched_stoptask.c
@@ -25,8 +25,10 @@ static struct task_struct *pick_next_tas
{
struct task_struct *stop = rq->stop;
- if (stop && stop->on_rq)
+ if (stop && stop->on_rq) {
+ stop->se.exec_start = rq->clock_task;
return stop;
+ }
return NULL;
}
@@ -50,6 +52,21 @@ static void yield_task_stop(struct rq *r
static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
{
+ struct task_struct *curr = rq->curr;
+ u64 delta_exec;
+
+ delta_exec = rq->clock_task - curr->se.exec_start;
+ if (unlikely((s64)delta_exec < 0))
+ delta_exec = 0;
+
+ schedstat_set(curr->se.statistics.exec_max,
+ max(curr->se.statistics.exec_max, delta_exec));
+
+ curr->se.sum_exec_runtime += delta_exec;
+ account_group_exec_runtime(curr, delta_exec);
+
+ curr->se.exec_start = rq->clock_task;
+ cpuacct_charge(curr, delta_exec);
}
static void task_tick_stop(struct rq *rq, struct task_struct *curr, int queued)
@@ -58,6 +75,9 @@ static void task_tick_stop(struct rq *rq
static void set_curr_task_stop(struct rq *rq)
{
+ struct task_struct *stop = rq->stop;
+
+ stop->se.exec_start = rq->clock_task;
}
static void switched_to_stop(struct rq *rq, struct task_struct *p)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jesse Brandeburg <[email protected]>
commit 3a3847e007aae732d64d8fd1374126393e9879a3 upstream.
As reported by Steven Rostedt, e1000 has a lockdep splat added
during the recent merge window. The issue is that
cancel_delayed_work is called while holding our private mutex.
There is no reason that I can see to hold the mutex during PCI
shutdown; it was more just paranoia that made me put the mutex_lock
around the call to e1000_down.
In a quick survey, lots of drivers handle locking differently when
being called by the PCI layer. The assumption here is that we
don't need the mutex's protection in this function because
the driver cannot be unloaded while in the shutdown handler,
which is only called at reboot or poweroff.
Reported-by: Steven Rostedt <[email protected]>
Signed-off-by: Jesse Brandeburg <[email protected]>
Tested-by: Steven Rostedt <[email protected]>
Tested-by: Aaron Brown <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/net/ethernet/intel/e1000/e1000_main.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index 985d589..934d5aa 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -4724,8 +4724,6 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
netif_device_detach(netdev);
- mutex_lock(&adapter->mutex);
-
if (netif_running(netdev)) {
WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
e1000_down(adapter);
@@ -4733,10 +4731,8 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
#ifdef CONFIG_PM
retval = pci_save_state(pdev);
- if (retval) {
- mutex_unlock(&adapter->mutex);
+ if (retval)
return retval;
- }
#endif
status = er32(STATUS);
@@ -4791,8 +4787,6 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
if (netif_running(netdev))
e1000_free_irq(adapter);
- mutex_unlock(&adapter->mutex);
-
pci_disable_device(pdev);
return 0;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Vetter <[email protected]>
commit 15a13bbdffb0d6288a5dd04aee9736267da1335f upstream.
This fixes a resume regression introduced in
commit 7dd4906586274f3945f2aeaaa5a33b451c3b4bba
Author: Chris Wilson <[email protected]>
Date: Wed Mar 21 10:48:18 2012 +0000
drm/i915: Mark untiled BLT commands as fenced on gen2/3
which fixed fencing tracking for untiled blt commands.
A side effect of that patch was that now also untiled objects have a
non-zero obj->last_fenced_seqno to track when a fence can be set up
after a pipelined tiling change. Unfortunately this was only cleared
by the fence setup and teardown code, resulting in tons of untiled but
inactive objects with non-zero last_fenced_seqno.
Now after resume we completely reset the seqno tracking, both on the
driver side (by setting dev_priv->next_seqno = 1) and on the hw side
(by allocating a new hws page, which contains the seqnos). Hilarity
and indefinite waits ensued from the stale seqnos in
obj->last_fenced_seqno from before the suspend.
The fix is to properly clear the fencing tracking state like we
already do for the normal gpu rendering while moving objects off the
active list.
Reported-and-tested-by: "Rafael J. Wysocki" <[email protected]>
Cc: Jiri Slaby <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Signed-Off-by: Daniel Vetter <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/i915/i915_gem.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4c65c63..0e3c6ac 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1493,6 +1493,7 @@ i915_gem_object_move_off_active(struct drm_i915_gem_object *obj)
{
list_del_init(&obj->ring_list);
obj->last_rendering_seqno = 0;
+ obj->last_fenced_seqno = 0;
}
static void
@@ -1521,6 +1522,7 @@ i915_gem_object_move_to_inactive(struct drm_i915_gem_object *obj)
BUG_ON(!list_empty(&obj->gpu_write_list));
BUG_ON(!obj->active);
obj->ring = NULL;
+ obj->last_fenced_ring = NULL;
i915_gem_object_move_off_active(obj);
obj->fenced_gpu_access = false;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matthieu CASTET <[email protected]>
commit 193819cf2e6e395b1e1be2d36785dc5563a6edca upstream.
Without this patch, these PEBs are not scrubbed until we put data in them.
Bitflips can accumulate later and we could lose the EC header (but the VID
header should be intact and allow the data to be recovered).
Signed-off-by: Matthieu Castet <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
[bwh: Backported to 3.2: adjust filename, context]
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/ubi/scan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/mtd/ubi/scan.c
+++ b/drivers/mtd/ubi/scan.c
@@ -997,7 +997,7 @@ static int process_eb(struct ubi_device
return err;
goto adjust_mean_ec;
case UBI_IO_FF:
- if (ec_err)
+ if (ec_err || bitflips)
err = add_to_list(si, pnum, ec, 1, &si->erase);
else
err = add_to_list(si, pnum, ec, 0, &si->free);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: "samix.lebsir" <[email protected]>
commit 10c6c383e43565c9c6ec07ff8eb2825f8091bdf0 upstream.
The design of uplink flow control in the mux driver is
that for constipated channels data will back up into the
per-channel FIFOs, and any messages that make it to the
outbound message queue will still go out.
Code was added to also stop messages that were in the outbound
queue, but this requires filtering through all the messages on the
queue for stopped DLCIs and changes some of the mux logic unnecessarily.
The message filtering was removed to be in line with the original design,
as the message filtering does not provide any solution.
Extra debug messages used during investigation were also removed.
Signed-off-by: samix.lebsir <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 25 +------------------------
1 file changed, 1 insertion(+), 24 deletions(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index db15b56..5f68f2a 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -691,10 +691,6 @@ static void gsm_data_kick(struct gsm_mux *gsm)
msg = msg->next;
continue;
}
- if (gsm->dlci[msg->addr]->constipated) {
- msg = msg->next;
- continue;
- }
if (gsm->encoding != 0) {
gsm->txframe[0] = GSM1_SOF;
len = gsm_stuff_frame(msg->data,
@@ -748,8 +744,6 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
u8 *dp = msg->data;
u8 *fcs = dp + msg->len;
- WARN_ONCE(dlci->constipated, "%s: queueing from a constipated DLCI",
- __func__);
/* Fill in the header */
if (gsm->encoding == 0) {
if (msg->len < 128)
@@ -956,9 +950,6 @@ static void gsm_dlci_data_sweep(struct gsm_mux *gsm)
break;
dlci = gsm->dlci[i];
if (dlci == NULL || dlci->constipated) {
- if (dlci && (debug & 0x20))
- pr_info("%s: DLCI %d is constipated",
- __func__, i);
i++;
continue;
}
@@ -988,12 +979,8 @@ static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
unsigned long flags;
int sweep;
- if (dlci->constipated) {
- if (debug & 0x20)
- pr_info("%s: DLCI %d is constipated",
- __func__, dlci->addr);
+ if (dlci->constipated)
return;
- }
spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
/* If we have nothing running then we need to fire up */
@@ -1069,15 +1056,9 @@ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
/* Flow control/ready to communicate */
fc = (modem & MDM_FC) || !(modem & MDM_RTR);
if (fc && !dlci->constipated) {
- if (debug & 0x20)
- pr_info("%s: DLCI %d START constipated (tx_bytes=%d)",
- __func__, dlci->addr, dlci->gsm->tx_bytes);
/* Need to throttle our output on this device */
dlci->constipated = 1;
} else if (!fc && dlci->constipated) {
- if (debug & 0x20)
- pr_info("%s: DLCI %d END constipated (tx_bytes=%d)",
- __func__, dlci->addr, dlci->gsm->tx_bytes);
dlci->constipated = 0;
gsm_dlci_data_kick(dlci);
}
@@ -1241,8 +1222,6 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
break;
case CMD_FCON:
/* Modem can accept data again */
- if (debug & 0x20)
- pr_info("%s: GSM END constipation", __func__);
gsm->constipated = 0;
gsm_control_reply(gsm, CMD_FCON, NULL, 0);
/* Kick the link in case it is idling */
@@ -1250,8 +1229,6 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
break;
case CMD_FCOFF:
/* Modem wants us to STFU */
- if (debug & 0x20)
- pr_info("%s: GSM START constipation", __func__);
gsm->constipated = 1;
gsm_control_reply(gsm, CMD_FCOFF, NULL, 0);
break;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Wang <[email protected]>
commit 947ca1856a7e60aa6d20536785e6a42dff25aa6e upstream.
A DEADLOCK will be reported while running a kernel with NUMA and LOCKDEP
enabled; the fake report is produced as follows:
kmem_cache_free() //free obj in cachep
-> cache_free_alien() //acquire cachep's l3 alien lock
-> __drain_alien_cache()
-> free_block()
-> slab_destroy()
-> kmem_cache_free() //free slab in cachep->slabp_cache
-> cache_free_alien() //acquire cachep->slabp_cache's l3 alien lock
Since cachep and cachep->slabp_cache's l3 alien are in the same lock class,
a fake report is generated.
This should not happen since we already have init_lock_keys() which will
reassign the lock class for both l3 list and l3 alien.
However, init_lock_keys() was invoked at the wrong position, which is before
we invoke enable_cpucache() on each cache.
Until slab_state is set to FULL, we don't invoke enable_cpucache() on caches
to build their l3 alien while creating them, so although we invoked
init_lock_keys(), the l3 alien lock class won't change, since the alien
lists don't exist until enable_cpucache() is invoked later.
This patch invokes init_lock_keys() after enable_cpucache() has been done,
instead of before, to avoid the fake DEADLOCK report.
Michael traced the problem back to a commit in release 3.0.0:
commit 30765b92ada267c5395fc788623cb15233276f5c
Author: Peter Zijlstra <[email protected]>
Date: Thu Jul 28 23:22:56 2011 +0200
slab, lockdep: Annotate the locks before using them
Fernando found we hit the regular OFF_SLAB 'recursion' before we
annotate the locks, cure this.
The relevant portion of the stack-trace:
> [ 0.000000] [<c085e24f>] rt_spin_lock+0x50/0x56
> [ 0.000000] [<c04fb406>] __cache_free+0x43/0xc3
> [ 0.000000] [<c04fb23f>] kmem_cache_free+0x6c/0xdc
> [ 0.000000] [<c04fb2fe>] slab_destroy+0x4f/0x53
> [ 0.000000] [<c04fb396>] free_block+0x94/0xc1
> [ 0.000000] [<c04fc551>] do_tune_cpucache+0x10b/0x2bb
> [ 0.000000] [<c04fc8dc>] enable_cpucache+0x7b/0xa7
> [ 0.000000] [<c0bd9d3c>] kmem_cache_init_late+0x1f/0x61
> [ 0.000000] [<c0bba687>] start_kernel+0x24c/0x363
> [ 0.000000] [<c0bba0ba>] i386_start_kernel+0xa9/0xaf
Reported-by: Fernando Lopez-Lezcano <[email protected]>
Acked-by: Pekka Enberg <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/1311888176.2617.379.camel@laptop
Signed-off-by: Ingo Molnar <[email protected]>
The commit moved init_lock_keys() before we build up the alien, so we
failed to reclass it.
Acked-by: Christoph Lameter <[email protected]>
Tested-by: Paul E. McKenney <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
mm/slab.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1669,9 +1669,6 @@ void __init kmem_cache_init_late(void)
g_cpucache_up = LATE;
- /* Annotate slab for lockdep -- annotate the malloc caches */
- init_lock_keys();
-
/* 6) resize the head arrays to their final sizes */
mutex_lock(&cache_chain_mutex);
list_for_each_entry(cachep, &cache_chain, next)
@@ -1679,6 +1676,9 @@ void __init kmem_cache_init_late(void)
BUG();
mutex_unlock(&cache_chain_mutex);
+ /* Annotate slab for lockdep -- annotate the malloc caches */
+ init_lock_keys();
+
/* Done! */
g_cpucache_up = FULL;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams <[email protected]>
commit 6d70a74ffd616073a68ae0974d98819bfa8e6da6 upstream.
The oem parameter image embedded in the efi variable is at an offset
from the start of the variable. However, in the failure path we try to
free the 'orom' pointer, which is only valid when the parameters are
being read from the legacy option-rom space.
Since failure to load the oem parameters is unlikely, and we keep the
memory around in the success case, just defer all de-allocation to devm.
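For readers unfamiliar with devres: memory obtained through devm_* is released automatically when the owning device is torn down, so failure paths can simply drop the pointer. A minimal, generic sketch of the pattern being relied on (not the isci code; copy_params() is an illustrative placeholder):
struct isci_orom *orom;

orom = devm_kzalloc(&pdev->dev, sizeof(*orom), GFP_KERNEL);
if (!orom)
        return NULL;

if (!copy_params(orom))         /* placeholder for the real parse/copy step */
        return NULL;            /* no devm_kfree() needed; devres frees it
                                 * when the device goes away */
return orom;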
Reported-by: Don Morris <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/scsi/isci/init.c | 1 -
drivers/scsi/isci/probe_roms.c | 1 -
2 files changed, 2 deletions(-)
--- a/drivers/scsi/isci/init.c
+++ b/drivers/scsi/isci/init.c
@@ -469,7 +469,6 @@ static int __devinit isci_pci_probe(stru
if (sci_oem_parameters_validate(&orom->ctrl[i])) {
dev_warn(&pdev->dev,
"[%d]: invalid oem parameters detected, falling back to firmware\n", i);
- devm_kfree(&pdev->dev, orom);
orom = NULL;
break;
}
--- a/drivers/scsi/isci/probe_roms.c
+++ b/drivers/scsi/isci/probe_roms.c
@@ -104,7 +104,6 @@ struct isci_orom *isci_request_oprom(str
if (i >= len) {
dev_err(&pdev->dev, "oprom parse error\n");
- devm_kfree(&pdev->dev, rom);
rom = NULL;
}
pci_unmap_biosrom(oprom);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eugeni Dodonov <[email protected]>
commit ab3951eb74e7c33a2f5b7b64d72e82f1eea61571 upstream.
We should not hit this under any sane conditions, but still, this does not
look right.
CC: Chris Wilson <[email protected]>
CC: Daniel Vetter <[email protected]>
Reported-by: Herton Ronaldo Krzesinski <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Signed-off-by: Eugeni Dodonov <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/gpu/drm/i915/intel_display.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 17020cd..2e1f28f 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -6317,7 +6317,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
default:
WARN_ONCE(1, "unknown plane in flip command\n");
ret = -ENODEV;
- goto err;
+ goto err_unpin;
}
ret = intel_ring_begin(ring, 4);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tejun Heo <[email protected]>
commit 959d1af8cffc8fd38ed53e8be1cf4ab8782f9c00 upstream.
WORK_STRUCT_PENDING is used to claim ownership of a work item and
process_one_work() releases it before starting execution. When
someone else grabs PENDING, all pre-release updates to the work item
should be visible and all updates made by the new owner should happen
afterwards.
Grabbing PENDING uses test_and_set_bit() and thus has a full barrier;
however, clearing doesn't have a matching wmb. Given the preceding
spin_unlock and use of clear_bit, I don't believe this can be a
problem on an actual machine and there hasn't been any related report,
but it is still theoretically possible for clear_pending to permeate
upwards and happen before the work->entry update.
Add an explicit smp_wmb() before work_clear_pending().
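The pairing the patch relies on, as a generic sketch (not the workqueue code itself; PENDING_BIT and the item layout here are illustrative):
/* releasing side: make earlier updates visible before giving up PENDING */
w->entry_state = new_state;                /* illustrative field update */
smp_wmb();                                 /* order the update before the clear */
clear_bit(PENDING_BIT, &w->flags);

/* claiming side: test_and_set_bit() implies a full memory barrier */
if (!test_and_set_bit(PENDING_BIT, &w->flags)) {
        /* we now own the item and are guaranteed to see entry_state */
}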
Signed-off-by: Tejun Heo <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
kernel/workqueue.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 70f95ab..5c26d36 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1997,7 +1997,9 @@ __acquires(&gcwq->lock)
spin_unlock_irq(&gcwq->lock);
+ smp_wmb(); /* paired with test_and_set_bit(PENDING) */
work_clear_pending(work);
+
lock_map_acquire_read(&cwq->wq->lockdep_map);
lock_map_acquire(&lockdep_map);
trace_workqueue_execute_start(work);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiri Slaby <[email protected]>
commit ee8b593affdf893012e57f4c54a21984d1b0d92e upstream.
If a user provides a buffer larger than a tty->write_buf chunk and
passes '\r' at the end of the buffer, we touch out-of-bounds memory.
Add a check there to prevent this.
Signed-off-by: Jiri Slaby <[email protected]>
Cc: Samo Pogacnik <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/char/ttyprintk.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/char/ttyprintk.c b/drivers/char/ttyprintk.c
index 9b1e5e0..08755c5 100644
--- a/drivers/char/ttyprintk.c
+++ b/drivers/char/ttyprintk.c
@@ -67,7 +67,7 @@ static int tpk_printk(const unsigned char *buf, int count)
tmp[tpk_curr + 1] = '\0';
printk(KERN_INFO "%s%s", tpk_tag, tmp);
tpk_curr = 0;
- if (buf[i + 1] == '\n')
+ if ((i + 1) < count && buf[i + 1] == '\n')
i++;
break;
case '\n':
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Khalid Aziz <[email protected]>
commit 7083909023bbe29b3176e92d2d089def1aa7aa1e upstream.
Some of the EFI variable attributes are missing from print out from
/sys/firmware/efi/vars/*/attributes. This patch adds those in. It also
updates code to use pre-defined constants for masking current value
of attributes.
Signed-off-by: Khalid Aziz <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Acked-by: Matthew Garrett <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/firmware/efivars.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/drivers/firmware/efivars.c b/drivers/firmware/efivars.c
index 47408e8..d10c987 100644
--- a/drivers/firmware/efivars.c
+++ b/drivers/firmware/efivars.c
@@ -435,12 +435,23 @@ efivar_attr_read(struct efivar_entry *entry, char *buf)
if (status != EFI_SUCCESS)
return -EIO;
- if (var->Attributes & 0x1)
+ if (var->Attributes & EFI_VARIABLE_NON_VOLATILE)
str += sprintf(str, "EFI_VARIABLE_NON_VOLATILE\n");
- if (var->Attributes & 0x2)
+ if (var->Attributes & EFI_VARIABLE_BOOTSERVICE_ACCESS)
str += sprintf(str, "EFI_VARIABLE_BOOTSERVICE_ACCESS\n");
- if (var->Attributes & 0x4)
+ if (var->Attributes & EFI_VARIABLE_RUNTIME_ACCESS)
str += sprintf(str, "EFI_VARIABLE_RUNTIME_ACCESS\n");
+ if (var->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD)
+ str += sprintf(str, "EFI_VARIABLE_HARDWARE_ERROR_RECORD\n");
+ if (var->Attributes & EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS)
+ str += sprintf(str,
+ "EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS\n");
+ if (var->Attributes &
+ EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS)
+ str += sprintf(str,
+ "EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS\n");
+ if (var->Attributes & EFI_VARIABLE_APPEND_WRITE)
+ str += sprintf(str, "EFI_VARIABLE_APPEND_WRITE\n");
return str - buf;
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ben Hutchings <[email protected]>
commit 6bb22fea25624ab593eee376fa5fb82d1b13f45a upstream.
Linux native exit codes are 8-bit unsigned values. exit(-1) results
in an exit code of 255, which is usually reserved for shells reporting
'command not found'. Use the portable value EXIT_FAILURE. (Not that
this matters much for a daemon.)
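A small stand-alone illustration of the truncation (user space, not part of the patch):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        pid_t pid = fork();
        int status;

        if (pid < 0)
                return EXIT_FAILURE;
        if (pid == 0)
                exit(-1);               /* child: -1 is truncated to 255 */

        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
                printf("child exit code: %d\n", WEXITSTATUS(status)); /* 255 */
        return EXIT_SUCCESS;
}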
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: K. Y. Srinivasan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
[bwh: Backported to 3.2: drop changes to exit() calls not in this version]
---
tools/hv/hv_kvp_daemon.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
--- a/tools/hv/hv_kvp_daemon.c
+++ b/tools/hv/hv_kvp_daemon.c
@@ -348,7 +348,7 @@ int main(void)
fd = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
if (fd < 0) {
syslog(LOG_ERR, "netlink socket creation failed; error:%d", fd);
- exit(-1);
+ exit(EXIT_FAILURE);
}
addr.nl_family = AF_NETLINK;
addr.nl_pad = 0;
@@ -360,7 +360,7 @@ int main(void)
if (error < 0) {
syslog(LOG_ERR, "bind failed; error:%d", error);
close(fd);
- exit(-1);
+ exit(EXIT_FAILURE);
}
sock_opt = addr.nl_groups;
setsockopt(fd, 270, 1, &sock_opt, sizeof(sock_opt));
@@ -378,7 +378,7 @@ int main(void)
if (len < 0) {
syslog(LOG_ERR, "netlink_send failed; error:%d", len);
close(fd);
- exit(-1);
+ exit(EXIT_FAILURE);
}
pfd.fd = fd;
@@ -497,7 +497,7 @@ int main(void)
len = netlink_send(fd, incoming_cn_msg);
if (len < 0) {
syslog(LOG_ERR, "net_link send failed; error:%d", len);
- exit(-1);
+ exit(EXIT_FAILURE);
}
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Artem Bityutskiy <[email protected]>
commit abb3e01103eb4e2ea5c15e6fedbc74e08bd4cc2b upstream.
Currently UBI fails in autoresize when it is in R/O mode (e.g., because the
underlying MTD device is R/O). This patch fixes the issue - we just skip
autoresize and print a warning.
Reported-by: Pali Rohár <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/ubi/build.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
index 355756b..8966088 100644
--- a/drivers/mtd/ubi/build.c
+++ b/drivers/mtd/ubi/build.c
@@ -799,6 +799,11 @@ static int autoresize(struct ubi_device *ubi, int vol_id)
struct ubi_volume *vol = ubi->volumes[vol_id];
int err, old_reserved_pebs = vol->reserved_pebs;
+ if (ubi->ro_mode) {
+ ubi_warn("skip auto-resize because of R/O mode");
+ return 0;
+ }
+
/*
* Clear the auto-resize flag in the volume in-memory copy of the
* volume table, and 'ubi_resize_volume()' will propagate this change
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: xiaojin <[email protected]>
commit 7e8ac7b23b67416700dfb8b4136a4e81ce675b48 upstream.
In 3GPP 27.010, clause 5.8.1, it is defined:
The TE multiplexer initiates the establishment of the multiplexer control channel by sending a SABM frame on DLCI 0 using the procedures of clause 5.4.1.
Once the multiplexer channel is established other DLCs may be established using the procedures of clause 5.4.1.
This patch implements 5.8.1 at the MUX level; it makes sure DLC0 is the first channel to be set up.
[or for those not familiar with the specification: it was possible to try
and open a data connection while the control channel was not yet fully
open, which is a spec violation and confuses some modems]
Signed-off-by: xiaojin <[email protected]>
Tested-by: Yin, Fengwei <[email protected]>
[tweaked the order we check things and error code]
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -2889,6 +2889,10 @@ static int gsmtty_open(struct tty_struct
gsm = gsm_mux[mux];
if (gsm->dead)
return -EL2HLT;
+ /* If DLCI 0 is not yet fully open return an error. This is ok from a locking
+ perspective as we don't have to worry about this if DLCI0 is lost */
+ if (gsm->dlci[0] && gsm->dlci[0]->state != DLCI_OPEN)
+ return -EL2NSYNC;
dlci = gsm->dlci[line];
if (dlci == NULL)
dlci = gsm_dlci_alloc(gsm, line);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Russ Gorby <[email protected]>
commit 88ed2a60610974443335c924d7cb8e5dcf9dbdc1 upstream.
Uplink (TX) network data will go through gsm_dlci_data_output_framed.
There is a bug where, if memory allocation fails, the skb which
has already been pulled off the list will be lost.
In addition, TX skbs were being processed in LIFO order.
Fixed the memory leak, and changed to FIFO-order processing.
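The recovery pattern, in isolation (a simplified sketch rather than the driver's exact code; alloc_mux_frame() is a stand-in for the real frame allocation, and dequeuing from the tail is what gives the FIFO processing mentioned above):
struct gsm_msg *msg;
struct sk_buff *skb;

skb = skb_dequeue_tail(&dlci->skb_list);
if (skb == NULL)
        return 0;

msg = alloc_mux_frame(gsm, skb->len);      /* stand-in allocator */
if (msg == NULL) {
        /* allocation failed: requeue the skb so it is retried later,
         * instead of leaking the only reference to it */
        skb_queue_tail(&dlci->skb_list, skb);
        return -ENOMEM;
}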
Signed-off-by: Russ Gorby <[email protected]>
Tested-by: Kappel, LaurentX <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index a8c82f5..3e210a4 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -868,7 +868,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
/* dlci->skb is locked by tx_lock */
if (dlci->skb == NULL) {
- dlci->skb = skb_dequeue(&dlci->skb_list);
+ dlci->skb = skb_dequeue_tail(&dlci->skb_list);
if (dlci->skb == NULL)
return 0;
first = 1;
@@ -892,8 +892,11 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
/* FIXME: need a timer or something to kick this so it can't
get stuck with no work outstanding and no buffer free */
- if (msg == NULL)
+ if (msg == NULL) {
+ skb_queue_tail(&dlci->skb_list, dlci->skb);
+ dlci->skb = NULL;
return -ENOMEM;
+ }
dp = msg->data;
if (dlci->adaption == 4) { /* Interruptible framed (Packetised Data) */
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanislav Kozina <[email protected]>
commit e9490e93c1978b6669f3e993caa3189be13ce459 upstream.
Change the BUG_ON to a WARN_ON and return in case tty->read_buf == NULL. We want to track a
couple of long-standing reports of this, but at the same time we can avoid killing the box.
Signed-off-by: Stanislav Kozina <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_tty.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
index 6259242..8c0b7b4 100644
--- a/drivers/tty/n_tty.c
+++ b/drivers/tty/n_tty.c
@@ -1742,7 +1742,8 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
do_it_again:
- BUG_ON(!tty->read_buf);
+ if (WARN_ON(!tty->read_buf))
+ return -EAGAIN;
c = job_control(tty, file);
if (c < 0)
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shmulik Ladkani <[email protected]>
commit 7bb9c75436212813b38700c34df4bbb6eb82debe upstream.
The code responsible for reading the version of the mirror bbt was
incorrectly using the descriptor of the main bbt.
Pass the mirror bbt descriptor to 'scan_read_raw' when reading the
version of the mirror bbt.
Signed-off-by: Shmulik Ladkani <[email protected]>
Acked-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Artem Bityutskiy <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/mtd/nand/nand_bbt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mtd/nand/nand_bbt.c b/drivers/mtd/nand/nand_bbt.c
index 30d1319..c126469 100644
--- a/drivers/mtd/nand/nand_bbt.c
+++ b/drivers/mtd/nand/nand_bbt.c
@@ -390,7 +390,7 @@ static int read_abs_bbts(struct mtd_info *mtd, uint8_t *buf,
/* Read the mirror version, if available */
if (md && (md->options & NAND_BBT_VERSION)) {
scan_read_raw(mtd, buf, (loff_t)md->pages[0] << this->page_shift,
- mtd->writesize, td);
+ mtd->writesize, md);
md->version[0] = buf[bbt_get_ver_offs(mtd, md)];
pr_info("Bad block table at page %d, version 0x%02X\n",
md->pages[0], md->version[0]);
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Russ Gorby <[email protected]>
commit 192b6041e75bb4a2aae73834037038cea139a92d upstream.
gsm_dlci_data_kick will not call any output function if tx_bytes > THRESH_LO;
furthermore, it will call the output function only once if tx_bytes == 0.
If the size of the IP writes is on the order of THRESH_LO,
we can get into a situation where skbs accumulate on the outbound list,
starved for events to call the output function.
gsm_dlci_data_kick now calls the sweep function when tx_bytes == 0.
Signed-off-by: Russ Gorby <[email protected]>
Tested-by: Kappel, LaurentX <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index 0988aaa..c028f35 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -971,16 +971,19 @@ static void gsm_dlci_data_sweep(struct gsm_mux *gsm)
static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
{
unsigned long flags;
+ int sweep;
spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
/* If we have nothing running then we need to fire up */
+ sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO);
if (dlci->gsm->tx_bytes == 0) {
if (dlci->net)
gsm_dlci_data_output_framed(dlci->gsm, dlci);
else
gsm_dlci_data_output(dlci->gsm, dlci);
- } else if (dlci->gsm->tx_bytes < TX_THRESH_LO)
- gsm_dlci_data_sweep(dlci->gsm);
+ }
+ if (sweep)
+ gsm_dlci_data_sweep(dlci->gsm);
spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
}
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Russ Gorby <[email protected]>
commit b4338e1efc339986cf6c0a3652906e914a86e2d3 upstream.
gsm_data_kick was recently modified to allow messages on the
tx queue bound for DLCI0 to flow even during FCOFF conditions.
Unfortunately we introduced a bug, discovered by code inspection, where
subsequent list traversers can access freed memory if the DLCI0
messages were not all at the head of the list.
Replaced the singly linked tx list with a list_head and used the
provided interfaces for traversing and deleting members.
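For reference, the list_head idiom adopted here lets a walker unlink and free the current entry safely (a generic sketch, not the driver code; must_send() and send_frame() are placeholders):
struct gsm_msg *msg, *nmsg;

list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
        if (!must_send(msg))               /* placeholder predicate */
                continue;
        send_frame(msg);                   /* placeholder output call */
        list_del(&msg->list);              /* safe: nmsg already points at
                                            * the next entry */
        kfree(msg);
}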
Signed-off-by: Russ Gorby <[email protected]>
Tested-by: Yin, Fengwei <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 40 +++++++++++++---------------------------
1 file changed, 13 insertions(+), 27 deletions(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index 0d93e51..6f4f8d3 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -108,7 +108,7 @@ struct gsm_mux_net {
*/
struct gsm_msg {
- struct gsm_msg *next;
+ struct list_head list;
u8 addr; /* DLCI address + flags */
u8 ctrl; /* Control byte + flags */
unsigned int len; /* Length of data block (can be zero) */
@@ -245,8 +245,7 @@ struct gsm_mux {
unsigned int tx_bytes; /* TX data outstanding */
#define TX_THRESH_HI 8192
#define TX_THRESH_LO 2048
- struct gsm_msg *tx_head; /* Pending data packets */
- struct gsm_msg *tx_tail;
+ struct list_head tx_list; /* Pending data packets */
/* Control messages */
struct timer_list t2_timer; /* Retransmit timer for commands */
@@ -663,7 +662,7 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
m->len = len;
m->addr = addr;
m->ctrl = ctrl;
- m->next = NULL;
+ INIT_LIST_HEAD(&m->list);
return m;
}
@@ -681,16 +680,13 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
static void gsm_data_kick(struct gsm_mux *gsm)
{
- struct gsm_msg *msg = gsm->tx_head;
- struct gsm_msg *free_msg;
+ struct gsm_msg *msg, *nmsg;
int len;
int skip_sof = 0;
- while (msg) {
- if (gsm->constipated && msg->addr) {
- msg = msg->next;
+ list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
+ if (gsm->constipated && msg->addr)
continue;
- }
if (gsm->encoding != 0) {
gsm->txframe[0] = GSM1_SOF;
len = gsm_stuff_frame(msg->data,
@@ -718,14 +714,9 @@ static void gsm_data_kick(struct gsm_mux *gsm)
burst */
skip_sof = 1;
- if (gsm->tx_head == msg)
- gsm->tx_head = msg->next;
- free_msg = msg;
- msg = msg->next;
- kfree(free_msg);
+ list_del(&msg->list);
+ kfree(msg);
}
- if (!gsm->tx_head)
- gsm->tx_tail = NULL;
}
/**
@@ -774,11 +765,7 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
msg->data = dp;
/* Add to the actual output queue */
- if (gsm->tx_tail)
- gsm->tx_tail->next = msg;
- else
- gsm->tx_head = msg;
- gsm->tx_tail = msg;
+ list_add_tail(&msg->list, &gsm->tx_list);
gsm->tx_bytes += msg->len;
gsm_data_kick(gsm);
}
@@ -2026,7 +2013,7 @@ void gsm_cleanup_mux(struct gsm_mux *gsm)
{
int i;
struct gsm_dlci *dlci = gsm->dlci[0];
- struct gsm_msg *txq;
+ struct gsm_msg *txq, *utxq;
struct gsm_control *gc;
gsm->dead = 1;
@@ -2061,11 +2048,9 @@ void gsm_cleanup_mux(struct gsm_mux *gsm)
if (gsm->dlci[i])
gsm_dlci_release(gsm->dlci[i]);
/* Now wipe the queues */
- for (txq = gsm->tx_head; txq != NULL; txq = gsm->tx_head) {
- gsm->tx_head = txq->next;
+ list_for_each_entry_safe(txq, ntxq, &gsm->tx_list, list)
kfree(txq);
- }
- gsm->tx_tail = NULL;
+ INIT_LIST_HEAD(&gsm->tx_list);
}
EXPORT_SYMBOL_GPL(gsm_cleanup_mux);
@@ -2176,6 +2161,7 @@ struct gsm_mux *gsm_alloc_mux(void)
}
spin_lock_init(&gsm->lock);
kref_init(&gsm->ref);
+ INIT_LIST_HEAD(&gsm->tx_list);
gsm->t1 = T1;
gsm->t2 = T2;
3.2-stable review patch. If anyone has any objections, please let me know.
------------------
From: Russ Gorby <[email protected]>
commit 329e56780e514a7ab607bcb51a52ab0dc2669414 upstream.
Drivers are supposed to use the dev_* versions of the kfree_skb
interfaces. In a couple of cases we were called with IRQs
disabled as well, which kfree_skb() does not expect.
Replaced kfree_skb calls with dev_kfree_skb and dev_kfree_skb_any.
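The rule being applied, as a small illustrative helper (not a function in the driver):
static void drop_tx_skb(struct sk_buff *skb, bool atomic_context)
{
        if (atomic_context)
                dev_kfree_skb_any(skb);    /* safe in IRQ context / IRQs off */
        else
                dev_kfree_skb(skb);        /* process context only */
}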
Signed-off-by: Russ Gorby <[email protected]>
Tested-by: Yin, Fengwei <[email protected]>
Signed-off-by: Alan Cox <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Ben Hutchings <[email protected]>
---
drivers/tty/n_gsm.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index 6f4f8d3..a8c82f5 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -879,7 +879,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
if (len > gsm->mtu) {
if (dlci->adaption == 3) {
/* Over long frame, bin it */
- kfree_skb(dlci->skb);
+ dev_kfree_skb_any(dlci->skb);
dlci->skb = NULL;
return 0;
}
@@ -905,7 +905,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
skb_pull(dlci->skb, len);
__gsm_data_queue(dlci, msg);
if (last) {
- kfree_skb(dlci->skb);
+ dev_kfree_skb_any(dlci->skb);
dlci->skb = NULL;
}
return size;
@@ -1674,7 +1674,7 @@ static void gsm_dlci_free(struct kref *ref)
dlci->gsm->dlci[dlci->addr] = NULL;
kfifo_free(dlci->fifo);
while ((dlci->skb = skb_dequeue(&dlci->skb_list)))
- kfree_skb(dlci->skb);
+ dev_kfree_skb(dlci->skb);
kfree(dlci);
}
@@ -2013,7 +2013,7 @@ void gsm_cleanup_mux(struct gsm_mux *gsm)
{
int i;
struct gsm_dlci *dlci = gsm->dlci[0];
- struct gsm_msg *txq, *utxq;
+ struct gsm_msg *txq, *ntxq;
struct gsm_control *gc;
gsm->dead = 1;
diff --git a/Documentation/virtual/lguest/lguest.c b/Documentation/virtual/lguest/lguest.c
index c095d79..288dba6 100644
--- a/Documentation/virtual/lguest/lguest.c
+++ b/Documentation/virtual/lguest/lguest.c
@@ -1299,6 +1299,7 @@ static struct device *new_device(const char *name, u16 type)
dev->feature_len = 0;
dev->num_vq = 0;
dev->running = false;
+ dev->next = NULL;
/*
* Append to device list. Prepending to a single-linked list is
diff --git a/Makefile b/Makefile
index fd9c414..9203135 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 2
-SUBLEVEL = 31
-EXTRAVERSION =
+SUBLEVEL = 32
+EXTRAVERSION = -rc1
NAME = Saber-toothed Squirrel
# *DOCUMENTATION*
diff --git a/arch/arm/plat-omap/counter_32k.c b/arch/arm/plat-omap/counter_32k.c
index a6cbb71..04e703a 100644
--- a/arch/arm/plat-omap/counter_32k.c
+++ b/arch/arm/plat-omap/counter_32k.c
@@ -82,22 +82,29 @@ static void notrace omap_update_sched_clock(void)
* nsecs and adds to a monotonically increasing timespec.
*/
static struct timespec persistent_ts;
-static cycles_t cycles, last_cycles;
+static cycles_t cycles;
static unsigned int persistent_mult, persistent_shift;
+static DEFINE_SPINLOCK(read_persistent_clock_lock);
+
void read_persistent_clock(struct timespec *ts)
{
unsigned long long nsecs;
- cycles_t delta;
- struct timespec *tsp = &persistent_ts;
+ cycles_t last_cycles;
+ unsigned long flags;
+
+ spin_lock_irqsave(&read_persistent_clock_lock, flags);
last_cycles = cycles;
cycles = timer_32k_base ? __raw_readl(timer_32k_base) : 0;
- delta = cycles - last_cycles;
- nsecs = clocksource_cyc2ns(delta, persistent_mult, persistent_shift);
+ nsecs = clocksource_cyc2ns(cycles - last_cycles,
+ persistent_mult, persistent_shift);
+
+ timespec_add_ns(&persistent_ts, nsecs);
+
+ *ts = persistent_ts;
- timespec_add_ns(tsp, nsecs);
- *ts = *tsp;
+ spin_unlock_irqrestore(&read_persistent_clock_lock, flags);
}
int __init omap_init_clocksource_32k(void)
diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index 0be3186..aaf7444 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -224,7 +224,7 @@ KBUILD_CPPFLAGS += -D"DATAOFFSET=$(if $(dataoffset-y),$(dataoffset-y),0)"
LDFLAGS += -m $(ld-emul)
ifdef CONFIG_MIPS
-CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -xc /dev/null | \
+CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/")
ifdef CONFIG_64BIT
diff --git a/arch/mips/kernel/Makefile b/arch/mips/kernel/Makefile
index 1a96618..ce7dd99 100644
--- a/arch/mips/kernel/Makefile
+++ b/arch/mips/kernel/Makefile
@@ -102,7 +102,7 @@ obj-$(CONFIG_MIPS_MACHINE) += mips_machine.o
obj-$(CONFIG_OF) += prom.o
-CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(KBUILD_CFLAGS) -Wa,-mdaddi -c -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-DHAVE_AS_SET_DADDI"; fi)
+CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(KBUILD_CFLAGS) -Wa,-mdaddi -c -o /dev/null -x c /dev/null >/dev/null 2>&1; then echo "-DHAVE_AS_SET_DADDI"; fi)
obj-$(CONFIG_HAVE_STD_PC_SERIAL_PORT) += 8250-platform.o
diff --git a/arch/mn10300/Makefile b/arch/mn10300/Makefile
index 7120282..3eb4a52 100644
--- a/arch/mn10300/Makefile
+++ b/arch/mn10300/Makefile
@@ -26,7 +26,7 @@ CHECKFLAGS +=
PROCESSOR := unset
UNIT := unset
-KBUILD_CFLAGS += -mam33 -mmem-funcs -DCPU=AM33
+KBUILD_CFLAGS += -mam33 -DCPU=AM33 $(call cc-option,-mmem-funcs,)
KBUILD_AFLAGS += -mam33 -DCPU=AM33
ifeq ($(CONFIG_MN10300_CURRENT_IN_E2),y)
diff --git a/arch/powerpc/platforms/pseries/eeh_driver.c b/arch/powerpc/platforms/pseries/eeh_driver.c
index 1b6cb10..a0a4e8a 100644
--- a/arch/powerpc/platforms/pseries/eeh_driver.c
+++ b/arch/powerpc/platforms/pseries/eeh_driver.c
@@ -25,6 +25,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
+#include <linux/module.h>
#include <linux/pci.h>
#include <asm/eeh.h>
#include <asm/eeh_event.h>
@@ -41,6 +42,41 @@ static inline const char * pcid_name (struct pci_dev *pdev)
return "";
}
+/**
+ * eeh_pcid_get - Get the PCI device driver
+ * @pdev: PCI device
+ *
+ * The function is used to retrieve the PCI device driver for
+ * the indicated PCI device. Besides, we will increase the reference
+ * of the PCI device driver to prevent that being unloaded on
+ * the fly. Otherwise, kernel crash would be seen.
+ */
+static inline struct pci_driver *eeh_pcid_get(struct pci_dev *pdev)
+{
+ if (!pdev || !pdev->driver)
+ return NULL;
+
+ if (!try_module_get(pdev->driver->driver.owner))
+ return NULL;
+
+ return pdev->driver;
+}
+
+/**
+ * eeh_pcid_put - Dereference on the PCI device driver
+ * @pdev: PCI device
+ *
+ * The function is called to do dereference on the PCI device
+ * driver of the indicated PCI device.
+ */
+static inline void eeh_pcid_put(struct pci_dev *pdev)
+{
+ if (!pdev || !pdev->driver)
+ return;
+
+ module_put(pdev->driver->driver.owner);
+}
+
#if 0
static void print_device_node_tree(struct pci_dn *pdn, int dent)
{
@@ -109,18 +145,20 @@ static void eeh_enable_irq(struct pci_dev *dev)
static int eeh_report_error(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_frozen;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_disable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->error_detected)
+ !driver->err_handler->error_detected) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->error_detected (dev, pci_channel_io_frozen);
@@ -128,6 +166,7 @@ static int eeh_report_error(struct pci_dev *dev, void *userdata)
if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
if (*res == PCI_ERS_RESULT_NONE) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -142,12 +181,15 @@ static int eeh_report_error(struct pci_dev *dev, void *userdata)
static int eeh_report_mmio_enabled(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
- if (!driver ||
- !driver->err_handler ||
- !driver->err_handler->mmio_enabled)
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
+ if (!driver->err_handler ||
+ !driver->err_handler->mmio_enabled) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->mmio_enabled (dev);
@@ -155,6 +197,7 @@ static int eeh_report_mmio_enabled(struct pci_dev *dev, void *userdata)
if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
if (*res == PCI_ERS_RESULT_NONE) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -165,18 +208,20 @@ static int eeh_report_mmio_enabled(struct pci_dev *dev, void *userdata)
static int eeh_report_reset(struct pci_dev *dev, void *userdata)
{
enum pci_ers_result rc, *res = userdata;
- struct pci_driver *driver = dev->driver;
-
- if (!driver)
- return 0;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_normal;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
+
eeh_enable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->slot_reset)
+ !driver->err_handler->slot_reset) {
+ eeh_pcid_put(dev);
return 0;
+ }
rc = driver->err_handler->slot_reset(dev);
if ((*res == PCI_ERS_RESULT_NONE) ||
@@ -184,6 +229,7 @@ static int eeh_report_reset(struct pci_dev *dev, void *userdata)
if (*res == PCI_ERS_RESULT_DISCONNECT &&
rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
+ eeh_pcid_put(dev);
return 0;
}
@@ -193,21 +239,24 @@ static int eeh_report_reset(struct pci_dev *dev, void *userdata)
static int eeh_report_resume(struct pci_dev *dev, void *userdata)
{
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_normal;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_enable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->resume)
+ !driver->err_handler->resume) {
+ eeh_pcid_put(dev);
return 0;
+ }
driver->err_handler->resume(dev);
+ eeh_pcid_put(dev);
return 0;
}
@@ -220,21 +269,24 @@ static int eeh_report_resume(struct pci_dev *dev, void *userdata)
static int eeh_report_failure(struct pci_dev *dev, void *userdata)
{
- struct pci_driver *driver = dev->driver;
+ struct pci_driver *driver;
dev->error_state = pci_channel_io_perm_failure;
- if (!driver)
- return 0;
+ driver = eeh_pcid_get(dev);
+ if (!driver) return 0;
eeh_disable_irq(dev);
if (!driver->err_handler ||
- !driver->err_handler->error_detected)
+ !driver->err_handler->error_detected) {
+ eeh_pcid_put(dev);
return 0;
+ }
driver->err_handler->error_detected(dev, pci_channel_io_perm_failure);
+ eeh_pcid_put(dev);
return 0;
}
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 18601c8..884507e 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -146,8 +146,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
static inline int pmd_large(pmd_t pte)
{
- return (pmd_flags(pte) & (_PAGE_PSE | _PAGE_PRESENT)) ==
- (_PAGE_PSE | _PAGE_PRESENT);
+ return pmd_flags(pte) & _PAGE_PSE;
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -415,7 +414,13 @@ static inline int pte_hidden(pte_t pte)
static inline int pmd_present(pmd_t pmd)
{
- return pmd_flags(pmd) & _PAGE_PRESENT;
+ /*
+ * Checking for _PAGE_PSE is needed too because
+ * split_huge_page will temporarily clear the present bit (but
+ * the _PAGE_PSE flag will remain set at all times while the
+ * _PAGE_PRESENT bit is clear).
+ */
+ return pmd_flags(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PSE);
}
static inline int pmd_none(pmd_t pmd)
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 37718f0..4d320b2 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -731,6 +731,7 @@ void __init efi_enter_virtual_mode(void)
*
* Call EFI services through wrapper functions.
*/
+ efi.runtime_version = efi_systab.fw_revision;
efi.get_time = virt_efi_get_time;
efi.set_time = virt_efi_set_time;
efi.get_wakeup_time = virt_efi_get_wakeup_time;
diff --git a/block/blk-core.c b/block/blk-core.c
index 49d9e91..c76dfb2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -580,7 +580,7 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
q->request_fn = rfn;
q->prep_rq_fn = NULL;
q->unprep_rq_fn = NULL;
- q->queue_flags = QUEUE_FLAG_DEFAULT;
+ q->queue_flags |= QUEUE_FLAG_DEFAULT;
/* Override internal queue lock with supplied lock pointer */
if (lock)
diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
index 9ecec98..5016de5 100644
--- a/drivers/acpi/bus.c
+++ b/drivers/acpi/bus.c
@@ -950,8 +950,6 @@ static int __init acpi_bus_init(void)
status = acpi_ec_ecdt_probe();
/* Ignore result. Not having an ECDT is not fatal. */
- acpi_bus_osc_support();
-
status = acpi_initialize_objects(ACPI_FULL_INITIALIZATION);
if (ACPI_FAILURE(status)) {
printk(KERN_ERR PREFIX "Unable to initialize ACPI objects\n");
@@ -959,6 +957,12 @@ static int __init acpi_bus_init(void)
}
/*
+ * _OSC method may exist in module level code,
+ * so it must be run after ACPI_FULL_INITIALIZATION
+ */
+ acpi_bus_osc_support();
+
+ /*
* _PDC control method may load dynamic SSDT tables,
* and we need to install the table handler before that.
*/
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 6f95d98..1f90dab 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -108,7 +108,7 @@ static struct usb_device_id btusb_table[] = {
{ USB_DEVICE(0x413c, 0x8197) },
/* Foxconn - Hon Hai */
- { USB_DEVICE(0x0489, 0xe033) },
+ { USB_VENDOR_AND_INTERFACE_INFO(0x0489, 0xff, 0x01, 0x01) },
/*Broadcom devices with vendor specific id */
{ USB_VENDOR_AND_INTERFACE_INFO(0x0a5c, 0xff, 0x01, 0x01) },
diff --git a/drivers/char/ttyprintk.c b/drivers/char/ttyprintk.c
index eedd547..5936691 100644
--- a/drivers/char/ttyprintk.c
+++ b/drivers/char/ttyprintk.c
@@ -67,7 +67,7 @@ static int tpk_printk(const unsigned char *buf, int count)
tmp[tpk_curr + 1] = '\0';
printk(KERN_INFO "%s%s", tpk_tag, tmp);
tpk_curr = 0;
- if (buf[i + 1] == '\n')
+ if ((i + 1) < count && buf[i + 1] == '\n')
i++;
break;
case '\n':
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index b48967b..5991114 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -564,8 +564,8 @@ void dmaengine_get(void)
list_del_rcu(&device->global_node);
break;
} else if (err)
- pr_err("dmaengine: failed to get %s: (%d)\n",
- dma_chan_name(chan), err);
+ pr_debug("%s: failed to get %s: (%d)\n",
+ __func__, dma_chan_name(chan), err);
}
}
diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
index 4799393..b97d4f0 100644
--- a/drivers/firewire/core-cdev.c
+++ b/drivers/firewire/core-cdev.c
@@ -471,8 +471,8 @@ static int ioctl_get_info(struct client *client, union ioctl_arg *arg)
client->bus_reset_closure = a->bus_reset_closure;
if (a->bus_reset != 0) {
fill_bus_reset_event(&bus_reset, client);
- ret = copy_to_user(u64_to_uptr(a->bus_reset),
- &bus_reset, sizeof(bus_reset));
+ /* unaligned size of bus_reset is 36 bytes */
+ ret = copy_to_user(u64_to_uptr(a->bus_reset), &bus_reset, 36);
}
if (ret == 0 && list_empty(&client->link))
list_add_tail(&client->link, &client->device->client_list);
diff --git a/drivers/firmware/efivars.c b/drivers/firmware/efivars.c
index 0535c21..3e60e8d 100644
--- a/drivers/firmware/efivars.c
+++ b/drivers/firmware/efivars.c
@@ -435,12 +435,23 @@ efivar_attr_read(struct efivar_entry *entry, char *buf)
if (status != EFI_SUCCESS)
return -EIO;
- if (var->Attributes & 0x1)
+ if (var->Attributes & EFI_VARIABLE_NON_VOLATILE)
str += sprintf(str, "EFI_VARIABLE_NON_VOLATILE\n");
- if (var->Attributes & 0x2)
+ if (var->Attributes & EFI_VARIABLE_BOOTSERVICE_ACCESS)
str += sprintf(str, "EFI_VARIABLE_BOOTSERVICE_ACCESS\n");
- if (var->Attributes & 0x4)
+ if (var->Attributes & EFI_VARIABLE_RUNTIME_ACCESS)
str += sprintf(str, "EFI_VARIABLE_RUNTIME_ACCESS\n");
+ if (var->Attributes & EFI_VARIABLE_HARDWARE_ERROR_RECORD)
+ str += sprintf(str, "EFI_VARIABLE_HARDWARE_ERROR_RECORD\n");
+ if (var->Attributes & EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS)
+ str += sprintf(str,
+ "EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS\n");
+ if (var->Attributes &
+ EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS)
+ str += sprintf(str,
+ "EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS\n");
+ if (var->Attributes & EFI_VARIABLE_APPEND_WRITE)
+ str += sprintf(str, "EFI_VARIABLE_APPEND_WRITE\n");
return str - buf;
}
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e48e01e..33e1555 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1543,16 +1543,19 @@ i915_gem_object_move_to_active(struct drm_i915_gem_object *obj,
list_move_tail(&obj->ring_list, &ring->active_list);
obj->last_rendering_seqno = seqno;
- if (obj->fenced_gpu_access) {
- struct drm_i915_fence_reg *reg;
-
- BUG_ON(obj->fence_reg == I915_FENCE_REG_NONE);
+ if (obj->fenced_gpu_access) {
obj->last_fenced_seqno = seqno;
obj->last_fenced_ring = ring;
- reg = &dev_priv->fence_regs[obj->fence_reg];
- list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
+ /* Bump MRU to take account of the delayed flush */
+ if (obj->fence_reg != I915_FENCE_REG_NONE) {
+ struct drm_i915_fence_reg *reg;
+
+ reg = &dev_priv->fence_regs[obj->fence_reg];
+ list_move_tail(&reg->lru_list,
+ &dev_priv->mm.fence_list);
+ }
}
}
@@ -1561,6 +1564,7 @@ i915_gem_object_move_off_active(struct drm_i915_gem_object *obj)
{
list_del_init(&obj->ring_list);
obj->last_rendering_seqno = 0;
+ obj->last_fenced_seqno = 0;
}
static void
@@ -1589,6 +1593,7 @@ i915_gem_object_move_to_inactive(struct drm_i915_gem_object *obj)
BUG_ON(!list_empty(&obj->gpu_write_list));
BUG_ON(!obj->active);
obj->ring = NULL;
+ obj->last_fenced_ring = NULL;
i915_gem_object_move_off_active(obj);
obj->fenced_gpu_access = false;
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index a6c2f7a..1202198 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -574,7 +574,8 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
if (ret)
break;
}
- obj->pending_fenced_gpu_access = need_fence;
+ obj->pending_fenced_gpu_access =
+ !!(entry->flags & EXEC_OBJECT_NEEDS_FENCE);
}
entry->offset = obj->gtt_offset;
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index 31d334d..861223b 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -107,10 +107,10 @@ i915_gem_detect_bit_6_swizzle(struct drm_device *dev)
*/
swizzle_x = I915_BIT_6_SWIZZLE_NONE;
swizzle_y = I915_BIT_6_SWIZZLE_NONE;
- } else if (IS_MOBILE(dev)) {
+ } else if (IS_MOBILE(dev) || (IS_GEN3(dev) && !IS_G33(dev))) {
uint32_t dcc;
- /* On mobile 9xx chipsets, channel interleave by the CPU is
+ /* On 9xx chipsets, channel interleave by the CPU is
* determined by DCC. For single-channel, neither the CPU
* nor the GPU do swizzling. For dual channel interleaved,
* the GPU's interleave is bit 9 and 10 for X tiled, and bit
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index c8b5bc1..2812d7b 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -530,6 +530,12 @@ static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
if (de_iir & DE_GSE_IVB)
intel_opregion_gse_intr(dev);
+ if (de_iir & DE_PIPEA_VBLANK_IVB)
+ drm_handle_vblank(dev, 0);
+
+ if (de_iir & DE_PIPEB_VBLANK_IVB)
+ drm_handle_vblank(dev, 1);
+
if (de_iir & DE_PLANEA_FLIP_DONE_IVB) {
intel_prepare_page_flip(dev, 0);
intel_finish_page_flip_plane(dev, 0);
@@ -540,12 +546,6 @@ static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
intel_finish_page_flip_plane(dev, 1);
}
- if (de_iir & DE_PIPEA_VBLANK_IVB)
- drm_handle_vblank(dev, 0);
-
- if (de_iir & DE_PIPEB_VBLANK_IVB)
- drm_handle_vblank(dev, 1);
-
/* check event from PCH */
if (de_iir & DE_PCH_EVENT_IVB) {
if (pch_iir & SDE_HOTPLUG_MASK_CPT)
@@ -622,6 +622,12 @@ static irqreturn_t ironlake_irq_handler(DRM_IRQ_ARGS)
if (de_iir & DE_GSE)
intel_opregion_gse_intr(dev);
+ if (de_iir & DE_PIPEA_VBLANK)
+ drm_handle_vblank(dev, 0);
+
+ if (de_iir & DE_PIPEB_VBLANK)
+ drm_handle_vblank(dev, 1);
+
if (de_iir & DE_PLANEA_FLIP_DONE) {
intel_prepare_page_flip(dev, 0);
intel_finish_page_flip_plane(dev, 0);
@@ -632,12 +638,6 @@ static irqreturn_t ironlake_irq_handler(DRM_IRQ_ARGS)
intel_finish_page_flip_plane(dev, 1);
}
- if (de_iir & DE_PIPEA_VBLANK)
- drm_handle_vblank(dev, 0);
-
- if (de_iir & DE_PIPEB_VBLANK)
- drm_handle_vblank(dev, 1);
-
/* check event from PCH */
if (de_iir & DE_PCH_EVENT) {
if (pch_iir & hotplug_mask)
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4a5e662..a294a32 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -401,6 +401,9 @@
# define VS_TIMER_DISPATCH (1 << 6)
# define MI_FLUSH_ENABLE (1 << 11)
+#define GEN6_GT_MODE 0x20d0
+#define GEN6_GT_MODE_HI (1 << 9)
+
#define GFX_MODE 0x02520
#define GFX_MODE_GEN7 0x0229c
#define GFX_RUN_LIST_ENABLE (1<<15)
@@ -1557,6 +1560,10 @@
/* Video Data Island Packet control */
#define VIDEO_DIP_DATA 0x61178
+/* Read the description of VIDEO_DIP_DATA (before Haswell) or VIDEO_DIP_ECC
+ * (Haswell and newer) to see which VIDEO_DIP_DATA byte corresponds to each byte
+ * of the infoframe structure specified by CEA-861. */
+#define VIDEO_DIP_DATA_SIZE 32
#define VIDEO_DIP_CTL 0x61170
#define VIDEO_DIP_ENABLE (1 << 31)
#define VIDEO_DIP_PORT_B (1 << 29)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 6c3fb44..adac0dd 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2850,13 +2850,34 @@ static void intel_clear_scanline_wait(struct drm_device *dev)
I915_WRITE_CTL(ring, tmp);
}
+static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
+{
+ struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned long flags;
+ bool pending;
+
+ if (atomic_read(&dev_priv->mm.wedged))
+ return false;
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+ pending = to_intel_crtc(crtc)->unpin_work != NULL;
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+
+ return pending;
+}
+
static void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
if (crtc->fb == NULL)
return;
+ wait_event(dev_priv->pending_flip_queue,
+ !intel_crtc_has_pending_flip(crtc));
+
mutex_lock(&dev->struct_mutex);
intel_finish_fb(crtc->fb);
mutex_unlock(&dev->struct_mutex);
@@ -5027,7 +5048,7 @@ static int i9xx_crtc_mode_set(struct drm_crtc *crtc,
/* default to 8bpc */
pipeconf &= ~(PIPECONF_BPP_MASK | PIPECONF_DITHER_EN);
if (is_dp) {
- if (mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
+ if (adjusted_mode->private_flags & INTEL_MODE_DP_FORCE_6BPC) {
pipeconf |= PIPECONF_BPP_6 |
PIPECONF_DITHER_EN |
PIPECONF_DITHER_TYPE_SP;
@@ -5495,7 +5516,7 @@ static int ironlake_crtc_mode_set(struct drm_crtc *crtc,
/* determine panel color depth */
temp = I915_READ(PIPECONF(pipe));
temp &= ~PIPE_BPC_MASK;
- dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp, mode);
+ dither = intel_choose_pipe_bpp_dither(crtc, &pipe_bpp, adjusted_mode);
switch (pipe_bpp) {
case 18:
temp |= PIPE_6BPC;
@@ -6952,9 +6973,8 @@ static void do_intel_finish_page_flip(struct drm_device *dev,
atomic_clear_mask(1 << intel_crtc->plane,
&obj->pending_flip.counter);
- if (atomic_read(&obj->pending_flip) == 0)
- wake_up(&dev_priv->pending_flip_queue);
+ wake_up(&dev_priv->pending_flip_queue);
schedule_work(&work->work);
trace_i915_flip_complete(intel_crtc->plane, work->pending_flip_obj);
@@ -7193,7 +7213,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
default:
WARN_ONCE(1, "unknown plane in flip command\n");
ret = -ENODEV;
- goto err;
+ goto err_unpin;
}
ret = intel_ring_begin(ring, 4);
@@ -8278,6 +8298,11 @@ static void gen6_init_clock_gating(struct drm_device *dev)
DISPPLANE_TRICKLE_FEED_DISABLE);
intel_flush_display_plane(dev_priv, pipe);
}
+
+ /* The default value should be 0x200 according to docs, but the two
+ * platforms I checked have a 0 for this. (Maybe BIOS overrides?) */
+ I915_WRITE(GEN6_GT_MODE, 0xffff << 16);
+ I915_WRITE(GEN6_GT_MODE, GEN6_GT_MODE_HI << 16 | GEN6_GT_MODE_HI);
}
static void gen7_setup_fixed_func_scheduler(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index c2a64f4..497da2a 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -138,14 +138,20 @@ static void i9xx_write_infoframe(struct drm_encoder *encoder,
I915_WRITE(VIDEO_DIP_CTL, VIDEO_DIP_ENABLE | val | port | flags);
+ mmiowb();
for (i = 0; i < len; i += 4) {
I915_WRITE(VIDEO_DIP_DATA, *data);
data++;
}
+ /* Write every possible data byte to force correct ECC calculation. */
+ for (; i < VIDEO_DIP_DATA_SIZE; i += 4)
+ I915_WRITE(VIDEO_DIP_DATA, 0);
+ mmiowb();
flags |= intel_infoframe_flags(frame);
I915_WRITE(VIDEO_DIP_CTL, VIDEO_DIP_ENABLE | val | port | flags);
+ POSTING_READ(VIDEO_DIP_CTL);
}
static void ironlake_write_infoframe(struct drm_encoder *encoder,
@@ -168,14 +174,20 @@ static void ironlake_write_infoframe(struct drm_encoder *encoder,
I915_WRITE(reg, VIDEO_DIP_ENABLE | val | flags);
+ mmiowb();
for (i = 0; i < len; i += 4) {
I915_WRITE(TVIDEO_DIP_DATA(intel_crtc->pipe), *data);
data++;
}
+ /* Write every possible data byte to force correct ECC calculation. */
+ for (; i < VIDEO_DIP_DATA_SIZE; i += 4)
+ I915_WRITE(TVIDEO_DIP_DATA(intel_crtc->pipe), 0);
+ mmiowb();
flags |= intel_infoframe_flags(frame);
I915_WRITE(reg, VIDEO_DIP_ENABLE | val | flags);
+ POSTING_READ(reg);
}
static void intel_set_infoframe(struct drm_encoder *encoder,
struct dip_infoframe *frame)
@@ -546,10 +558,13 @@ void intel_hdmi_init(struct drm_device *dev, int sdvox_reg)
if (!HAS_PCH_SPLIT(dev)) {
intel_hdmi->write_infoframe = i9xx_write_infoframe;
I915_WRITE(VIDEO_DIP_CTL, 0);
+ POSTING_READ(VIDEO_DIP_CTL);
} else {
intel_hdmi->write_infoframe = ironlake_write_infoframe;
- for_each_pipe(i)
+ for_each_pipe(i) {
I915_WRITE(TVIDEO_DIP_CTL(i), 0);
+ POSTING_READ(TVIDEO_DIP_CTL(i));
+ }
}
drm_encoder_helper_add(&intel_encoder->base, &intel_hdmi_helper_funcs);
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index fc0633c..b61f490 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -37,6 +37,16 @@
#define EVERGREEN_PFP_UCODE_SIZE 1120
#define EVERGREEN_PM4_UCODE_SIZE 1376
+static const u32 crtc_offsets[6] =
+{
+ EVERGREEN_CRTC0_REGISTER_OFFSET,
+ EVERGREEN_CRTC1_REGISTER_OFFSET,
+ EVERGREEN_CRTC2_REGISTER_OFFSET,
+ EVERGREEN_CRTC3_REGISTER_OFFSET,
+ EVERGREEN_CRTC4_REGISTER_OFFSET,
+ EVERGREEN_CRTC5_REGISTER_OFFSET
+};
+
static void evergreen_gpu_init(struct radeon_device *rdev);
void evergreen_fini(struct radeon_device *rdev);
void evergreen_pcie_gen2_enable(struct radeon_device *rdev);
@@ -66,6 +76,27 @@ void evergreen_fix_pci_max_read_req_size(struct radeon_device *rdev)
}
}
+void dce4_wait_for_vblank(struct radeon_device *rdev, int crtc)
+{
+ int i;
+
+ if (crtc >= rdev->num_crtc)
+ return;
+
+ if (RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[crtc]) & EVERGREEN_CRTC_MASTER_EN) {
+ for (i = 0; i < rdev->usec_timeout; i++) {
+ if (!(RREG32(EVERGREEN_CRTC_STATUS + crtc_offsets[crtc]) & EVERGREEN_CRTC_V_BLANK))
+ break;
+ udelay(1);
+ }
+ for (i = 0; i < rdev->usec_timeout; i++) {
+ if (RREG32(EVERGREEN_CRTC_STATUS + crtc_offsets[crtc]) & EVERGREEN_CRTC_V_BLANK)
+ break;
+ udelay(1);
+ }
+ }
+}
+
void evergreen_pre_page_flip(struct radeon_device *rdev, int crtc)
{
/* enable the pflip int */
@@ -1065,116 +1096,88 @@ void evergreen_agp_enable(struct radeon_device *rdev)
void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save)
{
+ u32 crtc_enabled, tmp, frame_count, blackout;
+ int i, j;
+
save->vga_render_control = RREG32(VGA_RENDER_CONTROL);
save->vga_hdp_control = RREG32(VGA_HDP_CONTROL);
- /* Stop all video */
+ /* disable VGA render */
WREG32(VGA_RENDER_CONTROL, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC0_REGISTER_OFFSET, 1);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC1_REGISTER_OFFSET, 1);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC2_REGISTER_OFFSET, 1);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC3_REGISTER_OFFSET, 1);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC4_REGISTER_OFFSET, 1);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC5_REGISTER_OFFSET, 1);
- }
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC2_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC3_REGISTER_OFFSET, 0);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC4_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_CONTROL + EVERGREEN_CRTC5_REGISTER_OFFSET, 0);
- }
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC0_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC2_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC3_REGISTER_OFFSET, 0);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC4_REGISTER_OFFSET, 0);
- WREG32(EVERGREEN_CRTC_UPDATE_LOCK + EVERGREEN_CRTC5_REGISTER_OFFSET, 0);
+ /* blank the display controllers */
+ for (i = 0; i < rdev->num_crtc; i++) {
+ crtc_enabled = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]) & EVERGREEN_CRTC_MASTER_EN;
+ if (crtc_enabled) {
+ save->crtc_enabled[i] = true;
+ tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
+ if (!(tmp & EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE)) {
+ dce4_wait_for_vblank(rdev, i);
+ tmp |= EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ }
+ /* wait for the next frame */
+ frame_count = radeon_get_vblank_counter(rdev, i);
+ for (j = 0; j < rdev->usec_timeout; j++) {
+ if (radeon_get_vblank_counter(rdev, i) != frame_count)
+ break;
+ udelay(1);
+ }
+ }
}
- WREG32(D1VGA_CONTROL, 0);
- WREG32(D2VGA_CONTROL, 0);
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_D3VGA_CONTROL, 0);
- WREG32(EVERGREEN_D4VGA_CONTROL, 0);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_D5VGA_CONTROL, 0);
- WREG32(EVERGREEN_D6VGA_CONTROL, 0);
+ evergreen_mc_wait_for_idle(rdev);
+
+ blackout = RREG32(MC_SHARED_BLACKOUT_CNTL);
+ if ((blackout & BLACKOUT_MODE_MASK) != 1) {
+ /* Block CPU access */
+ WREG32(BIF_FB_EN, 0);
+ /* blackout the MC */
+ blackout &= ~BLACKOUT_MODE_MASK;
+ WREG32(MC_SHARED_BLACKOUT_CNTL, blackout | 1);
}
}
void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *save)
{
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC0_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC0_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC0_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC0_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
-
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC1_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC1_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC1_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC1_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
-
- if (rdev->num_crtc >= 4) {
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC2_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC2_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC2_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC2_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
-
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC3_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC3_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC3_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC3_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- }
- if (rdev->num_crtc >= 6) {
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC4_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC4_REGISTER_OFFSET,
- upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC4_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC4_REGISTER_OFFSET,
- (u32)rdev->mc.vram_start);
+ u32 tmp, frame_count;
+ int i, j;
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ /* update crtc base addresses */
+ for (i = 0; i < rdev->num_crtc; i++) {
+ WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
upper_32_bits(rdev->mc.vram_start));
- WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ WREG32(EVERGREEN_GRPH_PRIMARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)rdev->mc.vram_start);
- WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + EVERGREEN_CRTC5_REGISTER_OFFSET,
+ WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + crtc_offsets[i],
(u32)rdev->mc.vram_start);
}
-
WREG32(EVERGREEN_VGA_MEMORY_BASE_ADDRESS_HIGH, upper_32_bits(rdev->mc.vram_start));
WREG32(EVERGREEN_VGA_MEMORY_BASE_ADDRESS, (u32)rdev->mc.vram_start);
- /* Unlock host access */
+
+ /* unblackout the MC */
+ tmp = RREG32(MC_SHARED_BLACKOUT_CNTL);
+ tmp &= ~BLACKOUT_MODE_MASK;
+ WREG32(MC_SHARED_BLACKOUT_CNTL, tmp);
+ /* allow CPU access */
+ WREG32(BIF_FB_EN, FB_READ_EN | FB_WRITE_EN);
+
+ for (i = 0; i < rdev->num_crtc; i++) {
+ if (save->crtc_enabled[i]) {
+ tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
+ tmp &= ~EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ /* wait for the next frame */
+ frame_count = radeon_get_vblank_counter(rdev, i);
+ for (j = 0; j < rdev->usec_timeout; j++) {
+ if (radeon_get_vblank_counter(rdev, i) != frame_count)
+ break;
+ udelay(1);
+ }
+ }
+ }
+ /* Unlock vga access */
WREG32(VGA_HDP_CONTROL, save->vga_hdp_control);
mdelay(1);
WREG32(VGA_RENDER_CONTROL, save->vga_render_control);
diff --git a/drivers/gpu/drm/radeon/evergreen_reg.h b/drivers/gpu/drm/radeon/evergreen_reg.h
index 7d7f215..e022776 100644
--- a/drivers/gpu/drm/radeon/evergreen_reg.h
+++ b/drivers/gpu/drm/radeon/evergreen_reg.h
@@ -210,7 +210,10 @@
#define EVERGREEN_CRTC_CONTROL 0x6e70
# define EVERGREEN_CRTC_MASTER_EN (1 << 0)
# define EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE (1 << 24)
+#define EVERGREEN_CRTC_BLANK_CONTROL 0x6e74
+# define EVERGREEN_CRTC_BLANK_DATA_EN (1 << 8)
#define EVERGREEN_CRTC_STATUS 0x6e8c
+# define EVERGREEN_CRTC_V_BLANK (1 << 0)
#define EVERGREEN_CRTC_STATUS_POSITION 0x6e90
#define EVERGREEN_MASTER_UPDATE_MODE 0x6ef8
#define EVERGREEN_CRTC_UPDATE_LOCK 0x6ed4
diff --git a/drivers/gpu/drm/radeon/evergreend.h b/drivers/gpu/drm/radeon/evergreend.h
index 6ecd23f..fe44a95 100644
--- a/drivers/gpu/drm/radeon/evergreend.h
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -77,6 +77,10 @@
#define CONFIG_MEMSIZE 0x5428
+#define BIF_FB_EN 0x5490
+#define FB_READ_EN (1 << 0)
+#define FB_WRITE_EN (1 << 1)
+
#define CP_ME_CNTL 0x86D8
#define CP_ME_HALT (1 << 28)
#define CP_PFP_HALT (1 << 26)
@@ -194,6 +198,9 @@
#define NOOFCHAN_MASK 0x00003000
#define MC_SHARED_CHREMAP 0x2008
+#define MC_SHARED_BLACKOUT_CNTL 0x20ac
+#define BLACKOUT_MODE_MASK 0x00000007
+
#define MC_ARB_RAMCFG 0x2760
#define NOOFBANK_SHIFT 0
#define NOOFBANK_MASK 0x00000003
diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
index 5ce9402..5aa6670 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.h
+++ b/drivers/gpu/drm/radeon/radeon_asic.h
@@ -386,6 +386,7 @@ void r700_cp_fini(struct radeon_device *rdev);
struct evergreen_mc_save {
u32 vga_render_control;
u32 vga_hdp_control;
+ bool crtc_enabled[RADEON_MAX_CRTCS];
};
void evergreen_pcie_gart_tlb_flush(struct radeon_device *rdev);
diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
index baa019e..4f9496e 100644
--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
@@ -143,6 +143,16 @@ static bool radeon_msi_ok(struct radeon_device *rdev)
(rdev->pdev->subsystem_device == 0x01fd))
return true;
+ /* Gateway RS690 only seems to work with MSIs. */
+ if ((rdev->pdev->device == 0x791f) &&
+ (rdev->pdev->subsystem_vendor == 0x107b) &&
+ (rdev->pdev->subsystem_device == 0x0185))
+ return true;
+
+ /* try and enable MSIs by default on all RS690s */
+ if (rdev->family == CHIP_RS690)
+ return true;
+
/* RV515 seems to have MSI issues where it loses
* MSI rearms occasionally. This leads to lockups and freezes.
* disable it by default.
diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
index 78a665b..ebd6c51 100644
--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -553,7 +553,9 @@ void radeon_pm_suspend(struct radeon_device *rdev)
void radeon_pm_resume(struct radeon_device *rdev)
{
/* set up the default clocks if the MC ucode is loaded */
- if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) {
+ if ((rdev->family >= CHIP_BARTS) &&
+ (rdev->family <= CHIP_CAYMAN) &&
+ rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
SET_VOLTAGE_TYPE_ASIC_VDDC);
@@ -608,7 +610,9 @@ int radeon_pm_init(struct radeon_device *rdev)
radeon_pm_print_states(rdev);
radeon_pm_init_profile(rdev);
/* set up the default clocks if the MC ucode is loaded */
- if (ASIC_IS_DCE5(rdev) && rdev->mc_fw) {
+ if ((rdev->family >= CHIP_BARTS) &&
+ (rdev->family <= CHIP_CAYMAN) &&
+ rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
SET_VOLTAGE_TYPE_ASIC_VDDC);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index fe2fdbb..1740b82 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -148,7 +148,7 @@ static int ipoib_stop(struct net_device *dev)
netif_stop_queue(dev);
- ipoib_ib_dev_down(dev, 0);
+ ipoib_ib_dev_down(dev, 1);
ipoib_ib_dev_stop(dev, 0);
if (!test_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags)) {
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index e5069b4..80799c0 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -190,7 +190,9 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
mcast->mcmember = *mcmember;
- /* Set the cached Q_Key before we attach if it's the broadcast group */
+ /* Set the multicast MTU and cached Q_Key before we attach if it's
+ * the broadcast group.
+ */
if (!memcmp(mcast->mcmember.mgid.raw, priv->dev->broadcast + 4,
sizeof (union ib_gid))) {
spin_lock_irq(&priv->lock);
@@ -198,10 +200,17 @@ static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
spin_unlock_irq(&priv->lock);
return -EAGAIN;
}
+ priv->mcast_mtu = IPOIB_UD_MTU(ib_mtu_enum_to_int(priv->broadcast->mcmember.mtu));
priv->qkey = be32_to_cpu(priv->broadcast->mcmember.qkey);
spin_unlock_irq(&priv->lock);
priv->tx_wr.wr.ud.remote_qkey = priv->qkey;
set_qkey = 1;
+
+ if (!ipoib_cm_admin_enabled(dev)) {
+ rtnl_lock();
+ dev_set_mtu(dev, min(priv->mcast_mtu, priv->admin_mtu));
+ rtnl_unlock();
+ }
}
if (!test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
@@ -590,14 +599,6 @@ void ipoib_mcast_join_task(struct work_struct *work)
return;
}
- priv->mcast_mtu = IPOIB_UD_MTU(ib_mtu_enum_to_int(priv->broadcast->mcmember.mtu));
-
- if (!ipoib_cm_admin_enabled(dev)) {
- rtnl_lock();
- dev_set_mtu(dev, min(priv->mcast_mtu, priv->admin_mtu));
- rtnl_unlock();
- }
-
ipoib_dbg_mcast(priv, "successfully joined all multicast groups\n");
clear_bit(IPOIB_MCAST_RUN, &priv->flags);
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index c76b051..4ec049d 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -620,9 +620,9 @@ static void srp_reset_req(struct srp_target_port *target, struct srp_request *re
struct scsi_cmnd *scmnd = srp_claim_req(target, req, NULL);
if (scmnd) {
+ srp_free_req(target, req, scmnd, 0);
scmnd->result = DID_RESET << 16;
scmnd->scsi_done(scmnd);
- srp_free_req(target, req, scmnd, 0);
}
}
@@ -1669,6 +1669,7 @@ static int srp_abort(struct scsi_cmnd *scmnd)
SRP_TSK_ABORT_TASK);
srp_free_req(target, req, scmnd, 0);
scmnd->result = DID_ABORT << 16;
+ scmnd->scsi_done(scmnd);
return SUCCESS;
}
diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
index 96532bc..7be5fd9 100644
--- a/drivers/input/mouse/synaptics.c
+++ b/drivers/input/mouse/synaptics.c
@@ -53,14 +53,19 @@
#define ABS_POS_BITS 13
/*
- * Any position values from the hardware above the following limits are
- * treated as "wrapped around negative" values that have been truncated to
- * the 13-bit reporting range of the hardware. These are just reasonable
- * guesses and can be adjusted if hardware is found that operates outside
- * of these parameters.
+ * These values should represent the absolute maximum value that will
+ * be reported for a positive position value. Some Synaptics firmware
+ * uses this value to indicate a finger near the edge of the touchpad
+ * whose precise position cannot be determined.
+ *
+ * At least one touchpad is known to report positions in excess of this
+ * value which are actually negative values truncated to the 13-bit
+ * reporting range. These values have never been observed to be lower
+ * than 8184 (i.e. -8), so we treat all values greater than 8176 as
+ * negative and any other value as positive.
*/
-#define X_MAX_POSITIVE (((1 << ABS_POS_BITS) + XMAX) / 2)
-#define Y_MAX_POSITIVE (((1 << ABS_POS_BITS) + YMAX) / 2)
+#define X_MAX_POSITIVE 8176
+#define Y_MAX_POSITIVE 8176
/*
* Synaptics touchpads report the y coordinate from bottom to top, which is
@@ -561,11 +566,21 @@ static int synaptics_parse_hw_state(const unsigned char buf[],
hw->right = (buf[0] & 0x02) ? 1 : 0;
}
- /* Convert wrap-around values to negative */
+ /*
+ * Convert wrap-around values to negative. (X|Y)_MAX_POSITIVE
+ * is used by some firmware to indicate a finger at the edge of
+ * the touchpad whose precise position cannot be determined, so
+ * convert these values to the maximum axis value.
+ */
if (hw->x > X_MAX_POSITIVE)
hw->x -= 1 << ABS_POS_BITS;
+ else if (hw->x == X_MAX_POSITIVE)
+ hw->x = XMAX;
+
if (hw->y > Y_MAX_POSITIVE)
hw->y -= 1 << ABS_POS_BITS;
+ else if (hw->y == Y_MAX_POSITIVE)
+ hw->y = YMAX;
return 0;
}
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index ccf347f..b9062c0 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -563,7 +563,9 @@ static void domain_update_iommu_coherency(struct dmar_domain *domain)
{
int i;
- domain->iommu_coherency = 1;
+ i = find_first_bit(&domain->iommu_bmp, g_num_of_iommus);
+
+ domain->iommu_coherency = i < g_num_of_iommus ? 1 : 0;
for_each_set_bit(i, &domain->iommu_bmp, g_num_of_iommus) {
if (!ecap_coherent(g_iommus[i]->ecap)) {
diff --git a/drivers/media/rc/ite-cir.c b/drivers/media/rc/ite-cir.c
index 0e49c99..c06992e 100644
--- a/drivers/media/rc/ite-cir.c
+++ b/drivers/media/rc/ite-cir.c
@@ -1473,6 +1473,7 @@ static int ite_probe(struct pnp_dev *pdev, const struct pnp_device_id
rdev = rc_allocate_device();
if (!rdev)
goto failure;
+ itdev->rdev = rdev;
ret = -ENODEV;
@@ -1604,7 +1605,6 @@ static int ite_probe(struct pnp_dev *pdev, const struct pnp_device_id
if (ret)
goto failure;
- itdev->rdev = rdev;
ite_pr(KERN_NOTICE, "driver has been successfully loaded\n");
return 0;
diff --git a/drivers/media/video/gspca/pac7302.c b/drivers/media/video/gspca/pac7302.c
index 1c44f78..6ddc769 100644
--- a/drivers/media/video/gspca/pac7302.c
+++ b/drivers/media/video/gspca/pac7302.c
@@ -1197,6 +1197,8 @@ static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x093a, 0x2629), .driver_info = FL_VFLIP},
{USB_DEVICE(0x093a, 0x262a)},
{USB_DEVICE(0x093a, 0x262c)},
+ {USB_DEVICE(0x145f, 0x013c)},
+ {USB_DEVICE(0x1ae7, 0x2001)}, /* SpeedLink Snappy Mic SL-6825-SBK */
{}
};
MODULE_DEVICE_TABLE(usb, device_table);
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
index d5fe43d..bc27065 100644
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -2188,9 +2188,7 @@ static int omap_hsmmc_suspend(struct device *dev)
} else {
host->suspended = 0;
if (host->pdata->resume) {
- ret = host->pdata->resume(&pdev->dev,
- host->slot_id);
- if (ret)
+ if (host->pdata->resume(&pdev->dev, host->slot_id))
dev_dbg(mmc_dev(host->mmc),
"Unmask interrupt failed\n");
}
diff --git a/drivers/mmc/host/sdhci-s3c.c b/drivers/mmc/host/sdhci-s3c.c
index 0d33ff0..06af9e4 100644
--- a/drivers/mmc/host/sdhci-s3c.c
+++ b/drivers/mmc/host/sdhci-s3c.c
@@ -601,7 +601,7 @@ static int __devexit sdhci_s3c_remove(struct platform_device *pdev)
sdhci_remove_host(host, 1);
- for (ptr = 0; ptr < 3; ptr++) {
+ for (ptr = 0; ptr < MAX_BUS_CLK; ptr++) {
if (sc->clk_bus[ptr]) {
clk_disable(sc->clk_bus[ptr]);
clk_put(sc->clk_bus[ptr]);
diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
index d5505f3..559d30d 100644
--- a/drivers/mmc/host/sh_mmcif.c
+++ b/drivers/mmc/host/sh_mmcif.c
@@ -1003,6 +1003,10 @@ static irqreturn_t sh_mmcif_intr(int irq, void *dev_id)
host->sd_error = true;
dev_dbg(&host->pd->dev, "int err state = %08x\n", state);
}
+ if (host->state == STATE_IDLE) {
+ dev_info(&host->pd->dev, "Spurious IRQ status 0x%x", state);
+ return IRQ_HANDLED;
+ }
if (state & ~(INT_CMD12RBE | INT_CMD12CRE))
complete(&host->intr_wait);
else
diff --git a/drivers/mtd/maps/autcpu12-nvram.c b/drivers/mtd/maps/autcpu12-nvram.c
index e5bfd0e..0598d52 100644
--- a/drivers/mtd/maps/autcpu12-nvram.c
+++ b/drivers/mtd/maps/autcpu12-nvram.c
@@ -43,7 +43,8 @@ struct map_info autcpu12_sram_map = {
static int __init init_autcpu12_sram (void)
{
- int err, save0, save1;
+ map_word tmp, save0, save1;
+ int err;
autcpu12_sram_map.virt = ioremap(0x12000000, SZ_128K);
if (!autcpu12_sram_map.virt) {
@@ -51,7 +52,7 @@ static int __init init_autcpu12_sram (void)
err = -EIO;
goto out;
}
- simple_map_init(&autcpu_sram_map);
+ simple_map_init(&autcpu12_sram_map);
/*
* Check for 32K/128K
@@ -61,20 +62,22 @@ static int __init init_autcpu12_sram (void)
* Read and check result on ofs 0x0
* Restore contents
*/
- save0 = map_read32(&autcpu12_sram_map,0);
- save1 = map_read32(&autcpu12_sram_map,0x10000);
- map_write32(&autcpu12_sram_map,~save0,0x10000);
+ save0 = map_read(&autcpu12_sram_map, 0);
+ save1 = map_read(&autcpu12_sram_map, 0x10000);
+ tmp.x[0] = ~save0.x[0];
+ map_write(&autcpu12_sram_map, tmp, 0x10000);
/* if we find this pattern on 0x0, we have 32K size
* restore contents and exit
*/
- if ( map_read32(&autcpu12_sram_map,0) != save0) {
- map_write32(&autcpu12_sram_map,save0,0x0);
+ tmp = map_read(&autcpu12_sram_map, 0);
+ if (!map_word_equal(&autcpu12_sram_map, tmp, save0)) {
+ map_write(&autcpu12_sram_map, save0, 0x0);
goto map;
}
/* We have a 128K found, restore 0x10000 and set size
* to 128K
*/
- map_write32(&autcpu12_sram_map,save1,0x10000);
+ map_write(&autcpu12_sram_map, save1, 0x10000);
autcpu12_sram_map.size = SZ_128K;
map:
diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
index a0bd2de..198da0a 100644
--- a/drivers/mtd/mtdpart.c
+++ b/drivers/mtd/mtdpart.c
@@ -748,6 +748,8 @@ static const char *default_mtd_part_types[] = {
* partition parsers, specified in @types. However, if @types is %NULL, then
* the default list of parsers is used. The default list contains only the
* "cmdlinepart" and "ofpart" parsers ATM.
+ * Note: If there is more than one parser in @types, the kernel only takes the
+ * partitions parsed out by the first parser.
*
* This function may return:
* o a negative error code in case of failure
@@ -772,11 +774,12 @@ int parse_mtd_partitions(struct mtd_info *master, const char **types,
if (!parser)
continue;
ret = (*parser->parse_fn)(master, pparts, data);
+ put_partition_parser(parser);
if (ret > 0) {
printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n",
ret, parser->name, master->name);
+ break;
}
- put_partition_parser(parser);
}
return ret;
}
diff --git a/drivers/mtd/nand/nand_bbt.c b/drivers/mtd/nand/nand_bbt.c
index f024375..532da04 100644
--- a/drivers/mtd/nand/nand_bbt.c
+++ b/drivers/mtd/nand/nand_bbt.c
@@ -390,7 +390,7 @@ static int read_abs_bbts(struct mtd_info *mtd, uint8_t *buf,
/* Read the mirror version, if available */
if (md && (md->options & NAND_BBT_VERSION)) {
scan_read_raw(mtd, buf, (loff_t)md->pages[0] << this->page_shift,
- mtd->writesize, td);
+ mtd->writesize, md);
md->version[0] = buf[bbt_get_ver_offs(mtd, md)];
pr_info("Bad block table at page %d, version 0x%02X\n",
md->pages[0], md->version[0]);
diff --git a/drivers/mtd/nand/nandsim.c b/drivers/mtd/nand/nandsim.c
index 83e8e1b..ade0da0 100644
--- a/drivers/mtd/nand/nandsim.c
+++ b/drivers/mtd/nand/nandsim.c
@@ -2355,6 +2355,7 @@ static int __init ns_init_module(void)
uint64_t new_size = (uint64_t)nsmtd->erasesize << overridesize;
if (new_size >> overridesize != nsmtd->erasesize) {
NS_ERR("overridesize is too big\n");
+ retval = -EINVAL;
goto err_exit;
}
/* N.B. This relies on nand_scan not doing anything with the size before we change it */
diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c
index f745f00..297c965 100644
--- a/drivers/mtd/nand/omap2.c
+++ b/drivers/mtd/nand/omap2.c
@@ -1132,7 +1132,8 @@ static int omap_nand_remove(struct platform_device *pdev)
/* Release NAND device, its internal structures and partitions */
nand_release(&info->mtd);
iounmap(info->nand.IO_ADDR_R);
- kfree(&info->mtd);
+ release_mem_region(info->phys_base, NAND_IO_SIZE);
+ kfree(info);
return 0;
}
diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
index 6c3fb5a..1f9c363 100644
--- a/drivers/mtd/ubi/build.c
+++ b/drivers/mtd/ubi/build.c
@@ -816,6 +816,11 @@ static int autoresize(struct ubi_device *ubi, int vol_id)
struct ubi_volume *vol = ubi->volumes[vol_id];
int err, old_reserved_pebs = vol->reserved_pebs;
+ if (ubi->ro_mode) {
+ ubi_warn("skip auto-resize because of R/O mode");
+ return 0;
+ }
+
/*
* Clear the auto-resize flag in the volume in-memory copy of the
* volume table, and 'ubi_resize_volume()' will propagate this change
diff --git a/drivers/mtd/ubi/scan.c b/drivers/mtd/ubi/scan.c
index b99318e..b2b62de 100644
--- a/drivers/mtd/ubi/scan.c
+++ b/drivers/mtd/ubi/scan.c
@@ -997,7 +997,7 @@ static int process_eb(struct ubi_device *ubi, struct ubi_scan_info *si,
return err;
goto adjust_mean_ec;
case UBI_IO_FF:
- if (ec_err)
+ if (ec_err || bitflips)
err = add_to_list(si, pnum, ec, 1, &si->erase);
else
err = add_to_list(si, pnum, ec, 0, &si->free);
diff --git a/drivers/net/can/mscan/mpc5xxx_can.c b/drivers/net/can/mscan/mpc5xxx_can.c
index 5fedc33..d8f2b5b 100644
--- a/drivers/net/can/mscan/mpc5xxx_can.c
+++ b/drivers/net/can/mscan/mpc5xxx_can.c
@@ -181,7 +181,7 @@ static u32 __devinit mpc512x_can_get_clock(struct platform_device *ofdev,
if (!clock_name || !strcmp(clock_name, "sys")) {
sys_clk = clk_get(&ofdev->dev, "sys_clk");
- if (!sys_clk) {
+ if (IS_ERR(sys_clk)) {
dev_err(&ofdev->dev, "couldn't get sys_clk\n");
goto exit_unmap;
}
@@ -204,7 +204,7 @@ static u32 __devinit mpc512x_can_get_clock(struct platform_device *ofdev,
if (clocksrc < 0) {
ref_clk = clk_get(&ofdev->dev, "ref_clk");
- if (!ref_clk) {
+ if (IS_ERR(ref_clk)) {
dev_err(&ofdev->dev, "couldn't get ref_clk\n");
goto exit_unmap;
}
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index 0549261..c5f6b0e 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -4720,8 +4720,6 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
netif_device_detach(netdev);
- mutex_lock(&adapter->mutex);
-
if (netif_running(netdev)) {
WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
e1000_down(adapter);
@@ -4729,10 +4727,8 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
#ifdef CONFIG_PM
retval = pci_save_state(pdev);
- if (retval) {
- mutex_unlock(&adapter->mutex);
+ if (retval)
return retval;
- }
#endif
status = er32(STATUS);
@@ -4789,8 +4785,6 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
if (netif_running(netdev))
e1000_free_irq(adapter);
- mutex_unlock(&adapter->mutex);
-
pci_disable_device(pdev);
return 0;
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index ed1be8a..4b43bc5 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -327,6 +327,8 @@ enum rtl_registers {
Config0 = 0x51,
Config1 = 0x52,
Config2 = 0x53,
+#define PME_SIGNAL (1 << 5) /* 8168c and later */
+
Config3 = 0x54,
Config4 = 0x55,
Config5 = 0x56,
@@ -1360,7 +1362,6 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
u16 reg;
u8 mask;
} cfg[] = {
- { WAKE_ANY, Config1, PMEnable },
{ WAKE_PHY, Config3, LinkUp },
{ WAKE_MAGIC, Config3, MagicPacket },
{ WAKE_UCAST, Config5, UWF },
@@ -1368,16 +1369,32 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
{ WAKE_MCAST, Config5, MWF },
{ WAKE_ANY, Config5, LanWake }
};
+ u8 options;
RTL_W8(Cfg9346, Cfg9346_Unlock);
for (i = 0; i < ARRAY_SIZE(cfg); i++) {
- u8 options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
+ options = RTL_R8(cfg[i].reg) & ~cfg[i].mask;
if (wolopts & cfg[i].opt)
options |= cfg[i].mask;
RTL_W8(cfg[i].reg, options);
}
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_17:
+ options = RTL_R8(Config1) & ~PMEnable;
+ if (wolopts)
+ options |= PMEnable;
+ RTL_W8(Config1, options);
+ break;
+ default:
+ options = RTL_R8(Config2) & ~PME_SIGNAL;
+ if (wolopts)
+ options |= PME_SIGNAL;
+ RTL_W8(Config2, options);
+ break;
+ }
+
RTL_W8(Cfg9346, Cfg9346_Lock);
}
diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c
index 7145714..c0f097b 100644
--- a/drivers/net/rionet.c
+++ b/drivers/net/rionet.c
@@ -79,6 +79,7 @@ static int rionet_capable = 1;
* on system trade-offs.
*/
static struct rio_dev **rionet_active;
+static int nact; /* total number of active rionet peers */
#define is_rionet_capable(src_ops, dst_ops) \
((src_ops & RIO_SRC_OPS_DATA_MSG) && \
@@ -175,6 +176,7 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
struct ethhdr *eth = (struct ethhdr *)skb->data;
u16 destid;
unsigned long flags;
+ int add_num = 1;
local_irq_save(flags);
if (!spin_trylock(&rnet->tx_lock)) {
@@ -182,7 +184,10 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
return NETDEV_TX_LOCKED;
}
- if ((rnet->tx_cnt + 1) > RIONET_TX_RING_SIZE) {
+ if (is_multicast_ether_addr(eth->h_dest))
+ add_num = nact;
+
+ if ((rnet->tx_cnt + add_num) > RIONET_TX_RING_SIZE) {
netif_stop_queue(ndev);
spin_unlock_irqrestore(&rnet->tx_lock, flags);
printk(KERN_ERR "%s: BUG! Tx Ring full when queue awake!\n",
@@ -191,11 +196,16 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
}
if (is_multicast_ether_addr(eth->h_dest)) {
+ int count = 0;
for (i = 0; i < RIO_MAX_ROUTE_ENTRIES(rnet->mport->sys_size);
i++)
- if (rionet_active[i])
+ if (rionet_active[i]) {
rionet_queue_tx_msg(skb, ndev,
rionet_active[i]);
+ if (count)
+ atomic_inc(&skb->users);
+ count++;
+ }
} else if (RIONET_MAC_MATCH(eth->h_dest)) {
destid = RIONET_GET_DESTID(eth->h_dest);
if (rionet_active[destid])
@@ -220,14 +230,17 @@ static void rionet_dbell_event(struct rio_mport *mport, void *dev_id, u16 sid, u
if (info == RIONET_DOORBELL_JOIN) {
if (!rionet_active[sid]) {
list_for_each_entry(peer, &rionet_peers, node) {
- if (peer->rdev->destid == sid)
+ if (peer->rdev->destid == sid) {
rionet_active[sid] = peer->rdev;
+ nact++;
+ }
}
rio_mport_send_doorbell(mport, sid,
RIONET_DOORBELL_JOIN);
}
} else if (info == RIONET_DOORBELL_LEAVE) {
rionet_active[sid] = NULL;
+ nact--;
} else {
if (netif_msg_intr(rnet))
printk(KERN_WARNING "%s: unhandled doorbell\n",
@@ -524,6 +537,7 @@ static int rionet_probe(struct rio_dev *rdev, const struct rio_device_id *id)
rc = rionet_setup_netdev(rdev->net->hport, ndev);
rionet_check = 1;
+ nact = 0;
}
/*
diff --git a/drivers/net/wireless/ath/ath9k/pci.c b/drivers/net/wireless/ath/ath9k/pci.c
index 1883d39..f7e17a0 100644
--- a/drivers/net/wireless/ath/ath9k/pci.c
+++ b/drivers/net/wireless/ath/ath9k/pci.c
@@ -122,8 +122,9 @@ static void ath_pci_aspm_init(struct ath_common *common)
if (!parent)
return;
- if (ah->btcoex_hw.scheme != ATH_BTCOEX_CFG_NONE) {
- /* Bluetooth coexistance requires disabling ASPM. */
+ if ((ah->btcoex_hw.scheme != ATH_BTCOEX_CFG_NONE) &&
+ (AR_SREV_9285(ah))) {
+ /* Bluetooth coexistence requires disabling ASPM for AR9285. */
pci_read_config_byte(pdev, pos + PCI_EXP_LNKCTL, &aspm);
aspm &= ~(PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
pci_write_config_byte(pdev, pos + PCI_EXP_LNKCTL, aspm);
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index dfee1b3..9005380 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -658,8 +658,10 @@ int __devinit pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max,
/* Check if setup is sensible at all */
if (!pass &&
- (primary != bus->number || secondary <= bus->number)) {
- dev_dbg(&dev->dev, "bus configuration invalid, reconfiguring\n");
+ (primary != bus->number || secondary <= bus->number ||
+ secondary > subordinate)) {
+ dev_info(&dev->dev, "bridge configuration invalid ([bus %02x-%02x]), reconfiguring\n",
+ secondary, subordinate);
broken = 1;
}
diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c
index 0860181..4f1b10b 100644
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -519,6 +519,7 @@ struct zfcp_port *zfcp_port_enqueue(struct zfcp_adapter *adapter, u64 wwpn,
rwlock_init(&port->unit_list_lock);
INIT_LIST_HEAD(&port->unit_list);
+ atomic_set(&port->units, 0);
INIT_WORK(&port->gid_pn_work, zfcp_fc_port_did_lookup);
INIT_WORK(&port->test_link_work, zfcp_fc_link_test_work);
diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c
index 96f13ad8..79a6afe 100644
--- a/drivers/s390/scsi/zfcp_ccw.c
+++ b/drivers/s390/scsi/zfcp_ccw.c
@@ -39,17 +39,23 @@ void zfcp_ccw_adapter_put(struct zfcp_adapter *adapter)
spin_unlock_irqrestore(&zfcp_ccw_adapter_ref_lock, flags);
}
-static int zfcp_ccw_activate(struct ccw_device *cdev)
-
+/**
+ * zfcp_ccw_activate - activate adapter and wait for it to finish
+ * @cdev: pointer to belonging ccw device
+ * @clear: Status flags to clear.
+ * @tag: s390dbf trace record tag
+ */
+static int zfcp_ccw_activate(struct ccw_device *cdev, int clear, char *tag)
{
struct zfcp_adapter *adapter = zfcp_ccw_adapter_by_cdev(cdev);
if (!adapter)
return 0;
+ zfcp_erp_clear_adapter_status(adapter, clear);
zfcp_erp_set_adapter_status(adapter, ZFCP_STATUS_COMMON_RUNNING);
zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED,
- "ccresu2");
+ tag);
zfcp_erp_wait(adapter);
flush_work(&adapter->scan_work);
@@ -164,26 +170,29 @@ static int zfcp_ccw_set_online(struct ccw_device *cdev)
BUG_ON(!zfcp_reqlist_isempty(adapter->req_list));
adapter->req_no = 0;
- zfcp_ccw_activate(cdev);
+ zfcp_ccw_activate(cdev, 0, "ccsonl1");
zfcp_ccw_adapter_put(adapter);
return 0;
}
/**
- * zfcp_ccw_set_offline - set_offline function of zfcp driver
+ * zfcp_ccw_offline_sync - shut down adapter and wait for it to finish
* @cdev: pointer to belonging ccw device
+ * @set: Status flags to set.
+ * @tag: s390dbf trace record tag
*
* This function gets called by the common i/o layer and sets an adapter
* into state offline.
*/
-static int zfcp_ccw_set_offline(struct ccw_device *cdev)
+static int zfcp_ccw_offline_sync(struct ccw_device *cdev, int set, char *tag)
{
struct zfcp_adapter *adapter = zfcp_ccw_adapter_by_cdev(cdev);
if (!adapter)
return 0;
- zfcp_erp_adapter_shutdown(adapter, 0, "ccsoff1");
+ zfcp_erp_set_adapter_status(adapter, set);
+ zfcp_erp_adapter_shutdown(adapter, 0, tag);
zfcp_erp_wait(adapter);
zfcp_ccw_adapter_put(adapter);
@@ -191,6 +200,18 @@ static int zfcp_ccw_set_offline(struct ccw_device *cdev)
}
/**
+ * zfcp_ccw_set_offline - set_offline function of zfcp driver
+ * @cdev: pointer to belonging ccw device
+ *
+ * This function gets called by the common i/o layer and sets an adapter
+ * into state offline.
+ */
+static int zfcp_ccw_set_offline(struct ccw_device *cdev)
+{
+ return zfcp_ccw_offline_sync(cdev, 0, "ccsoff1");
+}
+
+/**
* zfcp_ccw_notify - ccw notify function
* @cdev: pointer to belonging ccw device
* @event: indicates if adapter was detached or attached
@@ -207,6 +228,11 @@ static int zfcp_ccw_notify(struct ccw_device *cdev, int event)
switch (event) {
case CIO_GONE:
+ if (atomic_read(&adapter->status) &
+ ZFCP_STATUS_ADAPTER_SUSPENDED) { /* notification ignore */
+ zfcp_dbf_hba_basic("ccnigo1", adapter);
+ break;
+ }
dev_warn(&cdev->dev, "The FCP device has been detached\n");
zfcp_erp_adapter_shutdown(adapter, 0, "ccnoti1");
break;
@@ -216,6 +242,11 @@ static int zfcp_ccw_notify(struct ccw_device *cdev, int event)
zfcp_erp_adapter_shutdown(adapter, 0, "ccnoti2");
break;
case CIO_OPER:
+ if (atomic_read(&adapter->status) &
+ ZFCP_STATUS_ADAPTER_SUSPENDED) { /* notification ignore */
+ zfcp_dbf_hba_basic("ccniop1", adapter);
+ break;
+ }
dev_info(&cdev->dev, "The FCP device is operational again\n");
zfcp_erp_set_adapter_status(adapter,
ZFCP_STATUS_COMMON_RUNNING);
@@ -251,6 +282,28 @@ static void zfcp_ccw_shutdown(struct ccw_device *cdev)
zfcp_ccw_adapter_put(adapter);
}
+static int zfcp_ccw_suspend(struct ccw_device *cdev)
+{
+ zfcp_ccw_offline_sync(cdev, ZFCP_STATUS_ADAPTER_SUSPENDED, "ccsusp1");
+ return 0;
+}
+
+static int zfcp_ccw_thaw(struct ccw_device *cdev)
+{
+ /* trace records for thaw and final shutdown during suspend
+ can only be found in system dump until the end of suspend
+ but not after resume because it's based on the memory image
+ right after the very first suspend (freeze) callback */
+ zfcp_ccw_activate(cdev, 0, "ccthaw1");
+ return 0;
+}
+
+static int zfcp_ccw_resume(struct ccw_device *cdev)
+{
+ zfcp_ccw_activate(cdev, ZFCP_STATUS_ADAPTER_SUSPENDED, "ccresu1");
+ return 0;
+}
+
struct ccw_driver zfcp_ccw_driver = {
.driver = {
.owner = THIS_MODULE,
@@ -263,7 +316,7 @@ struct ccw_driver zfcp_ccw_driver = {
.set_offline = zfcp_ccw_set_offline,
.notify = zfcp_ccw_notify,
.shutdown = zfcp_ccw_shutdown,
- .freeze = zfcp_ccw_set_offline,
- .thaw = zfcp_ccw_activate,
- .restore = zfcp_ccw_activate,
+ .freeze = zfcp_ccw_suspend,
+ .thaw = zfcp_ccw_thaw,
+ .restore = zfcp_ccw_resume,
};
diff --git a/drivers/s390/scsi/zfcp_cfdc.c b/drivers/s390/scsi/zfcp_cfdc.c
index fab2c25..8ed63aa 100644
--- a/drivers/s390/scsi/zfcp_cfdc.c
+++ b/drivers/s390/scsi/zfcp_cfdc.c
@@ -293,7 +293,7 @@ void zfcp_cfdc_adapter_access_changed(struct zfcp_adapter *adapter)
}
read_unlock_irqrestore(&adapter->port_list_lock, flags);
- shost_for_each_device(sdev, port->adapter->scsi_host) {
+ shost_for_each_device(sdev, adapter->scsi_host) {
zfcp_sdev = sdev_to_zfcp(sdev);
status = atomic_read(&zfcp_sdev->status);
if ((status & ZFCP_STATUS_COMMON_ACCESS_DENIED) ||
diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
index a9a816e..79b9848 100644
--- a/drivers/s390/scsi/zfcp_dbf.c
+++ b/drivers/s390/scsi/zfcp_dbf.c
@@ -191,7 +191,7 @@ void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount,
length = min((u16)sizeof(struct qdio_buffer),
(u16)ZFCP_DBF_PAY_MAX_REC);
- while ((char *)pl[payload->counter] && payload->counter < scount) {
+ while (payload->counter < scount && (char *)pl[payload->counter]) {
memcpy(payload->data, (char *)pl[payload->counter], length);
debug_event(dbf->pay, 1, payload, zfcp_dbf_plen(length));
payload->counter++;
@@ -200,6 +200,26 @@ void zfcp_dbf_hba_def_err(struct zfcp_adapter *adapter, u64 req_id, u16 scount,
spin_unlock_irqrestore(&dbf->pay_lock, flags);
}
+/**
+ * zfcp_dbf_hba_basic - trace event for basic adapter events
+ * @adapter: pointer to struct zfcp_adapter
+ */
+void zfcp_dbf_hba_basic(char *tag, struct zfcp_adapter *adapter)
+{
+ struct zfcp_dbf *dbf = adapter->dbf;
+ struct zfcp_dbf_hba *rec = &dbf->hba_buf;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dbf->hba_lock, flags);
+ memset(rec, 0, sizeof(*rec));
+
+ memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
+ rec->id = ZFCP_DBF_HBA_BASIC;
+
+ debug_event(dbf->hba, 1, rec, sizeof(*rec));
+ spin_unlock_irqrestore(&dbf->hba_lock, flags);
+}
+
static void zfcp_dbf_set_common(struct zfcp_dbf_rec *rec,
struct zfcp_adapter *adapter,
struct zfcp_port *port,
diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h
index 714f087..3ac7a4b 100644
--- a/drivers/s390/scsi/zfcp_dbf.h
+++ b/drivers/s390/scsi/zfcp_dbf.h
@@ -154,6 +154,7 @@ enum zfcp_dbf_hba_id {
ZFCP_DBF_HBA_RES = 1,
ZFCP_DBF_HBA_USS = 2,
ZFCP_DBF_HBA_BIT = 3,
+ ZFCP_DBF_HBA_BASIC = 4,
};
/**
diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h
index ed5d921..f172b84 100644
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -77,6 +77,7 @@ struct zfcp_reqlist;
#define ZFCP_STATUS_ADAPTER_SIOSL_ISSUED 0x00000004
#define ZFCP_STATUS_ADAPTER_XCONFIG_OK 0x00000008
#define ZFCP_STATUS_ADAPTER_HOST_CON_INIT 0x00000010
+#define ZFCP_STATUS_ADAPTER_SUSPENDED 0x00000040
#define ZFCP_STATUS_ADAPTER_ERP_PENDING 0x00000100
#define ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED 0x00000200
#define ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED 0x00000400
@@ -204,6 +205,7 @@ struct zfcp_port {
struct zfcp_adapter *adapter; /* adapter used to access port */
struct list_head unit_list; /* head of logical unit list */
rwlock_t unit_list_lock; /* unit list lock */
+ atomic_t units; /* zfcp_unit count */
atomic_t status; /* status of this remote port */
u64 wwnn; /* WWNN if known */
u64 wwpn; /* WWPN */
diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
index 2302e1c..ef9e502 100644
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -54,6 +54,7 @@ extern void zfcp_dbf_hba_fsf_res(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_bit_err(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_berr(struct zfcp_dbf *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_def_err(struct zfcp_adapter *, u64, u16, void **);
+extern void zfcp_dbf_hba_basic(char *, struct zfcp_adapter *);
extern void zfcp_dbf_san_req(char *, struct zfcp_fsf_req *, u32);
extern void zfcp_dbf_san_res(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_san_in_els(char *, struct zfcp_fsf_req *);
@@ -158,6 +159,7 @@ extern void zfcp_scsi_dif_sense_error(struct scsi_cmnd *, int);
extern struct attribute_group zfcp_sysfs_unit_attrs;
extern struct attribute_group zfcp_sysfs_adapter_attrs;
extern struct attribute_group zfcp_sysfs_port_attrs;
+extern struct mutex zfcp_sysfs_port_units_mutex;
extern struct device_attribute *zfcp_sysfs_sdev_attrs[];
extern struct device_attribute *zfcp_sysfs_shost_attrs[];
diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
index e9a787e..8c849f0 100644
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -219,7 +219,7 @@ static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req)
return;
}
- zfcp_dbf_hba_fsf_uss("fssrh_2", req);
+ zfcp_dbf_hba_fsf_uss("fssrh_4", req);
switch (sr_buf->status_type) {
case FSF_STATUS_READ_PORT_CLOSED:
@@ -771,12 +771,14 @@ out:
static void zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *req)
{
struct scsi_device *sdev = req->data;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
union fsf_status_qual *fsq = &req->qtcb->header.fsf_status_qual;
if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
switch (req->qtcb->header.fsf_status) {
case FSF_PORT_HANDLE_NOT_VALID:
if (fsq->word[0] == fsq->word[1]) {
@@ -885,7 +887,7 @@ static void zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *req)
switch (header->fsf_status) {
case FSF_GOOD:
- zfcp_dbf_san_res("fsscth1", req);
+ zfcp_dbf_san_res("fsscth2", req);
ct->status = 0;
break;
case FSF_SERVICE_CLASS_NOT_SUPPORTED:
@@ -1739,13 +1741,15 @@ static void zfcp_fsf_open_lun_handler(struct zfcp_fsf_req *req)
{
struct zfcp_adapter *adapter = req->adapter;
struct scsi_device *sdev = req->data;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
struct fsf_qtcb_header *header = &req->qtcb->header;
struct fsf_qtcb_bottom_support *bottom = &req->qtcb->bottom.support;
if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED |
ZFCP_STATUS_COMMON_ACCESS_BOXED |
ZFCP_STATUS_LUN_SHARED |
@@ -1856,11 +1860,13 @@ out:
static void zfcp_fsf_close_lun_handler(struct zfcp_fsf_req *req)
{
struct scsi_device *sdev = req->data;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
switch (req->qtcb->header.fsf_status) {
case FSF_PORT_HANDLE_NOT_VALID:
zfcp_erp_adapter_reopen(zfcp_sdev->port->adapter, 0, "fscuh_1");
@@ -1950,7 +1956,7 @@ static void zfcp_fsf_req_trace(struct zfcp_fsf_req *req, struct scsi_cmnd *scsi)
{
struct fsf_qual_latency_info *lat_in;
struct latency_cont *lat = NULL;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scsi->device);
+ struct zfcp_scsi_dev *zfcp_sdev;
struct zfcp_blk_drv_data blktrc;
int ticks = req->adapter->timer_ticks;
@@ -1965,6 +1971,7 @@ static void zfcp_fsf_req_trace(struct zfcp_fsf_req *req, struct scsi_cmnd *scsi)
if (req->adapter->adapter_features & FSF_FEATURE_MEASUREMENT_DATA &&
!(req->status & ZFCP_STATUS_FSFREQ_ERROR)) {
+ zfcp_sdev = sdev_to_zfcp(scsi->device);
blktrc.flags |= ZFCP_BLK_LAT_VALID;
blktrc.channel_lat = lat_in->channel_lat * ticks;
blktrc.fabric_lat = lat_in->fabric_lat * ticks;
@@ -2002,12 +2009,14 @@ static void zfcp_fsf_fcp_handler_common(struct zfcp_fsf_req *req)
{
struct scsi_cmnd *scmnd = req->data;
struct scsi_device *sdev = scmnd->device;
- struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
+ struct zfcp_scsi_dev *zfcp_sdev;
struct fsf_qtcb_header *header = &req->qtcb->header;
if (unlikely(req->status & ZFCP_STATUS_FSFREQ_ERROR))
return;
+ zfcp_sdev = sdev_to_zfcp(sdev);
+
switch (header->fsf_status) {
case FSF_HANDLE_MISMATCH:
case FSF_PORT_HANDLE_NOT_VALID:
diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
index e14da57..e76d003 100644
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -102,18 +102,22 @@ static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
{
struct zfcp_qdio *qdio = (struct zfcp_qdio *) parm;
struct zfcp_adapter *adapter = qdio->adapter;
- struct qdio_buffer_element *sbale;
int sbal_no, sbal_idx;
- void *pl[ZFCP_QDIO_MAX_SBALS_PER_REQ + 1];
- u64 req_id;
- u8 scount;
if (unlikely(qdio_err)) {
- memset(pl, 0, ZFCP_QDIO_MAX_SBALS_PER_REQ * sizeof(void *));
if (zfcp_adapter_multi_buffer_active(adapter)) {
+ void *pl[ZFCP_QDIO_MAX_SBALS_PER_REQ + 1];
+ struct qdio_buffer_element *sbale;
+ u64 req_id;
+ u8 scount;
+
+ memset(pl, 0,
+ ZFCP_QDIO_MAX_SBALS_PER_REQ * sizeof(void *));
sbale = qdio->res_q[idx]->element;
req_id = (u64) sbale->addr;
- scount = sbale->scount + 1; /* incl. signaling SBAL */
+ scount = min(sbale->scount + 1,
+ ZFCP_QDIO_MAX_SBALS_PER_REQ + 1);
+ /* incl. signaling SBAL */
for (sbal_no = 0; sbal_no < scount; sbal_no++) {
sbal_idx = (idx + sbal_no) %
diff --git a/drivers/s390/scsi/zfcp_sysfs.c b/drivers/s390/scsi/zfcp_sysfs.c
index cdc4ff7..9e62210 100644
--- a/drivers/s390/scsi/zfcp_sysfs.c
+++ b/drivers/s390/scsi/zfcp_sysfs.c
@@ -227,6 +227,8 @@ static ssize_t zfcp_sysfs_port_rescan_store(struct device *dev,
static ZFCP_DEV_ATTR(adapter, port_rescan, S_IWUSR, NULL,
zfcp_sysfs_port_rescan_store);
+DEFINE_MUTEX(zfcp_sysfs_port_units_mutex);
+
static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
@@ -249,6 +251,16 @@ static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
else
retval = 0;
+ mutex_lock(&zfcp_sysfs_port_units_mutex);
+ if (atomic_read(&port->units) > 0) {
+ retval = -EBUSY;
+ mutex_unlock(&zfcp_sysfs_port_units_mutex);
+ goto out;
+ }
+ /* port is about to be removed, so no more unit_add */
+ atomic_set(&port->units, -1);
+ mutex_unlock(&zfcp_sysfs_port_units_mutex);
+
write_lock_irq(&adapter->port_list_lock);
list_del(&port->list);
write_unlock_irq(&adapter->port_list_lock);
@@ -289,12 +301,14 @@ static ssize_t zfcp_sysfs_unit_add_store(struct device *dev,
{
struct zfcp_port *port = container_of(dev, struct zfcp_port, dev);
u64 fcp_lun;
+ int retval;
if (strict_strtoull(buf, 0, (unsigned long long *) &fcp_lun))
return -EINVAL;
- if (zfcp_unit_add(port, fcp_lun))
- return -EINVAL;
+ retval = zfcp_unit_add(port, fcp_lun);
+ if (retval)
+ return retval;
return count;
}
diff --git a/drivers/s390/scsi/zfcp_unit.c b/drivers/s390/scsi/zfcp_unit.c
index 20796eb..4e6a535 100644
--- a/drivers/s390/scsi/zfcp_unit.c
+++ b/drivers/s390/scsi/zfcp_unit.c
@@ -104,7 +104,7 @@ static void zfcp_unit_release(struct device *dev)
{
struct zfcp_unit *unit = container_of(dev, struct zfcp_unit, dev);
- put_device(&unit->port->dev);
+ atomic_dec(&unit->port->units);
kfree(unit);
}
@@ -119,16 +119,27 @@ static void zfcp_unit_release(struct device *dev)
int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
{
struct zfcp_unit *unit;
+ int retval = 0;
+
+ mutex_lock(&zfcp_sysfs_port_units_mutex);
+ if (atomic_read(&port->units) == -1) {
+ /* port is already gone */
+ retval = -ENODEV;
+ goto out;
+ }
unit = zfcp_unit_find(port, fcp_lun);
if (unit) {
put_device(&unit->dev);
- return -EEXIST;
+ retval = -EEXIST;
+ goto out;
}
unit = kzalloc(sizeof(struct zfcp_unit), GFP_KERNEL);
- if (!unit)
- return -ENOMEM;
+ if (!unit) {
+ retval = -ENOMEM;
+ goto out;
+ }
unit->port = port;
unit->fcp_lun = fcp_lun;
@@ -139,28 +150,33 @@ int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
if (dev_set_name(&unit->dev, "0x%016llx",
(unsigned long long) fcp_lun)) {
kfree(unit);
- return -ENOMEM;
+ retval = -ENOMEM;
+ goto out;
}
- get_device(&port->dev);
-
if (device_register(&unit->dev)) {
put_device(&unit->dev);
- return -ENOMEM;
+ retval = -ENOMEM;
+ goto out;
}
if (sysfs_create_group(&unit->dev.kobj, &zfcp_sysfs_unit_attrs)) {
device_unregister(&unit->dev);
- return -EINVAL;
+ retval = -EINVAL;
+ goto out;
}
+ atomic_inc(&port->units); /* under zfcp_sysfs_port_units_mutex ! */
+
write_lock_irq(&port->unit_list_lock);
list_add_tail(&unit->list, &port->unit_list);
write_unlock_irq(&port->unit_list_lock);
zfcp_unit_scsi_scan(unit);
- return 0;
+out:
+ mutex_unlock(&zfcp_sysfs_port_units_mutex);
+ return retval;
}
/**
diff --git a/drivers/scsi/atp870u.c b/drivers/scsi/atp870u.c
index 7e6eca4..59fc5a1 100644
--- a/drivers/scsi/atp870u.c
+++ b/drivers/scsi/atp870u.c
@@ -1174,7 +1174,16 @@ wait_io1:
outw(val, tmport);
outb(2, 0x80);
TCM_SYNC:
- udelay(0x800);
+ /*
+ * The funny division into multiple delays is to accommodate
+ * arches like ARM where udelay() multiplies its argument by
+ * a large number to initialize a loop counter. To avoid
+ * overflow, the maximum supported udelay is 2000 microseconds.
+ *
+ * XXX it would be more polite to find a way to use msleep()
+ */
+ mdelay(2);
+ udelay(48);
if ((inb(tmport) & 0x80) == 0x00) { /* bsy ? */
outw(0, tmport--);
outb(0, tmport);
diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
index 4ef0212..e5a4423 100644
--- a/drivers/scsi/device_handler/scsi_dh_alua.c
+++ b/drivers/scsi/device_handler/scsi_dh_alua.c
@@ -578,8 +578,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_dh_data *h)
h->state = TPGS_STATE_STANDBY;
break;
case TPGS_STATE_OFFLINE:
- case TPGS_STATE_UNAVAILABLE:
- /* Path unusable for unavailable/offline */
+ /* Path unusable */
err = SCSI_DH_DEV_OFFLINED;
break;
default:
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index be9aad8..22523aa 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -532,12 +532,42 @@ static void set_performant_mode(struct ctlr_info *h, struct CommandList *c)
c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1);
}
+static int is_firmware_flash_cmd(u8 *cdb)
+{
+ return cdb[0] == BMIC_WRITE && cdb[6] == BMIC_FLASH_FIRMWARE;
+}
+
+/*
+ * During firmware flash, the heartbeat register may not update as frequently
+ * as it should. So we dial down lockup detection during firmware flash, and
+ * dial it back up when firmware flash completes.
+ */
+#define HEARTBEAT_SAMPLE_INTERVAL_DURING_FLASH (240 * HZ)
+#define HEARTBEAT_SAMPLE_INTERVAL (30 * HZ)
+static void dial_down_lockup_detection_during_fw_flash(struct ctlr_info *h,
+ struct CommandList *c)
+{
+ if (!is_firmware_flash_cmd(c->Request.CDB))
+ return;
+ atomic_inc(&h->firmware_flash_in_progress);
+ h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL_DURING_FLASH;
+}
+
+static void dial_up_lockup_detection_on_fw_flash_complete(struct ctlr_info *h,
+ struct CommandList *c)
+{
+ if (is_firmware_flash_cmd(c->Request.CDB) &&
+ atomic_dec_and_test(&h->firmware_flash_in_progress))
+ h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
+}
+
static void enqueue_cmd_and_start_io(struct ctlr_info *h,
struct CommandList *c)
{
unsigned long flags;
set_performant_mode(h, c);
+ dial_down_lockup_detection_during_fw_flash(h, c);
spin_lock_irqsave(&h->lock, flags);
addQ(&h->reqQ, c);
h->Qdepth++;
@@ -2926,7 +2956,7 @@ static void fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
c->Request.Timeout = 0; /* Don't time out */
memset(&c->Request.CDB[0], 0, sizeof(c->Request.CDB));
c->Request.CDB[0] = cmd;
- c->Request.CDB[1] = 0x03; /* Reset target above */
+ c->Request.CDB[1] = HPSA_RESET_TYPE_LUN;
/* If bytes 4-7 are zero, it means reset the */
/* LunID device */
c->Request.CDB[4] = 0x00;
@@ -3032,6 +3062,7 @@ static inline int bad_tag(struct ctlr_info *h, u32 tag_index,
static inline void finish_cmd(struct CommandList *c, u32 raw_tag)
{
removeQ(c);
+ dial_up_lockup_detection_on_fw_flash_complete(c->h, c);
if (likely(c->cmd_type == CMD_SCSI))
complete_scsi_command(c);
else if (c->cmd_type == CMD_IOCTL_PEND)
@@ -4172,9 +4203,6 @@ static void controller_lockup_detected(struct ctlr_info *h)
spin_unlock_irqrestore(&h->lock, flags);
}
-#define HEARTBEAT_SAMPLE_INTERVAL (10 * HZ)
-#define HEARTBEAT_CHECK_MINIMUM_INTERVAL (HEARTBEAT_SAMPLE_INTERVAL / 2)
-
static void detect_controller_lockup(struct ctlr_info *h)
{
u64 now;
@@ -4185,7 +4213,7 @@ static void detect_controller_lockup(struct ctlr_info *h)
now = get_jiffies_64();
/* If we've received an interrupt recently, we're ok. */
if (time_after64(h->last_intr_timestamp +
- (HEARTBEAT_CHECK_MINIMUM_INTERVAL), now))
+ (h->heartbeat_sample_interval), now))
return;
/*
@@ -4194,7 +4222,7 @@ static void detect_controller_lockup(struct ctlr_info *h)
* otherwise don't care about signals in this thread.
*/
if (time_after64(h->last_heartbeat_timestamp +
- (HEARTBEAT_CHECK_MINIMUM_INTERVAL), now))
+ (h->heartbeat_sample_interval), now))
return;
/* If heartbeat has not changed since we last looked, we're not ok. */
@@ -4236,6 +4264,7 @@ static void add_ctlr_to_lockup_detector_list(struct ctlr_info *h)
{
unsigned long flags;
+ h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
spin_lock_irqsave(&lockup_detector_lock, flags);
list_add_tail(&h->lockup_list, &hpsa_ctlr_list);
spin_unlock_irqrestore(&lockup_detector_lock, flags);
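
The hunks above relax the lockup detector while a firmware flash command is outstanding and restore the normal interval only when the last such command completes (via atomic_dec_and_test). A small userspace sketch of that counting pattern; the names and interval values below are illustrative, not the driver's:

#include <stdatomic.h>
#include <stdio.h>

#define NORMAL_INTERVAL  30     /* illustrative "seconds" */
#define FLASH_INTERVAL  240

static atomic_int flash_in_progress;
static int sample_interval = NORMAL_INTERVAL;

static void flash_start(void)
{
        atomic_fetch_add(&flash_in_progress, 1);
        sample_interval = FLASH_INTERVAL;       /* relax the watchdog */
}

static void flash_done(void)
{
        /* restore only when the last outstanding flash completes,
         * like atomic_dec_and_test() in the hunk above */
        if (atomic_fetch_sub(&flash_in_progress, 1) == 1)
                sample_interval = NORMAL_INTERVAL;
}

int main(void)
{
        flash_start();
        printf("during flash: %d\n", sample_interval);  /* prints 240 */
        flash_done();
        printf("after flash:  %d\n", sample_interval);  /* prints 30 */
        return 0;
}
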
diff --git a/drivers/scsi/hpsa.h b/drivers/scsi/hpsa.h
index 91edafb..c721509 100644
--- a/drivers/scsi/hpsa.h
+++ b/drivers/scsi/hpsa.h
@@ -124,6 +124,8 @@ struct ctlr_info {
u64 last_intr_timestamp;
u32 last_heartbeat;
u64 last_heartbeat_timestamp;
+ u32 heartbeat_sample_interval;
+ atomic_t firmware_flash_in_progress;
u32 lockup_detected;
struct list_head lockup_list;
};
diff --git a/drivers/scsi/hpsa_cmd.h b/drivers/scsi/hpsa_cmd.h
index 3fd4715..e4ea0a3 100644
--- a/drivers/scsi/hpsa_cmd.h
+++ b/drivers/scsi/hpsa_cmd.h
@@ -163,6 +163,7 @@ struct SenseSubsystem_info {
#define BMIC_WRITE 0x27
#define BMIC_CACHE_FLUSH 0xc2
#define HPSA_CACHE_FLUSH 0x01 /* C2 was already being used by HPSA */
+#define BMIC_FLASH_FIRMWARE 0xF7
/* Command List Structure */
union SCSI3Addr {
diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
index 3d391dc..36aca4b 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
@@ -1547,6 +1547,9 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
host_config = &evt_struct->iu.mad.host_config;
+ /* The transport length field is only 16-bit */
+ length = min(0xffff, length);
+
/* Set up a lun reset SRP command */
memset(host_config, 0x00, sizeof(*host_config));
host_config->common.type = VIOSRP_HOST_CONFIG_TYPE;
diff --git a/drivers/scsi/isci/init.c b/drivers/scsi/isci/init.c
index 83d08b6..5c8b0dc 100644
--- a/drivers/scsi/isci/init.c
+++ b/drivers/scsi/isci/init.c
@@ -469,7 +469,6 @@ static int __devinit isci_pci_probe(struct pci_dev *pdev, const struct pci_devic
if (sci_oem_parameters_validate(&orom->ctrl[i])) {
dev_warn(&pdev->dev,
"[%d]: invalid oem parameters detected, falling back to firmware\n", i);
- devm_kfree(&pdev->dev, orom);
orom = NULL;
break;
}
diff --git a/drivers/scsi/isci/probe_roms.c b/drivers/scsi/isci/probe_roms.c
index b5f4341..7cd637d 100644
--- a/drivers/scsi/isci/probe_roms.c
+++ b/drivers/scsi/isci/probe_roms.c
@@ -104,7 +104,6 @@ struct isci_orom *isci_request_oprom(struct pci_dev *pdev)
if (i >= len) {
dev_err(&pdev->dev, "oprom parse error\n");
- devm_kfree(&pdev->dev, rom);
rom = NULL;
}
pci_unmap_biosrom(oprom);
diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index bb7c482..08d48a3 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -1023,33 +1023,31 @@ static void __scsi_remove_target(struct scsi_target *starget)
void scsi_remove_target(struct device *dev)
{
struct Scsi_Host *shost = dev_to_shost(dev->parent);
- struct scsi_target *starget, *found;
+ struct scsi_target *starget, *last = NULL;
unsigned long flags;
- restart:
- found = NULL;
+ /* remove targets, being careful to look up the next entry before
+ * deleting the last one
+ */
spin_lock_irqsave(shost->host_lock, flags);
list_for_each_entry(starget, &shost->__targets, siblings) {
if (starget->state == STARGET_DEL)
continue;
if (starget->dev.parent == dev || &starget->dev == dev) {
- found = starget;
- found->reap_ref++;
- break;
+ /* assuming new targets arrive at the end */
+ starget->reap_ref++;
+ spin_unlock_irqrestore(shost->host_lock, flags);
+ if (last)
+ scsi_target_reap(last);
+ last = starget;
+ __scsi_remove_target(starget);
+ spin_lock_irqsave(shost->host_lock, flags);
}
}
spin_unlock_irqrestore(shost->host_lock, flags);
- if (found) {
- __scsi_remove_target(found);
- scsi_target_reap(found);
- /* in the case where @dev has multiple starget children,
- * continue removing.
- *
- * FIXME: does such a case exist?
- */
- goto restart;
- }
+ if (last)
+ scsi_target_reap(last);
}
EXPORT_SYMBOL(scsi_remove_target);
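
The rewritten loop pins each matching target with an extra reference before dropping the host lock, removes it, and reaps the previously removed target only once the next one is safely held, so the list walk never resumes from freed memory. A contrived userspace sketch of that "reap the previous entry once the next is pinned" idea; the node type and reap() helper are invented for illustration:

#include <stdlib.h>

/* Invented for illustration: a refcounted node on a singly linked list. */
struct node {
        struct node *next;
        int refs;               /* starts at 1: the list's own reference */
        int match;
};

static void reap(struct node *n)
{
        if (--n->refs == 0)
                free(n);
}

/* Remove every matching node, but free the previous match only once the
 * next one is pinned, mirroring the "last" pointer in the hunk above. */
static void remove_matching(struct node **head)
{
        struct node **pp = head, *n, *last = NULL;

        while ((n = *pp) != NULL) {
                if (!n->match) {
                        pp = &n->next;
                        continue;
                }
                n->refs++;              /* pin, like reap_ref++ */
                *pp = n->next;          /* unlink, like __scsi_remove_target() */
                if (last)
                        reap(last);     /* safe: n is still pinned */
                last = n;
                reap(n);                /* drop the list's own reference */
        }
        if (last)
                reap(last);             /* reap the final match */
}

int main(void)
{
        struct node *n2 = malloc(sizeof(*n2));
        struct node *n1 = malloc(sizeof(*n1));
        struct node *head;

        *n1 = (struct node){ .next = n2, .refs = 1, .match = 1 };
        *n2 = (struct node){ .next = NULL, .refs = 1, .match = 1 };
        head = n1;

        remove_matching(&head);         /* both nodes are unlinked and freed */
        return head == NULL ? 0 : 1;
}
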
diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
index 4ad2c0e..9465bce 100644
--- a/drivers/staging/comedi/comedi_fops.c
+++ b/drivers/staging/comedi/comedi_fops.c
@@ -843,7 +843,7 @@ static int parse_insn(struct comedi_device *dev, struct comedi_insn *insn,
ret = -EAGAIN;
break;
}
- ret = s->async->inttrig(dev, s, insn->data[0]);
+ ret = s->async->inttrig(dev, s, data[0]);
if (ret >= 0)
ret = 1;
break;
@@ -1088,7 +1088,6 @@ static int do_cmd_ioctl(struct comedi_device *dev,
goto cleanup;
}
- kfree(async->cmd.chanlist);
async->cmd = user_cmd;
async->cmd.data = NULL;
/* load channel/gain list */
@@ -1833,6 +1832,8 @@ void do_become_nonbusy(struct comedi_device *dev, struct comedi_subdevice *s)
if (async) {
comedi_reset_async_buf(async);
async->inttrig = NULL;
+ kfree(async->cmd.chanlist);
+ async->cmd.chanlist = NULL;
} else {
printk(KERN_ERR
"BUG: (?) do_become_nonbusy called with async=0\n");
diff --git a/drivers/staging/comedi/drivers/jr3_pci.c b/drivers/staging/comedi/drivers/jr3_pci.c
index 8d98cf4..c8b7eed 100644
--- a/drivers/staging/comedi/drivers/jr3_pci.c
+++ b/drivers/staging/comedi/drivers/jr3_pci.c
@@ -913,7 +913,7 @@ static int jr3_pci_attach(struct comedi_device *dev,
}
/* Reset DSP card */
- devpriv->iobase->channel[0].reset = 0;
+ writel(0, &devpriv->iobase->channel[0].reset);
result = comedi_load_firmware(dev, "jr3pci.idm", jr3_download_firmware);
printk("Firmare load %d\n", result);
diff --git a/drivers/staging/comedi/drivers/s626.c b/drivers/staging/comedi/drivers/s626.c
index 23fc64b..c72128f 100644
--- a/drivers/staging/comedi/drivers/s626.c
+++ b/drivers/staging/comedi/drivers/s626.c
@@ -2370,7 +2370,7 @@ static int s626_enc_insn_config(struct comedi_device *dev,
/* (data==NULL) ? (Preloadvalue=0) : (Preloadvalue=data[0]); */
k->SetMode(dev, k, Setup, TRUE);
- Preload(dev, k, *(insn->data));
+ Preload(dev, k, data[0]);
k->PulseIndex(dev, k);
SetLatchSource(dev, k, valueSrclatch);
k->SetEnable(dev, k, (uint16_t) (enab != 0));
diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
index 42cdafe..b5130c8 100644
--- a/drivers/staging/speakup/speakup_soft.c
+++ b/drivers/staging/speakup/speakup_soft.c
@@ -40,7 +40,7 @@ static int softsynth_is_alive(struct spk_synth *synth);
static unsigned char get_index(void);
static struct miscdevice synth_device;
-static int initialized;
+static int init_pos;
static int misc_registered;
static struct var_t vars[] = {
@@ -194,7 +194,7 @@ static int softsynth_close(struct inode *inode, struct file *fp)
unsigned long flags;
spk_lock(flags);
synth_soft.alive = 0;
- initialized = 0;
+ init_pos = 0;
spk_unlock(flags);
/* Make sure we let applications go before leaving */
speakup_start_ttys();
@@ -239,13 +239,8 @@ static ssize_t softsynth_read(struct file *fp, char *buf, size_t count,
ch = '\x18';
} else if (synth_buffer_empty()) {
break;
- } else if (!initialized) {
- if (*init) {
- ch = *init;
- init++;
- } else {
- initialized = 1;
- }
+ } else if (init[init_pos]) {
+ ch = init[init_pos++];
} else {
ch = synth_buffer_getc();
}
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 2ff1255..f35cb10 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3204,7 +3204,6 @@ static int iscsit_build_sendtargets_response(struct iscsi_cmd *cmd)
len += 1;
if ((len + payload_len) > buffer_len) {
- spin_unlock(&tiqn->tiqn_tpg_lock);
end_of_buf = 1;
goto eob;
}
@@ -3357,6 +3356,7 @@ static int iscsit_send_reject(
hdr->opcode = ISCSI_OP_REJECT;
hdr->flags |= ISCSI_FLAG_CMD_FINAL;
hton24(hdr->dlength, ISCSI_HDR_LEN);
+ hdr->ffffffff = 0xffffffff;
cmd->stat_sn = conn->stat_sn++;
hdr->statsn = cpu_to_be32(cmd->stat_sn);
hdr->exp_cmdsn = cpu_to_be32(conn->sess->exp_cmd_sn);
diff --git a/drivers/target/iscsi/iscsi_target_core.h b/drivers/target/iscsi/iscsi_target_core.h
index 0f68197..dae283f 100644
--- a/drivers/target/iscsi/iscsi_target_core.h
+++ b/drivers/target/iscsi/iscsi_target_core.h
@@ -25,10 +25,10 @@
#define NA_DATAOUT_TIMEOUT_RETRIES 5
#define NA_DATAOUT_TIMEOUT_RETRIES_MAX 15
#define NA_DATAOUT_TIMEOUT_RETRIES_MIN 1
-#define NA_NOPIN_TIMEOUT 5
+#define NA_NOPIN_TIMEOUT 15
#define NA_NOPIN_TIMEOUT_MAX 60
#define NA_NOPIN_TIMEOUT_MIN 3
-#define NA_NOPIN_RESPONSE_TIMEOUT 5
+#define NA_NOPIN_RESPONSE_TIMEOUT 30
#define NA_NOPIN_RESPONSE_TIMEOUT_MAX 60
#define NA_NOPIN_RESPONSE_TIMEOUT_MIN 3
#define NA_RANDOM_DATAIN_PDU_OFFSETS 0
diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
index d4cf2cd..309f14c 100644
--- a/drivers/target/iscsi/iscsi_target_tpg.c
+++ b/drivers/target/iscsi/iscsi_target_tpg.c
@@ -674,6 +674,12 @@ int iscsit_ta_generate_node_acls(
pr_debug("iSCSI_TPG[%hu] - Generate Initiator Portal Group ACLs: %s\n",
tpg->tpgt, (a->generate_node_acls) ? "Enabled" : "Disabled");
+ if (flag == 1 && a->cache_dynamic_acls == 0) {
+ pr_debug("Explicitly setting cache_dynamic_acls=1 when "
+ "generate_node_acls=1\n");
+ a->cache_dynamic_acls = 1;
+ }
+
return 0;
}
@@ -713,6 +719,12 @@ int iscsit_ta_cache_dynamic_acls(
return -EINVAL;
}
+ if (a->generate_node_acls == 1 && flag == 0) {
+ pr_debug("Skipping cache_dynamic_acls=0 when"
+ " generate_node_acls=1\n");
+ return 0;
+ }
+
a->cache_dynamic_acls = flag;
pr_debug("iSCSI_TPG[%hu] - Cache Dynamic Initiator Portal Group"
" ACLs %s\n", tpg->tpgt, (a->cache_dynamic_acls) ?
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index 93d4f6a..0b01bfc 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -3123,6 +3123,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!target_cg->default_groups) {
pr_err("Unable to allocate target_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
@@ -3138,6 +3139,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!hba_cg->default_groups) {
pr_err("Unable to allocate hba_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
config_group_init_type_name(&alua_group,
@@ -3153,6 +3155,7 @@ static int __init target_core_init_configfs(void)
GFP_KERNEL);
if (!alua_cg->default_groups) {
pr_err("Unable to allocate alua_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
@@ -3164,14 +3167,17 @@ static int __init target_core_init_configfs(void)
* Add core/alua/lu_gps/default_lu_gp
*/
lu_gp = core_alua_allocate_lu_gp("default_lu_gp", 1);
- if (IS_ERR(lu_gp))
+ if (IS_ERR(lu_gp)) {
+ ret = -ENOMEM;
goto out_global;
+ }
lu_gp_cg = &alua_lu_gps_group;
lu_gp_cg->default_groups = kzalloc(sizeof(struct config_group) * 2,
GFP_KERNEL);
if (!lu_gp_cg->default_groups) {
pr_err("Unable to allocate lu_gp_cg->default_groups\n");
+ ret = -ENOMEM;
goto out_global;
}
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 455a251..cafa477 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -139,6 +139,19 @@ static struct se_device *fd_create_virtdevice(
* of pure timestamp updates.
*/
flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;
+ /*
+ * Optionally allow fd_buffered_io=1 to be enabled for people
+ * who want to use the fs buffer cache as a WriteCache mechanism.
+ *
+ * This means that in the event of a hard failure, there is a risk
+ * of silent data-loss if the SCSI client has *not* performed a
+ * forced unit access (FUA) write, or issued SYNCHRONIZE_CACHE
+ * to write-out the entire device cache.
+ */
+ if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+ pr_debug("FILEIO: Disabling O_DSYNC, using buffered FILEIO\n");
+ flags &= ~O_DSYNC;
+ }
file = filp_open(dev_p, flags, 0600);
if (IS_ERR(file)) {
@@ -206,6 +219,12 @@ static struct se_device *fd_create_virtdevice(
if (!dev)
goto fail;
+ if (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) {
+ pr_debug("FILEIO: Forcing setting of emulate_write_cache=1"
+ " with FDBD_HAS_BUFFERED_IO_WCE\n");
+ dev->se_sub_dev->se_dev_attrib.emulate_write_cache = 1;
+ }
+
fd_dev->fd_dev_id = fd_host->fd_host_dev_id_count++;
fd_dev->fd_queue_depth = dev->queue_depth;
@@ -450,6 +469,7 @@ enum {
static match_table_t tokens = {
{Opt_fd_dev_name, "fd_dev_name=%s"},
{Opt_fd_dev_size, "fd_dev_size=%s"},
+ {Opt_fd_buffered_io, "fd_buffered_io=%d"},
{Opt_err, NULL}
};
@@ -461,7 +481,7 @@ static ssize_t fd_set_configfs_dev_params(
struct fd_dev *fd_dev = se_dev->se_dev_su_ptr;
char *orig, *ptr, *arg_p, *opts;
substring_t args[MAX_OPT_ARGS];
- int ret = 0, token;
+ int ret = 0, arg, token;
opts = kstrdup(page, GFP_KERNEL);
if (!opts)
@@ -505,6 +525,19 @@ static ssize_t fd_set_configfs_dev_params(
" bytes\n", fd_dev->fd_dev_size);
fd_dev->fbd_flags |= FBDF_HAS_SIZE;
break;
+ case Opt_fd_buffered_io:
+ match_int(args, &arg);
+ if (arg != 1) {
+ pr_err("bogus fd_buffered_io=%d value\n", arg);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ pr_debug("FILEIO: Using buffered I/O"
+ " operations for struct fd_dev\n");
+
+ fd_dev->fbd_flags |= FDBD_HAS_BUFFERED_IO_WCE;
+ break;
default:
break;
}
@@ -536,8 +569,10 @@ static ssize_t fd_show_configfs_dev_params(
ssize_t bl = 0;
bl = sprintf(b + bl, "TCM FILEIO ID: %u", fd_dev->fd_dev_id);
- bl += sprintf(b + bl, " File: %s Size: %llu Mode: O_DSYNC\n",
- fd_dev->fd_dev_name, fd_dev->fd_dev_size);
+ bl += sprintf(b + bl, " File: %s Size: %llu Mode: %s\n",
+ fd_dev->fd_dev_name, fd_dev->fd_dev_size,
+ (fd_dev->fbd_flags & FDBD_HAS_BUFFERED_IO_WCE) ?
+ "Buffered-WCE" : "O_DSYNC");
return bl;
}
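
The new fd_buffered_io=1 option above is parsed through the match_table_t/match_int() machinery and simply sets FDBD_HAS_BUFFERED_IO_WCE (defined in the header hunk below). A rough userspace sketch of the same "key=value with validation" parsing using only standard C; everything except the option name and the flag value is illustrative:

#include <stdio.h>

#define FDBD_HAS_BUFFERED_IO_WCE 0x04   /* value from the header hunk below */

/* Illustrative stand-in for the match_token()/match_int() parsing above:
 * accept only "fd_buffered_io=1" and reject any other value. */
static int parse_fd_option(const char *opt, unsigned int *flags)
{
        int arg;

        if (sscanf(opt, "fd_buffered_io=%d", &arg) == 1) {
                if (arg != 1) {
                        fprintf(stderr, "bogus fd_buffered_io=%d value\n", arg);
                        return -1;
                }
                *flags |= FDBD_HAS_BUFFERED_IO_WCE;
        }
        return 0;
}

int main(void)
{
        unsigned int flags = 0;

        parse_fd_option("fd_buffered_io=1", &flags);
        printf("flags = 0x%02x\n", flags);      /* prints flags = 0x04 */
        return 0;
}
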
diff --git a/drivers/target/target_core_file.h b/drivers/target/target_core_file.h
index 53ece69..6b1b6a9 100644
--- a/drivers/target/target_core_file.h
+++ b/drivers/target/target_core_file.h
@@ -18,6 +18,7 @@ struct fd_request {
#define FBDF_HAS_PATH 0x01
#define FBDF_HAS_SIZE 0x02
+#define FDBD_HAS_BUFFERED_IO_WCE 0x04
struct fd_dev {
u32 fbd_flags;
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index fc7bbba..d190269 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -108,7 +108,7 @@ struct gsm_mux_net {
*/
struct gsm_msg {
- struct gsm_msg *next;
+ struct list_head list;
u8 addr; /* DLCI address + flags */
u8 ctrl; /* Control byte + flags */
unsigned int len; /* Length of data block (can be zero) */
@@ -245,8 +245,7 @@ struct gsm_mux {
unsigned int tx_bytes; /* TX data outstanding */
#define TX_THRESH_HI 8192
#define TX_THRESH_LO 2048
- struct gsm_msg *tx_head; /* Pending data packets */
- struct gsm_msg *tx_tail;
+ struct list_head tx_list; /* Pending data packets */
/* Control messages */
struct timer_list t2_timer; /* Retransmit timer for commands */
@@ -663,7 +662,7 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
m->len = len;
m->addr = addr;
m->ctrl = ctrl;
- m->next = NULL;
+ INIT_LIST_HEAD(&m->list);
return m;
}
@@ -673,22 +672,21 @@ static struct gsm_msg *gsm_data_alloc(struct gsm_mux *gsm, u8 addr, int len,
*
* The tty device has called us to indicate that room has appeared in
* the transmit queue. Ram more data into the pipe if we have any
+ * If we have been flow-stopped by a CMD_FCOFF, then we can only
+ * send messages on DLCI0 until CMD_FCON.
*
* FIXME: lock against link layer control transmissions
*/
static void gsm_data_kick(struct gsm_mux *gsm)
{
- struct gsm_msg *msg = gsm->tx_head;
+ struct gsm_msg *msg, *nmsg;
int len;
int skip_sof = 0;
- /* FIXME: We need to apply this solely to data messages */
- if (gsm->constipated)
- return;
-
- while (gsm->tx_head != NULL) {
- msg = gsm->tx_head;
+ list_for_each_entry_safe(msg, nmsg, &gsm->tx_list, list) {
+ if (gsm->constipated && msg->addr)
+ continue;
if (gsm->encoding != 0) {
gsm->txframe[0] = GSM1_SOF;
len = gsm_stuff_frame(msg->data,
@@ -711,14 +709,13 @@ static void gsm_data_kick(struct gsm_mux *gsm)
len - skip_sof) < 0)
break;
/* FIXME: Can eliminate one SOF in many more cases */
- gsm->tx_head = msg->next;
- if (gsm->tx_head == NULL)
- gsm->tx_tail = NULL;
gsm->tx_bytes -= msg->len;
- kfree(msg);
/* For a burst of frames skip the extra SOF within the
burst */
skip_sof = 1;
+
+ list_del(&msg->list);
+ kfree(msg);
}
}
@@ -768,11 +765,7 @@ static void __gsm_data_queue(struct gsm_dlci *dlci, struct gsm_msg *msg)
msg->data = dp;
/* Add to the actual output queue */
- if (gsm->tx_tail)
- gsm->tx_tail->next = msg;
- else
- gsm->tx_head = msg;
- gsm->tx_tail = msg;
+ list_add_tail(&msg->list, &gsm->tx_list);
gsm->tx_bytes += msg->len;
gsm_data_kick(gsm);
}
@@ -875,7 +868,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
/* dlci->skb is locked by tx_lock */
if (dlci->skb == NULL) {
- dlci->skb = skb_dequeue(&dlci->skb_list);
+ dlci->skb = skb_dequeue_tail(&dlci->skb_list);
if (dlci->skb == NULL)
return 0;
first = 1;
@@ -886,7 +879,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
if (len > gsm->mtu) {
if (dlci->adaption == 3) {
/* Over long frame, bin it */
- kfree_skb(dlci->skb);
+ dev_kfree_skb_any(dlci->skb);
dlci->skb = NULL;
return 0;
}
@@ -899,8 +892,11 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
/* FIXME: need a timer or something to kick this so it can't
get stuck with no work outstanding and no buffer free */
- if (msg == NULL)
+ if (msg == NULL) {
+ skb_queue_tail(&dlci->skb_list, dlci->skb);
+ dlci->skb = NULL;
return -ENOMEM;
+ }
dp = msg->data;
if (dlci->adaption == 4) { /* Interruptible framed (Packetised Data) */
@@ -912,7 +908,7 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
skb_pull(dlci->skb, len);
__gsm_data_queue(dlci, msg);
if (last) {
- kfree_skb(dlci->skb);
+ dev_kfree_skb_any(dlci->skb);
dlci->skb = NULL;
}
return size;
@@ -971,16 +967,22 @@ static void gsm_dlci_data_sweep(struct gsm_mux *gsm)
static void gsm_dlci_data_kick(struct gsm_dlci *dlci)
{
unsigned long flags;
+ int sweep;
+
+ if (dlci->constipated)
+ return;
spin_lock_irqsave(&dlci->gsm->tx_lock, flags);
/* If we have nothing running then we need to fire up */
+ sweep = (dlci->gsm->tx_bytes < TX_THRESH_LO);
if (dlci->gsm->tx_bytes == 0) {
if (dlci->net)
gsm_dlci_data_output_framed(dlci->gsm, dlci);
else
gsm_dlci_data_output(dlci->gsm, dlci);
- } else if (dlci->gsm->tx_bytes < TX_THRESH_LO)
- gsm_dlci_data_sweep(dlci->gsm);
+ }
+ if (sweep)
+ gsm_dlci_data_sweep(dlci->gsm);
spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags);
}
@@ -1027,6 +1029,7 @@ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
{
int mlines = 0;
u8 brk = 0;
+ int fc;
/* The modem status command can either contain one octet (v.24 signals)
or two octets (v.24 signals + break signals). The length field will
@@ -1038,19 +1041,21 @@ static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci,
else {
brk = modem & 0x7f;
modem = (modem >> 7) & 0x7f;
- };
+ }
/* Flow control/ready to communicate */
- if (modem & MDM_FC) {
+ fc = (modem & MDM_FC) || !(modem & MDM_RTR);
+ if (fc && !dlci->constipated) {
/* Need to throttle our output on this device */
dlci->constipated = 1;
- }
- if (modem & MDM_RTC) {
- mlines |= TIOCM_DSR | TIOCM_DTR;
+ } else if (!fc && dlci->constipated) {
dlci->constipated = 0;
gsm_dlci_data_kick(dlci);
}
+
/* Map modem bits */
+ if (modem & MDM_RTC)
+ mlines |= TIOCM_DSR | TIOCM_DTR;
if (modem & MDM_RTR)
mlines |= TIOCM_RTS | TIOCM_CTS;
if (modem & MDM_IC)
@@ -1190,6 +1195,8 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
u8 *data, int clen)
{
u8 buf[1];
+ unsigned long flags;
+
switch (command) {
case CMD_CLD: {
struct gsm_dlci *dlci = gsm->dlci[0];
@@ -1206,16 +1213,18 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
gsm_control_reply(gsm, CMD_TEST, data, clen);
break;
case CMD_FCON:
- /* Modem wants us to STFU */
- gsm->constipated = 1;
- gsm_control_reply(gsm, CMD_FCON, NULL, 0);
- break;
- case CMD_FCOFF:
/* Modem can accept data again */
gsm->constipated = 0;
- gsm_control_reply(gsm, CMD_FCOFF, NULL, 0);
+ gsm_control_reply(gsm, CMD_FCON, NULL, 0);
/* Kick the link in case it is idling */
+ spin_lock_irqsave(&gsm->tx_lock, flags);
gsm_data_kick(gsm);
+ spin_unlock_irqrestore(&gsm->tx_lock, flags);
+ break;
+ case CMD_FCOFF:
+ /* Modem wants us to STFU */
+ gsm->constipated = 1;
+ gsm_control_reply(gsm, CMD_FCOFF, NULL, 0);
break;
case CMD_MSC:
/* Out of band modem line change indicator for a DLCI */
@@ -1668,7 +1677,7 @@ static void gsm_dlci_free(struct kref *ref)
dlci->gsm->dlci[dlci->addr] = NULL;
kfifo_free(dlci->fifo);
while ((dlci->skb = skb_dequeue(&dlci->skb_list)))
- kfree_skb(dlci->skb);
+ dev_kfree_skb(dlci->skb);
kfree(dlci);
}
@@ -2007,7 +2016,7 @@ void gsm_cleanup_mux(struct gsm_mux *gsm)
{
int i;
struct gsm_dlci *dlci = gsm->dlci[0];
- struct gsm_msg *txq;
+ struct gsm_msg *txq, *ntxq;
struct gsm_control *gc;
gsm->dead = 1;
@@ -2042,11 +2051,9 @@ void gsm_cleanup_mux(struct gsm_mux *gsm)
if (gsm->dlci[i])
gsm_dlci_release(gsm->dlci[i]);
/* Now wipe the queues */
- for (txq = gsm->tx_head; txq != NULL; txq = gsm->tx_head) {
- gsm->tx_head = txq->next;
+ list_for_each_entry_safe(txq, ntxq, &gsm->tx_list, list)
kfree(txq);
- }
- gsm->tx_tail = NULL;
+ INIT_LIST_HEAD(&gsm->tx_list);
}
EXPORT_SYMBOL_GPL(gsm_cleanup_mux);
@@ -2157,6 +2164,7 @@ struct gsm_mux *gsm_alloc_mux(void)
}
spin_lock_init(&gsm->lock);
kref_init(&gsm->ref);
+ INIT_LIST_HEAD(&gsm->tx_list);
gsm->t1 = T1;
gsm->t2 = T2;
@@ -2273,7 +2281,7 @@ static void gsmld_receive_buf(struct tty_struct *tty, const unsigned char *cp,
gsm->error(gsm, *dp, flags);
break;
default:
- WARN_ONCE("%s: unknown flag %d\n",
+ WARN_ONCE(1, "%s: unknown flag %d\n",
tty_name(tty, buf), flags);
break;
}
@@ -2377,12 +2385,12 @@ static void gsmld_write_wakeup(struct tty_struct *tty)
/* Queue poll */
clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ spin_lock_irqsave(&gsm->tx_lock, flags);
gsm_data_kick(gsm);
if (gsm->tx_bytes < TX_THRESH_LO) {
- spin_lock_irqsave(&gsm->tx_lock, flags);
gsm_dlci_data_sweep(gsm);
- spin_unlock_irqrestore(&gsm->tx_lock, flags);
}
+ spin_unlock_irqrestore(&gsm->tx_lock, flags);
}
/**
@@ -2889,6 +2897,10 @@ static int gsmtty_open(struct tty_struct *tty, struct file *filp)
gsm = gsm_mux[mux];
if (gsm->dead)
return -EL2HLT;
+ /* If DLCI 0 is not yet fully open, return an error. This is ok from a locking
+ perspective as we don't have to worry about this if DLCI0 is lost */
+ if (gsm->dlci[0] && gsm->dlci[0]->state != DLCI_OPEN)
+ return -EL2NSYNC;
dlci = gsm->dlci[line];
if (dlci == NULL)
dlci = gsm_dlci_alloc(gsm, line);
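
The n_gsm changes above replace the hand-rolled tx_head/tx_tail queue with a list_head walked by list_for_each_entry_safe(), so entries can be freed, or skipped while flow-controlled, without breaking the iteration. A small userspace sketch of that safe walk on a plain singly linked list; the msg type and the constipated flag here are illustrative:

#include <stdlib.h>

/* Illustrative only: a pending-message queue walked the way
 * gsm_data_kick() now walks gsm->tx_list. */
struct msg {
        struct msg *next;
        int addr;               /* 0 == control channel (DLCI 0) */
};

/* Send what we are allowed to send; while flow-controlled (constipated),
 * only DLCI 0 messages may go out. Sent messages are unlinked and freed
 * without breaking the walk. */
static void kick(struct msg **head, int constipated)
{
        struct msg **pp = head, *m;

        while ((m = *pp) != NULL) {
                if (constipated && m->addr) {
                        pp = &m->next;  /* skip, keep it queued */
                        continue;
                }
                *pp = m->next;          /* unlink before freeing */
                free(m);                /* "transmit" and release */
        }
}

int main(void)
{
        struct msg *data_msg = malloc(sizeof(*data_msg));
        struct msg *ctrl_msg = malloc(sizeof(*ctrl_msg));
        struct msg *head;

        *ctrl_msg = (struct msg){ .next = data_msg, .addr = 0 };
        *data_msg = (struct msg){ .next = NULL, .addr = 1 };
        head = ctrl_msg;

        kick(&head, 1);         /* flow-controlled: only DLCI 0 goes out */
        kick(&head, 0);         /* flow restored: the rest drains */
        return head == NULL ? 0 : 1;
}
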
diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
index 39d6ab6..8481aae 100644
--- a/drivers/tty/n_tty.c
+++ b/drivers/tty/n_tty.c
@@ -1728,7 +1728,8 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
do_it_again:
- BUG_ON(!tty->read_buf);
+ if (WARN_ON(!tty->read_buf))
+ return -EAGAIN;
c = job_control(tty, file);
if (c < 0)
diff --git a/drivers/tty/serial/8250_pci.c b/drivers/tty/serial/8250_pci.c
index 482d51e..e7d82c1 100644
--- a/drivers/tty/serial/8250_pci.c
+++ b/drivers/tty/serial/8250_pci.c
@@ -1118,6 +1118,8 @@ pci_xr17c154_setup(struct serial_private *priv,
#define PCI_SUBDEVICE_ID_OCTPRO422 0x0208
#define PCI_SUBDEVICE_ID_POCTAL232 0x0308
#define PCI_SUBDEVICE_ID_POCTAL422 0x0408
+#define PCI_SUBDEVICE_ID_SIIG_DUAL_00 0x2500
+#define PCI_SUBDEVICE_ID_SIIG_DUAL_30 0x2530
#define PCI_VENDOR_ID_ADVANTECH 0x13fe
#define PCI_DEVICE_ID_INTEL_CE4100_UART 0x2e66
#define PCI_DEVICE_ID_ADVANTECH_PCI3620 0x3620
@@ -3168,8 +3170,11 @@ static struct pci_device_id serial_pci_tbl[] = {
* For now just used the hex ID 0x950a.
*/
{ PCI_VENDOR_ID_OXSEMI, 0x950a,
- PCI_SUBVENDOR_ID_SIIG, PCI_SUBDEVICE_ID_SIIG_DUAL_SERIAL, 0, 0,
- pbn_b0_2_115200 },
+ PCI_SUBVENDOR_ID_SIIG, PCI_SUBDEVICE_ID_SIIG_DUAL_00,
+ 0, 0, pbn_b0_2_115200 },
+ { PCI_VENDOR_ID_OXSEMI, 0x950a,
+ PCI_SUBVENDOR_ID_SIIG, PCI_SUBDEVICE_ID_SIIG_DUAL_30,
+ 0, 0, pbn_b0_2_115200 },
{ PCI_VENDOR_ID_OXSEMI, 0x950a,
PCI_ANY_ID, PCI_ANY_ID, 0, 0,
pbn_b0_2_1130000 },
diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
index 6da8cf8..fe9f111 100644
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -1627,13 +1627,26 @@ pl011_set_termios(struct uart_port *port, struct ktermios *termios,
old_cr &= ~ST_UART011_CR_OVSFACT;
}
+ /*
+ * Workaround for the ST Micro oversampling variants to
+ * increase the bitrate slightly, by lowering the divisor,
+ * to avoid delayed sampling of the start bit at high speeds,
+ * otherwise we see data corruption.
+ */
+ if (uap->vendor->oversampling) {
+ if ((baud >= 3000000) && (baud < 3250000) && (quot > 1))
+ quot -= 1;
+ else if ((baud > 3250000) && (quot > 2))
+ quot -= 2;
+ }
/* Set baud rate */
writew(quot & 0x3f, port->membase + UART011_FBRD);
writew(quot >> 6, port->membase + UART011_IBRD);
/*
* ----------v----------v----------v----------v-----
- * NOTE: MUST BE WRITTEN AFTER UARTLCR_M & UARTLCR_L
+ * NOTE: lcrh_tx and lcrh_rx MUST BE WRITTEN AFTER
+ * UART011_FBRD & UART011_IBRD.
* ----------^----------^----------^----------^-----
*/
writew(lcr_h, port->membase + uap->lcrh_rx);
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index a40ab98..4cddbfc 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1680,6 +1680,7 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
{
struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller);
struct dev_info *dev_info, *next;
+ struct xhci_cd *cur_cd, *next_cd;
unsigned long flags;
int size;
int i, j, num_ports;
@@ -1701,6 +1702,11 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
xhci_ring_free(xhci, xhci->cmd_ring);
xhci->cmd_ring = NULL;
xhci_dbg(xhci, "Freed command ring\n");
+ list_for_each_entry_safe(cur_cd, next_cd,
+ &xhci->cancel_cmd_list, cancel_cmd_list) {
+ list_del(&cur_cd->cancel_cmd_list);
+ kfree(cur_cd);
+ }
for (i = 1; i < MAX_HC_SLOTS; ++i)
xhci_free_virt_device(xhci, i);
@@ -2246,6 +2252,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
xhci->cmd_ring = xhci_ring_alloc(xhci, 1, true, false, flags);
if (!xhci->cmd_ring)
goto fail;
+ INIT_LIST_HEAD(&xhci->cancel_cmd_list);
xhci_dbg(xhci, "Allocated command ring at %p\n", xhci->cmd_ring);
xhci_dbg(xhci, "First segment DMA is 0x%llx\n",
(unsigned long long)xhci->cmd_ring->first_seg->dma);
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index bddcbfc..4ed7572 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -99,6 +99,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
* PPT chipsets.
*/
xhci->quirks |= XHCI_SPURIOUS_REBOOT;
+ xhci->quirks |= XHCI_AVOID_BEI;
}
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index c7c530c..950aef8 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -309,12 +309,123 @@ static int room_on_ring(struct xhci_hcd *xhci, struct xhci_ring *ring,
/* Ring the host controller doorbell after placing a command on the ring */
void xhci_ring_cmd_db(struct xhci_hcd *xhci)
{
+ if (!(xhci->cmd_ring_state & CMD_RING_STATE_RUNNING))
+ return;
+
xhci_dbg(xhci, "// Ding dong!\n");
xhci_writel(xhci, DB_VALUE_HOST, &xhci->dba->doorbell[0]);
/* Flush PCI posted writes */
xhci_readl(xhci, &xhci->dba->doorbell[0]);
}
+static int xhci_abort_cmd_ring(struct xhci_hcd *xhci)
+{
+ u64 temp_64;
+ int ret;
+
+ xhci_dbg(xhci, "Abort command ring\n");
+
+ if (!(xhci->cmd_ring_state & CMD_RING_STATE_RUNNING)) {
+ xhci_dbg(xhci, "The command ring isn't running, "
+ "Have the command ring been stopped?\n");
+ return 0;
+ }
+
+ temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
+ if (!(temp_64 & CMD_RING_RUNNING)) {
+ xhci_dbg(xhci, "Command ring had been stopped\n");
+ return 0;
+ }
+ xhci->cmd_ring_state = CMD_RING_STATE_ABORTED;
+ xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
+ &xhci->op_regs->cmd_ring);
+
+ /* Section 4.6.1.2 of xHCI 1.0 spec says software should
+ * time the completion of all xHCI commands, including
+ * the Command Abort operation. If software doesn't see
+ * CRR negated in a timely manner (e.g. longer than 5
+ * seconds), then it should assume that there are
+ * larger problems with the xHC and assert HCRST.
+ */
+ ret = handshake(xhci, &xhci->op_regs->cmd_ring,
+ CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
+ if (ret < 0) {
+ xhci_err(xhci, "Stopped the command ring failed, "
+ "maybe the host is dead\n");
+ xhci->xhc_state |= XHCI_STATE_DYING;
+ xhci_quiesce(xhci);
+ xhci_halt(xhci);
+ return -ESHUTDOWN;
+ }
+
+ return 0;
+}
+
+static int xhci_queue_cd(struct xhci_hcd *xhci,
+ struct xhci_command *command,
+ union xhci_trb *cmd_trb)
+{
+ struct xhci_cd *cd;
+ cd = kzalloc(sizeof(struct xhci_cd), GFP_ATOMIC);
+ if (!cd)
+ return -ENOMEM;
+ INIT_LIST_HEAD(&cd->cancel_cmd_list);
+
+ cd->command = command;
+ cd->cmd_trb = cmd_trb;
+ list_add_tail(&cd->cancel_cmd_list, &xhci->cancel_cmd_list);
+
+ return 0;
+}
+
+/*
+ * Cancel a command that has been issued.
+ *
+ * Some commands may hang while waiting for acknowledgement from the
+ * USB device. That is outside of the xHC's ability to control and
+ * will cause the command ring to become blocked. When this occurs,
+ * software should intervene to recover the command ring.
+ * See Sections 4.6.1.1 and 4.6.1.2.
+ */
+int xhci_cancel_cmd(struct xhci_hcd *xhci, struct xhci_command *command,
+ union xhci_trb *cmd_trb)
+{
+ int retval = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&xhci->lock, flags);
+
+ if (xhci->xhc_state & XHCI_STATE_DYING) {
+ xhci_warn(xhci, "Abort the command ring,"
+ " but the xHCI is dead.\n");
+ retval = -ESHUTDOWN;
+ goto fail;
+ }
+
+ /* queue the cmd descriptor to cancel_cmd_list */
+ retval = xhci_queue_cd(xhci, command, cmd_trb);
+ if (retval) {
+ xhci_warn(xhci, "Queuing command descriptor failed.\n");
+ goto fail;
+ }
+
+ /* abort command ring */
+ retval = xhci_abort_cmd_ring(xhci);
+ if (retval) {
+ xhci_err(xhci, "Abort command ring failed\n");
+ if (unlikely(retval == -ESHUTDOWN)) {
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ usb_hc_died(xhci_to_hcd(xhci)->primary_hcd);
+ xhci_dbg(xhci, "xHCI host controller is dead.\n");
+ return retval;
+ }
+ }
+
+fail:
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ return retval;
+}
+
void xhci_ring_ep_doorbell(struct xhci_hcd *xhci,
unsigned int slot_id,
unsigned int ep_index,
@@ -1043,6 +1154,20 @@ static void handle_reset_ep_completion(struct xhci_hcd *xhci,
}
}
+/* Complete the command and delete it from the device's command queue.
+ */
+static void xhci_complete_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
+ struct xhci_command *command, u32 status)
+{
+ command->status = status;
+ list_del(&command->cmd_list);
+ if (command->completion)
+ complete(command->completion);
+ else
+ xhci_free_command(xhci, command);
+}
+
+
/* Check to see if a command in the device's command queue matches this one.
* Signal the completion or free the command, and return 1. Return 0 if the
* completed command isn't at the head of the command list.
@@ -1061,15 +1186,144 @@ static int handle_cmd_in_cmd_wait_list(struct xhci_hcd *xhci,
if (xhci->cmd_ring->dequeue != command->command_trb)
return 0;
- command->status = GET_COMP_CODE(le32_to_cpu(event->status));
- list_del(&command->cmd_list);
- if (command->completion)
- complete(command->completion);
- else
- xhci_free_command(xhci, command);
+ xhci_complete_cmd_in_cmd_wait_list(xhci, command,
+ GET_COMP_CODE(le32_to_cpu(event->status)));
return 1;
}
+/*
+ * Find the command trb that needs to be cancelled and modify it to a
+ * NO OP command. If the command is in the device's command wait
+ * list, finish and free it.
+ *
+ * If we can't find the command trb, we assume it has already been
+ * executed.
+ */
+static void xhci_cmd_to_noop(struct xhci_hcd *xhci, struct xhci_cd *cur_cd)
+{
+ struct xhci_segment *cur_seg;
+ union xhci_trb *cmd_trb;
+ u32 cycle_state;
+
+ if (xhci->cmd_ring->dequeue == xhci->cmd_ring->enqueue)
+ return;
+
+ /* find the current segment of command ring */
+ cur_seg = find_trb_seg(xhci->cmd_ring->first_seg,
+ xhci->cmd_ring->dequeue, &cycle_state);
+
+ /* find the command trb matched by cd from command ring */
+ for (cmd_trb = xhci->cmd_ring->dequeue;
+ cmd_trb != xhci->cmd_ring->enqueue;
+ next_trb(xhci, xhci->cmd_ring, &cur_seg, &cmd_trb)) {
+ /* If the trb is link trb, continue */
+ if (TRB_TYPE_LINK_LE32(cmd_trb->generic.field[3]))
+ continue;
+
+ if (cur_cd->cmd_trb == cmd_trb) {
+
+ /* If the command is in the device's command list, we should
+ * finish it and free the command structure.
+ */
+ if (cur_cd->command)
+ xhci_complete_cmd_in_cmd_wait_list(xhci,
+ cur_cd->command, COMP_CMD_STOP);
+
+ /* get cycle state from the origin command trb */
+ cycle_state = le32_to_cpu(cmd_trb->generic.field[3])
+ & TRB_CYCLE;
+
+ /* modify the command trb to NO OP command */
+ cmd_trb->generic.field[0] = 0;
+ cmd_trb->generic.field[1] = 0;
+ cmd_trb->generic.field[2] = 0;
+ cmd_trb->generic.field[3] = cpu_to_le32(
+ TRB_TYPE(TRB_CMD_NOOP) | cycle_state);
+ break;
+ }
+ }
+}
+
+static void xhci_cancel_cmd_in_cd_list(struct xhci_hcd *xhci)
+{
+ struct xhci_cd *cur_cd, *next_cd;
+
+ if (list_empty(&xhci->cancel_cmd_list))
+ return;
+
+ list_for_each_entry_safe(cur_cd, next_cd,
+ &xhci->cancel_cmd_list, cancel_cmd_list) {
+ xhci_cmd_to_noop(xhci, cur_cd);
+ list_del(&cur_cd->cancel_cmd_list);
+ kfree(cur_cd);
+ }
+}
+
+/*
+ * Traverse the cancel_cmd_list. If the command descriptor matching
+ * cmd_trb is found, free it and return 1; otherwise
+ * return 0.
+ */
+static int xhci_search_cmd_trb_in_cd_list(struct xhci_hcd *xhci,
+ union xhci_trb *cmd_trb)
+{
+ struct xhci_cd *cur_cd, *next_cd;
+
+ if (list_empty(&xhci->cancel_cmd_list))
+ return 0;
+
+ list_for_each_entry_safe(cur_cd, next_cd,
+ &xhci->cancel_cmd_list, cancel_cmd_list) {
+ if (cur_cd->cmd_trb == cmd_trb) {
+ if (cur_cd->command)
+ xhci_complete_cmd_in_cmd_wait_list(xhci,
+ cur_cd->command, COMP_CMD_STOP);
+ list_del(&cur_cd->cancel_cmd_list);
+ kfree(cur_cd);
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * If the cmd_trb_comp_code is COMP_CMD_ABORT, we just check whether the
+ * trb pointed to by the command ring dequeue pointer is the trb we want
+ * to cancel or not. If the cmd_trb_comp_code is COMP_CMD_STOP, we
+ * traverse the cancel_cmd_list and turn each command described by a
+ * command descriptor into a NO-OP trb.
+ */
+static int handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+ int cmd_trb_comp_code)
+{
+ int cur_trb_is_good = 0;
+
+ /* Search for the cmd trb pointed to by the command ring dequeue
+ * pointer in the command descriptor list. If it is found, free it.
+ */
+ cur_trb_is_good = xhci_search_cmd_trb_in_cd_list(xhci,
+ xhci->cmd_ring->dequeue);
+
+ if (cmd_trb_comp_code == COMP_CMD_ABORT)
+ xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+ else if (cmd_trb_comp_code == COMP_CMD_STOP) {
+ /* traverse the cancel_cmd_list and cancel
+ * each command according to its command descriptor
+ */
+ xhci_cancel_cmd_in_cd_list(xhci);
+
+ xhci->cmd_ring_state = CMD_RING_STATE_RUNNING;
+ /*
+ * ring the command ring doorbell again to restart the
+ * command ring
+ */
+ if (xhci->cmd_ring->dequeue != xhci->cmd_ring->enqueue)
+ xhci_ring_cmd_db(xhci);
+ }
+ return cur_trb_is_good;
+}
+
static void handle_cmd_completion(struct xhci_hcd *xhci,
struct xhci_event_cmd *event)
{
@@ -1095,6 +1349,22 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
xhci->error_bitmask |= 1 << 5;
return;
}
+
+ if ((GET_COMP_CODE(le32_to_cpu(event->status)) == COMP_CMD_ABORT) ||
+ (GET_COMP_CODE(le32_to_cpu(event->status)) == COMP_CMD_STOP)) {
+ /* If the return value is 0, we think the trb pointed to by the
+ * command ring dequeue pointer is a good trb. A good
+ * trb means we don't want to cancel the trb, but it has
+ * been stopped by the host. So we should handle it normally.
+ * Otherwise, the driver should invoke inc_deq() and return.
+ */
+ if (handle_stopped_cmd_ring(xhci,
+ GET_COMP_CODE(le32_to_cpu(event->status)))) {
+ inc_deq(xhci, xhci->cmd_ring, false);
+ return;
+ }
+ }
+
switch (le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3])
& TRB_TYPE_BITMASK) {
case TRB_TYPE(TRB_ENABLE_SLOT):
@@ -3356,7 +3626,9 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
} else {
td->last_trb = ep_ring->enqueue;
field |= TRB_IOC;
- if (xhci->hci_version == 0x100) {
+ if (xhci->hci_version == 0x100 &&
+ !(xhci->quirks &
+ XHCI_AVOID_BEI)) {
/* Set BEI bit except for the last td */
if (i < num_tds - 1)
field |= TRB_BEI;
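
The abort path added above waits for CMD_RING_RUNNING to clear via handshake(), which the next file makes non-static, giving up after a 5-second budget. A hedged userspace sketch of that poll-with-timeout shape; the simulated read_status() register and the constants are made up for illustration (the real helper also treats an all-ones read as "hardware removed"):

#include <unistd.h>
#include <stdio.h>

#define RING_RUNNING 0x1        /* illustrative stand-in for CMD_RING_RUNNING */

/* Simulated register: the "running" bit clears after a few reads. */
static unsigned int read_status(void)
{
        static int reads;

        return ++reads < 5 ? RING_RUNNING : 0;
}

/* Illustrative poll-with-timeout, roughly the shape of handshake() in
 * xhci.c: wait until (status & mask) == done, checking once per
 * microsecond for at most usec microseconds; 0 on success, -1 on timeout. */
static int poll_handshake(unsigned int mask, unsigned int done, int usec)
{
        while (usec-- > 0) {
                if ((read_status() & mask) == done)
                        return 0;
                usleep(1);
        }
        return -1;
}

int main(void)
{
        /* wait for the simulated ring to stop, with a 1000 us budget */
        printf("%d\n", poll_handshake(RING_RUNNING, 0, 1000));  /* prints 0 */
        return 0;
}
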
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 09872ee..f5c0f38 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -52,7 +52,7 @@ MODULE_PARM_DESC(link_quirk, "Don't clear the chain bit on a link TRB");
* handshake done). There are two failure modes: "usec" have passed (major
* hardware flakeout), or the register reads as all-ones (hardware removed).
*/
-static int handshake(struct xhci_hcd *xhci, void __iomem *ptr,
+int handshake(struct xhci_hcd *xhci, void __iomem *ptr,
u32 mask, u32 done, int usec)
{
u32 result;
@@ -105,8 +105,12 @@ int xhci_halt(struct xhci_hcd *xhci)
ret = handshake(xhci, &xhci->op_regs->status,
STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
- if (!ret)
+ if (!ret) {
xhci->xhc_state |= XHCI_STATE_HALTED;
+ xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+ } else
+ xhci_warn(xhci, "Host not halted after %u microseconds.\n",
+ XHCI_MAX_HALT_USEC);
return ret;
}
@@ -459,6 +463,8 @@ static bool compliance_mode_recovery_timer_quirk_check(void)
dmi_product_name = dmi_get_system_info(DMI_PRODUCT_NAME);
dmi_sys_vendor = dmi_get_system_info(DMI_SYS_VENDOR);
+ if (!dmi_product_name || !dmi_sys_vendor)
+ return false;
if (!(strstr(dmi_sys_vendor, "Hewlett-Packard")))
return false;
@@ -570,6 +576,7 @@ static int xhci_run_finished(struct xhci_hcd *xhci)
return -ENODEV;
}
xhci->shared_hcd->state = HC_STATE_RUNNING;
+ xhci->cmd_ring_state = CMD_RING_STATE_RUNNING;
if (xhci->quirks & XHCI_NEC_HOST)
xhci_ring_cmd_db(xhci);
@@ -874,7 +881,7 @@ int xhci_suspend(struct xhci_hcd *xhci)
command &= ~CMD_RUN;
xhci_writel(xhci, command, &xhci->op_regs->command);
if (handshake(xhci, &xhci->op_regs->status,
- STS_HALT, STS_HALT, 100*100)) {
+ STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) {
xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n");
spin_unlock_irq(&xhci->lock);
return -ETIMEDOUT;
@@ -2506,6 +2513,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
struct completion *cmd_completion;
u32 *cmd_status;
struct xhci_virt_device *virt_dev;
+ union xhci_trb *cmd_trb;
spin_lock_irqsave(&xhci->lock, flags);
virt_dev = xhci->devs[udev->slot_id];
@@ -2551,6 +2559,7 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
}
init_completion(cmd_completion);
+ cmd_trb = xhci->cmd_ring->dequeue;
if (!ctx_change)
ret = xhci_queue_configure_endpoint(xhci, in_ctx->dma,
udev->slot_id, must_succeed);
@@ -2572,14 +2581,17 @@ static int xhci_configure_endpoint(struct xhci_hcd *xhci,
/* Wait for the configure endpoint command to complete */
timeleft = wait_for_completion_interruptible_timeout(
cmd_completion,
- USB_CTRL_SET_TIMEOUT);
+ XHCI_CMD_DEFAULT_TIMEOUT);
if (timeleft <= 0) {
xhci_warn(xhci, "%s while waiting for %s command\n",
timeleft == 0 ? "Timeout" : "Signal",
ctx_change == 0 ?
"configure endpoint" :
"evaluate context");
- /* FIXME cancel the configure endpoint command */
+ /* cancel the configure endpoint command */
+ ret = xhci_cancel_cmd(xhci, command, cmd_trb);
+ if (ret < 0)
+ return ret;
return -ETIME;
}
@@ -3528,8 +3540,10 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
unsigned long flags;
int timeleft;
int ret;
+ union xhci_trb *cmd_trb;
spin_lock_irqsave(&xhci->lock, flags);
+ cmd_trb = xhci->cmd_ring->dequeue;
ret = xhci_queue_slot_control(xhci, TRB_ENABLE_SLOT, 0);
if (ret) {
spin_unlock_irqrestore(&xhci->lock, flags);
@@ -3541,12 +3555,12 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
/* XXX: how much time for xHC slot assignment? */
timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
- USB_CTRL_SET_TIMEOUT);
+ XHCI_CMD_DEFAULT_TIMEOUT);
if (timeleft <= 0) {
xhci_warn(xhci, "%s while waiting for a slot\n",
timeleft == 0 ? "Timeout" : "Signal");
- /* FIXME cancel the enable slot request */
- return 0;
+ /* cancel the enable slot request */
+ return xhci_cancel_cmd(xhci, NULL, cmd_trb);
}
if (!xhci->slot_id) {
@@ -3607,6 +3621,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
struct xhci_slot_ctx *slot_ctx;
struct xhci_input_control_ctx *ctrl_ctx;
u64 temp_64;
+ union xhci_trb *cmd_trb;
if (!udev->slot_id) {
xhci_dbg(xhci, "Bad Slot ID %d\n", udev->slot_id);
@@ -3645,6 +3660,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
xhci_dbg_ctx(xhci, virt_dev->in_ctx, 2);
spin_lock_irqsave(&xhci->lock, flags);
+ cmd_trb = xhci->cmd_ring->dequeue;
ret = xhci_queue_address_device(xhci, virt_dev->in_ctx->dma,
udev->slot_id);
if (ret) {
@@ -3657,7 +3673,7 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
/* ctrl tx can take up to 5 sec; XXX: need more time for xHC? */
timeleft = wait_for_completion_interruptible_timeout(&xhci->addr_dev,
- USB_CTRL_SET_TIMEOUT);
+ XHCI_CMD_DEFAULT_TIMEOUT);
/* FIXME: From section 4.3.4: "Software shall be responsible for timing
* the SetAddress() "recovery interval" required by USB and aborting the
* command on a timeout.
@@ -3665,7 +3681,10 @@ int xhci_address_device(struct usb_hcd *hcd, struct usb_device *udev)
if (timeleft <= 0) {
xhci_warn(xhci, "%s while waiting for address device command\n",
timeleft == 0 ? "Timeout" : "Signal");
- /* FIXME cancel the address device command */
+ /* cancel the address device command */
+ ret = xhci_cancel_cmd(xhci, NULL, cmd_trb);
+ if (ret < 0)
+ return ret;
return -ETIME;
}
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 44d518a..cc368c2 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1255,6 +1255,16 @@ struct xhci_td {
union xhci_trb *last_trb;
};
+/* xHCI command default timeout value */
+#define XHCI_CMD_DEFAULT_TIMEOUT (5 * HZ)
+
+/* command descriptor */
+struct xhci_cd {
+ struct list_head cancel_cmd_list;
+ struct xhci_command *command;
+ union xhci_trb *cmd_trb;
+};
+
struct xhci_dequeue_state {
struct xhci_segment *new_deq_seg;
union xhci_trb *new_deq_ptr;
@@ -1402,6 +1412,11 @@ struct xhci_hcd {
/* data structures */
struct xhci_device_context_array *dcbaa;
struct xhci_ring *cmd_ring;
+ unsigned int cmd_ring_state;
+#define CMD_RING_STATE_RUNNING (1 << 0)
+#define CMD_RING_STATE_ABORTED (1 << 1)
+#define CMD_RING_STATE_STOPPED (1 << 2)
+ struct list_head cancel_cmd_list;
unsigned int cmd_ring_reserved_trbs;
struct xhci_ring *event_ring;
struct xhci_erst erst;
@@ -1473,6 +1488,7 @@ struct xhci_hcd {
#define XHCI_TRUST_TX_LENGTH (1 << 10)
#define XHCI_SPURIOUS_REBOOT (1 << 13)
#define XHCI_COMP_MODE_QUIRK (1 << 14)
+#define XHCI_AVOID_BEI (1 << 15)
unsigned int num_active_eps;
unsigned int limit_active_eps;
/* There are two roothubs to keep track of bus suspend info for */
@@ -1666,6 +1682,8 @@ static inline void xhci_unregister_pci(void) {}
/* xHCI host controller glue */
typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
+int handshake(struct xhci_hcd *xhci, void __iomem *ptr,
+ u32 mask, u32 done, int usec);
void xhci_quiesce(struct xhci_hcd *xhci);
int xhci_halt(struct xhci_hcd *xhci);
int xhci_reset(struct xhci_hcd *xhci);
@@ -1756,6 +1774,8 @@ void xhci_queue_config_ep_quirk(struct xhci_hcd *xhci,
unsigned int slot_id, unsigned int ep_index,
struct xhci_dequeue_state *deq_state);
void xhci_stop_endpoint_command_watchdog(unsigned long arg);
+int xhci_cancel_cmd(struct xhci_hcd *xhci, struct xhci_command *command,
+ union xhci_trb *cmd_trb);
void xhci_ring_ep_doorbell(struct xhci_hcd *xhci, unsigned int slot_id,
unsigned int ep_index, unsigned int stream_id);
diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
index 7324bea..e29a664 100644
--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -584,6 +584,8 @@ static struct usb_device_id id_table_combined [] = {
{ USB_DEVICE(FTDI_VID, FTDI_IBS_PEDO_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_IBS_PROD_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_TAVIR_STK500_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_TIAO_UMPA_PID),
+ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
/*
* ELV devices:
*/
diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
index 06f6fd2..7b5eb74 100644
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -517,6 +517,11 @@
*/
#define FTDI_TAVIR_STK500_PID 0xFA33 /* STK500 AVR programmer */
+/*
+ * TIAO product ids (FTDI_VID)
+ * http://www.tiaowiki.com/w/Main_Page
+ */
+#define FTDI_TIAO_UMPA_PID 0x8a98 /* TIAO/DIYGADGET USB Multi-Protocol Adapter */
/********************************/
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index c068b4d..3fd4e6f 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -870,7 +870,8 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0153, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0155, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0156, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0157, 0xff, 0xff, 0xff) },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0157, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0158, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0159, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0161, 0xff, 0xff, 0xff) },
diff --git a/drivers/usb/serial/qcaux.c b/drivers/usb/serial/qcaux.c
index a348198..87271e3 100644
--- a/drivers/usb/serial/qcaux.c
+++ b/drivers/usb/serial/qcaux.c
@@ -36,8 +36,6 @@
#define UTSTARCOM_PRODUCT_UM175_V1 0x3712
#define UTSTARCOM_PRODUCT_UM175_V2 0x3714
#define UTSTARCOM_PRODUCT_UM175_ALLTEL 0x3715
-#define PANTECH_PRODUCT_UML190_VZW 0x3716
-#define PANTECH_PRODUCT_UML290_VZW 0x3718
/* CMOTECH devices */
#define CMOTECH_VENDOR_ID 0x16d8
@@ -68,11 +66,9 @@ static struct usb_device_id id_table[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(LG_VENDOR_ID, LG_PRODUCT_VX4400_6000, 0xff, 0xff, 0x00) },
{ USB_DEVICE_AND_INTERFACE_INFO(SANYO_VENDOR_ID, SANYO_PRODUCT_KATANA_LX, 0xff, 0xff, 0x00) },
{ USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_U520, 0xff, 0x00, 0x00) },
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML190_VZW, 0xff, 0xff, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML190_VZW, 0xff, 0xfe, 0xff) },
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xfd, 0xff) }, /* NMEA */
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xfe, 0xff) }, /* WMC */
- { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xff, 0xff) }, /* DIAG */
+ { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xfd, 0xff) }, /* NMEA */
+ { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xfe, 0xff) }, /* WMC */
+ { USB_VENDOR_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, 0xff, 0xff, 0xff) }, /* DIAG */
{ },
};
MODULE_DEVICE_TABLE(usb, id_table);
diff --git a/fs/autofs4/root.c b/fs/autofs4/root.c
index f55ae23..790fa63 100644
--- a/fs/autofs4/root.c
+++ b/fs/autofs4/root.c
@@ -392,10 +392,12 @@ static struct vfsmount *autofs4_d_automount(struct path *path)
ino->flags |= AUTOFS_INF_PENDING;
spin_unlock(&sbi->fs_lock);
status = autofs4_mount_wait(dentry);
- if (status)
- return ERR_PTR(status);
spin_lock(&sbi->fs_lock);
ino->flags &= ~AUTOFS_INF_PENDING;
+ if (status) {
+ spin_unlock(&sbi->fs_lock);
+ return ERR_PTR(status);
+ }
}
done:
if (!(ino->flags & AUTOFS_INF_EXPIRING)) {
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 6ff96c6..8dd615c 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1668,30 +1668,19 @@ static int elf_note_info_init(struct elf_note_info *info)
return 0;
info->psinfo = kmalloc(sizeof(*info->psinfo), GFP_KERNEL);
if (!info->psinfo)
- goto notes_free;
+ return 0;
info->prstatus = kmalloc(sizeof(*info->prstatus), GFP_KERNEL);
if (!info->prstatus)
- goto psinfo_free;
+ return 0;
info->fpu = kmalloc(sizeof(*info->fpu), GFP_KERNEL);
if (!info->fpu)
- goto prstatus_free;
+ return 0;
#ifdef ELF_CORE_COPY_XFPREGS
info->xfpu = kmalloc(sizeof(*info->xfpu), GFP_KERNEL);
if (!info->xfpu)
- goto fpu_free;
+ return 0;
#endif
return 1;
-#ifdef ELF_CORE_COPY_XFPREGS
- fpu_free:
- kfree(info->fpu);
-#endif
- prstatus_free:
- kfree(info->prstatus);
- psinfo_free:
- kfree(info->psinfo);
- notes_free:
- kfree(info->notes);
- return 0;
}
static int fill_note_info(struct elfhdr *elf, int phdrs,
diff --git a/fs/ecryptfs/ecryptfs_kernel.h b/fs/ecryptfs/ecryptfs_kernel.h
index a9f29b1..2262a77 100644
--- a/fs/ecryptfs/ecryptfs_kernel.h
+++ b/fs/ecryptfs/ecryptfs_kernel.h
@@ -559,6 +559,8 @@ struct ecryptfs_open_req {
struct inode *ecryptfs_get_inode(struct inode *lower_inode,
struct super_block *sb);
void ecryptfs_i_size_init(const char *page_virt, struct inode *inode);
+int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
+ struct inode *ecryptfs_inode);
int ecryptfs_decode_and_decrypt_filename(char **decrypted_name,
size_t *decrypted_name_size,
struct dentry *ecryptfs_dentry,
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index d3f95f9..841f24f 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -139,29 +139,50 @@ out:
return rc;
}
-static void ecryptfs_vma_close(struct vm_area_struct *vma)
-{
- filemap_write_and_wait(vma->vm_file->f_mapping);
-}
-
-static const struct vm_operations_struct ecryptfs_file_vm_ops = {
- .close = ecryptfs_vma_close,
- .fault = filemap_fault,
-};
+struct kmem_cache *ecryptfs_file_info_cache;
-static int ecryptfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int read_or_initialize_metadata(struct dentry *dentry)
{
+ struct inode *inode = dentry->d_inode;
+ struct ecryptfs_mount_crypt_stat *mount_crypt_stat;
+ struct ecryptfs_crypt_stat *crypt_stat;
int rc;
- rc = generic_file_mmap(file, vma);
+ crypt_stat = &ecryptfs_inode_to_private(inode)->crypt_stat;
+ mount_crypt_stat = &ecryptfs_superblock_to_private(
+ inode->i_sb)->mount_crypt_stat;
+ mutex_lock(&crypt_stat->cs_mutex);
+
+ if (crypt_stat->flags & ECRYPTFS_POLICY_APPLIED &&
+ crypt_stat->flags & ECRYPTFS_KEY_VALID) {
+ rc = 0;
+ goto out;
+ }
+
+ rc = ecryptfs_read_metadata(dentry);
if (!rc)
- vma->vm_ops = &ecryptfs_file_vm_ops;
+ goto out;
+
+ if (mount_crypt_stat->flags & ECRYPTFS_PLAINTEXT_PASSTHROUGH_ENABLED) {
+ crypt_stat->flags &= ~(ECRYPTFS_I_SIZE_INITIALIZED
+ | ECRYPTFS_ENCRYPTED);
+ rc = 0;
+ goto out;
+ }
+
+ if (!(mount_crypt_stat->flags & ECRYPTFS_XATTR_METADATA_ENABLED) &&
+ !i_size_read(ecryptfs_inode_to_lower(inode))) {
+ rc = ecryptfs_initialize_file(dentry, inode);
+ if (!rc)
+ goto out;
+ }
+ rc = -EIO;
+out:
+ mutex_unlock(&crypt_stat->cs_mutex);
return rc;
}
-struct kmem_cache *ecryptfs_file_info_cache;
-
/**
* ecryptfs_open
* @inode: inode speciying file to open
@@ -237,32 +258,9 @@ static int ecryptfs_open(struct inode *inode, struct file *file)
rc = 0;
goto out;
}
- mutex_lock(&crypt_stat->cs_mutex);
- if (!(crypt_stat->flags & ECRYPTFS_POLICY_APPLIED)
- || !(crypt_stat->flags & ECRYPTFS_KEY_VALID)) {
- rc = ecryptfs_read_metadata(ecryptfs_dentry);
- if (rc) {
- ecryptfs_printk(KERN_DEBUG,
- "Valid headers not found\n");
- if (!(mount_crypt_stat->flags
- & ECRYPTFS_PLAINTEXT_PASSTHROUGH_ENABLED)) {
- rc = -EIO;
- printk(KERN_WARNING "Either the lower file "
- "is not in a valid eCryptfs format, "
- "or the key could not be retrieved. "
- "Plaintext passthrough mode is not "
- "enabled; returning -EIO\n");
- mutex_unlock(&crypt_stat->cs_mutex);
- goto out_put;
- }
- rc = 0;
- crypt_stat->flags &= ~(ECRYPTFS_I_SIZE_INITIALIZED
- | ECRYPTFS_ENCRYPTED);
- mutex_unlock(&crypt_stat->cs_mutex);
- goto out;
- }
- }
- mutex_unlock(&crypt_stat->cs_mutex);
+ rc = read_or_initialize_metadata(ecryptfs_dentry);
+ if (rc)
+ goto out_put;
ecryptfs_printk(KERN_DEBUG, "inode w/ addr = [0x%p], i_ino = "
"[0x%.16lx] size: [0x%.16llx]\n", inode, inode->i_ino,
(unsigned long long)i_size_read(inode));
@@ -278,8 +276,14 @@ out:
static int ecryptfs_flush(struct file *file, fl_owner_t td)
{
- return file->f_mode & FMODE_WRITE
- ? filemap_write_and_wait(file->f_mapping) : 0;
+ struct file *lower_file = ecryptfs_file_to_lower(file);
+
+ if (lower_file->f_op && lower_file->f_op->flush) {
+ filemap_write_and_wait(file->f_mapping);
+ return lower_file->f_op->flush(lower_file, td);
+ }
+
+ return 0;
}
static int ecryptfs_release(struct inode *inode, struct file *file)
@@ -293,15 +297,7 @@ static int ecryptfs_release(struct inode *inode, struct file *file)
static int
ecryptfs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
{
- int rc = 0;
-
- rc = generic_file_fsync(file, start, end, datasync);
- if (rc)
- goto out;
- rc = vfs_fsync_range(ecryptfs_file_to_lower(file), start, end,
- datasync);
-out:
- return rc;
+ return vfs_fsync(ecryptfs_file_to_lower(file), datasync);
}
static int ecryptfs_fasync(int fd, struct file *file, int flag)
@@ -370,7 +366,7 @@ const struct file_operations ecryptfs_main_fops = {
#ifdef CONFIG_COMPAT
.compat_ioctl = ecryptfs_compat_ioctl,
#endif
- .mmap = ecryptfs_file_mmap,
+ .mmap = generic_file_mmap,
.open = ecryptfs_open,
.flush = ecryptfs_flush,
.release = ecryptfs_release,
diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
index 7c7556b..a9be90d 100644
--- a/fs/ecryptfs/inode.c
+++ b/fs/ecryptfs/inode.c
@@ -161,6 +161,31 @@ ecryptfs_create_underlying_file(struct inode *lower_dir_inode,
return vfs_create(lower_dir_inode, lower_dentry, mode, NULL);
}
+static int ecryptfs_do_unlink(struct inode *dir, struct dentry *dentry,
+ struct inode *inode)
+{
+ struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
+ struct inode *lower_dir_inode = ecryptfs_inode_to_lower(dir);
+ struct dentry *lower_dir_dentry;
+ int rc;
+
+ dget(lower_dentry);
+ lower_dir_dentry = lock_parent(lower_dentry);
+ rc = vfs_unlink(lower_dir_inode, lower_dentry);
+ if (rc) {
+ printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc);
+ goto out_unlock;
+ }
+ fsstack_copy_attr_times(dir, lower_dir_inode);
+ set_nlink(inode, ecryptfs_inode_to_lower(inode)->i_nlink);
+ inode->i_ctime = dir->i_ctime;
+ d_drop(dentry);
+out_unlock:
+ unlock_dir(lower_dir_dentry);
+ dput(lower_dentry);
+ return rc;
+}
+
/**
* ecryptfs_do_create
* @directory_inode: inode of the new file's dentry's parent in ecryptfs
@@ -201,8 +226,10 @@ ecryptfs_do_create(struct inode *directory_inode,
}
inode = __ecryptfs_get_inode(lower_dentry->d_inode,
directory_inode->i_sb);
- if (IS_ERR(inode))
+ if (IS_ERR(inode)) {
+ vfs_unlink(lower_dir_dentry->d_inode, lower_dentry);
goto out_lock;
+ }
fsstack_copy_attr_times(directory_inode, lower_dir_dentry->d_inode);
fsstack_copy_inode_size(directory_inode, lower_dir_dentry->d_inode);
out_lock:
@@ -219,8 +246,8 @@ out:
*
* Returns zero on success
*/
-static int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
- struct inode *ecryptfs_inode)
+int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry,
+ struct inode *ecryptfs_inode)
{
struct ecryptfs_crypt_stat *crypt_stat =
&ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat;
@@ -284,7 +311,9 @@ ecryptfs_create(struct inode *directory_inode, struct dentry *ecryptfs_dentry,
* that this on disk file is prepared to be an ecryptfs file */
rc = ecryptfs_initialize_file(ecryptfs_dentry, ecryptfs_inode);
if (rc) {
- drop_nlink(ecryptfs_inode);
+ ecryptfs_do_unlink(directory_inode, ecryptfs_dentry,
+ ecryptfs_inode);
+ make_bad_inode(ecryptfs_inode);
unlock_new_inode(ecryptfs_inode);
iput(ecryptfs_inode);
goto out;
@@ -496,27 +525,7 @@ out_lock:
static int ecryptfs_unlink(struct inode *dir, struct dentry *dentry)
{
- int rc = 0;
- struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry);
- struct inode *lower_dir_inode = ecryptfs_inode_to_lower(dir);
- struct dentry *lower_dir_dentry;
-
- dget(lower_dentry);
- lower_dir_dentry = lock_parent(lower_dentry);
- rc = vfs_unlink(lower_dir_inode, lower_dentry);
- if (rc) {
- printk(KERN_ERR "Error in vfs_unlink; rc = [%d]\n", rc);
- goto out_unlock;
- }
- fsstack_copy_attr_times(dir, lower_dir_inode);
- set_nlink(dentry->d_inode,
- ecryptfs_inode_to_lower(dentry->d_inode)->i_nlink);
- dentry->d_inode->i_ctime = dir->i_ctime;
- d_drop(dentry);
-out_unlock:
- unlock_dir(lower_dir_dentry);
- dput(lower_dentry);
- return rc;
+ return ecryptfs_do_unlink(dir, dentry, dentry->d_inode);
}
static int ecryptfs_symlink(struct inode *dir, struct dentry *dentry,
@@ -1026,12 +1035,6 @@ static int ecryptfs_setattr(struct dentry *dentry, struct iattr *ia)
goto out;
}
- if (S_ISREG(inode->i_mode)) {
- rc = filemap_write_and_wait(inode->i_mapping);
- if (rc)
- goto out;
- fsstack_copy_attr_all(inode, lower_inode);
- }
memcpy(&lower_ia, ia, sizeof(lower_ia));
if (ia->ia_valid & ATTR_FILE)
lower_ia.ia_file = ecryptfs_file_to_lower(ia->ia_file);
diff --git a/fs/ecryptfs/main.c b/fs/ecryptfs/main.c
index b4a6bef..1cfef9f 100644
--- a/fs/ecryptfs/main.c
+++ b/fs/ecryptfs/main.c
@@ -162,6 +162,7 @@ void ecryptfs_put_lower_file(struct inode *inode)
inode_info = ecryptfs_inode_to_private(inode);
if (atomic_dec_and_mutex_lock(&inode_info->lower_file_count,
&inode_info->lower_file_mutex)) {
+ filemap_write_and_wait(inode->i_mapping);
fput(inode_info->lower_file);
inode_info->lower_file = NULL;
mutex_unlock(&inode_info->lower_file_mutex);
diff --git a/fs/ecryptfs/mmap.c b/fs/ecryptfs/mmap.c
index 6a44148..93a998a 100644
--- a/fs/ecryptfs/mmap.c
+++ b/fs/ecryptfs/mmap.c
@@ -62,18 +62,6 @@ static int ecryptfs_writepage(struct page *page, struct writeback_control *wbc)
{
int rc;
- /*
- * Refuse to write the page out if we are called from reclaim context
- * since our writepage() path may potentially allocate memory when
- * calling into the lower fs vfs_write() which may in turn invoke
- * us again.
- */
- if (current->flags & PF_MEMALLOC) {
- redirty_page_for_writepage(wbc, page);
- rc = 0;
- goto out;
- }
-
rc = ecryptfs_encrypt_page(page);
if (rc) {
ecryptfs_printk(KERN_WARNING, "Error encrypting "
@@ -498,7 +486,6 @@ static int ecryptfs_write_end(struct file *file,
struct ecryptfs_crypt_stat *crypt_stat =
&ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat;
int rc;
- int need_unlock_page = 1;
ecryptfs_printk(KERN_DEBUG, "Calling fill_zeros_to_end_of_page"
"(page w/ index = [0x%.16lx], to = [%d])\n", index, to);
@@ -519,26 +506,26 @@ static int ecryptfs_write_end(struct file *file,
"zeros in page with index = [0x%.16lx]\n", index);
goto out;
}
- set_page_dirty(page);
- unlock_page(page);
- need_unlock_page = 0;
+ rc = ecryptfs_encrypt_page(page);
+ if (rc) {
+ ecryptfs_printk(KERN_WARNING, "Error encrypting page (upper "
+ "index [0x%.16lx])\n", index);
+ goto out;
+ }
if (pos + copied > i_size_read(ecryptfs_inode)) {
i_size_write(ecryptfs_inode, pos + copied);
ecryptfs_printk(KERN_DEBUG, "Expanded file size to "
"[0x%.16llx]\n",
(unsigned long long)i_size_read(ecryptfs_inode));
- balance_dirty_pages_ratelimited(mapping);
- rc = ecryptfs_write_inode_size_to_metadata(ecryptfs_inode);
- if (rc) {
- printk(KERN_ERR "Error writing inode size to metadata; "
- "rc = [%d]\n", rc);
- goto out;
- }
}
- rc = copied;
+ rc = ecryptfs_write_inode_size_to_metadata(ecryptfs_inode);
+ if (rc)
+ printk(KERN_ERR "Error writing inode size to metadata; "
+ "rc = [%d]\n", rc);
+ else
+ rc = copied;
out:
- if (need_unlock_page)
- unlock_page(page);
+ unlock_page(page);
page_cache_release(page);
return rc;
}
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 8b01f9f..bac2330 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2382,6 +2382,16 @@ static int ext4_nonda_switch(struct super_block *sb)
free_blocks = EXT4_C2B(sbi,
percpu_counter_read_positive(&sbi->s_freeclusters_counter));
dirty_blocks = percpu_counter_read_positive(&sbi->s_dirtyclusters_counter);
+ /*
+ * Start pushing delalloc when 1/2 of free blocks are dirty.
+ */
+ if (dirty_blocks && (free_blocks < 2 * dirty_blocks) &&
+ !writeback_in_progress(sb->s_bdi) &&
+ down_read_trylock(&sb->s_umount)) {
+ writeback_inodes_sb(sb, WB_REASON_FS_FREE_SPACE);
+ up_read(&sb->s_umount);
+ }
+
if (2 * free_blocks < 3 * dirty_blocks ||
free_blocks < (dirty_blocks + EXT4_FREECLUSTERS_WATERMARK)) {
/*
@@ -2390,13 +2400,6 @@ static int ext4_nonda_switch(struct super_block *sb)
*/
return 1;
}
- /*
- * Even if we don't switch but are nearing capacity,
- * start pushing delalloc when 1/2 of free blocks are dirty.
- */
- if (free_blocks < 2 * dirty_blocks)
- writeback_inodes_sb_if_idle(sb, WB_REASON_FS_FREE_SPACE);
-
return 0;
}
@@ -4004,6 +4007,7 @@ static int ext4_do_update_inode(handle_t *handle,
struct ext4_inode_info *ei = EXT4_I(inode);
struct buffer_head *bh = iloc->bh;
int err = 0, rc, block;
+ int need_datasync = 0;
	/* For fields not tracked in the in-memory inode,
* initialise them to zero for new inodes. */
@@ -4052,7 +4056,10 @@ static int ext4_do_update_inode(handle_t *handle,
raw_inode->i_file_acl_high =
cpu_to_le16(ei->i_file_acl >> 32);
raw_inode->i_file_acl_lo = cpu_to_le32(ei->i_file_acl);
- ext4_isize_set(raw_inode, ei->i_disksize);
+ if (ei->i_disksize != ext4_isize(raw_inode)) {
+ ext4_isize_set(raw_inode, ei->i_disksize);
+ need_datasync = 1;
+ }
if (ei->i_disksize > 0x7fffffffULL) {
struct super_block *sb = inode->i_sb;
if (!EXT4_HAS_RO_COMPAT_FEATURE(sb,
@@ -4105,7 +4112,7 @@ static int ext4_do_update_inode(handle_t *handle,
err = rc;
ext4_clear_inode_state(inode, EXT4_STATE_NEW);
- ext4_update_inode_fsync_trans(handle, inode, 0);
+ ext4_update_inode_fsync_trans(handle, inode, need_datasync);
out_brelse:
brelse(bh);
ext4_std_error(inode->i_sb, err);
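
The reworked ext4_nonda_switch() above only kicks background writeback when no writeback is already running and sb->s_umount can be taken with a trylock, so the allocation path never sleeps on an unmount in progress. A standalone sketch of that trylock-guarded pattern, using POSIX threads as a stand-in for the kernel primitives (all names here are invented for the example):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t umount_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Stand-in for writeback_inodes_sb(): the optional background work. */
static void kick_background_flush(void)
{
	printf("flushing dirty data in the background\n");
}

/*
 * Opportunistically start background work, but never block: if the
 * lock is held for writing (an "unmount" in progress), just skip it.
 */
static void maybe_kick_flush(int dirty, int free_blocks)
{
	if (dirty && free_blocks < 2 * dirty &&
	    pthread_rwlock_tryrdlock(&umount_lock) == 0) {
		kick_background_flush();
		pthread_rwlock_unlock(&umount_lock);
	}
}

int main(void)
{
	maybe_kick_flush(60, 100);	/* more than half the free space dirty */
	maybe_kick_flush(10, 100);	/* plenty of room: nothing to do */
	return 0;
}

If the trylock fails the work is simply skipped; the next allocation attempt will try again, which is exactly the behaviour the hunk relies on.
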
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index c5826c6..e2016f3 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -141,55 +141,21 @@ mext_next_extent(struct inode *inode, struct ext4_ext_path *path,
}
/**
- * mext_check_null_inode - NULL check for two inodes
- *
- * If inode1 or inode2 is NULL, return -EIO. Otherwise, return 0.
- */
-static int
-mext_check_null_inode(struct inode *inode1, struct inode *inode2,
- const char *function, unsigned int line)
-{
- int ret = 0;
-
- if (inode1 == NULL) {
- __ext4_error(inode2->i_sb, function, line,
- "Both inodes should not be NULL: "
- "inode1 NULL inode2 %lu", inode2->i_ino);
- ret = -EIO;
- } else if (inode2 == NULL) {
- __ext4_error(inode1->i_sb, function, line,
- "Both inodes should not be NULL: "
- "inode1 %lu inode2 NULL", inode1->i_ino);
- ret = -EIO;
- }
- return ret;
-}
-
-/**
* double_down_write_data_sem - Acquire two inodes' write lock of i_data_sem
*
- * @orig_inode: original inode structure
- * @donor_inode: donor inode structure
- * Acquire write lock of i_data_sem of the two inodes (orig and donor) by
- * i_ino order.
+ * Acquire write lock of i_data_sem of the two inodes
*/
static void
-double_down_write_data_sem(struct inode *orig_inode, struct inode *donor_inode)
+double_down_write_data_sem(struct inode *first, struct inode *second)
{
- struct inode *first = orig_inode, *second = donor_inode;
+ if (first < second) {
+ down_write(&EXT4_I(first)->i_data_sem);
+ down_write_nested(&EXT4_I(second)->i_data_sem, SINGLE_DEPTH_NESTING);
+ } else {
+ down_write(&EXT4_I(second)->i_data_sem);
+ down_write_nested(&EXT4_I(first)->i_data_sem, SINGLE_DEPTH_NESTING);
- /*
- * Use the inode number to provide the stable locking order instead
- * of its address, because the C language doesn't guarantee you can
- * compare pointers that don't come from the same array.
- */
- if (donor_inode->i_ino < orig_inode->i_ino) {
- first = donor_inode;
- second = orig_inode;
}
-
- down_write(&EXT4_I(first)->i_data_sem);
- down_write_nested(&EXT4_I(second)->i_data_sem, SINGLE_DEPTH_NESTING);
}
/**
@@ -969,14 +935,6 @@ mext_check_arguments(struct inode *orig_inode,
return -EINVAL;
}
- /* Files should be in the same ext4 FS */
- if (orig_inode->i_sb != donor_inode->i_sb) {
- ext4_debug("ext4 move extent: The argument files "
- "should be in same FS [ino:orig %lu, donor %lu]\n",
- orig_inode->i_ino, donor_inode->i_ino);
- return -EINVAL;
- }
-
/* Ext4 move extent supports only extent based file */
if (!(ext4_test_inode_flag(orig_inode, EXT4_INODE_EXTENTS))) {
ext4_debug("ext4 move extent: orig file is not extents "
@@ -1072,35 +1030,19 @@ mext_check_arguments(struct inode *orig_inode,
* @inode1: the inode structure
* @inode2: the inode structure
*
- * Lock two inodes' i_mutex by i_ino order.
- * If inode1 or inode2 is NULL, return -EIO. Otherwise, return 0.
+ * Lock two inodes' i_mutex
*/
-static int
+static void
mext_inode_double_lock(struct inode *inode1, struct inode *inode2)
{
- int ret = 0;
-
- BUG_ON(inode1 == NULL && inode2 == NULL);
-
- ret = mext_check_null_inode(inode1, inode2, __func__, __LINE__);
- if (ret < 0)
- goto out;
-
- if (inode1 == inode2) {
- mutex_lock(&inode1->i_mutex);
- goto out;
- }
-
- if (inode1->i_ino < inode2->i_ino) {
+ BUG_ON(inode1 == inode2);
+ if (inode1 < inode2) {
mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT);
mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD);
} else {
mutex_lock_nested(&inode2->i_mutex, I_MUTEX_PARENT);
mutex_lock_nested(&inode1->i_mutex, I_MUTEX_CHILD);
}
-
-out:
- return ret;
}
/**
@@ -1109,28 +1051,13 @@ out:
* @inode1: the inode that is released first
* @inode2: the inode that is released second
*
- * If inode1 or inode2 is NULL, return -EIO. Otherwise, return 0.
*/
-static int
+static void
mext_inode_double_unlock(struct inode *inode1, struct inode *inode2)
{
- int ret = 0;
-
- BUG_ON(inode1 == NULL && inode2 == NULL);
-
- ret = mext_check_null_inode(inode1, inode2, __func__, __LINE__);
- if (ret < 0)
- goto out;
-
- if (inode1)
- mutex_unlock(&inode1->i_mutex);
-
- if (inode2 && inode2 != inode1)
- mutex_unlock(&inode2->i_mutex);
-
-out:
- return ret;
+ mutex_unlock(&inode1->i_mutex);
+ mutex_unlock(&inode2->i_mutex);
}
/**
@@ -1187,16 +1114,23 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
ext4_lblk_t block_end, seq_start, add_blocks, file_end, seq_blocks = 0;
ext4_lblk_t rest_blocks;
pgoff_t orig_page_offset = 0, seq_end_page;
- int ret1, ret2, depth, last_extent = 0;
+ int ret, depth, last_extent = 0;
int blocks_per_page = PAGE_CACHE_SIZE >> orig_inode->i_blkbits;
int data_offset_in_page;
int block_len_in_page;
int uninit;
- /* orig and donor should be different file */
- if (orig_inode->i_ino == donor_inode->i_ino) {
+ if (orig_inode->i_sb != donor_inode->i_sb) {
+ ext4_debug("ext4 move extent: The argument files "
+ "should be in same FS [ino:orig %lu, donor %lu]\n",
+ orig_inode->i_ino, donor_inode->i_ino);
+ return -EINVAL;
+ }
+
+ /* orig and donor should be different inodes */
+ if (orig_inode == donor_inode) {
ext4_debug("ext4 move extent: The argument files should not "
- "be same file [ino:orig %lu, donor %lu]\n",
+ "be same inode [ino:orig %lu, donor %lu]\n",
orig_inode->i_ino, donor_inode->i_ino);
return -EINVAL;
}
@@ -1208,18 +1142,21 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
orig_inode->i_ino, donor_inode->i_ino);
return -EINVAL;
}
-
+	/* TODO: This is a non-obvious task to swap blocks for inodes with full
+	   journaling enabled */
+ if (ext4_should_journal_data(orig_inode) ||
+ ext4_should_journal_data(donor_inode)) {
+ return -EINVAL;
+ }
/* Protect orig and donor inodes against a truncate */
- ret1 = mext_inode_double_lock(orig_inode, donor_inode);
- if (ret1 < 0)
- return ret1;
+ mext_inode_double_lock(orig_inode, donor_inode);
/* Protect extent tree against block allocations via delalloc */
double_down_write_data_sem(orig_inode, donor_inode);
/* Check the filesystem environment whether move_extent can be done */
- ret1 = mext_check_arguments(orig_inode, donor_inode, orig_start,
+ ret = mext_check_arguments(orig_inode, donor_inode, orig_start,
donor_start, &len);
- if (ret1)
+ if (ret)
goto out;
file_end = (i_size_read(orig_inode) - 1) >> orig_inode->i_blkbits;
@@ -1227,13 +1164,13 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
if (file_end < block_end)
len -= block_end - file_end;
- ret1 = get_ext_path(orig_inode, block_start, &orig_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, block_start, &orig_path);
+ if (ret)
goto out;
/* Get path structure to check the hole */
- ret1 = get_ext_path(orig_inode, block_start, &holecheck_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, block_start, &holecheck_path);
+ if (ret)
goto out;
depth = ext_depth(orig_inode);
@@ -1252,13 +1189,13 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
last_extent = mext_next_extent(orig_inode,
holecheck_path, &ext_cur);
if (last_extent < 0) {
- ret1 = last_extent;
+ ret = last_extent;
goto out;
}
last_extent = mext_next_extent(orig_inode, orig_path,
&ext_dummy);
if (last_extent < 0) {
- ret1 = last_extent;
+ ret = last_extent;
goto out;
}
seq_start = le32_to_cpu(ext_cur->ee_block);
@@ -1272,7 +1209,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
if (le32_to_cpu(ext_cur->ee_block) > block_end) {
ext4_debug("ext4 move extent: The specified range of file "
"may be the hole\n");
- ret1 = -EINVAL;
+ ret = -EINVAL;
goto out;
}
@@ -1292,7 +1229,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
last_extent = mext_next_extent(orig_inode, holecheck_path,
&ext_cur);
if (last_extent < 0) {
- ret1 = last_extent;
+ ret = last_extent;
break;
}
add_blocks = ext4_ext_get_actual_len(ext_cur);
@@ -1349,18 +1286,18 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
orig_page_offset,
data_offset_in_page,
block_len_in_page, uninit,
- &ret1);
+ &ret);
/* Count how many blocks we have exchanged */
*moved_len += block_len_in_page;
- if (ret1 < 0)
+ if (ret < 0)
break;
if (*moved_len > len) {
EXT4_ERROR_INODE(orig_inode,
"We replaced blocks too much! "
"sum of replaced: %llu requested: %llu",
*moved_len, len);
- ret1 = -EIO;
+ ret = -EIO;
break;
}
@@ -1374,22 +1311,22 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp,
}
double_down_write_data_sem(orig_inode, donor_inode);
- if (ret1 < 0)
+ if (ret < 0)
break;
/* Decrease buffer counter */
if (holecheck_path)
ext4_ext_drop_refs(holecheck_path);
- ret1 = get_ext_path(orig_inode, seq_start, &holecheck_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, seq_start, &holecheck_path);
+ if (ret)
break;
depth = holecheck_path->p_depth;
/* Decrease buffer counter */
if (orig_path)
ext4_ext_drop_refs(orig_path);
- ret1 = get_ext_path(orig_inode, seq_start, &orig_path);
- if (ret1)
+ ret = get_ext_path(orig_inode, seq_start, &orig_path);
+ if (ret)
break;
ext_cur = holecheck_path[depth].p_ext;
@@ -1412,12 +1349,7 @@ out:
kfree(holecheck_path);
}
double_up_write_data_sem(orig_inode, donor_inode);
- ret2 = mext_inode_double_unlock(orig_inode, donor_inode);
-
- if (ret1)
- return ret1;
- else if (ret2)
- return ret2;
+ mext_inode_double_unlock(orig_inode, donor_inode);
- return 0;
+ return ret;
}
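
Both double-lock helpers above now order the two inodes by pointer value so that every caller acquires the pair in the same global order, which rules out an ABBA deadlock. A minimal userspace sketch of the same convention (POSIX threads assumed; struct and function names are illustrative):

#include <pthread.h>
#include <assert.h>

struct object {
	pthread_mutex_t lock;
	long value;
};

/* Always take the lower-addressed lock first so that two threads
 * locking the same pair can never deadlock against each other. */
static void double_lock(struct object *a, struct object *b)
{
	assert(a != b);
	if (a < b) {
		pthread_mutex_lock(&a->lock);
		pthread_mutex_lock(&b->lock);
	} else {
		pthread_mutex_lock(&b->lock);
		pthread_mutex_lock(&a->lock);
	}
}

static void double_unlock(struct object *a, struct object *b)
{
	pthread_mutex_unlock(&a->lock);
	pthread_mutex_unlock(&b->lock);
}

int main(void)
{
	struct object x = { PTHREAD_MUTEX_INITIALIZER, 1 };
	struct object y = { PTHREAD_MUTEX_INITIALIZER, 2 };

	double_lock(&x, &y);	/* same order as double_lock(&y, &x) */
	x.value += y.value;
	double_unlock(&x, &y);
	return 0;
}

Strictly speaking ISO C leaves relational comparison of unrelated pointers unspecified, which is what the removed comment was warning about, but the patch itself adopts the address-order convention and it is reliable on the platforms the kernel supports.
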
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
index 4dd0890..88f97e5 100644
--- a/fs/ext4/namei.c
+++ b/fs/ext4/namei.c
@@ -1801,9 +1801,7 @@ retry:
err = PTR_ERR(inode);
if (!IS_ERR(inode)) {
init_special_inode(inode, inode->i_mode, rdev);
-#ifdef CONFIG_EXT4_FS_XATTR
inode->i_op = &ext4_special_inode_operations;
-#endif
err = ext4_add_nondir(handle, dentry, inode);
}
ext4_journal_stop(handle);
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 54f5786..13bfa07 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -63,6 +63,7 @@ int writeback_in_progress(struct backing_dev_info *bdi)
{
return test_bit(BDI_writeback_running, &bdi->state);
}
+EXPORT_SYMBOL(writeback_in_progress);
static inline struct backing_dev_info *inode_to_bdi(struct inode *inode)
{
diff --git a/fs/jffs2/wbuf.c b/fs/jffs2/wbuf.c
index b09e51d..464cd76 100644
--- a/fs/jffs2/wbuf.c
+++ b/fs/jffs2/wbuf.c
@@ -1032,11 +1032,11 @@ int jffs2_check_oob_empty(struct jffs2_sb_info *c,
ops.datbuf = NULL;
ret = c->mtd->read_oob(c->mtd, jeb->offset, &ops);
- if (ret || ops.oobretlen != ops.ooblen) {
+ if ((ret && !mtd_is_bitflip(ret)) || ops.oobretlen != ops.ooblen) {
printk(KERN_ERR "cannot read OOB for EB at %08x, requested %zd"
" bytes, read %zd bytes, error %d\n",
jeb->offset, ops.ooblen, ops.oobretlen, ret);
- if (!ret)
+ if (!ret || mtd_is_bitflip(ret))
ret = -EIO;
return ret;
}
@@ -1075,11 +1075,11 @@ int jffs2_check_nand_cleanmarker(struct jffs2_sb_info *c,
ops.datbuf = NULL;
ret = c->mtd->read_oob(c->mtd, jeb->offset, &ops);
- if (ret || ops.oobretlen != ops.ooblen) {
+ if ((ret && !mtd_is_bitflip(ret)) || ops.oobretlen != ops.ooblen) {
printk(KERN_ERR "cannot read OOB for EB at %08x, requested %zd"
" bytes, read %zd bytes, error %d\n",
jeb->offset, ops.ooblen, ops.oobretlen, ret);
- if (!ret)
+ if (!ret || mtd_is_bitflip(ret))
ret = -EIO;
return ret;
}
diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index 23d7451..df753a1 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -40,6 +40,7 @@ struct nsm_args {
u32 proc;
char *mon_name;
+ char *nodename;
};
struct nsm_res {
@@ -93,6 +94,7 @@ static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res)
.vers = 3,
.proc = NLMPROC_NSM_NOTIFY,
.mon_name = nsm->sm_mon_name,
+ .nodename = utsname()->nodename,
};
struct rpc_message msg = {
.rpc_argp = &args,
@@ -429,7 +431,7 @@ static void encode_my_id(struct xdr_stream *xdr, const struct nsm_args *argp)
{
__be32 *p;
- encode_nsm_string(xdr, utsname()->nodename);
+ encode_nsm_string(xdr, argp->nodename);
p = xdr_reserve_space(xdr, 4 + 4 + 4);
*p++ = cpu_to_be32(argp->prog);
*p++ = cpu_to_be32(argp->vers);
diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index d774309..1aaa0ee 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -164,25 +164,39 @@ static struct bio *bl_alloc_init_bio(int npg, sector_t isect,
return bio;
}
-static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
+static struct bio *do_add_page_to_bio(struct bio *bio, int npg, int rw,
sector_t isect, struct page *page,
struct pnfs_block_extent *be,
void (*end_io)(struct bio *, int err),
- struct parallel_io *par)
+ struct parallel_io *par,
+ unsigned int offset, int len)
{
+ isect = isect + (offset >> SECTOR_SHIFT);
+ dprintk("%s: npg %d rw %d isect %llu offset %u len %d\n", __func__,
+ npg, rw, (unsigned long long)isect, offset, len);
retry:
if (!bio) {
bio = bl_alloc_init_bio(npg, isect, be, end_io, par);
if (!bio)
return ERR_PTR(-ENOMEM);
}
- if (bio_add_page(bio, page, PAGE_CACHE_SIZE, 0) < PAGE_CACHE_SIZE) {
+ if (bio_add_page(bio, page, len, offset) < len) {
bio = bl_submit_bio(rw, bio);
goto retry;
}
return bio;
}
+static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
+ sector_t isect, struct page *page,
+ struct pnfs_block_extent *be,
+ void (*end_io)(struct bio *, int err),
+ struct parallel_io *par)
+{
+ return do_add_page_to_bio(bio, npg, rw, isect, page, be,
+ end_io, par, 0, PAGE_CACHE_SIZE);
+}
+
/* This is basically copied from mpage_end_io_read */
static void bl_end_io_read(struct bio *bio, int err)
{
@@ -446,6 +460,106 @@ map_block(struct buffer_head *bh, sector_t isect, struct pnfs_block_extent *be)
return;
}
+static void
+bl_read_single_end_io(struct bio *bio, int error)
+{
+ struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+ struct page *page = bvec->bv_page;
+
+ /* Only one page in bvec */
+ unlock_page(page);
+}
+
+static int
+bl_do_readpage_sync(struct page *page, struct pnfs_block_extent *be,
+ unsigned int offset, unsigned int len)
+{
+ struct bio *bio;
+ struct page *shadow_page;
+ sector_t isect;
+ char *kaddr, *kshadow_addr;
+ int ret = 0;
+
+ dprintk("%s: offset %u len %u\n", __func__, offset, len);
+
+ shadow_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
+ if (shadow_page == NULL)
+ return -ENOMEM;
+
+ bio = bio_alloc(GFP_NOIO, 1);
+ if (bio == NULL)
+ return -ENOMEM;
+
+ isect = (page->index << PAGE_CACHE_SECTOR_SHIFT) +
+ (offset / SECTOR_SIZE);
+
+ bio->bi_sector = isect - be->be_f_offset + be->be_v_offset;
+ bio->bi_bdev = be->be_mdev;
+ bio->bi_end_io = bl_read_single_end_io;
+
+ lock_page(shadow_page);
+ if (bio_add_page(bio, shadow_page,
+ SECTOR_SIZE, round_down(offset, SECTOR_SIZE)) == 0) {
+ unlock_page(shadow_page);
+ bio_put(bio);
+ return -EIO;
+ }
+
+ submit_bio(READ, bio);
+ wait_on_page_locked(shadow_page);
+ if (unlikely(!test_bit(BIO_UPTODATE, &bio->bi_flags))) {
+ ret = -EIO;
+ } else {
+ kaddr = kmap_atomic(page);
+ kshadow_addr = kmap_atomic(shadow_page);
+ memcpy(kaddr + offset, kshadow_addr + offset, len);
+ kunmap_atomic(kshadow_addr);
+ kunmap_atomic(kaddr);
+ }
+ __free_page(shadow_page);
+ bio_put(bio);
+
+ return ret;
+}
+
+static int
+bl_read_partial_page_sync(struct page *page, struct pnfs_block_extent *be,
+ unsigned int dirty_offset, unsigned int dirty_len,
+ bool full_page)
+{
+ int ret = 0;
+ unsigned int start, end;
+
+ if (full_page) {
+ start = 0;
+ end = PAGE_CACHE_SIZE;
+ } else {
+ start = round_down(dirty_offset, SECTOR_SIZE);
+ end = round_up(dirty_offset + dirty_len, SECTOR_SIZE);
+ }
+
+ dprintk("%s: offset %u len %d\n", __func__, dirty_offset, dirty_len);
+ if (!be) {
+ zero_user_segments(page, start, dirty_offset,
+ dirty_offset + dirty_len, end);
+ if (start == 0 && end == PAGE_CACHE_SIZE &&
+ trylock_page(page)) {
+ SetPageUptodate(page);
+ unlock_page(page);
+ }
+ return ret;
+ }
+
+ if (start != dirty_offset)
+ ret = bl_do_readpage_sync(page, be, start, dirty_offset - start);
+
+ if (!ret && (dirty_offset + dirty_len < end))
+ ret = bl_do_readpage_sync(page, be, dirty_offset + dirty_len,
+ end - dirty_offset - dirty_len);
+
+ return ret;
+}
+
/* Given an unmapped page, zero it or read in page for COW, page is locked
* by caller.
*/
@@ -479,7 +593,6 @@ init_page_for_write(struct page *page, struct pnfs_block_extent *cow_read)
SetPageUptodate(page);
cleanup:
- bl_put_extent(cow_read);
if (bh)
free_buffer_head(bh);
if (ret) {
@@ -501,6 +614,7 @@ bl_write_pagelist(struct nfs_write_data *wdata, int sync)
struct parallel_io *par;
loff_t offset = wdata->args.offset;
size_t count = wdata->args.count;
+ unsigned int pg_offset, pg_len, saved_len;
struct page **pages = wdata->args.pages;
struct page *page;
pgoff_t index;
@@ -615,10 +729,11 @@ next_page:
if (!extent_length) {
/* We've used up the previous extent */
bl_put_extent(be);
+ bl_put_extent(cow_read);
bio = bl_submit_bio(WRITE, bio);
/* Get the next one */
be = bl_find_get_extent(BLK_LSEG2EXT(wdata->lseg),
- isect, NULL);
+ isect, &cow_read);
if (!be || !is_writable(be, isect)) {
wdata->pnfs_error = -EINVAL;
goto out;
@@ -626,7 +741,26 @@ next_page:
extent_length = be->be_length -
(isect - be->be_f_offset);
}
- if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
+
+ dprintk("%s offset %lld count %Zu\n", __func__, offset, count);
+ pg_offset = offset & ~PAGE_CACHE_MASK;
+ if (pg_offset + count > PAGE_CACHE_SIZE)
+ pg_len = PAGE_CACHE_SIZE - pg_offset;
+ else
+ pg_len = count;
+
+ saved_len = pg_len;
+ if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
+ !bl_is_sector_init(be->be_inval, isect)) {
+ ret = bl_read_partial_page_sync(pages[i], cow_read,
+ pg_offset, pg_len, true);
+ if (ret) {
+ dprintk("%s bl_read_partial_page_sync fail %d\n",
+ __func__, ret);
+ wdata->pnfs_error = ret;
+ goto out;
+ }
+
ret = bl_mark_sectors_init(be->be_inval, isect,
PAGE_CACHE_SECTORS,
NULL);
@@ -636,15 +770,35 @@ next_page:
wdata->pnfs_error = ret;
goto out;
}
+
+ /* Expand to full page write */
+ pg_offset = 0;
+ pg_len = PAGE_CACHE_SIZE;
+ } else if ((pg_offset & (SECTOR_SIZE - 1)) ||
+ (pg_len & (SECTOR_SIZE - 1))){
+ /* ahh, nasty case. We have to do sync full sector
+ * read-modify-write cycles.
+ */
+ unsigned int saved_offset = pg_offset;
+ ret = bl_read_partial_page_sync(pages[i], be, pg_offset,
+ pg_len, false);
+ pg_offset = round_down(pg_offset, SECTOR_SIZE);
+ pg_len = round_up(saved_offset + pg_len, SECTOR_SIZE)
+ - pg_offset;
}
- bio = bl_add_page_to_bio(bio, wdata->npages - i, WRITE,
+
+
+ bio = do_add_page_to_bio(bio, wdata->npages - i, WRITE,
isect, pages[i], be,
- bl_end_io_write, par);
+ bl_end_io_write, par,
+ pg_offset, pg_len);
if (IS_ERR(bio)) {
wdata->pnfs_error = PTR_ERR(bio);
bio = NULL;
goto out;
}
+ offset += saved_len;
+ count -= saved_len;
isect += PAGE_CACHE_SECTORS;
last_isect = isect;
extent_length -= PAGE_CACHE_SECTORS;
@@ -662,12 +816,10 @@ next_page:
}
write_done:
- wdata->res.count = (last_isect << SECTOR_SHIFT) - (offset);
- if (count < wdata->res.count) {
- wdata->res.count = count;
- }
+ wdata->res.count = wdata->args.count;
out:
bl_put_extent(be);
+ bl_put_extent(cow_read);
bl_submit_bio(WRITE, bio);
put_parallel(par);
return PNFS_ATTEMPTED;
diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
index 42acf7e..519a9de 100644
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -40,6 +40,7 @@
#define PAGE_CACHE_SECTORS (PAGE_CACHE_SIZE >> SECTOR_SHIFT)
#define PAGE_CACHE_SECTOR_SHIFT (PAGE_CACHE_SHIFT - SECTOR_SHIFT)
+#define SECTOR_SIZE (1 << SECTOR_SHIFT)
struct block_mount_id {
spinlock_t bm_lock; /* protects list */
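
bl_read_partial_page_sync() above widens a dirty byte range to whole sectors (using the SECTOR_SIZE just defined) before reading, so the subsequent write always covers full 512-byte units. The rounding is plain mask arithmetic; a small standalone version (the macros mirror the kernel's round_down()/round_up() for powers of two):

#include <stdio.h>

#define SECTOR_SHIFT	9
#define SECTOR_SIZE	(1u << SECTOR_SHIFT)		/* 512 bytes */

/* Same effect as the kernel's round_down()/round_up() for powers of two. */
#define ROUND_DOWN(x, a)	((x) & ~((a) - 1u))
#define ROUND_UP(x, a)		ROUND_DOWN((x) + (a) - 1u, (a))

int main(void)
{
	unsigned int dirty_offset = 700, dirty_len = 100;
	unsigned int start = ROUND_DOWN(dirty_offset, SECTOR_SIZE);
	unsigned int end = ROUND_UP(dirty_offset + dirty_len, SECTOR_SIZE);

	/* Bytes [700,800) dirty -> sectors [512,1024) must be read first. */
	printf("read [%u,%u) and [%u,%u), then write [%u,%u)\n",
	       start, dirty_offset, dirty_offset + dirty_len, end, start, end);
	return 0;
}
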
diff --git a/fs/udf/super.c b/fs/udf/super.c
index 516b7f0..f66439e 100644
--- a/fs/udf/super.c
+++ b/fs/udf/super.c
@@ -1289,6 +1289,7 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
udf_err(sb, "error loading logical volume descriptor: "
"Partition table too long (%u > %lu)\n", table_len,
sb->s_blocksize - sizeof(*lvd));
+ ret = 1;
goto out_bh;
}
@@ -1333,8 +1334,10 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
UDF_ID_SPARABLE,
strlen(UDF_ID_SPARABLE))) {
if (udf_load_sparable_map(sb, map,
- (struct sparablePartitionMap *)gpm) < 0)
+ (struct sparablePartitionMap *)gpm) < 0) {
+ ret = 1;
goto out_bh;
+ }
} else if (!strncmp(upm2->partIdent.ident,
UDF_ID_METADATA,
strlen(UDF_ID_METADATA))) {
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 7978eec..3e8f2f7 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -188,7 +188,7 @@ struct sp_node {
struct shared_policy {
struct rb_root root;
- spinlock_t lock;
+ struct mutex mutex;
};
void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 67cc215..1874c5e 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -1823,7 +1823,6 @@
#define PCI_DEVICE_ID_SIIG_8S_20x_650 0x2081
#define PCI_DEVICE_ID_SIIG_8S_20x_850 0x2082
#define PCI_SUBDEVICE_ID_SIIG_QUARTET_SERIAL 0x2050
-#define PCI_SUBDEVICE_ID_SIIG_DUAL_SERIAL 0x2530
#define PCI_VENDOR_ID_RADISYS 0x1331
diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
index e5a7b9a..416dcb0 100644
--- a/include/net/ip_vs.h
+++ b/include/net/ip_vs.h
@@ -1353,7 +1353,7 @@ static inline void ip_vs_notrack(struct sk_buff *skb)
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
if (!ct || !nf_ct_is_untracked(ct)) {
- nf_reset(skb);
+ nf_conntrack_put(skb->nfct);
skb->nfct = &nf_ct_untracked_get()->ct_general;
skb->nfctinfo = IP_CT_NEW;
nf_conntrack_get(skb->nfct);
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 6b76d81..a122196 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -292,7 +292,9 @@ cpu_has_callbacks_ready_to_invoke(struct rcu_data *rdp)
static int
cpu_needs_another_gp(struct rcu_state *rsp, struct rcu_data *rdp)
{
- return *rdp->nxttail[RCU_DONE_TAIL] && !rcu_gp_in_progress(rsp);
+ return *rdp->nxttail[RCU_DONE_TAIL +
+ ACCESS_ONCE(rsp->completed) != rdp->completed] &&
+ !rcu_gp_in_progress(rsp);
}
/*
diff --git a/kernel/sched_stoptask.c b/kernel/sched_stoptask.c
index 8b44e7f..85e9da2 100644
--- a/kernel/sched_stoptask.c
+++ b/kernel/sched_stoptask.c
@@ -25,8 +25,10 @@ static struct task_struct *pick_next_task_stop(struct rq *rq)
{
struct task_struct *stop = rq->stop;
- if (stop && stop->on_rq)
+ if (stop && stop->on_rq) {
+ stop->se.exec_start = rq->clock_task;
return stop;
+ }
return NULL;
}
@@ -50,6 +52,21 @@ static void yield_task_stop(struct rq *rq)
static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
{
+ struct task_struct *curr = rq->curr;
+ u64 delta_exec;
+
+ delta_exec = rq->clock_task - curr->se.exec_start;
+ if (unlikely((s64)delta_exec < 0))
+ delta_exec = 0;
+
+ schedstat_set(curr->se.statistics.exec_max,
+ max(curr->se.statistics.exec_max, delta_exec));
+
+ curr->se.sum_exec_runtime += delta_exec;
+ account_group_exec_runtime(curr, delta_exec);
+
+ curr->se.exec_start = rq->clock_task;
+ cpuacct_charge(curr, delta_exec);
}
static void task_tick_stop(struct rq *rq, struct task_struct *curr, int queued)
@@ -58,6 +75,9 @@ static void task_tick_stop(struct rq *rq, struct task_struct *curr, int queued)
static void set_curr_task_stop(struct rq *rq)
{
+ struct task_struct *stop = rq->stop;
+
+ stop->se.exec_start = rq->clock_task;
}
static void switched_to_stop(struct rq *rq, struct task_struct *p)
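
put_prev_task_stop() above charges the elapsed rq->clock_task delta to the stop task and clamps a negative delta to zero, because the unsigned subtraction would otherwise wrap to a huge value if the clock appears to step backwards. The guard in isolation, as a tiny standalone example:

#include <stdint.h>
#include <stdio.h>

/* Charge the time since 'start' to a running total, tolerating a
 * clock that briefly appears to run backwards. */
static uint64_t charge_runtime(uint64_t now, uint64_t start, uint64_t sum)
{
	uint64_t delta = now - start;		/* wraps if now < start */

	if ((int64_t)delta < 0)			/* same check as the patch */
		delta = 0;

	return sum + delta;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)charge_runtime(1000, 400, 0)); /* 600 */
	printf("%llu\n", (unsigned long long)charge_runtime(400, 1000, 0)); /* clamped: 0 */
	return 0;
}
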
diff --git a/kernel/sys.c b/kernel/sys.c
index 481611f..c504302 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -365,6 +365,7 @@ EXPORT_SYMBOL(unregister_reboot_notifier);
void kernel_restart(char *cmd)
{
kernel_restart_prepare(cmd);
+ disable_nonboot_cpus();
if (!cmd)
printk(KERN_EMERG "Restarting system.\n");
else
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b413138..43a19c5 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1726,10 +1726,9 @@ static void move_linked_works(struct work_struct *work, struct list_head *head,
*nextp = n;
}
-static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
+static void cwq_activate_delayed_work(struct work_struct *work)
{
- struct work_struct *work = list_first_entry(&cwq->delayed_works,
- struct work_struct, entry);
+ struct cpu_workqueue_struct *cwq = get_work_cwq(work);
struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
trace_workqueue_activate_work(work);
@@ -1738,6 +1737,14 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
cwq->nr_active++;
}
+static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
+{
+ struct work_struct *work = list_first_entry(&cwq->delayed_works,
+ struct work_struct, entry);
+
+ cwq_activate_delayed_work(work);
+}
+
/**
* cwq_dec_nr_in_flight - decrement cwq's nr_in_flight
* @cwq: cwq of interest
@@ -1869,7 +1876,9 @@ __acquires(&gcwq->lock)
spin_unlock_irq(&gcwq->lock);
+ smp_wmb(); /* paired with test_and_set_bit(PENDING) */
work_clear_pending(work);
+
lock_map_acquire_read(&cwq->wq->lockdep_map);
lock_map_acquire(&lockdep_map);
trace_workqueue_execute_start(work);
@@ -2626,6 +2635,18 @@ static int try_to_grab_pending(struct work_struct *work)
smp_rmb();
if (gcwq == get_work_gcwq(work)) {
debug_work_deactivate(work);
+
+ /*
+ * A delayed work item cannot be grabbed directly
+ * because it might have linked NO_COLOR work items
+ * which, if left on the delayed_list, will confuse
+ * cwq->nr_active management later on and cause
+ * stall. Make sure the work item is activated
+ * before grabbing.
+ */
+ if (*work_data_bits(work) & WORK_STRUCT_DELAYED)
+ cwq_activate_delayed_work(work);
+
list_del_init(&work->entry);
cwq_dec_nr_in_flight(get_work_cwq(work),
get_work_color(work),
diff --git a/lib/gcd.c b/lib/gcd.c
index f879033..433d89b 100644
--- a/lib/gcd.c
+++ b/lib/gcd.c
@@ -9,6 +9,9 @@ unsigned long gcd(unsigned long a, unsigned long b)
if (a < b)
swap(a, b);
+
+ if (!b)
+ return a;
while ((r = a % b) != 0) {
a = b;
b = r;
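
The lib/gcd.c hunk adds an explicit zero check: after the swap b holds the smaller argument, and without the guard a % b divides by zero whenever either argument is zero. A standalone copy of the fixed routine with a couple of sanity checks:

#include <assert.h>

/* Euclid's algorithm, with gcd(x, 0) == x handled explicitly. */
static unsigned long gcd(unsigned long a, unsigned long b)
{
	unsigned long r;

	if (a < b) {		/* ensure a >= b */
		unsigned long t = a;
		a = b;
		b = t;
	}
	if (!b)			/* the added guard: avoid a % 0 */
		return a;
	while ((r = a % b) != 0) {
		a = b;
		b = r;
	}
	return b;
}

int main(void)
{
	assert(gcd(12, 18) == 6);
	assert(gcd(7, 0) == 7);		/* would have divided by zero before */
	assert(gcd(0, 5) == 5);
	return 0;
}
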
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0f897b8..d6c0fdf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2429,8 +2429,8 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
* from page cache lookup which is in HPAGE_SIZE units.
*/
address = address & huge_page_mask(h);
- pgoff = ((address - vma->vm_start) >> PAGE_SHIFT)
- + (vma->vm_pgoff >> PAGE_SHIFT);
+ pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) +
+ vma->vm_pgoff;
mapping = vma->vm_file->f_dentry->d_inode->i_mapping;
/*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 11b8d47..4c82c21 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -607,24 +607,39 @@ check_range(struct mm_struct *mm, unsigned long start, unsigned long end,
return first;
}
-/* Apply policy to a single VMA */
-static int policy_vma(struct vm_area_struct *vma, struct mempolicy *new)
+/*
+ * Apply policy to a single VMA
+ * This must be called with the mmap_sem held for writing.
+ */
+static int vma_replace_policy(struct vm_area_struct *vma,
+ struct mempolicy *pol)
{
- int err = 0;
- struct mempolicy *old = vma->vm_policy;
+ int err;
+ struct mempolicy *old;
+ struct mempolicy *new;
pr_debug("vma %lx-%lx/%lx vm_ops %p vm_file %p set_policy %p\n",
vma->vm_start, vma->vm_end, vma->vm_pgoff,
vma->vm_ops, vma->vm_file,
vma->vm_ops ? vma->vm_ops->set_policy : NULL);
- if (vma->vm_ops && vma->vm_ops->set_policy)
+ new = mpol_dup(pol);
+ if (IS_ERR(new))
+ return PTR_ERR(new);
+
+ if (vma->vm_ops && vma->vm_ops->set_policy) {
err = vma->vm_ops->set_policy(vma, new);
- if (!err) {
- mpol_get(new);
- vma->vm_policy = new;
- mpol_put(old);
+ if (err)
+ goto err_out;
}
+
+ old = vma->vm_policy;
+ vma->vm_policy = new; /* protected by mmap_sem */
+ mpol_put(old);
+
+ return 0;
+ err_out:
+ mpol_put(new);
return err;
}
@@ -675,7 +690,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
if (err)
goto out;
}
- err = policy_vma(vma, new_pol);
+ err = vma_replace_policy(vma, new_pol);
if (err)
goto out;
}
@@ -1507,8 +1522,18 @@ struct mempolicy *get_vma_policy(struct task_struct *task,
addr);
if (vpol)
pol = vpol;
- } else if (vma->vm_policy)
+ } else if (vma->vm_policy) {
pol = vma->vm_policy;
+
+ /*
+ * shmem_alloc_page() passes MPOL_F_SHARED policy with
+ * a pseudo vma whose vma->vm_ops=NULL. Take a reference
+ * count on these policies which will be dropped by
+ * mpol_cond_put() later
+ */
+ if (mpol_needs_cond_ref(pol))
+ mpol_get(pol);
+ }
}
if (!pol)
pol = &default_policy;
@@ -2032,7 +2057,7 @@ int __mpol_equal(struct mempolicy *a, struct mempolicy *b)
*/
/* lookup first element intersecting start-end */
-/* Caller holds sp->lock */
+/* Caller holds sp->mutex */
static struct sp_node *
sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
{
@@ -2096,36 +2121,50 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
if (!sp->root.rb_node)
return NULL;
- spin_lock(&sp->lock);
+ mutex_lock(&sp->mutex);
sn = sp_lookup(sp, idx, idx+1);
if (sn) {
mpol_get(sn->policy);
pol = sn->policy;
}
- spin_unlock(&sp->lock);
+ mutex_unlock(&sp->mutex);
return pol;
}
+static void sp_free(struct sp_node *n)
+{
+ mpol_put(n->policy);
+ kmem_cache_free(sn_cache, n);
+}
+
static void sp_delete(struct shared_policy *sp, struct sp_node *n)
{
pr_debug("deleting %lx-l%lx\n", n->start, n->end);
rb_erase(&n->nd, &sp->root);
- mpol_put(n->policy);
- kmem_cache_free(sn_cache, n);
+ sp_free(n);
}
static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
struct mempolicy *pol)
{
- struct sp_node *n = kmem_cache_alloc(sn_cache, GFP_KERNEL);
+ struct sp_node *n;
+ struct mempolicy *newpol;
+ n = kmem_cache_alloc(sn_cache, GFP_KERNEL);
if (!n)
return NULL;
+
+ newpol = mpol_dup(pol);
+ if (IS_ERR(newpol)) {
+ kmem_cache_free(sn_cache, n);
+ return NULL;
+ }
+ newpol->flags |= MPOL_F_SHARED;
+
n->start = start;
n->end = end;
- mpol_get(pol);
- pol->flags |= MPOL_F_SHARED; /* for unref */
- n->policy = pol;
+ n->policy = newpol;
+
return n;
}
@@ -2133,10 +2172,10 @@ static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
unsigned long end, struct sp_node *new)
{
- struct sp_node *n, *new2 = NULL;
+ struct sp_node *n;
+ int ret = 0;
-restart:
- spin_lock(&sp->lock);
+ mutex_lock(&sp->mutex);
n = sp_lookup(sp, start, end);
/* Take care of old policies in the same range. */
while (n && n->start < end) {
@@ -2149,16 +2188,14 @@ restart:
} else {
/* Old policy spanning whole new range. */
if (n->end > end) {
+ struct sp_node *new2;
+ new2 = sp_alloc(end, n->end, n->policy);
if (!new2) {
- spin_unlock(&sp->lock);
- new2 = sp_alloc(end, n->end, n->policy);
- if (!new2)
- return -ENOMEM;
- goto restart;
+ ret = -ENOMEM;
+ goto out;
}
n->end = start;
sp_insert(sp, new2);
- new2 = NULL;
break;
} else
n->end = start;
@@ -2169,12 +2206,9 @@ restart:
}
if (new)
sp_insert(sp, new);
- spin_unlock(&sp->lock);
- if (new2) {
- mpol_put(new2->policy);
- kmem_cache_free(sn_cache, new2);
- }
- return 0;
+out:
+ mutex_unlock(&sp->mutex);
+ return ret;
}
/**
@@ -2192,7 +2226,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
int ret;
sp->root = RB_ROOT; /* empty tree == default mempolicy */
- spin_lock_init(&sp->lock);
+ mutex_init(&sp->mutex);
if (mpol) {
struct vm_area_struct pvma;
@@ -2246,7 +2280,7 @@ int mpol_set_shared_policy(struct shared_policy *info,
}
err = shared_policy_replace(info, vma->vm_pgoff, vma->vm_pgoff+sz, new);
if (err && new)
- kmem_cache_free(sn_cache, new);
+ sp_free(new);
return err;
}
@@ -2258,16 +2292,14 @@ void mpol_free_shared_policy(struct shared_policy *p)
if (!p->root.rb_node)
return;
- spin_lock(&p->lock);
+ mutex_lock(&p->mutex);
next = rb_first(&p->root);
while (next) {
n = rb_entry(next, struct sp_node, nd);
next = rb_next(&n->nd);
- rb_erase(&n->nd, &p->root);
- mpol_put(n->policy);
- kmem_cache_free(sn_cache, n);
+ sp_delete(p, n);
}
- spin_unlock(&p->lock);
+ mutex_unlock(&p->mutex);
}
/* assumes fs == KERNEL_DS */
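
vma_replace_policy() above no longer hands the caller's mempolicy straight to the VMA: it duplicates it, installs the copy under mmap_sem held for writing, and only then drops the old one, so error paths can free the copy without disturbing the VMA. A rough userspace sketch of that duplicate-then-swap pattern (POSIX threads assumed; the refcounting of the real code is reduced to plain malloc/free here):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct policy {
	int mode;
};

static pthread_mutex_t owner_lock = PTHREAD_MUTEX_INITIALIZER;
static struct policy *current_policy;	/* the slot being replaced */

static struct policy *policy_dup(const struct policy *src)
{
	struct policy *p = malloc(sizeof(*p));

	if (p)
		memcpy(p, src, sizeof(*p));
	return p;
}

/*
 * Install a private copy of 'pol'.  The old policy is only dropped
 * after the copy is safely in place, and a failed duplication leaves
 * the current policy untouched.
 */
static int replace_policy(const struct policy *pol)
{
	struct policy *new = policy_dup(pol);
	struct policy *old;

	if (!new)
		return -1;

	pthread_mutex_lock(&owner_lock);
	old = current_policy;
	current_policy = new;
	pthread_mutex_unlock(&owner_lock);

	free(old);
	return 0;
}

int main(void)
{
	struct policy interleave = { .mode = 3 };

	if (replace_policy(&interleave))
		return 1;
	printf("installed mode %d\n", current_policy->mode);
	free(current_policy);
	return 0;
}
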
diff --git a/mm/slab.c b/mm/slab.c
index cd3ab93..4c3b671 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1669,9 +1669,6 @@ void __init kmem_cache_init_late(void)
g_cpucache_up = LATE;
- /* Annotate slab for lockdep -- annotate the malloc caches */
- init_lock_keys();
-
/* 6) resize the head arrays to their final sizes */
mutex_lock(&cache_chain_mutex);
list_for_each_entry(cachep, &cache_chain, next)
@@ -1679,6 +1676,9 @@ void __init kmem_cache_init_late(void)
BUG();
mutex_unlock(&cache_chain_mutex);
+ /* Annotate slab for lockdep -- annotate the malloc caches */
+ init_lock_keys();
+
/* Done! */
g_cpucache_up = FULL;
diff --git a/mm/truncate.c b/mm/truncate.c
index 632b15e..00fb58a 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -394,11 +394,12 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
return 0;
+ clear_page_mlock(page);
+
spin_lock_irq(&mapping->tree_lock);
if (PageDirty(page))
goto failed;
- clear_page_mlock(page);
BUG_ON(page_has_private(page));
__delete_from_page_cache(page);
spin_unlock_irq(&mapping->tree_lock);
diff --git a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
index de9da21..d7d63f4 100644
--- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+++ b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
@@ -84,6 +84,14 @@ static int ipv4_get_l4proto(const struct sk_buff *skb, unsigned int nhoff,
*dataoff = nhoff + (iph->ihl << 2);
*protonum = iph->protocol;
+ /* Check bogus IP headers */
+ if (*dataoff > skb->len) {
+ pr_debug("nf_conntrack_ipv4: bogus IPv4 packet: "
+ "nhoff %u, ihl %u, skblen %u\n",
+ nhoff, iph->ihl << 2, skb->len);
+ return -NF_ACCEPT;
+ }
+
return NF_ACCEPT;
}
diff --git a/net/ipv4/netfilter/nf_nat_sip.c b/net/ipv4/netfilter/nf_nat_sip.c
index 78844d9..6609a84 100644
--- a/net/ipv4/netfilter/nf_nat_sip.c
+++ b/net/ipv4/netfilter/nf_nat_sip.c
@@ -148,7 +148,7 @@ static unsigned int ip_nat_sip(struct sk_buff *skb, unsigned int dataoff,
if (ct_sip_parse_header_uri(ct, *dptr, NULL, *datalen,
hdr, NULL, &matchoff, &matchlen,
&addr, &port) > 0) {
- unsigned int matchend, poff, plen, buflen, n;
+ unsigned int olen, matchend, poff, plen, buflen, n;
char buffer[sizeof("nnn.nnn.nnn.nnn:nnnnn")];
/* We're only interested in headers related to this
@@ -163,11 +163,12 @@ static unsigned int ip_nat_sip(struct sk_buff *skb, unsigned int dataoff,
goto next;
}
+ olen = *datalen;
if (!map_addr(skb, dataoff, dptr, datalen, matchoff, matchlen,
&addr, port))
return NF_DROP;
- matchend = matchoff + matchlen;
+ matchend = matchoff + matchlen + *datalen - olen;
/* The maddr= parameter (RFC 2361) specifies where to send
* the reply. */
@@ -501,7 +502,10 @@ static unsigned int ip_nat_sdp_media(struct sk_buff *skb, unsigned int dataoff,
ret = nf_ct_expect_related(rtcp_exp);
if (ret == 0)
break;
- else if (ret != -EBUSY) {
+ else if (ret == -EBUSY) {
+ nf_ct_unexpect_related(rtp_exp);
+ continue;
+ } else if (ret < 0) {
nf_ct_unexpect_related(rtp_exp);
port = 0;
break;
diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
index 340c80d..7918eb7 100644
--- a/net/netfilter/nf_conntrack_expect.c
+++ b/net/netfilter/nf_conntrack_expect.c
@@ -366,23 +366,6 @@ static void evict_oldest_expect(struct nf_conn *master,
}
}
-static inline int refresh_timer(struct nf_conntrack_expect *i)
-{
- struct nf_conn_help *master_help = nfct_help(i->master);
- const struct nf_conntrack_expect_policy *p;
-
- if (!del_timer(&i->timeout))
- return 0;
-
- p = &rcu_dereference_protected(
- master_help->helper,
- lockdep_is_held(&nf_conntrack_lock)
- )->expect_policy[i->class];
- i->timeout.expires = jiffies + p->timeout * HZ;
- add_timer(&i->timeout);
- return 1;
-}
-
static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
{
const struct nf_conntrack_expect_policy *p;
@@ -390,7 +373,7 @@ static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
struct nf_conn *master = expect->master;
struct nf_conn_help *master_help = nfct_help(master);
struct net *net = nf_ct_exp_net(expect);
- struct hlist_node *n;
+ struct hlist_node *n, *next;
unsigned int h;
int ret = 1;
@@ -401,12 +384,12 @@ static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
goto out;
}
h = nf_ct_expect_dst_hash(&expect->tuple);
- hlist_for_each_entry(i, n, &net->ct.expect_hash[h], hnode) {
+ hlist_for_each_entry_safe(i, n, next, &net->ct.expect_hash[h], hnode) {
if (expect_matches(i, expect)) {
- /* Refresh timer: if it's dying, ignore.. */
- if (refresh_timer(i)) {
- ret = 0;
- goto out;
+ if (del_timer(&i->timeout)) {
+ nf_ct_unlink_expect(i);
+ nf_ct_expect_put(i);
+ break;
}
} else if (expect_clash(i, expect)) {
ret = -EBUSY;
diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
index dfd52ba..8f3f280 100644
--- a/net/netfilter/xt_hashlimit.c
+++ b/net/netfilter/xt_hashlimit.c
@@ -389,8 +389,7 @@ static void htable_put(struct xt_hashlimit_htable *hinfo)
#define CREDITS_PER_JIFFY POW2_BELOW32(MAX_CPJ)
/* Precision saver. */
-static inline u_int32_t
-user2credits(u_int32_t user)
+static u32 user2credits(u32 user)
{
/* If multiplying would overflow... */
if (user > 0xFFFFFFFF / (HZ*CREDITS_PER_JIFFY))
@@ -400,7 +399,7 @@ user2credits(u_int32_t user)
return (user * HZ * CREDITS_PER_JIFFY) / XT_HASHLIMIT_SCALE;
}
-static inline void rateinfo_recalc(struct dsthash_ent *dh, unsigned long now)
+static void rateinfo_recalc(struct dsthash_ent *dh, unsigned long now)
{
dh->rateinfo.credit += (now - dh->rateinfo.prev) * CREDITS_PER_JIFFY;
if (dh->rateinfo.credit > dh->rateinfo.credit_cap)
@@ -531,8 +530,7 @@ hashlimit_mt(const struct sk_buff *skb, struct xt_action_param *par)
dh->rateinfo.prev = jiffies;
dh->rateinfo.credit = user2credits(hinfo->cfg.avg *
hinfo->cfg.burst);
- dh->rateinfo.credit_cap = user2credits(hinfo->cfg.avg *
- hinfo->cfg.burst);
+ dh->rateinfo.credit_cap = dh->rateinfo.credit;
dh->rateinfo.cost = user2credits(hinfo->cfg.avg);
} else {
/* update expiration timeout */
diff --git a/net/netfilter/xt_limit.c b/net/netfilter/xt_limit.c
index 32b7a57..a4c1e45 100644
--- a/net/netfilter/xt_limit.c
+++ b/net/netfilter/xt_limit.c
@@ -88,8 +88,7 @@ limit_mt(const struct sk_buff *skb, struct xt_action_param *par)
}
/* Precision saver. */
-static u_int32_t
-user2credits(u_int32_t user)
+static u32 user2credits(u32 user)
{
/* If multiplying would overflow... */
if (user > 0xFFFFFFFF / (HZ*CREDITS_PER_JIFFY))
@@ -118,12 +117,12 @@ static int limit_mt_check(const struct xt_mtchk_param *par)
/* For SMP, we only want to use one set of state. */
r->master = priv;
+ /* User avg in seconds * XT_LIMIT_SCALE: convert to jiffies *
+ 128. */
+ priv->prev = jiffies;
+ priv->credit = user2credits(r->avg * r->burst); /* Credits full. */
if (r->cost == 0) {
- /* User avg in seconds * XT_LIMIT_SCALE: convert to jiffies *
- 128. */
- priv->prev = jiffies;
- priv->credit = user2credits(r->avg * r->burst); /* Credits full. */
- r->credit_cap = user2credits(r->avg * r->burst); /* Credits full. */
+ r->credit_cap = priv->credit; /* Credits full. */
r->cost = user2credits(r->avg);
}
return 0;
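
Both netfilter hunks keep the user2credits() "precision saver": if user * HZ * CREDITS_PER_JIFFY would overflow 32 bits, it divides by the scale first and multiplies afterwards, trading a little precision for correctness. The same guard in standalone form (HZ, CREDITS_PER_JIFFY and the scale are example values, not the kernel's):

#include <stdint.h>
#include <stdio.h>

#define HZ			100u		/* example tick rate */
#define CREDITS_PER_JIFFY	128u
#define SCALE			10000u		/* stands in for XT_LIMIT_SCALE */

/* Convert a user-supplied average into internal credits without
 * overflowing 32-bit arithmetic. */
static uint32_t user2credits(uint32_t user)
{
	if (user > 0xFFFFFFFFu / (HZ * CREDITS_PER_JIFFY))
		/* Divide first: loses a little precision, never overflows. */
		return (user / SCALE) * HZ * CREDITS_PER_JIFFY;

	return (user * HZ * CREDITS_PER_JIFFY) / SCALE;
}

int main(void)
{
	printf("%u\n", user2credits(20000));	   /* small value: exact path */
	printf("%u\n", user2credits(400000000u));  /* large value: divide-first path */
	return 0;
}

The hunks also stop calling user2credits() twice for the same input, reusing the already-computed credit value for credit_cap.
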
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index c5391af..10a385b 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1028,6 +1028,16 @@ static void xs_udp_data_ready(struct sock *sk, int len)
read_unlock_bh(&sk->sk_callback_lock);
}
+/*
+ * Helper function to force a TCP close if the server is sending
+ * junk and/or it has put us in CLOSE_WAIT
+ */
+static void xs_tcp_force_close(struct rpc_xprt *xprt)
+{
+ set_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
+ xprt_force_disconnect(xprt);
+}
+
static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_reader *desc)
{
struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
@@ -1054,7 +1064,7 @@ static inline void xs_tcp_read_fraghdr(struct rpc_xprt *xprt, struct xdr_skb_rea
/* Sanity check of the record length */
if (unlikely(transport->tcp_reclen < 8)) {
dprintk("RPC: invalid TCP record fragment length\n");
- xprt_force_disconnect(xprt);
+ xs_tcp_force_close(xprt);
return;
}
dprintk("RPC: reading TCP record fragment of length %d\n",
@@ -1135,7 +1145,7 @@ static inline void xs_tcp_read_calldir(struct sock_xprt *transport,
break;
default:
dprintk("RPC: invalid request message type\n");
- xprt_force_disconnect(&transport->xprt);
+ xs_tcp_force_close(&transport->xprt);
}
xs_tcp_check_fraghdr(transport);
}
@@ -1458,6 +1468,8 @@ static void xs_tcp_cancel_linger_timeout(struct rpc_xprt *xprt)
static void xs_sock_mark_closed(struct rpc_xprt *xprt)
{
smp_mb__before_clear_bit();
+ clear_bit(XPRT_CONNECTION_ABORT, &xprt->state);
+ clear_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
clear_bit(XPRT_CLOSING, &xprt->state);
smp_mb__after_clear_bit();
@@ -1515,8 +1527,8 @@ static void xs_tcp_state_change(struct sock *sk)
break;
case TCP_CLOSE_WAIT:
/* The server initiated a shutdown of the socket */
- xprt_force_disconnect(xprt);
xprt->connect_cookie++;
+ xs_tcp_force_close(xprt);
case TCP_CLOSING:
/*
* If the server closed down the connection, make sure that
@@ -2159,8 +2171,7 @@ static void xs_tcp_setup_socket(struct work_struct *work)
/* We're probably in TIME_WAIT. Get rid of existing socket,
* and retry
*/
- set_bit(XPRT_CONNECTION_CLOSE, &xprt->state);
- xprt_force_disconnect(xprt);
+ xs_tcp_force_close(xprt);
break;
case -ECONNREFUSED:
case -ECONNRESET:
diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
index d897278..978416d 100644
--- a/scripts/Kbuild.include
+++ b/scripts/Kbuild.include
@@ -98,24 +98,24 @@ try-run = $(shell set -e; \
# Usage: cflags-y += $(call as-option,-Wa$(comma)-isa=foo,)
as-option = $(call try-run,\
- $(CC) $(KBUILD_CFLAGS) $(1) -c -xassembler /dev/null -o "$$TMP",$(1),$(2))
+ $(CC) $(KBUILD_CFLAGS) $(1) -c -x assembler /dev/null -o "$$TMP",$(1),$(2))
# as-instr
# Usage: cflags-y += $(call as-instr,instr,option1,option2)
as-instr = $(call try-run,\
- /bin/echo -e "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" -,$(2),$(3))
+ printf "%b\n" "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -x assembler -o "$$TMP" -,$(2),$(3))
# cc-option
# Usage: cflags-y += $(call cc-option,-march=winchip-c6,-march=i586)
cc-option = $(call try-run,\
- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -xc /dev/null -o "$$TMP",$(1),$(2))
+ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",$(1),$(2))
# cc-option-yn
# Usage: flag := $(call cc-option-yn,-march=winchip-c6)
cc-option-yn = $(call try-run,\
- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -xc /dev/null -o "$$TMP",y,n)
+ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null -o "$$TMP",y,n)
# cc-option-align
# Prefix align with either -falign or -malign
@@ -125,7 +125,7 @@ cc-option-align = $(subst -functions=0,,\
# cc-disable-warning
# Usage: cflags-y += $(call cc-disable-warning,unused-but-set-variable)
cc-disable-warning = $(call try-run,\
- $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -W$(strip $(1)) -c -xc /dev/null -o "$$TMP",-Wno-$(strip $(1)))
+ $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -W$(strip $(1)) -c -x c /dev/null -o "$$TMP",-Wno-$(strip $(1)))
# cc-version
# Usage gcc-ver := $(call cc-version)
@@ -143,7 +143,7 @@ cc-ifversion = $(shell [ $(call cc-version, $(CC)) $(1) $(2) ] && echo $(3))
# cc-ldoption
# Usage: ldflags += $(call cc-ldoption, -Wl$(comma)--hash-style=both)
cc-ldoption = $(call try-run,\
- $(CC) $(1) -nostdlib -xc /dev/null -o "$$TMP",$(1),$(2))
+ $(CC) $(1) -nostdlib -x c /dev/null -o "$$TMP",$(1),$(2))
# ld-option
# Usage: LDFLAGS += $(call ld-option, -X)
@@ -209,7 +209,7 @@ endif
# >$< substitution to preserve $ when reloading .cmd file
# note: when using inline perl scripts [perl -e '...$$t=1;...']
# in $(cmd_xxx) double $$ your perl vars
-make-cmd = $(subst \#,\\\#,$(subst $$,$$$$,$(call escsq,$(cmd_$(1)))))
+make-cmd = $(subst \\,\\\\,$(subst \#,\\\#,$(subst $$,$$$$,$(call escsq,$(cmd_$(1))))))
# Find any prerequisites that is newer than target or that does not exist.
# PHONY targets skipped in both cases.
diff --git a/scripts/Makefile.fwinst b/scripts/Makefile.fwinst
index 6bf8e87..c3f69ae 100644
--- a/scripts/Makefile.fwinst
+++ b/scripts/Makefile.fwinst
@@ -42,7 +42,7 @@ quiet_cmd_install = INSTALL $(subst $(srctree)/,,$@)
$(installed-fw-dirs):
$(call cmd,mkdir)
-$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $(INSTALL_FW_PATH)/$$(dir %)
+$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $$(dir $(INSTALL_FW_PATH)/%)
$(call cmd,install)
PHONY += __fw_install __fw_modinst FORCE
diff --git a/scripts/gcc-version.sh b/scripts/gcc-version.sh
index debecb5..7f2126d 100644
--- a/scripts/gcc-version.sh
+++ b/scripts/gcc-version.sh
@@ -22,10 +22,10 @@ if [ ${#compiler} -eq 0 ]; then
exit 1
fi
-MAJOR=$(echo __GNUC__ | $compiler -E -xc - | tail -n 1)
-MINOR=$(echo __GNUC_MINOR__ | $compiler -E -xc - | tail -n 1)
+MAJOR=$(echo __GNUC__ | $compiler -E -x c - | tail -n 1)
+MINOR=$(echo __GNUC_MINOR__ | $compiler -E -x c - | tail -n 1)
if [ "x$with_patchlevel" != "x" ] ; then
- PATCHLEVEL=$(echo __GNUC_PATCHLEVEL__ | $compiler -E -xc - | tail -n 1)
+ PATCHLEVEL=$(echo __GNUC_PATCHLEVEL__ | $compiler -E -x c - | tail -n 1)
printf "%02d%02d%02d\\n" $MAJOR $MINOR $PATCHLEVEL
else
printf "%02d%02d\\n" $MAJOR $MINOR
diff --git a/scripts/gcc-x86_32-has-stack-protector.sh b/scripts/gcc-x86_32-has-stack-protector.sh
index 29493dc..12dbd0b 100644
--- a/scripts/gcc-x86_32-has-stack-protector.sh
+++ b/scripts/gcc-x86_32-has-stack-protector.sh
@@ -1,6 +1,6 @@
#!/bin/sh
-echo "int foo(void) { char X[200]; return 3; }" | $* -S -xc -c -O0 -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
if [ "$?" -eq "0" ] ; then
echo y
else
diff --git a/scripts/gcc-x86_64-has-stack-protector.sh b/scripts/gcc-x86_64-has-stack-protector.sh
index afaec61..973e8c1 100644
--- a/scripts/gcc-x86_64-has-stack-protector.sh
+++ b/scripts/gcc-x86_64-has-stack-protector.sh
@@ -1,6 +1,6 @@
#!/bin/sh
-echo "int foo(void) { char X[200]; return 3; }" | $* -S -xc -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
if [ "$?" -eq "0" ] ; then
echo y
else
diff --git a/scripts/kconfig/check.sh b/scripts/kconfig/check.sh
index fa59cbf..854d9c7 100755
--- a/scripts/kconfig/check.sh
+++ b/scripts/kconfig/check.sh
@@ -1,6 +1,6 @@
#!/bin/sh
# Needed for systems without gettext
-$* -xc -o /dev/null - > /dev/null 2>&1 << EOF
+$* -x c -o /dev/null - > /dev/null 2>&1 << EOF
#include <libintl.h>
int main()
{
diff --git a/scripts/kconfig/lxdialog/check-lxdialog.sh b/scripts/kconfig/lxdialog/check-lxdialog.sh
index 82cc3a8..50df490 100644
--- a/scripts/kconfig/lxdialog/check-lxdialog.sh
+++ b/scripts/kconfig/lxdialog/check-lxdialog.sh
@@ -38,7 +38,7 @@ trap "rm -f $tmp" 0 1 2 3 15
# Check if we can link to ncurses
check() {
- $cc -xc - -o $tmp 2>/dev/null <<'EOF'
+ $cc -x c - -o $tmp 2>/dev/null <<'EOF'
#include CURSES_LOC
main() {}
EOF
diff --git a/scripts/kconfig/streamline_config.pl b/scripts/kconfig/streamline_config.pl
index bccf07dd..3346f42 100644
--- a/scripts/kconfig/streamline_config.pl
+++ b/scripts/kconfig/streamline_config.pl
@@ -463,6 +463,8 @@ while(<CIN>) {
if (defined($configs{$1})) {
if ($localyesconfig) {
$setconfigs{$1} = 'y';
+ print "$1=y\n";
+ next;
} else {
$setconfigs{$1} = $2;
}
diff --git a/scripts/package/buildtar b/scripts/package/buildtar
index 8a7b155..d0d748e 100644
--- a/scripts/package/buildtar
+++ b/scripts/package/buildtar
@@ -109,7 +109,7 @@ esac
if tar --owner=root --group=root --help >/dev/null 2>&1; then
opts="--owner=root --group=root"
fi
- tar cf - . $opts | ${compress} > "${tarball}${file_ext}"
+ tar cf - boot/* lib/* $opts | ${compress} > "${tarball}${file_ext}"
)
echo "Tarball successfully created in ${tarball}${file_ext}"
diff --git a/sound/drivers/aloop.c b/sound/drivers/aloop.c
index d83bafc..193ce81 100644
--- a/sound/drivers/aloop.c
+++ b/sound/drivers/aloop.c
@@ -119,6 +119,7 @@ struct loopback_pcm {
unsigned int period_size_frac;
unsigned long last_jiffies;
struct timer_list timer;
+ spinlock_t timer_lock;
};
static struct platform_device *devices[SNDRV_CARDS];
@@ -169,6 +170,7 @@ static void loopback_timer_start(struct loopback_pcm *dpcm)
unsigned long tick;
unsigned int rate_shift = get_rate_shift(dpcm);
+ spin_lock(&dpcm->timer_lock);
if (rate_shift != dpcm->pcm_rate_shift) {
dpcm->pcm_rate_shift = rate_shift;
dpcm->period_size_frac = frac_pos(dpcm, dpcm->pcm_period_size);
@@ -181,12 +183,15 @@ static void loopback_timer_start(struct loopback_pcm *dpcm)
tick = (tick + dpcm->pcm_bps - 1) / dpcm->pcm_bps;
dpcm->timer.expires = jiffies + tick;
add_timer(&dpcm->timer);
+ spin_unlock(&dpcm->timer_lock);
}
static inline void loopback_timer_stop(struct loopback_pcm *dpcm)
{
+ spin_lock(&dpcm->timer_lock);
del_timer(&dpcm->timer);
dpcm->timer.expires = 0;
+ spin_unlock(&dpcm->timer_lock);
}
#define CABLE_VALID_PLAYBACK (1 << SNDRV_PCM_STREAM_PLAYBACK)
@@ -659,6 +664,7 @@ static int loopback_open(struct snd_pcm_substream *substream)
dpcm->substream = substream;
setup_timer(&dpcm->timer, loopback_timer_function,
(unsigned long)dpcm);
+ spin_lock_init(&dpcm->timer_lock);
cable = loopback->cables[substream->number][dev];
if (!cable) {
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index 402f330..94f0c4a 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -139,6 +139,7 @@ struct conexant_spec {
unsigned int asus:1;
unsigned int pin_eapd_ctrls:1;
unsigned int single_adc_amp:1;
+ unsigned int fixup_stereo_dmic:1;
unsigned int adc_switching:1;
@@ -4113,9 +4114,9 @@ static int cx_auto_init(struct hda_codec *codec)
static int cx_auto_add_volume_idx(struct hda_codec *codec, const char *basename,
const char *dir, int cidx,
- hda_nid_t nid, int hda_dir, int amp_idx)
+ hda_nid_t nid, int hda_dir, int amp_idx, int chs)
{
- static char name[32];
+ static char name[44];
static struct snd_kcontrol_new knew[] = {
HDA_CODEC_VOLUME(name, 0, 0, 0),
HDA_CODEC_MUTE(name, 0, 0, 0),
@@ -4125,7 +4126,7 @@ static int cx_auto_add_volume_idx(struct hda_codec *codec, const char *basename,
for (i = 0; i < 2; i++) {
struct snd_kcontrol *kctl;
- knew[i].private_value = HDA_COMPOSE_AMP_VAL(nid, 3, amp_idx,
+ knew[i].private_value = HDA_COMPOSE_AMP_VAL(nid, chs, amp_idx,
hda_dir);
knew[i].subdevice = HDA_SUBDEV_AMP_FLAG;
knew[i].index = cidx;
@@ -4144,7 +4145,7 @@ static int cx_auto_add_volume_idx(struct hda_codec *codec, const char *basename,
}
#define cx_auto_add_volume(codec, str, dir, cidx, nid, hda_dir) \
- cx_auto_add_volume_idx(codec, str, dir, cidx, nid, hda_dir, 0)
+ cx_auto_add_volume_idx(codec, str, dir, cidx, nid, hda_dir, 0, 3)
#define cx_auto_add_pb_volume(codec, nid, str, idx) \
cx_auto_add_volume(codec, str, " Playback", idx, nid, HDA_OUTPUT)
@@ -4214,6 +4215,36 @@ static int cx_auto_build_output_controls(struct hda_codec *codec)
return 0;
}
+/* Returns zero if this is a normal stereo channel, and non-zero if it should
+ be split in two independent channels.
+ dest_label must be at least 44 characters. */
+static int cx_auto_get_rightch_label(struct hda_codec *codec, const char *label,
+ char *dest_label, int nid)
+{
+ struct conexant_spec *spec = codec->spec;
+ int i;
+
+ if (!spec->fixup_stereo_dmic)
+ return 0;
+
+ for (i = 0; i < AUTO_CFG_MAX_INS; i++) {
+ int def_conf;
+ if (spec->autocfg.inputs[i].pin != nid)
+ continue;
+
+ if (spec->autocfg.inputs[i].type != AUTO_PIN_MIC)
+ return 0;
+ def_conf = snd_hda_codec_get_pincfg(codec, nid);
+ if (snd_hda_get_input_pin_attr(def_conf) != INPUT_PIN_ATTR_INT)
+ return 0;
+
+ /* Finally found the inverted internal mic! */
+ snprintf(dest_label, 44, "Inverted %s", label);
+ return 1;
+ }
+ return 0;
+}
+
static int cx_auto_add_capture_volume(struct hda_codec *codec, hda_nid_t nid,
const char *label, const char *pfx,
int cidx)
@@ -4222,14 +4253,25 @@ static int cx_auto_add_capture_volume(struct hda_codec *codec, hda_nid_t nid,
int i;
for (i = 0; i < spec->num_adc_nids; i++) {
+ char rightch_label[44];
hda_nid_t adc_nid = spec->adc_nids[i];
int idx = get_input_connection(codec, adc_nid, nid);
if (idx < 0)
continue;
if (spec->single_adc_amp)
idx = 0;
+
+ if (cx_auto_get_rightch_label(codec, label, rightch_label, nid)) {
+ /* Make two independent kcontrols for left and right */
+ int err = cx_auto_add_volume_idx(codec, label, pfx,
+ cidx, adc_nid, HDA_INPUT, idx, 1);
+ if (err < 0)
+ return err;
+ return cx_auto_add_volume_idx(codec, rightch_label, pfx,
+ cidx, adc_nid, HDA_INPUT, idx, 2);
+ }
return cx_auto_add_volume_idx(codec, label, pfx,
- cidx, adc_nid, HDA_INPUT, idx);
+ cidx, adc_nid, HDA_INPUT, idx, 3);
}
return 0;
}
@@ -4242,9 +4284,19 @@ static int cx_auto_add_boost_volume(struct hda_codec *codec, int idx,
int i, con;
nid = spec->imux_info[idx].pin;
- if (get_wcaps(codec, nid) & AC_WCAP_IN_AMP)
+ if (get_wcaps(codec, nid) & AC_WCAP_IN_AMP) {
+ char rightch_label[44];
+ if (cx_auto_get_rightch_label(codec, label, rightch_label, nid)) {
+ int err = cx_auto_add_volume_idx(codec, label, " Boost",
+ cidx, nid, HDA_INPUT, 0, 1);
+ if (err < 0)
+ return err;
+ return cx_auto_add_volume_idx(codec, rightch_label, " Boost",
+ cidx, nid, HDA_INPUT, 0, 2);
+ }
return cx_auto_add_volume(codec, label, " Boost", cidx,
nid, HDA_INPUT);
+ }
con = __select_input_connection(codec, spec->imux_info[idx].adc, nid,
&mux, false, 0);
if (con < 0)
@@ -4398,23 +4450,31 @@ static void apply_pincfg(struct hda_codec *codec, const struct cxt_pincfg *cfg)
}
-static void apply_pin_fixup(struct hda_codec *codec,
+enum {
+ CXT_PINCFG_LENOVO_X200,
+ CXT_PINCFG_LENOVO_TP410,
+ CXT_FIXUP_STEREO_DMIC,
+};
+
+static void apply_fixup(struct hda_codec *codec,
const struct snd_pci_quirk *quirk,
const struct cxt_pincfg **table)
{
+ struct conexant_spec *spec = codec->spec;
+
quirk = snd_pci_quirk_lookup(codec->bus->pci, quirk);
- if (quirk) {
+ if (quirk && table[quirk->value]) {
snd_printdd(KERN_INFO "hda_codec: applying pincfg for %s\n",
quirk->name);
apply_pincfg(codec, table[quirk->value]);
}
+ if (quirk->value == CXT_FIXUP_STEREO_DMIC) {
+ snd_printdd(KERN_INFO "hda_codec: applying internal mic workaround for %s\n",
+ quirk->name);
+ spec->fixup_stereo_dmic = 1;
+ }
}
-enum {
- CXT_PINCFG_LENOVO_X200,
- CXT_PINCFG_LENOVO_TP410,
-};
-
/* ThinkPad X200 & co with cxt5051 */
static const struct cxt_pincfg cxt_pincfg_lenovo_x200[] = {
{ 0x16, 0x042140ff }, /* HP (seq# overridden) */
@@ -4434,6 +4494,7 @@ static const struct cxt_pincfg cxt_pincfg_lenovo_tp410[] = {
static const struct cxt_pincfg *cxt_pincfg_tbl[] = {
[CXT_PINCFG_LENOVO_X200] = cxt_pincfg_lenovo_x200,
[CXT_PINCFG_LENOVO_TP410] = cxt_pincfg_lenovo_tp410,
+ [CXT_FIXUP_STEREO_DMIC] = NULL,
};
static const struct snd_pci_quirk cxt5051_fixups[] = {
@@ -4447,6 +4508,9 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK(0x17aa, 0x215f, "Lenovo T510", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21ce, "Lenovo T420", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520", CXT_PINCFG_LENOVO_TP410),
+ SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
{}
};
@@ -4486,10 +4550,10 @@ static int patch_conexant_auto(struct hda_codec *codec)
break;
case 0x14f15051:
add_cx5051_fake_mutes(codec);
- apply_pin_fixup(codec, cxt5051_fixups, cxt_pincfg_tbl);
+ apply_fixup(codec, cxt5051_fixups, cxt_pincfg_tbl);
break;
default:
- apply_pin_fixup(codec, cxt5066_fixups, cxt_pincfg_tbl);
+ apply_fixup(codec, cxt5066_fixups, cxt_pincfg_tbl);
break;
}
diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
index 323d4d9..0961d88 100644
--- a/tools/hv/hv_kvp_daemon.c
+++ b/tools/hv/hv_kvp_daemon.c
@@ -348,7 +348,7 @@ int main(void)
fd = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
if (fd < 0) {
syslog(LOG_ERR, "netlink socket creation failed; error:%d", fd);
- exit(-1);
+ exit(EXIT_FAILURE);
}
addr.nl_family = AF_NETLINK;
addr.nl_pad = 0;
@@ -360,7 +360,7 @@ int main(void)
if (error < 0) {
syslog(LOG_ERR, "bind failed; error:%d", error);
close(fd);
- exit(-1);
+ exit(EXIT_FAILURE);
}
sock_opt = addr.nl_groups;
setsockopt(fd, 270, 1, &sock_opt, sizeof(sock_opt));
@@ -378,7 +378,7 @@ int main(void)
if (len < 0) {
syslog(LOG_ERR, "netlink_send failed; error:%d", len);
close(fd);
- exit(-1);
+ exit(EXIT_FAILURE);
}
pfd.fd = fd;
@@ -497,7 +497,7 @@ int main(void)
len = netlink_send(fd, incoming_cn_msg);
if (len < 0) {
syslog(LOG_ERR, "net_link send failed; error:%d", len);
- exit(-1);
+ exit(EXIT_FAILURE);
}
}
diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index b98e307..e45d2b1 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -56,7 +56,7 @@ ifeq ($(ARCH),x86_64)
ARCH := x86
IS_X86_64 := 0
ifeq (, $(findstring m32,$(EXTRA_CFLAGS)))
- IS_X86_64 := $(shell echo __x86_64__ | ${CC} -E -xc - | tail -n 1)
+ IS_X86_64 := $(shell echo __x86_64__ | ${CC} -E -x c - | tail -n 1)
endif
ifeq (${IS_X86_64}, 1)
RAW_ARCH := x86_64
diff --git a/tools/power/cpupower/Makefile b/tools/power/cpupower/Makefile
index e8a03ac..7db8da5 100644
--- a/tools/power/cpupower/Makefile
+++ b/tools/power/cpupower/Makefile
@@ -100,7 +100,7 @@ GMO_FILES = ${shell for HLANG in ${LANGUAGES}; do echo po/$$HLANG.gmo; done;}
export CROSS CC AR STRIP RANLIB CFLAGS LDFLAGS LIB_OBJS
# check if compiler option is supported
-cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -xc /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
+cc-supports = ${shell if $(CC) ${1} -S -o /dev/null -x c /dev/null > /dev/null 2>&1; then echo "$(1)"; fi;}
# use '-Os' optimization if available, else use -O2
OPTIMIZATION := $(call cc-supports,-Os,-O2)
On Sun, Oct 14, 2012 at 03:36:11PM +0100, Ben Hutchings wrote:
> 3.2-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Tejun Heo <[email protected]>
>
> commit 60ea8226cbd5c8301f9a39edc574ddabcb8150e0 upstream.
>
> A queue newly allocated with blk_alloc_queue_node() has only
> QUEUE_FLAG_BYPASS set. For request-based drivers,
> blk_init_allocated_queue() is called and q->queue_flags is overwritten
> with QUEUE_FLAG_DEFAULT which doesn't include BYPASS even though the
> initial bypass is still in effect.
Since QUEUE_FLAG_BYPASS doesn't exist and blk_alloc_queue_node() doesn't
set any queue flags on 3.2, this change is unneeded, but the patch
should be harmless anyway.
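(For context, a minimal user-space sketch of the difference the upstream
commit describes, using made-up flag values rather than the kernel's
definitions: OR-ing the default flags preserves a bit that was already
set, while a plain assignment clobbers it.)

#include <stdio.h>

/* Hypothetical flag bits, for illustration only; not the kernel's values. */
#define FLAG_BYPASS	(1u << 0)
#define FLAG_DEFAULT	((1u << 1) | (1u << 2))

int main(void)
{
	unsigned int assigned = FLAG_BYPASS;
	unsigned int ored = FLAG_BYPASS;

	assigned = FLAG_DEFAULT;	/* overwrite: the BYPASS bit is lost */
	ored |= FLAG_DEFAULT;		/* OR in defaults: the BYPASS bit survives */

	printf("assign: bypass %s\n", (assigned & FLAG_BYPASS) ? "kept" : "lost");
	printf("or:     bypass %s\n", (ored & FLAG_BYPASS) ? "kept" : "lost");
	return 0;
}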
>
> In blk_init_allocated_queue(), or QUEUE_FLAG_DEFAULT to q->queue_flags
> instead of overwriting.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Acked-by: Vivek Goyal <[email protected]>
> Signed-off-by: Jens Axboe <[email protected]>
> Signed-off-by: Ben Hutchings <[email protected]>
> ---
> block/blk-core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 80e29c9..a17869f 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -696,7 +696,7 @@ blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
> q->request_fn = rfn;
> q->prep_rq_fn = NULL;
> q->unprep_rq_fn = NULL;
> - q->queue_flags = QUEUE_FLAG_DEFAULT;
> + q->queue_flags |= QUEUE_FLAG_DEFAULT;
>
> /* Override internal queue lock with supplied lock pointer */
> if (lock)
--
[]'s
Herton
On Tue, 2012-10-16 at 19:59 -0300, Herton Ronaldo Krzesinski wrote:
> On Sun, Oct 14, 2012 at 03:36:11PM +0100, Ben Hutchings wrote:
> > 3.2-stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Tejun Heo <[email protected]>
> >
> > commit 60ea8226cbd5c8301f9a39edc574ddabcb8150e0 upstream.
> >
> > A queue newly allocated with blk_alloc_queue_node() has only
> > QUEUE_FLAG_BYPASS set. For request-based drivers,
> > blk_init_allocated_queue() is called and q->queue_flags is overwritten
> > with QUEUE_FLAG_DEFAULT which doesn't include BYPASS even though the
> > initial bypass is still in effect.
>
> Since QUEUE_FLAG_BYPASS doesn't exist and blk_alloc_queue_node() doesn't
> set any queue flags on 3.2, this change is unneeded, but the patch
> should be harmless anyway.
[...]
Thanks, I'll drop it.
Ben.
--
Ben Hutchings
No political challenge can be met by shopping. - George Monbiot
On Sun, Oct 14, 2012 at 03:36:47PM +0100, Ben Hutchings wrote:
> 3.2-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Michal Marek <[email protected]>
>
> commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
>
> There were reports of users destroying their Fedora installs by a kernel
> tarball that replaces the /lib -> /usr/lib symlink. Let's remove the
> toplevel directories from the tarball to prevent this from happening.
>
> Reported-by: Andi Kleen <[email protected]>
> Suggested-by: Ben Hutchings <[email protected]>
> Signed-off-by: Michal Marek <[email protected]>
> [bwh: Backported to 3.2: drop redundant changes]
> Signed-off-by: Ben Hutchings <[email protected]>
> ---
> scripts/Makefile.fwinst | 4 ++--
> scripts/package/buildtar | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/scripts/Makefile.fwinst b/scripts/Makefile.fwinst
> index 4d908d1..c3f69ae 100644
> --- a/scripts/Makefile.fwinst
> +++ b/scripts/Makefile.fwinst
> @@ -42,7 +42,7 @@ quiet_cmd_install = INSTALL $(subst $(srctree)/,,$@)
> $(installed-fw-dirs):
> $(call cmd,mkdir)
>
> -$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $(INSTALL_FW_PATH)/$$(dir %)
> +$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $$(dir $(INSTALL_FW_PATH)/%)
> $(call cmd,install)
Pulling Linus's tree today, I noticed commit
3ce9e53e788881da0d5f3912f80e0dd6b501f304 ("kbuild: Fix accidental revert
in commit fe04ddf"). This change to Makefile.fwinst is unrelated and
shouldn't be applied. The hunk looked suspicious and could be dropped if
it's not too late. Otherwise, just picking 3ce9e53e7 later would be fine as well.
--
[]'s
Herton
On Wed, Oct 17, 2012 at 01:22:11PM -0300, Herton Ronaldo Krzesinski wrote:
> On Sun, Oct 14, 2012 at 03:36:47PM +0100, Ben Hutchings wrote:
> > 3.2-stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Michal Marek <[email protected]>
> >
> > commit fe04ddf7c2910362f3817c8156e41cbd6c0ee35d upstream.
> >
> > There were reports of users destroying their Fedora installs by a kernel
> > tarball that replaces the /lib -> /usr/lib symlink. Let's remove the
> > toplevel directories from the tarball to prevent this from happening.
> >
> > Reported-by: Andi Kleen <[email protected]>
> > Suggested-by: Ben Hutchings <[email protected]>
> > Signed-off-by: Michal Marek <[email protected]>
> > [bwh: Backported to 3.2: drop redundant changes]
> > Signed-off-by: Ben Hutchings <[email protected]>
> > ---
> > scripts/Makefile.fwinst | 4 ++--
> > scripts/package/buildtar | 2 +-
> > 3 files changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/scripts/Makefile.fwinst b/scripts/Makefile.fwinst
> > index 4d908d1..c3f69ae 100644
> > --- a/scripts/Makefile.fwinst
> > +++ b/scripts/Makefile.fwinst
> > @@ -42,7 +42,7 @@ quiet_cmd_install = INSTALL $(subst $(srctree)/,,$@)
> > $(installed-fw-dirs):
> > $(call cmd,mkdir)
> >
> > -$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $(INSTALL_FW_PATH)/$$(dir %)
> > +$(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $$(dir $(INSTALL_FW_PATH)/%)
> > $(call cmd,install)
>
> Pulling Linus's tree today, I noticed commit
> 3ce9e53e788881da0d5f3912f80e0dd6b501f304 ("kbuild: Fix accidental revert
> in commit fe04ddf"). This change to Makefile.fwinst is unrelated and
> shouldn't be applied. The hunk looked suspicious and could be dropped if
> it's not too late. Otherwise, just picking 3ce9e53e7 later would be fine as well.
I saw that on the list and folded it into the version that went into
3.2.32. (Which is tagged in my own repo, but Greg hasn't yet
pulled and released it.)
Ben.
--
Ben Hutchings
We get into the habit of living before acquiring the habit of thinking.
- Albert Camus
On Sun, Oct 14, 2012 at 03:37:21PM +0100, Ben Hutchings wrote:
> 3.2-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: David Henningsson <[email protected]>
>
> commit 18dcd3044e4c4b3ab6341c98e8d0e81e0d58d5e3 upstream.
>
> The internal mic input is phase inverted on one channel.
> To avoid people in userspace summing the channels together
> and get zero result, use a separate mixer control for the
> inverted channel.
>
> BugLink: https://bugs.launchpad.net/bugs/903853
> Signed-off-by: David Henningsson <[email protected]>
> Signed-off-by: Takashi Iwai <[email protected]>
> [bwh: Backported to 3.2:
> - Adjust context
> - Change both invocations of apply_pin_fixup()]
> Signed-off-by: Ben Hutchings <[email protected]>
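(As an aside on the quoted explanation: a tiny stand-alone sketch, using
synthetic sample values, of why a naive mono downmix of a channel and its
phase-inverted copy cancels to silence. Illustration only; not code from
the patch.)

#include <stdio.h>

int main(void)
{
	/* Synthetic stereo frames: the right channel is the left one inverted. */
	short left[4]  = {  100, -200,  300,  -50 };
	short right[4] = { -100,  200, -300,   50 };
	int i;

	for (i = 0; i < 4; i++) {
		int mono = (left[i] + right[i]) / 2;	/* naive L+R downmix */
		printf("frame %d: mono = %d\n", i, mono);	/* prints 0 every time */
	}
	return 0;
}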
I noted today that a follow-up fix for this is needed to avoid an oops
in apply_fixup(); please consider adding this to the next 3.2 stable release:
commit 83b0c6ba999643ee8ad6329f26e1cdc870e1a920
ALSA: hda - Fix oops caused by recent commit "Fix internal mic for Lenovo Ideapad U300s"
[...]
> +static void apply_fixup(struct hda_codec *codec,
> const struct snd_pci_quirk *quirk,
> const struct cxt_pincfg **table)
> {
> + struct conexant_spec *spec = codec->spec;
> +
> quirk = snd_pci_quirk_lookup(codec->bus->pci, quirk);
> - if (quirk) {
> + if (quirk && table[quirk->value]) {
> snd_printdd(KERN_INFO "hda_codec: applying pincfg for %s\n",
> quirk->name);
> apply_pincfg(codec, table[quirk->value]);
> }
> + if (quirk->value == CXT_FIXUP_STEREO_DMIC) {
> + snd_printdd(KERN_INFO "hda_codec: applying internal mic workaround for %s\n",
> + quirk->name);
> + spec->fixup_stereo_dmic = 1;
> + }
> }
[...]
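(To illustrate the oops: in the hunk quoted above, snd_pci_quirk_lookup()
can return NULL when no entry matches, yet quirk->value is then tested
unconditionally. Below is a minimal stand-alone sketch of the shape of the
missing guard, using stand-in types; it is not the exact upstream diff of
commit 83b0c6ba.)

#include <stdio.h>
#include <stddef.h>

/* Stand-in type and values for illustration only; not the real ALSA structures. */
struct quirk {
	int value;
	const char *name;
};

#define FIXUP_STEREO_DMIC 2

/* Pretend lookup that may fail, like snd_pci_quirk_lookup() returning NULL. */
static const struct quirk *lookup_quirk(int found)
{
	static const struct quirk q = { FIXUP_STEREO_DMIC, "Lenovo U300s" };
	return found ? &q : NULL;
}

static void apply_fixup_sketch(int found)
{
	const struct quirk *quirk = lookup_quirk(found);

	if (!quirk)	/* the missing guard: bail out when nothing matched */
		return;

	if (quirk->value == FIXUP_STEREO_DMIC)
		printf("applying internal mic workaround for %s\n", quirk->name);
}

int main(void)
{
	apply_fixup_sketch(1);	/* quirk found: workaround applied */
	apply_fixup_sketch(0);	/* no match: returns safely instead of dereferencing NULL */
	return 0;
}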
--
[]'s
Herton
On Thu, 2012-10-18 at 15:50 -0300, Herton Ronaldo Krzesinski wrote:
> On Sun, Oct 14, 2012 at 03:37:21PM +0100, Ben Hutchings wrote:
> > 3.2-stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: David Henningsson <[email protected]>
> >
> > commit 18dcd3044e4c4b3ab6341c98e8d0e81e0d58d5e3 upstream.
> >
> > The internal mic input is phase inverted on one channel.
> > To avoid people in userspace summing the channels together
> > and get zero result, use a separate mixer control for the
> > inverted channel.
> >
> > BugLink: https://bugs.launchpad.net/bugs/903853
> > Signed-off-by: David Henningsson <[email protected]>
> > Signed-off-by: Takashi Iwai <[email protected]>
> > [bwh: Backported to 3.2:
> > - Adjust context
> > - Change both invocations of apply_pin_fixup()]
> > Signed-off-by: Ben Hutchings <[email protected]>
>
> I noted today that a follow-up fix for this is needed to avoid an oops
> in apply_fixup(); please consider adding this to the next 3.2 stable release:
>
> commit 83b0c6ba999643ee8ad6329f26e1cdc870e1a920
> ALSA: hda - Fix oops caused by recent commit "Fix internal mic for Lenovo Ideapad U300s"
[...]
Added to the queue, thanks.
Ben.
--
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.