2013-04-15 02:25:33

by Greg Kroah-Hartman

Subject: [ 00/17] 3.4.41-stable review

This is the start of the stable review cycle for the 3.4.41 release.
There are 17 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Wed Apr 17 02:23:06 UTC 2013.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.4.41-rc1.gz
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 3.4.41-rc1

Hayes Wang <[email protected]>
r8169: fix auto speed down issue

Linus Torvalds <[email protected]>
kobject: fix kset_find_obj() race with concurrent last kobject_put()

Linus Torvalds <[email protected]>
mtdchar: fix offset overflow detection

Boris Ostrovsky <[email protected]>
x86, mm: Patch out arch_flush_lazy_mmu_mode() when running on bare metal

Samu Kallio <[email protected]>
x86, mm, paravirt: Fix vmalloc_fault oops during lazy MMU updates

Thomas Gleixner <[email protected]>
sched_clock: Prevent 64bit inatomicity on 32bit systems

Dave Airlie <[email protected]>
udl: handle EDID failure properly.

Thomas Hellstrom <[email protected]>
kref: Implement kref_get_unless_zero v3

Suleiman Souhlal <[email protected]>
vfs: Revert spurious fix to spinning prevention in prune_icache_sb

Nicholas Bellinger <[email protected]>
target: Fix incorrect fallthrough of ALUA Standby/Offline/Transition CDBs

Sachin Prabhu <[email protected]>
cifs: Allow passwords which begin with a delimitor

Lukasz Dorau <[email protected]>
SCSI: libsas: fix handling vacant phy in sas_set_ex_phy()

Chris Wilson <[email protected]>
drm/i915: Use the correct size of the GTT for placing the per-process entries

Huacai Chen <[email protected]>
PM / reboot: call syscore_shutdown() after disable_nonboot_cpus()

Namhyung Kim <[email protected]>
tracing: Fix double free when function profile init failed

Alban Bedel <[email protected]>
ASoC: wm8903: Fix the bypass to HP/LINEOUT when no DAC or ADC is running

Eldad Zack <[email protected]>
ALSA: usb-audio: fix endianness bug in snd_nativeinstruments_*


-------------

Diffstat:

Makefile | 4 +--
arch/x86/include/asm/paravirt.h | 5 +++-
arch/x86/include/asm/paravirt_types.h | 2 ++
arch/x86/kernel/paravirt.c | 25 +++++++++---------
arch/x86/lguest/boot.c | 1 +
arch/x86/mm/fault.c | 6 +++--
arch/x86/xen/mmu.c | 1 +
drivers/gpu/drm/i915/i915_gem_gtt.c | 2 +-
drivers/gpu/drm/udl/udl_connector.c | 4 +++
drivers/mtd/mtdchar.c | 48 ++++++++++++++++++++++++++++++-----
drivers/net/ethernet/realtek/r8169.c | 28 +++++++++++++++++---
drivers/scsi/libsas/sas_expander.c | 12 +++++++++
drivers/target/target_core_alua.c | 3 +++
fs/cifs/connect.c | 16 +++++++++---
fs/inode.c | 2 +-
include/linux/kref.h | 21 +++++++++++++++
kernel/sched/clock.c | 26 +++++++++++++++++++
kernel/sys.c | 3 ++-
kernel/trace/ftrace.c | 1 -
lib/kobject.c | 9 ++++++-
sound/soc/codecs/wm8903.c | 2 ++
sound/usb/mixer_quirks.c | 4 +--
sound/usb/quirks.c | 2 +-
23 files changed, 190 insertions(+), 37 deletions(-)


2013-04-15 02:25:57

by Greg Kroah-Hartman

Subject: [ 16/17] kobject: fix kset_find_obj() race with concurrent last kobject_put()

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Linus Torvalds <[email protected]>

commit a49b7e82cab0f9b41f483359be83f44fbb6b4979 upstream.

Anatol Pomozov identified a race condition that hits module unloading
and re-loading. To quote Anatol:

"This is a race codition that exists between kset_find_obj() and
kobject_put(). kset_find_obj() might return kobject that has refcount
equal to 0 if this kobject is freeing by kobject_put() in other
thread.

Here is timeline for the crash in case if kset_find_obj() searches for
an object tht nobody holds and other thread is doing kobject_put() on
the same kobject:

THREAD A (calls kset_find_obj())     THREAD B (calls kobject_put())

spin_lock()
                                     atomic_dec_return(kobj->kref), counter gets zero here
                                     ... starts kobject cleanup ....
                                     spin_lock() // WAIT thread A in kobj_kset_leave()
iterate over kset->list
atomic_inc(kobj->kref) (counter becomes 1)
spin_unlock()
                                     spin_lock() // taken
                                     // it does not know that thread A increased counter so it
                                     remove obj from list
                                     spin_unlock()
                                     vfree(module) // frees module object with containing kobj

// kobj points to freed memory area!!
kobject_put(kobj) // OOPS!!!!

The race above happens because module.c tries to use kset_find_obj()
when somebody unloads a module. The module.c code was introduced in
commit 6494a93d55fa"

Anatol supplied a patch specific to module.c that worked around the
problem by simply not using kset_find_obj() at all, but rather than make
a local band-aid, this just fixes kset_find_obj() to be thread-safe
using the proper model of refusing to get a new reference if the
refcount has already dropped to zero.

See examples of this proper refcount handling not only in the kref
documentation, but in various other equivalent uses of this pattern by
grepping for atomic_inc_not_zero().
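
For readers unfamiliar with the idiom, roughly what atomic_inc_not_zero()
boils down to is the following compare-and-swap loop (an illustrative
sketch, not the kernel's actual implementation):

static inline int inc_not_zero_sketch(atomic_t *v)
{
        int c = atomic_read(v);

        /* Retry the increment only while the counter is observed non-zero. */
        while (c != 0) {
                int old = atomic_cmpxchg(v, c, c + 1);

                if (old == c)
                        return 1;       /* got a reference */
                c = old;                /* raced with another update, re-check */
        }
        return 0;                       /* already zero: do not resurrect the object */
}

Refusing to take a reference once the count has hit zero is exactly what
kset_find_obj() starts doing below via kobject_get_unless_zero().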

[ Side note: the module race does indicate that module loading and
unloading is not properly serialized wrt sysfs information using the
module mutex. That may require further thought, but this is the
correct fix at the kobject layer regardless. ]

Reported-analyzed-and-tested-by: Anatol Pomozov <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
lib/kobject.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

--- a/lib/kobject.c
+++ b/lib/kobject.c
@@ -531,6 +531,13 @@ struct kobject *kobject_get(struct kobje
return kobj;
}

+static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
+{
+ if (!kref_get_unless_zero(&kobj->kref))
+ kobj = NULL;
+ return kobj;
+}
+
/*
* kobject_cleanup - free kobject resources.
* @kobj: object to cleanup
@@ -753,7 +760,7 @@ struct kobject *kset_find_obj(struct kse

list_for_each_entry(k, &kset->list, entry) {
if (kobject_name(k) && !strcmp(kobject_name(k), name)) {
- ret = kobject_get(k);
+ ret = kobject_get_unless_zero(k);
break;
}
}

2013-04-15 02:25:59

by Greg Kroah-Hartman

Subject: [ 17/17] r8169: fix auto speed down issue

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Hayes Wang <[email protected]>

commit e2409d83434d77874b461b78af6a19cd6e6a1280 upstream.

It would cause no link after suspending or shutting down when the
NIC changes the speed to 10M and connects to a link partner which
forces the speed to 100M.

Check the link partner ability to determine which speed to set.

Signed-off-by: Hayes Wang <[email protected]>
Acked-by: Francois Romieu <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/ethernet/realtek/r8169.c | 28 +++++++++++++++++++++++++---
1 file changed, 25 insertions(+), 3 deletions(-)

--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -3483,6 +3483,30 @@ static void __devinit rtl_init_mdio_ops(
}
}

+static void rtl_speed_down(struct rtl8169_private *tp)
+{
+ u32 adv;
+ int lpa;
+
+ rtl_writephy(tp, 0x1f, 0x0000);
+ lpa = rtl_readphy(tp, MII_LPA);
+
+ if (lpa & (LPA_10HALF | LPA_10FULL))
+ adv = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full;
+ else if (lpa & (LPA_100HALF | LPA_100FULL))
+ adv = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
+ ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full;
+ else
+ adv = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
+ ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full |
+ (tp->mii.supports_gmii ?
+ ADVERTISED_1000baseT_Half |
+ ADVERTISED_1000baseT_Full : 0);
+
+ rtl8169_set_speed(tp->dev, AUTONEG_ENABLE, SPEED_1000, DUPLEX_FULL,
+ adv);
+}
+
static void rtl_wol_suspend_quirk(struct rtl8169_private *tp)
{
void __iomem *ioaddr = tp->mmio_addr;
@@ -3508,9 +3532,7 @@ static bool rtl_wol_pll_power_down(struc
if (!(__rtl8169_get_wol(tp) & WAKE_ANY))
return false;

- rtl_writephy(tp, 0x1f, 0x0000);
- rtl_writephy(tp, MII_BMCR, 0x0000);
-
+ rtl_speed_down(tp);
rtl_wol_suspend_quirk(tp);

return true;

2013-04-15 02:25:54

by Greg Kroah-Hartman

Subject: [ 14/17] x86, mm: Patch out arch_flush_lazy_mmu_mode() when running on bare metal

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Boris Ostrovsky <[email protected]>

commit 511ba86e1d386f671084b5d0e6f110bb30b8eeb2 upstream.

Invoking arch_flush_lazy_mmu_mode() results in calls to
preempt_enable()/disable() which may have a performance impact.

Since lazy MMU is not used on bare metal we can patch away
arch_flush_lazy_mmu_mode() so that it is never called in such an
environment.

[ hpa: the previous patch "Fix vmalloc_fault oops during lazy MMU
updates" may cause a minor performance regression on
bare metal. This patch resolves that performance regression. It is
somewhat unclear to me if this is a good -stable candidate. ]

Signed-off-by: Boris Ostrovsky <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Tested-by: Josh Boyer <[email protected]>
Tested-by: Konrad Rzeszutek Wilk <[email protected]>
Acked-by: Borislav Petkov <[email protected]>
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/include/asm/paravirt.h | 5 ++++-
arch/x86/include/asm/paravirt_types.h | 2 ++
arch/x86/kernel/paravirt.c | 25 +++++++++++++------------
arch/x86/lguest/boot.c | 1 +
arch/x86/xen/mmu.c | 1 +
5 files changed, 21 insertions(+), 13 deletions(-)

--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -741,7 +741,10 @@ static inline void arch_leave_lazy_mmu_m
PVOP_VCALL0(pv_mmu_ops.lazy_mode.leave);
}

-void arch_flush_lazy_mmu_mode(void);
+static inline void arch_flush_lazy_mmu_mode(void)
+{
+ PVOP_VCALL0(pv_mmu_ops.lazy_mode.flush);
+}

static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
phys_addr_t phys, pgprot_t flags)
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -91,6 +91,7 @@ struct pv_lazy_ops {
/* Set deferred update mode, used for batching operations. */
void (*enter)(void);
void (*leave)(void);
+ void (*flush)(void);
};

struct pv_time_ops {
@@ -680,6 +681,7 @@ void paravirt_end_context_switch(struct

void paravirt_enter_lazy_mmu(void);
void paravirt_leave_lazy_mmu(void);
+void paravirt_flush_lazy_mmu(void);

void _paravirt_nop(void);
u32 _paravirt_ident_32(u32);
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -263,6 +263,18 @@ void paravirt_leave_lazy_mmu(void)
leave_lazy(PARAVIRT_LAZY_MMU);
}

+void paravirt_flush_lazy_mmu(void)
+{
+ preempt_disable();
+
+ if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
+ arch_leave_lazy_mmu_mode();
+ arch_enter_lazy_mmu_mode();
+ }
+
+ preempt_enable();
+}
+
void paravirt_start_context_switch(struct task_struct *prev)
{
BUG_ON(preemptible());
@@ -292,18 +304,6 @@ enum paravirt_lazy_mode paravirt_get_laz
return percpu_read(paravirt_lazy_mode);
}

-void arch_flush_lazy_mmu_mode(void)
-{
- preempt_disable();
-
- if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
- arch_leave_lazy_mmu_mode();
- arch_enter_lazy_mmu_mode();
- }
-
- preempt_enable();
-}
-
struct pv_info pv_info = {
.name = "bare hardware",
.paravirt_enabled = 0,
@@ -477,6 +477,7 @@ struct pv_mmu_ops pv_mmu_ops = {
.lazy_mode = {
.enter = paravirt_nop,
.leave = paravirt_nop,
+ .flush = paravirt_nop,
},

.set_fixmap = native_set_fixmap,
--- a/arch/x86/lguest/boot.c
+++ b/arch/x86/lguest/boot.c
@@ -1333,6 +1333,7 @@ __init void lguest_init(void)
pv_mmu_ops.read_cr3 = lguest_read_cr3;
pv_mmu_ops.lazy_mode.enter = paravirt_enter_lazy_mmu;
pv_mmu_ops.lazy_mode.leave = lguest_leave_lazy_mmu_mode;
+ pv_mmu_ops.lazy_mode.flush = paravirt_flush_lazy_mmu;
pv_mmu_ops.pte_update = lguest_pte_update;
pv_mmu_ops.pte_update_defer = lguest_pte_update;

--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2076,6 +2076,7 @@ static const struct pv_mmu_ops xen_mmu_o
.lazy_mode = {
.enter = paravirt_enter_lazy_mmu,
.leave = xen_leave_lazy_mmu,
+ .flush = paravirt_flush_lazy_mmu,
},

.set_fixmap = xen_set_fixmap,

2013-04-15 02:25:52

by Greg Kroah-Hartman

Subject: [ 05/17] drm/i915: Use the correct size of the GTT for placing the per-process entries

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Chris Wilson <[email protected]>

commit 9a0f938bde74bf9e50bd75c8de9e38c1787398cd upstream.

The current layout is to place the per-process tables at the end of the
GTT. However, this is currently using a hardcoded maximum size for the GTT
and not taking into account limitations imposed by the BIOS. Use the value
for the total number of entries allocated in the table as provided by
the configuration registers.

Reported-by: Matthew Garrett <[email protected]>
Signed-off-by: Chris Wilson <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Ben Widawsky <[email protected]>
Cc: Matthew Garret <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
Cc: Jonathan Nieder <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/i915/i915_gem_gtt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -72,7 +72,7 @@ int i915_gem_init_aliasing_ppgtt(struct
/* ppgtt PDEs reside in the global gtt pagetable, which has 512*1024
* entries. For aliasing ppgtt support we just steal them at the end for
* now. */
- first_pd_entry_in_global_pt = 512*1024 - I915_PPGTT_PD_ENTRIES;
+ first_pd_entry_in_global_pt = dev_priv->mm.gtt->gtt_total_entries - I915_PPGTT_PD_ENTRIES;

ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
if (!ppgtt)

2013-04-15 02:26:51

by Greg Kroah-Hartman

Subject: [ 15/17] mtdchar: fix offset overflow detection

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Linus Torvalds <[email protected]>

commit 9c603e53d380459fb62fec7cd085acb0b74ac18f upstream.

Sasha Levin has been running trinity in a KVM tools guest, and was able
to trigger the BUG_ON() at arch/x86/mm/pat.c:279 (verifying the range of
the memory type). The call trace showed that it was mtdchar_mmap() that
created an invalid remap_pfn_range().

The problem is that mtdchar_mmap() does various really odd and subtle
things with the vma page offset etc, and uses the wrong types (and the
wrong overflow detection) for it.

For example, the page offset may well be 32-bit on a 32-bit
architecture, but after shifting it up by PAGE_SHIFT, we need to use a
potentially 64-bit resource_size_t to correctly hold the full value.

Also, we need to check that the vma length plus offset doesn't overflow
before we check that it is smaller than the length of the mtdmap region.

This fixes things up and tries to make the code a bit easier to read.
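
The two failure modes being guarded against can be reproduced in a few
lines of standalone C (an illustrative sketch; the constants and types
are made up and are not the driver's actual values):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
        /* A 32-bit page offset shifted up by PAGE_SHIFT silently wraps... */
        uint32_t pgoff   = 0x00200000u;                   /* 8 GiB in 4 KiB pages */
        uint32_t bad_off = pgoff << PAGE_SHIFT;           /* wraps to 0           */
        /* ...so it must be widened first, as get_vm_offset() now does. */
        uint64_t off     = (uint64_t)pgoff << PAGE_SHIFT; /* 0x200000000, correct */

        /* Likewise off + len can wrap, so the sum has to be checked for
         * overflow before comparing it against the region length. */
        uint64_t len = UINT64_MAX - 0x100;
        if (off + len < off)
                puts("off + len overflowed, reject the mapping");

        printf("truncated: %#x, widened: %#llx\n",
               (unsigned)bad_off, (unsigned long long)off);
        return 0;
}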

Reported-and-tested-by: Sasha Levin <[email protected]>
Acked-by: Suresh Siddha <[email protected]>
Acked-by: Artem Bityutskiy <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Cc: Ben Hutchings <[email protected]>
Cc: Brad Spengler <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/mtd/mtdchar.c | 48 ++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 42 insertions(+), 6 deletions(-)

--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -1123,6 +1123,33 @@ static unsigned long mtdchar_get_unmappe
}
#endif

+static inline unsigned long get_vm_size(struct vm_area_struct *vma)
+{
+ return vma->vm_end - vma->vm_start;
+}
+
+static inline resource_size_t get_vm_offset(struct vm_area_struct *vma)
+{
+ return (resource_size_t) vma->vm_pgoff << PAGE_SHIFT;
+}
+
+/*
+ * Set a new vm offset.
+ *
+ * Verify that the incoming offset really works as a page offset,
+ * and that the offset and size fit in a resource_size_t.
+ */
+static inline int set_vm_offset(struct vm_area_struct *vma, resource_size_t off)
+{
+ pgoff_t pgoff = off >> PAGE_SHIFT;
+ if (off != (resource_size_t) pgoff << PAGE_SHIFT)
+ return -EINVAL;
+ if (off + get_vm_size(vma) - 1 < off)
+ return -EINVAL;
+ vma->vm_pgoff = pgoff;
+ return 0;
+}
+
/*
* set up a mapping for shared memory segments
*/
@@ -1132,20 +1159,29 @@ static int mtdchar_mmap(struct file *fil
struct mtd_file_info *mfi = file->private_data;
struct mtd_info *mtd = mfi->mtd;
struct map_info *map = mtd->priv;
- unsigned long start;
- unsigned long off;
- u32 len;
+ resource_size_t start, off;
+ unsigned long len, vma_len;

if (mtd->type == MTD_RAM || mtd->type == MTD_ROM) {
- off = vma->vm_pgoff << PAGE_SHIFT;
+ off = get_vm_offset(vma);
start = map->phys;
len = PAGE_ALIGN((start & ~PAGE_MASK) + map->size);
start &= PAGE_MASK;
- if ((vma->vm_end - vma->vm_start + off) > len)
+ vma_len = get_vm_size(vma);
+
+ /* Overflow in off+len? */
+ if (vma_len + off < off)
+ return -EINVAL;
+ /* Does it fit in the mapping? */
+ if (vma_len + off > len)
return -EINVAL;

off += start;
- vma->vm_pgoff = off >> PAGE_SHIFT;
+ /* Did that overflow? */
+ if (off < start)
+ return -EINVAL;
+ if (set_vm_offset(vma, off) < 0)
+ return -EINVAL;
vma->vm_flags |= VM_IO | VM_RESERVED;

#ifdef pgprot_noncached

2013-04-15 02:25:49

by Greg Kroah-Hartman

Subject: [ 12/17] sched_clock: Prevent 64bit inatomicity on 32bit systems

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit a1cbcaa9ea87b87a96b9fc465951dcf36e459ca2 upstream.

The sched_clock_remote() implementation has the following inatomicity
problem on 32bit systems when accessing the remote scd->clock, which
is a 64bit value.

CPU0                            CPU1

sched_clock_local()             sched_clock_remote(CPU0)
...
                                remote_clock = scd[CPU0]->clock
                                read_low32bit(scd[CPU0]->clock)
cmpxchg64(scd->clock,...)
                                read_high32bit(scd[CPU0]->clock)

While the update of scd->clock is using an atomic64 mechanism, the
readout on the remote cpu is not, which can cause completely bogus
readouts.

It is quite a rare problem, because it requires the update to hit the
narrow race window between the low/high readout, and the update must go
across the 32bit boundary.

The resulting misbehaviour is that CPU1 will see the sched_clock on
CPU0 ~4 seconds ahead of its own and update CPU1's sched_clock value
to this bogus timestamp. This stays that way due to the clamping
implementation for about 4 seconds until the synchronization with
CLOCK_MONOTONIC undoes the problem.

The issue is hard to observe, because it might only result in a less
accurate SCHED_OTHER timeslicing behaviour. To create observable
damage on realtime scheduling classes, it is necessary that the bogus
update of CPU1's sched_clock happens in the context of a realtime
thread, which then gets charged 4 seconds of RT runtime, which causes
the RT throttler mechanism to trigger and prevent scheduling of RT
tasks for a little less than 4 seconds. So this is quite unlikely as
well.

The issue was quite hard to decode as the reproduction time is between
2 days and 3 weeks and intrusive tracing makes it less likely, but the
following trace recorded with trace_clock=global, which uses
sched_clock_local(), gave the final hint:

<idle>-0 0d..30 400269.477150: hrtimer_cancel: hrtimer=0xf7061e80
<idle>-0 0d..30 400269.477151: hrtimer_start: hrtimer=0xf7061e80 ...
irq/20-S-587 1d..32 400273.772118: sched_wakeup: comm= ... target_cpu=0
<idle>-0 0dN.30 400273.772118: hrtimer_cancel: hrtimer=0xf7061e80

What happens is that CPU0 goes idle and invokes
sched_clock_idle_sleep_event() which invokes sched_clock_local() and
CPU1 runs a remote wakeup for CPU0 at the same time, which invokes
sched_clock_remote(). The time jump gets propagated to CPU0 via
sched_clock_remote() and stays stale on both cores for ~4 seconds.

There are only two other possibilities, which could cause a stale
sched clock:

1) ktime_get() which reads out CLOCK_MONOTONIC returns a sporadic
wrong value.

2) sched_clock() which reads the TSC returns a sporadic wrong value.

#1 can be excluded because sched_clock would continue to increase for
one jiffy and then go stale.

#2 can be excluded because it would not make the clock jump
forward. It would just result in a stale sched_clock for one jiffy.

After quite some brain twisting and finding the same pattern on other
traces, sched_clock_remote() remained the only place which could cause
such a problem and as explained above it's indeed racy on 32bit
systems.

So while on 64bit systems the readout is atomic, we need to verify the
remote readout on 32bit machines. We need to protect the local->clock
readout in sched_clock_remote() on 32bit as well because an NMI could
hit between the low and the high readout, call sched_clock_local() and
modify local->clock.
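
The cmpxchg64(&scd->clock, 0, 0) readout used below may look odd at
first sight; the idea is that a compare-and-swap with identical old and
new values never changes the stored value, but always returns the
current contents as a single atomic 64-bit operation. A rough userspace
equivalent, using the GCC/Clang builtin as a stand-in for the kernel's
cmpxchg64():

#include <stdint.h>

/* Illustrative sketch only: an atomic 64-bit read on a 32-bit machine. */
static inline uint64_t atomic_read64(uint64_t *p)
{
        uint64_t expected = 0;

        /* If *p != 0 the CAS fails and writes the observed value into
         * 'expected'; if *p == 0 it "succeeds" by storing 0 over 0.
         * Either way 'expected' ends up holding the whole 64-bit value,
         * read in one atomic operation rather than two 32-bit halves. */
        __atomic_compare_exchange_n(p, &expected, 0, 0,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return expected;
}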

Thanks to Siegfried Wulsch for bearing with my debug requests and
going through the tedious tasks of running a bunch of reproducer
systems to generate the debug information which let me decode the
issue.

Reported-by: Siegfried Wulsch <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1304051544160.21884@ionos
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/clock.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -176,10 +176,36 @@ static u64 sched_clock_remote(struct sch
u64 this_clock, remote_clock;
u64 *ptr, old_val, val;

+#if BITS_PER_LONG != 64
+again:
+ /*
+ * Careful here: The local and the remote clock values need to
+ * be read out atomic as we need to compare the values and
+ * then update either the local or the remote side. So the
+ * cmpxchg64 below only protects one readout.
+ *
+ * We must reread via sched_clock_local() in the retry case on
+ * 32bit as an NMI could use sched_clock_local() via the
+ * tracer and hit between the readout of
+ * the low32bit and the high 32bit portion.
+ */
+ this_clock = sched_clock_local(my_scd);
+ /*
+ * We must enforce atomic readout on 32bit, otherwise the
+ * update on the remote cpu can hit inbetween the readout of
+ * the low32bit and the high 32bit portion.
+ */
+ remote_clock = cmpxchg64(&scd->clock, 0, 0);
+#else
+ /*
+ * On 64bit the read of [my]scd->clock is atomic versus the
+ * update, so we can avoid the above 32bit dance.
+ */
sched_clock_local(my_scd);
again:
this_clock = my_scd->clock;
remote_clock = scd->clock;
+#endif

/*
* Use the opportunity that we have both locks

2013-04-15 02:27:24

by Greg Kroah-Hartman

Subject: [ 13/17] x86, mm, paravirt: Fix vmalloc_fault oops during lazy MMU updates

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Samu Kallio <[email protected]>

commit 1160c2779b826c6f5c08e5cc542de58fd1f667d5 upstream.

In paravirtualized x86_64 kernels, vmalloc_fault may cause an oops
when lazy MMU updates are enabled, because set_pgd effects are being
deferred.

One instance of this problem is during process mm cleanup with memory
cgroups enabled. The chain of events is as follows:

- zap_pte_range enables lazy MMU updates
- zap_pte_range eventually calls mem_cgroup_charge_statistics,
which accesses the vmalloc'd mem_cgroup per-cpu stat area
- vmalloc_fault is triggered which tries to sync the corresponding
PGD entry with set_pgd, but the update is deferred
- vmalloc_fault oopses due to a mismatch in the PUD entries

The oops usually looks like this:

------------[ cut here ]------------
kernel BUG at arch/x86/mm/fault.c:396!
invalid opcode: 0000 [#1] SMP
.. snip ..
CPU 1
Pid: 10866, comm: httpd Not tainted 3.6.10-4.fc18.x86_64 #1
RIP: e030:[<ffffffff816271bf>] [<ffffffff816271bf>] vmalloc_fault+0x11f/0x208
.. snip ..
Call Trace:
[<ffffffff81627759>] do_page_fault+0x399/0x4b0
[<ffffffff81004f4c>] ? xen_mc_extend_args+0xec/0x110
[<ffffffff81624065>] page_fault+0x25/0x30
[<ffffffff81184d03>] ? mem_cgroup_charge_statistics.isra.13+0x13/0x50
[<ffffffff81186f78>] __mem_cgroup_uncharge_common+0xd8/0x350
[<ffffffff8118aac7>] mem_cgroup_uncharge_page+0x57/0x60
[<ffffffff8115fbc0>] page_remove_rmap+0xe0/0x150
[<ffffffff8115311a>] ? vm_normal_page+0x1a/0x80
[<ffffffff81153e61>] unmap_single_vma+0x531/0x870
[<ffffffff81154962>] unmap_vmas+0x52/0xa0
[<ffffffff81007442>] ? pte_mfn_to_pfn+0x72/0x100
[<ffffffff8115c8f8>] exit_mmap+0x98/0x170
[<ffffffff810050d9>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
[<ffffffff81059ce3>] mmput+0x83/0xf0
[<ffffffff810624c4>] exit_mm+0x104/0x130
[<ffffffff8106264a>] do_exit+0x15a/0x8c0
[<ffffffff810630ff>] do_group_exit+0x3f/0xa0
[<ffffffff81063177>] sys_exit_group+0x17/0x20
[<ffffffff8162bae9>] system_call_fastpath+0x16/0x1b

Calling arch_flush_lazy_mmu_mode immediately after set_pgd makes the
changes visible to the consistency checks.

RedHat-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=914737
Tested-by: Josh Boyer <[email protected]>
Reported-and-Tested-by: Krishna Raman <[email protected]>
Signed-off-by: Samu Kallio <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Tested-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/mm/fault.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -377,10 +377,12 @@ static noinline __kprobes int vmalloc_fa
if (pgd_none(*pgd_ref))
return -1;

- if (pgd_none(*pgd))
+ if (pgd_none(*pgd)) {
set_pgd(pgd, *pgd_ref);
- else
+ arch_flush_lazy_mmu_mode();
+ } else {
BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+ }

/*
* Below here mismatches are bugs because these lower tables

2013-04-15 02:25:46

by Greg Kroah-Hartman

Subject: [ 10/17] kref: Implement kref_get_unless_zero v3

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Hellstrom <[email protected]>

commit 4b20db3de8dab005b07c74161cb041db8c5ff3a7 upstream.

This function is intended to simplify locking around refcounting for
objects that can be looked up from a lookup structure, and which are
removed from that lookup structure in the object destructor.
Operations on such objects require at least a read lock around
lookup + kref_get, and a write lock around kref_put + remove from lookup
structure. Furthermore, RCU implementations become extremely tricky.
With a lookup followed by a kref_get_unless_zero *with return value check*
locking in the kref_put path can be deferred to the actual removal from
the lookup structure and RCU lookups become trivial.

v2: Formatting fixes.
v3: Invert the return value.
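
For context, the lookup pattern this enables looks roughly like the
following (a sketch with hypothetical struct and list names, not code
from this series):

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mutex.h>

struct entry {
        struct list_head node;
        struct kref kref;
};

static LIST_HEAD(entries);
static DEFINE_MUTEX(entries_lock);

/* Lookup only needs the lock protecting the list; checking the return
 * value guarantees we never hand out an object whose refcount already
 * dropped to zero and whose destructor is about to run. */
static struct entry *entry_get_first(void)
{
        struct entry *e = NULL;

        mutex_lock(&entries_lock);
        if (!list_empty(&entries)) {
                e = list_first_entry(&entries, struct entry, node);
                if (!kref_get_unless_zero(&e->kref))
                        e = NULL;
        }
        mutex_unlock(&entries_lock);
        return e;
}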

Signed-off-by: Thomas Hellstrom <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/kref.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -93,4 +93,25 @@ static inline int kref_put(struct kref *
{
return kref_sub(kref, 1, release);
}
+
+/**
+ * kref_get_unless_zero - Increment refcount for object unless it is zero.
+ * @kref: object.
+ *
+ * Return non-zero if the increment succeeded. Otherwise return 0.
+ *
+ * This function is intended to simplify locking around refcounting for
+ * objects that can be looked up from a lookup structure, and which are
+ * removed from that lookup structure in the object destructor.
+ * Operations on such objects require at least a read lock around
+ * lookup + kref_get, and a write lock around kref_put + remove from lookup
+ * structure. Furthermore, RCU implementations become extremely tricky.
+ * With a lookup followed by a kref_get_unless_zero *with return value check*
+ * locking in the kref_put path can be deferred to the actual removal from
+ * the lookup structure and RCU lookups become trivial.
+ */
+static inline int __must_check kref_get_unless_zero(struct kref *kref)
+{
+ return atomic_add_unless(&kref->refcount, 1, 0);
+}
#endif /* _KREF_H_ */

2013-04-15 02:27:47

by Greg Kroah-Hartman

Subject: [ 11/17] udl: handle EDID failure properly.

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dave Airlie <[email protected]>

commit 1baee58638fc58248625255f5c5fcdb987f11b1f upstream.

Don't oops seems proper.

Signed-off-by: Dave Airlie <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/udl/udl_connector.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/drivers/gpu/drm/udl/udl_connector.c
+++ b/drivers/gpu/drm/udl/udl_connector.c
@@ -61,6 +61,10 @@ static int udl_get_modes(struct drm_conn
int ret;

edid = (struct edid *)udl_get_edid(udl);
+ if (!edid) {
+ drm_mode_connector_update_edid_property(connector, NULL);
+ return 0;
+ }

connector->display_info.raw_edid = (char *)edid;


2013-04-15 02:25:43

by Greg Kroah-Hartman

Subject: [ 09/17] vfs: Revert spurious fix to spinning prevention in prune_icache_sb

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Suleiman Souhlal <[email protected]>

commit 5b55d708335a9e3e4f61f2dadf7511502205ccd1 upstream.

Revert commit 62a3ddef6181 ("vfs: fix spinning prevention in prune_icache_sb").

This commit doesn't look right: since we are looking at the tail of the
list (sb->s_inode_lru.prev) if we want to skip an inode, we should put
it back at the head of the list instead of the tail, otherwise we will
keep spinning on it.

Discovered when investigating why prune_icache_sb came top in perf
reports of a swapping load.

Signed-off-by: Suleiman Souhlal <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/inode.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/fs/inode.c
+++ b/fs/inode.c
@@ -705,7 +705,7 @@ void prune_icache_sb(struct super_block
* inode to the back of the list so we don't spin on it.
*/
if (!spin_trylock(&inode->i_lock)) {
- list_move_tail(&inode->i_lru, &sb->s_inode_lru);
+ list_move(&inode->i_lru, &sb->s_inode_lru);
continue;
}


2013-04-15 02:25:41

by Greg Kroah-Hartman

Subject: [ 06/17] SCSI: libsas: fix handling vacant phy in sas_set_ex_phy()

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lukasz Dorau <[email protected]>

commit d4a2618fa77b5e58ec15342972bd3505a1c3f551 upstream.

If the result of the SMP discover function is PHY VACANT,
the content of the discover response structure (dr) is not valid.
It sometimes happens that dr->attached_sas_addr contains the SAS
address of another phy. In such a case an invalid phy is created,
which causes a NULL pointer dereference during destruction of the
expander's phys.

So if the result of the SMP function is PHY VACANT, the content of the
discover response structure (dr) must not be copied to the phy structure.

This patch fixes the following bug:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000030
IP: [<ffffffff811c9002>] sysfs_find_dirent+0x12/0x90
Call Trace:
[<ffffffff811c95f5>] sysfs_get_dirent+0x35/0x80
[<ffffffff811cb55e>] sysfs_unmerge_group+0x1e/0xb0
[<ffffffff813329f4>] dpm_sysfs_remove+0x24/0x90
[<ffffffff8132b0f4>] device_del+0x44/0x1d0
[<ffffffffa016fc59>] sas_rphy_delete+0x9/0x20 [scsi_transport_sas]
[<ffffffffa01a16f6>] sas_destruct_devices+0xe6/0x110 [libsas]
[<ffffffff8107ac7c>] process_one_work+0x16c/0x350
[<ffffffff8107d84a>] worker_thread+0x17a/0x410
[<ffffffff81081b76>] kthread+0x96/0xa0
[<ffffffff81464944>] kernel_thread_helper+0x4/0x10

Signed-off-by: Lukasz Dorau <[email protected]>
Signed-off-by: Pawel Baldysiak <[email protected]>
Reviewed-by: Maciej Patelczyk <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/scsi/libsas/sas_expander.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

--- a/drivers/scsi/libsas/sas_expander.c
+++ b/drivers/scsi/libsas/sas_expander.c
@@ -235,6 +235,17 @@ static void sas_set_ex_phy(struct domain
linkrate = phy->linkrate;
memcpy(sas_addr, phy->attached_sas_addr, SAS_ADDR_SIZE);

+ /* Handle vacant phy - rest of dr data is not valid so skip it */
+ if (phy->phy_state == PHY_VACANT) {
+ memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
+ phy->attached_dev_type = NO_DEVICE;
+ if (!test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state)) {
+ phy->phy_id = phy_id;
+ goto skip;
+ } else
+ goto out;
+ }
+
phy->attached_dev_type = to_dev_type(dr);
if (test_bit(SAS_HA_ATA_EH_ACTIVE, &ha->state))
goto out;
@@ -272,6 +283,7 @@ static void sas_set_ex_phy(struct domain
phy->phy->maximum_linkrate = dr->pmax_linkrate;
phy->phy->negotiated_linkrate = phy->linkrate;

+ skip:
if (new_phy)
if (sas_phy_add(phy->phy)) {
sas_phy_free(phy->phy);

2013-04-15 02:28:28

by Greg Kroah-Hartman

Subject: [ 08/17] target: Fix incorrect fallthrough of ALUA Standby/Offline/Transition CDBs

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nicholas Bellinger <[email protected]>

commit 30f359a6f9da65a66de8cadf959f0f4a0d498bba upstream.

This patch fixes a bug where a handful of informational / control CDBs
that should be allowed during the ALUA Standby/Offline/Transition access
states were incorrectly returning CHECK_CONDITION +
ASCQ_04H_ALUA_TG_PT_*.

This includes INQUIRY + REPORT_LUNS, which would end up preventing LUN
registration when LUN scanning occurred during these ALUA access states.

Signed-off-by: Nicholas Bellinger <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/target/target_core_alua.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -392,6 +392,7 @@ static inline int core_alua_state_standb
case REPORT_LUNS:
case RECEIVE_DIAGNOSTIC:
case SEND_DIAGNOSTIC:
+ return 0;
case MAINTENANCE_IN:
switch (cdb[1]) {
case MI_REPORT_TARGET_PGS:
@@ -434,6 +435,7 @@ static inline int core_alua_state_unavai
switch (cdb[0]) {
case INQUIRY:
case REPORT_LUNS:
+ return 0;
case MAINTENANCE_IN:
switch (cdb[1]) {
case MI_REPORT_TARGET_PGS:
@@ -474,6 +476,7 @@ static inline int core_alua_state_transi
switch (cdb[0]) {
case INQUIRY:
case REPORT_LUNS:
+ return 0;
case MAINTENANCE_IN:
switch (cdb[1]) {
case MI_REPORT_TARGET_PGS:

2013-04-15 02:28:44

by Greg Kroah-Hartman

Subject: [ 07/17] cifs: Allow passwords which begin with a delimitor

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sachin Prabhu <[email protected]>

commit c369c9a4a7c82d33329d869cbaf93304cc7a0c40 upstream.

Fixes a regression in cifs_parse_mount_options where a password
which begins with a delimiter is parsed incorrectly as being a blank
password.

Signed-off-by: Sachin Prabhu <[email protected]>
Acked-by: Jeff Layton <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/cifs/connect.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)

--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -1576,14 +1576,24 @@ cifs_parse_mount_options(const char *mou
}
break;
case Opt_blank_pass:
- vol->password = NULL;
- break;
- case Opt_pass:
/* passwords have to be handled differently
* to allow the character used for deliminator
* to be passed within them
*/

+ /*
+ * Check if this is a case where the password
+ * starts with a delimiter
+ */
+ tmp_end = strchr(data, '=');
+ tmp_end++;
+ if (!(tmp_end < end && tmp_end[1] == delim)) {
+ /* No it is not. Set the password to NULL */
+ vol->password = NULL;
+ break;
+ }
+ /* Yes it is. Drop down to Opt_pass below.*/
+ case Opt_pass:
/* Obtain the value string */
value = strchr(data, '=');
value++;

2013-04-15 02:25:39

by Greg Kroah-Hartman

Subject: [ 04/17] PM / reboot: call syscore_shutdown() after disable_nonboot_cpus()

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Huacai Chen <[email protected]>

commit 6f389a8f1dd22a24f3d9afc2812b30d639e94625 upstream.

As commit 40dc166c (PM / Core: Introduce struct syscore_ops for core
subsystems PM) says, syscore_ops operations should be carried out with
only one CPU on-line and interrupts disabled. However, after commit
f96972f2d (kernel/sys.c: call disable_nonboot_cpus() in kernel_restart()),
syscore_shutdown() is called before disable_nonboot_cpus(), which breaks
that rule. We have a MIPS machine with an 8259A PIC, and there is an
external timer (HPET) linked to the 8259A. Since the 8259A has been shut
down too early (by syscore_shutdown()), disable_nonboot_cpus() runs
without a timer interrupt, so it hangs and the reboot fails. This patch
calls syscore_shutdown() a little later (after disable_nonboot_cpus())
to avoid the reboot failure, the same way poweroff does.

For consistency, add disable_nonboot_cpus() to kernel_halt().

Signed-off-by: Huacai Chen <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sys.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -320,7 +320,6 @@ void kernel_restart_prepare(char *cmd)
system_state = SYSTEM_RESTART;
usermodehelper_disable();
device_shutdown();
- syscore_shutdown();
}

/**
@@ -366,6 +365,7 @@ void kernel_restart(char *cmd)
{
kernel_restart_prepare(cmd);
disable_nonboot_cpus();
+ syscore_shutdown();
if (!cmd)
printk(KERN_EMERG "Restarting system.\n");
else
@@ -391,6 +391,7 @@ static void kernel_shutdown_prepare(enum
void kernel_halt(void)
{
kernel_shutdown_prepare(SYSTEM_HALT);
+ disable_nonboot_cpus();
syscore_shutdown();
printk(KERN_EMERG "System halted.\n");
kmsg_dump(KMSG_DUMP_HALT);

2013-04-15 02:25:38

by Greg Kroah-Hartman

Subject: [ 03/17] tracing: Fix double free when function profile init failed

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Namhyung Kim <[email protected]>

commit 83e03b3fe4daffdebbb42151d5410d730ae50bd1 upstream.

On the failure path, stat->start and stat->pages will refer to the same
page, so it'll attempt to free the same page again and cause a kernel panic.

Link: http://lkml.kernel.org/r/[email protected]

Signed-off-by: Namhyung Kim <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Namhyung Kim <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/trace/ftrace.c | 1 -
1 file changed, 1 deletion(-)

--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -642,7 +642,6 @@ int ftrace_profile_pages_init(struct ftr
free_page(tmp);
}

- free_page((unsigned long)stat->pages);
stat->pages = NULL;
stat->start = NULL;


2013-04-15 02:29:27

by Greg Kroah-Hartman

Subject: [ 01/17] ALSA: usb-audio: fix endianness bug in snd_nativeinstruments_*

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eldad Zack <[email protected]>

commit 889d66848b12d891248b03abcb2a42047f8e172a upstream.

The usb_control_msg() function expects __u16 types and performs
the endianness conversions by itself.
However, in three places, a conversion is performed before it is
handed over to usb_control_msg(), which leads to a double conversion
(= no conversion):
* snd_usb_nativeinstruments_boot_quirk()
* snd_nativeinstruments_control_get()
* snd_nativeinstruments_control_put()
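
To illustrate why the double conversion is harmful: on a little-endian
host cpu_to_le16() is a no-op and the bug stays hidden, but on a
big-endian host it is a byte swap, and swapping twice just restores CPU
byte order. A small host-side sketch (swap16() stands in for what
cpu_to_le16() does on a big-endian machine; this is not kernel code):

#include <stdint.h>
#include <stdio.h>

static uint16_t swap16(uint16_t v)
{
        return (uint16_t)((v >> 8) | (v << 8));
}

int main(void)
{
        uint16_t wIndex = 0x0102;

        /* usb_control_msg() converts its u16 arguments itself, so a value
         * converted by the caller gets swapped a second time and goes out
         * on the wire in CPU order instead of little-endian order. */
        uint16_t wrong = swap16(swap16(wIndex));        /* 0x0102 again  */
        uint16_t right = swap16(wIndex);                /* little-endian */

        printf("double conversion: %#06x, single conversion: %#06x\n",
               (unsigned)wrong, (unsigned)right);
        return 0;
}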

Caught by sparse:

sound/usb/mixer_quirks.c:512:38: warning: incorrect type in argument 6 (different base types)
sound/usb/mixer_quirks.c:512:38: expected unsigned short [unsigned] [usertype] index
sound/usb/mixer_quirks.c:512:38: got restricted __le16 [usertype] <noident>
sound/usb/mixer_quirks.c:543:35: warning: incorrect type in argument 5 (different base types)
sound/usb/mixer_quirks.c:543:35: expected unsigned short [unsigned] [usertype] value
sound/usb/mixer_quirks.c:543:35: got restricted __le16 [usertype] <noident>
sound/usb/mixer_quirks.c:543:56: warning: incorrect type in argument 6 (different base types)
sound/usb/mixer_quirks.c:543:56: expected unsigned short [unsigned] [usertype] index
sound/usb/mixer_quirks.c:543:56: got restricted __le16 [usertype] <noident>
sound/usb/quirks.c:502:35: warning: incorrect type in argument 5 (different base types)
sound/usb/quirks.c:502:35: expected unsigned short [unsigned] [usertype] value
sound/usb/quirks.c:502:35: got restricted __le16 [usertype] <noident>

Signed-off-by: Eldad Zack <[email protected]>
Acked-by: Daniel Mack <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/usb/mixer_quirks.c | 4 ++--
sound/usb/quirks.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

--- a/sound/usb/mixer_quirks.c
+++ b/sound/usb/mixer_quirks.c
@@ -396,7 +396,7 @@ static int snd_nativeinstruments_control
else
ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), bRequest,
USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
- 0, cpu_to_le16(wIndex),
+ 0, wIndex,
&tmp, sizeof(tmp), 1000);
up_read(&mixer->chip->shutdown_rwsem);

@@ -427,7 +427,7 @@ static int snd_nativeinstruments_control
else
ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), bRequest,
USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
- cpu_to_le16(wValue), cpu_to_le16(wIndex),
+ wValue, wIndex,
NULL, 0, 1000);
up_read(&mixer->chip->shutdown_rwsem);

--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -486,7 +486,7 @@ static int snd_usb_nativeinstruments_boo
{
int ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
0xaf, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- cpu_to_le16(1), 0, NULL, 0, 1000);
+ 1, 0, NULL, 0, 1000);

if (ret < 0)
return ret;

2013-04-15 02:29:46

by Greg Kroah-Hartman

Subject: [ 02/17] ASoC: wm8903: Fix the bypass to HP/LINEOUT when no DAC or ADC is running

3.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alban Bedel <[email protected]>

commit f1ca493b0b5e8f42d3b2dc8877860db2983f47b6 upstream.

The Charge Pump needs the DSP clock to work properly; without it the
bypass to HP/LINEOUT does not work correctly. This requirement is not
mentioned in the datasheet but has been confirmed by Mark Brown from
Wolfson.

Signed-off-by: Alban Bedel <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/soc/codecs/wm8903.c | 2 ++
1 file changed, 2 insertions(+)

--- a/sound/soc/codecs/wm8903.c
+++ b/sound/soc/codecs/wm8903.c
@@ -1082,6 +1082,8 @@ static const struct snd_soc_dapm_route w
{ "ROP", NULL, "Right Speaker PGA" },
{ "RON", NULL, "Right Speaker PGA" },

+ { "Charge Pump", NULL, "CLK_DSP" },
+
{ "Left Headphone Output PGA", NULL, "Charge Pump" },
{ "Right Headphone Output PGA", NULL, "Charge Pump" },
{ "Left Line Output PGA", NULL, "Charge Pump" },

2013-04-15 14:04:36

by Shuah Khan

Subject: Re: [ 00/17] 3.4.41-stable review

On Sun, Apr 14, 2013 at 8:25 PM, Greg Kroah-Hartman
<[email protected]> wrote:
> This is the start of the stable review cycle for the 3.4.41 release.
> There are 17 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Wed Apr 17 02:23:06 UTC 2013.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.4.41-rc1.gz
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Patches applied cleanly to 3.0.73, 3.4.40, and 3.8.7

Compiled and booted on the following systems:

HP ProBook 6475b AMD A10-4600M APU with Radeon(tm) HD Graphics
(on the road, only one system and no cross compile tests this time)

dmesgs for all releases look good. No regressions compared to the previous
dmesgs for each of these releases.

cross compile tests - skipped

-- Shuah