2014-07-16 00:07:53

by Greg Kroah-Hartman

Subject: [PATCH 3.10 00/44] 3.10.49-stable review

This is the start of the stable review cycle for the 3.10.49 release.
There are 44 patches in this series, all of which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Thu Jul 17 23:16:37 UTC 2014.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.49-rc1.gz
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 3.10.49-rc1

Lan Tianyu <[email protected]>
ACPI / battery: Retry to get battery information if failed during probing

Roland Dreier <[email protected]>
x86, ioremap: Speed up check for RAM pages

H. Peter Anvin <[email protected]>
x86, espfix: Make it possible to disable 16-bit support

H. Peter Anvin <[email protected]>
x86, espfix: Make espfix64 a Kconfig option, fix UML

H. Peter Anvin <[email protected]>
x86, espfix: Fix broken header guard

H. Peter Anvin <[email protected]>
x86, espfix: Move espfix definitions into a separate header file

H. Peter Anvin <[email protected]>
x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack

H. Peter Anvin <[email protected]>
Revert "x86-64, modify_ldt: Make support for 16-bit segments a runtime option"

Lennox Wu <[email protected]>
Score: Modify the Makefile of Score, remove -mlong-calls for compiling

Lennox Wu <[email protected]>
Score: The commit is for compiling successfully.

Lennox Wu <[email protected]>
Score: Implement the function csum_ipv6_magic

Jiang Liu <[email protected]>
score: normalize global variables exported by vmlinux.lds

Thomas Gleixner <[email protected]>
rtmutex: Plug slow unlock race

Thomas Gleixner <[email protected]>
rtmutex: Handle deadlock detection smarter

Thomas Gleixner <[email protected]>
rtmutex: Detect changes in the pi lock chain

Thomas Gleixner <[email protected]>
rtmutex: Fix deadlock detector for real

Steven Rostedt (Red Hat) <[email protected]>
ring-buffer: Check if buffer exists before polling

Christian König <[email protected]>
drm/radeon: stop poisoning the GART TLB

Alex Deucher <[email protected]>
drm/radeon: fix typo in golden register setup on evergreen

Eric Sandeen <[email protected]>
ext4: disable synchronous transaction batching if max_batch_time==0

Theodore Ts'o <[email protected]>
ext4: clarify error count warning messages

Theodore Ts'o <[email protected]>
ext4: fix unjournalled bg descriptor while initializing inode bitmap

Joe Thornber <[email protected]>
dm io: fix a race condition in the wake up code for sync_io

K. Y. Srinivasan <[email protected]>
Drivers: hv: vmbus: Fix a bug in the channel callback dispatch code

Thomas Gleixner <[email protected]>
clk: spear3xx: Use proper control register offset

Colin Cross <[email protected]>
arm64: implement TASK_SIZE_OF

Jussi Kivilinna <[email protected]>
crypto: sha512_ssse3 - fix byte count to bit count conversion

Prabhakar Lad <[email protected]>
cpufreq: Makefile: fix compilation for davinci platform

Joel Stanley <[email protected]>
powerpc/perf: Clear MMCR2 when enabling PMU

Joel Stanley <[email protected]>
powerpc/perf: Add PPMU_ARCH_207S define

Anton Blanchard <[email protected]>
powerpc/perf: Never program book3s PMCs with values >= 0x80000000

Andy Whitcroft <[email protected]>
ACPI / resources: only reject zero length resources based at address zero

Axel Lin <[email protected]>
hwmon: (adm1021) Fix cache problem when writing temperature limits

Axel Lin <[email protected]>
hwmon: (adm1029) Ensure the fan_div cache is updated in set_fan_div

Guenter Roeck <[email protected]>
hwmon: (adm1031) Fix writes to limit registers

Axel Lin <[email protected]>
hwmon: (amc6821) Fix permissions for temp2_input

Yasuaki Ishimatsu <[email protected]>
workqueue: zero cpumask of wq_numa_possible_cpumask on init

Gu Zheng <[email protected]>
cpuset,mempolicy: fix sleeping function called from invalid context

Maxime Bizon <[email protected]>
workqueue: fix dev_set_uevent_suppress() imbalance

Helge Deller <[email protected]>
parisc: add serial ports of C8000/1GHz machine to hardware database

Michal Sojka <[email protected]>
USB: serial: ftdi_sio: Add Infineon Triboard

Bert Vermeulen <[email protected]>
USB: ftdi_sio: Add extra PID.

Andras Kovacs <[email protected]>
USB: cp210x: add support for Corsair usb dongle

Bernd Wachter <[email protected]>
usb: option: Add ID for Telewell TW-LTE 4G v2


-------------

Diffstat:

Documentation/x86/x86_64/mm.txt | 2 +
Makefile | 4 +-
arch/arm64/include/asm/memory.h | 2 +
arch/parisc/kernel/hardware.c | 3 +-
arch/powerpc/include/asm/perf_event_server.h | 2 +-
arch/powerpc/perf/core-book3s.c | 22 ++-
arch/powerpc/perf/power8-pmu.c | 2 +-
arch/score/Kconfig | 3 +
arch/score/Makefile | 4 +-
arch/score/include/asm/checksum.h | 93 +++++-----
arch/score/include/asm/io.h | 1 -
arch/score/include/asm/pgalloc.h | 2 +-
arch/score/kernel/entry.S | 4 +-
arch/score/kernel/process.c | 4 +-
arch/score/kernel/vmlinux.lds.S | 1 +
arch/x86/Kconfig | 25 ++-
arch/x86/crypto/sha512_ssse3_glue.c | 2 +-
arch/x86/include/asm/espfix.h | 16 ++
arch/x86/include/asm/pgtable_64_types.h | 2 +
arch/x86/include/asm/setup.h | 2 +
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/entry_32.S | 12 ++
arch/x86/kernel/entry_64.S | 81 ++++++++-
arch/x86/kernel/espfix_64.c | 209 ++++++++++++++++++++++
arch/x86/kernel/ldt.c | 10 +-
arch/x86/kernel/smpboot.c | 7 +
arch/x86/mm/dump_pagetables.c | 31 ++--
arch/x86/mm/ioremap.c | 26 ++-
arch/x86/vdso/vdso32-setup.c | 8 -
drivers/acpi/battery.c | 27 ++-
drivers/acpi/resource.c | 10 +-
drivers/clk/spear/spear3xx_clock.c | 2 +-
drivers/cpufreq/Makefile | 2 +-
drivers/gpu/drm/radeon/evergreen.c | 8 +-
drivers/gpu/drm/radeon/rs600.c | 6 +-
drivers/hv/connection.c | 8 +-
drivers/hwmon/adm1021.c | 14 +-
drivers/hwmon/adm1029.c | 3 +
drivers/hwmon/adm1031.c | 8 +-
drivers/hwmon/amc6821.c | 2 +-
drivers/md/dm-io.c | 22 +--
drivers/usb/serial/cp210x.c | 1 +
drivers/usb/serial/ftdi_sio.c | 5 +-
drivers/usb/serial/ftdi_sio_ids.h | 9 +-
drivers/usb/serial/option.c | 2 +
fs/ext4/ialloc.c | 14 +-
fs/ext4/super.c | 9 +-
fs/jbd2/transaction.c | 5 +-
include/linux/ring_buffer.h | 2 +-
init/main.c | 4 +
kernel/cpuset.c | 8 +-
kernel/rtmutex-debug.h | 5 +
kernel/rtmutex.c | 252 ++++++++++++++++++++++++---
kernel/rtmutex.h | 5 +
kernel/trace/ring_buffer.c | 5 +-
kernel/trace/trace.c | 25 ++-
kernel/trace/trace.h | 4 +-
kernel/workqueue.c | 3 +-
mm/mempolicy.c | 2 -
59 files changed, 853 insertions(+), 200 deletions(-)


2014-07-15 23:13:43

by Greg Kroah-Hartman

Subject: [PATCH 3.10 14/44] powerpc/perf: Never program book3s PMCs with values >= 0x80000000

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Anton Blanchard <[email protected]>

commit f56029410a13cae3652d1f34788045c40a13ffc7 upstream.

We are seeing a lot of PMU warnings on POWER8:

Can't find PMC that caused IRQ

Looking closer, the active PMC is 0 at this point and we took a PMU
exception on the transition from negative to 0. Some versions of POWER8
have an issue where they edge-detect rather than level-detect PMC overflows.

A number of places program the PMC with (0x80000000 - period_left),
where period_left can be negative. We can either fix all of these or
just ensure that period_left is always >= 1.

This patch takes the second option.
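
To make the failure mode concrete (the numbers are illustrative, not
from the changelog): with period_left = -5, the PMC would be programmed
with 0x80000000 - (-5) = 0x80000005, i.e. already negative. An
edge-detecting PMU then takes no exception until the counter wraps all
the way around to 0, at which point the handler reads the PMC as 0,
finds no negative counter, and prints the warning above. Clamping
period_left to a minimum of 1 keeps the programmed value at or below
0x7fffffff.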

Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/powerpc/perf/core-book3s.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)

--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -749,7 +749,22 @@ static void power_pmu_read(struct perf_e
} while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev);

local64_add(delta, &event->count);
- local64_sub(delta, &event->hw.period_left);
+
+ /*
+ * A number of places program the PMC with (0x80000000 - period_left).
+ * We never want period_left to be less than 1 because we will program
+ * the PMC with a value >= 0x800000000 and an edge detected PMC will
+ * roll around to 0 before taking an exception. We have seen this
+ * on POWER8.
+ *
+ * To fix this, clamp the minimum value of period_left to 1.
+ */
+ do {
+ prev = local64_read(&event->hw.period_left);
+ val = prev - delta;
+ if (val < 1)
+ val = 1;
+ } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev);
}

/*

2014-07-15 23:13:53

by Greg Kroah-Hartman

Subject: [PATCH 3.10 39/44] x86, espfix: Move espfix definitions into a separate header file

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "H. Peter Anvin" <[email protected]>

commit e1fe9ed8d2a4937510d0d60e20705035c2609aea upstream.

Sparse warns that the percpu variables aren't declared before they are
defined. Rather than hacking around it, move espfix definitions into
a proper header file.
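
For reference, the idiom being applied here is the usual split for
percpu symbols (a sketch of the pattern, not literal patch content):

	/* in the shared header, visible to sparse and all users: */
	DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);

	/* in exactly one .c file: */
	DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);

Sparse flags a definition without a prior declaration as a symbol that
perhaps should have been static, which is the warning this silences.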

Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/include/asm/espfix.h | 16 ++++++++++++++++
arch/x86/include/asm/setup.h | 5 ++---
arch/x86/kernel/espfix_64.c | 1 +
3 files changed, 19 insertions(+), 3 deletions(-)

--- /dev/null
+++ b/arch/x86/include/asm/espfix.h
@@ -0,0 +1,16 @@
+#ifdef _ASM_X86_ESPFIX_H
+#define _ASM_X86_ESPFIX_H
+
+#ifdef CONFIG_X86_64
+
+#include <asm/percpu.h>
+
+DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);
+DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr);
+
+extern void init_espfix_bsp(void);
+extern void init_espfix_ap(void);
+
+#endif /* CONFIG_X86_64 */
+
+#endif /* _ASM_X86_ESPFIX_H */
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -60,11 +60,10 @@ extern void x86_ce4100_early_setup(void)
static inline void x86_ce4100_early_setup(void) { }
#endif

-extern void init_espfix_bsp(void);
-extern void init_espfix_ap(void);
-
#ifndef _SETUP

+#include <asm/espfix.h>
+
/*
* This is set up by the setup-routine at boot-time
*/
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -40,6 +40,7 @@
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/setup.h>
+#include <asm/espfix.h>

/*
* Note: we only need 6*8 = 48 bytes for the espfix stack, but round

2014-07-15 23:13:49

by Greg Kroah-Hartman

Subject: [PATCH 3.10 07/44] cpuset,mempolicy: fix sleeping function called from invalid context

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Gu Zheng <[email protected]>

commit 391acf970d21219a2a5446282d3b20eace0c0d7a upstream.

When running kernel 3.15-rc7+, the following bug occurs:
[ 9969.258987] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:586
[ 9969.359906] in_atomic(): 1, irqs_disabled(): 0, pid: 160655, name: python
[ 9969.441175] INFO: lockdep is turned off.
[ 9969.488184] CPU: 26 PID: 160655 Comm: python Tainted: G A 3.15.0-rc7+ #85
[ 9969.581032] Hardware name: FUJITSU-SV PRIMEQUEST 1800E/SB, BIOS PRIMEQUEST 1000 Series BIOS Version 1.39 11/16/2012
[ 9969.706052] ffffffff81a20e60 ffff8803e941fbd0 ffffffff8162f523 ffff8803e941fd18
[ 9969.795323] ffff8803e941fbe0 ffffffff8109995a ffff8803e941fc58 ffffffff81633e6c
[ 9969.884710] ffffffff811ba5dc ffff880405c6b480 ffff88041fdd90a0 0000000000002000
[ 9969.974071] Call Trace:
[ 9970.003403] [<ffffffff8162f523>] dump_stack+0x4d/0x66
[ 9970.065074] [<ffffffff8109995a>] __might_sleep+0xfa/0x130
[ 9970.130743] [<ffffffff81633e6c>] mutex_lock_nested+0x3c/0x4f0
[ 9970.200638] [<ffffffff811ba5dc>] ? kmem_cache_alloc+0x1bc/0x210
[ 9970.272610] [<ffffffff81105807>] cpuset_mems_allowed+0x27/0x140
[ 9970.344584] [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150
[ 9970.409282] [<ffffffff811b1385>] __mpol_dup+0xe5/0x150
[ 9970.471897] [<ffffffff811b1303>] ? __mpol_dup+0x63/0x150
[ 9970.536585] [<ffffffff81068c86>] ? copy_process.part.23+0x606/0x1d40
[ 9970.613763] [<ffffffff810bf28d>] ? trace_hardirqs_on+0xd/0x10
[ 9970.683660] [<ffffffff810ddddf>] ? monotonic_to_bootbased+0x2f/0x50
[ 9970.759795] [<ffffffff81068cf0>] copy_process.part.23+0x670/0x1d40
[ 9970.834885] [<ffffffff8106a598>] do_fork+0xd8/0x380
[ 9970.894375] [<ffffffff81110e4c>] ? __audit_syscall_entry+0x9c/0xf0
[ 9970.969470] [<ffffffff8106a8c6>] SyS_clone+0x16/0x20
[ 9971.030011] [<ffffffff81642009>] stub_clone+0x69/0x90
[ 9971.091573] [<ffffffff81641c29>] ? system_call_fastpath+0x16/0x1b

The cause is that cpuset_mems_allowed() tries to take
mutex_lock(&callback_mutex) under rcu_read_lock() (which is held in
__mpol_dup()). Since the access to the cpuset in cpuset_mems_allowed()
is already protected by rcu_read_lock, we can shrink the rcu_read_lock
protection region in __mpol_dup() to cover only the cpuset access in
current_cpuset_is_being_rebound(), which avoids this bug.
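
A minimal sketch of the anti-pattern being removed (illustrative only):

	rcu_read_lock();               /* atomic RCU read-side section */
	...
	/* may sleep: triggers "BUG: sleeping function called from
	   invalid context" */
	mutex_lock(&callback_mutex);
	...
	mutex_unlock(&callback_mutex);
	rcu_read_unlock();

The fix below moves the rcu_read_lock()/rcu_read_unlock() pair inside
current_cpuset_is_being_rebound(), so the mutex taken by
cpuset_mems_allowed() is no longer acquired inside an RCU read-side
critical section.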

This patch is a temporary solution that just addresses the bug
mentioned above; it cannot fix the long-standing issue of cpuset.mems
rebinding on fork():

"When the forker's task_struct is duplicated (which includes
->mems_allowed) and it races with an update to cpuset_being_rebound
in update_tasks_nodemask() then the task's mems_allowed doesn't get
updated. And the child task's mems_allowed can be wrong if the
cpuset's nodemask changes before the child has been added to the
cgroup's tasklist."

Signed-off-by: Gu Zheng <[email protected]>
Acked-by: Li Zefan <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/cpuset.c | 8 +++++++-
mm/mempolicy.c | 2 --
2 files changed, 7 insertions(+), 3 deletions(-)

--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1153,7 +1153,13 @@ done:

int current_cpuset_is_being_rebound(void)
{
- return task_cs(current) == cpuset_being_rebound;
+ int ret;
+
+ rcu_read_lock();
+ ret = task_cs(current) == cpuset_being_rebound;
+ rcu_read_unlock();
+
+ return ret;
}

static int update_relax_domain_level(struct cpuset *cs, s64 val)
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2086,7 +2086,6 @@ struct mempolicy *__mpol_dup(struct memp
} else
*new = *old;

- rcu_read_lock();
if (current_cpuset_is_being_rebound()) {
nodemask_t mems = cpuset_mems_allowed(current);
if (new->flags & MPOL_F_REBINDING)
@@ -2094,7 +2093,6 @@ struct mempolicy *__mpol_dup(struct memp
else
mpol_rebind_policy(new, &mems, MPOL_REBIND_ONCE);
}
- rcu_read_unlock();
atomic_set(&new->refcnt, 1);
return new;
}

2014-07-15 23:13:46

by Greg Kroah-Hartman

Subject: [PATCH 3.10 06/44] workqueue: fix dev_set_uevent_suppress() imbalance

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Maxime Bizon <[email protected]>

commit bddbceb688c6d0decaabc7884fede319d02f96c8 upstream.

Uevents are suppressed during attribute registration, but never
restored, so kobject_uevent() does nothing.
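
The expected pairing looks roughly like this (a sketch of the idiom,
not the complete function):

	dev_set_uevent_suppress(&wq_dev->dev, true);  /* before attr setup */
	/* ... register sysfs attributes ... */
	dev_set_uevent_suppress(&wq_dev->dev, false); /* the missing call */
	kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD);  /* now really fires */

Without the second call the suppress flag stays set forever and the
KOBJ_ADD uevent is silently dropped.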

Signed-off-by: Maxime Bizon <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Fixes: 226223ab3c4118ddd10688cc2c131135848371ab
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/workqueue.c | 1 +
1 file changed, 1 insertion(+)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3373,6 +3373,7 @@ int workqueue_sysfs_register(struct work
}
}

+ dev_set_uevent_suppress(&wq_dev->dev, false);
kobject_uevent(&wq_dev->dev.kobj, KOBJ_ADD);
return 0;
}

2014-07-15 23:13:39

by Greg Kroah-Hartman

Subject: [PATCH 3.10 13/44] ACPI / resources: only reject zero length resources based at address zero

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andy Whitcroft <[email protected]>

commit 867f9d463b82462793ea4610e748be0b04b37fc7 upstream.

The recently merged change (in v3.14-rc6) to ACPI resource detection
(below) causes all zero length ACPI resources to be elided from the
table:

commit b355cee88e3b1a193f0e9a81db810f6f83ad728b
Author: Zhang Rui <[email protected]>
Date: Thu Feb 27 11:37:15 2014 +0800

ACPI / resources: ignore invalid ACPI device resources

This change has caused a regression in (at least) serial port detection
for a number of machines (see LP#1313981 [1]). These seem to represent
their IO regions (presumably incorrectly) as zero-length regions.
Reverting the above commit restores these serial devices.

Only elide zero length resources which lie at address 0.
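
In other words (illustrative cases, not from the changelog):

	address = 0x3f8, length = 8  -> valid, kept
	address = 0x2f8, length = 0  -> kept again after this patch
	address = 0x0,   length = 0  -> still rejected as invalid

Only the combination of a zero base address and a zero length is now
treated as an invalid resource.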

Fixes: b355cee88e3b (ACPI / resources: ignore invalid ACPI device resources)
Signed-off-by: Andy Whitcroft <[email protected]>
Acked-by: Zhang Rui <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/acpi/resource.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/acpi/resource.c
+++ b/drivers/acpi/resource.c
@@ -77,7 +77,7 @@ bool acpi_dev_resource_memory(struct acp
switch (ares->type) {
case ACPI_RESOURCE_TYPE_MEMORY24:
memory24 = &ares->data.memory24;
- if (!memory24->address_length)
+ if (!memory24->minimum && !memory24->address_length)
return false;
acpi_dev_get_memresource(res, memory24->minimum,
memory24->address_length,
@@ -85,7 +85,7 @@ bool acpi_dev_resource_memory(struct acp
break;
case ACPI_RESOURCE_TYPE_MEMORY32:
memory32 = &ares->data.memory32;
- if (!memory32->address_length)
+ if (!memory32->minimum && !memory32->address_length)
return false;
acpi_dev_get_memresource(res, memory32->minimum,
memory32->address_length,
@@ -93,7 +93,7 @@ bool acpi_dev_resource_memory(struct acp
break;
case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
fixed_memory32 = &ares->data.fixed_memory32;
- if (!fixed_memory32->address_length)
+ if (!fixed_memory32->address && !fixed_memory32->address_length)
return false;
acpi_dev_get_memresource(res, fixed_memory32->address,
fixed_memory32->address_length,
@@ -150,7 +150,7 @@ bool acpi_dev_resource_io(struct acpi_re
switch (ares->type) {
case ACPI_RESOURCE_TYPE_IO:
io = &ares->data.io;
- if (!io->address_length)
+ if (!io->minimum && !io->address_length)
return false;
acpi_dev_get_ioresource(res, io->minimum,
io->address_length,
@@ -158,7 +158,7 @@ bool acpi_dev_resource_io(struct acpi_re
break;
case ACPI_RESOURCE_TYPE_FIXED_IO:
fixed_io = &ares->data.fixed_io;
- if (!fixed_io->address_length)
+ if (!fixed_io->address && !fixed_io->address_length)
return false;
acpi_dev_get_ioresource(res, fixed_io->address,
fixed_io->address_length,

2014-07-16 00:06:33

by Greg Kroah-Hartman

Subject: [PATCH 3.10 44/44] ACPI / battery: Retry to get battery information if failed during probing

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lan Tianyu <[email protected]>

commit 75646e758a0ecbed5024454507d5be5b9ea9dcbf upstream.

On some machines (e.g. the Lenovo Z480) the EC is not stable during
boot, which sometimes causes the battery driver to fail to load because
it cannot get battery information from the EC. After several retries the
operation works, so this patch retries getting the battery information
up to 5 times if the first attempt fails.
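
With five attempts and a 20 ms msleep() between them (see the diff
below), a battery that never answers adds roughly 100 ms to probing in
the worst case, a cheap price for not losing the battery device
entirely.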

[ backport to 3.14.5: removed second parameter in acpi_battery_update(),
introduced by the commit 9e50bc14a7f58b5d8a55973b2d69355852ae2dae (ACPI /
battery: Accelerate battery resume callback)]

[naszar <[email protected]>: backport to 3.14.5]
Link: https://bugzilla.kernel.org/show_bug.cgi?id=75581
Reported-and-tested-by: naszar <[email protected]>
Cc: All applicable <[email protected]>
Signed-off-by: Lan Tianyu <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/acpi/battery.c | 27 ++++++++++++++++++++++++++-
1 file changed, 26 insertions(+), 1 deletion(-)

--- a/drivers/acpi/battery.c
+++ b/drivers/acpi/battery.c
@@ -34,6 +34,7 @@
#include <linux/dmi.h>
#include <linux/slab.h>
#include <linux/suspend.h>
+#include <linux/delay.h>
#include <asm/unaligned.h>

#ifdef CONFIG_ACPI_PROCFS_POWER
@@ -1081,6 +1082,28 @@ static struct dmi_system_id bat_dmi_tabl
{},
};

+/*
+ * Some machines'(E,G Lenovo Z480) ECs are not stable
+ * during boot up and this causes battery driver fails to be
+ * probed due to failure of getting battery information
+ * from EC sometimes. After several retries, the operation
+ * may work. So add retry code here and 20ms sleep between
+ * every retries.
+ */
+static int acpi_battery_update_retry(struct acpi_battery *battery)
+{
+ int retry, ret;
+
+ for (retry = 5; retry; retry--) {
+ ret = acpi_battery_update(battery);
+ if (!ret)
+ break;
+
+ msleep(20);
+ }
+ return ret;
+}
+
static int acpi_battery_add(struct acpi_device *device)
{
int result = 0;
@@ -1100,9 +1123,11 @@ static int acpi_battery_add(struct acpi_
if (ACPI_SUCCESS(acpi_get_handle(battery->device->handle,
"_BIX", &handle)))
set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags);
- result = acpi_battery_update(battery);
+
+ result = acpi_battery_update_retry(battery);
if (result)
goto fail;
+
#ifdef CONFIG_ACPI_PROCFS_POWER
result = acpi_battery_add_fs(device);
#endif

2014-07-15 23:13:34

by Greg Kroah-Hartman

Subject: [PATCH 3.10 12/44] hwmon: (adm1021) Fix cache problem when writing temperature limits

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Axel Lin <[email protected]>

commit c024044d4da2c9c3b32933b4235df1e409293b84 upstream.

The module test script for the adm1021 driver exposes a cache problem
when writing temperature limits. temp_min and temp_max are expected
to be stored in milli-degrees C but are stored in degrees C.
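
A worked example (values illustrative): writing "25000" to temp1_max
means 25 degrees C. After temp /= 1000 the register value is 25, which
is what must be written to the chip, but sysfs reads report the cached
value in millidegrees, so the cache must store 25 * 1000 = 25000. The
fix below therefore keeps the clamped register value and the cached
millidegree value separate.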

Reported-by: Guenter Roeck <[email protected]>
Signed-off-by: Axel Lin <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hwmon/adm1021.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)

--- a/drivers/hwmon/adm1021.c
+++ b/drivers/hwmon/adm1021.c
@@ -185,7 +185,7 @@ static ssize_t set_temp_max(struct devic
struct i2c_client *client = to_i2c_client(dev);
struct adm1021_data *data = i2c_get_clientdata(client);
long temp;
- int err;
+ int reg_val, err;

err = kstrtol(buf, 10, &temp);
if (err)
@@ -193,10 +193,11 @@ static ssize_t set_temp_max(struct devic
temp /= 1000;

mutex_lock(&data->update_lock);
- data->temp_max[index] = clamp_val(temp, -128, 127);
+ reg_val = clamp_val(temp, -128, 127);
+ data->temp_max[index] = reg_val * 1000;
if (!read_only)
i2c_smbus_write_byte_data(client, ADM1021_REG_TOS_W(index),
- data->temp_max[index]);
+ reg_val);
mutex_unlock(&data->update_lock);

return count;
@@ -210,7 +211,7 @@ static ssize_t set_temp_min(struct devic
struct i2c_client *client = to_i2c_client(dev);
struct adm1021_data *data = i2c_get_clientdata(client);
long temp;
- int err;
+ int reg_val, err;

err = kstrtol(buf, 10, &temp);
if (err)
@@ -218,10 +219,11 @@ static ssize_t set_temp_min(struct devic
temp /= 1000;

mutex_lock(&data->update_lock);
- data->temp_min[index] = clamp_val(temp, -128, 127);
+ reg_val = clamp_val(temp, -128, 127);
+ data->temp_min[index] = reg_val * 1000;
if (!read_only)
i2c_smbus_write_byte_data(client, ADM1021_REG_THYST_W(index),
- data->temp_min[index]);
+ reg_val);
mutex_unlock(&data->update_lock);

return count;

2014-07-16 00:06:55

by Greg Kroah-Hartman

Subject: [PATCH 3.10 42/44] x86, espfix: Make it possible to disable 16-bit support

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "H. Peter Anvin" <[email protected]>

commit 34273f41d57ee8d854dcd2a1d754cbb546cb548f upstream.

Embedded systems, which may be very memory-size-sensitive, are
extremely unlikely to ever encounter any 16-bit software, so make it
a CONFIG_EXPERT option to turn off support for any 16-bit software
whatsoever.
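
With this applied, turning 16-bit support off is a matter of enabling
CONFIG_EXPERT and setting CONFIG_X86_16BIT=n; the X86_ESPFIX32 and
X86_ESPFIX64 symbols then drop out automatically through their
"depends on X86_16BIT" clauses, and write_ldt() rejects 16-bit segments
with -EINVAL (see the ldt.c hunk below).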

Signed-off-by: H. Peter Anvin <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/Kconfig | 23 ++++++++++++++++++-----
arch/x86/kernel/entry_32.S | 12 ++++++++++++
arch/x86/kernel/entry_64.S | 8 ++++++++
arch/x86/kernel/ldt.c | 5 +++++
4 files changed, 43 insertions(+), 5 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -951,14 +951,27 @@ config VM86
default y
depends on X86_32
---help---
- This option is required by programs like DOSEMU to run 16-bit legacy
- code on X86 processors. It also may be needed by software like
- XFree86 to initialize some video cards via BIOS. Disabling this
- option saves about 6k.
+ This option is required by programs like DOSEMU to run
+ 16-bit real mode legacy code on x86 processors. It also may
+ be needed by software like XFree86 to initialize some video
+ cards via BIOS. Disabling this option saves about 6K.
+
+config X86_16BIT
+ bool "Enable support for 16-bit segments" if EXPERT
+ default y
+ ---help---
+ This option is required by programs like Wine to run 16-bit
+ protected mode legacy code on x86 processors. Disabling
+ this option saves about 300 bytes on i386, or around 6K text
+ plus 16K runtime memory on x86-64,
+
+config X86_ESPFIX32
+ def_bool y
+ depends on X86_16BIT && X86_32

config X86_ESPFIX64
def_bool y
- depends on X86_64
+ depends on X86_16BIT && X86_64

config TOSHIBA
tristate "Toshiba Laptop support"
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -531,6 +531,7 @@ syscall_exit:
restore_all:
TRACE_IRQS_IRET
restore_all_notrace:
+#ifdef CONFIG_X86_ESPFIX32
movl PT_EFLAGS(%esp), %eax # mix EFLAGS, SS and CS
# Warning: PT_OLDSS(%esp) contains the wrong/random values if we
# are returning to the kernel.
@@ -541,6 +542,7 @@ restore_all_notrace:
cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax
CFI_REMEMBER_STATE
je ldt_ss # returning to user-space with LDT SS
+#endif
restore_nocheck:
RESTORE_REGS 4 # skip orig_eax/error_code
irq_return:
@@ -553,6 +555,7 @@ ENTRY(iret_exc)
.previous
_ASM_EXTABLE(irq_return,iret_exc)

+#ifdef CONFIG_X86_ESPFIX32
CFI_RESTORE_STATE
ldt_ss:
#ifdef CONFIG_PARAVIRT
@@ -596,6 +599,7 @@ ldt_ss:
lss (%esp), %esp /* switch to espfix segment */
CFI_ADJUST_CFA_OFFSET -8
jmp restore_nocheck
+#endif
CFI_ENDPROC
ENDPROC(system_call)

@@ -708,6 +712,7 @@ END(syscall_badsys)
* the high word of the segment base from the GDT and swiches to the
* normal stack and adjusts ESP with the matching offset.
*/
+#ifdef CONFIG_X86_ESPFIX32
/* fixup the stack */
mov GDT_ESPFIX_SS + 4, %al /* bits 16..23 */
mov GDT_ESPFIX_SS + 7, %ah /* bits 24..31 */
@@ -717,8 +722,10 @@ END(syscall_badsys)
pushl_cfi %eax
lss (%esp), %esp /* switch to the normal stack segment */
CFI_ADJUST_CFA_OFFSET -8
+#endif
.endm
.macro UNWIND_ESPFIX_STACK
+#ifdef CONFIG_X86_ESPFIX32
movl %ss, %eax
/* see if on espfix stack */
cmpw $__ESPFIX_SS, %ax
@@ -729,6 +736,7 @@ END(syscall_badsys)
/* switch to normal stack */
FIXUP_ESPFIX_STACK
27:
+#endif
.endm

/*
@@ -1336,11 +1344,13 @@ END(debug)
ENTRY(nmi)
RING0_INT_FRAME
ASM_CLAC
+#ifdef CONFIG_X86_ESPFIX32
pushl_cfi %eax
movl %ss, %eax
cmpw $__ESPFIX_SS, %ax
popl_cfi %eax
je nmi_espfix_stack
+#endif
cmpl $ia32_sysenter_target,(%esp)
je nmi_stack_fixup
pushl_cfi %eax
@@ -1380,6 +1390,7 @@ nmi_debug_stack_check:
FIX_STACK 24, nmi_stack_correct, 1
jmp nmi_stack_correct

+#ifdef CONFIG_X86_ESPFIX32
nmi_espfix_stack:
/* We have a RING0_INT_FRAME here.
*
@@ -1401,6 +1412,7 @@ nmi_espfix_stack:
lss 12+4(%esp), %esp # back to espfix stack
CFI_ADJUST_CFA_OFFSET -24
jmp irq_return
+#endif
CFI_ENDPROC
END(nmi)

--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1060,8 +1060,10 @@ irq_return:
* Are we returning to a stack segment from the LDT? Note: in
* 64-bit mode SS:RSP on the exception stack is always valid.
*/
+#ifdef CONFIG_X86_ESPFIX64
testb $4,(SS-RIP)(%rsp)
jnz irq_return_ldt
+#endif

irq_return_iret:
INTERRUPT_RETURN
@@ -1073,6 +1075,7 @@ ENTRY(native_iret)
_ASM_EXTABLE(native_iret, bad_iret)
#endif

+#ifdef CONFIG_X86_ESPFIX64
irq_return_ldt:
pushq_cfi %rax
pushq_cfi %rdi
@@ -1096,6 +1099,7 @@ irq_return_ldt:
movq %rax,%rsp
popq_cfi %rax
jmp irq_return_iret
+#endif

.section .fixup,"ax"
bad_iret:
@@ -1169,6 +1173,7 @@ END(common_interrupt)
* modify the stack to make it look like we just entered
* the #GP handler from user space, similar to bad_iret.
*/
+#ifdef CONFIG_X86_ESPFIX64
ALIGN
__do_double_fault:
XCPT_FRAME 1 RDI+8
@@ -1194,6 +1199,9 @@ __do_double_fault:
retq
CFI_ENDPROC
END(__do_double_fault)
+#else
+# define __do_double_fault do_double_fault
+#endif

/*
* End of kprobes section
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -229,6 +229,11 @@ static int write_ldt(void __user *ptr, u
}
}

+ if (!IS_ENABLED(CONFIG_X86_16BIT) && !ldt_info.seg_32bit) {
+ error = -EINVAL;
+ goto out_unlock;
+ }
+
fill_ldt(&ldt, &ldt_info);
if (oldmode)
ldt.avl = 0;

2014-07-16 00:06:52

by Greg Kroah-Hartman

Subject: [PATCH 3.10 43/44] x86, ioremap: Speed up check for RAM pages

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Roland Dreier <[email protected]>

commit c81c8a1eeede61e92a15103748c23d100880cc8a upstream.

In __ioremap_caller() (the guts of ioremap), we loop over the range of
pfns being remapped and check each one individually with page_is_ram().
For large ioremaps, this can be very slow. For example, we have a
device with a 256 GiB PCI BAR, and ioremapping this BAR can take 20+
seconds -- sometimes long enough to trigger the soft lockup detector!

Internally, page_is_ram() calls walk_system_ram_range() on a single
page. Instead, we can make a single call to walk_system_ram_range()
from __ioremap_caller(), and do our further checks only for any RAM
pages that we find. For the common case of MMIO, this saves an enormous
amount of work, since the range being ioremapped doesn't intersect
system RAM at all.

With this change, ioremap on our 256 GiB BAR takes less than 1 second.
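
To put numbers on that (simple arithmetic, not from the changelog): a
256 GiB BAR spans 256 GiB / 4 KiB = 67,108,864 pages, so the old loop
made roughly 67 million page_is_ram() calls, each performing its own
single-page walk_system_ram_range(); the new code does one walk over
the whole range and inspects only the pfns that actually intersect
System RAM.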

Signed-off-by: Roland Dreier <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/mm/ioremap.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)

--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -50,6 +50,21 @@ int ioremap_change_attr(unsigned long va
return err;
}

+static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
+ void *arg)
+{
+ unsigned long i;
+
+ for (i = 0; i < nr_pages; ++i)
+ if (pfn_valid(start_pfn + i) &&
+ !PageReserved(pfn_to_page(start_pfn + i)))
+ return 1;
+
+ WARN_ONCE(1, "ioremap on RAM pfn 0x%lx\n", start_pfn);
+
+ return 0;
+}
+
/*
* Remap an arbitrary physical address space into the kernel virtual
* address space. Needed when the kernel wants to access high addresses
@@ -93,14 +108,11 @@ static void __iomem *__ioremap_caller(re
/*
* Don't allow anybody to remap normal RAM that we're using..
*/
+ pfn = phys_addr >> PAGE_SHIFT;
last_pfn = last_addr >> PAGE_SHIFT;
- for (pfn = phys_addr >> PAGE_SHIFT; pfn <= last_pfn; pfn++) {
- int is_ram = page_is_ram(pfn);
-
- if (is_ram && pfn_valid(pfn) && !PageReserved(pfn_to_page(pfn)))
- return NULL;
- WARN_ON_ONCE(is_ram);
- }
+ if (walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL,
+ __ioremap_check_ram) == 1)
+ return NULL;

/*
* Mappings have to be page-aligned

2014-07-16 00:07:28

by Greg Kroah-Hartman

Subject: [PATCH 3.10 41/44] x86, espfix: Make espfix64 a Kconfig option, fix UML

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "H. Peter Anvin" <[email protected]>

commit 197725de65477bc8509b41388157c1a2283542bb upstream.

Make espfix64 a hidden Kconfig option. This fixes the x86-64 UML
build which had broken due to the non-existence of init_espfix_bsp()
in UML: since UML uses its own Kconfig, this option does not appear in
the UML build.

This also makes it possible to make support for 16-bit segments a
configuration option, for the people who want to minimize the size of
the kernel.

Reported-by: Ingo Molnar <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Cc: Richard Weinberger <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/Kconfig | 4 ++++
arch/x86/kernel/Makefile | 2 +-
arch/x86/kernel/smpboot.c | 2 +-
init/main.c | 2 +-
4 files changed, 7 insertions(+), 3 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -956,6 +956,10 @@ config VM86
XFree86 to initialize some video cards via BIOS. Disabling this
option saves about 6k.

+config X86_ESPFIX64
+ def_bool y
+ depends on X86_64
+
config TOSHIBA
tristate "Toshiba Laptop support"
depends on X86_32
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -27,7 +27,7 @@ obj-$(CONFIG_X86_64) += sys_x86_64.o x86
obj-y += syscall_$(BITS).o
obj-$(CONFIG_X86_64) += vsyscall_64.o
obj-$(CONFIG_X86_64) += vsyscall_emu_64.o
-obj-$(CONFIG_X86_64) += espfix_64.o
+obj-$(CONFIG_X86_ESPFIX64) += espfix_64.o
obj-y += bootflag.o e820.o
obj-y += pci-dma.o quirks.o topology.o kdebugfs.o
obj-y += alternative.o i8253.o pci-nommu.o hw_breakpoint.o
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -267,7 +267,7 @@ notrace static void __cpuinit start_seco
/*
* Enable the espfix hack for this CPU
*/
-#ifdef CONFIG_X86_64
+#ifdef CONFIG_X86_ESPFIX64
init_espfix_ap();
#endif

--- a/init/main.c
+++ b/init/main.c
@@ -606,7 +606,7 @@ asmlinkage void __init start_kernel(void
if (efi_enabled(EFI_RUNTIME_SERVICES))
efi_enter_virtual_mode();
#endif
-#ifdef CONFIG_X86_64
+#ifdef CONFIG_X86_ESPFIX64
/* Should be run before the first non-init thread is created */
init_espfix_bsp();
#endif

2014-07-16 00:07:51

by Greg Kroah-Hartman

Subject: [PATCH 3.10 40/44] x86, espfix: Fix broken header guard

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "H. Peter Anvin" <[email protected]>

commit 20b68535cd27183ebd3651ff313afb2b97dac941 upstream.

Header guard is #ifndef, not #ifdef...

Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/include/asm/espfix.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/espfix.h
+++ b/arch/x86/include/asm/espfix.h
@@ -1,4 +1,4 @@
-#ifdef _ASM_X86_ESPFIX_H
+#ifndef _ASM_X86_ESPFIX_H
#define _ASM_X86_ESPFIX_H

#ifdef CONFIG_X86_64

2014-07-16 00:07:56

by Greg Kroah-Hartman

Subject: [PATCH 3.10 09/44] hwmon: (amc6821) Fix permissions for temp2_input

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Axel Lin <[email protected]>

commit df86754b746e9a0ff6f863f690b1c01d408e3cdc upstream.

temp2_input should not be writable, fix it.

Reported-by: Guenter Roeck <[email protected]>
Signed-off-by: Axel Lin <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hwmon/amc6821.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/hwmon/amc6821.c
+++ b/drivers/hwmon/amc6821.c
@@ -707,7 +707,7 @@ static SENSOR_DEVICE_ATTR(temp1_max_alar
get_temp_alarm, NULL, IDX_TEMP1_MAX);
static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO,
get_temp_alarm, NULL, IDX_TEMP1_CRIT);
-static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO | S_IWUSR,
+static SENSOR_DEVICE_ATTR(temp2_input, S_IRUGO,
get_temp, NULL, IDX_TEMP2_INPUT);
static SENSOR_DEVICE_ATTR(temp2_min, S_IRUGO | S_IWUSR, get_temp,
set_temp, IDX_TEMP2_MIN);

2014-07-16 00:08:43

by Greg Kroah-Hartman

Subject: [PATCH 3.10 08/44] workqueue: zero cpumask of wq_numa_possible_cpumask on init

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Yasuaki Ishimatsu <[email protected]>

commit 5a6024f1604eef119cf3a6fa413fe0261a81a8f3 upstream.

When hot-adding and onlining a CPU, a kernel panic occurs with the
following call trace.

BUG: unable to handle kernel paging request at 0000000000001d08
IP: [<ffffffff8114acfd>] __alloc_pages_nodemask+0x9d/0xb10
PGD 0
Oops: 0000 [#1] SMP
...
Call Trace:
[<ffffffff812b8745>] ? cpumask_next_and+0x35/0x50
[<ffffffff810a3283>] ? find_busiest_group+0x113/0x8f0
[<ffffffff81193bc9>] ? deactivate_slab+0x349/0x3c0
[<ffffffff811926f1>] new_slab+0x91/0x300
[<ffffffff815de95a>] __slab_alloc+0x2bb/0x482
[<ffffffff8105bc1c>] ? copy_process.part.25+0xfc/0x14c0
[<ffffffff810a3c78>] ? load_balance+0x218/0x890
[<ffffffff8101a679>] ? sched_clock+0x9/0x10
[<ffffffff81105ba9>] ? trace_clock_local+0x9/0x10
[<ffffffff81193d1c>] kmem_cache_alloc_node+0x8c/0x200
[<ffffffff8105bc1c>] copy_process.part.25+0xfc/0x14c0
[<ffffffff81114d0d>] ? trace_buffer_unlock_commit+0x4d/0x60
[<ffffffff81085a80>] ? kthread_create_on_node+0x140/0x140
[<ffffffff8105d0ec>] do_fork+0xbc/0x360
[<ffffffff8105d3b6>] kernel_thread+0x26/0x30
[<ffffffff81086652>] kthreadd+0x2c2/0x300
[<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60
[<ffffffff815f20ec>] ret_from_fork+0x7c/0xb0
[<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60

In my investigation, I found that the root cause is wq_numa_possible_cpumask:
all of its entries are allocated with alloc_cpumask_var_node() and then
used without being initialized, so they contain garbage values.

When hot-adding and onlining a CPU, wq_update_unbound_numa() is called;
it calls alloc_unbound_pwq(), which in turn calls get_unbound_pool().
In get_unbound_pool(), worker_pool->node is set as follows:

3592 /* if cpumask is contained inside a NUMA node, we belong to that node */
3593 if (wq_numa_enabled) {
3594 for_each_node(node) {
3595 if (cpumask_subset(pool->attrs->cpumask,
3596 wq_numa_possible_cpumask[node])) {
3597 pool->node = node;
3598 break;
3599 }
3600 }
3601 }

But wq_numa_possible_cpumask[node] does not contain a correct cpumask, so
the wrong node is selected and the kernel panics.

With this patch, all entries of wq_numa_possible_cpumask are allocated
with zalloc_cpumask_var_node() so that they are zero-initialized, and
the panic no longer occurs.
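
zalloc_cpumask_var_node() differs from alloc_cpumask_var_node() only in
that the returned mask is zeroed (the same relationship kzalloc() has
to kmalloc()). Entries that never get a bit set in the
for_each_possible_cpu() loop below then remain well-defined empty masks
instead of containing garbage, so the cpumask_subset() test in
get_unbound_pool() can no longer match them spuriously.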

Signed-off-by: Yasuaki Ishimatsu <[email protected]>
Reviewed-by: Lai Jiangshan <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Fixes: bce903809ab3 ("workqueue: add wq_numa_tbl_len and wq_numa_possible_cpumask[]")
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/workqueue.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4968,7 +4968,7 @@ static void __init wq_numa_init(void)
BUG_ON(!tbl);

for_each_node(node)
- BUG_ON(!alloc_cpumask_var_node(&tbl[node], GFP_KERNEL,
+ BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL,
node_online(node) ? node : NUMA_NO_NODE));

for_each_possible_cpu(cpu) {

2014-07-16 00:08:46

by Greg Kroah-Hartman

Subject: [PATCH 3.10 05/44] parisc: add serial ports of C8000/1GHz machine to hardware database

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Helge Deller <[email protected]>

commit eadcc7208a2237016be7bdff4551ba7614da85c8 upstream.

Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/parisc/kernel/hardware.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/arch/parisc/kernel/hardware.c
+++ b/arch/parisc/kernel/hardware.c
@@ -1205,7 +1205,8 @@ static struct hp_hardware hp_hardware_li
{HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"},
{HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"},
{HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"},
- {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"},
+ {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"},
+ {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? Core RS-232"},
{HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"},
{HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"},
{HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"},

2014-07-16 00:08:53

by Greg Kroah-Hartman

Subject: [PATCH 3.10 04/44] USB: serial: ftdi_sio: Add Infineon Triboard

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Michal Sojka <[email protected]>

commit d8279a40e50ad55539780aa617a32a53d7f0953e upstream.

This adds support for the Infineon TriBoard TC1798 [1]. Only interface 1
is used as a serial line (see [2], Figure 8-6).

[1] http://www.infineon.com/cms/de/product/microcontroller/development-tools-software-and-kits/tricore-tm-development-tools-software-and-kits/starterkits-and-evaluation-boards/starter-kit-tc1798/channel.html?channel=db3a304333b8a7ca0133cfa3d73e4268
[2] http://www.infineon.com/dgdl/TriBoardManual-TC1798-V10.pdf?folderId=db3a304412b407950112b409ae7c0343&fileId=db3a304333b8a7ca0133cfae99fe426a
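
For context (a reading of the diff below, not additional changelog
text): matching with USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID,
INFINEON_TRIBOARD_PID, 1) makes ftdi_sio bind only to bInterfaceNumber
1, leaving the board's other interface (presumably the DAS/JTAG debug
channel mentioned in the new PID comment) untouched for other drivers.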

Signed-off-by: Michal Sojka <[email protected]>
Cc: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/serial/ftdi_sio.c | 2 ++
drivers/usb/serial/ftdi_sio_ids.h | 6 ++++++
2 files changed, 8 insertions(+)

--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -948,6 +948,8 @@ static struct usb_device_id id_table_com
{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_2_PID) },
{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_3_PID) },
{ USB_DEVICE(BRAINBOXES_VID, BRAINBOXES_US_842_4_PID) },
+ /* Infineon Devices */
+ { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) },
{ }, /* Optional parameter entry */
{ } /* Terminating entry */
};
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -584,6 +584,12 @@
#define RATOC_PRODUCT_ID_USB60F 0xb020

/*
+ * Infineon Technologies
+ */
+#define INFINEON_VID 0x058b
+#define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */
+
+/*
* Acton Research Corp.
*/
#define ACTON_VID 0x0647 /* Vendor ID */

2014-07-16 00:10:09

by Greg Kroah-Hartman

Subject: [PATCH 3.10 36/44] Score: Modify the Makefile of Score, remove -mlong-calls for compiling

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lennox Wu <[email protected]>

commit df9e4d1c39c472cb44d81ab2ed2db503fc486e3b upstream.

Signed-off-by: Lennox Wu <[email protected]>
Cc: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/score/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/score/Makefile
+++ b/arch/score/Makefile
@@ -20,8 +20,8 @@ cflags-y += -G0 -pipe -mel -mnhwloop -D_
#
KBUILD_AFLAGS += $(cflags-y)
KBUILD_CFLAGS += $(cflags-y)
-KBUILD_AFLAGS_MODULE += -mlong-calls
-KBUILD_CFLAGS_MODULE += -mlong-calls
+KBUILD_AFLAGS_MODULE +=
+KBUILD_CFLAGS_MODULE +=
LDFLAGS += --oformat elf32-littlescore
LDFLAGS_vmlinux += -G0 -static -nostdlib


2014-07-16 00:10:05

by Greg Kroah-Hartman

Subject: [PATCH 3.10 37/44] Revert "x86-64, modify_ldt: Make support for 16-bit segments a runtime option"

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "H. Peter Anvin" <[email protected]>

commit 7ed6fb9b5a5510e4ef78ab27419184741169978a upstream.

This reverts commit fa81511bb0bbb2b1aace3695ce869da9762624ff in
preparation of merging in the proper fix (espfix64).

Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/kernel/ldt.c | 4 +---
arch/x86/vdso/vdso32-setup.c | 8 --------
2 files changed, 1 insertion(+), 11 deletions(-)

--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -20,8 +20,6 @@
#include <asm/mmu_context.h>
#include <asm/syscalls.h>

-int sysctl_ldt16 = 0;
-
#ifdef CONFIG_SMP
static void flush_ldt(void *current_mm)
{
@@ -236,7 +234,7 @@ static int write_ldt(void __user *ptr, u
* IRET leaking the high bits of the kernel stack address.
*/
#ifdef CONFIG_X86_64
- if (!ldt_info.seg_32bit && !sysctl_ldt16) {
+ if (!ldt_info.seg_32bit) {
error = -EINVAL;
goto out_unlock;
}
--- a/arch/x86/vdso/vdso32-setup.c
+++ b/arch/x86/vdso/vdso32-setup.c
@@ -41,7 +41,6 @@ enum {
#ifdef CONFIG_X86_64
#define vdso_enabled sysctl_vsyscall32
#define arch_setup_additional_pages syscall32_setup_pages
-extern int sysctl_ldt16;
#endif

/*
@@ -380,13 +379,6 @@ static ctl_table abi_table2[] = {
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec
- },
- {
- .procname = "ldt16",
- .data = &sysctl_ldt16,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec
},
{}
};

2014-07-16 00:10:00

by Greg Kroah-Hartman

Subject: [PATCH 3.10 38/44] x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "H. Peter Anvin" <[email protected]>

commit 3891a04aafd668686239349ea58f3314ea2af86b upstream.

The IRET instruction, when returning to a 16-bit segment, only
restores the bottom 16 bits of the user space stack pointer. This
causes some 16-bit software to break, but it also leaks kernel state
to user space. We have a software workaround for that ("espfix") for
the 32-bit kernel, but it relies on a nonzero stack segment base which
is not available in 64-bit mode.

In checkin:

b3b42ac2cbae x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels

we "solved" this by forbidding 16-bit segments on 64-bit kernels, with
the logic that 16-bit support is crippled on 64-bit kernels anyway (no
V86 support), but it turns out that people are doing stuff like
running old Win16 binaries under Wine and expect it to work.

This works around this by creating percpu "ministacks", each of which
is mapped 2^16 times 64K apart. When we detect that the return SS is
on the LDT, we copy the IRET frame to the ministack and use the
relevant alias to return to userspace. The ministacks are mapped
readonly, so if IRET faults we promote #GP to #DF which is an IST
vector and thus has its own stack; we then do the fixup in the #DF
handler.

(Making #GP an IST exception would make the msr_safe functions unsafe
in NMI/MC context, and quite possibly have other effects.)
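
Some capacity numbers implied by the constants in espfix_64.c below
(derived from the code, not stated in the changelog): an IRET frame
needs 6*8 = 48 bytes, rounded up to a 64-byte slot (ESPFIX_STACK_SIZE),
so one 4 KiB page holds 4096/64 = 64 ministacks; with
ESPFIX_PAGE_SPACE = 1 << (PGDIR_SHIFT - PAGE_SHIFT - 16) = 2048 pages
of address space, the region can serve 64 * 2048 = 131072 CPUs, which
is why a single PGD entry suffices (see the #error check in the file).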

Special thanks to:

- Andy Lutomirski, for the suggestion of using very small stack slots
and copy (as opposed to map) the IRET frame there, and for the
suggestion to mark them readonly and let the fault promote to #DF.
- Konrad Wilk for paravirt fixup and testing.
- Borislav Petkov for testing help and useful comments.

Reported-by: Brian Gerst <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andrew Lutomriski <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Dirk Hohndel <[email protected]>
Cc: Arjan van de Ven <[email protected]>
Cc: comex <[email protected]>
Cc: Alexander van Heukelum <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: <[email protected]> # consider after upstream merge
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
Documentation/x86/x86_64/mm.txt | 2
arch/x86/include/asm/pgtable_64_types.h | 2
arch/x86/include/asm/setup.h | 3
arch/x86/kernel/Makefile | 1
arch/x86/kernel/entry_64.S | 73 ++++++++++-
arch/x86/kernel/espfix_64.c | 208 ++++++++++++++++++++++++++++++++
arch/x86/kernel/ldt.c | 11 -
arch/x86/kernel/smpboot.c | 7 +
arch/x86/mm/dump_pagetables.c | 31 +++-
init/main.c | 4
10 files changed, 316 insertions(+), 26 deletions(-)

--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45
ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
... unused hole ...
+ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
+... unused hole ...
ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0
ffffffffa0000000 - ffffffffff5fffff (=1525 MB) module mapping space
ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -61,6 +61,8 @@ typedef struct { pteval_t pte; } pte_t;
#define MODULES_VADDR _AC(0xffffffffa0000000, UL)
#define MODULES_END _AC(0xffffffffff000000, UL)
#define MODULES_LEN (MODULES_END - MODULES_VADDR)
+#define ESPFIX_PGD_ENTRY _AC(-2, UL)
+#define ESPFIX_BASE_ADDR (ESPFIX_PGD_ENTRY << PGDIR_SHIFT)

#define EARLY_DYNAMIC_PAGE_TABLES 64

--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -60,6 +60,9 @@ extern void x86_ce4100_early_setup(void)
static inline void x86_ce4100_early_setup(void) { }
#endif

+extern void init_espfix_bsp(void);
+extern void init_espfix_ap(void);
+
#ifndef _SETUP

/*
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -27,6 +27,7 @@ obj-$(CONFIG_X86_64) += sys_x86_64.o x86
obj-y += syscall_$(BITS).o
obj-$(CONFIG_X86_64) += vsyscall_64.o
obj-$(CONFIG_X86_64) += vsyscall_emu_64.o
+obj-$(CONFIG_X86_64) += espfix_64.o
obj-y += bootflag.o e820.o
obj-y += pci-dma.o quirks.o topology.o kdebugfs.o
obj-y += alternative.o i8253.o pci-nommu.o hw_breakpoint.o
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -58,6 +58,7 @@
#include <asm/asm.h>
#include <asm/context_tracking.h>
#include <asm/smap.h>
+#include <asm/pgtable_types.h>
#include <linux/err.h>

/* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. */
@@ -1055,8 +1056,16 @@ restore_args:
RESTORE_ARGS 1,8,1

irq_return:
+ /*
+ * Are we returning to a stack segment from the LDT? Note: in
+ * 64-bit mode SS:RSP on the exception stack is always valid.
+ */
+ testb $4,(SS-RIP)(%rsp)
+ jnz irq_return_ldt
+
+irq_return_iret:
INTERRUPT_RETURN
- _ASM_EXTABLE(irq_return, bad_iret)
+ _ASM_EXTABLE(irq_return_iret, bad_iret)

#ifdef CONFIG_PARAVIRT
ENTRY(native_iret)
@@ -1064,6 +1073,30 @@ ENTRY(native_iret)
_ASM_EXTABLE(native_iret, bad_iret)
#endif

+irq_return_ldt:
+ pushq_cfi %rax
+ pushq_cfi %rdi
+ SWAPGS
+ movq PER_CPU_VAR(espfix_waddr),%rdi
+ movq %rax,(0*8)(%rdi) /* RAX */
+ movq (2*8)(%rsp),%rax /* RIP */
+ movq %rax,(1*8)(%rdi)
+ movq (3*8)(%rsp),%rax /* CS */
+ movq %rax,(2*8)(%rdi)
+ movq (4*8)(%rsp),%rax /* RFLAGS */
+ movq %rax,(3*8)(%rdi)
+ movq (6*8)(%rsp),%rax /* SS */
+ movq %rax,(5*8)(%rdi)
+ movq (5*8)(%rsp),%rax /* RSP */
+ movq %rax,(4*8)(%rdi)
+ andl $0xffff0000,%eax
+ popq_cfi %rdi
+ orq PER_CPU_VAR(espfix_stack),%rax
+ SWAPGS
+ movq %rax,%rsp
+ popq_cfi %rax
+ jmp irq_return_iret
+
.section .fixup,"ax"
bad_iret:
/*
@@ -1127,9 +1160,41 @@ ENTRY(retint_kernel)
call preempt_schedule_irq
jmp exit_intr
#endif
-
CFI_ENDPROC
END(common_interrupt)
+
+ /*
+ * If IRET takes a fault on the espfix stack, then we
+ * end up promoting it to a doublefault. In that case,
+ * modify the stack to make it look like we just entered
+ * the #GP handler from user space, similar to bad_iret.
+ */
+ ALIGN
+__do_double_fault:
+ XCPT_FRAME 1 RDI+8
+ movq RSP(%rdi),%rax /* Trap on the espfix stack? */
+ sarq $PGDIR_SHIFT,%rax
+ cmpl $ESPFIX_PGD_ENTRY,%eax
+ jne do_double_fault /* No, just deliver the fault */
+ cmpl $__KERNEL_CS,CS(%rdi)
+ jne do_double_fault
+ movq RIP(%rdi),%rax
+ cmpq $irq_return_iret,%rax
+#ifdef CONFIG_PARAVIRT
+ je 1f
+ cmpq $native_iret,%rax
+#endif
+ jne do_double_fault /* This shouldn't happen... */
+1:
+ movq PER_CPU_VAR(kernel_stack),%rax
+ subq $(6*8-KERNEL_STACK_OFFSET),%rax /* Reset to original stack */
+ movq %rax,RSP(%rdi)
+ movq $0,(%rax) /* Missing (lost) #GP error code */
+ movq $general_protection,RIP(%rdi)
+ retq
+ CFI_ENDPROC
+END(__do_double_fault)
+
/*
* End of kprobes section
*/
@@ -1298,7 +1363,7 @@ zeroentry overflow do_overflow
zeroentry bounds do_bounds
zeroentry invalid_op do_invalid_op
zeroentry device_not_available do_device_not_available
-paranoiderrorentry double_fault do_double_fault
+paranoiderrorentry double_fault __do_double_fault
zeroentry coprocessor_segment_overrun do_coprocessor_segment_overrun
errorentry invalid_TSS do_invalid_TSS
errorentry segment_not_present do_segment_not_present
@@ -1585,7 +1650,7 @@ error_sti:
*/
error_kernelspace:
incl %ebx
- leaq irq_return(%rip),%rcx
+ leaq irq_return_iret(%rip),%rcx
cmpq %rcx,RIP+8(%rsp)
je error_swapgs
movl %ecx,%eax /* zero extend */
--- /dev/null
+++ b/arch/x86/kernel/espfix_64.c
@@ -0,0 +1,208 @@
+/* ----------------------------------------------------------------------- *
+ *
+ * Copyright 2014 Intel Corporation; author: H. Peter Anvin
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * ----------------------------------------------------------------------- */
+
+/*
+ * The IRET instruction, when returning to a 16-bit segment, only
+ * restores the bottom 16 bits of the user space stack pointer. This
+ * causes some 16-bit software to break, but it also leaks kernel state
+ * to user space.
+ *
+ * This works around this by creating percpu "ministacks", each of which
+ * is mapped 2^16 times 64K apart. When we detect that the return SS is
+ * on the LDT, we copy the IRET frame to the ministack and use the
+ * relevant alias to return to userspace. The ministacks are mapped
+ * readonly, so if the IRET fault we promote #GP to #DF which is an IST
+ * vector and thus has its own stack; we then do the fixup in the #DF
+ * handler.
+ *
+ * This file sets up the ministacks and the related page tables. The
+ * actual ministack invocation is in entry_64.S.
+ */
+
+#include <linux/init.h>
+#include <linux/init_task.h>
+#include <linux/kernel.h>
+#include <linux/percpu.h>
+#include <linux/gfp.h>
+#include <linux/random.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/setup.h>
+
+/*
+ * Note: we only need 6*8 = 48 bytes for the espfix stack, but round
+ * it up to a cache line to avoid unnecessary sharing.
+ */
+#define ESPFIX_STACK_SIZE (8*8UL)
+#define ESPFIX_STACKS_PER_PAGE (PAGE_SIZE/ESPFIX_STACK_SIZE)
+
+/* There is address space for how many espfix pages? */
+#define ESPFIX_PAGE_SPACE (1UL << (PGDIR_SHIFT-PAGE_SHIFT-16))
+
+#define ESPFIX_MAX_CPUS (ESPFIX_STACKS_PER_PAGE * ESPFIX_PAGE_SPACE)
+#if CONFIG_NR_CPUS > ESPFIX_MAX_CPUS
+# error "Need more than one PGD for the ESPFIX hack"
+#endif
+
+#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)
+
+/* This contains the *bottom* address of the espfix stack */
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_stack);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr);
+
+/* Initialization mutex - should this be a spinlock? */
+static DEFINE_MUTEX(espfix_init_mutex);
+
+/* Page allocation bitmap - each page serves ESPFIX_STACKS_PER_PAGE CPUs */
+#define ESPFIX_MAX_PAGES DIV_ROUND_UP(CONFIG_NR_CPUS, ESPFIX_STACKS_PER_PAGE)
+static void *espfix_pages[ESPFIX_MAX_PAGES];
+
+static __page_aligned_bss pud_t espfix_pud_page[PTRS_PER_PUD]
+ __aligned(PAGE_SIZE);
+
+static unsigned int page_random, slot_random;
+
+/*
+ * This returns the bottom address of the espfix stack for a specific CPU.
+ * The math allows for a non-power-of-two ESPFIX_STACK_SIZE, in which case
+ * we have to account for some amount of padding at the end of each page.
+ */
+static inline unsigned long espfix_base_addr(unsigned int cpu)
+{
+ unsigned long page, slot;
+ unsigned long addr;
+
+ page = (cpu / ESPFIX_STACKS_PER_PAGE) ^ page_random;
+ slot = (cpu + slot_random) % ESPFIX_STACKS_PER_PAGE;
+ addr = (page << PAGE_SHIFT) + (slot * ESPFIX_STACK_SIZE);
+ addr = (addr & 0xffffUL) | ((addr & ~0xffffUL) << 16);
+ addr += ESPFIX_BASE_ADDR;
+ return addr;
+}
+
+#define PTE_STRIDE (65536/PAGE_SIZE)
+#define ESPFIX_PTE_CLONES (PTRS_PER_PTE/PTE_STRIDE)
+#define ESPFIX_PMD_CLONES PTRS_PER_PMD
+#define ESPFIX_PUD_CLONES (65536/(ESPFIX_PTE_CLONES*ESPFIX_PMD_CLONES))
+
+#define PGTABLE_PROT ((_KERNPG_TABLE & ~_PAGE_RW) | _PAGE_NX)
+
+static void init_espfix_random(void)
+{
+ unsigned long rand;
+
+ /*
+ * This is run before the entropy pools are initialized,
+ * but this is hopefully better than nothing.
+ */
+ if (!arch_get_random_long(&rand)) {
+ /* The constant is an arbitrary large prime */
+ rdtscll(rand);
+ rand *= 0xc345c6b72fd16123UL;
+ }
+
+ slot_random = rand % ESPFIX_STACKS_PER_PAGE;
+ page_random = (rand / ESPFIX_STACKS_PER_PAGE)
+ & (ESPFIX_PAGE_SPACE - 1);
+}
+
+void __init init_espfix_bsp(void)
+{
+ pgd_t *pgd_p;
+ pteval_t ptemask;
+
+ ptemask = __supported_pte_mask;
+
+ /* Install the espfix pud into the kernel page directory */
+ pgd_p = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)];
+ pgd_populate(&init_mm, pgd_p, (pud_t *)espfix_pud_page);
+
+ /* Randomize the locations */
+ init_espfix_random();
+
+ /* The rest is the same as for any other processor */
+ init_espfix_ap();
+}
+
+void init_espfix_ap(void)
+{
+ unsigned int cpu, page;
+ unsigned long addr;
+ pud_t pud, *pud_p;
+ pmd_t pmd, *pmd_p;
+ pte_t pte, *pte_p;
+ int n;
+ void *stack_page;
+ pteval_t ptemask;
+
+ /* We only have to do this once... */
+ if (likely(this_cpu_read(espfix_stack)))
+ return; /* Already initialized */
+
+ cpu = smp_processor_id();
+ addr = espfix_base_addr(cpu);
+ page = cpu/ESPFIX_STACKS_PER_PAGE;
+
+ /* Did another CPU already set this up? */
+ stack_page = ACCESS_ONCE(espfix_pages[page]);
+ if (likely(stack_page))
+ goto done;
+
+ mutex_lock(&espfix_init_mutex);
+
+ /* Did we race on the lock? */
+ stack_page = ACCESS_ONCE(espfix_pages[page]);
+ if (stack_page)
+ goto unlock_done;
+
+ ptemask = __supported_pte_mask;
+
+ pud_p = &espfix_pud_page[pud_index(addr)];
+ pud = *pud_p;
+ if (!pud_present(pud)) {
+ pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP);
+ pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask));
+ paravirt_alloc_pud(&init_mm, __pa(pmd_p) >> PAGE_SHIFT);
+ for (n = 0; n < ESPFIX_PUD_CLONES; n++)
+ set_pud(&pud_p[n], pud);
+ }
+
+ pmd_p = pmd_offset(&pud, addr);
+ pmd = *pmd_p;
+ if (!pmd_present(pmd)) {
+ pte_p = (pte_t *)__get_free_page(PGALLOC_GFP);
+ pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask));
+ paravirt_alloc_pmd(&init_mm, __pa(pte_p) >> PAGE_SHIFT);
+ for (n = 0; n < ESPFIX_PMD_CLONES; n++)
+ set_pmd(&pmd_p[n], pmd);
+ }
+
+ pte_p = pte_offset_kernel(&pmd, addr);
+ stack_page = (void *)__get_free_page(GFP_KERNEL);
+ pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));
+ paravirt_alloc_pte(&init_mm, __pa(stack_page) >> PAGE_SHIFT);
+ for (n = 0; n < ESPFIX_PTE_CLONES; n++)
+ set_pte(&pte_p[n*PTE_STRIDE], pte);
+
+ /* Job is done for this CPU and any CPU which shares this page */
+ ACCESS_ONCE(espfix_pages[page]) = stack_page;
+
+unlock_done:
+ mutex_unlock(&espfix_init_mutex);
+done:
+ this_cpu_write(espfix_stack, addr);
+ this_cpu_write(espfix_waddr, (unsigned long)stack_page
+ + (addr & ~PAGE_MASK));
+}
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -229,17 +229,6 @@ static int write_ldt(void __user *ptr, u
}
}

- /*
- * On x86-64 we do not support 16-bit segments due to
- * IRET leaking the high bits of the kernel stack address.
- */
-#ifdef CONFIG_X86_64
- if (!ldt_info.seg_32bit) {
- error = -EINVAL;
- goto out_unlock;
- }
-#endif
-
fill_ldt(&ldt, &ldt_info);
if (oldmode)
ldt.avl = 0;
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -265,6 +265,13 @@ notrace static void __cpuinit start_seco
check_tsc_sync_target();

/*
+ * Enable the espfix hack for this CPU
+ */
+#ifdef CONFIG_X86_64
+ init_espfix_ap();
+#endif
+
+ /*
* We need to hold vector_lock so there the set of online cpus
* does not change while we are assigning vectors to cpus. Holding
* this lock ensures we don't half assign or remove an irq from a cpu.
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -30,11 +30,13 @@ struct pg_state {
unsigned long start_address;
unsigned long current_address;
const struct addr_marker *marker;
+ unsigned long lines;
};

struct addr_marker {
unsigned long start_address;
const char *name;
+ unsigned long max_lines;
};

/* indices for address_markers; keep sync'd w/ address_markers below */
@@ -45,6 +47,7 @@ enum address_markers_idx {
LOW_KERNEL_NR,
VMALLOC_START_NR,
VMEMMAP_START_NR,
+ ESPFIX_START_NR,
HIGH_KERNEL_NR,
MODULES_VADDR_NR,
MODULES_END_NR,
@@ -67,6 +70,7 @@ static struct addr_marker address_marker
{ PAGE_OFFSET, "Low Kernel Mapping" },
{ VMALLOC_START, "vmalloc() Area" },
{ VMEMMAP_START, "Vmemmap" },
+ { ESPFIX_BASE_ADDR, "ESPfix Area", 16 },
{ __START_KERNEL_map, "High Kernel Mapping" },
{ MODULES_VADDR, "Modules" },
{ MODULES_END, "End Modules" },
@@ -163,7 +167,7 @@ static void note_page(struct seq_file *m
pgprot_t new_prot, int level)
{
pgprotval_t prot, cur;
- static const char units[] = "KMGTPE";
+ static const char units[] = "BKMGTPE";

/*
* If we have a "break" in the series, we need to flush the state that
@@ -178,6 +182,7 @@ static void note_page(struct seq_file *m
st->current_prot = new_prot;
st->level = level;
st->marker = address_markers;
+ st->lines = 0;
seq_printf(m, "---[ %s ]---\n", st->marker->name);
} else if (prot != cur || level != st->level ||
st->current_address >= st->marker[1].start_address) {
@@ -188,17 +193,21 @@ static void note_page(struct seq_file *m
/*
* Now print the actual finished series
*/
- seq_printf(m, "0x%0*lx-0x%0*lx ",
- width, st->start_address,
- width, st->current_address);
-
- delta = (st->current_address - st->start_address) >> 10;
- while (!(delta & 1023) && unit[1]) {
- delta >>= 10;
- unit++;
+ if (!st->marker->max_lines ||
+ st->lines < st->marker->max_lines) {
+ seq_printf(m, "0x%0*lx-0x%0*lx ",
+ width, st->start_address,
+ width, st->current_address);
+
+ delta = (st->current_address - st->start_address) >> 10;
+ while (!(delta & 1023) && unit[1]) {
+ delta >>= 10;
+ unit++;
+ }
+ seq_printf(m, "%9lu%c ", delta, *unit);
+ printk_prot(m, st->current_prot, st->level);
}
- seq_printf(m, "%9lu%c ", delta, *unit);
- printk_prot(m, st->current_prot, st->level);
+ st->lines++;

/*
* We print markers for special areas of address space,
--- a/init/main.c
+++ b/init/main.c
@@ -606,6 +606,10 @@ asmlinkage void __init start_kernel(void
if (efi_enabled(EFI_RUNTIME_SERVICES))
efi_enter_virtual_mode();
#endif
+#ifdef CONFIG_X86_64
+ /* Should be run before the first non-init thread is created */
+ init_espfix_bsp();
+#endif
thread_info_cache_init();
cred_init();
fork_init(totalram_pages);
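
As an aside for readers tracing the ministack math in the new
espfix_64.c above: espfix_base_addr() keeps the low 16 bits of the
returned address equal to the low 16 bits of the linear stack offset
and shifts everything above up by 16 bits, which is what creates room
for the 64K-spaced aliases. A minimal user-space sketch of just the
address computation (randomization dropped; ESPFIX_BASE_ADDR is a
placeholder here, not the kernel's actual constant):

        #include <stdio.h>

        #define PAGE_SHIFT              12
        #define PAGE_SIZE               (1UL << PAGE_SHIFT)
        #define ESPFIX_STACK_SIZE       (8*8UL)
        #define ESPFIX_STACKS_PER_PAGE  (PAGE_SIZE/ESPFIX_STACK_SIZE)
        #define ESPFIX_BASE_ADDR        0xffffff0000000000UL /* placeholder */

        static unsigned long espfix_base_addr(unsigned int cpu)
        {
                unsigned long page = cpu / ESPFIX_STACKS_PER_PAGE;
                unsigned long slot = cpu % ESPFIX_STACKS_PER_PAGE;
                unsigned long addr = (page << PAGE_SHIFT)
                                   + slot * ESPFIX_STACK_SIZE;

                /* Bits 15:0 stay as the ministack offset; higher bits
                   move up by 16, leaving a window in which each page
                   can be aliased 2^16 times at 64K intervals. */
                addr = (addr & 0xffffUL) | ((addr & ~0xffffUL) << 16);
                return addr + ESPFIX_BASE_ADDR;
        }

        int main(void)
        {
                unsigned int cpus[] = { 0, 1, 63, 64, 1024 };
                unsigned int i;

                for (i = 0; i < sizeof(cpus)/sizeof(cpus[0]); i++)
                        printf("cpu %4u -> %#lx\n",
                               cpus[i], espfix_base_addr(cpus[i]));
                return 0;
        }

Compiled with -std=c99, this shows slots 64 bytes apart within a page,
with the page-selector bits landing above bit 31 once the linear offset
crosses 64K (cpu 1024 in the sample) while bits 15:0 stay unchanged.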

2014-07-16 00:11:16

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 35/44] Score: The commit is for compiling successfully.

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lennox Wu <[email protected]>

commit 5fbbf8a1a93452b26e7791cf32cefce62b0a480b upstream.

The modifications include:
1. Kconfig of Score: we don't support ioremap
2. Add a missing header file include
3. Fix errors introduced by other people's commits that we had not checked:
3.1 arch/score/kernel/entry.S: wrong instructions
3.2 arch/score/kernel/process.c: just some typos

Signed-off-by: Lennox Wu <[email protected]>
Cc: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/score/Kconfig | 3 +++
arch/score/include/asm/io.h | 1 -
arch/score/include/asm/pgalloc.h | 2 +-
arch/score/kernel/entry.S | 4 ++--
arch/score/kernel/process.c | 4 ++--
5 files changed, 8 insertions(+), 6 deletions(-)

--- a/arch/score/Kconfig
+++ b/arch/score/Kconfig
@@ -109,3 +109,6 @@ source "security/Kconfig"
source "crypto/Kconfig"

source "lib/Kconfig"
+
+config NO_IOMEM
+ def_bool y
--- a/arch/score/include/asm/io.h
+++ b/arch/score/include/asm/io.h
@@ -5,5 +5,4 @@

#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
-
#endif /* _ASM_SCORE_IO_H */
--- a/arch/score/include/asm/pgalloc.h
+++ b/arch/score/include/asm/pgalloc.h
@@ -2,7 +2,7 @@
#define _ASM_SCORE_PGALLOC_H

#include <linux/mm.h>
-
+#include <linux/highmem.h>
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
{
--- a/arch/score/kernel/entry.S
+++ b/arch/score/kernel/entry.S
@@ -264,7 +264,7 @@ resume_kernel:
disable_irq
lw r8, [r28, TI_PRE_COUNT]
cmpz.c r8
- bne r8, restore_all
+ bne restore_all
need_resched:
lw r8, [r28, TI_FLAGS]
andri.c r9, r8, _TIF_NEED_RESCHED
@@ -415,7 +415,7 @@ ENTRY(handle_sys)
sw r9, [r0, PT_EPC]

cmpi.c r27, __NR_syscalls # check syscall number
- bgeu illegal_syscall
+ bcs illegal_syscall

slli r8, r27, 2 # get syscall routine
la r11, sys_call_table
--- a/arch/score/kernel/process.c
+++ b/arch/score/kernel/process.c
@@ -78,8 +78,8 @@ int copy_thread(unsigned long clone_flag
p->thread.reg0 = (unsigned long) childregs;
if (unlikely(p->flags & PF_KTHREAD)) {
memset(childregs, 0, sizeof(struct pt_regs));
- p->thread->reg12 = usp;
- p->thread->reg13 = arg;
+ p->thread.reg12 = usp;
+ p->thread.reg13 = arg;
p->thread.reg3 = (unsigned long) ret_from_kernel_thread;
} else {
*childregs = *current_pt_regs();

2014-07-16 00:11:18

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 33/44] score: normalize global variables exported by vmlinux.lds

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jiang Liu <[email protected]>

commit ae49b83dcacfb69e22092cab688c415c2f2d870c upstream.

Generate the mandatory global variable _sdata in vmlinux.lds.

Signed-off-by: Jiang Liu <[email protected]>
Cc: Chen Liqin <[email protected]>
Cc: Lennox Wu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Cc: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/score/kernel/vmlinux.lds.S | 1 +
1 file changed, 1 insertion(+)

--- a/arch/score/kernel/vmlinux.lds.S
+++ b/arch/score/kernel/vmlinux.lds.S
@@ -49,6 +49,7 @@ SECTIONS
}

. = ALIGN(16);
+ _sdata = .; /* Start of data section */
RODATA

EXCEPTION_TABLE(16)

2014-07-16 00:11:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 34/44] Score: Implement the function csum_ipv6_magic

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lennox Wu <[email protected]>

commit 1ed62ca648557b884d117a4a8bbcf2ae4e2d1153 upstream.

Signed-off-by: Lennox Wu <[email protected]>
Cc: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
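
The changelog above is empty, so for orientation: the rewritten
assembly computes the usual IPv6 pseudo-header checksum, a 32-bit
one's-complement accumulation with end-around carry over both
addresses plus the length and protocol fields. A rough generic-C
sketch of the same computation, assuming the usual helpers and types
from <net/checksum.h> and <linux/in6.h> (an illustration, not the
actual Score code):

        static inline unsigned int add_c(unsigned int sum, unsigned int v)
        {
                sum += v;
                return sum + (sum < v); /* fold the end-around carry in */
        }

        static inline __sum16 csum_ipv6_magic_ref(const struct in6_addr *saddr,
                                                  const struct in6_addr *daddr,
                                                  __u32 len, unsigned short proto,
                                                  __wsum sum)
        {
                unsigned int s = (__force unsigned int)sum;
                int i;

                s = add_c(s, (__force unsigned int)htonl(len));
                s = add_c(s, (__force unsigned int)htonl(proto));
                for (i = 0; i < 4; i++) {
                        s = add_c(s, (__force unsigned int)saddr->s6_addr32[i]);
                        s = add_c(s, (__force unsigned int)daddr->s6_addr32[i]);
                }
                return csum_fold((__force __wsum)s); /* fold 32 -> 16 bits */
        }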

---
arch/score/include/asm/checksum.h | 93 ++++++++++++++++++++------------------
1 file changed, 51 insertions(+), 42 deletions(-)

--- a/arch/score/include/asm/checksum.h
+++ b/arch/score/include/asm/checksum.h
@@ -184,48 +184,57 @@ static inline __sum16 csum_ipv6_magic(co
__wsum sum)
{
__asm__ __volatile__(
- ".set\tnoreorder\t\t\t# csum_ipv6_magic\n\t"
- ".set\tnoat\n\t"
- "addu\t%0, %5\t\t\t# proto (long in network byte order)\n\t"
- "sltu\t$1, %0, %5\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %6\t\t\t# csum\n\t"
- "sltu\t$1, %0, %6\n\t"
- "lw\t%1, 0(%2)\t\t\t# four words source address\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 4(%2)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 8(%2)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 12(%2)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 0(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 4(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 8(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 12(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "addu\t%0, $1\t\t\t# Add final carry\n\t"
- ".set\tnoat\n\t"
- ".set\tnoreorder"
+ ".set\tvolatile\t\t\t# csum_ipv6_magic\n\t"
+ "add\t%0, %0, %5\t\t\t# proto (long in network byte order)\n\t"
+ "cmp.c\t%5, %0\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %6\t\t\t# csum\n\t"
+ "cmp.c\t%6, %0\n\t"
+ "lw\t%1, [%2, 0]\t\t\t# four words source address\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "1:lw\t%1, [%2, 4]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%2,8]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%2, 12]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0,%1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 0]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 4]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 8]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 12]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:\n\t"
+ ".set\toptimize"
: "=r" (sum), "=r" (proto)
: "r" (saddr), "r" (daddr),
"0" (htonl(len)), "1" (htonl(proto)), "r" (sum));

2014-07-16 00:12:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 30/44] rtmutex: Detect changes in the pi lock chain

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit 82084984383babe728e6e3c9a8e5c46278091315 upstream.

When we walk the lock chain, we drop all locks after each step. So the
lock chain can change under us before we reacquire the locks. That's
harmless in principle as we just follow the wrong lock path. But it
can lead to a false positive in the dead lock detection logic:

T0 holds L0
T0 blocks on L1 held by T1
T1 blocks on L2 held by T2
T2 blocks on L3 held by T3
T3 blocks on L4 held by T4

Now we walk the chain

lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
lock T2 -> adjust T2 -> drop locks

T2 times out and blocks on L0

Now we continue:

lock T2 -> lock L0 -> deadlock detected, but it's not a deadlock at all.

Brad tried to work around that in the deadlock detection logic itself,
but the more I looked at it the less I liked it, because it's crystal
ball magic after the fact.

We can actually detect a chain change very simply:

lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->

next_lock = T2->pi_blocked_on->lock;

drop locks

T2 times out and blocks on L0

Now we continue:

lock T2 ->

if (next_lock != T2->pi_blocked_on->lock)
return;

So if we detect that T2 is now blocked on a different lock we stop the
chain walk. That's also correct in the following scenario:

lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->

next_lock = T2->pi_blocked_on->lock;

drop locks

T3 times out and drops L3
T2 acquires L3 and blocks on L4 now

Now we continue:

lock T2 ->

if (next_lock != T2->pi_blocked_on->lock)
return;

We don't have to follow up the chain at that point, because T2
propagated our priority up to T4 already.
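
In code form, the scheme above reduces to the following pattern (a
condensed sketch of what the patch adds, not the complete function):

        /* with task->pi_lock held: remember which lock the next task
         * in the chain is blocked on; NULL ends the walk */
        next_lock = task_blocked_on_lock(task);
        raw_spin_unlock_irqrestore(&task->pi_lock, flags);

        /* all locks dropped here - the chain may change under us */

        raw_spin_lock_irqsave(&task->pi_lock, flags);
        waiter = task->pi_blocked_on;
        if (!waiter || next_lock != waiter->lock)
                goto out_unlock_pi;     /* chain changed, stop the walk */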

[ Folded a cleanup patch from peterz ]

Signed-off-by: Thomas Gleixner <[email protected]>
Reported-by: Brad Mouring <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Galbraith <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/rtmutex.c | 74 +++++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 59 insertions(+), 15 deletions(-)

--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -142,6 +142,11 @@ static void rt_mutex_adjust_prio(struct
*/
int max_lock_depth = 1024;

+static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p)
+{
+ return p->pi_blocked_on ? p->pi_blocked_on->lock : NULL;
+}
+
/*
* Adjust the priority chain. Also used for deadlock detection.
* Decreases task's usage by one - may thus free the task.
@@ -150,6 +155,7 @@ int max_lock_depth = 1024;
static int rt_mutex_adjust_prio_chain(struct task_struct *task,
int deadlock_detect,
struct rt_mutex *orig_lock,
+ struct rt_mutex *next_lock,
struct rt_mutex_waiter *orig_waiter,
struct task_struct *top_task)
{
@@ -208,6 +214,18 @@ static int rt_mutex_adjust_prio_chain(st
goto out_unlock_pi;

/*
+ * We dropped all locks after taking a refcount on @task, so
+ * the task might have moved on in the lock chain or even left
+ * the chain completely and blocks now on an unrelated lock or
+ * on @orig_lock.
+ *
+ * We stored the lock on which @task was blocked in @next_lock,
+ * so we can detect the chain change.
+ */
+ if (next_lock != waiter->lock)
+ goto out_unlock_pi;
+
+ /*
* Drop out, when the task has no waiters. Note,
* top_waiter can be NULL, when we are in the deboosting
* mode!
@@ -293,11 +311,26 @@ static int rt_mutex_adjust_prio_chain(st
__rt_mutex_adjust_prio(task);
}

+ /*
+ * Check whether the task which owns the current lock is pi
+ * blocked itself. If yes we store a pointer to the lock for
+ * the lock chain change detection above. After we dropped
+ * task->pi_lock next_lock cannot be dereferenced anymore.
+ */
+ next_lock = task_blocked_on_lock(task);
+
raw_spin_unlock_irqrestore(&task->pi_lock, flags);

top_waiter = rt_mutex_top_waiter(lock);
raw_spin_unlock(&lock->wait_lock);

+ /*
+ * We reached the end of the lock chain. Stop right here. No
+ * point to go back just to figure that out.
+ */
+ if (!next_lock)
+ goto out_put_task;
+
if (!detect_deadlock && waiter != top_waiter)
goto out_put_task;

@@ -408,8 +441,9 @@ static int task_blocks_on_rt_mutex(struc
{
struct task_struct *owner = rt_mutex_owner(lock);
struct rt_mutex_waiter *top_waiter = waiter;
- unsigned long flags;
+ struct rt_mutex *next_lock;
int chain_walk = 0, res;
+ unsigned long flags;

/*
* Early deadlock detection. We really don't want the task to
@@ -442,20 +476,28 @@ static int task_blocks_on_rt_mutex(struc
if (!owner)
return 0;

+ raw_spin_lock_irqsave(&owner->pi_lock, flags);
if (waiter == rt_mutex_top_waiter(lock)) {
- raw_spin_lock_irqsave(&owner->pi_lock, flags);
plist_del(&top_waiter->pi_list_entry, &owner->pi_waiters);
plist_add(&waiter->pi_list_entry, &owner->pi_waiters);

__rt_mutex_adjust_prio(owner);
if (owner->pi_blocked_on)
chain_walk = 1;
- raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
- }
- else if (debug_rt_mutex_detect_deadlock(waiter, detect_deadlock))
+ } else if (debug_rt_mutex_detect_deadlock(waiter, detect_deadlock)) {
chain_walk = 1;
+ }

- if (!chain_walk)
+ /* Store the lock on which owner is blocked or NULL */
+ next_lock = task_blocked_on_lock(owner);
+
+ raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
+ /*
+ * Even if full deadlock detection is on, if the owner is not
+ * blocked itself, we can avoid finding this out in the chain
+ * walk.
+ */
+ if (!chain_walk || !next_lock)
return 0;

/*
@@ -467,8 +509,8 @@ static int task_blocks_on_rt_mutex(struc

raw_spin_unlock(&lock->wait_lock);

- res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock, waiter,
- task);
+ res = rt_mutex_adjust_prio_chain(owner, detect_deadlock, lock,
+ next_lock, waiter, task);

raw_spin_lock(&lock->wait_lock);

@@ -517,8 +559,8 @@ static void remove_waiter(struct rt_mute
{
int first = (waiter == rt_mutex_top_waiter(lock));
struct task_struct *owner = rt_mutex_owner(lock);
+ struct rt_mutex *next_lock = NULL;
unsigned long flags;
- int chain_walk = 0;

raw_spin_lock_irqsave(&current->pi_lock, flags);
plist_del(&waiter->list_entry, &lock->wait_list);
@@ -542,15 +584,15 @@ static void remove_waiter(struct rt_mute
}
__rt_mutex_adjust_prio(owner);

- if (owner->pi_blocked_on)
- chain_walk = 1;
+ /* Store the lock on which owner is blocked or NULL */
+ next_lock = task_blocked_on_lock(owner);

raw_spin_unlock_irqrestore(&owner->pi_lock, flags);
}

WARN_ON(!plist_node_empty(&waiter->pi_list_entry));

- if (!chain_walk)
+ if (!next_lock)
return;

/* gets dropped in rt_mutex_adjust_prio_chain()! */
@@ -558,7 +600,7 @@ static void remove_waiter(struct rt_mute

raw_spin_unlock(&lock->wait_lock);

- rt_mutex_adjust_prio_chain(owner, 0, lock, NULL, current);
+ rt_mutex_adjust_prio_chain(owner, 0, lock, next_lock, NULL, current);

raw_spin_lock(&lock->wait_lock);
}
@@ -571,6 +613,7 @@ static void remove_waiter(struct rt_mute
void rt_mutex_adjust_pi(struct task_struct *task)
{
struct rt_mutex_waiter *waiter;
+ struct rt_mutex *next_lock;
unsigned long flags;

raw_spin_lock_irqsave(&task->pi_lock, flags);
@@ -580,12 +623,13 @@ void rt_mutex_adjust_pi(struct task_stru
raw_spin_unlock_irqrestore(&task->pi_lock, flags);
return;
}
-
+ next_lock = waiter->lock;
raw_spin_unlock_irqrestore(&task->pi_lock, flags);

/* gets dropped in rt_mutex_adjust_prio_chain()! */
get_task_struct(task);
- rt_mutex_adjust_prio_chain(task, 0, NULL, NULL, task);
+
+ rt_mutex_adjust_prio_chain(task, 0, NULL, next_lock, NULL, task);
}

/**

2014-07-16 00:12:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 03/44] USB: ftdi_sio: Add extra PID.

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Bert Vermeulen <[email protected]>

commit 5a7fbe7e9ea0b1b9d7ffdba64db1faa3a259164c upstream.

This patch adds PID 0x0003 to the VID 0x128d (Testo). At least the
Testo 435-4 uses this, likely other gear as well.

Signed-off-by: Bert Vermeulen <[email protected]>
Cc: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/serial/ftdi_sio.c | 3 ++-
drivers/usb/serial/ftdi_sio_ids.h | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -723,7 +723,8 @@ static struct usb_device_id id_table_com
{ USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) },
- { USB_DEVICE(TESTO_VID, TESTO_USB_INTERFACE_PID) },
+ { USB_DEVICE(TESTO_VID, TESTO_1_PID) },
+ { USB_DEVICE(TESTO_VID, TESTO_3_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_GAMMA_SCOUT_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13M_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_TACTRIX_OPENPORT_13S_PID) },
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -798,7 +798,8 @@
* Submitted by Colin Leroy
*/
#define TESTO_VID 0x128D
-#define TESTO_USB_INTERFACE_PID 0x0001
+#define TESTO_1_PID 0x0001
+#define TESTO_3_PID 0x0003

/*
* Mobility Electronics products.

2014-07-16 00:12:16

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 28/44] ring-buffer: Check if buffer exists before polling

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <[email protected]>

commit 8b8b36834d0fff67fc8668093f4312dd04dcf21d upstream.

The per_cpu buffers are created one per possible CPU. But this does
not mean that those CPUs are online, or that they even exist.

With the addition of ring buffer polling, the code assumes that the
caller polls on an existing buffer. But this is not the case if
the user reads trace_pipe from a CPU that does not exist, and this
causes the kernel to crash.

The simple fix is to check the cpu against the buffer bitmask to see
whether the buffer was allocated, and to return -ENODEV if it was
not.

More updates were done to pass the -ENODEV back up to userspace.
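
Condensed, the two halves of the fix look like this (abbreviated
fragments of the hunks below, not complete functions):

        /* ring_buffer_wait(): refuse buffers that were never allocated */
        if (cpu != RING_BUFFER_ALL_CPUS) {
                if (!cpumask_test_cpu(cpu, buffer->cpumask))
                        return -ENODEV;
                cpu_buffer = buffer->buffers[cpu];
        }

        /* tracing_wait_pipe(): hand the error back to the reader */
        ret = iter->trace->wait_pipe(iter);
        if (ret)
                return ret;     /* a read of trace_pipe now sees -ENODEV */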

Link: http://lkml.kernel.org/r/[email protected]

Reported-by: Sasha Levin <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/ring_buffer.h | 2 +-
kernel/trace/ring_buffer.c | 5 ++++-
kernel/trace/trace.c | 25 ++++++++++++++++++-------
kernel/trace/trace.h | 4 ++--
4 files changed, 25 insertions(+), 11 deletions(-)

--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -97,7 +97,7 @@ __ring_buffer_alloc(unsigned long size,
__ring_buffer_alloc((size), (flags), &__key); \
})

-void ring_buffer_wait(struct ring_buffer *buffer, int cpu);
+int ring_buffer_wait(struct ring_buffer *buffer, int cpu);
int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu,
struct file *filp, poll_table *poll_table);

--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -543,7 +543,7 @@ static void rb_wake_up_waiters(struct ir
* as data is added to any of the @buffer's cpu buffers. Otherwise
* it will wait for data to be added to a specific cpu buffer.
*/
-void ring_buffer_wait(struct ring_buffer *buffer, int cpu)
+int ring_buffer_wait(struct ring_buffer *buffer, int cpu)
{
struct ring_buffer_per_cpu *cpu_buffer;
DEFINE_WAIT(wait);
@@ -557,6 +557,8 @@ void ring_buffer_wait(struct ring_buffer
if (cpu == RING_BUFFER_ALL_CPUS)
work = &buffer->irq_work;
else {
+ if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ return -ENODEV;
cpu_buffer = buffer->buffers[cpu];
work = &cpu_buffer->irq_work;
}
@@ -591,6 +593,7 @@ void ring_buffer_wait(struct ring_buffer
schedule();

finish_wait(&work->waiters, &wait);
+ return 0;
}

/**
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1027,13 +1027,13 @@ update_max_tr_single(struct trace_array
}
#endif /* CONFIG_TRACER_MAX_TRACE */

-static void default_wait_pipe(struct trace_iterator *iter)
+static int default_wait_pipe(struct trace_iterator *iter)
{
/* Iterators are static, they should be filled or empty */
if (trace_buffer_iter(iter, iter->cpu_file))
- return;
+ return 0;

- ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file);
+ return ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file);
}

#ifdef CONFIG_FTRACE_STARTUP_TEST
@@ -4054,17 +4054,19 @@ tracing_poll_pipe(struct file *filp, pol
*
* Anyway, this is really very primitive wakeup.
*/
-void poll_wait_pipe(struct trace_iterator *iter)
+int poll_wait_pipe(struct trace_iterator *iter)
{
set_current_state(TASK_INTERRUPTIBLE);
/* sleep for 100 msecs, and try again. */
schedule_timeout(HZ / 10);
+ return 0;
}

/* Must be called with trace_types_lock mutex held. */
static int tracing_wait_pipe(struct file *filp)
{
struct trace_iterator *iter = filp->private_data;
+ int ret;

while (trace_empty(iter)) {

@@ -4074,10 +4076,13 @@ static int tracing_wait_pipe(struct file

mutex_unlock(&iter->mutex);

- iter->trace->wait_pipe(iter);
+ ret = iter->trace->wait_pipe(iter);

mutex_lock(&iter->mutex);

+ if (ret)
+ return ret;
+
if (signal_pending(current))
return -EINTR;

@@ -5011,8 +5016,12 @@ tracing_buffers_read(struct file *filp,
goto out_unlock;
}
mutex_unlock(&trace_types_lock);
- iter->trace->wait_pipe(iter);
+ ret = iter->trace->wait_pipe(iter);
mutex_lock(&trace_types_lock);
+ if (ret) {
+ size = ret;
+ goto out_unlock;
+ }
if (signal_pending(current)) {
size = -EINTR;
goto out_unlock;
@@ -5224,8 +5233,10 @@ tracing_buffers_splice_read(struct file
goto out;
}
mutex_unlock(&trace_types_lock);
- iter->trace->wait_pipe(iter);
+ ret = iter->trace->wait_pipe(iter);
mutex_lock(&trace_types_lock);
+ if (ret)
+ goto out;
if (signal_pending(current)) {
ret = -EINTR;
goto out;
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -342,7 +342,7 @@ struct tracer {
void (*stop)(struct trace_array *tr);
void (*open)(struct trace_iterator *iter);
void (*pipe_open)(struct trace_iterator *iter);
- void (*wait_pipe)(struct trace_iterator *iter);
+ int (*wait_pipe)(struct trace_iterator *iter);
void (*close)(struct trace_iterator *iter);
void (*pipe_close)(struct trace_iterator *iter);
ssize_t (*read)(struct trace_iterator *iter,
@@ -557,7 +557,7 @@ void trace_init_global_iter(struct trace

void tracing_iter_reset(struct trace_iterator *iter, int cpu);

-void poll_wait_pipe(struct trace_iterator *iter);
+int poll_wait_pipe(struct trace_iterator *iter);

void ftrace(struct trace_array *tr,
struct trace_array_cpu *data,

2014-07-16 00:12:11

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 29/44] rtmutex: Fix deadlock detector for real

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit 397335f004f41e5fcf7a795e94eb3ab83411a17c upstream.

The current deadlock detection logic does not work reliably due to the
following early exit path:

/*
* Drop out, when the task has no waiters. Note,
* top_waiter can be NULL, when we are in the deboosting
* mode!
*/
if (top_waiter && (!task_has_pi_waiters(task) ||
top_waiter != task_top_pi_waiter(task)))
goto out_unlock_pi;

So this not only exits when the task has no waiters, it also exits
unconditionally when the current waiter is not the top priority waiter
of the task.

So in a nested locking scenario, it might abort the lock chain walk
and therefore miss a potential deadlock.

Simple fix: Continue the chain walk, when deadlock detection is
enabled.

We also avoid the whole enqueue, if we detect the deadlock right away
(A-A). It's an optimization, but also prevents that another waiter who
comes in after the detection and before the task has undone the damage
observes the situation and detects the deadlock and returns
-EDEADLOCK, which is wrong as the other task is not in a deadlock
situation.
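
Schematically, the case being short-circuited is the trivial
self-deadlock; an illustration only, using the 3.10
rt_mutex_timed_lock() signature with deadlock detection enabled:

        rt_mutex_lock(&m);                      /* task T now owns m */
        ret = rt_mutex_timed_lock(&m, NULL, 1); /* T blocks on m held by
                                                   T itself; the new check
                                                   returns -EDEADLK before
                                                   T is enqueued as a
                                                   waiter on its own lock */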

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Reviewed-by: Steven Rostedt <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Mike Galbraith <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/rtmutex.c | 32 ++++++++++++++++++++++++++++----
1 file changed, 28 insertions(+), 4 deletions(-)

--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -212,9 +212,16 @@ static int rt_mutex_adjust_prio_chain(st
* top_waiter can be NULL, when we are in the deboosting
* mode!
*/
- if (top_waiter && (!task_has_pi_waiters(task) ||
- top_waiter != task_top_pi_waiter(task)))
- goto out_unlock_pi;
+ if (top_waiter) {
+ if (!task_has_pi_waiters(task))
+ goto out_unlock_pi;
+ /*
+ * If deadlock detection is off, we stop here if we
+ * are not the top pi waiter of the task.
+ */
+ if (!detect_deadlock && top_waiter != task_top_pi_waiter(task))
+ goto out_unlock_pi;
+ }

/*
* When deadlock detection is off then we check, if further
@@ -230,7 +237,12 @@ static int rt_mutex_adjust_prio_chain(st
goto retry;
}

- /* Deadlock detection */
+ /*
+ * Deadlock detection. If the lock is the same as the original
+ * lock which caused us to walk the lock chain or if the
+ * current lock is owned by the task which initiated the chain
+ * walk, we detected a deadlock.
+ */
if (lock == orig_lock || rt_mutex_owner(lock) == top_task) {
debug_rt_mutex_deadlock(deadlock_detect, orig_waiter, lock);
raw_spin_unlock(&lock->wait_lock);
@@ -399,6 +411,18 @@ static int task_blocks_on_rt_mutex(struc
unsigned long flags;
int chain_walk = 0, res;

+ /*
+ * Early deadlock detection. We really don't want the task to
+ * enqueue on itself just to untangle the mess later. It's not
+ * only an optimization. We drop the locks, so another waiter
+ * can come in before the chain walk detects the deadlock. So
+ * the other will detect the deadlock and return -EDEADLOCK,
+ * which is wrong, as the other waiter is not in a deadlock
+ * situation.
+ */
+ if (detect_deadlock && owner == task)
+ return -EDEADLK;
+
raw_spin_lock_irqsave(&task->pi_lock, flags);
__rt_mutex_adjust_prio(task);
waiter->task = task;

2014-07-16 00:12:19

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 26/44] drm/radeon: fix typo in golden register setup on evergreen

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alex Deucher <[email protected]>

commit 6abafb78f9881b4891baf74ab4e9f090ae45230e upstream.

Fixes hangs on driver load on some cards.

bug:
https://bugs.freedesktop.org/show_bug.cgi?id=76998

Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/radeon/evergreen.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -94,7 +94,7 @@ static const u32 evergreen_golden_regist
0x8c1c, 0xffffffff, 0x00001010,
0x28350, 0xffffffff, 0x00000000,
0xa008, 0xffffffff, 0x00010000,
- 0x5cc, 0xffffffff, 0x00000001,
+ 0x5c4, 0xffffffff, 0x00000001,
0x9508, 0xffffffff, 0x00000002,
0x913c, 0x0000000f, 0x0000000a
};
@@ -381,7 +381,7 @@ static const u32 cedar_golden_registers[
0x8c1c, 0xffffffff, 0x00001010,
0x28350, 0xffffffff, 0x00000000,
0xa008, 0xffffffff, 0x00010000,
- 0x5cc, 0xffffffff, 0x00000001,
+ 0x5c4, 0xffffffff, 0x00000001,
0x9508, 0xffffffff, 0x00000002
};

@@ -540,7 +540,7 @@ static const u32 juniper_mgcg_init[] =
static const u32 supersumo_golden_registers[] =
{
0x5eb4, 0xffffffff, 0x00000002,
- 0x5cc, 0xffffffff, 0x00000001,
+ 0x5c4, 0xffffffff, 0x00000001,
0x7030, 0xffffffff, 0x00000011,
0x7c30, 0xffffffff, 0x00000011,
0x6104, 0x01000300, 0x00000000,
@@ -624,7 +624,7 @@ static const u32 sumo_golden_registers[]
static const u32 wrestler_golden_registers[] =
{
0x5eb4, 0xffffffff, 0x00000002,
- 0x5cc, 0xffffffff, 0x00000001,
+ 0x5c4, 0xffffffff, 0x00000001,
0x7030, 0xffffffff, 0x00000011,
0x7c30, 0xffffffff, 0x00000011,
0x6104, 0x01000300, 0x00000000,

2014-07-16 00:12:08

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 31/44] rtmutex: Handle deadlock detection smarter

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit 3d5c9340d1949733eb37616abd15db36aef9a57c upstream.

Even in the case when deadlock detection is not requested by the
caller, we can detect deadlocks. Right now the code stops the lock
chain walk and keeps the waiter enqueued, even on itself. Silly not to
yell when such a scenario is detected and to keep the waiter enqueued.

Return -EDEADLK unconditionally and handle it at the call sites.

The futex calls return -EDEADLK. The non futex ones dequeue the
waiter, throw a warning and put the task into a schedule loop.

Tagged for stable as it makes the code more robust.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Brad Mouring <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Mike Galbraith <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/rtmutex-debug.h | 5 +++++
kernel/rtmutex.c | 33 ++++++++++++++++++++++++++++-----
kernel/rtmutex.h | 5 +++++
3 files changed, 38 insertions(+), 5 deletions(-)

--- a/kernel/rtmutex-debug.h
+++ b/kernel/rtmutex-debug.h
@@ -31,3 +31,8 @@ static inline int debug_rt_mutex_detect_
{
return (waiter != NULL);
}
+
+static inline void rt_mutex_print_deadlock(struct rt_mutex_waiter *w)
+{
+ debug_rt_mutex_print_deadlock(w);
+}
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -189,7 +189,7 @@ static int rt_mutex_adjust_prio_chain(st
}
put_task_struct(task);

- return deadlock_detect ? -EDEADLK : 0;
+ return -EDEADLK;
}
retry:
/*
@@ -264,7 +264,7 @@ static int rt_mutex_adjust_prio_chain(st
if (lock == orig_lock || rt_mutex_owner(lock) == top_task) {
debug_rt_mutex_deadlock(deadlock_detect, orig_waiter, lock);
raw_spin_unlock(&lock->wait_lock);
- ret = deadlock_detect ? -EDEADLK : 0;
+ ret = -EDEADLK;
goto out_unlock_pi;
}

@@ -454,7 +454,7 @@ static int task_blocks_on_rt_mutex(struc
* which is wrong, as the other waiter is not in a deadlock
* situation.
*/
- if (detect_deadlock && owner == task)
+ if (owner == task)
return -EDEADLK;

raw_spin_lock_irqsave(&task->pi_lock, flags);
@@ -681,6 +681,26 @@ __rt_mutex_slowlock(struct rt_mutex *loc
return ret;
}

+static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
+ struct rt_mutex_waiter *w)
+{
+ /*
+ * If the result is not -EDEADLOCK or the caller requested
+ * deadlock detection, nothing to do here.
+ */
+ if (res != -EDEADLOCK || detect_deadlock)
+ return;
+
+ /*
+ * Yell lowdly and stop the task right here.
+ */
+ rt_mutex_print_deadlock(w);
+ while (1) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule();
+ }
+}
+
/*
* Slow path lock function:
*/
@@ -718,8 +738,10 @@ rt_mutex_slowlock(struct rt_mutex *lock,

set_current_state(TASK_RUNNING);

- if (unlikely(ret))
+ if (unlikely(ret)) {
remove_waiter(lock, &waiter);
+ rt_mutex_handle_deadlock(ret, detect_deadlock, &waiter);
+ }

/*
* try_to_take_rt_mutex() sets the waiter bit
@@ -1027,7 +1049,8 @@ int rt_mutex_start_proxy_lock(struct rt_
return 1;
}

- ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);
+ /* We enforce deadlock detection for futexes */
+ ret = task_blocks_on_rt_mutex(lock, waiter, task, 1);

if (ret && !rt_mutex_owner(lock)) {
/*
--- a/kernel/rtmutex.h
+++ b/kernel/rtmutex.h
@@ -24,3 +24,8 @@
#define debug_rt_mutex_print_deadlock(w) do { } while (0)
#define debug_rt_mutex_detect_deadlock(w,d) (d)
#define debug_rt_mutex_reset_waiter(w) do { } while (0)
+
+static inline void rt_mutex_print_deadlock(struct rt_mutex_waiter *w)
+{
+ WARN(1, "rtmutex deadlock detected\n");
+}

2014-07-16 00:11:56

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 32/44] rtmutex: Plug slow unlock race

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit 27e35715df54cbc4f2d044f681802ae30479e7fb upstream.

When the rtmutex fast path is enabled the slow unlock function can
create the following situation:

spin_lock(foo->m->wait_lock);
foo->m->owner = NULL;
rt_mutex_lock(foo->m); <-- fast path
free = atomic_dec_and_test(foo->refcnt);
rt_mutex_unlock(foo->m); <-- fast path
if (free)
kfree(foo);

spin_unlock(foo->m->wait_lock); <--- Use after free.

Plug the race by changing the slow unlock to the following scheme:

while (!rt_mutex_has_waiters(m)) {
/* Clear the waiters bit in m->owner */
clear_rt_mutex_waiters(m);
owner = rt_mutex_owner(m);
spin_unlock(m->wait_lock);
if (cmpxchg(m->owner, owner, 0) == owner)
return;
spin_lock(m->wait_lock);
}

So in case of a new waiter incoming while the owner tries the slow
path unlock we have two situations:

unlock(wait_lock);
lock(wait_lock);
cmpxchg(p, owner, 0) == owner
mark_rt_mutex_waiters(lock);
acquire(lock);

Or:

unlock(wait_lock);
lock(wait_lock);
mark_rt_mutex_waiters(lock);
cmpxchg(p, owner, 0) != owner
enqueue_waiter();
unlock(wait_lock);
lock(wait_lock);
wakeup_next waiter();
unlock(wait_lock);
lock(wait_lock);
acquire(lock);

If the fast path is disabled, then the simple

m->owner = NULL;
unlock(m->wait_lock);

is sufficient as all access to m->owner is serialized via
m->wait_lock;

Also document and clarify the wakeup_next_waiter function as suggested
by Oleg Nesterov.

Reported-by: Steven Rostedt <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Steven Rostedt <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Mike Galbraith <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/rtmutex.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 109 insertions(+), 6 deletions(-)

--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -82,6 +82,47 @@ static inline void mark_rt_mutex_waiters
owner = *p;
} while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner);
}
+
+/*
+ * Safe fastpath aware unlock:
+ * 1) Clear the waiters bit
+ * 2) Drop lock->wait_lock
+ * 3) Try to unlock the lock with cmpxchg
+ */
+static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
+ __releases(lock->wait_lock)
+{
+ struct task_struct *owner = rt_mutex_owner(lock);
+
+ clear_rt_mutex_waiters(lock);
+ raw_spin_unlock(&lock->wait_lock);
+ /*
+ * If a new waiter comes in between the unlock and the cmpxchg
+ * we have two situations:
+ *
+ * unlock(wait_lock);
+ * lock(wait_lock);
+ * cmpxchg(p, owner, 0) == owner
+ * mark_rt_mutex_waiters(lock);
+ * acquire(lock);
+ * or:
+ *
+ * unlock(wait_lock);
+ * lock(wait_lock);
+ * mark_rt_mutex_waiters(lock);
+ *
+ * cmpxchg(p, owner, 0) != owner
+ * enqueue_waiter();
+ * unlock(wait_lock);
+ * lock(wait_lock);
+ * wake waiter();
+ * unlock(wait_lock);
+ * lock(wait_lock);
+ * acquire(lock);
+ */
+ return rt_mutex_cmpxchg(lock, owner, NULL);
+}
+
#else
# define rt_mutex_cmpxchg(l,c,n) (0)
static inline void mark_rt_mutex_waiters(struct rt_mutex *lock)
@@ -89,6 +130,17 @@ static inline void mark_rt_mutex_waiters
lock->owner = (struct task_struct *)
((unsigned long)lock->owner | RT_MUTEX_HAS_WAITERS);
}
+
+/*
+ * Simple slow path only version: lock->owner is protected by lock->wait_lock.
+ */
+static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock)
+ __releases(lock->wait_lock)
+{
+ lock->owner = NULL;
+ raw_spin_unlock(&lock->wait_lock);
+ return true;
+}
#endif

/*
@@ -520,7 +572,8 @@ static int task_blocks_on_rt_mutex(struc
/*
* Wake up the next waiter on the lock.
*
- * Remove the top waiter from the current tasks waiter list and wake it up.
+ * Remove the top waiter from the current tasks pi waiter list and
+ * wake it up.
*
* Called with lock->wait_lock held.
*/
@@ -541,10 +594,23 @@ static void wakeup_next_waiter(struct rt
*/
plist_del(&waiter->pi_list_entry, &current->pi_waiters);

- rt_mutex_set_owner(lock, NULL);
+ /*
+ * As we are waking up the top waiter, and the waiter stays
+ * queued on the lock until it gets the lock, this lock
+ * obviously has waiters. Just set the bit here and this has
+ * the added benefit of forcing all new tasks into the
+ * slow path making sure no task of lower priority than
+ * the top waiter can steal this lock.
+ */
+ lock->owner = (void *) RT_MUTEX_HAS_WAITERS;

raw_spin_unlock_irqrestore(&current->pi_lock, flags);

+ /*
+ * It's safe to dereference waiter as it cannot go away as
+ * long as we hold lock->wait_lock. The waiter task needs to
+ * acquire it in order to dequeue the waiter.
+ */
wake_up_process(waiter->task);
}

@@ -797,12 +863,49 @@ rt_mutex_slowunlock(struct rt_mutex *loc

rt_mutex_deadlock_account_unlock(current);

- if (!rt_mutex_has_waiters(lock)) {
- lock->owner = NULL;
- raw_spin_unlock(&lock->wait_lock);
- return;
+ /*
+ * We must be careful here if the fast path is enabled. If we
+ * have no waiters queued we cannot set owner to NULL here
+ * because of:
+ *
+ * foo->lock->owner = NULL;
+ * rtmutex_lock(foo->lock); <- fast path
+ * free = atomic_dec_and_test(foo->refcnt);
+ * rtmutex_unlock(foo->lock); <- fast path
+ * if (free)
+ * kfree(foo);
+ * raw_spin_unlock(foo->lock->wait_lock);
+ *
+ * So for the fastpath enabled kernel:
+ *
+ * Nothing can set the waiters bit as long as we hold
+ * lock->wait_lock. So we do the following sequence:
+ *
+ * owner = rt_mutex_owner(lock);
+ * clear_rt_mutex_waiters(lock);
+ * raw_spin_unlock(&lock->wait_lock);
+ * if (cmpxchg(&lock->owner, owner, 0) == owner)
+ * return;
+ * goto retry;
+ *
+ * The fastpath disabled variant is simple as all access to
+ * lock->owner is serialized by lock->wait_lock:
+ *
+ * lock->owner = NULL;
+ * raw_spin_unlock(&lock->wait_lock);
+ */
+ while (!rt_mutex_has_waiters(lock)) {
+ /* Drops lock->wait_lock ! */
+ if (unlock_rt_mutex_safe(lock) == true)
+ return;
+ /* Relock the rtmutex and try again */
+ raw_spin_lock(&lock->wait_lock);
}

+ /*
+ * The wakeup next waiter path does not suffer from the above
+ * race. See the comments there.
+ */
wakeup_next_waiter(lock);

raw_spin_unlock(&lock->wait_lock);

2014-07-16 00:13:47

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 25/44] ext4: disable synchronous transaction batching if max_batch_time==0

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eric Sandeen <[email protected]>

commit 5dd214248f94d430d70e9230bda72f2654ac88a8 upstream.

The mount manpage says of the max_batch_time option,

This optimization can be turned off entirely
by setting max_batch_time to 0.

But the code doesn't do that. So fix the code to do
that.

Signed-off-by: Eric Sandeen <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/ext4/super.c | 2 --
fs/jbd2/transaction.c | 5 ++++-
2 files changed, 4 insertions(+), 3 deletions(-)

--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1483,8 +1483,6 @@ static int handle_mount_opt(struct super
arg = JBD2_DEFAULT_MAX_COMMIT_AGE;
sbi->s_commit_interval = HZ * arg;
} else if (token == Opt_max_batch_time) {
- if (arg == 0)
- arg = EXT4_DEF_MAX_BATCH_TIME;
sbi->s_max_batch_time = arg;
} else if (token == Opt_min_batch_time) {
sbi->s_min_batch_time = arg;
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -1442,9 +1442,12 @@ int jbd2_journal_stop(handle_t *handle)
* to perform a synchronous write. We do this to detect the
* case where a single process is doing a stream of sync
* writes. No point in waiting for joiners in that case.
+ *
+ * Setting max_batch_time to 0 disables this completely.
*/
pid = current->pid;
- if (handle->h_sync && journal->j_last_sync_writer != pid) {
+ if (handle->h_sync && journal->j_last_sync_writer != pid &&
+ journal->j_max_batch_time) {
u64 commit_time, trans_time;

journal->j_last_sync_writer = pid;

2014-07-16 00:14:25

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 23/44] ext4: fix unjournalled bg descriptor while initializing inode bitmap

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Theodore Ts'o <[email protected]>

commit 61c219f5814277ecb71d64cb30297028d6665979 upstream.

The first time that we allocate from an uninitialized inode allocation
bitmap, if the block allocation bitmap is also uninitialized, we need
to get write access to the block group descriptor before we start
modifying the block group descriptor flags and updating the free block
count, etc. Otherwise, there is the potential of a bad journal
checksum (if journal checksums are enabled), and of the file system
becoming inconsistent if we crash at exactly the wrong time.
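
The fix is purely a matter of ordering against the usual JBD2
discipline: declare write intent on a buffer before modifying it, then
mark it dirty within the same handle. Schematically (simplified from
the hunks below; the descriptor update shown is just an example):

        BUFFER_TRACE(group_desc_bh, "get_write_access");
        err = ext4_journal_get_write_access(handle, group_desc_bh);
        if (err) {
                ext4_std_error(sb, err);
                goto out;
        }

        /* only now is it safe to modify the descriptor ... */
        gdp->bg_flags &= cpu_to_le16(~EXT4_BG_INODE_UNINIT);

        /* ... so the change is covered by the journalled buffer */
        err = ext4_handle_dirty_metadata(handle, NULL, group_desc_bh);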

Signed-off-by: Theodore Ts'o <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/ext4/ialloc.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -780,6 +780,13 @@ got:
goto out;
}

+ BUFFER_TRACE(group_desc_bh, "get_write_access");
+ err = ext4_journal_get_write_access(handle, group_desc_bh);
+ if (err) {
+ ext4_std_error(sb, err);
+ goto out;
+ }
+
/* We may have to initialize the block bitmap if it isn't already */
if (ext4_has_group_desc_csum(sb) &&
gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
@@ -816,13 +823,6 @@ got:
}
}

- BUFFER_TRACE(group_desc_bh, "get_write_access");
- err = ext4_journal_get_write_access(handle, group_desc_bh);
- if (err) {
- ext4_std_error(sb, err);
- goto out;
- }
-
/* Update the relevant bg descriptor fields */
if (ext4_has_group_desc_csum(sb)) {
int free;

2014-07-16 00:14:24

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 24/44] ext4: clarify error count warning messages

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Theodore Ts'o <[email protected]>

commit ae0f78de2c43b6fadd007c231a352b13b5be8ed2 upstream.

Make it clear that the values printed are times, and that the count is
of errors since the last fsck. Also add a note about the fsck version
required.

Signed-off-by: Pavel Machek <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]>
Reviewed-by: Andreas Dilger <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/ext4/super.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2687,10 +2687,11 @@ static void print_daily_error_info(unsig
es = sbi->s_es;

if (es->s_error_count)
- ext4_msg(sb, KERN_NOTICE, "error count: %u",
+ /* fsck newer than v1.41.13 is needed to clean this condition. */
+ ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u",
le32_to_cpu(es->s_error_count));
if (es->s_first_error_time) {
- printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d",
+ printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d",
sb->s_id, le32_to_cpu(es->s_first_error_time),
(int) sizeof(es->s_first_error_func),
es->s_first_error_func,
@@ -2704,7 +2705,7 @@ static void print_daily_error_info(unsig
printk("\n");
}
if (es->s_last_error_time) {
- printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d",
+ printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d",
sb->s_id, le32_to_cpu(es->s_last_error_time),
(int) sizeof(es->s_last_error_func),
es->s_last_error_func,

2014-07-16 00:15:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 21/44] Drivers: hv: vmbus: Fix a bug in the channel callback dispatch code

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: "K. Y. Srinivasan" <[email protected]>

commit affb1aff300ddee54df307812b38f166e8a865ef upstream.

Starting with Win8, we have implemented several optimizations to improve the
scalability and performance of the VMBUS transport between the Host and the
Guest. Some of the non-performance critical services cannot leverage these
optimizations, since they only read and process one message at a time.
Make adjustments to the callback dispatch code to account for the way
non-performance critical drivers handle reading of the channel.

Signed-off-by: K. Y. Srinivasan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hv/connection.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -304,9 +304,13 @@ static void process_chn_event(u32 relid)
*/

do {
- hv_begin_read(&channel->inbound);
+ if (read_state)
+ hv_begin_read(&channel->inbound);
channel->onchannel_callback(arg);
- bytes_to_read = hv_end_read(&channel->inbound);
+ if (read_state)
+ bytes_to_read = hv_end_read(&channel->inbound);
+ else
+ bytes_to_read = 0;
} while (read_state && (bytes_to_read != 0));
} else {
pr_err("no channel callback for relid - %u\n", relid);

2014-07-16 00:15:08

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 20/44] clk: spear3xx: Use proper control register offset

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit 15ebb05248d025534773c9ef64915bd888f04e4b upstream.

The control register is at offset 0x10, not 0x0. This has been broken
since commit 5df33a62c (SPEAr: Switch to common clock framework).

Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Mike Turquette <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/clk/spear/spear3xx_clock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/clk/spear/spear3xx_clock.c
+++ b/drivers/clk/spear/spear3xx_clock.c
@@ -211,7 +211,7 @@ static inline void spear310_clk_init(voi
/* array of all spear 320 clock lookups */
#ifdef CONFIG_MACH_SPEAR320

-#define SPEAR320_CONTROL_REG (soc_config_base + 0x0000)
+#define SPEAR320_CONTROL_REG (soc_config_base + 0x0010)
#define SPEAR320_EXT_CTRL_REG (soc_config_base + 0x0018)

#define SPEAR320_UARTX_PCLK_MASK 0x1

2014-07-16 00:15:01

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 22/44] dm io: fix a race condition in the wake up code for sync_io

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Joe Thornber <[email protected]>

commit 10f1d5d111e8aed46a0f1179faf9a3cf422f689e upstream.

There's a race condition between the atomic_dec_and_test(&io->count)
in dec_count() and the waking of the sync_io() thread. If the thread
is spuriously woken immediately after the decrement it may exit,
making the on stack io struct invalid, yet the dec_count could still
be using it.

Fix this race by using a completion in sync_io() and dec_count().
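
The completion replaces the open-coded sleep with a wake-up that the
waiter cannot miss and cannot return from early. In outline (condensed
from the patch below):

        /* sync_io(): wait on an on-stack completion ... */
        DECLARE_COMPLETION_ONSTACK(wait);

        io->wait = &wait;               /* instead of io->sleeper = current */
        dispatch_io(rw, num_regions, where, dp, io, 1);
        wait_for_completion_io(&wait);  /* no spurious early return */

        /* ... and dec_count(), on dropping the last reference: */
        if (io->wait)
                complete(io->wait);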

Reported-by: Minfei Huang <[email protected]>
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Acked-by: Mikulas Patocka <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/md/dm-io.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)

--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -10,6 +10,7 @@
#include <linux/device-mapper.h>

#include <linux/bio.h>
+#include <linux/completion.h>
#include <linux/mempool.h>
#include <linux/module.h>
#include <linux/sched.h>
@@ -34,7 +35,7 @@ struct dm_io_client {
struct io {
unsigned long error_bits;
atomic_t count;
- struct task_struct *sleeper;
+ struct completion *wait;
struct dm_io_client *client;
io_notify_fn callback;
void *context;
@@ -122,8 +123,8 @@ static void dec_count(struct io *io, uns
invalidate_kernel_vmap_range(io->vma_invalidate_address,
io->vma_invalidate_size);

- if (io->sleeper)
- wake_up_process(io->sleeper);
+ if (io->wait)
+ complete(io->wait);

else {
unsigned long r = io->error_bits;
@@ -386,6 +387,7 @@ static int sync_io(struct dm_io_client *
*/
volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1];
struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io));
+ DECLARE_COMPLETION_ONSTACK(wait);

if (num_regions > 1 && (rw & RW_MASK) != WRITE) {
WARN_ON(1);
@@ -394,7 +396,7 @@ static int sync_io(struct dm_io_client *

io->error_bits = 0;
atomic_set(&io->count, 1); /* see dispatch_io() */
- io->sleeper = current;
+ io->wait = &wait;
io->client = client;

io->vma_invalidate_address = dp->vma_invalidate_address;
@@ -402,15 +404,7 @@ static int sync_io(struct dm_io_client *

dispatch_io(rw, num_regions, where, dp, io, 1);

- while (1) {
- set_current_state(TASK_UNINTERRUPTIBLE);
-
- if (!atomic_read(&io->count))
- break;
-
- io_schedule();
- }
- set_current_state(TASK_RUNNING);
+ wait_for_completion_io(&wait);

if (error_bits)
*error_bits = io->error_bits;
@@ -433,7 +427,7 @@ static int async_io(struct dm_io_client
io = mempool_alloc(client->pool, GFP_NOIO);
io->error_bits = 0;
atomic_set(&io->count, 1); /* see dispatch_io() */
- io->sleeper = NULL;
+ io->wait = NULL;
io->client = client;
io->callback = fn;
io->context = context;

2014-07-16 00:15:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 02/44] USB: cp210x: add support for Corsair usb dongle

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andras Kovacs <[email protected]>

commit b9326057a3d8447f5d2e74a7b521ccf21add2ec0 upstream.

Corsair USB Dongles are shipped with Corsair AXi series PSUs.
These are cp210x serial USB devices, so make the driver detect them.
I have a program that can get information from these PSUs.

Tested with 2 different dongles shipped with Corsair AX860i and
AX1200i units.

Signed-off-by: Andras Kovacs <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/serial/cp210x.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/usb/serial/cp210x.c
+++ b/drivers/usb/serial/cp210x.c
@@ -153,6 +153,7 @@ static const struct usb_device_id id_tab
{ USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
{ USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
{ USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
{ USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */
{ USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */

2014-07-16 00:15:59

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 18/44] crypto: sha512_ssse3 - fix byte count to bit count conversion

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jussi Kivilinna <[email protected]>

commit cfe82d4f45c7cc39332a2be7c4c1d3bf279bbd3d upstream.

Byte-to-bit-count computation is only partly converted to big-endian and is
mixing in CPU-endian values. The problem was noticed by sparse with this warning:

CHECK arch/x86/crypto/sha512_ssse3_glue.c
arch/x86/crypto/sha512_ssse3_glue.c:144:19: warning: restricted __be64 degrades to integer
arch/x86/crypto/sha512_ssse3_glue.c:144:17: warning: incorrect type in assignment (different base types)
arch/x86/crypto/sha512_ssse3_glue.c:144:17: expected restricted __be64 <noident>
arch/x86/crypto/sha512_ssse3_glue.c:144:17: got unsigned long long

Signed-off-by: Jussi Kivilinna <[email protected]>
Acked-by: Tim Chen <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/crypto/sha512_ssse3_glue.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -141,7 +141,7 @@ static int sha512_ssse3_final(struct sha

/* save number of bits */
bits[1] = cpu_to_be64(sctx->count[0] << 3);
- bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61;
+ bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61);

/* Pad out to 112 mod 128 and append length */
index = sctx->count[0] & 0x7f;
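
The bug is an operator-precedence/endianness mix-up: the three bits shifted
out of the low 64-bit word must be OR-ed into the high word *before* the
byte swap, not after. A standalone illustration of the arithmetic (plain
user-space C with arbitrary example values, not the kernel code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* 128-bit byte count in count[1]:count[0]; << 3 converts bytes to bits. */
	uint64_t count[2] = { 0xA000000000000001ULL, 0x1ULL };

	uint64_t hi = (count[1] << 3) | (count[0] >> 61);	/* carry the top 3 bits */
	uint64_t lo = count[0] << 3;

	/* The kernel then byte-swaps the already-combined values:
	 * bits[0] = cpu_to_be64(hi); bits[1] = cpu_to_be64(lo);
	 * OR-ing the carry in after the swap, as the old code did,
	 * mixes a CPU-endian value into a big-endian one. */
	printf("bit count = 0x%016llx%016llx\n",
	       (unsigned long long)hi, (unsigned long long)lo);
	return 0;
}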

2014-07-16 00:16:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 19/44] arm64: implement TASK_SIZE_OF

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Colin Cross <[email protected]>

commit fa2ec3ea10bd377f9d55772b1dab65178425a1a2 upstream.

include/linux/sched.h implements TASK_SIZE_OF as TASK_SIZE if it
is not set by the architecture headers. TASK_SIZE uses the
current task to determine the size of the virtual address space.
On a 64-bit kernel this will cause reading /proc/pid/pagemap of a
64-bit process from a 32-bit process to return EOF when it reads
past 0xffffffff.

Implement TASK_SIZE_OF exactly like TASK_SIZE, using
test_tsk_thread_flag() instead of test_thread_flag().

Signed-off-by: Colin Cross <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/include/asm/memory.h | 2 ++
1 file changed, 2 insertions(+)

--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -51,6 +51,8 @@
#define TASK_SIZE_32 UL(0x100000000)
#define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \
TASK_SIZE_32 : TASK_SIZE_64)
+#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \
+ TASK_SIZE_32 : TASK_SIZE_64)
#else
#define TASK_SIZE TASK_SIZE_64
#endif /* CONFIG_COMPAT */
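
For context, the generic fallback this overrides looks essentially like the
following (paraphrased from include/linux/sched.h of that era; it silently
sizes any task by the *calling* task's limit):

#ifndef TASK_SIZE_OF
#define TASK_SIZE_OF(tsk)	TASK_SIZE	/* uses current, not tsk */
#endif

With the arch override above, a 32-bit task reading a 64-bit task's pagemap
sees the target's TASK_SIZE_64 instead of its own 4 GiB limit.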

2014-07-16 00:16:41

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 17/44] cpufreq: Makefile: fix compilation for davinci platform

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Prabhakar Lad <[email protected]>

commit 5a90af67c2126fe1d04ebccc1f8177e6ca70d3a9 upstream.

Since commit 8a7b1227e303 (cpufreq: davinci: move cpufreq driver to
drivers/cpufreq), the driver is built only for CONFIG_ARCH_DAVINCI_DA850,
whereas the davinci_cpufreq_init() call is used by all davinci platforms.

This patch fixes the following build error:

arch/arm/mach-davinci/built-in.o: In function `davinci_init_late':
:(.init.text+0x928): undefined reference to `davinci_cpufreq_init'
make: *** [vmlinux] Error 1

Fixes: 8a7b1227e303 (cpufreq: davinci: move cpufreq driver to drivers/cpufreq)
Signed-off-by: Lad, Prabhakar <[email protected]>
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/cpufreq/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/cpufreq/Makefile
+++ b/drivers/cpufreq/Makefile
@@ -50,7 +50,7 @@ obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ) +=
# LITTLE drivers, so that it is probed last.
obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o

-obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o
+obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o
obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o
obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o
obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o
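
For readers less familiar with Kbuild: obj-$(CONFIG_FOO) += bar.o expands
to obj-y += bar.o when CONFIG_FOO=y and to nothing otherwise, so narrowing
the symbol to the DA850 sub-option left other davinci configs without the
object file at link time. Roughly:

# with CONFIG_ARCH_DAVINCI=y, this line:
obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o
# expands to:
obj-y += davinci-cpufreq.o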

2014-07-16 00:16:45

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 16/44] powerpc/perf: Clear MMCR2 when enabling PMU

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Joel Stanley <[email protected]>

commit b50a6c584bb47b370f84bfd746770c0bbe7129b7 upstream.

On POWER8, when switching to a KVM guest, we set bits in MMCR2 to freeze
the PMU counters. They are then never reset except at boot, resulting in
stuck perf counters for any user in the guest or host.

We now set MMCR2 to 0 whenever enabling the PMU, which provides a sane
state for perf to use the PMU counters under either the guest or the
host.

This was manifesting as a bug with ppc64_cpu --frequency:

$ sudo ppc64_cpu --frequency
WARNING: couldn't run on cpu 0
WARNING: couldn't run on cpu 8
...
WARNING: couldn't run on cpu 144
WARNING: couldn't run on cpu 152
min: 18446744073.710 GHz (cpu -1)
max: 0.000 GHz (cpu -1)
avg: 0.000 GHz

The command uses a perf counter to measure CPU cycles over a fixed
amount of time, in order to approximate the frequency of the machine.
The counters were returning zero once a guest was started, regardless of
whether it was still running or had been shut down.

By dumping the value of MMCR2, it was observed that once a guest is
running, MMCR2 is set to 1s, which stops the counters from running:

$ sudo sh -c 'echo p > /proc/sysrq-trigger'
CPU: 0 PMU registers, ppmu = POWER8 n_counters = 6
PMC1: 5b635e38 PMC2: 00000000 PMC3: 00000000 PMC4: 00000000
PMC5: 1bf5a646 PMC6: 5793d378 PMC7: deadbeef PMC8: deadbeef
MMCR0: 0000000080000000 MMCR1: 000000001e000000 MMCRA: 0000040000000000
MMCR2: fffffffffffffc00 EBBHR: 0000000000000000
EBBRR: 0000000000000000 BESCR: 0000000000000000
SIAR: 00000000000a51cc SDAR: c00000000fc40000 SIER: 0000000001000000

This is done unconditionally in book3s_hv_interrupts.S upon entering the
guest, and the original value is only saved/restored if the host has
indicated it was using the PMU. This is okay; however, the user of the
PMU needs to ensure that it is in a defined state when it starts using
it.

Fixes: e05b9b9e5c10 ("powerpc/perf: Power8 PMU support")
Signed-off-by: Joel Stanley <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/powerpc/perf/core-book3s.c | 3 +++
1 file changed, 3 insertions(+)

--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -1342,6 +1342,9 @@ static int can_go_on_limited_pmc(struct
if (ppmu->limited_pmc_event(ev))
return 1;

+ if (ppmu->flags & PPMU_ARCH_207S)
+ mtspr(SPRN_MMCR2, 0);
+
/*
* The requested event_id isn't on a limited PMC already;
* see if any alternative code goes on a limited PMC.

2014-07-16 00:17:14

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 15/44] powerpc/perf: Add PPMU_ARCH_207S define

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Joel Stanley <[email protected]>

commit 4d9690dd56b0d18f2af8a9d4a279cb205aae3345 upstream.

Instead of separate bits for every POWER8 PMU feature, have a single one
for v2.07 of the architecture.

This saves adding a separate MMCR2 define in a future patch.

Signed-off-by: Joel Stanley <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/powerpc/include/asm/perf_event_server.h | 2 +-
arch/powerpc/perf/core-book3s.c | 2 +-
arch/powerpc/perf/power8-pmu.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)

--- a/arch/powerpc/include/asm/perf_event_server.h
+++ b/arch/powerpc/include/asm/perf_event_server.h
@@ -59,7 +59,7 @@ struct power_pmu {
#define PPMU_SIAR_VALID 0x00000010 /* Processor has SIAR Valid bit */
#define PPMU_HAS_SSLOT 0x00000020 /* Has sampled slot in MMCRA */
#define PPMU_HAS_SIER 0x00000040 /* Has SIER */
-#define PPMU_BHRB 0x00000080 /* has BHRB feature enabled */
+#define PPMU_ARCH_207S 0x00000080 /* PMC is architecture v2.07S */

/*
* Values for flags to get_alternatives()
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -1436,7 +1436,7 @@ static int power_pmu_event_init(struct p

if (has_branch_stack(event)) {
/* PMU has BHRB enabled */
- if (!(ppmu->flags & PPMU_BHRB))
+ if (!(ppmu->flags & PPMU_ARCH_207S))
return -EOPNOTSUPP;
}

--- a/arch/powerpc/perf/power8-pmu.c
+++ b/arch/powerpc/perf/power8-pmu.c
@@ -592,7 +592,7 @@ static struct power_pmu power8_pmu = {
.get_constraint = power8_get_constraint,
.get_alternatives = power8_get_alternatives,
.disable_pmc = power8_disable_pmc,
- .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_BHRB,
+ .flags = PPMU_HAS_SSLOT | PPMU_HAS_SIER | PPMU_ARCH_207S,
.n_generic = ARRAY_SIZE(power8_generic_events),
.generic_events = power8_generic_events,
.attr_groups = power8_pmu_attr_groups,

2014-07-16 00:17:34

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 11/44] hwmon: (adm1029) Ensure the fan_div cache is updated in set_fan_div

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Axel Lin <[email protected]>

commit 1035a9e3e9c76b64a860a774f5b867d28d34acc2 upstream.

Writing to fanX_div does not clear the cache. As a result, reading
from fanX_div may return the old value for up to two seconds
after writing a new value.

This patch ensures the fan_div cache is updated in set_fan_div().

Reported-by: Guenter Roeck <[email protected]>
Signed-off-by: Axel Lin <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hwmon/adm1029.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/hwmon/adm1029.c
+++ b/drivers/hwmon/adm1029.c
@@ -232,6 +232,9 @@ static ssize_t set_fan_div(struct device
/* Update the value */
reg = (reg & 0x3F) | (val << 6);

+ /* Update the cache */
+ data->fan_div[attr->index] = reg;
+
/* Write value */
i2c_smbus_write_byte_data(client,
ADM1029_REG_FAN_DIV[attr->index], reg);
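
The two-second staleness comes from the usual hwmon cached-update pattern,
sketched below (field names are illustrative; the real driver caches its
registers the same way). Any setter must refresh the cache itself, as the
fix above does, or readers are served the old value until the next refresh:

#include <linux/jiffies.h>

struct example_data {
	unsigned long last_updated;	/* jiffies at last register read */
	char valid;
	u8 fan_div[2];			/* cached copies of the FAN_DIV registers */
};

static struct example_data *example_update(struct example_data *data)
{
	/* Re-read the hardware at most every 2 seconds. */
	if (time_after(jiffies, data->last_updated + 2 * HZ) || !data->valid) {
		/* ... i2c_smbus_read_byte_data() into data->fan_div[] ... */
		data->last_updated = jiffies;
		data->valid = 1;
	}
	return data;
}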

2014-07-16 00:17:36

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 10/44] hwmon: (adm1031) Fix writes to limit registers

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Guenter Roeck <[email protected]>

commit 145e74a4e5022225adb84f4e5d4fff7938475c35 upstream.

Upper limit for write operations to temperature limit registers
was clamped to a fractional value. However, limit registers do
not support fractional values. As a result, upper limits of 127.5
degrees C or higher resulted in a rounded limit of 128 degrees C.
Since limit registers are signed, this was stored as -128 degrees C.
Clamp limits to (-55, +127) degrees C to solve the problem.

Values written to auto_temp[12]_min and auto_temp[12]_max were not
clamped at all, but masked. As a result, out-of-range writes resulted
in a more or less arbitrary limit. Clamp those attributes to (0, 127)
degrees C for more predictable results.

Cc: Axel Lin <[email protected]>
Reviewed-by: Jean Delvare <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hwmon/adm1031.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)

--- a/drivers/hwmon/adm1031.c
+++ b/drivers/hwmon/adm1031.c
@@ -365,6 +365,7 @@ set_auto_temp_min(struct device *dev, st
if (ret)
return ret;

+ val = clamp_val(val, 0, 127000);
mutex_lock(&data->update_lock);
data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]);
adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr),
@@ -394,6 +395,7 @@ set_auto_temp_max(struct device *dev, st
if (ret)
return ret;

+ val = clamp_val(val, 0, 127000);
mutex_lock(&data->update_lock);
data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr],
data->pwm[nr]);
@@ -696,7 +698,7 @@ static ssize_t set_temp_min(struct devic
if (ret)
return ret;

- val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875);
+ val = clamp_val(val, -55000, 127000);
mutex_lock(&data->update_lock);
data->temp_min[nr] = TEMP_TO_REG(val);
adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr),
@@ -717,7 +719,7 @@ static ssize_t set_temp_max(struct devic
if (ret)
return ret;

- val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875);
+ val = clamp_val(val, -55000, 127000);
mutex_lock(&data->update_lock);
data->temp_max[nr] = TEMP_TO_REG(val);
adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr),
@@ -738,7 +740,7 @@ static ssize_t set_temp_crit(struct devi
if (ret)
return ret;

- val = clamp_val(val, -55000, nr == 0 ? 127750 : 127875);
+ val = clamp_val(val, -55000, 127000);
mutex_lock(&data->update_lock);
data->temp_crit[nr] = TEMP_TO_REG(val);
adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr),
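
A user-space illustration of the wraparound described above, assuming the
register value is the limit rounded to whole degrees and stored as a signed
8-bit quantity (which is how the commit message describes it):

#include <stdio.h>

int main(void)
{
	long val = 127500;	/* 127.5 degC in millidegrees, previously allowed */

	/* Round to whole degrees, then store in a signed 8-bit register:
	 * 128 wraps to -128 on a two's-complement machine. */
	signed char reg = (signed char)((val + 500) / 1000);

	printf("stored limit: %d degC\n", reg);	/* prints -128 */
	return 0;
}

Clamping to 127000 before the conversion, as the patch does, keeps the
rounded value within the register's representable range.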

2014-07-16 00:18:07

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 01/44] usb: option: Add ID for Telewell TW-LTE 4G v2

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Bernd Wachter <[email protected]>

commit 3d28bd840b2d3981cd28caf5fe1df38f1344dd60 upstream.

Add the ID of the Telewell TW-LTE 4G v2 hardware to the option driver to
get the legacy serial interface working.

Signed-off-by: Bernd Wachter <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/usb/serial/option.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -1487,6 +1487,8 @@ static const struct usb_device_id option
.driver_info = (kernel_ulong_t)&net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */
.driver_info = (kernel_ulong_t)&net_intf2_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */
+ .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },

2014-07-16 04:25:41

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/44] 3.10.49-stable review

On 07/15/2014 04:16 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.10.49 release.
> There are 44 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Jul 17 23:16:37 UTC 2014.
> Anything received after that time might be too late.
>

Build results:
total: 138 pass: 138 fail: 0

Qemu tests all passed.

Results are boring: perfect.

Details are available at http://server.roeck-us.net:8010/builders.

Guenter

2014-07-16 23:10:04

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/44] 3.10.49-stable review

On Tue, Jul 15, 2014 at 04:16:58PM -0700, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.10.49 release.
> There are 44 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Jul 17 23:16:37 UTC 2014.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.49-rc1.gz
> and the diffstat can be found below.

As with 3.15 and 3.14, there is a new -rc2 release out:

kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.49-rc2.gz

2014-07-17 13:25:01

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/44] 3.10.49-stable review

On 07/15/2014 05:16 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.10.49 release.
> There are 44 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Jul 17 23:16:37 UTC 2014.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.49-rc1.gz
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

rc1 and rc2 compiled and booted on my test system. No dmesg regressions.

-- Shuah

--
Shuah Khan
Senior Linux Kernel Developer - Open Source Group
Samsung Research America(Silicon Valley)
[email protected] | (970) 672-0658