2014-02-18 23:14:51

by Greg Kroah-Hartman

Subject: [PATCH 3.10 00/26] 3.10.31-stable review

This is the start of the stable review cycle for the 3.10.31 release.
There are 26 patches in this series, all of which will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Thu Feb 20 22:45:20 UTC 2014.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.31-rc1.gz
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 3.10.31-rc1

Xishi Qiu <[email protected]>
mm: fix process accidentally killed by mce because of huge page migration

Dirk Brandewie <[email protected]>
intel_pstate: Take core C0 time into account for core busy calculation

Jan Kara <[email protected]>
IB/qib: Convert qib_user_sdma_pin_pages() to use get_user_pages_fast()

Naoya Horiguchi <[email protected]>
mm/memory-failure.c: fix memory leak in successful soft offlining

Stanislaw Gruszka <[email protected]>
pinctrl: protect pinctrl_list add

Tony Prisk <[email protected]>
pinctrl: vt8500: Change devicetree data parsing

Peter Oberparleiter <[email protected]>
x86, hweight: Fix BUG when booting with CONFIG_GCOV_PROFILE_ALL=y

Dave Jones <[email protected]>
mxl111sf: Fix compile when CONFIG_DVB_USB_MXL111SF is unset

Antti Palosaari <[email protected]>
af9035: add ID [2040:f900] Hauppauge WinTV-MiniStick 2

Mel Gorman <[email protected]>
x86: mm: change tlb_flushall_shift for IvyBridge

KOSAKI Motohiro <[email protected]>
mm: __set_page_dirty uses spin_lock_irqsave instead of spin_lock_irq

KOSAKI Motohiro <[email protected]>
mm: __set_page_dirty_nobuffers() uses spin_lock_irqsave() instead of spin_lock_irq()

Takashi Iwai <[email protected]>
ALSA: hda - Add missing mixer widget for AD1983

Takashi Iwai <[email protected]>
ALSA: hda - Fix missing VREF setup for Mac Pro 1,1

Takashi Iwai <[email protected]>
ALSA: usb-audio: Add missing kconfig dependency

Vinayak Kale <[email protected]>
arm64: add DSB after icache flush in __flush_icache_all()

Nathan Lynch <[email protected]>
arm64: vdso: fix coarse clock handling

Catalin Marinas <[email protected]>
arm64: Invalidate the TLB when replacing pmd entries during boot

Will Deacon <[email protected]>
arm64: vdso: prevent ld from aligning PT_LOAD segments to 64k

Nathan Lynch <[email protected]>
arm64: vdso: update wtm fields for CLOCK_MONOTONIC_COARSE

Lior Amsalem <[email protected]>
irqchip: armada-370-xp: fix IPI race condition

Harald Freudenberger <[email protected]>
crypto: s390 - fix des and des3_ede ctr concurrency issue

Harald Freudenberger <[email protected]>
crypto: s390 - fix des and des3_ede cbc concurrency issue

Harald Freudenberger <[email protected]>
crypto: s390 - fix concurrency issue in aes-ctr mode

Josef Bacik <[email protected]>
Btrfs: disable snapshot aware defrag for now

Stephen Smalley <[email protected]>
SELinux: Fix kernel BUG on empty security contexts.


-------------

Diffstat:

Makefile | 4 +-
arch/arm64/include/asm/cacheflush.h | 1 +
arch/arm64/kernel/vdso.c | 4 +-
arch/arm64/kernel/vdso/Makefile | 2 +-
arch/arm64/kernel/vdso/gettimeofday.S | 7 +-
arch/arm64/mm/mmu.c | 12 +++-
arch/s390/crypto/aes_s390.c | 65 ++++++++++++------
arch/s390/crypto/des_s390.c | 95 +++++++++++++++++----------
arch/x86/kernel/cpu/intel.c | 2 +-
drivers/cpufreq/intel_pstate.c | 16 +++--
drivers/infiniband/hw/qib/qib_user_sdma.c | 6 +-
drivers/irqchip/irq-armada-370-xp.c | 2 +-
drivers/media/usb/dvb-usb-v2/af9035.c | 2 +
drivers/media/usb/dvb-usb-v2/mxl111sf-tuner.h | 2 +-
drivers/pinctrl/core.c | 2 +
drivers/pinctrl/vt8500/pinctrl-wmt.c | 15 ++++-
fs/btrfs/inode.c | 2 +-
fs/buffer.c | 6 +-
lib/Makefile | 1 +
mm/hugetlb.c | 11 +++-
mm/memory-failure.c | 22 +++++--
mm/page-writeback.c | 5 +-
security/selinux/ss/services.c | 4 ++
sound/pci/hda/patch_analog.c | 1 +
sound/pci/hda/patch_realtek.c | 9 ++-
sound/usb/Kconfig | 1 +
26 files changed, 214 insertions(+), 85 deletions(-)


2014-02-18 22:46:39

by Greg Kroah-Hartman

Subject: [PATCH 3.10 07/26] arm64: vdso: update wtm fields for CLOCK_MONOTONIC_COARSE

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nathan Lynch <[email protected]>

commit d4022a335271a48cce49df35d825897914fbffe3 upstream.

Update wall-to-monotonic fields in the VDSO data page
unconditionally. These are used to service CLOCK_MONOTONIC_COARSE,
which is not guarded by use_syscall.

Signed-off-by: Nathan Lynch <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/kernel/vdso.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -235,6 +235,8 @@ void update_vsyscall(struct timekeeper *
vdso_data->use_syscall = use_syscall;
vdso_data->xtime_coarse_sec = xtime_coarse.tv_sec;
vdso_data->xtime_coarse_nsec = xtime_coarse.tv_nsec;
+ vdso_data->wtm_clock_sec = tk->wall_to_monotonic.tv_sec;
+ vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;

if (!use_syscall) {
vdso_data->cs_cycle_last = tk->clock->cycle_last;
@@ -242,8 +244,6 @@ void update_vsyscall(struct timekeeper *
vdso_data->xtime_clock_nsec = tk->xtime_nsec;
vdso_data->cs_mult = tk->mult;
vdso_data->cs_shift = tk->shift;
- vdso_data->wtm_clock_sec = tk->wall_to_monotonic.tv_sec;
- vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;
}

smp_wmb();

2014-02-18 22:46:36

by Greg Kroah-Hartman

Subject: [PATCH 3.10 25/26] intel_pstate: Take core C0 time into account for core busy calculation

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dirk Brandewie <[email protected]>

commit fcb6a15c2e7e76d493e6f91ea889ab40e1c643a4 upstream.

Take non-idle time into account when calculating core busy time.
This ensures that intel_pstate will notice a decrease in load.
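
As an illustration, here is a minimal integer-only sketch of the idea behind
the new calculation; the busy_pct() helper name is hypothetical and plain
64-bit division stands in for the driver's fixed-point
int_tofp()/mul_fp()/div_fp() arithmetic:

	/*
	 * APERF/MPERF gives the performance ratio while the core is not idle;
	 * MPERF/TSC gives the fraction of the sample window spent in C0.
	 * Scaling one by the other makes the busy value drop when load drops.
	 */
	static u64 busy_pct(u64 aperf, u64 mperf, u64 tsc)
	{
		u64 core_pct = div64_u64(aperf * 100, mperf); /* perf while running */
		u64 c0_pct   = div64_u64(mperf * 100, tsc);   /* share of time running */

		return core_pct * (c0_pct + 1) / 100;
	}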

References: https://bugzilla.kernel.org/show_bug.cgi?id=66581
Cc: 3.10+ <[email protected]> # 3.10+
Signed-off-by: Dirk Brandewie <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/cpufreq/intel_pstate.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -51,6 +51,7 @@ struct sample {
int32_t core_pct_busy;
u64 aperf;
u64 mperf;
+ unsigned long long tsc;
int freq;
};

@@ -86,6 +87,7 @@ struct cpudata {

u64 prev_aperf;
u64 prev_mperf;
+ unsigned long long prev_tsc;
int sample_ptr;
struct sample samples[SAMPLE_COUNT];
};
@@ -435,11 +437,17 @@ static inline void intel_pstate_calc_bus
struct sample *sample)
{
u64 core_pct;
- core_pct = div64_u64(int_tofp(sample->aperf * 100),
- sample->mperf);
- sample->freq = fp_toint(cpu->pstate.max_pstate * core_pct * 1000);
+ u64 c0_pct;

- sample->core_pct_busy = core_pct;
+ core_pct = div64_u64(sample->aperf * 100, sample->mperf);
+
+ c0_pct = div64_u64(sample->mperf * 100, sample->tsc);
+ sample->freq = fp_toint(
+ mul_fp(int_tofp(cpu->pstate.max_pstate),
+ int_tofp(core_pct * 1000)));
+
+ sample->core_pct_busy = mul_fp(int_tofp(core_pct),
+ div_fp(int_tofp(c0_pct + 1), int_tofp(100)));
}

static inline void intel_pstate_sample(struct cpudata *cpu)

2014-02-18 23:19:43

by Greg Kroah-Hartman

Subject: [PATCH 3.10 08/26] arm64: vdso: prevent ld from aligning PT_LOAD segments to 64k

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Will Deacon <[email protected]>

commit 40507403485fcb56b83d6ddfc954e9b08305054c upstream.

Whilst the text segment for our VDSO is marked as PT_LOAD in the ELF
headers, it is mapped by the kernel and not actually subject to
demand-paging. ld doesn't realise this, and emits a p_align field of 64k
(the maximum supported page size), which conflicts with the load address
picked by the kernel on 4k systems, which will be 4k aligned. This
causes GDB to fail with "Failed to read a valid object file image from
memory" when attempting to load the VDSO.

This patch passes the -n option to ld, which prevents it from aligning
PT_LOAD segments to the maximum page size.

Reported-by: Kyle McMartin <[email protected]>
Acked-by: Kyle McMartin <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/kernel/vdso/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -48,7 +48,7 @@ $(obj-vdso): %.o: %.S

# Actual build commands
quiet_cmd_vdsold = VDSOL $@
- cmd_vdsold = $(CC) $(c_flags) -Wl,-T $^ -o $@
+ cmd_vdsold = $(CC) $(c_flags) -Wl,-n -Wl,-T $^ -o $@
quiet_cmd_vdsoas = VDSOA $@
cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<


2014-02-18 23:19:41

by Greg Kroah-Hartman

Subject: [PATCH 3.10 09/26] arm64: Invalidate the TLB when replacing pmd entries during boot

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Catalin Marinas <[email protected]>

commit a55f9929a9b257f84b6cc7b2397379cabd744a22 upstream.

With the 64K page size configuration, __create_page_tables in head.S
maps enough memory to get started but using 64K pages rather than 512M
sections with a single pgd/pud/pmd entry pointing to a pte table.
create_mapping() may override the pgd/pud/pmd table entry with a block
(section) one if the RAM size is more than 512MB and aligned correctly.
For the end of this block to be accessible, the old TLB entry must be
invalidated.

Reported-by: Mark Salter <[email protected]>
Tested-by: Mark Salter <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/mm/mmu.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -203,10 +203,18 @@ static void __init alloc_init_pmd(pud_t
do {
next = pmd_addr_end(addr, end);
/* try section mapping first */
- if (((addr | next | phys) & ~SECTION_MASK) == 0)
+ if (((addr | next | phys) & ~SECTION_MASK) == 0) {
+ pmd_t old_pmd =*pmd;
set_pmd(pmd, __pmd(phys | prot_sect_kernel));
- else
+ /*
+ * Check for previous table entries created during
+ * boot (__create_page_tables) and flush them.
+ */
+ if (!pmd_none(old_pmd))
+ flush_tlb_all();
+ } else {
alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys));
+ }
phys += next - addr;
} while (pmd++, addr = next, addr != end);
}

2014-02-18 23:20:28

by Greg Kroah-Hartman

Subject: [PATCH 3.10 06/26] irqchip: armada-370-xp: fix IPI race condition

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lior Amsalem <[email protected]>

commit a6f089e95b1e08cdea9633d50ad20aa5d44ba64d upstream.

In the Armada 370/XP driver, when we receive an IRQ 0, we read the
list of doorbells that caused the interrupt from register
ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS. This gives the list of IPIs that
were generated. However, instead of acknowledging only the IPIs that
were generated, we acknowledge *all* the IPIs, by writing
~IPI_DOORBELL_MASK in the ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS register.

This creates a race condition: if a new IPI that isn't part of the
ones read into the temporary "ipimask" variable is fired before we
acknowledge all IPIs, then we will simply lose it. This is causing
scheduling hangs on SMP intensive workloads.

It is important to mention that this ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS
register has the following behavior: "A CPU write of 0 clears the bits
in this field. A CPU write of 1 has no effect". This is what allows us
to simply write ~ipimask to acknowledge the handled IPIs.
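
A condensed sketch of the fixed sequence (register and macro names as in the
driver; error handling and the doorbell dispatch loop omitted):

	/* doorbells that were set when we read the cause register */
	ipimask = readl(per_cpu_int_base +
			ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS)
		& IPI_DOORBELL_MASK;

	/* clear only those bits: write 0 clears, write 1 is a no-op,
	 * so an IPI raised after the read above stays pending */
	writel(~ipimask, per_cpu_int_base +
			ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS);

	/* handle the doorbells recorded in ipimask ... */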

Notice that the same problem is present in the MSI implementation, but
it will be fixed as a separate patch, so that this IPI fix can be
pushed to older stable versions as appropriate (all the way to 3.8),
while the MSI code only appeared in 3.13.

Signed-off-by: Lior Amsalem <[email protected]>
Signed-off-by: Thomas Petazzoni <[email protected]>
Fixes: 344e873e5657e8dc0 'arm: mvebu: Add IPI support via doorbells'
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Jason Cooper <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/irqchip/irq-armada-370-xp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/irqchip/irq-armada-370-xp.c
+++ b/drivers/irqchip/irq-armada-370-xp.c
@@ -229,7 +229,7 @@ armada_370_xp_handle_irq(struct pt_regs
ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS)
& IPI_DOORBELL_MASK;

- writel(~IPI_DOORBELL_MASK, per_cpu_int_base +
+ writel(~ipimask, per_cpu_int_base +
ARMADA_370_XP_IN_DRBEL_CAUSE_OFFS);

/* Handle all pending doorbells */

2014-02-18 23:21:32

by Greg Kroah-Hartman

Subject: [PATCH 3.10 05/26] crypto: s390 - fix des and des3_ede ctr concurrency issue

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Harald Freudenberger <[email protected]>

commit ee97dc7db4cbda33e4241c2d85b42d1835bc8a35 upstream.

In s390 des and 3des ctr mode there is one preallocated page
used to speed up the en/decryption. This page is not protected
against concurrent usage and thus there is a potential for data
corruption with multiple threads.

The fix introduces locking/unlocking of the ctr page and a slower
fallback solution for concurrency situations.
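
A stripped-down sketch of the fallback scheme (condensed from the patch
below; the surrounding walk loop is omitted):

	u8 ctrbuf[DES_BLOCK_SIZE];
	u8 *ctrptr = ctrbuf;		/* default: private on-stack block */

	if (spin_trylock(&ctrblk_lock))
		ctrptr = ctrblk;	/* got the shared page: fast path */

	/* the fast path prepares up to PAGE_SIZE of counter blocks at once,
	 * the fallback path processes a single DES_BLOCK_SIZE block */

	if (ctrptr == ctrblk)
		spin_unlock(&ctrblk_lock);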

Signed-off-by: Harald Freudenberger <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/s390/crypto/des_s390.c | 69 ++++++++++++++++++++++++++++++--------------
1 file changed, 48 insertions(+), 21 deletions(-)

--- a/arch/s390/crypto/des_s390.c
+++ b/arch/s390/crypto/des_s390.c
@@ -25,6 +25,7 @@
#define DES3_KEY_SIZE (3 * DES_KEY_SIZE)

static u8 *ctrblk;
+static DEFINE_SPINLOCK(ctrblk_lock);

struct s390_des_ctx {
u8 iv[DES_BLOCK_SIZE];
@@ -368,54 +369,80 @@ static struct crypto_alg cbc_des3_alg =
}
};

+static unsigned int __ctrblk_init(u8 *ctrptr, unsigned int nbytes)
+{
+ unsigned int i, n;
+
+ /* align to block size, max. PAGE_SIZE */
+ n = (nbytes > PAGE_SIZE) ? PAGE_SIZE : nbytes & ~(DES_BLOCK_SIZE - 1);
+ for (i = DES_BLOCK_SIZE; i < n; i += DES_BLOCK_SIZE) {
+ memcpy(ctrptr + i, ctrptr + i - DES_BLOCK_SIZE, DES_BLOCK_SIZE);
+ crypto_inc(ctrptr + i, DES_BLOCK_SIZE);
+ }
+ return n;
+}
+
static int ctr_desall_crypt(struct blkcipher_desc *desc, long func,
- struct s390_des_ctx *ctx, struct blkcipher_walk *walk)
+ struct s390_des_ctx *ctx,
+ struct blkcipher_walk *walk)
{
int ret = blkcipher_walk_virt_block(desc, walk, DES_BLOCK_SIZE);
- unsigned int i, n, nbytes;
- u8 buf[DES_BLOCK_SIZE];
- u8 *out, *in;
+ unsigned int n, nbytes;
+ u8 buf[DES_BLOCK_SIZE], ctrbuf[DES_BLOCK_SIZE];
+ u8 *out, *in, *ctrptr = ctrbuf;
+
+ if (!walk->nbytes)
+ return ret;

- memcpy(ctrblk, walk->iv, DES_BLOCK_SIZE);
+ if (spin_trylock(&ctrblk_lock))
+ ctrptr = ctrblk;
+
+ memcpy(ctrptr, walk->iv, DES_BLOCK_SIZE);
while ((nbytes = walk->nbytes) >= DES_BLOCK_SIZE) {
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
while (nbytes >= DES_BLOCK_SIZE) {
- /* align to block size, max. PAGE_SIZE */
- n = (nbytes > PAGE_SIZE) ? PAGE_SIZE :
- nbytes & ~(DES_BLOCK_SIZE - 1);
- for (i = DES_BLOCK_SIZE; i < n; i += DES_BLOCK_SIZE) {
- memcpy(ctrblk + i, ctrblk + i - DES_BLOCK_SIZE,
- DES_BLOCK_SIZE);
- crypto_inc(ctrblk + i, DES_BLOCK_SIZE);
- }
- ret = crypt_s390_kmctr(func, ctx->key, out, in, n, ctrblk);
- if (ret < 0 || ret != n)
+ if (ctrptr == ctrblk)
+ n = __ctrblk_init(ctrptr, nbytes);
+ else
+ n = DES_BLOCK_SIZE;
+ ret = crypt_s390_kmctr(func, ctx->key, out, in,
+ n, ctrptr);
+ if (ret < 0 || ret != n) {
+ if (ctrptr == ctrblk)
+ spin_unlock(&ctrblk_lock);
return -EIO;
+ }
if (n > DES_BLOCK_SIZE)
- memcpy(ctrblk, ctrblk + n - DES_BLOCK_SIZE,
+ memcpy(ctrptr, ctrptr + n - DES_BLOCK_SIZE,
DES_BLOCK_SIZE);
- crypto_inc(ctrblk, DES_BLOCK_SIZE);
+ crypto_inc(ctrptr, DES_BLOCK_SIZE);
out += n;
in += n;
nbytes -= n;
}
ret = blkcipher_walk_done(desc, walk, nbytes);
}
-
+ if (ctrptr == ctrblk) {
+ if (nbytes)
+ memcpy(ctrbuf, ctrptr, DES_BLOCK_SIZE);
+ else
+ memcpy(walk->iv, ctrptr, DES_BLOCK_SIZE);
+ spin_unlock(&ctrblk_lock);
+ }
/* final block may be < DES_BLOCK_SIZE, copy only nbytes */
if (nbytes) {
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
ret = crypt_s390_kmctr(func, ctx->key, buf, in,
- DES_BLOCK_SIZE, ctrblk);
+ DES_BLOCK_SIZE, ctrbuf);
if (ret < 0 || ret != DES_BLOCK_SIZE)
return -EIO;
memcpy(out, buf, nbytes);
- crypto_inc(ctrblk, DES_BLOCK_SIZE);
+ crypto_inc(ctrbuf, DES_BLOCK_SIZE);
ret = blkcipher_walk_done(desc, walk, 0);
+ memcpy(walk->iv, ctrbuf, DES_BLOCK_SIZE);
}
- memcpy(walk->iv, ctrblk, DES_BLOCK_SIZE);
return ret;
}


2014-02-18 22:46:33

by Greg Kroah-Hartman

Subject: [PATCH 3.10 26/26] mm: fix process accidentally killed by mce because of huge page migration

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Xishi Qiu <[email protected]>

Based on c8721bbbdd36382de51cd6b7a56322e0acca2414 upstream, but only the
bugfix portion pulled out.

Hi Naoya or Greg,

We found a bug in 3.10.x.
The problem is that we accidentally have a hwpoisoned hugepage in free
hugepage list. It could happen in the following scenario:

  process A                              process B

  migrate_huge_page
  put_page (old hugepage)
  linked to free hugepage list
                                         hugetlb_fault
                                         hugetlb_no_page
                                         alloc_huge_page
                                         dequeue_huge_page_vma
                                         dequeue_huge_page_node
                                         (steal hwpoisoned hugepage)
  set_page_hwpoison_huge_page
  dequeue_hwpoisoned_huge_page
  (fail to dequeue)

I tested this bug: one process keeps allocating huge pages, and I
used the sysfs interface to soft offline a huge page, then received:
"MCE: Killing UCP:2717 due to hardware memory corruption fault at 8200034"

The upstream kernel is free from this bug because of these two commits:

f15bdfa802bfa5eb6b4b5a241b97ec9fa1204a35
mm/memory-failure.c: fix memory leak in successful soft offlining

c8721bbbdd36382de51cd6b7a56322e0acca2414
mm: memory-hotplug: enable memory hotplug to handle hugepage

The first one, although it is about a memory leak, moves
unset_migratetype_isolate(), which is important to avoid the race.
The latter is not a bug fix and is too big, so I rewrote a small one.

The following patch fixes this bug (please apply f15bdfa802bf first).
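
The core of the change is the dequeue logic (condensed from the hunk below):
skip isolated pages instead of blindly taking the list head, so a page parked
under MIGRATE_ISOLATE cannot be handed out again.

	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
		if (!is_migrate_isolate_page(page))
			break;
	/* if no non-isolated page was found, 'page' points at the list
	 * head itself and the allocation must fail */
	if (&h->hugepage_freelists[nid] == &page->lru)
		return NULL;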

Signed-off-by: Xishi Qiu <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/hugetlb.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -21,6 +21,7 @@
#include <linux/rmap.h>
#include <linux/swap.h>
#include <linux/swapops.h>
+#include <linux/page-isolation.h>

#include <asm/page.h>
#include <asm/pgtable.h>
@@ -517,9 +518,15 @@ static struct page *dequeue_huge_page_no
{
struct page *page;

- if (list_empty(&h->hugepage_freelists[nid]))
+ list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
+ if (!is_migrate_isolate_page(page))
+ break;
+ /*
+ * if 'non-isolated free hugepage' not found on the list,
+ * the allocation fails.
+ */
+ if (&h->hugepage_freelists[nid] == &page->lru)
return NULL;
- page = list_entry(h->hugepage_freelists[nid].next, struct page, lru);
list_move(&page->lru, &h->hugepage_activelist);
set_page_refcounted(page);
h->free_huge_pages--;

2014-02-18 23:21:52

by Greg Kroah-Hartman

Subject: [PATCH 3.10 04/26] crypto: s390 - fix des and des3_ede cbc concurrency issue

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Harald Freudenberger <[email protected]>

commit adc3fcf1552b6e406d172fd9690bbd1395053d13 upstream.

In s390 des and des3_ede cbc mode the iv value is not protected
against concurrent access and modification from another running
en/decrypt operation that uses the very same tfm struct
instance. This fix copies the iv to the local stack before
the crypto operation and stores the value back when done.
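
In outline (simplified from the patch below), the iv now lives in a
per-request parameter block on the stack together with the key, so two
requests sharing a tfm can no longer clobber each other's chaining value:

	struct {
		u8 iv[DES_BLOCK_SIZE];
		u8 key[DES3_KEY_SIZE];
	} param;

	memcpy(param.iv, walk->iv, DES_BLOCK_SIZE);
	memcpy(param.key, ctx->key, DES3_KEY_SIZE);
	/* ... crypt_s390_kmc(func, &param, out, in, n) per chunk ... */
	memcpy(walk->iv, param.iv, DES_BLOCK_SIZE);	/* return updated iv */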

Signed-off-by: Harald Freudenberger <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/s390/crypto/des_s390.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)

--- a/arch/s390/crypto/des_s390.c
+++ b/arch/s390/crypto/des_s390.c
@@ -105,29 +105,35 @@ static int ecb_desall_crypt(struct blkci
}

static int cbc_desall_crypt(struct blkcipher_desc *desc, long func,
- u8 *iv, struct blkcipher_walk *walk)
+ struct blkcipher_walk *walk)
{
+ struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
int ret = blkcipher_walk_virt(desc, walk);
unsigned int nbytes = walk->nbytes;
+ struct {
+ u8 iv[DES_BLOCK_SIZE];
+ u8 key[DES3_KEY_SIZE];
+ } param;

if (!nbytes)
goto out;

- memcpy(iv, walk->iv, DES_BLOCK_SIZE);
+ memcpy(param.iv, walk->iv, DES_BLOCK_SIZE);
+ memcpy(param.key, ctx->key, DES3_KEY_SIZE);
do {
/* only use complete blocks */
unsigned int n = nbytes & ~(DES_BLOCK_SIZE - 1);
u8 *out = walk->dst.virt.addr;
u8 *in = walk->src.virt.addr;

- ret = crypt_s390_kmc(func, iv, out, in, n);
+ ret = crypt_s390_kmc(func, &param, out, in, n);
if (ret < 0 || ret != n)
return -EIO;

nbytes &= DES_BLOCK_SIZE - 1;
ret = blkcipher_walk_done(desc, walk, nbytes);
} while ((nbytes = walk->nbytes));
- memcpy(walk->iv, iv, DES_BLOCK_SIZE);
+ memcpy(walk->iv, param.iv, DES_BLOCK_SIZE);

out:
return ret;
@@ -179,22 +185,20 @@ static int cbc_des_encrypt(struct blkcip
struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
- struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
struct blkcipher_walk walk;

blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_DEA_ENCRYPT, ctx->iv, &walk);
+ return cbc_desall_crypt(desc, KMC_DEA_ENCRYPT, &walk);
}

static int cbc_des_decrypt(struct blkcipher_desc *desc,
struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
- struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
struct blkcipher_walk walk;

blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_DEA_DECRYPT, ctx->iv, &walk);
+ return cbc_desall_crypt(desc, KMC_DEA_DECRYPT, &walk);
}

static struct crypto_alg cbc_des_alg = {
@@ -327,22 +331,20 @@ static int cbc_des3_encrypt(struct blkci
struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
- struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
struct blkcipher_walk walk;

blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_TDEA_192_ENCRYPT, ctx->iv, &walk);
+ return cbc_desall_crypt(desc, KMC_TDEA_192_ENCRYPT, &walk);
}

static int cbc_des3_decrypt(struct blkcipher_desc *desc,
struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
- struct s390_des_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
struct blkcipher_walk walk;

blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_TDEA_192_DECRYPT, ctx->iv, &walk);
+ return cbc_desall_crypt(desc, KMC_TDEA_192_DECRYPT, &walk);
}

static struct crypto_alg cbc_des3_alg = {

2014-02-18 23:22:21

by Greg Kroah-Hartman

Subject: [PATCH 3.10 03/26] crypto: s390 - fix concurrency issue in aes-ctr mode

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Harald Freudenberger <[email protected]>

commit 0519e9ad89e5cd6e6b08398f57c6a71d9580564c upstream.

The aes-ctr mode uses one preallocated page without any concurrency
protection. When multiple threads run aes-ctr encryption or decryption
this can lead to data corruption.

The patch introduces locking for the page and a fallback solution with
slower en/decryption performance in concurrency situations.

Signed-off-by: Harald Freudenberger <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/s390/crypto/aes_s390.c | 65 +++++++++++++++++++++++++++++++-------------
1 file changed, 46 insertions(+), 19 deletions(-)

--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -25,6 +25,7 @@
#include <linux/err.h>
#include <linux/module.h>
#include <linux/init.h>
+#include <linux/spinlock.h>
#include "crypt_s390.h"

#define AES_KEYLEN_128 1
@@ -32,6 +33,7 @@
#define AES_KEYLEN_256 4

static u8 *ctrblk;
+static DEFINE_SPINLOCK(ctrblk_lock);
static char keylen_flag;

struct s390_aes_ctx {
@@ -756,43 +758,67 @@ static int ctr_aes_set_key(struct crypto
return aes_set_key(tfm, in_key, key_len);
}

+static unsigned int __ctrblk_init(u8 *ctrptr, unsigned int nbytes)
+{
+ unsigned int i, n;
+
+ /* only use complete blocks, max. PAGE_SIZE */
+ n = (nbytes > PAGE_SIZE) ? PAGE_SIZE : nbytes & ~(AES_BLOCK_SIZE - 1);
+ for (i = AES_BLOCK_SIZE; i < n; i += AES_BLOCK_SIZE) {
+ memcpy(ctrptr + i, ctrptr + i - AES_BLOCK_SIZE,
+ AES_BLOCK_SIZE);
+ crypto_inc(ctrptr + i, AES_BLOCK_SIZE);
+ }
+ return n;
+}
+
static int ctr_aes_crypt(struct blkcipher_desc *desc, long func,
struct s390_aes_ctx *sctx, struct blkcipher_walk *walk)
{
int ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
- unsigned int i, n, nbytes;
- u8 buf[AES_BLOCK_SIZE];
- u8 *out, *in;
+ unsigned int n, nbytes;
+ u8 buf[AES_BLOCK_SIZE], ctrbuf[AES_BLOCK_SIZE];
+ u8 *out, *in, *ctrptr = ctrbuf;

if (!walk->nbytes)
return ret;

- memcpy(ctrblk, walk->iv, AES_BLOCK_SIZE);
+ if (spin_trylock(&ctrblk_lock))
+ ctrptr = ctrblk;
+
+ memcpy(ctrptr, walk->iv, AES_BLOCK_SIZE);
while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
while (nbytes >= AES_BLOCK_SIZE) {
- /* only use complete blocks, max. PAGE_SIZE */
- n = (nbytes > PAGE_SIZE) ? PAGE_SIZE :
- nbytes & ~(AES_BLOCK_SIZE - 1);
- for (i = AES_BLOCK_SIZE; i < n; i += AES_BLOCK_SIZE) {
- memcpy(ctrblk + i, ctrblk + i - AES_BLOCK_SIZE,
- AES_BLOCK_SIZE);
- crypto_inc(ctrblk + i, AES_BLOCK_SIZE);
- }
- ret = crypt_s390_kmctr(func, sctx->key, out, in, n, ctrblk);
- if (ret < 0 || ret != n)
+ if (ctrptr == ctrblk)
+ n = __ctrblk_init(ctrptr, nbytes);
+ else
+ n = AES_BLOCK_SIZE;
+ ret = crypt_s390_kmctr(func, sctx->key, out, in,
+ n, ctrptr);
+ if (ret < 0 || ret != n) {
+ if (ctrptr == ctrblk)
+ spin_unlock(&ctrblk_lock);
return -EIO;
+ }
if (n > AES_BLOCK_SIZE)
- memcpy(ctrblk, ctrblk + n - AES_BLOCK_SIZE,
+ memcpy(ctrptr, ctrptr + n - AES_BLOCK_SIZE,
AES_BLOCK_SIZE);
- crypto_inc(ctrblk, AES_BLOCK_SIZE);
+ crypto_inc(ctrptr, AES_BLOCK_SIZE);
out += n;
in += n;
nbytes -= n;
}
ret = blkcipher_walk_done(desc, walk, nbytes);
}
+ if (ctrptr == ctrblk) {
+ if (nbytes)
+ memcpy(ctrbuf, ctrptr, AES_BLOCK_SIZE);
+ else
+ memcpy(walk->iv, ctrptr, AES_BLOCK_SIZE);
+ spin_unlock(&ctrblk_lock);
+ }
/*
* final block may be < AES_BLOCK_SIZE, copy only nbytes
*/
@@ -800,14 +826,15 @@ static int ctr_aes_crypt(struct blkciphe
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
ret = crypt_s390_kmctr(func, sctx->key, buf, in,
- AES_BLOCK_SIZE, ctrblk);
+ AES_BLOCK_SIZE, ctrbuf);
if (ret < 0 || ret != AES_BLOCK_SIZE)
return -EIO;
memcpy(out, buf, nbytes);
- crypto_inc(ctrblk, AES_BLOCK_SIZE);
+ crypto_inc(ctrbuf, AES_BLOCK_SIZE);
ret = blkcipher_walk_done(desc, walk, 0);
+ memcpy(walk->iv, ctrbuf, AES_BLOCK_SIZE);
}
- memcpy(walk->iv, ctrblk, AES_BLOCK_SIZE);
+
return ret;
}


2014-02-18 23:22:42

by Greg Kroah-Hartman

Subject: [PATCH 3.10 24/26] IB/qib: Convert qib_user_sdma_pin_pages() to use get_user_pages_fast()

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jan Kara <[email protected]>

commit 603e7729920e42b3c2f4dbfab9eef4878cb6e8fa upstream.

qib_user_sdma_queue_pkts() gets called with mmap_sem held for
writing. Except for get_user_pages() deep down in
qib_user_sdma_pin_pages() we don't seem to need mmap_sem at all. Even
more interestingly the function qib_user_sdma_queue_pkts() (and also
qib_user_sdma_coalesce() called somewhat later) call copy_from_user()
which can hit a page fault and we deadlock on trying to get mmap_sem
when handling that fault.

So just make qib_user_sdma_pin_pages() use get_user_pages_fast() and
leave mmap_sem locking for mm.

This deadlock has actually been observed in the wild when the node
is under memory pressure.
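
For reference, the substance of the change (as in the hunk below) is to
replace the locked call

	ret = get_user_pages(current, current->mm, addr,
			     npages, 0, 1, pages, NULL);

with the lockless-fast variant

	ret = get_user_pages_fast(addr, npages, 0 /* write=0 */, pages);

and to drop the down_write()/up_write() of current->mm->mmap_sem around
qib_user_sdma_queue_pkts(), so a page fault taken by copy_from_user() can
acquire mmap_sem normally instead of deadlocking.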

Reviewed-by: Mike Marciniszyn <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>
[Backported to 3.10: (Thanks to Ben Hutchings)
- Adjust context
- Adjust indentation and nr_pages argument in qib_user_sdma_pin_pages()]
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Mike Marciniszyn <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/infiniband/hw/qib/qib_user_sdma.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -284,8 +284,7 @@ static int qib_user_sdma_pin_pages(const
int j;
int ret;

- ret = get_user_pages(current, current->mm, addr,
- npages, 0, 1, pages, NULL);
+ ret = get_user_pages_fast(addr, npages, 0, pages);

if (ret != npages) {
int i;
@@ -830,10 +829,7 @@ int qib_user_sdma_writev(struct qib_ctxt
while (dim) {
const int mxp = 8;

- down_write(&current->mm->mmap_sem);
ret = qib_user_sdma_queue_pkts(dd, pq, &list, iov, dim, mxp);
- up_write(&current->mm->mmap_sem);
-
if (ret <= 0)
goto done_unlock;
else {

2014-02-18 23:23:24

by Greg Kroah-Hartman

Subject: [PATCH 3.10 23/26] mm/memory-failure.c: fix memory leak in successful soft offlining

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Naoya Horiguchi <[email protected]>

commit f15bdfa802bfa5eb6b4b5a241b97ec9fa1204a35 upstream.

After a successful page migration by soft offlining, the source page is
not properly freed and it's never reusable even if we unpoison it
afterward.

This is caused by the race between freeing page and setting PG_hwpoison.
In successful soft offlining, the source page is put (and the refcount
becomes 0) by putback_lru_page() in unmap_and_move(), where it's linked
to pagevec and actual freeing back to buddy is delayed. So if
PG_hwpoison is set for the page before freeing, the freeing does not
function as expected (in such a case freeing aborts in the
free_pages_prepare() check).

This patch tries to make sure to free the source page before setting
PG_hwpoison on it. To avoid reallocating, the page keeps
MIGRATE_ISOLATE until after setting PG_hwpoison.

This patch also removes obsolete comments about "keeping elevated
refcount" because what they say is not true. Unlike memory_failure(),
soft_offline_page() uses no special page isolation code, and the
soft-offlined pages have no elevated refcount.
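
The ordering the patch enforces, in outline (condensed from the hunk below):
flush the source page out of the per-CPU LRU pagevecs and per-CPU free lists
so it reaches the buddy allocator first, and only then set PG_hwpoison:

	if (!is_free_buddy_page(page))
		lru_add_drain_all();	/* flush LRU pagevecs */
	if (!is_free_buddy_page(page))
		drain_all_pages();	/* flush per-cpu free lists */
	SetPageHWPoison(page);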

Signed-off-by: Naoya Horiguchi <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Cc: Xishi Qiu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/memory-failure.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)

--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1421,7 +1421,8 @@ static int __get_any_page(struct page *p

/*
* Isolate the page, so that it doesn't get reallocated if it
- * was free.
+ * was free. This flag should be kept set until the source page
+ * is freed and PG_hwpoison on it is set.
*/
set_migratetype_isolate(p, true);
/*
@@ -1444,7 +1445,6 @@ static int __get_any_page(struct page *p
/* Not a free page */
ret = 1;
}
- unset_migratetype_isolate(p, MIGRATE_MOVABLE);
unlock_memory_hotplug();
return ret;
}
@@ -1511,7 +1511,6 @@ static int soft_offline_huge_page(struct
atomic_long_inc(&num_poisoned_pages);
}
}
- /* keep elevated page count for bad page */
return ret;
}

@@ -1576,7 +1575,7 @@ int soft_offline_page(struct page *page,
atomic_long_inc(&num_poisoned_pages);
}
}
- /* keep elevated page count for bad page */
+ unset_migratetype_isolate(page, MIGRATE_MOVABLE);
return ret;
}

@@ -1642,7 +1641,22 @@ static int __soft_offline_page(struct pa
if (ret > 0)
ret = -EIO;
} else {
+ /*
+ * After page migration succeeds, the source page can
+ * be trapped in pagevec and actual freeing is delayed.
+ * Freeing code works differently based on PG_hwpoison,
+ * so there's a race. We need to make sure that the
+ * source page should be freed back to buddy before
+ * setting PG_hwpoison.
+ */
+ if (!is_free_buddy_page(page))
+ lru_add_drain_all();
+ if (!is_free_buddy_page(page))
+ drain_all_pages();
SetPageHWPoison(page);
+ if (!is_free_buddy_page(page))
+ pr_info("soft offline: %#lx: page leaked\n",
+ pfn);
atomic_long_inc(&num_poisoned_pages);
}
} else {

2014-02-18 22:46:28

by Greg Kroah-Hartman

Subject: [PATCH 3.10 11/26] arm64: add DSB after icache flush in __flush_icache_all()

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Vinayak Kale <[email protected]>

commit 5044bad43ee573d0b6d90e3ccb7a40c2c7d25eb4 upstream.

Add DSB after icache flush to complete the cache maintenance operation.
The function __flush_icache_all() is used only for user space mappings
and an ISB is not required because of an exception return before executing
user instructions. An exception return would behave like an ISB.

Signed-off-by: Vinayak Kale <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/include/asm/cacheflush.h | 1 +
1 file changed, 1 insertion(+)

--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -116,6 +116,7 @@ extern void flush_dcache_page(struct pag
static inline void __flush_icache_all(void)
{
asm("ic ialluis");
+ dsb();
}

#define flush_dcache_mmap_lock(mapping) \

2014-02-18 23:23:45

by Greg Kroah-Hartman

Subject: [PATCH 3.10 22/26] pinctrl: protect pinctrl_list add

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Stanislaw Gruszka <[email protected]>

commit 7b320cb1ed2dbd2c5f2a778197baf76fd6bf545a upstream.

We have a few Fedora bug reports about list corruption in pinctrl,
for example:
https://bugzilla.redhat.com/show_bug.cgi?id=1051918

Most likely the corruption happens due to lack of protection of pinctrl_list
when adding new nodes to it. This patch corrects that.

Fixes: 42fed7ba44e ("pinctrl: move subsystem mutex to pinctrl_dev struct")
Signed-off-by: Stanislaw Gruszka <[email protected]>
Acked-by: Stephen Warren <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/pinctrl/core.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/pinctrl/core.c
+++ b/drivers/pinctrl/core.c
@@ -807,7 +807,9 @@ static struct pinctrl *create_pinctrl(st
kref_init(&p->users);

/* Add the pinctrl handle to the global list */
+ mutex_lock(&pinctrl_list_mutex);
list_add_tail(&p->node, &pinctrl_list);
+ mutex_unlock(&pinctrl_list_mutex);

return p;
}

2014-02-18 23:24:05

by Greg Kroah-Hartman

Subject: [PATCH 3.10 21/26] pinctrl: vt8500: Change devicetree data parsing

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Tony Prisk <[email protected]>

commit f17248ed868767567298e1cdf06faf8159a81f7c upstream.

Due to an assumption in the VT8500 pinctrl driver, the value passed
from devicetree for 'wm,pull' was not explicitly translated before
being passed to pinconf.

Since v3.10, changes to 'enum pin_config_param', PIN_CONFIG_BIAS_PULL_(UP/DOWN)
no longer map 1-to-1 with the expected values in devicetree.

This patch adds a small translation between the devicetree values (0..2)
and the enum pin_config_param equivalent values.

Signed-off-by: Tony Prisk <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/pinctrl/vt8500/pinctrl-wmt.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)

--- a/drivers/pinctrl/vt8500/pinctrl-wmt.c
+++ b/drivers/pinctrl/vt8500/pinctrl-wmt.c
@@ -276,7 +276,20 @@ static int wmt_pctl_dt_node_to_map_pull(
if (!configs)
return -ENOMEM;

- configs[0] = pull;
+ switch (pull) {
+ case 0:
+ configs[0] = PIN_CONFIG_BIAS_DISABLE;
+ break;
+ case 1:
+ configs[0] = PIN_CONFIG_BIAS_PULL_DOWN;
+ break;
+ case 2:
+ configs[0] = PIN_CONFIG_BIAS_PULL_UP;
+ break;
+ default:
+ configs[0] = PIN_CONFIG_BIAS_DISABLE;
+ dev_err(data->dev, "invalid pull state %d - disabling\n", pull);
+ }

map->type = PIN_MAP_TYPE_CONFIGS_PIN;
map->data.configs.group_or_pin = data->groups[group];

2014-02-18 23:24:28

by Greg Kroah-Hartman

Subject: [PATCH 3.10 20/26] x86, hweight: Fix BUG when booting with CONFIG_GCOV_PROFILE_ALL=y

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Oberparleiter <[email protected]>

commit 6583327c4dd55acbbf2a6f25e775b28b3abf9a42 upstream.

Commit d61931d89b, "x86: Add optimized popcnt variants" introduced
compile flag -fcall-saved-rdi for lib/hweight.c. When combined with
options -fprofile-arcs and -O2, this flag causes gcc to generate
broken constructor code. As a result, a 64 bit x86 kernel compiled
with CONFIG_GCOV_PROFILE_ALL=y prints the message "gcov: could not create
file" and runs into sporadic BUGs during boot.

The gcc people indicate that these kinds of problems are endemic when
using ad hoc calling conventions. It is therefore best to treat any
file compiled with ad hoc calling conventions as an isolated
environment and avoid things like profiling or coverage analysis,
since those subsystems assume "normal" calling conventions.

This patch avoids the bug by excluding lib/hweight.o from coverage
profiling.

Reported-by: Meelis Roos <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Peter Oberparleiter <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: H. Peter Anvin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
lib/Makefile | 1 +
1 file changed, 1 insertion(+)

--- a/lib/Makefile
+++ b/lib/Makefile
@@ -45,6 +45,7 @@ lib-$(CONFIG_RWSEM_GENERIC_SPINLOCK) +=
lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
lib-$(CONFIG_PERCPU_RWSEM) += percpu-rwsem.o

+GCOV_PROFILE_hweight.o := n
CFLAGS_hweight.o = $(subst $(quote),,$(CONFIG_ARCH_HWEIGHT_CFLAGS))
obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o


2014-02-18 22:46:26

by Greg Kroah-Hartman

Subject: [PATCH 3.10 10/26] arm64: vdso: fix coarse clock handling

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nathan Lynch <[email protected]>

commit 069b918623e1510e58dacf178905a72c3baa3ae4 upstream.

When __kernel_clock_gettime is called with a CLOCK_MONOTONIC_COARSE or
CLOCK_REALTIME_COARSE clock id, it returns incorrectly to whatever the
caller has placed in x2 ("ret x2" to return from the fast path). Fix
this by saving x30/LR to x2 only in code that will call
__do_get_tspec, restoring x30 afterward, and using a plain "ret" to
return from the routine.

Also: while the resulting tv_nsec value for CLOCK_REALTIME and
CLOCK_MONOTONIC must be computed using intermediate values that are
left-shifted by cs_shift (x12, set by __do_get_tspec), the results for
coarse clocks should be calculated using unshifted values
(xtime_coarse_nsec is in units of actual nanoseconds). The current
code shifts intermediate values by x12 unconditionally, but x12 is
uninitialized when servicing a coarse clock. Fix this by setting x12
to 0 once we know we are dealing with a coarse clock id.

Signed-off-by: Nathan Lynch <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/kernel/vdso/gettimeofday.S | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

--- a/arch/arm64/kernel/vdso/gettimeofday.S
+++ b/arch/arm64/kernel/vdso/gettimeofday.S
@@ -103,6 +103,8 @@ ENTRY(__kernel_clock_gettime)
bl __do_get_tspec
seqcnt_check w9, 1b

+ mov x30, x2
+
cmp w0, #CLOCK_MONOTONIC
b.ne 6f

@@ -118,6 +120,9 @@ ENTRY(__kernel_clock_gettime)
ccmp w0, #CLOCK_MONOTONIC_COARSE, #0x4, ne
b.ne 8f

+ /* xtime_coarse_nsec is already right-shifted */
+ mov x12, #0
+
/* Get coarse timespec. */
adr vdso_data, _vdso_data
3: seqcnt_acquire
@@ -156,7 +161,7 @@ ENTRY(__kernel_clock_gettime)
lsr x11, x11, x12
stp x10, x11, [x1, #TSPEC_TV_SEC]
mov x0, xzr
- ret x2
+ ret
7:
mov x30, x2
8: /* Syscall fallback. */

2014-02-18 23:24:54

by Greg Kroah-Hartman

Subject: [PATCH 3.10 02/26] Btrfs: disable snapshot aware defrag for now

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Josef Bacik <[email protected]>

commit 8101c8dbf6243ba517aab58d69bf1bc37d8b7b9c upstream.

It's just broken and it's taking a lot of effort to fix it, so for now just
disable it so people can defrag in peace. Thanks,

Signed-off-by: Josef Bacik <[email protected]>
Signed-off-by: Chris Mason <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/btrfs/inode.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2655,7 +2655,7 @@ static int btrfs_finish_ordered_io(struc
EXTENT_DEFRAG, 1, cached_state);
if (ret) {
u64 last_snapshot = btrfs_root_last_snapshot(&root->root_item);
- if (last_snapshot >= BTRFS_I(inode)->generation)
+ if (0 && last_snapshot >= BTRFS_I(inode)->generation)
/* the inode is shared */
new = record_old_file_extents(inode, ordered_extent);


2014-02-18 23:25:34

by Greg Kroah-Hartman

Subject: [PATCH 3.10 17/26] x86: mm: change tlb_flushall_shift for IvyBridge

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mel Gorman <[email protected]>

commit f98b7a772ab51b52ca4d2a14362fc0e0c8a2e0f3 upstream.

There was a large performance regression that was bisected to
commit 611ae8e3 ("x86/tlb: enable tlb flush range support for
x86"). This patch simply changes the default balance point
between a local and global flush for IvyBridge.

In the interest of allowing the tests to be reproduced, this
patch was tested using mmtests 0.15 with the following
configurations

configs/config-global-dhp__tlbflush-performance
configs/config-global-dhp__scheduler-performance
configs/config-global-dhp__network-performance

Results are from two machines

Ivybridge 4 threads: Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz
Ivybridge 8 threads: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz

Page fault microbenchmark showed nothing interesting.

Ebizzy was configured to run multiple iterations and threads.
Thread counts ranged from 1 to NR_CPUS*2. For each thread count,
it ran 100 iterations and each iteration lasted 10 seconds.

Ivybridge 4 threads
3.13.0-rc7 3.13.0-rc7
vanilla altshift-v3
Mean 1 6395.44 ( 0.00%) 6789.09 ( 6.16%)
Mean 2 7012.85 ( 0.00%) 8052.16 ( 14.82%)
Mean 3 6403.04 ( 0.00%) 6973.74 ( 8.91%)
Mean 4 6135.32 ( 0.00%) 6582.33 ( 7.29%)
Mean 5 6095.69 ( 0.00%) 6526.68 ( 7.07%)
Mean 6 6114.33 ( 0.00%) 6416.64 ( 4.94%)
Mean 7 6085.10 ( 0.00%) 6448.51 ( 5.97%)
Mean 8 6120.62 ( 0.00%) 6462.97 ( 5.59%)

Ivybridge 8 threads
3.13.0-rc7 3.13.0-rc7
vanilla altshift-v3
Mean 1 7336.65 ( 0.00%) 7787.02 ( 6.14%)
Mean 2 8218.41 ( 0.00%) 9484.13 ( 15.40%)
Mean 3 7973.62 ( 0.00%) 8922.01 ( 11.89%)
Mean 4 7798.33 ( 0.00%) 8567.03 ( 9.86%)
Mean 5 7158.72 ( 0.00%) 8214.23 ( 14.74%)
Mean 6 6852.27 ( 0.00%) 7952.45 ( 16.06%)
Mean 7 6774.65 ( 0.00%) 7536.35 ( 11.24%)
Mean 8 6510.50 ( 0.00%) 6894.05 ( 5.89%)
Mean 12 6182.90 ( 0.00%) 6661.29 ( 7.74%)
Mean 16 6100.09 ( 0.00%) 6608.69 ( 8.34%)

Ebizzy hits the worst case scenario for TLB range flushing every
time and it shows for these Ivybridge CPUs at least that the
default choice is a poor one. The patch addresses the problem.

Next was a tlbflush microbenchmark written by Alex Shi at
http://marc.info/?l=linux-kernel&m=133727348217113 . It
measures access costs while the TLB is being flushed. The
expectation is that if there are always full TLB flushes that
the benchmark would suffer and it benefits from range flushing

There are 320 iterations of the test per thread count. The
number of entries is randomly selected with a min of 1 and max
of 512. To ensure a reasonably even spread of entries, the full
range is broken up into 8 sections and a random number selected
within that section.

iteration 1, random number between 0-64
iteration 2, random number between 64-128 etc

This is still a very weak methodology. When you do not know
what typical ranges are, random is a reasonable choice but it
can easily be argued that the optimisation was for smaller ranges
and an even spread is not representative of any workload that
matters. To improve this, we'd need to know the probability
distribution of TLB flush range sizes for a set of workloads
that are considered "common", build a synthetic trace and feed
that into this benchmark. Even that is not perfect because it
would not account for the time between flushes but there are
limits of what can be reasonably done and still be doing
something useful. If a representative synthetic trace is
provided then this benchmark could be revisited and the shift values retuned.

Ivybridge 4 threads
3.13.0-rc7 3.13.0-rc7
vanilla altshift-v3
Mean 1 10.50 ( 0.00%) 10.50 ( 0.03%)
Mean 2 17.59 ( 0.00%) 17.18 ( 2.34%)
Mean 3 22.98 ( 0.00%) 21.74 ( 5.41%)
Mean 5 47.13 ( 0.00%) 46.23 ( 1.92%)
Mean 8 43.30 ( 0.00%) 42.56 ( 1.72%)

Ivybridge 8 threads
3.13.0-rc7 3.13.0-rc7
vanilla altshift-v3
Mean 1 9.45 ( 0.00%) 9.36 ( 0.93%)
Mean 2 9.37 ( 0.00%) 9.70 ( -3.54%)
Mean 3 9.36 ( 0.00%) 9.29 ( 0.70%)
Mean 5 14.49 ( 0.00%) 15.04 ( -3.75%)
Mean 8 41.08 ( 0.00%) 38.73 ( 5.71%)
Mean 13 32.04 ( 0.00%) 31.24 ( 2.49%)
Mean 16 40.05 ( 0.00%) 39.04 ( 2.51%)

For both CPUs, average access time is reduced which is good as
this is the benchmark that was used to tune the shift values in
the first place, albeit it is not known *how* the benchmark was
originally used.

The scheduler benchmarks were somewhat inconclusive. They
showed gains and losses and make me reconsider how stable those
benchmarks really are or if something else might be interfering
with the test results recently.

Network benchmarks were inconclusive. Almost all results were
flat except for netperf-udp tests on the 4 thread machine.
These results were unstable and showed large variations between
reboots. It is unknown if this is a recent problem, but I've
noticed before that netperf-udp results tend to vary.

Based on these results, changing the default for Ivybridge seems
like a logical choice.

Signed-off-by: Mel Gorman <[email protected]>
Tested-by: Davidlohr Bueso <[email protected]>
Reviewed-by: Alex Shi <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/kernel/cpu/intel.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -628,7 +628,7 @@ static void __cpuinit intel_tlb_flushall
tlb_flushall_shift = 5;
break;
case 0x63a: /* Ivybridge */
- tlb_flushall_shift = 1;
+ tlb_flushall_shift = 2;
break;
default:
tlb_flushall_shift = 6;

2014-02-18 23:25:29

by Greg Kroah-Hartman

Subject: [PATCH 3.10 18/26] [media] af9035: add ID [2040:f900] Hauppauge WinTV-MiniStick 2

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Antti Palosaari <[email protected]>

commit f2e4c5e004691dfe37d0e4b363296f28abdb9bc7 upstream.

Add USB ID [2040:f900] for Hauppauge WinTV-MiniStick 2.
The device is built upon the IT9135 chipset.

Tested-by: Stefan Becker <[email protected]>
Signed-off-by: Antti Palosaari <[email protected]>
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/media/usb/dvb-usb-v2/af9035.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/media/usb/dvb-usb-v2/af9035.c
+++ b/drivers/media/usb/dvb-usb-v2/af9035.c
@@ -1517,6 +1517,8 @@ static const struct usb_device_id af9035
&af9035_props, "TerraTec Cinergy T Stick Dual RC (rev. 2)", NULL) },
{ DVB_USB_DEVICE(USB_VID_LEADTEK, 0x6a05,
&af9035_props, "Leadtek WinFast DTV Dongle Dual", NULL) },
+ { DVB_USB_DEVICE(USB_VID_HAUPPAUGE, 0xf900,
+ &af9035_props, "Hauppauge WinTV-MiniStick 2", NULL) },
{ }
};
MODULE_DEVICE_TABLE(usb, af9035_id_table);

2014-02-18 23:26:57

by Greg Kroah-Hartman

Subject: [PATCH 3.10 16/26] mm: __set_page_dirty uses spin_lock_irqsave instead of spin_lock_irq

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: KOSAKI Motohiro <[email protected]>

commit 227d53b397a32a7614667b3ecaf1d89902fb6c12 upstream.

Using spin_{un}lock_irq() is dangerous if the caller has disabled interrupts.
During aio buffer migration, it is possible to hit the following
call stack.

aio_migratepage [disable interrupt]
migrate_page_copy
clear_page_dirty_for_io
set_page_dirty
__set_page_dirty_buffers
__set_page_dirty
spin_lock_irq

This means current aio migration is deadlockable. spin_lock_irqsave()
is a safer alternative and we should use it.
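
The pattern the patch applies, as a sketch of the general rule rather than
the full function: a helper that may run with interrupts already disabled
must save and restore the caller's interrupt state instead of
unconditionally re-enabling interrupts on unlock.

	unsigned long flags;

	spin_lock_irqsave(&mapping->tree_lock, flags);
	/* ... tag the page dirty in the radix tree ... */
	spin_unlock_irqrestore(&mapping->tree_lock, flags);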

Signed-off-by: KOSAKI Motohiro <[email protected]>
Reported-by: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/buffer.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -620,14 +620,16 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
static void __set_page_dirty(struct page *page,
struct address_space *mapping, int warn)
{
- spin_lock_irq(&mapping->tree_lock);
+ unsigned long flags;
+
+ spin_lock_irqsave(&mapping->tree_lock, flags);
if (page->mapping) { /* Race with truncate? */
WARN_ON_ONCE(warn && !PageUptodate(page));
account_page_dirtied(page, mapping);
radix_tree_tag_set(&mapping->page_tree,
page_index(page), PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irq(&mapping->tree_lock);
+ spin_unlock_irqrestore(&mapping->tree_lock, flags);
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
}


2014-02-18 22:46:21

by Greg Kroah-Hartman

Subject: [PATCH 3.10 01/26] SELinux: Fix kernel BUG on empty security contexts.

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Stephen Smalley <[email protected]>

commit 2172fa709ab32ca60e86179dc67d0857be8e2c98 upstream.

Setting an empty security context (length=0) on a file will
lead to incorrectly dereferencing the type and other fields
of the security context structure, yielding a kernel BUG.
As a zero-length security context is never valid, just reject
all such security contexts whether coming from userspace
via setxattr or coming from the filesystem upon a getxattr
request by SELinux.

Setting a security context value (empty or otherwise) unknown to
SELinux in the first place is only possible for a root process
(CAP_MAC_ADMIN), and, if running SELinux in enforcing mode, only
if the corresponding SELinux mac_admin permission is also granted
to the domain by policy. In Fedora policies, this is only allowed for
specific domains such as livecd for setting down security contexts
that are not defined in the build host policy.

Reproducer:
su
setenforce 0
touch foo
setfattr -n security.selinux foo

Caveat:
Relabeling or removing foo after doing the above may not be possible
without booting with SELinux disabled. Any subsequent access to foo
after doing the above will also trigger the BUG.

BUG output from Matthew Thode:
[ 473.893141] ------------[ cut here ]------------
[ 473.962110] kernel BUG at security/selinux/ss/services.c:654!
[ 473.995314] invalid opcode: 0000 [#6] SMP
[ 474.027196] Modules linked in:
[ 474.058118] CPU: 0 PID: 8138 Comm: ls Tainted: G D I
3.13.0-grsec #1
[ 474.116637] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0
07/29/10
[ 474.149768] task: ffff8805f50cd010 ti: ffff8805f50cd488 task.ti:
ffff8805f50cd488
[ 474.183707] RIP: 0010:[<ffffffff814681c7>] [<ffffffff814681c7>]
context_struct_compute_av+0xce/0x308
[ 474.219954] RSP: 0018:ffff8805c0ac3c38 EFLAGS: 00010246
[ 474.252253] RAX: 0000000000000000 RBX: ffff8805c0ac3d94 RCX:
0000000000000100
[ 474.287018] RDX: ffff8805e8aac000 RSI: 00000000ffffffff RDI:
ffff8805e8aaa000
[ 474.321199] RBP: ffff8805c0ac3cb8 R08: 0000000000000010 R09:
0000000000000006
[ 474.357446] R10: 0000000000000000 R11: ffff8805c567a000 R12:
0000000000000006
[ 474.419191] R13: ffff8805c2b74e88 R14: 00000000000001da R15:
0000000000000000
[ 474.453816] FS: 00007f2e75220800(0000) GS:ffff88061fc00000(0000)
knlGS:0000000000000000
[ 474.489254] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 474.522215] CR2: 00007f2e74716090 CR3: 00000005c085e000 CR4:
00000000000207f0
[ 474.556058] Stack:
[ 474.584325] ffff8805c0ac3c98 ffffffff811b549b ffff8805c0ac3c98
ffff8805f1190a40
[ 474.618913] ffff8805a6202f08 ffff8805c2b74e88 00068800d0464990
ffff8805e8aac860
[ 474.653955] ffff8805c0ac3cb8 000700068113833a ffff880606c75060
ffff8805c0ac3d94
[ 474.690461] Call Trace:
[ 474.723779] [<ffffffff811b549b>] ? lookup_fast+0x1cd/0x22a
[ 474.778049] [<ffffffff81468824>] security_compute_av+0xf4/0x20b
[ 474.811398] [<ffffffff8196f419>] avc_compute_av+0x2a/0x179
[ 474.843813] [<ffffffff8145727b>] avc_has_perm+0x45/0xf4
[ 474.875694] [<ffffffff81457d0e>] inode_has_perm+0x2a/0x31
[ 474.907370] [<ffffffff81457e76>] selinux_inode_getattr+0x3c/0x3e
[ 474.938726] [<ffffffff81455cf6>] security_inode_getattr+0x1b/0x22
[ 474.970036] [<ffffffff811b057d>] vfs_getattr+0x19/0x2d
[ 475.000618] [<ffffffff811b05e5>] vfs_fstatat+0x54/0x91
[ 475.030402] [<ffffffff811b063b>] vfs_lstat+0x19/0x1b
[ 475.061097] [<ffffffff811b077e>] SyS_newlstat+0x15/0x30
[ 475.094595] [<ffffffff8113c5c1>] ? __audit_syscall_entry+0xa1/0xc3
[ 475.148405] [<ffffffff8197791e>] system_call_fastpath+0x16/0x1b
[ 475.179201] Code: 00 48 85 c0 48 89 45 b8 75 02 0f 0b 48 8b 45 a0 48
8b 3d 45 d0 b6 00 8b 40 08 89 c6 ff ce e8 d1 b0 06 00 48 85 c0 49 89 c7
75 02 <0f> 0b 48 8b 45 b8 4c 8b 28 eb 1e 49 8d 7d 08 be 80 01 00 00 e8
[ 475.255884] RIP [<ffffffff814681c7>]
context_struct_compute_av+0xce/0x308
[ 475.296120] RSP <ffff8805c0ac3c38>
[ 475.328734] ---[ end trace f076482e9d754adc ]---

Reported-by: Matthew Thode <[email protected]>
Signed-off-by: Stephen Smalley <[email protected]>
Signed-off-by: Paul Moore <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
security/selinux/ss/services.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -1231,6 +1231,10 @@ static int security_context_to_sid_core(
struct context context;
int rc = 0;

+ /* An empty security context is never valid. */
+ if (!scontext_len)
+ return -EINVAL;
+
if (!ss_initialized) {
int i;


2014-02-18 23:27:37

by Greg Kroah-Hartman

Subject: [PATCH 3.10 15/26] mm: __set_page_dirty_nobuffers() uses spin_lock_irqsave() instead of spin_lock_irq()

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: KOSAKI Motohiro <[email protected]>

commit a85d9df1ea1d23682a0ed1e100e6965006595d06 upstream.

During an aio stress test, we observed the following lockdep warning. This
means AIO+numa_balancing is currently deadlockable.

The problem is that aio_migratepage disables interrupts, but
__set_page_dirty_nobuffers unintentionally enables them again.

Generally, all helper functions should use spin_lock_irqsave() instead of
spin_lock_irq() because they don't know their caller at all.

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&(&ctx->completion_lock)->rlock);
<Interrupt>
lock(&(&ctx->completion_lock)->rlock);

*** DEADLOCK ***

dump_stack+0x19/0x1b
print_usage_bug+0x1f7/0x208
mark_lock+0x21d/0x2a0
mark_held_locks+0xb9/0x140
trace_hardirqs_on_caller+0x105/0x1d0
trace_hardirqs_on+0xd/0x10
_raw_spin_unlock_irq+0x2c/0x50
__set_page_dirty_nobuffers+0x8c/0xf0
migrate_page_copy+0x434/0x540
aio_migratepage+0xb1/0x140
move_to_new_page+0x7d/0x230
migrate_pages+0x5e5/0x700
migrate_misplaced_page+0xbc/0xf0
do_numa_page+0x102/0x190
handle_pte_fault+0x241/0x970
handle_mm_fault+0x265/0x370
__do_page_fault+0x172/0x5a0
do_page_fault+0x1a/0x70
page_fault+0x28/0x30

Signed-off-by: KOSAKI Motohiro <[email protected]>
Cc: Larry Woodman <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Johannes Weiner <[email protected]>
Acked-by: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/page-writeback.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2026,11 +2026,12 @@ int __set_page_dirty_nobuffers(struct pa
if (!TestSetPageDirty(page)) {
struct address_space *mapping = page_mapping(page);
struct address_space *mapping2;
+ unsigned long flags;

if (!mapping)
return 1;

- spin_lock_irq(&mapping->tree_lock);
+ spin_lock_irqsave(&mapping->tree_lock, flags);
mapping2 = page_mapping(page);
if (mapping2) { /* Race with truncate? */
BUG_ON(mapping2 != mapping);
@@ -2039,7 +2040,7 @@ int __set_page_dirty_nobuffers(struct pa
radix_tree_tag_set(&mapping->page_tree,
page_index(page), PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irq(&mapping->tree_lock);
+ spin_unlock_irqrestore(&mapping->tree_lock, flags);
if (mapping->host) {
/* !PageAnon && !swapper_space */
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);

2014-02-18 23:27:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 14/26] ALSA: hda - Add missing mixer widget for AD1983

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Takashi Iwai <[email protected]>

commit c7579fed1f1b2567529aea64ef19871337403ab3 upstream.

The mixer widget on AD1983 at NID 0x0e was missing in the commit
[f2f8be43c5c9: ALSA: hda - Add aamix NID to AD codecs].

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=70011
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/pci/hda/patch_analog.c | 1 +
1 file changed, 1 insertion(+)

--- a/sound/pci/hda/patch_analog.c
+++ b/sound/pci/hda/patch_analog.c
@@ -1680,6 +1680,7 @@ static int ad1983_parse_auto_config(stru
return err;
spec = codec->spec;

+ spec->gen.mixer_nid = 0x0e;
spec->gen.beep_nid = 0x10;
set_beep_amp(spec, 0x10, 0, HDA_OUTPUT);
err = ad198x_parse_auto_config(codec);

2014-02-18 23:28:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 12/26] ALSA: usb-audio: Add missing kconfig dependecy

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Takashi Iwai <[email protected]>

commit 4fa71c1550a857ff1dbfe9e99acc1f4cfec5f0d0 upstream.

Commit 44dcbbb1cd61 introduced the use of the bitreverse helpers but
forgot to add the corresponding dependency. This patch adds a select
for CONFIG_BITREVERSE.

Fixes: 44dcbbb1cd61 ('ALSA: snd-usb: add support for bit-reversed byte formats')
Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/usb/Kconfig | 1 +
1 file changed, 1 insertion(+)

--- a/sound/usb/Kconfig
+++ b/sound/usb/Kconfig
@@ -14,6 +14,7 @@ config SND_USB_AUDIO
select SND_HWDEP
select SND_RAWMIDI
select SND_PCM
+ select BITREVERSE
help
Say Y here to include support for USB audio and USB MIDI
devices.
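
For context, the bitreverse helpers come from <linux/bitrev.h>; bitrev8()
is backed by byte_rev_table, which is built in lib/bitrev.c only when
CONFIG_BITREVERSE is enabled, so a caller without the select fails at
build/link time. A minimal illustration (not the driver's actual
conversion code):

/* illustrative only */
#include <linux/bitrev.h>

static u8 convert_sample_byte(u8 in)
{
	/* resolves to byte_rev_table[] from lib/bitrev.c, which is
	 * only built when CONFIG_BITREVERSE is set */
	return bitrev8(in);
}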

2014-02-18 23:28:21

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 3.10 13/26] ALSA: hda - Fix missing VREF setup for Mac Pro 1,1

3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: Takashi Iwai <[email protected]>

commit c20f31ec421ea4fabea5e95a6afd46c5f41e5599 upstream.

Mac Pro 1,1 with the ALC889A codec needs the VREF on NID 0x18 set to
VREF50 in order to make the speaker work. The same fixup was
already needed for MacBook Air 1,1, so we can reuse it.

Reported-by: Nicolai Beuermann <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
sound/pci/hda/patch_realtek.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -1765,6 +1765,7 @@ enum {
ALC889_FIXUP_IMAC91_VREF,
ALC889_FIXUP_MBA11_VREF,
ALC889_FIXUP_MBA21_VREF,
+ ALC889_FIXUP_MP11_VREF,
ALC882_FIXUP_INV_DMIC,
ALC882_FIXUP_NO_PRIMARY_HP,
ALC887_FIXUP_ASUS_BASS,
@@ -2119,6 +2120,12 @@ static const struct hda_fixup alc882_fix
.chained = true,
.chain_id = ALC889_FIXUP_MBP_VREF,
},
+ [ALC889_FIXUP_MP11_VREF] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc889_fixup_mba11_vref,
+ .chained = true,
+ .chain_id = ALC885_FIXUP_MACPRO_GPIO,
+ },
[ALC882_FIXUP_INV_DMIC] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_inv_dmic_0x12,
@@ -2176,7 +2183,7 @@ static const struct snd_pci_quirk alc882
SND_PCI_QUIRK(0x106b, 0x00a0, "MacBookPro 3,1", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x00a1, "Macbook", ALC889_FIXUP_MBP_VREF),
SND_PCI_QUIRK(0x106b, 0x00a4, "MacbookPro 4,1", ALC889_FIXUP_MBP_VREF),
- SND_PCI_QUIRK(0x106b, 0x0c00, "Mac Pro", ALC885_FIXUP_MACPRO_GPIO),
+ SND_PCI_QUIRK(0x106b, 0x0c00, "Mac Pro", ALC889_FIXUP_MP11_VREF),
SND_PCI_QUIRK(0x106b, 0x1000, "iMac 24", ALC885_FIXUP_MACPRO_GPIO),
SND_PCI_QUIRK(0x106b, 0x2800, "AppleTV", ALC885_FIXUP_MACPRO_GPIO),
SND_PCI_QUIRK(0x106b, 0x2c00, "MacbookPro rev3", ALC889_FIXUP_MBP_VREF),
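
For reference, the reused alc889_fixup_mba11_vref() callback is already
defined elsewhere in patch_realtek.c and is not shown in this diff; in
rough outline it forces the VREF50 pin control on the relevant pin at
init time, along these lines (a hedged sketch with an illustrative
function name, not the exact function body):

/* sketch of the VREF50 fixup pattern, meant as a snippet inside
 * patch_realtek.c; details may differ from the real
 * alc889_fixup_mba11_vref() */
static void vref50_fixup_sketch(struct hda_codec *codec,
				const struct hda_fixup *fix, int action)
{
	static hda_nid_t nids[] = { 0x18 };
	int i;

	if (action != HDA_FIXUP_ACT_INIT)
		return;
	for (i = 0; i < ARRAY_SIZE(nids); i++) {
		unsigned int val = snd_hda_codec_get_pin_target(codec, nids[i]);

		val &= ~AC_PINCTL_VREFEN;	/* clear any existing VREF */
		val |= AC_PINCTL_VREF_50;	/* force VREF50 */
		snd_hda_set_pin_ctl(codec, nids[i], val);
	}
}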

2014-02-19 04:27:46

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/26] 3.10.31-stable review

On 02/18/2014 02:46 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.10.31 release.
> There are 26 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Feb 20 22:45:20 UTC 2014.
> Anything received after that time might be too late.
>

Build results:
total: 126 pass: 121 skipped: 4 fail: 1

qemu tests all passed.

Details are available at http://server.roeck-us.net:8010/builders.

Results are as expected.

Guenter

2014-02-20 00:30:03

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/26] 3.10.31-stable review

On 02/18/2014 03:46 PM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 3.10.31 release.
> There are 26 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Thu Feb 20 22:45:20 UTC 2014.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.31-rc1.gz
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Compile and boot tests passed on the AMD system. Boot failed on the Intel
systems. I think the following changes are the suspects, so far by
process of elimination - these two aren't in 3.12 and 3.13:

# modified: mm/hugetlb.c
# modified: mm/memory-failure.c

However, my strong suspect is the following:

Xishi Qiu <[email protected]>
mm: fix process accidentally killed by mce because of huge page
migration

I don't see how this could cause problems; nonetheless, I will test
without these changes and let you know.


Naoya Horiguchi <[email protected]>
mm/memory-failure.c: fix memory leak in successful soft offlining

I will test without these changes and let you know.

-- Shuah

Shuah Khan
Senior Linux Kernel Developer - Open Source Group
Samsung Research America(Silicon Valley)
[email protected] | (970) 672-0658

2014-02-20 07:34:12

by Xishi Qiu

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/26] 3.10.31-stable review

On 2014/2/20 8:29, Shuah Khan wrote:

> On 02/18/2014 03:46 PM, Greg Kroah-Hartman wrote:
>> This is the start of the stable review cycle for the 3.10.31 release.
>> There are 26 patches in this series, all will be posted as a response
>> to this one. If anyone has any issues with these being applied, please
>> let me know.
>>
>> Responses should be made by Thu Feb 20 22:45:20 UTC 2014.
>> Anything received after that time might be too late.
>>
>> The whole patch series can be found in one patch at:
>> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.31-rc1.gz
>> and the diffstat can be found below.
>>
>> thanks,
>>
>> greg k-h
>>
>
> Compile and boot tests passed on AMD system. Boot failed on Intel systems. I think the following changes are the suspect, so far by process of elimination - these two aren't in 3.12 and 3.13
>
> # modified: mm/hugetlb.c
> # modified: mm/memory-failure.c
>
> However, my strong suspect is the following:
>
> Xishi Qiu <[email protected]>
> mm: fix process accidentally killed by mce because of huge page migration
>
> I don't see how this could cause problems, none the less, I will test without these changes and let you know.
>
>
> Naoya Horiguchi <[email protected]>
> mm/memory-failure.c: fix memory leak in successful soft offlining
>
> I will test without these changes and let you know.
>
> -- Shuah
>

Hi Shuah

I tested on my system, and it booted successfully.

hardware: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
OS: v3.10.30 + the two patches

Thanks,
Xishi Qiu

> Shuah Khan
> Senior Linux Kernel Developer - Open Source Group
> Samsung Research America(Silicon Valley)
> [email protected] | (970) 672-0658
>
>


2014-02-20 13:39:42

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/26] 3.10.31-stable review

On 02/20/2014 12:30 AM, Xishi Qiu wrote:
> On 2014/2/20 8:29, Shuah Khan wrote:
>
>> On 02/18/2014 03:46 PM, Greg Kroah-Hartman wrote:
>>> This is the start of the stable review cycle for the 3.10.31 release.
>>> There are 26 patches in this series, all will be posted as a response
>>> to this one. If anyone has any issues with these being applied, please
>>> let me know.
>>>
>>> Responses should be made by Thu Feb 20 22:45:20 UTC 2014.
>>> Anything received after that time might be too late.
>>>
>>> The whole patch series can be found in one patch at:
>>> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.31-rc1.gz
>>> and the diffstat can be found below.
>>>
>>> thanks,
>>>
>>> greg k-h
>>>
>>
>> Compile and boot tests passed on AMD system. Boot failed on Intel systems. I think the following changes are the suspect, so far by process of elimination - these two aren't in 3.12 and 3.13
>>
>> # modified: mm/hugetlb.c
>> # modified: mm/memory-failure.c
>>
>> However, my strong suspect is the following:
>>
>> Xishi Qiu <[email protected]>
>> mm: fix process accidentally killed by mce because of huge page migration
>>
>> I don't see how this could cause problems, none the less, I will test without these changes and let you know.
>>
>>
>> Naoya Horiguchi <[email protected]>
>> mm/memory-failure.c: fix memory leak in successful soft offlining
>>
>> I will test without these changes and let you know.
>>
>> -- Shuah
>>
>
> Hi Shuah
>
> I tested on my system, it boot successfully.
>
> hardware: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
> OS: v3.10.30 + the two patches
>
> Thanks,
> Xishi Qiu

Xishi,

I tested without your patch and still see the issue. My wild guess
wasn't a good one :) I am starting git bisect now.

-- Shuah


--
Shuah Khan
Senior Linux Kernel Developer - Open Source Group
Samsung Research America(Silicon Valley)
[email protected] | (970) 672-0658

2014-02-20 17:01:44

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/26] 3.10.31-stable review

On 02/20/2014 06:39 AM, Shuah Khan wrote:
> On 02/20/2014 12:30 AM, Xishi Qiu wrote:
>> On 2014/2/20 8:29, Shuah Khan wrote:
>>
>>> On 02/18/2014 03:46 PM, Greg Kroah-Hartman wrote:
>>>> This is the start of the stable review cycle for the 3.10.31 release.
>>>> There are 26 patches in this series, all will be posted as a response
>>>> to this one. If anyone has any issues with these being applied, please
>>>> let me know.
>>>>
>>>> Responses should be made by Thu Feb 20 22:45:20 UTC 2014.
>>>> Anything received after that time might be too late.
>>>>
>>>> The whole patch series can be found in one patch at:
>>>>
>>>> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.31-rc1.gz
>>>> and the diffstat can be found below.
>>>>
>>>> thanks,
>>>>
>>>> greg k-h
>>>>
>>>
>>> Compile and boot tests passed on AMD system. Boot failed on Intel
>>> systems. I think the following changes are the suspect, so far by
>>> process of elimination - these two aren't in 3.12 and 3.13
>>>
>>> # modified: mm/hugetlb.c
>>> # modified: mm/memory-failure.c
>>>
>>> However, my strong suspect is the following:
>>>
>>> Xishi Qiu <[email protected]>
>>> mm: fix process accidentally killed by mce because of huge page
>>> migration
>>>
>>> I don't see how this could cause problems, none the less, I will test
>>> without these changes and let you know.
>>>
>>>
>>> Naoya Horiguchi <[email protected]>
>>> mm/memory-failure.c: fix memory leak in successful soft offlining
>>>
>>> I will test without these changes and let you know.
>>>
>>> -- Shuah
>>>
>>
>> Hi Shuah
>>
>> I tested on my system, it boot successfully.
>>
>> hardware: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
>> OS: v3.10.30 + the two patches
>>
>> Thanks,
>> Xishi Qiu
>
> Xishi,
>
> I tested without your patch and still see the issue. My wild guess
> wasn't a good one :) I am starting git bisect now.
>
> -- Shuah
>
>

OK, I have isolated it to the following patch:

Dirk Brandewie <[email protected]>
intel_pstate: Take core C0 time into account for core busy calculation

From: Dirk Brandewie <[email protected]>

commit fcb6a15c2e7e76d493e6f91ea889ab40e1c643a4 upstream.

Take non-idle time into account when calculating core busy time.
This ensures that intel_pstate will notice a decrease in load.
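
For reference, the change in question reworks the core busy estimate
roughly along these lines (a sketch of the idea with illustrative names,
not the driver's actual function): the APERF/MPERF performance ratio is
scaled by the fraction of the sampling interval the core spent in C0, so
a mostly idle core no longer reports as fully busy:

/* sketch only; names and exact scaling are illustrative.
 * div64_u64() comes from <linux/math64.h>. */
static u64 core_busy_pct(u64 aperf, u64 mperf, u64 tsc)
{
	u64 perf_ratio_pct = div64_u64(aperf * 100, mperf);
	u64 c0_pct = div64_u64(mperf * 100, tsc);	/* time spent in C0 */

	return div64_u64(perf_ratio_pct * c0_pct, 100);
}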

Boots just fine without this change.

-- Shuah

--
Shuah Khan
Senior Linux Kernel Developer - Open Source Group
Samsung Research America(Silicon Valley)
[email protected] | (970) 672-0658

2014-02-20 17:10:04

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 3.10 00/26] 3.10.31-stable review

On Thu, Feb 20, 2014 at 10:01:38AM -0700, Shuah Khan wrote:
> On 02/20/2014 06:39 AM, Shuah Khan wrote:
> > On 02/20/2014 12:30 AM, Xishi Qiu wrote:
> >> On 2014/2/20 8:29, Shuah Khan wrote:
> >>
> >>> On 02/18/2014 03:46 PM, Greg Kroah-Hartman wrote:
> >>>> This is the start of the stable review cycle for the 3.10.31 release.
> >>>> There are 26 patches in this series, all will be posted as a response
> >>>> to this one. If anyone has any issues with these being applied, please
> >>>> let me know.
> >>>>
> >>>> Responses should be made by Thu Feb 20 22:45:20 UTC 2014.
> >>>> Anything received after that time might be too late.
> >>>>
> >>>> The whole patch series can be found in one patch at:
> >>>>
> >>>> kernel.org/pub/linux/kernel/v3.0/stable-review/patch-3.10.31-rc1.gz
> >>>> and the diffstat can be found below.
> >>>>
> >>>> thanks,
> >>>>
> >>>> greg k-h
> >>>>
> >>>
> >>> Compile and boot tests passed on AMD system. Boot failed on Intel
> >>> systems. I think the following changes are the suspect, so far by
> >>> process of elimination - these two aren't in 3.12 and 3.13
> >>>
> >>> # modified: mm/hugetlb.c
> >>> # modified: mm/memory-failure.c
> >>>
> >>> However, my strong suspect is the following:
> >>>
> >>> Xishi Qiu <[email protected]>
> >>> mm: fix process accidentally killed by mce because of huge page
> >>> migration
> >>>
> >>> I don't see how this could cause problems, none the less, I will test
> >>> without these changes and let you know.
> >>>
> >>>
> >>> Naoya Horiguchi <[email protected]>
> >>> mm/memory-failure.c: fix memory leak in successful soft offlining
> >>>
> >>> I will test without these changes and let you know.
> >>>
> >>> -- Shuah
> >>>
> >>
> >> Hi Shuah
> >>
> >> I tested on my system, it boot successfully.
> >>
> >> hardware: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
> >> OS: v3.10.30 + the two patches
> >>
> >> Thanks,
> >> Xishi Qiu
> >
> > Xishi,
> >
> > I tested without your patch and still see the issue. My wild guess
> > wasn't a good one :) I am starting git bisect now.
> >
> > -- Shuah
> >
> >
>
> ok I have it isolated to the following patch:
>
> Dirk Brandewie <[email protected]>
> intel_pstate: Take core C0 time into account for core busy calculation
>
> From: Dirk Brandewie <[email protected]>
>
> commit fcb6a15c2e7e76d493e6f91ea889ab40e1c643a4 upstream.
>
> Take non-idle time into account when calculating core busy time.
> This ensures that intel_pstate will notice a decrease in load.
>
> Boots just fine without this change.

Thanks for testing, I'll be dropping that patch as it causes major
regressions on my boxes as well.

greg k-h