2017-07-19 10:09:00

by Greg Kroah-Hartman

Subject: [PATCH 4.11 00/88] 4.11.12-stable review

This is the start of the stable review cycle for the 4.11.12 release.
There are 88 patches in this series, all of which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Fri Jul 21 10:07:36 UTC 2017.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.11.12-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.11.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 4.11.12-rc1

Haozhong Zhang <[email protected]>
kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS

Jim Mattson <[email protected]>
kvm: vmx: Check value written to IA32_BNDCFGS

Jim Mattson <[email protected]>
kvm: x86: Guest BNDCFGS requires guest MPX support

Jim Mattson <[email protected]>
kvm: vmx: Do not disable intercepts for BNDCFGS

Dan Carpenter <[email protected]>
PM / QoS: return -EINVAL for bogus strings

Ville Syrjälä <[email protected]>
ALSA: x86: Clear the pdata.notify_lpe_audio pointer before teardown

Thomas Gleixner <[email protected]>
PM / wakeirq: Convert to SRCU

Peter Zijlstra <[email protected]>
sched/topology: Fix overlapping sched_group_mask

Lauro Ramos Venancio <[email protected]>
sched/topology: Optimize build_group_mask()

Peter Zijlstra <[email protected]>
sched/topology: Fix building of overlapping sched-groups

Peter Zijlstra <[email protected]>
sched/fair, cpumask: Export for_each_cpu_wrap()

Horia Geantă <[email protected]>
crypto: caam - fix signals handling

David Gstir <[email protected]>
crypto: caam - properly set IV after {en,de}crypt

Herbert Xu <[email protected]>
crypto: sha1-ssse3 - Disable avx2

Gilad Ben-Yossef <[email protected]>
crypto: atmel - only treat EBUSY as transient if backlog

Martin Hicks <[email protected]>
crypto: talitos - Extend max key length for SHA384/512-HMAC and AEAD

Helge Deller <[email protected]>
mm: fix overflow check in expand_upwards()

Andy Lutomirski <[email protected]>
selftests/capabilities: Fix the test_execve test

Eric W. Biederman <[email protected]>
mnt: Make propagate_umount less slow for overlapping mount propagation trees

Eric W. Biederman <[email protected]>
mnt: In propgate_umount handle visiting mounts in any order

Eric W. Biederman <[email protected]>
mnt: In umount propagation reparent in a separate pass

Michael Kelley <[email protected]>
Drivers: hv: vmbus: Close timing hole that can corrupt per-cpu page

Johan Hovold <[email protected]>
nvmem: core: fix leaks on registration errors

Paul E. McKenney <[email protected]>
rcu: Add memory barriers for NOCB leader wakeup

Adam Borowski <[email protected]>
vt: fix unchecked __put_user() in tioclinux ioctls

Dong Bo <[email protected]>
arm64: Preventing READ_IMPLIES_EXEC propagation

Marc Zyngier <[email protected]>
ARM64: dts: marvell: armada37xx: Fix timer interrupt specifiers

Balbir Singh <[email protected]>
powerpc/kexec: Fix radix to hash kexec due to IAMR/AMOR

Kees Cook <[email protected]>
exec: Limit arg stack to at most 75% of _STK_LIM

Kees Cook <[email protected]>
s390: reduce ELF_ET_DYN_BASE

Kees Cook <[email protected]>
powerpc: move ELF_ET_DYN_BASE to 4GB / 4MB

Kees Cook <[email protected]>
arm64: move ELF_ET_DYN_BASE to 4GB / 4MB

Kees Cook <[email protected]>
arm: move ELF_ET_DYN_BASE to 4MB

Kees Cook <[email protected]>
binfmt_elf: use ELF_ET_DYN_BASE only for PIE

Cyril Bur <[email protected]>
checkpatch: silence perl 5.26.0 unescaped left brace warnings

Sahitya Tummala <[email protected]>
fs/dcache.c: fix spin lockup issue on nlru->lock

Sahitya Tummala <[email protected]>
mm/list_lru.c: fix list_lru_count_node() to be race free

Marcin Nowakowski <[email protected]>
kernel/extable.c: mark core_kernel_text notrace

Kirill A. Shutemov <[email protected]>
thp, mm: fix crash due race in MADV_FREE handling

Ben Hutchings <[email protected]>
tools/lib/lockdep: Reduce MAX_LOCK_DEPTH to avoid overflowing lock_chain::depth

Helge Deller <[email protected]>
parisc/mm: Ensure IRQs are off in switch_mm()

Thomas Bogendoerfer <[email protected]>
parisc: DMA API: return error instead of BUG_ON for dma ops on non dma devs

Eric Biggers <[email protected]>
parisc: use compat_sys_keyctl()

Helge Deller <[email protected]>
parisc: Report SIGSEGV instead of SIGBUS when running out of stack

Suzuki K Poulose <[email protected]>
irqchip/gic-v3: Fix out-of-bound access in gic_set_affinity

Alex Deucher <[email protected]>
drm/amdgpu/gfx6: properly cache mc_arb_ramcfg

Srinivas Dasari <[email protected]>
cfg80211: Check if NAN service ID is of expected size

Srinivas Dasari <[email protected]>
cfg80211: Check if PMKID attribute is of expected size

Srinivas Dasari <[email protected]>
cfg80211: Validate frequencies nested in NL80211_ATTR_SCAN_FREQUENCIES

Srinivas Dasari <[email protected]>
cfg80211: Define nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE

Daniel Kiper <[email protected]>
efi: Process the MEMATTR table only if EFI_MEMMAP is enabled

Peter S. Housel <[email protected]>
brcmfmac: Fix glom_skb leak in brcmf_sdiod_recv_chain

Christophe Jaillet <[email protected]>
brcmfmac: Fix a memory leak in error handling path in 'brcmf_cfg80211_attach'

Bart Van Assche <[email protected]>
block: Fix a blk_exit_rl() regression

Nitin Gupta <[email protected]>
sparc64: Fix gup_huge_pmd

Nagarathnam Muthusamy <[email protected]>
Adding the type of exported symbols

Nagarathnam Muthusamy <[email protected]>
sed regex in Makefile.build requires line break between exported symbols

Nagarathnam Muthusamy <[email protected]>
Adding asm-prototypes.h for genksyms to generate crc

Bert Kenward <[email protected]>
sfc: don't read beyond unicast address list

Arend van Spriel <[email protected]>
brcmfmac: fix possible buffer overflow in brcmf_cfg80211_mgmt_tx()

Eduardo Valentin <[email protected]>
bridge: mdb: fix leak on complete_info ptr on fail path

WANG Cong <[email protected]>
tap: convert a mutex to a spinlock

Guilherme G. Piccoli <[email protected]>
cxgb4: fix BUG() on interrupt deallocating path of ULD

Huy Nguyen <[email protected]>
net/mlx5e: Initialize CEE's getpermhwaddr address buffer to 0xff

Sowmini Varadhan <[email protected]>
rds: tcp: use sock_create_lite() to create the accept socket

Nikolay Aleksandrov <[email protected]>
vrf: fix bug_on triggered by rx when destroying a vrf

David Ahern <[email protected]>
net: ipv6: Compare lwstate in detecting duplicate nexthops

Derek Chickles <[email protected]>
liquidio: fix bug in soft reset failure detection

Alban Browaeys <[email protected]>
net: core: Fix slab-out-of-bounds in netdev_stats_to_stats64

Jiri Benc <[email protected]>
geneve: fix hlist corruption

Jiri Benc <[email protected]>
vxlan: fix hlist corruption

Sabrina Dubroca <[email protected]>
ipv6: dad: don't remove dynamic addresses if link is down

Gal Pressman <[email protected]>
net/mlx5e: Fix TX carrier errors report in get stats ndo

Mohamad Haj Yahia <[email protected]>
net/mlx5: Cancel delayed recovery work when unloading the driver

Michal Kubeček <[email protected]>
net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()

Daniel Borkmann <[email protected]>
bpf: prevent leaking pointer via xadd on unpriviledged

Dan Carpenter <[email protected]>
rocker: move dereference before free

Ido Schimmel <[email protected]>
mlxsw: spectrum_router: Fix NULL pointer dereference

Gao Feng <[email protected]>
net: sched: Fix one possible panic when no destroy callback

Jason Wang <[email protected]>
virtio-net: serialize tx routine during reset

Eric Dumazet <[email protected]>
net: prevent sign extension in dev_get_stats()

WANG Cong <[email protected]>
tcp: reset sk_rx_dst in tcp_disconnect()

Richard Cochran <[email protected]>
net: dp83640: Avoid NULL pointer dereference.

Michal Kubeček <[email protected]>
net: account for current skb length when deciding about UFO

Martin Habets <[email protected]>
sfc: Fix MCDI command size for filter operations

Arnd Bergmann <[email protected]>
netvsc: don't access netdev->num_rx_queues directly

WANG Cong <[email protected]>
ipv6: avoid unregistering inet6_dev for loopback

Zach Brown <[email protected]>
net/phy: micrel: configure intterupts after autoneg workaround


-------------

Diffstat:

Makefile | 4 +-
arch/arm/include/asm/elf.h | 8 +-
arch/arm64/boot/dts/marvell/armada-37xx.dtsi | 12 +-
arch/arm64/include/asm/elf.h | 18 +-
arch/parisc/include/asm/dma-mapping.h | 11 +-
arch/parisc/include/asm/mmu_context.h | 15 +-
arch/parisc/kernel/syscall_table.S | 2 +-
arch/parisc/mm/fault.c | 2 +-
arch/powerpc/include/asm/elf.h | 13 +-
arch/powerpc/kernel/misc_64.S | 12 ++
arch/s390/include/asm/elf.h | 15 +-
arch/sparc/include/asm/asm-prototypes.h | 24 +++
arch/sparc/lib/atomic_64.S | 44 +++--
arch/sparc/lib/checksum_64.S | 1 +
arch/sparc/lib/csum_copy.S | 1 +
arch/sparc/lib/memscan_64.S | 2 +
arch/sparc/lib/memset.S | 1 +
arch/sparc/mm/gup.c | 4 +-
arch/x86/crypto/sha1_ssse3_glue.c | 2 +-
arch/x86/include/asm/elf.h | 13 +-
arch/x86/include/asm/msr-index.h | 2 +
arch/x86/kvm/cpuid.h | 8 +
arch/x86/kvm/vmx.c | 10 +-
block/blk-sysfs.c | 34 ++--
drivers/base/power/sysfs.c | 2 +
drivers/base/power/wakeup.c | 32 ++--
drivers/crypto/atmel-sha.c | 4 +-
drivers/crypto/caam/caamalg.c | 20 +-
drivers/crypto/caam/caamhash.c | 2 +-
drivers/crypto/caam/key_gen.c | 2 +-
drivers/crypto/talitos.c | 7 +-
drivers/firmware/efi/efi.c | 3 +-
drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 3 +-
drivers/hv/hv.c | 7 +-
drivers/irqchip/irq-gic-v3.c | 3 +
.../ethernet/cavium/liquidio/cn23xx_pf_device.c | 2 +-
.../net/ethernet/cavium/liquidio/cn66xx_device.c | 2 +-
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 16 +-
drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c | 42 ++--
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c | 2 +
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 -
drivers/net/ethernet/mellanox/mlx5/core/health.c | 15 +-
drivers/net/ethernet/mellanox/mlx5/core/main.c | 2 +-
drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 3 +
drivers/net/ethernet/rocker/rocker_ofdpa.c | 2 +-
drivers/net/ethernet/sfc/ef10.c | 16 +-
drivers/net/geneve.c | 48 +++--
drivers/net/hyperv/netvsc_drv.c | 4 +-
drivers/net/phy/dp83640.c | 2 +-
drivers/net/phy/micrel.c | 2 +
drivers/net/tap.c | 18 +-
drivers/net/virtio_net.c | 1 +
drivers/net/vrf.c | 11 +-
drivers/net/vxlan.c | 30 ++-
.../wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c | 7 +-
.../broadcom/brcm80211/brcmfmac/cfg80211.c | 8 +-
drivers/nvmem/core.c | 13 +-
drivers/parisc/ccio-dma.c | 12 ++
drivers/parisc/dino.c | 5 +-
drivers/parisc/lba_pci.c | 6 +-
drivers/parisc/sba_iommu.c | 14 ++
drivers/tty/vt/vt.c | 6 +-
fs/binfmt_elf.c | 59 +++++-
fs/dcache.c | 5 +-
fs/exec.c | 11 +-
fs/mount.h | 1 +
fs/namespace.c | 1 +
fs/pnode.c | 212 ++++++++++++++++-----
include/linux/blkdev.h | 2 +
include/linux/cpumask.h | 17 ++
include/linux/list_lru.h | 1 +
include/linux/mlx5/driver.h | 1 +
include/net/ip6_route.h | 8 +
include/net/vxlan.h | 10 +-
kernel/bpf/verifier.c | 5 +
kernel/extable.c | 2 +-
kernel/rcu/tree_plugin.h | 2 +
kernel/sched/fair.c | 45 +----
kernel/sched/topology.c | 24 ++-
lib/cpumask.c | 32 ++++
mm/huge_memory.c | 2 +-
mm/list_lru.c | 14 +-
mm/mmap.c | 2 +-
net/bridge/br_mdb.c | 3 +-
net/core/dev.c | 32 ++--
net/ipv4/ip_output.c | 3 +-
net/ipv4/tcp.c | 2 +
net/ipv6/addrconf.c | 23 +--
net/ipv6/ip6_fib.c | 5 +-
net/ipv6/ip6_output.c | 2 +-
net/ipv6/route.c | 8 +-
net/rds/tcp_listen.c | 2 +-
net/sched/sch_api.c | 3 +-
net/wireless/nl80211.c | 10 +-
scripts/checkpatch.pl | 6 +-
sound/x86/intel_hdmi_audio.c | 5 +
tools/lib/lockdep/uinclude/linux/lockdep.h | 2 +-
tools/testing/selftests/bpf/test_verifier.c | 66 +++++++
tools/testing/selftests/capabilities/test_execve.c | 7 +-
99 files changed, 892 insertions(+), 377 deletions(-)



2017-07-19 10:09:06

by Greg Kroah-Hartman

Subject: [PATCH 4.11 10/88] net: sched: Fix one possible panic when no destroy callback

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Gao Feng <[email protected]>


[ Upstream commit c1a4872ebfb83b1af7144f7b29ac8c4b344a12a8 ]

When a qdisc fails to init, qdisc_create invokes the destroy callback
to clean up. But there is no check that the callback actually exists, so
this panics when a qdisc such as codel or fq does not define one.

Take codel as an example:
When a malicious user constructs an invalid netlink msg, it causes
codel_init->codel_change->nla_parse_nested to fail.
The kernel then invokes the destroy callback directly, but qdisc codel
doesn't define one, and the result is a panic.

Now add a check for destroy to avoid the possible panic.

Fixes: 87b60cfacf9f ("net_sched: fix error recovery at qdisc creation")
Signed-off-by: Gao Feng <[email protected]>
Acked-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/sched/sch_api.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -1008,7 +1008,8 @@ static struct Qdisc *qdisc_create(struct
return sch;
}
/* ops->init() failed, we call ->destroy() like qdisc_create_dflt() */
- ops->destroy(sch);
+ if (ops->destroy)
+ ops->destroy(sch);
err_out3:
dev_put(dev);
kfree((char *) sch - sch->padded);
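
For illustration, a minimal userspace sketch of the rule the fix enforces
(the ops type here is hypothetical, not the kernel's Qdisc_ops): an optional
callback in an ops table must be tested before it is invoked.

#include <stdio.h>

struct ops {
	void (*destroy)(void *priv);	/* optional: codel, fq leave this NULL */
};

static void cleanup(const struct ops *ops, void *priv)
{
	if (ops->destroy)		/* the missing check behind the panic */
		ops->destroy(priv);
}

int main(void)
{
	struct ops codel_like = { .destroy = NULL };

	cleanup(&codel_like, NULL);	/* safe: no callback, no call */
	puts("no crash");
	return 0;
}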


2017-07-19 10:09:11

by Greg Kroah-Hartman

Subject: [PATCH 4.11 02/88] ipv6: avoid unregistering inet6_dev for loopback

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: WANG Cong <[email protected]>


[ Upstream commit 60abc0be96e00ca71bac083215ac91ad2e575096 ]

The per netns loopback_dev->ip6_ptr is unregistered and set to
NULL when its mtu is set below IPV6_MIN_MTU. This means
rt->rt6i_idev can become NULL after a rt6_uncached_list_flush_dev(),
and we then crash on the next use.

In this case we should just bring its inet6_dev down rather than
unregistering it; at least prior to commit 176c39af29bc
("netns: fix addrconf_ifdown kernel panic") we always
overrode the case for loopback.

Thanks a lot to Andrey for finding a reliable reproducer.

Fixes: 176c39af29bc ("netns: fix addrconf_ifdown kernel panic")
Reported-by: Andrey Konovalov <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: David Ahern <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Acked-by: David Ahern <[email protected]>
Tested-by: Andrey Konovalov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/ipv6/addrconf.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -3334,6 +3334,7 @@ static int addrconf_notify(struct notifi
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
struct netdev_notifier_changeupper_info *info;
struct inet6_dev *idev = __in6_dev_get(dev);
+ struct net *net = dev_net(dev);
int run_pending = 0;
int err;

@@ -3349,7 +3350,7 @@ static int addrconf_notify(struct notifi
case NETDEV_CHANGEMTU:
/* if MTU under IPV6_MIN_MTU stop IPv6 on this interface. */
if (dev->mtu < IPV6_MIN_MTU) {
- addrconf_ifdown(dev, 1);
+ addrconf_ifdown(dev, dev != net->loopback_dev);
break;
}

@@ -3465,7 +3466,7 @@ static int addrconf_notify(struct notifi
* IPV6_MIN_MTU stop IPv6 on this interface.
*/
if (dev->mtu < IPV6_MIN_MTU)
- addrconf_ifdown(dev, 1);
+ addrconf_ifdown(dev, dev != net->loopback_dev);
}
break;



2017-07-19 10:09:20

by Greg Kroah-Hartman

Subject: [PATCH 4.11 07/88] tcp: reset sk_rx_dst in tcp_disconnect()

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: WANG Cong <[email protected]>


[ Upstream commit d747a7a51b00984127a88113cdbbc26f91e9d815 ]

We have to reset sk->sk_rx_dst when we disconnect a TCP
connection, because otherwise when we re-connect it this
dst reference is simply overridden in tcp_finish_connect().

This fixes a dst leak which leads to a loopback dev refcnt
leak. It is a long-standing bug; Kevin reported a very similar
(if not the same) bug before. Thanks to Andrei for providing such
a reliable reproducer, which greatly narrowed down the problem.

Fixes: 41063e9dd119 ("ipv4: Early TCP socket demux.")
Reported-by: Andrei Vagin <[email protected]>
Reported-by: Kevin Xu <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/ipv4/tcp.c | 2 ++
1 file changed, 2 insertions(+)

--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2325,6 +2325,8 @@ int tcp_disconnect(struct sock *sk, int
tcp_init_send_head(sk);
memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));
__sk_dst_reset(sk);
+ dst_release(sk->sk_rx_dst);
+ sk->sk_rx_dst = NULL;
tcp_saved_syn_free(tp);

/* Clean up fastopen related fields */
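
For illustration, a minimal refcounting sketch of the leak (hypothetical
types, not the kernel's dst API): if disconnect does not drop the cached
reference, a later re-connect simply overwrites the pointer and the old
object can never be freed.

#include <stdio.h>
#include <stdlib.h>

struct obj { int refcnt; };

static struct obj *obj_get(struct obj *o) { o->refcnt++; return o; }

static void obj_put(struct obj *o)
{
	if (o && --o->refcnt == 0)
		free(o);
}

struct sock_like { struct obj *rx_dst; };	/* like sk->sk_rx_dst */

static void disconnect(struct sock_like *sk)
{
	obj_put(sk->rx_dst);	/* the fix: drop the cached reference... */
	sk->rx_dst = NULL;	/* ...and clear the pointer */
}

int main(void)
{
	struct sock_like sk = { .rx_dst = obj_get(calloc(1, sizeof(struct obj))) };

	disconnect(&sk);	/* without this, re-connect would leak */
	printf("cached dst: %p\n", (void *)sk.rx_dst);
	return 0;
}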


2017-07-19 10:09:24

by Greg Kroah-Hartman

Subject: [PATCH 4.11 09/88] virtio-net: serialize tx routine during reset

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jason Wang <[email protected]>


[ Upstream commit 713a98d90c5ea072c1bb00ef40617aee2cef2232 ]

We don't hold any tx lock when trying to disable TX during reset; this
can lead to a use after free since ndo_start_xmit() tries to access
the virtqueue which has already been freed. Fix this by using
netif_tx_disable() before freeing the vqs, which makes sure no tx
can happen after the vqs are freed.

Reported-by: Jean-Philippe Menil <[email protected]>
Tested-by: Jean-Philippe Menil <[email protected]>
Fixes: f600b6905015 ("virtio_net: Add XDP support")
Cc: John Fastabend <[email protected]>
Signed-off-by: Jason Wang <[email protected]>
Acked-by: Michael S. Tsirkin <[email protected]>
Acked-by: Robert McCabe <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/virtio_net.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1709,6 +1709,7 @@ static void virtnet_freeze_down(struct v
flush_work(&vi->config_work);

netif_device_detach(vi->dev);
+ netif_tx_disable(vi->dev);
cancel_delayed_work_sync(&vi->refill);

if (netif_running(vi->dev)) {


2017-07-19 10:09:37

by Greg Kroah-Hartman

Subject: [PATCH 4.11 15/88] net/mlx5: Cancel delayed recovery work when unloading the driver

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mohamad Haj Yahia <[email protected]>


[ Upstream commit 2a0165a034ac024b60cca49c61e46f4afa2e4d98 ]

Draining the health workqueue will ignore future health work, including
the work that reports a hardware failure, and thus we can't enter the
error state. Instead, cancel only the recovery flow, making sure that
just new recovery work won't be scheduled.

Fixes: 5e44fca50470 ('net/mlx5: Only cancel recovery work when cleaning up device')
Signed-off-by: Mohamad Haj Yahia <[email protected]>
Signed-off-by: Moshe Shemesh <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/mellanox/mlx5/core/health.c | 15 ++++++++++++++-
drivers/net/ethernet/mellanox/mlx5/core/main.c | 2 +-
include/linux/mlx5/driver.h | 1 +
3 files changed, 16 insertions(+), 2 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
@@ -67,6 +67,7 @@ enum {

enum {
MLX5_DROP_NEW_HEALTH_WORK,
+ MLX5_DROP_NEW_RECOVERY_WORK,
};

static u8 get_nic_state(struct mlx5_core_dev *dev)
@@ -193,7 +194,7 @@ static void health_care(struct work_stru
mlx5_handle_bad_state(dev);

spin_lock(&health->wq_lock);
- if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
+ if (!test_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags))
schedule_delayed_work(&health->recover_work, recover_delay);
else
dev_err(&dev->pdev->dev,
@@ -314,6 +315,7 @@ void mlx5_start_health_poll(struct mlx5_
init_timer(&health->timer);
health->sick = 0;
clear_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
+ clear_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
health->health = &dev->iseg->health;
health->health_counter = &dev->iseg->health_counter;

@@ -336,11 +338,22 @@ void mlx5_drain_health_wq(struct mlx5_co

spin_lock(&health->wq_lock);
set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
+ set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
spin_unlock(&health->wq_lock);
cancel_delayed_work_sync(&health->recover_work);
cancel_work_sync(&health->work);
}

+void mlx5_drain_health_recovery(struct mlx5_core_dev *dev)
+{
+ struct mlx5_core_health *health = &dev->priv.health;
+
+ spin_lock(&health->wq_lock);
+ set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
+ spin_unlock(&health->wq_lock);
+ cancel_delayed_work_sync(&dev->priv.health.recover_work);
+}
+
void mlx5_health_cleanup(struct mlx5_core_dev *dev)
{
struct mlx5_core_health *health = &dev->priv.health;
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1236,7 +1236,7 @@ static int mlx5_unload_one(struct mlx5_c
int err = 0;

if (cleanup)
- mlx5_drain_health_wq(dev);
+ mlx5_drain_health_recovery(dev);

mutex_lock(&dev->intf_state_mutex);
if (test_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state)) {
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -928,6 +928,7 @@ int mlx5_health_init(struct mlx5_core_de
void mlx5_start_health_poll(struct mlx5_core_dev *dev);
void mlx5_stop_health_poll(struct mlx5_core_dev *dev);
void mlx5_drain_health_wq(struct mlx5_core_dev *dev);
+void mlx5_drain_health_recovery(struct mlx5_core_dev *dev);
int mlx5_buf_alloc_node(struct mlx5_core_dev *dev, int size,
struct mlx5_buf *buf, int node);
int mlx5_buf_alloc(struct mlx5_core_dev *dev, int size, struct mlx5_buf *buf);
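
For illustration, a single-threaded sketch of the teardown pattern this
patch relies on (names are illustrative; a pthread mutex stands in for the
wq_lock spinlock): the drop flag must be set under the lock before the work
is cancelled, so a concurrently running handler cannot slip in a re-schedule.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;
static bool drop_new_recovery;

/* Re-arming work item: re-schedules itself only if the flag is clear. */
static void recover_work(void)
{
	pthread_mutex_lock(&wq_lock);
	if (!drop_new_recovery)
		puts("re-scheduling recovery");	/* schedule_delayed_work() */
	else
		puts("new recovery work is not permitted");
	pthread_mutex_unlock(&wq_lock);
}

static void drain_recovery(void)
{
	pthread_mutex_lock(&wq_lock);
	drop_new_recovery = true;	/* set under the lock first... */
	pthread_mutex_unlock(&wq_lock);
	/* ...then cancel_delayed_work_sync() waits for recover_work() */
}

int main(void)
{
	recover_work();		/* flag clear: would re-arm */
	drain_recovery();
	recover_work();		/* flag set: does not re-arm */
	return 0;
}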


2017-07-19 10:09:45

by Greg Kroah-Hartman

Subject: [PATCH 4.11 19/88] geneve: fix hlist corruption

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jiri Benc <[email protected]>


[ Upstream commit 4b4c21fad6ae6bd58ff1566f23b0f4f70fdc9a30 ]

It's not a good idea to add the same hlist_node to two different hash lists.
This leads to various hard to debug memory corruptions.

Fixes: 8ed66f0e8235 ("geneve: implement support for IPv6-based tunnels")
Cc: John W. Linville <[email protected]>
Signed-off-by: Jiri Benc <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/geneve.c | 48 ++++++++++++++++++++++++++++++++----------------
1 file changed, 32 insertions(+), 16 deletions(-)

--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -45,9 +45,17 @@ struct geneve_net {

static unsigned int geneve_net_id;

+struct geneve_dev_node {
+ struct hlist_node hlist;
+ struct geneve_dev *geneve;
+};
+
/* Pseudo network device */
struct geneve_dev {
- struct hlist_node hlist; /* vni hash table */
+ struct geneve_dev_node hlist4; /* vni hash table for IPv4 socket */
+#if IS_ENABLED(CONFIG_IPV6)
+ struct geneve_dev_node hlist6; /* vni hash table for IPv6 socket */
+#endif
struct net *net; /* netns for packet i/o */
struct net_device *dev; /* netdev for geneve tunnel */
struct ip_tunnel_info info;
@@ -123,16 +131,16 @@ static struct geneve_dev *geneve_lookup(
__be32 addr, u8 vni[])
{
struct hlist_head *vni_list_head;
- struct geneve_dev *geneve;
+ struct geneve_dev_node *node;
__u32 hash;

/* Find the device for this VNI */
hash = geneve_net_vni_hash(vni);
vni_list_head = &gs->vni_list[hash];
- hlist_for_each_entry_rcu(geneve, vni_list_head, hlist) {
- if (eq_tun_id_and_vni((u8 *)&geneve->info.key.tun_id, vni) &&
- addr == geneve->info.key.u.ipv4.dst)
- return geneve;
+ hlist_for_each_entry_rcu(node, vni_list_head, hlist) {
+ if (eq_tun_id_and_vni((u8 *)&node->geneve->info.key.tun_id, vni) &&
+ addr == node->geneve->info.key.u.ipv4.dst)
+ return node->geneve;
}
return NULL;
}
@@ -142,16 +150,16 @@ static struct geneve_dev *geneve6_lookup
struct in6_addr addr6, u8 vni[])
{
struct hlist_head *vni_list_head;
- struct geneve_dev *geneve;
+ struct geneve_dev_node *node;
__u32 hash;

/* Find the device for this VNI */
hash = geneve_net_vni_hash(vni);
vni_list_head = &gs->vni_list[hash];
- hlist_for_each_entry_rcu(geneve, vni_list_head, hlist) {
- if (eq_tun_id_and_vni((u8 *)&geneve->info.key.tun_id, vni) &&
- ipv6_addr_equal(&addr6, &geneve->info.key.u.ipv6.dst))
- return geneve;
+ hlist_for_each_entry_rcu(node, vni_list_head, hlist) {
+ if (eq_tun_id_and_vni((u8 *)&node->geneve->info.key.tun_id, vni) &&
+ ipv6_addr_equal(&addr6, &node->geneve->info.key.u.ipv6.dst))
+ return node->geneve;
}
return NULL;
}
@@ -579,6 +587,7 @@ static int geneve_sock_add(struct geneve
{
struct net *net = geneve->net;
struct geneve_net *gn = net_generic(net, geneve_net_id);
+ struct geneve_dev_node *node;
struct geneve_sock *gs;
__u8 vni[3];
__u32 hash;
@@ -597,15 +606,20 @@ static int geneve_sock_add(struct geneve
out:
gs->collect_md = geneve->collect_md;
#if IS_ENABLED(CONFIG_IPV6)
- if (ipv6)
+ if (ipv6) {
rcu_assign_pointer(geneve->sock6, gs);
- else
+ node = &geneve->hlist6;
+ } else
#endif
+ {
rcu_assign_pointer(geneve->sock4, gs);
+ node = &geneve->hlist4;
+ }
+ node->geneve = geneve;

tunnel_id_to_vni(geneve->info.key.tun_id, vni);
hash = geneve_net_vni_hash(vni);
- hlist_add_head_rcu(&geneve->hlist, &gs->vni_list[hash]);
+ hlist_add_head_rcu(&node->hlist, &gs->vni_list[hash]);
return 0;
}

@@ -632,8 +646,10 @@ static int geneve_stop(struct net_device
{
struct geneve_dev *geneve = netdev_priv(dev);

- if (!hlist_unhashed(&geneve->hlist))
- hlist_del_rcu(&geneve->hlist);
+ hlist_del_init_rcu(&geneve->hlist4.hlist);
+#if IS_ENABLED(CONFIG_IPV6)
+ hlist_del_init_rcu(&geneve->hlist6.hlist);
+#endif
geneve_sock_release(geneve);
return 0;
}
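
For illustration, a minimal sketch of why one embedded list node cannot live
on two lists at once (a plain singly linked list standing in for hlist): the
second add silently rewrites the node's link and corrupts the first list,
which is exactly what the per-socket geneve_dev_node wrappers avoid.

#include <stdio.h>

struct node { struct node *next; };

static void push(struct node **head, struct node *n)
{
	n->next = *head;
	*head = n;
}

int main(void)
{
	struct node a, b;
	struct node *list4 = NULL, *list6 = NULL;

	push(&list4, &a);
	push(&list4, &b);	/* list4: b -> a */
	push(&list6, &b);	/* same node on a second list... */

	/* ...rewrote b->next, so list4 just lost its tail */
	printf("list4 after b: %s\n",
	       list4->next == &a ? "a (intact)" : "gone (corrupted)");
	return 0;
}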


2017-07-19 10:09:50

by Greg Kroah-Hartman

Subject: [PATCH 4.11 20/88] net: core: Fix slab-out-of-bounds in netdev_stats_to_stats64

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alban Browaeys <[email protected]>


[ Upstream commit 9af9959e142c274f4a30fefb71d97d2b028b337f ]

commit 9256645af098 ("net/core: relax BUILD_BUG_ON in
netdev_stats_to_stats64") made it possible to read beyond
the size of the source.

Fix this by copying only src's size to dest, as dest might be bigger
than src.

==================================================================
BUG: KASAN: slab-out-of-bounds in netdev_stats_to_stats64+0xe/0x30 at addr ffff8801be248b20
Read of size 192 by task VBoxNetAdpCtl/6734
CPU: 1 PID: 6734 Comm: VBoxNetAdpCtl Tainted: G O 4.11.4prahal+intel+ #118
Hardware name: LENOVO 20CDCTO1WW/20CDCTO1WW, BIOS GQET52WW (1.32 ) 05/04/2017
Call Trace:
dump_stack+0x63/0x86
kasan_object_err+0x1c/0x70
kasan_report+0x270/0x520
? netdev_stats_to_stats64+0xe/0x30
? sched_clock_cpu+0x1b/0x190
? __module_address+0x3e/0x3b0
? unwind_next_frame+0x1ea/0xb00
check_memory_region+0x13c/0x1a0
memcpy+0x23/0x50
netdev_stats_to_stats64+0xe/0x30
dev_get_stats+0x1b9/0x230
rtnl_fill_stats+0x44/0xc00
? nla_put+0xc6/0x130
rtnl_fill_ifinfo+0xe9e/0x3700
? rtnl_fill_vfinfo+0xde0/0xde0
? sched_clock+0x9/0x10
? sched_clock+0x9/0x10
? sched_clock_local+0x120/0x130
? __module_address+0x3e/0x3b0
? unwind_next_frame+0x1ea/0xb00
? sched_clock+0x9/0x10
? sched_clock+0x9/0x10
? sched_clock_cpu+0x1b/0x190
? VBoxNetAdpLinuxIOCtlUnlocked+0x14b/0x280 [vboxnetadp]
? depot_save_stack+0x1d8/0x4a0
? depot_save_stack+0x34f/0x4a0
? depot_save_stack+0x34f/0x4a0
? save_stack+0xb1/0xd0
? save_stack_trace+0x16/0x20
? save_stack+0x46/0xd0
? kasan_slab_alloc+0x12/0x20
? __kmalloc_node_track_caller+0x10d/0x350
? __kmalloc_reserve.isra.36+0x2c/0xc0
? __alloc_skb+0xd0/0x560
? rtmsg_ifinfo_build_skb+0x61/0x120
? rtmsg_ifinfo.part.25+0x16/0xb0
? rtmsg_ifinfo+0x47/0x70
? register_netdev+0x15/0x30
? vboxNetAdpOsCreate+0xc0/0x1c0 [vboxnetadp]
? vboxNetAdpCreate+0x210/0x400 [vboxnetadp]
? VBoxNetAdpLinuxIOCtlUnlocked+0x14b/0x280 [vboxnetadp]
? do_vfs_ioctl+0x17f/0xff0
? SyS_ioctl+0x74/0x80
? do_syscall_64+0x182/0x390
? __alloc_skb+0xd0/0x560
? __alloc_skb+0xd0/0x560
? save_stack_trace+0x16/0x20
? init_object+0x64/0xa0
? ___slab_alloc+0x1ae/0x5c0
? ___slab_alloc+0x1ae/0x5c0
? __alloc_skb+0xd0/0x560
? sched_clock+0x9/0x10
? kasan_unpoison_shadow+0x35/0x50
? kasan_kmalloc+0xad/0xe0
? __kmalloc_node_track_caller+0x246/0x350
? __alloc_skb+0xd0/0x560
? kasan_unpoison_shadow+0x35/0x50
? memset+0x31/0x40
? __alloc_skb+0x31f/0x560
? napi_consume_skb+0x320/0x320
? br_get_link_af_size_filtered+0xb7/0x120 [bridge]
? if_nlmsg_size+0x440/0x630
rtmsg_ifinfo_build_skb+0x83/0x120
rtmsg_ifinfo.part.25+0x16/0xb0
rtmsg_ifinfo+0x47/0x70
register_netdevice+0xa2b/0xe50
? __kmalloc+0x171/0x2d0
? netdev_change_features+0x80/0x80
register_netdev+0x15/0x30
vboxNetAdpOsCreate+0xc0/0x1c0 [vboxnetadp]
vboxNetAdpCreate+0x210/0x400 [vboxnetadp]
? vboxNetAdpComposeMACAddress+0x1d0/0x1d0 [vboxnetadp]
? kasan_check_write+0x14/0x20
VBoxNetAdpLinuxIOCtlUnlocked+0x14b/0x280 [vboxnetadp]
? VBoxNetAdpLinuxOpen+0x20/0x20 [vboxnetadp]
? lock_acquire+0x11c/0x270
? __audit_syscall_entry+0x2fb/0x660
do_vfs_ioctl+0x17f/0xff0
? __audit_syscall_entry+0x2fb/0x660
? ioctl_preallocate+0x1d0/0x1d0
? __audit_syscall_entry+0x2fb/0x660
? kmem_cache_free+0xb2/0x250
? syscall_trace_enter+0x537/0xd00
? exit_to_usermode_loop+0x100/0x100
SyS_ioctl+0x74/0x80
? do_sys_open+0x350/0x350
? do_vfs_ioctl+0xff0/0xff0
do_syscall_64+0x182/0x390
entry_SYSCALL64_slow_path+0x25/0x25
RIP: 0033:0x7f7e39a1ae07
RSP: 002b:00007ffc6f04c6d8 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007ffc6f04c730 RCX: 00007f7e39a1ae07
RDX: 00007ffc6f04c730 RSI: 00000000c0207601 RDI: 0000000000000007
RBP: 00007ffc6f04c700 R08: 00007ffc6f04c780 R09: 0000000000000008
R10: 0000000000000541 R11: 0000000000000206 R12: 0000000000000007
R13: 00000000c0207601 R14: 00007ffc6f04c730 R15: 0000000000000012
Object at ffff8801be248008, in cache kmalloc-4096 size: 4096
Allocated:
PID = 6734
save_stack_trace+0x16/0x20
save_stack+0x46/0xd0
kasan_kmalloc+0xad/0xe0
__kmalloc+0x171/0x2d0
alloc_netdev_mqs+0x8a7/0xbe0
vboxNetAdpOsCreate+0x65/0x1c0 [vboxnetadp]
vboxNetAdpCreate+0x210/0x400 [vboxnetadp]
VBoxNetAdpLinuxIOCtlUnlocked+0x14b/0x280 [vboxnetadp]
do_vfs_ioctl+0x17f/0xff0
SyS_ioctl+0x74/0x80
do_syscall_64+0x182/0x390
return_from_SYSCALL_64+0x0/0x6a
Freed:
PID = 5600
save_stack_trace+0x16/0x20
save_stack+0x46/0xd0
kasan_slab_free+0x73/0xc0
kfree+0xe4/0x220
kvfree+0x25/0x30
single_release+0x74/0xb0
__fput+0x265/0x6b0
____fput+0x9/0x10
task_work_run+0xd5/0x150
exit_to_usermode_loop+0xe2/0x100
do_syscall_64+0x26c/0x390
return_from_SYSCALL_64+0x0/0x6a
Memory state around the buggy address:
ffff8801be248a80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff8801be248b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff8801be248b80: 00 00 00 00 00 00 00 00 00 00 00 07 fc fc fc fc
^
ffff8801be248c00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff8801be248c80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================

Signed-off-by: Alban Browaeys <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/core/dev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7591,7 +7591,7 @@ void netdev_stats_to_stats64(struct rtnl
{
#if BITS_PER_LONG == 64
BUILD_BUG_ON(sizeof(*stats64) < sizeof(*netdev_stats));
- memcpy(stats64, netdev_stats, sizeof(*stats64));
+ memcpy(stats64, netdev_stats, sizeof(*netdev_stats));
/* zero out counters that only exist in rtnl_link_stats64 */
memset((char *)stats64 + sizeof(*netdev_stats), 0,
sizeof(*stats64) - sizeof(*netdev_stats));
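
For illustration, a standalone sketch of the fixed copy (hypothetical struct
layouts that, as in the kernel here, share a common prefix): copy only the
source's size, then zero the fields that exist only in the destination.

#include <stdio.h>
#include <string.h>

struct small { unsigned long a, b; };		/* net_device_stats-like  */
struct big   { unsigned long a, b, c, d; };	/* rtnl_link_stats64-like */

static void small_to_big(struct big *dst, const struct small *src)
{
	/* copying sizeof(*dst) here would read past the end of *src */
	memcpy(dst, src, sizeof(*src));
	memset((char *)dst + sizeof(*src), 0,
	       sizeof(*dst) - sizeof(*src));
}

int main(void)
{
	struct small s = { 1, 2 };
	struct big d;

	small_to_big(&d, &s);
	printf("%lu %lu %lu %lu\n", d.a, d.b, d.c, d.d);	/* 1 2 0 0 */
	return 0;
}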


2017-07-19 10:09:57

by Greg Kroah-Hartman

Subject: [PATCH 4.11 28/88] bridge: mdb: fix leak on complete_info ptr on fail path

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eduardo Valentin <[email protected]>


[ Upstream commit 1bfb159673957644951ab0a8d2aec44b93ddb1ae ]

We currently get the following kmemleak report:
unreferenced object 0xffff8800039d9820 (size 32):
comm "softirq", pid 0, jiffies 4295212383 (age 792.416s)
hex dump (first 32 bytes):
00 0c e0 03 00 88 ff ff ff 02 00 00 00 00 00 00 ................
00 00 00 01 ff 11 00 02 86 dd 00 00 ff ff ff ff ................
backtrace:
[<ffffffff8152b4aa>] kmemleak_alloc+0x4a/0xa0
[<ffffffff811d8ec8>] kmem_cache_alloc_trace+0xb8/0x1c0
[<ffffffffa0389683>] __br_mdb_notify+0x2a3/0x300 [bridge]
[<ffffffffa038a0ce>] br_mdb_notify+0x6e/0x70 [bridge]
[<ffffffffa0386479>] br_multicast_add_group+0x109/0x150 [bridge]
[<ffffffffa0386518>] br_ip6_multicast_add_group+0x58/0x60 [bridge]
[<ffffffffa0387fb5>] br_multicast_rcv+0x1d5/0xdb0 [bridge]
[<ffffffffa037d7cf>] br_handle_frame_finish+0xcf/0x510 [bridge]
[<ffffffffa03a236b>] br_nf_hook_thresh.part.27+0xb/0x10 [br_netfilter]
[<ffffffffa03a3738>] br_nf_hook_thresh+0x48/0xb0 [br_netfilter]
[<ffffffffa03a3fb9>] br_nf_pre_routing_finish_ipv6+0x109/0x1d0 [br_netfilter]
[<ffffffffa03a4400>] br_nf_pre_routing_ipv6+0xd0/0x14c [br_netfilter]
[<ffffffffa03a3c27>] br_nf_pre_routing+0x197/0x3d0 [br_netfilter]
[<ffffffff814a2952>] nf_iterate+0x52/0x60
[<ffffffff814a29bc>] nf_hook_slow+0x5c/0xb0
[<ffffffffa037ddf4>] br_handle_frame+0x1a4/0x2c0 [bridge]

This happens when switchdev_port_obj_add() fails. This patch
frees complete_info object in the fail path.

Reviewed-by: Vallish Vaidyeshwara <[email protected]>
Signed-off-by: Eduardo Valentin <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/bridge/br_mdb.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -323,7 +323,8 @@ static void __br_mdb_notify(struct net_d
__mdb_entry_to_br_ip(entry, &complete_info->ip);
mdb.obj.complete_priv = complete_info;
mdb.obj.complete = br_mdb_complete;
- switchdev_port_obj_add(port_dev, &mdb.obj);
+ if (switchdev_port_obj_add(port_dev, &mdb.obj))
+ kfree(complete_info);
}
} else if (port_dev && type == RTM_DELMDB) {
switchdev_port_obj_del(port_dev, &mdb.obj);
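
For illustration, the ownership rule the fix restores, as a tiny sketch
(port_obj_add() is a stand-in for switchdev_port_obj_add()): if the call that
was supposed to take ownership of the allocation fails, the caller must free
it.

#include <stdlib.h>

struct complete_info_like { int dummy; };

/* stand-in for switchdev_port_obj_add(); nonzero means failure and the
 * callee did NOT take ownership of the completion data */
static int port_obj_add(struct complete_info_like *ci)
{
	(void)ci;
	return -1;
}

int main(void)
{
	struct complete_info_like *ci = malloc(sizeof(*ci));

	if (!ci)
		return 1;
	if (port_obj_add(ci))
		free(ci);	/* the fix: reclaim ci on the fail path */
	return 0;
}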


2017-07-19 10:10:03

by Greg Kroah-Hartman

Subject: [PATCH 4.11 30/88] sfc: don't read beyond unicast address list

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Bert Kenward <[email protected]>


[ Upstream commit c70d68150f71b84cea6997a53493e17bf18a54db ]

If we have more than 32 unicast MAC addresses assigned to an interface
we will read beyond the end of the address table in the driver when
adding filters. The next 256 entries store multicast addresses, so we
will end up attempting to insert duplicate filters, which is mostly
harmless. If we add more than 288 unicast addresses we will then read
past the multicast address table, which is likely to be more exciting.

Fixes: 12fb0da45c9a ("sfc: clean fallbacks between promisc/normal in efx_ef10_filter_sync_rx_mode")
Signed-off-by: Bert Kenward <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/sfc/ef10.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)

--- a/drivers/net/ethernet/sfc/ef10.c
+++ b/drivers/net/ethernet/sfc/ef10.c
@@ -5033,12 +5033,9 @@ static void efx_ef10_filter_uc_addr_list
struct efx_ef10_filter_table *table = efx->filter_state;
struct net_device *net_dev = efx->net_dev;
struct netdev_hw_addr *uc;
- int addr_count;
unsigned int i;

- addr_count = netdev_uc_count(net_dev);
table->uc_promisc = !!(net_dev->flags & IFF_PROMISC);
- table->dev_uc_count = 1 + addr_count;
ether_addr_copy(table->dev_uc_list[0].addr, net_dev->dev_addr);
i = 1;
netdev_for_each_uc_addr(uc, net_dev) {
@@ -5049,6 +5046,8 @@ static void efx_ef10_filter_uc_addr_list
ether_addr_copy(table->dev_uc_list[i].addr, uc->addr);
i++;
}
+
+ table->dev_uc_count = i;
}

static void efx_ef10_filter_mc_addr_list(struct efx_nic *efx)
@@ -5056,11 +5055,10 @@ static void efx_ef10_filter_mc_addr_list
struct efx_ef10_filter_table *table = efx->filter_state;
struct net_device *net_dev = efx->net_dev;
struct netdev_hw_addr *mc;
- unsigned int i, addr_count;
+ unsigned int i;

table->mc_promisc = !!(net_dev->flags & (IFF_PROMISC | IFF_ALLMULTI));

- addr_count = netdev_mc_count(net_dev);
i = 0;
netdev_for_each_mc_addr(mc, net_dev) {
if (i >= EFX_EF10_FILTER_DEV_MC_MAX) {
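
For illustration, a sketch of the counting change (array bounds are
illustrative, not the driver's): the entry count must be derived from how
many addresses were actually stored, not from how many the device offered,
otherwise later lookups walk past the end of the table.

#include <stdio.h>

#define DEV_UC_MAX 32

int main(void)
{
	int table[DEV_UC_MAX];
	int offered = 40;	/* more addresses than the table holds */
	int i = 1;		/* slot 0 holds the device's own address */

	table[0] = -1;
	for (int n = 0; n < offered; n++) {
		if (i >= DEV_UC_MAX)
			break;	/* the driver falls back to promiscuous */
		table[i++] = n;
	}

	/* the fix: record i, not 1 + offered, as the table length */
	printf("stored %d of %d offered\n", i, offered);
	return 0;
}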


2017-07-19 10:10:08

by Greg Kroah-Hartman

Subject: [PATCH 4.11 25/88] net/mlx5e: Initialize CEE's getpermhwaddr address buffer to 0xff

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Huy Nguyen <[email protected]>


[ Upstream commit d968f0f2e4404152f37ed2384b4a2269dd2dae5a ]

The latest change in the open-lldp code uses bytes 6-11 of the perm_addr
buffer as the Ethernet source address for the host TLV packet.
Since our driver does not fill these bytes, they stay at zero and
the open-lldp code ends up sending the TLV packet with a zero source
address, so the switch drops this packet.

The fix is to initialize these bytes to 0xff. The open-lldp code
considers 0xff:ff:ff:ff:ff:ff as the invalid address and falls back to
use the host's mac address as the Ethernet source address.

Fixes: 3a6a931dfb8e ("net/mlx5e: Support DCBX CEE API")
Signed-off-by: Huy Nguyen <[email protected]>
Reviewed-by: Daniel Jurgens <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
@@ -464,6 +464,8 @@ static void mlx5e_dcbnl_getpermhwaddr(st
if (!perm_addr)
return;

+ memset(perm_addr, 0xff, MAX_ADDR_LEN);
+
mlx5_query_nic_vport_mac_address(priv->mdev, 0, perm_addr);
}
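
For illustration, a standalone sketch of the fix (query_mac() is a stand-in
for mlx5_query_nic_vport_mac_address(), which fills only the first six
bytes): pre-filling the whole buffer with 0xff leaves the unused bytes as a
recognizably invalid address instead of stale zeroes.

#include <stdio.h>
#include <string.h>

#define MAX_ADDR_LEN 32

/* stand-in for the firmware query: writes only the first 6 bytes */
static void query_mac(unsigned char *buf)
{
	memcpy(buf, "\x00\x11\x22\x33\x44\x55", 6);
}

int main(void)
{
	unsigned char perm_addr[MAX_ADDR_LEN];

	memset(perm_addr, 0xff, sizeof(perm_addr));	/* the fix */
	query_mac(perm_addr);

	/* bytes 6.. read back as ff, which open-lldp treats as invalid
	 * and replaces with the host MAC */
	printf("perm_addr[6] = %02x\n", perm_addr[6]);
	return 0;
}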



2017-07-19 10:10:18

by Greg Kroah-Hartman

Subject: [PATCH 4.11 32/88] sed regex in Makefile.build requires line break between exported symbols

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nagarathnam Muthusamy <[email protected]>


[ Upstream commit d16c0649feb4fe4e814f44803df5a617769c3233 ]

The following regex in Makefile.build matches only one ___EXPORT_SYMBOL per line.

sed
's/.*___EXPORT_SYMBOL[[:space:]]*\([a-zA-Z0-9_]*\)[[:space:]]*,.*/EXPORT_SYMBOL(\1);/'

The ATOMIC_OPS macro in atomic_64.S expands multiple symbols on the same
line, hence version generation is done only for the last matched symbol.
This patch adds a line break between the symbol expansions.

Signed-off-by: Nagarathnam Muthusamy <[email protected]>
Reviewed-by: Babu Moger <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/sparc/lib/atomic_64.S | 44 ++++++++++++++++++++++++++------------------
1 file changed, 26 insertions(+), 18 deletions(-)

--- a/arch/sparc/lib/atomic_64.S
+++ b/arch/sparc/lib/atomic_64.S
@@ -62,19 +62,23 @@ ENTRY(atomic_fetch_##op) /* %o0 = increm
ENDPROC(atomic_fetch_##op); \
EXPORT_SYMBOL(atomic_fetch_##op);

-#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op)
+ATOMIC_OP(add)
+ATOMIC_OP_RETURN(add)
+ATOMIC_FETCH_OP(add)
+
+ATOMIC_OP(sub)
+ATOMIC_OP_RETURN(sub)
+ATOMIC_FETCH_OP(sub)

-ATOMIC_OPS(add)
-ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_FETCH_OP(and)

-#undef ATOMIC_OPS
-#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
+ATOMIC_OP(or)
+ATOMIC_FETCH_OP(or)

-ATOMIC_OPS(and)
-ATOMIC_OPS(or)
-ATOMIC_OPS(xor)
+ATOMIC_OP(xor)
+ATOMIC_FETCH_OP(xor)

-#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
@@ -124,19 +128,23 @@ ENTRY(atomic64_fetch_##op) /* %o0 = incr
ENDPROC(atomic64_fetch_##op); \
EXPORT_SYMBOL(atomic64_fetch_##op);

-#define ATOMIC64_OPS(op) ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
+ATOMIC64_OP(add)
+ATOMIC64_OP_RETURN(add)
+ATOMIC64_FETCH_OP(add)
+
+ATOMIC64_OP(sub)
+ATOMIC64_OP_RETURN(sub)
+ATOMIC64_FETCH_OP(sub)

-ATOMIC64_OPS(add)
-ATOMIC64_OPS(sub)
+ATOMIC64_OP(and)
+ATOMIC64_FETCH_OP(and)

-#undef ATOMIC64_OPS
-#define ATOMIC64_OPS(op) ATOMIC64_OP(op) ATOMIC64_FETCH_OP(op)
+ATOMIC64_OP(or)
+ATOMIC64_FETCH_OP(or)

-ATOMIC64_OPS(and)
-ATOMIC64_OPS(or)
-ATOMIC64_OPS(xor)
+ATOMIC64_OP(xor)
+ATOMIC64_FETCH_OP(xor)

-#undef ATOMIC64_OPS
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP


2017-07-19 10:10:12

by Greg Kroah-Hartman

Subject: [PATCH 4.11 36/88] brcmfmac: Fix a memory leak in error handling path in brcmf_cfg80211_attach

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Christophe Jaillet <[email protected]>

commit 57c00f2fac512837f8de73474ec1f54020015bae upstream.

If 'wiphy_new()' fails, we leak 'ops'. Add a new label in the error
handling path to free it in such a case.

Fixes: 5c22fb85102a7 ("brcmfmac: add wowl gtk rekeying offload support")
Signed-off-by: Christophe JAILLET <[email protected]>
Signed-off-by: Kalle Valo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
@@ -6827,7 +6827,7 @@ struct brcmf_cfg80211_info *brcmf_cfg802
wiphy = wiphy_new(ops, sizeof(struct brcmf_cfg80211_info));
if (!wiphy) {
brcmf_err("Could not allocate wiphy device\n");
- return NULL;
+ goto ops_out;
}
memcpy(wiphy->perm_addr, drvr->mac, ETH_ALEN);
set_wiphy_dev(wiphy, busdev);
@@ -6970,6 +6970,7 @@ priv_out:
ifp->vif = NULL;
wiphy_out:
brcmf_free_wiphy(wiphy);
+ops_out:
kfree(ops);
return NULL;
}
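
For illustration, the goto-unwind shape the fix completes, as a
self-contained sketch (the allocations are stand-ins for ops and wiphy):
every failure exit frees exactly what the function still owns, and adding a
later allocation means adding a matching label.

#include <stdlib.h>

struct blob { int x; };

static int attach(void)
{
	struct blob *ops, *wiphy;

	ops = malloc(sizeof(*ops));
	if (!ops)
		return -1;

	wiphy = malloc(sizeof(*wiphy));
	if (!wiphy)
		goto ops_out;	/* the fix: don't just return an error */

	/* success: in the driver both live on in the returned object;
	 * this sketch simply releases them again */
	free(wiphy);
	free(ops);
	return 0;

ops_out:
	free(ops);		/* the error path now frees ops too */
	return -1;
}

int main(void)
{
	return attach() ? 1 : 0;
}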


2017-07-19 10:10:22

by Greg Kroah-Hartman

Subject: [PATCH 4.11 34/88] sparc64: Fix gup_huge_pmd

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nitin Gupta <[email protected]>


[ Upstream commit dbd2667a4fb9ce4f547982b07cd69dda127c47ea ]

The function assumes that each PMD points to the head of a
huge page. This is not correct, as a PMD can point to the
start of any 8M region within a, say, 256M hugepage. The
fix ensures that we find the correct head page for any PMD
huge page.

Cc: Julian Calaby <[email protected]>
Signed-off-by: Nitin Gupta <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/sparc/mm/gup.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -78,8 +78,8 @@ static int gup_huge_pmd(pmd_t *pmdp, pmd
return 0;

refs = 0;
- head = pmd_page(pmd);
- page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+ page = pmd_page(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+ head = compound_head(page);
do {
VM_BUG_ON(compound_head(page) != head);
pages[*nr] = page;
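
For illustration, the address arithmetic behind the fix, using sparc64's 8K
pages (the shift values are real; the frame numbers are made up): the PMD
resolves to the base of an 8M region, which for a larger hugepage is
generally not the head page, so the head has to be derived via
compound_head() rather than assumed.

#include <stdio.h>

#define PAGE_SHIFT 13UL				/* sparc64: 8K pages */
#define PMD_SHIFT  23UL				/* one PMD maps 8M   */
#define PMD_MASK   (~((1UL << PMD_SHIFT) - 1))

int main(void)
{
	unsigned long head_pfn = 0x10000;	/* 256M hugepage head */
	unsigned long addr = (head_pfn << PAGE_SHIFT)
			     + (24UL << 20) + 123;	/* 24M inside it */

	/* what pmd_page() yields: the 8M region base, NOT the head */
	unsigned long pmd_pfn = (addr & PMD_MASK) >> PAGE_SHIFT;
	/* fixed code: locate the faulting page, then take its head */
	unsigned long page_pfn = pmd_pfn + ((addr & ~PMD_MASK) >> PAGE_SHIFT);

	printf("pmd base pfn %#lx != head pfn %#lx (page pfn %#lx)\n",
	       pmd_pfn, head_pfn, page_pfn);
	return 0;
}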


2017-07-19 10:10:26

by Greg Kroah-Hartman

Subject: [PATCH 4.11 48/88] parisc/mm: Ensure IRQs are off in switch_mm()

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Helge Deller <[email protected]>

commit 649aa24254e85bf6bd7807dd372d083707852b1f upstream.

switch_mm() must now disable interrupts itself. This is because of commit
f98db6013c55 ("sched/core: Add switch_mm_irqs_off() and use it in the
scheduler"), in which switch_mm_irqs_off() is called by the
scheduler, vs switch_mm() which is used by use_mm().

This patch lets the parisc code mirror the x86 and powerpc code, i.e. it
disables interrupts in switch_mm(), and optimises the scheduler case by
defining switch_mm_irqs_off().

Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/parisc/include/asm/mmu_context.h | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

--- a/arch/parisc/include/asm/mmu_context.h
+++ b/arch/parisc/include/asm/mmu_context.h
@@ -49,15 +49,26 @@ static inline void load_context(mm_conte
mtctl(__space_to_prot(context), 8);
}

-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, struct task_struct *tsk)
+static inline void switch_mm_irqs_off(struct mm_struct *prev,
+ struct mm_struct *next, struct task_struct *tsk)
{
-
if (prev != next) {
mtctl(__pa(next->pgd), 25);
load_context(next->context);
}
}

+static inline void switch_mm(struct mm_struct *prev,
+ struct mm_struct *next, struct task_struct *tsk)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ switch_mm_irqs_off(prev, next, tsk);
+ local_irq_restore(flags);
+}
+#define switch_mm_irqs_off switch_mm_irqs_off
+
#define deactivate_mm(tsk,mm) do { } while (0)

static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)


2017-07-19 10:10:35

by Greg Kroah-Hartman

Subject: [PATCH 4.11 40/88] cfg80211: Validate frequencies nested in NL80211_ATTR_SCAN_FREQUENCIES

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Srinivas Dasari <[email protected]>

commit d7f13f7450369281a5d0ea463cc69890a15923ae upstream.

validate_scan_freqs() retrieves frequencies from attributes
nested in the attribute NL80211_ATTR_SCAN_FREQUENCIES with
nla_get_u32(), which reads 4 bytes from each attribute
without validating the size of data received. Attributes
nested in NL80211_ATTR_SCAN_FREQUENCIES don't have an nla policy.

Validate size of each attribute before parsing to avoid potential buffer
overread.

Fixes: 2a519311926 ("cfg80211/nl80211: scanning (and mac80211 update to use it)")
Signed-off-by: Srinivas Dasari <[email protected]>
Signed-off-by: Jouni Malinen <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/wireless/nl80211.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -6422,6 +6422,10 @@ static int validate_scan_freqs(struct nl
struct nlattr *attr1, *attr2;
int n_channels = 0, tmp1, tmp2;

+ nla_for_each_nested(attr1, freqs, tmp1)
+ if (nla_len(attr1) != sizeof(u32))
+ return 0;
+
nla_for_each_nested(attr1, freqs, tmp1) {
n_channels++;
/*
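
For illustration, a standalone sketch of the added check (a simplified
attribute record standing in for struct nlattr): every nested payload must be
exactly sizeof(u32) before any of them is read, since nla_get_u32() would
read 4 bytes from a shorter payload regardless.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct attr {				/* simplified nlattr */
	uint16_t len;
	unsigned char data[8];
};

static int validate_freqs(const struct attr *attrs, int n)
{
	for (int i = 0; i < n; i++)
		if (attrs[i].len != sizeof(uint32_t))
			return 0;	/* reject the whole set */
	return n;
}

int main(void)
{
	struct attr attrs[2] = { { .len = 4 }, { .len = 2 } };
	uint32_t freq = 2412;

	memcpy(attrs[0].data, &freq, sizeof(freq));
	/* the 2-byte attribute poisons the set: 0 channels accepted */
	printf("%d channels accepted\n", validate_freqs(attrs, 2));
	return 0;
}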


2017-07-19 10:10:40

by Greg Kroah-Hartman

Subject: [PATCH 4.11 42/88] cfg80211: Check if NAN service ID is of expected size

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Srinivas Dasari <[email protected]>

commit 0a27844ce86d039d74221dd56cd8c0349b146b63 upstream.

The nla policy checks only the maximum length of the attribute data when
the attribute type is NLA_BINARY. If userspace sends less data than
specified, cfg80211 may access illegal memory. When the type is NLA_UNSPEC,
the nla policy check ensures that userspace sends at least the specified
number of bytes.

Remove type assignment to NLA_BINARY from nla_policy of
NL80211_NAN_FUNC_SERVICE_ID to make these NLA_UNSPEC and to make sure
minimum NL80211_NAN_FUNC_SERVICE_ID_LEN bytes are received from
userspace with NL80211_NAN_FUNC_SERVICE_ID.

Fixes: a442b761b24 ("cfg80211: add add_nan_func / del_nan_func")
Signed-off-by: Srinivas Dasari <[email protected]>
Signed-off-by: Jouni Malinen <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/wireless/nl80211.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -509,7 +509,7 @@ nl80211_bss_select_policy[NL80211_BSS_SE
static const struct nla_policy
nl80211_nan_func_policy[NL80211_NAN_FUNC_ATTR_MAX + 1] = {
[NL80211_NAN_FUNC_TYPE] = { .type = NLA_U8 },
- [NL80211_NAN_FUNC_SERVICE_ID] = { .type = NLA_BINARY,
+ [NL80211_NAN_FUNC_SERVICE_ID] = {
.len = NL80211_NAN_FUNC_SERVICE_ID_LEN },
[NL80211_NAN_FUNC_PUBLISH_TYPE] = { .type = NLA_U8 },
[NL80211_NAN_FUNC_PUBLISH_BCAST] = { .type = NLA_FLAG },


2017-07-19 10:10:49

by Greg Kroah-Hartman

Subject: [PATCH 4.11 46/88] parisc: use compat_sys_keyctl()

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eric Biggers <[email protected]>

commit b0f94efd5aa8daa8a07d7601714c2573266cd4c9 upstream.

Architectures with a compat syscall table must put compat_sys_keyctl()
in it, not sys_keyctl(). The parisc architecture was not doing this;
fix it.

Signed-off-by: Eric Biggers <[email protected]>
Acked-by: Helge Deller <[email protected]>
Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/parisc/kernel/syscall_table.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/parisc/kernel/syscall_table.S
+++ b/arch/parisc/kernel/syscall_table.S
@@ -361,7 +361,7 @@
ENTRY_SAME(ni_syscall) /* 263: reserved for vserver */
ENTRY_SAME(add_key)
ENTRY_SAME(request_key) /* 265 */
- ENTRY_SAME(keyctl)
+ ENTRY_COMP(keyctl)
ENTRY_SAME(ioprio_set)
ENTRY_SAME(ioprio_get)
ENTRY_SAME(inotify_init)


2017-07-19 10:10:45

by Greg Kroah-Hartman

Subject: [PATCH 4.11 44/88] irqchip/gic-v3: Fix out-of-bound access in gic_set_affinity

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Suzuki K Poulose <[email protected]>

commit 866d7c1b0a3c70387646c4e455e727a58c5d465a upstream.

The GICv3 driver doesn't check if the target CPU for gic_set_affinity
is valid before going ahead and making the changes. This triggers the
following splat with KASAN:

[ 141.189434] BUG: KASAN: global-out-of-bounds in gic_set_affinity+0x8c/0x140
[ 141.189704] Read of size 8 at addr ffff200009741d20 by task swapper/1/0
[ 141.189958]
[ 141.190158] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.12.0-rc7
[ 141.190458] Hardware name: Foundation-v8A (DT)
[ 141.190658] Call trace:
[ 141.190908] [<ffff200008089d70>] dump_backtrace+0x0/0x328
[ 141.191224] [<ffff20000808a1b4>] show_stack+0x14/0x20
[ 141.191507] [<ffff200008504c3c>] dump_stack+0xa4/0xc8
[ 141.191858] [<ffff20000826c19c>] print_address_description+0x13c/0x250
[ 141.192219] [<ffff20000826c5c8>] kasan_report+0x210/0x300
[ 141.192547] [<ffff20000826ad54>] __asan_load8+0x84/0x98
[ 141.192874] [<ffff20000854eeec>] gic_set_affinity+0x8c/0x140
[ 141.193158] [<ffff200008148b14>] irq_do_set_affinity+0x54/0xb8
[ 141.193473] [<ffff200008148d2c>] irq_set_affinity_locked+0x64/0xf0
[ 141.193828] [<ffff200008148e00>] __irq_set_affinity+0x48/0x78
[ 141.194158] [<ffff200008bc48a4>] arm_perf_starting_cpu+0x104/0x150
[ 141.194513] [<ffff2000080d73bc>] cpuhp_invoke_callback+0x17c/0x1f8
[ 141.194783] [<ffff2000080d94ec>] notify_cpu_starting+0x8c/0xb8
[ 141.195130] [<ffff2000080911ec>] secondary_start_kernel+0x15c/0x200
[ 141.195390] [<0000000080db81b4>] 0x80db81b4
[ 141.195603]
[ 141.195685] The buggy address belongs to the variable:
[ 141.196012] __cpu_logical_map+0x200/0x220
[ 141.196176]
[ 141.196315] Memory state around the buggy address:
[ 141.196586] ffff200009741c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 141.196913] ffff200009741c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 141.197158] >ffff200009741d00: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00
[ 141.197487] ^
[ 141.197758] ffff200009741d80: 00 00 00 00 00 00 00 00 fa fa fa fa 00 00 00 00
[ 141.198060] ffff200009741e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 141.198358] ==================================================================
[ 141.198609] Disabling lock debugging due to kernel taint
[ 141.198961] CPU1: Booted secondary processor [410fd051]

This patch adds the check to make sure the cpu is valid.

Fixes: commit 021f653791ad17e03f98 ("irqchip: gic-v3: Initial support for GICv3")
Signed-off-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/irqchip/irq-gic-v3.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -645,6 +645,9 @@ static int gic_set_affinity(struct irq_d
int enabled;
u64 val;

+ if (cpu >= nr_cpu_ids)
+ return -EINVAL;
+
if (gic_irq_in_rdist(d))
return -EINVAL;
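
For illustration, the added guard reduced to a standalone sketch (the array
size and contents are made up): the target CPU indexes a fixed-size map, so
it has to be range-checked before the dereference.

#include <stdio.h>

#define NR_CPU_IDS 4

static const unsigned long cpu_logical_map[NR_CPU_IDS] = { 0, 1, 2, 3 };

static int set_affinity(unsigned int cpu)
{
	if (cpu >= NR_CPU_IDS)	/* the added check */
		return -22;	/* -EINVAL */

	printf("route to mpidr %lu\n", cpu_logical_map[cpu]);
	return 0;
}

int main(void)
{
	set_affinity(1);			/* fine */
	return set_affinity(8) ? 0 : 1;		/* rejected, no OOB read */
}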



2017-07-19 10:10:54

by Greg Kroah-Hartman

Subject: [PATCH 4.11 38/88] efi: Process the MEMATTR table only if EFI_MEMMAP is enabled

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Daniel Kiper <[email protected]>

commit 457ea3f7e97881f937136ce0ba1f29f82b9abdb0 upstream.

Otherwise e.g. Xen dom0 on x86_64 EFI platforms crashes.

In theory we can check EFI_PARAVIRT too, however,
EFI_MEMMAP looks more targeted and covers more cases.

Signed-off-by: Daniel Kiper <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/firmware/efi/efi.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -528,7 +528,8 @@ int __init efi_config_parse_tables(void
}
}

- efi_memattr_init();
+ if (efi_enabled(EFI_MEMMAP))
+ efi_memattr_init();

/* Parse the EFI Properties table if it exists */
if (efi.properties_table != EFI_INVALID_TABLE_ADDR) {


2017-07-19 10:10:58

by Greg Kroah-Hartman

Subject: [PATCH 4.11 62/88] ARM64: dts: marvell: armada37xx: Fix timer interrupt specifiers

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Marc Zyngier <[email protected]>

commit 88cda00733f0731711c76e535d4972c296ac512e upstream.

Contrary to popular belief, PPIs connected to a GICv3 do not have
an affinity field similar to that of GICv2. That is consistent
with the fact that GICv3 is designed to accommodate thousands of
CPUs, and fitting them as a bitmap in a byte is... difficult.

Fixes: adbc3695d9e4 ("arm64: dts: add the Marvell Armada 3700 family and a development board")
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Gregory CLEMENT <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/boot/dts/marvell/armada-37xx.dtsi | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
@@ -75,14 +75,10 @@

timer {
compatible = "arm,armv8-timer";
- interrupts = <GIC_PPI 13
- (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>,
- <GIC_PPI 14
- (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>,
- <GIC_PPI 11
- (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>,
- <GIC_PPI 10
- (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>;
+ interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_PPI 14 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_PPI 10 IRQ_TYPE_LEVEL_HIGH>;
};

soc {


2017-07-19 10:11:05

by Greg Kroah-Hartman

Subject: [PATCH 4.11 64/88] vt: fix unchecked __put_user() in tioclinux ioctls

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Adam Borowski <[email protected]>

commit 6987dc8a70976561d22450b5858fc9767788cc1c upstream.

Only read access is checked before this call.

Actually, at the moment this is not an issue, as every in-tree arch does
the same manual checks for VERIFY_READ vs VERIFY_WRITE, relying on the MMU
to tell them apart, but this wasn't the case in the past and may happen
again on some odd arch in the future.

If anyone cares about 3.7 and earlier, this is a security hole (untested)
on real 80386 CPUs.

Signed-off-by: Adam Borowski <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/tty/vt/vt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/drivers/tty/vt/vt.c
+++ b/drivers/tty/vt/vt.c
@@ -2709,13 +2709,13 @@ int tioclinux(struct tty_struct *tty, un
* related to the kernel should not use this.
*/
data = vt_get_shift_state();
- ret = __put_user(data, p);
+ ret = put_user(data, p);
break;
case TIOCL_GETMOUSEREPORTING:
console_lock(); /* May be overkill */
data = mouse_reporting();
console_unlock();
- ret = __put_user(data, p);
+ ret = put_user(data, p);
break;
case TIOCL_SETVESABLANK:
console_lock();
@@ -2724,7 +2724,7 @@ int tioclinux(struct tty_struct *tty, un
break;
case TIOCL_GETKMSGREDIRECT:
data = vt_get_kmsg_redirect();
- ret = __put_user(data, p);
+ ret = put_user(data, p);
break;
case TIOCL_SETKMSGREDIRECT:
if (!capable(CAP_SYS_ADMIN)) {


2017-07-19 10:11:22

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 52/88] mm/list_lru.c: fix list_lru_count_node() to be race free

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sahitya Tummala <[email protected]>

commit 2c80cd57c74339889a8752b20862a16c28929c3a upstream.

list_lru_count_node() iterates over all memcgs to get the total number of
entries on the node, but it can race with memcg_drain_all_list_lrus(),
which migrates the entries from a dead cgroup to another. This can cause
list_lru_count_node() to return an incorrect number of entries.

Fix this by keeping track of the entry count per node and simply
returning it in list_lru_count_node().
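
Schematically (an illustrative interleaving, not code from the patch),
the old per-memcg summation could observe migrated entries twice:

/*
 *   CPU0: list_lru_count_node()     CPU1: memcg_drain_all_list_lrus()
 *
 *   count += src->nr_items;   (N)
 *                                   move all entries src -> dst
 *   count += dst->nr_items;   (N)   => 2*N reported for N entries
 *
 * A single nlru->nr_items, updated under nlru->lock in add, del and
 * isolate, makes the count a consistent per-node snapshot instead.
 */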

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Sahitya Tummala <[email protected]>
Acked-by: Vladimir Davydov <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Alexander Polakov <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/list_lru.h | 1 +
mm/list_lru.c | 14 ++++++--------
2 files changed, 7 insertions(+), 8 deletions(-)

--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -44,6 +44,7 @@ struct list_lru_node {
/* for cgroup aware lrus points to per cgroup lists, otherwise NULL */
struct list_lru_memcg *memcg_lrus;
#endif
+ long nr_items;
} ____cacheline_aligned_in_smp;

struct list_lru {
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -117,6 +117,7 @@ bool list_lru_add(struct list_lru *lru,
l = list_lru_from_kmem(nlru, item);
list_add_tail(item, &l->list);
l->nr_items++;
+ nlru->nr_items++;
spin_unlock(&nlru->lock);
return true;
}
@@ -136,6 +137,7 @@ bool list_lru_del(struct list_lru *lru,
l = list_lru_from_kmem(nlru, item);
list_del_init(item);
l->nr_items--;
+ nlru->nr_items--;
spin_unlock(&nlru->lock);
return true;
}
@@ -183,15 +185,10 @@ EXPORT_SYMBOL_GPL(list_lru_count_one);

unsigned long list_lru_count_node(struct list_lru *lru, int nid)
{
- long count = 0;
- int memcg_idx;
+ struct list_lru_node *nlru;

- count += __list_lru_count_one(lru, nid, -1);
- if (list_lru_memcg_aware(lru)) {
- for_each_memcg_cache_index(memcg_idx)
- count += __list_lru_count_one(lru, nid, memcg_idx);
- }
- return count;
+ nlru = &lru->node[nid];
+ return nlru->nr_items;
}
EXPORT_SYMBOL_GPL(list_lru_count_node);

@@ -226,6 +223,7 @@ restart:
assert_spin_locked(&nlru->lock);
case LRU_REMOVED:
isolated++;
+ nlru->nr_items--;
/*
* If the lru lock has been dropped, our list
* traversal is now invalid and so we have to


2017-07-19 10:11:16

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 59/88] s390: reduce ELF_ET_DYN_BASE

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit a73dc5370e153ac63718d850bddf0c9aa9d871e6 upstream.

Now that explicitly executed loaders are loaded in the mmap region, we
have more freedom to decide where we position PIE binaries in the
address space to avoid possible collisions with mmap or stack regions.

For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit
address space for 32-bit pointers. On 32-bit use 4MB, which is the
traditional x86 minimum load location, likely to avoid historically
requiring a 4MB page table entry when only a portion of the first 4MB
would be used (since the NULL address is avoided). For s390 the
position could be 0x10000, but that is needlessly close to the NULL
address.
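
For reference, a compile-time sanity check on the constants chosen
below (illustrative only; BUILD_BUG_ON() evaluates at build time):

#include <linux/bug.h>

static inline void elf_et_dyn_base_sanity(void)
{
        BUILD_BUG_ON(0x100000000UL != (1UL << 32));     /* 64-bit: 4 GB */
        BUILD_BUG_ON(0x000400000UL != (4UL << 20));     /* compat: 4 MB */
}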

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Pratyush Anand <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/s390/include/asm/elf.h | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)

--- a/arch/s390/include/asm/elf.h
+++ b/arch/s390/include/asm/elf.h
@@ -160,14 +160,13 @@ extern unsigned int vdso_enabled;
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE 4096

-/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
- use of this is to invoke "./ld.so someprog" to test out a new version of
- the loader. We need to make sure that it is out of the way of the program
- that it will "exec", and that there is sufficient room for the brk. 64-bit
- tasks are aligned to 4GB. */
-#define ELF_ET_DYN_BASE (is_compat_task() ? \
- (STACK_TOP / 3 * 2) : \
- (STACK_TOP / 3 * 2) & ~((1UL << 32) - 1))
+/*
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
+ */
+#define ELF_ET_DYN_BASE (is_compat_task() ? 0x000400000UL : \
+ 0x100000000UL)

/* This yields a mask that user programs can use to figure out what
instruction set this CPU supports. */


2017-07-19 10:11:27

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 75/88] crypto: sha1-ssse3 - Disable avx2

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Herbert Xu <[email protected]>

commit b82ce24426a4071da9529d726057e4e642948667 upstream.

It has been reported that sha1-avx2 can cause page faults by reading
beyond the end of the input. This patch disables it until it can be
fixed.

Fixes: 7c1da8d0d046 ("crypto: sha - SHA1 transform x86_64 AVX2")
Reported-by: Jan Stancek <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/crypto/sha1_ssse3_glue.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -201,7 +201,7 @@ asmlinkage void sha1_transform_avx2(u32

static bool avx2_usable(void)
{
- if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
+ if (false && avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
&& boot_cpu_has(X86_FEATURE_BMI1)
&& boot_cpu_has(X86_FEATURE_BMI2))
return true;


2017-07-19 10:11:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 67/88] Drivers: hv: vmbus: Close timing hole that can corrupt per-cpu page

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Michael Kelley <[email protected]>

commit 13b9abfc92be7c4454bff912021b9f835dea6e15 upstream.

Extend the disabling of preemption to include the hypercall so that
another thread can't get the CPU and corrupt the per-cpu page used
for hypercall arguments.
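
The general shape of the fix (a hedged sketch with hypothetical names,
not the literal driver code): the get/put window must stay open across
every use of the per-cpu buffer, hypercall included.

#include <linux/percpu.h>
#include <linux/string.h>

struct scratch_page { char buf[4096]; };        /* hypothetical */
static DEFINE_PER_CPU(struct scratch_page, hv_scratch);

static int do_hypercall(void *arg)              /* stand-in */
{
        return 0;
}

static int post_message(const void *payload, size_t len)
{
        struct scratch_page *page;
        int status;

        page = get_cpu_ptr(&hv_scratch);        /* disables preemption */
        memcpy(page->buf, payload, len);
        status = do_hypercall(page);            /* still on the same CPU */
        put_cpu_ptr(&hv_scratch);               /* re-enables preemption */

        return status;
}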

Signed-off-by: Michael Kelley <[email protected]>
Signed-off-by: K. Y. Srinivasan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hv/hv.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -82,10 +82,15 @@ int hv_post_message(union hv_connection_
aligned_msg->message_type = message_type;
aligned_msg->payload_size = payload_size;
memcpy((void *)aligned_msg->payload, payload, payload_size);
- put_cpu_ptr(hv_cpu);

status = hv_do_hypercall(HVCALL_POST_MESSAGE, aligned_msg, NULL);

+ /* Preemption must remain disabled until after the hypercall
+ * so some other thread can't get scheduled onto this cpu and
+ * corrupt the per-cpu post_msg_page
+ */
+ put_cpu_ptr(hv_cpu);
+
return status & 0xFFFF;
}



2017-07-19 10:11:47

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 74/88] crypto: atmel - only treat EBUSY as transient if backlog

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Gilad Ben-Yossef <[email protected]>

commit 1606043f214f912a52195293614935811a6e3e53 upstream.

The Atmel SHA driver was treating -EBUSY as an indication of queueing
to the backlog without checking that backlogging is enabled for the request.

Fix it by checking request flags.
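
The return-code convention being enforced (a generic sketch, using only
helpers that exist in this tree): -EBUSY means "accepted into the
backlog, completion callback will fire" only if the request opted in
with CRYPTO_TFM_REQ_MAY_BACKLOG; otherwise it is a hard failure.

#include <crypto/hash.h>

/* Returns true if @err means the request is still in flight. */
static bool request_in_flight(struct ahash_request *req, int err)
{
        if (err == -EINPROGRESS)
                return true;
        if (err == -EBUSY &&
            (ahash_request_flags(req) & CRYPTO_TFM_REQ_MAY_BACKLOG))
                return true;
        return false;   /* -EBUSY without backlog is a real error */
}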

Signed-off-by: Gilad Ben-Yossef <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/crypto/atmel-sha.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -1204,7 +1204,9 @@ static int atmel_sha_finup(struct ahash_
ctx->flags |= SHA_FLAGS_FINUP;

err1 = atmel_sha_update(req);
- if (err1 == -EINPROGRESS || err1 == -EBUSY)
+ if (err1 == -EINPROGRESS ||
+ (err1 == -EBUSY && (ahash_request_flags(req) &
+ CRYPTO_TFM_REQ_MAY_BACKLOG)))
return err1;

/*


2017-07-19 10:11:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 71/88] selftests/capabilities: Fix the test_execve test

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski <[email protected]>

commit 796a3bae2fba6810427efdb314a1c126c9490fb3 upstream.

test_execve does rather odd mount manipulations to safely create
temporary setuid and setgid executables that aren't visible to the
rest of the system. Those executables end up in the test's cwd, but
that cwd is MNT_DETACHed.

The core namespace code considers MNT_DETACHed trees to belong to no
mount namespace at all and, in general, MNT_DETACHed trees are only
barely functional. This interacted with commit 380cf5ba6b0a ("fs:
Treat foreign mounts as nosuid") to cause all MNT_DETACHed trees to
act as though they're nosuid, breaking the test.

Fix it by just not detaching the tree. It's still in a private
mount namespace and is therefore still invisible to the rest of the
system (except via /proc, and the same nosuid logic will protect all
other programs on the system from believing in test_execve's setuid
bits).

While we're at it, fix some blatant whitespace problems.
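
The resulting setup (a hedged sketch of the sequence, error handling
elided; names are illustrative) keeps the tmpfs attached, so the
foreign-mount nosuid logic never triggers inside the test's own
namespace:

#define _GNU_SOURCE
#include <sched.h>
#include <sys/mount.h>
#include <unistd.h>

static void enter_private_tmpfs(void)
{
        unshare(CLONE_NEWNS);                   /* private mount ns */
        mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL);
        mount("test", "/tmp", "tmpfs", 0, NULL);
        chdir("/tmp");
        /* no umount2(".", MNT_DETACH): a detached tree would count
         * as a foreign mount and be treated as nosuid */
}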

Reported-by: Naresh Kamboju <[email protected]>
Fixes: 380cf5ba6b0a ("fs: Treat foreign mounts as nosuid")
Cc: "Eric W. Biederman" <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Greg KH <[email protected]>
Cc: [email protected]
Signed-off-by: Andy Lutomirski <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
tools/testing/selftests/capabilities/test_execve.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

--- a/tools/testing/selftests/capabilities/test_execve.c
+++ b/tools/testing/selftests/capabilities/test_execve.c
@@ -138,9 +138,6 @@ static void chdir_to_tmpfs(void)

if (chdir(cwd) != 0)
err(1, "chdir to private tmpfs");
-
- if (umount2(".", MNT_DETACH) != 0)
- err(1, "detach private tmpfs");
}

static void copy_fromat_to(int fromfd, const char *fromname, const char *toname)
@@ -248,7 +245,7 @@ static int do_tests(int uid, const char
err(1, "chown");
if (chmod("validate_cap_sgidnonroot", S_ISGID | 0710) != 0)
err(1, "chmod");
-}
+ }

capng_get_caps_process();

@@ -384,7 +381,7 @@ static int do_tests(int uid, const char
} else {
printf("[RUN]\tNon-root +ia, sgidnonroot => i\n");
exec_other_validate_cap("./validate_cap_sgidnonroot",
- false, false, true, false);
+ false, false, true, false);

if (fork_wait()) {
printf("[RUN]\tNon-root +ia, sgidroot => i\n");


2017-07-19 10:12:06

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 84/88] PM / QoS: return -EINVAL for bogus strings

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Carpenter <[email protected]>

commit 2ca30331c156ca9e97643ad05dd8930b8fe78b01 upstream.

In the current code, if the user accidentally writes a bogus command to
this sysfs file, then we set the latency tolerance to an uninitialized
variable.
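
The parsing rule the fix enforces (sketched; the real store function
also accepts integer values and the "auto" keyword): any string that
matches nothing must return an explicit error rather than fall through
with `value` never assigned.

#include <linux/errno.h>
#include <linux/pm_qos.h>
#include <linux/string.h>

static int parse_tolerance(const char *buf, s32 *value)
{
        if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
                *value = PM_QOS_LATENCY_ANY;
        else
                return -EINVAL; /* bogus string: never use *value */
        return 0;
}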

Fixes: 2d984ad132a8 (PM / QoS: Introcuce latency tolerance device PM QoS type)
Signed-off-by: Dan Carpenter <[email protected]>
Acked-by: Pavel Machek <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/base/power/sysfs.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -272,6 +272,8 @@ static ssize_t pm_qos_latency_tolerance_
value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
value = PM_QOS_LATENCY_ANY;
+ else
+ return -EINVAL;
}
ret = dev_pm_qos_update_user_latency_tolerance(dev, value);
return ret < 0 ? ret : n;


2017-07-19 10:12:14

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 78/88] sched/fair, cpumask: Export for_each_cpu_wrap()

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit c6508a39640b9a27fc2bc10cb708152672c82045 upstream.

commit c743f0a5c50f2fcbc628526279cfa24f3dabe182 upstream.

More users for for_each_cpu_wrap() have appeared. Promote the construct
to the generic cpumask interface.

The implementation is slightly modified to reduce arguments.
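
Callers now drop the &wrap out-parameter; a typical use (a hedged
sketch of the scheduler-style scan) looks like:

#include <linux/cpumask.h>
#include <linux/sched.h>

/* Scan @mask for an idle CPU, starting near @target so that
 * concurrent scanners spread out instead of all converging on the
 * lowest-numbered idle CPU. */
static int find_idle_near(const struct cpumask *mask, int target)
{
        int cpu;

        for_each_cpu_wrap(cpu, mask, target) {
                if (idle_cpu(cpu))
                        return cpu;
        }
        return -1;      /* loop terminated with cpu >= nr_cpu_ids */
}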

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Lauro Ramos Venancio <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
include/linux/cpumask.h | 17 +++++++++++++++++
kernel/sched/fair.c | 45 ++++-----------------------------------------
lib/cpumask.c | 32 ++++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 41 deletions(-)

--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -236,6 +236,23 @@ unsigned int cpumask_local_spread(unsign
(cpu) = cpumask_next_zero((cpu), (mask)), \
(cpu) < nr_cpu_ids;)

+extern int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap);
+
+/**
+ * for_each_cpu_wrap - iterate over every cpu in a mask, starting at a specified location
+ * @cpu: the (optionally unsigned) integer iterator
+ * @mask: the cpumask poiter
+ * @start: the start location
+ *
+ * The implementation does not assume any bit in @mask is set (including @start).
+ *
+ * After the loop, cpu is >= nr_cpu_ids.
+ */
+#define for_each_cpu_wrap(cpu, mask, start) \
+ for ((cpu) = cpumask_next_wrap((start)-1, (mask), (start), false); \
+ (cpu) < nr_cpumask_bits; \
+ (cpu) = cpumask_next_wrap((cpu), (mask), (start), true))
+
/**
* for_each_cpu_and - iterate over every cpu in both masks
* @cpu: the (optionally unsigned) integer iterator
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5615,43 +5615,6 @@ find_idlest_cpu(struct sched_group *grou
return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
}

-/*
- * Implement a for_each_cpu() variant that starts the scan at a given cpu
- * (@start), and wraps around.
- *
- * This is used to scan for idle CPUs; such that not all CPUs looking for an
- * idle CPU find the same CPU. The down-side is that tasks tend to cycle
- * through the LLC domain.
- *
- * Especially tbench is found sensitive to this.
- */
-
-static int cpumask_next_wrap(int n, const struct cpumask *mask, int start, int *wrapped)
-{
- int next;
-
-again:
- next = find_next_bit(cpumask_bits(mask), nr_cpumask_bits, n+1);
-
- if (*wrapped) {
- if (next >= start)
- return nr_cpumask_bits;
- } else {
- if (next >= nr_cpumask_bits) {
- *wrapped = 1;
- n = -1;
- goto again;
- }
- }
-
- return next;
-}
-
-#define for_each_cpu_wrap(cpu, mask, start, wrap) \
- for ((wrap) = 0, (cpu) = (start)-1; \
- (cpu) = cpumask_next_wrap((cpu), (mask), (start), &(wrap)), \
- (cpu) < nr_cpumask_bits; )
-
#ifdef CONFIG_SCHED_SMT

static inline void set_idle_cores(int cpu, int val)
@@ -5711,7 +5674,7 @@ unlock:
static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
{
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
- int core, cpu, wrap;
+ int core, cpu;

if (!static_branch_likely(&sched_smt_present))
return -1;
@@ -5721,7 +5684,7 @@ static int select_idle_core(struct task_

cpumask_and(cpus, sched_domain_span(sd), &p->cpus_allowed);

- for_each_cpu_wrap(core, cpus, target, wrap) {
+ for_each_cpu_wrap(core, cpus, target) {
bool idle = true;

for_each_cpu(cpu, cpu_smt_mask(core)) {
@@ -5787,7 +5750,7 @@ static int select_idle_cpu(struct task_s
u64 avg_cost, avg_idle = this_rq()->avg_idle;
u64 time, cost;
s64 delta;
- int cpu, wrap;
+ int cpu;

this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
if (!this_sd)
@@ -5804,7 +5767,7 @@ static int select_idle_cpu(struct task_s

time = local_clock();

- for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {
+ for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
continue;
if (idle_cpu(cpu))
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -43,6 +43,38 @@ int cpumask_any_but(const struct cpumask
}
EXPORT_SYMBOL(cpumask_any_but);

+/**
+ * cpumask_next_wrap - helper to implement for_each_cpu_wrap
+ * @n: the cpu prior to the place to search
+ * @mask: the cpumask pointer
+ * @start: the start point of the iteration
+ * @wrap: assume @n crossing @start terminates the iteration
+ *
+ * Returns >= nr_cpu_ids on completion
+ *
+ * Note: the @wrap argument is required for the start condition when
+ * we cannot assume @start is set in @mask.
+ */
+int cpumask_next_wrap(int n, const struct cpumask *mask, int start, bool wrap)
+{
+ int next;
+
+again:
+ next = cpumask_next(n, mask);
+
+ if (wrap && n < start && next >= start) {
+ return nr_cpumask_bits;
+
+ } else if (next >= nr_cpumask_bits) {
+ wrap = true;
+ n = -1;
+ goto again;
+ }
+
+ return next;
+}
+EXPORT_SYMBOL(cpumask_next_wrap);
+
/* These are not inline because of header tangles. */
#ifdef CONFIG_CPUMASK_OFFSTACK
/**


2017-07-19 10:12:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 82/88] PM / wakeirq: Convert to SRCU

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <[email protected]>

commit ea0212f40c6bc0594c8eff79266759e3ecd4bacc upstream.

The wakeirq infrastructure uses RCU to protect the list of wakeirqs. That
breaks the irq bus locking infrastructure, which allows sleeping
functions to be called so that interrupt controllers behind slow busses,
e.g. i2c, can be handled.

The wakeirq functions hold rcu_read_lock and call into irq functions, which
in case of interrupts using the irq bus locking will trigger a
might_sleep() splat.

Convert the wakeirq infrastructure to Sleepable RCU and unbreak it.
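
The resulting read-side pattern (a minimal sketch): SRCU readers are
allowed to sleep, so calling into irq bus locking paths that take
mutexes or talk to i2c is legal inside the critical section.

#include <linux/srcu.h>

DEFINE_STATIC_SRCU(example_srcu);

static void walk_list(void)
{
        int idx;

        idx = srcu_read_lock(&example_srcu);
        /* list_for_each_entry_rcu() walk; sleeping is permitted */
        srcu_read_unlock(&example_srcu, idx);
}

/* Writers pair list_del_rcu() with synchronize_srcu(&example_srcu)
 * instead of synchronize_rcu(). */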

Fixes: 4990d4fe327b (PM / Wakeirq: Add automated device wake IRQ handling)
Reported-by: Brian Norris <[email protected]>
Suggested-by: Paul E. McKenney <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Tested-by: Tony Lindgren <[email protected]>
Tested-by: Brian Norris <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/base/power/wakeup.c | 32 ++++++++++++++++++--------------
1 file changed, 18 insertions(+), 14 deletions(-)

--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -60,6 +60,8 @@ static LIST_HEAD(wakeup_sources);

static DECLARE_WAIT_QUEUE_HEAD(wakeup_count_wait_queue);

+DEFINE_STATIC_SRCU(wakeup_srcu);
+
static struct wakeup_source deleted_ws = {
.name = "deleted",
.lock = __SPIN_LOCK_UNLOCKED(deleted_ws.lock),
@@ -198,7 +200,7 @@ void wakeup_source_remove(struct wakeup_
spin_lock_irqsave(&events_lock, flags);
list_del_rcu(&ws->entry);
spin_unlock_irqrestore(&events_lock, flags);
- synchronize_rcu();
+ synchronize_srcu(&wakeup_srcu);
}
EXPORT_SYMBOL_GPL(wakeup_source_remove);

@@ -332,12 +334,12 @@ void device_wakeup_detach_irq(struct dev
void device_wakeup_arm_wake_irqs(void)
{
struct wakeup_source *ws;
+ int srcuidx;

- rcu_read_lock();
+ srcuidx = srcu_read_lock(&wakeup_srcu);
list_for_each_entry_rcu(ws, &wakeup_sources, entry)
dev_pm_arm_wake_irq(ws->wakeirq);
-
- rcu_read_unlock();
+ srcu_read_unlock(&wakeup_srcu, srcuidx);
}

/**
@@ -348,12 +350,12 @@ void device_wakeup_arm_wake_irqs(void)
void device_wakeup_disarm_wake_irqs(void)
{
struct wakeup_source *ws;
+ int srcuidx;

- rcu_read_lock();
+ srcuidx = srcu_read_lock(&wakeup_srcu);
list_for_each_entry_rcu(ws, &wakeup_sources, entry)
dev_pm_disarm_wake_irq(ws->wakeirq);
-
- rcu_read_unlock();
+ srcu_read_unlock(&wakeup_srcu, srcuidx);
}

/**
@@ -805,10 +807,10 @@ EXPORT_SYMBOL_GPL(pm_wakeup_event);
void pm_print_active_wakeup_sources(void)
{
struct wakeup_source *ws;
- int active = 0;
+ int srcuidx, active = 0;
struct wakeup_source *last_activity_ws = NULL;

- rcu_read_lock();
+ srcuidx = srcu_read_lock(&wakeup_srcu);
list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
if (ws->active) {
pr_debug("active wakeup source: %s\n", ws->name);
@@ -824,7 +826,7 @@ void pm_print_active_wakeup_sources(void
if (!active && last_activity_ws)
pr_debug("last active wakeup source: %s\n",
last_activity_ws->name);
- rcu_read_unlock();
+ srcu_read_unlock(&wakeup_srcu, srcuidx);
}
EXPORT_SYMBOL_GPL(pm_print_active_wakeup_sources);

@@ -951,8 +953,9 @@ void pm_wakep_autosleep_enabled(bool set
{
struct wakeup_source *ws;
ktime_t now = ktime_get();
+ int srcuidx;

- rcu_read_lock();
+ srcuidx = srcu_read_lock(&wakeup_srcu);
list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
spin_lock_irq(&ws->lock);
if (ws->autosleep_enabled != set) {
@@ -966,7 +969,7 @@ void pm_wakep_autosleep_enabled(bool set
}
spin_unlock_irq(&ws->lock);
}
- rcu_read_unlock();
+ srcu_read_unlock(&wakeup_srcu, srcuidx);
}
#endif /* CONFIG_PM_AUTOSLEEP */

@@ -1027,15 +1030,16 @@ static int print_wakeup_source_stats(str
static int wakeup_sources_stats_show(struct seq_file *m, void *unused)
{
struct wakeup_source *ws;
+ int srcuidx;

seq_puts(m, "name\t\tactive_count\tevent_count\twakeup_count\t"
"expire_count\tactive_since\ttotal_time\tmax_time\t"
"last_change\tprevent_suspend_time\n");

- rcu_read_lock();
+ srcuidx = srcu_read_lock(&wakeup_srcu);
list_for_each_entry_rcu(ws, &wakeup_sources, entry)
print_wakeup_source_stats(m, ws);
- rcu_read_unlock();
+ srcu_read_unlock(&wakeup_srcu, srcuidx);

print_wakeup_source_stats(m, &deleted_ws);



2017-07-19 10:11:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 80/88] sched/topology: Optimize build_group_mask()

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lauro Ramos Venancio <[email protected]>

commit f32d782e31bf079f600dcec126ed117b0577e85c upstream.

The group mask is always used in intersection with the group CPUs. So,
when building the group mask, we don't have to care about CPUs that are
not part of the group.

Signed-off-by: Lauro Ramos Venancio <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/topology.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -490,12 +490,12 @@ enum s_alloc {
*/
static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
{
- const struct cpumask *span = sched_domain_span(sd);
+ const struct cpumask *sg_span = sched_group_cpus(sg);
struct sd_data *sdd = sd->private;
struct sched_domain *sibling;
int i;

- for_each_cpu(i, span) {
+ for_each_cpu(i, sg_span) {
sibling = *per_cpu_ptr(sdd->sd, i);
if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
continue;


2017-07-19 10:12:53

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 81/88] sched/topology: Fix overlapping sched_group_mask

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 73bb059f9b8a00c5e1bf2f7ca83138c05d05e600 upstream.

The point of sched_group_mask is to select those CPUs from
sched_group_cpus that can actually arrive at this balance domain.

The current code gets it wrong, as can be readily demonstrated with a
topology like:

node 0 1 2 3
0: 10 20 30 20
1: 20 10 20 30
2: 30 20 10 20
3: 20 30 20 10

Where (for example) domain 1 on CPU1 ends up with a mask that includes
CPU0:

[] CPU1 attaching sched-domain:
[] domain 0: span 0-2 level NUMA
[] groups: 1 (mask: 1), 2, 0
[] domain 1: span 0-3 level NUMA
[] groups: 0-2 (mask: 0-2) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)

This causes sched_balance_cpu() to compute the wrong CPU and
consequently should_we_balance() will terminate early resulting in
missed load-balance opportunities.

The fixed topology looks like:

[] CPU1 attaching sched-domain:
[] domain 0: span 0-2 level NUMA
[] groups: 1 (mask: 1), 2, 0
[] domain 1: span 0-3 level NUMA
[] groups: 0-2 (mask: 1) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)

(note: this relies on OVERLAP domains to always have children, this is
true because the regular topology domains are still here -- this is
before degenerate trimming)

Debugged-by: Lauro Ramos Venancio <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: e3589f6c81e4 ("sched: Allow for overlapping sched_domain spans")
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/topology.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)

--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -480,6 +480,9 @@ enum s_alloc {
* Build an iteration mask that can exclude certain CPUs from the upwards
* domain traversal.
*
+ * Only CPUs that can arrive at this group should be considered to continue
+ * balancing.
+ *
* Asymmetric node setups can result in situations where the domain tree is of
* unequal depth, make sure to skip domains that already cover the entire
* range.
@@ -497,11 +500,24 @@ static void build_group_mask(struct sche

for_each_cpu(i, sg_span) {
sibling = *per_cpu_ptr(sdd->sd, i);
- if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
+
+ /*
+ * Can happen in the asymmetric case, where these siblings are
+ * unused. The mask will not be empty because those CPUs that
+ * do have the top domain _should_ span the domain.
+ */
+ if (!sibling->child)
+ continue;
+
+ /* If we would not end up here, we can't continue from here */
+ if (!cpumask_equal(sg_span, sched_domain_span(sibling->child)))
continue;

cpumask_set_cpu(i, sched_group_mask(sg));
}
+
+ /* We must not have empty masks here */
+ WARN_ON_ONCE(cpumask_empty(sched_group_mask(sg)));
}

/*


2017-07-19 10:11:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 79/88] sched/topology: Fix building of overlapping sched-groups

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 0372dd2736e02672ac6e189c31f7d8c02ad543cd upstream.

When building the overlapping groups, we very obviously should start
with the previous domain of _this_ @cpu, not CPU-0.

This can be readily demonstrated with a topology like:

node 0 1 2 3
0: 10 20 30 20
1: 20 10 20 30
2: 30 20 10 20
3: 20 30 20 10

Where (for example) CPU1 ends up generating the following nonsensical groups:

[] CPU1 attaching sched-domain:
[] domain 0: span 0-2 level NUMA
[] groups: 1 2 0
[] domain 1: span 0-3 level NUMA
[] groups: 1-3 (cpu_capacity = 3072) 0-1,3 (cpu_capacity = 3072)

Where the fact that domain 1 doesn't include a group with span 0-2 is
the obvious fail.

With patch this looks like:

[] CPU1 attaching sched-domain:
[] domain 0: span 0-2 level NUMA
[] groups: 1 0 2
[] domain 1: span 0-3 level NUMA
[] groups: 0-2 (cpu_capacity = 3072) 0,2-3 (cpu_capacity = 3072)

Debugged-by: Lauro Ramos Venancio <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: e3589f6c81e4 ("sched: Allow for overlapping sched_domain spans")
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/sched/topology.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -525,7 +525,7 @@ build_overlap_sched_groups(struct sched_

cpumask_clear(covered);

- for_each_cpu(i, span) {
+ for_each_cpu_wrap(i, span, cpu) {
struct cpumask *sg_span;

if (cpumask_test_cpu(i, covered))


2017-07-19 10:15:42

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 88/88] kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Haozhong Zhang <[email protected]>

commit 691bd4340bef49cf7e5855d06cf24444b5bf2d85 upstream.

It's easier for host applications, such as QEMU, if they can always
access guest MSR_IA32_BNDCFGS in the VMCS, even when MPX is disabled in
the guest CPUID.

Signed-off-by: Haozhong Zhang <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/kvm/vmx.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3200,7 +3200,8 @@ static int vmx_get_msr(struct kvm_vcpu *
msr_info->data = vmcs_readl(GUEST_SYSENTER_ESP);
break;
case MSR_IA32_BNDCFGS:
- if (!kvm_mpx_supported() || !guest_cpuid_has_mpx(vcpu))
+ if (!kvm_mpx_supported() ||
+ (!msr_info->host_initiated && !guest_cpuid_has_mpx(vcpu)))
return 1;
msr_info->data = vmcs_read64(GUEST_BNDCFGS);
break;
@@ -3282,7 +3283,8 @@ static int vmx_set_msr(struct kvm_vcpu *
vmcs_writel(GUEST_SYSENTER_ESP, data);
break;
case MSR_IA32_BNDCFGS:
- if (!kvm_mpx_supported() || !guest_cpuid_has_mpx(vcpu))
+ if (!kvm_mpx_supported() ||
+ (!msr_info->host_initiated && !guest_cpuid_has_mpx(vcpu)))
return 1;
if (is_noncanonical_address(data & PAGE_MASK) ||
(data & MSR_IA32_BNDCFGS_RSVD))


2017-07-19 10:16:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 65/88] rcu: Add memory barriers for NOCB leader wakeup

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Paul E. McKenney <[email protected]>

commit 6b5fc3a1331810db407c9e0e673dc1837afdc9d0 upstream.

Wait/wakeup operations do not guarantee ordering on their own. Instead,
either locking or memory barriers are required. This commit therefore
adds memory barriers to wake_nocb_leader() and nocb_leader_wait().
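
The idiom being added (a sketch with hypothetical names, not the tree
code itself): a full barrier after publishing state and before waking,
so the wakee cannot act on stale values.

#include <linux/compiler.h>
#include <linux/swait.h>

struct leader {
        bool sleep;
        struct swait_queue_head wq;
};

static void wake_leader(struct leader *l)
{
        WRITE_ONCE(l->sleep, false);
        smp_mb();       /* order the flag store before swake_up() */
        swake_up(&l->wq);
}

/* On the wakee side, after returning from the swait, an smp_mb()
 * orders the wakeup before any reads of the queued-callback lists. */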

Signed-off-by: Paul E. McKenney <[email protected]>
Tested-by: Krister Johansen <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/rcu/tree_plugin.h | 2 ++
1 file changed, 2 insertions(+)

--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1769,6 +1769,7 @@ static void wake_nocb_leader(struct rcu_
if (READ_ONCE(rdp_leader->nocb_leader_sleep) || force) {
/* Prior smp_mb__after_atomic() orders against prior enqueue. */
WRITE_ONCE(rdp_leader->nocb_leader_sleep, false);
+ smp_mb(); /* ->nocb_leader_sleep before swake_up(). */
swake_up(&rdp_leader->nocb_wq);
}
}
@@ -2023,6 +2024,7 @@ wait_again:
* nocb_gp_head, where they await a grace period.
*/
gotcbs = false;
+ smp_mb(); /* wakeup before ->nocb_head reads. */
for (rdp = my_rdp; rdp; rdp = rdp->nocb_next_follower) {
rdp->nocb_gp_head = READ_ONCE(rdp->nocb_head);
if (!rdp->nocb_gp_head)


2017-07-19 10:11:37

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 69/88] mnt: In propgate_umount handle visiting mounts in any order

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eric W. Biederman <[email protected]>

commit 99b19d16471e9c3faa85cad38abc9cbbe04c6d55 upstream.

While investigating some poor umount performance I realized that in
the case of overlapping mount trees where some of the mounts are locked,
the code has been failing to unmount all of the mounts it should
have been unmounting.

This failure to unmount all of the necessary
mounts can be reproduced with:

$ cat locked_mounts_test.sh

mount -t tmpfs test-base /mnt
mount --make-shared /mnt
mkdir -p /mnt/b

mount -t tmpfs test1 /mnt/b
mount --make-shared /mnt/b
mkdir -p /mnt/b/10

mount -t tmpfs test2 /mnt/b/10
mount --make-shared /mnt/b/10
mkdir -p /mnt/b/10/20

mount --rbind /mnt/b /mnt/b/10/20

unshare -Urm --propagation unchanged /bin/sh -c 'sleep 5; if [ $(grep test /proc/self/mountinfo | wc -l) -eq 1 ] ; then echo SUCCESS ; else echo FAILURE ; fi'
sleep 1
umount -l /mnt/b
wait %%

$ unshare -Urm ./locked_mounts_test.sh

This failure is corrected by removing the prepass that marks mounts
that may be umounted.

A first pass is added that umounts mounts if possible and, if not, sets
the mount mark if they could be unmounted were they not locked, and adds
them to a list of umount possibilities. This first pass reconsiders a
mount's parent if the parent is on the list of umount possibilities,
ensuring that information about umountability will pass from child to
parent.

A second pass then walks through all mounts that are umounted and processes
their children, unmounting them or marking them for reparenting.

A last pass cleans up the state on the mounts that could not be umounted
and if applicable reparents them to their first parent that remained
mounted.

While a bit longer than the old code, this code is much more robust,
as it allows information to flow up from the leaves and down
from the trunk, making the order in which mounts are encountered
in the umount propagation tree irrelevant.

Fixes: 0c56fe31420c ("mnt: Don't propagate unmounts to locked mounts")
Reviewed-by: Andrei Vagin <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/mount.h | 2
fs/namespace.c | 2
fs/pnode.c | 150 +++++++++++++++++++++++++++++++++------------------------
3 files changed, 91 insertions(+), 63 deletions(-)

--- a/fs/mount.h
+++ b/fs/mount.h
@@ -58,7 +58,7 @@ struct mount {
struct mnt_namespace *mnt_ns; /* containing namespace */
struct mountpoint *mnt_mp; /* where is it mounted */
struct hlist_node mnt_mp_list; /* list mounts with the same mountpoint */
- struct list_head mnt_reparent; /* reparent list entry */
+ struct list_head mnt_umounting; /* list entry for umount propagation */
#ifdef CONFIG_FSNOTIFY
struct hlist_head mnt_fsnotify_marks;
__u32 mnt_fsnotify_mask;
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -236,7 +236,7 @@ static struct mount *alloc_vfsmnt(const
INIT_LIST_HEAD(&mnt->mnt_slave_list);
INIT_LIST_HEAD(&mnt->mnt_slave);
INIT_HLIST_NODE(&mnt->mnt_mp_list);
- INIT_LIST_HEAD(&mnt->mnt_reparent);
+ INIT_LIST_HEAD(&mnt->mnt_umounting);
#ifdef CONFIG_FSNOTIFY
INIT_HLIST_HEAD(&mnt->mnt_fsnotify_marks);
#endif
--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -413,86 +413,95 @@ void propagate_mount_unlock(struct mount
}
}

-/*
- * Mark all mounts that the MNT_LOCKED logic will allow to be unmounted.
- */
-static void mark_umount_candidates(struct mount *mnt)
+static void umount_one(struct mount *mnt, struct list_head *to_umount)
{
- struct mount *parent = mnt->mnt_parent;
- struct mount *m;
-
- BUG_ON(parent == mnt);
-
- for (m = propagation_next(parent, parent); m;
- m = propagation_next(m, parent)) {
- struct mount *child = __lookup_mnt(&m->mnt,
- mnt->mnt_mountpoint);
- if (!child || (child->mnt.mnt_flags & MNT_UMOUNT))
- continue;
- if (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m)) {
- SET_MNT_MARK(child);
- }
- }
+ CLEAR_MNT_MARK(mnt);
+ mnt->mnt.mnt_flags |= MNT_UMOUNT;
+ list_del_init(&mnt->mnt_child);
+ list_del_init(&mnt->mnt_umounting);
+ list_move_tail(&mnt->mnt_list, to_umount);
}

/*
* NOTE: unmounting 'mnt' naturally propagates to all other mounts its
* parent propagates to.
*/
-static void __propagate_umount(struct mount *mnt, struct list_head *to_reparent)
+static bool __propagate_umount(struct mount *mnt,
+ struct list_head *to_umount,
+ struct list_head *to_restore)
{
- struct mount *parent = mnt->mnt_parent;
- struct mount *m;
+ bool progress = false;
+ struct mount *child;

- BUG_ON(parent == mnt);
-
- for (m = propagation_next(parent, parent); m;
- m = propagation_next(m, parent)) {
- struct mount *topper;
- struct mount *child = __lookup_mnt(&m->mnt,
- mnt->mnt_mountpoint);
- /*
- * umount the child only if the child has no children
- * and the child is marked safe to unmount.
- */
- if (!child || !IS_MNT_MARKED(child))
+ /*
+ * The state of the parent won't change if this mount is
+ * already unmounted or marked as without children.
+ */
+ if (mnt->mnt.mnt_flags & (MNT_UMOUNT | MNT_MARKED))
+ goto out;
+
+ /* Verify topper is the only grandchild that has not been
+ * speculatively unmounted.
+ */
+ list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) {
+ if (child->mnt_mountpoint == mnt->mnt.mnt_root)
+ continue;
+ if (!list_empty(&child->mnt_umounting) && IS_MNT_MARKED(child))
continue;
- CLEAR_MNT_MARK(child);
+ /* Found a mounted child */
+ goto children;
+ }

- /* If there is exactly one mount covering all of child
- * replace child with that mount.
- */
- topper = find_topper(child);
- if (topper)
- list_add_tail(&topper->mnt_reparent, to_reparent);
-
- if (topper || list_empty(&child->mnt_mounts)) {
- list_del_init(&child->mnt_child);
- list_del_init(&child->mnt_reparent);
- child->mnt.mnt_flags |= MNT_UMOUNT;
- list_move_tail(&child->mnt_list, &mnt->mnt_list);
+ /* Mark mounts that can be unmounted if not locked */
+ SET_MNT_MARK(mnt);
+ progress = true;
+
+ /* If a mount is without children and not locked umount it. */
+ if (!IS_MNT_LOCKED(mnt)) {
+ umount_one(mnt, to_umount);
+ } else {
+children:
+ list_move_tail(&mnt->mnt_umounting, to_restore);
+ }
+out:
+ return progress;
+}
+
+static void umount_list(struct list_head *to_umount,
+ struct list_head *to_restore)
+{
+ struct mount *mnt, *child, *tmp;
+ list_for_each_entry(mnt, to_umount, mnt_list) {
+ list_for_each_entry_safe(child, tmp, &mnt->mnt_mounts, mnt_child) {
+ /* topper? */
+ if (child->mnt_mountpoint == mnt->mnt.mnt_root)
+ list_move_tail(&child->mnt_umounting, to_restore);
+ else
+ umount_one(child, to_umount);
}
}
}

-static void reparent_mounts(struct list_head *to_reparent)
+static void restore_mounts(struct list_head *to_restore)
{
- while (!list_empty(to_reparent)) {
+ /* Restore mounts to a clean working state */
+ while (!list_empty(to_restore)) {
struct mount *mnt, *parent;
struct mountpoint *mp;

- mnt = list_first_entry(to_reparent, struct mount, mnt_reparent);
- list_del_init(&mnt->mnt_reparent);
+ mnt = list_first_entry(to_restore, struct mount, mnt_umounting);
+ CLEAR_MNT_MARK(mnt);
+ list_del_init(&mnt->mnt_umounting);

- /* Where should this mount be reparented to? */
+ /* Should this mount be reparented? */
mp = mnt->mnt_mp;
parent = mnt->mnt_parent;
while (parent->mnt.mnt_flags & MNT_UMOUNT) {
mp = parent->mnt_mp;
parent = parent->mnt_parent;
}
-
- mnt_change_mountpoint(parent, mp, mnt);
+ if (parent != mnt->mnt_parent)
+ mnt_change_mountpoint(parent, mp, mnt);
}
}

@@ -506,15 +515,34 @@ static void reparent_mounts(struct list_
int propagate_umount(struct list_head *list)
{
struct mount *mnt;
- LIST_HEAD(to_reparent);
-
- list_for_each_entry_reverse(mnt, list, mnt_list)
- mark_umount_candidates(mnt);
+ LIST_HEAD(to_restore);
+ LIST_HEAD(to_umount);

- list_for_each_entry(mnt, list, mnt_list)
- __propagate_umount(mnt, &to_reparent);
+ list_for_each_entry(mnt, list, mnt_list) {
+ struct mount *parent = mnt->mnt_parent;
+ struct mount *m;
+
+ for (m = propagation_next(parent, parent); m;
+ m = propagation_next(m, parent)) {
+ struct mount *child = __lookup_mnt(&m->mnt,
+ mnt->mnt_mountpoint);
+ if (!child)
+ continue;
+
+ /* Check the child and parents while progress is made */
+ while (__propagate_umount(child,
+ &to_umount, &to_restore)) {
+ /* Is the parent a umount candidate? */
+ child = child->mnt_parent;
+ if (list_empty(&child->mnt_umounting))
+ break;
+ }
+ }
+ }

- reparent_mounts(&to_reparent);
+ umount_list(&to_umount, &to_restore);
+ restore_mounts(&to_restore);
+ list_splice_tail(&to_umount, list);

return 0;
}


2017-07-19 10:16:54

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 70/88] mnt: Make propagate_umount less slow for overlapping mount propagation trees

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eric W. Biederman <[email protected]>

commit 296990deb389c7da21c78030376ba244dc1badf5 upstream.

Andrei Vagin pointed out that the time to execute propagate_umount can go
non-linear (and take a ludicrous amount of time) when the mount
propagation trees of the mounts to be unmounted by a lazy unmount
overlap.

Make the walk of the mount propagation trees nearly linear by
remembering which mounts have already been visited, allowing
subsequent walks to detect when walking a mount propagation tree or a
subtree of a mount propagation tree would be duplicate work and to skip
them entirely.

Walk the list of mounts whose propagation trees need to be traversed
from the mount highest in the mount tree to mounts lower in the mount
tree so that odds are higher that the code will walk the largest trees
first, allowing later tree walks to be skipped entirely.

Add cleanup_umount_visitations to remove the code's memory of which
mounts have been visited.

Add the functions last_slave and skip_propagation_subtree to allow
skipping appropriate parts of the mount propagation tree without
needing to change the logic of the rest of the code.

A script to generate overlapping mount propagation trees:

$ cat run.sh
set -e
mount -t tmpfs zdtm /mnt
mkdir -p /mnt/1 /mnt/2
mount -t tmpfs zdtm /mnt/1
mount --make-shared /mnt/1
mkdir /mnt/1/1

iteration=10
if [ -n "$1" ] ; then
iteration=$1
fi

for i in $(seq $iteration); do
mount --bind /mnt/1/1 /mnt/1/1
done

mount --rbind /mnt/1 /mnt/2

TIMEFORMAT='%Rs'
nr=$(( ( 2 ** ( $iteration + 1 ) ) + 1 ))
echo -n "umount -l /mnt/1 -> $nr "
time umount -l /mnt/1

nr=$(cat /proc/self/mountinfo | grep zdtm | wc -l )
time umount -l /mnt/2

$ for i in $(seq 9 19); do echo $i; unshare -Urm bash ./run.sh $i; done

Here are the performance numbers with and without the patch:

mhash | 8192 | 8192 | 1048576 | 1048576
mounts | before | after | before | after
------------------------------------------------
1025 | 0.040s | 0.016s | 0.038s | 0.019s
2049 | 0.094s | 0.017s | 0.080s | 0.018s
4097 | 0.243s | 0.019s | 0.206s | 0.023s
8193 | 1.202s | 0.028s | 1.562s | 0.032s
16385 | 9.635s | 0.036s | 9.952s | 0.041s
32769 | 60.928s | 0.063s | 44.321s | 0.064s
65537 | | 0.097s | | 0.097s
131073 | | 0.233s | | 0.176s
262145 | | 0.653s | | 0.344s
524289 | | 2.305s | | 0.735s
1048577 | | 7.107s | | 2.603s

Andrei Vagin reports that fixing the performance problem is part of the
work to fix CVE-2016-6213.

Fixes: a05964f3917c ("[PATCH] shared mounts handling: umount")
Reported-by: Andrei Vagin <[email protected]>
Reviewed-by: Andrei Vagin <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/pnode.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 62 insertions(+), 1 deletion(-)

--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -24,6 +24,11 @@ static inline struct mount *first_slave(
return list_entry(p->mnt_slave_list.next, struct mount, mnt_slave);
}

+static inline struct mount *last_slave(struct mount *p)
+{
+ return list_entry(p->mnt_slave_list.prev, struct mount, mnt_slave);
+}
+
static inline struct mount *next_slave(struct mount *p)
{
return list_entry(p->mnt_slave.next, struct mount, mnt_slave);
@@ -162,6 +167,19 @@ static struct mount *propagation_next(st
}
}

+static struct mount *skip_propagation_subtree(struct mount *m,
+ struct mount *origin)
+{
+ /*
+ * Advance m such that propagation_next will not return
+ * the slaves of m.
+ */
+ if (!IS_MNT_NEW(m) && !list_empty(&m->mnt_slave_list))
+ m = last_slave(m);
+
+ return m;
+}
+
static struct mount *next_group(struct mount *m, struct mount *origin)
{
while (1) {
@@ -505,6 +523,15 @@ static void restore_mounts(struct list_h
}
}

+static void cleanup_umount_visitations(struct list_head *visited)
+{
+ while (!list_empty(visited)) {
+ struct mount *mnt =
+ list_first_entry(visited, struct mount, mnt_umounting);
+ list_del_init(&mnt->mnt_umounting);
+ }
+}
+
/*
* collect all mounts that receive propagation from the mount in @list,
* and return these additional mounts in the same list.
@@ -517,11 +544,23 @@ int propagate_umount(struct list_head *l
struct mount *mnt;
LIST_HEAD(to_restore);
LIST_HEAD(to_umount);
+ LIST_HEAD(visited);

- list_for_each_entry(mnt, list, mnt_list) {
+ /* Find candidates for unmounting */
+ list_for_each_entry_reverse(mnt, list, mnt_list) {
struct mount *parent = mnt->mnt_parent;
struct mount *m;

+ /*
+ * If this mount has already been visited it is known that it's
+ * entire peer group and all of their slaves in the propagation
+ * tree for the mountpoint has already been visited and there is
+ * no need to visit them again.
+ */
+ if (!list_empty(&mnt->mnt_umounting))
+ continue;
+
+ list_add_tail(&mnt->mnt_umounting, &visited);
for (m = propagation_next(parent, parent); m;
m = propagation_next(m, parent)) {
struct mount *child = __lookup_mnt(&m->mnt,
@@ -529,6 +568,27 @@ int propagate_umount(struct list_head *l
if (!child)
continue;

+ if (!list_empty(&child->mnt_umounting)) {
+ /*
+ * If the child has already been visited it is
+ * know that it's entire peer group and all of
+ * their slaves in the propgation tree for the
+ * mountpoint has already been visited and there
+ * is no need to visit this subtree again.
+ */
+ m = skip_propagation_subtree(m, parent);
+ continue;
+ } else if (child->mnt.mnt_flags & MNT_UMOUNT) {
+ /*
+ * We have come accross an partially unmounted
+ * mount in list that has not been visited yet.
+ * Remember it has been visited and continue
+ * about our merry way.
+ */
+ list_add_tail(&child->mnt_umounting, &visited);
+ continue;
+ }
+
/* Check the child and parents while progress is made */
while (__propagate_umount(child,
&to_umount, &to_restore)) {
@@ -542,6 +602,7 @@ int propagate_umount(struct list_head *l

umount_list(&to_umount, &to_restore);
restore_mounts(&to_restore);
+ cleanup_umount_visitations(&visited);
list_splice_tail(&to_umount, list);

return 0;


2017-07-19 10:17:17

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 68/88] mnt: In umount propagation reparent in a separate pass

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eric W. Biederman <[email protected]>

commit 570487d3faf2a1d8a220e6ee10f472163123d7da upstream.

It was observed that in some pathological cases the current code
does not unmount everything it should. After investigation it
was determined that the issue is that mnt_change_mountpoint can
change which mounts are available to be unmounted during mount
propagation, which is wrong.

The trivial reproducer is:
$ cat ./pathological.sh

mount -t tmpfs test-base /mnt
cd /mnt
mkdir 1 2 1/1
mount --bind 1 1
mount --make-shared 1
mount --bind 1 2
mount --bind 1/1 1/1
mount --bind 1/1 1/1
echo
grep test-base /proc/self/mountinfo
umount 1/1
echo
grep test-base /proc/self/mountinfo

$ unshare -Urm ./pathological.sh

The expected output looks like:
46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
49 54 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
50 53 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
51 49 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
54 47 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
53 48 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
52 50 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000

46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000

The output without the fix looks like:
46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
49 54 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
50 53 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
51 49 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
54 47 0:25 /1/1 /mnt/1/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
53 48 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
52 50 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000

46 31 0:25 / /mnt rw,relatime - tmpfs test-base rw,uid=1000,gid=1000
47 46 0:25 /1 /mnt/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
48 46 0:25 /1 /mnt/2 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000
52 48 0:25 /1/1 /mnt/2/1 rw,relatime shared:1 - tmpfs test-base rw,uid=1000,gid=1000

That last mount in the output was in the propagation tree to be unmounted, but
it was missed because mnt_change_mountpoint changed its parent before the walk
through the mount propagation tree observed it.

Fixes: 1064f874abc0 ("mnt: Tuck mounts under others instead of creating shadow/side mounts.")
Acked-by: Andrei Vagin <[email protected]>
Reviewed-by: Ram Pai <[email protected]>
Signed-off-by: "Eric W. Biederman" <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/mount.h | 1 +
fs/namespace.c | 1 +
fs/pnode.c | 35 ++++++++++++++++++++++++++++++-----
3 files changed, 32 insertions(+), 5 deletions(-)

--- a/fs/mount.h
+++ b/fs/mount.h
@@ -58,6 +58,7 @@ struct mount {
struct mnt_namespace *mnt_ns; /* containing namespace */
struct mountpoint *mnt_mp; /* where is it mounted */
struct hlist_node mnt_mp_list; /* list mounts with the same mountpoint */
+ struct list_head mnt_reparent; /* reparent list entry */
#ifdef CONFIG_FSNOTIFY
struct hlist_head mnt_fsnotify_marks;
__u32 mnt_fsnotify_mask;
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -236,6 +236,7 @@ static struct mount *alloc_vfsmnt(const
INIT_LIST_HEAD(&mnt->mnt_slave_list);
INIT_LIST_HEAD(&mnt->mnt_slave);
INIT_HLIST_NODE(&mnt->mnt_mp_list);
+ INIT_LIST_HEAD(&mnt->mnt_reparent);
#ifdef CONFIG_FSNOTIFY
INIT_HLIST_HEAD(&mnt->mnt_fsnotify_marks);
#endif
--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -439,7 +439,7 @@ static void mark_umount_candidates(struc
* NOTE: unmounting 'mnt' naturally propagates to all other mounts its
* parent propagates to.
*/
-static void __propagate_umount(struct mount *mnt)
+static void __propagate_umount(struct mount *mnt, struct list_head *to_reparent)
{
struct mount *parent = mnt->mnt_parent;
struct mount *m;
@@ -464,17 +464,38 @@ static void __propagate_umount(struct mo
*/
topper = find_topper(child);
if (topper)
- mnt_change_mountpoint(child->mnt_parent, child->mnt_mp,
- topper);
+ list_add_tail(&topper->mnt_reparent, to_reparent);

- if (list_empty(&child->mnt_mounts)) {
+ if (topper || list_empty(&child->mnt_mounts)) {
list_del_init(&child->mnt_child);
+ list_del_init(&child->mnt_reparent);
child->mnt.mnt_flags |= MNT_UMOUNT;
list_move_tail(&child->mnt_list, &mnt->mnt_list);
}
}
}

+static void reparent_mounts(struct list_head *to_reparent)
+{
+ while (!list_empty(to_reparent)) {
+ struct mount *mnt, *parent;
+ struct mountpoint *mp;
+
+ mnt = list_first_entry(to_reparent, struct mount, mnt_reparent);
+ list_del_init(&mnt->mnt_reparent);
+
+ /* Where should this mount be reparented to? */
+ mp = mnt->mnt_mp;
+ parent = mnt->mnt_parent;
+ while (parent->mnt.mnt_flags & MNT_UMOUNT) {
+ mp = parent->mnt_mp;
+ parent = parent->mnt_parent;
+ }
+
+ mnt_change_mountpoint(parent, mp, mnt);
+ }
+}
+
/*
* collect all mounts that receive propagation from the mount in @list,
* and return these additional mounts in the same list.
@@ -485,11 +506,15 @@ static void __propagate_umount(struct mo
int propagate_umount(struct list_head *list)
{
struct mount *mnt;
+ LIST_HEAD(to_reparent);

list_for_each_entry_reverse(mnt, list, mnt_list)
mark_umount_candidates(mnt);

list_for_each_entry(mnt, list, mnt_list)
- __propagate_umount(mnt);
+ __propagate_umount(mnt, &to_reparent);
+
+ reparent_mounts(&to_reparent);
+
return 0;
}


2017-07-19 10:17:54

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 66/88] nvmem: core: fix leaks on registration errors

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Johan Hovold <[email protected]>

commit 3360acdf839170b612f5b212539694c20e3f16d0 upstream.

Make sure to deregister and release the nvmem device and underlying
memory on registration errors.

Note that the private data must be freed using put_device() once the
struct device has been initialised.

Also note that there's a related reference leak in the deregistration
function, as reported by Mika Westerberg, which is being fixed separately.
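
The driver-core rule behind the fix (a hedged skeleton with
hypothetical names): once the struct device has been initialised, its
embedded kobject owns the allocation, so error paths must unwind with
device_del()/put_device() rather than kfree().

#include <linux/device.h>

struct example { struct device dev; };          /* hypothetical */

static int setup_extras(struct example *ex)     /* stand-in */
{
        return 0;
}

static int register_example(struct example *ex)
{
        int rval;

        device_initialize(&ex->dev);    /* refcount is live from here */

        rval = device_add(&ex->dev);
        if (rval)
                goto err_put_device;

        rval = setup_extras(ex);
        if (rval)
                goto err_device_del;

        return 0;

err_device_del:
        device_del(&ex->dev);   /* undoes device_add() only */
err_put_device:
        put_device(&ex->dev);   /* final ref: ->release() frees ex */
        return rval;
}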

Fixes: b6c217ab9be6 ("nvmem: Add backwards compatibility support for older EEPROM drivers.")
Fixes: eace75cfdcf7 ("nvmem: Add a simple NVMEM framework for nvmem providers")
Cc: Andrew Lunn <[email protected]>
Cc: Srinivas Kandagatla <[email protected]>
Cc: Mika Westerberg <[email protected]>
Signed-off-by: Johan Hovold <[email protected]>
Acked-by: Andrey Smirnov <[email protected]>
Signed-off-by: Srinivas Kandagatla <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/nvmem/core.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -488,21 +488,24 @@ struct nvmem_device *nvmem_register(cons

rval = device_add(&nvmem->dev);
if (rval)
- goto out;
+ goto err_put_device;

if (config->compat) {
rval = nvmem_setup_compat(nvmem, config);
if (rval)
- goto out;
+ goto err_device_del;
}

if (config->cells)
nvmem_add_cells(nvmem, config);

return nvmem;
-out:
- ida_simple_remove(&nvmem_ida, nvmem->id);
- kfree(nvmem);
+
+err_device_del:
+ device_del(&nvmem->dev);
+err_put_device:
+ put_device(&nvmem->dev);
+
return ERR_PTR(rval);
}
EXPORT_SYMBOL_GPL(nvmem_register);


2017-07-19 10:18:14

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 61/88] powerpc/kexec: Fix radix to hash kexec due to IAMR/AMOR

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Balbir Singh <[email protected]>

commit 1e2a516e89fc412a754327522ab271b42f99c6b4 upstream.

This patch fixes a crash seen while doing a kexec from radix mode to
hash mode. Key 0 is special: in hash mode it is used in the RPN by
default (we set the key values to 0 today), while in radix mode it is
used to control supervisor<->user access. Because hash uses key 0 by
default, the first instruction after the switch causes a crash on
kexec.

Commit 3b10d0095a1e ("powerpc/mm/radix: Prevent kernel execution of
user space") introduced the setting of IAMR and AMOR values to prevent
execution of user mode instructions from supervisor mode. We need to
clean up these SPRs on kexec.

Fixes: 3b10d0095a1e ("powerpc/mm/radix: Prevent kernel execution of user space")
Reported-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Balbir Singh <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/powerpc/kernel/misc_64.S | 12 ++++++++++++
1 file changed, 12 insertions(+)

--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -614,6 +614,18 @@ _GLOBAL(kexec_sequence)
li r0,0
std r0,16(r1)

+BEGIN_FTR_SECTION
+ /*
+ * This is the best time to turn AMR/IAMR off.
+ * key 0 is used in radix for supervisor<->user
+ * protection, but on hash key 0 is reserved
+ * ideally we want to enter with a clean state.
+ * NOTE, we rely on r0 being 0 from above.
+ */
+ mtspr SPRN_IAMR,r0
+ mtspr SPRN_AMOR,r0
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+
/* save regs for local vars on new stack.
* yes, we won't go back, but ...
*/


2017-07-19 10:11:14

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 58/88] powerpc: move ELF_ET_DYN_BASE to 4GB / 4MB

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit 47ebb09d54856500c5a5e14824781902b3bb738e upstream.

Now that explicitly executed loaders are loaded in the mmap region, we
have more freedom to decide where we position PIE binaries in the
address space to avoid possible collisions with mmap or stack regions.

For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit
address space for 32-bit pointers. On 32-bit use 4MB, which is the
traditional x86 minimum load location, likely to avoid historically
requiring a 4MB page table entry when only a portion of the first 4MB
would be used (since the NULL address is avoided).

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
Tested-by: Michael Ellerman <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Pratyush Anand <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/powerpc/include/asm/elf.h | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

--- a/arch/powerpc/include/asm/elf.h
+++ b/arch/powerpc/include/asm/elf.h
@@ -23,12 +23,13 @@
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE PAGE_SIZE

-/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
- use of this is to invoke "./ld.so someprog" to test out a new version of
- the loader. We need to make sure that it is out of the way of the program
- that it will "exec", and that there is sufficient room for the brk. */
-
-#define ELF_ET_DYN_BASE 0x20000000
+/*
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
+ */
+#define ELF_ET_DYN_BASE (is_32bit_task() ? 0x000400000UL : \
+ 0x100000000UL)

#define ELF_CORE_EFLAGS (is_elf2_task() ? 2 : 0)



2017-07-19 10:18:41

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 60/88] exec: Limit arg stack to at most 75% of _STK_LIM

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit da029c11e6b12f321f36dac8771e833b65cec962 upstream.

To avoid pathological stack usage or the need to special-case setuid
execs, just limit all arg stack usage to at most 75% of _STK_LIM (6MB).
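
A worked example of the new cap, assuming the usual _STK_LIM of 8MB
(so 3/4 of it is the 6MB above):

	limit = min(_STK_LIM / 4 * 3, rlimit(RLIMIT_STACK) / 4);

	/* default 8MB stack rlimit:     min(6MB, 2MB) -> 2MB for argv+env */
	/* RLIMIT_STACK = RLIM_INFINITY: the 6MB cap applies regardless    */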

Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/exec.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)

--- a/fs/exec.c
+++ b/fs/exec.c
@@ -220,8 +220,7 @@ static struct page *get_arg_page(struct

if (write) {
unsigned long size = bprm->vma->vm_end - bprm->vma->vm_start;
- unsigned long ptr_size;
- struct rlimit *rlim;
+ unsigned long ptr_size, limit;

/*
* Since the stack will hold pointers to the strings, we
@@ -250,14 +249,16 @@ static struct page *get_arg_page(struct
return page;

/*
- * Limit to 1/4-th the stack size for the argv+env strings.
+ * Limit to 1/4 of the max stack size or 3/4 of _STK_LIM
+ * (whichever is smaller) for the argv+env strings.
* This ensures that:
* - the remaining binfmt code will not run out of stack space,
* - the program will have a reasonable amount of stack left
* to work from.
*/
- rlim = current->signal->rlim;
- if (size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur) / 4)
+ limit = _STK_LIM / 4 * 3;
+ limit = min(limit, rlimit(RLIMIT_STACK) / 4);
+ if (size > limit)
goto fail;
}



2017-07-19 10:19:10

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 56/88] arm: move ELF_ET_DYN_BASE to 4MB

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit 6a9af90a3bcde217a1c053e135f5f43e5d5fafbd upstream.

Now that explicitly executed loaders are loaded in the mmap region, we
have more freedom to decide where we position PIE binaries in the
address space to avoid possible collisions with mmap or stack regions.

4MB is chosen here mainly to have parity with x86, where this is the
traditional minimum load location, likely to avoid historically
requiring a 4MB page table entry when only a portion of the first 4MB
would be used (since the NULL address is avoided).

For ARM the position could be 0x8000, the standard ET_EXEC load address,
but that is needlessly close to the NULL address, and anyone running PIE
on 32-bit ARM will have an MMU, so the tight mapping is not needed.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
Cc: Russell King <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Pratyush Anand <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Daniel Micay <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: Grzegorz Andrejczuk <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Qualys Security Advisory <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm/include/asm/elf.h | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

--- a/arch/arm/include/asm/elf.h
+++ b/arch/arm/include/asm/elf.h
@@ -112,12 +112,8 @@ int dump_task_regs(struct task_struct *t
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE 4096

-/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
- use of this is to invoke "./ld.so someprog" to test out a new version of
- the loader. We need to make sure that it is out of the way of the program
- that it will "exec", and that there is sufficient room for the brk. */
-
-#define ELF_ET_DYN_BASE (TASK_SIZE / 3 * 2)
+/* This is the base location for PIE (ET_DYN with INTERP) loads. */
+#define ELF_ET_DYN_BASE 0x400000UL

/* When the program starts, a1 contains a pointer to a function to be
registered with atexit, as per the SVR4 ABI. A value of 0 means we


2017-07-19 10:19:08

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 57/88] arm64: move ELF_ET_DYN_BASE to 4GB / 4MB

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit 02445990a96e60a67526510d8b00f7e3d14101c3 upstream.

Now that explicitly executed loaders are loaded in the mmap region, we
have more freedom to decide where we position PIE binaries in the
address space to avoid possible collisions with mmap or stack regions.

For 64-bit, align to 4GB to allow runtimes to use the entire 32-bit
address space for 32-bit pointers. On 32-bit use 4MB, to match ARM.
This could be 0x8000, the standard ET_EXEC load address, but that is
needlessly close to the NULL address, and anyone running arm compat PIE
will have an MMU, so the tight mapping is not needed.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/include/asm/elf.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -113,12 +113,11 @@
#define ELF_EXEC_PAGESIZE PAGE_SIZE

/*
- * This is the location that an ET_DYN program is loaded if exec'ed. Typical
- * use of this is to invoke "./ld.so someprog" to test out a new version of
- * the loader. We need to make sure that it is out of the way of the program
- * that it will "exec", and that there is sufficient room for the brk.
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
*/
-#define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3)
+#define ELF_ET_DYN_BASE 0x100000000UL

#ifndef __ASSEMBLY__

@@ -173,7 +172,8 @@ extern int arch_setup_additional_pages(s

#ifdef CONFIG_COMPAT

-#define COMPAT_ELF_ET_DYN_BASE (2 * TASK_SIZE_32 / 3)
+/* PIE load location for compat arm. Must match ARM ELF_ET_DYN_BASE. */
+#define COMPAT_ELF_ET_DYN_BASE 0x000400000UL

/* AArch32 registers. */
#define COMPAT_ELF_NGREG 18


2017-07-19 10:19:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 55/88] binfmt_elf: use ELF_ET_DYN_BASE only for PIE

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kees Cook <[email protected]>

commit eab09532d40090698b05a07c1c87f39fdbc5fab5 upstream.

The ELF_ET_DYN_BASE position was originally intended to keep loaders
away from ET_EXEC binaries. (For example, running "/lib/ld-linux.so.2
/bin/cat" might cause the subsequent load of /bin/cat into where the
loader had been loaded.)

With the advent of PIE (ET_DYN binaries with an INTERP Program Header),
ELF_ET_DYN_BASE continued to be used since the kernel was only looking
at ET_DYN. However, since ELF_ET_DYN_BASE is traditionally set at the
top 1/3rd of the TASK_SIZE, a substantial portion of the address space
is unused.

For 32-bit tasks when RLIMIT_STACK is set to RLIM_INFINITY, programs are
loaded above the mmap region. This means they can be made to collide
(CVE-2017-1000370) or nearly collide (CVE-2017-1000371) with
pathological stack regions.

Lowering ELF_ET_DYN_BASE solves both by moving programs below the mmap
region in all cases, and will now additionally avoid programs falling
back to the mmap region by enforcing MAP_FIXED for program loads (i.e.
if it would have collided with the stack, now it will fail to load
instead of falling back to the mmap region).

To allow for a lower ELF_ET_DYN_BASE, loaders (ET_DYN without INTERP)
are loaded into the mmap region, leaving space available for either an
ET_EXEC binary with a fixed location or PIE being loaded into mmap by
the loader. Only PIE programs are loaded offset from ELF_ET_DYN_BASE,
which means architectures can now safely lower their values without risk
of loaders colliding with their subsequently loaded programs.

For 64-bit, ELF_ET_DYN_BASE is best set to 4GB to allow runtimes to use
the entire 32-bit address space for 32-bit pointers.

Thanks to PaX Team, Daniel Micay, and Rik van Riel for inspiration and
suggestions on how to implement this solution.

Fixes: d1fd836dcf00 ("mm: split ET_DYN ASLR from mmap ASLR")
Link: http://lkml.kernel.org/r/20170621173201.GA114489@beast
Signed-off-by: Kees Cook <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: Daniel Micay <[email protected]>
Cc: Qualys Security Advisory <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Dmitry Safonov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Grzegorz Andrejczuk <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Pratyush Anand <[email protected]>
Cc: Russell King <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/include/asm/elf.h | 13 +++++----
fs/binfmt_elf.c | 59 ++++++++++++++++++++++++++++++++++++++-------
2 files changed, 58 insertions(+), 14 deletions(-)

--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -245,12 +245,13 @@ extern int force_personality32;
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE 4096

-/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
- use of this is to invoke "./ld.so someprog" to test out a new version of
- the loader. We need to make sure that it is out of the way of the program
- that it will "exec", and that there is sufficient room for the brk. */
-
-#define ELF_ET_DYN_BASE (TASK_SIZE / 3 * 2)
+/*
+ * This is the base location for PIE (ET_DYN with INTERP) loads. On
+ * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * space open for things that want to use the area for 32-bit pointers.
+ */
+#define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \
+ 0x100000000UL)

/* This yields a mask that user programs can use to figure out what
instruction set this CPU supports. This could be done in user space,
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -927,17 +927,60 @@ static int load_elf_binary(struct linux_
elf_flags = MAP_PRIVATE | MAP_DENYWRITE | MAP_EXECUTABLE;

vaddr = elf_ppnt->p_vaddr;
+ /*
+ * If we are loading ET_EXEC or we have already performed
+ * the ET_DYN load_addr calculations, proceed normally.
+ */
if (loc->elf_ex.e_type == ET_EXEC || load_addr_set) {
elf_flags |= MAP_FIXED;
} else if (loc->elf_ex.e_type == ET_DYN) {
- /* Try and get dynamic programs out of the way of the
- * default mmap base, as well as whatever program they
- * might try to exec. This is because the brk will
- * follow the loader, and is not movable. */
- load_bias = ELF_ET_DYN_BASE - vaddr;
- if (current->flags & PF_RANDOMIZE)
- load_bias += arch_mmap_rnd();
- load_bias = ELF_PAGESTART(load_bias);
+ /*
+ * This logic is run once for the first LOAD Program
+ * Header for ET_DYN binaries to calculate the
+ * randomization (load_bias) for all the LOAD
+ * Program Headers, and to calculate the entire
+ * size of the ELF mapping (total_size). (Note that
+ * load_addr_set is set to true later once the
+ * initial mapping is performed.)
+ *
+ * There are effectively two types of ET_DYN
+ * binaries: programs (i.e. PIE: ET_DYN with INTERP)
+ * and loaders (ET_DYN without INTERP, since they
+ * _are_ the ELF interpreter). The loaders must
+ * be loaded away from programs since the program
+ * may otherwise collide with the loader (especially
+ * for ET_EXEC which does not have a randomized
+ * position). For example to handle invocations of
+ * "./ld.so someprog" to test out a new version of
+ * the loader, the subsequent program that the
+ * loader loads must avoid the loader itself, so
+ * they cannot share the same load range. Sufficient
+ * room for the brk must be allocated with the
+ * loader as well, since brk must be available with
+ * the loader.
+ *
+ * Therefore, programs are loaded offset from
+ * ELF_ET_DYN_BASE and loaders are loaded into the
+ * independently randomized mmap region (0 load_bias
+ * without MAP_FIXED).
+ */
+ if (elf_interpreter) {
+ load_bias = ELF_ET_DYN_BASE;
+ if (current->flags & PF_RANDOMIZE)
+ load_bias += arch_mmap_rnd();
+ elf_flags |= MAP_FIXED;
+ } else
+ load_bias = 0;
+
+ /*
+ * Since load_bias is used for all subsequent loading
+ * calculations, we must lower it by the first vaddr
+ * so that the remaining calculations based on the
+ * ELF vaddrs will be correctly offset. The result
+ * is then page aligned.
+ */
+ load_bias = ELF_PAGESTART(load_bias - vaddr);
+
total_size = total_mapping_size(elf_phdata,
loc->elf_ex.e_phnum);
if (!total_size) {


2017-07-19 10:11:03

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 54/88] checkpatch: silence perl 5.26.0 unescaped left brace warnings

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Cyril Bur <[email protected]>

commit 8d81ae05d0176da1c54aeaed697fa34be5c5575e upstream.

As of perl 5, version 26, subversion 0 (v5.26.0), some new warnings
occur when running checkpatch.

Unescaped left brace in regex is deprecated here (and will be fatal in
Perl 5.30), passed through in regex; marked by <-- HERE in m/^(.\s*){
<-- HERE \s*/ at scripts/checkpatch.pl line 3544.

Unescaped left brace in regex is deprecated here (and will be fatal in
Perl 5.30), passed through in regex; marked by <-- HERE in m/^(.\s*){
<-- HERE \s*/ at scripts/checkpatch.pl line 3885.

Unescaped left brace in regex is deprecated here (and will be fatal in
Perl 5.30), passed through in regex; marked by <-- HERE in
m/^(\+.*(?:do|\))){ <-- HERE / at scripts/checkpatch.pl line 4374.

It seems perfectly reasonable to do as the warning suggests and simply
escape the left brace in these three locations.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Cyril Bur <[email protected]>
Acked-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
scripts/checkpatch.pl | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -3523,7 +3523,7 @@ sub process {
$fixedline =~ s/\s*=\s*$/ = {/;
fix_insert_line($fixlinenr, $fixedline);
$fixedline = $line;
- $fixedline =~ s/^(.\s*){\s*/$1/;
+ $fixedline =~ s/^(.\s*)\{\s*/$1/;
fix_insert_line($fixlinenr, $fixedline);
}
}
@@ -3864,7 +3864,7 @@ sub process {
my $fixedline = rtrim($prevrawline) . " {";
fix_insert_line($fixlinenr, $fixedline);
$fixedline = $rawline;
- $fixedline =~ s/^(.\s*){\s*/$1\t/;
+ $fixedline =~ s/^(.\s*)\{\s*/$1\t/;
if ($fixedline !~ /^\+\s*$/) {
fix_insert_line($fixlinenr, $fixedline);
}
@@ -4353,7 +4353,7 @@ sub process {
if (ERROR("SPACING",
"space required before the open brace '{'\n" . $herecurr) &&
$fix) {
- $fixed[$fixlinenr] =~ s/^(\+.*(?:do|\))){/$1 {/;
+ $fixed[$fixlinenr] =~ s/^(\+.*(?:do|\)))\{/$1 {/;
}
}



2017-07-19 10:20:27

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 63/88] arm64: Preventing READ_IMPLIES_EXEC propagation

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dong Bo <[email protected]>

commit 48f99c8ec0b25756d0283ab058826ae07d14fad7 upstream.

Like arch/arm/, we inherit the READ_IMPLIES_EXEC personality flag across
fork(). This is undesirable for a number of reasons:

* ELF files that don't require executable stack can end up with it
anyway

* We end up performing unnecessary I-cache maintenance when mapping
what should be non-executable pages

* Restricting what is executable is generally desirable when defending
against overflow attacks

This patch clears the personality flag when setting up the personality for
newly spawned native tasks. Given that semi-recent AArch64 toolchains emit
a non-executable PT_GNU_STACK header, userspace applications already
cannot rely on READ_IMPLIES_EXEC, so they shouldn't be adversely affected
by this change.
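
For reference, a simplified sketch of what the flag does during mmap()
(see the READ_IMPLIES_EXEC handling in do_mmap() for the real logic):

	if ((prot & PROT_READ) &&
	    (current->personality & READ_IMPLIES_EXEC))
		prot |= PROT_EXEC;	/* readable mappings become executable */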

Reported-by: Peter Maydell <[email protected]>
Signed-off-by: Dong Bo <[email protected]>
[will: added comment to compat code, rewrote commit message]
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/include/asm/elf.h | 6 ++++++
1 file changed, 6 insertions(+)

--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -141,6 +141,7 @@ typedef struct user_fpsimd_state elf_fpr
({ \
clear_bit(TIF_32BIT, &current->mm->context.flags); \
clear_thread_flag(TIF_32BIT); \
+ current->personality &= ~READ_IMPLIES_EXEC; \
})

/* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
@@ -187,6 +188,11 @@ typedef compat_elf_greg_t compat_elf_gr
((x)->e_flags & EF_ARM_EABI_MASK))

#define compat_start_thread compat_start_thread
+/*
+ * Unlike the native SET_PERSONALITY macro, the compat version inherits
+ * READ_IMPLIES_EXEC across a fork() since this is the behaviour on
+ * arch/arm/.
+ */
#define COMPAT_SET_PERSONALITY(ex) \
({ \
set_bit(TIF_32BIT, &current->mm->context.flags); \


2017-07-19 10:20:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 53/88] fs/dcache.c: fix spin lockup issue on nlru->lock

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sahitya Tummala <[email protected]>

commit b17c070fb624cf10162cf92ea5e1ec25cd8ac176 upstream.

__list_lru_walk_one() acquires the nlru spin lock (nlru->lock) for a
long duration when there are many items on the lru list. As per the
current code, it can hold the spin lock while processing up to UINT_MAX
entries at a time. So if there are many items on the lru list,
"BUG: spinlock lockup suspected" is observed in the below path:

spin_bug+0x90
do_raw_spin_lock+0xfc
_raw_spin_lock+0x28
list_lru_add+0x28
dput+0x1c8
path_put+0x20
terminate_walk+0x3c
path_lookupat+0x100
filename_lookup+0x6c
user_path_at_empty+0x54
SyS_faccessat+0xd0
el0_svc_naked+0x24

This nlru->lock is acquired by another CPU in this path -

d_lru_shrink_move+0x34
dentry_lru_isolate_shrink+0x48
__list_lru_walk_one.isra.10+0x94
list_lru_walk_node+0x40
shrink_dcache_sb+0x60
do_remount_sb+0xbc
do_emergency_remount+0xb0
process_one_work+0x228
worker_thread+0x2e0
kthread+0xf4
ret_from_fork+0x10

Fix this lockup by reducing the number of entries shrunk from
the lru list to 1024 at a time. Also, add cond_resched() before
processing the lru list again.
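
The general shape of the fix, sketched with hypothetical walk_lru(),
shrink() and lru_count() helpers:

	do {
		/* bounded batch, so nlru->lock is held only briefly */
		freed = walk_lru(lru, isolate, &dispose, 1024);
		shrink(&dispose);
		cond_resched();	/* let waiting CPUs take the lock */
	} while (lru_count(lru) > 0);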

Link: http://marc.info/?t=149722864900001&r=1&w=2
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Sahitya Tummala <[email protected]>
Suggested-by: Jan Kara <[email protected]>
Suggested-by: Vladimir Davydov <[email protected]>
Acked-by: Vladimir Davydov <[email protected]>
Cc: Alexander Polakov <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/dcache.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1133,11 +1133,12 @@ void shrink_dcache_sb(struct super_block
LIST_HEAD(dispose);

freed = list_lru_walk(&sb->s_dentry_lru,
- dentry_lru_isolate_shrink, &dispose, UINT_MAX);
+ dentry_lru_isolate_shrink, &dispose, 1024);

this_cpu_sub(nr_dentry_unused, freed);
shrink_dentry_list(&dispose);
- } while (freed > 0);
+ cond_resched();
+ } while (list_lru_count(&sb->s_dentry_lru) > 0);
}
EXPORT_SYMBOL(shrink_dcache_sb);



2017-07-19 10:21:35

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 47/88] parisc: DMA API: return error instead of BUG_ON for dma ops on non dma devs

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Bogendoerfer <[email protected]>

commit 33f9e02495d15a061f0c94ef46f5103a2d0c20f3 upstream.

Enabling the parport pc driver on a B2600 (and probably other 64-bit
PARISC systems) produced the following BUG:

CPU: 0 PID: 1 Comm: swapper Not tainted 4.12.0-rc5-30198-g1132d5e #156
task: 000000009e050000 task.stack: 000000009e04c000

YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI
PSW: 00001000000001101111111100001111 Not tainted
r00-03 000000ff0806ff0f 000000009e04c990 0000000040871b78 000000009e04cac0
r04-07 0000000040c14de0 ffffffffffffffff 000000009e07f098 000000009d82d200
r08-11 000000009d82d210 0000000000000378 0000000000000000 0000000040c345e0
r12-15 0000000000000005 0000000040c345e0 0000000000000000 0000000040c9d5e0
r16-19 0000000040c345e0 00000000f00001c4 00000000f00001bc 0000000000000061
r20-23 000000009e04ce28 0000000000000010 0000000000000010 0000000040b89e40
r24-27 0000000000000003 0000000000ffffff 000000009d82d210 0000000040c14de0
r28-31 0000000000000000 000000009e04ca90 000000009e04cb40 0000000000000000
sr00-03 0000000000000000 0000000000000000 0000000000000000 0000000000000000
sr04-07 0000000000000000 0000000000000000 0000000000000000 0000000000000000

IASQ: 0000000000000000 0000000000000000 IAOQ: 00000000404aece0 00000000404aece4
IIR: 03ffe01f ISR: 0000000010340000 IOR: 000001781304cac8
CPU: 0 CR30: 000000009e04c000 CR31: 00000000e2976de2
ORIG_R28: 0000000000000200
IAOQ[0]: sba_dma_supported+0x80/0xd0
IAOQ[1]: sba_dma_supported+0x84/0xd0
RP(r2): parport_pc_probe_port+0x178/0x1200

The cause is a call to dma_coerce_mask_and_coherent() in
parport_pc_probe_port(), which the PARISC DMA API doesn't handle very
nicely. This commit returns DMA_ERROR_CODE for DMA API calls if the
device isn't capable of DMA transactions.

Signed-off-by: Thomas Bogendoerfer <[email protected]>
Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/parisc/include/asm/dma-mapping.h | 11 +++++++----
drivers/parisc/ccio-dma.c | 12 ++++++++++++
drivers/parisc/dino.c | 5 ++++-
drivers/parisc/lba_pci.c | 6 ++++--
drivers/parisc/sba_iommu.c | 14 ++++++++++++++
5 files changed, 41 insertions(+), 7 deletions(-)

--- a/arch/parisc/include/asm/dma-mapping.h
+++ b/arch/parisc/include/asm/dma-mapping.h
@@ -20,6 +20,8 @@
** flush/purge and allocate "regular" cacheable pages for everything.
*/

+#define DMA_ERROR_CODE (~(dma_addr_t)0)
+
#ifdef CONFIG_PA11
extern const struct dma_map_ops pcxl_dma_ops;
extern const struct dma_map_ops pcx_dma_ops;
@@ -54,12 +56,13 @@ parisc_walk_tree(struct device *dev)
break;
}
}
- BUG_ON(!dev->platform_data);
return dev->platform_data;
}
-
-#define GET_IOC(dev) (HBA_DATA(parisc_walk_tree(dev))->iommu)
-
+
+#define GET_IOC(dev) ({ \
+ void *__pdata = parisc_walk_tree(dev); \
+ __pdata ? HBA_DATA(__pdata)->iommu : NULL; \
+})

#ifdef CONFIG_IOMMU_CCIO
struct parisc_device;
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -741,6 +741,8 @@ ccio_map_single(struct device *dev, void

BUG_ON(!dev);
ioc = GET_IOC(dev);
+ if (!ioc)
+ return DMA_ERROR_CODE;

BUG_ON(size <= 0);

@@ -814,6 +816,10 @@ ccio_unmap_page(struct device *dev, dma_

BUG_ON(!dev);
ioc = GET_IOC(dev);
+ if (!ioc) {
+ WARN_ON(!ioc);
+ return;
+ }

DBG_RUN("%s() iovp 0x%lx/%x\n",
__func__, (long)iova, size);
@@ -918,6 +924,8 @@ ccio_map_sg(struct device *dev, struct s

BUG_ON(!dev);
ioc = GET_IOC(dev);
+ if (!ioc)
+ return 0;

DBG_RUN_SG("%s() START %d entries\n", __func__, nents);

@@ -990,6 +998,10 @@ ccio_unmap_sg(struct device *dev, struct

BUG_ON(!dev);
ioc = GET_IOC(dev);
+ if (!ioc) {
+ WARN_ON(!ioc);
+ return;
+ }

DBG_RUN_SG("%s() START %d entries, %p,%x\n",
__func__, nents, sg_virt(sglist), sglist->length);
--- a/drivers/parisc/dino.c
+++ b/drivers/parisc/dino.c
@@ -154,7 +154,10 @@ struct dino_device
};

/* Looks nice and keeps the compiler happy */
-#define DINO_DEV(d) ((struct dino_device *) d)
+#define DINO_DEV(d) ({ \
+ void *__pdata = d; \
+ BUG_ON(!__pdata); \
+ (struct dino_device *)__pdata; })


/*
--- a/drivers/parisc/lba_pci.c
+++ b/drivers/parisc/lba_pci.c
@@ -111,8 +111,10 @@ static u32 lba_t32;


/* Looks nice and keeps the compiler happy */
-#define LBA_DEV(d) ((struct lba_device *) (d))
-
+#define LBA_DEV(d) ({ \
+ void *__pdata = d; \
+ BUG_ON(!__pdata); \
+ (struct lba_device *)__pdata; })

/*
** Only allow 8 subsidiary busses per LBA
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -691,6 +691,8 @@ static int sba_dma_supported( struct dev
return 0;

ioc = GET_IOC(dev);
+ if (!ioc)
+ return 0;

/*
* check if mask is >= than the current max IO Virt Address
@@ -722,6 +724,8 @@ sba_map_single(struct device *dev, void
int pide;

ioc = GET_IOC(dev);
+ if (!ioc)
+ return DMA_ERROR_CODE;

/* save offset bits */
offset = ((dma_addr_t) (long) addr) & ~IOVP_MASK;
@@ -813,6 +817,10 @@ sba_unmap_page(struct device *dev, dma_a
DBG_RUN("%s() iovp 0x%lx/%x\n", __func__, (long) iova, size);

ioc = GET_IOC(dev);
+ if (!ioc) {
+ WARN_ON(!ioc);
+ return;
+ }
offset = iova & ~IOVP_MASK;
iova ^= offset; /* clear offset bits */
size += offset;
@@ -952,6 +960,8 @@ sba_map_sg(struct device *dev, struct sc
DBG_RUN_SG("%s() START %d entries\n", __func__, nents);

ioc = GET_IOC(dev);
+ if (!ioc)
+ return 0;

/* Fast path single entry scatterlists. */
if (nents == 1) {
@@ -1037,6 +1047,10 @@ sba_unmap_sg(struct device *dev, struct
__func__, nents, sg_virt(sglist), sglist->length);

ioc = GET_IOC(dev);
+ if (!ioc) {
+ WARN_ON(!ioc);
+ return;
+ }

#ifdef SBA_COLLECT_STATS
ioc->usg_calls++;


2017-07-19 10:22:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 45/88] parisc: Report SIGSEGV instead of SIGBUS when running out of stack

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Helge Deller <[email protected]>

commit 247462316f85a9e0479445c1a4223950b68ffac1 upstream.

When a process runs out of stack, the parisc kernel wrongly faults with
SIGBUS instead of the expected SIGSEGV signal.

This example shows how the kernel faults:
do_page_fault() command='a.out' type=15 address=0xfaac2000 in libc-2.24.so[f8308000+16c000]
trap #15: Data TLB miss fault, vm_start = 0xfa2c2000, vm_end = 0xfaac2000

The vma->vm_end value is the first address which does not belong to the vma, so
adjust the check to include vma->vm_end in the range for which to send the
SIGSEGV signal.

This patch unbreaks building the Debian libsigsegv package.

Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/parisc/mm/fault.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -367,7 +367,7 @@ bad_area:
case 15: /* Data TLB miss fault/Data page fault */
/* send SIGSEGV when outside of vma */
if (!vma ||
- address < vma->vm_start || address > vma->vm_end) {
+ address < vma->vm_start || address >= vma->vm_end) {
si.si_signo = SIGSEGV;
si.si_code = SEGV_MAPERR;
break;


2017-07-19 10:22:21

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 43/88] drm/amdgpu/gfx6: properly cache mc_arb_ramcfg

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alex Deucher <[email protected]>

commit 6653ebd48f493efe3f3598ff3fe7b3d5451665df upstream.

This was missing for gfx6.

Acked-by: Huang Rui <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
@@ -1685,7 +1685,8 @@ static void gfx_v6_0_gpu_init(struct amd
WREG32(mmBIF_FB_EN, BIF_FB_EN__FB_READ_EN_MASK | BIF_FB_EN__FB_WRITE_EN_MASK);

mc_shared_chmap = RREG32(mmMC_SHARED_CHMAP);
- mc_arb_ramcfg = RREG32(mmMC_ARB_RAMCFG);
+ adev->gfx.config.mc_arb_ramcfg = RREG32(mmMC_ARB_RAMCFG);
+ mc_arb_ramcfg = adev->gfx.config.mc_arb_ramcfg;

adev->gfx.config.num_tile_pipes = adev->gfx.config.max_tile_pipes;
adev->gfx.config.mem_max_burst_length_bytes = 256;


2017-07-19 10:22:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 41/88] cfg80211: Check if PMKID attribute is of expected size

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Srinivas Dasari <[email protected]>

commit 9361df14d1cbf966409d5d6f48bb334384fbe138 upstream.

The nla policy checks only the maximum length of the attribute data
when the attribute type is NLA_BINARY. If userspace sends less
data than specified, the wireless drivers may access illegal
memory. When the type is NLA_UNSPEC, the nla policy check ensures that
userspace sends at least the specified number of bytes.

Remove type assignment to NLA_BINARY from nla_policy of
NL80211_ATTR_PMKID to make this NLA_UNSPEC and to make sure minimum
WLAN_PMKID_LEN bytes are received from userspace with
NL80211_ATTR_PMKID.
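
To illustrate the two validation modes (a hypothetical policy table;
see the validation code in lib/nlattr.c):

static const struct nla_policy example_policy[ATTR_MAX + 1] = {
	/* NLA_BINARY: .len is a maximum, shorter payloads pass */
	[ATTR_VARIABLE_BLOB] = { .type = NLA_BINARY, .len = 16 },
	/* NLA_UNSPEC (no type): .len is a minimum, shorter payloads fail */
	[ATTR_FIXED_BLOB] = { .len = 16 },
};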

Fixes: 67fbb16be69d ("nl80211: PMKSA caching support")
Signed-off-by: Srinivas Dasari <[email protected]>
Signed-off-by: Jouni Malinen <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/wireless/nl80211.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -291,8 +291,7 @@ static const struct nla_policy nl80211_p
[NL80211_ATTR_WPA_VERSIONS] = { .type = NLA_U32 },
[NL80211_ATTR_PID] = { .type = NLA_U32 },
[NL80211_ATTR_4ADDR] = { .type = NLA_U8 },
- [NL80211_ATTR_PMKID] = { .type = NLA_BINARY,
- .len = WLAN_PMKID_LEN },
+ [NL80211_ATTR_PMKID] = { .len = WLAN_PMKID_LEN },
[NL80211_ATTR_DURATION] = { .type = NLA_U32 },
[NL80211_ATTR_COOKIE] = { .type = NLA_U64 },
[NL80211_ATTR_TX_RATES] = { .type = NLA_NESTED },


2017-07-19 10:10:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 50/88] thp, mm: fix crash due race in MADV_FREE handling

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kirill A. Shutemov <[email protected]>

commit bbf29ffc7f963bb894f84f0580c70cfea01c3892 upstream.

Reinette reported the following crash:

BUG: Bad page state in process log2exe pfn:57600
page:ffffea00015d8000 count:0 mapcount:0 mapping: (null) index:0x20200
flags: 0x4000000000040019(locked|uptodate|dirty|swapbacked)
raw: 4000000000040019 0000000000000000 0000000000020200 00000000ffffffff
raw: ffffea00015d8020 ffffea00015d8020 0000000000000000 0000000000000000
page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
bad because of flags: 0x1(locked)
Modules linked in: rfcomm 8021q bnep intel_rapl x86_pkg_temp_thermal coretemp efivars btusb btrtl btbcm pwm_lpss_pci snd_hda_codec_hdmi btintel pwm_lpss snd_hda_codec_realtek snd_soc_skl snd_hda_codec_generic snd_soc_skl_ipc spi_pxa2xx_platform snd_soc_sst_ipc snd_soc_sst_dsp i2c_designware_platform i2c_designware_core snd_hda_ext_core snd_soc_sst_match snd_hda_intel snd_hda_codec mei_me snd_hda_core mei snd_soc_rt286 snd_soc_rl6347a snd_soc_core efivarfs
CPU: 1 PID: 354 Comm: log2exe Not tainted 4.12.0-rc7-test-test #19
Hardware name: Intel corporation NUC6CAYS/NUC6CAYB, BIOS AYAPLCEL.86A.0027.2016.1108.1529 11/08/2016
Call Trace:
bad_page+0x16a/0x1f0
free_pages_check_bad+0x117/0x190
free_hot_cold_page+0x7b1/0xad0
__put_page+0x70/0xa0
madvise_free_huge_pmd+0x627/0x7b0
madvise_free_pte_range+0x6f8/0x1150
__walk_page_range+0x6b5/0xe30
walk_page_range+0x13b/0x310
madvise_free_page_range.isra.16+0xad/0xd0
madvise_free_single_vma+0x2e4/0x470
SyS_madvise+0x8ce/0x1450

If somebody frees the page under us and we hold the last reference to
it, put_page() would attempt to free the page before unlocking it.

The fix is a trivial reorder of operations.
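
In other words (a sketch of the reasoning): holding the last reference,
put_page() frees the page, and freeing a still-locked page trips the
PAGE_FLAGS_CHECK_AT_FREE check shown above. So:

	unlock_page(page);	/* clear PG_locked first ... */
	put_page(page);		/* ... then drop the possibly-last reference */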

Dave said:
"I came up with the exact same patch. For posterity, here's the test
case, generated by syzkaller and trimmed down by Reinette:

https://www.sr71.net/~dave/intel/log2.c

And the config that helps detect this:

https://www.sr71.net/~dave/intel/config-log2"

Fixes: b8d3c4c3009d ("mm/huge_memory.c: don't split THP page when MADV_FREE syscall is called")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Reported-by: Reinette Chatre <[email protected]>
Acked-by: Dave Hansen <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Minchan Kim <[email protected]>
Cc: Huang Ying <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1561,8 +1561,8 @@ bool madvise_free_huge_pmd(struct mmu_ga
get_page(page);
spin_unlock(ptl);
split_huge_page(page);
- put_page(page);
unlock_page(page);
+ put_page(page);
goto out_unlocked;
}



2017-07-19 10:23:10

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 51/88] kernel/extable.c: mark core_kernel_text notrace

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Marcin Nowakowski <[email protected]>

commit c0d80ddab89916273cb97114889d3f337bc370ae upstream.

core_kernel_text is used by MIPS in its function graph trace processing,
so having this method traced leads to an infinite set of recursive calls
such as:

Call Trace:
ftrace_return_to_handler+0x50/0x128
core_kernel_text+0x10/0x1b8
prepare_ftrace_return+0x6c/0x114
ftrace_graph_caller+0x20/0x44
return_to_handler+0x10/0x30
return_to_handler+0x0/0x30
return_to_handler+0x0/0x30
ftrace_ops_no_ops+0x114/0x1bc
core_kernel_text+0x10/0x1b8
core_kernel_text+0x10/0x1b8
core_kernel_text+0x10/0x1b8
ftrace_ops_no_ops+0x114/0x1bc
core_kernel_text+0x10/0x1b8
prepare_ftrace_return+0x6c/0x114
ftrace_graph_caller+0x20/0x44
(...)

Mark the function notrace to avoid it being traced.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Marcin Nowakowski <[email protected]>
Reviewed-by: Masami Hiramatsu <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Meyer <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/extable.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/extable.c
+++ b/kernel/extable.c
@@ -69,7 +69,7 @@ static inline int init_kernel_text(unsig
return 0;
}

-int core_kernel_text(unsigned long addr)
+int notrace core_kernel_text(unsigned long addr)
{
if (addr >= (unsigned long)_stext &&
addr < (unsigned long)_etext)


2017-07-19 10:23:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 49/88] tools/lib/lockdep: Reduce MAX_LOCK_DEPTH to avoid overflowing lock_chain::depth

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ben Hutchings <[email protected]>

commit 98dcea0cfd04e083ac74137ceb9a632604740e2d upstream.

liblockdep has been broken since commit 75dd602a5198 ("lockdep: Fix
lock_chain::base size"), as that adds a check that MAX_LOCK_DEPTH is
within the range of lock_chain::depth and in liblockdep it is much
too large.

That should have resulted in a compiler error, but didn't because:

- the check uses ARRAY_SIZE(), which isn't yet defined in liblockdep
so is assumed to be an (undeclared) function
- putting a function call inside a BUILD_BUG_ON() expression quietly
turns it into some nonsense involving a variable-length array

It did produce a compiler warning, but I didn't notice because
liblockdep already produces too many warnings if -Wall is enabled
(which I'll fix shortly).

Even before that commit, which reduced lock_chain::depth from 8 bits
to 6, MAX_LOCK_DEPTH was too large.
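
A sketch of why the error was swallowed, using the classic simplified
form of the macro (the real one lives in include/linux/bug.h):

	#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2*!!(cond)]))

	/* With ARRAY_SIZE() undeclared, (cond) is not a compile-time
	 * constant, so the array quietly becomes a variable-length
	 * array instead of triggering a negative-size compile error. */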

Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
tools/lib/lockdep/uinclude/linux/lockdep.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/tools/lib/lockdep/uinclude/linux/lockdep.h
+++ b/tools/lib/lockdep/uinclude/linux/lockdep.h
@@ -8,7 +8,7 @@
#include <linux/utsname.h>
#include <linux/compiler.h>

-#define MAX_LOCK_DEPTH 2000UL
+#define MAX_LOCK_DEPTH 63UL

#define asmlinkage
#define __visible


2017-07-19 10:24:08

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 39/88] cfg80211: Define nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Srinivas Dasari <[email protected]>

commit 8feb69c7bd89513be80eb19198d48f154b254021 upstream.

A buffer overread may happen as nl80211_set_station() reads 4 bytes
from the attribute NL80211_ATTR_LOCAL_MESH_POWER_MODE without
validating the size of the received data when userspace sends fewer
than 4 bytes with NL80211_ATTR_LOCAL_MESH_POWER_MODE.
Define an nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE to avoid
the buffer overread.

Fixes: 3b1c5a5307f ("{cfg,nl}80211: mesh power mode primitives and userspace access")
Signed-off-by: Srinivas Dasari <[email protected]>
Signed-off-by: Jouni Malinen <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/wireless/nl80211.c | 1 +
1 file changed, 1 insertion(+)

--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -348,6 +348,7 @@ static const struct nla_policy nl80211_p
[NL80211_ATTR_SCAN_FLAGS] = { .type = NLA_U32 },
[NL80211_ATTR_P2P_CTWINDOW] = { .type = NLA_U8 },
[NL80211_ATTR_P2P_OPPPS] = { .type = NLA_U8 },
+ [NL80211_ATTR_LOCAL_MESH_POWER_MODE] = {. type = NLA_U32 },
[NL80211_ATTR_ACL_POLICY] = {. type = NLA_U32 },
[NL80211_ATTR_MAC_ADDRS] = { .type = NLA_NESTED },
[NL80211_ATTR_STA_CAPABILITY] = { .type = NLA_U16 },


2017-07-19 10:24:36

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 33/88] Adding the type of exported symbols

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nagarathnam Muthusamy <[email protected]>


[ Upstream commit f5a651f1d5e524cab345250a783702fb6a3f14d6 ]

A missing symbol type for a few functions prevents genksyms from generating
symbol versions for those functions. This patch fixes them.

Signed-off-by: Nagarathnam Muthusamy <[email protected]>
Reviewed-by: Babu Moger <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/sparc/lib/checksum_64.S | 1 +
arch/sparc/lib/csum_copy.S | 1 +
arch/sparc/lib/memscan_64.S | 2 ++
arch/sparc/lib/memset.S | 1 +
4 files changed, 5 insertions(+)

--- a/arch/sparc/lib/checksum_64.S
+++ b/arch/sparc/lib/checksum_64.S
@@ -38,6 +38,7 @@ csum_partial_fix_alignment:

.align 32
.globl csum_partial
+ .type csum_partial,#function
EXPORT_SYMBOL(csum_partial)
csum_partial: /* %o0=buff, %o1=len, %o2=sum */
prefetch [%o0 + 0x000], #n_reads
--- a/arch/sparc/lib/csum_copy.S
+++ b/arch/sparc/lib/csum_copy.S
@@ -65,6 +65,7 @@
add %o5, %o4, %o4

.globl FUNC_NAME
+ .type FUNC_NAME,#function
EXPORT_SYMBOL(FUNC_NAME)
FUNC_NAME: /* %o0=src, %o1=dst, %o2=len, %o3=sum */
LOAD(prefetch, %o0 + 0x000, #n_reads)
--- a/arch/sparc/lib/memscan_64.S
+++ b/arch/sparc/lib/memscan_64.S
@@ -14,6 +14,8 @@
.text
.align 32
.globl __memscan_zero, __memscan_generic
+ .type __memscan_zero,#function
+ .type __memscan_generic,#function
.globl memscan
EXPORT_SYMBOL(__memscan_zero)
EXPORT_SYMBOL(__memscan_generic)
--- a/arch/sparc/lib/memset.S
+++ b/arch/sparc/lib/memset.S
@@ -63,6 +63,7 @@
__bzero_begin:

.globl __bzero
+ .type __bzero,#function
.globl memset
EXPORT_SYMBOL(__bzero)
EXPORT_SYMBOL(memset)


2017-07-19 10:24:51

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 37/88] brcmfmac: Fix glom_skb leak in brcmf_sdiod_recv_chain

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter S. Housel <[email protected]>

commit 5ea59db8a375216e6c915c5586f556766673b5a7 upstream.

An earlier change to this function (3bdae810721b) fixed a leak in the
case of an unsuccessful call to brcmf_sdiod_buffrw(). However, the
glom_skb buffer, used for emulating a scattering read, is never used
or referenced after its contents are copied into the destination
buffers, and therefore always needs to be freed by the end of the
function.
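
The idiom the fix uses, sketched with kfree() and a hypothetical buf
(like kfree(), brcmu_pkt_buf_free_skb() returns early on NULL):

	void *buf = NULL;

	buf = alloc_buf();	/* hypothetical */
	if (err)
		goto done;	/* no per-error-site free needed */
	...
done:
	kfree(buf);		/* kfree(NULL) is a no-op */
	return err;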

Fixes: 3bdae810721b ("brcmfmac: Fix glob_skb leak in brcmf_sdiod_recv_chain")
Fixes: a413e39a38573 ("brcmfmac: fix brcmf_sdcard_recv_chain() for host without sg support")
Signed-off-by: Peter S. Housel <[email protected]>
Signed-off-by: Arend van Spriel <[email protected]>
Signed-off-by: Kalle Valo <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
@@ -705,7 +705,7 @@ done:
int brcmf_sdiod_recv_chain(struct brcmf_sdio_dev *sdiodev,
struct sk_buff_head *pktq, uint totlen)
{
- struct sk_buff *glom_skb;
+ struct sk_buff *glom_skb = NULL;
struct sk_buff *skb;
u32 addr = sdiodev->sbwad;
int err = 0;
@@ -726,10 +726,8 @@ int brcmf_sdiod_recv_chain(struct brcmf_
return -ENOMEM;
err = brcmf_sdiod_buffrw(sdiodev, SDIO_FUNC_2, false, addr,
glom_skb);
- if (err) {
- brcmu_pkt_buf_free_skb(glom_skb);
+ if (err)
goto done;
- }

skb_queue_walk(pktq, skb) {
memcpy(skb->data, glom_skb->data, skb->len);
@@ -740,6 +738,7 @@ int brcmf_sdiod_recv_chain(struct brcmf_
pktq);

done:
+ brcmu_pkt_buf_free_skb(glom_skb);
return err;
}



2017-07-19 10:28:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 4.11 00/88] 4.11.12-stable review

On Wed, Jul 19, 2017 at 12:07:22PM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.11.12 release.
> There are 88 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri Jul 21 10:07:36 UTC 2017.
> Anything received after that time might be too late.

Note, this will be the LAST 4.11.y kernel to be released as far as I
know. It will be end-of-life after this release, please move to the
4.12.y kernel tree at this point in time.

thanks,

greg k-h

2017-07-19 10:43:56

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 35/88] block: Fix a blk_exit_rl() regression

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Bart Van Assche <[email protected]>

commit dc9edc44de6cd7cc8cc7f5b36c1adb221eda3207 upstream.

Avoid that the following complaint is reported:

BUG: sleeping function called from invalid context at kernel/workqueue.c:2790
in_atomic(): 1, irqs_disabled(): 0, pid: 41, name: rcuop/3
1 lock held by rcuop/3/41:
#0: (rcu_callback){......}, at: [<ffffffff8111f9a2>] rcu_nocb_kthread+0x282/0x500
Call Trace:
dump_stack+0x86/0xcf
___might_sleep+0x174/0x260
__might_sleep+0x4a/0x80
flush_work+0x7e/0x2e0
__cancel_work_timer+0x143/0x1c0
cancel_work_sync+0x10/0x20
blk_throtl_exit+0x25/0x60
blkcg_exit_queue+0x35/0x40
blk_release_queue+0x42/0x130
kobject_put+0xa9/0x190

This happens since we invoke callbacks that need to block from the
queue release handler. Fix this by pushing the final release to
a workqueue.
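
The underlying pattern, sketched with a hypothetical struct foo: defer
teardown that may sleep out of a release context that may not.

static void foo_release_work(struct work_struct *work)
{
	struct foo *f = container_of(work, struct foo, release_work);

	/* safe to sleep here: flush_work(), cancel_work_sync(), ... */
}

static void foo_release(struct kobject *kobj)	/* may run in atomic context */
{
	struct foo *f = container_of(kobj, struct foo, kobj);

	INIT_WORK(&f->release_work, foo_release_work);
	schedule_work(&f->release_work);
}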

Reported-by: Ross Zwisler <[email protected]>
Fixes: commit b425e5049258 ("block: Avoid that blk_exit_rl() triggers a use-after-free")
Signed-off-by: Bart Van Assche <[email protected]>
Tested-by: Ross Zwisler <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Updated changelog
Signed-off-by: Jens Axboe <[email protected]>
Cc: Laura Abbott <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>


---
block/blk-sysfs.c | 34 ++++++++++++++++++++++------------
include/linux/blkdev.h | 2 ++
2 files changed, 24 insertions(+), 12 deletions(-)

--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -791,24 +791,25 @@ static void blk_free_queue_rcu(struct rc
}

/**
- * blk_release_queue: - release a &struct request_queue when it is no longer needed
- * @kobj: the kobj belonging to the request queue to be released
+ * __blk_release_queue - release a request queue when it is no longer needed
+ * @work: pointer to the release_work member of the request queue to be released
*
* Description:
- * blk_release_queue is the pair to blk_init_queue() or
- * blk_queue_make_request(). It should be called when a request queue is
- * being released; typically when a block device is being de-registered.
- * Currently, its primary task it to free all the &struct request
- * structures that were allocated to the queue and the queue itself.
+ * blk_release_queue is the counterpart of blk_init_queue(). It should be
+ * called when a request queue is being released; typically when a block
+ * device is being de-registered. Its primary task it to free the queue
+ * itself.
*
- * Note:
+ * Notes:
* The low level driver must have finished any outstanding requests first
* via blk_cleanup_queue().
- **/
-static void blk_release_queue(struct kobject *kobj)
+ *
+ * Although blk_release_queue() may be called with preemption disabled,
+ * __blk_release_queue() may sleep.
+ */
+static void __blk_release_queue(struct work_struct *work)
{
- struct request_queue *q =
- container_of(kobj, struct request_queue, kobj);
+ struct request_queue *q = container_of(work, typeof(*q), release_work);

wbt_exit(q);
bdi_put(q->backing_dev_info);
@@ -844,6 +845,15 @@ static void blk_release_queue(struct kob
call_rcu(&q->rcu_head, blk_free_queue_rcu);
}

+static void blk_release_queue(struct kobject *kobj)
+{
+ struct request_queue *q =
+ container_of(kobj, struct request_queue, kobj);
+
+ INIT_WORK(&q->release_work, __blk_release_queue);
+ schedule_work(&q->release_work);
+}
+
static const struct sysfs_ops queue_sysfs_ops = {
.show = queue_attr_show,
.store = queue_attr_store,
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -580,6 +580,8 @@ struct request_queue {

size_t cmd_size;
void *rq_alloc_data;
+
+ struct work_struct release_work;
};

#define QUEUE_FLAG_QUEUED 1 /* uses generic tag queueing */


2017-07-19 10:44:45

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 31/88] Adding asm-prototypes.h for genksyms to generate crc

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nagarathnam Muthusamy <[email protected]>


[ Upstream commit bdca8cc096203b17ad0ac4e19f50578207e054d2 ]

This patch adds the prototypes of assembly-defined functions to asm-prototypes.h.
Some prototypes are added directly as they are not present in any existing
header file.

Signed-off-by: Nagarathnam Muthusamy <[email protected]>
Reviewed-by: Babu Moger <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/sparc/include/asm/asm-prototypes.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
create mode 100644 arch/sparc/include/asm/asm-prototypes.h

--- /dev/null
+++ b/arch/sparc/include/asm/asm-prototypes.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) 2017 Oracle and/or its affiliates. All rights reserved.
+ */
+
+#include <asm/xor.h>
+#include <asm/checksum.h>
+#include <asm/trap_block.h>
+#include <asm/uaccess.h>
+#include <asm/atomic.h>
+#include <asm/ftrace.h>
+#include <asm/cacheflush.h>
+#include <asm/oplib.h>
+#include <linux/atomic.h>
+
+void *__memscan_zero(void *, size_t);
+void *__memscan_generic(void *, int, size_t);
+void *__bzero(void *, size_t);
+void VISenter(void); /* Dummy prototype to supress warning */
+#undef memcpy
+#undef memset
+void *memcpy(void *dest, const void *src, size_t n);
+void *memset(void *s, int c, size_t n);
+typedef int TItype __attribute__((mode(TI)));
+TItype __multi3(TItype a, TItype b);


2017-07-19 10:09:55

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 12/88] rocker: move dereference before free

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Dan Carpenter <[email protected]>


[ Upstream commit acb4b7df48b539cb391287921de57e4e5fae3460 ]

My static checker complains that ofdpa_neigh_del() can sometimes free
"found". It just makes sense to use it first before deleting it.

Fixes: ecf244f753e0 ("rocker: fix maybe-uninitialized warning")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/rocker/rocker_ofdpa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
+++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
@@ -1505,8 +1505,8 @@ static int ofdpa_port_ipv4_nh(struct ofd
*index = entry->index;
resolved = false;
} else if (removing) {
- ofdpa_neigh_del(trans, found);
*index = found->index;
+ ofdpa_neigh_del(trans, found);
} else if (updating) {
ofdpa_neigh_update(found, trans, NULL, false);
resolved = !is_zero_ether_addr(found->eth_dst);


2017-07-19 10:09:54

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 26/88] cxgb4: fix BUG() on interrupt deallocating path of ULD

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: "Guilherme G. Piccoli" <[email protected]>


[ Upstream commit 6a146f3a5894b751cef16feb3d7903e45e3c445c ]

Since the introduction of ULDs (Upper-Layer Drivers), the MSI-X
deallocation path changed in cxgb4: the driver frees the ULD
interrupts when unregistering it or in the PCI shutdown handler.

The problem is that if an MSI-X interrupt is not freed before being
deallocated in the PCI layer, it will trigger a BUG() due to a still
"alive" interrupt being tentatively quiesced.

The below trace was observed when doing a simple unbind of Chelsio's
adapter PCI function, like:
"echo 001e:80:00.4 > /sys/bus/pci/drivers/cxgb4/unbind"

Trace:

kernel BUG at drivers/pci/msi.c:352!
Oops: Exception in kernel mode, sig: 5 [#1]
...
NIP [c0000000005a5e60] free_msi_irqs+0xa0/0x250
LR [c0000000005a5e50] free_msi_irqs+0x90/0x250
Call Trace:
[c0000000005a5e50] free_msi_irqs+0x90/0x250 (unreliable)
[c0000000005a72c4] pci_disable_msix+0x124/0x180
[d000000011e06708] disable_msi+0x88/0xb0 [cxgb4]
[d000000011e06948] free_some_resources+0xa8/0x160 [cxgb4]
[d000000011e06d60] remove_one+0x170/0x3c0 [cxgb4]
[c00000000058a910] pci_device_remove+0x70/0x110
[c00000000064ef04] device_release_driver_internal+0x1f4/0x2c0
...

This patch fixes the issue by refactoring the ULD shutdown path in the
cxgb4 driver, properly freeing and disabling interrupts in the PCI
remove handler too.

Fixes: 0fbc81b3ad51 ("Allocate resources dynamically for all cxgb4 ULD's")
Reported-by: Harsha Thyagaraja <[email protected]>
Signed-off-by: Guilherme G. Piccoli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 16 ++++++---
drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c | 42 ++++++++++++++----------
2 files changed, 36 insertions(+), 22 deletions(-)

--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -2076,12 +2076,12 @@ static void detach_ulds(struct adapter *

mutex_lock(&uld_mutex);
list_del(&adap->list_node);
+
for (i = 0; i < CXGB4_ULD_MAX; i++)
- if (adap->uld && adap->uld[i].handle) {
+ if (adap->uld && adap->uld[i].handle)
adap->uld[i].state_change(adap->uld[i].handle,
CXGB4_STATE_DETACH);
- adap->uld[i].handle = NULL;
- }
+
if (netevent_registered && list_empty(&adapter_list)) {
unregister_netevent_notifier(&cxgb4_netevent_nb);
netevent_registered = false;
@@ -5089,8 +5089,10 @@ static void remove_one(struct pci_dev *p
*/
destroy_workqueue(adapter->workq);

- if (is_uld(adapter))
+ if (is_uld(adapter)) {
detach_ulds(adapter);
+ t4_uld_clean_up(adapter);
+ }

disable_interrupts(adapter);

@@ -5167,7 +5169,11 @@ static void shutdown_one(struct pci_dev
if (adapter->port[i]->reg_state == NETREG_REGISTERED)
cxgb_close(adapter->port[i]);

- t4_uld_clean_up(adapter);
+ if (is_uld(adapter)) {
+ detach_ulds(adapter);
+ t4_uld_clean_up(adapter);
+ }
+
disable_interrupts(adapter);
disable_msi(adapter);

--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
@@ -589,22 +589,37 @@ void t4_uld_mem_free(struct adapter *ada
kfree(adap->uld);
}

+/* This function should be called with uld_mutex taken. */
+static void cxgb4_shutdown_uld_adapter(struct adapter *adap, enum cxgb4_uld type)
+{
+ if (adap->uld[type].handle) {
+ adap->uld[type].handle = NULL;
+ adap->uld[type].add = NULL;
+ release_sge_txq_uld(adap, type);
+
+ if (adap->flags & FULL_INIT_DONE)
+ quiesce_rx_uld(adap, type);
+
+ if (adap->flags & USING_MSIX)
+ free_msix_queue_irqs_uld(adap, type);
+
+ free_sge_queues_uld(adap, type);
+ free_queues_uld(adap, type);
+ }
+}
+
void t4_uld_clean_up(struct adapter *adap)
{
unsigned int i;

- if (!adap->uld)
- return;
+ mutex_lock(&uld_mutex);
for (i = 0; i < CXGB4_ULD_MAX; i++) {
if (!adap->uld[i].handle)
continue;
- if (adap->flags & FULL_INIT_DONE)
- quiesce_rx_uld(adap, i);
- if (adap->flags & USING_MSIX)
- free_msix_queue_irqs_uld(adap, i);
- free_sge_queues_uld(adap, i);
- free_queues_uld(adap, i);
+
+ cxgb4_shutdown_uld_adapter(adap, i);
}
+ mutex_unlock(&uld_mutex);
}

static void uld_init(struct adapter *adap, struct cxgb4_lld_info *lld)
@@ -782,15 +797,8 @@ int cxgb4_unregister_uld(enum cxgb4_uld
continue;
if (type == CXGB4_ULD_ISCSIT && is_t4(adap->params.chip))
continue;
- adap->uld[type].handle = NULL;
- adap->uld[type].add = NULL;
- release_sge_txq_uld(adap, type);
- if (adap->flags & FULL_INIT_DONE)
- quiesce_rx_uld(adap, type);
- if (adap->flags & USING_MSIX)
- free_msix_queue_irqs_uld(adap, type);
- free_sge_queues_uld(adap, type);
- free_queues_uld(adap, type);
+
+ cxgb4_shutdown_uld_adapter(adap, type);
}
mutex_unlock(&uld_mutex);



2017-07-19 10:45:28

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 27/88] tap: convert a mutex to a spinlock

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: WANG Cong <[email protected]>


[ Upstream commit ffa423fb3251f8737303ffc3b0659e86e501808e ]

We are not allowed to block on the RCU reader side, so we can't
just hold the mutex as before. As a quick fix, convert it to
a spinlock.
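
A minimal sketch of the resulting pattern (assuming kernel-module
context; simplified from the driver): with a spinlock the critical
section may no longer sleep, which is also why the idr_alloc() below
has to switch from GFP_KERNEL to GFP_ATOMIC.

#include <linux/spinlock.h>
#include <linux/idr.h>
#include <linux/gfp.h>

static DEFINE_SPINLOCK(minor_lock);	/* was a struct mutex */
static DEFINE_IDR(minor_idr);

static int demo_get_minor(void *tap)
{
	int retval;

	spin_lock(&minor_lock);
	/* no sleeping allowed here, hence GFP_ATOMIC */
	retval = idr_alloc(&minor_idr, tap, 1, 16, GFP_ATOMIC);
	spin_unlock(&minor_lock);

	return retval;
}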

Fixes: d9f1f61c0801 ("tap: Extending tap device create/destroy APIs")
Reported-by: Christian Borntraeger <[email protected]>
Tested-by: Christian Borntraeger <[email protected]>
Cc: Sainath Grandhi <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/tap.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)

--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -106,7 +106,7 @@ struct major_info {
struct rcu_head rcu;
dev_t major;
struct idr minor_idr;
- struct mutex minor_lock;
+ spinlock_t minor_lock;
const char *device_name;
struct list_head next;
};
@@ -416,15 +416,15 @@ int tap_get_minor(dev_t major, struct ta
goto unlock;
}

- mutex_lock(&tap_major->minor_lock);
- retval = idr_alloc(&tap_major->minor_idr, tap, 1, TAP_NUM_DEVS, GFP_KERNEL);
+ spin_lock(&tap_major->minor_lock);
+ retval = idr_alloc(&tap_major->minor_idr, tap, 1, TAP_NUM_DEVS, GFP_ATOMIC);
if (retval >= 0) {
tap->minor = retval;
} else if (retval == -ENOSPC) {
netdev_err(tap->dev, "Too many tap devices\n");
retval = -EINVAL;
}
- mutex_unlock(&tap_major->minor_lock);
+ spin_unlock(&tap_major->minor_lock);

unlock:
rcu_read_unlock();
@@ -442,12 +442,12 @@ void tap_free_minor(dev_t major, struct
goto unlock;
}

- mutex_lock(&tap_major->minor_lock);
+ spin_lock(&tap_major->minor_lock);
if (tap->minor) {
idr_remove(&tap_major->minor_idr, tap->minor);
tap->minor = 0;
}
- mutex_unlock(&tap_major->minor_lock);
+ spin_unlock(&tap_major->minor_lock);

unlock:
rcu_read_unlock();
@@ -467,13 +467,13 @@ static struct tap_dev *dev_get_by_tap_fi
goto unlock;
}

- mutex_lock(&tap_major->minor_lock);
+ spin_lock(&tap_major->minor_lock);
tap = idr_find(&tap_major->minor_idr, minor);
if (tap) {
dev = tap->dev;
dev_hold(dev);
}
- mutex_unlock(&tap_major->minor_lock);
+ spin_unlock(&tap_major->minor_lock);

unlock:
rcu_read_unlock();
@@ -1227,7 +1227,7 @@ static int tap_list_add(dev_t major, con
tap_major->major = MAJOR(major);

idr_init(&tap_major->minor_idr);
- mutex_init(&tap_major->minor_lock);
+ spin_lock_init(&tap_major->minor_lock);

tap_major->device_name = device_name;



2017-07-19 10:46:02

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 21/88] liquidio: fix bug in soft reset failure detection

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Derek Chickles <[email protected]>


[ Upstream commit 05a6b4cae8c0cc1680c9dd33a97a49a13c0f01bc ]

The code that detects a failed soft reset of Octeon compares the
wrong value against the reset value of the Octeon SLI_SCRATCH_1
register, making it unable to detect a soft reset failure. Fix it by
checking for the correct condition: any non-zero value.
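
For context, a hedged reconstruction of the surrounding reset flow
(not quoted from the patch): the driver parks a magic value in the
scratch register before initiating the reset, and a successful reset
clears the register back to zero, so any non-zero readback means the
reset did not take.

	octeon_write_csr64(oct, CN23XX_SLI_SCRATCH1, 0x1234ULL);
	/* ... initiate the soft reset, then wait ... */
	mdelay(100);
	if (octeon_read_csr64(oct, CN23XX_SLI_SCRATCH1)) {
		dev_err(&oct->pci_dev->dev, "soft reset failed\n");
		return 1;
	}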

Fixes: f21fb3ed364b ("Add support of Cavium Liquidio ethernet adapters")
Fixes: c0eab5b3580a ("liquidio: CN23XX firmware download")
Signed-off-by: Derek Chickles <[email protected]>
Signed-off-by: Satanand Burla <[email protected]>
Signed-off-by: Raghu Vatsavayi <[email protected]>
Signed-off-by: Felix Manlunas <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c | 2 +-
drivers/net/ethernet/cavium/liquidio/cn66xx_device.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
@@ -221,7 +221,7 @@ static int cn23xx_pf_soft_reset(struct o
/* Wait for 100ms as Octeon resets. */
mdelay(100);

- if (octeon_read_csr64(oct, CN23XX_SLI_SCRATCH1) == 0x1234ULL) {
+ if (octeon_read_csr64(oct, CN23XX_SLI_SCRATCH1)) {
dev_err(&oct->pci_dev->dev, "OCTEON[%d]: Soft reset failed\n",
oct->octeon_id);
return 1;
--- a/drivers/net/ethernet/cavium/liquidio/cn66xx_device.c
+++ b/drivers/net/ethernet/cavium/liquidio/cn66xx_device.c
@@ -44,7 +44,7 @@ int lio_cn6xxx_soft_reset(struct octeon_
/* Wait for 10ms as Octeon resets. */
mdelay(100);

- if (octeon_read_csr64(oct, CN6XXX_SLI_SCRATCH1) == 0x1234ULL) {
+ if (octeon_read_csr64(oct, CN6XXX_SLI_SCRATCH1)) {
dev_err(&oct->pci_dev->dev, "Soft reset failed\n");
return 1;
}


2017-07-19 10:09:43

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 18/88] vxlan: fix hlist corruption

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jiri Benc <[email protected]>


[ Upstream commit 69e766612c4bcb79e19cebed9eed61d4222c1d47 ]

It's not a good idea to add the same hlist_node to two different hash lists.
This leads to various hard-to-debug memory corruptions.
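
The standard remedy, which the patch below applies, is to give the
object one dedicated list node per hash table, each carrying a
back-pointer to its owner (a reduced sketch of the new types):

	struct vxlan_dev_node {
		struct hlist_node hlist;  /* exactly one list membership */
		struct vxlan_dev *vxlan;  /* back-pointer to the owner */
	};

Each vxlan_dev then embeds one node for the IPv4 socket's VNI table
and, under CONFIG_IPV6, a second one for the IPv6 socket's table, so
hlist_add_head_rcu() is never called twice on the same hlist_node.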

Fixes: b1be00a6c39f ("vxlan: support both IPv4 and IPv6 sockets in a single vxlan device")
Signed-off-by: Jiri Benc <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/vxlan.c | 30 ++++++++++++++++++++----------
include/net/vxlan.h | 10 +++++++++-
2 files changed, 29 insertions(+), 11 deletions(-)

--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -228,15 +228,15 @@ static struct vxlan_sock *vxlan_find_soc

static struct vxlan_dev *vxlan_vs_find_vni(struct vxlan_sock *vs, __be32 vni)
{
- struct vxlan_dev *vxlan;
+ struct vxlan_dev_node *node;

/* For flow based devices, map all packets to VNI 0 */
if (vs->flags & VXLAN_F_COLLECT_METADATA)
vni = 0;

- hlist_for_each_entry_rcu(vxlan, vni_head(vs, vni), hlist) {
- if (vxlan->default_dst.remote_vni == vni)
- return vxlan;
+ hlist_for_each_entry_rcu(node, vni_head(vs, vni), hlist) {
+ if (node->vxlan->default_dst.remote_vni == vni)
+ return node->vxlan;
}

return NULL;
@@ -2361,17 +2361,22 @@ static void vxlan_vs_del_dev(struct vxla
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);

spin_lock(&vn->sock_lock);
- hlist_del_init_rcu(&vxlan->hlist);
+ hlist_del_init_rcu(&vxlan->hlist4.hlist);
+#if IS_ENABLED(CONFIG_IPV6)
+ hlist_del_init_rcu(&vxlan->hlist6.hlist);
+#endif
spin_unlock(&vn->sock_lock);
}

-static void vxlan_vs_add_dev(struct vxlan_sock *vs, struct vxlan_dev *vxlan)
+static void vxlan_vs_add_dev(struct vxlan_sock *vs, struct vxlan_dev *vxlan,
+ struct vxlan_dev_node *node)
{
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
__be32 vni = vxlan->default_dst.remote_vni;

+ node->vxlan = vxlan;
spin_lock(&vn->sock_lock);
- hlist_add_head_rcu(&vxlan->hlist, vni_head(vs, vni));
+ hlist_add_head_rcu(&node->hlist, vni_head(vs, vni));
spin_unlock(&vn->sock_lock);
}

@@ -2817,6 +2822,7 @@ static int __vxlan_sock_add(struct vxlan
{
struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id);
struct vxlan_sock *vs = NULL;
+ struct vxlan_dev_node *node;

if (!vxlan->cfg.no_share) {
spin_lock(&vn->sock_lock);
@@ -2834,12 +2840,16 @@ static int __vxlan_sock_add(struct vxlan
if (IS_ERR(vs))
return PTR_ERR(vs);
#if IS_ENABLED(CONFIG_IPV6)
- if (ipv6)
+ if (ipv6) {
rcu_assign_pointer(vxlan->vn6_sock, vs);
- else
+ node = &vxlan->hlist6;
+ } else
#endif
+ {
rcu_assign_pointer(vxlan->vn4_sock, vs);
- vxlan_vs_add_dev(vs, vxlan);
+ node = &vxlan->hlist4;
+ }
+ vxlan_vs_add_dev(vs, vxlan, node);
return 0;
}

--- a/include/net/vxlan.h
+++ b/include/net/vxlan.h
@@ -221,9 +221,17 @@ struct vxlan_config {
bool no_share;
};

+struct vxlan_dev_node {
+ struct hlist_node hlist;
+ struct vxlan_dev *vxlan;
+};
+
/* Pseudo network device */
struct vxlan_dev {
- struct hlist_node hlist; /* vni hash table */
+ struct vxlan_dev_node hlist4; /* vni hash table for IPv4 socket */
+#if IS_ENABLED(CONFIG_IPV6)
+ struct vxlan_dev_node hlist6; /* vni hash table for IPv6 socket */
+#endif
struct list_head next; /* vxlan's per namespace list */
struct vxlan_sock __rcu *vn4_sock; /* listening socket for IPv4 */
#if IS_ENABLED(CONFIG_IPV6)


2017-07-19 10:09:40

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 17/88] ipv6: dad: don't remove dynamic addresses if link is down

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sabrina Dubroca <[email protected]>


[ Upstream commit ec8add2a4c9df723c94a863b8fcd6d93c472deed ]

Currently, when the link for $DEV is down, this command succeeds but the
address is removed immediately by DAD (1):

ip addr add 1111::12/64 dev $DEV valid_lft 3600 preferred_lft 1800

In the same situation, this will succeed and not remove the address (2):

ip addr add 1111::12/64 dev $DEV
ip addr change 1111::12/64 dev $DEV valid_lft 3600 preferred_lft 1800

The comment in addrconf_dad_begin() when !IF_READY makes it look like
this is the intended behavior, but doesn't explain why:

* If the device is not ready:
* - keep it tentative if it is a permanent address.
* - otherwise, kill it.

We clearly cannot prevent userspace from doing (2), but we can make (1)
work consistently with (2).

addrconf_dad_stop() is only called in two cases: if DAD failed, or to
skip DAD when the link is down. In that second case, the fix is to avoid
deleting the address, like we already do for permanent addresses.

Fixes: 3c21edbd1137 ("[IPV6]: Defer IPv6 device initialization until the link becomes ready.")
Signed-off-by: Sabrina Dubroca <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/ipv6/addrconf.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)

--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -1888,15 +1888,7 @@ static void addrconf_dad_stop(struct ine
if (dad_failed)
ifp->flags |= IFA_F_DADFAILED;

- if (ifp->flags&IFA_F_PERMANENT) {
- spin_lock_bh(&ifp->lock);
- addrconf_del_dad_work(ifp);
- ifp->flags |= IFA_F_TENTATIVE;
- spin_unlock_bh(&ifp->lock);
- if (dad_failed)
- ipv6_ifa_notify(0, ifp);
- in6_ifa_put(ifp);
- } else if (ifp->flags&IFA_F_TEMPORARY) {
+ if (ifp->flags&IFA_F_TEMPORARY) {
struct inet6_ifaddr *ifpub;
spin_lock_bh(&ifp->lock);
ifpub = ifp->ifpub;
@@ -1909,6 +1901,14 @@ static void addrconf_dad_stop(struct ine
spin_unlock_bh(&ifp->lock);
}
ipv6_del_addr(ifp);
+ } else if (ifp->flags&IFA_F_PERMANENT || !dad_failed) {
+ spin_lock_bh(&ifp->lock);
+ addrconf_del_dad_work(ifp);
+ ifp->flags |= IFA_F_TENTATIVE;
+ spin_unlock_bh(&ifp->lock);
+ if (dad_failed)
+ ipv6_ifa_notify(0, ifp);
+ in6_ifa_put(ifp);
} else {
ipv6_del_addr(ifp);
}


2017-07-19 10:46:57

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 16/88] net/mlx5e: Fix TX carrier errors report in get stats ndo

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Gal Pressman <[email protected]>


[ Upstream commit 8ff93de7668bd81bc8efa819d1184ebd48fae72d ]

The "symbol error during carrier" counter from PPCNT was mistakenly
reported as TX carrier errors in the get_stats ndo, although it is an
RX counter.

Fixes: 269e6b3af3bf ("net/mlx5e: Report additional error statistics in get stats ndo")
Signed-off-by: Gal Pressman <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 --
1 file changed, 2 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -2771,8 +2771,6 @@ mlx5e_get_stats(struct net_device *dev,
PPORT_802_3_GET(pstats, a_frame_check_sequence_errors);
stats->rx_frame_errors = PPORT_802_3_GET(pstats, a_alignment_errors);
stats->tx_aborted_errors = PPORT_2863_GET(pstats, if_out_discards);
- stats->tx_carrier_errors =
- PPORT_802_3_GET(pstats, a_symbol_error_during_carrier);
stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
stats->rx_frame_errors;
stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;


2017-07-19 10:09:33

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 24/88] rds: tcp: use sock_create_lite() to create the accept socket

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Sowmini Varadhan <[email protected]>


[ Upstream commit 0933a578cd55b02dc80f219dc8f2efb17ec61c9a ]

There are two problems with calling sock_create_kern() from
rds_tcp_accept_one():
1. it sets up a new_sock->sk that is wasteful, because this ->sk
is going to get replaced by inet_accept() in the subsequent ->accept()
2. the new_sock->sk is a leaked reference in sock_graft(), which
expects to find a NULL parent->sk

Avoid these problems by calling sock_create_lite().

Signed-off-by: Sowmini Varadhan <[email protected]>
Acked-by: Santosh Shilimkar <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/rds/tcp_listen.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -125,7 +125,7 @@ int rds_tcp_accept_one(struct socket *so
if (!sock) /* module unload or netns delete in progress */
return -ENETUNREACH;

- ret = sock_create_kern(sock_net(sock->sk), sock->sk->sk_family,
+ ret = sock_create_lite(sock->sk->sk_family,
sock->sk->sk_type, sock->sk->sk_protocol,
&new_sock);
if (ret)


2017-07-19 10:09:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 23/88] vrf: fix bug_on triggered by rx when destroying a vrf

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Nikolay Aleksandrov <[email protected]>


[ Upstream commit f630c38ef0d785101363a8992bbd4f302180f86f ]

When destroying a VRF device we clean up the slaves in its ndo_uninit()
function, but that causes packets to be switched (skb->dev == the vrf
being destroyed) even though we're past the point where the VRF should
be receiving any packets while it is being dismantled. This causes a
BUG_ON to trigger if we have raw sockets (trace below).
The reason is that the inetdev of the VRF has been destroyed but we're
still sending packets up the stack with it, so let's free the slaves in
the dellink callback as David Ahern suggested.

Note that this fix doesn't prevent packets from going up when the VRF
device is admin down.

[ 35.631371] ------------[ cut here ]------------
[ 35.631603] kernel BUG at net/ipv4/fib_frontend.c:285!
[ 35.631854] invalid opcode: 0000 [#1] SMP
[ 35.631977] Modules linked in:
[ 35.632081] CPU: 2 PID: 22 Comm: ksoftirqd/2 Not tainted 4.12.0-rc7+ #45
[ 35.632247] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 35.632477] task: ffff88005ad68000 task.stack: ffff88005ad64000
[ 35.632632] RIP: 0010:fib_compute_spec_dst+0xfc/0x1ee
[ 35.632769] RSP: 0018:ffff88005ad67978 EFLAGS: 00010202
[ 35.632910] RAX: 0000000000000001 RBX: ffff880059a7f200 RCX: 0000000000000000
[ 35.633084] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff82274af0
[ 35.633256] RBP: ffff88005ad679f8 R08: 000000000001ef70 R09: 0000000000000046
[ 35.633430] R10: ffff88005ad679f8 R11: ffff880037731cb0 R12: 0000000000000001
[ 35.633603] R13: ffff8800599e3000 R14: 0000000000000000 R15: ffff8800599cb852
[ 35.634114] FS: 0000000000000000(0000) GS:ffff88005d900000(0000) knlGS:0000000000000000
[ 35.634306] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 35.634456] CR2: 00007f3563227095 CR3: 000000000201d000 CR4: 00000000000406e0
[ 35.634632] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 35.634865] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 35.635055] Call Trace:
[ 35.635271] ? __lock_acquire+0xf0d/0x1117
[ 35.635522] ipv4_pktinfo_prepare+0x82/0x151
[ 35.635831] raw_rcv_skb+0x17/0x3c
[ 35.636062] raw_rcv+0xe5/0xf7
[ 35.636287] raw_local_deliver+0x169/0x1d9
[ 35.636534] ip_local_deliver_finish+0x87/0x1c4
[ 35.636820] ip_local_deliver+0x63/0x7f
[ 35.637058] ip_rcv_finish+0x340/0x3a1
[ 35.637295] ip_rcv+0x314/0x34a
[ 35.637525] __netif_receive_skb_core+0x49f/0x7c5
[ 35.637780] ? lock_acquire+0x13f/0x1d7
[ 35.638018] ? lock_acquire+0x15e/0x1d7
[ 35.638259] __netif_receive_skb+0x1e/0x94
[ 35.638502] ? __netif_receive_skb+0x1e/0x94
[ 35.638748] netif_receive_skb_internal+0x74/0x300
[ 35.639002] ? dev_gro_receive+0x2ed/0x411
[ 35.639246] ? lock_is_held_type+0xc4/0xd2
[ 35.639491] napi_gro_receive+0x105/0x1a0
[ 35.639736] receive_buf+0xc32/0xc74
[ 35.639965] ? detach_buf+0x67/0x153
[ 35.640201] ? virtqueue_get_buf_ctx+0x120/0x176
[ 35.640453] virtnet_poll+0x128/0x1c5
[ 35.640690] net_rx_action+0x103/0x343
[ 35.640932] __do_softirq+0x1c7/0x4b7
[ 35.641171] run_ksoftirqd+0x23/0x5c
[ 35.641403] smpboot_thread_fn+0x24f/0x26d
[ 35.641646] ? sort_range+0x22/0x22
[ 35.641878] kthread+0x129/0x131
[ 35.642104] ? __list_add+0x31/0x31
[ 35.642335] ? __list_add+0x31/0x31
[ 35.642568] ret_from_fork+0x2a/0x40
[ 35.642804] Code: 05 bd 87 a3 00 01 e8 1f ef 98 ff 4d 85 f6 48 c7 c7 f0 4a 27 82 41 0f 94 c4 31 c9 31 d2 41 0f b6 f4 e8 04 71 a1 ff 45 84 e4 74 02 <0f> 0b 0f b7 93 c4 00 00 00 4d 8b a5 80 05 00 00 48 03 93 d0 00
[ 35.644342] RIP: fib_compute_spec_dst+0xfc/0x1ee RSP: ffff88005ad67978

Fixes: 193125dbd8eb ("net: Introduce VRF device driver")
Reported-by: Chris Cormier <[email protected]>
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Acked-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/vrf.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)

--- a/drivers/net/vrf.c
+++ b/drivers/net/vrf.c
@@ -788,15 +788,10 @@ static int vrf_del_slave(struct net_devi
static void vrf_dev_uninit(struct net_device *dev)
{
struct net_vrf *vrf = netdev_priv(dev);
- struct net_device *port_dev;
- struct list_head *iter;

vrf_rtable_release(dev, vrf);
vrf_rt6_release(dev, vrf);

- netdev_for_each_lower_dev(dev, port_dev, iter)
- vrf_del_slave(dev, port_dev);
-
free_percpu(dev->dstats);
dev->dstats = NULL;
}
@@ -1247,6 +1242,12 @@ static int vrf_validate(struct nlattr *t

static void vrf_dellink(struct net_device *dev, struct list_head *head)
{
+ struct net_device *port_dev;
+ struct list_head *iter;
+
+ netdev_for_each_lower_dev(dev, port_dev, iter)
+ vrf_del_slave(dev, port_dev);
+
unregister_netdevice_queue(dev, head);
}



2017-07-19 10:47:49

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 13/88] bpf: prevent leaking pointer via xadd on unprivileged

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Daniel Borkmann <[email protected]>


[ Upstream commit 6bdf6abc56b53103324dfd270a86580306e1a232 ]

Leaking kernel addresses to unprivileged users is generally
disallowed; for example, the verifier rejects the following:

0: (b7) r0 = 0
1: (18) r2 = 0xffff897e82304400
3: (7b) *(u64 *)(r1 +48) = r2
R2 leaks addr into ctx

Doing pointer arithmetic on them is also forbidden, so that they
don't turn into an unknown value and then get leaked out. However,
there's xadd as a special case, where we don't check the src reg
for being a pointer register, e.g. the following will pass:

0: (b7) r0 = 0
1: (7b) *(u64 *)(r1 +48) = r0
2: (18) r2 = 0xffff897e82304400 ; map
4: (db) lock *(u64 *)(r1 +48) += r2
5: (95) exit

We could store the pointer into skb->cb, lose the type context,
and then read it out from there again to eventually leak it out
of a map value. Or more easily in a different variant, too:

0: (bf) r6 = r1
1: (7a) *(u64 *)(r10 -8) = 0
2: (bf) r2 = r10
3: (07) r2 += -8
4: (18) r1 = 0x0
6: (85) call bpf_map_lookup_elem#1
7: (15) if r0 == 0x0 goto pc+3
R0=map_value(ks=8,vs=8,id=0),min_value=0,max_value=0 R6=ctx R10=fp
8: (b7) r3 = 0
9: (7b) *(u64 *)(r0 +0) = r3
10: (db) lock *(u64 *)(r0 +0) += r6
11: (b7) r0 = 0
12: (95) exit

from 7 to 11: R0=inv,min_value=0,max_value=0 R6=ctx R10=fp
11: (b7) r0 = 0
12: (95) exit

Prevent this by checking xadd src reg for pointer types. Also
add a couple of test cases related to this.
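
For reference, the second listing corresponds roughly to this
restricted-C fragment (a hypothetical sketch; the map, key and skb
context are assumed, and LLVM emits BPF_STX_XADD for
__sync_fetch_and_add()):

	long *value = bpf_map_lookup_elem(&map, &key);

	if (value) {
		*value = 0;
		/* xadd of a pointer into a map value; with this patch
		 * the verifier rejects it for unprivileged programs
		 * with "R6 leaks addr into mem" */
		__sync_fetch_and_add(value, (long)skb);
	}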

Fixes: 1be7f75d1668 ("bpf: enable non-root eBPF programs")
Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Acked-by: Edward Cree <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
kernel/bpf/verifier.c | 5 ++
tools/testing/selftests/bpf/test_verifier.c | 66 ++++++++++++++++++++++++++++
2 files changed, 71 insertions(+)

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -951,6 +951,11 @@ static int check_xadd(struct bpf_verifie
if (err)
return err;

+ if (is_pointer_value(env, insn->src_reg)) {
+ verbose("R%d leaks addr into mem\n", insn->src_reg);
+ return -EACCES;
+ }
+
/* check whether atomic_add can read the memory */
err = check_mem_access(env, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_READ, -1);
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -3518,6 +3518,72 @@ static struct bpf_test tests[] = {
.errstr = "invalid bpf_context access",
},
{
+ "leak pointer into ctx 1",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0,
+ offsetof(struct __sk_buff, cb[0])),
+ BPF_LD_MAP_FD(BPF_REG_2, 0),
+ BPF_STX_XADD(BPF_DW, BPF_REG_1, BPF_REG_2,
+ offsetof(struct __sk_buff, cb[0])),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 2 },
+ .errstr_unpriv = "R2 leaks addr into mem",
+ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+ "leak pointer into ctx 2",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0,
+ offsetof(struct __sk_buff, cb[0])),
+ BPF_STX_XADD(BPF_DW, BPF_REG_1, BPF_REG_10,
+ offsetof(struct __sk_buff, cb[0])),
+ BPF_EXIT_INSN(),
+ },
+ .errstr_unpriv = "R10 leaks addr into mem",
+ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+ "leak pointer into ctx 3",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_LD_MAP_FD(BPF_REG_2, 0),
+ BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2,
+ offsetof(struct __sk_buff, cb[0])),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 1 },
+ .errstr_unpriv = "R2 leaks addr into ctx",
+ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+ "leak pointer into map val",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+ BPF_LD_MAP_FD(BPF_REG_1, 0),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_map_lookup_elem),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+ BPF_MOV64_IMM(BPF_REG_3, 0),
+ BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
+ BPF_STX_XADD(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .fixup_map1 = { 4 },
+ .errstr_unpriv = "R6 leaks addr into mem",
+ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
"helper access to map: full range",
.insns = {
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),


2017-07-19 10:48:07

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 08/88] net: prevent sign extension in dev_get_stats()

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eric Dumazet <[email protected]>


[ Upstream commit 6f64ec74515925cced6df4571638b5a099a49aae ]

Similar to the fix provided by Dominik Heidler in commit
9b3dc0a17d73 ("l2tp: cast l2tp traffic counter to unsigned")
we need to take care of 32bit kernels in dev_get_stats().

When using atomic_long_read(), we add a 'long' to u64 and
might misinterpret the high-order bit, unless we cast to unsigned.
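
A minimal user-space illustration, using int32_t to stand in for a
32-bit kernel's long (not kernel code):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	int32_t dropped = -1;		/* counter with the high bit set */
	uint64_t total = 0;

	total += dropped;		/* sign-extends: adds 2^64 - 1 */
	printf("signed add:   %" PRIu64 "\n", total);

	total = 0;
	total += (uint32_t)dropped;	/* the fix: zero-extends, adds 2^32 - 1 */
	printf("unsigned add: %" PRIu64 "\n", total);
	return 0;
}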

Fixes: caf586e5f23ce ("net: add a core netdev->rx_dropped counter")
Fixes: 015f0688f57ca ("net: net: add a core netdev->tx_dropped counter")
Fixes: 6e7333d315a76 ("net: add rx_nohandler stat counter")
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Jarod Wilson <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/core/dev.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7623,9 +7623,9 @@ struct rtnl_link_stats64 *dev_get_stats(
} else {
netdev_stats_to_stats64(storage, &dev->stats);
}
- storage->rx_dropped += atomic_long_read(&dev->rx_dropped);
- storage->tx_dropped += atomic_long_read(&dev->tx_dropped);
- storage->rx_nohandler += atomic_long_read(&dev->rx_nohandler);
+ storage->rx_dropped += (unsigned long)atomic_long_read(&dev->rx_dropped);
+ storage->tx_dropped += (unsigned long)atomic_long_read(&dev->tx_dropped);
+ storage->rx_nohandler += (unsigned long)atomic_long_read(&dev->rx_nohandler);
return storage;
}
EXPORT_SYMBOL(dev_get_stats);


2017-07-19 10:09:17

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 06/88] net: dp83640: Avoid NULL pointer dereference.

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Richard Cochran <[email protected]>


[ Upstream commit db9d8b29d19d2801793e4419f4c6272bf8951c62 ]

The function skb_complete_tx_timestamp() used to allow passing in a
NULL pointer for the time stamps, but that was changed in commit
62bccb8cdb69051b95a55ab0c489e3cab261c8ef ("net-timestamp: Make the
clone operation stand-alone from phy timestamping"), and the existing
call sites, all of which are in the dp83640 driver, were fixed up.

Even though the kernel-doc was subsequently updated in commit
7a76a021cd5a292be875fbc616daf03eab1e6996 ("net-timestamp: Update
skb_complete_tx_timestamp comment"), a bug fix from Manfred Rudigier
still came into the driver using the old semantics. Manfred probably
derived that patch from an older kernel version.

This fix should be applied to the stable trees as well.

Fixes: 81e8f2e930fe ("net: dp83640: Fix tx timestamp overflow handling.")
Signed-off-by: Richard Cochran <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/phy/dp83640.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/phy/dp83640.c
+++ b/drivers/net/phy/dp83640.c
@@ -908,7 +908,7 @@ static void decode_txts(struct dp83640_p
if (overflow) {
pr_debug("tx timestamp queue overflow, count %d\n", overflow);
while (skb) {
- skb_complete_tx_timestamp(skb, NULL);
+ kfree_skb(skb);
skb = skb_dequeue(&dp83640->tx_queue);
}
return;


2017-07-19 10:48:42

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 04/88] sfc: Fix MCDI command size for filter operations

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Martin Habets <[email protected]>


[ Upstream commit bb53f4d4f5116d3dae76bb12fb16bc73771f958a ]

The 8000 series adapters use catch-all filters for encapsulated traffic
to support filtering VXLAN, NVGRE and GENEVE traffic.
This new filter functionality requires a longer MCDI command.
This patch increases the size of the on-stack buffers that were missed,
which fixes a kernel panic triggered by the stack protector.

Fixes: 9b41080125176 ("sfc: insert catch-all filters for encapsulated traffic")
Signed-off-by: Martin Habets <[email protected]>
Acked-by: Edward Cree <[email protected]>
Acked-by: Bert Kenward <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/sfc/ef10.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--- a/drivers/net/ethernet/sfc/ef10.c
+++ b/drivers/net/ethernet/sfc/ef10.c
@@ -4171,7 +4171,7 @@ found:
* recipients
*/
if (is_mc_recip) {
- MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_IN_LEN);
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_EXT_IN_LEN);
unsigned int depth, i;

memset(inbuf, 0, sizeof(inbuf));
@@ -4319,7 +4319,7 @@ static int efx_ef10_filter_remove_intern
efx_ef10_filter_set_entry(table, filter_idx, NULL, 0);
} else {
efx_mcdi_display_error(efx, MC_CMD_FILTER_OP,
- MC_CMD_FILTER_OP_IN_LEN,
+ MC_CMD_FILTER_OP_EXT_IN_LEN,
NULL, 0, rc);
}
}
@@ -4452,7 +4452,7 @@ static s32 efx_ef10_filter_rfs_insert(st
struct efx_filter_spec *spec)
{
struct efx_ef10_filter_table *table = efx->filter_state;
- MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_IN_LEN);
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_EXT_IN_LEN);
struct efx_filter_spec *saved_spec;
unsigned int hash, i, depth = 1;
bool replacing = false;
@@ -4939,7 +4939,7 @@ not_restored:
static void efx_ef10_filter_table_remove(struct efx_nic *efx)
{
struct efx_ef10_filter_table *table = efx->filter_state;
- MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_IN_LEN);
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_EXT_IN_LEN);
struct efx_filter_spec *spec;
unsigned int filter_idx;
int rc;


2017-07-19 10:09:09

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 03/88] netvsc: don't access netdev->num_rx_queues directly

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Arnd Bergmann <[email protected]>


[ Upstream commit b92b7d3312033a08cae2c879b9243c42ad7de94b ]

This structure member is hidden behind CONFIG_SYSFS, and we
get a build error when that is disabled:

drivers/net/hyperv/netvsc_drv.c: In function 'netvsc_set_channels':
drivers/net/hyperv/netvsc_drv.c:754:49: error: 'struct net_device' has no member named 'num_rx_queues'; did you mean 'num_tx_queues'?
drivers/net/hyperv/netvsc_drv.c: In function 'netvsc_set_rxfh':
drivers/net/hyperv/netvsc_drv.c:1181:25: error: 'struct net_device' has no member named 'num_rx_queues'; did you mean 'num_tx_queues'?

As the value is only set once to the argument of alloc_netdev_mq(),
we can compare against that constant directly.

Fixes: ff4a44199012 ("netvsc: allow get/set of RSS indirection table")
Fixes: 2b01888d1b45 ("netvsc: allow more flexible setting of number of channels")
Signed-off-by: Arnd Bergmann <[email protected]>
Reviewed-by: Haiyang Zhang <[email protected]>
Signed-off-by: Stephen Hemminger <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/hyperv/netvsc_drv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -753,7 +753,7 @@ static int netvsc_set_channels(struct ne
channels->rx_count || channels->tx_count || channels->other_count)
return -EINVAL;

- if (count > net->num_tx_queues || count > net->num_rx_queues)
+ if (count > net->num_tx_queues || count > VRSS_CHANNEL_MAX)
return -EINVAL;

if (net_device_ctx->start_remove || !nvdev || nvdev->destroy)
@@ -1142,7 +1142,7 @@ static int netvsc_set_rxfh(struct net_de

if (indir) {
for (i = 0; i < ITAB_NUM; i++)
- if (indir[i] >= dev->num_rx_queues)
+ if (indir[i] >= VRSS_CHANNEL_MAX)
return -EINVAL;

for (i = 0; i < ITAB_NUM; i++)


2017-07-19 10:49:31

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 11/88] mlxsw: spectrum_router: Fix NULL pointer dereference

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Ido Schimmel <[email protected]>


[ Upstream commit 6b27c8adf27edf1dabe2cdcfaa101ef7e2712415 ]

In case a VLAN device is enslaved to a bridge we shouldn't create a
router interface (RIF) for it when it's configured with an IP address.
This is already handled by the driver for other types of netdevs, such
as physical ports and LAG devices.

If this IP address is then removed and the interface is subsequently
unlinked from the bridge, a NULL pointer dereference can happen, as the
original 802.1d FID was replaced with an rFID which was then deleted.

To reproduce:
$ ip link set dev enp3s0np9 up
$ ip link add name enp3s0np9.111 link enp3s0np9 type vlan id 111
$ ip link set dev enp3s0np9.111 up
$ ip link add name br0 type bridge
$ ip link set dev br0 up
$ ip link set enp3s0np9.111 master br0
$ ip address add dev enp3s0np9.111 192.168.0.1/24
$ ip address del dev enp3s0np9.111 192.168.0.1/24
$ ip link set dev enp3s0np9.111 nomaster

Fixes: 99724c18fc66 ("mlxsw: spectrum: Introduce support for router interfaces")
Signed-off-by: Ido Schimmel <[email protected]>
Reported-by: Petr Machata <[email protected]>
Tested-by: Petr Machata <[email protected]>
Reviewed-by: Petr Machata <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/mellanox/mlxsw/spectrum.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -3829,6 +3829,9 @@ static int mlxsw_sp_inetaddr_vlan_event(
struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(vlan_dev);
u16 vid = vlan_dev_vlan_id(vlan_dev);

+ if (netif_is_bridge_port(vlan_dev))
+ return 0;
+
if (mlxsw_sp_port_dev_check(real_dev))
return mlxsw_sp_inetaddr_vport_event(vlan_dev, real_dev, event,
vid);


2017-07-19 10:49:49

by Greg Kroah-Hartman

[permalink] [raw]
Subject: [PATCH 4.11 01/88] net/phy: micrel: configure interrupts after autoneg workaround

4.11-stable review patch. If anyone has any objections, please let me know.

------------------

From: Zach Brown <[email protected]>


[ Upstream commit b866203d872d5deeafcecd25ea429d6748b5bd56 ]

The commit ("net/phy: micrel: Add workaround for bad autoneg") fixes an
autoneg failure case by resetting the hardware. This turns off
intterupts. Things will work themselves out if the phy polls, as it will
figure out it's state during a poll. However if the phy uses only
intterupts, the phy will stall, since interrupts are off. This patch
fixes the issue by calling config_intr after resetting the phy.

Fixes: d2fd719bcb0e ("net/phy: micrel: Add workaround for bad autoneg ")
Signed-off-by: Zach Brown <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Reviewed-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/phy/micrel.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/net/phy/micrel.c
+++ b/drivers/net/phy/micrel.c
@@ -611,6 +611,8 @@ static int ksz9031_read_status(struct ph
if ((regval & 0xFF) == 0xFF) {
phy_init_hw(phydev);
phydev->link = 0;
+ if (phydev->drv->config_intr && phy_interrupt_is_valid(phydev))
+ phydev->drv->config_intr(phydev);
}

return 0;


2017-07-19 20:35:00

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 4.11 00/88] 4.11.12-stable review

On Wed, Jul 19, 2017 at 12:07:22PM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.11.12 release.
> There are 88 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri Jul 21 10:07:36 UTC 2017.
> Anything received after that time might be too late.
>

Build results:
total: 145 pass: 145 fail: 0
Qemu test results:
total: 122 pass: 122 fail: 0

Details are available at http://kerneltests.org/builders.

Guenter

2017-07-19 23:38:26

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 4.11 00/88] 4.11.12-stable review

On 07/19/2017 04:07 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.11.12 release.
> There are 88 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri Jul 21 10:07:36 UTC 2017.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.11.12-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.11.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Compiled and booted on my test system. No dmesg regressions.

thanks,
-- Shuah