2020-05-13 10:03:03

by Greg KH

Subject: [PATCH 5.6 000/118] 5.6.13-rc1 review

This is the start of the stable review cycle for the 5.6.13 release.
There are 118 patches in this series, all of which will be posted as responses
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
Anything received after that time might be too late.

The whole patch series can be found as a single patch at:
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.6.13-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.6.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 5.6.13-rc1

Amir Goldstein <[email protected]>
fanotify: merge duplicate events on parent and child

Amir Goldstein <[email protected]>
fsnotify: replace inode pointer with an object id

Max Kellermann <[email protected]>
io_uring: don't use 'fd' for openat/openat2/statx

Christoph Hellwig <[email protected]>
bdi: add a ->dev_name field to struct backing_dev_info

Christoph Hellwig <[email protected]>
bdi: move bdi_dev_name out of line

Yafang Shao <[email protected]>
mm, memcg: fix error return value of mem_cgroup_css_alloc()

Ivan Delalande <[email protected]>
scripts/decodecode: fix trapping instruction formatting

Julia Lawall <[email protected]>
iommu/virtio: Reverse arguments to list_add

Josh Poimboeuf <[email protected]>
objtool: Fix stack offset tracking for indirect CFAs

Paolo Bonzini <[email protected]>
kvm: ioapic: Restrict lazy EOI update to edge-triggered interrupts

Arnd Bergmann <[email protected]>
netfilter: nf_osf: avoid passing pointer to local var

Guillaume Nault <[email protected]>
netfilter: nat: never update the UDP checksum when it's 0

Janakarajan Natarajan <[email protected]>
arch/x86/kvm/svm/sev.c: change flag passed to GUP fast in sev_pin_memory()

Suravee Suthikulpanit <[email protected]>
KVM: x86: Fixes posted interrupt check for IRQs delivery modes

Josh Poimboeuf <[email protected]>
x86/unwind/orc: Fix premature unwind stoppage due to IRET frames

Josh Poimboeuf <[email protected]>
x86/unwind/orc: Fix error path for bad ORC entry type

Josh Poimboeuf <[email protected]>
x86/unwind/orc: Prevent unwinding before ORC initialization

Miroslav Benes <[email protected]>
x86/unwind/orc: Don't skip the first frame for inactive tasks

Jann Horn <[email protected]>
x86/entry/64: Fix unwind hints in rewind_stack_do_exit()

Josh Poimboeuf <[email protected]>
x86/entry/64: Fix unwind hints in __switch_to_asm()

Josh Poimboeuf <[email protected]>
x86/entry/64: Fix unwind hints in kernel exit path

Josh Poimboeuf <[email protected]>
x86/entry/64: Fix unwind hints in register clearing code

Rick Edgecombe <[email protected]>
x86/mm/cpa: Flush direct map alias during cpa

Xiyu Yang <[email protected]>
batman-adv: Fix refcnt leak in batadv_v_ogm_process

Xiyu Yang <[email protected]>
batman-adv: Fix refcnt leak in batadv_store_throughput_override

Xiyu Yang <[email protected]>
batman-adv: Fix refcnt leak in batadv_show_throughput_override

George Spelvin <[email protected]>
batman-adv: fix batadv_nc_random_weight_tq

Tejun Heo <[email protected]>
iocost: protect iocg->abs_vdebt with iocg->waitq.lock

Vincent Chen <[email protected]>
riscv: set max_pfn to the PFN of the last page

Luis Chamberlain <[email protected]>
coredump: fix crash when umh is disabled

Oscar Carter <[email protected]>
staging: gasket: Check the return value of gasket_get_bar_index()

Luis Henriques <[email protected]>
ceph: demote quotarealm lookup warning to a debug message

Jeff Layton <[email protected]>
ceph: fix endianness bug when handling MDS session feature bits

Henry Willard <[email protected]>
mm: limit boost_watermark on small zones

David Hildenbrand <[email protected]>
mm/page_alloc: fix watchdog soft lockups during set_zone_contiguous()

Khazhismel Kumykov <[email protected]>
eventpoll: fix missing wakeup for ovflist in ep_poll_callback

Roman Penyaev <[email protected]>
epoll: atomically remove wait entry on wake up

Oleg Nesterov <[email protected]>
ipc/mqueue.c: change __do_notify() to bypass check_kill_permission()

Daniel Kolesa <[email protected]>
drm/amd/display: work around fp code being emitted outside of DC_FP_START/END

H. Nikolaus Schaller <[email protected]>
drm: ingenic-drm: add MODULE_DEVICE_TABLE

Tomas Winkler <[email protected]>
mei: me: disable mei interface on LBG servers.

Ulf Hansson <[email protected]>
amba: Initialize dma_parms for amba devices

Ulf Hansson <[email protected]>
driver core: platform: Initialize dma_parms for platform devices

Mark Rutland <[email protected]>
arm64: hugetlb: avoid potential NULL dereference

Marc Zyngier <[email protected]>
KVM: arm64: Fix 32bit PC wrap-around

Marc Zyngier <[email protected]>
KVM: arm: vgic: Fix limit condition when writing to GICD_I[CS]ACTIVER

Sean Christopherson <[email protected]>
KVM: VMX: Explicitly clear RFLAGS.CF and RFLAGS.ZF in VM-Exit RSB path

Christian Borntraeger <[email protected]>
KVM: s390: Remove false WARN_ON_ONCE for the PQAP instruction

Jason A. Donenfeld <[email protected]>
crypto: arch/lib - limit simd usage to 4k chunks

Jason A. Donenfeld <[email protected]>
crypto: arch/nhpoly1305 - process in explicit 4k chunks

Steven Rostedt (VMware) <[email protected]>
tracing: Add a vmalloc_sync_mappings() for safe measure

Steven Rostedt (VMware) <[email protected]>
tracing: Wait for preempt irq delay thread to finish

Masami Hiramatsu <[email protected]>
tracing/kprobes: Reject new event if loc is NULL

Masami Hiramatsu <[email protected]>
tracing/boottime: Fix kprobe event API usage

Oliver Neukum <[email protected]>
USB: serial: garmin_gps: add sanity checking for data length

Bryan O'Donoghue <[email protected]>
usb: chipidea: msm: Ensure proper controller reset using role switch API

Oliver Neukum <[email protected]>
USB: uas: add quirk for LaCie 2Big Quadra

Jason Gerecke <[email protected]>
HID: wacom: Report 2nd-gen Intuos Pro S center button status over BT

Alan Stern <[email protected]>
HID: usbhid: Fix race between usbhid_close() and usbhid_stop()

Jason Gerecke <[email protected]>
Revert "HID: wacom: generic: read the number of expected touches on a per collection basis"

Jere Leppänen <[email protected]>
sctp: Fix bundling of SHUTDOWN with COOKIE-ACK

Jason Gerecke <[email protected]>
HID: wacom: Read HID_DG_CONTACTMAX directly for non-generic devices

Jason A. Donenfeld <[email protected]>
wireguard: send/receive: cond_resched() when processing worker ringbuffers

Jason A. Donenfeld <[email protected]>
wireguard: socket: remove errant restriction on looping to self

Dejin Zheng <[email protected]>
net: enetc: fix an issue about leak system resources

Toke Høiland-Jørgensen <[email protected]>
wireguard: receive: use tunnel helpers for decapsulating ECN markings

Jason A. Donenfeld <[email protected]>
wireguard: queueing: cleanup ptr_ring in error path of packet_queue_init

Dan Carpenter <[email protected]>
net: mvpp2: cls: Prevent buffer overflow in mvpp2_ethtool_cls_rule_del()

Dan Carpenter <[email protected]>
net: mvpp2: prevent buffer overflow in mvpp22_rss_ctx()

Roi Dayan <[email protected]>
net/mlx5e: Fix q counters on uplink representors

Moshe Shemesh <[email protected]>
net/mlx5: Fix command entry leak in Internal Error State

Moshe Shemesh <[email protected]>
net/mlx5: Fix forced completion access non initialized command entry

Erez Shitrit <[email protected]>
net/mlx5: DR, On creation set CQ's arm_db member to right value

Michael Chan <[email protected]>
bnxt_en: Fix VLAN acceleration handling in bnxt_fix_features().

Michael Chan <[email protected]>
bnxt_en: Return error when allocating zero size context memory.

Michael Chan <[email protected]>
bnxt_en: Improve AER slot reset.

Vasundhara Volam <[email protected]>
bnxt_en: Reduce BNXT_MSIX_VEC_MAX value to supported CQs per PF.

Michael Chan <[email protected]>
bnxt_en: Fix VF anti-spoof filter setup.

Toke Høiland-Jørgensen <[email protected]>
tunnel: Propagate ECT(1) when decapsulating as recommended by RFC6040

Tuong Lien <[email protected]>
tipc: fix partial topology connection closure

Eric Dumazet <[email protected]>
selftests: net: tcp_mmap: fix SO_RCVLOWAT setting

Eric Dumazet <[email protected]>
selftests: net: tcp_mmap: clear whole tcp_zerocopy_receive struct

Eric Dumazet <[email protected]>
sch_sfq: validate silly quantum values

Eric Dumazet <[email protected]>
sch_choke: avoid potential panic in choke_reset()

Qiushi Wu <[email protected]>
nfp: abm: fix a memory leak bug

Matt Jolly <[email protected]>
net: usb: qmi_wwan: add support for DW5816e

Xiyu Yang <[email protected]>
net/tls: Fix sk_psock refcnt leak when in tls_data_ready()

Xiyu Yang <[email protected]>
net/tls: Fix sk_psock refcnt leak in bpf_exec_tx_verdict()

Anthony Felice <[email protected]>
net: tc35815: Fix phydev supported/advertising mask

Willem de Bruijn <[email protected]>
net: stricter validation of untrusted gso packets

Eric Dumazet <[email protected]>
net_sched: sch_skbprio: add message validation to skbprio_change()

Baruch Siach <[email protected]>
net: phy: marvell10g: fix temperature sensor on 2110

Tariq Toukan <[email protected]>
net/mlx4_core: Fix use of ENOSPC around mlx4_counter_alloc()

Scott Dial <[email protected]>
net: macsec: preserve ingress frame ordering

Dejin Zheng <[email protected]>
net: macb: fix an issue about leak related system resources

Florian Fainelli <[email protected]>
net: dsa: Do not make user port errors fatal

Florian Fainelli <[email protected]>
net: dsa: Do not leave DSA master with NULL netdev_ops

Ido Schimmel <[email protected]>
net: bridge: vlan: Add a schedule point during VLAN processing

Roman Mashak <[email protected]>
neigh: send protocol value in neighbor create notification

Jiri Pirko <[email protected]>
mlxsw: spectrum_acl_tcam: Position vchunk in a vregion list properly

David Ahern <[email protected]>
ipv6: Use global sernum for dst validation with nexthop objects

Eric Dumazet <[email protected]>
fq_codel: fix TCA_FQ_CODEL_DROP_BATCH_SIZE sanity checks

Julia Lawall <[email protected]>
dp83640: reverse arguments to list_add_tail

Jakub Kicinski <[email protected]>
devlink: fix return value after hitting end in region read

Aya Levin <[email protected]>
devlink: Fix reporter's recovery condition

Rahul Lakkireddy <[email protected]>
cxgb4: fix EOTID leak when disabling TC-MQPRIO offload

Andy Shevchenko <[email protected]>
net: macb: Fix runtime PM refcounting

Masami Hiramatsu <[email protected]>
tracing/kprobes: Fix a double initialization typo

Sagi Grimberg <[email protected]>
nvme: fix possible hang when ns scanning fails during error recovery

Christoph Hellwig <[email protected]>
nvme: refactor nvme_identify_ns_descs error handling

Eric Whitney <[email protected]>
ext4: disable dioread_nolock whenever delayed allocation is disabled

Ritesh Harjani <[email protected]>
ext4: don't set dioread_nolock by default for blocksize < pagesize

Shubhrajyoti Datta <[email protected]>
tty: xilinx_uartps: Fix missing id assignment to the console

Nicolas Pitre <[email protected]>
vt: fix unicode console freeing with a common interface

Evan Quan <[email protected]>
drm/amdgpu: drop redundant cg/pg ungate on runpm enter

Evan Quan <[email protected]>
drm/amdgpu: move kfd suspend after ip_suspend_phase1

Matt Jolly <[email protected]>
USB: serial: qcserial: Add DW5816e support

Mika Westerberg <[email protected]>
thunderbolt: Check return value of tb_sw_read() in usb4_switch_op()


-------------

Diffstat:

Makefile | 4 +-
arch/arm/crypto/chacha-glue.c | 14 ++-
arch/arm/crypto/nhpoly1305-neon-glue.c | 2 +-
arch/arm/crypto/poly1305-glue.c | 15 ++-
arch/arm64/crypto/chacha-neon-glue.c | 14 ++-
arch/arm64/crypto/nhpoly1305-neon-glue.c | 2 +-
arch/arm64/crypto/poly1305-glue.c | 15 ++-
arch/arm64/kvm/guest.c | 7 ++
arch/arm64/mm/hugetlbpage.c | 2 +
arch/riscv/mm/init.c | 3 +-
arch/s390/kvm/priv.c | 4 +-
arch/x86/crypto/blake2s-glue.c | 10 +-
arch/x86/crypto/chacha_glue.c | 14 ++-
arch/x86/crypto/nhpoly1305-avx2-glue.c | 2 +-
arch/x86/crypto/nhpoly1305-sse2-glue.c | 2 +-
arch/x86/crypto/poly1305_glue.c | 13 ++-
arch/x86/entry/calling.h | 40 +++----
arch/x86/entry/entry_64.S | 14 +--
arch/x86/include/asm/kvm_host.h | 4 +-
arch/x86/include/asm/unwind.h | 2 +-
arch/x86/kernel/unwind_orc.c | 61 ++++++++---
arch/x86/kvm/ioapic.c | 10 +-
arch/x86/kvm/svm.c | 2 +-
arch/x86/kvm/vmx/vmenter.S | 3 +
arch/x86/mm/pat/set_memory.c | 12 ++-
block/blk-iocost.c | 117 +++++++++++++--------
drivers/amba/bus.c | 1 +
drivers/base/platform.c | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +-
.../gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 31 ++++--
drivers/gpu/drm/ingenic/ingenic-drm.c | 1 +
drivers/hid/usbhid/hid-core.c | 37 +++++--
drivers/hid/usbhid/usbhid.h | 1 +
drivers/hid/wacom_sys.c | 4 +-
drivers/hid/wacom_wac.c | 88 ++++------------
drivers/iommu/virtio-iommu.c | 2 +-
drivers/misc/mei/hw-me.c | 8 ++
drivers/misc/mei/hw-me.h | 4 +
drivers/misc/mei/pci-me.c | 2 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 20 ++--
drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 -
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h | 2 +-
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c | 10 +-
drivers/net/ethernet/cadence/macb_main.c | 24 ++---
drivers/net/ethernet/chelsio/cxgb4/sge.c | 39 ++++++-
.../net/ethernet/freescale/enetc/enetc_pci_mdio.c | 2 +-
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 3 +
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +
drivers/net/ethernet/mellanox/mlx4/main.c | 4 +-
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 6 +-
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 9 +-
.../ethernet/mellanox/mlx5/core/steering/dr_send.c | 14 ++-
.../ethernet/mellanox/mlxsw/spectrum_acl_tcam.c | 12 ++-
drivers/net/ethernet/netronome/nfp/abm/main.c | 1 +
drivers/net/ethernet/toshiba/tc35815.c | 2 +-
drivers/net/macsec.c | 3 +-
drivers/net/phy/dp83640.c | 2 +-
drivers/net/phy/marvell10g.c | 27 ++++-
drivers/net/usb/qmi_wwan.c | 1 +
drivers/net/wireguard/queueing.c | 4 +-
drivers/net/wireguard/receive.c | 8 +-
drivers/net/wireguard/send.c | 4 +
drivers/net/wireguard/socket.c | 12 ---
drivers/nvme/host/core.c | 28 +++--
drivers/staging/gasket/gasket_core.c | 4 +
drivers/thunderbolt/usb4.c | 3 +
drivers/tty/serial/xilinx_uartps.c | 1 +
drivers/tty/vt/vt.c | 9 +-
drivers/usb/chipidea/ci_hdrc_msm.c | 2 +-
drivers/usb/serial/garmin_gps.c | 4 +-
drivers/usb/serial/qcserial.c | 1 +
drivers/usb/storage/unusual_uas.h | 7 ++
fs/ceph/mds_client.c | 8 +-
fs/ceph/quota.c | 4 +-
fs/coredump.c | 8 ++
fs/eventpoll.c | 61 ++++++-----
fs/ext4/ext4_jbd2.h | 3 +
fs/ext4/super.c | 13 ++-
fs/io_uring.c | 6 --
fs/notify/fanotify/fanotify.c | 9 +-
fs/notify/inotify/inotify_fsnotify.c | 4 +-
fs/notify/inotify/inotify_user.c | 2 +-
include/linux/amba/bus.h | 1 +
include/linux/backing-dev-defs.h | 1 +
include/linux/backing-dev.h | 9 +-
include/linux/fsnotify_backend.h | 7 +-
include/linux/platform_device.h | 1 +
include/linux/virtio_net.h | 26 ++++-
include/net/inet_ecn.h | 57 +++++++++-
include/net/ip6_fib.h | 4 +
include/net/net_namespace.h | 7 ++
ipc/mqueue.c | 34 ++++--
kernel/trace/preemptirq_delay_test.c | 30 ++++--
kernel/trace/trace.c | 13 +++
kernel/trace/trace_boot.c | 20 ++--
kernel/trace/trace_kprobe.c | 8 +-
kernel/umh.c | 5 +
mm/backing-dev.c | 13 ++-
mm/memcontrol.c | 15 +--
mm/page_alloc.c | 9 ++
net/batman-adv/bat_v_ogm.c | 2 +-
net/batman-adv/network-coding.c | 9 +-
net/batman-adv/sysfs.c | 3 +-
net/bridge/br_netlink.c | 1 +
net/core/devlink.c | 12 ++-
net/core/neighbour.c | 6 +-
net/dsa/dsa2.c | 2 +-
net/dsa/master.c | 3 +-
net/ipv6/route.c | 25 +++++
net/netfilter/nf_nat_proto.c | 4 +-
net/netfilter/nfnetlink_osf.c | 12 ++-
net/sched/sch_choke.c | 3 +-
net/sched/sch_fq_codel.c | 2 +-
net/sched/sch_sfq.c | 9 ++
net/sched/sch_skbprio.c | 3 +
net/sctp/sm_statefuns.c | 6 +-
net/tipc/topsrv.c | 5 +-
net/tls/tls_sw.c | 7 +-
scripts/decodecode | 2 +-
tools/cgroup/iocost_monitor.py | 7 +-
tools/objtool/check.c | 2 +-
tools/testing/selftests/net/tcp_mmap.c | 7 +-
tools/testing/selftests/wireguard/netns.sh | 54 +++++++++-
virt/kvm/arm/hyp/aarch32.c | 8 +-
virt/kvm/arm/vgic/vgic-mmio.c | 4 +-
125 files changed, 964 insertions(+), 459 deletions(-)



2020-05-13 10:03:07

by Greg KH

Subject: [PATCH 5.6 004/118] drm/amdgpu: drop redundant cg/pg ungate on runpm enter

From: Evan Quan <[email protected]>

[ Upstream commit f7b52890daba570bc8162d43c96b5583bbdd4edd ]

CG/PG ungate is already performed in ip_suspend_phase1. Without this
change, the CG/PG ungate is performed twice, which causes gfxoff
disablement to happen twice on runpm enter while gfxoff enablement
happens only once on runpm exit. That leaves gfxoff in the disabled
state.

Fixes: b2a7e9735ab286 ("drm/amdgpu: fix the hw hang during perform system reboot and reset")
Signed-off-by: Evan Quan <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Cc: [email protected]
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 ---
1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 1ddf2460cf834..5fcbacddb9b0e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3325,9 +3325,6 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
}
}

- amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
- amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
-
amdgpu_ras_suspend(adev);

r = amdgpu_device_ip_suspend_phase1(adev);
--
2.20.1



2020-05-13 10:03:08

by Greg KH

Subject: [PATCH 5.6 007/118] ext4: don't set dioread_nolock by default for blocksize < pagesize

From: Ritesh Harjani <[email protected]>

commit 626b035b816b61a7a7b4d2205a6807e2f11a18c1 upstream.

Currently on calling echo 3 > drop_caches on host machine, we see
FS corruption in the guest. This happens on Power machine where
blocksize < pagesize.

So as a temporary workaound don't enable dioread_nolock by default
for blocksize < pagesize until we identify the root cause.

Also emit a warning msg in case if this mount option is manually
enabled for blocksize < pagesize.

Reported-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Ritesh Harjani <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Theodore Ts'o <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
fs/ext4/super.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 446158ab507dd..70796de7c4682 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2181,6 +2181,14 @@ static int parse_options(char *options, struct super_block *sb,
}
}
#endif
+ if (test_opt(sb, DIOREAD_NOLOCK)) {
+ int blocksize =
+ BLOCK_SIZE << le32_to_cpu(sbi->s_es->s_log_block_size);
+ if (blocksize < PAGE_SIZE)
+ ext4_msg(sb, KERN_WARNING, "Warning: mounting with an "
+ "experimental mount option 'dioread_nolock' "
+ "for blocksize < PAGE_SIZE");
+ }
return 1;
}

@@ -3787,7 +3795,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
set_opt(sb, NO_UID32);
/* xattr user namespace & acls are now defaulted on */
set_opt(sb, XATTR_USER);
- set_opt(sb, DIOREAD_NOLOCK);
#ifdef CONFIG_EXT4_FS_POSIX_ACL
set_opt(sb, POSIX_ACL);
#endif
@@ -3837,6 +3844,10 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;

blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
+
+ if (blocksize == PAGE_SIZE)
+ set_opt(sb, DIOREAD_NOLOCK);
+
if (blocksize < EXT4_MIN_BLOCK_SIZE ||
blocksize > EXT4_MAX_BLOCK_SIZE) {
ext4_msg(sb, KERN_ERR,
--
2.20.1
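
[ Editor's note: a small worked example of the block-size computation
used above; not part of the patch. ext4 stores the block size as a log2
shift applied to BLOCK_SIZE (1024): ]

	blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
	/* e.g. s_log_block_size == 2 gives 1024 << 2 == 4096, which equals
	 * PAGE_SIZE on 4 KiB-page systems (so dioread_nolock stays enabled),
	 * but is smaller than PAGE_SIZE on 64 KiB-page Power machines,
	 * the blocksize < pagesize case this workaround targets. */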



2020-05-13 10:03:11

by Greg KH

Subject: [PATCH 5.6 021/118] net: bridge: vlan: Add a schedule point during VLAN processing

From: Ido Schimmel <[email protected]>

[ Upstream commit 7979457b1d3a069cd857f5bd69e070e30223dd0c ]

User space can request to delete a range of VLANs from a bridge slave in
one netlink request. For each deleted VLAN the FDB needs to be traversed
in order to flush all the affected entries.

If a large range of VLANs is deleted and the number of FDB entries is
large or the FDB lock is contended, it is possible for the kernel to
loop through the deleted VLANs for a long time. In case preemption is
disabled, this can result in a soft lockup.

Fix this by adding a schedule point after each VLAN is deleted to yield
the CPU, if needed. This is safe because the VLANs are traversed in
process context.

Fixes: bdced7ef7838 ("bridge: support for multiple vlans and vlan ranges in setlink and dellink requests")
Signed-off-by: Ido Schimmel <[email protected]>
Reported-by: Stefan Priebe - Profihost AG <[email protected]>
Tested-by: Stefan Priebe - Profihost AG <[email protected]>
Acked-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/bridge/br_netlink.c | 1 +
1 file changed, 1 insertion(+)

--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -612,6 +612,7 @@ int br_process_vlan_info(struct net_brid
v - 1, rtm_cmd);
v_change_start = 0;
}
+ cond_resched();
}
/* v_change_start is set only if the last/whole range changed */
if (v_change_start)
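
[ Editor's note: a minimal sketch of the pattern applied above, not part
of the patch; flush_one_vlan() is a hypothetical stand-in for the
per-VLAN work. A long loop running in process context calls
cond_resched() on every iteration so the CPU can be yielded on
non-preemptible kernels. ]

#include <linux/types.h>
#include <linux/sched.h>

static void flush_one_vlan(u16 vid);	/* hypothetical per-VLAN work */

static void flush_vlan_range(u16 start, u16 end)
{
	u16 vid;

	for (vid = start; vid <= end; vid++) {
		flush_one_vlan(vid);
		cond_resched();		/* voluntary schedule point */
	}
}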


2020-05-13 10:03:11

by Greg KH

Subject: [PATCH 5.6 022/118] net: dsa: Do not leave DSA master with NULL netdev_ops

From: Florian Fainelli <[email protected]>

[ Upstream commit 050569fc8384c8056bacefcc246bcb2dfe574936 ]

When ndo_get_phys_port_name() for the CPU port was added, we introduced
an early check in dsa_master_ndo_setup() for when the DSA master network
device already implements ndo_get_phys_port_name(). When we perform the
teardown operation in dsa_master_ndo_teardown(), we were not checking
that cpu_dp->orig_ndo_ops was successfully allocated and initialized to
a non-NULL value.

With network device drivers such as virtio_net, this leads to a NULL
pointer dereference as soon as the DSA switch hanging off of it gets
torn down, because we are now assigning the virtio_net device's
netdev_ops a NULL pointer.

Fixes: da7b9e9b00d4 ("net: dsa: Add ndo_get_phys_port_name() for CPU port")
Reported-by: Allen Pais <[email protected]>
Signed-off-by: Florian Fainelli <[email protected]>
Tested-by: Allen Pais <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/dsa/master.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/net/dsa/master.c
+++ b/net/dsa/master.c
@@ -289,7 +289,8 @@ static void dsa_master_ndo_teardown(stru
{
struct dsa_port *cpu_dp = dev->dsa_ptr;

- dev->netdev_ops = cpu_dp->orig_ndo_ops;
+ if (cpu_dp->orig_ndo_ops)
+ dev->netdev_ops = cpu_dp->orig_ndo_ops;
cpu_dp->orig_ndo_ops = NULL;
}



2020-05-13 10:03:12

by Greg KH

Subject: [PATCH 5.6 020/118] neigh: send protocol value in neighbor create notification

From: Roman Mashak <[email protected]>

[ Upstream commit 38212bb31fe923d0a2c6299bd2adfbb84cddef2a ]

When a new neighbor entry has been added, an event is generated, but it does
not include the protocol, because its value is assigned after the event
notification routine has run. So move the protocol assignment code earlier.

Fixes: df9b0e30d44c ("neighbor: Add protocol attribute")
Cc: David Ahern <[email protected]>
Signed-off-by: Roman Mashak <[email protected]>
Reviewed-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/core/neighbour.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -1954,6 +1954,9 @@ static int neigh_add(struct sk_buff *skb
NEIGH_UPDATE_F_OVERRIDE_ISROUTER);
}

+ if (protocol)
+ neigh->protocol = protocol;
+
if (ndm->ndm_flags & NTF_EXT_LEARNED)
flags |= NEIGH_UPDATE_F_EXT_LEARNED;

@@ -1967,9 +1970,6 @@ static int neigh_add(struct sk_buff *skb
err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags,
NETLINK_CB(skb).portid, extack);

- if (protocol)
- neigh->protocol = protocol;
-
neigh_release(neigh);

out:


2020-05-13 10:03:15

by Greg KH

Subject: [PATCH 5.6 055/118] wireguard: socket: remove errant restriction on looping to self

From: "Jason A. Donenfeld" <[email protected]>

[ Upstream commit b673e24aad36981f327a6570412ffa7754de8911 ]

It's already possible to create two different interfaces and loop
packets between them. This has always been possible with tunnels in the
kernel, and isn't specific to wireguard. Therefore, the networking stack
already needs to deal with that. At the very least, the packet winds up
exceeding the MTU and is discarded at that point. So, since this is
already something that happens, there's no need to forbid the not very
exceptional case of routing a packet back to the same interface; this
loop is no different than others, and we shouldn't special case it, but
rather rely on generic handling of loops in general. This also makes it
easier to do interesting things with wireguard such as onion routing.

At the same time, we add a selftest for this, ensuring that both onion
routing works and infinite routing loops do not crash the kernel. We
also add a test case for wireguard interfaces nesting packets and
sending traffic between each other, as well as the loop in this case
too. We make sure to send some throughput-heavy traffic for this use
case, to stress out any possible recursion issues with the locks around
workqueues.

Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/wireguard/socket.c | 12 ------
tools/testing/selftests/wireguard/netns.sh | 54 +++++++++++++++++++++++++++--
2 files changed, 51 insertions(+), 15 deletions(-)

--- a/drivers/net/wireguard/socket.c
+++ b/drivers/net/wireguard/socket.c
@@ -76,12 +76,6 @@ static int send4(struct wg_device *wg, s
net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
wg->dev->name, &endpoint->addr, ret);
goto err;
- } else if (unlikely(rt->dst.dev == skb->dev)) {
- ip_rt_put(rt);
- ret = -ELOOP;
- net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n",
- wg->dev->name, &endpoint->addr);
- goto err;
}
if (cache)
dst_cache_set_ip4(cache, &rt->dst, fl.saddr);
@@ -149,12 +143,6 @@ static int send6(struct wg_device *wg, s
net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
wg->dev->name, &endpoint->addr, ret);
goto err;
- } else if (unlikely(dst->dev == skb->dev)) {
- dst_release(dst);
- ret = -ELOOP;
- net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n",
- wg->dev->name, &endpoint->addr);
- goto err;
}
if (cache)
dst_cache_set_ip6(cache, dst, &fl.saddr);
--- a/tools/testing/selftests/wireguard/netns.sh
+++ b/tools/testing/selftests/wireguard/netns.sh
@@ -48,8 +48,11 @@ cleanup() {
exec 2>/dev/null
printf "$orig_message_cost" > /proc/sys/net/core/message_cost
ip0 link del dev wg0
+ ip0 link del dev wg1
ip1 link del dev wg0
+ ip1 link del dev wg1
ip2 link del dev wg0
+ ip2 link del dev wg1
local to_kill="$(ip netns pids $netns0) $(ip netns pids $netns1) $(ip netns pids $netns2)"
[[ -n $to_kill ]] && kill $to_kill
pp ip netns del $netns1
@@ -77,18 +80,20 @@ ip0 link set wg0 netns $netns2
key1="$(pp wg genkey)"
key2="$(pp wg genkey)"
key3="$(pp wg genkey)"
+key4="$(pp wg genkey)"
pub1="$(pp wg pubkey <<<"$key1")"
pub2="$(pp wg pubkey <<<"$key2")"
pub3="$(pp wg pubkey <<<"$key3")"
+pub4="$(pp wg pubkey <<<"$key4")"
psk="$(pp wg genpsk)"
[[ -n $key1 && -n $key2 && -n $psk ]]

configure_peers() {
ip1 addr add 192.168.241.1/24 dev wg0
- ip1 addr add fd00::1/24 dev wg0
+ ip1 addr add fd00::1/112 dev wg0

ip2 addr add 192.168.241.2/24 dev wg0
- ip2 addr add fd00::2/24 dev wg0
+ ip2 addr add fd00::2/112 dev wg0

n1 wg set wg0 \
private-key <(echo "$key1") \
@@ -230,9 +235,38 @@ n1 ping -W 1 -c 1 192.168.241.2
n1 wg set wg0 private-key <(echo "$key3")
n2 wg set wg0 peer "$pub3" preshared-key <(echo "$psk") allowed-ips 192.168.241.1/32 peer "$pub1" remove
n1 ping -W 1 -c 1 192.168.241.2
+n2 wg set wg0 peer "$pub3" remove

-ip1 link del wg0
+# Test that we can route wg through wg
+ip1 addr flush dev wg0
+ip2 addr flush dev wg0
+ip1 addr add fd00::5:1/112 dev wg0
+ip2 addr add fd00::5:2/112 dev wg0
+n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" preshared-key <(echo "$psk") allowed-ips fd00::5:2/128 endpoint 127.0.0.1:2
+n2 wg set wg0 private-key <(echo "$key2") listen-port 2 peer "$pub1" preshared-key <(echo "$psk") allowed-ips fd00::5:1/128 endpoint 127.212.121.99:9998
+ip1 link add wg1 type wireguard
+ip2 link add wg1 type wireguard
+ip1 addr add 192.168.241.1/24 dev wg1
+ip1 addr add fd00::1/112 dev wg1
+ip2 addr add 192.168.241.2/24 dev wg1
+ip2 addr add fd00::2/112 dev wg1
+ip1 link set mtu 1340 up dev wg1
+ip2 link set mtu 1340 up dev wg1
+n1 wg set wg1 listen-port 5 private-key <(echo "$key3") peer "$pub4" allowed-ips 192.168.241.2/32,fd00::2/128 endpoint [fd00::5:2]:5
+n2 wg set wg1 listen-port 5 private-key <(echo "$key4") peer "$pub3" allowed-ips 192.168.241.1/32,fd00::1/128 endpoint [fd00::5:1]:5
+tests
+# Try to set up a routing loop between the two namespaces
+ip1 link set netns $netns0 dev wg1
+ip0 addr add 192.168.241.1/24 dev wg1
+ip0 link set up dev wg1
+n0 ping -W 1 -c 1 192.168.241.2
+n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7
ip2 link del wg0
+ip2 link del wg1
+! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel
+
+ip0 link del wg1
+ip1 link del wg0

# Test using NAT. We now change the topology to this:
# ┌────────────────────────────────────────┐ ┌────────────────────────────────────────────────┐ ┌────────────────────────────────────────┐
@@ -282,6 +316,20 @@ pp sleep 3
n2 ping -W 1 -c 1 192.168.241.1
n1 wg set wg0 peer "$pub2" persistent-keepalive 0

+# Test that onion routing works, even when it loops
+n1 wg set wg0 peer "$pub3" allowed-ips 192.168.242.2/32 endpoint 192.168.241.2:5
+ip1 addr add 192.168.242.1/24 dev wg0
+ip2 link add wg1 type wireguard
+ip2 addr add 192.168.242.2/24 dev wg1
+n2 wg set wg1 private-key <(echo "$key3") listen-port 5 peer "$pub1" allowed-ips 192.168.242.1/32
+ip2 link set wg1 up
+n1 ping -W 1 -c 1 192.168.242.2
+ip2 link del wg1
+n1 wg set wg0 peer "$pub3" endpoint 192.168.242.2:5
+! n1 ping -W 1 -c 1 192.168.242.2 || false # Should not crash kernel
+n1 wg set wg0 peer "$pub3" remove
+ip1 addr del 192.168.242.1/24 dev wg0
+
# Do a wg-quick(8)-style policy routing for the default route, making sure vethc has a v6 address to tease out bugs.
ip1 -6 addr add fc00::9/96 dev vethc
ip1 -6 route add default via fc00::1


2020-05-13 10:03:45

by Greg KH

Subject: [PATCH 5.6 074/118] KVM: arm64: Fix 32bit PC wrap-around

From: Marc Zyngier <[email protected]>

commit 0225fd5e0a6a32af7af0aefac45c8ebf19dc5183 upstream.

In the unlikely event that a 32bit vcpu traps into the hypervisor
on an instruction that is located right at the end of the 32bit
range, the emulation of that instruction is going to increment
PC past the 32bit range. This isn't great, as userspace can then
observe this value and get a bit confused.

Conversely, userspace can do things like (in the context of a 64bit
guest that is capable of 32bit EL0) setting PSTATE to AArch64-EL0,
set PC to a 64bit value, change PSTATE to AArch32-USR, and observe
that PC hasn't been truncated. More confusion.

Fix both by:
- truncating PC increments for 32bit guests
- sanitizing all 32bit regs every time a core reg is changed by
userspace, and that PSTATE indicates a 32bit mode.

Cc: [email protected]
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm64/kvm/guest.c | 7 +++++++
virt/kvm/arm/hyp/aarch32.c | 8 ++++++--
2 files changed, 13 insertions(+), 2 deletions(-)

--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -201,6 +201,13 @@ static int set_core_reg(struct kvm_vcpu
}

memcpy((u32 *)regs + off, valp, KVM_REG_SIZE(reg->id));
+
+ if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) {
+ int i;
+
+ for (i = 0; i < 16; i++)
+ *vcpu_reg32(vcpu, i) = (u32)*vcpu_reg32(vcpu, i);
+ }
out:
return err;
}
--- a/virt/kvm/arm/hyp/aarch32.c
+++ b/virt/kvm/arm/hyp/aarch32.c
@@ -125,12 +125,16 @@ static void __hyp_text kvm_adjust_itstat
*/
void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
{
+ u32 pc = *vcpu_pc(vcpu);
bool is_thumb;

is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
if (is_thumb && !is_wide_instr)
- *vcpu_pc(vcpu) += 2;
+ pc += 2;
else
- *vcpu_pc(vcpu) += 4;
+ pc += 4;
+
+ *vcpu_pc(vcpu) = pc;
+
kvm_adjust_itstate(vcpu);
}
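
[ Editor's note: a tiny illustration of why routing the increment
through a u32 local fixes the wrap-around; not part of the patch. The
addition is truncated to 32 bits before being written back into the
64-bit PC register: ]

	u64 pc64 = 0xfffffffeULL;	/* PC near the top of the 32bit range */
	u32 pc = pc64;

	pc += 4;			/* wraps to 0x00000002 */
	pc64 = pc;			/* truncated value written back */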


2020-05-13 10:03:52

by Greg KH

Subject: [PATCH 5.6 010/118] nvme: fix possible hang when ns scanning fails during error recovery

From: Sagi Grimberg <[email protected]>

[ Upstream commit 59c7c3caaaf8750df4ec3255082f15eb4e371514 ]

When the controller is reconnecting, the host fails I/O and admin
commands as the host cannot reach the controller. ns scanning may
revalidate namespaces during that period and it is wrong to remove
namespaces due to these failures as we may hang (see 205da2434301).

One command that may fail is nvme_identify_ns_descs. Since we return
success because the ns identify descriptor list is optional, we continue
to compare ns identifiers in nvme_revalidate_disk, which obviously fails
and returns -ENODEV to nvme_validate_ns, which will remove the namespace.

Exactly what we don't want to happen.

Fixes: 22802bf742c2 ("nvme: Namepace identification descriptor list is optional")
Tested-by: Anton Eidelman <[email protected]>
Signed-off-by: Sagi Grimberg <[email protected]>
Reviewed-by: Keith Busch <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/nvme/host/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 545e9e5f1b737..84f20369d8467 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1082,7 +1082,7 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
* Don't treat an error as fatal, as we potentially already
* have a NGUID or EUI-64.
*/
- if (status > 0)
+ if (status > 0 && !(status & NVME_SC_DNR))
status = 0;
goto free_data;
}
--
2.20.1
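
[ Editor's note: a brief sketch of the convention the one-line change
relies on; not part of the patch. nvme_submit_sync_cmd() returns a
negative errno for transport/kernel failures (e.g. while the controller
is unreachable during recovery) and a positive NVMe completion status
when the device itself answered; NVME_SC_DNR is the "do not retry" flag
bit within that status. ]

	if (status > 0 && !(status & NVME_SC_DNR))
		status = 0;	/* device answered, error is ignorable */
	/* a negative errno (or a DNR-flagged status) now propagates */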



2020-05-13 10:03:56

by Greg KH

Subject: [PATCH 5.6 014/118] devlink: Fix reporter's recovery condition

From: Aya Levin <[email protected]>

[ Upstream commit bea0c5c942d3b4e9fb6ed45f6a7de74c6b112437 ]

Devlink health core makes the reporter's recovery conditional on the
expiration of the grace period. This is not relevant for the first
recovery. Explicitly require that the grace period only applies to
recoveries other than the first.

Fixes: c8e1da0bf923 ("devlink: Add health report functionality")
Signed-off-by: Aya Levin <[email protected]>
Reviewed-by: Moshe Shemesh <[email protected]>
Reviewed-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/core/devlink.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

--- a/net/core/devlink.c
+++ b/net/core/devlink.c
@@ -5029,6 +5029,7 @@ int devlink_health_report(struct devlink
{
enum devlink_health_reporter_state prev_health_state;
struct devlink *devlink = reporter->devlink;
+ unsigned long recover_ts_threshold;

/* write a log message of the current error */
WARN_ON(!msg);
@@ -5039,10 +5040,12 @@ int devlink_health_report(struct devlink
devlink_recover_notify(reporter, DEVLINK_CMD_HEALTH_REPORTER_RECOVER);

/* abort if the previous error wasn't recovered */
+ recover_ts_threshold = reporter->last_recovery_ts +
+ msecs_to_jiffies(reporter->graceful_period);
if (reporter->auto_recover &&
(prev_health_state != DEVLINK_HEALTH_REPORTER_STATE_HEALTHY ||
- jiffies - reporter->last_recovery_ts <
- msecs_to_jiffies(reporter->graceful_period))) {
+ (reporter->last_recovery_ts && reporter->recovery_count &&
+ time_is_after_jiffies(recover_ts_threshold)))) {
trace_devlink_health_recover_aborted(devlink,
reporter->ops->name,
reporter->health_state,
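
[ Editor's note: a minimal sketch, with hypothetical names, of the
jiffies comparison used above; not part of the patch.
time_is_after_jiffies(t) expands to time_before(jiffies, t), i.e. it is
true while the deadline t still lies in the future, so auto-recovery is
suppressed only while the grace window opened by the previous recovery
is still running. ]

#include <linux/jiffies.h>

static bool within_grace_period(unsigned long last_recovery_ts,
				unsigned int recovery_count, u64 grace_ms)
{
	unsigned long threshold = last_recovery_ts +
				  msecs_to_jiffies(grace_ms);

	/* the very first recovery (count == 0) is never delayed */
	return recovery_count && time_is_after_jiffies(threshold);
}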


2020-05-13 10:04:03

by Greg KH

Subject: [PATCH 5.6 060/118] HID: usbhid: Fix race between usbhid_close() and usbhid_stop()

From: Alan Stern <[email protected]>

commit 0ed08faded1da03eb3def61502b27f81aef2e615 upstream.

The syzbot fuzzer discovered a bad race in the usbhid driver
between usbhid_stop() and usbhid_close(). In particular,
usbhid_stop() does:

usb_free_urb(usbhid->urbin);
...
usbhid->urbin = NULL; /* don't mess up next start */

and usbhid_close() does:

usb_kill_urb(usbhid->urbin);

with no mutual exclusion. If the two routines happen to run
concurrently so that usb_kill_urb() is called in between the
usb_free_urb() and the NULL assignment, it will access the
deallocated urb structure -- a use-after-free bug.

This patch adds a mutex to the usbhid private structure and uses it to
enforce mutual exclusion of the usbhid_start(), usbhid_stop(),
usbhid_open() and usbhid_close() callbacks.

Reported-and-tested-by: [email protected]
Signed-off-by: Alan Stern <[email protected]>
CC: <[email protected]>
Signed-off-by: Jiri Kosina <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/hid/usbhid/hid-core.c | 37 +++++++++++++++++++++++++++++--------
drivers/hid/usbhid/usbhid.h | 1 +
2 files changed, 30 insertions(+), 8 deletions(-)

--- a/drivers/hid/usbhid/hid-core.c
+++ b/drivers/hid/usbhid/hid-core.c
@@ -682,16 +682,21 @@ static int usbhid_open(struct hid_device
struct usbhid_device *usbhid = hid->driver_data;
int res;

+ mutex_lock(&usbhid->mutex);
+
set_bit(HID_OPENED, &usbhid->iofl);

- if (hid->quirks & HID_QUIRK_ALWAYS_POLL)
- return 0;
+ if (hid->quirks & HID_QUIRK_ALWAYS_POLL) {
+ res = 0;
+ goto Done;
+ }

res = usb_autopm_get_interface(usbhid->intf);
/* the device must be awake to reliably request remote wakeup */
if (res < 0) {
clear_bit(HID_OPENED, &usbhid->iofl);
- return -EIO;
+ res = -EIO;
+ goto Done;
}

usbhid->intf->needs_remote_wakeup = 1;
@@ -725,6 +730,9 @@ static int usbhid_open(struct hid_device
msleep(50);

clear_bit(HID_RESUME_RUNNING, &usbhid->iofl);
+
+ Done:
+ mutex_unlock(&usbhid->mutex);
return res;
}

@@ -732,6 +740,8 @@ static void usbhid_close(struct hid_devi
{
struct usbhid_device *usbhid = hid->driver_data;

+ mutex_lock(&usbhid->mutex);
+
/*
* Make sure we don't restart data acquisition due to
* a resumption we no longer care about by avoiding racing
@@ -743,12 +753,13 @@ static void usbhid_close(struct hid_devi
clear_bit(HID_IN_POLLING, &usbhid->iofl);
spin_unlock_irq(&usbhid->lock);

- if (hid->quirks & HID_QUIRK_ALWAYS_POLL)
- return;
+ if (!(hid->quirks & HID_QUIRK_ALWAYS_POLL)) {
+ hid_cancel_delayed_stuff(usbhid);
+ usb_kill_urb(usbhid->urbin);
+ usbhid->intf->needs_remote_wakeup = 0;
+ }

- hid_cancel_delayed_stuff(usbhid);
- usb_kill_urb(usbhid->urbin);
- usbhid->intf->needs_remote_wakeup = 0;
+ mutex_unlock(&usbhid->mutex);
}

/*
@@ -1057,6 +1068,8 @@ static int usbhid_start(struct hid_devic
unsigned int n, insize = 0;
int ret;

+ mutex_lock(&usbhid->mutex);
+
clear_bit(HID_DISCONNECTED, &usbhid->iofl);

usbhid->bufsize = HID_MIN_BUFFER_SIZE;
@@ -1177,6 +1190,8 @@ static int usbhid_start(struct hid_devic
usbhid_set_leds(hid);
device_set_wakeup_enable(&dev->dev, 1);
}
+
+ mutex_unlock(&usbhid->mutex);
return 0;

fail:
@@ -1187,6 +1202,7 @@ fail:
usbhid->urbout = NULL;
usbhid->urbctrl = NULL;
hid_free_buffers(dev, hid);
+ mutex_unlock(&usbhid->mutex);
return ret;
}

@@ -1202,6 +1218,8 @@ static void usbhid_stop(struct hid_devic
usbhid->intf->needs_remote_wakeup = 0;
}

+ mutex_lock(&usbhid->mutex);
+
clear_bit(HID_STARTED, &usbhid->iofl);
spin_lock_irq(&usbhid->lock); /* Sync with error and led handlers */
set_bit(HID_DISCONNECTED, &usbhid->iofl);
@@ -1222,6 +1240,8 @@ static void usbhid_stop(struct hid_devic
usbhid->urbout = NULL;

hid_free_buffers(hid_to_usb_dev(hid), hid);
+
+ mutex_unlock(&usbhid->mutex);
}

static int usbhid_power(struct hid_device *hid, int lvl)
@@ -1382,6 +1402,7 @@ static int usbhid_probe(struct usb_inter
INIT_WORK(&usbhid->reset_work, hid_reset);
timer_setup(&usbhid->io_retry, hid_retry_timeout, 0);
spin_lock_init(&usbhid->lock);
+ mutex_init(&usbhid->mutex);

ret = hid_add_device(hid);
if (ret) {
--- a/drivers/hid/usbhid/usbhid.h
+++ b/drivers/hid/usbhid/usbhid.h
@@ -80,6 +80,7 @@ struct usbhid_device {
dma_addr_t outbuf_dma; /* Output buffer dma */
unsigned long last_out; /* record of last output for timeouts */

+ struct mutex mutex; /* start/stop/open/close */
spinlock_t lock; /* fifo spinlock */
unsigned long iofl; /* I/O flags (CTRL_RUNNING, OUT_RUNNING) */
struct timer_list io_retry; /* Retry timer */


2020-05-13 10:04:38

by Greg KH

Subject: [PATCH 5.6 039/118] tipc: fix partial topology connection closure

From: Tuong Lien <[email protected]>

[ Upstream commit 980d69276f3048af43a045be2925dacfb898a7be ]

When an application connects to the TIPC topology server and subscribes
to some services, a new connection is created along with some objects -
'tipc_subscription' - to store the related data.
However, there is one omission in the connection handling: when the
connection or application is shut down in an orderly way (e.g. via
SIGQUIT, etc.), the connection is not closed in the kernel and the
'tipc_subscription' objects are not freed either.
This results in:
- The maximum number of subscriptions (65535) will be reached soon, and
new subscriptions will be rejected;
- The TIPC module cannot be removed (unless the objects are somehow
forced to be released first);

The commit fixes the issue by closing the connection if 'recvmsg()'
returns '0', i.e. when the peer is shut down gracefully. It also covers
the other unexpected cases.

Acked-by: Jon Maloy <[email protected]>
Acked-by: Ying Xue <[email protected]>
Signed-off-by: Tuong Lien <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/tipc/topsrv.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

--- a/net/tipc/topsrv.c
+++ b/net/tipc/topsrv.c
@@ -402,10 +402,11 @@ static int tipc_conn_rcv_from_sock(struc
read_lock_bh(&sk->sk_callback_lock);
ret = tipc_conn_rcv_sub(srv, con, &s);
read_unlock_bh(&sk->sk_callback_lock);
+ if (!ret)
+ return 0;
}
- if (ret < 0)
- tipc_conn_close(con);

+ tipc_conn_close(con);
return ret;
}



2020-05-13 10:04:47

by Greg KH

Subject: [PATCH 5.6 001/118] thunderbolt: Check return value of tb_sw_read() in usb4_switch_op()

From: Mika Westerberg <[email protected]>

commit c3bf9930921b33edb31909006607e478751a6f5e upstream.

The function misses checking the return value of tb_sw_read() before it
accesses the value that was read. Fix this by checking the return value
first.

Fixes: b04079837b20 ("thunderbolt: Add initial support for USB4")
Signed-off-by: Mika Westerberg <[email protected]>
Reviewed-by: Yehezkel Bernat <[email protected]>
Cc: stable <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/thunderbolt/usb4.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -182,6 +182,9 @@ static int usb4_switch_op(struct tb_swit
return ret;

ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1);
+ if (ret)
+ return ret;
+
if (val & ROUTER_CS_26_ONS)
return -EOPNOTSUPP;



2020-05-13 10:05:22

by Greg KH

Subject: [PATCH 5.6 019/118] mlxsw: spectrum_acl_tcam: Position vchunk in a vregion list properly

From: Jiri Pirko <[email protected]>

[ Upstream commit 6ef4889fc0b3aa6ab928e7565935ac6f762cee6e ]

Vregion helpers to get min and max priority depend on the correct
ordering of vchunks in the vregion list. However, the current code
always adds a new chunk to the end of the list, no matter what its
priority is. Fix this by finding the correct place in the list and
putting the vchunk there.

Fixes: 22a677661f56 ("mlxsw: spectrum: Introduce ACL core with simple TCAM implementation")
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: Ido Schimmel <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
@@ -986,8 +986,9 @@ mlxsw_sp_acl_tcam_vchunk_create(struct m
unsigned int priority,
struct mlxsw_afk_element_usage *elusage)
{
+ struct mlxsw_sp_acl_tcam_vchunk *vchunk, *vchunk2;
struct mlxsw_sp_acl_tcam_vregion *vregion;
- struct mlxsw_sp_acl_tcam_vchunk *vchunk;
+ struct list_head *pos;
int err;

if (priority == MLXSW_SP_ACL_TCAM_CATCHALL_PRIO)
@@ -1025,7 +1026,14 @@ mlxsw_sp_acl_tcam_vchunk_create(struct m
}

mlxsw_sp_acl_tcam_rehash_ctx_vregion_changed(vregion);
- list_add_tail(&vchunk->list, &vregion->vchunk_list);
+
+ /* Position the vchunk inside the list according to priority */
+ list_for_each(pos, &vregion->vchunk_list) {
+ vchunk2 = list_entry(pos, typeof(*vchunk2), list);
+ if (vchunk2->priority > priority)
+ break;
+ }
+ list_add_tail(&vchunk->list, pos);
mutex_unlock(&vregion->lock);

return vchunk;
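
[ Editor's note: a minimal sketch, with hypothetical names, of the
sorted-insert idiom used above; not part of the patch.
list_add_tail(new, pos) links new immediately before pos, so breaking at
the first higher-priority entry and inserting there keeps the list in
ascending priority order; if no such entry exists, pos ends up back at
the list head and the new entry is simply appended. ]

#include <linux/list.h>

struct item {
	struct list_head list;
	unsigned int priority;
};

static void insert_sorted(struct list_head *head, struct item *new)
{
	struct item *cur;
	struct list_head *pos;

	list_for_each(pos, head) {
		cur = list_entry(pos, struct item, list);
		if (cur->priority > new->priority)
			break;
	}
	list_add_tail(&new->list, pos);
}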


2020-05-13 11:15:28

by Greg KH

Subject: [PATCH 5.6 018/118] ipv6: Use global sernum for dst validation with nexthop objects

From: David Ahern <[email protected]>

[ Upstream commit 8f34e53b60b337e559f1ea19e2780ff95ab2fa65 ]

Nik reported a bug with pcpu dst cache when nexthop objects are
used, illustrated by the following:
$ ip netns add foo
$ ip -netns foo li set lo up
$ ip -netns foo addr add 2001:db8:11::1/128 dev lo
$ ip netns exec foo sysctl net.ipv6.conf.all.forwarding=1
$ ip li add veth1 type veth peer name veth2
$ ip li set veth1 up
$ ip addr add 2001:db8:10::1/64 dev veth1
$ ip li set dev veth2 netns foo
$ ip -netns foo li set veth2 up
$ ip -netns foo addr add 2001:db8:10::2/64 dev veth2
$ ip -6 nexthop add id 100 via 2001:db8:10::2 dev veth1
$ ip -6 route add 2001:db8:11::1/128 nhid 100

Create a pcpu entry on cpu 0:
$ taskset -a -c 0 ip -6 route get 2001:db8:11::1

Re-add the route entry:
$ ip -6 ro del 2001:db8:11::1
$ ip -6 route add 2001:db8:11::1/128 nhid 100

Route get on cpu 0 returns the stale pcpu:
$ taskset -a -c 0 ip -6 route get 2001:db8:11::1
RTNETLINK answers: Network is unreachable

While cpu 1 works:
$ taskset -a -c 1 ip -6 route get 2001:db8:11::1
2001:db8:11::1 from :: via 2001:db8:10::2 dev veth1 src 2001:db8:10::1 metric 1024 pref medium

Conversion of FIB entries to work with external nexthop objects
missed an important difference between IPv4 and IPv6 - how dst
entries are invalidated when the FIB changes. IPv4 has a per-network
namespace generation id (rt_genid) that is bumped on changes to the FIB.
Checking if a dst_entry is still valid means comparing rt_genid in the
rtable to the current value of rt_genid for the namespace.

IPv6 also has a per network namespace counter, fib6_sernum, but the
count is saved per fib6_node. With the per-node counter only dst_entries
based on fib entries under the node are invalidated when changes are
made to the routes - limiting the scope of invalidations. IPv6 uses a
reference in the rt6_info, 'from', to track the corresponding fib entry
used to create the dst_entry. When validating a dst_entry, the 'from'
is used to backtrack to the fib6_node and check the sernum of it to the
cookie passed to the dst_check operation.

With the inline format (nexthop definition inline with the fib6_info),
dst_entries cached in the fib6_nh have a 1:1 correlation between fib
entries, nexthop data and dst_entries. With external nexthops, IPv6
looks more like IPv4 which means multiple fib entries across disparate
fib6_nodes can all reference the same fib6_nh. That means validation
of dst_entries based on external nexthops needs to use the IPv4 format
- the per-network namespace counter.

Add sernum to rt6_info and set it when creating a pcpu dst entry. Update
rt6_get_cookie to return the sernum if it is set, and update dst_check for
IPv6 to base the check on the sernum when it is set. Finally,
rt6_get_pcpu_route needs to validate the cached entry before returning
a pcpu entry (similar to the rt_cache_valid calls in __mkroute_input and
__mkroute_output for IPv4).

This problem only affects routes using the new, external nexthops.

Thanks to the kbuild test robot for catching the IS_ENABLED needed
around rt_genid_ipv6 before I sent this out.

Fixes: 5b98324ebe29 ("ipv6: Allow routes to use nexthop objects")
Reported-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: David Ahern <[email protected]>
Reviewed-by: Nikolay Aleksandrov <[email protected]>
Tested-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/net/ip6_fib.h | 4 ++++
include/net/net_namespace.h | 7 +++++++
net/ipv6/route.c | 25 +++++++++++++++++++++++++
3 files changed, 36 insertions(+)

--- a/include/net/ip6_fib.h
+++ b/include/net/ip6_fib.h
@@ -204,6 +204,7 @@ struct fib6_info {
struct rt6_info {
struct dst_entry dst;
struct fib6_info __rcu *from;
+ int sernum;

struct rt6key rt6i_dst;
struct rt6key rt6i_src;
@@ -292,6 +293,9 @@ static inline u32 rt6_get_cookie(const s
struct fib6_info *from;
u32 cookie = 0;

+ if (rt->sernum)
+ return rt->sernum;
+
rcu_read_lock();

from = rcu_dereference(rt->from);
--- a/include/net/net_namespace.h
+++ b/include/net/net_namespace.h
@@ -432,6 +432,13 @@ static inline int rt_genid_ipv4(const st
return atomic_read(&net->ipv4.rt_genid);
}

+#if IS_ENABLED(CONFIG_IPV6)
+static inline int rt_genid_ipv6(const struct net *net)
+{
+ return atomic_read(&net->ipv6.fib6_sernum);
+}
+#endif
+
static inline void rt_genid_bump_ipv4(struct net *net)
{
atomic_inc(&net->ipv4.rt_genid);
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -1388,9 +1388,18 @@ static struct rt6_info *ip6_rt_pcpu_allo
}
ip6_rt_copy_init(pcpu_rt, res);
pcpu_rt->rt6i_flags |= RTF_PCPU;
+
+ if (f6i->nh)
+ pcpu_rt->sernum = rt_genid_ipv6(dev_net(dev));
+
return pcpu_rt;
}

+static bool rt6_is_valid(const struct rt6_info *rt6)
+{
+ return rt6->sernum == rt_genid_ipv6(dev_net(rt6->dst.dev));
+}
+
/* It should be called with rcu_read_lock() acquired */
static struct rt6_info *rt6_get_pcpu_route(const struct fib6_result *res)
{
@@ -1398,6 +1407,19 @@ static struct rt6_info *rt6_get_pcpu_rou

pcpu_rt = this_cpu_read(*res->nh->rt6i_pcpu);

+ if (pcpu_rt && pcpu_rt->sernum && !rt6_is_valid(pcpu_rt)) {
+ struct rt6_info *prev, **p;
+
+ p = this_cpu_ptr(res->nh->rt6i_pcpu);
+ prev = xchg(p, NULL);
+ if (prev) {
+ dst_dev_put(&prev->dst);
+ dst_release(&prev->dst);
+ }
+
+ pcpu_rt = NULL;
+ }
+
return pcpu_rt;
}

@@ -2596,6 +2618,9 @@ static struct dst_entry *ip6_dst_check(s

rt = container_of(dst, struct rt6_info, dst);

+ if (rt->sernum)
+ return rt6_is_valid(rt) ? dst : NULL;
+
rcu_read_lock();

/* All IPV6 dsts are created with ->obsolete set to the value
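
[ Editor's note: a minimal sketch, with hypothetical names, of the
generation-counter pattern the patch adopts for routes using external
nexthops; not part of the patch. A cached entry records the global
counter at creation time; any FIB change bumps the counter, so a
mismatch marks the cached entry stale without walking back to the
owning fib6_node. ]

	struct cached_route {
		int sernum;		/* generation id at creation */
		/* ... cached data ... */
	};

	static bool cached_route_is_valid(const struct cached_route *c,
					  int current_genid)
	{
		return c->sernum == current_genid;
	}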


2020-05-13 11:15:33

by Greg KH

Subject: [PATCH 5.6 006/118] tty: xilinx_uartps: Fix missing id assignment to the console

From: Shubhrajyoti Datta <[email protected]>

[ Upstream commit 2ae11c46d5fdc46cb396e35911c713d271056d35 ]

When the serial console has been assigned to ttyPS1 (which is the serial1
alias), the console index is not updated properly and keeps pointing to
index -1 (the statically initialized value), which ends up in a situation
where nothing is printed on the port.

Commit 18cc7ac8a28e ("Revert "serial: uartps: Register own uart console
and driver structures"") didn't contain this line, which was removed by
accident.

Fixes: 18cc7ac8a28e ("Revert "serial: uartps: Register own uart console and driver structures"")
Signed-off-by: Shubhrajyoti Datta <[email protected]>
Cc: stable <[email protected]>
Signed-off-by: Michal Simek <[email protected]>
Link: https://lore.kernel.org/r/ed3111533ef5bd342ee5ec504812240b870f0853.1588602446.git.michal.simek@xilinx.com
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/tty/serial/xilinx_uartps.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
index 7a9b360b04386..1d8b6993a4357 100644
--- a/drivers/tty/serial/xilinx_uartps.c
+++ b/drivers/tty/serial/xilinx_uartps.c
@@ -1471,6 +1471,7 @@ static int cdns_uart_probe(struct platform_device *pdev)
cdns_uart_uart_driver.nr = CDNS_UART_NR_PORTS;
#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
cdns_uart_uart_driver.cons = &cdns_uart_console;
+ cdns_uart_console.index = id;
#endif

rc = uart_register_driver(&cdns_uart_uart_driver);
--
2.20.1



2020-05-13 11:15:40

by Greg KH

Subject: [PATCH 5.6 009/118] nvme: refactor nvme_identify_ns_descs error handling

From: Christoph Hellwig <[email protected]>

[ Upstream commit fb314eb0cbb2e11540d1ae1a7b28346397f621ef ]

Move the handling of an error into the function from the caller, and
only do it for an actual error on the admin command itself, not the
command parsing, as that should be enough to deal with devices claiming
a bogus version compliance.

Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/nvme/host/core.c | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fb4c35a430650..545e9e5f1b737 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1075,8 +1075,17 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,

status = nvme_submit_sync_cmd(ctrl->admin_q, &c, data,
NVME_IDENTIFY_DATA_SIZE);
- if (status)
+ if (status) {
+ dev_warn(ctrl->device,
+ "Identify Descriptors failed (%d)\n", status);
+ /*
+ * Don't treat an error as fatal, as we potentially already
+ * have a NGUID or EUI-64.
+ */
+ if (status > 0)
+ status = 0;
goto free_data;
+ }

for (pos = 0; pos < NVME_IDENTIFY_DATA_SIZE; pos += len) {
struct nvme_ns_id_desc *cur = data + pos;
@@ -1734,26 +1743,15 @@ static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
static int nvme_report_ns_ids(struct nvme_ctrl *ctrl, unsigned int nsid,
struct nvme_id_ns *id, struct nvme_ns_ids *ids)
{
- int ret = 0;
-
memset(ids, 0, sizeof(*ids));

if (ctrl->vs >= NVME_VS(1, 1, 0))
memcpy(ids->eui64, id->eui64, sizeof(id->eui64));
if (ctrl->vs >= NVME_VS(1, 2, 0))
memcpy(ids->nguid, id->nguid, sizeof(id->nguid));
- if (ctrl->vs >= NVME_VS(1, 3, 0)) {
- /* Don't treat error as fatal we potentially
- * already have a NGUID or EUI-64
- */
- ret = nvme_identify_ns_descs(ctrl, nsid, ids);
- if (ret)
- dev_warn(ctrl->device,
- "Identify Descriptors failed (%d)\n", ret);
- if (ret > 0)
- ret = 0;
- }
- return ret;
+ if (ctrl->vs >= NVME_VS(1, 3, 0))
+ return nvme_identify_ns_descs(ctrl, nsid, ids);
+ return 0;
}

static bool nvme_ns_ids_valid(struct nvme_ns_ids *ids)
--
2.20.1



2020-05-13 11:15:45

by Greg KH

Subject: [PATCH 5.6 025/118] net: macsec: preserve ingress frame ordering

From: Scott Dial <[email protected]>

[ Upstream commit ab046a5d4be4c90a3952a0eae75617b49c0cb01b ]

MACsec decryption always occurs in a softirq context. Since
the FPU may not be usable in the softirq context, the call to
decrypt may be scheduled on the cryptd work queue. The cryptd
work queue does not provide ordering guarantees. Therefore,
preserving order requires masking out ASYNC implementations
of gcm(aes).

For instance, an Intel CPU with AES-NI makes available the
generic-gcm-aesni driver from the aesni_intel module to
implement gcm(aes). However, this implementation requires
the FPU, so it is not always available to use from a softirq
context, and will fall back to the cryptd work queue, which
does not preserve frame ordering. With this change, such a
system would select gcm_base(ctr(aes-aesni),ghash-generic).
While the aes-aesni implementation prefers to use the FPU, it
will fall back to the aes-asm implementation if the FPU is unavailable.

By using a synchronous version of gcm(aes), the decryption
will complete before returning from crypto_aead_decrypt().
Therefore, the macsec_decrypt_done() callback will be called
before returning from macsec_decrypt(). Thus, the order of
calls to macsec_post_decrypt() for the frames is preserved.

While it's presumable that the pure AES-NI version of gcm(aes)
is more performant, the hybrid solution is capable of gigabit
speeds on modest hardware. Regardless, preserving the order
of frames is paramount for many network protocols (e.g.,
triggering TCP retries). Within the MACsec driver itself, the
replay protection is tripped by the out-of-order frames, and
can cause frames to be dropped.

This bug has been present in this code since it was added in
v4.6, however it may not have been noticed since not all CPUs
have FPU offload available. Additionally, the bug manifests
as occasional out-of-order packets that are easily
misattributed to other network phenomena.

When this code was added in v4.6, the crypto/gcm.c code did
not restrict selection of the ghash function based on the
ASYNC flag. For instance, x86 CPUs with PCLMULQDQ would
select the ghash-clmulni driver instead of ghash-generic,
which submits to the cryptd work queue if the FPU is busy.
However, this bug was corrected in v4.8 by commit
b30bdfa86431afbafe15284a3ad5ac19b49b88e3, and was backported
all the way back to the v3.14 stable branch, so this patch
should be applicable back to the v4.6 stable branch.

Signed-off-by: Scott Dial <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/macsec.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -1226,7 +1226,8 @@ static struct crypto_aead *macsec_alloc_
struct crypto_aead *tfm;
int ret;

- tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
+ /* Pick a sync gcm(aes) cipher to ensure order is preserved. */
+ tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);

if (IS_ERR(tfm))
return tfm;
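
For readers unfamiliar with the crypto API calling convention here: the second
and third arguments to crypto_alloc_aead() are a (type, mask) pair, and the
lookup roughly requires ((alg->cra_flags ^ type) & mask) == 0, so passing
CRYPTO_ALG_ASYNC as the mask with a zero type restricts the search to
implementations with the ASYNC flag clear. A hedged sketch of the idea
(illustrative only, not part of the patch):

	#include <crypto/aead.h>

	/*
	 * Illustrative sketch: request gcm(aes), but only implementations
	 * that complete synchronously (ASYNC flag clear), so the request is
	 * never deferred to the cryptd workqueue.
	 */
	static struct crypto_aead *example_pick_sync_gcm(void)
	{
		return crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);
	}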


2020-05-13 11:15:55

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 038/118] selftests: net: tcp_mmap: fix SO_RCVLOWAT setting

From: Eric Dumazet <[email protected]>

[ Upstream commit a84724178bd7081cf3bd5b558616dd6a9a4ca63b ]

Since chunk_size is no longer an integer, we cannot use it directly as
an argument to setsockopt().

This patch should fix tcp_mmap for Big Endian kernels.

Fixes: 597b01edafac ("selftests: net: avoid ptl lock contention in tcp_mmap")
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Soheil Hassas Yeganeh <[email protected]>
Cc: Arjun Roy <[email protected]>
Acked-by: Soheil Hassas Yeganeh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
tools/testing/selftests/net/tcp_mmap.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

--- a/tools/testing/selftests/net/tcp_mmap.c
+++ b/tools/testing/selftests/net/tcp_mmap.c
@@ -282,12 +282,14 @@ static void setup_sockaddr(int domain, c
static void do_accept(int fdlisten)
{
pthread_attr_t attr;
+ int rcvlowat;

pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

+ rcvlowat = chunk_size;
if (setsockopt(fdlisten, SOL_SOCKET, SO_RCVLOWAT,
- &chunk_size, sizeof(chunk_size)) == -1) {
+ &rcvlowat, sizeof(rcvlowat)) == -1) {
perror("setsockopt SO_RCVLOWAT");
}
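
The underlying issue is that SO_RCVLOWAT takes an int-sized value: the kernel
copies only sizeof(int) bytes from the supplied buffer, and when chunk_size is a
wider type those are the high-order (zero) bytes on big-endian machines. Copying
the value into a plain int first, as the patch does, is the portable pattern; a
minimal userspace sketch (illustrative, not the selftest itself):

	#include <stddef.h>
	#include <sys/socket.h>

	/*
	 * Illustrative sketch: narrow the value into an int before handing
	 * it to setsockopt(), so the option value is correct regardless of
	 * the host's endianness or the width of chunk_size.
	 */
	static int example_set_rcvlowat(int fd, size_t chunk_size)
	{
		int rcvlowat = (int)chunk_size;

		return setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT,
				  &rcvlowat, sizeof(rcvlowat));
	}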



2020-05-13 11:16:03

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 040/118] tunnel: Propagate ECT(1) when decapsulating as recommended by RFC6040

From: "Toke H?iland-J?rgensen" <[email protected]>

[ Upstream commit b723748750ece7d844cdf2f52c01d37f83387208 ]

RFC 6040 recommends propagating an ECT(1) mark from an outer tunnel header
to the inner header if that inner header is already marked as ECT(0). When
RFC 6040 decapsulation was implemented, this case of propagation was not
added. This simply appears to be an oversight, so let's fix that.

Fixes: eccc1bb8d4b4 ("tunnel: drop packet if ECN present with not-ECT")
Reported-by: Bob Briscoe <[email protected]>
Reported-by: Olivier Tilmans <[email protected]>
Cc: Dave Taht <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Signed-off-by: Toke Høiland-Jørgensen <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/net/inet_ecn.h | 57 +++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 55 insertions(+), 2 deletions(-)

--- a/include/net/inet_ecn.h
+++ b/include/net/inet_ecn.h
@@ -99,6 +99,20 @@ static inline int IP_ECN_set_ce(struct i
return 1;
}

+static inline int IP_ECN_set_ect1(struct iphdr *iph)
+{
+ u32 check = (__force u32)iph->check;
+
+ if ((iph->tos & INET_ECN_MASK) != INET_ECN_ECT_0)
+ return 0;
+
+ check += (__force u16)htons(0x100);
+
+ iph->check = (__force __sum16)(check + (check>=0xFFFF));
+ iph->tos ^= INET_ECN_MASK;
+ return 1;
+}
+
static inline void IP_ECN_clear(struct iphdr *iph)
{
iph->tos &= ~INET_ECN_MASK;
@@ -134,6 +148,22 @@ static inline int IP6_ECN_set_ce(struct
return 1;
}

+static inline int IP6_ECN_set_ect1(struct sk_buff *skb, struct ipv6hdr *iph)
+{
+ __be32 from, to;
+
+ if ((ipv6_get_dsfield(iph) & INET_ECN_MASK) != INET_ECN_ECT_0)
+ return 0;
+
+ from = *(__be32 *)iph;
+ to = from ^ htonl(INET_ECN_MASK << 20);
+ *(__be32 *)iph = to;
+ if (skb->ip_summed == CHECKSUM_COMPLETE)
+ skb->csum = csum_add(csum_sub(skb->csum, (__force __wsum)from),
+ (__force __wsum)to);
+ return 1;
+}
+
static inline void ipv6_copy_dscp(unsigned int dscp, struct ipv6hdr *inner)
{
dscp &= ~INET_ECN_MASK;
@@ -159,6 +189,25 @@ static inline int INET_ECN_set_ce(struct
return 0;
}

+static inline int INET_ECN_set_ect1(struct sk_buff *skb)
+{
+ switch (skb->protocol) {
+ case cpu_to_be16(ETH_P_IP):
+ if (skb_network_header(skb) + sizeof(struct iphdr) <=
+ skb_tail_pointer(skb))
+ return IP_ECN_set_ect1(ip_hdr(skb));
+ break;
+
+ case cpu_to_be16(ETH_P_IPV6):
+ if (skb_network_header(skb) + sizeof(struct ipv6hdr) <=
+ skb_tail_pointer(skb))
+ return IP6_ECN_set_ect1(skb, ipv6_hdr(skb));
+ break;
+ }
+
+ return 0;
+}
+
/*
* RFC 6040 4.2
* To decapsulate the inner header at the tunnel egress, a compliant
@@ -208,8 +257,12 @@ static inline int INET_ECN_decapsulate(s
int rc;

rc = __INET_ECN_decapsulate(outer, inner, &set_ce);
- if (!rc && set_ce)
- INET_ECN_set_ce(skb);
+ if (!rc) {
+ if (set_ce)
+ INET_ECN_set_ce(skb);
+ else if ((outer & INET_ECN_MASK) == INET_ECN_ECT_1)
+ INET_ECN_set_ect1(skb);
+ }

return rc;
}
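
For context, the complete RFC 6040 egress mapping that this header now implements
can be summarised in one small function. The sketch below is only an illustration
of the table (using the standard two-bit ECN codepoints), not the kernel code,
which additionally has to fix up IP and skb checksums and leaves the drop decision
to the caller:

	/*
	 * Illustrative summary of the RFC 6040 decapsulation table.
	 * Returns the codepoint the inner header should carry, or -1 when
	 * the packet has to be dropped (outer CE on a not-ECT inner).
	 */
	enum { ECN_NOT_ECT = 0, ECN_ECT_1 = 1, ECN_ECT_0 = 2, ECN_CE = 3 };

	static int rfc6040_egress_sketch(int outer, int inner)
	{
		if (outer == ECN_CE)
			return inner == ECN_NOT_ECT ? -1 : ECN_CE;
		if (outer == ECN_ECT_1 && inner == ECN_ECT_0)
			return ECN_ECT_1;	/* the propagation added here */
		return inner;			/* otherwise leave inner alone */
	}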


2020-05-13 11:16:32

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 051/118] net: mvpp2: cls: Prevent buffer overflow in mvpp2_ethtool_cls_rule_del()

From: Dan Carpenter <[email protected]>

[ Upstream commit 722c0f00d4feea77475a5dc943b53d60824a1e4e ]

The "info->fs.location" is a u32 that comes from the user via the
ethtool_set_rxnfc() function. We need to check for invalid values to
prevent a buffer overflow.

I copy and pasted this check from the mvpp2_ethtool_cls_rule_ins()
function.

Fixes: 90b509b39ac9 ("net: mvpp2: cls: Add Classification offload support")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c | 3 +++
1 file changed, 3 insertions(+)

--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -1422,6 +1422,9 @@ int mvpp2_ethtool_cls_rule_del(struct mv
struct mvpp2_ethtool_fs *efs;
int ret;

+ if (info->fs.location >= MVPP2_N_RFS_ENTRIES_PER_FLOW)
+ return -EINVAL;
+
efs = port->rfs_rules[info->fs.location];
if (!efs)
return -EINVAL;


2020-05-13 11:16:41

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 030/118] net: tc35815: Fix phydev supported/advertising mask

From: Anthony Felice <[email protected]>

[ Upstream commit 4b5b71f770e2edefbfe74203777264bfe6a9927c ]

Commit 3c1bcc8614db ("net: ethernet: Convert phydev advertize and
supported from u32 to link mode") updated ethernet drivers to use a
linkmode bitmap. It mistakenly dropped a bitwise negation in the
tc35815 ethernet driver on a bitmask to set the supported/advertising
flags.

Found by Anthony via code inspection, not tested as I do not have the
required hardware.

Fixes: 3c1bcc8614db ("net: ethernet: Convert phydev advertize and supported from u32 to link mode")
Signed-off-by: Anthony Felice <[email protected]>
Reviewed-by: Akshay Bhat <[email protected]>
Reviewed-by: Heiner Kallweit <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/toshiba/tc35815.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/toshiba/tc35815.c
+++ b/drivers/net/ethernet/toshiba/tc35815.c
@@ -643,7 +643,7 @@ static int tc_mii_probe(struct net_devic
linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, mask);
linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask);
}
- linkmode_and(phydev->supported, phydev->supported, mask);
+ linkmode_andnot(phydev->supported, phydev->supported, mask);
linkmode_copy(phydev->advertising, phydev->supported);

lp->link = 0;


2020-05-13 11:16:41

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 029/118] net: stricter validation of untrusted gso packets

From: Willem de Bruijn <[email protected]>

[ Upstream commit 9274124f023b5c56dc4326637d4f787968b03607 ]

Syzkaller again found a path to a kernel crash through bad gso input:
a packet with transport header extending beyond skb_headlen(skb).

Tighten validation at kernel entry:

- Verify that the transport header lies within the linear section.

To avoid pulling linux/tcp.h, verify just sizeof tcphdr.
tcp_gso_segment will call pskb_may_pull (th->doff * 4) before use.

- Match the gso_type against the ip_proto found by the flow dissector.

Fixes: bfd5f4a3d605 ("packet: Add GSO/csum offload support.")
Reported-by: syzbot <[email protected]>
Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
include/linux/virtio_net.h | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)

--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -3,6 +3,8 @@
#define _LINUX_VIRTIO_NET_H

#include <linux/if_vlan.h>
+#include <uapi/linux/tcp.h>
+#include <uapi/linux/udp.h>
#include <uapi/linux/virtio_net.h>

static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,
@@ -28,17 +30,25 @@ static inline int virtio_net_hdr_to_skb(
bool little_endian)
{
unsigned int gso_type = 0;
+ unsigned int thlen = 0;
+ unsigned int ip_proto;

if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
case VIRTIO_NET_HDR_GSO_TCPV4:
gso_type = SKB_GSO_TCPV4;
+ ip_proto = IPPROTO_TCP;
+ thlen = sizeof(struct tcphdr);
break;
case VIRTIO_NET_HDR_GSO_TCPV6:
gso_type = SKB_GSO_TCPV6;
+ ip_proto = IPPROTO_TCP;
+ thlen = sizeof(struct tcphdr);
break;
case VIRTIO_NET_HDR_GSO_UDP:
gso_type = SKB_GSO_UDP;
+ ip_proto = IPPROTO_UDP;
+ thlen = sizeof(struct udphdr);
break;
default:
return -EINVAL;
@@ -57,16 +67,22 @@ static inline int virtio_net_hdr_to_skb(

if (!skb_partial_csum_set(skb, start, off))
return -EINVAL;
+
+ if (skb_transport_offset(skb) + thlen > skb_headlen(skb))
+ return -EINVAL;
} else {
/* gso packets without NEEDS_CSUM do not set transport_offset.
* probe and drop if does not match one of the above types.
*/
if (gso_type && skb->network_header) {
+ struct flow_keys_basic keys;
+
if (!skb->protocol)
virtio_net_hdr_set_proto(skb, hdr);
retry:
- skb_probe_transport_header(skb);
- if (!skb_transport_header_was_set(skb)) {
+ if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
+ NULL, 0, 0, 0,
+ 0)) {
/* UFO does not specify ipv4 or 6: try both */
if (gso_type & SKB_GSO_UDP &&
skb->protocol == htons(ETH_P_IP)) {
@@ -75,6 +91,12 @@ retry:
}
return -EINVAL;
}
+
+ if (keys.control.thoff + thlen > skb_headlen(skb) ||
+ keys.basic.ip_proto != ip_proto)
+ return -EINVAL;
+
+ skb_set_transport_header(skb, keys.control.thoff);
}
}
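
The two invariants added above can also be stated on their own: whatever GSO type
an untrusted header advertises, the complete L4 header of that type must lie
inside the linear data, and the advertised type must agree with the protocol the
flow dissector actually finds. A userspace-style sketch of the first invariant
(illustrative only; "transport_off" and "linear_len" stand in for
skb_transport_offset() and skb_headlen()):

	#include <stdbool.h>
	#include <stddef.h>
	#include <linux/virtio_net.h>
	#include <netinet/tcp.h>
	#include <netinet/udp.h>

	/* Illustrative sketch, not the kernel code. */
	static bool example_gso_hdr_fits(const struct virtio_net_hdr *h,
					 size_t transport_off, size_t linear_len)
	{
		size_t thlen;

		switch (h->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
		case VIRTIO_NET_HDR_GSO_NONE:
			return true;
		case VIRTIO_NET_HDR_GSO_TCPV4:
		case VIRTIO_NET_HDR_GSO_TCPV6:
			thlen = sizeof(struct tcphdr);
			break;
		case VIRTIO_NET_HDR_GSO_UDP:
			thlen = sizeof(struct udphdr);
			break;
		default:
			return false;
		}

		return transport_off + thlen <= linear_len;
	}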



2020-05-13 11:16:42

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 028/118] net_sched: sch_skbprio: add message validation to skbprio_change()

From: Eric Dumazet <[email protected]>

[ Upstream commit 2761121af87de45951989a0adada917837d8fa82 ]

Do not assume the attribute has the right size.

Fixes: aea5f654e6b7 ("net/sched: add skbprio scheduler")
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: syzbot <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/sched/sch_skbprio.c | 3 +++
1 file changed, 3 insertions(+)

--- a/net/sched/sch_skbprio.c
+++ b/net/sched/sch_skbprio.c
@@ -169,6 +169,9 @@ static int skbprio_change(struct Qdisc *
{
struct tc_skbprio_qopt *ctl = nla_data(opt);

+ if (opt->nla_len != nla_attr_size(sizeof(*ctl)))
+ return -EINVAL;
+
sch->limit = ctl->limit;
return 0;
}


2020-05-13 11:16:42

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 024/118] net: macb: fix an issue about leak related system resources

From: Dejin Zheng <[email protected]>

[ Upstream commit b959c77dac09348955f344104c6a921ebe104753 ]

A call to macb_init() can fail in fu540_c000_init(), but the related
system resources were not released in that case. Use
devm_platform_ioremap_resource() instead of ioremap() to fix it.

Fixes: c218ad559020ff9 ("macb: Add support for SiFive FU540-C000")
Cc: Andy Shevchenko <[email protected]>
Reviewed-by: Yash Shah <[email protected]>
Suggested-by: Nicolas Ferre <[email protected]>
Suggested-by: Andy Shevchenko <[email protected]>
Signed-off-by: Dejin Zheng <[email protected]>
Acked-by: Nicolas Ferre <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/cadence/macb_main.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -4165,15 +4165,9 @@ static int fu540_c000_clk_init(struct pl

static int fu540_c000_init(struct platform_device *pdev)
{
- struct resource *res;
-
- res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
- if (!res)
- return -ENODEV;
-
- mgmt->reg = ioremap(res->start, resource_size(res));
- if (!mgmt->reg)
- return -ENOMEM;
+ mgmt->reg = devm_platform_ioremap_resource(pdev, 1);
+ if (IS_ERR(mgmt->reg))
+ return PTR_ERR(mgmt->reg);

return macb_init(pdev);
}
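
devm_platform_ioremap_resource() combines platform_get_resource() and
devm_ioremap_resource(), and because the mapping is device-managed it is torn
down automatically when probe fails later on or when the device goes away; that
is what closes the leak when macb_init() fails after the ioremap(). A generic
sketch of the pattern (illustrative, hypothetical driver, not the macb code):

	#include <linux/platform_device.h>
	#include <linux/io.h>
	#include <linux/err.h>

	static int example_probe(struct platform_device *pdev)
	{
		void __iomem *regs;

		regs = devm_platform_ioremap_resource(pdev, 0);
		if (IS_ERR(regs))
			return PTR_ERR(regs);

		/*
		 * Any later failure, and eventual device removal, unmaps
		 * "regs" automatically; no manual iounmap() unwinding.
		 */
		return 0;
	}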


2020-05-13 11:16:51

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 079/118] drm: ingenic-drm: add MODULE_DEVICE_TABLE

From: H. Nikolaus Schaller <[email protected]>

commit c59359a02d14a7256cd508a4886b7d2012df2363 upstream.

so that the driver can be autoloaded by matching the device tree
when compiled as a module.

Cc: [email protected] # v5.3+
Fixes: 90b86fcc47b4 ("DRM: Add KMS driver for the Ingenic JZ47xx SoCs")
Signed-off-by: H. Nikolaus Schaller <[email protected]>
Signed-off-by: Paul Cercueil <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/1694a29b7a3449b6b662cec33d1b33f2ee0b174a.1588574111.git.hns@goldelico.com
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
drivers/gpu/drm/ingenic/ingenic-drm.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/gpu/drm/ingenic/ingenic-drm.c
+++ b/drivers/gpu/drm/ingenic/ingenic-drm.c
@@ -843,6 +843,7 @@ static const struct of_device_id ingenic
{ .compatible = "ingenic,jz4770-lcd", .data = &jz4770_soc_info },
{ /* sentinel */ },
};
+MODULE_DEVICE_TABLE(of, ingenic_drm_of_match);

static struct platform_driver ingenic_drm_driver = {
.driver = {
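
MODULE_DEVICE_TABLE(of, ...) exports the table as modalias information, so when
the OF core reports a matching compatible string through uevents, udev/modprobe
can autoload the module; without it, a driver built as a module only binds if
something loads it explicitly. A generic sketch of the usual pairing
(illustrative, hypothetical driver, not the ingenic code):

	#include <linux/module.h>
	#include <linux/of.h>
	#include <linux/platform_device.h>

	static int example_probe(struct platform_device *pdev)
	{
		return 0;	/* real setup happens here */
	}

	static const struct of_device_id example_of_match[] = {
		{ .compatible = "vendor,example-device" },
		{ /* sentinel */ },
	};
	MODULE_DEVICE_TABLE(of, example_of_match);

	static struct platform_driver example_driver = {
		.probe = example_probe,
		.driver = {
			.name = "example",
			.of_match_table = example_of_match,
		},
	};
	module_platform_driver(example_driver);

	MODULE_LICENSE("GPL");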


2020-05-13 11:17:00

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 084/118] mm/page_alloc: fix watchdog soft lockups during set_zone_contiguous()

From: David Hildenbrand <[email protected]>

commit e84fe99b68ce353c37ceeecc95dce9696c976556 upstream.

Without CONFIG_PREEMPT, it can happen that we get soft lockups detected,
e.g., while booting up.

watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:1]
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.6.0-next-20200331+ #4
Hardware name: Red Hat KVM, BIOS 1.11.1-4.module+el8.1.0+4066+0f1aadab 04/01/2014
RIP: __pageblock_pfn_to_page+0x134/0x1c0
Call Trace:
set_zone_contiguous+0x56/0x70
page_alloc_init_late+0x166/0x176
kernel_init_freeable+0xfa/0x255
kernel_init+0xa/0x106
ret_from_fork+0x35/0x40

The issue becomes visible when having a lot of memory (e.g., 4TB)
assigned to a single NUMA node - a system that can easily be created
using QEMU. Inside VMs on a hypervisor with quite some memory
overcommit, this is fairly easy to trigger.

Signed-off-by: David Hildenbrand <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
Reviewed-by: Pankaj Gupta <[email protected]>
Reviewed-by: Baoquan He <[email protected]>
Reviewed-by: Shile Zhang <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Kirill Tkhai <[email protected]>
Cc: Shile Zhang <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Daniel Jordan <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/page_alloc.c | 1 +
1 file changed, 1 insertion(+)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1555,6 +1555,7 @@ void set_zone_contiguous(struct zone *zo
if (!__pageblock_pfn_to_page(block_start_pfn,
block_end_pfn, zone))
return;
+ cond_resched();
}

/* We confirm that there is no hole */
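
On kernels without forced preemption, a long loop in process context never
yields the CPU, so after roughly 20 seconds the soft-lockup watchdog fires,
which is exactly the splat above; an explicit cond_resched() per iteration gives
the scheduler its chance. A generic sketch of the pattern (illustrative only):

	#include <linux/sched.h>

	/*
	 * Illustrative sketch: any potentially long, non-preemptible loop
	 * in process context should contain a scheduling point.
	 */
	static void example_walk_many_items(unsigned long nr)
	{
		unsigned long i;

		for (i = 0; i < nr; i++) {
			/* ... per-item work ... */
			cond_resched();
		}
	}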


2020-05-13 11:17:01

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 066/118] tracing/kprobes: Reject new event if loc is NULL

From: Masami Hiramatsu <[email protected]>

commit 5b4dcd2d201a395ad4054067bfae4a07554fbd65 upstream.

Reject a new event whose location is NULL for kprobes.
For kprobes, the user must specify at least the location.

Link: http://lkml.kernel.org/r/158779376597.6082.1411212055469099461.stgit@devnote2

Cc: Tom Zanussi <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Fixes: 2a588dd1d5d6 ("tracing: Add kprobe event command generation functions")
Signed-off-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/trace/trace_kprobe.c | 6 ++++++
1 file changed, 6 insertions(+)

--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -940,6 +940,9 @@ EXPORT_SYMBOL_GPL(kprobe_event_cmd_init)
* complete command or only the first part of it; in the latter case,
* kprobe_event_add_fields() can be used to add more fields following this.
*
+ * Unlikely the synth_event_gen_cmd_start(), @loc must be specified. This
+ * returns -EINVAL if @loc == NULL.
+ *
* Return: 0 if successful, error otherwise.
*/
int __kprobe_event_gen_cmd_start(struct dynevent_cmd *cmd, bool kretprobe,
@@ -953,6 +956,9 @@ int __kprobe_event_gen_cmd_start(struct
if (cmd->type != DYNEVENT_TYPE_KPROBE)
return -EINVAL;

+ if (!loc)
+ return -EINVAL;
+
if (kretprobe)
snprintf(buf, MAX_EVENT_NAME_LEN, "r:kprobes/%s", name);
else


2020-05-13 11:17:01

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 082/118] epoll: atomically remove wait entry on wake up

From: Roman Penyaev <[email protected]>

commit 412895f03cbf9633298111cb4dfde13b7720e2c5 upstream.

This patch does two things:

- fixes a lost wakeup introduced by commit 339ddb53d373 ("fs/epoll:
remove unnecessary wakeups of nested epoll")

- improves performance for events delivery.

The description of the problem is the following: if N (>1) threads are
waiting on ep->wq for new events and M (>1) events come, it is quite
likely that >1 wakeups hit the same wait queue entry, because there is
quite a big window between __add_wait_queue_exclusive() and the
following __remove_wait_queue() calls in ep_poll() function.

This can lead to lost wakeups, because the thread that was woken up may
not handle all the events in ->rdllist. (The problem is described in more
detail here: https://lkml.org/lkml/2019/10/7/905)

The idea of the current patch is to use init_wait() instead of
init_waitqueue_entry().

Internally init_wait() sets autoremove_wake_function as a callback,
which removes the wait entry atomically (under the wq locks) from the
list, thus the next coming wakeup hits the next wait entry in the wait
queue, thus preventing lost wakeups.

Problem is very well reproduced by the epoll60 test case [1].

Wait entry removal on wakeup has also performance benefits, because
there is no need to take a ep->lock and remove wait entry from the queue
after the successful wakeup. Here is the timing output of the epoll60
test case:

With explicit wakeup from ep_scan_ready_list() (the state of the
code prior 339ddb53d373):

real 0m6.970s
user 0m49.786s
sys 0m0.113s

After this patch:

real 0m5.220s
user 0m36.879s
sys 0m0.019s

The other testcase is the stress-epoll [2], where one thread consumes
all the events and other threads produce many events:

With explicit wakeup from ep_scan_ready_list() (the state of the
code prior 339ddb53d373):

threads events/ms run-time ms
8 5427 1474
16 6163 2596
32 6824 4689
64 7060 9064
128 6991 18309

After this patch:

threads events/ms run-time ms
8 5598 1429
16 7073 2262
32 7502 4265
64 7640 8376
128 7634 16767

(number of "events/ms" represents event bandwidth, thus higher is
better; number of "run-time ms" represents overall time spent
doing the benchmark, thus lower is better)

[1] tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c
[2] https://github.com/rouming/test-tools/blob/master/stress-epoll.c

Signed-off-by: Roman Penyaev <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Jason Baron <[email protected]>
Cc: Khazhismel Kumykov <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Heiher <[email protected]>
Cc: <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/eventpoll.c | 43 ++++++++++++++++++++++++-------------------
1 file changed, 24 insertions(+), 19 deletions(-)

--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1800,7 +1800,6 @@ static int ep_poll(struct eventpoll *ep,
{
int res = 0, eavail, timed_out = 0;
u64 slack = 0;
- bool waiter = false;
wait_queue_entry_t wait;
ktime_t expires, *to = NULL;

@@ -1845,21 +1844,23 @@ fetch_events:
*/
ep_reset_busy_poll_napi_id(ep);

- /*
- * We don't have any available event to return to the caller. We need
- * to sleep here, and we will be woken by ep_poll_callback() when events
- * become available.
- */
- if (!waiter) {
- waiter = true;
- init_waitqueue_entry(&wait, current);
-
+ do {
+ /*
+ * Internally init_wait() uses autoremove_wake_function(),
+ * thus wait entry is removed from the wait queue on each
+ * wakeup. Why it is important? In case of several waiters
+ * each new wakeup will hit the next waiter, giving it the
+ * chance to harvest new event. Otherwise wakeup can be
+ * lost. This is also good performance-wise, because on
+ * normal wakeup path no need to call __remove_wait_queue()
+ * explicitly, thus ep->lock is not taken, which halts the
+ * event delivery.
+ */
+ init_wait(&wait);
write_lock_irq(&ep->lock);
__add_wait_queue_exclusive(&ep->wq, &wait);
write_unlock_irq(&ep->lock);
- }

- for (;;) {
/*
* We don't want to sleep if the ep_poll_callback() sends us
* a wakeup in between. That's why we set the task state
@@ -1889,10 +1890,20 @@ fetch_events:
timed_out = 1;
break;
}
- }
+
+ /* We were woken up, thus go and try to harvest some events */
+ eavail = 1;
+
+ } while (0);

__set_current_state(TASK_RUNNING);

+ if (!list_empty_careful(&wait.entry)) {
+ write_lock_irq(&ep->lock);
+ __remove_wait_queue(&ep->wq, &wait);
+ write_unlock_irq(&ep->lock);
+ }
+
send_events:
/*
* Try to transfer events to user space. In case we get 0 events and
@@ -1903,12 +1914,6 @@ send_events:
!(res = ep_send_events(ep, events, maxevents)) && !timed_out)
goto fetch_events;

- if (waiter) {
- write_lock_irq(&ep->lock);
- __remove_wait_queue(&ep->wq, &wait);
- write_unlock_irq(&ep->lock);
- }
-
return res;
}
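
The difference the commit message describes lives in the wake callback attached
to the wait entry: init_waitqueue_entry() installs default_wake_function(), which
leaves the entry queued, while init_wait() installs autoremove_wake_function(),
which also unlinks the entry while the waitqueue lock is held, so the next wakeup
necessarily hits the next waiter. Roughly, the autoremove behaviour is as in this
sketch (see kernel/sched/wait.c for the real implementation):

	#include <linux/wait.h>
	#include <linux/list.h>

	/* Sketch of the behaviour init_wait() relies on; not the exact source. */
	static int autoremove_wake_function_sketch(struct wait_queue_entry *wq_entry,
						   unsigned int mode, int sync,
						   void *key)
	{
		int ret = default_wake_function(wq_entry, mode, sync, key);

		if (ret)
			list_del_init(&wq_entry->entry);
		return ret;
	}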



2020-05-13 11:18:05

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 073/118] KVM: arm: vgic: Fix limit condition when writing to GICD_I[CS]ACTIVER

From: Marc Zyngier <[email protected]>

commit 1c32ca5dc6d00012f0c964e5fdd7042fcc71efb1 upstream.

When deciding whether a guest has to be stopped we check whether this
is a private interrupt or not. Unfortunately, there's an off-by-one bug
here, and we fail to recognize a whole range of interrupts as being
global (GICv2 SPIs 32-63).

Fix the condition from > to be >=.

Cc: [email protected]
Fixes: abd7229626b93 ("KVM: arm/arm64: Simplify active_change_prepare and plug race")
Reported-by: André Przywara <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
virt/kvm/arm/vgic/vgic-mmio.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -368,7 +368,7 @@ static void vgic_mmio_change_active(stru
static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
{
if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
- intid > VGIC_NR_PRIVATE_IRQS)
+ intid >= VGIC_NR_PRIVATE_IRQS)
kvm_arm_halt_guest(vcpu->kvm);
}

@@ -376,7 +376,7 @@ static void vgic_change_active_prepare(s
static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
{
if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
- intid > VGIC_NR_PRIVATE_IRQS)
+ intid >= VGIC_NR_PRIVATE_IRQS)
kvm_arm_resume_guest(vcpu->kvm);
}



2020-05-13 11:18:21

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 114/118] bdi: move bdi_dev_name out of line

From: Christoph Hellwig <[email protected]>

[ Upstream commit eb7ae5e06bb6e6ac6bb86872d27c43ebab92f6b2 ]

bdi_dev_name is not a fast-path function; move it out of line. This
prepares for using it from modular callers without having to export
an implementation detail like bdi_unknown_name.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Bart Van Assche <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
include/linux/backing-dev.h | 9 +--------
mm/backing-dev.c | 10 +++++++++-
2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index f88197c1ffc2d..c9ad5c3b7b4b2 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -505,13 +505,6 @@ static inline int bdi_rw_congested(struct backing_dev_info *bdi)
(1 << WB_async_congested));
}

-extern const char *bdi_unknown_name;
-
-static inline const char *bdi_dev_name(struct backing_dev_info *bdi)
-{
- if (!bdi || !bdi->dev)
- return bdi_unknown_name;
- return dev_name(bdi->dev);
-}
+const char *bdi_dev_name(struct backing_dev_info *bdi);

#endif /* _LINUX_BACKING_DEV_H */
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 62f05f605fb5b..680e5028d0fc5 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -21,7 +21,7 @@ struct backing_dev_info noop_backing_dev_info = {
EXPORT_SYMBOL_GPL(noop_backing_dev_info);

static struct class *bdi_class;
-const char *bdi_unknown_name = "(unknown)";
+static const char *bdi_unknown_name = "(unknown)";

/*
* bdi_lock protects bdi_tree and updates to bdi_list. bdi_list has RCU
@@ -1043,6 +1043,14 @@ void bdi_put(struct backing_dev_info *bdi)
}
EXPORT_SYMBOL(bdi_put);

+const char *bdi_dev_name(struct backing_dev_info *bdi)
+{
+ if (!bdi || !bdi->dev)
+ return bdi_unknown_name;
+ return dev_name(bdi->dev);
+}
+EXPORT_SYMBOL_GPL(bdi_dev_name);
+
static wait_queue_head_t congestion_wqh[2] = {
__WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[0]),
__WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[1])
--
2.20.1



2020-05-13 11:18:24

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 108/118] netfilter: nf_osf: avoid passing pointer to local var

From: Arnd Bergmann <[email protected]>

commit c165d57b552aaca607fa5daf3fb524a6efe3c5a3 upstream.

gcc-10 points out that a code path exists where a pointer to a stack
variable may be passed back to the caller:

net/netfilter/nfnetlink_osf.c: In function 'nf_osf_hdr_ctx_init':
cc1: warning: function may return address of local variable [-Wreturn-local-addr]
net/netfilter/nfnetlink_osf.c:171:16: note: declared here
171 | struct tcphdr _tcph;
| ^~~~~

I am not sure whether this can happen in practice, but moving the
variable declaration into the callers avoids the problem.

Fixes: 31a9c29210e2 ("netfilter: nf_osf: add struct nf_osf_hdr_ctx")
Signed-off-by: Arnd Bergmann <[email protected]>
Reviewed-by: Florian Westphal <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/netfilter/nfnetlink_osf.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

--- a/net/netfilter/nfnetlink_osf.c
+++ b/net/netfilter/nfnetlink_osf.c
@@ -165,12 +165,12 @@ static bool nf_osf_match_one(const struc
static const struct tcphdr *nf_osf_hdr_ctx_init(struct nf_osf_hdr_ctx *ctx,
const struct sk_buff *skb,
const struct iphdr *ip,
- unsigned char *opts)
+ unsigned char *opts,
+ struct tcphdr *_tcph)
{
const struct tcphdr *tcp;
- struct tcphdr _tcph;

- tcp = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(struct tcphdr), &_tcph);
+ tcp = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(struct tcphdr), _tcph);
if (!tcp)
return NULL;

@@ -205,10 +205,11 @@ nf_osf_match(const struct sk_buff *skb,
int fmatch = FMATCH_WRONG;
struct nf_osf_hdr_ctx ctx;
const struct tcphdr *tcp;
+ struct tcphdr _tcph;

memset(&ctx, 0, sizeof(ctx));

- tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts);
+ tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts, &_tcph);
if (!tcp)
return false;

@@ -265,10 +266,11 @@ bool nf_osf_find(const struct sk_buff *s
const struct nf_osf_finger *kf;
struct nf_osf_hdr_ctx ctx;
const struct tcphdr *tcp;
+ struct tcphdr _tcph;

memset(&ctx, 0, sizeof(ctx));

- tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts);
+ tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts, &_tcph);
if (!tcp)
return false;
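
The pattern behind the warning: skb_header_pointer(skb, offset, len, buffer)
returns either a pointer into the skb's linear data or the caller-supplied buffer
it copied the bytes into, so the buffer must outlive every use of the returned
pointer. When the buffer is a local of a small helper that returns the pointer,
gcc-10 rightly worries it may dangle; the fix is to let the outermost caller own
the storage. A generic sketch (illustrative, not the nfnetlink code):

	#include <linux/skbuff.h>
	#include <linux/tcp.h>

	/* The storage is passed in, so the returned pointer stays valid
	 * for as long as the caller keeps "storage" alive. */
	static const struct tcphdr *example_peek_tcp(const struct sk_buff *skb,
						     unsigned int offset,
						     struct tcphdr *storage)
	{
		return skb_header_pointer(skb, offset, sizeof(*storage), storage);
	}

	static bool example_caller(const struct sk_buff *skb, unsigned int offset)
	{
		struct tcphdr buf;	/* lives across all uses of "tcp" */
		const struct tcphdr *tcp = example_peek_tcp(skb, offset, &buf);

		return tcp && tcp->syn;
	}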



2020-05-13 11:18:27

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 081/118] ipc/mqueue.c: change __do_notify() to bypass check_kill_permission()

From: Oleg Nesterov <[email protected]>

commit b5f2006144c6ae941726037120fa1001ddede784 upstream.

Commit cc731525f26a ("signal: Remove kernel interal si_code magic")
changed the value of SI_FROMUSER(SI_MESGQ), this means that mq_notify() no
longer works if the sender doesn't have rights to send a signal.

Change __do_notify() to use do_send_sig_info() instead of kill_pid_info()
to avoid check_kill_permission().

This needs the additional notify.sigev_signo != 0 check, shouldn't we
change do_mq_notify() to deny sigev_signo == 0 ?

Test-case:

#include <signal.h>
#include <mqueue.h>
#include <fcntl.h>	/* for the O_* constants used by mq_open() */
#include <unistd.h>
#include <sys/wait.h>
#include <assert.h>

static int notified;

static void sigh(int sig)
{
	notified = 1;
}

int main(void)
{
	signal(SIGIO, sigh);

	int fd = mq_open("/mq", O_RDWR|O_CREAT, 0666, NULL);
	assert(fd >= 0);

	struct sigevent se = {
		.sigev_notify = SIGEV_SIGNAL,
		.sigev_signo = SIGIO,
	};
	assert(mq_notify(fd, &se) == 0);

	if (!fork()) {
		assert(setuid(1) == 0);
		mq_send(fd, "", 1, 0);
		return 0;
	}

	wait(NULL);
	mq_unlink("/mq");
	assert(notified);
	return 0;
}

[[email protected]: 1) Add self_exec_id evaluation so that the implementation matches do_notify_parent 2) use PIDTYPE_TGID everywhere]
Fixes: cc731525f26a ("signal: Remove kernel interal si_code magic")
Reported-by: Yoji <[email protected]>
Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Manfred Spraul <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Acked-by: "Eric W. Biederman" <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Markus Elfring <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
ipc/mqueue.c | 34 ++++++++++++++++++++++++++--------
1 file changed, 26 insertions(+), 8 deletions(-)

--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -142,6 +142,7 @@ struct mqueue_inode_info {

struct sigevent notify;
struct pid *notify_owner;
+ u32 notify_self_exec_id;
struct user_namespace *notify_user_ns;
struct user_struct *user; /* user who created, for accounting */
struct sock *notify_sock;
@@ -774,28 +775,44 @@ static void __do_notify(struct mqueue_in
* synchronously. */
if (info->notify_owner &&
info->attr.mq_curmsgs == 1) {
- struct kernel_siginfo sig_i;
switch (info->notify.sigev_notify) {
case SIGEV_NONE:
break;
- case SIGEV_SIGNAL:
- /* sends signal */
+ case SIGEV_SIGNAL: {
+ struct kernel_siginfo sig_i;
+ struct task_struct *task;
+
+ /* do_mq_notify() accepts sigev_signo == 0, why?? */
+ if (!info->notify.sigev_signo)
+ break;

clear_siginfo(&sig_i);
sig_i.si_signo = info->notify.sigev_signo;
sig_i.si_errno = 0;
sig_i.si_code = SI_MESGQ;
sig_i.si_value = info->notify.sigev_value;
- /* map current pid/uid into info->owner's namespaces */
rcu_read_lock();
+ /* map current pid/uid into info->owner's namespaces */
sig_i.si_pid = task_tgid_nr_ns(current,
ns_of_pid(info->notify_owner));
- sig_i.si_uid = from_kuid_munged(info->notify_user_ns, current_uid());
+ sig_i.si_uid = from_kuid_munged(info->notify_user_ns,
+ current_uid());
+ /*
+ * We can't use kill_pid_info(), this signal should
+ * bypass check_kill_permission(). It is from kernel
+ * but si_fromuser() can't know this.
+ * We do check the self_exec_id, to avoid sending
+ * signals to programs that don't expect them.
+ */
+ task = pid_task(info->notify_owner, PIDTYPE_TGID);
+ if (task && task->self_exec_id ==
+ info->notify_self_exec_id) {
+ do_send_sig_info(info->notify.sigev_signo,
+ &sig_i, task, PIDTYPE_TGID);
+ }
rcu_read_unlock();
-
- kill_pid_info(info->notify.sigev_signo,
- &sig_i, info->notify_owner);
break;
+ }
case SIGEV_THREAD:
set_cookie(info->notify_cookie, NOTIFY_WOKENUP);
netlink_sendskb(info->notify_sock, info->notify_cookie);
@@ -1384,6 +1401,7 @@ retry:
info->notify.sigev_signo = notification->sigev_signo;
info->notify.sigev_value = notification->sigev_value;
info->notify.sigev_notify = SIGEV_SIGNAL;
+ info->notify_self_exec_id = current->self_exec_id;
break;
}



2020-05-13 11:18:30

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 105/118] KVM: x86: Fixes posted interrupt check for IRQs delivery modes

From: Suravee Suthikulpanit <[email protected]>

commit 637543a8d61c6afe4e9be64bfb43c78701a83375 upstream.

Current logic incorrectly uses the enum ioapic_irq_destination_types
to check the posted interrupt destination types. However, the value was
set using APIC_DM_XXX macros, which are left-shifted by 8 bits.

Fix this by using APIC_DM_FIXED and APIC_DM_LOWEST instead.

Fixes: (fdcf75621375 'KVM: x86: Disable posted interrupts for non-standard IRQs delivery modes')
Cc: Alexander Graf <[email protected]>
Signed-off-by: Suravee Suthikulpanit <[email protected]>
Message-Id: <[email protected]>
Reviewed-by: Maxim Levitsky <[email protected]>
Tested-by: Maxim Levitsky <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/x86/include/asm/kvm_host.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1664,8 +1664,8 @@ void kvm_set_msi_irq(struct kvm *kvm, st
static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
{
/* We can only post Fixed and LowPrio IRQs */
- return (irq->delivery_mode == dest_Fixed ||
- irq->delivery_mode == dest_LowestPrio);
+ return (irq->delivery_mode == APIC_DM_FIXED ||
+ irq->delivery_mode == APIC_DM_LOWEST);
}

static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)


2020-05-13 11:18:40

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 089/118] coredump: fix crash when umh is disabled

From: Luis Chamberlain <[email protected]>

commit 3740d93e37902b31159a82da2d5c8812ed825404 upstream.

Commit 64e90a8acb859 ("Introduce STATIC_USERMODEHELPER to mediate
call_usermodehelper()") added the option to disable all
call_usermodehelper() calls by setting STATIC_USERMODEHELPER_PATH to
an empty string. When this is done and a crashdump is triggered, the
kernel will crash on a NULL pointer dereference, since we make
assumptions about what call_usermodehelper_exec() did.

This has been reported by Sergey when one triggers a coredump
with the following configuration:

```
CONFIG_STATIC_USERMODEHELPER=y
CONFIG_STATIC_USERMODEHELPER_PATH=""
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
```

The way disabling the umh was designed was that call_usermodehelper_exec()
would just return early, without an error. But coredump assumes
certain variables are set up for us when this happens, and calls
file_start_write(cprm.file) with a NULL file.

```
[ 2.819676] BUG: kernel NULL pointer dereference, address: 0000000000000020
[ 2.819859] #PF: supervisor read access in kernel mode
[ 2.820035] #PF: error_code(0x0000) - not-present page
[ 2.820188] PGD 0 P4D 0
[ 2.820305] Oops: 0000 [#1] SMP PTI
[ 2.820436] CPU: 2 PID: 89 Comm: a Not tainted 5.7.0-rc1+ #7
[ 2.820680] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31 04/01/2014
[ 2.821150] RIP: 0010:do_coredump+0xd80/0x1060
[ 2.821385] Code: e8 95 11 ed ff 48 c7 c6 cc a7 b4 81 48 8d bd 28 ff
ff ff 89 c2 e8 70 f1 ff ff 41 89 c2 85 c0 0f 84 72 f7 ff ff e9 b4 fe ff
ff <48> 8b 57 20 0f b7 02 66 25 00 f0 66 3d 00 8
0 0f 84 9c 01 00 00 44
[ 2.822014] RSP: 0000:ffffc9000029bcb8 EFLAGS: 00010246
[ 2.822339] RAX: 0000000000000000 RBX: ffff88803f860000 RCX: 000000000000000a
[ 2.822746] RDX: 0000000000000009 RSI: 0000000000000282 RDI: 0000000000000000
[ 2.823141] RBP: ffffc9000029bde8 R08: 0000000000000000 R09: ffffc9000029bc00
[ 2.823508] R10: 0000000000000001 R11: ffff88803dec90be R12: ffffffff81c39da0
[ 2.823902] R13: ffff88803de84400 R14: 0000000000000000 R15: 0000000000000000
[ 2.824285] FS: 00007fee08183540(0000) GS:ffff88803e480000(0000) knlGS:0000000000000000
[ 2.824767] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2.825111] CR2: 0000000000000020 CR3: 000000003f856005 CR4: 0000000000060ea0
[ 2.825479] Call Trace:
[ 2.825790] get_signal+0x11e/0x720
[ 2.826087] do_signal+0x1d/0x670
[ 2.826361] ? force_sig_info_to_task+0xc1/0xf0
[ 2.826691] ? force_sig_fault+0x3c/0x40
[ 2.826996] ? do_trap+0xc9/0x100
[ 2.827179] exit_to_usermode_loop+0x49/0x90
[ 2.827359] prepare_exit_to_usermode+0x77/0xb0
[ 2.827559] ? invalid_op+0xa/0x30
[ 2.827747] ret_from_intr+0x20/0x20
[ 2.827921] RIP: 0033:0x55e2c76d2129
[ 2.828107] Code: 2d ff ff ff e8 68 ff ff ff 5d c6 05 18 2f 00 00 01
c3 0f 1f 80 00 00 00 00 c3 0f 1f 80 00 00 00 00 e9 7b ff ff ff 55 48 89
e5 <0f> 0b b8 00 00 00 00 5d c3 66 2e 0f 1f 84 0
0 00 00 00 00 0f 1f 40
[ 2.828603] RSP: 002b:00007fffeba5e080 EFLAGS: 00010246
[ 2.828801] RAX: 000055e2c76d2125 RBX: 0000000000000000 RCX: 00007fee0817c718
[ 2.829034] RDX: 00007fffeba5e188 RSI: 00007fffeba5e178 RDI: 0000000000000001
[ 2.829257] RBP: 00007fffeba5e080 R08: 0000000000000000 R09: 00007fee08193c00
[ 2.829482] R10: 0000000000000009 R11: 0000000000000000 R12: 000055e2c76d2040
[ 2.829727] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 2.829964] CR2: 0000000000000020
[ 2.830149] ---[ end trace ceed83d8c68a1bf1 ]---
```

Cc: <[email protected]> # v4.11+
Fixes: 64e90a8acb85 ("Introduce STATIC_USERMODEHELPER to mediate call_usermodehelper()")
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=199795
Reported-by: Tony Vroon <[email protected]>
Reported-by: Sergey Kvachonok <[email protected]>
Tested-by: Sergei Trofimovich <[email protected]>
Signed-off-by: Luis Chamberlain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/coredump.c | 8 ++++++++
kernel/umh.c | 5 +++++
2 files changed, 13 insertions(+)

--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -788,6 +788,14 @@ void do_coredump(const kernel_siginfo_t
if (displaced)
put_files_struct(displaced);
if (!dump_interrupted()) {
+ /*
+ * umh disabled with CONFIG_STATIC_USERMODEHELPER_PATH="" would
+ * have this set to NULL.
+ */
+ if (!cprm.file) {
+ pr_info("Core dump to |%s disabled\n", cn.corename);
+ goto close_fail;
+ }
file_start_write(cprm.file);
core_dumped = binfmt->core_dump(&cprm);
file_end_write(cprm.file);
--- a/kernel/umh.c
+++ b/kernel/umh.c
@@ -544,6 +544,11 @@ EXPORT_SYMBOL_GPL(fork_usermode_blob);
* Runs a user-space application. The application is started
* asynchronously if wait is not set, and runs as a child of system workqueues.
* (ie. it runs with full root capabilities and optimized affinity).
+ *
+ * Note: successful return value does not guarantee the helper was called at
+ * all. You can't rely on sub_info->{init,cleanup} being called even for
+ * UMH_WAIT_* wait modes as STATIC_USERMODEHELPER_PATH="" turns all helpers
+ * into a successful no-op.
*/
int call_usermodehelper_exec(struct subprocess_info *sub_info, int wait)
{


2020-05-13 11:18:54

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 058/118] sctp: Fix bundling of SHUTDOWN with COOKIE-ACK

From: Jere Leppänen <[email protected]>

commit 145cb2f7177d94bc54563ed26027e952ee0ae03c upstream.

When we start shutdown in sctp_sf_do_dupcook_a(), we want to bundle
the SHUTDOWN with the COOKIE-ACK to ensure that the peer receives them
at the same time and in the correct order. This bundling was broken by
commit 4ff40b86262b ("sctp: set chunk transport correctly when it's a
new asoc"), which assigns a transport for the COOKIE-ACK, but not for
the SHUTDOWN.

Fix this by passing a reference to the COOKIE-ACK chunk as an argument
to sctp_sf_do_9_2_start_shutdown() and onward to
sctp_make_shutdown(). This way the SHUTDOWN chunk is assigned the same
transport as the COOKIE-ACK chunk, which allows them to be bundled.

In sctp_sf_do_9_2_start_shutdown(), the void *arg parameter was
previously unused. Now that we're taking it into use, it must be a
valid pointer to a chunk, or NULL. There is only one call site where
it's not, in sctp_sf_autoclose_timer_expire(). Fix that too.

Fixes: 4ff40b86262b ("sctp: set chunk transport correctly when it's a new asoc")
Signed-off-by: Jere Leppänen <[email protected]>
Acked-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Cc: Guenter Roeck <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/sctp/sm_statefuns.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -1865,7 +1865,7 @@ static enum sctp_disposition sctp_sf_do_
*/
sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
return sctp_sf_do_9_2_start_shutdown(net, ep, asoc,
- SCTP_ST_CHUNK(0), NULL,
+ SCTP_ST_CHUNK(0), repl,
commands);
} else {
sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
@@ -5470,7 +5470,7 @@ enum sctp_disposition sctp_sf_do_9_2_sta
* in the Cumulative TSN Ack field the last sequential TSN it
* has received from the peer.
*/
- reply = sctp_make_shutdown(asoc, NULL);
+ reply = sctp_make_shutdown(asoc, arg);
if (!reply)
goto nomem;

@@ -6068,7 +6068,7 @@ enum sctp_disposition sctp_sf_autoclose_
disposition = SCTP_DISPOSITION_CONSUME;
if (sctp_outq_is_empty(&asoc->outqueue)) {
disposition = sctp_sf_do_9_2_start_shutdown(net, ep, asoc, type,
- arg, commands);
+ NULL, commands);
}

return disposition;


2020-05-13 11:18:57

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 067/118] tracing: Wait for preempt irq delay thread to finish

From: Steven Rostedt (VMware) <[email protected]>

commit d16a8c31077e75ecb9427fbfea59b74eed00f698 upstream.

Running on a slower machine, it is possible that the preempt delay kernel
thread may still be executing if the module is removed immediately after
being added, and this can cause the kernel to crash because the kernel
thread might still be executing after its code has been removed.

There's no reason that the caller of the code shouldn't just wait for the
delay thread to finish, as the thread can also be created by a trigger in
the sysfs code, which also has the same issues.

Link: http://lore.kernel.org/r/[email protected]

Cc: [email protected]
Fixes: 793937236d1ee ("lib: Add module for testing preemptoff/irqsoff latency tracers")
Reported-by: Xiao Yang <[email protected]>
Reviewed-by: Xiao Yang <[email protected]>
Reviewed-by: Joel Fernandes <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
kernel/trace/preemptirq_delay_test.c | 30 ++++++++++++++++++++++++------
1 file changed, 24 insertions(+), 6 deletions(-)

--- a/kernel/trace/preemptirq_delay_test.c
+++ b/kernel/trace/preemptirq_delay_test.c
@@ -113,22 +113,42 @@ static int preemptirq_delay_run(void *da

for (i = 0; i < s; i++)
(testfuncs[i])(i);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ while (!kthread_should_stop()) {
+ schedule();
+ set_current_state(TASK_INTERRUPTIBLE);
+ }
+
+ __set_current_state(TASK_RUNNING);
+
return 0;
}

-static struct task_struct *preemptirq_start_test(void)
+static int preemptirq_run_test(void)
{
+ struct task_struct *task;
+
char task_name[50];

snprintf(task_name, sizeof(task_name), "%s_test", test_mode);
- return kthread_run(preemptirq_delay_run, NULL, task_name);
+ task = kthread_run(preemptirq_delay_run, NULL, task_name);
+ if (IS_ERR(task))
+ return PTR_ERR(task);
+ if (task)
+ kthread_stop(task);
+ return 0;
}


static ssize_t trigger_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
- preemptirq_start_test();
+ ssize_t ret;
+
+ ret = preemptirq_run_test();
+ if (ret)
+ return ret;
return count;
}

@@ -148,11 +168,9 @@ static struct kobject *preemptirq_delay_

static int __init preemptirq_delay_init(void)
{
- struct task_struct *test_task;
int retval;

- test_task = preemptirq_start_test();
- retval = PTR_ERR_OR_ZERO(test_task);
+ retval = preemptirq_run_test();
if (retval != 0)
return retval;
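
The shape of the fix is the standard kthread lifecycle handshake: the thread does
its work and then parks in an interruptible loop until kthread_should_stop(), and
the creator calls kthread_stop(), which blocks until the thread function has
returned, so no thread code can still be running when the module text is freed.
A generic sketch of that handshake (illustrative, hypothetical names):

	#include <linux/kthread.h>
	#include <linux/sched.h>
	#include <linux/err.h>

	static int example_thread_fn(void *data)
	{
		/* ... the one-shot test work would run here ... */

		set_current_state(TASK_INTERRUPTIBLE);
		while (!kthread_should_stop()) {
			schedule();
			set_current_state(TASK_INTERRUPTIBLE);
		}
		__set_current_state(TASK_RUNNING);
		return 0;
	}

	static int example_run_and_wait(void)
	{
		struct task_struct *task;

		task = kthread_run(example_thread_fn, NULL, "example_test");
		if (IS_ERR(task))
			return PTR_ERR(task);

		return kthread_stop(task);	/* waits for the thread to exit */
	}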



2020-05-13 11:19:12

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 050/118] net: mvpp2: prevent buffer overflow in mvpp22_rss_ctx()

From: Dan Carpenter <[email protected]>

[ Upstream commit 39bd16df7c31bb8cf5dfd0c88e42abd5ae10029d ]

The "rss_context" variable comes from the user via ethtool_get_rxfh().
It can be any u32 value except zero. Eventually it gets passed to
mvpp22_rss_ctx() and if it is over MVPP22_N_RSS_TABLES (8) then it
results in an array overflow.

Fixes: 895586d5dc32 ("net: mvpp2: cls: Use RSS contexts to handle RSS tables")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -4325,6 +4325,8 @@ static int mvpp2_ethtool_get_rxfh_contex

if (!mvpp22_rss_is_supported())
return -EOPNOTSUPP;
+ if (rss_context >= MVPP22_N_RSS_TABLES)
+ return -EINVAL;

if (hfunc)
*hfunc = ETH_RSS_HASH_CRC32;


2020-05-13 11:19:19

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 069/118] crypto: arch/nhpoly1305 - process in explicit 4k chunks

From: Jason A. Donenfeld <[email protected]>

commit a9a8ba90fa5857c2c8a0e32eef2159cec717da11 upstream.

Rather than chunking via PAGE_SIZE, this commit changes the arch
implementations to chunk in explicit 4k parts, so that calculations on
maximum acceptable latency don't suddenly become invalid on platforms
where PAGE_SIZE isn't 4k, such as arm64.

Fixes: 0f961f9f670e ("crypto: x86/nhpoly1305 - add AVX2 accelerated NHPoly1305")
Fixes: 012c82388c03 ("crypto: x86/nhpoly1305 - add SSE2 accelerated NHPoly1305")
Fixes: a00fa0c88774 ("crypto: arm64/nhpoly1305 - add NEON-accelerated NHPoly1305")
Fixes: 16aae3595a9d ("crypto: arm/nhpoly1305 - add NEON-accelerated NHPoly1305")
Cc: [email protected]
Signed-off-by: Jason A. Donenfeld <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm/crypto/nhpoly1305-neon-glue.c | 2 +-
arch/arm64/crypto/nhpoly1305-neon-glue.c | 2 +-
arch/x86/crypto/nhpoly1305-avx2-glue.c | 2 +-
arch/x86/crypto/nhpoly1305-sse2-glue.c | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)

--- a/arch/arm/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm/crypto/nhpoly1305-neon-glue.c
@@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct
return crypto_nhpoly1305_update(desc, src, srclen);

do {
- unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+ unsigned int n = min_t(unsigned int, srclen, SZ_4K);

kernel_neon_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
--- a/arch/arm64/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm64/crypto/nhpoly1305-neon-glue.c
@@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct
return crypto_nhpoly1305_update(desc, src, srclen);

do {
- unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+ unsigned int n = min_t(unsigned int, srclen, SZ_4K);

kernel_neon_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
--- a/arch/x86/crypto/nhpoly1305-avx2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c
@@ -29,7 +29,7 @@ static int nhpoly1305_avx2_update(struct
return crypto_nhpoly1305_update(desc, src, srclen);

do {
- unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+ unsigned int n = min_t(unsigned int, srclen, SZ_4K);

kernel_fpu_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
--- a/arch/x86/crypto/nhpoly1305-sse2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c
@@ -29,7 +29,7 @@ static int nhpoly1305_sse2_update(struct
return crypto_nhpoly1305_update(desc, src, srclen);

do {
- unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+ unsigned int n = min_t(unsigned int, srclen, SZ_4K);

kernel_fpu_begin();
crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);


2020-05-13 11:19:25

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 013/118] cxgb4: fix EOTID leak when disabling TC-MQPRIO offload

From: Rahul Lakkireddy <[email protected]>

[ Upstream commit 69422a7e5d578aab277091f4ebb7c1b387f3e355 ]

Under heavy load, the EOTID termination FLOWC request fails to get
enqueued to the end of the Tx ring due to lack of credits. This
results in an EOTID leak.

When disabling TC-MQPRIO offload, the link is already brought down
to clean up EOTIDs. So, flush any pending enqueued skbs that can't be
sent out on the wire, to make room for the FLOWC request. Also, move the
FLOWC descriptor consumption logic closer to when the FLOWC request is
actually posted to hardware.

Fixes: 0e395b3cb1fb ("cxgb4: add FLOWC based QoS offload")
Signed-off-by: Rahul Lakkireddy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/chelsio/cxgb4/sge.c | 39 ++++++++++++++++++++++++++++---
1 file changed, 36 insertions(+), 3 deletions(-)

--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -2202,6 +2202,9 @@ static void ethofld_hard_xmit(struct net
if (unlikely(skip_eotx_wr)) {
start = (u64 *)wr;
eosw_txq->state = next_state;
+ eosw_txq->cred -= wrlen16;
+ eosw_txq->ncompl++;
+ eosw_txq->last_compl = 0;
goto write_wr_headers;
}

@@ -2360,6 +2363,34 @@ netdev_tx_t t4_start_xmit(struct sk_buff
return cxgb4_eth_xmit(skb, dev);
}

+static void eosw_txq_flush_pending_skbs(struct sge_eosw_txq *eosw_txq)
+{
+ int pktcount = eosw_txq->pidx - eosw_txq->last_pidx;
+ int pidx = eosw_txq->pidx;
+ struct sk_buff *skb;
+
+ if (!pktcount)
+ return;
+
+ if (pktcount < 0)
+ pktcount += eosw_txq->ndesc;
+
+ while (pktcount--) {
+ pidx--;
+ if (pidx < 0)
+ pidx += eosw_txq->ndesc;
+
+ skb = eosw_txq->desc[pidx].skb;
+ if (skb) {
+ dev_consume_skb_any(skb);
+ eosw_txq->desc[pidx].skb = NULL;
+ eosw_txq->inuse--;
+ }
+ }
+
+ eosw_txq->pidx = eosw_txq->last_pidx + 1;
+}
+
/**
* cxgb4_ethofld_send_flowc - Send ETHOFLD flowc request to bind eotid to tc.
* @dev - netdevice
@@ -2435,9 +2466,11 @@ int cxgb4_ethofld_send_flowc(struct net_
FW_FLOWC_MNEM_EOSTATE_CLOSING :
FW_FLOWC_MNEM_EOSTATE_ESTABLISHED);

- eosw_txq->cred -= len16;
- eosw_txq->ncompl++;
- eosw_txq->last_compl = 0;
+ /* Free up any pending skbs to ensure there's room for
+ * termination FLOWC.
+ */
+ if (tc == FW_SCHED_CLS_NONE)
+ eosw_txq_flush_pending_skbs(eosw_txq);

ret = eosw_txq_enqueue(eosw_txq, skb);
if (ret) {


2020-05-13 13:49:35

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review


On 13/05/2020 10:43, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.6.13 release.
> There are 118 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.6.13-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.6.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

All tests are passing for Tegra ...

Test results for stable-v5.6:
13 builds: 13 pass, 0 fail
26 boots: 26 pass, 0 fail
42 tests: 42 pass, 0 fail

Linux version: 5.6.13-rc1-gf1d28d1c7608
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000,
tegra194-p2972-0000, tegra20-ventana,
tegra210-p2371-2180, tegra210-p3450-0000,
tegra30-cardhu-a04

Cheers
Jon

--
nvpublic

2020-05-13 17:07:10

by Guenter Roeck

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review

On Wed, May 13, 2020 at 11:43:39AM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.6.13 release.
> There are 118 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> Anything received after that time might be too late.
>

Build results:
total: 155 pass: 155 fail: 0
Qemu test results:
total: 431 pass: 431 fail: 0

Guenter

2020-05-13 17:28:39

by Naresh Kamboju

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review

On Wed, 13 May 2020 at 15:22, Greg Kroah-Hartman
<[email protected]> wrote:
>
> This is the start of the stable review cycle for the 5.6.13 release.
> There are 118 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.6.13-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.6.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

Results from Linaro’s test farm.
No regressions on arm64, arm, x86_64, and i386.

Summary
------------------------------------------------------------------------

kernel: 5.6.13-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-5.6.y
git commit: f1d28d1c7608478dd10b7a36c40f2375bcc1648e
git describe: v5.6.12-119-gf1d28d1c7608
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-5.6-oe/build/v5.6.12-119-gf1d28d1c7608

No regressions (compared to build v5.6.12)

No fixes (compared to build v5.6.12)

Ran 35302 total tests in the following environments and test suites.

Environments
--------------
- dragonboard-410c
- hi6220-hikey
- i386
- juno-r2
- juno-r2-compat
- juno-r2-kasan
- nxp-ls2088
- qemu_arm
- qemu_arm64
- qemu_i386
- qemu_x86_64
- x15
- x86
- x86-kasan

Test Suites
-----------
* build
* install-android-platform-tools-r2600
* install-android-platform-tools-r2800
* kselftest
* kselftest/drivers
* kselftest/filesystems
* libgpiod
* linux-log-parser
* ltp-containers-tests
* ltp-cve-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-ipc-tests
* ltp-sched-tests
* ltp-syscalls-tests
* perf
* ltp-commands-tests
* ltp-dio-tests
* ltp-io-tests
* ltp-math-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-securebits-tests
* network-basic-tests
* kselftest/net
* kselftest/networking
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-cpuhotplug-tests
* ltp-crypto-tests
* ltp-fs-tests
* ltp-hugetlb-tests
* ltp-mm-tests
* ltp-open-posix-tests
* v4l2-compliance
* kvm-unit-tests
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-native/drivers
* kselftest-vsyscall-mode-native/filesystems
* kselftest-vsyscall-mode-native/net
* kselftest-vsyscall-mode-native/networking
* kselftest-vsyscall-mode-none
* kselftest-vsyscall-mode-none/drivers
* kselftest-vsyscall-mode-none/filesystems
* kselftest-vsyscall-mode-none/net
* kselftest-vsyscall-mode-none/networking

--
Linaro LKFT
https://lkft.linaro.org

2020-05-13 20:30:58

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 049/118] net/mlx5e: Fix q counters on uplink representors

From: Roi Dayan <[email protected]>

[ Upstream commit 67b38de646894c9a94fe4d6d17719e70cc6028eb ]

We need to allocate the q counters before init_rx, which needs them
when creating the rq.

Fixes: 8520fa57a4e9 ("net/mlx5e: Create q counters on uplink representors")
Signed-off-by: Roi Dayan <[email protected]>
Reviewed-by: Vlad Buslov <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -1692,19 +1692,14 @@ static void mlx5e_cleanup_rep_rx(struct

static int mlx5e_init_ul_rep_rx(struct mlx5e_priv *priv)
{
- int err = mlx5e_init_rep_rx(priv);
-
- if (err)
- return err;
-
mlx5e_create_q_counters(priv);
- return 0;
+ return mlx5e_init_rep_rx(priv);
}

static void mlx5e_cleanup_ul_rep_rx(struct mlx5e_priv *priv)
{
- mlx5e_destroy_q_counters(priv);
mlx5e_cleanup_rep_rx(priv);
+ mlx5e_destroy_q_counters(priv);
}

static int mlx5e_init_uplink_rep_tx(struct mlx5e_rep_priv *rpriv)


2020-05-13 20:31:31

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 056/118] wireguard: send/receive: cond_resched() when processing worker ringbuffers

From: "Jason A. Donenfeld" <[email protected]>

[ Upstream commit 4005f5c3c9d006157ba716594e0d70c88a235c5e ]

Users with pathological hardware reported CPU stalls on
CONFIG_PREEMPT_VOLUNTARY=y, because the ringbuffers would stay full,
meaning these workers would never terminate. That turned out not to be
okay on systems without forced preemption, which Sultan observed. This commit
adds a cond_resched() to the bottom of each loop iteration, so that
these workers don't hog the core. Note that we don't need this on the
napi poll worker, since that terminates after its budget is expended.
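
A minimal sketch of the pattern being added, for illustration only; the
ring helpers here are hypothetical stand-ins, not WireGuard functions,
and only the need_resched()/cond_resched() pairing mirrors the hunks
below:

#include <linux/sched.h>
#include <linux/workqueue.h>

/* Hypothetical worker draining a ring that may be refilled faster than
 * it is drained: yield between entries so the loop cannot monopolize a
 * CPU on CONFIG_PREEMPT_VOLUNTARY=y.
 */
static void example_ring_worker(struct work_struct *work)
{
        void *entry;

        while ((entry = example_ring_pop()) != NULL) {  /* hypothetical */
                example_process(entry);                 /* hypothetical */

                if (need_resched())
                        cond_resched();
        }
}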

Suggested-by: Sultan Alsawaf <[email protected]>
Reported-by: Wang Jian <[email protected]>
Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/wireguard/receive.c | 2 ++
drivers/net/wireguard/send.c | 4 ++++
2 files changed, 6 insertions(+)

--- a/drivers/net/wireguard/receive.c
+++ b/drivers/net/wireguard/receive.c
@@ -516,6 +516,8 @@ void wg_packet_decrypt_worker(struct wor
&PACKET_CB(skb)->keypair->receiving)) ?
PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
wg_queue_enqueue_per_peer_napi(skb, state);
+ if (need_resched())
+ cond_resched();
}
}

--- a/drivers/net/wireguard/send.c
+++ b/drivers/net/wireguard/send.c
@@ -281,6 +281,8 @@ void wg_packet_tx_worker(struct work_str

wg_noise_keypair_put(keypair, false);
wg_peer_put(peer);
+ if (need_resched())
+ cond_resched();
}
}

@@ -305,6 +307,8 @@ void wg_packet_encrypt_worker(struct wor
wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first,
state);

+ if (need_resched())
+ cond_resched();
}
}



2020-05-13 20:31:58

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 093/118] batman-adv: Fix refcnt leak in batadv_show_throughput_override

From: Xiyu Yang <[email protected]>

commit f872de8185acf1b48b954ba5bd8f9bc0a0d14016 upstream.

batadv_show_throughput_override() invokes batadv_hardif_get_by_netdev(),
which gets a batadv_hard_iface object from net_dev with increased refcnt
and its reference is assigned to a local pointer 'hard_iface'.

When batadv_show_throughput_override() returns, "hard_iface" becomes
invalid, so the refcount should be decreased to keep refcount balanced.

The issue happens in the normal path of
batadv_show_throughput_override(), which forgets to decrease the refcnt
increased by batadv_hardif_get_by_netdev() before the function returns,
causing a refcnt leak.

Fix this issue by calling batadv_hardif_put() before the
batadv_show_throughput_override() returns in the normal path.
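
For illustration, a simplified sketch of the fixed show handler (the
exact signature and error handling are simplified here; only the
get/put pairing reflects the change below):

static ssize_t example_show_throughput_override(struct net_device *net_dev,
                                                char *buff)
{
        struct batadv_hard_iface *hard_iface;
        u32 tp_override;

        /* Takes a reference on success ... */
        hard_iface = batadv_hardif_get_by_netdev(net_dev);
        if (!hard_iface)
                return 0;

        tp_override = atomic_read(&hard_iface->bat_v.throughput_override);

        /* ... which must be dropped on every return path, including the
         * normal one.
         */
        batadv_hardif_put(hard_iface);

        return sprintf(buff, "%u.%u MBit\n", tp_override / 10,
                       tp_override % 10);
}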

Fixes: 0b5ecc6811bd ("batman-adv: add throughput override attribute to hard_ifaces")
Signed-off-by: Xiyu Yang <[email protected]>
Signed-off-by: Xin Tan <[email protected]>
Signed-off-by: Sven Eckelmann <[email protected]>
Signed-off-by: Simon Wunderlich <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
net/batman-adv/sysfs.c | 1 +
1 file changed, 1 insertion(+)

--- a/net/batman-adv/sysfs.c
+++ b/net/batman-adv/sysfs.c
@@ -1190,6 +1190,7 @@ static ssize_t batadv_show_throughput_ov

tp_override = atomic_read(&hard_iface->bat_v.throughput_override);

+ batadv_hardif_put(hard_iface);
return sprintf(buff, "%u.%u MBit\n", tp_override / 10,
tp_override % 10);
}


2020-05-13 20:32:05

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 047/118] net/mlx5: Fix forced completion access non initialized command entry

From: Moshe Shemesh <[email protected]>

[ Upstream commit f3cb3cebe26ed4c8036adbd9448b372129d3c371 ]

mlx5_cmd_flush() will trigger forced completions to all valid command
entries. Triggered by an async event such as fast teardown, it can
happen at any stage of the command, including command initialization.
It will trigger forced completion and that can lead to completion on an
uninitialized command entry.

Setting MLX5_CMD_ENT_STATE_PENDING_COMP only after the command entry is
initialized ensures that forced completion is handled only for fully
initialized entries.
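
A simplified sketch of the ordering this enforces (the entry type and
helpers are hypothetical; only the placement of set_bit() relative to
initialization reflects the change below):

/* Submission: publish the entry to the flush path only once it is
 * fully initialized.
 */
static void example_cmd_submit(struct example_cmd_ent *ent)
{
        example_init_entry(ent);        /* hypothetical: layout, timers, ... */
        set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);
        example_post_to_fw(ent);        /* hypothetical */
}

/* Flush (e.g. fast teardown): only force-complete published entries. */
static void example_cmd_flush_one(struct example_cmd_ent *ent)
{
        if (test_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state))
                example_force_complete(ent);    /* hypothetical */
}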

Fixes: 73dd3a4839c1 ("net/mlx5: Avoid using pending command interface slots")
Signed-off-by: Moshe Shemesh <[email protected]>
Signed-off-by: Eran Ben Elisha <[email protected]>
Signed-off-by: Saeed Mahameed <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -888,7 +888,6 @@ static void cmd_work_handler(struct work
}

cmd->ent_arr[ent->idx] = ent;
- set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);
lay = get_inst(cmd, ent->idx);
ent->lay = lay;
memset(lay, 0, sizeof(*lay));
@@ -910,6 +909,7 @@ static void cmd_work_handler(struct work

if (ent->callback)
schedule_delayed_work(&ent->cb_timeout_work, cb_timeout);
+ set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);

/* Skip sending command to fw if internal error */
if (pci_channel_offline(dev->pdev) ||


2020-05-13 20:32:14

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 054/118] net: enetc: fix an issue about leak system resources

From: Dejin Zheng <[email protected]>

[ Upstream commit d975cb7ea915e64a3ebcfef8a33051f3e6bf22a8 ]

The related system resources were not released when enetc_hw_alloc()
returned an error in enetc_pci_mdio_probe(). Add iounmap() to the error
handling label "err_hw_alloc" to fix it.
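
This is the usual reverse-order unwind of error labels; a hypothetical
probe sketch (names are illustrative, not the enetc code) showing where
iounmap() has to live:

static int example_probe(struct pci_dev *pdev)
{
        void __iomem *port_regs;
        int err;

        err = pci_enable_device(pdev);
        if (err)
                return err;

        port_regs = ioremap(pci_resource_start(pdev, 0),
                            pci_resource_len(pdev, 0));
        if (!port_regs) {
                err = -ENXIO;
                goto err_ioremap;
        }

        err = example_hw_alloc(pdev, port_regs);        /* hypothetical */
        if (err)
                goto err_hw_alloc;

        return 0;

err_hw_alloc:
        iounmap(port_regs);             /* mapping exists on this path ... */
err_ioremap:
        pci_disable_device(pdev);       /* ... but not on this one */
        return err;
}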

Fixes: 6517798dd3432a ("enetc: Make MDIO accessors more generic and export to include/linux/fsl")
Cc: Andy Shevchenko <[email protected]>
Signed-off-by: Dejin Zheng <[email protected]>
Reviewed-by: Vladimir Oltean <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
@@ -74,8 +74,8 @@ err_pci_mem_reg:
pci_disable_device(pdev);
err_pci_enable:
err_mdiobus_alloc:
- iounmap(port_regs);
err_hw_alloc:
+ iounmap(port_regs);
err_ioremap:
return err;
}


2020-05-13 20:32:32

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 070/118] crypto: arch/lib - limit simd usage to 4k chunks

From: Jason A. Donenfeld <[email protected]>

commit 706024a52c614b478b63f7728d202532ce6591a9 upstream.

The initial Zinc patchset, after some mailing list discussion, contained
code to ensure that kernel_fpu_begin() would not be kept on for more than
a 4k chunk, since it disables preemption. The choice of 4k isn't totally
scientific, but it's not a bad guess either, and it's what's used in
the x86 poly1305, blake2s, and nhpoly1305 code already (in the form
of PAGE_SIZE, which this commit corrects to be explicitly 4k for the
former two).

Ard did some back of the envelope calculations and found that
at 5 cycles/byte (overestimate) on a 1ghz processor (pretty slow), 4k
means we have a maximum preemption disabling of 20us, which Sebastian
confirmed was probably a good limit.
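
Spelled out with the figures above: 4096 bytes * 5 cycles/byte = 20480
cycles, which on a 1 GHz core is roughly 20.5 us of preemption-disabled
time per chunk.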

Unfortunately the chunking appears to have been left out of the final
patchset that added the glue code. So, this commit adds it back in.

Fixes: 84e03fa39fbe ("crypto: x86/chacha - expose SIMD ChaCha routine as library function")
Fixes: b3aad5bad26a ("crypto: arm64/chacha - expose arm64 ChaCha routine as library function")
Fixes: a44a3430d71b ("crypto: arm/chacha - expose ARM ChaCha routine as library function")
Fixes: d7d7b8535662 ("crypto: x86/poly1305 - wire up faster implementations for kernel")
Fixes: f569ca164751 ("crypto: arm64/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
Fixes: a6b803b3ddc7 ("crypto: arm/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation")
Fixes: ed0356eda153 ("crypto: blake2s - x86_64 SIMD implementation")
Cc: Eric Biggers <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: [email protected]
Signed-off-by: Jason A. Donenfeld <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
arch/arm/crypto/chacha-glue.c | 14 +++++++++++---
arch/arm/crypto/poly1305-glue.c | 15 +++++++++++----
arch/arm64/crypto/chacha-neon-glue.c | 14 +++++++++++---
arch/arm64/crypto/poly1305-glue.c | 15 +++++++++++----
arch/x86/crypto/blake2s-glue.c | 10 ++++------
arch/x86/crypto/chacha_glue.c | 14 +++++++++++---
arch/x86/crypto/poly1305_glue.c | 13 ++++++-------
7 files changed, 65 insertions(+), 30 deletions(-)

--- a/arch/arm/crypto/chacha-glue.c
+++ b/arch/arm/crypto/chacha-glue.c
@@ -91,9 +91,17 @@ void chacha_crypt_arch(u32 *state, u8 *d
return;
}

- kernel_neon_begin();
- chacha_doneon(state, dst, src, bytes, nrounds);
- kernel_neon_end();
+ do {
+ unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+ kernel_neon_begin();
+ chacha_doneon(state, dst, src, todo, nrounds);
+ kernel_neon_end();
+
+ bytes -= todo;
+ src += todo;
+ dst += todo;
+ } while (bytes);
}
EXPORT_SYMBOL(chacha_crypt_arch);

--- a/arch/arm/crypto/poly1305-glue.c
+++ b/arch/arm/crypto/poly1305-glue.c
@@ -160,13 +160,20 @@ void poly1305_update_arch(struct poly130
unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);

if (static_branch_likely(&have_neon) && do_neon) {
- kernel_neon_begin();
- poly1305_blocks_neon(&dctx->h, src, len, 1);
- kernel_neon_end();
+ do {
+ unsigned int todo = min_t(unsigned int, len, SZ_4K);
+
+ kernel_neon_begin();
+ poly1305_blocks_neon(&dctx->h, src, todo, 1);
+ kernel_neon_end();
+
+ len -= todo;
+ src += todo;
+ } while (len);
} else {
poly1305_blocks_arm(&dctx->h, src, len, 1);
+ src += len;
}
- src += len;
nbytes %= POLY1305_BLOCK_SIZE;
}

--- a/arch/arm64/crypto/chacha-neon-glue.c
+++ b/arch/arm64/crypto/chacha-neon-glue.c
@@ -87,9 +87,17 @@ void chacha_crypt_arch(u32 *state, u8 *d
!crypto_simd_usable())
return chacha_crypt_generic(state, dst, src, bytes, nrounds);

- kernel_neon_begin();
- chacha_doneon(state, dst, src, bytes, nrounds);
- kernel_neon_end();
+ do {
+ unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+ kernel_neon_begin();
+ chacha_doneon(state, dst, src, todo, nrounds);
+ kernel_neon_end();
+
+ bytes -= todo;
+ src += todo;
+ dst += todo;
+ } while (bytes);
}
EXPORT_SYMBOL(chacha_crypt_arch);

--- a/arch/arm64/crypto/poly1305-glue.c
+++ b/arch/arm64/crypto/poly1305-glue.c
@@ -143,13 +143,20 @@ void poly1305_update_arch(struct poly130
unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);

if (static_branch_likely(&have_neon) && crypto_simd_usable()) {
- kernel_neon_begin();
- poly1305_blocks_neon(&dctx->h, src, len, 1);
- kernel_neon_end();
+ do {
+ unsigned int todo = min_t(unsigned int, len, SZ_4K);
+
+ kernel_neon_begin();
+ poly1305_blocks_neon(&dctx->h, src, todo, 1);
+ kernel_neon_end();
+
+ len -= todo;
+ src += todo;
+ } while (len);
} else {
poly1305_blocks(&dctx->h, src, len, 1);
+ src += len;
}
- src += len;
nbytes %= POLY1305_BLOCK_SIZE;
}

--- a/arch/x86/crypto/blake2s-glue.c
+++ b/arch/x86/crypto/blake2s-glue.c
@@ -32,16 +32,16 @@ void blake2s_compress_arch(struct blake2
const u32 inc)
{
/* SIMD disables preemption, so relax after processing each page. */
- BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8);
+ BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);

if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) {
blake2s_compress_generic(state, block, nblocks, inc);
return;
}

- for (;;) {
+ do {
const size_t blocks = min_t(size_t, nblocks,
- PAGE_SIZE / BLAKE2S_BLOCK_SIZE);
+ SZ_4K / BLAKE2S_BLOCK_SIZE);

kernel_fpu_begin();
if (IS_ENABLED(CONFIG_AS_AVX512) &&
@@ -52,10 +52,8 @@ void blake2s_compress_arch(struct blake2
kernel_fpu_end();

nblocks -= blocks;
- if (!nblocks)
- break;
block += blocks * BLAKE2S_BLOCK_SIZE;
- }
+ } while (nblocks);
}
EXPORT_SYMBOL(blake2s_compress_arch);

--- a/arch/x86/crypto/chacha_glue.c
+++ b/arch/x86/crypto/chacha_glue.c
@@ -154,9 +154,17 @@ void chacha_crypt_arch(u32 *state, u8 *d
bytes <= CHACHA_BLOCK_SIZE)
return chacha_crypt_generic(state, dst, src, bytes, nrounds);

- kernel_fpu_begin();
- chacha_dosimd(state, dst, src, bytes, nrounds);
- kernel_fpu_end();
+ do {
+ unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+ kernel_fpu_begin();
+ chacha_dosimd(state, dst, src, todo, nrounds);
+ kernel_fpu_end();
+
+ bytes -= todo;
+ src += todo;
+ dst += todo;
+ } while (bytes);
}
EXPORT_SYMBOL(chacha_crypt_arch);

--- a/arch/x86/crypto/poly1305_glue.c
+++ b/arch/x86/crypto/poly1305_glue.c
@@ -91,8 +91,8 @@ static void poly1305_simd_blocks(void *c
struct poly1305_arch_internal *state = ctx;

/* SIMD disables preemption, so relax after processing each page. */
- BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE ||
- PAGE_SIZE % POLY1305_BLOCK_SIZE);
+ BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
+ SZ_4K % POLY1305_BLOCK_SIZE);

if (!IS_ENABLED(CONFIG_AS_AVX) || !static_branch_likely(&poly1305_use_avx) ||
(len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
@@ -102,8 +102,8 @@ static void poly1305_simd_blocks(void *c
return;
}

- for (;;) {
- const size_t bytes = min_t(size_t, len, PAGE_SIZE);
+ do {
+ const size_t bytes = min_t(size_t, len, SZ_4K);

kernel_fpu_begin();
if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512))
@@ -113,11 +113,10 @@ static void poly1305_simd_blocks(void *c
else
poly1305_blocks_avx(ctx, inp, bytes, padbit);
kernel_fpu_end();
+
len -= bytes;
- if (!len)
- break;
inp += bytes;
- }
+ } while (len);
}

static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],


2020-05-13 20:32:41

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 085/118] mm: limit boost_watermark on small zones

From: Henry Willard <[email protected]>

commit 14f69140ff9c92a0928547ceefb153a842e8492c upstream.

Commit 1c30844d2dfe ("mm: reclaim small amounts of memory when an
external fragmentation event occurs") adds a boost_watermark() function
which increases the min watermark in a zone by at least
pageblock_nr_pages or the number of pages in a page block.

On Arm64, with 64K pages and 512M huge pages, this is 8192 pages or
512M. It does this regardless of the number of pages managed in
the zone or the likelihood of success.

This can put the zone immediately under water in terms of allocating
pages from the zone, and can cause a small machine to fail immediately
due to OoM. Unlike set_recommended_min_free_kbytes(), which
substantially increases min_free_kbytes and is tied to THP,
boost_watermark() can be called even if THP is not active.

The problem is most likely to appear on architectures such as Arm64
where pageblock_nr_pages is very large.

It is desirable to run the kdump capture kernel in as small a space as
possible to avoid wasting memory. In some architectures, such as Arm64,
there are restrictions on where the capture kernel can run, and
therefore, the space available. A capture kernel running in 768M can
fail due to OoM immediately after boost_watermark() sets the min in zone
DMA32, where most of the memory is, to 512M. It fails even though there
is over 500M of free memory. With boost_watermark() suppressed, the
capture kernel can run successfully in 448M.

This patch limits boost_watermark() to boosting a zone's min watermark
only when there are enough pages that the boost will produce positive
results. In this case that is estimated to be four times as many pages
as pageblock_nr_pages.
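
To make the threshold concrete with the arm64 example above:
pageblock_nr_pages = 512M / 64K = 8192 pages, so the boost is skipped
unless the zone manages at least 4 * 8192 = 32768 pages, i.e. 2 GiB of
memory with 64K pages.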

Mel said:

: There is no harm in marking it stable. Clearly it does not happen very
: often but it's not impossible. 32-bit x86 is a lot less common now
: which would previously have been vulnerable to triggering this easily.
: ppc64 has a larger base page size but typically only has one zone.
: arm64 is likely the most vulnerable, particularly when CMA is
: configured with a small movable zone.

Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external fragmentation event occurs")
Signed-off-by: Henry Willard <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
mm/page_alloc.c | 8 ++++++++
1 file changed, 8 insertions(+)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2351,6 +2351,14 @@ static inline void boost_watermark(struc

if (!watermark_boost_factor)
return;
+ /*
+ * Don't bother in zones that are unlikely to produce results.
+ * On small machines, including kdump capture kernels running
+ * in a small area, boosting the watermark can cause an out of
+ * memory situation immediately.
+ */
+ if ((pageblock_nr_pages * 4) > zone_managed_pages(zone))
+ return;

max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
watermark_boost_factor, 10000);


2020-05-13 20:32:50

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 035/118] sch_choke: avoid potential panic in choke_reset()

From: Eric Dumazet <[email protected]>

[ Upstream commit 8738c85c72b3108c9b9a369a39868ba5f8e10ae0 ]

If choke_init() could not allocate q->tab, we would crash later
in choke_reset().

BUG: KASAN: null-ptr-deref in memset include/linux/string.h:366 [inline]
BUG: KASAN: null-ptr-deref in choke_reset+0x208/0x340 net/sched/sch_choke.c:326
Write of size 8 at addr 0000000000000000 by task syz-executor822/7022

CPU: 1 PID: 7022 Comm: syz-executor822 Not tainted 5.7.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x188/0x20d lib/dump_stack.c:118
__kasan_report.cold+0x5/0x4d mm/kasan/report.c:515
kasan_report+0x33/0x50 mm/kasan/common.c:625
check_memory_region_inline mm/kasan/generic.c:187 [inline]
check_memory_region+0x141/0x190 mm/kasan/generic.c:193
memset+0x20/0x40 mm/kasan/common.c:85
memset include/linux/string.h:366 [inline]
choke_reset+0x208/0x340 net/sched/sch_choke.c:326
qdisc_reset+0x6b/0x520 net/sched/sch_generic.c:910
dev_deactivate_queue.constprop.0+0x13c/0x240 net/sched/sch_generic.c:1138
netdev_for_each_tx_queue include/linux/netdevice.h:2197 [inline]
dev_deactivate_many+0xe2/0xba0 net/sched/sch_generic.c:1195
dev_deactivate+0xf8/0x1c0 net/sched/sch_generic.c:1233
qdisc_graft+0xd25/0x1120 net/sched/sch_api.c:1051
tc_modify_qdisc+0xbab/0x1a00 net/sched/sch_api.c:1670
rtnetlink_rcv_msg+0x44e/0xad0 net/core/rtnetlink.c:5454
netlink_rcv_skb+0x15a/0x410 net/netlink/af_netlink.c:2469
netlink_unicast_kernel net/netlink/af_netlink.c:1303 [inline]
netlink_unicast+0x537/0x740 net/netlink/af_netlink.c:1329
netlink_sendmsg+0x882/0xe10 net/netlink/af_netlink.c:1918
sock_sendmsg_nosec net/socket.c:652 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:672
____sys_sendmsg+0x6bf/0x7e0 net/socket.c:2362
___sys_sendmsg+0x100/0x170 net/socket.c:2416
__sys_sendmsg+0xec/0x1b0 net/socket.c:2449
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295

Fixes: 77e62da6e60c ("sch_choke: drop all packets in queue during reset")
Signed-off-by: Eric Dumazet <[email protected]>
Reported-by: syzbot <[email protected]>
Cc: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/sched/sch_choke.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/net/sched/sch_choke.c
+++ b/net/sched/sch_choke.c
@@ -323,7 +323,8 @@ static void choke_reset(struct Qdisc *sc

sch->q.qlen = 0;
sch->qstats.backlog = 0;
- memset(q->tab, 0, (q->tab_mask + 1) * sizeof(struct sk_buff *));
+ if (q->tab)
+ memset(q->tab, 0, (q->tab_mask + 1) * sizeof(struct sk_buff *));
q->head = q->tail = 0;
red_restart(&q->vars);
}


2020-05-13 20:33:06

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 087/118] ceph: demote quotarealm lookup warning to a debug message

From: Luis Henriques <[email protected]>

commit 12ae44a40a1be891bdc6463f8c7072b4ede746ef upstream.

A misconfigured cephx can easily result in the kernel client
flooding the logs with:

ceph: Can't lookup inode 1 (err: -13)

Change this message to debug level.

Cc: [email protected]
URL: https://tracker.ceph.com/issues/44546
Signed-off-by: Luis Henriques <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
fs/ceph/quota.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/fs/ceph/quota.c
+++ b/fs/ceph/quota.c
@@ -159,8 +159,8 @@ static struct inode *lookup_quotarealm_i
}

if (IS_ERR(in)) {
- pr_warn("Can't lookup inode %llx (err: %ld)\n",
- realm->ino, PTR_ERR(in));
+ dout("Can't lookup inode %llx (err: %ld)\n",
+ realm->ino, PTR_ERR(in));
qri->timeout = jiffies + msecs_to_jiffies(60 * 1000); /* XXX */
} else {
qri->timeout = 0;


2020-05-13 20:33:34

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 117/118] fsnotify: replace inode pointer with an object id

From: Amir Goldstein <[email protected]>

[ Upstream commit dfc2d2594e4a79204a3967585245f00644b8f838 ]

The event inode field is used only for comparison in queue merges and
cannot be dereferenced after handle_event(), because it does not hold a
refcount on the inode.

Replace it with an abstract id to do the same thing.
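
After this change a merge check may only compare the opaque value; a
minimal sketch of the intended usage (simplified from the hunks below):

static bool example_should_merge(struct fsnotify_event *old,
                                 struct fsnotify_event *new)
{
        /* objectid is the inode's address captured at init time, but it
         * is never dereferenced -- it only has to distinguish "same
         * object" from "different object" while both events are queued.
         */
        return old->objectid == new->objectid;
}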

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Amir Goldstein <[email protected]>
Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/notify/fanotify/fanotify.c | 4 ++--
fs/notify/inotify/inotify_fsnotify.c | 4 ++--
fs/notify/inotify/inotify_user.c | 2 +-
include/linux/fsnotify_backend.h | 7 +++----
4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
index 5778d1347b351..14d0ac4664595 100644
--- a/fs/notify/fanotify/fanotify.c
+++ b/fs/notify/fanotify/fanotify.c
@@ -26,7 +26,7 @@ static bool should_merge(struct fsnotify_event *old_fsn,
old = FANOTIFY_E(old_fsn);
new = FANOTIFY_E(new_fsn);

- if (old_fsn->inode != new_fsn->inode || old->pid != new->pid ||
+ if (old_fsn->objectid != new_fsn->objectid || old->pid != new->pid ||
old->fh_type != new->fh_type || old->fh_len != new->fh_len)
return false;

@@ -314,7 +314,7 @@ struct fanotify_event *fanotify_alloc_event(struct fsnotify_group *group,
if (!event)
goto out;
init: __maybe_unused
- fsnotify_init_event(&event->fse, inode);
+ fsnotify_init_event(&event->fse, (unsigned long)inode);
event->mask = mask;
if (FAN_GROUP_FLAG(group, FAN_REPORT_TID))
event->pid = get_pid(task_pid(current));
diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c
index d510223d302ca..589dee9629938 100644
--- a/fs/notify/inotify/inotify_fsnotify.c
+++ b/fs/notify/inotify/inotify_fsnotify.c
@@ -39,7 +39,7 @@ static bool event_compare(struct fsnotify_event *old_fsn,
if (old->mask & FS_IN_IGNORED)
return false;
if ((old->mask == new->mask) &&
- (old_fsn->inode == new_fsn->inode) &&
+ (old_fsn->objectid == new_fsn->objectid) &&
(old->name_len == new->name_len) &&
(!old->name_len || !strcmp(old->name, new->name)))
return true;
@@ -118,7 +118,7 @@ int inotify_handle_event(struct fsnotify_group *group,
mask &= ~IN_ISDIR;

fsn_event = &event->fse;
- fsnotify_init_event(fsn_event, inode);
+ fsnotify_init_event(fsn_event, (unsigned long)inode);
event->mask = mask;
event->wd = i_mark->wd;
event->sync_cookie = cookie;
diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
index 107537a543fd8..81ffc8629fc4b 100644
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -635,7 +635,7 @@ static struct fsnotify_group *inotify_new_group(unsigned int max_events)
return ERR_PTR(-ENOMEM);
}
group->overflow_event = &oevent->fse;
- fsnotify_init_event(group->overflow_event, NULL);
+ fsnotify_init_event(group->overflow_event, 0);
oevent->mask = FS_Q_OVERFLOW;
oevent->wd = -1;
oevent->sync_cookie = 0;
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 1915bdba2fad9..64cfb5446f4d4 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -133,8 +133,7 @@ struct fsnotify_ops {
*/
struct fsnotify_event {
struct list_head list;
- /* inode may ONLY be dereferenced during handle_event(). */
- struct inode *inode; /* either the inode the event happened to or its parent */
+ unsigned long objectid; /* identifier for queue merges */
};

/*
@@ -500,10 +499,10 @@ extern void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info);
extern bool fsnotify_prepare_user_wait(struct fsnotify_iter_info *iter_info);

static inline void fsnotify_init_event(struct fsnotify_event *event,
- struct inode *inode)
+ unsigned long objectid)
{
INIT_LIST_HEAD(&event->list);
- event->inode = inode;
+ event->objectid = objectid;
}

#else
--
2.20.1



2020-05-13 20:33:55

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 033/118] net: usb: qmi_wwan: add support for DW5816e

From: Matt Jolly <[email protected]>

[ Upstream commit 57c7f2bd758eed867295c81d3527fff4fab1ed74 ]

Add support for Dell Wireless 5816e to drivers/net/usb/qmi_wwan.c

Signed-off-by: Matt Jolly <[email protected]>
Acked-by: Bjørn Mork <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/usb/qmi_wwan.c | 1 +
1 file changed, 1 insertion(+)

--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -1359,6 +1359,7 @@ static const struct usb_device_id produc
{QMI_FIXED_INTF(0x413c, 0x81b3, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
{QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */
{QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */
+ {QMI_FIXED_INTF(0x413c, 0x81cc, 8)}, /* Dell Wireless 5816e */
{QMI_FIXED_INTF(0x413c, 0x81d7, 0)}, /* Dell Wireless 5821e */
{QMI_FIXED_INTF(0x413c, 0x81d7, 1)}, /* Dell Wireless 5821e preproduction config */
{QMI_FIXED_INTF(0x413c, 0x81e0, 0)}, /* Dell Wireless 5821e with eSIM support*/


2020-05-13 20:34:01

by Greg KH

[permalink] [raw]
Subject: [PATCH 5.6 037/118] selftests: net: tcp_mmap: clear whole tcp_zerocopy_receive struct

From: Eric Dumazet <[email protected]>

[ Upstream commit bf5525f3a8e3248be5aa5defe5aaadd60e1c1ba1 ]

We added fields to the tcp_zerocopy_receive structure, so make sure to
clear all of them so we do not pass garbage to the kernel.

We were lucky because the recent additions added 'out' parameters, but
we still need to clean up our reference implementation before folks
copy/paste it.

Fixes: c8856c051454 ("tcp-zerocopy: Return inq along with tcp receive zerocopy.")
Fixes: 33946518d493 ("tcp-zerocopy: Return sk_err (if set) along with tcp receive zerocopy.")
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Arjun Roy <[email protected]>
Cc: Soheil Hassas Yeganeh <[email protected]>
Acked-by: Soheil Hassas Yeganeh <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
tools/testing/selftests/net/tcp_mmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/tools/testing/selftests/net/tcp_mmap.c
+++ b/tools/testing/selftests/net/tcp_mmap.c
@@ -165,9 +165,10 @@ void *child_thread(void *arg)
socklen_t zc_len = sizeof(zc);
int res;

+ memset(&zc, 0, sizeof(zc));
zc.address = (__u64)((unsigned long)addr);
zc.length = chunk_size;
- zc.recv_skip_hint = 0;
+
res = getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE,
&zc, &zc_len);
if (res == -1)


2020-05-13 20:46:08

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review

On Wed, May 13, 2020 at 02:46:31PM +0100, Jon Hunter wrote:
>
> On 13/05/2020 10:43, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 5.6.13 release.
> > There are 118 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.6.13-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.6.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
>
> All tests are passing for Tegra ...
>
> Test results for stable-v5.6:
> 13 builds: 13 pass, 0 fail
> 26 boots: 26 pass, 0 fail
> 42 tests: 42 pass, 0 fail
>
> Linux version: 5.6.13-rc1-gf1d28d1c7608
> Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000,
> tegra194-p2972-0000, tegra20-ventana,
> tegra210-p2371-2180, tegra210-p3450-0000,
> tegra30-cardhu-a04

Wonderful, thanks for testing all of these and letting me know.

greg k-h

2020-05-13 23:03:52

by Shuah Khan

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review

On 5/13/20 3:43 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.6.13 release.
> There are 118 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.6.13-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.6.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Compiled and booted on my test system. No dmesg regressions.

thanks,
-- Shuah

2020-05-15 08:54:00

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review

On Wed, May 13, 2020 at 04:59:52PM -0600, shuah wrote:
> On 5/13/20 3:43 AM, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 5.6.13 release.
> > There are 118 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.6.13-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.6.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
> >
>
> Compiled and booted on my test system. No dmesg regressions.

Thanks for testing all of these and letting me know.

greg k-h

2020-05-15 08:54:41

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review

On Wed, May 13, 2020 at 10:04:12AM -0700, Guenter Roeck wrote:
> On Wed, May 13, 2020 at 11:43:39AM +0200, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 5.6.13 release.
> > There are 118 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> > Anything received after that time might be too late.
> >
>
> Build results:
> total: 155 pass: 155 fail: 0
> Qemu test results:
> total: 431 pass: 431 fail: 0

Thanks for testing them all.

greg k-h

2020-05-15 08:55:01

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH 5.6 000/118] 5.6.13-rc1 review

On Wed, May 13, 2020 at 10:56:21PM +0530, Naresh Kamboju wrote:
> On Wed, 13 May 2020 at 15:22, Greg Kroah-Hartman
> <[email protected]> wrote:
> >
> > This is the start of the stable review cycle for the 5.6.13 release.
> > There are 118 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Fri, 15 May 2020 09:41:20 +0000.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.6.13-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.6.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
>
> Results from Linaro’s test farm.
> No regressions on arm64, arm, x86_64, and i386.

Thanks for testing all of these trees.

greg k-h