2020-09-11 14:09:40

by Greg Kroah-Hartman

Subject: [PATCH 5.8 00/16] 5.8.9-rc1 review

This is the start of the stable review cycle for the 5.8.9 release.
There are 16 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.

Responses should be made by Sun, 13 Sep 2020 12:24:42 +0000.
Anything received after that time might be too late.

The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.8.9-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.8.y
and the diffstat can be found below.

thanks,

greg k-h

-------------
Pseudo-Shortlog of commits:

Greg Kroah-Hartman <[email protected]>
Linux 5.8.9-rc1

Florian Westphal <[email protected]>
mptcp: free acked data before waiting for more memory

Jakub Kicinski <[email protected]>
net: disable netpoll on fresh napis

Tuong Lien <[email protected]>
tipc: fix using smp_processor_id() in preemptible

Tetsuo Handa <[email protected]>
tipc: fix shutdown() of connectionless socket

Vinicius Costa Gomes <[email protected]>
taprio: Fix using wrong queues in gate mask

Xin Long <[email protected]>
sctp: not disable bh in the whole sctp_get_port_local()

Kamil Lorenc <[email protected]>
net: usb: dm9601: Add USB ID of Keenetic Plus DSL

Paul Moore <[email protected]>
netlabel: fix problems with mapping removal

Ido Schimmel <[email protected]>
ipv6: Fix sysctl max for fib_multipath_hash_policy

Ido Schimmel <[email protected]>
ipv4: Silence suspicious RCU usage warning

Jason Gunthorpe <[email protected]>
RDMA/cma: Execute rdma_cm destruction from a handler properly

Jason Gunthorpe <[email protected]>
RDMA/cma: Remove unneeded locking for req paths

Jason Gunthorpe <[email protected]>
RDMA/cma: Using the standard locking pattern when delivering the removal event

Jason Gunthorpe <[email protected]>
RDMA/cma: Simplify DEVICE_REMOVAL for internal_id

Pavel Begunkov <[email protected]>
io_uring: fix linked deferred ->files cancellation

Pavel Begunkov <[email protected]>
io_uring: fix cancel of deferred reqs with ->files


-------------

Diffstat:

Makefile                           |   4 +-
drivers/infiniband/core/cma.c      | 257 ++++++++++++++++++-------------------
drivers/net/usb/dm9601.c           |   4 +
fs/io_uring.c                      |  47 +++++++
net/core/dev.c                     |   3 +-
net/core/netpoll.c                 |   2 +-
net/ipv4/fib_trie.c                |   3 +-
net/ipv6/sysctl_net_ipv6.c         |   3 +-
net/mptcp/protocol.c               |   3 +-
net/netlabel/netlabel_domainhash.c |  59 ++++-----
net/sched/sch_taprio.c             |  30 ++++-
net/sctp/socket.c                  |  16 +--
net/tipc/crypto.c                  |  12 +-
net/tipc/socket.c                  |   9 +-
14 files changed, 259 insertions(+), 193 deletions(-)



2020-09-11 14:10:04

by Greg Kroah-Hartman

Subject: [PATCH 5.8 15/16] net: disable netpoll on fresh napis

From: Jakub Kicinski <[email protected]>

[ Upstream commit 96e97bc07e90f175a8980a22827faf702ca4cb30 ]

napi_disable() makes sure to set the NAPI_STATE_NPSVC bit to prevent
netpoll from accessing rings before init is complete. However, the
same is not done for fresh napi instances in netif_napi_add(),
even though we expect NAPI instances to be added as disabled.

This causes crashes during driver reconfiguration (enabling XDP,
changing the channel count) if there is any printk() after
netif_napi_add() but before napi_enable().

To ensure memory ordering is correct we need to use RCU accessors.
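
For illustration, a minimal sketch of the race window (hypothetical
driver code; the my_* names are illustrative and not part of this patch):

#include <linux/netdevice.h>

struct my_priv {
	struct napi_struct napi;
	/* ring state, allocated in my_setup_rings() */
};

static int my_clean_rings(struct my_priv *priv, int budget);	/* TX/RX cleanup */
static void my_setup_rings(struct my_priv *priv);		/* allocates rings */

/* poll callback that assumes the rings are already initialized */
static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_priv *priv = container_of(napi, struct my_priv, napi);

	return my_clean_rings(priv, budget);	/* crashes if rings are unset */
}

static int my_reconfigure(struct net_device *dev)
{
	struct my_priv *priv = netdev_priv(dev);

	netif_napi_add(dev, &priv->napi, my_poll, NAPI_POLL_WEIGHT);
	/*
	 * With netconsole active, any printk() here recurses into netpoll's
	 * poll_napi(), which walks dev->napi_list; before this fix the fresh
	 * napi had NAPI_STATE_NPSVC clear, so poll_one_napi() could invoke
	 * my_poll() on rings that do not exist yet.
	 */
	netdev_info(dev, "reconfiguring\n");
	my_setup_rings(priv);
	napi_enable(&priv->napi);	/* clears NPSVC; netpoll may now poll */
	return 0;
}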

Reported-by: Rob Sherwood <[email protected]>
Fixes: 2d8bff12699a ("netpoll: Close race condition between poll_one_napi and napi_disable")
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/core/dev.c | 3 ++-
net/core/netpoll.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)

--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6609,12 +6609,13 @@ void netif_napi_add(struct net_device *d
netdev_err_once(dev, "%s() called with weight %d\n", __func__,
weight);
napi->weight = weight;
- list_add(&napi->dev_list, &dev->napi_list);
napi->dev = dev;
#ifdef CONFIG_NETPOLL
napi->poll_owner = -1;
#endif
set_bit(NAPI_STATE_SCHED, &napi->state);
+ set_bit(NAPI_STATE_NPSVC, &napi->state);
+ list_add_rcu(&napi->dev_list, &dev->napi_list);
napi_hash_add(napi);
}
EXPORT_SYMBOL(netif_napi_add);
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -162,7 +162,7 @@ static void poll_napi(struct net_device
struct napi_struct *napi;
int cpu = smp_processor_id();

- list_for_each_entry(napi, &dev->napi_list, dev_list) {
+ list_for_each_entry_rcu(napi, &dev->napi_list, dev_list) {
if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
poll_one_napi(napi);
smp_store_release(&napi->poll_owner, -1);


2020-09-11 14:12:52

by Greg Kroah-Hartman

Subject: [PATCH 5.8 02/16] io_uring: fix linked deferred ->files cancellation

From: Pavel Begunkov <[email protected]>

[ Upstream commit c127a2a1b7baa5eb40a7e2de4b7f0c51ccbbb2ef ]

While looking for ->files in ->defer_list, consider that requests there
may actually be links.
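
A hypothetical defer_list layout (illustrative only, not taken from the
patch) showing why matching the head request alone is not enough:

/*
 *   ctx->defer_list: req0 ----------------------------> req3 -> ...
 *                    (REQ_F_LINK_HEAD,
 *                     work.files != files)
 *                      |
 *                      +-- link_list: req1 (work.files == files)
 *                                     req2
 *
 * The old check only looked at req0->work.files and so missed req1;
 * io_match_link_files() also walks req->link_list, catching linked
 * requests that hold the files being torn down.
 */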

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/io_uring.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5f627194d0920..d05023ca74bdc 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7601,6 +7601,28 @@ static bool io_match_link(struct io_kiocb *preq, struct io_kiocb *req)
return false;
}

+static inline bool io_match_files(struct io_kiocb *req,
+ struct files_struct *files)
+{
+ return (req->flags & REQ_F_WORK_INITIALIZED) && req->work.files == files;
+}
+
+static bool io_match_link_files(struct io_kiocb *req,
+ struct files_struct *files)
+{
+ struct io_kiocb *link;
+
+ if (io_match_files(req, files))
+ return true;
+ if (req->flags & REQ_F_LINK_HEAD) {
+ list_for_each_entry(link, &req->link_list, link_list) {
+ if (io_match_files(link, files))
+ return true;
+ }
+ }
+ return false;
+}
+
/*
* We're looking to cancel 'req' because it's holding on to our files, but
* 'req' could be a link to another request. See if it is, and cancel that
@@ -7683,8 +7705,7 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,

spin_lock_irq(&ctx->completion_lock);
list_for_each_entry_reverse(req, &ctx->defer_list, list) {
- if ((req->flags & REQ_F_WORK_INITIALIZED)
- && req->work.files == files) {
+ if (io_match_link_files(req, files)) {
list_cut_position(&list, &ctx->defer_list, &req->list);
break;
}
--
2.25.1



2020-09-11 14:15:15

by Greg Kroah-Hartman

Subject: [PATCH 5.8 12/16] taprio: Fix using wrong queues in gate mask

From: Vinicius Costa Gomes <[email protected]>

[ Upstream commit 09e31cf0c528dac3358a081dc4e773d1b3de1bc9 ]

Since commit 9c66d1564676 ("taprio: Add support for hardware
offloading") there's a bit of inconsistency when offloading schedules
to the hardware:

In software mode, the gate masks are specified in terms of traffic
classes, so a schedule entry like "sched-entry S 03 20000" means that
traffic classes 0 and 1 are open for 20us; when taprio is offloaded to
hardware, the gate masks are specified in terms of hardware queues.

The idea here is to fix hardware offloading so that schedules in
hardware and software mode have the same behavior. What needs to be done
is to map traffic classes to queues when applying the offload to the
driver.
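
As a worked example (the tc-to-queue layout below is assumed, not taken
from the commit):

/*
 * Assumed mqprio-style mapping on the device:
 *   dev->num_tc = 2
 *   dev->tc_to_txq[0] = { .count = 2, .offset = 0 }	-> queues 0-1
 *   dev->tc_to_txq[1] = { .count = 2, .offset = 2 }	-> queues 2-3
 *
 * tc_map_to_queue_mask(dev, 0x3) then computes:
 *   tc 0: GENMASK(0 + 2 - 1, 0) = 0x3
 *   tc 1: GENMASK(2 + 2 - 1, 2) = 0xc
 *   queue_mask = 0x3 | 0xc = 0xf
 *
 * so "sched-entry S 03 20000" now opens hardware queues 0-3 for 20us,
 * matching the software-mode meaning of traffic classes 0 and 1.
 */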

Fixes: 9c66d1564676 ("taprio: Add support for hardware offloading")
Signed-off-by: Vinicius Costa Gomes <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/sched/sch_taprio.c | 30 ++++++++++++++++++++++++------
1 file changed, 24 insertions(+), 6 deletions(-)

--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -1177,9 +1177,27 @@ static void taprio_offload_config_change
spin_unlock(&q->current_entry_lock);
}

-static void taprio_sched_to_offload(struct taprio_sched *q,
+static u32 tc_map_to_queue_mask(struct net_device *dev, u32 tc_mask)
+{
+ u32 i, queue_mask = 0;
+
+ for (i = 0; i < dev->num_tc; i++) {
+ u32 offset, count;
+
+ if (!(tc_mask & BIT(i)))
+ continue;
+
+ offset = dev->tc_to_txq[i].offset;
+ count = dev->tc_to_txq[i].count;
+
+ queue_mask |= GENMASK(offset + count - 1, offset);
+ }
+
+ return queue_mask;
+}
+
+static void taprio_sched_to_offload(struct net_device *dev,
struct sched_gate_list *sched,
- const struct tc_mqprio_qopt *mqprio,
struct tc_taprio_qopt_offload *offload)
{
struct sched_entry *entry;
@@ -1194,7 +1212,8 @@ static void taprio_sched_to_offload(stru

e->command = entry->command;
e->interval = entry->interval;
- e->gate_mask = entry->gate_mask;
+ e->gate_mask = tc_map_to_queue_mask(dev, entry->gate_mask);
+
i++;
}

@@ -1202,7 +1221,6 @@ static void taprio_sched_to_offload(stru
}

static int taprio_enable_offload(struct net_device *dev,
- struct tc_mqprio_qopt *mqprio,
struct taprio_sched *q,
struct sched_gate_list *sched,
struct netlink_ext_ack *extack)
@@ -1224,7 +1242,7 @@ static int taprio_enable_offload(struct
return -ENOMEM;
}
offload->enable = 1;
- taprio_sched_to_offload(q, sched, mqprio, offload);
+ taprio_sched_to_offload(dev, sched, offload);

err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, offload);
if (err < 0) {
@@ -1486,7 +1504,7 @@ static int taprio_change(struct Qdisc *s
}

if (FULL_OFFLOAD_IS_ENABLED(q->flags))
- err = taprio_enable_offload(dev, mqprio, q, new_admin, extack);
+ err = taprio_enable_offload(dev, q, new_admin, extack);
else
err = taprio_disable_offload(dev, q, extack);
if (err)


2020-09-11 15:22:08

by Greg Kroah-Hartman

Subject: [PATCH 5.8 11/16] sctp: not disable bh in the whole sctp_get_port_local()

From: Xin Long <[email protected]>

[ Upstream commit 3106ecb43a05dc3e009779764b9da245a5d082de ]

With bh disabled across the whole of sctp_get_port_local(), when
snum == 0 and too many ports have been used, the do-while loop will
hold the cpu for a long time and leave it stuck:

[ ] watchdog: BUG: soft lockup - CPU#11 stuck for 22s!
[ ] RIP: 0010:native_queued_spin_lock_slowpath+0x4de/0x940
[ ] Call Trace:
[ ] _raw_spin_lock+0xc1/0xd0
[ ] sctp_get_port_local+0x527/0x650 [sctp]
[ ] sctp_do_bind+0x208/0x5e0 [sctp]
[ ] sctp_autobind+0x165/0x1e0 [sctp]
[ ] sctp_connect_new_asoc+0x355/0x480 [sctp]
[ ] __sctp_connect+0x360/0xb10 [sctp]

There's no need to disable bh across the whole of sctp_get_port_local().
So fix this cpu stall by removing the local_bh_disable() call at the
beginning and using spin_lock_bh() instead.

The same thing was done for inet_csk_get_port() in commit ea8add2b1903
("tcp/dccp: better use of ephemeral ports in bind()").

Thanks to Marcelo for pointing the buggy code out.

v1->v2:
- use cond_resched() to yield cpu to other tasks if needed,
as Eric noticed.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: Ying Xu <[email protected]>
Signed-off-by: Xin Long <[email protected]>
Acked-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/sctp/socket.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)

--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -8297,8 +8297,6 @@ static int sctp_get_port_local(struct so

pr_debug("%s: begins, snum:%d\n", __func__, snum);

- local_bh_disable();
-
if (snum == 0) {
/* Search for an available port. */
int low, high, remaining, index;
@@ -8316,20 +8314,21 @@ static int sctp_get_port_local(struct so
continue;
index = sctp_phashfn(net, rover);
head = &sctp_port_hashtable[index];
- spin_lock(&head->lock);
+ spin_lock_bh(&head->lock);
sctp_for_each_hentry(pp, &head->chain)
if ((pp->port == rover) &&
net_eq(net, pp->net))
goto next;
break;
next:
- spin_unlock(&head->lock);
+ spin_unlock_bh(&head->lock);
+ cond_resched();
} while (--remaining > 0);

/* Exhausted local port range during search? */
ret = 1;
if (remaining <= 0)
- goto fail;
+ return ret;

/* OK, here is the one we will use. HEAD (the port
* hash table list entry) is non-NULL and we hold it's
@@ -8344,7 +8343,7 @@ static int sctp_get_port_local(struct so
* port iterator, pp being NULL.
*/
head = &sctp_port_hashtable[sctp_phashfn(net, snum)];
- spin_lock(&head->lock);
+ spin_lock_bh(&head->lock);
sctp_for_each_hentry(pp, &head->chain) {
if ((pp->port == snum) && net_eq(pp->net, net))
goto pp_found;
@@ -8444,10 +8443,7 @@ success:
ret = 0;

fail_unlock:
- spin_unlock(&head->lock);
-
-fail:
- local_bh_enable();
+ spin_unlock_bh(&head->lock);
return ret;
}



2020-09-11 15:25:04

by Greg Kroah-Hartman

Subject: [PATCH 5.8 03/16] RDMA/cma: Simplify DEVICE_REMOVAL for internal_id

From: Jason Gunthorpe <[email protected]>

[ Upstream commit d54f23c09ec62670901f1a2a4712a5218522ca2b ]

cma_process_remove() triggers an unconditional rdma_destroy_id() for
internal_ids and skips the event delivery and transition through
RDMA_CM_DEVICE_REMOVAL.

This is confusing and unnecessary. Since internal_id always has
cma_listen_handler() as the handler, have it catch the
RDMA_CM_DEVICE_REMOVAL event, consume it directly, and signal removal.

This way the FSM sequence never skips the DEVICE_REMOVAL case, and the
logic in this hard-to-test area is simplified.
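
For context, a sketch of the mechanism relied on here (restating the
hunks below, not new logic): in the RDMA CM, a non-zero return from an
event handler asks the caller to destroy the id.

/*
 * cma_listen_handler() now returns -1 for RDMA_CM_EVENT_DEVICE_REMOVAL,
 * and cma_process_remove() (second hunk) does:
 *
 *	ret = cma_remove_id_dev(id_priv);	// delivers the event
 *	cma_id_put(id_priv);
 *	if (ret)				// handler said "destroy"
 *		rdma_destroy_id(&id_priv->id);
 *
 * so an internal_id is still destroyed, but only after traversing the
 * normal DEVICE_REMOVAL path of the FSM.
 */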

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/core/cma.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index c30cf5307ce3e..537eeebde5f4d 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -2482,6 +2482,10 @@ static int cma_listen_handler(struct rdma_cm_id *id,
{
struct rdma_id_private *id_priv = id->context;

+ /* Listening IDs are always destroyed on removal */
+ if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL)
+ return -1;
+
id->context = id_priv->id.context;
id->event_handler = id_priv->id.event_handler;
trace_cm_event_handler(id_priv, event);
@@ -4829,7 +4833,7 @@ static void cma_process_remove(struct cma_device *cma_dev)
cma_id_get(id_priv);
mutex_unlock(&lock);

- ret = id_priv->internal_id ? 1 : cma_remove_id_dev(id_priv);
+ ret = cma_remove_id_dev(id_priv);
cma_id_put(id_priv);
if (ret)
rdma_destroy_id(&id_priv->id);
--
2.25.1



2020-09-11 15:26:13

by Greg Kroah-Hartman

Subject: [PATCH 5.8 14/16] tipc: fix using smp_processor_id() in preemptible

From: Tuong Lien <[email protected]>

[ Upstream commit bb8872a1e6bc911869a729240781076ed950764b ]

The 'this_cpu_ptr()' is used to obtain the AEAD key's TFM on the current
CPU for encryption, however execution at that point can be preempted
since it is actually in user-space context, so the 'using
smp_processor_id() in preemptible' warning has been observed.

We fix the issue by using the 'get/put_cpu_ptr()' API, which wraps the
access in a 'preempt_disable()' instead.
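
The underlying rule, as a minimal sketch (illustrative per-CPU variable
and names, not the tipc code):

#include <linux/percpu.h>
#include <crypto/aead.h>

struct my_state {
	struct crypto_aead *tfm;
};

static struct my_state __percpu *state;	/* illustrative */

static struct crypto_aead *pick_tfm(void)
{
	struct my_state *s;
	struct crypto_aead *tfm;

	/*
	 * this_cpu_ptr(state) here would trip the warning when called
	 * from preemptible (user-space) context.
	 */
	s = get_cpu_ptr(state);	/* preempt_disable() + per-CPU deref */
	tfm = s->tfm;
	put_cpu_ptr(state);	/* preempt_enable() */

	return tfm;
}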

Fixes: fc1b6d6de220 ("tipc: introduce TIPC encryption & authentication")
Acked-by: Jon Maloy <[email protected]>
Signed-off-by: Tuong Lien <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/tipc/crypto.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

--- a/net/tipc/crypto.c
+++ b/net/tipc/crypto.c
@@ -326,7 +326,8 @@ static void tipc_aead_free(struct rcu_he
if (aead->cloned) {
tipc_aead_put(aead->cloned);
} else {
- head = *this_cpu_ptr(aead->tfm_entry);
+ head = *get_cpu_ptr(aead->tfm_entry);
+ put_cpu_ptr(aead->tfm_entry);
list_for_each_entry_safe(tfm_entry, tmp, &head->list, list) {
crypto_free_aead(tfm_entry->tfm);
list_del(&tfm_entry->list);
@@ -399,10 +400,15 @@ static void tipc_aead_users_set(struct t
*/
static struct crypto_aead *tipc_aead_tfm_next(struct tipc_aead *aead)
{
- struct tipc_tfm **tfm_entry = this_cpu_ptr(aead->tfm_entry);
+ struct tipc_tfm **tfm_entry;
+ struct crypto_aead *tfm;

+ tfm_entry = get_cpu_ptr(aead->tfm_entry);
*tfm_entry = list_next_entry(*tfm_entry, list);
- return (*tfm_entry)->tfm;
+ tfm = (*tfm_entry)->tfm;
+ put_cpu_ptr(tfm_entry);
+
+ return tfm;
}

/**


2020-09-11 15:28:43

by Greg Kroah-Hartman

Subject: [PATCH 5.8 10/16] net: usb: dm9601: Add USB ID of Keenetic Plus DSL

From: Kamil Lorenc <[email protected]>

[ Upstream commit a609d0259183a841621f252e067f40f8cc25d6f6 ]

Keenetic Plus DSL is an xDSL modem that uses dm9620 as its USB interface.

Signed-off-by: Kamil Lorenc <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/usb/dm9601.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/drivers/net/usb/dm9601.c
+++ b/drivers/net/usb/dm9601.c
@@ -625,6 +625,10 @@ static const struct usb_device_id produc
USB_DEVICE(0x0a46, 0x1269), /* DM9621A USB to Fast Ethernet Adapter */
.driver_info = (unsigned long)&dm9601_info,
},
+ {
+ USB_DEVICE(0x0586, 0x3427), /* ZyXEL Keenetic Plus DSL xDSL modem */
+ .driver_info = (unsigned long)&dm9601_info,
+ },
{}, // END
};



2020-09-11 16:32:32

by Greg Kroah-Hartman

Subject: [PATCH 5.8 09/16] netlabel: fix problems with mapping removal

From: Paul Moore <[email protected]>

[ Upstream commit d3b990b7f327e2afa98006e7666fb8ada8ed8683 ]

This patch fixes two main problems seen when removing NetLabel
mappings: memory leaks and potentially extra audit noise.

The memory leaks are caused by not properly free'ing the mapping's
address selector struct when free'ing the entire entry as well as
not properly cleaning up a temporary mapping entry when adding new
address selectors to an existing entry. This patch fixes both these
problems such that kmemleak reports no NetLabel associated leaks
after running the SELinux test suite.

The potentially extra audit noise was caused by the auditing code in
netlbl_domhsh_remove_entry() being called regardless of the entry's
validity. If another thread had already marked the entry as invalid,
but not removed/free'd it from the list of mappings, then it was
possible that an additional mapping removal audit record would be
generated. This patch fixes this by returning early from the removal
function when the entry was previously marked invalid. This change
also had the side benefit of improving the code by decreasing the
indentation level of a large chunk of code by one (accounting for most
of the diffstat).

Fixes: 63c416887437 ("netlabel: Add network address selectors to the NetLabel/LSM domain mapping")
Reported-by: Stephen Smalley <[email protected]>
Signed-off-by: Paul Moore <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/netlabel/netlabel_domainhash.c | 59 ++++++++++++++++++-------------------
1 file changed, 30 insertions(+), 29 deletions(-)

--- a/net/netlabel/netlabel_domainhash.c
+++ b/net/netlabel/netlabel_domainhash.c
@@ -85,6 +85,7 @@ static void netlbl_domhsh_free_entry(str
kfree(netlbl_domhsh_addr6_entry(iter6));
}
#endif /* IPv6 */
+ kfree(ptr->def.addrsel);
}
kfree(ptr->domain);
kfree(ptr);
@@ -537,6 +538,8 @@ int netlbl_domhsh_add(struct netlbl_dom_
goto add_return;
}
#endif /* IPv6 */
+ /* cleanup the new entry since we've moved everything over */
+ netlbl_domhsh_free_entry(&entry->rcu);
} else
ret_val = -EINVAL;

@@ -580,6 +583,12 @@ int netlbl_domhsh_remove_entry(struct ne
{
int ret_val = 0;
struct audit_buffer *audit_buf;
+ struct netlbl_af4list *iter4;
+ struct netlbl_domaddr4_map *map4;
+#if IS_ENABLED(CONFIG_IPV6)
+ struct netlbl_af6list *iter6;
+ struct netlbl_domaddr6_map *map6;
+#endif /* IPv6 */

if (entry == NULL)
return -ENOENT;
@@ -597,6 +606,9 @@ int netlbl_domhsh_remove_entry(struct ne
ret_val = -ENOENT;
spin_unlock(&netlbl_domhsh_lock);

+ if (ret_val)
+ return ret_val;
+
audit_buf = netlbl_audit_start_common(AUDIT_MAC_MAP_DEL, audit_info);
if (audit_buf != NULL) {
audit_log_format(audit_buf,
@@ -606,40 +618,29 @@ int netlbl_domhsh_remove_entry(struct ne
audit_log_end(audit_buf);
}

- if (ret_val == 0) {
- struct netlbl_af4list *iter4;
- struct netlbl_domaddr4_map *map4;
-#if IS_ENABLED(CONFIG_IPV6)
- struct netlbl_af6list *iter6;
- struct netlbl_domaddr6_map *map6;
-#endif /* IPv6 */
-
- switch (entry->def.type) {
- case NETLBL_NLTYPE_ADDRSELECT:
- netlbl_af4list_foreach_rcu(iter4,
- &entry->def.addrsel->list4) {
- map4 = netlbl_domhsh_addr4_entry(iter4);
- cipso_v4_doi_putdef(map4->def.cipso);
- }
+ switch (entry->def.type) {
+ case NETLBL_NLTYPE_ADDRSELECT:
+ netlbl_af4list_foreach_rcu(iter4, &entry->def.addrsel->list4) {
+ map4 = netlbl_domhsh_addr4_entry(iter4);
+ cipso_v4_doi_putdef(map4->def.cipso);
+ }
#if IS_ENABLED(CONFIG_IPV6)
- netlbl_af6list_foreach_rcu(iter6,
- &entry->def.addrsel->list6) {
- map6 = netlbl_domhsh_addr6_entry(iter6);
- calipso_doi_putdef(map6->def.calipso);
- }
+ netlbl_af6list_foreach_rcu(iter6, &entry->def.addrsel->list6) {
+ map6 = netlbl_domhsh_addr6_entry(iter6);
+ calipso_doi_putdef(map6->def.calipso);
+ }
#endif /* IPv6 */
- break;
- case NETLBL_NLTYPE_CIPSOV4:
- cipso_v4_doi_putdef(entry->def.cipso);
- break;
+ break;
+ case NETLBL_NLTYPE_CIPSOV4:
+ cipso_v4_doi_putdef(entry->def.cipso);
+ break;
#if IS_ENABLED(CONFIG_IPV6)
- case NETLBL_NLTYPE_CALIPSO:
- calipso_doi_putdef(entry->def.calipso);
- break;
+ case NETLBL_NLTYPE_CALIPSO:
+ calipso_doi_putdef(entry->def.calipso);
+ break;
#endif /* IPv6 */
- }
- call_rcu(&entry->rcu, netlbl_domhsh_free_entry);
}
+ call_rcu(&entry->rcu, netlbl_domhsh_free_entry);

return ret_val;
}


2020-09-11 16:33:37

by Greg Kroah-Hartman

Subject: [PATCH 5.8 05/16] RDMA/cma: Remove unneeded locking for req paths

From: Jason Gunthorpe <[email protected]>

[ Upstream commit cc9c037343898eb7a775e6b81d092ee21eeff218 ]

The REQ flows are concerned that once the handler is called on the new
cm_id the ULP can choose to trigger an rdma_destroy_id() concurrently at
any time.

However, this is not true: while the ULP can call rdma_destroy_id(), it
immediately blocks on the handler_mutex, which prevents anything harmful
from running concurrently.

Remove the confusing extra locking and refcounts, and make it clearer
that the handler_mutex is what protects the state during destroy.
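
The serialization being relied on, sketched as a timeline (simplified,
not verbatim driver code):

/*
 *   REQ handler thread                      ULP thread
 *   ------------------                      ----------
 *   mutex_lock(&conn_id->handler_mutex)
 *   ret = cma_cm_event_handler(...)         rdma_destroy_id(&conn_id->id)
 *     ... may keep touching conn_id ...       mutex_lock(&handler_mutex)
 *   mutex_unlock(&conn_id->handler_mutex)        <- blocks here
 *                                             state = RDMA_CM_DESTROYING
 *                                             ... actual teardown ...
 *
 * rdma_destroy_id() cannot proceed past handler_mutex while the handler
 * runs, so the extra cma_id_get()/cma_id_put() around the handler call
 * was redundant.
 */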

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
drivers/infiniband/core/cma.c | 31 ++++++-------------------------
1 file changed, 6 insertions(+), 25 deletions(-)

diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 04151c301e851..11f43204fee77 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -1831,21 +1831,21 @@ static void cma_leave_mc_groups(struct rdma_id_private *id_priv)

void rdma_destroy_id(struct rdma_cm_id *id)
{
- struct rdma_id_private *id_priv;
+ struct rdma_id_private *id_priv =
+ container_of(id, struct rdma_id_private, id);
enum rdma_cm_state state;

- id_priv = container_of(id, struct rdma_id_private, id);
- trace_cm_id_destroy(id_priv);
- state = cma_exch(id_priv, RDMA_CM_DESTROYING);
- cma_cancel_operation(id_priv, state);
-
/*
* Wait for any active callback to finish. New callbacks will find
* the id_priv state set to destroying and abort.
*/
mutex_lock(&id_priv->handler_mutex);
+ trace_cm_id_destroy(id_priv);
+ state = cma_exch(id_priv, RDMA_CM_DESTROYING);
mutex_unlock(&id_priv->handler_mutex);

+ cma_cancel_operation(id_priv, state);
+
rdma_restrack_del(&id_priv->res);
if (id_priv->cma_dev) {
if (rdma_cap_ib_cm(id_priv->id.device, 1)) {
@@ -2205,19 +2205,9 @@ static int cma_ib_req_handler(struct ib_cm_id *cm_id,
cm_id->context = conn_id;
cm_id->cm_handler = cma_ib_handler;

- /*
- * Protect against the user destroying conn_id from another thread
- * until we're done accessing it.
- */
- cma_id_get(conn_id);
ret = cma_cm_event_handler(conn_id, &event);
if (ret)
goto err3;
- /*
- * Acquire mutex to prevent user executing rdma_destroy_id()
- * while we're accessing the cm_id.
- */
- mutex_lock(&lock);
if (cma_comp(conn_id, RDMA_CM_CONNECT) &&
(conn_id->id.qp_type != IB_QPT_UD)) {
trace_cm_send_mra(cm_id->context);
@@ -2226,13 +2216,11 @@ static int cma_ib_req_handler(struct ib_cm_id *cm_id,
mutex_unlock(&lock);
mutex_unlock(&conn_id->handler_mutex);
mutex_unlock(&listen_id->handler_mutex);
- cma_id_put(conn_id);
if (net_dev)
dev_put(net_dev);
return 0;

err3:
- cma_id_put(conn_id);
/* Destroy the CM ID by returning a non-zero value. */
conn_id->cm_id.ib = NULL;
err2:
@@ -2409,11 +2397,6 @@ static int iw_conn_req_handler(struct iw_cm_id *cm_id,
memcpy(cma_src_addr(conn_id), laddr, rdma_addr_size(laddr));
memcpy(cma_dst_addr(conn_id), raddr, rdma_addr_size(raddr));

- /*
- * Protect against the user destroying conn_id from another thread
- * until we're done accessing it.
- */
- cma_id_get(conn_id);
ret = cma_cm_event_handler(conn_id, &event);
if (ret) {
/* User wants to destroy the CM ID */
@@ -2421,13 +2404,11 @@ static int iw_conn_req_handler(struct iw_cm_id *cm_id,
cma_exch(conn_id, RDMA_CM_DESTROYING);
mutex_unlock(&conn_id->handler_mutex);
mutex_unlock(&listen_id->handler_mutex);
- cma_id_put(conn_id);
rdma_destroy_id(&conn_id->id);
return ret;
}

mutex_unlock(&conn_id->handler_mutex);
- cma_id_put(conn_id);

out:
mutex_unlock(&listen_id->handler_mutex);
--
2.25.1



2020-09-11 16:33:52

by Greg Kroah-Hartman

Subject: [PATCH 5.8 01/16] io_uring: fix cancel of deferred reqs with ->files

From: Pavel Begunkov <[email protected]>

[ Upstream commit b7ddce3cbf010edbfac6c6d8cc708560a7bcd7a4 ]

While trying to cancel requests with ->files, it also should look for
requests in ->defer_list, otherwise it might end up hanging a thread.

Cancel all requests in ->defer_list up to the last request there with
matching ->files, that's needed to follow drain ordering semantics.
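
A worked illustration of the cut semantics (hypothetical list contents):

/*
 * Hypothetical defer_list, oldest first, F = request matching ->files:
 *
 *   A -> B -> F1 -> C -> F2 -> D
 *
 * The reverse walk stops at the last matching request (F2), and
 * list_cut_position() moves A..F2 onto a local list to be failed with
 * -ECANCELED. A, B and C are cancelled too: they were queued to drain
 * before F2, so skipping them would break the ordering the drain
 * (defer) machinery guarantees. Only D, queued after the last match,
 * stays deferred.
 */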

Signed-off-by: Pavel Begunkov <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
fs/io_uring.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 38f3ec15ba3b1..5f627194d0920 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7675,12 +7675,38 @@ static void io_attempt_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
io_timeout_remove_link(ctx, req);
}

+static void io_cancel_defer_files(struct io_ring_ctx *ctx,
+ struct files_struct *files)
+{
+ struct io_kiocb *req = NULL;
+ LIST_HEAD(list);
+
+ spin_lock_irq(&ctx->completion_lock);
+ list_for_each_entry_reverse(req, &ctx->defer_list, list) {
+ if ((req->flags & REQ_F_WORK_INITIALIZED)
+ && req->work.files == files) {
+ list_cut_position(&list, &ctx->defer_list, &req->list);
+ break;
+ }
+ }
+ spin_unlock_irq(&ctx->completion_lock);
+
+ while (!list_empty(&list)) {
+ req = list_first_entry(&list, struct io_kiocb, list);
+ list_del_init(&req->list);
+ req_set_fail_links(req);
+ io_cqring_add_event(req, -ECANCELED);
+ io_double_put_req(req);
+ }
+}
+
static void io_uring_cancel_files(struct io_ring_ctx *ctx,
struct files_struct *files)
{
if (list_empty_careful(&ctx->inflight_list))
return;

+ io_cancel_defer_files(ctx, files);
/* cancel all at once, should be faster than doing it one by one*/
io_wq_cancel_cb(ctx->io_wq, io_wq_files_match, files, true);

--
2.25.1



2020-09-11 16:33:56

by Greg Kroah-Hartman

Subject: [PATCH 5.8 16/16] mptcp: free acked data before waiting for more memory

From: Florian Westphal <[email protected]>

[ Upstream commit 1cec170d458b1d18f6f1654ca84c0804a701c5ef ]

After the subflow lock is dropped, more wmem might have been made
available.

This fixes a deadlock in mptcp_connect.sh 'mmap' mode: wmem is
exhausted, but as the mptcp socket holds on to already-acked data (for
retransmit), no wakeup will occur.

Using 'goto restart' calls mptcp_clean_una(sk), which will free pages
that have been fully acked in the meantime.

Fixes: fb529e62d3f3 ("mptcp: break and restart in case mptcp sndbuf is full")
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/mptcp/protocol.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -772,7 +772,6 @@ fallback:
restart:
mptcp_clean_una(sk);

-wait_for_sndbuf:
__mptcp_flush_join_list(msk);
ssk = mptcp_subflow_get_send(msk);
while (!sk_stream_memory_free(sk) ||
@@ -873,7 +872,7 @@ wait_for_sndbuf:
*/
mptcp_set_timeout(sk, ssk);
release_sock(ssk);
- goto wait_for_sndbuf;
+ goto restart;
}
}
}


2020-09-11 16:34:50

by Greg Kroah-Hartman

Subject: [PATCH 5.8 08/16] ipv6: Fix sysctl max for fib_multipath_hash_policy

From: Ido Schimmel <[email protected]>

[ Upstream commit 05d4487197b2b71d5363623c28924fd58c71c0b6 ]

Cited commit added the possible value of '2', but it cannot be set. Fix
it by adjusting the maximum value to '2'. This is consistent with the
corresponding IPv4 sysctl.

Before:

# sysctl -w net.ipv6.fib_multipath_hash_policy=2
sysctl: setting key "net.ipv6.fib_multipath_hash_policy": Invalid argument
net.ipv6.fib_multipath_hash_policy = 2
# sysctl net.ipv6.fib_multipath_hash_policy
net.ipv6.fib_multipath_hash_policy = 0

After:

# sysctl -w net.ipv6.fib_multipath_hash_policy=2
net.ipv6.fib_multipath_hash_policy = 2
# sysctl net.ipv6.fib_multipath_hash_policy
net.ipv6.fib_multipath_hash_policy = 2

Fixes: d8f74f0975d8 ("ipv6: Support multipath hashing on inner IP pkts")
Signed-off-by: Ido Schimmel <[email protected]>
Reviewed-by: Stephen Suryaputra <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/ipv6/sysctl_net_ipv6.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--- a/net/ipv6/sysctl_net_ipv6.c
+++ b/net/ipv6/sysctl_net_ipv6.c
@@ -21,6 +21,7 @@
#include <net/calipso.h>
#endif

+static int two = 2;
static int flowlabel_reflect_max = 0x7;
static int auto_flowlabels_min;
static int auto_flowlabels_max = IP6_AUTO_FLOW_LABEL_MAX;
@@ -150,7 +151,7 @@ static struct ctl_table ipv6_table_templ
.mode = 0644,
.proc_handler = proc_rt6_multipath_hash_policy,
.extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
+ .extra2 = &two,
},
{
.procname = "seg6_flowlabel",


2020-09-11 16:37:02

by Greg Kroah-Hartman

Subject: [PATCH 5.8 13/16] tipc: fix shutdown() of connectionless socket

From: Tetsuo Handa <[email protected]>

[ Upstream commit 2a63866c8b51a3f72cea388dfac259d0e14c4ba6 ]

syzbot is reporting a hung task at nbd_ioctl() [1], caused by two
problems with the shutdown() operation of TIPC's connectionless sockets.

----------
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <linux/nbd.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
const int fd = open("/dev/nbd0", 3);
alarm(5);
ioctl(fd, NBD_SET_SOCK, socket(PF_TIPC, SOCK_DGRAM, 0));
ioctl(fd, NBD_DO_IT, 0); /* To be interrupted by SIGALRM. */
return 0;
}
----------

One problem is that the wait_for_completion() (from flush_workqueue()
from nbd_start_device_ioctl() from nbd_ioctl()) cannot complete when
nbd_start_device_ioctl() receives a signal at wait_event_interruptible(),
because tipc_shutdown() (from kernel_sock_shutdown(SHUT_RDWR) from
nbd_mark_nsock_dead() from sock_shutdown() from nbd_start_device_ioctl())
fails to wake up the WQ thread sleeping at wait_woken() (from
tipc_wait_for_rcvmsg() from sock_recvmsg() from sock_xmit() from
nbd_read_stat() in recv_work(), scheduled by nbd_start_device() from
nbd_start_device_ioctl()). Fix this problem by always invoking
sk->sk_state_change() (like inet_shutdown() does) when tipc_shutdown()
is called.

The other problem is that tipc_wait_for_rcvmsg() cannot return when
tipc_shutdown() is called, because tipc_shutdown() sets sk->sk_shutdown
to SEND_SHUTDOWN (even though "how" is SHUT_RDWR) while
tipc_wait_for_rcvmsg() needs sk->sk_shutdown set to RCV_SHUTDOWN or
SHUTDOWN_MASK. Fix this problem by setting sk->sk_shutdown to
SHUTDOWN_MASK (like inet_shutdown() does) when the socket is
connectionless.
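
For reference, the sk_shutdown bit values involved (from
include/net/sock.h):

/*
 *   RCV_SHUTDOWN  == 1
 *   SEND_SHUTDOWN == 2
 *   SHUTDOWN_MASK == 3	(RCV_SHUTDOWN | SEND_SHUTDOWN)
 *
 * tipc_wait_for_rcvmsg() keeps sleeping until RCV_SHUTDOWN is visible,
 * which is why setting only SEND_SHUTDOWN on SHUT_RDWR left receivers
 * asleep.
 */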

[1] https://syzkaller.appspot.com/bug?id=3fe51d307c1f0a845485cf1798aa059d12bf18b2

Reported-by: syzbot <[email protected]>
Signed-off-by: Tetsuo Handa <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
net/tipc/socket.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -2773,18 +2773,21 @@ static int tipc_shutdown(struct socket *

trace_tipc_sk_shutdown(sk, NULL, TIPC_DUMP_ALL, " ");
__tipc_shutdown(sock, TIPC_CONN_SHUTDOWN);
- sk->sk_shutdown = SEND_SHUTDOWN;
+ if (tipc_sk_type_connectionless(sk))
+ sk->sk_shutdown = SHUTDOWN_MASK;
+ else
+ sk->sk_shutdown = SEND_SHUTDOWN;

if (sk->sk_state == TIPC_DISCONNECTING) {
/* Discard any unreceived messages */
__skb_queue_purge(&sk->sk_receive_queue);

- /* Wake up anyone sleeping in poll */
- sk->sk_state_change(sk);
res = 0;
} else {
res = -ENOTCONN;
}
+ /* Wake up anyone sleeping in poll. */
+ sk->sk_state_change(sk);

release_sock(sk);
return res;


2020-09-11 22:21:01

by Shuah Khan

Subject: Re: [PATCH 5.8 00/16] 5.8.9-rc1 review

On 9/11/20 6:47 AM, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.8.9 release.
> There are 16 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 13 Sep 2020 12:24:42 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.8.9-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.8.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Compiled and booted on my test system. No dmesg regressions.

Tested-by: Shuah Khan <[email protected]>

thanks,
-- Shuah

2020-09-12 02:23:31

by Guenter Roeck

Subject: Re: [PATCH 5.8 00/16] 5.8.9-rc1 review

On Fri, Sep 11, 2020 at 02:47:17PM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.8.9 release.
> There are 16 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 13 Sep 2020 12:24:42 +0000.
> Anything received after that time might be too late.
>

Build results:
total: 154 pass: 154 fail: 0
Qemu test results:
total: 430 pass: 430 fail: 0

Tested-by: Guenter Roeck <[email protected]>

Guenter

2020-09-12 07:29:01

by Naresh Kamboju

Subject: Re: [PATCH 5.8 00/16] 5.8.9-rc1 review

On Fri, 11 Sep 2020 at 18:30, Greg Kroah-Hartman
<[email protected]> wrote:
>
> This is the start of the stable review cycle for the 5.8.9 release.
> There are 16 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Sun, 13 Sep 2020 12:24:42 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.8.9-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.8.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>

Results from Linaro’s test farm.
No regressions on arm64, arm, x86_64, and i386.

Tested-by: Linux Kernel Functional Testing <[email protected]>

Summary
------------------------------------------------------------------------

kernel: 5.8.9-rc1
git repo: ['https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git',
'https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc']
git branch: linux-5.8.y
git commit: dcdaabfe3cea0fafa25a9a2ce8d83fd4edad41d6
git describe: v5.8.8-17-gdcdaabfe3cea
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.8.y/build/v5.8.8-17-gdcdaabfe3cea


No regressions (compared to build v5.8.8)


No fixes (compared to build v5.8.8)

Ran 20836 total tests in the following environments and test suites.

Environments
--------------
- dragonboard-410c
- hi6220-hikey
- i386
- juno-r2
- qemu_arm
- qemu_arm64
- qemu_i386
- qemu_x86_64
- x15
- x86

Test Suites
-----------
* build
* install-android-platform-tools-r2600
* kselftest
* kselftest/drivers
* kselftest/filesystems
* kselftest/net
* linux-log-parser
* perf
* network-basic-tests
* libhugetlbfs
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-controllers-tests
* ltp-cpuhotplug-tests
* ltp-crypto-tests
* ltp-cve-tests
* ltp-dio-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-tracing-tests
* v4l2-compliance
* ltp-open-posix-tests
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-native/drivers
* kselftest-vsyscall-mode-native/filesystems
* kselftest-vsyscall-mode-native/net
* kselftest-vsyscall-mode-none
* kselftest-vsyscall-mode-none/drivers
* kselftest-vsyscall-mode-none/filesystems
* kselftest-vsyscall-mode-none/net
* ssuite

--
Linaro LKFT
https://lkft.linaro.org

2020-09-12 12:49:52

by Greg Kroah-Hartman

Subject: Re: [PATCH 5.8 00/16] 5.8.9-rc1 review

On Fri, Sep 11, 2020 at 04:19:48PM -0600, Shuah Khan wrote:
> On 9/11/20 6:47 AM, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 5.8.9 release.
> > There are 16 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Sun, 13 Sep 2020 12:24:42 +0000.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.8.9-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.8.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
> >
>
> Compiled and booted on my test system. No dmesg regressions.
>
> Tested-by: Shuah Khan <[email protected]>

Thanks for testing all of these and letting me know.

greg k-h

2020-09-12 12:50:36

by Greg Kroah-Hartman

Subject: Re: [PATCH 5.8 00/16] 5.8.9-rc1 review

On Sat, Sep 12, 2020 at 12:57:38PM +0530, Naresh Kamboju wrote:
> On Fri, 11 Sep 2020 at 18:30, Greg Kroah-Hartman
> <[email protected]> wrote:
> >
> > This is the start of the stable review cycle for the 5.8.9 release.
> > There are 16 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Sun, 13 Sep 2020 12:24:42 +0000.
> > Anything received after that time might be too late.
> >
> > The whole patch series can be found in one patch at:
> > https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.8.9-rc1.gz
> > or in the git tree and branch at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.8.y
> > and the diffstat can be found below.
> >
> > thanks,
> >
> > greg k-h
> >
>
> Results from Linaro’s test farm.
> No regressions on arm64, arm, x86_64, and i386.
>
> Tested-by: Linux Kernel Functional Testing <[email protected]>

Wonderful, thanks for testing them all and letting me know.

greg k-h

2020-09-12 12:52:37

by Greg Kroah-Hartman

Subject: Re: [PATCH 5.8 00/16] 5.8.9-rc1 review

On Fri, Sep 11, 2020 at 07:19:12PM -0700, Guenter Roeck wrote:
> On Fri, Sep 11, 2020 at 02:47:17PM +0200, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 5.8.9 release.
> > There are 16 patches in this series, all will be posted as a response
> > to this one. If anyone has any issues with these being applied, please
> > let me know.
> >
> > Responses should be made by Sun, 13 Sep 2020 12:24:42 +0000.
> > Anything received after that time might be too late.
> >
>
> Build results:
> total: 154 pass: 154 fail: 0
> Qemu test results:
> total: 430 pass: 430 fail: 0
>
> Tested-by: Guenter Roeck <[email protected]>

Thanks for testing all of these and letting me know.

greg k-h