2019-11-25 09:13:17

by Nicholas Johnson

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

On Mon, Nov 25, 2019 at 11:25:50AM +0300, Alexander Lobakin wrote:
> Alexander Lobakin wrote 25.11.2019 10:54:
> > Nicholas Johnson wrote 25.11.2019 10:29:
> > > Hi,
> > >
> > > On Wed, Oct 16, 2019 at 10:31:31AM +0300, Alexander Lobakin wrote:
> > > > David Miller wrote 16.10.2019 04:16:
> > > > > From: Alexander Lobakin <[email protected]>
> > > > > Date: Mon, 14 Oct 2019 11:00:33 +0300
> > > > >
> > > > > > Commit 323ebb61e32b4 ("net: use listified RX for handling GRO_NORMAL
> > > > > > skbs") made use of listified skb processing for the users of
> > > > > > napi_gro_frags().
> > > > > > The same technique can be used in a way more common napi_gro_receive()
> > > > > > to speed up non-merged (GRO_NORMAL) skbs for a wide range of drivers
> > > > > > including gro_cells and mac80211 users.
> > > > > > This slightly changes the return value in cases where skb is being
> > > > > > dropped by the core stack, but it seems to have no impact on related
> > > > > > drivers' functionality.
> > > > > > gro_normal_batch is left untouched as it's very individual for every
> > > > > > single system configuration and might be tuned in manual order to
> > > > > > achieve an optimal performance.
> > > > > >
> > > > > > Signed-off-by: Alexander Lobakin <[email protected]>
> > > > > > Acked-by: Edward Cree <[email protected]>
> > > > >
> > > > > Applied, thank you.
> > > >
> > > > David, Edward, Eric, Ilias,
> > > > thank you for your time.
> > > >
> > > > Regards,
> > > > ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
> > >
> > > I am very sorry to be the bearer of bad news. It appears that this
> > > commit is causing a regression in Linux 5.4.0-rc8-next-20191122,
> > > preventing me from connecting to Wi-Fi networks. I have a Dell XPS
> > > 9370
> > > (Intel Core i7-8650U) with Intel Wireless 8265 [8086:24fd].
> >
> > Hi!
> >
> > It's a bit strange, as this commit doesn't directly affect the packet
> > flow. I don't have any iwlwifi hardware at the moment, so let's see if
> > anyone else can reproduce this (for now, it is the first report in
> > ~6 weeks since the patch was applied to net-next).
> > Anyway, I'll investigate iwlwifi's Rx processing -- maybe I can find
> > something driver-specific that might produce this.
Just in case, I double-checked by reapplying the patch to confirm it is
the cause. The problem reappeared, so I am sure.

Here's what I will do. I know somebody with the same Dell XPS 9370,
except theirs has the Intel Core i7-8550U and Killer Wi-Fi. Mine is the
"business" model, which was harder to obtain. I have been doing bisects
on a USB-C SSD because I do not have enough space on the internal NVMe
drive. I will ask to borrow their laptop and boot off that drive as I
have been doing with my laptop. If the problem does not appear on their
laptop, then there is a good chance that the problem is specific to
iwlwifi.

> >
> > Thank you for the report.
> >
> > > I did a bisect, and this commit was named the culprit. I then applied
> > > the reverse patch on another clone of Linux next-20191122, and it
> > > started working.
> > >
> > > 6570bc79c0dfff0f228b7afd2de720fb4e84d61d
> > > net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()
> > >
> > > You can see more at the bug report I filed at [0].
> > >
> > > [0]
> > > https://bugzilla.kernel.org/show_bug.cgi?id=205647
> > >
> > > I called on others at [0] to try to reproduce this - you should not
> > > pull
> > > a patch because of a single reporter - as I could be wrong.
> > >
> > > Please let me know if you want me to give more debugging information
> > > or
> > > test any potential fixes. I am happy to help to fix this. :)
>
> And you can also set /proc/sys/net/core/gro_normal_batch to 1 and see
> if anything changes. This value makes the GRO stack behave just as it
> did without the patch.
The default value of /proc/sys/net/core/gro_normal_batch was 8.

Setting it to 1 allowed it to connect to the Wi-Fi network.

Setting it back to 8 did not kill the connection.

But when I disconnected and tried to reconnect, it did not re-connect.

Hence, it appears that the problem only affects the initial handshake
when associating with a network, and not normal packet flow.

>
> > > Kind regards,
> > > Nicholas Johnson
> >
> > Regards,
> > ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ
>
> Regards,
> ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ

Regards,
Nicholas


2019-11-25 10:32:55

by Edward Cree

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

On 25/11/2019 09:09, Nicholas Johnson wrote:
> The default value of /proc/sys/net/core/gro_normal_batch was 8.
> Setting it to 1 allowed it to connect to Wi-Fi network.
>
> Setting it back to 8 did not kill the connection.
>
> But when I disconnected and tried to reconnect, it did not re-connect.
>
> Hence, it appears that the problem only affects the initial handshake
> when associating with a network, and not normal packet flow.
That sounds like the GRO batch isn't getting flushed at the end of the
 NAPI — maybe the driver isn't calling napi_complete_done() at the
 appropriate time?
Indeed, from digging through the layers of iwlwifi I eventually get to
 iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the
 napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1);
 return 0; }) and instead calls napi_gro_flush() at the end of its RX
 handling.  Unfortunately, napi_gro_flush() is no longer enough,
 because it doesn't call gro_normal_list() so the packets on the
 GRO_NORMAL list just sit there indefinitely.

It was seeing drivers calling napi_gro_flush() directly that had me
 worried in the first place about whether listifying napi_gro_receive()
 was safe and where the gro_normal_list() should go.
I wondered if other drivers that show up in [1] needed fixing with a
 gro_normal_list() next to their napi_gro_flush() call.  From a cursory
 check:
brocade/bna: has a real poller, calls napi_complete_done() so is OK.
cortina/gemini: calls napi_complete_done() straight after
 napi_gro_flush(), so is OK.
hisilicon/hns3: calls napi_complete(), so is _probably_ OK.
But it's far from clear to me why *any* of those drivers are calling
 napi_gro_flush() themselves...

-Ed

[1]: https://elixir.bootlin.com/linux/latest/ident/napi_gro_flush

2019-11-25 11:03:06

by Alexander Lobakin

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

Edward Cree wrote 25.11.2019 13:31:
> On 25/11/2019 09:09, Nicholas Johnson wrote:
>> The default value of /proc/sys/net/core/gro_normal_batch was 8.
>> Setting it to 1 allowed it to connect to Wi-Fi network.
>>
>> Setting it back to 8 did not kill the connection.
>>
>> But when I disconnected and tried to reconnect, it did not re-connect.
>>
>> Hence, it appears that the problem only affects the initial handshake
>> when associating with a network, and not normal packet flow.
> That sounds like the GRO batch isn't getting flushed at the end of the
>  NAPI — maybe the driver isn't calling napi_complete_done() at the
>  appropriate time?

Yes, this was the first cause I thought of, but I haven't looked at
iwlwifi yet. I already knew this driver has some tricky parts, but
this 'fake NAPI' solution seems rather strange to me.

> Indeed, from digging through the layers of iwlwifi I eventually get to
>  iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the
>  napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1);
>  return 0; }) and instead calls napi_gro_flush() at the end of its RX
>  handling.  Unfortunately, napi_gro_flush() is no longer enough,
>  because it doesn't call gro_normal_list() so the packets on the
>  GRO_NORMAL list just sit there indefinitely.
>
> It was seeing drivers calling napi_gro_flush() directly that had me
>  worried in the first place about whether listifying napi_gro_receive()
>  was safe and where the gro_normal_list() should go.
> I wondered if other drivers that show up in [1] needed fixing with a
>  gro_normal_list() next to their napi_gro_flush() call.  From a cursory
>  check:
> brocade/bna: has a real poller, calls napi_complete_done() so is OK.
> cortina/gemini: calls napi_complete_done() straight after
>  napi_gro_flush(), so is OK.
> hisilicon/hns3: calls napi_complete(), so is _probably_ OK.
> But it's far from clear to me why *any* of those drivers are calling
>  napi_gro_flush() themselves...

Agree. I mean, we _can_ handle this particular problem from the
networking core side, but from my point of view only rethinking the
driver's logic is the correct way to solve this and other issues that
may potentially appear in the future.

> -Ed
>
> [1]: https://elixir.bootlin.com/linux/latest/ident/napi_gro_flush

Regards,
ᚷ ᛖ ᚢ ᚦ ᚠ ᚱ

2019-11-25 11:10:39

by Johannes Berg

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

On Mon, 2019-11-25 at 13:58 +0300, Alexander Lobakin wrote:
> Edward Cree wrote 25.11.2019 13:31:
> > On 25/11/2019 09:09, Nicholas Johnson wrote:
> > > The default value of /proc/sys/net/core/gro_normal_batch was 8.
> > > Setting it to 1 allowed it to connect to Wi-Fi network.
> > >
> > > Setting it back to 8 did not kill the connection.
> > >
> > > But when I disconnected and tried to reconnect, it did not re-connect.
> > >
> > > Hence, it appears that the problem only affects the initial handshake
> > > when associating with a network, and not normal packet flow.
> > That sounds like the GRO batch isn't getting flushed at the end of the
> > NAPI — maybe the driver isn't calling napi_complete_done() at the
> > appropriate time?
>
> Yes, this was the first reason I thought about, but didn't look at
> iwlwifi yet. I already knew this driver has some tricky parts, but
> this 'fake NAPI' solution seems rather strange to me.

Truth be told, we kinda just fudged it until we got GRO, since that's
what we really want on wifi (to reduce the costly TCP ACKs if possible).

Maybe we should call napi_complete_done() instead? But as Edward noted
(below), we don't actually do NAPI polling; we just fake it for each
interrupt, since we will often get a lot of frames in one interrupt
under high throughput (A-MPDUs basically all come in at the same
time). I've never really looked too closely at what exactly happens
here, beyond seeing the difference from GRO.


> > Indeed, from digging through the layers of iwlwifi I eventually get to
> > iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the
> > napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1);
> > return 0; }) and instead calls napi_gro_flush() at the end of its RX
> > handling. Unfortunately, napi_gro_flush() is no longer enough,
> > because it doesn't call gro_normal_list() so the packets on the
> > GRO_NORMAL list just sit there indefinitely.
> >
> > It was seeing drivers calling napi_gro_flush() directly that had me
> > worried in the first place about whether listifying napi_gro_receive()
> > was safe and where the gro_normal_list() should go.
> > I wondered if other drivers that show up in [1] needed fixing with a
> > gro_normal_list() next to their napi_gro_flush() call. From a cursory
> > check:
> > brocade/bna: has a real poller, calls napi_complete_done() so is OK.
> > cortina/gemini: calls napi_complete_done() straight after
> > napi_gro_flush(), so is OK.
> > hisilicon/hns3: calls napi_complete(), so is _probably_ OK.
> > But it's far from clear to me why *any* of those drivers are calling
> > napi_gro_flush() themselves...
>
> Agree. I mean, we _can_ handle this particular problem from networking
> core side, but from my point of view only rethinking driver's logic is
> > the correct way to solve this and other issues that may potentially
> appear in future.

Do tell what you think it should be doing :)

One additional wrinkle is that we have firmware notifications, command
completions and actual RX interleaved, so I think we do want to have
interrupts for the notifications and command completions?

johannes

2019-11-25 11:50:16

by Paolo Abeni

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

On Mon, 2019-11-25 at 12:05 +0100, Johannes Berg wrote:
> On Mon, 2019-11-25 at 13:58 +0300, Alexander Lobakin wrote:
> > Edward Cree wrote 25.11.2019 13:31:
> > > On 25/11/2019 09:09, Nicholas Johnson wrote:
> > > > The default value of /proc/sys/net/core/gro_normal_batch was 8.
> > > > Setting it to 1 allowed it to connect to Wi-Fi network.
> > > >
> > > > Setting it back to 8 did not kill the connection.
> > > >
> > > > But when I disconnected and tried to reconnect, it did not re-connect.
> > > >
> > > > Hence, it appears that the problem only affects the initial handshake
> > > > when associating with a network, and not normal packet flow.
> > > That sounds like the GRO batch isn't getting flushed at the end of the
> > > NAPI — maybe the driver isn't calling napi_complete_done() at the
> > > appropriate time?
> >
> > Yes, this was the first reason I thought about, but didn't look at
> > iwlwifi yet. I already knew this driver has some tricky parts, but
> > this 'fake NAPI' solution seems rather strange to me.
>
> Truth be told, we kinda just fudged it until we got GRO, since that's
> what we really want on wifi (to reduce the costly TCP ACKs if possible).
>
> Maybe we should call napi_complete_done() instead? But as Edward noted
> (below), we don't actually really do NAPI polling, we just fake it for
> each interrupt since we will often get a lot of frames in one interrupt
> if there's high throughput (A-MPDUs are basically coming in all at the
> same time). I've never really looked too much at what exactly happens
> here, beyond seeing the difference from GRO.
>
>
> > > Indeed, from digging through the layers of iwlwifi I eventually get to
> > > iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the
> > > napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1);
> > > return 0; }) and instead calls napi_gro_flush() at the end of its RX
> > > handling. Unfortunately, napi_gro_flush() is no longer enough,
> > > because it doesn't call gro_normal_list() so the packets on the
> > > GRO_NORMAL list just sit there indefinitely.
> > >
> > > It was seeing drivers calling napi_gro_flush() directly that had me
> > > worried in the first place about whether listifying napi_gro_receive()
> > > was safe and where the gro_normal_list() should go.
> > > I wondered if other drivers that show up in [1] needed fixing with a
> > > gro_normal_list() next to their napi_gro_flush() call. From a cursory
> > > check:
> > > brocade/bna: has a real poller, calls napi_complete_done() so is OK.
> > > cortina/gemini: calls napi_complete_done() straight after
> > > napi_gro_flush(), so is OK.
> > > hisilicon/hns3: calls napi_complete(), so is _probably_ OK.
> > > But it's far from clear to me why *any* of those drivers are calling
> > > napi_gro_flush() themselves...
> >
> > Agree. I mean, we _can_ handle this particular problem from networking
> > core side, but from my point of view only rethinking driver's logic is
> > the correct way to solve this and other issues that may potentially
> > appear in future.
>
> Do tell what you think it should be doing :)
>
> One additional wrinkle is that we have firmware notifications, command
> completions and actual RX interleaved, so I think we do want to have
> interrupts for the notifications and command completions?

I think it would be nice to move the iwlwifi driver to full/plain NAPI
mode. The interrupt handler could keep processing extra work as it does
now and queue real packets on some internal queue, then schedule the
relevant NAPI, which in turn could process that queue in the NAPI poll
method. Likely I missed tons of details and/or oversimplified it...

For -net, I *think* something as dumb and hacky as the following could
possibly work:
----
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
index 4bba6b8a863c..df82fad96cbb 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1527,7 +1527,7 @@ static void iwl_pcie_rx_handle(struct iwl_trans *trans, int queue)
 		iwl_pcie_rxq_alloc_rbs(trans, GFP_ATOMIC, rxq);
 
 	if (rxq->napi.poll)
-		napi_gro_flush(&rxq->napi, false);
+		napi_complete_done(&rxq->napi, 0);
 
 	iwl_pcie_rxq_restock(trans, rxq);
 }
---

Cheers,

Paolo


2019-11-25 12:32:13

by Kalle Valo

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

Paolo Abeni <[email protected]> writes:

> On Mon, 2019-11-25 at 12:05 +0100, Johannes Berg wrote:
>> On Mon, 2019-11-25 at 13:58 +0300, Alexander Lobakin wrote:
>>
>> > Agree. I mean, we _can_ handle this particular problem from networking
>> > core side, but from my point of view only rethinking driver's logic is
>> > the correct way to solve this and other issues that may potentially
>> > appear in future.
>>
>> Do tell what you think it should be doing :)
>>
>> One additional wrinkle is that we have firmware notifications, command
>> completions and actual RX interleaved, so I think we do want to have
>> interrupts for the notifications and command completions?
>
> I think it would be nice moving the iwlwifi driver to full/plain NAPI
> mode. The interrupt handler could keep processing extra work as it does
> now and queue real pkts on some internal queue, and then schedule the
> relevant napi, which in turn could process such queue in the napi poll
> method. Likely I missed tons of details and/or oversimplified it...

Sorry for hijacking the thread, but I have a patch pending for ath10k
(another wireless driver) which adds NAPI support to SDIO devices:

https://patchwork.kernel.org/patch/11188393/

I think it does just what you suggested, but I'm no NAPI expert and
would appreciate if someone more knowledgeable could take a look :)

--
https://wireless.wiki.kernel.org/en/developers/documentation/submittingpatches

2019-11-25 13:14:44

by Nicholas Johnson

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

On Mon, Nov 25, 2019 at 10:31:12AM +0000, Edward Cree wrote:
> On 25/11/2019 09:09, Nicholas Johnson wrote:
> > The default value of /proc/sys/net/core/gro_normal_batch was 8.
> > Setting it to 1 allowed it to connect to Wi-Fi network.
> >
> > Setting it back to 8 did not kill the connection.
> >
> > But when I disconnected and tried to reconnect, it did not re-connect.
> >
> > Hence, it appears that the problem only affects the initial handshake
> > when associating with a network, and not normal packet flow.
> That sounds like the GRO batch isn't getting flushed at the end of the
>  NAPI — maybe the driver isn't calling napi_complete_done() at the
>  appropriate time?
> Indeed, from digging through the layers of iwlwifi I eventually get to
>  iwl_pcie_rx_handle() which doesn't really have a NAPI poll (the
>  napi->poll function is iwl_pcie_dummy_napi_poll() { WARN_ON(1);
>  return 0; }) and instead calls napi_gro_flush() at the end of its RX
>  handling.  Unfortunately, napi_gro_flush() is no longer enough,
>  because it doesn't call gro_normal_list() so the packets on the
>  GRO_NORMAL list just sit there indefinitely.
>
> It was seeing drivers calling napi_gro_flush() directly that had me
>  worried in the first place about whether listifying napi_gro_receive()
>  was safe and where the gro_normal_list() should go.
> I wondered if other drivers that show up in [1] needed fixing with a
>  gro_normal_list() next to their napi_gro_flush() call.  From a cursory
>  check:
> brocade/bna: has a real poller, calls napi_complete_done() so is OK.
> cortina/gemini: calls napi_complete_done() straight after
>  napi_gro_flush(), so is OK.
> hisilicon/hns3: calls napi_complete(), so is _probably_ OK.
> But it's far from clear to me why *any* of those drivers are calling
>  napi_gro_flush() themselves...
Pardon my lack of understanding, but is it unusual for something that
drivers should not be calling to be exposed to them? Could it be hidden
from drivers so that it is out of scope, once the current drivers are
modified not to use it?

>
> -Ed
>
> [1]: https://elixir.bootlin.com/linux/latest/ident/napi_gro_flush
Kind regards,
Nicholas