2020-06-24 22:01:04

by Jason A. Donenfeld

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

Hi Alexander,

This patch introduced a behavior change around GRO_DROP:

napi_skb_finish used to sometimes return GRO_DROP:

> -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb)
> +static gro_result_t napi_skb_finish(struct napi_struct *napi,
> +                                    struct sk_buff *skb,
> +                                    gro_result_t ret)
>  {
>          switch (ret) {
>          case GRO_NORMAL:
> -                if (netif_receive_skb_internal(skb))
> -                        ret = GRO_DROP;
> +                gro_normal_one(napi, skb);
>

But under your change, gro_normal_one and the various calls it makes
never propagate their return values, and so GRO_DROP is never returned
to the caller, even if something drops the skb.

Was this intentional? Or should I start looking into how to restore it?

Thanks,
Jason


2020-06-24 22:05:38

by Jason A. Donenfeld

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

On Wed, Jun 24, 2020 at 03:06:10PM -0600, Jason A. Donenfeld wrote:
> Hi Alexander,
>
> This patch introduced a behavior change around GRO_DROP:
>
> napi_skb_finish used to sometimes return GRO_DROP:
>
> > -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb)
> > +static gro_result_t napi_skb_finish(struct napi_struct *napi,
> > +                                    struct sk_buff *skb,
> > +                                    gro_result_t ret)
> >  {
> >          switch (ret) {
> >          case GRO_NORMAL:
> > -                if (netif_receive_skb_internal(skb))
> > -                        ret = GRO_DROP;
> > +                gro_normal_one(napi, skb);
> >
>
> But under your change, gro_normal_one and the various calls it makes
> never propagate their return values, and so GRO_DROP is never returned
> to the caller, even if something drops the skb.
>
> Was this intentional? Or should I start looking into how to restore it?
>
> Thanks,
> Jason

For some context, I'm consequently mulling over this change in my code,
since checking for GRO_DROP now constitutes dead code:

diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
index 91438144e4f7..9b2ab6fc91cd 100644
--- a/drivers/net/wireguard/receive.c
+++ b/drivers/net/wireguard/receive.c
@@ -414,14 +414,8 @@ static void wg_packet_consume_data_done(struct wg_peer *peer,
         if (unlikely(routed_peer != peer))
                 goto dishonest_packet_peer;

-        if (unlikely(napi_gro_receive(&peer->napi, skb) == GRO_DROP)) {
-                ++dev->stats.rx_dropped;
-                net_dbg_ratelimited("%s: Failed to give packet to userspace from peer %llu (%pISpfsc)\n",
-                                    dev->name, peer->internal_id,
-                                    &peer->endpoint.addr);
-        } else {
-                update_rx_stats(peer, message_data_len(len_before_trim));
-        }
+        napi_gro_receive(&peer->napi, skb);
+        update_rx_stats(peer, message_data_len(len_before_trim));
         return;

 dishonest_packet_peer:

2020-06-25 14:29:01

by Edward Cree

Subject: Re: [PATCH v2 net-next] net: core: use listified Rx for GRO_NORMAL in napi_gro_receive()

On 24/06/2020 22:06, Jason A. Donenfeld wrote:
> Hi Alexander,
>
> This patch introduced a behavior change around GRO_DROP:
>
> napi_skb_finish used to sometimes return GRO_DROP:
>
>> -static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb)
>> +static gro_result_t napi_skb_finish(struct napi_struct *napi,
>> +                                    struct sk_buff *skb,
>> +                                    gro_result_t ret)
>>  {
>>          switch (ret) {
>>          case GRO_NORMAL:
>> -                if (netif_receive_skb_internal(skb))
>> -                        ret = GRO_DROP;
>> +                gro_normal_one(napi, skb);
>>
> But under your change, gro_normal_one and the various calls it makes
> never propagate their return values, and so GRO_DROP is never returned
> to the caller, even if something drops the skb.
This followed the pattern set by napi_frags_finish(), and is
 intentional: gro_normal_one() usually defers processing of
 the skb to the end of the napi poll, so by the time we know
 that the network stack has dropped it, the caller has long
 since returned.
In fact the RX will be handled by netif_receive_skb_list_internal(),
 which can't return NET_RX_SUCCESS vs. NET_RX_DROP, because it's
 handling many skbs which might not all have the same verdict.

When originally doing this work I felt this was OK because
 almost no-one was sensitive to the return value; the only
 callers that cared were in our own sfc driver, and then only
 for making bogus decisions about interrupt moderation.
Alexander just followed my lead, so don't blame him ;-)

> For some context, I'm consequently mulling over this change in my code,
> since checking for GRO_DROP now constitutes dead code:
Incidentally, it's only dead because dev_gro_receive() can't
 return GRO_DROP either.  If it could, napi_skb_finish()
 would pass that on.  And napi_gro_frags() (which AIUI is the
 better API for some performance reasons that I can't remember)
 can still return GRO_DROP too.

However, I think that incrementing your rx_dropped stat when
 the network stack chose to drop the packet is the wrong
 thing to do anyway (IMHO rx_dropped is for "there was a
 packet on the wire but either the hardware or the driver was
 unable to receive it"), so I'd say go ahead and remove the
 check.

HTH
-ed