2020-07-25 15:42:36

by Sebastian Gottschall

Subject: Re: [RFC 0/7] Add support to process rx packets in thread


>> I agree. I can just say that I tested this patch recently because of the
>> discussion here, and it can be toggled via sysfs. But it doesn't work for
>> wifi drivers, which mainly use dummy netdev devices. For those I made a
>> small patch to get them working by calling napi_set_threaded manually,
>> hardcoded in the drivers. (see patch below)
> With CONFIG_THREADED_NAPI there is no need to handle what you did here
> in the napi core, because device drivers know better and are responsible
> for it before calling napi_schedule(n).
Yeah, but that approach will not work in all cases. Some drivers take
locks inside the napi poll function, and in that case performance
degrades badly. I discovered this with the mvneta ethernet driver
(Marvell) and with mt76 tx polling (rx works). For mvneta it causes very
high latencies and packet drops; for mt76 it stalls packet transmission
entirely. It simply doesn't work (though in all cases without crashes).
So threading will only work for drivers that are compatible with the
approach; it cannot be used as a drop-in replacement, from my point of
view. It is all a question of the driver design.
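
For illustration, a rough sketch (not the actual patch, and not the real
mvneta or mt76 code; all names here are made up). The driver-side change
amounts to opting in before the first napi_schedule(), assuming the RFC's
napi_set_threaded() helper:

	/* sketch only: register the poll callback as usual, then force
	 * threaded mode by hand, since a dummy netdev has no sysfs knob */
	netif_napi_add(napi_dev, &priv->napi, drv_napi_poll, 64);
	napi_set_threaded(&priv->napi, true);

And the problematic pattern is a poll function that takes a lock shared
with the driver's other paths. With poll moved to a kthread, scheduling
and lock interleaving change from what the driver was designed around,
and the latencies, drops and stalls described above can show up:

	/* hypothetical poll callback with "locking context" */
	static int drv_napi_poll(struct napi_struct *napi, int budget)
	{
		struct drv_priv *priv = container_of(napi, struct drv_priv, napi);
		int done;

		spin_lock_bh(&priv->ring_lock);	/* also taken by the tx path */
		done = drv_rx_ring_process(priv, budget);
		spin_unlock_bh(&priv->ring_lock);

		if (done < budget)
			napi_complete_done(napi, done);
		return done;
	}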


2020-07-26 11:16:40

by David Laight

Subject: RE: [RFC 0/7] Add support to process rx packets in thread

From: Sebastian Gottschall <[email protected]>
> Sent: 25 July 2020 16:42
> >> I agree. I can just say that I tested this patch recently because of the
> >> discussion here, and it can be toggled via sysfs. But it doesn't work for
> >> wifi drivers, which mainly use dummy netdev devices. For those I made a
> >> small patch to get them working by calling napi_set_threaded manually,
> >> hardcoded in the drivers. (see patch below)

> > With CONFIG_THREADED_NAPI there is no need to handle what you did here
> > in the napi core, because device drivers know better and are responsible
> > for it before calling napi_schedule(n).

> Yeah, but that approach will not work in all cases. Some drivers take
> locks inside the napi poll function, and in that case performance
> degrades badly. I discovered this with the mvneta ethernet driver
> (Marvell) and with mt76 tx polling (rx works). For mvneta it causes very
> high latencies and packet drops; for mt76 it stalls packet transmission
> entirely. It simply doesn't work (though in all cases without crashes).
> So threading will only work for drivers that are compatible with the
> approach; it cannot be used as a drop-in replacement, from my point of
> view. It is all a question of the driver design.

Why should it make (much) difference whether the napi callbacks (etc.)
are run in the context of the interrupted process or in that of a
specific kernel thread?
The process flags (or whatever) can even be set so that it appears
to be the expected 'softint' context.
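
A rough sketch of what that could look like (hypothetical thread body, not
the RFC's actual code; local_bh_disable()/local_bh_enable() are the real
primitives): the polling kthread disables BH around the callback, so the
poll function still sees the softirq context it was written for.

	/* hypothetical main loop of a per-napi polling kthread */
	static int napi_kthread_fn(void *data)
	{
		struct napi_struct *napi = data;

		while (!kthread_should_stop()) {
			/* sleep until napi_schedule() makes us runnable */
			set_current_state(TASK_INTERRUPTIBLE);
			if (!test_bit(NAPI_STATE_SCHED, &napi->state)) {
				schedule();
				continue;
			}
			__set_current_state(TASK_RUNNING);

			/* BH disabled across the poll, so in_softirq()
			 * is true and the callback behaves as usual */
			local_bh_disable();
			napi->poll(napi, napi->weight);
			local_bh_enable();
		}
		return 0;
	}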

In any case, running NAPI from a thread will just expose the next
piece of code that runs for ages in softint context.
I think I've seen the tail end of memory being freed under RCU
finally happening in softint context and taking absolutely ages.
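
(A sketch of how that happens, with hypothetical names: an object freed via

	call_rcu(&obj->rcu_head, obj_free_rcu);	/* free deferred past a grace period */

has its callback invoked later, normally from RCU's softirq, so a backlog
of deferred frees shows up as one long stretch of softint time.)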

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2020-07-28 17:00:39

by Rakesh Pillai

Subject: RE: [RFC 0/7] Add support to process rx packets in thread



> -----Original Message-----
> From: David Laight <[email protected]>
> Sent: Sunday, July 26, 2020 4:46 PM
> To: 'Sebastian Gottschall' <[email protected]>; Hillf Danton
> <[email protected]>
> Cc: Andrew Lunn <[email protected]>; Rakesh Pillai <[email protected]>;
> [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]; Markus Elfring <[email protected]>;
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: RE: [RFC 0/7] Add support to process rx packets in thread
>
> From: Sebastian Gottschall <[email protected]>
> > Sent: 25 July 2020 16:42
> > >> I agree. I can just say that I tested this patch recently because of the
> > >> discussion here, and it can be toggled via sysfs. But it doesn't work for
> > >> wifi drivers, which mainly use dummy netdev devices. For those I made a
> > >> small patch to get them working by calling napi_set_threaded manually,
> > >> hardcoded in the drivers. (see patch below)
>
> > > With CONFIG_THREADED_NAPI there is no need to handle what you did here
> > > in the napi core, because device drivers know better and are responsible
> > > for it before calling napi_schedule(n).
>
> > Yeah, but that approach will not work in all cases. Some drivers take
> > locks inside the napi poll function, and in that case performance
> > degrades badly. I discovered this with the mvneta ethernet driver
> > (Marvell) and with mt76 tx polling (rx works). For mvneta it causes very
> > high latencies and packet drops; for mt76 it stalls packet transmission
> > entirely. It simply doesn't work (though in all cases without crashes).
> > So threading will only work for drivers that are compatible with the
> > approach; it cannot be used as a drop-in replacement, from my point of
> > view. It is all a question of the driver design.
>
> Why should it make (much) difference whether the napi callbacks (etc.)
> are run in the context of the interrupted process or in that of a
> specific kernel thread?
> The process flags (or whatever) can even be set so that it appears
> to be the expected 'softint' context.
>
> In any case, running NAPI from a thread will just expose the next
> piece of code that runs for ages in softint context.
> I think I've seen the tail end of memory being freed under RCU
> finally happening in softint context and taking absolutely ages.
>
> David
>

Hi All,

Has the threaded NAPI change been posted to the kernel?
Is the conclusion of this discussion that "we cannot use threads for processing packets"?

