2023-02-02 02:44:35

by Qingfang Deng

Subject: [PATCH net] net: page_pool: use in_softirq() instead

From: Qingfang DENG <[email protected]>

We use BH context only for synchronization, so we don't care if it's
actually serving softirq or not.

As a side note, in the case of threaded NAPI, in_serving_softirq() will
return false because it's in process context with BH off, making
page_pool_recycle_in_cache() unreachable.

Signed-off-by: Qingfang DENG <[email protected]>
Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
---
include/net/page_pool.h | 4 ++--
net/core/page_pool.c | 6 +++---
2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 813c93499f20..34bf531ffc8d 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -386,7 +386,7 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
static inline void page_pool_ring_lock(struct page_pool *pool)
__acquires(&pool->ring.producer_lock)
{
- if (in_serving_softirq())
+ if (in_softirq())
spin_lock(&pool->ring.producer_lock);
else
spin_lock_bh(&pool->ring.producer_lock);
@@ -395,7 +395,7 @@ static inline void page_pool_ring_lock(struct page_pool *pool)
static inline void page_pool_ring_unlock(struct page_pool *pool)
__releases(&pool->ring.producer_lock)
{
- if (in_serving_softirq())
+ if (in_softirq())
spin_unlock(&pool->ring.producer_lock);
else
spin_unlock_bh(&pool->ring.producer_lock);
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9b203d8660e4..193c18799865 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -511,8 +511,8 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
{
int ret;
- /* BH protection not needed if current is serving softirq */
- if (in_serving_softirq())
+ /* BH protection not needed if current is softirq */
+ if (in_softirq())
ret = ptr_ring_produce(&pool->ring, page);
else
ret = ptr_ring_produce_bh(&pool->ring, page);
@@ -570,7 +570,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
page_pool_dma_sync_for_device(pool, page,
dma_sync_size);

- if (allow_direct && in_serving_softirq() &&
+ if (allow_direct && in_softirq() &&
page_pool_recycle_in_cache(page, pool))
return NULL;

--
2.34.1



2023-02-02 06:03:56

by Jakub Kicinski

Subject: Re: [PATCH net] net: page_pool: use in_softirq() instead

On Thu, 2 Feb 2023 10:44:17 +0800 Qingfang DENG wrote:
> From: Qingfang DENG <[email protected]>
>
> We use BH context only for synchronization, so we don't care if it's
> actually serving softirq or not.
>
> As a side note, in the case of threaded NAPI, in_serving_softirq() will
> return false because it's in process context with BH off, making
> page_pool_recycle_in_cache() unreachable.

LGTM, but I don't think this qualifies as a fix.
It's just a missed optimization, right?

2023-02-02 08:59:27

by Leon Romanovsky

Subject: Re: [PATCH net] net: page_pool: use in_softirq() instead

On Thu, Feb 02, 2023 at 10:44:17AM +0800, Qingfang DENG wrote:
> From: Qingfang DENG <[email protected]>
>
> We use BH context only for synchronization, so we don't care if it's
> actually serving softirq or not.
>
> As a side note, in the case of threaded NAPI, in_serving_softirq() will
> return false because it's in process context with BH off, making
> page_pool_recycle_in_cache() unreachable.
>
> Signed-off-by: Qingfang DENG <[email protected]>
> Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
> Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")

Please put your Signed-off-by after Fixes.

Thanks

2023-02-02 18:06:52

by Jakub Kicinski

Subject: Re: [PATCH net] net: page_pool: use in_softirq() instead

On Wed, 1 Feb 2023 22:01:05 -0800 Jakub Kicinski wrote:
> On Thu, 2 Feb 2023 10:44:17 +0800 Qingfang DENG wrote:
> > From: Qingfang DENG <[email protected]>
> >
> > We use BH context only for synchronization, so we don't care if it's
> > actually serving softirq or not.
> >
> > As a side note, in the case of threaded NAPI, in_serving_softirq() will
> > return false because it's in process context with BH off, making
> > page_pool_recycle_in_cache() unreachable.
>
> LGTM, but I don't think this qualifies as a fix.
> It's just a missed optimization, right?

Well, nobody seems to have disagreed with me in 12h, so please drop the
Fixes tags and repost against net-next :)

2023-02-03 11:16:37

by Jesper Dangaard Brouer

Subject: Re: [PATCH net] net: page_pool: use in_softirq() instead


On 02/02/2023 03.44, Qingfang DENG wrote:
> From: Qingfang DENG <[email protected]>
>
> We use BH context only for synchronization, so we don't care if it's
> actually serving softirq or not.
>

Are you sure this is safe?
(also see my inline notes below)

> As a side note, in the case of threaded NAPI, in_serving_softirq() will
> return false because it's in process context with BH off, making
> page_pool_recycle_in_cache() unreachable.

How can I enable threaded NAPI on my system?

> Signed-off-by: Qingfang DENG <[email protected]>
> Fixes: 7886244736a4 ("net: page_pool: Add bulk support for ptr_ring")
> Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
> ---
> include/net/page_pool.h | 4 ++--
> net/core/page_pool.c | 6 +++---
> 2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 813c93499f20..34bf531ffc8d 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -386,7 +386,7 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
> static inline void page_pool_ring_lock(struct page_pool *pool)
> __acquires(&pool->ring.producer_lock)
> {
> - if (in_serving_softirq())
> + if (in_softirq())
> spin_lock(&pool->ring.producer_lock);
> else
> spin_lock_bh(&pool->ring.producer_lock);
> @@ -395,7 +395,7 @@ static inline void page_pool_ring_lock(struct page_pool *pool)
> static inline void page_pool_ring_unlock(struct page_pool *pool)
> __releases(&pool->ring.producer_lock)
> {
> - if (in_serving_softirq())
> + if (in_softirq())
> spin_unlock(&pool->ring.producer_lock);
> else
> spin_unlock_bh(&pool->ring.producer_lock);
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 9b203d8660e4..193c18799865 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -511,8 +511,8 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
> static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
> {
> int ret;
> - /* BH protection not needed if current is serving softirq */
> - if (in_serving_softirq())
> + /* BH protection not needed if current is softirq */
> + if (in_softirq())
> ret = ptr_ring_produce(&pool->ring, page);
> else
> ret = ptr_ring_produce_bh(&pool->ring, page);
> @@ -570,7 +570,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
> page_pool_dma_sync_for_device(pool, page,
> dma_sync_size);
>
> - if (allow_direct && in_serving_softirq() &&
> + if (allow_direct && in_softirq() &&
> page_pool_recycle_in_cache(page, pool))

I think other cases (above) are likely safe, but I worry a little about
this case, as page_pool_recycle_in_cache() relies on RX-NAPI protection.
Meaning it is only the CPU that handles RX-NAPI for this RX-queue that
is allowed to access this lockless array.

We do have the 'allow_direct' boolean, and if every driver/user uses it
correctly, then this should be safe. Changing this makes it possible
for drivers to use the page_pool API incorrectly, which leads to
hard-to-debug errors.

> return NULL;
>

--Jesper


2023-02-03 13:05:56

by Qingfang Deng

Subject: Re: [PATCH net] net: page_pool: use in_softirq() instead

Hi Jesper,

On Fri, Feb 3, 2023 at 7:15 PM Jesper Dangaard Brouer
<[email protected]> wrote:
> How can I enable threaded NAPI on my system?

dev_set_threaded(napi_dev, true);

You can also enable it at runtime by writing 1 to
/sys/class/net/<devname>/threaded, but it only works if the driver
does _not_ use a dummy netdev for NAPI poll.

> I think other cases (above) are likely safe, but I worry a little about
> this case, as page_pool_recycle_in_cache() relies on RX-NAPI protection.
> Meaning it is only the CPU that handles RX-NAPI for this RX-queue that
> is allowed to access this lockless array.

The major change with threaded NAPI is that instead of raising a
NET_RX softirq, it wakes up a kthread, which then does the polling and
can be scheduled onto another CPU. The driver's poll function
is called with BH disabled, so it is still considered BH context.

> We do have the 'allow_direct' boolean, and if every driver/user uses it
> correctly, then this should be safe. Changing this makes it possible
> for drivers to use the page_pool API incorrectly, which leads to
> hard-to-debug errors.

"incorrectly", like, calling it outside RX-NAPI?

2023-02-06 10:20:06

by Jesper Dangaard Brouer

Subject: Re: [PATCH net] net: page_pool: use in_softirq() instead



On 03/02/2023 14.05, DENG Qingfang wrote:
> Hi Jesper,
>
> On Fri, Feb 3, 2023 at 7:15 PM Jesper Dangaard Brouer
> <[email protected]> wrote:
>> How can I enable threaded NAPI on my system?
>
> dev_set_threaded(napi_dev, true);
>
> You can also enable it at runtime by writing 1 to
> /sys/class/net/<devname>/threaded, but it only works if the driver
> does _not_ use a dummy netdev for NAPI poll.
>

Thanks for providing this setup info.

I quickly tested driver i40e with an XDP_DROP workload, switching
between threaded and normal NAPI, and saw no performance difference.
(P.S. driver i40e doesn't use page_pool.)

>> I think other cases (above) are likely safe, but I worry a little about
>> this case, as page_pool_recycle_in_cache() relies on RX-NAPI protection.
>> Meaning it is only the CPU that handles RX-NAPI for this RX-queue that
>> is allowed to access this lockless array.
>
> The major change with threaded NAPI is that instead of raising a
> NET_RX softirq, it wakes up a kthread, which then does the polling and
> can be scheduled onto another CPU. The driver's poll function
> is called with BH disabled, so it is still considered BH context.
>

As long as the driver's NAPI poll function doesn't migrate between CPUs
while it is running, this should be fine. (This guarantee is needed as
XDP + TC have a per_cpu bpf_redirect_info.)

Looking at napi_threaded_poll() in net/core/dev.c, I can see this
guarantee is provided by the local_bh_disable() + local_bh_enable()
around the call to __napi_poll().

>> We do have the 'allow_direct' boolean, and if every driver/user uses it
>> correctly, then this should be safe. Changing this makes it possible
>> for drivers to use the page_pool API incorrectly, which leads to
>> hard-to-debug errors.
>
> "incorrectly", like, calling it outside RX-NAPI?

Yes.

--Jesper