2022-01-07 09:00:20

by Yunsheng Lin

Subject: [PATCH net-next] page_pool: remove spinlock in page_pool_refill_alloc_cache()

page_pool_refill_alloc_cache() is only called by
__page_pool_get_cached(), which already assumes non-concurrent
access, as noted by the comment in __page_pool_get_cached(), and
ptr_ring itself allows concurrent access between one consumer and
one producer. The consumer-side spinlock in
page_pool_refill_alloc_cache() is therefore redundant; remove it.
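
For reference, the only caller looks roughly like the below (an
illustrative sketch, not a verbatim copy of net/core/page_pool.c); the
comment is the non-concurrent-access guarantee mentioned above:

	static struct page *__page_pool_get_cached(struct page_pool *pool)
	{
		struct page *page;

		/* Caller MUST guarantee safe non-concurrent access,
		 * e.g. softirq.
		 */
		if (likely(pool->alloc.count)) {
			/* Fast-path: take a page from the alloc cache */
			page = pool->alloc.cache[--pool->alloc.count];
		} else {
			/* Slow-path: refill the cache from the ptr_ring */
			page = page_pool_refill_alloc_cache(pool);
		}

		return page;
	}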

Signed-off-by: Yunsheng Lin <[email protected]>
---
net/core/page_pool.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 1a6978427d6c..6efad8b29e9c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -130,9 +130,6 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
pref_nid = numa_mem_id(); /* will be zero like page_to_nid() */
#endif

- /* Slower-path: Get pages from locked ring queue */
- spin_lock(&r->consumer_lock);
-
/* Refill alloc array, but only if NUMA match */
do {
page = __ptr_ring_consume(r);
@@ -157,7 +154,6 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
if (likely(pool->alloc.count > 0))
page = pool->alloc.cache[--pool->alloc.count];

- spin_unlock(&r->consumer_lock);
return page;
}

--
2.33.0



2022-01-07 13:32:25

by Jesper Dangaard Brouer

Subject: Re: [PATCH net-next] page_pool: remove spinlock in page_pool_refill_alloc_cache()



On 07/01/2022 10.00, Yunsheng Lin wrote:
> page_pool_refill_alloc_cache() is only called by
> __page_pool_get_cached(), which already assumes non-concurrent
> access, as noted by the comment in __page_pool_get_cached(), and
> ptr_ring itself allows concurrent access between one consumer and
> one producer. The consumer-side spinlock in
> page_pool_refill_alloc_cache() is therefore redundant; remove it.

This should be okay, as __ptr_ring_consume() has a memory barrier via
the READ_ONCE in __ptr_ring_peek(). page_pool_empty_ring() also
consumes from the ptr_ring, but drivers already have to ensure that it
is not called concurrently, as it is part of the teardown code path
(which can only run concurrently with the ptr_ring producer side).
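
For context, the consumer-side ordering comes from the peek helper;
roughly (a paraphrase from memory, see include/linux/ptr_ring.h for
the authoritative version):

	static inline void *__ptr_ring_peek(struct ptr_ring *r)
	{
		/* The READ_ONCE pairs with the producer side publishing
		 * the entry, so a non-NULL pointer seen here is safe to
		 * use without the consumer lock, as long as there is
		 * only one consumer.
		 */
		if (likely(r->size))
			return READ_ONCE(r->queue[r->consumer_head]);
		return NULL;
	}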

Acked-by: Jesper Dangaard Brouer <[email protected]>

The original reason behind this lock was that I was planning to let
the memory subsystem reclaim pages sitting in page_pool's cache.
Unfortunately I never got around to implementing this.

> Signed-off-by: Yunsheng Lin <[email protected]>
> ---
> net/core/page_pool.c | 4 ----
> 1 file changed, 4 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 1a6978427d6c..6efad8b29e9c 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -130,9 +130,6 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
> pref_nid = numa_mem_id(); /* will be zero like page_to_nid() */
> #endif
>
> - /* Slower-path: Get pages from locked ring queue */
> - spin_lock(&r->consumer_lock);
> -
> /* Refill alloc array, but only if NUMA match */
> do {
> page = __ptr_ring_consume(r);
> @@ -157,7 +154,6 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
> if (likely(pool->alloc.count > 0))
> page = pool->alloc.cache[--pool->alloc.count];
>
> - spin_unlock(&r->consumer_lock);
> return page;
> }
>
>


2022-01-10 01:01:13

by patchwork-bot+netdevbpf

Subject: Re: [PATCH net-next] page_pool: remove spinlock in page_pool_refill_alloc_cache()

Hello:

This patch was applied to netdev/net-next.git (master)
by Jakub Kicinski <[email protected]>:

On Fri, 7 Jan 2022 17:00:42 +0800 you wrote:
> page_pool_refill_alloc_cache() is only called by
> __page_pool_get_cached(), which already assumes non-concurrent
> access, as noted by the comment in __page_pool_get_cached(), and
> ptr_ring itself allows concurrent access between one consumer and
> one producer. The consumer-side spinlock in
> page_pool_refill_alloc_cache() is therefore redundant; remove it.
>
> Signed-off-by: Yunsheng Lin <[email protected]>
>
> [...]

Here is the summary with links:
- [net-next] page_pool: remove spinlock in page_pool_refill_alloc_cache()
https://git.kernel.org/netdev/net-next/c/07b17f0f7485

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html