2021-03-25 03:27:19

by Jesper Dangaard Brouer

Subject: [PATCH mel-git 0/3] page_pool using alloc_pages_bulk API

This patchset is against Mel's tree:
- git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git
- Branch: mm-bulk-rebase-v6r5
- https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git/log/?h=mm-bulk-rebase-v6r5

The benchmarks are here:
- https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org#test-on-mel-git-tree-mm-bulk-rebase-v6r5

The compiler chose a strange code layout; see here:
- https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org#strange-code-layout
- Used: gcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)

The intent is for Mel to pick up these patches.
---

Jesper Dangaard Brouer (3):
net: page_pool: refactor dma_map into own function page_pool_dma_map
net: page_pool: use alloc_pages_bulk in refill code path
net: page_pool: convert to use alloc_pages_bulk_array variant


 include/net/page_pool.h |   2 -
 net/core/page_pool.c    | 111 +++++++++++++++++++++++++++++++----------------
 2 files changed, 75 insertions(+), 38 deletions(-)
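
For context, patch 1/3 only factors the DMA-mapping step out of the
allocation paths so the order-0 and bulk paths can share it. A minimal
sketch of that helper, based on net/core/page_pool.c as merged (details
may differ slightly from the patch as posted):

static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
{
	dma_addr_t dma;

	/* Setting DMA_ATTR_SKIP_CPU_SYNC defers CPU cache syncing to the
	 * driver, which only syncs the area actually touched at RX time.
	 */
	dma = dma_map_page_attrs(pool->p.dev, page, 0,
				 (PAGE_SIZE << pool->p.order),
				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(pool->p.dev, dma))
		return false;

	page->dma_addr = dma;
	return true;
}

Returning false lets the caller put_page() and skip caching a page that
could not be mapped.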

--


2021-03-25 03:27:28

by Jesper Dangaard Brouer

Subject: [PATCH mel-git 3/3] net: page_pool: convert to use alloc_pages_bulk_array variant

The conversion of page_pool to the alloc_pages_bulk_array API variant
is kept as a separate patch to make it easier to benchmark the two
variants independently. Maintainers can squash this patch if preferred.
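
For reference, the two bulk-allocator entry points differ only in how
they hand pages back to the caller. A sketch of the calling convention
(signatures as in the bulk-allocator series; shown here purely as an
illustration):

	struct page *page_array[64] = { NULL };	/* NULL slots get filled */
	LIST_HEAD(page_list);
	unsigned long nr;

	/* List variant: pages are linked onto page_list via page->lru,
	 * costing a list_del() per page when the caller drains them.
	 */
	nr = alloc_pages_bulk_list(GFP_ATOMIC, 64, &page_list);

	/* Array variant: NULL slots in page_array are filled directly and
	 * the number of populated entries is returned, so page_pool can
	 * pass in its own alloc.cache with no list manipulation at all.
	 */
	nr = alloc_pages_bulk_array(GFP_ATOMIC, 64, page_array);

The array variant taking a struct page ** is also why this patch changes
the type of the pp_alloc_cache array from void * to struct page *.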

Signed-off-by: Jesper Dangaard Brouer <[email protected]>
---
 include/net/page_pool.h |  2 +-
 net/core/page_pool.c    | 22 ++++++++++++++++------
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b5b195305346..6d517a37c18b 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -65,7 +65,7 @@
 #define PP_ALLOC_CACHE_REFILL	64
 struct pp_alloc_cache {
 	u32 count;
-	void *cache[PP_ALLOC_CACHE_SIZE];
+	struct page *cache[PP_ALLOC_CACHE_SIZE];
 };
 
 struct page_pool_params {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3bf6e7f5fc89..9ec1aa9640ad 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -233,24 +233,34 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	const int bulk = PP_ALLOC_CACHE_REFILL;
 	unsigned int pp_flags = pool->p.flags;
 	unsigned int pp_order = pool->p.order;
-	struct page *page, *next;
-	LIST_HEAD(page_list);
+	struct page *page;
+	int i, nr_pages;
 
 	/* Don't support bulk alloc for high-order pages */
 	if (unlikely(pp_order))
 		return __page_pool_alloc_page_order(pool, gfp);
 
-	if (unlikely(!alloc_pages_bulk_list(gfp, bulk, &page_list)))
+	/* Unnecessary as alloc cache is empty, but guarantees zero count */
+	if (unlikely(pool->alloc.count > 0))
+		return pool->alloc.cache[--pool->alloc.count];
+
+	/* Mark alloc.cache slots empty (NULL) for alloc_pages_bulk_array */
+	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
+
+	nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
+	if (unlikely(!nr_pages))
 		return NULL;
 
-	list_for_each_entry_safe(page, next, &page_list, lru) {
-		list_del(&page->lru);
+	/* Pages have been filled into the alloc.cache array, but the count
+	 * is zero and the pages have not yet been (possibly) DMA-mapped.
+	 */
+	for (i = 0; i < nr_pages; i++) {
+		page = pool->alloc.cache[i];
 		if ((pp_flags & PP_FLAG_DMA_MAP) &&
 		    unlikely(!page_pool_dma_map(pool, page))) {
 			put_page(page);
 			continue;
 		}
-		/* Alloc cache have room as it is empty on function call */
 		pool->alloc.cache[pool->alloc.count++] = page;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
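
A note on the memset in the hunk above: alloc_pages_bulk_array treats
non-NULL entries as already populated and skips them, which is why
clearing only the first 'bulk' slots is sufficient when the cache is
known to be empty. Roughly, the contract can be pictured like this
(a simplified sketch, not the actual __alloc_pages_bulk code):

static int bulk_array_fill_sketch(struct page **page_array, int nr_pages)
{
	int nr_populated = 0;

	while (nr_populated < nr_pages) {
		/* Slot already holds a page: count it and move on */
		if (page_array[nr_populated]) {
			nr_populated++;
			continue;
		}
		/* Stand-in for the allocator's per-cpu pageset fast path */
		page_array[nr_populated] = alloc_page(GFP_ATOMIC);
		if (!page_array[nr_populated])
			break;
		nr_populated++;
	}
	return nr_populated;
}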