2023-01-26 21:51:43

by Keith Busch

Subject: [PATCHv4 00/12] dmapool enhancements

From: Keith Busch <[email protected]>

Time spent in dma_pool alloc/free increases linearly with the number of
pages backing the pool. We can reduce this to constant time with minor
changes to how free pages are tracked.
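
A minimal sketch of where the series ends up (simplified from patch 11;
the real helpers also track nr_active and run under pool->lock): free
blocks form one pool-wide stack, with the link and dma handle stored in
the free block itself, so alloc and free are both O(1):

    struct dma_block {
        struct dma_block *next_block;  /* freelist link lives in the block */
        dma_addr_t dma;                /* bus address, cached while free */
    };

    /* pop the most recently freed block, if any */
    static struct dma_block *pool_block_pop(struct dma_pool *pool)
    {
        struct dma_block *block = pool->next_block;

        if (block)
            pool->next_block = block->next_block;
        return block;
    }

    /* a freed block stores its own link and dma handle */
    static void pool_block_push(struct dma_pool *pool, struct dma_block *block,
                                dma_addr_t dma)
    {
        block->dma = dma;
        block->next_block = pool->next_block;
        pool->next_block = block;
    }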

Changes since v3:

Added received reviews

Applied comments from Christoph:
Combined all debug code in one #ifdef block
Fixed some whitespace

Keith Busch (8):
dmapool: add alloc/free performance test
dmapool: move debug code to own functions
dmapool: rearrange page alloc failure handling
dmapool: consolidate page initialization
dmapool: simplify freeing
dmapool: don't memset on free twice
dmapool: link blocks across pages
dmapool: create/destroy cleanup

Tony Battersby (4):
dmapool: remove checks for dev == NULL
dmapool: use sysfs_emit() instead of scnprintf()
dmapool: cleanup integer types
dmapool: speedup DMAPOOL_DEBUG with init_on_alloc

mm/Kconfig | 9 ++
mm/Makefile | 1 +
mm/dmapool.c | 402 ++++++++++++++++++++++------------------------
mm/dmapool_test.c | 147 +++++++++++++++++
4 files changed, 350 insertions(+), 209 deletions(-)
create mode 100644 mm/dmapool_test.c

--
2.30.2



2023-01-26 21:51:47

by Keith Busch

Subject: [PATCHv4 03/12] dmapool: use sysfs_emit() instead of scnprintf()

From: Tony Battersby <[email protected]>

Use sysfs_emit() instead of scnprintf(), snprintf(), or sprintf().
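
For reference, sysfs_emit() formats into a sysfs buffer with the
PAGE_SIZE bound enforced internally, and sysfs_emit_at() appends at a
caller-supplied offset, so a show() callback reduces to the pattern
below (a minimal sketch with made-up names, not the dmapool code):

    static ssize_t example_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
        int len;

        /* writes at offset 0, bounded to PAGE_SIZE internally */
        len = sysfs_emit(buf, "header\n");

        /* appends at 'len', still bounded to PAGE_SIZE */
        len += sysfs_emit_at(buf, len, "value: %d\n", 42);

        return len;  /* total bytes written */
    }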

Signed-off-by: Tony Battersby <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 23 +++++++----------------
1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 559207e1c3339..20616b760bb9c 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -64,18 +64,11 @@ static DEFINE_MUTEX(pools_reg_lock);

static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
{
- unsigned temp;
- unsigned size;
- char *next;
+ int size;
struct dma_page *page;
struct dma_pool *pool;

- next = buf;
- size = PAGE_SIZE;
-
- temp = scnprintf(next, size, "poolinfo - 0.1\n");
- size -= temp;
- next += temp;
+ size = sysfs_emit(buf, "poolinfo - 0.1\n");

mutex_lock(&pools_lock);
list_for_each_entry(pool, &dev->dma_pools, pools) {
@@ -90,16 +83,14 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
spin_unlock_irq(&pool->lock);

/* per-pool info, no real statistics yet */
- temp = scnprintf(next, size, "%-16s %4u %4zu %4zu %2u\n",
- pool->name, blocks,
- pages * (pool->allocation / pool->size),
- pool->size, pages);
- size -= temp;
- next += temp;
+ size += sysfs_emit_at(buf, size, "%-16s %4u %4zu %4zu %2u\n",
+ pool->name, blocks,
+ pages * (pool->allocation / pool->size),
+ pool->size, pages);
}
mutex_unlock(&pools_lock);

- return PAGE_SIZE - size;
+ return size;
}

static DEVICE_ATTR_RO(pools);
--
2.30.2


2023-01-26 21:51:50

by Keith Busch

Subject: [PATCHv4 01/12] dmapool: add alloc/free performance test

From: Keith Busch <[email protected]>

Provide a module that allocates and frees many blocks of various sizes
and reports how long it takes. This is intended to provide a consistent
way to measure how changes to the dma_pool_alloc/free routines affect
timing.
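
To run it (assuming a modular build; timing values will vary by
machine), enable the new Kconfig option and load the module:

    # CONFIG_DMAPOOL_TEST=m, then:
    # modprobe dmapool_test
    # dmesg | grep 'dmapool test'
    dmapool test: size:16   align:16   blocks:8192 time:<usecs>
    ...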

Signed-off-by: Keith Busch <[email protected]>
---
mm/Kconfig | 9 +++
mm/Makefile | 1 +
mm/dmapool_test.c | 147 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 157 insertions(+)
create mode 100644 mm/dmapool_test.c

diff --git a/mm/Kconfig b/mm/Kconfig
index ebfe5796adf83..28ee03f39e93f 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1099,6 +1099,15 @@ comment "GUP_TEST needs to have DEBUG_FS enabled"
config GUP_GET_PXX_LOW_HIGH
bool

+config DMAPOOL_TEST
+ tristate "Enable a module to run time tests on dma_pool"
+ depends on HAS_DMA
+ help
+ Provides a test module that will allocate and free many blocks of
+ various sizes and report how long it takes. This is intended to
+ provide a consistent way to measure how changes to the
+ dma_pool_alloc/free routines affect performance.
+
config ARCH_HAS_PTE_SPECIAL
bool

diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5b3e293..3a08f5d7b1782 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -103,6 +103,7 @@ obj-$(CONFIG_MEMCG) += swap_cgroup.o
endif
obj-$(CONFIG_CGROUP_HUGETLB) += hugetlb_cgroup.o
obj-$(CONFIG_GUP_TEST) += gup_test.o
+obj-$(CONFIG_DMAPOOL_TEST) += dmapool_test.o
obj-$(CONFIG_MEMORY_FAILURE) += memory-failure.o
obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
diff --git a/mm/dmapool_test.c b/mm/dmapool_test.c
new file mode 100644
index 0000000000000..370fb9e209eff
--- /dev/null
+++ b/mm/dmapool_test.c
@@ -0,0 +1,147 @@
+#include <linux/device.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/kernel.h>
+#include <linux/ktime.h>
+#include <linux/module.h>
+
+#define NR_TESTS (100)
+
+struct dma_pool_pair {
+ dma_addr_t dma;
+ void *v;
+};
+
+struct dmapool_parms {
+ size_t size;
+ size_t align;
+ size_t boundary;
+};
+
+static const struct dmapool_parms pool_parms[] = {
+ { .size = 16, .align = 16, .boundary = 0 },
+ { .size = 64, .align = 64, .boundary = 0 },
+ { .size = 256, .align = 256, .boundary = 0 },
+ { .size = 1024, .align = 1024, .boundary = 0 },
+ { .size = 4096, .align = 4096, .boundary = 0 },
+ { .size = 68, .align = 32, .boundary = 4096 },
+};
+
+static struct dma_pool *pool;
+static struct device test_dev;
+static u64 dma_mask;
+
+static inline int nr_blocks(int size)
+{
+ return clamp_t(int, (PAGE_SIZE / size) * 512, 1024, 8192);
+}
+
+static int dmapool_test_alloc(struct dma_pool_pair *p, int blocks)
+{
+ int i;
+
+ for (i = 0; i < blocks; i++) {
+ p[i].v = dma_pool_alloc(pool, GFP_KERNEL,
+ &p[i].dma);
+ if (!p[i].v)
+ goto pool_fail;
+ }
+
+ for (i = 0; i < blocks; i++)
+ dma_pool_free(pool, p[i].v, p[i].dma);
+
+ return 0;
+
+pool_fail:
+ for (--i; i >= 0; i--)
+ dma_pool_free(pool, p[i].v, p[i].dma);
+ return -ENOMEM;
+}
+
+static int dmapool_test_block(const struct dmapool_parms *parms)
+{
+ int blocks = nr_blocks(parms->size);
+ ktime_t start_time, end_time;
+ struct dma_pool_pair *p;
+ int i, ret;
+
+ p = kcalloc(blocks, sizeof(*p), GFP_KERNEL);
+ if (!p)
+ return -ENOMEM;
+
+ pool = dma_pool_create("test pool", &test_dev, parms->size,
+ parms->align, parms->boundary);
+ if (!pool) {
+ ret = -ENOMEM;
+ goto free_pairs;
+ }
+
+ start_time = ktime_get();
+ for (i = 0; i < NR_TESTS; i++) {
+ ret = dmapool_test_alloc(p, blocks);
+ if (ret)
+ goto free_pool;
+ if (need_resched())
+ cond_resched();
+ }
+ end_time = ktime_get();
+
+ printk("dmapool test: size:%-4zu align:%-4zu blocks:%-4d time:%llu\n",
+ parms->size, parms->align, blocks,
+ ktime_us_delta(end_time, start_time));
+
+free_pool:
+ dma_pool_destroy(pool);
+free_pairs:
+ kfree(p);
+ return ret;
+}
+
+static void dmapool_test_release(struct device *dev)
+{
+}
+
+static int dmapool_checks(void)
+{
+ int i, ret;
+
+ ret = dev_set_name(&test_dev, "dmapool-test");
+ if (ret)
+ return ret;
+
+ ret = device_register(&test_dev);
+ if (ret) {
+ printk("%s: register failed:%d\n", __func__, ret);
+ goto put_device;
+ }
+
+ test_dev.release = dmapool_test_release;
+ set_dma_ops(&test_dev, NULL);
+ test_dev.dma_mask = &dma_mask;
+ ret = dma_set_mask_and_coherent(&test_dev, DMA_BIT_MASK(64));
+ if (ret) {
+ printk("%s: mask failed:%d\n", __func__, ret);
+ goto del_device;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(pool_parms); i++) {
+ ret = dmapool_test_block(&pool_parms[i]);
+ if (ret)
+ break;
+ }
+
+del_device:
+ device_del(&test_dev);
+put_device:
+ put_device(&test_dev);
+ return ret;
+}
+
+static void dmapool_exit(void)
+{
+}
+
+module_init(dmapool_checks);
+module_exit(dmapool_exit);
+MODULE_LICENSE("GPL");
--
2.30.2


2023-01-26 21:54:28

by Keith Busch

Subject: [PATCHv4 07/12] dmapool: rearrange page alloc failure handling

From: Keith Busch <[email protected]>

Handle the allocation error in its own condition so the success path
stays in the normal flow.

Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 30b069e999968..900f2afa363a9 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -292,17 +292,19 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
page = kmalloc(sizeof(*page), mem_flags);
if (!page)
return NULL;
+
page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
&page->dma, mem_flags);
- if (page->vaddr) {
- pool_init_page(pool, page);
- pool_initialise_page(pool, page);
- page->in_use = 0;
- page->offset = 0;
- } else {
+ if (!page->vaddr) {
kfree(page);
- page = NULL;
+ return NULL;
}
+
+ pool_init_page(pool, page);
+ pool_initialise_page(pool, page);
+ page->in_use = 0;
+ page->offset = 0;
+
return page;
}

--
2.30.2


2023-01-26 21:54:39

by Keith Busch

Subject: [PATCHv4 05/12] dmapool: speedup DMAPOOL_DEBUG with init_on_alloc

From: Tony Battersby <[email protected]>

Avoid double-memset of the same allocated memory in dma_pool_alloc()
when both DMAPOOL_DEBUG is enabled and init_on_alloc=1.
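
want_init_on_alloc() is the right guard because it covers both reasons
the block is about to be zeroed anyway; roughly (a sketch approximating
the helper in include/linux/mm.h, not dmapool code):

    static inline bool want_init_on_alloc(gfp_t flags)
    {
        /* true when booted with init_on_alloc=1 (or default-on configs) */
        if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
                                &init_on_alloc))
            return true;
        /* true when the caller explicitly asked for zeroed memory */
        return flags & __GFP_ZERO;
    }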

Signed-off-by: Tony Battersby <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index ee993bb59fc27..eaed3ffb42aa8 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -356,7 +356,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
break;
}
}
- if (!(mem_flags & __GFP_ZERO))
+ if (!want_init_on_alloc(mem_flags))
memset(retval, POOL_POISON_ALLOCATED, pool->size);
#endif
spin_unlock_irqrestore(&pool->lock, flags);
--
2.30.2


2023-01-26 21:54:44

by Keith Busch

Subject: [PATCHv4 06/12] dmapool: move debug code to own functions

From: Keith Busch <[email protected]>

Clean up the normal path by moving the debug code outside it.

Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 128 +++++++++++++++++++++++++++++++--------------------
1 file changed, 77 insertions(+), 51 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index eaed3ffb42aa8..30b069e999968 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -96,6 +96,78 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)

static DEVICE_ATTR_RO(pools);

+#ifdef DMAPOOL_DEBUG
+static void pool_check_block(struct dma_pool *pool, void *retval,
+ unsigned int offset, gfp_t mem_flags)
+{
+ int i;
+ u8 *data = retval;
+ /* page->offset is stored in first 4 bytes */
+ for (i = sizeof(offset); i < pool->size; i++) {
+ if (data[i] == POOL_POISON_FREED)
+ continue;
+ dev_err(pool->dev, "%s %s, %p (corrupted)\n",
+ __func__, pool->name, retval);
+
+ /*
+ * Dump the first 4 bytes even if they are not
+ * POOL_POISON_FREED
+ */
+ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_OFFSET, 16, 1,
+ data, pool->size, 1);
+ break;
+ }
+ if (!want_init_on_alloc(mem_flags))
+ memset(retval, POOL_POISON_ALLOCATED, pool->size);
+}
+
+static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
+ void *vaddr, dma_addr_t dma)
+{
+ unsigned int offset = vaddr - page->vaddr;
+ unsigned int chain = page->offset;
+
+ if ((dma - page->dma) != offset) {
+ dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
+ __func__, pool->name, vaddr, &dma);
+ return true;
+ }
+
+ while (chain < pool->allocation) {
+ if (chain != offset) {
+ chain = *(int *)(page->vaddr + chain);
+ continue;
+ }
+ dev_err(pool->dev, "%s %s, dma %pad already free\n",
+ __func__, pool->name, &dma);
+ return true;
+ }
+ memset(vaddr, POOL_POISON_FREED, pool->size);
+ return false;
+}
+
+static void pool_init_page(struct dma_pool *pool, struct dma_page *page)
+{
+ memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
+}
+#else
+static void pool_check_block(struct dma_pool *pool, void *retval,
+ unsigned int offset, gfp_t mem_flags)
+
+{
+}
+
+static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
+ void *vaddr, dma_addr_t dma)
+{
+ return false;
+}
+
+static void pool_init_page(struct dma_pool *pool, struct dma_page *page)
+{
+}
+#endif
+
/**
* dma_pool_create - Creates a pool of consistent memory blocks, for dma.
* @name: name of pool, for diagnostics
@@ -223,9 +295,7 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
&page->dma, mem_flags);
if (page->vaddr) {
-#ifdef DMAPOOL_DEBUG
- memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
-#endif
+ pool_init_page(pool, page);
pool_initialise_page(pool, page);
page->in_use = 0;
page->offset = 0;
@@ -245,9 +315,7 @@ static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
{
dma_addr_t dma = page->dma;

-#ifdef DMAPOOL_DEBUG
- memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
-#endif
+ pool_init_page(pool, page);
dma_free_coherent(pool->dev, pool->allocation, page->vaddr, dma);
list_del(&page->page_list);
kfree(page);
@@ -336,29 +404,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
page->offset = *(int *)(page->vaddr + offset);
retval = offset + page->vaddr;
*handle = offset + page->dma;
-#ifdef DMAPOOL_DEBUG
- {
- int i;
- u8 *data = retval;
- /* page->offset is stored in first 4 bytes */
- for (i = sizeof(page->offset); i < pool->size; i++) {
- if (data[i] == POOL_POISON_FREED)
- continue;
- dev_err(pool->dev, "%s %s, %p (corrupted)\n",
- __func__, pool->name, retval);
-
- /*
- * Dump the first 4 bytes even if they are not
- * POOL_POISON_FREED
- */
- print_hex_dump(KERN_ERR, "", DUMP_PREFIX_OFFSET, 16, 1,
- data, pool->size, 1);
- break;
- }
- }
- if (!want_init_on_alloc(mem_flags))
- memset(retval, POOL_POISON_ALLOCATED, pool->size);
-#endif
+ pool_check_block(pool, retval, offset, mem_flags);
spin_unlock_irqrestore(&pool->lock, flags);

if (want_init_on_alloc(mem_flags))
@@ -394,7 +440,6 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
struct dma_page *page;
unsigned long flags;
- unsigned int offset;

spin_lock_irqsave(&pool->lock, flags);
page = pool_find_page(pool, dma);
@@ -405,35 +450,16 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
return;
}

- offset = vaddr - page->vaddr;
if (want_init_on_free())
memset(vaddr, 0, pool->size);
-#ifdef DMAPOOL_DEBUG
- if ((dma - page->dma) != offset) {
+ if (pool_page_err(pool, page, vaddr, dma)) {
spin_unlock_irqrestore(&pool->lock, flags);
- dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
- __func__, pool->name, vaddr, &dma);
return;
}
- {
- unsigned int chain = page->offset;
- while (chain < pool->allocation) {
- if (chain != offset) {
- chain = *(int *)(page->vaddr + chain);
- continue;
- }
- spin_unlock_irqrestore(&pool->lock, flags);
- dev_err(pool->dev, "%s %s, dma %pad already free\n",
- __func__, pool->name, &dma);
- return;
- }
- }
- memset(vaddr, POOL_POISON_FREED, pool->size);
-#endif

page->in_use--;
*(int *)vaddr = page->offset;
- page->offset = offset;
+ page->offset = vaddr - page->vaddr;
/*
* Resist a temptation to do
* if (!is_page_busy(page)) pool_free_page(pool, page);
--
2.30.2


2023-01-26 21:54:46

by Keith Busch

Subject: [PATCHv4 08/12] dmapool: consolidate page initialization

From: Keith Busch <[email protected]>

Various fields of the dma pool's pages are set in different places.
Move them all to one function.

Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 900f2afa363a9..9e98065a68b1f 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -274,6 +274,9 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
unsigned int offset = 0;
unsigned int next_boundary = pool->boundary;

+ pool_init_page(pool, page);
+ page->in_use = 0;
+ page->offset = 0;
do {
unsigned int next = offset + pool->size;
if (unlikely((next + pool->size) >= next_boundary)) {
@@ -300,11 +303,7 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
return NULL;
}

- pool_init_page(pool, page);
pool_initialise_page(pool, page);
- page->in_use = 0;
- page->offset = 0;
-
return page;
}

--
2.30.2


2023-01-26 21:54:52

by Keith Busch

Subject: [PATCHv4 04/12] dmapool: cleanup integer types

From: Tony Battersby <[email protected]>

To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places. Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.
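
The size_t cast kept in pools_show() below matters because the product
of two unsigned ints wraps in 32-bit arithmetic before it is widened; a
contrived sketch of the hazard:

    unsigned int pages = 70000, blocks_per_page = 65536;

    size_t wrong = pages * blocks_per_page;         /* wraps at 2^32 first */
    size_t right = (size_t)pages * blocks_per_page; /* widen, then multiply */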

Signed-off-by: Tony Battersby <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 20616b760bb9c..ee993bb59fc27 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -43,10 +43,10 @@
struct dma_pool { /* the pool */
struct list_head page_list;
spinlock_t lock;
- size_t size;
struct device *dev;
- size_t allocation;
- size_t boundary;
+ unsigned int size;
+ unsigned int allocation;
+ unsigned int boundary;
char name[32];
struct list_head pools;
};
@@ -73,7 +73,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
mutex_lock(&pools_lock);
list_for_each_entry(pool, &dev->dma_pools, pools) {
unsigned pages = 0;
- unsigned blocks = 0;
+ size_t blocks = 0;

spin_lock_irq(&pool->lock);
list_for_each_entry(page, &pool->page_list, page_list) {
@@ -83,9 +83,10 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
spin_unlock_irq(&pool->lock);

/* per-pool info, no real statistics yet */
- size += sysfs_emit_at(buf, size, "%-16s %4u %4zu %4zu %2u\n",
+ size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2u\n",
pool->name, blocks,
- pages * (pool->allocation / pool->size),
+ (size_t) pages *
+ (pool->allocation / pool->size),
pool->size, pages);
}
mutex_unlock(&pools_lock);
@@ -133,7 +134,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
else if (align & (align - 1))
return NULL;

- if (size == 0)
+ if (size == 0 || size > INT_MAX)
return NULL;
else if (size < 4)
size = 4;
@@ -146,6 +147,8 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
else if ((boundary < size) || (boundary & (boundary - 1)))
return NULL;

+ boundary = min(boundary, allocation);
+
retval = kmalloc(sizeof(*retval), GFP_KERNEL);
if (!retval)
return retval;
@@ -306,7 +309,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
{
unsigned long flags;
struct dma_page *page;
- size_t offset;
+ unsigned int offset;
void *retval;

might_alloc(mem_flags);
--
2.30.2


2023-01-26 21:54:56

by Keith Busch

Subject: [PATCHv4 09/12] dmapool: simplify freeing

From: Keith Busch <[email protected]>

The actions for busy and not busy pages are mostly the same, so combine
them and remove the unnecessary function. Also, the pool is about to be
freed, so there's no need to poison the page data: we only check for
poison on alloc, which can't happen on a freed pool.

Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 22 ++++++----------------
1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 9e98065a68b1f..4dea2a0dbd336 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -312,16 +312,6 @@ static inline bool is_page_busy(struct dma_page *page)
return page->in_use != 0;
}

-static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
-{
- dma_addr_t dma = page->dma;
-
- pool_init_page(pool, page);
- dma_free_coherent(pool->dev, pool->allocation, page->vaddr, dma);
- list_del(&page->page_list);
- kfree(page);
-}
-
/**
* dma_pool_destroy - destroys a pool of dma memory blocks.
* @pool: dma pool that will be destroyed
@@ -349,14 +339,14 @@ void dma_pool_destroy(struct dma_pool *pool)
mutex_unlock(&pools_reg_lock);

list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
- if (is_page_busy(page)) {
+ if (!is_page_busy(page))
+ dma_free_coherent(pool->dev, pool->allocation,
+ page->vaddr, page->dma);
+ else
dev_err(pool->dev, "%s %s, %p busy\n", __func__,
pool->name, page->vaddr);
- /* leak the still-in-use consistent memory */
- list_del(&page->page_list);
- kfree(page);
- } else
- pool_free_page(pool, page);
+ list_del(&page->page_list);
+ kfree(page);
}

kfree(pool);
--
2.30.2


2023-01-26 21:55:06

by Keith Busch

Subject: [PATCHv4 12/12] dmapool: create/destroy cleanup

From: Keith Busch <[email protected]>

Set the 'empty' bool directly from the result of the function that
determines its value instead of using extra conditional logic.

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
---
mm/dmapool.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index bb8893b4f4b96..1920890ff8d3d 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -226,7 +226,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
{
struct dma_pool *retval;
size_t allocation;
- bool empty = false;
+ bool empty;

if (!dev)
return NULL;
@@ -276,8 +276,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
*/
mutex_lock(&pools_reg_lock);
mutex_lock(&pools_lock);
- if (list_empty(&dev->dma_pools))
- empty = true;
+ empty = list_empty(&dev->dma_pools);
list_add(&retval->pools, &dev->dma_pools);
mutex_unlock(&pools_lock);
if (empty) {
@@ -350,7 +349,7 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
void dma_pool_destroy(struct dma_pool *pool)
{
struct dma_page *page, *tmp;
- bool empty = false, busy = false;
+ bool empty, busy = false;

if (unlikely(!pool))
return;
@@ -358,8 +357,7 @@ void dma_pool_destroy(struct dma_pool *pool)
mutex_lock(&pools_reg_lock);
mutex_lock(&pools_lock);
list_del(&pool->pools);
- if (list_empty(&pool->dev->dma_pools))
- empty = true;
+ empty = list_empty(&pool->dev->dma_pools);
mutex_unlock(&pools_lock);
if (empty)
device_remove_file(pool->dev, &dev_attr_pools);
--
2.30.2


2023-01-26 21:55:17

by Keith Busch

Subject: [PATCHv4 11/12] dmapool: link blocks across pages

From: Keith Busch <[email protected]>

The allocated dmapool pages are never freed for the lifetime of the
pool. There is no need for the two level list+stack lookup for finding a
free block since nothing is ever removed from the list. Just use a
simple stack, reducing time complexity to constant.

The implementation inserts the stack linking elements and the dma handle
of the block within itself when freed. This means the smallest possible
dmapool block is increased to at most 16 bytes to accommodate these
fields, but there are no existing users requesting a dma pool smaller
than that anyway.
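
The 16-byte figure falls out of the two fields a free block stores in
itself, assuming 64-bit pointers and a 64-bit dma_addr_t (configs with
a 32-bit dma_addr_t need less):

    struct dma_block {
        struct dma_block *next_block;  /* 8 bytes: freelist link */
        dma_addr_t dma;                /* 8 bytes: the block's bus address */
    };
    /* sizeof(struct dma_block) == 16: the new minimum block size */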

Removing the list significantly improves performance. Using the
kernel's micro-benchmarking self test:

Before:

# modprobe dmapool_test
dmapool test: size:16 blocks:8192 time:57282
dmapool test: size:64 blocks:8192 time:172562
dmapool test: size:256 blocks:8192 time:789247
dmapool test: size:1024 blocks:2048 time:371823
dmapool test: size:4096 blocks:1024 time:362237

After:

# modprobe dmapool_test
dmapool test: size:16 blocks:8192 time:24997
dmapool test: size:64 blocks:8192 time:26584
dmapool test: size:256 blocks:8192 time:33542
dmapool test: size:1024 blocks:2048 time:9022
dmapool test: size:4096 blocks:1024 time:6045

The module test allocates quite a few blocks that may not accurately
represent how these pools are used in real life. For a more macro-level
benchmark, running fio high-depth + high-batched on nvme, this patch
shows submission and completion latency reduced by ~100usec each, a 1%
IOPS improvement, and perf record's time spent in dma_pool_alloc/free
reduced by half.

Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
---
mm/dmapool.c | 246 +++++++++++++++++++++++++--------------------------
1 file changed, 119 insertions(+), 127 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 21e6d362c7264..bb8893b4f4b96 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -15,7 +15,7 @@
* represented by the 'struct dma_pool' which keeps a doubly-linked list of
* allocated pages. Each page in the page_list is split into blocks of at
* least 'size' bytes. Free blocks are tracked in an unsorted singly-linked
- * list of free blocks within the page. Used blocks aren't tracked, but we
+ * list of free blocks across all pages. Used blocks aren't tracked, but we
* keep a count of how many are currently allocated from each page.
*/

@@ -40,9 +40,18 @@
#define DMAPOOL_DEBUG 1
#endif

+struct dma_block {
+ struct dma_block *next_block;
+ dma_addr_t dma;
+};
+
struct dma_pool { /* the pool */
struct list_head page_list;
spinlock_t lock;
+ struct dma_block *next_block;
+ size_t nr_blocks;
+ size_t nr_active;
+ size_t nr_pages;
struct device *dev;
unsigned int size;
unsigned int allocation;
@@ -55,8 +64,6 @@ struct dma_page { /* cacheable header for 'allocation' bytes */
struct list_head page_list;
void *vaddr;
dma_addr_t dma;
- unsigned int in_use;
- unsigned int offset;
};

static DEFINE_MUTEX(pools_lock);
@@ -64,30 +71,18 @@ static DEFINE_MUTEX(pools_reg_lock);

static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
{
- int size;
- struct dma_page *page;
struct dma_pool *pool;
+ unsigned size;

size = sysfs_emit(buf, "poolinfo - 0.1\n");

mutex_lock(&pools_lock);
list_for_each_entry(pool, &dev->dma_pools, pools) {
- unsigned pages = 0;
- size_t blocks = 0;
-
- spin_lock_irq(&pool->lock);
- list_for_each_entry(page, &pool->page_list, page_list) {
- pages++;
- blocks += page->in_use;
- }
- spin_unlock_irq(&pool->lock);
-
/* per-pool info, no real statistics yet */
- size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2u\n",
- pool->name, blocks,
- (size_t) pages *
- (pool->allocation / pool->size),
- pool->size, pages);
+ size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2zu\n",
+ pool->name, pool->nr_active,
+ pool->nr_blocks, pool->size,
+ pool->nr_pages);
}
mutex_unlock(&pools_lock);

@@ -97,17 +92,17 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
static DEVICE_ATTR_RO(pools);

#ifdef DMAPOOL_DEBUG
-static void pool_check_block(struct dma_pool *pool, void *retval,
- unsigned int offset, gfp_t mem_flags)
+static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
+ gfp_t mem_flags)
{
+ u8 *data = (void *)block;
int i;
- u8 *data = retval;
- /* page->offset is stored in first 4 bytes */
- for (i = sizeof(offset); i < pool->size; i++) {
+
+ for (i = sizeof(struct dma_block); i < pool->size; i++) {
if (data[i] == POOL_POISON_FREED)
continue;
- dev_err(pool->dev, "%s %s, %p (corrupted)\n",
- __func__, pool->name, retval);
+ dev_err(pool->dev, "%s %s, %p (corrupted)\n", __func__,
+ pool->name, block);

/*
* Dump the first 4 bytes even if they are not
@@ -117,31 +112,46 @@ static void pool_check_block(struct dma_pool *pool, void *retval,
data, pool->size, 1);
break;
}
+
if (!want_init_on_alloc(mem_flags))
- memset(retval, POOL_POISON_ALLOCATED, pool->size);
+ memset(block, POOL_POISON_ALLOCATED, pool->size);
+}
+
+static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
+{
+ struct dma_page *page;
+
+ list_for_each_entry(page, &pool->page_list, page_list) {
+ if (dma < page->dma)
+ continue;
+ if ((dma - page->dma) < pool->allocation)
+ return page;
+ }
+ return NULL;
}

-static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
- void *vaddr, dma_addr_t dma)
+static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
- unsigned int offset = vaddr - page->vaddr;
- unsigned int chain = page->offset;
+ struct dma_block *block = pool->next_block;
+ struct dma_page *page;

- if ((dma - page->dma) != offset) {
- dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
+ page = pool_find_page(pool, dma);
+ if (!page) {
+ dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
__func__, pool->name, vaddr, &dma);
return true;
}

- while (chain < pool->allocation) {
- if (chain != offset) {
- chain = *(int *)(page->vaddr + chain);
+ while (block) {
+ if (block != vaddr) {
+ block = block->next_block;
continue;
}
dev_err(pool->dev, "%s %s, dma %pad already free\n",
__func__, pool->name, &dma);
return true;
}
+
memset(vaddr, POOL_POISON_FREED, pool->size);
return false;
}
@@ -151,14 +161,12 @@ static void pool_init_page(struct dma_pool *pool, struct dma_page *page)
memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
}
#else
-static void pool_check_block(struct dma_pool *pool, void *retval,
- unsigned int offset, gfp_t mem_flags)
-
+static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
+ gfp_t mem_flags)
{
}

-static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
- void *vaddr, dma_addr_t dma)
+static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
if (want_init_on_free())
memset(vaddr, 0, pool->size);
@@ -170,6 +178,26 @@ static void pool_init_page(struct dma_pool *pool, struct dma_page *page)
}
#endif

+static struct dma_block *pool_block_pop(struct dma_pool *pool)
+{
+ struct dma_block *block = pool->next_block;
+
+ if (block) {
+ pool->next_block = block->next_block;
+ pool->nr_active++;
+ }
+ return block;
+}
+
+static void pool_block_push(struct dma_pool *pool, struct dma_block *block,
+ dma_addr_t dma)
+{
+ block->dma = dma;
+ block->next_block = pool->next_block;
+ pool->next_block = block;
+}
+
+
/**
* dma_pool_create - Creates a pool of consistent memory blocks, for dma.
* @name: name of pool, for diagnostics
@@ -210,8 +238,8 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,

if (size == 0 || size > INT_MAX)
return NULL;
- else if (size < 4)
- size = 4;
+ if (size < sizeof(struct dma_block))
+ size = sizeof(struct dma_block);

size = ALIGN(size, align);
allocation = max_t(size_t, size, PAGE_SIZE);
@@ -223,7 +251,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,

boundary = min(boundary, allocation);

- retval = kmalloc(sizeof(*retval), GFP_KERNEL);
+ retval = kzalloc(sizeof(*retval), GFP_KERNEL);
if (!retval)
return retval;

@@ -236,7 +264,6 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
retval->size = size;
retval->boundary = boundary;
retval->allocation = allocation;
-
INIT_LIST_HEAD(&retval->pools);

/*
@@ -273,21 +300,25 @@ EXPORT_SYMBOL(dma_pool_create);

static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
{
- unsigned int offset = 0;
- unsigned int next_boundary = pool->boundary;
+ unsigned int next_boundary = pool->boundary, offset = 0;
+ struct dma_block *block;

pool_init_page(pool, page);
- page->in_use = 0;
- page->offset = 0;
- do {
- unsigned int next = offset + pool->size;
- if (unlikely((next + pool->size) >= next_boundary)) {
- next = next_boundary;
+ while (offset + pool->size <= pool->allocation) {
+ if (offset + pool->size > next_boundary) {
+ offset = next_boundary;
next_boundary += pool->boundary;
+ continue;
}
- *(int *)(page->vaddr + offset) = next;
- offset = next;
- } while (offset < pool->allocation);
+
+ block = page->vaddr + offset;
+ pool_block_push(pool, block, page->dma + offset);
+ offset += pool->size;
+ pool->nr_blocks++;
+ }
+
+ list_add(&page->page_list, &pool->page_list);
+ pool->nr_pages++;
}

static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
@@ -305,15 +336,9 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
return NULL;
}

- pool_initialise_page(pool, page);
return page;
}

-static inline bool is_page_busy(struct dma_page *page)
-{
- return page->in_use != 0;
-}
-
/**
* dma_pool_destroy - destroys a pool of dma memory blocks.
* @pool: dma pool that will be destroyed
@@ -325,7 +350,7 @@ static inline bool is_page_busy(struct dma_page *page)
void dma_pool_destroy(struct dma_pool *pool)
{
struct dma_page *page, *tmp;
- bool empty = false;
+ bool empty = false, busy = false;

if (unlikely(!pool))
return;
@@ -340,13 +365,15 @@ void dma_pool_destroy(struct dma_pool *pool)
device_remove_file(pool->dev, &dev_attr_pools);
mutex_unlock(&pools_reg_lock);

+ if (pool->nr_active) {
+ dev_err(pool->dev, "%s %s busy\n", __func__, pool->name);
+ busy = true;
+ }
+
list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
- if (!is_page_busy(page))
+ if (!busy)
dma_free_coherent(pool->dev, pool->allocation,
page->vaddr, page->dma);
- else
- dev_err(pool->dev, "%s %s, %p busy\n", __func__,
- pool->name, page->vaddr);
list_del(&page->page_list);
kfree(page);
}
@@ -368,58 +395,40 @@ EXPORT_SYMBOL(dma_pool_destroy);
void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
dma_addr_t *handle)
{
- unsigned long flags;
+ struct dma_block *block;
struct dma_page *page;
- unsigned int offset;
- void *retval;
+ unsigned long flags;

might_alloc(mem_flags);

spin_lock_irqsave(&pool->lock, flags);
- list_for_each_entry(page, &pool->page_list, page_list) {
- if (page->offset < pool->allocation)
- goto ready;
- }
-
- /* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
- spin_unlock_irqrestore(&pool->lock, flags);
-
- page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
- if (!page)
- return NULL;
+ block = pool_block_pop(pool);
+ if (!block) {
+ /*
+ * pool_alloc_page() might sleep, so temporarily drop
+ * &pool->lock
+ */
+ spin_unlock_irqrestore(&pool->lock, flags);

- spin_lock_irqsave(&pool->lock, flags);
+ page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
+ if (!page)
+ return NULL;

- list_add(&page->page_list, &pool->page_list);
- ready:
- page->in_use++;
- offset = page->offset;
- page->offset = *(int *)(page->vaddr + offset);
- retval = offset + page->vaddr;
- *handle = offset + page->dma;
- pool_check_block(pool, retval, offset, mem_flags);
+ spin_lock_irqsave(&pool->lock, flags);
+ pool_initialise_page(pool, page);
+ block = pool_block_pop(pool);
+ }
spin_unlock_irqrestore(&pool->lock, flags);

+ *handle = block->dma;
+ pool_check_block(pool, block, mem_flags);
if (want_init_on_alloc(mem_flags))
- memset(retval, 0, pool->size);
+ memset(block, 0, pool->size);

- return retval;
+ return block;
}
EXPORT_SYMBOL(dma_pool_alloc);

-static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
-{
- struct dma_page *page;
-
- list_for_each_entry(page, &pool->page_list, page_list) {
- if (dma < page->dma)
- continue;
- if ((dma - page->dma) < pool->allocation)
- return page;
- }
- return NULL;
-}
-
/**
* dma_pool_free - put block back into dma pool
* @pool: the dma pool holding the block
@@ -431,31 +440,14 @@ static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
*/
void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
- struct dma_page *page;
+ struct dma_block *block = vaddr;
unsigned long flags;

spin_lock_irqsave(&pool->lock, flags);
- page = pool_find_page(pool, dma);
- if (!page) {
- spin_unlock_irqrestore(&pool->lock, flags);
- dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
- __func__, pool->name, vaddr, &dma);
- return;
+ if (!pool_block_err(pool, vaddr, dma)) {
+ pool_block_push(pool, block, dma);
+ pool->nr_active--;
}
-
- if (pool_page_err(pool, page, vaddr, dma)) {
- spin_unlock_irqrestore(&pool->lock, flags);
- return;
- }
-
- page->in_use--;
- *(int *)vaddr = page->offset;
- page->offset = vaddr - page->vaddr;
- /*
- * Resist a temptation to do
- * if (!is_page_busy(page)) pool_free_page(pool, page);
- * Better have a few empty pages hang around.
- */
spin_unlock_irqrestore(&pool->lock, flags);
}
EXPORT_SYMBOL(dma_pool_free);
--
2.30.2


2023-01-26 21:58:36

by Keith Busch

Subject: [PATCHv4 10/12] dmapool: don't memset on free twice

From: Keith Busch <[email protected]>

If debug is enabled, dmapool will poison the range, so no need to clear
it to 0 immediately before writing over it.

Signed-off-by: Keith Busch <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
mm/dmapool.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 4dea2a0dbd336..21e6d362c7264 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -160,6 +160,8 @@ static void pool_check_block(struct dma_pool *pool, void *retval,
static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
void *vaddr, dma_addr_t dma)
{
+ if (want_init_on_free())
+ memset(vaddr, 0, pool->size);
return false;
}

@@ -441,8 +443,6 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
return;
}

- if (want_init_on_free())
- memset(vaddr, 0, pool->size);
if (pool_page_err(pool, page, vaddr, dma)) {
spin_unlock_irqrestore(&pool->lock, flags);
return;
--
2.30.2


2023-01-26 22:22:16

by Andrew Morton

Subject: Re: [PATCHv4 00/12] dmapool enhancements

On Thu, 26 Jan 2023 13:51:13 -0800 Keith Busch <[email protected]> wrote:

> Time spent in dma_pool alloc/free increases linearly with the number of
> pages backing the pool. We can reduce this to constant time with minor
> changes to how free pages are tracked.

Do we have any performance testing results for realistic workloads?

2023-01-27 00:27:32

by Keith Busch

Subject: Re: [PATCHv4 00/12] dmapool enhancements

On Thu, Jan 26, 2023 at 02:22:09PM -0800, Andrew Morton wrote:
> On Thu, 26 Jan 2023 13:51:13 -0800 Keith Busch <[email protected]> wrote:
>
> > Time spent in dma_pool alloc/free increases linearly with the number of
> > pages backing the pool. We can reduce this to constant time with minor
> > changes to how free pages are tracked.
>
> Do we have any performance testing results for realistic workloads?

Yes, I mentioned this a little in patch 11: profiling nvme with high-depth
dmapool-allocating workloads. Results really depend on your environment, so
YMMV, but I was able to observe time spent in dma_pool_{alloc,free}() reduced
by half.

2023-02-01 17:42:21

by Bryan O'Donoghue

Subject: Re: [PATCHv4 11/12] dmapool: link blocks across pages

On Thu, Jan 26, 2023 at 9:55 PM Keith Busch <[email protected]> wrote:
>
> From: Keith Busch <[email protected]>
>
> The allocated dmapool pages are never freed for the lifetime of the
> pool. There is no need for the two level list+stack lookup for finding a
> free block since nothing is ever removed from the list. Just use a
> simple stack, reducing time complexity to constant.
>
> The implementation inserts the stack linking elements and the dma handle
> of the block within itself when freed. This means the smallest possible
> dmapool block is increased to at most 16 bytes to accommodate these
> fields, but there are no existing users requesting a dma pool smaller
> than that anyway.
>
> Removing the list significantly improves performance. Using the
> kernel's micro-benchmarking self test:
>
> Before:
>
> # modprobe dmapool_test
> dmapool test: size:16 blocks:8192 time:57282
> dmapool test: size:64 blocks:8192 time:172562
> dmapool test: size:256 blocks:8192 time:789247
> dmapool test: size:1024 blocks:2048 time:371823
> dmapool test: size:4096 blocks:1024 time:362237
>
> After:
>
> # modprobe dmapool_test
> dmapool test: size:16 blocks:8192 time:24997
> dmapool test: size:64 blocks:8192 time:26584
> dmapool test: size:256 blocks:8192 time:33542
> dmapool test: size:1024 blocks:2048 time:9022
> dmapool test: size:4096 blocks:1024 time:6045
>
> The module test allocates quite a few blocks that may not accurately
> represent how these pools are used in real life. For a more macro-level
> benchmark, running fio high-depth + high-batched on nvme, this patch
> shows submission and completion latency reduced by ~100usec each, a 1%
> IOPS improvement, and perf record's time spent in dma_pool_alloc/free
> reduced by half.
>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Keith Busch <[email protected]>

So.

Somehow this commit has broken USB device mode for me with the
Chipidea IP on msm8916 and msm8939.

Bisecting down, I find this is the inflection point:

commit ced6d06a81fb69e2f625b0c4b272b687a3789faa (HEAD -> usb-test-delete)
Author: Keith Busch <[email protected]>
Date: Thu Jan 26 13:51:24 2023 -0800

    dmapool: link blocks across pages

Host side sees:
[128418.779220] usb 5-1.3: New USB device found, idVendor=18d1,
idProduct=d00d, bcdDevice= 1.00
[128418.779225] usb 5-1.3: New USB device strings: Mfr=1, Product=2,
SerialNumber=3
[128418.779227] usb 5-1.3: Product: Android
[128418.779228] usb 5-1.3: Manufacturer: Google
[128418.779229] usb 5-1.3: SerialNumber: 1628e0d7
[128432.387235] usb 5-1.3: USB disconnect, device number 88
[128510.296291] usb 5-1.3: new full-speed USB device number 89 using xhci_hcd
[128525.812946] usb 5-1.3: device descriptor read/64, error -110
[128541.382920] usb 5-1.3: device descriptor read/64, error -110

The commit immediately prior is fine

commit c1e5fc194960aa3d3daa4f102a29e962f25a64d1
Author: Keith Busch <[email protected]>
Date: Thu Jan 26 13:51:23 2023 -0800

dmapool: don't memset on free twice

[128750.414739] usb 5-1.3: New USB device found, idVendor=18d1,
idProduct=d00d, bcdDevice= 1.00
[128750.414745] usb 5-1.3: New USB device strings: Mfr=1, Product=2,
SerialNumber=3
[128750.414746] usb 5-1.3: Product: Android
[128750.414747] usb 5-1.3: Manufacturer: Google
[128750.414748] usb 5-1.3: SerialNumber: 1628e0d7
[128764.035758] usb 5-1.3: USB disconnect, device number 91
[128788.305767] usb 5-1.3: new full-speed USB device number 92 using xhci_hcd
[128788.406795] usb 5-1.3: not running at top speed; connect to a high speed hub
[128788.427793] usb 5-1.3: New USB device found, idVendor=0525,
idProduct=a4a2, bcdDevice= 6.02
[128788.427798] usb 5-1.3: New USB device strings: Mfr=1, Product=2,
SerialNumber=0
[128788.427799] usb 5-1.3: Product: RNDIS/Ethernet Gadget
[128788.427801] usb 5-1.3: Manufacturer: Linux
6.2.0-rc4-00517-gc1e5fc194960-dirty with ci_hdrc_msm
[128788.490939] cdc_ether 5-1.3:1.0 usb0: register 'cdc_ether' at
usb-0000:31:00.3-1.3, CDC Ethernet Device, 36:0e:12:58:48:ec

---
bod

2023-02-01 17:44:03

by Keith Busch

Subject: Re: [PATCHv4 11/12] dmapool: link blocks across pages

On Wed, Feb 01, 2023 at 05:42:04PM +0000, Bryan O'Donoghue wrote:
> On Thu, Jan 26, 2023 at 9:55 PM Keith Busch <[email protected]> wrote:
> So.
>
> Somehow this commit has broken USB device mode for me with the
> Chipidea IP on msm8916 and msm8939.
>
> Bisecting down I find this is the inflection point
>
> commit ced6d06a81fb69e2f625b0c4b272b687a3789faa (HEAD -> usb-test-delete)

Thanks for the report. I'll look into this immediately.

2023-02-02 00:39:09

by Bryan O'Donoghue

Subject: Re: [PATCHv4 11/12] dmapool: link blocks across pages

On 01/02/2023 17:43, Keith Busch wrote:
> On Wed, Feb 01, 2023 at 05:42:04PM +0000, Bryan O'Donoghue wrote:
>> On Thu, Jan 26, 2023 at 9:55 PM Keith Busch <[email protected]> wrote:
>> So.
>>
>> Somehow this commit has broken USB device mode for me with the
>> Chipidea IP on msm8916 and msm8939.
>>
>> Bisecting down I find this is the inflection point
>>
>> commit ced6d06a81fb69e2f625b0c4b272b687a3789faa (HEAD -> usb-test-delete)
>
> Thanks for the report. I'll look into this immediately.

Just to confirm: if I revert that patch on the tip of my working tree, USB
device mode works again.

Here's a dirty working tree

https://git.codelinaro.org/bryan.odonoghue/kernel/-/commits/linux-next-23-02-01-msm8939-nocpr

---
bod

2023-02-27 00:54:52

by Guenter Roeck

Subject: Re: [PATCHv4 11/12] dmapool: link blocks across pages

Hi,

On Thu, Jan 26, 2023 at 01:51:24PM -0800, Keith Busch wrote:
> From: Keith Busch <[email protected]>
>
> The allocated dmapool pages are never freed for the lifetime of the
> pool. There is no need for the two level list+stack lookup for finding a
> free block since nothing is ever removed from the list. Just use a
> simple stack, reducing time complexity to constant.
>
> The implementation inserts the stack linking elements and the dma handle
> of the block within itself when freed. This means the smallest possible
> dmapool block is increased to at most 16 bytes to accommodate these
> fields, but there are no existing users requesting a dma pool smaller
> than that anyway.
>
> Removing the list significantly improves performance. Using the
> kernel's micro-benchmarking self test:
>
> Before:
>
> # modprobe dmapool_test
> dmapool test: size:16 blocks:8192 time:57282
> dmapool test: size:64 blocks:8192 time:172562
> dmapool test: size:256 blocks:8192 time:789247
> dmapool test: size:1024 blocks:2048 time:371823
> dmapool test: size:4096 blocks:1024 time:362237
>
> After:
>
> # modprobe dmapool_test
> dmapool test: size:16 blocks:8192 time:24997
> dmapool test: size:64 blocks:8192 time:26584
> dmapool test: size:256 blocks:8192 time:33542
> dmapool test: size:1024 blocks:2048 time:9022
> dmapool test: size:4096 blocks:1024 time:6045
>
> The module test allocates quite a few blocks that may not accurately
> represent how these pools are used in real life. For a more macro-level
> benchmark, running fio high-depth + high-batched on nvme, this patch
> shows submission and completion latency reduced by ~100usec each, a 1%
> IOPS improvement, and perf record's time spent in dma_pool_alloc/free
> reduced by half.
>

With this patch in linux-next, I see a boot failure when trying to boot
a powernv qemu emulation from the SCSI MEGASAS controller.

Qemu command line is

qemu-system-ppc64 -M powernv -cpu POWER9 -m 2G \
-kernel arch/powerpc/boot/zImage.epapr \
-snapshot \
-device megasas,id=scsi,bus=pcie.0 -device scsi-hd,bus=scsi.0,drive=d0 \
-drive file=rootfs-el.ext2,format=raw,if=none,id=d0 \
-device i82557a,netdev=net0,bus=pcie.1 -netdev user,id=net0 \
-nographic -vga none -monitor null -no-reboot \
--append "root=/dev/sda console=tty console=hvc0"

Reverting this patch together with "dmapool: create/destroy cleanup"
fixes the problem.

Bisect log is attached for reference.

Guenter

---
# bad: [8232539f864ca60474e38eb42d451f5c26415856] Add linux-next specific files for 20230225
# good: [c9c3395d5e3dcc6daee66c6908354d47bf98cb0c] Linux 6.2
git bisect start 'HEAD' 'v6.2'
# good: [fe3130bc4df0b1303de4321af2bc4dcee5d7db2f] cifs: reuse cifs_match_ipaddr for comparison of dstaddr too
git bisect good fe3130bc4df0b1303de4321af2bc4dcee5d7db2f
# good: [8138ddac3c324feb92cc30f6d0d3a1bba51345a9] Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
git bisect good 8138ddac3c324feb92cc30f6d0d3a1bba51345a9
# bad: [2a15ddbcd09ca3a7843a48832884e37e703eaf83] Merge branch 'master' of git://linuxtv.org/media_tree.git
git bisect bad 2a15ddbcd09ca3a7843a48832884e37e703eaf83
# bad: [a7d241d71cf464413307df69177ae2dec8481d37] Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
git bisect bad a7d241d71cf464413307df69177ae2dec8481d37
# bad: [446eb7f1f4aec9232d4b10222123a4566a8b1a95] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap.git
git bisect bad 446eb7f1f4aec9232d4b10222123a4566a8b1a95
# good: [14c61d2100377dde2f6338395325b4090279d6a7] soc: document merges
git bisect good 14c61d2100377dde2f6338395325b4090279d6a7
# bad: [cb26c07e8a8acaecb43228181e1eae68ece8db0e] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git
git bisect bad cb26c07e8a8acaecb43228181e1eae68ece8db0e
# bad: [d37d53a39d853fcc2121770fd3b61f274985d594] Merge branch 'mm-everything' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
git bisect bad d37d53a39d853fcc2121770fd3b61f274985d594
# bad: [708a06c601945c3415240ed0950e37fe27dd8e60] mm/userfaultfd: support WP on multiple VMAs
git bisect bad 708a06c601945c3415240ed0950e37fe27dd8e60
# good: [beb78ba6c0dbed73b38d5ed74bf47aa2c65fafa7] dmapool: move debug code to own functions
git bisect good beb78ba6c0dbed73b38d5ed74bf47aa2c65fafa7
# bad: [a2cb3f101b06f78258cf0c6813b3a17bd1ec846a] zsmalloc: remove insert_zspage() ->inuse optimization
git bisect bad a2cb3f101b06f78258cf0c6813b3a17bd1ec846a
# good: [e637ac603aec2b0a73e50fd8031481c6e55bf139] dmapool: don't memset on free twice
git bisect good e637ac603aec2b0a73e50fd8031481c6e55bf139
# bad: [8f5073712e32685dfeb4925f13a95c6eb9f10cd8] dmapool: create/destroy cleanup
git bisect bad 8f5073712e32685dfeb4925f13a95c6eb9f10cd8
# bad: [28b0a0c64bc658e176368f9270dc8085aa469c63] dmapool: link blocks across pages
git bisect bad 28b0a0c64bc658e176368f9270dc8085aa469c63
# first bad commit: [28b0a0c64bc658e176368f9270dc8085aa469c63] dmapool: link blocks across pages

2023-02-28 01:02:00

by Keith Busch

Subject: Re: [PATCHv4 11/12] dmapool: link blocks across pages

On Sun, Feb 26, 2023 at 04:54:45PM -0800, Guenter Roeck wrote:
> With this patch in linux-next, I see a boot failure when trying to boot
> a powernv qemu emulation from the SCSI MEGASAS controller.
>
> Qemu command line is
>
> qemu-system-ppc64 -M powernv -cpu POWER9 -m 2G \
> -kernel arch/powerpc/boot/zImage.epapr \
> -snapshot \
> -device megasas,id=scsi,bus=pcie.0 -device scsi-hd,bus=scsi.0,drive=d0 \
> -drive file=rootfs-el.ext2,format=raw,if=none,id=d0 \
> -device i82557a,netdev=net0,bus=pcie.1 -netdev user,id=net0 \
> -nographic -vga none -monitor null -no-reboot \
> --append "root=/dev/sda console=tty console=hvc0"
>
> Reverting this patch together with "dmapool: create/destroy cleanup"
> fixes the problem.

Thanks for the notice. I was able to recreate the failure, and it does look
like it is fixed by my more recent update changing the dma pool block order,
which is still pending out of tree. Would you also be able to verify? The
patch is available here:

https://lore.kernel.org/linux-mm/Y%[email protected]/T/#t

2023-02-28 02:18:14

by Guenter Roeck

Subject: Re: [PATCHv4 11/12] dmapool: link blocks across pages

On Mon, Feb 27, 2023 at 06:01:48PM -0700, Keith Busch wrote:
> On Sun, Feb 26, 2023 at 04:54:45PM -0800, Guenter Roeck wrote:
> > With this patch in linux-next, I see a boot failure when trying to boot
> > a powernv qemu emulation from the SCSI MEGASAS controller.
> >
> > Qemu command line is
> >
> > qemu-system-ppc64 -M powernv -cpu POWER9 -m 2G \
> > -kernel arch/powerpc/boot/zImage.epapr \
> > -snapshot \
> > -device megasas,id=scsi,bus=pcie.0 -device scsi-hd,bus=scsi.0,drive=d0 \
> > -drive file=rootfs-el.ext2,format=raw,if=none,id=d0 \
> > -device i82557a,netdev=net0,bus=pcie.1 -netdev user,id=net0 \
> > -nographic -vga none -monitor null -no-reboot \
> > --append "root=/dev/sda console=tty console=hvc0"
> >
> > Reverting this patch together with "dmapool: create/destroy cleanup"
> > fixes the problem.
>
> Thanks for the notice. I was able to recreate, and it does look like this is
> fixed with my more recent update changing the dma pool block order, and that is
> still pending out of tree. Would you also be able to verify? The patch is
> available here:
>
> https://lore.kernel.org/linux-mm/Y%[email protected]/T/#t

Yes, that fixes the problem I have observed. I sent a Tested-by:
a minute ago.

Guenter