2007-09-26 18:58:12

by Matthew Wilcox

Subject: dmapool


I have a series of patches to dmapool that I'd like opinions on. I
don't have any performance numbers yet, but some of the patches are a
good idea with or without them.

One of the problems with dmapool is that it doesn't have a maintainer
listed. I've spent enough time looking at it over the past couple of
weeks that I think I'd be comfortable in that role, so unless someone
objects, I'll submit a patch to MAINTAINERS to add myself.

I shan't post the first two patches. The first simply Lindents the code
and the second moves it to mm/ (a better fit than drivers/base, IMO).

Four patches to follow.

--
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."


2007-09-26 19:01:31

by Matthew Wilcox

Subject: [PATCH 1/4] Avoid taking waitqueue lock in dmapool

With one trivial change (taking the lock slightly earlier on wakeup
from schedule), all uses of the waitq are under the pool lock, so we
can use the locked (or __) versions of the wait queue functions, and
avoid the extra spinlock.
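
For illustration, here is the pattern reduced to a minimal sketch (not
part of the patch; every demo_* name is made up):

#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

static DEFINE_SPINLOCK(demo_lock);              /* hypothetical */
static DECLARE_WAIT_QUEUE_HEAD(demo_waitq);     /* hypothetical */
static unsigned int demo_free_blocks;           /* hypothetical */

/*
 * Every add, remove and wake below happens with demo_lock held, so the
 * waitqueue's internal spinlock never protects anything and the
 * __-prefixed (caller-locked) variants are safe.
 */
static void demo_wait_for_block(void)
{
        DECLARE_WAITQUEUE(wait, current);
        unsigned long flags;

        spin_lock_irqsave(&demo_lock, flags);
        while (!demo_free_blocks) {
                set_current_state(TASK_INTERRUPTIBLE);
                __add_wait_queue(&demo_waitq, &wait);   /* lock held */
                spin_unlock_irqrestore(&demo_lock, flags);
                schedule();
                /* take the lock before touching the waitqueue again */
                spin_lock_irqsave(&demo_lock, flags);
                __remove_wait_queue(&demo_waitq, &wait);
        }
        demo_free_blocks--;
        spin_unlock_irqrestore(&demo_lock, flags);
}

static void demo_free_block(void)
{
        unsigned long flags;

        spin_lock_irqsave(&demo_lock, flags);
        demo_free_blocks++;
        if (waitqueue_active(&demo_waitq))
                wake_up_locked(&demo_waitq);    /* no second spinlock taken */
        spin_unlock_irqrestore(&demo_lock, flags);
}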

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/dmapool.c | 9 +++++----
1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 6201371..a359b5e 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -273,8 +273,8 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
size_t offset;
void *retval;

- restart:
spin_lock_irqsave(&pool->lock, flags);
+ restart:
list_for_each_entry(page, &pool->page_list, page_list) {
int i;
/* only cachable accesses here ... */
@@ -296,12 +296,13 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
DECLARE_WAITQUEUE(wait, current);

current->state = TASK_INTERRUPTIBLE;
- add_wait_queue(&pool->waitq, &wait);
+ __add_wait_queue(&pool->waitq, &wait);
spin_unlock_irqrestore(&pool->lock, flags);

schedule_timeout(POOL_TIMEOUT_JIFFIES);

- remove_wait_queue(&pool->waitq, &wait);
+ spin_lock_irqsave(&pool->lock, flags);
+ __remove_wait_queue(&pool->waitq, &wait);
goto restart;
}
retval = NULL;
@@ -401,7 +402,7 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
page->in_use--;
set_bit(block, &page->bitmap[map]);
if (waitqueue_active(&pool->waitq))
- wake_up(&pool->waitq);
+ wake_up_locked(&pool->waitq);
/*
* Resist a temptation to do
* if (!is_page_busy(bpp, page->bitmap)) pool_free_page(pool, page);
--
1.5.3.1

2007-09-26 19:01:44

by Matthew Wilcox

Subject: [PATCH 2/4] dmapool: Validate parameters to dma_pool_create

Check that 'align' is a power of two, like the API specifies.
Align 'size' to 'align' correctly -- the current code has an off-by-one:
it adds 'align + 1' where rounding up needs 'align - 1'. The ALIGN macro
in kernel.h gets this right.
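
To make the off-by-one concrete, here is a throwaway userspace
demonstration (not part of the patch; the macro names are local
stand-ins, and ALIGN_UP mirrors what kernel.h's ALIGN computes):

#include <stdio.h>
#include <stddef.h>

/* the old rounding in dma_pool_create() */
#define OLD_ROUND(size, align)  (((size) + (align) + 1) & ~((align) - 1))
/* what ALIGN() in kernel.h computes */
#define ALIGN_UP(size, align)   (((size) + (align) - 1) & ~((align) - 1))

int main(void)
{
        size_t size = 127, align = 64;

        printf("old: %zu\n", OLD_ROUND(size, align));   /* 192: a whole chunk too far */
        printf("new: %zu\n", ALIGN_UP(size, align));    /* 128 */
        return 0;
}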

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/dmapool.c | 15 ++++++++-------
1 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index a359b5e..f5d12a7 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -107,17 +107,18 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
{
struct dma_pool *retval;

- if (align == 0)
+ if (align == 0) {
align = 1;
- if (size == 0)
+ } else if (align & (align - 1)) {
return NULL;
- else if (size < align)
- size = align;
- else if ((size % align) != 0) {
- size += align + 1;
- size &= ~(align - 1);
}

+ if (size == 0)
+ return NULL;
+
+ if ((size % align) != 0)
+ size = ALIGN(size, align);
+
if (allocation == 0) {
if (PAGE_SIZE < size)
allocation = size;
--
1.5.3.1

2007-09-26 19:01:59

by Matthew Wilcox

Subject: [PATCH 3/4] Change dmapool free block management

Also add documentation for how dma pools work, move the header above the
includes, add my copyright, add the original author's copyright, add a
GPL v2 licence to the file and fix the includes.

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/dmapool.c | 161 +++++++++++++++++++++++++++++++--------------------------
1 files changed, 88 insertions(+), 73 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index f5d12a7..4418e4d 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -1,25 +1,45 @@
+/*
+ * DMA Pool allocator
+ *
+ * Copyright 2001 David Brownell
+ * Copyright 2007 Intel Corporation
+ * Author: Matthew Wilcox <[email protected]>
+ *
+ * This software may be redistributed and/or modified under the terms of
+ * the GNU General Public License ("GPL") version 2 as published by the
+ * Free Software Foundation.
+ *
+ * This allocator returns small blocks of a given size which are DMA-able by
+ * the given device. It uses the dma_alloc_coherent page allocator to get
+ * new pages, then splits them up into blocks of the required size.
+ * Many older drivers still have their own code to do this.
+ *
+ * The current design of this allocator is fairly simple. The pool is
+ * represented by the 'struct dma_pool' which keeps a doubly-linked list of
+ * allocated pages. Each page in the page_list is split into blocks of at
+ * least 'size' bytes. Free blocks are tracked in an unsorted singly-linked
+ * list of free blocks within the page. Used blocks aren't tracked, but we
+ * keep a count of how many are currently allocated from each page.
+ */

#include <linux/device.h>
-#include <linux/mm.h>
-#include <asm/io.h> /* Needed for i386 to build */
-#include <asm/scatterlist.h> /* Needed for i386 to build */
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
-#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
#include <linux/module.h>
+#include <linux/mutex.h>
#include <linux/poison.h>
#include <linux/sched.h>
-
-/*
- * Pool allocator ... wraps the dma_alloc_coherent page allocator, so
- * small blocks are easily used by drivers for bus mastering controllers.
- * This should probably be sharing the guts of the slab allocator.
- */
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/wait.h>

struct dma_pool { /* the pool */
struct list_head page_list;
spinlock_t lock;
- size_t blocks_per_page;
size_t size;
struct device *dev;
size_t allocation;
@@ -32,8 +52,8 @@ struct dma_page { /* cacheable header for 'allocation' bytes */
struct list_head page_list;
void *vaddr;
dma_addr_t dma;
- unsigned in_use;
- unsigned long bitmap[0];
+ unsigned int in_use;
+ unsigned int offset;
};

#define POOL_TIMEOUT_JIFFIES ((100 /* msec */ * HZ) / 1000)
@@ -68,8 +88,8 @@ show_pools(struct device *dev, struct device_attribute *attr, char *buf)

/* per-pool info, no real statistics yet */
temp = scnprintf(next, size, "%-16s %4u %4Zu %4Zu %2u\n",
- pool->name,
- blocks, pages * pool->blocks_per_page,
+ pool->name, blocks,
+ pages * (pool->allocation / pool->size),
pool->size, pages);
size -= temp;
next += temp;
@@ -113,9 +133,12 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
return NULL;
}

- if (size == 0)
+ if (size == 0) {
return NULL;
-
+ } else if (size < 4) {
+ size = 4;
+ }
+
if ((size % align) != 0)
size = ALIGN(size, align);

@@ -140,7 +163,6 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
spin_lock_init(&retval->lock);
retval->size = size;
retval->allocation = allocation;
- retval->blocks_per_page = allocation / size;
init_waitqueue_head(&retval->waitq);

if (dev) {
@@ -165,28 +187,36 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
return retval;
}

+static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
+{
+ unsigned int offset = 0;
+
+ do {
+ unsigned int next = offset + pool->size;
+ if (unlikely((next + pool->size) >= pool->allocation))
+ next = pool->allocation;
+ *(int *)(page->vaddr + offset) = next;
+ offset = next;
+ } while (offset < pool->allocation);
+}
+
static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
{
struct dma_page *page;
- int mapsize;
-
- mapsize = pool->blocks_per_page;
- mapsize = (mapsize + BITS_PER_LONG - 1) / BITS_PER_LONG;
- mapsize *= sizeof(long);

- page = kmalloc(mapsize + sizeof *page, mem_flags);
+ page = kmalloc(sizeof(*page), mem_flags);
if (!page)
return NULL;
- page->vaddr = dma_alloc_coherent(pool->dev,
- pool->allocation,
+ page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
&page->dma, mem_flags);
if (page->vaddr) {
- memset(page->bitmap, 0xff, mapsize); // bit set == free
#ifdef CONFIG_DEBUG_SLAB
memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
#endif
+ pool_initialise_page(pool, page);
list_add(&page->page_list, &pool->page_list);
page->in_use = 0;
+ page->offset = 0;
} else {
kfree(page);
page = NULL;
@@ -194,14 +224,9 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
return page;
}

-static inline int is_page_busy(int blocks, unsigned long *bitmap)
+static inline int is_page_busy(struct dma_page *page)
{
- while (blocks > 0) {
- if (*bitmap++ != ~0UL)
- return 1;
- blocks -= BITS_PER_LONG;
- }
- return 0;
+ return page->in_use != 0;
}

static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
@@ -236,7 +261,7 @@ void dma_pool_destroy(struct dma_pool *pool)
struct dma_page *page;
page = list_entry(pool->page_list.next,
struct dma_page, page_list);
- if (is_page_busy(pool->blocks_per_page, page->bitmap)) {
+ if (is_page_busy(page)) {
if (pool->dev)
dev_err(pool->dev,
"dma_pool_destroy %s, %p busy\n",
@@ -263,34 +288,21 @@ void dma_pool_destroy(struct dma_pool *pool)
*
* This returns the kernel virtual address of a currently unused block,
* and reports its dma address through the handle.
- * If such a memory block can't be allocated, null is returned.
+ * If such a memory block can't be allocated, %NULL is returned.
*/
void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
dma_addr_t * handle)
{
unsigned long flags;
struct dma_page *page;
- int map, block;
size_t offset;
void *retval;

spin_lock_irqsave(&pool->lock, flags);
restart:
list_for_each_entry(page, &pool->page_list, page_list) {
- int i;
- /* only cachable accesses here ... */
- for (map = 0, i = 0;
- i < pool->blocks_per_page; i += BITS_PER_LONG, map++) {
- if (page->bitmap[map] == 0)
- continue;
- block = ffz(~page->bitmap[map]);
- if ((i + block) < pool->blocks_per_page) {
- clear_bit(block, &page->bitmap[map]);
- offset = (BITS_PER_LONG * map) + block;
- offset *= pool->size;
- goto ready;
- }
- }
+ if (page->offset < pool->allocation)
+ goto ready;
}
if (!(page = pool_alloc_page(pool, GFP_ATOMIC))) {
if (mem_flags & __GFP_WAIT) {
@@ -309,11 +321,10 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
retval = NULL;
goto done;
}
-
- clear_bit(0, &page->bitmap[0]);
- offset = 0;
ready:
page->in_use++;
+ offset = page->offset;
+ page->offset = *(int *)(page->vaddr + offset);
retval = offset + page->vaddr;
*handle = offset + page->dma;
#ifdef CONFIG_DEBUG_SLAB
@@ -355,7 +366,7 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
{
struct dma_page *page;
unsigned long flags;
- int map, block;
+ unsigned int offset;

if ((page = pool_find_page(pool, dma)) == 0) {
if (pool->dev)
@@ -368,13 +379,9 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
return;
}

- block = dma - page->dma;
- block /= pool->size;
- map = block / BITS_PER_LONG;
- block %= BITS_PER_LONG;
-
+ offset = vaddr - page->vaddr;
#ifdef CONFIG_DEBUG_SLAB
- if (((dma - page->dma) + (void *)page->vaddr) != vaddr) {
+ if ((dma - page->dma) != offset) {
if (pool->dev)
dev_err(pool->dev,
"dma_pool_free %s, %p (bad vaddr)/%Lx\n",
@@ -385,28 +392,36 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
pool->name, vaddr, (unsigned long long)dma);
return;
}
- if (page->bitmap[map] & (1UL << block)) {
- if (pool->dev)
- dev_err(pool->dev,
- "dma_pool_free %s, dma %Lx already free\n",
- pool->name, (unsigned long long)dma);
- else
- printk(KERN_ERR
- "dma_pool_free %s, dma %Lx already free\n",
- pool->name, (unsigned long long)dma);
- return;
+ {
+ unsigned int chain = page->offset;
+ while (chain < pool->allocation) {
+ if (chain != offset) {
+ chain = *(int *)(page->vaddr + chain);
+ continue;
+ }
+ if (pool->dev)
+ dev_err(pool->dev, "dma_pool_free %s, dma %Lx "
+ "already free\n", pool->name,
+ (unsigned long long)dma);
+ else
+ printk(KERN_ERR "dma_pool_free %s, dma %Lx "
+ "already free\n", pool->name,
+ (unsigned long long)dma);
+ return;
+ }
}
memset(vaddr, POOL_POISON_FREED, pool->size);
#endif

spin_lock_irqsave(&pool->lock, flags);
page->in_use--;
- set_bit(block, &page->bitmap[map]);
+ *(int *)vaddr = page->offset;
+ page->offset = offset;
if (waitqueue_active(&pool->waitq))
wake_up_locked(&pool->waitq);
/*
* Resist a temptation to do
- * if (!is_page_busy(bpp, page->bitmap)) pool_free_page(pool, page);
+ * if (!is_page_busy(page)) pool_free_page(pool, page);
* Better have a few empty pages hang around.
*/
spin_unlock_irqrestore(&pool->lock, flags);
--
1.5.3.1

2007-09-26 19:02:27

by Matthew Wilcox

Subject: [PATCH 4/4] dmapool: Improve memory usage for devices which can't cross boundaries

The previous implementation simply refused to allocate more than a
boundary's worth of data from an entire page. Some users didn't know
this, so specified things like SMP_CACHE_BYTES, not realising the
horrible waste of memory that this was. It's fairly easy to correct
this problem, just by ensuring we don't cross a boundary within a page.
This even helps drivers like EHCI (which can't cross a 4k boundary)
on machines with larger page sizes.
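
A throwaway userspace model of the new free-chain setup may help (not
from the patch itself; the numbers are one example -- 16k pages,
EHCI's 4k boundary, 3k blocks):

#include <stdio.h>

int main(void)
{
        unsigned int allocation = 16384;        /* page size (example) */
        unsigned int boundary = 4096;           /* e.g. EHCI's 4k limit */
        unsigned int size = 3072;               /* block size (example) */
        unsigned int offset = 0, next_boundary = boundary;

        do {
                unsigned int next = offset + size;
                /* next block would reach or cross the boundary */
                if (next + size >= next_boundary) {
                        next = next_boundary;
                        next_boundary += boundary;
                }
                printf("block at %5u, chain -> %5u\n", offset, next);
                offset = next;
        } while (offset < allocation);
        return 0;
}

This carves blocks at 0, 4096, 8192 and 12288 from a single page; the
old code would instead have capped each underlying allocation at the
boundary and never used the rest of the page.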

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/dmapool.c | 29 +++++++++++++++++------------
1 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 4418e4d..cc43d20 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -43,6 +43,7 @@ struct dma_pool { /* the pool */
size_t size;
struct device *dev;
size_t allocation;
+ size_t boundary;
char name[32];
wait_queue_head_t waitq;
struct list_head pools;
@@ -107,7 +108,7 @@ static DEVICE_ATTR(pools, S_IRUGO, show_pools, NULL);
* @dev: device that will be doing the DMA
* @size: size of the blocks in this pool.
* @align: alignment requirement for blocks; must be a power of two
- * @allocation: returned blocks won't cross this boundary (or zero)
+ * @boundary: returned blocks won't cross this power of two boundary
* Context: !in_interrupt()
*
* Returns a dma allocation pool with the requested characteristics, or
@@ -117,15 +118,16 @@ static DEVICE_ATTR(pools, S_IRUGO, show_pools, NULL);
* cache flushing primitives. The actual size of blocks allocated may be
* larger than requested because of alignment.
*
- * If allocation is nonzero, objects returned from dma_pool_alloc() won't
+ * If @boundary is nonzero, objects returned from dma_pool_alloc() won't
* cross that size boundary. This is useful for devices which have
* addressing restrictions on individual DMA transfers, such as not crossing
* boundaries of 4KBytes.
*/
struct dma_pool *dma_pool_create(const char *name, struct device *dev,
- size_t size, size_t align, size_t allocation)
+ size_t size, size_t align, size_t boundary)
{
struct dma_pool *retval;
+ size_t allocation;

if (align == 0) {
align = 1;
@@ -142,14 +144,13 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
if ((size % align) != 0)
size = ALIGN(size, align);

- if (allocation == 0) {
- if (PAGE_SIZE < size)
- allocation = size;
- else
- allocation = PAGE_SIZE;
- // FIXME: round up for less fragmentation
- } else if (allocation < size)
+ allocation = max_t(size_t, size, PAGE_SIZE);
+
+ if (!boundary) {
+ boundary = allocation;
+ } else if ((boundary < size) || (boundary & (boundary - 1))) {
return NULL;
+ }

retval = kmalloc_node(sizeof *retval, GFP_KERNEL, dev_to_node(dev));
if (!retval)
@@ -162,6 +163,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
INIT_LIST_HEAD(&retval->page_list);
spin_lock_init(&retval->lock);
retval->size = size;
+ retval->boundary = boundary;
retval->allocation = allocation;
init_waitqueue_head(&retval->waitq);

@@ -190,11 +192,14 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
{
unsigned int offset = 0;
+ unsigned int next_boundary = pool->boundary;

do {
unsigned int next = offset + pool->size;
- if (unlikely((next + pool->size) >= pool->allocation))
- next = pool->allocation;
+ if (unlikely((next + pool->size) >= next_boundary)) {
+ next = next_boundary;
+ next_boundary += pool->boundary;
+ }
*(int *)(page->vaddr + offset) = next;
offset = next;
} while (offset < pool->allocation);
--
1.5.3.1

2007-09-26 19:23:29

by Roland Dreier

Subject: Re: [PATCH 3/4] Change dmapool free block management

> Also add documentation for how dma pools work, move the header above the
> includes, add my copyright, add the original author's copyright, add a
> GPL v2 licence to the file and fix the includes.

The fact that you have all these other changes mixed in makes the main
change very difficult to review. Also I assume the main change is
"Change dmapool free block management" -- but I'm left wondering
exactly how and why you're changing it.

- R.

2007-09-26 19:47:52

by Roel Kluin

Subject: Re: [PATCH 2/4] dmapool: Validate parameters to dma_pool_create

Matthew Wilcox wrote:
> Check that 'align' is a power of two, like the API specifies.
> Align 'size' to 'align' correctly -- the current code has an off-by-one:
> it adds 'align + 1' where rounding up needs 'align - 1'. The ALIGN macro
> in kernel.h gets this right.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> ---
> mm/dmapool.c | 15 ++++++++-------
> 1 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index a359b5e..f5d12a7 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -107,17 +107,18 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
> {
> struct dma_pool *retval;
>
> - if (align == 0)
> + if (align == 0) {
> align = 1;
> - if (size == 0)
> + } else if (align & (align - 1)) {
> return NULL;
> - else if (size < align)
> - size = align;
> - else if ((size % align) != 0) {
> - size += align + 1;
> - size &= ~(align - 1);
> }
>
> + if (size == 0)
> + return NULL;

The brackets in the first if/else are not required, and you could combine the two statements:

if (align == 0)
align = 1;
else if (align & (align - 1) || size == 0)
return NULL;
> +
> + if ((size % align) != 0)
> + size = ALIGN(size, align);
> +
> if (allocation == 0) {
> if (PAGE_SIZE < size)
> allocation = size;

2007-09-26 20:07:37

by Roel Kluin

Subject: Re: [PATCH 3/4] Change dmapool free block management

Matthew Wilcox wrote:

[...]

> @@ -113,9 +133,12 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
> return NULL;
> }
>
> - if (size == 0)
> + if (size == 0) {
> return NULL;
> -
> + } else if (size < 4) {
> + size = 4;
> + }

you could do without brackets

[...]

> @@ -263,34 +288,21 @@ void dma_pool_destroy(struct dma_pool *pool)
> *
> * This returns the kernel virtual address of a currently unused block,
> * and reports its dma address through the handle.
> - * If such a memory block can't be allocated, null is returned.
> + * If such a memory block can't be allocated, %NULL is returned.
> */
> void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
> dma_addr_t * handle)
> {
> unsigned long flags;
> struct dma_page *page;
> - int map, block;
> size_t offset;
> void *retval;
>
> spin_lock_irqsave(&pool->lock, flags);
> restart:
> list_for_each_entry(page, &pool->page_list, page_list) {
> - int i;
> - /* only cachable accesses here ... */
> - for (map = 0, i = 0;
> - i < pool->blocks_per_page; i += BITS_PER_LONG, map++) {
> - if (page->bitmap[map] == 0)
> - continue;
> - block = ffz(~page->bitmap[map]);
> - if ((i + block) < pool->blocks_per_page) {
> - clear_bit(block, &page->bitmap[map]);
> - offset = (BITS_PER_LONG * map) + block;
> - offset *= pool->size;
> - goto ready;
> - }
> - }
> + if (page->offset < pool->allocation)
> + goto ready;
> }
> if (!(page = pool_alloc_page(pool, GFP_ATOMIC))) {

page = pool_alloc_page(pool, GFP_ATOMIC);
if (!page) {

[...]

> @@ -355,7 +366,7 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
> {
> struct dma_page *page;
> unsigned long flags;
> - int map, block;
> + unsigned int offset;
>
> if ((page = pool_find_page(pool, dma)) == 0) {

page = pool_find_page(pool, dma);
if (page == 0) {

> if (pool->dev)
> @@ -368,13 +379,9 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
> return;
> }
>
> - block = dma - page->dma;
> - block /= pool->size;
> - map = block / BITS_PER_LONG;
> - block %= BITS_PER_LONG;
> -
> + offset = vaddr - page->vaddr;
> #ifdef CONFIG_DEBUG_SLAB
> - if (((dma - page->dma) + (void *)page->vaddr) != vaddr) {
> + if ((dma - page->dma) != offset) {

if (dma - page->dma != offset) {
[...]

2007-09-26 20:12:40

by Roel Kluin

Subject: Re: [PATCH 4/4] dmapool: Improve memory usage for devices which can't cross boundaries

Matthew Wilcox wrote:

[...]

> @@ -142,14 +144,13 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
> if ((size % align) != 0)
> size = ALIGN(size, align);
>
> - if (allocation == 0) {
> - if (PAGE_SIZE < size)
> - allocation = size;
> - else
> - allocation = PAGE_SIZE;
> - // FIXME: round up for less fragmentation
> - } else if (allocation < size)
> + allocation = max_t(size_t, size, PAGE_SIZE);
> +
> + if (!boundary) {
> + boundary = allocation;
> + } else if ((boundary < size) || (boundary & (boundary - 1))) {
> return NULL;
> + }

if (!boundary)
boundary = allocation;
else if (boundary < size || boundary & (boundary - 1))
return NULL;

[...]

> @@ -190,11 +192,14 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
> static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
> {
> unsigned int offset = 0;
> + unsigned int next_boundary = pool->boundary;
>
> do {
> unsigned int next = offset + pool->size;
> - if (unlikely((next + pool->size) >= pool->allocation))
> - next = pool->allocation;
> + if (unlikely((next + pool->size) >= next_boundary)) {

if (unlikely(next + pool->size >= next_boundary)) {

[...]

2007-09-26 21:08:33

by Roel Kluin

Subject: Re: [PATCH 2/4] dmapool: Validate parameters to dma_pool_create

Matthew Wilcox wrote:
> On Wed, Sep 26, 2007 at 09:47:41PM +0200, roel wrote:
>> The brackets in the first if/else are not required, and you could combine the two statements:
>
> You mean braces, not brackets. And I find this little fetish of yours
> highly disturbing. I prefer to use braces, and will continue to do so,
> regardless of your nitpicking.

Well, as you say, you like the braces, so it appears to be your fetish. Of course you don't
have to make any changes; I am just pointing out that they aren't needed. There's no need
for the offensive tone either.

Roel

2007-09-26 21:10:22

by David Miller

Subject: Re: [PATCH 1/4] Avoid taking waitqueue lock in dmapool

From: Matthew Wilcox <[email protected]>
Date: Wed, 26 Sep 2007 15:01:16 -0400

> With one trivial change (taking the lock slightly earlier on wakeup
> from schedule), all uses of the waitq are under the pool lock, so we
> can use the locked (or __) versions of the wait queue functions, and
> avoid the extra spinlock.
>
> Signed-off-by: Matthew Wilcox <[email protected]>

This one looks good to me:

Acked-by: David S. Miller <[email protected]>

2007-09-26 21:10:38

by David Miller

Subject: Re: [PATCH 2/4] dmapool: Validate parameters to dma_pool_create

From: Matthew Wilcox <[email protected]>
Date: Wed, 26 Sep 2007 15:01:17 -0400

> Check that 'align' is a power of two, like the API specifies.
> Align 'size' to 'align' correctly -- the current code has an off-by-one:
> it adds 'align + 1' where rounding up needs 'align - 1'. The ALIGN macro
> in kernel.h gets this right.
>
> Signed-off-by: Matthew Wilcox <[email protected]>

Acked-by: David S. Miller <[email protected]>

2007-09-26 21:13:33

by David Miller

Subject: Re: [PATCH 3/4] Change dmapool free block management

From: Matthew Wilcox <[email protected]>
Date: Wed, 26 Sep 2007 15:01:18 -0400

> Also add documentation for how dma pools work, move the header above the
> includes, add my copyright, add the original author's copyright, add a
> GPL v2 licence to the file and fix the includes.
>
> Signed-off-by: Matthew Wilcox <[email protected]>

Correct or not, you need to describe how you are changing the
free block management, why that transformation is valid and
worthwhile, etc.

I can see that you're getting rid of a bitmap and replacing it
with the dma_page->in_use et al. stuff, but you need to describe
that in detail.

2007-09-26 21:14:53

by David Miller

Subject: Re: [PATCH 4/4] dmapool: Improve memory usage for devices which can't cross boundaries

From: Matthew Wilcox <[email protected]>
Date: Wed, 26 Sep 2007 15:01:19 -0400

> The previous implementation simply refused to allocate more than a
> boundary's worth of data from an entire page. Some users didn't know
> this, so specified things like SMP_CACHE_BYTES, not realising the
> horrible waste of memory that this was. It's fairly easy to correct
> this problem, just by ensuring we don't cross a boundary within a page.
> This even helps drivers like EHCI (which can't cross a 4k boundary)
> on machines with larger page sizes.
>
> Signed-off-by: Matthew Wilcox <[email protected]>

This one looks good to me:

Acked-by: David S. Miller <[email protected]>