2007-10-03 02:50:21

by Arthur Kepner

Subject: [PATCH 4/5] infiniband: add "dmabarrier" argument to ib_umem_get()


Pass a "dmabarrier" argument to ib_umem_get() and use the new
argument to control setting the DMA_BARRIER_ATTR attribute on
the memory that ib_umem_get() maps for DMA.

Signed-off-by: Arthur Kepner <[email protected]>

---

 drivers/infiniband/core/umem.c               |    8 ++++++--
 drivers/infiniband/hw/amso1100/c2_provider.c |    2 +-
 drivers/infiniband/hw/cxgb3/iwch_provider.c  |    2 +-
 drivers/infiniband/hw/ehca/ehca_mrmw.c       |    2 +-
 drivers/infiniband/hw/ipath/ipath_mr.c       |    2 +-
 drivers/infiniband/hw/mlx4/cq.c              |    2 +-
 drivers/infiniband/hw/mlx4/doorbell.c        |    2 +-
 drivers/infiniband/hw/mlx4/mr.c              |    3 ++-
 drivers/infiniband/hw/mlx4/qp.c              |    2 +-
 drivers/infiniband/hw/mlx4/srq.c             |    2 +-
 drivers/infiniband/hw/mthca/mthca_provider.c |    2 +-
 include/rdma/ib_umem.h                       |    4 ++--
 12 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 664d2fa..093b58d 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -69,9 +69,10 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
* @addr: userspace virtual address to start at
* @size: length of region to pin
* @access: IB_ACCESS_xxx flags for memory being pinned
+ * @dmabarrier: set "dmabarrier" attribute on this memory
*/
struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
- size_t size, int access)
+ size_t size, int access, int dmabarrier)
{
struct ib_umem *umem;
struct page **page_list;
@@ -83,6 +84,9 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
int ret;
int off;
int i;
+ int flags = dmabarrier ? dma_flags_set_attr(DMA_BARRIER_ATTR,
+ DMA_BIDIRECTIONAL) :
+ DMA_BIDIRECTIONAL;

if (!can_do_mlock())
return ERR_PTR(-EPERM);
@@ -160,7 +164,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
chunk->nmap = ib_dma_map_sg(context->device,
&chunk->page_list[0],
chunk->nents,
- DMA_BIDIRECTIONAL);
+ flags);
if (chunk->nmap <= 0) {
for (i = 0; i < chunk->nents; ++i)
put_page(chunk->page_list[i].page);
diff --git a/drivers/infiniband/hw/amso1100/c2_provider.c b/drivers/infiniband/hw/amso1100/c2_provider.c
index 997cf15..17243b7 100644
--- a/drivers/infiniband/hw/amso1100/c2_provider.c
+++ b/drivers/infiniband/hw/amso1100/c2_provider.c
@@ -449,7 +449,7 @@ static struct ib_mr *c2_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
return ERR_PTR(-ENOMEM);
c2mr->pd = c2pd;

- c2mr->umem = ib_umem_get(pd->uobject->context, start, length, acc);
+ c2mr->umem = ib_umem_get(pd->uobject->context, start, length, acc, 0);
if (IS_ERR(c2mr->umem)) {
err = PTR_ERR(c2mr->umem);
kfree(c2mr);
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index f0c7775..d0a514c 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -601,7 +601,7 @@ static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
if (!mhp)
return ERR_PTR(-ENOMEM);

- mhp->umem = ib_umem_get(pd->uobject->context, start, length, acc);
+ mhp->umem = ib_umem_get(pd->uobject->context, start, length, acc, 0);
if (IS_ERR(mhp->umem)) {
err = PTR_ERR(mhp->umem);
kfree(mhp);
diff --git a/drivers/infiniband/hw/ehca/ehca_mrmw.c b/drivers/infiniband/hw/ehca/ehca_mrmw.c
index d97eda3..c13c11c 100644
--- a/drivers/infiniband/hw/ehca/ehca_mrmw.c
+++ b/drivers/infiniband/hw/ehca/ehca_mrmw.c
@@ -329,7 +329,7 @@ struct ib_mr *ehca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
}

e_mr->umem = ib_umem_get(pd->uobject->context, start, length,
- mr_access_flags);
+ mr_access_flags, 0);
if (IS_ERR(e_mr->umem)) {
ib_mr = (void *)e_mr->umem;
goto reg_user_mr_exit1;
diff --git a/drivers/infiniband/hw/ipath/ipath_mr.c b/drivers/infiniband/hw/ipath/ipath_mr.c
index e442470..e351222 100644
--- a/drivers/infiniband/hw/ipath/ipath_mr.c
+++ b/drivers/infiniband/hw/ipath/ipath_mr.c
@@ -195,7 +195,7 @@ struct ib_mr *ipath_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
goto bail;
}

- umem = ib_umem_get(pd->uobject->context, start, length, mr_access_flags);
+ umem = ib_umem_get(pd->uobject->context, start, length, mr_access_flags, 0);
if (IS_ERR(umem))
return (void *) umem;

diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index 8bf44da..96ec149 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -122,7 +122,7 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, int entries, int vector
}

cq->umem = ib_umem_get(context, ucmd.buf_addr, buf_size,
- IB_ACCESS_LOCAL_WRITE);
+ IB_ACCESS_LOCAL_WRITE, 0);
if (IS_ERR(cq->umem)) {
err = PTR_ERR(cq->umem);
goto err_cq;
diff --git a/drivers/infiniband/hw/mlx4/doorbell.c b/drivers/infiniband/hw/mlx4/doorbell.c
index 1c36087..0afde2d 100644
--- a/drivers/infiniband/hw/mlx4/doorbell.c
+++ b/drivers/infiniband/hw/mlx4/doorbell.c
@@ -181,7 +181,7 @@ int mlx4_ib_db_map_user(struct mlx4_ib_ucontext *context, unsigned long virt,
page->user_virt = (virt & PAGE_MASK);
page->refcnt = 0;
page->umem = ib_umem_get(&context->ibucontext, virt & PAGE_MASK,
- PAGE_SIZE, 0);
+ PAGE_SIZE, 0, 0);
if (IS_ERR(page->umem)) {
err = PTR_ERR(page->umem);
kfree(page);
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 85ae906..3e05eb6 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -133,7 +133,8 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
if (!mr)
return ERR_PTR(-ENOMEM);

- mr->umem = ib_umem_get(pd->uobject->context, start, length, access_flags);
+ mr->umem = ib_umem_get(pd->uobject->context, start, length,
+ access_flags, 0);
if (IS_ERR(mr->umem)) {
err = PTR_ERR(mr->umem);
goto err_free;
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index 85c51bd..f4dd553 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -335,7 +335,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
goto err;

qp->umem = ib_umem_get(pd->uobject->context, ucmd.buf_addr,
- qp->buf_size, 0);
+ qp->buf_size, 0, 0);
if (IS_ERR(qp->umem)) {
err = PTR_ERR(qp->umem);
goto err;
diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c
index 408748f..7c17b89 100644
--- a/drivers/infiniband/hw/mlx4/srq.c
+++ b/drivers/infiniband/hw/mlx4/srq.c
@@ -115,7 +115,7 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd,
}

srq->umem = ib_umem_get(pd->uobject->context, ucmd.buf_addr,
- buf_size, 0);
+ buf_size, 0, 0);
if (IS_ERR(srq->umem)) {
err = PTR_ERR(srq->umem);
goto err_srq;
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 6bcde1c..17486a4 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -1027,7 +1027,7 @@ static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
if (!mr)
return ERR_PTR(-ENOMEM);

- mr->umem = ib_umem_get(pd->uobject->context, start, length, acc);
+ mr->umem = ib_umem_get(pd->uobject->context, start, length, acc, 0);
if (IS_ERR(mr->umem)) {
err = PTR_ERR(mr->umem);
goto err;
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index c533d6c..054d279 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -61,7 +61,7 @@ struct ib_umem_chunk {
#ifdef CONFIG_INFINIBAND_USER_MEM

struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
- size_t size, int access);
+ size_t size, int access, int dmabarrier);
void ib_umem_release(struct ib_umem *umem);
int ib_umem_page_count(struct ib_umem *umem);

@@ -71,7 +71,7 @@ int ib_umem_page_count(struct ib_umem *umem);

static inline struct ib_umem *ib_umem_get(struct ib_ucontext *context,
unsigned long addr, size_t size,
- int access) {
+ int access, int dmabarrier) {
return ERR_PTR(-EINVAL);
}
static inline void ib_umem_release(struct ib_umem *umem) { }


2007-10-03 03:10:32

by David Miller

Subject: Re: [PATCH 4/5] infiniband: add "dmabarrier" argument to ib_umem_get()

From: [email protected]
Date: Tue, 2 Oct 2007 19:49:06 -0700

>
> Pass a "dmabarrier" argument to ib_umem_get() and use the new
> argument to control setting the DMA_BARRIER_ATTR attribute on
> the memory that ib_umem_get() maps for DMA.
>
> Signed-off-by: Arthur Kepner <[email protected]>

Acked-by: David S. Miller <[email protected]>

However, I'm a little unhappy with how IA64 achieves this.

The last argument to dma_map_foo() is an enum, not an int; every
platform other than IA64 properly defines the last argument as
"enum dma_data_direction". It can take one of several distinct
values; it is not a mask.

This hijacking of the DMA direction argument is hokey at best, and
at worst is type bypassing, which is going to explode subtly for
someone in the future and result in a long, painful debugging
session.

Adding another argument could be painful to do cleanly, but at
least with inline functions and macros it could simply evaluate to
nothing on platforms that don't need it.

Either that, or we should turn the thing into an integer "flags"
across the board and audit every DMA mapping implementation so that it
can handle multiple bits being set. But that's really ugly and
invites mistakes as I detailed above.

2007-10-03 17:52:31

by Arthur Kepner

Subject: Re: [PATCH 4/5] infiniband: add "dmabarrier" argument to ib_umem_get()

On Tue, Oct 02, 2007 at 08:10:23PM -0700, David Miller wrote:
> From: [email protected]
> Date: Tue, 2 Oct 2007 19:49:06 -0700
>
> >
> > Pass a "dmabarrier" argument to ib_umem_get() and use the new
> > argument to control setting the DMA_BARRIER_ATTR attribute on
> > the memory that ib_umem_get() maps for DMA.
> >
> > Signed-off-by: Arthur Kepner <[email protected]>
>
> Acked-by: David S. Miller <[email protected]>
>
> However I'm a little unhappy with how IA64 achieves this.
>
> The last argument for dma_map_foo() is an enum not an int,
> every platform other than IA64 properly defines the last
> argument as "enum dma_data_direction". It can take one
> of several distinct values, it is not a mask.
>
> This hijacking of the DMA direction argument is hokey at
> best, and at worst is type bypassing which is going to
> explode subtly for someone in the future and result in
> a long painful debugging session.
> ....

I don't dispute your point about abusing the enum here; it just
seemed the least objectionable and most expedient way to go. But
I'll add that ia64 isn't alone: x86_64 also uses an int for the
final argument to its dma_map_* implementations.

--
Arthur