2005-09-20 22:08:51

by Roland Dreier

Subject: [git pull] InfiniBand fixes

Linus, please pull from

master.kernel.org:/pub/scm/linux/kernel/git/roland/infiniband.git

This tree is also available from kernel.org mirrors at:

rsync://rsync.kernel.org/pub/scm/linux/kernel/git/roland/infiniband.git

This will pull the following changes (patches also sent as replies to this email):

Hal Rosenstock:
IPoIB: Fix SA client retransmission strategy
IB: Fix data length for RMPP SA sends

Michael S. Tsirkin:
IPoIB: fix module removal race
IB/mthca: Fix device removal memory leak

Roland Dreier:
IB/mthca: assign ACK timeout field correctly
IB/mthca: fix posting of first work request
IB/mthca: Initialize eq->nent before we use it
IB/mthca: Fix posting work requests to shared receive queues
IB/mthca: Don't try to set srq->last for userspace SRQs
IPoIB: Don't flush workqueue from within workqueue

drivers/infiniband/core/user_mad.c | 5 +-
drivers/infiniband/hw/mthca/mthca_eq.c | 16 ++------
drivers/infiniband/hw/mthca/mthca_qp.c | 51 +++++++++++-------------
drivers/infiniband/hw/mthca/mthca_srq.c | 25 +++++-------
drivers/infiniband/ulp/ipoib/ipoib.h | 2 -
drivers/infiniband/ulp/ipoib/ipoib_ib.c | 4 +-
drivers/infiniband/ulp/ipoib/ipoib_main.c | 2 +
drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 13 +++---
8 files changed, 55 insertions(+), 63 deletions(-)


2005-09-20 22:08:56

by Roland Dreier

Subject: [PATCH 09/10] IPoIB: Don't flush workqueue from within workqueue

ipoib_mcast_restart_task() is always called from within the
single-threaded IPoIB workqueue, so flushing the workqueue from within
the function can lead to a recursion overflow. But since we're
running in a single-threaded workqueue, we're already synchronized
against other items in the workqueue, so just get rid of the flush in
ipoib_mcast_restart_task().
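
For illustration, a sketch of the failure mode (names here are made up;
the 2.6 workqueue code runs pending items by hand when
flush_workqueue() is called from the workqueue's own thread, so a work
item that flushes can end up re-entering itself):

	static struct workqueue_struct *my_wq;	/* single-threaded */

	static void my_restart_task(void *data)
	{
		/*
		 * If another my_restart_task is already pending, flushing
		 * here runs it synchronously from inside this instance,
		 * and that instance can flush again -- unbounded recursion.
		 */
		flush_workqueue(my_wq);

		/* ... the actual restart work ... */
	}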

Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/ulp/ipoib/ipoib.h | 2 +-
drivers/infiniband/ulp/ipoib/ipoib_ib.c | 4 ++--
drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 7 ++++---
3 files changed, 7 insertions(+), 6 deletions(-)

8d2cae0651502028bf64844508ab18528bbd65c2
diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
--- a/drivers/infiniband/ulp/ipoib/ipoib.h
+++ b/drivers/infiniband/ulp/ipoib/ipoib.h
@@ -257,7 +257,7 @@ void ipoib_mcast_send(struct net_device

void ipoib_mcast_restart_task(void *dev_ptr);
int ipoib_mcast_start_thread(struct net_device *dev);
-int ipoib_mcast_stop_thread(struct net_device *dev);
+int ipoib_mcast_stop_thread(struct net_device *dev, int flush);

void ipoib_mcast_dev_down(struct net_device *dev);
void ipoib_mcast_dev_flush(struct net_device *dev);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -432,7 +432,7 @@ int ipoib_ib_dev_down(struct net_device
flush_workqueue(ipoib_workqueue);
}

- ipoib_mcast_stop_thread(dev);
+ ipoib_mcast_stop_thread(dev, 1);

/*
* Flush the multicast groups first so we stop any multicast joins. The
@@ -599,7 +599,7 @@ void ipoib_ib_dev_cleanup(struct net_dev

ipoib_dbg(priv, "cleaning up ib_dev\n");

- ipoib_mcast_stop_thread(dev);
+ ipoib_mcast_stop_thread(dev, 1);

/* Delete the broadcast address and the local address */
ipoib_mcast_dev_down(dev);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -598,7 +598,7 @@ int ipoib_mcast_start_thread(struct net_
return 0;
}

-int ipoib_mcast_stop_thread(struct net_device *dev)
+int ipoib_mcast_stop_thread(struct net_device *dev, int flush)
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ipoib_mcast *mcast;
@@ -610,7 +610,8 @@ int ipoib_mcast_stop_thread(struct net_d
cancel_delayed_work(&priv->mcast_task);
up(&mcast_mutex);

- flush_workqueue(ipoib_workqueue);
+ if (flush)
+ flush_workqueue(ipoib_workqueue);

if (priv->broadcast && priv->broadcast->query) {
ib_sa_cancel_query(priv->broadcast->query_id, priv->broadcast->query);
@@ -832,7 +833,7 @@ void ipoib_mcast_restart_task(void *dev_

ipoib_dbg_mcast(priv, "restarting multicast task\n");

- ipoib_mcast_stop_thread(dev);
+ ipoib_mcast_stop_thread(dev, 0);

spin_lock_irqsave(&priv->lock, flags);

2005-09-20 22:09:31

by Roland Dreier

Subject: [PATCH 02/10] IB/mthca: assign ACK timeout field correctly

The hardware reads the ACK timeout from the five most significant bits
of struct mthca_qp_path's ackto field, not the least significant bits,
so the driver has to shift the timeout value into place.  Without this,
the hardware sees an exponent of timeout >> 3, and the effective
timeout ends up exponentially smaller than requested.
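
As a worked example, taking the usual IB semantics where the local ACK
timeout is 4.096 usecs * 2^exponent:

	u8 exponent = 14;		/* request ~67 msecs */

	u8 bad  = exponent;		/* 0x0e: top 5 bits = 1  -> ~8 usecs  */
	u8 good = exponent << 3;	/* 0x70: top 5 bits = 14 -> ~67 msecs */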

Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/hw/mthca/mthca_qp.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

6fd9dccd77024ea85b65aa3e8f1cce22caa0d578
diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c
--- a/drivers/infiniband/hw/mthca/mthca_qp.c
+++ b/drivers/infiniband/hw/mthca/mthca_qp.c
@@ -687,7 +687,7 @@ int mthca_modify_qp(struct ib_qp *ibqp,
}

if (attr_mask & IB_QP_TIMEOUT) {
- qp_context->pri_path.ackto = attr->timeout;
+ qp_context->pri_path.ackto = attr->timeout << 3;
qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_ACK_TIMEOUT);
}

2005-09-20 22:08:26

by Roland Dreier

Subject: [PATCH 01/10] IPoIB: fix module removal race

From: Michael S. Tsirkin <[email protected]>

Since IPoIB uses queue_delayed_work() to run its flush task on port
state events, it must flush scheduled work after unregistering the
event handler; otherwise a flush task queued by a last-minute event can
still be running while the device is torn down.
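
A sketch of the teardown ordering the patch enforces (mirroring the
removal path in the diff below):

	ib_unregister_event_handler(&priv->event_handler);
	/*
	 * An event that arrived just before unregistration may have
	 * queued the flush task; wait for it to finish before the
	 * device underneath it goes away.
	 */
	flush_scheduled_work();

	unregister_netdev(priv->dev);
	ipoib_dev_cleanup(priv->dev);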

Signed-off-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/ulp/ipoib/ipoib_main.c | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)

b21607256f3370b9eba48cd0b67e8686c6b51a64
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -1005,6 +1005,7 @@ debug_failed:

register_failed:
ib_unregister_event_handler(&priv->event_handler);
+ flush_scheduled_work();

event_failed:
ipoib_dev_cleanup(priv->dev);
@@ -1057,6 +1058,7 @@ static void ipoib_remove_one(struct ib_d

list_for_each_entry_safe(priv, tmp, dev_list, list) {
ib_unregister_event_handler(&priv->event_handler);
+ flush_scheduled_work();

unregister_netdev(priv->dev);
ipoib_dev_cleanup(priv->dev);

2005-09-20 22:09:32

by Roland Dreier

Subject: [PATCH 04/10] IPoIB: Fix SA client retransmission strategy

From: Hal Rosenstock <[email protected]>

We got a little mixed up with what the backoff member of the IPoIB
multicast group structure holds: sometimes it was used as a number of
seconds, and sometimes as a number of jiffies.  Fix the code so that
backoff is always kept in seconds.
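
For reference, queue_delayed_work() takes its delay in jiffies, so the
conversion is now done at the queuing site:

	mcast->backoff = 1;			/* seconds */
	/* ... */
	queue_delayed_work(ipoib_workqueue, &priv->mcast_task,
			   mcast->backoff * HZ);	/* jiffies */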

Signed-off-by: Hal Rosenstock <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>


---

drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

d24ef0519e081774db6a1515ad8dadefd3fcd508
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -145,7 +145,7 @@ static struct ipoib_mcast *ipoib_mcast_a

mcast->dev = dev;
mcast->created = jiffies;
- mcast->backoff = HZ;
+ mcast->backoff = 1;
mcast->logcount = 0;

INIT_LIST_HEAD(&mcast->list);
@@ -396,7 +396,7 @@ static void ipoib_mcast_join_complete(in
IPOIB_GID_ARG(mcast->mcmember.mgid), status);

if (!status && !ipoib_mcast_join_finish(mcast, mcmember)) {
- mcast->backoff = HZ;
+ mcast->backoff = 1;
down(&mcast_mutex);
if (test_bit(IPOIB_MCAST_RUN, &priv->flags))
queue_work(ipoib_workqueue, &priv->mcast_task);
@@ -496,7 +496,7 @@ static void ipoib_mcast_join(struct net_
if (test_bit(IPOIB_MCAST_RUN, &priv->flags))
queue_delayed_work(ipoib_workqueue,
&priv->mcast_task,
- mcast->backoff);
+ mcast->backoff * HZ);
up(&mcast_mutex);
} else
mcast->query_id = ret;

2005-09-20 22:11:09

by Roland Dreier

Subject: [PATCH 07/10] IB/mthca: Don't try to set srq->last for userspace SRQs

Userspace SRQs don't have a buffer allocated for them in the kernel, so
it doesn't make sense to set srq->last during initialization; in fact,
doing so can crash by following a nonexistent buffer pointer.  Move the
assignment into mthca_alloc_srq_buf(), which only runs for kernel SRQs.

Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/hw/mthca/mthca_srq.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

6577ae51cf52f5fb0e4a85e673dd7bf2d0074e3e
diff --git a/drivers/infiniband/hw/mthca/mthca_srq.c b/drivers/infiniband/hw/mthca/mthca_srq.c
--- a/drivers/infiniband/hw/mthca/mthca_srq.c
+++ b/drivers/infiniband/hw/mthca/mthca_srq.c
@@ -172,6 +172,8 @@ static int mthca_alloc_srq_buf(struct mt
scatter->lkey = cpu_to_be32(MTHCA_INVAL_LKEY);
}

+ srq->last = get_wqe(srq, srq->max - 1);
+
return 0;
}

@@ -263,7 +265,6 @@ int mthca_alloc_srq(struct mthca_dev *de

srq->first_free = 0;
srq->last_free = srq->max - 1;
- srq->last = get_wqe(srq, srq->max - 1);

return 0;

2005-09-20 22:10:05

by Roland Dreier

Subject: [PATCH 03/10] IB/mthca: fix posting of first work request

Fix posting the first WQE for mem-free HCAs: we need to link to the
previous WQE even in that case.  While we're at it, simplify the code
for Tavor-mode HCAs: the conditional test isn't really needed there
either, since we can always link to the previous WQE in the same way.

Based on Michael S. Tsirkin's analogous fix for userspace libmthca.
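
The chaining pattern this makes unconditional looks like this (a sketch
of the send path below; new_wqe_offset and opcode stand in for the real
expressions):

	((struct mthca_next_seg *) prev_wqe)->nda_op =
		cpu_to_be32(new_wqe_offset | opcode);	/* where + what */
	wmb();		/* publish nda_op before marking the link valid */
	((struct mthca_next_seg *) prev_wqe)->ee_nds =
		cpu_to_be32(MTHCA_NEXT_DBD | size);

Initializing sq.last/rq.last (and srq->last) to the final WQE of the
circular ring means prev_wqe is always valid, so the NULL checks can
go away.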

Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/hw/mthca/mthca_qp.c | 48 ++++++++++++++-----------------
drivers/infiniband/hw/mthca/mthca_srq.c | 14 ++++-----
2 files changed, 28 insertions(+), 34 deletions(-)

bfb454be124d30e9bd140d9c6ef6aec549f4d293
diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c
--- a/drivers/infiniband/hw/mthca/mthca_qp.c
+++ b/drivers/infiniband/hw/mthca/mthca_qp.c
@@ -227,7 +227,6 @@ static void mthca_wq_init(struct mthca_w
wq->last_comp = wq->max - 1;
wq->head = 0;
wq->tail = 0;
- wq->last = NULL;
}

void mthca_qp_event(struct mthca_dev *dev, u32 qpn,
@@ -1103,6 +1102,9 @@ static int mthca_alloc_qp_common(struct
}
}

+ qp->sq.last = get_send_wqe(qp, qp->sq.max - 1);
+ qp->rq.last = get_recv_wqe(qp, qp->rq.max - 1);
+
return 0;
}

@@ -1583,15 +1585,13 @@ int mthca_tavor_post_send(struct ib_qp *
goto out;
}

- if (prev_wqe) {
- ((struct mthca_next_seg *) prev_wqe)->nda_op =
- cpu_to_be32(((ind << qp->sq.wqe_shift) +
- qp->send_wqe_offset) |
- mthca_opcode[wr->opcode]);
- wmb();
- ((struct mthca_next_seg *) prev_wqe)->ee_nds =
- cpu_to_be32((size0 ? 0 : MTHCA_NEXT_DBD) | size);
- }
+ ((struct mthca_next_seg *) prev_wqe)->nda_op =
+ cpu_to_be32(((ind << qp->sq.wqe_shift) +
+ qp->send_wqe_offset) |
+ mthca_opcode[wr->opcode]);
+ wmb();
+ ((struct mthca_next_seg *) prev_wqe)->ee_nds =
+ cpu_to_be32((size0 ? 0 : MTHCA_NEXT_DBD) | size);

if (!size0) {
size0 = size;
@@ -1688,13 +1688,11 @@ int mthca_tavor_post_receive(struct ib_q

qp->wrid[ind] = wr->wr_id;

- if (likely(prev_wqe)) {
- ((struct mthca_next_seg *) prev_wqe)->nda_op =
- cpu_to_be32((ind << qp->rq.wqe_shift) | 1);
- wmb();
- ((struct mthca_next_seg *) prev_wqe)->ee_nds =
- cpu_to_be32(MTHCA_NEXT_DBD | size);
- }
+ ((struct mthca_next_seg *) prev_wqe)->nda_op =
+ cpu_to_be32((ind << qp->rq.wqe_shift) | 1);
+ wmb();
+ ((struct mthca_next_seg *) prev_wqe)->ee_nds =
+ cpu_to_be32(MTHCA_NEXT_DBD | size);

if (!size0)
size0 = size;
@@ -1905,15 +1903,13 @@ int mthca_arbel_post_send(struct ib_qp *
goto out;
}

- if (likely(prev_wqe)) {
- ((struct mthca_next_seg *) prev_wqe)->nda_op =
- cpu_to_be32(((ind << qp->sq.wqe_shift) +
- qp->send_wqe_offset) |
- mthca_opcode[wr->opcode]);
- wmb();
- ((struct mthca_next_seg *) prev_wqe)->ee_nds =
- cpu_to_be32(MTHCA_NEXT_DBD | size);
- }
+ ((struct mthca_next_seg *) prev_wqe)->nda_op =
+ cpu_to_be32(((ind << qp->sq.wqe_shift) +
+ qp->send_wqe_offset) |
+ mthca_opcode[wr->opcode]);
+ wmb();
+ ((struct mthca_next_seg *) prev_wqe)->ee_nds =
+ cpu_to_be32(MTHCA_NEXT_DBD | size);

if (!size0) {
size0 = size;
diff --git a/drivers/infiniband/hw/mthca/mthca_srq.c b/drivers/infiniband/hw/mthca/mthca_srq.c
--- a/drivers/infiniband/hw/mthca/mthca_srq.c
+++ b/drivers/infiniband/hw/mthca/mthca_srq.c
@@ -189,7 +189,6 @@ int mthca_alloc_srq(struct mthca_dev *de

srq->max = attr->max_wr;
srq->max_gs = attr->max_sge;
- srq->last = NULL;
srq->counter = 0;

if (mthca_is_memfree(dev))
@@ -264,6 +263,7 @@ int mthca_alloc_srq(struct mthca_dev *de

srq->first_free = 0;
srq->last_free = srq->max - 1;
+ srq->last = get_wqe(srq, srq->max - 1);

return 0;

@@ -446,13 +446,11 @@ int mthca_tavor_post_srq_recv(struct ib_
((struct mthca_data_seg *) wqe)->addr = 0;
}

- if (likely(prev_wqe)) {
- ((struct mthca_next_seg *) prev_wqe)->nda_op =
- cpu_to_be32((ind << srq->wqe_shift) | 1);
- wmb();
- ((struct mthca_next_seg *) prev_wqe)->ee_nds =
- cpu_to_be32(MTHCA_NEXT_DBD);
- }
+ ((struct mthca_next_seg *) prev_wqe)->nda_op =
+ cpu_to_be32((ind << srq->wqe_shift) | 1);
+ wmb();
+ ((struct mthca_next_seg *) prev_wqe)->ee_nds =
+ cpu_to_be32(MTHCA_NEXT_DBD);

srq->wrid[ind] = wr->wr_id;
srq->first_free = next_ind;

2005-09-20 22:10:06

by Roland Dreier

Subject: [PATCH 06/10] IB/mthca: Fix posting work requests to shared receive queues

The error handling paths in mthca_tavor_post_srq_recv() and
mthca_arbel_post_srq_recv() are quite bogus, the result of a screwed-up
merge: errors return early without ringing the doorbell for requests
already posted, and the Tavor version's doorbell code is left
unreachable.  Fix them so they work as intended.
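
In other words, an error should break out of the posting loop rather
than return early, so that the doorbell is still rung for whatever was
posted before the error.  Schematically (ring_doorbell() stands in for
the real doorbell write):

	for (nreq = 0; wr; ++nreq, wr = wr->next) {
		if (error_detected) {
			err = -EINVAL;
			*bad_wr = wr;
			break;		/* not "return nreq" */
		}
		/* ... build and link the WQE ... */
	}

	if (likely(nreq))
		ring_doorbell(srq, nreq);	/* reachable again */

	return err;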

Pointed out by Michael S. Tsirkin <[email protected]>

Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/hw/mthca/mthca_srq.c | 10 ++++------
1 files changed, 4 insertions(+), 6 deletions(-)

b95e7b96cd976a5974fa2ae8f3a1af510cb8d4c9
diff --git a/drivers/infiniband/hw/mthca/mthca_srq.c b/drivers/infiniband/hw/mthca/mthca_srq.c
--- a/drivers/infiniband/hw/mthca/mthca_srq.c
+++ b/drivers/infiniband/hw/mthca/mthca_srq.c
@@ -409,7 +409,7 @@ int mthca_tavor_post_srq_recv(struct ib_
mthca_err(dev, "SRQ %06x full\n", srq->srqn);
err = -ENOMEM;
*bad_wr = wr;
- return nreq;
+ break;
}

wqe = get_wqe(srq, ind);
@@ -427,7 +427,7 @@ int mthca_tavor_post_srq_recv(struct ib_
err = -EINVAL;
*bad_wr = wr;
srq->last = prev_wqe;
- return nreq;
+ break;
}

for (i = 0; i < wr->num_sge; ++i) {
@@ -456,8 +456,6 @@ int mthca_tavor_post_srq_recv(struct ib_
srq->first_free = next_ind;
}

- return nreq;
-
if (likely(nreq)) {
__be32 doorbell[2];

@@ -501,7 +499,7 @@ int mthca_arbel_post_srq_recv(struct ib_
mthca_err(dev, "SRQ %06x full\n", srq->srqn);
err = -ENOMEM;
*bad_wr = wr;
- return nreq;
+ break;
}

wqe = get_wqe(srq, ind);
@@ -517,7 +515,7 @@ int mthca_arbel_post_srq_recv(struct ib_
if (unlikely(wr->num_sge > srq->max_gs)) {
err = -EINVAL;
*bad_wr = wr;
- return nreq;
+ break;
}

for (i = 0; i < wr->num_sge; ++i) {

2005-09-20 22:10:06

by Roland Dreier

Subject: [PATCH 05/10] IB/mthca: Initialize eq->nent before we use it

In mthca_create_eq(), we call get_eqe() before setting eq->nent.  This
is wrong, because get_eqe() uses eq->nent.  Fix this, and clean up the
code a little while we're at it.  (We got lucky with the current code:
eq->nent had been zeroed, which happened to make get_eqe() do the
right thing.)
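
The cleanup replaces the open-coded rounding loop with the equivalent
roundup_pow_of_two() (plus a minimum of two entries), done up front so
that every later use -- including get_eqe() -- sees eq->nent:

	/* old: smallest power of 2 >= nent, computed by hand */
	for (i = 1; i < nent; i <<= 1)
		; /* nothing */
	nent = i;

	/* new: same rounding, assigned before get_eqe() is ever called */
	eq->nent = roundup_pow_of_two(max(nent, 2));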

Pointed out by Michael S. Tsirkin <[email protected]>

Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/hw/mthca/mthca_eq.c | 16 +++++-----------
1 files changed, 5 insertions(+), 11 deletions(-)

8b194835d2f28c793d25d2c41753af2b7ee29f31
diff --git a/drivers/infiniband/hw/mthca/mthca_eq.c b/drivers/infiniband/hw/mthca/mthca_eq.c
--- a/drivers/infiniband/hw/mthca/mthca_eq.c
+++ b/drivers/infiniband/hw/mthca/mthca_eq.c
@@ -476,12 +476,8 @@ static int __devinit mthca_create_eq(str
int i;
u8 status;

- /* Make sure EQ size is aligned to a power of 2 size. */
- for (i = 1; i < nent; i <<= 1)
- ; /* nothing */
- nent = i;
-
- eq->dev = dev;
+ eq->dev = dev;
+ eq->nent = roundup_pow_of_two(max(nent, 2));

eq->page_list = kmalloc(npages * sizeof *eq->page_list,
GFP_KERNEL);
@@ -512,7 +508,7 @@ static int __devinit mthca_create_eq(str
memset(eq->page_list[i].buf, 0, PAGE_SIZE);
}

- for (i = 0; i < nent; ++i)
+ for (i = 0; i < eq->nent; ++i)
set_eqe_hw(get_eqe(eq, i));

eq->eqn = mthca_alloc(&dev->eq_table.alloc);
@@ -528,8 +524,6 @@ static int __devinit mthca_create_eq(str
if (err)
goto err_out_free_eq;

- eq->nent = nent;
-
memset(eq_context, 0, sizeof *eq_context);
eq_context->flags = cpu_to_be32(MTHCA_EQ_STATUS_OK |
MTHCA_EQ_OWNER_HW |
@@ -538,7 +532,7 @@ static int __devinit mthca_create_eq(str
if (mthca_is_memfree(dev))
eq_context->flags |= cpu_to_be32(MTHCA_EQ_STATE_ARBEL);

- eq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24);
+ eq_context->logsize_usrpage = cpu_to_be32((ffs(eq->nent) - 1) << 24);
if (mthca_is_memfree(dev)) {
eq_context->arbel_pd = cpu_to_be32(dev->driver_pd.pd_num);
} else {
@@ -569,7 +563,7 @@ static int __devinit mthca_create_eq(str
dev->eq_table.arm_mask |= eq->eqn_mask;

mthca_dbg(dev, "Allocated EQ %d with %d entries\n",
- eq->eqn, nent);
+ eq->eqn, eq->nent);

return err;

2005-09-20 22:08:57

by Roland Dreier

Subject: [PATCH 10/10] IB/mthca: Fix device removal memory leak

From: Michael S. Tsirkin <[email protected]>
Date: 1127238888 -0700

Clean up QP table array on device removal.
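
The QP table array is allocated with mthca_array_init() when the table
is set up, so device removal has to free it with the matching
mthca_array_cleanup(), next to the existing mthca_alloc_cleanup():

	/* init side, in mthca_init_qp_table() */
	err = mthca_array_init(&dev->qp_table.qp, dev->limits.num_qps);

	/* removal side -- the line this patch adds */
	mthca_array_cleanup(&dev->qp_table.qp, dev->limits.num_qps);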

Signed-off-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/hw/mthca/mthca_qp.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

71eea47d853bb0ce0c6befe11b3e08111263170f
diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c
--- a/drivers/infiniband/hw/mthca/mthca_qp.c
+++ b/drivers/infiniband/hw/mthca/mthca_qp.c
@@ -2123,5 +2123,6 @@ void __devexit mthca_cleanup_qp_table(st
for (i = 0; i < 2; ++i)
mthca_CONF_SPECIAL_QP(dev, i, 0, &status);

+ mthca_array_cleanup(&dev->qp_table.qp, dev->limits.num_qps);
mthca_alloc_cleanup(&dev->qp_table.alloc);
}

2005-09-20 22:11:49

by Roland Dreier

Subject: [PATCH 08/10] IB: Fix data length for RMPP SA sends

From: Hal Rosenstock <[email protected]>
Date: 1127163061 -0700

We need to subtract off the header length from our payload
length when sending multi-packet SA messages.
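
For example, an IB MAD is 256 bytes, and for SA MADs the MAD, RMPP and
SA headers together occupy the first 56 bytes, so:

	hdr_len  = offsetof(struct ib_sa_mad, data);	/* 56 */
	data_len = length - hdr_len;	/* e.g. 256 - 56 = 200 of payload */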

Signed-off-by: Hal Rosenstock <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>

---

drivers/infiniband/core/user_mad.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)

eff4c654b1a4a5e5493fbdc3affa6dd48765c085
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -334,10 +334,11 @@ static ssize_t ib_umad_write(struct file
ret = -EINVAL;
goto err_ah;
}
- /* Validate that management class can support RMPP */
+
+ /* Validate that the management class can support RMPP */
if (rmpp_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_ADM) {
hdr_len = offsetof(struct ib_sa_mad, data);
- data_len = length;
+ data_len = length - hdr_len;
} else if ((rmpp_mad->mad_hdr.mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
(rmpp_mad->mad_hdr.mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END)) {
hdr_len = offsetof(struct ib_vendor_mad, data);