2005-11-28 23:56:49

by Roland Dreier

Subject: [git patch review 1/2] IB/mthca: reset QP's last pointers when transitioning to reset state

The last pointer is not updated when the QP is modified to the reset state.
This causes data corruption if WQEs are already posted on the queue, since
the next post then links its WQE from the stale last pointer.

Signed-off-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>

---
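Not part of the commit message, just an illustrative note: below is a minimal
sketch of the failure mode, with made-up names (struct wq, wq_reset(),
wq_post()) standing in for the mthca work-queue handling.  Each posted WQE is
linked from the previously posted one through the "last" pointer, so a reset
that only rewinds head/tail leaves "last" aimed at a stale WQE and the next
post scribbles on it.

/*
 * Illustrative sketch only, not the mthca code.  All names here are
 * hypothetical; "max" is assumed to be a power of two.
 */
struct wqe {
	struct wqe *next;		/* link written by the following post */
	/* ... descriptor fields ... */
};

struct wq {
	struct wqe *ring;		/* "max" entries */
	int         max;
	int         head;
	int         tail;
	struct wqe *last;		/* most recently posted WQE */
};

static void wq_reset(struct wq *wq)
{
	wq->head = wq->tail = 0;
	/*
	 * Repoint "last" at the final slot so the first post after the
	 * reset links from a known location; leaving it alone is the
	 * analogue of the bug this patch fixes.
	 */
	wq->last = &wq->ring[wq->max - 1];
}

static void wq_post(struct wq *wq)
{
	struct wqe *wqe = &wq->ring[wq->head & (wq->max - 1)];

	wq->last->next = wqe;	/* a stale "last" corrupts an old WQE */
	wq->last = wqe;
	wq->head++;
}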

drivers/infiniband/hw/mthca/mthca_qp.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)

applies-to: 1e8504d2a91579756c89ef2d65ebd526f973cde8
187a25863fe014486ee834164776b2a587d6934d
diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c
index dd4e133..f9c8eb9 100644
--- a/drivers/infiniband/hw/mthca/mthca_qp.c
+++ b/drivers/infiniband/hw/mthca/mthca_qp.c
@@ -871,7 +871,10 @@ int mthca_modify_qp(struct ib_qp *ibqp,
qp->ibqp.srq ? to_msrq(qp->ibqp.srq) : NULL);

mthca_wq_init(&qp->sq);
+ qp->sq.last = get_send_wqe(qp, qp->sq.max - 1);
+
mthca_wq_init(&qp->rq);
+ qp->rq.last = get_recv_wqe(qp, qp->rq.max - 1);

if (mthca_is_memfree(dev)) {
*qp->sq.db = 0;
---
0.99.9k


2005-11-28 23:56:48

by Roland Dreier

Subject: [git patch review 2/2] IB/umad: fix RMPP handling

ib_umad_write() in user_mad.c looks at the rmpp_hdr field of the MAD before
checking that the MAD actually has an RMPP header.  So for a MAD without an
RMPP header we end up testing a bit that belongs to whatever field sits at
that offset (for an SMP, a bit inside M_Key).  Decide from the management
class whether an RMPP header is present at all, and only then look at the
RMPP flags.

Signed-off-by: Jack Morgenstein <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Signed-off-by: Roland Dreier <[email protected]>

---
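A note below the cut, not part of the commit message: the MAD header is
24 bytes and the optional RMPP header sits immediately after it, so for a
management class that does not carry RMPP those same bytes belong to other
fields; in an SMP they land inside the 8-byte M_Key.  The layout below is a
simplified sketch of the offsets only, not the real ib_mad.h definitions.

typedef unsigned char u8;	/* stand-in for the kernel's u8 */

/*
 * Simplified layout for illustration.  ib_get_rmpp_flags() reads the byte
 * at offset 26; in an SMP, offsets 24..31 hold M_Key, so checking
 * IB_MGMT_RMPP_FLAG_ACTIVE before knowing the class can "see" RMPP as
 * active in a MAD that has no RMPP header at all.
 */
struct mad_hdr {			/* offsets 0..23, common to every MAD */
	u8 base_version;		/* 0 */
	u8 mgmt_class;			/* 1: decides whether RMPP header exists */
	u8 class_version;		/* 2 */
	u8 method;			/* 3 */
	u8 rest[20];			/* 4..23: status, TID, attribute fields */
};

struct rmpp_mad {
	struct mad_hdr mad_hdr;		/* 0..23 */
	u8 rmpp_version;		/* 24: only meaningful for SA and
					 *     vendor range 2 classes */
	u8 rmpp_type;			/* 25 */
	u8 rmpp_rtime_flags;		/* 26: resp. time + RMPP flags */
	u8 rmpp_status;			/* 27 */
	u8 rest[8];			/* 28..35: seg_num, paylen/newwin */
};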

drivers/infiniband/core/user_mad.c | 41 +++++++++++++++++++-----------------
1 files changed, 22 insertions(+), 19 deletions(-)

applies-to: 918111360e352d128126bb338227ec4fb6e8afbc
bf6d9e23a36c8a01bf6fbb945387d8ca3870ff71
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
index e73f81c..eb7f525 100644
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -310,7 +310,7 @@ static ssize_t ib_umad_write(struct file
u8 method;
__be64 *tid;
int ret, length, hdr_len, copy_offset;
- int rmpp_active = 0;
+ int rmpp_active, has_rmpp_header;

if (count < sizeof (struct ib_user_mad) + IB_MGMT_RMPP_HDR)
return -EINVAL;
@@ -360,28 +360,31 @@ static ssize_t ib_umad_write(struct file
}

rmpp_mad = (struct ib_rmpp_mad *) packet->mad.data;
- if (ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) & IB_MGMT_RMPP_FLAG_ACTIVE) {
- /* RMPP active */
- if (!agent->rmpp_version) {
- ret = -EINVAL;
- goto err_ah;
- }
-
- /* Validate that the management class can support RMPP */
- if (rmpp_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_ADM) {
- hdr_len = IB_MGMT_SA_HDR;
- } else if ((rmpp_mad->mad_hdr.mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
- (rmpp_mad->mad_hdr.mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END)) {
- hdr_len = IB_MGMT_VENDOR_HDR;
- } else {
- ret = -EINVAL;
- goto err_ah;
- }
- rmpp_active = 1;
+ if (rmpp_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_ADM) {
+ hdr_len = IB_MGMT_SA_HDR;
copy_offset = IB_MGMT_RMPP_HDR;
+ has_rmpp_header = 1;
+ } else if (rmpp_mad->mad_hdr.mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START &&
+ rmpp_mad->mad_hdr.mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END) {
+ hdr_len = IB_MGMT_VENDOR_HDR;
+ copy_offset = IB_MGMT_RMPP_HDR;
+ has_rmpp_header = 1;
} else {
hdr_len = IB_MGMT_MAD_HDR;
copy_offset = IB_MGMT_MAD_HDR;
+ has_rmpp_header = 0;
+ }
+
+ if (has_rmpp_header)
+ rmpp_active = ib_get_rmpp_flags(&rmpp_mad->rmpp_hdr) &
+ IB_MGMT_RMPP_FLAG_ACTIVE;
+ else
+ rmpp_active = 0;
+
+ /* Validate that the management class can support RMPP */
+ if (rmpp_active && !agent->rmpp_version) {
+ ret = -EINVAL;
+ goto err_ah;
}

packet->msg = ib_create_send_mad(agent,
---
0.99.9k