The documentation of the function rvt_error_qp says both r_lock and
s_lock need to be held when calling that function.
It also asserts using lockdep that both of those locks are held.
However, the commit referenced in the Fixes tag accidentally made the call
to rvt_error_qp in rvt_ruc_loopback no longer covered by r_lock.
This results in the lockdep assertion failing and possibly in a race
condition.
Fixes: d757c60eca9b ("IB/rdmavt: Fix concurrency panics in QP post_send and modify to error")
Signed-off-by: Niels Dossche <[email protected]>
---
drivers/infiniband/sw/rdmavt/qp.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index ae50b56e8913..8ef112f883a7 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -3190,7 +3190,11 @@ void rvt_ruc_loopback(struct rvt_qp *sqp)
 	spin_lock_irqsave(&sqp->s_lock, flags);
 	rvt_send_complete(sqp, wqe, send_status);
 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
-		int lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
+		int lastwqe;
+
+		spin_lock(&sqp->r_lock);
+		lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
+		spin_unlock(&sqp->r_lock);
 
 		sqp->s_flags &= ~RVT_S_BUSY;
 		spin_unlock_irqrestore(&sqp->s_lock, flags);
--
2.35.1
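For reference, the locking contract the commit message describes can be
sketched as follows. This is a minimal, self-contained illustration of the
pattern (documented lock requirements enforced with lockdep_assert_held());
the demo_qp/demo_error_qp names are made up for the example, and the body is
not the real rvt_error_qp implementation.

#include <linux/spinlock.h>
#include <linux/lockdep.h>

struct demo_qp {
	spinlock_t r_lock;	/* guards receive-side state */
	spinlock_t s_lock;	/* guards send-side state */
};

/* Caller must hold both qp->r_lock and qp->s_lock. */
static int demo_error_qp(struct demo_qp *qp)
{
	/*
	 * With CONFIG_LOCKDEP enabled, these warn if the caller forgot
	 * either lock -- exactly the assertion the commit message says
	 * started failing in rvt_ruc_loopback().
	 */
	lockdep_assert_held(&qp->r_lock);
	lockdep_assert_held(&qp->s_lock);

	/* ... move the QP to the error state ... */
	return 0;
}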
On 4/4/22 9:42 AM, Jason Gunthorpe wrote:
> On Mon, Feb 28, 2022 at 05:53:30PM +0100, Niels Dossche wrote:
>> The documentation of the function rvt_error_qp says both r_lock and
>> s_lock need to be held when calling that function.
>> [...]
>
> Dennis?
For this one:
Acked-by: Dennis Dalessandro <[email protected]>
> And this too:
>
> https://patchwork.kernel.org/project/linux-rdma/patch/[email protected]/
I think it is correct as well. Not thrilled with the lock, unlock, lock
stuff, but this whole unwind code needs a once-over, so for this patch as well:
Acked-by: Dennis Dalessandro <[email protected]>
Thanks Niels for finding and fixing these issues!
-Denny
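To make the "lock, unlock, lock" shape Denny mentions concrete, here is a
hypothetical skeleton of the unwind path in question, reusing the
demo_qp/demo_error_qp names from the sketch above (illustrative only; see
rvt_ruc_loopback() in qp.c for the real code):

static void demo_loopback_unwind(struct demo_qp *sqp)
{
	unsigned long flags;

	/* The error path has already dropped r_lock earlier in unwind. */
	spin_lock_irqsave(&sqp->s_lock, flags);

	/*
	 * s_lock alone does not satisfy demo_error_qp()'s contract, so
	 * re-take r_lock nested inside s_lock. Interrupts are already
	 * disabled here, so a plain spin_lock() suffices.
	 */
	spin_lock(&sqp->r_lock);
	demo_error_qp(sqp);
	spin_unlock(&sqp->r_lock);

	spin_unlock_irqrestore(&sqp->s_lock, flags);
}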
On Mon, Feb 28, 2022 at 05:53:30PM +0100, Niels Dossche wrote:
> The documentation of the function rvt_error_qp says both r_lock and
> s_lock need to be held when calling that function.
> [...]
Dennis?
And this too:
https://patchwork.kernel.org/project/linux-rdma/patch/[email protected]/
Thanks,
Jason
On Mon, Feb 28, 2022 at 05:53:30PM +0100, Niels Dossche wrote:
> The documentation of the function rvt_error_qp says both r_lock and
> s_lock need to be held when calling that function.
> [...]
Applied to for-rc, thanks
Jason