Subject: [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec

Make sure the device has at least two completion vectors before
allocating the Send CQ on compvec #1. The comp_vector index passed
to ib_alloc_cq() must be below the device's num_comp_vectors, so
fall back to compvec #0 on single-vector devices.

Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
Signed-off-by: Nicolas Morey-Chaisemartin <[email protected]>
---
Changes since v1:
- Use num_comp_vectors instead of online_cpus

net/sunrpc/xprtrdma/verbs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index b725911c0f3f..db913bcef984 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,

 	sendcq = ib_alloc_cq(ia->ri_device, NULL,
 			     ep->rep_attr.cap.max_send_wr + 1,
-			     1, IB_POLL_WORKQUEUE);
+			     ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);
 	if (IS_ERR(sendcq)) {
 		rc = PTR_ERR(sendcq);
 		dprintk("RPC:       %s: failed to create send CQ: %i\n",
--
2.18.0



Date: 2019-01-24 16:11:44
From: Tom Talpey

Subject: Re: [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec

On 1/23/2019 11:02 PM, Nicolas Morey-Chaisemartin wrote:
> Make sure the device has at least two completion vectors before
> allocating the Send CQ on compvec #1. The comp_vector index passed
> to ib_alloc_cq() must be below the device's num_comp_vectors, so
> fall back to compvec #0 on single-vector devices.

I agree this fixes the bug, but wouldn't it be better to attempt
to steer the affinity to chosen cores, rather than unconditionally
using static values? At least make some determination that the
device, and the memory that was just allocated, are on a reasonably
optimal NUMA node for handling its completions.

Does the verbs API not support this? I thought that was discussed
a while back.
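
If I remember right, ib_get_vector_affinity() is the relevant hook.
A quick sketch of the kind of query I mean, from memory and untested:

	/* Ask which CPUs service completion vector 0 on this device;
	 * returns NULL if the provider does not report affinity.
	 */
	const struct cpumask *mask =
		ib_get_vector_affinity(ia->ri_device, 0);

Something like that seems like the right building block for steering
these completions.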

Tom.

> Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
> Signed-off-by: Nicolas Morey-Chaisemartin <[email protected]>
> ---
> Changes since v1:
> - Use num_comp_vectors instead of online_cpus
>
> net/sunrpc/xprtrdma/verbs.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index b725911c0f3f..db913bcef984 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
>
> sendcq = ib_alloc_cq(ia->ri_device, NULL,
> ep->rep_attr.cap.max_send_wr + 1,
> - 1, IB_POLL_WORKQUEUE);
> + ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);
> if (IS_ERR(sendcq)) {
> rc = PTR_ERR(sendcq);
> dprintk("RPC: %s: failed to create send CQ: %i\n",
>

Date: 2019-01-24 20:15:34
From: Chuck Lever

Subject: Re: [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec



> On Jan 24, 2019, at 8:04 AM, Tom Talpey <[email protected]> wrote:
>
> On 1/23/2019 11:02 PM, Nicolas Morey-Chaisemartin wrote:
>> Make sure the device has at least two completion vectors before
>> allocating the Send CQ on compvec #1. The comp_vector index passed
>> to ib_alloc_cq() must be below the device's num_comp_vectors, so
>> fall back to compvec #0 on single-vector devices.
>
> I agree this fixes the bug, but wouldn't it be better to attempt
> to steer the affinity to chosen cores, rather than unconditionally
> using static values?

The goal for the moment is to create a patch that can be
backported to stable kernels with minimal fuss.


> At least make some determination that the
> device, and the memory that was just allocated, were pointed to
> slightly optimal NUMA node to handle its completions.
>
> Does the verbs API not support this? I thought that was discussed
> a while back.

It is true that spreading CQs across interrupt vectors is
desirable, but for various reasons, it has been difficult to
do it in a way that doesn't have bad behavior in some common
cases.

Sagi has constructed some infrastructure that works well for
ULPs that can create many transport instances, but we don't
yet have that ability for NFS.

I'm open to suggestions for a subsequent patch after Nicolas'
fix is merged.
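
For the archives, here is the rough shape of what I have in mind.
This is an untested sketch, not merged code: the helper name is
made up, and it leans on ib_get_vector_affinity(), which many
providers do not implement.

	/* Hypothetical helper (sketch only): choose a completion
	 * vector whose IRQ affinity intersects the CPUs local to
	 * the device's NUMA node; fall back to vector 0.
	 */
	static int rpcrdma_pick_compvec(struct ib_device *device)
	{
		const struct cpumask *node_cpus =
			cpumask_of_node(dev_to_node(device->dma_device));
		int vec;

		for (vec = 0; vec < device->num_comp_vectors; vec++) {
			const struct cpumask *mask;

			/* NULL when the provider has no affinity op */
			mask = ib_get_vector_affinity(device, vec);
			if (mask && cpumask_intersects(mask, node_cpus))
				return vec;
		}
		return 0;
	}

The NULL case is exactly the kind of common situation that has made
this hard to do without bad behavior, as mentioned above.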


> Tom.
>
>> Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
>> Signed-off-by: Nicolas Morey-Chaisemartin <[email protected]>
>> ---
>> Changes since v1:
>> - Use num_comp_vectors instead of online_cpus
>> net/sunrpc/xprtrdma/verbs.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
>> index b725911c0f3f..db913bcef984 100644
>> --- a/net/sunrpc/xprtrdma/verbs.c
>> +++ b/net/sunrpc/xprtrdma/verbs.c
>> @@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
>> sendcq = ib_alloc_cq(ia->ri_device, NULL,
>> ep->rep_attr.cap.max_send_wr + 1,
>> - 1, IB_POLL_WORKQUEUE);
>> + ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);
>> if (IS_ERR(sendcq)) {
>> rc = PTR_ERR(sendcq);
>> dprintk("RPC: %s: failed to create send CQ: %i\n",

--
Chuck Lever




Date: 2019-01-25 17:32:08
From: Chuck Lever

Subject: Re: [PATCH v2] xprtrdma: Make sure Send CQ is allocated on an existing compvec



> On Jan 23, 2019, at 11:02 PM, Nicolas Morey-Chaisemartin <[email protected]> wrote:
>
> Make sure the device has at least two completion vectors before
> allocating the Send CQ on compvec #1. The comp_vector index passed
> to ib_alloc_cq() must be below the device's num_comp_vectors, so
> fall back to compvec #0 on single-vector devices.
>
> Fixes: a4699f5647f3 ("xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode")
> Signed-off-by: Nicolas Morey-Chaisemartin <[email protected]>
> ---
> Changes since v1:
> - Use num_comp_vectors instead of online_cpus
>
> net/sunrpc/xprtrdma/verbs.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index b725911c0f3f..db913bcef984 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -546,7 +546,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
>
> sendcq = ib_alloc_cq(ia->ri_device, NULL,
> ep->rep_attr.cap.max_send_wr + 1,
> - 1, IB_POLL_WORKQUEUE);
> + ia->ri_device->num_comp_vectors > 1 ? 1 : 0, IB_POLL_WORKQUEUE);

Nit: You should split this line to keep it within 80 columns.
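Perhaps something like this (whitespace approximate):

	sendcq = ib_alloc_cq(ia->ri_device, NULL,
			     ep->rep_attr.cap.max_send_wr + 1,
			     ia->ri_device->num_comp_vectors > 1 ? 1 : 0,
			     IB_POLL_WORKQUEUE);

Otherwise: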

Reviewed-by: Chuck Lever <[email protected]>

And I'll look into smarter compvec allocation again.


> if (IS_ERR(sendcq)) {
> rc = PTR_ERR(sendcq);
> dprintk("RPC: %s: failed to create send CQ: %i\n",
> --
> 2.18.0
>

--
Chuck Lever