Subject: Re: [PATCH] RDMA/irdma: Improve the way 'cqp_request' structures are cleaned when they are recycled
To: leon@kernel.org
Cc: mustafa.ismail@intel.com, shiraz.saleem@intel.com, dledford@redhat.com, jgg@ziepe.ca, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-janitors@vger.kernel.org, christophe.jaillet@wanadoo.fr
References: <7f93f2a2c2fd18ddfeb99339d175b85ffd1c6398.1626713915.git.christophe.jaillet@wanadoo.fr>
From: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Message-ID: <629bc34e-ef41-9af6-9ed7-71865251a62c@wanadoo.fr>
Date: Tue, 20 Jul 2021 15:05:55 +0200
On 20/07/2021 at 14:23, Leon Romanovsky wrote:
> On Mon, Jul 19, 2021 at 07:02:15PM +0200, Christophe JAILLET wrote:
>> A set of IRDMA_CQP_SW_SQSIZE_2048 (i.e. 2048) 'cqp_request' structures is
>> pre-allocated and zeroed in 'irdma_create_cqp()' (hw.c). These
>> structures are managed with the 'cqp->cqp_avail_reqs' list, which keeps
>> track of available entries.
>>
>> In 'irdma_free_cqp_request()' (utils.c), when an entry is recycled and goes
>> back to the 'cqp_avail_reqs' list, some fields are reset.
>>
>> However, one of these fields, 'compl_info', is initialized within
>> 'irdma_alloc_and_get_cqp_request()'.
>>
>> Move the corresponding memset to 'irdma_free_cqp_request()' so that the
>> clean-up is done in only one place. This makes the logic easier to
>> understand.
>
> I'm not so sure. The function irdma_alloc_and_get_cqp_request() returns
> a prepared cqp_request and all users expect that it will return a cleaned
> one. Relying on some other place to clear part of the structure is
> prone to errors.

Ok, so maybe moving:
	cqp_request->request_done = false;
	cqp_request->callback_fcn = NULL;
	cqp_request->waiting = false;
from 'irdma_free_cqp_request()' to 'irdma_alloc_and_get_cqp_request()', to make
explicit what is reset, makes more sense? (A rough sketch of this alternative is
appended below, after the quoted diff.)

From my point of view, it comes down to the same thing: all the
(re)initialization is done in one place only.

This would also avoid setting 'waiting' twice (once to false in
'irdma_free_cqp_request()' and once to 'wait' in
'irdma_alloc_and_get_cqp_request()').

CJ

>
> Thanks
>
>>
>> This also saves the memset in the case where the 'cqp_avail_reqs' list is
>> empty and a new 'cqp_request' structure must be allocated. The memset is
>> useless there, because the structure is already kzalloc'ed.
>>
>> Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
>> ---
>>  drivers/infiniband/hw/irdma/utils.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
>> index 5bbe44e54f9a..66711024d38b 100644
>> --- a/drivers/infiniband/hw/irdma/utils.c
>> +++ b/drivers/infiniband/hw/irdma/utils.c
>> @@ -445,7 +445,6 @@ struct irdma_cqp_request *irdma_alloc_and_get_cqp_request(struct irdma_cqp *cqp,
>>
>>  	cqp_request->waiting = wait;
>>  	refcount_set(&cqp_request->refcnt, 1);
>> -	memset(&cqp_request->compl_info, 0, sizeof(cqp_request->compl_info));
>>
>>  	return cqp_request;
>>  }
>> @@ -475,6 +474,7 @@ void irdma_free_cqp_request(struct irdma_cqp *cqp,
>>  	cqp_request->request_done = false;
>>  	cqp_request->callback_fcn = NULL;
>>  	cqp_request->waiting = false;
>> +	memset(&cqp_request->compl_info, 0, sizeof(cqp_request->compl_info));
>>
>>  	spin_lock_irqsave(&cqp->req_lock, flags);
>>  	list_add_tail(&cqp_request->list, &cqp->cqp_avail_reqs);
>> --
>> 2.30.2
>>
>
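
For illustration, a minimal sketch of the alternative suggested above: all the
(re)initialization gathered in irdma_alloc_and_get_cqp_request(), and nothing reset
in irdma_free_cqp_request(). Only the fields, helpers and locking visible in the
quoted diff are used; the rest of the two functions (pool lookup, dynamic allocation,
error handling) is elided and only hinted at in comments, so treat this as a sketch
of the idea, not as the actual driver code.

struct irdma_cqp_request *irdma_alloc_and_get_cqp_request(struct irdma_cqp *cqp,
							   bool wait)
{
	struct irdma_cqp_request *cqp_request;

	/* ... take an entry from cqp->cqp_avail_reqs, or kzalloc() a new one ... */

	/* All (re)initialization happens when the request is handed out. */
	cqp_request->request_done = false;	/* moved from irdma_free_cqp_request() */
	cqp_request->callback_fcn = NULL;	/* moved from irdma_free_cqp_request() */
	cqp_request->waiting = wait;		/* set once, instead of false then 'wait' */
	refcount_set(&cqp_request->refcnt, 1);
	memset(&cqp_request->compl_info, 0, sizeof(cqp_request->compl_info));

	return cqp_request;
}

void irdma_free_cqp_request(struct irdma_cqp *cqp,
			    struct irdma_cqp_request *cqp_request)
{
	unsigned long flags;

	/* ... handle requests that were allocated dynamically ... */

	/* No field is reset here any more: the next allocation does it. */
	spin_lock_irqsave(&cqp->req_lock, flags);
	list_add_tail(&cqp_request->list, &cqp->cqp_avail_reqs);
	spin_unlock_irqrestore(&cqp->req_lock, flags);
}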