Date: Sun, 14 Jan 2024 10:46:36 +0200
From: Leon Romanovsky <leon@kernel.org>
To: Long Li
Cc: Konstantin Taranov, jgg@ziepe.ca, linux-rdma@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: Re: [EXTERNAL] Re: [PATCH next v2 1/1] RDMA/mana_ib: Introduce three helper functions to clean mana_ib code
Message-ID: <20240114084636.GB6404@unreal>
References: <1704896074-4355-1-git-send-email-kotaranov@linux.microsoft.com>
 <20240111111417.GC7488@unreal>

On Thu, Jan 11, 2024 at 05:15:06PM +0000, Long Li wrote:
> > Subject: Re: [EXTERNAL] Re: [PATCH next v2 1/1] RDMA/mana_ib: Introduce
> > three helper functions to clean mana_ib code
> >
> > Hi Leon,
> >
> > // Sorry for re-sending the email. I did not know that I had to enable
> > plain-text messages in Outlook.
> >
> > Thanks for pointing out the merge window. I will resend after it.
> >
> > I can also split it into 2 commits (mdev_to_gc and mana_ib_get_netdev in
> > one, and the CQ cb in the other), if it is required.
> > I cannot separate "mdev_to_gc" and "mana_ib_get_netdev" into 2 commits,
> > as that would double the changes and obstruct the reader.
> >
> > > And how is it safe now? What is unsafe here?
> >
> > The issue with the gdma_dev is the following.
> > Our mana HW device has two gdma_devs: one for ethernet and one for rdma.
> > In the code of mana_ib.ko it is not clearly indicated which gdma_dev is
> > used, so in some functions it is the ethernet one and in others the rdma
> > one. Using the wrong device in the hardware channel leads to errors and
> > wastes developers' time.
> >
> > This problem is addressed as follows:
> > We avoid using gdma_dev explicitly in the functions of mana_ib.ko, except
> > during init. As there is only one gdma_context, we abstract the access
> > into mdev_to_gc, so that the user never has to declare a gdma_dev.
> > When a function actually needs the ethernet gdma_dev, it uses the netdev,
> > which encapsulates the correct gdma_dev. Getting the netdev is
> > implemented with mana_ib_get_netdev, again so that the user never has to
> > declare a gdma_dev.
> > When a function needs the rdma gdma_dev, it uses struct mana_ib_dev (a
> > child of ib_dev) in its hardware channel requests.
> >
> > I hope this explains what becomes safer. If it is useful, I can add this
> > explanation to the commit message.
>
> Your patch doesn't make it safer, but rather makes it easier to understand.
> The wording "Unsafe and inconsistent" should be removed.

Right, I have no problem with the cleanup patch, I just don't want to see a
wrong/inaccurate description.

Thanks

> Long
>
> >
> > Thanks,
> > Konstantin
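To make the two-gdma_dev layout described above concrete, here is a minimal,
self-contained C sketch of the helper pattern. The struct definitions are
simplified stand-ins invented for illustration, not the kernel's; only the
shape of mdev_to_gc() and mana_ib_get_netdev() follows the patch quoted
below, where the real mana_ib_get_netdev() takes a struct ib_device * and
uses container_of().

/*
 * Stand-in model: one gdma_context owns two gdma_devs (ethernet and rdma).
 * Simplified for illustration; these are not the kernel definitions.
 */
#include <stdio.h>

struct net_device { int ifindex; };

struct gdma_context;

struct gdma_dev {
        struct gdma_context *gdma_context;
        void *driver_data;
};

struct gdma_context {
        struct gdma_dev mana;   /* ethernet side */
        struct gdma_dev rdma;   /* rdma side */
};

struct mana_context {
        unsigned int num_ports;
        struct net_device *ports[4];
};

struct mana_ib_dev {
        struct gdma_dev *gdma_dev;      /* set once at init (rdma side) */
};

/* The only sanctioned path to the (single) gdma_context. */
static struct gdma_context *mdev_to_gc(struct mana_ib_dev *mdev)
{
        return mdev->gdma_dev->gdma_context;
}

/* Ethernet consumers go through the netdev; ports are 1-based. */
static struct net_device *mana_ib_get_netdev(struct mana_ib_dev *mdev,
                                             unsigned int port)
{
        struct gdma_context *gc = mdev_to_gc(mdev);
        struct mana_context *mc = gc->mana.driver_data;

        if (port < 1 || port > mc->num_ports)
                return NULL;
        return mc->ports[port - 1];
}

int main(void)
{
        struct net_device nd = { .ifindex = 3 };
        struct mana_context mc = { .num_ports = 1, .ports = { &nd } };
        struct gdma_context gc = { 0 };
        struct mana_ib_dev mdev = { .gdma_dev = &gc.rdma };

        gc.mana.gdma_context = &gc;
        gc.rdma.gdma_context = &gc;
        gc.mana.driver_data = &mc;

        printf("same context: %d\n", mdev_to_gc(&mdev) == &gc);
        printf("port 1 ifindex: %d\n", mana_ib_get_netdev(&mdev, 1)->ifindex);
        printf("port 2 missing: %d\n", mana_ib_get_netdev(&mdev, 2) == NULL);
        return 0;
}

Built as a plain userspace program, main() shows that both gdma_devs reach
the same gdma_context and that out-of-range ports return NULL, which is the
misuse the helpers are meant to rule out.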
> > > > Thanks > > > > > Signed-off-by: Konstantin Taranov > > > --- > > > Sorry that was sent again, I forgot to add RDMA/mana_ib to the subject > > > --- > > > drivers/infiniband/hw/mana/cq.c | 23 ++++++++++++++-- > > > drivers/infiniband/hw/mana/main.c | 40 ++++++++++------------------ > > > drivers/infiniband/hw/mana/mana_ib.h | 17 ++++++++++++ > > > drivers/infiniband/hw/mana/mr.c | 13 +++------ > > > drivers/infiniband/hw/mana/qp.c | 36 ++++++------------------- > > > 5 files changed, 63 insertions(+), 66 deletions(-) > > > > > > diff --git a/drivers/infiniband/hw/mana/cq.c > > > b/drivers/infiniband/hw/mana/cq.c index 83ebd0705..255e74ab7 100644 > > > --- a/drivers/infiniband/hw/mana/cq.c > > > +++ b/drivers/infiniband/hw/mana/cq.c > > > @@ -16,7 +16,7 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct > > ib_cq_init_attr *attr, > > > int err; > > > > > > mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); > > > - gc = mdev->gdma_dev->gdma_context; > > > + gc = mdev_to_gc(mdev); > > > > > > if (udata->inlen < sizeof(ucmd)) > > > return -EINVAL; > > > @@ -81,7 +81,7 @@ int mana_ib_destroy_cq(struct ib_cq *ibcq, struct > > ib_udata *udata) > > > int err; > > > > > > mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); > > > - gc = mdev->gdma_dev->gdma_context; > > > + gc = mdev_to_gc(mdev); > > > > > > err = mana_ib_gd_destroy_dma_region(mdev, cq->gdma_region); > > > if (err) { > > > @@ -107,3 +107,22 @@ void mana_ib_cq_handler(void *ctx, struct > > gdma_queue *gdma_cq) > > > if (cq->ibcq.comp_handler) > > > cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context); > > > } > > > + > > > +int mana_ib_install_cq_cb(struct mana_ib_dev *mdev, struct mana_ib_cq > > > +*cq) { > > > + struct gdma_context *gc = mdev_to_gc(mdev); > > > + struct gdma_queue *gdma_cq; > > > + > > > + /* Create CQ table entry */ > > > + WARN_ON(gc->cq_table[cq->id]); > > > + gdma_cq = kzalloc(sizeof(*gdma_cq), GFP_KERNEL); > > > + if (!gdma_cq) > > > + return -ENOMEM; > > > + > > > + gdma_cq->cq.context = cq; > > > + gdma_cq->type = GDMA_CQ; > > > + gdma_cq->cq.callback = mana_ib_cq_handler; > > > + gdma_cq->id = cq->id; > > > + gc->cq_table[cq->id] = gdma_cq; > > > + return 0; > > > +} > > > diff --git a/drivers/infiniband/hw/mana/main.c > > > b/drivers/infiniband/hw/mana/main.c > > > index faca09245..29dd2438d 100644 > > > --- a/drivers/infiniband/hw/mana/main.c > > > +++ b/drivers/infiniband/hw/mana/main.c > > > @@ -8,13 +8,10 @@ > > > void mana_ib_uncfg_vport(struct mana_ib_dev *dev, struct mana_ib_pd *pd, > > > u32 port) > > > { > > > - struct gdma_dev *gd = &dev->gdma_dev->gdma_context->mana; > > > struct mana_port_context *mpc; > > > struct net_device *ndev; > > > - struct mana_context *mc; > > > > > > - mc = gd->driver_data; > > > - ndev = mc->ports[port]; > > > + ndev = mana_ib_get_netdev(&dev->ib_dev, port); > > > mpc = netdev_priv(ndev); > > > > > > mutex_lock(&pd->vport_mutex); > > > @@ -31,14 +28,11 @@ void mana_ib_uncfg_vport(struct mana_ib_dev *dev, > > > struct mana_ib_pd *pd, int mana_ib_cfg_vport(struct mana_ib_dev *dev, u32 > > port, struct mana_ib_pd *pd, > > > u32 doorbell_id) > > > { > > > - struct gdma_dev *mdev = &dev->gdma_dev->gdma_context->mana; > > > struct mana_port_context *mpc; > > > - struct mana_context *mc; > > > struct net_device *ndev; > > > int err; > > > > > > - mc = mdev->driver_data; > > > - ndev = mc->ports[port]; > > > + ndev = mana_ib_get_netdev(&dev->ib_dev, port); > > > mpc = netdev_priv(ndev); > > > > > > mutex_lock(&pd->vport_mutex); > > > @@ -79,17 +73,17 
@@ int mana_ib_alloc_pd(struct ib_pd *ibpd, struct > > ib_udata *udata) > > > struct gdma_create_pd_req req = {}; > > > enum gdma_pd_flags flags = 0; > > > struct mana_ib_dev *dev; > > > - struct gdma_dev *mdev; > > > + struct gdma_context *gc; > > > int err; > > > > > > dev = container_of(ibdev, struct mana_ib_dev, ib_dev); > > > - mdev = dev->gdma_dev; > > > + gc = mdev_to_gc(dev); > > > > > > mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_PD, sizeof(req), > > > sizeof(resp)); > > > > > > req.flags = flags; > > > - err = mana_gd_send_request(mdev->gdma_context, sizeof(req), &req, > > > + err = mana_gd_send_request(gc, sizeof(req), &req, > > > sizeof(resp), &resp); > > > > > > if (err || resp.hdr.status) { > > > @@ -119,17 +113,17 @@ int mana_ib_dealloc_pd(struct ib_pd *ibpd, struct > > ib_udata *udata) > > > struct gdma_destory_pd_resp resp = {}; > > > struct gdma_destroy_pd_req req = {}; > > > struct mana_ib_dev *dev; > > > - struct gdma_dev *mdev; > > > + struct gdma_context *gc; > > > int err; > > > > > > dev = container_of(ibdev, struct mana_ib_dev, ib_dev); > > > - mdev = dev->gdma_dev; > > > + gc = mdev_to_gc(dev); > > > > > > mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_PD, sizeof(req), > > > sizeof(resp)); > > > > > > req.pd_handle = pd->pd_handle; > > > - err = mana_gd_send_request(mdev->gdma_context, sizeof(req), &req, > > > + err = mana_gd_send_request(gc, sizeof(req), &req, > > > sizeof(resp), &resp); > > > > > > if (err || resp.hdr.status) { > > > @@ -206,13 +200,11 @@ int mana_ib_alloc_ucontext(struct ib_ucontext > > *ibcontext, > > > struct ib_device *ibdev = ibcontext->device; > > > struct mana_ib_dev *mdev; > > > struct gdma_context *gc; > > > - struct gdma_dev *dev; > > > int doorbell_page; > > > int ret; > > > > > > mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); > > > - dev = mdev->gdma_dev; > > > - gc = dev->gdma_context; > > > + gc = mdev_to_gc(mdev); > > > > > > /* Allocate a doorbell page index */ > > > ret = mana_gd_allocate_doorbell_page(gc, &doorbell_page); @@ > > > -238,7 +230,7 @@ void mana_ib_dealloc_ucontext(struct ib_ucontext > > *ibcontext) > > > int ret; > > > > > > mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); > > > - gc = mdev->gdma_dev->gdma_context; > > > + gc = mdev_to_gc(mdev); > > > > > > ret = mana_gd_destroy_doorbell_page(gc, mana_ucontext->doorbell); > > > if (ret) > > > @@ -322,15 +314,13 @@ int mana_ib_gd_create_dma_region(struct > > mana_ib_dev *dev, struct ib_umem *umem, > > > size_t max_pgs_create_cmd; > > > struct gdma_context *gc; > > > size_t num_pages_total; > > > - struct gdma_dev *mdev; > > > unsigned long page_sz; > > > unsigned int tail = 0; > > > u64 *page_addr_list; > > > void *request_buf; > > > int err; > > > > > > - mdev = dev->gdma_dev; > > > - gc = mdev->gdma_context; > > > + gc = mdev_to_gc(dev); > > > hwc = gc->hwc.driver_data; > > > > > > /* Hardware requires dma region to align to chosen page size */ > > > @@ -426,10 +416,8 @@ int mana_ib_gd_create_dma_region(struct > > > mana_ib_dev *dev, struct ib_umem *umem, > > > > > > int mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev, u64 > > > gdma_region) { > > > - struct gdma_dev *mdev = dev->gdma_dev; > > > - struct gdma_context *gc; > > > + struct gdma_context *gc = mdev_to_gc(dev); > > > > > > - gc = mdev->gdma_context; > > > ibdev_dbg(&dev->ib_dev, "destroy dma region 0x%llx\n", > > > gdma_region); > > > > > > return mana_gd_destroy_dma_region(gc, gdma_region); @@ -447,7 > > > +435,7 @@ int mana_ib_mmap(struct ib_ucontext *ibcontext, struct > > 
vm_area_struct *vma) > > > int ret; > > > > > > mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); > > > - gc = mdev->gdma_dev->gdma_context; > > > + gc = mdev_to_gc(mdev); > > > > > > if (vma->vm_pgoff != 0) { > > > ibdev_dbg(ibdev, "Unexpected vm_pgoff %lu\n", > > > vma->vm_pgoff); @@ -531,7 +519,7 @@ int > > mana_ib_gd_query_adapter_caps(struct mana_ib_dev *dev) > > > req.hdr.resp.msg_version = GDMA_MESSAGE_V3; > > > req.hdr.dev_id = dev->gdma_dev->dev_id; > > > > > > - err = mana_gd_send_request(dev->gdma_dev->gdma_context, sizeof(req), > > > + err = mana_gd_send_request(mdev_to_gc(dev), sizeof(req), > > > &req, sizeof(resp), &resp); > > > > > > if (err) { > > > diff --git a/drivers/infiniband/hw/mana/mana_ib.h > > > b/drivers/infiniband/hw/mana/mana_ib.h > > > index 6bdc0f549..6b15b2ab5 100644 > > > --- a/drivers/infiniband/hw/mana/mana_ib.h > > > +++ b/drivers/infiniband/hw/mana/mana_ib.h > > > @@ -142,6 +142,22 @@ struct mana_ib_query_adapter_caps_resp { > > > u32 max_inline_data_size; > > > }; /* HW Data */ > > > > > > +static inline struct gdma_context *mdev_to_gc(struct mana_ib_dev > > > +*mdev) { > > > + return mdev->gdma_dev->gdma_context; } > > > + > > > +static inline struct net_device *mana_ib_get_netdev(struct ib_device > > > +*ibdev, u32 port) { > > > + struct mana_ib_dev *mdev = container_of(ibdev, struct mana_ib_dev, > > ib_dev); > > > + struct gdma_context *gc = mdev_to_gc(mdev); > > > + struct mana_context *mc = gc->mana.driver_data; > > > + > > > + if (port < 1 || port > mc->num_ports) > > > + return NULL; > > > + return mc->ports[port - 1]; > > > +} > > > + > > > int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem > > *umem, > > > mana_handle_t *gdma_region); > > > > > > @@ -188,6 +204,7 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct > > ib_cq_init_attr *attr, > > > struct ib_udata *udata); > > > > > > int mana_ib_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata); > > > +int mana_ib_install_cq_cb(struct mana_ib_dev *mdev, struct mana_ib_cq > > > +*cq); > > > > > > int mana_ib_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata); > > > int mana_ib_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata); > > > diff --git a/drivers/infiniband/hw/mana/mr.c > > > b/drivers/infiniband/hw/mana/mr.c index 351207c60..ee4d4f834 100644 > > > --- a/drivers/infiniband/hw/mana/mr.c > > > +++ b/drivers/infiniband/hw/mana/mr.c > > > @@ -30,12 +30,9 @@ static int mana_ib_gd_create_mr(struct mana_ib_dev > > > *dev, struct mana_ib_mr *mr, { > > > struct gdma_create_mr_response resp = {}; > > > struct gdma_create_mr_request req = {}; > > > - struct gdma_dev *mdev = dev->gdma_dev; > > > - struct gdma_context *gc; > > > + struct gdma_context *gc = mdev_to_gc(dev); > > > int err; > > > > > > - gc = mdev->gdma_context; > > > - > > > mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_MR, sizeof(req), > > > sizeof(resp)); > > > req.pd_handle = mr_params->pd_handle; @@ -77,12 +74,9 @@ static > > > int mana_ib_gd_destroy_mr(struct mana_ib_dev *dev, u64 mr_handle) { > > > struct gdma_destroy_mr_response resp = {}; > > > struct gdma_destroy_mr_request req = {}; > > > - struct gdma_dev *mdev = dev->gdma_dev; > > > - struct gdma_context *gc; > > > + struct gdma_context *gc = mdev_to_gc(dev); > > > int err; > > > > > > - gc = mdev->gdma_context; > > > - > > > mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_MR, sizeof(req), > > > sizeof(resp)); > > > > > > @@ -164,8 +158,7 @@ struct ib_mr *mana_ib_reg_user_mr(struct ib_pd > > *ibpd, u64 start, u64 length, > > > return 
&mr->ibmr; > > > > > > err_dma_region: > > > - mana_gd_destroy_dma_region(dev->gdma_dev->gdma_context, > > > - dma_region_handle); > > > + mana_gd_destroy_dma_region(mdev_to_gc(dev), dma_region_handle); > > > > > > err_umem: > > > ib_umem_release(mr->umem); > > > diff --git a/drivers/infiniband/hw/mana/qp.c > > > b/drivers/infiniband/hw/mana/qp.c index e6d063d45..e889c798f 100644 > > > --- a/drivers/infiniband/hw/mana/qp.c > > > +++ b/drivers/infiniband/hw/mana/qp.c > > > @@ -17,12 +17,10 @@ static int mana_ib_cfg_vport_steering(struct > > mana_ib_dev *dev, > > > struct mana_cfg_rx_steer_resp resp = {}; > > > mana_handle_t *req_indir_tab; > > > struct gdma_context *gc; > > > - struct gdma_dev *mdev; > > > u32 req_buf_size; > > > int i, err; > > > > > > - gc = dev->gdma_dev->gdma_context; > > > - mdev = &gc->mana; > > > + gc = mdev_to_gc(dev); > > > > > > req_buf_size = > > > sizeof(*req) + sizeof(mana_handle_t) * > > > MANA_INDIRECT_TABLE_SIZE; @@ -39,7 +37,7 @@ static int > > mana_ib_cfg_vport_steering(struct mana_ib_dev *dev, > > > req->rx_enable = 1; > > > req->update_default_rxobj = 1; > > > req->default_rxobj = default_rxobj; > > > - req->hdr.dev_id = mdev->dev_id; > > > + req->hdr.dev_id = gc->mana.dev_id; > > > > > > /* If there are more than 1 entries in indirection table, enable RSS */ > > > if (log_ind_tbl_size) > > > @@ -106,7 +104,6 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, > > struct ib_pd *pd, > > > struct gdma_queue **gdma_cq_allocated; > > > mana_handle_t *mana_ind_table; > > > struct mana_port_context *mpc; > > > - struct gdma_queue *gdma_cq; > > > unsigned int ind_tbl_size; > > > struct net_device *ndev; > > > struct mana_ib_cq *cq; > > > @@ -231,19 +228,11 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, > > struct ib_pd *pd, > > > mana_ind_table[i] = wq->rx_object; > > > > > > /* Create CQ table entry */ > > > - WARN_ON(gc->cq_table[cq->id]); > > > - gdma_cq = kzalloc(sizeof(*gdma_cq), GFP_KERNEL); > > > - if (!gdma_cq) { > > > - ret = -ENOMEM; > > > + ret = mana_ib_install_cq_cb(mdev, cq); > > > + if (!ret) > > > goto fail; > > > - } > > > - gdma_cq_allocated[i] = gdma_cq; > > > > > > - gdma_cq->cq.context = cq; > > > - gdma_cq->type = GDMA_CQ; > > > - gdma_cq->cq.callback = mana_ib_cq_handler; > > > - gdma_cq->id = cq->id; > > > - gc->cq_table[cq->id] = gdma_cq; > > > + gdma_cq_allocated[i] = gc->cq_table[cq->id]; > > > } > > > resp.num_entries = i; > > > > > > @@ -409,18 +398,9 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, > > struct ib_pd *ibpd, > > > send_cq->id = cq_spec.queue_index; > > > > > > /* Create CQ table entry */ > > > - WARN_ON(gc->cq_table[send_cq->id]); > > > - gdma_cq = kzalloc(sizeof(*gdma_cq), GFP_KERNEL); > > > - if (!gdma_cq) { > > > - err = -ENOMEM; > > > + err = mana_ib_install_cq_cb(mdev, send_cq); > > > + if (err) > > > goto err_destroy_wq_obj; > > > - } > > > - > > > - gdma_cq->cq.context = send_cq; > > > - gdma_cq->type = GDMA_CQ; > > > - gdma_cq->cq.callback = mana_ib_cq_handler; > > > - gdma_cq->id = send_cq->id; > > > - gc->cq_table[send_cq->id] = gdma_cq; > > > > > > ibdev_dbg(&mdev->ib_dev, > > > "ret %d qp->tx_object 0x%llx sq id %llu cq id %llu\n", > > > err, @@ -442,7 +422,7 @@ static int mana_ib_create_qp_raw(struct ib_qp > > > *ibqp, struct ib_pd *ibpd, > > > > > > err_release_gdma_cq: > > > kfree(gdma_cq); > > > - gd->gdma_context->cq_table[send_cq->id] = NULL; > > > + gc->cq_table[send_cq->id] = NULL; > > > > > > err_destroy_wq_obj: > > > mana_destroy_wq_obj(mpc, GDMA_SQ, qp->tx_object); > > > > > 
> > >
> > > base-commit: d24b923f1d696ddacb09f0f2d1b1f4f045cfe65e
> > > prerequisite-patch-id: 1b5d35ba40b675d080bfbe6a0e73b0dd163f4a03
> > > --
> > > 2.43.0
> > >
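For completeness, here is a similar self-contained userspace model of the
CQ-callback installation that mana_ib_install_cq_cb() centralizes in the
patch above. calloc() stands in for kzalloc(), a plain array for
gc->cq_table, CQ_TABLE_SIZE is an invented bound, and the duplicate-entry
check returns -EEXIST where the kernel version only WARNs.

/* Model of mana_ib_install_cq_cb(): one allocation plus one cq_table
 * registration, replacing the open-coded copies in qp.c. Simplified
 * stand-in types, not the kernel definitions. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define CQ_TABLE_SIZE 64        /* invented bound for the model */

struct mana_ib_cq { unsigned int id; };

struct gdma_queue {
        void (*callback)(void *ctx);
        void *context;
        unsigned int id;
};

static struct gdma_queue *cq_table[CQ_TABLE_SIZE];

static void mana_ib_cq_handler(void *ctx)
{
        printf("completion on cq %u\n", ((struct mana_ib_cq *)ctx)->id);
}

static int mana_ib_install_cq_cb(struct mana_ib_cq *cq)
{
        struct gdma_queue *gdma_cq;

        if (cq->id >= CQ_TABLE_SIZE)
                return -EINVAL;
        if (cq_table[cq->id])   /* the kernel version WARNs here */
                return -EEXIST;

        gdma_cq = calloc(1, sizeof(*gdma_cq));
        if (!gdma_cq)
                return -ENOMEM;

        gdma_cq->context = cq;
        gdma_cq->callback = mana_ib_cq_handler;
        gdma_cq->id = cq->id;
        cq_table[cq->id] = gdma_cq;
        return 0;
}

int main(void)
{
        struct mana_ib_cq cq = { .id = 7 };

        if (mana_ib_install_cq_cb(&cq))
                return 1;
        /* Dispatch through the table, as the interrupt path would. */
        cq_table[cq.id]->callback(cq_table[cq.id]->context);
        free(cq_table[cq.id]);
        cq_table[cq.id] = NULL;
        return 0;
}

The point of the helper is visible even in this toy form: every caller gets
the WARN/allocate/fill/register sequence exactly once, instead of repeating
it in each QP-creation path.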