Date: Thu, 28 Mar 2024 18:28:12 +0000
From: Simon Horman
To: Mina Almasry
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org, bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jonathan Corbet, Richard Henderson, Ivan Kokshaysky, Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller, Andreas Larsson, Jesper Dangaard Brouer, Ilias Apalodimas, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Arnd Bergmann, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Steffen Klassert, Herbert Xu, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Pavel Begunkov, David Wei, Jason Gunthorpe, Yunsheng Lin, Shailend Chand, Harshitha Ramamurthy, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi, Willem de Bruijn, Kaiyuan Zhang
Subject: Re: [RFC PATCH net-next v7 04/14] netdev: support binding dma-buf to netdevice
Message-ID: <20240328182812.GJ651713@kernel.org>
References: <20240326225048.785801-1-almasrymina@google.com> <20240326225048.785801-5-almasrymina@google.com>
In-Reply-To: <20240326225048.785801-5-almasrymina@google.com>

On Tue, Mar 26, 2024 at 03:50:35PM -0700, Mina Almasry wrote:
> Add a netdev_dmabuf_binding struct which represents the
> dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
> rx queues on the netdevice. At bind time, dma_buf_attach
> & dma_buf_map_attachment are called. The entries in the sg_table from
> the mapping are inserted into a genpool to make them ready for
> allocation.
>
> The chunks in the genpool are owned by a dmabuf_chunk_owner struct
> which holds the dma-buf offset of the base of the chunk and the
> dma_addr of the chunk. Both are needed to use allocations that come
> from this chunk.
>
> We create a new type that represents an allocation from the genpool:
> net_iov. We set up the net_iov allocation size in the genpool to
> PAGE_SIZE for simplicity: to match the PAGE_SIZE normally allocated by
> the page pool and given to the drivers.
>
> The user can unbind the dmabuf from the netdevice by closing the
> netlink socket that established the binding. We do this so that the
> binding is automatically unbound even if the userspace process crashes.
>
> The binding and unbinding leave an indicator in struct netdev_rx_queue
> that the given queue is bound, but the binding doesn't take effect
> until the driver actually reconfigures its queues and re-initializes
> its page pool.
>
> The netdev_dmabuf_binding struct is refcounted, and releases its
> resources only when all the refs are released.
>
> Signed-off-by: Willem de Bruijn
> Signed-off-by: Kaiyuan Zhang
> Signed-off-by: Mina Almasry

..

> +int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> +				    struct net_devmem_dmabuf_binding *binding)
> +{
> +	struct netdev_rx_queue *rxq;
> +	u32 xa_idx;
> +	int err;
> +
> +	if (rxq_idx >= dev->num_rx_queues)
> +		return -ERANGE;
> +
> +	rxq = __netif_get_rx_queue(dev, rxq_idx);
> +	if (rxq->mp_params.mp_priv)
> +		return -EEXIST;
> +
> +	err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> +		       GFP_KERNEL);
> +	if (err)
> +		return err;
> +
> +	/* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> +	 * race with another thread that is also modifying this value.
> +	 * However, the driver may read this config while it's creating its
> +	 * rx-queues. WRITE_ONCE() here to match the READ_ONCE() in the
> +	 * driver.
> +	 */
> +	WRITE_ONCE(rxq->mp_params.mp_ops, &dmabuf_devmem_ops);

Hi Mina,

This causes a build failure because dmabuf_devmem_ops is not added until
a subsequent patch in this series.

> +	WRITE_ONCE(rxq->mp_params.mp_priv, binding);
> +
> +	err = net_devmem_restart_rx_queue(dev, rxq_idx);
> +	if (err)
> +		goto err_xa_erase;
> +
> +	return 0;
> +
> +err_xa_erase:
> +	WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> +	WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> +	xa_erase(&binding->bound_rxq_list, xa_idx);
> +
> +	return err;
> +}

..
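
For anyone following along, my reading of the WRITE_ONCE()/READ_ONCE()
pairing that the comment above describes is sketched below from the
driver side. This is illustrative only: the function name
my_drv_setup_rx_queue is made up and the page pool setup is elided;
only __netif_get_rx_queue() and the mp_params fields come from this
series.

	/* Hypothetical driver queue-creation path (not part of this
	 * series): it reads the memory-provider config with READ_ONCE()
	 * to pair with the WRITE_ONCE() in
	 * net_devmem_bind_dmabuf_to_queue(), which runs under rtnl_lock.
	 */
	static int my_drv_setup_rx_queue(struct net_device *dev, u32 rxq_idx)
	{
		struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);

		if (READ_ONCE(rxq->mp_params.mp_ops)) {
			/* This queue is bound to a dma-buf: the binding is
			 * available via READ_ONCE(rxq->mp_params.mp_priv),
			 * and the page pool should be created with the
			 * devmem memory provider rather than the normal
			 * page allocator (details omitted).
			 */
		}

		/* ... usual descriptor ring and page pool setup ... */
		return 0;
	}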
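
Unrelated to the build failure: the commit message's description of
feeding the dma-buf's sg_table into a genpool maps onto the genalloc
API roughly as in the sketch below. The struct layout, helper name and
error handling are my assumptions for illustration, not the code in
this patch; gen_pool_add_owner(), for_each_sgtable_dma_sg(),
sg_dma_address() and sg_dma_len() are stock kernel APIs, and the pool
is assumed to have been created with gen_pool_create(PAGE_SHIFT, ...)
so that allocations are PAGE_SIZE each.

	/* Sketch: one genpool chunk per DMA-mapped sg entry, with an owner
	 * struct recording the chunk's dma-buf offset and DMA address so
	 * that allocations from the pool can be translated back.
	 */
	struct dmabuf_chunk_owner {
		unsigned long base_offset;	/* offset into the dma-buf */
		dma_addr_t base_dma_addr;
	};

	static int sketch_fill_genpool(struct gen_pool *pool,
				       struct sg_table *sgt)
	{
		/* Start the fake "addresses" at PAGE_SIZE, not 0, since
		 * gen_pool_alloc() uses 0 to signal failure.
		 */
		unsigned long off = PAGE_SIZE;
		struct scatterlist *sg;
		int i, err;

		for_each_sgtable_dma_sg(sgt, sg, i) {
			struct dmabuf_chunk_owner *owner;

			owner = kzalloc(sizeof(*owner), GFP_KERNEL);
			if (!owner)
				return -ENOMEM;

			owner->base_offset = off;
			owner->base_dma_addr = sg_dma_address(sg);

			/* -1: no physical address associated with the chunk */
			err = gen_pool_add_owner(pool, off, -1,
						 sg_dma_len(sg),
						 NUMA_NO_NODE, owner);
			if (err) {
				kfree(owner);
				return err;
			}

			off += sg_dma_len(sg);
		}
		return 0;
	}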