Date: Thu, 28 Mar 2024 20:38:08 +0000
From: Simon Horman
To: Mina Almasry
Cc: netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
    linux-parisc@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Jonathan Corbet, Richard Henderson, Ivan Kokshaysky, Matt Turner,
    Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
    Andreas Larsson, Jesper Dangaard Brouer, Ilias Apalodimas,
    Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Arnd Bergmann,
    Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    Steffen Klassert, Herbert Xu, David Ahern, Willem de Bruijn,
    Shuah Khan, Sumit Semwal, Christian König, Pavel Begunkov,
    David Wei, Jason Gunthorpe, Yunsheng Lin, Shailend Chand,
    Harshitha Ramamurthy, Shakeel Butt, Jeroen de Borst,
    Praveen Kaligineedi, Willem de Bruijn, Kaiyuan Zhang
Subject: Re: [RFC PATCH net-next v7 04/14] netdev: support binding dma-buf to netdevice
Message-ID: <20240328203808.GL651713@kernel.org>
References: <20240326225048.785801-1-almasrymina@google.com>
 <20240326225048.785801-5-almasrymina@google.com>
 <20240328182812.GJ651713@kernel.org>

On Thu, Mar 28, 2024 at 11:55:23AM -0700, Mina Almasry wrote:
> On Thu, Mar 28, 2024 at 11:28 AM Simon Horman wrote:
> >
> > On Tue, Mar 26, 2024 at 03:50:35PM -0700, Mina Almasry wrote:
> > > Add a netdev_dmabuf_binding struct which represents the
> > > dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
> > > rx queues on the netdevice. On binding, dma_buf_attach &
> > > dma_buf_map_attachment are called. The entries in the sg_table from the
> > > mapping are inserted into a genpool to make them ready for allocation.
> > >
> > > The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
> > > holds the dma-buf offset of the base of the chunk and the dma_addr of
> > > the chunk. Both are needed to use allocations that come from this chunk.
> > >
> > > We create a new type that represents an allocation from the genpool:
> > > net_iov. We set the net_iov allocation size in the genpool to PAGE_SIZE
> > > for simplicity: to match the PAGE_SIZE normally allocated by the page
> > > pool and given to the drivers.
> > >
> > > The user can unbind the dma-buf from the netdevice by closing the
> > > netlink socket that established the binding. We do this so that the
> > > binding is automatically unbound even if the userspace process crashes.
> > >
> > > Binding and unbinding leave an indicator in struct netdev_rx_queue that
> > > the given queue is bound, but the binding doesn't take effect until the
> > > driver actually reconfigures its queues and re-initializes its page
> > > pool.
> > >
> > > The netdev_dmabuf_binding struct is refcounted, and releases its
> > > resources only when all the refs are released.
> > >
> > > Signed-off-by: Willem de Bruijn
> > > Signed-off-by: Kaiyuan Zhang
> > > Signed-off-by: Mina Almasry
> >
> > ...
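As an aside on the genpool arrangement described in the quoted commit message,
here is a minimal sketch of adding the DMA-mapped sg_table entries to a
gen_pool with a per-chunk owner that records the dma-buf offset and base
dma_addr of the chunk. This is not the patch's code: the names
chunk_owner_sketch and sketch_fill_genpool are made up, and only the generic
genalloc/scatterlist APIs (gen_pool_add_owner(), for_each_sgtable_dma_sg())
are assumed.

#include <linux/genalloc.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Illustrative owner for one genpool chunk: the dma-buf offset of the
 * chunk's base and the dma_addr it was mapped to (hypothetical name).
 */
struct chunk_owner_sketch {
        unsigned long base_offset;
        dma_addr_t base_dma_addr;
};

/* Add one genpool chunk per DMA-mapped scatterlist entry.  The pool is
 * assumed to have been created with gen_pool_create(PAGE_SHIFT, nid),
 * which gives the PAGE_SIZE allocation granularity mentioned above.
 */
static int sketch_fill_genpool(struct gen_pool *pool, struct sg_table *sgt,
                               int nid)
{
        struct scatterlist *sg;
        /* Start at PAGE_SIZE rather than 0 so that gen_pool_alloc()'s
         * 0-on-failure return never collides with a valid allocation.
         */
        unsigned long offset = PAGE_SIZE;
        int i, err;

        for_each_sgtable_dma_sg(sgt, sg, i) {
                struct chunk_owner_sketch *owner;

                owner = kzalloc(sizeof(*owner), GFP_KERNEL);
                if (!owner)
                        return -ENOMEM;

                owner->base_offset = offset;
                owner->base_dma_addr = sg_dma_address(sg);

                /* phys is unused here, so pass -1 as gen_pool_add() does. */
                err = gen_pool_add_owner(pool, offset, -1, sg_dma_len(sg),
                                         nid, owner);
                if (err) {
                        kfree(owner);
                        return err;
                }

                offset += sg_dma_len(sg);
        }

        return 0;
}

Each chunk's owner then carries the two values the commit message says are
needed to make use of an allocation from that chunk: the dma-buf offset of
the chunk base and its dma_addr.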
> >
> > > +int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> > > +                                    struct net_devmem_dmabuf_binding *binding)
> > > +{
> > > +        struct netdev_rx_queue *rxq;
> > > +        u32 xa_idx;
> > > +        int err;
> > > +
> > > +        if (rxq_idx >= dev->num_rx_queues)
> > > +                return -ERANGE;
> > > +
> > > +        rxq = __netif_get_rx_queue(dev, rxq_idx);
> > > +        if (rxq->mp_params.mp_priv)
> > > +                return -EEXIST;
> > > +
> > > +        err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> > > +                       GFP_KERNEL);
> > > +        if (err)
> > > +                return err;
> > > +
> > > +        /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> > > +         * race with another thread that is also modifying this value. However,
> > > +         * the driver may read this config while it's creating its
> > > +         * rx-queues. WRITE_ONCE() here to match the READ_ONCE() in the driver.
> > > +         */
> > > +        WRITE_ONCE(rxq->mp_params.mp_ops, &dmabuf_devmem_ops);
> >
> > Hi Mina,
> >
> > This causes a build failure because dmabuf_devmem_ops is not added until a
> > subsequent patch in this series.
> >
>
> My apologies. I do notice the failure in patchwork now. I'll do a
> patch-by-patch build for the next iteration.

Thanks, much appreciated.

> > > +        WRITE_ONCE(rxq->mp_params.mp_priv, binding);
> > > +
> > > +        err = net_devmem_restart_rx_queue(dev, rxq_idx);
> > > +        if (err)
> > > +                goto err_xa_erase;
> > > +
> > > +        return 0;
> > > +
> > > +err_xa_erase:
> > > +        WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> > > +        WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> > > +        xa_erase(&binding->bound_rxq_list, xa_idx);
> > > +
> > > +        return err;
> > > +}
> >
> > ...
> >
>
> --
> Thanks,
> Mina
>
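A footnote on the WRITE_ONCE()/READ_ONCE() pairing discussed in the quoted
comment: rtnl_lock serializes the writers (bind/unbind), while the _ONCE
accessors keep the individual pointer accesses from being torn or re-read by
the compiler when a driver inspects the queue config while (re)creating its
rx-queues. A hypothetical driver-side read might look like the sketch below;
the helper name is made up, and the rxq->mp_params.mp_ops / mp_priv fields
are the ones added by this series, not present in mainline headers.

#include <linux/compiler.h>
#include <net/netdev_rx_queue.h>

/* Hypothetical helper a driver could call while re-creating an RX queue.
 * Pairs with the WRITE_ONCE() calls in net_devmem_bind_dmabuf_to_queue()
 * quoted above: each pointer is read exactly once, without compiler
 * tearing or re-reads.
 */
static bool sketch_rxq_has_memory_provider(struct netdev_rx_queue *rxq)
{
        return READ_ONCE(rxq->mp_params.mp_ops) != NULL &&
               READ_ONCE(rxq->mp_params.mp_priv) != NULL;
}

Since queue reconfiguration itself typically runs under rtnl as well, a driver
checking both fields during a restart sees a consistent ops/priv pair; the
_ONCE accessors only mark the lockless-looking accesses for the compiler.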