Message-ID: <279a2999-3c0a-4839-aa2e-602864197410@kernel.org>
Date: Sat, 9 Dec 2023 16:29:09 -0700
From: David Ahern
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice
To: Mina Almasry
Cc: Shailend Chand, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-kselftest@vger.kernel.org, bpf@vger.kernel.org,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Jonathan Corbet, Jeroen de Borst, Praveen Kaligineedi,
 Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann,
 Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König,
 Yunsheng Lin, Harshitha Ramamurthy, Shakeel Butt, Willem de Bruijn,
 Kaiyuan Zhang
References: <20231208005250.2910004-1-almasrymina@google.com>
 <20231208005250.2910004-7-almasrymina@google.com>
 <5752508c-f7bc-44ac-8778-c807b2ee5831@kernel.org>

On 12/8/23 12:22 PM, Mina Almasry wrote:
> On Fri, Dec 8, 2023 at 9:48 AM David Ahern wrote:
>>
>> On 12/7/23 5:52 PM, Mina Almasry wrote:
> ...
>>> +
>>> +	xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
>>> +		if (rxq->binding == binding) {
>>> +			/* We hold the rtnl_lock while binding/unbinding
>>> +			 * dma-buf, so we can't race with another thread that
>>> +			 * is also modifying this value. However, the driver
>>> +			 * may read this config while it's creating its
>>> +			 * rx-queues. WRITE_ONCE() here to match the
>>> +			 * READ_ONCE() in the driver.
>>> +			 */
>>> +			WRITE_ONCE(rxq->binding, NULL);
>>> +
>>> +			rxq_idx = get_netdev_rx_queue_index(rxq);
>>> +
>>> +			netdev_restart_rx_queue(binding->dev, rxq_idx);
>>
>> Blindly restarting a queue when a dmabuf is unbound is heavy handed. If
>> the dmabuf has no outstanding references (ie., no references in the
>> RxQ), then no restart is needed.
>>
>
> I think I need to stop the queue while binding to a dmabuf for the
> sake of concurrency, no? I.e. the softirq thread may be delivering a
> packet, and in parallel a separate thread holds rtnl_lock and tries to
> bind the dma-buf. At that point the page_pool recreation will race
> with the driver doing page_pool_alloc_page(). I don't think I can
> insert a lock to handle this into the rx fast path, no?

I think it depends on the details of how entries are added and removed
from the pool. I am behind on the pp details at this point, so I do
need to do some homework.

>
> Also, this sounds like it requires (lots of) more changes. The
> page_pool + driver need to report how many pending references there
> are (with locking so we don't race with incoming packets), and have
> them reported via an ndo so that we can skip restarting the queue.
> Implementing the changes isn't a huge issue, but handling the
> concurrency may be a genuine blocker. Not sure it's worth the upside
> of not restarting the single rx queue?

It has to do with the usability of this overall solution. As I
mentioned, most ML use cases can (and will want to) use many memory
allocations for receiving packets - e.g., allocations per message and
receiving multiple messages per socket connection.
>
>>> +		}
>>> +	}
>>> +
>>> +	xa_erase(&netdev_dmabuf_bindings, binding->id);
>>> +
>>> +	netdev_dmabuf_binding_put(binding);
>>> +}
>>> +
>>> +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
>>> +				struct netdev_dmabuf_binding *binding)
>>> +{
>>> +	struct netdev_rx_queue *rxq;
>>> +	u32 xa_idx;
>>> +	int err;
>>> +
>>> +	rxq = __netif_get_rx_queue(dev, rxq_idx);
>>> +
>>> +	if (rxq->binding)
>>> +		return -EEXIST;
>>> +
>>> +	err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
>>> +		       GFP_KERNEL);
>>> +	if (err)
>>> +		return err;
>>> +
>>> +	/* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
>>> +	 * race with another thread that is also modifying this value. However,
>>> +	 * the driver may read this config while it's creating its rx-queues.
>>> +	 * WRITE_ONCE() here to match the READ_ONCE() in the driver.
>>> +	 */
>>> +	WRITE_ONCE(rxq->binding, binding);
>>> +
>>> +	err = netdev_restart_rx_queue(dev, rxq_idx);
>>
>> Similarly, here binding a dmabuf to a queue. I was expecting the dmabuf
>> binding to add entries to the page pool for the queue.
>
> To be honest, I think maybe there's a slight disconnect between how
> you think the page_pool works and my primitive understanding of how
> it works. Today, I see a 1:1 mapping between rx-queue and page_pool in
> the code. I don't see 1:many or many:1 mappings.

I am not referring to 1:N or N:1 for page pool and queues. I am
referring to entries within a single page pool for a single Rx queue.

>
> In theory, mapping 1 rx-queue to n page_pools is trivial: the driver
> can call page_pool_create() multiple times to generate n page_pools and
> decide for incoming packets which one to use.
>
> However, mapping n rx-queues to 1 page_pool seems like a can of worms.
> I see code in the page_pool that looks to me (and Willem) like it's
> safe only because the page_pool is used from the same napi context.
> With an n rx-queue : 1 page_pool mapping, that is no longer true, no?
> There is a long tail of issues to resolve to be able to map 1 page_pool
> to n queues as I understand it, and even if resolved I'm not sure the
> maintainers are interested in taking the code.
>
> So, per my humble understanding, there is no such thing as "add entries
> to the page pool for the (specific) queue"; the page_pool is always
> used by 1 queue.
>
> Note that even though this limitation exists, we still support binding
> 1 dma-buf to multiple queues, because multiple page pools can use the
> same netdev_dmabuf_binding. I should add that to the docs.
>
>> If the pool was
>> previously empty, then maybe the queue needs to be "started" in the
>> sense of creating with h/w or just pushing buffers into the queue and
>> moving the pidx.
>>
>
> I don't think it's enough to add buffers to the page_pool, no? The
> existing buffers in the page_pool (host mem) must be purged. I think
> maybe the queue needs to be stopped as well so that we don't race with
> incoming packets and end up with skbs with devmem and non-devmem frags
> (unless you're thinking it becomes a requirement to support that; I
> think things are complicated as-is and it's a good simplification).
> When we already purge the existing buffers & restart the queue, it's
> little effort to migrate this to become in line with Jakub's queue-api
> that he also wants to use for per-queue configuration & ndo_stop/open.
>
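As a concrete illustration of the "1 dma-buf to multiple queues" point above,
here is a minimal sketch that only assumes the netdev_bind_dmabuf_to_queue()
signature quoted in this patch; the wrapper function itself is hypothetical
and not something the series adds:

/* Hypothetical illustration, not part of the series: bind the same
 * netdev_dmabuf_binding to several rx queues. Each queue still gets its
 * own page_pool, but all of those page_pools draw from the one shared
 * binding.
 */
static int example_bind_dmabuf_to_queues(struct net_device *dev,
					 struct netdev_dmabuf_binding *binding,
					 const u32 *rxq_idxs, int n)
{
	int i, err;

	for (i = 0; i < n; i++) {
		/* Each call restarts rxq_idxs[i] so its page_pool is
		 * recreated against the shared binding.
		 */
		err = netdev_bind_dmabuf_to_queue(dev, rxq_idxs[i], binding);
		if (err)
			return err;
	}

	return 0;
}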