References: <20240305020153.2787423-1-almasrymina@google.com> <20240305020153.2787423-3-almasrymina@google.com> <15625bac-dfec-4c4e-a828-d11424f7aced@davidwei.uk>
In-Reply-To: <15625bac-dfec-4c4e-a828-d11424f7aced@davidwei.uk>
From: Mina Almasry
Date: Fri, 8 Mar 2024 11:53:18 -0800
Subject: Re: [RFC PATCH net-next v6 02/15] net: page_pool: create hooks for custom page providers
To: David Wei
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Jonathan Corbet, Richard Henderson, Ivan Kokshaysky, Matt Turner,
    Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller, Andreas Larsson,
    Jesper Dangaard Brouer, Ilias Apalodimas, Steven Rostedt, Masami Hiramatsu,
    Mathieu Desnoyers, Arnd Bergmann, Alexei Starovoitov, Daniel Borkmann,
    Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Ahern,
    Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Pavel Begunkov,
    Jason Gunthorpe, Yunsheng Lin, Shailend Chand, Harshitha Ramamurthy,
    Jeroen de Borst, Praveen Kaligineedi

On Thu, Mar 7, 2024 at 8:57 PM David Wei wrote:
>
> On 2024-03-04 18:01, Mina Almasry wrote:
> > From: Jakub Kicinski
> >
> > The page providers which try to reuse the same pages will
> > need to hold onto the ref, even if page gets released from
> > the pool - as in releasing the page from the pp just transfers
> > the "ownership" reference from pp to the provider, and provider
> > will wait for other references to be gone before feeding this
> > page back into the pool.
> >
> > Signed-off-by: Jakub Kicinski
> > Signed-off-by: Mina Almasry
> >
> > ---
> >
> > This is implemented by Jakub in his RFC:
> > https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.com/T/
> >
> > I take no credit for the idea or implementation; I only added minor
> > edits to make this workable with device memory TCP, and removed some
> > hacky test code. This is a critical dependency of device memory TCP
> > and thus I'm pulling it into this series to make it reviewable and
> > mergeable.
> >
> > RFC v3 -> v1
> > - Removed unused mem_provider. (Yunsheng).
> > - Replaced memory_provider & mp_priv with netdev_rx_queue (Jakub).
> >
> > ---
> >  include/net/page_pool/types.h | 12 ++++++++++
> >  net/core/page_pool.c          | 43 +++++++++++++++++++++++++++++++----
> >  2 files changed, 50 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> > index 5e43a08d3231..ffe5f31fb0da 100644
> > --- a/include/net/page_pool/types.h
> > +++ b/include/net/page_pool/types.h
> > @@ -52,6 +52,7 @@ struct pp_alloc_cache {
> >   * @dev: device, for DMA pre-mapping purposes
> >   * @netdev: netdev this pool will serve (leave as NULL if none or multiple)
> >   * @napi: NAPI which is the sole consumer of pages, otherwise NULL
> > + * @queue: struct netdev_rx_queue this page_pool is being created for.
> >   * @dma_dir: DMA mapping direction
> >   * @max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
> >   * @offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
> > @@ -64,6 +65,7 @@ struct page_pool_params {
> >  	int nid;
> >  	struct device *dev;
> >  	struct napi_struct *napi;
> > +	struct netdev_rx_queue *queue;
> >  	enum dma_data_direction dma_dir;
> >  	unsigned int max_len;
> >  	unsigned int offset;
> > @@ -126,6 +128,13 @@ struct page_pool_stats {
> >  };
> >  #endif
> >
> > +struct memory_provider_ops {
> > +	int (*init)(struct page_pool *pool);
> > +	void (*destroy)(struct page_pool *pool);
> > +	struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
> > +	bool (*release_page)(struct page_pool *pool, struct page *page);
> > +};
>
> Separate question as I try to adapt bnxt to this and your queue
> configuration API.
>
> How does GVE handle the need to allocate kernel pages for headers and
> dmabuf for payloads?
>
> Reading the code, struct gve_rx_ring is the main per-ring object with a
> page pool. gve_queue_page_lists are filled with page pool netmem
> allocations from the page pool in gve_alloc_queue_page_list(). Are these
> strictly used for payloads only?
>

You're almost correct. We actually don't use the gve queue page lists for
devmem TCP; that's an unrelated GVE feature/code path for low-memory VMs.
The code path in effect here is the !qpl one. In that path, for incoming
RX packets we allocate a new or recycled netmem from the page pool in
gve_alloc_page_dqo(). These buffers hold only the payload when header
split is enabled; when header split is disabled, they hold the entire
incoming packet.

> I found a struct gve_header_buf in both gve_rx_ring and struct
> gve_per_rx_queue_mem_dqo. This is allocated in gve_rx_queue_mem_alloc()
> using dma_alloc_coherent(). Is this where GVE stores headers?
>

Yes, this is where GVE stores headers.

> IOW, GVE only uses page pool to allocate memory for QPLs, and QPLs are
> used by the device for split payloads. Is my understanding correct?
>

-- 
Thanks,
Mina
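
For readers adapting other drivers, here is a minimal sketch of the
header/payload split described above. It is illustrative only, not the
actual GVE code: the demo_rx_ring type and its dev/pool/hdr_buf/hdr_dma
fields, and both demo_* functions, are hypothetical names.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <net/page_pool/helpers.h>

struct demo_rx_ring {
	struct device *dev;
	struct page_pool *pool;	/* payload buffers (pages or, with a provider, devmem) */
	void *hdr_buf;		/* header-split landing area */
	dma_addr_t hdr_dma;
};

/* Headers: ordinary kernel memory, coherently mapped for the NIC. */
static int demo_rx_alloc_hdr_buf(struct demo_rx_ring *ring, size_t hdr_len)
{
	ring->hdr_buf = dma_alloc_coherent(ring->dev, hdr_len,
					   &ring->hdr_dma, GFP_KERNEL);
	return ring->hdr_buf ? 0 : -ENOMEM;
}

/* Payloads: a new or recycled buffer from the page pool. */
static struct page *demo_rx_alloc_payload(struct demo_rx_ring *ring)
{
	return page_pool_alloc_pages(ring->pool, GFP_ATOMIC | __GFP_NOWARN);
}

With the memory_provider_ops hook from this patch, the payload side stays
the same from the driver's point of view: the pool's allocation path would
hand back provider-backed memory instead of ordinary pages, while the
header buffer remains plain kernel memory.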