X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
References: <20240613013557.1169171-1-almasrymina@google.com>
 <20240613013557.1169171-6-almasrymina@google.com>
 <322e7317-61dc-4f1e-8706-7db6f5f7a030@bp.renesas.com>
In-Reply-To: <322e7317-61dc-4f1e-8706-7db6f5f7a030@bp.renesas.com>
From: Mina Almasry
Date: Thu, 13 Jun 2024 07:18:23 -0700
Subject: Re: [PATCH net-next v12 05/13] page_pool: convert to use netmem
To: Paul Barker
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org, bpf@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Donald Hunter, Jonathan Corbet, Richard Henderson, Ivan Kokshaysky,
 Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
 Andreas Larsson, Sergey Shtylyov, Jesper Dangaard Brouer, Ilias Apalodimas,
 Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Arnd Bergmann,
 Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
 Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, KP Singh,
 Stanislav Fomichev, Hao Luo, Jiri Olsa, Steffen Klassert, Herbert Xu, David Ahern,
 Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Bagas Sanjaya,
 Christoph Hellwig, Nikolay Aleksandrov, Pavel Begunkov, David Wei, Jason Gunthorpe,
 Yunsheng Lin, Shailend Chand, Harshitha Ramamurthy, Shakeel Butt, Jeroen de Borst,
 Praveen Kaligineedi, linux-mm@kvack.org, Matthew Wilcox
Content-Type: text/plain; charset="UTF-8"
Bottomley" , Helge Deller , Andreas Larsson , Sergey Shtylyov , Jesper Dangaard Brouer , Ilias Apalodimas , Steven Rostedt , Masami Hiramatsu , Mathieu Desnoyers , Arnd Bergmann , Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Eduard Zingerman , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Steffen Klassert , Herbert Xu , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , =?UTF-8?Q?Christian_K=C3=B6nig?= , Bagas Sanjaya , Christoph Hellwig , Nikolay Aleksandrov , Pavel Begunkov , David Wei , Jason Gunthorpe , Yunsheng Lin , Shailend Chand , Harshitha Ramamurthy , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi , linux-mm@kvack.org, Matthew Wilcox Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable On Thu, Jun 13, 2024 at 1:36=E2=80=AFAM Paul Barker wrote: > > On 13/06/2024 02:35, Mina Almasry wrote: > > Abstrace the memory type from the page_pool so we can later add support > > s/Abstrace/Abstract/ > Thanks, will do. > > for new memory types. Convert the page_pool to use the new netmem type > > abstraction, rather than use struct page directly. > > > > As of this patch the netmem type is a no-op abstraction: it's always a > > struct page underneath. All the page pool internals are converted to > > use struct netmem instead of struct page, and the page pool now exports > > 2 APIs: > > > > 1. The existing struct page API. > > 2. The new struct netmem API. > > > > Keeping the existing API is transitional; we do not want to refactor al= l > > the current drivers using the page pool at once. > > > > The netmem abstraction is currently a no-op. The page_pool uses > > page_to_netmem() to convert allocated pages to netmem, and uses > > netmem_to_page() to convert the netmem back to pages to pass to mm APIs= , > > > > Follow up patches to this series add non-paged netmem support to the > > page_pool. This change is factored out on its own to limit the code > > churn to this 1 patch, for ease of code review. > > > > Signed-off-by: Mina Almasry > > > > --- > > > > v12: > > - Fix allmodconfig build error. Very recently renesas/ravb_main.c added > > a dependency on page_pool that I missed in my rebase. The dependency > > calls page_pool_alloc() directly as it wants to set a custom gfp_mask= , > > which is unique as all other drivers call a wrapper to that function. > > Fix it by adding netmem_to_page() in the driver.> - Fix printing netm= em trace printing (Pavel). > > > > v11: > > - Fix typing to remove sparse warning. (Paolo/Steven) > > > > v9: > > - Fix sparse error (Simon). > > > > v8: > > - Fix napi_pp_put_page() taking netmem instead of page to fix > > patch-by-patch build error. > > - Add net/netmem.h include in this patch to fix patch-by-patch build > > error. > > > > v6: > > > > - Rebased on top of the merged netmem_ref type. 
> >
> > Cc: linux-mm@kvack.org
> > Cc: Matthew Wilcox
> >
> > ---
> >  drivers/net/ethernet/renesas/ravb_main.c |   5 +-
> >  include/linux/skbuff_ref.h               |   4 +-
> >  include/net/netmem.h                     |  15 ++
> >  include/net/page_pool/helpers.h          | 120 ++++++---
> >  include/net/page_pool/types.h            |  14 +-
> >  include/trace/events/page_pool.h         |  30 +--
> >  net/bpf/test_run.c                       |   5 +-
> >  net/core/page_pool.c                     | 304 ++++++++++++-----------
> >  net/core/skbuff.c                        |   8 +-
> >  9 files changed, 305 insertions(+), 200 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
> > index c1546b916e4ef..093236ebfeecb 100644
> > --- a/drivers/net/ethernet/renesas/ravb_main.c
> > +++ b/drivers/net/ethernet/renesas/ravb_main.c
> > @@ -303,8 +303,9 @@ ravb_alloc_rx_buffer(struct net_device *ndev, int q, u32 entry, gfp_t gfp_mask,
> >
> >         rx_buff = &priv->rx_buffers[q][entry];
> >         size = info->rx_buffer_size;
> > -       rx_buff->page = page_pool_alloc(priv->rx_pool[q], &rx_buff->offset,
> > -                                       &size, gfp_mask);
> > +       rx_buff->page = netmem_to_page(page_pool_alloc(priv->rx_pool[q],
> > +                                                      &rx_buff->offset,
> > +                                                      &size, gfp_mask));
> >         if (unlikely(!rx_buff->page)) {
> >                 /* We just set the data size to 0 for a failed mapping which
> >                  * should prevent DMA from happening...
>
> [snip]
>
> > -static inline struct page *page_pool_alloc(struct page_pool *pool,
> > -                                           unsigned int *offset,
> > -                                           unsigned int *size, gfp_t gfp)
> > +static inline netmem_ref page_pool_alloc(struct page_pool *pool,
> > +                                         unsigned int *offset,
> > +                                         unsigned int *size, gfp_t gfp)
> >  {
> >         unsigned int max_size = PAGE_SIZE << pool->p.order;
> > -       struct page *page;
> > +       netmem_ref netmem;
> >
> >         if ((*size << 1) > max_size) {
> >                 *size = max_size;
> >                 *offset = 0;
> > -               return page_pool_alloc_pages(pool, gfp);
> > +               return page_pool_alloc_netmem(pool, gfp);
> >         }
> >
> > -       page = page_pool_alloc_frag(pool, offset, *size, gfp);
> > -       if (unlikely(!page))
> > -               return NULL;
> > +       netmem = page_pool_alloc_frag_netmem(pool, offset, *size, gfp);
> > +       if (unlikely(!netmem))
> > +               return 0;
> >
> >         /* There is very likely not enough space for another fragment, so append
> >          * the remaining size to the current fragment to avoid truesize
> > @@ -140,7 +142,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
> >                 pool->frag_offset = max_size;
> >         }
> >
> > -       return page;
> > +       return netmem;
> >  }
> >
> >  /**
> > @@ -154,7 +156,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
> >   * utilization and performance penalty.
> >   *
> >   * Return:
> > - * Return allocated page or page fragment, otherwise return NULL.
> > + * Return allocated page or page fragment, otherwise return 0.
> >   */
> >  static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
> >                                                 unsigned int *offset,
> > @@ -162,7 +164,7 @@ static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
> >  {
> >         gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
> >
> > -       return page_pool_alloc(pool, offset, size, gfp);
> > +       return netmem_to_page(page_pool_alloc(pool, offset, size, gfp));
> >  }
>
> I find this API change confusing - why should page_pool_alloc() return a
> netmem_ref but page_pool_dev_alloc() return a struct page *?
>
> Is there any reason to change page_pool_alloc() anyway? It calls
> page_pool_alloc_pages() or page_pool_alloc_frag() as appropriate, both
> of which your patch already converts to wrappers around the appropriate
> _netmem() functions. In all instances where page_pool_alloc() is called
> in this patch, you wrap it with netmem_to_page() anyway, there are no
> calls to page_pool_alloc() added which actually want a netmem_ref.
>

The general gist is that the page_pool API is being converted to use
netmem_ref instead of page. The existing API, which uses struct page, is
kept around transitionally, but meant to be removed and everything moved
to netmem.

APIs that current drivers depend on, like page_pool_dev_alloc(), I've
kept as struct page and added netmem versions when needed.
APIs that had no external users, like page_pool_alloc(), I took the
opportunity to move to netmem immediately, but the ravb driver recently
started depending on it. I thought page_pool_alloc() was an internal
page_pool function, not meant to be called from drivers, but the
documentation actually mentions it. Seems like I need to keep it as a
page* function transitionally as well. I'll look into making the change
you suggested; at the moment there is no caller that actually needs
page_pool_alloc() to return a netmem_ref.

--
Thanks,
Mina