Date: Sun, 5 Nov 2023 18:44:01 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
Mime-Version: 1.0
References: <20231106024413.2801438-1-almasrymina@google.com>
X-Mailer: git-send-email 2.42.0.869.gea05f2083d-goog
Message-ID: <20231106024413.2801438-3-almasrymina@google.com>
Subject: [RFC PATCH v3 02/12] net: page_pool: create hooks for custom page providers
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas,
	Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan,
	Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst,
	Praveen Kaligineedi
Content-Type: text/plain; charset="UTF-8"

From: Jakub Kicinski

The page providers which try to reuse the same pages will need to hold
onto the ref, even if the page gets released from the pool - as in,
releasing the page from the pp just transfers the "ownership" reference
from pp to the provider, and the provider will wait for other references
to be gone before feeding this page back into the pool.

Signed-off-by: Jakub Kicinski
Signed-off-by: Mina Almasry

---

This is implemented by Jakub in his RFC:
https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.com/T/

I take no credit for the idea or implementation; I only added minor
edits to make this workable with device memory TCP, and removed some
hacky test code. This is a critical dependency of device memory TCP
and thus I'm pulling it into this series to make it reviewable and
mergeable.
---
 include/net/page_pool/types.h | 18 +++++++++++++
 net/core/page_pool.c          | 51 +++++++++++++++++++++++++++++++----
 2 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 6fc5134095ed..d4bea053bb7e 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -60,6 +60,8 @@ struct page_pool_params {
 	int nid;
 	struct device *dev;
 	struct napi_struct *napi;
+	u8 memory_provider;
+	void *mp_priv;
 	enum dma_data_direction dma_dir;
 	unsigned int max_len;
 	unsigned int offset;
@@ -118,6 +120,19 @@ struct page_pool_stats {
 };
 #endif

+struct mem_provider;
+
+enum pp_memory_provider_type {
+	__PP_MP_NONE, /* Use system allocator directly */
+};
+
+struct pp_memory_provider_ops {
+	int (*init)(struct page_pool *pool);
+	void (*destroy)(struct page_pool *pool);
+	struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
+	bool (*release_page)(struct page_pool *pool, struct page *page);
+};
+
 struct page_pool {
 	struct page_pool_params p;

@@ -165,6 +180,9 @@ struct page_pool {
 	 */
 	struct ptr_ring ring;

+	const struct pp_memory_provider_ops *mp_ops;
+	void *mp_priv;
+
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
 	struct page_pool_recycle_stats __percpu *recycle_stats;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 578b6f2eeb46..7ea1f4682479 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -23,6 +23,8 @@

 #include <trace/events/page_pool.h>

+static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+
 #define DEFER_TIME (msecs_to_jiffies(1000))
 #define DEFER_WARN_INTERVAL (60 * HZ)

@@ -172,6 +174,7 @@ static int page_pool_init(struct page_pool *pool,
 			  const struct page_pool_params *params)
 {
 	unsigned int ring_qsize = 1024; /* Default */
+	int err;

 	memcpy(&pool->p, params, sizeof(pool->p));

@@ -225,10 +228,34 @@ static int page_pool_init(struct page_pool *pool,
 	/* Driver calling page_pool_create() also call page_pool_destroy() */
 	refcount_set(&pool->user_cnt, 1);

+	switch (pool->p.memory_provider) {
+	case __PP_MP_NONE:
+		break;
+	default:
+		err = -EINVAL;
+		goto free_ptr_ring;
+	}
+
+	pool->mp_priv = pool->p.mp_priv;
+	if (pool->mp_ops) {
+		err = pool->mp_ops->init(pool);
+		if (err) {
+			pr_warn("%s() mem-provider init failed %d\n",
+				__func__, err);
+			goto free_ptr_ring;
+		}
+
+		static_branch_inc(&page_pool_mem_providers);
+	}
+
 	if (pool->p.flags & PP_FLAG_DMA_MAP)
 		get_device(pool->p.dev);

 	return 0;
+
+free_ptr_ring:
+	ptr_ring_cleanup(&pool->ring, NULL);
+	return err;
 }

 /**
@@ -490,7 +517,10 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
 		return page;

 	/* Slow-path: cache empty, do real allocation */
-	page = __page_pool_alloc_pages_slow(pool, gfp);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		page = pool->mp_ops->alloc_pages(pool, gfp);
+	else
+		page = __page_pool_alloc_pages_slow(pool, gfp);
 	return page;
 }
 EXPORT_SYMBOL(page_pool_alloc_pages);
@@ -542,10 +572,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
 void page_pool_return_page(struct page_pool *pool, struct page *page)
 {
 	int count;
+	bool put;

-	__page_pool_release_page_dma(pool, page);
-
-	page_pool_clear_pp_info(page);
+	put = true;
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		put = pool->mp_ops->release_page(pool, page);
+	else
+		__page_pool_release_page_dma(pool, page);

 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
@@ -553,7 +586,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
 	 */
 	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
 	trace_page_pool_state_release(pool, page, count);

-	put_page(page);
+	if (put) {
+		page_pool_clear_pp_info(page);
+		put_page(page);
+	}
 	/* An optimization would be to call __free_pages(page, pool->p.order)
 	 * knowing page is not part of page-cache (thus avoiding a
 	 * __page_cache_release() call).
@@ -821,6 +857,11 @@ static void __page_pool_destroy(struct page_pool *pool)
 	if (pool->disconnect)
 		pool->disconnect(pool);

+	if (pool->mp_ops) {
+		pool->mp_ops->destroy(pool);
+		static_branch_dec(&page_pool_mem_providers);
+	}
+
 	ptr_ring_cleanup(&pool->ring, NULL);

 	if (pool->p.flags & PP_FLAG_DMA_MAP)
--
2.42.0.869.gea05f2083d-goog