From: Alexander Lobakin
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexander Duyck , Yunsheng Lin , David Christensen , Jesper Dangaard Brouer , Ilias Apalodimas , Paul Menzel , netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v6 02/12] page_pool: don't use driver-set flags field directly Date: Thu, 7 Dec 2023 18:20:00 +0100 Message-ID: <20231207172010.1441468-3-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231207172010.1441468-1-aleksander.lobakin@intel.com> References: <20231207172010.1441468-1-aleksander.lobakin@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-0.9 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on fry.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (fry.vger.email [0.0.0.0]); Thu, 07 Dec 2023 09:22:44 -0800 (PST) page_pool::p is driver-defined params, copied directly from the structure passed to page_pool_create(). The structure isn't meant to be modified by the Page Pool core code and this even might look confusing[0][1]. In order to be able to alter some flags, let's define our own, internal fields the same way as the already existing one (::has_init_callback). They are defined as bits in the driver-set params, leave them so here as well, to not waste byte-per-bit or so. Almost 30 bits are still free for future extensions. We could've defined only new flags here or only the ones we may need to alter, but checking some flags in one place while others in another doesn't sound convenient or intuitive. ::flags passed by the driver can now go to the "slow" PP params. 
Suggested-by: Jakub Kicinski
Link[0]: https://lore.kernel.org/netdev/20230703133207.4f0c54ce@kernel.org
Suggested-by: Alexander Duyck
Link[1]: https://lore.kernel.org/netdev/CAKgT0UfZCGnWgOH96E4GV3ZP6LLbROHM7SHE8NKwq+exX+Gk_Q@mail.gmail.com
Signed-off-by: Alexander Lobakin
---
 include/net/page_pool/types.h |  8 +++++---
 net/core/page_pool.c          | 34 ++++++++++++++++++----------------
 2 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 35ab82da7f2a..51f6cdd60757 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -44,7 +44,6 @@ struct pp_alloc_cache {
 
 /**
  * struct page_pool_params - page pool parameters
- * @flags:	PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
  * @order:	2^order pages on allocation
  * @pool_size:	size of the ptr_ring
  * @nid:	NUMA node id to allocate from pages from
@@ -54,10 +53,10 @@ struct pp_alloc_cache {
  * @dma_dir:	DMA mapping direction
  * @max_len:	max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
  * @offset:	DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
+ * @flags:	PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
  */
 struct page_pool_params {
 	struct_group_tagged(page_pool_params_fast, fast,
-		unsigned int	flags;
 		unsigned int	order;
 		unsigned int	pool_size;
 		int		nid;
@@ -68,6 +67,7 @@ struct page_pool_params {
 		unsigned int	offset;
 	);
 	struct_group_tagged(page_pool_params_slow, slow,
+		unsigned int	flags;
 		struct net_device *netdev;
 /* private: used by test code only */
 		void (*init_callback)(struct page *page, void *arg);
@@ -128,7 +128,9 @@ struct page_pool_stats {
 struct page_pool {
 	struct page_pool_params_fast p;
 
-	bool has_init_callback;
+	bool dma_map:1;			/* Perform DMA mapping */
+	bool dma_sync:1;		/* Perform DMA sync */
+	bool has_init_callback:1;	/* slow.init_callback is set */
 
 	/* The following block must stay within one cacheline. On 32-bit
 	 * systems, sizeof(long) == sizeof(int), so that the block size is
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c2e7c9a6efbe..59aca3339222 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -179,7 +179,7 @@ static int page_pool_init(struct page_pool *pool,
 	memcpy(&pool->slow, &params->slow, sizeof(pool->slow));
 
 	/* Validate only known flags were used */
-	if (pool->p.flags & ~(PP_FLAG_ALL))
+	if (pool->slow.flags & ~(PP_FLAG_ALL))
 		return -EINVAL;
 
 	if (pool->p.pool_size)
@@ -193,22 +193,26 @@ static int page_pool_init(struct page_pool *pool,
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
 	 * which is the XDP_TX use-case.
 	 */
-	if (pool->p.flags & PP_FLAG_DMA_MAP) {
+	if (pool->slow.flags & PP_FLAG_DMA_MAP) {
 		if ((pool->p.dma_dir != DMA_FROM_DEVICE) &&
 		    (pool->p.dma_dir != DMA_BIDIRECTIONAL))
 			return -EINVAL;
+
+		pool->dma_map = true;
 	}
 
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) {
+	if (pool->slow.flags & PP_FLAG_DMA_SYNC_DEV) {
 		/* In order to request DMA-sync-for-device the page
 		 * needs to be mapped
 		 */
-		if (!(pool->p.flags & PP_FLAG_DMA_MAP))
+		if (!(pool->slow.flags & PP_FLAG_DMA_MAP))
 			return -EINVAL;
 
 		if (!pool->p.max_len)
 			return -EINVAL;
 
+		pool->dma_sync = true;
+
 		/* pool->p.offset has to be set according to the address
 		 * offset used by the DMA engine to start copying rx data
 		 */
@@ -234,7 +238,7 @@ static int page_pool_init(struct page_pool *pool,
 	/* Driver calling page_pool_create() also call page_pool_destroy() */
 	refcount_set(&pool->user_cnt, 1);
 
-	if (pool->p.flags & PP_FLAG_DMA_MAP)
+	if (pool->dma_map)
 		get_device(pool->p.dev);
 
 	return 0;
@@ -244,7 +248,7 @@ static void page_pool_uninit(struct page_pool *pool)
 {
 	ptr_ring_cleanup(&pool->ring, NULL);
 
-	if (pool->p.flags & PP_FLAG_DMA_MAP)
+	if (pool->dma_map)
 		put_device(pool->p.dev);
 
 #ifdef CONFIG_PAGE_POOL_STATS
@@ -387,7 +391,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 	if (page_pool_set_dma_addr(page, dma))
 		goto unmap_failed;
 
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+	if (pool->dma_sync)
 		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
 
 	return true;
@@ -433,8 +437,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 	if (unlikely(!page))
 		return NULL;
 
-	if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
-	    unlikely(!page_pool_dma_map(pool, page))) {
+	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page))) {
 		put_page(page);
 		return NULL;
 	}
@@ -454,8 +457,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 						 gfp_t gfp)
 {
 	const int bulk = PP_ALLOC_CACHE_REFILL;
-	unsigned int pp_flags = pool->p.flags;
 	unsigned int pp_order = pool->p.order;
+	bool dma_map = pool->dma_map;
 	struct page *page;
 	int i, nr_pages;
 
@@ -480,8 +483,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	 */
 	for (i = 0; i < nr_pages; i++) {
 		page = pool->alloc.cache[i];
-		if ((pp_flags & PP_FLAG_DMA_MAP) &&
-		    unlikely(!page_pool_dma_map(pool, page))) {
+		if (dma_map && unlikely(!page_pool_dma_map(pool, page))) {
 			put_page(page);
 			continue;
 		}
@@ -558,7 +560,7 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
 	dma_addr_t dma;
 	int count;
 
-	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
+	if (!pool->dma_map)
 		/* Always account for inflight pages, even if we didn't
 		 * map them
 		 */
@@ -624,7 +626,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
 }
 
 /* If the page refcnt == 1, this will try to recycle the page.
- * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for
+ * If pool->dma_sync is set, we'll try to sync the DMA area for
 * the configured size min(dma_sync_size, pool->max_len).
 * If the page refcnt != 1, then the page will be returned to memory
 * subsystem.
@@ -647,7 +649,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
 		/* Read barrier done in page_ref_count / READ_ONCE */
 
-		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		if (pool->dma_sync)
 			page_pool_dma_sync_for_device(pool, page,
 						      dma_sync_size);
 
@@ -760,7 +762,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 		return NULL;
 
 	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
-		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		if (pool->dma_sync)
 			page_pool_dma_sync_for_device(pool, page, -1);
 
 		return page;
-- 
2.43.0
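Note for context, not part of the patch: the driver-facing API is
unchanged by this change -- drivers still request DMA mapping/syncing via
page_pool_params::flags and page_pool_create(). A minimal sketch of that
call, with illustrative field values and a hypothetical `pdev` device:

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= 256,		/* illustrative */
		.nid		= NUMA_NO_NODE,
		.dev		= &pdev->dev,	/* hypothetical device */
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};
	struct page_pool *pool;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

With this patch, page_pool_init() copies those flag bits into
pool->dma_map / pool->dma_sync once, and the driver-set structure is
never written to afterwards.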