From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Magnus Karlsson, Michal Kubiak, Larysa Zaremba, Jesper Dangaard Brouer, Ilias Apalodimas, Christoph Hellwig, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers
Date: Tue, 16 May 2023 18:18:37 +0200
Message-Id: <20230516161841.37138-8-aleksander.lobakin@intel.com>
In-Reply-To: <20230516161841.37138-1-aleksander.lobakin@intel.com>
References: <20230516161841.37138-1-aleksander.lobakin@intel.com>

Each driver is responsible for syncing buffers written by HW for CPU
before accessing them. Almost every PP-enabled driver uses the same
pattern, which could be shorthanded into a static inline to make driver
code a bit more compact.

Introduce a couple of such functions. The first one takes the actual
size of the data written by HW and is the main one to be used on Rx.
The second does the same, but only if the PP performs DMA
synchronizations at all. The last one picks max_len from the PP params
and is designed for the more extreme cases when the size is unknown,
but the buffer still needs to be synced.

Also constify the pointer arguments of page_pool_get_dma_dir() and
page_pool_get_dma_addr() to give a bit more room for optimization,
as both of them are read-only.

Signed-off-by: Alexander Lobakin
---
 include/net/page_pool.h | 59 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 55 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 8435013de06e..f740c50b661f 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -32,7 +32,7 @@
 
 #include <linux/mm.h> /* Needed by ptr_ring */
 #include <linux/ptr_ring.h>
-#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
 
 #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
					* map/unmap
@@ -237,8 +237,8 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
 /* get the stored dma direction. A driver might decide to treat this locally and
  * avoid the extra cache line from page_pool to determine the direction
  */
-static
-inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+static inline enum dma_data_direction
+page_pool_get_dma_dir(const struct page_pool *pool)
 {
 	return pool->p.dma_dir;
 }
@@ -363,7 +363,7 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
 
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
 {
 	dma_addr_t ret = page->dma_addr;
@@ -380,6 +380,57 @@ static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 		page->dma_addr_upper = upper_32_bits(addr);
 }
 
+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+					      const struct page *page,
+					      u32 dma_sync_size)
+{
+	dma_sync_single_range_for_cpu(pool->p.dev,
+				      page_pool_get_dma_addr(page),
+				      pool->p.offset, dma_sync_size,
+				      page_pool_get_dma_dir(pool));
+}
+
+/**
+ * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
+ */
+static inline void
+page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
+				 const struct page *page, u32 dma_sync_size)
+{
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
+}
+
+/**
+ * page_pool_dma_sync_full_for_cpu - sync full Rx page for CPU
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ *
+ * Performs sync for the entire length exposed to hardware. Can be used on
+ * DMA errors or before freeing the page, when it's unknown whether the HW
+ * touched the buffer.
+ */
+static inline void
+page_pool_dma_sync_full_for_cpu(const struct page_pool *pool,
+				const struct page *page)
+{
+	page_pool_dma_sync_for_cpu(pool, page, pool->p.max_len);
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL
-- 
2.40.1