Date: Thu, 13 Apr 2023 19:30:59 +0300
From: Leon Romanovsky
To: Haiyang Zhang
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, Dexuan Cui,
	KY Srinivasan, Paul Rosswurm, olaf@aepfle.de, vkuznets@redhat.com,
	davem@davemloft.net, wei.liu@kernel.org, edumazet@google.com,
	kuba@kernel.org, pabeni@redhat.com, Long Li, ssengar@linux.microsoft.com,
	linux-rdma@vger.kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com,
	bpf@vger.kernel.org, ast@kernel.org, Ajay Sharma, hawk@kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH V3,net-next, 2/4] net: mana: Refactor RX buffer allocation
	code to prepare for various MTU
Message-ID: <20230413163059.GS17993@unreal>
References: <1681334163-31084-1-git-send-email-haiyangz@microsoft.com>
	<1681334163-31084-3-git-send-email-haiyangz@microsoft.com>
	<20230413130428.GO17993@unreal>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Thu, Apr 13, 2023 at 02:03:50PM +0000, Haiyang Zhang wrote:
> 
> > -----Original Message-----
> > From: Leon Romanovsky
> > Sent: Thursday, April 13, 2023 9:04 AM
> > To: Haiyang Zhang
> > Cc: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org; Dexuan Cui;
> > KY Srinivasan; Paul Rosswurm; olaf@aepfle.de; vkuznets@redhat.com;
> > davem@davemloft.net; wei.liu@kernel.org; edumazet@google.com;
> > kuba@kernel.org; pabeni@redhat.com; Long Li; ssengar@linux.microsoft.com;
> > linux-rdma@vger.kernel.org; daniel@iogearbox.net; john.fastabend@gmail.com;
> > bpf@vger.kernel.org; ast@kernel.org; Ajay Sharma; hawk@kernel.org;
> > linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH V3,net-next, 2/4] net: mana: Refactor RX buffer
> > allocation code to prepare for various MTU
> > 
> > On Wed, Apr 12, 2023 at 02:16:01PM -0700, Haiyang Zhang wrote:
> > > Move out common buffer allocation code from mana_process_rx_cqe() and
> > > mana_alloc_rx_wqe() to helper functions.
> > > Refactor related variables so they can be changed in one place, and
> > > buffer sizes are in sync.
> > >
> > > Signed-off-by: Haiyang Zhang
> > > Reviewed-by: Jesse Brandeburg
> > > ---
> > > V3:
> > > Refactored into multiple patches for readability. Suggested by Jacob Keller.
> > >
> > > V2:
> > > Refactored into multiple patches for readability. Suggested by Yunsheng Lin.
> > >
> > > ---
> > >  drivers/net/ethernet/microsoft/mana/mana_en.c | 154 ++++++++++--------
> > >  include/net/mana/mana.h                       |   6 +-
> > >  2 files changed, 91 insertions(+), 69 deletions(-)
> > 
> > <...>
> > 
> > > +static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
> > > +			     dma_addr_t *da, bool is_napi)
> > > +{
> > > +	struct page *page;
> > > +	void *va;
> > > +
> > > +	/* Reuse XDP dropped page if available */
> > > +	if (rxq->xdp_save_va) {
> > > +		va = rxq->xdp_save_va;
> > > +		rxq->xdp_save_va = NULL;
> > > +	} else {
> > > +		page = dev_alloc_page();
> > 
> > Documentation/networking/page_pool.rst
> >  10 Basic use involves replacing alloc_pages() calls with the
> >  11 page_pool_alloc_pages() call. Drivers should use page_pool_dev_alloc_pages()
> >  12 replacing dev_alloc_pages().
> > 
> > General question, is this sentence applicable to all new code or only
> > for XDP related paths?
> 
> Quote from the context before that sentence --
> 
> =============
> Page Pool API
> =============
> The page_pool allocator is optimized for the XDP mode that uses one frame
> per-page, but it can fallback on the regular page allocator APIs.
> Basic use involves replacing alloc_pages() calls with the
> page_pool_alloc_pages() call. Drivers should use page_pool_dev_alloc_pages()
> replacing dev_alloc_pages().
> 
> --unquote
> 
> So the page pool is optimized for XDP, and that sentence applies to drivers
> that have set up a page pool for XDP optimization, "but it can fallback on
> the regular page allocator APIs."
> 
> static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) // needs a pool to be set up first
> 
> Back to our mana driver: we don't have a page pool set up yet (we will
> consider it in the future), so we cannot call page_pool_dev_alloc_pages(pool)
> in this place yet.

ok, thanks

> 
> Thanks,
> - Haiyang
> 
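[Editor's note: the sketch below is not from the thread and is not the mana
driver's actual code. It only illustrates the pattern the quoted page_pool
documentation describes, assuming a pool is created at RX queue setup and
dev_alloc_page() is replaced with page_pool_dev_alloc_pages(). The function
names and the rx_ring_size parameter are hypothetical placeholders.]

#include <net/page_pool.h>

/* Create a page pool once when an RX queue is set up. */
static struct page_pool *rx_create_page_pool(struct device *dev, int rx_ring_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one page per RX frame */
		.flags		= PP_FLAG_DMA_MAP,	/* let the pool handle DMA mapping */
		.pool_size	= rx_ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	/* Returns ERR_PTR() on failure; callers should check with IS_ERR(). */
	return page_pool_create(&pp_params);
}

/* In the RX refill path, instead of dev_alloc_page(): */
static struct page *rx_alloc_page(struct page_pool *pool)
{
	return page_pool_dev_alloc_pages(pool);
}

/* Pages go back to the pool with page_pool_put_full_page(pool, page, false),
 * and the pool is torn down with page_pool_destroy(pool) when the queue is
 * destroyed.
 */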