From: Ilias Apalodimas
Date: Fri, 30 Apr 2021 20:32:07 +0300
Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers
To: Yunsheng Lin
Cc: Matteo Croce, Networking, Linux-MM, Ayush Sawal, Vinay Kumar Yadav,
    Rohit Maheshwari,
Miller" , Jakub Kicinski , Thomas Petazzoni , Marcin Wojtas , Russell King , Mirko Lindner , Stephen Hemminger , Tariq Toukan , Jesper Dangaard Brouer , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Boris Pismenny , Arnd Bergmann , Andrew Morton , "Peter Zijlstra (Intel)" , Vlastimil Babka , Yu Zhao , Will Deacon , Fenghua Yu , Roman Gushchin , Hugh Dickins , Peter Xu , Jason Gunthorpe , Jonathan Lemon , Alexander Lobakin , Cong Wang , wenxu , Kevin Hao , Jakub Sitnicki , Marco Elver , Willem de Bruijn , Miaohe Lin , Guillaume Nault , open list , linux-rdma@vger.kernel.org, bpf , Matthew Wilcox , Eric Dumazet , David Ahern , Lorenzo Bianconi , Saeed Mahameed , Andrew Lunn , Paolo Abeni Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org (-cc invalid emails) Replying to my self here but.... [...] > > > > > > We can't do that. The reason we need those structs is that we rely on the > > > existing XDP code, which already recycles it's buffers, to enable > > > recycling. Since we allocate a page per packet when using page_pool for a > > > driver , the same ideas apply to an SKB and XDP frame. We just recycle the > > > > I am not really familar with XDP here, but a packet from hw is either a > > "struct xdp_frame/xdp_buff" for XDP or a "struct sk_buff" for TCP/IP stack, > > a packet can not be both "struct xdp_frame/xdp_buff" and "struct sk_buff" at > > the same time, right? > > > > Yes, but the payload is irrelevant in both cases and that's what we use > page_pool for. You can't use this patchset unless your driver usues > build_skb(). So in both cases you just allocate memory for the payload and > decide what the wrap the buffer with (XDP or SKB) later. > > > What does not really make sense to me is that the page has to be from page > > pool when a skb's frag page can be recycled, right? If it is ture, the switch > > case in __xdp_return() does not really make sense for skb recycling, why go > > all the trouble of checking the mem->type and mem->id to find the page_pool > > pointer when recyclable page for skb can only be from page pool? > > In any case you need to find in which pool the buffer you try to recycle > belongs. In order to make the whole idea generic and be able to recycle skb > fragments instead of just the skb head you need to store some information on > struct page. That's the fundamental difference of this patchset compared to > the RFC we sent a few years back [1] which was just storing information on the > skb. The way this is done on the current patchset is that we store the > struct xdp_mem_info in page->private and then look it up on xdp_return(). > > Now that being said Matthew recently reworked struct page, so we could see if > we can store the page pool pointer directly instead of the struct > xdp_mem_info. That would allow us to call into page pool functions directly. > But we'll have to agree if that makes sense to go into struct page to begin > with and make sure the pointer is still valid when we take the recycling path. > Thinking more about it the reason that prevented us from storing a page pool pointer directly is not there anymore. Jesper fixed that already a while back. So we might as well store the page_pool ptr in page->private and call into the functions directly. I'll have a look before v4. [...] Thanks /Ilias