Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers
To: Ilias Apalodimas
Cc: Matteo Croce, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
    David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
    Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
    Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
    John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
    Peter Zijlstra (Intel), Vlastimil Babka, Yu Zhao, Will Deacon,
    Michel Lespinasse, Fenghua Yu, Roman Gushchin, Hugh Dickins, Peter Xu,
    Jason Gunthorpe, Guoqing Jiang, Jonathan Lemon, Alexander Lobakin,
    Cong Wang, wenxu, Kevin Hao, Aleksandr Nogikh, Jakub Sitnicki,
    Marco Elver, Willem de Bruijn, Miaohe Lin, Guillaume Nault,
    Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
    Saeed Mahameed, Andrew Lunn, Paolo Abeni
References: <20210409223801.104657-1-mcroce@linux.microsoft.com>
 <9bf7c5b3-c3cf-e669-051f-247aa8df5c5a@huawei.com>
 <33b02220-cc50-f6b2-c436-f4ec041d6bc4@huawei.com>
 <75a332fa-74e4-7b7b-553e-3a1a6cb85dff@huawei.com>
From: Yunsheng Lin
Date: Fri, 7 May 2021 16:28:30 +0800

On 2021/5/7 15:06, Ilias Apalodimas wrote:
> On Fri, May 07, 2021 at 11:23:28AM +0800, Yunsheng Lin wrote:
>> On 2021/5/6 20:58, Ilias Apalodimas wrote:
>>>>>>
>>>>>
>>>>> Not really, the opposite is happening here. If the pp_recycle bit is set we
>>>>> will always call page_pool_return_skb_page(). If the page signature matches
>>>>> the 'magic' set by page pool we will always call xdp_return_skb_frame(), which
>>>>> will end up calling __page_pool_put_page(). If the refcnt is 1 we'll try
>>>>> to recycle the page. If it's not, we'll release it from page_pool (releasing
>>>>> some internal references we keep), unmap the buffer and decrement the refcnt.
>>>>
>>>> Yes, I understand the above is what the page pool does now.
>>>>
>>>> But the question is who is still holding an extra reference to the page when
>>>> kfree_skb() is called? Perhaps a cloned and pskb_expand_head()'ed skb is holding
>>>> an extra reference to the same page? So why not just do a page_ref_dec() if the
>>>> original skb is freed first, and call __page_pool_put_page() when the cloned skb
>>>> is freed later? That way we can always reuse the recyclable page from a
>>>> recyclable skb. This may make the page_pool_destroy() process take longer than
>>>> before, but I suppose the page_pool_destroy() delay for the cloned skb case does
>>>> not really matter here.
>>>>
>>>> If the above works, I think similar handling can be added to RX zerocopy if
>>>> RX zerocopy also holds extra references to the recyclable page from a
>>>> recyclable skb?
>>>>
>>>
>>> Right, this sounds doable, but I'll have to go back and code it to see if it
>>> really makes sense. However I'd still prefer the support to go in as-is
>>> (including the struct xdp_mem_info in struct page, instead of a page_pool
>>> pointer).
>>>
>>> There are a couple of reasons for that. If we keep the struct xdp_mem_info we
>>> can in the future recycle different kinds of buffers using __xdp_return().
>>> And this is a non-intrusive change if we choose to store the page pool address
>>> directly in the future. It just affects the internal contract between the
>>> page_pool code and struct page, so it won't affect any drivers that already
>>> use the feature.
>>
>> This patchset has embedded a signature field in "struct page", and xdp_mem_info
>> is stored in page_private(), which does not seem to consider the case of
>> associating the page pool with "struct page" directly yet?
>
> Correct
>
>> Is the page pool also stored in page_private() and a different signature
>> used to indicate that?
>
> No, only struct xdp_mem_info, as you mentioned before
>
>> I am not saying we have to do it in this patchset, but we have to consider it
>> while we are adding a new signature field to "struct page", right?
>
> We won't need a new signature. The signature in both cases is there to
> guarantee the page you are trying to recycle was indeed allocated by page_pool.
>
> Basically we've got two design choices here:
> - We store the page_pool ptr address directly in page->private and then
>   we call into the page_pool APIs directly to do the recycling.
>   That would eliminate the lookup through xdp_mem_info and the
>   XDP helpers to locate the page pool pointer (through __xdp_return()).
> - You store the xdp_mem_info in page_private(). In that case you need to go
>   through __xdp_return() to locate the page_pool pointer. Although we might
>   lose some performance, that would allow us to recycle additional memory
>   types and not only MEM_TYPE_PAGE_POOL (in case we ever need it).

So the signature field in "struct page" is only used to indicate that a page
came from a page pool. Then how do we tell what page_private() contains if
both of the above choices are needed? We might still need an extra indicator
to tell whether page_private() holds a page_pool ptr or an xdp_mem_info.
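As a rough sketch of what the free path would have to look like if both
choices were supported at the same time (illustrative only -- the indicator
helper and the wrapper below are made up for the sake of the example;
page_private(), page_pool_put_full_page() and __xdp_return() are the existing
names being discussed above):

/* Sketch: if page_private() may hold either a page_pool pointer or an
 * xdp_mem_info, the free path first needs to know which one it is.
 */
static void recycle_pp_page(struct page *page)
{
	if (page_private_holds_pool_ptr(page)) {	/* hypothetical indicator */
		/* choice 1: call straight into the page_pool API */
		struct page_pool *pp = (struct page_pool *)page_private(page);

		page_pool_put_full_page(pp, page, false);
	} else {
		/* choice 2: page_private() holds xdp_mem_info, so the pool has
		 * to be located through the generic XDP return path, which also
		 * allows recycling mem types other than MEM_TYPE_PAGE_POOL.
		 */
		xdp_return_page_from_mem_info(page);	/* hypothetical wrapper around __xdp_return() */
	}
}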
It seems storing the page pool ptr in page_private() is the clear use case,
for recycling a page from a recyclable skb, while the use case for storing
xdp_mem_info in page_private() is still unclear? XDP already seems to have
the xdp_mem_info in "struct xdp_frame", so it does not need the xdp_mem_info
from page_private().

If the above is true, what does not really make sense to me here is: why do we
first implement the unclear use case of storing xdp_mem_info in page_private(),
instead of implementing the clear use case of storing the page pool ptr in
page_private() first?

>
> I think both choices are sane. What I am trying to explain here is that
> regardless of what we choose now, we can change it in the future without
> affecting the API consumers at all. What will change internally is the way we
> look up the page pool pointer we are trying to recycle.

It seems the below API would need changing then?

+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					 struct xdp_mem_info *mem)
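Purely as an illustration of that point (this is not something proposed in the
thread), with the pool pointer stored directly the helper could presumably
lose the xdp_mem_info argument altogether, e.g.:

+/* illustrative sketch only: store the page_pool pointer instead of
+ * xdp_mem_info (signature/"magic" setup omitted for brevity)
+ */
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					 struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	set_page_private(page, (unsigned long)pp);
+}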
>
> [...]
>
> Cheers
> /Ilias
>