Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers
From: Yunsheng Lin
To: Ilias Apalodimas
Cc: Matteo Croce, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
 David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas,
 Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan,
 Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
 John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton,
 Peter Zijlstra (Intel), Vlastimil Babka, Yu Zhao, Will Deacon,
 Michel Lespinasse, Fenghua Yu, Roman Gushchin, Hugh Dickins, Peter Xu,
 Jason Gunthorpe, Guoqing Jiang, Jonathan Lemon, Alexander Lobakin,
 Cong Wang, wenxu, Kevin Hao, Aleksandr Nogikh, Jakub Sitnicki,
 Marco Elver, Willem de Bruijn, Miaohe Lin, Guillaume Nault,
 Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi,
 Saeed Mahameed, Andrew Lunn, Paolo Abeni
Date: Fri, 7 May 2021 11:23:28 +0800
Message-ID: <75a332fa-74e4-7b7b-553e-3a1a6cb85dff@huawei.com>
References: <20210409223801.104657-1-mcroce@linux.microsoft.com>
 <9bf7c5b3-c3cf-e669-051f-247aa8df5c5a@huawei.com>
 <33b02220-cc50-f6b2-c436-f4ec041d6bc4@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/5/6 20:58, Ilias Apalodimas wrote:
>>> Not really, the opposite is happening here. If the pp_recycle bit is
>>> set we will always call page_pool_return_skb_page(). If the page
>>> signature matches the 'magic' set by the page pool, we will always
>>> call xdp_return_skb_frame(), which will end up calling
>>> __page_pool_put_page(). If the refcnt is 1 we'll try to recycle the
>>> page. If it's not, we'll release it from the page_pool (releasing
>>> some internal references we keep), unmap the buffer and decrement
>>> the refcnt.
>>
>> Yes, I understood the above is what the page pool does now.
>>
>> But the question is: who is still holding an extra reference to the
>> page at kfree_skb() time? Perhaps a cloned and pskb_expand_head()'ed
>> skb is holding an extra reference to the same page? So why not just
>> do a page_ref_dec() if the original skb is freed first, and call
>> __page_pool_put_page() when the cloned skb is freed later? That way
>> we can always reuse the recyclable page from a recyclable skb. This
>> may make the page_pool_destroy() process take longer than before, but
>> I suppose the page_pool_destroy() delay for the cloned skb case does
>> not really matter here.
>>
>> If the above works, I think similar handling can be added for RX
>> zerocopy, if RX zerocopy also holds extra references to the
>> recyclable page from a recyclable skb?
>>
>
> Right, this sounds doable, but I'll have to go back, code it and see
> if it really makes sense. However, I'd still prefer the support to go
> in as-is (including the struct xdp_mem_info in struct page, instead of
> a page_pool pointer).
>
> There are a couple of reasons for that. If we keep struct xdp_mem_info
> we can in the future recycle different kinds of buffers using
> __xdp_return(). And this is a non-intrusive change if we choose to
> store the page pool address directly in the future. It just affects
> the internal contract between the page_pool code and struct page, so
> it won't affect any drivers that already use the feature.
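
To make sure we are reading the patches the same way, below is the
simplified stand-in layout I have in mind. The field and helper names
here are made up for illustration; they are not the exact ones in the
patchset:

/* Stand-in model, not real kernel code: a signature marks pages that
 * came from a page pool, and the struct xdp_mem_info needed to return
 * the buffer sits behind a page_private()-style slot. */
struct fake_xdp_mem_info {
        unsigned int type;      /* e.g. a MEM_TYPE_PAGE_POOL-like value */
        unsigned int id;
};

struct fake_page {
        unsigned long signature;        /* the new 'magic' field */
        unsigned long private;          /* page_private()-style storage */
};

#define FAKE_PP_SIGNATURE 0x40UL        /* made-up marker value */

/* The signature tells kfree_skb() the page is recyclable ... */
static int fake_page_is_pp(const struct fake_page *page)
{
        return page->signature == FAKE_PP_SIGNATURE;
}

/* ... and the private slot tells __xdp_return() how to return it. */
static struct fake_xdp_mem_info *fake_page_mem_info(struct fake_page *page)
{
        return (struct fake_xdp_mem_info *)page->private;
}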
This patchset has embedded a signature field in "struct page", and
xdp_mem_info is stored in page_private(), which seems not to consider
the case of associating the page pool with "struct page" directly yet?
Would the page pool also be stored in page_private(), with a different
signature used to indicate that? I am not saying we have to do it in
this patchset, but we have to consider it while we are adding a new
signature field to "struct page", right?

> Regarding the page_ref_dec(), which as I said sounds doable, I'd
> prefer playing it safe for now and getting rid of the buffers that
> somehow ended up holding an extra reference. Once this gets approved
> we can go back and try to save the extra space. I hope I am not wrong,
> but the changes required to support a few extra refcounts should not
> change the current patches much.
>
> Thanks for taking the time on this!

Thanks to all involved in the effort of improving the page pool too :)

> /Ilias
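
PS: to make the page_ref_dec() idea above a bit more concrete, the free
path I had in mind is roughly the sketch below. It is untested and only
meant to illustrate the ordering; __page_pool_put_page() here stands
for whatever the real recycle entry point ends up being, and the helper
name is made up:

/* Sketch: every recyclable skb referencing the page drops one
 * reference on free; only the last holder hands the page back to the
 * pool. Clones, pskb_expand_head() copies and RX zerocopy would all
 * go through the same path. */
static void recyclable_skb_free_page(struct page_pool *pool,
                                     struct page *page)
{
        if (!page_ref_dec_and_test(page))
                return; /* a clone etc. still holds the page */

        /* We dropped the last reference. Bring the refcount back to 1,
         * which is what the pool expects of an idle page, and recycle
         * it for reuse instead of freeing it. */
        page_ref_inc(page);
        __page_pool_put_page(pool, page, -1, true);
}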