Date: Thu, 8 Jul 2021 18:36:32 +0300
From: Ilias Apalodimas
To: Alexander Duyck
Cc: Yunsheng Lin, David Miller, Jakub Kicinski, linuxarm@openeuler.org,
    yisen.zhuang@huawei.com, Salil Mehta, thomas.petazzoni@bootlin.com,
    Marcin Wojtas, Russell King - ARM Linux, hawk@kernel.org,
    Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrew Morton,
    Peter Zijlstra, Will Deacon, Matthew Wilcox, Vlastimil Babka,
    fenghua.yu@intel.com, guro@fb.com, peterx@redhat.com, Feng Tang,
    Jason Gunthorpe, mcroce@microsoft.com, Hugh Dickins, Jonathan Lemon,
    Alexander Lobakin, Willem de Bruijn, wenxu@ucloud.cn,
    cong.wang@bytedance.com, Kevin Hao, nogikh@google.com, Marco Elver,
    Netdev, LKML, bpf
Subject: Re: [PATCH net-next RFC 1/2] page_pool: add page recycling support
    based on elevated refcnt
References: <29403911-bc26-dd86-83b8-da3c1784d087@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jul 08, 2021 at 08:29:56AM -0700, Alexander Duyck wrote:
> On Thu, Jul 8, 2021 at 8:17 AM Ilias Apalodimas wrote:
> >
> > [...]
> >
> > > > > > > > > The above expectation is based on that the last user will always
> > > > > > > > > call page_pool_put_full_page() in order to do the recycling or do
> > > > > > > > > the resource cleanup (dma unmapping..etc).
> > > > > > > > >
> > > > > > > > > As the skb_free_head() and skb_release_data() have both checked the
> > > > > > > > > skb->pp_recycle to call the page_pool_put_full_page() if needed, I
> > > > > > > > > think we are safe for most cases; the one case I am not so sure about
> > > > > > > > > is the rx zero copy, which seems to also bump up the refcnt before
> > > > > > > > > mapping the page to user space, we might need to ensure rx zero copy
> > > > > > > > > is not the last user of the page or if it is the last user, make sure
> > > > > > > > > it calls page_pool_put_full_page() too.
> > > > > > > >
> > > > > > > > Yes, but the skb->pp_recycle value is per skb, not per page.
> > > > > > > > So my concern is that carrying around that value can be problematic
> > > > > > > > as there are a number of possible cases where the pages might be
> > > > > > > > unintentionally recycled. All it would take is for a packet to get
> > > > > > > > cloned a few times and then somebody starts using pskb_expand_head and
> > > > > > > > you would have multiple cases, possibly simultaneously, of entities
> > > > > > > > trying to free the page. I just worry it opens us up to a number of
> > > > > > > > possible races.
> > > > > > >
> > > > > > > Maybe I missed something, but I thought the cloned SKBs would never
> > > > > > > trigger the recycling path, since they are protected by the atomic
> > > > > > > dataref check in skb_release_data(). What am I missing?
> > > > > >
> > > > > > Are you talking about the head frag? So normally a clone wouldn't
> > > > > > cause an issue because the head isn't changed. In the case of the
> > > > > > head_frag we should be safe since pskb_expand_head will just kmalloc
> > > > > > the new head and clears head_frag so it won't trigger
> > > > > > page_pool_return_skb_page on the head_frag since the dataref just goes
> > > > > > from 2 to 1.
> > > > > >
> > > > > > The problem is that pskb_expand_head memcopies the page frags over and
> > > > > > takes a reference on the pages. At that point you would have two skbs
> > > > > > both pointing to the same set of pages and each one ready to call
> > > > > > page_pool_return_skb_page on the pages at any time and possibly racing
> > > > > > with the other.
> > > > >
> > > > > Ok, let me make sure I get the idea properly.
> > > > > When pskb_expand_head is called, the new dataref will be 1, but the
> > > > > head_frag will be set to 0, in which case the recycling code won't be
> > > > > called for that skb.
> > > > > So you are mostly worried about a race within the context of
> > > > > pskb_expand_head() between copying the frags, releasing the previous
> > > > > head and preparing the new one (on a cloned skb)?
> > > >
> > > > The race is between freeing the two skbs. So the original and the
> > > > clone w/ the expanded head will have separate instances of the page. I
> > > > am pretty certain there is a race if the two of them start trying to
> > > > free the page frags at the same time.
> > >
> > > Right, I completely forgot calling __skb_frag_unref() before releasing
> > > the head ...
> > > You are right, this will be a race. Let me go back to the original mail
> > > thread and see what we can do.
> >
> > What do you think about resetting the pp_recycle bit on pskb_expand_head()?
>
> I assume you mean specifically in the cloned case?

Yes. Even if we do it unconditionally we'll just lose non-cloned buffers
from the recycling. I'll send a patch later today.

> > If my memory serves me right Eric wanted that from the beginning. Then the
> > cloned/expanded SKB won't trigger the recycling. If that skb hits the free
> > path first, we'll end up recycling the fragments eventually. If the
> > original one goes first, we'll just unmap the page(s) and freeing the
> > cloned one will free all the remaining buffers.
>
> I *think* that should be fine. Effectively what we are doing is making
> it so that if the original skb is freed first the pages are released,
> and if it is released after the clone/expanded skb then it can be
> recycled.

Exactly.

> The issue is we have to maintain it so that there will be exactly one
> caller of the recycling function for the pages.
> So any spot where we are updating skb->head we will have to see if
> there is a clone and if so we have to clear the pp_recycle flag on our
> skb so that it doesn't try to recycle the page frags as well.

Correct. I'll keep looking around in case there's something less fragile
we can do

Thanks
/Ilias
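
As a rough illustration of the idea discussed above -- clearing the
recycling hint when a cloned skb's head gets expanded, so that only one
skb ever hands the page frags back to the page_pool -- a minimal sketch
might look like the snippet below. The helper name and its exact
placement are assumptions for illustration only; this is not the patch
Ilias refers to sending.

/* Illustrative sketch only, not the actual upstream change discussed in
 * this thread. It captures the idea of dropping the pp_recycle hint when
 * pskb_expand_head() copies the frags of a cloned skb, so that exactly
 * one skb remains responsible for recycling the page_pool pages.
 */
#include <linux/skbuff.h>

/* Hypothetical helper, assumed to be called from pskb_expand_head()
 * after the frags have been copied and extra page references taken.
 */
static inline void skb_clear_pp_recycle_if_cloned(struct sk_buff *skb)
{
	if (skb_cloned(skb)) {
		/* The clone and the expanded skb now hold independent page
		 * references to the same frags; the expanded skb should free
		 * its references with plain put_page() instead of recycling,
		 * otherwise both skbs could race to return the same page to
		 * the pool.
		 */
		skb->pp_recycle = 0;
	}
}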