Date: Thu, 8 Jul 2021 17:21:53 +0300
From: Ilias Apalodimas
To: Alexander Duyck
Cc: Yunsheng Lin, David Miller, Jakub Kicinski, linuxarm@openeuler.org,
    yisen.zhuang@huawei.com, Salil Mehta, thomas.petazzoni@bootlin.com,
    Marcin Wojtas, Russell King - ARM Linux, hawk@kernel.org,
    Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrew Morton,
    Peter Zijlstra, Will Deacon, Matthew Wilcox, Vlastimil Babka,
    fenghua.yu@intel.com, guro@fb.com, peterx@redhat.com, Feng Tang,
    Jason Gunthorpe, mcroce@microsoft.com, Hugh Dickins, Jonathan Lemon,
    Alexander Lobakin, Willem de Bruijn, wenxu@ucloud.cn,
    cong.wang@bytedance.com, Kevin Hao, nogikh@google.com, Marco Elver,
    Netdev, LKML, bpf
Subject: Re: [PATCH net-next RFC 1/2] page_pool: add page recycling support
 based on elevated refcnt
References: <1625044676-12441-1-git-send-email-linyunsheng@huawei.com>
 <1625044676-12441-2-git-send-email-linyunsheng@huawei.com>
 <29403911-bc26-dd86-83b8-da3c1784d087@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

> > > > [...]
> > > > The above expectation is based on the assumption that the last user
> > > > will always call page_pool_put_full_page() in order to do the
> > > > recycling or the resource cleanup (dma unmapping, etc.).
> > > >
> > > > As skb_free_head() and skb_release_data() both check skb->pp_recycle
> > > > before calling page_pool_put_full_page() when needed, I think we are
> > > > safe for most cases. The one case I am not so sure about is rx
> > > > zero-copy, which also seems to bump up the refcnt before mapping the
> > > > page to user space; we might need to ensure rx zero-copy is not the
> > > > last user of the page, or, if it is the last user, that it calls
> > > > page_pool_put_full_page() too.
> > >
> > > Yes, but the skb->pp_recycle value is per skb, not per page. So my
> > > concern is that carrying around that value can be problematic, as
> > > there are a number of possible cases where the pages might be
> > > unintentionally recycled. All it would take is for a packet to get
> > > cloned a few times and then somebody to start using pskb_expand_head,
> > > and you would have multiple cases, possibly simultaneously, of
> > > entities trying to free the page. I just worry it opens us up to a
> > > number of possible races.
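For readers following the exchange, the release path both messages refer to
looks roughly like the sketch below. It is condensed from the net-next
skbuff code around the time of this thread (zerocopy handling and other
details dropped), so treat it as an illustration of the dataref and
pp_recycle gating rather than verbatim kernel source:

        static bool skb_pp_recycle(struct sk_buff *skb, void *data)
        {
                /* Only skbs explicitly marked by the driver take the
                 * recycling path at all.
                 */
                if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
                        return false;
                return page_pool_return_skb_page(virt_to_head_page(data));
        }

        static void skb_free_head(struct sk_buff *skb)
        {
                unsigned char *head = skb->head;

                if (skb->head_frag) {
                        if (skb_pp_recycle(skb, head))
                                return;
                        skb_free_frag(head);
                } else {
                        kfree(head);
                }
        }

        static void skb_release_data(struct sk_buff *skb)
        {
                struct skb_shared_info *shinfo = skb_shinfo(skb);
                int i;

                /* Clones drop a dataref and bail out here, so only the
                 * last holder of the shared data reaches the frag and
                 * head release below.
                 */
                if (skb->cloned &&
                    atomic_sub_return(skb->nohdr ?
                                      (1 << SKB_DATAREF_SHIFT) + 1 : 1,
                                      &shinfo->dataref))
                        return;

                for (i = 0; i < shinfo->nr_frags; i++)
                        __skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);

                if (shinfo->frag_list)
                        kfree_skb_list(shinfo->frag_list);

                skb_free_head(skb);
        }

The point in the next message relies on the early return in
skb_release_data(): a clone that is not the last dataref holder bails out
before the frag and head release, so it never reaches the recycling path.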
> >
> > Maybe I missed something, but I thought cloned SKBs would never trigger
> > the recycling path, since they are protected by the atomic dataref check
> > in skb_release_data(). What am I missing?
>
> Are you talking about the head frag? Normally a clone wouldn't cause an
> issue, because the head isn't changed. In the case of the head_frag we
> should be safe, since pskb_expand_head will just kmalloc the new head and
> clear head_frag, so it won't trigger page_pool_return_skb_page on the
> head_frag; the dataref just goes from 2 to 1.
>
> The problem is that pskb_expand_head copies the page frags over and takes
> a reference on the pages. At that point you would have two skbs both
> pointing to the same set of pages, each one ready to call
> page_pool_return_skb_page on the pages at any time and possibly racing
> with the other.

Ok, let me make sure I get the idea properly. When pskb_expand_head is
called, the new dataref will be 1, but head_frag will be set to 0, in which
case the recycling code won't be called for that skb. So you are mostly
worried about a race, within the context of pskb_expand_head(), between
copying the frags, releasing the previous head and preparing the new one
(on a cloned skb)?

> I suspect if they both called it at roughly the same time, one of them
> would trigger a NULL pointer dereference, since they would both check
> pp_magic first and then both set pp to NULL. If run on a system where
> dma_unmap_page_attrs takes a while, it would be very likely to race,
> since pp_magic doesn't get cleared until after the page is unmapped.

Thanks!
/Ilias
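For reference, the window Alexander describes maps onto the skb return path
roughly as follows. The sketch below paraphrases his description using the
page_pool names from the series (page_pool_return_skb_page(), pp_magic,
page->pp); it illustrates the check-then-act race rather than reproducing
the exact RFC code:

        bool page_pool_return_skb_page(struct page *page)
        {
                struct page_pool *pp;

                page = compound_head(page);

                /* Two skbs that share the same frag pages after
                 * pskb_expand_head() can both get here with pp_recycle
                 * set, and both can pass this check, because pp_magic is
                 * only cleared after the page has been unmapped further
                 * down the call chain.
                 */
                if (unlikely(page->pp_magic != PP_SIGNATURE))
                        return false;

                /* If the other caller has already executed the line
                 * below, this read returns NULL and the put that follows
                 * dereferences it.
                 */
                pp = page->pp;
                page->pp = NULL;

                /* page_pool_put_full_page() eventually unmaps the page
                 * (dma_unmap_page_attrs()) and only then clears pp_magic,
                 * which is what keeps the window above open; a slow unmap
                 * makes it wider.
                 */
                page_pool_put_full_page(pp, page, false);

                return true;
        }

Whichever caller loses the race hands a NULL pp to
page_pool_put_full_page(), which is the NULL pointer dereference Alexander
anticipates.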