From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba, Yunsheng Lin,
 Alexander Duyck, Jesper Dangaard Brouer, Ilias Apalodimas,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH RFC net-next 4/4] net: skbuff: always recycle PP pages directly when inside a NAPI loop
Date: Thu, 29 Jun 2023 17:23:05 +0200
Message-ID: <20230629152305.905962-5-aleksander.lobakin@intel.com>
In-Reply-To: <20230629152305.905962-1-aleksander.lobakin@intel.com>
References: <20230629152305.905962-1-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.41.0
X-Mailing-List: linux-kernel@vger.kernel.org

Commit 8c48eea3adf3 ("page_pool: allow caching from safely localized NAPI") allowed direct recycling of skb pages to their PP in some cases, but unfortunately missed a couple of other major ones. For example, %XDP_DROP in skb mode: the netstack just calls kfree_skb(), which unconditionally passes `false` as @napi_safe. Thus, all pages go through the ptr_ring and its locks, although most of the time we're actually inside the NAPI polling this PP is linked with, so it would be perfectly safe to recycle the pages directly. Let's address these cases.
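As a side note (not part of the patch), the resulting allow_direct decision can be modeled in plain userspace C. The function and parameter names below are illustrative stand-ins for the kernel's napi binding, napi->state bit test and smp_processor_id() comparison:

```c
#include <stdbool.h>

/* Illustrative stand-in for the kernel's NAPI_STATE_RUNNING bit index;
 * the real value lives in enum netdev_state_t in the kernel. */
#define NAPI_STATE_RUNNING_BIT 0UL

/*
 * Models the allow_direct condition after this patch: a page may be
 * recycled directly iff a NAPI instance is bound to the page_pool,
 * the caller either guarantees NAPI safety or that NAPI is currently
 * running, and the NAPI is owned by the current CPU.
 */
static bool allow_direct_recycle(bool have_napi, bool napi_safe,
				 unsigned long napi_state,
				 int list_owner, int this_cpu)
{
	return have_napi &&
	       (napi_safe || (napi_state & (1UL << NAPI_STATE_RUNNING_BIT))) &&
	       list_owner == this_cpu;
}
```

So kfree_skb() callers (napi_safe == false) still get the fast path whenever the bound NAPI is running on their CPU.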
If @napi_safe is true, we're fine: don't change anything for that path. But if it's false, test the previously introduced %NAPI_STATE_RUNNING. There's a good probability it will be set and, if ->list_owner is our current CPU, we're good to use direct recycling, even though @napi_safe is false.

For the mentioned xdp-drop-skb-mode case, the improvement I got is 3-4% in Mpps. As for page_pool stats, recycle_ring is now 0 and the alloc_slow counter doesn't change most of the time, which means the MM layer is not even called to allocate any new pages.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 net/core/skbuff.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4b7d00d5b5d7..931c83d7b251 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -893,7 +893,8 @@ bool page_pool_return_skb_page(struct page *page, bool napi_safe)
 	 * no possible race.
 	 */
 	napi = READ_ONCE(pp->p.napi);
-	allow_direct = napi_safe && napi &&
+	allow_direct = napi &&
+		       (napi_safe || test_bit(NAPI_STATE_RUNNING, &napi->state)) &&
 		       READ_ONCE(napi->list_owner) == smp_processor_id();
 
 	/* Driver set this to memory recycling info. Reset it on recycle.
-- 
2.41.0