From: Mel Gorman
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig, LKML, Linux-Net, Linux-MM, Linux-NFS, Mel Gorman
Subject: [PATCH 0/5 v3] Introduce a bulk order-0 page allocator with two in-tree users
Date: Thu, 11 Mar 2021 11:49:30 +0000
Message-Id: <20210311114935.11379-1-mgorman@techsingularity.net>

Changelog since v2
o Prep new pages with IRQs enabled
o Minor documentation update

Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2
This series introduces a bulk order-0 page allocator with sunrpc and the
network page pool being the first users. The implementation is not
particularly efficient and the intention is to iron out what the semantics
of the API should be for users. Once the semantics are ironed out, it can
be made more efficient. Despite that, this is a performance-related series
for users that require multiple pages for an operation without multiple
round-trips to the page allocator. (An illustrative sketch of the intended
caller pattern is appended at the end of this mail, after the diffstat.)
Quoting the last patch for the high-speed networking use-case:

    For XDP-redirect workload with 100G mlx5 driver (that uses page_pool)
    redirecting xdp_frame packets into a veth, that does XDP_PASS to
    create an SKB from the xdp_frame, which then cannot return the page
    to the page_pool. In this case, we saw[1] an improvement of 18.8%
    from using the alloc_pages_bulk API (3,677,958 pps -> 4,368,926 pps).

Both users in this series are corner cases (NFS and high-speed networks)
so it is unlikely that most users will see any benefit in the short term.
Potential other users are batch allocations for page cache readahead,
fault around and SLUB allocations when high-order pages are unavailable.
It's unknown how much benefit would be seen by converting multiple page
allocation calls to a single batch or what difference it may make to
headline performance. It's a chicken-and-egg problem given that the
potential benefit cannot be investigated without an implementation to
test against.

Light testing passed. I'm relying on Chuck and Jesper to test the target
users more aggressively, but both report performance improvements with
the initial RFC.

Patch 1 of this series is a cleanup to sunrpc; it could be merged
separately but is included here as a pre-requisite.

Patch 2 is the prototype bulk allocator.

Patch 3 is the sunrpc user. Chuck also has a patch which further caches
pages but it is not included in this series. It's not directly related to
the bulk allocator and, as it caches pages, it might have other concerns
(e.g. does it need a shrinker?).

Patch 4 is a preparation patch only for the network user.

Patch 5 converts the net page pool to the bulk allocator for order-0
pages. There is no obvious impact to the existing paths as only new users
of the API should notice a difference between multiple calls to the
allocator and a single bulk allocation.

 include/linux/gfp.h   |  13 +++++
 mm/page_alloc.c       | 118 +++++++++++++++++++++++++++++++++++++++++-
 net/core/page_pool.c  | 102 ++++++++++++++++++++++--------------
 net/sunrpc/svc_xprt.c |  47 ++++++++++++-----
 4 files changed, 225 insertions(+), 55 deletions(-)

-- 
2.26.2
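
[Editorial sketch, appended for illustration]

For readers unfamiliar with the proposal, the following is a minimal
sketch of the caller pattern the series targets: request a batch of
order-0 pages in one call and fall back to single-page allocation for
any shortfall. The alloc_pages_bulk() signature used here is an
assumption for illustration only (a GFP mask, a requested count and a
list to fill, returning the number of pages actually allocated); the
authoritative prototype is the one added to include/linux/gfp.h by
patch 2, and the semantics are still open for discussion.

/*
 * Illustrative sketch only; the real prototype and semantics are
 * defined by patch 2 of the series. Assumes the bulk call may return
 * fewer pages than requested rather than retrying or reclaiming.
 */
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

static int fill_page_batch(struct list_head *pages, unsigned int want)
{
	unsigned int got;

	/* One round-trip to the allocator for the whole batch. */
	got = alloc_pages_bulk(GFP_KERNEL, want, pages);

	/* Top up any shortfall with ordinary single-page allocations. */
	while (got < want) {
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			return -ENOMEM;	/* caller releases whatever is on @pages */
		list_add_tail(&page->lru, pages);
		got++;
	}

	return 0;
}

The fallback loop reflects one possible semantic being discussed: a bulk
request may be satisfied only partially, leaving the caller to decide
whether to top up page by page, retry, or proceed with fewer pages.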