From: Mel Gorman
To: Chuck Lever, Jesper Dangaard Brouer
Cc: LKML, Linux-Net, Linux-MM, Linux-NFS, Mel Gorman
Subject: [RFC PATCH 0/3] Introduce a bulk order-0 page allocator for sunrpc
Date: Wed, 24 Feb 2021 10:26:00 +0000
Message-Id: <20210224102603.19524-1-mgorman@techsingularity.net>

This is a prototype series that introduces a bulk order-0 page allocator with sunrpc being the first user. The implementation is not particularly efficient and the intention is to iron out what the semantics of the API should be.
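To make the discussion of semantics concrete, below is a rough sketch of the shape such an API could take. It is an illustrative assumption only -- the function name, the list-based return convention and the loop over the normal allocator are placeholders for discussion, not the interface added by patch 2.

/*
 * Illustrative sketch only: the name, arguments and list-based return
 * convention are assumptions, not the interface from patch 2.
 */
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

/*
 * Allocate up to @nr_pages order-0 pages and link them on @page_list.
 * Returns the number of pages actually allocated, which may be fewer
 * than requested.
 */
static unsigned long bulk_alloc_order0(gfp_t gfp, unsigned long nr_pages,
				       struct list_head *page_list)
{
	unsigned long allocated = 0;

	while (allocated < nr_pages) {
		/*
		 * A real implementation would take the relevant locks once
		 * and pull a batch from the per-cpu or buddy lists; this
		 * sketch simply loops over the normal order-0 allocator.
		 */
		struct page *page = alloc_page(gfp);

		if (!page)
			break;

		list_add_tail(&page->lru, page_list);
		allocated++;
	}

	return allocated;
}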
That said, sunrpc was reported to have reduced allocation latency when refilling a pool.

As a side-note, while the implementation could be more efficient, that would require fairly deep surgery in numerous places. The lock scope would need to be significantly reduced, particularly as vmstat, the per-cpu lists and the buddy allocator have different locking protocols that overlap -- e.g. all partially depend on IRQs being disabled at various points. Secondly, the core of the allocator deals with single pages whereas both the bulk allocator and the per-cpu allocator operate in batches. All of that has to be reconciled with all the existing users and their constraints (memory offline, CMA and cpusets being the trickiest).

In terms of semantics required by new users, my preference is that a pair of patches be applied -- the first adding the required semantic to the bulk allocator and the second adding the new user.

Patch 1 of this series is a cleanup to sunrpc; it could be merged separately but is included here for convenience.

Patch 2 is the prototype bulk allocator.

Patch 3 is the sunrpc user. Chuck also has a patch which further caches pages, but it is not included in this series. It is not directly related to the bulk allocator and, as it caches pages, it might have other concerns (e.g. does it need a shrinker?).

This has only been lightly tested on a low-end NFS server. It did not break, but it would benefit from an evaluation to see how much, if at all, the headline performance changes. The biggest concern is that a light test case showed a *lot* of bulk requests for a single page, which get delegated to the normal allocator. The same evaluation criteria should apply to any other users.

 include/linux/gfp.h   |  13 +++++
 mm/page_alloc.c       | 113 +++++++++++++++++++++++++++++++++++++++++-
 net/sunrpc/svc_xprt.c |  47 ++++++++++++------
 3 files changed, 157 insertions(+), 16 deletions(-)

-- 
2.26.2
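As a rough illustration of the pool-refill pattern discussed above, here is a sketch of how a sunrpc-style caller might consume such a bulk allocator, reusing the hypothetical bulk_alloc_order0() from the earlier sketch. svc_refill_pages() and its arguments are made-up names for discussion, not the code in patch 3.

/*
 * Illustrative sketch only: svc_refill_pages() and bulk_alloc_order0()
 * are hypothetical names, not the code in patch 3.
 */
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

/* Fill the empty slots of a per-request page array with one bulk call. */
static int svc_refill_pages(struct page **pages, int nr_pages)
{
	LIST_HEAD(page_list);
	unsigned long needed = 0, got;
	int i;

	/* Count the slots left empty by the previous request. */
	for (i = 0; i < nr_pages; i++)
		if (!pages[i])
			needed++;

	if (!needed)
		return 0;

	/*
	 * Note the concern raised above: if @needed is frequently 1, the
	 * bulk call ends up delegating to the normal allocator anyway.
	 */
	got = bulk_alloc_order0(GFP_KERNEL, needed, &page_list);

	/* Distribute whatever was allocated into the empty slots. */
	for (i = 0; i < nr_pages && !list_empty(&page_list); i++) {
		if (!pages[i]) {
			pages[i] = list_first_entry(&page_list,
						    struct page, lru);
			list_del(&pages[i]->lru);
		}
	}

	return got == needed ? 0 : -ENOMEM;
}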