2021-03-25 11:44:32

by Mel Gorman

Subject: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users

This series is based on top of Matthew Wilcox's series "Rationalise
__alloc_pages wrapper" and does not apply to 5.12-rc4. If Andrew's tree
is not the testing baseline then the following git tree will work.

git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v6r7

Changelog since v5
o Add micro-optimisations from Jesper
o Add array-based versions of the sunrpc and page_pool users
o Allocate 1 page if local zone watermarks are not met
o Fix statistics
o Prep pages via prep_new_page() as they are allocated. Batching the
prep work with IRQs enabled limited how the API could be used (e.g. the
list had to be empty) and added too much complexity.

Changelog since v4
o Drop users of the API
o Remove free_pages_bulk interface, no users
o Add array interface
o Allocate single page if watermark checks on local zones fail

Changelog since v3
o Rebase on top of Matthew's series consolidating the alloc_pages API
o Rename alloced to allocated
o Split out preparation patch for prepare_alloc_pages
o Defensive check for bulk allocation of <= 0 pages
o Call single page allocation path only if no pages were allocated
o Minor cosmetic cleanups
o Reorder patch dependencies by subsystem. As this is a cross-subsystem
series, the mm patches have to be merged before the sunrpc and net
users.

Changelog since v2
o Prep new pages with IRQs enabled
o Minor documentation update

Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2

This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first users. The implementation is not
efficient as semantics needed to be ironed out first. If no other semantic
changes are needed, it can be made more efficient. Despite that, this
is performance-related for users that require multiple pages for an
operation without multiple round-trips to the page allocator. Quoting
the last patch for the high-speed networking use-case

Kernel          XDP stats       CPU          pps          Delta
Baseline        XDP-RX          CPU total    3,771,046    n/a
List            XDP-RX          CPU total    3,940,242    +4.49%
Array           XDP-RX          CPU total    4,249,224    +12.68%

From the SUNRPC traces of svc_alloc_arg()

Single page: 25.007 us per call over 532,571 calls
Bulk list: 6.258 us per call over 517,034 calls
Bulk array: 4.590 us per call over 517,442 calls

Both potential users in this series are corner cases (NFS and high-speed
networks) so it is unlikely that most users will see any benefit in the
short term. Other potential users are batch allocations for page
cache readahead, fault around and SLUB allocations when high-order pages
are unavailable. It's unknown how much benefit would be seen by converting
multiple page allocation calls to a single batch or what difference it may
make to headline performance.
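The array variant's contract (the caller passes an array in which only NULL
slots are refilled, and retries until the whole array is populated) can be
sketched in user space. This is a toy model only: bulk_alloc_array() and
fill_all() are illustrative stand-ins, with malloc() in place of the page
allocator and a per-call budget simulating memory pressure:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Toy model of the array-based bulk interface: populate only the NULL
 * slots in slots[0..n-1] and return the number of populated entries.
 * A real allocator may stop early under memory pressure, simulated
 * here by a per-call budget.
 */
static size_t bulk_alloc_array(void **slots, size_t n, size_t budget)
{
        size_t filled = 0;

        for (size_t i = 0; i < n; i++) {
                if (!slots[i]) {
                        if (budget == 0)
                                break;          /* partial success */
                        budget--;
                        slots[i] = malloc(64);
                }
                if (slots[i])
                        filled++;
        }
        return filled;
}

/* Caller-side retry loop in the style of svc_alloc_arg() */
static void fill_all(void **slots, size_t n)
{
        while (bulk_alloc_array(slots, n, 2) < n)
                ;       /* kernel code sleeps and checks signals here */
}
```

Because already-populated slots are preserved across calls, a partial
allocation makes forward progress on every retry.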

Light testing of my own running dbench over NFS passed. Chuck and Jesper
conducted their own tests and details are included in the changelogs.

Patch 1 renames a variable whose name is particularly unpopular

Patch 2 adds a bulk page allocator

Patch 3 adds an array-based version of the bulk allocator

Patches 4-5 add micro-optimisations to the implementation

Patches 6-7 add the SUNRPC user

Patches 8-9 add the network page_pool user

 include/linux/gfp.h     |  18 +++++
 include/net/page_pool.h |   2 +-
 mm/page_alloc.c         | 157 ++++++++++++++++++++++++++++++++++++++--
 net/core/page_pool.c    | 111 ++++++++++++++++++----------
 net/sunrpc/svc_xprt.c   |  38 +++++-----
 5 files changed, 263 insertions(+), 63 deletions(-)

--
2.26.2


2021-03-25 11:45:03

by Mel Gorman

Subject: [PATCH 6/9] SUNRPC: Set rq_page_end differently

From: Chuck Lever <[email protected]>

Patch series "SUNRPC consumer for the bulk page allocator"

This patch set and the measurements below are based on yesterday's
bulk allocator series:

git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v5r9

The patches change SUNRPC to invoke the array-based bulk allocator
instead of alloc_page().

The micro-benchmark results are promising. I ran a mixture of 256KB
reads and writes over NFSv3. The server's kernel is built with KASAN
enabled, so the comparison is exaggerated but I believe it is still
valid.

I instrumented svc_recv() to measure the latency of each call to
svc_alloc_arg() and report it via a trace point. The following
results are averages across the trace events.

Single page: 25.007 us per call over 532,571 calls
Bulk list: 6.258 us per call over 517,034 calls
Bulk array: 4.590 us per call over 517,442 calls

This patch (of 2):

Refactor:

I'm about to use the loop variable @i for something else.

As far as the "i++" is concerned, that is a post-increment. The
value of @i is not used subsequently, so the increment operator
is unnecessary and can be removed.

Also note that nfsd_read_actor() was renamed nfsd_splice_actor()
by commit cf8208d0eabd ("sendfile: convert nfsd to
splice_direct_to_actor()").

Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
---
net/sunrpc/svc_xprt.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 3cdd71a8df1e..609bda97d4ae 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -642,7 +642,7 @@ static void svc_check_conn_limits(struct svc_serv *serv)
 static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
         struct svc_serv *serv = rqstp->rq_server;
-        struct xdr_buf *arg;
+        struct xdr_buf *arg = &rqstp->rq_arg;
         int pages;
         int i;
 
@@ -667,11 +667,10 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
                         }
                         rqstp->rq_pages[i] = p;
                 }
-        rqstp->rq_page_end = &rqstp->rq_pages[i];
-        rqstp->rq_pages[i++] = NULL; /* this might be seen in nfs_read_actor */
+        rqstp->rq_page_end = &rqstp->rq_pages[pages];
+        rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
 
         /* Make arg->head point to first page and arg->pages point to rest */
-        arg = &rqstp->rq_arg;
         arg->head[0].iov_base = page_address(rqstp->rq_pages[0]);
         arg->head[0].iov_len = PAGE_SIZE;
         arg->pages = rqstp->rq_pages + 1;
--
2.26.2

2021-03-25 11:45:14

by Mel Gorman

Subject: [PATCH 7/9] SUNRPC: Refresh rq_pages using a bulk page allocator

From: Chuck Lever <[email protected]>

Reduce the rate at which nfsd threads hammer on the page allocator.
This improves throughput scalability by enabling the threads to run
more independently of each other.

[mgorman: Update interpretation of alloc_pages_bulk return value]
Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
---
net/sunrpc/svc_xprt.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 609bda97d4ae..0c27c3291ca1 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -643,30 +643,29 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
         struct svc_serv *serv = rqstp->rq_server;
         struct xdr_buf *arg = &rqstp->rq_arg;
-        int pages;
-        int i;
+        unsigned long pages, filled;
 
-        /* now allocate needed pages. If we get a failure, sleep briefly */
         pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
         if (pages > RPCSVC_MAXPAGES) {
-                pr_warn_once("svc: warning: pages=%u > RPCSVC_MAXPAGES=%lu\n",
+                pr_warn_once("svc: warning: pages=%lu > RPCSVC_MAXPAGES=%lu\n",
                              pages, RPCSVC_MAXPAGES);
                 /* use as many pages as possible */
                 pages = RPCSVC_MAXPAGES;
         }
-        for (i = 0; i < pages ; i++)
-                while (rqstp->rq_pages[i] == NULL) {
-                        struct page *p = alloc_page(GFP_KERNEL);
-                        if (!p) {
-                                set_current_state(TASK_INTERRUPTIBLE);
-                                if (signalled() || kthread_should_stop()) {
-                                        set_current_state(TASK_RUNNING);
-                                        return -EINTR;
-                                }
-                                schedule_timeout(msecs_to_jiffies(500));
-                        }
-                        rqstp->rq_pages[i] = p;
+
+        for (;;) {
+                filled = alloc_pages_bulk_array(GFP_KERNEL, pages,
+                                                rqstp->rq_pages);
+                if (filled == pages)
+                        break;
+
+                set_current_state(TASK_INTERRUPTIBLE);
+                if (signalled() || kthread_should_stop()) {
+                        set_current_state(TASK_RUNNING);
+                        return -EINTR;
                 }
+                schedule_timeout(msecs_to_jiffies(500));
+        }
         rqstp->rq_page_end = &rqstp->rq_pages[pages];
         rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
 
--
2.26.2

2021-03-25 12:53:29

by Matthew Wilcox

Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users

On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users. The implementation is not
> efficient as semantics needed to be ironed out first. If no other semantic
> changes are needed, it can be made more efficient. Despite that, this
> is a performance-related for users that require multiple pages for an
> operation without multiple round-trips to the page allocator. Quoting
> the last patch for the high-speed networking use-case
>
> Kernel XDP stats CPU pps Delta
> Baseline XDP-RX CPU total 3,771,046 n/a
> List XDP-RX CPU total 3,940,242 +4.49%
> Array XDP-RX CPU total 4,249,224 +12.68%
>
> From the SUNRPC traces of svc_alloc_arg()
>
> Single page: 25.007 us per call over 532,571 calls
> Bulk list: 6.258 us per call over 517,034 calls
> Bulk array: 4.590 us per call over 517,442 calls
>
> Both potential users in this series are corner cases (NFS and high-speed
> networks) so it is unlikely that most users will see any benefit in the
> short term. Other potential other users are batch allocations for page
> cache readahead, fault around and SLUB allocations when high-order pages
> are unavailable. It's unknown how much benefit would be seen by converting
> multiple page allocation calls to a single batch or what difference it may
> make to headline performance.

We have a third user, vmalloc(), with a 16% perf improvement. I know the
email says 21% but that includes the 5% improvement from switching to
kvmalloc() to allocate area->pages.

https://lore.kernel.org/linux-mm/[email protected]/

I don't know how many _frequent_ vmalloc users we have that will benefit
from this, but it's probably more than will benefit from improvements
to 200Gbit networking performance.

2021-03-25 13:27:33

by Mel Gorman

Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users

On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > This series introduces a bulk order-0 page allocator with sunrpc and
> > the network page pool being the first users. The implementation is not
> > efficient as semantics needed to be ironed out first. If no other semantic
> > changes are needed, it can be made more efficient. Despite that, this
> > is a performance-related for users that require multiple pages for an
> > operation without multiple round-trips to the page allocator. Quoting
> > the last patch for the high-speed networking use-case
> >
> > Kernel XDP stats CPU pps Delta
> > Baseline XDP-RX CPU total 3,771,046 n/a
> > List XDP-RX CPU total 3,940,242 +4.49%
> > Array XDP-RX CPU total 4,249,224 +12.68%
> >
> > From the SUNRPC traces of svc_alloc_arg()
> >
> > Single page: 25.007 us per call over 532,571 calls
> > Bulk list: 6.258 us per call over 517,034 calls
> > Bulk array: 4.590 us per call over 517,442 calls
> >
> > Both potential users in this series are corner cases (NFS and high-speed
> > networks) so it is unlikely that most users will see any benefit in the
> > short term. Other potential other users are batch allocations for page
> > cache readahead, fault around and SLUB allocations when high-order pages
> > are unavailable. It's unknown how much benefit would be seen by converting
> > multiple page allocation calls to a single batch or what difference it may
> > make to headline performance.
>
> We have a third user, vmalloc(), with a 16% perf improvement. I know the
> email says 21% but that includes the 5% improvement from switching to
> kvmalloc() to allocate area->pages.
>
> https://lore.kernel.org/linux-mm/[email protected]/
>

That's fairly promising. Assuming the bulk allocator gets merged, it would
make sense to add vmalloc on top. Thanks for bringing it to my attention
because it's far more relevant than my imaginary potential use cases.

> I don't know how many _frequent_ vmalloc users we have that will benefit
> from this, but it's probably more than will benefit from improvements
> to 200Gbit networking performance.

I think it was 100Gbit being looked at but your point is still valid and
there is no harm in incrementally improving over time.

--
Mel Gorman
SUSE Labs

2021-03-25 14:08:50

by Uladzislau Rezki

Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users

> On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> > On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > the network page pool being the first users. The implementation is not
> > > efficient as semantics needed to be ironed out first. If no other semantic
> > > changes are needed, it can be made more efficient. Despite that, this
> > > is a performance-related for users that require multiple pages for an
> > > operation without multiple round-trips to the page allocator. Quoting
> > > the last patch for the high-speed networking use-case
> > >
> > > Kernel XDP stats CPU pps Delta
> > > Baseline XDP-RX CPU total 3,771,046 n/a
> > > List XDP-RX CPU total 3,940,242 +4.49%
> > > Array XDP-RX CPU total 4,249,224 +12.68%
> > >
> > > From the SUNRPC traces of svc_alloc_arg()
> > >
> > > Single page: 25.007 us per call over 532,571 calls
> > > Bulk list: 6.258 us per call over 517,034 calls
> > > Bulk array: 4.590 us per call over 517,442 calls
> > >
> > > Both potential users in this series are corner cases (NFS and high-speed
> > > networks) so it is unlikely that most users will see any benefit in the
> > > short term. Other potential other users are batch allocations for page
> > > cache readahead, fault around and SLUB allocations when high-order pages
> > > are unavailable. It's unknown how much benefit would be seen by converting
> > > multiple page allocation calls to a single batch or what difference it may
> > > make to headline performance.
> >
> > We have a third user, vmalloc(), with a 16% perf improvement. I know the
> > email says 21% but that includes the 5% improvement from switching to
> > kvmalloc() to allocate area->pages.
> >
> > https://lore.kernel.org/linux-mm/[email protected]/
> >
>
> That's fairly promising. Assuming the bulk allocator gets merged, it would
> make sense to add vmalloc on top. That's for bringing it to my attention
> because it's far more relevant than my imaginary potential use cases.
>
For vmalloc we should be able to allocate on a specific NUMA node; at
least the current vmalloc interface takes it into account. As far as I
see, the current bulk interface allocates on the current node:

static inline unsigned long
alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
{
        return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
}

Or am i missing something?

--
Vlad Rezki

2021-03-25 14:13:39

by Matthew Wilcox

Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users

On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> For the vmalloc we should be able to allocating on a specific NUMA node,
> at least the current interface takes it into account. As far as i see
> the current interface allocate on a current node:
>
> static inline unsigned long
> alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
> {
> return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
> }
>
> Or am i missing something?

You can call __alloc_pages_bulk() directly; there's no need to indirect
through alloc_pages_bulk_array().

2021-03-25 14:28:10

by Mel Gorman

Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users

On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> > > On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > > the network page pool being the first users. The implementation is not
> > > > efficient as semantics needed to be ironed out first. If no other semantic
> > > > changes are needed, it can be made more efficient. Despite that, this
> > > > is a performance-related for users that require multiple pages for an
> > > > operation without multiple round-trips to the page allocator. Quoting
> > > > the last patch for the high-speed networking use-case
> > > >
> > > > Kernel XDP stats CPU pps Delta
> > > > Baseline XDP-RX CPU total 3,771,046 n/a
> > > > List XDP-RX CPU total 3,940,242 +4.49%
> > > > Array XDP-RX CPU total 4,249,224 +12.68%
> > > >
> > > > From the SUNRPC traces of svc_alloc_arg()
> > > >
> > > > Single page: 25.007 us per call over 532,571 calls
> > > > Bulk list: 6.258 us per call over 517,034 calls
> > > > Bulk array: 4.590 us per call over 517,442 calls
> > > >
> > > > Both potential users in this series are corner cases (NFS and high-speed
> > > > networks) so it is unlikely that most users will see any benefit in the
> > > > short term. Other potential other users are batch allocations for page
> > > > cache readahead, fault around and SLUB allocations when high-order pages
> > > > are unavailable. It's unknown how much benefit would be seen by converting
> > > > multiple page allocation calls to a single batch or what difference it may
> > > > make to headline performance.
> > >
> > > We have a third user, vmalloc(), with a 16% perf improvement. I know the
> > > email says 21% but that includes the 5% improvement from switching to
> > > kvmalloc() to allocate area->pages.
> > >
> > > https://lore.kernel.org/linux-mm/[email protected]/
> > >
> >
> > That's fairly promising. Assuming the bulk allocator gets merged, it would
> > make sense to add vmalloc on top. That's for bringing it to my attention
> > because it's far more relevant than my imaginary potential use cases.
> >
> For the vmalloc we should be able to allocating on a specific NUMA node,
> at least the current interface takes it into account. As far as i see
> the current interface allocate on a current node:
>
> static inline unsigned long
> alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
> {
> return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
> }
>
> Or am i missing something?
>

No, you're not missing anything. Options would be to add a helper similar
to alloc_pages_node() or to directly call __alloc_pages_bulk() specifying
a node and using __GFP_THISNODE. prepare_alloc_pages() should pick the
correct zonelist containing only the required node.

> --
> Vlad Rezki

--
Mel Gorman
SUSE Labs

2021-03-25 14:47:18

by Uladzislau Rezki

Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users

On Thu, Mar 25, 2021 at 02:26:24PM +0000, Mel Gorman wrote:
> On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > > On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> > > > On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > > > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > > > the network page pool being the first users. The implementation is not
> > > > > efficient as semantics needed to be ironed out first. If no other semantic
> > > > > changes are needed, it can be made more efficient. Despite that, this
> > > > > is a performance-related for users that require multiple pages for an
> > > > > operation without multiple round-trips to the page allocator. Quoting
> > > > > the last patch for the high-speed networking use-case
> > > > >
> > > > > Kernel XDP stats CPU pps Delta
> > > > > Baseline XDP-RX CPU total 3,771,046 n/a
> > > > > List XDP-RX CPU total 3,940,242 +4.49%
> > > > > Array XDP-RX CPU total 4,249,224 +12.68%
> > > > >
> > > > > From the SUNRPC traces of svc_alloc_arg()
> > > > >
> > > > > Single page: 25.007 us per call over 532,571 calls
> > > > > Bulk list: 6.258 us per call over 517,034 calls
> > > > > Bulk array: 4.590 us per call over 517,442 calls
> > > > >
> > > > > Both potential users in this series are corner cases (NFS and high-speed
> > > > > networks) so it is unlikely that most users will see any benefit in the
> > > > > short term. Other potential other users are batch allocations for page
> > > > > cache readahead, fault around and SLUB allocations when high-order pages
> > > > > are unavailable. It's unknown how much benefit would be seen by converting
> > > > > multiple page allocation calls to a single batch or what difference it may
> > > > > make to headline performance.
> > > >
> > > > We have a third user, vmalloc(), with a 16% perf improvement. I know the
> > > > email says 21% but that includes the 5% improvement from switching to
> > > > kvmalloc() to allocate area->pages.
> > > >
> > > > https://lore.kernel.org/linux-mm/[email protected]/
> > > >
> > >
> > > That's fairly promising. Assuming the bulk allocator gets merged, it would
> > > make sense to add vmalloc on top. That's for bringing it to my attention
> > > because it's far more relevant than my imaginary potential use cases.
> > >
> > For the vmalloc we should be able to allocating on a specific NUMA node,
> > at least the current interface takes it into account. As far as i see
> > the current interface allocate on a current node:
> >
> > static inline unsigned long
> > alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
> > {
> > return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
> > }
> >
> > Or am i missing something?
> >
>
> No, you're not missing anything. Options would be to add a helper similar
> alloc_pages_node or to directly call __alloc_pages_bulk specifying a node
> and using GFP_THISNODE. prepare_alloc_pages() should pick the correct
> zonelist containing only the required node.
>
IMHO, a helper named something like *_node() would be reasonable. I see
that many functions in "mm" have their own variants which explicitly add
a "_node()" suffix to signal to users that they are NUMA-aware calls.

As for __alloc_pages_bulk(), I got it.

Thanks!

--
Vlad Rezki