2018-04-05 08:06:45

by Jia He

Subject: [PATCH v7 0/5] optimize memblock_next_valid_pfn and early_pfn_valid on arm and arm64

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") tried to optimize the loop in memmap_init_zone(). But
there is still some room for improvement.

Patch 1 retains memblock_next_valid_pfn() on arm and arm64
Patch 2 optimizes memblock_next_valid_pfn()
Patches 3~5 optimize early_pfn_valid()

As for the performance improvement: with this series applied, the time
overhead of memmap_init() is reduced from 41313 us to 24345 us on my
armv8a server (QDF2400 with 96G of memory).

Attached below is the memblock region information from my server.
[ 86.956758] Zone ranges:
[ 86.959452] DMA [mem 0x0000000000200000-0x00000000ffffffff]
[ 86.966041] Normal [mem 0x0000000100000000-0x00000017ffffffff]
[ 86.972631] Movable zone start for each node
[ 86.977179] Early memory node ranges
[ 86.980985] node 0: [mem 0x0000000000200000-0x000000000021ffff]
[ 86.987666] node 0: [mem 0x0000000000820000-0x000000000307ffff]
[ 86.994348] node 0: [mem 0x0000000003080000-0x000000000308ffff]
[ 87.001029] node 0: [mem 0x0000000003090000-0x00000000031fffff]
[ 87.007710] node 0: [mem 0x0000000003200000-0x00000000033fffff]
[ 87.014392] node 0: [mem 0x0000000003410000-0x000000000563ffff]
[ 87.021073] node 0: [mem 0x0000000005640000-0x000000000567ffff]
[ 87.027754] node 0: [mem 0x0000000005680000-0x00000000056dffff]
[ 87.034435] node 0: [mem 0x00000000056e0000-0x00000000086fffff]
[ 87.041117] node 0: [mem 0x0000000008700000-0x000000000871ffff]
[ 87.047798] node 0: [mem 0x0000000008720000-0x000000000894ffff]
[ 87.054479] node 0: [mem 0x0000000008950000-0x0000000008baffff]
[ 87.061161] node 0: [mem 0x0000000008bb0000-0x0000000008bcffff]
[ 87.067842] node 0: [mem 0x0000000008bd0000-0x0000000008c4ffff]
[ 87.074524] node 0: [mem 0x0000000008c50000-0x0000000008e2ffff]
[ 87.081205] node 0: [mem 0x0000000008e30000-0x0000000008e4ffff]
[ 87.087886] node 0: [mem 0x0000000008e50000-0x0000000008fcffff]
[ 87.094568] node 0: [mem 0x0000000008fd0000-0x000000000910ffff]
[ 87.101249] node 0: [mem 0x0000000009110000-0x00000000092effff]
[ 87.107930] node 0: [mem 0x00000000092f0000-0x000000000930ffff]
[ 87.114612] node 0: [mem 0x0000000009310000-0x000000000963ffff]
[ 87.121293] node 0: [mem 0x0000000009640000-0x000000000e61ffff]
[ 87.127975] node 0: [mem 0x000000000e620000-0x000000000e64ffff]
[ 87.134657] node 0: [mem 0x000000000e650000-0x000000000fffffff]
[ 87.141338] node 0: [mem 0x0000000010800000-0x0000000017feffff]
[ 87.148019] node 0: [mem 0x000000001c000000-0x000000001c00ffff]
[ 87.154701] node 0: [mem 0x000000001c010000-0x000000001c7fffff]
[ 87.161383] node 0: [mem 0x000000001c810000-0x000000007efbffff]
[ 87.168064] node 0: [mem 0x000000007efc0000-0x000000007efdffff]
[ 87.174746] node 0: [mem 0x000000007efe0000-0x000000007efeffff]
[ 87.181427] node 0: [mem 0x000000007eff0000-0x000000007effffff]
[ 87.188108] node 0: [mem 0x000000007f000000-0x00000017ffffffff]
[ 87.194791] Initmem setup node 0 [mem 0x0000000000200000-0x00000017ffffffff]

Without this patchset:
[ 117.106153] Initmem setup node 0 [mem 0x0000000000200000-0x00000017ffffffff]
[ 117.113677] before memmap_init
[ 117.118195] after memmap_init
>>> memmap_init takes 4518 us
[ 117.121446] before memmap_init
[ 117.154992] after memmap_init
>>> memmap_init takes 33546 us
[ 117.158241] before memmap_init
[ 117.161490] after memmap_init
>>> memmap_init takes 3249 us
>>> totally takes 41313 us

With this patchset:
[ 87.194791] Initmem setup node 0 [mem 0x0000000000200000-0x00000017ffffffff]
[ 87.202314] before memmap_init
[ 87.206164] after memmap_init
>>> memmap_init takes 3850 us
[ 87.209416] before memmap_init
[ 87.226662] after memmap_init
>>> memmap_init takes 17246 us
[ 87.229911] before memmap_init
[ 87.233160] after memmap_init
>>> memmap_init takes 3249 us
>>> totally takes 24345 us
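
The per-zone times above (the ">>> memmap_init takes ..." lines) are simply the
deltas between the printk timestamps of the surrounding "before/after
memmap_init" messages. A minimal sketch of that kind of debug-only
instrumentation, not part of this series (the call site in
free_area_init_core() is an assumption), could be:

/* debug only: bracket the existing per-zone call in free_area_init_core() */
pr_info("before memmap_init\n");
memmap_init(size, nid, j, zone_start_pfn);
pr_info("after memmap_init\n");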

Changelog:
V7: - fix i386 compilation error; refine the commit descriptions
V6: - simplify the code, move the arm/arm64 common code to one file
- refine patches as suggested by Daniel Vacek and Ard Biesheuvel
V5: - further refinement as suggested by Daniel Vacek; make the arm/arm64
code more arch-specific
V4: - refine patches as suggested by Daniel Vacek and Wei Yang
- optimize on arm in addition to arm64
V3: - fix 2 issues reported by kbuild test robot
V2: - rebase to mmotm latest
- remain memblock_next_valid_pfn on arm64
- refine memblock_search_pfn_regions and pfn_valid_region

Jia He (5):
mm: page_alloc: remain memblock_next_valid_pfn() on arm and arm64
arm: arm64: page_alloc: reduce unnecessary binary search in
memblock_next_valid_pfn()
mm/memblock: introduce memblock_search_pfn_regions()
arm: arm64: introduce pfn_valid_region()
mm: page_alloc: reduce unnecessary binary search in early_pfn_valid()

arch/arm/mm/init.c | 1 +
arch/arm64/mm/init.c | 1 +
include/linux/arm96_common.h | 76 ++++++++++++++++++++++++++++++++++++++++++++
include/linux/memblock.h | 2 ++
include/linux/mmzone.h | 18 ++++++++++-
mm/memblock.c | 9 ++++++
mm/page_alloc.c | 2 +-
7 files changed, 107 insertions(+), 2 deletions(-)
create mode 100644 include/linux/arm96_common.h

--
2.7.4



2018-04-05 08:06:54

by Jia He

Subject: [PATCH v7 1/5] mm: page_alloc: remain memblock_next_valid_pfn() on arm and arm64

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(). But it caused a
possible panic on some architectures, so Daniel Vacek later reverted it.

But as suggested by Daniel Vacek, it is fine to use memblock to skip gaps
and find the next valid frame when CONFIG_HAVE_ARCH_PFN_VALID is set.

On arm and arm64, memblock is used by default. But the generic version of
pfn_valid() is based on mem sections, and memblock_next_valid_pfn() does not
always return the next valid pfn: it sometimes skips further ahead, so some
valid frames end up treated as if they were invalid. That is why the kernel
eventually crashed on some !arm machines.

As verified by Eugeniu Rosca, arm can benefit from commit b92df1de5d28.
So keep memblock_next_valid_pfn() on arm/arm64 and move the related code
into one file, include/linux/arm96_common.h.

Suggested-by: Daniel Vacek <[email protected]>
Signed-off-by: Jia He <[email protected]>
---
arch/arm/mm/init.c | 1 +
arch/arm64/mm/init.c | 1 +
include/linux/arm96_common.h | 37 +++++++++++++++++++++++++++++++++++++
include/linux/mmzone.h | 11 +++++++++++
mm/page_alloc.c | 2 +-
5 files changed, 51 insertions(+), 1 deletion(-)
create mode 100644 include/linux/arm96_common.h

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a1f11a7..296cc52 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -25,6 +25,7 @@
#include <linux/dma-contiguous.h>
#include <linux/sizes.h>
#include <linux/stop_machine.h>
+#include <linux/arm96_common.h>

#include <asm/cp15.h>
#include <asm/mach-types.h>
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 00e7b90..6efab80 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -40,6 +40,7 @@
#include <linux/mm.h>
#include <linux/kexec.h>
#include <linux/crash_dump.h>
+#include <linux/arm96_common.h>

#include <asm/boot.h>
#include <asm/fixmap.h>
diff --git a/include/linux/arm96_common.h b/include/linux/arm96_common.h
new file mode 100644
index 0000000..a6f68ea
--- /dev/null
+++ b/include/linux/arm96_common.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Common definitions of arm and arm64
+ * Copyright (C) 2018 HXT-semitech Corp.
+ */
+#ifndef __ARM96_COMMON_H
+#define __ARM96_COMMON_H
+#ifdef CONFIG_HAVE_ARCH_PFN_VALID
+/* HAVE_MEMBLOCK is always enabled on arm and arm64 */
+ulong __init_memblock memblock_next_valid_pfn(ulong pfn)
+{
+ struct memblock_type *type = &memblock.memory;
+ unsigned int right = type->cnt;
+ unsigned int mid, left = 0;
+ phys_addr_t addr = PFN_PHYS(++pfn);
+
+ do {
+ mid = (right + left) / 2;
+
+ if (addr < type->regions[mid].base)
+ right = mid;
+ else if (addr >= (type->regions[mid].base +
+ type->regions[mid].size))
+ left = mid + 1;
+ else {
+ /* addr is within the region, so pfn is valid */
+ return pfn;
+ }
+ } while (left < right);
+
+ if (right == type->cnt)
+ return -1UL;
+ else
+ return PHYS_PFN(type->regions[right].base);
+}
+EXPORT_SYMBOL(memblock_next_valid_pfn);
+#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
+#endif /*__ARM96_COMMON_H*/
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d797716..eb56071 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1245,6 +1245,8 @@ static inline int pfn_valid(unsigned long pfn)
return 0;
return valid_section(__nr_to_section(pfn_to_section_nr(pfn)));
}
+
+#define next_valid_pfn(pfn) (pfn++)
#endif

static inline int pfn_present(unsigned long pfn)
@@ -1270,6 +1272,10 @@ static inline int pfn_present(unsigned long pfn)
#endif

#define early_pfn_valid(pfn) pfn_valid(pfn)
+#ifdef CONFIG_HAVE_ARCH_PFN_VALID
+extern ulong memblock_next_valid_pfn(ulong pfn);
+#define next_valid_pfn(pfn) memblock_next_valid_pfn(pfn)
+#endif
void sparse_init(void);
#else
#define sparse_init() do {} while (0)
@@ -1291,6 +1297,11 @@ struct mminit_pfnnid_cache {
#define early_pfn_valid(pfn) (1)
#endif

+/* fallback to the default definition */
+#ifndef next_valid_pfn
+#define next_valid_pfn(pfn) (pfn++)
+#endif
+
void memory_present(int nid, unsigned long start, unsigned long end);
unsigned long __init node_memmap_size_bytes(int, unsigned long, unsigned long);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c19f5ac..9d05f29 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5475,7 +5475,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
if (altmap && start_pfn == altmap->base_pfn)
start_pfn += altmap->reserve;

- for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+ for (pfn = start_pfn; pfn < end_pfn; next_valid_pfn(pfn)) {
/*
* There can be holes in boot-time mem_map[]s handed to this
* function. They do not exist on hotplugged memory.
--
2.7.4


2018-04-05 08:08:06

by Jia He

Subject: [PATCH v7 4/5] arm: arm64: introduce pfn_valid_region()

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(). But there is
still some room for improvement. E.g. in early_pfn_valid(), we can record
the index of the last returned memblock region. If the current pfn and the
last pfn are in the same memory region, we need not repeat the binary search,
because memblock_is_nomap() returns the same result for the whole region.

Signed-off-by: Jia He <[email protected]>
---
include/linux/arm96_common.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)

diff --git a/include/linux/arm96_common.h b/include/linux/arm96_common.h
index 2f4dea4..bb86bd3 100644
--- a/include/linux/arm96_common.h
+++ b/include/linux/arm96_common.h
@@ -48,5 +48,29 @@ ulong __init_memblock memblock_next_valid_pfn(ulong pfn)
return PHYS_PFN(regions[early_region_idx].base);
}
EXPORT_SYMBOL(memblock_next_valid_pfn);
+
+int pfn_valid_region(ulong pfn)
+{
+ ulong start_pfn, end_pfn;
+ struct memblock_type *type = &memblock.memory;
+ struct memblock_region *regions = type->regions;
+
+ if (early_region_idx != -1) {
+ start_pfn = PFN_DOWN(regions[early_region_idx].base);
+ end_pfn = PFN_DOWN(regions[early_region_idx].base +
+ regions[early_region_idx].size);
+
+ if (pfn >= start_pfn && pfn < end_pfn)
+ return !memblock_is_nomap(
+ &regions[early_region_idx]);
+ }
+
+ early_region_idx = memblock_search_pfn_regions(pfn);
+ if (early_region_idx == -1)
+ return false;
+
+ return !memblock_is_nomap(&regions[early_region_idx]);
+}
+EXPORT_SYMBOL(pfn_valid_region);
#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
#endif /*__ARM96_COMMON_H*/
--
2.7.4


2018-04-05 08:08:33

by Jia He

Subject: [PATCH v7 3/5] mm/memblock: introduce memblock_search_pfn_regions()

This API finds the memblock region index of the input pfn. With this
helper, we can improve the loop in early_pfn_valid() by recording the last
region index. If the current pfn and the last pfn are in the same memory
region, we need not repeat the binary search, because the result of
memblock_is_nomap() is the same for the whole region.

Signed-off-by: Jia He <[email protected]>
---
include/linux/memblock.h | 2 ++
mm/memblock.c | 9 +++++++++
2 files changed, 11 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 0257aee..a0127b3 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -203,6 +203,8 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */

+int memblock_search_pfn_regions(unsigned long pfn);
+
/**
* for_each_free_mem_range - iterate through free memblock areas
* @i: u64 used as loop variable
diff --git a/mm/memblock.c b/mm/memblock.c
index ba7c878..0f4004c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1617,6 +1617,15 @@ static int __init_memblock memblock_search(struct memblock_type *type, phys_addr
return -1;
}

+/* search memblock with the input pfn, return the region idx */
+int __init_memblock memblock_search_pfn_regions(unsigned long pfn)
+{
+ struct memblock_type *type = &memblock.memory;
+ int mid = memblock_search(type, PFN_PHYS(pfn));
+
+ return mid;
+}
+
bool __init memblock_is_reserved(phys_addr_t addr)
{
return memblock_search(&memblock.reserved, addr) != -1;
--
2.7.4


2018-04-05 08:08:40

by Jia He

Subject: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(). But there is
still some room for improvement. E.g. if pfn and pfn+1 are in the same
memblock region, we can simply pfn++ instead of doing the binary search
in memblock_next_valid_pfn.

Signed-off-by: Jia He <[email protected]>
---
include/linux/arm96_common.h | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/linux/arm96_common.h b/include/linux/arm96_common.h
index a6f68ea..2f4dea4 100644
--- a/include/linux/arm96_common.h
+++ b/include/linux/arm96_common.h
@@ -5,32 +5,47 @@
#ifndef __ARM96_COMMON_H
#define __ARM96_COMMON_H
#ifdef CONFIG_HAVE_ARCH_PFN_VALID
+static int early_region_idx __init_memblock = -1;
/* HAVE_MEMBLOCK is always enabled on arm and arm64 */
ulong __init_memblock memblock_next_valid_pfn(ulong pfn)
{
struct memblock_type *type = &memblock.memory;
- unsigned int right = type->cnt;
- unsigned int mid, left = 0;
+ struct memblock_region *regions = type->regions;
+ uint right = type->cnt;
+ uint mid, left = 0;
+ ulong start_pfn, end_pfn;
phys_addr_t addr = PFN_PHYS(++pfn);

+ /* fast path, return pfn+1 if next pfn is in the same region */
+ if (early_region_idx != -1) {
+ start_pfn = PFN_DOWN(regions[early_region_idx].base);
+ end_pfn = PFN_DOWN(regions[early_region_idx].base +
+ regions[early_region_idx].size);
+
+ if (pfn >= start_pfn && pfn < end_pfn)
+ return pfn;
+ }
+
+ /* slow path, do the binary searching */
do {
mid = (right + left) / 2;

- if (addr < type->regions[mid].base)
+ if (addr < regions[mid].base)
right = mid;
- else if (addr >= (type->regions[mid].base +
- type->regions[mid].size))
+ else if (addr >= (regions[mid].base + regions[mid].size))
left = mid + 1;
else {
- /* addr is within the region, so pfn is valid */
+ early_region_idx = mid;
return pfn;
}
} while (left < right);

if (right == type->cnt)
return -1UL;
- else
- return PHYS_PFN(type->regions[right].base);
+
+ early_region_idx = right;
+
+ return PHYS_PFN(regions[early_region_idx].base);
}
EXPORT_SYMBOL(memblock_next_valid_pfn);
#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
--
2.7.4


2018-04-05 08:08:48

by Jia He

Subject: [PATCH v7 5/5] mm: page_alloc: reduce unnecessary binary search in early_pfn_valid()

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(). But there is
still some room for improvement. E.g. in early_pfn_valid(), if pfn and
pfn+1 are in the same memblock region, we can record the last returned
memblock region index and check whether pfn++ is still in the same
region.

Currently this only improves performance on arm/arm64 and has no impact on
other arches.

With this series applied, the time overhead of memmap_init() is reduced from
41313 us to 24345 us on my armv8a server (QDF2400 with 96G of memory).

Signed-off-by: Jia He <[email protected]>
---
include/linux/mmzone.h | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index eb56071..ab01bd3 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1271,11 +1271,16 @@ static inline int pfn_present(unsigned long pfn)
#define pfn_to_nid(pfn) (0)
#endif

-#define early_pfn_valid(pfn) pfn_valid(pfn)
#ifdef CONFIG_HAVE_ARCH_PFN_VALID
extern ulong memblock_next_valid_pfn(ulong pfn);
#define next_valid_pfn(pfn) memblock_next_valid_pfn(pfn)
-#endif
+
+extern int pfn_valid_region(ulong pfn);
+#define early_pfn_valid(pfn) pfn_valid_region(pfn)
+#else
+#define early_pfn_valid(pfn) pfn_valid(pfn)
+#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
+
void sparse_init(void);
#else
#define sparse_init() do {} while (0)
--
2.7.4


2018-04-05 11:29:17

by Matthew Wilcox

Subject: Re: [PATCH v7 1/5] mm: page_alloc: remain memblock_next_valid_pfn() on arm and arm64

On Thu, Apr 05, 2018 at 01:04:34AM -0700, Jia He wrote:
> create mode 100644 include/linux/arm96_common.h

'arm96_common'?! No. Just no.

The right way to share common code is to create a header file (or use
an existing one), either in asm-generic or linux, with a #ifdef CONFIG_foo
block and then 'select foo' in the arm Kconfig files. That allows this
common code to be shared, maybe with powerpc or x86 or ... in the future.
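
Concretely, such a shared header might look something like the sketch below.
The path and the HAVE_MEMBLOCK_PFN_VALID symbol are placeholder names for
illustration only, not existing kernel options:

/* include/asm-generic/early_pfn.h -- hypothetical shared header */
#ifndef __ASM_GENERIC_EARLY_PFN_H
#define __ASM_GENERIC_EARLY_PFN_H

/*
 * Guarded by a Kconfig symbol that each interested architecture enables
 * with "select HAVE_MEMBLOCK_PFN_VALID" in its Kconfig file (arm and
 * arm64 today, possibly powerpc or x86 later).
 */
#ifdef CONFIG_HAVE_MEMBLOCK_PFN_VALID
extern unsigned long memblock_next_valid_pfn(unsigned long pfn);
#define next_valid_pfn(pfn)        memblock_next_valid_pfn(pfn)
#endif

#endif /* __ASM_GENERIC_EARLY_PFN_H */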


2018-04-05 11:37:10

by Matthew Wilcox

Subject: Re: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()

On Thu, Apr 05, 2018 at 01:04:35AM -0700, Jia He wrote:
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") optimized the loop in memmap_init_zone(). But there is
> still some room for improvement. E.g. if pfn and pfn+1 are in the same
> memblock region, we can simply pfn++ instead of doing the binary search
> in memblock_next_valid_pfn.

Sure, but I bet if we are >end_pfn, we're almost certainly going to the
start_pfn of the next block, so why not test that as well?

> + /* fast path, return pfn+1 if next pfn is in the same region */
> + if (early_region_idx != -1) {
> + start_pfn = PFN_DOWN(regions[early_region_idx].base);
> + end_pfn = PFN_DOWN(regions[early_region_idx].base +
> + regions[early_region_idx].size);
> +
> + if (pfn >= start_pfn && pfn < end_pfn)
> + return pfn;

early_region_idx++;
start_pfn = PFN_DOWN(regions[early_region_idx].base);
if (pfn >= end_pfn && pfn <= start_pfn)
return start_pfn;
> + }
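
Folded into the quoted fast path, that extra check might look like the sketch
below. Variable names follow the patch under discussion; the bounds check
against type->cnt and the memo update are assumptions added here:

        /* fast path: same region as last time, or the gap right before the
         * next region -- in both cases no binary search is needed.
         */
        if (early_region_idx != -1) {
                start_pfn = PFN_DOWN(regions[early_region_idx].base);
                end_pfn = PFN_DOWN(regions[early_region_idx].base +
                                   regions[early_region_idx].size);

                if (pfn >= start_pfn && pfn < end_pfn)
                        return pfn;

                /* probe the next region before falling back to the search */
                if (early_region_idx + 1 < type->cnt) {
                        start_pfn = PFN_DOWN(regions[early_region_idx + 1].base);
                        if (pfn >= end_pfn && pfn <= start_pfn) {
                                early_region_idx++;
                                return start_pfn;
                        }
                }
        }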

2018-04-05 12:31:26

by Jia He

Subject: Re: [PATCH v7 1/5] mm: page_alloc: remain memblock_next_valid_pfn() on arm and arm64

Thanks, Matthew


On 4/5/2018 7:23 PM, Matthew Wilcox Wrote:
> On Thu, Apr 05, 2018 at 01:04:34AM -0700, Jia He wrote:
>> create mode 100644 include/linux/arm96_common.h
> 'arm96_common'?! No. Just no.
>
> The right way to share common code is to create a header file (or use
> an existing one), either in asm-generic or linux, with a #ifdef CONFIG_foo
> block and then 'select foo' in the arm Kconfig files. That allows this
> common code to be shared, maybe with powerpc or x86 or ... in the future.
>
ok
How about include/asm-generic/early_pfn.h?
And could I use CONFIG_HAVE_ARCH_PFN_VALID and CONFIG_HAVE_MEMBLOCK in
this case?
Currently, arm/arm64 have memblock enabled by default. When other arches
implement their HAVE_MEMBLOCK and HAVE_ARCH_PFN_VALID, they can include
this file?

--
Cheers,
Jia


2018-04-05 12:45:51

by Jia He

Subject: Re: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()



On 4/5/2018 7:34 PM, Matthew Wilcox Wrote:
> On Thu, Apr 05, 2018 at 01:04:35AM -0700, Jia He wrote:
>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>> where possible") optimized the loop in memmap_init_zone(). But there is
>> still some room for improvement. E.g. if pfn and pfn+1 are in the same
>> memblock region, we can simply pfn++ instead of doing the binary search
>> in memblock_next_valid_pfn.
> Sure, but I bet if we are >end_pfn, we're almost certainly going to the
> start_pfn of the next block, so why not test that as well?
>
>> + /* fast path, return pfn+1 if next pfn is in the same region */
>> + if (early_region_idx != -1) {
>> + start_pfn = PFN_DOWN(regions[early_region_idx].base);
>> + end_pfn = PFN_DOWN(regions[early_region_idx].base +
>> + regions[early_region_idx].size);
>> +
>> + if (pfn >= start_pfn && pfn < end_pfn)
>> + return pfn;
> early_region_idx++;
> start_pfn = PFN_DOWN(regions[early_region_idx].base);
> if (pfn >= end_pfn && pfn <= start_pfn)
> return start_pfn;
Thanks, thus the binary search in next step can be discarded?

--
Cheers,
Jia


2018-04-05 12:52:38

by Matthew Wilcox

Subject: Re: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()

On Thu, Apr 05, 2018 at 08:44:12PM +0800, Jia He wrote:
>
>
> On 4/5/2018 7:34 PM, Matthew Wilcox Wrote:
> > On Thu, Apr 05, 2018 at 01:04:35AM -0700, Jia He wrote:
> > > Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> > > where possible") optimized the loop in memmap_init_zone(). But there is
> > > still some room for improvement. E.g. if pfn and pfn+1 are in the same
> > > memblock region, we can simply pfn++ instead of doing the binary search
> > > in memblock_next_valid_pfn.
> > Sure, but I bet if we are >end_pfn, we're almost certainly going to the
> > start_pfn of the next block, so why not test that as well?
> >
> > > + /* fast path, return pfn+1 if next pfn is in the same region */
> > > + if (early_region_idx != -1) {
> > > + start_pfn = PFN_DOWN(regions[early_region_idx].base);
> > > + end_pfn = PFN_DOWN(regions[early_region_idx].base +
> > > + regions[early_region_idx].size);
> > > +
> > > + if (pfn >= start_pfn && pfn < end_pfn)
> > > + return pfn;
> > early_region_idx++;
> > start_pfn = PFN_DOWN(regions[early_region_idx].base);
> > if (pfn >= end_pfn && pfn <= start_pfn)
> > return start_pfn;
> Thanks, thus the binary search in next step can be discarded?

I don't know all the circumstances in which this is called. Maybe a linear
search with memo is more appropriate than a binary search.
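
For illustration, a linear search with a memoized index could be as simple as
the sketch below. It is not part of the series, and the function name and the
cached_idx variable are made up here; since early boot walks pfns in order,
the hit is almost always the remembered region or the one right after it:

#include <linux/memblock.h>
#include <linux/pfn.h>

static int cached_idx;        /* remembers the last region that matched */

/* return the index of the memblock.memory region containing pfn, or -1 */
static int pfn_region_linear_search(unsigned long pfn)
{
        struct memblock_type *type = &memblock.memory;
        phys_addr_t addr = PFN_PHYS(pfn);
        unsigned int i;

        for (i = 0; i < type->cnt; i++) {
                /* start at the memoized index and wrap around */
                unsigned int idx = (cached_idx + i) % type->cnt;
                struct memblock_region *r = &type->regions[idx];

                if (addr >= r->base && addr < r->base + r->size) {
                        cached_idx = idx;
                        return idx;
                }
        }
        return -1;
}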

2018-04-06 09:11:58

by Russell King (Oracle)

Subject: Re: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()

On Thu, Apr 05, 2018 at 05:50:54AM -0700, Matthew Wilcox wrote:
> On Thu, Apr 05, 2018 at 08:44:12PM +0800, Jia He wrote:
> >
> >
> > On 4/5/2018 7:34 PM, Matthew Wilcox Wrote:
> > > On Thu, Apr 05, 2018 at 01:04:35AM -0700, Jia He wrote:
> > > > Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> > > > where possible") optimized the loop in memmap_init_zone(). But there is
> > > > still some room for improvement. E.g. if pfn and pfn+1 are in the same
> > > > memblock region, we can simply pfn++ instead of doing the binary search
> > > > in memblock_next_valid_pfn.
> > > Sure, but I bet if we are >end_pfn, we're almost certainly going to the
> > > start_pfn of the next block, so why not test that as well?
> > >
> > > > + /* fast path, return pfn+1 if next pfn is in the same region */
> > > > + if (early_region_idx != -1) {
> > > > + start_pfn = PFN_DOWN(regions[early_region_idx].base);
> > > > + end_pfn = PFN_DOWN(regions[early_region_idx].base +
> > > > + regions[early_region_idx].size);
> > > > +
> > > > + if (pfn >= start_pfn && pfn < end_pfn)
> > > > + return pfn;
> > > early_region_idx++;
> > > start_pfn = PFN_DOWN(regions[early_region_idx].base);
> > > if (pfn >= end_pfn && pfn <= start_pfn)
> > > return start_pfn;
> > Thanks, thus the binary search in next step can be discarded?
>
> I don't know all the circumstances in which this is called. Maybe a linear
> search with memo is more appropriate than a binary search.

That's been brought up before, and the reasoning appears to be
something along the lines of...

Academics and published wisdom is that on cached architectures, binary
searches are bad because it doesn't operate efficiently due to the
overhead from having to load cache lines. Consequently, there seems
to be a knee-jerk reaction that "all binary searches are bad, we must
eliminate them."

What is failed to be grasped here, though, is that it is typical that
the number of entries in this array tend to be small, so the entire
array takes up one or two cache lines, maybe a maximum of four lines
depending on your cache line length and number of entries.

This means that the binary search expense is reduced, and is lower
than a linear search for the majority of cases.

What is key here as far as performance is concerned is whether the
general usage of pfn_valid() by the kernel is optimal. We should
not optimise only for the boot case, which means evaluating the
effect of these changes with _real_ workloads, not just "does my
machine boot a milliseconds faster".

--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
According to speedtest.net: 8.21Mbps down 510kbps up

2018-04-06 10:24:50

by Daniel Vacek

[permalink] [raw]
Subject: Re: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()

On Fri, Apr 6, 2018 at 11:09 AM, Russell King - ARM Linux
<[email protected]> wrote:
> On Thu, Apr 05, 2018 at 05:50:54AM -0700, Matthew Wilcox wrote:
>> On Thu, Apr 05, 2018 at 08:44:12PM +0800, Jia He wrote:
>> >
>> >
>> > On 4/5/2018 7:34 PM, Matthew Wilcox Wrote:
>> > > On Thu, Apr 05, 2018 at 01:04:35AM -0700, Jia He wrote:
>> > > > Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>> > > > where possible") optimized the loop in memmap_init_zone(). But there is
>> > > > still some room for improvement. E.g. if pfn and pfn+1 are in the same
>> > > > memblock region, we can simply pfn++ instead of doing the binary search
>> > > > in memblock_next_valid_pfn.
>> > > Sure, but I bet if we are >end_pfn, we're almost certainly going to the
>> > > start_pfn of the next block, so why not test that as well?
>> > >
>> > > > + /* fast path, return pfn+1 if next pfn is in the same region */
>> > > > + if (early_region_idx != -1) {
>> > > > + start_pfn = PFN_DOWN(regions[early_region_idx].base);
>> > > > + end_pfn = PFN_DOWN(regions[early_region_idx].base +
>> > > > + regions[early_region_idx].size);
>> > > > +
>> > > > + if (pfn >= start_pfn && pfn < end_pfn)
>> > > > + return pfn;
>> > > early_region_idx++;
>> > > start_pfn = PFN_DOWN(regions[early_region_idx].base);
>> > > if (pfn >= end_pfn && pfn <= start_pfn)
>> > > return start_pfn;
>> > Thanks, thus the binary search in next step can be discarded?
>>
>> I don't know all the circumstances in which this is called. Maybe a linear
>> search with memo is more appropriate than a binary search.

This is actually a good point.

> That's been brought up before, and the reasoning appears to be
> something along the lines of...
>
> Academics and published wisdom is that on cached architectures, binary
> searches are bad because it doesn't operate efficiently due to the
> overhead from having to load cache lines. Consequently, there seems
> to be a knee-jerk reaction that "all binary searches are bad, we must
> eliminate them."

a) This does not make sense, at least not in the general case.
b) It is not the case here. Here it's really mostly called with
sequentially incremented pfns, AFAICT.

> What is failed to be grasped here, though, is that it is typical that
> the number of entries in this array tend to be small, so the entire
> array takes up one or two cache lines, maybe a maximum of four lines
> depending on your cache line length and number of entries.
>
> This means that the binary search expense is reduced, and is lower
> than a linear search for the majority of cases.

In this case it hits mostly the last result or eventually the
sequentially next one.

> What is key here as far as performance is concerned is whether the
> general usage of pfn_valid() by the kernel is optimal. We should
> not optimise only for the boot case, which means evaluating the
> effect of these changes with _real_ workloads, not just "does my
> machine boot a milliseconds faster".

IIUC, this is only used during early boot (and memory hotplug) and it
does not influence regular runtime. Whether the general usage of
pfn_valid() by the kernel is optimal is another good question, but
that's totally unrelated to this series, IMHO.

On the other hand I also wonder if this all really is worth the
negligible boot time speedup.

--nX

> --
> RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
> FTTC broadband for 0.8mile line in suburbia: sync at 8.8Mbps down 630kbps up
> According to speedtest.net: 8.21Mbps down 510kbps up

2018-04-08 02:09:26

by Jia He

Subject: Re: [PATCH v7 2/5] arm: arm64: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()

Thanks for your comments, Russell


On 4/6/2018 5:09 PM, Russell King - ARM Linux Wrote:
> On Thu, Apr 05, 2018 at 05:50:54AM -0700, Matthew Wilcox wrote:
>> On Thu, Apr 05, 2018 at 08:44:12PM +0800, Jia He wrote:
>>>
>>> On 4/5/2018 7:34 PM, Matthew Wilcox Wrote:
>>>> On Thu, Apr 05, 2018 at 01:04:35AM -0700, Jia He wrote:
>>>>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>>>>> where possible") optimized the loop in memmap_init_zone(). But there is
>>>>> still some room for improvement. E.g. if pfn and pfn+1 are in the same
>>>>> memblock region, we can simply pfn++ instead of doing the binary search
>>>>> in memblock_next_valid_pfn.
>>>> Sure, but I bet if we are >end_pfn, we're almost certainly going to the
>>>> start_pfn of the next block, so why not test that as well?
>>>>
>>>>> + /* fast path, return pfn+1 if next pfn is in the same region */
>>>>> + if (early_region_idx != -1) {
>>>>> + start_pfn = PFN_DOWN(regions[early_region_idx].base);
>>>>> + end_pfn = PFN_DOWN(regions[early_region_idx].base +
>>>>> + regions[early_region_idx].size);
>>>>> +
>>>>> + if (pfn >= start_pfn && pfn < end_pfn)
>>>>> + return pfn;
>>>> early_region_idx++;
>>>> start_pfn = PFN_DOWN(regions[early_region_idx].base);
>>>> if (pfn >= end_pfn && pfn <= start_pfn)
>>>> return start_pfn;
>>> Thanks, thus the binary search in next step can be discarded?
>> I don't know all the circumstances in which this is called. Maybe a linear
>> search with memo is more appropriate than a binary search.
> That's been brought up before, and the reasoning appears to be
> something along the lines of...
>
> Academics and published wisdom is that on cached architectures, binary
> searches are bad because it doesn't operate efficiently due to the
> overhead from having to load cache lines. Consequently, there seems
> to be a knee-jerk reaction that "all binary searches are bad, we must
> eliminate them."
IIUC, are you opposed to removing the binary search entirely, rather than
to my previous patch set?
>
> What is failed to be grasped here, though, is that it is typical that
> the number of entries in this array tend to be small, so the entire
> array takes up one or two cache lines, maybe a maximum of four lines
> depending on your cache line length and number of entries.
>
> This means that the binary search expense is reduced, and is lower
> than a linear search for the majority of cases.
>
> What is key here as far as performance is concerned is whether the
> general usage of pfn_valid() by the kernel is optimal. We should
> not optimise only for the boot case, which means evaluating the
> effect of these changes with _real_ workloads, not just "does my
> machine boot a milliseconds faster".
hmm.. But the pfn increases linearly during boot. That assumption does not
hold for pfn_valid() in real workloads outside of boot time. So in my
patchset, I defined a separate pfn_valid_region() for boot time only.

I don't have many arm/arm64 boxes to verify on. What I can do is guarantee
the improvement on my armv8a (Qualcomm Centriq 2400). Sorry about that.

--
Cheers,
Jia