2020-07-14 20:56:38

by Daniel Jordan

Subject: [PATCH v3] x86/mm: use max memory block size on bare metal

Some of our servers spend significant time at kernel boot initializing
memory block sysfs directories and then creating symlinks between them
and the corresponding nodes. The slowness happens because the machines
get stuck with the smallest supported memory block size on x86 (128M),
which results in 16,288 directories to cover the 2T of installed RAM.
The search for each memory block is noticeable even with
commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
xarray to accelerate lookup").

Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
on the end of boot memory") chooses the block size based on alignment
with memory end. That addresses hotplug failures in qemu guests, but
bare metal systems whose memory end isn't aligned to even the smallest
size are left at 128M.

Make kernels that aren't running on a hypervisor use the largest
supported size (2G) to minimize overhead on big machines. Kernel boot
goes 7% faster on the aforementioned servers, shaving off half a second.

Signed-off-by: Daniel Jordan <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Sistare <[email protected]>
Cc: [email protected]
Cc: [email protected]
---

v3:
- Add more accurate hypervisor check. Someone kindly pointed me to
517c3ba00916 ("x86/speculation/mds: Apply more accurate check on
hypervisor platform"), and v2 had the same issue.
- Rebase on v5.8-rc5

v2:
- Thanks to David for the idea to make this conditional based on
virtualization.
- Update performance numbers to account for 4fb6eabf1037 (David)

arch/x86/mm/init_64.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index dbae185511cdf..51ea8b8e2959d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1406,6 +1406,15 @@ static unsigned long probe_memory_block_size(void)
goto done;
}

+ /*
+ * Use max block size to minimize overhead on bare metal, where
+ * alignment for memory hotplug isn't a concern.
+ */
+ if (!boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+ bz = MAX_BLOCK_SIZE;
+ goto done;
+ }
+
/* Find the largest allowed block size that aligns to memory end */
for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1) {
if (IS_ALIGNED(boot_mem_end, bz))

base-commit: 11ba468877bb23f28956a35e896356252d63c983
--
2.27.0


2020-07-15 16:05:13

by Daniel Jordan

Subject: Re: [PATCH v3] x86/mm: use max memory block size on bare metal

On Tue, Jul 14, 2020 at 04:54:50PM -0400, Daniel Jordan wrote:
> Some of our servers spend significant time at kernel boot initializing
> memory block sysfs directories and then creating symlinks between them
> and the corresponding nodes. The slowness happens because the machines
> get stuck with the smallest supported memory block size on x86 (128M),
> which results in 16,288 directories to cover the 2T of installed RAM.
> The search for each memory block is noticeable even with
> commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
> xarray to accelerate lookup").
>
> Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
> on the end of boot memory") chooses the block size based on alignment
> with memory end. That addresses hotplug failures in qemu guests, but
> bare metal systems whose memory end isn't aligned to even the smallest
> size are left at 128M.
>
> Make kernels that aren't running on a hypervisor use the largest
> supported size (2G) to minimize overhead on big machines. Kernel boot
> goes 7% faster on the aforementioned servers, shaving off half a second.
>
> Signed-off-by: Daniel Jordan <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: David Hildenbrand <[email protected]>

Darn. David, I forgot to add your ack from v2. My assumption is that it still
stands after the minor change in this version, but please do correct me if I'm
wrong.