Hello world,
when lightly testing a dual-socket server with 64-core AMD processors I
noticed that workloads running on cpu #0 can exhibit significantly worse
latencies compared to cpu #1 ... cpu #255. Checking SSD response time,
on cpu #0 I got:
taskset -c 0 ioping -R /dev/sdf
--- /dev/sdf (block device 1.75 TiB) ioping statistics ---
70.7 k requests completed in 2.97 s, 276.3 MiB read, 23.8 k iops, 93.1 MiB/s
generated 70.7 k requests in 3.00 s, 276.4 MiB, 23.6 k iops, 92.1 MiB/s
min/avg/max/mdev = 33.1 us / 41.9 us / 87.9 ms / 452.6 us
Notice the 87.9 millisecond maximum response time, and compare with its
hyperthread sibling:
taskset -c 128 ioping -R /dev/sdf
--- /dev/sdf (block device 1.75 TiB) ioping statistics ---
80.5 k requests completed in 2.96 s, 314.5 MiB read, 27.2 k iops, 106.2 MiB/s
generated 80.5 k requests in 3.00 s, 314.5 MiB, 26.8 k iops, 104.8 MiB/s
min/avg/max/mdev = 33.2 us / 36.8 us / 89.2 us / 2.00 us
Of course, the maximum times themselves vary from run to run, but the
general picture holds: on cpu #0 I see latencies about three orders of
magnitude worse. I think this is outside of "latency-sensitive workloads
might care" territory and closer to a "hurts everyone" kind of issue,
hence I'm reporting it.
On this machine there's an AMD HEST ACPI table that registers 14342 polled
"generic hardware error sources" (GHES) with a poll interval of 5 seconds.
(This seems misdesigned: it will cause cross-socket polling unless the
OS takes special care to divine which GHES to poll from where.)
Linux sets up a timer for each of those individually, so when the machine
is idle there are approximately 2800 timer callbacks per second invoked
on cpu #0.
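As a back-of-the-envelope check of that rate (assuming, as appears to be
the case, that all the polled-GHES timers land on cpu #0):

```shell
# 14342 polled GHES sources, each with its own timer at a 5 second period,
# works out to this many timer callbacks per second:
echo $((14342 / 5))   # prints 2868, matching the ~2800/s observed
```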
Plus, there's a secondary issue with timer migration:
get_nohz_timer_target will attempt to select a non-idle CPU out of 256
(visiting some CPUs repeatedly if they appear in nested domains), and
fail. If I help it along by running 'taskset -c 1 yes > /dev/null', or
disable kernel.timer_migration entirely, the maximum latency in the above
ioping test drops to the 1..10 ms range (two orders of magnitude worse
instead of three).
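For reference, the two workarounds spelled out (the sysctl needs root and
takes effect immediately):

```shell
# Workaround 1: keep cpu #1 busy so get_nohz_timer_target has a non-idle
# CPU to migrate timers to:
taskset -c 1 yes > /dev/null &

# Workaround 2: disable timer migration altogether:
sysctl -w kernel.timer_migration=0
```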
I guess the short answer is that if I don't like it I can boot that
server with 'ghes_disable=1', but is a proper solution possible? Like
requiring explicit opt-in to honor polled GHES entries?
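(For completeness, on a GRUB-based distro the boot-parameter workaround
would look something like this; the exact paths and update command vary
by distro:)

```shell
# Add ghes_disable=1 to the kernel command line in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... ghes_disable=1"
# then regenerate the grub config and reboot:
update-grub   # Debian/Ubuntu; on Fedora/RHEL: grub2-mkconfig -o /boot/grub2/grub.cfg
```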
Thank you.
Alexander
On Sun, Nov 28, 2021 at 12:40:48AM +0300, Alexander Monakov wrote:
> Hello world,
>
> when lightly testing a dual-socket server with 64-core AMD processors I
> noticed that workloads running on cpu #0 can exhibit significantly worse
> latencies compared to cpu #1 ... cpu #255. Checking SSD response time,
> on cpu #0 I got:
>
> taskset -c 0 ioping -R /dev/sdf
>
> --- /dev/sdf (block device 1.75 TiB) ioping statistics ---
> 70.7 k requests completed in 2.97 s, 276.3 MiB read, 23.8 k iops, 93.1 MiB/s
> generated 70.7 k requests in 3.00 s, 276.4 MiB, 23.6 k iops, 92.1 MiB/s
> min/avg/max/mdev = 33.1 us / 41.9 us / 87.9 ms / 452.6 us
>
> Notice the 87.9 millisecond maximum response time, and compare with its
> hyperthread sibling:
>
> taskset -c 128 ioping -R /dev/sdf
>
> --- /dev/sdf (block device 1.75 TiB) ioping statistics ---
> 80.5 k requests completed in 2.96 s, 314.5 MiB read, 27.2 k iops, 106.2 MiB/s
> generated 80.5 k requests in 3.00 s, 314.5 MiB, 26.8 k iops, 104.8 MiB/s
> min/avg/max/mdev = 33.2 us / 36.8 us / 89.2 us / 2.00 us
>
> Of course, the maximum times themselves vary from run to run, but the
> general picture holds: on cpu #0 I see latencies about three orders of
> magnitude worse. I think this is outside of "latency-sensitive workloads
> might care" territory and closer to a "hurts everyone" kind of issue,
> hence I'm reporting it.
>
>
> On this machine there's an AMD HEST ACPI table that registers 14342 polled
> "generic hardware error sources" (GHES) with a poll interval of 5 seconds.
> (This seems misdesigned: it will cause cross-socket polling unless the
> OS takes special care to divine which GHES to poll from where.)
>
> Linux sets up a timer for each of those individually, so when the machine
> is idle there are approximately 2800 timer callbacks per second invoked
> on cpu #0.
> Plus, there's a secondary issue with timer migration:
> get_nohz_timer_target will attempt to select a non-idle CPU out of 256
> (visiting some CPUs repeatedly if they appear in nested domains), and
> fail. If I help it along by running 'taskset -c 1 yes > /dev/null', or
> disable kernel.timer_migration entirely, the maximum latency in the above
> ioping test drops to the 1..10 ms range (two orders of magnitude worse
> instead of three).
>
> I guess the short answer is that if I don't like it I can boot that
> server with 'ghes_disable=1', but is a proper solution possible? Like
> requiring explicit opt-in to honor polled GHES entries?
>
Hi Alexander,
I believe the large number of GHES structures you have is intended to be used
for the ACPI "GHES_ASSIST" feature. The GHES structures in this case are not
meant to be used as independent error sources. However, this feature is not
yet implemented in Linux, so the kernel does set up these GHES structures as
independent error sources.
One way to avoid the issue is for the firmware to specify a large polling
interval in the GHES structures. The kernel will still set up a timer for each
structure, but there should be less interference from them. The ACPI spec
seems to allow a polling interval of up to 0xFFFFFFFF ms.
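For scale, that spec maximum works out to roughly 49 days between polls:

```shell
# 0xFFFFFFFF ms converted to whole days:
echo $((0xFFFFFFFF / 1000 / 86400))   # prints 49
```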
Ultimately, I think we'd want the kernel to ignore the GHES structures used
for GHES_ASSIST, and then GHES_ASSIST support can be implemented and used
where appropriate.
I can send a patchset for ignoring these structures. It would be preparation
for a follow-on set that can fully implement the GHES_ASSIST feature. Would
you be willing to test out that first set to see if it resolves the issue?
Thanks,
Yazen
On Thu, 2 Dec 2021, Yazen Ghannam wrote:
> I believe the large number of GHES structures you have is intended to be used
> for the ACPI "GHES_ASSIST" feature. The GHES structures in this case are not
> meant to be used as independent error sources. However, this feature is not
> yet implemented in Linux, so the kernel does set up these GHES structures as
> independent error sources.
Yes, our HEST has "GHES Assist: 1". But it is disappointing that those sources
have the "Polled" type: ACPI allocates eight bits for the type field, and only
12 types are registered so far, so it's not as if they were running out of
space to designate a separate type for this kind of source.
[snip increasing polling interval]
> Ultimately, I think we'd want the kernel to ignore the GHES structures used
> for GHES_ASSIST, and then GHES_ASSIST support can be implemented and used
> where appropriate.
>
> I can send a patchset for ignoring these structures. It would be preparation
> for a follow-on set that can fully implement the GHES_ASSIST feature. Would
> you be willing to test out that first set to see if it resolves the issue?
Sure, please Cc me on the patches.
Alexander