2020-11-06 23:30:00

by Henry Willard

Subject: [PATCH] arm64: kexec: Use smp_send_stop in machine_shutdown

machine_shutdown() is called by kernel_kexec() to shut down
the non-boot CPUs prior to starting the new kernel. The
implementation of machine_shutdown() varies by architecture.
Many make an interprocessor call, such as smp_send_stop(),
to stop the non-boot CPUs. On some architectures each CPU
makes some sort of firmware call to stop itself. On
architectures without the necessary firmware support to stop
the CPU, the CPUs go into a disabled loop, which is not
suitable for supporting kexec. On Arm64 systems that support
PSCI, CPUs can be stopped with a PSCI CPU_OFF call.

Arm64 machine_shutdown() uses the CPU hotplug infrastructure via
smp_shutdown_nonboot_cpus() to stop each CPU. This is relatively
slow, taking a best case of 0.02 to 0.03 seconds per CPU, with the
CPUs stopped sequentially. Depending on how many CPUs are present,
it can take the better part of a second to stop them all. If for
some reason the CPUs are busy at the time of the kexec reboot, it
can take several seconds to shut them all down. Each CPU shuts
itself down by calling PSCI CPU_OFF.

In some applications such as embedded systems, which need a very
fast reboot (less than a second), this may be too slow.

This patch reverts to using smp_send_stop() to signal all
CPUs to stop immediately. Currently smp_send_stop() causes each CPU
to call local_cpu_stop(), which goes into a disabled loop. This patch
modifies local_cpu_stop() to call cpu_die() when kexec_in_progress
is true, so that the CPU calls PSCI CPU_OFF just as in the case of
smp_shutdown_nonboot_cpus(). Using smp_send_stop() instead of
smp_shutdown_nonboot_cpus() reduces the shutdown time for 23 CPUs
from about 0.65 seconds on an idle system to less than 5 ms. On a
busy system smp_shutdown_nonboot_cpus() may take several seconds,
while smp_send_stop() still needs only about 5 ms.

Signed-off-by: Henry Willard <[email protected]>
---
arch/arm64/kernel/process.c | 17 ++++++++++++++---
arch/arm64/kernel/smp.c | 8 +++++++-
2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 4784011cecac..2568452a2417 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -44,6 +44,7 @@
#include <linux/percpu.h>
#include <linux/thread_info.h>
#include <linux/prctl.h>
+#include <linux/kexec.h>

#include <asm/alternative.h>
#include <asm/arch_gicv3.h>
@@ -142,12 +143,22 @@ void arch_cpu_idle_dead(void)
* This must completely disable all secondary CPUs; simply causing those CPUs
* to execute e.g. a RAM-based pin loop is not sufficient. This allows the
* kexec'd kernel to use any and all RAM as it sees fit, without having to
- * avoid any code or data used by any SW CPU pin loop. The CPU hotplug
- * functionality embodied in smpt_shutdown_nonboot_cpus() to achieve this.
+ * avoid any code or data used by any SW CPU pin loop. The target stop function
+ * will call cpu_die() if kexec_in_progress is set.
*/
void machine_shutdown(void)
{
- smp_shutdown_nonboot_cpus(reboot_cpu);
+ unsigned long timeout;
+
+ /*
+ * Don't wait forever, but no longer than a second
+ */
+ timeout = USEC_PER_SEC;
+
+ smp_send_stop();
+ while (num_online_cpus() > 1 && timeout--)
+ udelay(1);
+ return;
}

/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 09c96f57818c..310cdf327d91 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -373,7 +373,9 @@ void cpu_die(void)
unsigned int cpu = smp_processor_id();
const struct cpu_operations *ops = get_cpu_ops(cpu);

- idle_task_exit();
+ /* Skip this if we are about to exit the machine */
+ if (!kexec_in_progress)
+ idle_task_exit();

local_daif_mask();

@@ -847,6 +849,10 @@ static void local_cpu_stop(void)

local_daif_mask();
sdei_mask_local_cpu();
+
+ if (kexec_in_progress)
+ cpu_die();
+
cpu_park_loop();
}

--
1.8.3.1


2020-11-11 18:15:16

by James Morse

Subject: Re: [PATCH] arm64: kexec: Use smp_send_stop in machine_shutdown

Hi Henry,

On 06/11/2020 23:25, Henry Willard wrote:
> machine_shutdown() is called by kernel_kexec() to shutdown
> the non-boot CPUs prior to starting the new kernel. The
> implementation of machine_shutdown() varies by architecture.
> Many make an interprocessor call, such as smp_send_stop(),
> to stop the non-boot CPUs. On some architectures the CPUs make
> some sort of firmware call to stop the CPU. On some architectures
> without the necessary firmware support to stop the CPU, the CPUs
> go into a disabled loop, which is not suitable for supporting
> kexec. On Arm64 systems that support PSCI, CPUs can be stopped
> with a PSCI CPU_OFF call.

All this variation is because we want to get the CPU back in a sane state, as if we'd
just come from cold boot. Without the platform firmware doing its initialisation, the only
way we have of doing this is to run the cpuhp callbacks to take the CPU offline cleanly.


> Arm64 machine_shutdown() uses the CPU hotplug infrastructure via
> smp_shutdown_nonboot_cpus() to stop each CPU. This is relatively
> slow and takes a best case of .02 to .03 seconds per CPU which are
> stopped sequentially.

Hmmm, looks like cpuhp doesn't have a way to run the callbacks in parallel...
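
For reference, the generic helper is shaped roughly like the sketch below
(a paraphrase, not verbatim kernel source), which is why the cost scales
linearly with the number of CPUs:

/*
 * Paraphrased sketch of kernel/cpu.c:smp_shutdown_nonboot_cpus().  Each
 * secondary CPU is run through the full cpuhp teardown one at a time; the
 * loop does not move on until the current CPU has reached CPUHP_OFFLINE.
 */
void smp_shutdown_nonboot_cpus(unsigned int primary_cpu)
{
	unsigned int cpu;

	cpu_maps_update_begin();
	for_each_online_cpu(cpu) {
		if (cpu == primary_cpu)
			continue;

		/*
		 * Synchronously walks the hotplug state machine for 'cpu',
		 * including the stop_machine() step and the wait for the
		 * dying CPU to actually reach PSCI CPU_OFF.
		 */
		if (cpu_down_maps_locked(cpu, CPUHP_OFFLINE))
			pr_err("Failed to offline CPU%u\n", cpu);
	}
	cpu_maps_update_end();
}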


> This can take the better part of a second for
> all the CPUs to be stopped depending on how many CPUs are present.
> If for some reason the CPUs are busy at the time of the kexec reboot,
> it can take several seconds to shut them all down.

Busy doing what?

I assume the problem is CPUs starting work on behalf of user-space, work which is now
pointless but which prevents them from scheduling into the cpuhp work quickly.

Does hoisting kexec's conditional call to freeze_processes() above the #ifdef, so that
user-space threads are no longer schedulable, improve things here?
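
Roughly, the idea would look like the sketch below (paraphrased against
kernel/kexec_core.c with details elided, not a real diff):

int kernel_kexec(void)
{
	int error = 0;

	/* ... existing locking and kexec_image checks ... */

	/*
	 * Hoisted: freeze user-space for the normal kexec reboot path too,
	 * not only for the CONFIG_KEXEC_JUMP/preserve_context case, so busy
	 * user threads can't delay the cpuhp teardown in machine_shutdown().
	 */
	error = freeze_processes();
	if (error)
		goto Unlock;	/* assumed existing error/unlock path */

#ifdef CONFIG_KEXEC_JUMP
	if (kexec_image->preserve_context) {
		/* ... as today, minus its own freeze_processes() call ... */
	} else
#endif
	{
		kexec_in_progress = true;
		kernel_restart_prepare("kexec reboot");
		machine_shutdown();
	}

	/* ... machine_kexec(), thaw/unlock on the paths that return ... */
}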


> Each CPU shuts itself down by calling PSCI CPU_OFF.

> In some applications such as embedded systems, which need a very
> fast reboot (less than a second), this may be too slow.

Where does this requirement come from? Surely kexec is part of a software update, not
regular operation.


> This patch reverts to using smp_send_stop() to signal all
> CPUs to stop immediately. Currently smp_send_stop() causes each cpu
> to call local_cpu_stop(), which goes into a disabled loop. This patch
> modifies local_cpu_stop() to call cpu_die() when kexec_in_progress
> is true, so that the CPU calls PSCI CPU_OFF just as in the case of
> smp_shutdown_nonboot_cpus().

This is appropriate for panic(), as we accept it may fail.

For Kexec(), the CPU must go offline, otherwise we can't overwrite the code it was
running. The arch code can't just call CPU_OFF in any context. See 5.5 'CPU_OFF' of
https://developer.arm.com/documentation/den0022/d

5.5.2 describes what the OS must do first, in particular interrupts must be migrated away
from the CPU calling CPU_OFF. Currently the cpuhp notifiers do this, which, after this
patch, no longer run.

You're going to need some duct-tape here. I recall the proposed
'ARCH_OFFLINE_CPUS_ON_REBOOT', which would help but isn't a complete thing. From the
discussion:
https://lore.kernel.org/lkml/[email protected]/
https://lore.kernel.org/lkml/[email protected]/

Using cpuhp to offline these CPUs is the right thing to do.
If the problem is that it's too slow, can we tackle that instead?


> Using smp_send_stop() instead of
> smp_shutdown_nonboot_cpus() reduces the shutdown time for 23 CPUs
> from about .65 seconds on an idle system to less than 5 msecs. On a
> busy system smp_shutdown_nonboot_cpus() may take several seconds,
> while smp_send_stop() needs only the 5 msecs.

> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 4784011cecac..2568452a2417 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -142,12 +143,22 @@ void arch_cpu_idle_dead(void)
> * This must completely disable all secondary CPUs; simply causing those CPUs
> * to execute e.g. a RAM-based pin loop is not sufficient. This allows the
> * kexec'd kernel to use any and all RAM as it sees fit, without having to
> - * avoid any code or data used by any SW CPU pin loop. The CPU hotplug
> - * functionality embodied in smpt_shutdown_nonboot_cpus() to achieve this.
> + * avoid any code or data used by any SW CPU pin loop. The target stop function
> + * will call cpu_die() if kexec_in_progress is set.
> */
> void machine_shutdown(void)
> {
> - smp_shutdown_nonboot_cpus(reboot_cpu);
> + unsigned long timeout;
> +
> + /*
> + * Don't wait forever, but no longer than a second
> + */

For kexec we must wait for the CPU to exit the current kernel. If it doesn't, we can't
overwrite the current memory image with the kexec payload.

Features like CNP allow CPUs to share TLB entries. If a CPU is left behind in the older
kernel, the code it is executing will be overwritten and its behaviour stops being
predictable. It may start allocating junk TLB entries that CNP allows CPUs in the new
kernel to use, resulting in hard-to-debug crashes.

For kdump we avoid this problem by ensuring the old and new kernels never overlap. The old
kernel doesn't even have the kdump carveout mapped.


> + timeout = USEC_PER_SEC;
> +
> + smp_send_stop();
> + while (num_online_cpus() > 1 && timeout--)
> + udelay(1);
> + return;
> }
>
> /*


Thanks,

James

2020-11-16 22:49:42

by Henry Willard

Subject: Re: [PATCH] arm64: kexec: Use smp_send_stop in machine_shutdown

James,

Thanks for taking the time to review this and the pointers.

On 11/11/20 10:11 AM, James Morse wrote:
> Hi Henry,
>
> On 06/11/2020 23:25, Henry Willard wrote:
>> machine_shutdown() is called by kernel_kexec() to shutdown
>> the non-boot CPUs prior to starting the new kernel. The
>> implementation of machine_shutdown() varies by architecture.
>> Many make an interprocessor call, such as smp_send_stop(),
>> to stop the non-boot CPUs. On some architectures the CPUs make
>> some sort of firmware call to stop the CPU. On some architectures
>> without the necessary firmware support to stop the CPU, the CPUs
>> go into a disabled loop, which is not suitable for supporting
>> kexec. On Arm64 systems that support PSCI, CPUs can be stopped
>> with a PSCI CPU_OFF call.
> All this variation is because we want to to get the CPU back in a sane state, as if we'd
> just come from cold boot. Without the platform firmware doing its initialisation, the only
> way we have of doing this is to run the cpuhp callbacks to take the CPU offline cleanly.
If it is unsafe to call cpu_ops.cpu_die (or cpu_die) on Arm except from
cpuhp, shouldn't something detect that?
>
>
>> Arm64 machine_shutdown() uses the CPU hotplug infrastructure via
>> smp_shutdown_nonboot_cpus() to stop each CPU. This is relatively
>> slow and takes a best case of .02 to .03 seconds per CPU which are
>> stopped sequentially.
> Hmmm, looks like cpuhp doesn't have a way to run the callbacks in parallel...
>
>
>> This can take the better part of a second for
>> all the CPUs to be stopped depending on how many CPUs are present.
>> If for some reason the CPUs are busy at the time of the kexec reboot,
>> it can take several seconds to shut them all down.
> Busy doing what?
Executing user code.
>
> I assume the problem is CPUs starting work on behalf of user-space, which is now
> pointless, which prevents them from scheduling into the cpuhp work quickly.
>
> Does hoisting kexec's conditional call to freeze_processes() above the #ifdef - so that
> user-space threads are no longer schedule-able improve things here?
It might help the worst cases, but even on an idle system it takes a while.
>
>
>> Each CPU shuts itself down by calling PSCI CPU_OFF.
>> In some applications such as embedded systems, which need a very
>> fast reboot (less than a second), this may be too slow.
> Where does this requirement come from? Surely kexec is part of a software update, not
> regular operation.
The requirement comes from the owner of the larger environment of which
this embedded system is a part. So, yes, this is part of software
maintenance of a component during regular operation.
>> This patch reverts to using smp_send_stop() to signal all
>> CPUs to stop immediately. Currently smp_send_stop() causes each cpu
>> to call local_cpu_stop(), which goes into a disabled loop. This patch
>> modifies local_cpu_stop() to call cpu_die() when kexec_in_progress
>> is true, so that the CPU calls PSCI CPU_OFF just as in the case of
>> smp_shutdown_nonboot_cpus().
> This is appropriate for panic(), as we accept it may fail.
>
> For Kexec(), the CPU must go offline, otherwise we can't overwrite the code it was
> running. The arch code can't just call CPU_OFF in any context. See 5.5 CPU_OFF' of
> https://developer.arm.com/documentation/den0022/d
>
> 5.5.2 describes what the OS must do first, in particular interrupts must be migrated away
> from the CPU calling CPU_OFF. Currently the cpuhp notifiers do this, which after this
> patch, no longer run.
I believe this is done by irq_migrate_all_off_this_cpu(), which is
called by take_cpu_down(), scheduled on the processor to be shut down by
stop_machine_cpuslocked(). I missed that. While take_cpu_down() is
running on the target processor, the boot CPU is waiting, which is part
of the reason for the latency.
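Spelled out, the call path as I understand it is roughly (paraphrased):

	smp_shutdown_nonboot_cpus(reboot_cpu)
	  cpu_down_maps_locked(cpu, CPUHP_OFFLINE)       /* one CPU at a time */
	    ...
	    stop_machine_cpuslocked(take_cpu_down, ...)  /* boot CPU waits here */
	      take_cpu_down()                            /* runs on the dying CPU */
	        __cpu_disable()                          /* arm64 */
	          irq_migrate_all_off_this_cpu()         /* the PSCI 5.5.2 step */
	    ...
	  /* later, from the dying CPU's idle loop: */
	  arch_cpu_idle_dead() -> cpu_die() -> ops->cpu_die() -> PSCI CPU_OFF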
>
> You're going to need some duct-tape here, but I recall the proposed
> 'ARCH_OFFLINE_CPUS_ON_REBOOT', which would help, but isn't a complete thing. From the
> discussion:
> https://lore.kernel.org/lkml/[email protected]/
> https://lore.kernel.org/lkml/[email protected]/
>
> using cpuhp to offline these CPUs is the right thing to do.
> If the problem is its too slow, can we tackled that instead?
I think it is relatively slow because the CPUs are shut down
sequentially. Besides ia64, most of the architectures that support
kexec appear to kill them all at once. I will try again. Thanks for the
pointers.
>
>
>> Using smp_send_stop() instead of
>> smp_shutdown_nonboot_cpus() reduces the shutdown time for 23 CPUs
>> from about .65 seconds on an idle system to less than 5 msecs. On a
>> busy system smp_shutdown_nonboot_cpus() may take several seconds,
>> while smp_send_stop() needs only the 5 msecs.
>> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
>> index 4784011cecac..2568452a2417 100644
>> --- a/arch/arm64/kernel/process.c
>> +++ b/arch/arm64/kernel/process.c
>> @@ -142,12 +143,22 @@ void arch_cpu_idle_dead(void)
>> * This must completely disable all secondary CPUs; simply causing those CPUs
>> * to execute e.g. a RAM-based pin loop is not sufficient. This allows the
>> * kexec'd kernel to use any and all RAM as it sees fit, without having to
>> - * avoid any code or data used by any SW CPU pin loop. The CPU hotplug
>> - * functionality embodied in smpt_shutdown_nonboot_cpus() to achieve this.
>> + * avoid any code or data used by any SW CPU pin loop. The target stop function
>> + * will call cpu_die() if kexec_in_progress is set.
>> */
>> void machine_shutdown(void)
>> {
>> - smp_shutdown_nonboot_cpus(reboot_cpu);
>> + unsigned long timeout;
>> +
>> + /*
>> + * Don't wait forever, but no longer than a second
>> + */
> For kexec we must wait for the CPU to exit the current kernel. If it doesn't we can't
> overwrite the current memory image with the kexec payload.
If all the CPUs haven't exited (num_online_cpus() > 1), machine_kexec()
will panic if it isn't a crash kernel. Preferably we want to find out
sooner rather than later if it isn't going to finish.
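The check I mean is roughly this (paraphrasing arch/arm64/kernel/machine_kexec.c
from memory, so the exact form may differ):

	/* in machine_kexec(), before the point of no return: */
	BUG_ON(!in_kexec_crash && (stuck_cpus || (num_online_cpus() > 1)));

where in_kexec_crash and stuck_cpus (from cpus_are_stuck_in_kernel()) are
computed earlier in the function.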
>
> Features like CNP allow CPUs to share TLB entries. If a CPU is left behind in the older
> kernel, the code its is executing will be overwritten and its behaviour stops being
> predictable. It may start allocating junk TLB entries, that CNP allows CPUs in the new
> kernel to use, resulting in hard to debug crashes.
>
> For kdump we avoid this problem by ensuring the old and new kernels never overlap. The old
> kernel doesn't even have the kdump carveout mapped.
>
>
>> + timeout = USEC_PER_SEC;
>> +
>> + smp_send_stop();
>> + while (num_online_cpus() > 1 && timeout--)
>> + udelay(1);
>> + return;
>> }
>>
>> /*
>
> Thanks,
>
> James
Thanks,

Henry

2020-11-19 18:24:28

by James Morse

Subject: Re: [PATCH] arm64: kexec: Use smp_send_stop in machine_shutdown

Hi Henry,

On 16/11/2020 21:11, Henry Willard wrote:
> On 11/11/20 10:11 AM, James Morse wrote:
>> On 06/11/2020 23:25, Henry Willard wrote:
>>> machine_shutdown() is called by kernel_kexec() to shutdown
>>> the non-boot CPUs prior to starting the new kernel. The
>>> implementation of machine_shutdown() varies by architecture.
>>> Many make an interprocessor call, such as smp_send_stop(),
>>> to stop the non-boot CPUs. On some architectures the CPUs make
>>> some sort of firmware call to stop the CPU. On some architectures
>>> without the necessary firmware support to stop the CPU, the CPUs
>>> go into a disabled loop, which is not suitable for supporting
>>> kexec. On Arm64 systems that support PSCI, CPUs can be stopped
>>> with a PSCI CPU_OFF call.
>> All this variation is because we want to to get the CPU back in a sane state, as if we'd
>> just come from cold boot. Without the platform firmware doing its initialisation, the only
>> way we have of doing this is to run the cpuhp callbacks to take the CPU offline cleanly.

> If it is unsafe to call cpu_ops.cpu_die (or cpu_die) on Arm except from cpuhp shouldn't
> something detect that?

It wouldn't be the first undocumented assumption in Linux!


>>> Arm64 machine_shutdown() uses the CPU hotplug infrastructure via
>>> smp_shutdown_nonboot_cpus() to stop each CPU. This is relatively
>>> slow and takes a best case of .02 to .03 seconds per CPU which are
>>> stopped sequentially.

>> Hmmm, looks like cpuhp doesn't have a way to run the callbacks in parallel...
>>
>>
>>> This can take the better part of a second for
>>> all the CPUs to be stopped depending on how many CPUs are present.
>>> If for some reason the CPUs are busy at the time of the kexec reboot,
>>> it can take several seconds to shut them all down.

>> Busy doing what?

> Executing user code

Nice. For EL0 you can always interrupt it, so that shouldn't matter.
I guess the issue is CPUs waiting for an irqsave spinlock that can't be interrupted until
they've finished the work they are doing.


>> I assume the problem is CPUs starting work on behalf of user-space, which is now
>> pointless, which prevents them from scheduling into the cpuhp work quickly.
>>
>> Does hoisting kexec's conditional call to freeze_processes() above the #ifdef - so that
>> user-space threads are no longer schedule-able improve things here?

> It might help the worst cases, but even on an idle system it takes a while.

>>> Each CPU shuts itself down by calling PSCI CPU_OFF.
>>> In some applications such as embedded systems, which need a very
>>> fast reboot (less than a second), this may be too slow.

>> Where does this requirement come from? Surely kexec is part of a software update, not
>> regular operation.

> The requirement comes from the owner of the larger environment of which this embedded
> system is a part.

> So, yes, this is part of software maintenance of a component during
> regular operation.

What does kexec as part of its regular operation? kexec re-writes all of memory! Surely
it's only for software updates, which can't happen by surprise!

(This one-second number has come up before. Why not 2, or 10?)


Pavel had a similar requirement. He was looking at doing the kexec-image re-assembly with
the MMU enabled. This benefits a very large kexec-image on platforms with lots of memory,
but where the CPUs suffer when making non-cacheable accesses. The combined series is here:
https://gitlab.arm.com/linux-arm/linux-jm/-/commits/kexec+mmu/v0/

But this didn't show enough of an improvement on Pavel's platform.


>>> This patch reverts to using smp_send_stop() to signal all
>>> CPUs to stop immediately. Currently smp_send_stop() causes each cpu
>>> to call local_cpu_stop(), which goes into a disabled loop. This patch
>>> modifies local_cpu_stop() to call cpu_die() when kexec_in_progress
>>> is true, so that the CPU calls PSCI CPU_OFF just as in the case of
>>> smp_shutdown_nonboot_cpus().
>> This is appropriate for panic(), as we accept it may fail.
>>
>> For Kexec(), the CPU must go offline, otherwise we can't overwrite the code it was
>> running. The arch code can't just call CPU_OFF in any context. See 5.5 CPU_OFF' of
>> https://developer.arm.com/documentation/den0022/d
>>
>> 5.5.2 describes what the OS must do first, in particular interrupts must be migrated away
>> from the CPU calling CPU_OFF. Currently the cpuhp notifiers do this, which after this
>> patch, no longer run.

> I believe this is done by irq_migrate_all_off_this_cpu(), which is called by
> take_cpu_down() scheduled on the processor to be shutdown by stop_machine_cpuslocked(). I
> missed that. While take_cpu_down() is running on the target processor the boot CPU is
> waiting, which is part of the reason for the latency.

Something that may help, but stops short of touching the cpuhp code:
You may find that some of the cpuhp callbacks for drivers aren't necessary if this
kernel's state is never going to be restored. Drivers can tell this is happening because
kexec calls the device shutdown() callback. (It also triggers the reboot notifiers.)

It's possible that a device's shutdown callback could unregister the cpuhp callbacks and
return the device to a sensible state. Looking at the PMU drivers under drivers/perf,
there are many more users of cpuhp callbacks than implementers of shutdown.

This would unfortunately need checking and changing per-driver.
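
As a purely hypothetical sketch (foo_pmu, foo_pmu_stop() and foo_pmu_hp_state
are made-up names, not any real driver), a shutdown() callback doing this could
look something like:

/*
 * Hypothetical sketch only: quiesce the device and drop its cpuhp instance
 * so the per-CPU teardown callback has nothing left to do when kexec
 * offlines the secondary CPUs.
 */
static void foo_pmu_shutdown(struct platform_device *pdev)
{
	struct foo_pmu *pmu = platform_get_drvdata(pdev);

	/* Silence the counters and their interrupt. */
	foo_pmu_stop(pmu);

	/* Unregister the hotplug callbacks registered at probe time. */
	cpuhp_state_remove_instance_nocalls(foo_pmu_hp_state, &pmu->node);
}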


(Any change like this patch that skips the cpuhp callbacks is likely to expose bugs in
drivers that don't implement shutdown, or that relied on cpuhp for kexec.)


>> You're going to need some duct-tape here, but I recall the proposed
>> 'ARCH_OFFLINE_CPUS_ON_REBOOT', which would help, but isn't a complete thing. From the
>> discussion:
>> https://lore.kernel.org/lkml/[email protected]/
>> https://lore.kernel.org/lkml/[email protected]/
>>
>> using cpuhp to offline these CPUs is the right thing to do.
>> If the problem is its too slow, can we tackled that instead?

> I think it is relatively slow because the CPUs are shutdown sequentially. Besides ia64,
> most of the architectures that support kexec, appear to kill them all at once. I will try
> again. Thanks for the pointers.

Improving cpuhp to be able to bring CPUs online/offline in parallel is the best option,
but it's also the hardest. You get two bites of the cherry with this, as your next kernel
may be able to bring all the secondary cores online in parallel too.


Thanks,

James