Whilst it is a bit quick after v6, a couple of critical issues
were pointed out by Russell, Salil and Rafael + one build issue
had been missed, so it seems sensible to make sure those conducting
testing or further review have access to a fixed version.
v7:
- Fix misplaced config guard that broke bisection.
- Greatly simplify the condition on which we call
acpi_processor_hotadd_init().
- Improve teardown ordering.
Fundamental change v6+: At the level of common ACPI infrastructure, use
the existing hotplug path for arm64 even though what needs to be
done at the architecture specific level is quite different.
An explicit check in arch_register_cpu() for arm64 prevents
this code doing anything if Physical CPU Hotplug is signalled.
This should resolve any concerns about treating virtual CPU
hotplug as if it were physical and potential unwanted side effects
if physical CPU hotplug is added to the ARM architecture in the
future.
v6: Thanks to Rafael for extensive help with the approach + reviews.
Specific changes:
- Do not differentiate wrt code flow between traditional CPU HP
and the new ARM flow. The conditions on performing hotplug actions
do need to be adjusted though to incorporate the slightly different
state transition
Added PRESENT + !ENABLED -> PRESENT + ENABLED
to existing !PRESENT + !ENABLED -> PRESENT + ENABLED
- Enable ACPI_HOTPLUG_CPU on arm64 and drop the earlier patches that
took various code out of the protection of that. Now the paths
- New patch to drop unnecessary _STA check in hotplug code. This
code cannot be entered unless ENABLED + PRESENT are set.
- New patch to unify the flow of already onlined (at time of driver
load) and hotplugged CPUs in acpi/processor_driver.c.
This change is necessary because we can't easily distinguish the
2 cases of deferred vs hotplug calls of register_cpu() on arm64.
It is also a nice simplification.
- Use flags rather than a structure for the extra parameter to
acpi_scan_check_and_detach() - Thanks to Shameer for offline feedback.
Updated version of James' original introduction.
This series adds what looks like cpuhotplug support to arm64 for use in
virtual machines. It does this by moving the cpu_register() calls for
architectures that support ACPI into an arch specific call made from
the ACPI processor driver.
The kubernetes folk really want to be able to add CPUs to an existing VM,
in exactly the same way they do on x86. The use-case is pre-booting guests
with one CPU, then adding the number that were actually needed when the
workload is provisioned.
Wait? Doesn't arm64 support cpuhotplug already!?
In the arm world, cpuhotplug gets used to mean removing the power from a CPU.
The CPU is offline, and remains present. For x86, and ACPI, cpuhotplug
has the additional step of physically removing the CPU, so that it isn't
present anymore.
Arm64 doesn't support this, and can't support it: CPUs are really a slice
of the SoC, and there is not enough information in the existing ACPI tables
to describe which bits of the slice also got removed. Without a reference
machine, adding this support to the spec is a wild goose chase.
Critically: everything described in the firmware tables must remain present.
For a virtual machine this is easy as all the other bits of 'virtual SoC'
are emulated, so they can (and do) remain present when a vCPU is 'removed'.
On a system that supports cpuhotplug the MADT has to describe every possible
CPU at boot. Under KVM, the vGIC needs to know about every possible vCPU before
the guest is started.
With these constraints, virtual-cpuhotplug is really just a hypervisor/firmware
policy about which CPUs can be brought online.
This series adds support for virtual-cpuhotplug as exactly that: firmware
policy. This may even work on a physical machine too; for a guest, the part of
firmware is played by the VMM (typically Qemu).
PSCI support is modified to return 'DENIED' if the CPU can't be brought
online/enabled yet. The CPU object's _STA method's enabled bit is used to
indicate firmware's current disposition. If the CPU has its enabled bit clear,
it will not be registered with sysfs, and attempts to bring it online will
fail. The notifications that _STA has changed its value then work in the same
way as physical hotplug, and firmware can cause the CPU to be registered some
time later, allowing it to be brought online.
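As a rough, illustrative sketch only (the helper name below is made up and
is not taken from any patch in this series), the OS-side view of that
firmware disposition amounts to reading the standard ACPI _STA bits for
the CPU object:

	/*
	 * Illustrative sketch: does firmware currently permit the CPU
	 * object identified by handle to be brought online?
	 */
	static bool cpu_sta_enabled(acpi_handle handle)
	{
		unsigned long long sta;
		acpi_status status;

		status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
		return ACPI_SUCCESS(status) && (sta & ACPI_STA_DEVICE_ENABLED);
	}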
This creates something that looks like cpuhotplug to user-space and the
kernel beyond arm64 architecture specific code, as the sysfs
files appear and disappear, and the udev notifications look the same.
One notable difference is the CPU present mask, which is exposed via sysfs.
Because the CPUs remain present throughout, they can still be seen in that mask.
This value does get used by web browsers to estimate the number of CPUs,
as the CPU online mask changes constantly on mobile phones.
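For illustration, on a guest booted with three possible vCPUs of which only
the boot CPU is currently enabled, the masks might look something like this
(values illustrative only; the 'enabled' file is added by this series):
| $ cat /sys/devices/system/cpu/present
| 0-2
| $ cat /sys/devices/system/cpu/enabled
| 0
| $ cat /sys/devices/system/cpu/online
| 0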
Linux is tolerant of PSCI returning errors, as it has always been allowed to do
that. To avoid confusing OSes that can't tolerate this, we needed an additional
bit in the MADT GICC flags. This series copies ACPI_MADT_ONLINE_CAPABLE, which
appears to be for this purpose, but calls it ACPI_MADT_GICC_CPU_CAPABLE as it
has a different bit position in the GICC.
This code is unconditionally enabled for all ACPI architectures, though for
now only arm64 will have deferred the cpu_register() calls.
If folk want to play along at home, you'll need a copy of Qemu that supports this.
https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2
Replace your '-smp' argument with something like:
| -smp cpus=1,maxcpus=3,cores=3,threads=1,sockets=1
then feed the following to the Qemu monitor:
| (qemu) device_add driver=host-arm-cpu,core-id=1,id=cpu1
| (qemu) device_del cpu1
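Once the device_add has completed, the guest should see the usual udev add
event and the new sysfs entries; the CPU can then be brought online from
inside the guest in the normal way, e.g.
| $ echo 1 > /sys/devices/system/cpu/cpu1/online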
James Morse (7):
ACPI: processor: Register deferred CPUs from acpi_processor_get_info()
ACPI: Add post_eject to struct acpi_scan_handler for cpu hotplug
arm64: acpi: Move get_cpu_for_acpi_id() to a header
irqchip/gic-v3: Don't return errors from gic_acpi_match_gicc()
irqchip/gic-v3: Add support for ACPI's disabled but 'online capable'
CPUs
arm64: document virtual CPU hotplug's expectations
cpumask: Add enabled cpumask for present CPUs that can be brought
online
Jean-Philippe Brucker (1):
arm64: psci: Ignore DENIED CPUs
Jonathan Cameron (8):
ACPI: processor: Simplify initial onlining to use same path for cold
and hotplug
cpu: Do not warn on arch_register_cpu() returning -EPROBE_DEFER
ACPI: processor: Drop duplicated check on _STA (enabled + present)
ACPI: processor: Move checks and availability of acpi_processor
earlier
ACPI: processor: Add acpi_get_processor_handle() helper
ACPI: scan: switch to flags for acpi_scan_check_and_detach()
arm64: arch_register_cpu() variant to check if an ACPI handle is now
available.
arm64: Kconfig: Enable hotplug CPU on arm64 if ACPI_PROCESSOR is
enabled.
.../ABI/testing/sysfs-devices-system-cpu | 6 +
Documentation/arch/arm64/cpu-hotplug.rst | 79 ++++++++++++
Documentation/arch/arm64/index.rst | 1 +
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/acpi.h | 11 ++
arch/arm64/kernel/acpi.c | 16 +++
arch/arm64/kernel/acpi_numa.c | 11 --
arch/arm64/kernel/psci.c | 2 +-
arch/arm64/kernel/smp.c | 56 ++++++++-
drivers/acpi/acpi_processor.c | 113 ++++++++++--------
drivers/acpi/processor_driver.c | 44 ++-----
drivers/acpi/scan.c | 47 ++++++--
drivers/base/cpu.c | 12 +-
drivers/irqchip/irq-gic-v3.c | 32 +++--
include/acpi/acpi_bus.h | 1 +
include/acpi/processor.h | 2 +-
include/linux/acpi.h | 10 +-
include/linux/cpumask.h | 25 ++++
kernel/cpu.c | 3 +
19 files changed, 357 insertions(+), 115 deletions(-)
create mode 100644 Documentation/arch/arm64/cpu-hotplug.rst
--
2.39.2
Separate code paths, combined with a flag set in acpi_processor.c to
indicate that a struct acpi_processor was for a hotplugged CPU, ensured that
per-CPU data was only set up the first time that a CPU was initialized.
This appears to be unnecessary as the paths can be combined by letting
the online logic also handle any CPUs online at the time of driver load.
The motivation for this change, beyond simplification, is that ARM64
virtual CPU HP uses the same code path in acpi_processor.c for hotplug
and coldplug, so had no easy way to set the flag for hotplug only.
Removing this necessity will enable ARM64 vCPU HP to reuse the existing
code paths.
Leave noisy pr_info() in place but update it to not state the CPU
was hotplugged.
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change.
v6: New patch.
RFT: I have very limited test resources for x86 and other
architectures that may be affected by this change.
---
drivers/acpi/acpi_processor.c | 1 -
drivers/acpi/processor_driver.c | 44 ++++++++++-----------------------
include/acpi/processor.h | 2 +-
3 files changed, 14 insertions(+), 33 deletions(-)
diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index 7a0dd35d62c9..7fc924aeeed0 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -216,7 +216,6 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr)
* gets online for the first time.
*/
pr_info("CPU%d has been hot-added\n", pr->id);
- pr->flags.need_hotplug_init = 1;
out:
cpus_write_unlock();
diff --git a/drivers/acpi/processor_driver.c b/drivers/acpi/processor_driver.c
index 67db60eda370..55782eac3ff1 100644
--- a/drivers/acpi/processor_driver.c
+++ b/drivers/acpi/processor_driver.c
@@ -33,7 +33,6 @@ MODULE_AUTHOR("Paul Diefenbaugh");
MODULE_DESCRIPTION("ACPI Processor Driver");
MODULE_LICENSE("GPL");
-static int acpi_processor_start(struct device *dev);
static int acpi_processor_stop(struct device *dev);
static const struct acpi_device_id processor_device_ids[] = {
@@ -47,7 +46,6 @@ static struct device_driver acpi_processor_driver = {
.name = "processor",
.bus = &cpu_subsys,
.acpi_match_table = processor_device_ids,
- .probe = acpi_processor_start,
.remove = acpi_processor_stop,
};
@@ -115,12 +113,10 @@ static int acpi_soft_cpu_online(unsigned int cpu)
* CPU got physically hotplugged and onlined for the first time:
* Initialize missing things.
*/
- if (pr->flags.need_hotplug_init) {
+ if (!pr->flags.previously_online) {
int ret;
- pr_info("Will online and init hotplugged CPU: %d\n",
- pr->id);
- pr->flags.need_hotplug_init = 0;
+ pr_info("Will online and init CPU: %d\n", pr->id);
ret = __acpi_processor_start(device);
WARN(ret, "Failed to start CPU: %d\n", pr->id);
} else {
@@ -167,9 +163,6 @@ static int __acpi_processor_start(struct acpi_device *device)
if (!pr)
return -ENODEV;
- if (pr->flags.need_hotplug_init)
- return 0;
-
result = acpi_cppc_processor_probe(pr);
if (result && !IS_ENABLED(CONFIG_ACPI_CPU_FREQ_PSS))
dev_dbg(&device->dev, "CPPC data invalid or not present\n");
@@ -185,32 +178,21 @@ static int __acpi_processor_start(struct acpi_device *device)
status = acpi_install_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
acpi_processor_notify, device);
- if (ACPI_SUCCESS(status))
- return 0;
+ if (!ACPI_SUCCESS(status)) {
+ result = -ENODEV;
+ goto err_thermal_exit;
+ }
+ pr->flags.previously_online = 1;
- result = -ENODEV;
- acpi_processor_thermal_exit(pr, device);
+ return 0;
+err_thermal_exit:
+ acpi_processor_thermal_exit(pr, device);
err_power_exit:
acpi_processor_power_exit(pr);
return result;
}
-static int acpi_processor_start(struct device *dev)
-{
- struct acpi_device *device = ACPI_COMPANION(dev);
- int ret;
-
- if (!device)
- return -ENODEV;
-
- /* Protect against concurrent CPU hotplug operations */
- cpu_hotplug_disable();
- ret = __acpi_processor_start(device);
- cpu_hotplug_enable();
- return ret;
-}
-
static int acpi_processor_stop(struct device *dev)
{
struct acpi_device *device = ACPI_COMPANION(dev);
@@ -279,9 +261,9 @@ static int __init acpi_processor_driver_init(void)
if (result < 0)
return result;
- result = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
- "acpi/cpu-drv:online",
- acpi_soft_cpu_online, NULL);
+ result = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
+ "acpi/cpu-drv:online",
+ acpi_soft_cpu_online, NULL);
if (result < 0)
goto err;
hp_online = result;
diff --git a/include/acpi/processor.h b/include/acpi/processor.h
index 3f34ebb27525..e6f6074eadbf 100644
--- a/include/acpi/processor.h
+++ b/include/acpi/processor.h
@@ -217,7 +217,7 @@ struct acpi_processor_flags {
u8 has_lpi:1;
u8 power_setup_done:1;
u8 bm_rld_set:1;
- u8 need_hotplug_init:1;
+ u8 previously_online:1;
};
struct acpi_processor {
--
2.39.2
For arm64 the CPU registration cannot complete until the ACPI
interpreter is up and running, so in those cases the arch-specific
arch_register_cpu() will return -EPROBE_DEFER at this stage and the
registration will be attempted later.
Suggested-by: Rafael J. Wysocki <[email protected]>
Acked-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: Fix condition to not print the error message on success (thanks Russell!)
---
drivers/base/cpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 56fba44ba391..7b83e9c87d7c 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -558,7 +558,7 @@ static void __init cpu_dev_register_generic(void)
for_each_present_cpu(i) {
ret = arch_register_cpu(i);
- if (ret)
+ if (ret && ret != -EPROBE_DEFER)
pr_warn("register_cpu %d failed (%d)\n", i, ret);
}
}
--
2.39.2
The ACPI bus scan will only result in acpi_processor_add() being called
if _STA has already been checked and the result is that the
processor is enabled and present. Hence drop this additional check.
Suggested-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change
v6: New patch to drop this unnecessary code. Now I think we only
    need to explicitly read _STA to print a warning in the ARM64
arch_unregister_cpu() path where we want to know if the
present bit has been unset as well.
---
drivers/acpi/acpi_processor.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index 7fc924aeeed0..ba0a6f0ac841 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -186,17 +186,11 @@ static void __init acpi_pcc_cpufreq_init(void) {}
#ifdef CONFIG_ACPI_HOTPLUG_CPU
static int acpi_processor_hotadd_init(struct acpi_processor *pr)
{
- unsigned long long sta;
- acpi_status status;
int ret;
if (invalid_phys_cpuid(pr->phys_id))
return -ENODEV;
- status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
- if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_PRESENT))
- return -ENODEV;
-
cpu_maps_update_begin();
cpus_write_lock();
--
2.39.2
Set up the per_cpu(processors, cpu) entries earlier so that they are
available in arch_register_cpu(), as ARM64 will need access to the
acpi_handle to distinguish between acpi_processor_add() and earlier
registration attempts (which will fail because _STA cannot yet be
checked).
Reorder the remove flow to clear this per_cpu() after
arch_unregister_cpu() has completed, allowing it to be used in
there as well.
Note that on x86 for the CPU hotplug case, the pr->id prior to
acpi_map_cpu() may be invalid. Thus the per_cpu() structures
must be initialized after that call or after checking the ID
is valid (not hotplug path).
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: Swap order with acpi_unmap_cpu() in acpi_processor_remove()
to keep it in reverse order of the setup path. (thanks Salil)
Fix an issue with placement of CONFIG_ACPI_HOTPLUG_CPU guards.
v6: As per discussion in the v5 thread, don't use cpu->dev and
    make this data available earlier by moving the assignment and checks
    into acpi_processor_get_info().
---
drivers/acpi/acpi_processor.c | 78 +++++++++++++++++++++--------------
1 file changed, 46 insertions(+), 32 deletions(-)
diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index ba0a6f0ac841..ac7ddb30f10e 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -183,8 +183,36 @@ static void __init acpi_pcc_cpufreq_init(void) {}
#endif /* CONFIG_X86 */
/* Initialization */
+static DEFINE_PER_CPU(void *, processor_device_array);
+
+static void acpi_processor_set_per_cpu(struct acpi_processor *pr,
+ struct acpi_device *device)
+{
+ BUG_ON(pr->id >= nr_cpu_ids);
+ /*
+ * Buggy BIOS check.
+ * ACPI id of processors can be reported wrongly by the BIOS.
+ * Don't trust it blindly
+ */
+ if (per_cpu(processor_device_array, pr->id) != NULL &&
+ per_cpu(processor_device_array, pr->id) != device) {
+ dev_warn(&device->dev,
+ "BIOS reported wrong ACPI id %d for the processor\n",
+ pr->id);
+ /* Give up, but do not abort the namespace scan. */
+ return;
+ }
+ /*
+ * processor_device_array is not cleared on errors to allow buggy BIOS
+ * checks.
+ */
+ per_cpu(processor_device_array, pr->id) = device;
+ per_cpu(processors, pr->id) = pr;
+}
+
#ifdef CONFIG_ACPI_HOTPLUG_CPU
-static int acpi_processor_hotadd_init(struct acpi_processor *pr)
+static int acpi_processor_hotadd_init(struct acpi_processor *pr,
+ struct acpi_device *device)
{
int ret;
@@ -198,6 +226,8 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr)
if (ret)
goto out;
+ acpi_processor_set_per_cpu(pr, device);
+
ret = arch_register_cpu(pr->id);
if (ret) {
acpi_unmap_cpu(pr->id);
@@ -217,7 +247,8 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr)
return ret;
}
#else
-static inline int acpi_processor_hotadd_init(struct acpi_processor *pr)
+static inline int acpi_processor_hotadd_init(struct acpi_processor *pr,
+ struct acpi_device *device)
{
return -ENODEV;
}
@@ -232,6 +263,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
acpi_status status = AE_OK;
static int cpu0_initialized;
unsigned long long value;
+ int ret;
acpi_processor_errata();
@@ -316,10 +348,12 @@ static int acpi_processor_get_info(struct acpi_device *device)
* because cpuid <-> apicid mapping is persistent now.
*/
if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
- int ret = acpi_processor_hotadd_init(pr);
+ ret = acpi_processor_hotadd_init(pr, device);
if (ret)
- return ret;
+ goto err;
+ } else {
+ acpi_processor_set_per_cpu(pr, device);
}
/*
@@ -357,6 +391,10 @@ static int acpi_processor_get_info(struct acpi_device *device)
arch_fix_phys_package_id(pr->id, value);
return 0;
+
+err:
+ per_cpu(processors, pr->id) = NULL;
+ return ret;
}
/*
@@ -365,8 +403,6 @@ static int acpi_processor_get_info(struct acpi_device *device)
* (cpu_data(cpu)) values, like CPU feature flags, family, model, etc.
* Such things have to be put in and set up by the processor driver's .probe().
*/
-static DEFINE_PER_CPU(void *, processor_device_array);
-
static int acpi_processor_add(struct acpi_device *device,
const struct acpi_device_id *id)
{
@@ -395,28 +431,6 @@ static int acpi_processor_add(struct acpi_device *device,
if (result) /* Processor is not physically present or unavailable */
return 0;
- BUG_ON(pr->id >= nr_cpu_ids);
-
- /*
- * Buggy BIOS check.
- * ACPI id of processors can be reported wrongly by the BIOS.
- * Don't trust it blindly
- */
- if (per_cpu(processor_device_array, pr->id) != NULL &&
- per_cpu(processor_device_array, pr->id) != device) {
- dev_warn(&device->dev,
- "BIOS reported wrong ACPI id %d for the processor\n",
- pr->id);
- /* Give up, but do not abort the namespace scan. */
- goto err;
- }
- /*
- * processor_device_array is not cleared on errors to allow buggy BIOS
- * checks.
- */
- per_cpu(processor_device_array, pr->id) = device;
- per_cpu(processors, pr->id) = pr;
-
dev = get_cpu_device(pr->id);
if (!dev) {
result = -ENODEV;
@@ -469,10 +483,6 @@ static void acpi_processor_remove(struct acpi_device *device)
device_release_driver(pr->dev);
acpi_unbind_one(pr->dev);
- /* Clean up. */
- per_cpu(processor_device_array, pr->id) = NULL;
- per_cpu(processors, pr->id) = NULL;
-
cpu_maps_update_begin();
cpus_write_lock();
@@ -480,6 +490,10 @@ static void acpi_processor_remove(struct acpi_device *device)
arch_unregister_cpu(pr->id);
acpi_unmap_cpu(pr->id);
+ /* Clean up. */
+ per_cpu(processor_device_array, pr->id) = NULL;
+ per_cpu(processors, pr->id) = NULL;
+
cpus_write_unlock();
cpu_maps_update_done();
--
2.39.2
If CONFIG_ACPI_PROCESSOR is enabled, provide a helper to retrieve the
acpi_handle for a given CPU, allowing access to methods in the DSDT.
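A minimal usage sketch (illustrative only, mirroring how a deferring
architecture might use the helper; this is not code from this patch):

	acpi_handle handle = acpi_get_processor_handle(cpu);

	if (!handle)
		return -EPROBE_DEFER;	/* ACPI has not described this CPU yet */

	/* handle can now be used to evaluate methods on the CPU object */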
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change
v6: New patch
---
drivers/acpi/acpi_processor.c | 10 ++++++++++
include/linux/acpi.h | 7 +++++++
2 files changed, 17 insertions(+)
diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index ac7ddb30f10e..127ae8dcb787 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -35,6 +35,16 @@ EXPORT_PER_CPU_SYMBOL(processors);
struct acpi_processor_errata errata __read_mostly;
EXPORT_SYMBOL_GPL(errata);
+acpi_handle acpi_get_processor_handle(int cpu)
+{
+ acpi_handle handle = NULL;
+	struct acpi_processor *pr = per_cpu(processors, cpu);
+
+ if (pr)
+ handle = pr->handle;
+
+ return handle;
+}
static int acpi_processor_errata_piix4(struct pci_dev *dev)
{
u8 value1 = 0;
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 34829f2c517a..9844a3f9c4e5 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -309,6 +309,8 @@ int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
int acpi_unmap_cpu(int cpu);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
+acpi_handle acpi_get_processor_handle(int cpu);
+
#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr);
#endif
@@ -1077,6 +1079,11 @@ static inline bool acpi_sleep_state_supported(u8 sleep_state)
return false;
}
+static inline acpi_handle acpi_get_processor_handle(int cpu)
+{
+ return NULL;
+}
+
#endif /* !CONFIG_ACPI */
extern void arch_post_acpi_subsys_init(void);
--
2.39.2
Precursor patch adds the ability to pass a uintptr_t of flags into
acpi_scan_check_and_detach() so that additional flags can be
added to indicate whether to defer portions of the eject flow.
The new flag follows in the next patch.
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change
v6: Based on internal feedback switch to less invasive change
to using flags rather than a struct.
---
drivers/acpi/scan.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index d1464324de95..1ec9677e6c2d 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -244,13 +244,16 @@ static int acpi_scan_try_to_offline(struct acpi_device *device)
return 0;
}
-static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
+#define ACPI_SCAN_CHECK_FLAG_STATUS BIT(0)
+
+static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
{
struct acpi_scan_handler *handler = adev->handler;
+ uintptr_t flags = (uintptr_t)p;
- acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, check);
+ acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, p);
- if (check) {
+ if (flags & ACPI_SCAN_CHECK_FLAG_STATUS) {
acpi_bus_get_status(adev);
/*
* Skip devices that are still there and take the enabled
@@ -288,7 +291,9 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
static void acpi_scan_check_subtree(struct acpi_device *adev)
{
- acpi_scan_check_and_detach(adev, (void *)true);
+ uintptr_t flags = ACPI_SCAN_CHECK_FLAG_STATUS;
+
+ acpi_scan_check_and_detach(adev, (void *)flags);
}
static int acpi_scan_hot_remove(struct acpi_device *device)
@@ -2601,7 +2606,9 @@ EXPORT_SYMBOL(acpi_bus_scan);
*/
void acpi_bus_trim(struct acpi_device *adev)
{
- acpi_scan_check_and_detach(adev, NULL);
+ uintptr_t flags = 0;
+
+ acpi_scan_check_and_detach(adev, (void *)flags);
}
EXPORT_SYMBOL_GPL(acpi_bus_trim);
--
2.39.2
From: James Morse <[email protected]>
struct acpi_scan_handler has a detach callback that is used to remove
a driver when a bus is changed. When interacting with an eject-request,
the detach callback is called before _EJ0.
This means the ACPI processor driver can't use _STA to determine if a
CPU has been made not-present, or some of the other _STA bits have been
changed. acpi_processor_remove() needs to know the value of _STA after
_EJ0 has been called.
Add a post_eject callback to struct acpi_scan_handler. This is called
after acpi_scan_hot_remove() has successfully called _EJ0. Because
acpi_scan_check_and_detach() also clears the handler pointer,
it needs to be told if the caller will go on to call
acpi_bus_post_eject(), so that acpi_device_clear_enumerated()
and clearing the handler pointer can be deferred.
An extra flag is added to the flags field introduced in the previous
patch to achieve this.
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Tested-by: Miguel Luis <[email protected]>
Tested-by: Vishnu Pajjuri <[email protected]>
Tested-by: Jianyong Wu <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7:
- No change.
v6:
- Switch to flags.
Russell, you hadn't signed off on this when posting last time.
Do you want to insert a suitable tag now?
v5:
- Rebase to take into account the changes to scan handling in the
meantime.
---
drivers/acpi/acpi_processor.c | 4 ++--
drivers/acpi/scan.c | 30 +++++++++++++++++++++++++++---
include/acpi/acpi_bus.h | 1 +
3 files changed, 30 insertions(+), 5 deletions(-)
diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index 4e65011e706c..beb1761db579 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -471,7 +471,7 @@ static int acpi_processor_add(struct acpi_device *device,
#ifdef CONFIG_ACPI_HOTPLUG_CPU
/* Removal */
-static void acpi_processor_remove(struct acpi_device *device)
+static void acpi_processor_post_eject(struct acpi_device *device)
{
struct acpi_processor *pr;
@@ -639,7 +639,7 @@ static struct acpi_scan_handler processor_handler = {
.ids = processor_device_ids,
.attach = acpi_processor_add,
#ifdef CONFIG_ACPI_HOTPLUG_CPU
- .detach = acpi_processor_remove,
+ .post_eject = acpi_processor_post_eject,
#endif
.hotplug = {
.enabled = true,
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 1ec9677e6c2d..3ec54624664a 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -245,6 +245,7 @@ static int acpi_scan_try_to_offline(struct acpi_device *device)
}
#define ACPI_SCAN_CHECK_FLAG_STATUS BIT(0)
+#define ACPI_SCAN_CHECK_FLAG_EJECT BIT(1)
static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
{
@@ -273,8 +274,6 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
if (handler) {
if (handler->detach)
handler->detach(adev);
-
- adev->handler = NULL;
} else {
device_release_driver(&adev->dev);
}
@@ -284,6 +283,28 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
*/
acpi_device_set_power(adev, ACPI_STATE_D3_COLD);
adev->flags.initialized = false;
+
+ /* For eject this is deferred to acpi_bus_post_eject() */
+ if (!(flags & ACPI_SCAN_CHECK_FLAG_EJECT)) {
+ adev->handler = NULL;
+ acpi_device_clear_enumerated(adev);
+ }
+ return 0;
+}
+
+static int acpi_bus_post_eject(struct acpi_device *adev, void *not_used)
+{
+ struct acpi_scan_handler *handler = adev->handler;
+
+ acpi_dev_for_each_child_reverse(adev, acpi_bus_post_eject, NULL);
+
+ if (handler) {
+ if (handler->post_eject)
+ handler->post_eject(adev);
+
+ adev->handler = NULL;
+ }
+
acpi_device_clear_enumerated(adev);
return 0;
@@ -301,6 +322,7 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
acpi_handle handle = device->handle;
unsigned long long sta;
acpi_status status;
+ uintptr_t flags = ACPI_SCAN_CHECK_FLAG_EJECT;
if (device->handler && device->handler->hotplug.demand_offline) {
if (!acpi_scan_is_offline(device, true))
@@ -313,7 +335,7 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
acpi_handle_debug(handle, "Ejecting\n");
- acpi_bus_trim(device);
+ acpi_scan_check_and_detach(device, (void *)flags);
acpi_evaluate_lck(handle, 0);
/*
@@ -336,6 +358,8 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
} else if (sta & ACPI_STA_DEVICE_ENABLED) {
acpi_handle_warn(handle,
"Eject incomplete - status 0x%llx\n", sta);
+ } else {
+ acpi_bus_post_eject(device, NULL);
}
return 0;
diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
index e7796f373d0d..51a4b936f19e 100644
--- a/include/acpi/acpi_bus.h
+++ b/include/acpi/acpi_bus.h
@@ -129,6 +129,7 @@ struct acpi_scan_handler {
bool (*match)(const char *idstr, const struct acpi_device_id **matchid);
int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
void (*detach)(struct acpi_device *dev);
+ void (*post_eject)(struct acpi_device *dev);
void (*bind)(struct device *phys_dev);
void (*unbind)(struct device *phys_dev);
struct acpi_hotplug_profile hotplug;
--
2.39.2
From: James Morse <[email protected]>
ACPI identifies CPUs by UID. get_cpu_for_acpi_id() maps the ACPI UID
to the Linux CPU number.
The helper to retrieve this mapping is only available in arm64's NUMA
code.
Move it to live next to get_acpi_id_for_cpu().
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Tested-by: Miguel Luis <[email protected]>
Tested-by: Vishnu Pajjuri <[email protected]>
Tested-by: Jianyong Wu <[email protected]>
Signed-off-by: Russell King (Oracle) <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change
---
arch/arm64/include/asm/acpi.h | 11 +++++++++++
arch/arm64/kernel/acpi_numa.c | 11 -----------
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
index 6792a1f83f2a..bc9a6656fc0c 100644
--- a/arch/arm64/include/asm/acpi.h
+++ b/arch/arm64/include/asm/acpi.h
@@ -119,6 +119,17 @@ static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
return acpi_cpu_get_madt_gicc(cpu)->uid;
}
+static inline int get_cpu_for_acpi_id(u32 uid)
+{
+ int cpu;
+
+ for (cpu = 0; cpu < nr_cpu_ids; cpu++)
+ if (uid == get_acpi_id_for_cpu(cpu))
+ return cpu;
+
+ return -EINVAL;
+}
+
static inline void arch_fix_phys_package_id(int num, u32 slot) { }
void __init acpi_init_cpus(void);
int apei_claim_sea(struct pt_regs *regs);
diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
index e51535a5f939..0c036a9a3c33 100644
--- a/arch/arm64/kernel/acpi_numa.c
+++ b/arch/arm64/kernel/acpi_numa.c
@@ -34,17 +34,6 @@ int __init acpi_numa_get_nid(unsigned int cpu)
return acpi_early_node_map[cpu];
}
-static inline int get_cpu_for_acpi_id(u32 uid)
-{
- int cpu;
-
- for (cpu = 0; cpu < nr_cpu_ids; cpu++)
- if (uid == get_acpi_id_for_cpu(cpu))
- return cpu;
-
- return -EINVAL;
-}
-
static int __init acpi_parse_gicc_pxm(union acpi_subtable_headers *header,
const unsigned long end)
{
--
2.39.2
From: James Morse <[email protected]>
gic_acpi_match_gicc() is only called via gic_acpi_count_gicr_regions().
It should only count the number of enabled redistributors, but it
also tries to sanity check the GICC entry, currently returning an
error if the Enabled bit is set, but the gicr_base_address is zero.
Adding support for the online-capable bit to the sanity check will
complicate it, for no benefit. The existing check implicitly depends on
gic_acpi_count_gicr_regions() having previously failed to find any GICR regions
(as it is valid to have gicr_base_address of zero if the redistributors
are described via a GICR entry).
Instead of complicating the check, remove it. Failures that happen at
this point cause the irqchip not to register, meaning no irqs can be
requested. The kernel grinds to a panic() pretty quickly.
Without the check, MADT tables that exhibit this problem are still
caught by gic_populate_rdist(), which helpfully also prints what went
wrong:
| CPU4: mpidr 100 has no re-distributor!
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Signed-off-by: Russell King (Oracle) <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change
---
drivers/irqchip/irq-gic-v3.c | 13 ++-----------
1 file changed, 2 insertions(+), 11 deletions(-)
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 6fb276504bcc..10af15f93d4d 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -2415,19 +2415,10 @@ static int __init gic_acpi_match_gicc(union acpi_subtable_headers *header,
* If GICC is enabled and has valid gicr base address, then it means
* GICR base is presented via GICC
*/
- if (acpi_gicc_is_usable(gicc) && gicc->gicr_base_address) {
+ if (acpi_gicc_is_usable(gicc) && gicc->gicr_base_address)
acpi_data.enabled_rdists++;
- return 0;
- }
- /*
- * It's perfectly valid firmware can pass disabled GICC entry, driver
- * should not treat as errors, skip the entry instead of probe fail.
- */
- if (!acpi_gicc_is_usable(gicc))
- return 0;
-
- return -ENODEV;
+ return 0;
}
static int __init gic_acpi_count_gicr_regions(void)
--
2.39.2
The ARM64 architecture does not support physical CPU HP today.
To avoid any possibility of a bug if such support is defined in the
future, check for the physical CPU HP case (CPU not present) and
return an error on any such attempt.
On ARM64 virtual CPU Hotplug relies on the status value that can be
queried via the AML method _STA for the CPU object.
There are two conditions under which the CPU can be registered:
1) ACPI is disabled.
2) ACPI is enabled and the acpi_handle is available, in which case
_STA has evaluated to show the CPU is both enabled and present.
(Note that in the absence of the _STA method CPUs are always in this
state).
If neither of these conditions is met, the CPU is not yet ready
to be used and -EPROBE_DEFER is returned.
Registration succeeds in the early attempt if we are booting with DT
(which has no concept yet of vCPU HP); otherwise it succeeds for
already enabled CPUs when the ACPI processor driver attaches to them.
Finally, it may succeed via the CPU hotplug code indicating that
the CPU is now enabled.
For ACPI with CONFIG_ACPI_PROCESSOR, the only path to
arch_register_cpu() with that handle set is via
acpi_processor_hotadd_init(), which is only called from an ACPI bus
scan in which _STA has already been queried, so there is no need to
repeat the check here. Add a comment to remind us of this in the future.
Suggested-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change.
v6: Add protection against Physical CPU HP to the arch specific code
    and don't actually check _STA
Tested on arm64 with ACPI + DT build and DT only builds, booting
with ACPI and DT as appropriate.
---
arch/arm64/kernel/smp.c | 53 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 53 insertions(+)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dc0e0b3ec2d4..ccb6ad347df9 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -504,6 +504,59 @@ static int __init smp_cpu_setup(int cpu)
static bool bootcpu_valid __initdata;
static unsigned int cpu_count = 1;
+int arch_register_cpu(int cpu)
+{
+ acpi_handle acpi_handle = acpi_get_processor_handle(cpu);
+ struct cpu *c = &per_cpu(cpu_devices, cpu);
+
+ if (!acpi_disabled && !acpi_handle &&
+ IS_ENABLED(CONFIG_ACPI_HOTPLUG_CPU))
+ return -EPROBE_DEFER;
+
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+ /* For now block anything that looks like physical CPU Hotplug */
+ if (invalid_logical_cpuid(cpu) || !cpu_present(cpu)) {
+ pr_err_once("Changing CPU present bit is not supported\n");
+ return -ENODEV;
+ }
+#endif
+
+ /*
+ * Availability of the acpi handle is sufficient to establish
+	 * that _STA has already been checked. No need to recheck here.
+ */
+ c->hotpluggable = arch_cpu_is_hotpluggable(cpu);
+
+ return register_cpu(c, cpu);
+}
+
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+void arch_unregister_cpu(int cpu)
+{
+ acpi_handle acpi_handle = acpi_get_processor_handle(cpu);
+ struct cpu *c = &per_cpu(cpu_devices, cpu);
+ acpi_status status;
+ unsigned long long sta;
+
+ if (!acpi_handle) {
+ pr_err_once("Removing a CPU without associated ACPI handle\n");
+ return;
+ }
+
+ status = acpi_evaluate_integer(acpi_handle, "_STA", NULL, &sta);
+ if (ACPI_FAILURE(status))
+ return;
+
+ /* For now do not allow anything that looks like physical CPU HP */
+ if (cpu_present(cpu) && !(sta & ACPI_STA_DEVICE_PRESENT)) {
+ pr_err_once("Changing CPU present bit is not supported\n");
+ return;
+ }
+
+ unregister_cpu(c);
+}
+#endif /* CONFIG_ACPI_HOTPLUG_CPU */
+
#ifdef CONFIG_ACPI
static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];
--
2.39.2
From: Jean-Philippe Brucker <[email protected]>
When a CPU is marked as disabled, but online capable in the MADT, PSCI
applies some firmware policy to control when it can be brought online.
PSCI returns DENIED to a CPU_ON request if this is not currently
permitted. The OS can learn the current policy from the _STA enabled bit.
Handle the PSCI DENIED return code gracefully instead of printing an
error.
See https://developer.arm.com/documentation/den0022/f/?lang=en page 58.
Signed-off-by: Jean-Philippe Brucker <[email protected]>
[ morse: Rewrote commit message ]
Signed-off-by: James Morse <[email protected]>
Tested-by: Miguel Luis <[email protected]>
Tested-by: Vishnu Pajjuri <[email protected]>
Tested-by: Jianyong Wu <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Signed-off-by: Russell King (Oracle) <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change
---
arch/arm64/kernel/psci.c | 2 +-
arch/arm64/kernel/smp.c | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index 29a8e444db83..fabd732d0a2d 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -40,7 +40,7 @@ static int cpu_psci_cpu_boot(unsigned int cpu)
{
phys_addr_t pa_secondary_entry = __pa_symbol(secondary_entry);
int err = psci_ops.cpu_on(cpu_logical_map(cpu), pa_secondary_entry);
- if (err)
+ if (err && err != -EPERM)
pr_err("failed to boot CPU%d (%d)\n", cpu, err);
return err;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 4ced34f62dab..dc0e0b3ec2d4 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -132,7 +132,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
/* Now bring the CPU into our world */
ret = boot_secondary(cpu, idle);
if (ret) {
- pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
+ if (ret != -EPERM)
+ pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
return ret;
}
--
2.39.2
In order to move arch_register_cpu() to be called via the same path
for initially present CPUs described by ACPI and hotplugged CPUs,
ACPI_HOTPLUG_CPU needs to be enabled.
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change.
---
arch/arm64/Kconfig | 1 +
arch/arm64/kernel/acpi.c | 16 ++++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b11c98b3e84..fed7d0d54179 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -5,6 +5,7 @@ config ARM64
select ACPI_CCA_REQUIRED if ACPI
select ACPI_GENERIC_GSI if ACPI
select ACPI_GTDT if ACPI
+ select ACPI_HOTPLUG_CPU if ACPI_PROCESSOR
select ACPI_IORT if ACPI
select ACPI_REDUCED_HARDWARE_ONLY if ACPI
select ACPI_MCFG if (ACPI && PCI)
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index dba8fcec7f33..a74e80d58df3 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -29,6 +29,7 @@
#include <linux/pgtable.h>
#include <acpi/ghes.h>
+#include <acpi/processor.h>
#include <asm/cputype.h>
#include <asm/cpu_ops.h>
#include <asm/daifflags.h>
@@ -413,6 +414,21 @@ void arch_reserve_mem_area(acpi_physical_address addr, size_t size)
memblock_mark_nomap(addr, size);
}
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
+ int *pcpu)
+{
+ return 0;
+}
+EXPORT_SYMBOL(acpi_map_cpu); /* check why */
+
+int acpi_unmap_cpu(int cpu)
+{
+ return 0;
+}
+EXPORT_SYMBOL(acpi_unmap_cpu);
+#endif /* CONFIG_ACPI_HOTPLUG_CPU */
+
#ifdef CONFIG_ACPI_FFH
/*
* Implements ARM64 specific callbacks to support ACPI FFH Operation Region as
--
2.39.2
From: James Morse <[email protected]>
Add a description of physical and virtual CPU hotplug, explain the
differences and elaborate on what is required in ACPI for a working
virtual hotplug system.
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Signed-off-by: Russell King (Oracle) <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change.
---
Documentation/arch/arm64/cpu-hotplug.rst | 79 ++++++++++++++++++++++++
Documentation/arch/arm64/index.rst | 1 +
2 files changed, 80 insertions(+)
diff --git a/Documentation/arch/arm64/cpu-hotplug.rst b/Documentation/arch/arm64/cpu-hotplug.rst
new file mode 100644
index 000000000000..76ba8d932c72
--- /dev/null
+++ b/Documentation/arch/arm64/cpu-hotplug.rst
@@ -0,0 +1,79 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. _cpuhp_index:
+
+====================
+CPU Hotplug and ACPI
+====================
+
+CPU hotplug in the arm64 world is commonly used to describe the kernel taking
+CPUs online/offline using PSCI. This document is about ACPI firmware allowing
+CPUs that were not available during boot to be added to the system later.
+
+``possible`` and ``present`` refer to the state of the CPU as seen by Linux.
+
+
+CPU Hotplug on physical systems - CPUs not present at boot
+----------------------------------------------------------
+
+Physical systems need to mark a CPU that is ``possible`` but not ``present`` as
+being ``present``. An example would be a dual socket machine, where the package
+in one of the sockets can be replaced while the system is running.
+
+This is not supported.
+
+In the arm64 world CPUs are not a single device but a slice of the system.
+There are no systems that support the physical addition (or removal) of CPUs
+while the system is running, and ACPI is not able to sufficiently describe
+them.
+
+e.g. New CPUs come with new caches, but the platform's cache topology is
+described in a static table, the PPTT. How caches are shared between CPUs is
+not discoverable, and must be described by firmware.
+
+e.g. The GIC redistributor for each CPU must be accessed by the driver during
+boot to discover the system wide supported features. ACPI's MADT GICC
+structures can describe a redistributor associated with a disabled CPU, but
+can't describe whether the redistributor is accessible, only that it is not
+'always on'.
+
+arm64's ACPI tables assume that everything described is ``present``.
+
+
+CPU Hotplug on virtual systems - CPUs not enabled at boot
+---------------------------------------------------------
+
+Virtual systems have the advantage that all the properties the system will
+ever have can be described at boot. There are no power-domain considerations
+as such devices are emulated.
+
+CPU Hotplug on virtual systems is supported. It is distinct from physical
+CPU Hotplug as all resources are described as ``present``, but CPUs may be
+marked as disabled by firmware. Only the CPU's online/offline behaviour is
+influenced by firmware. An example is where a virtual machine boots with a
+single CPU, and additional CPUs are added once a cloud orchestrator deploys
+the workload.
+
+For a virtual machine, the VMM (e.g. Qemu) plays the part of firmware.
+
+Virtual hotplug is implemented as a firmware policy affecting which CPUs can be
+brought online. Firmware can enforce its policy via PSCI's return codes. e.g.
+``DENIED``.
+
+The ACPI tables must describe all the resources of the virtual machine. CPUs
+that firmware wishes to disable either from boot (or later) should not be
+``enabled`` in the MADT GICC structures, but should have the ``online capable``
+bit set, to indicate they can be enabled later. The boot CPU must be marked as
+``enabled``. The 'always on' GICR structure must be used to describe the
+redistributors.
+
+CPUs described as ``online capable`` but not ``enabled`` can be set to enabled
+by the DSDT's Processor object's _STA method. On virtual systems the _STA method
+must always report the CPU as ``present``. Changes to the firmware policy can
+be notified to the OS via device-check or eject-request.
+
+CPUs described as ``enabled`` in the static table should not have their _STA
+modified dynamically by firmware. Soft-restart features such as kexec will
+re-read the static properties of the system from these static tables, and
+may malfunction if these no longer describe the running system. Linux will
+re-discover the dynamic properties of the system from the _STA method later
+during boot.
diff --git a/Documentation/arch/arm64/index.rst b/Documentation/arch/arm64/index.rst
index d08e924204bf..78544de0a8a9 100644
--- a/Documentation/arch/arm64/index.rst
+++ b/Documentation/arch/arm64/index.rst
@@ -13,6 +13,7 @@ ARM64 Architecture
asymmetric-32bit
booting
cpu-feature-registers
+ cpu-hotplug
elf_hwcaps
hugetlbpage
kdump
--
2.39.2
From: James Morse <[email protected]>
The 'offline' file in sysfs shows all offline CPUs, including those
that aren't present. User-space is expected to remove not-present CPUs
from this list to learn which CPUs could be brought online.
CPUs can be present but not-enabled. These CPUs can't be brought online
until the firmware policy changes, which comes with an ACPI notification
that will register the CPUs.
With only the offline and present files, user-space is unable to
determine which CPUs it can try to bring online. Add a new CPU mask
that shows this based on all the registered CPUs.
Signed-off-by: James Morse <[email protected]>
Tested-by: Miguel Luis <[email protected]>
Tested-by: Vishnu Pajjuri <[email protected]>
Tested-by: Jianyong Wu <[email protected]>
Acked-by: Thomas Gleixner <[email protected]>
Signed-off-by: Russell King (Oracle) <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No change
---
.../ABI/testing/sysfs-devices-system-cpu | 6 +++++
drivers/base/cpu.c | 10 ++++++++
include/linux/cpumask.h | 25 +++++++++++++++++++
kernel/cpu.c | 3 +++
4 files changed, 44 insertions(+)
diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 710d47be11e0..808efb5b860a 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -694,3 +694,9 @@ Description:
(RO) indicates whether or not the kernel directly supports
modifying the crash elfcorehdr for CPU hot un/plug and/or
on/offline changes.
+
+What: /sys/devices/system/cpu/enabled
+Date: Nov 2022
+Contact: Linux kernel mailing list <[email protected]>
+Description:
+ (RO) the list of CPUs that can be brought online.
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 7b83e9c87d7c..353ee39a5cbe 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -95,6 +95,7 @@ void unregister_cpu(struct cpu *cpu)
{
int logical_cpu = cpu->dev.id;
+ set_cpu_enabled(logical_cpu, false);
unregister_cpu_under_node(logical_cpu, cpu_to_node(logical_cpu));
device_unregister(&cpu->dev);
@@ -273,6 +274,13 @@ static ssize_t print_cpus_offline(struct device *dev,
}
static DEVICE_ATTR(offline, 0444, print_cpus_offline, NULL);
+static ssize_t print_cpus_enabled(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(cpu_enabled_mask));
+}
+static DEVICE_ATTR(enabled, 0444, print_cpus_enabled, NULL);
+
static ssize_t print_cpus_isolated(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -413,6 +421,7 @@ int register_cpu(struct cpu *cpu, int num)
register_cpu_under_node(num, cpu_to_node(num));
dev_pm_qos_expose_latency_limit(&cpu->dev,
PM_QOS_RESUME_LATENCY_NO_CONSTRAINT);
+ set_cpu_enabled(num, true);
return 0;
}
@@ -494,6 +503,7 @@ static struct attribute *cpu_root_attrs[] = {
&cpu_attrs[2].attr.attr,
&dev_attr_kernel_max.attr,
&dev_attr_offline.attr,
+ &dev_attr_enabled.attr,
&dev_attr_isolated.attr,
#ifdef CONFIG_NO_HZ_FULL
&dev_attr_nohz_full.attr,
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 1c29947db848..4b202b94c97a 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -93,6 +93,7 @@ static inline void set_nr_cpu_ids(unsigned int nr)
*
* cpu_possible_mask- has bit 'cpu' set iff cpu is populatable
* cpu_present_mask - has bit 'cpu' set iff cpu is populated
+ * cpu_enabled_mask - has bit 'cpu' set iff cpu can be brought online
* cpu_online_mask - has bit 'cpu' set iff cpu available to scheduler
* cpu_active_mask - has bit 'cpu' set iff cpu available to migration
*
@@ -125,11 +126,13 @@ static inline void set_nr_cpu_ids(unsigned int nr)
extern struct cpumask __cpu_possible_mask;
extern struct cpumask __cpu_online_mask;
+extern struct cpumask __cpu_enabled_mask;
extern struct cpumask __cpu_present_mask;
extern struct cpumask __cpu_active_mask;
extern struct cpumask __cpu_dying_mask;
#define cpu_possible_mask ((const struct cpumask *)&__cpu_possible_mask)
#define cpu_online_mask ((const struct cpumask *)&__cpu_online_mask)
+#define cpu_enabled_mask ((const struct cpumask *)&__cpu_enabled_mask)
#define cpu_present_mask ((const struct cpumask *)&__cpu_present_mask)
#define cpu_active_mask ((const struct cpumask *)&__cpu_active_mask)
#define cpu_dying_mask ((const struct cpumask *)&__cpu_dying_mask)
@@ -1009,6 +1012,7 @@ extern const DECLARE_BITMAP(cpu_all_bits, NR_CPUS);
#else
#define for_each_possible_cpu(cpu) for_each_cpu((cpu), cpu_possible_mask)
#define for_each_online_cpu(cpu) for_each_cpu((cpu), cpu_online_mask)
+#define for_each_enabled_cpu(cpu) for_each_cpu((cpu), cpu_enabled_mask)
#define for_each_present_cpu(cpu) for_each_cpu((cpu), cpu_present_mask)
#endif
@@ -1031,6 +1035,15 @@ set_cpu_possible(unsigned int cpu, bool possible)
cpumask_clear_cpu(cpu, &__cpu_possible_mask);
}
+static inline void
+set_cpu_enabled(unsigned int cpu, bool can_be_onlined)
+{
+ if (can_be_onlined)
+ cpumask_set_cpu(cpu, &__cpu_enabled_mask);
+ else
+ cpumask_clear_cpu(cpu, &__cpu_enabled_mask);
+}
+
static inline void
set_cpu_present(unsigned int cpu, bool present)
{
@@ -1112,6 +1125,7 @@ static __always_inline unsigned int num_online_cpus(void)
return raw_atomic_read(&__num_online_cpus);
}
#define num_possible_cpus() cpumask_weight(cpu_possible_mask)
+#define num_enabled_cpus() cpumask_weight(cpu_enabled_mask)
#define num_present_cpus() cpumask_weight(cpu_present_mask)
#define num_active_cpus() cpumask_weight(cpu_active_mask)
@@ -1120,6 +1134,11 @@ static inline bool cpu_online(unsigned int cpu)
return cpumask_test_cpu(cpu, cpu_online_mask);
}
+static inline bool cpu_enabled(unsigned int cpu)
+{
+ return cpumask_test_cpu(cpu, cpu_enabled_mask);
+}
+
static inline bool cpu_possible(unsigned int cpu)
{
return cpumask_test_cpu(cpu, cpu_possible_mask);
@@ -1144,6 +1163,7 @@ static inline bool cpu_dying(unsigned int cpu)
#define num_online_cpus() 1U
#define num_possible_cpus() 1U
+#define num_enabled_cpus() 1U
#define num_present_cpus() 1U
#define num_active_cpus() 1U
@@ -1157,6 +1177,11 @@ static inline bool cpu_possible(unsigned int cpu)
return cpu == 0;
}
+static inline bool cpu_enabled(unsigned int cpu)
+{
+ return cpu == 0;
+}
+
static inline bool cpu_present(unsigned int cpu)
{
return cpu == 0;
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 07ad53b7f119..6d228f1c4e39 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -3117,6 +3117,9 @@ EXPORT_SYMBOL(__cpu_possible_mask);
struct cpumask __cpu_online_mask __read_mostly;
EXPORT_SYMBOL(__cpu_online_mask);
+struct cpumask __cpu_enabled_mask __read_mostly;
+EXPORT_SYMBOL(__cpu_enabled_mask);
+
struct cpumask __cpu_present_mask __read_mostly;
EXPORT_SYMBOL(__cpu_present_mask);
--
2.39.2
From: James Morse <[email protected]>
The arm64 specific arch_register_cpu() call may defer CPU registration
until the ACPI interpreter is available and the _STA method can
be evaluated.
If this occurs, then a second attempt is made in
acpi_processor_get_info(). Note that the arm64-specific call has
not yet been added, so for now this will be called for the original
hotplug case.
For architectures that do not defer until the ACPI processor
driver loads (e.g. x86), initially present CPUs will already have
a CPU device. If one is present, do not try to register again.
Systems can still be booted with 'acpi=off', or not include an
ACPI description at all as in these cases arch_register_cpu()
will not have deferred registration when first called.
This moves the CPU register logic back to a subsys_initcall(),
while the memory nodes will have been registered earlier.
Note this is where the call was prior to the cleanup series so
there should be no side effects of moving it back again for this
specific case.
[PATCH 00/21] Initial cleanups for vCPU HP.
https://lore.kernel.org/all/ZVyz%[email protected]/
commit 5b95f94c3b9f ("x86/topology: Switch over to GENERIC_CPU_DEVICES")
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Tested-by: Miguel Luis <[email protected]>
Tested-by: Vishnu Pajjuri <[email protected]>
Tested-by: Jianyong Wu <[email protected]>
Signed-off-by: Russell King (Oracle) <[email protected]>
Co-developed-by: Jonathan Cameron <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: Simplify the logic on whether to hotadd the CPU.
This path can only be reached either for coldplug in which
case all we care about is has register_cpu() already been
called (identifying deferred), or hotplug in which case
whether register_cpu() has been called is also sufficient.
Checks on _STA related elements or the validity of the ID
are no longer necessary here due to similar checks having
moved elsewhere in the path.
v6: Squash the two paths for conventional CPU Hotplug and arm64
vCPU HP.
---
drivers/acpi/acpi_processor.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index 127ae8dcb787..4e65011e706c 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -350,14 +350,14 @@ static int acpi_processor_get_info(struct acpi_device *device)
}
/*
- * Extra Processor objects may be enumerated on MP systems with
- * less than the max # of CPUs. They should be ignored _iff
- * they are physically not present.
- *
- * NOTE: Even if the processor has a cpuid, it may not be present
- * because cpuid <-> apicid mapping is persistent now.
+ * This code is not called unless we know the CPU is present and
+ * enabled. The two paths are:
+ * a) Initially present CPUs on architectures that do not defer
+ * their arch_register_cpu() calls until this point.
+ * b) Hotplugged CPUs (enabled bit in _STA has transitioned from not
+ * enabled to enabled)
*/
- if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
+ if (!get_cpu_device(pr->id)) {
ret = acpi_processor_hotadd_init(pr, device);
if (ret)
--
2.39.2
From: James Morse <[email protected]>
To support virtual CPU hotplug, ACPI has added an 'online capable' bit
to the MADT GICC entries. This indicates that a disabled CPU entry may
not be possible to bring online via PSCI until firmware has set the
enabled bit in _STA.
This means that a "usable" GIC is one that is marked as either enabled,
or online capable. Therefore, change acpi_gicc_is_usable() to check both
bits. However, we need to change the test in gic_acpi_match_gicc() back
to testing just the enabled bit so that the count of enabled
redistributors is correct.
What about the redistributor in the GICC entry? ACPI doesn't want to say.
Assume the worst: When a redistributor is described in the GICC entry,
but the entry is marked as disabled at boot, assume the redistributor
is inaccessible.
The GICv3 driver doesn't support late online of redistributors, so this
means the corresponding CPU can't be brought online either. Clear the
possible and present bits.
Systems that want CPU hotplug in a VM can ensure their redistributors
are always-on, and describe them that way with a GICR entry in the MADT.
When mapping redistributors found via GICC entries, handle the case
where the arch code believes the CPU is present and possible, but it
does not have an accessible redistributor. Print a warning and clear
the present and possible bits.
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Russell King (Oracle) <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
---
v7: No Change.
---
drivers/irqchip/irq-gic-v3.c | 21 +++++++++++++++++++--
include/linux/acpi.h | 3 ++-
2 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 10af15f93d4d..66132251c1bb 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -2363,11 +2363,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
(struct acpi_madt_generic_interrupt *)header;
u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
+ int cpu = get_cpu_for_acpi_id(gicc->uid);
void __iomem *redist_base;
if (!acpi_gicc_is_usable(gicc))
return 0;
+ /*
+ * Capable but disabled CPUs can be brought online later. What about
+ * the redistributor? ACPI doesn't want to say!
+ * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
+ * Otherwise, prevent such CPUs from being brought online.
+ */
+ if (!(gicc->flags & ACPI_MADT_ENABLED)) {
+ pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
+ set_cpu_present(cpu, false);
+ set_cpu_possible(cpu, false);
+ return 0;
+ }
+
redist_base = ioremap(gicc->gicr_base_address, size);
if (!redist_base)
return -ENOMEM;
@@ -2413,9 +2427,12 @@ static int __init gic_acpi_match_gicc(union acpi_subtable_headers *header,
/*
* If GICC is enabled and has valid gicr base address, then it means
- * GICR base is presented via GICC
+ * GICR base is presented via GICC. The redistributor is only known to
+ * be accessible if the GICC is marked as enabled. If this bit is not
+ * set, we'd need to add the redistributor at runtime, which isn't
+ * supported.
*/
- if (acpi_gicc_is_usable(gicc) && gicc->gicr_base_address)
+ if (gicc->flags & ACPI_MADT_ENABLED && gicc->gicr_base_address)
acpi_data.enabled_rdists++;
return 0;
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 9844a3f9c4e5..fcfb7bb6789e 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -239,7 +239,8 @@ void acpi_table_print_madt_entry (struct acpi_subtable_header *madt);
static inline bool acpi_gicc_is_usable(struct acpi_madt_generic_interrupt *gicc)
{
- return gicc->flags & ACPI_MADT_ENABLED;
+ return gicc->flags & (ACPI_MADT_ENABLED |
+ ACPI_MADT_GICC_ONLINE_CAPABLE);
}
/* the following numa functions are architecture-dependent */
--
2.39.2
On Thu, Apr 18, 2024 at 3:54 PM Jonathan Cameron
<[email protected]> wrote:
>
> Whilst it is a bit quick after v6, a couple of critical issues
> were pointed out by Russell, Salil and Rafael + one build issue
> had been missed, so it seems sensible to make sure those conducting
> testing or further review have access to a fixed version.
>
> v7:
> - Fix misplaced config guard that broke bisection.
> - Greatly simplify the condition on which we call
> acpi_processor_hotadd_init().
> - Improve teardown ordering.
Thank you for the update!
From a quick look, patches [01-08/16] appear to be good now, but I'll
do a more detailed review on the following days.
> Fundamental change v6+: At the level of common ACPI infrastructure, use
> the existing hotplug path for arm64 even though what needs to be
> done at the architecture specific level is quite different.
>
> An explicit check in arch_register_cpu() for arm64 prevents
> this code doing anything if Physical CPU Hotplug is signalled.
>
> This should resolve any concerns about treating virtual CPU
> hotplug as if it were physical and potential unwanted side effects
> if physical CPU hotplug is added to the ARM architecture in the
> future.
>
> v6: Thanks to Rafael for extensive help with the approach + reviews.
> Specific changes:
> - Do not differentiate wrt code flow between traditional CPU HP
> and the new ARM flow. The conditions on performing hotplug actions
> do need to be adjusted though to incorporate the slightly different
> state transition
> Added PRESENT + !ENABLED -> PRESENT + ENABLED
> to existing !PRESENT + !ENABLED -> PRESENT + ENABLED
> - Enable ACPI_HOTPLUG_CPU on arm64 and drop the earlier patches that
> took various code out of the protection of that. Now the paths
> - New patch to drop unnecessary _STA check in hotplug code. This
> code cannot be entered unless ENABLED + PRESENT are set.
> - New patch to unify the flow of already onlined (at time of driver
> load) and hotplugged CPUs in acpi/processor_driver.c.
> This change is necessary because we can't easily distinguish the
> 2 cases of deferred vs hotplug calls of register_cpu() on arm64.
> It is also a nice simplification.
> - Use flags rather than a structure for the extra parameter to
> acpi_scan_check_and_detach() - Thank to Shameer for offline feedback.
>
> Updated version of James' original introduction.
>
> This series adds what looks like cpuhotplug support to arm64 for use in
> virtual machines. It does this by moving the cpu_register() calls for
> architectures that support ACPI into an arch specific call made from
> the ACPI processor driver.
>
> The kubernetes folk really want to be able to add CPUs to an existing VM,
> in exactly the same way they do on x86. The use-case is pre-booting guests
> with one CPU, then adding the number that were actually needed when the
> workload is provisioned.
>
> Wait? Doesn't arm64 support cpuhotplug already!?
> In the arm world, cpuhotplug gets used to mean removing the power from a CPU.
> The CPU is offline, and remains present. For x86, and ACPI, cpuhotplug
> has the additional step of physically removing the CPU, so that it isn't
> present anymore.
>
> Arm64 doesn't support this, and can't support it: CPUs are really a slice
> of the SoC, and there is not enough information in the existing ACPI tables
> to describe which bits of the slice also got removed. Without a reference
> machine: adding this support to the spec is a wild goose chase.
>
> Critically: everything described in the firmware tables must remain present.
>
> For a virtual machine this is easy as all the other bits of 'virtual SoC'
> are emulated, so they can (and do) remain present when a vCPU is 'removed'.
>
> On a system that supports cpuhotplug the MADT has to describe every possible
> CPU at boot. Under KVM, the vGIC needs to know about every possible vCPU before
> the guest is started.
> With these constraints, virtual-cpuhotplug is really just a hypervisor/firmware
> policy about which CPUs can be brought online.
>
> This series adds support for virtual-cpuhotplug as exactly that: firmware
> policy. This may even work on a physical machine too; for a guest the part of
> firmware is played by the VMM. (typically Qemu).
>
> PSCI support is modified to return 'DENIED' if the CPU can't be brought
> online/enabled yet. The CPU object's _STA method's enabled bit is used to
> indicate firmware's current disposition. If the CPU has its enabled bit clear,
> it will not be registered with sysfs, and attempts to bring it online will
> fail. The notifications that _STA has changed its value then work in the same
> way as physical hotplug, and firmware can cause the CPU to be registered some
> time later, allowing it to be brought online.
>
> This creates something that looks like cpuhotplug to user-space and the
> kernel beyond arm64 architecture specific code, as the sysfs
> files appear and disappear, and the udev notifications look the same.
>
> One notable difference is the CPU present mask, which is exposed via sysfs.
> Because the CPUs remain present throughout, they can still be seen in that mask.
> This value does get used by webbrowsers to estimate the number of CPUs
> as the CPU online mask is constantly changed on mobile phones.
>
> Linux is tolerant of PSCI returning errors, as it's always been allowed to do
> that. To avoid confusing OS that can't tolerate this, we needed an additional
> bit in the MADT GICC flags. This series copies ACPI_MADT_ONLINE_CAPABLE, which
> appears to be for this purpose, but calls it ACPI_MADT_GICC_CPU_CAPABLE as it
> has a different bit position in the GICC.
>
> This code is unconditionally enabled for all ACPI architectures, though for
> now only arm64 will have deferred the cpu_register() calls.
>
> If folk want to play along at home, you'll need a copy of Qemu that supports this.
> https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2
>
> Replace your '-smp' argument with something like:
> | -smp cpus=1,maxcpus=3,cores=3,threads=1,sockets=1
>
> then feed the following to the Qemu monitor:
> | (qemu) device_add driver=host-arm-cpu,core-id=1,id=cpu1
> | (qemu) device_del cpu1
>
> James Morse (7):
> ACPI: processor: Register deferred CPUs from acpi_processor_get_info()
> ACPI: Add post_eject to struct acpi_scan_handler for cpu hotplug
> arm64: acpi: Move get_cpu_for_acpi_id() to a header
> irqchip/gic-v3: Don't return errors from gic_acpi_match_gicc()
> irqchip/gic-v3: Add support for ACPI's disabled but 'online capable'
> CPUs
> arm64: document virtual CPU hotplug's expectations
> cpumask: Add enabled cpumask for present CPUs that can be brought
> online
>
> Jean-Philippe Brucker (1):
> arm64: psci: Ignore DENIED CPUs
>
> Jonathan Cameron (8):
> ACPI: processor: Simplify initial onlining to use same path for cold
> and hotplug
> cpu: Do not warn on arch_register_cpu() returning -EPROBE_DEFER
> ACPI: processor: Drop duplicated check on _STA (enabled + present)
> ACPI: processor: Move checks and availability of acpi_processor
> earlier
> ACPI: processor: Add acpi_get_processor_handle() helper
> ACPI: scan: switch to flags for acpi_scan_check_and_detach()
> arm64: arch_register_cpu() variant to check if an ACPI handle is now
> available.
> arm64: Kconfig: Enable hotplug CPU on arm64 if ACPI_PROCESSOR is
> enabled.
>
> .../ABI/testing/sysfs-devices-system-cpu | 6 +
> Documentation/arch/arm64/cpu-hotplug.rst | 79 ++++++++++++
> Documentation/arch/arm64/index.rst | 1 +
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/acpi.h | 11 ++
> arch/arm64/kernel/acpi.c | 16 +++
> arch/arm64/kernel/acpi_numa.c | 11 --
> arch/arm64/kernel/psci.c | 2 +-
> arch/arm64/kernel/smp.c | 56 ++++++++-
> drivers/acpi/acpi_processor.c | 113 ++++++++++--------
> drivers/acpi/processor_driver.c | 44 ++-----
> drivers/acpi/scan.c | 47 ++++++--
> drivers/base/cpu.c | 12 +-
> drivers/irqchip/irq-gic-v3.c | 32 +++--
> include/acpi/acpi_bus.h | 1 +
> include/acpi/processor.h | 2 +-
> include/linux/acpi.h | 10 +-
> include/linux/cpumask.h | 25 ++++
> kernel/cpu.c | 3 +
> 19 files changed, 357 insertions(+), 115 deletions(-)
> create mode 100644 Documentation/arch/arm64/cpu-hotplug.rst
>
> --
> 2.39.2
>
>
> On 18 Apr 2024, at 13:53, Jonathan Cameron <[email protected]> wrote:
>
> Whilst it is a bit quick after v6, a couple of critical issues
> were pointed out by Russell, Salil and Rafael + one build issue
> had been missed, so it seems sensible to make sure those conducting
> testing or further review have access to a fixed version.
>
> v7:
> - Fix misplaced config guard that broke bisection.
> - Greatly simplify the condition on which we call
> acpi_processor_hotadd_init().
> - Improve teardown ordering.
>
Hi Jonathan,
I've tested v7 on an arm64 machine running QEMU from
https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2, with KVM.
- boot
- hotplug up to 'maxcpus'
- hotunplug down to the number of boot cpus
- hotplug vcpus and migrate with vcpus offline
- hotplug vcpus and migrate with vcpus online
- hotplug vcpus then unplug vcpus then migrate
- successive live migrations
Feel free to add:
Tested-by: Miguel Luis <[email protected]>
Thank you
Miguel
> Fundamental change v6+: At the level of common ACPI infrastructure, use
> the existing hotplug path for arm64 even though what needs to be
> done at the architecture specific level is quite different.
>
> An explicit check in arch_register_cpu() for arm64 prevents
> this code doing anything if Physical CPU Hotplug is signalled.
>
> This should resolve any concerns about treating virtual CPU
> hotplug as if it were physical and potential unwanted side effects
> if physical CPU hotplug is added to the ARM architecture in the
> future.
>
> v6: Thanks to Rafael for extensive help with the approach + reviews.
> Specific changes:
> - Do not differentiate wrt code flow between traditional CPU HP
> and the new ARM flow. The conditions on performing hotplug actions
> do need to be adjusted though to incorporate the slightly different
> state transition
> Added PRESENT + !ENABLED -> PRESENT + ENABLED
> to existing !PRESENT + !ENABLED -> PRESENT + ENABLED
> - Enable ACPI_HOTPLUG_CPU on arm64 and drop the earlier patches that
> took various code out of the protection of that. Now the paths
> - New patch to drop unnecessary _STA check in hotplug code. This
> code cannot be entered unless ENABLED + PRESENT are set.
> - New patch to unify the flow of already onlined (at time of driver
> load) and hotplugged CPUs in acpi/processor_driver.c.
> This change is necessary because we can't easily distinguish the
> 2 cases of deferred vs hotplug calls of register_cpu() on arm64.
> It is also a nice simplification.
> - Use flags rather than a structure for the extra parameter to
> acpi_scan_check_and_detach() - Thank to Shameer for offline feedback.
>
> Updated version of James' original introduction.
>
> This series adds what looks like cpuhotplug support to arm64 for use in
> virtual machines. It does this by moving the cpu_register() calls for
> architectures that support ACPI into an arch specific call made from
> the ACPI processor driver.
>
> The kubernetes folk really want to be able to add CPUs to an existing VM,
> in exactly the same way they do on x86. The use-case is pre-booting guests
> with one CPU, then adding the number that were actually needed when the
> workload is provisioned.
>
> Wait? Doesn't arm64 support cpuhotplug already!?
> In the arm world, cpuhotplug gets used to mean removing the power from a CPU.
> The CPU is offline, and remains present. For x86, and ACPI, cpuhotplug
> has the additional step of physically removing the CPU, so that it isn't
> present anymore.
>
> Arm64 doesn't support this, and can't support it: CPUs are really a slice
> of the SoC, and there is not enough information in the existing ACPI tables
> to describe which bits of the slice also got removed. Without a reference
> machine: adding this support to the spec is a wild goose chase.
>
> Critically: everything described in the firmware tables must remain present.
>
> For a virtual machine this is easy as all the other bits of 'virtual SoC'
> are emulated, so they can (and do) remain present when a vCPU is 'removed'.
>
> On a system that supports cpuhotplug the MADT has to describe every possible
> CPU at boot. Under KVM, the vGIC needs to know about every possible vCPU before
> the guest is started.
> With these constraints, virtual-cpuhotplug is really just a hypervisor/firmware
> policy about which CPUs can be brought online.
>
> This series adds support for virtual-cpuhotplug as exactly that: firmware
> policy. This may even work on a physical machine too; for a guest the part of
> firmware is played by the VMM. (typically Qemu).
>
> PSCI support is modified to return 'DENIED' if the CPU can't be brought
> online/enabled yet. The CPU object's _STA method's enabled bit is used to
> indicate firmware's current disposition. If the CPU has its enabled bit clear,
> it will not be registered with sysfs, and attempts to bring it online will
> fail. The notifications that _STA has changed its value then work in the same
> way as physical hotplug, and firmware can cause the CPU to be registered some
> time later, allowing it to be brought online.
>
> This creates something that looks like cpuhotplug to user-space and the
> kernel beyond arm64 architecture specific code, as the sysfs
> files appear and disappear, and the udev notifications look the same.
>
> One notable difference is the CPU present mask, which is exposed via sysfs.
> Because the CPUs remain present throughout, they can still be seen in that mask.
> This value does get used by webbrowsers to estimate the number of CPUs
> as the CPU online mask is constantly changed on mobile phones.
>
> Linux is tolerant of PSCI returning errors, as it's always been allowed to do
> that. To avoid confusing OS that can't tolerate this, we needed an additional
> bit in the MADT GICC flags. This series copies ACPI_MADT_ONLINE_CAPABLE, which
> appears to be for this purpose, but calls it ACPI_MADT_GICC_CPU_CAPABLE as it
> has a different bit position in the GICC.
>
> This code is unconditionally enabled for all ACPI architectures, though for
> now only arm64 will have deferred the cpu_register() calls.
>
> If folk want to play along at home, you'll need a copy of Qemu that supports this.
> https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2
>
> Replace your '-smp' argument with something like:
> | -smp cpus=1,maxcpus=3,cores=3,threads=1,sockets=1
>
> then feed the following to the Qemu monitor:
> | (qemu) device_add driver=host-arm-cpu,core-id=1,id=cpu1
> | (qemu) device_del cpu1
>
> James Morse (7):
> ACPI: processor: Register deferred CPUs from acpi_processor_get_info()
> ACPI: Add post_eject to struct acpi_scan_handler for cpu hotplug
> arm64: acpi: Move get_cpu_for_acpi_id() to a header
> irqchip/gic-v3: Don't return errors from gic_acpi_match_gicc()
> irqchip/gic-v3: Add support for ACPI's disabled but 'online capable'
> CPUs
> arm64: document virtual CPU hotplug's expectations
> cpumask: Add enabled cpumask for present CPUs that can be brought
> online
>
> Jean-Philippe Brucker (1):
> arm64: psci: Ignore DENIED CPUs
>
> Jonathan Cameron (8):
> ACPI: processor: Simplify initial onlining to use same path for cold
> and hotplug
> cpu: Do not warn on arch_register_cpu() returning -EPROBE_DEFER
> ACPI: processor: Drop duplicated check on _STA (enabled + present)
> ACPI: processor: Move checks and availability of acpi_processor
> earlier
> ACPI: processor: Add acpi_get_processor_handle() helper
> ACPI: scan: switch to flags for acpi_scan_check_and_detach()
> arm64: arch_register_cpu() variant to check if an ACPI handle is now
> available.
> arm64: Kconfig: Enable hotplug CPU on arm64 if ACPI_PROCESSOR is
> enabled.
>
> .../ABI/testing/sysfs-devices-system-cpu | 6 +
> Documentation/arch/arm64/cpu-hotplug.rst | 79 ++++++++++++
> Documentation/arch/arm64/index.rst | 1 +
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/acpi.h | 11 ++
> arch/arm64/kernel/acpi.c | 16 +++
> arch/arm64/kernel/acpi_numa.c | 11 --
> arch/arm64/kernel/psci.c | 2 +-
> arch/arm64/kernel/smp.c | 56 ++++++++-
> drivers/acpi/acpi_processor.c | 113 ++++++++++--------
> drivers/acpi/processor_driver.c | 44 ++-----
> drivers/acpi/scan.c | 47 ++++++--
> drivers/base/cpu.c | 12 +-
> drivers/irqchip/irq-gic-v3.c | 32 +++--
> include/acpi/acpi_bus.h | 1 +
> include/acpi/processor.h | 2 +-
> include/linux/acpi.h | 10 +-
> include/linux/cpumask.h | 25 ++++
> kernel/cpu.c | 3 +
> 19 files changed, 357 insertions(+), 115 deletions(-)
> create mode 100644 Documentation/arch/arm64/cpu-hotplug.rst
>
> --
> 2.39.2
>
On Thu, 18 Apr 2024 14:54:06 +0100
Jonathan Cameron <[email protected]> wrote:
> From: James Morse <[email protected]>
>
> gic_acpi_match_gicc() is only called via gic_acpi_count_gicr_regions().
> It should only count the number of enabled redistributors, but it
> also tries to sanity check the GICC entry, currently returning an
> error if the Enabled bit is set, but the gicr_base_address is zero.
>
> Adding support for the online-capable bit to the sanity check will
> complicate it, for no benefit. The existing check implicitly depends on
> gic_acpi_count_gicr_regions() previous failing to find any GICR regions
> (as it is valid to have gicr_base_address of zero if the redistributors
> are described via a GICR entry).
>
> Instead of complicating the check, remove it. Failures that happen at
> this point cause the irqchip not to register, meaning no irqs can be
> requested. The kernel grinds to a panic() pretty quickly.
>
> Without the check, MADT tables that exhibit this problem are still
> caught by gic_populate_rdist(), which helpfully also prints what went
> wrong:
> | CPU4: mpidr 100 has no re-distributor!
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Reviewed-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
I've been focused on the ACPI aspects until now, but now realize that this
and the next patch should have included the GIC maintainer in the
To: list. I'll fix that for future versions, but for now
+CC Marc.
> ---
> v7: No change
> ---
> drivers/irqchip/irq-gic-v3.c | 13 ++-----------
> 1 file changed, 2 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index 6fb276504bcc..10af15f93d4d 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -2415,19 +2415,10 @@ static int __init gic_acpi_match_gicc(union acpi_subtable_headers *header,
> * If GICC is enabled and has valid gicr base address, then it means
> * GICR base is presented via GICC
> */
> - if (acpi_gicc_is_usable(gicc) && gicc->gicr_base_address) {
> + if (acpi_gicc_is_usable(gicc) && gicc->gicr_base_address)
> acpi_data.enabled_rdists++;
> - return 0;
> - }
>
> - /*
> - * It's perfectly valid firmware can pass disabled GICC entry, driver
> - * should not treat as errors, skip the entry instead of probe fail.
> - */
> - if (!acpi_gicc_is_usable(gicc))
> - return 0;
> -
> - return -ENODEV;
> + return 0;
> }
>
> static int __init gic_acpi_count_gicr_regions(void)
On Thu, 18 Apr 2024 14:54:07 +0100
Jonathan Cameron <[email protected]> wrote:
> From: James Morse <[email protected]>
>
> To support virtual CPU hotplug, ACPI has added an 'online capable' bit
> to the MADT GICC entries. This indicates a disabled CPU entry may not
> be possible to online via PSCI until firmware has set enabled bit in
> _STA.
>
> This means that a "usable" GIC is one that is marked as either enabled,
> or online capable. Therefore, change acpi_gicc_is_usable() to check both
> bits. However, we need to change the test in gic_acpi_match_gicc() back
> to testing just the enabled bit so the count of enabled distributors is
> correct.
>
> What about the redistributor in the GICC entry? ACPI doesn't want to say.
> Assume the worst: When a redistributor is described in the GICC entry,
> but the entry is marked as disabled at boot, assume the redistributor
> is inaccessible.
>
> The GICv3 driver doesn't support late online of redistributors, so this
> means the corresponding CPU can't be brought online either. Clear the
> possible and present bits.
>
> Systems that want CPU hotplug in a VM can ensure their redistributors
> are always-on, and describe them that way with a GICR entry in the MADT.
>
> When mapping redistributors found via GICC entries, handle the case
> where the arch code believes the CPU is present and possible, but it
> does not have an accessible redistributor. Print a warning and clear
> the present and possible bits.
>
> Signed-off-by: James Morse <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
+CC Marc,
Whilst this has been unchanged for a long time, I'm not 100% sure
we've specifically drawn your attention to it before now.
Jonathan
>
> ---
> v7: No Change.
> ---
> drivers/irqchip/irq-gic-v3.c | 21 +++++++++++++++++++--
> include/linux/acpi.h | 3 ++-
> 2 files changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index 10af15f93d4d..66132251c1bb 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -2363,11 +2363,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
> (struct acpi_madt_generic_interrupt *)header;
> u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
> u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
> + int cpu = get_cpu_for_acpi_id(gicc->uid);
> void __iomem *redist_base;
>
> if (!acpi_gicc_is_usable(gicc))
> return 0;
>
> + /*
> + * Capable but disabled CPUs can be brought online later. What about
> + * the redistributor? ACPI doesn't want to say!
> + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> + * Otherwise, prevent such CPUs from being brought online.
> + */
> + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> + set_cpu_present(cpu, false);
> + set_cpu_possible(cpu, false);
> + return 0;
> + }
> +
> redist_base = ioremap(gicc->gicr_base_address, size);
> if (!redist_base)
> return -ENOMEM;
> @@ -2413,9 +2427,12 @@ static int __init gic_acpi_match_gicc(union acpi_subtable_headers *header,
>
> /*
> * If GICC is enabled and has valid gicr base address, then it means
> - * GICR base is presented via GICC
> + * GICR base is presented via GICC. The redistributor is only known to
> + * be accessible if the GICC is marked as enabled. If this bit is not
> + * set, we'd need to add the redistributor at runtime, which isn't
> + * supported.
> */
> - if (acpi_gicc_is_usable(gicc) && gicc->gicr_base_address)
> + if (gicc->flags & ACPI_MADT_ENABLED && gicc->gicr_base_address)
> acpi_data.enabled_rdists++;
>
> return 0;
> diff --git a/include/linux/acpi.h b/include/linux/acpi.h
> index 9844a3f9c4e5..fcfb7bb6789e 100644
> --- a/include/linux/acpi.h
> +++ b/include/linux/acpi.h
> @@ -239,7 +239,8 @@ void acpi_table_print_madt_entry (struct acpi_subtable_header *madt);
>
> static inline bool acpi_gicc_is_usable(struct acpi_madt_generic_interrupt *gicc)
> {
> - return gicc->flags & ACPI_MADT_ENABLED;
> + return gicc->flags & (ACPI_MADT_ENABLED |
> + ACPI_MADT_GICC_ONLINE_CAPABLE);
> }
>
> /* the following numa functions are architecture-dependent */
On Thu, 18 Apr 2024 14:54:05 +0100
Jonathan Cameron <[email protected]> wrote:
> From: James Morse <[email protected]>
>
> ACPI identifies CPUs by UID. get_cpu_for_acpi_id() maps the ACPI UID
> to the Linux CPU number.
>
> The helper to retrieve this mapping is only available in arm64's NUMA
> code.
>
> Move it to live next to get_acpi_id_for_cpu().
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Jonathan Cameron <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
Another one where we'd been focused on the general ACPI aspects for so
long that the CC list didn't include the relevant maintainers.
+CC Lorenzo, Hanjun and Sudeep.
> ---
> v7: No change
> ---
> arch/arm64/include/asm/acpi.h | 11 +++++++++++
> arch/arm64/kernel/acpi_numa.c | 11 -----------
> 2 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> index 6792a1f83f2a..bc9a6656fc0c 100644
> --- a/arch/arm64/include/asm/acpi.h
> +++ b/arch/arm64/include/asm/acpi.h
> @@ -119,6 +119,17 @@ static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
> return acpi_cpu_get_madt_gicc(cpu)->uid;
> }
>
> +static inline int get_cpu_for_acpi_id(u32 uid)
> +{
> + int cpu;
> +
> + for (cpu = 0; cpu < nr_cpu_ids; cpu++)
> + if (uid == get_acpi_id_for_cpu(cpu))
> + return cpu;
> +
> + return -EINVAL;
> +}
> +
> static inline void arch_fix_phys_package_id(int num, u32 slot) { }
> void __init acpi_init_cpus(void);
> int apei_claim_sea(struct pt_regs *regs);
> diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
> index e51535a5f939..0c036a9a3c33 100644
> --- a/arch/arm64/kernel/acpi_numa.c
> +++ b/arch/arm64/kernel/acpi_numa.c
> @@ -34,17 +34,6 @@ int __init acpi_numa_get_nid(unsigned int cpu)
> return acpi_early_node_map[cpu];
> }
>
> -static inline int get_cpu_for_acpi_id(u32 uid)
> -{
> - int cpu;
> -
> - for (cpu = 0; cpu < nr_cpu_ids; cpu++)
> - if (uid == get_acpi_id_for_cpu(cpu))
> - return cpu;
> -
> - return -EINVAL;
> -}
> -
> static int __init acpi_parse_gicc_pxm(union acpi_subtable_headers *header,
> const unsigned long end)
> {
On Thu, 18 Apr 2024 14:54:08 +0100
Jonathan Cameron <[email protected]> wrote:
> From: Jean-Philippe Brucker <[email protected]>
>
> When a CPU is marked as disabled, but online capable in the MADT, PSCI
> applies some firmware policy to control when it can be brought online.
> PSCI returns DENIED to a CPU_ON request if this is not currently
> permitted. The OS can learn the current policy from the _STA enabled bit.
>
> Handle the PSCI DENIED return code gracefully instead of printing an
> error.
>
> See https://developer.arm.com/documentation/den0022/f/?lang=en page 58.
>
> Signed-off-by: Jean-Philippe Brucker <[email protected]>
> [ morse: Rewrote commit message ]
> Signed-off-by: James Morse <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Reviewed-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
Focus until now has been on the ACPI side of things (hopefully we are now
close on that), but the upshot is we failed to +CC some other relevant
maintainers.
+CC Mark and Lorenzo - not 100% sure who is the right person for this, but
PSCI in general seems to be your problem.
> ---
> v7: No change
> ---
> arch/arm64/kernel/psci.c | 2 +-
> arch/arm64/kernel/smp.c | 3 ++-
> 2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
> index 29a8e444db83..fabd732d0a2d 100644
> --- a/arch/arm64/kernel/psci.c
> +++ b/arch/arm64/kernel/psci.c
> @@ -40,7 +40,7 @@ static int cpu_psci_cpu_boot(unsigned int cpu)
> {
> phys_addr_t pa_secondary_entry = __pa_symbol(secondary_entry);
> int err = psci_ops.cpu_on(cpu_logical_map(cpu), pa_secondary_entry);
> - if (err)
> + if (err && err != -EPERM)
> pr_err("failed to boot CPU%d (%d)\n", cpu, err);
>
> return err;
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 4ced34f62dab..dc0e0b3ec2d4 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -132,7 +132,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> /* Now bring the CPU into our world */
> ret = boot_secondary(cpu, idle);
> if (ret) {
> - pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
> + if (ret != -EPERM)
> + pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
> return ret;
> }
>
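For anyone who hasn't dug into the PSCI driver: DENIED reaches the code
above as -EPERM via the existing return-code mapping in
drivers/firmware/psci/psci.c, which (roughly, from memory) looks like:

static int psci_to_linux_errno(int errno)
{
	switch (errno) {
	case PSCI_RET_SUCCESS:
		return 0;
	case PSCI_RET_NOT_SUPPORTED:
		return -EOPNOTSUPP;
	case PSCI_RET_INVALID_PARAMS:
	case PSCI_RET_INVALID_ADDRESS:
		return -EINVAL;
	case PSCI_RET_DENIED:
		return -EPERM;
	}

	return -EINVAL;
}

Hence the err != -EPERM / ret != -EPERM checks in the hunks above.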
On Thu, Apr 18, 2024 at 3:55 PM Jonathan Cameron
<[email protected]> wrote:
>
> The ACPI bus scan will only result in acpi_processor_add() being called
> if _STA has already been checked and the result is that the
> processor is enabled and present. Hence drop this additional check.
>
> Suggested-by: Rafael J. Wysocki <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
LGTM, so
Acked-by: Rafael J. Wysocki <[email protected]>
> ---
> v7: No change
> v6: New patch to drop this unnecessary code. Now I think we only
> need to explicitly read STA to print a warning in the ARM64
> arch_unregister_cpu() path where we want to know if the
> present bit has been unset as well.
> ---
> drivers/acpi/acpi_processor.c | 6 ------
> 1 file changed, 6 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index 7fc924aeeed0..ba0a6f0ac841 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -186,17 +186,11 @@ static void __init acpi_pcc_cpufreq_init(void) {}
> #ifdef CONFIG_ACPI_HOTPLUG_CPU
> static int acpi_processor_hotadd_init(struct acpi_processor *pr)
> {
> - unsigned long long sta;
> - acpi_status status;
> int ret;
>
> if (invalid_phys_cpuid(pr->phys_id))
> return -ENODEV;
>
> - status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
> - if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_PRESENT))
> - return -ENODEV;
> -
> cpu_maps_update_begin();
> cpus_write_lock();
>
> --
On Thu, Apr 18, 2024 at 3:54 PM Jonathan Cameron
<[email protected]> wrote:
>
> Separate code paths, combined with a flag set in acpi_processor.c to
> indicate a struct acpi_processor was for a hotplugged CPU ensured that
> per CPU data was only set up the first time that a CPU was initialized.
> This appears to be unnecessary as the paths can be combined by letting
> the online logic also handle any CPUs online at the time of driver load.
>
> Motivation for this change, beyond simplification, is that ARM64
> virtual CPU HP uses the same code paths for hotplug and cold path in
> acpi_processor.c so had no easy way to set the flag for hotplug only.
> Removing this necessity will enable ARM64 vCPU HP to reuse the existing
> code paths.
>
> Leave noisy pr_info() in place but update it to not state the CPU
> was hotplugged.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
LGTM, so
Acked-by: Rafael J. Wysocki <[email protected]>
> ---
> v7: No change.
> v6: New patch.
> RFT: I have very limited test resources for x86 and other
> architectures that may be affected by this change.
> ---
> drivers/acpi/acpi_processor.c | 1 -
> drivers/acpi/processor_driver.c | 44 ++++++++++-----------------------
> include/acpi/processor.h | 2 +-
> 3 files changed, 14 insertions(+), 33 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index 7a0dd35d62c9..7fc924aeeed0 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -216,7 +216,6 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr)
> * gets online for the first time.
> */
> pr_info("CPU%d has been hot-added\n", pr->id);
> - pr->flags.need_hotplug_init = 1;
>
> out:
> cpus_write_unlock();
> diff --git a/drivers/acpi/processor_driver.c b/drivers/acpi/processor_driver.c
> index 67db60eda370..55782eac3ff1 100644
> --- a/drivers/acpi/processor_driver.c
> +++ b/drivers/acpi/processor_driver.c
> @@ -33,7 +33,6 @@ MODULE_AUTHOR("Paul Diefenbaugh");
> MODULE_DESCRIPTION("ACPI Processor Driver");
> MODULE_LICENSE("GPL");
>
> -static int acpi_processor_start(struct device *dev);
> static int acpi_processor_stop(struct device *dev);
>
> static const struct acpi_device_id processor_device_ids[] = {
> @@ -47,7 +46,6 @@ static struct device_driver acpi_processor_driver = {
> .name = "processor",
> .bus = &cpu_subsys,
> .acpi_match_table = processor_device_ids,
> - .probe = acpi_processor_start,
> .remove = acpi_processor_stop,
> };
>
> @@ -115,12 +113,10 @@ static int acpi_soft_cpu_online(unsigned int cpu)
> * CPU got physically hotplugged and onlined for the first time:
> * Initialize missing things.
> */
> - if (pr->flags.need_hotplug_init) {
> + if (!pr->flags.previously_online) {
> int ret;
>
> - pr_info("Will online and init hotplugged CPU: %d\n",
> - pr->id);
> - pr->flags.need_hotplug_init = 0;
> + pr_info("Will online and init CPU: %d\n", pr->id);
> ret = __acpi_processor_start(device);
> WARN(ret, "Failed to start CPU: %d\n", pr->id);
> } else {
> @@ -167,9 +163,6 @@ static int __acpi_processor_start(struct acpi_device *device)
> if (!pr)
> return -ENODEV;
>
> - if (pr->flags.need_hotplug_init)
> - return 0;
> -
> result = acpi_cppc_processor_probe(pr);
> if (result && !IS_ENABLED(CONFIG_ACPI_CPU_FREQ_PSS))
> dev_dbg(&device->dev, "CPPC data invalid or not present\n");
> @@ -185,32 +178,21 @@ static int __acpi_processor_start(struct acpi_device *device)
>
> status = acpi_install_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
> acpi_processor_notify, device);
> - if (ACPI_SUCCESS(status))
> - return 0;
> + if (!ACPI_SUCCESS(status)) {
> + result = -ENODEV;
> + goto err_thermal_exit;
> + }
> + pr->flags.previously_online = 1;
>
> - result = -ENODEV;
> - acpi_processor_thermal_exit(pr, device);
> + return 0;
>
> +err_thermal_exit:
> + acpi_processor_thermal_exit(pr, device);
> err_power_exit:
> acpi_processor_power_exit(pr);
> return result;
> }
>
> -static int acpi_processor_start(struct device *dev)
> -{
> - struct acpi_device *device = ACPI_COMPANION(dev);
> - int ret;
> -
> - if (!device)
> - return -ENODEV;
> -
> - /* Protect against concurrent CPU hotplug operations */
> - cpu_hotplug_disable();
> - ret = __acpi_processor_start(device);
> - cpu_hotplug_enable();
> - return ret;
> -}
> -
> static int acpi_processor_stop(struct device *dev)
> {
> struct acpi_device *device = ACPI_COMPANION(dev);
> @@ -279,9 +261,9 @@ static int __init acpi_processor_driver_init(void)
> if (result < 0)
> return result;
>
> - result = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
> - "acpi/cpu-drv:online",
> - acpi_soft_cpu_online, NULL);
> + result = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
> + "acpi/cpu-drv:online",
> + acpi_soft_cpu_online, NULL);
> if (result < 0)
> goto err;
> hp_online = result;
> diff --git a/include/acpi/processor.h b/include/acpi/processor.h
> index 3f34ebb27525..e6f6074eadbf 100644
> --- a/include/acpi/processor.h
> +++ b/include/acpi/processor.h
> @@ -217,7 +217,7 @@ struct acpi_processor_flags {
> u8 has_lpi:1;
> u8 power_setup_done:1;
> u8 bm_rld_set:1;
> - u8 need_hotplug_init:1;
> + u8 previously_online:1;
> };
>
> struct acpi_processor {
> --
On Thu, Apr 18, 2024 at 3:56 PM Jonathan Cameron
<[email protected]> wrote:
>
> If CONFIG_ACPI_PROCESSOR is enabled, provide a helper to retrieve the
> acpi_handle for a given CPU, allowing access to methods
> in the DSDT.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> ---
> v7: No change
> v6: New patch
> ---
> drivers/acpi/acpi_processor.c | 10 ++++++++++
> include/linux/acpi.h | 7 +++++++
> 2 files changed, 17 insertions(+)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index ac7ddb30f10e..127ae8dcb787 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -35,6 +35,16 @@ EXPORT_PER_CPU_SYMBOL(processors);
> struct acpi_processor_errata errata __read_mostly;
> EXPORT_SYMBOL_GPL(errata);
>
> +acpi_handle acpi_get_processor_handle(int cpu)
> +{
> + acpi_handle handle = NULL;
The local var looks redundant.
> + struct acpi_processor *pr = per_cpu(processors, cpu);;
> +
> + if (pr)
> + handle = pr->handle;
> +
> + return handle;
struct acpi_processor *pr;
pr = per_cpu(processors, cpu);
if (pr)
return pr->handle;
return NULL;
> +}
> static int acpi_processor_errata_piix4(struct pci_dev *dev)
> {
> u8 value1 = 0;
> diff --git a/include/linux/acpi.h b/include/linux/acpi.h
> index 34829f2c517a..9844a3f9c4e5 100644
> --- a/include/linux/acpi.h
> +++ b/include/linux/acpi.h
> @@ -309,6 +309,8 @@ int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
> int acpi_unmap_cpu(int cpu);
> #endif /* CONFIG_ACPI_HOTPLUG_CPU */
>
> +acpi_handle acpi_get_processor_handle(int cpu);
> +
> #ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
> int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr);
> #endif
> @@ -1077,6 +1079,11 @@ static inline bool acpi_sleep_state_supported(u8 sleep_state)
> return false;
> }
>
> +static inline acpi_handle acpi_get_processor_handle(int cpu)
> +{
> + return NULL;
> +}
> +
> #endif /* !CONFIG_ACPI */
>
> extern void arch_post_acpi_subsys_init(void);
> --
On Thu, Apr 18, 2024 at 3:57 PM Jonathan Cameron
<[email protected]> wrote:
>
> From: James Morse <[email protected]>
>
> The arm64 specific arch_register_cpu() call may defer CPU registration
> until the ACPI interpreter is available and the _STA method can
> be evaluated.
>
> If this occurs, then a second attempt is made in
> acpi_processor_get_info(). Note that the arm64 specific call has
> not yet been added so for now this will be called for the original
> hotplug case.
>
> For architectures that do not defer until the ACPI Processor
> driver loads (e.g. x86), for initially present CPUs there will
> already be a CPU device. If present do not try to register again.
>
> Systems can still be booted with 'acpi=off', or not include an
> ACPI description at all as in these cases arch_register_cpu()
> will not have deferred registration when first called.
>
> This moves the CPU register logic back to a subsys_initcall(),
> while the memory nodes will have been registered earlier.
> Note this is where the call was prior to the cleanup series so
> there should be no side effects of moving it back again for this
> specific case.
>
> [PATCH 00/21] Initial cleanups for vCPU HP.
> https://lore.kernel.org/all/ZVyz%[email protected]/
> commit 5b95f94c3b9f ("x86/topology: Switch over to GENERIC_CPU_DEVICES")
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Co-developed-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
> ---
> v7: Simplify the logic on whether to hotadd the CPU.
> This path can only be reached either for coldplug in which
> case all we care about is has register_cpu() already been
> called (identifying deferred), or hotplug in which case
> whether register_cpu() has been called is also sufficient.
> Checks on _STA related elements or the validity of the ID
> are no longer necessary here due to similar checks having
> moved elsewhere in the path.
> v6: Squash the two paths for conventional CPU Hotplug and arm64
> vCPU HP.
> ---
> drivers/acpi/acpi_processor.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index 127ae8dcb787..4e65011e706c 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -350,14 +350,14 @@ static int acpi_processor_get_info(struct acpi_device *device)
> }
>
> /*
> - * Extra Processor objects may be enumerated on MP systems with
> - * less than the max # of CPUs. They should be ignored _iff
> - * they are physically not present.
> - *
> - * NOTE: Even if the processor has a cpuid, it may not be present
> - * because cpuid <-> apicid mapping is persistent now.
> + * This code is not called unless we know the CPU is present and
> + * enabled. The two paths are:
> + * a) Initially present CPUs on architectures that do not defer
> + * their arch_register_cpu() calls until this point.
> + * b) Hotplugged CPUs (enabled bit in _STA has transitioned from not
> + * enabled to enabled)
> */
> - if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
> + if (!get_cpu_device(pr->id)) {
> ret = acpi_processor_hotadd_init(pr, device);
Yes, this is what I thought it should boil down to, so
Acked-by: Rafael J. Wysocki <[email protected]>
>
> if (ret)
> --
> 2.39.2
>
On Thu, Apr 18, 2024 at 3:56 PM Jonathan Cameron
<[email protected]> wrote:
>
> Make the per_cpu(processors, cpu) entries available earlier so that
> they are available in arch_register_cpu() as ARM64 will need access
> to the acpi_handle to distinguish between acpi_processor_add()
> and earlier registration attempts (which will fail as _STA cannot
> be checked).
>
> Reorder the remove flow to clear this per_cpu() after
> arch_unregister_cpu() has completed, allowing it to be used in
> there as well.
>
> Note that on x86 for the CPU hotplug case, the pr->id prior to
> acpi_map_cpu() may be invalid. Thus the per_cpu() structures
> must be initialized after that call or after checking the ID
> is valid (not hotplug path).
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> ---
> v7: Swap order with acpi_unmap_cpu() in acpi_processor_remove()
> to keep it in reverse order of the setup path. (thanks Salil)
> Fix an issue with placement of CONFIG_ACPI_HOTPLUG_CPU guards.
> v6: As per discussion in v5 thread, don't use the cpu->dev and
> make this data available earlier by moving the assignment checks
> into acpi_processor_get_info().
> ---
> drivers/acpi/acpi_processor.c | 78 +++++++++++++++++++++--------------
> 1 file changed, 46 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index ba0a6f0ac841..ac7ddb30f10e 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -183,8 +183,36 @@ static void __init acpi_pcc_cpufreq_init(void) {}
> #endif /* CONFIG_X86 */
>
> /* Initialization */
> +static DEFINE_PER_CPU(void *, processor_device_array);
> +
> +static void acpi_processor_set_per_cpu(struct acpi_processor *pr,
> + struct acpi_device *device)
> +{
> + BUG_ON(pr->id >= nr_cpu_ids);
> + /*
> + * Buggy BIOS check.
> + * ACPI id of processors can be reported wrongly by the BIOS.
> + * Don't trust it blindly
> + */
> + if (per_cpu(processor_device_array, pr->id) != NULL &&
> + per_cpu(processor_device_array, pr->id) != device) {
> + dev_warn(&device->dev,
> + "BIOS reported wrong ACPI id %d for the processor\n",
> + pr->id);
> + /* Give up, but do not abort the namespace scan. */
> + return;
In this case the caller should make acpi_processor_add() return 0, I
think, because otherwise it will attempt to acpi_bind_one() "pr" to
"device" which will confuse things.
So I would make this return false to indicate that.
Or just fold it into the caller and do the error handling there.
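IOW, something along these lines (untested sketch; everything after the
buggy-BIOS check is inferred from the patch description rather than
copied from the patch, so the details may differ):

static bool acpi_processor_set_per_cpu(struct acpi_processor *pr,
				       struct acpi_device *device)
{
	BUG_ON(pr->id >= nr_cpu_ids);

	/*
	 * Buggy BIOS check.
	 * ACPI id of processors can be reported wrongly by the BIOS.
	 * Don't trust it blindly
	 */
	if (per_cpu(processor_device_array, pr->id) != NULL &&
	    per_cpu(processor_device_array, pr->id) != device) {
		dev_warn(&device->dev,
			 "BIOS reported wrong ACPI id %d for the processor\n",
			 pr->id);
		/* Tell the caller to give up without aborting the scan. */
		return false;
	}

	per_cpu(processor_device_array, pr->id) = device;
	per_cpu(processors, pr->id) = pr;

	return true;
}

with acpi_processor_add() doing something like:

	if (!acpi_processor_set_per_cpu(pr, device)) {
		/* Skip binding, but do not abort the namespace scan. */
		result = 0;
		goto err_unwind;	/* whichever unwind label applies */
	}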
On Thu, Apr 18, 2024 at 3:58 PM Jonathan Cameron
<[email protected]> wrote:
>
> From: James Morse <[email protected]>
>
> struct acpi_scan_handler has a detach callback that is used to remove
> a driver when a bus is changed. When interacting with an eject-request,
> the detach callback is called before _EJ0.
>
> This means the ACPI processor driver can't use _STA to determine if a
> CPU has been made not-present, or some of the other _STA bits have been
> changed. acpi_processor_remove() needs to know the value of _STA after
> _EJ0 has been called.
>
> Add a post_eject callback to struct acpi_scan_handler. This is called
> after acpi_scan_hot_remove() has successfully called _EJ0. Because
> acpi_scan_check_and_detach() also clears the handler pointer,
> it needs to be told if the caller will go on to call
> acpi_bus_post_eject(), so that acpi_device_clear_enumerated()
> and clearing the handler pointer can be deferred.
> An extra flag is added to flags field introduced in the previous
> patch to achieve this.
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Jonathan Cameron <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
No objections:
Acked-by: Rafael J. Wysocki <[email protected]>
> ----
> v7:
> - No change.
> v6:
> - Switch to flags.
> Russell, you hadn't signed off on this when posting last time.
> Do you want to insert a suitable tag now?
> v5:
> - Rebase to take into account the changes to scan handling in the
> meantime.
> ---
> drivers/acpi/acpi_processor.c | 4 ++--
> drivers/acpi/scan.c | 30 +++++++++++++++++++++++++++---
> include/acpi/acpi_bus.h | 1 +
> 3 files changed, 30 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index 4e65011e706c..beb1761db579 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -471,7 +471,7 @@ static int acpi_processor_add(struct acpi_device *device,
>
> #ifdef CONFIG_ACPI_HOTPLUG_CPU
> /* Removal */
> -static void acpi_processor_remove(struct acpi_device *device)
> +static void acpi_processor_post_eject(struct acpi_device *device)
> {
> struct acpi_processor *pr;
>
> @@ -639,7 +639,7 @@ static struct acpi_scan_handler processor_handler = {
> .ids = processor_device_ids,
> .attach = acpi_processor_add,
> #ifdef CONFIG_ACPI_HOTPLUG_CPU
> - .detach = acpi_processor_remove,
> + .post_eject = acpi_processor_post_eject,
> #endif
> .hotplug = {
> .enabled = true,
> diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
> index 1ec9677e6c2d..3ec54624664a 100644
> --- a/drivers/acpi/scan.c
> +++ b/drivers/acpi/scan.c
> @@ -245,6 +245,7 @@ static int acpi_scan_try_to_offline(struct acpi_device *device)
> }
>
> #define ACPI_SCAN_CHECK_FLAG_STATUS BIT(0)
> +#define ACPI_SCAN_CHECK_FLAG_EJECT BIT(1)
>
> static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
> {
> @@ -273,8 +274,6 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
> if (handler) {
> if (handler->detach)
> handler->detach(adev);
> -
> - adev->handler = NULL;
> } else {
> device_release_driver(&adev->dev);
> }
> @@ -284,6 +283,28 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
> */
> acpi_device_set_power(adev, ACPI_STATE_D3_COLD);
> adev->flags.initialized = false;
> +
> + /* For eject this is deferred to acpi_bus_post_eject() */
> + if (!(flags & ACPI_SCAN_CHECK_FLAG_EJECT)) {
> + adev->handler = NULL;
> + acpi_device_clear_enumerated(adev);
> + }
> + return 0;
> +}
> +
> +static int acpi_bus_post_eject(struct acpi_device *adev, void *not_used)
> +{
> + struct acpi_scan_handler *handler = adev->handler;
> +
> + acpi_dev_for_each_child_reverse(adev, acpi_bus_post_eject, NULL);
> +
> + if (handler) {
> + if (handler->post_eject)
> + handler->post_eject(adev);
> +
> + adev->handler = NULL;
> + }
> +
> acpi_device_clear_enumerated(adev);
>
> return 0;
> @@ -301,6 +322,7 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
> acpi_handle handle = device->handle;
> unsigned long long sta;
> acpi_status status;
> + uintptr_t flags = ACPI_SCAN_CHECK_FLAG_EJECT;
>
> if (device->handler && device->handler->hotplug.demand_offline) {
> if (!acpi_scan_is_offline(device, true))
> @@ -313,7 +335,7 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
>
> acpi_handle_debug(handle, "Ejecting\n");
>
> - acpi_bus_trim(device);
> + acpi_scan_check_and_detach(device, (void *)flags);
>
> acpi_evaluate_lck(handle, 0);
> /*
> @@ -336,6 +358,8 @@ static int acpi_scan_hot_remove(struct acpi_device *device)
> } else if (sta & ACPI_STA_DEVICE_ENABLED) {
> acpi_handle_warn(handle,
> "Eject incomplete - status 0x%llx\n", sta);
> + } else {
> + acpi_bus_post_eject(device, NULL);
> }
>
> return 0;
> diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
> index e7796f373d0d..51a4b936f19e 100644
> --- a/include/acpi/acpi_bus.h
> +++ b/include/acpi/acpi_bus.h
> @@ -129,6 +129,7 @@ struct acpi_scan_handler {
> bool (*match)(const char *idstr, const struct acpi_device_id **matchid);
> int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
> void (*detach)(struct acpi_device *dev);
> + void (*post_eject)(struct acpi_device *dev);
> void (*bind)(struct device *phys_dev);
> void (*unbind)(struct device *phys_dev);
> struct acpi_hotplug_profile hotplug;
> --
> 2.39.2
>
On Thu, Apr 18, 2024 at 3:57 PM Jonathan Cameron
<[email protected]> wrote:
>
> Precursor patch adds the ability to pass a uintptr_t of flags into
> acpi_scan_check_and_detach() so that additional flags can be
> added to indicate whether to defer portions of the eject flow.
> The new flag follows in the next patch.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
I have no specific heartburn related to this, so
Acked-by: Rafael J. Wysocki <[email protected]>
> ---
> v7: No change
> v6: Based on internal feedback switch to less invasive change
> to using flags rather than a struct.
> ---
> drivers/acpi/scan.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
> index d1464324de95..1ec9677e6c2d 100644
> --- a/drivers/acpi/scan.c
> +++ b/drivers/acpi/scan.c
> @@ -244,13 +244,16 @@ static int acpi_scan_try_to_offline(struct acpi_device *device)
> return 0;
> }
>
> -static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
> +#define ACPI_SCAN_CHECK_FLAG_STATUS BIT(0)
> +
> +static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
> {
> struct acpi_scan_handler *handler = adev->handler;
> + uintptr_t flags = (uintptr_t)p;
>
> - acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, check);
> + acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, p);
>
> - if (check) {
> + if (flags & ACPI_SCAN_CHECK_FLAG_STATUS) {
> acpi_bus_get_status(adev);
> /*
> * Skip devices that are still there and take the enabled
> @@ -288,7 +291,9 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
>
> static void acpi_scan_check_subtree(struct acpi_device *adev)
> {
> - acpi_scan_check_and_detach(adev, (void *)true);
> + uintptr_t flags = ACPI_SCAN_CHECK_FLAG_STATUS;
> +
> + acpi_scan_check_and_detach(adev, (void *)flags);
> }
>
> static int acpi_scan_hot_remove(struct acpi_device *device)
> @@ -2601,7 +2606,9 @@ EXPORT_SYMBOL(acpi_bus_scan);
> */
> void acpi_bus_trim(struct acpi_device *adev)
> {
> - acpi_scan_check_and_detach(adev, NULL);
> + uintptr_t flags = 0;
> +
> + acpi_scan_check_and_detach(adev, (void *)flags);
> }
> EXPORT_SYMBOL_GPL(acpi_bus_trim);
>
> --
On Thu, Apr 18, 2024 at 9:50 PM Rafael J. Wysocki <[email protected]> wrote:
>
> On Thu, Apr 18, 2024 at 3:54 PM Jonathan Cameron
> <[email protected]> wrote:
> >
> > Whilst it is a bit quick after v6, a couple of critical issues
> > were pointed out by Russell, Salil and Rafael + one build issue
> > had been missed, so it seems sensible to make sure those conducting
> > testing or further review have access to a fixed version.
> >
> > v7:
> > - Fix misplaced config guard that broke bisection.
> > - Greatly simplify the condition on which we call
> > acpi_processor_hotadd_init().
> > - Improve teardown ordering.
>
> Thank you for the update!
>
> From a quick look, patches [01-08/16] appear to be good now, but I'll
> do a more detailed review on the following days.
Done now, I've sent comments on patches [4-5/16].
The other patches in the first half of the series LGTM.
I can't say much about the ARM64-specific part, and the last patch has
already been ACKed by Thomas.
Thanks!
On 2024/4/18 21:53, Jonathan Cameron wrote:
> Separate code paths, combined with a flag set in acpi_processor.c to
> indicate a struct acpi_processor was for a hotplugged CPU ensured that
> per CPU data was only set up the first time that a CPU was initialized.
> This appears to be unnecessary as the paths can be combined by letting
> the online logic also handle any CPUs online at the time of driver load.
>
> Motivation for this change, beyond simplification, is that ARM64
> virtual CPU HP uses the same code paths for hotplug and cold path in
> acpi_processor.c so had no easy way to set the flag for hotplug only.
> Removing this necessity will enable ARM64 vCPU HP to reuse the existing
> code paths.
>
> Leave noisy pr_info() in place but update it to not state the CPU
> was hotplugged.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: No change.
> v6: New patch.
> RFT: I have very limited test resources for x86 and other
> architectures that may be affected by this change.
> ---
> drivers/acpi/acpi_processor.c | 1 -
> drivers/acpi/processor_driver.c | 44 ++++++++++-----------------------
> include/acpi/processor.h | 2 +-
> 3 files changed, 14 insertions(+), 33 deletions(-)
Nice simplification,
Reviewed-by: Hanjun Guo <[email protected]>
Thanks
Hanjun
On 2024/4/18 21:53, Jonathan Cameron wrote:
> For arm64 the CPU registration cannot complete until the ACPI
> interpreter is up and running, so in those cases the arch specific
> arch_register_cpu() will return -EPROBE_DEFER at this stage and the
> registration will be attempted later.
>
> Suggested-by: Rafael J. Wysocki <[email protected]>
> Acked-by: Rafael J. Wysocki <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: Fix condition to not print the error message of success (thanks Russell!)
> ---
> drivers/base/cpu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
> index 56fba44ba391..7b83e9c87d7c 100644
> --- a/drivers/base/cpu.c
> +++ b/drivers/base/cpu.c
> @@ -558,7 +558,7 @@ static void __init cpu_dev_register_generic(void)
>
> for_each_present_cpu(i) {
> ret = arch_register_cpu(i);
> - if (ret)
> + if (ret && ret != -EPROBE_DEFER)
> pr_warn("register_cpu %d failed (%d)\n", i, ret);
> }
> }
Reviewed-by: Hanjun Guo <[email protected]>
Thanks
Hanjun
On 2024/4/18 21:53, Jonathan Cameron wrote:
> The ACPI bus scan will only result in acpi_processor_add() being called
> if _STA has already been checked and the result is that the
> processor is enabled and present. Hence drop this additional check.
>
> Suggested-by: Rafael J. Wysocki <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: No change
> v6: New patch to drop this unnecessary code. Now I think we only
> need to explicitly read STA to print a warning in the ARM64
> arch_unregister_cpu() path where we want to know if the
> present bit has been unset as well.
> ---
> drivers/acpi/acpi_processor.c | 6 ------
> 1 file changed, 6 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index 7fc924aeeed0..ba0a6f0ac841 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -186,17 +186,11 @@ static void __init acpi_pcc_cpufreq_init(void) {}
> #ifdef CONFIG_ACPI_HOTPLUG_CPU
> static int acpi_processor_hotadd_init(struct acpi_processor *pr)
> {
> - unsigned long long sta;
> - acpi_status status;
> int ret;
>
> if (invalid_phys_cpuid(pr->phys_id))
> return -ENODEV;
>
> - status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
> - if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_PRESENT))
> - return -ENODEV;
> -
> cpu_maps_update_begin();
> cpus_write_lock();
Since the status bits were checked before acpi_processor_add() is
called, do we need to remove the if (!acpi_device_is_enabled(device))
check in acpi_processor_add() as well?
Thanks
Hanjun
On Tue, Apr 23, 2024 at 8:49 AM Hanjun Guo <[email protected]> wrote:
>
> On 2024/4/18 21:53, Jonathan Cameron wrote:
> > The ACPI bus scan will only result in acpi_processor_add() being called
> > if _STA has already been checked and the result is that the
> > processor is enabled and present. Hence drop this additional check.
> >
> > Suggested-by: Rafael J. Wysocki <[email protected]>
> > Signed-off-by: Jonathan Cameron <[email protected]>
> >
> > ---
> > v7: No change
> > v6: New patch to drop this unnecessary code. Now I think we only
> > need to explicitly read STA to print a warning in the ARM64
> > arch_unregister_cpu() path where we want to know if the
> > present bit has been unset as well.
> > ---
> > drivers/acpi/acpi_processor.c | 6 ------
> > 1 file changed, 6 deletions(-)
> >
> > diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> > index 7fc924aeeed0..ba0a6f0ac841 100644
> > --- a/drivers/acpi/acpi_processor.c
> > +++ b/drivers/acpi/acpi_processor.c
> > @@ -186,17 +186,11 @@ static void __init acpi_pcc_cpufreq_init(void) {}
> > #ifdef CONFIG_ACPI_HOTPLUG_CPU
> > static int acpi_processor_hotadd_init(struct acpi_processor *pr)
> > {
> > - unsigned long long sta;
> > - acpi_status status;
> > int ret;
> >
> > if (invalid_phys_cpuid(pr->phys_id))
> > return -ENODEV;
> >
> > - status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
> > - if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_PRESENT))
> > - return -ENODEV;
> > -
> > cpu_maps_update_begin();
> > cpus_write_lock();
>
> Since the status bits were checked before acpi_processor_add() being
> called, do we need to remove the if (!acpi_device_is_enabled(device))
> check in acpi_processor_add() as well?
No, because its caller only checks the present bit. The function
itself checks the enabled bit.
On 2024/4/23 17:31, Rafael J. Wysocki wrote:
> On Tue, Apr 23, 2024 at 8:49 AM Hanjun Guo <[email protected]> wrote:
>>
>> On 2024/4/18 21:53, Jonathan Cameron wrote:
>>> The ACPI bus scan will only result in acpi_processor_add() being called
>>> if _STA has already been checked and the result is that the
>>> processor is enabled and present. Hence drop this additional check.
>>>
>>> Suggested-by: Rafael J. Wysocki <[email protected]>
>>> Signed-off-by: Jonathan Cameron <[email protected]>
>>>
>>> ---
>>> v7: No change
>>> v6: New patch to drop this unnecessary code. Now I think we only
>>> need to explicitly read STA to print a warning in the ARM64
>>> arch_unregister_cpu() path where we want to know if the
>>> present bit has been unset as well.
>>> ---
>>> drivers/acpi/acpi_processor.c | 6 ------
>>> 1 file changed, 6 deletions(-)
>>>
>>> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
>>> index 7fc924aeeed0..ba0a6f0ac841 100644
>>> --- a/drivers/acpi/acpi_processor.c
>>> +++ b/drivers/acpi/acpi_processor.c
>>> @@ -186,17 +186,11 @@ static void __init acpi_pcc_cpufreq_init(void) {}
>>> #ifdef CONFIG_ACPI_HOTPLUG_CPU
>>> static int acpi_processor_hotadd_init(struct acpi_processor *pr)
>>> {
>>> - unsigned long long sta;
>>> - acpi_status status;
>>> int ret;
>>>
>>> if (invalid_phys_cpuid(pr->phys_id))
>>> return -ENODEV;
>>>
>>> - status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
>>> - if (ACPI_FAILURE(status) || !(sta & ACPI_STA_DEVICE_PRESENT))
>>> - return -ENODEV;
>>> -
>>> cpu_maps_update_begin();
>>> cpus_write_lock();
>>
>> Since the status bits were checked before acpi_processor_add() being
>> called, do we need to remove the if (!acpi_device_is_enabled(device))
>> check in acpi_processor_add() as well?
>
> No, because its caller only checks the present bit. The function
> itself checks the enabled bit.
Thanks for the pointer, I can see the detail in acpi_bus_attach()
now,
Reviewed-by: Hanjun Guo <[email protected]>
Thanks
Hanjun
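For orientation, the division of labour settled on above can be sketched like this (simplified, not the literal scan code): the bus scan path checks the present bit before attaching a handler, while acpi_processor_add() separately checks the enabled bit via acpi_device_is_enabled().

/* Simplified sketch of the split discussed above. */
static int scan_attach_sketch(struct acpi_device *adev)
{
	acpi_bus_get_status(adev);
	if (!adev->status.present)		/* present bit: checked on the scan path */
		return -ENODEV;

	return adev->handler->attach(adev, NULL);	/* e.g. acpi_processor_add() */
}

static int acpi_processor_add_sketch(struct acpi_device *device,
				     const struct acpi_device_id *id)
{
	if (!acpi_device_is_enabled(device))	/* enabled bit: checked here */
		return 0;

	/* ... allocate struct acpi_processor and call acpi_processor_get_info() ... */
	return 0;
}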
> @@ -232,6 +263,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
> acpi_status status = AE_OK;
> static int cpu0_initialized;
> unsigned long long value;
> + int ret;
>
> acpi_processor_errata();
>
> @@ -316,10 +348,12 @@ static int acpi_processor_get_info(struct acpi_device *device)
> * because cpuid <-> apicid mapping is persistent now.
> */
> if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
> - int ret = acpi_processor_hotadd_init(pr);
> + ret = acpi_processor_hotadd_init(pr, device);
>
> if (ret)
> - return ret;
> + goto err;
> + } else {
> + acpi_processor_set_per_cpu(pr, device);
> }
>
> /*
> @@ -357,6 +391,10 @@ static int acpi_processor_get_info(struct acpi_device *device)
> arch_fix_phys_package_id(pr->id, value);
>
> return 0;
> +
> +err:
> + per_cpu(processors, pr->id) = NULL;
..
> + return ret;
> }
>
> /*
> @@ -365,8 +403,6 @@ static int acpi_processor_get_info(struct acpi_device *device)
> * (cpu_data(cpu)) values, like CPU feature flags, family, model, etc.
> * Such things have to be put in and set up by the processor driver's .probe().
> */
> -static DEFINE_PER_CPU(void *, processor_device_array);
> -
> static int acpi_processor_add(struct acpi_device *device,
> const struct acpi_device_id *id)
> {
> @@ -395,28 +431,6 @@ static int acpi_processor_add(struct acpi_device *device,
> if (result) /* Processor is not physically present or unavailable */
> return 0;
>
> - BUG_ON(pr->id >= nr_cpu_ids);
> -
> - /*
> - * Buggy BIOS check.
> - * ACPI id of processors can be reported wrongly by the BIOS.
> - * Don't trust it blindly
> - */
> - if (per_cpu(processor_device_array, pr->id) != NULL &&
> - per_cpu(processor_device_array, pr->id) != device) {
> - dev_warn(&device->dev,
> - "BIOS reported wrong ACPI id %d for the processor\n",
> - pr->id);
> - /* Give up, but do not abort the namespace scan. */
> - goto err;
> - }
> - /*
> - * processor_device_array is not cleared on errors to allow buggy BIOS
> - * checks.
> - */
> - per_cpu(processor_device_array, pr->id) = device;
> - per_cpu(processors, pr->id) = pr;
Nit: seems we need to remove the duplicated
per_cpu(processors, pr->id) = NULL; in acpi_processor_add():
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -446,7 +446,6 @@ static int acpi_processor_add(struct acpi_device
*device,
err:
free_cpumask_var(pr->throttling.shared_cpu_map);
device->driver_data = NULL;
- per_cpu(processors, pr->id) = NULL;
err_free_pr:
kfree(pr);
return result;
Thanks
Hanjun
On Mon, 22 Apr 2024 11:40:20 +0100,
Jonathan Cameron <[email protected]> wrote:
>
> On Thu, 18 Apr 2024 14:54:07 +0100
> Jonathan Cameron <[email protected]> wrote:
>
> > From: James Morse <[email protected]>
> >
> > To support virtual CPU hotplug, ACPI has added an 'online capable' bit
> > to the MADT GICC entries. This indicates a disabled CPU entry may not
> > be possible to online via PSCI until firmware has set the enabled bit in
> > _STA.
> >
> > This means that a "usable" GIC is one that is marked as either enabled,
> > or online capable. Therefore, change acpi_gicc_is_usable() to check both
> > bits. However, we need to change the test in gic_acpi_match_gicc() back
> > to testing just the enabled bit so the count of enabled distributors is
> > correct.
> >
> > What about the redistributor in the GICC entry? ACPI doesn't want to say.
> > Assume the worst: When a redistributor is described in the GICC entry,
> > but the entry is marked as disabled at boot, assume the redistributor
> > is inaccessible.
> >
> > The GICv3 driver doesn't support late online of redistributors, so this
> > means the corresponding CPU can't be brought online either. Clear the
> > possible and present bits.
> >
> > Systems that want CPU hotplug in a VM can ensure their redistributors
> > are always-on, and describe them that way with a GICR entry in the MADT.
> >
> > When mapping redistributors found via GICC entries, handle the case
> > where the arch code believes the CPU is present and possible, but it
> > does not have an accessible redistributor. Print a warning and clear
> > the present and possible bits.
> >
> > Signed-off-by: James Morse <[email protected]>
> > Signed-off-by: Russell King (Oracle) <[email protected]>
> > Signed-off-by: Jonathan Cameron <[email protected]>
>
> +CC Marc,
>
> Whilst this has been unchanged for a long time, I'm not 100% sure
> we've specifically drawn your attention to it before now.
>
> Jonathan
>
> >
> > ---
> > v7: No Change.
> > ---
> > drivers/irqchip/irq-gic-v3.c | 21 +++++++++++++++++++--
> > include/linux/acpi.h | 3 ++-
> > 2 files changed, 21 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > index 10af15f93d4d..66132251c1bb 100644
> > --- a/drivers/irqchip/irq-gic-v3.c
> > +++ b/drivers/irqchip/irq-gic-v3.c
> > @@ -2363,11 +2363,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
> > (struct acpi_madt_generic_interrupt *)header;
> > u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
> > u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
> > + int cpu = get_cpu_for_acpi_id(gicc->uid);
> > void __iomem *redist_base;
> >
> > if (!acpi_gicc_is_usable(gicc))
> > return 0;
> >
> > + /*
> > + * Capable but disabled CPUs can be brought online later. What about
> > + * the redistributor? ACPI doesn't want to say!
> > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > + * Otherwise, prevent such CPUs from being brought online.
> > + */
> > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > + set_cpu_present(cpu, false);
> > + set_cpu_possible(cpu, false);
> > + return 0;
> > + }
It seems dangerous to clear those this late in the game, given how
disconnected from the architecture code this is. Are we sure that
nothing has sampled these cpumasks beforehand?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
On 2024/4/18 21:54, Jonathan Cameron wrote:
> From: James Morse <[email protected]>
>
> struct acpi_scan_handler has a detach callback that is used to remove
> a driver when a bus is changed. When interacting with an eject-request,
> the detach callback is called before _EJ0.
>
> This means the ACPI processor driver can't use _STA to determine if a
> CPU has been made not-present, or some of the other _STA bits have been
> changed. acpi_processor_remove() needs to know the value of _STA after
> _EJ0 has been called.
>
> Add a post_eject callback to struct acpi_scan_handler. This is called
> after acpi_scan_hot_remove() has successfully called _EJ0. Because
> acpi_scan_check_and_detach() also clears the handler pointer,
> it needs to be told if the caller will go on to call
> acpi_bus_post_eject(), so that acpi_device_clear_enumerated()
> and clearing the handler pointer can be deferred.
> An extra flag is added to the flags field introduced in the previous
> patch to achieve this.
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Jonathan Cameron <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
Reviewed-by: Hanjun Guo <[email protected]>
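An abbreviated sketch of the structure change being described (only the fields relevant here are shown; the real structure has more members):

struct acpi_scan_handler {
	const struct acpi_device_id *ids;
	struct list_head list_node;
	int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
	void (*detach)(struct acpi_device *dev);
	/*
	 * New: invoked only after acpi_scan_hot_remove() has successfully
	 * evaluated _EJ0, so the processor handler can read _STA afterwards.
	 */
	void (*post_eject)(struct acpi_device *dev);
	/* ... remaining members unchanged ... */
};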
On 2024/4/18 21:54, Jonathan Cameron wrote:
> From: James Morse <[email protected]>
>
> ACPI identifies CPUs by UID. get_cpu_for_acpi_id() maps the ACPI UID
> to the Linux CPU number.
>
> The helper to retrieve this mapping is only available in arm64's NUMA
> code.
>
> Move it to live next to get_acpi_id_for_cpu().
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Jonathan Cameron <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
Looks good to me,
Acked-by: Hanjun Guo <[email protected]>
Thanks
Hanjun
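For reference, the helper being moved is a straightforward reverse lookup over the logical CPUs; it should look essentially like this (reproduced from memory, so treat the exact form as approximate):

static inline int get_cpu_for_acpi_id(u32 uid)
{
	int cpu;

	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
		if (uid == get_acpi_id_for_cpu(cpu))
			return cpu;

	return -EINVAL;
}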
On 2024/4/18 21:54, Jonathan Cameron wrote:
> This precursor patch adds the ability to pass a uintptr_t of flags into
> acpi_scan_check_and_detach() so that additional flags can be
> added to indicate whether to defer portions of the eject flow.
> The new flag follows in the next patch.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: No change
> v6: Based on internal feedback switch to less invasive change
> to using flags rather than a struct.
> ---
> drivers/acpi/scan.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
> index d1464324de95..1ec9677e6c2d 100644
> --- a/drivers/acpi/scan.c
> +++ b/drivers/acpi/scan.c
> @@ -244,13 +244,16 @@ static int acpi_scan_try_to_offline(struct acpi_device *device)
> return 0;
> }
>
> -static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
> +#define ACPI_SCAN_CHECK_FLAG_STATUS BIT(0)
> +
> +static int acpi_scan_check_and_detach(struct acpi_device *adev, void *p)
> {
> struct acpi_scan_handler *handler = adev->handler;
> + uintptr_t flags = (uintptr_t)p;
>
> - acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, check);
> + acpi_dev_for_each_child_reverse(adev, acpi_scan_check_and_detach, p);
>
> - if (check) {
> + if (flags & ACPI_SCAN_CHECK_FLAG_STATUS) {
> acpi_bus_get_status(adev);
> /*
> * Skip devices that are still there and take the enabled
> @@ -288,7 +291,9 @@ static int acpi_scan_check_and_detach(struct acpi_device *adev, void *check)
>
> static void acpi_scan_check_subtree(struct acpi_device *adev)
> {
> - acpi_scan_check_and_detach(adev, (void *)true);
> + uintptr_t flags = ACPI_SCAN_CHECK_FLAG_STATUS;
> +
> + acpi_scan_check_and_detach(adev, (void *)flags);
> }
>
> static int acpi_scan_hot_remove(struct acpi_device *device)
> @@ -2601,7 +2606,9 @@ EXPORT_SYMBOL(acpi_bus_scan);
> */
> void acpi_bus_trim(struct acpi_device *adev)
> {
> - acpi_scan_check_and_detach(adev, NULL);
> + uintptr_t flags = 0;
> +
> + acpi_scan_check_and_detach(adev, (void *)flags);
> }
> EXPORT_SYMBOL_GPL(acpi_bus_trim);
Reviewed-by: Hanjun Guo <[email protected]>
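To illustrate where this leads, the follow-up flag mentioned above would slot into the same bitmask and travel through the same void * argument; the flag name and caller below are placeholders for illustration, not the actual next patch:

#define ACPI_SCAN_CHECK_FLAG_STATUS	BIT(0)
#define ACPI_SCAN_CHECK_FLAG_EJECT	BIT(1)	/* placeholder name */

static int example_hot_remove(struct acpi_device *device)
{
	uintptr_t flags = ACPI_SCAN_CHECK_FLAG_EJECT;

	/* Defer the post-eject portion of the teardown via the new flag. */
	acpi_scan_check_and_detach(device, (void *)flags);

	/* ... evaluate _EJ0, then complete the deferred work ... */
	return 0;
}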
On 2024/4/18 21:54, Jonathan Cameron wrote:
> From: James Morse <[email protected]>
>
> The arm64 specific arch_register_cpu() call may defer CPU registration
> until the ACPI interpreter is available and the _STA method can
> be evaluated.
>
> If this occurs, then a second attempt is made in
> acpi_processor_get_info(). Note that the arm64 specific call has
> not yet been added so for now this will be called for the original
> hotplug case.
>
> For architectures that do not defer until the ACPI Processor
> driver loads (e.g. x86), for initially present CPUs there will
> already be a CPU device. If present do not try to register again.
>
> Systems can still be booted with 'acpi=off', or not include an
> ACPI description at all as in these cases arch_register_cpu()
> will not have deferred registration when first called.
>
> This moves the CPU register logic back to a subsys_initcall(),
> while the memory nodes will have been registered earlier.
> Note this is where the call was prior to the cleanup series so
> there should be no side effects of moving it back again for this
> specific case.
>
> [PATCH 00/21] Initial cleanups for vCPU HP.
> https://lore.kernel.org/all/ZVyz%[email protected]/
> commit 5b95f94c3b9f ("x86/topology: Switch over to GENERIC_CPU_DEVICES")
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Co-developed-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
> ---
> v7: Simplify the logic on whether to hotadd the CPU.
> This path can only be reached either for coldplug, in which
> case all we care about is whether register_cpu() has already been
> called (identifying deferral), or for hotplug, in which case
> whether register_cpu() has been called is also sufficient.
> Checks on _STA related elements or the validity of the ID
> are no longer necessary here due to similar checks having
> moved elsewhere in the path.
> v6: Squash the two paths for conventional CPU Hotplug and arm64
> vCPU HP.
> ---
> drivers/acpi/acpi_processor.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index 127ae8dcb787..4e65011e706c 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -350,14 +350,14 @@ static int acpi_processor_get_info(struct acpi_device *device)
> }
>
> /*
> - * Extra Processor objects may be enumerated on MP systems with
> - * less than the max # of CPUs. They should be ignored _iff
> - * they are physically not present.
> - *
> - * NOTE: Even if the processor has a cpuid, it may not be present
> - * because cpuid <-> apicid mapping is persistent now.
> + * This code is not called unless we know the CPU is present and
> + * enabled. The two paths are:
> + * a) Initially present CPUs on architectures that do not defer
> + * their arch_register_cpu() calls until this point.
> + * b) Hotplugged CPUs (enabled bit in _STA has transitioned from not
> + * enabled to enabled)
> */
> - if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
> + if (!get_cpu_device(pr->id)) {
> ret = acpi_processor_hotadd_init(pr, device);
>
> if (ret)
Reviewed-by: Hanjun Guo <[email protected]>
On Tue, 23 Apr 2024 13:01:21 +0100
Marc Zyngier <[email protected]> wrote:
> On Mon, 22 Apr 2024 11:40:20 +0100,
> Jonathan Cameron <[email protected]> wrote:
> >
> > On Thu, 18 Apr 2024 14:54:07 +0100
> > Jonathan Cameron <[email protected]> wrote:
> >
> > > From: James Morse <[email protected]>
> > >
> > > To support virtual CPU hotplug, ACPI has added an 'online capable' bit
> > > to the MADT GICC entries. This indicates a disabled CPU entry may not
> > > be possible to online via PSCI until firmware has set the enabled bit in
> > > _STA.
> > >
> > > This means that a "usable" GIC is one that is marked as either enabled,
> > > or online capable. Therefore, change acpi_gicc_is_usable() to check both
> > > bits. However, we need to change the test in gic_acpi_match_gicc() back
> > > to testing just the enabled bit so the count of enabled distributors is
> > > correct.
> > >
> > > What about the redistributor in the GICC entry? ACPI doesn't want to say.
> > > Assume the worst: When a redistributor is described in the GICC entry,
> > > but the entry is marked as disabled at boot, assume the redistributor
> > > is inaccessible.
> > >
> > > The GICv3 driver doesn't support late online of redistributors, so this
> > > means the corresponding CPU can't be brought online either. Clear the
> > > possible and present bits.
> > >
> > > Systems that want CPU hotplug in a VM can ensure their redistributors
> > > are always-on, and describe them that way with a GICR entry in the MADT.
> > >
> > > When mapping redistributors found via GICC entries, handle the case
> > > where the arch code believes the CPU is present and possible, but it
> > > does not have an accessible redistributor. Print a warning and clear
> > > the present and possible bits.
> > >
> > > Signed-off-by: James Morse <[email protected]>
> > > Signed-off-by: Russell King (Oracle) <[email protected]>
> > > Signed-off-by: Jonathan Cameron <[email protected]>
> >
> > +CC Marc,
> >
> > Whilst this has been unchanged for a long time, I'm not 100% sure
> > we've specifically drawn your attention to it before now.
> >
> > Jonathan
> >
> > >
> > > ---
> > > v7: No Change.
> > > ---
> > > drivers/irqchip/irq-gic-v3.c | 21 +++++++++++++++++++--
> > > include/linux/acpi.h | 3 ++-
> > > 2 files changed, 21 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > > index 10af15f93d4d..66132251c1bb 100644
> > > --- a/drivers/irqchip/irq-gic-v3.c
> > > +++ b/drivers/irqchip/irq-gic-v3.c
> > > @@ -2363,11 +2363,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
> > > (struct acpi_madt_generic_interrupt *)header;
> > > u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
> > > u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
> > > + int cpu = get_cpu_for_acpi_id(gicc->uid);
> > > void __iomem *redist_base;
> > >
> > > if (!acpi_gicc_is_usable(gicc))
> > > return 0;
> > >
> > > + /*
> > > + * Capable but disabled CPUs can be brought online later. What about
> > > + * the redistributor? ACPI doesn't want to say!
> > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > + * Otherwise, prevent such CPUs from being brought online.
> > > + */
> > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > + set_cpu_present(cpu, false);
> > > + set_cpu_possible(cpu, false);
> > > + return 0;
> > > + }
>
> It seems dangerous to clear those this late in the game, given how
> disconnected from the architecture code this is. Are we sure that
> nothing has sampled these cpumasks beforehand?
Hi Marc,
Any firmware that does this is being considered as buggy already
but given it is firmware and the spec doesn't say much about this,
there is always the possibility.
Not much happens between the point where these are set up and
the point where the GIC inits and this code runs, but even if careful
review showed it was fine today, it will be fragile to future changes.
I'm not sure there is a huge disadvantage for such broken firmware in
clearing these masks from the point of view of what is used throughout
the rest of the kernel. Here I think we are just looking to prevent the CPU
being onlined later.
We could add a set_cpu_broken() with appropriate mask.
Given this is very arm64 specific I'm not sure Rafael will be keen on
us checking such a mask in the generic ACPI code, but we could check it in
arch_register_cpu() and just not register the cpu if it matches.
That will cover the vCPU hotplug case.
Does that sound sensible, or would you prefer something else?
Jonathan
>
> Thanks,
>
> M.
>
On Wed, 24 Apr 2024 13:54:38 +0100,
Jonathan Cameron <[email protected]> wrote:
>
> On Tue, 23 Apr 2024 13:01:21 +0100
> Marc Zyngier <[email protected]> wrote:
>
> > On Mon, 22 Apr 2024 11:40:20 +0100,
> > Jonathan Cameron <[email protected]> wrote:
> > >
> > > On Thu, 18 Apr 2024 14:54:07 +0100
> > > Jonathan Cameron <[email protected]> wrote:
[...]
> > >
> > > > + /*
> > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > + * the redistributor? ACPI doesn't want to say!
> > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > + */
> > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > + set_cpu_present(cpu, false);
> > > > + set_cpu_possible(cpu, false);
> > > > + return 0;
> > > > + }
> >
> > It seems dangerous to clear those this late in the game, given how
> > disconnected from the architecture code this is. Are we sure that
> > nothing has sampled these cpumasks beforehand?
>
> Hi Marc,
>
> Any firmware that does this is being considered as buggy already
> but given it is firmware and the spec doesn't say much about this,
> there is always the possibility.
There is no shortage of broken firmware out there, and I expect this
trend to progress.
> Not much happens between the point where these are set up and
> the point where the GIC inits and this code runs, but even if careful
> review showed it was fine today, it will be fragile to future changes.
>
> I'm not sure there is a huge disadvantage for such broken firmware in
> clearing these masks from the point of view of what is used throughout
> the rest of the kernel. Here I think we are just looking to prevent the CPU
> being onlined later.
I totally agree on the goal, I simply question the way you get to it.
>
> We could add a set_cpu_broken() with appropriate mask.
> Given this is very arm64 specific I'm not sure Rafael will be keen on
> us checking such a mask in the generic ACPI code, but we could check it in
> arch_register_cpu() and just not register the cpu if it matches.
> That will cover the vCPU hotplug case.
>
> Does that sound sensible, or would you prefer something else?
Such a 'broken_rdists' mask is exactly what I have in mind, just
keeping it private to the GIC driver and not exposing it anywhere else.
You can then fail the hotplug event early, and avoid changing the
global masks from within the GIC driver. At least, we don't mess with
the internals of the kernel, and the CPU is properly marked as dead
(that mechanism should already work).
I'd expect the handling side to look like this (will not compile, but
you'll get the idea):
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 6fb276504bcc..e8f02bfd0e21 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -1009,6 +1009,9 @@ static int __gic_populate_rdist(struct redist_region *region, void __iomem *ptr)
u64 typer;
u32 aff;
+ if (cpumask_test_cpu(smp_processor_id(), &broken_rdists))
+ return 1;
+
/*
* Convert affinity to a 32bit value that can be matched to
* GICR_TYPER bits [63:32].
@@ -1260,14 +1263,15 @@ static int gic_dist_supports_lpis(void)
!gicv3_nolpi);
}
-static void gic_cpu_init(void)
+static int gic_cpu_init(void)
{
void __iomem *rbase;
- int i;
+ int ret, i;
/* Register ourselves with the rest of the world */
- if (gic_populate_rdist())
- return;
+ ret = gic_populate_rdist();
+ if (ret)
+ return ret;
gic_enable_redist(true);
@@ -1286,6 +1290,8 @@ static void gic_cpu_init(void)
/* initialise system registers */
gic_cpu_sys_reg_init();
+
+ return 0;
}
#ifdef CONFIG_SMP
@@ -1295,7 +1301,11 @@ static void gic_cpu_init(void)
static int gic_starting_cpu(unsigned int cpu)
{
- gic_cpu_init();
+ int ret;
+
+ ret = gic_cpu_init();
+ if (ret)
+ return ret;
if (gic_dist_supports_lpis())
its_cpu_init();
But the question is: do you rely on these masks having been
"corrected" anywhere else?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
On Wed, 24 Apr 2024 17:35:54 +0100
Salil Mehta <[email protected]> wrote:
> > From: Marc Zyngier <[email protected]>
> > Sent: Wednesday, April 24, 2024 4:33 PM
> > To: Jonathan Cameron <[email protected]>
> > Cc: Thomas Gleixner <[email protected]>; Peter Zijlstra
> > <[email protected]>; [email protected];
> > [email protected]; [email protected]; linux-
> > [email protected]; [email protected]; linux-arm-
> > [email protected]; [email protected]; [email protected];
> > Russell King <[email protected]>; Rafael J . Wysocki
> > <[email protected]>; Miguel Luis <[email protected]>; James Morse
> > <[email protected]>; Salil Mehta <[email protected]>; Jean-
> > Philippe Brucker <[email protected]>; Catalin Marinas
> > <[email protected]>; Will Deacon <[email protected]>; Linuxarm
> > <[email protected]>; Ingo Molnar <[email protected]>; Borislav
> > Petkov <[email protected]>; Dave Hansen <[email protected]>;
> > [email protected]; [email protected]
> > Subject: Re: [PATCH v7 11/16] irqchip/gic-v3: Add support for ACPI's
> > disabled but 'online capable' CPUs
> >
> > On Wed, 24 Apr 2024 13:54:38 +0100,
> > Jonathan Cameron <[email protected]> wrote:
> > >
> > > On Tue, 23 Apr 2024 13:01:21 +0100
> > > Marc Zyngier <[email protected]> wrote:
> > >
> > > > On Mon, 22 Apr 2024 11:40:20 +0100,
> > > > Jonathan Cameron <[email protected]> wrote:
> > > > >
> > > > > On Thu, 18 Apr 2024 14:54:07 +0100 Jonathan Cameron
> > > > > <[email protected]> wrote:
> >
> > [...]
> >
> > > > >
> > > > > > + /*
> > > > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > > > + * the redistributor? ACPI doesn't want to say!
> > > > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > > > + */
> > > > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > > > + set_cpu_present(cpu, false);
> > > > > > + set_cpu_possible(cpu, false);
>
> (a digression) shouldn't we be clearing the enabled mask as well?
>
> set_cpu_enabled(cpu, false);
FWIW I think that's not necessary. The enabled mask is only set in register_cpu() and the aim
here is to never call that for CPUs in this state.
Anyhow, I got distracted by the firmware bug I found whilst trying to test this, but
I now have a test setup that hits this path (once deliberately broken), so I will
see what we can do here that doesn't affect those masks.
Jonathan
>
>
> Best regards
> Salil
On Tue, 23 Apr 2024 19:53:34 +0800
Hanjun Guo <[email protected]> wrote:
> > @@ -232,6 +263,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
> > acpi_status status = AE_OK;
> > static int cpu0_initialized;
> > unsigned long long value;
> > + int ret;
> >
> > acpi_processor_errata();
> >
> > @@ -316,10 +348,12 @@ static int acpi_processor_get_info(struct acpi_device *device)
> > * because cpuid <-> apicid mapping is persistent now.
> > */
> > if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
> > - int ret = acpi_processor_hotadd_init(pr);
> > + ret = acpi_processor_hotadd_init(pr, device);
> >
> > if (ret)
> > - return ret;
> > + goto err;
> > + } else {
> > + acpi_processor_set_per_cpu(pr, device);
> > }
> >
> > /*
> > @@ -357,6 +391,10 @@ static int acpi_processor_get_info(struct acpi_device *device)
> > arch_fix_phys_package_id(pr->id, value);
> >
> > return 0;
> > +
> > +err:
> > + per_cpu(processors, pr->id) = NULL;
>
> ...
>
> > + return ret;
> > }
> >
> > /*
> > @@ -365,8 +403,6 @@ static int acpi_processor_get_info(struct acpi_device *device)
> > * (cpu_data(cpu)) values, like CPU feature flags, family, model, etc.
> > * Such things have to be put in and set up by the processor driver's .probe().
> > */
> > -static DEFINE_PER_CPU(void *, processor_device_array);
> > -
> > static int acpi_processor_add(struct acpi_device *device,
> > const struct acpi_device_id *id)
> > {
> > @@ -395,28 +431,6 @@ static int acpi_processor_add(struct acpi_device *device,
> > if (result) /* Processor is not physically present or unavailable */
> > return 0;
> >
> > - BUG_ON(pr->id >= nr_cpu_ids);
> > -
> > - /*
> > - * Buggy BIOS check.
> > - * ACPI id of processors can be reported wrongly by the BIOS.
> > - * Don't trust it blindly
> > - */
> > - if (per_cpu(processor_device_array, pr->id) != NULL &&
> > - per_cpu(processor_device_array, pr->id) != device) {
> > - dev_warn(&device->dev,
> > - "BIOS reported wrong ACPI id %d for the processor\n",
> > - pr->id);
> > - /* Give up, but do not abort the namespace scan. */
> > - goto err;
> > - }
> > - /*
> > - * processor_device_array is not cleared on errors to allow buggy BIOS
> > - * checks.
> > - */
> > - per_cpu(processor_device_array, pr->id) = device;
> > - per_cpu(processors, pr->id) = pr;
>
> Nit: seems we need to remove the duplicated
> per_cpu(processors, pr->id) = NULL; in acpi_processor_add():
>
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -446,7 +446,6 @@ static int acpi_processor_add(struct acpi_device
> *device,
> err:
> free_cpumask_var(pr->throttling.shared_cpu_map);
> device->driver_data = NULL;
> - per_cpu(processors, pr->id) = NULL;
I don't follow. This path is used if processor_get_info() succeeded and
we later fail. I don't see where the duplication is?
> err_free_pr:
> kfree(pr);
> return result;
>
> Thanks
> Hanjun
On Mon, 22 Apr 2024 20:56:55 +0200
"Rafael J. Wysocki" <[email protected]> wrote:
> On Thu, Apr 18, 2024 at 3:56 PM Jonathan Cameron
> <[email protected]> wrote:
> >
> > Make the per_cpu(processors, cpu) entries available earlier so that
> > they are available in arch_register_cpu() as ARM64 will need access
> > to the acpi_handle to distinguish between acpi_processor_add()
> > and earlier registration attempts (which will fail as _STA cannot
> > be checked).
> >
> > Reorder the remove flow to clear this per_cpu() after
> > arch_unregister_cpu() has completed, allowing it to be used in
> > there as well.
> >
> > Note that on x86 for the CPU hotplug case, the pr->id prior to
> > acpi_map_cpu() may be invalid. Thus the per_cpu() structures
> > must be initialized after that call or after checking the ID
> > is valid (not hotplug path).
> >
> > Signed-off-by: Jonathan Cameron <[email protected]>
> > ---
> > v7: Swap order with acpi_unmap_cpu() in acpi_processor_remove()
> > to keep it in reverse order of the setup path. (thanks Salil)
> > Fix an issue with placement of CONFIG_ACPI_HOTPLUG_CPU guards.
> > v6: As per discussion in v5 thread, don't use the cpu->dev and
> > make this data available earlier by moving the assignment checks
> > int acpi_processor_get_info().
> > ---
> > drivers/acpi/acpi_processor.c | 78 +++++++++++++++++++++--------------
> > 1 file changed, 46 insertions(+), 32 deletions(-)
> >
> > diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> > index ba0a6f0ac841..ac7ddb30f10e 100644
> > --- a/drivers/acpi/acpi_processor.c
> > +++ b/drivers/acpi/acpi_processor.c
> > @@ -183,8 +183,36 @@ static void __init acpi_pcc_cpufreq_init(void) {}
> > #endif /* CONFIG_X86 */
> >
> > /* Initialization */
> > +static DEFINE_PER_CPU(void *, processor_device_array);
> > +
> > +static void acpi_processor_set_per_cpu(struct acpi_processor *pr,
> > + struct acpi_device *device)
> > +{
> > + BUG_ON(pr->id >= nr_cpu_ids);
> > + /*
> > + * Buggy BIOS check.
> > + * ACPI id of processors can be reported wrongly by the BIOS.
> > + * Don't trust it blindly
> > + */
> > + if (per_cpu(processor_device_array, pr->id) != NULL &&
> > + per_cpu(processor_device_array, pr->id) != device) {
> > + dev_warn(&device->dev,
> > + "BIOS reported wrong ACPI id %d for the processor\n",
> > + pr->id);
> > + /* Give up, but do not abort the namespace scan. */
> > + return;
>
> In this case the caller should make acpi_pricessor_add() return 0, I
> think, because otherwise it will attempt to acpi_bind_one() "pr" to
> "device" which will confuse things.
>
> So I would make this return false to indicate that.
>
> Or just fold it into the caller and do the error handling there.
The BIOS bug mentioned in reply to patch 14 (DSDT entries for non-existent CPUs
that have no _STA methods) showed me that we need to know if this succeeded
(I'd not read this comment at that point).
I'll make it return a bool to say whether it succeeded and, in both call sites,
return 0 if not, to deal with the BIOS bug here. That makes sure we don't clear
the per_cpu() structures unless we get past that call. If we do get past it
and arch_register_cpu() fails, we need to clear those two per_cpu() entries.
Doing so means that acpi_processor_hotadd_init() is side-effect free, and
hence we can return early in acpi_processor_get_info(), which avoids the
need to clear pointers when we don't have a valid pr->id to do it with.
So I fully agree we need to bail out properly if this fails.
Jonathan
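Concretely, a hedged sketch of the reworked helper described above (the version that eventually lands may return an errno rather than a bool):

/* Sketch: report whether the per-CPU pointers were actually installed. */
static bool acpi_processor_set_per_cpu(struct acpi_processor *pr,
				       struct acpi_device *device)
{
	BUG_ON(pr->id >= nr_cpu_ids);

	/*
	 * Buggy BIOS check: don't blindly trust the ACPI id reported for
	 * the processor. On a clash, leave the existing entries alone.
	 */
	if (per_cpu(processor_device_array, pr->id) != NULL &&
	    per_cpu(processor_device_array, pr->id) != device) {
		dev_warn(&device->dev,
			 "BIOS reported wrong ACPI id %d for the processor\n",
			 pr->id);
		return false;
	}

	per_cpu(processor_device_array, pr->id) = device;
	per_cpu(processors, pr->id) = pr;
	return true;
}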
On Thu, 18 Apr 2024 14:54:10 +0100
Jonathan Cameron <[email protected]> wrote:
> In order to move arch_register_cpu() to be called via the same path
> for initially present CPUs described by ACPI and hotplugged CPUs
> ACPI_HOTPLUG_CPU needs to be enabled.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> ---
> v7: No change.
> ---
> arch/arm64/Kconfig | 1 +
> arch/arm64/kernel/acpi.c | 16 ++++++++++++++++
> 2 files changed, 17 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7b11c98b3e84..fed7d0d54179 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -5,6 +5,7 @@ config ARM64
> select ACPI_CCA_REQUIRED if ACPI
> select ACPI_GENERIC_GSI if ACPI
> select ACPI_GTDT if ACPI
> + select ACPI_HOTPLUG_CPU if ACPI_PROCESSOR
> select ACPI_IORT if ACPI
> select ACPI_REDUCED_HARDWARE_ONLY if ACPI
> select ACPI_MCFG if (ACPI && PCI)
> diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
> index dba8fcec7f33..a74e80d58df3 100644
> --- a/arch/arm64/kernel/acpi.c
> +++ b/arch/arm64/kernel/acpi.c
> @@ -29,6 +29,7 @@
> #include <linux/pgtable.h>
>
> #include <acpi/ghes.h>
> +#include <acpi/processor.h>
> #include <asm/cputype.h>
> #include <asm/cpu_ops.h>
> #include <asm/daifflags.h>
> @@ -413,6 +414,21 @@ void arch_reserve_mem_area(acpi_physical_address addr, size_t size)
> memblock_mark_nomap(addr, size);
> }
>
> +#ifdef CONFIG_ACPI_HOTPLUG_CPU
> +int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
> + int *pcpu)
> +{
There is shipping firmware in the wild that has DSDT entries for
way more CPUs than are actually present and doesn't bother with niceties like
providing _STA() methods.
As such, we need to check somewhere that the pcpu after this call is valid.
Given that today this only applies to arm64 (as the x86 code has an implementation
of this function that will replace an invalid ID with a valid one), I'll add
a small catch here.
if (*pcpu < 0) {
pr_warn("Unable to map from CPU ACPI ID to anything useful\n");
return -EINVAL;
}
I'll have an entirely polite discussion with the relevant team at some point, but
on the plus side this is a sensible bit of hardening.
Jonathan
p.s. I want all those other cores!!!!
> + return 0;
> +}
> +EXPORT_SYMBOL(acpi_map_cpu); /* check why */
> +
> +int acpi_unmap_cpu(int cpu)
> +{
> + return 0;
> +}
> +EXPORT_SYMBOL(acpi_unmap_cpu);
> +#endif /* CONFIG_ACPI_HOTPLUG_CPU */
> +
> #ifdef CONFIG_ACPI_FFH
> /*
> * Implements ARM64 specific callbacks to support ACPI FFH Operation Region as
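Folded into the arm64 stub quoted above, that catch would look something like the sketch below; the warning text and the choice of -EINVAL are assumptions, not the final patch:

#ifdef CONFIG_ACPI_HOTPLUG_CPU
int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
		 int *pcpu)
{
	/* Catch DSDT processor entries that never mapped to a logical CPU. */
	if (*pcpu < 0) {
		pr_warn("Unable to map CPU ACPI ID to a logical CPU\n");
		return -EINVAL;
	}

	return 0;
}
EXPORT_SYMBOL(acpi_map_cpu);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */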
On 2024/4/25 1:18, Jonathan Cameron wrote:
> On Tue, 23 Apr 2024 19:53:34 +0800
> Hanjun Guo <[email protected]> wrote:
>
>>> @@ -232,6 +263,7 @@ static int acpi_processor_get_info(struct acpi_device *device)
>>> acpi_status status = AE_OK;
>>> static int cpu0_initialized;
>>> unsigned long long value;
>>> + int ret;
>>>
>>> acpi_processor_errata();
>>>
>>> @@ -316,10 +348,12 @@ static int acpi_processor_get_info(struct acpi_device *device)
>>> * because cpuid <-> apicid mapping is persistent now.
>>> */
>>> if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
>>> - int ret = acpi_processor_hotadd_init(pr);
>>> + ret = acpi_processor_hotadd_init(pr, device);
>>>
>>> if (ret)
>>> - return ret;
>>> + goto err;
>>> + } else {
>>> + acpi_processor_set_per_cpu(pr, device);
>>> }
>>>
>>> /*
>>> @@ -357,6 +391,10 @@ static int acpi_processor_get_info(struct acpi_device *device)
>>> arch_fix_phys_package_id(pr->id, value);
>>>
>>> return 0;
>>> +
>>> +err:
>>> + per_cpu(processors, pr->id) = NULL;
>>
>> ...
>>
>>> + return ret;
>>> }
>>>
>>> /*
>>> @@ -365,8 +403,6 @@ static int acpi_processor_get_info(struct acpi_device *device)
>>> * (cpu_data(cpu)) values, like CPU feature flags, family, model, etc.
>>> * Such things have to be put in and set up by the processor driver's .probe().
>>> */
>>> -static DEFINE_PER_CPU(void *, processor_device_array);
>>> -
>>> static int acpi_processor_add(struct acpi_device *device,
>>> const struct acpi_device_id *id)
>>> {
>>> @@ -395,28 +431,6 @@ static int acpi_processor_add(struct acpi_device *device,
>>> if (result) /* Processor is not physically present or unavailable */
>>> return 0;
>>>
>>> - BUG_ON(pr->id >= nr_cpu_ids);
>>> -
>>> - /*
>>> - * Buggy BIOS check.
>>> - * ACPI id of processors can be reported wrongly by the BIOS.
>>> - * Don't trust it blindly
>>> - */
>>> - if (per_cpu(processor_device_array, pr->id) != NULL &&
>>> - per_cpu(processor_device_array, pr->id) != device) {
>>> - dev_warn(&device->dev,
>>> - "BIOS reported wrong ACPI id %d for the processor\n",
>>> - pr->id);
>>> - /* Give up, but do not abort the namespace scan. */
>>> - goto err;
>>> - }
>>> - /*
>>> - * processor_device_array is not cleared on errors to allow buggy BIOS
>>> - * checks.
>>> - */
>>> - per_cpu(processor_device_array, pr->id) = device;
>>> - per_cpu(processors, pr->id) = pr;
>>
>> Nit: seems we need to remove the duplicated
>> per_cpu(processors, pr->id) = NULL; in acpi_processor_add():
>>
>> --- a/drivers/acpi/acpi_processor.c
>> +++ b/drivers/acpi/acpi_processor.c
>> @@ -446,7 +446,6 @@ static int acpi_processor_add(struct acpi_device
>> *device,
>> err:
>> free_cpumask_var(pr->throttling.shared_cpu_map);
>> device->driver_data = NULL;
>> - per_cpu(processors, pr->id) = NULL;
>
> I don't follow. This path is used if processor_get_info() succeeded and
> we later fail. I don't see where the duplication is?
It is! Thanks for the clarification.
Thanks
Hanjun
On Wed, 24 Apr 2024 13:54:38 +0100
Jonathan Cameron <[email protected]> wrote:
> On Tue, 23 Apr 2024 13:01:21 +0100
> Marc Zyngier <[email protected]> wrote:
>
> > On Mon, 22 Apr 2024 11:40:20 +0100,
> > Jonathan Cameron <[email protected]> wrote:
> > >
> > > On Thu, 18 Apr 2024 14:54:07 +0100
> > > Jonathan Cameron <[email protected]> wrote:
> > >
> > > > From: James Morse <[email protected]>
> > > >
> > > > To support virtual CPU hotplug, ACPI has added an 'online capable' bit
> > > > to the MADT GICC entries. This indicates a disabled CPU entry may not
> > > > be possible to online via PSCI until firmware has set the enabled bit in
> > > > _STA.
> > > >
> > > > This means that a "usable" GIC is one that is marked as either enabled,
> > > > or online capable. Therefore, change acpi_gicc_is_usable() to check both
> > > > bits. However, we need to change the test in gic_acpi_match_gicc() back
> > > > to testing just the enabled bit so the count of enabled distributors is
> > > > correct.
> > > >
> > > > What about the redistributor in the GICC entry? ACPI doesn't want to say.
> > > > Assume the worst: When a redistributor is described in the GICC entry,
> > > > but the entry is marked as disabled at boot, assume the redistributor
> > > > is inaccessible.
> > > >
> > > > The GICv3 driver doesn't support late online of redistributors, so this
> > > > means the corresponding CPU can't be brought online either. Clear the
> > > > possible and present bits.
> > > >
> > > > Systems that want CPU hotplug in a VM can ensure their redistributors
> > > > are always-on, and describe them that way with a GICR entry in the MADT.
> > > >
> > > > When mapping redistributors found via GICC entries, handle the case
> > > > where the arch code believes the CPU is present and possible, but it
> > > > does not have an accessible redistributor. Print a warning and clear
> > > > the present and possible bits.
> > > >
> > > > Signed-off-by: James Morse <[email protected]>
> > > > Signed-off-by: Russell King (Oracle) <[email protected]>
> > > > Signed-off-by: Jonathan Cameron <[email protected]>
> > >
> > > +CC Marc,
> > >
> > > Whilst this has been unchanged for a long time, I'm not 100% sure
> > > we've specifically drawn your attention to it before now.
> > >
> > > Jonathan
> > >
> > > >
> > > > ---
> > > > v7: No Change.
> > > > ---
> > > > drivers/irqchip/irq-gic-v3.c | 21 +++++++++++++++++++--
> > > > include/linux/acpi.h | 3 ++-
> > > > 2 files changed, 21 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > > > index 10af15f93d4d..66132251c1bb 100644
> > > > --- a/drivers/irqchip/irq-gic-v3.c
> > > > +++ b/drivers/irqchip/irq-gic-v3.c
> > > > @@ -2363,11 +2363,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
> > > > (struct acpi_madt_generic_interrupt *)header;
> > > > u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
> > > > u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
> > > > + int cpu = get_cpu_for_acpi_id(gicc->uid);
> > > > void __iomem *redist_base;
> > > >
> > > > if (!acpi_gicc_is_usable(gicc))
> > > > return 0;
> > > >
> > > > + /*
> > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > + * the redistributor? ACPI doesn't want to say!
> > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > + */
> > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > + set_cpu_present(cpu, false);
> > > > + set_cpu_possible(cpu, false);
> > > > + return 0;
> > > > + }
> >
> > It seems dangerous to clear those this late in the game, given how
> > disconnected from the architecture code this is. Are we sure that
> > nothing has sampled these cpumasks beforehand?
>
> Hi Marc,
>
> Any firmware that does this is being considered as buggy already
> but given it is firmware and the spec doesn't say much about this,
> there is always the possibility.
>
> Not much happens between the point where these are set up and
> the point where the GIC inits and this code runs, but even if careful
> review showed it was fine today, it will be fragile to future changes.
>
> I'm not sure there is a huge disadvantage for such broken firmware in
> clearing these masks from the point of view of what is used throughout
> the rest of the kernel. Here I think we are just looking to prevent the CPU
> being onlined later.
>
> We could add a set_cpu_broken() with appropriate mask.
> Given this is very arm64 specific I'm not sure Rafael will be keen on
> us checking such a mask in the generic ACPI code, but we could check it in
> arch_register_cpu() and just not register the cpu if it matches.
> That will cover the vCPU hotplug case.
>
> Does that sound sensible, or would you prefer something else?
Hi Marc
Some experiments later (faking this on a physical board - I never liked
CPU 120 anyway!) and using a different mask brings its own minor pain.
When all the rest of the CPUs are brought up cpuhp_bringup_mask() is called
on cpu_present_mask so we need to do a dance in there to use a temporary
mask with broken cpus removed. I think it makes sense to cut that out
at the top of the cpuhp_bringup_mask() pile of actions rather than trying
to paper over each actual thing that is dying... (looks like an infinite loop
somewhere but I haven't tracked down where yet).
I'll spin a patch so you can see what it looks like, but my concern is
we are just moving the risk from early users of these masks to later cases
where code assumes cpu_present_mask definitely means they are present.
That is probably a small set of cases but not nice either.
Looks like one of those cases where we need to pick the lesser of two evils
which is probably still the cpu_broken_mask approach.
On the plus side, if we decide to go back to the original approach having seen
that I already have the code :)
Jonathan
>
> Jonathan
>
>
>
>
>
>
>
> >
> > Thanks,
> >
> > M.
> >
>
>
On Thu, 25 Apr 2024 10:28:06 +0100
Jonathan Cameron <[email protected]> wrote:
> On Wed, 24 Apr 2024 13:54:38 +0100
> Jonathan Cameron <[email protected]> wrote:
>
> > On Tue, 23 Apr 2024 13:01:21 +0100
> > Marc Zyngier <[email protected]> wrote:
> >
> > > On Mon, 22 Apr 2024 11:40:20 +0100,
> > > Jonathan Cameron <[email protected]> wrote:
> > > >
> > > > On Thu, 18 Apr 2024 14:54:07 +0100
> > > > Jonathan Cameron <[email protected]> wrote:
> > > >
> > > > > From: James Morse <[email protected]>
> > > > >
> > > > > To support virtual CPU hotplug, ACPI has added an 'online capable' bit
> > > > > to the MADT GICC entries. This indicates a disabled CPU entry may not
> > > > > be possible to online via PSCI until firmware has set the enabled bit in
> > > > > _STA.
> > > > >
> > > > > This means that a "usable" GIC is one that is marked as either enabled,
> > > > > or online capable. Therefore, change acpi_gicc_is_usable() to check both
> > > > > bits. However, we need to change the test in gic_acpi_match_gicc() back
> > > > > to testing just the enabled bit so the count of enabled distributors is
> > > > > correct.
> > > > >
> > > > > What about the redistributor in the GICC entry? ACPI doesn't want to say.
> > > > > Assume the worst: When a redistributor is described in the GICC entry,
> > > > > but the entry is marked as disabled at boot, assume the redistributor
> > > > > is inaccessible.
> > > > >
> > > > > The GICv3 driver doesn't support late online of redistributors, so this
> > > > > means the corresponding CPU can't be brought online either. Clear the
> > > > > possible and present bits.
> > > > >
> > > > > Systems that want CPU hotplug in a VM can ensure their redistributors
> > > > > are always-on, and describe them that way with a GICR entry in the MADT.
> > > > >
> > > > > When mapping redistributors found via GICC entries, handle the case
> > > > > where the arch code believes the CPU is present and possible, but it
> > > > > does not have an accessible redistributor. Print a warning and clear
> > > > > the present and possible bits.
> > > > >
> > > > > Signed-off-by: James Morse <[email protected]>
> > > > > Signed-off-by: Russell King (Oracle) <[email protected]>
> > > > > Signed-off-by: Jonathan Cameron <[email protected]>
> > > >
> > > > +CC Marc,
> > > >
> > > > Whilst this has been unchanged for a long time, I'm not 100% sure
> > > > we've specifically drawn your attention to it before now.
> > > >
> > > > Jonathan
> > > >
> > > > >
> > > > > ---
> > > > > v7: No Change.
> > > > > ---
> > > > > drivers/irqchip/irq-gic-v3.c | 21 +++++++++++++++++++--
> > > > > include/linux/acpi.h | 3 ++-
> > > > > 2 files changed, 21 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > > > > index 10af15f93d4d..66132251c1bb 100644
> > > > > --- a/drivers/irqchip/irq-gic-v3.c
> > > > > +++ b/drivers/irqchip/irq-gic-v3.c
> > > > > @@ -2363,11 +2363,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
> > > > > (struct acpi_madt_generic_interrupt *)header;
> > > > > u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
> > > > > u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
> > > > > + int cpu = get_cpu_for_acpi_id(gicc->uid);
> > > > > void __iomem *redist_base;
> > > > >
> > > > > if (!acpi_gicc_is_usable(gicc))
> > > > > return 0;
> > > > >
> > > > > + /*
> > > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > > + * the redistributor? ACPI doesn't want to say!
> > > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > > + */
> > > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > > + set_cpu_present(cpu, false);
> > > > > + set_cpu_possible(cpu, false);
> > > > > + return 0;
> > > > > + }
> > >
> > > It seems dangerous to clear those this late in the game, given how
> > > disconnected from the architecture code this is. Are we sure that
> > > nothing has sampled these cpumasks beforehand?
> >
> > Hi Marc,
> >
> > Any firmware that does this is being considered as buggy already
> > but given it is firmware and the spec doesn't say much about this,
> > there is always the possibility.
> >
> > Not much happens between the point where these are set up and
> > the point where the GIC inits and this code runs, but even if careful
> > review showed it was fine today, it will be fragile to future changes.
> >
> > I'm not sure there is a huge disadvantage for such broken firmware in
> > clearing these masks from the point of view of what is used throughout
> > the rest of the kernel. Here I think we are just looking to prevent the CPU
> > being onlined later.
> >
> > We could add a set_cpu_broken() with appropriate mask.
> > Given this is very arm64 specific I'm not sure Rafael will be keen on
> > us checking such a mask in the generic ACPI code, but we could check it in
> > arch_register_cpu() and just not register the cpu if it matches.
> > That will cover the vCPU hotplug case.
> >
> > Does that sound sensible, or would you prefer something else?
>
> Hi Marc
>
> Some experiments later (faking this on a physical board - I never liked
> CPU 120 anyway!) and using a different mask brings its own minor pain.
>
> When all the rest of the CPUs are brought up cpuhp_bringup_mask() is called
> on cpu_present_mask so we need to do a dance in there to use a temporary
> mask with broken cpus removed. I think it makes sense to cut that out
> at the top of the cpuhp_bringup_mask() pile of actions rather than trying
> to paper over each actual thing that is dying... (looks like an infinite loop
> somewhere but I haven't tracked down where yet).
>
> I'll spin a patch so you can see what it looks like, but my concern is
> we are just moving the risk from early users of these masks to later cases
> where code assumes cpu_present_mask definitely means they are present.
> That is probably a small set of cases but not nice either.
>
> Looks like one of those cases where we need to pick the lesser of two evils
> which is probably still the cpu_broken_mask approach.
>
> On the plus side, if we decide to go back to the original approach having seen
> that I already have the code :)
>
> Jonathan
>
Patch on top of this series. If no one shouts before I have it ready I'll
roll a v8 with the mask introduction as a new patch and the other changes pushed into
appropriate patches.
From 361b76f36bfb4ff74fdceca7ebf14cfa43cae4a9 Mon Sep 17 00:00:00 2001
From: Jonathan Cameron <[email protected]>
Date: Wed, 24 Apr 2024 17:42:49 +0100
Subject: [PATCH] cpu: Add broken cpu mask to mark CPUs where inconsistent
firmware means we can't start them.
On ARM64, it is not currently possible to use CPUs where the GICC entry
in ACPI specifies that they are online capable but not enabled. Only
entries that are marked as enabled are supported.
Previously if this condition was met, the present and possible cpu masks
were cleared for the relevant cpus. However, those masks may already
have been used by other code so this is not known to be safe.
An alternative is to use an additional mask (broken) and check it
in the subset of places where these CPUs might be onlined, or where the
infrastructure that would make onlining them possible is created:
specifically in bringup_nonboot_cpus() and in arch_register_cpu().
Signed-off-by: Jonathan Cameron <[email protected]>
---
arch/arm64/kernel/smp.c | 3 +++
drivers/irqchip/irq-gic-v3.c | 3 +--
include/linux/cpumask.h | 19 +++++++++++++++++++
kernel/cpu.c | 8 +++++++-
4 files changed, 30 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index ccb6ad347df9..39cd6a7c40d8 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -513,6 +513,9 @@ int arch_register_cpu(int cpu)
IS_ENABLED(CONFIG_ACPI_HOTPLUG_CPU))
return -EPROBE_DEFER;
+ if (cpu_broken(cpu)) /* Inconsistent firmware - can't online */
+ return -ENODEV;
+
#ifdef CONFIG_ACPI_HOTPLUG_CPU
/* For now block anything that looks like physical CPU Hotplug */
if (invalid_logical_cpuid(cpu) || !cpu_present(cpu)) {
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 66132251c1bb..a0063eb6484d 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -2377,8 +2377,7 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
*/
if (!(gicc->flags & ACPI_MADT_ENABLED)) {
pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
- set_cpu_present(cpu, false);
- set_cpu_possible(cpu, false);
+ set_cpu_broken(cpu);
return 0;
}
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 4b202b94c97a..70a93ad8e590 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -96,6 +96,7 @@ static inline void set_nr_cpu_ids(unsigned int nr)
* cpu_enabled_mask - has bit 'cpu' set iff cpu can be brought online
* cpu_online_mask - has bit 'cpu' set iff cpu available to scheduler
* cpu_active_mask - has bit 'cpu' set iff cpu available to migration
+ * cpu_broken_mask - has bit 'cpu' set iff the cpu should never be onlined
*
* If !CONFIG_HOTPLUG_CPU, present == possible, and active == online.
*
@@ -130,12 +131,14 @@ extern struct cpumask __cpu_enabled_mask;
extern struct cpumask __cpu_present_mask;
extern struct cpumask __cpu_active_mask;
extern struct cpumask __cpu_dying_mask;
+extern struct cpumask __cpu_broken_mask;
#define cpu_possible_mask ((const struct cpumask *)&__cpu_possible_mask)
#define cpu_online_mask ((const struct cpumask *)&__cpu_online_mask)
#define cpu_enabled_mask ((const struct cpumask *)&__cpu_enabled_mask)
#define cpu_present_mask ((const struct cpumask *)&__cpu_present_mask)
#define cpu_active_mask ((const struct cpumask *)&__cpu_active_mask)
#define cpu_dying_mask ((const struct cpumask *)&__cpu_dying_mask)
+#define cpu_broken_mask ((const struct cpumask *)&__cpu_broken_mask)
extern atomic_t __num_online_cpus;
@@ -1073,6 +1076,12 @@ set_cpu_dying(unsigned int cpu, bool dying)
cpumask_clear_cpu(cpu, &__cpu_dying_mask);
}
+static inline void
+set_cpu_broken(unsigned int cpu)
+{
+ cpumask_set_cpu(cpu, &__cpu_broken_mask);
+}
+
/**
* to_cpumask - convert a NR_CPUS bitmap to a struct cpumask *
* @bitmap: the bitmap
@@ -1159,6 +1168,11 @@ static inline bool cpu_dying(unsigned int cpu)
return cpumask_test_cpu(cpu, cpu_dying_mask);
}
+static inline bool cpu_broken(unsigned int cpu)
+{
+ return cpumask_test_cpu(cpu, cpu_broken_mask);
+}
+
#else
#define num_online_cpus() 1U
@@ -1197,6 +1211,11 @@ static inline bool cpu_dying(unsigned int cpu)
return false;
}
+static inline bool cpu_broken(unsigned int cpu)
+{
+ return false;
+}
+
#endif /* NR_CPUS > 1 */
#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 537099bf5d02..f8b73a11869e 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1907,12 +1907,15 @@ static inline bool cpuhp_bringup_cpus_parallel(unsigned int ncpus) { return fals
void __init bringup_nonboot_cpus(unsigned int max_cpus)
{
+ static struct cpumask tmp_mask __initdata;
+
/* Try parallel bringup optimization if enabled */
if (cpuhp_bringup_cpus_parallel(max_cpus))
return;
+ cpumask_andnot(&tmp_mask, cpu_present_mask, cpu_broken_mask);
/* Full per CPU serialized bringup */
- cpuhp_bringup_mask(cpu_present_mask, max_cpus, CPUHP_ONLINE);
+ cpuhp_bringup_mask(&tmp_mask, max_cpus, CPUHP_ONLINE);
}
#ifdef CONFIG_PM_SLEEP_SMP
@@ -3129,6 +3132,9 @@ EXPORT_SYMBOL(__cpu_active_mask);
struct cpumask __cpu_dying_mask __read_mostly;
EXPORT_SYMBOL(__cpu_dying_mask);
+struct cpumask __cpu_broken_mask __ro_after_init;
+EXPORT_SYMBOL(__cpu_broken_mask);
+
atomic_t __num_online_cpus __read_mostly;
EXPORT_SYMBOL(__num_online_cpus);
--
2.39.2
>
>
> >
> > Jonathan
> >
> >
> >
> >
> >
> >
> >
> > >
> > > Thanks,
> > >
> > > M.
> > >
> >
> >
>
>
On Thu, 25 Apr 2024 10:56:37 +0100
Jonathan Cameron <[email protected]> wrote:
> On Thu, 25 Apr 2024 10:28:06 +0100
> Jonathan Cameron <[email protected]> wrote:
>
> > On Wed, 24 Apr 2024 13:54:38 +0100
> > Jonathan Cameron <[email protected]> wrote:
> >
> > > On Tue, 23 Apr 2024 13:01:21 +0100
> > > Marc Zyngier <[email protected]> wrote:
> > >
> > > > On Mon, 22 Apr 2024 11:40:20 +0100,
> > > > Jonathan Cameron <[email protected]> wrote:
> > > > >
> > > > > On Thu, 18 Apr 2024 14:54:07 +0100
> > > > > Jonathan Cameron <[email protected]> wrote:
> > > > >
> > > > > > From: James Morse <[email protected]>
> > > > > >
> > > > > > To support virtual CPU hotplug, ACPI has added an 'online capable' bit
> > > > > > to the MADT GICC entries. This indicates a disabled CPU entry may not
> > > > > > be possible to online via PSCI until firmware has set enabled bit in
> > > > > > _STA.
> > > > > >
> > > > > > This means that a "usable" GIC is one that is marked as either enabled,
> > > > > > or online capable. Therefore, change acpi_gicc_is_usable() to check both
> > > > > > bits. However, we need to change the test in gic_acpi_match_gicc() back
> > > > > > to testing just the enabled bit so the count of enabled distributors is
> > > > > > correct.
> > > > > >
> > > > > > What about the redistributor in the GICC entry? ACPI doesn't want to say.
> > > > > > Assume the worst: When a redistributor is described in the GICC entry,
> > > > > > but the entry is marked as disabled at boot, assume the redistributor
> > > > > > is inaccessible.
> > > > > >
> > > > > > The GICv3 driver doesn't support late online of redistributors, so this
> > > > > > means the corresponding CPU can't be brought online either. Clear the
> > > > > > possible and present bits.
> > > > > >
> > > > > > Systems that want CPU hotplug in a VM can ensure their redistributors
> > > > > > are always-on, and describe them that way with a GICR entry in the MADT.
> > > > > >
> > > > > > When mapping redistributors found via GICC entries, handle the case
> > > > > > where the arch code believes the CPU is present and possible, but it
> > > > > > does not have an accessible redistributor. Print a warning and clear
> > > > > > the present and possible bits.
> > > > > >
> > > > > > Signed-off-by: James Morse <[email protected]>
> > > > > > Signed-off-by: Russell King (Oracle) <[email protected]>
> > > > > > Signed-off-by: Jonathan Cameron <[email protected]>
> > > > >
> > > > > +CC Marc,
> > > > >
> > > > > Whilst this has been unchanged for a long time, I'm not 100% sure
> > > > > we've specifically drawn your attention to it before now.
> > > > >
> > > > > Jonathan
> > > > >
> > > > > >
> > > > > > ---
> > > > > > v7: No Change.
> > > > > > ---
> > > > > > drivers/irqchip/irq-gic-v3.c | 21 +++++++++++++++++++--
> > > > > > include/linux/acpi.h | 3 ++-
> > > > > > 2 files changed, 21 insertions(+), 3 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > > > > > index 10af15f93d4d..66132251c1bb 100644
> > > > > > --- a/drivers/irqchip/irq-gic-v3.c
> > > > > > +++ b/drivers/irqchip/irq-gic-v3.c
> > > > > > @@ -2363,11 +2363,25 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
> > > > > > (struct acpi_madt_generic_interrupt *)header;
> > > > > > u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
> > > > > > u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
> > > > > > + int cpu = get_cpu_for_acpi_id(gicc->uid);
> > > > > > void __iomem *redist_base;
> > > > > >
> > > > > > if (!acpi_gicc_is_usable(gicc))
> > > > > > return 0;
> > > > > >
> > > > > > + /*
> > > > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > > > + * the redistributor? ACPI doesn't want to say!
> > > > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > > > + */
> > > > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > > > + set_cpu_present(cpu, false);
> > > > > > + set_cpu_possible(cpu, false);
> > > > > > + return 0;
> > > > > > + }
> > > >
> > > > It seems dangerous to clear those this late in the game, given how
> > > > disconnected from the architecture code this is. Are we sure that
> > > > nothing has sampled these cpumasks beforehand?
> > >
> > > Hi Marc,
> > >
> > > Any firmware that does this is being considered as buggy already
> > > but given it is firmware and the spec doesn't say much about this,
> > > there is always the possibility.
> > >
> > > Not much happens between the point where these are setup and
> > > the point where the gic inits and this code runs, but even if careful
> > > review showed it was fine today, it will be fragile to future changes.
> > >
> > > I'm not sure there is a huge disadvantage for such broken firmware in
> > > clearing these masks from the point of view of what is used throughout
> > > the rest of the kernel. Here I think we are just looking to prevent the CPU
> > > being onlined later.
> > >
> > > We could add a set_cpu_broken() with appropriate mask.
> > > Given this is very arm64 specific I'm not sure Rafael will be keen on
> > > us checking such a mask in the generic ACPI code, but we could check it in
> > > arch_register_cpu() and just not register the cpu if it matches.
> > > That will cover the vCPU hotplug case.
> > >
> > > Does that sound sensible, or would you prefer something else?
> >
> > Hi Marc
> >
> > Some experiments later (faking this on a physical board - I never liked
> > CPU 120 anyway!) and using a different mask brings its own minor pain.
> >
> > When all the rest of the CPUs are brought up cpuhp_bringup_mask() is called
> > on cpu_present_mask so we need to do a dance in there to use a temporary
> > mask with broken cpus removed. I think it makes sense to cut that out
> > at the top of the cpuhp_bringup_mask() pile of actions rather than trying
> > to paper over each actual thing that is dying... (looks like an infinite loop
> > somewhere but I haven't tracked down where yet).
> >
> > I'll spin a patch so you can see what it looks like, but my concern is
> > we are just moving the risk from early users of these masks to later cases
> > where code assumes cpu_present_mask definitely means they are present.
> > That is probably a small set of cases but not nice either.
> >
> > Looks like one of those cases where we need to pick the lesser of two evils
> > which is probably still the cpu_broken_mask approach.
> >
> > On the plus side, if we decide to go back to the original approach having seen
> > that, I already have the code :)
> >
> > Jonathan
> >
>
> Patch on top of this series. If no one shouts before I have it ready I'll
> roll a v8 with the mask introduction as a new patch and the other changes pushed into
> appropriate patches.
>
> From 361b76f36bfb4ff74fdceca7ebf14cfa43cae4a9 Mon Sep 17 00:00:00 2001
> From: Jonathan Cameron <[email protected]>
> Date: Wed, 24 Apr 2024 17:42:49 +0100
> Subject: [PATCH] cpu: Add broken cpu mask to mark CPUs where inconsistent
> firmware means we can't start them.
>
> On ARM64, it is not currently possible to use CPUs where the GICC entry
> in ACPI specifies that it is online capable but not enabled. Only
> always enabled entries are supported.
>
> Previously if this condition was met, the present and possible cpu masks
> were cleared for the relevant cpus. However, those masks may already
> have been used by other code so this is not known to be safe.
>
> An alternative is to use an additional mask (broken) and check it
> in the subset of places where these CPUs might be onlined, or where the
> infrastructure to indicate that onlining is possible is created.
> Specifically in bringup_nonboot_cpus() and in arch_register_cpu().
>
> Signed-off-by: Jonathan Cameron <[email protected]>
Obviously I'd missed Marc's reply on keeping this local to gicv3.
Will give that a go.
Sorry for the noise!
Jonathan
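(For reference, a rough sketch of what keeping this local to the GICv3 driver
could look like; the broken_rdists name and exact placement here are assumptions,
not the final code:)

        /* drivers/irqchip/irq-gic-v3.c - sketch only */
        static cpumask_t broken_rdists __read_mostly;

        /* ... in gic_acpi_parse_madt_gicc(), record the CPU locally instead
         * of touching the global present/possible masks: */
        if (!(gicc->flags & ACPI_MADT_ENABLED)) {
                pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
                cpumask_set_cpu(cpu, &broken_rdists);
                return 0;
        }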
> ---
> arch/arm64/kernel/smp.c | 3 +++
> drivers/irqchip/irq-gic-v3.c | 3 +--
> include/linux/cpumask.h | 19 +++++++++++++++++++
> kernel/cpu.c | 8 +++++++-
> 4 files changed, 30 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index ccb6ad347df9..39cd6a7c40d8 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -513,6 +513,9 @@ int arch_register_cpu(int cpu)
> IS_ENABLED(CONFIG_ACPI_HOTPLUG_CPU))
> return -EPROBE_DEFER;
>
> + if (cpu_broken(cpu)) /* Inconsistent firmware - can't online */
> + return -ENODEV;
> +
> #ifdef CONFIG_ACPI_HOTPLUG_CPU
> /* For now block anything that looks like physical CPU Hotplug */
> if (invalid_logical_cpuid(cpu) || !cpu_present(cpu)) {
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index 66132251c1bb..a0063eb6484d 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -2377,8 +2377,7 @@ gic_acpi_parse_madt_gicc(union acpi_subtable_headers *header,
> */
> if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> - set_cpu_present(cpu, false);
> - set_cpu_possible(cpu, false);
> + set_cpu_broken(cpu);
> return 0;
> }
>
> diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
> index 4b202b94c97a..70a93ad8e590 100644
> --- a/include/linux/cpumask.h
> +++ b/include/linux/cpumask.h
> @@ -96,6 +96,7 @@ static inline void set_nr_cpu_ids(unsigned int nr)
> * cpu_enabled_mask - has bit 'cpu' set iff cpu can be brought online
> * cpu_online_mask - has bit 'cpu' set iff cpu available to scheduler
> * cpu_active_mask - has bit 'cpu' set iff cpu available to migration
> + * cpu_broken_mask - has bit 'cpu' set iff the cpu should never be onlined
> *
> * If !CONFIG_HOTPLUG_CPU, present == possible, and active == online.
> *
> @@ -130,12 +131,14 @@ extern struct cpumask __cpu_enabled_mask;
> extern struct cpumask __cpu_present_mask;
> extern struct cpumask __cpu_active_mask;
> extern struct cpumask __cpu_dying_mask;
> +extern struct cpumask __cpu_broken_mask;
> #define cpu_possible_mask ((const struct cpumask *)&__cpu_possible_mask)
> #define cpu_online_mask ((const struct cpumask *)&__cpu_online_mask)
> #define cpu_enabled_mask ((const struct cpumask *)&__cpu_enabled_mask)
> #define cpu_present_mask ((const struct cpumask *)&__cpu_present_mask)
> #define cpu_active_mask ((const struct cpumask *)&__cpu_active_mask)
> #define cpu_dying_mask ((const struct cpumask *)&__cpu_dying_mask)
> +#define cpu_broken_mask ((const struct cpumask *)&__cpu_broken_mask)
>
> extern atomic_t __num_online_cpus;
>
> @@ -1073,6 +1076,12 @@ set_cpu_dying(unsigned int cpu, bool dying)
> cpumask_clear_cpu(cpu, &__cpu_dying_mask);
> }
>
> +static inline void
> +set_cpu_broken(unsigned int cpu)
> +{
> + cpumask_set_cpu(cpu, &__cpu_broken_mask);
> +}
> +
> /**
> * to_cpumask - convert a NR_CPUS bitmap to a struct cpumask *
> * @bitmap: the bitmap
> @@ -1159,6 +1168,11 @@ static inline bool cpu_dying(unsigned int cpu)
> return cpumask_test_cpu(cpu, cpu_dying_mask);
> }
>
> +static inline bool cpu_broken(unsigned int cpu)
> +{
> + return cpumask_test_cpu(cpu, cpu_broken_mask);
> +}
> +
> #else
>
> #define num_online_cpus() 1U
> @@ -1197,6 +1211,11 @@ static inline bool cpu_dying(unsigned int cpu)
> return false;
> }
>
> +static inline bool cpu_broken(unsigned int cpu)
> +{
> + return false;
> +}
> +
> #endif /* NR_CPUS > 1 */
>
> #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 537099bf5d02..f8b73a11869e 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -1907,12 +1907,15 @@ static inline bool cpuhp_bringup_cpus_parallel(unsigned int ncpus) { return fals
>
> void __init bringup_nonboot_cpus(unsigned int max_cpus)
> {
> + static struct cpumask tmp_mask __initdata;
> +
> /* Try parallel bringup optimization if enabled */
> if (cpuhp_bringup_cpus_parallel(max_cpus))
> return;
>
> + cpumask_andnot(&tmp_mask, cpu_present_mask, cpu_broken_mask);
> /* Full per CPU serialized bringup */
> - cpuhp_bringup_mask(cpu_present_mask, max_cpus, CPUHP_ONLINE);
> + cpuhp_bringup_mask(&tmp_mask, max_cpus, CPUHP_ONLINE);
> }
>
> #ifdef CONFIG_PM_SLEEP_SMP
> @@ -3129,6 +3132,9 @@ EXPORT_SYMBOL(__cpu_active_mask);
> struct cpumask __cpu_dying_mask __read_mostly;
> EXPORT_SYMBOL(__cpu_dying_mask);
>
> +struct cpumask __cpu_broken_mask __ro_after_init;
> +EXPORT_SYMBOL(__cpu_broken_mask);
> +
> atomic_t __num_online_cpus __read_mostly;
> EXPORT_SYMBOL(__num_online_cpus);
>
On Wed, 24 Apr 2024 18:08:30 +0100
Jonathan Cameron <[email protected]> wrote:
> On Wed, 24 Apr 2024 17:35:54 +0100
> Salil Mehta <[email protected]> wrote:
>
> > > From: Marc Zyngier <[email protected]>
> > > Sent: Wednesday, April 24, 2024 4:33 PM
> > > To: Jonathan Cameron <[email protected]>
> > > Cc: Thomas Gleixner <[email protected]>; Peter Zijlstra
> > > <[email protected]>; [email protected];
> > > [email protected]; [email protected]; linux-
> > > [email protected]; [email protected]; linux-arm-
> > > [email protected]; [email protected]; [email protected];
> > > Russell King <[email protected]>; Rafael J . Wysocki
> > > <[email protected]>; Miguel Luis <[email protected]>; James Morse
> > > <[email protected]>; Salil Mehta <[email protected]>; Jean-
> > > Philippe Brucker <[email protected]>; Catalin Marinas
> > > <[email protected]>; Will Deacon <[email protected]>; Linuxarm
> > > <[email protected]>; Ingo Molnar <[email protected]>; Borislav
> > > Petkov <[email protected]>; Dave Hansen <[email protected]>;
> > > [email protected]; [email protected]
> > > Subject: Re: [PATCH v7 11/16] irqchip/gic-v3: Add support for ACPI's
> > > disabled but 'online capable' CPUs
> > >
> > > On Wed, 24 Apr 2024 13:54:38 +0100,
> > > Jonathan Cameron <[email protected]> wrote:
> > > >
> > > > On Tue, 23 Apr 2024 13:01:21 +0100
> > > > Marc Zyngier <[email protected]> wrote:
> > > >
> > > > > On Mon, 22 Apr 2024 11:40:20 +0100,
> > > > > Jonathan Cameron <[email protected]> wrote:
> > > > > >
> > > > > > On Thu, 18 Apr 2024 14:54:07 +0100 Jonathan Cameron
> > > > > > <[email protected]> wrote:
> > >
> > > [...]
> > >
> > > > > >
> > > > > > > + /*
> > > > > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > > > > + * the redistributor? ACPI doesn't want to say!
> > > > > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > > > > + */
> > > > > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > > > > + set_cpu_present(cpu, false);
> > > > > > > + set_cpu_possible(cpu, false);
> >
> > (a digression) shouldn't we be clearing the enabled mask as well?
> >
> > set_cpu_enabled(cpu, false);
>
> FWIW I think it's not necessary. enabled is only set in register_cpu() and the aim here is to
> never call that for CPUs in this state.
>
> Anyhow, I got distracted by the firmware bug I found whilst trying to test this but
> now have a test setup that hits this path (once deliberately broken), so will
> see what we can do that doesn't affect those masks.
This may be relevant in the context of Marc's email. Don't crop so much!
However, I think we probably don't care. This is a BIOS bug; if we misreport it such
that userspace thinks it can online something that won't work, it probably doesn't
matter.
Jonathan
>
> Jonathan
>
>
> >
> >
> > Best regards
> > Salil
>
>
On Wed, 24 Apr 2024 16:33:22 +0100
Marc Zyngier <[email protected]> wrote:
> On Wed, 24 Apr 2024 13:54:38 +0100,
> Jonathan Cameron <[email protected]> wrote:
> >
> > On Tue, 23 Apr 2024 13:01:21 +0100
> > Marc Zyngier <[email protected]> wrote:
> >
> > > On Mon, 22 Apr 2024 11:40:20 +0100,
> > > Jonathan Cameron <[email protected]> wrote:
> > > >
> > > > On Thu, 18 Apr 2024 14:54:07 +0100
> > > > Jonathan Cameron <[email protected]> wrote:
>
> [...]
>
> > > >
> > > > > + /*
> > > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > > + * the redistributor? ACPI doesn't want to say!
> > > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > > + */
> > > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > > + set_cpu_present(cpu, false);
> > > > > + set_cpu_possible(cpu, false);
> > > > > + return 0;
> > > > > + }
> > >
> > > It seems dangerous to clear those this late in the game, given how
> > > disconnected from the architecture code this is. Are we sure that
> > > nothing has sampled these cpumasks beforehand?
> >
> > Hi Marc,
> >
> > Any firmware that does this is being considered as buggy already
> > but given it is firmware and the spec doesn't say much about this,
> > there is always the possibility.
>
> There is no shortage of broken firmware out there, and I expect this
> trend to progress.
>
> > Not much happens between the point where these are setup and
> > > the point where the gic inits and this code runs, but even if careful
> > review showed it was fine today, it will be fragile to future changes.
> >
> > I'm not sure there is a huge disadvantage for such broken firmware in
> > clearing these masks from the point of view of what is used throughout
> > the rest of the kernel. Here I think we are just looking to prevent the CPU
> > being onlined later.
>
> I totally agree on the goal, I simply question the way you get to it.
>
> >
> > We could add a set_cpu_broken() with appropriate mask.
> > Given this is very arm64 specific I'm not sure Rafael will be keen on
> > us checking such a mask in the generic ACPI code, but we could check it in
> > arch_register_cpu() and just not register the cpu if it matches.
> > That will cover the vCPU hotplug case.
> >
> > Does that sound sensible, or would you prefer something else?
>
>
> Such a 'broken_rdists' mask is exactly what I have in mind, just
> keeping it private to the GIC driver, and not expose it anywhere else.
> You can then fail the hotplug event early, and avoid changing the
> global masks from within the GIC driver. At least, we don't mess with
> the internals of the kernel, and the CPU is properly marked as dead
> (that mechanism should already work).
>
> I'd expect the handling side to look like this (will not compile, but
> you'll get the idea):
Hi Marc,
In general this looks good - but...
I haven't gotten to the bottom of why yet (and it might be a side
effect of how I hacked the test by lying in minimal fashion and
just frigging the MADT read functions) but the hotplug flow is only getting
as far as calling __cpu_up() before it seems to enter an infinite loop.
That is it never gets far enough to fail this test.
Getting stuck in a psci cpu_on call. I'm guessing something that
we didn't get to in the earlier gicv3 calls before bailing out is blocking that?
Looks like it gets to
SMCCC smc
and is never seen again.
Any ideas on where to look? The one advantage so far of the higher level
approach is we never tried the hotplug callbacks at all so avoided hitting
that call. One (little bit horrible) solution that might avoid this would
be to add another cpuhp state very early on and fail at that stage.
I'm not keen on doing that without a better explanation than I have so far!
Thanks,
J
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index 6fb276504bcc..e8f02bfd0e21 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -1009,6 +1009,9 @@ static int __gic_populate_rdist(struct redist_region *region, void __iomem *ptr)
> u64 typer;
> u32 aff;
>
> + if (cpumask_test_cpu(smp_processor_id(), &broken_rdists))
> + return 1;
> +
> /*
> * Convert affinity to a 32bit value that can be matched to
> * GICR_TYPER bits [63:32].
> @@ -1260,14 +1263,15 @@ static int gic_dist_supports_lpis(void)
> !gicv3_nolpi);
> }
>
> -static void gic_cpu_init(void)
> +static int gic_cpu_init(void)
> {
> void __iomem *rbase;
> - int i;
> + int ret, i;
>
> /* Register ourselves with the rest of the world */
> - if (gic_populate_rdist())
> - return;
> + ret = gic_populate_rdist();
> + if (ret)
> + return ret;
>
> gic_enable_redist(true);
>
> @@ -1286,6 +1290,8 @@ static void gic_cpu_init(void)
>
> /* initialise system registers */
> gic_cpu_sys_reg_init();
> +
> + return 0;
> }
>
> #ifdef CONFIG_SMP
> @@ -1295,7 +1301,11 @@ static void gic_cpu_init(void)
>
> static int gic_starting_cpu(unsigned int cpu)
> {
> - gic_cpu_init();
> + int ret;
> +
> + ret = gic_cpu_init();
> + if (ret)
> + return ret;
>
> if (gic_dist_supports_lpis())
> its_cpu_init();
>
> But the question is: do you rely on these masks having been
> "corrected" anywhere else?
>
> Thanks,
>
> M.
>
On Thu, 25 Apr 2024 13:31:50 +0100
Jonathan Cameron <[email protected]> wrote:
> On Wed, 24 Apr 2024 16:33:22 +0100
> Marc Zyngier <[email protected]> wrote:
>
> > On Wed, 24 Apr 2024 13:54:38 +0100,
> > Jonathan Cameron <[email protected]> wrote:
> > >
> > > On Tue, 23 Apr 2024 13:01:21 +0100
> > > Marc Zyngier <[email protected]> wrote:
> > >
> > > > On Mon, 22 Apr 2024 11:40:20 +0100,
> > > > Jonathan Cameron <[email protected]> wrote:
> > > > >
> > > > > On Thu, 18 Apr 2024 14:54:07 +0100
> > > > > Jonathan Cameron <[email protected]> wrote:
> >
> > [...]
> >
> > > > >
> > > > > > + /*
> > > > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > > > + * the redistributor? ACPI doesn't want to say!
> > > > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > > > + */
> > > > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > > > + set_cpu_present(cpu, false);
> > > > > > + set_cpu_possible(cpu, false);
> > > > > > + return 0;
> > > > > > + }
> > > >
> > > > It seems dangerous to clear those this late in the game, given how
> > > > disconnected from the architecture code this is. Are we sure that
> > > > nothing has sampled these cpumasks beforehand?
> > >
> > > Hi Marc,
> > >
> > > Any firmware that does this is being considered as buggy already
> > > but given it is firmware and the spec doesn't say much about this,
> > > there is always the possibility.
> >
> > There is no shortage of broken firmware out there, and I expect this
> > trend to progress.
> >
> > > Not much happens between the point where these are setup and
> > > > the point where the gic inits and this code runs, but even if careful
> > > review showed it was fine today, it will be fragile to future changes.
> > >
> > > I'm not sure there is a huge disadvantage for such broken firmware in
> > > clearing these masks from the point of view of what is used throughout
> > > the rest of the kernel. Here I think we are just looking to prevent the CPU
> > > being onlined later.
> >
> > I totally agree on the goal, I simply question the way you get to it.
> >
> > >
> > > We could add a set_cpu_broken() with appropriate mask.
> > > Given this is very arm64 specific I'm not sure Rafael will be keen on
> > > us checking such a mask in the generic ACPI code, but we could check it in
> > > arch_register_cpu() and just not register the cpu if it matches.
> > > That will cover the vCPU hotplug case.
> > >
> > > Does that sound sensible, or would you prefer something else?
> >
> >
> > Such a 'broken_rdists' mask is exactly what I have in mind, just
> > keeping it private to the GIC driver, and not expose it anywhere else.
> > You can then fail the hotplug event early, and avoid changing the
> > global masks from within the GIC driver. At least, we don't mess with
> > the internals of the kernel, and the CPU is properly marked as dead
> > (that mechanism should already work).
> >
> > I'd expect the handling side to look like this (will not compile, but
> > you'll get the idea):
> Hi Marc,
>
> In general this looks good - but...
>
> I haven't gotten to the bottom of why yet (and it might be a side
> effect of how I hacked the test by lying in minimal fashion and
> just frigging the MADT read functions) but the hotplug flow is only getting
> as far as calling __cpu_up() before it seems to enter an infinite loop.
> That is it never gets far enough to fail this test.
>
> Getting stuck in a psci cpu_on call. I'm guessing something that
> we didn't get to in the earlier gicv3 calls before bailing out is blocking that?
> Looks like it gets to
> SMCCC smc
> and is never seen again.
>
> Any ideas on where to look? The one advantage so far of the higher level
> approach is we never tried the hotplug callbacks at all so avoided hitting
> that call. One (little bit horrible) solution that might avoid this would
> be to add another cpuhp state very early on and fail at that stage.
> I'm not keen on doing that without a better explanation than I have so far!
Whilst it still doesn't work, I suspect I'm losing the ability to print to the console
between that point and somewhat later, and the real problem is elsewhere.
Jonathan
>
> Thanks,
>
> J
>
>
> > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > index 6fb276504bcc..e8f02bfd0e21 100644
> > --- a/drivers/irqchip/irq-gic-v3.c
> > +++ b/drivers/irqchip/irq-gic-v3.c
> > @@ -1009,6 +1009,9 @@ static int __gic_populate_rdist(struct redist_region *region, void __iomem *ptr)
> > u64 typer;
> > u32 aff;
> >
> > + if (cpumask_test_cpu(smp_processor_id(), &broken_rdists))
> > + return 1;
> > +
> > /*
> > * Convert affinity to a 32bit value that can be matched to
> > * GICR_TYPER bits [63:32].
> > @@ -1260,14 +1263,15 @@ static int gic_dist_supports_lpis(void)
> > !gicv3_nolpi);
> > }
> >
> > -static void gic_cpu_init(void)
> > +static int gic_cpu_init(void)
> > {
> > void __iomem *rbase;
> > - int i;
> > + int ret, i;
> >
> > /* Register ourselves with the rest of the world */
> > - if (gic_populate_rdist())
> > - return;
> > + ret = gic_populate_rdist();
> > + if (ret)
> > + return ret;
> >
> > gic_enable_redist(true);
> >
> > @@ -1286,6 +1290,8 @@ static void gic_cpu_init(void)
> >
> > /* initialise system registers */
> > gic_cpu_sys_reg_init();
> > +
> > + return 0;
> > }
> >
> > #ifdef CONFIG_SMP
> > @@ -1295,7 +1301,11 @@ static void gic_cpu_init(void)
> >
> > static int gic_starting_cpu(unsigned int cpu)
> > {
> > - gic_cpu_init();
> > + int ret;
> > +
> > + ret = gic_cpu_init();
> > + if (ret)
> > + return ret;
> >
> > if (gic_dist_supports_lpis())
> > its_cpu_init();
> >
> > But the question is: do you rely on these masks having been
> > "corrected" anywhere else?
> >
> > Thanks,
> >
> > M.
> >
>
>
On Thu, 25 Apr 2024 16:00:17 +0100
Jonathan Cameron <[email protected]> wrote:
> On Thu, 25 Apr 2024 13:31:50 +0100
> Jonathan Cameron <[email protected]> wrote:
>
> > On Wed, 24 Apr 2024 16:33:22 +0100
> > Marc Zyngier <[email protected]> wrote:
> >
> > > On Wed, 24 Apr 2024 13:54:38 +0100,
> > > Jonathan Cameron <[email protected]> wrote:
> > > >
> > > > On Tue, 23 Apr 2024 13:01:21 +0100
> > > > Marc Zyngier <[email protected]> wrote:
> > > >
> > > > > On Mon, 22 Apr 2024 11:40:20 +0100,
> > > > > Jonathan Cameron <[email protected]> wrote:
> > > > > >
> > > > > > On Thu, 18 Apr 2024 14:54:07 +0100
> > > > > > Jonathan Cameron <[email protected]> wrote:
> > >
> > > [...]
> > >
> > > > > >
> > > > > > > + /*
> > > > > > > + * Capable but disabled CPUs can be brought online later. What about
> > > > > > > + * the redistributor? ACPI doesn't want to say!
> > > > > > > + * Virtual hotplug systems can use the MADT's "always-on" GICR entries.
> > > > > > > + * Otherwise, prevent such CPUs from being brought online.
> > > > > > > + */
> > > > > > > + if (!(gicc->flags & ACPI_MADT_ENABLED)) {
> > > > > > > + pr_warn_once("CPU %u's redistributor is inaccessible: this CPU can't be brought online\n", cpu);
> > > > > > > + set_cpu_present(cpu, false);
> > > > > > > + set_cpu_possible(cpu, false);
> > > > > > > + return 0;
> > > > > > > + }
> > > > >
> > > > > It seems dangerous to clear those this late in the game, given how
> > > > > disconnected from the architecture code this is. Are we sure that
> > > > > nothing has sampled these cpumasks beforehand?
> > > >
> > > > Hi Marc,
> > > >
> > > > Any firmware that does this is being considered as buggy already
> > > > but given it is firmware and the spec doesn't say much about this,
> > > > there is always the possibility.
> > >
> > > There is no shortage of broken firmware out there, and I expect this
> > > trend to progress.
> > >
> > > > Not much happens between the point where these are setup and
> > > > the point where the gic inits and this code runs, but even if careful
> > > > review showed it was fine today, it will be fragile to future changes.
> > > >
> > > > I'm not sure there is a huge disadvantage for such broken firmware in
> > > > clearing these masks from the point of view of what is used throughout
> > > > the rest of the kernel. Here I think we are just looking to prevent the CPU
> > > > being onlined later.
> > >
> > > I totally agree on the goal, I simply question the way you get to it.
> > >
> > > >
> > > > We could add a set_cpu_broken() with appropriate mask.
> > > > Given this is very arm64 specific I'm not sure Rafael will be keen on
> > > > us checking such a mask in the generic ACPI code, but we could check it in
> > > > arch_register_cpu() and just not register the cpu if it matches.
> > > > That will cover the vCPU hotplug case.
> > > >
> > > > Does that sound sensible, or would you prefer something else?
> > >
> > >
> > > Such a 'broken_rdists' mask is exactly what I have in mind, just
> > > keeping it private to the GIC driver, and not expose it anywhere else.
> > > You can then fail the hotplug event early, and avoid changing the
> > > global masks from within the GIC driver. At least, we don't mess with
> > > the internals of the kernel, and the CPU is properly marked as dead
> > > (that mechanism should already work).
> > >
> > > I'd expect the handling side to look like this (will not compile, but
> > > you'll get the idea):
> > Hi Marc,
> >
> > In general this looks good - but...
> >
> > I haven't gotten to the bottom of why yet (and it might be a side
> > effect of how I hacked the test by lying in minimal fashion and
> > just frigging the MADT read functions) but the hotplug flow is only getting
> > as far as calling __cpu_up() before it seems to enter an infinite loop.
> > That is it never gets far enough to fail this test.
> >
> > Getting stuck in a psci cpu_on call. I'm guessing something that
> > we didn't get to in the earlier gicv3 calls before bailing out is blocking that?
> > Looks like it gets to
> > SMCCC smc
> > and is never seen again.
> >
> > Any ideas on where to look? The one advantage so far of the higher level
> > approach is we never tried the hotplug callbacks at all so avoided hitting
> > that call. One (little bit horrible) solution that might avoid this would
> > be to add another cpuhp state very early on and fail at that stage.
> > I'm not keen on doing that without a better explanation than I have so far!
>
> Whilst it still doesn't work, I suspect I'm losing the ability to print to the console
> between that point and somewhat later, and the real problem is elsewhere.
Hi again,
Found it I think. cpuhp calls between cpu:bringup and ap:online
are made from notify_cpu_starting() and are clearly marked as nofail with a comment:
STARTING must not fail!
https://elixir.bootlin.com/linux/latest/source/kernel/cpu.c#L1642
Whilst I have no immediate idea why that comment is there, it is a pretty strong
argument against trying to have the CPUHP_AP_IRQ_GIC_STARTING callback fail
and expecting it to carry on working :(
There would have been a nice print message, but given I don't appear to have
a working console after that stage I never see it.
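(Paraphrasing the linked kernel/cpu.c for context -- details vary by kernel version --
the STARTING-section callbacks are invoked via the nofail variant, so there is no error
path that could unwind a failure from CPUHP_AP_IRQ_GIC_STARTING:)

        void notify_cpu_starting(unsigned int cpu)
        {
                struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
                enum cpuhp_state target = min((int)st->target, CPUHP_AP_ONLINE);

                cpumask_set_cpu(cpu, &cpus_booted_once_mask);

                /*
                 * STARTING must not fail!
                 */
                cpuhp_invoke_callback_range_nofail(true, cpu, st, target);
        }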
So the best I have yet come up with for this is the option of a new callback registered
in gic_smp_init()
cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
"irqchip/arm/gicv3:checkrdist",
gic_broken_rdist, NULL);
with callback being simply
static int gic_broken_rdist(unsigned int cpu)
{
if (cpumask_test_cpu(cpu, &broken_rdists))
return -EINVAL;
return 0;
}
That gets called from cpuhp_up_callbacks() and is allowed to fail and roll back the steps.
Not particularly satisfying but keeps the logic confined to the gicv3 driver.
What do you think?
Jonathan
>
> Jonathan
>
> >
> > Thanks,
> >
> > J
> >
> >
> > > diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> > > index 6fb276504bcc..e8f02bfd0e21 100644
> > > --- a/drivers/irqchip/irq-gic-v3.c
> > > +++ b/drivers/irqchip/irq-gic-v3.c
> > > @@ -1009,6 +1009,9 @@ static int __gic_populate_rdist(struct redist_region *region, void __iomem *ptr)
> > > u64 typer;
> > > u32 aff;
> > >
> > > + if (cpumask_test_cpu(smp_processor_id(), &broken_rdists))
> > > + return 1;
> > > +
> > > /*
> > > * Convert affinity to a 32bit value that can be matched to
> > > * GICR_TYPER bits [63:32].
> > > @@ -1260,14 +1263,15 @@ static int gic_dist_supports_lpis(void)
> > > !gicv3_nolpi);
> > > }
> > >
> > > -static void gic_cpu_init(void)
> > > +static int gic_cpu_init(void)
> > > {
> > > void __iomem *rbase;
> > > - int i;
> > > + int ret, i;
> > >
> > > /* Register ourselves with the rest of the world */
> > > - if (gic_populate_rdist())
> > > - return;
> > > + ret = gic_populate_rdist();
> > > + if (ret)
> > > + return ret;
> > >
> > > gic_enable_redist(true);
> > >
> > > @@ -1286,6 +1290,8 @@ static void gic_cpu_init(void)
> > >
> > > /* initialise system registers */
> > > gic_cpu_sys_reg_init();
> > > +
> > > + return 0;
> > > }
> > >
> > > #ifdef CONFIG_SMP
> > > @@ -1295,7 +1301,11 @@ static void gic_cpu_init(void)
> > >
> > > static int gic_starting_cpu(unsigned int cpu)
> > > {
> > > - gic_cpu_init();
> > > + int ret;
> > > +
> > > + ret = gic_cpu_init();
> > > + if (ret)
> > > + return ret;
> > >
> > > if (gic_dist_supports_lpis())
> > > its_cpu_init();
> > >
> > > But the question is: do you rely on these masks having been
> > > "corrected" anywhere else?
> > >
> > > Thanks,
> > >
> > > M.
> > >
> >
> >
>
>
On 4/18/24 23:54, Jonathan Cameron wrote:
> If CONFIG_ACPI_PROCESSOR is set, provide a helper to retrieve the
> acpi_handle for a given CPU allowing access to methods
> in DSDT.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> ---
> v7: No change
> v6: New patch
> ---
> drivers/acpi/acpi_processor.c | 10 ++++++++++
> include/linux/acpi.h | 7 +++++++
> 2 files changed, 17 insertions(+)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index ac7ddb30f10e..127ae8dcb787 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -35,6 +35,16 @@ EXPORT_PER_CPU_SYMBOL(processors);
> struct acpi_processor_errata errata __read_mostly;
> EXPORT_SYMBOL_GPL(errata);
>
> +acpi_handle acpi_get_processor_handle(int cpu)
> +{
> + acpi_handle handle = NULL;
> + struct acpi_processor *pr = per_cpu(processors, cpu);;
^^
s/;;/;
> +
> + if (pr)
> + handle = pr->handle;
> +
> + return handle;
> +}
> static int acpi_processor_errata_piix4(struct pci_dev *dev)
> {
> u8 value1 = 0;
> diff --git a/include/linux/acpi.h b/include/linux/acpi.h
> index 34829f2c517a..9844a3f9c4e5 100644
> --- a/include/linux/acpi.h
> +++ b/include/linux/acpi.h
> @@ -309,6 +309,8 @@ int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id,
> int acpi_unmap_cpu(int cpu);
> #endif /* CONFIG_ACPI_HOTPLUG_CPU */
>
> +acpi_handle acpi_get_processor_handle(int cpu);
> +
> #ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
> int acpi_get_ioapic_id(acpi_handle handle, u32 gsi_base, u64 *phys_addr);
> #endif
> @@ -1077,6 +1079,11 @@ static inline bool acpi_sleep_state_supported(u8 sleep_state)
> return false;
> }
>
> +static inline acpi_handle acpi_get_processor_handle(int cpu)
> +{
> + return NULL;
> +}
> +
> #endif /* !CONFIG_ACPI */
>
> extern void arch_post_acpi_subsys_init(void);
Thanks,
Gavin
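(For illustration only, a hypothetical caller of the new helper might look like the
snippet below; cpu_acpi_is_enabled() is a made-up name and the _STA evaluation is just
an example use, not part of the series:)

        #include <linux/acpi.h>

        /* Hypothetical example: check the ACPI "enabled" bit in _STA for a
         * given logical CPU via the new helper. */
        static bool cpu_acpi_is_enabled(int cpu)
        {
                acpi_handle handle = acpi_get_processor_handle(cpu);
                unsigned long long sta;

                if (!handle)
                        return false;

                if (ACPI_FAILURE(acpi_evaluate_integer(handle, "_STA", NULL, &sta)))
                        return false;

                return sta & ACPI_STA_DEVICE_ENABLED;
        }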
On 4/18/24 23:54, Jonathan Cameron wrote:
> From: James Morse <[email protected]>
>
> The arm64 specific arch_register_cpu() call may defer CPU registration
> until the ACPI interpreter is available and the _STA method can
> be evaluated.
>
> If this occurs, then a second attempt is made in
> acpi_processor_get_info(). Note that the arm64 specific call has
> not yet been added so for now this will be called for the original
> hotplug case.
>
> For architectures that do not defer until the ACPI Processor
> driver loads (e.g. x86), for initially present CPUs there will
> already be a CPU device. If present do not try to register again.
>
> Systems can still be booted with 'acpi=off', or not include an
> ACPI description at all as in these cases arch_register_cpu()
> will not have deferred registration when first called.
>
> This moves the CPU register logic back to a subsys_initcall(),
> while the memory nodes will have been registered earlier.
> Note this is where the call was prior to the cleanup series so
> there should be no side effects of moving it back again for this
> specific case.
>
> [PATCH 00/21] Initial cleanups for vCPU HP.
> https://lore.kernel.org/all/ZVyz%[email protected]/
> commit 5b95f94c3b9f ("x86/topology: Switch over to GENERIC_CPU_DEVICES")
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Co-developed-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Joanthan Cameron <[email protected]>
> ---
s/Joanthan/Jonathan ?
> v7: Simplify the logic on whether to hotadd the CPU.
> This path can only be reached either for coldplug in which
> case all we care about is has register_cpu() already been
> called (identifying deferred), or hotplug in which case
> whether register_cpu() has been called is also sufficient.
> Checks on _STA related elements or the validity of the ID
> are no longer necessary here due to similar checks having
> moved elsewhere in the path.
> v6: Squash the two paths for conventional CPU Hotplug and arm64
> vCPU HP.
> ---
> drivers/acpi/acpi_processor.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
> index 127ae8dcb787..4e65011e706c 100644
> --- a/drivers/acpi/acpi_processor.c
> +++ b/drivers/acpi/acpi_processor.c
> @@ -350,14 +350,14 @@ static int acpi_processor_get_info(struct acpi_device *device)
> }
>
> /*
> - * Extra Processor objects may be enumerated on MP systems with
> - * less than the max # of CPUs. They should be ignored _iff
> - * they are physically not present.
> - *
> - * NOTE: Even if the processor has a cpuid, it may not be present
> - * because cpuid <-> apicid mapping is persistent now.
> + * This code is not called unless we know the CPU is present and
> + * enabled. The two paths are:
> + * a) Initially present CPUs on architectures that do not defer
> + * their arch_register_cpu() calls until this point.
> + * b) Hotplugged CPUs (enabled bit in _STA has transitioned from not
> + * enabled to enabled)
> */
> - if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
> + if (!get_cpu_device(pr->id)) {
> ret = acpi_processor_hotadd_init(pr, device);
>
> if (ret)
Thanks,
Gavin
On 4/18/24 23:53, Jonathan Cameron wrote:
> For arm64 the CPU registration cannot complete until the ACPI
> interpreter us up and running so in those cases the arch specific
^^
I guess it's a typo? s/us/is
> arch_register_cpu() will return -EPROBE_DEFER at this stage and the
> registration will be attempted later.
>
> Suggested-by: Rafael J. Wysocki <[email protected]>
> Acked-by: Rafael J. Wysocki <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: Fix condition to not print the error message of success (thanks Russell!)
> ---
> drivers/base/cpu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
Reviewed-by: Gavin Shan <[email protected]>
> diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
> index 56fba44ba391..7b83e9c87d7c 100644
> --- a/drivers/base/cpu.c
> +++ b/drivers/base/cpu.c
> @@ -558,7 +558,7 @@ static void __init cpu_dev_register_generic(void)
>
> for_each_present_cpu(i) {
> ret = arch_register_cpu(i);
> - if (ret)
> + if (ret && ret != -EPROBE_DEFER)
> pr_warn("register_cpu %d failed (%d)\n", i, ret);
> }
> }
On 4/18/24 23:53, Jonathan Cameron wrote:
> Separate code paths, combined with a flag set in acpi_processor.c to
> indicate a struct acpi_processor was for a hotplugged CPU ensured that
> per CPU data was only set up the first time that a CPU was initialized.
> This appears to be unnecessary as the paths can be combined by letting
> the online logic also handle any CPUs online at the time of driver load.
>
> Motivation for this change, beyond simplification, is that ARM64
> virtual CPU HP uses the same code paths for hotplug and cold path in
> acpi_processor.c so had no easy way to set the flag for hotplug only.
> Removing this necessity will enable ARM64 vCPU HP to reuse the existing
> code paths.
>
> Leave noisy pr_info() in place but update it to not state the CPU
> was hotplugged.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: No change.
> v6: New patch.
> RFT: I have very limited test resources for x86 and other
> architectures that may be affected by this change.
> ---
> drivers/acpi/acpi_processor.c | 1 -
> drivers/acpi/processor_driver.c | 44 ++++++++++-----------------------
> include/acpi/processor.h | 2 +-
> 3 files changed, 14 insertions(+), 33 deletions(-)
>
Reviewed-by: Gavin Shan <[email protected]>
On 4/18/24 23:53, Jonathan Cameron wrote:
> The ACPI bus scan will only result in acpi_processor_add() being called
> if _STA has already been checked and the result is that the
> processor is enabled and present. Hence drop this additional check.
>
> Suggested-by: Rafael J. Wysocki <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: No change
> v6: New patch to drop this unnecessary code. Now I think we only
> need to explicitly read STA to print a warning in the ARM64
> arch_unregister_cpu() path where we want to know if the
> present bit has been unset as well.
> ---
> drivers/acpi/acpi_processor.c | 6 ------
> 1 file changed, 6 deletions(-)
>
Reviewed-by: Gavin Shan <[email protected]>
On 4/18/24 23:54, Jonathan Cameron wrote:
> Precursor patch adds the ability to pass a uintptr_t of flags into
> acpi_scan_check_and_detach() so that additional flags can be
> added to indicate whether to defer portions of the eject flow.
> The new flag follows in the next patch.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
> ---
> v7: No change
> v6: Based on internal feedback switch to less invasive change
> to using flags rather than a struct.
> ---
> drivers/acpi/scan.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
Reviewed-by: Gavin Shan <[email protected]>
On 4/18/24 23:54, Jonathan Cameron wrote:
> From: Jean-Philippe Brucker <[email protected]>
>
> When a CPU is marked as disabled, but online capable in the MADT, PSCI
> applies some firmware policy to control when it can be brought online.
> PSCI returns DENIED to a CPU_ON request if this is not currently
> permitted. The OS can learn the current policy from the _STA enabled bit.
>
> Handle the PSCI DENIED return code gracefully instead of printing an
> error.
>
> See https://developer.arm.com/documentation/den0022/f/?lang=en page 58.
>
> Signed-off-by: Jean-Philippe Brucker <[email protected]>
> [ morse: Rewrote commit message ]
> Signed-off-by: James Morse <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Reviewed-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Russell King (Oracle) <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
> ---
> v7: No change
> ---
> arch/arm64/kernel/psci.c | 2 +-
> arch/arm64/kernel/smp.c | 3 ++-
> 2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
> index 29a8e444db83..fabd732d0a2d 100644
> --- a/arch/arm64/kernel/psci.c
> +++ b/arch/arm64/kernel/psci.c
> @@ -40,7 +40,7 @@ static int cpu_psci_cpu_boot(unsigned int cpu)
> {
> phys_addr_t pa_secondary_entry = __pa_symbol(secondary_entry);
> int err = psci_ops.cpu_on(cpu_logical_map(cpu), pa_secondary_entry);
> - if (err)
> + if (err && err != -EPERM)
> pr_err("failed to boot CPU%d (%d)\n", cpu, err);
>
> return err;
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 4ced34f62dab..dc0e0b3ec2d4 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -132,7 +132,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> /* Now bring the CPU into our world */
> ret = boot_secondary(cpu, idle);
> if (ret) {
> - pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
> + if (ret != -EPERM)
> + pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
> return ret;
> }
>
The changes in smp.c are based on the assumption that PSCI is the only backend, which
isn't true. So we probably need to move this error message to the specific backend, which
could be PSCI, ACPI parking protocol, or smp_spin_table.
Thanks,
Gavin
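(For context, boot_secondary() dispatches to whichever enable-method backend the CPU
uses, roughly as below -- simplified and possibly differing in detail from the current
tree -- so only a backend that actually returns -EPERM is affected by the change above:)

        /* arch/arm64/kernel/smp.c, roughly: */
        static int boot_secondary(unsigned int cpu, struct task_struct *idle)
        {
                const struct cpu_operations *ops = get_cpu_ops(cpu);

                if (ops->cpu_boot)
                        return ops->cpu_boot(cpu);

                return -EOPNOTSUPP;
        }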
On Fri, 26 Apr 2024 19:36:10 +1000
Gavin Shan <[email protected]> wrote:
> On 4/18/24 23:54, Jonathan Cameron wrote:
> > From: Jean-Philippe Brucker <[email protected]>
> >
> > When a CPU is marked as disabled, but online capable in the MADT, PSCI
> > applies some firmware policy to control when it can be brought online.
> > PSCI returns DENIED to a CPU_ON request if this is not currently
> > permitted. The OS can learn the current policy from the _STA enabled bit.
> >
> > Handle the PSCI DENIED return code gracefully instead of printing an
> > error.
> >
> > See https://developer.arm.com/documentation/den0022/f/?lang=en page 58.
> >
> > Signed-off-by: Jean-Philippe Brucker <[email protected]>
> > [ morse: Rewrote commit message ]
> > Signed-off-by: James Morse <[email protected]>
> > Tested-by: Miguel Luis <[email protected]>
> > Tested-by: Vishnu Pajjuri <[email protected]>
> > Tested-by: Jianyong Wu <[email protected]>
> > Reviewed-by: Jonathan Cameron <[email protected]>
> > Signed-off-by: Russell King (Oracle) <[email protected]>
> > Signed-off-by: Jonathan Cameron <[email protected]>
> > ---
> > v7: No change
> > ---
> > arch/arm64/kernel/psci.c | 2 +-
> > arch/arm64/kernel/smp.c | 3 ++-
> > 2 files changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
> > index 29a8e444db83..fabd732d0a2d 100644
> > --- a/arch/arm64/kernel/psci.c
> > +++ b/arch/arm64/kernel/psci.c
> > @@ -40,7 +40,7 @@ static int cpu_psci_cpu_boot(unsigned int cpu)
> > {
> > phys_addr_t pa_secondary_entry = __pa_symbol(secondary_entry);
> > int err = psci_ops.cpu_on(cpu_logical_map(cpu), pa_secondary_entry);
> > - if (err)
> > + if (err && err != -EPERM)
> > pr_err("failed to boot CPU%d (%d)\n", cpu, err);
> >
> > return err;
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 4ced34f62dab..dc0e0b3ec2d4 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -132,7 +132,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> > /* Now bring the CPU into our world */
> > ret = boot_secondary(cpu, idle);
> > if (ret) {
> > - pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
> > + if (ret != -EPERM)
> > + pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
> > return ret;
> > }
> >
>
> The changes in smp.c are based on the assumption that PSCI is the only backend, which
> isn't true. So we probably need to move this error message to the specific backend, which
> could be PSCI, ACPI parking protocol, or smp_spin_table.
Do we? I'll check but I doubt other options ever return -EPERM so this change should
not impact those at all. If they do add support in future for rejecting on the basis
of not having permission then this is fine anyway.
Jonathan
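(For reference, the PSCI firmware driver converts PSCI return codes to errnos roughly
as below -- simplified from drivers/firmware/psci/psci.c -- which is where CPU_ON's
DENIED becomes the -EPERM handled in this patch:)

        static int psci_to_linux_errno(int errno)
        {
                switch (errno) {
                case PSCI_RET_SUCCESS:
                        return 0;
                case PSCI_RET_NOT_SUPPORTED:
                        return -EOPNOTSUPP;
                case PSCI_RET_INVALID_PARAMS:
                case PSCI_RET_INVALID_ADDRESS:
                        return -EINVAL;
                case PSCI_RET_DENIED:
                        return -EPERM;
                }

                return -EINVAL;
        }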
>
> Thanks,
> Gavin
>
On Thu, 18 Apr 2024 14:54:04 +0100
Jonathan Cameron <[email protected]> wrote:
> From: James Morse <[email protected]>
>
> struct acpi_scan_handler has a detach callback that is used to remove
> a driver when a bus is changed. When interacting with an eject-request,
> the detach callback is called before _EJ0.
>
> This means the ACPI processor driver can't use _STA to determine if a
> CPU has been made not-present, or some of the other _STA bits have been
> changed. acpi_processor_remove() needs to know the value of _STA after
> _EJ0 has been called.
>
> Add a post_eject callback to struct acpi_scan_handler. This is called
> after acpi_scan_hot_remove() has successfully called _EJ0. Because
> acpi_scan_check_and_detach() also clears the handler pointer,
> it needs to be told if the caller will go on to call
> acpi_bus_post_eject(), so that acpi_device_clear_enumerated()
> and clearing the handler pointer can be deferred.
> An extra flag is added to flags field introduced in the previous
> patch to achieve this.
>
> Signed-off-by: James Morse <[email protected]>
> Reviewed-by: Joanthan Cameron <[email protected]>
Gavin's earlier review showed I can't type.
Fixed up by dropping this RB seeing as I signed off anyway.
> Reviewed-by: Gavin Shan <[email protected]>
> Tested-by: Miguel Luis <[email protected]>
> Tested-by: Vishnu Pajjuri <[email protected]>
> Tested-by: Jianyong Wu <[email protected]>
> Signed-off-by: Jonathan Cameron <[email protected]>
>
On Thu, 25 Apr 2024 17:55:27 +0100,
Jonathan Cameron <[email protected]> wrote:
>
> On Thu, 25 Apr 2024 16:00:17 +0100
> Jonathan Cameron <[email protected]> wrote:
>
> > On Thu, 25 Apr 2024 13:31:50 +0100
> > Jonathan Cameron <[email protected]> wrote:
> >
> > > On Wed, 24 Apr 2024 16:33:22 +0100
> > > Marc Zyngier <[email protected]> wrote:
[...]
> > >
> > > > I'd expect the handling side to look like this (will not compile, but
> > > > you'll get the idea):
> > > Hi Marc,
> > >
> > > In general this looks good - but...
> > >
> > > I haven't gotten to the bottom of why yet (and it might be a side
> > > effect of how I hacked the test by lying in minimal fashion and
> > > just frigging the MADT read functions) but the hotplug flow is only getting
> > > as far as calling __cpu_up() before it seems to enter an infinite loop.
> > > That is it never gets far enough to fail this test.
> > >
> > > Getting stuck in a psci cpu_on call. I'm guessing something that
> > > we didn't get to in the earlier gicv3 calls before bailing out is blocking that?
> > > Looks like it gets to
> > > SMCCC smc
> > > and is never seen again.
> > >
> > > Any ideas on where to look? The one advantage so far of the higher level
> > > approach is we never tried the hotplug callbacks at all so avoided hitting
> > > that call. One (little bit horrible) solution that might avoid this would
> > > be to add another cpuhp state very early on and fail at that stage.
> > > I'm not keen on doing that without a better explanation than I have so far!
> >
> > Whilst it still doesn't work, I suspect I'm losing the ability to print to the console
> > between that point and somewhat later, and the real problem is
> > elsewhere.
Sorry, travelling at the moment, so only spotted this now.
>
> Hi again,
>
> Found it I think. cpuhp calls between cpu:bringup and ap:online
> are made from notify_cpu_starting() and are clearly marked as nofail with a comment:
> STARTING must not fail!
>
> https://elixir.bootlin.com/linux/latest/source/kernel/cpu.c#L1642
Ah, now that rings a bell! ;-)
>
> Whilst I have no immediate idea why that comment is there, it is a pretty strong
> argument against trying to have the CPUHP_AP_IRQ_GIC_STARTING callback fail
> and expecting it to carry on working :(
> There would have been a nice print message, but given I don't appear to have
> a working console after that stage I never see it.
>
> So the best I have yet come up with for this is the option of a new callback registered
> in gic_smp_init()
>
> cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN,
> "irqchip/arm/gicv3:checkrdist",
> gic_broken_rdist, NULL);
>
> with callback being simply
>
> static int gic_broken_rdist(unsigned int cpu)
> {
> if (cpumask_test_cpu(cpu, &broken_rdists))
> return -EINVAL;
>
> return 0;
> }
>
> That gets called from cpuhp_up_callbacks() and is allowed to fail and roll back the steps.
>
> Not particularly satisfying but keeps the logic confined to the gicv3 driver.
>
> What do you think?
Good enough for me. Cc me on the resulting patch when you repost it so
that I can eyeball it, but this is IMO the right direction.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.