2019-07-04 17:16:43

by Thomas Gleixner

[permalink] [raw]
Subject: [patch V2 21/25] x86/smp: Enhance native_send_call_func_ipi()

Nadav noticed that the cpumask allocations in native_send_call_func_ipi()
show up in microbenchmarks.

Use the new cpumask_or_equal() function to simplify the decision whether
the supplied target CPU mask is either equal to cpu_online_mask or equal to
cpu_online_mask except for the CPU on which the function is invoked.

cpumask_or_equal() ORs the target mask and the cpumask of the current CPU
together and compares the result to cpu_online_mask.

If the result is false, use the mask based IPI function; otherwise check
whether the current CPU is set in the target mask and invoke either the
send_IPI_all() or the send_IPI_allbutself() APIC callback.

Make the shorthand decision also depend on the static key which enables
shorthand mode. That allows removing the extra cpumask comparison with
cpu_callout_mask.

Reported-by: Nadav Amit <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
V2: New patch
---
arch/x86/kernel/apic/ipi.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)

--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -83,23 +83,21 @@ void native_send_call_func_single_ipi(in

 void native_send_call_func_ipi(const struct cpumask *mask)
 {
-	cpumask_var_t allbutself;
+	if (static_branch_likely(&apic_use_ipi_shorthand)) {
+		unsigned int cpu = smp_processor_id();
 
-	if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
-		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
+		if (!cpumask_or_equal(mask, cpumask_of(cpu), cpu_online_mask))
+			goto sendmask;
+
+		if (cpumask_test_cpu(cpu, mask))
+			apic->send_IPI_all(CALL_FUNCTION_VECTOR);
+		else if (num_online_cpus() > 1)
+			apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
 		return;
 	}
 
-	cpumask_copy(allbutself, cpu_online_mask);
-	cpumask_clear_cpu(smp_processor_id(), allbutself);
-
-	if (cpumask_equal(mask, allbutself) &&
-	    cpumask_equal(cpu_online_mask, cpu_callout_mask))
-		apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
-	else
-		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
-
-	free_cpumask_var(allbutself);
+sendmask:
+	apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
 }

#endif /* CONFIG_SMP */



2019-07-05 01:36:05

by Nadav Amit

[permalink] [raw]
Subject: Re: [patch V2 21/25] x86/smp: Enhance native_send_call_func_ipi()

> On Jul 4, 2019, at 8:52 AM, Thomas Gleixner <[email protected]> wrote:
>
> Nadav noticed that the cpumask allocations in native_send_call_func_ipi()
> show up in microbenchmarks.
>
> Use the new cpumask_or_equal() function to simplify the decision whether
> the supplied target CPU mask is either equal to cpu_online_mask or equal to
> cpu_online_mask except for the CPU on which the function is invoked.
>
> cpumask_or_equal() ORs the target mask and the cpumask of the current CPU
> together and compares the result to cpu_online_mask.
>
> If the result is false, use the mask based IPI function; otherwise check
> whether the current CPU is set in the target mask and invoke either the
> send_IPI_all() or the send_IPI_allbutself() APIC callback.
>
> Make the shorthand decision also depend on the static key which enables
> shorthand mode. That allows removing the extra cpumask comparison with
> cpu_callout_mask.
>
> Reported-by: Nadav Amit <[email protected]>
> Signed-off-by: Thomas Gleixner <[email protected]>
> ---
> V2: New patch
> ---
> arch/x86/kernel/apic/ipi.c | 24 +++++++++++-------------
> 1 file changed, 11 insertions(+), 13 deletions(-)
>
> --- a/arch/x86/kernel/apic/ipi.c
> +++ b/arch/x86/kernel/apic/ipi.c
> @@ -83,23 +83,21 @@ void native_send_call_func_single_ipi(in
>
> void native_send_call_func_ipi(const struct cpumask *mask)
> {
> -	cpumask_var_t allbutself;
> +	if (static_branch_likely(&apic_use_ipi_shorthand)) {
> +		unsigned int cpu = smp_processor_id();
>
> -	if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
> -		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> +		if (!cpumask_or_equal(mask, cpumask_of(cpu), cpu_online_mask))
> +			goto sendmask;
> +
> +		if (cpumask_test_cpu(cpu, mask))
> +			apic->send_IPI_all(CALL_FUNCTION_VECTOR);
> +		else if (num_online_cpus() > 1)
> +			apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
> 		return;
> 	}
>
> -	cpumask_copy(allbutself, cpu_online_mask);
> -	cpumask_clear_cpu(smp_processor_id(), allbutself);
> -
> -	if (cpumask_equal(mask, allbutself) &&
> -	    cpumask_equal(cpu_online_mask, cpu_callout_mask))
> -		apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
> -	else
> -		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> -
> -	free_cpumask_var(allbutself);
> +sendmask:
> +	apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> }
>
> #endif /* CONFIG_SMP */

It does look better and simpler than my solution.