2019-10-27 16:26:47

by Vitaly Kuznetsov

Subject: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case

When sending an IPI to a single CPU there is no need to deal with cpumasks.
With a 2-CPU guest on WS2019 I'm seeing a minor (~3%, 8043 -> 7761 CPU
cycles) improvement with the smp_call_function_single() loop benchmark. The
optimization, however, is tiny and straightforward. Also, send_ipi_one() is
important for the PV spinlock kick.

I was also wondering if it would make sense to switch to using the regular
APIC IPI send for the CPU > 64 case but no, it is twice as expensive
(12650 CPU cycles for the __send_ipi_mask_ex() call vs. 26000 for
orig_apic.send_IPI(cpu, vector)).

Signed-off-by: Vitaly Kuznetsov <[email protected]>
---
Changes since v2:
- Check VP number instead of CPU number against >= 64 [Michael]
- Check for VP_INVAL
---
arch/x86/hyperv/hv_apic.c | 16 +++++++++++++---
arch/x86/include/asm/trace/hyperv.h | 15 +++++++++++++++
2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
index e01078e93dd3..40e0e322161d 100644
--- a/arch/x86/hyperv/hv_apic.c
+++ b/arch/x86/hyperv/hv_apic.c
@@ -194,10 +194,20 @@ static bool __send_ipi_mask(const struct cpumask *mask, int vector)

static bool __send_ipi_one(int cpu, int vector)
{
- struct cpumask mask = CPU_MASK_NONE;
+ int vp = hv_cpu_number_to_vp_number(cpu);

- cpumask_set_cpu(cpu, &mask);
- return __send_ipi_mask(&mask, vector);
+ trace_hyperv_send_ipi_one(cpu, vector);
+
+ if (!hv_hypercall_pg || (vp == VP_INVAL))
+ return false;
+
+ if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR))
+ return false;
+
+ if (vp >= 64)
+ return __send_ipi_mask_ex(cpumask_of(cpu), vector);
+
+ return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp));
}

static void hv_send_ipi(int cpu, int vector)
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index ace464f09681..4d705cb4d63b 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -71,6 +71,21 @@ TRACE_EVENT(hyperv_send_ipi_mask,
__entry->ncpus, __entry->vector)
);

+TRACE_EVENT(hyperv_send_ipi_one,
+ TP_PROTO(int cpu,
+ int vector),
+ TP_ARGS(cpu, vector),
+ TP_STRUCT__entry(
+ __field(int, cpu)
+ __field(int, vector)
+ ),
+ TP_fast_assign(__entry->cpu = cpu;
+ __entry->vector = vector;
+ ),
+ TP_printk("cpu %d vector %x",
+ __entry->cpu, __entry->vector)
+ );
+
#endif /* CONFIG_HYPERV */

#undef TRACE_INCLUDE_PATH
--
2.20.1
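For context, the smp_call_function_single() loop benchmark cited in the
commit message is not part of the patch; a minimal sketch of such a
measurement, written as a throwaway kernel module (the iteration count,
target CPU and all names here are illustrative assumptions, not from the
thread), could look like this:

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/smp.h>
    #include <asm/msr.h>		/* rdtsc() */

    #define IPI_BENCH_ITERS	100000
    #define IPI_BENCH_CPU	1	/* assumed-online target CPU */

    static void ipi_bench_noop(void *info)
    {
    	/* empty payload: only the IPI send/ack round trip is measured */
    }

    static int __init ipi_bench_init(void)
    {
    	u64 start, end;
    	int i;

    	start = rdtsc();
    	for (i = 0; i < IPI_BENCH_ITERS; i++)
    		/* wait=1: return only after the target ran the callback */
    		smp_call_function_single(IPI_BENCH_CPU, ipi_bench_noop,
    					 NULL, 1);
    	end = rdtsc();

    	pr_info("ipi_bench: ~%llu cycles per call\n",
    		(end - start) / IPI_BENCH_ITERS);
    	return -EAGAIN;	/* refuse to stay loaded; the work is done */
    }

    module_init(ipi_bench_init);
    MODULE_LICENSE("GPL");

Numbers from such a loop are noisy across runs; pinning the measuring task
and comparing medians over several runs makes a delta like the quoted
8043 -> 7761 cycles more trustworthy.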


2019-10-27 19:59:53

by Michael Kelley (LINUX)

Subject: RE: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case

From: Vitaly Kuznetsov <[email protected]> Sent: Sunday, October 27, 2019 8:20 AM
>
> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> [...]
> Changes since v2:
> - Check VP number instead of CPU number against >= 64 [Michael]
> - Check for VP_INVAL

Reviewed-by: Michael Kelley <[email protected]>




2019-10-28 19:20:04

by Roman Kagan

Subject: Re: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case

On Sun, Oct 27, 2019 at 04:19:38PM +0100, Vitaly Kuznetsov wrote:
> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> [...]

Reviewed-by: Roman Kagan <[email protected]>

2019-11-07 13:28:40

by Vitaly Kuznetsov

Subject: Re: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case

Vitaly Kuznetsov <[email protected]> writes:

> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> [...]

Hi Sasha,

do you have plans to pick this up for hyperv-next or should we ask x86
folks to?

Thanks!

--
Vitaly

2019-11-07 21:24:26

by Thomas Gleixner

Subject: Re: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case

On Thu, 7 Nov 2019, Vitaly Kuznetsov wrote:

> Vitaly Kuznetsov <[email protected]> writes:
>
> > When sending an IPI to a single CPU there is no need to deal with cpumasks.
> > [...]
>
> Hi Sasha,
>
> do you have plans to pick this up for hyperv-next or should we ask x86
> folks to?

I'm picking up the constant TSC one anyway, so I can just throw that in as
well.

Thanks,

tglx

2019-11-12 10:52:11

by tip-bot2 for Vitaly Kuznetsov

Subject: [tip: x86/hyperv] x86/hyperv: Micro-optimize send_ipi_one()

The following commit has been merged into the x86/hyperv branch of tip:

Commit-ID: b264f57fde0c686c5c1dfdd0c21992c49196bb87
Gitweb: https://git.kernel.org/tip/b264f57fde0c686c5c1dfdd0c21992c49196bb87
Author: Vitaly Kuznetsov <[email protected]>
AuthorDate: Sun, 27 Oct 2019 16:19:38 +01:00
Committer: Thomas Gleixner <[email protected]>
CommitterDate: Tue, 12 Nov 2019 11:44:20 +01:00

x86/hyperv: Micro-optimize send_ipi_one()

When sending an IPI to a single CPU there is no need to deal with cpumasks.

With a 2-CPU guest on WS2019 a minor (~3%, 8043 -> 7761 CPU cycles)
improvement with the smp_call_function_single() loop benchmark can be
seen. The optimization, however, is tiny and straightforward. Also,
send_ipi_one() is important for the PV spinlock kick.

Switching to the regular APIC IPI send for the CPU > 64 case does not make
sense as it is twice as expensive (12650 CPU cycles for the
__send_ipi_mask_ex() call vs. 26000 for orig_apic.send_IPI(cpu, vector)).

Signed-off-by: Vitaly Kuznetsov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Reviewed-by: Roman Kagan <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]

---
arch/x86/hyperv/hv_apic.c | 16 +++++++++++++---
arch/x86/include/asm/trace/hyperv.h | 15 +++++++++++++++
2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
index 5c056b8..86c8674 100644
--- a/arch/x86/hyperv/hv_apic.c
+++ b/arch/x86/hyperv/hv_apic.c
@@ -194,10 +194,20 @@ do_ex_hypercall:

static bool __send_ipi_one(int cpu, int vector)
{
- struct cpumask mask = CPU_MASK_NONE;
+ int vp = hv_cpu_number_to_vp_number(cpu);

- cpumask_set_cpu(cpu, &mask);
- return __send_ipi_mask(&mask, vector);
+ trace_hyperv_send_ipi_one(cpu, vector);
+
+ if (!hv_hypercall_pg || (vp == VP_INVAL))
+ return false;
+
+ if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR))
+ return false;
+
+ if (vp >= 64)
+ return __send_ipi_mask_ex(cpumask_of(cpu), vector);
+
+ return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp));
}

static void hv_send_ipi(int cpu, int vector)
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index ace464f..4d705cb 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -71,6 +71,21 @@ TRACE_EVENT(hyperv_send_ipi_mask,
__entry->ncpus, __entry->vector)
);

+TRACE_EVENT(hyperv_send_ipi_one,
+ TP_PROTO(int cpu,
+ int vector),
+ TP_ARGS(cpu, vector),
+ TP_STRUCT__entry(
+ __field(int, cpu)
+ __field(int, vector)
+ ),
+ TP_fast_assign(__entry->cpu = cpu;
+ __entry->vector = vector;
+ ),
+ TP_printk("cpu %d vector %x",
+ __entry->cpu, __entry->vector)
+ );
+
#endif /* CONFIG_HYPERV */

#undef TRACE_INCLUDE_PATH
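As a closing note on the merged code: the vp >= 64 cutoff in
__send_ipi_one() mirrors the input layout of HVCALL_SEND_IPI. The fast
hypercall variant (hv_do_fast_hypercall16()) passes its 16 bytes of input
in registers rather than through the hypercall page, and the VP mask is a
single 64-bit field, so only VPs 0-63 are addressable that way. A sketch
of the layout (matching, to the best of my knowledge, struct hv_send_ipi
in the kernel's hyperv-tlfs.h; the comments are editorial):

    /*
     * 16 bytes of fast-hypercall input for HVCALL_SEND_IPI. In the call
     * hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp)) above,
     * input1 covers vector + reserved and input2 is cpu_mask.
     */
    struct hv_send_ipi {
    	u32 vector;	/* must lie in [HV_IPI_LOW_VECTOR, HV_IPI_HIGH_VECTOR] */
    	u32 reserved;
    	u64 cpu_mask;	/* one bit per VP, so VP >= 64 needs the _ex variant */
    };

The new tracepoint can then be checked at runtime; assuming the standard
tracefs layout it shows up under events/hyperv/hyperv_send_ipi_one/ next
to the existing hyperv_send_ipi_mask event.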