2022-04-20 16:56:16

by Zeng Guang

[permalink] [raw]
Subject: [PATCH v9 0/9] IPI virtualization support for VM

Currently, issuing an IPI (other than a self-IPI) in a guest on an
Intel CPU always causes a VM-exit. This can lead to non-negligible
overhead for workloads that involve frequent IPIs when running in VMs.

IPI virtualization is a new VT-x feature that aims to eliminate
VM-exits on the source vCPU when issuing unicast, physical-addressing
IPIs. Once it is enabled, the processor virtualizes the following
kinds of IPI-sending operations without causing VM-exits (a minimal
guest-side sketch follows the list):
- Memory-mapped ICR writes
- MSR-mapped ICR writes
- SENDUIPI execution
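
For context, the operation that IPIv accelerates on the sending side is
an ordinary ICR write. A minimal, illustrative sketch of a unicast,
physical-destination IPI in x2APIC mode (ICR is MSR 0x830; vector in
bits 7:0, destination APIC ID in bits 63:32, per the Intel SDM):

    /* Sketch: guest-side unicast IPI via an x2APIC ICR MSR write.
     * With IPIv enabled, this WRMSR is virtualized by the processor
     * and no longer causes a VM-exit on the source vCPU.
     */
    #define X2APIC_ICR_MSR 0x830

    static void send_fixed_ipi(unsigned int dest_apic_id, unsigned char vector)
    {
            /* Fixed delivery mode and physical destination mode are
             * both encoded as 0, so only vector and destination are set.
             */
            unsigned long long icr =
                    ((unsigned long long)dest_apic_id << 32) | vector;

            asm volatile("wrmsr"
                         :: "c"(X2APIC_ICR_MSR),
                            "a"((unsigned int)icr),
                            "d"((unsigned int)(icr >> 32)));
    }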

This patch series implements IPI virtualization support in KVM.

Patches 1-4 add the tertiary processor-based VM-execution control
framework, which is used to enumerate IPI virtualization.

Patch 5 handles the APIC-write VM-exit caused by writes to the ICR MSR
when the guest runs in x2APIC mode. This is a new case introduced by
Intel VT-x.
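
A simplified sketch of the shape of that handling in
kvm_apic_write_nodecode() (condensed from the series, not line-for-line):

    /* On an APIC-write VM-exit in x2APIC mode, the written value sits
     * in the virtual-APIC page. ICR is special: it is a single 64-bit
     * register in x2APIC mode, so it is read and emulated as one unit.
     */
    void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
    {
            struct kvm_lapic *apic = vcpu->arch.apic;
            u64 val;

            if (apic_x2apic_mode(apic) && offset == APIC_ICR) {
                    /* Read the full 64-bit vICR from the virtual-APIC page. */
                    if (kvm_lapic_msr_read(apic, offset, &val))
                            return;
                    kvm_x2apic_icr_write(apic, val);
            } else {
                    kvm_lapic_reg_write(apic, offset,
                                        kvm_lapic_get_reg(apic, offset));
            }
    }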

Patch 6 cleans up code in vmx_refresh_apicv_exec_ctrl() to prepare for
dynamically updating IPIv status along with APICv status changes.

Patch 7 moves kvm_arch_vcpu_precreate() under kvm->lock protection, in
preparation for allocating the IPIv PID table prior to the creation of
vCPUs.
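
The resulting vCPU-creation flow looks roughly like this (simplified
sketch of the kvm_vm_ioctl_create_vcpu() change, not the exact diff):

    mutex_lock(&kvm->lock);
    if (kvm->created_vcpus == KVM_MAX_VCPUS) {
            mutex_unlock(&kvm->lock);
            return -EINVAL;
    }
    /* Now runs under kvm->lock, so per-VM state such as the IPIv
     * PID-pointer table can be allocated exactly once, race-free.
     */
    r = kvm_arch_vcpu_precreate(kvm, id);
    if (r) {
            mutex_unlock(&kvm->lock);
            return r;
    }
    kvm->created_vcpus++;
    mutex_unlock(&kvm->lock);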

Patch 8 provides a userspace capability to set the maximum possible
vCPU ID for the current VM. IPIv can refer to this value when
allocating memory for the PID-pointer table.
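
Usage from userspace would look roughly like the following (a hedged
sketch: error handling omitted, nr_vcpus is an application-chosen
value). The cap must be enabled before any vCPU is created:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int create_vm_with_max_vcpu_id(int nr_vcpus)
    {
            int kvm_fd = open("/dev/kvm", O_RDWR);
            int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
            struct kvm_enable_cap cap = {
                    .cap = KVM_CAP_MAX_VCPU_ID,
                    .args[0] = nr_vcpus, /* upper bound on vCPU IDs for this VM */
            };

            /* Must be done before creating vCPUs; KVM sizes the IPIv
             * PID-pointer table from this value.
             */
            ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
            return vm_fd;
    }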

Patch 9 implements the IPI virtualization functionality, including
enabling the feature via tertiary processor-based VM-execution
controls in various VMCS configuration scenarios, setting up the PID
table during vCPU creation, and handling vCPU blocking.
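
For illustration, the PID-pointer table is a page-aligned array of
64-bit entries, one per possible vCPU ID. A rough sketch of how a
vCPU's entry is installed (simplified; names approximate the series):

    #define PID_TABLE_ENTRY_VALID 1ULL

    /* Point the vCPU's PID-pointer table entry at its posted-interrupt
     * descriptor; bit 0 marks the entry valid to the CPU.
     */
    static void install_pid_entry(u64 *pid_table, struct kvm_vcpu *vcpu,
                                  struct pi_desc *pi_desc)
    {
            WRITE_ONCE(pid_table[vcpu->vcpu_id],
                       __pa(pi_desc) | PID_TABLE_ENTRY_VALID);
    }

The table's physical address and the last valid index (derived from the
userspace-set maximum vCPU ID) are then programmed into the new VMCS
fields when IPIv is enabled.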

Documentation for IPI virtualization is available in the latest "Intel
Architecture Instruction Set Extensions Programming Reference".

Document Link:
https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

We ran a kvm-unit-tests experiment to measure the average time from a
source vCPU sending an IPI to the target vCPU completing the IPI
handling, with and without IPI virtualization. With IPI virtualization
enabled, cycle consumption is reduced by 22.21% in xAPIC mode and
15.98% in x2APIC mode.
--------------------------------------
KVM unit test: vmexit/ipi

2 vCPUs; the AP was modified to run in an idle loop instead of halting,
to ensure the target vCPU incurs no VM-exit impact.

Cycles of IPI:

                    xAPIC mode              x2APIC mode
test           w/o IPIv    w/ IPIv     w/o IPIv    w/ IPIv
1                  6106       4816         4265       3768
2                  6244       4656         4404       3546
3                  6165       4658         4233       3474
4                  5992       4710         4363       3430
5                  6083       4741         4215       3551
6                  6238       4904         4304       3547
7                  6164       4617         4263       3709
8                  5984       4763         4518       3779
9                  5931       4712         4645       3667
10                 5955       4530         4332       3724
11                 5897       4673         4283       3569
12                 6140       4794         4178       3598
13                 6183       4728         4363       3628
14                 5991       4994         4509       3842
15                 5866       4665         4520       3739
16                 6032       4654         4229       3701
17                 6050       4653         4185       3726
18                 6004       4792         4319       3746
19                 5961       4626         4196       3392
20                 6194       4576         4433       3760

Average cycles     6059     4713.1      4337.85     3644.8
%Reduction               -22.21%                  -15.98%

--------------------------------------
IPI microbenchmark:
(https://lore.kernel.org/kvm/[email protected])

2 vCPUs, vCPUs pinned 1:1 to pCPUs, guest VM runs with idle=poll, x2APIC mode

Result with IPIv enabled:

Dry-run: 0, 272798 ns
Self-IPI: 5094123, 11114037 ns
Normal IPI: 131697087, 173321200 ns
Broadcast IPI: 0, 155649075 ns
Broadcast lock: 0, 161518031 ns

Result with IPIv disabled:

Dry-run: 0, 272766 ns
Self-IPI: 5091788, 11123699 ns
Normal IPI: 145215772, 174558920 ns
Broadcast IPI: 0, 175785384 ns
Broadcast lock: 0, 149076195 ns


Since IPIv benefits unicast IPIs to other CPUs, the Normal IPI test
case gains about 9.73% time saving on average over 15 test runs when
IPIv is enabled.

Normal IPI statistics (unit: ns):

test             w/o IPIv       w/ IPIv
1               153346049     140907046
2               147218648     141660618
3               145215772     117890672
4               146621682     136430470
5               144821472     136199421
6               144704378     131676928
7               141403224     131697087
8               144775766     125476250
9               140658192     137263330
10              144768626     138593127
11              145166679     131946752
12              145020451     116852889
13              148161353     131406280
14              148378655     130174353
15              148903652     127969674

Average time  145944306.6   131742993.1
%Reduction                       -9.73%

--------------------------------------
hackbench:

8 vCPUs, guest VM free run, x2APIC mode
./hackbench -p -l 100000

              w/o IPIv     w/ IPIv
Time            91.887      74.605
%Reduction                -18.808%

96 vCPUs, guest VM free run, x2APIC mode
./hackbench -p -l 1000000

              w/o IPIv     w/ IPIv
Time           287.504     235.185
%Reduction                -18.198%

--------------------------------------
v8->v9:
1. Drop the patch forbidding changes to the APIC ID.
2. Allow max_vcpu_ids to be set only once.
3. Refactor the vCPU pre-creation code.

v7->v8:
1. Add a tracepoint in kvm_apic_write_nodecode() to track
   vICR writes in APIC-write VM-exit handling.
2. Move IPIv PID-table allocation to vCPU pre-creation
   (kvm_arch_vcpu_precreate()), protected by kvm->lock.
3. Misc code refinements.

v6->v7:
1. Revise kvm_apic_write_nodecode() handling of the vICR
   busy bit in x2APIC mode.
2. Merge PID-table memory allocation with max_vcpu_id
   into the IPIv enabling patch.
3. Allocate the PID table and set up each vCPU's PID-table
   entry and the IPIv-related VMCS fields once IPIv can be
   enabled, which supports enabling IPIv at runtime.

v5->v6:
1. Adapt the kvm_apic_write_nodecode() implementation to
   Sean's fix for x2APIC ICR register handling.
2. Drop the patch handling IPIv table-entry updates on
   APIC ID changes; instead apply Levitsky's patch to
   disallow setting the APIC ID in any case.
3. Drop the patch resizing the PID-pointer table on demand.
   Instead, allow userspace to set the maximum vCPU ID at
   runtime so that IPIv can use the practical value when
   allocating memory for the PID-pointer table.

v4 -> v5:
1. Handle the enable_ipiv parameter following the current
   VMCS configuration rules.
2. Allocate memory for the PID-pointer table dynamically.
3. Support guest runtime modification of the APIC ID in
   xAPIC mode.
4. Add a helper to judge whether posted-interrupt blocking
   can be used in the IPIv case.

v3 -> v4:
1. Refine the code style of patch 2.
2. Move the tertiary-control shadow build into patch 3.
3. Make vmx_tertiary_exec_control() a static function.

v2 -> v3:
1. Misc changes to the tertiary execution control
   definition and capability setup.
2. Alternative way to get the tertiary execution
   control configuration.

v1 -> v2:
1. Refine the IPIv enabling logic for the VM.
   Remove the per-vCPU ipiv_active definition.

--------------------------------------

Chao Gao (1):
KVM: VMX: enable IPI virtualization

Robert Hoo (4):
x86/cpu: Add new VMX feature, Tertiary VM-Execution control
KVM: VMX: Extend BUILD_CONTROLS_SHADOW macro to support 64-bit
variation
KVM: VMX: Detect Tertiary VM-Execution control when setup VMCS config
KVM: VMX: Report tertiary_exec_control field in dump_vmcs()

Zeng Guang (4):
KVM: x86: Add support for vICR APIC-write VM-Exits in x2APIC mode
KVM: VMX: Clean up vmx_refresh_apicv_exec_ctrl()
KVM: Move kvm_arch_vcpu_precreate() under kvm->lock
KVM: x86: Allow userspace set maximum VCPU id for VM

Documentation/virt/kvm/api.rst | 18 ++++
arch/s390/kvm/kvm-s390.c | 2 -
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 7 ++
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/include/asm/vmx.h | 11 +++
arch/x86/include/asm/vmxfeatures.h | 5 +-
arch/x86/kernel/cpu/feat_ctl.c | 9 +-
arch/x86/kvm/lapic.c | 24 ++++-
arch/x86/kvm/vmx/capabilities.h | 13 +++
arch/x86/kvm/vmx/evmcs.c | 2 +
arch/x86/kvm/vmx/evmcs.h | 1 +
arch/x86/kvm/vmx/posted_intr.c | 15 +++-
arch/x86/kvm/vmx/posted_intr.h | 2 +
arch/x86/kvm/vmx/vmcs.h | 1 +
arch/x86/kvm/vmx/vmx.c | 137 +++++++++++++++++++++++++----
arch/x86/kvm/vmx/vmx.h | 64 ++++++++------
arch/x86/kvm/x86.c | 29 +++++-
virt/kvm/kvm_main.c | 10 ++-
19 files changed, 294 insertions(+), 58 deletions(-)

--
2.27.0


2022-04-27 11:06:42

by Zeng Guang

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

Kindly PING!

Thanks for your time.
BR,
Zeng Guang


2022-05-03 00:12:37

by Paolo Bonzini

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

On 4/19/22 17:31, Zeng Guang wrote:
> Currently, issuing an IPI (other than a self-IPI) in a guest on an
> Intel CPU always causes a VM-exit. This can lead to non-negligible
> overhead for workloads that involve frequent IPIs when running in VMs.
>
> IPI virtualization is a new VT-x feature that aims to eliminate
> VM-exits on the source vCPU when issuing unicast, physical-addressing
> IPIs. Once it is enabled, the processor virtualizes the following
> kinds of IPI-sending operations without causing VM-exits:
> - Memory-mapped ICR writes
> - MSR-mapped ICR writes
> - SENDUIPI execution
>
> This patch series implements IPI virtualization support in KVM.
>
> Patches 1-4 add the tertiary processor-based VM-execution control
> framework, which is used to enumerate IPI virtualization.
>
> Patch 5 handles the APIC-write VM-exit caused by writes to the ICR MSR
> when the guest runs in x2APIC mode. This is a new case introduced by
> Intel VT-x.
>
> Patch 6 cleans up code in vmx_refresh_apicv_exec_ctrl() to prepare for
> dynamically updating IPIv status along with APICv status changes.
>
> Patch 7 moves kvm_arch_vcpu_precreate() under kvm->lock protection, in
> preparation for allocating the IPIv PID table prior to the creation of
> vCPUs.
>
> Patch 8 provides a userspace capability to set the maximum possible
> vCPU ID for the current VM. IPIv can refer to this value when
> allocating memory for the PID-pointer table.
>
> Patch 9 implements the IPI virtualization functionality, including
> enabling the feature via tertiary processor-based VM-execution
> controls in various VMCS configuration scenarios, setting up the PID
> table during vCPU creation, and handling vCPU blocking.

I queued it, but I am not going to send it to Linus until I get
selftests for KVM_CAP_MAX_VCPU_ID. Selftests are generally _not_
optional for new userspace APIs.

Please send a patch on top of kvm/queue.

Paolo

2022-05-03 10:47:52

by Paolo Bonzini

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

On 5/3/22 09:32, Zeng Guang wrote:
>
> I don't see "[PATCH v9 4/9] KVM: VMX: Report tertiary_exec_control field in
> dump_vmcs()" in kvm/queue. Is it not needed?

Added now (somehow the patches were not threaded, so I had to catch them
one by one from lore).

> The selftest for KVM_CAP_MAX_VCPU_ID is posted in v2, revised on top of
> kvm/queue.
> ([PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test - Zeng
> Guang (kernel.org)
> <https://lore.kernel.org/lkml/[email protected]/>)

Queued, thanks.

Paolo

2022-05-17 04:43:19

by Sean Christopherson

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

On Tue, May 03, 2022, Paolo Bonzini wrote:
> On 5/3/22 09:32, Zeng Guang wrote:
> >
> > I don't see "[PATCH v9 4/9] KVM: VMX: Report tertiary_exec_control field in
> > dump_vmcs()" in kvm/queue. Is it not needed?
>
> Added now (somehow the patches were not threaded, so I had to catch them one
> by one from lore).
>
> > The selftest for KVM_CAP_MAX_VCPU_ID is posted in v2, revised on top
> > of kvm/queue.
> > ([PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test - Zeng
> > Guang (kernel.org) <https://lore.kernel.org/lkml/[email protected]/>)
>
> Queued, thanks.

Shouldn't we have a solution for the read-only APIC_ID mess before this is merged?

2022-05-17 19:50:40

by Chao Gao

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

On Mon, May 16, 2022 at 08:49:52PM +0000, Sean Christopherson wrote:
>On Tue, May 03, 2022, Paolo Bonzini wrote:
>> On 5/3/22 09:32, Zeng Guang wrote:
>> >
>> > I don't see "[PATCH v9 4/9] KVM: VMX: Report tertiary_exec_control field in
>> > dump_vmcs()" in kvm/queue. Is it not needed?
>>
>> Added now (somehow the patches were not threaded, so I had to catch them one
>> by one from lore).
>>
>> > The selftest for KVM_CAP_MAX_VCPU_ID is posted in v2, revised on top
>> > of kvm/queue.
>> > ([PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test - Zeng
>> > Guang (kernel.org) <https://lore.kernel.org/lkml/[email protected]/>)
>>
>> Queued, thanks.
>
>Shouldn't we have a solution for the read-only APIC_ID mess before this is merged?

We can add a new inhibit to disable APICv if the guest attempts to change
the APIC ID while IPIv (or AVIC) is enabled. Maxim also thinks using a new
inhibit is the right direction [1].

If there is no objection to this approach and Maxim doesn't already have a
patch, we can post one. But we will rely on Maxim to fix the APIC ID mess
for nested AVIC.

[1] https://lore.kernel.org/all/[email protected]/
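
A rough sketch of the idea (names assumed; not a final patch): on a guest
write that makes the xAPIC ID diverge from the vCPU ID, set a permanent
APICv inhibit for the VM:

    static void kvm_lapic_xapic_id_updated(struct kvm_lapic *apic)
    {
            /* Assumed inhibit reason; permanently disables APICv/IPIv for
             * this VM instead of trying to track a guest-modified APIC ID.
             */
            if (kvm_xapic_id(apic) != apic->vcpu->vcpu_id)
                    kvm_set_apicv_inhibit(apic->vcpu->kvm,
                                          APICV_INHIBIT_REASON_APIC_ID_MODIFIED);
    }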

2022-05-18 04:52:51

by Chao Gao

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

+ Maxim

On Tue, May 17, 2022 at 09:53:26PM +0800, Chao Gao wrote:
>On Mon, May 16, 2022 at 08:49:52PM +0000, Sean Christopherson wrote:
>>On Tue, May 03, 2022, Paolo Bonzini wrote:
>>> On 5/3/22 09:32, Zeng Guang wrote:
>>> >
>>> > I don't see "[PATCH v9 4/9] KVM: VMX: Report tertiary_exec_control field in
>>> > dump_vmcs()" in kvm/queue. Is it not needed?
>>>
>>> Added now (somehow the patches were not threaded, so I had to catch them one
>>> by one from lore).
>>>
>>> > The selftest for KVM_CAP_MAX_VCPU_ID is posted in v2, revised on top
>>> > of kvm/queue.
>>> > ([PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test - Zeng
>>> > Guang (kernel.org) <https://lore.kernel.org/lkml/[email protected]/>)
>>>
>>> Queued, thanks.
>>
>>Shouldn't we have a solution for the read-only APIC_ID mess before this is merged?
>
>We can add a new inhibit to disable APICv if the guest attempts to change
>the APIC ID while IPIv (or AVIC) is enabled. Maxim also thinks using a new
>inhibit is the right direction [1].
>
>If there is no objection to this approach and Maxim doesn't already have a
>patch, we can post one. But we will rely on Maxim to fix the APIC ID mess
>for nested AVIC.
>
>[1] https://lore.kernel.org/all/[email protected]/

2022-05-19 13:50:08

by Chao Gao

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

On Tue, May 17, 2022 at 10:02:23PM +0800, Chao Gao wrote:
>+ Maxim
>
>On Tue, May 17, 2022 at 09:53:26PM +0800, Chao Gao wrote:
>>On Mon, May 16, 2022 at 08:49:52PM +0000, Sean Christopherson wrote:
>>>On Tue, May 03, 2022, Paolo Bonzini wrote:
>>>> On 5/3/22 09:32, Zeng Guang wrote:
>>>> >
>>>> > I don't see "[PATCH v9 4/9] KVM: VMX: Report tertiary_exec_control field in
>>>> > dump_vmcs()" in kvm/queue. Is it not needed?
>>>>
>>>> Added now (somehow the patches were not threaded, so I had to catch them one
>>>> by one from lore).
>>>>
>>>> > The selftest for KVM_CAP_MAX_VCPU_ID is posted in v2, revised on top
>>>> > of kvm/queue.
>>>> > ([PATCH v2] kvm: selftests: Add KVM_CAP_MAX_VCPU_ID cap test - Zeng
>>>> > Guang (kernel.org) <https://lore.kernel.org/lkml/[email protected]/>)
>>>>
>>>> Queued, thanks.
>>>
>>>Shouldn't we have a solution for the read-only APIC_ID mess before this is merged?

Paolo & Sean,

If a solution for the read-only APIC ID mess is needed before merging the
IPIv series, do you think Maxim's patch [1], after some improvement, will
suffice? Let us know if there is any gap.

[1]: https://lore.kernel.org/all/[email protected]/

2022-05-21 03:53:03

by Sean Christopherson

[permalink] [raw]
Subject: Re: [PATCH v9 0/9] IPI virtualization support for VM

On Thu, May 19, 2022, Chao Gao wrote:
> On Tue, May 17, 2022 at 10:02:23PM +0800, Chao Gao wrote:
> >+ Maxim
> >
> >On Tue, May 17, 2022 at 09:53:26PM +0800, Chao Gao wrote:
> >>On Mon, May 16, 2022 at 08:49:52PM +0000, Sean Christopherson wrote:
> >>>Shouldn't we have a solution for the read-only APIC_ID mess before this is merged?
>
> Paolo & Sean,
>
> If a solution for the read-only APIC ID mess is needed before merging the
> IPIv series, do you think Maxim's patch [1], after some improvement, will
> suffice? Let us know if there is any gap.

Yep, inhibiting APICv if APIC ID is changed should do the trick, and it's nice and
simple. I can't think of any gaps.

> [1]: https://lore.kernel.org/all/[email protected]/