2021-11-10 11:18:47

by Stefano Garzarella

Subject: Re: [RFC] hypercall-vsock: add a new vsock transport

On Wed, Nov 10, 2021 at 07:12:36AM +0000, Wang, Wei W wrote:
>Hi,
>
>We plan to add a new vsock transport based on hypercall (e.g. vmcall on Intel CPUs).
>It transports AF_VSOCK packets between the guest and host, which is similar to
>virtio-vsock, vmci-vsock and hyperv-vsock.
>
>Compared to the above-listed vsock transports, which are designed for high
>performance, the main advantages of hypercall-vsock are:
>
>1) It is VMM agnostic. For example, a guest using hypercall-vsock can run on
>either KVM, Hyper-V, or VMware.
>
>2) It is simpler. It doesn't rely on any complex bus enumeration
>(e.g. a virtio-pci based vsock device may need a full PCI implementation).
>
>An example usage is the communication between MigTD and the host (Page 8 at
>https://static.sched.com/hosted_files/kvmforum2021/ef/TDX%20Live%20Migration_Wei%20Wang.pdf).
>MigTD communicates with the host to assist the migration of the target (user)
>TD. MigTD is part of the TCB, so its implementation is expected to be as
>simple as possible (e.g. a bare-metal implementation without an OS and no PCI
>driver support).

Adding Andra and Sergio, because IIRC Firecracker and libkrun emulate
virtio-vsock with virtio-mmio, so the implementation should be simple and
also not directly tied to a specific VMM.

Maybe this fits your use case too; that way we don't have to maintain
another driver.

Thanks,
Stefano
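
[A purely illustrative sketch of what the guest-side send path of such a
transport might look like. The RFC does not define a hypercall ABI, so the
hypercall number, argument layout and packet format below are assumptions;
the register convention (nr in RAX, args in RBX/RCX) is borrowed from the
KVM hypercall ABI only as an example of a vmcall-based call.]

/*
 * Illustrative only -- not part of the RFC. HC_VSOCK_SEND and the packet
 * layout are hypothetical placeholders; the register convention mirrors
 * the KVM hypercall ABI purely as an example.
 */
#include <stdint.h>

#define HC_VSOCK_SEND	0x1000	/* hypothetical hypercall number */

/* Pass the guest-physical address and length of an AF_VSOCK packet. */
static inline long hc_vsock_send(uint64_t pkt_gpa, uint64_t len)
{
	long ret;

	asm volatile("vmcall"
		     : "=a" (ret)
		     : "a" (HC_VSOCK_SEND), "b" (pkt_gpa), "c" (len)
		     : "memory");
	return ret;
}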


2021-11-10 21:46:31

by Paraschiv, Andra-Irina

Subject: Re: [RFC] hypercall-vsock: add a new vsock transport



On 10/11/2021 13:17, Stefano Garzarella wrote:
>
> On Wed, Nov 10, 2021 at 07:12:36AM +0000, Wang, Wei W wrote:
>> Hi,
>>
>> We plan to add a new vsock transport based on hypercall (e.g. vmcall
>> on Intel CPUs).
>> It transports AF_VSOCK packets between the guest and host, which is
>> similar to
>> virtio-vsock, vmci-vsock and hyperv-vsock.
>>
>> Compared to the above-listed vsock transports, which are designed for
>> high performance, the main advantages of hypercall-vsock are:
>>
>> 1) It is VMM agnostic. For example, a guest using hypercall-vsock can
>> run on either KVM, Hyper-V, or VMware.
>>
>> 2) It is simpler. It doesn't rely on any complex bus enumeration
>> (e.g. a virtio-pci based vsock device may need a full PCI
>> implementation).
>>
>> An example usage is the communication between MigTD and the host (Page 8 at
>> https://static.sched.com/hosted_files/kvmforum2021/ef/TDX%20Live%20Migration_Wei%20Wang.pdf).
>>
>> MigTD communicates with the host to assist the migration of the target
>> (user) TD. MigTD is part of the TCB, so its implementation is expected
>> to be as simple as possible (e.g. a bare-metal implementation without an
>> OS and no PCI driver support).

Thanks for the CC. Mixing both threads here.

From Stefan:

"
AF_VSOCK is designed to allow multiple transports, so why not. There is
a cost to developing and maintaining a vsock transport though.

I think Amazon Nitro enclaves use virtio-vsock and I've CCed Andra in
case she has thoughts on the pros/cons and how to minimize the trusted
computing base.

If simplicity is the top priority then VIRTIO's MMIO transport without
indirect descriptors and using the packed virtqueue layout reduces the
size of the implementation:
https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-1440002

Stefan
"


On the Nitro Enclaves project side, virtio-mmio is used for the vsock
device setup for the enclave. That has worked fine; it has helped to
have an already available implementation (e.g. virtio-mmio / virtio-pci)
for adoption and ease of use in different types of setups (e.g. distros,
kernel versions).

From Stefano:

>
> Adding Andra and Sergio, because IIRC Firecracker and libkrun emulate
> virtio-vsock with virtio-mmio, so the implementation should be simple and
> also not directly tied to a specific VMM.
>
> Maybe this fits your use case too; that way we don't have to maintain
> another driver.
>
> Thanks,
> Stefano
>

Indeed, on the Firecracker side, the vsock device is set up using
virtio-mmio [1][2][3]. One specific thing is that on the host, instead
of using vhost, AF_UNIX sockets are used [4].

Thanks,
Andra

[1]
https://github.com/firecracker-microvm/firecracker/blob/main/src/devices/src/virtio/vsock/mod.rs#L30
[2]
https://github.com/firecracker-microvm/firecracker/blob/main/src/vmm/src/builder.rs#L936
[3]
https://github.com/firecracker-microvm/firecracker/blob/main/src/vmm/src/builder.rs#L859
[4]
https://github.com/firecracker-microvm/firecracker/blob/main/docs/vsock.md
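
[A rough sketch of what the host side of [4] involves: a host process
connects to the AF_UNIX socket backing the vsock device and performs a
small text handshake to reach a guest port. The socket path and the port
number below are placeholders; see the linked document for the
authoritative description.]

/* Host-initiated connection with Firecracker's hybrid vsock scheme (see
 * [4]): connect to the AF_UNIX socket backing the vsock device and send
 * "CONNECT <port>\n"; on success Firecracker replies "OK <port>\n" and
 * the stream is then forwarded to the guest. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char buf[64];
	ssize_t n;
	int fd;

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	strncpy(addr.sun_path, "/tmp/fc-vsock.sock", sizeof(addr.sun_path) - 1);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	/* Ask Firecracker to forward this stream to guest vsock port 52. */
	dprintf(fd, "CONNECT 52\n");
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("handshake reply: %s", buf);	/* expected: "OK <port>\n" */
	}

	/* From here on, fd behaves like an ordinary stream to the guest. */
	close(fd);
	return 0;
}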




2021-11-11 08:14:24

by Wei Wang

Subject: RE: [RFC] hypercall-vsock: add a new vsock transport

On Wednesday, November 10, 2021 7:17 PM, Stefano Garzarella wrote:


> Adding Andra and Sergio, because IIRC Firecracker and libkrun emulate
> virtio-vsock with virtio-mmio, so the implementation should be simple and also
> not directly tied to a specific VMM.
>

OK. This would work for KVM-based guests.
Hyper-V and VMware based guests, however, don't have virtio-mmio support.
If the MigTD (a special guest) we provide is based on virtio-mmio, it would not be usable on them.

Thanks,
Wei


2021-11-11 08:24:13

by Paolo Bonzini

Subject: Re: [RFC] hypercall-vsock: add a new vsock transport

On 11/11/21 09:14, Wang, Wei W wrote:
>> Adding Andra and Sergio, because IIRC Firecracker and libkrun
>> emulate virtio-vsock with virtio-mmio, so the implementation
>> should be simple and also not directly tied to a specific VMM.
>>
> OK. This would work for KVM-based guests. Hyper-V and VMware based
> guests, however, don't have virtio-mmio support. If the MigTD (a
> special guest) we provide is based on virtio-mmio, it would not be
> usable on them.

Hyper-V and VMware (and KVM) would have to add support for
hypercall-vsock anyway. Why can't they just implement a subset of
virtio-mmio? It's not hard, and there's even plenty of
permissively-licensed code in the various VMMs for the *BSDs.

In fact, instead of defining your own transport for vsock, my first idea
would have been the opposite: reuse virtio-mmio for the registers and
the virtqueue format, and define your own virtio device for the MigTD!

Thanks,

Paolo
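
[To give a sense of how small the register subset Paolo refers to is, here
is a sketch of the first step a bare-metal guest driver would take over
virtio-mmio: check the magic value, version and device ID at fixed offsets
from the device's MMIO base. The base address below is a placeholder; how
it is discovered (device tree, ACPI, or a fixed address agreed with the
VMM) is left open here.]

/* Minimal virtio-mmio presence check, as a rough measure of the register
 * subset involved. Offsets are from the virtio 1.1 MMIO transport;
 * VIRTIO_MMIO_BASE is a placeholder -- a real guest would get it from the
 * device tree, ACPI, or (for a bare-metal MigTD-style guest) a fixed
 * address agreed with the VMM. */
#include <stdint.h>
#include <stdbool.h>

#define VIRTIO_MMIO_BASE	0xd0000000UL	/* placeholder base address */

#define VIRTIO_MMIO_MAGIC	0x000	/* must read 0x74726976 ("virt") */
#define VIRTIO_MMIO_VERSION	0x004	/* 2 for modern (non-legacy) devices */
#define VIRTIO_MMIO_DEVICE_ID	0x008	/* 19 = vsock */

static inline uint32_t mmio_read32(unsigned long off)
{
	return *(volatile uint32_t *)(VIRTIO_MMIO_BASE + off);
}

static bool virtio_mmio_vsock_present(void)
{
	return mmio_read32(VIRTIO_MMIO_MAGIC) == 0x74726976 &&
	       mmio_read32(VIRTIO_MMIO_VERSION) == 2 &&
	       mmio_read32(VIRTIO_MMIO_DEVICE_ID) == 19;
}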