Gunyah is an open-source Type-1 hypervisor developed by Qualcomm. It
does not depend on any lower-privileged OS/kernel code for its core
functionality. This increases its security and can support a smaller
trusted computing based when compared to Type-2 hypervisors.
Add documentation describing the Gunyah hypervisor and the main
components of the Gunyah hypervisor which are of interest to Linux
virtualization development.
Reviewed-by: Bagas Sanjaya <[email protected]>
Signed-off-by: Elliot Berman <[email protected]>
---
Documentation/virt/gunyah/index.rst | 113 ++++++++++++++++++++
Documentation/virt/gunyah/message-queue.rst | 63 +++++++++++
Documentation/virt/index.rst | 1 +
3 files changed, 177 insertions(+)
create mode 100644 Documentation/virt/gunyah/index.rst
create mode 100644 Documentation/virt/gunyah/message-queue.rst
diff --git a/Documentation/virt/gunyah/index.rst b/Documentation/virt/gunyah/index.rst
new file mode 100644
index 0000000000000..74aa345e0a144
--- /dev/null
+++ b/Documentation/virt/gunyah/index.rst
@@ -0,0 +1,113 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Gunyah Hypervisor
+=================
+
+.. toctree::
+ :maxdepth: 1
+
+ message-queue
+
+Gunyah is a Type-1 hypervisor which is independent of any OS kernel, and runs in
+a higher CPU privilege level. It does not depend on any lower-privileged operating system
+for its core functionality. This increases its security and can support a much smaller
+trusted computing base than a Type-2 hypervisor.
+
+Gunyah is an open source hypervisor. The source repo is available at
+https://github.com/quic/gunyah-hypervisor.
+
+Gunyah provides these following features.
+
+- Scheduling:
+
+ A scheduler for virtual CPUs (vCPUs) on physical CPUs enables time-sharing
+ of the CPUs. Gunyah supports two models of scheduling:
+
+ 1. "Behind the back" scheduling in which Gunyah hypervisor schedules vCPUS on its own.
+ 2. "Proxy" scheduling in which a delegated VM can donate part of one of its vCPU slice
+ to another VM's vCPU via a hypercall.
+
+- Memory Management:
+
+ APIs handling memory, abstracted as objects, limiting direct use of physical
+ addresses. Memory ownership and usage tracking of all memory under its control.
+ Memory partitioning between VMs is a fundamental security feature.
+
+- Interrupt Virtualization:
+
+ Uses CPU hardware interrupt virtualization capabilities. Interrupts are handled
+ in the hypervisor and routed to the assigned VM.
+
+- Inter-VM Communication:
+
+ There are several different mechanisms provided for communicating between VMs.
+
+- Virtual platform:
+
+ Architectural devices such as interrupt controllers and CPU timers are directly provided
+ by the hypervisor as well as core virtual platform devices and system APIs such as ARM PSCI.
+
+- Device Virtualization:
+
+ Para-virtualization of devices is supported using inter-VM communication.
+
+Architectures supported
+=======================
+AArch64 with a GIC
+
+Resources and Capabilities
+==========================
+
+Some services or resources provided by the Gunyah hypervisor are described to a virtual machine by
+capability IDs. For instance, inter-VM communication is performed with doorbells and message queues.
+Gunyah allows access to manipulate that doorbell via the capability ID. These resources are
+described in Linux as a struct gh_resource.
+
+High level management of these resources is performed by the resource manager VM. RM informs a
+guest VM about resources it can access through either the device tree or via guest-initiated RPC.
+
+For each virtual machine, Gunyah maintains a table of resources which can be accessed by that VM.
+An entry in this table is called a "capability" and VMs can only access resources via this
+capability table. Hence, virtual Gunyah resources are referenced by a "capability IDs" and not
+"resource IDs". If 2 VMs have access to the same resource, they might not be using the same
+capability ID to access that resource since the capability tables are independent per VM.
+
+Resource Manager
+================
+
+The resource manager (RM) is a privileged application VM supporting the Gunyah Hypervisor.
+It provides policy enforcement aspects of the virtualization system. The resource manager can
+be treated as an extension of the Hypervisor but is separated to its own partition to ensure
+that the hypervisor layer itself remains small and secure and to maintain a separation of policy
+and mechanism in the platform. RM runs at arm64 NS-EL1 similar to other virtual machines.
+
+Communication with the resource manager from each guest VM happens with message-queue.rst. Details
+about the specific messages can be found in drivers/virt/gunyah/rsc_mgr.c
+
+::
+
+ +-------+ +--------+ +--------+
+ | RM | | VM_A | | VM_B |
+ +-.-.-.-+ +---.----+ +---.----+
+ | | | |
+ +-.-.-----------.------------.----+
+ | | \==========/ | |
+ | \========================/ |
+ | Gunyah |
+ +---------------------------------+
+
+The source for the resource manager is available at https://github.com/quic/gunyah-resource-manager.
+
+The resource manager provides the following features:
+
+- VM lifecycle management: allocating a VM, starting VMs, destruction of VMs
+- VM access control policy, including memory sharing and lending
+- Interrupt routing configuration
+- Forwarding of system-level events (e.g. VM shutdown) to owner VM
+
+When booting a virtual machine which uses a devicetree such as Linux, resource manager overlays a
+/hypervisor node. This node can let Linux know it is running as a Gunyah guest VM,
+how to communicate with resource manager, and basic description and capabilities of
+this VM. See Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml for a description
+of this node.
diff --git a/Documentation/virt/gunyah/message-queue.rst b/Documentation/virt/gunyah/message-queue.rst
new file mode 100644
index 0000000000000..b352918ae54b4
--- /dev/null
+++ b/Documentation/virt/gunyah/message-queue.rst
@@ -0,0 +1,63 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Message Queues
+==============
+Message queue is a simple low-capacity IPC channel between two VMs. It is
+intended for sending small control and configuration messages. Each message
+queue is unidirectional, so a full-duplex IPC channel requires a pair of queues.
+
+Messages can be up to 240 bytes in length. Longer messages require a further
+protocol on top of the message queue messages themselves. For instance, communication
+with the resource manager adds a header field for sending longer messages via multiple
+message fragments.
+
+The diagram below shows how message queue works. A typical configuration involves
+2 message queues. Message queue 1 allows VM_A to send messages to VM_B. Message
+queue 2 allows VM_B to send messages to VM_A.
+
+1. VM_A sends a message of up to 240 bytes in length. It raises a hypercall
+ with the message to inform the hypervisor to add the message to
+ message queue 1's queue. The hypervisor copies memory into the internal
+ message queue representation; the memory doesn't need to be shared between
+ VM_A and VM_B.
+
+2. Gunyah raises the corresponding interrupt for VM_B (Rx vIRQ) when any of
+ these happens:
+
+ a. gh_msgq_send() has PUSH flag. Queue is immediately flushed. This is the typical case.
+ b. Explicility with gh_msgq_push command from VM_A.
+ c. Message queue has reached a threshold depth.
+
+3. VM_B calls gh_msgq_recv() and Gunyah copies message to requested buffer.
+
+4. Gunyah buffers messages in the queue. If the queue became full when VM_A added a message,
+ the return values for gh_msgq_send() include a flag that indicates the queue is full.
+ Once VM_B receives the message and, thus, there is space in the queue, Gunyah
+ will raise the Tx vIRQ on VM_A to indicate it can continue sending messages.
+
+For VM_B to send a message to VM_A, the process is identical, except that hypercalls
+reference message queue 2's capability ID. Each message queue has its own independent
+vIRQ: two TX message queues will have two vIRQs (and two capability IDs).
+
+::
+
+ +---------------+ +-----------------+ +---------------+
+ | VM_A | |Gunyah hypervisor| | VM_B |
+ | | | | | |
+ | | | | | |
+ | | Tx | | | |
+ | |-------->| | Rx vIRQ | |
+ |gh_msgq_send() | Tx vIRQ |Message queue 1 |-------->|gh_msgq_recv() |
+ | |<------- | | | |
+ | | | | | |
+ | Message Queue | | | | Message Queue |
+ | driver | | | | driver |
+ | | | | | |
+ | | | | | |
+ | | | | Tx | |
+ | | Rx vIRQ | |<--------| |
+ |gh_msgq_recv() |<--------|Message queue 2 | Tx vIRQ |gh_msgq_send() |
+ | | | |-------->| |
+ | | | | | |
+ | | | | | |
+ +---------------+ +-----------------+ +---------------+
diff --git a/Documentation/virt/index.rst b/Documentation/virt/index.rst
index 7fb55ae08598d..15869ee059b35 100644
--- a/Documentation/virt/index.rst
+++ b/Documentation/virt/index.rst
@@ -16,6 +16,7 @@ Virtualization Support
coco/sev-guest
coco/tdx-guest
hyperv/index
+ gunyah/index
.. only:: html and subproject
--
2.40.0
On 6/13/23 12:20 PM, Elliot Berman wrote:
> Gunyah is an open-source Type-1 hypervisor developed by Qualcomm. It
> does not depend on any lower-privileged OS/kernel code for its core
> functionality. This increases its security and can support a smaller
> trusted computing based when compared to Type-2 hypervisors.
s/based/base/
>
> Add documentation describing the Gunyah hypervisor and the main
> components of the Gunyah hypervisor which are of interest to Linux
> virtualization development.
>
> Reviewed-by: Bagas Sanjaya <[email protected]>
> Signed-off-by: Elliot Berman <[email protected]>
I have some questions and comments. But I trust that you
can answer them and update your patch in a reasonable way
to address what I say. So... please consider these things,
and update as you see fit.
Reviewed-by: Alex Elder <[email protected]>
> ---
> Documentation/virt/gunyah/index.rst | 113 ++++++++++++++++++++
> Documentation/virt/gunyah/message-queue.rst | 63 +++++++++++
> Documentation/virt/index.rst | 1 +
> 3 files changed, 177 insertions(+)
> create mode 100644 Documentation/virt/gunyah/index.rst
> create mode 100644 Documentation/virt/gunyah/message-queue.rst
>
> diff --git a/Documentation/virt/gunyah/index.rst b/Documentation/virt/gunyah/index.rst
> new file mode 100644
> index 0000000000000..74aa345e0a144
> --- /dev/null
> +++ b/Documentation/virt/gunyah/index.rst
> @@ -0,0 +1,113 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=================
> +Gunyah Hypervisor
> +=================
> +
> +.. toctree::
> + :maxdepth: 1
> +
> + message-queue
> +
> +Gunyah is a Type-1 hypervisor which is independent of any OS kernel, and runs in
> +a higher CPU privilege level. It does not depend on any lower-privileged operating system
> +for its core functionality. This increases its security and can support a much smaller
> +trusted computing base than a Type-2 hypervisor.
> +
> +Gunyah is an open source hypervisor. The source repo is available at
> +https://github.com/quic/gunyah-hypervisor.
> +
> +Gunyah provides these following features.
> +
> +- Scheduling:
> +
> + A scheduler for virtual CPUs (vCPUs) on physical CPUs enables time-sharing
> + of the CPUs. Gunyah supports two models of scheduling:
> +
> + 1. "Behind the back" scheduling in which Gunyah hypervisor schedules vCPUS on its own.
s/VCPUS/VCPUs/
> + 2. "Proxy" scheduling in which a delegated VM can donate part of one of its vCPU slice
> + to another VM's vCPU via a hypercall.
This might sound dumb, but can there be more vCPUs than there
are physical CPUs? Is a vCPU *tied* to a particular physical
CPU, or does it just indicate that a VM has one abstracted CPU
available to use--and any available physical CPU core can
implement it (possibly changing between time slices)?
> +
> +- Memory Management:
> +
> + APIs handling memory, abstracted as objects, limiting direct use of physical
> + addresses. Memory ownership and usage tracking of all memory under its control.
> + Memory partitioning between VMs is a fundamental security feature.
> +
> +- Interrupt Virtualization:
> +
> + Uses CPU hardware interrupt virtualization capabilities. Interrupts are handled
> + in the hypervisor and routed to the assigned VM.
> +
> +- Inter-VM Communication:
> +
> + There are several different mechanisms provided for communicating between VMs.
> +
> +- Virtual platform:
> +
> + Architectural devices such as interrupt controllers and CPU timers are directly provided
> + by the hypervisor as well as core virtual platform devices and system APIs such as ARM PSCI.
> +
> +- Device Virtualization:
> +
> + Para-virtualization of devices is supported using inter-VM communication.
> +
> +Architectures supported
> +=======================
> +AArch64 with a GIC
> +
> +Resources and Capabilities
> +==========================
> +
> +Some services or resources provided by the Gunyah hypervisor are described to a virtual machine by
> +capability IDs. For instance, inter-VM communication is performed with doorbells and message queues.
> +Gunyah allows access to manipulate that doorbell via the capability ID. These resources are
> +described in Linux as a struct gh_resource.
> +
> +High level management of these resources is performed by the resource manager VM. RM informs a
> +guest VM about resources it can access through either the device tree or via guest-initiated RPC.
> +
> +For each virtual machine, Gunyah maintains a table of resources which can be accessed by that VM.
> +An entry in this table is called a "capability" and VMs can only access resources via this
> +capability table. Hence, virtual Gunyah resources are referenced by a "capability IDs" and not
> +"resource IDs". If 2 VMs have access to the same resource, they might not be using the same
> +capability ID to access that resource since the capability tables are independent per VM.
> +
> +Resource Manager
> +================
> +
> +The resource manager (RM) is a privileged application VM supporting the Gunyah Hypervisor.
> +It provides policy enforcement aspects of the virtualization system. The resource manager can
> +be treated as an extension of the Hypervisor but is separated to its own partition to ensure
> +that the hypervisor layer itself remains small and secure and to maintain a separation of policy
> +and mechanism in the platform. RM runs at arm64 NS-EL1 similar to other virtual machines.
> +
> +Communication with the resource manager from each guest VM happens with message-queue.rst. Details
> +about the specific messages can be found in drivers/virt/gunyah/rsc_mgr.c
> +
> +::
> +
> + +-------+ +--------+ +--------+
> + | RM | | VM_A | | VM_B |
> + +-.-.-.-+ +---.----+ +---.----+
> + | | | |
> + +-.-.-----------.------------.----+
> + | | \==========/ | |
> + | \========================/ |
> + | Gunyah |
> + +---------------------------------+
> +
> +The source for the resource manager is available at https://github.com/quic/gunyah-resource-manager.
> +
> +The resource manager provides the following features:
> +
> +- VM lifecycle management: allocating a VM, starting VMs, destruction of VMs
> +- VM access control policy, including memory sharing and lending
> +- Interrupt routing configuration
> +- Forwarding of system-level events (e.g. VM shutdown) to owner VM
> +
> +When booting a virtual machine which uses a devicetree such as Linux, resource manager overlays a
> +/hypervisor node. This node can let Linux know it is running as a Gunyah guest VM,
> +how to communicate with resource manager, and basic description and capabilities of
Maybe:
This node lets Linux know it is running as a Gunyah guest VM.
It provides a basic description and capabilities of the VM,
as well as information required to communicate with the resource
manager.
> +this VM. See Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml for a description
> +of this node.
> diff --git a/Documentation/virt/gunyah/message-queue.rst b/Documentation/virt/gunyah/message-queue.rst
> new file mode 100644
> index 0000000000000..b352918ae54b4
> --- /dev/null
> +++ b/Documentation/virt/gunyah/message-queue.rst
> @@ -0,0 +1,63 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +Message Queues
> +==============
> +Message queue is a simple low-capacity IPC channel between two VMs. It is
I don't know what the "capacity" of an IPC channel is. But
that's OK I guess; it's sort of descriptive.
> +intended for sending small control and configuration messages. Each message
> +queue is unidirectional, so a full-duplex IPC channel requires a pair of queues.
> +
> +Messages can be up to 240 bytes in length. Longer messages require a further
> +protocol on top of the message queue messages themselves. For instance, communication
> +with the resource manager adds a header field for sending longer messages via multiple
> +message fragments.
> +
> +The diagram below shows how message queue works. A typical configuration involves
> +2 message queues. Message queue 1 allows VM_A to send messages to VM_B. Message
> +queue 2 allows VM_B to send messages to VM_A.
> +
> +1. VM_A sends a message of up to 240 bytes in length. It raises a hypercall
> + with the message to inform the hypervisor to add the message to
> + message queue 1's queue. The hypervisor copies memory into the internal
> + message queue representation; the memory doesn't need to be shared between
> + VM_A and VM_B.
> +
> +2. Gunyah raises the corresponding interrupt for VM_B (Rx vIRQ) when any of
> + these happens:
> +
> + a. gh_msgq_send() has PUSH flag. Queue is immediately flushed. This is the typical case.
> + b. Explicility with gh_msgq_push command from VM_A.
s/Explicility/Explicitly/
Is gh_msgq_send() a function and gh_msgq_push a "command" or
something? Why the difference in parentheses? (Pick a
convention and follow it.)
Does "Queue is flushed" mean "VM_B is interrupted"?
VM_A calls gh_msgq_push, and that causes the VM_B interrupt to
be signaled?
I'm being a little picky but I think these descriptions could be
improved a bit.
> + c. Message queue has reached a threshold depth.
> +
> +3. VM_B calls gh_msgq_recv() and Gunyah copies message to requested buffer.
It sure would be nice if all this didn't have to be copied
twice. But I recognize the copies ensure isolation.
> +
> +4. Gunyah buffers messages in the queue. If the queue became full when VM_A added a message,
> + the return values for gh_msgq_send() include a flag that indicates the queue is full.
> + Once VM_B receives the message and, thus, there is space in the queue, Gunyah
> + will raise the Tx vIRQ on VM_A to indicate it can continue sending messages.
Does the Tx vIRQ on VM_A fire after *every* message is sent,
or only when the state of the queue goes from "full" to "not"?
(Looking at patch 6 it looks like the latter.)
If it's signaled after every message is sent, does it
indicate that the message has been *received* by VM_B
(versus just received and copied by Gunyah)?
> +
> +For VM_B to send a message to VM_A, the process is identical, except that hypercalls
> +reference message queue 2's capability ID. Each message queue has its own independent
> +vIRQ: two TX message queues will have two vIRQs (and two capability IDs).
> +
> +::
> +
> + +---------------+ +-----------------+ +---------------+
> + | VM_A | |Gunyah hypervisor| | VM_B |
> + | | | | | |
> + | | | | | |
> + | | Tx | | | |
> + | |-------->| | Rx vIRQ | |
> + |gh_msgq_send() | Tx vIRQ |Message queue 1 |-------->|gh_msgq_recv() |
> + | |<------- | | | |
> + | | | | | |
> + | Message Queue | | | | Message Queue |
> + | driver | | | | driver |
> + | | | | | |
> + | | | | | |
> + | | | | Tx | |
> + | | Rx vIRQ | |<--------| |
> + |gh_msgq_recv() |<--------|Message queue 2 | Tx vIRQ |gh_msgq_send() |
> + | | | |-------->| |
> + | | | | | |
> + | | | | | |
> + +---------------+ +-----------------+ +---------------+
> diff --git a/Documentation/virt/index.rst b/Documentation/virt/index.rst
> index 7fb55ae08598d..15869ee059b35 100644
> --- a/Documentation/virt/index.rst
> +++ b/Documentation/virt/index.rst
> @@ -16,6 +16,7 @@ Virtualization Support
> coco/sev-guest
> coco/tdx-guest
> hyperv/index
> + gunyah/index
>
> .. only:: html and subproject
>
On 6/16/2023 9:32 AM, Alex Elder wrote:
> On 6/13/23 12:20 PM, Elliot Berman wrote:
>> Gunyah is an open-source Type-1 hypervisor developed by Qualcomm. It
>> does not depend on any lower-privileged OS/kernel code for its core
>> functionality. This increases its security and can support a smaller
>> trusted computing based when compared to Type-2 hypervisors.
>
> s/based/base/
>
>>
>> Add documentation describing the Gunyah hypervisor and the main
>> components of the Gunyah hypervisor which are of interest to Linux
>> virtualization development.
>>
>> Reviewed-by: Bagas Sanjaya <[email protected]>
>> Signed-off-by: Elliot Berman <[email protected]>
>
> I have some questions and comments. But I trust that you
> can answer them and update your patch in a reasonable way
> to address what I say. So... please consider these things,
> and update as you see fit.
>
> Reviewed-by: Alex Elder <[email protected]>
>
>> ---
>> Documentation/virt/gunyah/index.rst | 113 ++++++++++++++++++++
>> Documentation/virt/gunyah/message-queue.rst | 63 +++++++++++
>> Documentation/virt/index.rst | 1 +
>> 3 files changed, 177 insertions(+)
>> create mode 100644 Documentation/virt/gunyah/index.rst
>> create mode 100644 Documentation/virt/gunyah/message-queue.rst
>>
>> diff --git a/Documentation/virt/gunyah/index.rst
>> b/Documentation/virt/gunyah/index.rst
>> new file mode 100644
>> index 0000000000000..74aa345e0a144
>> --- /dev/null
>> +++ b/Documentation/virt/gunyah/index.rst
>> @@ -0,0 +1,113 @@
>> +.. SPDX-License-Identifier: GPL-2.0
>> +
>> +=================
>> +Gunyah Hypervisor
>> +=================
>> +
>> +.. toctree::
>> + :maxdepth: 1
>> +
>> + message-queue
>> +
>> +Gunyah is a Type-1 hypervisor which is independent of any OS kernel,
>> and runs in
>> +a higher CPU privilege level. It does not depend on any
>> lower-privileged operating system
>> +for its core functionality. This increases its security and can
>> support a much smaller
>> +trusted computing base than a Type-2 hypervisor.
>> +
>> +Gunyah is an open source hypervisor. The source repo is available at
>> +https://github.com/quic/gunyah-hypervisor.
>> +
>> +Gunyah provides these following features.
>> +
>> +- Scheduling:
>> +
>> + A scheduler for virtual CPUs (vCPUs) on physical CPUs enables
>> time-sharing
>> + of the CPUs. Gunyah supports two models of scheduling:
>> +
>> + 1. "Behind the back" scheduling in which Gunyah hypervisor
>> schedules vCPUS on its own.
>
> s/VCPUS/VCPUs/
>
>> + 2. "Proxy" scheduling in which a delegated VM can donate part of
>> one of its vCPU slice
>> + to another VM's vCPU via a hypercall.
>
> This might sound dumb, but can there be more vCPUs than there
> are physical CPUs? Is a vCPU *tied* to a particular physical
> CPU, or does it just indicate that a VM has one abstracted CPU
> available to use--and any available physical CPU core can
> implement it (possibly changing between time slices)?
>
There can be more vCPUs than physical CPUs. If someone wanted to
hard-code their VM to use 16 vCPUs, they could (I picked 16 arbitrarily).
The latter -- the physical CPU that makes the "vcpu_run" hypercall will
be the one to run the vCPU. The userspace thread triggers the hypercall
via GH_VCPU_RUN ioctl and is dependent on the host's task placement for
which physical cpu that userspace thread runs on.
>> +
>> +- Memory Management:
>> +
>> + APIs handling memory, abstracted as objects, limiting direct use of
>> physical
>> + addresses. Memory ownership and usage tracking of all memory under
>> its control.
>> + Memory partitioning between VMs is a fundamental security feature.
>> +
>> +- Interrupt Virtualization:
>> +
>> + Uses CPU hardware interrupt virtualization capabilities. Interrupts
>> are handled
>> + in the hypervisor and routed to the assigned VM.
>> +
>> +- Inter-VM Communication:
>> +
>> + There are several different mechanisms provided for communicating
>> between VMs.
>> +
>> +- Virtual platform:
>> +
>> + Architectural devices such as interrupt controllers and CPU timers
>> are directly provided
>> + by the hypervisor as well as core virtual platform devices and
>> system APIs such as ARM PSCI.
>> +
>> +- Device Virtualization:
>> +
>> + Para-virtualization of devices is supported using inter-VM
>> communication.
>> +
>> +Architectures supported
>> +=======================
>> +AArch64 with a GIC
>> +
>> +Resources and Capabilities
>> +==========================
>> +
>> +Some services or resources provided by the Gunyah hypervisor are
>> described to a virtual machine by
>> +capability IDs. For instance, inter-VM communication is performed
>> with doorbells and message queues.
>> +Gunyah allows access to manipulate that doorbell via the capability
>> ID. These resources are
>> +described in Linux as a struct gh_resource.
>> +
>> +High level management of these resources is performed by the resource
>> manager VM. RM informs a
>> +guest VM about resources it can access through either the device tree
>> or via guest-initiated RPC.
>> +
>> +For each virtual machine, Gunyah maintains a table of resources which
>> can be accessed by that VM.
>> +An entry in this table is called a "capability" and VMs can only
>> access resources via this
>> +capability table. Hence, virtual Gunyah resources are referenced by a
>> "capability IDs" and not
>> +"resource IDs". If 2 VMs have access to the same resource, they might
>> not be using the same
>> +capability ID to access that resource since the capability tables are
>> independent per VM.
>> +
>> +Resource Manager
>> +================
>> +
>> +The resource manager (RM) is a privileged application VM supporting
>> the Gunyah Hypervisor.
>> +It provides policy enforcement aspects of the virtualization system.
>> The resource manager can
>> +be treated as an extension of the Hypervisor but is separated to its
>> own partition to ensure
>> +that the hypervisor layer itself remains small and secure and to
>> maintain a separation of policy
>> +and mechanism in the platform. RM runs at arm64 NS-EL1 similar to
>> other virtual machines.
>> +
>> +Communication with the resource manager from each guest VM happens
>> with message-queue.rst. Details
>> +about the specific messages can be found in
>> drivers/virt/gunyah/rsc_mgr.c
>> +
>> +::
>> +
>> + +-------+ +--------+ +--------+
>> + | RM | | VM_A | | VM_B |
>> + +-.-.-.-+ +---.----+ +---.----+
>> + | | | |
>> + +-.-.-----------.------------.----+
>> + | | \==========/ | |
>> + | \========================/ |
>> + | Gunyah |
>> + +---------------------------------+
>> +
>> +The source for the resource manager is available at
>> https://github.com/quic/gunyah-resource-manager.
>> +
>> +The resource manager provides the following features:
>> +
>> +- VM lifecycle management: allocating a VM, starting VMs, destruction
>> of VMs
>> +- VM access control policy, including memory sharing and lending
>> +- Interrupt routing configuration
>> +- Forwarding of system-level events (e.g. VM shutdown) to owner VM
>> +
>> +When booting a virtual machine which uses a devicetree such as Linux,
>> resource manager overlays a
>> +/hypervisor node. This node can let Linux know it is running as a
>> Gunyah guest VM,
>> +how to communicate with resource manager, and basic description and
>> capabilities of
>
> Maybe:
>
> This node lets Linux know it is running as a Gunyah guest VM.
> It provides a basic description and capabilities of the VM,
> as well as information required to communicate with the resource
> manager.
>
>> +this VM. See
>> Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml for
>> a description
>> +of this node.
>> diff --git a/Documentation/virt/gunyah/message-queue.rst
>> b/Documentation/virt/gunyah/message-queue.rst
>> new file mode 100644
>> index 0000000000000..b352918ae54b4
>> --- /dev/null
>> +++ b/Documentation/virt/gunyah/message-queue.rst
>> @@ -0,0 +1,63 @@
>> +.. SPDX-License-Identifier: GPL-2.0
>> +
>> +Message Queues
>> +==============
>> +Message queue is a simple low-capacity IPC channel between two VMs.
>> It is
>
> I don't know what the "capacity" of an IPC channel is. But
> that's OK I guess; it's sort of descriptive.
>
>> +intended for sending small control and configuration messages. Each
>> message
>> +queue is unidirectional, so a full-duplex IPC channel requires a pair
>> of queues.
>> +
>> +Messages can be up to 240 bytes in length. Longer messages require a
>> further
>> +protocol on top of the message queue messages themselves. For
>> instance, communication
>> +with the resource manager adds a header field for sending longer
>> messages via multiple
>> +message fragments.
>> +
>> +The diagram below shows how message queue works. A typical
>> configuration involves
>> +2 message queues. Message queue 1 allows VM_A to send messages to
>> VM_B. Message
>> +queue 2 allows VM_B to send messages to VM_A.
>> +
>> +1. VM_A sends a message of up to 240 bytes in length. It raises a
>> hypercall
>> + with the message to inform the hypervisor to add the message to
>> + message queue 1's queue. The hypervisor copies memory into the
>> internal
>> + message queue representation; the memory doesn't need to be shared
>> between
>> + VM_A and VM_B.
>> +
>> +2. Gunyah raises the corresponding interrupt for VM_B (Rx vIRQ) when
>> any of
>> + these happens:
>> +
>> + a. gh_msgq_send() has PUSH flag. Queue is immediately flushed.
>> This is the typical case.
>> + b. Explicility with gh_msgq_push command from VM_A.
>
> s/Explicility/Explicitly/
>
> Is gh_msgq_send() a function and gh_msgq_push a "command" or
> something? Why the difference in parentheses? (Pick a
> convention and follow it.)
Will fix.
>
> Does "Queue is flushed" mean "VM_B is interrupted"?
Yes, I'll clarify that's what it means. VM_B could get the interrupt and
still decide not to read from the queue.
>
> VM_A calls gh_msgq_push, and that causes the VM_B interrupt to
> be signaled?
>
Yes.
> I'm being a little picky but I think these descriptions could be
> improved a bit.
>
>> + c. Message queue has reached a threshold depth.
>> +
>> +3. VM_B calls gh_msgq_recv() and Gunyah copies message to requested
>> buffer.
>
> It sure would be nice if all this didn't have to be copied
> twice. But I recognize the copies ensure isolation.
>
>> +
>> +4. Gunyah buffers messages in the queue. If the queue became full
>> when VM_A added a message,
>> + the return values for gh_msgq_send() include a flag that indicates
>> the queue is full.
>> + Once VM_B receives the message and, thus, there is space in the
>> queue, Gunyah
>> + will raise the Tx vIRQ on VM_A to indicate it can continue sending
>> messages.
>
> Does the Tx vIRQ on VM_A fire after *every* message is sent,
> or only when the state of the queue goes from "full" to "not"?
> (Looking at patch 6 it looks like the latter.)
Tx vIRQ only fires when state of queue goes from "full" to "not".
This may not be very relevant, but Gunyah allows the "not full"
threshold to be less than the queue depth. For instance, the Tx vIRQ
could be configured to only fire once there are no pending messages in
the queue. Linux doesn't presently configure this threshold.
>
> If it's signaled after every message is sent, does it
> indicate that the message has been *received* by VM_B
> (versus just received and copied by Gunyah)?
>
To connect some dots: the Tx vIRQ is fired when the reader reads a
message and the number of messages still in the queue decrements to the
"not full" threshold.
https://github.com/quic/gunyah-hypervisor/blob/3d4014404993939f898018cfb1935c2d9bfc2830/hyp/ipc/msgqueue/src/msgqueue_common.c#L142-L148
>> +
>> +For VM_B to send a message to VM_A, the process is identical, except
>> that hypercalls
>> +reference message queue 2's capability ID. Each message queue has its
>> own independent
>> +vIRQ: two TX message queues will have two vIRQs (and two capability
>> IDs).
>> +
>> +::
>> +
>> + +---------------+ +-----------------+
>> +---------------+
>> + | VM_A | |Gunyah hypervisor| |
>> VM_B |
>> + | | | |
>> | |
>> + | | | |
>> | |
>> + | | Tx | |
>> | |
>> + | |-------->| | Rx vIRQ
>> | |
>> + |gh_msgq_send() | Tx vIRQ |Message queue 1
>> |-------->|gh_msgq_recv() |
>> + | |<------- | |
>> | |
>> + | | | |
>> | |
>> + | Message Queue | | | | Message
>> Queue |
>> + | driver | | | |
>> driver |
>> + | | | |
>> | |
>> + | | | |
>> | |
>> + | | | | Tx
>> | |
>> + | | Rx vIRQ |
>> |<--------| |
>> + |gh_msgq_recv() |<--------|Message queue 2 | Tx vIRQ
>> |gh_msgq_send() |
>> + | | |
>> |-------->| |
>> + | | | |
>> | |
>> + | | | |
>> | |
>> + +---------------+ +-----------------+
>> +---------------+
>> diff --git a/Documentation/virt/index.rst b/Documentation/virt/index.rst
>> index 7fb55ae08598d..15869ee059b35 100644
>> --- a/Documentation/virt/index.rst
>> +++ b/Documentation/virt/index.rst
>> @@ -16,6 +16,7 @@ Virtualization Support
>> coco/sev-guest
>> coco/tdx-guest
>> hyperv/index
>> + gunyah/index
>> .. only:: html and subproject
>
On 7/3/23 5:41 PM, Elliot Berman wrote:
>> If it's signaled after every message is sent, does it
>> indicate that the message has been *received* by VM_B
>> (versus just received and copied by Gunyah)?
>>
>
> To connect some dots: the Tx vIRQ is fired when the reader reads a
> message and the number of messages still in the queue decrements to the
> "not full" threshold.
>
> https://github.com/quic/gunyah-hypervisor/blob/3d4014404993939f898018cfb1935c2d9bfc2830/hyp/ipc/msgqueue/src/msgqueue_common.c#L142-L148
So the Tx vIRQ on the sender is only fired when the state of the
receiver's Rx queue goes from "full" to "not full".
Normally there is no signal sent, and a sender sends messages
until it gets a "queue full" flag back from a gh_msgq_send()
call. At that point it should stop sending, until the Tx vIRQ
fires to indicate the receiver queue has "room" (fewer than
the "full threshold" messages are consumed).
There is no way (at this layer of the protocol) to tell whether
a given message has been *received*, only that it has been *sent*
(meaning the hypervisor has accepted it). And Gunyah provides
reliable delivery (each message received in send order, exactly
once).
Now that I re-read what you said it makes sense and I guess I
just misunderstood. There *might* be a way to reword slightly
to prevent any misinterpretation.
Thanks.
-Alex
On Tue, Jun 13, 2023 at 10:20:29AM -0700, Elliot Berman wrote:
> Gunyah is an open-source Type-1 hypervisor developed by Qualcomm. It
> does not depend on any lower-privileged OS/kernel code for its core
> functionality. This increases its security and can support a smaller
> trusted computing based when compared to Type-2 hypervisors.
>
> Add documentation describing the Gunyah hypervisor and the main
> components of the Gunyah hypervisor which are of interest to Linux
> virtualization development.
>
> Reviewed-by: Bagas Sanjaya <[email protected]>
> Signed-off-by: Elliot Berman <[email protected]>
> ---
> Documentation/virt/gunyah/index.rst | 113 ++++++++++++++++++++
> Documentation/virt/gunyah/message-queue.rst | 63 +++++++++++
> Documentation/virt/index.rst | 1 +
> 3 files changed, 177 insertions(+)
> create mode 100644 Documentation/virt/gunyah/index.rst
> create mode 100644 Documentation/virt/gunyah/message-queue.rst
>
> diff --git a/Documentation/virt/gunyah/index.rst b/Documentation/virt/gunyah/index.rst
> new file mode 100644
> index 0000000000000..74aa345e0a144
> --- /dev/null
> +++ b/Documentation/virt/gunyah/index.rst
> @@ -0,0 +1,113 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=================
> +Gunyah Hypervisor
> +=================
> +
> +.. toctree::
> + :maxdepth: 1
> +
> + message-queue
> +
> +Gunyah is a Type-1 hypervisor which is independent of any OS kernel, and runs in
> +a higher CPU privilege level. It does not depend on any lower-privileged operating system
> +for its core functionality. This increases its security and can support a much smaller
> +trusted computing base than a Type-2 hypervisor.
Wrap your lines at 80 characters please.
> +
> +Gunyah is an open source hypervisor. The source repo is available at
> +https://github.com/quic/gunyah-hypervisor.
> +
> +Gunyah provides these following features.
> +
> +- Scheduling:
> +
> + A scheduler for virtual CPUs (vCPUs) on physical CPUs enables time-sharing
> + of the CPUs. Gunyah supports two models of scheduling:
> +
> + 1. "Behind the back" scheduling in which Gunyah hypervisor schedules vCPUS on its own.
> + 2. "Proxy" scheduling in which a delegated VM can donate part of one of its vCPU slice
> + to another VM's vCPU via a hypercall.
> +
> +- Memory Management:
> +
> + APIs handling memory, abstracted as objects, limiting direct use of physical
> + addresses. Memory ownership and usage tracking of all memory under its control.
> + Memory partitioning between VMs is a fundamental security feature.
> +
> +- Interrupt Virtualization:
> +
> + Uses CPU hardware interrupt virtualization capabilities. Interrupts are handled
> + in the hypervisor and routed to the assigned VM.
> +
> +- Inter-VM Communication:
> +
> + There are several different mechanisms provided for communicating between VMs.
> +
> +- Virtual platform:
> +
> + Architectural devices such as interrupt controllers and CPU timers are directly provided
> + by the hypervisor as well as core virtual platform devices and system APIs such as ARM PSCI.
> +
> +- Device Virtualization:
> +
> + Para-virtualization of devices is supported using inter-VM communication.
> +
> +Architectures supported
> +=======================
> +AArch64 with a GIC
> +
> +Resources and Capabilities
> +==========================
> +
> +Some services or resources provided by the Gunyah hypervisor are described to a virtual machine by
To my understanding neither resources, nor services are "described", but
rather "exposed through capability IDs"
Is it really "some services or resource", isn't everything in Gunyah
exposed to the VMs as a capability?
> +capability IDs. For instance, inter-VM communication is performed with doorbells and message queues.
> +Gunyah allows access to manipulate that doorbell via the capability ID. These resources are
s/manipulate/interact with/ ?
> +described in Linux as a struct gh_resource.
> +
> +High level management of these resources is performed by the resource manager VM. RM informs a
> +guest VM about resources it can access through either the device tree or via guest-initiated RPC.
> +
> +For each virtual machine, Gunyah maintains a table of resources which can be accessed by that VM.
> +An entry in this table is called a "capability" and VMs can only access resources via this
> +capability table. Hence, virtual Gunyah resources are referenced by a "capability IDs" and not
> +"resource IDs". If 2 VMs have access to the same resource, they might not be using the same
> +capability ID to access that resource since the capability tables are independent per VM.
I think you can rewrite this section more succinctly by saying that Gunyah
handles resources, which are selectively exposed to each VM through
VM-specific capability ids.
> +
> +Resource Manager
> +================
> +
> +The resource manager (RM) is a privileged application VM supporting the Gunyah Hypervisor.
> +It provides policy enforcement aspects of the virtualization system. The resource manager can
> +be treated as an extension of the Hypervisor but is separated to its own partition to ensure
> +that the hypervisor layer itself remains small and secure and to maintain a separation of policy
> +and mechanism in the platform. RM runs at arm64 NS-EL1 similar to other virtual machines.
s/RM/The resource manager/ and a ',' after EL1, please.
> +
> +Communication with the resource manager from each guest VM happens with message-queue.rst. Details
s/each guest VM/other VMs/ or perhaps even spell out virtual machines.
> +about the specific messages can be found in drivers/virt/gunyah/rsc_mgr.c
> +
> +::
> +
> + +-------+ +--------+ +--------+
> + | RM | | VM_A | | VM_B |
> + +-.-.-.-+ +---.----+ +---.----+
> + | | | |
> + +-.-.-----------.------------.----+
> + | | \==========/ | |
> + | \========================/ |
> + | Gunyah |
> + +---------------------------------+
> +
> +The source for the resource manager is available at https://github.com/quic/gunyah-resource-manager.
> +
> +The resource manager provides the following features:
> +
> +- VM lifecycle management: allocating a VM, starting VMs, destruction of VMs
> +- VM access control policy, including memory sharing and lending
> +- Interrupt routing configuration
> +- Forwarding of system-level events (e.g. VM shutdown) to owner VM
> +
> +When booting a virtual machine which uses a devicetree such as Linux, resource manager overlays a
You can omit "such as Linux" without loosing information. Also, "the
resource manager".
> +/hypervisor node. This node can let Linux know it is running as a Gunyah guest VM,
"can"? Looking at the implementation that doesn't seem to be how you
detect that you're running under Gunyah.
> +how to communicate with resource manager, and basic description and capabilities of
When it comes to RM, it doesn't seem to be "can" anymore. Here it _is_
the way you inform the OS about how to communicate with the resource
manager.
> +this VM. See Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml for a description
> +of this node.
> diff --git a/Documentation/virt/gunyah/message-queue.rst b/Documentation/virt/gunyah/message-queue.rst
> new file mode 100644
> index 0000000000000..b352918ae54b4
> --- /dev/null
> +++ b/Documentation/virt/gunyah/message-queue.rst
> @@ -0,0 +1,63 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +Message Queues
> +==============
> +Message queue is a simple low-capacity IPC channel between two VMs. It is
s/VMs/virtual machines/
> +intended for sending small control and configuration messages. Each message
> +queue is unidirectional, so a full-duplex IPC channel requires a pair of queues.
> +
> +Messages can be up to 240 bytes in length. Longer messages require a further
> +protocol on top of the message queue messages themselves. For instance, communication
> +with the resource manager adds a header field for sending longer messages via multiple
> +message fragments.
> +
> +The diagram below shows how message queue works. A typical configuration involves
> +2 message queues. Message queue 1 allows VM_A to send messages to VM_B. Message
> +queue 2 allows VM_B to send messages to VM_A.
What is described below all relates to one message queue, I think it
would be clearer if you remove the second message queue from the example
below and simply state that each direction need its own queue (as you
do here).
> +
> +1. VM_A sends a message of up to 240 bytes in length. It raises a hypercall
> + with the message to inform the hypervisor to add the message to
> + message queue 1's queue. The hypervisor copies memory into the internal
> + message queue representation; the memory doesn't need to be shared between
> + VM_A and VM_B.
> +
> +2. Gunyah raises the corresponding interrupt for VM_B (Rx vIRQ) when any of
> + these happens:
> +
> + a. gh_msgq_send() has PUSH flag. Queue is immediately flushed. This is the typical case.
Funny when the description of the unusual word "push" directly grabs the
word "flush" which everyone understands the meaning of. But I guess this
is Gunyah nomenclature.
On the other hand, if you consider the PUSH flag to just mean "push a RX
interrupt", then the naming isn't so strange. But it wouldn't
necessarily imply that "the queue is flushed", it's simply raising the
rx irq for the receiving side to drain the queue - at it's own pace.
I wouldn't be surprised to see that pushed messages denotes the "typical
case", but I don't think it's relevant details for the rx irq
description.
> + b. Explicility with gh_msgq_push command from VM_A.
There's no function named gh_msgq_send(), and the gh_msgq_push is
lacking parenthesis. Please refer to these consistently with references
to functions, or perhaps capitalize them to refer to the hypercall?
> + c. Message queue has reached a threshold depth.
> +
> +3. VM_B calls gh_msgq_recv() and Gunyah copies message to requested buffer.
s/requested/a provided/
> +
> +4. Gunyah buffers messages in the queue. If the queue became full when VM_A added a message,
s/the/a/
> + the return values for gh_msgq_send() include a flag that indicates the queue is full.
> + Once VM_B receives the message and, thus, there is space in the queue, Gunyah
"Once messages are drained from the queue, Gunyah will raise the..."
> + will raise the Tx vIRQ on VM_A to indicate it can continue sending messages.
> +
> +For VM_B to send a message to VM_A, the process is identical, except that hypercalls
> +reference message queue 2's capability ID. Each message queue has its own independent
> +vIRQ: two TX message queues will have two vIRQs (and two capability IDs).
> +
> +::
> +
> + +---------------+ +-----------------+ +---------------+
> + | VM_A | |Gunyah hypervisor| | VM_B |
> + | | | | | |
> + | | | | | |
> + | | Tx | | | |
> + | |-------->| | Rx vIRQ | |
> + |gh_msgq_send() | Tx vIRQ |Message queue 1 |-------->|gh_msgq_recv() |
> + | |<------- | | | |
> + | | | | | |
> + | Message Queue | | | | Message Queue |
> + | driver | | | | driver |
> + | | | | | |
> + | | | | | |
> + | | | | Tx | |
> + | | Rx vIRQ | |<--------| |
> + |gh_msgq_recv() |<--------|Message queue 2 | Tx vIRQ |gh_msgq_send() |
> + | | | |-------->| |
> + | | | | | |
> + | | | | | |
> + +---------------+ +-----------------+ +---------------+
> diff --git a/Documentation/virt/index.rst b/Documentation/virt/index.rst
> index 7fb55ae08598d..15869ee059b35 100644
> --- a/Documentation/virt/index.rst
> +++ b/Documentation/virt/index.rst
> @@ -16,6 +16,7 @@ Virtualization Support
> coco/sev-guest
> coco/tdx-guest
> hyperv/index
> + gunyah/index
'g' < 'h'
Regards,
Bjorn