2020-07-28 06:24:59

by Yi Liu

Subject: [PATCH v6 14/15] vfio: Document dual stage control

From: Eric Auger <[email protected]>

The VFIO API was enhanced to support nested stage control: a bunch of
new ioctls and a usage guideline.

Let's document the process to follow to set up nested mode.

Cc: Kevin Tian <[email protected]>
CC: Jacob Pan <[email protected]>
Cc: Alex Williamson <[email protected]>
Cc: Eric Auger <[email protected]>
Cc: Jean-Philippe Brucker <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Lu Baolu <[email protected]>
Reviewed-by: Stefan Hajnoczi <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
Signed-off-by: Liu Yi L <[email protected]>
---
v5 -> v6:
*) tweak per Eric's comments.

v3 -> v4:
*) add Reviewed-by from Stefan Hajnoczi

v2 -> v3:
*) address comments from Stefan Hajnoczi

v1 -> v2:
*) new in v2; compared with Eric's original version, pasid table bind
and fault reporting are removed as this series doesn't cover them.
Original version from Eric:
https://lkml.org/lkml/2020/3/20/700
---
Documentation/driver-api/vfio.rst | 75 +++++++++++++++++++++++++++++++++++++++
1 file changed, 75 insertions(+)

diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index f1a4d3c..c0d43f0 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,81 @@ group and can access them as follows::
/* Gratuitous device reset and go... */
ioctl(device, VFIO_DEVICE_RESET);

+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds
+to the ARM terminology while "level" corresponds to Intel's terminology.
+In the following text we use either term without distinction.
+
+This is useful when the guest is exposed to a virtual IOMMU and some
+devices are assigned to the guest through VFIO. The guest OS can then
+use stage-1 (GIOVA -> GPA or GVA -> GPA), while the hypervisor uses
+stage-2 for VM isolation (GPA -> HPA).
+
+Under dual stage translation, the guest gets ownership of the stage-1 page
+tables and also owns the stage-1 configuration structures. The hypervisor
+owns the root configuration structure (for security reasons), including the
+stage-2 configuration. This works as long as the configuration structures
+and page table formats are compatible between the virtual IOMMU and the
+physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through:
+
+ ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use stage-2, leaving stage-1 available
+for guest usage. The stage-1 format and binding method are vendor specific
+and reported in nesting cap (VFIO_IOMMU_TYPE1_INFO_CAP_NESTING) through
+VFIO_IOMMU_GET_INFO:
+
+ ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
+
+The nesting cap info is available only after NESTING_IOMMU is selected.
+If underlying IOMMU doesn't support nesting, VFIO_SET_IOMMU fails and
+userspace should try other IOMMU types. Details of the nesting cap info
+can be found in Documentation/userspace-api/iommu.rst.
+
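+A minimal probing sketch (error handling trimmed; the fallback order and
+the nesting_info buffer are only illustrative):
+
+ int types[] = { VFIO_TYPE1_NESTING_IOMMU, VFIO_TYPE1v2_IOMMU,
+                 VFIO_TYPE1_IOMMU };
+ int i, type = -1;
+
+ for (i = 0; i < 3; i++) {
+     if (ioctl(container, VFIO_CHECK_EXTENSION, types[i]) > 0 &&
+         ioctl(container, VFIO_SET_IOMMU, types[i]) == 0) {
+         type = types[i];
+         break;
+     }
+ }
+
+ /* the nesting cap is only reported when NESTING_IOMMU was selected */
+ if (type == VFIO_TYPE1_NESTING_IOMMU)
+     ioctl(container, VFIO_IOMMU_GET_INFO, &nesting_info);
+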
+The stage-1 page table can be bound to the IOMMU in two methods: directly
+or indirectly. Direct binding requires userspace to notify VFIO of every
+guest stage-1 page table binding, while indirect binding allows userspace
+to bind once with an intermediate structure (e.g. PASID table) which
+indirectly links to guest stage-1 page tables. The actual binding method
+depends on the IOMMU vendor. Currently only the direct binding capability
+(IOMMU_NESTING_FEAT_BIND_PGTBL) is supported:
+
+ nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
+ memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
+ ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+When multiple stage-1 page tables are supported on a device, each page
+table is associated with a PASID (Process Address Space ID) to distinguish
+it from the others. In such a case, userspace should include the PASID in
+the bind_data when issuing a direct binding request.
+
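+As an illustration, assuming bind_data takes the form of the iommu uAPI's
+struct iommu_gpasid_bind_data (field names may differ across kernel
+versions; guest_pgd, host_pasid and guest_pasid are placeholders), a
+PASID-carrying bind request could be filled in as:
+
+ struct iommu_gpasid_bind_data bind_data = {
+     .version    = IOMMU_GPASID_BIND_VERSION_1,
+     .format     = IOMMU_PASID_FORMAT_INTEL_VTD,
+     .addr_width = 48,
+     .flags      = IOMMU_SVA_GPASID_VAL, /* guest PASID is valid */
+     .gpgd       = guest_pgd,            /* GPA of guest stage-1 page table */
+     .hpasid     = host_pasid,           /* PASID allocated through VFIO */
+     .gpasid     = guest_pasid,          /* PASID used by the guest */
+ };
+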
+PASIDs can be managed per-device or system-wide, which, again, depends on
+the IOMMU vendor and is reported in the nesting cap info. When the
+system-wide policy is reported (IOMMU_NESTING_FEAT_SYSWIDE_PASID), e.g. on
+Intel platforms, userspace *must* allocate the PASID from VFIO before
+attempting to bind a stage-1 page table:
+
+ req.flags = VFIO_IOMMU_ALLOC_PASID;
+ ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
+
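+A slightly fuller sketch, assuming the request layout proposed in this
+series (an argsz/flags header plus a {min, max} allocation range, with a
+matching VFIO_IOMMU_FREE_PASID flag):
+
+ struct vfio_iommu_type1_pasid_request req = {
+     .argsz = sizeof(req),
+     .flags = VFIO_IOMMU_ALLOC_PASID,
+     .range = { .min = 1, .max = 1024 }, /* any free PASID in [1, 1024] */
+ };
+ int pasid = ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
+
+ /* ... bind/unbind stage-1 page tables under this PASID ... */
+
+ req.flags = VFIO_IOMMU_FREE_PASID;
+ req.range.min = req.range.max = pasid; /* free exactly this PASID */
+ ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
+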
+Once the stage-1 page table is bound to the IOMMU, the guest is free to
+fully manage its mappings. The IOMMU walks the nested stage-1 and stage-2
+page tables when serving DMA requests from the assigned device, and may
+cache the stage-1 mappings in the IOTLB. When required
+(IOMMU_NESTING_FEAT_CACHE_INVLD), userspace *must* forward guest stage-1
+invalidations to the host, so the IOTLB is invalidated:
+
+ nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
+ memcpy(&nesting_op->data, &cache_inv_data, sizeof(cache_inv_data));
+ ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+Forwarded invalidations can happen at various granularity levels (page
+level, context level, etc.).
+
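+For instance, assuming the invalidation data takes the form of the iommu
+uAPI's struct iommu_cache_invalidate_info (pasid and giova are
+placeholders), a page-level invalidation of a single 4K mapping for one
+PASID could look like:
+
+ struct iommu_cache_invalidate_info cache_inv_data = {
+     .version     = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
+     .cache       = IOMMU_CACHE_INV_TYPE_IOTLB,
+     .granularity = IOMMU_INV_GRANU_ADDR,
+ };
+
+ cache_inv_data.granu.addr_info.flags        = IOMMU_INV_ADDR_FLAGS_PASID;
+ cache_inv_data.granu.addr_info.pasid        = pasid;
+ cache_inv_data.granu.addr_info.addr         = giova; /* guest IOVA */
+ cache_inv_data.granu.addr_info.granule_size = 4096;
+ cache_inv_data.granu.addr_info.nb_granules  = 1;
+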
VFIO User API
-------------------------------------------------------------------------------

--
2.7.4


2020-08-16 12:19:03

by Eric Auger

Subject: Re: [PATCH v6 14/15] vfio: Document dual stage control

Hi Yi,

On 7/28/20 8:27 AM, Liu Yi L wrote:
> From: Eric Auger <[email protected]>
>
> The VFIO API was enhanced to support nested stage control: a bunch of
> new ioctls and a usage guideline.
>
> Let's document the process to follow to set up nested mode.
>
> Cc: Kevin Tian <[email protected]>
> CC: Jacob Pan <[email protected]>
> Cc: Alex Williamson <[email protected]>
> Cc: Eric Auger <[email protected]>
> Cc: Jean-Philippe Brucker <[email protected]>
> Cc: Joerg Roedel <[email protected]>
> Cc: Lu Baolu <[email protected]>
> Reviewed-by: Stefan Hajnoczi <[email protected]>
> Signed-off-by: Eric Auger <[email protected]>
> Signed-off-by: Liu Yi L <[email protected]>
> ---
> v5 -> v6:
> *) tweak per Eric's comments.
>
> v3 -> v4:
> *) add review-by from Stefan Hajnoczi
>
> v2 -> v3:
> *) address comments from Stefan Hajnoczi
>
> v1 -> v2:
> *) new in v2, compared with Eric's original version, pasid table bind
> and fault reporting is removed as this series doesn't cover them.
> Original version from Eric.
> https://lkml.org/lkml/2020/3/20/700
> ---
> Documentation/driver-api/vfio.rst | 75 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 75 insertions(+)
>
> diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
> index f1a4d3c..c0d43f0 100644
> --- a/Documentation/driver-api/vfio.rst
> +++ b/Documentation/driver-api/vfio.rst
> @@ -239,6 +239,81 @@ group and can access them as follows::
> /* Gratuitous device reset and go... */
> ioctl(device, VFIO_DEVICE_RESET);
>
> +IOMMU Dual Stage Control
> +------------------------
> +
> +Some IOMMUs support 2 stages/levels of translation. Stage corresponds
> +to the ARM terminology while level corresponds to Intel's terminology.
> +In the following text we use either without distinction.
> +
> +This is useful when the guest is exposed with a virtual IOMMU and some
> +devices are assigned to the guest through VFIO. Then the guest OS can
> +use stage-1 (GIOVA -> GPA or GVA->GPA), while the hypervisor uses stage
> +2 for VM isolation (GPA -> HPA).
> +
> +Under dual stage translation, the guest gets ownership of the stage-1 page
> +tables and also owns stage-1 configuration structures. The hypervisor owns
> +the root configuration structure (for security reason), including stage-2
> +configuration.
This is only true for vtd. On ARM the stage1 cfg is the Context
Descriptor table (aka PASID table). The root cfg only stores the GPA of
the CD table.
> +This works as long as configuration structures and page table
> +formats are compatible between the virtual IOMMU and the physical IOMMU.
> +
> +Assuming the HW supports it, this nested mode is selected by choosing the
> +VFIO_TYPE1_NESTING_IOMMU type through:
> +
> + ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
> +
> +This forces the hypervisor to use the stage-2, leaving stage-1 available
> +for guest usage. The stage-1 format and binding method are vendor specific
. These are reported in the nesting capability ...
> +and reported in nesting cap (VFIO_IOMMU_TYPE1_INFO_CAP_NESTING) through
> +VFIO_IOMMU_GET_INFO:
> +
> + ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
> +
> +The nesting cap info is available only after NESTING_IOMMU is selected.
> +If underlying IOMMU doesn't support nesting, VFIO_SET_IOMMU fails and
If the underlying
> +userspace should try other IOMMU types. Details of the nesting cap info
> +can be found in Documentation/userspace-api/iommu.rst.
> +
> +The stage-1 page table can be bound to the IOMMU in two methods: directly
> +or indirectly. Direct binding requires userspace to notify VFIO of every
Not sure we shall use this direct/indirect terminology. I don't think
this is part of either ARM or Intel SPEC.

Suggestion: On Intel, the stage1 page table info are mediated by the
userspace for each PASID. On ARM, the userspace directly passes the GPA
of the whole PASID table. Currently only Intel's binding is supported.
> +guest stage-1 page table binding, while indirect binding allows userspace
> +to bind once with an intermediate structure (e.g. PASID table) which
> +indirectly links to guest stage-1 page tables. The actual binding method
> +depends on IOMMU vendor. Currently only the direct binding capability (
> +IOMMU_NESTING_FEAT_BIND_PGTBL) is supported:
> +
> + nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
> + memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
> + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> +
> +When multiple stage-1 page tables are supported on a device, each page
> +table is associated with a PASID (Process Address Space ID) to differentiate
> +with each other. In such case, userspace should include PASID in the
> +bind_data when issuing direct binding request.
> +
> +PASID could be managed per-device or system-wide which, again, depends on
> +IOMMU vendor and is reported in nesting cap info. When system-wide policy
> +is reported (IOMMU_NESTING_FEAT_SYSWIDE_PASID), e.g. as by Intel platforms,
> +userspace *must* allocate PASID from VFIO before attempting binding of
> +stage-1 page table:
> +
> + req.flags = VFIO_IOMMU_ALLOC_PASID;
> + ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
> +
> +Once the stage-1 page table is bound to the IOMMU, the guest is allowed to
> +fully manage its mapping at its disposal. The IOMMU walks nested stage-1
> +and stage-2 page tables when serving DMA requests from assigned device, and
> +may cache the stage-1 mapping in the IOTLB. When required (IOMMU_NESTING_
> +FEAT_CACHE_INVLD), userspace *must* forward guest stage-1 invalidation to
> +the host, so the IOTLB is invalidated:
> +
> + nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
> + memcpy(&nesting_op->data, &cache_inv_data, sizeof(cache_inv_data));
> + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> +
> +Forwarded invalidations can happen at various granularity levels (page
> +level, context level, etc.)
> +
> VFIO User API
> -------------------------------------------------------------------------------
>
>
Thanks

Eric

2020-08-17 07:06:34

by Yi Liu

Subject: RE: [PATCH v6 14/15] vfio: Document dual stage control

Hi Eric,

> From: Auger Eric <[email protected]>
> Sent: Sunday, August 16, 2020 7:52 PM
>
> Hi Yi,
>
> On 7/28/20 8:27 AM, Liu Yi L wrote:
> > From: Eric Auger <[email protected]>
> >
> > The VFIO API was enhanced to support nested stage control: a bunch of
> > new ioctls and a usage guideline.
> >
> > Let's document the process to follow to set up nested mode.
> >
> > Cc: Kevin Tian <[email protected]>
> > CC: Jacob Pan <[email protected]>
> > Cc: Alex Williamson <[email protected]>
> > Cc: Eric Auger <[email protected]>
> > Cc: Jean-Philippe Brucker <[email protected]>
> > Cc: Joerg Roedel <[email protected]>
> > Cc: Lu Baolu <[email protected]>
> > Reviewed-by: Stefan Hajnoczi <[email protected]>
> > Signed-off-by: Eric Auger <[email protected]>
> > Signed-off-by: Liu Yi L <[email protected]>
> > ---
> > v5 -> v6:
> > *) tweak per Eric's comments.
> >
> > v3 -> v4:
> > *) add review-by from Stefan Hajnoczi
> >
> > v2 -> v3:
> > *) address comments from Stefan Hajnoczi
> >
> > v1 -> v2:
> > *) new in v2, compared with Eric's original version, pasid table bind
> > and fault reporting is removed as this series doesn't cover them.
> > Original version from Eric.
> > https://lkml.org/lkml/2020/3/20/700
> > ---
> > Documentation/driver-api/vfio.rst | 75 +++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 75 insertions(+)
> >
> > diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
> > index f1a4d3c..c0d43f0 100644
> > --- a/Documentation/driver-api/vfio.rst
> > +++ b/Documentation/driver-api/vfio.rst
> > @@ -239,6 +239,81 @@ group and can access them as follows::
> > /* Gratuitous device reset and go... */
> > ioctl(device, VFIO_DEVICE_RESET);
> >
> > +IOMMU Dual Stage Control
> > +------------------------
> > +
> > +Some IOMMUs support 2 stages/levels of translation. Stage corresponds
> > +to the ARM terminology while level corresponds to Intel's terminology.
> > +In the following text we use either without distinction.
> > +
> > +This is useful when the guest is exposed with a virtual IOMMU and some
> > +devices are assigned to the guest through VFIO. Then the guest OS can
> > +use stage-1 (GIOVA -> GPA or GVA->GPA), while the hypervisor uses stage
> > +2 for VM isolation (GPA -> HPA).
> > +
> > +Under dual stage translation, the guest gets ownership of the stage-1 page
> > +tables and also owns stage-1 configuration structures. The hypervisor owns
> > +the root configuration structure (for security reason), including stage-2
> > +configuration.
> This is only true for vtd. On ARM the stage1 cfg is the Context
> Descriptor table (aka PASID table). The root cfg only stores the GPA of
> the CD table.

I'd like to check with you on the meaning of "configuration structures".
For Vt-d, does it mean the root table/context table/pasid table? If
I'm correct, then how about the below description?

"Under dual stage translation, the guest gets ownership of the stage-1
configuration structures or page tables. This depends on vendor. The
hypervisor owns the root configuration structure (for security reason),
including stage-2 configuration."

> > +This works as long as configuration structures and page table
> > +formats are compatible between the virtual IOMMU and the physical IOMMU.
> > +
> > +Assuming the HW supports it, this nested mode is selected by choosing the
> > +VFIO_TYPE1_NESTING_IOMMU type through:
> > +
> > + ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
> > +
> > +This forces the hypervisor to use the stage-2, leaving stage-1 available
> > +for guest usage. The stage-1 format and binding method are vendor specific
> . These are reported in the nesting capability ...

got it.

"The stage-1 format and binding method are reported in nesting capability."

> > +and reported in nesting cap (VFIO_IOMMU_TYPE1_INFO_CAP_NESTING) through
> > +VFIO_IOMMU_GET_INFO:
> > +
> > + ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
> > +
> > +The nesting cap info is available only after NESTING_IOMMU is selected.
> > +If underlying IOMMU doesn't support nesting, VFIO_SET_IOMMU fails and
> If the underlying

got it.

> > +userspace should try other IOMMU types. Details of the nesting cap info
> > +can be found in Documentation/userspace-api/iommu.rst.
> > +
> > +The stage-1 page table can be bound to the IOMMU in two methods: directly
> > +or indirectly. Direct binding requires userspace to notify VFIO of every
> Not sure we shall use this direct/indirect terminology. I don't think
> this is part of either ARM or Intel SPEC.
>
> Suggestion: On Intel, the stage1 page table info are mediated by the
> userspace for each PASID. On ARM, the userspace directly passes the GPA
> of the whole PASID table. Currently only Intel's binding is supported.

got it. this is what we want to say with the direct/indirect terminology.

Regards,
Yi Liu

> > +guest stage-1 page table binding, while indirect binding allows userspace
> > +to bind once with an intermediate structure (e.g. PASID table) which
> > +indirectly links to guest stage-1 page tables. The actual binding method
> > +depends on IOMMU vendor. Currently only the direct binding capability (
> > +IOMMU_NESTING_FEAT_BIND_PGTBL) is supported:
> > +
> > + nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
> > + memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
> > + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> > +
> > +When multiple stage-1 page tables are supported on a device, each page
> > +table is associated with a PASID (Process Address Space ID) to differentiate
> > +with each other. In such case, userspace should include PASID in the
> > +bind_data when issuing direct binding request.
> > +
> > +PASID could be managed per-device or system-wide which, again, depends on
> > +IOMMU vendor and is reported in nesting cap info. When system-wide policy
> > +is reported (IOMMU_NESTING_FEAT_SYSWIDE_PASID), e.g. as by Intel platforms,
> > +userspace *must* allocate PASID from VFIO before attempting binding of
> > +stage-1 page table:
> > +
> > + req.flags = VFIO_IOMMU_ALLOC_PASID;
> > + ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
> > +
> > +Once the stage-1 page table is bound to the IOMMU, the guest is allowed to
> > +fully manage its mapping at its disposal. The IOMMU walks nested stage-1
> > +and stage-2 page tables when serving DMA requests from assigned device, and
> > +may cache the stage-1 mapping in the IOTLB. When required (IOMMU_NESTING_
> > +FEAT_CACHE_INVLD), userspace *must* forward guest stage-1 invalidation to
> > +the host, so the IOTLB is invalidated:
> > +
> > + nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
> > + memcpy(&nesting_op->data, &cache_inv_data, sizeof(cache_inv_data));
> > + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> > +
> > +Forwarded invalidations can happen at various granularity levels (page
> > +level, context level, etc.)
> > +
> > VFIO User API
> > -------------------------------------------------------------------------------
> >
> >
> Thanks
>
> Eric

2020-08-17 07:42:43

by Eric Auger

Subject: Re: [PATCH v6 14/15] vfio: Document dual stage control

Hi Yi,

On 8/17/20 9:00 AM, Liu, Yi L wrote:
> Hi Eric,
>
>> From: Auger Eric <[email protected]>
>> Sent: Sunday, August 16, 2020 7:52 PM
>>
>> Hi Yi,
>>
>> On 7/28/20 8:27 AM, Liu Yi L wrote:
>>> From: Eric Auger <[email protected]>
>>>
>>> The VFIO API was enhanced to support nested stage control: a bunch of
>>> new ioctls and a usage guideline.
>>>
>>> Let's document the process to follow to set up nested mode.
>>>
>>> Cc: Kevin Tian <[email protected]>
>>> CC: Jacob Pan <[email protected]>
>>> Cc: Alex Williamson <[email protected]>
>>> Cc: Eric Auger <[email protected]>
>>> Cc: Jean-Philippe Brucker <[email protected]>
>>> Cc: Joerg Roedel <[email protected]>
>>> Cc: Lu Baolu <[email protected]>
>>> Reviewed-by: Stefan Hajnoczi <[email protected]>
>>> Signed-off-by: Eric Auger <[email protected]>
>>> Signed-off-by: Liu Yi L <[email protected]>
>>> ---
>>> v5 -> v6:
>>> *) tweak per Eric's comments.
>>>
>>> v3 -> v4:
>>> *) add review-by from Stefan Hajnoczi
>>>
>>> v2 -> v3:
>>> *) address comments from Stefan Hajnoczi
>>>
>>> v1 -> v2:
>>> *) new in v2, compared with Eric's original version, pasid table bind
>>> and fault reporting is removed as this series doesn't cover them.
>>> Original version from Eric.
>>> https://lkml.org/lkml/2020/3/20/700
>>> ---
>>> Documentation/driver-api/vfio.rst | 75 +++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 75 insertions(+)
>>>
>>> diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
>>> index f1a4d3c..c0d43f0 100644
>>> --- a/Documentation/driver-api/vfio.rst
>>> +++ b/Documentation/driver-api/vfio.rst
>>> @@ -239,6 +239,81 @@ group and can access them as follows::
>>> /* Gratuitous device reset and go... */
>>> ioctl(device, VFIO_DEVICE_RESET);
>>>
>>> +IOMMU Dual Stage Control
>>> +------------------------
>>> +
>>> +Some IOMMUs support 2 stages/levels of translation. Stage corresponds
>>> +to the ARM terminology while level corresponds to Intel's terminology.
>>> +In the following text we use either without distinction.
>>> +
>>> +This is useful when the guest is exposed with a virtual IOMMU and some
>>> +devices are assigned to the guest through VFIO. Then the guest OS can
>>> +use stage-1 (GIOVA -> GPA or GVA->GPA), while the hypervisor uses stage
>>> +2 for VM isolation (GPA -> HPA).
>>> +
>>> +Under dual stage translation, the guest gets ownership of the stage-1 page
>>> +tables and also owns stage-1 configuration structures. The hypervisor owns
>>> +the root configuration structure (for security reason), including stage-2
>>> +configuration.
>> This is only true for vtd. On ARM the stage1 cfg is the Context
>> Descriptor table (aka PASID table). The root cfg only stores the GPA of
>> the CD table.
>
> I'd like to check with you on the meaning of "configuration structures".
> For Vt-d, does it mean the root table/context table/pasid table? If
> I'm correct, then how about the below description?
Yes I agree
>
> "Under dual stage translation, the guest gets ownership of the stage-1
> configuration structures or page tables.
Actually on ARM the guest both owns the S1 configuration (CD table) and
S1 page tables ;-)

on Intel I understand the guest only owns the S1 page tables.

If confirmed, you may use that kind of explicit statement.

Thanks

Eric

> This depends on vendor. The
> hypervisor owns the root configuration structure (for security reason),
> including stage-2 configuration."
>
>>> +This works as long as configuration structures and page table
>>> +formats are compatible between the virtual IOMMU and the physical IOMMU.
>>> +
>>> +Assuming the HW supports it, this nested mode is selected by choosing the
>>> +VFIO_TYPE1_NESTING_IOMMU type through:
>>> +
>>> + ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
>>> +
>>> +This forces the hypervisor to use the stage-2, leaving stage-1 available
>>> +for guest usage. The stage-1 format and binding method are vendor specific
>> . These are reported in the nesting capability ...
>
> got it.
>
> "The stage-1 format and binding method are reported in nesting capability."
>
>>> +and reported in nesting cap (VFIO_IOMMU_TYPE1_INFO_CAP_NESTING) through
>>> +VFIO_IOMMU_GET_INFO:
>>> +
>>> + ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
>>> +
>>> +The nesting cap info is available only after NESTING_IOMMU is selected.
>>> +If underlying IOMMU doesn't support nesting, VFIO_SET_IOMMU fails and
>> If the underlying
>
> got it.
>
>>> +userspace should try other IOMMU types. Details of the nesting cap info
>>> +can be found in Documentation/userspace-api/iommu.rst.
>>> +
>>> +The stage-1 page table can be bound to the IOMMU in two methods: directly
>>> +or indirectly. Direct binding requires userspace to notify VFIO of every
>> Not sure we shall use this direct/indirect terminology. I don't think
>> this is part of either ARM or Intel SPEC.
>>
>> Suggestion: On Intel, the stage1 page table info are mediated by the
>> userspace for each PASID. On ARM, the userspace directly passes the GPA
>> of the whole PASID table. Currently only Intel's binding is supported.
>
> got it. this is what we want to say with the direct/indirect terminology.
>
> Regards,
> Yi Liu
>
>>> +guest stage-1 page table binding, while indirect binding allows userspace
>>> +to bind once with an intermediate structure (e.g. PASID table) which
>>> +indirectly links to guest stage-1 page tables. The actual binding method
>>> +depends on IOMMU vendor. Currently only the direct binding capability (
>>> +IOMMU_NESTING_FEAT_BIND_PGTBL) is supported:
>>> +
>>> + nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
>>> + memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
>>> + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
>>> +
>>> +When multiple stage-1 page tables are supported on a device, each page
>>> +table is associated with a PASID (Process Address Space ID) to differentiate
>>> +with each other. In such case, userspace should include PASID in the
>>> +bind_data when issuing direct binding request.
>>> +
>>> +PASID could be managed per-device or system-wide which, again, depends on
>>> +IOMMU vendor and is reported in nesting cap info. When system-wide policy
>>> +is reported (IOMMU_NESTING_FEAT_SYSWIDE_PASID), e.g. as by Intel platforms,
>>> +userspace *must* allocate PASID from VFIO before attempting binding of
>>> +stage-1 page table:
>>> +
>>> + req.flags = VFIO_IOMMU_ALLOC_PASID;
>>> + ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
>>> +
>>> +Once the stage-1 page table is bound to the IOMMU, the guest is allowed to
>>> +fully manage its mapping at its disposal. The IOMMU walks nested stage-1
>>> +and stage-2 page tables when serving DMA requests from assigned device, and
>>> +may cache the stage-1 mapping in the IOTLB. When required (IOMMU_NESTING_
>>> +FEAT_CACHE_INVLD), userspace *must* forward guest stage-1 invalidation to
>>> +the host, so the IOTLB is invalidated:
>>> +
>>> + nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
>>> + memcpy(&nesting_op->data, &cache_inv_data, sizeof(cache_inv_data));
>>> + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
>>> +
>>> +Forwarded invalidations can happen at various granularity levels (page
>>> +level, context level, etc.)
>>> +
>>> VFIO User API
>>> -------------------------------------------------------------------------------
>>>
>>>
>> Thanks
>>
>> Eric
>

2020-08-17 07:45:51

by Yi Liu

Subject: RE: [PATCH v6 14/15] vfio: Document dual stage control

Hi Eric,

> From: Eric Auger <[email protected]>
> Sent: Monday, August 17, 2020 3:41 PM
>
> Hi Yi,
>
> On 8/17/20 9:00 AM, Liu, Yi L wrote:
> > Hi Eric,
> >
> >> From: Auger Eric <[email protected]>
> >> Sent: Sunday, August 16, 2020 7:52 PM
> >>
> >> Hi Yi,
> >>
> >> On 7/28/20 8:27 AM, Liu Yi L wrote:
> >>> From: Eric Auger <[email protected]>
> >>>
> >>> The VFIO API was enhanced to support nested stage control: a bunch of
> >>> new ioctls and a usage guideline.
> >>>
> >>> Let's document the process to follow to set up nested mode.
> >>>
> >>> Cc: Kevin Tian <[email protected]>
> >>> CC: Jacob Pan <[email protected]>
> >>> Cc: Alex Williamson <[email protected]>
> >>> Cc: Eric Auger <[email protected]>
> >>> Cc: Jean-Philippe Brucker <[email protected]>
> >>> Cc: Joerg Roedel <[email protected]>
> >>> Cc: Lu Baolu <[email protected]>
> >>> Reviewed-by: Stefan Hajnoczi <[email protected]>
> >>> Signed-off-by: Eric Auger <[email protected]>
> >>> Signed-off-by: Liu Yi L <[email protected]>
> >>> ---
> >>> v5 -> v6:
> >>> *) tweak per Eric's comments.
> >>>
> >>> v3 -> v4:
> >>> *) add review-by from Stefan Hajnoczi
> >>>
> >>> v2 -> v3:
> >>> *) address comments from Stefan Hajnoczi
> >>>
> >>> v1 -> v2:
> >>> *) new in v2, compared with Eric's original version, pasid table bind
> >>> and fault reporting is removed as this series doesn't cover them.
> >>> Original version from Eric.
> >>> https://lkml.org/lkml/2020/3/20/700
> >>> ---
> >>> Documentation/driver-api/vfio.rst | 75 +++++++++++++++++++++++++++++++++++++++
> >>> 1 file changed, 75 insertions(+)
> >>>
> >>> diff --git a/Documentation/driver-api/vfio.rst
> >>> b/Documentation/driver-api/vfio.rst
> >>> index f1a4d3c..c0d43f0 100644
> >>> --- a/Documentation/driver-api/vfio.rst
> >>> +++ b/Documentation/driver-api/vfio.rst
> >>> @@ -239,6 +239,81 @@ group and can access them as follows::
> >>> /* Gratuitous device reset and go... */
> >>> ioctl(device, VFIO_DEVICE_RESET);
> >>>
> >>> +IOMMU Dual Stage Control
> >>> +------------------------
> >>> +
> >>> +Some IOMMUs support 2 stages/levels of translation. Stage
> >>> +corresponds to the ARM terminology while level corresponds to Intel's
> >>> +terminology.
> >>> +In the following text we use either without distinction.
> >>> +
> >>> +This is useful when the guest is exposed with a virtual IOMMU and
> >>> +some devices are assigned to the guest through VFIO. Then the guest
> >>> +OS can use stage-1 (GIOVA -> GPA or GVA->GPA), while the hypervisor
> >>> +uses stage
> >>> +2 for VM isolation (GPA -> HPA).
> >>> +
> >>> +Under dual stage translation, the guest gets ownership of the
> >>> +stage-1 page tables and also owns stage-1 configuration structures.
> >>> +The hypervisor owns the root configuration structure (for security
> >>> +reason), including stage-2 configuration.
> >> This is only true for vtd. On ARM the stage1 cfg is the Context
> >> Descriptor table (aka PASID table). The root cfg only stores the GPA
> >> of the CD table.
> >
> > I'd like to check with you on the meaning of "configuration structures".
> > For Vt-d, does it mean the root table/context table/pasid table? If
> > I'm correct, then how about the below description?
> Yes I agree

thanks.

> >
> > "Under dual stage translation, the guest gets ownership of the stage-1
> > configuration structures or page tables.
> Actually on ARM the guest both owns the S1 configuration (CD table) and
> S1 page tables ;-)

I see. So on the ARM platform, the guest owns both the configuration and
the page tables.

> on Intel I understand the guest only owns the S1 page tables.

yes, on Intel, guest only owns the S1 page tables.

> If confirmed, you may use that kind of explicit statement.

will do.

Regards,
Yi Liu

> Thanks
>
> Eric
>
> > This depends on vendor. The
> > hypervisor owns the root configuration structure (for security
> > reason), including stage-2 configuration."
> >
> >>> +This works as long as configuration structures and page table
> >>> +formats are compatible between the virtual IOMMU and the physical IOMMU.
> >>> +
> >>> +Assuming the HW supports it, this nested mode is selected by
> >>> +choosing the VFIO_TYPE1_NESTING_IOMMU type through:
> >>> +
> >>> + ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
> >>> +
> >>> +This forces the hypervisor to use the stage-2, leaving stage-1
> >>> +available for guest usage. The stage-1 format and binding method
> >>> +are vendor specific
> >> . These are reported in the nesting capability ...
> >
> > got it.
> >
> > "The stage-1 format and binding method are reported in nesting capability."
> >
> >>> +and reported in nesting cap (VFIO_IOMMU_TYPE1_INFO_CAP_NESTING)
> >>> +through
> >>> +VFIO_IOMMU_GET_INFO:
> >>> +
> >>> + ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
> >>> +
> >>> +The nesting cap info is available only after NESTING_IOMMU is selected.
> >>> +If underlying IOMMU doesn't support nesting, VFIO_SET_IOMMU fails
> >>> +and
> >> If the underlying
> >
> > got it.
> >
> >>> +userspace should try other IOMMU types. Details of the nesting cap
> >>> +info can be found in Documentation/userspace-api/iommu.rst.
> >>> +
> >>> +The stage-1 page table can be bound to the IOMMU in two methods: directly
> >>> +or indirectly. Direct binding requires userspace to notify VFIO of every
> >> Not sure we shall use this direct/indirect terminology. I don't think
> >> this is part of either ARM or Intel SPEC.
> >>
> >> Suggestion: On Intel, the stage1 page table info are mediated by the
> >> userspace for each PASID. On ARM, the userspace directly passes the
> >> GPA of the whole PASID table. Currently only Intel's binding is supported.
> >
> > got it. this is what we want to say with the direct/indirect terminology.
> >
> > Regards,
> > Yi Liu
> >
> >>> +guest stage-1 page table binding, while indirect binding allows
> >>> +userspace to bind once with an intermediate structure (e.g. PASID
> >>> +table) which indirectly links to guest stage-1 page tables. The
> >>> +actual binding method depends on IOMMU vendor. Currently only the
> >>> +direct binding capability (
> >>> +IOMMU_NESTING_FEAT_BIND_PGTBL) is supported:
> >>> +
> >>> + nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
> >>> + memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
> >>> + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> >>> +
> >>> +When multiple stage-1 page tables are supported on a device, each
> >>> +page table is associated with a PASID (Process Address Space ID) to
> >>> +differentiate with each other. In such case, userspace should
> >>> +include PASID in the bind_data when issuing direct binding request.
> >>> +
> >>> +PASID could be managed per-device or system-wide which, again,
> >>> +depends on IOMMU vendor and is reported in nesting cap info. When
> >>> +system-wide policy is reported (IOMMU_NESTING_FEAT_SYSWIDE_PASID),
> >>> +e.g. as by Intel platforms, userspace *must* allocate PASID from
> >>> +VFIO before attempting binding of
> >>> +stage-1 page table:
> >>> +
> >>> + req.flags = VFIO_IOMMU_ALLOC_PASID;
> >>> + ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
> >>> +
> >>> +Once the stage-1 page table is bound to the IOMMU, the guest is
> >>> +allowed to fully manage its mapping at its disposal. The IOMMU
> >>> +walks nested stage-1 and stage-2 page tables when serving DMA
> >>> +requests from assigned device, and may cache the stage-1 mapping in
> >>> +the IOTLB. When required (IOMMU_NESTING_FEAT_CACHE_INVLD),
> >>> +userspace *must* forward guest stage-1 invalidation to the host, so
> >>> +the IOTLB is invalidated:
> >>> +
> >>> + nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
> >>> + memcpy(&nesting_op->data, &cache_inv_data, sizeof(cache_inv_data));
> >>> + ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> >>> +
> >>> +Forwarded invalidations can happen at various granularity levels
> >>> +(page level, context level, etc.)
> >>> +
> >>> VFIO User API
> >>>
> >>> --------------------------------------------------------------------
> >>> -----------
> >>>
> >>>
> >> Thanks
> >>
> >> Eric
> >