2023-12-08 16:47:58

by Ankit Agrawal

Subject: [PATCH v3 0/2] kvm: arm64: allow vm to select DEVICE_* and

From: Ankit Agrawal <[email protected]>

Currently, KVM for ARM64 maps memory that is considered device memory
(i.e. it is not RAM) at stage 2 with DEVICE_nGnRE memory attributes; this
setting overrides (as per the ARM architecture [1]) any device MMIO
mapping present at stage 1, resulting in a set-up whereby a guest
operating system cannot choose the memory attributes of its device MMIO
mappings on its own: they are always constrained by the KVM stage-2
default.

This set-up does not allow guest operating systems to select device
memory attributes independently of the KVM stage-2 mappings (refer to
[1], "Combining stage 1 and stage 2 memory type attributes"), which
turns out to be an issue in that guest operating systems (e.g. Linux)
may want to map device MMIO regions with memory attributes, such as
Normal-NC, that guarantee better performance (e.g. the gathering
attribute, which for some devices can generate larger PCIe memory write
TLPs) and allow specific operations (e.g. unaligned transactions).

The default device stage-2 mapping was chosen in KVM for ARM64 because
it was considered safer (i.e. it would not allow guests to trigger
uncontained failures ultimately crashing the machine), but this turned
out not to hold: faults on such mappings can still be asynchronous
(SError), defeating the purpose.

Failure containment is a property of the platform and is independent of
the memory type used for device MMIO mappings.

In fact, the DEVICE_nGnRE memory type is even more problematic than
Normal-NC in terms of fault containment: aborts triggered on
DEVICE_nGnRE loads cannot, architecturally, be made synchronous (that
would imply the processor issuing at most one load transaction at a
time - it could not pipeline them - otherwise the synchronous abort
semantics would break the no-speculation attribute attached to
DEVICE_XXX memory).

This means that, regardless of the combined stage-1+stage-2 mappings, a
platform is safe if and only if device transactions cannot trigger
uncontained failures; that in turn relies on platform capabilities and
on the type of device being assigned (i.e. PCIe AER/DPC error
containment and the RAS architecture [3]). Therefore the default KVM
stage-2 device memory attributes play no role in making device
assignment safer for a given platform (provided the platform design
adheres to the guidelines outlined in [3]) and can be relaxed.

For all these reasons, relax the KVM stage 2 device memory attributes
from DEVICE_nGnRE to Normal-NC.

Normal-NC was chosen over other Normal memory type defaults at stage 2
(e.g. Normal Write-Through) to avoid cache allocation/snooping.

Relaxing the stage-2 KVM device MMIO mappings to Normal-NC is not
expected to trigger any issue on guest device reclaim use cases either
(i.e. device MMIO unmap followed by a device reset), at least for PCIe
devices: in PCIe, a device reset is architected and carried out through
PCI config space transactions, which are naturally ordered with respect
to MMIO transactions according to the PCI ordering rules.

Having Normal-NC as the stage-2 default puts guests in control (thanks
to the stage-1+stage-2 combined memory attribute rules [1]) of the
memory attributes of device MMIO mappings, according to the rules
described in [1] and summarized below (S1 = stage 1, S2 = stage 2):

 S1           | S2        | Result
 NORMAL-WB    | NORMAL-NC | NORMAL-NC
 NORMAL-WT    | NORMAL-NC | NORMAL-NC
 NORMAL-NC    | NORMAL-NC | NORMAL-NC
 DEVICE<attr> | NORMAL-NC | DEVICE<attr>

It is worth noting that, currently, in order to map device MMIO space to
user space in a device pass-through use case, the VFIO framework applies
memory attributes derived from pgprot_noncached() settings applied to
VMAs, which result in Device-nGnRnE memory attributes for the stage-1
VMM mappings.

This means that a userspace mapping for device MMIO space carried out
with the current VFIO framework and a guest OS mapping for the same MMIO
space may result in a mismatched alias as described in [2].

Defaulting KVM device stage-2 mappings to Normal-NC attributes does not
change anything in this respect, in that the mismatched aliases would
only affect (refer to [2] for a detailed explanation) the ordering
between the streams of transactions resulting from the userspace and
guest OS mappings (i.e. neither stream loses any property on its own),
which is harmless given that userspace and the guest OS access the
device through independent transaction streams.

Generalizing this relaxation to other devices may be problematic: e.g.
the GICv2 VCPU interface, which is effectively a shared peripheral, can
allow one guest to affect another guest's interrupt distribution. Hence,
out of caution, limit the change to VFIO PCI. This is achieved by having
the VFIO PCI core module set a flag that KVM tests to activate the new
behaviour (a minimal outline is sketched below); this could be extended
to other devices in the future once that is deemed safe.
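
For illustration only, an outline of the mechanism (simplified excerpts
of the patches below; locking, error handling and unrelated flags
omitted):

  /* VFIO PCI core, at mmap() time: mark the VMA as WC-capable */
  vm_flags_set(vma, VM_VFIO_ALLOW_WC | VM_IO | VM_PFNMAP |
                    VM_DONTEXPAND | VM_DONTDUMP);

  /* KVM user_mem_abort(): honour the hint when mapping device memory at S2 */
  if (device) {
          if (vma->vm_flags & VM_VFIO_ALLOW_WC)
                  prot |= KVM_PGTABLE_PROT_NORMAL_NC;
          else
                  prot |= KVM_PGTABLE_PROT_DEVICE;
  }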

[1] section D8.5 - DDI0487J_a_a-profile_architecture_reference_manual.pdf
[2] section B2.8 - DDI0487J_a_a-profile_architecture_reference_manual.pdf
[3] sections 1.7.7.3/1.8.5.2/appendix C - DEN0029H_SBSA_7.1.pdf

Applied over next-20231201

History
=======
v2 -> v3
- Added a new patch (and converted to patch series) suggested by
Catalin Marinas to ensure the code changes are restricted to
VFIO PCI devices.
- Introduced the VM_VFIO_ALLOW_WC flag, set by VFIO PCI, to
communicate with KVM.
- Reverted GIC mapping to DEVICE.

v1 -> v2
- Updated commit log to the one posted by
Lorenzo Pieralisi <[email protected]> (Thanks!)
- Added new flag to represent the NORMAL_NC setting. Updated
stage2_set_prot_attr() to handle new flag.

v2 Link:
https://lore.kernel.org/all/[email protected]/

Signed-off-by: Ankit Agrawal <[email protected]>
Suggested-by: Jason Gunthorpe <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Tested-by: Ankit Agrawal <[email protected]>

Ankit Agrawal (2):
kvm: arm64: introduce new flag for non-cacheable IO memory
kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

arch/arm64/include/asm/kvm_pgtable.h | 2 ++
arch/arm64/include/asm/memory.h | 2 ++
arch/arm64/kvm/hyp/pgtable.c | 14 ++++++++++++--
arch/arm64/kvm/mmu.c | 16 +++++++++++++---
drivers/vfio/pci/vfio_pci_core.c | 3 ++-
include/linux/mm.h | 7 +++++++
6 files changed, 38 insertions(+), 6 deletions(-)

--
2.17.1


2023-12-08 16:48:01

by Ankit Agrawal

Subject: [PATCH v3 1/2] kvm: arm64: introduce new flag for non-cacheable IO memory

From: Ankit Agrawal <[email protected]>

For various reasons described in the cover letter, and primarily to
allow the VM to get IO memory with Normal-NC properties, it is desirable
to relax the KVM stage-2 device memory attributes from DEVICE_nGnRE
to Normal-NC. So set the stage-2 PTE for IO memory to NORMAL_NC.

A Normal-NC flag is not present today. So add a new kvm_pgtable_prot
(KVM_PGTABLE_PROT_NORMAL_NC) flag for it, along with its
corresponding PTE value 0x5 (0b101) determined from [1].

Lastly, adapt the stage2 PTE property setter function
(stage2_set_prot_attr) to handle the NormalNC attribute.

[1] section D8.5.5 of DDI0487J_a_a-profile_architecture_reference_manual.pdf

Signed-off-by: Ankit Agrawal <[email protected]>
Suggested-by: Jason Gunthorpe <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Tested-by: Ankit Agrawal <[email protected]>
---
arch/arm64/include/asm/kvm_pgtable.h | 2 ++
arch/arm64/include/asm/memory.h | 2 ++
arch/arm64/kvm/hyp/pgtable.c | 11 +++++++++--
3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index cfdf40f734b1..19278dfe7978 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -197,6 +197,7 @@ enum kvm_pgtable_stage2_flags {
* @KVM_PGTABLE_PROT_W: Write permission.
* @KVM_PGTABLE_PROT_R: Read permission.
* @KVM_PGTABLE_PROT_DEVICE: Device attributes.
+ * @KVM_PGTABLE_PROT_NORMAL_NC: Normal noncacheable attributes.
* @KVM_PGTABLE_PROT_SW0: Software bit 0.
* @KVM_PGTABLE_PROT_SW1: Software bit 1.
* @KVM_PGTABLE_PROT_SW2: Software bit 2.
@@ -208,6 +209,7 @@ enum kvm_pgtable_prot {
KVM_PGTABLE_PROT_R = BIT(2),

KVM_PGTABLE_PROT_DEVICE = BIT(3),
+ KVM_PGTABLE_PROT_NORMAL_NC = BIT(4),

KVM_PGTABLE_PROT_SW0 = BIT(55),
KVM_PGTABLE_PROT_SW1 = BIT(56),
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index fde4186cc387..c247e5f29d5a 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -147,6 +147,7 @@
* Memory types for Stage-2 translation
*/
#define MT_S2_NORMAL 0xf
+#define MT_S2_NORMAL_NC 0x5
#define MT_S2_DEVICE_nGnRE 0x1

/*
@@ -154,6 +155,7 @@
* Stage-2 enforces Normal-WB and Device-nGnRE
*/
#define MT_S2_FWB_NORMAL 6
+#define MT_S2_FWB_NORMAL_NC 5
#define MT_S2_FWB_DEVICE_nGnRE 1

#ifdef CONFIG_ARM64_4K_PAGES
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index c651df904fe3..d4835d553c61 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -718,10 +718,17 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
kvm_pte_t *ptep)
{
bool device = prot & KVM_PGTABLE_PROT_DEVICE;
- kvm_pte_t attr = device ? KVM_S2_MEMATTR(pgt, DEVICE_nGnRE) :
- KVM_S2_MEMATTR(pgt, NORMAL);
+ bool normal_nc = prot & KVM_PGTABLE_PROT_NORMAL_NC;
+ kvm_pte_t attr;
u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;

+ if (device)
+ attr = KVM_S2_MEMATTR(pgt, DEVICE_nGnRE);
+ else if (normal_nc)
+ attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
+ else
+ attr = KVM_S2_MEMATTR(pgt, NORMAL);
+
if (!(prot & KVM_PGTABLE_PROT_X))
attr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
else if (device)
--
2.17.1

2023-12-08 16:48:51

by Ankit Agrawal

Subject: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

From: Ankit Agrawal <[email protected]>

To provide the VM with the ability to get device IO memory with the
Normal-NC property, map device MMIO at stage 2 in KVM for ARM64 as
Normal-NC. Having Normal-NC as the stage-2 default puts guests in
control (based on [1], "Combining stage 1 and stage 2 memory type
attributes") of the memory attributes of device MMIO mappings. The
rules are summarized below (S1 = stage 1, S2 = stage 2):

 S1           | S2        | Result
 NORMAL-WB    | NORMAL-NC | NORMAL-NC
 NORMAL-WT    | NORMAL-NC | NORMAL-NC
 NORMAL-NC    | NORMAL-NC | NORMAL-NC
 DEVICE<attr> | NORMAL-NC | DEVICE<attr>

Generalizing this to non-PCI devices may be problematic: e.g. the GICv2
vCPU interface, which is effectively a shared peripheral, can allow one
guest to affect another guest's interrupt distribution. The issue could
be mitigated by limiting the relaxation to mappings that have a user
VMA, but there is still insufficient information about, and uncertainty
in, the behaviour of non-PCI devices. Hence be cautious and restrict the
change to VFIO PCI devices. PCIe, on the other hand, is safe because the
PCI bridge does not generate errors and thus does not cause uncontained
failures.

Limiting the change to the VFIO PCI module is done with the help of a
new mm flag, VM_VFIO_ALLOW_WC. The VFIO PCI core module sets this flag
to communicate with KVM, and KVM uses it to activate the new behaviour.

This could be extended to other devices in the future once that
is deemed safe.

[1] section D8.5.5 of DDI0487J_a_a-profile_architecture_reference_manual.pdf

Signed-off-by: Ankit Agrawal <[email protected]>
Suggested-by: Catalin Marinas <[email protected]>
Acked-by: Jason Gunthorpe <[email protected]>
Tested-by: Ankit Agrawal <[email protected]>
---
arch/arm64/kvm/hyp/pgtable.c | 3 +++
arch/arm64/kvm/mmu.c | 16 +++++++++++++---
drivers/vfio/pci/vfio_pci_core.c | 3 ++-
include/linux/mm.h | 7 +++++++
4 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index d4835d553c61..c8696c9e7a60 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -722,6 +722,9 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
kvm_pte_t attr;
u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;

+ if (device && normal_nc)
+ return -EINVAL;
+
if (device)
attr = KVM_S2_MEMATTR(pgt, DEVICE_nGnRE);
else if (normal_nc)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d14504821b79..1ce1b6d89bf9 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1381,7 +1381,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
int ret = 0;
bool write_fault, writable, force_pte = false;
bool exec_fault, mte_allowed;
- bool device = false;
+ bool device = false, vfio_pci_device = false;
unsigned long mmu_seq;
struct kvm *kvm = vcpu->kvm;
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
@@ -1472,6 +1472,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
gfn = fault_ipa >> PAGE_SHIFT;
mte_allowed = kvm_vma_mte_allowed(vma);

+ vfio_pci_device = !!(vma->vm_flags & VM_VFIO_ALLOW_WC);
+
/* Don't use the VMA after the unlock -- it may have vanished */
vma = NULL;

@@ -1557,8 +1559,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault)
prot |= KVM_PGTABLE_PROT_X;

- if (device)
- prot |= KVM_PGTABLE_PROT_DEVICE;
+ if (device) {
+ /*
+ * To provide VM with the ability to get device IO memory
+ * with NormalNC property, map device MMIO as NormalNC in S2.
+ */
+ if (vfio_pci_device)
+ prot |= KVM_PGTABLE_PROT_NORMAL_NC;
+ else
+ prot |= KVM_PGTABLE_PROT_DEVICE;
+ }
else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
prot |= KVM_PGTABLE_PROT_X;

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 1cbc990d42e0..c3f95ec7fc3a 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1863,7 +1863,8 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
* See remap_pfn_range(), called from vfio_pci_fault() but we can't
* change vm_flags within the fault handler. Set them now.
*/
- vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
+ vm_flags_set(vma, VM_VFIO_ALLOW_WC | VM_IO | VM_PFNMAP |
+ VM_DONTEXPAND | VM_DONTDUMP);
vma->vm_ops = &vfio_pci_mmap_ops;

return 0;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a422cc123a2d..8d3c4820c492 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -391,6 +391,13 @@ extern unsigned int kobjsize(const void *objp);
# define VM_UFFD_MINOR VM_NONE
#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */

+#ifdef CONFIG_64BIT
+#define VM_VFIO_ALLOW_WC_BIT 39 /* Convey KVM to map S2 NORMAL_NC */
+#define VM_VFIO_ALLOW_WC BIT(VM_VFIO_ALLOW_WC_BIT)
+#else
+#define VM_VFIO_ALLOW_WC VM_NONE
+#endif
+
/* Bits set in the VMA until the stack is in its final location */
#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)

--
2.17.1

2023-12-12 12:18:01

by Will Deacon

Subject: Re: [PATCH v3 1/2] kvm: arm64: introduce new flag for non-cacheable IO memory

On Fri, Dec 08, 2023 at 10:17:08PM +0530, [email protected] wrote:
> From: Ankit Agrawal <[email protected]>
>
> For various reasons described in the cover letter, and primarily to
> allow VM get IO memory with NORMALNC properties, it is desired
> to relax the KVM stage 2 device memory attributes from DEVICE_nGnRE
> to NormalNC. So set S2 PTE for IO memory as NORMAL_NC.
>
> A Normal-NC flag is not present today. So add a new kvm_pgtable_prot
> (KVM_PGTABLE_PROT_NORMAL_NC) flag for it, along with its
> corresponding PTE value 0x5 (0b101) determined from [1].
>
> Lastly, adapt the stage2 PTE property setter function
> (stage2_set_prot_attr) to handle the NormalNC attribute.
>
> [1] section D8.5.5 of DDI0487J_a_a-profile_architecture_reference_manual.pdf
>
> Signed-off-by: Ankit Agrawal <[email protected]>
> Suggested-by: Jason Gunthorpe <[email protected]>
> Acked-by: Catalin Marinas <[email protected]>
> Tested-by: Ankit Agrawal <[email protected]>
> ---
> arch/arm64/include/asm/kvm_pgtable.h | 2 ++
> arch/arm64/include/asm/memory.h | 2 ++
> arch/arm64/kvm/hyp/pgtable.c | 11 +++++++++--
> 3 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index cfdf40f734b1..19278dfe7978 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -197,6 +197,7 @@ enum kvm_pgtable_stage2_flags {
> * @KVM_PGTABLE_PROT_W: Write permission.
> * @KVM_PGTABLE_PROT_R: Read permission.
> * @KVM_PGTABLE_PROT_DEVICE: Device attributes.
> + * @KVM_PGTABLE_PROT_NORMAL_NC: Normal noncacheable attributes.
> * @KVM_PGTABLE_PROT_SW0: Software bit 0.
> * @KVM_PGTABLE_PROT_SW1: Software bit 1.
> * @KVM_PGTABLE_PROT_SW2: Software bit 2.
> @@ -208,6 +209,7 @@ enum kvm_pgtable_prot {
> KVM_PGTABLE_PROT_R = BIT(2),
>
> KVM_PGTABLE_PROT_DEVICE = BIT(3),
> + KVM_PGTABLE_PROT_NORMAL_NC = BIT(4),
>
> KVM_PGTABLE_PROT_SW0 = BIT(55),
> KVM_PGTABLE_PROT_SW1 = BIT(56),
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index fde4186cc387..c247e5f29d5a 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -147,6 +147,7 @@
> * Memory types for Stage-2 translation
> */
> #define MT_S2_NORMAL 0xf
> +#define MT_S2_NORMAL_NC 0x5
> #define MT_S2_DEVICE_nGnRE 0x1
>
> /*
> @@ -154,6 +155,7 @@
> * Stage-2 enforces Normal-WB and Device-nGnRE
> */
> #define MT_S2_FWB_NORMAL 6
> +#define MT_S2_FWB_NORMAL_NC 5
> #define MT_S2_FWB_DEVICE_nGnRE 1
>
> #ifdef CONFIG_ARM64_4K_PAGES
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index c651df904fe3..d4835d553c61 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -718,10 +718,17 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
> kvm_pte_t *ptep)
> {
> bool device = prot & KVM_PGTABLE_PROT_DEVICE;
> - kvm_pte_t attr = device ? KVM_S2_MEMATTR(pgt, DEVICE_nGnRE) :
> - KVM_S2_MEMATTR(pgt, NORMAL);
> + bool normal_nc = prot & KVM_PGTABLE_PROT_NORMAL_NC;
> + kvm_pte_t attr;
> u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;
>
> + if (device)
> + attr = KVM_S2_MEMATTR(pgt, DEVICE_nGnRE);
> + else if (normal_nc)
> + attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
> + else
> + attr = KVM_S2_MEMATTR(pgt, NORMAL);

I think it would be worth rejecting the case where both
KVM_PGTABLE_PROT_DEVICE and KVM_PGTABLE_PROT_NORMAL_NC are passed, since
that's clearly a bug in the caller and silently going with device is
arbitrary and confusing.

Will

2023-12-12 17:32:22

by Catalin Marinas

Subject: Re: [PATCH v3 1/2] kvm: arm64: introduce new flag for non-cacheable IO memory

On Fri, Dec 08, 2023 at 10:17:08PM +0530, [email protected] wrote:
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index c651df904fe3..d4835d553c61 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -718,10 +718,17 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
> kvm_pte_t *ptep)
> {
> bool device = prot & KVM_PGTABLE_PROT_DEVICE;
> - kvm_pte_t attr = device ? KVM_S2_MEMATTR(pgt, DEVICE_nGnRE) :
> - KVM_S2_MEMATTR(pgt, NORMAL);
> + bool normal_nc = prot & KVM_PGTABLE_PROT_NORMAL_NC;
> + kvm_pte_t attr;
> u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;
>
> + if (device)
> + attr = KVM_S2_MEMATTR(pgt, DEVICE_nGnRE);
> + else if (normal_nc)
> + attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
> + else
> + attr = KVM_S2_MEMATTR(pgt, NORMAL);

As Will said, maybe a WARN_ON_ONCE(device && normal_nc). It would fall
back to device which I think is fine.
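
For illustration, a sketch of what that could look like on top of this
patch (only a sketch, exact form up to you):

	WARN_ON_ONCE(device && normal_nc);	/* bug in the caller */

	if (device)
		attr = KVM_S2_MEMATTR(pgt, DEVICE_nGnRE);	/* device wins */
	else if (normal_nc)
		attr = KVM_S2_MEMATTR(pgt, NORMAL_NC);
	else
		attr = KVM_S2_MEMATTR(pgt, NORMAL);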

Reviewed-by: Catalin Marinas <[email protected]>

2023-12-12 17:47:02

by Catalin Marinas

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Fri, Dec 08, 2023 at 10:17:09PM +0530, [email protected] wrote:
> arch/arm64/kvm/hyp/pgtable.c | 3 +++
> arch/arm64/kvm/mmu.c | 16 +++++++++++++---
> drivers/vfio/pci/vfio_pci_core.c | 3 ++-
> include/linux/mm.h | 7 +++++++
> 4 files changed, 25 insertions(+), 4 deletions(-)

It might be worth factoring out the vfio bits into a separate patch
together with a bit of documentation around this new vma flag (up to
Alex really).

> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index d4835d553c61..c8696c9e7a60 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -722,6 +722,9 @@ static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot p
> kvm_pte_t attr;
> u32 sh = KVM_PTE_LEAF_ATTR_LO_S2_SH_IS;
>
> + if (device && normal_nc)
> + return -EINVAL;

Ah, the comment Will and I made on patch 1 is handled here. Add a
WARN_ON_ONCE() and please move this hunk to the first patch, it makes
more sense there.

> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index d14504821b79..1ce1b6d89bf9 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1381,7 +1381,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> int ret = 0;
> bool write_fault, writable, force_pte = false;
> bool exec_fault, mte_allowed;
> - bool device = false;
> + bool device = false, vfio_pci_device = false;

I don't think the variable here should be named vfio_pci_device, the
VM_* flag doesn't mention PCI. So just something like "vfio_allow_wc".

> unsigned long mmu_seq;
> struct kvm *kvm = vcpu->kvm;
> struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> @@ -1472,6 +1472,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> gfn = fault_ipa >> PAGE_SHIFT;
> mte_allowed = kvm_vma_mte_allowed(vma);
>
> + vfio_pci_device = !!(vma->vm_flags & VM_VFIO_ALLOW_WC);

Nitpick: no need for !!, you are assigning to a bool variable already.
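
Something like this, also picking up the rename suggested above (just a
sketch):

	vfio_allow_wc = vma->vm_flags & VM_VFIO_ALLOW_WC;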

> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index 1cbc990d42e0..c3f95ec7fc3a 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -1863,7 +1863,8 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
> * See remap_pfn_range(), called from vfio_pci_fault() but we can't
> * change vm_flags within the fault handler. Set them now.
> */
> - vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
> + vm_flags_set(vma, VM_VFIO_ALLOW_WC | VM_IO | VM_PFNMAP |
> + VM_DONTEXPAND | VM_DONTDUMP);

Please add a comment here that write-combining is allowed to be enabled
by the arch (KVM) code but the default user mmap() will still use
pgprot_noncached().
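
For example, something along these lines (wording is only a suggestion):

	/*
	 * VM_VFIO_ALLOW_WC lets the arch (KVM) code relax the memory
	 * type for this mapping; the default user mmap() itself still
	 * uses pgprot_noncached().
	 */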

> vma->vm_ops = &vfio_pci_mmap_ops;
>
> return 0;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a422cc123a2d..8d3c4820c492 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -391,6 +391,13 @@ extern unsigned int kobjsize(const void *objp);
> # define VM_UFFD_MINOR VM_NONE
> #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
>
> +#ifdef CONFIG_64BIT
> +#define VM_VFIO_ALLOW_WC_BIT 39 /* Convey KVM to map S2 NORMAL_NC */

This comment shouldn't be in the core header file. It knows nothing
about S2 and Normal-NC, that's arm64 terminology. You can mention
something like: VFIO can use this flag to hint that write-combining is
allowed.

> +#define VM_VFIO_ALLOW_WC BIT(VM_VFIO_ALLOW_WC_BIT)
> +#else
> +#define VM_VFIO_ALLOW_WC VM_NONE
> +#endif

And I think we need to add some documentation (is there any
VFIO-specific doc) that describes what this flag actually means, what is
permitted. For example, arm64 doesn't have write-combining without
speculative fetches. So if one adds this flag to a new driver, they
should know the implications. There's also an expectation that the
actual driver (KVM guests) or maybe later DPDK can choose the safe
non-cacheable or write-combine (Linux terminology) attributes for the
BAR.

--
Catalin

2023-12-12 18:12:15

by Jason Gunthorpe

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Tue, Dec 12, 2023 at 05:46:34PM +0000, Catalin Marinas wrote:
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index a422cc123a2d..8d3c4820c492 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -391,6 +391,13 @@ extern unsigned int kobjsize(const void *objp);
> > # define VM_UFFD_MINOR VM_NONE
> > #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
> >
> > +#ifdef CONFIG_64BIT
> > +#define VM_VFIO_ALLOW_WC_BIT 39 /* Convey KVM to map S2 NORMAL_NC */
>
> This comment shouldn't be in the core header file. It knows nothing
> about S2 and Normal-NC, that's arm64 terminology. You can mention
> something like VFIO can use this flag hint that write-combining is
> allowed.

Let's write a comment down here to address both remarks:

This flag is used to connect VFIO to arch specific KVM code. It
indicates that the memory under this VMA is safe for use with any
non-cachable memory type inside KVM. Some VFIO devices, on some
platforms, are thought to be unsafe and can cause machine crashes if
KVM does not lock down the memory type.
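
For illustration, this is how it could look in include/linux/mm.h,
reusing the definitions from this patch (just a sketch):

/*
 * This flag is used to connect VFIO to arch specific KVM code. It
 * indicates that the memory under this VMA is safe for use with any
 * non-cachable memory type inside KVM. Some VFIO devices, on some
 * platforms, are thought to be unsafe and can cause machine crashes if
 * KVM does not lock down the memory type.
 */
#ifdef CONFIG_64BIT
#define VM_VFIO_ALLOW_WC_BIT	39
#define VM_VFIO_ALLOW_WC	BIT(VM_VFIO_ALLOW_WC_BIT)
#else
#define VM_VFIO_ALLOW_WC	VM_NONE
#endif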

> should know the implications. There's also an expectation that the
> actual driver (KVM guests) or maybe later DPDK can choose the safe
> non-cacheable or write-combine (Linux terminology) attributes for the
> BAR.

DPDK won't rely on this interface

Thanks,
Jason

2023-12-13 20:05:56

by Oliver Upton

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

Hi,

Sorry, a bit late to the discussion :)

On Tue, Dec 12, 2023 at 02:11:56PM -0400, Jason Gunthorpe wrote:
> On Tue, Dec 12, 2023 at 05:46:34PM +0000, Catalin Marinas wrote:
> > should know the implications. There's also an expectation that the
> > actual driver (KVM guests) or maybe later DPDK can choose the safe
> > non-cacheable or write-combine (Linux terminology) attributes for the
> > BAR.
>
> DPDK won't rely on this interface

Wait, so what's the expected interface for determining the memory
attributes at stage-1? I'm somewhat concerned that we're conflating two
things here:

1) KVM needs to know the memory attributes to use at stage-2, which
isn't fundamentally different from what's needed for userspace
stage-1 mappings.

2) KVM additionally needs a hint that the device / VFIO can handle
mismatched aliases w/o the machine exploding. This goes beyond
supporting Normal-NC mappings at stage-2 and is really a bug
with our current scheme (nGnRnE at stage-1, nGnRE at stage-2).

I was hoping that (1) could be some 'common' plumbing for both userspace
and KVM mappings. And for (2), any case where a device is intolerant of
mismatches && KVM cannot force the memory attributes should be rejected.

AFAICT, the only reason PCI devices can get the blanket treatment of
Normal-NC at stage-2 is because userspace has a Device-* mapping and can't
speculatively load from the alias. This feels a bit hacky, and maybe we
should prioritize an interface for mapping a device into a VM w/o a
valid userspace mapping.

I very much understand that this has been going on for a while, and we
need to do *something* to get passthrough working well for devices that
like 'WC'. I just want to make sure we don't paint ourselves into a corner
that's hard to get out of in the future.

--
Thanks,
Oliver

2023-12-14 15:48:38

by Lorenzo Pieralisi

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

[+James]

On Wed, Dec 13, 2023 at 08:05:29PM +0000, Oliver Upton wrote:
> Hi,
>
> Sorry, a bit late to the discussion :)
>
> On Tue, Dec 12, 2023 at 02:11:56PM -0400, Jason Gunthorpe wrote:
> > On Tue, Dec 12, 2023 at 05:46:34PM +0000, Catalin Marinas wrote:
> > > should know the implications. There's also an expectation that the
> > > actual driver (KVM guests) or maybe later DPDK can choose the safe
> > > non-cacheable or write-combine (Linux terminology) attributes for the
> > > BAR.
> >
> > DPDK won't rely on this interface
>
> Wait, so what's the expected interface for determining the memory
> attributes at stage-1? I'm somewhat concerned that we're conflating two
> things here:
>
> 1) KVM needs to know the memory attributes to use at stage-2, which
> isn't fundamentally different from what's needed for userspace
> stage-1 mappings.
>
> 2) KVM additionally needs a hint that the device / VFIO can handle
> mismatched aliases w/o the machine exploding. This goes beyond
> supporting Normal-NC mappings at stage-2 and is really a bug
> with our current scheme (nGnRnE at stage-1, nGnRE at stage-2).
>
> I was hoping that (1) could be some 'common' plumbing for both userspace
> and KVM mappings. And for (2), any case where a device is intolerant of
> mismatches && KVM cannot force the memory attributes should be rejected.
>
> AFAICT, the only reason PCI devices can get the blanket treatment of
> Normal-NC at stage-2 is because userspace has a Device-* mapping and can't
> speculatively load from the alias. This feels a bit hacky, and maybe we
> should prioritize an interface for mapping a device into a VM w/o a
> valid userspace mapping.

FWIW - I have tried to summarize the reasoning behind the stage-2
Normal-NC default being safe for PCIe devices in a document that, I have
just realized, has now become this series' cover letter. I don't think
the PCI blanket treatment is related *only* to the current user space
mappings (BTW, AFAICS it is also *possible* at present to map a
prefetchable BAR through sysfs with Normal-NC memory attributes in the
host at the same time a PCI device is passed through to a guest with
VFIO - and therefore we would also have a Device-nGnRnE stage-1 mapping
for it. I don't think anyone does that - what for? - but it is possible
and KVM would not know about it).

Again, FWIW, we were told (source Arm ARM) mismatched aliases concerning
device-XXX vs Normal-NC are not problematic as long as the transactions
issued for the related mappings are independent (and none of the
mappings is cacheable).

I appreciate this is not enough to give everyone full confidence in this
solution's robustness - that's why I wrote it up, so that we know what
we are up against and can write KVM interfaces accordingly.

> I very much understand that this has been going on for a while, and we
> need to do *something* to get passthrough working well for devices that
> like 'WC'. I just want to make sure we don't paint ourselves into a corner
> that's hard to get out of in the future.

That makes perfect sense; see above. If there is anything we can do to
clarify, we will, in whatever shape is preferred.

Thanks,
Lorenzo

2023-12-14 16:56:58

by Oliver Upton

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Thu, Dec 14, 2023 at 04:48:15PM +0100, Lorenzo Pieralisi wrote:

[...]

> > AFAICT, the only reason PCI devices can get the blanket treatment of
> > Normal-NC at stage-2 is because userspace has a Device-* mapping and can't
> > speculatively load from the alias. This feels a bit hacky, and maybe we
> > should prioritize an interface for mapping a device into a VM w/o a
> > valid userspace mapping.
>
> FWIW - I have tried to summarize the reasoning behind PCIe devices
> Normal-NC default stage-2 safety in a document that I have just realized
> now it has become this series cover letter, I don't think the PCI blanket
> treatment is related *only* to the current user space mappings (ie
> BTW, AFAICS it is also *possible* at present to map a prefetchable BAR through
> sysfs with Normal-NC memory attributes in the host at the same time a PCI
> device is passed-through to a guest with VFIO - and therefore we have a
> dev-nGnRnE stage-1 mapping for it. Don't think anyone does that - what for -
> but it is possible and KVM would not know about it).
>
> Again, FWIW, we were told (source Arm ARM) mismatched aliases concerning
> device-XXX vs Normal-NC are not problematic as long as the transactions
> issued for the related mappings are independent (and none of the
> mappings is cacheable).
>
> I appreciate this is not enough to give everyone full confidence on
> this solution robustness - that's why I wrote that up so that we know
> what we are up against and write KVM interfaces accordingly.

Apologies, I didn't mean to question what's going on here from the
hardware POV. My concern was more from the kernel + user interfaces POV,
this all seems to work (specifically for PCI) by maintaining an
intentional mismatch between the VFIO stage-1 and KVM stage-2 mappings.

If we add more behind-the-scenes tricks to get other MMIO mappings
working in the future then this whole interaction will get even
hairier. At least if we follow the stage-1 attributes (where possible)
then we can document some sort of expected behavior in KVM. The VMM would
need to know if the device has read side-effects, as the only way to get a
Normal-NC mapping in the guest would be to have one at stage-1.

Kinda stinks to make the VMM aware of the device, but IMO it is a
fundamental limitation of the way we back memslots right now.

--
Thanks,
Oliver

2023-12-21 13:20:16

by Catalin Marinas

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

Catching up on emails before going on holiday (again).

On Thu, Dec 14, 2023 at 04:56:01PM +0000, Oliver Upton wrote:
> On Thu, Dec 14, 2023 at 04:48:15PM +0100, Lorenzo Pieralisi wrote:
> > > AFAICT, the only reason PCI devices can get the blanket treatment of
> > > Normal-NC at stage-2 is because userspace has a Device-* mapping and can't
> > > speculatively load from the alias. This feels a bit hacky, and maybe we
> > > should prioritize an interface for mapping a device into a VM w/o a
> > > valid userspace mapping.
> >
> > FWIW - I have tried to summarize the reasoning behind PCIe devices
> > Normal-NC default stage-2 safety in a document that I have just realized
> > now it has become this series cover letter, I don't think the PCI blanket
> > treatment is related *only* to the current user space mappings (ie
> > BTW, AFAICS it is also *possible* at present to map a prefetchable BAR through
> > sysfs with Normal-NC memory attributes in the host at the same time a PCI
> > device is passed-through to a guest with VFIO - and therefore we have a
> > dev-nGnRnE stage-1 mapping for it. Don't think anyone does that - what for -
> > but it is possible and KVM would not know about it).
> >
> > Again, FWIW, we were told (source Arm ARM) mismatched aliases concerning
> > device-XXX vs Normal-NC are not problematic as long as the transactions
> > issued for the related mappings are independent (and none of the
> > mappings is cacheable).
> >
> > I appreciate this is not enough to give everyone full confidence on
> > this solution robustness - that's why I wrote that up so that we know
> > what we are up against and write KVM interfaces accordingly.
>
> Apologies, I didn't mean to question what's going on here from the
> hardware POV. My concern was more from the kernel + user interfaces POV,
> this all seems to work (specifically for PCI) by maintaining an
> intentional mismatch between the VFIO stage-1 and KVM stage-2 mappings.

If you stare at it long enough, the mismatch starts to look fine ;).
Even if you have the VFIO stage 1 Normal NC, KVM stage 2 Normal NC, you
can still have the guest setting stage 1 to Device and introduce an
architectural mismatch. These aliases have some bad reputation but the
behaviour is constrained architecturally.

IMHO we should move on from this attribute mismatch since we can't fully
solve it anyway and focus instead on what the device/system can tolerate
and who's responsible for deciding which MMIO ranges can be mapped as
Normal NC. There are a few options here (talking in the PCIe context but
it can be extended to other VFIO mappings):

1. The VMM is responsible for intra-BAR relaxation of the KVM stage 2:
   a) via the stage 1 VFIO mapping attributes - Device or Normal
   b) via other means (e.g. ioctl(<range>)) while the stage 1 VFIO stays
      Device

2. KVM decides the intra-BAR relaxation irrespective of the VFIO stage 1
   attributes (VMM mapping)

3. KVM decides the full-BAR relaxation with the guest responsible for
   the intra-BAR attributes. As with (2), that's irrespective of the
   VFIO stage 1 host mapping

Whichever option we pick, it won't be the host forcing the Normal NC
mapping, that's still a guest decision and the host only allowing it.

(1) needs specific device knowledge in the VMM or a VFIO-specific driver
(or both if the VMM isn't fully trusted to request the right
attributes). (2) moves the device-specific knowledge to KVM or a
combination of KVM and VFIO-specific driver. Things can get a lot worse
if the Device vs Normal ranges within a BAR are configurable and need
some paravirtualised interface for the guest to agree with the host.

These patches aim for (3) but only if the host VFIO driver deems it safe
(hence PCIe only for now). I find this an acceptable compromise.

If we really want to avoid any aliases (though I think we are spending
too many cycles on something that's not a real issue), the only way is
to have fd-based mappings in KVM so that there's no VMM alias. After
that we need to choose between (2) and (3) since the VMM may no longer
be able to probe the device and figure out which ranges need what
attributes.

> If we add more behind-the-scenes tricks to get other MMIO mappings
> working in the future then this whole interaction will get even
> hairier. At least if we follow the stage-1 attributes (where possible)
> then we can document some sort of expected behavior in KVM. The VMM would
> need know if the device has read side-effects, as the only way to get a
> Normal-NC mapping in the guest would be to have one at stage-1.

I don't think KVM or the VMM should attempt to hand-hold the guest and
ensure that it maps an MMIO with read side-effects appropriately. The
guest driver can do this by itself or get incorrect hw behaviour. Such
hand-holding is only needed if the speculative loads have wider system
implications but we concluded that it's not the case for PCIe. Even with
a Device mapping, the guest can always issue random reads from an
assigned MMIO range and cause side-effects.

> Kinda stinks to make the VMM aware of the device, but IMO it is a
> fundamental limitation of the way we back memslots right now.

As I mentioned above, the limitation may be more complex if the
intra-BAR attributes are not something readily available in the device
documentation. Maybe Jason or Ankit can shed some light here: are those
intra-BAR ranges configurable by the (guest) driver or they are already
pre-configured by firmware and the driver only needs to probe them?

Anyway, about to go on the Christmas break, so most likely I'll follow
up in January. Happy holidays!

--
Catalin

2024-01-02 17:16:10

by Jason Gunthorpe

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Thu, Dec 21, 2023 at 01:19:18PM +0000, Catalin Marinas wrote:

> If we really want to avoid any aliases (though I think we are spending
> too many cycles on something that's not a real issue), the only way is
> to have fd-based mappings in KVM so that there's no VMM alias. After
> that we need to choose between (2) and (3) since the VMM may no longer
> be able to probe the device and figure out which ranges need what
> attributes.

If we use a FD then KVM will be invoking some API on the FD to get the
physical memory addresses and we can have that API also return
information on the allowed memory types.

> > Kinda stinks to make the VMM aware of the device, but IMO it is a
> > fundamental limitation of the way we back memslots right now.
>
> As I mentioned above, the limitation may be more complex if the
> intra-BAR attributes are not something readily available in the device
> documentation. Maybe Jason or Ankit can shed some light here: are those
> intra-BAR ranges configurable by the (guest) driver or they are already
> pre-configured by firmware and the driver only needs to probe them?

Configured by the guest on the fly, on a page by page basis.

There is no way for the VMM to pre-predict what memory type the VM
will need. The VM must be in control of this.

Jason

2024-01-02 17:26:46

by Jason Gunthorpe

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Wed, Dec 13, 2023 at 08:05:29PM +0000, Oliver Upton wrote:
> Hi,
>
> Sorry, a bit late to the discussion :)
>
> On Tue, Dec 12, 2023 at 02:11:56PM -0400, Jason Gunthorpe wrote:
> > On Tue, Dec 12, 2023 at 05:46:34PM +0000, Catalin Marinas wrote:
> > > should know the implications. There's also an expectation that the
> > > actual driver (KVM guests) or maybe later DPDK can choose the safe
> > > non-cacheable or write-combine (Linux terminology) attributes for the
> > > BAR.
> >
> > DPDK won't rely on this interface
>
> Wait, so what's the expected interface for determining the memory
> attributes at stage-1? I'm somewhat concerned that we're conflating two
> things here:

Someday we will have a VFIO ioctl interface to request individual
pages within a BAR be mmap'd with pgprot_writecombine(). Only
something like DPDK would call this ioctl, it would not be used by a
VMM.

> 1) KVM needs to know the memory attributes to use at stage-2, which
> isn't fundamentally different from what's needed for userspace
> stage-1 mappings.
>
> 2) KVM additionally needs a hint that the device / VFIO can handle
> mismatched aliases w/o the machine exploding. This goes beyond
> supporting Normal-NC mappings at stage-2 and is really a bug
> with our current scheme (nGnRnE at stage-1, nGnRE at stage-2).

Not at all.

This whole issue comes from a fear that some HW will experience an
uncontained failure if NORMAL_NC is used for access to MMIO memory.
Marc pointed at some of the GIC registers as a possible concrete
example of this (though nobody has come up with a concrete example in the
VFIO space).

When KVM sets the S2 memory types it is primarily making a decision
what memory types the VM is *NOT* permitted to use, which is
fundamentally based on what kind of physical device is behind that
memory and if the VMM is able to manage the cache.

Ie the purpose of the S2 memory types is to restrict allowed VM memory
types to protect the integrity of the machine and hypervisor from the
VM.

Thus we have what this series does. In most cases KVM will continue to
do as it does today and restrict MMIO memory to Device_XX. We have a
new kind of VMA flag that says this physical memory can be safe with
Device_* and Normal_NC, which causes KVM to stop blocking VM use of
those memory types.

> I was hoping that (1) could be some 'common' plumbing for both userspace
> and KVM mappings. And for (2), any case where a device is intolerant of
> mismatches && KVM cannot force the memory attributes should be rejected.

It has nothing to do with mismatches. Catalin explained this in his
other email.

> AFAICT, the only reason PCI devices can get the blanket treatment of
> Normal-NC at stage-2 is because userspace has a Device-* mapping and can't
> speculatively load from the alias. This feels a bit hacky, and maybe we
> should prioritize an interface for mapping a device into a VM w/o a
> valid userspace mapping.

Userspace has a device-* mapping, yes, that is because userspace can't
know anything better.

> I very much understand that this has been going on for a while, and we
> need to do *something* to get passthrough working well for devices that
> like 'WC'. I just want to make sure we don't paint ourselves into a corner
> that's hard to get out of in the future.

Fundamentally KVM needs to understand the restrictions of the
underlying physical MMIO, and this has to be a secure indication from
the kernel component supplying the memory to KVM consuming it. Here we
are using a VMA flag, but any other behind-the-scenes scheme would
work in the future.

Jason

2024-01-03 11:43:35

by Suzuki K Poulose

[permalink] [raw]
Subject: Re: [PATCH v3 1/2] kvm: arm64: introduce new flag for non-cacheable IO memory

On 08/12/2023 16:47, [email protected] wrote:
> From: Ankit Agrawal <[email protected]>
>
> For various reasons described in the cover letter, and primarily to

The cover letter is not part of the git history. It doesn't hurt to
repeat the same information here for future reference, given how
important it is.


Suzuki


2024-01-03 13:25:22

by Ankit Agrawal

Subject: Re: [PATCH v3 1/2] kvm: arm64: introduce new flag for non-cacheable IO memory

>> From: Ankit Agrawal <[email protected]>
>>
>> For various reasons described in the cover letter, and primarily to
>
> Cover letter is not part of the git history. It doesn't hurt to repeat
> the same here for the sake of referring, given how important that is.

Hi Suzuki, this is addressed in the latest version:
https://lore.kernel.org/all/[email protected]/

2024-01-03 13:34:26

by Catalin Marinas

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Tue, Jan 02, 2024 at 01:09:08PM -0400, Jason Gunthorpe wrote:
> On Thu, Dec 21, 2023 at 01:19:18PM +0000, Catalin Marinas wrote:
> > If we really want to avoid any aliases (though I think we are spending
> > too many cycles on something that's not a real issue), the only way is
> > to have fd-based mappings in KVM so that there's no VMM alias. After
> > that we need to choose between (2) and (3) since the VMM may no longer
> > be able to probe the device and figure out which ranges need what
> > attributes.
>
> If we use a FD then KVM will be invoking some API on the FD to get the
> physical memory addreses and we can have that API also return
> information on the allowed memory types.

I think the part with a VFIO WC flag wouldn't be any different. The
fd-based mapping only solves the mismatched alias, otherwise the
decision for Normal NC vs Device still lies with the guest driver.

> > > Kinda stinks to make the VMM aware of the device, but IMO it is a
> > > fundamental limitation of the way we back memslots right now.
> >
> > As I mentioned above, the limitation may be more complex if the
> > intra-BAR attributes are not something readily available in the device
> > documentation. Maybe Jason or Ankit can shed some light here: are those
> > intra-BAR ranges configurable by the (guest) driver or they are already
> > pre-configured by firmware and the driver only needs to probe them?
>
> Configured by the guest on the fly, on a page by page basis.
>
> There is no way for the VMM to pre-predict what memory type the VM
> will need. The VM must be in control of this.

That's a key argument why the VMM cannot do this, unless we come up with
some para-virtualised interface and split the device configuration logic
between the VMM and the VM. I don't think that's feasible, too much
complexity.

--
Catalin

2024-01-05 20:44:11

by Oliver Upton

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Thu, Dec 21, 2023 at 01:19:18PM +0000, Catalin Marinas wrote:

[...]

> > Apologies, I didn't mean to question what's going on here from the
> > hardware POV. My concern was more from the kernel + user interfaces POV,
> > this all seems to work (specifically for PCI) by maintaining an
> > intentional mismatch between the VFIO stage-1 and KVM stage-2 mappings.
>
> If you stare at it long enough, the mismatch starts to look fine ;).
> Even if you have the VFIO stage 1 Normal NC, KVM stage 2 Normal NC, you
> can still have the guest setting stage 1 to Device and introduce an
> architectural mismatch. These aliases have some bad reputation but the
> behaviour is constrained architecturally.
>
> IMHO we should move on from this attribute mismatch since we can't fully
> solve it anyway and focus instead on what the device, system can
> tolerate, who's responsible for deciding which MMIO ranges can be mapped
> as Normal NC.

Fair enough :) The other slightly unsavory part is that we're baking
the mapping policy into KVM. I'd prefer it if this policy were kept in
userspace somehow, but there's no actual usecase for userspace selecting
memory attributes at this point.

> If we really want to avoid any aliases (though I think we are spending
> too many cycles on something that's not a real issue), the only way is
> to have fd-based mappings in KVM so that there's no VMM alias. After
> that we need to choose between (2) and (3) since the VMM may no longer
> be able to probe the device and figure out which ranges need what
> attributes.

These are the sorts of things I was more worried about. I completely
agree that the patches are fine for relaxing the 'simple' PCIe use
cases, I just don't want to establish the precedent that the kernel/KVM
will be on the hook to work out more complex use cases that may require
the composition of various mappings.

But I'm happy to table that discussion until the usecase arises :)

--
Thanks,
Oliver

2024-01-08 11:05:28

by Catalin Marinas

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Fri, Jan 05, 2024 at 08:42:31PM +0000, Oliver Upton wrote:
> On Thu, Dec 21, 2023 at 01:19:18PM +0000, Catalin Marinas wrote:
> > > Apologies, I didn't mean to question what's going on here from the
> > > hardware POV. My concern was more from the kernel + user interfaces POV,
> > > this all seems to work (specifically for PCI) by maintaining an
> > > intentional mismatch between the VFIO stage-1 and KVM stage-2 mappings.
> >
> > If you stare at it long enough, the mismatch starts to look fine ;).
> > Even if you have the VFIO stage 1 Normal NC, KVM stage 2 Normal NC, you
> > can still have the guest setting stage 1 to Device and introduce an
> > architectural mismatch. These aliases have some bad reputation but the
> > behaviour is constrained architecturally.
> >
> > IMHO we should move on from this attribute mismatch since we can't fully
> > solve it anyway and focus instead on what the device, system can
> > tolerate, who's responsible for deciding which MMIO ranges can be mapped
> > as Normal NC.
>
> Fair enough :) The other slightly unsavory part is that we're baking
> the mapping policy into KVM. I'd prefer it if this policy were kept in
> userspace somehow, but there's no actual usecase for userspace selecting
> memory attributes at this point.

If by policy you mean who's deciding the write-combining relaxation,
this series moved it to the vfio-pci host driver. KVM only picks the
appropriate memory type for stage 2 based on the vma flags. That's
Normal NC in the absence of anything better on arm64 and it does more
than just write-combining but we can describe what this new VM_* flag
allows.

If we want to keep this decision strictly in user space, we can do it
with some ioctl(). The downside is that the host kernel now puts more
trust in the user VMM, so my preference would be to keep this in the
vfio driver. Or we can do both, vfio-pci allows the relaxation, the VMM
tells KVM to go for a more relaxed stage 2 via an ioctl().

--
Catalin

2024-01-08 13:19:05

by Jason Gunthorpe

Subject: Re: [PATCH v3 2/2] kvm: arm64: set io memory s2 pte as normalnc for vfio pci devices

On Mon, Jan 08, 2024 at 11:04:47AM +0000, Catalin Marinas wrote:

> If we want to keep this decision strictly in user space, we can do it
> with some ioctl(). The downside is that the host kernel now puts more
> trust in the user VMM, so my preference would be to keep this in the
> vfio driver. Or we can do both, vfio-pci allows the relaxation, the VMM
> tells KVM to go for a more relaxed stage 2 via an ioctl().

What is the point? We'd need a use case for why the VMM should have
the ability to create a more restrictive MMIO mapping.

I can't think of one.

So I'd go the other way: if someday we find out we need something more
restrictive, then the VMM should ask for a more restrictive mapping (not,
weirdly, ask for a less restrictive one).

Jason