From: Ankit Agrawal <[email protected]>
NVIDIA's upcoming Grace Hopper Superchip provides a PCI-like device
for the on-chip GPU that is the logical OS representation of the
internal proprietary cache coherent interconnect.
This representation has a number of limitations compared to a real PCI
device; in particular, it does not model the coherent GPU memory
aperture as a PCI config space BAR, and PCI doesn't know anything
about cacheable memory types.
Provide a VFIO PCI variant driver that adapts the unique PCI
representation into a more standard PCI representation facing
userspace. The GPU memory aperture is obtained from ACPI, according to
the FW specification, and exported to userspace as the VFIO_REGION
that covers the first PCI BAR. qemu will naturally generate a PCI
device in the VM where the cacheable aperture is reported in BAR1.
Since this memory region is actually cache coherent with the CPU, the
VFIO variant driver will mmap it into the VMA using a cacheable mapping.
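To make that concrete, here is a rough sketch of the mmap path such a
variant driver could implement. The names (nvgpu_vfio_pci_core_device,
nvgpu_vfio_pci_mmap, memphys, memlength) are illustrative stand-ins and
not necessarily what drivers/vfio/pci/nvgpu/main.c uses; the aperture
base and size are assumed to have already been read from ACPI at probe
time:

#include <linux/mm.h>
#include <linux/vfio_pci_core.h>

/* Hypothetical per-device state; the real driver's layout may differ. */
struct nvgpu_vfio_pci_core_device {
	struct vfio_pci_core_device core_device;
	phys_addr_t memphys;	/* coherent aperture base, from ACPI */
	size_t memlength;	/* coherent aperture size, from ACPI */
};

static int nvgpu_vfio_pci_mmap(struct vfio_device *core_vdev,
			       struct vm_area_struct *vma)
{
	struct nvgpu_vfio_pci_core_device *nvdev = container_of(
		core_vdev, struct nvgpu_vfio_pci_core_device,
		core_device.vdev);
	unsigned long req_len = vma->vm_end - vma->vm_start;

	if (req_len > nvdev->memlength)
		return -EINVAL;

	/*
	 * Unlike vfio-pci's MMIO BARs there is no pgprot_noncached()
	 * here: the aperture is cache coherent with the CPU, so the
	 * VMA keeps its cacheable (normal memory) vm_page_prot.
	 */
	return remap_pfn_range(vma, vma->vm_start, PHYS_PFN(nvdev->memphys),
			       req_len, vma->vm_page_prot);
}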
As this is the first time an ARM environment has placed cacheable
non-struct page backed memory (e.g. from remap_pfn_range) into a KVM
page table, fix a bug in ARM KVM where it does not copy the cacheable
memory attributes from non-struct page backed PTEs to ensure the guest
also gets a cacheable mapping.
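The rough shape of that change is to derive the stage-2 memory type
from the attributes the host already placed in the VMA rather than only
from pfn_is_map_memory(). A simplified, illustrative helper (not the
literal patch) might look like:

#include <linux/bitfield.h>
#include <linux/mm.h>
#include <asm/memory.h>
#include <asm/pgtable-hwdef.h>

/*
 * Illustrative only: report whether the host mapped this VMA as normal
 * (cacheable) memory by looking at the MAIR attribute index encoded in
 * vm_page_prot, rather than at whether the PFN has a struct page.
 */
static bool vma_is_cacheable(struct vm_area_struct *vma)
{
	unsigned long attr = FIELD_GET(PTE_ATTRINDX_MASK,
				       pgprot_val(vma->vm_page_prot));

	return attr == MT_NORMAL || attr == MT_NORMAL_TAGGED;
}

user_mem_abort() could then treat the fault as a device mapping only
when this returns false (roughly where it currently keys off
pfn_is_map_memory()), so a cacheable host VMA yields a cacheable
stage-2 mapping for the guest.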
Finally, the cacheable memory can participate in memory failure
handling. ECC failures on this memory will trigger the normal ARM
mechanism to get into memory-failure.c. Since this memory is not
backed by struct page, create a mechanism to route the memory-failure's
physical address to the VMA owner so that a SIGBUS can be generated
toward the correct process. This works with the existing KVM/qemu
handling for memory failure reporting toward a guest.
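The rough shape of such a mechanism, with hypothetical names (the real
interface is the new include/linux/memory-failure.h in this series), is
a PFN-range registration that memory-failure.c can consult when the
poisoned address has no struct page:

#include <linux/interval_tree.h>
#include <linux/types.h>

struct address_space;
struct pfn_address_space;

/*
 * Hypothetical callback invoked by memory_failure() for a poisoned PFN
 * that falls inside a registered, non-struct-page-backed range.
 */
struct pfn_address_space_ops {
	void (*failure)(struct pfn_address_space *pfn_space,
			unsigned long pfn);
};

struct pfn_address_space {
	struct interval_tree_node node;		 /* PFN range being claimed */
	const struct pfn_address_space_ops *ops; /* owner's failure handler */
	struct address_space *mapping;		 /* to find the mapping VMAs */
};

/* Hypothetical registration API the VFIO variant driver would call. */
int register_pfn_address_space(struct pfn_address_space *pfn_space);
void unregister_pfn_address_space(struct pfn_address_space *pfn_space);

On an ECC error memory_failure() would look the PFN up in the
registered ranges, let the owner unmap it from its address_space and
deliver a SIGBUS (BUS_MCEERR_AO/AR as appropriate) toward the process
mapping it, mirroring what the struct page path already does.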
This goes along with a qemu series to provide the necessary
implementation of the Grace Hopper Superchip firmware specification so
that the guest operating system can see the correct ACPI modeling for
the coherent GPU device.
https://github.com/qemu/qemu/compare/master...ankita-nv:qemu:dev-ankit/cohmem-0330
Applied and tested over v6.3-rc4.
Ankit Agrawal (6):
kvm: determine memory type from VMA
vfio/nvgpu: expose GPU device memory as BAR1
mm: handle poisoning of pfn without struct pages
mm: Add poison error check in fixup_user_fault() for mapped PFN
mm: Change ghes code to allow poison of non-struct PFN
vfio/nvgpu: register device memory for poison handling
MAINTAINERS | 6 +
arch/arm64/include/asm/kvm_pgtable.h | 8 +-
arch/arm64/include/asm/memory.h | 6 +-
arch/arm64/kvm/hyp/pgtable.c | 16 +-
arch/arm64/kvm/mmu.c | 27 +-
drivers/acpi/apei/ghes.c | 12 +-
drivers/vfio/pci/Kconfig | 2 +
drivers/vfio/pci/Makefile | 2 +
drivers/vfio/pci/nvgpu/Kconfig | 10 +
drivers/vfio/pci/nvgpu/Makefile | 3 +
drivers/vfio/pci/nvgpu/main.c | 359 +++++++++++++++++++++++++++
include/linux/memory-failure.h | 22 ++
include/linux/mm.h | 1 +
include/ras/ras_event.h | 1 +
mm/gup.c | 2 +-
mm/memory-failure.c | 148 +++++++++--
virt/kvm/kvm_main.c | 6 +
17 files changed, 586 insertions(+), 45 deletions(-)
create mode 100644 drivers/vfio/pci/nvgpu/Kconfig
create mode 100644 drivers/vfio/pci/nvgpu/Makefile
create mode 100644 drivers/vfio/pci/nvgpu/main.c
create mode 100644 include/linux/memory-failure.h
--
2.17.1
[...]
> This goes along with a qemu series to provide the necessary
> implementation of the Grace Hopper Superchip firmware specification so
> that the guest operating system can see the correct ACPI modeling for
> the coherent GPU device.
> https://github.com/qemu/qemu/compare/master...ankita-nv:qemu:dev-ankit/cohmem-0330
>
> Applied and tested over v6.3-rc4.
>
I briefly skimmed over the series; the patch subject prefixes are a bit
misleading IMHO and could be improved:
> Ankit Agrawal (6):
> kvm: determine memory type from VMA
this is aarch64-specific kvm (kvm/aarch64: ?)
> vfio/nvgpu: expose GPU device memory as BAR1
> mm: handle poisoning of pfn without struct pages
mm/memory-failure:
> mm: Add poison error check in fixup_user_fault() for mapped PFN
That's both MM and core-KVM, maybe worth splitting up.
> mm: Change ghes code to allow poison of non-struct PFN
That's drivers/acpi/apei code, not core-mm code.
> vfio/nvgpu: register device memory for poison handling
--
Thanks,
David / dhildenb
Thanks David, response inline.
[...]
> I briefly skimmed over the series; the patch subject prefixes are a bit
> misleading IMHO and could be improved:
Understood. Will fix that in the next iteration.
>> Ankit Agrawal (6):
>> kvm: determine memory type from VMA
> this is aarch64-specific kvm (kvm/aarch64: ?)
Right. I'll change the prefix to kvm/aarch64
>> vfio/nvgpu: expose GPU device memory as BAR1
>> mm: handle poisoning of pfn without struct pages
> mm/memory-failure:
Will change the prefix.
>> mm: Add poison error check in fixup_user_fault() for mapped PFN
> That's both MM and core-KVM, maybe worth splitting up.
Ack, will do.
>> mm: Change ghes code to allow poison of non-struct PFN
> That's drivers/acpi/apei code, not core-mm code.
Ack.
>> vfio/nvgpu: register device memory for poison handling
Thanks,
Ankit Agrawal
On Wed, 12 Apr 2023 09:43:56 +0100,
Ankit Agrawal <[email protected]> wrote:
>
> Thanks David, response inline.
>
> [...]
>
> > I briefly skimmed over the series; the patch subject prefixes are a bit
> > misleading IMHO and could be improved:
>
> Understood. Will fix that in the next iteration.
>
>
> >> Ankit Agrawal (6):
> >> kvm: determine memory type from VMA
>
> > this is aarch64-specific kvm (kvm/aarch64: ?)
> Right. I'll change the prefix to kvm/aarch64
Please look at the git log: the idiomatic prefix is
"KVM: arm64: Something starting with a capital letter"
AArch64 is almost never used anywhere in the arm64 tree.
M.
--
Without deviation from the norm, progress is not possible.
On Wed, 05 Apr 2023 19:01:28 +0100,
<[email protected]> wrote:
>
> From: Ankit Agrawal <[email protected]>
>
> NVIDIA's upcoming Grace Hopper Superchip provides a PCI-like device
> for the on-chip GPU that is the logical OS representation of the
> internal proprietary cache coherent interconnect.
>
> This representation has a number of limitations compared to a real PCI
> device; in particular, it does not model the coherent GPU memory
> aperture as a PCI config space BAR, and PCI doesn't know anything
> about cacheable memory types.
>
> Provide a VFIO PCI variant driver that adapts the unique PCI
> representation into a more standard PCI representation facing
> userspace. The GPU memory aperture is obtained from ACPI, according to
> the FW specification, and exported to userspace as the VFIO_REGION
> that covers the first PCI BAR. qemu will naturally generate a PCI
> device in the VM where the cacheable aperture is reported in BAR1.
>
> Since this memory region is actually cache coherent with the CPU, the
> VFIO variant driver will mmap it into the VMA using a cacheable mapping.
>
> As this is the first time an ARM environment has placed cacheable
> non-struct page backed memory (e.g. from remap_pfn_range) into a KVM
> page table, fix a bug in ARM KVM where it does not copy the cacheable
> memory attributes from non-struct page backed PTEs to ensure the guest
> also gets a cacheable mapping.
This is not a bug, but a conscious design decision. As you pointed out
above, nothing needed this until now, and a device mapping is the only
safe thing to do as we know exactly *nothing* about the memory that
gets mapped.
M.
--
Without deviation from the norm, progress is not possible.
On Wed, Apr 12, 2023 at 01:28:08PM +0100, Marc Zyngier wrote:
> On Wed, 05 Apr 2023 19:01:28 +0100,
> <[email protected]> wrote:
> >
> > From: Ankit Agrawal <[email protected]>
> >
> > NVIDIA's upcoming Grace Hopper Superchip provides a PCI-like device
> > for the on-chip GPU that is the logical OS representation of the
> > internal proprietary cache coherent interconnect.
> >
> > This representation has a number of limitations compared to a real PCI
> > device; in particular, it does not model the coherent GPU memory
> > aperture as a PCI config space BAR, and PCI doesn't know anything
> > about cacheable memory types.
> >
> > Provide a VFIO PCI variant driver that adapts the unique PCI
> > representation into a more standard PCI representation facing
> > userspace. The GPU memory aperture is obtained from ACPI, according to
> > the FW specification, and exported to userspace as the VFIO_REGION
> > that covers the first PCI BAR. qemu will naturally generate a PCI
> > device in the VM where the cacheable aperture is reported in BAR1.
> >
> > Since this memory region is actually cache coherent with the CPU, the
> > VFIO variant driver will mmap it into the VMA using a cacheable mapping.
> >
> > As this is the first time an ARM environment has placed cacheable
> > non-struct page backed memory (e.g. from remap_pfn_range) into a KVM
> > page table, fix a bug in ARM KVM where it does not copy the cacheable
> > memory attributes from non-struct page backed PTEs to ensure the guest
> > also gets a cacheable mapping.
>
> This is not a bug, but a conscious design decision. As you pointed out
> above, nothing needed this until now, and a device mapping is the only
> safe thing to do as we know exactly *nothing* about the memory that
> gets mapped.
IMHO, from the mm perspective, the bug is using pfn_is_map_memory() to
determine the cacheability or device memory status of a PFN in a
VMA. That is not what that API is for.
The cacheability should be determined by the pgprot bits in the VMA.
VM_IO is the flag that says the VMA maps memory with side-effects.
I understand in ARM KVM it is not allowed for the VM and host to have
different cacheability, so mis-detecting host cacheable memory and
forcing it to be non-cacheable in the VM is not a safe thing to do?
Jason
On Wed, 12 Apr 2023 13:53:07 +0100,
Jason Gunthorpe <[email protected]> wrote:
>
> On Wed, Apr 12, 2023 at 01:28:08PM +0100, Marc Zyngier wrote:
> > On Wed, 05 Apr 2023 19:01:28 +0100,
> > <[email protected]> wrote:
> > >
> > > From: Ankit Agrawal <[email protected]>
> > >
> > > NVIDIA's upcoming Grace Hopper Superchip provides a PCI-like device
> > > for the on-chip GPU that is the logical OS representation of the
> > > internal proprietary cache coherent interconnect.
> > >
> > > This representation has a number of limitations compared to a real PCI
> > > device; in particular, it does not model the coherent GPU memory
> > > aperture as a PCI config space BAR, and PCI doesn't know anything
> > > about cacheable memory types.
> > >
> > > Provide a VFIO PCI variant driver that adapts the unique PCI
> > > representation into a more standard PCI representation facing
> > > userspace. The GPU memory aperture is obtained from ACPI, according to
> > > the FW specification, and exported to userspace as the VFIO_REGION
> > > that covers the first PCI BAR. qemu will naturally generate a PCI
> > > device in the VM where the cacheable aperture is reported in BAR1.
> > >
> > > Since this memory region is actually cache coherent with the CPU, the
> > > VFIO variant driver will mmap it into the VMA using a cacheable mapping.
> > >
> > > As this is the first time an ARM environment has placed cacheable
> > > non-struct page backed memory (e.g. from remap_pfn_range) into a KVM
> > > page table, fix a bug in ARM KVM where it does not copy the cacheable
> > > memory attributes from non-struct page backed PTEs to ensure the guest
> > > also gets a cacheable mapping.
> >
> > This is not a bug, but a conscious design decision. As you pointed out
> > above, nothing needed this until now, and a device mapping is the only
> > safe thing to do as we know exactly *nothing* about the memory that
> > gets mapped.
>
> IMHO, from the mm perspective, the bug is using pfn_is_map_memory() to
> determine the cacheability or device memory status of a PFN in a
> VMA. That is not what that API is for.
It is the right API for what KVM/arm64 has been designed for. RAM gets
a normal memory mapping, and everything else gets device. That may not
suit your *new* use case, but that doesn't make it broken.
>
> The cacheability should be determined by the pgprot bits in the VMA.
>
> VM_IO is the flag that says the VMA maps memory with side-effects.
>
> I understand in ARM KVM it is not allowed for the VM and host to have
> different cacheability, so mis-detecting host cacheable memory and
> forcing it to be non-cacheable in the VM is not a safe thing to do?
Only if you insist on not losing coherency between the two aliases
used at the same time (something that would seem pretty improbable).
And said coherency can be restored by using CMOs, as documented in
B2.8.
M.
--
Without deviation from the norm, progress is not possible.
On Thu, Apr 13, 2023 at 10:52:10AM +0100, Marc Zyngier wrote:
> > IMHO, from the mm perspective, the bug is using pfn_is_map_memory() to
> > determine the cacheability or device memory status of a PFN in a
> > VMA. That is not what that API is for.
>
> It is the right API for what KVM/arm64 has been designed for. RAM gets
> a normal memory mapping, and everything else gets device.
The MM has a pretty flexible definition of "RAM" these days. For
instance, I don't think pfn_is_map_memory() works correctly for all
the cases we can do now with devm_memremap_pages().
> That may not suit your *new* use case, but that doesn't make it
> broken.
I've now spent a lot of time working on improving VFIO and the related
ecosystem. I would like to get to a point where we have a consistent
VFIO experience on all the platforms.
Currently, real NIC and GPU HW with wide VFIO deployments on x86 do
not work fully correctly on KVM/arm64. Write-combining in the VM is
the big problem for existing HW, and this new CXL-like stuff has
problems with cacheability.
I don't really care what we call it, as long as we can agree that VFIO
devices not working fully in VMs is a problem that should be fixed.
> Only if you insist on not losing coherency between the two aliases
> used at the same time (something that would seem pretty improbable).
This is VFIO so there is DMA involved. My understanding has been that
the SMMU is allowed to pull data out of the cache. So if the
hypervisor's cacheable side has pulled a line into the cache and the
VM's uncached side dirtied the physical memory, the SMMU is allowed to
read stale cache data? Thus the VM will experience data corruption on
its DMAs.
With VFIO live migration I expect the hypervisor qemu side to be
actively reading from the cacheable memory while the VM is running to
migrate it, so it does not seem improbable.
Jason