2022-05-23 07:18:02

by Chao Peng

[permalink] [raw]
Subject: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

This is v6 of this series, which tries to implement fd-based KVM
guest private memory. The patches are based on the latest kvm/queue
branch commit:

2764011106d0 (kvm/queue) KVM: VMX: Include MKTME KeyID bits in
shadow_zero_check

and Sean's below patch:

KVM: x86/mmu: Add RET_PF_CONTINUE to eliminate bool+int* "returns"
https://lkml.org/lkml/2022/4/22/1598

Introduction
------------
In general this patch series introduces an fd-based memslot which
provides guest memory through a memory file descriptor fd[offset,size]
instead of hva/size. The fd can be created from a supported memory
filesystem like tmpfs/hugetlbfs etc., which we refer to as the memory
backing store. KVM and the memory backing store exchange callbacks when
such a memslot gets created. At runtime KVM will call into callbacks
provided by the backing store to get the pfn with the fd+offset. The
memory backing store will also call into KVM callbacks when userspace
fallocate()s/punches holes on the fd, to notify KVM to map/unmap
secondary MMU page tables.

Compared to the existing hva-based memslot, this new type of memslot
allows guest memory to be unmapped from host userspace like QEMU and
even the kernel itself, therefore reducing the attack surface and
preventing bugs.

Based on this fd-based memslot, we can build guest private memory that
is going to be used in confidential computing environments such as Intel
TDX and AMD SEV. When supported, the memory backing store can provide
more enforcement on the fd and KVM can use a single memslot to hold both
the private and shared parts of the guest memory.

mm extension
------------
This series introduces a new MFD_INACCESSIBLE flag for memfd_create();
a file created with this flag cannot be read(), written or mmap()ed
etc. via normal userspace accesses. The file content can only be used
with the newly introduced memfile_notifier extension.
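
For illustration, creating such a file from userspace could look like the
sketch below. The MFD_INACCESSIBLE constant is taken from this series and
defined locally since it is not in any released uapi header; the exact
sealing/size-setting rules are defined by the mm patches, so treat this as
a sketch rather than a reference:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef MFD_INACCESSIBLE
#define MFD_INACCESSIBLE 0x0008U /* from this series; not in released headers */
#endif

int main(void)
{
        /* Create a backing store fd whose content userspace cannot touch. */
        int fd = memfd_create("guest-private", MFD_INACCESSIBLE);

        if (fd < 0) {
                perror("memfd_create");
                exit(1);
        }
        /* Size it; the pages themselves are allocated later via fallocate(). */
        if (ftruncate(fd, 2UL << 20) < 0) {
                perror("ftruncate");
                exit(1);
        }
        /* read()/write()/mmap() on this fd now fail; only KVM consumes it. */
        return 0;
}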

The memfile_notifier extension provides two sets of callbacks for KVM to
interact with the memory backing store (a rough sketch of both follows
below):
- memfile_notifier_ops: callbacks for the memory backing store to notify
KVM when memory gets allocated/invalidated.
- backing store callbacks: callbacks for KVM to call into the memory
backing store to request memory pages for guest private memory.

The memfile_notifier extension also provides APIs for the memory backing
store to register/unregister itself and to trigger the notifier when the
registered memory gets fallocated/invalidated.
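
For a rough idea of the shape of these interfaces, a non-authoritative
sketch follows; the names approximate those used by the series (see
include/linux/memfile_notifier.h in the diffstat) and the real definitions
live in the patches:

/* Sketch only -- see include/linux/memfile_notifier.h in this series. */
#include <linux/types.h>
#include <linux/fs.h>

struct memfile_notifier;

/* Set 1: backing store -> KVM notifications (allocation/invalidation). */
struct memfile_notifier_ops {
        void (*populate)(struct memfile_notifier *notifier,
                         pgoff_t start, pgoff_t end);   /* on fallocate() */
        void (*invalidate)(struct memfile_notifier *notifier,
                           pgoff_t start, pgoff_t end); /* on punch hole */
};

/* Set 2: KVM -> backing store requests for guest private pages. */
struct memfile_pfn_ops {
        long (*get_lock_pfn)(struct inode *inode, pgoff_t offset, int *order);
        void (*put_unlock_pfn)(unsigned long pfn);
};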

memslot extension
-----------------
Add the private fd and the fd offset to the existing 'shared' memslot so
that both private and shared guest memory can live in one single memslot.
A page in the memslot is either private or shared. A page is private only
when it's already allocated in the backing store fd; in all other cases
it's treated as shared, which includes pages already mapped as shared as
well as pages that have not been mapped yet. This means the memory
backing store is the single source of truth for which pages are private.
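
In uapi terms the extension amounts to something like the sketch below;
the exact struct name, field order and padding are defined by the memslot
patch in this series, so this is approximate:

#include <linux/kvm.h>

/* Sketch: the existing hva-based region plus a private fd + offset. */
struct kvm_userspace_memory_region_ext {
        struct kvm_userspace_memory_region region; /* the 'shared' part */
        __u64 private_offset; /* offset into private_fd for this slot */
        __u32 private_fd;     /* fd created with MFD_INACCESSIBLE */
        __u32 pad1;
        __u64 pad2[14];       /* reserved; layout per the series */
};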

Private memory map/unmap and conversion
---------------------------------------
Userspace's map/unmap operations are done by the fallocate() syscall on
the backing store fd.
- map: default fallocate() with mode=0.
- unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
The map/unmap will trigger the above memfile_notifier_ops to let KVM
map/unmap secondary MMU page tables.
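
Concretely, assuming 'fd' is the private backing store fd, both operations
are ordinary fallocate() calls (note FALLOC_FL_PUNCH_HOLE must be paired
with FALLOC_FL_KEEP_SIZE):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/types.h>

/* Map: allocate pages for [offset, offset+len), turning the range private. */
static int private_mem_map(int fd, off_t offset, off_t len)
{
        return fallocate(fd, 0, offset, len);
}

/* Unmap: punch a hole; the notifier tells KVM to unmap the secondary MMU. */
static int private_mem_unmap(int fd, off_t offset, off_t len)
{
        return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         offset, len);
}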

Test
----
To test the new functionality of this patchset, the TDX patchset is
needed. Since the TDX patchset has not been merged yet, I did two kinds
of tests:

- Selftest on a normal VM from Vishal
https://lkml.org/lkml/2022/5/10/2045
The selftest has been ported to this patchset and you can find it in the
repo: https://github.com/chao-p/linux/tree/privmem-v6

- Private memory functional test on the latest TDX code
The patches were rebased onto the latest TDX code and the new
functionality was tested. See the repos below:
Linux: https://github.com/chao-p/linux/commits/privmem-v6-tdx
QEMU: https://github.com/chao-p/qemu/tree/privmem-v6

An example QEMU command line for TDX test:
-object tdx-guest,id=tdx \
-object memory-backend-memfd-private,id=ram1,size=2G \
-machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1

What's missing
--------------
- The accounting for long-term pinned memory in the backing store is
not included since I haven't come up with a good solution yet.
- Batch invalidation notification for shmem is not ready, as I still
find it a bit tricky to do cleanly.

Changelog
----------
v6:
- Re-organized patches for both the mm and KVM parts.
- Added flags for memfile_notifier so its consumers can state their
features and the memory backing store can check against these flags.
- Put a backing store reference in the memfile_notifier and moved
pfn_ops into the backing store.
- Only support boot-time backing store registration.
- Overall KVM part improvements suggested by Sean and some others.
v5:
- Removed the userspace-visible F_SEAL_INACCESSIBLE, instead using an
in-kernel flag (SHM_F_INACCESSIBLE for shmem). A private fd can only
be created with MFD_INACCESSIBLE.
- Introduced new APIs for the backing store to register itself with
memfile_notifier instead of using direct function calls.
- Added the accounting and restriction for MFD_INACCESSIBLE memory.
- Added KVM API doc for new memslot extensions and man page for the new
MFD_INACCESSIBLE flag.
- Removed the overlap check for mapping the same file+offset into
multiple gfns due to performance considerations; warned about in the
documentation.
- Addressed other comments in v4.
v4:
- Decoupled the callbacks between KVM/mm from memfd and used the new
name 'memfile_notifier'.
- Supported registering multiple memslots to the same backing store.
- Added per-memslot pfn_ops instead of per-system.
- Reworked the invalidation part.
- Improved new KVM uAPIs (private memslot extension and memory
error) per Sean's suggestions.
- Addressed many other minor fixes for comments from v3.
v3:
- Added locking protection when calling
invalidate_page_range/fallocate callbacks.
- Changed the memslot structure to keep using useraddr for shared memory.
- Re-organized F_SEAL_INACCESSIBLE and MEMFD_OPS.
- Added MFD_INACCESSIBLE flag to force F_SEAL_INACCESSIBLE.
- Commit message improvement.
- Many small fixes for comments from the last version.

Links to previous discussions
-----------------------------
[1] Original design proposal:
https://lkml.kernel.org/kvm/[email protected]/
[2] Updated proposal and RFC patch v1:
https://lkml.kernel.org/linux-fsdevel/[email protected]/
[3] Patch v5: https://lkml.org/lkml/2022/3/10/457

Chao Peng (6):
mm: Introduce memfile_notifier
mm/memfd: Introduce MFD_INACCESSIBLE flag
KVM: Extend the memslot to support fd-based private memory
KVM: Add KVM_EXIT_MEMORY_FAULT exit
KVM: Handle page fault for private memory
KVM: Enable and expose KVM_MEM_PRIVATE

Kirill A. Shutemov (1):
mm/shmem: Support memfile_notifier

Documentation/virt/kvm/api.rst | 60 ++++++++++--
arch/mips/include/asm/kvm_host.h | 2 +-
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/Kconfig | 2 +
arch/x86/kvm/mmu.h | 1 +
arch/x86/kvm/mmu/mmu.c | 70 +++++++++++++-
arch/x86/kvm/mmu/mmu_internal.h | 17 ++++
arch/x86/kvm/mmu/mmutrace.h | 1 +
arch/x86/kvm/mmu/paging_tmpl.h | 5 +-
arch/x86/kvm/x86.c | 2 +-
include/linux/kvm_host.h | 51 +++++++++--
include/linux/memfile_notifier.h | 99 ++++++++++++++++++++
include/linux/shmem_fs.h | 2 +
include/uapi/linux/kvm.h | 33 +++++++
include/uapi/linux/memfd.h | 1 +
mm/Kconfig | 4 +
mm/Makefile | 1 +
mm/memfd.c | 15 ++-
mm/memfile_notifier.c | 137 +++++++++++++++++++++++++++
mm/shmem.c | 120 +++++++++++++++++++++++-
virt/kvm/Kconfig | 3 +
virt/kvm/kvm_main.c | 153 +++++++++++++++++++++++++++++--
22 files changed, 748 insertions(+), 33 deletions(-)
create mode 100644 include/linux/memfile_notifier.h
create mode 100644 mm/memfile_notifier.c

--
2.25.1



2022-05-23 07:36:57

by Chao Peng

[permalink] [raw]
Subject: [PATCH v6 5/8] KVM: Add KVM_EXIT_MEMORY_FAULT exit

This new KVM exit allows userspace to handle memory-related errors. It
indicates that an error occurred in KVM at guest memory range
[gpa, gpa+size). The 'flags' field includes additional information for
userspace to handle the error. Currently bit 0 is defined as 'private
memory', where '1' indicates the error happened due to a private memory
access and '0' indicates the error happened due to a shared memory
access.

After private memory is enabled, this new exit will be used by KVM to
exit to userspace for shared memory <-> private memory conversion in
memory encryption usage.

In such usage, typically there are two kinds of memory conversion:
- explicit conversion: happens when the guest explicitly calls into KVM
to map a range (as private or shared); KVM then exits to userspace to
do the map/unmap operations.
- implicit conversion: happens in the KVM page fault handler.
* if the fault is due to a private memory access, it causes a
userspace exit for a shared->private conversion request when the
page has not been allocated in the private memory backend.
* if the fault is due to a shared memory access, it causes a
userspace exit for a private->shared conversion request when the
page has already been allocated in the private memory backend.
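
For illustration only (not part of this patch): a VMM vcpu loop could
consume this exit roughly as below, where handle_conversion() is a
hypothetical helper that does the fallocate()/punch-hole on the private fd
as described in the cover letter. KVM_EXIT_MEMORY_FAULT and the 'memory'
union member only exist with this series' headers applied:

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical VMM helper: back (to_private) or unback the private fd. */
int handle_conversion(__u64 gpa, __u64 size, bool to_private);

/* One KVM_RUN iteration; 'run' is the mmap()ed struct kvm_run area. */
int vcpu_run_once(int vcpu_fd, struct kvm_run *run)
{
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
                return -1;

        if (run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
                /* Bit 0 set: fault was a private access -> make it private. */
                bool to_private = run->memory.flags &
                                  KVM_MEMORY_EXIT_FLAG_PRIVATE;

                if (handle_conversion(run->memory.gpa, run->memory.size,
                                      to_private))
                        return -1;
                /* Returning to KVM_RUN retries the faulting access. */
        }
        return 0;
}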

Suggested-by: Sean Christopherson <[email protected]>
Co-developed-by: Yu Zhang <[email protected]>
Signed-off-by: Yu Zhang <[email protected]>
Signed-off-by: Chao Peng <[email protected]>
---
Documentation/virt/kvm/api.rst | 22 ++++++++++++++++++++++
include/uapi/linux/kvm.h | 9 +++++++++
2 files changed, 31 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index b959445b64cc..2421c012278b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6341,6 +6341,28 @@ array field represents return values. The userspace should update the return
values of SBI call before resuming the VCPU. For more details on RISC-V SBI
spec refer, https://github.com/riscv/riscv-sbi-doc.

+::
+
+ /* KVM_EXIT_MEMORY_FAULT */
+ struct {
+ #define KVM_MEMORY_EXIT_FLAG_PRIVATE (1 << 0)
+ __u32 flags;
+ __u32 padding;
+ __u64 gpa;
+ __u64 size;
+ } memory;
+If exit reason is KVM_EXIT_MEMORY_FAULT then it indicates that the VCPU has
+encountered a memory error which is not handled by KVM kernel module and
+userspace may choose to handle it. The 'flags' field indicates the memory
+properties of the exit.
+
+ - KVM_MEMORY_EXIT_FLAG_PRIVATE - indicates the memory error is caused by
+ private memory access when the bit is set otherwise the memory error is
+ caused by shared memory access when the bit is clear.
+
+'gpa' and 'size' indicate the memory range the error occurs at. The userspace
+may handle the error and return to KVM to retry the previous memory access.
+
::

/* Fix the size of the union. */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 28cacd3656d4..6ca864be258f 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -294,6 +294,7 @@ struct kvm_xen_exit {
#define KVM_EXIT_X86_BUS_LOCK 33
#define KVM_EXIT_XEN 34
#define KVM_EXIT_RISCV_SBI 35
+#define KVM_EXIT_MEMORY_FAULT 36

/* For KVM_EXIT_INTERNAL_ERROR */
/* Emulate instruction failed. */
@@ -518,6 +519,14 @@ struct kvm_run {
unsigned long args[6];
unsigned long ret[2];
} riscv_sbi;
+ /* KVM_EXIT_MEMORY_FAULT */
+ struct {
+#define KVM_MEMORY_EXIT_FLAG_PRIVATE (1 << 0)
+ __u32 flags;
+ __u32 padding;
+ __u64 gpa;
+ __u64 size;
+ } memory;
/* Fix the size of the union. */
char padding[256];
};
--
2.25.1


2022-06-06 20:46:53

by Vishal Annapurve

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

>
> Private memory map/unmap and conversion
> ---------------------------------------
> Userspace's map/unmap operations are done by the fallocate() syscall on the
> backing store fd.
> - map: default fallocate() with mode=0.
> - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> The map/unmap will trigger above memfile_notifier_ops to let KVM map/unmap
> secondary MMU page tables.
>
....
> QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
>
> An example QEMU command line for TDX test:
> -object tdx-guest,id=tdx \
> -object memory-backend-memfd-private,id=ram1,size=2G \
> -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
>

There should be more discussion around double allocation scenarios
when using the private fd approach. A malicious guest or buggy
userspace VMM can cause physical memory to be allocated for both the
shared (memory accessible from the host) and private fds backing the
guest memory.
The userspace VMM will need to unback the shared guest memory while
handling the conversion from shared to private in order to prevent
double allocation, even with malicious guests or bugs in the userspace
VMM.

Options to unback shared guest memory seem to be (the first two are
sketched in code below):
1) madvise(..., MADV_DONTNEED/MADV_REMOVE) - This option won't stop the
kernel from backing the shared memory again on subsequent write
accesses.
2) fallocate(..., FALLOC_FL_PUNCH_HOLE...) - For file-backed shared
guest memory, this option is still similar to madvise since it would
still allow shared memory to get backed again on write accesses.
3) munmap - This would give away the contiguous virtual memory region
reservation, with holes in the guest backing memory, which might make
guest memory management difficult.
4) mprotect(..., PROT_NONE) - This would keep the virtual memory
address range backing the guest memory preserved.
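
As a sketch, options 1) and 2) correspond to calls like the following;
'shared_hva' and 'shared_fd' are placeholders for the VMM's shared mapping
and its backing fd, and as noted above neither call stops the kernel from
re-backing the range on a later write:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/types.h>

/* Option 1: drop the pages behind the shared mapping. */
static int unback_madvise(void *shared_hva, size_t len)
{
        return madvise(shared_hva, len, MADV_REMOVE);
}

/* Option 2: punch a hole in the file backing the shared memory. */
static int unback_punch_hole(int shared_fd, off_t offset, off_t len)
{
        return fallocate(shared_fd,
                         FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         offset, len);
}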

ram_block_discard_range_fd from the reference implementation at
https://github.com/chao-p/qemu/tree/privmem-v6 seems to rely on
fallocate/madvise.

Any thoughts/suggestions around better ways to unback the shared
memory in order to avoid double allocation scenarios?

Regards,
Vishal

2022-06-07 09:57:19

by Chao Peng

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> >
> > Private memory map/unmap and conversion
> > ---------------------------------------
> > Userspace's map/unmap operations are done by the fallocate() syscall on the
> > backing store fd.
> > - map: default fallocate() with mode=0.
> > - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > The map/unmap will trigger above memfile_notifier_ops to let KVM map/unmap
> > secondary MMU page tables.
> >
> ....
> > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> >
> > An example QEMU command line for TDX test:
> > -object tdx-guest,id=tdx \
> > -object memory-backend-memfd-private,id=ram1,size=2G \
> > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> >
>
> There should be more discussion around double allocation scenarios
> when using the private fd approach. A malicious guest or buggy
> userspace VMM can cause physical memory getting allocated for both
> shared (memory accessible from host) and private fds backing the guest
> memory.
> Userspace VMM will need to unback the shared guest memory while
> handling the conversion from shared to private in order to prevent
> double allocation even with malicious guests or bugs in userspace VMM.

I don't know how a malicious guest can cause that. The initial design of
this series is to put the private/shared memory into two different
address spaces and give the userspace VMM the flexibility to convert
between the two. It can choose to respect the guest conversion request or
not.

It's possible for a userspace VMM to cause double allocation if it fails
to call the unback operation during the conversion; this may or may not
be a bug. Double allocation is not necessarily a wrong thing, even
conceptually. At least TDX allows a guest to use half shared, half
private memory, meaning both shared and private can be effective.
Unbacking the memory is just the current QEMU implementation choice.

Chao
>
> Options to unback shared guest memory seem to be:
> 1) madvise(.., MADV_DONTNEED/MADV_REMOVE) - This option won't stop
> kernel from backing the shared memory on subsequent write accesses
> 2) fallocate(..., FALLOC_FL_PUNCH_HOLE...) - For file backed shared
> guest memory, this option still is similar to madvise since this would
> still allow shared memory to get backed on write accesses
> 3) munmap - This would give away the contiguous virtual memory region
> reservation with holes in the guest backing memory, which might make
> guest memory management difficult.
> 4) mprotect(... PROT_NONE) - This would keep the virtual memory
> address range backing the guest memory preserved
>
> ram_block_discard_range_fd from reference implementation:
> https://github.com/chao-p/qemu/tree/privmem-v6 seems to be relying on
> fallocate/madvise.
>
> Any thoughts/suggestions around better ways to unback the shared
> memory in order to avoid double allocation scenarios?
>
> Regards,
> Vishal

2022-06-08 07:50:22

by Chao Peng

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Tue, Jun 07, 2022 at 05:55:46PM -0700, Marc Orr wrote:
> On Tue, Jun 7, 2022 at 12:01 AM Chao Peng <[email protected]> wrote:
> >
> > On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > > >
> > > > Private memory map/unmap and conversion
> > > > ---------------------------------------
> > > > Userspace's map/unmap operations are done by the fallocate() syscall on the
> > > > backing store fd.
> > > > - map: default fallocate() with mode=0.
> > > > - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > > The map/unmap will trigger above memfile_notifier_ops to let KVM map/unmap
> > > > secondary MMU page tables.
> > > >
> > > ....
> > > > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > > >
> > > > An example QEMU command line for TDX test:
> > > > -object tdx-guest,id=tdx \
> > > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> > > >
> > >
> > > There should be more discussion around double allocation scenarios
> > > when using the private fd approach. A malicious guest or buggy
> > > userspace VMM can cause physical memory getting allocated for both
> > > shared (memory accessible from host) and private fds backing the guest
> > > memory.
> > > Userspace VMM will need to unback the shared guest memory while
> > > handling the conversion from shared to private in order to prevent
> > > double allocation even with malicious guests or bugs in userspace VMM.
> >
> > I don't know how a malicious guest can cause that. The initial design of
> > this series is to put the private/shared memory into two different
> > address spaces and give the userspace VMM the flexibility to convert
> > between the two. It can choose to respect the guest conversion request or
> > not.
>
> For example, the guest could maliciously give a device driver a
> private page so that a host-side virtual device will blindly write the
> private page.

With this patch series, it's actually not even possible for the userspace
VMM to allocate a private page by a direct write; it's basically unmapped
from there. If it really wants to, it has to do something special, by
intention, and that's basically the conversion, which we should allow.

>
> > It's possible for a userspace VMM to cause double allocation if it fails
> > to call the unback operation during the conversion; this may or may not
> > be a bug. Double allocation is not necessarily a wrong thing, even
> > conceptually. At least TDX allows a guest to use half shared, half
> > private memory, meaning both shared and private can be effective.
> > Unbacking the memory is just the current QEMU implementation choice.
>
> Right. But the idea is that this patch series should accommodate all
> of the CVM architectures. Or at least that's what I know was
> envisioned last time we discussed this topic for SNP [*].

AFAICS, this series should work for both TDX and SNP, and other CVM
architectures. I don't see where TDX can work but SNP cannot, or have I
missed something here?

>
> Regardless, it's important to ensure that the VM respects its memory
> budget. For example, within Google, we run VMs inside of containers.
> So if we double allocate we're going to OOM. This seems acceptable for
> an early version of CVMs. But ultimately, I think we need a more
> robust way to ensure that the VM operates within its memory container.
> Otherwise, the OOM is going to be hard to diagnose and distinguish
> from a real OOM.

Thanks for bringing this up. But in my mind I still think the userspace
VMM can do this, and it's its responsibility to guarantee that, if that
is a hard requirement. By design, the userspace VMM is the decision-maker
for page conversion and has all the necessary information to know which
page is shared/private. It also has the necessary knobs to allocate/free
the physical pages for guest memory. Definitely, we should make the
userspace VMM more robust.

Chao
>
> [*] https://lore.kernel.org/all/[email protected]/
>
> >
> > Chao
> > >
> > > Options to unback shared guest memory seem to be:
> > > 1) madvise(.., MADV_DONTNEED/MADV_REMOVE) - This option won't stop
> > > kernel from backing the shared memory on subsequent write accesses
> > > 2) fallocate(..., FALLOC_FL_PUNCH_HOLE...) - For file backed shared
> > > guest memory, this option still is similar to madvise since this would
> > > still allow shared memory to get backed on write accesses
> > > 3) munmap - This would give away the contiguous virtual memory region
> > > reservation with holes in the guest backing memory, which might make
> > > guest memory management difficult.
> > > 4) mprotect(... PROT_NONE) - This would keep the virtual memory
> > > address range backing the guest memory preserved
> > >
> > > ram_block_discard_range_fd from reference implementation:
> > > https://github.com/chao-p/qemu/tree/privmem-v6 seems to be relying on
> > > fallocate/madvise.
> > >
> > > Any thoughts/suggestions around better ways to unback the shared
> > > memory in order to avoid double allocation scenarios?
>
> I agree with Vishal. I think this patch set is making great progress.
> But the double allocation scenario seems like a high-level design
> issue that warrants more discussion.

2022-06-08 08:07:51

by Marc Orr

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Tue, Jun 7, 2022 at 12:01 AM Chao Peng <[email protected]> wrote:
>
> On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > >
> > > Private memory map/unmap and conversion
> > > ---------------------------------------
> > > Userspace's map/unmap operations are done by the fallocate() syscall on the
> > > backing store fd.
> > > - map: default fallocate() with mode=0.
> > > - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > The map/unmap will trigger above memfile_notifier_ops to let KVM map/unmap
> > > secondary MMU page tables.
> > >
> > ....
> > > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > >
> > > An example QEMU command line for TDX test:
> > > -object tdx-guest,id=tdx \
> > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> > >
> >
> > There should be more discussion around double allocation scenarios
> > when using the private fd approach. A malicious guest or buggy
> > userspace VMM can cause physical memory getting allocated for both
> > shared (memory accessible from host) and private fds backing the guest
> > memory.
> > Userspace VMM will need to unback the shared guest memory while
> > handling the conversion from shared to private in order to prevent
> > double allocation even with malicious guests or bugs in userspace VMM.
>
> I don't know how a malicious guest can cause that. The initial design of
> this series is to put the private/shared memory into two different
> address spaces and give the userspace VMM the flexibility to convert
> between the two. It can choose to respect the guest conversion request or
> not.

For example, the guest could maliciously give a device driver a
private page so that a host-side virtual device will blindly write the
private page.

> It's possible for a userspace VMM to cause double allocation if it fails
> to call the unback operation during the conversion; this may or may not
> be a bug. Double allocation is not necessarily a wrong thing, even
> conceptually. At least TDX allows a guest to use half shared, half
> private memory, meaning both shared and private can be effective.
> Unbacking the memory is just the current QEMU implementation choice.

Right. But the idea is that this patch series should accommodate all
of the CVM architectures. Or at least that's what I know was
envisioned last time we discussed this topic for SNP [*].

Regardless, it's important to ensure that the VM respects its memory
budget. For example, within Google, we run VMs inside of containers.
So if we double allocate we're going to OOM. This seems acceptable for
an early version of CVMs. But ultimately, I think we need a more
robust way to ensure that the VM operates within its memory container.
Otherwise, the OOM is going to be hard to diagnose and distinguish
from a real OOM.

[*] https://lore.kernel.org/all/[email protected]/

>
> Chao
> >
> > Options to unback shared guest memory seem to be:
> > 1) madvise(.., MADV_DONTNEED/MADV_REMOVE) - This option won't stop
> > kernel from backing the shared memory on subsequent write accesses
> > 2) fallocate(..., FALLOC_FL_PUNCH_HOLE...) - For file backed shared
> > guest memory, this option still is similar to madvise since this would
> > still allow shared memory to get backed on write accesses
> > 3) munmap - This would give away the contiguous virtual memory region
> > reservation with holes in the guest backing memory, which might make
> > guest memory management difficult.
> > 4) mprotect(... PROT_NONE) - This would keep the virtual memory
> > address range backing the guest memory preserved
> >
> > ram_block_discard_range_fd from reference implementation:
> > https://github.com/chao-p/qemu/tree/privmem-v6 seems to be relying on
> > fallocate/madvise.
> >
> > Any thoughts/suggestions around better ways to unback the shared
> > memory in order to avoid double allocation scenarios?

I agree with Vishal. I think this patch set is making great progress.
But the double allocation scenario seems like a high-level design
issue that warrants more discussion.

2022-06-08 20:36:04

by Vishal Annapurve

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

...
> With this patch series, it's actually not even possible for the userspace
> VMM to allocate a private page by a direct write; it's basically unmapped
> from there. If it really wants to, it has to do something special, by
> intention, and that's basically the conversion, which we should allow.
>

A VM can pass a GPA backed by private pages to the userspace VMM, and
when the userspace VMM accesses the backing hva, pages will be allocated
to back the shared fd, causing two sets of pages to back the same guest
memory range.

> Thanks for bringing this up. But in my mind I still think the userspace
> VMM can do this, and it's its responsibility to guarantee that, if that
> is a hard requirement. By design, the userspace VMM is the decision-maker
> for page conversion and has all the necessary information to know which
> page is shared/private. It also has the necessary knobs to allocate/free
> the physical pages for guest memory. Definitely, we should make the
> userspace VMM more robust.

Making the userspace VMM robust enough to avoid double allocation can
get complex: it will have to keep track of all shared fd memory in use
by the userspace VMM to disallow conversion from shared to private, and
will have to ensure that all guest-supplied addresses belong to shared
GPA ranges.
A coarser but simpler alternative could be to always allow shared-to-private
conversion, unbacking the memory from the shared fd, and to exit if the
VMM runs into double allocation scenarios. In either case, unbacking
shared fd memory ideally should prevent memory allocation on subsequent
write accesses, so that double allocation scenarios are caught early.

Regards,
Vishal

2022-06-09 21:09:53

by Sean Christopherson

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> ...
> > With this patch series, it's actually not even possible for the userspace
> > VMM to allocate a private page by a direct write; it's basically unmapped
> > from there. If it really wants to, it has to do something special, by
> > intention, and that's basically the conversion, which we should allow.
> >
>
> A VM can pass GPA backed by private pages to userspace VMM and when
> Userspace VMM accesses the backing hva there will be pages allocated
> to back the shared fd causing 2 sets of pages backing the same guest
> memory range.
>
> > Thanks for bringing this up. But in my mind I still think the userspace
> > VMM can do this, and it's its responsibility to guarantee that, if that
> > is a hard requirement.

That was my initial reaction too, but there are unfortunate side effects to punting
this to userspace.

> > By design, userspace VMM is the decision-maker for page
> > conversion and has all the necessary information to know which page is
> > shared/private. It also has the necessary knobs to allocate/free the
> > physical pages for guest memory. Definitely, we should make userspace
> > VMM more robust.
>
> Making Userspace VMM more robust to avoid double allocation can get
> complex, it will have to keep track of all in-use (by Userspace VMM)
> shared fd memory to disallow conversion from shared to private and
> will have to ensure that all guest supplied addresses belong to shared
> GPA ranges.

IMO, the complexity argument isn't sufficient justification for introducing new
kernel functionality. If multiple processes are accessing guest memory then there
already needs to be some amount of coordination, i.e. it can't be _that_ complex.

My concern with forcing userspace to fully handle unmapping shared memory is that
it may lead to additional performance overhead and/or noisy neighbor issues, even
if all guests are well-behaved.

Unmapping arbitrary ranges will fragment the virtual address space and consume
more memory for all the resulting VMAs. The extra memory consumption isn't that big
of a deal, and it will be self-healing to some extent as VMAs will get merged when
the holes are filled back in (if the guest converts back to shared), but it's still
less than desirable.

More concerning is having to take mmap_lock for write for every conversion, which
is very problematic for configurations where a single userspace process maps memory
belonging to multiple VMs. Unmapping and remapping on every conversion will create a
bottleneck, especially if a VM has sub-optimal behavior and is converting pages at
a high rate.

One argument is that userspace can simply rely on cgroups to detect misbehaving
guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM
kill from the host is typically considered a _host_ issue and will be treated as
a missed SLO.

An idea for handling this in the kernel without too much complexity would be to
add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page faults from
allocating pages, i.e. holes can only be filled by an explicit fallocate(). Minor
faults, e.g. due to NUMA balancing stupidity, and major faults due to swap would
still work, but writes to previously unreserved/unallocated memory would get a
SIGSEGV even on memory the process has mapped. That would allow the userspace VMM
to prevent unintentional allocations without having to coordinate
unmapping/remapping across multiple processes.
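
To make the hypothetical concrete, usage could look like the sketch below;
F_SEAL_FAULT_ALLOCATIONS and its value are invented here purely for
illustration and exist in no kernel:

#define _GNU_SOURCE
#include <fcntl.h>

/* Invented for illustration only; not a real uapi seal. */
#ifndef F_SEAL_FAULT_ALLOCATIONS
#define F_SEAL_FAULT_ALLOCATIONS 0x0020
#endif

static int seal_shared_fd(int shared_fd)
{
        /* Once sealed, a fault on a hole would raise SIGSEGV instead of
         * allocating; holes could only be filled by explicit fallocate(). */
        return fcntl(shared_fd, F_ADD_SEALS, F_SEAL_FAULT_ALLOCATIONS);
}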

2022-06-10 01:03:41

by Marc Orr

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Tue, Jun 7, 2022 at 7:22 PM Chao Peng <[email protected]> wrote:
>
> On Tue, Jun 07, 2022 at 05:55:46PM -0700, Marc Orr wrote:
> > On Tue, Jun 7, 2022 at 12:01 AM Chao Peng <[email protected]> wrote:
> > >
> > > On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > > > >
> > > > > Private memory map/unmap and conversion
> > > > > ---------------------------------------
> > > > > Userspace's map/unmap operations are done by the fallocate() syscall on the
> > > > > backing store fd.
> > > > > - map: default fallocate() with mode=0.
> > > > > - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > > > The map/unmap will trigger above memfile_notifier_ops to let KVM map/unmap
> > > > > secondary MMU page tables.
> > > > >
> > > > ....
> > > > > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > > > >
> > > > > An example QEMU command line for TDX test:
> > > > > -object tdx-guest,id=tdx \
> > > > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> > > > >
> > > >
> > > > There should be more discussion around double allocation scenarios
> > > > when using the private fd approach. A malicious guest or buggy
> > > > userspace VMM can cause physical memory getting allocated for both
> > > > shared (memory accessible from host) and private fds backing the guest
> > > > memory.
> > > > Userspace VMM will need to unback the shared guest memory while
> > > > handling the conversion from shared to private in order to prevent
> > > > double allocation even with malicious guests or bugs in userspace VMM.
> > >
> > > I don't know how a malicious guest can cause that. The initial design of
> > > this series is to put the private/shared memory into two different
> > > address spaces and give the userspace VMM the flexibility to convert
> > > between the two. It can choose to respect the guest conversion request or
> > > not.
> >
> > For example, the guest could maliciously give a device driver a
> > private page so that a host-side virtual device will blindly write the
> > private page.
>
> With this patch series, it's actually not even possible for the userspace
> VMM to allocate a private page by a direct write; it's basically unmapped
> from there. If it really wants to, it has to do something special, by
> intention, and that's basically the conversion, which we should allow.

I think Vishal explained this scenario better in his last reply than I
did.

> > > It's possible for a userspace VMM to cause double allocation if it fails
> > > to call the unback operation during the conversion; this may or may not
> > > be a bug. Double allocation is not necessarily a wrong thing, even
> > > conceptually. At least TDX allows a guest to use half shared, half
> > > private memory, meaning both shared and private can be effective.
> > > Unbacking the memory is just the current QEMU implementation choice.
> >
> > Right. But the idea is that this patch series should accommodate all
> > of the CVM architectures. Or at least that's what I know was
> > envisioned last time we discussed this topic for SNP [*].
>
> AFAICS, this series should work for both TDX and SNP, and other CVM
> architectures. I don't see where TDX can work but SNP cannot, or have I
> missed something here?

Agreed. I was just responding to the "At least TDX..." bit. Sorry for
any confusion.

> >
> > Regardless, it's important to ensure that the VM respects its memory
> > budget. For example, within Google, we run VMs inside of containers.
> > So if we double allocate we're going to OOM. This seems acceptable for
> > an early version of CVMs. But ultimately, I think we need a more
> > robust way to ensure that the VM operates within its memory container.
> > Otherwise, the OOM is going to be hard to diagnose and distinguish
> > from a real OOM.
>
> Thanks for bringing this up. But in my mind I still think the userspace
> VMM can do this, and it's its responsibility to guarantee that, if that
> is a hard requirement. By design, the userspace VMM is the decision-maker
> for page conversion and has all the necessary information to know which
> page is shared/private. It also has the necessary knobs to allocate/free
> the physical pages for guest memory. Definitely, we should make the
> userspace VMM more robust.

Vishal and Sean articulated the concern better in their most recent
replies.

2022-06-14 07:51:25

by Chao Peng

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > ...
> > > With this patch series, it's actually not even possible for the userspace
> > > VMM to allocate a private page by a direct write; it's basically unmapped
> > > from there. If it really wants to, it has to do something special, by
> > > intention, and that's basically the conversion, which we should allow.
> > >
> >
> > A VM can pass GPA backed by private pages to userspace VMM and when
> > Userspace VMM accesses the backing hva there will be pages allocated
> > to back the shared fd causing 2 sets of pages backing the same guest
> > memory range.
> >
> > > Thanks for bringing this up. But in my mind I still think the userspace
> > > VMM can do this, and it's its responsibility to guarantee that, if that
> > > is a hard requirement.
>
> That was my initial reaction too, but there are unfortunate side effects to punting
> this to userspace.
>
> > > By design, userspace VMM is the decision-maker for page
> > > conversion and has all the necessary information to know which page is
> > > shared/private. It also has the necessary knobs to allocate/free the
> > > physical pages for guest memory. Definitely, we should make userspace
> > > VMM more robust.
> >
> > Making Userspace VMM more robust to avoid double allocation can get
> > complex, it will have to keep track of all in-use (by Userspace VMM)
> > shared fd memory to disallow conversion from shared to private and
> > will have to ensure that all guest supplied addresses belong to shared
> > GPA ranges.
>
> IMO, the complexity argument isn't sufficient justification for introducing new
> kernel functionality. If multiple processes are accessing guest memory then there
> already needs to be some amount of coordination, i.e. it can't be _that_ complex.
>
> My concern with forcing userspace to fully handle unmapping shared memory is that
> it may lead to additional performance overhead and/or noisy neighbor issues, even
> if all guests are well-behaved.
>
> Unmapping arbitrary ranges will fragment the virtual address space and consume
> more memory for all the resulting VMAs. The extra memory consumption isn't that big
> of a deal, and it will be self-healing to some extent as VMAs will get merged when
> the holes are filled back in (if the guest converts back to shared), but it's still
> less than desirable.
>
> More concerning is having to take mmap_lock for write for every conversion, which
> is very problematic for configurations where a single userspace process maps memory
> belong to multiple VMs. Unmapping and remapping on every conversion will create a
> bottleneck, especially if a VM has sub-optimal behavior and is converting pages at
> a high rate.
>
> One argument is that userspace can simply rely on cgroups to detect misbehaving
> guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM
> kill from the host is typically considered a _host_ issue and will be treated as
> a missed SLO.
>
> An idea for handling this in the kernel without too much complexity would be to
> add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page faults from
> allocating pages, i.e. holes can only be filled by an explicit fallocate(). Minor
> faults, e.g. due to NUMA balancing stupidity, and major faults due to swap would
> still work, but writes to previously unreserved/unallocated memory would get a
> SIGSEGV on something it has mapped. That would allow the userspace VMM to prevent
> unintentional allocations without having to coordinate unmapping/remapping across
> multiple processes.

Since this is mainly for shared memory and the motivation is catching
misbehaving accesses, can we use mprotect(PROT_NONE) for this? We can mark
those ranges backed by the private fd as PROT_NONE during the conversion so
subsequent misbehaving accesses will be blocked instead of causing double
allocation silently.

Chao

2022-06-14 17:49:49

by Andy Lutomirski

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <[email protected]> wrote:
>
> On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> >
> > One argument is that userspace can simply rely on cgroups to detect misbehaving
> > guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM
> > kill from the host is typically considered a _host_ issue and will be treated as
> > a missed SLO.
> >
> > An idea for handling this in the kernel without too much complexity would be to
> > add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page faults from
> > allocating pages, i.e. holes can only be filled by an explicit fallocate(). Minor
> > faults, e.g. due to NUMA balancing stupidity, and major faults due to swap would
> > still work, but writes to previously unreserved/unallocated memory would get a
> > SIGSEGV on something it has mapped. That would allow the userspace VMM to prevent
> > unintentional allocations without having to coordinate unmapping/remapping across
> > multiple processes.
>
> Since this is mainly for shared memory and the motivation is catching
> misbehaving accesses, can we use mprotect(PROT_NONE) for this? We can mark
> those ranges backed by the private fd as PROT_NONE during the conversion so
> subsequent misbehaving accesses will be blocked instead of causing double
> allocation silently.

This patch series is fairly close to implementing a rather more
efficient solution. I'm not familiar enough with hypervisor userspace
to really know if this would work, but:

What if shared guest memory could also be file-backed, either in the
same fd or with a second fd covering the shared portion of a memslot?
This would allow changes to the backing store (punching holes, etc) to
be done without mmap_lock or host-userspace TLB flushes? Depending on
what the guest is doing with its shared memory, userspace might need
the memory mapped or it might not.

--Andy

2022-06-14 19:16:53

by Sean Christopherson

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <[email protected]> wrote:
> >
> > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > >
> > > One argument is that userspace can simply rely on cgroups to detect misbehaving
> > > guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM
> > > kill from the host is typically considered a _host_ issue and will be treated as
> > > a missed SLO.
> > >
> > > An idea for handling this in the kernel without too much complexity would be to
> > > add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page faults from
> > > allocating pages, i.e. holes can only be filled by an explicit fallocate(). Minor
> > > faults, e.g. due to NUMA balancing stupidity, and major faults due to swap would
> > > still work, but writes to previously unreserved/unallocated memory would get a
> > > SIGSEGV on something it has mapped. That would allow the userspace VMM to prevent
> > > unintentional allocations without having to coordinate unmapping/remapping across
> > > multiple processes.
> >
> > Since this is mainly for shared memory and the motivation is catching
> > misbehaving accesses, can we use mprotect(PROT_NONE) for this? We can mark
> > those ranges backed by the private fd as PROT_NONE during the conversion so
> > subsequent misbehaving accesses will be blocked instead of causing double
> > allocation silently.

PROT_NONE, a.k.a. mprotect(), has the same vma downsides as munmap().

> This patch series is fairly close to implementing a rather more
> efficient solution. I'm not familiar enough with hypervisor userspace
> to really know if this would work, but:
>
> What if shared guest memory could also be file-backed, either in the
> same fd or with a second fd covering the shared portion of a memslot?
> This would allow changes to the backing store (punching holes, etc) to
> > be done without mmap_lock or host-userspace TLB flushes? Depending on
> what the guest is doing with its shared memory, userspace might need
> the memory mapped or it might not.

That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea. The issue,
unless I'm misreading code, is that punching a hole in the shared memory backing
store doesn't prevent reallocating that hole on fault, i.e. a helper process that
keeps a valid mapping of guest shared memory can silently fill the hole.

What we're hoping to achieve is a way to prevent allocating memory without a very
explicit action from userspace, e.g. fallocate().

2022-06-14 21:27:05

by Andy Lutomirski

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson <[email protected]> wrote:
>
> On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <[email protected]> wrote:
> > >
> > > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > > >
> > > > One argument is that userspace can simply rely on cgroups to detect misbehaving
> > > > guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM
> > > > kill from the host is typically considered a _host_ issue and will be treated as
> > > > a missed SLO.
> > > >
> > > > An idea for handling this in the kernel without too much complexity would be to
> > > > add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page faults from
> > > > allocating pages, i.e. holes can only be filled by an explicit fallocate(). Minor
> > > > faults, e.g. due to NUMA balancing stupidity, and major faults due to swap would
> > > > still work, but writes to previously unreserved/unallocated memory would get a
> > > > SIGSEGV on something it has mapped. That would allow the userspace VMM to prevent
> > > > unintentional allocations without having to coordinate unmapping/remapping across
> > > > multiple processes.
> > >
> > > Since this is mainly for shared memory and the motivation is catching
> > > misbehaving accesses, can we use mprotect(PROT_NONE) for this? We can mark
> > > those ranges backed by the private fd as PROT_NONE during the conversion so
> > > subsequent misbehaving accesses will be blocked instead of causing double
> > > allocation silently.
>
> PROT_NONE, a.k.a. mprotect(), has the same vma downsides as munmap().
>
> > This patch series is fairly close to implementing a rather more
> > efficient solution. I'm not familiar enough with hypervisor userspace
> > to really know if this would work, but:
> >
> > What if shared guest memory could also be file-backed, either in the
> > same fd or with a second fd covering the shared portion of a memslot?
> > This would allow changes to the backing store (punching holes, etc) to
> > be done without mmap_lock or host-userspace TLB flushes? Depending on
> > what the guest is doing with its shared memory, userspace might need
> > the memory mapped or it might not.
>
> That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea. The issue,
> unless I'm misreading code, is that punching a hole in the shared memory backing
> store doesn't prevent reallocating that hole on fault, i.e. a helper process that
> keeps a valid mapping of guest shared memory can silently fill the hole.
>
> What we're hoping to achieve is a way to prevent allocating memory without a very
> explicit action from userspace, e.g. fallocate().

Ah, I misunderstood. I thought your goal was to mmap it and prevent
page faults from allocating.

It is indeed the case (and has been since before quite a few of us
were born) that a hole in a sparse file is logically just a bunch of
zeros. A way to make a file for which a hole is an actual hole seems
like it would solve this problem nicely. It could also be solved more
specifically for KVM by making sure that the private/shared mode that
userspace programs is strict enough to prevent accidental allocations
-- if a GPA is definitively private, shared, neither, or (potentially,
on TDX only) both, then a page that *isn't* shared will never be
accidentally allocated by KVM. If the shared backing is not mmapped,
it also won't be accidentally allocated by host userspace on a stray
or careless write.


--Andy

2022-06-15 10:04:30

by Chao Peng

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Tue, Jun 14, 2022 at 01:59:41PM -0700, Andy Lutomirski wrote:
> On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson <[email protected]> wrote:
> >
> > On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > > On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <[email protected]> wrote:
> > > >
> > > > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > > > >
> > > > > One argument is that userspace can simply rely on cgroups to detect misbehaving
> > > > > guests, but (a) those types of OOMs will be a nightmare to debug and (b) an OOM
> > > > > kill from the host is typically considered a _host_ issue and will be treated as
> > > > > a missed SLO.
> > > > >
> > > > > An idea for handling this in the kernel without too much complexity would be to
> > > > > add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would prevent page faults from
> > > > > allocating pages, i.e. holes can only be filled by an explicit fallocate(). Minor
> > > > > faults, e.g. due to NUMA balancing stupidity, and major faults due to swap would
> > > > > still work, but writes to previously unreserved/unallocated memory would get a
> > > > > SIGSEGV on something it has mapped. That would allow the userspace VMM to prevent
> > > > > unintentional allocations without having to coordinate unmapping/remapping across
> > > > > multiple processes.
> > > >
> > > > Since this is mainly for shared memory and the motivation is catching
> > > > misbehaving accesses, can we use mprotect(PROT_NONE) for this? We can mark
> > > > those ranges backed by the private fd as PROT_NONE during the conversion so
> > > > subsequent misbehaving accesses will be blocked instead of causing double
> > > > allocation silently.
> >
> > PROT_NONE, a.k.a. mprotect(), has the same vma downsides as munmap().

Yes, right.

> >
> > > This patch series is fairly close to implementing a rather more
> > > efficient solution. I'm not familiar enough with hypervisor userspace
> > > to really know if this would work, but:
> > >
> > > What if shared guest memory could also be file-backed, either in the
> > > same fd or with a second fd covering the shared portion of a memslot?
> > > This would allow changes to the backing store (punching holes, etc) to
> > > be done without mmap_lock or host-userspace TLB flushes? Depending on
> > > what the guest is doing with its shared memory, userspace might need
> > > the memory mapped or it might not.
> >
> > That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea. The issue,
> > unless I'm misreading code, is that punching a hole in the shared memory backing
> > store doesn't prevent reallocating that hole on fault, i.e. a helper process that
> > keeps a valid mapping of guest shared memory can silently fill the hole.
> >
> > What we're hoping to achieve is a way to prevent allocating memory without a very
> > explicit action from userspace, e.g. fallocate().
>
> Ah, I misunderstood. I thought your goal was to mmap it and prevent
> page faults from allocating.

I think we still need the mmap, but we want to prevent allocating when
userspace touches a previously mmap()ed area whose page has never been
filled. I don't have a clear answer on whether other operations like
read/write should also be prevented (probably yes). Only after an
explicit fallocate() to allocate the page would these operations act
normally.

>
> It is indeed the case (and has been since before quite a few of us
> were born) that a hole in a sparse file is logically just a bunch of
> zeros. A way to make a file for which a hole is an actual hole seems
> like it would solve this problem nicely. It could also be solved more
> specifically for KVM by making sure that the private/shared mode that
> userspace programs is strict enough to prevent accidental allocations
> -- if a GPA is definitively private, shared, neither, or (potentially,
> on TDX only) both, then a page that *isn't* shared will never be
> accidentally allocated by KVM.

KVM is clever enough to not allocate since it knows whether a GPA is
shared or not. In this case it's the host userspace that can cause the
allocation, and it's too complex to check on every access from the guest.

> If the shared backing is not mmapped,
> it also won't be accidentally allocated by host userspace on a stray
> or careless write.

As said above, mmap is still preferred; otherwise too many changes are
needed for the userspace VMM.

Thanks,
Chao
>
>
> --Andy

2022-06-15 14:43:24

by Sean Christopherson

[permalink] [raw]
Subject: Re: [PATCH v6 0/8] KVM: mm: fd-based approach for supporting KVM guest private memory

On Wed, Jun 15, 2022, Chao Peng wrote:
> On Tue, Jun 14, 2022 at 01:59:41PM -0700, Andy Lutomirski wrote:
> > On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson <[email protected]> wrote:
> > >
> > > On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > > > This patch series is fairly close to implementing a rather more
> > > > efficient solution. I'm not familiar enough with hypervisor userspace
> > > > to really know if this would work, but:
> > > >
> > > > What if shared guest memory could also be file-backed, either in the
> > > > same fd or with a second fd covering the shared portion of a memslot?
> > > > This would allow changes to the backing store (punching holes, etc) to
> > > > be done without mmap_lock or host-userspace TLB flushes? Depending on
> > > > what the guest is doing with its shared memory, userspace might need
> > > > the memory mapped or it might not.
> > >
> > > That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea. The issue,
> > > unless I'm misreading code, is that punching a hole in the shared memory backing
> > > store doesn't prevent reallocating that hole on fault, i.e. a helper process that
> > > keeps a valid mapping of guest shared memory can silently fill the hole.
> > >
> > > What we're hoping to achieve is a way to prevent allocating memory without a very
> > > explicit action from userspace, e.g. fallocate().
> >
> > Ah, I misunderstood. I thought your goal was to mmap it and prevent
> > page faults from allocating.

I don't think you misunderstood, that's also one of the goals. The use case is
that multiple processes in the host mmap() guest memory, and we'd like to be able
to punch a hole without having to rendezvous with all processes and also to prevent
an unintentional re-allocation.

> I think we still need the mmap, but we want to prevent allocating when
> userspace touches a previously mmap()ed area whose page has never been
> filled.

Yes, or if a chunk was filled at some point but then was removed via PUNCH_HOLE.

> I don't have a clear answer on whether other operations like read/write
> should also be prevented (probably yes). Only after an explicit
> fallocate() to allocate the page would these operations act normally.

I always forget about read/write. I believe reads should be ok; the
semantics of holes are that they return zeros, i.e. the kernel can use
ZERO_PAGE() and not allocate a new backing page. Not sure what to do about
writes though. Allocating on direct writes might be ok for our use case,
but that could also result in a rather weird API.

> > It is indeed the case (and has been since before quite a few of us
> > were born) that a hole in a sparse file is logically just a bunch of
> > zeros. A way to make a file for which a hole is an actual hole seems
> > like it would solve this problem nicely. It could also be solved more
> > specifically for KVM by making sure that the private/shared mode that
> > userspace programs is strict enough to prevent accidental allocations
> > -- if a GPA is definitively private, shared, neither, or (potentially,
> > on TDX only) both, then a page that *isn't* shared will never be
> > accidentally allocated by KVM.
>
> KVM is clever enough to not allocate since it knows whether a GPA is
> shared or not. In this case it's the host userspace that can cause the
> allocation, and it's too complex to check on every access from the guest.

Yes, KVM is not in the picture at all. KVM won't trigger allocation, but KVM also
is not in a position to prevent userspace from touching memory.

> > If the shared backing is not mmapped,
> > it also won't be accidentally allocated by host userspace on a stray
> > or careless write.
>
> As said above, mmap is still preferred; otherwise too many changes are
> needed for the userspace VMM.

Forcing userspace to change doesn't bother me too much; the biggest concern
is having to take mmap_lock for write in a per-host process.