2020-10-12 11:03:51

by Joao Martins

[permalink] [raw]
Subject: Re: [PATCH 00/35] Enhance memory utilization with DMEMFS

On 10/10/20 9:15 AM, yulei zhang wrote:
> On Fri, Oct 9, 2020 at 7:53 PM Joao Martins <[email protected]> wrote:
>> On 10/9/20 12:39 PM, yulei zhang wrote:
>>> Joao, thanks a lot for the feedback. One more thing that needs to be
>>> mentioned is that dmemfs also supports fine-grained memory management,
>>> which makes it more flexible for tenants with different requirements.
>>>
>> So does DAX, once it allows partitioning a region (starting with 5.10). Meaning you have
>> a region which you dedicate to userspace. That region can then be partitioned into devices
>> which give you access to multiple (possibly discontinuous) extents at a given page
>> granularity (selectable when you create the device), accessed through mmap().
>> You can then give that device to a cgroup. Or you can return that memory back to the
>> kernel (should you run into an OOM situation), or you can recreate the same mappings across
>> reboot/kexec.
>>
>> I probably need to read your patches again, but can you expand on 'dmemfs also supports
>> fine-grained memory management' so I understand what gap you are referring to?
>>
> Sure. dmemfs uses a bitmap to track the memory usage in the reserved
> memory region at a given page size granularity. And for each user the
> memory can be discrete as well.
>
That same functionality of tracking reserved region usage across different users at any
page granularity is covered by the DAX series I mentioned below. The discrete part -- IIUC
what you meant -- then reduces to using the DAX ABI/tools to create a device file vs a filesystem.
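
Just to make the mmap() part above concrete, here is a rough userspace sketch of how a
tenant would consume one of those partitioned device-dax instances. The device path and
mapping size below are made up for illustration, this is not from either patchset:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        size_t len = 2UL << 20;                 /* one 2M extent, chosen arbitrarily */
        int fd = open("/dev/dax0.1", O_RDWR);   /* hypothetical device-dax instance */
        char *addr;

        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        /* Direct access to the region's memory; no page cache involved. */
        addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED) {
                perror("mmap");
                close(fd);
                return EXIT_FAILURE;
        }

        addr[0] = 1;                            /* touch the memory */

        munmap(addr, len);
        close(fd);
        return EXIT_SUCCESS;
}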

>>> On Fri, Oct 9, 2020 at 3:01 AM Joao Martins <[email protected]> wrote:
>>>>
>>>> [adding a couple folks that directly or indirectly work on the subject]
>>>>
>>>> On 10/8/20 8:53 AM, [email protected] wrote:
>>>>> From: Yulei Zhang <[email protected]>
>>>>>
>>>>> In the current system each physical memory page is associated with
>>>>> a page structure which is used to track the usage of this page.
>>>>> But with memory usage growing rapidly in cloud environments,
>>>>> we find that the resources consumed by page structure storage
>>>>> become quite significant. So is it an expense that we could spare?
>>>>>
>>>> Happy to see another person working to solve the same problem!
>>>>
>>>> I am really glad to see more folks interested in solving
>>>> this problem, and I hope we can join efforts?
>>>>
>>>> BTW, there is also a second benefit in removing struct page -
>>>> which is carving out memory from the direct map.
>>>>
>>>>> This patchset introduces an idea about how to save the extra
>>>>> memory through a new virtual filesystem -- dmemfs.
>>>>>
>>>>> Dmemfs (Direct Memory filesystem) is a filesystem based on device
>>>>> memory or reserved memory. This kind of memory is special as it is
>>>>> not managed by the kernel and, most importantly, it has no 'struct page'.
>>>>> Therefore we can leverage the extra memory from the host system
>>>>> to support more tenants in our cloud service.
>>>>>
>>>> This is like a walk down memory lane.
>>>>
>>>> About a year ago we followed the exact same idea/motivation to
>>>> have memory outside of the direct map (and remove the struct page overhead)
>>>> and started with our own layer/thingie. However we realized that DAX
>>>> is one of the subsystems which already gives you direct access to memory
>>>> for free (and is already upstream), plus a couple of things which we
>>>> found more handy.
>>>>
>>>> So we sent an RFC a couple months ago:
>>>>
>>>> https://lore.kernel.org/linux-mm/[email protected]/
>>>>
>>>> Since then the majority of the work has been in improving DAX [1].
>>>> But now that it is done, I am going to follow up with the above patchset.
>>>>
>>>> [1]
>>>> https://lore.kernel.org/linux-mm/159625229779.3040297.11363509688097221416.stgit@dwillia2-desk3.amr.corp.intel.com/
>>>>
>>>> (Give me a couple of days and I will send you the link to the latest
>>>> patches on a git-tree - would love feedback!)
>>>>
>>>> The struct page removal for DAX would then be small, and covers the
>>>> same bells and whistles (MCE handling, reserving PAT memtypes, ptrace
>>>> support) that we both do, with a smaller diffstat, and it doesn't
>>>> touch KVM (at least not fundamentally).
>>>>
>>>> 15 files changed, 401 insertions(+), 38 deletions(-)
>>>>
>>>> What is needed in core-mm is handling of PMD/PUD PAGE_SPECIAL, much
>>>> like we both do. Furthermore there wouldn't be a need for a new vm type,
>>>> for consuming an extra page bit (in addition to PAGE_SPECIAL), or for a new filesystem.
>>>>
>>>>> We use a kernel boot parameter 'dmem=' to reserve system
>>>>> memory when the host system boots up; the details can be found
>>>>> in Documentation/admin-guide/kernel-parameters.txt.
>>>>>
>>>>> Theoretically, for each 4K physical page we can save 64 bytes if
>>>>> we drop the 'struct page', so for 320G of guest memory it can
>>>>> save about 5G of physical memory in total.
>>>>>
>>>> Also worth mentioning that if you only care about 'struct page' cost, and not about the
>>>> security boundary, there's also some work on hugetlbfs preallocation of hugepages that
>>>> tricks vmemmap into reusing tail pages.
>>>>
>>>> https://lore.kernel.org/linux-mm/[email protected]/
>>>>
>>>> Going forward that could also make sense for device-dax, to avoid allocating
>>>> so many struct pages (which would require its transition to compound
>>>> struct pages like hugetlbfs, which we are looking at too). In addition, an
>>>> idea <handwaving> would perhaps be to have a stricter mode in DAX where
>>>> we initialize/use the metadata ('struct page') but remove the underlying
>>>> PFNs (of the 'struct page') from the direct map, bearing the cost of
>>>> mapping/unmapping on gup/pup.
>>>>
>>>> Joao


2020-10-15 04:07:25

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH 00/35] Enhance memory utilization with DMEMFS

On Mon, Oct 12, 2020 at 4:00 AM Joao Martins <[email protected]> wrote:
[..]
> On 10/10/20 9:15 AM, yulei zhang wrote:
> > On Fri, Oct 9, 2020 at 7:53 PM Joao Martins <[email protected]> wrote:
> >> On 10/9/20 12:39 PM, yulei zhang wrote:
> >>> Joao, thanks a lot for the feedback. One more thing that needs to be
> >>> mentioned is that dmemfs also supports fine-grained memory management,
> >>> which makes it more flexible for tenants with different requirements.
> >>>
> >> So does DAX, once it allows partitioning a region (starting with 5.10). Meaning you have
> >> a region which you dedicate to userspace. That region can then be partitioned into devices
> >> which give you access to multiple (possibly discontinuous) extents at a given page
> >> granularity (selectable when you create the device), accessed through mmap().
> >> You can then give that device to a cgroup. Or you can return that memory back to the
> >> kernel (should you run into an OOM situation), or you can recreate the same mappings across
> >> reboot/kexec.
> >>
> >> I probably need to read your patches again, but can you expand on 'dmemfs also supports
> >> fine-grained memory management' so I understand what gap you are referring to?
> >>
> > Sure. dmemfs uses a bitmap to track the memory usage in the reserved
> > memory region at a given page size granularity. And for each user the
> > memory can be discrete as well.
> >
> That same functionality of tracking reserved region usage across different users at any
> page granularity is covered by the DAX series I mentioned below. The discrete part -- IIUC
> what you meant -- then reduces to using the DAX ABI/tools to create a device file vs a filesystem.

Put another way: Linux already has a fine-grained memory management
system, the page allocator. Now, with recent device-dax extensions, it
also has a coarse grained memory management system for physical
address-space partitioning and a path for struct-page-less backing for
VMs. What feature gaps remain vs dmemfs, and can those gaps be closed
with incremental improvements to the 2 existing memory-management
systems?

2020-10-19 13:41:53

by Paolo Bonzini

[permalink] [raw]
Subject: Re: [PATCH 00/35] Enhance memory utilization with DMEMFS

On 15/10/20 00:25, Dan Williams wrote:
> Now, with recent device-dax extensions, it
> also has a coarse grained memory management system for physical
> address-space partitioning and a path for struct-page-less backing for
> VMs. What feature gaps remain vs dmemfs, and can those gaps be closed
> with incremental improvements to the 2 existing memory-management
> systems?

If I understand correctly, devm_memremap_pages() on ZONE_DEVICE memory
would still create the "struct page" albeit lazily? KVM then would use
the usual get_user_pages() path.

Looking more closely at the implementation of dmemfs, what I don't understand
is why dmemfs needs VM_DMEM etc. and cannot provide access to mmap-ed
memory using remap_pfn_range and VM_PFNMAP, just like /dev/mem. If it
did that, KVM would get physical addresses using fixup_user_fault and
would never need pfn_to_page() or get_user_pages(). I'm not saying that would
instantly get an approval, but it would remove a lot of hooks.
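
Roughly what I have in mind is something along these lines (completely
untested sketch; the dmem_mmap name and the pgoff-to-extent mapping are
just placeholders):

#include <linux/fs.h>
#include <linux/mm.h>

static int dmem_mmap(struct file *file, struct vm_area_struct *vma)
{
        unsigned long size = vma->vm_end - vma->vm_start;
        /* Placeholder: the file offset would select the dmem extent to map. */
        unsigned long pfn = vma->vm_pgoff;

        /*
         * remap_pfn_range() marks the vma VM_PFNMAP (and VM_IO), so no
         * struct page backs these mappings; KVM would resolve the PFNs via
         * fixup_user_fault() instead of get_user_pages().
         */
        return remap_pfn_range(vma, vma->vm_start, pfn, size,
                               vma->vm_page_prot);
}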

Paolo

2020-10-19 19:08:55

by Joao Martins

[permalink] [raw]
Subject: Re: [PATCH 00/35] Enhance memory utilization with DMEMFS

On 10/19/20 2:37 PM, Paolo Bonzini wrote:
> On 15/10/20 00:25, Dan Williams wrote:
>> Now, with recent device-dax extensions, it
>> also has a coarse grained memory management system for physical
>> address-space partitioning and a path for struct-page-less backing for
>> VMs. What feature gaps remain vs dmemfs, and can those gaps be closed
>> with incremental improvements to the 2 existing memory-management
>> systems?
>
> If I understand correctly, devm_memremap_pages() on ZONE_DEVICE memory
> would still create the "struct page" albeit lazily? KVM then would use
> the usual get_user_pages() path.
>
Correct.

The removal of struct page would be one of the additional incremental improvements, e.g. a
'map' sysfs attribute with a 'raw' option for dynamic dax regions that wouldn't online/create
the struct pages. The remaining plumbing (...)

> Looking more closely at the implementation of dmemfs, what I don't understand
> is why dmemfs needs VM_DMEM etc. and cannot provide access to mmap-ed
> memory using remap_pfn_range and VM_PFNMAP, just like /dev/mem. If it
> did that, KVM would get physical addresses using fixup_user_fault and
> would never need pfn_to_page() or get_user_pages(). I'm not saying that would
> instantly get an approval, but it would remove a lot of hooks.
>

(...) is similar to what you describe above. Albeit there's probably no need to do a
remap_pfn_range at mmap() time, as DAX supplies a fault/huge_fault handler. Also, using
remap_pfn_range means the mapping is limited to a single contiguous PFN chunk.
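
To sketch what I mean (untested, and the dax_pageless_* names are invented here),
the fault path would be along the lines of:

#include <linux/mm.h>

/* Hypothetical lookup of the PFN backing a given file offset. */
extern unsigned long dax_pageless_pgoff_to_pfn(struct file *file, pgoff_t pgoff);

static vm_fault_t dax_pageless_fault(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        unsigned long pfn = dax_pageless_pgoff_to_pfn(vma->vm_file, vmf->pgoff);

        /* Insert the raw PFN; no struct page is ever looked up or pinned. */
        return vmf_insert_pfn(vma, vmf->address, pfn);
}

static const struct vm_operations_struct dax_pageless_vm_ops = {
        .fault = dax_pageless_fault,
        /* .huge_fault would insert PMD/PUD-sized extents the same way. */
};

Each fault only inserts the PFN(s) behind that particular offset, so the backing
extents don't need to be one contiguous chunk.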

KVM has the bits to make it work without struct pages; I don't think there's a need for
new pg/pfn_t/VM_* bits (aside from relying on {PFN,PAGE}_SPECIAL), as mentioned at the
start of the thread. I'm storing my WIP here:

https://github.com/jpemartins/linux pageless-dax

It is based on the first series that was submitted earlier this year:

https://lore.kernel.org/kvm/[email protected]/

Joao