2017-06-23 15:00:40

by Bob Liu

Subject: Re: [HMM 00/15] HMM (Heterogeneous Memory Management) v23

Hi,

On Thu, May 25, 2017 at 1:20 AM, Jérôme Glisse <[email protected]> wrote:
> The patchset is on top of git://git.cmpxchg.org/linux-mmotm.git so I
> test the same kernel as the kbuild system, git branch:
>
> https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-v23
>
> The change since v22 is the use of a static key for the special
> ZONE_DEVICE case in put_page() and a build fix for architectures
> with no MMU.
>
> Everything else is the same. Below is the long description of what
> HMM is about and why. At the end of this email I briefly describe
> each patch and suggest reviewers for each of them.
>
>
> Heterogeneous Memory Management (HMM) (description and justification)
>
> Today device drivers expose a dedicated memory allocation API through
> their device file, often relying on a combination of IOCTL and mmap
> calls. The device can only access and use memory allocated through
> this API. This effectively splits the program address space into
> objects allocated for and usable by the device, and regular memory
> (malloc, mmap of a file, shared memory, …) accessible only by the CPU
> (or, in a very limited way, by a device through pinned memory).
>
> Allowing different isolated components of a program to use a device
> thus requires duplicating the input data structures through the device
> memory allocator. This is reasonable for simple data structures
> (arrays, grids, images, …) but it gets extremely complex with advanced
> data structures (lists, trees, graphs, …) that rely on a web of memory
> pointers. This is becoming a serious limitation on the kinds of
> workloads that can be offloaded to devices like GPUs.
>
> New industry standards like C++, OpenCL and CUDA are pushing to remove
> this barrier. This requires a shared address space between the GPU
> device and the CPU so that the GPU can access any memory of a process
> (while still obeying memory protections such as read-only). This kind
> of feature is also appearing in various other operating systems.
>
> HMM is a set of helpers to facilitate several aspects of address space
> sharing and device memory management. Unlike existing sharing mechanism

It looks like the address space sharing and device memory management
are two different things. They don't depend on each other and HMM has
helpers for both.

Is it possible to separate these two things into two patchsets?
That would make them easier to review and would also follow the
"Do one thing, and do it well" philosophy.

Thanks,
Bob Liu


2017-06-23 15:29:03

by Jerome Glisse

Subject: Re: [HMM 00/15] HMM (Heterogeneous Memory Management) v23

On Fri, Jun 23, 2017 at 11:00:37PM +0800, Bob Liu wrote:
> Hi,
>
> On Thu, May 25, 2017 at 1:20 AM, Jérôme Glisse <[email protected]> wrote:
> > The patchset is on top of git://git.cmpxchg.org/linux-mmotm.git so I
> > test the same kernel as the kbuild system, git branch:
> >
> > https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-v23
> >
> > The change since v22 is the use of a static key for the special
> > ZONE_DEVICE case in put_page() and a build fix for architectures
> > with no MMU.
> >
> > Everything else is the same. Below is the long description of what
> > HMM is about and why. At the end of this email I briefly describe
> > each patch and suggest reviewers for each of them.
> >
> >
> > Heterogeneous Memory Management (HMM) (description and justification)
> >
> > Today device drivers expose a dedicated memory allocation API through
> > their device file, often relying on a combination of IOCTL and mmap
> > calls. The device can only access and use memory allocated through
> > this API. This effectively splits the program address space into
> > objects allocated for and usable by the device, and regular memory
> > (malloc, mmap of a file, shared memory, …) accessible only by the CPU
> > (or, in a very limited way, by a device through pinned memory).
> >
> > Allowing different isolated components of a program to use a device
> > thus requires duplicating the input data structures through the device
> > memory allocator. This is reasonable for simple data structures
> > (arrays, grids, images, …) but it gets extremely complex with advanced
> > data structures (lists, trees, graphs, …) that rely on a web of memory
> > pointers. This is becoming a serious limitation on the kinds of
> > workloads that can be offloaded to devices like GPUs.
> >
> > New industry standards like C++, OpenCL and CUDA are pushing to remove
> > this barrier. This requires a shared address space between the GPU
> > device and the CPU so that the GPU can access any memory of a process
> > (while still obeying memory protections such as read-only). This kind
> > of feature is also appearing in various other operating systems.
> >
> > HMM is a set of helpers to facilitate several aspects of address space
> > sharing and device memory management. Unlike existing sharing mechanism
>
> It looks like the address space sharing and device memory management
> are two different things. They don't depend on each other and HMM has
> helpers for both.
>
> Is it possible to separate these two things into two patchsets?
> That would make them easier to review and would also follow the
> "Do one thing, and do it well" philosophy.
>

They are already separate. Patches 3-5 are for address space mirroring.
Patches 6-10 are for device memory using struct page and ZONE_DEVICE.
Finally, patches 11-15 add a new page migration helper capable of using
the device DMA engine to perform the memory copy operation.

Patch 1 is just common documentation and patch 2 is common helpers and
definitions.
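
To give a feel for the mirroring side (patches 3-5), below is a rough,
untested sketch of how a driver might register a mirror and react to CPU
page table invalidations. The hmm_mirror_register() /
sync_cpu_device_pagetables() names follow the series, but treat the
exact signatures, and the my_gpu_*() hooks, as illustrative assumptions
only.

/* Illustrative sketch only -- names/signatures assumed from the series. */
#include <linux/kernel.h>
#include <linux/hmm.h>
#include <linux/mm_types.h>

struct my_gpu_mirror {
	struct hmm_mirror mirror;	/* embedded HMM mirror */
	/* ... device specific state ... */
};

/* Hypothetical driver hook: tear down device mappings for [start, end). */
void my_gpu_invalidate_range(struct my_gpu_mirror *m,
			     unsigned long start, unsigned long end);

/* Called by HMM whenever CPU page tables change; the driver must
 * invalidate the matching range in the device page tables. */
static void my_gpu_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
					      enum hmm_update_type update,
					      unsigned long start,
					      unsigned long end)
{
	struct my_gpu_mirror *m;

	m = container_of(mirror, struct my_gpu_mirror, mirror);
	my_gpu_invalidate_range(m, start, end);
}

static const struct hmm_mirror_ops my_gpu_mirror_ops = {
	.sync_cpu_device_pagetables = my_gpu_sync_cpu_device_pagetables,
};

/* Start mirroring a process address space on the device. */
static int my_gpu_mirror_process(struct my_gpu_mirror *m,
				 struct mm_struct *mm)
{
	m->mirror.ops = &my_gpu_mirror_ops;
	return hmm_mirror_register(&m->mirror, mm);
}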

They are also separate at the kernel configuration level. So for all
intents and purposes these are already two separate things, just in one
posting because the first user will use both. You can use one without
the other and it will work properly.
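
As an example of using one side without the other, a driver could rely
only on the device memory and migration helpers without ever registering
a mirror. Below is a rough sketch of driving the new migration helper
(patches 11-15) with the device's own DMA engine; the migrate_vma()
interface and the alloc_and_copy / finalize_and_map callbacks follow the
series, but the exact signatures and the my_gpu_*() helpers are
assumptions for illustration.

/* Illustrative sketch only -- names/signatures assumed from the series. */
#include <linux/migrate.h>
#include <linux/mm.h>

/* Hypothetical driver hook: program the device DMA engine to copy the
 * pages selected for migration from src[] into the newly allocated
 * device pages recorded in dst[]. */
void my_gpu_dma_copy(unsigned long npages, const unsigned long *src,
		     unsigned long *dst);

static void my_gpu_alloc_and_copy(struct vm_area_struct *vma,
				  const unsigned long *src,
				  unsigned long *dst,
				  unsigned long start,
				  unsigned long end,
				  void *private)
{
	/* Allocate device pages for each migrating entry and let the
	 * device DMA engine perform the copy. */
	my_gpu_dma_copy((end - start) >> PAGE_SHIFT, src, dst);
}

static void my_gpu_finalize_and_map(struct vm_area_struct *vma,
				    const unsigned long *src,
				    const unsigned long *dst,
				    unsigned long start,
				    unsigned long end,
				    void *private)
{
	/* Migration is now final: update device page tables here. */
}

static const struct migrate_vma_ops my_gpu_migrate_ops = {
	.alloc_and_copy		= my_gpu_alloc_and_copy,
	.finalize_and_map	= my_gpu_finalize_and_map,
};

/* Migrate a range of a process address space to device memory. */
static int my_gpu_migrate_range(struct vm_area_struct *vma,
				unsigned long start, unsigned long end,
				unsigned long *src, unsigned long *dst)
{
	return migrate_vma(&my_gpu_migrate_ops, vma, start, end,
			   src, dst, NULL);
}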

Cheers,
Jérôme