2019-06-26 12:30:37

by Christoph Hellwig

Subject: dev_pagemap related cleanups v3

Hi Dan, Jérôme and Jason,

below is a series that cleans up the dev_pagemap interface so that
it is more easily usable. This removes the need to wrap it in hmm
and thus allows killing a lot of code.
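
As a rough illustration of what "directly usable" means here, this is the
shape of a device-private registration once the series is applied (my_dev,
my_res and my_pagemap_ops are placeholder names, not code from the patches):

	struct dev_pagemap pgmap = { };
	void *addr;

	pgmap.type = MEMORY_DEVICE_PRIVATE;	/* type is checked by devm_memremap_pages */
	pgmap.res = *my_res;			/* physical range to hot-add */
	pgmap.ops = &my_pagemap_ops;		/* page_free, migrate_to_ram, ... */

	addr = devm_memremap_pages(my_dev, &pgmap);
	if (IS_ERR(addr))
		return PTR_ERR(addr);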

Note: this series is on top of Linux 5.2-rc5 and has some minor
conflicts with the hmm tree that are easy to resolve.

Diffstat summary:

32 files changed, 361 insertions(+), 1012 deletions(-)

Git tree:

git://git.infradead.org/users/hch/misc.git hmm-devmem-cleanup.3

Gitweb:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/hmm-devmem-cleanup.3


Changes since v2:
- fix nvdimm kunit build
- add a new memory type for device dax
- fix a few issues in intermediate patches that didn't show up in the end
result
- incorporate feedback from Michal Hocko, including killing of
the DEVICE_PUBLIC memory type entirely

Changes since v1:
- rebase
- also switch p2pdma to the internal refcount
- add type checking for pgmap->type
- rename the migrate method to migrate_to_ram
- cleanup the altmap_valid flag
- various tidbits from the reviews


2019-06-26 12:30:48

by Christoph Hellwig

Subject: [PATCH 24/25] mm: remove the HMM config option

All the mm/hmm.c code is better keyed off HMM_MIRROR. Also let nouveau
depend on it instead of the mix of a dummy dependency symbol plus the
actually selected one. Drop various odd dependencies, as the code is
pretty portable.
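
As a hedged illustration (the mydriver_* name is hypothetical), code that was
previously guarded by the CONFIG_HMM umbrella symbol is now simply keyed off
CONFIG_HMM_MIRROR, matching the include/linux/mm_types.h hunk below:

	#ifdef CONFIG_HMM_MIRROR	/* previously: #if IS_ENABLED(CONFIG_HMM) */
	#include <linux/hmm.h>

	static void mydriver_init_mirror(struct mm_struct *mm)
	{
		/* mm->hmm and the hmm_mirror API are available here */
	}
	#endif /* CONFIG_HMM_MIRROR */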

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/gpu/drm/nouveau/Kconfig | 3 +--
include/linux/hmm.h | 5 +----
include/linux/mm_types.h | 2 +-
mm/Kconfig | 27 ++++-----------------------
mm/Makefile | 2 +-
mm/hmm.c | 2 --
6 files changed, 8 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 6303d203ab1d..66c839d8e9d1 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -84,11 +84,10 @@ config DRM_NOUVEAU_BACKLIGHT

config DRM_NOUVEAU_SVM
bool "(EXPERIMENTAL) Enable SVM (Shared Virtual Memory) support"
- depends on ARCH_HAS_HMM
depends on DEVICE_PRIVATE
depends on DRM_NOUVEAU
+ depends on HMM_MIRROR
depends on STAGING
- select HMM_MIRROR
default n
help
Say Y here if you want to enable experimental support for
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 3d00e9550e77..b697496e85ba 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -62,7 +62,7 @@
#include <linux/kconfig.h>
#include <asm/pgtable.h>

-#if IS_ENABLED(CONFIG_HMM)
+#ifdef CONFIG_HMM_MIRROR

#include <linux/device.h>
#include <linux/migrate.h>
@@ -332,9 +332,6 @@ static inline uint64_t hmm_pfn_from_pfn(const struct hmm_range *range,
return hmm_device_entry_from_pfn(range, pfn);
}

-
-
-#if IS_ENABLED(CONFIG_HMM_MIRROR)
/*
* Mirroring: how to synchronize device page table with CPU page table.
*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f33a1289c101..8d37182f8dbe 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -501,7 +501,7 @@ struct mm_struct {
#endif
struct work_struct async_put_work;

-#if IS_ENABLED(CONFIG_HMM)
+#ifdef CONFIG_HMM_MIRROR
/* HMM needs to track a few things per mm */
struct hmm *hmm;
#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index eecf037a54b3..1e426c26b1d6 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -669,37 +669,18 @@ config ZONE_DEVICE

If FS_DAX is enabled, then say Y.

-config ARCH_HAS_HMM_MIRROR
- bool
- default y
- depends on (X86_64 || PPC64)
- depends on MMU && 64BIT
-
-config ARCH_HAS_HMM
- bool
- depends on (X86_64 || PPC64)
- depends on ZONE_DEVICE
- depends on MMU && 64BIT
- depends on MEMORY_HOTPLUG
- depends on MEMORY_HOTREMOVE
- depends on SPARSEMEM_VMEMMAP
- default y
-
config MIGRATE_VMA_HELPER
bool

config DEV_PAGEMAP_OPS
bool

-config HMM
- bool
- select MMU_NOTIFIER
- select MIGRATE_VMA_HELPER
-
config HMM_MIRROR
bool "HMM mirror CPU page table into a device page table"
- depends on ARCH_HAS_HMM
- select HMM
+ depends on (X86_64 || PPC64)
+ depends on MMU && 64BIT
+ select MMU_NOTIFIER
+ select MIGRATE_VMA_HELPER
help
Select HMM_MIRROR if you want to mirror range of the CPU page table of a
process into a device page table. Here, mirror means "keep synchronized".
diff --git a/mm/Makefile b/mm/Makefile
index ac5e5ba78874..91c99040065c 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -102,5 +102,5 @@ obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
obj-$(CONFIG_PERCPU_STATS) += percpu-stats.o
-obj-$(CONFIG_HMM) += hmm.o
+obj-$(CONFIG_HMM_MIRROR) += hmm.o
obj-$(CONFIG_MEMFD_CREATE) += memfd.o
diff --git a/mm/hmm.c b/mm/hmm.c
index 90ca0cdab9db..d62ce64d6bca 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -25,7 +25,6 @@
#include <linux/mmu_notifier.h>
#include <linux/memory_hotplug.h>

-#if IS_ENABLED(CONFIG_HMM_MIRROR)
static const struct mmu_notifier_ops hmm_mmu_notifier_ops;

static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
@@ -1326,4 +1325,3 @@ long hmm_range_dma_unmap(struct hmm_range *range,
return cpages;
}
EXPORT_SYMBOL(hmm_range_dma_unmap);
-#endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
--
2.20.1

2019-06-26 12:31:03

by Christoph Hellwig

Subject: [PATCH 12/25] memremap: add a migrate_to_ram method to struct dev_pagemap_ops

This replaces the hacky ->fault callback, which is currently called
directly from common code through an hmm-specific data structure, as an
exercise in layering violations.
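
For illustration, a driver-side implementation of the new method might look
roughly like this (struct my_devmem and my_do_migrate are hypothetical names;
the real in-tree user is the mm/hmm.c hunk below):

	struct my_devmem {
		struct dev_pagemap pagemap;
		/* device-private state */
	};

	static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
	{
		struct my_devmem *devmem =
			container_of(vmf->page->pgmap, struct my_devmem, pagemap);

		/*
		 * Migrate vmf->page (un-addressable device memory) back to a
		 * system RAM page the CPU can access; return 0 on success or
		 * VM_FAULT_SIGBUS if the migration failed.
		 */
		return my_do_migrate(devmem, vmf);
	}

	static const struct dev_pagemap_ops my_pagemap_ops = {
		.migrate_to_ram	= my_migrate_to_ram,
	};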

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Ralph Campbell <[email protected]>
---
include/linux/hmm.h | 6 ------
include/linux/memremap.h | 6 ++++++
include/linux/swapops.h | 15 ---------------
kernel/memremap.c | 35 ++++-------------------------------
mm/hmm.c | 13 +++++--------
mm/memory.c | 9 ++-------
6 files changed, 17 insertions(+), 67 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 44a5ac738bb5..ba19c19e24ed 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -692,11 +692,6 @@ struct hmm_devmem_ops {
* chunk, as an optimization. It must, however, prioritize the faulting address
* over all the others.
*/
-typedef vm_fault_t (*dev_page_fault_t)(struct vm_area_struct *vma,
- unsigned long addr,
- const struct page *page,
- unsigned int flags,
- pmd_t *pmdp);

struct hmm_devmem {
struct completion completion;
@@ -707,7 +702,6 @@ struct hmm_devmem {
struct dev_pagemap pagemap;
const struct hmm_devmem_ops *ops;
struct percpu_ref ref;
- dev_page_fault_t page_fault;
};

/*
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index b8666a0d8665..ac985bd03a7f 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -80,6 +80,12 @@ struct dev_pagemap_ops {
* Wait for refcount in struct dev_pagemap to be idle and reap it.
*/
void (*cleanup)(struct dev_pagemap *pgmap);
+
+ /*
+ * Used for private (un-addressable) device memory only. Must migrate
+ * the page back to a CPU accessible page.
+ */
+ vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf);
};

/**
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 4d961668e5fc..15bdb6fe71e5 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -129,12 +129,6 @@ static inline struct page *device_private_entry_to_page(swp_entry_t entry)
{
return pfn_to_page(swp_offset(entry));
}
-
-vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
- unsigned long addr,
- swp_entry_t entry,
- unsigned int flags,
- pmd_t *pmdp);
#else /* CONFIG_DEVICE_PRIVATE */
static inline swp_entry_t make_device_private_entry(struct page *page, bool write)
{
@@ -164,15 +158,6 @@ static inline struct page *device_private_entry_to_page(swp_entry_t entry)
{
return NULL;
}
-
-static inline vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
- unsigned long addr,
- swp_entry_t entry,
- unsigned int flags,
- pmd_t *pmdp)
-{
- return VM_FAULT_SIGBUS;
-}
#endif /* CONFIG_DEVICE_PRIVATE */

#ifdef CONFIG_MIGRATION
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 3219a4c91d07..c06a5487dda7 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -11,7 +11,6 @@
#include <linux/types.h>
#include <linux/wait_bit.h>
#include <linux/xarray.h>
-#include <linux/hmm.h>

static DEFINE_XARRAY(pgmap_array);
#define SECTION_MASK ~((1UL << PA_SECTION_SHIFT) - 1)
@@ -46,36 +45,6 @@ static int devmap_managed_enable_get(struct device *dev, struct dev_pagemap *pgm
}
#endif /* CONFIG_DEV_PAGEMAP_OPS */

-#if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
-vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
- unsigned long addr,
- swp_entry_t entry,
- unsigned int flags,
- pmd_t *pmdp)
-{
- struct page *page = device_private_entry_to_page(entry);
- struct hmm_devmem *devmem;
-
- devmem = container_of(page->pgmap, typeof(*devmem), pagemap);
-
- /*
- * The page_fault() callback must migrate page back to system memory
- * so that CPU can access it. This might fail for various reasons
- * (device issue, device was unsafely unplugged, ...). When such
- * error conditions happen, the callback must return VM_FAULT_SIGBUS.
- *
- * Note that because memory cgroup charges are accounted to the device
- * memory, this should never fail because of memory restrictions (but
- * allocation of regular system page might still fail because we are
- * out of memory).
- *
- * There is a more in-depth description of what that callback can and
- * cannot do, in include/linux/memremap.h
- */
- return devmem->page_fault(vma, addr, page, flags, pmdp);
-}
-#endif /* CONFIG_DEVICE_PRIVATE */
-
static void pgmap_array_delete(struct resource *res)
{
xa_store_range(&pgmap_array, PHYS_PFN(res->start), PHYS_PFN(res->end),
@@ -193,6 +162,10 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
WARN(1, "Device private memory not supported\n");
return ERR_PTR(-EINVAL);
}
+ if (!pgmap->ops || !pgmap->ops->migrate_to_ram) {
+ WARN(1, "Missing migrate_to_ram method\n");
+ return ERR_PTR(-EINVAL);
+ }
break;
case MEMORY_DEVICE_FS_DAX:
if (!IS_ENABLED(CONFIG_ZONE_DEVICE) ||
diff --git a/mm/hmm.c b/mm/hmm.c
index 5b0bd5f6a74f..96633ee066d8 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1366,15 +1366,12 @@ static void hmm_devmem_ref_kill(struct dev_pagemap *pgmap)
percpu_ref_kill(pgmap->ref);
}

-static vm_fault_t hmm_devmem_fault(struct vm_area_struct *vma,
- unsigned long addr,
- const struct page *page,
- unsigned int flags,
- pmd_t *pmdp)
+static vm_fault_t hmm_devmem_migrate_to_ram(struct vm_fault *vmf)
{
- struct hmm_devmem *devmem = page->pgmap->data;
+ struct hmm_devmem *devmem = vmf->page->pgmap->data;

- return devmem->ops->fault(devmem, vma, addr, page, flags, pmdp);
+ return devmem->ops->fault(devmem, vmf->vma, vmf->address, vmf->page,
+ vmf->flags, vmf->pmd);
}

static void hmm_devmem_free(struct page *page, void *data)
@@ -1388,6 +1385,7 @@ static const struct dev_pagemap_ops hmm_pagemap_ops = {
.page_free = hmm_devmem_free,
.kill = hmm_devmem_ref_kill,
.cleanup = hmm_devmem_ref_exit,
+ .migrate_to_ram = hmm_devmem_migrate_to_ram,
};

/*
@@ -1438,7 +1436,6 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
devmem->pfn_last = devmem->pfn_first +
(resource_size(devmem->resource) >> PAGE_SHIFT);
- devmem->page_fault = hmm_devmem_fault;

devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
devmem->pagemap.res = *devmem->resource;
diff --git a/mm/memory.c b/mm/memory.c
index bd21e7063bf0..293d2936fd6c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2748,13 +2748,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
migration_entry_wait(vma->vm_mm, vmf->pmd,
vmf->address);
} else if (is_device_private_entry(entry)) {
- /*
- * For un-addressable device memory we call the pgmap
- * fault handler callback. The callback must migrate
- * the page back to some CPU accessible page.
- */
- ret = device_private_entry_fault(vma, vmf->address, entry,
- vmf->flags, vmf->pmd);
+ vmf->page = device_private_entry_to_page(entry);
+ ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
} else if (is_hwpoison_entry(entry)) {
ret = VM_FAULT_HWPOISON;
} else {
--
2.20.1

2019-06-26 21:39:14

by Ira Weiny

Subject: Re: [PATCH 24/25] mm: remove the HMM config option

On Wed, Jun 26, 2019 at 02:27:23PM +0200, Christoph Hellwig wrote:
> All the mm/hmm.c code is better keyed off HMM_MIRROR. Also let nouveau
> depend on it instead of the mix of a dummy dependency symbol plus the
> actually selected one. Drop various odd dependencies, as the code is
> pretty portable.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Seems reasonable to me.

Reviewed-by: Ira Weiny <[email protected]>


2019-06-27 16:30:23

by Jason Gunthorpe

Subject: Re: [PATCH 12/25] memremap: add a migrate_to_ram method to struct dev_pagemap_ops

On Wed, Jun 26, 2019 at 02:27:11PM +0200, Christoph Hellwig wrote:
> This replaces the hacky ->fault callback, which is currently called
> directly from common code through an hmm-specific data structure, as an
> exercise in layering violations.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> Reviewed-by: Ralph Campbell <[email protected]>
> ---
> include/linux/hmm.h | 6 ------
> include/linux/memremap.h | 6 ++++++
> include/linux/swapops.h | 15 ---------------
> kernel/memremap.c | 35 ++++-------------------------------
> mm/hmm.c | 13 +++++--------
> mm/memory.c | 9 ++-------
> 6 files changed, 17 insertions(+), 67 deletions(-)

Reviewed-by: Jason Gunthorpe <[email protected]>

I've heard there are some other use models for fault() here beyond
migrate to ram, but we can rename it if we ever see them.

> +static vm_fault_t hmm_devmem_migrate_to_ram(struct vm_fault *vmf)
> {
> - struct hmm_devmem *devmem = page->pgmap->data;
> + struct hmm_devmem *devmem = vmf->page->pgmap->data;
>
> - return devmem->ops->fault(devmem, vma, addr, page, flags, pmdp);
> + return devmem->ops->fault(devmem, vmf->vma, vmf->address, vmf->page,
> + vmf->flags, vmf->pmd);
> }

Next cycle we should probably rename this fault to migrate_to_ram as
well and pass in the vmf..

Jason

2019-06-27 16:30:46

by Jason Gunthorpe

Subject: Re: [PATCH 24/25] mm: remove the HMM config option

On Wed, Jun 26, 2019 at 02:27:23PM +0200, Christoph Hellwig wrote:
> All the mm/hmm.c code is better keyed off HMM_MIRROR. Also let nouveau
> depend on it instead of the mix of a dummy dependency symbol plus the
> actually selected one. Drop various odd dependencies, as the code is
> pretty portable.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> drivers/gpu/drm/nouveau/Kconfig | 3 +--
> include/linux/hmm.h | 5 +----
> include/linux/mm_types.h | 2 +-
> mm/Kconfig | 27 ++++-----------------------
> mm/Makefile | 2 +-
> mm/hmm.c | 2 --
> 6 files changed, 8 insertions(+), 33 deletions(-)

Makes more sense to me too

Reviewed-by: Jason Gunthorpe <[email protected]>

Jason

2019-06-27 17:03:50

by Christoph Hellwig

Subject: Re: [PATCH 12/25] memremap: add a migrate_to_ram method to struct dev_pagemap_ops

On Thu, Jun 27, 2019 at 04:29:45PM +0000, Jason Gunthorpe wrote:
> I'ver heard there are some other use models for fault() here beyond
> migrate to ram, but we can rename it if we ever see them.

Well, it absolutely needs to migrate to some piece of addressable
and coherent memory, so ram might be a nice shortcut for that.

> > +static vm_fault_t hmm_devmem_migrate_to_ram(struct vm_fault *vmf)
> > {
> > - struct hmm_devmem *devmem = page->pgmap->data;
> > + struct hmm_devmem *devmem = vmf->page->pgmap->data;
> >
> > - return devmem->ops->fault(devmem, vma, addr, page, flags, pmdp);
> > + return devmem->ops->fault(devmem, vmf->vma, vmf->address, vmf->page,
> > + vmf->flags, vmf->pmd);
> > }
>
> Next cycle we should probably rename this fault to migrate_to_ram as
> well and pass in the vmf..

That ->fault op goes away entirely in one of the next patches in the
series.