2010-12-17 04:16:40

by Cho KyongHo

Subject: [RFCv2,0/8] mm: vcm: The Virtual Memory Manager for multiple IOMMUs

Hello,

The VCM is a framework for dealing with multiple IOMMUs in a system
through intuitive and abstract objects.
These patches are a bug-fixed and enhanced version of the previous RFC by Michal Nazarewicz:
(https://patchwork.kernel.org/patch/157451/)

The VCM concept was introduced by Zach Pfeffer and this version was implemented by Michal Nazarewicz.
These patches contain an entirely new implementation of the VCM, different from the one submitted by Zach Pfeffer.

The prerequisites of these patches are the following, by Michal Nazarewicz:
https://patchwork.kernel.org/patch/340281/
https://patchwork.kernel.org/patch/414381/
https://patchwork.kernel.org/patch/414541/

In addition to the above patches,
the prerequisite of "[RFCv2,7/8] mm: vcm: vcm-cma: VCM CMA driver added" is
CMA RFCv8 introduced by Michal Nazarewicz:
https://patchwork.kernel.org/patch/414351/

The VCM also works correctly without "[RFCv2,7/8] mm: vcm: vcm-cma: VCM CMA driver added".

The last patch, "[RFCv2,8/8] mm: vcm: Sample driver added", is not meant to be merged
but is an example that shows how to use the VCM.

The VCM provides generic interfaces and objects to deal with IOMMUs in various architectures,
especially the ones that embed multiple IOMMUs, including GART.

Changelog:
v2: 1. Added reference counting on a reservation.
When vcm_reserve() creates a reservation, it sets the reservation's reference
counter to 1. The ownership of the reservation belongs solely to
the caller of vcm_reserve(). If the caller passes the reservation to other
functions, those functions must increment the reference counter
with vcm_ref_reserve() to take ownership of the reservation.
To release ownership, just call vcm_unreserve(). vcm_unreserve() decrements
the reference counter of the given reservation and eventually
unreserves the reservation when the counter reaches 0.
2. Applied the design changes of CMA by Michal Nazarewicz.
Since CMA has changed dramatically, vcm-cma followed suit.
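
As a rough illustration, the ownership rules from item 1 can be sketched as follows. This is a hypothetical caller/callee pair; the exact signature of vcm_reserve() (in particular its flags argument) is assumed from the framework patch and error handling is elided:

```c
/*
 * Hypothetical sketch of reservation reference counting; function
 * names are from this RFC, signatures are assumed, errors elided.
 */
#include <linux/vcm.h>

static struct vcm_res *shared_res;

/* a callee takes its own reference before keeping the pointer */
static void consumer_take(struct vcm_res *res)
{
	vcm_ref_reserve(res);		/* refcount: 1 -> 2 */
	shared_res = res;
}

static void consumer_release(void)
{
	vcm_unreserve(shared_res);	/* refcount: 2 -> 1 */
	shared_res = NULL;
}

static void example(struct vcm *vcm)
{
	/* vcm_reserve() creates the reservation with refcount == 1 */
	struct vcm_res *res = vcm_reserve(vcm, PAGE_SIZE, 0);

	consumer_take(res);
	/* the caller drops its ownership; reservation stays alive */
	vcm_unreserve(res);		/* refcount: 2 -> 1 */
	/* the last vcm_unreserve() actually unreserves (refcount 0) */
	consumer_release();
}
```

The point of the scheme is that every holder of a vcm_res pointer pairs one reference with one vcm_unreserve() call, so ownership can be shared or migrated without coordinating who frees.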

Patch list:
[RFCv2,1/8] mm: vcm: Virtual Contiguous Memory framework added
[RFCv2,2/8] mm: vcm: reference counting on a reservation added
[RFCv2,3/8] mm: vcm: physical memory allocator added
[RFCv2,4/8] mm: vcm: VCM VMM driver added
[RFCv2,5/8] mm: vcm: VCM MMU wrapper added
[RFCv2,6/8] mm: vcm: VCM One-to-One wrapper added
[RFCv2,7/8] mm: vcm: vcm-cma: VCM CMA driver added
[RFCv2,8/8] mm: vcm: Sample driver added

Summary:
Documentation/00-INDEX | 2 +
Documentation/virtual-contiguous-memory.txt | 940 +++++++++++++++++++++++++
include/linux/vcm-cma.h | 38 +
include/linux/vcm-drv.h | 326 +++++++++
include/linux/vcm-sample.h | 30 +
include/linux/vcm.h | 311 +++++++++
mm/Kconfig | 79 +++
mm/Makefile | 3 +
mm/vcm-cma.c | 99 +++
mm/vcm-sample.c | 119 ++++
mm/vcm.c | 987 +++++++++++++++++++++++++++
11 files changed, 2934 insertions(+), 0 deletions(-)
create mode 100644 Documentation/virtual-contiguous-memory.txt
create mode 100644 include/linux/vcm-cma.h
create mode 100644 include/linux/vcm-drv.h
create mode 100644 include/linux/vcm-sample.h
create mode 100644 include/linux/vcm.h
create mode 100644 mm/vcm-cma.c
create mode 100644 mm/vcm-sample.c
create mode 100644 mm/vcm.c


2010-12-17 04:16:46

by Cho KyongHo

Subject: [RFCv2,6/8] mm: vcm: VCM One-to-One wrapper added

From: Michal Nazarewicz <[email protected]>

This commit adds a VCM One-to-One wrapper which is meant to be
helper code for creating VCM drivers for "fake" MMUs, i.e.
situations where there is no real hardware MMU and memory
must be physically contiguous and mapped directly to the "virtual"
address space.

Signed-off-by: Michal Nazarewicz <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
---
Documentation/virtual-contiguous-memory.txt | 33 ++++++++++
include/linux/vcm-drv.h | 41 ++++++++++++
mm/Kconfig | 10 +++
mm/vcm.c | 90 +++++++++++++++++++++++++++
4 files changed, 174 insertions(+), 0 deletions(-)

diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
index 9036abe..070685b 100644
--- a/Documentation/virtual-contiguous-memory.txt
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -883,6 +883,39 @@ rather than the whole mapping. It basically incorporates a call to the
vcm_phys_walk() function so the driver does not need to call it
explicitly.

+** Writing a one-to-one VCM driver
+
+Similarly to the wrapper for a real hardware MMU, a wrapper for
+one-to-one VCM contexts has been created. It implements all of the
+housekeeping operations and leaves only contiguous memory management
+(that is allocating and freeing contiguous regions) to the VCM O2O
+driver.
+
+As with other drivers, a one-to-one driver needs to provide a context
+creation function. It needs to allocate space for a vcm_o2o structure
+and initialise its vcm.start, vcm.size and driver fields. Calling
+vcm_o2o_init() will fill the other fields and validate the entered values:
+
+ struct vcm *__must_check vcm_o2o_init(struct vcm_o2o *o2o);
+
+There are the following two operations used by the wrapper:
+
+ void (*cleanup)(struct vcm *vcm);
+ struct vcm_phys *(*phys)(struct vcm *vcm, resource_size_t size,
+ unsigned flags);
+
+The cleanup operation cleans the context and frees all resources. If
+not provided, kfree() is used.
+
+The phys operation is used in the same way as the core driver's phys
+operation. The only difference is that it must return a physically
+contiguous memory block -- i.e. the returned structure must have only
+one part. On error, the operation must return an error-pointer. It is
+required.
+
+Note that to use the VCM one-to-one wrapper one needs to select the
+VCM_O2O Kconfig option or otherwise the wrapper won't be available.
+
* Epilogue

The initial version of the VCM framework was written by Zach Pfeffer
diff --git a/include/linux/vcm-drv.h b/include/linux/vcm-drv.h
index 98d065b..d7d97de 100644
--- a/include/linux/vcm-drv.h
+++ b/include/linux/vcm-drv.h
@@ -194,6 +194,47 @@ struct vcm *__must_check vcm_mmu_init(struct vcm_mmu *mmu);

#endif

+#ifdef CONFIG_VCM_O2O
+
+/**
+ * struct vcm_o2o_driver - VCM One-to-One driver
+ * @cleanup: cleans up the VCM context; if not specified, kfree() is used.
+ * @phys: allocates a physically contiguous memory block; this is used in
+ * the same way &struct vcm_driver's phys is used except that it must
+ * provide a contiguous block (i.e. exactly one part); required.
+ */
+struct vcm_o2o_driver {
+ void (*cleanup)(struct vcm *vcm);
+ struct vcm_phys *(*phys)(struct vcm *vcm, resource_size_t size,
+ unsigned flags);
+};
+
+/**
+ * struct vcm_o2o - VCM One-to-One context
+ * @vcm: VCM context.
+ * @driver: VCM One-to-One driver's operations.
+ */
+struct vcm_o2o {
+ struct vcm vcm;
+ const struct vcm_o2o_driver *driver;
+};
+
+/**
+ * vcm_o2o_init() - initialises a VCM context for a one-to-one context.
+ * @o2o: the vcm_o2o context to initialise.
+ *
+ * This function initialises the vcm_o2o structure created by an O2O
+ * driver when setting things up. It sets up all fields of the
+ * structure except for @o2o->vcm.start, @o2o->vcm.size and
+ * @o2o->driver, which are validated by this function. If they have
+ * invalid values, the function produces a warning and returns an
+ * error-pointer. On any other error, an error-pointer is returned as
+ * well. If everything is fine, the address of @o2o->vcm is returned.
+ */
+struct vcm *__must_check vcm_o2o_init(struct vcm_o2o *o2o);
+
+#endif
+
#ifdef CONFIG_VCM_PHYS

/**
diff --git a/mm/Kconfig b/mm/Kconfig
index e91499d..53328d2 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -380,6 +380,16 @@ config VCM_MMU
will be automatically selected. You select it if you are going to
build external modules that will use this functionality.

+config VCM_O2O
+ bool "VCM O2O wrapper"
+ depends on VCM && MODULES
+ help
+ This enables the VCM one-to-one wrapper which helps creating VCM
+ drivers for devices without IO MMUs. If a VCM driver is built that
+ requires this option, it will be automatically selected. You select
+ it if you are going to build external modules that will use this
+ functionality.
+
#
# UP and nommu archs use km based percpu allocator
#
diff --git a/mm/vcm.c b/mm/vcm.c
index 0d74e95..312540a 100644
--- a/mm/vcm.c
+++ b/mm/vcm.c
@@ -648,6 +648,96 @@ EXPORT_SYMBOL_GPL(vcm_mmu_init);
#endif


+/**************************** One-to-One wrapper ****************************/
+
+#ifdef CONFIG_VCM_O2O
+
+static void vcm_o2o_cleanup(struct vcm *vcm)
+{
+ struct vcm_o2o *o2o = container_of(vcm, struct vcm_o2o, vcm);
+ if (o2o->driver->cleanup)
+ o2o->driver->cleanup(vcm);
+ else
+ kfree(o2o);
+}
+
+static struct vcm_phys *
+vcm_o2o_phys(struct vcm *vcm, resource_size_t size, unsigned flags)
+{
+ struct vcm_o2o *o2o = container_of(vcm, struct vcm_o2o, vcm);
+ struct vcm_phys *phys;
+
+ phys = o2o->driver->phys(vcm, size, flags);
+ if (!IS_ERR(phys) &&
+ WARN_ON(!phys->free || !phys->parts->size ||
+ phys->parts->size < size ||
+ ((phys->parts->start | phys->parts->size) &
+ ~PAGE_MASK))) {
+ if (phys->free)
+ phys->free(phys);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return phys;
+}
+
+static struct vcm_res *
+vcm_o2o_map(struct vcm *vcm, struct vcm_phys *phys, unsigned flags)
+{
+ struct vcm_res *res;
+
+ if (phys->count != 1)
+ return ERR_PTR(-EOPNOTSUPP);
+
+ if (!phys->parts->size
+ || ((phys->parts->start | phys->parts->size) & ~PAGE_MASK))
+ return ERR_PTR(-EINVAL);
+
+ res = kmalloc(sizeof *res, GFP_KERNEL);
+ if (!res)
+ return ERR_PTR(-ENOMEM);
+
+ res->start = phys->parts->start;
+ res->res_size = phys->parts->size;
+ return res;
+}
+
+static int vcm_o2o_bind(struct vcm_res *res, struct vcm_phys *phys)
+{
+ if (phys->count != 1)
+ return -EOPNOTSUPP;
+
+ if (!phys->parts->size
+ || ((phys->parts->start | phys->parts->size) & ~PAGE_MASK))
+ return -EINVAL;
+
+ if (res->start != phys->parts->start)
+ return -EOPNOTSUPP;
+
+ return 0;
+}
+
+struct vcm *__must_check vcm_o2o_init(struct vcm_o2o *o2o)
+{
+ static const struct vcm_driver driver = {
+ .cleanup = vcm_o2o_cleanup,
+ .phys = vcm_o2o_phys,
+ .map = vcm_o2o_map,
+ .bind = vcm_o2o_bind,
+ .unreserve = (void (*)(struct vcm_res *))kfree,
+ };
+
+ if (WARN_ON(!o2o || !o2o->driver || !o2o->driver->phys))
+ return ERR_PTR(-EINVAL);
+
+ o2o->vcm.driver = &driver;
+ return vcm_init(&o2o->vcm);
+}
+EXPORT_SYMBOL_GPL(vcm_o2o_init);
+
+#endif
+
+
/************************ Physical memory management ************************/

#ifdef CONFIG_VCM_PHYS
--
1.6.2.5
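
To make the wrapper's contract concrete, here is a hypothetical one-to-one VCM driver built on top of it. The backing allocator and the 16MiB range are made up for illustration; only the phys callback is mandatory, and the struct vcm_phys field names follow the physical allocator patch in this series:

```c
/*
 * Hypothetical O2O driver sketch. Where a real driver would grab
 * memory from its carveout allocator, this sketch leaves a
 * placeholder physical address.
 */
#include <linux/vcm-drv.h>
#include <linux/err.h>
#include <linux/slab.h>

static void my_phys_free(struct vcm_phys *phys)
{
	/* release the underlying contiguous memory here, then: */
	kfree(phys);
}

static struct vcm_phys *
my_phys(struct vcm *vcm, resource_size_t size, unsigned flags)
{
	/* exactly one part: the o2o wrapper requires contiguous memory */
	struct vcm_phys *phys;

	phys = kzalloc(sizeof *phys + sizeof *phys->parts, GFP_KERNEL);
	if (!phys)
		return ERR_PTR(-ENOMEM);

	phys->count = 1;
	phys->size = size;
	phys->free = my_phys_free;
	/* page-aligned physical address from your allocator goes here */
	phys->parts->start = 0 /* placeholder */;
	phys->parts->size = size;
	return phys;
}

static const struct vcm_o2o_driver my_o2o_driver = {
	/* .cleanup omitted: the wrapper falls back to kfree() */
	.phys = my_phys,
};

struct vcm *my_vcm_create(void)
{
	struct vcm_o2o *o2o = kzalloc(sizeof *o2o, GFP_KERNEL);
	struct vcm *vcm;

	if (!o2o)
		return ERR_PTR(-ENOMEM);

	o2o->vcm.start = 0;
	o2o->vcm.size = 16 << 20;	/* 16MiB, example value only */
	o2o->driver = &my_o2o_driver;

	vcm = vcm_o2o_init(o2o);
	if (IS_ERR(vcm))
		kfree(o2o);
	return vcm;
}
```

Note that vcm_o2o_phys() WARN()s and fails if the returned block is not page-aligned or has no free callback, so the sketch fills both.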

2010-12-17 04:16:44

by Cho KyongHo

Subject: [RFCv2,4/8] mm: vcm: VCM VMM driver added

From: Michal Nazarewicz <[email protected]>

This commit adds a VCM VMM driver that handles kernel virtual
address space mappings. The VCM context is available as a static
object, vcm_vmm. It is mostly just a wrapper around the vmap()
function.

Signed-off-by: Michal Nazarewicz <[email protected]>
---
Documentation/virtual-contiguous-memory.txt | 22 +++++-
include/linux/vcm.h | 13 +++
mm/vcm.c | 108 +++++++++++++++++++++++++++
3 files changed, 140 insertions(+), 3 deletions(-)

diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
index 10a0638..c830b69 100644
--- a/Documentation/virtual-contiguous-memory.txt
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -510,6 +510,25 @@ state.

The following VCM drivers are provided:

+** Virtual Memory Manager driver
+
+The Virtual Memory Manager driver is available as vcm_vmm and lets one map
+VCM managed physical memory into kernel space. The calls that this
+driver supports are:
+
+ vcm_make_binding()
+ vcm_destroy_binding()
+
+ vcm_alloc()
+
+ vcm_map()
+ vcm_unmap()
+
+vcm_map() is likely to work with physical memory allocated in the context
+of other drivers as well (the only requirement is that the "page" field of
+struct vcm_phys_part is set for all physically contiguous parts
+and that each part's size is a multiple of PAGE_SIZE).
+
** Real hardware drivers

There are no real hardware drivers at this time.
@@ -793,6 +812,3 @@ rewritten by Michal Nazarewicz <[email protected]>.
The new version is still lacking a few important features. Most
notably, no real hardware MMU has been implemented yet. This may be
ported from original Zach's proposal.
-
-Also, support for VMM is lacking. This is another thing that can be
-ported from Zach's proposal.
diff --git a/include/linux/vcm.h b/include/linux/vcm.h
index 3d54f18..800b5a0 100644
--- a/include/linux/vcm.h
+++ b/include/linux/vcm.h
@@ -295,4 +295,17 @@ int __must_check vcm_activate(struct vcm *vcm);
*/
void vcm_deactivate(struct vcm *vcm);

+/**
+ * vcm_vmm - VMM context
+ *
+ * Context for manipulating kernel virtual mappings. Reserving as well
+ * as rebinding is not supported by this driver. Also, all mappings
+ * are always active (till unbound) regardless of calls to
+ * vcm_activate().
+ *
+ * After mapping, the start field of struct vcm_res should be cast to
+ * a pointer to void and interpreted as a valid kernel space pointer.
+ */
+extern struct vcm vcm_vmm[1];
+
#endif
diff --git a/mm/vcm.c b/mm/vcm.c
index 6804114..cd9f4ee 100644
--- a/mm/vcm.c
+++ b/mm/vcm.c
@@ -16,6 +16,7 @@
#include <linux/vcm-drv.h>
#include <linux/module.h>
#include <linux/mm.h>
+#include <linux/vmalloc.h>
#include <linux/err.h>
#include <linux/slab.h>

@@ -305,6 +306,113 @@ void vcm_deactivate(struct vcm *vcm)
EXPORT_SYMBOL_GPL(vcm_deactivate);


+/****************************** VCM VMM driver ******************************/
+
+static void vcm_vmm_cleanup(struct vcm *vcm)
+{
+ /* This should never be called. vcm_vmm is a static object. */
+ BUG_ON(1);
+}
+
+static struct vcm_phys *
+vcm_vmm_phys(struct vcm *vcm, resource_size_t size, unsigned flags)
+{
+ static const unsigned char orders[] = { 0 };
+ return vcm_phys_alloc(size, flags, orders);
+}
+
+static void vcm_vmm_unreserve(struct vcm_res *res)
+{
+ kfree(res);
+}
+
+struct vcm_res *vcm_vmm_map(struct vcm *vcm, struct vcm_phys *phys,
+ unsigned flags)
+{
+ /*
+ * Original implementation written by Cho KyongHo
+ * ([email protected]). Later rewritten by mina86.
+ */
+ struct vcm_phys_part *part;
+ struct page **pages, **p;
+ struct vcm_res *res;
+ int ret = -ENOMEM;
+ unsigned i;
+
+ pages = kmalloc((phys->size >> PAGE_SHIFT) * sizeof *pages, GFP_KERNEL);
+ if (!pages)
+ return ERR_PTR(-ENOMEM);
+ p = pages;
+
+ res = kmalloc(sizeof *res, GFP_KERNEL);
+ if (!res)
+ goto error_pages;
+
+ i = phys->count;
+ part = phys->parts;
+ do {
+ unsigned j = part->size >> PAGE_SHIFT;
+ struct page *page = part->page;
+ if (!page)
+ goto error_notsupp;
+ do {
+ *p++ = page++;
+ } while (--j);
+ } while (++part, --i);
+
+ res->start = (dma_addr_t)vmap(pages, p - pages, VM_ALLOC, PAGE_KERNEL);
+ if (!res->start)
+ goto error_res;
+
+ kfree(pages);
+ res->res_size = phys->size;
+ return res;
+
+error_notsupp:
+ ret = -EOPNOTSUPP;
+error_res:
+ kfree(res);
+error_pages:
+ kfree(pages);
+ return ERR_PTR(ret);
+}
+
+static void vcm_vmm_unbind(struct vcm_res *res)
+{
+ vunmap((void *)res->start);
+}
+
+static int vcm_vmm_activate(struct vcm *vcm)
+{
+ /* no operation, all bindings are immediately active */
+ return 0;
+}
+
+static void vcm_vmm_deactivate(struct vcm *vcm)
+{
+ /*
+ * no operation, all bindings are immediately active and
+ * cannot be deactivated unless unbound.
+ */
+}
+
+struct vcm vcm_vmm[1] = { {
+ .start = 0,
+ .size = ~(resource_size_t)0,
+ /* prevent activate/deactivate from being called */
+ .activations = ATOMIC_INIT(1),
+ .driver = &(const struct vcm_driver) {
+ .cleanup = vcm_vmm_cleanup,
+ .phys = vcm_vmm_phys,
+ .unbind = vcm_vmm_unbind,
+ .unreserve = vcm_vmm_unreserve,
+ .activate = vcm_vmm_activate,
+ .deactivate = vcm_vmm_deactivate,
+ }
+} };
+EXPORT_SYMBOL_GPL(vcm_vmm);
+
+
/****************************** VCM Drivers API *****************************/

struct vcm *__must_check vcm_init(struct vcm *vcm)
--
1.6.2.5
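
A minimal usage sketch of vcm_vmm follows. The exact signatures of vcm_make_binding() and vcm_destroy_binding() are assumed from the framework patch (1/8), which is not reproduced here, so treat the argument list as illustrative:

```c
/*
 * Sketch of mapping VCM-allocated physical memory into kernel
 * space via vcm_vmm; signatures assumed, error handling minimal.
 */
#include <linux/vcm.h>
#include <linux/string.h>

static int vmm_example(void)
{
	/* allocate PAGE_SIZE bytes and map them in a single step */
	struct vcm_res *res = vcm_make_binding(vcm_vmm, PAGE_SIZE, 0, 0);
	void *virt;

	if (IS_ERR(res))
		return PTR_ERR(res);

	/* for vcm_vmm, res->start is a valid kernel space pointer */
	virt = (void *)(unsigned long)res->start;
	memset(virt, 0, PAGE_SIZE);

	/* unbinds (vunmap) and frees the physical memory */
	vcm_destroy_binding(res);
	return 0;
}
```

Since vcm_vmm is declared as `struct vcm vcm_vmm[1]`, the array name decays to the `struct vcm *` the calls expect, and no vcm_activate() is needed because all vcm_vmm bindings are immediately active.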

2010-12-17 04:16:41

by Cho KyongHo

Subject: [RFCv2,7/8] mm: vcm: vcm-cma: VCM CMA driver added

From: Michal Nazarewicz <[email protected]>

This commit adds a VCM driver that, instead of using a real
hardware MMU, emulates one and uses CMA for allocating
physically contiguous memory chunks.

Signed-off-by: Michal Nazarewicz <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
---
Documentation/virtual-contiguous-memory.txt | 12 +++-
include/linux/vcm-cma.h | 38 ++++++++++
mm/Kconfig | 14 ++++
mm/Makefile | 1 +
mm/vcm-cma.c | 99 +++++++++++++++++++++++++++
5 files changed, 163 insertions(+), 1 deletions(-)
create mode 100644 include/linux/vcm-cma.h
create mode 100644 mm/vcm-cma.c

diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
index 070685b..46edaee 100644
--- a/Documentation/virtual-contiguous-memory.txt
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -552,7 +552,17 @@ well:
If one uses vcm_unbind() then vcm_bind() on the same reservation,
physical memory pair should also work.

-There are no One-to-One drivers at this time.
+*** VCM CMA
+
+VCM CMA driver is a One-to-One driver which uses CMA (see
+[[file:contiguous-memory.txt][contiguous-memory.txt]]) to allocate physically contiguous memory. VCM
+CMA context is created by calling:
+
+ struct vcm *__must_check
+ vcm_cma_create(struct cma *ctx, unsigned long alignment);
+
+Its first argument is the CMA context that memory should be
+allocated from. The second argument is the required alignment.

* Writing a VCM driver

diff --git a/include/linux/vcm-cma.h b/include/linux/vcm-cma.h
new file mode 100644
index 0000000..57c2cc9
--- /dev/null
+++ b/include/linux/vcm-cma.h
@@ -0,0 +1,38 @@
+/*
+ * Virtual Contiguous Memory driver for CMA header
+ * Copyright (c) 2010 by Samsung Electronics.
+ * Written by Michal Nazarewicz ([email protected])
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+/*
+ * See Documentation/virtual-contiguous-memory.txt for details.
+ */
+
+#ifndef __LINUX_VCM_CMA_H
+#define __LINUX_VCM_CMA_H
+
+#include <linux/types.h>
+
+struct vcm;
+
+/**
+ * vcm_cma_create() - creates a VCM context that fakes a hardware MMU
+ * @ctx: the CMA context, defined by the machine's implementation,
+ * that memory is allocated from.
+ * @alignment: required alignment of allocations.
+ *
+ * This creates a VCM context that can be used on platforms with no
+ * hardware MMU or for devices that are connected to the bus directly.
+ * Because it does not represent a real MMU, it has some limitations:
+ * basically, vcm_alloc(), vcm_reserve() and vcm_bind() are likely to
+ * fail so vcm_make_binding() should be used instead.
+ */
+struct vcm *__must_check
+vcm_cma_create(struct cma *ctx, unsigned long alignment);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index 53328d2..5cd25e7 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -390,6 +390,20 @@ config VCM_O2O
it if you are going to build external modules that will use this
functionality.

+config VCM_CMA
+ bool "VCM CMA driver"
+ depends on VCM && CMA
+ select VCM_O2O
+ help
+ This enables a VCM driver that, instead of using a real hardware
+ MMU, fakes one and uses a direct mapping. It provides a subset
+ of the functionality of a real MMU, but if drivers limit their
+ use of the VCM to only the supported operations they can work
+ on systems both with and without an MMU with no changes.
+
+ For more information see
+ <Documentation/virtual-contiguous-memory.txt>. If unsure, say "n".
+
#
# UP and nommu archs use km based percpu allocator
#
diff --git a/mm/Makefile b/mm/Makefile
index b96a6cb..6663fc2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -44,3 +44,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
obj-$(CONFIG_CMA) += cma.o
obj-$(CONFIG_VCM) += vcm.o
+obj-$(CONFIG_VCM_CMA) += vcm-cma.o
diff --git a/mm/vcm-cma.c b/mm/vcm-cma.c
new file mode 100644
index 0000000..dcdc751
--- /dev/null
+++ b/mm/vcm-cma.c
@@ -0,0 +1,99 @@
+/*
+ * Virtual Contiguous Memory driver for CMA
+ * Copyright (c) 2010 by Samsung Electronics.
+ * Written by Michal Nazarewicz ([email protected])
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+/*
+ * See Documentation/virtual-contiguous-memory.txt for details.
+ */
+
+#include <linux/vcm-drv.h>
+#include <linux/cma.h>
+#include <linux/module.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+
+struct vcm_cma {
+ struct vcm_o2o o2o;
+ struct cma *ctx;
+ unsigned long alignment;
+};
+
+struct vcm_cma_phys {
+ struct cm *chunk;
+ struct vcm_phys phys;
+};
+
+static void vcm_cma_free(struct vcm_phys *_phys)
+{
+ struct vcm_cma_phys *phys =
+ container_of(_phys, struct vcm_cma_phys, phys);
+ cm_unpin(phys->chunk);
+ cm_free(phys->chunk);
+ kfree(phys);
+}
+
+static struct vcm_phys *
+vcm_cma_phys(struct vcm *vcm, resource_size_t size, unsigned flags)
+{
+ struct vcm_cma *cma = container_of(vcm, struct vcm_cma, o2o.vcm);
+ struct vcm_cma_phys *phys;
+ struct cm *chunk;
+
+ phys = kmalloc(sizeof *phys + sizeof *phys->phys.parts, GFP_KERNEL);
+ if (!phys)
+ return ERR_PTR(-ENOMEM);
+
+ chunk = cm_alloc(cma->ctx, size, cma->alignment);
+ if (IS_ERR(chunk)) {
+ kfree(phys);
+ return ERR_CAST(chunk);
+ }
+
+ phys->chunk = chunk;
+ phys->phys.count = 1;
+ phys->phys.free = vcm_cma_free;
+ phys->phys.parts->start = cm_pin(chunk);
+ phys->phys.parts->size = size;
+ return &phys->phys;
+}
+
+struct vcm *__must_check
+vcm_cma_create(struct cma *ctx, unsigned long alignment)
+{
+ static const struct vcm_o2o_driver driver = {
+ .phys = vcm_cma_phys,
+ };
+
+ struct vcm_cma *cma;
+ struct vcm *vcm;
+
+ if (alignment & (alignment - 1))
+ return ERR_PTR(-EINVAL);
+
+ cma = kmalloc(sizeof *cma, GFP_KERNEL);
+ if (!cma)
+ return ERR_PTR(-ENOMEM);
+
+ cma->o2o.driver = &driver;
+ /*
+ * Dummy start address and size, only there to be accepted by
+ * vcm_init(); vcm_cma does not use these members.
+ */
+ cma->o2o.vcm.start = 0;
+ cma->o2o.vcm.size = PAGE_SIZE;
+ cma->ctx = ctx;
+ cma->alignment = alignment;
+ vcm = vcm_o2o_init(&cma->o2o);
+ if (IS_ERR(vcm))
+ kfree(cma);
+ return vcm;
+}
+EXPORT_SYMBOL_GPL(vcm_cma_create);
--
1.6.2.5
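
For illustration, platform code might create a VCM CMA context like this. How the struct cma pointer is obtained is machine-specific under the CMA RFC, so it is left as a parameter here, and the 64KiB alignment is an arbitrary example:

```c
/*
 * Hypothetical platform-side sketch; 'ctx' comes from the
 * machine's CMA setup, which is outside the scope of this patch.
 */
#include <linux/vcm-cma.h>
#include <linux/err.h>

static struct vcm *setup_device_vcm(struct cma *ctx)
{
	/* alignment must be a power of two or -EINVAL is returned */
	struct vcm *vcm = vcm_cma_create(ctx, 0x10000 /* 64KiB */);

	if (IS_ERR(vcm))
		return vcm;

	/*
	 * On this context only vcm_make_binding() is expected to
	 * succeed; vcm_alloc(), vcm_reserve() and vcm_bind() are
	 * likely to fail, as the header comment notes.
	 */
	return vcm;
}
```

Because vcm-cma is built on the one-to-one wrapper, each vcm_make_binding() maps to one cm_alloc()/cm_pin() pair, and unbinding releases the chunk through vcm_cma_free().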

2010-12-17 04:16:43

by Cho KyongHo

Subject: [RFCv2,3/8] mm: vcm: physical memory allocator added

From: Michal Nazarewicz <[email protected]>

This commit adds the vcm_phys_alloc() function, along with some
accompanying functions, which allocates physical memory. This
should be used from within the alloc or phys callback of a VCM
driver if one does not want to provide one's own allocator.

Signed-off-by: Michal Nazarewicz <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
---
Documentation/virtual-contiguous-memory.txt | 31 ++++
include/linux/vcm-drv.h | 88 ++++++++++
mm/Kconfig | 9 +
mm/vcm.c | 249 +++++++++++++++++++++++++++
4 files changed, 377 insertions(+), 0 deletions(-)

diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
index 2008465..10a0638 100644
--- a/Documentation/virtual-contiguous-memory.txt
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -672,6 +672,37 @@ Both phys and alloc callbacks need to provide a free callback along
with the vcm_phys structure, which will, as one may imagine, free
allocated space when the user calls vcm_free().

+Unless the VCM driver needs special handling of physical memory, the
+vcm_phys_alloc() function can be used:
+
+ struct vcm_phys *__must_check
+ vcm_phys_alloc(resource_size_t size, unsigned flags,
+ const unsigned char *orders);
+
+The last argument of this function (orders) is an array of orders of
+page sizes that the function should try to allocate. This array must be
+sorted from highest order to lowest and the last entry must be zero.
+
+For instance, an array { 8, 4, 0 } means that the function should try
+and allocate 1MiB, 64KiB and 4KiB pages (this is assuming PAGE_SIZE is
+4KiB which is true for all supported architectures). For example, if
+requested size is 2MiB and 68 KiB, the function will try to allocate
+two 1MiB pages, one 64KiB page and one 4KiB page. This may be useful
+when the mapping is written to the MMU since the largest possible
+pages will be used reducing the number of entries.
+
+The function allocates memory from the DMA32 zone. If the driver has
+other requirements (that is, requires different GFP flags) it can use
+the __vcm_phys_alloc() function which, besides the arguments that
+vcm_phys_alloc() accepts, takes GFP flags as the last argument:
+
+ struct vcm_phys *__must_check
+ __vcm_phys_alloc(resource_size_t size, unsigned flags,
+ const unsigned char *orders, gfp_t gfp);
+
+However, if those functions are used, the VCM driver needs to select the
+VCM_PHYS Kconfig option or otherwise they won't be available.
+
All those operations may assume that size is non-zero and divisible
by PAGE_SIZE.

diff --git a/include/linux/vcm-drv.h b/include/linux/vcm-drv.h
index d7ae660..536b051 100644
--- a/include/linux/vcm-drv.h
+++ b/include/linux/vcm-drv.h
@@ -114,4 +114,92 @@ struct vcm_phys {
*/
struct vcm *__must_check vcm_init(struct vcm *vcm);

+#ifdef CONFIG_VCM_PHYS
+
+/**
+ * __vcm_phys_alloc() - allocates physical discontiguous space
+ * @size: size of the block to allocate.
+ * @flags: additional allocation flags; XXX FIXME: document
+ * @orders: array of orders of pages supported by the MMU sorted from
+ * the largest to the smallest. The last element is always
+ * zero (which means 4K page).
+ * @gfp: the gfp flags for pages to allocate.
+ *
+ * This function tries to allocate physically discontiguous space in
+ * such a way that it allocates the largest possible blocks from the
+ * sizes denoted by the @orders array. So if @orders is { 8, 0 }
+ * (which means 1MiB and 4KiB pages are to be used) and the requested
+ * @size is 2MiB and 12KiB, the function will try to allocate two 1MiB
+ * pages and three 4KiB pages (in that order). If a big page cannot be
+ * allocated, the function will still try to allocate smaller
+ * pages.
+ */
+struct vcm_phys *__must_check
+__vcm_phys_alloc(resource_size_t size, unsigned flags,
+ const unsigned char *orders, gfp_t gfp);
+
+/**
+ * vcm_phys_alloc() - allocates physical discontiguous space
+ * @size: size of the block to allocate.
+ * @flags: additional allocation flags; XXX FIXME: document
+ * @orders: array of orders of pages supported by the MMU sorted from
+ * the largest to the smallest. The last element is always
+ * zero (which means 4K page).
+ *
+ * This function tries to allocate physically discontiguous space in
+ * such a way that it allocates the largest possible blocks from the
+ * sizes denoted by the @orders array. So if @orders is { 8, 0 }
+ * (which means 1MiB and 4KiB pages are to be used) and the requested
+ * @size is 2MiB and 12KiB, the function will try to allocate two 1MiB
+ * pages and three 4KiB pages (in that order). If a big page cannot be
+ * allocated, the function will still try to allocate smaller
+ * pages.
+ */
+static inline struct vcm_phys *__must_check
+vcm_phys_alloc(resource_size_t size, unsigned flags,
+ const unsigned char *orders) {
+ return __vcm_phys_alloc(size, flags, orders, GFP_DMA32);
+}
+
+/**
+ * vcm_phys_walk() - helper function for mapping physical pages
+ * @vaddr: virtual address to map/unmap physical space to/from
+ * @phys: physical space
+ * @orders: array of orders of pages supported by the MMU sorted from
+ * the largest to the smallest. The last element is always
+ * zero (which means 4K page).
+ * @callback: function called for each page.
+ * @recovery: function called for each page when @callback returns a
+ * negative number; if it also returns a negative number the
+ * function terminates; may be NULL.
+ * @priv: private data for the callbacks.
+ *
+ * This function walks through @phys trying to match the largest possible
+ * page size denoted by @orders. For each such page @callback is
+ * called. If @callback returns a negative number, the function calls
+ * @recovery for each page @callback was successfully called for.
+ *
+ * So, for instance, if we have physical memory which consists of
+ * a 1MiB part and an 8KiB part, and @orders is { 8, 0 } (which means
+ * 1MiB and 4KiB pages are to be used), @callback will be called first
+ * with the 1MiB page and then twice with a 4KiB page. This is of
+ * course provided that @vaddr has correct alignment.
+ *
+ * The idea is for hardware MMU drivers to call this function and
+ * provide callbacks for mapping/unmapping a single page. The
+ * function divides the region into pages that the MMU can handle.
+ *
+ * If @callback at one point returns a negative number this is the
+ * return value of the function; otherwise zero is returned.
+ */
+int vcm_phys_walk(dma_addr_t vaddr, const struct vcm_phys *phys,
+ const unsigned char *orders,
+ int (*callback)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *priv),
+ int (*recovery)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *priv),
+ void *priv);
+
+#endif
+
#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index b937f32..00d975e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -360,6 +360,15 @@ config VCM_RES_REFCNT
This enables reference counting on a reservation to make sharing
and migrating the ownership of the reservation easier.

+config VCM_PHYS
+ bool "VCM physical allocation wrappers"
+ depends on VCM && MODULES
+ help
+ This enables the vcm_phys family of functions provided for VCM
+ drivers. If a VCM driver is built that requires this option, it
+ will be automatically selected. You select it if you are going to
+ build external modules that will use this functionality.
+
#
# UP and nommu archs use km based percpu allocator
#
diff --git a/mm/vcm.c b/mm/vcm.c
index 5819f0f..6804114 100644
--- a/mm/vcm.c
+++ b/mm/vcm.c
@@ -319,3 +319,252 @@ struct vcm *__must_check vcm_init(struct vcm *vcm)
return vcm;
}
EXPORT_SYMBOL_GPL(vcm_init);
+
+
+/************************ Physical memory management ************************/
+
+#ifdef CONFIG_VCM_PHYS
+
+struct vcm_phys_list {
+ struct vcm_phys_list *next;
+ unsigned count;
+ struct vcm_phys_part parts[31];
+};
+
+static struct vcm_phys_list *__must_check
+vcm_phys_alloc_list_order(struct vcm_phys_list *last, resource_size_t *pages,
+ unsigned flags, unsigned order, unsigned *total,
+ gfp_t gfp)
+{
+ unsigned count;
+
+ count = *pages >> order;
+
+ do {
+ struct page *page = alloc_pages(gfp, order);
+
+ if (!page)
+ /*
+ * If allocation failed we may still
+ * try to continue allocating smaller
+ * pages.
+ */
+ break;
+
+ if (last->count == ARRAY_SIZE(last->parts)) {
+ struct vcm_phys_list *l;
+ l = kmalloc(sizeof *l, GFP_KERNEL);
+ if (!l)
+ return NULL;
+
+ l->next = NULL;
+ l->count = 0;
+ last->next = l;
+ last = l;
+ }
+
+ last->parts[last->count].start = page_to_phys(page);
+ last->parts[last->count].size = (resource_size_t)PAGE_SIZE << order;
+ last->parts[last->count].page = page;
+ ++last->count;
+ ++*total;
+ *pages -= 1 << order;
+ } while (--count);
+
+ return last;
+}
+
+static unsigned __must_check
+vcm_phys_alloc_list(struct vcm_phys_list *first,
+ resource_size_t size, unsigned flags,
+ const unsigned char *orders, gfp_t gfp)
+{
+ struct vcm_phys_list *last = first;
+ unsigned total_parts = 0;
+ resource_size_t pages;
+
+ /*
+ * We are trying to allocate pages as large as possible but
+ * not larger than the pages that the MMU driver that called
+ * us supports (i.e. the orders provided by @orders). This makes
+ * it possible to map the region using the fewest possible
+ * number of entries.
+ */
+ pages = size >> PAGE_SHIFT;
+ do {
+ while (!(pages >> *orders))
+ ++orders;
+
+ last = vcm_phys_alloc_list_order(last, &pages, flags, *orders,
+ &total_parts, gfp);
+ if (!last)
+ return 0;
+
+ } while (*orders++ && pages);
+
+ if (pages)
+ return 0;
+
+ return total_parts;
+}
+
+static void vcm_phys_free_parts(struct vcm_phys_part *parts, unsigned count)
+{
+ do {
+ __free_pages(parts->page, ffs(parts->size) - 1 - PAGE_SHIFT);
+ } while (++parts, --count);
+}
+
+static void vcm_phys_free(struct vcm_phys *phys)
+{
+ vcm_phys_free_parts(phys->parts, phys->count);
+ kfree(phys);
+}
+
+struct vcm_phys *__must_check
+__vcm_phys_alloc(resource_size_t size, unsigned flags,
+ const unsigned char *orders, gfp_t gfp)
+{
+ struct vcm_phys_list *lst, *n;
+ struct vcm_phys_part *out;
+ struct vcm_phys *phys;
+ unsigned count;
+
+ if (WARN_ON((size & (PAGE_SIZE - 1)) || !size || !orders))
+ return ERR_PTR(-EINVAL);
+
+ lst = kmalloc(sizeof *lst, GFP_KERNEL);
+ if (!lst)
+ return ERR_PTR(-ENOMEM);
+
+ lst->next = NULL;
+ lst->count = 0;
+
+ count = vcm_phys_alloc_list(lst, size, flags, orders, gfp);
+ if (!count)
+ goto error;
+
+ phys = kmalloc(sizeof *phys + count * sizeof *phys->parts, GFP_KERNEL);
+ if (!phys)
+ goto error;
+
+ phys->free = vcm_phys_free;
+ phys->count = count;
+ phys->size = size;
+
+ out = phys->parts;
+ do {
+ memcpy(out, lst->parts, lst->count * sizeof *out);
+ out += lst->count;
+
+ n = lst->next;
+ kfree(lst);
+ lst = n;
+ } while (lst);
+
+ return phys;
+
+error:
+ do {
+ vcm_phys_free_parts(lst->parts, lst->count);
+
+ n = lst->next;
+ kfree(lst);
+ lst = n;
+ } while (lst);
+
+ return ERR_PTR(-ENOMEM);
+}
+EXPORT_SYMBOL_GPL(__vcm_phys_alloc);
+
+static inline bool is_of_order(dma_addr_t size, unsigned order)
+{
+ return !(size & (((dma_addr_t)PAGE_SIZE << order) - 1));
+}
+
+static int
+__vcm_phys_walk_part(dma_addr_t vaddr, const struct vcm_phys_part *part,
+ const unsigned char *orders,
+ int (*callback)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *priv), void *priv,
+ unsigned *limit)
+{
+ resource_size_t size = part->size;
+ dma_addr_t paddr = part->start;
+ resource_size_t ps;
+
+ while (!is_of_order(vaddr, *orders))
+ ++orders;
+ while (!is_of_order(paddr, *orders))
+ ++orders;
+
+ ps = PAGE_SIZE << *orders;
+ for (; *limit && size; --*limit) {
+ int ret;
+
+ while (ps > size)
+ ps = PAGE_SIZE << *++orders;
+
+ ret = callback(vaddr, paddr, *orders, priv);
+ if (ret < 0)
+ return ret;
+
+ ps = PAGE_SIZE << *orders;
+ vaddr += ps;
+ paddr += ps;
+ size -= ps;
+ }
+
+ return 0;
+}
+
+int vcm_phys_walk(dma_addr_t _vaddr, const struct vcm_phys *phys,
+ const unsigned char *orders,
+ int (*callback)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *arg),
+ int (*recovery)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *arg),
+ void *priv)
+{
+ unsigned limit = ~0;
+ int r = 0;
+
+ if (WARN_ON(!phys || ((_vaddr | phys->size) & (PAGE_SIZE - 1)) ||
+ !phys->size || !orders || !callback))
+ return -EINVAL;
+
+ for (;;) {
+ const struct vcm_phys_part *part = phys->parts;
+ unsigned count = phys->count;
+ dma_addr_t vaddr = _vaddr;
+ int ret = 0;
+
+ for (; count && limit; --count, ++part) {
+ ret = __vcm_phys_walk_part(vaddr, part, orders,
+ callback, priv, &limit);
+ if (ret)
+ break;
+
+ vaddr += part->size;
+ }
+
+ if (r)
+ /* We passed error recovery */
+ return r;
+
+ /*
+ * Either operation succeeded or we were not provided
+ * with a recovery callback -- return.
+ */
+ if (!ret || !recovery)
+ return ret;
+
+ /* Switch to recovery */
+ limit = ~0 - limit;
+ callback = recovery;
+ r = ret;
+ }
+}
+EXPORT_SYMBOL_GPL(vcm_phys_walk);
+
+#endif
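The order-selection rule in __vcm_phys_walk_part() above can be
illustrated in isolation. The following is a hedged, standalone
user-space sketch (not kernel code; pick_order is a hypothetical helper
name): it picks the largest page order, from a descending 0-terminated
order list, that both the virtual and the physical address are aligned
to and that still fits in the remaining size.

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)

/* Mirrors is_of_order(): true if addr is aligned to a page of the
 * given order. */
static int of_order(uint64_t addr, unsigned order)
{
	return !(addr & ((PAGE_SIZE << order) - 1));
}

/* Hypothetical helper applying the rule __vcm_phys_walk_part() uses
 * before each callback: assumes page-aligned addresses and
 * size >= PAGE_SIZE, so the terminal order 0 always qualifies. */
static unsigned pick_order(uint64_t vaddr, uint64_t paddr, uint64_t size,
			   const unsigned char *orders)
{
	while (!of_order(vaddr, *orders) || !of_order(paddr, *orders) ||
	       (PAGE_SIZE << *orders) > size)
		++orders;
	return *orders;
}
```

For example, a 1MiB-aligned address pair inside a 2MiB range maps with
1MiB pages (order 8 here), while a merely page-aligned address falls
back to order 0.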
--
1.6.2.5

2010-12-17 04:17:45

by Cho KyongHo

[permalink] [raw]
Subject: [RFCv2,1/8] mm: vcm: Virtual Contiguous Memory framework added

From: Michal Nazarewicz <[email protected]>

This commit adds the Virtual Contiguous Memory framework which
provides an abstraction for virtual address space provided by
various MMUs present on the platform.

The framework uses pluggable MMU drivers for hardware MMUs and,
if drivers obey some limitations, it can also be used on
platforms with no MMU.

For more information see
<Documentation/virtual-contiguous-memory.txt>.

Signed-off-by: Michal Nazarewicz <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
---
Documentation/00-INDEX | 2 +
Documentation/virtual-contiguous-memory.txt | 720 +++++++++++++++++++++++++++
include/linux/vcm-drv.h | 117 +++++
include/linux/vcm.h | 275 ++++++++++
mm/Kconfig | 15 +
mm/Makefile | 1 +
mm/vcm.c | 304 +++++++++++
7 files changed, 1434 insertions(+), 0 deletions(-)
create mode 100644 Documentation/virtual-contiguous-memory.txt
create mode 100644 include/linux/vcm-drv.h
create mode 100644 include/linux/vcm.h
create mode 100644 mm/vcm.c

diff --git a/Documentation/00-INDEX b/Documentation/00-INDEX
index 8dfc670..7033c56 100644
--- a/Documentation/00-INDEX
+++ b/Documentation/00-INDEX
@@ -342,6 +342,8 @@ video-output.txt
- sysfs class driver interface to enable/disable a video output device.
video4linux/
- directory with info regarding video/TV/radio cards and linux.
+virtual-contiguous-memory.txt
+ - documentation on virtual contiguous memory manager framework.
vm/
- directory with info on the Linux vm code.
volatile-considered-harmful.txt
diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
new file mode 100644
index 0000000..9793a86
--- /dev/null
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -0,0 +1,720 @@
+ -*- org -*-
+
+This document covers how to use the Virtual Contiguous Memory framework
+(VCM), how the implementation works, and how to implement MMU drivers
+that can be plugged into VCM. It also contains a rationale for VCM.
+
+* The Virtual Contiguous Memory Manager
+
+The VCM was built to solve the system-wide memory mapping issues that
+occur when many bus-masters have IOMMUs.
+
+An IOMMU maps device addresses to physical addresses. It also
+insulates the system from spurious or malicious device bus
+transactions and allows fine-grained mapping attribute control. The
+Linux kernel core does not contain a generic API to handle IOMMU
+mapped memory; device driver writers must implement device specific
+code to interoperate with the Linux kernel core. As the number of
+IOMMUs increases, coordinating the many address spaces mapped by all
+discrete IOMMUs becomes difficult without in-kernel support.
+
+The VCM API enables device independent IOMMU control, virtual memory
+manager (VMM) interoperation and non-IOMMU enabled device
+interoperation by treating devices with or without IOMMUs and all CPUs
+with or without MMUs, their mapping contexts and their mappings using
+common abstractions. Physical hardware is given a generic device type
+and mapping contexts are abstracted into Virtual Contiguous Memory
+(VCM) regions. Users "reserve" memory from VCMs and "bind" their
+reservations with physical memory.
+
+If drivers limit their use of VCM contexts to a subset of VCM
+functionality, they can work unchanged with or without an MMU.
+
+** Why the VCM is Needed
+
+Driver writers who control devices with IOMMUs must contend with
+device control and memory management. Driver writers have a large
+device driver API that they can leverage to control their devices, but
+they are lacking a unified API to help them program mappings into
+IOMMUs and share those mappings with other devices and CPUs in the
+system.
+
+Sharing is complicated by Linux's CPU-centric VMM. The CPU-centric
+model generally makes sense because average hardware only contains
+an MMU for the CPU and possibly a graphics MMU. If every device in the
+system has one or more MMUs, the CPU-centric memory management (MM)
+programming model breaks down.
+
+Abstracting IOMMU device programming into a common API has already
+begun in the Linux kernel. It was built to abstract the difference
+between AMD and Intel IOMMUs to support x86 virtualization on both
+platforms. The interface is listed in include/linux/iommu.h. It
+contains interfaces for mapping and unmapping as well as domain
+management. This interface has not gained widespread use outside of
+x86; the PA-RISC, Alpha and SPARC architectures and the ARM and PowerPC
+platforms all use their own mapping modules to control their IOMMUs.
+The VCM contains an IOMMU programming layer, but since its
+abstraction supports map management independent of device control, the
+layer is not used directly. This higher-level view enables a new
+kernel service, not just an IOMMU interoperation layer.
+
+** The General Idea: Map Management using Graphs
+
+Looking at mapping from a system-wide perspective reveals a general
+graph problem. The VCM's API is built to manage the general mapping
+graph. Each node that talks to memory, either through an MMU or
+directly (physically mapped) can be thought of as the device-end of
+a mapping edge. The other edge is the physical memory (or
+intermediate virtual space) that is mapped. The figure below shows
+an example tree with a CPU and a few devices connected to the memory
+directly or through an MMU.
+
++--------------------------------------------------------------------+
+| Memory |
++--------------------------------------------------------------------+
+ |
+ +------------------+-----------+-------+----------+-----------+
+ | | | | |
++-----+ +-----+ +-----+ +--------+ +--------+
+| MMU | | MMU | | MMU | | Device | | Device |
++-----+ +-----+ +-----+ +--------+ +--------+
+ | | |
++-----+ +-------+---+-----.... +-----+
+| CPU | | | | GPU |
++-----+ +--------+ +--------+ +-----+
+ | Device | | Device | ...
+ +--------+ +--------+
+
+For each MMU in the system a VCM context is created through
+which drivers can make reservations and bind virtual addresses to
+physical space. In the direct-mapped case the device is assigned
+a one-to-one MMU (as shown on the figure below). This scheme allows
+direct mapped devices to participate in general graph management.
+
++--------------------------------------------------------------------+
+| Memory |
++--------------------------------------------------------------------+
+ |
+ +------------------+-----------+-------+----------------+
+ | | | |
++-----+ +-----+ +-----+ +------------+
+| MMU | | MMU | | MMU | | One-to-One |
++-----+ +-----+ +-----+ +------------+
+ | | | |
++-----+ +-------+---+-----.... +-----+ +-----+-----+
+| CPU | | | | GPU | | |
++-----+ +--------+ +--------+ +-----+ +--------+ +--------+
+ | Device | | Device | ... | Device | | Device |
+ +--------+ +--------+ +--------+ +--------+
+
+The CPU nodes can also be brought under the same mapping abstraction
+with the use of a light overlay on the existing VMM. This light
+overlay allows VCM-managed mappings to interoperate with the common
+API. The light overlay enables this without substantial modifications
+to the existing VMM.
+
+In addition to CPU nodes that are running Linux (and the VMM), remote
+CPU nodes that may be running other operating systems can be brought
+into the general abstraction. Routing all memory management requests
+from a remote node through the central memory management framework
+enables new features like system-wide memory migration. This feature
+may only be feasible for large buffers that are managed outside of the
+fast-path, but having remote allocation in a system enables features
+that are impossible to build without it.
+
+The fundamental objects that support graph-based map management are:
+Virtual Contiguous Memory contexts, reservations, and physical memory
+allocations.
+
+* Usage Overview
+
+In a nutshell, the platform initialises a VCM context for each MMU in
+the system and possibly one-to-one VCM contexts which are passed to
+device drivers. Later on, drivers make reservations of virtual address space
+from the VCM context. At this point no physical memory has been
+committed to the reservation. To bind physical memory with a
+reservation, physical memory is allocated (possibly discontiguous) and
+then bound to the reservation.
+
+A single physical allocation can be bound to several different
+reservations, also from different VCM contexts. This allows devices
+connected to the memory banks through different MMUs (or directly) to
+share physical memory buffers; it also makes it possible to map such
+memory into the CPU's address space (be it kernel or user space) so
+that the same data can be accessed by the CPU.
+
+[[file:../include/linux/vcm.h][include/linux/vcm.h]] includes comments documenting each API.
+
+** Virtual Contiguous Memory context
+
+A Virtual Contiguous Memory context (VCM) abstracts an address space
+a device sees. A VCM is created with a VCM driver dependent call. It
+is destroyed with a call to:
+
+ void vcm_destroy(struct vcm *vcm);
+
+The newly created VCM instance can be passed to any function that needs to
+operate on or with a virtual contiguous memory region. All internals
+of the VCM driver and how the mappings are handled are hidden and VCM
+driver dependent.
+
+** Bindings
+
+If all that a driver needs is to allocate some physical space and map it
+into its address space, a vcm_make_binding() call can be used:
+
+ struct vcm_res *__must_check
+ vcm_make_binding(struct vcm *vcm, resource_size_t size,
+ unsigned alloc_flags, unsigned res_flags);
+
+This call allocates physical memory, reserves virtual address space
+and binds them together. If all of those succeed, a reservation is
+returned which has physical memory associated with it.
+
+If a driver does not require more complicated VCM functionality, it is
+desirable to use this function since it will work on both real MMUs
+and one-to-one mappings.
+
+To destroy a created binding, vcm_destroy_binding() can be used:
+
+ void vcm_destroy_binding(struct vcm_res *res);
+
+** Physical memory
+
+Physical memory allocations are handled using the following functions:
+
+ struct vcm_phys *__must_check
+ vcm_alloc(struct vcm *vcm, resource_size_t size, unsigned flags);
+
+ void vcm_free(struct vcm_phys *phys);
+
+It is noteworthy that physical space allocation is done in the context
+of a VCM. This is especially important in case of one-to-one VCM
+contexts which cannot handle discontiguous physical memory.
+
+Also, depending on VCM context, the physical space may be allocated in
+parts of different sizes. For instance, if a given MMU supports
+16MiB, 1MiB, 64KiB and 4KiB pages, it is likely that vcm_alloc() in
+the context of this MMU's driver will try to split the memory into as
+few parts of those sizes as possible.
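The splitting strategy can be sketched in isolation. The following is a
standalone user-space illustration, not part of the VCM API; the name
split_into_orders and the exact signature are made up for this example.
With the page sizes above expressed as orders {12, 8, 4, 0} relative to
a 4KiB page, a 6MiB + 20KiB allocation splits into six 1MiB parts and
five 4KiB parts:

```c
#include <stddef.h>

#define PAGE_SHIFT 12

/* Hypothetical sketch: greedily cover `size` bytes using the page
 * orders in `orders` (descending, 0-terminated, like the driver's
 * page_sizes array).  Writes part sizes to out[] and returns the
 * number of parts, or 0 on failure -- mirroring the skip-too-large
 * loop in vcm_phys_alloc_list(). */
static unsigned split_into_orders(size_t size, const unsigned char *orders,
                                  size_t *out, unsigned max)
{
    size_t pages = size >> PAGE_SHIFT;
    unsigned n = 0;

    do {
        /* Skip orders too large for what is left. */
        while (!(pages >> *orders))
            ++orders;

        size_t count = pages >> *orders;
        while (count-- && n < max) {
            out[n++] = (size_t)1 << (PAGE_SHIFT + *orders);
            pages -= (size_t)1 << *orders;
        }
    } while (*orders++ && pages);

    return pages ? 0 : n;   /* 0 signals failure, as in the driver */
}
```

Note the greedy rule: larger orders are exhausted before smaller ones
are tried, which minimises the number of parts and hence MMU entries.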
+
+In case of one-to-one VCM contexts, physical memory allocated with
+the call to vcm_alloc() may be usable only with the vcm_map() function.
+
+** Mappings
+
+The easiest way to map a physical space into virtual address space
+represented by VCM context is to use the vcm_map() function:
+
+ struct vcm_res *__must_check
+ vcm_map(struct vcm *vcm, struct vcm_phys *phys, unsigned flags);
+
+This function reserves address space from the VCM context and binds
+physical space to it. To reverse the process vcm_unmap() can be used:
+
+ void vcm_unmap(struct vcm_res *res);
+
+Similarly to vcm_make_binding(), the use of vcm_map() may be
+advantageous over vcm_reserve() followed by vcm_bind(). This is not
+only true for one-to-one mappings: if it so happens that the call to
+vcm_map() requests a mapping of a physically contiguous space into
+kernel space, a direct mapping can be returned instead of creating a
+new one.
+
+In some cases, a reservation created with vcm_map() can be used only
+with the physical memory passed as the argument to vcm_map() (so if
+user chooses to call vcm_unbind() and then vcm_bind() on a different
+physical memory, the call may fail).
+
+** Reservations
+
+A reservation is a contiguous region allocated from a virtual address
+space represented by a VCM context. Just after a reservation is
+created, no physical memory is bound to it. To manage reservations,
+the following two functions are provided:
+
+ struct vcm_res *__must_check
+ vcm_reserve(struct vcm *vcm, resource_size_t size,
+ unsigned flags);
+
+ void vcm_unreserve(struct vcm_res *res);
+
+The first one creates a reservation of the desired size, and the second
+one destroys it.
+
+** Binding memory
+
+To bind physical memory to a reservation, the vcm_bind() function is
+used:
+
+ int __must_check vcm_bind(struct vcm_res *res,
+ struct vcm_phys *phys);
+
+When the binding is no longer needed, vcm_unbind() destroys the
+connection:
+
+ struct vcm_phys *vcm_unbind(struct vcm_res *res);
+
+** Activating mappings
+
+Unless a VCM context is activated, none of the bindings are actually
+guaranteed to be available. When a device driver needs the mappings,
+it needs to call the vcm_activate() function to guarantee that the
+mappings are sent to the hardware MMU.
+
+ int __must_check vcm_activate(struct vcm *vcm);
+
+After a VCM context is activated, all further bindings (made with
+vcm_make_binding(), vcm_map() or vcm_bind()) will be updated so there
+is no need to call vcm_activate() after each binding is done or
+undone.
+
+To deactivate a VCM context, the vcm_deactivate() function is used:
+
+ void vcm_deactivate(struct vcm *vcm);
+
+Both of those functions can be called several times if all calls to
+vcm_activate() are paired with a later call to vcm_deactivate().
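The pairing rule can be modelled with a simple counter. This is a
hedged, standalone sketch in plain C, not the kernel implementation;
the struct and function names are invented for the example. The
driver's activate/deactivate callbacks fire only on the outermost
transitions:

```c
/* Hypothetical model of vcm_activate()/vcm_deactivate() pairing:
 * the hardware callback runs only on the 0 -> 1 and 1 -> 0
 * transitions of an activation counter, however deeply the calls
 * are nested. */
struct ctx {
	int activations;	/* how many activate calls are outstanding */
	int hw_active;		/* models whether the hardware MMU is on */
};

static int model_activate(struct ctx *c)
{
	if (c->activations++ == 0)
		c->hw_active = 1;	/* driver's ->activate() would run here */
	return 0;
}

static void model_deactivate(struct ctx *c)
{
	if (--c->activations == 0)
		c->hw_active = 0;	/* driver's ->deactivate() would run here */
}
```

This is why nested users of a shared context may each call
vcm_activate()/vcm_deactivate() without coordinating with one another.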
+
+** Device driver example
+
+The following is a simple, untested example of how platform and
+devices work together to use the VCM framework. Platform initialises
+contexts for each MMU in the system, and through platform device data
+passes them to the correct drivers.
+
+Device driver header file:
+
+ struct foo_platform_data {
+ /* ... */
+ struct vcm *vcm;
+ /* ... */
+ };
+
+Platform code:
+
+ static int plat_bar_vcm_init(void)
+ {
+ struct foo_platform_data *fpdata;
+ struct vcm *vcm;
+
+ vcm = vcm_baz_create(...);
+ if (IS_ERR(vcm))
+ return PTR_ERR(vcm);
+
+ fpdata = dev_get_platdata(&foo_device.dev);
+ fpdata->vcm = vcm;
+
+ /* ... */
+
+ return 0;
+ }
+
+Device driver implementation:
+
+ struct foo_private {
+ /* ... */
+ struct vcm_res *fw;
+ /* ... */
+ };
+
+ static inline struct vcm_res *__must_check
+ __foo_alloc(struct device *dev, size_t size)
+ {
+ struct foo_platform_data *pdata =
+ dev_get_platdata(dev);
+ return vcm_make_binding(pdata->vcm, size, 0, 0);
+ }
+
+ static inline void __foo_free(struct vcm_res *res)
+ {
+ vcm_destroy_binding(res);
+ }
+
+ static int foo_probe(struct device *dev)
+ {
+ struct foo_platform_data *pdata =
+ dev_get_platdata(dev);
+ struct foo_private *priv;
+
+ if (IS_ERR_OR_NULL(pdata->vcm))
+ return pdata->vcm ? PTR_ERR(pdata->vcm) : -EINVAL;
+
+ priv = kzalloc(sizeof *priv, GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ /* ... */
+
+ priv->fw = __foo_alloc(dev, 1 << 20);
+ if (IS_ERR(priv->fw)) {
+ kfree(priv);
+ return PTR_ERR(priv->fw);
+ }
+ /* copy firmware to fw */
+
+ vcm_activate(pdata->vcm);
+
+ dev->p = priv;
+
+ return 0;
+ }
+
+ static int foo_remove(struct device *dev)
+ {
+ struct foo_platform_data *pdata =
+ dev_get_platdata(dev);
+ struct foo_private *priv = dev->p;
+
+ /* ... */
+
+ vcm_deactivate(pdata->vcm);
+ __foo_free(priv->fw);
+
+ kfree(priv);
+
+ return 0;
+ }
+
+ static int foo_do_something(struct device *dev, /* ... */)
+ {
+ struct foo_platform_data *pdata =
+ dev_get_platdata(dev);
+ struct vcm_res *buf;
+ int ret;
+
+ buf = __foo_alloc(dev, /* ... size ... */);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+
+ /*
+ * buf->start is address visible from device's
+ * perspective.
+ */
+
+ /* ... set hardware up ... */
+
+ /* ... wait for completion ... */
+
+ __foo_free(buf);
+
+ return ret;
+ }
+
+In the above example only vcm_make_binding() function is used so that
+the above scheme will work not only for systems with MMU but also in
+case of one-to-one VCM context.
+
+** IOMMU and one-to-one contexts
+
+The following example demonstrates mapping IOMMU and one-to-one
+reservations to the same physical memory. For readability, error
+handling is not shown on the listings.
+
+First, each context needs to be created. The call used for creating
+a context is dependent on the driver used. The following is just an
+example of how this could look like:
+
+ struct vcm *vcm_onetoone, *vcm_iommu;
+
+ vcm_onetoone = vcm_onetoone_create();
+ vcm_iommu = vcm_foo_mmu_create();
+
+Once contexts are created, physical space needs to be allocated,
+reservations made on each context and physical memory mapped to those
+reservations. Because there is a one-to-one context, the memory has
+to be allocated from its context. It's also best to map the memory in
+a single call using vcm_make_binding():
+
+ struct vcm_res *res_onetoone;
+
+ res_onetoone = vcm_make_binding(vcm_onetoone, SZ_2M | SZ_4K, 0, 0);
+
+What's left is to map the space in the other context. If the
+reservation in the other context won't be used for any other purpose
+than to reference the memory allocated above, it's best to use vcm_map():
+
+ struct vcm_res *res_iommu;
+
+ res_iommu = vcm_map(vcm_iommu, res_onetoone->phys, 0);
+
+Once the bindings have been created, the contexts need to be activated
+to make sure that they are actually on the hardware. (In case of
+one-to-one mapping it's most likely a no-operation but it's still
+required by the VCM API so it must not be omitted.)
+
+ vcm_activate(vcm_onetoone);
+ vcm_activate(vcm_iommu);
+
+At this point, both reservations represent addresses in their
+respective address spaces that are bound to the same physical memory.
+Devices connected through the MMU can access it, as well as devices
+connected directly to the memory banks. The bus address for the
+devices and the virtual address for the CPU are available through the
+'start' member of the vcm_res structure (ie. the res_* objects above).
+
+Once the mapping is no longer used and memory no longer needed it can
+be freed as follows:
+
+ vcm_unmap(res_iommu);
+ vcm_destroy_binding(res_onetoone);
+
+If the contexts are not needed either, they can be disabled:
+
+ vcm_deactivate(vcm_iommu);
+ vcm_deactivate(vcm_onetoone);
+
+and then even destroyed:
+
+ vcm_destroy(vcm_iommu);
+ vcm_destroy(vcm_onetoone);
+
+* Available drivers
+
+Not all drivers support all of the VCM functionality. What is always
+supported is:
+
+ vcm_free()
+ vcm_unbind()
+ vcm_unreserve()
+
+Note, however, that vcm_unbind() may leave the virtual reservation in
+an unusable state.
+
+The following VCM drivers are provided:
+
+** Real hardware drivers
+
+There are no real hardware drivers at this time.
+
+** One-to-One drivers
+
+As it has been noted, One-to-One drivers are limited in the sense that
+certain operations are very unlikely to succeed. In fact, it is often
+certain that some operations will fail. If your driver needs to be
+able to run with a One-to-One driver you should limit operations to:
+
+ vcm_make_binding()
+ vcm_destroy_binding()
+
+If the above are not enough, then the following two may be used as
+well:
+
+ vcm_map()
+ vcm_unmap()
+
+If one uses vcm_unbind(), then vcm_bind() on the same reservation and
+physical memory pair should also work.
+
+There are no One-to-One drivers at this time.
+
+* Writing a VCM driver
+
+The core of VCM does not handle communication with the MMU. For this
+purpose a VCM driver is used. Its purpose is to manage virtual
+address space reservations, physical allocations as well as updating
+mappings in the hardware MMU.
+
+API designed for VCM drivers is described in the
+[[file:../include/linux/vcm-drv.h][include/linux/vcm-drv.h]] file so it might be a good idea to take a look
+inside.
+
+VCM provides an API for three different kinds of drivers. The most
+basic is the core VCM driver which VCM uses directly. Other than that, VCM
+provides two wrappers -- VCM MMU and VCM One-to-One -- which can be
+used to create drivers for real hardware VCM contexts and for
+One-to-One contexts.
+
+All of the drivers need to provide a context creation function which
+will allocate memory, fill in the start address, size and pointer to
+driver operations, and then call an init function which fills in the
+rest of the fields and validates the entered values.
+
+** Writing a core VCM driver
+
+The core driver needs to provide a context creation function as well
+as at least some of the following operations:
+
+ void (*cleanup)(struct vcm *vcm);
+
+ int (*alloc)(struct vcm *vcm, resource_size_t size,
+ struct vcm_phys **phys, unsigned alloc_flags,
+ struct vcm_res **res, unsigned res_flags);
+ struct vcm_res *(*res)(struct vcm *vcm, resource_size_t size,
+ unsigned flags);
+ struct vcm_phys *(*phys)(struct vcm *vcm, resource_size_t size,
+ unsigned flags);
+
+ void (*unreserve)(struct vcm_res *res);
+
+ struct vcm_res *(*map)(struct vcm *vcm, struct vcm_phys *phys,
+ unsigned flags);
+ int (*bind)(struct vcm_res *res, struct vcm_phys *phys);
+ void (*unbind)(struct vcm_res *res);
+
+ int (*activate)(struct vcm *vcm);
+ void (*deactivate)(struct vcm *vcm);
+
+All of the operations (except for alloc) may assume that all
+pointer arguments are non-NULL. (In case of alloc, if any argument is
+NULL it is either phys or res, never both.)
+
+*** Context creation
+
+To use a VCM driver a VCM context has to be provided which is bound to
+the driver. This is done by a driver-dependent call defined in its
+header file. Such a call may take various arguments to configure the
+context of the MMU. Its prototype may look as follows:
+
+ struct vcm *__must_check vcm_samp_create(/* ... */);
+
+The driver will most likely define a structure encapsulating the vcm
+structure (in the usual way). The context creation function must
+allocate space for such a structure and initialise it correctly
+including all members of the vcm structure except for activations.
+The activations member is initialised by calling:
+
+ struct vcm *__must_check vcm_init(struct vcm *vcm);
+
+This function also validates that all fields are set correctly.
+
+The driver field of the vcm structure must point to a structure with
+all operations supported by the driver.
+
+If everything succeeds, the function has to return pointer to the vcm
+structure inside the encapsulating structure. It is the pointer that
+will be passed to all of the driver's operations. On error,
+an error pointer must be returned (ie. an ERR_PTR() value, not NULL).
+
+The function might look something like the following:
+
+ struct vcm *__must_check vcm_foo_create(/* ... */)
+ {
+ struct vcm_foo *foo;
+ struct vcm *vcm;
+
+ foo = kzalloc(sizeof *foo, GFP_KERNEL);
+ if (!foo)
+ return ERR_PTR(-ENOMEM);
+
+ /* ... do stuff ... */
+
+ foo->vcm.start = /* ... */;
+ foo->vcm.size = /* ... */;
+ foo->vcm.driver = &vcm_foo_driver;
+
+ vcm = vcm_init(&foo->vcm);
+ if (IS_ERR(vcm)) {
+ /* ... error recovery ... */
+ kfree(foo);
+ }
+ return vcm;
+ }
+
+*** Cleaning up
+
+The cleanup operation is called when the VCM context is destroyed.
+Its purpose is to free all resources acquired when the VCM context was
+created including the space for the context structure. If it is not
+given, the memory is freed using the kfree() function.
+
+*** Allocation and reservations
+
+If alloc operation is specified, res and phys operations are ignored.
+The observable behaviour of the alloc operation should mimic, as
+closely as possible, the res and phys operations called one after the
+other.
+
+The reason for this operation is that in case of one-to-one VCM
+contexts, the driver may not be able to bind together arbitrary
+reservation with an arbitrary physical space. In one-to-one contexts,
+reservations and physical memory are tied together and need to be
+made at the same time to make binding possible.
+
+The alloc operation may be called with both res and phys set, or
+with at most one of them being NULL.
+
+The res operation reserves virtual address space in the VCM context.
+The function must set the start and res_size members of the vcm_res
+structure -- all other fields are filled by the VCM framework.
+
+The phys operation allocates physical space which can later be bound
+to the reservation.
+
+Both phys and alloc callbacks need to provide a free callback along
+with the vcm_phys structure, which will, as one may imagine, free the
+allocated space when the user calls vcm_free().
+
+All those operations may assume that size is non-zero and divisible
+by PAGE_SIZE.
+
+*** Binding
+
+The map operation is optional and it joins res and bind operations
+together. Like alloc operation, this is provided because in case of
+one-to-one mappings, the VCM driver may be unable to bind together
+physical space with an arbitrary reservation.
+
+Moreover, in the case of some VCM drivers, a mapping for a given
+physical memory may already be present (ie. in the case of using the VMM).
+
+A reservation created with the map operation does not have to be
+usable with any other physical space than the one provided when the
+reservation was created.
+
+The bind operation binds given reservation with a given physical
+memory. The operation may assume that reservation given as an
+argument is not bound to any physical memory.
+
+Whichever of the two operations is used, the binding must be reflected
+on the hardware if the VCM context has been activated. If the VCM
+context has not been activated, this is not required.
+
+The vcm_map() function uses the map operation if one is provided.
+Otherwise, it falls back to the alloc or res operation followed by the
+bind operation. If this is also not possible, -EOPNOTSUPP is returned.
+Similarly, the vcm_bind() function uses the bind operation unless it
+is not provided, in which case -EOPNOTSUPP is returned.
+
+Also, if alloc operation is not provided but map is, the
+vcm_make_binding() function will use phys and map operations.
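The fallback order can be sketched as a dispatch over nullable
operations. This is a hedged, standalone illustration with simplified
signatures (struct ops, do_map and op_ok are invented names; the real
operations take vcm, res and phys arguments):

```c
#include <errno.h>
#include <stddef.h>

/* Simplified driver vtable: each op may be NULL when unsupported.
 * Real VCM ops take vcm/res/phys arguments; this sketch elides them. */
struct ops {
	int (*map)(void);	/* reserve + bind in one step */
	int (*res)(void);	/* reserve virtual address space */
	int (*bind)(void);	/* bind physical memory to a reservation */
};

static int op_ok(void) { return 0; }

/* Dispatch rule described above: prefer the map op, then fall back
 * to res followed by bind, otherwise fail with -EOPNOTSUPP. */
static int do_map(const struct ops *d)
{
	if (d->map)
		return d->map();
	if (d->res && d->bind) {
		int ret = d->res();
		return ret ? ret : d->bind();
	}
	return -EOPNOTSUPP;
}
```

A driver therefore only needs to implement map when res plus bind
cannot express its one-to-one constraints.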
+
+*** Freeing resources
+
+The unbind callback removes the binding between a reservation and
+physical memory. If the unbind operation is not provided, VCM assumes
+that it is a no-operation.
+
+The unreserve callback releases a reservation as well as frees the
+allocated space for the vcm_res structure. It is required, and if it
+is not provided vcm_unreserve() will generate a warning.
+
+*** Activation
+
+When a VCM context is activated, the activate callback is called. It is
+called only once even if vcm_activate() is called several times on the
+same context.
+
+When VCM context is deactivated (that is, if for each call to
+vcm_activate(), vcm_deactivate() was called) the deactivate callback
+is called.
+
+When a VCM context is activated, all bound reservations must be
+reflected on the hardware MMU (if any). Also, after activation, all
+calls to vcm_bind(), vcm_map() or vcm_make_binding() must
+automatically reflect new mappings on the hardware MMU.
+
+Neither of the operations is required and, if missing, VCM will
+assume they are no-operations and no warning will be generated.
+
+* Epilogue
+
+The initial version of the VCM framework was written by Zach Pfeffer
+<[email protected]>. It was then redesigned and mostly
+rewritten by Michal Nazarewicz <[email protected]>.
+
+The new version is still lacking a few important features. Most
+notably, no real hardware MMU has been implemented yet. This may be
+ported from Zach's original proposal.
+
+Also, support for VMM is lacking. This is another thing that can be
+ported from Zach's proposal.
diff --git a/include/linux/vcm-drv.h b/include/linux/vcm-drv.h
new file mode 100644
index 0000000..d7ae660
--- /dev/null
+++ b/include/linux/vcm-drv.h
@@ -0,0 +1,117 @@
+/*
+ * Virtual Contiguous Memory driver API header
+ * Copyright (c) 2010 by Samsung Electronics.
+ * Written by Michal Nazarewicz ([email protected])
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your option) any later version of the license.
+ */
+
+/*
+ * See Documentation/virtual-contiguous-memory.txt for details.
+ */
+
+#ifndef __LINUX_VCM_DRV_H
+#define __LINUX_VCM_DRV_H
+
+#include <linux/vcm.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/gfp.h>
+
+#include <linux/atomic.h>
+
+/**
+ * struct vcm_driver - the MMU driver operations.
+ * @cleanup: called when the vcm object is destroyed; if omitted,
+ * kfree() will be used.
+ * @alloc: callback function for allocating physical memory and
+ * reserving virtual address space; XXX FIXME: document;
+ * if set, @res and @phys are ignored.
+ * @res: creates a reservation of virtual address space; XXX FIXME:
+ * document; if @alloc is provided this is ignored.
+ * @phys: allocates a physical memory; XXX FIXME: document; if @alloc
+ * is provided this is ignored.
+ * @unreserve: destroys a virtual address space reservation created by
+ * @res or @alloc; required.
+ * @map: reserves address space and binds a physical memory to it.
+ * @bind: binds a physical memory to a reserved address space.
+ * @unbind: unbinds a physical memory from reserved address space.
+ * @activate: activates the context making all bindings active; once
+ * the context has been activated, this callback is not
+ * called again until context is deactivated and
+ * activated again (so if user calls vcm_activate()
+ * several times only the first call in sequence will
+ * invoke this callback).
+ * @deactivate: deactivates the context making all bindings inactive;
+ * a call to this callback always accompanies a call to the
+ * @activate callback.
+ */
+struct vcm_driver {
+ void (*cleanup)(struct vcm *vcm);
+
+ int (*alloc)(struct vcm *vcm, resource_size_t size,
+ struct vcm_phys **phys, unsigned alloc_flags,
+ struct vcm_res **res, unsigned res_flags);
+ struct vcm_res *(*res)(struct vcm *vcm, resource_size_t size,
+ unsigned flags);
+ struct vcm_phys *(*phys)(struct vcm *vcm, resource_size_t size,
+ unsigned flags);
+
+ void (*unreserve)(struct vcm_res *res);
+
+ struct vcm_res *(*map)(struct vcm *vcm, struct vcm_phys *phys,
+ unsigned flags);
+ int (*bind)(struct vcm_res *res, struct vcm_phys *phys);
+ void (*unbind)(struct vcm_res *res);
+
+ int (*activate)(struct vcm *vcm);
+ void (*deactivate)(struct vcm *vcm);
+};
+
+/**
+ * struct vcm_phys - representation of allocated physical memory.
+ * @count: number of contiguous parts the memory consists of; if this
+ * equals one the whole memory block is physically contiguous;
+ * read only.
+ * @size: total size of the allocated memory; read only.
+ * @free: callback function called when memory is freed; internal.
+ * @bindings: how many virtual address space reservations this memory has
+ * been bound to; internal.
+ * @parts: array of @count parts describing each physically contiguous
+ * memory block that the whole area consists of; each element
+ * describes part's physical starting address in bytes
+ * (@parts->start), its size in bytes (@parts->size) and
+ * (optionally) pointer to first struct page (@parts->page);
+ * read only.
+ */
+struct vcm_phys {
+ unsigned count;
+ resource_size_t size;
+
+ void (*free)(struct vcm_phys *phys);
+ atomic_t bindings;
+
+ struct vcm_phys_part {
+ phys_addr_t start;
+ struct page *page;
+ resource_size_t size;
+ } parts[0];
+};
+
+/**
+ * vcm_init() - initialises VCM context structure.
+ * @vcm: the VCM context to initialise.
+ *
+ * This function initialises the vcm structure created by an MMU driver
+ * when setting things up. It sets up all fields of the vcm structure
+ * except for @vcm->start, @vcm->size and @vcm->driver, which are
+ * validated by this function. If they hold invalid values, the function
+ * produces a warning and returns an error pointer. If everything is
+ * fine, @vcm is returned.
+ */
+struct vcm *__must_check vcm_init(struct vcm *vcm);
+
+#endif
diff --git a/include/linux/vcm.h b/include/linux/vcm.h
new file mode 100644
index 0000000..965dc9b
--- /dev/null
+++ b/include/linux/vcm.h
@@ -0,0 +1,275 @@
+/*
+ * Virtual Contiguous Memory header
+ * Copyright (c) 2010 by Samsung Electronics.
+ * Written by Michal Nazarewicz ([email protected])
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your option) any later version of the license.
+ */
+
+/*
+ * See Documentation/virtual-contiguous-memory.txt for details.
+ */
+
+#ifndef __LINUX_VCM_H
+#define __LINUX_VCM_H
+
+#include <linux/kref.h>
+#include <linux/compiler.h>
+
+struct vcm_driver;
+struct vcm_phys;
+
+/**
+ * struct vcm - A virtually contiguous memory context.
+ * @start: the smallest possible address available in this context.
+ * @size: size of available address space in bytes; internal, read
+ * only for MMU drivers.
+ * @activations: How many times context was activated; internal,
+ * read only for MMU drivers.
+ * @driver: MMU driver handling this context; internal.
+ *
+ * This structure represents a context of virtually contiguous memory
+ * managed by the MMU driver pointed to by the @driver field. This is
+ * the main structure used to interact with the VCM framework.
+ *
+ * Whenever a driver wants to reserve virtual address space or allocate
+ * backing storage, a pointer to this structure must be passed.
+ *
+ */
+struct vcm {
+ dma_addr_t start;
+ resource_size_t size;
+ atomic_t activations;
+ const struct vcm_driver *driver;
+};
+
+/**
+ * struct vcm_res - A reserved virtually contiguous address space.
+ * @start: bus address of the region in bytes; read only.
+ * @bound_size: number of bytes actually bound to the virtual address;
+ * read only.
+ * @res_size: size of the reserved address space in bytes; read only.
+ * @vcm: VCM context; internal, read only for MMU drivers.
+ * @phys: pointer to physical memory bound to this reservation; NULL
+ * if no physical memory is bound; read only.
+ *
+ * This structure represents a portion of virtually contiguous address
+ * space reserved for use with the driver. Once address space is
+ * reserved, physical memory can be bound to it so that it will point
+ * to real memory.
+ */
+struct vcm_res {
+ dma_addr_t start;
+ resource_size_t bound_size;
+ resource_size_t res_size;
+
+ struct vcm *vcm;
+ struct vcm_phys *phys;
+};
+
+
+/**
+ * vcm_destroy() - destroys a VCM context.
+ * @vcm: VCM to destroy.
+ */
+void vcm_destroy(struct vcm *vcm);
+
+/**
+ * vcm_make_binding() - allocates memory and binds it to virtual address space
+ * @vcm: VCM context to reserve virtual address space in
+ * @size: number of bytes to allocate; rounded up to a multiple of PAGE_SIZE
+ * @alloc_flags: additional allocator flags; see vcm_alloc() for
+ * description of those.
+ * @res_flags: additional reservation flags; see vcm_reserve() for
+ * description of those.
+ *
+ * This is a call that binds together three other calls:
+ * vcm_reserve(), vcm_alloc() and vcm_bind(). The purpose of this
+ * function is that on systems with no IO MMU separate calls to
+ * vcm_alloc() and vcm_reserve() may fail whereas when called together
+ * they may work correctly.
+ *
+ * This is a consequence of the fact that with no IO MMU the simulated
+ * virtual address must be the same as the physical address, thus if
+ * virtual address space were reserved first and physical memory
+ * allocated afterwards, the two addresses might not match.
+ *
+ * With this call, a driver that simulates an IO MMU may simply allocate
+ * physical memory and, when this succeeds, create a matching reservation.
+ *
+ * In short, if device drivers do not need more advanced MMU
+ * functionalities, they should limit themselves to this function
+ * since then the drivers may be easily ported to systems without an
+ * IO MMU.
+ *
+ * To access the vcm_phys structure created by this call, use the phys
+ * field of the returned vcm_res structure.
+ *
+ * On error returns a pointer which yields true when tested with
+ * IS_ERR().
+ */
+struct vcm_res *__must_check
+vcm_make_binding(struct vcm *vcm, resource_size_t size,
+ unsigned alloc_flags, unsigned res_flags);
+
+/**
+ * vcm_map() - makes a reservation and binds physical memory to it
+ * @vcm: VCM context
+ * @phys: physical memory to bind.
+ * @flags: additional flags; see vcm_reserve() for description of
+ * those.
+ *
+ * This is a call that binds together two other calls: vcm_reserve()
+ * and vcm_bind(). If all you need is to reserve address space and
+ * bind physical memory, it is better to use this call since it may
+ * create better mappings in some situations.
+ *
+ * Drivers may be optimised in such a way that it won't be possible to
+ * use the reservation with different physical memory.
+ *
+ * On error returns a pointer which yields true when tested with
+ * IS_ERR().
+ */
+struct vcm_res *__must_check
+vcm_map(struct vcm *vcm, struct vcm_phys *phys, unsigned flags);
+
+/**
+ * vcm_alloc() - allocates a physical memory for use with vcm_res.
+ * @vcm: VCM context allocation is performed in.
+ * @size: number of bytes to allocate; rounded up to a multiple of PAGE_SIZE
+ * @flags: additional allocator flags; XXX FIXME: describe
+ *
+ * In case of some MMU drivers, the @vcm may be important and later
+ * binding (vcm_bind()) may fail if done on another @vcm.
+ *
+ * On success returns a vcm_phys structure representing an allocated
+ * physical memory that can be bound to reserved virtual address
+ * space. On error returns a pointer which yields true when tested with
+ * IS_ERR().
+ */
+struct vcm_phys *__must_check
+vcm_alloc(struct vcm *vcm, resource_size_t size, unsigned flags);
+
+/**
+ * vcm_free() - frees an allocated physical memory
+ * @phys: physical memory to free.
+ *
+ * If the physical memory is bound to any reserved address space it
+ * must be unbound first. Otherwise a warning will be issued and
+ * the memory won't be freed, causing a memory leak.
+ */
+void vcm_free(struct vcm_phys *phys);
+
+/**
+ * vcm_reserve() - reserves a portion of virtual address space.
+ * @vcm: VCM context reservation is performed in.
+ * @size: number of bytes to allocate; rounded up to a multiple of PAGE_SIZE
+ * @flags: additional reservation flags; XXX FIXME: describe
+ * @alignment: required alignment of the reserved space; must be
+ * a power of two or zero.
+ *
+ * On success returns a vcm_res structure representing a reserved
+ * (contiguous) virtual address space that physical memory can be
+ * bound to (using vcm_bind()). On error returns a pointer which
+ * yields true when tested with IS_ERR().
+ */
+struct vcm_res *__must_check
+vcm_reserve(struct vcm *vcm, resource_size_t size, unsigned flags);
+
+/**
+ * vcm_unreserve() - destroys a virtual address space reservation
+ * @res: reservation to destroy.
+ *
+ * If any physical memory is bound to the reserved address space it
+ * must be unbound first. Otherwise it will be unbound and a warning
+ * will be issued.
+ */
+void vcm_unreserve(struct vcm_res *res);
+
+/**
+ * vcm_bind() - binds a physical memory to virtual address space
+ * @res: virtual address space to bind the physical memory.
+ * @phys: physical memory to bind to the virtual addresses.
+ *
+ * The mapping won't be active unless vcm_activate() has been called
+ * on the VCM context that @res was created in.
+ *
+ * If @phys is already bound to @res this function returns -EALREADY.
+ * If some other physical memory is bound to @res -EADDRINUSE is
+ * returned. If the size of the physical memory is larger than the
+ * virtual space -ENOSPC is returned. In all other cases the physical
+ * memory is bound to the virtual address and on success zero is
+ * returned, on error a negative number.
+ */
+int __must_check vcm_bind(struct vcm_res *res, struct vcm_phys *phys);
+
+/**
+ * vcm_unbind() - unbinds a physical memory from virtual address space
+ * @res: virtual address space to unbind the physical memory from.
+ *
+ * This reverses the effect of the vcm_bind() function. Function
+ * returns physical space that was bound to the reservation (or NULL
+ * if no space was bound in which case also a warning is issued).
+ */
+struct vcm_phys *vcm_unbind(struct vcm_res *res);
+
+/**
+ * vcm_destroy_binding() - destroys the binding
+ * @res: a bound reserved address space to destroy.
+ *
+ * This function incorporates three functions: vcm_unbind(),
+ * vcm_free() and vcm_unreserve() (in that order) in one call.
+ */
+void vcm_destroy_binding(struct vcm_res *res);
+
+/**
+ * vcm_unmap() - unbinds physical memory and unreserves address space
+ * @res: reservation to destroy
+ *
+ * This is a call that binds together two other calls: vcm_unbind()
+ * and vcm_unreserve().
+ */
+static inline void vcm_unmap(struct vcm_res *res)
+{
+ vcm_unbind(res);
+ vcm_unreserve(res);
+}
+
+/**
+ * vcm_activate() - activates bindings in VCM.
+ * @vcm: VCM to activate bindings in.
+ *
+ * All of the bindings on the @vcm done before this function is called
+ * are inactive and do not take effect. The call to this function
+ * guarantees that all bindings are sent to the hardware MMU (if any).
+ *
+ * After VCM is activated all bindings will be automatically updated
+ * on the hardware MMU, so there is no need to call this function
+ * after each vcm_bind()/vcm_unbind().
+ *
+ * Each call to vcm_activate() should be later accompanied by a call
+ * to vcm_deactivate(). Otherwise a warning will be issued when VCM
+ * context is destroyed (vcm_destroy()). This function can be called
+ * several times.
+ *
+ * On success returns zero, on error a negative error code.
+ */
+int __must_check vcm_activate(struct vcm *vcm);
+
+/**
+ * vcm_deactivate() - deactivates bindings in VCM.
+ * @vcm: VCM to deactivate bindings in.
+ *
+ * This function reverts effect of the vcm_activate() function. After
+ * calling this function caller has no guarantee that bindings defined
+ * in VCM are active.
+ *
+ * If this function is called without a prior call to vcm_activate(),
+ * a warning is issued.
+ */
+void vcm_deactivate(struct vcm *vcm);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index ae35744..7f0e4b1 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -338,6 +338,21 @@ config CMA_DEBUG

This is mostly used during development. If unsure, say "n".

+config VCM
+ bool "Virtual Contiguous Memory framework"
+ help
+ This enables the Virtual Contiguous Memory framework which
+ provides an abstraction for virtual address space provided by
+ various MMUs present on the platform.
+
+ The framework uses pluggable MMU drivers for hardware MMUs and,
+ if drivers obey some limitations, it can also be used on
+ platforms with no MMU.
+
+ For more information see
+ <Documentation/virtual-contiguous-memory.txt>. If unsure, say
+ "n".
+
#
# UP and nommu archs use km based percpu allocator
#
diff --git a/mm/Makefile b/mm/Makefile
index 9bd9f8f..b96a6cb 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -43,3 +43,4 @@ obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
obj-$(CONFIG_CMA) += cma.o
+obj-$(CONFIG_VCM) += vcm.o
diff --git a/mm/vcm.c b/mm/vcm.c
new file mode 100644
index 0000000..1389ee6
--- /dev/null
+++ b/mm/vcm.c
@@ -0,0 +1,304 @@
+/*
+ * Virtual Contiguous Memory core
+ * Copyright (c) 2010 by Samsung Electronics.
+ * Written by Michal Nazarewicz ([email protected])
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your option) any later version of the license.
+ */
+
+/*
+ * See Documentation/virtual-contiguous-memory.txt for details.
+ */
+
+#include <linux/vcm-drv.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+
+/******************************** Devices API *******************************/
+
+void vcm_destroy(struct vcm *vcm)
+{
+ if (WARN_ON(atomic_read(&vcm->activations)))
+ vcm->driver->deactivate(vcm);
+
+ if (vcm->driver->cleanup)
+ vcm->driver->cleanup(vcm);
+ else
+ kfree(vcm);
+}
+EXPORT_SYMBOL_GPL(vcm_destroy);
+
+static void
+__vcm_alloc_and_reserve(struct vcm *vcm, resource_size_t size,
+ struct vcm_phys **phys, unsigned alloc_flags,
+ struct vcm_res **res, unsigned res_flags)
+{
+ int ret, alloc = 0;
+
+ if (res)
+ *res = NULL; /* make the error path safe before *res is assigned */
+
+ if (WARN_ON(!vcm) || !size) {
+ ret = -EINVAL;
+ goto error;
+ }
+
+ size = PAGE_ALIGN(size);
+
+ if (vcm->driver->alloc) {
+ ret = vcm->driver->alloc(vcm, size,
+ phys, alloc_flags, res, res_flags);
+ if (ret)
+ goto error;
+ alloc = 1;
+ } else if ((res && !vcm->driver->res) || (phys && !vcm->driver->phys)) {
+ ret = -EOPNOTSUPP;
+ goto error;
+ }
+
+ if (res) {
+ if (!alloc) {
+ *res = vcm->driver->res(vcm, size, res_flags);
+ if (IS_ERR(*res)) {
+ ret = PTR_ERR(*res);
+ goto error;
+ }
+ }
+ (*res)->bound_size = 0;
+ (*res)->vcm = vcm;
+ (*res)->phys = NULL;
+ }
+
+ if (phys) {
+ if (!alloc) {
+ *phys = vcm->driver->phys(vcm, size, alloc_flags);
+ if (IS_ERR(*phys)) {
+ ret = PTR_ERR(*phys);
+ goto error;
+ }
+ /* a phys without a free callback could never be released */
+ if (WARN_ON(!(*phys)->free)) {
+ ret = -EINVAL;
+ goto error;
+ }
+ }
+ atomic_set(&(*phys)->bindings, 0);
+ }
+
+ return;
+
+error:
+ if (phys)
+ *phys = ERR_PTR(ret);
+ if (res) {
+ if (!IS_ERR_OR_NULL(*res))
+ vcm_unreserve(*res);
+ *res = ERR_PTR(ret);
+ }
+}
+
+struct vcm_res *__must_check
+vcm_make_binding(struct vcm *vcm, resource_size_t size,
+ unsigned alloc_flags, unsigned res_flags)
+{
+ struct vcm_phys *phys;
+ struct vcm_res *res;
+
+ if (WARN_ON(!vcm || !size || (size & (PAGE_SIZE - 1))))
+ return ERR_PTR(-EINVAL);
+ else if (vcm->driver->alloc || !vcm->driver->map) {
+ int ret;
+
+ __vcm_alloc_and_reserve(vcm, size, &phys, alloc_flags,
+ &res, res_flags);
+
+ if (IS_ERR(res))
+ return res;
+
+ ret = vcm_bind(res, phys);
+ if (!ret)
+ return res;
+
+ if (vcm->driver->unreserve)
+ vcm->driver->unreserve(res);
+ phys->free(phys);
+ return ERR_PTR(ret);
+ } else {
+ __vcm_alloc_and_reserve(vcm, size, &phys, alloc_flags,
+ NULL, 0);
+
+ if (IS_ERR(phys))
+ return ERR_CAST(phys);
+
+ res = vcm_map(vcm, phys, res_flags);
+ if (IS_ERR(res))
+ phys->free(phys);
+
+ return res;
+ }
+}
+EXPORT_SYMBOL_GPL(vcm_make_binding);
+
+struct vcm_phys *__must_check
+vcm_alloc(struct vcm *vcm, resource_size_t size, unsigned flags)
+{
+ struct vcm_phys *phys;
+
+ __vcm_alloc_and_reserve(vcm, size, &phys, flags, NULL, 0);
+
+ return phys;
+}
+EXPORT_SYMBOL_GPL(vcm_alloc);
+
+struct vcm_res *__must_check
+vcm_reserve(struct vcm *vcm, resource_size_t size, unsigned flags)
+{
+ struct vcm_res *res;
+
+ __vcm_alloc_and_reserve(vcm, size, NULL, 0, &res, flags);
+
+ return res;
+}
+EXPORT_SYMBOL_GPL(vcm_reserve);
+
+struct vcm_res *__must_check
+vcm_map(struct vcm *vcm, struct vcm_phys *phys, unsigned flags)
+{
+ struct vcm_res *res;
+ int ret;
+
+ if (WARN_ON(!vcm))
+ return ERR_PTR(-EINVAL);
+
+ if (vcm->driver->map) {
+ res = vcm->driver->map(vcm, phys, flags);
+ if (!IS_ERR(res)) {
+ atomic_inc(&phys->bindings);
+ res->phys = phys;
+ res->bound_size = phys->size;
+ res->vcm = vcm;
+ }
+ return res;
+ }
+
+ res = vcm_reserve(vcm, phys->size, flags);
+ if (IS_ERR(res))
+ return res;
+
+ ret = vcm_bind(res, phys);
+ if (!ret)
+ return res;
+
+ vcm_unreserve(res);
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(vcm_map);
+
+void vcm_unreserve(struct vcm_res *res)
+{
+ if (!WARN_ON(!res)) {
+ if (WARN_ON(res->phys))
+ vcm_unbind(res);
+ if (!WARN_ON_ONCE(!res->vcm->driver->unreserve))
+ res->vcm->driver->unreserve(res);
+ }
+}
+EXPORT_SYMBOL_GPL(vcm_unreserve);
+
+void vcm_free(struct vcm_phys *phys)
+{
+ if (!WARN_ON(!phys || atomic_read(&phys->bindings)))
+ phys->free(phys);
+}
+EXPORT_SYMBOL_GPL(vcm_free);
+
+int __must_check vcm_bind(struct vcm_res *res, struct vcm_phys *phys)
+{
+ int ret;
+
+ if (WARN_ON(!res || !phys))
+ return -EINVAL;
+
+ if (res->phys == phys)
+ return -EALREADY;
+
+ if (res->phys)
+ return -EADDRINUSE;
+
+ if (phys->size > res->res_size)
+ return -ENOSPC;
+
+ if (!res->vcm->driver->bind)
+ return -EOPNOTSUPP;
+
+ ret = res->vcm->driver->bind(res, phys);
+ if (ret >= 0) {
+ atomic_inc(&phys->bindings);
+ res->phys = phys;
+ res->bound_size = phys->size;
+ }
+ return ret;
+}
+EXPORT_SYMBOL_GPL(vcm_bind);
+
+struct vcm_phys *vcm_unbind(struct vcm_res *res)
+{
+ struct vcm_phys *phys = NULL;
+ if (!WARN_ON(!res || !res->phys)) {
+ phys = res->phys;
+ if (res->vcm->driver->unbind)
+ res->vcm->driver->unbind(res);
+ WARN_ON(!atomic_add_unless(&phys->bindings, -1, 0));
+ res->phys = NULL;
+ res->bound_size = 0;
+ }
+ return phys;
+}
+EXPORT_SYMBOL_GPL(vcm_unbind);
+
+void vcm_destroy_binding(struct vcm_res *res)
+{
+ if (!WARN_ON(!res)) {
+ struct vcm_phys *phys = vcm_unbind(res);
+ if (phys)
+ vcm_free(phys);
+ vcm_unreserve(res);
+ }
+}
+EXPORT_SYMBOL_GPL(vcm_destroy_binding);
+
+int __must_check vcm_activate(struct vcm *vcm)
+{
+ if (WARN_ON(!vcm))
+ return -EINVAL;
+ else if (atomic_inc_return(&vcm->activations) != 1
+ || !vcm->driver->activate)
+ return 0;
+ else
+ return vcm->driver->activate(vcm);
+}
+EXPORT_SYMBOL_GPL(vcm_activate);
+
+void vcm_deactivate(struct vcm *vcm)
+{
+ if (!WARN_ON(!vcm || !atomic_read(&vcm->activations))
+ && atomic_dec_and_test(&vcm->activations)
+ && vcm->driver->deactivate)
+ vcm->driver->deactivate(vcm);
+}
+EXPORT_SYMBOL_GPL(vcm_deactivate);
+
+
+/****************************** VCM Drivers API *****************************/
+
+struct vcm *__must_check vcm_init(struct vcm *vcm)
+{
+ if (WARN_ON(!vcm || !vcm->size
+ || ((vcm->start | vcm->size) & ~PAGE_MASK)
+ || !vcm->driver || !vcm->driver->unreserve))
+ return ERR_PTR(-EINVAL);
+
+ atomic_set(&vcm->activations, 0);
+
+ return vcm;
+}
+EXPORT_SYMBOL_GPL(vcm_init);
--
1.6.2.5

2010-12-17 04:18:07

by Cho KyongHo

[permalink] [raw]
Subject: [RFCv2,2/8] mm: vcm: reference counting on a reservation added

This commit adds vcm_ref_reserve() and a refcnt member to the vcm_res
structure. The feature is enabled by turning on
CONFIG_VCM_RES_REFCNT. It frees users of the vcm framework from
having to track the sequence of reserving and unreserving
in complex scenarios.

Signed-off-by: KyongHo Cho <[email protected]>
---
Documentation/virtual-contiguous-memory.txt | 47 +++++++++++++++++++++++++++
include/linux/vcm.h | 23 +++++++++++++
mm/Kconfig | 7 ++++
mm/vcm.c | 17 ++++++++++
4 files changed, 94 insertions(+), 0 deletions(-)

diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
index 9793a86..2008465 100644
--- a/Documentation/virtual-contiguous-memory.txt
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -275,6 +275,34 @@ To deactivate the VCM context vcm_deactivate() function is used:
Both of those functions can be called several times if all calls to
vcm_activate() are paired with a later call to vcm_deactivate().

+** Acquiring and releasing ownership of a reservation
+
+Once a device driver creates a reservation, it may want to pass it to
+other device drivers or attach it to a data structure. Since the
+reservation may be shared among many device drivers, the VCM framework
+needs to provide a simple way to unreserve a reservation.
+
+The following two functions give the caller ownership of a reservation:
+
+ struct vcm_res *__must_check
+ vcm_reserve(struct vcm *vcm, resource_size_t size, unsigned flags);
+
+ int __must_check vcm_ref_reserve(struct vcm_res *res);
+
+vcm_reserve() creates a new reservation, thus the first owner of the
+reservation is the caller of vcm_reserve(). The caller may then pass
+the reservation to another function. The function that received the
+reservation calls vcm_ref_reserve() to acquire ownership of it. When
+that function decides that it no longer needs the reservation, it
+calls vcm_unreserve() to release its ownership:
+
+ void vcm_unreserve(struct vcm_res *res);
+
+The caller does not need to determine whether other functions and
+drivers still access the reservation, because this function merely
+releases the caller's ownership. Only when vcm_unreserve() finds that
+no one owns the reservation does it actually unreserve (remove) it.
+
** Device driver example

The following is a simple, untested example of how platform and
@@ -706,6 +734,25 @@ automatically reflect new mappings on the hardware MMU.
Neither of the operations are required and if missing, VCM will
assume they are a no-operation and no warning will be generated.

+*** Ownership of a reservation
+
+When to acquire the ownership of a reservation:
+ - When creating a new reservation (ownership is acquired automatically)
+ - When assigning a reservation to a member of a data structure
+ - When a reservation is passed in from a caller function
+ - When first accessing, in the current context, a reservation that is
+   a global variable.
+The first case is handled by vcm_reserve(), the others by vcm_ref_reserve().
+
+When to release the ownership of a reservation:
+ - When a reservation is no longer needed in the current context.
+ - When returning from a function that received a reservation from its
+   caller and acquired ownership of it.
+ - When removing a reservation from a data structure that holds a pointer
+   to the reservation.
+It is neither required nor possible to remove the reservation
+explicitly. The last call to vcm_unreserve() removes the reservation.
+
* Epilogue

The initial version of the VCM framework was written by Zach Pfeffer
diff --git a/include/linux/vcm.h b/include/linux/vcm.h
index 965dc9b..3d54f18 100644
--- a/include/linux/vcm.h
+++ b/include/linux/vcm.h
@@ -52,6 +52,9 @@ struct vcm {
* @bound_size: number of bytes actually bound to the virtual address;
* read only.
* @res_size: size of the reserved address space in bytes; read only.
+ * @refcnt: reference count of a reservation to pass ownership of
+ * a reservation in a safe way; internal.
+ * Implemented only when CONFIG_VCM_RES_REFCNT is enabled.
* @vcm: VCM context; internal, read only for MMU drivers.
* @phys: pointer to physical memory bound to this reservation; NULL
* if no physical memory is bound; read only.
@@ -65,6 +68,9 @@ struct vcm_res {
dma_addr_t start;
resource_size_t bound_size;
resource_size_t res_size;
+#ifdef CONFIG_VCM_RES_REFCNT
+ atomic_t refcnt;
+#endif

struct vcm *vcm;
struct vcm_phys *phys;
@@ -180,6 +186,23 @@ struct vcm_res *__must_check
vcm_reserve(struct vcm *vcm, resource_size_t size, unsigned flags);

/**
+ * vcm_ref_reserve() - acquires the ownership of a reservation.
+ * @res: a valid reservation to access
+ *
+ * On success returns 0 and has the same effect as vcm_reserve() in the
+ * context of the caller of this function. In other words, once a function
+ * acquires the ownership of a reservation with vcm_ref_reserve(), it must
+ * release the ownership with vcm_unreserve() as soon as it no longer needs
+ * the reservation.
+ *
+ * On error returns -EINVAL. The only cause of the error is passing an invalid
+ * reservation such as NULL or an unreserved reservation.
+ */
+#ifdef CONFIG_VCM_RES_REFCNT
+int __must_check vcm_ref_reserve(struct vcm_res *res);
+#endif
+
+/**
 * vcm_unreserve() - destroys a virtual address space reservation
* @res: reservation to destroy.
*
diff --git a/mm/Kconfig b/mm/Kconfig
index 7f0e4b1..b937f32 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -353,6 +353,13 @@ config VCM
<Documentation/virtual-contiguous-memory.txt>. If unsure, say
"n".

+config VCM_RES_REFCNT
+ bool "Reference counting on reservations"
+ depends on VCM
+ help
+ This enables reference counting on a reservation to make sharing
+ and migrating the ownership of the reservation easier.
+
#
# UP and nommu archs use km based percpu allocator
#
diff --git a/mm/vcm.c b/mm/vcm.c
index 1389ee6..5819f0f 100644
--- a/mm/vcm.c
+++ b/mm/vcm.c
@@ -156,10 +156,23 @@ vcm_reserve(struct vcm *vcm, resource_size_t size, unsigned flags)

__vcm_alloc_and_reserve(vcm, size, NULL, 0, &res, flags);

+#ifdef CONFIG_VCM_RES_REFCNT
+ if (!IS_ERR(res))
+ atomic_inc(&res->refcnt);
+#endif
+
return res;
}
EXPORT_SYMBOL_GPL(vcm_reserve);

+int __must_check vcm_ref_reserve(struct vcm_res *res)
+{
+ /* atomic_inc_not_zero() refuses to resurrect a dead reservation */
+ if (WARN_ON(!res) || !atomic_inc_not_zero(&res->refcnt))
+ return -EINVAL;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(vcm_ref_reserve);
+
struct vcm_res *__must_check
vcm_map(struct vcm *vcm, struct vcm_phys *phys, unsigned flags)
{
@@ -196,6 +209,10 @@ EXPORT_SYMBOL_GPL(vcm_map);
void vcm_unreserve(struct vcm_res *res)
{
if (!WARN_ON(!res)) {
+#ifdef CONFIG_VCM_RES_REFCNT
+ if (!atomic_dec_and_test(&res->refcnt))
+ return;
+#endif
if (WARN_ON(res->phys))
vcm_unbind(res);
if (!WARN_ON_ONCE(!res->vcm->driver->unreserve))
--
1.6.2.5

2010-12-17 04:18:11

by Cho KyongHo

[permalink] [raw]
Subject: [RFCv2,8/8] mm: vcm: Sample driver added

This commit adds a sample Virtual Contiguous Memory framework
driver. It handles no real hardware and exists only for
demonstration purposes.

* * * THIS COMMIT IS NOT FOR MERGING * * *

Signed-off-by: Michal Nazarewicz <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
---
Documentation/virtual-contiguous-memory.txt | 3 +
include/linux/vcm-sample.h | 30 +++++++
mm/Kconfig | 13 +++
mm/Makefile | 1 +
mm/vcm-sample.c | 119 +++++++++++++++++++++++++++
5 files changed, 166 insertions(+), 0 deletions(-)
create mode 100644 include/linux/vcm-sample.h
create mode 100644 mm/vcm-sample.c

diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
index 9354c4c..8edd457 100644
--- a/Documentation/virtual-contiguous-memory.txt
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -781,6 +781,9 @@ already there.
Note that to use the VCM MMU wrapper one needs to select the VCM_MMU
Kconfig option or otherwise the wrapper won't be available.

+A sample driver is provided as a template for real drivers. It can be
+found in [[file:../mm/vcm-sample.c][mm/vcm-sample.c]].
+
*** Context creation

Similarly to normal drivers, MMU driver needs to provide a context
diff --git a/include/linux/vcm-sample.h b/include/linux/vcm-sample.h
new file mode 100644
index 0000000..86a71ca
--- /dev/null
+++ b/include/linux/vcm-sample.h
@@ -0,0 +1,30 @@
+/*
+ * Virtual Contiguous Memory sample driver header file
+ * Copyright (c) 2010 by Samsung Electronics.
+ * Written by Michal Nazarewicz ([email protected])
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your option) any later version of the license.
+ */
+
+/*
+ * See Documentation/virtual-contiguous-memory.txt for details.
+ */
+
+#ifndef __LINUX_VCM_SAMP_H
+#define __LINUX_VCM_SAMP_H
+
+#include <linux/types.h>
+
+struct vcm;
+
+/**
+ * vcm_samp_create() - creates a VCM context
+ *
+ * ... Documentation goes here ...
+ */
+struct vcm *__must_check vcm_samp_create(/* ... */);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index 0f4d893..adb90a8 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -430,6 +430,19 @@ config VCM_CMA
For more information see
<Documentation/virtual-contiguous-memory.txt>. If unsure, say "n".

+config VCM_SAMP
+ bool "VCM sample driver"
+ depends on VCM
+ select VCM_MMU
+ help
+ This enables a sample driver for the VCM framework. The driver
+ does not handle any real hardware. It is merely a template for
+ real drivers.
+
+ For more information see
+ <Documentation/virtual-contiguous-memory.txt>. If unsure, say
+ "n".
+
#
# UP and nommu archs use km based percpu allocator
#
diff --git a/mm/Makefile b/mm/Makefile
index 78e1bd5..515c433 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -46,3 +46,4 @@ obj-$(CONFIG_CMA) += cma.o
obj-$(CONFIG_CMA_BEST_FIT) += cma-best-fit.o
obj-$(CONFIG_VCM) += vcm.o
obj-$(CONFIG_VCM_CMA) += vcm-cma.o
+obj-$(CONFIG_VCM_SAMP) += vcm-sample.o
diff --git a/mm/vcm-sample.c b/mm/vcm-sample.c
new file mode 100644
index 0000000..27a2ae7
--- /dev/null
+++ b/mm/vcm-sample.c
@@ -0,0 +1,119 @@
+/*
+ * Virtual Contiguous Memory driver template
+ * Copyright (c) 2010 by Samsung Electronics.
+ * Written by Michal Nazarewicz ([email protected])
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your option) any later version of the license.
+ */
+
+/*
+ * This is just sample code. It does nothing useful other than
+ * presenting a template for a VCM driver.
+ */
+
+/*
+ * See Documentation/virtual-contiguous-memory.txt for details.
+ */
+
+#include <linux/err.h>
+#include <linux/slab.h>
+
+#include <linux/vcm-drv.h>
+
+struct vcm_samp {
+ struct vcm_mmu mmu;
+ /* ... */
+};
+
+static const unsigned char vcm_samp_orders[] = {
+ 4 + 20 - PAGE_SHIFT, /* 16MiB pages */
+ 0 + 20 - PAGE_SHIFT, /* 1MiB pages */
+ 6 + 10 - PAGE_SHIFT, /* 64KiB pages */
+ 2 + 10 - PAGE_SHIFT, /* 4KiB pages */
+};
+
+static int vcm_samp_activate_page(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *priv)
+{
+ struct vcm_samp *samp =
+ container_of((struct vcm *)priv, struct vcm_samp, mmu.vcm);
+
+ /*
+ * Handle adding a mapping from virtual page at @vaddr to
+ * physical page at @paddr. The page is of order @order which
+ * means that it's (PAGE_SIZE << @order) bytes.
+ */
+
+ return -EOPNOTSUPP;
+}
+
+static int vcm_samp_deactivate_page(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *priv)
+{
+ struct vcm_samp *samp =
+ container_of((struct vcm *)priv, struct vcm_samp, mmu.vcm);
+
+ /*
+ * Handle removing a mapping from virtual page at @vaddr to
+ * physical page at @paddr. The page is of order @order which
+ * means that it's (PAGE_SIZE << @order) bytes.
+ */
+
+ /* It's best not to fail here */
+ return 0;
+}
+
+static void vcm_samp_cleanup(struct vcm *vcm)
+{
+ struct vcm_samp *samp =
+ container_of(vcm, struct vcm_samp, mmu.vcm);
+
+ /* Clean ups ... */
+
+ kfree(samp);
+}
+
+struct vcm *__must_check vcm_samp_create(/* ... */)
+{
+ static const struct vcm_mmu_driver driver = {
+ .orders = vcm_samp_orders,
+ .cleanup = vcm_samp_cleanup,
+ .activate_page = vcm_samp_activate_page,
+ .deactivate_page = vcm_samp_deactivate_page,
+ };
+
+ struct vcm_samp *samp;
+ struct vcm *vcm;
+
+ switch (0) {
+ case 0:
+ case PAGE_SHIFT == 12:
+ /*
+ * If you have a compilation error here it means you
+ * are compiling for a very strange platform where
+ * PAGE_SHIFT is not 12 (i.e. PAGE_SIZE is not 4KiB).
+ * This driver assumes PAGE_SHIFT is 12.
+ */
+ }
+
+ samp = kzalloc(sizeof *samp, GFP_KERNEL);
+ if (!samp)
+ return ERR_PTR(-ENOMEM);
+
+ /* ... Set things up ... */
+
+ samp->mmu.driver = &driver;
+ /* skip first 64K so that zero address will be a NULL pointer */
+ samp->mmu.vcm.start = (64 << 10);
+ samp->mmu.vcm.size = -(64 << 10);
+
+ vcm = vcm_mmu_init(&samp->mmu);
+ if (!IS_ERR(vcm))
+ return vcm;
+
+ /* ... Error recovery ... */
+
+ kfree(samp);
+ return vcm;
+}
+EXPORT_SYMBOL_GPL(vcm_samp_create);
--
1.6.2.5

2010-12-17 04:18:12

by Cho KyongHo

[permalink] [raw]
Subject: [RFCv2,5/8] mm: vcm: VCM MMU wrapper added

From: Michal Nazarewicz <[email protected]>

This commit adds a VCM MMU wrapper which is meant to be helper
code for creating VCM drivers for real hardware MMUs.

Signed-off-by: Michal Nazarewicz <[email protected]>
Signed-off-by: Kyungmin Park <[email protected]>
---
Documentation/virtual-contiguous-memory.txt | 80 ++++++++++
include/linux/vcm-drv.h | 80 ++++++++++
mm/Kconfig | 11 ++
mm/vcm.c | 219 +++++++++++++++++++++++++++
4 files changed, 390 insertions(+), 0 deletions(-)

diff --git a/Documentation/virtual-contiguous-memory.txt b/Documentation/virtual-contiguous-memory.txt
index c830b69..9036abe 100644
--- a/Documentation/virtual-contiguous-memory.txt
+++ b/Documentation/virtual-contiguous-memory.txt
@@ -803,6 +803,86 @@ When to release the ownership of a reservation:
It is neither required nor possible to remove the reservation explicitly. The
last call to vcm_unreserve() will cause the reservation to be removed.

+** Writing a hardware MMU driver
+
+It may be undesirable to implement all of the operations that are
+required to create a usable driver. In case of hardware MMUs a helper
+wrapper driver has been created to make writing real drivers as simple
+as possible.
+
+The wrapper implements most of the functionality of the driver,
+leaving only the code that actually talks to the hardware MMU in the
+hands of the programmer. Reservation management and general
+housekeeping are already there.
+
+Note that to use the VCM MMU wrapper one needs to select the VCM_MMU
+Kconfig option; otherwise the wrapper won't be available.
+
+*** Context creation
+
+Similarly to normal drivers, an MMU driver needs to provide a context
+creation function. Such a function must provide a vcm_mmu object and
+initialise the vcm.start, vcm.size and driver fields of the structure.
+When this is done, vcm_mmu_init() should be called which will
+initialise the rest of the fields and validate entered values:
+
+ struct vcm *__must_check vcm_mmu_init(struct vcm_mmu *mmu);
+
+This is, in fact, very similar to the way a standard driver is created.
+
+*** Orders
+
+One of the fields of the vcm_mmu_driver structure is orders. This is
+an array of orders of pages supported by the hardware MMU. It must be
+sorted from largest to smallest and zero terminated.
+
+The order is the base-two logarithm of the supported page size
+divided by PAGE_SIZE. For instance, { 8, 4, 0 } means that the
+MMU supports 1MiB, 64KiB and 4KiB pages.
+
+*** Operations
+
+The three operations that MMU wrapper driver uses are:
+
+ void (*cleanup)(struct vcm *vcm);
+
+ int (*activate)(struct vcm_res *res, struct vcm_phys *phys);
+ void (*deactivate)(struct vcm_res *res, struct vcm_phys *phys);
+
+ int (*activate_page)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *vcm),
+ int (*deactivate_page)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *vcm),
+
+The first one frees all resources allocated by the context creation
+function (including the structure itself). If this operation is not
+given, kfree() will be called on the vcm_mmu structure.
+
+The activate and deactivate operations are required and they are used
+to update mappings in the MMU. Whenever a binding is activated or
+deactivated, the respective operation is called.
+
+To divide mapping into physical pages, vcm_phys_walk() function can be
+used:
+
+ int vcm_phys_walk(dma_addr_t vaddr, const struct vcm_phys *phys,
+ const unsigned char *orders,
+ int (*callback)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *priv),
+ int (*recovery)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *priv),
+ void *priv);
+
+It starts from the given virtual address and tries to divide the
+allocated physical memory into as few pages as possible, where the
+order of each page is one of the orders specified by the orders
+argument.
+
+It may be easier to implement the activate_page and deactivate_page
+operations instead, though. They are called on each individual page
+rather than on the whole mapping. The wrapper calls vcm_phys_walk()
+internally, so the driver does not need to call it explicitly.
+
* Epilogue

The initial version of the VCM framework was written by Zach Pfeffer
diff --git a/include/linux/vcm-drv.h b/include/linux/vcm-drv.h
index 536b051..98d065b 100644
--- a/include/linux/vcm-drv.h
+++ b/include/linux/vcm-drv.h
@@ -114,6 +114,86 @@ struct vcm_phys {
*/
struct vcm *__must_check vcm_init(struct vcm *vcm);

+#ifdef CONFIG_VCM_MMU
+
+struct vcm_mmu;
+
+/**
+ * struct vcm_mmu_driver - a driver used for real MMUs.
+ * @orders: array of orders of pages supported by the MMU sorted from
+ * the largest to the smallest. The last element is always
+ * zero (which means 4K page).
+ * @cleanup: Function called when the VCM context is destroyed;
+ * optional, if not provided, kfree() is used.
+ * @activate: callback function for activating a single mapping; its
+ * role is to set up the MMU so that reserved address space
+ * donated by res will point to physical memory donated by
+ * phys; called under spinlock with IRQs disabled - cannot
+ * sleep; required unless @activate_page and @deactivate_page
+ * are both provided
+ * @deactivate: this reverses the effect of @activate; called under spinlock
+ * with IRQs disabled - cannot sleep; required unless
+ * @deactivate_page is provided.
+ * @activate_page: callback function for activating a single page; it is
+ * ignored if @activate is provided; it's given a single
+ * page such that its order (given as third argument) is
+ * one of the supported orders specified in @orders;
+ * called under spinlock with IRQs disabled - cannot
+ * sleep; required unless @activate is provided.
+ * @deactivate_page: this reverses the effect of the @activate_page
+ * callback; called under spinlock with IRQs disabled
+ * - cannot sleep; required unless @activate and
+ * @deactivate are both provided.
+ */
+struct vcm_mmu_driver {
+ const unsigned char *orders;
+
+ void (*cleanup)(struct vcm *vcm);
+ int (*activate)(struct vcm_res *res, struct vcm_phys *phys);
+ void (*deactivate)(struct vcm_res *res, struct vcm_phys *phys);
+ int (*activate_page)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *vcm);
+ int (*deactivate_page)(dma_addr_t vaddr, dma_addr_t paddr,
+ unsigned order, void *vcm);
+};
+
+/**
+ * struct vcm_mmu - VCM MMU context
+ * @vcm: VCM context.
+ * @driver: VCM MMU driver's operations.
+ * @pool: virtual address space allocator; internal.
+ * @bound_res: list of bound reservations; internal.
+ * @lock: protects @bound_res and calls to activate/deactivate
+ * operations; internal.
+ * @activated: whether VCM context has been activated; internal.
+ */
+struct vcm_mmu {
+ struct vcm vcm;
+ const struct vcm_mmu_driver *driver;
+ /* internal */
+ struct gen_pool *pool;
+ struct list_head bound_res;
+ /* Protects operations on bound_res list. */
+ spinlock_t lock;
+ int activated;
+};
+
+/**
+ * vcm_mmu_init() - initialises a VCM context for a real MMU.
+ * @mmu: the vcm_mmu context to initialise.
+ *
+ * This function initialises the vcm_mmu structure created by an MMU
+ * driver when setting things up. It sets up all fields of the
+ * structure except for @mmu->vcm.start, @mmu->vcm.size and
+ * @mmu->driver, which must be set by the caller and are validated by
+ * this function. If they have invalid values, the function produces
+ * a warning and returns an error-pointer. On any other error, an
+ * error-pointer is returned as well. If everything is fine, the
+ * address of @mmu->vcm is returned.
+ */
+struct vcm *__must_check vcm_mmu_init(struct vcm_mmu *mmu);
+
+#endif
+
#ifdef CONFIG_VCM_PHYS

/**
diff --git a/mm/Kconfig b/mm/Kconfig
index 00d975e..e91499d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -369,6 +369,17 @@ config VCM_PHYS
will be automatically selected. You select it if you are going to
build external modules that will use this functionality.

+config VCM_MMU
+ bool "VCM MMU wrapper"
+ depends on VCM && MODULES
+ select VCM_PHYS
+ select GENERIC_ALLOCATOR
+ help
+ This enables the VCM MMU wrapper which helps creating VCM drivers
+ for IO MMUs. If a VCM driver is built that requires this option, it
+ will be automatically selected. You select it if you are going to
+ build external modules that will use this functionality.
+
#
# UP and nommu archs use km based percpu allocator
#
diff --git a/mm/vcm.c b/mm/vcm.c
index cd9f4ee..0d74e95 100644
--- a/mm/vcm.c
+++ b/mm/vcm.c
@@ -19,6 +19,8 @@
#include <linux/vmalloc.h>
#include <linux/err.h>
#include <linux/slab.h>
+#include <linux/genalloc.h>
+

/******************************** Devices API *******************************/

@@ -429,6 +431,223 @@ struct vcm *__must_check vcm_init(struct vcm *vcm)
EXPORT_SYMBOL_GPL(vcm_init);


+/*************************** Hardware MMU wrapper ***************************/
+
+#ifdef CONFIG_VCM_MMU
+
+struct vcm_mmu_res {
+ struct vcm_res res;
+ struct list_head bound;
+};
+
+static void vcm_mmu_cleanup(struct vcm *vcm)
+{
+ struct vcm_mmu *mmu = container_of(vcm, struct vcm_mmu, vcm);
+ WARN_ON(spin_is_locked(&mmu->lock) || !list_empty(&mmu->bound_res));
+ gen_pool_destroy(mmu->pool);
+ if (mmu->driver->cleanup)
+ mmu->driver->cleanup(vcm);
+ else
+ kfree(mmu);
+}
+
+static struct vcm_res *
+vcm_mmu_res(struct vcm *vcm, resource_size_t size, unsigned flags)
+{
+ struct vcm_mmu *mmu = container_of(vcm, struct vcm_mmu, vcm);
+ const unsigned char *orders;
+ struct vcm_mmu_res *res;
+ dma_addr_t addr;
+ unsigned order;
+
+ res = kzalloc(sizeof *res, GFP_KERNEL);
+ if (!res)
+ return ERR_PTR(-ENOMEM);
+
+ order = ffs(size) - PAGE_SHIFT - 1;
+ for (orders = mmu->driver->orders; *orders > order; ++orders)
+ /* nop */;
+ order = *orders + PAGE_SHIFT;
+
+ addr = gen_pool_alloc_aligned(mmu->pool, size, order);
+ if (!addr) {
+ kfree(res);
+ return ERR_PTR(-ENOSPC);
+ }
+
+ INIT_LIST_HEAD(&res->bound);
+ res->res.start = addr;
+ res->res.res_size = size;
+
+ return &res->res;
+}
+
+static struct vcm_phys *
+vcm_mmu_phys(struct vcm *vcm, resource_size_t size, unsigned flags)
+{
+ return vcm_phys_alloc(size, flags,
+ container_of(vcm, struct vcm_mmu,
+ vcm)->driver->orders);
+}
+
+static int __must_check
+__vcm_mmu_activate(struct vcm_res *res, struct vcm_phys *phys)
+{
+ struct vcm_mmu *mmu = container_of(res->vcm, struct vcm_mmu, vcm);
+ if (mmu->driver->activate)
+ return mmu->driver->activate(res, phys);
+
+ return vcm_phys_walk(res->start, phys, mmu->driver->orders,
+ mmu->driver->activate_page,
+ mmu->driver->deactivate_page, res->vcm);
+}
+
+static void __vcm_mmu_deactivate(struct vcm_res *res, struct vcm_phys *phys)
+{
+ struct vcm_mmu *mmu = container_of(res->vcm, struct vcm_mmu, vcm);
+ if (mmu->driver->deactivate)
+ return mmu->driver->deactivate(res, phys);
+
+ vcm_phys_walk(res->start, phys, mmu->driver->orders,
+ mmu->driver->deactivate_page, NULL, res->vcm);
+}
+
+static int vcm_mmu_bind(struct vcm_res *_res, struct vcm_phys *phys)
+{
+ struct vcm_mmu_res *res = container_of(_res, struct vcm_mmu_res, res);
+ struct vcm_mmu *mmu = container_of(_res->vcm, struct vcm_mmu, vcm);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&mmu->lock, flags);
+ if (mmu->activated) {
+ ret = __vcm_mmu_activate(_res, phys);
+ if (ret < 0)
+ goto done;
+ }
+ list_add_tail(&res->bound, &mmu->bound_res);
+ ret = 0;
+done:
+ spin_unlock_irqrestore(&mmu->lock, flags);
+
+ return ret;
+}
+
+static void vcm_mmu_unbind(struct vcm_res *_res)
+{
+ struct vcm_mmu_res *res = container_of(_res, struct vcm_mmu_res, res);
+ struct vcm_mmu *mmu = container_of(_res->vcm, struct vcm_mmu, vcm);
+ unsigned long flags;
+
+ spin_lock_irqsave(&mmu->lock, flags);
+ if (mmu->activated)
+ __vcm_mmu_deactivate(_res, _res->phys);
+ list_del_init(&res->bound);
+ spin_unlock_irqrestore(&mmu->lock, flags);
+}
+
+static void vcm_mmu_unreserve(struct vcm_res *res)
+{
+ struct vcm_mmu *mmu = container_of(res->vcm, struct vcm_mmu, vcm);
+ gen_pool_free(mmu->pool, res->start, res->res_size);
+}
+
+static int vcm_mmu_activate(struct vcm *vcm)
+{
+ struct vcm_mmu *mmu = container_of(vcm, struct vcm_mmu, vcm);
+ struct vcm_mmu_res *r, *rr;
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&mmu->lock, flags);
+
+ list_for_each_entry(r, &mmu->bound_res, bound) {
+ ret = __vcm_mmu_activate(&r->res, r->res.phys);
+ if (ret >= 0)
+ continue;
+
+ list_for_each_entry(rr, &mmu->bound_res, bound) {
+ if (r == rr)
+ goto done;
+ __vcm_mmu_deactivate(&rr->res, rr->res.phys);
+ }
+ }
+
+ mmu->activated = 1;
+ ret = 0;
+
+done:
+ spin_unlock_irqrestore(&mmu->lock, flags);
+
+ return ret;
+}
+
+static void vcm_mmu_deactivate(struct vcm *vcm)
+{
+ struct vcm_mmu *mmu = container_of(vcm, struct vcm_mmu, vcm);
+ struct vcm_mmu_res *r;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mmu->lock, flags);
+
+ mmu->activated = 0;
+
+ list_for_each_entry(r, &mmu->bound_res, bound)
+ mmu->driver->deactivate(&r->res, r->res.phys);
+
+ spin_unlock_irqrestore(&mmu->lock, flags);
+}
+
+struct vcm *__must_check vcm_mmu_init(struct vcm_mmu *mmu)
+{
+ static const struct vcm_driver driver = {
+ .cleanup = vcm_mmu_cleanup,
+ .res = vcm_mmu_res,
+ .phys = vcm_mmu_phys,
+ .bind = vcm_mmu_bind,
+ .unbind = vcm_mmu_unbind,
+ .unreserve = vcm_mmu_unreserve,
+ .activate = vcm_mmu_activate,
+ .deactivate = vcm_mmu_deactivate,
+ };
+
+ struct vcm *vcm;
+ int ret;
+
+ if (WARN_ON(!mmu || !mmu->driver ||
+ !(mmu->driver->activate ||
+ (mmu->driver->activate_page &&
+ mmu->driver->deactivate_page)) ||
+ !(mmu->driver->deactivate ||
+ mmu->driver->deactivate_page)))
+ return ERR_PTR(-EINVAL);
+
+ mmu->vcm.driver = &driver;
+ vcm = vcm_init(&mmu->vcm);
+ if (IS_ERR(vcm))
+ return vcm;
+
+ mmu->pool = gen_pool_create(PAGE_SHIFT, -1);
+ if (!mmu->pool)
+ return ERR_PTR(-ENOMEM);
+
+ ret = gen_pool_add(mmu->pool, mmu->vcm.start, mmu->vcm.size, -1);
+ if (ret) {
+ gen_pool_destroy(mmu->pool);
+ return ERR_PTR(ret);
+ }
+
+ vcm->driver = &driver;
+ INIT_LIST_HEAD(&mmu->bound_res);
+ spin_lock_init(&mmu->lock);
+
+ return &mmu->vcm;
+}
+EXPORT_SYMBOL_GPL(vcm_mmu_init);
+
+#endif
+
+
/************************ Physical memory management ************************/

#ifdef CONFIG_VCM_PHYS
--
1.6.2.5