2010-06-30 23:50:59

by Fernando Guzman Lugo

Subject: [PATCHv3 0/9] dspbridge: iommu migration

This set of patches removes the dspbridge custom mmu implementation
and uses the iommu module instead.


NOTE: for dspbridge to work properly, the patch
"0001-iovmm-add-superpages-support-to-fixed-da-address.patch"
is needed (specifically, the iommu_kmap calls depend on it).

Fernando Guzman Lugo (9):
dspbridge: replace iommu custom for opensource implementation
dspbridge: move shared memory iommu maps to tiomap3430.c
dspbridge: rename bridge_brd_mem_map/unmap to a proper name
dspbridge: remove custom mmu code from tiomap3430.c
dspbridge: add mmufault support
dspbridge: remove hw directory
dspbridge: move all iommu related code to a new file
dspbridge: add map support for big buffers
dspbridge: cleanup bridge_dev_context and cfg_hostres structures

arch/arm/plat-omap/include/dspbridge/cfgdefs.h | 1 -
arch/arm/plat-omap/include/dspbridge/dsp-mmu.h | 90 ++
arch/arm/plat-omap/include/dspbridge/dspdefs.h | 44 -
arch/arm/plat-omap/include/dspbridge/dspdeh.h | 1 -
arch/arm/plat-omap/include/dspbridge/dspioctl.h | 7 -
drivers/dsp/bridge/Makefile | 5 +-
drivers/dsp/bridge/core/_deh.h | 3 -
drivers/dsp/bridge/core/_tiomap.h | 15 +-
drivers/dsp/bridge/core/dsp-mmu.c | 229 ++++
drivers/dsp/bridge/core/io_sm.c | 185 +---
drivers/dsp/bridge/core/mmu_fault.c | 139 ---
drivers/dsp/bridge/core/mmu_fault.h | 36 -
drivers/dsp/bridge/core/tiomap3430.c | 1297 ++++-------------------
drivers/dsp/bridge/core/tiomap3430_pwr.c | 183 +---
drivers/dsp/bridge/core/tiomap_io.c | 16 +-
drivers/dsp/bridge/core/ue_deh.c | 87 +--
drivers/dsp/bridge/hw/EasiGlobal.h | 41 -
drivers/dsp/bridge/hw/GlobalTypes.h | 308 ------
drivers/dsp/bridge/hw/MMUAccInt.h | 76 --
drivers/dsp/bridge/hw/MMURegAcM.h | 226 ----
drivers/dsp/bridge/hw/hw_defs.h | 60 --
drivers/dsp/bridge/hw/hw_mmu.c | 587 ----------
drivers/dsp/bridge/hw/hw_mmu.h | 161 ---
drivers/dsp/bridge/pmgr/dev.c | 2 -
drivers/dsp/bridge/rmgr/drv.c | 4 -
drivers/dsp/bridge/rmgr/node.c | 4 +-
drivers/dsp/bridge/rmgr/proc.c | 19 +-
27 files changed, 599 insertions(+), 3227 deletions(-)
create mode 100644 arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
create mode 100644 drivers/dsp/bridge/core/dsp-mmu.c
delete mode 100644 drivers/dsp/bridge/core/mmu_fault.c
delete mode 100644 drivers/dsp/bridge/core/mmu_fault.h
delete mode 100644 drivers/dsp/bridge/hw/EasiGlobal.h
delete mode 100644 drivers/dsp/bridge/hw/GlobalTypes.h
delete mode 100644 drivers/dsp/bridge/hw/MMUAccInt.h
delete mode 100644 drivers/dsp/bridge/hw/MMURegAcM.h
delete mode 100644 drivers/dsp/bridge/hw/hw_defs.h
delete mode 100644 drivers/dsp/bridge/hw/hw_mmu.c
delete mode 100644 drivers/dsp/bridge/hw/hw_mmu.h


2010-06-30 23:50:36

by Fernando Guzman Lugo

Subject: [PATCHv3 3/9] dspbridge: rename bridge_brd_mem_map/unmap to a proper name

These functions now only map user space addresses to dsp virtual
addresses, so they have been given more meaningful names.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
arch/arm/plat-omap/include/dspbridge/dspdefs.h | 44 --------------------
drivers/dsp/bridge/core/_tiomap.h | 25 +++++++++++
drivers/dsp/bridge/core/tiomap3430.c | 52 ++++++++++--------------
drivers/dsp/bridge/pmgr/dev.c | 2 -
drivers/dsp/bridge/rmgr/proc.c | 12 +++--
5 files changed, 53 insertions(+), 82 deletions(-)

diff --git a/arch/arm/plat-omap/include/dspbridge/dspdefs.h b/arch/arm/plat-omap/include/dspbridge/dspdefs.h
index 493f62e..4f56ae6 100644
--- a/arch/arm/plat-omap/include/dspbridge/dspdefs.h
+++ b/arch/arm/plat-omap/include/dspbridge/dspdefs.h
@@ -162,48 +162,6 @@ typedef int(*fxn_brd_memwrite) (struct bridge_dev_context
u32 ulMemType);

/*
- * ======== bridge_brd_mem_map ========
- * Purpose:
- * Map a MPU memory region to a DSP/IVA memory space
- * Parameters:
- * hDevContext: Handle to Bridge driver defined device info.
- * ul_mpu_addr: MPU memory region start address.
- * ulVirtAddr: DSP/IVA memory region u8 address.
- * ul_num_bytes: Number of bytes to map.
- * map_attrs: Mapping attributes (e.g. endianness).
- * Returns:
- * 0: Success.
- * -EPERM: Other, unspecified error.
- * Requires:
- * hDevContext != NULL;
- * Ensures:
- */
-typedef int(*fxn_brd_memmap) (struct bridge_dev_context
- * hDevContext, u32 ul_mpu_addr,
- u32 ulVirtAddr, u32 ul_num_bytes,
- u32 ulMapAttrs,
- struct page **mapped_pages);
-
-/*
- * ======== bridge_brd_mem_un_map ========
- * Purpose:
- * UnMap an MPU memory region from DSP/IVA memory space
- * Parameters:
- * hDevContext: Handle to Bridge driver defined device info.
- * ulVirtAddr: DSP/IVA memory region u8 address.
- * ul_num_bytes: Number of bytes to unmap.
- * Returns:
- * 0: Success.
- * -EPERM: Other, unspecified error.
- * Requires:
- * hDevContext != NULL;
- * Ensures:
- */
-typedef int(*fxn_brd_memunmap) (struct bridge_dev_context
- * hDevContext,
- u32 ulVirtAddr, u32 ul_num_bytes);
-
-/*
* ======== bridge_brd_stop ========
* Purpose:
* Bring board to the BRD_STOPPED state.
@@ -1061,8 +1019,6 @@ struct bridge_drv_interface {
fxn_brd_setstate pfn_brd_set_state; /* Sets the Board State */
fxn_brd_memcopy pfn_brd_mem_copy; /* Copies DSP Memory */
fxn_brd_memwrite pfn_brd_mem_write; /* Write DSP Memory w/o halt */
- fxn_brd_memmap pfn_brd_mem_map; /* Maps MPU mem to DSP mem */
- fxn_brd_memunmap pfn_brd_mem_un_map; /* Unmaps MPU mem to DSP mem */
fxn_chnl_create pfn_chnl_create; /* Create channel manager. */
fxn_chnl_destroy pfn_chnl_destroy; /* Destroy channel manager. */
fxn_chnl_open pfn_chnl_open; /* Create a new channel. */
diff --git a/drivers/dsp/bridge/core/_tiomap.h b/drivers/dsp/bridge/core/_tiomap.h
index 6a822c6..4aa2358 100644
--- a/drivers/dsp/bridge/core/_tiomap.h
+++ b/drivers/dsp/bridge/core/_tiomap.h
@@ -396,4 +396,29 @@ static inline void dsp_iotlb_init(struct iotlb_entry *e, u32 da, u32 pa,
e->mixed = 0;
}

+/**
+ * user_to_dsp_map() - maps user to dsp virtual address
+ * @mmu: Pointer to iommu handle.
+ * @uva: Virtual user space address.
+ * @da DSP address
+ * @size Buffer size to map.
+ * @usr_pgs struct page array pointer where the user pages will be stored
+ *
+ * This function maps a user space buffer into DSP virtual address.
+ *
+ */
+
+int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
+ struct page **usr_pgs);
+
+/**
+ * user_to_dsp_unmap() - unmaps DSP virtual buffer.
+ * @mmu: Pointer to iommu handle.
+ * @da DSP address
+ *
+ * This function unmaps a user space buffer into DSP virtual address.
+ *
+ */
+int user_to_dsp_unmap(struct iommu *mmu, u32 da);
+
#endif /* _TIOMAP_ */
diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index 89d4936..88f5167 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -98,12 +98,6 @@ static int bridge_brd_mem_copy(struct bridge_dev_context *hDevContext,
static int bridge_brd_mem_write(struct bridge_dev_context *dev_context,
IN u8 *pbHostBuf, u32 dwDSPAddr,
u32 ul_num_bytes, u32 ulMemType);
-static int bridge_brd_mem_map(struct bridge_dev_context *hDevContext,
- u32 ul_mpu_addr, u32 ulVirtAddr,
- u32 ul_num_bytes, u32 ul_map_attr,
- struct page **mapped_pages);
-static int bridge_brd_mem_un_map(struct bridge_dev_context *hDevContext,
- u32 ulVirtAddr, u32 ul_num_bytes);
static int bridge_dev_create(OUT struct bridge_dev_context
**ppDevContext,
struct dev_object *hdev_obj,
@@ -181,8 +175,6 @@ static struct bridge_drv_interface drv_interface_fxns = {
bridge_brd_set_state,
bridge_brd_mem_copy,
bridge_brd_mem_write,
- bridge_brd_mem_map,
- bridge_brd_mem_un_map,
/* The following CHNL functions are provided by chnl_io.lib: */
bridge_chnl_create,
bridge_chnl_destroy,
@@ -1221,22 +1213,24 @@ static int bridge_brd_mem_write(struct bridge_dev_context *hDevContext,
return status;
}

-/*
- * ======== bridge_brd_mem_map ========
- * This function maps MPU buffer to the DSP address space. It performs
- * linear to physical address translation if required. It translates each
- * page since linear addresses can be physically non-contiguous
- * All address & size arguments are assumed to be page aligned (in proc.c)
+/**
+ * user_to_dsp_map() - maps user to dsp virtual address
+ * @mmu: Pointer to iommu handle.
+ * @uva: Virtual user space address.
+ * @da DSP address
+ * @size Buffer size to map.
+ * @usr_pgs struct page array pointer where the user pages will be stored
+ *
+ * This function maps a user space buffer into DSP virtual address.
*
- * TODO: Disable MMU while updating the page tables (but that'll stall DSP)
*/
-static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctx,
- u32 uva, u32 da, u32 size, u32 attr,
- struct page **usr_pgs)
+
+int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
+ struct page **usr_pgs)
+
{
int res, w;
unsigned pages, i;
- struct iommu *mmu = dev_ctx->dsp_mmu;
struct vm_area_struct *vma;
struct mm_struct *mm = current->mm;
struct sg_table *sgt;
@@ -1293,25 +1287,21 @@ err_sg:
return res;
}

-/*
- * ======== bridge_brd_mem_un_map ========
- * Invalidate the PTEs for the DSP VA block to be unmapped.
+/**
+ * user_to_dsp_unmap() - unmaps DSP virtual buffer.
+ * @mmu: Pointer to iommu handle.
+ * @da DSP address
+ *
+ * This function unmaps a user space buffer into DSP virtual address.
*
- * PTEs of a mapped memory block are contiguous in any page table
- * So, instead of looking up the PTE address for every 4K block,
- * we clear consecutive PTEs until we unmap all the bytes
*/
-static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctx,
- u32 da, u32 size)
+int user_to_dsp_unmap(struct iommu *mmu, u32 da)
{
unsigned i;
struct sg_table *sgt;
struct scatterlist *sg;

- if (!size)
- return -EINVAL;
-
- sgt = iommu_vunmap(dev_ctx->dsp_mmu, da);
+ sgt = iommu_vunmap(mmu, da);
if (!sgt)
return -EFAULT;

diff --git a/drivers/dsp/bridge/pmgr/dev.c b/drivers/dsp/bridge/pmgr/dev.c
index 50a5d97..39c1faf 100644
--- a/drivers/dsp/bridge/pmgr/dev.c
+++ b/drivers/dsp/bridge/pmgr/dev.c
@@ -1101,8 +1101,6 @@ static void store_interface_fxns(struct bridge_drv_interface *drv_fxns,
STORE_FXN(fxn_brd_setstate, pfn_brd_set_state);
STORE_FXN(fxn_brd_memcopy, pfn_brd_mem_copy);
STORE_FXN(fxn_brd_memwrite, pfn_brd_mem_write);
- STORE_FXN(fxn_brd_memmap, pfn_brd_mem_map);
- STORE_FXN(fxn_brd_memunmap, pfn_brd_mem_un_map);
STORE_FXN(fxn_chnl_create, pfn_chnl_create);
STORE_FXN(fxn_chnl_destroy, pfn_chnl_destroy);
STORE_FXN(fxn_chnl_open, pfn_chnl_open);
diff --git a/drivers/dsp/bridge/rmgr/proc.c b/drivers/dsp/bridge/rmgr/proc.c
index c5a8b6b..299bef3 100644
--- a/drivers/dsp/bridge/rmgr/proc.c
+++ b/drivers/dsp/bridge/rmgr/proc.c
@@ -53,6 +53,7 @@
#include <dspbridge/msg.h>
#include <dspbridge/dspioctl.h>
#include <dspbridge/drv.h>
+#include <_tiomap.h>

/* ----------------------------------- This */
#include <dspbridge/proc.h>
@@ -1384,9 +1385,10 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32 ul_size,
if (!map_obj)
status = -ENOMEM;
else
- status = (*p_proc_object->intf_fxns->pfn_brd_mem_map)
- (p_proc_object->hbridge_context, pa_align, va_align,
- size_align, ul_map_attr, map_obj->pages);
+ status = user_to_dsp_map(
+ p_proc_object->hbridge_context->dsp_mmu,
+ pa_align, va_align, size_align,
+ map_obj->pages);
}
if (DSP_SUCCEEDED(status)) {
/* Mapped address = MSB of VA | LSB of PA */
@@ -1714,8 +1716,8 @@ int proc_un_map(void *hprocessor, void *map_addr,
status = dmm_un_map_memory(dmm_mgr, (u32) va_align, &size_align);
/* Remove mapping from the page tables. */
if (DSP_SUCCEEDED(status)) {
- status = (*p_proc_object->intf_fxns->pfn_brd_mem_un_map)
- (p_proc_object->hbridge_context, va_align, size_align);
+ status = user_to_dsp_unmap(
+ p_proc_object->hbridge_context->dsp_mmu, va_align);
}

mutex_unlock(&proc_lock);
--
1.7.0.4

2010-06-30 23:50:35

by Fernando Guzman Lugo

Subject: [PATCHv3 1/9] dspbridge: replace iommu custom for opensource implementation

This patch replaces the calls to the custom dsp mmu implementation
with the ones from the iommu module.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
drivers/dsp/bridge/core/_tiomap.h | 16 +
drivers/dsp/bridge/core/io_sm.c | 114 ++------
drivers/dsp/bridge/core/tiomap3430.c | 501 +++++-----------------------------
drivers/dsp/bridge/core/ue_deh.c | 10 -
4 files changed, 118 insertions(+), 523 deletions(-)

diff --git a/drivers/dsp/bridge/core/_tiomap.h b/drivers/dsp/bridge/core/_tiomap.h
index bf0164e..d13677a 100644
--- a/drivers/dsp/bridge/core/_tiomap.h
+++ b/drivers/dsp/bridge/core/_tiomap.h
@@ -23,6 +23,8 @@
#include <plat/clockdomain.h>
#include <mach-omap2/prm-regbits-34xx.h>
#include <mach-omap2/cm-regbits-34xx.h>
+#include <plat/iommu.h>
+#include <plat/iovmm.h>
#include <dspbridge/devdefs.h>
#include <hw_defs.h>
#include <dspbridge/dspioctl.h> /* for bridge_ioctl_extproc defn */
@@ -330,6 +332,7 @@ struct bridge_dev_context {
u32 dw_internal_size; /* Internal memory size */

struct omap_mbox *mbox; /* Mail box handle */
+ struct iommu *dsp_mmu; /* iommu for iva2 handler */

struct cfg_hostres *resources; /* Host Resources */

@@ -374,4 +377,17 @@ extern s32 dsp_debug;
*/
int sm_interrupt_dsp(struct bridge_dev_context *dev_context, u16 mb_val);

+static inline void dsp_iotlb_init(struct iotlb_entry *e, u32 da, u32 pa,
+ u32 pgsz)
+{
+ e->da = da;
+ e->pa = pa;
+ e->valid = 1;
+ e->prsvd = 1;
+ e->pgsz = pgsz & MMU_CAM_PGSZ_MASK;
+ e->endian = MMU_RAM_ENDIAN_LITTLE;
+ e->elsz = MMU_RAM_ELSZ_32;
+ e->mixed = 0;
+}
+
#endif /* _TIOMAP_ */
diff --git a/drivers/dsp/bridge/core/io_sm.c b/drivers/dsp/bridge/core/io_sm.c
index 7fb840d..1f47f8b 100644
--- a/drivers/dsp/bridge/core/io_sm.c
+++ b/drivers/dsp/bridge/core/io_sm.c
@@ -290,6 +290,8 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
struct cod_manager *cod_man;
struct chnl_mgr *hchnl_mgr;
struct msg_mgr *hmsg_mgr;
+ struct iommu *mmu;
+ struct iotlb_entry e;
u32 ul_shm_base;
u32 ul_shm_base_offset;
u32 ul_shm_limit;
@@ -312,7 +314,6 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
struct bridge_ioctl_extproc ae_proc[BRDIOCTL_NUMOFMMUTLB];
struct cfg_hostres *host_res;
struct bridge_dev_context *pbridge_context;
- u32 map_attrs;
u32 shm0_end;
u32 ul_dyn_ext_base;
u32 ul_seg1_size = 0;
@@ -336,6 +337,21 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
status = -EFAULT;
goto func_end;
}
+
+ mmu = pbridge_context->dsp_mmu;
+
+ if (mmu)
+ iommu_put(mmu);
+ mmu = iommu_get("iva2");
+
+ if (IS_ERR_OR_NULL(mmu)) {
+ pr_err("Error in iommu_get\n");
+ pbridge_context->dsp_mmu = NULL;
+ status = -EFAULT;
+ goto func_end;
+ }
+ pbridge_context->dsp_mmu = mmu;
+
status = dev_get_cod_mgr(hio_mgr->hdev_obj, &cod_man);
if (!cod_man) {
status = -EFAULT;
@@ -477,55 +493,16 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
gpp_va_curr = ul_gpp_va;
num_bytes = ul_seg1_size;

- /*
- * Try to fit into TLB entries. If not possible, push them to page
- * tables. It is quite possible that if sections are not on
- * bigger page boundary, we may end up making several small pages.
- * So, push them onto page tables, if that is the case.
- */
- map_attrs = 0x00000000;
- map_attrs = DSP_MAPLITTLEENDIAN;
- map_attrs |= DSP_MAPPHYSICALADDR;
- map_attrs |= DSP_MAPELEMSIZE32;
- map_attrs |= DSP_MAPDONOTLOCK;
-
- while (num_bytes) {
- /*
- * To find the max. page size with which both PA & VA are
- * aligned.
- */
- all_bits = pa_curr | va_curr;
- dev_dbg(bridge, "all_bits %x, pa_curr %x, va_curr %x, "
- "num_bytes %x\n", all_bits, pa_curr, va_curr,
- num_bytes);
- for (i = 0; i < 4; i++) {
- if ((num_bytes >= page_size[i]) && ((all_bits &
- (page_size[i] -
- 1)) == 0)) {
- status =
- hio_mgr->intf_fxns->
- pfn_brd_mem_map(hio_mgr->hbridge_context,
- pa_curr, va_curr,
- page_size[i], map_attrs,
- NULL);
- if (DSP_FAILED(status))
- goto func_end;
- pa_curr += page_size[i];
- va_curr += page_size[i];
- gpp_va_curr += page_size[i];
- num_bytes -= page_size[i];
- /*
- * Don't try smaller sizes. Hopefully we have
- * reached an address aligned to a bigger page
- * size.
- */
- break;
- }
- }
+ va_curr = iommu_kmap(mmu, va_curr, pa_curr, num_bytes,
+ IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);
+ if (IS_ERR_VALUE(va_curr)) {
+ status = (int)va_curr;
+ goto func_end;
}
- pa_curr += ul_pad_size;
- va_curr += ul_pad_size;
- gpp_va_curr += ul_pad_size;
+
+ pa_curr += ul_pad_size + num_bytes;
+ va_curr += ul_pad_size + num_bytes;
+ gpp_va_curr += ul_pad_size + num_bytes;

/* Configure the TLB entries for the next cacheable segment */
num_bytes = ul_seg_size;
@@ -567,22 +544,6 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
ae_proc[ndx].ul_dsp_va *
hio_mgr->word_size, page_size[i]);
ndx++;
- } else {
- status =
- hio_mgr->intf_fxns->
- pfn_brd_mem_map(hio_mgr->hbridge_context,
- pa_curr, va_curr,
- page_size[i], map_attrs,
- NULL);
- dev_dbg(bridge,
- "shm MMU PTE entry PA %x"
- " VA %x DSP_VA %x Size %x\n",
- ae_proc[ndx].ul_gpp_pa,
- ae_proc[ndx].ul_gpp_va,
- ae_proc[ndx].ul_dsp_va *
- hio_mgr->word_size, page_size[i]);
- if (DSP_FAILED(status))
- goto func_end;
}
pa_curr += page_size[i];
va_curr += page_size[i];
@@ -635,35 +596,20 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
"DSP_VA 0x%x\n", ae_proc[ndx].ul_gpp_pa,
ae_proc[ndx].ul_dsp_va);
ndx++;
- } else {
- status = hio_mgr->intf_fxns->pfn_brd_mem_map
- (hio_mgr->hbridge_context,
- hio_mgr->ext_proc_info.ty_tlb[i].
- ul_gpp_phys,
- hio_mgr->ext_proc_info.ty_tlb[i].
- ul_dsp_virt, 0x100000, map_attrs,
- NULL);
}
}
if (DSP_FAILED(status))
goto func_end;
}

- map_attrs = 0x00000000;
- map_attrs = DSP_MAPLITTLEENDIAN;
- map_attrs |= DSP_MAPPHYSICALADDR;
- map_attrs |= DSP_MAPELEMSIZE32;
- map_attrs |= DSP_MAPDONOTLOCK;
+ dsp_iotlb_init(&e, 0, 0, IOVMF_PGSZ_4K);

/* Map the L4 peripherals */
i = 0;
while (l4_peripheral_table[i].phys_addr) {
- status = hio_mgr->intf_fxns->pfn_brd_mem_map
- (hio_mgr->hbridge_context, l4_peripheral_table[i].phys_addr,
- l4_peripheral_table[i].dsp_virt_addr, HW_PAGE_SIZE4KB,
- map_attrs, NULL);
- if (DSP_FAILED(status))
- goto func_end;
+ e.da = l4_peripheral_table[i].dsp_virt_addr;
+ e.pa = l4_peripheral_table[i].phys_addr;
+ iopgtable_store_entry(mmu, &e);
i++;
}

diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index 35c6678..e750767 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -373,6 +373,8 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
{
int status = 0;
struct bridge_dev_context *dev_context = hDevContext;
+ struct iommu *mmu;
+ struct iotlb_entry en;
u32 dw_sync_addr = 0;
u32 ul_shm_base; /* Gpp Phys SM base addr(byte) */
u32 ul_shm_base_virt; /* Dsp Virt SM base addr */
@@ -392,6 +394,8 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
struct dspbridge_platform_data *pdata =
omap_dspbridge_dev->dev.platform_data;

+ mmu = dev_context->dsp_mmu;
+
/* The device context contains all the mmu setup info from when the
* last dsp base image was loaded. The first entry is always
* SHMMEM base. */
@@ -442,30 +446,10 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
}
}
if (DSP_SUCCEEDED(status)) {
- /* Reset and Unreset the RST2, so that BOOTADDR is copied to
- * IVA2 SYSC register */
- (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2,
- OMAP3430_RST2_IVA2, OMAP3430_IVA2_MOD, RM_RSTCTRL);
- udelay(100);
- (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2, 0,
- OMAP3430_IVA2_MOD, RM_RSTCTRL);
- udelay(100);
-
- /* Disbale the DSP MMU */
- hw_mmu_disable(resources->dw_dmmu_base);
- /* Disable TWL */
- hw_mmu_twl_disable(resources->dw_dmmu_base);
-
/* Only make TLB entry if both addresses are non-zero */
for (entry_ndx = 0; entry_ndx < BRDIOCTL_NUMOFMMUTLB;
entry_ndx++) {
struct bridge_ioctl_extproc *e = &dev_context->atlb_entry[entry_ndx];
- struct hw_mmu_map_attrs_t map_attrs = {
- .endianism = e->endianism,
- .element_size = e->elem_size,
- .mixed_size = e->mixed_mode,
- };
-
if (!e->ul_gpp_pa || !e->ul_dsp_va)
continue;

@@ -476,13 +460,9 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
e->ul_dsp_va,
e->ul_size);

- hw_mmu_tlb_add(dev_context->dw_dsp_mmu_base,
- e->ul_gpp_pa,
- e->ul_dsp_va,
- e->ul_size,
- itmp_entry_ndx,
- &map_attrs, 1, 1);
-
+ dsp_iotlb_init(&en, e->ul_dsp_va, e->ul_gpp_pa,
+ bytes_to_iopgsz(e->ul_size));
+ iopgtable_store_entry(mmu, &en);
itmp_entry_ndx++;
}
}
@@ -490,19 +470,6 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
/* Lock the above TLB entries and get the BIOS and load monitor timer
* information */
if (DSP_SUCCEEDED(status)) {
- hw_mmu_num_locked_set(resources->dw_dmmu_base, itmp_entry_ndx);
- hw_mmu_victim_num_set(resources->dw_dmmu_base, itmp_entry_ndx);
- hw_mmu_ttb_set(resources->dw_dmmu_base,
- dev_context->pt_attrs->l1_base_pa);
- hw_mmu_twl_enable(resources->dw_dmmu_base);
- /* Enable the SmartIdle and AutoIdle bit for MMU_SYSCONFIG */
-
- temp = __raw_readl((resources->dw_dmmu_base) + 0x10);
- temp = (temp & 0xFFFFFFEF) | 0x11;
- __raw_writel(temp, (resources->dw_dmmu_base) + 0x10);
-
- /* Let the DSP MMU run */
- hw_mmu_enable(resources->dw_dmmu_base);

/* Enable the BIOS clock */
(void)dev_get_symbol(dev_context->hdev_obj,
@@ -510,9 +477,6 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
(void)dev_get_symbol(dev_context->hdev_obj,
BRIDGEINIT_LOADMON_GPTIMER,
&ul_load_monitor_timer);
- }
-
- if (DSP_SUCCEEDED(status)) {
if (ul_load_monitor_timer != 0xFFFF) {
clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
ul_load_monitor_timer;
@@ -593,9 +557,6 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,

/* Let DSP go */
dev_dbg(bridge, "%s Unreset\n", __func__);
- /* Enable DSP MMU Interrupts */
- hw_mmu_event_enable(resources->dw_dmmu_base,
- HW_MMU_ALL_INTERRUPTS);
/* release the RST1, DSP starts executing now .. */
(*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2, 0,
OMAP3430_IVA2_MOD, RM_RSTCTRL);
@@ -754,6 +715,9 @@ static int bridge_brd_delete(struct bridge_dev_context *hDevContext)
omap_mbox_put(dev_context->mbox);
dev_context->mbox = NULL;
}
+
+ if (dev_context->dsp_mmu)
+ dev_context->dsp_mmu = (iommu_put(dev_context->dsp_mmu), NULL);
/* Reset IVA2 clocks*/
(*pdata->dsp_prm_write)(OMAP3430_RST1_IVA2 | OMAP3430_RST2_IVA2 |
OMAP3430_RST3_IVA2, OMAP3430_IVA2_MOD, RM_RSTCTRL);
@@ -1199,219 +1163,67 @@ static int bridge_brd_mem_write(struct bridge_dev_context *hDevContext,
*
* TODO: Disable MMU while updating the page tables (but that'll stall DSP)
*/
-static int bridge_brd_mem_map(struct bridge_dev_context *hDevContext,
- u32 ul_mpu_addr, u32 ulVirtAddr,
- u32 ul_num_bytes, u32 ul_map_attr,
- struct page **mapped_pages)
+static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctx,
+ u32 uva, u32 da, u32 size, u32 attr,
+ struct page **usr_pgs)
{
- u32 attrs;
- int status = 0;
- struct bridge_dev_context *dev_context = hDevContext;
- struct hw_mmu_map_attrs_t hw_attrs;
+ int res, w;
+ unsigned pages, i;
+ struct iommu *mmu = dev_ctx->dsp_mmu;
struct vm_area_struct *vma;
struct mm_struct *mm = current->mm;
- u32 write = 0;
- u32 num_usr_pgs = 0;
- struct page *mapped_page, *pg;
- s32 pg_num;
- u32 va = ulVirtAddr;
- struct task_struct *curr_task = current;
- u32 pg_i = 0;
- u32 mpu_addr, pa;
-
- dev_dbg(bridge,
- "%s hDevCtxt %p, pa %x, va %x, size %x, ul_map_attr %x\n",
- __func__, hDevContext, ul_mpu_addr, ulVirtAddr, ul_num_bytes,
- ul_map_attr);
- if (ul_num_bytes == 0)
- return -EINVAL;
+ struct sg_table *sgt;
+ struct scatterlist *sg;

- if (ul_map_attr & DSP_MAP_DIR_MASK) {
- attrs = ul_map_attr;
- } else {
- /* Assign default attributes */
- attrs = ul_map_attr | (DSP_MAPVIRTUALADDR | DSP_MAPELEMSIZE16);
- }
- /* Take mapping properties */
- if (attrs & DSP_MAPBIGENDIAN)
- hw_attrs.endianism = HW_BIG_ENDIAN;
- else
- hw_attrs.endianism = HW_LITTLE_ENDIAN;
-
- hw_attrs.mixed_size = (enum hw_mmu_mixed_size_t)
- ((attrs & DSP_MAPMIXEDELEMSIZE) >> 2);
- /* Ignore element_size if mixed_size is enabled */
- if (hw_attrs.mixed_size == 0) {
- if (attrs & DSP_MAPELEMSIZE8) {
- /* Size is 8 bit */
- hw_attrs.element_size = HW_ELEM_SIZE8BIT;
- } else if (attrs & DSP_MAPELEMSIZE16) {
- /* Size is 16 bit */
- hw_attrs.element_size = HW_ELEM_SIZE16BIT;
- } else if (attrs & DSP_MAPELEMSIZE32) {
- /* Size is 32 bit */
- hw_attrs.element_size = HW_ELEM_SIZE32BIT;
- } else if (attrs & DSP_MAPELEMSIZE64) {
- /* Size is 64 bit */
- hw_attrs.element_size = HW_ELEM_SIZE64BIT;
- } else {
- /*
- * Mixedsize isn't enabled, so size can't be
- * zero here
- */
- return -EINVAL;
- }
- }
- if (attrs & DSP_MAPDONOTLOCK)
- hw_attrs.donotlockmpupage = 1;
- else
- hw_attrs.donotlockmpupage = 0;
+ if (!size || !usr_pgs)
+ return -EINVAL;

- if (attrs & DSP_MAPVMALLOCADDR) {
- return mem_map_vmalloc(hDevContext, ul_mpu_addr, ulVirtAddr,
- ul_num_bytes, &hw_attrs);
- }
- /*
- * Do OS-specific user-va to pa translation.
- * Combine physically contiguous regions to reduce TLBs.
- * Pass the translated pa to pte_update.
- */
- if ((attrs & DSP_MAPPHYSICALADDR)) {
- status = pte_update(dev_context, ul_mpu_addr, ulVirtAddr,
- ul_num_bytes, &hw_attrs);
- goto func_cont;
- }
+ pages = size / PG_SIZE4K;

- /*
- * Important Note: ul_mpu_addr is mapped from user application process
- * to current process - it must lie completely within the current
- * virtual memory address space in order to be of use to us here!
- */
down_read(&mm->mmap_sem);
- vma = find_vma(mm, ul_mpu_addr);
- if (vma)
- dev_dbg(bridge,
- "VMAfor UserBuf: ul_mpu_addr=%x, ul_num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
- ul_num_bytes, vma->vm_start, vma->vm_end,
- vma->vm_flags);
-
- /*
- * It is observed that under some circumstances, the user buffer is
- * spread across several VMAs. So loop through and check if the entire
- * user buffer is covered
- */
- while ((vma) && (ul_mpu_addr + ul_num_bytes > vma->vm_end)) {
- /* jump to the next VMA region */
+ vma = find_vma(mm, uva);
+ while (vma && (uva + size > vma->vm_end))
vma = find_vma(mm, vma->vm_end + 1);
- dev_dbg(bridge,
- "VMA for UserBuf ul_mpu_addr=%x ul_num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
- ul_num_bytes, vma->vm_start, vma->vm_end,
- vma->vm_flags);
- }
+
if (!vma) {
pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
- __func__, ul_mpu_addr, ul_num_bytes);
- status = -EINVAL;
+ __func__, uva, size);
up_read(&mm->mmap_sem);
- goto func_cont;
+ return -EINVAL;
}
+ if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
+ w = 1;
+ res = get_user_pages(current, mm, uva, pages, w, 1, usr_pgs, NULL);
+ up_read(&mm->mmap_sem);
+ if (res < 0)
+ return res;

- if (vma->vm_flags & VM_IO) {
- num_usr_pgs = ul_num_bytes / PG_SIZE4K;
- mpu_addr = ul_mpu_addr;
-
- /* Get the physical addresses for user buffer */
- for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pa = user_va2_pa(mm, mpu_addr);
- if (!pa) {
- status = -EPERM;
- pr_err("DSPBRIDGE: VM_IO mapping physical"
- "address is invalid\n");
- break;
- }
- if (pfn_valid(__phys_to_pfn(pa))) {
- pg = PHYS_TO_PAGE(pa);
- get_page(pg);
- if (page_count(pg) < 1) {
- pr_err("Bad page in VM_IO buffer\n");
- bad_page_dump(pa, pg);
- }
- }
- status = pte_set(dev_context->pt_attrs, pa,
- va, HW_PAGE_SIZE4KB, &hw_attrs);
- if (DSP_FAILED(status))
- break;
+ sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);

- va += HW_PAGE_SIZE4KB;
- mpu_addr += HW_PAGE_SIZE4KB;
- pa += HW_PAGE_SIZE4KB;
- }
- } else {
- num_usr_pgs = ul_num_bytes / PG_SIZE4K;
- if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
- write = 1;
-
- for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pg_num = get_user_pages(curr_task, mm, ul_mpu_addr, 1,
- write, 1, &mapped_page, NULL);
- if (pg_num > 0) {
- if (page_count(mapped_page) < 1) {
- pr_err("Bad page count after doing"
- "get_user_pages on"
- "user buffer\n");
- bad_page_dump(page_to_phys(mapped_page),
- mapped_page);
- }
- status = pte_set(dev_context->pt_attrs,
- page_to_phys(mapped_page), va,
- HW_PAGE_SIZE4KB, &hw_attrs);
- if (DSP_FAILED(status))
- break;
-
- if (mapped_pages)
- mapped_pages[pg_i] = mapped_page;
-
- va += HW_PAGE_SIZE4KB;
- ul_mpu_addr += HW_PAGE_SIZE4KB;
- } else {
- pr_err("DSPBRIDGE: get_user_pages FAILED,"
- "MPU addr = 0x%x,"
- "vma->vm_flags = 0x%lx,"
- "get_user_pages Err"
- "Value = %d, Buffer"
- "size=0x%x\n", ul_mpu_addr,
- vma->vm_flags, pg_num, ul_num_bytes);
- status = -EPERM;
- break;
- }
- }
- }
- up_read(&mm->mmap_sem);
-func_cont:
- if (DSP_SUCCEEDED(status)) {
- status = 0;
- } else {
- /*
- * Roll out the mapped pages incase it failed in middle of
- * mapping
- */
- if (pg_i) {
- bridge_brd_mem_un_map(dev_context, ulVirtAddr,
- (pg_i * PG_SIZE4K));
- }
- status = -EPERM;
+ if (!sgt)
+ return -ENOMEM;
+
+ res = sg_alloc_table(sgt, pages, GFP_KERNEL);
+
+ if (res < 0)
+ goto err_sg;
+
+ for_each_sg(sgt->sgl, sg, sgt->nents, i)
+ sg_set_page(sg, usr_pgs[i], PAGE_SIZE, 0);
+
+ da = iommu_vmap(mmu, da, sgt, IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);
+
+ if (IS_ERR_VALUE(da)) {
+ res = (int)da;
+ goto err_map;
}
- /*
- * In any case, flush the TLB
- * This is called from here instead from pte_update to avoid unnecessary
- * repetition while mapping non-contiguous physical regions of a virtual
- * region
- */
- flush_all(dev_context);
- dev_dbg(bridge, "%s status %x\n", __func__, status);
- return status;
+ return 0;
+
+err_map:
+ sg_free_table(sgt);
+err_sg:
+ kfree(sgt);
+ return res;
}

/*
@@ -1422,196 +1234,27 @@ func_cont:
* So, instead of looking up the PTE address for every 4K block,
* we clear consecutive PTEs until we unmap all the bytes
*/
-static int bridge_brd_mem_un_map(struct bridge_dev_context *hDevContext,
- u32 ulVirtAddr, u32 ul_num_bytes)
+static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctx,
+ u32 da, u32 size)
{
- u32 l1_base_va;
- u32 l2_base_va;
- u32 l2_base_pa;
- u32 l2_page_num;
- u32 pte_val;
- u32 pte_size;
- u32 pte_count;
- u32 pte_addr_l1;
- u32 pte_addr_l2 = 0;
- u32 rem_bytes;
- u32 rem_bytes_l2;
- u32 va_curr;
- struct page *pg = NULL;
- int status = 0;
- struct bridge_dev_context *dev_context = hDevContext;
- struct pg_table_attrs *pt = dev_context->pt_attrs;
- u32 temp;
- u32 paddr;
- u32 numof4k_pages = 0;
-
- va_curr = ulVirtAddr;
- rem_bytes = ul_num_bytes;
- rem_bytes_l2 = 0;
- l1_base_va = pt->l1_base_va;
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va_curr);
- dev_dbg(bridge, "%s hDevContext %p, va %x, NumBytes %x l1_base_va %x, "
- "pte_addr_l1 %x\n", __func__, hDevContext, ulVirtAddr,
- ul_num_bytes, l1_base_va, pte_addr_l1);
+ unsigned i;
+ struct sg_table *sgt;
+ struct scatterlist *sg;

- while (rem_bytes && (DSP_SUCCEEDED(status))) {
- u32 va_curr_orig = va_curr;
- /* Find whether the L1 PTE points to a valid L2 PT */
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va_curr);
- pte_val = *(u32 *) pte_addr_l1;
- pte_size = hw_mmu_pte_size_l1(pte_val);
-
- if (pte_size != HW_MMU_COARSE_PAGE_SIZE)
- goto skip_coarse_page;
-
- /*
- * Get the L2 PA from the L1 PTE, and find
- * corresponding L2 VA
- */
- l2_base_pa = hw_mmu_pte_coarse_l1(pte_val);
- l2_base_va = l2_base_pa - pt->l2_base_pa + pt->l2_base_va;
- l2_page_num =
- (l2_base_pa - pt->l2_base_pa) / HW_MMU_COARSE_PAGE_SIZE;
- /*
- * Find the L2 PTE address from which we will start
- * clearing, the number of PTEs to be cleared on this
- * page, and the size of VA space that needs to be
- * cleared on this L2 page
- */
- pte_addr_l2 = hw_mmu_pte_addr_l2(l2_base_va, va_curr);
- pte_count = pte_addr_l2 & (HW_MMU_COARSE_PAGE_SIZE - 1);
- pte_count = (HW_MMU_COARSE_PAGE_SIZE - pte_count) / sizeof(u32);
- if (rem_bytes < (pte_count * PG_SIZE4K))
- pte_count = rem_bytes / PG_SIZE4K;
- rem_bytes_l2 = pte_count * PG_SIZE4K;
-
- /*
- * Unmap the VA space on this L2 PT. A quicker way
- * would be to clear pte_count entries starting from
- * pte_addr_l2. However, below code checks that we don't
- * clear invalid entries or less than 64KB for a 64KB
- * entry. Similar checking is done for L1 PTEs too
- * below
- */
- while (rem_bytes_l2 && (DSP_SUCCEEDED(status))) {
- pte_val = *(u32 *) pte_addr_l2;
- pte_size = hw_mmu_pte_size_l2(pte_val);
- /* va_curr aligned to pte_size? */
- if (pte_size == 0 || rem_bytes_l2 < pte_size ||
- va_curr & (pte_size - 1)) {
- status = -EPERM;
- break;
- }
+ if (!size)
+ return -EINVAL;

- /* Collect Physical addresses from VA */
- paddr = (pte_val & ~(pte_size - 1));
- if (pte_size == HW_PAGE_SIZE64KB)
- numof4k_pages = 16;
- else
- numof4k_pages = 1;
- temp = 0;
- while (temp++ < numof4k_pages) {
- if (!pfn_valid(__phys_to_pfn(paddr))) {
- paddr += HW_PAGE_SIZE4KB;
- continue;
- }
- pg = PHYS_TO_PAGE(paddr);
- if (page_count(pg) < 1) {
- pr_info("DSPBRIDGE: UNMAP function: "
- "COUNT 0 FOR PA 0x%x, size = "
- "0x%x\n", paddr, ul_num_bytes);
- bad_page_dump(paddr, pg);
- } else {
- SetPageDirty(pg);
- page_cache_release(pg);
- }
- paddr += HW_PAGE_SIZE4KB;
- }
- if (hw_mmu_pte_clear(pte_addr_l2, va_curr, pte_size)
- == RET_FAIL) {
- status = -EPERM;
- goto EXIT_LOOP;
- }
+ sgt = iommu_vunmap(dev_ctx->dsp_mmu, da);
+ if (!sgt)
+ return -EFAULT;

- status = 0;
- rem_bytes_l2 -= pte_size;
- va_curr += pte_size;
- pte_addr_l2 += (pte_size >> 12) * sizeof(u32);
- }
- spin_lock(&pt->pg_lock);
- if (rem_bytes_l2 == 0) {
- pt->pg_info[l2_page_num].num_entries -= pte_count;
- if (pt->pg_info[l2_page_num].num_entries == 0) {
- /*
- * Clear the L1 PTE pointing to the L2 PT
- */
- if (hw_mmu_pte_clear(l1_base_va, va_curr_orig,
- HW_MMU_COARSE_PAGE_SIZE) ==
- RET_OK)
- status = 0;
- else {
- status = -EPERM;
- spin_unlock(&pt->pg_lock);
- goto EXIT_LOOP;
- }
- }
- rem_bytes -= pte_count * PG_SIZE4K;
- } else
- status = -EPERM;
+ for_each_sg(sgt->sgl, sg, sgt->nents, i)
+ put_page(sg_page(sg));

- spin_unlock(&pt->pg_lock);
- continue;
-skip_coarse_page:
- /* va_curr aligned to pte_size? */
- /* pte_size = 1 MB or 16 MB */
- if (pte_size == 0 || rem_bytes < pte_size ||
- va_curr & (pte_size - 1)) {
- status = -EPERM;
- break;
- }
+ sg_free_table(sgt);
+ kfree(sgt);

- if (pte_size == HW_PAGE_SIZE1MB)
- numof4k_pages = 256;
- else
- numof4k_pages = 4096;
- temp = 0;
- /* Collect Physical addresses from VA */
- paddr = (pte_val & ~(pte_size - 1));
- while (temp++ < numof4k_pages) {
- if (pfn_valid(__phys_to_pfn(paddr))) {
- pg = PHYS_TO_PAGE(paddr);
- if (page_count(pg) < 1) {
- pr_info("DSPBRIDGE: UNMAP function: "
- "COUNT 0 FOR PA 0x%x, size = "
- "0x%x\n", paddr, ul_num_bytes);
- bad_page_dump(paddr, pg);
- } else {
- SetPageDirty(pg);
- page_cache_release(pg);
- }
- }
- paddr += HW_PAGE_SIZE4KB;
- }
- if (hw_mmu_pte_clear(l1_base_va, va_curr, pte_size) == RET_OK) {
- status = 0;
- rem_bytes -= pte_size;
- va_curr += pte_size;
- } else {
- status = -EPERM;
- goto EXIT_LOOP;
- }
- }
- /*
- * It is better to flush the TLB here, so that any stale old entries
- * get flushed
- */
-EXIT_LOOP:
- flush_all(dev_context);
- dev_dbg(bridge,
- "%s: va_curr %x, pte_addr_l1 %x pte_addr_l2 %x rem_bytes %x,"
- " rem_bytes_l2 %x status %x\n", __func__, va_curr, pte_addr_l1,
- pte_addr_l2, rem_bytes, rem_bytes_l2, status);
- return status;
+ return 0;
}

/*
diff --git a/drivers/dsp/bridge/core/ue_deh.c b/drivers/dsp/bridge/core/ue_deh.c
index 64e9366..ce13e6c 100644
--- a/drivers/dsp/bridge/core/ue_deh.c
+++ b/drivers/dsp/bridge/core/ue_deh.c
@@ -99,14 +99,6 @@ int bridge_deh_create(struct deh_mgr **ret_deh_mgr,
deh_mgr->err_info.dw_val2 = 0L;
deh_mgr->err_info.dw_val3 = 0L;

- /* Install ISR function for DSP MMU fault */
- if ((request_irq(INT_DSP_MMU_IRQ, mmu_fault_isr, 0,
- "DspBridge\tiommu fault",
- (void *)deh_mgr)) == 0)
- status = 0;
- else
- status = -EPERM;
-
err:
if (DSP_FAILED(status)) {
/* If create failed, cleanup */
@@ -131,8 +123,6 @@ int bridge_deh_destroy(struct deh_mgr *deh_mgr)
ntfy_delete(deh_mgr->ntfy_obj);
kfree(deh_mgr->ntfy_obj);
}
- /* Disable DSP MMU fault */
- free_irq(INT_DSP_MMU_IRQ, deh_mgr);

/* Free DPC object */
tasklet_kill(&deh_mgr->dpc_tasklet);
--
1.7.0.4

2010-06-30 23:51:03

by Fernando Guzman Lugo

Subject: [PATCHv3 5/9] dspbridge: add mmufault support

The iommu migration changes broke the MMU fault report and the DSP trace
dump; this patch fixes both.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
drivers/dsp/bridge/core/mmu_fault.c | 93 ++++++---------------------------
drivers/dsp/bridge/core/mmu_fault.h | 5 +-
drivers/dsp/bridge/core/tiomap3430.c | 2 +
drivers/dsp/bridge/core/ue_deh.c | 31 +++++-------
4 files changed, 34 insertions(+), 97 deletions(-)

diff --git a/drivers/dsp/bridge/core/mmu_fault.c b/drivers/dsp/bridge/core/mmu_fault.c
index 5c0124f..d991c6a 100644
--- a/drivers/dsp/bridge/core/mmu_fault.c
+++ b/drivers/dsp/bridge/core/mmu_fault.c
@@ -23,9 +23,12 @@
/* ----------------------------------- Trace & Debug */
#include <dspbridge/host_os.h>
#include <dspbridge/dbc.h>
+#include <plat/iommu.h>

/* ----------------------------------- OS Adaptation Layer */
#include <dspbridge/drv.h>
+#include <dspbridge/dev.h>
+

/* ----------------------------------- Link Driver */
#include <dspbridge/dspdeh.h>
@@ -40,11 +43,6 @@
#include "_tiomap.h"
#include "mmu_fault.h"

-static u32 dmmu_event_mask;
-u32 fault_addr;
-
-static bool mmu_check_if_fault(struct bridge_dev_context *dev_context);
-
/*
* ======== mmu_fault_dpc ========
* Deferred procedure call to handle DSP MMU fault.
@@ -62,78 +60,21 @@ void mmu_fault_dpc(IN unsigned long pRefData)
* ======== mmu_fault_isr ========
* ISR to be triggered by a DSP MMU fault interrupt.
*/
-irqreturn_t mmu_fault_isr(int irq, IN void *pRefData)
-{
- struct deh_mgr *deh_mgr_obj = (struct deh_mgr *)pRefData;
- struct bridge_dev_context *dev_context;
- struct cfg_hostres *resources;
-
- DBC_REQUIRE(irq == INT_DSP_MMU_IRQ);
- DBC_REQUIRE(deh_mgr_obj);
-
- if (deh_mgr_obj) {
-
- dev_context =
- (struct bridge_dev_context *)deh_mgr_obj->hbridge_context;
-
- resources = dev_context->resources;
-
- if (!resources) {
- dev_dbg(bridge, "%s: Failed to get Host Resources\n",
- __func__);
- return IRQ_HANDLED;
- }
- if (mmu_check_if_fault(dev_context)) {
- printk(KERN_INFO "***** DSPMMU FAULT ***** IRQStatus "
- "0x%x\n", dmmu_event_mask);
- printk(KERN_INFO "***** DSPMMU FAULT ***** fault_addr "
- "0x%x\n", fault_addr);
- /*
- * Schedule a DPC directly. In the future, it may be
- * necessary to check if DSP MMU fault is intended for
- * Bridge.
- */
- tasklet_schedule(&deh_mgr_obj->dpc_tasklet);
-
- /* Reset err_info structure before use. */
- deh_mgr_obj->err_info.dw_err_mask = DSP_MMUFAULT;
- deh_mgr_obj->err_info.dw_val1 = fault_addr >> 16;
- deh_mgr_obj->err_info.dw_val2 = fault_addr & 0xFFFF;
- deh_mgr_obj->err_info.dw_val3 = 0L;
- /* Disable the MMU events, else once we clear it will
- * start to raise INTs again */
- hw_mmu_event_disable(resources->dw_dmmu_base,
- HW_MMU_TRANSLATION_FAULT);
- } else {
- hw_mmu_event_disable(resources->dw_dmmu_base,
- HW_MMU_ALL_INTERRUPTS);
- }
- }
- return IRQ_HANDLED;
-}
+int mmu_fault_isr(struct iommu *mmu)

-/*
- * ======== mmu_check_if_fault ========
- * Check to see if MMU Fault is valid TLB miss from DSP
- * Note: This function is called from an ISR
- */
-static bool mmu_check_if_fault(struct bridge_dev_context *dev_context)
{
+ struct deh_mgr *dm;
+ u32 da;
+
+ dev_get_deh_mgr(dev_get_first(), &dm);
+
+ if (!dm)
+ return -EPERM;
+
+ da = iommu_read_reg(mmu, MMU_FAULT_AD);
+ iommu_write_reg(mmu, 0, MMU_IRQENABLE);
+ dm->err_info.dw_val1 = da;
+ tasklet_schedule(&dm->dpc_tasklet);

- bool ret = false;
- hw_status hw_status_obj;
- struct cfg_hostres *resources = dev_context->resources;
-
- if (!resources) {
- dev_dbg(bridge, "%s: Failed to get Host Resources in\n",
- __func__);
- return ret;
- }
- hw_status_obj =
- hw_mmu_event_status(resources->dw_dmmu_base, &dmmu_event_mask);
- if (dmmu_event_mask == HW_MMU_TRANSLATION_FAULT) {
- hw_mmu_fault_addr_read(resources->dw_dmmu_base, &fault_addr);
- ret = true;
- }
- return ret;
+ return 0;
}
diff --git a/drivers/dsp/bridge/core/mmu_fault.h b/drivers/dsp/bridge/core/mmu_fault.h
index 74db489..df3fba6 100644
--- a/drivers/dsp/bridge/core/mmu_fault.h
+++ b/drivers/dsp/bridge/core/mmu_fault.h
@@ -19,8 +19,6 @@
#ifndef MMU_FAULT_
#define MMU_FAULT_

-extern u32 fault_addr;
-
/*
* ======== mmu_fault_dpc ========
* Deferred procedure call to handle DSP MMU fault.
@@ -31,6 +29,7 @@ void mmu_fault_dpc(IN unsigned long pRefData);
* ======== mmu_fault_isr ========
* ISR to be triggered by a DSP MMU fault interrupt.
*/
-irqreturn_t mmu_fault_isr(int irq, IN void *pRefData);
+int mmu_fault_isr(struct iommu *mmu);
+

#endif /* MMU_FAULT_ */
diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index 96cceea..89867e7 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -57,6 +57,7 @@
#include "_tiomap.h"
#include "_tiomap_pwr.h"
#include "tiomap_io.h"
+#include "mmu_fault.h"

/* Offset in shared mem to write to in order to synchronize start with DSP */
#define SHMSYNCOFFSET 4 /* GPP byte offset */
@@ -382,6 +383,7 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
goto end;
}
dev_context->dsp_mmu = mmu;
+ mmu->isr = mmu_fault_isr;
sm_sg = dev_context->sh_s;

sm_sg->seg0_da = iommu_kmap(mmu, sm_sg->seg0_da, sm_sg->seg0_pa,
diff --git a/drivers/dsp/bridge/core/ue_deh.c b/drivers/dsp/bridge/core/ue_deh.c
index ce13e6c..a03d172 100644
--- a/drivers/dsp/bridge/core/ue_deh.c
+++ b/drivers/dsp/bridge/core/ue_deh.c
@@ -18,6 +18,7 @@

/* ----------------------------------- Host OS */
#include <dspbridge/host_os.h>
+#include <plat/iommu.h>

/* ----------------------------------- DSP/BIOS Bridge */
#include <dspbridge/std.h>
@@ -51,12 +52,6 @@
#include "_tiomap_pwr.h"
#include <dspbridge/io_sm.h>

-
-static struct hw_mmu_map_attrs_t map_attrs = { HW_LITTLE_ENDIAN,
- HW_ELEM_SIZE16BIT,
- HW_MMU_CPUES
-};
-
static void *dummy_va_addr;

int bridge_deh_create(struct deh_mgr **ret_deh_mgr,
@@ -154,10 +149,10 @@ int bridge_deh_register_notify(struct deh_mgr *deh_mgr, u32 event_mask,
void bridge_deh_notify(struct deh_mgr *deh_mgr, u32 ulEventMask, u32 dwErrInfo)
{
struct bridge_dev_context *dev_context;
- int status = 0;
u32 hw_mmu_max_tlb_count = 31;
struct cfg_hostres *resources;
- hw_status hw_status_obj;
+ u32 fault_addr, tmp;
+ struct iotlb_entry e;

if (!deh_mgr)
return;
@@ -181,6 +176,9 @@ void bridge_deh_notify(struct deh_mgr *deh_mgr, u32 ulEventMask, u32 dwErrInfo)
break;
case DSP_MMUFAULT:
/* MMU fault routine should have set err info structure. */
+ fault_addr = iommu_read_reg(dev_context->dsp_mmu,
+ MMU_FAULT_AD);
+
deh_mgr->err_info.dw_err_mask = DSP_MMUFAULT;
dev_err(bridge, "%s: %s, err_info = 0x%x\n",
__func__, "DSP_MMUFAULT", dwErrInfo);
@@ -206,21 +204,18 @@ void bridge_deh_notify(struct deh_mgr *deh_mgr, u32 ulEventMask, u32 dwErrInfo)
dev_context->num_tlb_entries =
dev_context->fixed_tlb_entries;
}
- if (DSP_SUCCEEDED(status)) {
- hw_status_obj =
- hw_mmu_tlb_add(resources->dw_dmmu_base,
- virt_to_phys(dummy_va_addr), fault_addr,
- HW_PAGE_SIZE4KB, 1,
- &map_attrs, HW_SET, HW_SET);
- }
+ dsp_iotlb_init(&e, fault_addr & PAGE_MASK,
+ virt_to_phys(dummy_va_addr), IOVMF_PGSZ_4K);
+ load_iotlb_entry(dev_context->dsp_mmu, &e);
+

dsp_clk_enable(DSP_CLK_GPT8);

dsp_gpt_wait_overflow(DSP_CLK_GPT8, 0xfffffffe);

- /* Clear MMU interrupt */
- hw_mmu_event_ack(resources->dw_dmmu_base,
- HW_MMU_TRANSLATION_FAULT);
+ tmp = iommu_read_reg(dev_context->dsp_mmu, MMU_IRQSTATUS);
+ iommu_write_reg(dev_context->dsp_mmu, tmp, MMU_IRQSTATUS);
+
dump_dsp_stack(deh_mgr->hbridge_context);
dsp_clk_disable(DSP_CLK_GPT8);
break;
--
1.7.0.4

2010-06-30 23:51:06

by Fernando Guzman Lugo

Subject: [PATCHv3 7/9] dspbridge: move all iommu related code to a new file

This patch moves all iommu-related code into the new dsp-mmu.c file.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
arch/arm/plat-omap/include/dspbridge/dsp-mmu.h | 90 ++++++++++
arch/arm/plat-omap/include/dspbridge/dspdeh.h | 1 -
drivers/dsp/bridge/Makefile | 2 +-
drivers/dsp/bridge/core/_deh.h | 3 -
drivers/dsp/bridge/core/_tiomap.h | 41 +-----
drivers/dsp/bridge/core/dsp-mmu.c | 218 ++++++++++++++++++++++++
drivers/dsp/bridge/core/mmu_fault.c | 76 --------
drivers/dsp/bridge/core/mmu_fault.h | 35 ----
drivers/dsp/bridge/core/tiomap3430.c | 111 +------------
drivers/dsp/bridge/core/ue_deh.c | 68 +-------
drivers/dsp/bridge/rmgr/proc.c | 6 +-
11 files changed, 318 insertions(+), 333 deletions(-)
create mode 100644 arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
create mode 100644 drivers/dsp/bridge/core/dsp-mmu.c
delete mode 100644 drivers/dsp/bridge/core/mmu_fault.c
delete mode 100644 drivers/dsp/bridge/core/mmu_fault.h

diff --git a/arch/arm/plat-omap/include/dspbridge/dsp-mmu.h b/arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
new file mode 100644
index 0000000..266f38b
--- /dev/null
+++ b/arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
@@ -0,0 +1,90 @@
+/*
+ * dsp-mmu.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP iommu.
+ *
+ * Copyright (C) 2005-2010 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _DSP_MMU_
+#define _DSP_MMU_
+
+#include <plat/iommu.h>
+#include <plat/iovmm.h>
+
+/**
+ * dsp_iotlb_init() - initialize dsp mmu entry
+ * @e: Pointer to the tlb entry.
+ * @da: DSP address.
+ * @pa: physical address.
+ * @pgsz: page size to map.
+ *
+ * This function initializes a dsp mmu entry in order to be used with
+ * iommu functions.
+ */
+static inline void dsp_iotlb_init(struct iotlb_entry *e, u32 da, u32 pa,
+ u32 pgsz)
+{
+ e->da = da;
+ e->pa = pa;
+ e->valid = 1;
+ e->prsvd = 1;
+ e->pgsz = pgsz & MMU_CAM_PGSZ_MASK;
+ e->endian = MMU_RAM_ENDIAN_LITTLE;
+ e->elsz = MMU_RAM_ELSZ_32;
+ e->mixed = 0;
+}
+
+/**
+ * dsp_mmu_init() - initialize the dsp_mmu module and return a handle
+ *
+ * This function initializes the dsp mmu module and returns a struct iommu
+ * handle to be used for dsp maps.
+ *
+ */
+struct iommu *dsp_mmu_init(void);
+
+/**
+ * dsp_mmu_exit() - destroy dsp mmu module
+ * @mmu: Pointer to iommu handle.
+ *
+ * This function destroys dsp mmu module.
+ *
+ */
+void dsp_mmu_exit(struct iommu *mmu);
+
+/**
+ * user_to_dsp_map() - maps user to dsp virtual address
+ * @mmu: Pointer to iommu handle.
+ * @uva: Virtual user space address.
+ * @da: DSP address.
+ * @size: buffer size to map.
+ * @usr_pgs: struct page array pointer where the user pages will be stored.
+ *
+ * This function maps a user space buffer into the DSP virtual address space.
+ *
+ */
+int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
+ struct page **usr_pgs);
+
+/**
+ * user_to_dsp_unmap() - unmaps DSP virtual buffer.
+ * @mmu: Pointer to iommu handle.
+ * @da: DSP address.
+ *
+ * This function unmaps a previously mapped buffer from the DSP virtual
+ * address space.
+ *
+ */
+int user_to_dsp_unmap(struct iommu *mmu, u32 da);
+
+#endif
diff --git a/arch/arm/plat-omap/include/dspbridge/dspdeh.h b/arch/arm/plat-omap/include/dspbridge/dspdeh.h
index 4394711..af19926 100644
--- a/arch/arm/plat-omap/include/dspbridge/dspdeh.h
+++ b/arch/arm/plat-omap/include/dspbridge/dspdeh.h
@@ -43,5 +43,4 @@ extern int bridge_deh_register_notify(struct deh_mgr *deh_mgr,
extern void bridge_deh_notify(struct deh_mgr *deh_mgr,
u32 ulEventMask, u32 dwErrInfo);

-extern void bridge_deh_release_dummy_mem(void);
#endif /* DSPDEH_ */
diff --git a/drivers/dsp/bridge/Makefile b/drivers/dsp/bridge/Makefile
index 66ca10a..9f32055 100644
--- a/drivers/dsp/bridge/Makefile
+++ b/drivers/dsp/bridge/Makefile
@@ -5,7 +5,7 @@ libservices = services/sync.o services/cfg.o \
services/ntfy.o services/services.o
libcore = core/chnl_sm.o core/msg_sm.o core/io_sm.o core/tiomap3430.o \
core/tiomap3430_pwr.o core/tiomap_io.o \
- core/mmu_fault.o core/ue_deh.o core/wdt.o core/dsp-clock.o
+ core/dsp-mmu.o core/ue_deh.o core/wdt.o core/dsp-clock.o
libpmgr = pmgr/chnl.o pmgr/io.o pmgr/msg.o pmgr/cod.o pmgr/dev.o pmgr/dspapi.o \
pmgr/dmm.o pmgr/cmm.o pmgr/dbll.o
librmgr = rmgr/dbdcd.o rmgr/disp.o rmgr/drv.o rmgr/mgr.o rmgr/node.o \
diff --git a/drivers/dsp/bridge/core/_deh.h b/drivers/dsp/bridge/core/_deh.h
index 8da2212..b1ef2e9 100644
--- a/drivers/dsp/bridge/core/_deh.h
+++ b/drivers/dsp/bridge/core/_deh.h
@@ -27,9 +27,6 @@ struct deh_mgr {
struct bridge_dev_context *hbridge_context; /* Bridge context. */
struct ntfy_object *ntfy_obj; /* NTFY object */
struct dsp_errorinfo err_info; /* DSP exception info. */
-
- /* MMU Fault DPC */
- struct tasklet_struct dpc_tasklet;
};

#endif /* _DEH_ */
diff --git a/drivers/dsp/bridge/core/_tiomap.h b/drivers/dsp/bridge/core/_tiomap.h
index 35f20a7..8a9a822 100644
--- a/drivers/dsp/bridge/core/_tiomap.h
+++ b/drivers/dsp/bridge/core/_tiomap.h
@@ -23,8 +23,7 @@
#include <plat/clockdomain.h>
#include <mach-omap2/prm-regbits-34xx.h>
#include <mach-omap2/cm-regbits-34xx.h>
-#include <plat/iommu.h>
-#include <plat/iovmm.h>
+#include <dspbridge/dsp-mmu.h>
#include <dspbridge/devdefs.h>
#include <dspbridge/dspioctl.h> /* for bridge_ioctl_extproc defn */
#include <dspbridge/sync.h>
@@ -381,42 +380,4 @@ extern s32 dsp_debug;
*/
int sm_interrupt_dsp(struct bridge_dev_context *dev_context, u16 mb_val);

-static inline void dsp_iotlb_init(struct iotlb_entry *e, u32 da, u32 pa,
- u32 pgsz)
-{
- e->da = da;
- e->pa = pa;
- e->valid = 1;
- e->prsvd = 1;
- e->pgsz = pgsz & MMU_CAM_PGSZ_MASK;
- e->endian = MMU_RAM_ENDIAN_LITTLE;
- e->elsz = MMU_RAM_ELSZ_32;
- e->mixed = 0;
-}
-
-/**
- * user_to_dsp_map() - maps user to dsp virtual address
- * @mmu: Pointer to iommu handle.
- * @uva: Virtual user space address.
- * @da DSP address
- * @size Buffer size to map.
- * @usr_pgs struct page array pointer where the user pages will be stored
- *
- * This function maps a user space buffer into DSP virtual address.
- *
- */
-
-int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
- struct page **usr_pgs);
-
-/**
- * user_to_dsp_unmap() - unmaps DSP virtual buffer.
- * @mmu: Pointer to iommu handle.
- * @da DSP address
- *
- * This function unmaps a user space buffer into DSP virtual address.
- *
- */
-int user_to_dsp_unmap(struct iommu *mmu, u32 da);
-
#endif /* _TIOMAP_ */
diff --git a/drivers/dsp/bridge/core/dsp-mmu.c b/drivers/dsp/bridge/core/dsp-mmu.c
new file mode 100644
index 0000000..e8da327
--- /dev/null
+++ b/drivers/dsp/bridge/core/dsp-mmu.c
@@ -0,0 +1,218 @@
+/*
+ * dsp-mmu.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP iommu.
+ *
+ * Copyright (C) 2010 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/host_os.h>
+#include <plat/dmtimer.h>
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/io_sm.h>
+#include <dspbridge/dspdeh.h>
+#include "_tiomap.h"
+
+#include <dspbridge/dsp-mmu.h>
+
+static struct tasklet_struct mmu_tasklet;
+
+static void fault_tasklet(unsigned long data)
+{
+ struct iommu *mmu = (struct iommu *)data;
+ struct bridge_dev_context *dev_ctx;
+ struct deh_mgr *dm;
+ struct iotlb_entry e;
+ u32 fa, tmp, dummy;
+
+ dev_get_deh_mgr(dev_get_first(), &dm);
+ dev_get_bridge_context(dev_get_first(), &dev_ctx);
+
+ if (!dm || !dev_ctx)
+ return;
+
+ dummy = __get_free_page(GFP_ATOMIC);
+
+ if (!dummy)
+ return;
+
+ print_dsp_trace_buffer(dev_ctx);
+ dump_dl_modules(dev_ctx);
+
+ fa = iommu_read_reg(mmu, MMU_FAULT_AD);
+ dsp_iotlb_init(&e, fa & PAGE_MASK, __pa(dummy), IOVMF_PGSZ_4K);
+ load_iotlb_entry(mmu, &e);
+
+ dsp_clk_enable(DSP_CLK_GPT7);
+ dsp_gpt_wait_overflow(DSP_CLK_GPT7, 0xfffffffe);
+
+ /* Clear MMU interrupt */
+ tmp = iommu_read_reg(mmu, MMU_IRQSTATUS);
+ iommu_write_reg(mmu, tmp, MMU_IRQSTATUS);
+
+ dump_dsp_stack(dev_ctx);
+ dsp_clk_disable(DSP_CLK_GPT7);
+
+ bridge_deh_notify(dm, DSP_MMUFAULT, fa);
+ free_page(dummy);
+}
+
+/*
+ * ======== mmu_fault_isr ========
+ * ISR to be triggered by a DSP MMU fault interrupt.
+ */
+static int mmu_fault_callback(struct iommu *mmu)
+{
+ if (!mmu)
+ return -EPERM;
+
+ iommu_write_reg(mmu, 0, MMU_IRQENABLE);
+ tasklet_schedule(&mmu_tasklet);
+ return 0;
+}
+
+/**
+ * dsp_mmu_init() - initialize the dsp_mmu module and return a handle
+ *
+ * This function initializes the dsp mmu module and returns a struct iommu
+ * handle to be used for dsp maps.
+ *
+ */
+struct iommu *dsp_mmu_init(void)
+{
+ struct iommu *mmu;
+ mmu = iommu_get("iva2");
+ if (IS_ERR(mmu))
+ return mmu;
+
+ tasklet_init(&mmu_tasklet, fault_tasklet, (unsigned long)mmu);
+ mmu->isr = mmu_fault_callback;
+
+ return mmu;
+}
+
+/**
+ * dsp_mmu_exit() - destroy dsp mmu module
+ * @mmu: Pointer to iommu handle.
+ *
+ * This function destroys dsp mmu module.
+ *
+ */
+void dsp_mmu_exit(struct iommu *mmu)
+{
+
+ if (mmu)
+ iommu_put(mmu);
+ tasklet_kill(&mmu_tasklet);
+}
+
+/**
+ * user_to_dsp_map() - maps user to dsp virtual address
+ * @mmu: Pointer to iommu handle.
+ * @uva: Virtual user space address.
+ * @da: DSP address.
+ * @size: buffer size to map.
+ * @usr_pgs: struct page array pointer where the user pages will be stored.
+ *
+ * This function maps a user space buffer into the DSP virtual address space.
+ *
+ */
+
+int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
+ struct page **usr_pgs)
+{
+ int res, w = 0;
+ unsigned pages, i;
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = current->mm;
+ struct sg_table *sgt;
+ struct scatterlist *sg;
+
+ if (!size || !usr_pgs)
+ return -EINVAL;
+
+ pages = size / PG_SIZE4K;
+
+ down_read(&mm->mmap_sem);
+ vma = find_vma(mm, uva);
+ while (vma && (uva + size > vma->vm_end))
+ vma = find_vma(mm, vma->vm_end + 1);
+
+ if (!vma) {
+ pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
+ __func__, uva, size);
+ up_read(&mm->mmap_sem);
+ return -EINVAL;
+ }
+ if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
+ w = 1;
+ res = get_user_pages(current, mm, uva, pages, w, 1, usr_pgs, NULL);
+ up_read(&mm->mmap_sem);
+ if (res < 0)
+ return res;
+
+ sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+
+ if (!sgt)
+ return -ENOMEM;
+
+ res = sg_alloc_table(sgt, pages, GFP_KERNEL);
+
+ if (res < 0)
+ goto err_sg;
+
+ for_each_sg(sgt->sgl, sg, sgt->nents, i)
+ sg_set_page(sg, usr_pgs[i], PAGE_SIZE, 0);
+
+ da = iommu_vmap(mmu, da, sgt, IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);
+
+ if (IS_ERR_VALUE(da)) {
+ res = (int)da;
+ goto err_map;
+ }
+ return 0;
+
+err_map:
+ sg_free_table(sgt);
+err_sg:
+ kfree(sgt);
+ return res;
+}
+
+/**
+ * user_to_dsp_unmap() - unmaps DSP virtual buffer.
+ * @mmu: Pointer to iommu handle.
+ * @da: DSP address.
+ *
+ * This function unmaps a previously mapped buffer from the DSP virtual
+ * address space.
+ *
+ */
+int user_to_dsp_unmap(struct iommu *mmu, u32 da)
+{
+ unsigned i;
+ struct sg_table *sgt;
+ struct scatterlist *sg;
+
+ sgt = iommu_vunmap(mmu, da);
+ if (!sgt)
+ return -EFAULT;
+
+ for_each_sg(sgt->sgl, sg, sgt->nents, i)
+ put_page(sg_page(sg));
+
+ sg_free_table(sgt);
+ kfree(sgt);
+
+ return 0;
+}
diff --git a/drivers/dsp/bridge/core/mmu_fault.c b/drivers/dsp/bridge/core/mmu_fault.c
deleted file mode 100644
index 54c0bc3..0000000
--- a/drivers/dsp/bridge/core/mmu_fault.c
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * mmu_fault.c
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * Implements DSP MMU fault handling functions.
- *
- * Copyright (C) 2005-2006 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-/* ----------------------------------- DSP/BIOS Bridge */
-#include <dspbridge/std.h>
-#include <dspbridge/dbdefs.h>
-
-/* ----------------------------------- Trace & Debug */
-#include <dspbridge/host_os.h>
-#include <dspbridge/dbc.h>
-#include <plat/iommu.h>
-
-/* ----------------------------------- OS Adaptation Layer */
-#include <dspbridge/drv.h>
-#include <dspbridge/dev.h>
-
-
-/* ----------------------------------- Link Driver */
-#include <dspbridge/dspdeh.h>
-
-/* ----------------------------------- This */
-#include "_deh.h"
-#include <dspbridge/cfg.h>
-#include "_tiomap.h"
-#include "mmu_fault.h"
-
-/*
- * ======== mmu_fault_dpc ========
- * Deferred procedure call to handle DSP MMU fault.
- */
-void mmu_fault_dpc(IN unsigned long pRefData)
-{
- struct deh_mgr *hdeh_mgr = (struct deh_mgr *)pRefData;
-
- if (hdeh_mgr)
- bridge_deh_notify(hdeh_mgr, DSP_MMUFAULT, 0L);
-
-}
-
-/*
- * ======== mmu_fault_isr ========
- * ISR to be triggered by a DSP MMU fault interrupt.
- */
-int mmu_fault_isr(struct iommu *mmu)
-
-{
- struct deh_mgr *dm;
- u32 da;
-
- dev_get_deh_mgr(dev_get_first(), &dm);
-
- if (!dm)
- return -EPERM;
-
- da = iommu_read_reg(mmu, MMU_FAULT_AD);
- iommu_write_reg(mmu, 0, MMU_IRQENABLE);
- dm->err_info.dw_val1 = da;
- tasklet_schedule(&dm->dpc_tasklet);
-
- return 0;
-}
diff --git a/drivers/dsp/bridge/core/mmu_fault.h b/drivers/dsp/bridge/core/mmu_fault.h
deleted file mode 100644
index df3fba6..0000000
--- a/drivers/dsp/bridge/core/mmu_fault.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- * mmu_fault.h
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * Defines DSP MMU fault handling functions.
- *
- * Copyright (C) 2005-2006 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#ifndef MMU_FAULT_
-#define MMU_FAULT_
-
-/*
- * ======== mmu_fault_dpc ========
- * Deferred procedure call to handle DSP MMU fault.
- */
-void mmu_fault_dpc(IN unsigned long pRefData);
-
-/*
- * ======== mmu_fault_isr ========
- * ISR to be triggered by a DSP MMU fault interrupt.
- */
-int mmu_fault_isr(struct iommu *mmu);
-
-
-#endif /* MMU_FAULT_ */
diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index 9b6293b..aa6e999 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -53,7 +53,6 @@
#include "_tiomap.h"
#include "_tiomap_pwr.h"
#include "tiomap_io.h"
-#include "mmu_fault.h"

/* Offset in shared mem to write to in order to synchronize start with DSP */
#define SHMSYNCOFFSET 4 /* GPP byte offset */
@@ -369,8 +368,8 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
mmu = dev_context->dsp_mmu;

if (mmu)
- iommu_put(mmu);
- mmu = iommu_get("iva2");
+ dsp_mmu_exit(mmu);
+ mmu = dsp_mmu_init();

if (IS_ERR(mmu)) {
pr_err("Error in iommu_get %ld\n", PTR_ERR(mmu));
@@ -379,7 +378,6 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
goto end;
}
dev_context->dsp_mmu = mmu;
- mmu->isr = mmu_fault_isr;
sm_sg = dev_context->sh_s;

sm_sg->seg0_da = iommu_kmap(mmu, sm_sg->seg0_da, sm_sg->seg0_pa,
@@ -612,7 +610,7 @@ static int bridge_brd_stop(struct bridge_dev_context *hDevContext)
iommu_kunmap(dev_context->dsp_mmu,
dev_context->sh_s->seg1_da);
}
- iommu_put(dev_context->dsp_mmu);
+ dsp_mmu_exit(dev_context->dsp_mmu);
dev_context->dsp_mmu = NULL;
}

@@ -673,7 +671,7 @@ static int bridge_brd_delete(struct bridge_dev_context *hDevContext)
iommu_kunmap(dev_context->dsp_mmu,
dev_context->sh_s->seg1_da);
}
- iommu_put(dev_context->dsp_mmu);
+ dsp_mmu_exit(dev_context->dsp_mmu);
dev_context->dsp_mmu = NULL;
}

@@ -987,107 +985,6 @@ static int bridge_brd_mem_write(struct bridge_dev_context *hDevContext,
return status;
}

-/**
- * user_to_dsp_map() - maps user to dsp virtual address
- * @mmu: Pointer to iommu handle.
- * @uva: Virtual user space address.
- * @da DSP address
- * @size Buffer size to map.
- * @usr_pgs struct page array pointer where the user pages will be stored
- *
- * This function maps a user space buffer into DSP virtual address.
- *
- */
-
-int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
- struct page **usr_pgs)
-
-{
- int res, w;
- unsigned pages, i;
- struct vm_area_struct *vma;
- struct mm_struct *mm = current->mm;
- struct sg_table *sgt;
- struct scatterlist *sg;
-
- if (!size || !usr_pgs)
- return -EINVAL;
-
- pages = size / PG_SIZE4K;
-
- down_read(&mm->mmap_sem);
- vma = find_vma(mm, uva);
- while (vma && (uva + size > vma->vm_end))
- vma = find_vma(mm, vma->vm_end + 1);
-
- if (!vma) {
- pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
- __func__, uva, size);
- up_read(&mm->mmap_sem);
- return -EINVAL;
- }
- if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
- w = 1;
- res = get_user_pages(current, mm, uva, pages, w, 1, usr_pgs, NULL);
- up_read(&mm->mmap_sem);
- if (res < 0)
- return res;
-
- sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
-
- if (!sgt)
- return -ENOMEM;
-
- res = sg_alloc_table(sgt, pages, GFP_KERNEL);
-
- if (res < 0)
- goto err_sg;
-
- for_each_sg(sgt->sgl, sg, sgt->nents, i)
- sg_set_page(sg, usr_pgs[i], PAGE_SIZE, 0);
-
- da = iommu_vmap(mmu, da, sgt, IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);
-
- if (IS_ERR_VALUE(da)) {
- res = (int)da;
- goto err_map;
- }
- return 0;
-
-err_map:
- sg_free_table(sgt);
-err_sg:
- kfree(sgt);
- return res;
-}
-
-/**
- * user_to_dsp_unmap() - unmaps DSP virtual buffer.
- * @mmu: Pointer to iommu handle.
- * @da DSP address
- *
- * This function unmaps a user space buffer into DSP virtual address.
- *
- */
-int user_to_dsp_unmap(struct iommu *mmu, u32 da)
-{
- unsigned i;
- struct sg_table *sgt;
- struct scatterlist *sg;
-
- sgt = iommu_vunmap(mmu, da);
- if (!sgt)
- return -EFAULT;
-
- for_each_sg(sgt->sgl, sg, sgt->nents, i)
- put_page(sg_page(sg));
-
- sg_free_table(sgt);
- kfree(sgt);
-
- return 0;
-}
-
/*
* ======== wait_for_start ========
* Wait for the signal from DSP that it has started, or time out.
diff --git a/drivers/dsp/bridge/core/ue_deh.c b/drivers/dsp/bridge/core/ue_deh.c
index 72cc6c0..4a00d09 100644
--- a/drivers/dsp/bridge/core/ue_deh.c
+++ b/drivers/dsp/bridge/core/ue_deh.c
@@ -18,11 +18,11 @@

/* ----------------------------------- Host OS */
#include <dspbridge/host_os.h>
-#include <plat/iommu.h>

/* ----------------------------------- DSP/BIOS Bridge */
#include <dspbridge/std.h>
#include <dspbridge/dbdefs.h>
+#include <dspbridge/dsp-mmu.h>

/* ----------------------------------- Trace & Debug */
#include <dspbridge/dbc.h>
@@ -42,14 +42,11 @@
#include <dspbridge/wdt.h>

/* ----------------------------------- This */
-#include "mmu_fault.h"
#include "_tiomap.h"
#include "_deh.h"
#include "_tiomap_pwr.h"
#include <dspbridge/io_sm.h>

-static void *dummy_va_addr;
-
int bridge_deh_create(struct deh_mgr **ret_deh_mgr,
struct dev_object *hdev_obj)
{
@@ -63,7 +60,6 @@ int bridge_deh_create(struct deh_mgr **ret_deh_mgr,
/* Get Bridge context info. */
dev_get_bridge_context(hdev_obj, &hbridge_context);
DBC_ASSERT(hbridge_context);
- dummy_va_addr = NULL;
/* Allocate IO manager object: */
deh_mgr = kzalloc(sizeof(struct deh_mgr), GFP_KERNEL);
if (!deh_mgr) {
@@ -80,9 +76,6 @@ int bridge_deh_create(struct deh_mgr **ret_deh_mgr,
goto err;
}

- /* Create a MMUfault DPC */
- tasklet_init(&deh_mgr->dpc_tasklet, mmu_fault_dpc, (u32) deh_mgr);
-
/* Fill in context structure */
deh_mgr->hbridge_context = hbridge_context;
deh_mgr->err_info.dw_err_mask = 0L;
@@ -107,17 +100,12 @@ int bridge_deh_destroy(struct deh_mgr *deh_mgr)
if (!deh_mgr)
return -EFAULT;

- /* Release dummy VA buffer */
- bridge_deh_release_dummy_mem();
/* If notification object exists, delete it */
if (deh_mgr->ntfy_obj) {
ntfy_delete(deh_mgr->ntfy_obj);
kfree(deh_mgr->ntfy_obj);
}

- /* Free DPC object */
- tasklet_kill(&deh_mgr->dpc_tasklet);
-
/* Deallocate the DEH manager object */
kfree(deh_mgr);

@@ -145,10 +133,7 @@ int bridge_deh_register_notify(struct deh_mgr *deh_mgr, u32 event_mask,
void bridge_deh_notify(struct deh_mgr *deh_mgr, u32 ulEventMask, u32 dwErrInfo)
{
struct bridge_dev_context *dev_context;
- u32 hw_mmu_max_tlb_count = 31;
struct cfg_hostres *resources;
- u32 fault_addr, tmp;
- struct iotlb_entry e;

if (!deh_mgr)
return;
@@ -171,49 +156,8 @@ void bridge_deh_notify(struct deh_mgr *deh_mgr, u32 ulEventMask, u32 dwErrInfo)
dump_dsp_stack(dev_context);
break;
case DSP_MMUFAULT:
- /* MMU fault routine should have set err info structure. */
- fault_addr = iommu_read_reg(dev_context->dsp_mmu,
- MMU_FAULT_AD);
-
- deh_mgr->err_info.dw_err_mask = DSP_MMUFAULT;
- dev_err(bridge, "%s: %s, err_info = 0x%x\n",
- __func__, "DSP_MMUFAULT", dwErrInfo);
- dev_info(bridge, "%s: %s, high=0x%x, low=0x%x, "
- "fault=0x%x\n", __func__, "DSP_MMUFAULT",
- (unsigned int) deh_mgr->err_info.dw_val1,
- (unsigned int) deh_mgr->err_info.dw_val2,
- (unsigned int) fault_addr);
- dummy_va_addr = (void*)__get_free_page(GFP_ATOMIC);
- dev_context = (struct bridge_dev_context *)
- deh_mgr->hbridge_context;
-
- print_dsp_trace_buffer(dev_context);
- dump_dl_modules(dev_context);
-
- /*
- * Reset the dynamic mmu index to fixed count if it exceeds
- * 31. So that the dynmmuindex is always between the range of
- * standard/fixed entries and 31.
- */
- if (dev_context->num_tlb_entries >
- hw_mmu_max_tlb_count) {
- dev_context->num_tlb_entries =
- dev_context->fixed_tlb_entries;
- }
- dsp_iotlb_init(&e, fault_addr & PAGE_MASK,
- virt_to_phys(dummy_va_addr), IOVMF_PGSZ_4K);
- load_iotlb_entry(dev_context->dsp_mmu, &e);
-
-
- dsp_clk_enable(DSP_CLK_GPT8);
-
- dsp_gpt_wait_overflow(DSP_CLK_GPT8, 0xfffffffe);
-
- tmp = iommu_read_reg(dev_context->dsp_mmu, MMU_IRQSTATUS);
- iommu_write_reg(dev_context->dsp_mmu, tmp, MMU_IRQSTATUS);
-
- dump_dsp_stack(deh_mgr->hbridge_context);
- dsp_clk_disable(DSP_CLK_GPT8);
+		dev_err(bridge, "%s: %s, fault address = 0x%x\n",
+			__func__, "DSP_MMUFAULT", dwErrInfo);
break;
#ifdef CONFIG_BRIDGE_NTFY_PWRERR
case DSP_PWRERROR:
@@ -276,9 +220,3 @@ int bridge_deh_get_info(struct deh_mgr *deh_mgr,

return 0;
}
-
-void bridge_deh_release_dummy_mem(void)
-{
- free_page((unsigned long)dummy_va_addr);
- dummy_va_addr = NULL;
-}
diff --git a/drivers/dsp/bridge/rmgr/proc.c b/drivers/dsp/bridge/rmgr/proc.c
index 299bef3..4f10a41 100644
--- a/drivers/dsp/bridge/rmgr/proc.c
+++ b/drivers/dsp/bridge/rmgr/proc.c
@@ -1634,11 +1634,7 @@ int proc_stop(void *hprocessor)
status = -EFAULT;
goto func_end;
}
- if (DSP_SUCCEEDED((*p_proc_object->intf_fxns->pfn_brd_status)
- (p_proc_object->hbridge_context, &brd_state))) {
- if (brd_state == BRD_ERROR)
- bridge_deh_release_dummy_mem();
- }
+
/* check if there are any running nodes */
status = dev_get_node_manager(p_proc_object->hdev_obj, &hnode_mgr);
if (DSP_SUCCEEDED(status) && hnode_mgr) {
--
1.7.0.4

2010-06-30 23:51:09

by Fernando Guzman Lugo

Subject: [PATCHv3 6/9] dspbridge: remove hw directory

The hw directory was only being used by the custom iommu
implementation APIs, so after the migration to the iommu module
this directory is no longer needed.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
arch/arm/plat-omap/include/dspbridge/dspioctl.h | 7 -
drivers/dsp/bridge/Makefile | 3 +-
drivers/dsp/bridge/core/_tiomap.h | 1 -
drivers/dsp/bridge/core/io_sm.c | 4 -
drivers/dsp/bridge/core/mmu_fault.c | 4 -
drivers/dsp/bridge/core/tiomap3430.c | 22 +-
drivers/dsp/bridge/core/tiomap3430_pwr.c | 183 +------
drivers/dsp/bridge/core/tiomap_io.c | 5 +-
drivers/dsp/bridge/core/ue_deh.c | 4 -
drivers/dsp/bridge/hw/EasiGlobal.h | 41 --
drivers/dsp/bridge/hw/GlobalTypes.h | 308 ------------
drivers/dsp/bridge/hw/MMUAccInt.h | 76 ---
drivers/dsp/bridge/hw/MMURegAcM.h | 226 ---------
drivers/dsp/bridge/hw/hw_defs.h | 60 ---
drivers/dsp/bridge/hw/hw_mmu.c | 587 -----------------------
drivers/dsp/bridge/hw/hw_mmu.h | 161 -------
drivers/dsp/bridge/rmgr/node.c | 4 +-
17 files changed, 35 insertions(+), 1661 deletions(-)
delete mode 100644 drivers/dsp/bridge/hw/EasiGlobal.h
delete mode 100644 drivers/dsp/bridge/hw/GlobalTypes.h
delete mode 100644 drivers/dsp/bridge/hw/MMUAccInt.h
delete mode 100644 drivers/dsp/bridge/hw/MMURegAcM.h
delete mode 100644 drivers/dsp/bridge/hw/hw_defs.h
delete mode 100644 drivers/dsp/bridge/hw/hw_mmu.c
delete mode 100644 drivers/dsp/bridge/hw/hw_mmu.h

diff --git a/arch/arm/plat-omap/include/dspbridge/dspioctl.h b/arch/arm/plat-omap/include/dspbridge/dspioctl.h
index 41e0594..bad1801 100644
--- a/arch/arm/plat-omap/include/dspbridge/dspioctl.h
+++ b/arch/arm/plat-omap/include/dspbridge/dspioctl.h
@@ -19,10 +19,6 @@
#ifndef DSPIOCTL_
#define DSPIOCTL_

-/* ------------------------------------ Hardware Abstraction Layer */
-#include <hw_defs.h>
-#include <hw_mmu.h>
-
/*
* Any IOCTLS at or above this value are reserved for standard Bridge driver
* interfaces.
@@ -65,9 +61,6 @@ struct bridge_ioctl_extproc {
/* GPP virtual address. __va does not work for ioremapped addresses */
u32 ul_gpp_va;
u32 ul_size; /* Size of the mapped memory in bytes */
- enum hw_endianism_t endianism;
- enum hw_mmu_mixed_size_t mixed_mode;
- enum hw_element_size_t elem_size;
};

#endif /* DSPIOCTL_ */
diff --git a/drivers/dsp/bridge/Makefile b/drivers/dsp/bridge/Makefile
index 4c2f923..66ca10a 100644
--- a/drivers/dsp/bridge/Makefile
+++ b/drivers/dsp/bridge/Makefile
@@ -13,10 +13,9 @@ librmgr = rmgr/dbdcd.o rmgr/disp.o rmgr/drv.o rmgr/mgr.o rmgr/node.o \
rmgr/nldr.o rmgr/drv_interface.o
libdload = dynload/cload.o dynload/getsection.o dynload/reloc.o \
dynload/tramp.o
-libhw = hw/hw_mmu.o

bridgedriver-objs = $(libgen) $(libservices) $(libcore) $(libpmgr) $(librmgr) \
- $(libdload) $(libhw)
+ $(libdload)

#Machine dependent
ccflags-y += -D_TI_ -D_DB_TIOMAP -DTMS32060 \
diff --git a/drivers/dsp/bridge/core/_tiomap.h b/drivers/dsp/bridge/core/_tiomap.h
index c41fd8e..35f20a7 100644
--- a/drivers/dsp/bridge/core/_tiomap.h
+++ b/drivers/dsp/bridge/core/_tiomap.h
@@ -26,7 +26,6 @@
#include <plat/iommu.h>
#include <plat/iovmm.h>
#include <dspbridge/devdefs.h>
-#include <hw_defs.h>
#include <dspbridge/dspioctl.h> /* for bridge_ioctl_extproc defn */
#include <dspbridge/sync.h>
#include <dspbridge/clk.h>
diff --git a/drivers/dsp/bridge/core/io_sm.c b/drivers/dsp/bridge/core/io_sm.c
index aca9854..72d64cb 100644
--- a/drivers/dsp/bridge/core/io_sm.c
+++ b/drivers/dsp/bridge/core/io_sm.c
@@ -40,10 +40,6 @@
#include <dspbridge/ntfy.h>
#include <dspbridge/sync.h>

-/* Hardware Abstraction Layer */
-#include <hw_defs.h>
-#include <hw_mmu.h>
-
/* Bridge Driver */
#include <dspbridge/dspdeh.h>
#include <dspbridge/dspio.h>
diff --git a/drivers/dsp/bridge/core/mmu_fault.c b/drivers/dsp/bridge/core/mmu_fault.c
index d991c6a..54c0bc3 100644
--- a/drivers/dsp/bridge/core/mmu_fault.c
+++ b/drivers/dsp/bridge/core/mmu_fault.c
@@ -33,10 +33,6 @@
/* ----------------------------------- Link Driver */
#include <dspbridge/dspdeh.h>

-/* ------------------------------------ Hardware Abstraction Layer */
-#include <hw_defs.h>
-#include <hw_mmu.h>
-
/* ----------------------------------- This */
#include "_deh.h"
#include <dspbridge/cfg.h>
diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index 89867e7..9b6293b 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -34,10 +34,6 @@
#include <dspbridge/drv.h>
#include <dspbridge/sync.h>

-/* ------------------------------------ Hardware Abstraction Layer */
-#include <hw_defs.h>
-#include <hw_mmu.h>
-
/* ----------------------------------- Link Driver */
#include <dspbridge/dspdefs.h>
#include <dspbridge/dspchnl.h>
@@ -483,24 +479,18 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
dev_context->mbox->rxq->callback = (int (*)(void *))io_mbox_msg;

/*PM_IVA2GRPSEL_PER = 0xC0;*/
- temp = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) + 0xA8));
+ temp = __raw_readl(resources->dw_per_pm_base + 0xA8);
temp = (temp & 0xFFFFFF30) | 0xC0;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8)) =
- (u32) temp;
+ __raw_writel(temp, resources->dw_per_pm_base + 0xA8);

/*PM_MPUGRPSEL_PER &= 0xFFFFFF3F; */
- temp = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) + 0xA4));
+ temp = __raw_readl(resources->dw_per_pm_base + 0xA4);
temp = (temp & 0xFFFFFF3F);
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4)) =
- (u32) temp;
+ __raw_writel(temp, resources->dw_per_pm_base + 0xA4);
/*CM_SLEEPDEP_PER |= 0x04; */
- temp = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_base) + 0x44));
+	temp = __raw_readl(resources->dw_per_base + 0x44);
temp = (temp & 0xFFFFFFFB) | 0x04;
- *((reg_uword32 *) ((u32) (resources->dw_per_base) + 0x44)) =
- (u32) temp;
+	__raw_writel(temp, resources->dw_per_base + 0x44);

/*CM_CLKSTCTRL_IVA2 = 0x00000003 -To Allow automatic transitions */
(*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_ENABLE_AUTO,
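The hunk above swaps open-coded volatile pointer dereferences for __raw_readl()/__raw_writel(). A minimal userspace sketch of the same read-modify-write sequence, with the register modeled as a plain word and rmw() a hypothetical helper (the mask/value constants are from the PM_IVA2GRPSEL_PER update above):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the kernel accessors: __raw_readl()/__raw_writel()
 * are untranslated 32-bit loads/stores to an ioremapped address. */
static inline uint32_t raw_readl(const volatile void *addr)
{
	return *(const volatile uint32_t *)addr;
}

static inline void raw_writel(uint32_t val, volatile void *addr)
{
	*(volatile uint32_t *)addr = val;
}

/* Read-modify-write, as in the PM_IVA2GRPSEL_PER update:
 * keep the bits in keep_mask, then OR in the new field value. */
static uint32_t rmw(volatile uint32_t *reg, uint32_t keep_mask,
		    uint32_t set_bits)
{
	uint32_t temp = raw_readl(reg);

	temp = (temp & keep_mask) | set_bits;
	raw_writel(temp, reg);
	return temp;
}
```

The accessor form is shorter and avoids the double cast through `reg_uword32 *`, while keeping the load/store volatile.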
diff --git a/drivers/dsp/bridge/core/tiomap3430_pwr.c b/drivers/dsp/bridge/core/tiomap3430_pwr.c
index a45db99..6746fc5 100644
--- a/drivers/dsp/bridge/core/tiomap3430_pwr.c
+++ b/drivers/dsp/bridge/core/tiomap3430_pwr.c
@@ -27,10 +27,6 @@
#include <dspbridge/dev.h>
#include <dspbridge/iodefs.h>

-/* ------------------------------------ Hardware Abstraction Layer */
-#include <hw_defs.h>
-#include <hw_mmu.h>
-
#include <dspbridge/pwr_sh.h>

/* ----------------------------------- Bridge Driver */
@@ -412,7 +408,7 @@ void dsp_clk_wakeup_event_ctrl(u32 ClkId, bool enable)
struct cfg_hostres *resources;
int status = 0;
u32 iva2_grpsel;
- u32 mpu_grpsel;
+ u32 mpu_grpsel, mask;
struct dev_object *hdev_object = NULL;
struct bridge_dev_context *bridge_context = NULL;

@@ -430,175 +426,46 @@ void dsp_clk_wakeup_event_ctrl(u32 ClkId, bool enable)

switch (ClkId) {
case BPWR_GP_TIMER5:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_GPT5;
- mpu_grpsel &= ~OMAP3430_GRPSEL_GPT5;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_GPT5;
- iva2_grpsel &= ~OMAP3430_GRPSEL_GPT5;
- }
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
- = mpu_grpsel;
+ mask = OMAP3430_GRPSEL_GPT5;
break;
case BPWR_GP_TIMER6:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_GPT6;
- mpu_grpsel &= ~OMAP3430_GRPSEL_GPT6;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_GPT6;
- iva2_grpsel &= ~OMAP3430_GRPSEL_GPT6;
- }
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
- = mpu_grpsel;
+ mask = OMAP3430_GRPSEL_GPT6;
break;
case BPWR_GP_TIMER7:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_GPT7;
- mpu_grpsel &= ~OMAP3430_GRPSEL_GPT7;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_GPT7;
- iva2_grpsel &= ~OMAP3430_GRPSEL_GPT7;
- }
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
- = mpu_grpsel;
+ mask = OMAP3430_GRPSEL_GPT7;
break;
case BPWR_GP_TIMER8:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_GPT8;
- mpu_grpsel &= ~OMAP3430_GRPSEL_GPT8;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_GPT8;
- iva2_grpsel &= ~OMAP3430_GRPSEL_GPT8;
- }
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
- = mpu_grpsel;
+ mask = OMAP3430_GRPSEL_GPT8;
break;
case BPWR_MCBSP1:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_core_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_core_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_MCBSP1;
- mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP1;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_MCBSP1;
- iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP1;
- }
- *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA4))
- = mpu_grpsel;
+		mask = OMAP3430_GRPSEL_MCBSP1;
break;
case BPWR_MCBSP2:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_MCBSP2;
- mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP2;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_MCBSP2;
- iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP2;
- }
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
- = mpu_grpsel;
+		mask = OMAP3430_GRPSEL_MCBSP2;
break;
case BPWR_MCBSP3:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_MCBSP3;
- mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP3;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_MCBSP3;
- iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP3;
- }
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
- = mpu_grpsel;
+		mask = OMAP3430_GRPSEL_MCBSP3;
break;
case BPWR_MCBSP4:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_MCBSP4;
- mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP4;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_MCBSP4;
- iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP4;
- }
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
- = mpu_grpsel;
+		mask = OMAP3430_GRPSEL_MCBSP4;
break;
case BPWR_MCBSP5:
- iva2_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_core_pm_base) +
- 0xA8));
- mpu_grpsel = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_core_pm_base) +
- 0xA4));
- if (enable) {
- iva2_grpsel |= OMAP3430_GRPSEL_MCBSP5;
- mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP5;
- } else {
- mpu_grpsel |= OMAP3430_GRPSEL_MCBSP5;
- iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP5;
- }
- *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA8))
- = iva2_grpsel;
- *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA4))
- = mpu_grpsel;
+		mask = OMAP3430_GRPSEL_MCBSP5;
break;
+ default:
+ return;
+
}
+ iva2_grpsel = __raw_readl(resources->dw_per_pm_base + 0xA8);
+ mpu_grpsel = __raw_readl(resources->dw_per_pm_base + 0xA4);
+ if (enable) {
+ iva2_grpsel |= mask;
+ mpu_grpsel &= ~mask;
+ } else {
+ mpu_grpsel |= mask;
+ iva2_grpsel &= ~mask;
+ }
+ __raw_writel(iva2_grpsel, resources->dw_per_pm_base + 0xA8);
+ __raw_writel(mpu_grpsel, resources->dw_per_pm_base + 0xA4);
+
}
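The refactor above collapses eight nearly identical switch cases into one mask-driven tail: the same bit is set in one group-select register and cleared in the other. A userspace model of that tail, with grpsel_event_ctrl() a hypothetical stand-in and the registers plain words:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the consolidated tail of dsp_clk_wakeup_event_ctrl():
 * a single mask is moved between the IVA2 and MPU group-select
 * registers depending on whether the DSP (enable) or the MPU
 * (!enable) should own the wakeup event. */
static void grpsel_event_ctrl(uint32_t *iva2_grpsel, uint32_t *mpu_grpsel,
			      uint32_t mask, bool enable)
{
	if (enable) {
		*iva2_grpsel |= mask;
		*mpu_grpsel &= ~mask;
	} else {
		*mpu_grpsel |= mask;
		*iva2_grpsel &= ~mask;
	}
}
```

Because the mask is the only per-clock variation, each switch case shrinks to one assignment and the register traffic happens exactly once.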
diff --git a/drivers/dsp/bridge/core/tiomap_io.c b/drivers/dsp/bridge/core/tiomap_io.c
index c23ca66..3c0d3a3 100644
--- a/drivers/dsp/bridge/core/tiomap_io.c
+++ b/drivers/dsp/bridge/core/tiomap_io.c
@@ -142,7 +142,7 @@ int read_ext_dsp_data(struct bridge_dev_context *hDevContext,
ul_shm_base_virt - ul_tlb_base_virt;
ul_shm_offset_virt +=
PG_ALIGN_HIGH(ul_ext_end - ul_dyn_ext_base +
- 1, HW_PAGE_SIZE64KB);
+ 1, PAGE_SIZE * 16);
dw_ext_prog_virt_mem -= ul_shm_offset_virt;
dw_ext_prog_virt_mem +=
(ul_ext_base - ul_dyn_ext_base);
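HW_PAGE_SIZE64KB becomes PAGE_SIZE * 16, i.e. 64 KB assuming 4 KB pages. PG_ALIGN_HIGH rounds a size up to that power-of-two boundary; a small sketch of the idiom (pg_align_high() and PAGE_SIZE_MODEL are hypothetical userspace equivalents):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE_MODEL 4096u	/* 4 KB pages assumed */

/* Round addr up to the next multiple of the power-of-two
 * boundary pg_size (the dspbridge PG_ALIGN_HIGH idiom). */
static uint32_t pg_align_high(uint32_t addr, uint32_t pg_size)
{
	return (addr + pg_size - 1) & ~(pg_size - 1);
}
```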
@@ -394,7 +394,6 @@ int sm_interrupt_dsp(struct bridge_dev_context *dev_context, u16 mb_val)
omap_dspbridge_dev->dev.platform_data;
struct cfg_hostres *resources = dev_context->resources;
int status = 0;
- u32 temp;

if (!dev_context->mbox)
return 0;
@@ -438,7 +437,7 @@ int sm_interrupt_dsp(struct bridge_dev_context *dev_context, u16 mb_val)
omap_mbox_restore_ctx(dev_context->mbox);

/* Access MMU SYS CONFIG register to generate a short wakeup */
- temp = *(reg_uword32 *) (resources->dw_dmmu_base + 0x10);
+ __raw_readl(resources->dw_dmmu_base + 0x10);

dev_context->dw_brd_state = BRD_RUNNING;
} else if (dev_context->dw_brd_state == BRD_RETENTION) {
diff --git a/drivers/dsp/bridge/core/ue_deh.c b/drivers/dsp/bridge/core/ue_deh.c
index a03d172..72cc6c0 100644
--- a/drivers/dsp/bridge/core/ue_deh.c
+++ b/drivers/dsp/bridge/core/ue_deh.c
@@ -41,10 +41,6 @@
#include <dspbridge/dspapi.h>
#include <dspbridge/wdt.h>

-/* ------------------------------------ Hardware Abstraction Layer */
-#include <hw_defs.h>
-#include <hw_mmu.h>
-
/* ----------------------------------- This */
#include "mmu_fault.h"
#include "_tiomap.h"
diff --git a/drivers/dsp/bridge/hw/EasiGlobal.h b/drivers/dsp/bridge/hw/EasiGlobal.h
deleted file mode 100644
index 9b45aa7..0000000
--- a/drivers/dsp/bridge/hw/EasiGlobal.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * EasiGlobal.h
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * Copyright (C) 2007 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#ifndef _EASIGLOBAL_H
-#define _EASIGLOBAL_H
-#include <linux/types.h>
-
-/*
- * DEFINE: READ_ONLY, WRITE_ONLY & READ_WRITE
- *
- * DESCRIPTION: Defines used to describe register types for EASI-checker tests.
- */
-
-#define READ_ONLY 1
-#define WRITE_ONLY 2
-#define READ_WRITE 3
-
-/*
- * MACRO: _DEBUG_LEVEL1_EASI
- *
- * DESCRIPTION: A macro which can be used to indicate that a particular
- * register access function was called.
- *
- * NOTE: We currently don't use this functionality.
- */
-#define _DEBUG_LEVEL1_EASI(easiNum) ((void)0)
-
-#endif /* _EASIGLOBAL_H */
diff --git a/drivers/dsp/bridge/hw/GlobalTypes.h b/drivers/dsp/bridge/hw/GlobalTypes.h
deleted file mode 100644
index 9b55150..0000000
--- a/drivers/dsp/bridge/hw/GlobalTypes.h
+++ /dev/null
@@ -1,308 +0,0 @@
-/*
- * GlobalTypes.h
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * Global HW definitions
- *
- * Copyright (C) 2007 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#ifndef _GLOBALTYPES_H
-#define _GLOBALTYPES_H
-
-/*
- * Definition: TRUE, FALSE
- *
- * DESCRIPTION: Boolean Definitions
- */
-#ifndef TRUE
-#define FALSE 0
-#define TRUE (!(FALSE))
-#endif
-
-/*
- * Definition: NULL
- *
- * DESCRIPTION: Invalid pointer
- */
-#ifndef NULL
-#define NULL (void *)0
-#endif
-
-/*
- * Definition: RET_CODE_BASE
- *
- * DESCRIPTION: Base value for return code offsets
- */
-#define RET_CODE_BASE 0
-
-/*
- * Definition: *BIT_OFFSET
- *
- * DESCRIPTION: offset in bytes from start of 32-bit word.
- */
-#define LOWER16BIT_OFFSET 0
-#define UPPER16BIT_OFFSET 2
-
-#define LOWER8BIT_OFFSET 0
-#define LOWER_MIDDLE8BIT_OFFSET 1
-#define UPPER_MIDDLE8BIT_OFFSET 2
-#define UPPER8BIT_OFFSET 3
-
-#define LOWER8BIT_OF16_OFFSET 0
-#define UPPER8BIT_OF16_OFFSET 1
-
-/*
- * Definition: *BIT_SHIFT
- *
- * DESCRIPTION: offset in bits from start of 32-bit word.
- */
-#define LOWER16BIT_SHIFT 0
-#define UPPER16BIT_SHIFT 16
-
-#define LOWER8BIT_SHIFT 0
-#define LOWER_MIDDLE8BIT_SHIFT 8
-#define UPPER_MIDDLE8BIT_SHIFT 16
-#define UPPER8BIT_SHIFT 24
-
-#define LOWER8BIT_OF16_SHIFT 0
-#define UPPER8BIT_OF16_SHIFT 8
-
-/*
- * Definition: LOWER16BIT_MASK
- *
- * DESCRIPTION: 16 bit mask used for inclusion of lower 16 bits i.e. mask out
- * the upper 16 bits
- */
-#define LOWER16BIT_MASK 0x0000FFFF
-
-/*
- * Definition: LOWER8BIT_MASK
- *
- * DESCRIPTION: 8 bit mask used for inclusion of the lower 8 bits i.e. mask
- * out the upper 24 bits
- */
-#define LOWER8BIT_MASK 0x000000FF
-
-/*
- * Definition: RETURN32BITS_FROM16LOWER_AND16UPPER(lower16Bits, upper16Bits)
- *
- * DESCRIPTION: Returns a 32 bit value given a 16 bit lower value and a 16
- * bit upper value
- */
-#define RETURN32BITS_FROM16LOWER_AND16UPPER(lower16Bits, upper16Bits)\
- (((((u32)lower16Bits) & LOWER16BIT_MASK)) | \
- (((((u32)upper16Bits) & LOWER16BIT_MASK) << UPPER16BIT_SHIFT)))
-
-/*
- * Definition: RETURN16BITS_FROM8LOWER_AND8UPPER(lower8Bits, upper8Bits)
- *
- * DESCRIPTION: Returns a 16 bit value given a 8 bit lower value and a 8
- * bit upper value
- */
-#define RETURN16BITS_FROM8LOWER_AND8UPPER(lower8Bits, upper8Bits)\
- (((((u32)lower8Bits) & LOWER8BIT_MASK)) | \
- (((((u32)upper8Bits) & LOWER8BIT_MASK) << UPPER8BIT_OF16_SHIFT)))
-
-/*
- * Definition: RETURN32BITS_FROM48BIT_VALUES(lower8Bits, lowerMiddle8Bits,
- * lowerUpper8Bits, upper8Bits)
- *
- * DESCRIPTION: Returns a 32 bit value given four 8 bit values
- */
-#define RETURN32BITS_FROM48BIT_VALUES(lower8Bits, lowerMiddle8Bits,\
- lowerUpper8Bits, upper8Bits)\
- (((((u32)lower8Bits) & LOWER8BIT_MASK)) | \
- (((((u32)lowerMiddle8Bits) & LOWER8BIT_MASK) <<\
- LOWER_MIDDLE8BIT_SHIFT)) | \
- (((((u32)lowerUpper8Bits) & LOWER8BIT_MASK) <<\
- UPPER_MIDDLE8BIT_SHIFT)) | \
- (((((u32)upper8Bits) & LOWER8BIT_MASK) <<\
- UPPER8BIT_SHIFT)))
-
-/*
- * Definition: READ_LOWER16BITS_OF32(value32bits)
- *
- * DESCRIPTION: Returns the lower 16 bits of a 32-bit value
- */
-#define READ_LOWER16BITS_OF32(value32bits)\
- ((u16)((u32)(value32bits) & LOWER16BIT_MASK))
-
-/*
- * Definition: READ_UPPER16BITS_OF32(value32bits)
- *
- * DESCRIPTION: Returns the upper 16 bits of a 32-bit value
- */
-#define READ_UPPER16BITS_OF32(value32bits)\
- (((u16)((u32)(value32bits) >> UPPER16BIT_SHIFT)) &\
- LOWER16BIT_MASK)
-
-/*
- * Definition: READ_LOWER8BITS_OF32(value32bits)
- *
- * DESCRIPTION: Returns the lower 8 bits of a 32-bit value
- */
-#define READ_LOWER8BITS_OF32(value32bits)\
- ((u8)((u32)(value32bits) & LOWER8BIT_MASK))
-
-/*
- * Definition: READ_LOWER_MIDDLE8BITS_OF32(value32bits)
- *
- * DESCRIPTION: Returns the lower middle 8 bits of a 32-bit value
- */
-#define READ_LOWER_MIDDLE8BITS_OF32(value32bits)\
- (((u8)((u32)(value32bits) >> LOWER_MIDDLE8BIT_SHIFT)) &\
- LOWER8BIT_MASK)
-
-/*
- * Definition: READ_UPPER_MIDDLE8BITS_OF32(value32bits)
- *
- * DESCRIPTION: Returns the upper middle 8 bits of a 32-bit value
- */
-#define READ_UPPER_MIDDLE8BITS_OF32(value32bits)\
-	(((u8)((u32)(value32bits) >> UPPER_MIDDLE8BIT_SHIFT)) &\
-	LOWER8BIT_MASK)
-
-/*
- * Definition: READ_UPPER8BITS_OF32(value32bits)
- *
- * DESCRIPTION: Returns the upper 8 bits of a 32-bit value
- */
-#define READ_UPPER8BITS_OF32(value32bits)\
- (((u8)((u32)(value32bits) >> UPPER8BIT_SHIFT)) & LOWER8BIT_MASK)
-
-/*
- * Definition: READ_LOWER8BITS_OF16(value16bits)
- *
- * DESCRIPTION: Returns the lower 8 bits of a 16-bit value
- */
-#define READ_LOWER8BITS_OF16(value16bits)\
- ((u8)((u16)(value16bits) & LOWER8BIT_MASK))
-
-/*
- * Definition: READ_UPPER8BITS_OF16(value16bits)
- *
- * DESCRIPTION: Returns the upper 8 bits of a 16-bit value
- */
-#define READ_UPPER8BITS_OF16(value16bits)\
-	(((u8)((u32)(value16bits) >> UPPER8BIT_OF16_SHIFT)) & LOWER8BIT_MASK)
-
-/* UWORD16: 16 bit types */
-
-/* reg_uword8, reg_word8: 8 bit register types */
-typedef volatile unsigned char reg_uword8;
-typedef volatile signed char reg_word8;
-
-/* reg_uword16, reg_word16: 16 bit register types */
-#ifndef OMAPBRIDGE_TYPES
-typedef volatile unsigned short reg_uword16;
-#endif
-typedef volatile short reg_word16;
-
-/* reg_uword32, REG_WORD32: 32 bit register types */
-typedef volatile unsigned long reg_uword32;
-
-/* FLOAT
- *
- * Type to be used for floating point calculation. Note that floating point
- * calculation is very CPU expensive, and you should only use if you
- * absolutely need this. */
-
-/* boolean_t: Boolean Type True, False */
-/* return_code_t: Return codes to be returned by all library functions */
-enum return_code_label {
- RET_OK = 0,
- RET_FAIL = -1,
- RET_BAD_NULL_PARAM = -2,
- RET_PARAM_OUT_OF_RANGE = -3,
- RET_INVALID_ID = -4,
- RET_EMPTY = -5,
- RET_FULL = -6,
- RET_TIMEOUT = -7,
- RET_INVALID_OPERATION = -8,
-
- /* Add new error codes at end of above list */
-
- RET_NUM_RET_CODES /* this should ALWAYS be LAST entry */
-};
-
-/* MACRO: RD_MEM8, WR_MEM8
- *
- * DESCRIPTION: 8 bit memory access macros
- */
-#define RD_MEM8(addr) ((u8)(*((u8 *)(addr))))
-#define WR_MEM8(addr, data) (*((u8 *)(addr)) = (u8)(data))
-
-/* MACRO: RD_MEM8_VOLATILE, WR_MEM8_VOLATILE
- *
- * DESCRIPTION: 8 bit register access macros
- */
-#define RD_MEM8_VOLATILE(addr) ((u8)(*((reg_uword8 *)(addr))))
-#define WR_MEM8_VOLATILE(addr, data) (*((reg_uword8 *)(addr)) = (u8)(data))
-
-/*
- * MACRO: RD_MEM16, WR_MEM16
- *
- * DESCRIPTION: 16 bit memory access macros
- */
-#define RD_MEM16(addr) ((u16)(*((u16 *)(addr))))
-#define WR_MEM16(addr, data) (*((u16 *)(addr)) = (u16)(data))
-
-/*
- * MACRO: RD_MEM16_VOLATILE, WR_MEM16_VOLATILE
- *
- * DESCRIPTION: 16 bit register access macros
- */
-#define RD_MEM16_VOLATILE(addr) ((u16)(*((reg_uword16 *)(addr))))
-#define WR_MEM16_VOLATILE(addr, data) (*((reg_uword16 *)(addr)) =\
- (u16)(data))
-
-/*
- * MACRO: RD_MEM32, WR_MEM32
- *
- * DESCRIPTION: 32 bit memory access macros
- */
-#define RD_MEM32(addr) ((u32)(*((u32 *)(addr))))
-#define WR_MEM32(addr, data) (*((u32 *)(addr)) = (u32)(data))
-
-/*
- * MACRO: RD_MEM32_VOLATILE, WR_MEM32_VOLATILE
- *
- * DESCRIPTION: 32 bit register access macros
- */
-#define RD_MEM32_VOLATILE(addr) ((u32)(*((reg_uword32 *)(addr))))
-#define WR_MEM32_VOLATILE(addr, data) (*((reg_uword32 *)(addr)) =\
- (u32)(data))
-
-/* Not sure if this all belongs here */
-
-#define CHECK_RETURN_VALUE(actualValue, expectedValue, returnCodeIfMismatch,\
- spyCodeIfMisMatch)
-#define CHECK_RETURN_VALUE_RET(actualValue, expectedValue, returnCodeIfMismatch)
-#define CHECK_RETURN_VALUE_RES(actualValue, expectedValue, spyCodeIfMisMatch)
-#define CHECK_RETURN_VALUE_RET_VOID(actualValue, expectedValue,\
- spyCodeIfMisMatch)
-
-#define CHECK_INPUT_PARAM(actualValue, invalidValue, returnCodeIfMismatch,\
- spyCodeIfMisMatch)
-#define CHECK_INPUT_PARAM_NO_SPY(actualValue, invalidValue,\
- returnCodeIfMismatch)
-#define CHECK_INPUT_RANGE(actualValue, minValidValue, maxValidValue,\
- returnCodeIfMismatch, spyCodeIfMisMatch)
-#define CHECK_INPUT_RANGE_NO_SPY(actualValue, minValidValue, maxValidValue,\
- returnCodeIfMismatch)
-#define CHECK_INPUT_RANGE_MIN0(actualValue, maxValidValue,\
- returnCodeIfMismatch, spyCodeIfMisMatch)
-#define CHECK_INPUT_RANGE_NO_SPY_MIN0(actualValue, maxValidValue,\
- returnCodeIfMismatch)
-
-#endif /* _GLOBALTYPES_H */
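The deleted GlobalTypes.h helpers above are plain shift-and-mask packing. A standalone check of the 16+16 and four-byte combiners (from16s()/from8s() are hypothetical function forms of the deleted RETURN32BITS_* macros):

```c
#include <assert.h>
#include <stdint.h>

/* Function form of RETURN32BITS_FROM16LOWER_AND16UPPER(). */
static uint32_t from16s(uint16_t lower, uint16_t upper)
{
	return ((uint32_t)lower & 0xFFFF) |
	       (((uint32_t)upper & 0xFFFF) << 16);
}

/* Function form of RETURN32BITS_FROM48BIT_VALUES(): bytes packed
 * from least significant (b0) to most significant (b3). */
static uint32_t from8s(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
{
	return (uint32_t)b0 | ((uint32_t)b1 << 8) |
	       ((uint32_t)b2 << 16) | ((uint32_t)b3 << 24);
}
```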
diff --git a/drivers/dsp/bridge/hw/MMUAccInt.h b/drivers/dsp/bridge/hw/MMUAccInt.h
deleted file mode 100644
index 1cefca3..0000000
--- a/drivers/dsp/bridge/hw/MMUAccInt.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * MMUAccInt.h
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * Copyright (C) 2007 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#ifndef _MMU_ACC_INT_H
-#define _MMU_ACC_INT_H
-
-/* Mappings of level 1 EASI function numbers to function names */
-
-#define EASIL1_MMUMMU_SYSCONFIG_READ_REGISTER32 (MMU_BASE_EASIL1 + 3)
-#define EASIL1_MMUMMU_SYSCONFIG_IDLE_MODE_WRITE32 (MMU_BASE_EASIL1 + 17)
-#define EASIL1_MMUMMU_SYSCONFIG_AUTO_IDLE_WRITE32 (MMU_BASE_EASIL1 + 39)
-#define EASIL1_MMUMMU_IRQSTATUS_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 51)
-#define EASIL1_MMUMMU_IRQENABLE_READ_REGISTER32 (MMU_BASE_EASIL1 + 102)
-#define EASIL1_MMUMMU_IRQENABLE_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 103)
-#define EASIL1_MMUMMU_WALKING_STTWL_RUNNING_READ32 (MMU_BASE_EASIL1 + 156)
-#define EASIL1_MMUMMU_CNTLTWL_ENABLE_READ32 (MMU_BASE_EASIL1 + 174)
-#define EASIL1_MMUMMU_CNTLTWL_ENABLE_WRITE32 (MMU_BASE_EASIL1 + 180)
-#define EASIL1_MMUMMU_CNTLMMU_ENABLE_WRITE32 (MMU_BASE_EASIL1 + 190)
-#define EASIL1_MMUMMU_FAULT_AD_READ_REGISTER32 (MMU_BASE_EASIL1 + 194)
-#define EASIL1_MMUMMU_TTB_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 198)
-#define EASIL1_MMUMMU_LOCK_READ_REGISTER32 (MMU_BASE_EASIL1 + 203)
-#define EASIL1_MMUMMU_LOCK_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 204)
-#define EASIL1_MMUMMU_LOCK_BASE_VALUE_READ32 (MMU_BASE_EASIL1 + 205)
-#define EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_READ32 (MMU_BASE_EASIL1 + 209)
-#define EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_WRITE32 (MMU_BASE_EASIL1 + 211)
-#define EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_SET32 (MMU_BASE_EASIL1 + 212)
-#define EASIL1_MMUMMU_LD_TLB_READ_REGISTER32 (MMU_BASE_EASIL1 + 213)
-#define EASIL1_MMUMMU_LD_TLB_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 214)
-#define EASIL1_MMUMMU_CAM_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 226)
-#define EASIL1_MMUMMU_RAM_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 268)
-#define EASIL1_MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 322)
-
-/* Register offset address definitions */
-#define MMU_MMU_SYSCONFIG_OFFSET 0x10
-#define MMU_MMU_IRQSTATUS_OFFSET 0x18
-#define MMU_MMU_IRQENABLE_OFFSET 0x1c
-#define MMU_MMU_WALKING_ST_OFFSET 0x40
-#define MMU_MMU_CNTL_OFFSET 0x44
-#define MMU_MMU_FAULT_AD_OFFSET 0x48
-#define MMU_MMU_TTB_OFFSET 0x4c
-#define MMU_MMU_LOCK_OFFSET 0x50
-#define MMU_MMU_LD_TLB_OFFSET 0x54
-#define MMU_MMU_CAM_OFFSET 0x58
-#define MMU_MMU_RAM_OFFSET 0x5c
-#define MMU_MMU_GFLUSH_OFFSET 0x60
-#define MMU_MMU_FLUSH_ENTRY_OFFSET 0x64
-/* Bitfield mask and offset declarations */
-#define MMU_MMU_SYSCONFIG_IDLE_MODE_MASK 0x18
-#define MMU_MMU_SYSCONFIG_IDLE_MODE_OFFSET 3
-#define MMU_MMU_SYSCONFIG_AUTO_IDLE_MASK 0x1
-#define MMU_MMU_SYSCONFIG_AUTO_IDLE_OFFSET 0
-#define MMU_MMU_WALKING_ST_TWL_RUNNING_MASK 0x1
-#define MMU_MMU_WALKING_ST_TWL_RUNNING_OFFSET 0
-#define MMU_MMU_CNTL_TWL_ENABLE_MASK 0x4
-#define MMU_MMU_CNTL_TWL_ENABLE_OFFSET 2
-#define MMU_MMU_CNTL_MMU_ENABLE_MASK 0x2
-#define MMU_MMU_CNTL_MMU_ENABLE_OFFSET 1
-#define MMU_MMU_LOCK_BASE_VALUE_MASK 0xfc00
-#define MMU_MMU_LOCK_BASE_VALUE_OFFSET 10
-#define MMU_MMU_LOCK_CURRENT_VICTIM_MASK 0x3f0
-#define MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET 4
-
-#endif /* _MMU_ACC_INT_H */
diff --git a/drivers/dsp/bridge/hw/MMURegAcM.h b/drivers/dsp/bridge/hw/MMURegAcM.h
deleted file mode 100644
index 8c0c549..0000000
--- a/drivers/dsp/bridge/hw/MMURegAcM.h
+++ /dev/null
@@ -1,226 +0,0 @@
-/*
- * MMURegAcM.h
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * Copyright (C) 2007 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#ifndef _MMU_REG_ACM_H
-#define _MMU_REG_ACM_H
-
-#include <GlobalTypes.h>
-#include <linux/io.h>
-#include <EasiGlobal.h>
-
-#include "MMUAccInt.h"
-
-#if defined(USE_LEVEL_1_MACROS)
-
-#define MMUMMU_SYSCONFIG_READ_REGISTER32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_SYSCONFIG_READ_REGISTER32),\
- __raw_readl((baseAddress)+MMU_MMU_SYSCONFIG_OFFSET))
-
-#define MMUMMU_SYSCONFIG_IDLE_MODE_WRITE32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_SYSCONFIG_OFFSET;\
- register u32 data = __raw_readl((baseAddress)+offset);\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_SYSCONFIG_IDLE_MODE_WRITE32);\
- data &= ~(MMU_MMU_SYSCONFIG_IDLE_MODE_MASK);\
- newValue <<= MMU_MMU_SYSCONFIG_IDLE_MODE_OFFSET;\
- newValue &= MMU_MMU_SYSCONFIG_IDLE_MODE_MASK;\
- newValue |= data;\
- __raw_writel(newValue, baseAddress+offset);\
-}
-
-#define MMUMMU_SYSCONFIG_AUTO_IDLE_WRITE32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_SYSCONFIG_OFFSET;\
- register u32 data = __raw_readl((baseAddress)+offset);\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_SYSCONFIG_AUTO_IDLE_WRITE32);\
- data &= ~(MMU_MMU_SYSCONFIG_AUTO_IDLE_MASK);\
- newValue <<= MMU_MMU_SYSCONFIG_AUTO_IDLE_OFFSET;\
- newValue &= MMU_MMU_SYSCONFIG_AUTO_IDLE_MASK;\
- newValue |= data;\
- __raw_writel(newValue, baseAddress+offset);\
-}
-
-#define MMUMMU_IRQSTATUS_READ_REGISTER32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQSTATUSReadRegister32),\
- __raw_readl((baseAddress)+MMU_MMU_IRQSTATUS_OFFSET))
-
-#define MMUMMU_IRQSTATUS_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_IRQSTATUS_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQSTATUS_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#define MMUMMU_IRQENABLE_READ_REGISTER32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQENABLE_READ_REGISTER32),\
- __raw_readl((baseAddress)+MMU_MMU_IRQENABLE_OFFSET))
-
-#define MMUMMU_IRQENABLE_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_IRQENABLE_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQENABLE_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#define MMUMMU_WALKING_STTWL_RUNNING_READ32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_WALKING_STTWL_RUNNING_READ32),\
- (((__raw_readl(((baseAddress)+(MMU_MMU_WALKING_ST_OFFSET))))\
- & MMU_MMU_WALKING_ST_TWL_RUNNING_MASK) >>\
- MMU_MMU_WALKING_ST_TWL_RUNNING_OFFSET))
-
-#define MMUMMU_CNTLTWL_ENABLE_READ32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CNTLTWL_ENABLE_READ32),\
- (((__raw_readl(((baseAddress)+(MMU_MMU_CNTL_OFFSET)))) &\
- MMU_MMU_CNTL_TWL_ENABLE_MASK) >>\
- MMU_MMU_CNTL_TWL_ENABLE_OFFSET))
-
-#define MMUMMU_CNTLTWL_ENABLE_WRITE32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_CNTL_OFFSET;\
- register u32 data = __raw_readl((baseAddress)+offset);\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CNTLTWL_ENABLE_WRITE32);\
- data &= ~(MMU_MMU_CNTL_TWL_ENABLE_MASK);\
- newValue <<= MMU_MMU_CNTL_TWL_ENABLE_OFFSET;\
- newValue &= MMU_MMU_CNTL_TWL_ENABLE_MASK;\
- newValue |= data;\
- __raw_writel(newValue, baseAddress+offset);\
-}
-
-#define MMUMMU_CNTLMMU_ENABLE_WRITE32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_CNTL_OFFSET;\
- register u32 data = __raw_readl((baseAddress)+offset);\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CNTLMMU_ENABLE_WRITE32);\
- data &= ~(MMU_MMU_CNTL_MMU_ENABLE_MASK);\
- newValue <<= MMU_MMU_CNTL_MMU_ENABLE_OFFSET;\
- newValue &= MMU_MMU_CNTL_MMU_ENABLE_MASK;\
- newValue |= data;\
- __raw_writel(newValue, baseAddress+offset);\
-}
-
-#define MMUMMU_FAULT_AD_READ_REGISTER32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_FAULT_AD_READ_REGISTER32),\
- __raw_readl((baseAddress)+MMU_MMU_FAULT_AD_OFFSET))
-
-#define MMUMMU_TTB_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_TTB_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_TTB_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#define MMUMMU_LOCK_READ_REGISTER32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_READ_REGISTER32),\
- __raw_readl((baseAddress)+MMU_MMU_LOCK_OFFSET))
-
-#define MMUMMU_LOCK_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_LOCK_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#define MMUMMU_LOCK_BASE_VALUE_READ32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_BASE_VALUE_READ32),\
- (((__raw_readl(((baseAddress)+(MMU_MMU_LOCK_OFFSET)))) &\
- MMU_MMU_LOCK_BASE_VALUE_MASK) >>\
- MMU_MMU_LOCK_BASE_VALUE_OFFSET))
-
-#define MMUMMU_LOCK_BASE_VALUE_WRITE32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_LOCK_OFFSET;\
- register u32 data = __raw_readl((baseAddress)+offset);\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCKBaseValueWrite32);\
- data &= ~(MMU_MMU_LOCK_BASE_VALUE_MASK);\
- newValue <<= MMU_MMU_LOCK_BASE_VALUE_OFFSET;\
- newValue &= MMU_MMU_LOCK_BASE_VALUE_MASK;\
- newValue |= data;\
- __raw_writel(newValue, baseAddress+offset);\
-}
-
-#define MMUMMU_LOCK_CURRENT_VICTIM_READ32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_READ32),\
- (((__raw_readl(((baseAddress)+(MMU_MMU_LOCK_OFFSET)))) &\
- MMU_MMU_LOCK_CURRENT_VICTIM_MASK) >>\
- MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET))
-
-#define MMUMMU_LOCK_CURRENT_VICTIM_WRITE32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_LOCK_OFFSET;\
- register u32 data = __raw_readl((baseAddress)+offset);\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_WRITE32);\
- data &= ~(MMU_MMU_LOCK_CURRENT_VICTIM_MASK);\
- newValue <<= MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET;\
- newValue &= MMU_MMU_LOCK_CURRENT_VICTIM_MASK;\
- newValue |= data;\
- __raw_writel(newValue, baseAddress+offset);\
-}
-
-#define MMUMMU_LOCK_CURRENT_VICTIM_SET32(var, value)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_SET32),\
- (((var) & ~(MMU_MMU_LOCK_CURRENT_VICTIM_MASK)) |\
- (((value) << MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET) &\
- MMU_MMU_LOCK_CURRENT_VICTIM_MASK)))
-
-#define MMUMMU_LD_TLB_READ_REGISTER32(baseAddress)\
- (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LD_TLB_READ_REGISTER32),\
- __raw_readl((baseAddress)+MMU_MMU_LD_TLB_OFFSET))
-
-#define MMUMMU_LD_TLB_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_LD_TLB_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LD_TLB_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#define MMUMMU_CAM_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_CAM_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CAM_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#define MMUMMU_RAM_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_RAM_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_RAM_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#define MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32(baseAddress, value)\
-{\
- const u32 offset = MMU_MMU_FLUSH_ENTRY_OFFSET;\
- register u32 newValue = (value);\
- _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32);\
- __raw_writel(newValue, (baseAddress)+offset);\
-}
-
-#endif /* USE_LEVEL_1_MACROS */
-
-#endif /* _MMU_REG_ACM_H */
diff --git a/drivers/dsp/bridge/hw/hw_defs.h b/drivers/dsp/bridge/hw/hw_defs.h
deleted file mode 100644
index 98f6045..0000000
--- a/drivers/dsp/bridge/hw/hw_defs.h
+++ /dev/null
@@ -1,60 +0,0 @@
-/*
- * hw_defs.h
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * Global HW definitions
- *
- * Copyright (C) 2007 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#ifndef _HW_DEFS_H
-#define _HW_DEFS_H
-
-#include <GlobalTypes.h>
-
-/* Page size */
-#define HW_PAGE_SIZE4KB 0x1000
-#define HW_PAGE_SIZE64KB 0x10000
-#define HW_PAGE_SIZE1MB 0x100000
-#define HW_PAGE_SIZE16MB 0x1000000
-
-/* hw_status: return type for HW API */
-typedef long hw_status;
-
-/* Macro used to set and clear any bit */
-#define HW_CLEAR 0
-#define HW_SET 1
-
-/* hw_endianism_t: Enumerated Type used to specify the endianism
- * Do NOT change these values. They are used as bit fields. */
-enum hw_endianism_t {
- HW_LITTLE_ENDIAN,
- HW_BIG_ENDIAN
-};
-
-/* hw_element_size_t: Enumerated Type used to specify the element size
- * Do NOT change these values. They are used as bit fields. */
-enum hw_element_size_t {
- HW_ELEM_SIZE8BIT,
- HW_ELEM_SIZE16BIT,
- HW_ELEM_SIZE32BIT,
- HW_ELEM_SIZE64BIT
-};
-
-/* hw_idle_mode_t: Enumerated Type used to specify Idle modes */
-enum hw_idle_mode_t {
- HW_FORCE_IDLE,
- HW_NO_IDLE,
- HW_SMART_IDLE
-};
-
-#endif /* _HW_DEFS_H */
diff --git a/drivers/dsp/bridge/hw/hw_mmu.c b/drivers/dsp/bridge/hw/hw_mmu.c
deleted file mode 100644
index 965b659..0000000
--- a/drivers/dsp/bridge/hw/hw_mmu.c
+++ /dev/null
@@ -1,587 +0,0 @@
-/*
- * hw_mmu.c
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * API definitions to setup MMU TLB and PTE
- *
- * Copyright (C) 2007 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#include <GlobalTypes.h>
-#include <linux/io.h>
-#include "MMURegAcM.h"
-#include <hw_defs.h>
-#include <hw_mmu.h>
-#include <linux/types.h>
-
-#define MMU_BASE_VAL_MASK 0xFC00
-#define MMU_PAGE_MAX 3
-#define MMU_ELEMENTSIZE_MAX 3
-#define MMU_ADDR_MASK 0xFFFFF000
-#define MMU_TTB_MASK 0xFFFFC000
-#define MMU_SECTION_ADDR_MASK 0xFFF00000
-#define MMU_SSECTION_ADDR_MASK 0xFF000000
-#define MMU_PAGE_TABLE_MASK 0xFFFFFC00
-#define MMU_LARGE_PAGE_MASK 0xFFFF0000
-#define MMU_SMALL_PAGE_MASK 0xFFFFF000
-
-#define MMU_LOAD_TLB 0x00000001
-
-/*
- * hw_mmu_page_size_t: Enumerated Type used to specify the MMU Page Size(SLSS)
- */
-enum hw_mmu_page_size_t {
- HW_MMU_SECTION,
- HW_MMU_LARGE_PAGE,
- HW_MMU_SMALL_PAGE,
- HW_MMU_SUPERSECTION
-};
-
-/*
- * FUNCTION : mmu_flush_entry
- *
- * INPUTS:
- *
- * Identifier : baseAddress
- * Type : const u32
- * Description : Base Address of instance of MMU module
- *
- * RETURNS:
- *
- * Type : hw_status
- * Description : RET_OK -- No errors occured
- * RET_BAD_NULL_PARAM -- A Pointer
- * Paramater was set to NULL
- *
- * PURPOSE: : Flush the TLB entry pointed by the
- * lock counter register
- * even if this entry is set protected
- *
- * METHOD: : Check the Input parameter and Flush a
- * single entry in the TLB.
- */
-static hw_status mmu_flush_entry(const void __iomem *baseAddress);
-
-/*
- * FUNCTION : mmu_set_cam_entry
- *
- * INPUTS:
- *
- * Identifier : baseAddress
- * TypE : const u32
- * Description : Base Address of instance of MMU module
- *
- * Identifier : pageSize
- * TypE : const u32
- * Description : It indicates the page size
- *
- * Identifier : preservedBit
- * Type : const u32
- * Description : It indicates the TLB entry is preserved entry
- * or not
- *
- * Identifier : validBit
- * Type : const u32
- * Description : It indicates the TLB entry is valid entry or not
- *
- *
- * Identifier : virtual_addr_tag
- * Type : const u32
- * Description : virtual Address
- *
- * RETURNS:
- *
- * Type : hw_status
- * Description : RET_OK -- No errors occured
- * RET_BAD_NULL_PARAM -- A Pointer Paramater
- * was set to NULL
- * RET_PARAM_OUT_OF_RANGE -- Input Parameter out
- * of Range
- *
- * PURPOSE: : Set MMU_CAM reg
- *
- * METHOD: : Check the Input parameters and set the CAM entry.
- */
-static hw_status mmu_set_cam_entry(const void __iomem *baseAddress,
- const u32 pageSize,
- const u32 preservedBit,
- const u32 validBit,
- const u32 virtual_addr_tag);
-
-/*
- * FUNCTION : mmu_set_ram_entry
- *
- * INPUTS:
- *
- * Identifier : baseAddress
- * Type : const u32
- * Description : Base Address of instance of MMU module
- *
- * Identifier : physicalAddr
- * Type : const u32
- * Description : Physical Address to which the corresponding
- * virtual Address shouldpoint
- *
- * Identifier : endianism
- * Type : hw_endianism_t
- * Description : endianism for the given page
- *
- * Identifier : element_size
- * Type : hw_element_size_t
- * Description : The element size ( 8,16, 32 or 64 bit)
- *
- * Identifier : mixed_size
- * Type : hw_mmu_mixed_size_t
- * Description : Element Size to follow CPU or TLB
- *
- * RETURNS:
- *
- * Type : hw_status
- * Description : RET_OK -- No errors occured
- * RET_BAD_NULL_PARAM -- A Pointer Paramater
- * was set to NULL
- * RET_PARAM_OUT_OF_RANGE -- Input Parameter
- * out of Range
- *
- * PURPOSE: : Set MMU_CAM reg
- *
- * METHOD: : Check the Input parameters and set the RAM entry.
- */
-static hw_status mmu_set_ram_entry(const void __iomem *baseAddress,
- const u32 physicalAddr,
- enum hw_endianism_t endianism,
- enum hw_element_size_t element_size,
- enum hw_mmu_mixed_size_t mixed_size);
-
-/* HW FUNCTIONS */
-
-hw_status hw_mmu_enable(const void __iomem *baseAddress)
-{
- hw_status status = RET_OK;
-
- MMUMMU_CNTLMMU_ENABLE_WRITE32(baseAddress, HW_SET);
-
- return status;
-}
-
-hw_status hw_mmu_disable(const void __iomem *baseAddress)
-{
- hw_status status = RET_OK;
-
- MMUMMU_CNTLMMU_ENABLE_WRITE32(baseAddress, HW_CLEAR);
-
- return status;
-}
-
-hw_status hw_mmu_num_locked_set(const void __iomem *baseAddress,
- u32 numLockedEntries)
-{
- hw_status status = RET_OK;
-
- MMUMMU_LOCK_BASE_VALUE_WRITE32(baseAddress, numLockedEntries);
-
- return status;
-}
-
-hw_status hw_mmu_victim_num_set(const void __iomem *baseAddress,
- u32 victimEntryNum)
-{
- hw_status status = RET_OK;
-
- MMUMMU_LOCK_CURRENT_VICTIM_WRITE32(baseAddress, victimEntryNum);
-
- return status;
-}
-
-hw_status hw_mmu_event_ack(const void __iomem *baseAddress, u32 irqMask)
-{
- hw_status status = RET_OK;
-
- MMUMMU_IRQSTATUS_WRITE_REGISTER32(baseAddress, irqMask);
-
- return status;
-}
-
-hw_status hw_mmu_event_disable(const void __iomem *baseAddress, u32 irqMask)
-{
- hw_status status = RET_OK;
- u32 irq_reg;
-
- irq_reg = MMUMMU_IRQENABLE_READ_REGISTER32(baseAddress);
-
- MMUMMU_IRQENABLE_WRITE_REGISTER32(baseAddress, irq_reg & ~irqMask);
-
- return status;
-}
-
-hw_status hw_mmu_event_enable(const void __iomem *baseAddress, u32 irqMask)
-{
- hw_status status = RET_OK;
- u32 irq_reg;
-
- irq_reg = MMUMMU_IRQENABLE_READ_REGISTER32(baseAddress);
-
- MMUMMU_IRQENABLE_WRITE_REGISTER32(baseAddress, irq_reg | irqMask);
-
- return status;
-}
-
-hw_status hw_mmu_event_status(const void __iomem *baseAddress, u32 *irqMask)
-{
- hw_status status = RET_OK;
-
- *irqMask = MMUMMU_IRQSTATUS_READ_REGISTER32(baseAddress);
-
- return status;
-}
-
-hw_status hw_mmu_fault_addr_read(const void __iomem *baseAddress, u32 *addr)
-{
- hw_status status = RET_OK;
-
- /*Check the input Parameters */
- CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
- RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
-
- /* read values from register */
- *addr = MMUMMU_FAULT_AD_READ_REGISTER32(baseAddress);
-
- return status;
-}
-
-hw_status hw_mmu_ttb_set(const void __iomem *baseAddress, u32 TTBPhysAddr)
-{
- hw_status status = RET_OK;
- u32 load_ttb;
-
- /*Check the input Parameters */
- CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
- RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
-
- load_ttb = TTBPhysAddr & ~0x7FUL;
- /* write values to register */
- MMUMMU_TTB_WRITE_REGISTER32(baseAddress, load_ttb);
-
- return status;
-}
-
-hw_status hw_mmu_twl_enable(const void __iomem *baseAddress)
-{
- hw_status status = RET_OK;
-
- MMUMMU_CNTLTWL_ENABLE_WRITE32(baseAddress, HW_SET);
-
- return status;
-}
-
-hw_status hw_mmu_twl_disable(const void __iomem *baseAddress)
-{
- hw_status status = RET_OK;
-
- MMUMMU_CNTLTWL_ENABLE_WRITE32(baseAddress, HW_CLEAR);
-
- return status;
-}
-
-hw_status hw_mmu_tlb_flush(const void __iomem *baseAddress, u32 virtualAddr,
- u32 pageSize)
-{
- hw_status status = RET_OK;
- u32 virtual_addr_tag;
- enum hw_mmu_page_size_t pg_size_bits;
-
- switch (pageSize) {
- case HW_PAGE_SIZE4KB:
- pg_size_bits = HW_MMU_SMALL_PAGE;
- break;
-
- case HW_PAGE_SIZE64KB:
- pg_size_bits = HW_MMU_LARGE_PAGE;
- break;
-
- case HW_PAGE_SIZE1MB:
- pg_size_bits = HW_MMU_SECTION;
- break;
-
- case HW_PAGE_SIZE16MB:
- pg_size_bits = HW_MMU_SUPERSECTION;
- break;
-
- default:
- return RET_FAIL;
- }
-
- /* Generate the 20-bit tag from virtual address */
- virtual_addr_tag = ((virtualAddr & MMU_ADDR_MASK) >> 12);
-
- mmu_set_cam_entry(baseAddress, pg_size_bits, 0, 0, virtual_addr_tag);
-
- mmu_flush_entry(baseAddress);
-
- return status;
-}
-
-hw_status hw_mmu_tlb_add(const void __iomem *baseAddress,
- u32 physicalAddr,
- u32 virtualAddr,
- u32 pageSize,
- u32 entryNum,
- struct hw_mmu_map_attrs_t *map_attrs,
- s8 preservedBit, s8 validBit)
-{
- hw_status status = RET_OK;
- u32 lock_reg;
- u32 virtual_addr_tag;
- enum hw_mmu_page_size_t mmu_pg_size;
-
- /*Check the input Parameters */
- CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
- RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
- CHECK_INPUT_RANGE_MIN0(pageSize, MMU_PAGE_MAX, RET_PARAM_OUT_OF_RANGE,
- RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
- CHECK_INPUT_RANGE_MIN0(map_attrs->element_size, MMU_ELEMENTSIZE_MAX,
- RET_PARAM_OUT_OF_RANGE, RES_MMU_BASE +
- RES_INVALID_INPUT_PARAM);
-
- switch (pageSize) {
- case HW_PAGE_SIZE4KB:
- mmu_pg_size = HW_MMU_SMALL_PAGE;
- break;
-
- case HW_PAGE_SIZE64KB:
- mmu_pg_size = HW_MMU_LARGE_PAGE;
- break;
-
- case HW_PAGE_SIZE1MB:
- mmu_pg_size = HW_MMU_SECTION;
- break;
-
- case HW_PAGE_SIZE16MB:
- mmu_pg_size = HW_MMU_SUPERSECTION;
- break;
-
- default:
- return RET_FAIL;
- }
-
- lock_reg = MMUMMU_LOCK_READ_REGISTER32(baseAddress);
-
- /* Generate the 20-bit tag from virtual address */
- virtual_addr_tag = ((virtualAddr & MMU_ADDR_MASK) >> 12);
-
- /* Write the fields in the CAM Entry Register */
- mmu_set_cam_entry(baseAddress, mmu_pg_size, preservedBit, validBit,
- virtual_addr_tag);
-
- /* Write the different fields of the RAM Entry Register */
- /* endianism of the page,Element Size of the page (8, 16, 32, 64 bit) */
- mmu_set_ram_entry(baseAddress, physicalAddr, map_attrs->endianism,
- map_attrs->element_size, map_attrs->mixed_size);
-
- /* Update the MMU Lock Register */
- /* currentVictim between lockedBaseValue and (MMU_Entries_Number - 1) */
- MMUMMU_LOCK_CURRENT_VICTIM_WRITE32(baseAddress, entryNum);
-
- /* Enable loading of an entry in TLB by writing 1
- into LD_TLB_REG register */
- MMUMMU_LD_TLB_WRITE_REGISTER32(baseAddress, MMU_LOAD_TLB);
-
- MMUMMU_LOCK_WRITE_REGISTER32(baseAddress, lock_reg);
-
- return status;
-}
-
-hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
- u32 physicalAddr,
- u32 virtualAddr,
- u32 pageSize, struct hw_mmu_map_attrs_t *map_attrs)
-{
- hw_status status = RET_OK;
- u32 pte_addr, pte_val;
- s32 num_entries = 1;
-
- switch (pageSize) {
- case HW_PAGE_SIZE4KB:
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtualAddr &
- MMU_SMALL_PAGE_MASK);
- pte_val =
- ((physicalAddr & MMU_SMALL_PAGE_MASK) |
- (map_attrs->endianism << 9) | (map_attrs->
- element_size << 4) |
- (map_attrs->mixed_size << 11) | 2);
- break;
-
- case HW_PAGE_SIZE64KB:
- num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtualAddr &
- MMU_LARGE_PAGE_MASK);
- pte_val =
- ((physicalAddr & MMU_LARGE_PAGE_MASK) |
- (map_attrs->endianism << 9) | (map_attrs->
- element_size << 4) |
- (map_attrs->mixed_size << 11) | 1);
- break;
-
- case HW_PAGE_SIZE1MB:
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtualAddr &
- MMU_SECTION_ADDR_MASK);
- pte_val =
- ((((physicalAddr & MMU_SECTION_ADDR_MASK) |
- (map_attrs->endianism << 15) | (map_attrs->
- element_size << 10) |
- (map_attrs->mixed_size << 17)) & ~0x40000) | 0x2);
- break;
-
- case HW_PAGE_SIZE16MB:
- num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtualAddr &
- MMU_SSECTION_ADDR_MASK);
- pte_val =
- (((physicalAddr & MMU_SSECTION_ADDR_MASK) |
- (map_attrs->endianism << 15) | (map_attrs->
- element_size << 10) |
- (map_attrs->mixed_size << 17)
- ) | 0x40000 | 0x2);
- break;
-
- case HW_MMU_COARSE_PAGE_SIZE:
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtualAddr &
- MMU_SECTION_ADDR_MASK);
- pte_val = (physicalAddr & MMU_PAGE_TABLE_MASK) | 1;
- break;
-
- default:
- return RET_FAIL;
- }
-
- while (--num_entries >= 0)
- ((u32 *) pte_addr)[num_entries] = pte_val;
-
- return status;
-}
-
-hw_status hw_mmu_pte_clear(const u32 pg_tbl_va, u32 virtualAddr, u32 page_size)
-{
- hw_status status = RET_OK;
- u32 pte_addr;
- s32 num_entries = 1;
-
- switch (page_size) {
- case HW_PAGE_SIZE4KB:
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtualAddr &
- MMU_SMALL_PAGE_MASK);
- break;
-
- case HW_PAGE_SIZE64KB:
- num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtualAddr &
- MMU_LARGE_PAGE_MASK);
- break;
-
- case HW_PAGE_SIZE1MB:
- case HW_MMU_COARSE_PAGE_SIZE:
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtualAddr &
- MMU_SECTION_ADDR_MASK);
- break;
-
- case HW_PAGE_SIZE16MB:
- num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtualAddr &
- MMU_SSECTION_ADDR_MASK);
- break;
-
- default:
- return RET_FAIL;
- }
-
- while (--num_entries >= 0)
- ((u32 *) pte_addr)[num_entries] = 0;
-
- return status;
-}
-
-/* mmu_flush_entry */
-static hw_status mmu_flush_entry(const void __iomem *baseAddress)
-{
- hw_status status = RET_OK;
- u32 flush_entry_data = 0x1;
-
- /*Check the input Parameters */
- CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
- RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
-
- /* write values to register */
- MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32(baseAddress, flush_entry_data);
-
- return status;
-}
-
-/* mmu_set_cam_entry */
-static hw_status mmu_set_cam_entry(const void __iomem *baseAddress,
- const u32 pageSize,
- const u32 preservedBit,
- const u32 validBit,
- const u32 virtual_addr_tag)
-{
- hw_status status = RET_OK;
- u32 mmu_cam_reg;
-
- /*Check the input Parameters */
- CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
- RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
-
- mmu_cam_reg = (virtual_addr_tag << 12);
- mmu_cam_reg = (mmu_cam_reg) | (pageSize) | (validBit << 2) |
- (preservedBit << 3);
-
- /* write values to register */
- MMUMMU_CAM_WRITE_REGISTER32(baseAddress, mmu_cam_reg);
-
- return status;
-}
-
-/* mmu_set_ram_entry */
-static hw_status mmu_set_ram_entry(const void __iomem *baseAddress,
- const u32 physicalAddr,
- enum hw_endianism_t endianism,
- enum hw_element_size_t element_size,
- enum hw_mmu_mixed_size_t mixed_size)
-{
- hw_status status = RET_OK;
- u32 mmu_ram_reg;
-
- /*Check the input Parameters */
- CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
- RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
- CHECK_INPUT_RANGE_MIN0(element_size, MMU_ELEMENTSIZE_MAX,
- RET_PARAM_OUT_OF_RANGE, RES_MMU_BASE +
- RES_INVALID_INPUT_PARAM);
-
- mmu_ram_reg = (physicalAddr & MMU_ADDR_MASK);
- mmu_ram_reg = (mmu_ram_reg) | ((endianism << 9) | (element_size << 7) |
- (mixed_size << 6));
-
- /* write values to register */
- MMUMMU_RAM_WRITE_REGISTER32(baseAddress, mmu_ram_reg);
-
- return status;
-
-}
diff --git a/drivers/dsp/bridge/hw/hw_mmu.h b/drivers/dsp/bridge/hw/hw_mmu.h
deleted file mode 100644
index 9b13468..0000000
--- a/drivers/dsp/bridge/hw/hw_mmu.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/*
- * hw_mmu.h
- *
- * DSP-BIOS Bridge driver support functions for TI OMAP processors.
- *
- * MMU types and API declarations
- *
- * Copyright (C) 2007 Texas Instruments, Inc.
- *
- * This package is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
- * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
- */
-
-#ifndef _HW_MMU_H
-#define _HW_MMU_H
-
-#include <linux/types.h>
-
-/* Bitmasks for interrupt sources */
-#define HW_MMU_TRANSLATION_FAULT 0x2
-#define HW_MMU_ALL_INTERRUPTS 0x1F
-
-#define HW_MMU_COARSE_PAGE_SIZE 0x400
-
-/* hw_mmu_mixed_size_t: Enumerated Type used to specify whether to follow
- CPU/TLB Element size */
-enum hw_mmu_mixed_size_t {
- HW_MMU_TLBES,
- HW_MMU_CPUES
-};
-
-/* hw_mmu_map_attrs_t: Struct containing MMU mapping attributes */
-struct hw_mmu_map_attrs_t {
- enum hw_endianism_t endianism;
- enum hw_element_size_t element_size;
- enum hw_mmu_mixed_size_t mixed_size;
- bool donotlockmpupage;
-};
-
-extern hw_status hw_mmu_enable(const void __iomem *baseAddress);
-
-extern hw_status hw_mmu_disable(const void __iomem *baseAddress);
-
-extern hw_status hw_mmu_num_locked_set(const void __iomem *baseAddress,
- u32 numLockedEntries);
-
-extern hw_status hw_mmu_victim_num_set(const void __iomem *baseAddress,
- u32 victimEntryNum);
-
-/* For MMU faults */
-extern hw_status hw_mmu_event_ack(const void __iomem *baseAddress,
- u32 irqMask);
-
-extern hw_status hw_mmu_event_disable(const void __iomem *baseAddress,
- u32 irqMask);
-
-extern hw_status hw_mmu_event_enable(const void __iomem *baseAddress,
- u32 irqMask);
-
-extern hw_status hw_mmu_event_status(const void __iomem *baseAddress,
- u32 *irqMask);
-
-extern hw_status hw_mmu_fault_addr_read(const void __iomem *baseAddress,
- u32 *addr);
-
-/* Set the TT base address */
-extern hw_status hw_mmu_ttb_set(const void __iomem *baseAddress,
- u32 TTBPhysAddr);
-
-extern hw_status hw_mmu_twl_enable(const void __iomem *baseAddress);
-
-extern hw_status hw_mmu_twl_disable(const void __iomem *baseAddress);
-
-extern hw_status hw_mmu_tlb_flush(const void __iomem *baseAddress,
- u32 virtualAddr, u32 pageSize);
-
-extern hw_status hw_mmu_tlb_add(const void __iomem *baseAddress,
- u32 physicalAddr,
- u32 virtualAddr,
- u32 pageSize,
- u32 entryNum,
- struct hw_mmu_map_attrs_t *map_attrs,
- s8 preservedBit, s8 validBit);
-
-/* For PTEs */
-extern hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
- u32 physicalAddr,
- u32 virtualAddr,
- u32 pageSize,
- struct hw_mmu_map_attrs_t *map_attrs);
-
-extern hw_status hw_mmu_pte_clear(const u32 pg_tbl_va,
- u32 page_size, u32 virtualAddr);
-
-static inline u32 hw_mmu_pte_addr_l1(u32 L1_base, u32 va)
-{
- u32 pte_addr;
- u32 va31_to20;
-
- va31_to20 = va >> (20 - 2); /* Left-shift by 2 here itself */
- va31_to20 &= 0xFFFFFFFCUL;
- pte_addr = L1_base + va31_to20;
-
- return pte_addr;
-}
-
-static inline u32 hw_mmu_pte_addr_l2(u32 L2_base, u32 va)
-{
- u32 pte_addr;
-
- pte_addr = (L2_base & 0xFFFFFC00) | ((va >> 10) & 0x3FC);
-
- return pte_addr;
-}
-
-static inline u32 hw_mmu_pte_coarse_l1(u32 pte_val)
-{
- u32 pte_coarse;
-
- pte_coarse = pte_val & 0xFFFFFC00;
-
- return pte_coarse;
-}
-
-static inline u32 hw_mmu_pte_size_l1(u32 pte_val)
-{
- u32 pte_size = 0;
-
- if ((pte_val & 0x3) == 0x1) {
- /* Points to L2 PT */
- pte_size = HW_MMU_COARSE_PAGE_SIZE;
- }
-
- if ((pte_val & 0x3) == 0x2) {
- if (pte_val & (1 << 18))
- pte_size = HW_PAGE_SIZE16MB;
- else
- pte_size = HW_PAGE_SIZE1MB;
- }
-
- return pte_size;
-}
-
-static inline u32 hw_mmu_pte_size_l2(u32 pte_val)
-{
- u32 pte_size = 0;
-
- if (pte_val & 0x2)
- pte_size = HW_PAGE_SIZE4KB;
- else if (pte_val & 0x1)
- pte_size = HW_PAGE_SIZE64KB;
-
- return pte_size;
-}
-
-#endif /* _HW_MMU_H */
diff --git a/drivers/dsp/bridge/rmgr/node.c b/drivers/dsp/bridge/rmgr/node.c
index 3d2cf96..e1b3128 100644
--- a/drivers/dsp/bridge/rmgr/node.c
+++ b/drivers/dsp/bridge/rmgr/node.c
@@ -621,9 +621,7 @@ func_cont:
ul_gpp_mem_base = (u32) host_res->dw_mem_base[1];
off_set = pul_value - dynext_base;
ul_stack_seg_addr = ul_gpp_mem_base + off_set;
- ul_stack_seg_val = (u32) *((reg_uword32 *)
- ((u32)
- (ul_stack_seg_addr)));
+ ul_stack_seg_val = __raw_readl(ul_stack_seg_addr);

dev_dbg(bridge, "%s: StackSegVal = 0x%x, StackSegAddr ="
" 0x%x\n", __func__, ul_stack_seg_val,
--
1.7.0.4

2010-06-30 23:52:21

by Fernando Guzman Lugo

Subject: [PATCHv3 2/9] dspbridge: move shared memory iommu maps to tiomap3430.c

The iommu mapping of the shared memory segments is now done in
bridge_brd_start and unmapped in bridge_brd_stop.

NOTE: the video sequencer reset is no longer done in dspbridge,
because dspbridge does not manage it.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
drivers/dsp/bridge/core/_tiomap.h | 6 +
drivers/dsp/bridge/core/io_sm.c | 117 ++----------
drivers/dsp/bridge/core/tiomap3430.c | 353 ++++++++++++++++++++--------------
drivers/dsp/bridge/core/tiomap_io.c | 11 +-
4 files changed, 237 insertions(+), 250 deletions(-)

diff --git a/drivers/dsp/bridge/core/_tiomap.h b/drivers/dsp/bridge/core/_tiomap.h
index d13677a..6a822c6 100644
--- a/drivers/dsp/bridge/core/_tiomap.h
+++ b/drivers/dsp/bridge/core/_tiomap.h
@@ -310,6 +310,11 @@ static const struct bpwr_clk_t bpwr_clks[] = {

#define CLEAR_BIT_INDEX(reg, index) (reg &= ~(1 << (index)))

+struct shm_segs {
+ u32 seg0_da, seg0_pa, seg0_va, seg0_size;
+ u32 seg1_da, seg1_pa, seg1_va, seg1_size;
+};
+
/* This Bridge driver's device context: */
struct bridge_dev_context {
struct dev_object *hdev_obj; /* Handle to Bridge device object. */
@@ -333,6 +338,7 @@ struct bridge_dev_context {

struct omap_mbox *mbox; /* Mail box handle */
struct iommu *dsp_mmu; /* iommu for iva2 handler */
+ struct shm_segs *sh_s;

struct cfg_hostres *resources; /* Host Resources */

diff --git a/drivers/dsp/bridge/core/io_sm.c b/drivers/dsp/bridge/core/io_sm.c
index 1f47f8b..aca9854 100644
--- a/drivers/dsp/bridge/core/io_sm.c
+++ b/drivers/dsp/bridge/core/io_sm.c
@@ -290,8 +290,7 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
struct cod_manager *cod_man;
struct chnl_mgr *hchnl_mgr;
struct msg_mgr *hmsg_mgr;
- struct iommu *mmu;
- struct iotlb_entry e;
+ struct shm_segs *sm_sg;
u32 ul_shm_base;
u32 ul_shm_base_offset;
u32 ul_shm_limit;
@@ -317,14 +316,6 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
u32 shm0_end;
u32 ul_dyn_ext_base;
u32 ul_seg1_size = 0;
- u32 pa_curr = 0;
- u32 va_curr = 0;
- u32 gpp_va_curr = 0;
- u32 num_bytes = 0;
- u32 all_bits = 0;
- u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
- HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
- };

status = dev_get_bridge_context(hio_mgr->hdev_obj, &pbridge_context);
if (!pbridge_context) {
@@ -338,19 +329,12 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
goto func_end;
}

- mmu = pbridge_context->dsp_mmu;
+ sm_sg = kmalloc(sizeof(*sm_sg), GFP_KERNEL);

- if (mmu)
- iommu_put(mmu);
- mmu = iommu_get("iva2");
-
- if (IS_ERR_OR_NULL(mmu)) {
- pr_err("Error in iommu_get\n");
- pbridge_context->dsp_mmu = NULL;
- status = -EFAULT;
+ if (!sm_sg) {
+ status = -ENOMEM;
goto func_end;
}
- pbridge_context->dsp_mmu = mmu;

status = dev_get_cod_mgr(hio_mgr->hdev_obj, &cod_man);
if (!cod_man) {
@@ -488,74 +472,16 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
if (DSP_FAILED(status))
goto func_end;

- pa_curr = ul_gpp_pa;
- va_curr = ul_dyn_ext_base * hio_mgr->word_size;
- gpp_va_curr = ul_gpp_va;
- num_bytes = ul_seg1_size;
+ sm_sg->seg1_pa = ul_gpp_pa;
+ sm_sg->seg1_da = ul_dyn_ext_base;
+ sm_sg->seg1_va = ul_gpp_va;
+ sm_sg->seg1_size = ul_seg1_size;
+ sm_sg->seg0_pa = ul_gpp_pa + ul_pad_size + ul_seg1_size;
+ sm_sg->seg0_da = ul_dsp_va;
+ sm_sg->seg0_va = ul_gpp_va + ul_pad_size + ul_seg1_size;
+ sm_sg->seg0_size = ul_seg_size;

- va_curr = iommu_kmap(mmu, va_curr, pa_curr, num_bytes,
- IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);
- if (IS_ERR_VALUE(va_curr)) {
- status = (int)va_curr;
- goto func_end;
- }
-
- pa_curr += ul_pad_size + num_bytes;
- va_curr += ul_pad_size + num_bytes;
- gpp_va_curr += ul_pad_size + num_bytes;
-
- /* Configure the TLB entries for the next cacheable segment */
- num_bytes = ul_seg_size;
- va_curr = ul_dsp_va * hio_mgr->word_size;
- while (num_bytes) {
- /*
- * To find the max. page size with which both PA & VA are
- * aligned.
- */
- all_bits = pa_curr | va_curr;
- dev_dbg(bridge, "all_bits for Seg1 %x, pa_curr %x, "
- "va_curr %x, num_bytes %x\n", all_bits, pa_curr,
- va_curr, num_bytes);
- for (i = 0; i < 4; i++) {
- if (!(num_bytes >= page_size[i]) ||
- !((all_bits & (page_size[i] - 1)) == 0))
- continue;
- if (ndx < MAX_LOCK_TLB_ENTRIES) {
- /*
- * This is the physical address written to
- * DSP MMU.
- */
- ae_proc[ndx].ul_gpp_pa = pa_curr;
- /*
- * This is the virtual uncached ioremapped
- * address!!!
- */
- ae_proc[ndx].ul_gpp_va = gpp_va_curr;
- ae_proc[ndx].ul_dsp_va =
- va_curr / hio_mgr->word_size;
- ae_proc[ndx].ul_size = page_size[i];
- ae_proc[ndx].endianism = HW_LITTLE_ENDIAN;
- ae_proc[ndx].elem_size = HW_ELEM_SIZE16BIT;
- ae_proc[ndx].mixed_mode = HW_MMU_CPUES;
- dev_dbg(bridge, "shm MMU TLB entry PA %x"
- " VA %x DSP_VA %x Size %x\n",
- ae_proc[ndx].ul_gpp_pa,
- ae_proc[ndx].ul_gpp_va,
- ae_proc[ndx].ul_dsp_va *
- hio_mgr->word_size, page_size[i]);
- ndx++;
- }
- pa_curr += page_size[i];
- va_curr += page_size[i];
- gpp_va_curr += page_size[i];
- num_bytes -= page_size[i];
- /*
- * Don't try smaller sizes. Hopefully we have reached
- * an address aligned to a bigger page size.
- */
- break;
- }
- }
+ pbridge_context->sh_s = sm_sg;

/*
* Copy remaining entries from CDB. All entries are 1 MB and
@@ -602,17 +528,6 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
goto func_end;
}

- dsp_iotlb_init(&e, 0, 0, IOVMF_PGSZ_4K);
-
- /* Map the L4 peripherals */
- i = 0;
- while (l4_peripheral_table[i].phys_addr) {
- e.da = l4_peripheral_table[i].dsp_virt_addr;
- e.pa = l4_peripheral_table[i].phys_addr;
- iopgtable_store_entry(mmu, &e);
- i++;
- }
-
for (i = ndx; i < BRDIOCTL_NUMOFMMUTLB; i++) {
ae_proc[i].ul_dsp_va = 0;
ae_proc[i].ul_gpp_pa = 0;
@@ -635,12 +550,12 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
status = -EFAULT;
goto func_end;
} else {
- if (ae_proc[0].ul_dsp_va > ul_shm_base) {
+ if (sm_sg->seg0_da > ul_shm_base) {
status = -EPERM;
goto func_end;
}
/* ul_shm_base may not be at ul_dsp_va address */
- ul_shm_base_offset = (ul_shm_base - ae_proc[0].ul_dsp_va) *
+ ul_shm_base_offset = (ul_shm_base - sm_sg->seg0_da) *
hio_mgr->word_size;
/*
* bridge_dev_ctrl() will set dev context dsp-mmu info. In
@@ -665,7 +580,7 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
}
/* Register SM */
status =
- register_shm_segs(hio_mgr, cod_man, ae_proc[0].ul_gpp_pa);
+ register_shm_segs(hio_mgr, cod_man, sm_sg->seg0_pa);
}

hio_mgr->shared_mem = (struct shm *)ul_shm_base;
diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index e750767..89d4936 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -300,8 +300,7 @@ static int bridge_brd_monitor(struct bridge_dev_context *hDevContext)
(*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_DISABLE_AUTO,
OMAP3430_IVA2_MOD, CM_CLKSTCTRL);
}
- (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2, 0,
- OMAP3430_IVA2_MOD, RM_RSTCTRL);
+
dsp_clk_enable(DSP_CLK_IVA2);

if (DSP_SUCCEEDED(status)) {
@@ -374,15 +373,16 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
int status = 0;
struct bridge_dev_context *dev_context = hDevContext;
struct iommu *mmu;
- struct iotlb_entry en;
+ struct iotlb_entry e;
+ struct shm_segs *sm_sg;
+ int i;
+ struct bridge_ioctl_extproc *tlb = dev_context->atlb_entry;
u32 dw_sync_addr = 0;
u32 ul_shm_base; /* Gpp Phys SM base addr(byte) */
u32 ul_shm_base_virt; /* Dsp Virt SM base addr */
u32 ul_tlb_base_virt; /* Base of MMU TLB entry */
/* Offset of shm_base_virt from tlb_base_virt */
u32 ul_shm_offset_virt;
- s32 entry_ndx;
- s32 itmp_entry_ndx = 0; /* DSP-MMU TLB entry base address */
struct cfg_hostres *resources = NULL;
u32 temp;
u32 ul_dsp_clk_rate;
@@ -394,8 +394,6 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
struct dspbridge_platform_data *pdata =
omap_dspbridge_dev->dev.platform_data;

- mmu = dev_context->dsp_mmu;
-
/* The device context contains all the mmu setup info from when the
* last dsp base image was loaded. The first entry is always
* SHMMEM base. */
@@ -405,12 +403,12 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
ul_shm_base_virt *= DSPWORDSIZE;
DBC_ASSERT(ul_shm_base_virt != 0);
/* DSP Virtual address */
- ul_tlb_base_virt = dev_context->atlb_entry[0].ul_dsp_va;
+ ul_tlb_base_virt = dev_context->sh_s->seg0_da;
DBC_ASSERT(ul_tlb_base_virt <= ul_shm_base_virt);
ul_shm_offset_virt =
ul_shm_base_virt - (ul_tlb_base_virt * DSPWORDSIZE);
/* Kernel logical address */
- ul_shm_base = dev_context->atlb_entry[0].ul_gpp_va + ul_shm_offset_virt;
+ ul_shm_base = dev_context->sh_s->seg0_va + ul_shm_offset_virt;

DBC_ASSERT(ul_shm_base != 0);
/* 2nd wd is used as sync field */
@@ -445,152 +443,193 @@ static int bridge_brd_start(struct bridge_dev_context *hDevContext,
OMAP343X_CONTROL_IVA2_BOOTMOD));
}
}
- if (DSP_SUCCEEDED(status)) {
- /* Only make TLB entry if both addresses are non-zero */
- for (entry_ndx = 0; entry_ndx < BRDIOCTL_NUMOFMMUTLB;
- entry_ndx++) {
- struct bridge_ioctl_extproc *e = &dev_context->atlb_entry[entry_ndx];
- if (!e->ul_gpp_pa || !e->ul_dsp_va)
- continue;
-
- dev_dbg(bridge,
- "MMU %d, pa: 0x%x, va: 0x%x, size: 0x%x",
- itmp_entry_ndx,
- e->ul_gpp_pa,
- e->ul_dsp_va,
- e->ul_size);
-
- dsp_iotlb_init(&en, e->ul_dsp_va, e->ul_gpp_pa,
- bytes_to_iopgsz(e->ul_size));
- iopgtable_store_entry(mmu, &en);
- itmp_entry_ndx++;
- }
+
+ if (status)
+ goto err1;
+
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2, 0,
+ OMAP3430_IVA2_MOD, RM_RSTCTRL);
+
+ mmu = dev_context->dsp_mmu;
+
+ if (mmu)
+ iommu_put(mmu);
+ mmu = iommu_get("iva2");
+
+ if (IS_ERR(mmu)) {
+ pr_err("Error in iommu_get %ld\n", PTR_ERR(mmu));
+ dev_context->dsp_mmu = NULL;
+ status = (int)mmu;
+ goto err1;
}
+ dev_context->dsp_mmu = mmu;
+ sm_sg = dev_context->sh_s;

- /* Lock the above TLB entries and get the BIOS and load monitor timer
- * information */
- if (DSP_SUCCEEDED(status)) {
+ sm_sg->seg0_da = iommu_kmap(mmu, sm_sg->seg0_da, sm_sg->seg0_pa,
+ sm_sg->seg0_size, IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);

- /* Enable the BIOS clock */
- (void)dev_get_symbol(dev_context->hdev_obj,
- BRIDGEINIT_BIOSGPTIMER, &ul_bios_gp_timer);
- (void)dev_get_symbol(dev_context->hdev_obj,
- BRIDGEINIT_LOADMON_GPTIMER,
- &ul_load_monitor_timer);
- if (ul_load_monitor_timer != 0xFFFF) {
- clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
- ul_load_monitor_timer;
- dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
- } else {
- dev_dbg(bridge, "Not able to get the symbol for Load "
- "Monitor Timer\n");
- }
+ if (IS_ERR_VALUE(sm_sg->seg0_da)) {
+ status = (int)sm_sg->seg0_da;
+ goto err1;
}

- if (DSP_SUCCEEDED(status)) {
- if (ul_bios_gp_timer != 0xFFFF) {
- clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
- ul_bios_gp_timer;
- dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
- } else {
- dev_dbg(bridge,
- "Not able to get the symbol for BIOS Timer\n");
- }
+ sm_sg->seg1_da = iommu_kmap(mmu, sm_sg->seg1_da, sm_sg->seg1_pa,
+ sm_sg->seg1_size, IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);
+
+ if (IS_ERR_VALUE(sm_sg->seg1_da)) {
+ iommu_kunmap(mmu, sm_sg->seg0_da);
+ status = (int)sm_sg->seg1_da;
+ goto err1;
}

- if (DSP_SUCCEEDED(status)) {
- /* Set the DSP clock rate */
- (void)dev_get_symbol(dev_context->hdev_obj,
- "_BRIDGEINIT_DSP_FREQ", &ul_dsp_clk_addr);
- /*Set Autoidle Mode for IVA2 PLL */
- (*pdata->dsp_cm_write)(1 << OMAP3430_AUTO_IVA2_DPLL_SHIFT,
- OMAP3430_IVA2_MOD, OMAP3430_CM_AUTOIDLE_PLL);
-
- if ((unsigned int *)ul_dsp_clk_addr != NULL) {
- /* Get the clock rate */
- ul_dsp_clk_rate = dsp_clk_get_iva2_rate();
- dev_dbg(bridge, "%s: DSP clock rate (KHZ): 0x%x \n",
- __func__, ul_dsp_clk_rate);
- (void)bridge_brd_write(dev_context,
- (u8 *) &ul_dsp_clk_rate,
- ul_dsp_clk_addr, sizeof(u32), 0);
- }
- /*
- * Enable Mailbox events and also drain any pending
- * stale messages.
- */
- dev_context->mbox = omap_mbox_get("dsp");
- if (IS_ERR(dev_context->mbox)) {
- dev_context->mbox = NULL;
- pr_err("%s: Failed to get dsp mailbox handle\n",
- __func__);
- status = -EPERM;
- }
+ dsp_iotlb_init(&e, 0, 0, IOVMF_PGSZ_4K);

+ /* Map the L4 peripherals */
+ i = 0;
+ while (l4_peripheral_table[i].phys_addr) {
+ e.da = l4_peripheral_table[i].dsp_virt_addr;
+ e.pa = l4_peripheral_table[i].phys_addr;
+ iopgtable_store_entry(mmu, &e);
+ i++;
+ }
+
+ for (i = 0; i < BRDIOCTL_NUMOFMMUTLB; i++) {
+ if (!tlb[i].ul_gpp_pa)
+ continue;
+
+ dev_dbg(bridge, "(proc) MMU %d GppPa: 0x%x DspVa 0x%x Size"
+ " 0x%x\n", i, tlb[i].ul_gpp_pa, tlb[i].ul_dsp_va,
+ tlb[i].ul_size);
+
+ dsp_iotlb_init(&e, tlb[i].ul_dsp_va, tlb[i].ul_gpp_pa,
+ bytes_to_iopgsz(tlb[i].ul_size));
+ iopgtable_store_entry(mmu, &e);
}
- if (DSP_SUCCEEDED(status)) {
- dev_context->mbox->rxq->callback = (int (*)(void *))io_mbox_msg;
+
+ /* Get the BIOS and load monitor timer information */
+ /* Enable the BIOS clock */
+ (void)dev_get_symbol(dev_context->hdev_obj,
+ BRIDGEINIT_BIOSGPTIMER, &ul_bios_gp_timer);
+ (void)dev_get_symbol(dev_context->hdev_obj,
+ BRIDGEINIT_LOADMON_GPTIMER,
+ &ul_load_monitor_timer);
+ if (ul_load_monitor_timer != 0xFFFF) {
+ clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
+ ul_load_monitor_timer;
+ dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
+ } else {
+ dev_dbg(bridge, "Not able to get the symbol for Load "
+ "Monitor Timer\n");
+ }
+
+ if (ul_bios_gp_timer != 0xFFFF) {
+ clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
+ ul_bios_gp_timer;
+ dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
+ } else {
+ dev_dbg(bridge,
+ "Not able to get the symbol for BIOS Timer\n");
+ }
+
+ /* Set the DSP clock rate */
+ (void)dev_get_symbol(dev_context->hdev_obj,
+ "_BRIDGEINIT_DSP_FREQ", &ul_dsp_clk_addr);
+ /*Set Autoidle Mode for IVA2 PLL */
+ (*pdata->dsp_cm_write)(1 << OMAP3430_AUTO_IVA2_DPLL_SHIFT,
+ OMAP3430_IVA2_MOD, OMAP3430_CM_AUTOIDLE_PLL);
+
+ if ((unsigned int *)ul_dsp_clk_addr != NULL) {
+ /* Get the clock rate */
+ ul_dsp_clk_rate = dsp_clk_get_iva2_rate();
+ dev_dbg(bridge, "%s: DSP clock rate (KHZ): 0x%x \n",
+ __func__, ul_dsp_clk_rate);
+ (void)bridge_brd_write(dev_context,
+ (u8 *) &ul_dsp_clk_rate,
+ ul_dsp_clk_addr, sizeof(u32), 0);
+ }
+ /*
+ * Enable Mailbox events and also drain any pending
+ * stale messages.
+ */
+ dev_context->mbox = omap_mbox_get("dsp");
+ if (IS_ERR(dev_context->mbox)) {
+ dev_context->mbox = NULL;
+ pr_err("%s: Failed to get dsp mailbox handle\n", __func__);
+ status = -EPERM;
+ goto err3;
+ }
+
+ dev_context->mbox->rxq->callback = (int (*)(void *))io_mbox_msg;

/*PM_IVA2GRPSEL_PER = 0xC0;*/
- temp = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) + 0xA8));
- temp = (temp & 0xFFFFFF30) | 0xC0;
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8)) =
- (u32) temp;
+ temp = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) + 0xA8));
+ temp = (temp & 0xFFFFFF30) | 0xC0;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8)) =
+ (u32) temp;

/*PM_MPUGRPSEL_PER &= 0xFFFFFF3F; */
- temp = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_pm_base) + 0xA4));
- temp = (temp & 0xFFFFFF3F);
- *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4)) =
- (u32) temp;
+ temp = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) + 0xA4));
+ temp = (temp & 0xFFFFFF3F);
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4)) =
+ (u32) temp;
/*CM_SLEEPDEP_PER |= 0x04; */
- temp = (u32) *((reg_uword32 *)
- ((u32) (resources->dw_per_base) + 0x44));
- temp = (temp & 0xFFFFFFFB) | 0x04;
- *((reg_uword32 *) ((u32) (resources->dw_per_base) + 0x44)) =
- (u32) temp;
+ temp = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_base) + 0x44));
+ temp = (temp & 0xFFFFFFFB) | 0x04;
+ *((reg_uword32 *) ((u32) (resources->dw_per_base) + 0x44)) =
+ (u32) temp;

/*CM_CLKSTCTRL_IVA2 = 0x00000003 -To Allow automatic transitions */
- (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_ENABLE_AUTO,
- OMAP3430_IVA2_MOD, CM_CLKSTCTRL);
-
- /* Let DSP go */
- dev_dbg(bridge, "%s Unreset\n", __func__);
- /* release the RST1, DSP starts executing now .. */
- (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2, 0,
- OMAP3430_IVA2_MOD, RM_RSTCTRL);
-
- dev_dbg(bridge, "Waiting for Sync @ 0x%x\n", dw_sync_addr);
- dev_dbg(bridge, "DSP c_int00 Address = 0x%x\n", dwDSPAddr);
- if (dsp_debug)
- while (*((volatile u16 *)dw_sync_addr))
- ;;
-
- /* Wait for DSP to clear word in shared memory */
- /* Read the Location */
- if (!wait_for_start(dev_context, dw_sync_addr))
- status = -ETIMEDOUT;
+ (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_ENABLE_AUTO,
+ OMAP3430_IVA2_MOD, CM_CLKSTCTRL);
+
+ /* Let DSP go */
+ dev_dbg(bridge, "%s Unreset\n", __func__);
+ /* release the RST1, DSP starts executing now .. */
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2, 0,
+ OMAP3430_IVA2_MOD, RM_RSTCTRL);
+
+ dev_dbg(bridge, "Waiting for Sync @ 0x%x\n", dw_sync_addr);
+ dev_dbg(bridge, "DSP c_int00 Address = 0x%x\n", dwDSPAddr);
+ if (dsp_debug)
+ while (*((volatile u16 *)dw_sync_addr))
+ ;
+
+ /* Wait for DSP to clear word in shared memory */
+ /* Read the Location */
+ if (!wait_for_start(dev_context, dw_sync_addr)) {
+ status = -ETIMEDOUT;
+ goto err3;
+ }

- /* Start wdt */
- dsp_wdt_sm_set((void *)ul_shm_base);
- dsp_wdt_enable(true);
+ /* Start wdt */
+ dsp_wdt_sm_set((void *)ul_shm_base);
+ dsp_wdt_enable(true);

- status = dev_get_io_mgr(dev_context->hdev_obj, &hio_mgr);
- if (hio_mgr) {
- io_sh_msetting(hio_mgr, SHM_OPPINFO, NULL);
- /* Write the synchronization bit to indicate the
- * completion of OPP table update to DSP
- */
- *((volatile u32 *)dw_sync_addr) = 0XCAFECAFE;
+ status = dev_get_io_mgr(dev_context->hdev_obj, &hio_mgr);
+ if (hio_mgr) {
+ io_sh_msetting(hio_mgr, SHM_OPPINFO, NULL);
+ /* Write the synchronization bit to indicate the
+ * completion of OPP table update to DSP
+ */
+ *((volatile u32 *)dw_sync_addr) = 0XCAFECAFE;

- /* update board state */
- dev_context->dw_brd_state = BRD_RUNNING;
- /* (void)chnlsm_enable_interrupt(dev_context); */
- } else {
- dev_context->dw_brd_state = BRD_UNKNOWN;
- }
+ /* update board state */
+ dev_context->dw_brd_state = BRD_RUNNING;
+ /* (void)chnlsm_enable_interrupt(dev_context); */
+ } else {
+ dev_context->dw_brd_state = BRD_UNKNOWN;
+ goto err3;
}
+end:
+ return status;
+err3:
+ iommu_kunmap(mmu, sm_sg->seg0_da);
+err2:
+ iommu_kunmap(mmu, sm_sg->seg1_da);
+err1:
return status;
}

@@ -654,15 +693,30 @@ static int bridge_brd_stop(struct bridge_dev_context *hDevContext)
memset((u8 *) pt_attrs->pg_info, 0x00,
(pt_attrs->l2_num_pages * sizeof(struct page_info)));
}
+
+ /* Reset DSP */
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2, OMAP3430_RST1_IVA2,
+ OMAP3430_IVA2_MOD, RM_RSTCTRL);
/* Disable the mailbox interrupts */
if (dev_context->mbox) {
omap_mbox_disable_irq(dev_context->mbox, IRQ_RX);
omap_mbox_put(dev_context->mbox);
dev_context->mbox = NULL;
}
- /* Reset IVA2 clocks*/
- (*pdata->dsp_prm_write)(OMAP3430_RST1_IVA2 | OMAP3430_RST2_IVA2 |
- OMAP3430_RST3_IVA2, OMAP3430_IVA2_MOD, RM_RSTCTRL);
+
+ if (dev_context->dsp_mmu) {
+ if (dev_context->sh_s) {
+ iommu_kunmap(dev_context->dsp_mmu,
+ dev_context->sh_s->seg0_da);
+ iommu_kunmap(dev_context->dsp_mmu,
+ dev_context->sh_s->seg1_da);
+ }
+ iommu_put(dev_context->dsp_mmu);
+ dev_context->dsp_mmu = NULL;
+ }
+
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2, OMAP3430_RST2_IVA2,
+ OMAP3430_IVA2_MOD, RM_RSTCTRL);

return status;
}
@@ -709,6 +763,11 @@ static int bridge_brd_delete(struct bridge_dev_context *hDevContext)
memset((u8 *) pt_attrs->pg_info, 0x00,
(pt_attrs->l2_num_pages * sizeof(struct page_info)));
}
+
+ /* Reset DSP */
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2, OMAP3430_RST1_IVA2,
+ OMAP3430_IVA2_MOD, RM_RSTCTRL);
+
/* Disable the mail box interrupts */
if (dev_context->mbox) {
omap_mbox_disable_irq(dev_context->mbox, IRQ_RX);
@@ -716,11 +775,19 @@ static int bridge_brd_delete(struct bridge_dev_context *hDevContext)
dev_context->mbox = NULL;
}

- if (dev_context->dsp_mmu)
- dev_context->dsp_mmu = (iommu_put(dev_context->dsp_mmu), NULL);
- /* Reset IVA2 clocks*/
- (*pdata->dsp_prm_write)(OMAP3430_RST1_IVA2 | OMAP3430_RST2_IVA2 |
- OMAP3430_RST3_IVA2, OMAP3430_IVA2_MOD, RM_RSTCTRL);
+ if (dev_context->dsp_mmu) {
+ if (dev_context->sh_s) {
+ iommu_kunmap(dev_context->dsp_mmu,
+ dev_context->sh_s->seg0_da);
+ iommu_kunmap(dev_context->dsp_mmu,
+ dev_context->sh_s->seg1_da);
+ }
+ iommu_put(dev_context->dsp_mmu);
+ dev_context->dsp_mmu = NULL;
+ }
+
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2, OMAP3430_RST2_IVA2,
+ OMAP3430_IVA2_MOD, RM_RSTCTRL);

return status;
}
diff --git a/drivers/dsp/bridge/core/tiomap_io.c b/drivers/dsp/bridge/core/tiomap_io.c
index 3b2ea70..c23ca66 100644
--- a/drivers/dsp/bridge/core/tiomap_io.c
+++ b/drivers/dsp/bridge/core/tiomap_io.c
@@ -133,10 +133,9 @@ int read_ext_dsp_data(struct bridge_dev_context *hDevContext,

if (DSP_SUCCEEDED(status)) {
ul_tlb_base_virt =
- dev_context->atlb_entry[0].ul_dsp_va * DSPWORDSIZE;
+ dev_context->sh_s->seg0_da * DSPWORDSIZE;
DBC_ASSERT(ul_tlb_base_virt <= ul_shm_base_virt);
- dw_ext_prog_virt_mem =
- dev_context->atlb_entry[0].ul_gpp_va;
+ dw_ext_prog_virt_mem = dev_context->sh_s->seg0_va;

if (!trace_read) {
ul_shm_offset_virt =
@@ -317,8 +316,8 @@ int write_ext_dsp_data(struct bridge_dev_context *dev_context,
ret = -EPERM;

if (DSP_SUCCEEDED(ret)) {
- ul_tlb_base_virt =
- dev_context->atlb_entry[0].ul_dsp_va * DSPWORDSIZE;
+ ul_tlb_base_virt = dev_context->sh_s->seg0_da *
+ DSPWORDSIZE;
DBC_ASSERT(ul_tlb_base_virt <= ul_shm_base_virt);

if (symbols_reloaded) {
@@ -339,7 +338,7 @@ int write_ext_dsp_data(struct bridge_dev_context *dev_context,
ul_shm_base_virt - ul_tlb_base_virt;
if (trace_load) {
dw_ext_prog_virt_mem =
- dev_context->atlb_entry[0].ul_gpp_va;
+ dev_context->sh_s->seg0_va;
} else {
dw_ext_prog_virt_mem = host_res->dw_mem_base[1];
dw_ext_prog_virt_mem +=
--
1.7.0.4
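The error paths added to bridge_brd_start() above use the usual kernel goto-unwind idiom: each failure jumps to a label that tears down only what was successfully set up before it, in reverse order. A minimal sketch of that pattern, with hypothetical map_seg()/unmap_seg() stand-ins for the iommu_kmap()/iommu_kunmap() calls on seg0 and seg1:

```c
#include <assert.h>

/* Hypothetical stand-ins: map_seg() returns 0 on success or a negative
 * errno; unmap_seg() records the teardown so it can be checked below. */
static int fail_at;      /* which map call should fail (0 = none fail) */
static int mapped[2];

static int map_seg(int idx)
{
	if (fail_at == idx + 1)
		return -12;     /* e.g. -ENOMEM */
	mapped[idx] = 1;
	return 0;
}

static void unmap_seg(int idx)
{
	mapped[idx] = 0;
}

/* Mirrors the bridge_brd_start() flow: map seg0, then seg1; a failure
 * unwinds only the mappings that were actually established. */
static int brd_start(void)
{
	int status;

	status = map_seg(0);
	if (status)
		goto err1;      /* nothing mapped yet, just return */

	status = map_seg(1);
	if (status)
		goto err2;      /* only seg0 needs to be undone */

	return 0;

err2:
	unmap_seg(0);
err1:
	return status;
}
```

The point of the ordering is that a label never unmaps a segment whose mapping failed; each jump target assumes exactly the state established up to the failing call.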

2010-06-30 23:52:13

by Fernando Guzman Lugo

Subject: [PATCHv3 8/9] dspbridge: add map support for big buffers

Due to a restriction in scatter-gather lists, a single list cannot be
created for a buffer bigger than 1MB. This patch splits big mappings
into 1MB mappings.
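The splitting described above amounts to walking the page count in fixed-size chunks, allocating one sg table per chunk. A small sketch of that arithmetic, using a stand-in SG_MAX_SINGLE_ALLOC value (the real limit depends on PAGE_SIZE and sizeof(struct scatterlist) on the target):

```c
#include <assert.h>

/* Stand-in values for illustration only, not the real kernel ones. */
#define SG_MAX_SINGLE_ALLOC 256u   /* max entries one sg_alloc_table() covers */

/* How many sg tables user_to_dsp_map() would build for `pages` pages:
 * one per SG_MAX_SINGLE_ALLOC-sized chunk, with a short tail chunk. */
static unsigned sg_tables_needed(unsigned pages)
{
	unsigned tables = 0;

	while (pages) {
		unsigned chunk = pages < SG_MAX_SINGLE_ALLOC ?
				 pages : SG_MAX_SINGLE_ALLOC;
		pages -= chunk;
		tables++;
	}
	return tables;
}
```

This is the same loop shape as the `while (pages)` in user_to_dsp_map() below, reduced to the chunk accounting.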

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
arch/arm/plat-omap/include/dspbridge/dsp-mmu.h | 2 +-
drivers/dsp/bridge/core/dsp-mmu.c | 55 ++++++++++++++---------
drivers/dsp/bridge/rmgr/proc.c | 3 +-
3 files changed, 36 insertions(+), 24 deletions(-)

diff --git a/arch/arm/plat-omap/include/dspbridge/dsp-mmu.h b/arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
index 266f38b..2e4bf6a 100644
--- a/arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
+++ b/arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
@@ -85,6 +85,6 @@ int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
* This function unmaps a user space buffer into DSP virtual address.
*
*/
-int user_to_dsp_unmap(struct iommu *mmu, u32 da);
+int user_to_dsp_unmap(struct iommu *mmu, u32 da, unsigned size);

#endif
diff --git a/drivers/dsp/bridge/core/dsp-mmu.c b/drivers/dsp/bridge/core/dsp-mmu.c
index e8da327..9a46206 100644
--- a/drivers/dsp/bridge/core/dsp-mmu.c
+++ b/drivers/dsp/bridge/core/dsp-mmu.c
@@ -133,7 +133,7 @@ int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
struct page **usr_pgs)
{
int res, w;
- unsigned pages, i;
+ unsigned pages, i, j = 0;
struct vm_area_struct *vma;
struct mm_struct *mm = current->mm;
struct sg_table *sgt;
@@ -162,24 +162,31 @@ int user_to_dsp_map(struct iommu *mmu, u32 uva, u32 da, u32 size,
if (res < 0)
return res;

- sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+ while (pages) {
+ sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);

- if (!sgt)
- return -ENOMEM;
+ if (!sgt)
+ return -ENOMEM;

- res = sg_alloc_table(sgt, pages, GFP_KERNEL);
+ res = sg_alloc_table(sgt,
+ min((unsigned)SG_MAX_SINGLE_ALLOC, pages), GFP_KERNEL);
+ pages -= min((unsigned)SG_MAX_SINGLE_ALLOC, pages);

- if (res < 0)
- goto err_sg;
+ if (res < 0)
+ goto err_sg;
+
+ for_each_sg(sgt->sgl, sg, sgt->nents, i)
+ sg_set_page(sg, usr_pgs[j++], PAGE_SIZE, 0);

- for_each_sg(sgt->sgl, sg, sgt->nents, i)
- sg_set_page(sg, usr_pgs[i], PAGE_SIZE, 0);
+ da = iommu_vmap(mmu, da, sgt, IOVMF_ENDIAN_LITTLE |
+ IOVMF_ELSZ_32);

- da = iommu_vmap(mmu, da, sgt, IOVMF_ENDIAN_LITTLE | IOVMF_ELSZ_32);
+ if (IS_ERR_VALUE(da)) {
+ res = (int)da;
+ goto err_map;
+ }

- if (IS_ERR_VALUE(da)) {
- res = (int)da;
- goto err_map;
+ da += SG_MAX_SINGLE_ALLOC * PAGE_SIZE;
}
return 0;

@@ -198,21 +205,25 @@ err_sg:
* This function unmaps a user space buffer into DSP virtual address.
*
*/
-int user_to_dsp_unmap(struct iommu *mmu, u32 da)
+int user_to_dsp_unmap(struct iommu *mmu, u32 da, unsigned size)
{
unsigned i;
struct sg_table *sgt;
struct scatterlist *sg;
+ const unsigned max_sz = SG_MAX_SINGLE_ALLOC * PAGE_SIZE;

- sgt = iommu_vunmap(mmu, da);
- if (!sgt)
- return -EFAULT;
-
- for_each_sg(sgt->sgl, sg, sgt->nents, i)
- put_page(sg_page(sg));
+ while (size) {
+ size -= min(max_sz, size);
+ sgt = iommu_vunmap(mmu, da);
+ if (!sgt)
+ return -EFAULT;

- sg_free_table(sgt);
- kfree(sgt);
+ for_each_sg(sgt->sgl, sg, sgt->nents, i)
+ put_page(sg_page(sg));

+ sg_free_table(sgt);
+ kfree(sgt);
+ da += max_sz;
+ }
return 0;
}
diff --git a/drivers/dsp/bridge/rmgr/proc.c b/drivers/dsp/bridge/rmgr/proc.c
index 4f10a41..997918e 100644
--- a/drivers/dsp/bridge/rmgr/proc.c
+++ b/drivers/dsp/bridge/rmgr/proc.c
@@ -1713,7 +1713,8 @@ int proc_un_map(void *hprocessor, void *map_addr,
/* Remove mapping from the page tables. */
if (DSP_SUCCEEDED(status)) {
status = user_to_dsp_unmap(
- p_proc_object->hbridge_context->dsp_mmu, va_align);
+ p_proc_object->hbridge_context->dsp_mmu,
+ va_align, size_align);
}

mutex_unlock(&proc_lock);
--
1.7.0.4
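The matching unmap side walks the DSP virtual address in strides of one maximum-sized chunk until the recorded size is consumed, which is why user_to_dsp_unmap() now needs the size argument. A sketch of the stride walk, with stand-in constants; the real code calls iommu_vunmap() and put_page() at each stride:

```c
#include <assert.h>

/* Stand-in values for illustration only. */
#define PAGE_SIZE           4096u
#define SG_MAX_SINGLE_ALLOC 256u

/* Records the DSP addresses user_to_dsp_unmap() would pass to
 * iommu_vunmap() for a `size`-byte mapping at `da`; returns the
 * number of unmap calls made. */
static unsigned unmap_strides(unsigned da, unsigned size,
			      unsigned out[], unsigned max)
{
	const unsigned max_sz = SG_MAX_SINGLE_ALLOC * PAGE_SIZE;
	unsigned n = 0;

	while (size) {
		size -= max_sz < size ? max_sz : size;
		if (n < max)
			out[n] = da;
		n++;
		da += max_sz;
	}
	return n;
}
```

One iommu_vunmap() call per chunk mirrors the one iommu_vmap() call per chunk on the map side, so the two stay symmetric for any buffer size.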

2010-06-30 23:52:19

by Fernando Guzman Lugo

Subject: [PATCHv3 4/9] dspbridge: remove custom mmu code from tiomap3430.c

This patch removes all the custom MMU code remaining in
tiomap3430.c, which is no longer needed.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
drivers/dsp/bridge/core/_tiomap.h | 1 -
drivers/dsp/bridge/core/tiomap3430.c | 470 ----------------------------------
2 files changed, 0 insertions(+), 471 deletions(-)

diff --git a/drivers/dsp/bridge/core/_tiomap.h b/drivers/dsp/bridge/core/_tiomap.h
index 4aa2358..c41fd8e 100644
--- a/drivers/dsp/bridge/core/_tiomap.h
+++ b/drivers/dsp/bridge/core/_tiomap.h
@@ -356,7 +356,6 @@ struct bridge_dev_context {

/* TC Settings */
bool tc_word_swap_on; /* Traffic Controller Word Swap */
- struct pg_table_attrs *pt_attrs;
u32 dsp_per_clks;
};

diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index 88f5167..96cceea 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -105,56 +105,9 @@ static int bridge_dev_create(OUT struct bridge_dev_context
static int bridge_dev_ctrl(struct bridge_dev_context *dev_context,
u32 dw_cmd, IN OUT void *pargs);
static int bridge_dev_destroy(struct bridge_dev_context *dev_context);
-static u32 user_va2_pa(struct mm_struct *mm, u32 address);
-static int pte_update(struct bridge_dev_context *hDevContext, u32 pa,
- u32 va, u32 size,
- struct hw_mmu_map_attrs_t *map_attrs);
-static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
- u32 size, struct hw_mmu_map_attrs_t *attrs);
-static int mem_map_vmalloc(struct bridge_dev_context *hDevContext,
- u32 ul_mpu_addr, u32 ulVirtAddr,
- u32 ul_num_bytes,
- struct hw_mmu_map_attrs_t *hw_attrs);

bool wait_for_start(struct bridge_dev_context *dev_context, u32 dw_sync_addr);

-/* ----------------------------------- Globals */
-
-/* Attributes of L2 page tables for DSP MMU */
-struct page_info {
- u32 num_entries; /* Number of valid PTEs in the L2 PT */
-};
-
-/* Attributes used to manage the DSP MMU page tables */
-struct pg_table_attrs {
- spinlock_t pg_lock; /* Critical section object handle */
-
- u32 l1_base_pa; /* Physical address of the L1 PT */
- u32 l1_base_va; /* Virtual address of the L1 PT */
- u32 l1_size; /* Size of the L1 PT */
- u32 l1_tbl_alloc_pa;
- /* Physical address of Allocated mem for L1 table. May not be aligned */
- u32 l1_tbl_alloc_va;
- /* Virtual address of Allocated mem for L1 table. May not be aligned */
- u32 l1_tbl_alloc_sz;
- /* Size of consistent memory allocated for L1 table.
- * May not be aligned */
-
- u32 l2_base_pa; /* Physical address of the L2 PT */
- u32 l2_base_va; /* Virtual address of the L2 PT */
- u32 l2_size; /* Size of the L2 PT */
- u32 l2_tbl_alloc_pa;
- /* Physical address of Allocated mem for L2 table. May not be aligned */
- u32 l2_tbl_alloc_va;
- /* Virtual address of Allocated mem for L2 table. May not be aligned */
- u32 l2_tbl_alloc_sz;
- /* Size of consistent memory allocated for L2 table.
- * May not be aligned */
-
- u32 l2_num_pages; /* Number of allocated L2 PT */
- /* Array [l2_num_pages] of L2 PT info structs */
- struct page_info *pg_info;
-};

/*
* This Bridge driver's function interface table.
@@ -210,32 +163,6 @@ static struct bridge_drv_interface drv_interface_fxns = {
bridge_msg_set_queue_id,
};

-static inline void tlb_flush_all(const void __iomem *base)
-{
- __raw_writeb(__raw_readb(base + MMU_GFLUSH) | 1, base + MMU_GFLUSH);
-}
-
-static inline void flush_all(struct bridge_dev_context *dev_context)
-{
- if (dev_context->dw_brd_state == BRD_DSP_HIBERNATION ||
- dev_context->dw_brd_state == BRD_HIBERNATION)
- wake_dsp(dev_context, NULL);
-
- tlb_flush_all(dev_context->dw_dsp_mmu_base);
-}
-
-static void bad_page_dump(u32 pa, struct page *pg)
-{
- pr_emerg("DSPBRIDGE: MAP function: COUNT 0 FOR PA 0x%x\n", pa);
- pr_emerg("Bad page state in process '%s'\n"
- "page:%p flags:0x%0*lx mapping:%p mapcount:%d count:%d\n"
- "Backtrace:\n",
- current->comm, pg, (int)(2 * sizeof(unsigned long)),
- (unsigned long)pg->flags, pg->mapping,
- page_mapcount(pg), page_count(pg));
- dump_stack();
-}
-
/*
* ======== bridge_drv_entry ========
* purpose:
@@ -637,7 +564,6 @@ static int bridge_brd_stop(struct bridge_dev_context *hDevContext)
{
int status = 0;
struct bridge_dev_context *dev_context = hDevContext;
- struct pg_table_attrs *pt_attrs;
u32 dsp_pwr_state;
int clk_status;
struct dspbridge_platform_data *pdata =
@@ -677,15 +603,6 @@ static int bridge_brd_stop(struct bridge_dev_context *hDevContext)

dsp_wdt_enable(false);

- /* This is a good place to clear the MMU page tables as well */
- if (dev_context->pt_attrs) {
- pt_attrs = dev_context->pt_attrs;
- memset((u8 *) pt_attrs->l1_base_va, 0x00, pt_attrs->l1_size);
- memset((u8 *) pt_attrs->l2_base_va, 0x00, pt_attrs->l2_size);
- memset((u8 *) pt_attrs->pg_info, 0x00,
- (pt_attrs->l2_num_pages * sizeof(struct page_info)));
- }
-
/* Reset DSP */
(*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2, OMAP3430_RST1_IVA2,
OMAP3430_IVA2_MOD, RM_RSTCTRL);
@@ -725,7 +642,6 @@ static int bridge_brd_delete(struct bridge_dev_context *hDevContext)
{
int status = 0;
struct bridge_dev_context *dev_context = hDevContext;
- struct pg_table_attrs *pt_attrs;
int clk_status;
struct dspbridge_platform_data *pdata =
omap_dspbridge_dev->dev.platform_data;
@@ -747,15 +663,6 @@ static int bridge_brd_delete(struct bridge_dev_context *hDevContext)

dev_context->dw_brd_state = BRD_STOPPED; /* update board state */

- /* This is a good place to clear the MMU page tables as well */
- if (dev_context->pt_attrs) {
- pt_attrs = dev_context->pt_attrs;
- memset((u8 *) pt_attrs->l1_base_va, 0x00, pt_attrs->l1_size);
- memset((u8 *) pt_attrs->l2_base_va, 0x00, pt_attrs->l2_size);
- memset((u8 *) pt_attrs->pg_info, 0x00,
- (pt_attrs->l2_num_pages * sizeof(struct page_info)));
- }
-
/* Reset DSP */
(*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2, OMAP3430_RST1_IVA2,
OMAP3430_IVA2_MOD, RM_RSTCTRL);
@@ -836,10 +743,6 @@ static int bridge_dev_create(OUT struct bridge_dev_context
struct bridge_dev_context *dev_context = NULL;
s32 entry_ndx;
struct cfg_hostres *resources = pConfig;
- struct pg_table_attrs *pt_attrs;
- u32 pg_tbl_pa;
- u32 pg_tbl_va;
- u32 align_size;
struct drv_data *drv_datap = dev_get_drvdata(bridge);

/* Allocate and initialize a data structure to contain the bridge driver
@@ -871,97 +774,11 @@ static int bridge_dev_create(OUT struct bridge_dev_context
if (!dev_context->dw_dsp_base_addr)
status = -EPERM;

- pt_attrs = kzalloc(sizeof(struct pg_table_attrs), GFP_KERNEL);
- if (pt_attrs != NULL) {
- /* Assuming that we use only DSP's memory map
- * until 0x4000:0000 , we would need only 1024
- * L1 enties i.e L1 size = 4K */
- pt_attrs->l1_size = 0x1000;
- align_size = pt_attrs->l1_size;
- /* Align sizes are expected to be power of 2 */
- /* we like to get aligned on L1 table size */
- pg_tbl_va = (u32) mem_alloc_phys_mem(pt_attrs->l1_size,
- align_size, &pg_tbl_pa);
-
- /* Check if the PA is aligned for us */
- if ((pg_tbl_pa) & (align_size - 1)) {
- /* PA not aligned to page table size ,
- * try with more allocation and align */
- mem_free_phys_mem((void *)pg_tbl_va, pg_tbl_pa,
- pt_attrs->l1_size);
- /* we like to get aligned on L1 table size */
- pg_tbl_va =
- (u32) mem_alloc_phys_mem((pt_attrs->l1_size) * 2,
- align_size, &pg_tbl_pa);
- /* We should be able to get aligned table now */
- pt_attrs->l1_tbl_alloc_pa = pg_tbl_pa;
- pt_attrs->l1_tbl_alloc_va = pg_tbl_va;
- pt_attrs->l1_tbl_alloc_sz = pt_attrs->l1_size * 2;
- /* Align the PA to the next 'align' boundary */
- pt_attrs->l1_base_pa =
- ((pg_tbl_pa) +
- (align_size - 1)) & (~(align_size - 1));
- pt_attrs->l1_base_va =
- pg_tbl_va + (pt_attrs->l1_base_pa - pg_tbl_pa);
- } else {
- /* We got aligned PA, cool */
- pt_attrs->l1_tbl_alloc_pa = pg_tbl_pa;
- pt_attrs->l1_tbl_alloc_va = pg_tbl_va;
- pt_attrs->l1_tbl_alloc_sz = pt_attrs->l1_size;
- pt_attrs->l1_base_pa = pg_tbl_pa;
- pt_attrs->l1_base_va = pg_tbl_va;
- }
- if (pt_attrs->l1_base_va)
- memset((u8 *) pt_attrs->l1_base_va, 0x00,
- pt_attrs->l1_size);
-
- /* number of L2 page tables = DMM pool used + SHMMEM +EXTMEM +
- * L4 pages */
- pt_attrs->l2_num_pages = ((DMMPOOLSIZE >> 20) + 6);
- pt_attrs->l2_size = HW_MMU_COARSE_PAGE_SIZE *
- pt_attrs->l2_num_pages;
- align_size = 4; /* Make it u32 aligned */
- /* we like to get aligned on L1 table size */
- pg_tbl_va = (u32) mem_alloc_phys_mem(pt_attrs->l2_size,
- align_size, &pg_tbl_pa);
- pt_attrs->l2_tbl_alloc_pa = pg_tbl_pa;
- pt_attrs->l2_tbl_alloc_va = pg_tbl_va;
- pt_attrs->l2_tbl_alloc_sz = pt_attrs->l2_size;
- pt_attrs->l2_base_pa = pg_tbl_pa;
- pt_attrs->l2_base_va = pg_tbl_va;
-
- if (pt_attrs->l2_base_va)
- memset((u8 *) pt_attrs->l2_base_va, 0x00,
- pt_attrs->l2_size);
-
- pt_attrs->pg_info = kzalloc(pt_attrs->l2_num_pages *
- sizeof(struct page_info), GFP_KERNEL);
- dev_dbg(bridge,
- "L1 pa %x, va %x, size %x\n L2 pa %x, va "
- "%x, size %x\n", pt_attrs->l1_base_pa,
- pt_attrs->l1_base_va, pt_attrs->l1_size,
- pt_attrs->l2_base_pa, pt_attrs->l2_base_va,
- pt_attrs->l2_size);
- dev_dbg(bridge, "pt_attrs %p L2 NumPages %x pg_info %p\n",
- pt_attrs, pt_attrs->l2_num_pages, pt_attrs->pg_info);
- }
- if ((pt_attrs != NULL) && (pt_attrs->l1_base_va != 0) &&
- (pt_attrs->l2_base_va != 0) && (pt_attrs->pg_info != NULL))
- dev_context->pt_attrs = pt_attrs;
- else
- status = -ENOMEM;
-
if (DSP_SUCCEEDED(status)) {
- spin_lock_init(&pt_attrs->pg_lock);
dev_context->tc_word_swap_on = drv_datap->tc_wordswapon;
-
- /* Set the Clock Divisor for the DSP module */
- udelay(5);
/* MMU address is obtained from the host
* resources struct */
dev_context->dw_dsp_mmu_base = resources->dw_dmmu_base;
- }
- if (DSP_SUCCEEDED(status)) {
dev_context->hdev_obj = hdev_obj;
dev_context->ul_int_mask = 0;
/* Store current board state. */
@@ -970,23 +787,6 @@ static int bridge_dev_create(OUT struct bridge_dev_context
/* Return ptr to our device state to the DSP API for storage */
*ppDevContext = dev_context;
} else {
- if (pt_attrs != NULL) {
- kfree(pt_attrs->pg_info);
-
- if (pt_attrs->l2_tbl_alloc_va) {
- mem_free_phys_mem((void *)
- pt_attrs->l2_tbl_alloc_va,
- pt_attrs->l2_tbl_alloc_pa,
- pt_attrs->l2_tbl_alloc_sz);
- }
- if (pt_attrs->l1_tbl_alloc_va) {
- mem_free_phys_mem((void *)
- pt_attrs->l1_tbl_alloc_va,
- pt_attrs->l1_tbl_alloc_pa,
- pt_attrs->l1_tbl_alloc_sz);
- }
- }
- kfree(pt_attrs);
kfree(dev_context);
}
func_end:
@@ -1054,7 +854,6 @@ static int bridge_dev_ctrl(struct bridge_dev_context *dev_context,
*/
static int bridge_dev_destroy(struct bridge_dev_context *hDevContext)
{
- struct pg_table_attrs *pt_attrs;
int status = 0;
struct bridge_dev_context *dev_context = (struct bridge_dev_context *)
hDevContext;
@@ -1068,23 +867,6 @@ static int bridge_dev_destroy(struct bridge_dev_context *hDevContext)

/* first put the device to stop state */
bridge_brd_delete(dev_context);
- if (dev_context->pt_attrs) {
- pt_attrs = dev_context->pt_attrs;
- kfree(pt_attrs->pg_info);
-
- if (pt_attrs->l2_tbl_alloc_va) {
- mem_free_phys_mem((void *)pt_attrs->l2_tbl_alloc_va,
- pt_attrs->l2_tbl_alloc_pa,
- pt_attrs->l2_tbl_alloc_sz);
- }
- if (pt_attrs->l1_tbl_alloc_va) {
- mem_free_phys_mem((void *)pt_attrs->l1_tbl_alloc_va,
- pt_attrs->l1_tbl_alloc_pa,
- pt_attrs->l1_tbl_alloc_sz);
- }
- kfree(pt_attrs);
-
- }

if (dev_context->resources) {
host_res = dev_context->resources;
@@ -1315,258 +1097,6 @@ int user_to_dsp_unmap(struct iommu *mmu, u32 da)
}

/*
- * ======== user_va2_pa ========
- * Purpose:
- * This function walks through the page tables to convert a userland
- * virtual address to physical address
- */
-static u32 user_va2_pa(struct mm_struct *mm, u32 address)
-{
- pgd_t *pgd;
- pmd_t *pmd;
- pte_t *ptep, pte;
-
- pgd = pgd_offset(mm, address);
- if (!(pgd_none(*pgd) || pgd_bad(*pgd))) {
- pmd = pmd_offset(pgd, address);
- if (!(pmd_none(*pmd) || pmd_bad(*pmd))) {
- ptep = pte_offset_map(pmd, address);
- if (ptep) {
- pte = *ptep;
- if (pte_present(pte))
- return pte & PAGE_MASK;
- }
- }
- }
-
- return 0;
-}
-
-/*
- * ======== pte_update ========
- * This function calculates the optimum page-aligned addresses and sizes
- * Caller must pass page-aligned values
- */
-static int pte_update(struct bridge_dev_context *hDevContext, u32 pa,
- u32 va, u32 size,
- struct hw_mmu_map_attrs_t *map_attrs)
-{
- u32 i;
- u32 all_bits;
- u32 pa_curr = pa;
- u32 va_curr = va;
- u32 num_bytes = size;
- struct bridge_dev_context *dev_context = hDevContext;
- int status = 0;
- u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
- HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
- };
-
- while (num_bytes && DSP_SUCCEEDED(status)) {
- /* To find the max. page size with which both PA & VA are
- * aligned */
- all_bits = pa_curr | va_curr;
-
- for (i = 0; i < 4; i++) {
- if ((num_bytes >= page_size[i]) && ((all_bits &
- (page_size[i] -
- 1)) == 0)) {
- status =
- pte_set(dev_context->pt_attrs, pa_curr,
- va_curr, page_size[i], map_attrs);
- pa_curr += page_size[i];
- va_curr += page_size[i];
- num_bytes -= page_size[i];
- /* Don't try smaller sizes. Hopefully we have
- * reached an address aligned to a bigger page
- * size */
- break;
- }
- }
- }
-
- return status;
-}
-
-/*
- * ======== pte_set ========
- * This function calculates PTE address (MPU virtual) to be updated
- * It also manages the L2 page tables
- */
-static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
- u32 size, struct hw_mmu_map_attrs_t *attrs)
-{
- u32 i;
- u32 pte_val;
- u32 pte_addr_l1;
- u32 pte_size;
- /* Base address of the PT that will be updated */
- u32 pg_tbl_va;
- u32 l1_base_va;
- /* Compiler warns that the next three variables might be used
- * uninitialized in this function. Doesn't seem so. Working around,
- * anyways. */
- u32 l2_base_va = 0;
- u32 l2_base_pa = 0;
- u32 l2_page_num = 0;
- int status = 0;
-
- l1_base_va = pt->l1_base_va;
- pg_tbl_va = l1_base_va;
- if ((size == HW_PAGE_SIZE64KB) || (size == HW_PAGE_SIZE4KB)) {
- /* Find whether the L1 PTE points to a valid L2 PT */
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va);
- if (pte_addr_l1 <= (pt->l1_base_va + pt->l1_size)) {
- pte_val = *(u32 *) pte_addr_l1;
- pte_size = hw_mmu_pte_size_l1(pte_val);
- } else {
- return -EPERM;
- }
- spin_lock(&pt->pg_lock);
- if (pte_size == HW_MMU_COARSE_PAGE_SIZE) {
- /* Get the L2 PA from the L1 PTE, and find
- * corresponding L2 VA */
- l2_base_pa = hw_mmu_pte_coarse_l1(pte_val);
- l2_base_va =
- l2_base_pa - pt->l2_base_pa + pt->l2_base_va;
- l2_page_num =
- (l2_base_pa -
- pt->l2_base_pa) / HW_MMU_COARSE_PAGE_SIZE;
- } else if (pte_size == 0) {
- /* L1 PTE is invalid. Allocate a L2 PT and
- * point the L1 PTE to it */
- /* Find a free L2 PT. */
- for (i = 0; (i < pt->l2_num_pages) &&
- (pt->pg_info[i].num_entries != 0); i++)
- ;;
- if (i < pt->l2_num_pages) {
- l2_page_num = i;
- l2_base_pa = pt->l2_base_pa + (l2_page_num *
- HW_MMU_COARSE_PAGE_SIZE);
- l2_base_va = pt->l2_base_va + (l2_page_num *
- HW_MMU_COARSE_PAGE_SIZE);
- /* Endianness attributes are ignored for
- * HW_MMU_COARSE_PAGE_SIZE */
- status =
- hw_mmu_pte_set(l1_base_va, l2_base_pa, va,
- HW_MMU_COARSE_PAGE_SIZE,
- attrs);
- } else {
- status = -ENOMEM;
- }
- } else {
- /* Found valid L1 PTE of another size.
- * Should not overwrite it. */
- status = -EPERM;
- }
- if (DSP_SUCCEEDED(status)) {
- pg_tbl_va = l2_base_va;
- if (size == HW_PAGE_SIZE64KB)
- pt->pg_info[l2_page_num].num_entries += 16;
- else
- pt->pg_info[l2_page_num].num_entries++;
- dev_dbg(bridge, "PTE: L2 BaseVa %x, BasePa %x, PageNum "
- "%x, num_entries %x\n", l2_base_va,
- l2_base_pa, l2_page_num,
- pt->pg_info[l2_page_num].num_entries);
- }
- spin_unlock(&pt->pg_lock);
- }
- if (DSP_SUCCEEDED(status)) {
- dev_dbg(bridge, "PTE: pg_tbl_va %x, pa %x, va %x, size %x\n",
- pg_tbl_va, pa, va, size);
- dev_dbg(bridge, "PTE: endianism %x, element_size %x, "
- "mixed_size %x\n", attrs->endianism,
- attrs->element_size, attrs->mixed_size);
- status = hw_mmu_pte_set(pg_tbl_va, pa, va, size, attrs);
- }
-
- return status;
-}
-
-/* Memory map kernel VA -- memory allocated with vmalloc */
-static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
- u32 ul_mpu_addr, u32 ulVirtAddr,
- u32 ul_num_bytes,
- struct hw_mmu_map_attrs_t *hw_attrs)
-{
- int status = 0;
- struct page *page[1];
- u32 i;
- u32 pa_curr;
- u32 pa_next;
- u32 va_curr;
- u32 size_curr;
- u32 num_pages;
- u32 pa;
- u32 num_of4k_pages;
- u32 temp = 0;
-
- /*
- * Do Kernel va to pa translation.
- * Combine physically contiguous regions to reduce TLBs.
- * Pass the translated pa to pte_update.
- */
- num_pages = ul_num_bytes / PAGE_SIZE; /* PAGE_SIZE = OS page size */
- i = 0;
- va_curr = ul_mpu_addr;
- page[0] = vmalloc_to_page((void *)va_curr);
- pa_next = page_to_phys(page[0]);
- while (DSP_SUCCEEDED(status) && (i < num_pages)) {
- /*
- * Reuse pa_next from the previous iteraion to avoid
- * an extra va2pa call
- */
- pa_curr = pa_next;
- size_curr = PAGE_SIZE;
- /*
- * If the next page is physically contiguous,
- * map it with the current one by increasing
- * the size of the region to be mapped
- */
- while (++i < num_pages) {
- page[0] =
- vmalloc_to_page((void *)(va_curr + size_curr));
- pa_next = page_to_phys(page[0]);
-
- if (pa_next == (pa_curr + size_curr))
- size_curr += PAGE_SIZE;
- else
- break;
-
- }
- if (pa_next == 0) {
- status = -ENOMEM;
- break;
- }
- pa = pa_curr;
- num_of4k_pages = size_curr / HW_PAGE_SIZE4KB;
- while (temp++ < num_of4k_pages) {
- get_page(PHYS_TO_PAGE(pa));
- pa += HW_PAGE_SIZE4KB;
- }
- status = pte_update(dev_context, pa_curr, ulVirtAddr +
- (va_curr - ul_mpu_addr), size_curr,
- hw_attrs);
- va_curr += size_curr;
- }
- if (DSP_SUCCEEDED(status))
- status = 0;
- else
- status = -EPERM;
-
- /*
- * In any case, flush the TLB
- * This is called from here instead from pte_update to avoid unnecessary
- * repetition while mapping non-contiguous physical regions of a virtual
- * region
- */
- flush_all(dev_context);
- dev_dbg(bridge, "%s status %x\n", __func__, status);
- return status;
-}
-
-/*
* ======== wait_for_start ========
* Wait for the singal from DSP that it has started, or time out.
*/
--
1.7.0.4
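The helpers removed by this patch mostly reduce to small pure functions: the L1-table alignment trick in bridge_dev_create(), the largest-aligned-page selection in pte_update(), and the contiguous-run coalescing in mem_map_vmalloc(). A standalone sketch of those three, with stand-in constants in place of HW_PAGE_SIZE* and the struct-based bookkeeping (illustration only, not the driver code):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* bridge_dev_create(): align an allocated PA up to the next 'align'
 * boundary, as done for the L1 page table (allocated at 2x size so an
 * aligned table is guaranteed to fit). */
static uint32_t align_up(uint32_t pa, uint32_t align)
{
	return (pa + align - 1) & ~(align - 1);
}

/* pte_update(): pick the largest MMU page size such that both PA and VA
 * are aligned to it and at least that many bytes remain to be mapped.
 * The table mirrors HW_PAGE_SIZE16MB..HW_PAGE_SIZE4KB. */
static uint32_t pick_page_size(uint32_t pa, uint32_t va, uint32_t num_bytes)
{
	static const uint32_t page_size[] = {
		0x1000000, 0x100000, 0x10000, 0x1000,
	};
	uint32_t all_bits = pa | va;	/* both must align to the chosen size */
	int i;

	for (i = 0; i < 4; i++) {
		if (num_bytes >= page_size[i] &&
		    (all_bits & (page_size[i] - 1)) == 0)
			return page_size[i];
	}
	return 0;	/* caller must pass 4 KB-aligned values */
}

/* mem_map_vmalloc(): coalesce a run of physically contiguous 4 KB pages
 * so the whole run can be mapped as one region. The pa[] array stands in
 * for the vmalloc_to_page()/page_to_phys() lookups. Returns the run size
 * in bytes and advances *i past the run. */
static uint32_t coalesce_run(const uint32_t *pa, size_t num_pages, size_t *i)
{
	uint32_t pa_curr = pa[*i];
	uint32_t size_curr = 0x1000;	/* PAGE_SIZE */

	while (++(*i) < num_pages) {
		if (pa[*i] != pa_curr + size_curr)
			break;
		size_curr += 0x1000;	/* next page contiguous: grow run */
	}
	return size_curr;
}
```

With these, pte_update()'s main loop is just "pick the size, set one PTE, advance pa/va/num_bytes by it", and mem_map_vmalloc() issues one pte_update() call per coalesced run instead of one per 4 KB page.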

2010-06-30 23:52:07

by Fernando Guzman Lugo

[permalink] [raw]
Subject: [PATCHv3 9/9] dspbridge: cleanup bridge_dev_context and cfg_hostres structures

This patch cleans the cfg_hostres and bridge_dev_context
structures of custom MMU code that is no longer needed.

Signed-off-by: Fernando Guzman Lugo <[email protected]>
---
arch/arm/plat-omap/include/dspbridge/cfgdefs.h | 1 -
drivers/dsp/bridge/core/_tiomap.h | 5 -----
drivers/dsp/bridge/core/tiomap3430.c | 8 --------
drivers/dsp/bridge/core/tiomap_io.c | 2 +-
drivers/dsp/bridge/rmgr/drv.c | 4 ----
5 files changed, 1 insertions(+), 19 deletions(-)

diff --git a/arch/arm/plat-omap/include/dspbridge/cfgdefs.h b/arch/arm/plat-omap/include/dspbridge/cfgdefs.h
index 38122db..dfb55cc 100644
--- a/arch/arm/plat-omap/include/dspbridge/cfgdefs.h
+++ b/arch/arm/plat-omap/include/dspbridge/cfgdefs.h
@@ -68,7 +68,6 @@ struct cfg_hostres {
void __iomem *dw_per_base;
u32 dw_per_pm_base;
u32 dw_core_pm_base;
- void __iomem *dw_dmmu_base;
void __iomem *dw_sys_ctrl_base;
};

diff --git a/drivers/dsp/bridge/core/_tiomap.h b/drivers/dsp/bridge/core/_tiomap.h
index 8a9a822..82bce7d 100644
--- a/drivers/dsp/bridge/core/_tiomap.h
+++ b/drivers/dsp/bridge/core/_tiomap.h
@@ -323,7 +323,6 @@ struct bridge_dev_context {
*/
u32 dw_dsp_ext_base_addr; /* See the comment above */
u32 dw_api_reg_base; /* API mem map'd registers */
- void __iomem *dw_dsp_mmu_base; /* DSP MMU Mapped registers */
u32 dw_api_clk_base; /* CLK Registers */
u32 dw_dsp_clk_m2_base; /* DSP Clock Module m2 */
u32 dw_public_rhea; /* Pub Rhea */
@@ -347,10 +346,6 @@ struct bridge_dev_context {
/* DMMU TLB entries */
struct bridge_ioctl_extproc atlb_entry[BRDIOCTL_NUMOFMMUTLB];
u32 dw_brd_state; /* Last known board state. */
- u32 ul_int_mask; /* int mask */
- u16 io_base; /* Board I/O base */
- u32 num_tlb_entries; /* DSP MMU TLB entry counter */
- u32 fixed_tlb_entries; /* Fixed DSPMMU TLB entry count */

/* TC Settings */
bool tc_word_swap_on; /* Traffic Controller Word Swap */
diff --git a/drivers/dsp/bridge/core/tiomap3430.c b/drivers/dsp/bridge/core/tiomap3430.c
index aa6e999..83a9561 100644
--- a/drivers/dsp/bridge/core/tiomap3430.c
+++ b/drivers/dsp/bridge/core/tiomap3430.c
@@ -753,7 +753,6 @@ static int bridge_dev_create(OUT struct bridge_dev_context
dev_context->atlb_entry[entry_ndx].ul_gpp_pa =
dev_context->atlb_entry[entry_ndx].ul_dsp_va = 0;
}
- dev_context->num_tlb_entries = 0;
dev_context->dw_dsp_base_addr = (u32) MEM_LINEAR_ADDRESS((void *)
(pConfig->
dw_mem_base
@@ -766,11 +765,7 @@ static int bridge_dev_create(OUT struct bridge_dev_context

if (DSP_SUCCEEDED(status)) {
dev_context->tc_word_swap_on = drv_datap->tc_wordswapon;
- /* MMU address is obtained from the host
- * resources struct */
- dev_context->dw_dsp_mmu_base = resources->dw_dmmu_base;
dev_context->hdev_obj = hdev_obj;
- dev_context->ul_int_mask = 0;
/* Store current board state. */
dev_context->dw_brd_state = BRD_STOPPED;
dev_context->resources = resources;
@@ -887,8 +882,6 @@ static int bridge_dev_destroy(struct bridge_dev_context *hDevContext)
iounmap((void *)host_res->dw_mem_base[3]);
if (host_res->dw_mem_base[4])
iounmap((void *)host_res->dw_mem_base[4]);
- if (host_res->dw_dmmu_base)
- iounmap(host_res->dw_dmmu_base);
if (host_res->dw_per_base)
iounmap(host_res->dw_per_base);
if (host_res->dw_per_pm_base)
@@ -902,7 +895,6 @@ static int bridge_dev_destroy(struct bridge_dev_context *hDevContext)
host_res->dw_mem_base[2] = (u32) NULL;
host_res->dw_mem_base[3] = (u32) NULL;
host_res->dw_mem_base[4] = (u32) NULL;
- host_res->dw_dmmu_base = NULL;
host_res->dw_sys_ctrl_base = NULL;

kfree(host_res);
diff --git a/drivers/dsp/bridge/core/tiomap_io.c b/drivers/dsp/bridge/core/tiomap_io.c
index 3c0d3a3..2f2f8c2 100644
--- a/drivers/dsp/bridge/core/tiomap_io.c
+++ b/drivers/dsp/bridge/core/tiomap_io.c
@@ -437,7 +437,7 @@ int sm_interrupt_dsp(struct bridge_dev_context *dev_context, u16 mb_val)
omap_mbox_restore_ctx(dev_context->mbox);

/* Access MMU SYS CONFIG register to generate a short wakeup */
- __raw_readl(resources->dw_dmmu_base + 0x10);
+ iommu_read_reg(dev_context->dsp_mmu, MMU_SYSCONFIG);

dev_context->dw_brd_state = BRD_RUNNING;
} else if (dev_context->dw_brd_state == BRD_RETENTION) {
diff --git a/drivers/dsp/bridge/rmgr/drv.c b/drivers/dsp/bridge/rmgr/drv.c
index c6e38e5..7804479 100644
--- a/drivers/dsp/bridge/rmgr/drv.c
+++ b/drivers/dsp/bridge/rmgr/drv.c
@@ -829,7 +829,6 @@ static int request_bridge_resources(struct cfg_hostres *res)
host_res->dw_sys_ctrl_base = ioremap(OMAP_SYSC_BASE, OMAP_SYSC_SIZE);
dev_dbg(bridge, "dw_mem_base[0] 0x%x\n", host_res->dw_mem_base[0]);
dev_dbg(bridge, "dw_mem_base[3] 0x%x\n", host_res->dw_mem_base[3]);
- dev_dbg(bridge, "dw_dmmu_base %p\n", host_res->dw_dmmu_base);

/* for 24xx base port is not mapping the mamory for DSP
* internal memory TODO Do a ioremap here */
@@ -883,8 +882,6 @@ int drv_request_bridge_res_dsp(void **phost_resources)
OMAP_PER_PRM_SIZE);
host_res->dw_core_pm_base = (u32) ioremap(OMAP_CORE_PRM_BASE,
OMAP_CORE_PRM_SIZE);
- host_res->dw_dmmu_base = ioremap(OMAP_DMMU_BASE,
- OMAP_DMMU_SIZE);

dev_dbg(bridge, "dw_mem_base[0] 0x%x\n",
host_res->dw_mem_base[0]);
@@ -896,7 +893,6 @@ int drv_request_bridge_res_dsp(void **phost_resources)
host_res->dw_mem_base[3]);
dev_dbg(bridge, "dw_mem_base[4] 0x%x\n",
host_res->dw_mem_base[4]);
- dev_dbg(bridge, "dw_dmmu_base %p\n", host_res->dw_dmmu_base);

shm_size = drv_datap->shm_size;
if (shm_size >= 0x10000) {
--
1.7.0.4

2010-07-01 00:09:57

by Fernando Guzman Lugo

[permalink] [raw]
Subject: RE: [PATCHv3 0/9] dspbridge: iommu migration



Sorry, wrong version of the patches. Please discard them.

Sorry for the spam,
Fernando.


> -----Original Message-----
> From: Guzman Lugo, Fernando
> Sent: Wednesday, June 30, 2010 7:00 PM
> To: [email protected]; [email protected]
> Cc: [email protected]; [email protected]; [email protected];
> [email protected]; Guzman Lugo, Fernando
> Subject: [PATCHv3 0/9] dspbridge: iommu migration
>
> This set of patches remove the dspbridge custom mmu implementation
> and use iommu module instead.
>
>
> NOTE: in order to dspbridge can work properly the patch
> "0001-iovmm-add-superpages-support-to-fixed-da-address.patch"
> is needed (specifically iommu_kmap calls need this patch).
>
> Fernando Guzman Lugo (9):
> dspbridge: replace iommu custom for opensource implementation
> dspbridge: move shared memory iommu maps to tiomap3430.c
> dspbridge: rename bridge_brd_mem_map/unmap to a proper name
> dspbridge: remove custom mmu code from tiomap3430.c
> dspbridge: add mmufault support
> dspbridge: remove hw directory
> dspbridge: move all iommu related code to a new file
> dspbridge: add map support for big buffers
> dspbridge: cleanup bridge_dev_context and cfg_hostres structures
>
> arch/arm/plat-omap/include/dspbridge/cfgdefs.h | 1 -
> arch/arm/plat-omap/include/dspbridge/dsp-mmu.h | 90 ++
> arch/arm/plat-omap/include/dspbridge/dspdefs.h | 44 -
> arch/arm/plat-omap/include/dspbridge/dspdeh.h | 1 -
> arch/arm/plat-omap/include/dspbridge/dspioctl.h | 7 -
> drivers/dsp/bridge/Makefile | 5 +-
> drivers/dsp/bridge/core/_deh.h | 3 -
> drivers/dsp/bridge/core/_tiomap.h | 15 +-
> drivers/dsp/bridge/core/dsp-mmu.c | 229 ++++
> drivers/dsp/bridge/core/io_sm.c | 185 +---
> drivers/dsp/bridge/core/mmu_fault.c | 139 ---
> drivers/dsp/bridge/core/mmu_fault.h | 36 -
> drivers/dsp/bridge/core/tiomap3430.c | 1297 ++++--------------
> -----
> drivers/dsp/bridge/core/tiomap3430_pwr.c | 183 +---
> drivers/dsp/bridge/core/tiomap_io.c | 16 +-
> drivers/dsp/bridge/core/ue_deh.c | 87 +--
> drivers/dsp/bridge/hw/EasiGlobal.h | 41 -
> drivers/dsp/bridge/hw/GlobalTypes.h | 308 ------
> drivers/dsp/bridge/hw/MMUAccInt.h | 76 --
> drivers/dsp/bridge/hw/MMURegAcM.h | 226 ----
> drivers/dsp/bridge/hw/hw_defs.h | 60 --
> drivers/dsp/bridge/hw/hw_mmu.c | 587 ----------
> drivers/dsp/bridge/hw/hw_mmu.h | 161 ---
> drivers/dsp/bridge/pmgr/dev.c | 2 -
> drivers/dsp/bridge/rmgr/drv.c | 4 -
> drivers/dsp/bridge/rmgr/node.c | 4 +-
> drivers/dsp/bridge/rmgr/proc.c | 19 +-
> 27 files changed, 599 insertions(+), 3227 deletions(-)
> create mode 100644 arch/arm/plat-omap/include/dspbridge/dsp-mmu.h
> create mode 100644 drivers/dsp/bridge/core/dsp-mmu.c
> delete mode 100644 drivers/dsp/bridge/core/mmu_fault.c
> delete mode 100644 drivers/dsp/bridge/core/mmu_fault.h
> delete mode 100644 drivers/dsp/bridge/hw/EasiGlobal.h
> delete mode 100644 drivers/dsp/bridge/hw/GlobalTypes.h
> delete mode 100644 drivers/dsp/bridge/hw/MMUAccInt.h
> delete mode 100644 drivers/dsp/bridge/hw/MMURegAcM.h
> delete mode 100644 drivers/dsp/bridge/hw/hw_defs.h
> delete mode 100644 drivers/dsp/bridge/hw/hw_mmu.c
> delete mode 100644 drivers/dsp/bridge/hw/hw_mmu.h

2010-07-01 17:16:44

by Kanigeri, Hari

[permalink] [raw]
Subject: RE: [PATCHv3 5/9] dspbridge: add mmufault support

Hi Fernando,

> +int mmu_fault_isr(struct iommu *mmu)
>
> -/*
> - * ======== mmu_check_if_fault =======
> - * Check to see if MMU Fault is valid TLB miss from DSP
> - * Note: This function is called from an ISR
> - */
> -static bool mmu_check_if_fault(struct bridge_dev_context *dev_context)
> {
> + struct deh_mgr *dm;
> + u32 da;
> +
> + dev_get_deh_mgr(dev_get_first(), &dm);
> +
> + if (!dm)
> + return -EPERM;
> +
> + da = iommu_read_reg(mmu, MMU_FAULT_AD);
> + iommu_write_reg(mmu, 0, MMU_IRQENABLE);

-- Isn't the MMU already enabled at this point when the function callback is called by iommu?

> + dm->err_info.dw_val1 = da;
> + tasklet_schedule(&dm->dpc_tasklet);

-- The iommu fault isr disables the IOMMU at the end of the fault handler, so by the time your tasklet is scheduled you might have the MMU in a disabled state. Looks to me like this either requires a change in iommu to remove the disable part, or enabling the MMU in the tasklet instead of doing it early in mmu_fault_isr.

Thank you,
Best regards,
Hari

2010-07-01 17:54:39

by Fernando Guzman Lugo

[permalink] [raw]
Subject: RE: [PATCHv3 5/9] dspbridge: add mmufault support



Hi Hari,

> -----Original Message-----
> From: Kanigeri, Hari
> Sent: Thursday, July 01, 2010 12:17 PM
> To: Guzman Lugo, Fernando; [email protected]; linux-
> [email protected]
> Cc: [email protected]; [email protected]; [email protected];
> [email protected]; Guzman Lugo, Fernando
> Subject: RE: [PATCHv3 5/9] dspbridge: add mmufault support
>
> Hi Fernando,
>
> > +int mmu_fault_isr(struct iommu *mmu)
> >
> > -/*
> > - * ======== mmu_check_if_fault =======
> > - * Check to see if MMU Fault is valid TLB miss from DSP
> > - * Note: This function is called from an ISR
> > - */
> > -static bool mmu_check_if_fault(struct bridge_dev_context *dev_context)
> > {
> > + struct deh_mgr *dm;
> > + u32 da;
> > +
> > + dev_get_deh_mgr(dev_get_first(), &dm);
> > +
> > + if (!dm)
> > + return -EPERM;
> > +
> > + da = iommu_read_reg(mmu, MMU_FAULT_AD);
> > + iommu_write_reg(mmu, 0, MMU_IRQENABLE);
>
> -- Isn't the MMU already enabled at this point when the function callback
> is called by iommu ?

This line is actually disabling the interrupts. I am writing "0x0" into the MMU_IRQENABLE register.

>
> > + dm->err_info.dw_val1 = da;
> > + tasklet_schedule(&dm->dpc_tasklet);
>
> -- The iommu fault isr disables the IOMMU at the end of the fault handler,
> so by the time your tasklet is scheduled you might have the MMU in a
> disabled state. Looks to me either this requires change in iommu to remove
> the disable part or enable the MMU in the tasklet instead of doing it
> early in mmu_fault_isr.

I am returning 0 in the callback function, which means the callback function has handled the fault and mmu_fault_isr does not do anything else:


if (obj->isr)
err = obj->isr(obj);

if (!err)
return IRQ_HANDLED;

It is working for me without any modifications to the iommu_fault_handler function.

Thanks for the comments,
Fernando.

>
> Thank you,
> Best regards,
> Hari
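The fault path agreed on in this exchange can be sketched as a small standalone stub: latch MMU_FAULT_AD, mask further MMU interrupts by writing 0 to MMU_IRQENABLE, record the address for the DEH manager, defer the heavy work to a tasklet, and return 0 so the generic iommu fault handler reports IRQ_HANDLED and skips its default disable path. Everything below (the register array, the deh_mgr fields, the scheduled flag) is a stand-in for illustration; it is not the exact driver code.

```c
#include <assert.h>
#include <stdint.h>

#define MMU_FAULT_AD	0	/* stand-in register indices */
#define MMU_IRQENABLE	1

struct iommu {
	uint32_t regs[2];
};

struct deh_mgr {
	uint32_t fault_da;	/* dw_val1 in the real err_info */
	int tasklet_scheduled;
};

static uint32_t iommu_read_reg(struct iommu *mmu, int reg)
{
	return mmu->regs[reg];
}

static void iommu_write_reg(struct iommu *mmu, uint32_t val, int reg)
{
	mmu->regs[reg] = val;
}

/* Mirrors the shape of mmu_fault_isr() from the patch: returning 0 tells
 * the generic handler the fault was consumed (IRQ_HANDLED, no default
 * disable). */
static int mmu_fault_isr(struct iommu *mmu, struct deh_mgr *dm)
{
	if (!dm)
		return -1;	/* -EPERM in the real code */

	dm->fault_da = iommu_read_reg(mmu, MMU_FAULT_AD);
	iommu_write_reg(mmu, 0, MMU_IRQENABLE);	/* mask, not disable, MMU */
	dm->tasklet_scheduled = 1;	/* tasklet_schedule() stand-in */
	return 0;
}
```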

2010-07-01 18:04:16

by Kanigeri, Hari

[permalink] [raw]
Subject: RE: [PATCHv3 5/9] dspbridge: add mmufault support

> > > + da = iommu_read_reg(mmu, MMU_FAULT_AD);
> > > + iommu_write_reg(mmu, 0, MMU_IRQENABLE);
> >
> > -- Isn't the MMU already enabled at this point when the function
> callback
> > is called by iommu ?
>
> This line is actually disabling the interrupts. I am writing "0x0" in the
> MMU_IRQENABLE.

-- Oops! Sorry about that, I didn't pay attention to the 0x0. Yes, this should work.

Best regards,
Hari

2010-07-01 18:26:49

by Fernando Guzman Lugo

[permalink] [raw]
Subject: RE: [PATCHv3 5/9] dspbridge: add mmufault support



> -----Original Message-----
> From: Kanigeri, Hari
> Sent: Thursday, July 01, 2010 1:04 PM
> To: Guzman Lugo, Fernando; [email protected]; linux-
> [email protected]
> Cc: [email protected]; Hiroshi DOYU; [email protected];
> [email protected]
> Subject: RE: [PATCHv3 5/9] dspbridge: add mmufault support
>
> > > > + da = iommu_read_reg(mmu, MMU_FAULT_AD);
> > > > + iommu_write_reg(mmu, 0, MMU_IRQENABLE);
> > >
> > > -- Isn't the MMU already enabled at this point when the function
> > callback
> > > is called by iommu ?
> >
> > This line is actually disabling the interrupts. I am writing "0x0" in
> the
> > MMU_IRQENABLE.
>
> -- oops ! sorry about that. Didn't pay attention to 0x0. Yes, this should
> work.

No problem :). Could you please comment on the second set of patches I sent? The previous ones carry version 3, which is not correct; that was a version I was using internally, and the patches shouldn't have been sent to the mailing list with that version, because it is actually the first version posted.

Thanks and regards,
Fernando.

>
> Best regards,
> Hari